
AI Now Statement on the UK AI Safety Institute transition to the UK AI Security Institute
Feb 14, 2025
On February 14, 2025, the UK Department for Science, Innovation and Technology announced the UK AI Safety Institute’s transition to the UK AI Security Institute. Read AI Now’s statement on the transition below.
AISI’s partnership with the Defence Science and Technology Laboratory, the Ministry of Defence’s science and technology organisation, signals that the UK government is turning its attention to frontier AI use within the defense and national security apparatus. This comes on the heels of a slew of recent announcements that major AI companies are integrating their frontier AI models into national security use cases. As our research has demonstrated, these systems carry significant risks, including threats to national security: frontier AI models have inherent cyber vulnerabilities, and the sensitive data on which they may be trained can be extracted by adversaries.
While we welcome AISI’s signals that it may investigate these risks amidst heightened “AI race” dynamics, we warn against an approach that, under the banner of security, applies piecemeal or superficial scrutiny and gives these systems a clean bill of health before they are ready. These issues cannot be easily fixed or patched; they require independent, safety-critical evaluation that is insulated from industry partnerships. If our leaders barrel ahead with their plans to implement frontier AI for defense use, they risk undermining our national security. This is a trade-off that AI’s purported benefits cannot justify.
This statement can be attributed to AI Now Chief AI Scientist Heidy Khlaaf.