In the paper, published by the AI Now Institute, authors Heidy Khlaaf, Sarah Myers West, and Meredith Whittaker point out the growing interest in commercial AI models—sometimes called foundation models—for military purposes, particularly for intelligence, surveillance, target acquisition, and reconnaissance. They argue that while concerns around AI’s potential for chemical, biological, radiological, and nuclear weapons dominate the conversation, big commercial foundation models already pose underappreciated risks to civilians.

Some companies pitching foundation model-based AI to the military are “proposing commercial models that already have been pre-trained on a set of commercial data. They are not talking about military-exclusive commercial models that have been purely trained on military data,” Khlaaf told Defense One. 


Said West: “We’ve seen this propensity, you know, even like the introduction of fast-tracking FedRAMP in order to promote rapid adoption of generative AI use cases, the creation of these carve-outs, which is why these sort of voluntary frameworks and higher-order principles are insufficient, particularly where we’re dealing with uses that are very much life-or-death stakes, and where the consequences for civilians are very significant.”
