“In terms of safety guardrails for ‘high-stake decisions’ or surveillance, the existing
guardrails for generative AI are deeply lacking, and it has been shown how easily
compromised they are, intentionally or inadvertently,” Heidy Khlaaf, the chief AI
scientist at the nonprofit AI Now Institute, told me. “It’s highly doubtful that if they
cannot guard their systems against benign cases, they’d be able to do so for complex
military and surveillance operations.”
