Red teaming
Red teaming refers to a broad range of risk-management activities for AI systems, including testing for vulnerabilities, identifying mitigation measures, and providing feedback, typically carried out by internally appointed groups acting in an adversarial role.