Meredith Whittaker

Co-founder and co-director, AI Now Institute

It’s hard to trust what you can’t see or validate. A.I. technologies aren’t visible to the vast majority of us; hidden behind corporate secrecy and integrated into back-end processes, they are obscured from the people they most affect. This is so even as A.I. systems are increasingly tasked with socially significant decisions, from who gets hired to who gets bail to which school your child is permitted to attend. We urgently need ways to hold A.I., and those who profit from its development, accountable to the public. This should include external auditing and testing that subjects A.I. companies’ infrastructures and processes to publicly accountable scrutiny and validation.

It must also engage local communities, ensuring that those most at risk of harm have a say in determining whether, when and how such systems are used. While building these cornerstones of trust will require cooperation from the tech community, the stakes are too high to rely on voluntary participation. Regulation will almost certainly be necessary, since what’s required entails major structural changes to the current “launch, iterate and profit” industry norms.
