“Essentially what this is doing is flattening descriptions of, say, ‘an Indian person’ or ‘a Nigerian house’ into particular stereotypes which could be viewed in a negative light,” Amba Kak, executive director of the AI Now Institute, a U.S.-based policy research organization, told Rest of World. Even stereotypes that are not inherently negative, she said, are still stereotypes: They reflect a particular value judgment, and a winnowing of diversity. Midjourney did not respond to multiple requests for an interview or comment for this story.

Almost every AI researcher Rest of World spoke to said the first step toward addressing bias in AI systems was greater transparency from the companies involved, which are often secretive about the data they use and how they train their systems. “It’s very much been a debate that’s been on their terms, and it’s very much like a ‘trust us’ paradigm,” Kak said.
