Ethical, Legal & Social Implications of Machine Learning in Genomics | National Human Genome Research Institute | Varoon Mathur
April 13, 2021
AI Now’s uniquely interdisciplinary research agenda is organized around a set of four core themes: rights and liberties, labor, bias and inclusion, and infrastructure.
Our work ranges from analyzing how “dirty data” impacts predictive systems to examining how discrimination and inequality in the AI sector are replicated in AI technology, and much more.
March 25, 2021
Ifeoma Ozoma, Veena Dubal, and Meredith Whittaker from the AI Now Institute joined John Tye from Whistleblower Aid for a tech-worker-focused webinar covering the basics of safe whistleblowing and your rights as a worker.
Hariri Institute for Computing / March 05, 2021
Panel on AI, Inequality, and Big Tech, featuring Timnit Gebru, Sabelo Mhlambi, and Meredith Whittaker.
National Science Policy Symposium / November 14, 2020
Sarah Myers West spoke together with Nicol Turner-Lee and Tyrone Grandison. Technology is an integral part of our everyday lives, whether through broadband internet usage, cybersecurity protections, or the use of artificial intelligence (AI) to mimic human operations. Historically, technology has perpetuated racial discrimination through biases in algorithms used in...
Indiana University / October 30, 2020
Sarah Myers West gave a colloquium talk about discrimination in artificial intelligence for Indiana University's Informatics Colloquium. The artificial intelligence industry is in the midst of a crisis in diversity and inclusion: while the representation of women in computer science recently fell below 1960s levels, inclusion across lines...
Royal Holloway University of London Information Security Group / July 30, 2020
Sarah Myers West presented on a qualitative research project examining how digital activists navigate risks posed to them in online environments.
New York City, NY / March 07, 2020
BetaNYC hosts NYC School of Data 2020 at CUNY School of Law in Queens. This event, held on International Open Data Day 2020, concludes the fourth annual NYC Open Data Week. NYC School of Data is a community-driven conference with a focus on open data, civic technology, and service...
New York City, USA / March 03, 2020
Dr. Joy Lisi Rankin, Research Lead at AI Now Institute, recently spoke about bias, algorithmic technologies and surveillance at the UN's "Counted and Visible" Conference.
AI Now Institute / February 27, 2020
Important Bird Opera (music, photographs, and film by Ryan Moritz with a libretto by Anjuli Fatima Raza Kolb) is an experimental opera in three acts about birds, migration, climate crisis, and rewilding. The piece began as a research project and experiment in nature photography, sound documentary, and the twentieth-century tone...
Brussels, Belgium / February 20, 2020
Our Executive Director (Former) Andrea Nill Sánchez testified before the European Parliament on the risks and harms associated with predictive policing systems.
Brussels, Belgium / February 18, 2020
Amba Kak presented on a panel on the future of data protection law and algorithmic accountability at CPDP 2020.
Washington D.C. / January 15, 2020
Learn more about the steps Congress should take to mitigate the harmful effects of AI and facial recognition tech.
AI Now Institute / December 16, 2019
AI Now hosted a gathering of NYC-area attorneys of color in honor of the MLK holiday.
New York, NY / December 07, 2019
AI Now and several other organizations partnered with NAACP Legal Defense and Educational Fund (LDF) to host a community forum on algorithmic bias in connection with the publication of Confronting Black Boxes: A Shadow Report on the New York City Automated Decision System Task Force.
AI Now Institute / November 22, 2019
AI Now Fridays creates space for discussion, exploration, and insight. Each event will feature a short talk, followed by casual conversation. Held on the last Friday of every month, this edition featured artists Michelle Dizon and Viet Le.
Berlin, Germany / November 14, 2019
The workshop, co-hosted with the Digital Freedom Fund in Berlin, brought together European and North American litigators with experience in cases at the intersection of automated/algorithmic decision-making and human rights. The aim of the workshop was to facilitate the sharing of lessons learned from already-existing cases, while helping those at...
Milan, Italy / October 26, 2019
The Training Humans Symposium engaged with the themes of Training Humans, the first major photography exhibition devoted to training images: the collections of photos used by scientists to train artificial intelligence (AI) systems in how to “see” and categorize the world. Featuring Prof. Stephanie Dick, Prof. Eden Medina, and Prof....
NYU Skirball Center, New York, NY / October 02, 2019
The fourth annual AI Now Symposium provided behind-the-scenes insights from those at the frontlines of the growing pushback against harmful AI. Our program featured leading lawyers, organizers, scholars, and tech workers, all of whom have engaged creative strategies to combat exploitative AI systems across a wide range of contexts, from...
Washington, D.C. / September 28, 2019
The U.S. National Academies of Sciences, Engineering, and Medicine hosted a public symposium on human rights and digital technologies.
AI Now Institute / September 12, 2019
In their new book, The Costs of Connection: How Data Colonizes Human Life and Appropriates It for Capitalism (Stanford University Press, August 2019), Couldry and Mejias argue that the role of data in society needs to be grasped not only as a development of capitalism, but as the start of...
AI Now Institute / September 09, 2019
We hosted a talk by Māui Hudson, Associate Professor in the Faculty of Māori and Indigenous Studies at the University of Waikato, titled Developing an Indigenous Data Science Platform. Presented in collaboration with NYU Native Studies Forum; NYU Department of Anthropology; NYU Center for Media, Culture, and History; and Local...
AI Now Institute / August 22, 2019
This workshop convened practitioners from the more traditional software security world, those developing AI systems, researchers studying AI security and vulnerability, and researchers examining the social and ethical implications of AI. AI systems can be exploited through a variety of triggers, from outright adversarial attacks, to bugs, to...
AI Now Institute / July 18, 2019
AI is in the news a lot today, and most of it is not good. From tools that classify people by race and gender to systems that try to “predict” future crimes based on racist police data sets, AI-driven systems seem to be classifying, dividing, and controlling people more...
AI Now Institute / June 28, 2019
AI Now Fridays creates space for discussion, exploration, and insight. Each event will feature a short talk, followed by casual conversation. Held on the last Friday of every month, this edition featured scholar and activist Joan Greenbaum.
New York City, USA / June 21, 2019
This workshop, co-hosted with NYU Law’s Center on Race, Inequality and the Law, brought together folks focused on litigating algorithmic decision-making in various areas of the law (from employment to social benefits to criminal justice) to discuss strategy and best practices, and to exchange ideas about experiences and strategic thinking around litigation...
AI Now Institute / June 05, 2019
TPHH is a gathering of folks working at the intersection of technology, law, and policy, from advocates to policymakers, academics, journalists, and everyone in between.
AI Now Institute / May 31, 2019
AI Now’s Kate Crawford and Meredith Whittaker kicked off this inaugural AI Now Friday, discussing AI Now’s history, their recent work, and some of the challenges they see on the horizon.
AI Now Institute / April 19, 2019
Disconnect: Facebook’s Affective Bonds (University of Minnesota Press, 2018) is a book about the risk each social media platform faces and must respond to. It centers on the idea that the different functions and services of social media are not built for the sake of connecting people but to prevent...
AI Now Institute / March 28, 2019
This workshop brought together a small group of experts from academia, industry and civil society to start a conversation on issues around the intersection of disability, bias, and AI, and to identify areas where more research is urgently needed. AI systems are proliferating and being deployed across core social institutions,...
NYU Skirball Center, New York, NY / October 16, 2018
The AI Now 2018 Symposium addressed the intersection of AI, ethics, organizing, and accountability, examining the landmark events of the past year that have brought these topics squarely into focus. What can we learn from them, and where is there more work to be done?
Washington D.C., USA / July 11, 2018
Policy director Rashida Richardson discusses regulation of AI and its use in government on a panel alongside two members of Congress and the head of the IT Industry Council.
Berlin, Germany / July 03, 2018
Our roundtable on Machine Learning, Inequality and Bias, co-hosted in Berlin with the Robert Bosch Academy, gathered researchers and policymakers from across Europe to address issues of bias, discrimination, and fairness in machine learning and related technologies.
New York, NY / June 22, 2018
AI Now partnered with NYU Law’s Center on Race, Inequality and the Law and the Electronic Frontier Foundation to host a first-of-its-kind workshop that examined current United States courtroom litigation where the use of algorithms by government was central to the rights and liberties at issue in...
New York, NY / February 21, 2018
Recently, the New York City Council passed the first general algorithmic-accountability legislation in the country. At the same time, the European Union has been moving forward on its own data accountability regime under the new General Data Protection Regulation. But what does it mean to hold a machine accountable? With...
New York, NY / January 25, 2018
The Data Genesis Working Group convenes experts from across industry and academia to examine the mechanics of dataset provenance and maintenance.
New York, NY / September 15, 2017
Our workshop on Immigration, Data, and Automation in the Trump Era, co-hosted with the Brennan Center for Justice and the Center for Privacy and Technology at Georgetown Law, focused on the Trump Administration’s use of data harvesting, predictive analytics, and machine learning to target immigrant communities.
MIT Media Lab, Cambridge, MA / July 10, 2017
The second annual AI Now Symposium deepened the examination of the near-term social and economic implications of AI begun during the first Symposium, addressing four key issues in relation to AI: Rights and Liberties, Labor and Automation, Bias and Inclusion, and Ethics and Governance. These themes built on the work...
NYU Skirball Center, New York, NY / July 07, 2016
In July of 2016, Kate Crawford and Meredith Whittaker co-chaired the first AI Now Symposium in collaboration with the Obama White House’s Office of Science and Technology Policy and the National Economic Council. The event brought together leading experts and members of the public to discuss the near-term social and...
by Prof. Rashida Richardson, Amba Kak and Ian Head
FOIA Basics for Activists - Tech Requests
The Covid-19 Crisis, Computational Resource Control, and Water Relief Policy
By Meredith Whittaker
This is a perilous moment. Private computational systems marketed as AI are threading through our public life and institutions, concentrating industrial power, compounding marginalization, and quietly shaping access to resources and information.
A submission by the AI Now Institute and Data & Society Research Institute
by Ada Lovelace Institute, AI Now Institute and Open Government Partnership.
Learning from the first wave of policy implementation
by Anti-Eviction Mapping Project
"Counterpoints" brings together cartography, essays, illustrations, poetry, and more in order to depict gentrification and resistance struggles from across the SF Bay Area and act as a roadmap to counter-hegemonic knowledge making and activism.
By Rashida Richardson & Amba Kak
This piece provides a summary overview of our new article, forthcoming in the University of Michigan Journal of Law Reform. This article received the Reidenberg-Kerr Award for best paper by pre-tenured scholars at the Privacy Law Scholars Conference 2021.
by Ben Green and Amba Kak
What does “human oversight of A.I.” really mean?
By Meredith Whittaker, Shazeda Ahmed, and Amba Kak
We're launching an essay series exploring the myths, realities, actors, and incentives underpinning dominant China tech and AI narratives.
A guest post by Alexandria Williams.
Alexandria Williams writes about her insider experience working for a Chinese tech firm and conducting on-the-ground reporting about Chinese tech in Africa.
By Dr. Theodora Dryer
A response to two of the European Commission’s key priorities for the upcoming years: to “accelerate innovation and digitalisation” while at the same time “reaching climate neutrality and high environmental standards.”
A guest post by Lucy Suchman
The National Security Commission on Artificial Intelligence (NSCAI) released its Final Report and Recommendations. The Commission’s recommendations rest upon a set of unexamined, and highly questionable, assumptions.
The AI Now Institute, Ada Lovelace Institute, and the Open Government Partnership (OGP) are partnering to launch the first global study evaluating this initial wave of algorithmic accountability policy.
by Amba Kak
A Digital New Deal: Visions of Justice in a Post-Covid World
2020 has been a year of hard truths and tragedy, as interlocking crises put the failures, inadequacies, and structural limitations of our core institutions in the spotlight.
Gina Barba with Erin McElroy
by Sarah Myers West
In this article, Sarah Myers West outlines a feminist critique of extant methods of dealing with algorithmic discrimination.
by Joy Lisi Rankin
The Gender, Race, and Power in AI Program looks to past and current social, political, and economic justice movements for paths forward.
This post reflects on and excerpts from our most recent report, Regulating Biometrics: Global Approaches and Urgent Questions.
Amid heightened public scrutiny, interest in regulating biometric technologies like face and voice recognition has grown significantly across the globe, driven by community advocacy and research.
by Erin McElroy, Meredith Whittaker, Genevieve Fried
Excerpt from a piece we wrote in the Boston Review on how property technology (proptech) is leading to new forms of housing injustice in ways that increase the power of landlords and further disempower tenants and those seeking shelter.
How is this persistent strain of reactionary politics currently manifesting within the tech industry? What do the views held by these AI founders suggest about the technologies they are building? And — most importantly — what should we do about it?
The (re)makings of austerity, disaster capitalism, and the no return to normal
On the risks and harms of predictive policing
German OpEd on lessons from the Clearview AI revelations
A call to halt the use of facial recognition
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society
From CPP protesting the Vietnam War to Google employees walking out over sexual harassment, there’s a rich history of tech workers organizing. Read Joan Greenbaum’s latest essay on the radical history of resistance in the tech industry.
Written testimony to the New York City Council Committee on Technology Oversight Hearing on Automated Decision Systems
Atlantic Plaza Towers tenants launched a campaign to halt facial recognition from being installed in their building and won! Tranae’ Moran, one of the lead organizers, writes about this campaign & why a moratorium on facial recognition is needed.
Written Testimony to the New York City Council on Creating Comprehensive Reporting and Oversight of NYPD Surveillance Technologies
Read our annual report
A Shadow Report of the New York City Automated Decision System Task Force
Algorithmic bias builds on long patterns of historical discrimination. And the same communities are hit the hardest
Written Testimony to the New York City Council on the use of Electronic Health Records
We hosted scholars and advocates working at the intersection of disability, bias, and AI. Our report draws on that workshop and looks at what disability studies and activism can tell us about the risks and possibilities of AI.
Read our latest report
Comments to HUD Proposed Rule on the Fair Housing Act's Disparate Impact Standard
How they’re connected and what we can do about it
Opening remarks from our 2019 Symposium on the growing pushback against harmful AI
Litigating Algorithms 2019 U.S. Report
New Challenges to Government Use of Algorithmic Decision Systems
A Guide for Students
Written Testimony to the US House Committee on Science, Space, and Technology
Written Testimony to the US Senate Subcommittee on Communications, Technology, Innovation and the Internet
Examples of Government Use Cases
A European Parliament commissioned report on policy options for the governance of algorithmic technologies
Read our report on the state of AI
Outlining a New AI Research Agenda
A Year in Review
For advocates interested in understanding government use of algorithmic systems
Challenging Government Use of Algorithmic Decision Systems
Recommendations from NYC Advocates
New report offering a practical framework for AI accountability in public agencies
Proceedings of the 5th Workshop on Fairness, Accountability, and Transparency in Machine Learning: Stockholm, Sweden
AI Now’s initial recommendations for automation accountability in NYC
A letter to Mayor de Blasio
With key recommendations for the field of artificial intelligence
Erin McElroy, Meredith Whittaker & Nicole E. Weber on how landlords’, bosses’, and schools’ intrusions of surveillance technologies into the home extend the carceral state into domestic space.
Smartphones, sensors and consumer habits reveal much about society. Too few people have a say in how these data are created and used.
Amba Kak and Ben Green challenge the global convergence toward policies requiring human oversight of AI.
“This is about actively creating a technology that can be put to harmful uses rather than identifying and mitigating vulnerabilities in existing technology,” Sarah Myers West, a researcher for the AI Now Institute, told Motherboard. “Researchers aren’t always going to be well-placed to make these assessments on their own. That’s...
"Meredith Whittaker, faculty director of New York University’s AI Now Institute, predicts that there will be a clearer split between work done at institutions like her own and work done inside tech companies..."
A 2019 study out of New York University’s AI Now Institute identified jurisdictions where inaccurate or falsified records were directly fed into the data. Chicago’s one of them.
“Google just featured LaMDA, a new large language model, at I/O,” tweeted Meredith Whittaker, an AI fairness researcher and co-founder of the AI Now Institute. “This is an indicator of its strategic importance to the Co. Teams spend months prepping these announcements. Tl;dr this plan was in place when Google...
AI moderation is “decently good at identifying the most obviously harmful material. It’s not so great at capturing nuance, and that’s where human moderation becomes necessary,” said Sarah Myers West, a postdoctoral researcher with NYU’s AI Now Institute. Making content moderation decisions is “highly skilled labor,” West said.
"You never see these companies picking ethics over revenue," says Meredith Whittaker, a former Google engineer who now heads the AI Now Institute at New York University. "These are companies that are governed by shareholder capitalism."
“My reaction was: What on earth are they doing?” Meredith Whittaker, cofounder of the AI Now Institute and one of the organizers of the 2018 Google Walkouts, told us. “It sounds like a very desperate kind of cover-your-ass move from a company that is really unsure about how to continue...
A vision of what life might be like under widespread emotion recognition can be found in China, says Shazeda Ahmed, a researcher with the AI Now Institute who recently co-wrote a report on the dire implications the technology has on human rights in the country. Ahmed discovered applications ranging from...
"Hard decisions are being made at hospitals all the time, especially in this space, but I’m worried about algorithms being the idea of where the responsibility gets shifted," said Varoon Mathur, a technology fellow at NYU’s AI Now Institute, in a Zoom interview.
These are signs that the U.S. is entering a new era of regulation, said Meredith Whittaker of the AI Now Institute at New York University. "It remains to be seen how they use that power, but we are seeing at least in that agency a turn to a much more...
The lawmakers pointed to a 2019 study from New York University School of Law and NYU’s AI Now Institute which discovered predictive policing systems being trained on what the researchers call "dirty data," or data derived from "corrupt, biased, and unlawful practices."...
A study by researchers at New York University’s AI Now Institute of thirteen US jurisdictions where predictive policing tools have been in operation concluded that ‘illegal police practices can significantly distort the data that is collected’ and that dirty data risks still being used for law enforcement. How problematic...
Joy Lisi Rankin's op-ed on Slate: "Google’s digital training for Black women is emphatically not an effort to fix the system but rather an effort to “fix” Black women."
The landlord tech industry, while alive and well prior to COVID-19, has ramped up in the past year to develop new ways to accumulate wealth at the expense of tenants. An Op-Ed by Erin McElroy, Wonyoung So, and Nicole Weber.
For Shazeda Ahmed, a visiting researcher at New York University’s AI Now Institute who contributed to the Article 19 report, these are all “terrible reasons”. “That Chinese conceptions of race are going to be built into technology and exported to other parts of the world is really troubling, particularly since...
This reflects in part its lack of inclusivity: according to New York University’s AI Now Institute, just 18% of authors in leading AI conferences are women, roughly 80% of AI professors are men, and non-white engineers comprise less than 5% of most major technology companies’ workforces. And if current AI...
“I’m not surprised that the pendulum swung one way and there was all this room for experimentation and now it’s swinging back the other way and you actually can’t do these things,” said Shazeda Ahmed, a visiting researcher at New York University’s AI Now Institute who has extensively studied the...
A 2019 study by the AI Now Institute at New York University found that only 10 percent of AI researchers at Google were women. At Facebook, only 15 percent of AI researchers were women. At Google, Black women represent only 0.7 percent of its technical workforce, according to the company’s...
But Meredith Whittaker, co-founder of the AI Now Institute that studies the social implications of artificial intelligence, said she was not satisfied with Twitter’s response. “Systems like Twitter’s image preview are everywhere, implemented in the name of standardization and convenience,” she told Thomson Reuters Foundation. “This is another in a...
Sarah Myers West, a postdoctoral researcher at New York University’s AI Now Institute, told CNBC: “Algorithmic discrimination is a reflection of larger patterns of social inequality … it’s about much more than just bias on the part of engineers or even bias in datasets, and will require more than a...
“That process of creating a synthetic data set, depending on what you’re extrapolating from and how you’re doing that, can actually exacerbate the biases,” says Deb Raji, a technology fellow at the AI Now Institute. “Synthetic data can be useful for assessment and evaluation [of algorithms], but dangerous and ultimately...
‘We were seeing AI being used extensively before Covid-19, and during Covid-19 you're seeing an increase in the use of some types of tools,’ noted Meredith Whittaker, a distinguished research scientist at New York University in the US and co-founder of AI Now Institute, which carries out research examining the...
In a new report called “Regulating Biometrics: Global Approaches and Urgent Questions,” the AI Now Institute says regulation advocates are beginning to believe a biometric surveillance state is not inevitable.
"It clearly seems to be a racist way of saying: 'Look through your tenants who you don't want to live here and replace them with tenants who you do,'" Erin McElroy, a researcher at the AI Now Institute and cofounder of the Anti-Eviction Mapping Project, told Business Insider.
As Sarah Myers West, a postdoctoral researcher at New York University’s AI Now Institute, explained to CBS News, “We turn to machine learning in the hopes that they’ll be more objective, but really what they’re doing is reflecting and amplifying historical patterns of discrimination and often in ways that are...
Inioluwa Deborah Raji, a fellow at NYU’s AI Now Institute, which works on algorithmic fairness, says people reaching for a technical solution often embrace statistical formulas too tightly. Even well-supported pushback is perceived as highlighting a need for small fixes, rather than reconsidering whether the system is fit for the...
“The research community is beginning to acknowledge that we have some level of responsibility for how these systems are used,” says Inioluwa Raji, a tech fellow at NYU’s AI Now Institute. Scientists have an obligation to think about applications and consider restricting research, she says, especially in fields like facial...
"It shows that organizing and socially informed research works," said Meredith Whittaker, co-founder of the AI Now Institute, which researches the social implications of artificial intelligence. "But do I think the companies have really had a change of heart and will work to dismantle racist oppressive systems? No."
Tech companies need binding, detailed policies that hold them accountable in addressing the many ethical concerns surrounding AI, says Meredith Whittaker, co-founder of the AI Now Institute at New York University.
A recent report from the AI Now Institute revealed that 80% of AI professors, 85% of AI research staff at Facebook, and 90% of AI employees at Google are male.
Meredith Whittaker, co-founder of AI Now, a research institute at New York University that studies the social implications of artificial intelligence, said she is happy the legislation would require companies to notify workers when their candidacy or performance is being assessed by technology. But she said she would want to...
“The inner workings of these systems are largely shrouded in corporate secrecy, which only puts more power in the hands of corporations at the expense of workers and the public overall,” Andrea Nill Sanchez, Executive Director of the AI Now Institute, a group that researches the social implications of AI,...
“We’re glad the EU report acknowledges that facial recognition, when deployed in public spaces, poses a threat to fundamental rights and to the GDPR,” wrote Amba Kak, director of global strategy and programs at the AI Now Institute at NYU
One activist, an engineer named Liz Fong-Jones, called attention to Project Maven in an internal blog post, according to other workers who saw it, and the circle of concern grew. Many of the engineers feared that the technology would be used to single out targets for killing. A growing number...
The gap appears even more stark at the “FANG” companies—according to the AI Now Institute just 15% of AI research staff at Facebook and 10% at Google are women.
"I think we need to pause the technology and let the rest of it catch up," said Meredith Whittaker, co-director of New York University's AI Now Institute and a witness at the hearing. She argued rules needed to be put in place requiring consent for facial recognition software. Currently, in...
AI experts told Recode that the AI guidelines are a starting point. “It will take time to assess how effective these principles are in practice, and we will be watching closely,” said Rashida Richardson, the director of policy research at the AI Now Institute. “Establishing boundaries for the federal government...
But Rashida Richardson, the policy research director of AI Now, says that position isn’t nearly enough, as the role has neither the mandate nor the power to reveal what automated systems are in use. A city spokesperson says the officer will “maintain a platform where some information about relevant tools...
A new report on the social implications of artificial intelligence from NYU’s A.I. Now Institute argues that people who work under algorithms in 2019—from Uber drivers to Amazon warehouse workers to even some white collar office workers who may not know that they’re being surveilled—have increasing cause for concern and...
“Often a job candidate doesn’t even know a system is in use,” and employers aren’t required to disclose it, says Sarah Myers West, a researcher at the AI Now Institute, a New York University research group. A new Illinois law will go into effect next month requiring employers to disclose...
The AI Now Institute says the field is "built on markedly shaky foundations". Despite this, systems are on sale to help vet job seekers, test criminal suspects for signs of deception, and set insurance prices.
A prominent group of researchers alarmed by the harmful social effects of artificial intelligence called Thursday for a ban on automated analysis of facial expressions in hiring and other major decisions. The AI Now Institute at New York University said action against such software-driven “affect recognition” was its top priority...
With such a small potential talent pool, companies will need to be strategic. Researchers at the AI Now Institute at New York University have proposed several potential strategies to consider. They recommend that companies publish compensation levels and ensure pay and opportunity equality. They also recommend publishing harassment and discrimination...
Even if all the technical issues were to be fixed and facial recognition tech completely de-biased, would that stop the software from harming our society when it’s deployed in the real world? Not necessarily, as a recent report from the AI Now Institute explains.
So how do we address the need for diversity and prevent bias? New York University's AI Now Institute report suggests that in addition to hiring a more diverse group of candidates, companies must be more transparent about pay and discrimination and harassment reports, among other practices, to create an atmosphere...
Postdoctoral researcher at the AI Now Institute, Dr. Sarah Myers West, says these systems are built to reflect the data they are fed, and that data can be built on bias. “These systems are being trained on data that’s reflective of our wider society,” West said. “Thus, AI is going...
“It’s a profoundly disturbing development that we have proprietary technology that claims to differentiate between a productive worker and a worker who isn’t fit, based on their facial movements, their tone of voice, their mannerisms,” said Meredith Whittaker, a co-founder of the AI Now Institute, a research center in New...
Meredith Whittaker, a co-founder of the AI Now Institute, a research centre in New York, said that it's a profoundly disturbing development that we have proprietary technology that claims to differentiate between a productive worker and a worker who isn't fit, based on their facial movements, their tone of voice,...
The increasing prevalence of AI has boosted efficiency and reduced costs for companies but has also drawn concerns about job losses and hidden discrimination. Reuters revealed last year that Amazon abandoned an AI recruiting tool in development because the tech giant could not fix its bias against women. Uber’s facial recognition...
“If the immediate feedback you’re giving to a student is going to be biased, is that useful feedback? Or is that feedback that’s also going to perpetuate discrimination against certain communities?” Sarah Myers West, a postdoctoral researcher at the AI Now Institute, told Motherboard.
Meredith Whittaker, a co-founder of the AI Now Institute at New York University and a former Google employee, characterized the tech industry’s scaremongering about China as a tactical move meant to deflect criticism. “It’s a really convenient narrative,” Ms. Whittaker said. “It evokes nationalism and a red scare trope that...
On paper, Amazon is giving out cool stuff for free. But the company is also getting "extremely inexpensive access to record some of the most intimate parts of your life," says Meredith Whittaker, co-founder of the AI Now Institute.
Customers have used “affect recognition” for everything from measuring how people react to ads to helping children with autism develop social and emotional skills, but a report from the A.I. Now Institute argues that the technology is being “applied in unethical and irresponsible ways.”
Meredith Whittaker, a distinguished research scientist at New York University and co-director of the AI Now Institute, argues that Cogito could become another example of AI software that’s difficult for people to understand, but ends up having a massive impact on their lives regardless.
“AI systems have evidenced a persistent pattern of gender and race-based discrimination. I have yet to encounter an AI system that was biased against white men,” said Meredith Whittaker, co-founder of the AI Now Institute at New York University.
Google engineer Irene Knapp spoke in favor of a proposal to tie executive compensation to the company’s progress on diversity and inclusion. Knapp cited research by the group AI Now showing that bias in artificial intelligence technology is related to the lack of diversity in the industry.
Women and people of color are fighting many battles in the tech world and in the fast-growing world of artificial intelligence. The other panel member, Meredith Whittaker, a founder and a director of the AI Now Institute at New York University, noted that voice recognition tools that rely on A.I....
And in the case of emotion detection, significant decisions — like whether or not you get a job — can hang on the software’s interpretation of your facial expressions, says Meredith Whittaker, co-founder of NYU’s AI Now Institute.
Meredith Whittaker, another walkout organizer, and leader of Google’s Open Research project, wrote in her emails that the company disbanded its external AI ethics council and was told her role would be “changed dramatically.” Whittaker said she was told that, in order to stay at the company, she would have...
"The people who are able to apply and market these technologies are large brands and corporations," says Meredith Whittaker, co-founder of the AI Now Institute. "This is not an equal-access set of technologies. They're used by some on others."
“Corporate secrecy laws are a barrier to due process,” said Jason Schultz, the AI Now Institute’s research lead for law and policy. “They contribute to the ‘black box effect,’ rendering systems opaque and unaccountable, making it hard to assess bias.”
"Even if all the technical issues were to be fixed and facial recognition tech completely de-biased, would that stop the software from harming our society when it’s deployed in the real world? Not necessarily, as a new report from the AI Now Institute explains."
"There’s all kinds of different ... AI Now Institute is doing some, all kinds of groups of people are doing it. But they kept talking about norms so I was sort of laughing, because it was all these academics saying, “We have these norms, those norms, and these norms,” and...
"Jason Schultz, a law professor at New York University, said Ever AI should do more to inform Ever app’s users about how their photos are being used. Burying such language in a 2,500-word privacy policy that most users do not read is insufficient, he said. “They are commercially exploiting the...
"In December AI Now reported on a subclass of facial recognition that supposedly measures your affect with claims that it can detect your true personality, your inner feelings and even your mental health based on images or video of your face. AI Now warned against using these tools for hiring...
"The embedded discrimination highlighted in the AI Now report is truly frightening. Tech must begin its reforms not from the perspective of what’s happened before, but what we can see will be the case in the future."
Additionally, the organizers say Meredith Whittaker and Claire Stapleton—who helped organize the first Google Walkout—have faced retaliation within the company, a document shared with Motherboard says. WIRED also reported last month that Whittaker and Stapleton have faced retaliation. Whittaker claims that after Google disbanded its AI ethics board on April...
A report from New York University’s AI Now Institute found that a predominantly white male coding workforce is causing bias in algorithms.
Meredith Whittaker, who co-signed the email, said her role had also been "changed dramatically". In a separate controversy at the tech company, Google set up an external AI ethics council this spring and invited Kay Cole James to sit on the panel. Ms James is the president of the Heritage...
One of the authors on the AI Now report, Sarah Myers West, said in a press call that such “algorithmic gaydar” systems should not be built, both because they’re based on pseudoscience and because they put LGBTQ people at risk. “The researchers say, ‘We’re just doing this because we want...
“The frameworks presently governing AI are not capable of ensuring accountability,” a review of AI ethical governance by the AI Now Institute concluded in November.
Companies at all stages on the AI development spectrum want to know where public policy on intelligent systems is headed. The work being produced by New York–based research institute AI Now offers a road map. The institute delivers periodic topical reports and more general “state of play” annual reports that...
The AI Now Institute recently published an assertive list of some of the lowlights in AI ethics over the past year, along with a report highlighting the growing ethical risks in surveillance. The latter manifest themselves both in the form of unchecked capacity to monitor individuals beyond their ability to...
According to Meredith Whittaker, co-founder and co-director of the AI Now Institute at NYU, this is only the tip of the ethical iceberg. Accountability and liability are open and pressing issues that society must address as autonomous vehicles take over roadways. “Who ultimately bears responsibility if you’re looking at a...
Meredith Whittaker, a Google employee organizer and cofounder of the AI Now Institute, told Forbes that the "GooglePayOutsForAll" social media effort is meant to highlight the trade-off Google made in prioritizing payouts to Singhal and Rubin. “Imagine a world where we’re not paying sexual predators over $100 million dollars,” she...
“People are recognizing there are issues, and they are recognizing they want to change them,” said Meredith Whittaker, a Google employee and the co-founder of the AI Now Institute, a research institute that examines the social implications of artificial intelligence. But she also told the conference that this change was...
“People are recognizing that there are issues, and they want to change it,” Ms. Whittaker said. “But has the objective function of these corporations changed? They’re still major corporations at a time of neoliberal capitalism that are optimizing their products for shareholder value.” She continued to say that users and...
The absence of rules of the road is in part because industry hands have cast tech regulation as troglodytic, says Meredith Whittaker, co-founder of the AI Now Institute at New York University. In addition, many AI systems and the companies that make them are opaque. "Technocratic smokescreens have made it...
Crawford and her colleagues are now more opposed than ever to the spread of this sort of culturally and scientifically regressive algorithmic prediction: “Although physiognomy fell out of favor following its association with Nazi race science, researchers are worried about a reemergence of physiognomic ideas in affect recognition applications,” the...
Policy director Rashida Richardson weighs in on California's new law, warning about the risk of biases baked into algorithmic decision making.
Policy director Rashida Richardson discusses the letter sent to Bill de Blasio and the Automated Decision Systems Task Force cosigned by AI Now.
Cofounder Meredith Whittaker appears on the ACLU’s “At Liberty” podcast to discuss regulating algorithms and emerging threats to our civil liberties.
Policy director Rashida Richardson discusses ethical and technical concerns around law enforcement use of facial recognition technology.
Cofounder Meredith Whittaker raises concerns about AI products used in sourcing and hiring employees, questions their claims to "remove bias" from hiring, and calls for increased oversight and accountability.
AI Now's Algorithmic Impact Assessment (AIA) framework gives lawmakers a way to evaluate the effect of algorithmic decision-making in government.
Cofounder Meredith Whittaker weighs in on the industry's efforts to address bias in facial recognition tools.
Fast Company draws parallels to the environmental disasters that led to the creation of environmental impact statements in covering AI Now's Algorithmic Impact Assessment framework.
Cofounder Meredith Whittaker discusses the dangers of AI-powered surveillance and law enforcement tools, and the structural injustices that could lead to selective application.
Cofounders Kate Crawford and Meredith Whittaker spoke to Quartz about the official launch of the AI Now Institute at NYU.
In their 2017 report, AI Now became the first major research organization to publicly call for an end to ‘black box’ algorithms in core public agencies, such as those responsible for criminal justice, healthcare, welfare, and education.
This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.