
Amazon’s now-defunct AI hiring tool was anti-woman


Amazon shut down the artificial intelligence (AI) tool it was using to evaluate potential hires because the algorithm was found to be biased against women, according to a story reported by Reuters on Tuesday. The news comes amid mounting evidence that artificial intelligence programs reproduce structural inequalities, including racism and sexism, and so do little to address the bias that pervades human decision-making.

“We are increasingly focusing on algorithmic fairness as an issue,” Rachel Goodman, a staff attorney with the American Civil Liberties Union’s Racial Justice Program, told Reuters. The organization is currently challenging a law that it argues criminalizes research into the potentially discriminatory impact of AI on hiring and housing decisions.

Amazon started developing its AI hiring tool in 2014. A team based in Edinburgh created about 500 computer models, each focused on a specific job function and location, and taught the system to recognize some 50,000 terms that had appeared on resumes submitted in the past. The tool was designed to give potential candidates a score ranging from one to five, data that Amazon hoped could be used to streamline and expedite the hiring process.

But within a year, researchers realized the program wasn’t evaluating candidates in a gender-neutral way. They discovered that the system was giving lower rankings to resumes that included the word “women’s,” as in “women’s soccer coach,” and also penalizing graduates of two all-women’s colleges. The team edited the program to address its biased response to specific terms, only to find that other flaws remained. For example, the system favored verbs more commonly found on men’s resumes, including “executed” and “captured.”

The problem, according to Reuters, is that the data Amazon was using to build its AI model—the resumes submitted to the company over the previous 10 years—had their own gender bias. U.S. tech companies are dominated by men, especially when it comes to technical roles, and Amazon is no exception. So in training the model to evaluate potential applicants based on resumes submitted in the past, the team at Amazon effectively taught the program to favor male candidates over anyone else.  
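To see how this can happen, consider a minimal sketch in Python using scikit-learn. It is not Amazon’s system; the tiny “resume” dataset, the labels, and the model choice are all invented for illustration. Because the historical hires in this toy data skew male, the classifier learns a negative weight for the token “women” even though gender is never an explicit input.

# A minimal sketch (not Amazon's actual tool) of how a text model trained
# on historically biased hiring outcomes can learn to penalize gendered
# terms. All data below is invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "software engineer executed migration captured requirements",
    "developer executed rollout led platform team",
    "women's chess club captain software developer",
    "women's college graduate data analyst",
    "engineer built services executed launch",
    "women's soccer coach backend developer",
]
# 1 = hired in the past, 0 = rejected. Men dominated past hires, so the
# label correlates with the word "women's" purely as a matter of history.
hired = [1, 1, 0, 0, 1, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weight for the token "women" (the default tokenizer
# strips the apostrophe-s). It comes out negative: the term lowers the score.
idx = vectorizer.vocabulary_["women"]
print("weight for 'women':", model.coef_[0][idx])

And as the article notes, simply filtering out the offending word does not fix the problem: the model shifts weight onto correlated proxies, which is what Amazon’s team ran into with verbs like “executed” and “captured.”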

Some proponents of AI say that the technology will help build a fairer world by eliminating the conscious and subconscious biases inherent to human decision-making. In part through the advocacy of criminal justice reformers, AI models are already being used to help determine prison sentences and who should be released on bail. Yet evidence indicates that these programs, far from being race-neutral, are systemically anti-Black. In 2016, ProPublica found that the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) program, which estimates the likelihood that someone will commit a future crime, wrongly labeled Black people as future criminals at twice the rate of whites.

As Stephen Buranyi wrote in an August 2017 piece for The Guardian: “There is a saying in computer science: garbage in, garbage out. When we feed machines data that reflects our prejudices, they mimic them.” The data used to develop these models—arrest rates, zip codes, social and familial ties—are themselves bound up in America’s long history of racism.

As for Amazon, the company maintains that its failed hiring tool never affected any actual candidates. It told Reuters that recruiters looked at the rankings the algorithm produced but never relied solely on them in making hiring decisions.


