
Concerns over biased algorithms grow as computers make more decisions

A new report suggests ways to improve the technology and its accountability.

Shara Tibken

Technology is increasingly taking the place of humans in making decisions. 

Getty Images

When the US started distributing COVID-19 vaccines late last year, an essential question emerged: Who should get priority access to the shots? Many medical facilities and health officials decided to first vaccinate workers, including nurses and janitors, who came into close contact with infected people. Stanford Medicine, part of one of the country's top universities, instead built an algorithm to determine the order. 

The only problem with letting a computer decide who should get the vaccine first is that its "very complex algorithm" -- which turned out not to be very complicated at all -- was built on faulty assumptions and data. Namely, the algorithm prioritized medical workers over a certain age without taking into account that many older doctors weren't regularly seeing patients. Only seven of the 5,000 doses in Stanford Medicine's initial batch of COVID-19 vaccines were allocated to front-line resident physicians. Most were meant for senior faculty and doctors who worked from home or had little contact with COVID-19-infected patients. Stanford quickly scrapped its algorithm and worked to vaccinate its front-line employees. 

"Our algorithm that the ethicists and infectious disease experts worked on for weeks to use age, high-risk work environments [and] prevalence of positivity within job classes … clearly didn't work right," Tim Morrison, a director of Stanford's ambulatory care team, said in a video posted on Twitter in mid-December. 


Stanford's vaccine debacle is only one example of the many ways algorithms can be biased, a problem that's becoming more visible as computer programs take the place of human decision makers. Algorithms hold the promise of making decisions based on data without the influence of emotions: Rulings could be made more quickly, fairly and accurately. In practice, however, algorithms aren't always based on good data, a shortcoming that's magnified when they're making life-and-death decisions such as distribution of a vital vaccine. 

The effects are even broader, according to a report released Tuesday by the Greenlining Institute, an Oakland, California-based nonprofit working for racial and economic justice, because computers determine whether someone gets a home loan, who gets hired and how long a prisoner is locked up. Often, algorithms retain the same racial, gender and income-level biases as human decision makers, said Greenlining CEO Debra Gore-Mann. 

"You're seeing these tools being used for criminal justice assessments, housing assessments, financial credit, education, job searches," Gore-Mann said in an interview. "It's now become so pervasive that most of us probably don't even know that some sort of automation and assessment of data is being done." 

The Greenlining report examines how poorly designed algorithms threaten to amplify systemic racism, gender discrimination and prejudices against people with lower incomes. Because the technology is created and trained by people, the algorithms -- intentionally or not -- can reproduce patterns of discrimination and bias, often without people being aware it's happening. Facial recognition is one area of technology that's proved to be racially biased. Fitness bands have struggled to be accurate in measuring the heart rates of people of color. 

"The same technology that's being used to hyper-target global advertising is also being used to charge people different prices for products that are really key to economic well being like mortgage products insurance, as well as not-so-important things like shoes," said Vinhcent Le, technology equity legal counsel at Greenlining. 

In another example, Greenlining flagged an algorithm created by Optum Health that could be used to determine priority for medical attention for patients. One of the factors was how much patients spent on health expenses, with the assumption that the sickest people spent the most on health care. Using that parameter alone wouldn't take into account that people with less money sometimes had to choose between paying rent or paying medical bills, something that would disproportionately hurt Black patients, Greenlining said. 
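
As a rough illustration of how that proxy can go wrong -- a hypothetical sketch with invented numbers, not Optum's model -- ranking patients by past spending and ranking them by a direct measure of need can produce opposite orderings.

    # Hypothetical illustration of the proxy problem Greenlining describes.
    # Past spending stands in for health need, so a patient who deferred care
    # for cost reasons is ranked as less in need.
    patients = [
        # (description, past_health_spending_usd, chronic_conditions)
        ("well-insured patient",               12000, 2),
        ("patient who deferred care on cost",   3000, 4),
    ]

    ranked_by_spending = sorted(patients, key=lambda p: p[1], reverse=True)
    ranked_by_need = sorted(patients, key=lambda p: p[2], reverse=True)

    print("by spending:", [p[0] for p in ranked_by_spending])
    print("by need:    ", [p[0] for p in ranked_by_need])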

Optum Health said the health provider that tested the use of the algorithm in that way didn't ultimately use it to determine care. 

"The algorithm is not racially biased," Optum said in a statement. "The tool is designed to predict future costs that individual patients may incur based on past health care experiences and does not result in racial bias when used for that purpose -- a fact with which the study authors agreed."

No easy fix

In its report, Greenlining presents three ways for governments and companies to ensure the technology does better. Greenlining recommends that organizations practice algorithm transparency and accountability; work to develop race-aware algorithms in instances where they make sense; and specifically seek to include disadvantaged populations in the assumptions behind their algorithms. 

Ensuring that happens will fall to lawmakers. 

"The whole point [of the report] is build the political will to start regulating AI," Le said. 

In California, the state legislature is considering Assembly Bill 13, also known as the Automated Decision Systems Accountability Act of 2021. Introduced Dec. 7 and sponsored by Greenlining, it would require businesses that use "an automated decision system" to test for bias and the impacts it would have on marginalized groups. If there's an impact, the organizations have to explain why the discriminatory treatment isn't illegal. "You can treat people differently, but it's illegal when it's based on protected characteristics like race, gender and age," Le said. 
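
The bill doesn't spell out what such a bias test would look like in practice. One common, simple check -- sketched hypothetically below, not language from AB 13 -- compares favorable-outcome rates across demographic groups and flags large gaps for review.

    # One simple check a bias audit might include: compare favorable-outcome
    # rates across groups, sometimes called a disparate-impact or "80 percent" test.
    def selection_rate(outcomes):
        return sum(outcomes) / len(outcomes)

    # Hypothetical automated loan-approval outcomes for two groups.
    group_a = [True, True, True, False, True]    # 80% approved
    group_b = [True, False, False, False, True]  # 40% approved

    ratio = selection_rate(group_b) / selection_rate(group_a)
    print(f"impact ratio: {ratio:.2f}")  # a ratio well below 0.8 would flag the system for review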

In April 2019, Sens. Cory Booker of New Jersey and Ron Wyden of Oregon and Rep. Yvette D. Clarke of New York, all Democrats, introduced the Algorithmic Accountability Act, which would have required companies to study and fix flawed computer algorithms that resulted in inaccurate, unfair, biased or discriminatory decisions impacting Americans. A month later, New Jersey introduced a similar Algorithmic Accountability Act. Neither bill made it out of committee. 

If California's AB 13 passes, it would be the first such law in the US, Le said, but it may fail because it's too broad as currently written. Greenlining instead hopes to narrow the bill's mandate to focus first on government-created algorithms. The hope is that the bill will set an example for a national effort. 

Most of the issues with algorithms aren't because people are biased on purpose, Le said. "They are just taking shortcuts in developing these programs." In the case of the Stanford vaccine program, the algorithm developers "didn't think through the consequences," he said.  

"No one's really quite sure [about] all the things that need to change," Le added. "But what [we] do know is that the current system is not well equipped to handle AI."

Updated at 4 p.m. PT to include information from Optum Health.