Giving Compass' Take:
- Authors at the MacArthur Foundation critique algorithmic decision making, which contributes to discrimination against and overcriminalization of people of color.
- How have centuries of bias contributed to the inherent discrimination of algorithmic decision making? How does bias in AI skew data and perpetuate systemic oppression?
- Read about racial bias in AI hiring technology.
Scooping up countless bits of data, predictive computer models make projections behind the scenes about what products people will buy, how they will vote, who they might want to date, which TV shows they will watch, and more.
The process is known as algorithmic decision making, or ADS. It is also used to guide judgments that can change lives forever—whether a person is likely to commit a crime, for example, or whether their children should be taken from them.
“As tools of algorithmic decision making appear in every field of our lives, it’s going to be critical that communities understand what these tools are,” said Bryan Mercer, Executive Director of Movement Alliance Project (MAP). The Philadelphia nonprofit unites community organizations around initiatives where technology, race, and inequality intersect.
Risk assessment tools, a type of ADS, are intended to help reform criminal justice and other areas by removing human bias from decisions. But activists, attorneys, and data scientists warn that the software can perpetuate inequality.
“These tools take decades of bias and turn it into math,” said Hannah Sassaman, MAP Policy Director. She challenges the notion that local governments need artificial intelligence to decide questions such as who can be released from jail safely before trial.
Experts’ primary concern is that, in many cases, data fed into the software that produces risk scores is tainted. How? It is gathered through years of unfair housing programs, discriminatory policing, and similar inequitable practices that hurt communities of color and portray them negatively.
As a result, those systems can inaccurately conclude people of color pose high risks.
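The mechanism the experts describe can be illustrated with a toy simulation (a hypothetical sketch, not any real tool's methodology): two neighborhoods have the identical true rate of some behavior, but one is policed more heavily, so its behavior is recorded in the data more often. A naive risk score computed from those records then rates the heavily policed group as twice as risky.

```python
import random

random.seed(0)

# Hypothetical setup: neighborhoods "A" and "B" have the SAME true rate
# of the underlying behavior (20%), but B is policed more heavily, so
# the behavior there is twice as likely to be recorded as an arrest.
TRUE_RATE = 0.20
DETECTION = {"A": 0.3, "B": 0.6}  # chance the behavior enters the data

def simulate(n=100_000):
    """Generate (neighborhood, arrested) records under the setup above."""
    records = []
    for _ in range(n):
        hood = random.choice(["A", "B"])
        behaved = random.random() < TRUE_RATE          # same in both groups
        arrested = behaved and random.random() < DETECTION[hood]
        records.append((hood, arrested))
    return records

def risk_score(records, hood):
    # A naive "risk score": the fraction of past records from this
    # neighborhood that ended in an arrest -- the statistic a model
    # trained on arrest data effectively learns.
    outcomes = [arrested for h, arrested in records if h == hood]
    return sum(outcomes) / len(outcomes)

records = simulate()
print(risk_score(records, "A"))  # roughly 0.06
print(risk_score(records, "B"))  # roughly 0.12, despite identical behavior
```

The score difference reflects only where arrests were recorded, not any difference in behavior—exactly the kind of "decades of bias turned into math" that Sassaman describes.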
Read the full article about algorithmic decision making at MacArthur Foundation.