Giving Compass' Take:
· Kristian Lum, the lead statistician at the Human Rights Data Analysis Group, is rigorously studying the machine learning algorithms used in the criminal justice system, including controversial predictive policing and sentencing programs.
· There have been many accusations that AI systems can be biased. Without a clear consensus on what fairness means, what can be done to ensure that AI is programmed to be unbiased and inclusive?
· Learn more about artificial intelligence and three potential impacts it may have in the future.
As the lead statistician at the nonprofit Human Rights Data Analysis Group, Kristian Lum, 33, is trying to make sure the algorithms increasingly controlling our lives are as fair as possible. She’s especially focused on the controversial use of predictive policing and sentencing programs in the criminal justice system. When it comes to bias, Lum isn’t concerned only with algorithms. In a widely read December blog post, she described harassment she’d experienced at academic conferences when she was a doctoral student at Duke University and an assistant research professor at Virginia Tech. No longer in academia, she uses statistics to examine pressing human-rights problems.
What’s the relationship between statistics and AI and machine learning?
AI seems to be a sort of catchall for predictive modeling and computer modeling. There was this great tweet that said something like, “It’s AI when you’re trying to raise money, ML when you’re trying to hire developers, and statistics when you’re actually doing it.” I thought that was pretty accurate.
The move toward using AI, or quantitative methods, in criminal justice is at least in part a response to a growing acknowledgment that there’s racial bias in policing. A selling point for a lot of people is that they think a computer can’t be racist, that an algorithm can’t be racist. I wanted to challenge the idea that just because it’s a computer making the predictions, that it would solve those problems.
Is that a tough sell, the idea that a computer can be biased?
I feel like I can’t open Twitter without seeing another article about the racist AI. What’s hard about this is there isn’t universal agreement about what fairness means. People disagree about what fairness looks like. That’s true in general, and also true when you try to write down a mathematical equation and say, “This is the definition of fairness.”
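Lum's point that competing mathematical definitions of fairness can disagree is easy to see concretely. The sketch below (with hypothetical toy data, not from the article) implements two common formalizations: demographic parity (equal positive-prediction rates across groups) and equal opportunity (equal true-positive rates across groups), and shows a set of predictions that satisfies one while violating the other.

```python
# A minimal sketch, using made-up data, of two common mathematical
# definitions of fairness, showing they need not agree.
# "group" marks a protected attribute; y_true are actual outcomes;
# y_pred are a classifier's decisions.

def rate(values):
    return sum(values) / len(values)

def demographic_parity_gap(y_pred, group):
    # Difference in positive-prediction rates between the two groups.
    a = [p for p, g in zip(y_pred, group) if g == 0]
    b = [p for p, g in zip(y_pred, group) if g == 1]
    return abs(rate(a) - rate(b))

def equal_opportunity_gap(y_true, y_pred, group):
    # Difference in true-positive rates between the two groups
    # (computed only over individuals whose actual outcome is positive).
    a = [p for t, p, g in zip(y_true, y_pred, group) if g == 0 and t == 1]
    b = [p for t, p, g in zip(y_true, y_pred, group) if g == 1 and t == 1]
    return abs(rate(a) - rate(b))

# Toy example: both groups receive positive predictions at the same rate...
group  = [0, 0, 0, 0, 1, 1, 1, 1]
y_true = [1, 1, 0, 0, 1, 0, 0, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 0]

print(demographic_parity_gap(y_pred, group))         # 0.0 -- parity holds
print(equal_opportunity_gap(y_true, y_pred, group))  # 0.5 -- opportunity gap remains
```

Whichever definition you write down as "the" equation for fairness, the other can still be violated, which is exactly the disagreement Lum describes.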
Read the full article about the algorithms behind artificial intelligence by Ellen Huet at Bloomberg.com.