Giving Compass' Take:
• Caitlin Chin and Bhaargavi Ashok discuss recent ethical questions around fairness and equity when artificial intelligence is used in the legal system.
• What measures can be taken to recognize biases in the data fed into AI systems, and how can the legal system account for these errors?
• Learn about the ethics of AI everywhere.
In most cases, AI provides positive utility for consumers, such as when machines automatically detect credit card fraud or help doctors assess health care risks. In a smaller share of cases, however, such as when AI helps inform decisions on credit limits or mortgage lending, the technology has a higher potential to amplify historical biases.
Policing is another area where artificial intelligence has sparked heightened debate, especially when facial recognition technologies are employed. There are two major points of contention: the accuracy of these technologies and their potential for misuse. First, facial recognition algorithms can reflect biased input data, which means their accuracy rates may vary across racial and demographic groups. Second, individuals can use facial recognition products in ways other than their intended use, so even products that receive high accuracy ratings in lab testing could, when misapplied in real-life police work, wrongly incriminate members of historically marginalized groups.
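The first point of contention, accuracy rates varying across groups, is a measurable property. A minimal sketch of how such a disparity could be surfaced (all data and group labels below are hypothetical, not from the article):

```python
# Illustrative sketch: measuring how a hypothetical face-matching
# classifier's accuracy varies across demographic groups.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted, actual) tuples.
    Returns per-group accuracy, making any disparity visible."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical evaluation results: a decent overall accuracy (75%)
# can hide very different error rates per group.
records = [
    ("group_a", True, True), ("group_a", True, True),
    ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", False, True),
    ("group_b", True, True), ("group_b", False, False),
]
print(accuracy_by_group(records))  # group_a: 1.0, group_b: 0.5
```

This is why lab benchmarks reported as a single aggregate number can be misleading: the same dataset yields 75% accuracy overall but perfect accuracy for one group and coin-flip accuracy for the other.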
When considering algorithmic bias, an important legal question is whether an AI product causes a disproportional disadvantage, or “disparate impact,” on protected groups of individuals. However, plaintiffs face significant challenges in bringing anti-discrimination lawsuits in AI cases: disparate impact is difficult both to detect and to prove. Plaintiffs often bear the burden of gathering evidence of discrimination, a demanding task for an individual because demonstrating disparate impact typically requires aggregate data from a large pool of people.
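One common statistical screen for disparate impact is the “four-fifths rule” from U.S. employment-selection guidelines: if a protected group’s selection rate falls below 80% of the most-favored group’s rate, the outcome is often treated as evidence of disparate impact. A minimal sketch (the approval numbers are hypothetical, and this rule is one heuristic, not the article’s stated method):

```python
# Illustrative sketch of the "four-fifths rule" screen for disparate
# impact. All approval counts below are hypothetical.

def selection_rate(selected, total):
    """Fraction of applicants who received the favorable outcome."""
    return selected / total

def disparate_impact_ratio(protected_rate, reference_rate):
    """Ratio of the protected group's selection rate to the reference
    group's. Under the four-fifths rule, values below 0.8 are commonly
    treated as evidence of disparate impact."""
    return protected_rate / reference_rate

# Hypothetical loan-approval outcomes from an AI system:
protected = selection_rate(selected=30, total=100)   # 0.30 approval rate
reference = selection_rate(selected=60, total=100)   # 0.60 approval rate

ratio = disparate_impact_ratio(protected, reference)
print(ratio)        # 0.5
print(ratio < 0.8)  # True -> flags potential disparate impact
```

Note what the computation requires: outcome counts across many applicants in each group. This is exactly the aggregate data that an individual plaintiff rarely has access to, which illustrates the evidentiary burden described above.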
Algorithmic bias is a multi-layered problem that requires a multi-layered solution, which may include accountability mechanisms, industry self-regulation, civil rights litigation, or new legislation.
Read the full article about addressing fairness in the context of artificial intelligence by Caitlin Chin and Bhaargavi Ashok at Brookings.