Giving Compass' Take:
- Evan Tachovsky and Terrell Seabrooks explain how a new interdisciplinary AI research institute is working to combat AI bias and serve the social good.
- What role can you play in advancing AI for social good? Can a collaborative effort like this one advance your work?
- Read about how to advance gender equity in AI.
Artificial intelligence (AI) and machine learning are used in myriad ways across the public and private sectors. These technologies can serve as tools to solve a wide range of societal problems, such as preventing homelessness, improving agricultural capacity, or combating pathogens. However, a critical challenge facing these tools is AI bias, an issue that can lead to discriminatory outcomes and deepen disparities for poor and vulnerable communities.
Two of the most important forms of bias in AI are data bias and societal bias. Data bias occurs when an algorithm learns from data that is itself biased. Societal bias occurs when societal norms create blind spots in our thinking. Data scientists review the variables and assumptions in their datasets as part of their work, but societal biases are easily overlooked because of social norms, gender norms, or culture. These erroneous assumptions can lead to discriminatory outcomes and exacerbate disparities.
Current advances in the quantitative methods that underlie AI technology generally translate into more positive impacts for wealthy communities and disproportionately negative impacts on poor and vulnerable communities. Wealthier communities benefit because they are better represented in the data: they are more likely to appear in common data repositories, such as credit card lists and health insurance records. Because AI tools are designed to offer solutions for the populations represented in the data, poor and vulnerable communities are often excluded from the benefits of these tools.
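To make the representation problem concrete, here is a minimal, hypothetical sketch of how an analyst might check whether a model performs worse for groups that are underrepresented in its training data. The dataset, file name, column names, and model are illustrative assumptions, not part of the original article:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical dataset: 'income_bracket' marks community wealth,
# 'repaid' is the outcome the model tries to predict.
df = pd.read_csv("loan_outcomes.csv")  # illustrative file name

# 1. Check representation: groups that rarely appear in the data
#    give the model less signal to learn from.
print(df["income_bracket"].value_counts(normalize=True))

# 2. Train a simple model on all records.
X = df.drop(columns=["repaid", "income_bracket"])
y = df["repaid"]
X_train, X_test, y_train, y_test, grp_train, grp_test = train_test_split(
    X, y, df["income_bracket"], test_size=0.3, random_state=0
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# 3. Compare accuracy per group: a gap between well-represented and
#    underrepresented groups is one concrete symptom of data bias.
preds = model.predict(X_test)
for group in grp_test.unique():
    mask = (grp_test == group).to_numpy()
    print(group, accuracy_score(y_test[mask], preds[mask]))
```

A disparity surfaced by a check like this does not explain why the gap exists, but it is the kind of evidence that prompts teams to reexamine how their data was collected and who it leaves out.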
Biased data in AI tools and machine learning is an issue many organizations are confronting. To address the problem, research teams set out to reevaluate their data and assess it for the influence of bias. But a common concern is the diversity gap on many AI research teams. Another pitfall arises when an outside influence, such as a large technology company, pushes team members toward groupthink or a top-down mentality.
The Distributed AI Research Institute (DAIR) seeks to mitigate this diversity gap through its core objectives:
- foster research that analyzes its end goal and potential risks and harms from the start;
- ensure its research pool includes members from many different backgrounds who can participate while embedded in their communities;
- communicate the impact of members’ work to affected communities in straightforward, practical terms, rather than exclusively through academic research papers.
Read the full article about AI bias by Evan Tachovsky and Terrell Seabrooks at The Rockefeller Foundation.