Giving Compass' Take:
- Genevieve Smith and Ishita Rustagi highlight the importance of gender equity in AI and explain how social change leaders and machine learning developers can combat bias in AI systems.
- What role can you play in advancing gender equity in technology?
- Learn why biased AI is bad for health.
In 2019, Genevieve (co-author of this article) and her husband applied for the same credit card. Although she had a slightly better credit score and the same income, expenses, and debt, the credit card company set her credit limit at almost half her husband's. This experience echoes one that made headlines later that year: a husband and wife compared their Apple Card spending limits and found that the husband's credit line was 20 times greater. Customer service employees were unable to explain why the algorithm deemed the wife significantly less creditworthy.
Many institutions make decisions based on artificial intelligence (AI) systems using machine learning (ML), whereby a series of algorithms takes and learns from massive amounts of data to find patterns and make predictions. These systems inform how much credit financial institutions offer different customers, who the health care system prioritizes for COVID-19 vaccines, and which candidates companies call in for job interviews. Yet gender bias in these systems is pervasive and has profound impacts on women’s short- and long-term psychological, economic, and health security. It can also reinforce and amplify existing harmful gender stereotypes and prejudices.
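To make that mechanism concrete, here is a minimal sketch, in Python with entirely synthetic data and invented numbers (not the authors' analysis or any real lender's model), of how a model trained on historically biased approval decisions learns to penalize women even when financial profiles are statistically identical:

```python
# Minimal sketch: a model trained on biased historical labels reproduces the bias.
# All distributions, coefficients, and names are hypothetical, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants: income and credit score distributions identical by gender.
gender = rng.integers(0, 2, n)             # 0 = man, 1 = woman
income = rng.normal(60_000, 15_000, n)
score = rng.normal(700, 50, n)

# Hypothetical biased history: at the same income and score,
# women were approved less often (the -0.8 * gender term).
logit = 0.002 * (income - 60_000) / 100 + 0.02 * (score - 700) - 0.8 * gender
approved = rng.random(n) < 1 / (1 + np.exp(-logit))

# Standardize continuous features, then fit on the biased labels.
X = np.column_stack([
    (income - income.mean()) / income.std(),
    (score - score.mean()) / score.std(),
    gender,
])
model = LogisticRegression().fit(X, approved)

# The learned gender coefficient comes out negative: the model has
# absorbed the historical bias and penalizes women going forward.
print("learned gender coefficient:", model.coef_[0][2])
```

Note that simply dropping the gender column would not fix this: if other features correlate with gender, the model can learn the same penalty through those proxies.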
As we conclude Women's History Month, social change leaders—including researchers and professionals with gender expertise—and ML systems developers alike need to ask: How can we build gender-smart AI to advance gender equity, rather than embed and scale gender bias?
Prioritizing gender equity and justice as a primary goal for ML systems shapes downstream design and management decisions. We must acknowledge that ML systems are not objective. Even ML systems designed for good (for example, a system built to make creditworthiness assessments or hiring more equitable) can be prone to bias-related issues, just like their human creators. Both social change leaders and leaders at organizations developing ML systems have a role to play in making ML gender-smart and advancing gender equity.
Social change leaders can:
- Use feminist data practices to help fill data gaps.
- Lend your expertise to the field of gender-equitable AI, advocate for AI literacy training, and join the conversation.
- Think critically when considering or using AI systems to tackle gender gaps.
ML developers can:
- Embed and advance gender diversity, equity, and inclusion among teams developing and managing AI systems.
- Recognize that data and algorithms are not neutral, and then do something about it (see the sketch after this list).
- Center the voices of marginalized community members, including women and non-binary individuals, in the development of AI systems.
- Establish gender-sensitive governance approaches for responsible AI.
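The authors point toward governance practices rather than specific tooling, but as one illustration of the "do something about it" step above, here is a minimal sketch of a pre-deployment audit that measures the gap in approval rates between gender groups; the function name, example data, and review threshold are all hypothetical:

```python
# Minimal sketch of one concrete bias check: compare a model's positive-prediction
# (approval) rates across gender groups before deployment.
import numpy as np

def approval_rate_gap(predictions: np.ndarray, gender: np.ndarray) -> float:
    """Demographic-parity gap: difference in approval rates between
    men (gender == 0) and women (gender == 1). 0.0 means equal rates."""
    rate_men = predictions[gender == 0].mean()
    rate_women = predictions[gender == 1].mean()
    return float(rate_men - rate_women)

# Hypothetical model outputs on a held-out evaluation set.
preds = np.array([1, 1, 0, 1, 0, 0, 1, 0])    # 1 = approved
genders = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # 0 = man, 1 = woman

gap = approval_rate_gap(preds, genders)
print(f"approval-rate gap: {gap:.2f}")  # here: 0.75 - 0.25 = 0.50

# Hypothetical governance rule: gaps past a threshold trigger human review.
assert abs(gap) < 0.6, "flag model for review before deployment"
```

A single number like this is not a full fairness audit, but routinely computing and reviewing such metrics is one concrete way a governance process can operationalize the recognition that data and algorithms are not neutral.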
Read the full article about gender bias in AI by Genevieve Smith and Ishita Rustagi at Stanford Social Innovation Review.