Artificial intelligence is rapidly evolving and reshaping how people and organizations think and behave across many sectors and around the world. In the United States, companies like Netflix and Amazon have leveraged AI for years to tailor recommendations and provide virtual assistance to customers, while research institutions and AI labs like DeepMind are using it to accelerate medical research and fight climate change.

Nonprofit organizations, however, have been less involved in this moment of technological innovation. To some degree, this makes sense. The nonprofit sector faces widespread challenges that other sectors don't, including a lack of investment in research and development and a shortage of staff with AI expertise. But this also needs to change. AI's impact on society—how people work and live—will only increase over time, and the social sector can't afford not to engage with it. Indeed, nonprofits have an important role to play in AI's development. When designed and implemented with equity in mind, AI tools can help close the data gap, reduce bias, and make nonprofits more effective. Funders, nonprofit leaders, and AI experts need to move quickly and in alignment with one another to advance equitable AI in the social sector.

Using AI to Support Equity

Of course, while AI offers many exciting opportunities, its potential to cause serious harm is well-known. Developers train AI algorithms on data culled from across society, which means societal biases are baked in from the start. For example, financial service providers often use AI to make lending decisions, but the financial industry in the United States has a long history of systematic discrimination against women and communities of color, including redlining and inequitable appraisal and underwriting policies. Because lending algorithms are trained on historical data that reflect the intentional disadvantaging of certain zip codes, occupations, and other proxies associated with race or gender, they can perpetuate unfair lending practices and financial inequities if left unaddressed. Even well-intentioned nonprofits could easily design flawed AI applications with unintended and damaging consequences. An organization providing seed funding to social entrepreneurs, for instance, could train AI on biased financial data and end up working against its mission of advancing wealth equity by mistakenly favoring certain populations.

Read the full article about equitable AI by Kelly Fitzsimmons at Stanford Social Innovation Review.