Giving Compass' Take:
- Jared Chung examines why funders hesitate to support AI, even though most nonprofits want to implement AI tools.
- As a funder, how can you learn more about ethical, high-quality AI implementation and work around the challenges of evaluating nonprofits' projects involving AI?
- Learn more about trends and topics related to best practices in giving.
- Search Guide to Good for purpose-driven nonprofits in your area.
Over the last few years, I’ve had hundreds of conversations with nonprofit leaders and funders about artificial intelligence. One theme keeps coming up: most funders don’t know how to evaluate AI projects. This isn’t a controversial statement, but funders' hesitance around AI is one of the biggest blockers standing in the way of impact.
And that matters, because the demand for AI is overwhelming. The Center for Effective Philanthropy’s new report, “AI With Purpose,” finds that while nearly 90 percent of nonprofits are interested in expanding their use of AI, 90 percent of foundations say they are not providing any support for AI implementation. More surprising still, among the 10 percent of foundations that are funding AI projects, much of that support may be unintentional: it flows through general operating support rather than deliberate funding of an AI initiative. The result is an enormous supply-demand gap driven by funders' hesitance around AI.
Frankly, as a nonprofit with AI at the heart of our work, we knew there was a gap — but we didn’t realize it was this massive.
What is driving such a chasm? What we hear over and over is that funders are unsure how to evaluate AI projects. Indeed, one of the top reasons funders themselves noted in the CEP report for not providing support for AI was that they had “not thought about it/wouldn’t know where to start.”
When funders can't tell whether an AI project is safe, high-quality, effective, or feasible, that uncertainty has a chilling effect on philanthropic decision makers. The resulting paralysis risks sidelining the very organizations best positioned to harness AI for equity and impact: those who are asking for support and know what they need.
So how should funders evaluate AI projects? Here’s what we’ve learned on the front lines at CareerVillage.org, where we’ve been implementing AI systems for years to help job seekers navigate the changing labor market.
Challenge 1: Funders Are Nervous About AI Risks and Don’t Know How To Assess Them
Solution: AI risk management is tangible and process-based, not abstract.
The “AI With Purpose” report makes clear that nonprofit and foundation leaders share a common set of concerns about AI: data security, misinformation, staff expertise, and bias. These are real concerns. But the way to address them is not through a philosophical framework; it's through practical processes and people.
Read the full article about funders' hesitance around AI by Jared Chung at The Center for Effective Philanthropy.