Giving Compass' Take:
- Vu Le draws attention to various issues related to AI use that the nonprofit sector should be having more often and with greater depth.
- As a donor or funder, how can you start nuanced conversations about AI use? How can you help ensure responsible use of AI in the philanthropic sector?
- Search for a nonprofit focused on ethical AI use.
A couple of years ago, I published a post called “Hey funders, don’t freak out about AI-supported grant proposals,” where I admonished funders who punished nonprofits that used artificial intelligence technology to craft their proposals. If we must use AI, it’s precisely for pointless, time-wasting activities like writing grant proposals.
That being said, I don’t think we’re having the right conversations about AI. The ones we’ve been having have been alarmingly superficial. I’ve been to many conferences where AI was raised in plenaries or workshops, but only at the Community-Centric Fundraising family reunion last month did I see colleagues really dive into the ethics of using AI. Something my colleague Carlos García León said on the panel resonated with me, and I paraphrase it here:
“I was talking to a fundraiser who said they used AI and doubled their annual appeal revenues from $50K to $100K. Well, is your $50K in additional funds worth $300K worth of environmental and other forms of damage and trauma to communities?”
There has been a tremendous amount of defense and rationalization of the use of AI (and to be transparent, two years ago I did advise everyone to give AI a chance). Ethical concerns are often glossed over entirely by AI experts, many of whom don’t mention them in their presentations. When these concerns are brought up, they tend to be dismissed, or very little time is allocated to address them.
As a sector focused on creating a just and equitable world, we cannot ignore conversations like the one above in favor of a toxic and likely unfounded optimism about AI. It’s been a few years now, and we have more data and experience to go on. We must create time and space to thoughtfully discuss issues like:
How AI Use Harms Marginalized Communities
As Shay Stewart-Bouley (Black Girl in Maine) says in a blog post I recommend everyone read: “At present, the data centers required to run these technologies are more commonly found in Black, Brown and rural communities. In other words, the data centers are being placed in the communities of people that the folks in charge consider the most disposable. Communities where the most impacted are at risk for the greatest harm. The owners of these companies aren’t placing the data centers in their own neighborhoods, instead choosing marginalized communities to place these resource hogs, where it means greater risk of environmental harms (which, practically speaking, are higher risks of cancer and respiratory illness, on top of creating water supply issues).”
Read the full article about AI and the nonprofit sector by Vu Le at Nonprofit AF.