The Nonprofits Leading the AI Revolution


This article by Sara Herschander was previously published on May 29, 2024 in the Chronicle of Philanthropy. Reprinted with permission.


Aaron Estevez’s middle-school classroom might not look like the front lines of a technological revolution, but inside the Colorado school, the writing is on the wall — or rather, on the whiteboard. There, a visual tally of students caught using A.I. to write their essays hasn’t changed much since a plagiarism crackdown last year. But that doesn’t mean the technology has left the classroom.

With the help of a nonprofit called Quill.org, Estevez has been using A.I. — the same technology some of his middle schoolers have used to cheat on their homework — to teach students to write for themselves. Quill’s interactive writing exercises, which come with personalized feedback, have been an unexpected hit in the classroom. Estevez ran through 15,000 stickers rewarding each completed lesson in his first year using Quill. Parents would call, amazed that their kids were at home, chatting with friends on the phone as they plowed through lessons like a video game.

“We got about as close to making grammar cool as we probably ever will,” says Estevez, whose goal is to help students interact safely with a technology he sees as a growing part of their lives.

A new generation of A.I.-focused nonprofits is taking root in U.S. classrooms, courtrooms, and cultural institutions, often in close partnership with tech companies eager to put a positive spin on the controversial technology. Going far beyond a premium ChatGPT subscription, these nonprofits have made A.I. central to their missions, leveraging the technology to, for example, help asylum seekers navigate legal cases, provide mental-health support to teens, and combat deforestation.

The A.I. revolution isn’t coming for nonprofits, these tech-savvy organizations argue: It’s already here. If their A.I.-driven approach is an outlier now, it may soon become the norm for nonprofits, many of which have begun to experiment with A.I. for simple tasks. As that technological shift takes root, these early adopters’ lessons, mistakes, and breakthroughs will reverberate in the years to come.

“I do see this as an inflection point,” says Maggie Johnson, global head of Google.org, the tech giant’s grant-making arm, which has given more than $200 million to over 150 A.I. grantees and projects since 2018. “There have been just a few times like this that I’ve experienced where technology is truly enabling possibilities that weren’t feasible before.”

Johnson likens the current phenomenon to the internet’s launch in the 1990s, when tech-savvy nonprofits scrambled to get online, leveraging the web to build new learning hubs, virtual art galleries, and eHarmony-like sites for matching charities with surplus food donations.

This time, learning to adapt is “even more critical because it’s moving faster than in the past,” says Johnson. In 1996, just over one-fifth of foundations had a website, and only half used email to communicate internally. Today, nearly half of foundations use some kind of A.I., a mere year and a half since ChatGPT captured the public imagination.

“If we’re going to get this technology to really work for everyone,” then nonprofits ought to start experimenting with A.I. now, says Johnson. That’s the only way that “we’re going to get a really good sense of what works and what doesn’t work,” she says.

Tech Company Tutelage

A.I. is evolving too rapidly for most nonprofits to keep pace. More than three-fifths of nonprofits surveyed by Google.org earlier this year cited a lack of familiarity with generative A.I. as a barrier to adoption.

But whether organizations are ready or not, A.I. is becoming part of work life. “Within three years, most nonprofits will be using A.I. because it will be in their tech stack” of tools like Microsoft Office, Salesforce, and Slack, says Sarah Di Troia, senior strategic adviser at the nonprofit Project Evident, where she consults with nonprofits and foundations on their A.I. use.

While A.I. refers to a broad range of simpler tools — such as Google Translate, chatbots like Siri and Alexa, or Netflix’s recommendation algorithm — generative A.I. is a type of advanced artificial intelligence capable of creating new content. Only a few big tech companies have been able to develop generative A.I. products — like ChatGPT or Google’s Gemini — because of the technology’s complexity and cost.

“Generative A.I. is just really wicked expensive still,” says Di Troia, making it difficult for most nonprofits to customize alone. “When you see implementations of gen A.I., it’s because people have a relationship with Microsoft” or other large tech companies, many of which have announced A.I. training and support programs for nonprofits in the past year.

One of those nonprofits is Quill, whose longstanding relationship with Google.org has helped push the boundaries of what’s technologically possible for the organization.

Cofounder Peter Gault was fresh out of college when he and a group of friends built Quill in 2012. Inspired by his own experience building educational video games as a kid, he conceived of Quill.org as a gamified grammar platform, a kind of Oregon Trail game for writing skills. A.I. didn’t come into the picture until 2018, when Quill participated in Google.org’s A.I. for Social Good Impact Challenge.

Along with a $1.3 million grant, Google provided a team of engineers who helped Quill go from building early prototypes to launching A.I.-powered writing and assessment tools, which have since been used by millions of students in over 35,000 schools, according to Quill. While the organization charges schools to access its most in-depth student diagnostic reports, all of Quill’s hundreds of activities and diagnostics are available online free. The nonprofit has also received nearly $2 million in grants from the Bill & Melinda Gates Foundation, $2.2 million from the Overdeck Family Foundation, and $750,000 from the Patrick J. McGovern Foundation.

Though Quill has been using A.I. for years, a lot has changed since the group participated in Google.org’s first A.I. impact challenge. Earlier this year, Google.org selected Quill as one of 21 nonprofits to participate in a $20 million generative A.I. accelerator, which will provide grants, mentorship, and A.I. coaching.

“Using A.I. today is far more accessible than it was before,” says Gault. “It’s still not very easy, but 10 years from now, it will be much, much easier.”

While privacy concerns swirl around generative A.I., Gault says that protections put in place by Google will prevent the company from using student data to train its models. The same might not be said for all educational software companies, some of which may have weaker privacy protections, he says.

At a recent EdTech — or educational technology — conference, Gault noticed an explosion in companies and nonprofits hawking their A.I. software. “You’re hearing a lot of big promises” from education giants like McGraw Hill, he says, and smaller businesses like QuillBot, a start-up — often confused with Quill.org and vice versa — that uses A.I. to paraphrase text and, says Gault, could easily be misused to help kids cheat on their homework.

“They’re pretty much our mortal enemies because they come at the expense of student learning,” he says. “There are some real tensions in the space right now.”

Chatbot as Immigration Counsel

Step inside any of the country’s dozens of immigration courts and chances are someone will be using A.I. to decipher a legal document, talk to a volunteer, or argue their case.

With only one qualified nonprofit attorney for every 1,400 undocumented immigrants in need of legal aid, most noncitizens fend for themselves in court, often in a nonnative language. That means A.I.-powered translation tools like Google Translate have quickly become a lifeline and standard practice for non-English speakers, government officials, and legal-service nonprofits.

Representatives from Justicia Lab and other immigration-rights stakeholders worked with Google.org to brainstorm ways to use generative A.I. in their work. (Photo: Justicia Lab)

“While we might not know it, and while many immigrants in this country might not know it, they’re already using A.I.,” says Rodrigo Camarena, director of Justicia Lab, a nonprofit that builds tech tools for the immigrant-rights movement.

ChatGPT is no immigration lawyer, but that won’t stop people desperate for advice from asking it their questions. A.I. tools like Google Translate are susceptible to inaccuracies egregious enough to imperil an asylum case. Yet without alternatives, people in need of medical advice, legal help, or talk therapy are turning to off-the-shelf A.I. tools for assistance, even if those tools are not designed with that purpose and could endanger users’ safety and privacy.

This is where Justicia Lab has found a niche. For over a decade, the group’s free Citizenshipworks tool — a kind of TurboTax for applying for citizenship — has reportedly helped tens of thousands of people become U.S. citizens, including both of Camarena’s parents, who are originally from Mexico. Instead of Googling or asking ChatGPT about pathways to legal status or recovering stolen wages, immigrants can use Justicia Lab’s online tools, which are vetted by immigration-law experts.

“It should be as easy for you to uphold your rights as it is to order an Uber,” Camarena says.

That is Camarena’s goal. In April, Justicia Lab launched an A.I. innovation lab, conceived as part of its participation in Google.org’s generative A.I. accelerator for nonprofits, the same program that Quill.org is participating in.

“We’ve seen millions if not billions invested in the private legal-tech sector and a very small amount in the public-interest tech sector,” says Camarena.

As part of the accelerator, Justicia Lab will work with pro bono engineers from Google.org to integrate the company’s generative A.I. technology into, say, a chatbot versed in immigration law or its own in-house translation tools. The details are still being workshopped and will likely take some time to account for accuracy and privacy protections.

“The reason why you haven’t seen us launch 10 different chatbots to help people resolve their immigrant legal questions is because we all know that the risks are very high,” says Camarena, who notes that by using a private server accessible only to a legal-aid organization or nonprofit like his, it’s possible to “completely safeguard or at least limit the misuse of personally identifiable information.”

Most technologies require some privacy trade-offs, but with proper guardrails, he says, in this case, the rewards may outweigh the risks.

“It’s time we brought the most innovative resources to the people who are least resourced,” he says.

Bridging the A.I. Divide

If technologists speak one language — of scalability, machine learning, and DevOps — then philanthropists speak another, with phrases like capacity building, theory of change, and stakeholder engagement.

Translating that jargon may one day be the job of an A.I. chatbot. But for now, groups like Fast Forward, a nonprofit start-up accelerator, have taken on the task. Co-founded in San Francisco in 2014 by a tech entrepreneur and a veteran philanthropy professional, Fast Forward nurtures burgeoning tech nonprofits — Quill.org participated in one of the first cohorts in 2015 — and helps them find funding.

Most important, says co-founder Shannon Farley, the group acts as a translator between representatives of big tech and nonprofits and traditional philanthropies, many of whom remain deeply skeptical of A.I. and its leading companies, which have been subject to a barrage of copyright lawsuits in recent months.

Jim Collins, a prominent researcher at M.I.T., co-founded Phare Bio, a nonprofit lab that used A.I. to discover a novel class of antibiotics for the first time in decades. (Photo: M. Scott Brauer)

“Nonprofits take their cues from their funders about what they’re allowed to do,” says Farley, and if an important grant maker doesn’t see A.I. as a worthy investment, their grantees might miss out on the latest tools. Some of that skepticism may be healthy, given the hype around the technology, which is sometimes oversold as a catch-all solution for every problem, despite looming questions over A.I. ethics, privacy, and business models.

“We’re already seeing a lot of snake oil entering the market,” which can make it difficult to differentiate between “hype and reality,” says Carrie Bishop, generative A.I. lead for U.S. Digital Response, a nonprofit that works with governments on pro bono tech projects.

Despite its own tech smarts — U.S. Digital Response is yet another participant in Google.org’s accelerator — the nonprofit describes itself as “tech-neutral” and cautions local governments against embracing A.I. too quickly and without guardrails. Likewise, many traditional philanthropists fear funding an A.I. project that they may see as part of a flashy new fad.

So what’s a budding tech nonprofit to do in meetings with risk-averse donors?

“Start slowly,” advises Felecia Webb, chief strategy officer for the Partnership on A.I., a nonprofit collaborative that hosts a steering committee on A.I. and philanthropy. “Everyone is in their own silos” so it’s important to “come to a common language together,” says Webb, who’s provided glossaries to help traditional grant makers understand technical language.

Some nonprofits have found success arguing their case to donors by explaining where A.I. can be uniquely useful — and not just a bright shiny object.

In 2019, the Audacious Project, a coalition of philanthropists organized by TED, invited Jim Collins, a prominent researcher at M.I.T., to present his groundbreaking A.I. work to discover new antibiotics. Impressed by the potential impact, they decided to back his efforts with nearly $25 million in funding, which he used to start a nonprofit called Phare Bio, dedicated to developing antibiotics against deadly superbugs for which few treatment options exist.

Last year, Collins and his team achieved a major milestone: Using A.I., they discovered a novel class of antibiotics for the first time in decades.

“We’re not tackling these crises in the same way that’s been done before,” says Akhila Kosaraju, CEO of Phare Bio. The nonprofit lab can test 20 million compounds a day using A.I., compared with just 20,000 using traditional laboratory methods. Plus, the philanthropy-funded model allows it to focus on the research that’s needed most, regardless of financial returns.

Donors can see that “A.I. is uniquely useful for antibiotics, and it’s been validated scientifically,” she says. “It’s not just an idea but a reality. We’re at an inflection point where we need to scale a technology that’s already producing results.”

A.I.-Powered Ads

A.I. is beginning to shake up longstanding nonprofit norms.

Take foster care. Most advertising on the internet relies on A.I. to get the right product — or social cause — in front of the right person. A plane ticket booked online might lead to an onslaught of luggage ads. And in a growing number of states, a visit to a cooking blog or charity website, indicating domesticity and financial security, could lead to a flurry of ads asking: Do you want to be a foster parent?

The targeted ads are part of an effort by a Portland-based nonprofit called the Contingent, which took over statewide foster-parent recruitment from Oregon’s embattled child-welfare agency after it faced a class-action lawsuit from foster youths in 2019. Amid a shortage of foster parents, children were being shuttled between hotel rooms, converted juvenile jails, and other far-flung institutions.

Recruiting foster parents has become more difficult in recent years, with the pandemic and economic uncertainty contributing to nationwide shortages. Faced with a daunting task, Contingent CEO Ben Sand saw that billboards and booths at community events weren’t doing the trick and turned to an unlikely source of inspiration: the A.I.-powered advertising strategies used by political campaigns and beauty brands.

The Contingent, a Portland, Ore., nonprofit, uses A.I. to target ads recruiting foster parents, like Anthony Dixon, shown here. (Photo: The Contingent)

“We make decisions differently now,” says Sand. “What if we hacked those tools for foster care?”

A.I. as a possible answer didn’t come cheap. Partnering with Microsoft, the Contingent invested $150,000 in A.I. — about one-tenth of its state contract — on a suite of tools to build targeted advertising campaigns. “You’re staring that decision down, going, ‘I can hire two people or I can make an investment in technology,’” says Sand.

Working with Microsoft, the nonprofit created a model of a likely foster parent, then used A.I. to identify potential candidates based on their online behavior and demographics. The strategy appears to be working: In just two years, the Contingent has expanded its approach to four states, with plans to reach nine by 2026. In Arkansas this April, the nonprofit’s A.I.-powered approach attracted five times as many potential foster parents as traditional nonprofits did.

That doesn’t mean that everybody’s happy about it. Harvesting data to recruit foster parents — at a cost of around $100,000 a year — has sparked some concerns over privacy violations and bias, yet Sand argues that people’s data is out there regardless. If nonprofits want to compete for people’s attention, sometimes they have to play the same game as other advertisers, he says.

“How do you balance the barrage of ads that people get selling them weight-loss products with messages asking people to do hard things for each other?” he asks.

A Nonprofit ChatGPT?

Yet A.I. alone will not solve the nation’s shortage of foster parents; nor will it cure every medical malady or erase educational inequity.

But some experts maintain that A.I. can be a powerful tool for nonprofits — especially if the innovation directly benefits society, says Champika Fernando, head of the Mozilla Foundation’s Data Futures Lab, which offers technology grants and resources. In the 2000s, Mozilla’s Firefox web browser emerged as a nonprofit alternative to commercial browsers like Internet Explorer; the foundation is currently exploring ways to build a nonprofit, open-source large language model, the kind of data engine behind ChatGPT.

“A.I. is increasingly a technology that we all depend on in every aspect of society, so there need to be options outside of just the three or four private-sector players,” says Fernando, who cautioned that many off-the-shelf tools nonprofits use could replicate the biases or possible privacy violations embedded in many of the most powerful A.I. models.

Artificial intelligence powers the writing exercises and personalized feedback provided by Quill.org, a tech nonprofit cofounded by Peter Gault (left) to help students strengthen their writing skills. (Photo: Quill.org)

“In order to have alternatives, there needs to be an investment outside of the private sector,” Fernando says.

That includes both philanthropy and nonprofits capable of guiding technology where it’s needed most.

“Most people don’t understand how innovative nonprofits can really be,” says Devi Thomas, global head of nonprofit community capacity at Microsoft Philanthropies.

Back in Aaron Estevez’s classroom, that innovative spirit is apparent. As his students use Quill to hone their writing skills and train their own chatbots, they’re not just learning grammar — they’re preparing for a world where A.I. is woven into the fabric of daily life, for better or worse.

“If we don’t like what students are doing, we don’t punish,” says Estevez. “We teach.”

A version of this article appeared in the May 28, 2024, issue.
