Giving Compass' Take:
- Kelly Born shares 10 ways for philanthropy to help maximize the benefits and minimize the risks associated with generative AI.
- What role can you play in supporting one or more of these approaches?
- Read about who pays for the social consequences of AI.
The future is now. At this uncertain moment, as the potential use cases of generative AI come into focus, there are at least 10 things funders can do to help the existing field of tech-related nonprofits—and society at large—better prepare.
Most obviously, funders working in specific issue areas—climate, health, education, or in my case, democracy—can support downstream efforts to prepare government and civil society in their respective sectors to take advantage of the opportunities, and mitigate the risks, that AI poses to their specific areas of concern. This might include:
1. Understanding, and developing guidelines and guardrails for, government use of AI. The discriminatory effects of predictive AI in prison sentencing decisions are now well understood, and judges and lawyers are already using generative AI to write opinions. Yet surprisingly little is known about how government is using AI beyond the justice system, much less what the guardrails are, or should be.
2. Building government (and civil society) capacity to use AI. Even with the right knowledge and guardrails in place, government leaders will still need to develop the capacity to meaningfully employ these technologies—especially at the state and local level.
3. Transparency and data access. Most essential, governments and civil society must have visibility into how AI tools are being used. This includes the degree of bias, explainability, and interpretability of inputs and outputs; the degree to which those outputs are “aligned” with, and accountable to, user (and societal) interests; the frequency of their “hallucinations”; and more. Data access will be a necessary, but not sufficient, condition for any efforts aimed at understanding impacts, holding companies accountable, and providing redress for individuals or communities harmed.
4. Advocacy for research funding. Looking back at the disinformation field, philanthropy has invested over $100M to build research centers devoted to understanding harms, and (to a lesser degree) potential solutions.
5. Formal collaborative institutions. There have been many recent calls for some form of multi-stakeholder table: a Christchurch Call on Algorithmic Outcomes modeled on the original Christchurch Call, or a table equivalent to the Global Internet Forum to Counter Terrorism (GIFCT), which combats terrorist content online.
6. Informing voluntary industry best practices and codes of conduct.
7. Advocating for new models for AI in the public interest. The AI field is currently dominated by private companies with profit incentives. Different financial models warrant consideration.
8. Building government and civil society capacity to govern AI. Governments around the world have struggled to keep up with today’s pace of technological change, often failing to appreciate and mitigate associated externalities until significant harm has been done.
9. Developing new legal theory. There is significant work to be done in adapting existing legal theory to address the societal harms posed by modern technologies.
10. Informing narrative change. The most upstream problem of all is the question of how we, as a society, view the role of technology in our lives. How do we tell the story of generative AI?
Read the full article about preparing for generative AI by Kelly Born at Stanford Social Innovation Review.