Giving Compass' Take:
- Lina Srivastava explains how collective governance can limit AI's harms and harness its potential for good.
- What role can you play in supporting responsible and effective AI governance?
- Read about who pays the price for AI's consequences.
The aftermath of the OpenAI governance controversy revealed the extent to which power has been consolidated by AI tech giants, a situation with dangerous implications for critical aspects of society. The potential of AI tools to provide societal benefits is real: we have already seen chatbots used to manage humanitarian disaster responses, AI-enhanced data analysis deployed for climate mitigation and adaptation, and data integration and textual analysis used to address gender-based violence, among other examples. But putting unchecked development in the hands of (primarily) male tech executives who espouse a particular Silicon Valley ethos oriented toward profit and dominance above all else will only intensify threats to our social systems and vulnerable communities. It will erode information systems, produce algorithmic bias, introduce gender and racial discrimination, facilitate sexual abuse, increase labor exploitation, allow for the exploitation of creative works, and, through autonomous AI decision-making in war, create new risks of violence, death, and deprivation for civilians.
In short, relying on tech companies to govern their own AI development carves a path toward societal collapse by repeating mistakes made in past development of the web and social media. We need a new roadmap.
Establishing effective AI governance, then, is the challenge for civil society organizations and social innovators. This entails determining the frameworks and structures we need to build in order to organize and govern society effectively amid rapid technological change and unchecked power consolidation. To address this challenge, it is crucial to elevate the voices, perspectives, and solutions of the communities that directly experience AI's harms.
Community-Led Transformation
An important way to create community-led AI governance lies in supporting cooperative, collective, and collaborative structures. The first step will be building an enabling environment and establishing the conditions that can support an ecosystem of advocates, creatives, and practitioners who can steer the AI sector toward justice, equity, and shared prosperity. In a perfect world, AI would be treated as a public utility, and we would foster a collaborative and equitable approach to its development and deployment through open-source frameworks and transparent governance structures. Viewing AI as a communal resource would shift the focus from proprietary interests to the collective good, prioritizing accessibility and ensuring that AI's benefits are shared across diverse communities.
Read the full article about community governance for AI by Lina Srivastava at Stanford Social Innovation Review.