Giving Compass' Take:
• Vivienne Ming, a theoretical neuroscientist and cofounder of Socos Labs in Berkeley, California, discusses approaches to minimizing the risk of biased outcomes from AI.
• How can stakeholders across fields work together to maximize the benefits of AI and mitigate potential problems?
• Read about why AI needs to reflect society's diversity.
Vivienne Ming, a theoretical neuroscientist and cofounder of Socos Labs in Berkeley, California, defines artificial intelligence (AI) as “any autonomous and artificial system that can make a decision under uncertainty and make expert human judgements cheaper, faster, and increasingly, in some domains, better than a human can.”
Using that definition, AI powers many of today’s popular technological services: ride-sharing, email communication, facial recognition, mobile banking, music recommendations, and social media.
AI has already been widely applied across business, social, and government sectors. But if it’s not applied carefully, AI can lead to distorted results or decisions and potentially exclude historically marginalized or underrepresented populations.
On a recent episode of the Urban Institute’s podcast, Critical Value, Ming discusses three approaches to minimize the risk of AI supporting problematic or biased outcomes.
- Conduct regular audits. If AI is trained on biased data and learns from biased samples, the system can reproduce bias that originated from discriminatory human decisions and practices. Regular audits of training data and model outputs can surface these distortions before they shape real-world decisions.
- Involve strong regulatory institutions. According to Ming, AI can unlock new realms of scientific research and tackle challenging social issues. But AI must meet existing standards, and any new standards we establish, to reach its greatest potential. Strong regulatory institutions can design a framework to develop new technical standards, ethical guidelines, and public policies to maximize the benefits of AI.
- Empower and educate people. Bridging the AI education gap may help society deal with AI’s impacts as they come. People trained in a range of AI-related skills, including machine learning, programming, distributed computing, and data science and engineering, can provide insight on data analysis, management, and regulation.
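To make the "regular audits" idea concrete, one simple audit is a disparity check on a system's recorded decisions: compare the rate of favorable outcomes across groups and flag large gaps for investigation. The sketch below is illustrative only; the function, data, and group labels are hypothetical and not from the article.

```python
def demographic_parity_gap(decisions):
    """decisions: list of (group, outcome) pairs, outcome in {0, 1}.
    Returns the largest difference in favorable-outcome rates
    between any two groups (0.0 means perfectly equal rates)."""
    totals, positives = {}, {}
    for group, outcome in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical audit: group "a" received 3 favorable outcomes of 4,
# group "b" only 1 of 4 — a gap an auditor would want to investigate.
sample = [("a", 1), ("a", 1), ("a", 1), ("a", 0),
          ("b", 1), ("b", 0), ("b", 0), ("b", 0)]
gap = demographic_parity_gap(sample)  # 0.75 - 0.25 = 0.5
```

A gap like this does not by itself prove discrimination, but it is the kind of measurable signal that routine audits can track over time.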
AI can support clearer decisions and greater efficiency across society, but we need to prepare and equip institutions and people to maximize its benefits and ensure they are shared equitably.
Read the full article about understanding AI and its benefits by Jacinth Jones at Urban Institute.