Giving Compass' Take:
- Mary Burns, Rebecca Winthrop, Natasha Luther, Emma Venetis, and Rida Karim discuss how to ensure responsible, ethical AI implementation in children's education.
- What might AI-enriched learning look like for students in school systems across the nation? What are the potential harms to students of overreliance on AI in education?
- Search for a nonprofit focused on AI implementation in education.
Since the debut of ChatGPT and the public’s growing familiarity with generative artificial intelligence (AI), the education community has been debating its promises and perils. Rather than waiting a decade to conduct a postmortem on AI’s failures and opportunities, the Brookings Institution’s Center for Universal Education embarked on a yearlong global study—a premortem—to understand the risks that generative AI poses to students and what we can do now to prevent those risks while maximizing AI’s potential benefits through responsible implementation.
After interviews, focus groups, and consultations with over 500 students, teachers, parents, education leaders, and technologists across 50 countries, a close review of over 400 studies, and a Delphi panel, we find that at this point in its trajectory, the risks of using generative AI in children’s education overshadow its benefits. This is largely because the risks of AI differ in nature from its benefits: they undermine children’s foundational development and may prevent the benefits from being realized.
It’s Not Too Late to Bend the Arc Toward Responsible AI Implementation
We find that AI has the potential to benefit or hinder students depending on how it is used, which is why responsible implementation matters. We all have the agency, the capacity, and the imperative to help AI enrich, not diminish, students’ learning and development.
- AI-enriched learning. Well-designed AI tools and platforms can offer students a number of learning benefits if deployed as a part of an overall, pedagogically sound approach.
- AI-diminished learning. Overreliance on AI tools and platforms can put children and youth’s fundamental learning capacity at risk. These risks can impact students’ capacity to learn, their social and emotional well-being, their trusting relationships with teachers and peers, and their safety and privacy.
To this end, we offer three pillars for action: Prosper, Prepare, and Protect. Under each pillar, we present actionable recommendations for governments, technology companies, education system leaders, families, and all those who touch this issue. We urge all relevant actors to identify at least one recommendation to advance over the next three years.
Read the full article about AI implementation in education by Mary Burns, Rebecca Winthrop, Natasha Luther, Emma Venetis, and Rida Karim at Brookings.