Giving Compass' Take:
- Gastón Wright discusses the importance of supporting ethical AI use in schools, focusing on the role of philanthropy in shaping AI governance.
- How can the philanthropic sector position itself to effectively support ethical AI use in schools?
- Learn more about key issues in education and how you can help.
- Search our Guide to Good for nonprofits focused on education in your area.
Education has always been a top priority for philanthropy. If philanthropy wants to remain effective in this area, it needs to understand the intersection of AI and education, support ethical AI use, and help develop AI governance.
Across the globe, governments are racing to integrate artificial intelligence (AI) into public education. Singapore is deploying AI-powered adaptive learning platforms in maths classrooms. China has rolled out smart education infrastructure across tens of thousands of schools. The UK’s Department for Education has even issued guidance on using generative AI in lesson planning. While these state-led efforts signal a technological leap in pedagogy, one critical dimension is being left behind: the ethical, civic, and democratic foundations that should underpin AI use in schools. Strikingly, philanthropy—once a catalytic force in global education innovation—is largely absent from this new frontier. If funders fail to act now, we risk building smarter classrooms underpinned by dumber, less inclusive policies.
The Global Push: AI in the Classroom, Ready or Not
National education strategies are increasingly positioning AI not as an add-on, but as a structural pillar of pedagogy. In Singapore, the Ministry of Education has embedded adaptive learning systems (ALS) in secondary schools, using AI to tailor content to student performance. In China, the Ministry of Education’s Smart Education of China initiative integrates AI into real-time performance monitoring, biometric feedback, and classroom analytics nationally. Even in more rights-oriented systems, like the UK and Finland, AI is entering classrooms—albeit with more cautious framing.
The UK’s guidelines outline how generative AI tools such as ChatGPT can support teacher planning, assessment, and workload reduction. Finland’s AI guidelines for educators focus on promoting algorithmic literacy among teachers and students. Still, across all these examples, AI is being rapidly operationalised in public education systems—often with minimal debate about democratic accountability, civic outcomes, or long-term implications for learners’ autonomy. Such language is all but non-existent in public education documents.
What’s Missing: Ethics, Literacy, and Agency
The glaring gap across most of these national strategies is that AI is being introduced in schools without ensuring students and teachers understand what it is or how it works. Tools are being implemented, but the civic and ethical reasoning behind them is left unexplored. Students may learn with AI, but they are not being taught to question it. Teachers are asked to rely on platforms they can’t interrogate or evaluate critically.
Read the full article about philanthropy, AI, and education by Gastón Wright at Alliance Magazine.