When AI started upending the way people work, our team at Global School Leaders, a global nonprofit, was skeptical. While we want to do our work well and efficiently, we also care about doing it in a human-centered way, deeply valuing the unique wisdom and perspectives of our team, our partners, school leaders, and students. So, we held back a little bit. This piece explores how we began to lean into a human-centered model for AI adoption.

The Beginning of Our Thinking About Human-Centered AI Adoption

Global School Leaders focuses on strengthening school leadership across the Global South. Our team is all remote and global.

Our first conversations about AI focused on how we could support school leaders at scale more effectively and how school leaders could feel more empowered by AI tools.

At first, we thought we just needed to learn to use AI ourselves, and solutions for others would follow. With that motivation, we sought out AI experts to understand how our organization could deliberately and cautiously start using AI to address big, messy world problems.

The answer: Before we develop AI-driven solutions to help others, we have to learn to use it to help ourselves first.[1]

A Learning Model That Centers People

We considered the traditional path of “learning,” like bringing in a trainer to lead a session. But we quickly realized that AI was not just a tool to learn; it shifts how we think about and do our work more fundamentally.

We created a voluntary community of practice across all our teams (not just the likely players, technology and operations) to build collective knowledge and learning about AI.

This strategy took more time, but we believed it would change our team’s practices in deeper ways and ensure that AI adoption happened across the organization, not just in silos.

With that deeper understanding, we would then be able to see how AI could help solve important social problems.

Read the full article about human-centered AI adoption by Avni Gupta-Kagan at Blue Avocado.