Giving Compass' Take:
- Natalia Kucirkova discusses assessing ed tech effectiveness using the Science of Learning to determine which tools are most useful.
- How can donors and funders support additional research into the effectiveness of various ed tech tools?
- Learn more about key issues in education and how you can help.
- Search our Guide to Good for nonprofits focused on education in your area.
Educational technology such as apps and learning platforms is used by millions of children in classrooms and at home. Recent reports suggest that not all ed tech, including some of the most popular tools, is supporting learning. It’s crucial to evaluate whether these tools are truly effective. But how can we tell what works when assessing ed tech effectiveness?
A debate about how best to do this has long centered on two competing approaches: randomized controlled trials (RCTs) and co-design. Kirk Walters of WestEd and Katie Boody Adorno of LeanLab represent the opposing views. Walters argues that RCTs, which test tools in controlled environments, provide the most reliable and objective data on what works in the classroom. Boody Adorno champions co-design, a process that involves teachers and students in shaping technology to ensure it meets their needs. Each believes that relying on the other type of evidence leaves ed tech evaluations flawed.
Framing this as a choice between two methods misaligns with the principles of the Science of Learning, the field that studies how people learn and how teaching methods can be improved through research. It combines expertise from psychology, neuroscience and education to determine the most effective strategies for diverse students and resources. Because learning depends on many related influences — a student’s background, teaching methods, culture and classroom context — the Science of Learning uses a variety of methods to understand what works best.
When learning scientists gather evidence on whether and how a technology changes education, they select a method based on the goal of the evaluation. If the aim is to understand how a tool can be created to fit teachers’ needs, then co-design methods are the best fit. If the goal is to measure a tool’s impact on specific learning outcomes with statistical precision, then RCTs are more appropriate.
So why doesn’t everyone simply use both approaches? There are both philosophical and pragmatic reasons.
Read the full article about assessing ed tech effectiveness by Natalia Kucirkova at The 74.