A new CEP report titled Understanding & Sharing What Works highlights the challenges foundations face in understanding what does and doesn’t work programmatically, deciding what to do with that information, and sharing what they have learned. At the Edna McConnell Clark Foundation (EMCF), we’ve encountered some of the same challenges on our journey with evidence building and evaluation. In hopes that it will be helpful to others in the field, we would like to share some of what we’ve learned, how we’ve evolved our thinking, and where we are heading.
In 2011, EMCF partnered with the federal Social Innovation Fund, which was created to identify and scale effective programs, to select 12 grantees with strong outcomes and support each of them in undertaking a rigorous evaluation. Most of the grantees, which we carefully selected after in-depth due diligence, aimed to complete a randomized controlled trial (RCT), generally considered the most rigorous type of third-party evaluation.
Our goal in investing in these evaluations was to build the evidence base of what works for children and youth, and then apply this knowledge to help scale effective programs. We engaged experts, including our own evaluation advisory committee, to help us assess grantees’ readiness for evaluation, explore different evaluation designs, and develop evaluation plans. Although we knew the findings wouldn’t be as simple as “thumbs-up or thumbs-down” ratings of these programs’ effectiveness, we were optimistic about the potential for these programs to demonstrate consistently positive outcomes — and to strengthen their evidence bases.
Despite all the upfront analysis we undertook — including supporting implementation studies and testing the feasibility of achieving the required sample sizes — our experience taught us that even the most carefully considered evidence-building process can be much more challenging than anticipated. After a few years and several million dollars invested, our grantees’ evaluations yielded, with a couple of exceptions, mixed results. Instead of a deeper and relatively straightforward assessment of whether or not a program worked, we were left with head-scratchers. For example, some evaluations showed statistically significant positive results in one area but, surprisingly, not in others.
Read the full article on evaluation by Jehan Velji and Teresa Power at The Center for Effective Philanthropy.