Today, nonprofits and funders alike increasingly use equity-serving and participant-centered approaches in their program design, and it’s time to sharpen the equity lens on building evidence of a program’s impact. That shift calls for making program participants full partners in evaluation, treating them as experts in their own experience rather than as subjects of an experimental study.
To explore a more holistic approach, blending participatory findings with empirical data, Pace Center for Girls and MilwayPLUS social impact advisors conducted focus groups, interviews, and a survey with 15 organizations emphasizing participatory approaches in research and evaluation. We worked with three leaders in participatory measurement on this project — Fund for Shared Insight, a funder collaborative; Feedback Labs, which builds NGO peer-learning networks; and Project Evident, an advisor on evidence strategy.
Our findings showed participatory measurement can causally link to outcomes and positively influence capacity building and advocacy. When focus-group participants ranked a series of characteristics before and after their organizations implemented participatory methods, they reported that their work became more outcome-focused (27.5 percent higher on average), more inclusive (27.4 percent higher), and more data-driven (26.5 percent higher). More than 70 percent of the organizations we studied reported that participatory methods (most often surveys, focus groups, storytelling, and town halls) helped them define relevant outcomes.
Using Participatory Measures to Understand Outcomes and Impact
A core difference between experimental and participatory approaches is at the heart of the equity argument. Experimental approaches like randomized controlled trials (RCTs) follow a treatment group and a control group over time. In contrast, participatory tools, such as feedback, surveys, and focus groups, connect immediate learning with continuous improvement of programs and policies, treating participants as experts in their own experience.
For the last six years, Pace has focused on participatory measures in research and evaluation, uncovering new patterns in girls’ experiences by segmenting responses according to race, age, length of stay in the program, and other variables. In the process, it has identified causal links between feedback on instructor relationships and the girls’ educational outcomes. Pace now incorporates participants’ insights into program improvements immediately.
Expanding the use of participatory approaches requires a mental shift, one that organizations and funders can nurture by considering several key questions as they gather evidence of program impact:
- Who needs the proof?
- Who determines what impact matters most?
- How do you ask the right questions?
- What are the links to program outcomes?
- What are the wins for society?
Read the full article about building equitable evidence by Lymari Benitez, Yessica Cancel, Mary Marx, and Katie Smith Milway at The Center for Effective Philanthropy.