Part one in a three-part series.

In 2012, Pace Center for Girls, a multiservice nonprofit serving middle- and high-school-age girls with histories of trauma, faced a fork in the road. We could see that our program was working. Our individualized services, such as counseling and anger and stress management, were helping girls become strong advocates for their own personal and professional worth, but donors were demanding hard evidence of that success.

That year, we made the decision to launch a randomized controlled trial (RCT) to assess whether girls in the program fared better than girls who were not enrolled. The trial was at odds with our mission, however, because it meant withholding services from some girls (the control group) and referring them elsewhere.

Despite the downsides, we pursued the RCT. It revealed that Pace girls were nearly twice as likely to be on track to graduate from high school as girls not at Pace. But the methodology covered only standard measures of grades and attendance, so it could not demonstrate a causal link between those outcomes and Pace's signature individualized services, such as building self-efficacy and self-advocacy.

We determined there had to be a better approach. Since 2015, Pace has put into practice a research methodology that blends empirical studies with participatory methods. To help advance the field with the most effective tools, Pace worked with MilwayPLUS, Fund for Shared Insight, Feedback Labs, and Project Evident to better understand the role of participatory approaches in program design and measurement. The findings revealed that establishing feedback loops with program participants ultimately helps organizations and funders collect the evidence they need to determine whether their programs are truly addressing needs relevant to their communities.

Nonprofits and funders serious about building equity must ensure that the people and communities they serve are providing the feedback that shapes program direction. Our research found that this methodology not only revealed causal links to outcomes but also fostered a more inclusive, resilient culture within organizations.

When program participants are full partners in the evaluation, including designing the research and gathering and analyzing data, our study shows that the benefits go well beyond more effective evaluations. For instance, 60% of study respondents said the use of participatory approaches, such as surveys, focus groups, and town halls, led them to hire differently, prioritizing candidates who had experienced the issues their programs aimed to solve. Through feedback from our girls and team members, we discovered that strengthening girls' relationships with teachers and counselors would directly improve their behavior and performance. In response, Pace introduced training and support tools that enable teachers and counselors to provide better social and emotional support, resulting in stronger behavioral transformation.

Informed by our own participatory research, Pace created girls' leadership councils, as well as a statewide council with representation from each site. The statewide council helps design, execute, and interpret program research and evaluation; conducts focus groups with peers; assists in interviewing new hires; and contributes to program decisions. Deep engagement from the girls has motivated staff: Team-member turnover has declined by nearly two-thirds within five years, and productivity and engagement have increased by more than a quarter.

The nonprofit push to build evidence, capacity, and culture more equitably has become a growing movement within the sector. In the past six years, funders like the 10 foundations that collaborate as core investors in Shared Insight have given grants to help nonprofits develop tools for listening to their participants.

With the expansion of experimental and participatory approaches comes greater opportunity to mix methods. As funders, evaluators, and nonprofits proceed on this journey, they should ask to what extent they can answer yes to the following questions:

  1. Are we connecting with those most affected by our interventions so they can influence our approach to evaluating impact? 
  2. Are we disaggregating the data we gather by race, age, and any other relevant criteria to understand the disparate experiences of groups of participants within the overall results? (See the sketch after this list.) 
  3. With an understanding that race intersects all issues, are we identifying and testing questions through appropriate focus groups or panels of participants? 
  4. Are we calibrating outcomes to ensure they are equitable and not determined or predictable by other innate factors (e.g., gender, disability, or race)? 
  5. Are we using the data to inform programs and strategies that are themselves in the service of equity?
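To make the second question concrete, here is a minimal sketch of what disaggregating outcome data might look like, written in Python with pandas. The table, column names, and values are invented for illustration; they are not Pace's actual data or schema.

```python
import pandas as pd

# Hypothetical participant-level results; the columns and values are
# invented for illustration and are not Pace's actual data.
df = pd.DataFrame({
    "participant_id": [1, 2, 3, 4, 5, 6, 7, 8],
    "race": ["Black", "White", "Black", "Latina", "White", "Latina", "Black", "White"],
    "age_group": ["12-14", "15-17", "15-17", "12-14", "12-14", "15-17", "15-17", "12-14"],
    "on_track_to_graduate": [1, 1, 0, 1, 1, 0, 1, 1],
})

# A single overall rate can hide disparities between groups.
print("Overall on-track rate:", df["on_track_to_graduate"].mean())

# Disaggregate by each relevant criterion so that disparate experiences
# within the aggregate become visible.
for criterion in ["race", "age_group"]:
    rates = (
        df.groupby(criterion)["on_track_to_graduate"]
          .agg(["mean", "count"])
          .rename(columns={"mean": "on_track_rate", "count": "n"})
    )
    print(f"\nOn-track rate by {criterion}:")
    print(rates)
```

A large gap between subgroup rates in a breakdown like this is exactly the kind of signal the third and fourth questions ask evaluators to probe further, for example by testing whether demographic factors alone predict the outcome.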

Organizations such as Pace that are making intentional, ongoing progress toward answering all five questions affirmatively are well on their way to building more equitable and impactful programs for the communities they serve. It is our hope that seeing this groundbreaking research implemented in organizations like ours will inspire funders to support this movement and shift power back to the communities we serve.