In the social sector, program evaluations are often treated as high-stakes endeavors designed to confirm the value of specific programmatic work. And yet the findings often feel irrelevant or unactionable to the very people who do that work. Results may not align with what the organization intuitively knows about itself, its culture, its beneficiaries, and its history.

Key lessons:

  1. There are no artificial boundaries between questions: Too often, the true motivation for an evaluation goes unidentified, and the opportunity it presents is missed. If the focus is strictly programmatic, issues of governance, leadership, or political realities are unlikely to surface early on.
  2. There is greater flexibility throughout the evaluation: When you look at a nonprofit organization and its programmatic impact simultaneously, it’s easier to make on-the-ground adaptations to the evaluation plan that can help inform decision-making.
  3. Trust comes through continuous learning: Learning from an evaluation is most likely to stick when researchers and organization staff build strong, trusting relationships throughout the process, rather than relying on the one-way delivery of a final report. And compared with traditional evaluators, organizational consultants may be more attentive to a team’s receptivity to receiving and acting on findings.

Read the full article on evaluative and organizational development by Saphira M. Baker and Anita McGinty at Stanford Social Innovation Review.