Giving Compass' Take:
- Lymari Benitez, Yessica Cancel, Mary Marx, and Katie Smith Milway highlight the benefits of incorporating "soft" data, like participant feedback, into nonprofit program measurement.
- How can you support the collection and use of "soft" data in nonprofit program evaluations?
- Read more about how feedback practices can help drive nonprofits forward.
Participant feedback is generally pegged as the "softer" leg of nonprofit program measurement compared with quantitative approaches like randomized controlled trials (RCTs). Organizations tend to treat input from users of social products and services more as a "suggestion box" than as a critical measure of effectiveness. Meanwhile, the field has long considered third-party evaluations that relegate participants to the subjects of a study the gold standard for developing evidence of program outcomes.
However, several new research initiatives aim to show funders and nonprofits that participant feedback has empirical links to hard outcomes. Early returns suggest that for organizations that gather feedback from direct participants and their communities to continuously improve their programs and policies, surveys, interviews, and focus groups can do more than surface new ways to interpret quantitative findings and explain the why and how. They can also point to causal links with past outcomes and, remarkably, serve as a proxy for future outcomes.
A Case Study
The Center for Employment Opportunities (CEO), a criminal justice organization that aims to move paroled men and women into livelihoods with higher job retention and to lower recidivism, offers an example. During its first four decades, CEO (founded in the 1970s as a project of the Vera Institute of Justice) built a reputation for achieving hard measures of success; the data it gathered about program participants showed that the organization meaningfully reduced recidivism among people recently released from prison compared with parolees who did not participate in its programs.
But in 2016, after encounters with David Bonbright of Keystone Accountability, a proponent of constituent voice, and a feedback-initiative grant from the Fund for Shared Insight, CEO shifted its approach from only gathering data about participants to also gathering data from them. As a result of participant input, it formed partnerships with external agencies to expand training opportunities, offering OSHA construction certification and commercial driver licensing. It also shifted certain administrative tasks, such as signing up for road crew shifts, from participants to staff so that participants could focus on attending interviews for more-permanent jobs.
After seeing how feedback sparked program improvements, CEO began to include participants at higher levels of decision-making: For example, it appointed people who completed the program to advisory committees across the United States in 2019, and to its board of directors in 2020. In 2021, with a research grant from the Fund for Shared Insight, it began investigating how participant feedback on program quality connected to hard outcomes like obtaining permanent jobs. Among other findings, CEO learned that participants who responded to requests for feedback during the first four weeks of the program were more likely to meet the organization’s job search and placement goals three months and six months later. The act of giving feedback itself became a predictor of outcomes.
Read the full article about "soft" program measurement by Lymari Benitez, Yessica Cancel, Mary Marx, and Katie Smith Milway at Stanford Social Innovation Review.