Editors’ Note: The following article was adapted from “Evaluation Research and Institutional Pressures: Challenges in Public-Nonprofit Contracting” by Frumkin and Reingold. The theoretical and political issues raised in this adaptation are explored in greater depth in the full paper, available online.
This story, reprinted from the Fall 2004 edition of the Nonprofit Quarterly, was first published online on September 21, 2004.
Public management and program evaluation should be natural allies in the quest for greater effectiveness. Armed with good data, government agencies would be in a position to focus resources and attention on the programs that have demonstrated the greatest impact. In too many cases, however, the link between solid evidence of program impact and government efforts to grow and replicate programs has been weak or missing.
Over the past three decades, the connection between evaluation research and public decision-making has been transformed by the rise of service contracting, which has shifted responsibility for the delivery of public programs to non-governmental organizations. Many responsibilities have been pushed down to more local levels of government through devolution and out to nonprofit service providers through privatization. As this movement “down and out” has swept through government, the task of collecting and acting on evaluation data has changed. Rather than focus exclusively on government, researchers are examining new models of service delivery that increasingly rely on nonprofit organizations as prime vehicles for implementation.
Evaluation research focused on the performance of outside contractors can help public officials make difficult contracting decisions more soundly, learn about programs implemented elsewhere, and avoid funding efforts that others have already found do not work.
In principle, this vision of how evaluation research can be used to improve public management and contracting appears sound and reasonable. Evidence that a particular initiative or program has positive effects—and “works”—can and should fuel replication and expansion, while evidence that an intervention does not achieve its intended objectives should lead to its abandonment. But does it?
In practice, public managers have had trouble replicating what works and, in at least a few notable cases examined here, have poured large amounts of public funds into programs that evaluation research has shown do not work, or for which there is no evidence of either success or failure.
In investigating this topic, it might be tempting to single out and blame a few individual managers for exercising poor judgment or failing to keep abreast of developments. Instead, this article uses three examples to lay out a framework for understanding the institutional and political forces that lead rational managers to spend scarce program funds on efforts with a very low likelihood of success.