Giving Compass' Take:

• Michael J. Weiss, Rebecca Unterman, and Dorota Biedzio share what is known, and what remains unknown, about the long-term impacts of higher education programs.

• What role can you play in supporting further research to fill identified gaps? 

• Read about higher education and COVID-19: philanthropy's key role.


Some education programs’ early positive effects disappear over time. Other programs have unanticipated positive long-term effects. Foundations warn of the dangers of putting too much weight on in-program effects, which “often fade over time.” The U.S. Department of Education’s Institute of Education Sciences (IES) even has a special funding category dedicated to continued follow-up to explore these issues.

This Issue Focus tackles the topic of post-program effects in postsecondary education, a previously unexamined context. Are in-program effects (the effects observed while the program was active) maintained once the program ends? Do they grow? Or do they fade out?

To answer these questions, this investigation draws on two decades of rigorous program evaluations conducted by MDRC, covering approximately 25 postsecondary programs. The programs varied widely in their features (such as financial support, advising, learning communities, tutoring, success courses, instructional reforms, and communication campaigns), their duration (from one semester to three years), the populations they served, and their contexts. Each evaluation used a well-executed randomized controlled trial (RCT), a design often considered the gold standard for estimating program effectiveness.

The results are striking. During the year after these programs ended, effects on academic progress (measured by credit accumulation, an indicator of progress toward a degree) were consistently maintained. There was no discernible fade-out after the programs ended, which is encouraging news about the lasting value of many postsecondary programs and about the value of evidence collected while programs are still operating. There was also no evidence of post-program growth, which points to a need for programs that equip students to keep succeeding after the programs themselves end.

The finding that effects on credit accumulation are broadly maintained after postsecondary programs end should encourage education reformers concerned about fade-out. While in-program effects are sometimes important on their own, benefits maintained into the future are especially powerful.

Two caveats are worth noting:

First, the analyses above focus on a single outcome: cumulative credits earned. That outcome is an important indicator of academic progress, but it is not the only one postsecondary programs target. Exploring other outcomes, such as enrollment, grade point average, and degree completion, is an area ripe for future research.

Second, while post-program maintenance of effects is better than fade-out, growth would be even better.