Giving Compass' Take:

• MDRC describes the importance of research transparency and replication and explains the steps the organization is taking to improve its own practices. 

• What are the consequences of failing to replicate studies? How can funders build more trust in philanthropy through transparency?

• Learn why the famous Stanford "marshmallow study" has been called into question.


Many researchers are concerned about a crisis in the credibility of social science research because of insufficient replicability and transparency in randomized controlled trials and in other kinds of studies.

Reproducibility refers to the ability of other researchers to produce the same results with the same data. Replicability refers to the ability to repeat a study and obtain the same results, or at least results of similar magnitude for the same outcomes, with new data.

Both are important to ensure that policymakers, practitioners, and the public can have confidence in the evidence. Transparency allows other researchers to assess the methods, reproduce and replicate the studies, and advance knowledge by generating and testing new hypotheses.

MDRC is addressing these concerns in four ways:

  1. Prespecifying and preregistering research plans. Prespecification both requires and allows researchers to (1) articulate the goals of their study, (2) support the scientific integrity of their findings by minimizing the possibility of "fishing" for positive findings across multiple specifications, (3) allow others to assess the validity of their methods, (4) counteract publication bias against studies without significant findings, and (5) make reproduction and replication possible.
  2. Requiring more than statistical significance. Findings in experimental studies are typically designated as effects, or impacts, when the p-values for the corresponding hypothesis tests fall below a prespecified significance threshold (usually 0.05 or 0.10). A p-value of 0.10 means there is only a 10 percent chance that a program with no true effect would have produced an estimate at least this large. But small changes in the analysis, such as sample restrictions or model specification, can move a p-value across that threshold. To reduce the risk of drawing incorrect conclusions from a false positive, we use several practices to guide our interpretation of findings based on all the evidence (the simulation sketch after this list illustrates the false-positive risk).
  3. Sharing data. At MDRC, we frequently produce public use files from our data, allowing other researchers to reproduce our analyses and findings, conduct robustness checks with other methods, or explore additional hypotheses.
  4. Conducting replication studies. We have also begun undertaking replications of some of our best-known studies.
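
To make the false-positive risk described in step 2 concrete, here is a minimal simulation sketch. It is illustrative only and not MDRC's code; the trial size, outcome distribution, and use of a two-sample t-test are all assumptions chosen for simplicity.

```python
# Illustrative sketch (not MDRC's code): simulate many randomized trials
# of a program with zero true effect and count how often a standard
# two-sample t-test produces p < 0.10, a false positive at that threshold.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_trials = 10_000   # number of simulated experiments (assumed)
n_per_arm = 500     # participants per treatment/control arm (assumed)

false_positives = 0
for _ in range(n_trials):
    # Outcomes drawn from the same distribution: the program has no effect.
    treatment = rng.normal(loc=0.0, scale=1.0, size=n_per_arm)
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_arm)
    _, p_value = stats.ttest_ind(treatment, control)
    if p_value < 0.10:
        false_positives += 1

# With no true effect, roughly 10 percent of trials still clear the
# 0.10 significance threshold by chance alone.
print(f"False-positive rate: {false_positives / n_trials:.3f}")
```

This is why a single statistically significant estimate is weighed against all the available evidence rather than treated as decisive on its own.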

Read the full article on research transparency and replication by Rachel Rosen at MDRC.