Giving Compass' Take:
- Suvarna Pande and Catherine MacLeod discuss how randomized controlled trials can make anti-poverty policy development more effective.
- How can donors and funders support improved research into the effectiveness of economic development policies?
Randomized controlled trials (RCTs) are a powerful tool for understanding what works in development and anti-poverty programs. They provide insights to guide practitioners and policymakers in improving and scaling interventions. But for RCT findings to inform these decisions, they must be communicated clearly and systematically, which is easier said than done. Good reporting isn’t just about sharing final findings; researchers should also share their process so that results are actionable. That means going beyond the numbers to explain the context, what the findings mean, and how they apply in the real world. In this blog, we draw on lessons learned from a collaboration between 3ie and ideas42, including a review of RCTs evaluating behavioral designs in cash transfer programs.
We identified good practices that can improve evaluation design and reporting. These recommendations aim to help researchers and practitioners ensure their work leads to more informed, impactful decisions and, ultimately, better outcomes for the communities they serve.
How can researchers improve reporting practices to ensure RCT findings are clear and actionable?
While RCT evaluations can provide a robust foundation for understanding the effects of interventions, researchers should ensure consistency and transparency in design and reporting to meaningfully inform other researchers and practitioners. When information is lacking, it is harder to assess the quality of the work, which limits both the usefulness of program evaluations and their replicability. To report critical information, however, programs must also be designed to collect the relevant data during implementation.
Through a collaborative stocktaking project, we identified the following design and reporting practices that can increase the value of reports by reflecting well-designed and well-conducted evaluations:
- Clearly report dropout rates. This is important for RCTs, which are particularly prone to attrition problems, especially in longer-term evaluations. It is just as critical to test and report whether dropout differs between treatment and control groups, so readers can see whether the resulting data loss is balanced across arms (a simple check is sketched after this list).
- Consider blinding to prevent possible effects of participants knowing which group they are in (e.g., changing their behavior in ways that ultimately affect study results). For example, recognizing that respondents might change their behavior depending on their treatment status, ideas42 used brown envelopes in one of its RCTs to blind participants to their group assignment.
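To make the attrition guidance above concrete, here is a minimal sketch, not drawn from the article, of how a team might report per-arm dropout and test for differential attrition. It assumes scipy is available, and all counts are hypothetical:

```python
# A minimal sketch (assumptions: scipy is available; all counts are
# hypothetical) of reporting overall and differential attrition in an RCT.
from scipy.stats import chi2_contingency

# Hypothetical enrollment and endline-completion counts per arm.
enrolled = {"treatment": 1000, "control": 1000}
completed = {"treatment": 870, "control": 910}

# Overall attrition per arm: share of enrolled participants lost by endline.
for arm in enrolled:
    attrition = 1 - completed[arm] / enrolled[arm]
    print(f"{arm} attrition: {attrition:.1%}")

# Differential attrition: build a 2x2 table of (completed, dropped) by arm
# and test whether dropout is independent of arm assignment.
table = [
    [completed["treatment"], enrolled["treatment"] - completed["treatment"]],
    [completed["control"], enrolled["control"] - completed["control"]],
]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
# A small p-value flags differential attrition: data loss that is not
# balanced across arms, which can bias treatment-effect estimates.
```

Reporting both the per-arm rates and the test result lets readers judge for themselves whether attrition threatens the validity of the estimates.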
Read the full article about strengthening randomized controlled trials by Suvarna Pande and Catherine MacLeod at ideas42.