One of the great, if largely unsung, bipartisan congressional acts of recent history was the passage in 2018 of the Foundations for Evidence-Based Policymaking Act. In essence, the “Evidence Act” codified the goal of using solid, consistent evidence as the basis for funding decisions on trillions of dollars of public money. Agencies use this evidence to identify the most effective and most promising solutions for a vast array of issues, from early-childhood education to environmental protection.

Four years later, while most federal agencies have created fairly robust evidence bases, unlocking that evidence for practical use by decision makers remains challenging. One might argue that if Evidence 1.0 (i.e., the last four years) was focused on the production of evidence, then the next four years (call it Evidence 2.0) will be focused on the effective use of that evidence. Now that evidence is readily available to policymakers, the question is how that data can be standardized, aggregated, derived, applied, and used for predictive decision-making.

In the following conversation, two expert leaders—Nick Hart, president of the Data Foundation, and Jason Saul, founder and executive director of the Center for Impact Sciences at the University of Chicago’s Harris School of Public Policy—share thoughts about the next phase of the evidence movement.

Q. Can you summarize for us the goal of Evidence 2.0?

Nick Hart: It’s all about using the data. Evidence 1.0 was great: we generated a wealth of better knowledge, and that is fantastic. But the real point is to make all that knowledge accessible and usable, so that our policymaking is better informed. It doesn’t make any difference if you’ve got the best study in the world and nobody uses it. We want all this research and evaluation to be open and understandable to all. That’s the goal!

Q. Jason, how do we get there?

Jason Saul: The crux of the issue is unlocking the data. We’ve generated hundreds of thousands of pieces of “evidence”: evaluations, research studies, and controlled trials published in PDFs. But there’s a pretty big difference between “evidence” and actionable data. Every piece of evidence or evaluation study uses different terminology and data definitions. The data are not coded in any standardized way; we have no common indexing or taxonomies for impact. Look at what Google did for website indexing, look at what Westlaw did for indexing case law, look at what the Human Genome Project did for indexing genetic research. We need an “impact index” to do the same for social science research.
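To make Saul’s point concrete, here is a minimal, purely illustrative sketch (in Python) of what standardized evidence records and a shared-taxonomy lookup might look like. The schema, field names, and taxonomy terms below are hypothetical assumptions for illustration only; they are not drawn from the article or from any existing federal standard.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class EvidenceRecord:
    # Hypothetical standardized fields; the article proposes no specific schema.
    study_id: str       # stable identifier for the study
    intervention: str   # shared-taxonomy tag, e.g. "early-childhood/home-visiting"
    outcome: str        # shared-taxonomy tag, e.g. "kindergarten-readiness"
    design: str         # e.g. "RCT", "quasi-experimental"
    effect_size: float  # standardized effect estimate reported by the study


class ImpactIndex:
    """Toy inverted index: look up studies by shared taxonomy tag."""

    def __init__(self) -> None:
        self._by_tag: dict[str, list[EvidenceRecord]] = defaultdict(list)

    def add(self, record: EvidenceRecord) -> None:
        # Index the record under both its intervention and outcome tags.
        self._by_tag[record.intervention].append(record)
        self._by_tag[record.outcome].append(record)

    def search(self, tag: str) -> list[EvidenceRecord]:
        return list(self._by_tag.get(tag, []))


index = ImpactIndex()
index.add(EvidenceRecord("study-001", "early-childhood/home-visiting",
                         "kindergarten-readiness", "RCT", 0.21))
for record in index.search("kindergarten-readiness"):
    print(record.study_id, record.design, record.effect_size)
```

The point of the sketch is that once studies share identifiers and taxonomy tags, aggregating findings across thousands of PDFs becomes a query rather than a manual literature review.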

Q. Is the government doing that?

Nick Hart: The Evidence Act actually set the stage for this through its data-governance processes. One example: Congress passed another law in 2022, the Financial Data Transparency Act, that clearly says: publish financial information as searchable data, not just as written reports. We have to do that across the board. It’s a hugely exciting opportunity for the government to build public trust in government institutions, in data, in evidence, and in the ability to communicate better with the American public using tools that are available to everyone today. That’s the democratization of data. Some government agencies are doing well, but many have a long way to go. It’s like changing the course of a large ship.

Read the full article about evidence-based policymaking by Nick Hart and Jason Saul at Stanford Social Innovation Review.