Science relies on the careful collection and analysis of facts. Science also benefits from human judgment, but that intuition isn't necessarily reliable. A study finds that scientists did a poor job forecasting whether a successful experiment would work on a second try.
That matters, because scientists can waste a lot of time if they read the results from another lab and eagerly chase after bum leads.
"There are lots of different candidates for drugs you might develop or different for research programs you might want to invest in," says Jonathan Kimmelman, an associate professor of biomedical ethics at McGill University in Montreal. "What you want is a way to discriminate between those investments that are going to pay off down the road, and those that are just going to fizzle."
Kimmelman has been studying scientific forecasting for that reason. He realized he had a unique opening when other researchers announced a multi-million dollar project to replicate dozens of high-profile cancer experiments. It's called the Reproducibility Project: Cancer Biology. Organizers had written down, in advance, the exact protocols they would use and promised not to deviate from them.
"This was really an extraordinary opportunity," he says, because so often scientists change their experiment as they go along, so it's hard to know whether a poor forecast was simply because the experiment had changed along the way.
Kimmelman and his postdoctoral fellow, Daniel Benjamin, asked nearly 200 professors, postdocs and graduate students to forecast the results from six of those repeated cancer experiments. The follow-up studies have now been done, and the results are in.
How did those forecasters do? According to a report published last week in PLOS Biology, not so hot.