A key tenet of the scientific process is reproducibility: scientists should be able to repeat a study’s original protocol and obtain similar results. But a new report shows that cancer research has a reproducibility problem, Carla K. Johnson reports for the Associated Press (AP).
For the last eight years, a team of scientists with the Reproducibility Project: Cancer Biology has meticulously worked to replicate key, fundamental studies in cancer biology. The team attempted to walk through 193 experiments from 53 studies published between 2010 and 2012, but found that only about a quarter were reproducible, Tara Haelle reports for Science News.
“The report tells us a lot about the culture and realities of the way cancer biology works, and it’s not a flattering picture at all,” Jonathan Kimmelman, a bioethicist at McGill University in Montreal, tells Science News.
The project published its findings this week in two papers in the journal eLife. One detailed the challenges of replication; the other addressed the implications.
Though the team set out to replicate nearly 200 experiments, several major setbacks shrank the list down to 50 experiments. Some research lacked sufficiently detailed or clear protocols; omitting tiny details, like how quickly a flask is stirred, or leaving a term like “biweekly” undefined, can derail a replication attempt, Angus Chen reports for STAT News.
None of the 193 experiments were described explicitly enough to replicate without reaching out to the original researchers for more details. For 41 percent of the experiments, the original investigators were rated “extremely helpful” or “very helpful” when asked for those details. About a third were “not at all helpful” or did not reply to the team’s inquiries, according to the paper.
This reflects the culture of academia, which often rewards original innovation and shiny new studies over replication. Reproducing studies can also feel threatening, as if someone is looking to find fault with the original investigators; as a result, scientists are less inclined to fully detail their protocols and share their data, Science News reports. Furthermore, most scientific journals rarely publish replication studies.
“If replication is normal and routine, people wouldn’t see it as a threat,” Brian Nosek, the executive director of the Center for Open Science, which supports the Reproducibility Project, tells Science News. “Publication is the currency of advancement, a key reward that turns into chances for funding, chances for a job and chances for keeping that job. Replication doesn’t fit neatly into that rewards system.”
Even among the experiments the team could replicate, the results were less impressive. On average, the replications showed an 85 percent decrease in effect size (a measure of the strength of a result) compared with the originals. Tim Errington, a cancer biologist at the Center for Open Science, tells STAT News that science can sometimes charge ahead with a promising result without fully evaluating it. Replication can help catch a “lucky fluke,” or validate the results, he says.
“In general, the public understands science is hard, and I think the public also understands that science is going to make errors,” Nosek tells Science News. “The concern is and should be, is science efficient at catching its errors?”
The studies evaluated by the Reproducibility Project covered only very early-stage research. Drugs and treatments that make it to clinical trials are rigorously tested and retested before reaching the market. But catching problems through replication early on can lead to more robust results down the road and prevent cancer patients from getting their hopes up about early studies described as “promising,” the AP reports.
“Human biology is very hard, and we’re humans doing it. We’re not perfect, and it’s really tricky,” Errington tells STAT News. “None of these replications invalidate or validate the original science. Maybe the original study is wrong — a false positive or false signal. The reverse may be true, too, and the replication is wrong. More than likely, they’re both true, and there’s something mundane about how we did the experiment that’s causing the difference.”
Solutions to the reproducibility problem are hotly debated, but one thing is clear: experimental protocols should be widely available and as detailed as possible. Thanks in part to the work of the Center for Open Science, some journals now allow scientists to include more detail in their protocols, where space was once limited, and other journals are even considering publishing replication studies, STAT News reports.