Australasian Science: Australia's authority on science since 1938

Scientists Unknowingly Tweak Experiments

Scientists are unknowingly tweaking experiments and analysis methods to increase their chances of getting results that are easily published, according to research in PLOS Biology.

The study investigated a type of publication bias called p-hacking, which happens when researchers either consciously or unconsciously analyse their data multiple times or in multiple ways until they get a desired result.

“We found evidence that p-hacking is happening throughout the life sciences,” said lead author Dr Megan Head from the Australian National University. If p-hacking is common, the exaggerated results could lead to misleading conclusions, even when evidence comes from multiple studies.

Head’s study used text mining to extract p-values – a statistical measure of how likely a result at least as extreme as the one observed would be if there were no real effect – from more than 100,000 research papers published across many scientific disciplines, including medicine, biology and psychology.
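To make the p-value idea concrete, here is a minimal sketch (not from the study) using a coin-flip example: it computes the two-sided probability, assuming the coin is fair, of seeing a head count at least as lopsided as the one observed.

```python
from math import comb

def binom_p_value(k, n, p=0.5):
    """Two-sided binomial test: probability, under a fair coin
    (the null hypothesis), of an outcome at least as extreme as
    k heads in n flips."""
    expected = n * p
    observed_dev = abs(k - expected)
    total = 0.0
    for i in range(n + 1):
        # Sum the probability of every outcome as extreme or more so.
        if abs(i - expected) >= observed_dev:
            total += comb(n, i) * p**i * (1 - p)**(n - i)
    return total

# 60 heads in 100 flips of a supposedly fair coin sits just
# above the conventional 0.05 significance cutoff.
print(binom_p_value(60, 100))
```

A small p-value says the data would be surprising under pure chance; it does not, by itself, say the effect is real or important.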

“Many researchers are not aware that certain methods could make some results seem more important than they are,” Head said. “They are just genuinely excited about finding something new and interesting.

“I think that pressure to publish is one factor driving this bias. As scientists we are judged by how many publications we have and the quality of the scientific journals they go in.

“Journals, especially the top journals, are more likely to publish experiments with new, interesting results, creating incentive to produce results on demand.”

Head’s study found that an unusually high number of p-values fell only just below 0.05, the traditional threshold for statistical significance. “This suggests that some scientists adjust their experimental design, datasets or statistical methods until they get a result that crosses the significance threshold,” she said. “They might look at their results before an experiment is finished, or explore their data with lots of different statistical methods, without realising that this can lead to bias.”
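The “looking at results before an experiment is finished” mechanism can be simulated directly. The sketch below (an illustration, not code from the study) generates pure-noise data, so every “significant” result is a false positive, and compares an analyst who tests once at the end with one who peeks after every batch and stops as soon as p < 0.05.

```python
import random
from math import erf, sqrt

random.seed(1)

def p_value(xs):
    # Two-sided z-test of "mean = 0" for data with known sd = 1.
    z = (sum(xs) / len(xs)) * sqrt(len(xs))
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

def run_experiment(peek=False, batches=10, batch_size=10):
    """All data are noise, so any 'significant' result is false.
    With peek=True the analyst checks after every batch and stops
    as soon as p < 0.05 -- a simple form of p-hacking."""
    xs = []
    for _ in range(batches):
        xs += [random.gauss(0, 1) for _ in range(batch_size)]
        if peek and p_value(xs) < 0.05:
            return True
    return p_value(xs) < 0.05

trials = 2000
honest = sum(run_experiment(peek=False) for _ in range(trials)) / trials
peeking = sum(run_experiment(peek=True) for _ in range(trials)) / trials
print(f"false-positive rate, single test: {honest:.3f}")
print(f"false-positive rate, with peeking: {peeking:.3f}")
```

The single-test rate stays near the nominal 5%, while repeated peeking inflates it severalfold, even though neither analyst set out to cheat.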

P-hacking could bias scientific conclusions, even in reviews that combine the results from many studies. For example, if enough results in a review have been p-hacked, a drug could look more effective than it is.

“We looked at the likelihood of this bias occurring in our own specialty, evolutionary biology, and although p-hacking was happening it wasn’t common enough to drastically alter general conclusions that could be made from the research,” Head said.