
Scientists who ran separate analyses on a single data set about the effect of grass cover on Eucalyptus seedlings arrived at vastly different answers. Credit: Laurence Dutton/Getty
In a massive exercise to examine reproducibility, more than 200 biologists analysed the same sets of ecological data — and got widely divergent results. The first sweeping study1 of its kind in ecology demonstrates how much results in the field can vary, not because of differences in the environment, but because of scientists’ analytical choices.
“There can be a tendency to treat individual papers’ findings as definitive,” says Hannah Fraser, an ecology meta-researcher at the University of Melbourne in Australia and a co-author of the study. But the results show that “we really can’t be relying on any individual result or any individual study to tell us the whole story”.
Variation in results might not be surprising, but quantifying that variation in a formal study could catalyse a larger movement to improve reproducibility, says Brian Nosek, executive director of the Center for Open Science in Charlottesville, Virginia, who has driven discussions about reproducibility in the social sciences.
“This paper may help to consolidate what is a relatively small, reform-minded community in ecology and evolutionary biology into a much bigger movement, in the same way as the reproducibility project that we did in psychology,” he says. It would be hard “for many in this field to not recognize the profound implications of this result for their work”.
The study was published as a preprint on 4 October and has not yet been peer reviewed.