Seed magazine profiles the recent work of John Ioannidis, author of the groundbreaking article “Why most published research findings are false”.
I’ve written about him before in several contexts, including the importance of understanding this research. The counter-intuitive thing is how much his research redeems science as an enterprise, while also highlighting how denialists can abuse our literature.
I recommend that scientists take the chance to read some of his work, and ideally watch this video (it’s a lot more approachable) that I uploaded to Google a few months ago. It is a bit long – it’s the grand rounds lecture he delivered at NIH – but well worth the time if you’re working in, or interested in the results of, biological research.
It’s really fascinating stuff, and as someone who is always harping on error and statistics in my lab, I found it a welcome wake-up call to biologists to understand the meaning of statistical significance, and the importance of skepticism toward new results until they’re broadly verified.
Simply put, what Ioannidis does is take the most cited articles from a given time period, then look 20 years down the line to see which of these highly-cited, groundbreaking articles have held up. A simple breakdown of the findings in biological fields: if you start with 100 groundbreaking papers that promise immediate clinical translation, about 27 of them will have results that hold up well enough to lead to clinical trials, about 5 will result in actual licensed treatments for people (a sign of successful translational research), and about 1 will result in a technology that revolutionizes medical treatment.
He also shows a bias toward big initial effects. Frequently the first paper reporting a new finding shows a really big, highly significant effect. However, as other researchers study the problem there is a rapid correction, sometimes completely canceling out the initial observation, but usually settling on a much more moderate effect.
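This inflation of initial effects can be illustrated with a toy simulation – a sketch in Python, not Ioannidis’s actual method, and the effect size, sample size, and study count are made-up illustrative numbers. If underpowered studies only get noticed when they cross the p &lt; 0.05 threshold, the estimates that make it into print systematically overstate the true effect:

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2   # modest true difference between two groups (illustrative)
N = 20              # per-group sample size: an underpowered study
SE = (2 / N) ** 0.5 # standard error of the mean difference, assuming sigma = 1

all_estimates = []  # every simulated study's observed effect
published = []      # only the estimates that reach "statistical significance"

for _ in range(100_000):
    # each simulated study observes the true effect plus sampling noise
    estimate = random.gauss(TRUE_EFFECT, SE)
    all_estimates.append(estimate)
    # two-sided z-test at alpha = 0.05: "significant" if |z| > 1.96
    if abs(estimate / SE) > 1.96:
        published.append(estimate)

print(f"true effect:              {TRUE_EFFECT}")
print(f"mean of all estimates:    {statistics.mean(all_estimates):.3f}")
print(f"mean of significant ones: {statistics.mean(published):.3f}")
print(f"fraction significant:     {len(published) / len(all_estimates):.2%}")
```

The studies as a whole estimate the effect accurately, but the subset that clears the significance filter overstates it severalfold – exactly the pattern of a big first paper followed by a correction as replications accumulate.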
Ultimately science is redeemed, since the data showing that many initial findings are exaggerated or overblown comes from subsequent attempts at replication by other scientists. The system works. But it emphasizes many things that are important to us here at denialism blog. It shows the importance of knowing the difference between the statistical significance and the statistical power of experiments. It’s also important not to immediately accept everything that comes out of even prominent journals, as one of the critical elements of the scientific enterprise is replication, replication, replication. Skepticism of even high-profile research is critical until results hold up under replication. Ideally, the response to research like this would be to emphasize trial design in judging whether a result is worth publishing – the big journals are instead biased toward big splashy results – and a greater willingness to publish negative data from well-designed trials.
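The significance-versus-power distinction is what Ioannidis formalizes in the “false findings” paper as positive predictive value: the chance that a statistically significant result is actually true depends not just on the p-value, but on the power of the study and the pre-study odds that the hypothesis is right. A back-of-the-envelope version of his formula (the particular power and odds values below are just illustrative):

```python
def ppv(prior_odds, power, alpha=0.05):
    """Positive predictive value of a 'significant' finding.

    Follows the relationship in Ioannidis 2005: PPV = (1 - beta) * R / ((1 - beta) * R + alpha),
    where R (prior_odds) is the pre-study odds that the tested relationship is real,
    power is 1 - beta, and alpha is the test's false-positive rate.
    """
    return (power * prior_odds) / (power * prior_odds + alpha)

# well-powered test of a plausible hypothesis: significance is very informative
print(f"power=0.80, odds 1:1  -> PPV = {ppv(1.0, 0.80):.2f}")  # 0.94
# underpowered test of a long-shot hypothesis: most "hits" are false
print(f"power=0.20, odds 1:10 -> PPV = {ppv(0.1, 0.20):.2f}")  # 0.29
```

Same significance threshold in both cases, yet in the second scenario a “significant” result is more likely false than true – which is why power and prior plausibility, not p-values alone, should drive how seriously we take a new finding.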
Finally, and most importantly, it shows that the literature is full of missteps, and those who would misuse science can always find studies of weak statistical power showing the effect they’re interested in promoting. In science it is important never to cherry-pick the study you want to be true, but to consider the totality of the scientific literature before drawing conclusions.