Science-based medicine – The good and the bad on a good new blog

I must say I’ve loved much of the writing at the new blog Science-Based Medicine. These guys are fighting the good fight and presenting very sophisticated approaches to evaluating the medical literature in a very accessible way. In particular I’d like to point out David Gorski’s critique of NCCAM and the directly relevant articles from Kimball Atwood on the importance of prior probability in evaluating medical research. I mention these as a pair because lately I’ve become highly attuned to this issue through the research of John Ioannidis, which is critical for understanding which evidence in the literature is high-quality and likely to be true. Atwood rightly points out that pre-study odds, or prior probability, are critical for understanding how the literature gets contaminated with nonsense. Stated simply, the emphasis on statistical significance in evidence-based medicine is unfortunate because statistical significance is ultimately an inadequate measure of the likelihood that a result is true.

The scenario goes like this. You have a test, let’s say of the efficacy of magnets in increasing circulation in rats. Because magnets are believed by some snake-oil salesmen to have health benefits, you and 99 other researchers decide to put this to the test in your rat-based assay. By chance alone, about 5% of you will get a statistically significant result that appears real even though no effect exists. Roughly 95 of you will then say, “oh well, nuts to this” and shove the data in the file drawer to be forgotten. The other five may say, “wow, look at that” and go ahead and try to publish your results. This is what is known as the file-drawer effect. Positive results get published, negative results do not, and thus false positive results, especially ones with big effects, will often sneak into the literature. Luckily science has a self-correcting mechanism that requires replication, but since we don’t delete the initial studies, they will always be there for the cranks to access and wave about.
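The 100-researchers scenario is easy to simulate. The sketch below is mine, not from the post: 100 hypothetical labs each run a two-sample z-test on a null effect (the magnets do nothing), and by construction roughly 5 of them should hit p < 0.05 anyway.

```python
# Hypothetical sketch of the file-drawer scenario: 100 labs test a
# null effect (magnets do nothing) at alpha = 0.05. Sample sizes and
# distributions are illustrative assumptions, not from the post.
import math
import random

random.seed(1)

def two_sample_p(n=50):
    """One null experiment: both groups drawn from the same N(0, 1)."""
    magnet = [random.gauss(0, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(magnet) / n - sum(control) / n
    se = math.sqrt(2 / n)  # standard error with known unit variance
    z = diff / se
    # two-sided p-value from the normal CDF: 2 * (1 - Phi(|z|))
    return math.erfc(abs(z) / math.sqrt(2))

labs = 100
false_positives = sum(two_sample_p() < 0.05 for _ in range(labs))
# Expected value is 5; any given run will scatter around that.
print(f"{false_positives} of {labs} labs found a 'significant' effect")
```

If only those few “significant” labs publish, the literature records an effect that was never there.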

This makes two things very important. One is the importance of replication and the evaluation of the totality of the literature rather than a single report. Two is the critical importance of pre-study odds. You don’t even need to be an expert in Bayesian statistics to figure out how to compute this; it’s just common sense. Before an experiment is performed, one should ask: is there a good rationale for the experiment? Is there a reasonable physiological or physical basis for your hypothesis, or are you just setting yourself up to report a false positive? These are questions good researchers ask all the time, because they protect you from being fooled by randomness.
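The pre-study-odds point can be made concrete with Bayes’ rule. This is my own illustrative sketch, with assumed numbers (alpha, power, and the two priors are not from the post): it computes the probability a hypothesis is actually true given a “significant” result, for a plausible mainstream hypothesis versus a snake-oil claim.

```python
# Hypothetical sketch: what a "significant" result is worth depends on
# the prior probability of the hypothesis. Alpha, power, and the example
# priors below are illustrative assumptions.
def prob_true_given_positive(prior, alpha=0.05, power=0.8):
    """Bayes' rule: P(hypothesis true | p < alpha)."""
    true_pos = prior * power          # real effects that reach significance
    false_pos = (1 - prior) * alpha   # null effects significant by chance
    return true_pos / (true_pos + false_pos)

# A reasonable mainstream hypothesis: the positive result is probably real.
print(prob_true_given_positive(prior=0.5))   # ~0.94

# A snake-oil claim: even a "significant" result is probably false.
print(prob_true_given_positive(prior=0.01))  # ~0.14
```

Same p-value, wildly different meaning: at a 1-in-100 prior, roughly six out of seven “significant” findings are false positives.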

This feeds into why I liked Dr. Gorski’s piece on NCCAM so much, and why it changed my mind on the National Center for Complementary and Alternative Medicine. I used to think that money spent on NCCAM, while not ideal, at least subjected CAM claims to scientific inquiry, and that the center benefited from being run by a legitimate set of scientists, notably the late Stephen Straus. At worst it was just a boondoggle, and we might get some interesting results out of it.

Now I’m convinced, in no small part by Ioannidis, Atwood, and Gorski, that NCCAM cannot help but be a fundamentally flawed endeavor that will ultimately contaminate the literature with nonsense and noise. The pre-study odds of CAM modalities are exceedingly poor; however, if you study snake oil long enough, even if it’s doing nothing, eventually you’ll end up with some positive results that you can publish. Since negative studies don’t get published, all you see is a contamination of the literature with false positives. NCCAM isn’t just a benign waste of money; it has the potential to pollute the literature with nonsense that will never go away.

Ugh. It gives me shivers.

Finally, a negative comment about Science-Based Medicine: a pair of articles from Wallace Sampson, who usually has his head screwed on right, disappointed me. They are the Iraqi Civilian War Dead Scandal and its follow-up, which I believe fail to meet the standards of a blog that wishes to represent science and evidence-based writing. First of all, the word “scandal” is way over the top as a description of the Lancet articles that used sampling to estimate the increased death rate in Iraq after the US invasion. There is no scandal. There may be controversy, but not scandal. Second, Sampson attacks the paper with conspiracy theories, some outrageous allegations of falsification, and one of my favorite crank attacks: armchair math. Always beware when someone takes on a complicated scientific theory or result with armchair math; it’s usually a sign you’re reading Uncommon Descent. Tim Lambert and others in the comments (including his co-bloggers) show Sampson to be way out of line and making truly incorrect claims about these studies, probably because he relied on poor information sources (read: liars) writing about them. I’m not calling this crankery yet, but I would like to have seen his follow-up be a little contrite about the excesses of the first article. I’ll just say for now that these articles by Sampson fall below my standards for scientific writing or for a critique of the scientific literature. The blog can do better, and I think Sampson can do better.


  1. Great analysis of the site, Mark.

    By the way, the 5% false positive rate is also another way to explain to the public why reports on health issues (antioxidants, hormone-replacement therapy, etc.) often seem to contradict one another. Since media reports often consider single studies in isolation, it’s no wonder that we come off as “those scientists who can’t agree on anything.” Medical journalists need to consider this point in reporting their stories in proper context.

    It’s not until we have an accumulation of studies and a reasonable consensus that public health recommendations should be made; and even then, recommendations can change with the emergence of new studies that refine the current state of the science.

  2. Thanks, Mark, for pointing this out to me. I am a semi-layperson with a scientific background trying to fight this onslaught of altie medicine, especially that coming from India, and I run into these false positives all the time. They are usually touted as proof of the efficacy of an altie treatment modality. Usually, reading meta-analyses of the multiple results yields a better picture of what is most likely to be true, but these one-off studies are milked for all they are worth by the altie proponents. I can use this knowledge of prior probability to enhance my understanding of a particular study and make an evaluation as to its merits.

  3. I think what you call the “file-drawer effect” is pretty much the same as what I know as “publication bias”, the tendency to publish positive results as these are usually more interesting than negative ones. (That’s even without the additional effect in drug trials of the commercial pressure to, um, er, de-emphasise negative results.)

    I wonder if it is fair of Dr Gorski to grumble at NCCAM for funding research on herbal remedies and dietary interventions which could be considered mainstream and scientific. In effect, Gorski says that they shouldn’t be doing scientific research (because that’s for scientific researchers), and that it’s unacceptable that NCCAM’s other research is unscientific! NCCAM is damned if they do, damned if they don’t!

    Unfortunately I think scientists have to accept (however grudgingly!) that the public funds this stuff and that it has a right to choose the sort of medicine it desires, even if this is nonsense. We can only try to educate.

  4. Johnny Vector

    Didn’t Orac post about this a while back? Someone should put him in touch with Dr. Gorski; I think they’d get along famously.

    I agree with Dr. Gorski that putting things like herbs and diet under NCCAM is a mistake, since it serves to legitimize the idea of “alternative” medicine. Like the box of “homeopathic” cold remedy I saw in a Fresh Fields one time, which claimed to be clinically proven to reduce cold symptoms. Well sure, since it was something like a “2X” preparation of ephedra extract. Uh, yeah, that works for me too, only I buy it as Sudafed, and it’s NOT HOMEOPATHIC.

    People see this product that’s clinically shown to work, and because it’s “homeopathic”, they transfer that patina of scientific respectability to other homeopathic codswallop. Same thing with treating real medicine as “alternative”. As Orac is fond of saying, there is no “alternative”. There’s “does more than placebo” and “doesn’t do more than placebo”.


    This also reminds me of back in the 90’s when Paul Brodeur was scarifying everyone with warnings of death by ElectroMagnetic Fields. I went to my school library and looked up several (four or five, IIRC) papers reporting ‘positive’ results regarding EMF causing cancers. OMG, I have never seen such shoddy research. As a physicist, reading these papers all but convinced me that there was no scientific value whatever in the entire field of epidemiology. I think I still have them, actually. I should re-read them some night for entertainment.

    The briefest of considerations in the general direction of prior likelihood would have prevented any of those papers from being published. And for sure none of them would have had even a wafer-thin chance of publication if they had not found a positive result.

    But I digress. Anyway, thanks for the article.
