The placebo effect: how significant is it?

Are placebos really effective? So asks Darshak Sanghavi in Slate, citing a 2001 study showing that the placebo effect, compared with passive observation, is relatively minor for improvements in pain or in objective measures of health.

This is an interesting topic, but unfortunately a really bad article. Given how many alties love to stress the role of placebo as apparent proof of the benefit of positive thinking, we should critically re-evaluate the evidence that placebos on their own can do anything more than improve subjective symptoms, and there is a fair amount of evidence that even for those the effect is a lot less significant than many believe. It would be worth evaluating the effect of placebo itself more rigorously, if ethically possible, for specific symptoms and illnesses.

It’s an interesting article all the same and deserves some consideration, but I worry that Sanghavi’s analysis is so unsophisticated it damages an otherwise worthy goal. For one, he starts with a pretty egregious genetic fallacy:

Beecher’s paper is highly suspect. Half the studies he cited were his own, and his math was, frankly, misleading. Ted Kaptchuk, an associate professor at Harvard Medical School and former FDA expert panelist, dismisses the paper as a “polemical ploy,” and other researchers have derided Beecher as “statistically naïve to the extreme.” And yet Beecher’s paper and the notion of a powerful placebo effect have escaped widespread scrutiny. For decades, mainstream medicine has uncritically promoted faith in the placebo effect–leaving behind reality-based science in the process.

One should consider the possibility that the reason the idea survived a less-than-perfect start is that it actually was a really good one, and that placebo remains a necessary and important control for clinical trials. While I would agree with Sanghavi that harnessing the placebo effect therapeutically is at best useless and at worst grossly irresponsible, he seems to be challenging the use of placebos and sham operations in clinical trials themselves, and he uses simplistic and incorrect arguments to do so:

There are other similar trials: In the late 1990s, veterans enrolled in a Houston study of knee arthritis were assigned randomly to have either real knee surgery or a sham procedure, which consisted of being sedated, getting prepped in the operating room, having four superficial knee incisions, and hearing simulated splashing sounds. In 2005, 74 migraine sufferers in England had experimental devices snaked through a vein (subscription required) in their groin and implanted into the heart; 73 people in a sham group got the groin incisions without the device. More commonly, though, placebo trials involve dummy medication. Since 1962, the Food and Drug Administration has required all new drug approval trials–like those for high cholesterol, AIDS, cancer, and depression–to include a placebo group, where half the patients get inactive pills to create the false impression of therapy.

This is a simplistic and incorrect view of how clinical trials are performed. An ethical trial requires that patients receive, at a minimum, the standard of care. If there is an effective treatment, they get it; the drug being tested is compared to the standard of care unless there is no alternative treatment. While it is unfortunate that sham operations are invasive, I can understand including one as a control in certain cases, though I agree the practice probably should be reduced and replaced with passive observation.

Sanghavi then goes on to spit out a rather silly straw man:

There’s no question that placebos have psychological effects. The question is whether those effects really trigger healing on their own. For too long, medical science has accepted the magical thinking that patients’ beliefs could activate dying neurons, heal knee cartilage, prevent air bubbles from traveling through the heart to cause migraines, lower bad cholesterol, and even cure cancer and AIDS.

Alties like Andrew Weil may promote the idea that placebos lead to such incredible improvements, but I think most doctors realize the placebo is largely a subjective intervention, and that any improvement in objective measures is due to chance or some other variable. Thinking positively may decrease stress, prevent depression and thereby alleviate some illness, mask symptoms, improve outlook, etc., but it is not going to lead to physically impossible findings like cartilage growing back or AIDS being cured. I would like to see the citations showing that any reasonable clinician believes this (at which point I would categorize that doc as unreasonable).

Sanghavi does convince me that the evidence for extraordinary placebo effects may be overblown; certainly the type of silliness that Weil advocates is over the top. However, placebo itself is a necessary intervention for a properly controlled trial, and even the authors of the Danish study he touts so strongly include that as their final conclusion. They only challenged the notion that placebos are valuable outside RCTs.
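
To make the statistical point concrete, here is a toy simulation of why a concurrent control arm is needed at all; the numbers and names below are invented purely for illustration and are not taken from any real trial. Patients tend to enroll when their symptoms happen to be bad, and many conditions improve on their own, so an untreated group shows apparent improvement from regression to the mean and natural history alone.

    import random
    import statistics

    random.seed(0)

    def simulate_untreated_patient():
        """Pain score (higher is worse) at enrollment and at follow-up.

        Purely illustrative assumptions: patients enroll during a flare
        (regression to the mean) and the condition improves somewhat on
        its own over time (natural history).
        """
        true_severity = random.gauss(6.0, 1.0)                  # underlying chronic level
        enrollment = true_severity + abs(random.gauss(0, 1.5))  # enrolled during a flare
        natural_history = 0.8                                   # average spontaneous improvement
        followup = true_severity - natural_history + random.gauss(0, 1.0)
        return enrollment, followup

    # A "no treatment" arm: nobody gets a pill, real or fake.
    changes = []
    for _ in range(10_000):
        before, after = simulate_untreated_patient()
        changes.append(before - after)

    print(f"Mean improvement with no intervention at all: {statistics.mean(changes):.2f} points")
    # A single-arm study that credited this improvement to "placebo" (or to an
    # active drug) would be fooled; a concurrent control arm is what lets a
    # trial subtract this background improvement out.

Comparing placebo arms to no-treatment arms, as Hrobjartsson and Gotzsche did, is essentially this subtraction performed on real trial data: whatever the sugar pill adds beyond natural history and regression to the mean is the placebo effect proper.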


Comments

  1. I have quite an extensive blog post on the actual physiology behind the placebo effect:

    http://daedalus2u.blogspot.com/2007/04/placebo-and-nocebo-effects.html

    The effect is quite real and is not “psychological” at all. It derives from the normal regulation of the allocation of resources between different tasks. When an organism is in an acute “fight or flight” state, allocating resources to repairing damaged tissue is inefficient. If the predator catches the organism, the state of repair of its tissues is irrelevant to survival. The “optimum” response is to allocate whatever resources are necessary to escape, even if the organism sustains damage as a result. Any injury short of death is better than being caught by a predator.

    This is why organisms can run themselves to death. That ability is a “feature”: it is balanced against survival when fleeing predators. Evolution has minimized the sum of deaths from being caught by predators plus deaths from running oneself to death.

    The placebo effect is the “standing down” of the “fight or flight” state and the restoration of the default state in which cellular repair occurs.

  2. Since 1962, the Food and Drug Administration has required all new drug approval trials–like those for high cholesterol, AIDS, cancer, and depression–to include a placebo group, where half the patients get inactive pills to create the false impression of therapy.

    As you say, this is wrong. Example: FDA approved Lantus (insulin glargine) in 2000 on the basis of three Phase 3 studies in type 1 diabetes plus two Phase 3 studies in type 2 diabetes. All used NPH insulin as an active control. There were no placebos, and none of the studies were blinded. From the FDA Medical Review:

    Studies were unblinded because HOE 901 [Lantus] is a clear solution while NPH is a suspension. Prior to an injection, patients need to verify that the HOE 901 solution is clear and that the NPH is uniformly suspended. Blinding would therefore not be medically appropriate and sham injections would have severely limited the ability to recruit volunteers.

    I’m sure there are plenty of other examples. This is just one I happen to know.

  3. bob koepp

    Just being pedantic (or maybe cranky), but the placebo effect can be quite significant, even if placebos themselves are totally “ineffective.” Indeed, actual placebos are _assumed_ to be “medically inert.”

  4. It is difficult to reconcile this recent meta-analysis with the older literature showing quite substantial placebo effects. Interpretation is complicated by the fact that the way such studies are done has changed with shifts in ideas of medical ethics. There was a time when it was considered ethical to keep subjects in the dark about the fact that some of them would be receiving placebos, whereas the modern idea of informed consent requires that subjects be told they may be receiving a placebo. It seems plausible that placebo effects under these conditions are smaller than when subjects have no idea that the pill they are taking might not contain active drug.

    I’m also somewhat surprised that Hrobjartsson and Gotzsche’s description of their meta-analysis does not say anything about excluding studies in which there is a placebo run-in and placebo responders are excluded from the final study, as this is a rather common practice.
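
    As a toy illustration of that bias (the numbers are invented purely for the sake of example), a run-in that screens out placebo responders before randomization will shrink the placebo response measured in the randomized phase:

        import random
        import statistics

        random.seed(1)

        # Each hypothetical patient has a fixed "placebo responsiveness";
        # the change observed at any visit is that responsiveness plus noise.
        def placebo_change(responsiveness):
            return responsiveness + random.gauss(0, 1.0)

        patients = [random.gauss(1.0, 1.0) for _ in range(20_000)]

        # No run-in: measure the placebo response in everyone.
        no_run_in = [placebo_change(r) for r in patients]

        # Placebo run-in: drop anyone who improved by more than 1.5 points
        # during the run-in, then measure the placebo response in the rest.
        kept = [r for r in patients if placebo_change(r) <= 1.5]
        after_run_in = [placebo_change(r) for r in kept]

        print(f"Mean placebo response, no run-in:    {statistics.mean(no_run_in):.2f}")
        print(f"Mean placebo response, after run-in: {statistics.mean(after_run_in):.2f}")
        # The run-in preferentially removes the strongest placebo responders, so
        # trials that use one will understate the placebo effect, and a
        # meta-analysis that does not exclude such trials inherits that bias.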
