Science-Based Medicine: Placebo Myths Debunked

Cheshire

Senior Member (Voting Rights)
Several decades and thousands of studies later, the most popular CAM modalities (homeopathy, acupuncture, reiki, manipulation for medical indications, and more) have been shown to be no more effective than placebo. This means they don’t work.

Not to be deterred by reality, CAM proponents simply shifted the goal posts. Now many of them are saying that placebo effects are real, and therefore being as effective as placebo means that their treatments “work.” As part of this strategy they have promoted and amplified common myths about placebo effects. Let’s take a closer look at these myths and show why they are wrong.


https://sciencebasedmedicine.org/placebo-myths-debunked/
 
The main thing to know is that the idea of the powerful placebo was based on an article published in 1955, which jumped to the conclusion that suggestion has healing power instead of carefully considering other explanations for why patients often improved even with a sham treatment.

Some reasons an illness can improve or falsely appear to improve even without effective treatment:

- Many illnesses get better without treatment.
- Changes in how patients perceive, think about, or report their symptoms, for example answering out of politeness or hope, or undergoing therapy that aims to change how they view the illness.
- Patients are receiving some other treatment at the same time.
- Fluctuations in illness severity, occurring by chance at a time that creates the appearance of a sham treatment working (regression to the mean; see the small simulation sketch below).
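As a rough illustration of that last point, here is a toy simulation (my own sketch with made-up numbers, not anything from the article). Patients whose underlying condition is completely stable are enrolled only when a noisy symptom score happens to be bad, and are then measured again with no treatment at all:

```python
# Toy simulation of regression to the mean (hypothetical numbers).
# Patients enrol when a noisy symptom score happens to be at its worst,
# so the follow-up looks better on average even though nothing changed.
import random

random.seed(0)

def symptom_score(true_level: float) -> float:
    """One noisy measurement around a patient's stable true severity (higher = worse)."""
    return true_level + random.gauss(0, 10)

improvements = []
for _ in range(10_000):
    true_level = 50.0                        # stable underlying severity
    at_enrolment = symptom_score(true_level)
    if at_enrolment < 60:                    # only people who currently feel bad enrol
        continue
    at_followup = symptom_score(true_level)  # nothing was treated in between
    improvements.append(at_enrolment - at_followup)

print(f"Apparent mean improvement: {sum(improvements) / len(improvements):.1f} points")
```

Nothing in the simulation is treated, yet the enrolled group shows a clearly positive average "improvement", purely because of when they were selected.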

There seems to be a great deal of confirmation bias in the literature promoting the idea of the powerful placebo.
 
It is interesting to note that there has been a study in which ME/CFS patients were treated with homeopathy. It is difficult to compare the results directly with the PACE study, as Weatherley-Jones et al used the Multidimensional Fatigue Inventory (MFI) for outcomes. However, if the improvement is normalized by its standard deviation (SD), fatigue in the patients who underwent homeopathy improved by 0.7, fatigue in the PACE CBT group improved by 1.0, and fatigue in the PACE GET group improved by 1.2.* Note that the homeopathy study was blinded, and that the placebo response would probably have been even larger in a non-blinded trial.

This comparison suggests that the order of magnitude of the subjective improvements in PACE is consistent with a placebo response. If Peter White and colleagues believe that subjective outcomes are appropriate because “that is how the illness is defined”, then they should also promote homeopathy as a useful treatment.

Weatherley-Jones E, Nicholl JP, Thomas KJ, et al. A randomised, controlled, triple-blind trial of the efficacy of homeopathic treatment for chronic fatigue syndrome. J Psychosom Res. 2004;56(2):189–97.

* As the SDs of the outcomes were specified at baseline and at the end of treatment in PACE, I derived the SD for the difference under the assumption that the baseline values and the improvements were independent.
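For concreteness, the calculation looks like this in code (a minimal sketch with placeholder numbers, not the actual PACE or Weatherley-Jones values): if the improvement is independent of the baseline value, then Var(end) = Var(baseline) + Var(improvement), so the SD of the improvement follows from the two reported SDs.

```python
# Sketch of the footnote's calculation, assuming the improvement is independent
# of the baseline value, so Var(end) = Var(baseline) + Var(improvement).
# All numbers below are placeholders, not actual trial values.
import math

def sd_of_improvement(sd_baseline: float, sd_end: float) -> float:
    """SD of the within-patient change, derived from the two reported SDs
    (only meaningful when sd_end > sd_baseline)."""
    return math.sqrt(sd_end**2 - sd_baseline**2)

def standardised_improvement(mean_change: float, sd_baseline: float, sd_end: float) -> float:
    """Mean change expressed in units of its own SD, i.e. the kind of figure quoted above."""
    return mean_change / sd_of_improvement(sd_baseline, sd_end)

# Hypothetical illustration: a 3-point mean change with SDs of 4 (baseline) and 5 (end).
print(standardised_improvement(mean_change=3.0, sd_baseline=4.0, sd_end=5.0))  # 1.0
```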
 
Not to be deterred by reality, CAM proponents simply shifted the goal posts. Now many of them are saying that placebo effects are real, and therefore being as effective as placebo means that their treatments “work.”
How familiar is that! The BPS crew's reaction to their unwarranted reliance on subjective outcome measures is to downplay the importance of objectivity, and instead to get creative about why subjective outcomes are adequate.
 
Damn, I guess this means that we can't take the word of royal German veterinarians as proof that homeopathy works :rolleyes::
It should already be obvious, however, that these assumptions are incorrect. There are many sources of placebo effects that do not depend upon the subject knowing they are being treated, such as regression to the mean, the self-limiting nature of many ailments, and non-specific effects or benefits from simultaneous interventions.

Further, however, someone has to determine that the animal or baby has improved. That person is vulnerable to biased perception and reporting, and will also contribute to any measured effect.

This means that studies of treatments in animals or babies still need to be properly controlled, and whoever is assessing the outcome needs to be properly blinded to treatment allocation.

I don't completely agree with this regarding the size of the placebo effect. If it's deliberately exploited as part of the treatment process, it makes sense that the placebo effect will be bigger, such as with deliberate brain-washing regarding symptom denial:
There is also no evidence for the second part, that alternative practitioners elicit more of a placebo effect. What the scientific evidence shows is that all interventions will produce some placebo effect, depending mainly on the outcome to be followed. The more subjective and amenable to variables such as mood, the larger the measured effect will be.
 
Can we include this article in our science library please @Cheshire, because I find it very educational.
Done. See https://www.s4me.info/index.php?thr...l-of-that-shiny-new-study-a-bibliography.212/ I've included the following summary:
In this article, Novella explains that the placebo effect isn’t one thing, but rather a collection of spurious effects that can affect trial outcomes. These include: a) regression to the mean (people often enrol in studies at their worst, so there is a good likelihood of some spontaneous improvement during the trial); b) bias in perceiving and reporting subjective symptoms where improvement is expected; and c) indirect effects, such as increased compliance with other treatment recommendations. The article further points out that the placebo effect is unlikely to be “real”, because in virtually all cases, no accompanying improvements are seen on objectively measurable outcomes. The only exception to this is likely to be certain psychological complaints, such as anxiety, where the promise of possible improvement might have a direct beneficial effect on the person’s state of mind.
Yay, first time we've cross-referenced an article in the library with its corresponding S4ME discussion!

Note in that section of the library, there are loads of articles on the placebo effect, expectation effects and other artefacts that affect clinical trials. There are also some that assess how big these effects might be.
 
I don't completely agree with this regarding the size of the placebo effect. If it's deliberately exploited as part of the treatment process, it makes sense that the placebo effect will be bigger, such as with deliberate brain-washing regarding symptom denial
The Lilienfeld article in the science library talks more comprehensively about spurious factors affecting psychotherapy trials in particular:

https://www.s4me.info/index.php?thr...l-of-that-shiny-new-study-a-bibliography.212/

Lilienfeld et al refer to this effect as "Response shift bias" (you're taught to evaluate your symptoms differently, and this has a direct effect on self-reported symptoms, in the absence of genuine change).
 
Lilienfeld et al refer to this effect as "Response shift bias" (you're taught to evaluate your symptoms differently, and this has a direct effect on self-reported symptoms, in the absence of genuine change).
So people would especially self-report differently if they are convinced they have a real underlying illness, but are reprogrammed to then believe they are fundamentally out of condition and can exercise their way back to health again. Presumably there must be a timing issue here also, because the self-reporting will be affected by whether the subject is on the crest of a euphoric wave of optimism following their reprogramming, or later after the bubble has burst and the truth dawns once more.
 
Presumably there must be a timing issue here also, because the self-reporting will be affected by whether the subject is on the crest of a euphoric wave of optimism following their reprogramming, or later after the bubble has burst and the truth dawns once more
Totally. Hence PACE's CBT and GET "worked" for a while, and then stopped working (no benefit at long-term follow-up).
 
Lilienfeld et al refer to this effect as "Response shift bias" (you're taught to evaluate your symptoms differently, and this has a direct effect on self-reported symptoms, in the absence of genuine change).

That is basically PACE with CBT/GET, as well as SMILE. Crawley's trials are designed to use this effect to get a positive result.

Presumably there must be a timing issue here also, because the self-reporting will be affected by whether the subject is on the crest of a euphoric wave of optimism following their reprogramming, or later after the bubble has burst and the truth dawns once more.

I think there is also a timing issue in when forms are filled out. If you give people the chance to send forms back over a month or two they may wait till they are at their best point and report that.
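As a rough illustration of how much difference that could make, here is a toy simulation (made-up numbers, nothing to do with any actual trial) comparing a fixed reporting date with sending the form back at your best point in the window:

```python
# Toy simulation of "send the form back when you feel best" (hypothetical numbers).
# Reporting the best of several weeks, rather than a fixed week, inflates the
# apparent score even though the underlying state never changes.
import random

random.seed(1)

def weekly_score(true_level: float) -> float:
    """One noisy self-report around a stable true level (higher = better)."""
    return true_level + random.gauss(0, 10)

fixed_date, best_of_window = [], []
for _ in range(10_000):
    true_level = 50.0
    weeks = [weekly_score(true_level) for _ in range(8)]  # an eight-week return window
    fixed_date.append(weeks[0])        # form filled in on a set date
    best_of_window.append(max(weeks))  # form sent back at the best point

print(f"Fixed date:     {sum(fixed_date) / len(fixed_date):.1f}")
print(f"Best of window: {sum(best_of_window) / len(best_of_window):.1f}")
```

The second average comes out noticeably higher, even though every simulated patient is exactly as ill at every point.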
 
So wouldn't it be much more ethical to simply inject people with morphine and then get them to fill in a questionnaire saying how they feel - and then produce a paper or 27 that says morphine relieves some symptoms and improves the sense of well-being in those with chronic illness?

Coz apart from the fact that it would be more reproducible, and "easier" on people's health, I can't see much of a difference - well and cost - it doesn't cost £5 million to supply a few hundred people for a few months.

....and whilst it is unlikely to improve anyone's long-term health, it's also unlikely to damage them either; the same cannot be said for GET/CBT.

I'm not suggesting this as a "treatment", I'm simply using it as an example to point out the absurdity.

Hopefully.
 
Wonko said:
I can't see much of a difference - well and cost - it doesn't cost £5 million to supply a few hundred people for a few months.

Yeah, but let them BPS people have their share of the cake...it's quite unfair to give everything to pharma, isn't it? :D
 
I think there is also a timing issue in when forms are filled out. If you give people the chance to send forms back over a month or two they may wait till they are at their best point and report that.

I can absolutely agree with that, at least in my case. When I realized I was actually doing that, I fixed a certain date to fill out the forms. I can also agree that I wanted to please the clinician. I am very strict, though, and honest in that respect, so I disciplined myself.

Interesting topic, and thanks for the information!
 