The pervasive problem with placebos in psychology: Why active control groups are not sufficient..., 2013, Boot et al.

Ha.

If you ask them whether they are OK with magical healing crystals benefiting from "expectancy effects", you will find a very different answer.

The true believers in the "powerful placebo effect" have no problem with magical healing crystals, though some feel that mainstream medicine should have a monopoly over placebo effects (rather than complementary/alternative medicine).
 
That's true. But I think that only applies to etiological research (epidemiology and pathomechanisms), not to clinical trials, i.e. research on treatments/interventions as discussed here?

I am not sure what you mean. The psychological research that is being done needs control groups with accepted physical diseases. Take the PACE trial. They found a large number of dropouts in the GET arm but assumed these were related to social factors (moving home and so on), not the treatment. They also found a marginally significant effect in this group at the end of the trial.

If they had had a control group with a physical disease, graded exercise might well have made them fitter and there might well have been fewer dropouts. That is, it would have put the results into context.

The other groups may have had similar results.

The BPS have made psychological interventions the only treatments researched, but this leaves out any effect of long-term chronic illness, and that has to be controlled for.
 
The true believers in the "powerful placebo effect" have no problem with magical healing crystals, though some feel that mainstream medicine should have a monopoly over placebo effects (rather than complementary/alternative medicine).
I still stand by my argument that these folks defending the status quo for psych trials would make a huge ruckus if anyone tried to publish a non-blinded study showing support for magic healing crystals.

Supporting evidence here:

Fiedorowicz, J. G., Levenson, J. L., & Leentjens, A. F. (2021). When is lack of scientific integrity a reason for retracting a paper? A case study. Journal of Psychosomatic Research, 144, 110412
 
The whole problem with expectation bias is that it will move its goalposts whenever you try to ring-fence it.
This sounds akin to the problem that used to occur with older-style voltmeters, AVO voltmeters etc., where the very act of connecting the meter could change the voltage you were trying to measure. It was much less of a problem in that case, however, because the meter's characteristics were clearly defined, so any reading could be compensated for appropriately. But if the measuring instrument's characteristics can shift indeterminately under your feet, then there is no way to correct the readings.
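To put rough numbers on that analogy (a minimal sketch of my own, with made-up component values, not anything from the paper): the source's internal resistance and the meter's input resistance form a voltage divider, so the meter under-reads by a factor of R_m / (R_m + R_s). When R_m is known, the reading can be corrected exactly; when it can shift indeterminately, it cannot.

```python
# Minimal sketch of meter loading (hypothetical values throughout).
# A meter with input resistance r_m connected across a source with
# internal resistance r_s sees a voltage divider, so it displays
# v_true * r_m / (r_m + r_s) rather than v_true.

def reading(v_true, r_s, r_m):
    """Voltage the meter actually displays under divider loading."""
    return v_true * r_m / (r_m + r_s)

v_true = 10.0        # true source voltage (volts)
r_s = 10_000.0       # source internal resistance (ohms)

# Case 1: known meter resistance -- the error is determinate,
# so the true voltage can be recovered from the reading.
r_m_known = 100_000.0
v_read = reading(v_true, r_s, r_m_known)
v_corrected = v_read * (r_m_known + r_s) / r_m_known
print(f"read {v_read:.2f} V, corrected {v_corrected:.2f} V")  # 9.09 V -> 10.00 V

# Case 2: unknown or drifting meter resistance -- the same source
# gives different readings, and no single correction factor exists.
for r_m in (50_000.0, 100_000.0, 200_000.0):
    print(f"r_m = {r_m:>9.0f} ohms -> reads {reading(v_true, r_s, r_m):.2f} V")
```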
 
I still stand by my argument that these folks defending the status quo for psych trials would make a huge ruckus if anyone tried to publish a non-blinded study showing support for magic healing crystals.

Supporting evidence here:

Fiedorowicz, J. G., Levenson, J. L., & Leentjens, A. F. (2021). When is lack of scientific integrity a reason for retracting a paper? A case study. Journal of Psychosomatic Research, 144, 110412
A good exercise for this would be to submit a BPS-ME/MUS paper with the fewest changes possible, swapping everything related to whatever behavioral crap they are doing with healing crystals. It would likely not be accepted, and no doubt the methodological aspects would be criticized, but only because of the use of healing crystals; otherwise the methods are basically standard practice for clinical psychology. The very flaws they dismiss as irrelevant would be all that matters, with the insistence that it's not because healing crystals are involved, even though of course it would be.

I'm pretty sure of the outcome of this experiment. It's entirely because belief in the magical powers of the mind overrules everything and turns critical thinking off entirely.
 
I wanted to draw people's attention to some interesting bits in this paper.

They include a section where they address common counterarguments against controlling for expectancy effects in psychological intervention designs. Here are the four arguments they consider; the final two are ones we hear a lot from the BPS brigade. (A toy simulation illustrating the first counterargument follows the quoted excerpt.)
Boot et al paper said:
Is it unfair to demand adequate testing of and control for placebo effects in all psychological interventions? We think not, but others may disagree. Below we address several of the more common reactions to these guidelines that we have encountered in our discussions with colleagues and in the literature.

The requirement to control for placebo problems will make it too difficult to “get an effect”
In other words, imposing a requirement for adequate active control conditions will produce too many false negatives in studies of training benefits (Schubert & Strobach, 2012). Balancing the risk of missing a real effect against the risk of false positives is essential. However, those risks must be considered in light of the consequences of not knowing whether effects are due to the treatment itself or to participants’ expectations. We do not see why controlling for the confound of differential expectations undermines the chances of finding a true benefit if one exists.

The early, exploratory stages of research should tolerate less rigorous adherence to methodological standards
Perhaps the initial study in a field should have license to use less-than-ideal control conditions to identify possible treatments if the authors acknowledge those limits. Even then, a study lacking appropriate controls risks wasting effort, money, and time as researchers pursue false leads. Moreover, the methods of an initial, flawed study can become entrenched as standard practice, leading to their perpetuation; new studies justify their lack of control by citing previous studies that did the same. For that reason, we argue that any intervention, even one addressing a new experimental question, should include adequate tests for expectation effects.

Our methods are better than those used in other psychology intervention studies
All intervention studies should use adequate controls for placebo effects, and the fact that other studies neglect such controls does not justify substandard practices. For example, the use of active control conditions in the video-game-training literature is better than the common use of no-contact controls in the working-memory-training literature, but that does not excuse the lack of placebo controls in either. “Everyone else is doing it” does not justify the use of a poor design.

Converging evidence overcomes the weaknesses in any individual study, thereby justifying causal conclusions
Replication and converging evidence are welcome, but convergence means little if individual studies do not eliminate confounds. In some areas, such as the video-game literature, researchers often appeal to cross-sectional data comparing gamers with nongamers as converging evidence that games cause changes in perception and cognition. Of course nonexperimental studies suffer from a host of other problems (namely third variable and directionality problems), and such designs do not permit any causal conclusions (Boot et al., 2011; Kristjánsson, 2013). Converging evidence is useful in bolstering causal claims only to the extent that we have confidence in the methods of the individual studies providing the evidence.
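On the first of those counterarguments, here is a toy simulation of my own (not from Boot et al.; the sample size and the 0.5 SD expectation boost are made-up assumptions): a treatment with zero true effect, paired with nothing more than a modestly higher expectation of benefit in the treatment arm, yields a large and highly "significant" group difference on a self-report outcome. Controlling for differential expectations removes this artefact, not the chance of detecting a real benefit.

```python
# Toy simulation (my own illustration, not from the paper): a null
# treatment plus differential expectations produces a spurious
# "effect" on a self-reported outcome.
import random
import statistics

random.seed(1)
N = 200  # participants per arm (assumed)

def self_report(expectation_boost):
    # The true treatment effect is zero; the reported outcome is
    # noise plus whatever improvement the participant expects.
    return random.gauss(0.0, 1.0) + expectation_boost

treatment = [self_report(0.5) for _ in range(N)]  # expects benefit (+0.5 SD)
control = [self_report(0.0) for _ in range(N)]    # expects nothing

diff = statistics.mean(treatment) - statistics.mean(control)
se = (statistics.variance(treatment) / N + statistics.variance(control) / N) ** 0.5
print(f"group difference = {diff:.2f} SD, approx z = {diff / se:.1f}")
# Prints a z of roughly 5: "significant" at any conventional threshold,
# even though the treatment itself did nothing.
```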
 
A good exercise for this would be to submit a BPS-ME/MUS paper with the fewest changes possible, swapping everything related to whatever behavioral crap they are doing with healing crystals. It would likely not be accepted, and no doubt the methodological aspects would be criticized, but only because of the use of healing crystals; otherwise the methods are basically standard practice for clinical psychology. The very flaws they dismiss as irrelevant would be all that matters, with the insistence that it's not because healing crystals are involved, even though of course it would be.

I'm pretty sure of the outcome of this experiment. It's entirely because belief in the magical powers of the mind overrules everything and turns critical thinking off entirely.
Brilliant, @rvallee!
 
I wanted to draw people's attention to some interesting bits in this paper.
'The requirement to control for placebo problems will make it too difficult to “get an effect”'

We actually do see this (or something close to it) being used. Considering that blinding is a form of controlling for unwanted effects in trials, the BPS crew have been known to say a good few times that because it is impossible to fully blind psych trials, they have to go with what they can manage. But as noted plenty of times before (@Jonathan Edwards first highlighted this, I believe), if the only kind of trial you can do is a cr@p one, then that is no excuse to treat it as if it were a good one ... it is just plain wrong.
 