Is it unfair to demand adequate testing of and control for placebo effects in all psychological interventions? We think not, but others may disagree. Below we address several of the more common reactions to these guidelines that we have encountered in our discussions with colleagues and in the literature.
The requirement to control for placebo effects will make it too difficult to “get an effect”
In other words, imposing a requirement for adequate active control conditions will produce too many false negatives in studies of training benefits (Schubert & Strobach, 2012). Balancing the risk of missing a real effect against the risk of false positives is essential. However, those risks must be considered in light of the consequences of not knowing whether effects are due to the treatment itself or to participants’ expectations. We do not see why controlling for the confound of differential expectations undermines the chances of finding a true benefit if one exists.
The early, exploratory stages of research should tolerate less rigorous adherence to methodological standards
Perhaps the initial study in a field should have license to use less-than-ideal control conditions to identify possible treatments if the authors acknowledge those limits. Even then, a study lacking appropriate controls risks wasting effort, money, and time as researchers pursue false leads. Moreover, the methods of an initial, flawed study can become entrenched as standard practice, leading to their perpetuation; new studies justify their lack of control by citing previous studies that did the same. For that reason, we argue that any intervention, even one addressing a new experimental question, should include adequate tests for expectation effects.
Our methods are better than those used in other psychology intervention studies
All intervention studies should use adequate controls for placebo effects, and the fact that other studies neglect such controls does not justify substandard practices. For example, the use of active control conditions in the video-game-training literature is better than the common use of no-contact controls in the working-memory-training literature, but that does not excuse the lack of placebo controls in either. “Everyone else is doing it” does not justify the use of a poor design.
Converging evidence overcomes the weaknesses in any individual study, thereby justifying causal conclusions
Replication and converging evidence are welcome, but convergence means little if individual studies do not eliminate confounds. In some areas, such as the video-game literature, researchers often appeal to cross-sectional data comparing gamers with nongamers as converging evidence that games cause changes in perception and cognition. Of course, nonexperimental studies suffer from a host of other problems (namely, third-variable and directionality problems), and such designs do not permit any causal conclusions (Boot et al., 2011; Kristjánsson, 2013). Converging evidence is useful in bolstering causal claims only to the extent that we have confidence in the methods of the individual studies providing the evidence.