But, really, exactly how many angels can dance on the head of a pin? This is the true question of our time, surpassing all other issues.
If this is any indication of the state of self-reflection about randomized psychological trials, things are not going to change any time soon. By that I mean that if society did not change (which it will), the whole field would still be stuck in the same place at the end of the century. There is something genuinely silly about a discussion that focuses exclusively on controlling for effects while ignoring the fact that a properly controlled psychological trial is not even possible in the first place, and that even drug trials, where everything but one thing can be controlled, still fail regularly.
Maybe that is simply because the author chose to focus on that single point, but entirely missing is the much larger problem of excessive bias, which makes any form of control about as useful as a net for stopping water. There is so much bias in psychological studies, and even more in trials. Everyone who runs a trial wants their intervention to succeed, an intervention they more often than not developed themselves, or more accurately assembled by tweaking and borrowing a few things that have already been done countless times before. More often than not they will even evangelize their results when the trial fails, and that is despite having done everything they could get away with to force a positive result.
The outcomes achieved in rigorously controlled RCTs are usually diminished in clinical practice. This phenomenon, referred to as “voltage drop” or research-to-practice gap [6], is common across medicine, but has some unique considerations in psychological treatments. Many of the research procedures necessary to ensure internal validity reduce generalizability. For example, treatment integrity processes are quality controls that can strengthen treatment potency but are not processes that commonly exist in real-world settings. Subsequent implementation trials are necessary to evaluate the effects of a treatment under real-world conditions [7].
In anything having to do with psychology, there is no such thing as a rigorously controlled RCT. It has never happened, not once. Every time someone says they want to try, they end up tweaking this and changing that until they can report biased positive outcomes. But they can't seem to imagine that. The author seems to have swallowed the idea that the gap exists not because those trials are actually, legitimately bad (a problem that exists even in proper double-blinded drug RCTs), but merely because reality gets in the way of the biased artificial conditions; and since the biased artificial conditions are preferred, ways must be found to get reality to bend to the researchers' desires.
As if there were any way to accomplish what they set out to do here. To speak of treatment integrity processes at all, given everything we've seen and how standardized the mediocrity is, is simply absurd. Those treatments fail in real life because they would have failed in rigorous trials, but the standard has always been simply to cheat. You can't cheat nature. You can cheat people. You can cheat millions of people, especially when you can lie, and even better when you have a legal mandate and an exemption for it. But you can't cheat nature. It does not bend to people's will. It simply doesn't care what you want to be true. It doesn't even care what's true, because nature IS what's true.
In summary, well-controlled RCTs of psychological interventions are necessary for the protection of all stakeholders, including patients, from ineffective treatments. Considerations unique to RCTs for psychological interventions include the definition of control conditions and ensuring that treatment integrity procedures are consistent across treatment arms.
They are necessary for that. And because the field has failed at that, patients have in fact been subjected to harmful nonsense for decades, directly and indirectly. There is no such thing as a well-controlled RCT of a psychological intervention, because such trials are entirely about changing people's perceptions while using people's reported perceptions as the outcomes, and the second you try to do that, you have failed. There are only randomized trials here, and they are all lousy. This is why the effects of all treatments go down with time, and why the *cebo effects also go down with time. Or seem to go down, because there was actually never anything there, just lousy methodologies and biased interpretation.
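To make that point concrete, here is a minimal sketch, mine and not from the paper or from any real trial data: simulate an intervention whose true effect is exactly zero, score it on several noisy self-report scales, and let the analyst headline whichever scale happens to look best. A "positive" effect appears out of nothing and then evaporates on an honest re-test of the same outcome. The number of outcomes, the arm sizes, and the function names are all assumptions made purely for illustration.

```python
# Illustrative sketch only: a null "intervention", several self-report outcomes,
# and an analyst who reports whichever outcome looks best.
import numpy as np

rng = np.random.default_rng(0)

def run_trial(n_per_arm=50, n_outcomes=8):
    """One trial of a treatment with a TRUE effect of zero on every outcome."""
    control = rng.normal(0.0, 1.0, size=(n_per_arm, n_outcomes))
    treated = rng.normal(0.0, 1.0, size=(n_per_arm, n_outcomes))  # no real effect
    diffs = treated.mean(axis=0) - control.mean(axis=0)
    headline = diffs[int(np.argmax(diffs))]      # the outcome chosen for the headline
    # Honest re-test of that headlined outcome: fresh data, same zero effect.
    retest = rng.normal(0.0, 1.0, n_per_arm).mean() - rng.normal(0.0, 1.0, n_per_arm).mean()
    return headline, retest

results = [run_trial() for _ in range(2000)]
print(f"mean 'published' effect: {np.mean([r[0] for r in results]):.3f}")  # reliably above zero
print(f"mean effect on re-test:  {np.mean([r[1] for r in results]):.3f}")  # hovers around zero
```

And this sketch is the mild version: it assumes honest randomization and honest data, and a single, tame researcher degree of freedom still manufactures an effect that later "declines".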
That adequate control is not even possible makes a focus on controlling frankly silly. It's a distraction. In science, you perform experiments where "all other things are equal", and that is simply not possible in psychological studies. Moving the discussion on to which dance moves the angels can perform on the head of the pin, accepting the premise not only that this is a valid thing to discuss but that angels obviously can and do dance there, doesn't make things any better. It only shows that things can't get better: the academic echo chamber can only bounce the same noises around, over and over again, the noise of people who should know better applauding how clever they are for cheating in a standardized way.