
Sly Saint

Senior Member (Voting Rights)
Recommendations for the development, implementation, and reporting of control interventions in efficacy and mechanistic trials of physical, psychological, and self-management therapies: the CoPPS Statement

Control interventions (often called “sham,” “placebo,” or “attention controls”) are essential for studying the efficacy or mechanism of physical, psychological, and self-management interventions in clinical trials. This article presents core recommendations for designing, conducting, and reporting control interventions to establish a quality standard in non-pharmacological intervention research. A framework of additional considerations supports researchers’ decision making in this context. We also provide a reporting checklist for control interventions to enhance research transparency, usefulness, and rigour.

https://www.bmj.com/content/381/bmj-2022-072108

 
A lack of consensus on relevant issues exposes the field to justifiable criticism due to concerns over bias [18-22] and leaves questions of treatment mechanism unanswered.
For clinical trials of treatment efficacy or mechanisms, we recommend designing control interventions that are as similar as possible to the tested intervention, apart from the components examined by the study.

Structured planning, early stakeholder engagement, feasibility work, and piloting will improve the quality and acceptability of control interventions

When participant blinding is an objective, blinding effectiveness should be routinely assessed and reported

Detailed and transparent reporting will improve the interpretation and repeatability of clinical trials
- “The control is the same as the tested treatment, except that one component has been removed. In this trial, we test the effects of this component.”

- “The tested treatment consists of multiple components. The trial’s aim is to study the effect of some of these components. To do so, the test treatment is compared with a control that has all of the original components except those components that the trial aims to study.”

Table 2 lists elements that should ideally be kept the same, e.g. number of sessions, delivery format, and treatment environment.

Similarly, a trial’s hypothesis will dictate the choice of outcome measures. We conclude that neither patient-reported nor more “objective” measures are more desirable in the general context of control interventions. This decision depends primarily on the trial’s objectives. Furthermore, the evidence regarding their differential susceptibility to placebo effects is inconclusive.51 If available and appropriate, both patient-reported and more objective outcome measures can be used.52 53
The evidence regarding the susceptibility of objective and patient-reported outcomes to placebo effects is inconclusive? I don't think so. The wishy-washy commentary about objective outcomes in efficacy trials is disappointing. They seem to assume that therapists and participants won't be able to work out which therapy is the 'real' one.
 
I don't think much of this matters as long as the current trend continues: doing the same trials over and over again until they get favorable results, then building up evidence by cherry-picking those and dismissing everything that doesn't confirm expectations as unimportant. And the lack of honest reporting. You can't really be more wrong than taking something like PACE, which at best can claim that 1 in 7 got some form of minor subjective benefit, and turning that into "this is a cure, and it means it's psychological".

It's that biased thinking that ruins everything; the rest can be good and it wouldn't change anything. In the end it's still no better than snake oil merchants trying to market their own brand of snake oil. They chose to sell that specific brand of snake oil because they think it works. Homeopathy does the exact same thing, and so do most alternative medicines.

It's bias that's the problem. This discipline is completely biased in favor of being right. None of this changes the crisis of validity, in fact this seems pretty much like another layer of varnish on top.
 
Let's just take the example of IQWiG, which filtered out something like 97% of trials for being too poor and/or biased, and somehow kept mostly the same recommendations anyway, even though those recommendations are basically toxic simply by being "confirmed" again and again in highly biased experiments. This is disastrous, especially as it concerns research whose substance is actively used as-is in real life. These aren't early-phase R&D prototypes; they're used as products on real people, sometimes recommended straight out of the first feasibility "trial", where implementation is basically already being planned on the assumption that it "works", since they know that "works" doesn't mean anything in this context.

This is not a matter of details; it's at the highest levels, in the decisions and priorities, in the endless cycle of doing the same stuff over and over again, and in what the point of the whole system even is, which currently mostly seems to be "employ people".

You cannot tweak your way out of such a near-perfect level of failure. The people who love this research are still doing, approving, or funding the exact same type and quality of research, the very same they have already done dozens of times over, in some cases, like "CBT for fatigue", hundreds of times over, if not more than a thousand. You can't fix that at the researcher level; there has to be actual oversight capable of ending gravy trains of pointless self-indulgence, not just when it's politically hard and controversial, but especially then.

Otherwise it's easy to see decades of "trying" CBT for LC-MUS, then the same for LC-FND, then for COVID-related "persistent symptoms", then some other generic descriptions. Then inventing modified versions of CBT, dozens of them whose details don't even matter, trying all variations of them for years and years, recommending them, then continuing to "research" the exact same stuff anyway: the academic version of "one-take videos" that actually took many takes, where you just delete the ones that missed.

Then it's ACT. Then it's self-hugging, or water yoga. For years and decades, with nothing useful being delivered. None of this is any different from doing the same thing with hundreds of homeopathic concoctions; in the end it's all "testing" the same stuff, just poorly enough that it can "pass", though not by any legitimate standard.
 