Discrepancies from registered protocols and spin occurred frequently in randomized psychotherapy trials – a meta-epidemiologic study (2020), Stoll et al.

Cheshire

Highlights
• protocol discrepancies and spin in psychotherapy outcome research have not been investigated in detail so far

• protocol discrepancies are less frequent in psychotherapy trials which are registered prospectively as compared to retrospectively registered trials

• registration of psychotherapy trials is not associated with less spin in the publications

Abstract
Objective
To investigate the relationship between trial registration, trial discrepancy from registered protocol and spin in non-pharmacological trials.

Study Design and Setting
Recent psychotherapy trials on depression (2015–2018) were analyzed regarding their registration status and its relationship to discrepancies between registered and published primary outcomes and to spin (discrepancy between the non-significant finding in a study and an overly beneficial interpretation of the effect of the treatment).

Results
196 trials were identified of which 78 (40%) had been registered prospectively and 56 (29%) retrospectively. In 102 (76%) of 134 registered trials, discrepancies between trial and protocol were present. Of 72 trials with a non-significant difference between treatments for the primary outcome, 68 trials (94%) showed spin. Discrepancies from protocol were less frequent in prospectively than in retrospectively registered trials (OR = 0.19; 95% CI [0.07, 0.52]), but regarding the amount of spin there was no difference between prospectively and retrospectively registered trials (rb = -.12; 95% CI [-.41;.19]) or between registered and unregistered trials (rb = -.22; 95% CI [-.49;.08]).

Conclusion
Protocol discrepancies and spin have a high prevalence in psychotherapy outcome research. Results show no relation between registration and spin, but prospective registration may prevent discrepancies from protocol.

https://www.sciencedirect.com/science/article/abs/pii/S0895435620302080
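In case the statistics in the Results paragraph are unfamiliar: the OR is an odds ratio comparing protocol discrepancies in prospectively vs retrospectively registered trials, and rb is a rank-biserial correlation. The abstract doesn't spell out exactly how the authors computed their intervals, so the sketch below only shows the conventional formulas (a Wald 95% CI for the odds ratio, and the usual Mann-Whitney-based rank-biserial); the function names and the 2x2 layout are mine, not the paper's.

```python
# Illustrative only: conventional formulas, not the authors' stated procedure.
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI from a 2x2 table, where
    a/b = discrepant/concordant among prospectively registered trials and
    c/d = discrepant/concordant among retrospectively registered trials."""
    or_ = (a * d) / (b * c)
    se = sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of ln(OR)
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, (lo, hi)

def rank_biserial(u, n1, n2):
    """Rank-biserial correlation computed from a Mann-Whitney U statistic."""
    return 1 - 2 * u / (n1 * n2)
```

With the paper's actual cell counts (not given in the abstract), odds_ratio_ci should land somewhere near the reported OR = 0.19 [0.07, 0.52].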
 
Some good bits from the paper:
The field of psychotherapy research is of particular interest regarding reporting biases: Contrary to most medical trials, a typical psychotherapy trial is conducted by a researcher with clinical expertise who works as a therapist and whose school of thought is exceptionally shaped by a long education in this therapy [24]. While in medical research the pharmaceutical industry as an external factor may play a relevant role in the conduction of trials, industry is less involved in the conduction of psychotherapy trials. Therefore, psychotherapy trials are more dependent on the individual researcher and the researcher’s personal interests in the outcome of the trial might play a more important role. These interests are discussed in terms of researcher allegiance. Evidence shows that researchers with higher researcher allegiance often published studies with larger effects [25].

The criteria for determining whether a trial had stuck to its protocol:
We analyzed protocol discrepancies in all trials for which a registration could be identified by firstly, comparing the respective registered and published primary outcomes. They were classified as discrepant if their definitions differed (e.g., different methods of measurement) or if the amount of information differed (e.g., a time point was registered but not reported). They were classified as concordant if the registered and reported primary outcomes matched exactly.
It is interesting to note that the PACE trial fails these criteria (although it's not on depression, so isn't counted).

The criteria for determining whether there was "spin":
To assess spin, we examined all trials with at least one non-significant [primary outcome]. ...Seven forms of spin were investigated:
1. selective reporting (the non-significant primary outcome is not mentioned in the screened section),
2. distracting with secondary analyses (the primary outcome is not mentioned but significant secondary analyses are),
3. distracting with within-group differences (the primary outcome is not mentioned but significant within-group differences are),
4. focus on significant secondary analyses (a. secondary analyses are mentioned before the primary outcome; b. effect sizes are mentioned instead of primary outcome effect sizes; c. effect is depicted in figures but primary outcome is not),
5. focus on significant within-group differences over time (a.-c., see above),
6. interpreting non-significant primary results as showing treatment equivalence in a superiority trial,
7. claiming or emphasizing the beneficial effect of the treatment despite a non-significant outcome.

Spin forms were investigated in five sections of the publication: abstract results and conclusions, main text results, discussions, and conclusions (see table ...)
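To make the coding scheme concrete, here's a rough sketch of what a per-trial coding sheet could look like, crossing the seven spin forms with the five publication sections. The names and structure are my own illustration of the idea, not the authors' actual rating instrument; the "amount of spin per trial" would then just be a count over the rated cells.

```python
# Hypothetical coding sheet: 7 spin forms x 5 publication sections per trial.
SPIN_FORMS = [
    "selective_reporting",            # 1. non-significant PO not mentioned
    "distract_secondary_analyses",    # 2.
    "distract_within_group",          # 3.
    "focus_secondary_analyses",       # 4.
    "focus_within_group",             # 5.
    "equivalence_claim",              # 6. non-significance read as equivalence
    "beneficial_effect_claim",        # 7.
]

SECTIONS = [
    "abstract_results", "abstract_conclusions",
    "main_results", "discussion", "conclusions",
]

def new_coding_sheet():
    """One boolean cell per (section, spin form); all False until rated."""
    return {section: {form: False for form in SPIN_FORMS} for section in SECTIONS}

def spin_count(sheet):
    """Total number of (section, form) cells rated as spin for one trial."""
    return sum(flag for ratings in sheet.values() for flag in ratings.values())

# Example: a trial whose abstract conclusion omits the non-significant primary
# outcome and claims a beneficial effect anyway would be coded like this:
sheet = new_coding_sheet()
sheet["abstract_conclusions"]["selective_reporting"] = True
sheet["abstract_conclusions"]["beneficial_effect_claim"] = True
print(spin_count(sheet))  # -> 2
```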
The analysis was section by section, but some examples of PACE spin come easily to mind. On reading 5) and 6), I was mindful of the focus on within-subject differences in the PACE trial long-term follow-up, and of the non-significant change between 12 months and long-term follow-up being interpreted as evidence of treatment effectiveness in a superiority trial (remember "the beneficial effects of CBT/GET were maintained at long-term follow-up").

The spin results, shocking but true:
68 trials (94%) showed at least one form of spin (median amount of spin per trial was 5.75, IQR 3–8).

...the most frequently used forms of spin were that the non-significant PO was not mentioned in the abstract conclusion section (selective reporting in 37/69 trials; 54%), and that the beneficial effect of the treatment was claimed in the abstract (30/69 trials; 43%) or the main text (22/48 trials; 46%) conclusion section. The text section with the highest prevalence of spin ratings was the abstract conclusion section, in which 56 of 69 (81%) investigated trials showed some form of spin, and the main text discussion section, in which 58 of 72 (81%) investigated trials showed some form of spin
 
From the Discussion:
The reasons for the high prevalence of reporting biases such as spin are still unclear. Chiu et al. (2017) [21] showed that funding source is one of the most frequently investigated factors associated to spin, but they did not find a significant association between industry sponsorship and spin. It might be speculated, especially for psychotherapy trials, that other factors such as researcher allegiance [24, 25, 34, 35] or inappropriate incentives fueled through the academia reward system may contribute to the high prevalence of bias [36, 37].

The most well-known reporting guideline is the Consolidated Standards of Reporting Trials 2010 Statement (CONSORT) that requires that primary and secondary outcome measures are completely defined, including “how and when they were assessed”, that the trial's registration number and name of trial registry are reported and that any changes of outcomes after the trial commenced are mentioned, “with reasons” [39]. We especially encourage journals publishing non-pharmacological trials to implement these guidelines. ... Second, other potential sources of bias despite funding, e.g. researcher allegiance [34, 36], have to be better identified and need transparency.
 
The entire history of this profession is littered with fraud. Why the hell would any intelligent person want to join in, wasting their lives? Money may be the only motivator for people who lack the ethics or intelligence to do something more productive.
 
Some just get a kick out of lording it over others. Makes them feel special and important and superior, that they are leaders, and their lives are more meaningful.

It is a very powerful motivator.
 