Interpretation bias modification (CBM-I) for fatigue in long term health conditions – A feasibility study 2025 Moss-Morris, Chalder, Hirsch et al

Andy


Highlights​

  • Interpreting fatigue-related info negatively is associated with greater fatigue.
  • We designed FLEX for those with long term conditions who report high fatigue levels.
  • FLEX is a fatigue-focused Cognitive Bias Modification intervention.
  • FLEX is acceptable to people with LTCs who experience fatigue.
  • FLEX reduces negative interpretations and fatigue symptoms.

Abstract​

Background​

This study examined the acceptability of a new fatigue-focused Cognitive Bias Modification training for interpretations (CBM-I) for those with long term health conditions (LTC) and the feasibility of delivering a randomised controlled trial of fatigue-focused CBM-I compared to control training. Effects on fatigue-related interpretation bias, self-reported fatigue, depression and anxiety were also explored.

Methods​

A two-armed (CBM-I or control) randomised controlled feasibility and acceptability trial. Participants with a LTC (cancer, multiple sclerosis, chronic fatigue syndrome or post-COVID condition) were randomly allocated to 12 online training sessions of CBM-I (N = 66) or a matched control condition (N = 65). Participants were assessed at baseline pre-randomisation (T0), and post-intervention (T1), two (T2) and four months post-randomisation (T3). Assessments included measures of acceptability, interpretation bias, self-reported fatigue and mood.

Results​

The results indicate that fatigue-focused CBM-I training appears acceptable to people with LTCs, shown by good rates of adherence (77% completing full dose) and acceptability scores. It appears feasible to recruit and retain participants through follow-up (70% retained at four months). There was a large effect size (g = 0.834, 95% CI [0.472,1.196]) in favour of the intervention on the purported mechanism of change (interpretation biases) and small effects on self-reported fatigue and depression but not anxiety.

Conclusions​

The study suggests that CBM-I training is an easy to administer, relatively brief digital intervention which shows promise in reducing fatigue and associated symptoms in those with long term health conditions. A full-scale trial of CBM-I for fatigue in LTCs is justified on the basis of the findings.

Open Access
 
From the open access introduction,

"Emerging research suggests that interpretation biases, particularly those related to fatigue and bodily sensations, may play a role in the experience and maintenance of fatigue in LTCs. Individuals with chronic fatigue syndrome (CFS), for instance, have demonstrated an attentional bias towards fatigue-related stimuli and a tendency to interpret ambiguous information in a somatic way (Hughes et al. 2016, 2017). Further investigation into these cognitive processes is an important step towards developing targeted interventions aimed at modifying interpretation biases and reducing the burden of fatigue."


"Cognitive Bias Modification for Interpretation (CBM-I) is a novel web-based digital intervention that has been shown to modify interpretation biases in individuals with anxiety and depression (Hirsch et al., 2018, 2020, 2021). CBM-I involves repeated practice of generating more benign and less negative outcomes to uncertain or ambiguous scenarios across a number of brief sessions.

By consistently accessing more positive interpretations, individuals develop a more helpful positive/benign interpretation bias, which in turn reduces negative emotional responses in people suffering from anxiety and/or depression (Hirsch et al., 2018, 2021). Given the potential role of interpretation biases in maintaining fatigue, CBM-I presents a promising avenue for intervention. By tailoring the CBM-I training to target and modify negative fatigue-related interpretations, CBM-I could potentially reduce fatigue severity and fatigue-related distress, as well as improve overall well-being in individuals with LTCs."
 
That sounds to me like training people to fill in questionnaires differently.

It seems notable that there was small or no effect on fatigue, depression and anxiety, which presumably the treatment is supposed to address. If it was statistically significant I am sure they would have said it was significant rather than 'small'.

Are they really just getting people to say, I interpret my symptoms more positively but I still feel just as fatigued, anxious and depressed?

Since it's paywalled, I have no idea.

As for the claims of a positive response based on a 70% retention rate, I suspect a lot of people stick with courses like this to the end in the vain hope that the next session might actually tell them something useful.
 
It seems notable that there was small or no effect on fatigue, depression and anxiety, which presumably the treatment is supposed to address. If it was statistically significant I am sure they would have said it was significant rather than 'small'.
It’s worse, they didn’t even bother doing a statistical analysis:

2.7. Analysis​

Analysis was conducted as described in the pre-registered statistical analysis plan in R using the GAMLj3 package (Gallucci, 2024). All analyses were intention-to-treat. As this was a feasibility study no significance tests were performed, and effect sizes are provided for descriptive reasons only and to inform a future full-scale randomised control trial.
 
The paper says it’s open access to me, and I can read all of it.
2.10. Signal of efficacy for treatment effect on self-reported measures
(…)
Table 4 shows small effect sizes were found for CFQ and PHQ-9 at all follow-up points except for CFQ (T2) where the effect is less than would be considered a small effect.
CFQ is Chalder’s Fatigue Scale, and PHQ-9 is an awful depression scale.
CBRQ effects varied by subscale, with medium effects seen in embarrassment avoidance at all time points. Small effects were seen in damage beliefs (T1 and T3) and symptom focusing (all time points).
CBRQ seems to be yet another terrible questionnaire about what you think about your symptoms:
Effects on fear avoidance, all-or-nothing (T2 and T3) and resting behaviour (all time points) CBRQ subscales were negligible.
You’d assume these were the most relevant parts for the intervention, where the aim was to get the patients to act differently.
Effects on GAD-7 at all time points were negligible.
GAD-7 is about anxiety.
 
It should be noted that in the CBM-I group, the self-reported usefulness score decreased slightly from baseline to post-intervention. However, given the standard deviations, it is likely that this is not a significant change. In contrast to the relative stability of scores in the CBM-I group, the decrease in perceived usefulness was much greater in the control condition. Future research could explore how to increase the perception of the usefulness of CBM-I, possibly through clearer psychoeducation informing participants how FLEX works and why it can be helpful.
Of course it’s the patient’s fault that they did not find the intervention more useful after trying it, it can’t possibly be because it isn’t useful!

Small effect sizes for change in symptoms were found in the current study. This is in line with other low-intensity interventions for fatigue. For example, the effects seen for FLEX are similar to the lower end of the range seen for fatigue-focused CBT for those with LTCs (Hulme et al., 2018). FLEX requires no therapist time and a total of 4 h of the patient's time over a period of four weeks. The low-intensity nature of the intervention would lend itself to being used as an adjunct to existing CBT methods for fatigue.
«It’s normal to have small effects for these interventions, our small effect means it’s a success!»
It is important to continue developing psychological interventions for fatigue as findings from systematic reviews of interventions for fatigue in LTCs show that pharmacological interventions do not consistently demonstrate benefits for participants (Hulme et al., 2018).
Sure, the solution has to be psychological because it hasn’t been solved medically yet. The hubris is off the charts.
Meta-analyses have shown that psychological interventions based in CBT are effective in decreasing fatigue in those with MS (Moss-Morris et al., 2021), cancer (Hosseini Koukamari et al., 2025) and CFS (Maas gennant Bermpohl et al., 2024).
The CFS analysis is this, where they ignored the terrible quality of the trials and lied about the conclusion in a paper about the importance of blinding.
 
Of course it’s the patient’s fault that they did not find the intervention more useful after trying it, it can’t possibly be because it isn’t useful!


«It’s normal to have small effects for these interventions, our small effect means it’s a success!»

Sure, the solution has to be psychological because it hasn’t been solved medically yet. The hubris is off the charts.

The CFS analysis is this, where they ignored the terrible quality of the trials and lied about the conclusion in a paper about the importance of blinding.
to copy your first quote with bolding:
It should be noted that in the CBM-I group, the self-reported usefulness score decreased slightly from baseline to post-intervention. However, given the standard deviations, it is likely that this is not a significant change. In contrast to the relative stability of scores in the CBM-I group, the decrease in perceived usefulness was much greater in the control condition. Future research could explore how to increase the perception of the usefulness of CBM-I, possibly through clearer psychoeducation informing participants how FLEX works and why it can be helpful.

That is shocking.

It is basically saying, outright, "how can we cheat the system" and admitting their focus is on manipulating/coercing responses

and then claiming that such responses, which say nothing about health or about what anyone should be being paid for, somehow do mean something for health, and that their claims that people will 'recover' or 'be better' for wasting energy and time they don't have on this aren't plain lies.

Worse is the misinformation to bystanders and policy makers that there is 'a cure', when all that has really changed is the responses people give on surveys.

How to claim a useless, cost-sucking 'treatment' is actually a treatment: brainwash terribly ill people into saying it is useful when it isn't. That is then used against the same patient group, so that when they don't recover it isn't the fault of the useless 'non-therapy con' but their own, for, what, 'not doing it right'?

How callous. And it gets right up my nose that these individuals are allowed to use terms claiming they are good or helpful or care about mental health, when they don't seem to be looking at health in any positive sense at all if this is the main aim.

This grim old group of people tell themselves that what would be seen as crookery or manipulation in any other context is fine here. Rather than treating placebo as something to watch out for, because the influence of those running a trial can make results inaccurate, they have somehow convinced themselves that circumventing truth and consent (if in a clinical situation) is doing good, because according to them it makes people happier to be conned.

I find it so hard to get my head around the thinking of those who write and those who read these things without their jaws dropping, without noting concerns about the funding and this sort of attitude being put onto patients. The chain of fallacies required for any of it to make sense seems to be getting ever longer.
 
Only 4 participants with CFS/LC made it to T3, the follow-up questionnaire: 4 out of 13, quite a drop-out rate, far higher than the 70% retention rate the authors mention. 5 didn't even do T1.
In total, the 33 CFS/LC participants ended up as only 18 (4 in the intervention arm plus 14 controls), and that's not 70%.
Not able to retain these patients, and not even mentioning it in the paper?

I'm not good with numbers anymore (brain fog), but these authors play hide and seek with numbers.
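As a rough sanity check on the arithmetic above, here is the CFS/LC retention calculation spelled out, using the counts as quoted in the post (not independently verified against the paper):

```python
# CFS/LC subgroup retention, using the counts quoted in the post
# (unverified against the paper itself).
cfs_lc_total = 33          # CFS/LC participants across both arms
retained_intervention = 4  # reached T3 in the CBM-I arm
retained_control = 14      # reached T3 in the control arm

retained = retained_intervention + retained_control
retention_rate = retained / cfs_lc_total

print(f"{retained}/{cfs_lc_total} retained = {retention_rate:.0%}")
# 18/33 retained, i.e. roughly 55% — well below the 70% headline figure
```

On those numbers the subgroup retention is about 55%, so the 70% figure can only describe the whole sample, not this subgroup.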
 
Chalder et al. continuously fail to understand that relationships are built on mutual benefit and respect.

They ignore genuine patient feedback (like high drop out rates). This is how you build ineffective, harmful therapies.

That this level of ignorance comes from people working in the field of psychology is astonishing.
 
Yep. That they can get away with such a brazen admission just proves how corrupted the quality control system in this area of medicine has become.

How is this not straight scientific and moral fraud? Where's the fucking outrage from the rest of medicine and society? :mad::mad::mad:
The dog whistle for decades has been that safe and effective treatments exist, able to provide a 100% cure for everyone who puts in the effort. Sometimes it's said in a direct way, sometimes using veiled language, but it's the near universal belief in the profession. It's completely absurd but this is what most believe, or pretend to for the split second they get involved.

They can get away with it in much the same way as a democracy can sometimes commit horrible acts with a legitimate democratic mandate: it's popular. It's wrong, it makes a mockery of most of their so-called principles and violates numerous laws, both in spirit and letter, but it's popular.

Right or wrong rarely actually matter in real life. It's who has the power to impose their will on others that does. The medical profession is extremely wrong here, but they have 100% of the balance of power and zero scruples about wielding it in precisely the way that both maximizes the harms on us and minimizes their understanding of it. Everyone involved in this fraudulent nonsense is doing very well for themselves. They're even encouraged to keep producing the same copy-paste garbage.

This is a major reason why there is such a crisis of trust in experts and institutions. It's wholly deserved by now. The only reliable process that can bypass this is science, and it's not available here because the only science that exists in medicine is biomedical. The rest can be comically corrupt and it doesn't matter because it's popular. The institutions themselves are no better than whatever existed centuries ago.

Because this is at the heart of everything. When the quality control process fails it doesn't matter what the rest is like. If it approves broken things then all the processes involved in building up to the quality control process will naturally degrade, since it doesn't even matter. It acts kind of as a casino that would randomly give someone back almost all their losses. They all just keep playing because there are no consequences for failure.

When everyone cheats, is anyone really cheating? It just becomes the way things are done(TM).
 
I just realized this is a mix of the paperclip problem in AI and the radicalization problem in social media.

The paperclip problem in AI is a thought experiment about an AI that is tasked with a single thing, producing as many paperclips as it can, and the danger of being misaligned to the point of destroying the entire planet in the relentless pursuit of its goal of maximizing the production of paperclips. It's mostly a silly thought experiment but it makes valid points about unintended consequences.

And this is what academia has become: it produces papers. Not studies. Not research. Not useful knowledge or even data. Papers. To be a successful academic, one must produce as many papers as possible. It doesn't matter if they are entirely useless, it's a numbers game. Quality is irrelevant; citation metrics and influence are the only metrics of success. It depends on the field, of course. In non-biomedical academia, it's all there is.

We see the most extreme examples of this in the biopsychosocial ideology. There are 'studies' like DanFunD that might be up to 20+ papers. It doesn't matter that none of them add anything, the academics involved are fulfilling their relentless task of producing as many papers as they can. It's what they are expected to do and they are rewarded for producing as many paper(clip)s as they can.

And just like the radicalization problem in social media, the kinds of papers that work the best at this are low effort, low quality papers that amplify controversies. Everything then becomes about the simple act of continuing with the niches that work. This kind of work keeps the controversy alive, since they themselves are the source of it, and that means attention, influence. Exactly like social media influencers. No one succeeds at this game by making reasonable arguments. It's all about outrage and controversy about subjects that are deeply emotional and subjective, about inflaming controversies, not resolving them.

Most social media influencers do it for money. It often pays well. In academia it's for the production of as many papers as they can. The papers are the currency, it buys influence. It doesn't matter that they hold no value in themselves, the value is entirely artificial, it's a social construct in which by being considered valuable by the people who decide what is valuable, then they have value. Even if that value only serves the production of the exact same papers in an infinite loop.
 
The dog whistle for decades has been that safe and effective treatments exist, able to provide a 100% cure for everyone who puts in the effort. Sometimes it's said in a direct way, sometimes using veiled language, but it's the near universal belief in the profession. It's completely absurd but this is what most believe, or pretend to for the split second they get involved.

They can get away with it in much the same way as a democracy can sometimes commit horrible acts with a legitimate democratic mandate: it's popular. It's wrong, it makes a mockery of most of their so-called principles and violates numerous laws, both in spirit and letter, but it's popular.

Right or wrong rarely actually matter in real life. It's who has the power to impose their will on others that does. The medical profession is extremely wrong here, but they have 100% of the balance of power and zero scruples about wielding it in precisely the way that both maximizes the harms on us and minimizes their understanding of it. Everyone involved in this fraudulent nonsense is doing very well for themselves. They're even encouraged to keep producing the same copy-paste garbage.

This is a major reason why there is such a crisis of trust in experts and institutions. It's wholly deserved by now. The only reliable process that can bypass this is science, and it's not available here because the only science that exists in medicine is biomedical. The rest can be comically corrupt and it doesn't matter because it's popular. The institutions themselves are no better than whatever existed centuries ago.

Because this is at the heart of everything. When the quality control process fails it doesn't matter what the rest is like. If it approves broken things then all the processes involved in building up to the quality control process will naturally degrade, since it doesn't even matter. It acts kind of as a casino that would randomly give someone back almost all their losses. They all just keep playing because there are no consequences for failure.

When everyone cheats, is anyone really cheating? It just becomes the way things are done(TM).
Yep, it seems popular, but the really creepy bit about these guys is the documents where they explicitly made plans to inveigle their way into schools to ‘catch people early’, imposing their distorted values and fallacious ideas about what helps or harms health from those young ages.

That makes the group particularly ‘special’, to have gone that far in the planning.
 