Response Shift After CBT Targeting Severe Fatigue: Explorative Analysis of Three Randomized Controlled Trials, 2022, Müller, Knoop et al.

Abstract

Background

Cognitive behavioral therapy (CBT) is an evidence-based intervention for severe fatigue. Changes in patients’ fatigue scores following CBT might reflect not only the intended relief in fatigue but also response shift, a change in the meaning of patients’ self-evaluation. Objectives were to (1) identify the occurrence of response shift in patients undergoing CBT, (2) determine the impact of response shift on the intervention effect, and (3) investigate whether changes in fatigue-related cognitions and perceptions, targeted during CBT, are associated with response shift.

Methods
Data of three randomized controlled trials testing the efficacy of CBT in individuals with chronic fatigue syndrome (CFS, n = 222), cancer (n = 123), and diabetes (n = 107) were re-analyzed. Fatigue severity was measured with 8 items from the Checklist Individual Strength, a valid and widely used self-report questionnaire. Structural equation modelling was applied to assess lack of longitudinal measurement invariance, as indication of response shift.

Results
As expected, in all three trials, response shift was indicated in the CBT groups, not the control groups. Response shift through reprioritization was indicated for the items “Physically, I feel exhausted” (CFS) and “I tire easily” (cancer, diabetes), which became less vs. more important to the measurement of fatigue, respectively. However, this did not affect the intervention effects. Some changes in cognitions and perceptions were associated with the response shifts.

Conclusions
CBT seems to induce response shift through reprioritization across patient groups, but its occurrence does not affect the intervention effect. Future research should corroborate these findings and investigate whether patients indeed change their understanding of fatigue.

Open access, https://link.springer.com/article/10.1007/s12529-022-10111-8
 
There is so much wrong with the assumptions in this paper it's hard to know where to start.
Here are a few really problematic excerpts, with my bolding:

From the introduction:
CBT is based on the cognitive-behavioral model of fatigue, which states that disease and its treatment initially precipitate fatigue, while cognitive and/or behavioral variables perpetuate fatigue in the long-term [6, 8, 9].

Treatment effectiveness is commonly evaluated through assessing changes in patient’s self-reported fatigue. A reduction in self-reported fatigue from pre- to post-treatment is taken to indicate a positive intervention effect: a reduction in fatigue severity. However, as CBT also directly targets patients’ cognitions about fatigue, it may also induce a change in the meaning that patients attach to their fatigue evaluation, which is also known as response shift.

Response shift can occur, for example, through a change in (1) patients’ internal standards with which they assess their fatigue (i.e., “recalibration”), (2) the relative importance patients assign to different aspects of fatigue (i.e., “reprioritization”), or (3) the meaning of fatigue itself (i.e., “reconceptualization”) [16]. Importantly, when response shift occurs, this may render the comparison of self-evaluations over time incompatible.
It seems good that they are recognising that fatigue may not actually have changed, but that the interpretation placed on it may be what changes scores on questionnaires.

However, I think they leave out the most likely reasons for response shift, namely:
- gratitude at getting the treatment (especially when the control group is getting no treatment);
- liking the therapist and not wanting to let them down;
- unwillingness to admit that they have 'failed', with self-blame;
- self-delusion: "I'm doing what I've been told, so I must be getting better";
- hope combined with adrenaline giving a false sense of improvement;
- obedience training: trying to give the right answer according to what you have been told should happen.

In other words, I think the most likely form of response bias is a superficial overlay of patients 'giving the right answer' even when there has been no change either in their symptoms or in their real interpretation of them. Patients know that their fatigue is not just normal fatigue that they have been misinterpreting; they know that if they do more they get sicker. They know the therapist's interpretation is wrong. See below:

In line with the cognitive-behavioral model of fatigue, CBT aims to change patients’ dysfunctional cognitions assumed to perpetuate fatigue, including fatigue catastrophizing, negative expectations and feelings of helplessness regarding fatigue, low self-efficacy, the extent to which patients think they are able to influence fatigue, and an extensive focus on fatigue. Also, patients’ activity-related cognitions are addressed as patients are challenged to gradually increase their daily activities. Improvements in these fatigue- and activity-related cognitions appear to play a mediating role in the reduction in fatigue following CBT [10, 26, 27]. Additionally, changes in patients’ perceptions of fatigue, such as it being exhausting or pleasant, appear to accompany the decrease in fatigue severity following CBT [28, 29]. Improvements in these cognitions and perceptions may not only explain the reduction in patients’ fatigue severity but may also explain patients’ new evaluation of their fatigue.

So they can do all the statistical jiggery-pokery they like on the data from the questionnaires from their trials of CBT, but if they still work on the false model of the perpetuating factors of fatigue, they are essentially selling patients a lie. All the questionnaires are doing is testing how compliant patients are in going along with the lie: acting out 'improvement' to please the therapist, and/or being brainwashed into believing it.

Here they go again, spelling out just what lies they are telling patients:
To further substantiate the interpretation of findings as indications of response shift, the associations between detected response shift effects and potential mechanism variables, according to response shift theory [36, 62], were also investigated. Moreover, it seems likely that CBT can induce response shift as it directly targets patients’ cognitions and perceptions regarding their fatigue: patients are challenged to change maladaptive cognitions about fatigue (e.g., “the fatigue forces me to become inactive”) into adaptive ones (e.g., “despite being fatigued I can gradually increase my level of activity”) and learn to (re-)evaluate fatigue as a normal everyday experience (e.g., “it is normal to become fatigued after a late night, and I will recover from it”). Such change in fatigue-related beliefs could lead to a change in response to the self-report questionnaire, despite an equal level of fatigue.

And the whole sections on CBT for CFS in the paper are built on the assumption that CBT does actually lead to clinically significant reduction in fatigue, which reanalysis of PACE showed wasn't true.
 
The elephant looks around the room and says: there's no one here.

They can't accept that their BS questionnaires are unreliable, even though their entire process consists of trying to convince people to give different answers on those questionnaires precisely because they are open to manipulation. Even though it's the obvious answer. But that would mean they were wrong, and that's a conclusion they refuse to accept.

Basically these are cheap excuses for why subjective questionnaires give responses that differ from reality, and for why they choose to prefer the subjective, fake data that they can manipulate and interpret favorably. All these people have is excuses for why their methods don't confirm their claims, which to them means that their claims are right; they just don't have a way to evaluate them.

You know: alternative medicine, where claims of effectiveness are only that, claims.
 
It seems that the authors have published several papers claiming that what they call 'response shift' (a form of response bias in my view) does not affect the effect sizes found in CBT trials. For example here in PLOS One: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0252035

Their approach to measuring response shift seems to be based on a 2005 paper by Frans Oort that used structural equation modelling (SEM). The details are a bit too technical for me, but it seems to simply attach all sorts of unwarranted explanations to changes in model fit and factor loadings: changes in the model fit of measurements over time are interpreted as indicating a 'response shift', and changes in factor loadings as meaning participants changed their interpretation of certain questions.
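For what it's worth, my non-expert reading of how the Oort approach works in practice: a one-factor model is fitted to the eight fatigue items at both time points, first with all measurement parameters (loadings, intercepts, residual variances) constrained to be equal across occasions, and then parameters are freed one at a time to see whether model fit improves significantly. If I follow the operationalization correctly, a freed loading gets labelled 'reprioritization', a freed intercept 'uniform recalibration', a freed residual variance 'non-uniform recalibration', and a change in which items load on the factor 'reconceptualization'. The decision step is just a nested-model chi-square difference test; a minimal sketch with made-up fit statistics (not values from the paper):

```python
# Sketch of the chi-square difference (likelihood-ratio) test that the
# Oort-style SEM approach uses to flag "response shift". All numbers here
# are hypothetical placeholders, not values from the paper.
from scipy import stats

# Fit 1: loadings, intercepts and residual variances held equal pre vs post.
chi2_invariant, df_invariant = 132.4, 58   # hypothetical
# Fit 2: same model, but one item's loading allowed to differ across occasions.
chi2_freed, df_freed = 120.1, 57           # hypothetical

delta_chi2 = chi2_invariant - chi2_freed
delta_df = df_invariant - df_freed
p_value = stats.chi2.sf(delta_chi2, delta_df)

# A significant fit improvement from freeing that one loading is what gets
# labelled "reprioritization" response shift on that item.
print(f"delta chi2 = {delta_chi2:.1f} on {delta_df} df, p = {p_value:.4f}")
```

Note that nothing in that test distinguishes a genuine change in what fatigue means to the patient from plain old response bias; the label is supplied by the theory, not by the statistics.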
 
I haven't read the paper, but do they explain why the identified "response shift" doesn't change the apparent effect of the intervention? Why wouldn't a tendency to respond differently to the questionnaire lead to a change in the final outcomes (not in direction, but in size)?
 
They write:

"While the detected response shifts in this study were of a large magnitude, they occurred only on a single item per trial, thereby limiting their impact on overall fatigue severity, as measured by eight items."​
 
Oort et al. have published a paper aiming to explain SEM in an accessible way, in case it may be of interest: Using structural equation modeling to investigate change and response shift in patient-reported outcomes: practical considerations and recommendations, Verdam et al., 2021

A recent critical review from the Response Shift - in Sync Working Group, an international initiative, investigated 11 methods to assess response shift and found none were optimal, and all must be interpreted cautiously: Critical examination of current response shift methods and proposal for advancing new methods, Sébille et al, 2021

The same group has proposed a revised theoretical model of response shift (not yet experimentally validated): Response shift in patient-reported outcomes: definition, theory, and a revised model, Vanier et al., 2021. Their set of assumptions seems optimistic, though, given the limited content validity of PROMs (for example, when screening for anxiety in POTS patients, or in other chronic illnesses, with the GAD questionnaire).

At the end of the day, response shift evaluation is no more than a statistical tool with a theoretical framework, with all the limitations that carries. Also, none of these models seems to have benefited from patient input, so their theoretical basis can be questioned.
 
Oh, to have the franchise for chairs on the Titanic.
There seems to be a never-ending supply of them to shuffle...

I suspect that a never-ending supply of chairs would eventually come up with a useful arrangement. Rather, what we are seeing is just a few chairs, missing vital components such as a usable seat, that are never going to achieve functionality no matter how often they are reshuffled.

I never thought that I would be looking forward to an iceberg.
 
Wouldn't it be a whole lot easier to use CBT on the test group and objectively monitor whether activity levels changed from pre-intervention/baseline, i.e. using actimetry (FitBit-type devices)? Having established that the intervention didn't work, why not publish that and suggest that it's a waste of time? (A rough sketch of such an analysis is below.)
OK, if you wish to be academic about the whole thing, then explain that unblinded or inadequately blinded studies are open to bias when outcomes are measured subjectively, with questionnaires.
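Something along these lines is all the analysis it would take, assuming each participant wears the device for a baseline period and again post-treatment (the file and column names here are hypothetical):

```python
# Hedged sketch of the objective check proposed above: compare average daily
# step counts at baseline and post-intervention with a paired t-test.
# The CSV file and column names are hypothetical placeholders.
import pandas as pd
from scipy import stats

df = pd.read_csv("actimetry.csv")    # one row per participant
pre = df["steps_baseline"]           # mean daily steps, baseline weeks
post = df["steps_post"]              # mean daily steps, post-intervention weeks

t_stat, p_value = stats.ttest_rel(post, pre)
print(f"mean change = {(post - pre).mean():.0f} steps/day, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}")
```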
 