Response Shift After CBT Targeting Severe Fatigue: Explorative Analysis of Three Randomized Controlled Trials, 2022, Müller, Knoop et al

Discussion in 'Psychosomatic research - ME/CFS and Long Covid' started by Andy, Jul 24, 2022.

  1. Andy

    Andy Committee Member

    Messages:
    23,032
    Location:
    Hampshire, UK
    Abstract

    Background

    Cognitive behavioral therapy (CBT) is an evidence-based intervention for severe fatigue. Changes in patients’ fatigue scores following CBT might reflect not only the intended relief in fatigue but also response shift, a change in the meaning of patients’ self-evaluation. Objectives were to (1) identify the occurrence of response shift in patients undergoing CBT, (2) determine the impact of response shift on the intervention effect, and (3) investigate whether changes in fatigue-related cognitions and perceptions, targeted during CBT, are associated with response shift.

    Methods
    Data of three randomized controlled trials testing the efficacy of CBT in individuals with chronic fatigue syndrome (CFS, n = 222), cancer (n = 123), and diabetes (n = 107) were re-analyzed. Fatigue severity was measured with 8 items from the Checklist Individual Strength, a valid and widely used self-report questionnaire. Structural equation modelling was applied to assess lack of longitudinal measurement invariance, as indication of response shift.

    Results
    As expected, in all three trials, response shift was indicated in the CBT groups, not the control groups. Response shift through reprioritization was indicated for the items “Physically, I feel exhausted” (CFS) and “I tire easily” (cancer, diabetes), which became less vs. more important to the measurement of fatigue, respectively. However, this did not affect the intervention effects. Some changes in cognitions and perceptions were associated with the response shifts.

    Conclusions
    CBT seems to induce response shift through reprioritization across patient groups, but its occurrence does not affect the intervention effect. Future research should corroborate these findings and investigate whether patients indeed change their understanding of fatigue.

    Open access, https://link.springer.com/article/10.1007/s12529-022-10111-8
     
    MSEsperanza, Hutan and Peter Trewhitt like this.
  2. Sean

    Sean Moderator Staff Member

    Messages:
    8,064
    Location:
    Australia
  3. Peter Trewhitt

    Peter Trewhitt Senior Member (Voting Rights)

    Messages:
    4,081
    … … … but so far?
     
    MEMarge, MSEsperanza, Hutan and 3 others like this.
  4. Trish

    Trish Moderator Staff Member

    Messages:
    55,414
    Location:
    UK
    There is so much wrong with the assumptions in this paper it's hard to know where to start.
    Here are a few really problematic excerpts, with my bolding:

    From the introduction:
    It seems good that they are recognising that fatigue may not actually have changed, but the interpretation placed on it may be what changes scores on questionnaires.

    However, I think they leave out the most likely reasons for response shift, namely:
    - gratitude at getting the treatment (especially when the control group is getting no treatment);
    - liking the therapist and not wanting to let them down;
    - unwillingness to admit that they have 'failed' - with self blame;
    - Self delusion - I'm doing what I've been told so I must be getting better;
    - hope combined with adrenaline giving a false sense of improvement;
    - obedience training - trying to give the right answer according to what you have been told should happen.

    In other words, I think the most likely form of response bias is a superficial overlay of patients 'giving the right answer' even when there has been no change either in their symptoms or in their real interpretation of them. Patients know that their fatigue is not just normal fatigue that they have been misinterpreting; they know that if they do more they get sicker. They know the therapist's interpretation is wrong. See below:

    So they can do all the statistical jiggery pokery they like on the data from their questionnaires from the trials of CBT, but if they still work on the false model of the perpetuating factors of fatigue, they are essentially selling patients a lie, and all the questionnaires are doing is testing how compliant patients are in going along with the lie and acting out 'improvement' to please the therapist and/or being brainwashed into believing the lie.

    Here they go again - spelling out just what lies they are telling patients:
    And the whole sections on CBT for CFS in the paper are built on the assumption that CBT does actually lead to clinically significant reduction in fatigue, which reanalysis of PACE showed wasn't true.
     
  5. NelliePledge

    NelliePledge Moderator Staff Member

    Messages:
    14,837
    Location:
    UK West Midlands
    Yeah well my response to this type of “research” hasn’t shifted
     
    MEMarge, MSEsperanza, EzzieD and 16 others like this.
  6. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,659
    Location:
    Canada
    The elephant looks around the room and says: there's no one here.

    They can't accept that their BS questionnaires are unreliable, even though their entire process consists of trying to convince people to give different answers on those questionnaires precisely because they are open to manipulation. Even though it's the obvious answer. But that would mean they were wrong, and that's evidence they refuse to accept.

    Basically this is cheap excuses for why subjective questionnaires give different responses than reality, and how they choose to prefer the subjective, fake, data that they can manipulate and interpret favorably. All these people have is excuses for why their methods don't confirm their claims, which to them means that their claims are right, they just don't have a way to evaluate it.

    You know: alternative medicine, where claims of effectiveness are only that, claims.
     
  7. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    4,001
    Location:
    Belgium
    It seems that the authors have published several papers claiming that what they call 'response shift' (a form of response bias in my view) does not affect the effect sizes found in CBT trials. For example here in PLOS One: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0252035

    Their approach to measuring response shift seems to be based on this 2005 paper by Frans Oort that used structural equation modelling. The details are a bit too technical for me, but it seems that it simply adds all sorts of unwarranted explanations to changes in model fit and factor loadings. A change in the model fit of measurements over time is interpreted as indicating a 'response shift', and changes in factor loadings as meaning participants changed their interpretation of certain questions.
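    To make the factor-loading idea concrete: this is not the Oort SEM machinery itself, just a minimal numpy sketch, under assumed illustrative numbers, of what "reprioritization" looks like in data. We simulate a one-factor fatigue scale with 8 items at two timepoints, shrink one item's loading at follow-up, and recover approximate loadings via the first principal component; the shrunken item visibly contributes less to the common factor, which is the kind of pattern a longitudinal measurement-invariance test would flag.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 500, 8  # participants, questionnaire items

    def simulate(loadings):
        """One-factor model: item score = loading * latent fatigue + noise."""
        f = rng.normal(size=(n, 1))                  # latent fatigue per participant
        return f @ loadings[None, :] + 0.5 * rng.normal(size=(n, k))

    def estimated_loadings(x):
        """Approximate loadings as the first principal component of the items."""
        cov = np.cov(x, rowvar=False)
        vals, vecs = np.linalg.eigh(cov)             # eigenvalues in ascending order
        pc = vecs[:, -1]                             # eigenvector of largest eigenvalue
        return pc * np.sign(pc.sum())                # fix sign for comparability

    pre = np.full(k, 0.8)                            # all items equally important at baseline
    post = pre.copy()
    post[0] = 0.2                                    # item 1 "reprioritized" at follow-up

    l_pre = estimated_loadings(simulate(pre))
    l_post = estimated_loadings(simulate(post))

    print(np.round(l_pre, 2))
    print(np.round(l_post, 2))
    # Item 1's estimated loading shrinks at follow-up while the other seven
    # stay roughly stable -- the reprioritization pattern described in the paper.
    ```

    Note this sketch sidesteps the real method's harder part: the SEM approach fits both timepoints jointly, constrains loadings to be equal, and interprets a worsened model fit as evidence of response shift, which is exactly the inferential leap being questioned above.
    
    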
     
  8. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    4,001
    Location:
    Belgium
  9. dave30th

    dave30th Senior Member (Voting Rights)

    Messages:
    2,447
    I haven't read the paper, but do they explain why the identified "response shift" doesn't change the apparent effect of the intervention? Why wouldn't a tendency to change how the questionnaire is responded to lead to a change in the final outcomes (not in direction but in size)?
     
  10. chrisb

    chrisb Senior Member (Voting Rights)

    Messages:
    4,602
  11. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    4,001
    Location:
    Belgium
    They write:

    "While the detected response shifts in this study were of a large magnitude, they occurred only on a single item per trial, thereby limiting their impact on overall fatigue severity, as measured by eight items."​
     
    MSEsperanza, Sean, MEMarge and 4 others like this.
  12. cassava7

    cassava7 Senior Member (Voting Rights)

    Messages:
    1,051
    Oort et al have published a paper aiming to explain SEM in an accessible way, in case it may be of interest: Using structural equation modeling to investigate change and response shift in patient-reported outcomes: practical considerations and recommendations, Verdam et al, 2021

    A recent critical review from the Response Shift - in Sync Working Group, an international initiative, investigated 11 methods to assess response shift and found none were optimal, and all must be interpreted cautiously: Critical examination of current response shift methods and proposal for advancing new methods, Sébille et al, 2021

    The same group has proposed a revised theoretical model of response shift (not yet experimentally validated): Response shift in patient-reported outcomes: definition, theory, and a revised model, Vanier et al, 2021. However, their set of assumptions seems optimistic given the limited content validity of PROMs (for example, when screening for anxiety with the GAD questionnaire in POTS patients or other chronic illnesses).

    At the end of the day, response shift evaluation is no more than a statistical tool with a theoretical framework, with all the limitations that it carries. Also, none of these models seem to have benefited from patient input so their theoretical basis can be questioned.
     
    Last edited: Jul 25, 2022
  13. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,175
    Location:
    London, UK
    I think I might do better sitting back and enjoying everyone's comments.

    Having established that Schrödinger's cat was in fact dead, do I learn more from being told why it might have jumped into the armchair if it was alive?

    I might have to think about it tomorrow!
     
  14. Amw66

    Amw66 Senior Member (Voting Rights)

    Messages:
    6,769
    Oh to have the franchise for chairs on the Titanic.
    There seems to be a never ending supply of them to shuffle .....
     
  15. Peter Trewhitt

    Peter Trewhitt Senior Member (Voting Rights)

    Messages:
    4,081
    I suspect that a never-ending supply of chairs would eventually come up with a useful arrangement; instead we are seeing just a few chairs, missing vital components such as a usable seat, that are never going to achieve functionality no matter how often they are reshuffled.

    I never thought that I would be looking forward to an iceberg.
     
  16. Snow Leopard

    Snow Leopard Senior Member (Voting Rights)

    Messages:
    3,860
    Location:
    Australia
    They are literally demonstrating proof of response biases, but still manage to conclude that their therapy is effective!?!
     
  17. Sean

    Sean Moderator Staff Member

    Messages:
    8,064
    Location:
    Australia
    Exactly.
     
    MEMarge, MSEsperanza, bobbler and 6 others like this.
  18. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,659
    Location:
    Canada
    Tails they win. Heads you lose. Sideways they win.
     
  19. adambeyoncelowe

    adambeyoncelowe Senior Member (Voting Rights)

    Messages:
    2,736
    People give different answers when they're being observed. It's as simple as that. And if you're observed over a longer period, that will have an effect.

    It's just training people to respond more positively.

    The underlying problem is still the same.
     
  20. FMMM1

    FMMM1 Senior Member (Voting Rights)

    Messages:
    2,812
    Wouldn't it be a whole lot easier to use CBT on the test group and objectively monitor whether activity levels changed from pre-intervention/baseline, i.e. using actimetry (FitBit-type devices)? Having established that the intervention didn't work, why not publish that and suggest that it's a waste of time?
    OK, if you wish to be academic about the whole thing, then explain that unblinded/inadequately blinded studies are open to bias if outcomes are measured subjectively (questionnaires).
     
    MEMarge, EzzieD, MSEsperanza and 2 others like this.