S4ME: Submission to the public review on common data elements for ME/CFS: Problems with the Chalder Fatigue Questionnaire

Discussion in 'Open Letters and Replies' started by Andy, Jan 23, 2018.

  1. rvallee (Senior Member, Voting Rights)
    As Simon Wessely joked, if they hadn't done this, no one would "recover" according to their own criteria.

    So funny haha. Medical research fraud is hi-la-rious.
     
  2. Lucibee (Senior Member, Voting Rights)
    I just wanted to link through to somewhere that discusses the CFQ and its problems, because I had another, closer look at the original "Development of a fatigue scale" paper by Chalder et al., and spotted something I hadn't realised before. It matters, because whenever the scale is criticised, the response is always: "But it's been validated. It's been published in a peer-reviewed journal. It's been used in thousands of studies without issue."

    So let's look at the validation.

    Normally, that would be done against some "gold standard" - in this case, a standard (hopefully, something actually *objective*) used to diagnose "fatigue".

    So what did they do? They took one item out of Lewis and Pelosi's Clinical Interview Schedule (Revised), a questionnaire devised to screen for psychiatric conditions in general populations, and used *that* as the "gold standard". (The item itself already consists of a series of questions.)
    And that's their "gold standard" pseudo-objective measure of fatigue.

    Which raises the question: why didn't they just use that as their instrument?

    Anyway. Any validation exercise that compares two scales built from similar questions, whether via ROC curves or PCA, will find that the scales agree, because they can hardly do otherwise. It doesn't take any complex statistical analysis to see that; the analysis is merely a way of going through the motions.
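
    To make the circularity concrete, here is a minimal simulation in Python (entirely made-up data, not the paper's; the item counts and cutoff are illustrative assumptions). Two scales whose items are driven by the same underlying answers will "agree" almost regardless of the details:

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        n = 1000
        latent = rng.normal(size=n)      # one underlying "fatigue" level per person

        def scale_score(n_items, noise=1.0):
            # Each item = latent level + item-specific noise, discretised to a 0-3 response.
            items = latent[:, None] + rng.normal(scale=noise, size=(n, n_items))
            return np.clip(np.round(items + 1.5), 0, 3).sum(axis=1)

        candidate = scale_score(11)      # an 11-item candidate scale, CFQ-sized
        gold = scale_score(4)            # a "gold standard" built from similar items

        cases = gold > np.median(gold)   # caseness = a cutoff on the "gold standard"
        print(roc_auc_score(cases, candidate))   # ~0.9: excellent "validity", by construction

    The candidate scale never saw the "gold standard"; they agree simply because both are noisy readouts of the same underlying answers.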

    It's also clear that they used Goldberg's General Health Questionnaire as a template - hence the unbalanced Likert scale, and the fact that it measures change rather than absolute values (intensity).
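
    To see what "measures change rather than intensity" does in practice, here's a sketch of the two published CFQ scorings (the respondent below is hypothetical, not data from any study). Because the anchors run from "Less than usual" to "Much more than usual", someone who has been severely fatigued for years can truthfully answer "No more than usual" throughout and come out looking unimpaired:

        # Likert scoring (0-3) and bimodal scoring (0-0-1-1) of the 11 CFQ items.
        LIKERT  = {"less than usual": 0, "no more than usual": 1,
                   "more than usual": 2, "much more than usual": 3}
        BIMODAL = {"less than usual": 0, "no more than usual": 0,
                   "more than usual": 1, "much more than usual": 1}

        # Long-term, unchanging severe fatigue: "no more than usual" to every item.
        answers = ["no more than usual"] * 11

        print(sum(LIKERT[a] for a in answers))    # 11 out of a possible 33
        print(sum(BIMODAL[a] for a in answers))   # 0 out of 11 - below any caseness cutoff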

    What shocks me more is that you don't need to read the whole paper to get an indication of this. It's in the abstract.
    And now the CFQ is being used to assess people with Long Covid.

    What can we do about it? How can I help???
     
  3. tornandfrayed (Senior Member, Voting Rights)
    Apart from all the points highlighted by the excellent analysis here, I've always been intensely irritated that Question 11 - "How is your memory?" - is a different type of question from the others. It invites a narrative response, but is supposed to be scored as a value. "More than usual" could be interpreted as "I'm having more problems with my memory than usual" OR "my memory is functioning better than usual".

    Patients must query it all the time, but the powers that be seem never to have noticed the anomaly.
     
  4. Lucibee (Senior Member, Voting Rights)
    If that's how Q11 is displayed, they are using an incorrect copy of the CFQ. For that question the options should instead run from "Better than usual" to "Much worse than usual".

    Wait till you see the GHQ!!!
     
