Trial Report Exploring the content validity of the Chalder Fatigue Scale using cognitive interviewing in an ME/CFS population, 2024, Gladwell

Discussion in 'ME/CFS research' started by Dolphin, Apr 7, 2024.

  1. Dolphin

    Dolphin Senior Member (Voting Rights)

    Messages:
    5,793
    Free full text:
    https://www.tandfonline.com/doi/full/10.1080/21641846.2024.2335861

    Research Article
    Exploring the content validity of the Chalder Fatigue Scale using cognitive interviewing in an ME/CFS population
    Peter Gladwell
    Matthew Harland
    Aysha Adrissi
    Saskia Kershaw
    &
    Emma Dures
    Received 11 Dec 2023, Accepted 25 Mar 2024, Published online: 06 Apr 2024

    ABSTRACT
    Background
    The Chalder Fatigue Scale, also known as the Chalder Fatigue Questionnaire (CFQ), is a Patient Reported Outcome Measure (PROM) comprising 11 items designed to measure physical and cognitive fatigue. It is widely used with people with myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS). There is no published evaluation of the content validity of the CFQ.

    Objectives
    To elicit information regarding the cognitive processes undertaken by people living with ME/CFS when completing the CFQ, to allow examination of the CFQ’s content validity.

    Methods
    A qualitative study utilising semi-structured cognitive interviewing. All data were collated according to the CFQ item but some general criticisms of the content validity of the CFQ were also identified.

    Results
    The CFQ currently consists of one item clearly related to physical symptoms (1.6), four items clearly related to cognitive function (1.8, 1.9, 1.10, 1.11) and one item relating to fatigue (1.5) which could be interpreted as cognitive and/or physical fatigue. The other five items have been identified by participants as lacking clarity (1.1, 1.7), relating to behaviour not symptoms (1.2, 1.4), or relating to sleepiness not fatigue (1.3).

    Conclusion
    Participants provided a wealth of insight into the challenges related to relevance, comprehensiveness, and comprehensibility of the CFQ, indicating that revision is required. This strengthens the case for participation of people with lived experience at all stages of PROM development. There is a need for an assessment tool/PROM for clinical and research use in ME/CFS which has undergone content validation involving people living with ME/CFS.

     
    Peter Trewhitt, Simon M, Sean and 6 others like this.
  2. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,175
    Location:
    London, UK
    No there isn't. In every situation you want to know something different.
    Content validation is meaningless.
     
  3. NelliePledge

    NelliePledge Moderator Staff Member

    Messages:
    14,850
    Location:
    UK West Midlands
    Revision is required :banghead:

    only action needed is pressing delete
     
  4. Joan Crawford

    Joan Crawford Senior Member (Voting Rights)

    Messages:
    701
    Location:
    Warton, Carnforth, Lancs, UK
    Pretty much every scale like this used with pwME has a ceiling effect. Not much use.

    Fatigue severity scale a tad better but not much. Most pwME will score at the max.

    Neither covers PEM explicitly.
     
    MEMarge, Peter Trewhitt, Sean and 9 others like this.
  5. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,661
    Location:
    Canada
    Seems like there is an error in thinking when it comes to subjective questionnaires, that the same principle that applies to objective measurements must also apply here. Obviously a thermometer must give consistent values at every measurement. Same is true of any scientific instrument. If you get different answers across multiple measurements of the same thing, the instrument is invalid. And that follows, it's rational and consistent.

    But the same thing doesn't apply to subjective questions, as they don't have a correct objective answer. The thinking basically goes that a better subjective instrument would operate much the same. And that thinking is flawed. Especially as there is never any way to know that the same thing is being measured. Or, well, in this case rated, since subjective values cannot be measured in a scientific sense, only evaluated or rated.

    Also, in far too many cases (and the more psychosocial a questionnaire is, the more it does this), questions tend to be super weird and ambiguous, with multiple possible interpretations, so that everyone who fills in the questionnaire is answering a different question based on how they interpret it; in many cases the same answer is given for completely opposite reasons. And those will always be worse than useless. But this is the bread-and-butter of biopsychosocial: questionnaires that are more the ink-blot type than asking about objective values, things like how many hours of sleep or other quantifiable things.

    I don't think it's possible to make a reliable subjective PROM like this. The whole idea should be abandoned; it has failed miserably. Unless they ask about things that relate to real life, things that can be quantified, they will always be either worse than nothing, or merely as good as asking a small number of questions, oftentimes even just a single one.

    I can only think of maybe a couple of those questionnaires that held any value, and they were built explicitly to ask about functional capacity, what patients can do, how often and so on. The rest is almost entirely for the benefit of researchers so they can push their personal ideologies while claiming that "I'm not saying it, the questionnaires say so", which is more or less what the PACE gang has been doing with trials, and when the trials got invalidated, they're simply saying that it's their opinion and since no one seems to care, it holds. The instruments are only useful as rhetoric devices, peak pseudoscience.

    Chemistry only became a true science once we could measure real natural values: temperature, weight, volume, mass and so on. You can't do science without measuring things, without true measurements. Evidence-based medicine is all about going around that and pretending like it's just as good. That'll never lead anywhere.
     
    Tal_lula, RedFox, cfsandmore and 11 others like this.
  6. NelliePledge

    NelliePledge Moderator Staff Member

    Messages:
    14,850
    Location:
    UK West Midlands
    This questionnaire has been in use by these people for decades, yet only now are they seeing any problems or challenging it. If this paper had been done in 2013 you might have given them a bit of credit, but really this is a lightweight performance of showing distance from the previous status quo. Whitewashing.
     
  7. Sean

    Sean Moderator Staff Member

    Messages:
    8,064
    Location:
    Australia
    This.

    When I get asked these kind of questions by a clinician I usually have no idea what they are actually asking nor how to answer in a way that is accurate and meaningful, and not at high risk of misinterpretation.

    No doubt that some bigwig in psych land will frame that as some kind of psychopathology in me.
     
  8. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,734
    OK so this paper is highly pertinent as its conclusions (and intro) state this makes the case for 'PROMS' being needed.

    Although it doesn't actually, it just critiques the issues with the CFQ as a measure.

    And discussion of the flawed areas would have been more insightful from a retrospective point of view, to ensure there wasn't a repetition of the same errors, or same-same-but-different, where you've changed things but introduced different flaws due to the same underlying issue.

    I was quite surprised they actually used patients on this - although cynical that lots of patients in the BACME patient groups can also be staff members/have conflicts of various kinds.

    I note the difference between content validity and face validity here: Face Validity - an overview | ScienceDirect Topics
    Looking closely, they recruited them from 'a specialist ME/CFS service' and note they have been diagnosed with both Fukuda and NICE criteria so have PEM*** SEE EDIT AT BOTTOM. They were recruited during an introductory lecture to ME/CFS so may be relatively new to the condition.

    The next bit details the method - and I think it is probably worth bearing in mind that sample (of 13 participants), whether they were relatively 'new' to the condition and to understanding it, given how pwME can be when asked direct questions, and that these were potentially a lot of quite tricky questions.


    EDIT: hang on!!!!! how naughty. It is a 2024 paper and their claims of:
    "Patients were required to meet both the Fukuda et al. [Citation28] criteria and the NICE [Citation29] criteria at assessment, so post-exertional malaise (PEM) was a necessary criterion for diagnosis."

    use the reference of the 2007 guideline:

    [29] NICE. Chronic fatigue syndrome/myalgic encephalomyelitis (or encephalopathy): diagnosis and management of CFS/ME in adults and children. 2007.

    EDIT: I've now realised this was actually done in 2018/ethics approval was applied for in Jan 2018.
     
    Last edited: Apr 8, 2024
  9. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,734


    This surprises me again, because I would have assumed a focus group or some sort of working group - or, if interviews, that participants would at least have seen the questionnaire on paper in full to be speaking to it. There seem to be a lot of instructions and aspects that might be deemed helpful (maybe they are in 'ticking the box' with healthy persons) but could add to the load and 'rapport' for those with ME/CFS, perhaps unnecessarily.


    Is the CFQ always administered orally? Or does the person get to write their own answers?


    If it were me, I would have wanted the questions on paper in front of me to ease that load, given how many questions were being asked for each item (whereas when doing the survey itself, at least if it were done orally, they would only have to answer with whatever score).

    And I would have preferred time after the whole questionnaire had been administered, rather than the concurrent approach, because what matters is 'in the moment': stopping to ask people what they understood of the question after Q1 would change their behaviour when being read Q2, because you know what you are going to be asked about. You'd also get that sense of whether you'd just been done over by something you couldn't keep up with/that didn't make sense.

    Being able to do it once through, then pause and be given the paper sheet to look back at what you had just answered, without the pressure to answer any questions yet, would have given people the chance to think about whether they had any issues or difficulties with any of them. But then I guess there might be pros and cons.

    How will the PROMs be administered?
     
  10. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,734
    Anyway, is it possible to analyse this from the perspective of it being done by the people who were planning on saying there is a need for PROMs, and what they planned to put in said PROMs?


    For example, what is in the PROMS vs what the CFQ measures and whether either are really doing something completely valid for ME/CFS clinic measures?

    But it also potentially gives a bit of insight into how they decided to go about the PROM development, doesn't it?

    Where does all of this come from in the background section, for example, and how is it being applied to the PROMs, or is it just their assessment of the CFQ:


     
    Peter Trewhitt, Kitty and Sean like this.
  11. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,734
    Hmm I've just read:

    Thirteen participants took part (see Table 2) before the COVID-19 pandemic stopped delivery of the face-to-face seminars where participants were recruited: sufficient data had been collected to constitute a round of cognitive interviewing [26]


    So this was already looking into PROMs potentially at that point? But even if it was the second lockdown, it wouldn't have been long enough after any new guideline was confirmed for this to have been 'due to that'.

    I'm intrigued now what date this actually was begun - does this mean/confirm that their idea of PROMS was from before the new guideline ?

    yep:
    Ethics
    Ethical approval was given on 11 January 2018, REC reference 17/NW/0726, North West – Liverpool Central Research Ethics Committee.


    So they'd been planning it long enough before 2018 they'd applied for ethics approval by then. PS the ME/CFS clinic they did it with was Bristol.
     
    Last edited: Apr 8, 2024
  12. Peter Trewhitt

    Peter Trewhitt Senior Member (Voting Rights)

    Messages:
    4,081
    Well this is an excellent article for the authors to self cite when justifying other related work, eg the MEA funded project currently happening.

    Why not undertake some objective measures to calibrate the questionnaires against rather than attempting to use other questionnaires.
     
    bobbler, MEMarge, obeat and 3 others like this.
  13. poetinsf

    poetinsf Senior Member (Voting Rights)

    Messages:
    341
    Location:
    Western US
    I've been using a wellness scale instead of a fatigue scale, and that seems to be one way to avoid the ceiling effect. Knowing that 1 means complete immobilization, I never dipped below 3 even when I was struggling day in, day out. These days, my worst PEM scores about 4.

    It could be psychological. Focusing on health rather than sickness may moderate your view of how sick you are.
     
  14. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,661
    Location:
    Canada
    That seems to be the fundamental problem with PROMs: they can't be calibrated. Instead they get 'validated' by some convoluted process involving acceptability, reproducibility and a few other hoops, but none of those actually provides any reliability or accuracy. More often than not they're not even assessing what they're supposed to; they're barely more than Myers-Briggs/horoscopes of health.

    Maybe if it were at least understood that PROMs are a bit better than nothing, may be useful in some contexts, but cannot be used to reliably build evidence, it could work out. But that's not what we see happening. PROMs are systematically misused, starting with the misinterpretation that they are measures, when in reality they're only patient-reported outcome assessments or evaluations, not measures in a scientific sense.

    EBM has been misusing PROMs for decades, in large part because they are so easy to misuse, and everything biopsychosocial is built on them; their whole evidence base vanishes otherwise. The CFQ has been misused for decades even though it's probably one of the most invalid and unreliable ones out there, precisely because it allowed ideologues to misrepresent reality.

    They're basically an imperfect solution to a different problem: that MDs inherently don't trust patient input, and are not entirely wrong about it. So they built this convoluted process where they get patient outcomes, but in a structured way, except they're mostly arbitrary, usually biased, often weird, and in no way any more valid than simply asking the right handful of questions. But that's always the hard part: which questions to ask? And how to interpret them?

    But of course no PROMs is also bad, since most illnesses and their impacts can't be assessed by some biological test. So we're stuck in this weird worst-of-both-worlds situation where the problem is amplified instead of corrected.
     
    Kitty, bobbler, EzzieD and 5 others like this.
  15. Trish

    Trish Moderator Staff Member

    Messages:
    55,414
    Location:
    UK
    I have 2 problems with this research.

    First, why wasn't it done decades ago? Anyone with half a brain could just read the 11 statements and see they were useless for 'measuring' severity of fatigue in ME/CFS, for all the reasons found in this research, yet UK clinics and researchers all over the world have been using it for decades. Surely part of the validation process for any questionnaire should be to ask a large sample of the relevant patient population and healthy controls to say how they have interpreted each question and on what basis they picked an answer.

    Second, the study doesn't even mention all the other problems with CFQ, as demonstrated by the S4ME paper about CFQ. Nor do the authors even consider the major problem of subjective PROMS being used as outcome measures for service evaluation and trials for rehab. clinics, or the better objective measures that could be used. So they end up concluding all that is needed is better PROMS.
     
  16. Maat

    Maat Senior Member (Voting Rights)

    Messages:
    447
    My issue with PROMs is that, to use their own words, in every bit of research I've seen (and I didn't have to read research until after I was treated), it is portrayed as 'perception' or 'perceived'. Meaning: become aware of (something) by the use of one of the senses, especially that of sight! Therefore, we perceive fatigue, we perceive pain, emotion, abuse, but that must be ignored and we must take up our beds and walk.

    ETA: This perception has effected a loss of trust. Meaning: to cause (something) to happen; bring about.
     
  17. Maat

    Maat Senior Member (Voting Rights)

    Messages:
    447
    Also on 13 December 2023

    Attribution of neuropsychiatric symptoms and prioritization of evidence in the diagnosis of neuropsychiatric lupus: mixed methods analysis of patient and clinician perspectives from the international INSPIRE study | Rheumatology | Oxford Academic (oup.com)

    This post has been copied to a new thread. Please go to that thread if you want to discuss it.
     
    Last edited by a moderator: Apr 14, 2024
  18. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    4,002
    Location:
    Belgium
    Seems like a useful paper that identified many of the problems with the popular Chalder Fatigue Scale that have been mentioned multiple times here on the forum.

    Some quotes from the paper:

    One challenge relates to the initial instruction: ‘If you have been feeling tired for a long while, compare yourself to how you felt when you were last well’. Most participants had longstanding ME/CFS so were being asked to recall how they felt many years ago: they doubted their ability to do this accurately

    Another challenge is that this instruction seems to offer a choice: that only those who had longstanding symptoms should compare themselves to when they were last well

    As it stands, the CFQ does not allow participants to represent their variable experience over the past month, and the impact of PEM. In addition, the questionnaire does not capture information about the cyclical nature of the condition over a longer period

    The response options also raised questions for some participants who indicated that endorsing the response items ‘more than usual’ or ‘much more than usual’ might indicate an increase in severity, or frequency, or both

    The findings indicate that the CFQ consists of one item clearly related to physical symptoms (6), four items clearly related to cognitive function (8, 9, 10, 11) and one item relating to fatigue (5) which could be interpreted as cognitive and/or physical fatigue. The other five items have been identified by participants as lacking clarity (1, 7), relating to behaviour not symptoms (2, 4), or relating to sleepiness not fatigue (3).
     
    bobbler, Sean, rvallee and 6 others like this.
  19. dave30th

    dave30th Senior Member (Voting Rights)

    Messages:
    2,447
    I can't remember seeing this. Can you post it?
     
    Kitty and Peter Trewhitt like this.
  20. Trish

    Trish Moderator Staff Member

    Messages:
    55,414
    Location:
    UK
    Kitty, MEMarge, Comet and 3 others like this.
