Exploring the validity of the Chalder Fatigue Scale in chronic fatigue syndrome - 1998 Morriss et al

In this trial of exercise in lupus co-authored by Peter White, they used three fatigue scales. The Chalder Fatigue scores were the only ones to change significantly. Scores on the other two fatigue scales - the Fatigue Severity Scale and a Visual Analogue Scale - did not change significantly between groups.

Interesting. Do you have a link to the trial?
 
The evidence base for GET and CBT in our illness relies heavily on the Chalder Fatigue Questionnaire, because the change in SF-36 Physical Function scores is often unimpressive/non-existent. I think a good paper outlining its flaws, or maybe better, a study demonstrating them, could be helpful.

This is a great idea

Is there some kind of royalty system for use of such questionnaires or are they free to use? I hope there’s no money being made off this claptrap.
 
@Lucibee and I talked about particular problems with the Chalder Fatigue scale on a previous thread. We looked at NHS data from Collin & Crawley's 2017 paper and PACE data.

#123 through 128 of this thread https://www.s4me.info/threads/a-general-thread-on-the-pace-trial.807/page-7#post-61308
we were also discussing it at the other place in 2017
https://forums.phoenixrising.me/thr...medicine-again-by-d-tuller-et-al.50158/page-5

the original paper on the development of the Fatigue scale
http://simonwessely.com/Downloads/Publications/CFS/32.pdf
 
In this trial of exercise in lupus co-authored by Peter White, they used three fatigue scales. https://academic.oup.com/rheumatology/article/42/9/1050/1772059

The Chalder Fatigue scores were the only ones to change significantly. Scores on the other two fatigue scales - the Fatigue Severity Scale and a Visual Analogue Scale - did not change significantly between groups.

I think the Chalder fatigue scale is misleading because in chronic conditions, patients cannot accurately do what the scale asks them to do, which is to compare their current state with their pre-illness state.

I don't have a memory of what being healthy felt like. What I think happens is that the brain, when asked for memories it doesn't have, simply makes its best guess. Maybe this best guess is further, or particularly, influenced by expectations (patients having been told that the treatment will improve their fatigue, etc.).
 
As an aside, it seems remarkable to me that Morriss et al also used some objective measures to validate two of the constructed "four factors of fatigue", and if I understood correctly, they claim the objective measures correlated with the subjective assessments?

It is important to point out that there is a big difference between correlations at a static point in time and correlations in an unblinded prospective clinical trial which is inherently subject to various biases.
 
Write-up looking at the CFQ in Occupational Medicine (2014)

https://academic.oup.com/occmed/article/65/1/86/1433061

I looked up other publications by the author of this article (Craig Jackson) and found this one:

Variation and Interactional Non-Standardization in Neuropsychological Tests: The Case of the Addenbrooke's Cognitive Examination.
Abstract
The Addenbrooke's Cognitive Examination (ACE-III) is a neuropsychological test used in clinical practice to inform a dementia diagnosis. The ACE-III relies on standardized administration so that patients' scores can be interpreted by comparison with normative scores. The test is delivered and responded to in interaction between clinicians and patients, which places talk-in-interaction at the heart of its administration. In this article, conversation analysis (CA) is used to investigate how the ACE-III is delivered in clinical practice. Based on analysis of 40 video/audio-recorded memory clinic consultations in which the ACE-III was used, we have found that administrative standardization is rarely achieved in practice. There was evidence of both (a) interactional variation in the way the clinicians introduce the test and (b) interactional non-standardization during its implementation. We show that variation and interactional non-standardization have implications for patients' understanding and how they might respond to particular questions.

https://www.ncbi.nlm.nih.gov/pubmed/31550997
https://sci-hub.tw/10.1177/1049732319873052

Interesting that he did not apply any of this to the interpretation/administration of the CFQ (e.g. instructions on how to interpret 'as usual').

(I think this is him [there's an email contact address]
https://www.bcu.ac.uk/social-sciences/about-us/staff/psychology/craig-jackson )

Would be interesting to see what he thinks about the assessments of the CFQ by @Lucibee, @Michiel Tack and others on the latest Cochrane review thread.

ETA: and of course the comments on this thread.
 
It is important to point out that there is a big difference between correlations at a static point in time and correlations in an unblinded prospective clinical trial which is inherently subject to various biases.

Objective measures will always show some correlation. Everything is correlated to everything to some extent - positively or negatively. So slow walking is correlated to hypothyroidism and to a broken leg but it is not a very good measure of hypothyroidism or broken legs (or depression).
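The point that "some correlation" is still compatible with being a poor measure can be put in numbers. A minimal sketch with made-up figures (a hypothetical condition that slows walking by a modest 0.4 SD against 1 SD of ordinary person-to-person variation): walking speed correlates detectably with having the condition, yet as a classifier it barely beats a coin flip.

```python
import math
import random

random.seed(1)

n = 10_000
# Hypothetical numbers: half the sample has the condition, which slows
# walking by 0.4 SD; healthy walking speed varies with SD = 1.
has_condition = [i < n // 2 for i in range(n)]
walk_speed = [random.gauss(-0.4 if c else 0.0, 1.0) for c in has_condition]

# Pearson correlation between condition status (0/1) and walking speed.
mx = sum(has_condition) / n
my = sum(walk_speed) / n
cov = sum((c - mx) * (w - my) for c, w in zip(has_condition, walk_speed)) / n
sx = math.sqrt(sum((c - mx) ** 2 for c in has_condition) / n)
sy = math.sqrt(sum((w - my) ** 2 for w in walk_speed) / n)
r = cov / (sx * sy)

# Simple rule: "slower than average => has the condition".
cut = my
acc = sum((w < cut) == c for c, w in zip(has_condition, walk_speed)) / n
print(f"correlation |r| = {abs(r):.2f}, classifier accuracy = {acc:.0%}")
```

In this toy setup the correlation is real and would easily reach statistical significance, but the accuracy of diagnosing anyone from their walking speed is only slightly above 50% - which is the sense in which slow walking "correlates with" hypothyroidism or a broken leg without being a good measure of either.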

Agree with both.

I was coming from another perspective.

There seems to be a consensus that in trials of therapist-delivered treatments, where blinding is not possible, objective measures are needed.

There also seems to be a certain consensus that for trials on certain illnesses like ME, because there are no objectively measurable signs and symptoms specific to that illness, you have to mostly rely on subjective measures to assess any treatment's effect on symptoms / illness severity.

Cochrane and other institutions assessing evidence of treatment effects seem rather comfortable with this situation; it might be considered just one factor among others in the 'tools' they use to assess the quality of evidence. This seems at odds with the standards used for trials investigating other illnesses (with 'hard' signs and symptoms).

Coming from that perspective, I found it interesting that people tested the correlation of some items on the Chalder scale with objective measures. Even if those are very unspecific measures, don't they invalidate the otherwise-used argument that subjective symptoms like fatigue can't be objectively assessed per se?
 
Subjectivity has several meanings, but the important one here is that a measure can be skewed far enough in one direction or the other, by knowing what result is wanted, to produce a statistically significant effect. That is entirely consistent with there still being some degree of correlation with an objective measure. Scoring in ice dancing is subjective, but the results tend to correlate quite well with an absence of technical errors like falling over.

As we have debated in the past, the answer is to have multiple threshold measures that require both a significant change in the desired subjective measure AND a significant change in some objective measure that corroborates the subjective measure. There is still room for getting spurious results but much less.
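The dual-threshold idea above can be sketched in a quick simulation. All numbers are hypothetical: a treatment with no real effect, an unblinded subjective score that picks up a 0.5 SD expectation bias, and an objective measure that doesn't. Requiring a significant result on both measures, rather than the subjective one alone, collapses the rate of spurious "positive" trials.

```python
import math
import random
import statistics

random.seed(0)

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def two_sample_p(a, b):
    # Two-sided z-test on the difference in means
    # (normal approximation; fine for n = 50 per arm).
    se = math.sqrt(statistics.variance(a) / len(a) +
                   statistics.variance(b) / len(b))
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - norm_cdf(abs(z)))

def null_trial(n=50, bias=0.5):
    # The treatment truly does nothing. The unblinded subjective score
    # carries a 0.5 SD expectation bias; the objective measure doesn't.
    subj_ctrl = [random.gauss(0.0, 1.0) for _ in range(n)]
    subj_trt  = [random.gauss(bias, 1.0) for _ in range(n)]
    obj_ctrl  = [random.gauss(0.0, 1.0) for _ in range(n)]
    obj_trt   = [random.gauss(0.0, 1.0) for _ in range(n)]
    p_subj = two_sample_p(subj_trt, subj_ctrl)
    p_obj  = two_sample_p(obj_trt, obj_ctrl)
    return p_subj < 0.05, (p_subj < 0.05 and p_obj < 0.05)

runs = [null_trial() for _ in range(2000)]
subj_only = sum(r[0] for r in runs) / len(runs)
composite = sum(r[1] for r in runs) / len(runs)
print(f"false positives, subjective endpoint alone:    {subj_only:.0%}")
print(f"false positives, subjective AND objective both: {composite:.0%}")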
 
There also seems to be a certain consensus that for trials on certain illnesses like ME, because there are no objectively measurable signs and symptoms specific to that illness, you have to mostly rely on subjective measures to assess any treatment's effect on symptoms / illness severity.
The end goal must be to increase patients' sustainable activity capacity back to within the range of the healthy general population, which is definitely open to objective measurement.

The "it's subjectively defined, therefore only subjective outcome measures matter" claim really makes me angry. It is the last refuge of scoundrels and cowards desperate to avoid a proper assessment of their claim.
 
The end goal must be to increase patients' sustainable activity capacity back to within the range of the healthy general population, which is definitely open to objective measurement.

The "it's subjectively defined, therefore only subjective outcome measures matter" claim really makes me angry. It is the last refuge of scoundrels and cowards desperate to avoid a proper assessment of their claim.
:thumbup:

I think @Jonathan Edwards a while back said something along the lines of if you wanted to know if the patient felt better you could just ask the patient.

Even where there are objective measurements, the definition of recovery & harms for any trial should be agreed with the patient population.

In a healthcare system where there are never enough resources the point is not to treat people regardless of outcome, it is to improve the outcome for the patient.

The BPSers have long held that patients with ME, and now those with MUS, shouldn't undergo unnecessary tests, even though you can't be 100% sure a test isn't necessary until you have done it and seen the result. Yet at the same time they feel it's appropriate for patients to undergo treatments that do nothing to improve the outcome for the patient, except perhaps to make the majority worse. As long as the treatment is provided by their own speciality or its cronies, of course.

The CFQ is simply a means to create noise. A distraction to stop people noticing that there was no meaningful improvement in function or quality of life.
 