Measuring fatigue. Discussion of alternatives to questionnaires.

Esther12

Moderator note. Thread moved from:
https://www.s4me.info/threads/persi...-2018-pariante-et-al.7050/page-19#post-130652

FWIW, I do not think the tools that my colleagues and I have developed will be much use here, for a number of reasons that are very boring and technical, plus the simpler one that I have no reason to think that the problems with ME/CFS research involve fabrication or falsification of data; my guess is that they will turn out to be much more about data dredging, or what the statistician Andrew Gelman calls "The Garden of Forking Paths".

I get the impression that a key problem we're facing is that we don't have a good measure of 'fatigue' yet people who've made their careers as experts in 'fatigue' want to try to overlook that. At this point, they know what outcomes they can 'improve', and so they can pre-specify them. Post-PACE all CBT/GET CFS trials will be doing all they can to focus on subjective self-report outcomes in a way that makes it as easy as possible for them to claim they've reached a clinically significant improvement.

Is anyone aware of any literature on that sort of long-term 'Garden of Forking Paths', with researchers choosing primary outcomes for later trials on the basis of what they can get as positive in earlier trials rather than on the basis of what is the more useful outcome for providing patients with useful information about treatment efficacy? I get the impression that a lot of academics do not view this as a problem.
 
I get the impression that a key problem we're facing is that we don't have a good measure of 'fatigue' yet people who've made their careers as experts in 'fatigue' want to try to overlook that.

Perhaps we should try to make a "fatigue" measure ourselves?

Brainstorm ideas from those of us who actually know what "fatigue" is and isn't. Still ignores all the other problems we have, but anything *we* can come up with has got to be better than Chalder's idea of measurement, surely.
 
Perhaps we should try to make a "fatigue" measure ourselves?

Brainstorm ideas from those of us who actually know what "fatigue" is and isn't. Still ignores all the other problems we have, but anything *we* can come up with has got to be better than Chalder's idea of measurement, surely.

I don't hold out much hope of coming up with a questionnaire that will not be prone to problems with bias. I do think that there are better ones than the Chalder Fatigue Scale already, and moving on from that would be something, but I think that research which assumes fatigue questionnaires are valid and reliable measures of the symptom of fatigue is going to cause us problems whatever questionnaires they use.
 
Perhaps we should try to make a "fatigue" measure ourselves?

Brainstorm ideas from those of us who actually know what "fatigue" is and isn't. Still ignores all the other problems we have, but anything *we* can come up with has got to be better than Chalder's idea of measurement, surely.
I've been thinking about subjective assessments, and have decided that they separate out into two camps (which overlap of course).

One asks intangible questions about how people feel, or how difficult they find something. These are always susceptible to pressure, particularly if the answers are graded. How am I at walking a mile? Well, am I comparing it with how I used to walk a mile, or how much difficulty a person who cannot even get to a chair from a bed would have?

The other asks questions about actual actions, such as a decrease in working hours. There are bad questions that can be asked along these lines: the kids at school used to like the question "How much television do you watch?" – it was very, very difficult to explain to them that no one can ever give a reliable answer to that.

So, the sort of questionnaire that I would respect would contain well-defined questions on actual actions.

I've gone off topic. Sorry.
 
I get the impression that a key problem we're facing is that we don't have a good measure of 'fatigue' yet people who've made their careers as experts in 'fatigue' want to try to overlook that.

Is anyone aware of any literature on that sort of long-term 'Garden of Forking Paths', with researchers choosing primary outcomes for later trials on the basis of what they can get as positive in earlier trials rather than on the basis of what is the more useful outcome for providing patients with useful information about treatment efficacy?

Perhaps we should try to make a "fatigue" measure ourselves?

I agree that this is what needs focussing on.

There is even doubt that, as a symptom, the target should be called fatigue.

So I am increasingly of the view that what we want to measure is what Esperanza has called motor fatigue. And the need is to measure a specific pattern of motor fatigue X, that is characteristic of ME, or maybe patterns X and Y and even Z that are patterns characteristic of subsets of ME. Whether the symptom is best called fatigue does not matter so much. X or Y would be objective indicators of whatever symptomatology was responsible for the impact on active life in ME.

It would be the equivalent of respiratory function tests for asthma. Nobody assesses asthma by asking how wheezy someone is. They measure air volumes shifted. And to confirm that the problem is truly asthma, which is reversible from episode to episode, patients are asked to measure their own volumes on a daily basis.

This is of course where the accelerometers come in, but they need to be used intelligently, not just as a crude index of the number of wiggles. Off hand I can think of about a dozen PWME whose movement patterns I am familiar with. They will differ from those of healthy people in a variety of specific respects even if the total number of wiggles is not different. I am pretty sure this can be measured, but it requires intelligence in interpretation.

I do not know of literature on Garden Paths as a problem but the reality of the phenomenon is very familiar to any rheumatologist. Around 1970 the first new drug to have an immediate effect on joint pain and swelling was developed using what we now consider standard trial methods - ibuprofen. Jason et al. devised a scoring system that came to be known as DAS, based on counting numbers of swollen and tender joints. It was a useful system for anti-inflammatory drugs because it measured the rather limited benefits they gave. But it became embedded in scoring systems for all anti-rheumatics, to the extent that you could not make an application to the FDA for a new drug license without using it. This was a serious problem for the designers of collagenase inhibitors, which were intended to have no effect on symptoms but to protect tissue from damage. Another scoring system was the ACR improvement grading. ACR20 was for many years considered the end point, because a 20% improvement across various variables was considered good going. But when we started seeing ACR70 grades on a regular basis there was argument about the relevance of ACR20.

I guess what this illustrates is that picking outcome measures that look good is a regular part of treatment development and to some extent makes sense and is legitimate - but it can also distract from more important goals. And if overblown claims are made on the basis of limited outcomes, and negative findings are suppressed, then things are going badly wrong.

I think a fatigue measure could be generated from accelerometers with good software. You would probably need ankle and wrist monitors at least, maybe both sides.
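To make that concrete, here is a minimal sketch of the sort of thing the software might compute from minute-level activity counts, going beyond a single total. Everything in it - the thresholds, the feature names and the example data - is an invented illustration, not a worked-out method.

```python
# Minimal sketch: summarise minute-level accelerometer counts into
# pattern features rather than a single total ("number of wiggles").
# Thresholds and example data are illustrative assumptions only.

from statistics import mean

def activity_pattern_features(counts, active_threshold=100, moderate_threshold=2000):
    """counts: activity counts, one value per minute."""
    if not counts:
        raise ValueError("need at least one minute of data")
    active = [c >= active_threshold for c in counts]

    # Split the recording into alternating runs of activity and rest.
    bouts, rests = [], []
    run_length, run_active = 0, active[0]
    for is_active in active:
        if is_active == run_active:
            run_length += 1
        else:
            (bouts if run_active else rests).append(run_length)
            run_length, run_active = 1, is_active
    (bouts if run_active else rests).append(run_length)

    return {
        "total_counts": sum(counts),            # the crude index of wiggles
        "n_activity_bouts": len(bouts),
        "mean_bout_minutes": mean(bouts) if bouts else 0,
        "mean_rest_minutes": mean(rests) if rests else 0,
        "fraction_moderate": mean(c >= moderate_threshold for c in counts),
    }

# Two one-hour recordings with the same total counts but different patterns:
steady = [150] * 60                       # continuous light activity
burst_then_rest = [3000] * 3 + [0] * 57   # a short intense burst, then rest
print(activity_pattern_features(steady))
print(activity_pattern_features(burst_then_rest))
```

The point is simply that two recordings with identical totals can have very different bout-and-rest structures, which is the kind of pattern difference described above.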
 
I think a fatigue measure could be generated from accelerometers with good software. You would probably need ankle and wrist monitors at least, maybe both sides.

Interesting, but even accelerometers are about physical fatigue. My symptoms are primarily cognitive. Though how anyone could measure lack of mental energy, inability to process information, indecision, poor concentration, etc, I have no idea.
 
Interesting, but even accelerometers are about physical fatigue. My symptoms are primarily cognitive. Though how anyone could measure lack of mental energy, inability to process information, indecision, poor concentration, etc, I have no idea.

I am not sure accelerometers are about 'physical fatigue'. They document patterns of activity. Cognitive problems are likely to impact patterns of activity. Put an accelerometer on the right wrist of someone with ME doing a finals exam and you are likely to notice differences from others.
 
My symptoms are primarily cognitive. Though how anyone could measure lack of mental energy, inability to process information, indecision, poor concentration, etc, I have no idea
I am like this, too.

To measure correctly any deviation, you'd need a baseline, and imo you'd need to approach our brain PEM as episodic, not unlike the way one would try to assess calcium or potassium levels in PP patients before and after episodic attacks.

Easier said than done.

I seem to remember a handful of years ago there being reports that someone had modified the Wechsler scale for people with mild TBI; perhaps that is a possible approach.
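One crude way of formalising the baseline-versus-episode idea, purely as an illustration: express each person's in-episode score on some cognitive test as a change from their own baseline, scaled by the test's retest variability (a reliable change index). The scores, the reliability figure and the cut-off below are all invented.

```python
# Minimal sketch of treating cognitive PEM as episodic: compare each
# person's in-episode score with their own rested baseline, scaled by
# the test's retest variability (a Reliable Change Index). All numbers
# here are invented for illustration.

import math

def reliable_change(baseline, episode, sd_baseline, retest_reliability):
    """RCI = (episode - baseline) / standard error of the difference."""
    sem = sd_baseline * math.sqrt(1 - retest_reliability)  # standard error of measurement
    sdiff = math.sqrt(2) * sem                             # SE of a difference score
    return (episode - baseline) / sdiff

# Hypothetical processing-speed scores (higher = better) at a rested
# baseline and again during a post-exertional episode.
patients = {"A": (52, 38), "B": (47, 45), "C": (60, 41)}
for name, (base, epi) in patients.items():
    rci = reliable_change(base, epi, sd_baseline=10, retest_reliability=0.85)
    flag = "reliable decline" if rci <= -1.96 else "within retest noise"
    print(f"{name}: RCI = {rci:+.2f} ({flag})")
```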
 
I am not sure accelerometers are about 'physical fatigue'. They document patterns of activity. Cognitive problems are likely to impact patterns of activity. Put an accelerometer on the right wrist of someone with ME doing a finals exam and you are likely to notice differences from others.

True, though I'm not sure it would capture the full disability.
 
Interesting, but even accelerometers are about physical fatigue. My symptoms are primarily cognitive. Though how anyone could measure lack of mental energy, inability to process information, indecision, poor concentration, etc, I have no idea.

By tracking eye and head movements during a prolonged intellectual task? Someone with poor concentration won't be able to focus on the task as well.
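A minimal sketch of how that might be quantified, assuming you had gaze samples and a defined task area on the screen (the area, the five-minute blocks and the example data are all assumptions): compute the fraction of samples that land on the task in successive blocks and see whether it falls away over a prolonged session.

```python
# Minimal sketch of the eye-tracking idea: for gaze samples recorded
# during a long task, compute the fraction of samples falling inside a
# task area-of-interest (AOI) for successive time blocks. A falling
# fraction would suggest worsening concentration. The AOI, block length
# and data are invented purely for illustration.

def on_task_fraction_by_block(samples, aoi, block_seconds=300):
    """samples: list of (time_s, x, y); aoi: (x_min, y_min, x_max, y_max)."""
    x_min, y_min, x_max, y_max = aoi
    blocks = {}
    for t, x, y in samples:
        block = int(t // block_seconds)
        on_task = x_min <= x <= x_max and y_min <= y <= y_max
        hits, total = blocks.get(block, (0, 0))
        blocks[block] = (hits + on_task, total + 1)
    return {block: hits / total for block, (hits, total) in sorted(blocks.items())}

# Invented example: gaze starts to drift off the task area later on.
samples = [(t, 400, 300) for t in range(0, 600)] + \
          [(t, 900 if t % 3 == 0 else 400, 300) for t in range(600, 1200)]
print(on_task_fraction_by_block(samples, aoi=(100, 100, 700, 500)))
```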
 
True, though I'm not sure it would capture the full disability.

I think the idea would be that you calibrate the data against a subjective account of 'full disability'. That could be related to before and after, or up and down. Measures of pain, such as timed walking distance, have to be calibrated against a subjective account of pain.

There is an obsession in medical measurement with standardising everything. However, it is perfectly possible and legitimate to customise scales to individuals to record change. We did this in a lupus study in the 1980s and although I never followed that line of work further the referees were enthusiastic about the unconventional scoring system.
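As a rough sketch of within-person calibration - not the method from the lupus study, just an illustration - each person's daily objective summary could be related to their own daily subjective rating, so that change is read on that individual's scale. The data and the choice of a simple per-person correlation are assumptions.

```python
# Minimal sketch of within-person calibration: relate each person's daily
# objective summary (e.g., from an accelerometer) to their own daily
# subjective rating, instead of forcing everyone onto one standard scale.
# The data and the simple per-person correlation are assumptions.

from statistics import correlation  # Python 3.10+

def per_person_calibration(daily_records):
    """daily_records: {person: [(objective_value, subjective_rating), ...]}"""
    return {
        person: correlation([o for o, _ in days], [s for _, s in days])
        for person, days in daily_records.items()
    }

# Invented example: person A's ratings track their activity closely,
# person B's hardly at all.
records = {
    "A": [(3000, 2), (5200, 4), (7100, 6), (9000, 7), (11000, 9)],
    "B": [(3000, 5), (5200, 4), (7100, 6), (9000, 5), (11000, 5)],
}
print(per_person_calibration(records))
```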
 
It strikes me that although I am probably as active as most ‘sedentary’ people, looking at activity for every minute would show how slowly I’m getting things done, and how each period of activity is followed by a period of rest. There wouldn’t be any spikes of anything that looked like moderate or intense activity.
 