Response variability is associated with self-reported cognitive fatigue in multiple sclerosis, 2010, Bruce et al.

Woolie

Senior Member
I thought this piece of (oldish) research might be of interest to people interested in measuring cognitive dysfunction in ME. It notes that self-reported fatigue in MS doesn't correlate well with actual cognitive performance measures (the same is true in ME). Here, the authors developed a new performance measure that appears to track self-reports much better, which they call response time variability. It's a measure of the standard deviation of response times to items in a memory recognition task (excluding incorrect responses).

Full text of the article can be downloaded here and here.
Bruce, J. M., Bruce, A. S., & Arnett, P. A. (2010). Response variability is associated with self-reported cognitive fatigue in multiple sclerosis. Neuropsychology, 24(1), 77–83. https://doi.org/10.1037/a0015046

Abstract
Cognitive fatigue is a common, often debilitating symptom of multiple sclerosis (MS). Although MS patients frequently report that fatigue negatively affects cognitive functioning, most studies have found little evidence for a direct relationship between self-reported cognitive fatigue and traditional measures of neuropsychological functioning. The purpose of the present study was to examine the association between self-reported cognitive fatigue and a measure of response time variability (RTV). MS patients demonstrated significantly higher RTV than controls, and RTV was highly correlated with self-reported cognitive fatigue among relapsing-remitting and secondary progressive MS patients. Results highlight the need to implement newer methods to further elucidate the relationship between cognitive fatigue and neuropsychological functioning in MS.
 
(haven't read the study yet) but why would response time variability have anything to do with fatigue if the response time itself isn't correlated. Sounds like a statistical correlation that doesn't mean much, to be honest.
 
(haven't read the study yet) but why would response time variability have anything to do with fatigue if the response time itself isn't correlated. Sounds like a statistical correlation that doesn't mean much, to be honest.
What they did is just create a performance measure from the standard deviation of the response times, and they found that high scores on this measure were associated with higher self-reported fatigue.

The rationale seems pretty sound to me. The idea is that certain kinds of cognitive difficulties might manifest not as consistently slow performance, but as greater variability in performance (because small trial-to-trial changes in overall noise levels within the cognitive system may have abnormally large effects on performance).
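To make the measure concrete, here's a rough sketch of how RTV could be computed. This is just an illustration of the idea (standard deviation of correct-trial response times), not the paper's actual scoring procedure, and the data are made up:

```python
from statistics import mean, stdev

def response_time_variability(rts_ms, correct):
    """RTV: standard deviation of response times on correct trials only."""
    correct_rts = [rt for rt, ok in zip(rts_ms, correct) if ok]
    if len(correct_rts) < 2:
        raise ValueError("need at least two correct trials")
    return stdev(correct_rts)

# Two hypothetical participants with identical mean RT (500 ms)
steady  = [510, 495, 505, 500, 490, 500]
erratic = [400, 700, 380, 650, 420, 450]
all_correct = [True] * 6

print(mean(steady), mean(erratic))  # same mean speed: 500 500
print(round(response_time_variability(steady, all_correct), 1))   # 7.1
print(round(response_time_variability(erratic, all_correct), 1))  # 138.4
```

The point is that a plain mean response time can't distinguish these two participants at all, while RTV separates them sharply.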
 
Neither of the links works for me and I couldn't get it on sci hub, so I haven't read the full article.

I wonder whether the variability reflected an actual slowing down, or more erratic responding.
 
Thanks Andy. Don't know why it didn't work the first time I tried.

To answer my own question, the discussion section suggests it's not just that fatigue leads to slowing down over a series of tasks, but that attention wanders and it's hard to get concentration back some of the time.

From the discussion:
Response time variability measures aspects of executive functioning related to a person’s ability to consistently focus and purposefully sustain mental effort. Caused primarily by frontal systems dysfunction and white matter damage, increased RTV is associated with various fatiguing conditions and fatigue due to sleep deprivation. This was the first study to examine RTV among patients with MS. Consistent with hypotheses and results found in other neurological populations, MS patients exhibited increased RTV and response latency when compared with controls.

Cognitive fatigue may not cause a linear decrement in neuropsychological performance with sustained mental effort, as has been assumed. Instead, fatigue may affect cognition by increasing response variability during individual mental tasks. In this manner, people who experience cognitive fatigue may have occasional lapses in attention; during these lapses, additional effort may be required to muster the necessary mental reserves to efficiently and consistently perform a designated task.
 
Haven't read this paper yet but this is not a new concept. I recall reading a paper from 2007 or so about subjective cognitive complaints and increased standard deviation of performance on timed tasks in normal people. The lack of correlation between objective neuropsych tests and subjective cognitive problems is a perennial problem in research not just in ME but all kinds of neuro and psych conditions.
 
Haven't read this paper yet but this is not a new concept. I recall reading a paper from 2007 or so about subjective cognitive complaints and increased standard deviation of performance on timed tasks in normal people. The lack of correlation between objective neuropsych tests and subjective cognitive problems is a perennial problem in research not just in ME but all kinds of neuro and psych conditions.

One of the main problems with these studies is that they tend to be cross-sectional, so they have no measure of the pre-morbid performance of participants. Hence such studies are highly susceptible to participation biases: participants tend to have well above normal intelligence, skewing the results.

In the case of healthy participants, the variation in degree of impairment is probably too small to detect against easily-biased self-reports of subjective complaints, and it is rarely ethical to subject healthy participants to whatever would be required to cause significant impairment on such tests.
 
One of the main problems with these studies is that they tend to be cross-sectional, so they have no measure of the pre-morbid performance of participants. Hence such studies are highly susceptible to participation biases: participants tend to have well above normal intelligence, skewing the results.

There are tests (reading and vocabulary) that can estimate premorbid intelligence pretty well so you can match your participants with controls that way. Commonly done in neuropsych.
 
There are tests (reading and vocabulary) that can estimate premorbid intelligence pretty well so you can match your participants with controls that way. Commonly done in neuropsych.

"pretty well"? How much of the variance does it explain with what samples? I suggest there is substantial variance with that method.
There are many problems. What about individuals who became ill as teenagers and thus stunted their education? What about general differences in educational background and interests. Factors like this are compounded for individuals with significantly above average intelligence.
 
"pretty well"? How much of the variance does it explain with what samples? I suggest there is substantial variance with that method.
There are many problems. What about individuals who became ill as teenagers and thus stunted their education? What about general differences in educational background and interests. Factors like this are compounded for individuals with significantly above average intelligence.

You're not matching them for variance, you're matching them for mean premorbid intelligence.

What about individuals who became ill as teenagers and thus stunted their education?

Yup, this is a big problem. We know that ME is the most common cause of long-term school absence and it's an illness that often strikes its victims in adolescence.
 
You're not matching them for variance, you're matching them for mean premorbid intelligence.

Those who assume they can match groups using such tests, without understanding that the tests only explain some of the variance and therefore do not fully predict premorbid intelligence, are committing a deeper methodological sin...
 
There are tests (reading and vocabulary) that can estimate premorbid intelligence pretty well so you can match your participants with controls that way. Commonly done in neuropsych.
My personal experience with these is they can be inaccurate and subject to bias. For example, if I recall correctly, premorbid intelligence estimates are accompanied with confidence levels, eg, 90%. Well a 90% confidence is just plainly terrible.

To make a long story short, in my n=1 story, two different psychs - hired by insurance companies on two separate occasions - missed my premorbid IQ by over 15 points.

IMO, accordingly, those tests can suck. Moreover, it would appear to me, at least, some of those who execute them while on insurance company payroll may "estimate" in a manner that favors the insurance company.

Patients test at their peril. Not that many have any choice.
 
IMO, accordingly, those tests can suck. Moreover, it would appear to me, at least, some of those who execute them while on insurance company payroll may "estimate" in a manner that favors the insurance company.

Exactly. Hypothetical example: premorbid IQ may be 130, but the test estimates 120 and current IQ test result is 120 due to poor concentration, brain fog etc. Yet they state there is no cognitive impairment because 120=120. Fact is that estimates of premorbid IQ have a greater error range the further one's IQ is from the mean.
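That pattern (a bright patient's premorbid IQ being underestimated, then compared against a depressed current score) is exactly what regression toward the mean predicts for any imperfect estimator. A minimal sketch; the 0.8 validity coefficient is a made-up assumption for illustration, not a figure from the paper:

```python
def regressed_estimate(true_iq, validity=0.8, mean_iq=100):
    """Expected score from an imperfect estimator that correlates
    `validity` with true IQ: estimates shrink toward the mean."""
    return mean_iq + validity * (true_iq - mean_iq)

for true_iq in (100, 115, 130, 145):
    est = regressed_estimate(true_iq)
    # underestimation error grows the further true IQ is from the mean
    print(true_iq, est, true_iq - est)
```

With these assumptions, a true-130 person is expected to score about 124, a true-145 person about 136: the error range widens with distance from the mean, which is the effect described above.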
 
My personal experience with these is they can be inaccurate and subject to bias. For example, if I recall correctly, premorbid intelligence estimates are accompanied with confidence levels, eg, 90%. Well a 90% confidence is just plainly terrible.

To make a long story short, in my n=1 story, two different psychs - hired by insurance companies on two separate occasions - missed my premorbid IQ by over 15 points.

IMO, accordingly, those tests can suck. Moreover, it would appear to me, at least, some of those who execute them while on insurance company payroll may "estimate" in a manner that favors the insurance company.

Patients test at their peril. Not that many have any choice.

Exactly. Hypothetical example: premorbid IQ may be 130, but the test estimates 120 and current IQ test result is 120 due to poor concentration, brain fog etc. Yet they state there is no cognitive impairment because 120=120. Fact is that estimates of premorbid IQ have a greater error range the further one's IQ is from the mean.

Yes, tests that estimate premorbid IQ have a low ceiling. Insurance companies have a vested interest in denying claims of cognitive impairment. Misuse of neuropsych testing in litigation cases or by pseudo-scientists such as BPS researchers does not mean that neuropsych testing can't be used appropriately, if you are trying to study the topic in good faith.

Most of the population falls within +/- 2 SD, so it would not be too difficult to design a study comparing a range of patients against normal controls while excluding exceptionally bright people. About 98% of people have an IQ below 130.
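Those figures are easy to sanity-check, assuming the usual convention that IQ scores are normally distributed with mean 100 and SD 15:

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)

# proportion scoring below 130 (i.e. below +2 SD)
print(round(iq.cdf(130), 3))               # 0.977 -> ~98% below 130

# proportion within +/- 2 SD (70 to 130)
print(round(iq.cdf(130) - iq.cdf(70), 3))  # 0.954
```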
 
if you are trying to study the topic in good faith.
Too many are not.

Most of the population falls within +/- 2 SD so it would not be too difficult to design a study looking at a range of patients compared to normal people while excluding exceptionally bright people.
I eviscerated my psych write-up, in part, because the psych kept writing that my intelligence was "normal" - even as that psych kept repeating the mistake of underestimating my premorbid IQ by almost 20 points. Do I think this was deliberate? I'm not really sure. I think this psych just found it easy to embrace a low confidence level that meshed nicely with the position of the insurance carrier. No malice, perhaps, just an easy way out, at a patient's expense.

Point is, the tests can be manipulated. As such, they can represent a danger. Until such a time as controls are tightened, eg confidence levels no less than 98%, etc, imo they should be disallowed.
 
if you are trying to study the topic in good faith.
@Sid, I know what you are trying to say here, and I agree up to a point. However, what about those researchers/clinicians who, acting in good faith, find results that support whichever team they happen to be behind? There are those who believe this is the greater truth.

You see this same dilemma on the front lines of journalism.
 