Use of EEfRT in the NIH study: Deep phenotyping of PI-ME/CFS, 2024, Walitt et al

Discussion in 'ME/CFS research' started by Andy, Feb 21, 2024.

  1. EndME

    EndME Senior Member (Voting Rights)

    Messages:
    1,204
@Hutan For example, I looked at the highest number of hard rounds played in a row per player (for the first 35 rounds, excluding HV F).

    For the HV you get the numbers: 2 2 2 3 5 3 12 2 2 5 2 4 2 2 5 5
    For ME/CFS you get the numbers: 3 2 5 5 3 5 1 6 2 2 5 2 4 2 2

That means the mean of the highest number of hard rounds played in a row (for the first 35 rounds) for HVs is 3.625 whilst for pwME it is 3.2, a small difference driven by HV H (the same HV H who also largely drives the end results of the study, i.e. the higher mean rate of choosing hard).

Of course, this one statistic captures very little of the dynamics of the game and of how often one has to take breaks.

If you look at the highest number of hard rounds played in a row that were won per player (for the first 35 rounds, without HV F), you would expect something more significant, because pwME have a much lower completion rate on hard tasks. That is also precisely what you get.

For the HV you get the identical numbers: 2 2 2 3 5 3 12 2 2 5 2 4 2 2 5 5
    For ME/CFS you get the numbers: 1 1 5 1 3 5 1 1 2 5 2 4 2 2
That means the mean of the highest number of hard rounds successfully completed in a row (for the first 35 rounds) for HVs is 3.625 whilst for pwME it is 2.3, which is a rather large difference and already shows us quite a bit.

Now these numbers don't come close to covering the full "rhythm-dynamics" in which the game is played, because people play hard multiple times in a row more often than just the one time they reach their maximum, and so forth. It's a bit more complicated to try to capture the full dynamics of taking "breaks". The easiest thing would be to look at the above but, instead of taking just the maximum per player, take all of the occurrences, i.e. whenever someone plays hard more than once in a row you count how long that run is and add all of those run lengths up; that's something I haven't done yet. Alternatively, one could easily look at how often people switch from easy to hard in total. (A small sketch of the run-length computation follows below.)
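    For anyone who wants to reproduce these run statistics, here is a minimal sketch in Python. The choice sequence, its encoding ("hard"/"easy") and the function names are illustrative assumptions, not the actual data format used above.

    from itertools import groupby

    def hard_run_lengths(choices):
        """Lengths of every streak of consecutive 'hard' choices in a player's trial sequence."""
        return [sum(1 for _ in run) for choice, run in groupby(choices) if choice == "hard"]

    def longest_hard_run(choices):
        """Longest streak of consecutive 'hard' choices (0 if the player never chose hard)."""
        return max(hard_run_lengths(choices), default=0)

    # Illustrative usage with a made-up sequence of first-35-trial choices:
    player = ["hard", "hard", "easy", "hard", "hard", "hard", "easy", "easy", "hard"]
    print(hard_run_lengths(player))   # [2, 3, 1] -> summing these counts every hard run
    print(longest_hard_run(player))   # 3         -> the per-player maximum used above

    Averaging the per-player maxima within each group gives the means quoted above; summing all run lengths instead gives the "all occurrences" variant.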

    I did "come up" with a metric, that is conceptually easy, but very time consuming to implement, to capture the full dynamics of each players "breaks" which ends up giving you one final value (a number) for the amount of "breaks" for each player takes and which does account for the rhythm in which the game is played in (rather than just "playing easy=one break"). For that you plot graphs per player for the number of times people play hard/easy as the game progresses, the y axis will cature your dynamics and the x axis is just the rounds of trials. If someone plays hard multiple times in a row they get added +1 on that graph for all of those games from where they were previously standing and the first time someone goes to easy you then add -1 to their current value for that game. If they continue with easy they are given the additional value -1 (which is the difference between playing hard multiple times where you don't accumulate +1's, but for easy you do accumulate -1's, because the end metric is supposed to count the pauses/breaks you need), if they go with easy again then you go to -2, if they then play hard you go up by +1 and so forth. The metric one then uses to count the breaks per player is something mathematics call total variation and essentially measures the total height of your graph. You can then add up all total varitations of the individual players to get the mean for each group (ME/CFS and HV). You do the same thing for games in a row as well as games succesfully played in a row (with the difference here being that successes for easy count the same way as just choosing easy, since both are a break).

I'm somewhat confident this captures taking "breaks" reasonably well over the full duration of the game (though I still have to think about it properly and am not even sure whether the whole thing does what you want it to do), and it gives you one number at the end that also captures the dynamics of the game (I'm also fairly confident I explained it very badly and that posting a graph would explain what I mean better). I think one will be able to come up with something better, perhaps just by looking at the easy rounds, because these are the breaks, but I haven't really come up with anything yet that also accounts for the rhythm in which the game is played. I also still have to think about whether the above is the right way to go about this, whether it really captures everything you want to capture, and whether that is even meaningful at all.

Whether that would end up giving you something valuable I currently don't know, especially because the variation in what pwME do in this trial is so high and there is no obvious "rhythm" in how the game is played; rather, different people just do completely different things.

The end idea would probably be to have some notion of "how often does someone have to take breaks, yet still choose hard, even if they can't complete it". But I'm not sure any of the above means anything, and I'm guessing it doesn't, because the whole setup of the game is not very robust.
     
    Last edited: Mar 8, 2024
    alktipping, ukxmrv, Hutan and 3 others like this.
  2. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,662
    Location:
    Canada
Unfortunately, this is how biopsychosocial works. We've long been treated as a women's illness, despite lots of men being affected. For decades a minimal tendency towards a bit more childhood adversity, likely nothing but an anomaly, was emphasized to mean that it's a major cause, even though many of us experienced none. We're supposedly both 'type A' personalities and neurotics. We push ourselves too much, and we also become deconditioned because we're afraid to push ourselves even minimally. They just overlook anything that doesn't fit, and argue that something, somewhere, could fit.

And the same was done with deconditioning. They simply ignore the vast number of patients who had a healthy weight and were fit, even very fit, at the time they became ill and even well into it. They'll say of high-level athletes that they don't know how to do basic training. They're even ignoring their own results here, which show no evidence of physical deconditioning, and still include a quote about how it could explain it. Backwards.

This is biopsychosocial: nothing matters. Things that obviously happen afterwards are argued to be the cause. Things with the tiniest statistical significance, even a very dubious one, somehow become universal facts, simply because they argue that it may not be that specific thing, but it could be any number of other things. They can't tell, but they will argue it anyway. They took PACE's finding that 1 in 7 had some form of subjective improvement and argued that it's a complete cure that works 100% of the time. It's a madness model.

Falsification is one of the most important things in science. In biopsychosocial ideology, they do the opposite: if they can't confirm something positively, they will simply say that there could be a reason they just haven't found yet. It works completely backwards, to the point where their claim has to be proven false, otherwise it is guaranteed to be true.

We can't reason with people who did not reason themselves into their opinion. Their whole thing makes no sense, so no matter how much sense we use to show that they don't make sense, they don't have to care. This is how biopsychosocial is done: arbitrarily, and "I'm right, you're wrong".
     
    Starlight, alktipping, ukxmrv and 7 others like this.
  3. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,662
    Location:
    Canada
They don't need grants; they're using the intramural fund. So not only are they wasting this funding, they're wasting funding that is supposed to go to serious research. It's just as insulting as Rosmalen getting funds that were explicitly written to exclude anything psychosomatic. We all know what her team will do with that funding, just as we know what Walitt will do with the similar studies in GWI and LC. Just like Crawley's funding for the ClocK study, which basically produced a paper saying "Long Covid? Doesn't appear to affect kids." It's always the same crap with these ideologues.
     
  4. Sean

    Sean Moderator Staff Member

    Messages:
    8,066
    Location:
    Australia
    To the extent they even bothered to check for deconditioning, which was, and still is, barely at all.
    Especially when they get so lavishly rewarded for it.
     
    JoClaire, Starlight, ukxmrv and 2 others like this.
  5. Eddie

    Eddie Senior Member (Voting Rights)

    Messages:
    145
    Location:
    Australia
I think measuring it accurately is more complicated than what's commonly done. My understanding is that typically the velocity of the blood is measured (as this is easy to do), but this can differ from the volume of blood flow to the brain. I have also had it measured by an average neurologist and it was within normal bounds. However, this paper found a pretty significant reduction by trying to calculate the total volume of blood flow, not just velocity. If it is true that pwME/CFS have less blood flow to the brain, it would certainly affect the results of a decision-making study involving being upright. https://pubmed.ncbi.nlm.nih.gov/32140630/
     
    Last edited: Mar 9, 2024
  6. Dakota15

    Dakota15 Senior Member (Voting Rights)

    Messages:
    756
    Shared this in another thread but also wanted to ask here (mods feel free to delete/move as you see fit)

    Does anyone think we should try this route? Just was wondering.

'Research Misconduct: Research misconduct is defined as fabrication, falsification and plagiarism, and does not include honest error or differences of opinion.'

    https://grants.nih.gov/help/report-a-concern#research

    'Who to Contact:'
    'Requirements for Making a Finding of Research Misconduct'
    • There be a significant departure from accepted practices of the relevant research community;
    • The misconduct be committed intentionally, knowingly, or recklessly; and
    • The allegation be proven by a preponderance of the evidence.'
    https://grants.nih.gov/policy/research_integrity/requirements.htm
     
    ukxmrv, Kitty and Peter Trewhitt like this.
  7. Hutan

    Hutan Moderator Staff Member

    Messages:
    29,377
    Location:
    Aotearoa New Zealand
    JoClaire, alktipping, ukxmrv and 5 others like this.
  8. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    4,002
    Location:
    Belgium
No, I haven't tried to, as I was looking for the EEfRT data. Were you able to replicate their GEE results? If so, would it be possible for you to share your code? That might help in better understanding the results.
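    For anyone attempting such a replication, here is a minimal sketch of how a GEE for hard-task choice could be set up in Python with statsmodels. The data file, column names and covariates are illustrative assumptions, not the specification actually used by Walitt et al.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Assumed long-format trial-level table: one row per trial, with columns
    # participant, group (HV / ME/CFS), chose_hard (0/1), reward_magnitude,
    # reward_probability and trial_number (all names are placeholders).
    df = pd.read_csv("eefrt_trials.csv")

    model = smf.gee(
        "chose_hard ~ group + reward_magnitude + reward_probability + trial_number",
        groups="participant",
        data=df,
        family=sm.families.Binomial(),           # logistic link for a binary choice
        cov_struct=sm.cov_struct.Exchangeable(),  # repeated trials within participant
    )
    result = model.fit()

    # Odds ratios with 95% confidence intervals
    or_table = np.exp(pd.concat([result.params, result.conf_int()], axis=1))
    or_table.columns = ["OR", "2.5%", "97.5%"]
    print(or_table)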
     
    Binkie4, Kitty and Peter Trewhitt like this.
  9. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    4,002
    Location:
    Belgium
I also had a go at the data posted by Evergreen. I’ve ignored the 4 test trials and the invalid data from HV F. The figures below only show the raw data, not modelled data controlled for factors such as sex and the number of trials per participant.

    Feel free to point out errors (I hope I didn’t make any stupid mistakes).

    Proportion choosing hard trials
I found that controls chose hard trials 42% of the time while patients did so only 35% of the time.
    upload_2024-3-9_16-28-25.png

The difference is most visible when the probability of reward is 0.5, resulting in 41% hard-task choices for controls compared to 29% for patients.
    upload_2024-3-9_16-28-34.png

    Here are the group results for choosing the hard effort as the trials progress. This doesn’t seem like a strong difference.
    upload_2024-3-9_16-28-52.png

Below are also the proportions of hard-task choices per participant. The original study used a duration of 20 minutes and only looked at the first 50 trials to make the analysis more consistent. In the Walitt et al. study the trial duration was reduced to 15 minutes, so few patients reached 50 trials.

    upload_2024-3-9_16-29-2.png

Successful completion
    I found that patients completed 87% of trials successfully, compared to 98% for controls.
    upload_2024-3-9_16-29-19.png

    There was no difference in the easy tasks but a big difference (96% versus 67%) in completing hard tasks.
    upload_2024-3-9_16-29-33.png

    The completion rate is lowest for the hard trials with low probability of reward and increases as the reward probability increases.
    upload_2024-3-9_16-30-1.png

Here’s the overall completion rate as the trials progress; there doesn’t seem to be a fatiguing effect from the test.
    upload_2024-3-9_16-30-31.png
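    A minimal sketch of how these raw-data summaries could be computed from a trial-level table (the file and column names are illustrative assumptions):

    import pandas as pd

    # Assumed long-format table: one row per trial, with columns participant,
    # group ("HV" / "ME/CFS"), choice ("hard" / "easy"), completed (True/False)
    # and reward_probability (0.12 / 0.5 / 0.88). All names are placeholders.
    df = pd.read_csv("eefrt_trials.csv")
    df["chose_hard"] = df["choice"].eq("hard")

    # Proportion of hard-task choices per group
    print(df.groupby("group")["chose_hard"].mean())

    # Hard-task choices split by reward probability
    print(df.groupby(["group", "reward_probability"])["chose_hard"].mean())

    # Completion rate per group, overall and split by task type
    print(df.groupby("group")["completed"].mean())
    print(df.groupby(["group", "choice"])["completed"].mean())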
     
    Hutan, Binkie4, Evergreen and 4 others like this.
  10. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    4,002
    Location:
    Belgium
    I also had a look at some of the trials posted by EndME.

The proportion of hard-task choices seems to be quite low in the Walitt trial, but it is difficult to compare because of changes to the test procedure.
    upload_2024-3-9_16-33-44.png
The completion rate in these studies does seem to be above 90%; only the autism cohort scored a bit lower, probably because they chose more rather than fewer hard tasks compared to controls.
    upload_2024-3-9_16-34-51.png

    The descriptions used for what the test aims to measure focus on motivation and include terms like:

    willingness to exert effort
    amotivation
    reduced motivation
    effort-based decision-making impairment
    aberrant effort allocation
    altered reward-based decision-making
    incentive motivation deficits
    motivational impairments
    impaired reward motivation
     
    Last edited: Mar 9, 2024
    Hutan, Binkie4, Evergreen and 3 others like this.
  11. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    4,002
    Location:
    Belgium
Here are some quotes I found about the need to calibrate the effort and about the possibility of participants using strategies. I think EndME and Bobbler already posted the most important ones. Bolding is mine.

    Calibration

    An important requirement for the EEfRT is that it measure individual differences in motivation for rewards, rather than individual differences in ability or fatigue. The task was specifically designed to require a meaningful difference in effort between hard and easy-task choices while still being simple enough to ensure that all subjects were capable of completing either task, and that subjects would not reach a point of exhaustion. Two manipulation checks were used to ensure that neither ability nor fatigue shaped our results. First, we examined the completion rate across all trials for each subject, and found that all subjects completed between 96%-100% of trials. This suggests that all subjects were readily able to complete both the hard and easy tasks throughout the experiment. As a second manipulation check, we used trial number as an additional covariate in each of our GEE models.
    Treadway et al. 2009
    Worth the ‘EEfRT’? The Effort Expenditure for Rewards Task as an Objective Measure of Motivation and Anhedonia | PLOS ONE

    subjects completed 4 trials with each hand where they were instructed to press the respective key as many times as possible. The trial with the lowest number of button presses was discarded, and the maximum button press rate was calculated as the mean from the remaining 3 trials. The button press criterion used in the actual task was individually set at 90% of the subject’s calculated maximum rate. This manipulation was done to control for non-specific differences in motoric ability between groups, and to assure that each individual had the capacity to complete the trials
    Fervaha et al. 2013
    Incentive motivation deficits in schizophrenia reflect effort computation impairments during cost-benefit decision-making - PubMed (nih.gov)

    if patients simply found the button pressing more difficult, one might have expected a main effect of overall lower hard task choice, rather than specific interactions with reward magnitude and probability.
    Barch et al. 2014
    Effort, Anhedonia, and Function in Schizophrenia: Reduced Effort Allocation Predicts Amotivation and Functional Impairment - PMC (nih.gov)

    The easy task requires one-third the amount of the individually calibrated hard number of presses to be made within 7 s, with the dominant index finger. The individual calibration phase precedes the practice rounds and choice trials. It requires participants to button-press as many times as possible within 30-s time intervals with both the dominant and nondominant pinkie fingers and after 3 rounds with right and left hands, an average is calculated. The target for the “hard” trials is 85% of this average value;
    Reddy et al. 2015
    Effort-Based Decision-Making Paradigms for Clinical Trials in Schizophrenia: Part 1—Psychometric Characteristics of 5 Paradigms - PubMed (nih.gov)

    Participants were asked to repeat the calibration run after the effort task, in order to evaluate whether they were able to complete the requisite number of hard presses. During this post-task run, all but one participant exceeded the requisite number of button presses; the one participant who did not exceed this number, executed only 2 fewer button presses than the number required to complete the task. This suggests that all participants were indeed able to expend the effort required for the hard trial option, meaning that decisions to expend less effort cannot be attributed to fatigue effects.
    Fervaha et al. 2015.
    Effort-based decision making as an objective paradigm for the assessment of motivational deficits in schizophrenia - PubMed (nih.gov)

    The version of the EEfRT used in the present study was similar to the original study in that it did not individually calibrate the number of button presses required for successful completion of hard and easy tasks. The task may have been more difficult for participants with motor difficulties, potentially impacting task completion, but finger tapping ability appears to be unrelated to choosing the hard task (Barch et al., 2014).
    McCarthy et al. 2016
    Inefficient Effort Allocation and Negative Symptoms in Individuals with Schizophrenia - PMC (nih.gov)

Because it is likely that participants with greater motoric ability exert more clicks throughout the task, which does not reflect their actual approach motivation, we included 10 motoric trials (5 at the start of each block, using either the left or the right hand according to the randomized block order) to test participants’ motoric abilities. Within these motoric trials, participants were instructed to press the spacebar as often as possible within 20 s. […] Participants’ individual motoric abilities were included in our statistical models. Although the inclusion of this factor was not pre-registered, we decided to do so, because our preliminary analyses (see Section 2.1) revealed a large impact of participants’ individual motoric abilities on the number of clicks they exerted and participants of both substance groups also differed in their motoric abilities as measured via the motoric trials, as participants within the sulpiride condition did show lower motoric abilities (see Table 1). Therefore, not including this factor could have distorted the results
    Ohmann et al. 2020
    A low dosage of the dopamine D2-receptor antagonist sulpiride affects effort allocation for reward regardless of trait extraversion - PMC (nih.gov)

    Participants with greater motoric ability exert more clicks throughout the modified version of the EEfRT [24] and studies calibrating an individual number of clicks to succeed within the original EEfRT suggest that participants with higher motoric abilities might also choose the hard task more often in the original version [14, 32], which does not reflect their actual approach motivation. Therefore, we included 10 motoric trials to test participants’ motoric abilities before each version of the EEfRT Within these motoric trials, participants were instructed to press the spacebar as often as possible within 20 seconds. Critically, participants were not able to gain any rewards in these trials and visual feedback was reduced to a countdown and a display of the number of clicks they exerted. Participants’ individual motoric abilities were operationalized as maximal clicks in motoric trials (MaxMot) and included in our statistical models
    Ohmann et al. 2022
    Examining the reliability and validity of two versions of the Effort-Expenditure for Rewards Task (EEfRT) | PLOS ONE

    Strategy

    at the end of study day two we asked participants in an unstructured and open manner about their strategies while playing the EEfRT. Descriptively, out of 60 participants 23 reported to have based their choices on concrete “threshold” – strategies (e.g. choosing the hard task only in trials with a reward magnitude higher than 2,50 €). To control for a possible confoundation of our results, we removed those 23 participants from analyses. However, after reanalyzing the data without these participants the stimulation effects were still present. […] As reported above out of 60 participants 23 reported to have based their choices on concrete “threshold” – strategies, which are based on the premise that the hard task lasts about three times as long as the easy task (21 vs. 7 s). Choosing the hard task might therefore be a disadvantageous choice in trials with low reward magnitudes or low probabilities of reward attainment. […] Therefore, future studies should counteract these limitations by (1) systematically asking participants about their strategies while playing the EEfRT and/or (2) optimizing the EEfRT, such that the only valid strategy for participants to increase their rewards is to increase their effort allocation.
    Ohmann et al. 2018
    Ohmann 2018.pdf

    In a previous study (Ohmann et al., 2018), we found that using the original EEfRT also comes with a major downside: at least some participants understand that choosing the hard task is often lowering the possible overall monetary gain as the hard task takes almost three times as long as the easy task. Hence, at least some participants’ choices are partly based on a strategic decision and less on approach motivation per se. To overcome this downside, we modified the original EEfRT. First, we fixed the number of trials (2 blocks × 15 trials = 30 trials) and the duration of each trial (= 20 s).
    Ohmann et al. 2020
    Ohmann 2020.pdf
     
    JoClaire, Hutan, Medfeb and 9 others like this.
  12. Evergreen

    Evergreen Senior Member (Voting Rights)

    Messages:
    363
    Yep, I'm a fan of the work of van Campen and that team generally, though mindful that they're likely seeing an unrepresentative group. What I'm not clear on is how we'd explain cognitive difficulties when supine, though perhaps we're generally on enough of an incline to make a difference over a longer period of time, or it could be the after-effects of previous uprightness...
     
    alktipping, Kitty, Sean and 2 others like this.
  13. Evergreen

    Evergreen Senior Member (Voting Rights)

    Messages:
    363
    @ME/CFS Skeptic Very helpful posts! Especially on the literature - great to have the info in your tables and quotes assembled all together.

    I was looking at something a bit different - I was interested to know if floundering a bit on hard tasks might have made people (both healthy volunteers and pwME) less likely to choose hard tasks. I noticed this pattern:

    People whose success rate on hards was over 90% chose hard 44% of the time.
    People whose success rate on hards was under 90% chose hard 33% of the time.

    It's not something easily seen on a graph:

    upload_2024-3-10_10-16-17.png

    but @andrewkq found it explained the data better than whether people were healthy or had ME/CFS. The issue with it is that 90% is arbitrary. If you go above and below 85% instead, the percentages change to 42% and 34% respectively, because a pwME and a healthy volunteer with 85-87% success on hards chose them only 24-26% of the time. In such a small sample a couple of participants' data can pull things around a lot.

Edit: Adding that >=85% vs <85% would divide the sample into n=22 and n=9.
    Above and below 90% divides the sample into n=20 and n=11.
    100% vs <100% would divide the sample into n=16 and n=15 and yield the percentages 45% and 34%. Maybe the last option is the most valid in such a small sample, but I like the idea of giving healthy people the chance to not be perfect.
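    A minimal sketch of how this kind of threshold split could be computed, assuming a small per-participant summary table (the file and column names are placeholders, not the actual data):

    import pandas as pd

    # Assumed per-participant summary: one row per participant, with columns
    # hard_success_rate and hard_choice_rate, both as fractions between 0 and 1.
    summary = pd.read_csv("eefrt_per_participant.csv")

    def split_by_success(df, threshold):
        """Mean hard-choice rate for participants at or above vs. below a
        hard-task success-rate threshold, with the size of each subgroup."""
        above = df["hard_success_rate"] >= threshold
        return pd.DataFrame({
            "n": [above.sum(), (~above).sum()],
            "mean_hard_choice": [
                df.loc[above, "hard_choice_rate"].mean(),
                df.loc[~above, "hard_choice_rate"].mean(),
            ],
        }, index=[f">= {threshold:.0%}", f"< {threshold:.0%}"])

    # Check how sensitive the split is to the (arbitrary) cut-off
    for cutoff in (0.85, 0.90, 1.00):
        print(split_by_success(summary, cutoff), "\n")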
     
    Last edited: Mar 10, 2024
  14. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,662
    Location:
    Canada
So, exactly as a reasonable person would expect of a seriously ill group. They make a calculation based on rewards that scale with difficulty. And being able to do less, they do a bit less. Even the controls behaved the same way, just without the impairment in ability, so they applied a slightly different reward function.

This is horseshit. They literally attribute some imaginary construct to us because we behave exactly as any reasonable person would predict. It's like arguing that poor people going to cheaper restaurants, or forgoing them altogether, means that they have a poor-person mentality. What complete bigoted nonsense from small-minded people.
     
    Hutan, Starlight, alktipping and 13 others like this.
  15. Amw66

    Amw66 Senior Member (Voting Rights)

    Messages:
    6,769
    This
     
    Hutan, Missense, Starlight and 4 others like this.
  16. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    4,002
    Location:
    Belgium
    I don't know if this thread has just slowed down or if others are working on a letter behind the scenes. I think a lot of valid points were raised that prove that Walitt et al.'s interpretation of the EEfRT data is seriously flawed.

I have written a first draft of a letter to be sent to the authors, asking for a correction. Pretty much all the arguments have already been made by others on this thread, so I take no credit for them - I just want to help string them together.

    EDIT: I unfortunately will not be able to sign or co-sign a letter as (due to personal circumstances) I prefer to remain anonymous at the moment.
    I thus have no concrete plan to send the letter. I hope that others who have more statistical knowledge and academic qualifications than me will take this further.

    Incorrect interpretation of ME/CFS patients’ effort preference
    In a comprehensive study of patients with post-infectious myalgic encephalomyelitis/chronic fatigue syndrome (PI-ME/CFS), Walitt et al. claim that “an alteration of effort preference, rather than physical or central fatigue” is a defining feature of the illness.1 Effort preference was defined as “how much effort a person subjectively wants to exert” and measured using the Effort-Expenditure for Rewards Task (EEfRT). The interpretation of EEfRT results by Walitt and colleagues, however, is highly problematic as it fails to consider that tasks required more effort from patients than from healthy controls.

The EEfRT is a multi-trial experiment that has been used to measure reward motivation in patients with anhedonia, schizophrenia, and other mental disorders. In Walitt et al.’s study of 15 PI-ME/CFS patients and 17 healthy volunteers, the EEfRT lasted 15 minutes. In each successive trial, participants were instructed to choose between an easy and a hard task. Both required several button presses within a limited time frame for successful completion: 30 button presses in 7 seconds for the easy task and 98 button presses in 21 seconds for the hard task. If participants completed the hard task successfully, they had a chance of receiving a higher reward than for completing the easy task. The reward value and the probability of receiving it varied across trials, and this information was provided to participants before they made their choice. Effort preference was estimated by the proportion of hard task choices. Walitt et al. report that, given equal levels and probabilities of reward, healthy controls chose more hard tasks than PI-ME/CFS patients (Odds Ratio, OR = 1.65 [1.03, 2.65], p = 0.04).

    The EEfRT requires however that participants can complete the tasks successfully and that the effort needed is equivalent in patients and controls. Treadway et al., the research team that developed the EEfRT and whose protocol was implemented by Walitt et al. with minor modifications, cautioned: “An important requirement for the EEfRT is that it measure individual differences in motivation for rewards, rather than individual differences in ability or fatigue. The task was specifically designed to require a meaningful difference in effort between hard and easy-task choices while still being simple enough to ensure that all subjects were capable of completing either task, and that subjects would not reach a point of exhaustion.” 2

    Several techniques have been introduced in the EEfRT literature to ensure that the test measures reward motivation rather than differences in effort or ability. These include individually calibrating the required number of button presses3, controlling for participants’ motoric ability4, and evaluating whether participants had an adequate completion rate.2

Although Walitt et al. implemented four test trials before the EEfRT started, they did not take measures to ensure that the effort required to complete tasks was similar in patients and controls. Consequently, PI-ME/CFS patients could only complete 67% of the hard tasks successfully compared to 98% in controls. This was a much larger difference (OR = 27.23 [6.33, 117.14], p < 0.0001) than the group difference in choosing hard over easy tasks. This problem was already evident in the four test trials during which PI-ME/CFS patients could only complete 42% of the hard tasks compared to 82% for controls. When we added successful completion rate to the statistical model, the difference in hard task choices was no longer significant (OR = 1.19 [0.79, 1.81]).

    Walitt et al. did note that during the EEfRT there was no difference in the decline in button-press rate over time for either group for hard tasks. This might indicate that task-related fatigue did not influence the results. There was however a decline in button-press-rate in PI-ME/CFS patients for easy tasks that was not seen in controls. In addition, these measurements only reflect fatigue induced by the 15-minute button-pressing test, not the symptoms and debility participants already had at the start of the EEfRT.

    PI-ME/CFS patients were severely disabled with a mean SF-36 physical function score of 31.8 compared to a score of 97.9 for the control group. Reduced psychomotor function5 and impairments in fingertip dexterity and gross movement of the hand, fingers, and arm6 have been reported in ME/CFS patients. Walitt et al. also found that patients in their cohort were unable to maintain force during a hand grip task.1 It is therefore likely that the EEfRT required more effort from PI-ME/CFS patients than from controls. Walitt et al. also reported a strong correlation (R=0.57) between PI-ME/CFS patients’ hard task choices and the ability to maintain force during the grip test. This supports our conclusion that patients’ EEfRT choices reflected motor ability as well as effort preference. No such correlation was found in healthy controls (R=-0.04).

The fact that the group difference in hard task choices was relatively small compared to the larger difference in completion rate suggests that patients kept trying to succeed on hard tasks, despite past failures. Figure 1 shows the recorded button presses and completion rate per trial for all 31 participants. Several PI-ME/CFS patients had repeated failed attempts to complete the hard tasks (seen as repeated high red bars on the graph), a pattern that was not seen in controls. These findings are contrary to Walitt et al.’s hypothesis that ME/CFS patients prefer to exert themselves less than healthy controls.

    upload_2024-3-11_17-21-20.png
    Figure 1. Button presses per participant for each of the trials they managed to complete during the 15-minute EEfRT. Successful completions are pictured in green while failed attempts are indicated in red.

    In conclusion, the EEfRT data indicates that the button-pressing tasks were more difficult for PI-ME/CFS patients than for controls, not that the former have abnormal effort preferences. In the past, ME/CFS patients have repeatedly been ‘victim blamed’ when behavioral consequences of their illness were incorrectly proposed as the cause of their symptoms.7 Considering the negative impact such misattributions may have, we kindly ask Walitt et al to correct their erroneous account of the EEfRT results.

    References
    1. Walitt, B. et al. Deep phenotyping of post-infectious myalgic encephalomyelitis/chronic fatigue syndrome. Nat Commun 15, 907 (2024).

    2. Treadway, M. T., Buckholtz, J. W., Schwartzman, A. N., Lambert, W. E. & Zald, D. H. Worth the ‘EEfRT’? The Effort Expenditure for Rewards Task as an Objective Measure of Motivation and Anhedonia. PLOS ONE 4, e6598 (2009).

    3. Fervaha, G. et al. Incentive motivation deficits in schizophrenia reflect effort computation impairments during cost-benefit decision-making. J Psychiatr Res 47, 1590–1596 (2013).

    4. Ohmann, H. A., Kuper, N. & Wacker, J. A low dosage of the dopamine D2-receptor antagonist sulpiride affects effort allocation for reward regardless of trait extraversion. Personal Neurosci 3, e7 (2020).

    5. Schrijvers, D. et al. Psychomotor functioning in chronic fatigue syndrome and major depressive disorder: a comparative study. J Affect Disord 115, 46–53 (2009).

    6. Sanal-Hayes, N. E. M., Hayes, L. D., Mclaughlin, M., Berry, E. C. J. & Sculthorpe, N. F. People with Long Covid and ME/CFS Exhibit Similarly Impaired Dexterity and Bimanual Coordination: A Case-Case-Control Study. Am J Med S0002-9343(24)00091–3 (2024) doi:10.1016/j.amjmed.2024.02.003.

    7. Thoma, M. et al. Why the Psychosomatic View on Myalgic Encephalomyelitis/Chronic Fatigue Syndrome Is Inconsistent with Current Evidence and Harmful to Patients. Medicina 60, 83 (2024).
     
    Last edited: Mar 12, 2024
    Yann04, Hutan, Robert 1973 and 25 others like this.
  17. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    4,002
    Location:
    Belgium
    Unfortunately, I have not been able to replicate their statistical models. This estimation came from @andrewkq in this post:
    https://www.s4me.info/threads/use-o...s-2024-walitt-et-al.37463/page-16#post-519455

    I used data shared by @Karen Kirke in this post:
    https://www.s4me.info/threads/use-o...s-2024-walitt-et-al.37463/page-19#post-519652

    This is a replication attempt of the excellent graph posted by @Murph here:
    https://www.s4me.info/threads/use-o...fs-2024-walitt-et-al.37463/page-8#post-518691
     
    JoClaire, Hutan, Robert 1973 and 14 others like this.
  18. Trish

    Trish Moderator Staff Member

    Messages:
    55,414
    Location:
    UK
    That looks excellent, @ME/CFS Skeptic. I hope you will submit it to the journal.
     
  19. dave30th

    dave30th Senior Member (Voting Rights)

    Messages:
    2,447
    I agree--excellent letter. I've only followed this discussion with half-an-eye because I have no statistical skills, but this seems to track what the brilliant minds here have collectively uncovered in a very reader-friendly way. I think it would be very worthwhile to post this on a pre-print server or pub-peer. I would love to then re-post on Virology Blog. And it absolutely should be submitted to the journal.
     
  20. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,175
    Location:
    London, UK
I am a bit too involved with other things at present to engage in polishing what looks like a very good draft letter. I am very happy to sign a letter if it helps, and if I do I will go through it in detail in case I can suggest anything. But others have done the work and know the detail, so I doubt I will contribute much. I like that it is very focused on the key methodological problem.
     
