Use of EEfRT in the NIH study: Deep phenotyping of PI-ME/CFS, 2024, Walitt et al

Discussion in 'ME/CFS research' started by Andy, Feb 21, 2024.

  1. Evergreen

    Evergreen Senior Member (Voting Rights)

    Messages:
    363
    I think it would be most compelling to demonstrate the issue with the Effort task with data - people might find the logic of it hard to follow. If it can be shown from the data that the difference in the hard:easy ratio between patients and controls could be due to something else - specifically something like a difference between the groups in terms of the reward values and probabilities they were presented with - then that would be game over.
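
    Something like this rough sketch is the kind of check I mean - assuming the spreadsheet is loaded as a table; the file and column names below are hypothetical placeholders, not taken from the actual data file:

    ```python
    # Rough check: were the two groups shown comparable rewards/probabilities,
    # and how far apart are their hard:easy choice rates? All file and column
    # names here are hypothetical placeholders for the real spreadsheet.
    import pandas as pd

    df = pd.read_csv("eefrt_raw.csv")        # hypothetical file name
    df = df[df["participant"] != "HV F"]     # HV F's data were invalid

    # What each group was presented with, independent of what they chose
    print(df.groupby("group")[["reward", "probability"]].describe())

    # The proportion of hard choices per group
    print(df.groupby("group")["chose_hard"].mean())
    ```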

    If not, then I think the key issue is the different starting point of patients and controls: the task is potentially fatiguing for one group but not the other, which renders it fundamentally invalid for comparing these groups, because it scuppers the reward system. One group has a penalty they have to factor in when considering the rewards; the other doesn't.

    If people are looking at the data, which I'm going to upload again to this message, note that HV F's data were invalid so you'll need to remove those rows.
     

    Attached Files:

    JoClaire, Fero, Sean and 5 others like this.
  2. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,662
    Location:
    Canada
    And even when the rewards are higher, they are so laughably small that they are completely tokenistic for all intents and purposes.

    My thinking about this worthless test - I don't think it's valid in any context - is precisely about token rewards. Video games provide the best example of this, and they are indeed very popular. People will go through all sorts of challenges and spend a huge amount of time and effort for what is usually nothing but a token reward. And there is a huge amount of psychology about gamification, in large part because it's so lucrative to the video game industry.

    But what they don't account for here is the shift in priorities and the opportunity cost of going for a pointless tokenistic reward. In general, pwME are all out of fucks to give: our lives have been broken, and we have lost more than most people can even imagine anyone could lose without dying. And here they were, presented with a pointless task for a bullshit token reward that carries an opportunity cost - a risk where going for the "higher" (but still tokenistic) reward can mean significant deterioration - in a context where there was already a significant demand on the body that could wreck them, possibly permanently, simply for having made the very hard choice of participating in such a grueling battery of tests.

    So it's a bit like taking someone stranded in a remote area with a broken leg, who has to carry on if they want to survive, subjecting them to a stupid game of hopscotch to win a ribbon with a star on it, then concluding that they're probably a bit depressed/lazy/unmotivated, or whatever, since they're not interested in playing an easy game for an easy reward. People love rewards, even fake ones, if presented the right way, as a challenge - and these people aren't playing along, so there's clearly something wrong with them... if you completely ignore the context in which they exist.

    So for all the faults of this bullshit invalid test, the worst thing the researchers did was to completely ignore the context the patients were playing in, which already included the hard choice of participating in the study. It's the same mindset that, when people with PEM refuse to keep pushing themselves, concludes that they must be lazy/unmotivated/delusional since there is clearly nothing wrong with them - if one doesn't believe in the illness, doesn't accept the patients' lived experience as valid, and suspends disbelief about the totality and infallibility of medical testing, which can't even be said to be half-reliable in the best of cases.

    Like everything else biopsychosocial, it makes sense if you don't think about it, and it takes all of a minute to see so many holes in the reasoning that no serious person could argue it's a valid model. It takes beliefs - and we already knew about Walitt's beliefs, they are very clear. In the end, that's all that went into the interpretation.
     
    EzzieD, Sean, bobbler and 10 others like this.
  3. Simon M

    Simon M Senior Member (Voting Rights)

    Messages:
    995
    Location:
    UK
    In case this hasn't been shared already:
     
  4. Eleanor

    Eleanor Senior Member (Voting Rights)

    Messages:
    267
    Does he genuinely think they've "shown" that on the basis of a single p = .04 result from 8 patients and 6 controls? While other results in the same trial show no "effort preference" effect?
     
  5. tornandfrayed

    tornandfrayed Senior Member (Voting Rights)

    Messages:
    111
    Location:
    Scotland
    I'm slowly catching up with this thread so may just be repeating what others have said, but this task is about calculating odds, not about how much energy a person can or is willing to spend. Surely "success" in the task is getting maximum reward for minimum effort. Also, the task is cost-free for a healthy person, but not so - either physically or cognitively - for pwME. The only utility it might have with ME is showing a cognitive disparity in making good calculations very quickly and repeatedly.
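
    To put rough numbers on "maximum reward for minimum effort": with Treadway-style task durations (easy about 7 s, hard about 21 s - an assumption here, not taken from the NIH paper) and the $1 easy reward, the expected value per second of effort often favours the easy task:

    ```python
    # Expected reward per second of pressing, hard vs easy. Durations are
    # Treadway-style assumptions (easy ~7 s, hard ~21 s), not the NIH protocol.
    def ev_per_second(reward, probability, duration_s):
        return reward * probability / duration_s

    p = 0.50
    print(ev_per_second(1.00, p, 7))    # easy: ~$0.071 per second
    print(ev_per_second(2.50, p, 21))   # hard at $2.50: ~$0.060 per second
    # On reward-per-effort alone, easy wins here; only the largest hard
    # rewards tip the calculation the other way.
    ```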
     
    Binkie4, SNT Gatchaman, Sean and 6 others like this.
  6. Hutan

    Hutan Moderator Staff Member

    Messages:
    29,377
    Location:
    Aotearoa New Zealand
    Thanks Evergreen
    If I sort the data by trial number, I see that, for each trial, the reward amount is the same, and the probability of the reward is the same. That's true regardless of whether the participant chooses the easy or hard option.

    I find it incredible if that was really the case - that it did not matter which trial difficulty the participant chose, the probability-adjusted reward was the same.

    So, for example, in practice trial #-4, the value of the reward is 4.12 and the probability of the reward is 0.88. 8/17 HV chose the hard option; 8/15 ME/CFS chose the hard option.
    In trial #1, the value of the reward is 3.04 and the probability of the reward is 0.12. 3/17 HV chose the hard option; 6/15 ME/CFS chose the hard option.

    Why are people choosing the hard option if the probability adjusted reward is the same? Was there some reward adjustment that isn't mentioned in the spreadsheet? If any of the participants are members, can you explain what happened?

    I think there are definitely opportunities for analysis here, e.g. how did choices vary with time? In those two early trials I selected randomly, the ME/CFS participants were more likely to choose the hard option.

    In trial #50, only 3 HV and 4 ME/CFS undertook it, and no one chose a hard trial. In trial #51, only 2 HV and 2 ME/CFS undertook it, and no one chose a hard trial. In trial #53, only two ME/CFS undertook it, and one chose a hard trial. In trial #54, only one ME/CFS undertook it, an easy trial. In trial #55, only one ME/CFS undertook it, an easy trial. So actually, in those last trials, the ME/CFS participants were more likely to choose the hard option than the HVs.
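
    For anyone wanting to reproduce these checks from the spreadsheet, something like this would do it (the file and column names are hypothetical placeholders):

    ```python
    # Two checks on the raw data: (1) is the offered reward/probability the
    # same for everyone within a trial number, and (2) how many participants
    # even reached the later trials? Column names are hypothetical.
    import pandas as pd

    df = pd.read_csv("eefrt_raw.csv")  # hypothetical file name

    # (1) If every trial number has exactly one reward and one probability,
    # both of these maxima will be 1
    print(df.groupby("trial")[["reward", "probability"]].nunique().max())

    # (2) Participants reaching each trial, and hard choices among them
    late = df.groupby(["trial", "group"]).agg(
        n=("participant", "nunique"),
        hard_choices=("chose_hard", "sum"),
    )
    print(late.tail(12))
    ```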



    Scrolling through, the percentage choosing hard options didn't really look that different. So, I added up the number of hard option choices and divided by the number of participants:
    347 hard choices divided by 17 HV = average of 20.4 hard choices per HV;
    280 hard choices divided by 15 ME/CFS = average of 18.7 hard choices per ME/CFS

    That is a tiny difference really, with small samples. And they've built a story of faulty effort preference on that? I'm flabbergasted.
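
    (The same sums, for anyone checking:)

    ```python
    # Back-of-envelope group averages from the tallies above
    hv = 347 / 17   # ~20.4 hard choices per HV
    me = 280 / 15   # ~18.7 hard choices per ME/CFS participant
    print(hv, me, hv - me)   # a gap of fewer than two hard choices per person
    ```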
     
    Last edited: Feb 27, 2024
    JoClaire, Chris, EzzieD and 14 others like this.
  7. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,734
    Yes. I'm trying to think of a simple way/term that might communicate it, or something we can use to show up the issue with what Walitt has done/is saying. I'm wondering whether others can think of better 'frames of reference', e.g. common games people might know that we could use to explain it? Anyone up for some idea-generating?

    The closest analogy I've got is taking a normal exam, and remembering the 'exam technique' stuff we used to get taught about not writing a long essay on the first few questions - which are often only there to warm you up and carry just one or two marks each - because you'll run out of time for the big stuff at the end.

    Well, imagine the EEfRT as Treadway et al (2009) designed it as being like sitting down to a standard exam paper: as usual, different questions have different (maximum) numbers of points/marks associated with them, time is limited/tight, and you do your best to answer as many questions well as you can without wasting time on any one of them. Imagine some of these questions needed 200-word answers and others 30 (so you shouldn't give those much time).

    And instead of calculating your performance based on the mark you got against the marking scheme - and maybe looking at other factors like how far through you got and where you might have fallen down, like under- or over-answering questions relative to the marks available - and explaining it as individual differences in exam behaviour...

    Then someone comes along, takes that collection of descriptive stats as a tool, and instead just marks the whole thing based on:
    'out of the questions you did, how many did you write 200 words for?' - and uses that as a 'metric',

    without any of the detail, or pulling out the sensitivities like whether it was a question that needed 200 words, how many marks it was worth, or how many questions you actually completed/how far you got in the test, etc.,

    and calls that metric 'effort preference'.

    What would that mean, without any more detail on whether someone completed more questions or wrote more words in total across the whole paper? Would that metric mean what the term 'effort preference' sounds like it means? Would it be valid in the way the person who invented the analysis tools - to show different issues with exam technique - had validated them for?
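
    To make that concrete with invented numbers: two candidates can score identically on this kind of ratio 'metric' despite completely different performances.

    ```python
    # Invented example: the same ratio-based 'metric' from very different exams
    exams = {
        "finished the whole paper":  {"long_answers": 10, "questions_done": 50},
        "stopped after question 10": {"long_answers": 2,  "questions_done": 10},
    }
    for name, e in exams.items():
        metric = e["long_answers"] / e["questions_done"]
        print(f"{name}: {metric:.0%}")
    # Both score 20% -- the ratio says nothing about how much was completed,
    # which marks were available, or which questions needed long answers.
    ```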



    I'm aware that perhaps, underneath it all, some of these things might map out in analysis (or might not) if done as per the EEfRT models and methods, but that metric on its own isn't the part Treadway validated, and the new name is misleading given the instructions and design of the tool?
     
    Last edited: Feb 28, 2024
    tornandfrayed, Sean, Hutan and 3 others like this.
  8. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,734
    Oh yes, I've used a filter and I see the same: all the trial #1 rows have the same 'reward' in the hard column, whatever the participant chose. And so on for all the trial numbers. Surely not - maybe it means the combo or something - but either way...

    I've done very scratchy and unscientific calcs
    effort preference totals.png
     

    Attached Files:

    cfsandmore, Fero, Simon M and 4 others like this.
  9. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,734
    As a totally random one I've just noticed: HV B seems to have selected easy for nearly all of their high-probability and low-probability tasks, then, strangely, is closer to half-and-half on hard when it was the medium 50:50 probability.
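
    (A quick way to see that pattern for any one participant - file and column names hypothetical again:)

    ```python
    # Cross-tabulate one participant's choices by win probability.
    import pandas as pd

    df = pd.read_csv("eefrt_raw.csv")  # hypothetical file name
    one = df[df["participant"] == "HV B"]
    print(pd.crosstab(one["probability"], one["chose_hard"]))
    # Mostly easy at p=0.12 and p=0.88, with a near half-and-half split
    # at p=0.50, would match the pattern described above.
    ```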
     
    Kitty and Peter Trewhitt like this.
  10. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,734
    Because each hard trial takes more time than an easy one, by the later trial numbers there are fewer and fewer people who did them at all. If someone did 'too many' hard trials early on, they wouldn't get to the later ones. And of course it is designed so that no one knows what is coming up - although I assume they were told the same info as in the Treadway version, i.e. the maximum and minimum reward values, so they could have just enough of a sense to build a personal heuristic (e.g. whether a reward was so small it wasn't worth the extra time) once they got into it.

    No strategy could really win, given that payment came from only 2 win trials selected at random. Even if you were a super-fit finger-athlete: do you stop the lowest of your 'win trials' from being too low by always going for hard (particularly if it isn't a low-probability one), or do you try to keep the mode/mean up by moving quickly through, so you can get to the higher-value ones, and just go quick and easy whenever it's $1-something? But of course you don't know how many are $1-something and how many are $3-something. That's the point.
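
    The time arithmetic behind this is easy to sketch. Assuming Treadway-style durations (easy about 7 s, hard about 21 s, plus a nominal few seconds of choice and feedback per trial - assumptions, not the NIH protocol) and the 15-minute session mentioned elsewhere in this thread:

    ```python
    # How choosing hard eats the trial budget in a fixed 15-minute session.
    # All durations are Treadway-style assumptions, not the NIH protocol.
    SESSION_S = 15 * 60

    def trials_completed(hard_rate, easy_s=7, hard_s=21, overhead_s=4):
        avg_trial_s = overhead_s + hard_rate * hard_s + (1 - hard_rate) * easy_s
        return int(SESSION_S / avg_trial_s)

    for rate in (0.0, 0.3, 0.6, 1.0):
        print(f"{rate:.0%} hard -> ~{trials_completed(rate)} trials")
    # 0% hard -> ~81 trials; 100% hard -> ~36: doing 'too many' hard trials
    # early means never reaching the later trial numbers at all.
    ```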

    It's interesting looking at an individual level, by probability level, and I don't think I'm seeing things: looking at the different individual patterns, people are employing strategies, and some of the participants have to change tack part way through. For example, on medium probability there are a number of ME/CFS participants who look like they make a conscious choice around trial 11 to go for easy (unless it is a really high value) when it is 50:50. When you look at HVs, even then there seems to be a small group who do the same but later on (like the late 30s), which could be either exhaustion or strategy.

    At the risk of a sidenote - though I don't think it is one - we all have to remember that having ME/CFS has probably taught us a lot that isn't about illness. I could teach well people a lot about efficiency and wasted time/energy and so on.

    The idea that it is all down to fatigue - rather than to us having to make these sorts of daily judgements, which accumulate as our disability does, or to us 'getting into' looking at life differently to be more efficient because we've other important stuff (including more commitments than healthy people, due to appointments, meds and so on) - is also a misconception;

    we might just be pretty used to looking at tasks like these and the calculations involved, having to find a way to see through the BS and pick a 'best course'. And experience might also show up as 'decreased activity', whether from shock or puzzlement at what to do, or just from the thinking involved in coming up with a strategy.
     
    Last edited: Feb 28, 2024
  11. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,734
    :rofl: it's exactly like broken-leg in the middle of nowhere hopscotch for a ribbon!
     
    EzzieD, Peter Trewhitt, Sean and 2 others like this.
  12. Sean

    Sean Moderator Staff Member

    Messages:
    8,067
    Location:
    Australia
    Context matters.

    Who knew?
     
    Last edited: Feb 28, 2024
    RedFox, EzzieD, bobbler and 5 others like this.
  13. Simon M

    Simon M Senior Member (Voting Rights)

    Messages:
    995
    Location:
    UK
    Just seen this - great work. I think it is exactly what we would expect:

    EEfRT raw data analysis by @bobbler
    Minimal difference on 12% win-probability tasks, where easy tasks are more logical
    Likewise on 88% win-probability tasks, where hard tasks are more logical
    Significant, though not massive, difference on the 50% win-probability tasks:
    ME: 31% hard tasks
    HV: 41% hard tasks

    IIRC, Treadway also indicated it was the 50/50 tasks where the difference was to be found. As far as I can tell, a lot of the complexity was added to make it hard for participants to find a simple (no-thinking) winning strategy.
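
    For anyone re-running this on the raw data, the comparison is one groupby (file and column names hypothetical):

    ```python
    # Proportion of hard choices per group at each win probability.
    import pandas as pd

    df = pd.read_csv("eefrt_raw.csv")  # hypothetical file name
    print(df.groupby(["probability", "group"])["chose_hard"]
            .mean()
            .unstack("group")
            .round(2))
    # The row to watch is p=0.50, where the tallies above put ME/CFS
    # at ~31% hard choices and HV at ~41%.
    ```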
     
    Binkie4, RedFox, cfsandmore and 6 others like this.
  14. Trish

    Trish Moderator Staff Member

    Messages:
    55,416
    Location:
    UK
    Is there anything about how the EEfRT is carried out?

    What I mean is: were the participants seated, feet on floor, at a computer, or able to lie down during the 15-minute exercise and the time while it was being explained and practiced? Was their arm supported in a comfortable position so only the little finger had to move? These things matter for pwME.

    For me, the task would be extremely draining, if not impossible, if I had to sit up for it, and if I had to hold my arm in position without support.
     
    Missense, MEMarge, RedFox and 9 others like this.
  15. Hutan

    Hutan Moderator Staff Member

    Messages:
    29,377
    Location:
    Aotearoa New Zealand
    I'm still not understanding how the experiment worked. If you look at the source data, a single trial, e.g. Trial 1, had a specified fixed probability of a reward and a specified fixed reward, e.g. 0.88 and $4.50. And you could choose either a hard task or an easy task. There does not seem to be any indication of a difference in the reward or probability between the two tasks. So why would it ever be more sensible to opt for the hard task? Surely it is always better to opt for the easy and fast task?

    (Apart from maybe once to satisfy your curiosity about what it's like and check there is no surprise, and maybe once again when your dominant index finger needs a break.)

    It makes no sense, so I assume I am missing something.

    Check out the file that Evergreen posted here
    Deep phenotyping of post-infectious myalgic encephalomyelitis/chronic fatigue syndrome, 2024, Walitt et al
    and sort by the trial number. You will see that the reward and probability are constant for everyone within a single trial, regardless of whether they did the hard or easy task.
     
    Last edited: Feb 28, 2024
    cfsandmore, EzzieD, Kitty and 5 others like this.
  16. Sam Carter

    Sam Carter Established Member (Voting Rights)

    Messages:
    41
    Isn't it because the reward value for the easy task is always lower than the reward value for the hard task (with the probability of winning being the same)?

    For an easy task the reward was always $1 whereas for the hard tasks the reward was one of [1.24, 1.42, 1.6, 1.78, 1.96, 2.14, 2.32, 2.5, 2.68, 2.86, 3.04, 3.22, 3.4, 3.49, 3.58, 3.76, 3.94, 4.12].
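
    If that reading is right, the win probability is the same for both options on a given trial, so it cancels out of the expected-value comparison and the choice reduces to reward size versus extra effort and time. A quick sketch:

    ```python
    # With equal win probability for both options, expected value alone always
    # favours the hard task (hard rewards start at $1.24 vs a fixed $1.00 easy).
    # The real trade-off is the extra presses and time the hard task costs.
    EASY_REWARD = 1.00

    def ev(reward, probability):
        return reward * probability

    for hard_reward, p in [(1.24, 0.12), (2.50, 0.50), (4.12, 0.88)]:
        print(p, ev(EASY_REWARD, p), ev(hard_reward, p))
    # In every row the hard EV is higher, so any 'rational' choice of easy
    # must be pricing in effort, time, and (for pwME) the cost of exertion.
    ```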
     
    oldtimer, Sean, bobbler and 4 others like this.
  17. Hutan

    Hutan Moderator Staff Member

    Messages:
    29,377
    Location:
    Aotearoa New Zealand
    Quite possibly. Where does it say that the reward for the easy task was always $1?
     
    oldtimer, bobbler, Sam Carter and 3 others like this.
  18. Sam Carter

    Sam Carter Established Member (Voting Rights)

    Messages:
    41
    It's never explicitly stated, but if you look in the supplementary data, on page 9 of the file titled 41467_2024_45107_MOESM1_ESM.pdf there's a diagram which shows this (and I think it's more clearly set out in some of Treadway's work).

    I also struggled to understand the task for the same reasons as you, but I think it only makes sense if the easier option always has a lower payout?
     
  19. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,734
    Yup - if that is the case, and not an artefact of the datasheet (e.g. if each amount we see in the Excel column actually equates to a hard-easy 'pairing', e.g. $4.50-$3.50), then you are absolutely correct: he hasn't run the experiment at all, because there should be no choice. Or it would be something completely different regarding strategy - as they only pick 2 win trials, maybe there is something in the time you take on the different hard-easy options, but it would be so minimal and so complex it would blow the whole point.

    It would be such a fundamental thing that, for now, I'm having to assume the error is just in what is reported in the column, and not that it represents only one 'reward magnitude' being shown on the test?
     
    Hutan, Peter Trewhitt, Sean and 2 others like this.
  20. Sam Carter

    Sam Carter Established Member (Voting Rights)

    Messages:
    41
    MEMarge, RedFox, Hutan and 5 others like this.
