Use of EEfRT in the NIH study: Deep phenotyping of PI-ME/CFS, 2024, Walitt et al

Discussion in 'ME/CFS research' started by Andy, Feb 21, 2024.

  1. Hutan

    Hutan Moderator Staff Member

    Messages:
    29,377
    Location:
    Aotearoa New Zealand
Yes, I think we do need to look into this. I think they may be assuming that tests are more certain than they actually are; I've seen references that cast doubt on the certainty of claims about central/peripheral fatigue based on electromyography. There are things like cutaneous fat that seem to be able to add noise.

    Good point about possibly needing to adjust for a sex effect.

    Yes, I was going to add after reading your 'final' summary that @bobbler found that the measure used in the original study incorporated the factor of variable probabilities - so, did a higher probability make someone more likely to choose a harder task? That was, if I'm understanding things correctly, much of the point of the original test, the risk-reward calculation. As in, how sensitive were people to changes in the likely reward? It appeared that Walitt et al used a different measure that just used a ratio of hard tasks to easy tasks, with no consideration of the probability. So, it rather ignored the whole effort:reward relationship. I'm not 100% certain about much of this, but it seems likely that the simplistic ratio was not accurate enough to find what they claim to find.

    It would help a lot if we knew exactly how the test worked e.g. how the reward size and probabilities of a reward changed over the course of the test. (Of course, the ridiculousness of the trivial rewards and the increased fatigue that people with ME/CFS would have experienced means the test findings mean nothing about 'effort preference'. But, I think even within the flawed paradigm, we may be able to find that the logic is wrong.)
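To make that concrete with some made-up numbers (purely hypothetical, not from either paper): two groups can have an identical overall hard:easy ratio while showing opposite sensitivity to the reward probability, so the simple ratio reports "no difference" exactly where a per-probability analysis would find one.

```python
# Made-up trial counts, NOT data from the study: each group plays 30
# trials at each win-probability level (12%, 50%, 88%).
# Group A picks hard more often as the win probability rises;
# Group B does the exact opposite.
hard_choices = {
    "A": {0.12: 3, 0.50: 15, 0.88: 27},
    "B": {0.12: 27, 0.50: 15, 0.88: 3},
}
trials_per_level = 30

# Overall hard:easy ratio (the Walitt-style single measure)
overall = {g: sum(v.values()) / (trials_per_level * len(v))
           for g, v in hard_choices.items()}

# Proportion of hard choices at each probability level
# (the Treadway-style split)
per_level = {g: {p: n / trials_per_level for p, n in v.items()}
             for g, v in hard_choices.items()}

print(overall)    # both 0.5: the flat ratio sees no difference
print(per_level)  # opposite probability sensitivity, which it misses
```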
     
    Louie41, Simon M, Sean and 4 others like this.
  2. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,175
    Location:
    London, UK
My impression of 3e is that PWME have acquired a good sense of their fatiguability whereas healthy controls haven't a clue and are all over the shop. Or maybe, since it doesn't matter to the healthy controls, they haven't taken the preference task seriously.

    It seems a nonsense.
     
    bobbler, Louie41, Binkie4 and 11 others like this.
  3. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,734
    OK @Hutan and anyone else who might be interested

Yes, one of the questions I'm intrigued by (and it might just be a complication of how the paper is laid out, with so many different experiments meaning the bits of method, results etc. for e.g. the EEfRT are dotted around) is:

when Walitt et al (2024) state 'standardised probability and reward', what do they mean they did?

because the Treadway et al (2009) paper has analyses by probability level. So did they standardise 'pre-hoc', or does this imply some sort of computation based on what Treadway et al (2009) did or found with e.g. the probability (it seems, for example, that the difference in hard-choice behaviour wasn't there for low-probability tasks), reported as one 'three-way analysis' (probability x reward magnitude x HV/PI-ME/CFS)?

    eg in the methods they state:

    Whereas Treadway et al (2009) note in their conclusion:

So is 'uncertain' the same as or different to 'standardised', in the sense of: has Walitt 'rolled' the certainty variables into the 'hard choices' measure?

I'm also struggling with the lingo, if I'm being honest, but the tool (and there is a website for it: Tools — TReAD Lab) is complicated enough that it's hard to explain on here, as it seems to be largely about looking at the pattern of effects over these different variables. And once you get into the 'GEE' stuff it's really horrible to pull apart and compare across papers.

I'm getting somewhat crippled by ambiguity, which seems to be built into each paper and section that then 'builds' on those foundations. For example, the paragraph in Treadway et al (2009) that introduces the EEfRT model somehow manages not to make clear how integral the varying and analysing of the probability and reward-magnitude variables are to the model, or whether these are 'extras' to its validity.

I suspect the answer is in what it is claiming it is trying to do, as @Simon M states: it seems it is trying to map the different 'patterns' you see across different conditions or personality traits and then 'replicate them' to confirm these as valid. Not that I'm fully able to get to the bottom of what each of those really means.

EDIT: PS: without validation of the scale itself, or of quite a lot of the 'traits' it is being used on (and, even where they are validated, without really sure pre-hoc hypotheses about what pattern of 'motivation' across all these different variables would be specific to that condition), wouldn't the whole thing be entirely circular?

And yes, I agree that it all should really be 'overshadowed' by everything on the test being far better explained by a physical condition that affects one group physically, and with fatigue that will affect other aspects of their ability to perform the task (and the bottom reference at least does note that this completely starts to change what the 'pros', 'rewards' and 'effort' actually are, and how they balance between different groups).
     
    Last edited: Feb 26, 2024
    Louie41, Amw66, Simon M and 5 others like this.
  4. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,734



I haven't found anything yet in the literature that uses both the term 'effort preference' and the EEfRT, but I haven't searched extensively. So that's another thing to find out: where that term came from.

The EEfRT in Treadway et al (2009) doesn't even include the word 'preference', never mind 'effort preference' as a term.



I've found the following 2022 paper, which claims to review EEfRTs and modified versions and their validity. It 'might' give some comfort (or a false security blanket): its references run up to 2022, so one could go through the papers that have used the EEfRT and just check with a quick Ctrl+F (if that gets past any paywalls, of course) whether 'effort preference' came from any of them.

    Examining the reliability and validity of two versions of the Effort-Expenditure for Rewards Task (EEfRT) | PLOS ONE
     
    Last edited: Feb 26, 2024
    Louie41, Ash, Fero and 7 others like this.
  5. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,734
    That 2022 paper I mentioned: Examining the reliability and validity of two versions of the Effort-Expenditure for Rewards Task (EEfRT) | PLOS ONE

It isn't giving me any more certainty, as its diagrams also analyse the original EEfRT by probability level, but the paper is less clear on what the primary outcome is. I'm starting to conclude the EEfRT isn't intended to have a single primary outcome; but then I don't know how that translates to Walitt et al (2024) and their use of it as essentially one outcome measure.

But I can see that all of the charts in this paper also separate things by low, medium and high probability on each graph. So I don't think the 'modified version' cited in this is Walitt et al (2024)'s 'modified version'... or is it?

Reading this one, it is no wonder it is driving me a bit doo-lally o_O (apart from my reason for reading it: that it is a review of the scale's usage so far). The 2022 paper's main aim seems to be to assess the reliability and validity of the scale itself, and then you find out later on that, even while doing this for 'the original scale' and 'the modified version', they couldn't resist making their own slight adaptations (to the way the incentive was done) (!)

And then the conclusion is similarly both definitive in language and ambiguous about the 'what'. What is 'split-half' reliability? And does this mean, by inference, reliability 'on top of' the 'not split-half' kind, or, by not mentioning it, does it mean 'but not reliable if not split-half'?

     
    Last edited: Feb 26, 2024
    Louie41, cfsandmore, Fero and 4 others like this.
  6. Evergreen

    Evergreen Senior Member (Voting Rights)

    Messages:
    363
    Is it possible to work backwards from the dataset to figure it out? (Source data for figure 3A seems to have everything, right?)

Completely beyond me, but your and others' effort preferences might be more favourable than mine. ;)
     
    bobbler, Louie41, Simon M and 2 others like this.
  7. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,734
I wouldn't advise clicking on the generic 'source data' link, as I guess it is a big old file (it wouldn't download for me and my computer went on a go-slow), and the supplementary files aren't labelled with anything EEfRT-related.

Whereas Treadway et al (2009) note that tasks will have a low (12%), medium (50:50) or high (88%) probability of being a 'win' task; no-win tasks don't count at all (they eventually select at random 2 tasks from the completed 'win' tasks to pay out on). They then display charts split by low, medium and high.

    They then detail quite a bit about how this affects what the task actually is:

    "Subjects were informed that they had twenty minutes to play as many trials as they could. Since hard-task trials take approximately twice as much time to complete as easy-task trials, the number of trials that the subject was able to play depended in part on the choices that he or she made. This meant that making more hard-task trials toward the beginning of the experiment could reduce the total number of trials, which could in turn mean that the subject did not get a chance to play high-value, high-probability trials that might have appeared towards the end of the playing time. This trade-off was explained clearly to the subject. Importantly, subjects were not provided with any information regarding the distribution of trial types. The goal of this trade-off was to ensure that neither a strategy of always choosing the easy or the hard option could lead to an ‘optimal’ performance on the task."

    Walitt et al (2024) just says:

    "The task began with a one second blank screen, followed by a five second choice period in which the participant was informed of the value of the reward and the probability of winning if the task were completed successfully. After the participant chose the task, another one second blank screen was displayed. Next, the participant either completed 30 button presses in seven seconds with the dominant index finger if they chose the easy task, or 98 button presses in 21 s using the non-dominant little finger if they chose the hard task. Next, the participant received feedback on whether they completed the task successfully. Finally, the participant learned if they have won, based upon the probability of winning and the successful completion of the task. This process repeats in its entirety for 15 min.

    Participants were told at the beginning of the game that they would win the dollar amount they received from two of their winning tasks, chosen at random by the computer program (range of total winnings is $2.00–$8.42)."

Then, in their results, the finding most related to probability level is:
    "Given equal levels and probabilities of reward, HVs chose more hard tasks than PI-ME/CFS participants (Odds Ratio (OR) = 1.65 [1.03, 2.65], p = 0.04; Fig. 3a). "


Whereas Treadway et al (2009) report all of their effects by each probability level, e.g.

    and then when looking at anhedonia and other scales is comparing by each probability level in the figures, rather than %hard selected in general: Worth the ‘EEfRT’? The Effort Expenditure for Rewards Task as an Objective Measure of Motivation and Anhedonia | PLOS ONE


So I can't work out whether the 'effort preference' version in Walitt et al (2024) somehow, without knowing how many tasks participants would do (because they are choosing between easy, which took 15 secs on average, and hard, which took 30 secs on average, in the Treadway version), made sure that they all had the exact same standardised mix of the combined levers of 'probability' and 'reward' appearing on the screen.

Or whether he has rolled that into the calculation to make it two- or three-way, or what the 'standardisation' involved. Or how they could say 'given equal levels of probability and reward'.
     
    Kitty, Arvo, Peter Trewhitt and 2 others like this.
  8. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,734

    I would be tempted, now I have found the following: Tools — TReAD Lab

which seems to allow you to download the EEfRT 'tool', to think that is what has happened, and hence the limited description of methods and analysis: because it comes as a Matlab Psychtoolbox version.

    but then he's adapted it at least in the number of button-presses for example (98 vs 100 for hard)

and that particular page talks about it being a tool for motivation, i.e. 'effort preference' didn't come from there.

    The Treadlab page says the following:

    "In addition to varying reward magnitude, trials are presented with different probability levels for reward receipt, allowing us to examine the extent to which the relationship between motivation levels and effort-based decision-making is modulated by perceived reward magnitude."

    and also has later down the page:

    "Please note that some aspects of the code and instructions may need to be adjusted to meet the needs of your specific study. Please be aware that we have limited resources to troubleshoot/advise on the use of the EEfRT, and we recommend that individuals interested in using the EEfRT have prior experience using computerized decision-making tasks."

This could be interpreted as the download allowing those using it to change those factors, like what the reward magnitudes are and maybe the probability. But then maybe that wording just means 'as part of the standard game, trials are presented with different levels'.

I'm still not sure that, even if this was used, it settles whether the 'equal probability' is an artefact of using the standard format to compute out the total, which somehow controls for probability level to the point of 'making it equal'.

Technically, given the wording "given equal levels and probabilities of reward", I'm trying to think how else they could have done that, other than deciding that this was what the post-hoc calculations meant.
     
    Kitty, Arvo, Peter Trewhitt and 2 others like this.
  9. Hutan

    Hutan Moderator Staff Member

    Messages:
    29,377
    Location:
    Aotearoa New Zealand
:) Yeah, if we take those words at face value, then that would make the HVs stupid and the PI-ME/CFS participants smart. Because, if you are faced with a choice of two tasks with equal rewards and equal probabilities of receiving the reward, why would you not choose the easy task?

    But, I think we can safely say that what the investigators did in this particular experiment is less than clear.
     
    Last edited: Feb 27, 2024
    Louie41, Amw66, Fero and 5 others like this.
  10. dave30th

    dave30th Senior Member (Voting Rights)

    Messages:
    2,447
I was wondering that: is "effort preference" the Walitt-specific framing of this construct, whatever it is?
     
    bobbler, Kitty, Sean and 4 others like this.
  11. Arvo

    Arvo Senior Member (Voting Rights)

    Messages:
    854

    Thank you for making a good summary. I'd like to highlight this:

    The test was designed for anhedonia, to "test the relationship between anhedonia and putative reward 'wanting' in humans".

Walitt et al indeed seem to have failed to check that the test did not exhaust subjects who were chosen because they have a fatiguing illness. They also did not seem to have taken into account the context in which the test was taken, which would have influenced the outcome (e.g. dressing and travelling to the location, and whether the participants also had other tests or activities earlier in the day or on the days before).

    The whole point of EEfRT is exactly NOT to cause fatigue:

This test is wildly unsuitable for testing ME patients' motivation, precisely because it depends on the definite exclusion of individual differences in ability or fatigue.
     
    bobbler, Louie41, Ash and 20 others like this.
  12. Evergreen

    Evergreen Senior Member (Voting Rights)

    Messages:
    363
It was fine for me: it's a zip folder which downloaded to my slow computer in a couple of seconds. Then the source file for 3A is all of 131 KB. I'm uploading the latter, which contains the full data for the effort task, to this message.

For each participant you get 40-odd rows (= the number of trials they did) with these columns:
    ID
    Age
    Sex_Male_is_1
    Valid Data_is_1
    Trial
    Trial_Difficulty_Hard_is_1
    Choice_Time
    Recorded_Presses
    Required_Presses
    Successful_Completion_Yes_is_1
    Reward_Granted_Yes_is_1
    Value_of_Reward
    Probability_of_Reward
    Completion_Time
    Button_Press_Rate
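So, in principle, someone could redo the per-probability split from those columns. A minimal sketch of the idea, using made-up rows in the same shape as the columns listed above (the real analysis would load the attached file instead; values here are not from the study):

```python
import pandas as pd

# Toy rows mimicking the figure 3A source-data columns listed above.
# These values are MADE UP, purely to show the shape of the analysis.
df = pd.DataFrame({
    "ID": [1, 1, 1, 1, 2, 2, 2, 2],
    "Valid Data_is_1": [1, 1, 1, 1, 1, 1, 1, 0],
    "Probability_of_Reward": [0.12, 0.50, 0.88, 0.88,
                              0.12, 0.50, 0.88, 0.88],
    "Trial_Difficulty_Hard_is_1": [0, 1, 1, 1, 0, 0, 1, 1],
})

df = df[df["Valid Data_is_1"] == 1]   # keep valid trials only

# %hard by participant x probability level: the per-probability split
# the original EEfRT papers report, recoverable from these columns.
by_prob = (
    df.groupby(["ID", "Probability_of_Reward"])["Trial_Difficulty_Hard_is_1"]
      .mean()
      .unstack()
)
print(by_prob)
```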
     

    Attached Files:

  13. Simon M

    Simon M Senior Member (Voting Rights)

    Messages:
    995
    Location:
    UK
    To be fair, it is designed to fatigue healthy people (that is the effort, the cost), but it is explicitly designed not to exhaust...
    ...precisely.
    That's easy - the term was coined for the new paper while silently repurposing a reward motivation test as an undefined 'effort preference' test.
     
    Ash, Amw66, SNT Gatchaman and 10 others like this.
  14. Arvo

    Arvo Senior Member (Voting Rights)

    Messages:
    854
I don't agree: not only do I not understand how they came to say that they "demonstrated" that "a brain abnormality makes it harder for those with ME/CFS to exert themselves physically or mentally", but I also find it problematic that these comments are not the information you get from the paper.

    I'm just commenting on this snippet relating to "effort preference" (a quick look at the full article makes it seem like a good one), but what's happening in this snippet is hella weird IMO: to me it reads they're trying to portray the paper's claims in a more palatable-sounding way, which comes across like they are trying to polish the effort preference turd. (And the question for me is then if they are doing it because they are aware that being honest about their shit would make them look bad, or if they're trying to reshape what they allowed in the paper while not agreeing with it - and either would be bad.)

    The remark that when "the brain is telling ME patients" not to make an effort it is not "voluntary" irks me a lot. This is somehow meant to make the effort preference concept sound better, but to me it doesn't; sticking the reason for illness or any other unwanted "deviation" from how you "should be" on disablist/prejudiced notions and then saying that the patient is doing that not on purpose but subconsciously is a boring old trick (you are blind because you subconsciously wanted someone else to be blind, you have spasms because you subconsciously want attention, you are a lesbian because you subconsciously want to be a man, etc, etc, etc). The person it concerns is demoted to being an unreliable narrator about themselves, while the person saying it usually feels entitled to fill in what that subconscious thing really is, according to their prejudiced preferences.
So personally I consider this a fig leaf, a shield to protect the authors when it is pointed out that their paper is disablist, and quite the red flag.


    Walitt et al says:

    About the grip strength it says that
All these sentences imply choice, preference etc.; as far as I can see, it is nowhere stressed, or even mentioned, that this would be involuntary. The only time the word "unconscious" is used in the paper is
    when linking "effort preference" to other test results, with a reference [ref 35) to a Knoop & Van der Meer publication that opens its background section in the abstract with "Chronic fatigue syndrome (CFS) is characterized by disabling fatigue, which is suggested to be maintained by dysfunctional beliefs. Fatigue and its maintenance are recently conceptualized as arising from abnormally precise expectations about bodily inputs and from beliefs of diminished control over bodily states, respectively." and concludes "alterations in behavioral choices on effort investment, prefrontal functioning, and supplementary motor area connectivity, with the dorsolateral prefrontal cortex being associated with prior beliefs about physical abilities"

    (@dave30th, you might be interested in this)
     
    Louie41, Ash, Sam Carter and 15 others like this.
  15. Arvo

    Arvo Senior Member (Voting Rights)

    Messages:
    854
    It specifically says on the tin it is designed to monitor motivation (in anhedonia) without it being influenced by (depression-type) fatigue. The point is to observe anhedonia, "a decreased motivation for and sensitivity to rewarding experiences" in action, to see if the participants with depression will have a lower motivation for reward without interference from fatigue, another MDD symptom.

It might be debated whether the EEfRT can actually be set up without that effect in MDD, but in general healthy people, depressed or not, can get through a bit of button pressing. It was specifically supposed to be set up this way, so easy that all participants could do the hard tasks equally, so that that would not be an issue.

    Walitt et al worked from the notion of fatigue as "a limit on ability or a diminution of ability to perform a task" and if the EEfRT had been an apt thing to use (which I think it isn't) then they definitely should have taken steps to exclude that as an influence on results.


(Btw, as we don't know each other and typing can be awkward, I just want to stress that I appreciate and agree with the good overview. I'm just picking this apart a bit more, not being contrary.)


    Same impression here. It's a train of weird, rickety steps:

    1. Take the EEfRT, a test for MDD looking at motivation/want,
    2. Then apply the EEfRT to people it was not designed or usable for (without MDD and with the symptoms which interfere with EEfRTs outcome as a key symptoms),
3. Then transplant the EEfRT's metric for anhedonia detection in MDD (the proportion of hard-task choices) to the ME group,
4. And stick a new label on that proportion, showcasing your prejudice, as it is both negatively valued and outcome-based (like I said earlier: it's weird. It relabels a proportion as a preference, and then gives that preference, which should be neutrally valued, a negative definition and meaning of activity/hard-task avoidance),
5. Then weave the new label all through a paper on biomedical findings, linking it to results, giving it a prominent place, and even saying you have found the determining factor for disability.

    (Edited to add the word rickety)
     
    Last edited: Feb 27, 2024
    bobbler, Sam Carter, Fero and 14 others like this.
  16. Keela Too

    Keela Too Senior Member (Voting Rights)

    A couple of things strike me about this “effort preference” test. (It’s bonkers obviously, and my comments are in addition to agreeing with the various criticisms above).

1. The different probabilities of the rewards seem incredibly obtuse.

    In my experience (of teaching science to adults), most adults struggle to understand probabilities at the best of times. On top of that, few folk can do quick calculations in their heads, of the sort that seem to be required to make the choices that this whole “effort preference” premise rests upon.

    Add to that the problems ME folk can have with cognitive processing speeds, and the whole test seems mad, even before starting to waggle any fingers!

    2. People with ME will be aware that their choice of activity will be monitored, and that some interpretation will be placed on the choices they make.

    There is no doubt, if I were doing this, I’d be trying to second guess the motive of the test. My thought processes would NOT be primarily related to earning the highest ££.

    So, I might think I should attempt more hard activities, in order to demonstrate that ME peeps are not malingerers?

    But then another part of my brain would jump in and think I’d better not attempt those harder tasks, or the researchers might think there’s nothing wrong with me.

Then I might start to think about whether the researchers are wondering if ME patients see pacing as more important than the small monetary gain.

    Or maybe they want me to keep trying the hard activity, so that my failure demonstrates actual finger fatiguability? Or will they then blame me for failing, and not pacing?

    Or maybe I would choose to alternate activities, to give each of the fingers a rest? Perhaps that is the strategy that would be the best?

    ****

    What I’m trying to point out here, is that the ME folk will have had much more baggage in their heads about the whole activity.

    So the whole test is likely to be confusing for ME folk, in a way it is less likely to be for the healthy controls.

    “Effort Preference” is just such a cruel term. Even “Effort Decision” would be kinder, as it implies the decision requires weighing up options.
     
    bobbler, Louie41, Sam Carter and 27 others like this.
  17. Trish

    Trish Moderator Staff Member

    Messages:
    55,416
    Location:
    UK
    I agree, @Keela Too. To summarise, perhaps:

    For HV, it's a game with a reward.

    For pwME it's a test with unknown real life jeopardy.
     
  18. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,734

The issue we are trying to look at is your point 3: the assumption that he took the test and used it as is, is incorrect.

Walitt cites it as a 'metric', but it isn't a metric in Treadway. Treadway might describe it as a way to describe the data (more people chose hard than easy), but no, it isn't a metric.

And it doesn't seem to be the 'primary output' either. In fact, given the way that Treadway, and those who have gone on to use the EEfRT for various conditions and traits, have used it and explored the patterns, there is definitely (at least theoretically) the possibility that there could be no overall difference between two groups in the generic 'proportion of times hard was selected' but the model still 'holds', or, as they seem to claim, 'is validated'.

Because if one group was engaging in more 'unusual' behaviour, like selecting hard when probability is low and vice versa, the same number of times as others were doing the opposite, then the 'overall % hard selected' would be the same across groups, but their behaviour certainly couldn't be described as the same.

When you study papers that used the EEfRT, they don't report findings as one hard-vs-easy-choice thing without looking across the variables to differentiate how choice behaviour varies.

They might note how many chose hard choices, but it's not the be-all-and-end-all reliable bit of the test in the way Walitt cites it.

And that difference has effectively been manufactured by getting people who get fatiguability to do what is a pointless and rewardless task, versus those who don't get fatiguability.

He's used the spiel associated with another, more complex test without following said tool's literature on these things. The 2022 paper itself says that manipulating the different features, and using different subjects, changes what the choice itself is.

We need to stop talking about both the term and the test as if they stand as something scientific, because by doing so you/we make it look like it's 'a thing', when it's an emperor's-new-clothes term not on solid ground: from what I can see, his claimed 'test' doesn't validly test it at all.
     
    Last edited: Feb 27, 2024
    Louie41, EndME, Sean and 9 others like this.
  19. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,734
The point seems to be NOT to choose 'hard' when you get a task with low (12%) probability, because even if healthy you are wasting 30 secs on a task that almost certainly won't count / is a no-win trial.

They then have high probability, which is symmetrical in that it's 88% (100 - 12), so there's only a minimal 12% chance it's a pointless 'won't count' task.

The medium-probability ones have a 50:50 chance of counting or not.

So the point ISN'T, and NEVER WAS, to always choose hard.

Sometimes you want the higher money (in case it is picked as one of the two incentive trials for your payout); sometimes you want the shorter time by picking easy (because there's an 88% chance it's a pointless no-win task that doesn't count).

Someone uber-competitive would select easy on these, then hammer their pressing to get it done asap and move on to trials that aren't low probability.
     
    Louie41, Sean, Peter Trewhitt and 2 others like this.
  20. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,734
Walitt has pitched it as if it is a game where, like the basic life cliché, it is a simple case of 'you work harder, you get more reward'.

But what he has used as a tool is something that deliberately ensures that no particular strategy 'wins' / is better. You aren't supposed to choose 'hard' when you get a low-probability task (88% chance it is a no-win that won't count).

What Treadway invented isn't a test of 'effort preference' but a tool that allows for studying the pattern/shape across all the different factors (the amount on offer for hard vs easy varies along with the probability), in order to see sensitivity to these different subtle aspects of the test (and that is a key point, noting the issue with fatiguability/disability below).

It isn't even the cliché of 'when you have a cold you will go and get that £10 note that is sitting on your back lawn, but when you have flu you can't'. But he is trying to pitch it like this, by making the game so impenetrable and complicated that no one looks it up to see it doesn't even test that.

The amounts of money picked aren't huge, in order that the test picks up intrinsic individual differences; but the outcome also isn't 'guessable', and it is barely possible to affect what you would end up with as an amount.

It wasn't really supposed to be a test of effort (EDIT: with the 'metric' Walitt has claimed of 'just take the % of hard choices'), but of how people respond to all these little subtle varying elements involved in choice behaviour, when you vary them at such a speed that it has to be instinctive and can't have a social element.

In the Treadway et al (2009) example it is 20 mins; hard takes 30 secs on average and easy 15 secs. If someone just approached the test as wanting to pit their little finger against getting the bar up the screen, and only did hard, that would be about 40 tasks. Assuming a lot (such as the random presentation of probability levels meaning approximately even low, medium, high), that's 40/3 = about 13 at each level, and only 50% of those even 'count' on average: the 12% low is 'inverted' against the 88% high, so if you averaged them out and were doing them blind, 50% of tasks are no-win. They are literally pointless; you get a screen at the end saying it 'doesn't count'.

So yes, someone just selecting 'hard' would, even in an averaged sense, be spending literally 50% of their time on tasks that 'don't count'. But then so would someone who only selected 'easy'.
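That back-of-envelope arithmetic checks out, assuming the ~20-minute session, ~30-sec hard trials and an even split of probability levels described above:

```python
# Back-of-envelope check of the 'always hard' scenario above.
# Assumes a 20-minute session, hard ~30 s per trial, and an even
# split across the 12% / 50% / 88% win-probability levels.
session_s = 20 * 60
hard_trial_s = 30
n_trials = session_s // hard_trial_s        # ~40 trials
per_level = n_trials / 3                    # ~13.3 at each level
avg_win_prob = (0.12 + 0.50 + 0.88) / 3     # 0.50 on average
expected_no_win = n_trials * (1 - avg_win_prob)
print(n_trials, round(avg_win_prob, 2), round(expected_no_win))
# roughly half the session is spent on trials that end 'no win'
```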

    So the point of all of these other factors being displayed is to give those doing the game the impression that there are some of these tasks that are more 'worth it' than others.

EDIT: and the 'tool' is more complex and has lots of analysis bits because it is looking at the sensitivity of different 'individual differences' to things like 'reward magnitude' vs 'probability it is no-win' and so on. Of course, when you add in complications like fatiguability and PEM, that causes a significant additional imbalance that isn't accounted for in the EEfRT, because it was explicitly designed and validated on those without physical issues. So for ME/CFS they 're-weight' the balance of all of these factors, which were validated on physically healthy people, anyway.

It wasn't about effort on its own, which is why the acronym is unfortunate, but about looking at how (they think) individual differences play out; and yes, within that they might sometimes mean things like schizophrenia, other times unvalidated small personality traits. Just being 'curious' whether, when you look at all the little patterns of how people chose, some were more responsive to e.g. 'high probability', while some might have been 'sensitive' (which they suggest with anhedonia) to the kick in the teeth of doing all your clicks and then being told it is a 'no-win', so becoming increasingly more likely to choose easy for everything except high probability (or even that) as time goes on.

It wasn't supposed to be about chucking out one figure at the end ('% hard'), particularly if you've modified it so you've nothing validated to compare it to from other trials, and suggesting one group are the harder workers, like Walitt seems to be pitching. The test just doesn't operate like that.

Treadway et al threw in all of these different factors in order that the test shows individual differences, of which fatiguability would be a huge one, massively outweighing anything else, particularly if aspects have been weighted to outweigh other 'rewards and downsides' within the game.

I think it would help for us to have a separate thread on this, so that we can try to get these points about the game across before people comment based on the assumption Walitt is trying to sell: that the test he has used 'backs up' or 'tests' the 'construct' he has invented. I do not think that is helpful, because ironically you then get, within the argument itself, an inadvertent validation of something that isn't valid.

It's a hard enough thing to communicate even if you get a straight trail, and I know he's drip-fed his non-concept through other bits of his paper; but interrogating the concept itself in a 'clear run' makes it easier to show that the things he writes, in such a way that readers 'assume it's there/validated/true', are actually missing.
     
    Last edited: Feb 27, 2024
    Louie41, tornandfrayed, Sean and 6 others like this.
