UK:ME Association funds research for a new clinical assessment toolkit in NHS ME/CFS specialist services, 2023

Discussion in 'ME/CFS research news' started by InitialConditions, May 8, 2023.

  1. MrMagoo

    MrMagoo Senior Member (Voting Rights)

    Messages:
    1,191
    Good to know
     
    bobbler, Sean, alktipping and 4 others like this.
  2. MrMagoo

    MrMagoo Senior Member (Voting Rights)

    Messages:
    1,191
    Last edited: Apr 8, 2024
  3. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,175
    Location:
    London, UK
    Well I suppose if you had Long Covid and did a lot of exercise you might be down to 0.8 of a woman.
     
  4. MrMagoo

    MrMagoo Senior Member (Voting Rights)

    Messages:
    1,191
    I hate that it looks like they’re going to go through what we went through.
     
    Arvo, Missense, MEMarge and 10 others like this.
  5. MrMagoo

    MrMagoo Senior Member (Voting Rights)

    Messages:
    1,191
  6. Kitty

    Kitty Senior Member (Voting Rights)

    Messages:
    6,796
    Location:
    UK
    From the website:

    Good to hear that at least some of them were considered useful!
     
    Trish, Missense, MEMarge and 3 others like this.
  7. MrMagoo

    MrMagoo Senior Member (Voting Rights)

    Messages:
    1,191
    Statement by Sarah Tyson
     


  8. MrMagoo

    MrMagoo Senior Member (Voting Rights)

    Messages:
    1,191
    So it seems Gladwell was working on this report, evaluating the Chalder Fatigue Scale, prior to the pandemic. Luckily, it has shown that the scale has issues which could be rectified by PROMS, just as Sarah Tyson and Gladwell are weeks away from releasing their new MEAQ as part of their PROMS. Neat.
     
    Missense, MEMarge, Sean and 5 others like this.
  9. Trish

    Trish Moderator Staff Member

    Messages:
    55,414
    Location:
    UK
    Sigh, so the project rolls merrily on.

    I hope the PASS questionnaire will be radically revised.
     
    Arvo, MEMarge, Sean and 7 others like this.
  10. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,734
    Well, that is what they put on it as their suggestion. But of course the only thing the actual research - Trial Report - Exploring the content validity of the Chalder Fatigue Scale using cognitive interviewing in an ME/CFS population, 2024, Gladwell | Science for ME (s4me.info) -

    was doing is looking at the CFQ and talking to people at their clinic, prior to the pandemic and the new guideline, in order to critique it.

    His report shows no evidence that PROMS would be better rather than worse. And it doesn't test PROMS.

    It's like getting to sell your new maths course by saying the old one isn't getting good results, without being expected to say whether yours is any different, never mind 'better', at filling the gap that actually needs filling.

    The bigger issue is that PACE used fatigue and physical function as its defining measures - we know what their claimed results were back then, and we know those have now been reanalysed. We also know that Crawley et al (2013) used the same measures of fatigue (CFQ) and physical function (SF-36, with an 11-point improvement defined as recovery) and failed to show a difference in physical function.

    So is PROMS now effectively a sneaky way of moving the pesky 'physical function' bit out of 'what needs to be measured'?

    And did they realise they needed to 're-brand' the fatigue scale to hide that it is just a subjective questionnaire about fatigue (as the name Chalder Fatigue Scale is an issue)?
     
    Sean, MrMagoo, alktipping and 2 others like this.
  11. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,734
    I hope that the emphasis on physical function being the main measure will be a firm line from the ME Association.

    And, bearing that in mind, that there is some standing back and thinking about how that needs to be appropriately measured, without constraining it to the 'constraints' pitched by e.g. Gladwell and the Crawley et al (inc. White) paper on PROMS.

    Technically the acronym PROMS just means certain words. I'd really like us to investigate whether there is any 'instructive' material out there, from e.g. the NHS (if this is about clinics), that does constrain and define further what a PROM must include and so on.

    To make sure that what people think is 'dictated', if it is to be a PROM, really is dictated, and so on.

    And indeed the methodology behind it, so we can understand how weightings are translated into scores - as others have said, there will be some sort of algorithm, but algorithms are based on weightings, and so there is something underneath setting these things that can and should be made more transparent.
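
    To make concrete what that 'something underneath' usually looks like, here is a rough sketch in code (entirely made-up item names, responses and weights - this is not the actual PROMS/MEAQ scoring, which has not been published): a weight per question plus a rule for combining the answers. Whoever sets the weights has already decided which answers dominate the final number, which is exactly why it should be published.

    # Hypothetical sketch only: NOT the real PROMS/MEAQ algorithm, which is unpublished.
    # It just shows the kind of machinery that sits underneath any questionnaire score:
    # a weight per item and a rule for combining them.

    # Example item responses (0 = no problem ... 3 = severe); item names invented.
    responses = {
        "fatigue_after_activity": 3,
        "difficulty_walking": 2,
        "difficulty_concentrating": 1,
    }

    # The weights are the opaque part: whoever sets these decides which symptoms
    # dominate the final score, which is why they should be made transparent.
    weights = {
        "fatigue_after_activity": 2.0,   # counted double
        "difficulty_walking": 1.0,
        "difficulty_concentrating": 1.0,
    }

    total = sum(weights[item] * value for item, value in responses.items())
    max_possible = sum(w * 3 for w in weights.values())
    print(f"Composite score: {total} / {max_possible}")  # prints: Composite score: 9.0 / 12.0

    In this made-up example, doubling the weight on the fatigue item is all it takes for that one answer to account for half of the maximum score; nothing about the patient changed, only the weighting did.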
     
    Sean, MrMagoo, Kitty and 1 other person like this.
  12. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,734
    Well, this was put together as an article/advertorial focused on trying to PR what has been happening; yes, it is mostly about selling the PROMS project (Trojan-horsing the paper, which was about the CFQ and didn't test PROMS at all, as a 'reason for press release', but then quickly moving on to being about... and so the PROMS project...). Lots of testimonials from people who filled in the survey and are apparently saying wonderful things.

    Cynical me might use the term 'bolted onto the front' about the one-liner mentioning the paper Gladwell was involved with (that got its data before lockdown from its 13 participants and didn't ask them about PROMS) that was published on the 6th. The other content might have been written prior to/in anticipation of this.
     
    Sean, MrMagoo, alktipping and 2 others like this.
  13. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,734
    MEA article:

    "The research team have personal experience of ME/CFS so we understand the energy cost of completing the surveys. Please know that your efforts are greatly appreciated and your feedback is being put to good use. Thank you so much!"

    "We have been overwhelmed and humbled by the thousands (literally!) of people who have supported the project so far, by completing the surveys and providing invaluable feedback about the tools. These will be combined with the results of the statistical analysis to revise the tools so the final versions are ‘fit for purpose’."

    "With perfect timing, we are putting the finishing touches to the survey to test out the next assessment in the toolkit; The ME Activity Questionnaire (MEAQ). This aims to assess activity levels. We will publicise the link to complete the survey in a couple of weeks, via the MEA newsletter."


    Vs the response, from which I have (for various reasons, so as not to distract) cut some rather inappropriate aspects, merely because it is interesting that the topic of discussion is very specifically 'physical function' and a paper which had used the SF-36 for that.

    I think elsewhere on the thread others read through the papers 'offered as a retort' and found they were paywalled and not relevant; I'll look up the exact comment.

    OK here are the relevant posts:


    Now, this final bit - I kept that last one, editing out some 'problem areas' because, beyond the tropes

    - which one could suggest acted as a 'massive distraction' to divert things well away from the very thing that had just come up as a constructive discussion/fair question further ahead in the thread -

    I think it is worth studying this line.


    What she appeared to have been responding to was exactly what she claims here she joined the thread for.

    I'm now looking up a certain paper she mentions, and wondering whether the content was a bit too 'near the knuckle' on this exact issue.

    She didn't want to answer the question??

    - physical function being 'an issue' for e.g. Crawley and White (and PACE?), so when you don't get the results on one half of a measure, changing the measure to phase that half out might be a thought.


    Is it worth us, with the new hindsight, having a careful look back through these?
     
    Last edited: Apr 9, 2024
    Sean, MrMagoo, alktipping and 2 others like this.
  14. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,734
    Another thing that strikes me as worth looking into/considering, having read the Crawley et al (2013) inc. P D White paper (which found no result for physical function),

    is how 'rolled in together' these PROMS are aiming to be regarding the end score or profile or whatever the output is.

    If, for example, your research failed because you'd set an 11-point increase on the SF-36 as the hypothesis that needed to be met for 'improvement' in order to claim a treatment works, but you 'got a result' on a subjective fatigue scale, then there is a risk, if you were 'putting all the questions together', of (a rough numerical sketch follows after this list):

    - weighting certain factors over others on input (easiest example: finding each CFQ question added into a model twice and each SF-36 question only once)

    - or having fewer questions from the physical function side

    - or on 'calculation' (don't make people answer the CFQ twice, but just give answers to certain things more weight in the calculation)

    - or on 'scoring' limits that can affect the calculation, e.g. if one measure has ceiling or floor effects that limit how much it can change vs a more sensitive measure, meaning that a small change in the sensitive one is more likely and won't be cancelled out by an equivalent change in the other
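
    A rough numerical sketch of that risk (made-up items, weights and scores - nothing to do with PACE, Crawley et al (2013) or the actual PROMS questionnaires): if the fatigue items carry more weight, or the physical function items can barely move, a one-point shift on each fatigue question produces an apparently healthy 'improvement' in the combined score even though physical function hasn't changed at all.

    # Illustrative only - invented items, weights and scores, not real data or the
    # real scoring rules. The point is mechanical: a single combined number can
    # "improve" purely from the subjective fatigue items.

    def composite(fatigue_items, function_items, w_fatigue=2.0, w_function=1.0):
        """Weighted sum of two subscales; the weights are a (hypothetical) design choice."""
        return w_fatigue * sum(fatigue_items) + w_function * sum(function_items)

    # Before 'treatment': three fatigue items and three physical function items, scored 0-3.
    before = composite(fatigue_items=[3, 3, 3], function_items=[3, 3, 3])
    # After: fatigue answers shift by one point each; physical function is unchanged.
    after = composite(fatigue_items=[2, 2, 2], function_items=[3, 3, 3])

    print(before, after, before - after)  # 27.0 21.0 6.0

    # The whole 6-point 'improvement' came from the subjective fatigue items; the
    # physical function items contributed nothing, but the single combined score hides that.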

    We might not love the physical function side of the SF-36 specifically; however, the issue with 'delivery' somehow produced a 'change' in the CFQ without a change in physical function, so I think it is worth our studying the elements close to those questions very carefully.

    It's interesting how the real end conclusion from Crawley et al (2013), vs what they thought back then were differing PACE results (although are they now? did the reanalysis show any change in physical function?), is that 'in clinic' the scores on the subjective fatigue scale apparently 'got changed' but people's physical function didn't.

    Whereas they thought/claimed that PACE did both (although they also changed - pertinently to this thread - some of the 'measures', i.e. what was defined as recovery; so was it that they changed it from the '11-point difference'?)

    I'm trying to 'get inside the mind': if you really believed your stuff on 'change the mindset, become less disabled', what precisely would you have thought was missing in the delivery of CBT and GET - the thing that needed to be measured to pick up on these deficiencies/differences in 'delivery' - such that the clinic lot weren't 'jumping the shark' and changing physical function, despite changing 'fatigue scores'?

    Is there even anything logical there (or illogical that could be their logic) that doesn't just point to the hidden message of 'better get rid of this measure' and then claiming you need a new one 'because the results aren't consistent with the trial because some physios must be doing it wrong'?
     
    Peter Trewhitt, MrMagoo and Sean like this.
  15. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,734
    From MEA press release (my bolding):

    "Well-developed assessment tools that represent people’s experience and produce robust, good quality information/data have several benefits for both people with ME/CFS and NHS specialist services. First, and most importantly, they are a way for people to identify and summarise their difficulties,

    Secondly, the information the tools provide can act as a starting point for discussions with the clinical team about people’s needs and priorities, and how to manage them. They can also be used as evidence of difficulties and limitations in applications for disability benefits, or workplace adjustments, for example.

    Finally, when combined, all the elements of the toolkit can be used to assess how well NHS specialist services are performing, by identifying what they are doing well and areas for improvement.

    The final two elements of the toolkit, which will assess patients’ needs (called a clinical needs assessment) and their satisfaction with NHS specialist services (also known as a patient reported experience measure) will examine these issues in more detail.

    This information can be invaluable for NHS specialist services to develop a business case for service improvements. For example, demonstrating the need for more staff, input from different professions, or more flexible ways of working. The assessments in the toolkit could also be used as outcome measures in clinical trials, but this is a secondary purpose.

    With perfect timing, we are putting the finishing touches to the survey to test out the next assessment in the toolkit; The ME Activity Questionnaire (MEAQ). This aims to assess activity levels. We will publicise the link to complete the survey in a couple of weeks, via the MEA newsletter."



    I've bolded a few bits because this does seem to be about using the patient and how they progress as a measure.

    As for the following sentence from the middle, it is worth noting how they have termed the second of these, 'satisfaction with NHS specialist services', a 'PREM', i.e. a patient reported experience measure:

    "The final two elements of the toolkit, which will assess patients’ needs (called a clinical needs assessment) and their satisfaction with NHS specialist services (also known as a patient reported experience measure) will examine these issues in more detail."

    I can't help but feel that a lot of the claims and 'look at the shiny keys' early on were sort of implying that the PROM was that PREM and was about measuring satisfaction with the services.


    Particularly given @Maat 's description of what they experienced when they had to sign off consent for their GP, employer's HR and OH and so on all to be able to talk about them and plan the 'return to work' at the same time as the clinic was 'GET-ting' them,

    why, instead of measuring 'physical function', have we got a tool that is measuring all of these things in a person and then, 'with the final activity questionnaire', I guess, their activity levels?

    So someone who claims they couldn't use tech to make physical function measures objective now wants clinics to be monitoring activity levels?

    I'm sorry, but it seems like some of the discussions that got closed down for certain claimed reasons just don't add up with what is now being sold here.

    Why would you 'monitor activity levels' instead of discussing 'physical function' and the methodology that is most appropriate and accurate for that?
     
    Peter Trewhitt, MrMagoo, Sean and 2 others like this.
  16. Kitty

    Kitty Senior Member (Voting Rights)

    Messages:
    6,796
    Location:
    UK
    Not hypnotic enough?

    Magic spell not working?

    But seriously, I wonder if they're confused between the subjective sense of fatigue and actual physical function?

    When I'm more active I experience noticeably less fatigue, and if I was asked about it without knowledge of the way the information would be used, I'd say that. However, my actual physical function will be somewhat lower in the following days, because ... well, I have ME and I can't sustain periods of higher activity.
     
    Arvo, MeSci, MEMarge and 6 others like this.
  17. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,734

    And how on earth is this PROM going to identify what any clinic is 'doing well'?

    I'm very familiar with benchmarking and the passing on/exchange of best practice in other sectors and scenarios.

    Often these things begin with someone standing up and doing a case study about how they built a service around proper customer input, with built-in co-design and co-creation for example, and how they ran checks throughout it.

    Or someone who was tackling a specific issue and was able to describe how they investigated what was going on, how they could improve it, what it meant for staff as they had to change themselves and their way of doing things, and what the benefits to all were.


    They rarely involve 'measuring up' a load of customers on so many different factors, while being quite closed-minded about the methods you will even consider for each.

    And I'm suspicious of the 'patient input' claims. Is that a facade? We can all say we 'talked to 25 people' with a straight face.

    "
    This was a very well written and relevant paper that illustrated the pitfalls of failing to include people with lived experience when developing measurement tools, leading to poor quality data and misleading results.

    In the MEA-funded Clinical Assessment Toolkit project we are working with people with ME/CFS and clinicians from ME/CFS specialist services to produce a suite of measurement tools that overcome these short-comings."


    So if I pick out what the shortcomings of the CFQ were, as identified by 13 people who were at the Bristol clinic in 2018, from those very specific one-to-one-with-a-psychologist style interviews....

    Firstly, the CFQ at least did show their intended effects in PACE and Crawley et al (2013) - so are they saying/agreeing that the only effect in the latter, 'fatigue', from their 'treatment' was the result of a tool that 'leads to poor quality data and misleading results'?

    Because if so, shouldn't they first be campaigning to get both of those treatments, and anything masquerading as them, out of any clinic?


    Then, is this PROM, getting all of this on individuals, the only, most accurate and most acceptable-to-patients way of 'fixing that'? So I assume that it is all of those questions on the CFQ that have disappeared, and not the 'physical function' scale that didn't get their desired results?
     
    Arvo, Peter Trewhitt, MrMagoo and 3 others like this.
  18. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,734
    More like 'the hypnotism really works (for fear of heights, certainly 'on paper') but it turns out they had a balance issue (say Ménière's disease or undiagnosed Parkinson's, MS or something) so still have vertigo' but without the bit where they acknowledge you can't treat all vertigo with hypnotism.

    And so instead of 'getting the issue', they claim the problem is 'the delivery of the hypnotism' in order to justify changing the measure

    in order to remove the test where they have to go up a ladder and stand on one foot for ten seconds. Or do other physical function tests.

    That was alongside the questionnaire about whether they like things to do with heights and can read words about tall objects etc - which turned out to be the bit irrelevant to treating Ménière's anyway, as that guideline had just confirmed it shouldn't therefore be the focus of the offer.



    Well yes, like the people writing 'managing energy levels'

    who don't get that, unfortunately, that is what too many of us have worked out how to do far too well for far too long, given we have an energy limit.
     
    alktipping, Arvo, Keela Too and 4 others like this.
  19. Sean

    Sean Moderator Staff Member

    Messages:
    8,064
    Location:
    Australia
    With the usual caveats about subjective reporting and the potential for such measures to be misleading and open to manipulation, there might be some value in a general measure of patient satisfaction with the clinical encounter, separate from actual therapeutic benefit.
     
    Arvo, Kitty, Binkie4 and 5 others like this.
  20. MrMagoo

    MrMagoo Senior Member (Voting Rights)

    Messages:
    1,191
    It's all a bit much for me just now, but there was discussion with Sarah Tyson about some questions having answer options along the lines of 'some', 'moderate', 'massively', etc., and posters worried that those terms mean different things to different people, so it wasn't an objective or clear question/answer option. The response was that it's "what it means to the person answering", which annoyed posters.
    Gladwell's criticisms of the CFQ reminded me of that.
     
    alktipping, Arvo, Kitty and 5 others like this.
