UK:ME Association funds research for a new clinical assessment toolkit in NHS ME/CFS specialist services, 2023

One more point, an obvious one, but it seems important to reiterate when talking about evaluating the effectiveness or safety of clinics. If you are dealing with subjective outcomes, where the subject is open to being influenced by someone who has a motivation for a particular outcome, the outcomes should not be designed or measured by that someone. This is why we double-blind things.

In short, this shouldn’t be measured by the clinics at all. And yet…the clinicians and the clinical services seem to be central to this whole approach.
OMG
The PROMs are the new CBT
You have to say you’re better even if you’re not
(And by that I mean the “reinforcing GET” CBT that made you agree that overdoing it was good and didn’t make you sick)
 
I think there is an issue, which isn't unique to the MEA, where some organisations do not realise that if you are going to commission something - be it marketing, research or insight - you have to have a member of staff internally with the right skills: someone hired because they are capable and qualified to do that commissioning and to put the briefs and oversight together.

You should never just leave whatever agency, individual or hired team to it. It needs someone whose skills fit what is being asked for or commissioned, and who has significant amounts of time to half-design the project on the internal end before liaising with the agency or team to see what technique they would use, what is possible within the costs, recruitment and so on.

Something like this is not just flinging money at a team with an approximate description of what you want, as if it were a grant for an academic's research project that was already heavily defined and wasn't going to produce something like a toolkit or a measurement instrument.

And that agency or team has to bear in mind its own overheads and what can be done for that amount, versus a changeable 'customer' hoping for it to tackle different things and adding bits to the list - a bit like a builder dealing with someone changing their mind about adding another bathroom half-way through.

It still feels like there is a missing oversight role that really should have been running this - probably big enough that, given how far this has expanded, it needed an internal MEA research and development team rather than one individual with their own support team - with sufficient resource to handle the commissioning and decide what was best to do first.

There is another reason this 'missing aspect' is key: the person in that role on the MEA side would be doing something pretty hefty in translating the governance and representation of their target audience into something the project team is then quite specifically commissioned to deliver (the team itself would not be subject to that governance and reporting).

When you break this down, it is potentially a project that needed a team for years, given how many 'potentially this or that' items have been allowed to be bundled under it. They could, though, have hit the ground running with a long-term strategy and then bitten off the first things to tackle from it, i.e. building-block no-brainer items or the highest-priority projects.

It really isn't the same role as whoever signs off standard, ready-made academic research projects - particularly now things like apps are being added in. There is no way oversight and control can be kept if someone is trying to do it as a minor part of a role that is about something else entirely, because we really are talking about them having wandered into new product development territory. That involves quite specific skills, experience, support and structure.
I think that this was somehow “sold” as an idea to fit apps and the data set thing was shoehorned in.
 
Yep. To do this successfully you'd have to understand the difference between a customer satisfaction survey, a functional assessment, a service audit, and the measurement and analysis of trial outcomes.

Anyone who did know that wouldn't even attempt to roll them up into one. The purposes are at odds and the range of professional expertise required to design and utilise them is vanishingly unlikely to be found in one person.
Actually, thinking about the last few years and the hundreds of trials and papers, efficacy is basically the least significant part of the evaluation - even in trials, but even more so in service evaluations and audit-like exercises (I really haven't seen anything rise to the rigour of an actual audit). That is why completely ineffective treatments keep getting praised as effective: efficacy is evaluated based on anything but efficacy. 'Customer satisfaction' in health care is almost entirely independent of treatment outcome, except when efficacy is the only thing being evaluated - which is the only evaluation that actually matters, and of course that is why it never happens for us: it reveals the scam.

All it takes for those evaluations to turn up positive is a good warm smile and at least the ability to fake listen. That's enough for a passing grade in almost all cases, we've seen this unfold over decades, a totally fraudulent process. Inefficacy doesn't remove any points from this evaluation, because what's being evaluated is everything but efficacy.

So probably a competent audit would only be concerned with efficacy, with treatment outcomes. Which obviously would not be accepted, because nothing they do is effective, it would make them look bad, and that's exactly what everyone needs. So it's unlikely to happen.
 
jnmaciuch said:
... one of the first internship projects I ever did was a Rasch analysis. It’s really just a statistical framework for refining questionnaires. I found that it’s primarily helpful for a few things...

it does not in any way ensure that your questionnaire assesses what you intended it to assess. Or that the results of the survey will actually be meaningful and useful, for that matter

@jnmaciuch if I understood right, it's not sacrosanct magic surety; it's a statistical device for refining draft questionnaires. Rasch analysis can flag items open to dual interpretation, prune question lists and grade severity levels, but it guarantees nothing.
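For anyone curious what sits underneath: a minimal sketch of the dichotomous Rasch model in Python (illustrative only, nothing from the project's own materials). Notice that the formula only relates an abstract person 'ability' to an abstract item 'difficulty'; nothing in it knows what the questions are about, which is exactly why fitting the model well cannot guarantee a questionnaire measures what it was intended to measure.

```python
import math

def rasch_probability(theta, b):
    # Dichotomous Rasch model: the chance a person of ability `theta`
    # endorses (scores 1 on) an item of difficulty `b`.
    # P(X = 1) = exp(theta - b) / (1 + exp(theta - b))
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A person whose ability exactly matches an item's difficulty has a
# 50% chance of endorsing it; easier items are endorsed more often.
p_matched = rasch_probability(0.0, 0.0)   # 0.5
p_easy = rasch_probability(0.0, -2.0)     # ~0.88
p_hard = rasch_probability(0.0, 2.0)      # ~0.12
```

The whole analysis runs on these probabilities and observed tick patterns; the item wording never enters the maths.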

I guess it can be used in algorithms that focus tools to screen, assess, classify, profile, grade, alert, warn, prevent, protect, demographically sort, survey and monitor people who all serve and are served at cost. Time and motion study got very advanced too.

Can it query analysis of the objective technical measurements that are still being displaced by triumphant behaviour analysts, in their comfort zones, way beyond the edge of feasible rehab?

"Under Medical Devices Regulations, ethical approval is required for a clinical investigation, undertaken by or with the support of the manufacturer, in order to demonstrate the safety and performance of:

* a non-marked medical device

* a marked device that has been modified

* a marked medical device to be used for a new purpose" (e.g. re-re-purposed Tyson must re-re-re-cycle)

Can market registration of the Tysons' clinical software tool get approved on the basis of an investigation - by patient survey - gaining ethical approval?

Was it the filed and accessible Tyson application ..., or an accessible Tysons' Rehab Academy (Evidence-Based) application, or accessible MEA applications supported by the manufacturer (a Tyson, the Tysons, or their Academy)?

Why not keep us instruments informed
 
One more point, an obvious one but it seems important to reiterate for this when talking about looking at effectiveness or safety of clinics. If dealing with subjective outcomes where the subject is open to being influenced by someone who has motivation for a particular outcome, the outcomes should not be designed or measured by that someone. This is why we double blind things.

In short, this shouldn’t be measured by the clinics at all. And yet…the clinicians and the clinical services seem to be central to this whole approach.
The clinics are marking their own homework.
To do this successfully you'd have to understand the difference between a customer satisfaction survey, a functional assessment, a service audit, and the measurement and analysis of trial outcomes.

Anyone who did know that wouldn't even attempt to roll them up into one. The purposes are at odds and the range of professional expertise required to design and utilise them is vanishingly unlikely to be found in one person.
Actually, thinking about the last few years and the hundreds of trials and papers, efficacy is basically the least significant part of the evaluation,

What they said.

And we already manage ourselves, free of charge, to the highest standards it's possible to achieve. No one else should be earning money from that, or taking the credit for it.

Very important point, that needs to be taken into full account in any research hypothesis, clinical model, and medico-legal assessment.

Might also be something we can make much more use of for advocacy.
 
Trish Davis
Perhaps it will help you understand where I'm coming from with PEM if you read the S4ME fact sheet on PEM. https://www.s4me.info/threads/science-for-me-fact-sheets.43310/post-606969

Trish Davis


Sarah Tyson

Trish Davis The results of the PASS give a fascinating and detailed description of PEM in ME/CFS and I look forward to sharing and discussing the details in due course.



Sarah Tyson
Trish Davis If you don’t mind me saying, I think you have not grasped (or maybe are unwilling/unable to accept) that the point of the MEAQ is to ask about what people do, rather than the full detail of how they do it. There will be a multitude of different ways in which people perform tasks, make adaptation or other compromises to get through life. And that is fine. The issue is whether they do it, however they do it. Whether someone else may do it differently, or whether the person can do it in a certain way is irrelevant. We are not attempting to standardise the activities, we are asking if people do the activity, however they do it.

You ask about the scoring system and analysis approach. It is pretty hard to summarise as it is a large section of my life's work! However, if you really want to get your head around these issues, I recommend the seminal texts by Ann Bowling. Luckily some of it appears to be available for free nowadays. This is a good intro (Chapter 2): Measuring Health - Ann Bowling - Google Books
The analysis focusses on a really thorough examination of construct validity using Rasch analysis. The Wikipedia page is a pretty good introduction: Rasch model - Wikipedia


Trish Davis
Sarah Tyson Thank you for the information about how you analyse the data. I'll take a look.
You say: "If you don’t mind me saying, I think you have not grasped (or maybe are unwilling/unable to accept) that the point of the MEAQ is to ask about what people do, rather than the full detail of how they do it."

My response is that, I'm sorry to say, I find this reply patronising and missing my point. As I said earlier, I find FB particularly difficult to use for in-depth discussion, but I've tried to convey the central factors of cumulative effect and of how often one can do an activity for gauging key aspects of severity and need for care, which are presumably an important part of care planning. However good the mathematical/analytical instruments used, any summary statistic is too easily skewed by the sort of data this will provide. I think I'll have to leave it there. I have no energy left for arguing against someone so convinced their knowledge is superior to anything I might say.

Yes, if you include in 'et al' all of BACME and their continuing dominance in ME/CFS provision. If FUNCAP were adopted instead of the Tyson PROMs it could still be a problem if it's used to justify ongoing rehab style clinics, though it might be harder to misuse it to pretend the clinics are showing patients' health improving.

I have chosen to reply specifically to Sarah's comment above ("I think you have not grasped (or maybe are unwilling/unable to accept) that the point of the MEAQ is to ask about what people do, rather than the full detail of how they do it.") because I think her 'you don't get that this activity survey is just about activity DONE' spiel is really important to debunk as a distraction. On top of that there are separate, valid and accurate issues about 'avoid' stitching up those filling it in, etc.

The issue is that, even if it were a good way in the mix of what she has done, the way she has done it has distorted, and will I think continue to distort, the measure of activity - both regarding 'envelope' and 'limitations'.

It isn't measuring the total of what people have done, as an HR or smart watch might try to - catching that you did 4 car rides even on an average day, compared to only needing 1 the month before. It just counts whether you did a bus as well as a car journey. It can't underpin or relate to the PASS/PEM survey because of this. And it doesn't care about limitations beyond 'adjust', so it isn't measuring 'function' or 'disability' either.

What on earth is it measuring, and why? By now this is a basic question that should be being answered very clearly. Replying with nonsense about Rasch - which can't check it is 'measuring what it is supposed to measure' - without answering this question is disturbing.

It is an even fiercer and more pertinent problem when you imagine clinics still staffed by people with a mindset of encouraging someone to crowbar in something - being quite coercive or persuasive about 'doing it', like 'try a little walk' or 'chat to a friend' - because of their BACME/pseudo-CBT false thinking. And then having a survey that doesn't acknowledge even the shorter-term compensation: someone actually being 'stuck in bed', unable to shower as frequently, because of it.

And that bit is apparently the bit 'the survey doesn't intend to solve; that's for having the conversation with your BACME rep, telling them directly, with them knowing it isn't measured anywhere and is just their own notes'. So it won't even be 'patient-reported', but something done under social pressure, even if someone can look past the staff's false beliefs and write down something near-accurate!

Then there has been mention of this being used as a dataset, i.e. collecting data for prognosis for the implementation plan (where long-term outcomes, i.e. 2, 5, 10 years, should obviously be the primary outcome). That would cause big problems for everyone if inaccurate data were being used to inform what percentage get worse or better over time.

I note that the history of this is that it is effectively replacing the SF-36 physical function survey - the quality-of-life arm of their measures, which calibrates disability against those with other conditions. Obviously, once 'how adjusted' ceases to be quantified, 'how much activity/exertion' each individual task involves is also compromised: for example, if that 'bus/car ride' or 'chat/visit' activity's adjustment has moved from doing over an hour to doing less than 5 minutes.

I'll note that the SF-36 physical function survey explicitly focuses on the 'ness'/amount of limitation, whilst still having the small number of options Sarah claims is necessary:

For each activity, you select one of three options: "Yes, limited a lot," "Yes, limited a little," or "No, not limited at all".
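For comparison, here is a sketch of my understanding of the usual RAND-style PF-10 scoring (a hedged illustration, not taken from this project's materials): each of the 10 items is coded 1-3 and the raw sum is linearly rescaled to 0-100, so graded limitation per item feeds directly into the score.

```python
def sf36_pf_score(responses):
    # responses: one answer per PF item, coded
    # 1 = "Yes, limited a lot", 2 = "Yes, limited a little",
    # 3 = "No, not limited at all" (10 items in the standard PF-10).
    # Raw sum runs from 10 to 30; the usual transform rescales to 0-100:
    #   score = (raw - minimum) / (maximum - minimum) * 100
    n = len(responses)
    raw = sum(responses)
    return (raw - n * 1) / (n * 3 - n * 1) * 100.0

sf36_pf_score([1] * 10)  # 0.0   - floor: "limited a lot" on everything
sf36_pf_score([3] * 10)  # 100.0 - ceiling: no limitation at all
sf36_pf_score([2] * 10)  # 50.0
```

So even with only three options per item, the SF-36 PF quantifies the degree of limitation, which is precisely the 'ness' at issue here.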

But worse, ALL the items that relate to disability on that SF-36 need to have been accommodated - and I think there is a separate job of fishing that up (again) to see what has 'disappeared'. These exclusions cannot be justified by a Rasch analysis, which does not check whether what is actually being measured is the concept claimed. If this survey supposedly describes 'disability/physical function' (which should be related to activity, whichever side-step is being claimed by the team here), then a test needs to be done to see whether those scoring low on SF-36 PF show as such on this.
 

Anyway, back to the questions on this: how much of a problem are the inbuilt ceiling and floor effects, and the lack of 'ness' in the options combined with the choice of activities, if you believe it measures 'how much activity has been done'?

The activities she has defined as the envelope, combined with her choice not to stratify 'adjust' or 'avoid' by impact or frequency, create huge ceiling/floor effects, as well as not at all representing the full envelope for any person, particularly a pwme.

It cannot measure anyone either getting worse or being severe, as 'adjustment is adjustment'. For example, if there were a question on toileting, then even if someone declines from needing a raised toilet seat to needing a commode or bedpan, it will rank the same.

That is because the core 'unavoidable' activities, which for most pwme make up most (or all) of their 'envelope', are 'avoided' by her survey and method.

In simple terms, for anyone other than mild, any extra activity (work, social, medical/admin) that 'counts' as 'more' will be compensated for by 'less' eating, cooking, getting to the toilet, showering - there is no 'social life' or grocery trip to lose first. But because those can't drop to 'none', and will already have been limited in some way, the options stitch up almost everyone with ME/CFS from showing that impact: adjustment is adjustment, avoid is avoid.
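A toy illustration of that stitch-up (hypothetical scoring rules for the sake of argument, since the MEAQ's actual scoring isn't public here): a patient who still 'does' an activity with adjustments, but far less often, registers no change at all under did/didn't-style counting, while any frequency-aware measure registers the decline immediately.

```python
# Hypothetical example: a patient showers with adjustments in both months,
# but far less often in month 2 after crowbarring in extra activity.
month1 = {"shower": {"did_it": True, "per_week": 7}}
month2 = {"shower": {"did_it": True, "per_week": 1}}

def binary_score(month):
    # "Adjustment is adjustment": only whether the activity was done counts.
    return sum(1 for activity in month.values() if activity["did_it"])

def graded_score(month):
    # A frequency-aware measure registers the drop.
    return sum(activity["per_week"] for activity in month.values())

binary_score(month1) == binary_score(month2)  # True: deterioration hidden
graded_score(month1) > graded_score(month2)   # True: deterioration visible
```

The hidden difference here is a sevenfold drop in how often the activity happens; the binary tick-count cannot see it by construction.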

AND as for the claims it will somehow 'help clinics': it won't, because by hiding that impact it makes it look like the activity that caused it was OK - noting there wasn't a 'but it will leave me in bed for a week' option. It's the PACE-style doing the 6-minute walk training at the sacrifice of showering and grocery shopping again, isn't it?

The very activities where any impact would show are either entirely absent, not wholly accounted for (e.g. by frequency, a shower moving from once a week to once a month), or hidden by options that effectively 'anchor' the answer someone can give - unmoveable boxes that do not stratify different severities.

Effectively, for most pwme most of the time, the very areas and the very manner in which significant change in envelope - i.e. amount of doing, which this survey claims to measure - would show are being hidden/missed.


Nor can it show whether people do better or worse from clinic treatment (measuring not just 'delivery' but whether a clinic's 'treatment' is making pwme better or worse). It is a problem if someone can get significantly worse and the survey cannot register it, because of the way it uses the number of boxes ticked from a select group of activities, and does not include the aspects that 'drop off' in compensation - whether by stratifying adjustments or by having a complete enough list that one could see a medical appointment meant a significant knock-on effect on the frequency and adjustment of personal-hygiene tasks, i.e. the overall envelope didn't stretch but went down.

It is the 'disappearing the harms' again. People at a certain level can only improve their 'amount of activity' according to what counts on this, or stay the same, with these boxes and choices.

I used the toilet example because if a pwme had to crowbar in a compulsory activity such as an appointment, the cost would tend to show activity-wise - particularly in those who are not mild, i.e. moderate to very severe - specifically in those core activities being compromised further. Activities such as eating or personal hygiene can only be further adapted and reduced in frequency, which this does not allow to be measured.



PS a Rasch analysis (mentioned elsewhere in the thread by Sarah) would not cover checking this 'completeness of envelope' issue at all. Indeed it does not check 'is it measuring the concept it claims to be measuring', just internal consistency as a tool.

And ceiling/floor effects - i.e. whether it can differentiate moderate ME/CFS from more severe - would not be checked by it unless:
  • it was being tested on those more severe, or moving between levels of severity (Sarah herself acknowledges this, instead saying those would 'need a separate survey' - which does not answer the point; it is the same as discharging people when they get worse and calling it recovery, if that is hidden under drop-outs rather than fed into the outcomes, or if the box for it exists only on a survey that doesn't count);
  • a different tool that would pick up on this gap was used as a comparator, i.e. one that would identify someone getting worse, to see whether this survey was also picking them up.
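To make the internal-consistency point above concrete, here is a toy simulation (pure Python, illustrative only, nothing to do with the actual MEAQ data): responses generated from a single unnamed latent trait show a 'healthy' item-total correlation on every item - roughly the kind of internal coherence a Rasch-style analysis rewards - while nothing in the numbers can say whether that trait is 'activity', 'fatigue' or anything else.

```python
import math
import random

random.seed(0)

# Item difficulties for 5 hypothetical yes/no items.
ITEM_DIFFICULTIES = (-1.0, -0.5, 0.0, 0.5, 1.0)

def simulate(n_people=500):
    # Answers driven by ONE latent trait per person. We could label that
    # trait "activity", "fatigue" or "mood"; the data are identical either way.
    data = []
    for _ in range(n_people):
        theta = random.gauss(0, 1)  # the (unlabelled) latent trait
        row = [1 if random.random() < 1 / (1 + math.exp(-(theta - b))) else 0
               for b in ITEM_DIFFICULTIES]
        data.append(row)
    return data

def item_total_correlation(data, item):
    # Crude internal-consistency check: does each item track the rest-score?
    xs = [row[item] for row in data]
    ys = [sum(row) - row[item] for row in data]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

data = simulate()
# Every item correlates positively with the rest ("good fit"), yet nothing
# here checks the questionnaire measures the concept it claims to measure.
```

Internal fit and construct validity are different questions; the simulation only ever answers the first.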
 


PS I'm also not sure whether Sarah and Pete get the following regarding ME/CFS and their measuring activity:

PEM generally isn't fatiguability that you get straight away and that lasts a few minutes or hours. So there is something called 'rolling PEM' that most people will be stuck in at some point, if their threshold versus their commitments (or, in the early days, maybe 'chosen' activities) means they can't recover properly to get out of PEM.

And this leads over time to eventual deterioration in function. And in objective activity done.

Which can be seen over months, and will mostly be noticed at 6 months+.


I'm not sure they get - or will be emphasising to those they think will be using this in clinics - that when looking at someone's other surveys (if the rest cover 'health' and 'wellbeing'), it isn't the current month of September's activity total that has impacted on those, but probably the many months leading up to it: overdoing it in June, July and August means that in September the threshold is suddenly reduced and you can neither do as much as you could nor feel OK.

As a bare minimum, if this weren't rubbish at measuring activity done, then for someone going into a clinic tomorrow in September saying they feel terrible and presenting 4 surveys, that 'clinician' should be pulling out at least August's activity survey and assuming they probably did too much then, as the cause of the deterioration now (which, if this survey worked, would logically show as an objective reduction in function and/or in the activities people could reduce).


So there is a pertinent question for them too: if this doesn't collect information that shows someone's 'pace points' or 'total activity done' - so it can't tell more than whether someone ticked a greater variety of tasks - then I can't see how this is anything other than a physio's starter-for-ten on rehab. That has nothing to do with ME/CFS.


And I am becoming even more sure that this is written to the 'concept' of 'CFS' that Gladwell, and it seems Tyson, have: 'chronic fatigue with a bit of payback if you pace up too fast'. PASS seems to need another look, as I suspect it is measuring fatiguability - a concept Sarah keeps ducking saying she even understands - and her description of PEM sounds suspiciously like fatiguability on quite a few occasions. And Gladwell seems to see it as DOMS.


The situation where the connection between these 4 surveys is not being clearly defined and spelt out in unambiguous terms - i.e. how they will be computed together, whether to show how the illness works or to underpin and perpetuate false information and paradigms that will do harm; the very algorithm, and whether these are yet again cart-before-horsing the symptoms and cause of ME/CFS - is not acceptable.

It would be like someone getting to run PACE without even having to state a protocol, or what the primary outcome measures, hypothesis and null hypotheses are - just getting to collect the info and saying how they will use it later.

Shifting framing - one minute claiming 'it is just for clinic staff' when asked how it will be used across respondents, then, when someone asks about something specific to it not working well for clinics, answering on the basis of 'it being for something else' - makes me even more suspicious.

This is skirting basic ethics. It is ducking the ethics board for research by claiming it isn't research; ducking the ethics of informed consent for an app collecting data by calling itself a questionnaire; and ducking the question of why, if it is about 'clinical outcomes', it focuses neither on what the staff are delivering nor on objective outcomes, by claiming it isn't for that when that is the direct question being asked. Then it is said to be for all of these things, in approximate terms, when asked why it is being done.

When actually, from a legal point of view, it should have been required to meet every single one of these - not none?

Is it going to try to slide past the app bit by saying it is just 'using a survey designed for something else', and past the research ethics bit - you know, where you have to tell participants what you are measuring and how before they do the surveys - by saying 'it's just for clinic development' forever?
 
Last edited:
PS I'm also not sure whether Sarah and Pete get the following regarding ME/CFS and their measuring activity:

PEM generally isn't fatiguability that you get straight-away and last a few minutes/hours. SO there is something called 'rolling PEM' that most people at some point will be stuck in if their threshold vs committments (or in early days maybe 'chosen' activities) mean someone can't recover properly to get out of PEM.

And this leads over time to eventual deterioration in function. And in objective activity done.

Which can be seen over months. And will mostly be noticed in 6months+


I'm not sure they get, or will be emphasising to those they think will be using this in clinics, that when they are looking at someone's other surveys - if the rest of them cover 'health' and 'wellbeing' - and realising that it isn't that current month of Sept activity total that has impacted on that, but probably the many months leading up to that eg June, July, Aug overdoing it ----> Sept suddenly threshold is reduced and you can neither do as much as you could or feel OK.

But certainly, as a bare minimum, if this weren't rubbish at measuring activity done: take someone going into a clinic tomorrow in Sept, saying they feel terrible and presenting 4 surveys. That 'clinician' should be pulling out at least Aug's activity survey and assuming they probably did too much then as the cause of the deterioration now (which, if this survey worked, would logically include an objective reduction in function and/or in activities that people could cut back).


So there is a pertinent question there for them too: if this doesn't collect the information that shows someone's 'pace points' or 'total activity done' - so it can't tell you more than whether someone ticked a wider variety of tasks -

then I can't see how this is anything other than a physio's starter-for-ten on rehab. Which has nothing to do with ME/CFS.


And I am becoming even more sure that this is written to the 'concept' of 'CFS' that Gladwell, and it seems Tyson, hold: that it is 'chronic fatigue with a bit of payback if you pace up too fast'. PASS seems to need another look, as I suspect it is measuring fatiguability - a concept Sarah keeps ducking saying she even understands, and her description of PEM sounds suspiciously like fatiguability instead on quite a few occasions. And Gladwell seems to see it as DOMS.


The situation where the connection between these 4 surveys - ie how they will be computed together, either to show how the illness works or to undermine that and perpetuate false information and paradigms that will do harm; the very algorithm, and whether these two are yet again putting the cart before the horse with the symptoms and cause of ME/CFS - is left unstated, is not acceptable.

It would be like someone getting to run PACE without even having to state a protocol, or what the primary outcome measures, hypothesis and null hypothesis are.

Shifting framing - one minute claiming 'it is just for clinic staff' when asked how it will be used across respondents, then, when someone asks about something specific that doesn't work well for clinics, deciding to answer on the basis of it 'being for something else' - makes me even more suspicious.

You’re right. I’m so fed up of the fundamental misunderstanding: “today” isn’t a “new day” or a “clean slate”, it’s a day with a history of struggles, pain and symptoms. Every day is.

This basic cookie-cutter approach has as much use as those diets you used to get in the newspaper/magazine. People would commit to a week of having an “open sandwich” for lunch with 1 small piece of bread, 3 cherry tomatoes and a slice of lean ham, and other impossibly small foods - then quit because it doesn’t fit with real life. It’s incompatible and not worth the effort because it fails.
 
And it's even more silly if someone turning up for that in Sept had that week's food diary analysed as if it were what caused them to be a stone heavier now, instead of the last few months.

Except no one is taking objective measurements: this is both the measure of 'activity done' regarding 'cause' and the measure of 'activity done' that they might be intending to use as 'outcome' and 'baseline to work from'.

And by this measure, if they were using the survey this way, someone who had 10 of the same item in their 'average day last month' would look like they'd done ten times less overall than someone with 10 different items, who could tick the box for having 'done 10 different sandwiches on an average day'. So if, instead of other objective measurements, the sandwiches people listed having done were the 'measurement', then the same 'overall amount' would make one person look like a gold star compared to the other - purely based on which boxes could be ticked, because 'amount/frequency' isn't measured under each item?

wondering if I've translated the analogy right..?
 