UK:ME Association funds research for a new clinical assessment toolkit in NHS ME/CFS specialist services, 2023

One more point, an obvious one, but it seems important to reiterate when talking about the effectiveness or safety of clinics. If you are dealing with subjective outcomes, where the subject is open to being influenced by someone who has a motivation for a particular outcome, then the outcomes should not be designed or measured by that someone. This is why we double-blind things.

In short, this shouldn’t be measured by the clinics at all. And yet…the clinicians and the clinical services seem to be central to this whole approach.
OMG
The PROMs are the new CBT
You have to say you’re better even if you’re not
(And by that I mean the “reinforcing GET” CBT that made you agree that overdoing it was good and didn’t make you sick)
 
I think there is an issue, not unique to the MEA, where some organisations do not realise that if you are going to commission something, be it marketing or research or insight, you have to have a member of staff internally with the right skills: someone hired because they are capable and qualified to do the commissioning and to put briefs and oversight together.

You should never just leave whatever agency, individual, or hired team to it. It needs someone whose skills fit what is being asked for or commissioned, and who has significant amounts of time to half-design the project on the internal end before then liaising with the agency or team to see what technique they would use, what is possible within the costs and recruitment, and so on.

Something like this is not just flinging money at a team with an approximate description of what you want. It is not like a grant for an academic's research project that was already heavily defined, and that wasn't going to produce something like a toolkit or a measurement instrument.

And that agency/team has to bear in mind its own overheads and what can be done for that amount, versus a potentially changeable 'customer' hoping for it to tackle different things and adding bits to the list, a bit like a builder dealing with someone changing their mind about adding another bathroom halfway through.

It still feels like there is a missing oversight role that really should have been running this, with sufficient resource to handle the commissioning, decide what was best to do first, and so on. The job was probably big enough that, given how far this has expanded, it needed more than one individual with a support team: it needed an internal MEA research and development team.

There is also another reason this 'missing aspect' is key: the person in that role on the MEA side is doing something pretty hefty with regard to translating the governance and representation of their target audience into something the project team is then quite specifically commissioned to do (but would not itself be as subject to said governance and reporting).

When you break this down, it is potentially a project that needed a team for years, because of how many 'potentially this or that' items have been allowed to be bundled under it. Although they could have hit the ground running with a long-term strategy and then decided which pieces to tackle first, i.e. building-block no-brainer items or the highest-priority projects.

It really isn't the same role as whoever signs off standard, ready-made academic research projects, particularly now we are starting to get things like apps added in. There is no way the oversight and control can be kept if someone is trying to exercise them as a minor part of a role that is about something else entirely. We really are talking about them having wandered into new product development territory, and that involves quite specific skills, experience, support, and structure.
I think that this was somehow “sold” as an idea to fit apps and the data set thing was shoehorned in.
 
Yep. To do this successfully you'd have to understand the difference between a customer satisfaction survey, a functional assessment, a service audit, and the measurement and analysis of trial outcomes.

Anyone who did know that wouldn't even attempt to roll them up into one. The purposes are at odds and the range of professional expertise required to design and utilise them is vanishingly unlikely to be found in one person.
Actually, thinking about the last few years and the hundreds of trials and papers, efficacy is basically the least significant part of the evaluation, even in trials, and even more so in service evaluations and audit-like things (I really haven't seen anything rise to the rigour of an actual audit). That is why completely ineffective treatments keep getting praised as effective: efficacy is evaluated based on anything but efficacy. "Customer satisfaction" in health care is almost entirely independent of treatment outcome, except when efficacy is the only thing being evaluated. That is the only evaluation that actually matters, and of course that is why it never happens for us: it would reveal the scam.

All it takes for those evaluations to turn up positive is a good warm smile and at least the ability to fake listen. That's enough for a passing grade in almost all cases, we've seen this unfold over decades, a totally fraudulent process. Inefficacy doesn't remove any points from this evaluation, because what's being evaluated is everything but efficacy.

So a competent audit would probably be concerned only with efficacy, with treatment outcomes. That would obviously not be accepted, because nothing they do is effective and it would make them look bad, even though it is exactly what everyone needs. So it's unlikely to happen.
 
jnmaciuch said:
... one of the first internship projects I ever did was a Rasch analysis. It’s really just a statistical framework for refining questionnaires. I found that it’s primarily helpful for a few things...

it does not in any way ensure that your questionnaire assesses what you intended it to assess. Or that the results of the survey will actually be meaningful and useful, for that matter

@jnmaciuch if I understood right, it's not sacrosanct magic surety; it's a statistical restructuring device to refine draft questionnaires via Rasch analysis, flag up dual interpretations, prune query lists, and grade severity of phases: but it guarantees nothing.
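To make that concrete, here is a minimal sketch of the kind of thing Rasch analysis gives you. Everything in it is an illustrative assumption (simulated data, a simple PROX-style difficulty estimate, and the classic outfit statistic), not anything from the MEA project. It shows the two sides of the point above: the model recovers item difficulties and can flag a misfitting item, but a nonsense coin-flip item still gets a difficulty estimate, so a tidy Rasch calibration guarantees nothing about whether the questionnaire measures what you intended.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Simulate questionnaire data under a dichotomous Rasch model ---
n_persons, n_items = 500, 8
theta = rng.normal(0.0, 1.0, n_persons)      # person trait level (e.g. severity)
b_true = np.linspace(-1.5, 1.5, n_items)     # true item difficulties

# Rasch model: P(endorse item i) = 1 / (1 + exp(-(theta - b_i)))
prob = 1.0 / (1.0 + np.exp(-(theta[:, None] - b_true[None, :])))
X = (rng.random((n_persons, n_items)) < prob).astype(int)

# Add one deliberately broken item: pure coin flips, unrelated to the trait
bad = (rng.random(n_persons) < 0.5).astype(int)
X = np.column_stack([X, bad])

# --- PROX-style difficulty estimates: centred log-odds of non-endorsement ---
p_i = X.mean(axis=0)
b_hat = np.log((1.0 - p_i) / p_i)
b_hat -= b_hat.mean()

# --- Outfit statistic per item: mean squared standardised residual ---
# (simulated theta is used here for clarity; a real analysis estimates it too)
E = 1.0 / (1.0 + np.exp(-(theta[:, None] - b_hat[None, :])))
z2 = (X - E) ** 2 / (E * (1.0 - E))
outfit = z2.mean(axis=0)                     # roughly 1.0 for items that fit

worst = int(np.argmax(outfit))
print("recovered difficulties:", np.round(b_hat[:n_items], 2))
print("outfit per item:", np.round(outfit, 2), "-> worst fit: item", worst)
```

With enough respondents the eight genuine items come back in the right difficulty order and the coin-flip item shows the largest outfit, so it would be pruned. But note that the coin-flip item still received a perfectly sensible-looking difficulty estimate; nothing in the machinery tells you an item is measuring the wrong thing unless its misfit happens to be detectable.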

I guess it can be used in algorithms which focus tools to screen, assess, classify, profile, grade, alert, warn, prevent, protect, demograph, survey, and monitor people who all serve and are served at cost. Time and motion study got very advanced too.

Can it query analysis of the objective technical measurements, still being replaced by triumphant behaviour analysts, in their comfort zones, way beyond the edge of feasible rehab?

"Under Medical Devices Regulations, ethical approval is required

Ethical approval is required for a clinical investigation, undertaken by or with the support of the manufacturer, in order to:

- demonstrate the safety and performance of:

* a non-marked medical device

* a marked device that has been modified

* a marked medical device to be used for a new purpose" (e.g. a re-re-purposed Tyson must re-re-re-cycle)

Can market registration of the Tysons' clinical software tool get approved on the basis of an investigation - by patient survey - gaining ethical approval?

Was it the filed and accessible Tyson application ..., or an accessible Tysons' Rehab Academy (Evidence-Based) application, or accessible MEA applications supported by the manufacturer (a Tyson, the Tysons, or their Academy)?

Why not keep us instruments informed?
 
One more point, an obvious one, but it seems important to reiterate when talking about the effectiveness or safety of clinics. If you are dealing with subjective outcomes, where the subject is open to being influenced by someone who has a motivation for a particular outcome, then the outcomes should not be designed or measured by that someone. This is why we double-blind things.

In short, this shouldn’t be measured by the clinics at all. And yet…the clinicians and the clinical services seem to be central to this whole approach.
The clinics are marking their own homework.
To do this successfully you'd have to understand the difference between a customer satisfaction survey, a functional assessment, a service audit, and the measurement and analysis of trial outcomes.

Anyone who did know that wouldn't even attempt to roll them up into one. The purposes are at odds and the range of professional expertise required to design and utilise them is vanishingly unlikely to be found in one person.
Actually, thinking about the last few years and the hundreds of trials and papers, efficacy is basically the least significant part of the evaluation…

What they said.

And we already manage ourselves, free of charge, to the highest standards it's possible to achieve. No one else should be earning money from that, or taking the credit for it.

Very important point that needs to be taken into full account in any research hypothesis, clinical model, and medico-legal assessment.

Might also be something we can make much more use of for advocacy.
 