The evidence base for physiotherapy in myalgic encephalomyelitis/chronic fatigue syndrome when considering PEM: a systematic review (2020) Wormgoor

As I understand it, in most areas of medicine researchers are usually accused of defining their inclusion criteria too strictly; of selecting the patients in whom they think the treatment will work. Clinicians then usually reply that this is not a realistic representation of their clinical practice, where only a small percentage of patients would meet those criteria, so the study is not as impactful as it claims, and so on.

So when people argue that the PACE trial had broad inclusion criteria, I think most researchers from outside the field would initially interpret this as a strength, not a flaw.

They would have been able to legitimately claim that the treatment worked for at least some - that's all. And of course, if they had monitored deterioration properly, that would have come to light and the distinction would have been made. If they had done the trial properly and interpreted it properly, all would have been well.

Which brings us back to the fact that the error was to do a trial very badly, not to have broad inclusion criteria per se.

There are always risks of harm in trials. Part of the point of a trial is to make sure there is no harm. Prior to PACE there was no clear evidence of harm as far as I am aware. We still do not have reliable evidence of harm. What we have is a strong suggestion that there may be harm and that is enough to make treatment unethical. The unethical aspect of PACE was to fail very badly to adequately document deterioration.

Thank you again for discussing this, it is a vital thing for us.

To state the case very baldly: the medical profession can't believe, doesn't know about, or doesn't accept the way that patients with ME have been treated by the so-called experts. The analogy that springs to mind is the way black people complain about the justice system. All the legislation has safeguards built in, but it only works if people stick to those rules. Black people believe that the rules are not applied properly to them because of an inbuilt prejudice.

Similarly, the way that the rules are not applied to people with ME is so common as to have lost its shock value. You only have to browse the work of Dave Tuller to see that in action with the BMJ, but it has been going on for years.

Doctors do trials in MS, cancer, everything, when they are trying to find out something about the disease. ME trials are set up to show that people with ME are not really sick. This works best if you can be selective about who goes into them.

We do not like to say it, but the insurance industry does not have to pay out for psychological diseases. When the ME clinics in the UK were mostly based in the mental health sector, patients complained, but we were assured we were misinterpreting things: that was just where there was space. The next week an insurance company said ME was now classed as a mental health issue. They were forced to retract, but the advice to the DWP remained the same: "People with ME think they can't walk, but they could if they wanted to." This has now been changed to admitting that some people can't walk, but insisting that if that is the case the muscles will be wasted, something that is very rare in ME.

So everything that has been said about medical trials is true for everyone else, but not for ME, because there is a dark history and agenda behind it all. Being paranoid does not mean it isn't true :) The PACE trial was not an aberration; it was a high point in a campaign that started 50 years ago. The mistakes were not down to bad science or a failure of modern medicine; they were part of a deliberate strategy following a deliberate agenda that almost succeeded.

Simon Wessely said that his intention was to destroy the idea of ME as a disease, and he has only failed because of the work of very sick patients and now the likes of Dave Tuller and Jonathan Edwards.

I hesitate to post this because anything we say will be used to fuel the idea of us as anti-science terrorists needing the surveillance of Special Branch. Though that in itself shows how poorly the treatment of ME compares with that of other diseases.
 
Categories for inclusion in trials do not have to be specific diagnoses. They need to make some sort of sense in terms of the treatment strategy.

I think one of the many issues with PACE and the usual crowd pushing BPS is that they were so focussed on, if not biased towards, the treatment they were researching that they failed to use the many opportunities such a trial would have given them to learn more about the illness, or group of illnesses, that was targeted.

If the focus is on the treatment, as implied by the title of the PACE trial ("Pacing, graded Activity, and Cognitive behaviour therapy; a randomised Evaluation"), then going wider makes sense, as long as you thoroughly monitor and record differences between the groups and any adverse events.

We all know they took a large wooden spoon and stirred the data, entry criteria and recovery criteria until the whole thing was an "uninterpretable mess".

They also seemed to misrepresent what they were studying. They were studying a treatment, not the disease. Even so, later on, was it Sharpe who said that no, they were not studying ME but CFS? In reality they were studying neither; they weren't even studying fatigue per se, just a particular therapy for fatigue.

This stands in stark contrast to the Rituximab trial, which also investigated a treatment, yet its researchers never claimed to be researching the underlying disease. As I understand it, even though the trial found Rituximab was not a suitable therapy, it still gave some clues, because the researchers monitored not only whether the treatment was successful but also the way in which it didn't work. A treatment trial with a narrower focus but much more rigorous methodology.
 
PEM is heterogeneous anyway. There's no need to study fatigue as such, i.e. the Oxford approach.
Judging by several comments I noticed from MS patients on the Daily Mail article about GET for LC fatigue, I don't think studying fatigue in any generic way makes sense. They seem to share the very same frustration with poor evidence fabricated using the same junk processes of EBM. The frustration they express mirrors our own; this problem is universal in medicine.

The main problem is that the underlying assumptions are assumptions about healthy people, made by healthy people. They don't consider the context, which is that they are dealing with sick people, because they approach the problem as healthy people themselves, and the experience of chronic illness is simply too alien, especially to people who believe the whole narrative about magical psychology causing every symptom they want it to. And as we know, physicians cannot actually tell the difference between sick and healthy people on most aspects of illness. If this were different it could work, but clearly they can't.

The more I think of it, the more it seems that healthy people studying sick people simply doesn't work, not all by itself. Which makes sense: it would be as if chefs were non-eaters who cannot taste and never deigned to listen to what their clients told them. That's simply not a valid method of understanding the problem.

Medicine-by-physicians-for-physicians works very well for objective problems. For subjective problems it is self-defeating: it doesn't work at all, breaks down at the very first step and only breaks down further from there. This failure has been ongoing for decades and is still heading in the worst direction. I can't think of better evidence that the entire premise is flawed, that the profession is simply incapable of integrating the subjective experience of illness, lacks the necessary skills, and cannot overcome this limit without being forced from above to let go of this obviously flawed approach. Too much baggage. So much damn baggage.

Hence, a specialty dedicated to chronic illness is absolutely necessary, one that functions very differently from the rest of medicine, which can benefit from a machine that goes BEEP or any other piece of technology that can measure things. Without measurement, it's simply not science. Even the social sciences measure things; here there is no measurement, hence no science is possible, leading to total failure in outcomes.
 
Involving those with chronic illness in ME research is essential, as DECODE ME are doing, but I can't help feeling that a "speciality that functions very differently" is what the BPS team built with their MUS/IAPT monstrosity.

The exposure of the faults of PACE has surely shown us that science is possible, and science remains our best defence against vendors of snake-oil: it was through logic and good use of scientific principles that researchers were able to show that PACE did not show what its authors claimed it showed, and in fact showed the opposite.

I'd like the management of multisystem chronic illnesses to be scientifically grounded, conducting sensibly designed studies and publishing in a transparent way. But especially after what has happened to us, doctors working in this field must display exemplary ethics: they should routinely incorporate patient groups in research and treatment, share decision-making between doctor and patient, and develop Yellow Card notification systems for all kinds of interventions, not just medications.
 
30 years ago, it was probably perfectly good science to do studies of drugs aiming to control 'breast cancer', using women with 'breast cancer' as participants. If a drug improved overall outcomes, then the statistics suggested that women had better odds with the drug. It was useful information, although not perfect information. It's likely that some of the treatments found useful for 'breast cancer' actually had net negative effects for some individuals.

Now, a lot more is known about the types of breast cancer. I expect trials of treatments now often aim to identify and select participants with a particular gene, or cancer cells that respond in a particular way to a hormone. It is good science to make use of what is known about a disease when designing a treatment trial for it.

In the case of PACE, the researchers thought they had treatments that would help people with very broadly defined 'CFS'. I guess the argument about whether or not it was good science to use the Oxford criteria to select participants comes down to opinions about whether the researchers made adequate use of what was known about ME/CFS when designing the trial. Certainly, though, their selection criteria were not their worst mistake.
 
On the "speciality that functions very differently": I meant different in just about the opposite way to what the BPS folks have been doing ;)

It's basically a good starting point for an inverted blueprint: do the exact opposite of the BPS model at every turn. Trust the patients, involve them at every level, and do real science (which means measuring things, not ratings and guesses). Definitely none of that brute-force-but-always-the-same-thing approach that is standard in EBM (imagine a brute-force password cracker that only ever tries the same password... seriously, WTH?). BPS is basically anti-science, so it's definitely a guide, but a guide to what not to do.
 
Did they in fact make any 'mistakes', though?

It all seems pretty deliberate and premeditated to me.

They designed it, ran it, and when the results did not agree with what they wanted, they used several different 'methods' to make them look 'good'.

And then they refused to release any data which could be used to cast doubt on what they were saying.

All the while trumpeting on about their wonderful results, making statements that even their own published 'adjusted' paper didn't support.

Even once the data were obtained, analysed and rebuttals published, they still fought back, or simply ignored them.

Doesn't sound like any definition of a 'mistake' that I have ever come across.
 
They classify definitions as chronic fatigue (no PEM), CFS (PEM optional) and ME (PEM and other symptoms required). Oxford was classified as a chronic fatigue definition, the London criteria as a CFS definition, and the rest as you might expect, although the interpretation that NICE requires PEM is incorrect IMO (NICE considers PEM's symptom exacerbation to be optional).

But they rate study methodology much more highly than would appear to be warranted: 15 of the 19 studies were rated high quality, including PACE, and 4 fair quality.

In the paper, the NICE criteria were assigned to the 'optional PEM' group; see table 4. According to the 2007 NICE guidelines, fatigue with 'post-exertional malaise and/or fatigue' was one of the features that needed to be present. However, all other core symptoms were optional.
 
"A self-management program including eight biweekly meetings of 2.5 hours duration. The control group received usual care."

Biweekly, so twice a week or every two weeks?

If twice a week I can see this treatment easily being counterproductive on the basis of the exertion alone.

Here, biweekly means 'every other week'.
 
Yes, you are right: they listed NICE in the optional PEM group. I assume they did that because, while the 2007 NICE guidelines said 'post-exertional malaise and/or fatigue', they also said that the post-exertional exacerbation of symptoms is optional. So NICE seems to have defined PEM in a non-standard way, since post-exertional exacerbation is by definition core to PEM.
 
The main problem is that the underlying assumptions are assumptions about healthy people, made by healthy people.

A bit off the main theme, but I can't help myself.
The quote from rvallee's post is, to me, the essential central prejudice from which flow many of the biases inherent in BPS research.

When people become ill, they do not respond to illness in the way that healthy people would. Duh (directed at BPS expertise).
 