BMJ: Pressure grows on Lancet to review “flawed” PACE trial

The information content will inevitably be prone to a degree of interpretation, I would think, no matter how scientifically it is approached. And if the teams are not rigorously scientific in their approach, then this very interpretation process is going to be subject to the authors' bias. Results are not data, but an interpretation of the data ... I think?

My old science teacher would have loved you!

A long time ago, when I was a little girl (well, a stroppy teen), science experiments had to be written up in a certain way: you had to show your raw data, plus you had to list your assumptions, then you wrote up your conclusion.

If your assumption was unproven, unfounded and not backed up by the data, it was there for all to see.

Even if your actual experiment went well, if your methodology and reporting weren't good enough, it dropped your grade.

If people were being taught this from the age of 12, when they were lucky to get 50p as pocket money, I find it absurd that they'll give people who don't understand this basic principle £5 million!
 
Although Michael Sharpe seems pretty confused about which illness PACE was actually targeting. He tells us now - expediently - it wasn't ME but CFS. Or chronic fatigue syndromes plural, even!
Just coming back to this again, I think there is of course a truth in this which everyone knows but MS and co have always 'conveniently' overlooked. But with that special BPS-brigade twist:
  1. PACE almost certainly did include participants who did not have ME, but whose outcomes were nonetheless gathered with no true distinction of whether they had ME or not.
  2. The (mis)reported outcomes of PACE have been used and abused to bolster the use of CBT-a-la-PACE and GET as treatments for ME, even though PACE did not uniquely trial ME.
  3. Muddling up illnesses within a trial, and then presenting treatments that were arrived at on a whole-cohort basis, as being suitable for a poorly identified trial subset (that itself was likely unrepresentative of the population in general), seems itself to be dreadfully amateur.
MS et al have never ever tried to set the record straight for PwME, even though they would have been best placed to do so, and even though they have (sort of) admitted the treatments they trialled were not uniquely for ME. If there is ever an inquest, this to me is a key point that should not be overlooked, when they make their stock announcement that all they ever want is the best for patients.

So yes, PACE did not really target ME, yet everything in its literature, from the bottom up, claimed it did; conflation to suit the authors, DWP, disability insurers, NHS funding, etc.
 

Nothing Wessely and co have done targets ME, that is the whole crux. Their conflation with fatigue has caused chaos and untold harm. The PACE architects would often say in the 'small print' that ME and CFS may not be the same thing (or words to that effect) but then would go ahead and conflate anyway. We were fighting this conflation long before PACE. PACE is just the disastrous outcome of conflation.
 
Technically, they say that ME is nothing more than the patients' belief that they have an illness called ME. However, contradicting themselves, they did apply the London criteria for ME (based on Ramsay) to a subset of patients, in addition to the weaker CFS criteria.

So I think we can't say it wasn't tested on ME patients, but I think we can say it probably didn't include many, if any, moderate to severe patients, by its very nature.

Whether the London criteria were applied to their already recruited, CFS-defined patients or to patients that were first diagnosed with those ME criteria might make a difference, though.
 
I may be wrong here so feel free to correct.

they did apply the London criteria for ME (based on Ramsay) to a subset of patients, in addition to the weaker CFS criteria.

I don't think they did apply the London criteria. They applied their own modified version of the London criteria and simply continued to call it the London criteria.

If memory serves Ellen Goudsmit was one of the authors of the London criteria and has tried to point this out to them.
 

Well this is their modus operandi, after all. 'Make it up as you go along.' It's entirely feasible this happened.

Either way, I think it highly unlikely that none of their patients had ME at all, even if only by accident. Regardless of who they included, however, the treatments still didn't work.
 
Just saw this on a French medical website (i don't know this site, so don't know about its quality and readership):

Pressure is mounting for the journal The Lancet to re-examine the PACE trial, "tainted by irregularities"
More than a hundred academics, patient groups, lawyers and politicians have signed an open letter calling on the journal The Lancet to commission an independent re-analysis of the trial Pacing, graded Activity, and Cognitive behaviour therapy: a randomised Evaluation (PACE), reports the British Medical Journal.

Article in French
Google translate
 

Since I was living in France as an 18 year old undergraduate - 36 years ago - when my illness first manifested I am v pleased to see this article.
 
The long-term follow-up result should be placed at the centre of any discussion about PACE, and kept there. It is the main finding from PACE, that trumps all others from the study.

Once that is accepted, debate about the rest of it becomes secondary.
The trouble with the LTFU is that there are ways of arguing away the absence of treatment effects. Because a good number of the SMC group went on to receive GET or CBT after the 52 weeks of the trial were up.

I know, I know, in the Rethinking paper, they looked at this, and couldn't find any support for the idea that "extra" CBT or GET was the problem. But still, this crowd so like to present absence of evidence as evidence of absence.

Then to make matters worse, there are the hints in the LTFU paper that the sample that completed the survey may consist largely of the "troublesome" patients (meaning, a large proportion of those participants were members of patient organisations).

Finally, there's the argument that it's a valid goal to "accelerate" improvement, getting people feeling better faster, even if they would have found their own way eventually.

Of course, I don't think any of these arguments fly. But always good to know our enemy.
 
slysaint said:
except they've changed their response to this and say it helps patients get over their 'current episode' (whatever that is!).
More rubbish. Again, how could this be measured? What's the point if it has to be repeated every time you crash? Doesn't that make it ridiculously expensive? The NHS would have to fork out multiple times a year for a treatment with no lasting benefit.
What they mean is that patients will always carry with them the psychological weaknesses that make them prone to "unexplained" illnesses. So next time they get themselves into a tizzy, there's no guaranteeing they won't go off again.

It's sometimes hard to understand their logic until you view things from their belief system.
 

I'm very worried about this little side-step into psychologisation that we see in Kerr's letter. We actually don't know if psychological stress can cause reactivation of herpes viruses, but we know for damned sure that immunological stress can (e.g. a serious new infection).

It really worries me, the way people can't seem to let go of the psychological nonsense, even when they've seen how wrong it can be.
 
Because a good number of the SMC group went on to receive GET or CBT after the 52 weeks of the trial were up.
This is such a mess... Why was this paper published at all if the results can be dismissed so easily whenever it suits someone? Either the peer review was awful, and they shouldn't have let that happen because the data are uninterpretable, or the authors are contradicting themselves each time they argue that CBT/GET were superior after all, or both.
 
I know, I know! "If the results go the way we planned, we win. If they don't we still win".
 

From my memory of the long term follow up paper, there were no between-group differences, which the authors tried to explain away on the grounds of low turnout and some of the SMC and APT groups going on to do GET or CBT. And their disgraceful spinning of the fact that, of those who did provide LTFU data, the CBT/GET groups had maintained their improved outcomes (omitting to mention that the other groups had caught up).

But what seems the most significant thing to me is that if GET/CBT are so good, why did the patients in those groups stay stuck at their low 'slightly improved' level of functioning? The logic of GET on someone who is deconditioned would be that the longer you keep doing it, the fitter and healthier you get, so all the GET group should have been fully recovered, not stuck at the 'slightly improved' stage.

There really should have been an unfit but healthy control group doing GET as well, so the stark contrast in the effectiveness of exercise on couch potatoes and pwME could be highlighted. You could easily think from the media hype around the LTFU paper that GET returns pwME to full health and fitness.

 