Correspondence: The PACE trial of treatments for chronic fatigue syndrome: a response to Wilshire et al. (2019), by Sharpe, Goldsmith & Chalder

Cheshire

Chronic Fatigue Syndrome (CFS) is a chronic disabling illness characterized by severe disabling fatigue, typically made worse by exertion. Myalgic Encephalomyelitis (ME) is thought by some to be the same disorder (then referred to as CFS/ME) and by others to be different. There is an urgent need to find effective treatments for CFS. The UK Medical Research Council PACE trial published in 2011 compared available treatments and concluded that, when added to specialist medical care, cognitive behaviour therapy and graded exercise therapy were more effective in improving both fatigue and physical function in participants with CFS than both adaptive pacing therapy and specialised medical care alone. In this paper, we respond to the methodological criticisms of the trial and a reanalysis of the trial data reported by Wilshire et al. We conclude that neither the criticisms nor the reanalysis offer any convincing reason to change the conclusions of the PACE trial.

Open Access
Article here
 
I can't face reading this at present, will need to build up to it.

What is interesting is the timing of this article, indicating that the PACE apologists are currently orchestrating a planned and coordinated defence of the indefensible. Also, it may be good that they have published it now, as it should provide an opportunity to demonstrate that they continue to fail to respond to substantive criticism.
 
We disagree with this proposition. Whilst participant rated outcomes do potentially pose a risk of bias for all trials testing the effect of unblindable treatments, we do not agree that this is a convincing alternative explanation for the PACE trial findings. This is because: First, participants did not just give global ratings at the end of treatment, they answered specific questions about fatigue and function, as long as six months after therapy was completed, making a transient placebo type effect very unlikely.

This explanation makes ZERO sense. Like, what on earth does this even mean?
 
No Peter White...

EDIT: the journal received the first manuscript in July 2018, so the bulk of this was probably written almost a year ago.

ANOTHER EDIT: This journal has open peer review, so normally you can see what reviewers said about the manuscript. There isn't much information on the website, though, except for a short comment by one of the reviewers, Francesco Pagnini. It doesn't explain why the PACE authors had to resubmit their manuscript three times...
https://bmcpsychology.biomedcentral.com/articles/10.1186/s40359-019-0288-x/open-peer-review
 
Just to quote one of their feeble justifications:
We prefer the definitions of recovery we used to those used by Wilshire et al. as they give absolute rates more consistent both with the literature, and with our clinical experience. We also note that, even in Wilshire et al.’s analysis, the relative rate of recovery with CBT and GET was approximately twice that with APT and SMC alone

So they admit that they think their recovery rates are better because they 'prefer' their definition. And even more damning, they think their results are right because they fit with 'the literature' and 'our clinical experience'.
That sounds like Wessely's ship analogy of changing course to make sure you reach the correct destination.

And they don't seem to understand that if a difference is not statistically significant, you can't use that non-significant difference to justify anything.
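To put some numbers on that point: with recovery counts as low as those in the reanalysis, a relative rate of about two can easily be non-significant. A minimal Python sketch with invented counts (not the actual PACE data):

```python
# Hypothetical counts chosen only to illustrate the point: an apparent
# doubling of the recovery rate that a significance test cannot support.
from scipy.stats import fisher_exact

recovered_a, n_a = 8, 160   # invented "CBT-like" arm
recovered_b, n_b = 4, 160   # invented "SMC-like" arm

table = [[recovered_a, n_a - recovered_a],
         [recovered_b, n_b - recovered_b]]

risk_ratio = (recovered_a / n_a) / (recovered_b / n_b)
_, p_value = fisher_exact(table)            # two-sided by default

print(f"relative rate = {risk_ratio:.1f}")  # 2.0: "approximately twice"
print(f"Fisher exact p = {p_value:.2f}")    # ~0.39: nowhere near 0.05
```

With so few recoverers in either arm, "twice the rate" is well within what chance alone produces.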

I haven't read it all, but it's not a scientific analysis; it's pages of self-justification of the indefensible on very shaky and unscientific grounds.
 
This explanation makes ZERO sense. Like, what on earth does this even mean?

It means they are in denial about the fact that the findings reported in PACE are entirely consistent with placebo effects. It could also mean that they're just trying to win the debate by producing a lot of bullshit hoping that it will confuse and convince naive readers.
 
Wilshire et al. still found that both CBT and GET were statistically superior to their only comparison treatment (SMC, which they choose to refer to as 'control'). However, they then abolished this statistical significance by applying an excessive Bonferroni correction for multiple testing (multiplying the required p values by five or six, rather than by a more appropriate two)

As far as I know, they did this because it was planned in the original protocol. The PACE authors appear to consider this "excessive" because it doesn't give the results they were expecting (the original protocol is very optimistic about finding large treatment effects that would easily withstand such corrections).
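The arithmetic of the correction itself is trivial; the dispute is only over the divisor. A quick sketch with a hypothetical uncorrected p value, just to show how the choice between a factor of two and the protocol's five or six flips the verdict:

```python
# Bonferroni correction: compare p against alpha divided by the number of
# comparisons. The p value below is invented, purely for illustration.
def bonferroni_significant(p, n_tests, alpha=0.05):
    """Return True if p survives a Bonferroni correction for n_tests."""
    return p < alpha / n_tests

p = 0.02  # a hypothetical uncorrected p value for one treatment comparison

for n_tests in (1, 2, 6):
    print(f"{n_tests} tests: significant = {bonferroni_significant(p, n_tests)}")
# 1 tests: significant = True   (uncorrected)
# 2 tests: significant = True   (the factor the PACE authors now prefer)
# 6 tests: significant = False  (the protocol-style correction)
```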
 
So they admit that they think their recovery rates are better because they 'prefer' their definition. And even more damning, they think their results are right because they fit with 'the literature' and 'our clinical experience'.
That sounds like Wessely's ship analogy of changing course to make sure you reach the correct destination.

It's also revealing that they do not describe in any detail how they defined recovery, presumably because if they did, anyone with some familiarity with the scales used would immediately see that there is a problem:
So here's recovery as originally defined (left) versus how it was defined in the publications (right)


[Image: recovery.jpg, comparing the protocol definition of recovery with the definition used in the publications]

A score of 60 on the SF-36 physical function scale is typical of an 80-year-old, or of younger people with severe chronic illnesses such as multiple sclerosis or heart failure.
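To make concrete what a threshold of 60 means, here's a sketch of the usual 0-100 scoring of the ten SF-36 physical function items (assuming the standard RAND scoring, where each item is answered 1 = "limited a lot", 2 = "limited a little", 3 = "not limited at all"):

```python
# SF-36 physical function (PF) scoring, assuming the usual RAND
# transformation: ten items, each scored 1-3, rescaled to 0-100.
def sf36_pf_score(responses):
    """Transform ten raw item responses (each 1-3) to the 0-100 PF scale."""
    assert len(responses) == 10 and all(r in (1, 2, 3) for r in responses)
    raw = sum(responses)           # raw sum ranges from 10 to 30
    return (raw - 10) / 20 * 100   # linear rescaling to 0-100

# "Limited a little" in eight of the ten activities, unlimited in two:
print(sf36_pf_score([2, 2, 2, 2, 2, 2, 2, 2, 3, 3]))  # 60.0
```

So a participant still reporting being "limited a little" in eight of the ten activities scores exactly 60 and could count as "recovered" on that criterion.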

When the PACE authors write that
We prefer the definitions of recovery we used to those used by Wilshire et al. as they give absolute rates more consistent both with the literature, and with our clinical experience

I can only wonder what that says about the literature and about their clinical experience. Also, researchers aren't supposed to fit their results to previous findings or to their clinical experience. They're supposed to find out the truth.
 
So they admit that they think their recovery rates are better because they 'prefer' their definition. And even more damning, they think their results are right because they fit with 'the literature' and 'our clinical experience'.
That sounds like Wessely's ship analogy of changing course to make sure you reach the correct destination.


We should remember that the definition used by Wilshire et al. is the PACE authors' own definition, as they published it in the trial protocol. To claim they prefer their later definition because it fits their expectations does suggest they manipulated the definition until they got the desired result.
 
We conclude that neither the criticisms nor the reanalysis offer any convincing reason to change the conclusions of the PACE trial.

Let's just look at their conclusions again:
Interpretation: CBT and GET can safely be added to SMC to moderately improve outcomes for chronic fatigue syndrome, but APT is not an effective addition.

Strictly speaking, the addition of CBT and GET did moderately improve the outcomes they used, whichever analysis you use. Whether that is "safe" depends on how you define "safe" (they don't say how they define it, and mention it only once in the entire rebuttal).

But the conclusions weren't really the problem. Apart from a few niggles (the use of a difference of means rather than a difference of medians), the analysis itself wasn't the main issue either.

It was much more fundamental than that. It was the methods they used. It was the outcome measures themselves (not just the cut-offs). It was their patient selection criteria. It was their prior assumptions about the condition. It was their dismissal of all objective measures. It was the fact that there was no longer any difference at long-term follow-up...

They still haven't addressed any of those things.
 
Reviewer's report:
The authors have submitted an opinion paper that challenges the results of a re-analysis of the PACE data. I am aware of the importance of this debate, in the general field of psychology, and I believe that, rather than facing as a reviewer the content itself, the paper deserves publication as a part of the ongoing discussion.

Therefore, I will not enter into the merit of the discussion [...]

https://static-content.springer.com/openpeerreview/art:10.1186/s40359-019-0288-x/40359_2019_288_ReviewerReport_V0_R1.pdf
https://bmcpsychology.biomedcentral.com/articles/10.1186/s40359-019-0288-x/open-peer-review
 
This journal has open peer review, so normally you can see what reviewers said about the manuscript. There isn't much information on the website, though, except for a short comment by one of the reviewers, Francesco Pagnini. It doesn't explain why the PACE authors had to resubmit their manuscript three times...
https://bmcpsychology.biomedcentral.com/articles/10.1186/s40359-019-0288-x/open-peer-review

Three resubmissions is quite usual, I think, even just for clearer wording, typos and formatting.
 
But the conclusions weren't really the problem. Apart from a few niggles (the use of a difference of means rather than a difference of medians), the analysis itself wasn't the main issue either.

I'm not sure that a difference of medians would be valid for the CFQ.

I do think there is a huge statistical naivety around using the mean and SD with the SF-36, as I see no way that the scores are equidistant on any definition of physical function. I do think this is an important issue and one that needs to be aired, as it seems such a common problem in so much research.
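For anyone who wants to see the distinction in practice, here's a minimal sketch (invented scores, not trial data) running a mean-based t-test, which treats the 5-point steps as equal increments, alongside the rank-based Mann-Whitney U test, which assumes only that scores can be ordered:

```python
# Invented SF-36-like data: ordinal scores in steps of 5, where the steps
# need not represent equal increments of actual physical function.
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu

rng = np.random.default_rng(42)
levels = np.arange(0, 105, 5)                       # 0, 5, ..., 100
group_a = rng.choice(levels, size=80)               # invented control arm
group_b = np.clip(rng.choice(levels, size=80) + 5,  # same, shifted one step
                  0, 100)

# The t-test uses means and SDs, so it assumes interval-scaled data;
# the Mann-Whitney U test uses only the ranks of the scores.
print("mean-based t-test p: ", ttest_ind(group_a, group_b).pvalue)
print("rank-based M-W U p:  ", mannwhitneyu(group_a, group_b).pvalue)
```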

But you are right that the biggest issue is that the outcome measures are not robust. The SF-36, for example, is a measure of the perception of physical function, not of physical function itself, so you can't tell whether function or only the perception of it has changed. Since the interventions are aimed at changing perception, the experiment is not worth doing.
 