Response: Sharpe, Goldsmith and Chalder fail to restore confidence in the PACE trial findings

Tom Kindlon

This open access rejoinder officially came out today. I thought I would highlight it in its own thread.

https://bmcpsychology.biomedcentral.com/articles/10.1186/s40359-019-0296-x

Response: Sharpe, Goldsmith and Chalder fail to restore confidence in the PACE trial findings

  • Carolyn E. Wilshire
  • Tom Kindlon
BMC Psychology 2019, 7:19
https://doi.org/10.1186/s40359-019-0296-x

Abstract
In a recent paper, we argued that the conclusions of the PACE trial of chronic fatigue syndrome are problematic because the pre-registered protocol was not adhered to. We showed that when the originally specified outcomes and analyses are used, the evidence for the effectiveness of CBT and graded exercise therapy is weak. In a companion paper to this article, Sharpe, Goldsmith and Chalder dismiss the concerns we raised and maintain that the original conclusions are robust. In this rejoinder, we clarify one misconception in their commentary, and address seven additional arguments they raise in defence of their conclusions. We conclude that none of these arguments is sufficient to justify digressing from the pre-registered trial protocol. Specifically, the PACE authors view the trial protocol as a preliminary plan, subject to honing and improvement as time progresses, whereas we view it as a contract that should not be broken except in extremely unusual circumstances. While the arguments presented by Sharpe and colleagues inspire some interesting reflections on the scientific process, they fail to restore confidence in the PACE trial’s conclusions.
 
Brutal response. I'd throw in the towel if I were them.
 
Argument 1: That the changes to outcome measures were insubstantial...
So why make the changes, and then defend them so ferociously, if they are of no significance?

We prefer the definitions of recovery we used to those used by Wilshire et al. as they give absolute rates more consistent both with the literature, and with our clinical experience.
So the changes do make a significant difference? Make up your minds. :rolleyes:

The whole damn point of PACE was to test those previous results and clinical experience. The PACE authors themselves called PACE "definitive". That was the basis on which they pitched PACE to the funding and approval bodies.

Also, clinical opinion ranks lowest in the formal evidence hierarchy. It is meaningless in the context of a (supposedly) rigorous clinical trial.
 
I read it again and find myself wanting to quote loads of it here just to be able to say YES!!! each time. Since that would just produce a whole lot more for you to read, I'll just say one big

YES!!!

to the whole paper. And huge thanks to @Carolyn Wilshire and @Tom Kindlon for writing such a superb rebuttal of Sharpe's nonsense. No wonder he's so rattled he's blocking people who link to this paper on Twitter.
 
I don't see how scientists can survive saying this in the long term:

We prefer the definitions of recovery we used to those used by Wilshire et al. as they give absolute rates more consistent both with the literature, and with our clinical experience.

Nor can I. I was astonished when I read it - couldn't believe they could so blatantly make such an unscientific statement. Like a criminal admitting their crime openly, but too stupid to realise what they'd done.

I loved the Wilshire and Kindlon response to it:

Clearly, it is not appropriate to loosen the definition of recovery simply because things did not go as expected based on previous studies. Researchers need to be open to the possibility that their results may not align with previous findings, nor with their own preconceptions. That is the whole point of a trial. Otherwise, the enterprise ceases to be genuinely informative, and becomes an exercise in belief confirmation.
 
I don't see how scientists can survive saying this in the long term:

We prefer the definitions of recovery we used to those used by Wilshire et al. as they give absolute rates more consistent both with the literature, and with our clinical experience.
Simon Wessely had stated something similar in a comment under @Jrehmeyer's article in Stat News (edit: in 2016):
https://www.statnews.com/2016/09/21/chronic-fatigue-syndrome-pace-trial/comment-page-6/#comments
(page 2 of comments)

In essence though they decided they were using an overly harsh set of criteria that didn’t match what most people would consider recovery and were incongruent with previous work so they changed their minds – before a single piece of data had been looked at of course. Nothing at all wrong in that – happens in vast numbers of trials. The problem arises, as studies have shown, when these changes are not properly reported. PACE reported them properly. And indeed I happen to think the changes were right – the criteria they settled on gave results much more congruent with previous studies and indeed routine outcome measure studies of which there are many. And re analysis proves the wisdom of that to be honest. But even then, using criteria that were indeed incongruent with previous work and clinical routine outcome studies, the overall pattern remains the same.

It was really puzzling that he didn't realise how revealing such a statement was of their conception of "science".
But that was only an internet comment.
Now Sharpe et al. are making the same claim in an academic journal. How can they not see that this is an unscientific statement?
 
So, we have the perfect storm:

  • Risk of expectation bias is high in unblinded trials.
  • Risk is expected to be higher for ME, because of the nature of the relevant endpoints.
  • Risk is expected to be exceptionally high for CBT, since cognitive bias is part of the treatment.
  • Risk is expected to be off the scale if the investigators do not understand the purpose of a scientific experiment: to challenge expectations.
 