BBC: Chronic fatigue trial results 'not robust', new study says

No mention of the DWP involvement though...
I rather hope this all points to the mainstream media finally deciding the BPS narrative is fake news, and that they'd better jump ship before they get dragged down with it. I'm afraid my opinion of most newspapers is that they peddle whatever they think supports the winning - and thereby most lucrative - side; public opinion tarts, basically. The encouraging thing here is that they look to be dumping the side they have been so strongly supporting until very recently.
 
Why do they not want to bridge the gap with the community and make amends?
Because there is no bridge here they can build that leads them to safety. Any concession they make leads inevitably to further exposure of their scam.

They have made it abundantly clear that they are going down with the good ship PACE. Seems a fitting way to end their careers.

Simon Wessely gave the game away when he more or less said they had changed the outcomes so that the results "would be consistent with earlier trials".
He certainly did.
 
Comments are disabled, sadly - it would have been good to praise the journalist. But please do click through to demonstrate to the editor that people are interested in this story.

I don't usually click on Daily Mail links but you have convinced me to do so in this case. Though I don't rate my chances of convincing Paul Dacre of anything that requires compassion for other people!
 
My daughter's year head emailed yesterday (at 07.21).
She has not been at school for 2 years. Ostensibly he was emailing because A had transferred to adult services in the NHS and he was enquiring about consultant details, but he knew this happened in January.

He had seen the BBC article and perhaps the penny dropped.

I emailed back with the details, and added that a huge issue is that no one tells you that these forms of treatment can do harm. I ended with links to Virology Blog and Jane Colby's tweet.
 
I must have missed that along the way. Not just bad science... not science at all.

I don't know if that is what @Snow Leopard was referring to.
Simon Wessely's comment under @Jrehmeyer's article in StatNews:
In essence though they decided they were using an overly harsh set of criteria that didn't match what most people would consider recovery and were incongruent with previous work, so they changed their minds - before a single piece of data had been looked at, of course. Nothing at all wrong in that - happens in vast numbers of trials. The problem arises, as studies have shown, when these changes are not properly reported. PACE reported them properly. And indeed I happen to think the changes were right - the criteria they settled on gave results much more congruent with previous studies and indeed routine outcome measure studies, of which there are many. And re-analysis proves the wisdom of that, to be honest. But even then, using criteria that were indeed incongruent with previous work and clinical routine outcome studies, the overall pattern remains the same.
https://www.statnews.com/2016/09/21/chronic-fatigue-syndrome-pace-trial/comment-page-6/#comments
(page 2 of comments)
 
The study has now made it onto the SMC website -

Reanalysis of the PACE trial

http://www.sciencemediacentre.org/reanalysis-of-the-pace-trial/
Thanks for posting this, @Eagles.

Actually, it's not too bad as a defence of the PACE trial - given that it's tough to make an argument here to save it. Some of the points are actually reasonable:
The authors in this paper seem to have selected the most extreme analysis to make their point: for example by making adjustment for 6 comparisons where 3 or 5 comparisons are also described, and focusing on 52 week data only.
The first bit is true - we did use a tough method of correction. We could have used 3 comparisons - see the sketch below for how much that choice matters.

However, the second bit of that sentence is blindingly stupid - 52 weeks is the primary endpoint of the trial. It's the primary endpoint. It's the PRIMARY endpoint.
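
For anyone who wants to see what that correction actually does, here's a minimal Python sketch (the numbers are purely illustrative - nothing here is taken from the paper or the reanalysis) of how a Bonferroni threshold tightens as the number of comparisons grows:

```python
# Illustrative only: how a Bonferroni correction tightens the
# significance threshold as the number of comparisons grows.
alpha = 0.05  # conventional family-wise error rate

for n_comparisons in (3, 5, 6):
    adjusted = alpha / n_comparisons  # Bonferroni: divide alpha by the number of tests
    print(f"{n_comparisons} comparisons: p must fall below {adjusted:.4f}")

# Example: a hypothetical p-value of 0.012 survives correction for
# 3 comparisons (threshold ~0.0167) but not for 6 (threshold ~0.0083),
# which is why the choice of 3 vs 6 comparisons can flip a result.
```

With 6 comparisons the bar is roughly twice as strict as with 3, so using the larger number is the conservative choice rather than the "extreme" one.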
The authors rely heavily on p-values and thresholds for statistical significance when reporting results; this is quite an out-dated approach and information about the magnitude and precision of treatment effects is missing in most cases.
We could have presented odds ratios, and confidence intervals around them. But this isn't what you said you'd do in the protocol. We did what you originally said you'd do. That was the whole point.
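
For what it's worth, here's a minimal sketch of the kind of reporting the SMC is asking for - an odds ratio with a 95% confidence interval from a 2x2 recovery table. The counts below are invented for illustration; they are not PACE data:

```python
import math

# Hypothetical 2x2 recovery table (made-up counts, NOT data from PACE):
#                 recovered, not recovered
a, b = 20, 80   # treatment arm
c, d = 10, 90   # control arm

odds_ratio = (a * d) / (b * c)  # cross-product ratio

# 95% CI via the standard normal approximation on the log odds ratio.
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se)
upper = math.exp(math.log(odds_ratio) + 1.96 * se)

print(f"OR = {odds_ratio:.2f}, 95% CI [{lower:.2f}, {upper:.2f}]")
# -> OR = 2.25, 95% CI [0.99, 5.09]
```

The interval conveys the precision the SMC says is missing; whether that belongs in a reanalysis whose whole point was to follow the original protocol is another matter.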
The authors have made little attempt to uncover the reasons for protocol deviations in the PACE trial or the point at which they were made; trialists could have been invited to comment.
Well, I don't think that's defensible. We carefully examined every published statement made in justification of the changes. No more can be expected, especially not from such a hostile group. Look what it took to get that little data sample we worked on - imagine trying to get them to actually answer our questions!
The new paper may give the impression that all, or almost all the evidence on CBT and GET comes from the PACE study, and it says that “it seems unlikely that further research based on these treatments will yield more favourable results”. In fact, CBT and exercise therapies have been investigated in several other studies, and these have been reviewed in Cochrane reviews. The latest such Cochrane review (of exercise therapies, from 2017) includes eight studies other than PACE, and does come to positive conclusions about some aspects of effectiveness of exercise therapies.
Translation: even if our study wasn't that impressive, there's all those great other positive studies out there.

Are you referring to the previous, lower-quality, poorly controlled studies that gave inflated effects due to their lack of appropriate controls - the ones you were trying to supersede with your better-controlled "definitive" study? It's a bit disappointing to be back in 2007 again. Alas, that's the way of artefactual effects in psychology: the more you control against them, the more they disappear!

For me, I love a good academic argument. It's the stuff that moves science forward. This defence is not a bad effort, but pretty easy to counter. I wish I had a cleverer adversary. That would be much more fun.
 
Oh, I forgot this gem:
However, it is worth noting the PACE trial was planned at a time when trial registration was in its infancy and the problems with selective reporting were less well known.
I suspect that is a huge fudge. I cannot imagine that all the trial methodology principles that people like @Jonathan Edwards have been clarifying for us here have only come to be understood in the last decade or so! Although much of it has been enlightening to me in recent times, for those involved in clinical trials it must have been bread and butter for a long time. And in any case, it is still a tacit admission that PACE got it wrong.
Yes @Barry, this was a curious move to make. It is indeed an admission. What on earth do they think the published protocol was for? Just helping them think through a few ideas? It's perplexing. And however poor the state of the researchers' knowledge was at the time, they've had plenty of chances to learn the proper procedures since then, and to address the problem.

This defence "We've been doing. it like this for years" comes up quite a lot in Psychology and mental health research. "Everybody does it". It doesn't really matter how many people do it like that. The point is to fix what you do when you find out how to do it right.

For me, this is not about the PACE researchers (although I understand you all have good reason to feel quite strongly about them!). This is about bad research being used to support claims that harm people. So the researchers' excuses for not doing it right are not really of interest to me. I just want to see it done right.
 
This is an excellent article - and refreshingly includes no horseshit from the SMC.


Wonder why the comments are disabled. I looked for the article on their website under health and couldn't find it, though older articles were there.
 

That Wessely comment quoted above is how to beat him: use his own words against him.
 
Yes, that "trial registration was in its infancy" line is like still pushing conversion therapy for gay people and then just saying, "Oh, well, ten years ago everyone thought it was a mental illness, so no one saw any issue with the study design".

Or how about, "well back in 2004 no one understood leprosy so it was reasonable to design the trial in such a way that the conclusions would fit the belief that it was caused by demonic possession".

In defence of the BPS crowd, they probably are as selective, stupid and incompetent now as they were ten years ago.
 