Rethinking the treatment of chronic fatigue syndrome—A reanalysis and evaluation of findings from a recent major trial of graded exercise and CBT

Discussion in 'Psychosomatic research - ME/CFS and Long Covid' started by Carolyn Wilshire, Feb 6, 2018.

  1. Carolyn Wilshire

    Carolyn Wilshire Senior Member (Voting Rights)

    Messages:
    103
    The actual written text was all me. But different points were raised by different authors (big contributors were Tom, Alem and David). So you can have a bit of fun guessing who raised each point.

    Tom: Insightful, sharp comments based on his wide understanding of the literature. Lots of nice references that extended and expanded on major points.
    David Tuller: "Stop sugar-coating it!" ;) Also, good comments about the researchers' justifications, and the wider political picture.
    Alem: Specific arguments based on their long history working with the data and PACE pubs. Incredibly smart guy.

    LOTS of work went on behind the scenes. Many people not listed as authors shared their thoughts and their careful research, and commented on earlier drafts. I was able to take advantage of so much expertise relating to MECFS research more generally. This really was a community-wide effort.
    This was interesting, wasn't it, Barry? Although we were only considering which comparisons reached significance and which did not. To really demonstrate that the two treatments differentially affected fatigue and physical function scores, we'd need to do some sort of test of the interaction between treatment and outcome measure.

    I might have a look into doing this (but it probably won't be significant).
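    (In case anyone is curious what I mean by that, here is a rough sketch of such a test. This is just my own illustration in Python, with completely made-up numbers, not anything from the PACE dataset: the idea is to stack standardised change scores for the two measures in long format and test the treatment-by-measure interaction term.)

    ```python
    # Purely illustrative: hypothetical data, NOT PACE data.
    # Tests whether the treatment effect differs between the two outcome
    # measures (fatigue vs physical function) via a treatment x measure
    # interaction on standardised change scores.
    import pandas as pd
    import statsmodels.formula.api as smf

    # Long format: one row per participant per outcome measure
    df = pd.DataFrame({
        "participant": [1, 1, 2, 2, 3, 3, 4, 4],
        "treatment":   ["GET", "GET", "SMC", "SMC", "GET", "GET", "SMC", "SMC"],
        "measure":     ["fatigue", "function"] * 4,
        "z_change":    [0.8, 0.3, 0.2, 0.1, 0.9, 0.4, 0.1, 0.2],
    })

    # The p-value on the interaction term is the test of interest.
    # (A real analysis would also handle the repeated measures per participant,
    # e.g. with cluster-robust standard errors; omitted here for brevity.)
    model = smf.ols("z_change ~ treatment * measure", data=df).fit()
    print(model.summary())
    ```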
    This is a good point too. Might be worth trawling through the manual again to see what I can come up with here.
    You've made it even easier for me - thanks, @strategist!
    Thanks, @Sly Saint. Very interesting. If you know the source of that quote, I'd be interested to learn it.
    To be fair, the cohort we used to determine these figures excluded those with a significant long-term medical condition.

    It's still really bad, though, isn't it?
    :laugh:!
    Thanks, @Simon M. Yes, that's a good point, thanks for mentioning. Although they may be using the term more in the sense of "we're calling it 'exploratory' to get around the fact that we didn't use the definition we set out in the protocol".
     
    Last edited: Feb 8, 2018
    janice, Forestvon, Evergreen and 27 others like this.
  2. Carolyn Wilshire

    Carolyn Wilshire Senior Member (Voting Rights)

    Messages:
    103
    I think it would be hard for the researchers to come back and say 'no, we were never planning to use the Bonferroni method', because this is the method they used in all their published papers.

    Besides, I think the reanalyses merely show that things weren't as rosy as they were made to appear, and they do that quite effectively as they are. The real fatal flaw, in my view, is in the failure of improvements to extend to objectively measured outcomes.
    Yes, but it has to be statistically significant too. And technically, neither of those percentages falls "between 2 and 3 times the improvement rate of SSMC". One is at 2 times the SMC improvement rate; the other is slightly below.
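    (For anyone who'd like the arithmetic spelled out, here's a tiny sketch in Python. The numbers are invented purely for illustration, not the trial's figures; the point is just to show what a Bonferroni-adjusted threshold and the "2 to 3 times" check look like.)

    ```python
    # Purely illustrative: invented numbers, NOT the PACE results.

    # Bonferroni: divide the overall alpha by the number of comparisons;
    # each individual p-value must then beat the adjusted threshold.
    alpha = 0.05
    n_comparisons = 6                          # e.g. several pairwise contrasts
    bonferroni_alpha = alpha / n_comparisons   # ~= 0.0083

    # "Between 2 and 3 times the SMC improvement rate" as a strict range check.
    smc_rate = 0.10    # hypothetical SMC improvement rate
    get_rate = 0.20    # hypothetical GET improvement rate (exactly 2x here)
    ratio = get_rate / smc_rate
    print(f"Bonferroni-adjusted alpha: {bonferroni_alpha:.4f}")
    print(f"Ratio = {ratio:.2f}; strictly between 2 and 3: {2 < ratio < 3}")
    ```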

    I have to go now, but will take a look at your other points later today (thanks, @Esther12, for taking the time to make them!).
     
  3. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
    Which, to me, cascades into what seems another underlying problem: the investigators' apparent dismissal of objective outcomes as having any relevance or importance. It's as if their whole mindset is along the lines of:

    We know ME/CFS is a psychological problem; psychological problems are only ever measurable subjectively, which has always been good enough to date, and always will be; objective outcomes don't feature in our world, and we can't understand what all the fuss is about.
    Is my jaded view on it.
     
  4. Esther12

    Esther12 Senior Member (Voting Rights)

    Messages:
    4,393
    Yes, I agree on the problem with subjective self-report/objective outcomes. I'm just always on the lookout for nice, simple ways to summarise the problems with their spin that don't risk getting me accused of being misleading.

    Ouch - that's so accurate and yet it feels so devious. It made me hurt.
     
  5. Samuel

    Samuel Senior Member (Voting Rights)

    Messages:
    634
    permit me a bizarre, over the top brainstorm. could it be that in some cases it is "the problem is that people are saying they are sick. we will train them not to do that. what's the fuss?"?

    thus at root not a denial of disease [ontology] or credibility [epistemic worth]. those are irrelevant when the target population has little moral worth and are burdens to society. we have seen burden [and threat] language rise in both academic papers and newspapers.

    afaik objective measures were irrelevant to the burden-to-society appeals that were used in action t4. must be careful with historical parallels. i am only suggesting that the ontological and epistemic claims [and objectivity itself] might be less relevant to mindset than we assume. @Barry
     
  6. Carolyn Wilshire

    Carolyn Wilshire Senior Member (Voting Rights)

    Messages:
    103
    This was the date of approval of the change by the Trial Steering Committee. Note how close this date was to the submission of the primary 2011 paper. And also how very far this date was from trial commencement.
    Gosh, this is good. I looked high and low for information about when and how that change was made, but never knew about this.
    That's really interesting to know.

    Some of this is understandable. There really was too much data for one paper. But it doesn't explain why they chose to report walking test results in the initial Lancet paper, and not fitness test results. The only reason for singling out the walking test here must have been that some of those results just passed the threshold for statistical significance. You will also note, reading our paper, that they never actually bothered to present statistical analyses for the objective measures that showed nothing (fitness, employment, benefits). Sometimes they noted there wasn't much difference, but sometimes they just said nothing.
    I think we were trying to be conservative. The 2013 recovery paper must have been written in early 2012 at the very latest (the first version of the paper was received by the journal in August 2012). And that must have been at least a year after the 2011 Lancet paper was submitted (there's no submission date on the Lancet paper, so we can only guess, but it was certainly before Feb 18, 2011, which was the date of actual publication).

    Thanks for picking up the typos!
    Perhaps "good" is too generous. But it did seem to us to have been better than what PwMEs commonly get in the UK in general. At least patients were given medication for pain and sleep.
    If I were a gambling woman, I'd put a lot of money on null outcomes here. If they had been positive, we would certainly have heard about them by now!
    Haha, I see what you mean! Yes, this paper was rewritten from that earlier one that was reviewed so inadequately in the BMJ. Although just about every paragraph ended up being different, a couple of phrases seem to have slipped through!

    Do you think "merely" is problematic? It's possibly something we could change at the proofs stage if it were.
     
  7. Carolyn Wilshire

    Carolyn Wilshire Senior Member (Voting Rights)

    Messages:
    103
    I think that's spot-on, @Samuel.

    To them, the whole illness is a problem with the way you think. So what needs to be done is to change the way you think.

    Again, it's a sort of begging-the-question situation: it's all fine if you believe the problem is psychological. The conclusion rests on assumptions about illness cause.
     
  8. Daisybell

    Daisybell Senior Member (Voting Rights)

    Messages:
    2,645
    Location:
    New Zealand
    But - even if that is the belief - what the data keeps showing is that even if you change what people think, it doesn’t actually change their behaviour. They aren’t more active, they don’t get back to work, they don’t use fewer resources in terms of benefits etc... so that must be seen as a failure.

    I don’t think non-complaining sick people is usually the goal, if the research is sponsored by the DWP! Fine for the health service, but not otherwise.
     
    janice, ladycatlover, ukxmrv and 11 others like this.
  9. Esther12

    Esther12 Senior Member (Voting Rights)

    Messages:
    4,393
    Thanks. I didn't know about that date.

    Yeah - and as you say in your paper, they could have released results for the outcomes in their protocol, and then also presented additional analyses. Sharpe's comment implies all those results were to be released...

    I see now. @Barry explained that I'd misunderstood that, so it seems it was a problem at my end.

    I'm sure you'd know better than I. It just stood out and amused me because that peer reviewer seemed to think that its usage marked you out as the work of the devil. When I saw you commenting under Coyne's blog I wondered then if you would do everything possible to ensure that phrase made it through to the final paper!

    Great work. Thanks for the clarifications. I think that for now I might leave it to others to start boldly asserting that the pre-specified analysis of PACE's primary outcomes shows no difference between groups, and see how they get on before I dive in too.
     
  10. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
    Also worth keeping in mind that one of the PACE participants recently observed (cannot recall where, maybe in a response to one of DT's blogs?) that even the objective measures were not really that objective, because in order to meet the trial's physical demands, they backed off from some of their other physical activities, robbing Paul to pay Peter. Trial design issues again.
     
  11. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,563
    Location:
    UK
    I had assumed that the recovery definition was basically dropped from being a secondary outcome when the stats analysis plan was approved as it failed to mention recovery. I don't think that plan ever acknowledged the changes or gave reasons but just did them. I wondered if they never got explicit approval for the actual changes.
     
  12. Daisymay

    Daisymay Senior Member (Voting Rights)

    Messages:
    686
    Quite, and the COIs of the PIs too.
     
  13. Sly Saint

    Sly Saint Senior Member (Voting Rights)

    Messages:
    9,925
    Location:
    UK
  14. Simon M

    Simon M Senior Member (Voting Rights)

    Messages:
    995
    Location:
    UK
    I'm probably going on about this unnecessarily, but...
    The recovery definition used is based around the "normal" range for the primary outcomes of fatigue and function. This normal range was explicitly labelled as post hoc in the 2011 Lancet paper.

    Now, they didn't need trial data to create the erroneous "normal" range, but I think somebody mentioned that the authors claimed it was a reviewer who insisted on using this range in the Lancet paper. If that's the case, it was surely created after data analysis - and therefore the recovery paper itself must have used a recovery definition created after sight of the data.
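    (For illustration only, and this is my own back-of-envelope reconstruction rather than anything quoted from the paper: a "normal range" of this sort is conventionally taken as the population mean plus or minus one standard deviation, which is why it can be built entirely from published population norms, with no need for trial data. The rough figures below are only there to show the arithmetic.)

    ```python
    # Illustrative sketch of a mean +/- 1 SD "normal range" threshold.
    # The figures are rough approximations of published population norms,
    # used only as an example; they are not quoted from the PACE papers.
    pop_mean_sf36_pf = 84.0   # approx. population mean, SF-36 physical function
    pop_sd_sf36_pf = 24.0     # approx. population standard deviation

    lower_bound = pop_mean_sf36_pf - pop_sd_sf36_pf   # = 60.0
    print(f"Scores of {lower_bound:g} or above count as 'within the normal range'")
    ```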

    The May 2010 date that you mentioned for trial steering group approval of changes also coincides, I think, with the date of unblinding. So presumably they will say they got the changes approved, then did the analysis.
     
    Last edited: Feb 8, 2018
  15. Guest 102

    Guest 102 Guest


    Thank you, Carolyn and others - I haven't read yet - but you are stars, all of you, for continuing to challenge the atrocity of PACE. Am so grateful.
     
    Last edited by a moderator: Feb 8, 2018
  16. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
    Very much akin to "I'd entered a race I was not good enough for, so if I'd not tripped the other bloke up I would not have won."
     
  17. Alvin

    Alvin Senior Member (Voting Rights)

    Messages:
    3,309
    I think this is the gist of many PACE-type successes: we can do more than we would by choice/experience, but we pay for it later. Just report the doing more, ignore the consequences, ignore the resulting permanent deterioration and claim it's a cure :emoji_face_palm:
    Then use that "cure" to harm more patients. :emoji_face_palm:
     
  18. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
    But in the case I was talking about, somebody had said that participants had in some cases had to stop doing some of their normal activities so they could keep up the activities required for Peter White's PACE trial. So even if some objective measures looked similar at 52 weeks to baseline, the person might in fact have been significantly worse overall. PACE was only selectively sampling physical activity change, with no real handle on the participants' overall activity change.
     
  19. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,563
    Location:
    UK
    Recovery was a secondary outcome in the protocol. As I see it, the stats plan in May 2010 replaced the protocol in terms of the analysis that was done and, if I remember correctly, contained no mention of recovery. This then allowed them to use an ad hoc recovery definition when they wrote the recovery paper. So I think they lined up two different decisions.
     
  20. Alvin

    Alvin Senior Member (Voting Rights)

    Messages:
    3,309
    That makes sense, so that makes two mechanisms: overdoing it and then paying for it, and reallocating activity.
    I admit that I have done both :ill:
     
