PACE trial TSC and TMG minutes released

Discussion in 'Psychosomatic news - ME/CFS and Long Covid' started by JohnTheJack, Mar 23, 2018.

  1. Robert 1973

    Robert 1973 Senior Member (Voting Rights)

    Messages:
    1,554
    Location:
    UK
    Is this a question that could be put to the PIs (or SW)? – i.e. at which meeting(s) were the protocol changes approved by the TSC?
     
  2. Stewart

    Stewart Senior Member (Voting Rights)

    Messages:
    238
    Or "Given that you've claimed that all deviations from the pre-(ahem)-trial protocol were correctly approved by the TSC, can you explain why the TSC minutes do not record any instances of such discussions taking place or such decisions being taken?"
     
  3. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,563
    Location:
    UK
    Also from that FoI

One of the reasons I think it is important is to understand the reasons given for the changes that were made. The ones published in the Lancet aren't sufficient to justify the change in my view, so this leaves me wondering whether there were more reasons, or whether they could get away with anything.
     
  4. Sasha

    Sasha Senior Member (Voting Rights)

    Messages:
    4,006
    Location:
    UK
I'm not sure I understand your point, @Joel. A graph showing step-test performance was published in the Chalder et al. mediation paper (though not even the summary numbers underlying it have been released; they were refused in response to an FOI request).

But you know that, so what am I missing?

    http://www.thelancet.com/journals/lanpsy/article/PIIS2215-0366(14)00069-8/abstract

    Are you talking about a different kind of data?
     
    Luther Blissett likes this.
  5. Robert 1973

    Robert 1973 Senior Member (Voting Rights)

    Messages:
    1,554
    Location:
    UK
    I’ve tweeted MS about TSC approval for protocol changes. (Other tweets included for interest, and to highlight the ad hominem attack that came in response to a request for criticism beyond the ad hominem.)

    https://twitter.com/user/status/977316766832578561


    https://twitter.com/user/status/977680457541345280


    https://twitter.com/user/status/978275023319334912


    [Edited thread and reposted tweet to read TSC instead of STG. I’ve also now emailed Carol Monaghan to ask if she can make any formal enquiries.]
     
    Last edited: Mar 26, 2018
  6. Sarah

    Sarah Senior Member (Voting Rights)

    Messages:
    1,510
    Hi. I'm not sure if it has been picked up that The Lancet made the front page of The Times with 'Back pain treatment is useless' in the 22 March edition, the same edition that covered Wilshire et al. on page 2. The article covers a three-paper series published in The Lancet on 21 March. The series includes a comment co-authored by Richard Horton. The abstracts indicate a BPS perspective; free access with log-in.

    [Attached image: IMG_20180326_170228.jpg]

    http://www.thelancet.com/series/low-back-pain

    From the Foster paper:

    'Recommendations include use of a biopsychosocial framework to guide management with initial non-pharmacological treatment, including education that supports self-management and resumption of normal activities and exercise, and psychological programmes for those with persistent symptoms.'

    From the Buchbinder paper:

    'Develop and implement positive strategies for primary prevention of disabling low back pain that are integrated with strategies for preventing other chronic conditions (physical activity, maintenance of healthy weight, mental health)'
     
  7. Indigophoton

    Indigophoton Senior Member (Voting Rights)

    Messages:
    849
    Location:
    UK
  8. Joel

    Joel Senior Member (Voting Rights)

    Messages:
    941
    Location:
    UK
    I did mean the step test, but I thought a bit more about it and decided it's a non-issue, because they did appropriately consider AEs from the trial regardless of whether the step test was part of the treatment or not.
     
    Luther Blissett and Sasha like this.
  9. Woolie

    Woolie Senior Member

    Messages:
    2,922
    Thanks all. It is so bizarre to see them still making major decisions as to how to analyse the data four years after they first started collecting it, and almost as long since they published their trial protocol.

    I wonder why? Bad planning? Or were they motivated by the results they were receiving?
     
  10. Snow Leopard

    Snow Leopard Senior Member (Voting Rights)

    Messages:
    3,860
    Location:
    Australia
    It is quite clear they knew their results were going to be crap (reports from therapists and the FINE trial having null results) and that is why they reduced the analysis thresholds to what is basically the minimum possible change on questionnaires.

    There is no other reason why they would suddenly decide to radically change their analysis strategy when they had already acquired 99-100% of the data.
     
  11. Woolie

    Woolie Senior Member

    Messages:
    2,922
    Would it be a digression to ask you about the airline safety example, @Lucibee? Afraid I'm ignorant about it.
     
  12. Esther12

    Esther12 Senior Member (Voting Rights)

    Messages:
    4,393
    I remember some on-line comment from someone who said that they were a PACE participant, and that they had stopped conducting the step test towards the end of the trial as they were getting too many bad reactions. It's always worth being cautious about what one reads on the internet, but these minutes indicate that there was concern about the step test. I wonder if there was a sharp decline in participation rates towards the end of the trial.... guess we'd need access to the secret data to check.
     
  13. Amw66

    Amw66 Senior Member (Voting Rights)

    Messages:
    6,769
    Perhaps hedging bets until the FINE data was available, to enable fine tuning?
     
  14. Amw66

    Amw66 Senior Member (Voting Rights)

    Messages:
    6,769
    Didn't read full thread before posting.
     
    Luther Blissett likes this.
  15. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,175
    Location:
    London, UK
    I agree that it is hardly credible that a trial overseen by the MRC clinical trials team should be doing this in 2009. When I was involved in setting up a trial of this size in 2000 it was absolutely clear to us that we had to define our primary end point before starting anything. The importance of predefined primary endpoints was understood at least by the mid 1990s.

    So I guess the question is whether the MRC team were so asleep that they did not sort this at the beginning, or whether they did, but in the face of incoming data that was going to show nothing, they tried a salvage operation. The latter seems more plausible.

    But we are still left with the fact that anyone who knew about trials at this date knew that what was being done would have made the study unpublishable if it was dealing with a commercial drug. The suspicion has to be that the MRC somehow felt that because this was a therapist-delivered treatment within the healthcare system that pragmatic healthcare policy somehow justified abandoning scientific methods.

    -------------

    I have been thinking that the Horton radio quote tells us much more than it might seem.

    'Richard Horton: ... it was eagerly awaited. ... the investigators stepped back ... to do an experiment comparing conventional treatments for chronic fatigue, cognitive behavioural therapy ... against a treatment ... endorsed by ... patient community but very sceptically received by the more scientific community ... adaptive pacing therapy. So they were ... comparing two philosophies, not just two treatments, two philosophies of what chronic fatigue syndrome was.

    ... Yeah, I mean adaptive pacing therapy essentially believes that chronic fatigue is an organic disease ... Whereas cognitive behaviour therapy obviously believes (sic) that chronic fatigue is ... reversible and these two philosophies are kind of facing off against one another in the patient community and what these scientists were trying to do is to say well, let’s see, which one is right.'


    So it seems clear that from the triallists' point of view (not what was said to AfME) this was a trial intending to highlight the difference between CBT/GET and the patients' favourite, pacing. I think this has important implications for something we do not know - how was adaptive pacing presented to patients by therapists? So far we can assume that it did not have the positive message 'this will make you better'. But if the therapists were aware that the trial was intended to test whether the sort of treatments doctors might like to pay them to do worked (so they would still have a job), and the sort the patients liked did not, then they would have a very powerful motive indeed to downplay any chance of improvement with adaptive pacing. The inherent element of belief-in-the-therapy that Wessely and Chalder 1989 described as delivered with 'cognitive strategies' would be expected to be mirrored by a proactive no-belief-in-the-therapy for adaptive pacing. In the unblinded context the therapists are not only free to tell the patients what to say at assessment but are highly motivated by what Horton says was the trial philosophy to get them to say the right thing.

    It seems to me we now have enough information to propose that there is a formal independent enquiry into the trial. Since the concern relates to what is supposed to be the highest arbiter within science, the MRC, the enquiry would presumably have to be parliamentary. It looks as if there is at least one MP, Carol Monaghan, who is interested in exploring the problem. But the level of discussion has to rise above what we have seen so far. Independent experts in trial design, probably from abroad, should be brought in to give evidence. The issue is not simply one of the PACE trial. It is how the MRC trials staff oversaw such incompetence in a publicly funded trial.
     
  16. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,563
    Location:
    UK
    It's not just the trial as handled by the MRC but also how they handled issues afterwards. They dismissed complaints, including about the changing of endpoints, saying it would have little effect. They also acted as a witness in the information tribunal to try to suppress access to the data (perhaps embarrassed at what might come out and at their lack of governance).

    I wonder about the TSC. From the minutes it doesn't seem to be playing much of a governance role, but more of an advisory role with little follow-up on advice.
     
  17. Marco

    Marco Senior Member (Voting Rights)

    Messages:
    277
    Simon Wessely once referred to PACE as "a thing of beauty" and said that "HMS PACE did make it successfully across the Atlantic" (the latter being an interesting phrase in itself):

    https://www.nationalelfservice.net/...syndrome-choppy-seas-but-a-prosperous-voyage/

    I can see why he might say that. What better than a large publicly funded study that ticked all the boxes in countering existing criticism of their preferred therapies while showing up the only other existing (and patient supported) therapy as ineffective if not counter-productive.

    Large scale compared to previous underpowered studies; clear evidence (sic) of the superiority of CBT/GET; adaptive pacing (sic) shown to be useless; no harms (sic) associated with GET despite patients' complaints etc. Hence no remaining barriers to the roll out of CBT/GET as the approved mainstream treatment of choice and oh so cost-effective.

    What's not to like?
     
    Last edited: Mar 27, 2018
  18. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,563
    Location:
    UK
    I've been wondering about this. Data scientists often talk about the need to have subject matter experts who understand the data in order to get good analysis results. In this case I think that would mean having someone who understands the nature of the measurement instruments and hence how they can be used in analysis and what they mean. So for example, knowing that the SF36 gives perceived rather than actual physical ability. Perhaps knowing the underlying properties of the questionnaires in terms of whether they are interval scales and the way data and errors are likely to be distributed. Also understanding the impact of having two different (and sometimes contradictory) marking schemes for one scale.

    It seems to me important that these things are understood. Unfortunately, I'm not sure the PACE team did understand them.

    [Added]

    I think there are some subtleties around data collection that would be useful to know, such as how reliable the data collection was. Some of this is hinted at in the minutes - for example, issues with the step test. Other pieces came out in their responses to letters, such as the way they did the walking test (using a short corridor). All these things should, of course, be documented with the data. They are hard to remember, and not documenting them means they could get lost. Analysis should also take account of any likely additional errors (or unreliability) associated with data collection.
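
    The point about two contradictory marking schemes for one scale is concrete: the Chalder fatigue questionnaire can be scored either with Likert scoring (0-3 per item) or bimodal scoring (each item collapsed to 0/1), and the two schemes can disagree about whether anything changed at all. A minimal sketch (illustrative only, not PACE code; the patient data below is made up):

    ```python
    # Illustrative sketch only (not PACE code). The Chalder fatigue
    # questionnaire has 11 items, each answered on a 4-point response
    # coded 0-3. Two scoring schemes are in use:
    #   Likert:  sum the raw 0-3 responses         -> range 0-33
    #   bimodal: collapse 0,1 -> 0 and 2,3 -> 1    -> range 0-11

    def likert_score(responses):
        """Sum raw 0-3 item responses (range 0-33)."""
        return sum(responses)

    def bimodal_score(responses):
        """Collapse each item to 0/1 before summing (range 0-11)."""
        return sum(1 if r >= 2 else 0 for r in responses)

    # Hypothetical patient: every item eases from 3 ("much more than
    # usual") to 2 ("more than usual") over the trial.
    before = [3] * 11
    after = [2] * 11

    print(likert_score(before), likert_score(after))    # 33 22: 11-point improvement
    print(bimodal_score(before), bimodal_score(after))  # 11 11: no change recorded
    ```

    So which scheme is applied (and what threshold is set on it) can decide whether a given patient counts as improved, which is exactly why the scoring scheme needs to be fixed and documented alongside the data rather than chosen after the fact.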
     
    Last edited: Mar 27, 2018
  19. Sasha

    Sasha Senior Member (Voting Rights)

    Messages:
    4,006
    Location:
    UK
    Is this true, though? What about all that work by Ben Goldacre's team (irony, irony) on outcome switching? It seems to be being done on a huge scale (and they're looking specifically at trials published in the top five medical journals):

    http://compare-trials.org/

    I'm assuming a lot of those trials will be drug trials.

    But we do know something about that. We know what was in the manuals. I find it odd that people often focus on what was in that notorious 'trial newsletter' that bigged up CBT and GET but not APT. Only some of the patients got that newsletter, whereas every single patient in the CBT, GET and APT groups was given a manual about their therapy. As Wilshire says, in the CBT and GET manuals, patients were told that those therapies were 'powerful' and 'effective' and that there was no reason why they shouldn't recover. The APT manual, in contrast, opens with this 'abandon hope' statement:

    The basic underlying concept of adaptive pacing is that a person can adapt to CFS/ME but that there is a limited amount that they can do to change it, other than provide the right conditions for natural recovery.

    The therapists themselves were also given manuals about each of the therapies, and IIRC, some effort was made to make sure that they were sticking to delivering the therapies consistently (or have I just made that up in my head?).

    The manuals are all here (you'll need to scroll down a bit).
     
  20. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,563
    Location:
    UK
    I think someone had done some analysis showing that the drug companies had cleaned up their act quite well, but that universities were less likely to follow good methodology. The last thing drug companies want is for a drug not to be approved because they have not followed the rules. They probably think more about the measures up front now.
     
