Independent advisory group for the full update of the Cochrane review on exercise therapy and ME/CFS (2020), led by Hilda Bastian

Discussion in '2021 Cochrane Exercise Therapy Review' started by Lucibee, Feb 13, 2020.

  1. Hoopoe

    Hoopoe Senior Member (Voting Rights)

    Messages:
    5,424
    When I read this, I remembered that Karl Morten has collected metabolomics data on patients doing GET and he said there was a decline in energy metabolism. He showed some of this data at a conference and I understand it will eventually be published. Maybe @Andy knows more.

    I remember this was not an adequately controlled study, so it wouldn't be cast-iron evidence, but it completely aligns with the reports of patients deteriorating after GET. It would also be unethical to design a study solely for the purpose of observing the harm caused by a treatment.

    Edit: these were long term effects, not something short lived.

    Edit 2:

    From the CMRC 2017 conference.
     
    Last edited: Jun 8, 2020
    NelliePledge, Sean, Medfeb and 7 others like this.
  2. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
    Which is ironic, because these reviews of theirs are in fact oh-so empty ... they just have a lot of words in them.
     
    MEMarge, 2kidswithME, obeat and 9 others like this.
  3. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,563
    Location:
    UK
    For each trial whose data is included, how much investigation is done into how the trial was run and exactly what was measured? For example, if they were to use the 6-minute walking test from PACE, would they do enough work to find that the way it was administered was flawed - the corridor patients walked down was not long enough - and hence that the results are not comparable? I think that came out later than the main paper! So would the trial data be rejected?
     
    MEMarge, Amw66, Hoopoe and 4 others like this.
  4. Sly Saint

    Sly Saint Senior Member (Voting Rights)

    Messages:
    9,925
    Location:
    UK
    "
    How have selection bias and disease misclassification undermined the validity of myalgic encephalomyelitis/chronic fatigue syndrome studies? is a paper that compares the selection criteria of the Oxford criteria (OC) and the Canadian Consensus Criteria (CCC) noting that for every 15 patients selected under the Oxford criteria there are 14 false positives when compared to CCC.[1]"


    ""When studies using the broad Oxford criteria (Sharpe et al., 1991) were excluded, a virtual disappearance of effect for graded exercise therapy (GET), cognitive behaviour therapy (CBT) and other psychological therapies recommended by the NICE guidelines (National Institute for Health and Care Excellence (NICE), 2007) was revealed. ""
     
  5. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,175
    Location:
    London, UK
    There weren't any studies in the version of the review I refereed that credibly passed as eligible by the criterion of 'controlled trial'.
     
  6. dave30th

    dave30th Senior Member (Voting Rights)

    Messages:
    2,447
    Yes, there definitely was some substitution of activities in PACE, so people might have walked but then didn't do their laundry.
     
  7. Hilda Bastian

    Hilda Bastian Guest

    Messages:
    181
    Location:
    Australia

    I think it's more than an either/or choice about how high or low the bar is set for entry. When a wider group of studies is included, choices follow about how they will be critiqued and analyzed, including potentially pre-specifying sensitivity analyses - analyses that examine the impact on the results of studies with particular characteristics. Sly Saint quoted an example of that from the AHRQ review above; a minimal sketch of the mechanics follows.
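
    Here is a minimal sketch of what such a pre-specified sensitivity analysis looks like in code (all trial names, effect sizes and variances below are invented for illustration; a real review would use random-effects models and proper software): the pooled effect is recomputed after dropping studies with a flagged characteristic, such as recruitment under the Oxford criteria.

        # Minimal sketch of a sensitivity analysis in a meta-analysis.
        # All numbers are invented for illustration only.
        studies = [
            # (name, effect_size, variance, used_oxford_criteria)
            ("Trial A", 0.45, 0.04, True),
            ("Trial B", 0.30, 0.06, True),
            ("Trial C", 0.05, 0.05, False),
            ("Trial D", -0.02, 0.07, False),
        ]

        def pooled_effect(rows):
            # Fixed-effect inverse-variance pooling.
            weights = [1.0 / v for (_, _, v, _) in rows]
            effects = [e for (_, e, _, _) in rows]
            return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

        print(f"All studies:            {pooled_effect(studies):+.2f}")
        print(f"Oxford trials excluded: {pooled_effect([s for s in studies if not s[3]]):+.2f}")

    With invented numbers like these, an apparent overall benefit (+0.22) all but vanishes (+0.02) once the Oxford-criteria trials are set aside - the same pattern the AHRQ addendum reported.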


    The inclusion criteria for this particular review will be the subject of really critical and important discussion in the coming months. It's complicated, and it's critical for credibility, too. Thought experiment: imagine there was a trial that you thought was the pivotal and most important trial of a new drug for ME/CFS, and it showed important benefit. And imagine a systematic review had elaborate, highly specific criteria that excluded it, and concluded there was "no evidence" the drug benefited people with ME/CFS. What would you think?

    We don't know how often Cochrane reviews have no included studies because of inclusion criteria that are stricter than, say, including only studies with control groups, or randomized control groups - as opposed to when it's simply because a study question hasn't been addressed by any controlled study at all. Those "empty" reviews are common - nearly 9% in 2010. Nor do we know precisely how common "empty" reviews are in other journals, or in the review portfolios of other review organizations like health technology assessment agencies. One study of a sample of reviews from journals from February 2014, for example, found it was 7% of Cochrane and 1% of non-Cochrane reviews. I don't know, off the top of my head, of a study on this in health technology assessment agencies, or of one on how often particular questions within reviews - even the primary ones - come up empty: I suspect that figure would be very high.

    I don't agree that "empty" reviews are seen as an editorial embarrassment in Cochrane: since the vast majority of editorial groups have published them, that's unlikely. Many people feel strongly that it's important to ask questions, whether or not the studies currently exist to answer them, and hope that it might stimulate good studies. On the other hand, given constrained resources, it's now common to give more priority to questions where there are known to be studies needing review. So presumably they will become less common because of that.
     
  8. Snow Leopard

    Snow Leopard Senior Member (Voting Rights)

    Messages:
    3,860
    Location:
    Australia
    If the "pivotal" study was actually excluded because it was of low quality, eg unblinded comparison groups, did not use objective outcome measures and the primary outcome measures had never been tested for relevance to patients (by asking them!), then I'd suggest that the review was correct in excluding the study due to poor quality.
     
    Woolie, JaneL, ladycatlover and 17 others like this.
  9. Hilda Bastian

    Hilda Bastian Guest

    Messages:
    181
    Location:
    Australia
    Say it did all that very well, but the reason it was excluded was that it had 498 people in it and the inclusion criteria stipulated a minimum of 500 participants.
     
    Kitty likes this.
  10. Snow Leopard

    Snow Leopard Senior Member (Voting Rights)

    Messages:
    3,860
    Location:
    Australia
    Isn't that kind of arbitrary? I mean, any highly specific criterion is going to need sufficient scientific justification, and I cannot think of any reasonable justification for excluding a study with 498 participants by requiring 500+.
     
    Woolie, JaneL, Kitty and 12 others like this.
  11. Hutan

    Hutan Moderator Staff Member

    Messages:
    29,374
    Location:
    Aotearoa New Zealand
    Yes, we are just talking about trying to eliminate studies that don't have any scientific validity, i.e. that have issues which mean the findings don't tell us anything useful about the question we are trying to answer.

    So, for ME/CFS at least, a study with an unblinded treatment, a waiting-list control and subjective outcomes (or an objective outcome measured over too short a time period) can't tell us if the treatment is helpful. I don't think we want to see 'elaborate, highly specific criteria', just criteria that ensure that the studies' findings are relevant to the question of whether the treatment is useful for people with ME/CFS.
     
  12. Hilda Bastian

    Hilda Bastian Guest

    Messages:
    181
    Location:
    Australia
    You can make a case for all sorts of things, and it's done all the time for individuals in trials and studies in systematic reviews. People can disagree about what's reasonable justification about almost anything (indeed, could probably quite safely delete the "almost" there). There's often nowhere near as much consensus about many methodological issues as people might expect once they are themselves convinced about something. The point I was trying to make was not about the evidence on this particular question, or what this particular review should do. It was that one of the issues to consider here is that predetermining a review's outcome through a long list of inclusion/exclusion criteria can be a shortcut to a review that is easy to discredit as irretrievably biased.
     
  13. Hilda Bastian

    Hilda Bastian Guest

    Messages:
    181
    Location:
    Australia
    Others argue that a range of other kinds of studies (e.g. ones without control groups) about the potential harm of exercise for people with ME/CFS, and the views of people with ME/CFS themselves, should carry weight in addressing questions about its effects - including people's self-reports about their wellbeing and the impact on their lives. Many people think these questions have simple answers; many don't.
     
    Kitty, JemPD, andypants and 2 others like this.
  14. Hutan

    Hutan Moderator Staff Member

    Messages:
    29,374
    Location:
    Aotearoa New Zealand
    For sure, the lack of formal evidence makes addressing the question of harms complicated. The precautionary principle would suggest that when all major ME/CFS patient support and advocacy organisations worldwide express concern about harms from a particular treatment, there should be pause for thought.

    But the question of harm is moot if there is no evidence to support the utility of a treatment in the first place. And when a treatment involves cognitive manipulation (as CBT, GET and the Lightning Process do - convincing the patient that their perception of reality is incorrect and that they have failed morally if they don't improve), then a survey about well-being at the end of treatment is not valid evidence.
     
    JaneL, ladycatlover, Chezboo and 19 others like this.
  15. Hoopoe

    Hoopoe Senior Member (Voting Rights)

    Messages:
    5,424
    Widespread reports of harms need to be taken seriously even if they aren't obtained from a clinical trial.

    Clinical trials cannot be assumed to accurately replicate what happens in clinical practice. The investigators may also be hiding the harm in ways that cannot be discovered with published information.

    People are also generally highly biased towards reporting positive treatment effects. So when they report an unwanted and unexpected effect, it is more credible than a report of the expected and desired effect.
     
    JaneL, ladycatlover, Kitty and 15 others like this.
  16. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,175
    Location:
    London, UK
    I agree that detailed discussion is useful. On this forum and a predecessor such discussion has kept people engaged for five years, and new issues are constantly arising. But the bottom line is not complicated. Open-label trials with subjective primary endpoints (or switched endpoints) are valueless. Every trainee physician learns that and understands why it is so. We know from the phase 2 rituximab follow-up study that there is a huge potential placebo effect on subjective outcomes, and it could have been predicted. So 'control' has to mean 'placebo control', and 'treatment as usual' cannot be that. A comparator with the same level of 'new treatment' credibility for both patient and therapist is the minimum requirement.

    So on those two grounds all of these studies are unable to provide usable evidence of efficacy. This is not a long list of inclusion/exclusion criteria. It does not include anything arbitrary like needing 500 patients. It is just what every competent physician knows to be the basics of trial design. I have asked hundreds of people about this now, and it remains a simple truth that only people with vested interests - in particular treatments, professional status, methodological research or whatever - show any signs of disagreement.
     
    JaneL, Chezboo, Skycloud and 26 others like this.
  17. Trish

    Trish Moderator Staff Member

    Messages:
    55,414
    Location:
    UK
    A thought experiment: what if Cochrane were to specify that reviews should have a section listing excluded trials, particularly large and influential ones (e.g. PACE), and explaining why they don't meet basic criteria for clinical trials and are therefore worthless?

    Rather than pretending they don't exist, or excluding them because they don't fit new protocol criteria, such trials need to be demonstrated to be worthless and reasons given.

    If they are simply dropped from reviews, their authors can go on claiming they provide useful evidence for a subgroup of patients, and continue to use them to prop up things like IAPT, and continue to suck up research funding from bodies like the UK NIHR for more such studies.

    If an influential review body like Cochrane explicitly demonstrated they are of no value and such trials should not be funded in future, and will not be included in Cochrane reviews, that would go a long way to helping us, I think.
     
  18. Caroline Struthers

    Caroline Struthers Senior Member (Voting Rights)

    Messages:
    966
    Location:
    Oxford UK
    Yes!
    "...give priority to questions where there are known to be studies needing review"
    This is exactly my point. Who are these people who "know" there are studies needing review? I thought the review question, prioritised by the needs of patients, was supposed to drive the search for studies, not the knowledge that there are "studies out there" which need reviewing. Reviews can be empty (or nearly empty) because the included trials which aim to answer an important question don't have much, or any, valuable data in them - as Jonathan Edwards has said more than once: "the bottom line is not complicated. Open label trials with subjective primary endpoints (or switched endpoints) are valueless". I think there is some data from objective measures squirrelled away in PACE, and that may have some value.
     
    Skycloud, Kitty, MSEsperanza and 11 others like this.
  19. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,563
    Location:
    UK
    I think there is a basic question around whether data quality should exclude trials. Quality problems can take multiple forms: the measures may be too subject to bias (such as subjective outcomes in an open-label trial), or the measurement systems may simply not be reliable, and so on.

    Another question that comes to mind when doing a meta-analysis is whether the data sets are comparable (i.e. is the data each trial reports captured in a similar enough way to be compared with data from the others?). This can be a real issue for data science in general, and it takes work to dig into the data. I'm not sure what this means in terms of inclusion, as a trial may be fine in itself, just not in a way that makes its results comparable and combinable with those of other trials.

    A further data issue is around the measures being used and the statistics performed on them. I think, for example, that it is not valid to quote a mean for the SF-36 scale, since it doesn't seem to have linear properties. I assume that if a decent statistician actually looked at the scale they would notice this and use L1 norms (the median) instead (I'm not sure what this means for a meta-analysis). But a protocol needs to look at the measures and how they will be processed, and it needs to get that right! A toy illustration of the mean/median point is sketched below.
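
    Here is a toy sketch of that point, using invented scores rather than real SF-36 data: on a bounded, skewed scale the mean can sit above most of the actual respondents, while the median tracks the typical one.

        # Toy illustration with invented scores on a 0-100 bounded scale.
        # With a skewed, floor-heavy distribution the mean is pulled away
        # from the typical respondent; the median (an L1-optimal summary) is not.
        from statistics import mean, median

        scores = [5, 10, 10, 15, 15, 20, 25, 30, 75, 90]  # hypothetical patients
        print(f"mean:   {mean(scores):.1f}")    # 29.5 - above 7 of the 10 scores
        print(f"median: {median(scores):.1f}")  # 17.5 - the middle of the sample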

    You can't just go combining data from different sources without a detailed understanding of the properties of the data - how it is collected, the likely errors, the distributions, etc. - yet this is what we saw in the previous Cochrane review.
     
    JaneL, Skycloud, Kitty and 11 others like this.
  20. Trish

    Trish Moderator Staff Member

    Messages:
    55,414
    Location:
    UK
    I haven't come across the concept of an empty review before. Am I right that empty reviews are used to justify funding of more research? If that were to happen for this review, I think there would be a problem on ethical grounds.

    If a really well-run trial were carried out that ensured participants stuck to the graded exercise program, that would be unethical, as we know from multiple patient surveys that GET makes pwME sicker - not just as a temporary side effect for a few days, but as long-term worsening.
     
