Independent advisory group for the full update of the Cochrane review on exercise therapy and ME/CFS (2020), led by Hilda Bastian

Discussion in '2021 Cochrane Exercise Therapy Review' started by Lucibee, Feb 13, 2020.

  1. Adam pwme

    Adam pwme Senior Member (Voting Rights)

    Messages:
    679
    Just adding this reference as it may be relevant: Students 4 Best Evidence is supported by Cochrane UK, and the blog author is on the Cochrane UK & Ireland Trainees Advisory Group (CUKI-TAG). I'm not familiar with Cochrane's org structure though.

    "In cases where blinding is not possible or feasible, the outcome measures must be objective! If you are reading a study that is un-blinded, with subjective outcome measures, then you may as well stop reading it and move on."

    https://www.students4bestevidence.net/blog/2017/06/26/blinding-comprehensive-guide-students/
     
    JaneL, 2kidswithME, Skycloud and 24 others like this.
  2. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,661
    Location:
    Canada
    Only as endpoints for definitive evidence. In a phase 1 trial to test for feasibility? No problem; it allows for cheap experimentation. But the most that can come from this is the need for more stringent evaluation. To use this evidence in real life, with consequences for millions, is clearly wrong. No drug would ever be approved on the basis of a bunch of small open-label trials where people "felt" better on some arbitrary benchmark only relevant to researchers whose starting point is denial that there is a disease process in the first place. The reasons why this is OK for alternative treatments frankly don't make sense.

    There is no opposition to running such early experiments; they just never actually pan out, precisely because they allow too much illusion of effectiveness, so they amount to very little. There is, however, a clear need to require a much higher burden of evidence for clinical advice meant to be used in real life. Tens of millions of lives will be affected by this, and the harmful consequences will be permanent for many, with no adequate medical support available to them. This is what we have now. The last few decades have made it clear just how disastrous it is to make life-altering decisions based solely on such a low level of evidence.

    But even in those loosely defined early-phase experiments, the results amount to nothing. A slight reduction on one questionnaire of "fatigue" misses out on more than 90% of the illness. No matter the sampling or methodology or advanced mathemagics, the benchmarks typically used in this type of research are simply not relevant to the real-life needs of this disease, in large part because of invalid definitions such as "fatigue" and nothing else.
     
    JaneL, Chezboo, EzzieD and 9 others like this.
  3. Hutan

    Hutan Moderator Staff Member

    Messages:
    29,374
    Location:
    Aotearoa New Zealand
    The analgesia in the first part of labour was controlled by the patient, so they may not have been using any pain relief at the time of transition. The authors seemed quite confident that the blinding worked; they ensured the medical staff did not know which treatment each woman was receiving. Look at how close the mean pain scores were in the two groups and how wide the range of scores was within each group. If the blinding had not worked well, would you not expect to see the patients in the epidural treatment reporting much lower pain than those in the saline treatment?

    Regarding the possibility of the patients getting extra analgesia - only 10 patients out of the 400 were given extra pain relief (6 in the epidural treatment and 4 in the saline treatment). The last measure of pain was made 90 minutes after the change from patient-controlled pain relief (at the start of the second stage of labour). Given that the mean time of the second stage of labour (ending in giving birth) was only around 50 minutes, perhaps this was enough.

    So, a blinded trial structure was viable in the case of epidurals in labour. There are often ways around the ethical and practical issues of blinding, of having a control treatment, or of having an objective outcome.

    I'm not arguing that epidurals are useless, or that the lack of statistical difference in pain levels between the epidural and saline treatments found in that study would have held for the mothers with long labours. But, unless trials of epidurals are blinded, you can't know how much of the reported pain relief is attributable to a placebo effect. And that's a problem, because then you can't properly weigh up the costs and benefits.

    In my view the trials of epidurals with no controls or comparison treatments are quite limited in value. Cochrane reports that the evidence for epidural pain relief when giving birth is of low quality:
     
    Last edited: Jun 9, 2020
    2kidswithME, Skycloud and Trish like this.
  4. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
    People are not objecting to the notion of subjective self-reporting per se. The problem is when subjective primary outcomes are used in an unblinded trial. It is the combination of the two.
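As a toy illustration of why that combination is the problem (a hypothetical simulation with made-up numbers, not data from any real trial), give two arms identical underlying outcomes and let the unblinded "treated" arm shift its subjective ratings only slightly; an apparent benefit appears even though the treatment does nothing:

```python
import random
import statistics

# Hypothetical simulation: the true treatment effect is zero, but unblinded
# participants who know they got "the treatment" report slightly lower fatigue.
random.seed(42)

N = 100               # participants per arm (arbitrary)
TRUE_EFFECT = 0.0     # the treatment does nothing to the underlying illness
REPORTING_BIAS = 5.0  # points of optimism purely from knowing the allocation

def fatigue_score(bias):
    # Subjective 0-100 fatigue rating: the underlying state is identical in
    # both arms; only the reporting bias differs between them.
    return max(0.0, min(100.0, random.gauss(60.0, 10.0) - TRUE_EFFECT - bias))

control = [fatigue_score(0.0) for _ in range(N)]
treated = [fatigue_score(REPORTING_BIAS) for _ in range(N)]

diff = statistics.mean(control) - statistics.mean(treated)
print(f"apparent improvement: {diff:.1f} points")  # an artefact of bias alone
```

Blinding removes the asymmetry in `REPORTING_BIAS` between arms, which is exactly why the same subjective outcome is fine in a blinded design.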
     
    JaneL, 2kidswithME, Keela Too and 5 others like this.
  5. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,175
    Location:
    London, UK
    I am disappointed Hilda @Hilda Bastian ,

    I am sorry if I get a bit abrasive when I see red but it is genuinely in the spirit of constructive dialogue and...


    The dialogue has continued a lot since I was here last but I think you are missing the point.

    Firstly, calling something absolutist is just a political slanging tool. Let's forget those. The point I was making, that unblinded trials with subjective endpoints are essentially valueless, is, as far as I know, still valid and agreed on within medicine, and not very complicated. There are mitigating situations, but pretty few.

    And I am particularly disappointed to hear the red herring about subjective outcomes being inferior. That was never IN ANY WAY IMPLIED. It is just that they are no good if your trial is unblinded.

    I cannot follow the obstetric trial discussion in detail but as far as I can see nothing has been said to change what I proposed. If a study of an epidural is unblinded then it is going to be valueless, since the only relevant outcome is subjective. We all think epidurals work but if a company produced a new epidural agent and did an unblinded trial it would be of no use. And doing blinded trials of anaesthetic injections is easy - I have done one myself and included it in a Cochrane review! There might well be a placebo response - especially if patients had never had a labour with a more traditional treatment. If the trial is blinded it will be fine. There are actually lots of ways of achieving effective blinding even when there are potential giveaways - we have discussed them here at length. The most powerful tool is a dose response study.
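The dose-response idea can be sketched the same way (again a hypothetical toy simulation, not any real study): if the "benefit" in the active arms is really reporting bias, it will be roughly the same at every dose, and the flat dose-response curve gives the game away.

```python
import random
import statistics

random.seed(1)

# Hypothetical dose-response check: true pharmacological effect assumed zero,
# with the same optimism in every active arm regardless of dose.
N = 80      # participants per arm (arbitrary)
BIAS = 5.0  # reporting bias from believing you got an active treatment

def reported_relief(dose_effect, bias):
    # Subjective pain-relief rating; the true dose effect here is zero.
    return random.gauss(30.0, 8.0) + dose_effect + bias

arms = {dose: [reported_relief(0.0, BIAS if dose > 0 else 0.0) for _ in range(N)]
        for dose in (0, 1, 2, 4)}

means = {dose: statistics.mean(scores) for dose, scores in arms.items()}

# Every active dose looks "better" than placebo by about the same amount:
# a flat curve across doses is the signature of bias, not pharmacology.
for dose in sorted(means):
    print(dose, round(means[dose], 1))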

    I am actually staggered that someone like yourself should not accept that unblinded trials with subjective outcomes are valueless in principle - at least until someone has found a very good reason why an exception can be made. I am not accusing you of having a vested interest - it is just that you would be the first one without that I have encountered. I am baffled because I cannot quite see why you think my statement is unreasonable. It is not as if it just applies to trials. It applies in an immunology lab or in a physics lab counting neutrinos. If you know what answer you 'ought to get' on each occasion and the measure is subjective you are stuffed. It is the first thing a PhD student in the lab gets taught - 'blind your samples'. Everyone cheats doing experiments whether they realise it or not. The most famous enunciation comes from the physicist Richard Feynman - you are the easiest person to fool.

    Where exactly is my argument unreasonable? What are the real examples that go against it if there are any?
     
    Last edited: Jun 9, 2020
    JaneL, 2kidswithME, Skycloud and 24 others like this.
  6. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
    Yes. The thing you are most likely to believe is the thing you most want to believe.
     
  7. Snowdrop

    Snowdrop Senior Member (Voting Rights)

    Messages:
    2,134
    Location:
    Canada
    Some views are more relevant and valid than others.

    This is a thread on the update of the exercise review and so there have been many good points made with regard to methodological issues.

    I'm not going to say anything about that. What I will say is that the people who did the research being reviewed in this instance have no expertise with which to discuss this illness in the first place.

    I see the expertise of BPS people in this illness as equivalent to claiming the kind of expertise in cancer that replaces the need for an oncologist. They are set up as primary treatment providers simply by virtue of no other useful treatment being available. Yet their expertise, being cognitive and behavioural, amounts to de facto treating it as mental health. And as long as that has been the case, we have not seen proper research into any other explanation for our being ill.

    If I understand things correctly, part of this exercise will see research reviews moved out of the mental health area. This, to me, might prove to be the most useful thing that could be argued for to change the current landscape of rather epically misplaced expertise.

    As a postscript -- though I understand that organisations run on processes, I think there are times when finding consensus is neither relevant nor useful. Reality is what it is regardless.
     
    Woolie, MEMarge, EzzieD and 7 others like this.
  8. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,175
    Location:
    London, UK
    I guess the point is: is it unreasonable for something to be a standard part of the discourse if it is valid? We seem to be living in a world at present where ignorance, or ignoring, of the most basic principles of medical practice is blustered through as 'following the science'.
     
    Mithriel, 2kidswithME, JemPD and 2 others like this.
  9. spinoza577

    spinoza577 Senior Member (Voting Rights)

    Messages:
    455
    Yes, the problem with PACE and studies like that is even bigger than their being prone to bias from a placebo effect.

    The danger is that they rather measure the hope which they hold out to the patients.

    One may see that such a design is open to certain forms of sadism, to charlatanry, and to all kinds of short-sightedness.


    Although one can say (probably in agreement with @Hilda Bastian) that such a design can produce a reliable outcome, there is no way to ensure that it actually does so.

    As long as not everyone shouts hurrah for the rest of their life, it's nothing more than a sort of cunning fantasy.
     
    Last edited: Jun 9, 2020
  10. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
    That is absolutely spot on. Pretenders with influence.
     
    2kidswithME, MEMarge, JemPD and 7 others like this.
  11. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,175
    Location:
    London, UK
    The other thing I have said elsewhere (sorry to be repetitive) is that history is interesting here. Trials of exercise, like PACE et al, were done with physios when rheumatology was 'physical medicine' in the 1970s. Maybe part of the problem is that people are too young to remember. By the mid-eighties rheumatologists wanted to be proper doctors with reliable evidence. When exercise trials were done better they mostly showed nothing much. But it proved impossible to get physios to build a reliable evidence base for most of their work. I had the job of trying at one point. I had the clear impression the task was too threatening. By 1990 few if any trials of exercise were done in rheumatology.

    It seems that with the advent of 'talking therapies' by about 2000 there was a movement to re-invent the off-centre wheel of unblinded trials of therapist-delivered treatments, and restart trials of a sort that had been abandoned earlier for good reason. I get the impression that clinical trial centres now get a lot of business from such trials - maybe a particularly British Empire phenomenon with NHS-styled systems.

    Maybe I should repeat the point I think I made in my Health Psychol article: although it might be possible to mitigate the problems of unblinded trials of therapist-delivered treatments with subjective endpoints, many seem to have been run in such a way as to make things as bad as they could be.
     
    JaneL, Mithriel, 2kidswithME and 14 others like this.
  12. Willow

    Willow Established Member (Voting Rights)

    Messages:
    87
    Location:
    Midwest, USA
    Couldn't agree more.

    I am always so disheartened to see over and over again research on ME or CFS published in the "Journal of Psychosomatics" and the like. What does this say to the world, including medical professionals, about this disease?!? It gives the entirely wrong message and only serves to reinforce and further entrench the wrong beliefs, attitudes, and stigma about ME/CFS that have endured for way too long.

    Moving ME/CFS out of the mental health area, to me, is such an important and necessary step, and the start of turning this whole picture around. When will this move to another area take place? How much longer do we have to wait? Sadly, we have already waited far too long. For heaven's sake, what is Cochrane waiting for?

    Edited to add that the exact name of the Cochrane group ME/CFS currently resides in is Common Mental Disorders Review Group. So, according to them, ME/CFS is a common mental disorder. Need I say more?
     
    Last edited: Jun 9, 2020
    JaneL, Mithriel, 2kidswithME and 11 others like this.
  13. dave30th

    dave30th Senior Member (Voting Rights)

    Messages:
    2,447
    The protocol didn't include most of the objective measures, so they weren't in the review. I pointed this out in my conversation with David Tovey--that it was unacceptable to publish a review and just arbitrarily decide to discard all the objective criteria. Had they included all the failed objective outcomes in the review, it is self-evident that it would have led to a downgrading of the assessment. They also excluded the two-year long-term outcomes, which for some of the included trials showed that any benefits wore off.

    In other words, the criteria for the review worked very well to exclude data that would have required a downgrading of the assessment.
     
    JaneL, JohnM, Mithriel and 29 others like this.
  14. dave30th

    dave30th Senior Member (Voting Rights)

    Messages:
    2,447
    The problem is when a feasibility study is transformed into a full trial, with outcomes swapped and nothing disclosed, as the University of Bristol investigators did in BMJ's Lightning Process study. And the other problem is when a fully powered trial is published as if it were always intended to be a feasibility study, as the Norwegian group just did in another BMJ study.

    https://www.virology.ws/2020/05/31/...rics-open-about-that-cbt-music-therapy-study/

    My colleagues at Berkeley and elsewhere are shocked every single time by what they see in these ME/CFS studies. They don't understand why BMJ, the Lancet, etc. are publishing studies that violate so many core principles, and why they refuse to acknowledge the problems when they are pointed out--or publish 3,000-word corrections to pretzel their way out of retractions, as they did with the Lightning Process study. That's why I can get so many dozens of outraged scientists and academics to sign letters. Patients have every reason to question the robustness and integrity of any study coming from investigators who adhere to the CBT/GET paradigm.
     
    Last edited: Jun 9, 2020
    JaneL, JohnM, Mithriel and 29 others like this.
  15. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
    That's like students writing the exam paper. Surely a crucial part of Cochrane's responsibility is to independently assess the protocol, not just the trial's adherence to the protocol? If the protocol is flawed then a trial that adheres to it is going to be flawed ... but it may pass with flying colours. Amazing. It's like me writing software that reduces engine power the further you press the throttle down, because that is how the requirement specification was written. And the Q.A. department then saying all is good because the software adheres to the specification. As an engineer this all just seems so ... tragic.
     
    2kidswithME, MEMarge, Trish and 7 others like this.
  16. dave30th

    dave30th Senior Member (Voting Rights)

    Messages:
    2,447
    Just to clarify--the protocol here was Cochrane's protocol for the exercise review. It did not include the objective outcomes of the studies included. So the problem there was Cochrane's own protocol, not other studies' protocols.
     
    JaneL, Mithriel, 2kidswithME and 16 others like this.
  17. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
    Ah. So the buck stops with the Q.A. department's validation procedures.

    ETA to clarify: My implication being Cochrane are roughly the equivalent of a Quality Assurance department.
     
    Last edited: Jun 10, 2020
    MEMarge, Kitty and spinoza577 like this.
  18. Hilda Bastian

    Hilda Bastian Guest

    Messages:
    181
    Location:
    Australia
    Unfortunately, Cochrane quality control of what it publishes even in Cochrane reviews isn't all it needs to be. I don't think it's necessarily better in its blogs or the blogs of organizations it supports. (Nor should it censor people's opinions.)
     
    Amw66, Trish and Adam pwme like this.
  19. Hutan

    Hutan Moderator Staff Member

    Messages:
    29,374
    Location:
    Aotearoa New Zealand
    @Hilda Bastian, sorry to keep labouring the point about unblinded treatments + subjective end points not being suitable for assessing treatment efficacy, but it is so fundamental to the problems the ME/CFS community and many others have had with the BPS proponents, and so fundamental to the usefulness of the coming review. And it looks as though we haven't yet convinced you.

    Maybe you can give us an example of a treatment efficacy trial that had unblinded treatments and subjective end points where the conclusion about treatment efficacy could not be legitimately questioned as being partly (and even significantly) distorted by factors not strictly related to the treatment? That might help us identify under what, if any, circumstances the combination is ok and what circumstances it is not.

    There was a proposal for research recently, for a commercial and expensive intervention that had virtually no scientific evidence underpinning it. The intervention had been talked about a lot by the regional patient support group - there was a lot of excitement about it. The trial structure was unblinded, with a Treatment As Usual control, and with a change in fatigue as the primary outcome (from a survey administered before and after a month of treatment). Can we agree that that trial would be unreasonably biased in favour of the intervention and so the outcome would be worthless?
     
    Last edited: Jun 10, 2020
    2kidswithME, MEMarge, rvallee and 9 others like this.
  20. Hilda Bastian

    Hilda Bastian Guest

    Messages:
    181
    Location:
    Australia
    I didn't think of that word as emotive, but happy to replace with another and won't keep using it. Global? All-encompassing? I totally agree that derogatory words are a barrier to discussion, and I look forward to not being on the receiving end of slanging, even though I have a far higher tolerance of it than most.

    I realize you hold the position adamantly, and nothing I say will change that, but just for anyone who's seeing this and not reading the whole thread, this is my position: an entire trial is not necessarily valueless even if the data from one, or some, of its outcomes is biased. (I explain that here.) Nor are subjective endpoints always valueless, even on effects, and sometimes there is no endpoint more valid than a patient-reported outcome.


    I did not at any time imply that's what you were saying, and I understood the point: that's why every example I gave was of an intervention I didn't believe could be blinded. And I acknowledged I had made a mistake with the epidural example, in that it turns out there have been studies that attempted to blind epidurals - see my "OK boomer" reply.

    It's not the case that the only relevant outcome of a trial of epidural analgesia in labor for pain relief is pain. Even leaving aside other possible maternal outcomes (e.g. whether it increases the risk of forceps or cesarean delivery, and the potential harms of epidurals to women), the objective impact on the newborn of drugs given in labor is clearly critical. If the first studies had found marginal pain relief and major newborn harm, epidural analgesia would have been dead in the water.

    I think I've explained that as well as I can. The real examples against the argument that an entire trial is worthless if it's unblinded and it includes even one subjective endpoint? For starters, that includes every trial of surgery that didn't have a sham surgery arm (which is almost every trial of surgery ever) if it included any patient-reported or clinician-reported outcome: so if they also measured quality of life or asked about pain, no other data from that trial has any value at all - not length of the surgery, not blood loss, not mortality....all because people were asked to rate their pain. That is literally what the statement here in this comment means.

    Of course I believe that it can be wrong to place great weight on a highly biased outcome that's not fit-for-purpose: but that is a different issue to whether every unblinded/unblindable trial that ever asks patients anything should be relegated to a garbage heap. It's possible to both agree that certain outcomes in certain trials are totally invalid and that there are trials that provide good evidence on questions, even if they include certain types of outcomes among the many they measure.

    I've now spent a considerable amount of time trying to make the point that it's possible for competent people of goodwill to disagree on an issue put forward as incontrovertible. I think I've done it about as well as I can without spending days on it. I hope some people have seen what I was trying to show. It goes to a larger point - it's worth understanding why people disagree.
     
    Last edited: Jun 10, 2020
    spinoza577 and Medfeb like this.
