Independent advisory group for the full update of the Cochrane review on exercise therapy and ME/CFS (2020), led by Hilda Bastian

Discussion in '2021 Cochrane Exercise Therapy Review' started by Lucibee, Feb 13, 2020.

  1. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
    I'm clearly misunderstanding something then. Because I thought you said that even if a trial is flawed to the degree that its primary outcomes could not be relied upon (for instance due to the combination of being unblinded with highly subjective primary outcomes), it can still be possible to subsequently glean other useful information from the trial nonetheless.

    How is that not retrospective? How is it not cherry picking? (Genuine questions, in case not obvious in an online forum discussion).
     
    Louie41 and Kitty like this.
  2. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,175
    Location:
    London, UK
    This I think is the root of the problem, Hilda. (@Hilda Bastian )

    This is no good. What the systematic reviewing community has been moving towards is no good if it is flawed. I have been in the business of designing, conducting, and assessing clinical trials, and applying them to my practice, for nearly fifty years. An appeal to the authority of the systematic reviewing community does not work. You may have missed it, but in a previous post I pointed out that for reviews to trawl through secondary endpoints is to commit exactly the same crime that authors are not allowed to commit.

    You might say, but oh! the reviewers are using the right tools and doing things the right way. No good. The reason we are having this discussion is that they don't. They are as human as the authors. Moreover, the same people who write major reviews on how to judge trial quality, or chair committees on devising exactly the tools we are talking about (risk of bias), turn out to be authors on some of the very worst examples of poor studies in the ME field.

    Let me put it this way. Cochrane has lost its Michelin star. I used to assume Cochrane was as free of bias as Mr Spock of Star Trek. But various recent events have made it clear that this is not how things are. The 'systematic reviewing community' has to take what comes on TripAdvisor just like the rest of us. And it is getting things wrong.

    This is actually why lots of members here were very pleased to hear that you were getting involved. Your Absolutely Maybe is full of sensible and cutting analyses. You are clearly driven by a sense of patient rights. But the idea was to have you come in to sort out exactly what you say things are 'moving towards' - cherry picking by 'those in the know'. I gather you are to be congratulated on having recently submitted a PhD thesis. For me, the main function of a PhD is to teach a student how to recognise how much hot air their supervisor and friends produce.

    You cannot get around this basic fact - if only primary outcome measures are good enough to stand as measures of usefulness of a treatment because everything else is subject to the problem of multiple analyses then the other measures remain too unreliable to use in a review. You might say that in a meta-analysis lots of secondary measures pointing in a direction have more weight but the whole point of systematic bias is that this is not so. In systematic bias everything leans a bit the way people want. Lots of studies seeming to point in a direction tells you nothing. There is some deeply flawed thinking going on.
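    [Editor's note: the point about systematic bias can be illustrated with a toy simulation, all numbers hypothetical. Give every outcome in every trial the same small bias in the desired direction and no true effect, and most estimates will still 'point the same way'.]

```python
import random

random.seed(1)

def trial_outcomes(n_outcomes, bias, noise=1.0):
    """Simulate effect estimates for several outcomes of one trial.
    The true treatment effect is zero; every outcome shares the same
    systematic bias (e.g. response expectation on self-report scales)."""
    return [bias + random.gauss(0, noise) for _ in range(n_outcomes)]

# 20 trials, 5 subjective outcomes each: true effect 0, shared bias +0.5
trials = [trial_outcomes(5, bias=0.5) for _ in range(20)]
total = sum(len(t) for t in trials)
positive = sum(1 for t in trials for x in t if x > 0)
print(f"{positive}/{total} outcome estimates favour the treatment, "
      "even though the true effect is zero")
```

    Consistency across biased measures is exactly what a shared bias predicts, so it adds no evidential weight.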

    My original comment that unblinded trials with subjective primary outcome measures are valueless was meant as a first approximation and it is technically possible to find exceptions. But as a practical translational biologist I am only interested in the real world - where I think you will find that the statement holds very well - for two reasons.

    One is that a high proportion of medical science is badly done. Hidden out of sight are all sorts of methodological crimes, committed but never recorded (junior assessors changing the data to 'help' get the 'right' answer, discarding 'outliers', repeating analyses that 'did not seem right'...). Studies are inherently unreliable. We mitigate that by checking that the authors seem to know what they are doing. Any authors who set up an unblinded trial with a primary endpoint sufficiently subjective to be open to systematic bias in that context are not up to the job. And if that seems harsh, one only has to look at what went on with PACE and SMILE and FINE.

    In other words a trial with this design should not be taken seriously. It is not so much that it is valueless, I now realise, as potentially harmful.

    The second reason relates to the above. An unblinded trial with a primary subjective endpoint is not so much valueless as dangerous because the 'systematic reviewing community' may well come along and find all sorts of things that prove things people want to prove and thereby cause harm. Again, that is exactly where we are at present with the exercise review. And a newly published tool suggests a more lenient approach to bias!

    So the bottom line is that any new review that does not recognise the fact that the current systematic review policy is deeply flawed is going to be of no value to the patient community we both want to support. That may put you in a very difficult position, I realise. But you are such a strong advocate for patient rights that I think members here may still hope you will see the point!
     
    JaneL, Sly Saint, Simbindi and 28 others like this.
  3. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
    Exactly. Only shows they are pulling in the same direction, not that the direction is right. ME/CFS research is littered with this. You need an independent reference to establish the validity of the direction, and if that shows that all the studies are wrong, then so be it. That is what integrity and good science is supposed to be about.
    Yes!
     
    Kitty, TrixieStix, Willow and 5 others like this.
  4. spinoza577

    spinoza577 Senior Member (Voting Rights)

    Messages:
    455
    Indeed, it won't be retrospective cherry picking when the view is skewed from the very beginning. We had the point already: it's just following what is en vogue, here or there, without thinking. For ME/CFS there is not only no clear evidence; the situation is unclear all over. But people have responsibility, and people can generally be harmed, especially when there is something like PEM. Isn't this known? Does Cochrane not know that EBV is a legend in triggering the illness? Why the B strains but not the A strains of enteroviruses? Doesn't this lead to question marks when reading such trials? Is Cochrane really that naive?

    Power of judgement is not debatable ... and all depends on agreement (we had the point already), until the failure materializes through consequences, say through crashes like plane accidents or war. But face validity and common sense should be in charge, yes? If Cochrane does not want to, there is nothing to be done, of course. And the only consequence might be that in some time they will be shown to have erred.

    Technically the review may be OK, but that is not sufficient. You can also look at the single perceptions "This is a car" and "This is a tree" and conclude that everything is OK, while failing to take into consideration the single piece of knowledge "This is ice on the street."
     
  5. Hilda Bastian

    Hilda Bastian Guest

    Messages:
    181
    Location:
    Australia
    Even if the uncertainty around one or more outcomes, including the primary outcome, is so great that it can't be relied on, it does not necessarily follow that no other outcome in the trial has a lower risk of bias (for example, an objective adverse effect of surgery). I've referred a few times to a post where I've discussed this, and I'll do that again now, because I don't think I can explain what I mean better than I did there. The prospective part is developing a protocol before you start. (I wrote up my thoughts on that recently here.)
     
    Hutan, spinoza577 and Barry like this.
  6. Hoopoe

    Hoopoe Senior Member (Voting Rights)

    Messages:
    5,424
    Another reason to reject being very generous with claims of treatment efficacy is that patients are expected and sometimes pressured by society to try a treatment if there's a chance it might help.

    A review that says there is a chance a treatment might help will be used to pressure patients. The party doing the pressuring might be their family, their doctor, the insurance company, employer, and patients are often not in a good position to say no because saying no could threaten an already precarious financial situation or make it harder to obtain other medical care.
     
  7. spinoza577

    spinoza577 Senior Member (Voting Rights)

    Messages:
    455
    All in all it´s: We don´t have anything, so we take this (CBT and GET).

    For patients it can be a catastrophe, why not admit: "Sorry we don´t know?" (This is easily possible as most doctors are adroit enough, at least, in finding out if the patient tells the truth.)
     
  8. Hilda Bastian

    Hilda Bastian Guest

    Messages:
    181
    Location:
    Australia
    Disagree. Trawling would be something like subgroup analyses that had never been part of the protocol in the first place, not reporting data on a (reasonable) pre-specified analysis. A trial might only be powered for its primary endpoints, but not its secondary ones - for example, mortality might be a secondary endpoint. Whereas, mortality might be a primary outcome for a systematic review: if there isn't enough data from enough included studies, the systematic review might also be underpowered to draw any conclusions about mortality, but it might not.

    Whereas well before that organization began, I didn't think any research or any organization or any group of people could be free of bias, and I got involved in setting up the Cochrane Collaboration in part because I believed someone concerned about its potential to do harm needed to be part of its leadership.


    Disagree that every measure in a trial other than a primary one is inevitably subject to the problem of multiple analyses - nor do I agree with the apparent implication that primary endpoints aren't potentially subject to it (repeated interim analyses, for example). Some trials are even very spare in the number of endpoints they have, and either don't undertake any analysis that wasn't pre-specified, or diligently report every single additional analysis and state clearly that it wasn't in the protocol.

    Multiplicity will be an important consideration for statistician scrutiny in developing and reviewing the protocol, as well as the review.
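    [Editor's note: for readers unfamiliar with why multiplicity matters here, a quick illustrative calculation. With a conventional significance threshold of 0.05, the chance of at least one spurious 'significant' finding grows rapidly with the number of independent tests.]

```python
# Family-wise false-positive rate for k independent tests at alpha = 0.05:
# P(at least one false positive) = 1 - (1 - alpha)^k
alpha = 0.05
for k in (1, 5, 10, 20):
    fwer = 1 - (1 - alpha) ** k
    print(f"{k:2d} tests -> P(>=1 spurious significant result) = {fwer:.2f}")
```

    This is why pre-specification, and corrections such as Bonferroni, matter whenever many endpoints or analyses are in play.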

    I doubt there will be anything in this process, or any decision made along the way, that won't be disagreed with by many people: the only potential exception is that I (and some others) will be in a very difficult position! :nailbiting:

    [correction made in the minutes after posting]
     
    Last edited: Jun 13, 2020
    Hutan and spinoza577 like this.
  9. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,175
    Location:
    London, UK
    It is by definition if it is to be used as evidence to answer the same question - is the treatment useful.
    Many of your comments are technically correct within themselves but they do not impact on this central issue of using trials to ask whether a treatment is useful. Yes, there are situations where something like mortality can be a useful primary measure in a meta-analysis when it is not in individual studies. But most of these situations relate to meta-analyses concluding that treatments are not so useful after all. And I doubt they relate to unblinded trials with subjective primary outcomes.
     
  10. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,563
    Location:
    UK
    If a trial is designed with bad primary outcomes, that should lead to basic questions being asked about the competence of those running the trial. If they can't select sufficiently good primary outcomes, are they capable of running other aspects of the trial? Hence I would say that if a trial has primary outcomes that are not sufficient to be relied upon, a very detailed analysis needs to be done on any other data used, and reading the paper may not be sufficient for that.

    It may be that other outcomes are reliable as outcomes but they are only reliable if they are correctly used and the overall trial is run correctly. If some evidence suggests that the trial design was not of an acceptable standard then this suggests the other outcomes may have been influenced by bad practice. For instance it emerged that the 6mwt data taken in PACE was based on a wrongly run test.

    Also, real care needs to be taken with drop-outs where the measurement of outcomes may influence the illness - for example, exercise testing with ME. That suggests the missing data are not missing at random. Couple that with indicators of bad trial practice (such as an inability to select good primary outcomes) and we cannot dismiss biases; any stats need to take this into account.

    Giving an example where things may have worked doesn't in any way mitigate the issues of what can happen.
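    [Editor's note: the point about non-random dropout can be sketched in a toy simulation, with entirely hypothetical numbers. If patients who deteriorate are more likely to drop out, the completers' average looks like improvement even when the treatment does nothing on average.]

```python
import random

random.seed(2)

# True change scores centred on zero: the treatment does nothing on average
changes = [random.gauss(0, 10) for _ in range(200)]

# Dropout depends on the outcome itself: the worse a patient gets,
# the more likely they are to stop attending (missing NOT at random)
completers = [c for c in changes if random.random() > max(0.0, -c / 20)]

mean_all = sum(changes) / len(changes)
mean_completers = sum(completers) / len(completers)
print(f"mean change, all patients: {mean_all:+.2f}")
print(f"mean change, completers:   {mean_completers:+.2f} (biased upward)")
```

    Analysing completers only then overstates benefit; that is why the missing-data mechanism has to be modelled, not assumed away.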
     
    Simbindi, 2kidswithME, Hutan and 14 others like this.
  11. Hilda Bastian

    Hilda Bastian Guest

    Messages:
    181
    Location:
    Australia
    Disagree. Trialists giving undue weight to secondary endpoints is a problem, but that's not by definition because of multiplicity. Whereas you can have multiplicity problems with primary endpoints across trials with multiple treatment arms, etc.
     
  12. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
    Been pondering this.

    When an independent observer reads the final written-up report of a trial, there is only so much they can glean, even though that should be a lot. If that observer were then magically able to go back in time and be a fly-on-the-wall observer of that trial, they would doubtless realise there was more to be gleaned than was fathomable from the report alone. There will be all sorts of things that researchers don't have to report, and thanks to human fallibilities, will be glad they don't have to!

    So when an independent reviewer reads trial reports, there are inevitably some things the reviewer has to take on trust, and make decisions about the trustworthiness of. I would think the default position might be that all is OK, unless warning flags start to emerge as the report is read and analysed. There will, I'm sure, be some kind of threshold here, because nothing is perfect. But I can imagine two thresholds: one where significant danger signs become evident, and a further one where a decision has to be made to just pull the plug on that trial, because the risks are too great that even the stuff that seems OK from the reporting very probably is not OK. By definition authors cannot report on their own unconscious biases, nor how those biases may have influenced some bits of their study, or of its reporting, more than others.

    The point is that flaws discoverable from one area of a trial's report must inevitably indicate potential untrustworthiness in other areas, no matter how superficially reliable those areas seem. Suppose I have a survey done on a house I'm looking to buy, and it reports a crack in the wall but assures me it is benign and not a problem. So I ask a highly trusted, competent, experienced surveyor friend of mine to read the report, and they tell me it does not stack up: from what is in the report, the conclusion that the crack is benign simply cannot be right, and the report cannot be trusted. I double-check whether there might just be a typo, but my friend tells me that from the way the report is written that is extremely unlikely. They are not going to then advise me (I sincerely hope!) that the rest of the items in the survey can still be trusted, and only the crack need be worried about. No, the huge clanger casts doubt on the whole report, and on the competency (or lack of it) underlying the report - the things you would otherwise only know from being a fly on the wall.

    And when it comes to a graph of flaws found versus overall trial trustworthiness, that is most certainly not going to be straightforward and linear. Even if only the number of flaws were factored in, I think two flaws would more than double the overall untrustworthiness. But the severity of flaws will also be a major factor, again non-linear against trustworthiness I think. I'd love to be able to pin this down to numbers but I can't; I am sure, though, that this is a very significant issue.

    I suspect the overall weighting of flaws will incorporate not just the type of flaw, but also the areas of the trial the flaw applies to. A major flaw would be more heavily weighted. A flaw in a major aspect of the trial would be more heavily weighted. A major flaw in determining the primary outcomes of a trial would, to me, end up way past the final threshold, indicating that nothing should be relied upon, no matter how apparently viable it had been reported.
     
    Last edited: Jun 13, 2020
    2kidswithME, Amw66, rvallee and 3 others like this.
  13. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
    Sums up my very long post in a nutshell.
     
    2kidswithME, Sean, rvallee and 2 others like this.
  14. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,563
    Location:
    UK
    What methods are they using to deal with uncertainty from a mathematical perspective?

    [Adding additional comments]

    The methods are of course critical, in that any underlying assumptions (since uncertainty is mentioned) of known distributions, normally distributed data or errors, linearity of measures, independence of measures, or missing data will have a huge impact on any results.
     
    Last edited: Jun 13, 2020
    JohnM and Trish like this.
  15. Mithriel

    Mithriel Senior Member (Voting Rights)

    Messages:
    2,816
    ME was described by Dr Melvin Ramsay as a disease where exercise made symptoms worse. He said there was "an abnormal response to exercise". When the BPS faction came from nowhere, redefined it as Chronic Fatigue Syndrome and declared that the best treatment was exercise, he said that if someone gets better with exercise, that shows they did not have ME.

    This is a simple fact. ME is at least a syndrome in which exertion causes a worsening of disease. The fact that BPS people lump it in with chronic fatigue does not change the nature of that subsection which was historically called ME. No matter what you call it, there is a group of patients who have the same symptoms that were seen after the ME epidemics (and which now look to be found after covid-19 as well).

    Now if you have a group of patients who have an allergic reaction to peanuts and you do research that says peanuts will cure them, that is so counterintuitive that the evidence must be strong and robust. Especially if you are in the same position as ME/CFS patients, where refusing the treatment can lead to anything from losing benefits and being told you don't want to get better, to being sectioned or having your children forcibly removed from the parental home.

    This is not a simple scientific debate. The real world consequences are huge. If the welfare of patients was put first every sentence of these trials would be subject to debate and the people who did the research would be asked to justify every step.

    Instead we are getting intellectual debate about what is subjective and how can you get objective results. It feels as if the researchers have been given all the benefit of the doubt at every step.

    Put quite simply, the conclusions given in these papers stray so far from our experience, and seem so divorced from the standards held by other branches of medicine, that unless the review is very robust we will be unable to believe its conclusions - not because we are closed-minded but because we know too much. It will become another stick to beat us with - already, the imprimatur of Cochrane is being used to paint us as antiscience - and patients will be left worse off. Esther Crawley is already teaching police officers how to recognise the "emotional abuse" that gives children ME/CFS so they can take action to protect the children.

    Do it properly to alleviate our concerns or it will feel like another case of the establishment protecting its own.
     
    Sly Saint, JohnM, Simbindi and 24 others like this.
  16. Hoopoe

    Hoopoe Senior Member (Voting Rights)

    Messages:
    5,424
    Yes, I can't like this enough. They are given endless benefit of doubt and privilege. No negative result is ever enough to justify the conclusion that maybe their approach is just not working, no it's all very complicated and too difficult for patients to understand.

    What is needed are people like Carolyn Wilshire, Jonathan Edwards and David Tuller, who have the guts to say that this research is not good enough - actually, a piece of crap. The problems are obvious, and there is some sort of widespread collective inability to admit this, presumably because the problem is much bigger than just the PACE trial or even ME/CFS.
     
    Last edited: Jun 13, 2020
  17. Snow Leopard

    Snow Leopard Senior Member (Voting Rights)

    Messages:
    3,860
    Location:
    Australia
    Take a look around at other social movements. How polite are they? It took BOTH Malcolm X and MLK to make progress with civil rights in the USA. Then the progress stopped when many black people started to be nice again.

    Why do you think we have to be extra polite not to offend idiots who don't understand the scientific method?
     
  18. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,175
    Location:
    London, UK
    That is a non sequitur again, Hilda, if you look. You have taken words out of context. If you are asking the same question several ways, then multiplicity is the problem. Neither of your statements is relevant.
     
    Louie41, alktipping and TrixieStix like this.
  19. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,563
    Location:
    UK
    If you are asking the same question in multiple ways (i.e. multiple outcomes essentially measuring the same thing), then you should expect them to give the same answer (high correlation between measures), and worry if they don't. If they don't, then you have an issue with something in your experiment which needs explaining.
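    [Editor's note: a minimal sketch of that check, with invented scores for illustration. Two instruments that genuinely measure the same construct should correlate strongly across patients.]

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-patient scores on two instruments claimed to measure
# the same thing (e.g. two fatigue questionnaires)
scale_a = [10, 14, 9, 20, 17, 12, 8, 15]
scale_b = [11, 15, 10, 19, 18, 11, 9, 16]
print(f"r = {pearson(scale_a, scale_b):.2f}")
```

    A low r between supposedly equivalent outcomes is itself a warning flag that something in the measurement, or in the trial, needs explaining.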
     
    Louie41, alktipping, Hutan and 10 others like this.
  20. spinoza577

    spinoza577 Senior Member (Voting Rights)

    Messages:
    455
    In the PACE trial paper: If you look at figure 2 (questionnaires scores over a year), you may wonder what the trial is about.

    You find "international CFS only", "London ME only" and "depressive disorder only". (Depression is also referenced at page 834, btw, and again within their own trial.)

    ME/CFS has nothing to do with depression. You cannot take seriously a trial that delivers rather similar success (look at the values) for all three groups, can you?

    Yes, there is a small difference, but what could it mean? The paper is also absolutely unhelpful in giving an idea of the questionnaires, which is bad style. If you read other papers in medicine, you often get a meaningful introduction, at least when the paper is good.


    "What we think about ourselves" and "what we experience in ourselves" are by far not necessarily the same (that would be easy living). It is not only an art (and even requires a society); it even requires a biological machinery.

    In depression, the mood and the thinking are primarily affected (possibly by unknown biomechanisms). In ME/CFS they are not, or maybe only secondarily (as you have lost your life). Instead it is, or at least presents itself as, an operative inability.

    The paper nevertheless conveys that there is somehow something in these two illnesses which can be influenced by these two interventions upon special medical care. And that´s it -

    It's rather not saying much anyway - because the result is restricted to "in addition to special medical care", whatever that means (if not something ridiculous, right?). It rather says nothing (and so carries no responsibility either); it only conveys.
     
    Last edited: Jun 15, 2020
    alktipping likes this.