Independent advisory group for the full update of the Cochrane review on exercise therapy and ME/CFS (2020), led by Hilda Bastian

Discussion in '2021 Cochrane Exercise Therapy Review' started by Lucibee, Feb 13, 2020.

  1. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
    But unless this is done incredibly carefully, it could be used to mask the fact that a trial failed in its main objectives. Trials can cause great harm if their findings are misrepresented and subsequently influence political, medical and public thinking. The quality assurance checks and balances have to be very good indeed to ensure this does not happen, and of course in the case of ME/CFS we know they are not.

    Once you start talking of retrospectively cherry-picking different facets of a seriously methodologically flawed trial, according to which bits are then deemed more reliable, that seems to me a very slippery slope. I'm a modestly qualified engineer, not a scientist, but I know that that approach means you can implicitly (and unconsciously) make flawed assumptions about something's validity, when in truth you have to rerun tests and sanity checks to really know whether you are fooling yourself. At best I would have thought such trials might provide indications for further, much better quality research, no more.

    In the case of PACE, it was unblinded (as it inevitably had to be), yet the outcomes, including the primary ones, were highly subjective, and, very importantly, the interventions specifically targeted patients' self-perceptions, with a view to changing patients' own illness beliefs. As if that were not enough, the outcomes the investigators eventually chose to report on do not properly indicate the true illness status of pwME; they just indicate pwME's perceptions of their illness - the very perceptions the intervention had been designed to alter.

    In the case of ME/CFS the best indicators of the physical illness itself are the objective ones - the ones the investigators chose not to report on.

    If you are going to trial whether a physical illness has improved or been recovered from, then you really have to measure things that indicate that. And if you insist on measuring illness perceptions as a proxy for real illness status, then you need to control for skewing of those perceptions, else your proxy readings will be even more wide of the mark. But if you not only fail to control for skewed perceptions but deliberately bias them towards indicating a more favourable illness outcome ... then your proxy measurements have totally failed you, because as a proxy for the physical illness they are far too unreliable.

    To me PACE was not just useless but far worse than useless, because it truly has done more harm than good. pwME are far worse off today than if PACE had never happened, because all the findings from it of any influence were totally distorted. And the negative impact of PACE was, and still is, significantly amplified because the checks and balances failed to call out its methodological mediocrity, but instead gave it a huge seal of approval. And similar trials of course.
     
  2. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,661
    Location:
    Canada
    A good example of this would be someone locked in solitary confinement for an extended period of time. On average they would rate going outside to a small fenced courtyard for a few minutes as much more enjoyable than someone who can do the same any time they want to would. It might even be the highlight of their entire month.

    The experience of going outside for a few minutes is generally appreciated by most people, but it would rate very low in terms of significance for their overall well-being, because their well-being consists of so much more. It would be incredibly appreciated by someone stuck in solitary confinement, even though that appreciation has little to do with the experience itself and mostly reflects relief from the torture of being locked up in this way.

    It's true that even small actual benefits are significant for chronically ill people, which is why they should light up any evaluation like a giant Christmas tree - not have to be wrung out like blood from a stone, requiring you to squint intensely through a microscope to see that the needle has moved by as much as a Planck length.

    Which actually makes the point that all those unrelated benefits are mostly an illusion: given the intense suffering people with chronic illness experience, there would be no need for so much fooling around to show that something is a significant benefit. The participants - the beneficiaries - would themselves scream it from the rooftops if the benefits were genuinely significant.

    Chronically ill people pretty much already do, and none of the benefits being shouted about has anything to do with those cognitive manipulation interventions. They are in fact mostly about the opposite: GET vs. pacing, "stop focusing on your symptoms" vs. managing around symptoms, or disability benefits vs. "enabling the sick role". For myself, disability benefits are the only useful thing I ever got out of health care, the only thing that helped - and they are a particular focus of the BPS model, which insists that they "enable the sick role" and are themselves a cause of disability, a circular logical fallacy.
     
    Last edited: Jun 12, 2020
  3. JemPD

    JemPD Senior Member (Voting Rights)

    Messages:
    4,500
    This.
    This is the bit that people who're not au fait with the CBT for CFS/ME protocol miss... that if you are going to give a treatment that teaches patients that their thoughts and behaviour are the only thing wrong with them - that believing they feel ill, saying they feel ill, thinking they feel ill, is why they feel ill, and that in order to stop feeling so ill they need to believe they are well, say they feel well and behave as if they are, otherwise they will continue to suffer...
    You cannot then use a questionnaire asking them how ill they feel as an outcome measure for how well they actually feel, because if the therapist has done their job then the person will report feeling better, regardless of how they actually feel, in the belief that doing so will lead to improvement.

    Edited to add: comma in final sentence - for ease of reading
     
    Last edited: Jun 12, 2020
    Mithriel, mango, Atle and 15 others like this.
  4. Daisymay

    Daisymay Senior Member (Voting Rights)

    Messages:
    686
    This, this, this! This point is absolutely pivotal for people new to the BPS view of ME/CFS and all their research to understand.
     
    mango, Atle, Trish and 8 others like this.
  5. Hilda Bastian

    Hilda Bastian Guest Guest

    Messages:
    181
    Location:
    Australia
    It wasn't an update. It's still the same version of the review - see this at the top:

    [Attached screenshot: Screen Shot 2020-06-13 at 8.39.03 am.png]


    You can't make changes without the date being automatically recorded. The explanation of what changes have occurred is supposed to be in the "see what's new" link, which is here for that review:

    [Attached screenshot: Screen Shot 2020-06-13 at 8.44.37 am.png]
     
    Barry likes this.
  6. Hilda Bastian

    Hilda Bastian Guest Guest

    Messages:
    181
    Location:
    Australia
    Yes. Although a trial's objective is supposed to be answering the question it's testing: if the intervention fails to achieve what's hoped for, the trial still succeeds if it shows that. Doing excellent research is hard, and most of it (of all types) is far from excellent. We can't afford to be less than excellent in a clinical trial, but unfortunately that's common (not only in ME/CFS). Systematic reviews are a strategy with the potential to overcome, or at least mitigate, the consequences, but unfortunately all the same issues apply to that study type, and they can end up making the situation worse instead of better. (Just like anything people do, they can end up doing more harm than good.)


    Not if you are methodical about it and each step in those methods is sound: then you end up with conclusions calibrated to what is there in total. For example, the Cochrane and AHRQ reviews on the same question came to different conclusions: using this set of tools was not necessarily a very slippery slope. Those differences are not because of the available toolkit.

    On the other hand, when arguing in a situation like this, the approach of saying an entire trial is rubbish by arguing that "everyone knows" a single reason renders the entire trial useless, when that is in fact not as widely agreed, is a dead end that will lead to people clocking out in disagreement. (That's the contention that started this conversation.) If people hear early on in a debate they're new to, one side saying something they know to be wrong and consider absurd, they often disengage quickly. Shortcuts are tempting, and they can be effective for some people and purposes, but sometimes, there's only a hard slog and shortcuts are costly.
     
    Last edited: Jun 13, 2020
    Barry, Amw66, spinoza577 and 2 others like this.
  7. Sean

    Sean Moderator Staff Member

    Messages:
    8,064
    Location:
    Australia
    For example, when it comes to patients reporting harms.

    Both these examples can be measured much more objectively using a kitchen scale or an unobtrusive sleep monitor. Provided they are used properly, the bias from dodgy recall can be eliminated from the outcome (or at least reduced so far as to be non-significant).

    Even better, split the study so that one half uses recall and the other scales/monitors; then we can also get the size of the bias, and maybe additional data about other possible factors affecting it, like age, gender, etc.
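    That split design can be sketched in a quick simulation (illustrative only - the sleep figures, bias size and noise levels below are invented for the example, not taken from any study): comparing the mean of a recall arm against the mean of a device-measured arm recovers the size of the recall bias.

```python
# Hypothetical sketch of a split-arm design for estimating recall bias.
# All numbers (true sleep duration, bias, noise) are invented for illustration.
import random

random.seed(1)  # reproducible example

N = 10_000
TRUE_MEAN_SLEEP = 7.0  # hours per night (hypothetical)

# Objective arm: a device measures the true value plus small instrument noise.
device = [random.gauss(TRUE_MEAN_SLEEP, 0.1) for _ in range(N)]

# Recall arm: same underlying truth, but self-report adds a systematic
# over-reporting bias (~0.5 h) on top of much larger random error.
RECALL_BIAS = 0.5
recall = [random.gauss(TRUE_MEAN_SLEEP + RECALL_BIAS, 1.0) for _ in range(N)]

mean = lambda xs: sum(xs) / len(xs)

# The difference between the two arm means estimates the recall bias.
estimated_bias = mean(recall) - mean(device)
print(f"estimated recall bias: {estimated_bias:.2f} h")  # close to 0.5
```

    With enough participants, the random error in each arm averages away, leaving only the systematic difference - which is exactly the "size of the bias" the split design is meant to expose.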

    We get clearer answers, and more of them.

    It's not even clear whether patients' perceptions actually changed, or just their reporting of them (test-scoring behaviour). How do we tell the difference?

    Not even the objective secondary measures used in PACE can answer that. All they can say is that whatever drove the responses on subjective measures did not make any difference on objective measures.

    Exactly.
     
    Last edited: Jun 13, 2020
  8. Hutan

    Hutan Moderator Staff Member

    Messages:
    29,374
    Location:
    Aotearoa New Zealand
    I'm presuming that Hilda chose to post the example of the urinary incontinence trial, out of all the possible medical intervention trials, for a particular reason. Having worked my way through it, I've been thinking about what we can learn from it:

    Imagine you are a single mother with two young children and you have urinary incontinence. It's affecting your self-confidence and your life - you work from home for low wages partly because the condition is hard to manage. You don't socialise much for the same reason. And the pads and laundry cost a lot.

    You know that surgery would probably help a lot, but maybe you would have to pay some or all of the cost. And you don't have anyone to take care of the children while you are in hospital, or to do all the lifting that caring for young children involves once you are home from the surgery. You read the Cochrane review and the chance of a cure from the pelvic floor exercises looks great (56%). Then this study comes out and, yes, it's a big, credible-looking study. And, once you have got past the rather misleading presentation of the results, you realise that the subjective cure rate from the pelvic floor exercises was only 16% (against the Cochrane-reported natural recovery rate of 6%) - and that that 16% cure rate is in the minority of the women allocated to the pelvic floor exercise treatment who didn't go on to get surgery in that same year and who stuck around in the trial to rate the intervention after a year. So the actual cure rate might not be much more than zero.
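    The arithmetic behind that dilution can be made concrete with a toy calculation. All of the allocation, crossover and dropout numbers below are hypothetical, chosen only to illustrate how a 16% rate among the women still rating the exercises can shrink once everyone allocated is counted:

```python
# Hypothetical worked example: how attrition and crossover to surgery
# dilute a headline "cure rate". None of these counts come from the trial.
allocated = 100          # women allocated to pelvic floor exercises
crossed_to_surgery = 55  # went on to get surgery within the year (assumed)
dropped_out = 20         # left the trial before the 12-month rating (assumed)

rated = allocated - crossed_to_surgery - dropped_out   # women still rating
cured_among_rated = round(rated * 0.16)                # 16% of the raters

# As a share of everyone allocated to the exercises, the rate shrinks:
itt_rate = cured_among_rated / allocated
print(f"{cured_among_rated} of {allocated} allocated = {itt_rate:.0%}")
```

    Under these made-up numbers, a 16% rate among the remaining raters works out to only about 4% of everyone allocated - which is why, once the 6% natural recovery rate is also taken into account, the true effect could plausibly be close to zero.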

    Is the Cochrane review likely to be closer to the truth (presumably based on a combination of studies with subjective outcomes, possibly biased by women who might have felt some stigma in reporting that they got no result, perhaps thinking that they didn't try hard enough or because they don't want to disappoint the nice therapist, and with overly optimistic objective measures that don't bear much relationship to real life)? Or is this big study (with the possibility that most of the women signed up in order to get quicker access to surgery and might have felt that reporting no cure would help get surgery) with its quite different outcome, likely to be closer to the truth? How can you know?

    Perhaps, you think, you should just try the exercises first; after all, they probably don't hurt. But going to the physio for training in the pelvic floor exercises and delaying surgery isn't a nil-cost exercise. There's possibly the cost of the physio sessions and travel, and babysitters to arrange and pay for. If it doesn't work, there's the extra time spent feeling embarrassed, restricting employment opportunities and restricting social opportunities for both yourself and your children. And there's the potential to feel that you are a failure, because the physios have read the Cochrane review summary and are confident that, if you put in the work, you will have success. If a government is providing the training for free, it isn't a nil cost to society either.

    What I take away from looking at that trial is that clearly subjective outcomes in the unblinded trial did not give reliable information about the utility of the treatment that might help people make good decisions. We are left floundering around not sure if the likely real cure rate of pelvic floor exercises in SUI is closer to zero or closer to 60%. (Sure, the badly chosen objective measure was a problem too). Another takeaway is that subjective outcomes in unblinded studies are preventing good decision-making about treatments in more conditions than just ME/CFS.

    In this study, the clearly subjective measure of cure was a single question asked after 12 months along the lines of 'do you experience urinary leakage in response to physical activity, coughing or sneezing?' Yes/No. So, does a little bit of leakage count - what is normal? If you had quite a lot of leakage last month when you had a chest cold with coughing, but there's been no leakage at all this past month, what do you answer?

    An outcome that had women wearing pads each day for a week or two at a time and recording on an app each day when there was any leakage (as measured by pads that show urine with colour) would be a much better measure of treatment success. Yes, it still requires participant reporting, but it's more objective than subjective. It takes deliberate effort to bias the result. And it's not a deliberate effort to mislead that mostly needs to be guarded against. (Of course you could go further and have a perfectly objective outcome by having photos of the pads taken and sent in, or having pads put into plastic bags and sent in for weighing.) Usually it's not that hard to think of good, fairly objective measures.

    Considering the huge amount of effort and cost involved in a large trial, why aren't trial designers giving more thought to useful outcome measures that might ensure the trial answers the question it is made for? Perhaps if Cochrane reviews started rating subjective measures of whether a treatment works in unblinded trials as unhelpful (not just low quality but really no evidence at all), trial designers might think a bit harder?

    If a clearly subjective outcome in an unblinded trial can provide useful evidence of treatment utility (that is, we aren't left wondering if the outcome was significantly affected by bias) and it is so very absurd to suggest otherwise - then surely there must be lots and lots of examples around?
     
  9. Snow Leopard

    Snow Leopard Senior Member (Voting Rights)

    Messages:
    3,860
    Location:
    Australia
    We're not dismissing patient reports so categorically; we're dismissing the sole use of specific PROMs that patients have NEVER BEEN ASKED WHETHER THEY THINK ARE IMPORTANT, and stating that such measures can be untrustworthy if not corroborated with meaningful outcome measures at the same time. This is possible for ME, and potentially for other conditions for which researchers like Prof Bentall misleadingly claim there are no possible objective outcomes.


    That blinding vs unblinding meta-meta-analysis is itself of poor quality. The underlying reviews cherry-pick which outcome measures and studies are included. Their measure of blinding is flawed too: they bundled assessor-blind study results together with patient-blind ones, and performed no sensitivity analysis restricted to studies that bothered to measure (and demonstrate) a high rate of allocation concealment. Although I will point out that the authors did find a difference between subjective and objective outcomes in a post-hoc analysis (which should have been part of the original protocol, but I digress).
    Likewise, novel analyses could look at the temporality of the underlying trials - when were (or weren't) unblinded trials followed up by subsequent blinded trials, and what pattern did the results show? If Cochrane were committed to testing this scientifically, they could create a prospective register for novel treatments that are first tested with unblinded trials (with subjective outcome measures), to be followed up with larger double-blind studies later. In other words, anyone who wishes to claim blinding doesn't matter needs to put their money where their mouth is.

    Many people are using that "debate" to justify half-arsed study quality, but I don't think history will look fondly upon those people.
     
    Last edited: Jun 13, 2020
    Hutan, rvallee, Robert 1973 and 8 others like this.
  10. Snow Leopard

    Snow Leopard Senior Member (Voting Rights)

    Messages:
    3,860
    Location:
    Australia
    No.

    Those people who clock out without listening to those who are directly affected should check their own ethical values or be ignored. Those people who don't bother to give us the time of day should have no power to affect our lives.

    This isn't a debate about philosophical truths. Our argument cannot be considered absurd when what we are insisting on is a scientific norm in other fields!

    The outcome directly affects our lives, but does not affect the lives of those who disagree with us, beyond reputational damage to their careers.

    It reminds me of the time when Simon Wessely roped his old school mate Mike Godwin (of Godwin's law fame) into a discussion on Twitter, insisting that he was being criticised by patients making absurd arguments. Mike quickly realised the importance of listening to those who are directly affected, and promptly told Simon that he had made a mistake and that he agreed with us.

    The fact is that actigraphy was initially supposed to be an outcome measure of the 'definitive' PACE trial, having been recommended by a patient group. Actigraphy was indeed measured at baseline, but was dismissed as an outcome measure during the trial. Those of us who have seen the PACE steering committee notes are rather suspicious of the rationale behind dropping it - it happened after results from Dutch colleagues, which found that activity levels did not improve despite self-reports stating the contrary, were discussed.

    Which brings me to my final conclusion - much of this debate would be avoided if they'd bothered to systematically ask patients across the world in the first place. The fact that they didn't should tell you where the priorities of those researchers ultimately lie.
     
    Last edited: Jun 13, 2020
    Sly Saint, Hutan, rvallee and 18 others like this.
  11. spinoza577

    spinoza577 Senior Member (Voting Rights)

    Messages:
    455
    This is in fact what happens in human societies, but it is also what can push the truth aside, because it comes down to how many people hold the same opinion - probably based on their own situation - without questioning their assumptions.

    Whatever one is doing, one should ask oneself again and again: what am I essentially doing here? I agree, the non-blinding/subjective-outcome combination is not the only possible problem. But if you ask what it allows for (without that necessarily taking place), then you must be sceptical.

    I say it again: they only measured the hope which they themselves held out to the patients, when patients were desperate and wanted any help possible. This is what the design allows for. And then the authors did not publish objective outcomes which in this case must have been collected - I am sorry, this is unacceptable. That Cochrane was not interested in objective outcomes is hard to understand either:

    Given the whole presentation of the illness, and given the whole range of different people affected - some of whom had had normal and successful lives - the underlying (and not well elaborated) assumptions go against any common sense and have little face validity. There are certain viruses; there are outbreaks. And what about severe cases? (These, presumably, must be so dull that any hope is lost anyway.)

    And this, Cochrane is not able to spot?
     
  12. Hilda Bastian

    Hilda Bastian Guest Guest

    Messages:
    181
    Location:
    Australia
    I chose urinary incontinence, and then just did a fairly cursory search for an example of a trial.

    The reason was that I asked myself: what were the health areas where the people affected by the condition profoundly challenged and changed the assumptions I had about outcome measurement? The first one that sprang vividly to mind was urinary incontinence. I was the sole consumer representative on the national agency tasked with advising the Australian government on which procedures should be publicly funded. As it happens, coincidentally, I was a single mother with 2 young children at the time, living just below the poverty line, and decisions about reimbursement concerned me intensely. But when you think of who is affected by this and who was concerned about reimbursement, it was older women, men after prostate surgery, parents of kids in wheelchairs, people who had had strokes... a really broad range.

    Until then, I hadn't really understood at all what urinary incontinence was - that it was, as the medicos framed it, the complaint about these symptoms: a whole constellation of issues was involved, and it wasn't as simple as just taking urodynamic measurements or counting how many pads were soaked in a day. It wasn't the first time I'd had to wrestle with the difficulties of quality-of-life measurement. But I think it was my first in-depth encounter with an issue where every measurement method was flawed, where for some or many questions condition-specific health-related quality-of-life instruments were the best measure (or "least worst", if you prefer), and where research effort had gone into it. (Here's some general background on standards for assessing that kind of research.) And then on top of that, blinding wasn't possible, of course, when it was surgery vs no surgery, or very different methods of exercise training/biofeedback etc. So it was an encounter with a situation where the frame for considering research was very different to, say, osteoarthritis of the knee: you needed far more research to ever slowly work your way closer to a body of evidence that could give you some confidence, because there wasn't the shortcut that a few good big trials with short-term outcomes could provide in another area.

    Unfortunately, that's just what researchers often do: think of things they think are good, fairly objective measures, and develop some new measurement tool - without properly studying whether that is in fact the case. It's similar to coming up with an intervention that fits people's theories of what a problem is and how to fix it.

    Many do, and there's a lot of really great work, too, tackling the problem - see for example the standards I referred to above, and which I used to look into the disputed outcome measures when I wrote my first post about ME/CFS. There's the NIH Common Data Elements. In general, too, there's the COMET Initiative, which is all about developing Core Outcome Sets - including involving the people affected by the conditions. (See for example the history of OMERACT.)
     
  13. Hilda Bastian

    Hilda Bastian Guest Guest

    Messages:
    181
    Location:
    Australia
    There seems to be a strong degree of concern from many on this forum for more people to understand and support the concerns of people with ME/CFS. I'm seeing people often asking: why don't more people listen, care, take action? So presumably not everyone thinks it doesn't matter if the people who do listen are put off by what they hear.

    I've seen people on this forum raise that about me specifically, too, when I first wrote a post about ME/CFS: as in, why did it take me so long to dig into and then engage with this issue? People like me fall into the category that you're saying just don't matter if we clock out when we start listening to what people are saying. There is a bottomless well of issues needing people's time and attention: there was no ethical obligation for me to spend a lot of time and stick my neck out for this particular one.

    And yes, I realize there is at least some wish that I had stayed clocked out and/or that this IAG had never come to be! :wtf: My sympathies!
     
  14. Hoopoe

    Hoopoe Senior Member (Voting Rights)

    Messages:
    5,425
    One of the concerns is precisely the proliferation of studies that exploit the lack of reliability of subjective outcomes in an unblinded context to make claims of treatment efficacy.

    I have just heard the news that another severely ill patient has been forcibly removed from their home. A psychiatrist has decided to intervene. The treatment this patient can expect will include forced graded exercise therapy. Her carer has been labelled an "enabler", which is a nicer way of saying that he fuels delusional illness beliefs and maladaptive behaviours. The patient is so severely ill that she risks catastrophic deterioration. Sectioning also often precedes the death of severely ill ME patients.

    This is just one of the many ways that misleading studies like the PACE trial harm patients, because they appear to confirm the otherwise difficult to confirm theories that say ME is a self-perpetuating state of deconditioning and false illness beliefs that can be overcome with a positive state of mind and graded exercise. And there is no way for patients to be heard, because any disagreement they voice will be considered to be expression of their maladaptive beliefs.

    We patients know what needs to be done to stop this and we are telling you. People that want to keep ME within the sphere of influence of mental health will exploit any weakness in clinical trials to create the illusion of treatment efficacy. There is a simple remedy, which is to no longer consider unblinded clinical trials with subjective primary outcomes to be reliable. Which happens to be the standard most of the rest of medicine holds itself to.
     
    Last edited: Jun 13, 2020
  15. Sean

    Sean Moderator Staff Member

    Messages:
    8,064
    Location:
    Australia
    +1

    Good point.
     
  16. Sean

    Sean Moderator Staff Member

    Messages:
    8,064
    Location:
    Australia
    I want relevant subjective/self-report/QoL/PROM measures used. But only if adequately controlled by blinding or objective measures.

    Inadequate control, and failing to adequately account for control outcomes, is how this area of medicine got into the quagmire it is now in.

    Correcting that is how medicine gets out of it.
     
    Hutan, Willow, Andy and 3 others like this.
  17. Hilda Bastian

    Hilda Bastian Guest Guest

    Messages:
    181
    Location:
    Australia
    :broken_heart::broken_heart::broken_heart:
     
    Louie41, Sly Saint, Hutan and 8 others like this.
  18. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
    Is there an evidential base for this? Cases where reviews have drawn conclusions from elements retrospectively cherry picked from a seriously methodologically flawed trial (which is open loop, based on presumed understanding), and later ratified by real world evidence (closed loop, based on reality) that those original conclusions were valid?
     
  19. Hilda Bastian

    Hilda Bastian Guest Guest

    Messages:
    181
    Location:
    Australia
    I'm not talking about retrospective cherry-picking: I'm talking about methodically checking every aspect of each study, according to predetermined criteria to try to minimize bias. That's the opposite of cherry-picking. It requires that the criteria are not chosen in order to ensure the review has the outcome that the reviewers want.

    There are surely many examples where the conclusions of reviews that were biased to ensure they arrived at those very conclusions are nonetheless "right". But if the issue is controversial, there's no reason others should or would place much weight on them if they could see the bias - unless they shared that bias.
     
    Louie41, JemPD and Barry like this.
  20. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
    What I so very dearly wish for is that the appallingly bad Cochrane reviews of appallingly bad studies on ME/CFS get turned around for the better. So that pwME can be properly served by those whose responsibility it is to identify and call out bad science, and so serve the best interests of pwME. Instead of (inadvertently or not so inadvertently) serving the best interests of the researchers of those studies. pwME would be in a so much better place by now if this had all been done properly. If you can help achieve that, and you are totally committed to that aim, then you would get my vote. If you were not committed to that then you wouldn't.

    And yes I fully appreciate it's a two way street, and not just a case of us saying, OK, you've got the job, we'll just sit back and watch you get on with it. But hopefully this thread alone shows where we are at on this. This is a robust discussion, but I would be worried if it were not.
     
    Hutan, Willow, Kalliope and 3 others like this.
