Trials we cannot trust: investigating their impact on systematic reviews and clinical guidelines in spinal pain, 2023, O'Connell et al

Discussion in "'Conditions related to ME/CFS' news and research" started by EndME, Jul 17, 2023.

  1. EndME

    EndME Senior Member (Voting Rights)

    Messages:
    1,203
    Highlights
    • A group of trials with trust concerns had major impacts on the results of systematic reviews and clinical guidelines.
    • They substantially impacted effect sizes and influenced the conclusions and recommendations drawn.
    • There is a need for a greater focus on the trustworthiness of studies in evidence appraisal.
    Abstract

    We previously conducted an exploration of the trustworthiness of a group of clinical trials of cognitive behavioural therapy (CBT) and exercise in spinal pain. We identified multiple concerns in eight trials, judging them untrustworthy. In this study, we systematically explored the impact of these trials (“index trials”) on results, conclusions and recommendations of systematic reviews and clinical practice guidelines (CPGs).

    We conducted forward citation tracking using Google Scholar and the citationchaser tool, and searched the Guidelines International Network (GIN) library and the National Institute for Health and Care Excellence (NICE) archive to June 2022 to identify systematic reviews and CPGs. We explored how index trials impacted their findings. Where reviews presented meta-analyses, we extracted or conducted sensitivity analyses for the outcomes pain and disability, to explore how exclusion of index trials affected effect estimates.

    We developed and applied an 'Impact Index' to categorise the extent to which index studies impacted their results. We included 32 unique reviews and 10 CPGs. None directly raised concerns regarding the veracity of the trials. Across meta-analyses (55 comparisons), removal of index trials reduced effect sizes by a median of 58% (IQR 40 to 74). 85% of comparisons were classified as highly, 3% as moderately, and 11% as minimally impacted. Nine out of 10 reviews conducting narrative synthesis drew positive conclusions regarding the intervention tested. Nine out of 10 CPGs made positive recommendations for the intervention(s) evaluated. This cohort of trials, with concerns regarding trustworthiness, has substantially impacted the results of systematic reviews and guideline recommendations.
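    The sensitivity analyses described above re-pool each meta-analysis with the index trials removed and compare the resulting effect estimates. As an illustrative sketch only (the trial data below are invented, and simple inverse-variance fixed-effect pooling is assumed; the actual reviews may use random-effects models):

```python
def pooled_effect(trials):
    """Inverse-variance fixed-effect pooled estimate of a
    standardised mean difference (SMD)."""
    weights = [1 / t["se"] ** 2 for t in trials]
    return sum(w * t["smd"] for w, t in zip(weights, trials)) / sum(weights)

# Hypothetical trials; "index" flags a trial with trustworthiness concerns.
trials = [
    {"smd": -0.90, "se": 0.15, "index": True},   # flagged trial, large effect
    {"smd": -0.20, "se": 0.12, "index": False},
    {"smd": -0.10, "se": 0.20, "index": False},
]

full = pooled_effect(trials)
reduced = pooled_effect([t for t in trials if not t["index"]])
pct_change = 100 * (1 - reduced / full)  # shrinkage after exclusion
print(f"full: {full:.2f}, without index trials: {reduced:.2f} "
      f"({pct_change:.0f}% smaller)")
```

    With made-up numbers like these, a single flagged trial with a large effect and high precision can dominate the pooled estimate; the paper reports a median 58% reduction in effect sizes across 55 real comparisons when the index trials were excluded.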

    Perspective
    We found that a group of trials of CBT for spinal pain with concerns relating to their trustworthiness have had substantial impacts on the analyses and conclusions of systematic reviews and clinical practice guidelines. This highlights the need for a greater focus on the trustworthiness of studies in evidence appraisal.

    https://www.jpain.org/article/S1526-5900(23)00467-4/fulltext
     
    Last edited by a moderator: Jul 17, 2023
  2. boolybooly

    boolybooly Senior Member (Voting Rights)

    Messages:
    592
    It is good these authors are deconstructing the errors allowing badly researched psychology to "wag the dog".

    Heaven forbid anyone should say it was done deliberately and financed by bad actors in bad faith.
     
  3. RedFox

    RedFox Senior Member (Voting Rights)

    Messages:
    1,290
    Location:
    Pennsylvania
    You need to be able to give useless studies zero weight in reviews, instead of a very small weight.
     
  4. Sean

    Sean Moderator Staff Member

    Messages:
    8,050
    Location:
    Australia
    Or even negative weight.
     
  5. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,175
    Location:
    London, UK
    This is what GRADE fails on. If a study is too open to bias to be interpretable it should score zero. There is no logical justification for just 'downgrading' a pip or two.

    And this as well. The PACE study provides us with very robust evidence for CBT and GET not producing a cost-effective benefit. If there is any benefit it is too small to be worth it. Since it was a big trial whose authors did their darnedest to get a positive result from it, it should carry very significant negative weight.
     
  6. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,651
    Location:
    Canada
    Somehow it doesn't seem to have the same impact when comparing the response to NICE vs. to IQWiG, which simply discarded all but 3 trials for being too biased to use. Or maybe, since the conclusions remain mostly the same, they just don't care about their work being rated as too poor to even consider.

    I'm really puzzled by the different response, considering that grading trials as not even worth rating, carrying no weight at all, is obviously worse than their being rated as very low quality and carrying little weight.
    Which is another puzzling part of this. IQWiG rated near-identical trials, from mostly the same people, with mostly the same process and methodology, and obviously the exact same "treatments", as worthless, which should have cast doubt on similar conclusions from the 3 low-quality trials it did find acceptable. Same food, from the same kitchen, cooked by the same staff using the same ingredients and equipment. It is all weird: absolute silence on such a similar situation.
     
    Joan Crawford, bobbler and RedFox like this.
  7. Hutan

    Hutan Moderator Staff Member

    Messages:
    29,351
    Location:
    Aotearoa New Zealand
    I don't think GRADE has to fail on this - it depends entirely on how it is used. Quoting my comment from elsewhere:
    So, if a trial is completely unreliable, for example the researcher sat at a table and just made up the data, it can and should be excluded from an analysis.

    But, a trial with subjective outcomes and no blinding is not completely unreliable. In some cases, a subjective outcome is actually something we want to know and report on.

    As I say above, a subjective trial with no blinding might tell us that the intervention has no effect, even though all the biases are stacked in its favour. Or that trial and all the others with the same design might show "benefits" that can be identified as likely to be within the benefit range expected from a hyped placebo. Then we know that the benefit isn't big enough to be real. There should be a step in the evaluation process that comes to a sensible conclusion on what magnitude of benefit is real and relevant and lasts long enough.

    Alternatively, what if an intervention has a fantastically useful outcome? A trial with subjective outcomes (do you think the treatment solved the problem?) and no blinding might result in nearly 100% positive outcomes, and almost all of the participants continued to use it at followup. In that case, the result is so far in excess of what you would expect with a placebo treatment that you can conclude that the treatment is probably useful.

    The thing is, any evaluation system has to be used by people who are thinking properly about what they are doing. I think it is entirely possible to use the GRADE approach to make a useful analysis. As I've noted elsewhere, the main benefit of GRADE and the other tools it is often used with is that they provide a transparent structure for assessing trials, for reporting, and to some extent standardising, how the assessor thought about things.
     
    Joan Crawford, rvallee and Sean like this.
  8. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,725

    Indeed, the point made in the abstract, that 9 out of 10 of these reviews (of which 85% were highly impacted) were positive and effectively voted for the treatment, makes it really obvious this isn't just an issue of bias and selectivity in the trials that end up being reported. As far as I can see there is nothing stopping ten studies being done in a row until the right answer finally gets fudged 'just enough' out of the poor methods, with the previous nine never reported.

    NOTE: none of those unreported studies ever has to be included in 'the review' either, so it is nonsense to claim the published ones are somehow informative given they have been selected out from a pile of others people never see.

    Even if you somehow couldn't design a single trial with any kind of objective measures to triangulate [which I think is just an excuse for not wanting to 'do the work'], that is just playing the probabilities alongside having bad methods that you choose not to improve, focusing instead on the outcome that serves your conflict of interest.

    To say that this being the habit in an area makes it 'not science' is surely an understatement.

    There is nothing stopping good-quality qualitative research from taking place to add meat to the bones, so these arguments of justification seem beyond weak and almost like trying it on. Add in the huge risk of perceived coercion, particularly in the area of ME/CFS, and particularly when trials are run by BPS proponents (their beliefs are in behavioural treatments like removing support, making life hard, and 'secondary gains' nonsense, and what they write about people, suggesting they are psychosomatic, tends to ruin access to medical and other support), and I think that very much needs to be accounted for. I still don't understand why their methods aren't required to explain how participants were both made safe and made to feel safe to give the truth rather than 'the right answer'.
     
    Last edited: Jul 19, 2023
    Joan Crawford, RedFox and EndME like this.
  9. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,651
    Location:
    Canada
    By far the biggest flaw and biasing element in this methodology: change the people and you change the outcome. That's voting with a few extra steps. A cornerstone of science is, well, exactly the opposite of that.
     
  10. RedFox

    RedFox Senior Member (Voting Rights)

    Messages:
    1,290
    Location:
    Pennsylvania
    It would be easier to list the trials we can trust than the trials we can't. There are a few studies that deliver solid results, and a long tail of studies that mean little, either because they're highly speculative or methodologically flawed.
     
  11. Hutan

    Hutan Moderator Staff Member

    Messages:
    29,351
    Location:
    Aotearoa New Zealand
    I guess, but all science has similar problems. As this paper says:
    And meta-analyses can compound the bias.

    I think it is possible to do a review with rigour, and for your assumptions to be laid out so clearly that others can challenge them, and conduct sensitivity analyses to see if assuming something else makes a difference.


    The problems with the studies they identified as flawed are summarised:
    The authors don't seem to have identified the problem of subjective outcomes in unblinded trials, which surely must have applied to most of these studies.

    Here's the chart of the changes in effect size for each meta-analysis timepoint, with and without the index trials. The right-hand scale is an estimated Number Needed to Treat.
    [attached chart]

    Look how the standardised mean difference tends to zero. And that's before taking into account the bias from subjective outcomes in unblinded trials.
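    The reason an SMD tending to zero matters clinically is that the NNT blows up as the effect shrinks. One common conversion from SMD to NNT is the Kraemer & Kupfer (2006) formula, NNT = 1 / (2·Φ(d/√2) − 1); this is an assumption here, since the chart's exact conversion method isn't quoted:

```python
from statistics import NormalDist

def smd_to_nnt(d):
    """Approximate Number Needed to Treat from a standardised mean
    difference d, via the Kraemer & Kupfer (2006) conversion."""
    # Phi(d / sqrt(2)) is the probability a treated patient does
    # better than a control patient (the "probability of superiority").
    auc = NormalDist().cdf(abs(d) / 2 ** 0.5)
    return 1 / (2 * auc - 1)

# As the SMD shrinks toward zero, the NNT grows rapidly:
for d in (0.8, 0.5, 0.2):
    print(f"SMD {d:.1f} -> NNT ~ {smd_to_nnt(d):.1f}")
```

    Under this conversion an SMD of 0.8 corresponds to an NNT of roughly 2, while an SMD of 0.2 already needs around 9 patients treated per patient helped, which is why the shrunken post-exclusion estimates are so damning.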

    A number of the reviews noticed that certain trials were pulling the results in a favourable direction. Some explained this away with speculation that the interventions were somehow superior, rather than checking for the flaws that these authors found.
    Same story with the clinical practice guidelines:
     
    Last edited: Jul 20, 2023
  12. Hutan

    Hutan Moderator Staff Member

    Messages:
    29,351
    Location:
    Aotearoa New Zealand
     
    bobbler, rvallee and Sean like this.
  13. Hutan

    Hutan Moderator Staff Member

    Messages:
    29,351
    Location:
    Aotearoa New Zealand
    Thanks for posting this @EndME. I'd like to use it in advocacy as it seems pretty compelling. What do we know about the authors? Do they have a track record of good, reliable work?
     
    bobbler, MEMarge, EndME and 1 other person like this.
  14. Hutan

    Hutan Moderator Staff Member

    Messages:
    29,351
    Location:
    Aotearoa New Zealand
    Ah, this is interesting:
    Neil O'Connell, the first author
     
    bobbler, rvallee, MEMarge and 2 others like this.
  15. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,944
    Location:
    betwixt and between
    In addition to Neil O'Connell's, other authors also have involvement with Cochrane and NICE:

     
    bobbler, MEMarge, oldtimer and 2 others like this.
  16. Hutan

    Hutan Moderator Staff Member

    Messages:
    29,351
    Location:
    Aotearoa New Zealand
    On Amanda C de C Williams (the final author)
    an interview with transcript
    https://integrativepainscienceinsti...-does-it-help-with-dr-amanda-c-de-c-williams/
     
    Joan Crawford, Sean and oldtimer like this.
  17. Hutan

    Hutan Moderator Staff Member

    Messages:
    29,351
    Location:
    Aotearoa New Zealand
    Thanks MSEspe! Interesting to read those links. There's a huge Cochrane involvement, and, from Eccleston, some extraordinary expressions of prejudice about CFS:
     
    bobbler, Sean, oldtimer and 2 others like this.
  18. Trish

    Trish Moderator Staff Member

    Messages:
    55,399
    Location:
    UK
    I had a quick look at the Cochrane review of psychological therapies for pain. The outcome measures were all subjective, i.e. questionnaires.
    It only found evidence of small improvements with CBT and none with BT or ACT. I would interpret that as CBT being more focused on changing 'unhelpful beliefs' than the others, so more likely to have patients reporting improvement even when there is no real change. But of course they don't interpret it that way.
     
  19. Sean

    Sean Moderator Staff Member

    Messages:
    8,050
    Location:
    Australia
    People with fatigue tend to be actively ruminating about possible solutions, and often desperate for change,

    Real mystery why patients might be desperately seeking solutions to their serious life-trashing health problems, especially when the professional experts are botching it badly.

    Can't imagine why patients would do that.
     
    Last edited: Jul 21, 2023
    obeat, bobbler, NelliePledge and 8 others like this.
  20. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,651
    Location:
    Canada
    Definitely, but this problem is massively amplified in a context of open label trials with biased subjective outcomes that mostly amount to "we're experts, we say so", multiplied again by the fact that what is being evaluated is a black box made up of a bunch of different things and involve direct 1-on-1 attempts at manipulating the outcomes. No one knows or checks the substance of the trials, the actual "intervention". What they do is rate books by their cover. It mostly ends up being all bunched up together under the "everything but the kitchen sink" label of non-pharmaceutical interventions.

    What this means is that even if the results say something, they absolutely cannot be applied in the tyrannical fashion that the biopsychosocial model is forced onto us, taking what is, at the very best, some minor subjective benefit in 1/7 of participants and turning it into "this is a complete cure and, despite this being a pragmatic trial where causality cannot be inferred, we believe it proves that the participants are only suffering from psychological issues".

    There are so many more layers in the biopsychosocial ideology compared to other disciplines. They may be the same issues, but they are amplified 100x at every single point, from the design to the evaluation of services delivered without any reliable evidence, since those services were designed with the intent of being implemented straight away, assuming they could not possibly fail.

    If it was merely a suggestion that patients could ignore without any issues, then OK. But without fail patients keep being told to "just exercise" no matter how many times they report that it makes them severely ill, with the insistence, without any evidence, that they are anxious, afraid, or other schoolyard taunt-level nonsense.
     
    Last edited: Jul 20, 2023
