Systematic reviews do not (yet) represent the ‘gold standard’ of evidence: A position paper, 2022, Moore et al.

Discussion in 'Research methodology news and research' started by Andy, Jan 11, 2022.

  1. Andy

    Andy Committee Member

    Messages:
    22,308
    Location:
    Hampshire, UK
    Abstract

    The low quality of included trials, insufficient rigour in review methodology, ignorance of key pain issues, small size, and over-optimistic judgements about the direction and magnitude of treatment effects all devalue systematic reviews, supposedly the ‘gold standard’ of evidence. Available evidence indicates that almost all systematic reviews in the published literature contain fatal flaws likely to make their conclusions incorrect and misleading. Only 3 in every 100 systematic reviews are deemed to have adequate methods and be clinically useful. Examples of research waste and questionable ethical standards abound: most trials have little hope of providing useful results, and systematic review of hopeless trials inspires no confidence. We argue that results of most systematic reviews should be dismissed. Forensically critical systematic reviews are essential tools to improve the quality of trials and should be encouraged and protected.

    Paywall (despite claiming to be full access), https://onlinelibrary.wiley.com/doi/10.1002/ejp.1905
     
  4. Caroline Struthers

    Caroline Struthers Senior Member (Voting Rights)

    Messages:
    882
    Location:
    Oxford UK
I used to work with Andrew Moore at the Cochrane Pain Group. He recently asked me for the pdf of @Michiel Tack, @dave30th and my paper (https://www.tandfonline.com/doi/abs/10.1080/21641846.2020.1848262) on bias caused by relying on subjective outcomes. He said it might be relevant for something he was writing, but surprise, surprise, he didn't cite it in this paper. Probably because his co-author is my former boss Christopher Eccleston, who is a Cochranite/psychologist fond of psychological therapies for chronic pain, and other things, including, of course, ME/CFS. I thought Andrew might actually get it. But I don't think anyone at Cochrane will ever take this problem, rife in research, seriously. Because pain/fatigue/mood etc. are subjective, they seem to think it's fine to accept the dearth of objective outcomes measured in primary research on treatments for chronic conditions, like actual participation in work/education/society (essentially quality of life).
     
    MSEsperanza, FMMM1, Ariel and 17 others like this.
  5. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,965
    Location:
    London, UK
    Reminds me of 'GET only does harm if you don't do it properly like we do'.
    Systematic reviews are rubbish unless you do them 'forensically' (ahem) like we do.
When I see 'research waste' I know someone has an axe to grind, and it's likely to be used on the wrong trees.

    I hadn't actually heard of anyone referring to systematic reviews as a gold standard. Double blind RCTs yes, not SRs.
     
  6. Hoopoe

    Hoopoe Senior Member (Voting Rights)

    Messages:
    5,265
    I wonder if this argument would work to convince a person that believes self-reported outcomes are ideal for subjective problems:

If a subjective problem does not produce any objective abnormalities that an outside observer could see, either in the body or in the person's life, then it is fairly mild. It's simply not possible to be unaffected by symptoms once they reach a certain intensity. Subjective outcomes might be the only option when someone intends to limit themselves to studying mild symptoms, for whatever reason.

    A person with pain that limits walking will reduce their average daily time spent walking. This reduction can be measured and so can an improvement.

    Presumably the popularity of subjective outcomes is in part because they "work better" in the eyes of people who haven't yet understood that the effects they are seeing might well be entirely due to bias.

The other part is how easy and cheap they are (but it makes little sense to save money if it means the results cannot be trusted).
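The point that apparent improvements on self-report can arise "entirely due to bias" can be made concrete with a toy simulation. This is a minimal sketch with made-up numbers (the `bias` parameter and all score values are hypothetical, not drawn from any real trial): two arms with identical true symptom levels, where the unblinded treatment arm's self-reported scores are shifted by a reporting bias.

```python
import random
import statistics

random.seed(0)  # make the toy simulation reproducible

def trial(bias, n=100):
    """Simulate one unblinded trial with a self-reported outcome.

    Both arms share the same true symptom level (no real treatment
    effect). The treated arm's *reports* are shifted downward by
    `bias` (expectation/response bias), plus measurement noise.
    Returns the apparent between-group difference in mean scores.
    """
    control = [20 + random.gauss(0, 4) for _ in range(n)]
    treated = [20 - bias + random.gauss(0, 4) for _ in range(n)]
    return statistics.mean(control) - statistics.mean(treated)

# With zero real effect and zero bias, the difference is just noise;
# with a modest reporting bias, an "improvement" of roughly the bias
# size appears, indistinguishable from a genuine treatment effect.
print(f"no bias:     {trial(0.0):+.1f}")
print(f"3-point bias: {trial(3.0):+.1f}")
```

The design point: nothing in the trial's own subjective data distinguishes the biased run from a genuinely effective treatment, which is why an objective outcome (or blinding) is needed.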
     
    Last edited: Jan 12, 2022
    rainy, MSEsperanza, FMMM1 and 11 others like this.
  7. Caroline Struthers

    Caroline Struthers Senior Member (Voting Rights)

    Messages:
    882
    Location:
    Oxford UK
    This.
     
    rainy, MSEsperanza, FMMM1 and 4 others like this.
  8. Caroline Struthers

    Caroline Struthers Senior Member (Voting Rights)

    Messages:
    882
    Location:
    Oxford UK
    Maybe Cochrane should re-brand as producers of platinum-standard evidence, as it's Jubilee year. I might suggest it. It would probably be the only one of my suggestions they would ever take seriously.
     
    MSEsperanza, FMMM1, EzzieD and 6 others like this.
  9. Michelle

    Michelle Senior Member (Voting Rights)

    Messages:
    272
This is truly one of the scandals of pain research. If they wanted to, they could use objective measures such as blinded actigraphy. They simply have no interest in objective markers.
     
    MSEsperanza, rvallee, FMMM1 and 5 others like this.
  10. Sean

    Sean Moderator Staff Member

    Messages:
    7,490
    Location:
    Australia
    Yep. They are actively and deliberately weakening and corrupting methodological standards, because they cannot both meet minimum standards and get a positive result for their pet theories.

    "Faced with the choice between changing one's mind and proving that there is no need to do so, almost everyone gets busy on the proof."
John Kenneth Galbraith
     
  11. shak8

    shak8 Senior Member (Voting Rights)

    Messages:
    2,292
    Location:
    California
To me, a systematic review is just a literature review: the first step before embarking on a tentative research project, before a hypothesis.

Unless you can take apart every bit of each study to figure out whether they tested what they say they tested, and check that all the parameters (like good sampling), the elimination of confounding variables, AND an adequate number of subjects were in place, well, it's rather useless, isn't it?

    And as a former nurse, I was taught to eat the pablum of the cult of medicine, to blindly accept their assertions. Give them a Nessie of the benefit of the doubt.
    No longer.

    Edited after sleeping on it: What one includes in one's review can be skewed to favor the reviewer's slant on the subject.

Also, if a researcher or student doesn't have the brains/talent to do original research, they can mount a systematic review for publish-or-perish purposes.

Also, a lot of research is just copy-cat, and thus a systematic review can skew toward whichever direction the sheep all took.
     
    Last edited: Jan 12, 2022
    Lilas, Wyva, rvallee and 6 others like this.
  12. Caroline Struthers

    Caroline Struthers Senior Member (Voting Rights)

    Messages:
    882
    Location:
    Oxford UK
Yes. Exactly. Why does Cochrane not see this? They fetishise "search strategies" (I used to be an enthusiast for those myself, but I am so over it now) and meta-analysis. They can slag everyone off for not being as good as them at those things, with their ridiculous "hand"book which you can't lift with one hand. It doesn't matter: the stuff they find and then meta-analyse is mostly useless.
     
    Snow Leopard, FMMM1, shak8 and 6 others like this.
  13. chrisb

    chrisb Senior Member (Voting Rights)

    Messages:
    4,602
    I have tended to presume that the people who are drawn, unquestioningly, to systematic reviews and their results are people who are good at dealing with numbers, rather than words or concepts, and who do not think it necessary to study too closely what the numbers might represent, or the varieties of ways in which the words may be used. Add two sheep to two goats and the answer may be four animals.
     
    Michelle, FMMM1, shak8 and 4 others like this.
  14. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,965
    Location:
    London, UK
    They could always make use of Gerald Ratner's description of his own brand jewellery.
     
  15. chrisb

    chrisb Senior Member (Voting Rights)

    Messages:
    4,602
    Do you think they are capable of showing the same degree of self-awareness?
     
    FMMM1, Peter Trewhitt and Mithriel like this.
  16. Trish

    Trish Moderator Staff Member

    Messages:
    53,396
    Location:
    UK
I think part of the problem is that subjective outcome measures are taken at face value. The claims that they are reliable and valid seem to be based on comparison with the scores on very similar questionnaires. The whole pile of scales that purport to measure fatigue, pain, depression etc. are no more than castles built on sand. It's high time they were all recognised to be fatally flawed. Measures of treatment efficacy need to be built around more objective indicators of the person's capacity to sustain a normal life, such as activity levels, being able to study or do a job, and having some semblance of a social life.

    It's time they stopped being allowed to pretend that subjective questionnaire tick boxes lead to meaningful data that can be turned into numbers or scales that can be dumped into a stats package and provide meaningful conclusions about treatment efficacy.

    It's not just the problem of therapist influence or placebo effect, it's the belief that a score of 22 on the Chalder fatigue questionnaire is significantly better than a score of 27. It means nothing.
     
  17. Caroline Struthers

    Caroline Struthers Senior Member (Voting Rights)

    Messages:
    882
    Location:
    Oxford UK
Exactly. The people who love crunching numbers and meta-analyses wouldn't want to go near a member of the public, let alone a patient. They defer to doctors for all that messy stuff. The doctor/statistician (or information specialist) combo is very, very dangerous. I remember a senior statistician talking in a meeting about how trials showed that children on anti-psychotics (or some other really toxic-sounding drug, can't remember now) put on a lot of weight. I imagine it wasn't clear from the trials (probably done by drug companies), or from systematic reviews using that research, whether the drug helped with their condition either. He looked at me completely blankly when I asked why children were being put on anti-psychotics in the first place. It was as if that wasn't remotely relevant to his clever calculations.
     
    Simbindi, Lilas, Snow Leopard and 9 others like this.
  18. shak8

    shak8 Senior Member (Voting Rights)

    Messages:
    2,292
    Location:
    California

Some researchers (a relative term) have built their careers developing 'valid' instruments (like the Chalder fatigue scale, or the FIQ (Fibromyalgia Impact Questionnaire)) that are 'validated' to be accurate no matter who administers the test, or who the patient is.

One problem is that the instruments are not nuanced enough. And so research paper after research paper is based on them. Meaningless in the end. The instruments need to be dismantled and replaced with... well, I don't know yet.
     
  19. FMMM1

    FMMM1 Senior Member (Voting Rights)

    Messages:
    2,761
I noticed an appraisal of the PACE trial which highlighted that there was no increase in participants' hours worked or participation in education; these kinds of indicators are also highlighted above. In terms of activity levels, actimetry (FitBit-type devices) seems deliverable. However, it requires more money, more effort from researchers, and a break away from the familiar questionnaires. Actimetry should be incentivised by funders like NIH and MRC: if you don't use actimetry (or some other objective indicator), then the data are useless for developing policy. If public funders like NIH and MRC had a policy of not funding unblinded studies with subjective outcome indicators, then I reckon you'd see better studies. Low-quality application = no research funding, and no research funding = no job; that would incentivise researchers to submit better applications.

I think the more challenging measures relate to cognitive functioning, and I don't think there are any simple fixes, e.g. from dementia research. I think a lot can be inferred from increasing hours worked, (re-)participation in education, [EDIT] returning to a normal life.
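The actimetry idea above is straightforward to operationalise. A minimal sketch, using entirely hypothetical numbers (these are invented daily summaries, not data from any real device or trial): compare mean daily active minutes in a baseline week against a follow-up week.

```python
from statistics import mean

# Hypothetical actimetry summaries: minutes of moderate activity per
# day over one week, before and after a treatment. All values invented
# for illustration of the comparison, not taken from any study.
baseline_week = [42, 38, 51, 40, 35, 47, 44]
followup_week = [45, 41, 39, 48, 43, 50, 46]

change = mean(followup_week) - mean(baseline_week)
print(f"baseline {mean(baseline_week):.1f} min/day, "
      f"follow-up {mean(followup_week):.1f} min/day, "
      f"change {change:+.1f}")
```

Unlike a questionnaire score, a change in measured daily activity has a direct physical interpretation, and (if the device output is blinded to participants and assessors) it is much harder to shift through expectation alone.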
     
    Peter Trewhitt and Sean like this.
  20. FMMM1

    FMMM1 Senior Member (Voting Rights)

    Messages:
    2,761
OK, pretty much everyone on this site agrees that unblinded studies with subjective outcome criteria = useless crap (or worse).

The question seems to be why career scientists are publishing systematic reviews derived from crap primary studies. What is the incentive? Maybe the answer (referred to above) is that they (scientists) simply don't care enough to dismiss this as useless crap and move on to something more useful.

The great and the good (GRADE/Cochrane) tout useless crap (systematic reviews of it) as evidence, and NICE has now joined up with Cochrane. OK, we can challenge that, but what is the driver? Is it that gloss is all you need, that it's cheaper than doing something meaningful, that the problem (sick people unlikely to return to their normal life) isn't easy to fix? Something must be done, and this is something (Sir Humphrey).
     
