RoB 2: a revised tool for assessing risk of bias in randomised trials (2019) Sterne et al.

Discussion in 'Research methodology news and research' started by ME/CFS Skeptic, Aug 29, 2019.

  1. dave30th

    dave30th Senior Member (Voting Rights)

    Messages:
    2,447
    You can believe this will feature prominently in upcoming blog posts and perhaps open letters.
     
  2. NelliePledge

    NelliePledge Moderator Staff Member

    Messages:
    14,837
    Location:
    UK West Midlands
    Was hoping so :thumbup:
     
  3. Sean

    Sean Moderator Staff Member

    Messages:
    8,064
    Location:
    Australia
    I think we are.

    Literally thousands of incomes, careers, empires, and egos – and, of course, some major government policies – depend critically on this 'science' not going down.

    It is why the CBT/BPS crowd have managed to survive and prosper for so long. Sure as shit ain't due to the quality of their science and ethics.
     
  4. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,947
    Location:
    betwixt and between
    The affected egos surely could help each other with a bit of CBT?

    On a more serious note, if they just wanted to be good and helpful clinicians, therapists or researchers, they wouldn't have to be afraid of having nothing to apply their professional education to.

    There is a lot of urgent stuff to do and to investigate for psychiatrists and psychologists.

    In Germany at least, there are long waiting lists for patients who are in need of help from these professions (depression, addiction etc.)

    And so many basics of psychiatric or mental illness and neuropsychiatric ailments are not well understood or not well treatable, or the drugs have severe side effects; so there actually is a need for good research in the fields of psychology, psychiatry and neuropsychiatry.

    There is a sufficient amount of work to do in these fields without the need to invent problems that people don't have, or to design fanciful illness models that have nothing to do with ill people's realities.
     
    Last edited: Aug 31, 2019
  5. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,947
    Location:
    betwixt and between
    Could anyone help me understand this example?

    Are there machines for data collection that cannot be repaired, or that would be too expensive to replace, so that a broken machine would require utterly different methods of data collection, or the collection of other data?
     
  6. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,175
    Location:
    London, UK
    I suspect this example has been dreamt up as a case where there would not be bias because the breakdown of a machine is not something you organise deliberately in order to stop collecting data.

    The problem is that it illustrates very clearly the naivety of anyone thinking this is bias free. If you didn't like the way the data looked, you could say you could not afford to replace the machine. If the data looked the way you wanted, you could apply for a top-up grant and carry on.

    What is so devastating here is the total lack of understanding of Feynman's phrase - the easiest person to fool is yourself.
     
  7. Lucibee

    Lucibee Senior Member (Voting Rights)

    Messages:
    1,498
    Location:
    Mid-Wales
    Does "broken machines" include actimeters? (asking for a friend)
     
  8. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,563
    Location:
    UK
    Sterne was also an author on Crawley's school attendance study which was done without ethical approval
     
  9. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,563
    Location:
    UK
    In a sense it would be difficult since it would need to be a set of broken machines. If it were a single one (or even a few) then I assume that would be considered as missing data.

    I guess what could go wrong is the data collection itself: if, for example, the server collecting the data had issues, data from all devices could be lost.

    I can see the argument for broken machines. But at the same time, a trial should have a quality and resilience plan and a careful design, so that broken equipment doesn't lead to failures.
     
  10. Trish

    Trish Moderator Staff Member

    Messages:
    55,414
    Location:
    UK
    I think the broken machines example is a red herring put in deliberately to divert attention from the much more dubious 'changes to analysis plan made before unblinded outcome data were available...'. It looks to me like they were trying to slip that through unnoticed, and it's a serious problem. PACE and SMILE both did it.
     
  11. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,175
    Location:
    London, UK
    It really is absurd that they should be saying this is allowed or so and so is allowed as if they were in a position to lay down rules when what matters is simply whether or not bias might have been slipped in.

    Changes to an analysis plan are only changes if the study has started, and if a study has started in an unblinded trial then outcome data are available. In fact some aspects of outcome data are available even in blinded trials - for instance, that nobody so far has shown 50% improvement.

    In PACE they would not have known in advance where the functional grade results would cluster. But once they knew where they were clustering they could change the analysis to maximise the chance of systematic expectation bias generating a statistically significant difference.

    And so on and on as we have said before.
     
  12. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,659
    Location:
    Canada
    It's pretty clear by now that psychiatry is underperforming relative to the rest of medicine, has not delivered reliable outcomes, makes very few predictions and has in fact barely progressed at all beyond the odd fortunate guesswork. Which is frustrating, obviously, as there rarely are those moments that every physician lives for, the undeniable recovery, where the patient leaps with joy and sings "I'M CUUUUUURED". That rarely happens, if at all, in psychiatry, which doesn't even have reliable objective metrics or even basic definitions for what "recovery" even means.

    Just to emphasize how mediocre progress in psychiatry is, not long ago, Wessely tweeted about what he saw as the most significant advances that happened in ME research and listed two things (that I remember anyway): that mood is not a predictor of disease severity (a stupid question no one in their right mind thought was relevant) and the creation of CFS, a very strong contender in the top 10 of worst blunders in the whole history of medicine.

    This is 3 decades of research, millions spent promoting an entire new paradigm and this is the sum of all the progress made there: one useless answer to a question no one should have ever asked and a major regression negatively impacting millions and stalling progress for a whole generation. That's harsh. That's very demoralizing without significant doses of lying-to-yourself.

    So it looks like within the field of psychiatry, things are moving to what they see as "easier" conditions, away from the frustrating stagnation that pervades the whole field. Unfortunately, it's moving onto those "easier" conditions with the exact same ideology that has systematically failed over and over again, despite having already massively failed thanks to Wessely and his ilk.

    Which makes the project of steering chronically ill people away from traditional medicine and onto the least effective, least reliable and least objective specialty of medicine criminally insane. The field of mental illness has not delivered anything comparable to the rest of medicine, and the idea is to steer people away from what works, though slowly and expensively, and onto what doesn't. Complete madness.
     
    Last edited: Aug 31, 2019
  13. Kalliope

    Kalliope Senior Member (Voting Rights)

    Messages:
    6,570
    Location:
    Norway
    David Tuller: Trial By Error: Lead Author of Cochrane's New Bias Guideline is LP Study Co-Author

    Why did Professor Sterne sign off on the Lightning Process trial paper, in which critical information about retrospective registration and outcome-swapping were withheld from the public version of events? Is Professor Sterne aware that prospective registration and pre-designated outcomes are considered essential in reducing the “risk of bias”? Regarding the school absence paper, does Professor Sterne really believe that a study featuring a hypothesis, generalizable conclusions, and in-person interviews conducted by the lead investigator can reasonably be defined as “service evaluation” and appropriately exempted from ethical review?

    These are legitimate questions not only for Professor Sterne but for Cochrane, BMJ, and Professor Sterne’s many co-authors on this “risk of bias” revision. Given that both the Lightning Process and school absence studies are fraught with the kinds of methodological and ethical irregularities that should be obvious to first-year epidemiology students, it is unclear how Professor Sterne can currently serve as a credible authority on anything.
     
  14. Kalliope

    Kalliope Senior Member (Voting Rights)

    Messages:
    6,570
    Location:
    Norway
    https://twitter.com/user/status/1167897023015636993
     
  15. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    4,002
    Location:
    Belgium
    The revised tool seems to make it quite easy for unblinded trials to be rated as low risk of bias. The paper reads:
    More information is given in the supplementary material. Most of the risk of bias due to a lack of blinding is assessed under Domain 2: Risk of bias due to deviations from the intended interventions. Suppose you have a trial where both the patients and the personnel delivering the treatments are aware of the intervention group: in other words, they know who is getting the intervention and who isn't. According to RoB 2 this isn't much of a problem, as long as "no deviations from intended intervention arose because of the trial context." The term 'trial context' refers to:
    In other words, it doesn't refer to expectation bias directly. It doesn't refer to patients answering questionnaires differently because either they or their therapists know they are in the intervention group. And as long as "no deviations from intended intervention arose because of the trial context", the trial can be rated low risk of bias for this domain, despite being unblinded.

    The figure below shows the algorithm: questions 2.1 and 2.2 are not considered a problem if the answer to 2.3 is 'probably no'.
    [Figure: RoB 2 signalling questions algorithm for Domain 2]


    There is one other aspect of blinding in Domain 4: Risk of bias in measurement of the outcome. Here they determine whether outcome assessors were aware of the intervention received (for participant-reported outcomes, the outcome assessor is the study participant). Once again the judgement is mild: there is only a high risk of bias if it is likely that the assessment was influenced by knowledge of the intervention. The example they give of this is when a physiotherapist who delivered the intervention also makes the assessment of recovery.
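    The Domain 2 logic described above can be sketched as a small decision function. This is my own illustration, not code from the paper: the question numbering (2.1, 2.2, 2.3) follows the description in this post, but the answer codes and the final branch are simplified, since the full RoB 2 algorithm has further signalling questions and a 'some concerns' tier.

```python
# Simplified sketch of the RoB 2 Domain 2 branch discussed above
# (deviations from intended interventions, effect of assignment).
# Only the path described in the post is modelled; the real tool
# asks more questions before settling on a final judgement.

YES, PROB_YES, PROB_NO, NO = "Y", "PY", "PN", "N"

def domain2_rating(q2_1, q2_2, q2_3):
    """q2_1: were participants aware of their assigned intervention?
       q2_2: were carers/personnel aware of the assigned intervention?
       q2_3: were there deviations from the intended intervention
             that arose because of the trial context?"""
    blinded = q2_1 in (NO, PROB_NO) and q2_2 in (NO, PROB_NO)
    if blinded:
        return "low"
    # Unblinded trial: per the description above, awareness alone is
    # not penalised, as long as no context-driven deviations arose.
    if q2_3 in (NO, PROB_NO):
        return "low"
    return "high or some concerns (further questions apply)"

# A fully unblinded trial can still come out as low risk of bias:
print(domain2_rating(YES, YES, PROB_NO))  # -> low
```

    This is the point being made in the post: expectation bias from unblinded, self-reported outcomes never enters the 2.1/2.2/2.3 pathway, so the "low" branch is reachable without any blinding at all.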
     
    Last edited: Aug 31, 2019
  16. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,175
    Location:
    London, UK
    It makes one wonder whether the people involved have ever done an experiment wanting to know the right answer (rather than the answer they wanted).
     
  17. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
    Sadly ironic they exhibit so much bias developing a tool for measuring bias. But not in the least surprising.
     
  18. Snowdrop

    Snowdrop Senior Member (Voting Rights)

    Messages:
    2,134
    Location:
    Canada
    I think these guys urgently need to attend an Unblinded Researcher's Anonymous meeting to rid them of this addiction.

    I expect to hear from them when they get to steps 8 and 9.
     
  19. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
    I understand the words of 2.3, but not really what it means: what counts as "deviations that arose because of the trial context"? I'm trying to understand how answering N/PN to that means low risk, especially as subjective outcomes are ignored in reaching that low-risk conclusion, despite the trial being fully unblinded.
     
  20. Esther12

    Esther12 Senior Member (Voting Rights)

    Messages:
    4,393
    This is what they say about the changes to 2:

    It sounds like they completely changed it from something that provided useful information about the risk of bias to something that limits assessment to only a small sub-section of the reasons for concern about trial design increasing the risk of bias.

    This is the (very brief) summary provided in the 2011 BMJ paper on their risk of bias tool:

    https://www.bmj.com/highwire/markup...postprocessors=highwire_figures,highwire_math

    https://www.bmj.com/content/343/bmj...ndmd&int_medium=cpc&int_campaign=usage-042019

    Has anyone seen the full text of the earlier risk of bias tool (for which Sterne was also a co-author)?
     
