RoB 2: a revised tool for assessing risk of bias in randomised trials (2019) Sterne et al.

Discussion in 'Research methodology news and research' started by ME/CFS Skeptic, Aug 29, 2019.

  1. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,660
    Location:
    Canada
    What crisis of replicability? We'll just make bias OK.

    Crisis averted.
     
  2. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
    Yes. A clear understanding of the changes would be good.
     
    rvallee, Simbindi, Ravn and 1 other person like this.
  3. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
    [my bold]

On the face of it that makes some sense to me. The gold standard for comparison is "a large randomised trial without any flaws". So all we need now is either the authors' definition of that gold standard, or a reference in the paper to a document defining it.

But I see no definition in this paper of any such gold standard for flawless large randomised trials, nor any reference to such a definition. (If I've missed it, please correct me.) Without it the whole paper is worthless. The tool is for identifying bias (discrepancies from a gold standard), yet the gold standard is not identified; so how can the validity of the tool possibly be assessed from this paper? Without a definition it is merely a muddy standard.
     
    Last edited: Sep 1, 2019
  4. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,175
    Location:
    London, UK
I find it very opaque, but as far as I can see there is no reference to the issue of whether or not outcome measures are subjective. Bias in unblinded studies is not about deviation from anything, but about a weighting towards one of two options at any stage. A subjective outcome is one where there are options.
     
  5. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,175
    Location:
    London, UK
    So, as Michiel points out, subjective bias comes under domain 4. But the examples given are very weird:

When there are strong levels of belief in either beneficial or harmful effects of the intervention, it is more likely that the outcome was influenced by knowledge of the intervention received. Examples may include patient-reported symptoms in trials of homeopathy, or assessments of recovery of function by a physiotherapist who delivered the intervention.

Why would homeopathy be any different from rituximab or ibuprofen or any other treatment? The implication is that bias will only be a problem when patients have silly ideas about quack treatments. That fails to take into account the fact that they might have been encouraged to think a conventional treatment worked. It also fails to take into account the desire to please the investigator. The physio example is fair, but why does the same not apply when the assessment is done by the patient?

    The psychology has been completely air-brushed out.

The irony is that if the person doing the RoB assessment is inexperienced enough in running trials to need to be told what algorithm to follow, they are not going to know that bias is inevitable. It is a bit like writing a recipe for people who have never cooked and putting 'fry the onions just enough'.
     
  6. Ravn

    Ravn Senior Member (Voting Rights)

    Messages:
    2,181
    Location:
    Aotearoa New Zealand
    To date no rapid responses have been published - are we the only ones left who care about bias in science?
     
  7. Sean

    Sean Moderator Staff Member

    Messages:
    8,064
    Location:
    Australia
If I were them I would also be trying desperately to convince the world that what I was doing was nothing like homeopathy.

    No sirree. This is genuine establishment-certified pseudo-scientific humbug, not that fake fringe alt-med woo.
     
    MSEsperanza, alktipping, Joh and 7 others like this.
  8. Amw66

    Amw66 Senior Member (Voting Rights)

    Messages:
    6,769
The penny may have dropped re how many policies, careers and egos would be affected if bias is addressed...
     
    Ravn, MSEsperanza, Sean and 6 others like this.
  9. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
I think another key source of bias is hope. A person desperately wishes for an intervention to work, and to self-report otherwise would destroy that hope. It is not a conscious thing, nor a dishonest thing, just normal human psychology. But it is very real and can be very significant.
     
    Daisybell, Ravn, MSEsperanza and 16 others like this.
  10. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,175
    Location:
    London, UK
  11. Sean

    Sean Moderator Staff Member

    Messages:
    8,064
    Location:
    Australia
    "The first principle is that you must not fool yourself – and you are the easiest person to fool."
    Richard Feynman

    It is excusable in desperate patients, even if not a good thing.

    Completely unacceptable in supposedly senior competent honest researchers and clinicians.
     
    Last edited: Sep 1, 2019
    Ravn, MSEsperanza, alktipping and 5 others like this.
  12. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    4,002
    Location:
    Belgium
From the outset, it seems that RoB 2 is quite friendly to flawed trials, and not only on the issue of blinding.

For example, if the trial analysis was not in accordance with a pre-specified plan (question 5.1), that only raises 'some concerns'. So if the authors publish a protocol and then don't stick to it (for example, they change primary and secondary outcomes) because the pre-specified plan doesn't give good results, that's no reason to say the trial has a high risk of bias. And as Esther pointed out, changes that were made before unblinded outcome data were available are seen as no problem at all. Such a trial can be rated as a low risk of bias trial. But what about large unblinded trials where researchers get an indication of the direction the main outcomes are going before looking at the data? Apparently RoB 2 doesn't see this as a problem, as long as researchers don't select from multiple possible analyses or outcome measures.

Regarding missing data (question 3.1), that isn't considered a problem unless the assessor thinks that 'missingness' depends on the 'true value', for example if it is related to the patients' health status. They write: "If all missing outcome data occurred for documented reasons that are unrelated to the outcome then the risk of bias due to missing outcome data will be low (for example, failure of a measuring device or interruptions to routine data collection)." In that case, a trial can be rated as low risk of bias despite having a lot of missing data. That seems like a rather mild judgement as well, and it leaves a lot of room for interpretation by the assessor (the person doing the review). If he/she thinks the missingness isn't related to the true value, then he/she can rate trials as having low risk of bias and the problem of having a lot of missing data would be out of the way. A reader of a review who only looks at the colourful overview of risk of bias would see a green light and get the impression that there was no issue with missing data.

    I also find it weird that if the allocation sequence wasn't random this only raises 'some concerns'. So randomization isn't that important after all?

I think the 'user feedback' the RoB 2 team addressed was mostly researchers complaining that their wonderful trials were rated as high risk of bias. Of course, I have never run a clinical trial, but it doesn't seem too unrealistic to require that researchers properly randomize and conceal the randomization (and report it adequately in their paper), that they publish a protocol and stick with it, that they use intention-to-treat analysis and blinded assessors, etc. I don't think those are unreasonable demands or that it would cost too much to do this. I think it's mostly a question of professionalism and a tradition of accepting these standards as necessary.
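
    To make the pattern concrete, here is a rough Python sketch of the three rules as I read them. The question numbers come from the paper, but the function names, inputs and return strings are my own simplification, not the official RoB 2 algorithm tables, which contain many more signalling questions and branches.

    ```python
    # Rough sketch of the three mappings described above, as I read them.
    # The question numbers come from the paper; everything else (names,
    # return strings) is my own simplification of the real algorithm.

    def domain5_prespecified_plan(analysis_followed_plan: bool) -> str:
        # Question 5.1 as read above: deviating from the pre-specified
        # analysis plan only yields 'some concerns', never 'high risk'.
        return "low risk" if analysis_followed_plan else "some concerns"

    def domain3_missing_data(missingness_related_to_true_value: bool) -> str:
        # Question 3.1 as read above: lots of missing data is still 'low
        # risk' as long as the assessor judges the missingness unrelated
        # to the true outcome value (e.g. a broken measuring device).
        # The other branch leads to further questions in the real tool,
        # so 'high risk' here is a simplification.
        return "high risk" if missingness_related_to_true_value else "low risk"

    def domain1_allocation(sequence_was_random: bool) -> str:
        # As read above: a non-random allocation sequence only raises
        # 'some concerns'.
        return "low risk" if sequence_was_random else "some concerns"

    # A trial that deviated from its protocol, lost many patients for
    # 'documented reasons', and used a non-random sequence never scores
    # worse than 'some concerns' on these three rules:
    print(domain5_prespecified_plan(False))   # some concerns
    print(domain3_missing_data(False))        # low risk
    print(domain1_allocation(False))          # some concerns
    ```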
     
    Arnie Pye, Ravn, MSEsperanza and 12 others like this.
  13. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
    Basically moving the goalpost to wherever they want to kick the ball.
     
    Ravn, alktipping, Chezboo and 6 others like this.
  14. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    4,002
    Location:
    Belgium
I also don't quite understand the issue of not reporting certain outcomes. For example, if an unblinded trial has subjective and objective outcomes and the former but not the latter show clinically significant improvements, then the researchers could publish only the subjective outcomes and never report the objective ones. That seems like a major source of bias as well.

    But apparently this isn't addressed in RoB 2. It is not part of Domain 5: Risk of bias in selection of the reported result. In the overview of changes compared to the previous version, they explain:
So are they saying that not reporting certain outcomes is not a problem with that trial, nor an indication of bias by those researchers, but only a problem for the review because certain outcomes were not available? I would disagree with that. If researchers leave out certain outcomes, that would make me very suspicious of their trial and of the outcomes that they did report... A hypothetical sketch of that blind spot follows below.
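
    Here is a hypothetical illustration in Python: if a Domain 5 style check is defined per *reported* result, an outcome that was measured but never published simply never enters the assessment. The trial data and the check below are invented for this example; the real Domain 5 signalling questions are more detailed.

    ```python
    # Invented example data: one reported subjective outcome, one
    # measured-but-unreported objective outcome.
    measured_outcomes = {
        "fatigue questionnaire (subjective)": {
            "reported": True,
            "selected_from_multiple_analyses": False,
        },
        "6-minute walk test (objective)": {
            "reported": False,  # measured, but never published
            "selected_from_multiple_analyses": False,
        },
    }

    def domain5_judgements(outcomes):
        # Only reported results can be assessed for selection *within*
        # the result (e.g. cherry-picking one of several analyses).
        return {
            name: ("some concerns" if info["selected_from_multiple_analyses"]
                   else "low risk")
            for name, info in outcomes.items()
            if info["reported"]
        }

    print(domain5_judgements(measured_outcomes))
    # {'fatigue questionnaire (subjective)': 'low risk'}
    # The unreported objective outcome yields no judgement at all, so
    # the colourful risk-of-bias overview shows only green for what was
    # actually published.
    ```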
     
    Last edited: Sep 1, 2019
  15. Sean

    Sean Moderator Staff Member

    Messages:
    8,064
    Location:
    Australia
    Or to wherever the ball randomly ended up after they kicked it with blindfolds on.
     
  16. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
    Agree entirely. It is not simply about the outcomes being missing - it is about why they are missing. A highly plausible reason could be the authors seeking to bias the reporting of a trial's results. Removing this check from the tool is smoke and mirrors of the flimsiest kind.

    These people seem to be reverse-engineering the bias assessment tool in order to bias the reporting of bias ... to their own ends. Where on earth is the rigour needed to ensure cr*p like this doesn't get used in earnest?! Who oversees this?
     
    Ravn, alktipping, Annamaria and 7 others like this.
  17. Trish

    Trish Moderator Staff Member

    Messages:
    55,414
    Location:
    UK
  18. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,175
    Location:
    London, UK
Another problem I see with this detailed specification of what is and what is not considered high risk of bias is that researchers will immediately be checking the list before they send off their manuscripts, to make sure the manuscripts are worded in exactly the right way to avoid the obstacles. We are likely to end up like food labelling - Pork sausage, contains at least 10% pork (or at least that is what the abattoir man said).
     
  19. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
Yes, it becomes an exercise in linguistic agility rather than sound trial design.
     
    Ravn, MSEsperanza, alktipping and 9 others like this.
  20. Mithriel

    Mithriel Senior Member (Voting Rights)

    Messages:
    2,816
As is usual with these authors, their language is so dense and convoluted that it needs parsing out to find what they mean. When I was starting out at university forty-odd years ago there was a movement towards clear writing in scientific papers, which has not reached them yet. Some papers are hard to understand because of the concepts in them, but here it looks as if it is designed to obscure.

Bias is a simple thing which can be spoken about clearly. The aim of minimising bias is to check how true the findings are, almost a measure of the range of error. It is impossible to be free of bias altogether no matter how large your trial; it is built into any trial design. That is not a bad thing if it is acknowledged. Subjective results may be all you can get, but that doesn't matter as long as they are not given weight over objective outcomes, as happened with the PACE trial.

Rules of bias should be straightforward to understand, so that they can be put into practice in trial design and so that readers are able to judge the results.

Anyone setting up such rules should start by looking at where biases can creep in and how best to avoid them, but this paper does none of that. Instead it minimises everything to hide the problems in their own research.
     
