RoB 2: a revised tool for assessing risk of bias in randomised trials (2019) Sterne et al.

ME/CFS Skeptic

Or you could bin the subjective abstracts and print the entire trial: its methods and, most importantly, its data. That would actually shed light on some of the preposterous conclusions/beliefs that the people publishing these trials come to.
 
Does this seem to make it easier to allow the sort of outcome switching seen in PACE?

Bias in selection of the reported result
5.1 Were the data that produced this result analysed in accordance with a prespecified analysis plan that was finalised before unblinded outcome data were available for analysis? Y/PY N/PN NI
Is the numerical result being assessed likely to have been selected, on the basis of the results, from:
 5.2 ... multiple eligible outcome measurements (eg, scales, definitions, time points) within the outcome domain? N/PN Y/PY NI
 5.3 ... multiple eligible analyses of the data? N/PN Y/PY NI
Risk-of-bias judgment (low/high/some concerns)
Optional: What is the predicted direction of bias due to selection of the reported result?
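For concreteness: the Y/PY/N/PN/NI answers to these signalling questions feed an algorithm that proposes the judgment. Here is a minimal sketch in Python of how the domain 5 mapping appears to work; this is my own approximation of the guidance, not the paper's verbatim algorithm:

```python
# Approximate decision logic for domain 5 ("bias in selection of the
# reported result"), as I read the RoB 2 guidance. Illustrative only.

YES = {"Y", "PY"}   # yes / probably yes
NO = {"N", "PN"}    # no / probably no; "NI" means no information

def domain5_judgment(q5_1: str, q5_2: str, q5_3: str) -> str:
    """Map answers to signalling questions 5.1-5.3 to a proposed judgment."""
    # Any positive indication that the reported result was selected -> high.
    if q5_2 in YES or q5_3 in YES:
        return "high"
    # Prespecified analysis plan and no indication of selection -> low.
    if q5_1 in YES and q5_2 in NO and q5_3 in NO:
        return "low"
    # Everything else (no/unclear prespecification, or missing
    # information about selection) -> some concerns.
    return "some concerns"

print(domain5_judgment("PY", "PN", "PN"))  # low
print(domain5_judgment("NI", "PN", "PN"))  # some concerns
```

Note how much weight the answer to 5.1 carries: an unblinded trial can still reach "low risk" in this domain so long as the assessor accepts the claim that the analysis plan was prespecified.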

It seems they wanted to make it easier to class trials as having a low risk of bias, on the grounds that so long as a trial is randomised it is good:

We expect the refinements we have made to the RoB tool to lead to a greater proportion of trial results being assessed as at low risk of bias, because our algorithms map some circumstances to a low risk of bias when users of the previous tool would typically have assessed them to be at unclear (or even high) risk of bias. This potential difference in judgments in RoB 2 compared with the original tool is particularly the case for unblinded trials, where risk of bias in the effect of assignment to intervention due to deviations from intended interventions might be low despite many users of the original RoB tool assigning a high risk of bias in the corresponding domain. We believe that judgments of low risk of bias should be readily achievable for a randomised trial, a study design that is scientifically strong, well understood, and often well implemented in practice. We hope that RoB 2 will be useful to systematic review authors and those making use of reviews, by providing a coherent framework for understanding and identifying trials at risk of bias. This framework might also help those designing, conducting, and reporting randomised trials to achieve the most reliable findings possible.
 
From their appendix:

Changes to analysis plans that were made before unblinded outcome data were available, or that were clearly unrelated to the results (e.g. due to a broken machine making data collection impossible) do not raise concerns about bias in selection of the reported result.

That makes it easier to spin results.
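Read literally, that appendix rule collapses to a simple predicate with two escape hatches, both of which typically rest on the trialists' own account. A minimal sketch (the function name and flags are my own illustration, not the paper's):

```python
def change_raises_concern(made_before_unblinding: bool,
                          clearly_unrelated_to_results: bool) -> bool:
    """Per the quoted appendix rule, as I read it: a change to the analysis
    plan raises concern only if it came after unblinded outcome data were
    available AND is not clearly unrelated to the results."""
    return not (made_before_unblinding or clearly_unrelated_to_results)
```

Either flag alone clears the change, so a mid-trial outcome switch passes as long as its authors can assert one of them.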
 
Authors' affiliations:

1. Population Health Sciences, Bristol Medical School, University of Bristol, Bristol BS8 2BN, UK
2. NIHR Bristol Biomedical Research Centre, Bristol, UK
3. NIHR CLAHRC West, University Hospitals Bristol NHS Foundation Trust, Bristol, UK
4. School of Public Health and Preventive Medicine, Monash University, Melbourne, Australia
5. METHODS team, Epidemiology and Biostatistics Centre, INSERM UMR 1153, Paris, France
6. Paris Descartes University, Paris, France
7. Cochrane France, Paris, France
8. Population Health Research Institute, St George’s, University of London, London, UK
9. Centre for Reviews and Dissemination, University of York, York, UK
10. Pragmatic Clinical Trials Unit, Centre for Primary Care and Public Health, Queen Mary University of London, London, UK
11. MRC Population Health Research Unit, Clinical Trial Service Unit and Epidemiological Studies Unit, Nuffield Department of Population Health, University of Oxford, Oxford, UK
12. Departments of Epidemiology and Biostatistics, Harvard T H Chan School of Public Health, Harvard-MIT Division of Health Sciences and Technology, Boston, MA, USA
13. Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK
14. Centre for Evidence-Based Medicine Odense, Odense University Hospital, Odense, Denmark
15. Department of Clinical Research, University of Southern Denmark, Odense, Denmark
16. Open Patient data Explorative Network, Odense University Hospital, Odense, Denmark
17. Department of Emergency Medicine, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Alberta, Canada
18. Applied Health Research Centre, Li Ka Shing Knowledge Institute, St Michael’s Hospital, Department of Medicine and Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada
19. Centre for Biostatistics, University of Manchester, Manchester, UK
20. Editorial and Methods Department, Cochrane Central Executive, London, UK
21. Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, USA
22. Translational Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK
23. Nuffield Department of Population Health, University of Oxford, Oxford, UK
24. Centre for Clinical Epidemiology, Lady Davis Institute, Jewish General Hospital, McGill University, Montreal, Quebec, Canada
25. MRC Integrative Epidemiology Unit, University of Bristol, Bristol, UK
26. MRC Clinical Trials Unit, University College London, London, UK
https://www.bmj.com/content/366/bmj.l4898.full?ijkey=gzAdEdyR713TWzf&keytype=ref#
 
I get the strong impression that these quality assessment groups have completely lost sight of the PSYCHOLOGY of bias in scientific studies. Unblinded studies are fine if the endpoints are objective. There seems to be no mention of this, or the fact that bias for subjective measures is about psychology.

A tool like this is at best an approximation to what an intelligent, experienced set of experts would conclude in a given case. It can never be better than that, but it can easily be much worse if the people devising it are not particularly intelligent or experienced - which is almost certainly the case.
 
Are we sure that we're not misunderstanding something? Are they really saying that randomization is sufficient for a clinical trial to achieve a low risk of bias status? Why is there no widespread outrage over this apparent attempt to pass off garbage methodology as good?
 
This isn't marking your own homework, it is revising the marking guidelines to allow the answer you have already given (and intend to keep on giving).

It is absurd that the use of randomised, uncontrolled, unblinded trials with subjective outcomes can be given the green light by someone who practises this methodology themselves.
 
Are we seeing an attempt to protect the vast amounts of research investment into unblinded trials with subjective outcomes, not just in relation to ME, but more widely in relation to CBT, and further in psychology in general?

Er, yes.

Does he realise by working with his mate on this dodgy paper he’s put his professional reputation at risk?

Maybe it was the other way around?
 
Does this seem to make it easier to allow the sort of outcome switching seen in PACE?

It seems they wanted to make it easier to class trials as having a low risk of bias, on the grounds that so long as a trial is randomised it is good:
Well, there it is. Making the dumbening of medical research official, lowering the bar to pave the way for the psychologisation of illness. Never mind that the results are disastrous; I guess that's a minor inconvenience and a sacrifice people are willing to make to be able to launder their personal opinion into "evidence".

This makes as much sense as returning to horses and buggies. How is regressing an entire field of science a good idea? Medicine is in serious need of major reform; this whole self-regulation thing is clearly not working.
 
Are we seeing an attempt to protect the vast amounts of research investment into unblinded trials with subjective outcomes, not just in relation to ME, but more widely in relation to CBT, and further in psychology in general?
Seems like it. The MUS project, and more specifically IAPT, cannot withstand genuine scrutiny and so the bar is being lowered all around to keep them artificially alive. In protecting failure, more failure is being added to the mix.

Rejecting reality and substituting their own.

This could explain the delays to the Cochrane reviews. With this new standard in place, they can be made to seem acceptable.
 