Bias due to a lack of blinding: a discussion

Discussion in 'Trial design including bias, placebo effect' started by ME/CFS Skeptic, Sep 22, 2019.

  1. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,661
    Location:
    Canada
The Rituximab trial? Probably the best one. The paper doesn't directly contrast it, but frankly this comparison is worthy of a paper in itself. Especially as a comparative paper, it would be accepted by the current NICE request for papers.

For all the gloating by the PACE cheerleaders about it failing, it clearly showed the same thing as PACE did, yet somehow reached the opposite conclusion.
     
  2. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    4,002
    Location:
    Belgium
    Thank you.
     
    Trish likes this.
  3. Simon M

    Simon M Senior Member (Voting Rights)

    Messages:
    995
    Location:
    UK
    @Michiel Tack That was a truly brilliant blog: a beautiful assembly of strong evidence against the use of non-blinded trials with subjective outcomes, and a great story about Louis XVI.

    Just one thing: please don't blog about biomedical research, else I'm out of a job.
     
    Woolie, Nellie, Annamaria and 10 others like this.
  4. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
The key difference being the true professionalism of the Rituximab trial scientists: pursuit of the truth was their primary objective, rather than self-interest. There is no way back for the PACE authors now.
     
    Annamaria, Mithriel, Sean and 3 others like this.
  5. Esther12

    Esther12 Senior Member (Voting Rights)

    Messages:
    4,393
    Thanks for all this Michiel - I haven't been able to read things through yet but I'm looking forward to doing so and seeing what you've dug up.
     
    Simon M, Sean and ME/CFS Skeptic like this.
  6. Cheshire

    Cheshire Moderator Staff Member

    Messages:
    4,675
Thanks @Michiel Tack, this is a brilliant and clear summary of this central question.
     
    Annamaria, rvallee, Simon M and 3 others like this.
  7. Esther12

    Esther12 Senior Member (Voting Rights)

    Messages:
    4,393
    I've not read this new paper from Hróbjartsson, but thought I'd post it here as it has different conclusions (with different methods):

    Impact of blinding on estimated treatment effects in randomised clinical trials: meta-epidemiological study
    BMJ 2020; 368 doi: https://doi.org/10.1136/bmj.l6802 (Published 21 January 2020) Cite this as: BMJ 2020;368:l6802

    Open Access: https://www.bmj.com/content/368/bmj.l6802


    edit: Two linked commentaries:

    Blindsided: challenging the dogma of masking in clinical trials
    BMJ 2020; 368 doi: https://doi.org/10.1136/bmj.m229 (Published 21 January 2020) Cite this as: BMJ 2020;368:m229

    1. Aaron M Drucker, assistant professor
    2. An-Wen Chan, associate professor
    https://www.bmj.com/content/368/bmj.m229
    https://sci-hub.tw/https://www.bmj.com/content/368/bmj.m229

    Fool’s gold? Why blinded trials are not always best
    BMJ 2020; 368 doi: https://doi.org/10.1136/bmj.l6228 (Published 21 January 2020) Cite this as: BMJ 2020;368:l6228

    1. Rohan Anand, doctoral research student
    2. John Norrie, professor
    3. Judy M Bradley, professor
    4. Danny F McAuley, clinical professor
    5. Mike Clarke, professor

    https://www.bmj.com/content/368/bmj.l6228
    https://sci-hub.tw/https://www.bmj.com/content/368/bmj.l6228
     
    Last edited: Jan 22, 2020
  8. Esther12

    Esther12 Senior Member (Voting Rights)

    Messages:
    4,393
    I just had a quick read of this - the results for different analyses have really wide confidence intervals, but seem fairly consistently around zero (Actually, I don't really have any intuitive sense of how close to zero effect these analyses are - they seem much closer than the other figures I've seen though - https://www.bmj.com/content/bmj/368/bmj.l6802/F2.large.jpg.) It seems odd that none of the things assessed were significantly associated with more positive/negative results.

    They do say:

    Seems like a surprising result, to say the least. Sometimes surprising results are right. It would be a lot of work to go through the meta-analyses/studies and check their workings, though.
     
    Last edited: Jan 22, 2020
  9. Snow Leopard

    Snow Leopard Senior Member (Voting Rights)

    Messages:
    3,860
    Location:
    Australia
    Interpret with caution... Their meta-meta analysis is of trials included in Cochrane reviews only. I suspect there is a strong bias against unblinded trials that don't rely on strong objective outcomes in the Cochrane database.

    There is large heterogeneity in the results, suggesting that lack of blinding can lead to bias (consistent with prior meta-analyses), but does not automatically lead to bias.

    Note that they are comparing unblinded trials to blinded trials for a given form of outcome measure (for example, patient-rated outcome measures); they did not compare objective vs subjective outcomes within the same trial, and did not separate mortality from other forms of objective outcome measures. Secondly, many of these trials did not actually check whether blinding was maintained. When drugs have clear effects (including side effects), it is obvious whether you are taking them or not.

    So the effect may be a result of selection biases in the Cochrane meta-analyses.

    These are all the mentions of objective outcomes:

    Yet no data is provided for this analysis.

     
    Last edited: Jan 22, 2020
    MSEsperanza, Simon M, Sean and 3 others like this.
  10. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,175
    Location:
    London, UK
    The agreed measures of quality of evidence for trials in general are:

    1. Well matched controls
    2. Randomisation
    3. Blinding

    This is a trial of trials which has no well matched controls, is unrandomised and unblinded.
    It should not even make it into a meta-analysis.

    I think the authors are either remarkably naïve or disingenuous.
     
    MSEsperanza, Barry, MEMarge and 3 others like this.
  11. Snow Leopard

    Snow Leopard Senior Member (Voting Rights)

    Messages:
    3,860
    Location:
    Australia
    Which study are you commenting on?

    We're discussing the study that was published yesterday that was looking at risks of bias in Cochrane meta-analyses where blinded trials were compared to non-blinded for the same class of outcome measure.
    https://www.bmj.com/content/368/bmj.l6802
     
  12. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,175
    Location:
    London, UK
    Same as you @Snow Leopard. As you point out the controls are not well matched and moreover, the sampling of data is not randomised or blinded for content.
     
  13. Snow Leopard

    Snow Leopard Senior Member (Voting Rights)

    Messages:
    3,860
    Location:
    Australia
    How would you design such a meta-analysis comparing (randomised) unblinded to blinded trials without requiring a whole series of new trials being conducted?
     
    MEMarge likes this.
  14. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,175
    Location:
    London, UK
    No idea, but that does not mean it is any good doing it this way.
     
    rvallee and MEMarge like this.
  15. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    4,002
    Location:
    Belgium
    Thanks for posting this @Esther12

    The editorial and accompanying opinion piece are quite frustrating. I'm afraid this will give researchers a free pass when it comes to risk of bias due to lack of blinding. I suspect that in future, researchers will simply note somewhere in their discussion section that the lack of blinding in their trial is not ideal, but that more recent studies show there is not really an effect of blinding and that "blinded trials are not always best", referencing one of these papers.

    The opinion piece by Anand et al. has sentences like: "Minimising biases with blinding might weaken the ability to predict the future accurately, because blinding is unlikely to be used in routine practice." It seems the authors prefer pragmatic trials that resemble routine clinical practice, and would rather spend the money saved by ditching efforts to properly blind trial participants on more of these pragmatic trials with larger sample sizes. I think that would be a step backwards, a step away from using the scientific method in medicine.

    When it comes to the actual paper by Moustgaard/Higgins/Sterne/Hrobjartsson itself, I agree with Jonathan that the problem lies with the methods used. The approach involves looking at meta-analyses that include both trials that used blinding and trials that didn't, to see whether the effect sizes of the latter are inflated. That's a very rough method, because the trials differed in selection, design and probably even the duration, dose and characteristics of the intervention.

    I think the researchers basically assumed that this was OK: if you take enough meta-analyses, all possible confounders should somehow balance out and a true effect would emerge. Previous meta-epidemiological studies (summarized in this AHRQ report) suggested this was the case. Taken together they were in line with expectations, but the results were always a bit messy and the effect size of the bias was smaller than one would expect. It should be noted that the previous largest study of this kind, the BRANDO project, found the largest effect for lack of double-blinding with subjective/mixed outcomes, with a ROR of 0.78 (0.65 to 0.92). The effects for other sources of bias, such as inadequate randomization or allocation concealment, were smaller. So I expect that the same methodology, if repeated, could also produce studies that 'demonstrate' that randomization and allocation concealment are fool's gold as well.

    I suspect that the main problem with this approach is selection bias: the meta-analyses that include both blinded and unblinded trials are probably those where blinding is considered less of an issue. Areas of medicine where blinding is considered important probably won't have meta-analyses that include unblinded trials, because these are considered inadequate. So if we focus on comparable trials that already exist, some with and some without blinding, we're probably underestimating the effect of blinding overall. I think a better way would be to conduct studies with blinded and unblinded groups in the same trial, so that other factors are controlled for.
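    To make the comparison concrete, here is a rough sketch of the kind of meta-epidemiological calculation being described — entirely invented numbers, and none of the actual paper's weighting machinery: within each meta-analysis, compare a pooled effect from the unblinded trials against a pooled effect from the blinded trials, then average those ratios across meta-analyses.

```python
import math

# Hypothetical log-odds-ratio estimates, grouped by meta-analysis.
# Each entry: (log-ORs from blinded trials, log-ORs from unblinded trials).
# Negative values = beneficial effect. These numbers are made up.
meta_analyses = [
    ([-0.20, -0.35], [-0.50, -0.60]),
    ([-0.10], [-0.15, -0.40]),
    ([-0.30, -0.25, -0.20], [-0.45]),
]

def mean(xs):
    return sum(xs) / len(xs)

# Within each meta-analysis, the log ratio of odds ratios (ROR) compares the
# unblinded pooled estimate to the blinded one; a ROR below 1 here means the
# unblinded trials reported larger (more beneficial) effects.
log_rors = [mean(unblinded) - mean(blinded)
            for blinded, unblinded in meta_analyses]

# Pool across meta-analyses (unweighted here; real analyses weight by
# precision and model heterogeneity between meta-analyses).
pooled_ror = math.exp(mean(log_rors))
print(round(pooled_ror, 2))
```

    The weakness discussed above is visible even in this toy version: the blinded and unblinded trials within each meta-analysis can differ in design, dose and population, so the ROR mixes the effect of blinding with every other difference between those trials.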
     
    Last edited: Jan 22, 2020
  16. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,175
    Location:
    London, UK
    Yes, this is a very bizarre statement that appears to show a complete lack of understanding of how the information is being used. It is a bit like asking why you should use a weighing machine to weigh the sugar for your cake, when there won't be any weighing machine involved when you actually mix the cake.

    I fear that there is a group of people with a very clear agenda to push pragmatic trials as part of an empire building exercise. These people now seem to be in charge of the relevant committees. Politics has always been politics but the fashion for pretending that shoddy procedures are gold standard when we know they are not seems new.

    An interesting possibility is that treatments that do not work are more likely to be tried in unblinded studies, because the researchers realise they might well not work. And of course, if unblinded studies show nothing they are very unlikely to get published, whereas blinded studies that show nothing may be seen as important. Treatments that actually work are probably more likely to be subjected to blinded studies, because the preliminary data make it pretty certain that there will be an effect - and they are also more likely to be based on a sound rationale.
     
  17. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    4,002
    Location:
    Belgium
    Not sure how this would work. If the treatment isn't effective, I would still expect unblinded studies to show exaggerated effect sizes compared to blinded studies testing the same intervention, because of bias. If unblinded studies that show nothing are very unlikely to get published, in contrast to blinded studies, then I would also expect to see inflated effect sizes in the unblinded studies compared to blinded studies testing the same intervention - which is what Moustgaard et al. couldn't find.

    I suspect the main problem is that interventions for which both blinded and unblinded trials exist are also the ones where blinding is considered less important, so they form a subset of medicine where the influence of blinding is smaller than for, say, interventions where all trials are blinded (because it's considered very important to do so).
     
  18. Snow Leopard

    Snow Leopard Senior Member (Voting Rights)

    Messages:
    3,860
    Location:
    Australia
    The accompanying editorials seem to be a bit more opinionated than the tentative conclusions of the authors of the study, namely that practice (blinding) should not change, and that their findings should not be considered generalisable until replicated using other trial databases.

    Nevertheless, despite the title "Fool’s gold? Why blinded trials are not always best", the conclusion was:

    Which brings up a key point: just because a trial is designed to be double-blinded does not mean it lacks bias. On the other hand, I think most people agree that trials where the outcome measure is avoiding mortality (such as trials of cancer treatments) do not need to be blinded.

    The authors have become active on Twitter and cited this paper they published last month:
    Ten questions to consider when interpreting results of a meta‐epidemiological study—the MetaBLIND study as a case
    https://onlinelibrary.wiley.com/doi/abs/10.1002/jrsm.1392

     
    Last edited: Jan 22, 2020
    Sean, MEMarge and ME/CFS Skeptic like this.
  19. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
    I don't think the ORs were necessarily non-significant, regarding the trial outcomes. It was the RORs that were apparently non-significant - the ORs of the blinded trials versus the ORs of the unblinded ones. Even if all the ORs had been 10, the RORs would then have been 1. Unless I'm misunderstanding.
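    A minimal sketch of that arithmetic point, using made-up 2x2 trial counts (none of these numbers come from the paper):

```python
# Illustrative only: hypothetical event counts, not data from MetaBLIND.

def odds_ratio(events_t, total_t, events_c, total_c):
    """Odds ratio for an event in the treatment arm vs the control arm."""
    odds_treatment = events_t / (total_t - events_t)
    odds_control = events_c / (total_c - events_c)
    return odds_treatment / odds_control

# Hypothetical blinded trial: 30/100 events on treatment, 20/100 on control.
or_blinded = odds_ratio(30, 100, 20, 100)

# Hypothetical unblinded trial of the same intervention, with a larger
# apparent effect.
or_unblinded = odds_ratio(40, 100, 20, 100)

# The ratio of odds ratios (ROR) compares the two effect estimates.
ror = or_unblinded / or_blinded

# The key point: a ROR of 1 means the blinded and unblinded estimates agree,
# not that either effect is null -- two ORs of 10 would also give a ROR of 1.
print(round(or_blinded, 2), round(or_unblinded, 2), round(ror, 2))
```

    So a pooled ROR near 1 only says the blinded and unblinded trials produced similar effect estimates, whatever the size of those estimates.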
     
    Esther12 likes this.
  20. Esther12

    Esther12 Senior Member (Voting Rights)

    Messages:
    4,393
    I was having to learn what those stats meant as I was reading the paper, so it's possible I misunderstood, was not clear, or have now misunderstood your post! (To confuse things further, I think I may have linked to a different table from the one I intended.)

    So I was talking about the likelihood of blinding being associated with exaggerated effects estimates in any of the different scenarios they were looking at. Is it right that none of those analyses found a significant result?

    I really need to re-read this paper and think about what their figures really mean.
     
    Barry likes this.
