Bias due to a lack of blinding: a discussion

Discussion in 'Trial design including bias, placebo effect' started by ME/CFS Skeptic, Sep 22, 2019.

  1. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    4,002
    Location:
    Belgium
    Yes, that's how I understood it (see figure 2 in the paper).

    The blinding of patients in trials with patient-reported outcomes came closest to finding an effect, with a ratio of odds ratios (ROR) of 0.91 and a 95% confidence interval of 0.61 to 1.34. The confidence intervals are quite wide, so it's possible that future replication attempts will find a significant effect and that this selection of meta-analyses happened to be one that showed no effect by coincidence. All in all, the number of meta-analyses for each analysis wasn't that big; for example, 18 for blinding of patients in trials with patient-reported outcomes. It could be that two or three of these were misleading (the unblinded trials found low effects for other reasons) and skewed the analysis.
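
    As a back-of-the-envelope illustration (my own sketch in Python, not anything from the paper itself), you can unpack just how wide that interval really is:

```python
import math

# Reported in the paper for blinding of patients, patient-reported
# outcomes: ROR 0.91, 95% CI 0.61 to 1.34.
ror, lo, hi = 0.91, 0.61, 1.34

# For ratio measures the CI is symmetric on the log scale:
# CI = exp(log(ror) +/- 1.96 * se), so the standard error can be
# recovered from the CI width.
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
print(f"standard error on the log scale: {se:.2f}")  # ~0.20

# Under the usual convention (ROR < 1 meaning more optimistic effect
# estimates in the non-blinded trials), the interval is compatible
# with anything from substantial exaggeration (0.61) to substantial
# understatement (1.34), so "no significant difference" is far from
# demonstrating "no bias".
print(f"compatible RORs: {lo} to {hi}")
```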

    The supplementary material gives an overview of the individual meta-analyses.

    [Attached image: upload_2020-1-23_11-51-12.png]

    Would like to have a look at a few examples of blinded and non-blinded comparisons to see what's happening here, although that will be a lot of work (there were 132 trials contributing to this comparison). Below is what the 18 reviews are about (found by googling the codes):

    [Attached image: upload_2020-1-23_11-52-33.png]
     
    Last edited: Jan 23, 2020
    Woolie, Esther12 and Snow Leopard like this.
  2. Lucibee

    Lucibee Senior Member (Voting Rights)

    Messages:
    1,498
    Location:
    Mid-Wales
    I'm not sure how it would be possible to adequately blind a lot of these trials anyway (in terms of the patient knowing whether they received the intervention or not). The potential heterogeneity is huge. Grouping them by outcome seems to miss the point.
     
    Woolie, MEMarge, Sean and 2 others like this.
  3. Sarah

    Sarah Senior Member (Voting Rights)

    Messages:
    1,510
    Jaybee00, Esther12 and ME/CFS Skeptic like this.
  4. Esther12

    Esther12 Senior Member (Voting Rights)

    Messages:
    4,393
    Thanks. That's really useful.

    I just looked at the first participant blinded study in the MS review: https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD009131.pub3/epdf/full

    That was Brissart 2012:

    https://sci-hub.tw/https://www.tandfonline.com/doi/abs/10.1080/13554794.2012.701644

    It's not clear to me why the participants were rated as blinded:

    Patients were randomised into one of these two groups:

    To me (after a five-minute skim) it looks like the Cochrane reviewers were wrong to class participants as blinded in that trial, but that's not to say that the authors of this new BMJ blinding review classed these participants as blinded: they said that they did their own checking, contacting authors when things were not clear. That's made me realise that it's going to be even more difficult to read these results in the context of the original trials, as we only have info on how the individual trials were assessed by the Cochrane review authors, not by the new BMJ paper's authors.
     
    Last edited: Jan 23, 2020
    Woolie, Lucibee, Sarah and 2 others like this.
  5. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,175
    Location:
    London, UK
    I think Fiona Godlee's editorial illustrates the hopelessness of the current academic situation - where 'informed opinion' is created by people who lack any common sense understanding of the material. This seems to be the world we are now in - presumably because motivations have changed. The old motivation for being rigorous in biomedical science has presumably disappeared under the forces of business interest.

    If the stalwarts here can make enough sense of this study to find obvious holes, I welcome that. I am too old and tired, I am afraid.

    And we should always remember E12's original comment to me that for therapist-delivered treatments for ME the problem of lack of blinding is as acute as it could possibly be, because the treatment is specifically designed to induce subjective bias. At least that is written in black and white in what will be my published statement to NICE. (In addition, of course, to being out there in published studies such as Wilshire et al and Geraghty.)
     
  6. Lucibee

    Lucibee Senior Member (Voting Rights)

    Messages:
    1,498
    Location:
    Mid-Wales
    The more I read this, the more disturbed I get. Now looking at the attached commentary, 'Fool's gold? Why blinded trials are not always best': the majority of the examples they use to argue that blinding is bad are instances where blinding *hasn't worked* (for whatever reason). That's not the same thing. Sometimes it is really hard to find an adequate placebo that does the job properly - but that's a separate issue. In those instances, it is important to take other steps to try to minimise bias - like not relying on subjective outcomes, for example. *ahum*

    I had a similar (frustrating) experience while teaching med students about RCTs when I was at the Royal Free. My teaching assistant kept undermining the session by saying that actually RCTs were not good trials at all, because the trialists could cheat the randomisation process and pick which patients got the intervention (apparently this is what the psychiatry professor she worked for was doing all the time!!!). That's not the fault of the methodology though - that's the fault of the trialist!

    I'm quite amazed that so far there are no rapid responses to the contrary. What is going on???!
     
  7. Milo

    Milo Senior Member (Voting Rights)

    Messages:
    2,138
    I agree @Lucibee. One good example of something you can't really blind is IV fluids. Some people swear it helps them, while others are simply sipping from their cup. You can't blind IV therapy, because you know if it's going in, and sometimes you can tell whether it's fast or slow.
     
    MEMarge and Trish like this.
  8. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,175
    Location:
    London, UK
    It's an organised campaign by activists, @Lucibee.

    Trials bring in money, so universities set up trial units with statisticians and trialists to make money. In order to make more money you need to do more trials, so trials need to be freed up from regulation.
     
  9. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,175
    Location:
    London, UK
    I think it may be very relevant that the 18 Cochrane reviews mentioned above are pretty much all looking at rather obscure interventions that are pretty unlikely to have any great value in the long term. They are fiddling-around-the-edges interventions - not because the conditions being treated are unimportant, but because the treatments are a priori pretty unlikely to matter that much.

    Why are there no studies of things that work, simply and obviously? Maybe because meta-analyses by their nature deal with things that don't work well, and whose effectiveness is therefore hard to prove in a single trial.

    This is all about making it easier to do worthless trials, basically.
     
  10. Hoopoe

    Hoopoe Senior Member (Voting Rights)

    Messages:
    5,425
    The methodology will be as poor as is necessary to deliver the desired results.
     
  11. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
    You cannot blind patients to the fact of receiving IV fluids, but you can blind them to whether they are being given any active medication or just placebo, the same as with other medication.
     
    Invisible Woman, Saz94 and Mithriel like this.
  12. Esther12

    Esther12 Senior Member (Voting Rights)

    Messages:
    4,393
    I think that everyone should be cautious about the way this paper is assessed until it has been really dug into. There are fair reasons for concern that it could lead to problems being overlooked, and Sterne's involvement is going to lead to scepticism, but it's worth trying to avoid getting ahead of ourselves.
     
    MSEsperanza and Sean like this.
  13. Andy

    Andy Committee Member

    Messages:
    23,034
    Location:
    Hampshire, UK
  14. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
    I think this paper is based on a flawed premise: that if, on average, blinded trials show broadly similar treatment effectiveness to unblinded trials, then blinding is potentially superfluous.

    To me this is akin to saying that if, on average, aircraft with rotor blades show broadly similar flying capability to aircraft without rotor blades, then rotor blades on aircraft are potentially unnecessary.

    In the same way that some aircraft - helicopters - are designed with rotor blades for very good reason, so also some trials are designed with blinding for very good reason. And if we assume that a reasonable majority of unblinded trials are well designed by competent researchers, why shouldn't those trials show similar treatment effectiveness to blinded trials also designed by competent researchers?

    If you want to assess whether blinding is necessary or not, you won't prove anything by comparing trials properly designed to be blinded against trials properly designed to be unblinded. You would need to compare trials designed to be blinded against those same trials run without blinding.
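
    As a toy illustration of that selection problem (my own sketch with made-up numbers, nothing from the paper), suppose competent trialists who forgo blinding usually compensate by choosing objective outcomes:

```python
import random

random.seed(1)

# Toy model (assumed numbers, not the paper's data): reported effect
# = true effect + optimism bias, where the bias only arises when a
# trial is unblinded AND relies on a subjective outcome.
TRUE_EFFECT = 0.30
BIAS = 0.40  # assumed inflation in unblinded, subjective-outcome trials

def reported_effect(blinded, subjective):
    bias = BIAS if (not blinded and subjective) else 0.0
    return TRUE_EFFECT + bias + random.gauss(0, 0.10)

blinded = [reported_effect(True, False) for _ in range(2000)]

# Well-designed unblinded trials mostly choose objective outcomes.
unblinded, subjective_flags = [], []
for _ in range(2000):
    subjective = random.random() < 0.10
    unblinded.append(reported_effect(False, subjective))
    subjective_flags.append(subjective)

mean = lambda xs: sum(xs) / len(xs)
subgroup = [e for e, s in zip(unblinded, subjective_flags) if s]
print(f"blinded trials:          {mean(blinded):.2f}")    # ~0.30
print(f"all unblinded trials:    {mean(unblinded):.2f}")  # ~0.34
print(f"unblinded + subjective:  {mean(subgroup):.2f}")   # ~0.70
```

    The pooled blinded-versus-unblinded comparison looks nearly null, even though blinding matters enormously in the subset of trials where it was actually needed.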

    I'm not sure this paper proves anything with regard to badly designed unblinded trials: trials that should have been blinded, or used objective outcomes, but did not.
     
    Last edited: Jan 24, 2020
  15. Esther12

    Esther12 Senior Member (Voting Rights)

    Messages:
    4,393
    Some quick notes on the second trial where the MS Cochrane review rated participants as being blinded, Cerasa 2012: 'Computer-Assisted Cognitive Rehabilitation of Attention Deficits for Multiple Sclerosis: A Randomized Trial With fMRI Correlates'. I'm hoping to find the time to go through them all, and then maybe move on to another review. Sorry if that clogs this thread up a bit.

    https://journals.sagepub.com/doi/full/10.1177/1545968312465194?url_ver=Z39.88-2003&rfr_id=ori:rid:crossref.org&rfr_dat=cr_pub=pubmed

    They describe it as a double-blind study and made some real efforts to limit problems with bias. But when the participants are doing quite different tasks, it's difficult to know to what extent this should be viewed as being 'blind' to allocation. This made me think of a potential source of bias: might those particularly keen to promote their results also be particularly willing to class their trial as double-blind? (edit: I'm not sure if I should say this when it's such a prejudice-driven hunch, but there were some things about this paper that made me feel like they were really trying to sell their work. They were from an Italian uni, so that could just be a different style of writing in English?)

    The control group used an in-house programme at home, while the treatment group used a commercially designed and sold software package at the clinical centre. Is that really double-blind?

    The authors of the paper speculate as to why they may have better results than some other trials:

    They were particularly selective with their participants:

    The only outcome measure that seemed to have a statistically significant result was the Stroop test, but that seemed to be a relatively big change, and there were only 26 participants in total. Also, it's certainly possible to imagine a computer-game-like training package improving results on the computer-game-like Stroop test without leading to real improvements in patients' wider lives. It's difficult to judge the extent to which the 'treatment' computer package was training people with programmes similar to the Stroop test, but the control package sounded more simplistic (and boring).
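
    As a rough sanity check on that sample size (my own sketch; the 13-per-arm split and the use of statsmodels' power calculator are my assumptions, not anything from the trial report):

```python
from statsmodels.stats.power import TTestIndPower

# With ~13 participants per arm (26 in total), solve for the smallest
# standardised effect (Cohen's d) detectable at the usual thresholds.
detectable = TTestIndPower().solve_power(
    effect_size=None,  # solve for the minimum detectable effect size
    nobs1=13,          # assumed participants in the first group
    alpha=0.05,
    power=0.80,
)
print(f"minimum detectable Cohen's d: {detectable:.2f}")  # ~1.2
```

    In other words, only very large effects could have reached significance at all, which makes a lone significant outcome among several measures worth treating cautiously.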

    I didn't read this paper closely but it looked like they used a number of objective outcomes and some subjective self-report ones.

    tl;dr: This was a small trial that made some efforts to account for the problems of not being double-blind, but I don't think that means it should be classed as a blinded trial. If a pharma trial treated trial participants this differently, it would not be classed as double-blind. On the other hand, the outcome they reported a positive result for was the Stroop test, not a self-report outcome.
     
  16. JemPD

    JemPD Senior Member (Voting Rights)

    Messages:
    4,500
    Well I wouldn't have thought so no, but then i'm not any kind of scientist so perhaps I am wrong.
     
  17. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    4,002
    Location:
    Belgium
    @Esther12 The authors of the BMJ study selected trials from each Cochrane review for their comparison; they didn't use all of them. They said they used 132 trials from these 18 Cochrane reviews, but if you add up all the studies in these reviews you come to a much larger number. So it's possible that the studies you discuss weren't used in their comparison.

    I don't know what 'No. high risk' and 'No. low risk' mean (I assume this refers to the overall risk of bias as judged by the Cochrane review authors), but if you add all those up the total is 132, so this probably refers to the trials selected from each Cochrane review. For the MS review you were looking at, that would mean they selected only 3 of the 20 studies in the review, presumably the ones with the most adequate comparison for testing the effect of blinding.

    [Attached image]

    I don't think the study or supplementary material gives any clues as to which trials (and which outcomes in those trials) were actually used for their comparisons. We also don't know for which trials the blinding status was rated as "Definitely yes", "Probably yes", "Unclear", "Probably no", or "Definitely no". Therefore I've sent an email to the corresponding author today, asking if she could give me a list of the trials and their assessment of blinding for each of them. I hope they respond soon. As you can see from the graph, for most comparisons only a handful of trials were used, so it should be possible to go over these to get an impression of what is being compared.
     
    Last edited: Jan 26, 2020
    Woolie and Esther12 like this.
  18. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    4,002
    Location:
    Belgium
    The supplementary material does give the total figures for each blinding assessment:

Definitely no - 16
Definitely yes - 15
Probably no - 73
Probably yes - 18
Unclear - 10

    The paper explains that the comparison was made between trials where blinding was assessed as 'Definitely yes' or 'Probably yes', versus trials assessed as 'Unclear', 'Probably no', or 'Definitely no'. So of the 132 trials that looked at blinding of patients, blinding was assessed as probably yes or definitely yes in only 33 trials (15 + 18 = 33, i.e. 25%).

    I think this is a bit strange. You would expect the non-blinded trials to be the exception: if blinding is possible, you would think most randomized trials would blind, and that the trials that didn't manage to blind patients, even though it was possible, would be the exception. Here it seems to be the other way around.
     
    Robert 1973 and Esther12 like this.
  19. Esther12

    Esther12 Senior Member (Voting Rights)

    Messages:
    4,393
    Thanks Michiel - I knew that my approach was highly inefficient, but I was a bit interested in going through the trials of neurological rehabilitation rated as participant-blinded anyway, and thought it could be useful for understanding how the BMJ paper classed trials, as they must have classed at least one trial as being properly blinded.

    I'm too scared to write to authors for further info... isn't that harassment nowadays?!
     
    ME/CFS Skeptic likes this.
  20. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    4,002
    Location:
    Belgium
    I realize you're being sarcastic, but in essence I'm not asking for 'further info' - I'm asking for basic info about their methodology that probably should have been reported in the supplementary material of their paper. I think a paper should contain sufficient information on how the results came about.
     
