Bias due to a lack of blinding: a discussion

Discussion in 'Trial design including bias, placebo effect' started by ME/CFS Skeptic, Sep 22, 2019.

  1. Lucibee

    Lucibee Senior Member (Voting Rights)

    Messages:
    1,483
    Location:
    Mid-Wales
    Best place to be. You get a better view from up there.
     
    MEMarge likes this.
  2. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    3,507
    Location:
    Belgium
    Sounds rather boring
     
    MEMarge and Snow Leopard like this.
  3. Lucibee

    Lucibee Senior Member (Voting Rights)

    Messages:
    1,483
    Location:
    Mid-Wales
    It's tactical... giving the impression that you're on the fence makes sure you don't alienate readers who are firmly in one camp or the other. Her stance seems to be that "it's more complex than that", judging by the final paragraph.

    It was the opening para that intrigued me though, because she said it was a screw-up with randomisation, which it wasn't. It was definitely the inability to blind the treatment allocation in PREDIMED that led to complaints from the participants; only then did the allocation itself fail.

    I still wonder how much effect the potential randomisation failure at the Edinburgh centre in PACE had on the outcome of the trial, and why no-one seems to care that it wasn't reported. I'd like to know what the treatment allocation was at the different centres, and whether failure at one centre helped the other centres predict which treatment was likely to be allocated next (there should have been an excess of GET places available at the other centres as a result). If there was any indication of participant preference prior to allocation, that might have been a problem and could have led to exaggerated responses. But I guess we'll never know.
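
    To make the prediction worry concrete, here's a minimal two-arm sketch in Python. It assumes permuted blocks of four - a toy model, not PACE's actual allocation procedure - and shows that someone who simply counts the slots already used in the current block can guess the next allocation correctly about 71% of the time, well above the 50% chance level.

    Code:
    import random

    def simulate(n_blocks=100_000, block=("A", "A", "B", "B")):
        """Guess each allocation in a permuted block by picking the arm
        with more unfilled slots (coin flip on ties)."""
        correct = total = 0
        for _ in range(n_blocks):
            seq = random.sample(block, len(block))  # one randomly permuted block
            remaining = list(block)
            for arm in seq:
                a, b = remaining.count("A"), remaining.count("B")
                guess = "A" if a > b else "B" if b > a else random.choice("AB")
                correct += guess == arm
                total += 1
                remaining.remove(arm)
        return correct / total

    print(f"guessing accuracy: {simulate():.3f} (chance = 0.500)")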

    eta: HB mentioned a paper on blinding by Kenneth Schulz and David Grimes from their very good Epidemiology series in The Lancet in 2002, which is worth a look.
     
    Last edited: Feb 3, 2020
  4. Esther12

    Esther12 Senior Member (Voting Rights)

    Messages:
    4,393
    I think I'm genuinely still on the fence about the new paper too. There are things that concern me about it, and the politics surrounding it and Sterne, but I don't feel like I have a good understanding of it yet.

    edit: I'm on the fence about this paper - given the corruption we've seen around work like PACE, I'm not on the fence about the importance of blinding.
     
    Last edited: Feb 3, 2020
    MSEsperanza, Snow Leopard and Lucibee like this.
  5. Caroline Struthers

    Caroline Struthers Senior Member (Voting Rights)

    Messages:
    831
    Location:
    Oxford UK
    Very Ben "I think you'll find it's more complicated than that" Goldacre
     
  6. Lucibee

    Lucibee Senior Member (Voting Rights)

    Messages:
    1,483
    Location:
    Mid-Wales
    I think you'll find I'm not as condescending as that! ;)
     
    MEMarge, Woolie, Mithriel and 2 others like this.
  7. Esther12

    Esther12 Senior Member (Voting Rights)

    Messages:
    4,393
    Hi Michiel - did you ever get a response to this?
     
    MSEsperanza and ME/CFS Skeptic like this.
  8. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    3,507
    Location:
    Belgium
    No. They first sent an email saying that they would get back to me, but they haven't, despite a further reminder email from me.

    I'll keep trying and send another reminder email in a couple of days.
     
  9. Esther12

    Esther12 Senior Member (Voting Rights)

    Messages:
    4,393
    MSEsperanza and rvallee like this.
  10. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    3,507
    Location:
    Belgium
    Finally received a response.

    They said that the dataset will only be made available after a post-publication period of 1 year to allow time for follow-up studies, as stated in the main publication (under Data Sharing at the very end), and that they prefer not to share parts of the dataset at present.

    Most of these reasons don't make much sense to me.
     
    MSEsperanza, Esther12 and Robert 1973 like this.
  11. Esther12

    Esther12 Senior Member (Voting Rights)

    Messages:
    4,393
    Thanks Michiel. That's annoying - the way they class trials as blinded or unblinded is a vital part of the paper, so failing to let others check their assessments makes it really difficult to form an opinion on their findings. Given the way some people are talking about the importance of this review, making that information public would seem to be the responsible thing to do.

    Maybe I should go back to that MS review. There's probably a way of working out which trials they assessed as blinded by looking at their data, once a list of likely blinded/non-blinded assessments has been put together - something like the sketch below. It would be a pain though.
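
    A toy version of that idea in Python (all numbers invented - real reviews pool effects with inverse-variance weights, so this only shows the shape of the approach): enumerate candidate blinded/unblinded labelings of the trials and keep those whose subgroup mean reproduces a reported figure.

    Code:
    from itertools import combinations

    # Hypothetical per-trial effect estimates (not real MetaBLIND data).
    effects = {"T1": 0.42, "T2": 0.10, "T3": 0.55, "T4": 0.18, "T5": 0.47}

    reported_blinded_mean = 0.14  # invented "reported" subgroup summary
    tolerance = 0.02

    trials = sorted(effects)
    for k in range(1, len(trials)):
        for blinded in combinations(trials, k):
            mean = sum(effects[t] for t in blinded) / k
            if abs(mean - reported_blinded_mean) < tolerance:
                print("consistent labeling: blinded =", blinded)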
     
    MSEsperanza and ME/CFS Skeptic like this.
  12. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    3,507
    Location:
    Belgium
    I agree it's quite frustrating because I'm not asking about the raw data of an experiment. To me, it feels more like I'm asking about the methodology of their study rather than about a dataset.

    It's a bit like when someone does a systematic review and you ask them which studies they did and didn't use to form their conclusion. That's not really asking about a dataset, in my view. In a Cochrane review, for example, you have to list the studies selected and your risk-of-bias assessments. So I don't see why it should be different for this "meta-epidemiological study."

    The BMJ study was accompanied by an editorial and a commentary paper and has received seven rapid responses. The authors have also published a separate paper on how to interpret meta-epidemiological studies, with the MetaBLIND study as a case example. So clearly the results are already being discussed and will have an influence on trial design and the interpretation of studies. If they aren't willing to release their dataset until they have worked out the follow-up studies, perhaps they should have waited to publish their main results as well.

    Yeah, and you would never be sure whether your selection matched the one used in the BMJ study. So I suspect we'll have to wait a whole year to find out more...
     
    MSEsperanza, Lucibee and Esther12 like this.
  13. Lucibee

    Lucibee Senior Member (Voting Rights)

    Messages:
    1,483
    Location:
    Mid-Wales
    It sounds like they've misunderstood what you were asking for.
     
    ME/CFS Skeptic likes this.
  14. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    3,507
    Location:
    Belgium
    I think they understand what I'm asking.

    They are just treating the result of their systematic search of the literature as a dataset, like the raw data of an experiment - probably because they put in a lot of work to find and scan all these studies. I suspect their rationale for withholding it is that they are concerned other researchers might use that dataset (their hard work) to publish separate papers and analyses based on it.

    I, however, see it as a part of the methodology that should be reported with the paper (for example in the supplementary material), much like a Cochrane review explains which studies it used and how these were assessed.
     
    MEMarge, Snow Leopard and Lucibee like this.
  15. Esther12

    Esther12 Senior Member (Voting Rights)

    Messages:
    4,393
    I guess it's possible that they googled your name and saw this thread? That could explain the delay in telling you they didn't want to share that information about their work.
     
    Sean likes this.
  16. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    3,507
    Location:
    Belgium
    Hmm, I see no particular reason to think that. I suspect researchers are busy and sometimes forget to respond.
     
  17. Woolie

    Woolie Senior Member

    Messages:
    2,918
    I know I'm very, very late to the party here, but just wanted to comment on this.

    There are a lot of different ways in which the word "validation" is used in the context of self-report instruments. It can mean any of these things:

    Reliability: whether you get similar outcomes from testing different samples from a similar population, or the same sample at two timepoints (there are other types too, where you select out individual items of the same kind and look at how consistent they are with one another). Technically this is not really a measure of validity, even though it is often referred to as such. It's just reliability. You need reliability before you can have validity, but it's not enough in itself.

    Content validity: whether the items in your scale appear to sample: a) all of the aspects of your construct; and b) no aspects of other constructs that could contaminate your results. To take an example from the cognitive testing literature (just to illustrate the point), a test of working memory should include verbal as well as nonverbal test items, because we know these are different facets of the construct of working memory as it's currently understood; at the same time, it shouldn't involve items that require knowledge that not all testees might possess (so, no questions that hinge on whether you have a good vocabulary). Content validity is usually assessed informally - you just show the reader what the items look like. So it isn't very strong.

    Criterion validity (including concurrent and discriminant validity): whether two different tests designed to measure the same construct tend to rank people in the same way (concurrent validity), or whether they adequately discriminate between people categorised differently on a related test (discriminant validity) - for example, whether a depression scale effectively distinguishes patients diagnosed with depression at interview from those who weren't. Again, this is not validity in my view either, but another form of reliability. Usually the scales are not truly independent - one is usually piggy-backed on the other - so it's not very powerful. And even if the scales are truly independent, it's still really only a form of reliability.

    None of these really assesses construct validity. To do that, you need to show that relationships between scores on your scale and other, totally different measures are consistent with the theory on which the test was built. This is virtually never done in the land of self-report scales; it's never even considered.
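
    For anyone who wants to see the distinction in practice, here's a minimal sketch with synthetic data, illustrating two of the checks described above: test-retest reliability and internal consistency (Cronbach's alpha). As noted, neither establishes construct validity - high numbers here say nothing about what the scale actually measures.

    Code:
    import numpy as np

    rng = np.random.default_rng(0)
    n_people, n_items = 200, 10

    # Fake questionnaire: each person's "true" level plus item noise,
    # measured at two timepoints.
    true_level = rng.normal(0, 1, n_people)
    time1 = true_level[:, None] + rng.normal(0, 0.5, (n_people, n_items))
    time2 = true_level[:, None] + rng.normal(0, 0.5, (n_people, n_items))

    # Test-retest reliability: correlation of total scores across timepoints.
    r = np.corrcoef(time1.sum(axis=1), time2.sum(axis=1))[0, 1]

    # Cronbach's alpha: internal consistency of the items at time 1.
    k = n_items
    alpha = k / (k - 1) * (1 - time1.var(axis=0, ddof=1).sum()
                           / time1.sum(axis=1).var(ddof=1))

    print(f"test-retest r = {r:.2f}, Cronbach's alpha = {alpha:.2f}")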
     
  18. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,857
    Location:
    betwixt and between
    Just realized that we don't have a subforum on trial methodology, so I hope it's OK to leave this here:

    Looking around at what various experts say on the need for objective outcomes in open-label trials, I repeatedly came across the Centre for Evidence-Based Medicine (CEBM) at the University of Oxford: https://www.cebm.net/about-cebm/ and their 'Critical Appraisal Tools'.

    I don't think their 'critical appraisal' worksheet on RCTs says anything explicit about open-label trials. It only states:

    "It is ideal if the study is ‘double-blinded’ – that is, both patients and investigators are unaware of treatment allocation. If the outcome is objective (eg death) then blinding is less critical. If the outcome is subjective (eg symptoms or function) then blinding of the outcome assessor is critical."

    https://www.cebm.net/wp-content/uploads/2018/11/RCT.pdf

    Apart from wondering about the example they chose for an objective outcome, I wonder why they omitted any explicit mention of open-label trials. But maybe I misunderstood.

    The same centre also produced a "response to systematic bias, wastage, error, and fraud in research underpinning patient care" - it's paywalled and I haven't read it, so I'm posting it just in case others find it interesting:

    Heneghan C, Mahtani KR, Goldacre B, Godlee F, Macdonald H, Jarvies D, et al. Evidence based medicine manifesto for better healthcare: a response to systematic bias, wastage, error, and fraud in research underpinning patient care. BMJ 2017;357:j2973. https://www.bmj.com/content/357/bmj.j2973
     
  19. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,468
    Location:
    London, UK
    I think this follows the approach we see in Wessely's textbook and in Jonathan Sterne's ROB2. What is egregious here is what is left out - any consideration of the psychology of trials, the impact on the therapist and the patient of knowing which is the 'test' treatment. I am afraid I see this as deliberate.
     
  20. James Morris-Lent

    James Morris-Lent Senior Member (Voting Rights)

    Messages:
    903
    Location:
    United States
    It should be obvious that patients will always be the first-line assessors of subjective outcomes. [edit - unless the outcome is substantially removed from patients' subjective experience]

    Especially so to those heavily invested in the idea of 'illness beliefs' causing non-disease syndromes.

    ...

    Of course if the syndrome is purely caused by subjective assessment, then maybe the cure can be effected purely through a different subjective assessment. How convenient!

    I'm afraid I agree that this is a feature, not a bug.
     
    Last edited: Jun 11, 2020
    MSEsperanza, Barry and rvallee like this.
