
Bias due to a lack of blinding: a discussion

Discussion in 'Trial design including bias, placebo effect' started by ME/CFS Skeptic, Sep 22, 2019.

  1. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    3,507
    Location:
    Belgium
    I would like to use this thread for a discussion on the effects of a lack of blinding in randomized trials, something that frequently comes up in our discussions elsewhere on the forum.

    Ever since the new risk of bias (RoB 2) tool for Cochrane came out, I’ve been trying to learn more about this. I’ll try to summarize what I’ve found in the posts below...
     
  2. ME/CFS Skeptic

    First some basics
    Blinding (sometimes called masking) refers to the process of keeping key persons involved in the conduct of a trial unaware of group assignment. So in the case of a blinded trial, patients, therapists, outcome assessors, etc. do not know who is getting the intervention and who is getting the control. The idea behind it is that knowledge of who’s getting what might influence the results.

    If patients, for example, know that they are getting a promising new treatment, they might be more optimistic about their health status (a placebo effect) or they might report their symptoms according to what they think will please the investigators (response bias). There might also be problems in the control condition. If patients are aware they are not getting the intervention, they might drop out (attrition bias) or follow other treatments during the trial (co-intervention bias).

    So in a typical trial, one group of patients is given a drug, while the other group gets a placebo pill that looks just the same. Patients, therapists and outcome assessors do not know which patients are getting the drug and which the placebo. So any reported differences in health are thought to be caused by the active ingredient in the medication being tested. In a trial of cognitive behavioural therapy or exercise therapy, however, blinding is often not possible. Patients and therapists know who is getting the intervention and who is getting the control, and this knowledge might influence the results. The question is: how large is this effect and how do you measure it?
     
    Last edited: Sep 22, 2019
  3. ME/CFS Skeptic

    Meta-analyses
    The most common method to measure bias due to a lack of blinding in randomized controlled trials (RCTs) is to look at meta-analyses. These provide an overview of RCTs of the same treatment for the same type of patients. So if a meta-analysis contains both blinded and unblinded trials, you can estimate the effect of blinding on the results. Of course, this will only provide a rough estimate. Even though trials are in the same meta-analysis, they might still differ in lots of ways, and these differences can influence effect sizes. But the main idea is that if you take enough of these meta-analyses where some trials are blinded and some aren’t, you can estimate the effect of blinding.
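    The logic of this comparison can be sketched in a few lines of code. In the toy example below, every trial number is invented purely for illustration (these are not data from any real meta-analysis): odds ratios are pooled separately for the blinded and unblinded trials, and the ratio of odds ratios (ROR) summarizes how much more favourable the unblinded estimates are.

```python
import math

# Hypothetical trials from one meta-analysis:
# (events_treatment, n_treatment, events_control, n_control, blinded)
trials = [
    (30, 100, 45, 100, True),
    (28, 100, 44, 100, True),
    (20, 100, 45, 100, False),  # unblinded trials tend to report larger effects
    (22, 100, 46, 100, False),
]

def log_odds_ratio(et, nt, ec, nc):
    """Log odds ratio of an event in treatment vs control (OR < 1 favours treatment)."""
    return math.log((et / (nt - et)) / (ec / (nc - ec)))

def pooled_or(subset):
    """Unweighted mean of log odds ratios, back-transformed.
    A crude stand-in for a proper inverse-variance-weighted pooling."""
    logs = [log_odds_ratio(*t[:4]) for t in subset]
    return math.exp(sum(logs) / len(logs))

or_blinded = pooled_or([t for t in trials if t[4]])
or_unblinded = pooled_or([t for t in trials if not t[4]])

# Ratio of odds ratios: an ROR below 1 means the unblinded trials
# exaggerate the apparent benefit relative to the blinded ones.
ror = or_unblinded / or_blinded
print(f"OR blinded:   {or_blinded:.2f}")
print(f"OR unblinded: {or_unblinded:.2f}")
print(f"ROR:          {ror:.2f}")
```

    Projects like BRANDO do essentially this across hundreds of meta-analyses, so that trial-to-trial differences other than blinding average out.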

    A lot of reviews have tried to do this. The same method is used to measure other sources of bias as well, including inadequate randomization sequence generation and allocation concealment. Many of the researchers who have done these reviews have a connection with Cochrane, because it provides a convenient database of meta-analyses with lots of information on the risk of bias. Famous names that show up are Iain Chalmers, Douglas Altman, John Ioannidis, Peter Gøtzsche and Jonathan Sterne.

    Luckily, many of the research groups that did these reviews came together to form one database: Bias in Randomised and Observational studies, or BRANDO. This makes it possible to come up with an estimate for all of these studies taken together. In 2012 they (Savović et al.) published a large review on “the influence of reported study design characteristics on intervention effect estimates.” Jonathan Sterne was the principal investigator. The analysis of the effect of ‘double blinding’ was based on 104 meta-analyses containing 1057 trials. The results were interesting.

    Lack of, or unclear, double blinding was associated with an average 13% exaggeration of intervention effects (relative odds ratio or ROR of 0.87). This was the result for all outcomes. If they looked at subjective outcomes only, the effect was much larger: a ROR of 0.77, meaning an estimated exaggeration of 23% in unblinded studies. This effect was larger than those found for other sources of bias. The authors wrote:

    “we found the influence of lack of double-blinding to be greater than that of inadequate or unclear random-sequence generation or allocation concealment.”
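    The arithmetic behind those percentages is simple; a quick check (this is just the conversion used above, not an analysis from the paper):

```python
def exaggeration_pct(ror):
    """An ROR below 1 means effect estimates in unblinded trials are larger;
    1 - ROR expresses the average exaggeration as a percentage."""
    return round((1 - ror) * 100)

print(exaggeration_pct(0.87))  # 13 -> all outcomes
print(exaggeration_pct(0.77))  # 23 -> subjective outcomes only
```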
    That makes it strange that blinding gets so little attention in the new risk of bias tool by Cochrane, especially since the senior author of both papers is Jonathan Sterne. The researchers who analyzed the BRANDO database gave a pretty clear message. They wrote:

    “Our results suggest that, as far as possible, clinical and policy decisions should not be based on trials in which blinding is not feasible and outcome measures are subjectively assessed. Therefore, trials in which blinding is not feasible should focus as far as possible on objectively measured outcomes, and should aim to blind outcome assessors.”​

    These conclusions are supported by another extensive review, issued by the Agency for Healthcare Research and Quality (AHRQ). The authors (Berkman et al.) didn’t have a database but looked at all the reviews that are out there, including some that weren’t in BRANDO. Berkman and colleagues concluded that this method of using meta-analyses makes it difficult to find a relationship between bias and effect sizes, probably because there are so many confounders. Nonetheless, the bias for which the evidence was most consistent was the combination of a lack of blinding and subjective outcomes. The authors concluded that:

    “Lack of double blinding, similar to lack of assessor blinding, was related to an exaggeration of the intervention effect estimates when subjective outcomes were estimated. These findings suggest that in circumstances that provide greater room for individual judgment or preferences, blinding is critical.”​
     
    Last edited: Sep 23, 2019
  4. ME/CFS Skeptic

    Problems with the meta-analysis approach
    There are some problems with this method, though. Trials in the same meta-analysis differ in many ways, and these differences may obscure the effects of bias. It’s possible, for example, that unblinded trials have lower methodological quality overall, and that other issues such as poor randomization, small sample size or publication bias explain their larger effect sizes, rather than a lack of blinding. The BRANDO study, however, found little association between double-blinding and adequate sequence generation (only with allocation concealment), and the results for blinding did not change in the multivariate analysis.

    There are also reasons why this method might underestimate the effects of blinding. The reviews mostly compared trials that said they were double-blinded with those that didn’t. The term ‘double-blind’, however, has no agreed-upon definition.

    One would suspect that a trial that says it was double-blinded had at least blinded participants and blinded care providers (the most common interpretation also includes blinded outcome assessors). But unfortunately, this isn’t always the case. Some researchers are creative and say their trial was double-blinded when both patients and outcome assessors (but not therapists) were blinded. They tend to use the term ‘double-blinded’ because it suggests that their trial is of high quality and had no bias due to a lack of blinding.

    Chan & Altman found that in 90 trials that described themselves as double-blinded, the term had nine different variations with respect to who was blinded. Other studies (Devereaux et al., 2001; Montori et al., 2002) have found similar results. Haahr & Hrobjartsson looked at trials that simply said ‘double-blinded’ without giving more information. They contacted the authors and found that 19% of ‘double-blind’ trials had not blinded either patients, health care providers or data collectors. As a result, the 2010 CONSORT statement (a guideline that instructs researchers how they should report their findings) said that the term double-blind should be avoided and that researchers should instead state which groups were and were not blinded.

    So clearly this confusion will have affected the results from the BRANDO database. If some studies in the double-blinded group didn’t blind patients or care providers, then they do not form a good comparison. Similarly, the other group consisted of trials that were simply not described as double-blinded. That means that key persons such as patients or outcome assessors could still have been blinded. This would downplay the effects of blinding.

    What I believe most people are interested in is the comparison between a trial where both patients and therapists are blinded and a trial where they aren’t. Because that’s what happens when you compare the effectiveness of a drug (where trials are almost always blinded) with a psychotherapy such as CBT (where trials cannot be blinded). Some studies have tried to look at the effects of blinding individual persons such as patients, therapists or outcome assessors, but these studies are usually small or restricted to one particular area of medicine that doesn’t really compare to the situation in psychotherapy or lifestyle interventions.

    That points to another problem with these meta-analyses: almost all have looked at binary outcomes (for example, having a cardiovascular event or not) and few at continuous outcomes (such as fatigue and pain questionnaires that add up to a score). As I will explain below, there is some evidence that binary outcomes show smaller placebo effects, even when they are subjective.
     
    Last edited: Sep 29, 2019
  5. ME/CFS Skeptic

    Trials with and without blinding
    A better way to study the effects of blinding is to look at trials that have two parts: one that is blinded and one that is not. Unfortunately, these are very rare. Hrobjartsson et al. did a review and found only 12. The average difference in effect size for patient-reported outcomes was –0.56 (95% confidence interval –0.71 to –0.41). This is a substantial difference. It means that in trials with a moderate effect size of –0.5, nonblinded patients cause an exaggeration of the estimated effect of 112%. In other words, a moderate treatment effect may be entirely due to bias from a lack of blinding.
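    The 112% figure follows directly from the two numbers quoted above; a trivial arithmetic check:

```python
# Average bias in standardized effect size attributable to nonblinded patients
bias = -0.56
# A 'moderate' treatment effect, as used in the review's illustration
moderate_effect = -0.5

# Exaggeration of the estimated effect, relative to the moderate effect itself
exaggeration = bias / moderate_effect
print(f"{exaggeration:.0%}")  # 112%
```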

    There were some issues with this review, though. If you look at the 12 trials in question, only 2 had the preferred format, namely 4 groups: blinding + intervention, blinding + control, non-blinding + intervention and non-blinding + control. One of the trials with this format tested distant healing on CFS patients - I’ve discussed it elsewhere on the forum. The effects of blinding were very small and only visible on secondary outcomes. The other trial in Hrobjartsson’s review tested Echinacea for the common cold and found no effect of blinding.
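    To see why this four-group format is so informative, here is a small simulation (every parameter is invented for illustration, not taken from any trial): unblinded patients in the intervention arm report an extra improvement purely from expectation, and the difference between the two within-trial effect estimates recovers exactly that bias.

```python
import random

random.seed(1)

def reported_change(true_effect, expects_treatment):
    """Hypothetical self-reported improvement: the true effect plus noise,
    plus a fixed response bias when the patient believes they received
    the treatment. All numbers are made up for illustration."""
    bias = 0.4 if expects_treatment else 0.0
    return true_effect + bias + random.gauss(0, 1)

TRUE_EFFECT = 0.2
N = 5000

# The four groups of the 'preferred format':
blind_tx  = [reported_change(TRUE_EFFECT, False) for _ in range(N)]  # blinded intervention
blind_ctl = [reported_change(0.0, False) for _ in range(N)]          # blinded control
open_tx   = [reported_change(TRUE_EFFECT, True) for _ in range(N)]   # unblinded intervention
open_ctl  = [reported_change(0.0, False) for _ in range(N)]          # unblinded control

def mean(xs):
    return sum(xs) / len(xs)

effect_blinded = mean(blind_tx) - mean(blind_ctl)
effect_unblinded = mean(open_tx) - mean(open_ctl)

# Comparing the two within-trial estimates isolates the bias from nonblinded patients
print(f"blinded estimate:   {effect_blinded:.2f}")
print(f"unblinded estimate: {effect_unblinded:.2f}")
print(f"estimated bias:     {effect_unblinded - effect_blinded:.2f}")
```

    Because both effect estimates come from the same trial, differences in population, setting and protocol cancel out, which is exactly what the meta-analysis approach cannot guarantee.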

    So the large differences found in the review come from the 10 other trials, which all tested acupuncture. These trials had only three groups: acupuncture, a passive control (for example a waiting list) and sham acupuncture. The latter involved shallow needling of body points considered to be ineffective, but patients thought it was the intervention. Hrobjartsson then compared two contrasts: acupuncture versus the passive control (unblinded), which usually showed a large difference, and acupuncture versus sham acupuncture (blinded for patients), which usually showed no statistically significant difference.

    The problems with this approach are (1) that therapists were aware of the sham acupuncture and (2) that patients in the non-blinded control did not get as much attention and care from medical professionals. So this seems to measure not only the blinding of patients but also the inadequacy of the control condition. While an inadequate control is often the case in unblinded trials, I think this effect should be separated from the effects caused by a lack of blinding itself. In principle, it should be possible to come up with a control condition that has similar contact hours with therapists, even though there is still a lack of blinding.
     
  6. ME/CFS Skeptic

    An unfair comparison
    I’m amazed by how little discussion this issue gets in the scientific literature. One person who has tried to raise the problem is Douglas Berger, an American psychiatrist working in Japan. He argued that it is unfair to compare antidepressants and CBT for the treatment of major depressive disorder (MDD) because their effectiveness was assessed using different standards. He wrote:

    “It is imperative that any intervention for a disorder with subjective endpoints such as MDD requires the same rigor in double-blinding in order to conclude that the results show ‘efficacy’ or are ‘evidence-based’. This paper proposes to use the term ‘partially-controlled clinical data’ in place of ‘evidence-based clinical data’ for results obtained from unblinded studies.”​

    One of the reviewers of his papers commented: “Dr. Berger has taken to task a sacred cow in our field.”

    Unfortunately, Berger is not a researcher, and a quick Google search shows that he is rather controversial (there are several Reddit threads accusing him of inappropriate behavior or of trying to sell expensive drugs to patients). But I think he has a point here. It is often said that CBT and antidepressants are equally effective for MDD, even though they weren’t assessed by the same standard. Cuijpers et al. looked at pharmacotherapy trials that weren’t blinded and compared these to the (equally unblinded) psychotherapy trials. In this comparison, there was “a very small, but significantly higher effect for pharmacotherapy.”
     
    Last edited: Sep 30, 2019
  7. ME/CFS Skeptic

    The history of blinding
    One more thing I would like to share is this article on the history of blinding (Jensen et al., 2016). It says that blinding only really took off after WWII (as did randomization), but that the principles were well understood centuries before that. The authors refer to a famous investigation ordered by King Louis XVI of France at the end of the 18th century. During that period, mesmerism or animal magnetism - the belief that an invisible natural force could influence health and disease - was popular at the court. Some therapists said they could influence this force with their healing powers. The king wanted to know if the effects of mesmerism were real, so he ordered a commission of scientists to study it. The commission included the famous chemist Antoine Lavoisier and Benjamin Franklin, who was staying in France at the time.

    The scientists did all sorts of experiments to test the effects of mesmerism and quickly understood that these were caused by suggestion and imagination. The decisive experiment was to blindfold patients in whom mesmerism usually worked. When they believed they were being mesmerized, they reported all sorts of effects even though nothing was being done to them. And when someone did mesmerize them without them knowing, they reported no effects. The conclusion of the commission was insightful and remains relevant to this day.

    “The Commissioners suspected that these effects had been augmented by mental circumstances. Let us take the standpoint of a commoner, for that reason ignorant, struck by disease & desiring to get well, brought with great show before a large assembly composed in part of physicians, where a new treatment is administered which the patient is persuaded will produce amazing results. Let us add that the patient’s cooperation is paid for, & that he believes that it pleases us more when he says he feels effects, & we will have a natural explanation for these effects; at the least, we will have legitimate reasons to doubt that the real cause of these effects is magnetism.”​

    The scientists also understood that this effect was not new or unique to mesmerism. In a poetic and philosophical style they wrote:

    “Magnetism therefore is only an old error. This theory is being presented today with a more impressive apparatus, necessary in a more enlightened century; but it is not for that reason less false. Man seizes, abandons, takes up again the error that gratifies him. There are errors which will be eternally dear to humanity.”
    EDIT: What the commissioners didn't do is randomize patients to either (1) mesmerism or (2) a waiting list control or something like relaxation therapy, to see if patients in the first group report larger differences on questionnaires. With that approach, I suspect people would still be practising animal magnetism...

     
    Last edited: Sep 23, 2019
  8. ME/CFS Skeptic

  9. Caroline Struthers

    Caroline Struthers Senior Member (Voting Rights)

    Messages:
    829
    Location:
    Oxford UK
    I am @Healthy_Control and found this excellent new thread just now without even being tagged!

    I have just submitted the following as a rapid response to the Risk of Bias 2 paper in the BMJ. It's amazing to see Sterne was PI on the 2012 paper...

    At the end of 2013 there was a debate held at the Cochrane Methods Symposium in Quebec on whether funding source should be a standard part of the Cochrane Risk of Bias tool. Lisa Bero argued in favour [1] and Jonathan Sterne against [2].

    Sterne’s argument concluded that to add funding source as a standard item would send a negative message to industry whose research would automatically receive a high risk of bias rating. He argued it would stop collaborative work with industry to help address problems with industry research. I argue that it is not only industry research that has serious problems with bias which need to be addressed. Much greater attention should be paid to the way the self-interest of researchers both inside and outside industry can bias published research findings.

    Both the old and revised version of the RoB tool focus on identifying bias using information published in journal articles, or by liaising with triallists. The bias affecting decisions taken by triallists before any data is collected is not considered. There is also no objective assessment of the bias affecting the decisions of review authors when planning and conducting a systematic review.

    Triallists from both industry and academia often have financial success and reputation invested in a favourable result for interventions they are testing. These conflicts will affect many of their decisions including the specification of participant eligibility criteria, and the choice of outcomes, and outcome measures. These decisions are likely to try to increase the chance of a positive result. If things don’t seem to be going the right way, triallists are given considerable leeway to make post-hoc adjustments, particularly if they are eminent clinicians and/or experienced triallists from prestigious institutions.

    Review authors can also have their own conflicts of interest. A Cochrane systematic review Exercise therapy for Chronic Fatigue Syndrome [3] is being revised after extensive criticism, validated by an internal Cochrane report [4]. The review authors altered the way results for the review’s primary outcome were presented, changing from a null result at follow-up to a largely positive result.

    They also chose to stick to the subjective outcome measures specified in the protocol of the previous version of the review. This is despite the fact that none of the included trials could be blinded because of the nature of the intervention.

    The reviewers also incorrectly gave a low risk of bias assessment for selective reporting for one of the included studies, the PACE Trial [5]. The PACE trial researchers controversially deviated from protocol specified criteria for primary and secondary outcomes. They dropped entirely an objective measure of participant activity levels after they were informed another research group had null results with this outcome measure. It was only after a costly legal battle that results for the trial's prespecified outcome measures were forced out [6, 7].

    Trials where blinding is not possible are unlikely to yield useful data unless both self-report and objective outcome measures are used [8]. Equipoise is essential in trials and systematic reviews, but in this case the Cochrane researchers chose to stick with subjective outcomes that are likely to flatter the intervention. This is also selective outcome reporting bias, undetectable by the RoB tool, which only assesses the risk of bias in included studies. Unblinded trials using outcomes prone to problems with bias can be classed as having a low risk of bias if the assessors decide that it was not “likely that assessment of the outcome was influenced by knowledge of intervention received”. The increased leeway for judging trials at low risk of bias permitted by RoB 2 will serve to obscure serious problems which affect the reliability of trial data.

    The review authors in this case have rejected extensive and valid criticism of their risk of bias assessment and many other problems with the review. They have defended what the previous Cochrane Editor-in-Chief, David Tovey, called an “over-optimistic” assessment of the effectiveness of the intervention, the success of which appears to be tied to their academic reputations. This over-optimistic assessment was achieved as a result of numerous methodological shortcomings of the review, including the incorrect assessment of the PACE Trial as at low risk of bias for selective reporting, and the authors’ own ill-justified reliance on subjective outcomes.

    RoB 2 has an explicitly stated objective of increasing the number of trials receiving low risk of bias assessments. Trials can now be rated as having a low risk of selective reporting bias if protocol deviations occurred prior to data unblinding.

    There is no provision made for the fact that in unblinded trials, early indications of results can easily be discerned. Considering the problems afflicting important areas of medical research, this is a backward step in Cochrane’s stated mission to identify and summarize only the most reliable research evidence. It seems likely that the use of RoB 2 will further weaken the hand of those wishing to challenge over-optimistic evaluations of the evidence by recommending reviewers ignore important indications that the conduct of included studies was influenced by the pursuit of a positive result.

    An important part of the role of systematic reviewers is to expose biased research practices and ensure patients are properly informed of any potential problems with research which appears to endorse treatments that could be ineffective or harmful. RoB 2 seems likely to do a worse job of identifying and discouraging poor primary research practices such as relying on subjective outcomes in unblinded trials. If RoB 1 had been correctly applied, the PACE Trial would have been given a high risk of bias assessment for selective reporting. RoB 2 will allow review authors to judge the risk of selective reporting bias as low with even less justification than previously.

    Going back to Bero’s argument that reviews should include information about funding sources, I would go further. I think it would be useful to systematically collect and present funding sources and additional objectively observed meta-data about included studies. Information could include funding source, researcher allegiance (involvement in development and/or use of intervention in their clinical practice), balance of self-report and objective measures in unblinded trials, and level of patient involvement (in choice of outcomes, etc.). This would give readers a more complete overview of previous research including both positive (e.g. patient involvement) and negative factors (e.g. inappropriate outcome choice) affecting the risk of bias. It could help illustrate how conflicts of interest, both financial and reputational, may lead to serious bias in favour of review findings which help bolster academic and clinical careers rather than protect patients from ineffective or harmful treatments.

    [1] https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.ED000075/full
    [2] https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.ED000076/full
    [3] https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD003200.pub7/full
    [4] http://www.virology.ws/wp-content/uploads/2019/03/Cochrane-Report-on-Courtney-Complaint-1.pdf
    [5] https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(11)60096-2/fulltext
    [6] https://www.tandfonline.com/doi/full/10.1080/21641846.2017.1259724
    [7] https://bmcpsychology.biomedcentral.com/articles/10.1186/s40359-018-0218-3
    [8] https://academic.oup.com/ije/article/43/4/1272/2952051
     
    Last edited by a moderator: Sep 23, 2019
  10. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,385
    I find it astounding and downright tragic there is no proper oversight body here. Those with the influence and power twisting the rules to give themselves more influence and power. It's medieval.
     
  11. Caroline Struthers

  12. Andy

    Andy Committee Member

    Messages:
    21,914
    Location:
    Hampshire, UK
    I'm guessing you meant "assessed"?
     
  13. Trish

    Trish Moderator Staff Member

    Messages:
    52,225
    Location:
    UK
  14. ME/CFS Skeptic

    I've tried to summarize the information in this thread in a blog post: https://mecfsskeptic.wordpress.com/2019/10/01/turning-a-blind-eye-to-blinding/

    I'll repost it here, as that makes it easier to quote and comment.


    Turning a blind eye to blinding
    Blinding, sometimes called masking, is the process of keeping key persons involved in the conduct of a clinical trial unaware of group assignment. In a typical randomized controlled trial, neither patients nor therapists know who is getting the active treatment and who is getting a placebo pill. The idea behind this is simple: knowledge of who’s getting what can influence the results. If patients know, for example, that they are getting a promising treatment, they might be more optimistic about their current health status (a placebo effect) or they might report symptoms according to what they think will please the investigators (response bias). Blinding has gradually become standard practice since the 1950s, even though its principles were well understood before that.

    Mesmerized
    To understand the importance of blinding, one has to go back to France at the end of the 18th century. In the last years of the Ancien Régime, the court of King Louis XVI was enchanted by what was called ‘animal magnetism’. The German physician Franz Anton Mesmer had proclaimed that this invisible natural force, present in all living things, formed the key to sickness and health. By the use of magnets and their proclaimed healing powers, Mesmer and his followers believed they could influence the mysterious fluid running through the body and cure patients. King Louis XVI, however, had his doubts about the proclaimed wonders of this new treatment. He ordered a commission of wise men to investigate whether it had any merit. The Royal Commission assembled included famous scientists such as the chemist Antoine Lavoisier and the American polymath Benjamin Franklin.

    The royal commission
    The commission did several experiments and quickly understood that the effects of mesmerism were caused by suggestion and imagination. The decisive experiment was to blindfold patients in whom mesmerism seemed to work. When these patients were blindfolded and told they were being mesmerized, they reported all sorts of effects even though nothing was being done to them. When someone did mesmerize them without their knowledge, they reported no effects. The conclusion of the commission was insightful and remains relevant to this day:

    “Let us take the standpoint of a commoner, for that reason ignorant, struck by disease and desiring to get well, brought with great show before a large assembly composed in part of physicians, where a new treatment is administered which the patient is persuaded will produce amazing results. Let us add that the patient’s cooperation is paid for and that he believes that it pleases us more when he says he feels effects, and we will have a natural explanation for these effects; at the least, we will have legitimate reasons to doubt that the real cause of these effects is magnetism.”​

    An unfair competition
    What the commissioners didn’t do is randomize patients to mesmerism and a (passive) control condition to see which group reports the largest improvement on symptom questionnaires. If mesmerism were compared to relaxation therapy or a waiting list control, it’s quite likely that it would appear to be effective. As the commissioners explained, mesmerism involved a great show in which patients were persuaded by physicians that it would produce amazing results. It was a successful recipe for creating placebo effects and response bias rather than a method for improving patients’ health.

    Although it is still frequently used today, randomizing patients and organizing a competition to see which intervention causes the largest improvements in reported health is not a fair test. It’s a competition that can easily be won by interventions that are more aggressive in misleading patients or in instructing them to be more optimistic about their health. What is needed is a method that accounts for bias caused by the expectations of trial participants, something the royal commission had understood more than two centuries ago.

    BRANDO: the empirical evidence
    Today, we have empirical evidence that a lack of blinding leads to bias and overestimation of treatment effects. Researchers study this by looking at meta-analyses. These provide an overview of randomized trials that studied the same treatment for the same patient group. So if a meta-analysis contains both trials that did and didn’t use blinding, one can get a rough estimate of the effect of blinding on the results. Several research teams have tried this and, luckily, they have agreed to pool their results into one big database for a project called BRANDO (Bias in Randomised and Observational studies). The results show that unblinded trials overestimate treatment effects, but mostly on subjective outcomes. For objective outcomes such as mortality, the effect is much smaller or negligible. According to the authors, the data give a clear message about how trials should be conducted and interpreted:

    “Our results suggest that, as far as possible, clinical and policy decisions should not be based on trials in which blinding is not feasible and outcome measures are subjectively assessed. Therefore, trials in which blinding is not feasible should focus as far as possible on objectively measured outcomes, and should aim to blind outcome assessors.”​

    This conclusion is supported by a 2014 review by the Agency for Healthcare Research and Quality (AHRQ) and by an interesting study that used an alternative approach to measuring the influence of blinding. Hrobjartsson and colleagues looked at studies that had both blinded and unblinded groups within the same trial. Such studies are rare; the authors found only 12, most of them on acupuncture. Nonetheless, the conclusion was similar to that of the BRANDO project: the average difference in effect size for patient-reported outcomes was 0.56, meaning that in trials with a moderate effect size, a lack of blinding can exaggerate the estimated effect by more than 100%.
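    To make the "more than 100%" claim concrete, here is a back-of-envelope sketch (my own illustration, not data from the studies), assuming the average bias of 0.56 standardized units simply adds on top of the true effect:

    ```python
    def exaggeration_pct(true_effect: float, bias: float = 0.56) -> float:
        """Relative overestimation (%) if an additive bias of `bias`
        standardized units inflates the observed effect size."""
        observed = true_effect + bias
        return (observed - true_effect) / true_effect * 100

    # A "moderate" effect size in Cohen's terms is about 0.5, so the
    # observed effect roughly doubles:
    print(round(exaggeration_pct(0.5), 1))  # 112.0 -> more than 100%
    ```

    In other words, when the true effect is around 0.5 and the unblinded arm reports effects 0.56 units larger, the estimate is more than twice the true effect.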

    A gratifying error
    For reasons that remain obscure to me, open-label trials with only subjective outcomes are still popular and well respected. The unfair competition is still allowed to proclaim its winners. Franklin, Lavoisier and their fellow commissioners wouldn’t be surprised. They understood that mesmerism was but an old trick in a different disguise, one perhaps too tempting not to be reused. In their report they wrote:

    “Magnetism therefore is only an old error. This theory is being presented today with a more impressive apparatus, necessary in a more enlightened century; but it is not for that reason less false. Man seizes, abandons, takes up again the error that gratifies him. There are errors which will be eternally dear to humanity.”​
     
    Woolie, Nellie, Robert 1973 and 15 others like this.
  15. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    3,507
    Location:
    Belgium
    Comments about language, spelling and content are welcome.
     
    Cheshire, Annamaria and Andy like this.
  16. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,385
    Excellent @Michiel Tack thank you!
    And so it goes around. PACE and its ilk are really nothing more than Magnetism in a new disguise. SW, MS, PW, EC, et al. nothing more than the new Mesmers.
    None so blind (ha) as those who don't want to see.
     
    Last edited: Oct 1, 2019
    Woolie, Annamaria, Cheshire and 5 others like this.
  17. Annamaria

    Annamaria Senior Member (Voting Rights)

    Messages:
    260
    Excellent piece! Thank you.

    Should read "neither patients nor therapists"

    Delete "Agency for": unintended repeat.
     
    Cheshire, andypants, Andy and 2 others like this.
  18. Trish

    Trish Moderator Staff Member

    Messages:
    52,225
    Location:
    UK
    Excellent, Michiel.

    Would it be worth writing a follow up piece that gives an example of a paper that demonstrates subjective and objective findings being different, and contrasting it with one (like PACE) that goes out of its way to focus on subjective measures and ignore or mishandle objective outcomes, and the influence this has. ie a demonstration of the ongoing use and impact of such bad research.

    Or do you think that's been done to death already!
     
  19. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    3,507
    Location:
    Belgium
    Thanks @Annamaria , very helpful!

    Yeah, I think others have already made this point. One frequently used study is the one that used a sham asthma inhaler, published in the New England Journal of Medicine (Wechsler et al., 2011).

    When I read up on the issue of blinding I was intrigued by the story about the royal commission and the empirical evidence that we have about the influence of blinding, such as the BRANDO project. So I decided to focus on that.
     
  20. Lucibee

    Lucibee Senior Member (Voting Rights)

    Messages:
    1,483
    Location:
    Mid-Wales
    "it's winners" should be "its winners" - A gratifying error ;)
     

Share This Page