
Bias due to a lack of blinding: a discussion

Discussion in 'Trial design including bias, placebo effect' started by ME/CFS Skeptic, Sep 22, 2019.

  1. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    3,507
    Location:
    Belgium
    Here's another example of a paper that writes:

    "Using subjective outcomes in an open-label study undermines its internal validity because it makes it impossible to determine how much of the reported effect is related to the investigated treatment and how much is related to various forms of bias."​

    Source: Blinding in trials of interventional procedures is possible and worthwhile by Wartolowska et al.
    https://f1000research.com/articles/6-1663/v2
     
    MEMarge, lycaena, Lilas and 9 others like this.
  2. cassava7

    cassava7 Senior Member (Voting Rights)

    Messages:
    985
    Multivariate and multivariable regression analysis helps estimate the contributions of various factors to an outcome, but the general rule remains: garbage in, garbage out. Serious biases in the input data will lead to flawed conclusions whatever statistical methods are used to analyse them, as is well illustrated by machine learning algorithms.
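    As a rough sketch of that point (hypothetical numbers, my own illustration rather than anything from a specific trial): simulate an open-label trial in which the treatment has no true effect but unblinded participants in the treatment arm over-report improvement; adjusting for covariates in a multivariable regression does not remove the spurious "treatment effect".

    Python:
    import numpy as np

    # Hypothetical simulation: the true treatment effect is zero, but unblinded
    # participants in the treatment arm over-report improvement by ~3 points.
    rng = np.random.default_rng(0)
    n = 400
    treatment = rng.integers(0, 2, n)        # 1 = open-label treatment arm, 0 = control
    age = rng.normal(45, 10, n)              # a covariate we will "adjust" for
    baseline = rng.normal(60, 8, n)          # baseline symptom score
    reporting_bias = 3.0 * treatment         # response bias only, no real benefit
    outcome = baseline + reporting_bias + rng.normal(0, 5, n)

    # Multivariable regression adjusting for age and baseline score
    X = np.column_stack([np.ones(n), treatment, age, baseline])
    coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    print(f"Estimated 'treatment effect': {coef[1]:.2f} (true effect is 0)")
    # The adjusted estimate still recovers the reporting bias, not a real effect:
    # no amount of modelling distinguishes the two from these data.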
     
    MEMarge, FMMM1, MSEsperanza and 7 others like this.
  3. Andy

    Andy Committee Member

    Messages:
    21,914
    Location:
    Hampshire, UK
    Trial By Error: Lowenstein’s Guardian Opinion; Eliot Smith’s Post-NICE View; Tack’s Take on Blinding Study

    https://www.virology.ws/2021/06/25/...-post-nice-view-tacks-take-on-blinding-study/
     
    MEMarge, MSEsperanza, rvallee and 5 others like this.
  4. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,857
    Location:
    betwixt and between
    There's another [edit] albeit rather halfhearted one:

    Bull, L. (2007). Sunflower therapy for children with specific learning difficulties (dyslexia): A randomised, controlled trial. Complementary Therapies in Clinical Practice, 13(1), 15–24. doi:10.1016/j.ctcp.2006.07.003

    https://www.s4me.info/threads/sunfl...arning-difficulties-dyslexia-2007-bull.21207/

    The rebuttal of the paper might be interesting, too.

    (Started a new thread for the paper and the rebuttal so that it will be easier to retrieve, also on a potential new subforum on research methodology.)
     
    Last edited: Jun 28, 2021
  5. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    3,507
    Location:
    Belgium
    Thought this was an interesting study:

    Blinding in randomised clinical trials of psychological interventions: a retrospective study of published trial reports
    https://ebm.bmj.com/content/26/3/109

    Quotes:

    "In the present study, only 20.6% of the trials discussed the potential bias risk from unsuccessful or lack of blinding in the published trial report. This makes it difficult for readers to judge the quality of the research. 71.4% of the trials rejected their null hypothesis on their primary outcome. Considering our findings, there is a risk that previous randomised clinical trials of psychological interventions may have overestimated the beneficial effects and underestimated the harmful effects of the experimental interventions being studied due to bias risks associated with lack of blinding.2–9 There may be reasons to believe that unblinded trial key persons may, consciously or unconsciously, fail to acknowledge harmful effects of the interventions. A non-blinded trial person who has a certain interest in a result, for example a psychotherapist with many years of experience and expertise in a given psychological intervention, might give less attention to harmful effects because of his or hers underlying beliefs. Likewise, an unblinded participant who is told that the psychological intervention is effective and without any harms might either not register or fail to report harmful effects."

    ...

    "Despite this apparent obstacle, one could argue that it is appropriate to expect bias due to non-blinded participants. Research in non-psychological interventions has demonstrated that non-blinded participants may experience and report symptoms differently from blinded ones, because of response bias (when participants report symptoms according to what they think will please the investigators) and because of positive response expectancy from receiving a treatment considered to be superior.1 103 104 In a randomised clinical trial, this could result in participants (consciously or unconsciously) giving exaggerated reports of symptom relief merely because the treatment providers or outcome assessors are perceived as caring and interested in their well-being. It may also produce accurate reports of greater symptom relief, because of self-confirming response expectancies.103 A systematic review found that non-blinded participants generated more optimistic self-reported estimates of intervention effects compared with blinded ones.7 There is no reason to believe, that these processes would not also translate to randomised clinical trials of psychological interventions

    ...

    "In randomised clinical trials of psychological interventions, it may also be appropriate to expect bias due to non-blinded treatment providers. Therapists are often highly trained and supervised in delivering a specific psychological intervention, and they will then probably expect this intervention to be superior to treatment as usual given their personal investment."

     
  6. Snowdrop

    Snowdrop Senior Member (Voting Rights)

    Messages:
    2,134
    Location:
    Canada
    Not to mention that by choosing to study and then research psychology there is already a self-selection bias. People who are not interested don't apply.

    As with any area of life, there will be a few who are naturally curious and open-minded, as well as a few more who might have been had they not fallen into an area of study fraught with knowledge from authority.

    But for the rest I suspect that doing psych research simply is a way of confirming what they already believe and helps allay their anxiety and fear that they may be wrong by 'proving' they are right.

    And while some other professions have varying degrees of corrective mechanisms to weed out those who don't think with a curious and open mind, psychology research seems to lack any such mechanism. Quite the opposite IMO.
     
    DigitalDrifter and Michelle like this.
  7. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    3,507
    Location:
    Belgium
  8. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    3,507
    Location:
    Belgium
  9. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    3,507
    Location:
    Belgium
    Interesting response from Hans Knoop to questions from ME/CFS patient representatives. The following summary was published on the website of Lou Corsius:

    5. It is known that measuring only subjective outcome parameters in such an open trial has a high risk for bias. Therefore, objective outcome parameters must be used also. How do you handle that?

    (K) Why are patient organizations so obsessed with objective measures? Objective measures are subjective too. They depend on how you interpret them.

    It is well known that the outcomes on subjective measures such as fatigue are directly influenced by the treatment (we have used the term ‘manipulated’ in the conversation).

    (K) So you’re basically saying you don’t trust the outcomes reported by the patients themselves? How does that relate to the fact that you as a patient organization represent these patients while you do not take their answers seriously?

    We take patients very seriously. But with regard to your research, we indicate that the treatment is aimed directly at influencing the patient’s thoughts about fatigue, the main outcome measure. That leads to bias.

    (K) We have only subjective primary and secondary outcome parameters. In medical research, more and more researchers are moving away from objective outcomes and they are looking at what is relevant for the patient and how he feels. The outcome measurements we perform are in accordance with ZonMw’s decision.

    In previous research, we have shown that there is no correlation between the objective and subjective outcome parameters.
    Source: https://corsius.wordpress.com/2022/...Dj0hZsSxcEQtX_wW89fNMXVKjnVQDYIeOhDlRinDhnBVU
     
    Milo, MEMarge, MSEsperanza and 10 others like this.
  10. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,463
    Location:
    London, UK
    So he doesn't understand the problem.
     
    Milo, MEMarge, EzzieD and 10 others like this.
  11. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    12,426
    Location:
    Canada
    Well, he is the problem.
     
    Milo, EzzieD, Sean and 7 others like this.
  12. Hoopoe

    Hoopoe Senior Member (Voting Rights)

    Messages:
    5,252
    It seems clear to me he understands the problem (or at least that there is a problem). Why else would he give evasive answers?
     
    Last edited: Jan 25, 2022
    EzzieD, Snow Leopard, Sean and 7 others like this.
  13. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    12,426
    Location:
    Canada
    Describing the lowering of standards is nothing to celebrate, but of course what patients want is objective scientific research, because we have seen the devastation caused by subjective pseudoscientific nonsense like his. Also because, unlike this guy, we understand how science works. Or at least we want it to work, which is kind of the same thing here.

    So here he is literally arguing to value what patients want, which is objective rigorous research that follows the evidence and applies the scientific method, while arguing we should want the opposite, putting his words in our mouths. Amazing how con artists can twist themselves into pretzels even when millions of lives have already been destroyed by their folly.

    And all of this for something that, at best, can only be argued for with "don't you think it's possible for this to be true?", which is where every single pseudoscience gets stuck forever. When people argue like this it's because they have nothing.
     
    EzzieD, Wyva, Sean and 1 other person like this.
  14. Snowdrop

    Snowdrop Senior Member (Voting Rights)

    Messages:
    2,134
    Location:
    Canada
    To me the real problem is the preconception that happens before any sort of research is done. It's more than just a hypothesis -- it cannot be refuted.

    People agreeing they feel better after being told to feel better is their cure.

    If you start with something that cannot be refuted, how can you tell the difference between real efficacy and deception? That's not science. One would think long-term follow-up (years) might do something to clarify, but there is the issue of post-viral illness that resolves on its own, so you would need a separate control.

    A high drop-out rate seems not to be a hindrance to their idea of a 'scientific method.'

    I think a better place for patient advocate influence is with the people who hold the purse strings. Convincing them that this is not sustainable, not reputable and not cost-effective over the mid-term may be a different tack to take. Because these researchers are well aware of the arguments about science and either don't understand or don't care.

    I don't know what it's like in the UK and elsewhere, but in Canada, at least since the last election, citizens can vote from home (I have). Maybe many ill people don't vote. Now that there are many more of us, maybe it's time to suggest that people either get help from volunteers to get out or arrange to vote from home, while making ME an issue on which we're prepared to vote for sympathetic candidates.

    Sorry, I realise once again that the subject got away from me and went beyond the thread title context.
     
  15. FMMM1

    FMMM1 Senior Member (Voting Rights)

    Messages:
    2,631
    I vaguely recall a BBC Radio (4?) program about the crisis in science; e.g. one guy (a social scientist) owned up to making up his results. So there's a problem with a lot of research: it's plain unreliable.

    The program contrasted the recent problem with the golden (post-war) era when much good social science was done, often by those who had survived/escaped the Holocaust. The lesson was that those who were interested/invested had carried out sound research. Seems like a lot of this flawed research is done by people who don't care.
    The program also highlighted that having a vested interest is generally considered inappropriate, yet the evidence from the golden (post-war) era challenged that view.

    Others have put this in simple terms ---
    "So, children, why do we blind trials?
    Because the outcomes might be biased by subjectivity, Sir.
    Well done, children. So which trials specifically need blinding?
    Trials with subjective outcomes, Sir.
    Very good!

    Shall we just go over that again, for Josh and Mary?
    Yes, Sir, please Sir

    So, children..."
    https://www.s4me.info/threads/nice-...21-discussion-thread.23066/page-7#post-387365
     
  16. Sean

    Sean Moderator Staff Member

    Messages:
    7,159
    Location:
    Australia
    Once again Knoop gets it wrong. The objection is not to subjective measures themselves. It is to how they are used.

    Using them on their own, without adequate controls, is unacceptably weak methodology. The many and major limitations of subjective measures must be controlled for, either by blinding or by accompanying objective measures.

    It is the relationship between subjective and objective measures that is the critical info. They need to agree to claim a solid result.

    In other words, treatments must be both effective and acceptable.

    Knoop's psycho-behavioural approach is neither.
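    To make the agreement point concrete, here is a hypothetical sketch (made-up numbers, not data from any real trial) of the kind of check that matters: compare the mean change on a subjective questionnaire with the change on an objective measure such as actimetry, and look at whether the two move together.

    Python:
    import numpy as np

    rng = np.random.default_rng(2)
    n = 120

    # Hypothetical changes in an open-label treatment arm: self-reported fatigue improves
    # a lot, while objectively measured activity (steps/day) barely moves.
    subjective_change = rng.normal(8.0, 6.0, n)      # questionnaire points of improvement
    objective_change = rng.normal(100.0, 800.0, n)   # steps/day, essentially unchanged

    r = np.corrcoef(subjective_change, objective_change)[0, 1]
    print(f"Mean subjective change: {subjective_change.mean():.1f} points")
    print(f"Mean objective change:  {objective_change.mean():.0f} steps/day")
    print(f"Correlation between the two: r = {r:.2f}")
    # A large subjective 'improvement' with a flat, uncorrelated objective signal is the
    # discordance that should stop anyone claiming a solid treatment effect.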
     
    MSEsperanza, EzzieD, rvallee and 3 others like this.
  17. Mithriel

    Mithriel Senior Member (Voting Rights)

    Messages:
    2,816
    I can't understand why psychologists keep insisting that subjective results are the only important ones because it is how the patient feels that matters. (They also say objective results are hard to measure, which is also ludicrous, but that is another story.)

    Phobias about things which are not dangerous like, say, a red ribbon are clearly psychological disorders. If a patient undergoes CBT the important result for them is not whether they can say they have no problems with ribbons anymore but actually being able to pick one up.

    If you have to do foreign travel for your job, it is not important how you feel about flying but whether you can get on an aeroplane. The objective result is what is important, not how you are feeling when you fill out a form, especially as you are caught up in the enthusiasm of finally being told you are no longer trapped by a disorder. That is why results don't hold up at follow-up.
     
    MSEsperanza, Simbindi, EzzieD and 8 others like this.
  18. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    3,507
    Location:
    Belgium
    I've been reading about heart disease, which is interesting because trials of psychosocial interventions often include lots of objective outcomes. These do not seem to support psychosomatic theories that factors such as stress or depression predispose people to heart disease or to recurrence of infarction.

    This 2017 Cochrane review reports:

    "there was no evidence that psychological treatments had an effect on total mortality, the risk of revascularisation procedures, or on the rate of non-fatal MI, although the rate of cardiac mortality was reduced and psychological symptoms (depression, anxiety, or stress) were alleviate."
    https://pubmed.ncbi.nlm.nih.gov/28452408/
     
    MSEsperanza, Hutan, Trish and 7 others like this.
  19. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    3,507
    Location:
    Belgium
    This looks like another example from surgery where non-blinded trials with poor controls reported positive results until a properly blinded, sham-controlled study was set up.

    Bhatt et al. 2014. A Controlled Trial of Renal Denervation for Resistant Hypertension

    BACKGROUND
    Prior unblinded studies have suggested that catheter-based renal-artery denervation reduces blood pressure in patients with resistant hypertension.

    .....

    CONCLUSIONS
    This blinded trial did not show a significant reduction of systolic blood pressure in patients with resistant hypertension 6 months after renal-artery denervation as compared with a sham control.

    Full text at: https://www.nejm.org/doi/full/10.1056/nejmoa1402670
     
  20. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,857
    Location:
    betwixt and between
    The Institute for Quality and Efficiency in Healthcare (IQWiG) in Germany is working on a report on the current state of knowledge on ME/CFS. (Forum thread here.)

    Last year they reviewed their general methods paper which will also be the basis for the report on ME/CFS.

    Some comments on the draft of the methods paper included suggestions on dealing with assessing non-pharmacological, unblindable trials that use only subjective outcomes.

    The IQWiG replied (machine translation by DeepL):

    2.1.4 Appreciation of comments on section 3.4 'Non-pharmaceutical therapeutic interventions'.

    "It is certainly desirable if endpoints that can be objectively recorded are also recorded in this way, because this generally increases the reliability and validity of the data collection (e.g. through blinding).

    "Conversely, however, patient-reported endpoints, such as pain or quality of life, are of utmost importance for patients and thus also for the assessment of a benefit, although by their nature they can only be recorded subjectively.

    "The fact that many symptoms can only be recorded subjectively is also not a disadvantage because ultimately only the patient can evaluate the success of his or her own treatment. If a person learns to rate his or her own symptoms as less severe or threatening, then this can be seen as a genuine relief, since here too it is the subjective patient perspective alone that counts. Overall, therefore, no need for change to the methods paper is seen on this point." (*)

    Perhaps the comments could have been worded more clearly. But why is it so difficult for people who are supposed to be experts in assessing evidence in the field of healthcare to see that using only subjective outcomes in open label trials can't produce reliable evidence, and that it would not be too difficult to use both objective and subjective outcomes?

    (*) Documentation and evaluation of comments on the Draft of the General Methods 6.1 (in German), https://www.iqwig.de/methoden/allgemeine-methoden_dwa-entwurf-fuer-version-6-1_v1-0.pdf

    Edit: I've tried to add some context and order:

    1) The section of the IQWiG's methods paper on which the quoted comments were submitted are here. (English translation)

    2) The mentioned comments on that section and the reply of the IQWiG are here. (English translation)

    3) The German original of the mentioned comments on that section and the reply of the IQWiG are here.
     
    Last edited: Apr 10, 2022
