Independent advisory group for the full update of the Cochrane review on exercise therapy and ME/CFS (2020), led by Hilda Bastian

@Hilda Bastian, sorry to keep labouring the point about unblinded treatments + subjective end points not being suitable for assessing treatment efficacy, but it is so fundamental to the problems the ME/CFS community and many others have had with the BPS proponents, and so fundamental to the usefulness of the coming review. And it looks as though we haven't yet convinced you.

Maybe you can give us an example of a treatment efficacy trial that had unblinded treatments and subjective end points where the conclusion about treatment efficacy could not be legitimately questioned as being partly (and even significantly) distorted by factors not strictly related to the treatment? That might help us identify under what, if any, circumstances the combination is ok and what circumstances it is not.

There was a proposal for research recently, for a commercial and expensive intervention that had virtually no scientific evidence underpinning it. The intervention had been talked about a lot by the regional patient support group - there was a lot of excitement about it. The trial structure was unblinded, with a change in fatigue as the primary outcome (from a survey administered before and after a month of treatment). Can we agree that that trial would be unreasonably biased in favour of the intervention and so the outcome would be worthless?
OK, but can't do it right now - have to work.
 
I wonder whether the difference of opinion on subjective outcomes in unblinded trials may be partly down to misunderstanding.

Take the PACE trial as an example, ignoring for the sake of argument the ethical problems and outcome switching, and focusing just on objective versus subjective outcomes.

The trial was set up to find out whether patients with CFS improved or recovered with GET. We all agree it was an unblinded trial.

There were several subjective outcome measures which were combined to form criteria for improvement and published in the main paper in the Lancet, with a claim that 60% of patients improved with exercise. Great success. Lots of publicity. Similarly great publicity for the recovery paper, again based on a combination of subjective measures.

There were also several objective outcome measures that were not reported in the main papers or publicity because, inconveniently for the researchers, they showed GET to be no better than doing nothing.

So on that basis, the PACE trial was, in effect, an unblinded trial with subjective outcomes, and worthless. If the researchers had been honest and published, with equal fanfare, the fact that patients got no fitter, could still walk far less distance in 6 minutes than healthy people, and showed no benefit in being able to go back to work, then they would have had to report honestly that, apart from transient subjective feelings of slight improvement, the treatment was ineffective on fitness, stamina and ability to work, and the trial would not have been so worthless.

So what I am saying is, yes, the PACE trial had both objective and subjective outcome measures, so should not have been worthless, and honest researchers would have reported both the subjective and objective outcomes, which would be fine.

It became worthless when the authors discarded the objective outcomes. The Cochrane review did the same. They knew objective outcome measures were possible and chose to write a protocol that excluded them, making the Cochrane review worthless, in my opinion.

It would be like an asthma review including only subjective outcomes, when the objective outcomes show very different results, as in the graph someone posted upthread.
 
I wonder whether the difference of opinion on subjective outcomes in unblinded trials may be partly down to misunderstanding.
I think you are being too kind @Trish.

Subjective measures might be reasonable things to include in an unblinded trial for a range of reasons, but they don't measure treatment efficacy. The PACE authors incorrectly represented subjective responses to questionnaires as accurately measuring treatment efficacy. They did this in an environment that actively increased positive expectations from the GET and CBT treatment arms, and actively made failure to improve from these treatments shameful.

In the asthma study, it was a reasonable thing to ask patients how they thought they were after each of the interventions. The difference between the reported well-being and the actual impact on breathing gives us the important finding that asthma research should not rely on subjective reporting for assessing treatments in unblinded trials. It also tells clinicians to encourage patients to use their peak flow meter to objectively assess condition rather than relying solely on how they feel. If the asthma study had had a question 'how convenient did you find the treatment?' that would have been a perfectly valid subjective outcome to measure and it might have given a useful insight. Subjective measures can tell us things, but not reliably whether a treatment worked.

Perhaps if the PACE authors had reported both the objective and subjective measures (notwithstanding all the issues that there were with the "objective" as well as subjective measures), as the asthma researchers and the Mendus trial did, it could have helped more people to understand that ME/CFS research should not use subjective outcomes to assess whether a treatment helps.

Obviously, the existence of a subjective outcome in an unblinded trial doesn't make the trial worthless. The key thing is what question any outcome is used to answer. Using a subjective outcome as the measure of whether a treatment works in an unblinded trial gives a result that can't be assumed to be accurate.
 
I have admittedly missed parts of this thread but I'm struggling to understand why there is so much focus and discussion just on the issue of subjective measures in unblinded trials.

@Hilda Bastian's example of surgery and a pain outcome appears to be a valid example of an unblinded trial with a subjective outcome.

But even if that's questioned, PACE isn't problematic just because it used only subjective measures in unblinded trials. It also ignored objective measures,
... switched outcome measures,
... used one recovery measure that meant the patient could worsen from entry but be considered better,
... used selection criteria that selected patients who did not all have ME,
... failed to account for the biomedical evidence that directly discredits its claimed mechanism of action for the therapy,
and I'm sure others can fill in more

It's the whole package that makes that trial such a problem.
 
So on that basis, the PACE trial was, in effect, an unblinded trial with subjective outcomes, and worthless. If the researchers had been honest and published, with equal fanfare, the fact that patients got no fitter, could still walk far less distance in 6 minutes than healthy people, and showed no benefit in being able to go back to work, then they would have had to report honestly that, apart from transient subjective feelings of slight improvement, the treatment was ineffective on fitness, stamina and ability to work, and the trial would not have been so worthless.

Indeed. Instead, they buried the results from these objective outcomes in subsequent papers. It took patients to point out that the results from these objective outcome measures didn't match up with the subjective outcomes:

https://journals.sagepub.com/doi/full/10.1177/1359105316675213
 
My colleagues at Berkeley and elsewhere are continually shocked every single time by what they see in these ME/CFS studies. They don't understand why BMJ, Lancet, etc are publishing studies that violate so many core principles, and why they refuse to acknowledge the problems when they are pointed out--or publish 3,000-word corrections to pretzel their way out of retractions, as they did in the Lightning Process study.
Personally I think it's most likely because the key players are supremely adept at cultivating high-level influential relationships across a wide sphere of influence. The sort of 'scientists' whose mantra is "it's not what you know but who you know". They then, in effect, choreograph the behaviours of other influencers to achieve the broader outcomes they seek. Of course those seeking to limit NHS funding may well do their own influencing - knighthoods come to mind. And yes, I am a big fan of "Yes Minister" :).
 
The real examples against the argument that an entire trial is worthless if it's unblinded and it includes even one subjective endpoint? For starters, that includes every trial of surgery that didn't have a sham surgery arm (which is almost every trial of surgery ever) if it included any patient-reported or clinician-reported outcome: so if they also measured quality of life or asked about pain, no other data from that trial has any value at all - not length of the surgery, not blood loss, not mortality....all because people were asked to rate their pain. That is literally what the statement here in this comment means.

Did anybody actually say this? I think it's clear to all of us that including relevant subjective endpoints is highly valuable, even in unblinded trials, but the point some of us are repeatedly trying to make is that if, in an unblinded trial, you don't apply robust objective outcomes alongside the subjective ones, the evidence on that particular treatment effect is zero.

The examples you provided in the quote above (length of the surgery, blood loss, mortality) all seem to me to be relevant objective outcomes but they measure different treatment (side) effects. I think nobody suggested that objective outcomes are spoiled by also measuring subjective outcomes?

I think part of trial design is to formulate a hypothesis and define which treatment effects matter according to that hypothesis and how they are measured. If you're saying a particular subjective outcome like pain relief is not the most, or the only, relevant treatment effect in a particular study that also objectively reported other relevant treatment (side) effects, then why use this example as an argument in favour of using solely subjective endpoints in trials where blinding is not possible?

(Edited for clarity.)
 
I have been wondering if the issue is one of whether or not the problem is temporary...
So - if you are measuring pain which is not chronic, as in childbirth, then a subjective measure is fine. But if you are using a subjective measure for a long-term illness/syndrome, then it's not OK. You need an objective measure to be able to say that the subjective measure is actually worth measuring. Perhaps if you are looking just at pain, a subjective measure is OK - but if you are looking at chronic pain, then your measures should include something objective about functioning as well as a subjective rating of the pain itself.
And - I do think that when we are talking about 'therapies' designed to alter the way we think, subjective measures, particularly when they are short-term only, are especially problematic. I suppose the field of depression must be rife with problems of this nature...?
 
I have admittedly missed parts of this thread but I'm struggling to understand why there is so much focus and discussion just on the issue of subjective measures in unblinded trials.
I guess because it's such a fundamental issue when determining what studies provide useful evidence. And because there seems to be a surprising level of faith in subjective measures as reliable indicators of treatment utility in unblinded trials.

@Hilda Bastian's example of surgery and a pain outcome appears to be a valid example of an unblinded trial with a subjective outcome.
But even for things we might expect to be obvious ('epidural analgesia reduces pain levels when giving birth'), subjective outcomes have the potential to mislead. In the blinded example I quoted, the finding was that analgesia delivered during the second stage of delivery did not result in statistically lower pain scores. Of course that is a finding specific to the dosage, drug and protocol, but still. I expect that if the pain relief had not been blinded, the reported pain scores would have looked quite different. So, in the case of this temporary pain, the subjective measure in an unblinded study probably would not have been fine.

Given all the many examples of outcomes being distorted by bias in subjective measures, I don't understand how people can see a pain outcome being a valid example of a subjective outcome that reliably tells you whether a treatment works in an unblinded trial.

As @MSEsperanza says, I don't think anyone is saying that subjective measures can't be useful.
 
So what I am saying is, yes, the PACE trial had both objective and subjective outcome measures, so should not have been worthless, and honest researchers would have reported both the subjective and objective outcomes, which would be fine.

It became worthless when the authors discarded the objective outcomes. The Cochrane review did the same. They knew objective outcome measures were possible and chose to write a protocol that excluded them, making the Cochrane review worthless, in my opinion.

It would be like an asthma review including only subjective outcomes, when the objective outcomes show very different results, as in the graph someone posted upthread.
Absolutely. As I think @Jonathan Edwards has said in the past, subjective outcomes are fine even in open label trials provided objective outcomes are also employed to calibrate them against.
 
My attempt to better explain how I see the lack of blinding + subjective outcomes problem.

Lack of blinding makes subjective outcomes unreliable.

Unblinded clinical trials that attempt to determine if a treatment is effective with subjective outcomes are generally worthless.

There may be exceptions to this, if nonspecific factors are very carefully kept equal between treatment groups, but that is very difficult.

CBT/GET and studies of similar interventions do not keep nonspecific effects equal between groups. Bias that will affect subjective outcomes is built into the interventions because their goal is to modify the patient's perception of their illness and symptoms and induce an optimistic state of mind. The investigators seem to believe this is one of the active ingredients of the therapy. They do not consider the possibility that all they're doing is using known methods to introduce bias and confusing this with having found an effective treatment. If an intervention was more than just bias, then objective outcomes should improve. The objective outcomes however are consistent with there being no treatment effect. Proponents of CBT/GET and similar interventions simply ignore unflattering objective outcomes when they make claims of treatment efficacy.
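The general point here can be shown with a toy simulation. All the numbers below are made up for illustration (nothing comes from PACE or any real trial): both arms get a treatment with zero true effect, but the unblinded treatment arm's subjective scores carry an expectation bias.

```python
import random
import statistics

random.seed(1)

def simulate_trial(n=100, expectation_bias=0.5):
    """Two arms, zero true treatment effect in both.

    The unblinded treatment arm's *subjective* scores are shifted up by an
    expectation bias; its *objective* scores are untouched by it.
    """
    control_subjective = [random.gauss(0, 1) for _ in range(n)]
    treated_subjective = [random.gauss(expectation_bias, 1) for _ in range(n)]
    control_objective = [random.gauss(0, 1) for _ in range(n)]
    treated_objective = [random.gauss(0, 1) for _ in range(n)]

    subjective_diff = (statistics.mean(treated_subjective)
                       - statistics.mean(control_subjective))
    objective_diff = (statistics.mean(treated_objective)
                      - statistics.mean(control_objective))
    return subjective_diff, objective_diff

subj, obj = simulate_trial()
print(f"apparent subjective benefit: {subj:.2f} SD")  # the bias masquerading as efficacy
print(f"objective benefit:           {obj:.2f} SD")   # nothing there to find
```

Looking only at the subjective column, the useless treatment appears to work; the objective column tells the real story. That is the whole argument in two lines of output.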

The design of CBT/GET clinical trials, and the many other problems with them (like outcome switching or absurd definitions of recovery), make it possible to obtain positive results for almost any intervention. This is not serious science. It is at best incompetence, at worst a deliberate attempt to mislead. This problem is not limited to ME/CFS; it is becoming a widespread problem affecting many interventions and health conditions. The ultimate outcome will be a proliferation of implausible and even absurd interventions (like, for example, the Lightning Process) that seem to work for a wide range of conditions that all have in common the absence of a biomarker of disease severity.

Once researchers have obtained a positive result with such a flawed clinical trial, they tend to view it as confirmation that their explanatory models for the illness are correct. These models typically claim that the illness is somehow a product of the patient's mind (and this seems logical if an intervention that manipulates mental state appears to help).

Objective outcomes may also be somewhat unreliable, depending on what is measured and how. In the context of ME/CFS, I would not consider a brief improvement in objectively measured daily steps, or a brief return to work, as reliable evidence of treatment efficacy, because the problem in ME/CFS is more one of being unable to sustain activities than of doing them at all. If properly used, objective outcomes should be able to give much more accurate information on treatment efficacy.
 
Here's an example I mentioned earlier - for those who think pain is ok as a primary measure of treatment utility in an unblinded study.

Mendus study

So there was a blinded controlled crossover design that found no benefit from a supplement (MitoQ) on pain. If anything, there was a trend to more pain with MitoQ.
[Graph: pain scores from the blinded crossover study]

At the same time, people who missed out on taking part in the blinded study were able to buy their own MitoQ and participate in an open-label study. For this study, MitoQ was reported to reduce pain (albeit not by much) over the study period. There was a statistically significant decrease in pain in the first 40 days.
[Graph: pain scores from the open-label study]

So, do we conclude that the trial provides evidence that MitoQ is slightly helpful for pain in ME/CFS?
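One reason to be wary of the open-label result: even a completely inert treatment tends to show improvement in an uncontrolled pre/post design, via regression to the mean, because people often enrol during a flare. A toy sketch with invented numbers (nothing here comes from the Mendus data):

```python
import random
import statistics

random.seed(7)

def open_label_change(n=200, flare_threshold=1.0):
    """Uncontrolled pre/post design with a treatment that does nothing.

    People tend to join an open-label study when their pain is worse than
    their own long-term average (a flare-up); later readings drift back
    toward that average, so pain 'improves' with no treatment effect at all.
    """
    changes = []
    while len(changes) < n:
        usual_pain = random.gauss(5, 1)                  # long-run average for this person
        pain_at_entry = usual_pain + random.gauss(0, 1)  # pain on the day they enrol
        if pain_at_entry - usual_pain < flare_threshold:
            continue                                     # only flare-ups enrol
        pain_at_followup = usual_pain + random.gauss(0, 1)  # null treatment: no shift
        changes.append(pain_at_followup - pain_at_entry)
    return statistics.mean(changes)

print(f"mean change in pain: {open_label_change():.2f}")
# negative, i.e. an apparent improvement, despite a treatment with no effect
```

On top of any expectation bias from knowing you are taking the supplement, selection at enrolment alone is enough to manufacture a within-group "improvement". Only the blinded, controlled comparison can separate that artefact from a real effect.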
 
From a psychologist's perspective - my understanding of GET and its underlying rationale.

GET is aimed at increasing physical activity despite ongoing symptoms and flare-ups of symptoms - because these ongoing symptoms are understood within the GET model of ME/CFS to be due to deconditioning, misattribution of benign bodily sensations as malign, and the patient focusing attention on these and becoming distressed and avoidant of activity as a result. This cycle goes round and round, maintaining the patients' symptoms.

The model asserts that flare-ups of pain are normal when people start rehab after being inactive - akin to the acute exacerbation of pain, stiffness and so forth post surgery or after an accident, during acute physio rehabilitation. This is largely 'to be expected'. Any increase in pain or other symptoms is purely down to deconditioning, lack of fitness and stamina, lack of use, inflammation and so forth. Within the GET model of CFS there is no physical reason why patients cannot increase activity safely and consistently - other than that the patient blocks the process, due to fear of harm or of worsening symptoms like pain and debility. The GET model assumes that the symptoms the patients experience are due to the patients misinterpreting and misattributing benign bodily sensations as malign. Once the patient starts to move and gets going, bit by bit they can, and are actively encouraged to, do more and more. The theory is that this process can be additive until the person is functioning well and largely as normal, and has re-learned that their symptoms are benign. Lots of talk of two steps forward, one back - like in standard physio rehab of an acute injury.

So, that should be straightforward to do in practice and to demonstrate objectively. Easy peasy. (If it were true.)

It is in essence a behavioural intervention to try to overcome a fear/phobia of movement, activity and exercise. Phobias are straightforward to treat and overcome in many instances and circumstances. Again, easy peasy. (If it were true.)

However, this completely misses the point: the patients' main symptoms are post-exertional malaise (PEM) and increased debility across a wide range of symptoms. The more activity (mental and physical) they do, the worse they feel and the more debilitated they become. There is objective evidence of increased activity making pwME/CFS worse: when objective measures of activity are used, patients who increase activity then go on to do less activity, and report more pain and lower mood. That is the opposite of what would be expected by the GET model.

The patient's voice is completely absent from the GET way of working. The underlying clinician beliefs and the GET model being used are not openly shared with the patients. When this is subsumed within the MUS model (TC et al see these things as the same, e.g. CF = CFS = ME = ME/CFS = MUS = SSD = BDD = FMS = IBS etc), sharing the underlying model is actively discouraged. It's opaque. This is, in my view, unethical. There is no way a patient can truly give their informed consent. It is the opposite of good medical care. It (GET) is 'done' to the patient, who is not fully informed. I have no doubt that the clinicians who are 'doing' this are well intentioned - but that is not enough for professional, ethical practice. And no objective checks are made to see if the process is effective or has construct validity - that what is being 'done' in research or clinic resembles or is based on what the model says it is doing. It only appears to matter if the patient 'feels better'. Which they are going to report, because under this model a failure to improve is, by definition, the patient's failure. And no one likes that - so there is huge psychological and social pressure to conform, continue and smile - whether it is working or not. Especially if the clinician was nice, welcoming, supportive, caring and so forth. And if the patient had been pre-primed and given messages throughout the process that GET was effective, safe and so forth, and that change was down to the patient to take forward. Not achieving a small, positive effect under such circumstances would be more startling.

From a theoretical perspective, the GET model should easily show high levels of change if the model were correct and had good validity - including face and construct validity. I would expect large effect-size changes which could be measured objectively and subjectively, and independently verified. Assessors pre and post therapy can be independent of the treating clinicians. That could/should be done to reduce bias too. Small subjective changes should ring large alarm bells. They do to me. Absence of change in objective measures, or the active dismissal/minimisation of the usefulness of objective measures by researchers, should be ringing massively large clanging bells of bias.

As humans are highly loss averse, approaching the idea that GET is not effective is psychologically a difficult process - if the researcher/clinician has truly and wholly believed in it. The belief that GET for ME/CFS works will be maintained pretty much at all costs - and the researcher will move the goalposts until the desired outcome for the belief is 'proven' - i.e. persuade themselves, co-researchers, funding agencies, colleagues and peer reviewers that switching outcomes and relying on subjective measures etc. is OK - unless they are held to account independently (that should happen via peer review...) and by objective evidence. Otherwise it is all belief and wishful thinking - no matter how well intentioned or desired.

Joan Crawford
Counselling Psychologist
 
From a theoretical perspective, the GET model should easily show high levels of change if the model were correct and had good validity - including face and construct validity. I would expect large effect-size changes which could be measured objectively and subjectively, and independently verified. Assessors pre and post therapy can be independent of the treating clinicians. That could/should be done to reduce bias too. Small subjective changes should ring large alarm bells. They do to me. Absence of change in objective measures, or the active dismissal/minimisation of the usefulness of objective measures by researchers, should be ringing massively large clanging bells of bias.
Excellent points with regard to the discussion on subjective and objective outcomes in an unblinded trial. @strategist sums things up very well. Hope you can read those posts, @Hilda Bastian.
 
I have admittedly missed parts of this thread but I'm struggling to understand why there is so much focus and discussion just on the issue of subjective measures in unblinded trials.

@Hilda Bastian's example of surgery and a pain outcome appears to be a valid example of an unblinded trial with a subjective outcome.

But even if that's questioned, PACE isn't problematic just because it used only subjective measures in unblinded trials. It also ignored objective measures,
... switched outcome measures,
... used one recovery measure that meant the patient could worsen from entry but be considered better,
... used selection criteria that selected patients who did not all have ME,
... failed to account for the biomedical evidence that directly discredits its claimed mechanism of action for the therapy,
and I'm sure others can fill in more

It's the whole package that makes that trial such a problem.

Doing the above is what a person will do to avoid loss (of face, professional identity, etc.) when they know they are wrong and lack the courage to face up to that fact. This kind of behaviour should alert others to the pretty obvious fact that there was a dud result that the researchers don't want to fess up to, or lack the capacity to accept. When that's coupled with a vice-like grip on the 'model' (I'm right, I'm right, I'm right... because I say so...), it won't be relinquished easily. And, if I'm being cynical, highly debilitated patients are the least likely group to kick up a fuss - so the researchers have carried on, in the face of objections. However, the researchers have completely misunderstood the human spirit: the desire to be understood and the drive for knowledge, health and decency. People will not rest until they are understood.
 
I think you are being too kind @Trish.
I agree with all your points in that post, @Hutan. You put what I was trying to say better than I did. The subjective outcomes are not measures of efficacy; they are measures of how the patient (temporarily) reports feeling. As with the asthma paper, it shows that subjective outcomes are misleading if the required outcome is efficacy in improving breathing. The subjective outcomes measured something different.

If, like Chalder, you think CFS is psychosomatic, with symptoms being a misinterpretation of normal bodily sensations, then in her world of logic, efficacy is measured by self-reported improvement on her ridiculous fatigue scale. The fact that the patients' physical health and ability to function were no better than in the group with no treatment seems to be of no interest to her, because for her there was no physical health problem in the first place, and ability to function is related to psychological blocks, not physical ones. She and many others are so steeped in the myth that this is a psychosomatic problem that getting us to interpret our symptoms differently, so that we report reduced subjective symptoms, is in her eyes a cure of our psychosomatic illness.

It's the whole package that makes that trial such a problem.
Sorry, yes, I knew using PACE as my example would be a problem, that's why I explicitly said I was setting aside the ethical and other problems to just focus on the subjective/objective outcomes aspect of the trial. My point was that because only the subjective outcomes were reported in the main papers and were used to claim GET is effective, that makes the trial - as published - worthless.
 
I'm still only able to occasionally read and post, so again apologies for missing most of the discussion when replying to a particular post.

Or consider something like heat or ice for pain from osteoarthritis. You can also measure something like how many painkillers did people take, but that is going to be a self-report, too. You can argue of course for how much weight to put on particular subjective outcomes in a trial, but to not dismiss the entire trial as "valueless" because patients were asked about their pain and it was a high-level outcome is not some weird outlier opinion.

Yes, having objective outcomes that are only self-reported is also problematic. I think in most cases, though, there are ways to reduce the risk that objective outcomes are inadequately reported. Consumption of painkillers that need to be prescribed might be a feasible thing to double-check?

Of course, outcomes like taking painkillers or how many steps you walk each day are not entirely objective in every sense of the term, e.g. participants may have a certain range of options to endure more pain, or to endure the payback of any increased activity, etc.

There are also a couple of other objective outcomes that aren't ideal for reliably measuring an improvement on their own. But then, a combination of objective outcomes could be used.

I think this example and all the discussions on other forum threads show that it's more complex than just subjective and objective outcomes; it's also about assessing the adequacy of outcomes in general, as well as the best way to measure and report both subjective and objective outcomes.

Many forum members are aware of this complexity and discuss its implications for trial design, also together with investigators who are interested in getting our input.

However, that complexity to me still doesn't seem to challenge some basic facts about bias in trial design.
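One of those basic facts can be made concrete with a back-of-envelope calculation. This is only a sketch with made-up numbers (the sample size, standard deviation, and size of the reporting bias are all assumptions, not figures from any real trial), showing how a modest expectation-driven shift in self-reported scores in an unblinded trial can produce a "statistically significant" result even when the treatment has no real effect:

```python
import math

# Illustrative unblinded trial: the treatment has NO real effect, but
# participants in the treatment arm rate themselves slightly better on a
# subjective scale because they know they got the treatment.
# All numbers below are assumptions for illustration only.

n_per_arm = 100        # participants in each arm (assumed)
sd = 1.0               # standard deviation of the outcome scale (assumed)
reporting_bias = 0.5   # shift in self-reported score from expectation alone (assumed)

# Standard error of the difference in means between two independent arms
se = math.sqrt(sd**2 / n_per_arm + sd**2 / n_per_arm)

# z-statistic for the observed (purely bias-driven) difference
z = reporting_bias / se

print(f"SE = {se:.3f}, z = {z:.2f}")  # SE = 0.141, z = 3.54
```

With these assumed numbers, a reporting bias of half a point on the scale yields z ≈ 3.5, comfortably past the conventional 1.96 cutoff, despite zero real benefit. That is exactly why blinding, or outcomes that can't be shifted by expectation, matter for efficacy claims.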
 
Last edited:
In the asthma study, it was a reasonable thing to ask patients how they thought they were after each of the interventions. The difference between the reported well-being and the actual impact on breathing gives us the important finding that asthma research should not rely on subjective reporting for assessing treatments in unblinded trials. It also tells clinicians to encourage patients to use their peak flow meter to assess their condition objectively rather than relying solely on how they feel. If the asthma study had included a question 'how convenient did you find the treatment?', that would have been a perfectly valid subjective outcome to measure and it might have given a useful insight. Subjective measures can tell us things, but not reliably whether a treatment worked.

I agree with all of that. It was what I was rather ineptly trying to say.

But there's something that puzzles me about what the asthma study shows.
[Attached image: chart of albuterol vs placebo results]

https://www.nejm.org/doi/full/10.1056/NEJMoa1103319
Abstract
BACKGROUND

In prospective experimental studies in patients with asthma, it is difficult to determine whether responses to placebo differ from the natural course of physiological changes that occur without any intervention. We compared the effects of a bronchodilator, two placebo interventions, and no intervention on outcomes in patients with asthma.

METHODS
In a double-blind, crossover pilot study, we randomly assigned 46 patients with asthma to active treatment with an albuterol inhaler, a placebo inhaler, sham acupuncture, or no intervention. Using a block design, we administered one each of these four interventions in random order during four sequential visits (3 to 7 days apart); this procedure was repeated in two more blocks of visits (for a total of 12 visits by each patient). At each visit, spirometry was performed repeatedly over a period of 2 hours. Maximum forced expiratory volume in 1 second (FEV1) was measured, and patients' self-reported improvement ratings were recorded.

RESULTS
Among the 39 patients who completed the study, albuterol resulted in a 20% increase in FEV1, as compared with approximately 7% with each of the other three interventions (P<0.001). However, patients' reports of improvement after the intervention did not differ significantly for the albuterol inhaler (50% improvement), placebo inhaler (45%), or sham acupuncture (46%), but the subjective improvement with all three of these interventions was significantly greater than that with the no-intervention control (21%) (P<0.001).

CONCLUSIONS
Although albuterol, but not the two placebo interventions, improved FEV1 in these patients with asthma, albuterol provided no incremental benefit with respect to the self-reported outcomes. Placebo effects can be clinically meaningful and can rival the effects of active medication in patients with asthma. However, from a clinical-management and research-design perspective, patient self-reports can be unreliable. An assessment of untreated responses in asthma may be essential in evaluating patient-reported outcomes.

Doesn't this demonstrate that the placebo effect is so powerful in its influence on subjective outcome measures that such measures should never be used for assessing efficacy against physical symptoms, even in double-blinded trials? After all, if only the subjective measure had been used, the researchers would have concluded that albuterol is no better than placebo as an asthma treatment, and therefore that it is ineffective.
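To put the abstract's numbers side by side (the non-albuterol FEV1 figures are "approximately 7%" in the paper, so these are rough):

```python
# Percent improvement figures taken from the abstract quoted above.
# The three non-albuterol FEV1 values are "approximately 7%" in the paper.
fev1 = {"albuterol": 20, "placebo_inhaler": 7,
        "sham_acupuncture": 7, "no_intervention": 7}
subjective = {"albuterol": 50, "placebo_inhaler": 45,
              "sham_acupuncture": 46, "no_intervention": 21}

# Incremental benefit of the drug over its placebo, on each outcome:
objective_gain = fev1["albuterol"] - fev1["placebo_inhaler"]               # 13 points
subjective_gain = subjective["albuterol"] - subjective["placebo_inhaler"]  # 5 points

# Size of the placebo response on the subjective outcome:
placebo_response = subjective["placebo_inhaler"] - subjective["no_intervention"]  # 24 points

print(objective_gain, subjective_gain, placebo_response)  # 13 5 24
```

On the subjective scale, the placebo response (about 24 points) dwarfs the drug's incremental signal (about 5 points, and not statistically significant), whereas on FEV1 the drug stands out clearly.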

What does this say about the conclusions drawn from the Phase 3 Rituximab trial:
B-Lymphocyte Depletion in Patients With Myalgic Encephalomyelitis/Chronic Fatigue Syndrome: A Randomized, Double-Blind, Placebo-Controlled Trial

https://annals.org/aim/article-abst...omyelitis-chronic-fatigue-syndrome-randomized

Abstract
Background:
Previous phase 2 trials indicated benefit from B-lymphocyte depletion in myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS).

Objective:
To evaluate the effect of the monoclonal anti-CD20 antibody rituximab versus placebo in patients with ME/CFS.

Design:
Randomized, placebo-controlled, double-blind, multicenter trial. (ClinicalTrials.gov: NCT02229942)

Setting:
4 university hospitals and 1 general hospital in Norway.

Patients:
151 patients aged 18 to 65 years who had ME/CFS according to Canadian consensus criteria and had had the disease for 2 to 15 years.

Intervention:
Treatment induction with 2 infusions of rituximab, 500 mg/m2 of body surface area, 2 weeks apart, followed by 4 maintenance infusions with a fixed dose of 500 mg at 3, 6, 9, and 12 months (n = 77), or placebo (n = 74).

Measurements:
Primary outcomes were overall response rate (fatigue score ≥4.5 for ≥8 consecutive weeks) and repeated measurements of fatigue score over 24 months. Secondary outcomes included repeated measurements of self-reported function over 24 months, components of the Short Form-36 Health Survey and Fatigue Severity Scale over 24 months, and changes from baseline to 18 months in these measures and physical activity level. Between-group differences in outcome measures over time were assessed by general linear models for repeated measures.

Results:
Overall response rates were 35.1% in the placebo group and 26.0% in the rituximab group (difference, 9.2 percentage points [95% CI, −5.5 to 23.3 percentage points]; P = 0.22). The treatment groups did not differ in fatigue score over 24 months (difference in average score, 0.02 [CI, −0.27 to 0.31]; P = 0.80) or any of the secondary end points. Twenty patients (26.0%) in the rituximab group and 14 (18.9%) in the placebo group had serious adverse events.

Limitation:
Self-reported primary outcome measures and possible recall bias.

Conclusion:
B-cell depletion using several infusions of rituximab over 12 months was not associated with clinical improvement in patients with ME/CFS.
This used subjective outcome measures. One thing it clearly shows is a strong placebo response, but does it show that Rituximab was ineffective in treating ME? On the basis of the asthma study's findings, is that conclusion valid?
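For what it's worth, the abstract's headline comparison can be reproduced. The responder counts below (26/74 placebo, 20/77 rituximab) are inferred from the reported percentages, and a simple Wald interval is used here, so the upper bound comes out slightly different from the paper's 23.3 percentage points (which presumably used another method):

```python
import math

# Responder counts inferred from the abstract's percentages:
# placebo 35.1% of 74 -> 26; rituximab 26.0% of 77 -> 20.
resp_placebo, n_placebo = 26, 74
resp_ritux, n_ritux = 20, 77

p1 = resp_placebo / n_placebo
p2 = resp_ritux / n_ritux
diff = p1 - p2  # difference in response rates, in favour of placebo

# Simple Wald 95% confidence interval for a difference in proportions.
se = math.sqrt(p1 * (1 - p1) / n_placebo + p2 * (1 - p2) / n_ritux)
lo, hi = diff - 1.96 * se, diff + 1.96 * se

print(f"difference = {diff*100:.1f} pp, 95% CI {lo*100:.1f} to {hi*100:.1f}")
# difference = 9.2 pp, 95% CI -5.5 to 23.8
```

The interval crosses zero, matching the reported P = 0.22: no evidence of benefit, and the point estimate actually favours placebo.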
 
@Hilda Bastian's example of surgery and a pain outcome appears to be a valid example of an unblinded trial with a subjective outcome.

I missed that. What do you mean by a valid example, @Medfeb ? Do you mean that investigating pain as an outcome in an unblinded, not adequately controlled trial with self-reported pain relief as the only measure is valid?

I found these statements by @Hilda Bastian:

The thing I've been debating is partly whether subjective outcomes in an unblinded treatment trial can ever justifiably be a primary endpoint [...]

And even if that one trial showed what you're arguing for those 50-ish minutes, this wouldn't mean that the dozens (or however many there are now) of trials of epidurals for pain relief that didn't have placebos were valueless, and that epidurals haven't really been shown to reduce pain in labor.

I am not sure what you mean here, @Hilda Bastian, but since I didn't read all the posts, maybe I missed it and you said somewhere that it also applies to unblinded trials that...

There are times when the only way of trying to assess whether a treatment is having a beneficial effect is by asking people, e.g. how much pain they are in.

...and that this will provide reliable evidence?

I don't think you're suggesting that just because people don't apply certain criteria in other studies, it's OK to do studies that way?

Or that, just because it would be helpful to know something, there must always be means available to know it?

Apologies for being trivial, but in case it could be helpful for the discussion:

Do you agree that, if you have well-founded reasons to apply certain treatments but aren't sure about their superiority compared with other treatments, because for whatever reason this can't be properly assessed, the best thing to do is to say honestly what you know for sure and what you don't know? That you should say what your reasoning is behind suggesting a certain treatment, but at the same time be very clear about the fact that there is not sufficient evidence to favor one treatment over another?

I think documenting and registering outcomes of a certain intervention in practice can be of great value. But it's something different than doing an RCT.

For ME research I think treatment documentation and observational studies could be very useful, but these approaches have to be clear about the biases involved, apply adequate protocols, and shouldn't be used to exaggerate the evidence.

(Apologies in advance if I'm not able to get back to replies for a while.)

Edited for clarity.
 