Is it useful to send them this study of albuterol versus placebo in asthma, to illustrate that studies which do not properly control for bias in self-reported outcomes are at risk of producing garbage-quality data?
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3154208/
Yes!
RESULTS
Among the 39 patients who completed the study, albuterol resulted in a 20% increase in FEV1, as compared with approximately 7% with each of the other three interventions (P<0.001). However, patients’ reports of improvement after the intervention did not differ significantly for the albuterol inhaler (50% improvement), placebo inhaler (45%), or sham acupuncture (46%), but the subjective improvement with all three of these interventions was significantly greater than that with the no-intervention control (21%) (P<0.001).
Firstly, to avoid confusion: their statement "as compared with approximately 7% with each of the other three interventions" actually means "other three arms", i.e. it includes the no-intervention arm.
The objective findings showed negligible FEV1 differences between the sham interventions and no intervention at all: 0.2% and 0.4%.
The change in FEV1 for the active intervention, compared to control, was 13%.
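As a quick sanity check on those differences, here is a minimal sketch of the arithmetic. The per-arm values are illustrative only, chosen to be consistent with the rounded figures quoted above (~20% for albuterol, ~7% elsewhere, with the sham arms 0.2 and 0.4 points above no intervention), not taken from the paper itself:

```python
# Objective FEV1 improvement (% change from baseline). Illustrative values
# consistent with the rounded figures quoted above, not the paper's exact numbers.
fev1 = {
    "albuterol": 20.0,
    "placebo_inhaler": 7.5,     # ~0.4 points above no intervention
    "sham_acupuncture": 7.3,    # ~0.2 points above no intervention
    "no_intervention": 7.1,
}

control = fev1["no_intervention"]
for arm, value in fev1.items():
    print(f"{arm}: {value:.1f}% (vs no intervention: {value - control:+.1f} points)")

# Albuterol comes out roughly 13 points above control; the sham arms add only
# about 0.2-0.4 points objectively.
```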
It's clear that a small (but clinically inadequate) objective change of around 7% occurred even with no intervention at all. Maybe this suggests there is just something about trial conditions that fosters it anyway? Possibly motivation to try that tiny bit harder, perhaps even egged on a bit more by therapists at the end than at the start? More practice at the measurement procedure? Other features of the trial environment? If so, then the very minor 6MWT (six-minute walk test) changes in the GET arm of PACE were swallowed up in the noise of any such effect, and in the light of this asthma trial are arguably even less meaningful than their already recognised insignificance.
Yet participants receiving an intervention perceived an improvement of around 50%, compared with 21% for no intervention. Even the no-intervention group's perceived improvement was about 3x their objective change, and the sham interventions' was roughly 7x.
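For what it's worth, a rough worked version of those ratios, again using only the rounded percentages quoted above (so the multiples are approximate):

```python
# Perceived (subjective) improvement vs objective FEV1 improvement, using the
# rounded percentages quoted in this post. The ~7% objective figure is approximate.
arms = {
    # arm: (perceived %, objective %)
    "no_intervention":  (21, 7),
    "placebo_inhaler":  (45, 7),
    "sham_acupuncture": (46, 7),
}

for arm, (perceived, objective) in arms.items():
    print(f"{arm}: perceived is ~{perceived / objective:.1f}x the objective change")

# Roughly 3x for no intervention and 6-7x for the two sham arms.
```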
The big problem in convincing people about PACE's unblinded interventions with subjective outcomes is that many still believe ME/CFS is a wholly subjective illness, and so only subjective outcomes are of any consequence. I think they therefore assume that if there also happen to be any objective symptoms, the subjective outcomes will be a good indication of them anyway. Clearly demonstrating the huge disparity between subjective and objective outcomes, in a disease where the objective indications are very well understood and accepted, just might help break that Catch-22.
The findings of this asthma study seriously reinforce what people have been saying (@Jonathan Edwards probably being the first): that the PACE findings are quite simply uninterpretable, and thereby valueless.