Written over a couple of days, so some points have since been made by others, but anyway...
This has developed into a huge problem, as researchers are not using objective measures in unblinded RCTs. And they have not been pulled up for it, either by their institutions or via the peer review process.
The researchers into CBT and GET then mix up 'feeling better' (some small/modest improvement is pretty much inevitable after a face-to-face intervention where someone was nice to you) and 'being better' (recovered/cured/able to do largely as they please, with no or few symptoms). The latter is 100% what patients want/would like. The former has been used ad nauseam by researchers in RCTs of CBT and GET. Poor.
Small/modest improvements in subjective measures/questionnaires have then been 'sold' as a meaningful result, when all they demonstrate is the placebo effect. And that doesn't get anyone back to work or health, or anything close to what patients would consider recovery. This is a classic example of the psychological process of substitution. Humans are good at it, and slow and poor at identifying it and calling it out.
Well said.
I am firmly on the side of subjective outcome measures requiring adequate blinding, and/or being used alongside objective measures (which are given at least equal weight to the subjective measures). Anything less is not scientific, and is potentially very dangerous.
I don't object to subjective outcome measures; in fact, I want them used. They can provide valuable information, in particular from the correlations between subjective and objective measures. But they cannot be used on their own and without blinding, especially in a trial of a treatment whose whole purpose and means is to alter patients' subjective self-perception.
That just becomes circular nonsense revolving entirely around patients' questionnaire scoring behaviour, independent of any practical real world changes or benefit. Changes in actual perceptions and cognitions must have measurable external agency and consequences well beyond mere questionnaire scoring behaviour, otherwise what is the point?
Science works by allowing us to discriminate, and quantify the difference, between subjective and objective elements in our perceptions and cognitions. Unblinded subjective measures on their own don't allow that discrimination and quantification, that necessary element of control (as in Randomised Controlled Trial). It is not possible in such trials to distinguish between genuine therapeutic benefit, and potential confounders and artefacts of that methodology, such as placebo effect, wanting to please (or avoid displeasing) the therapist, etc.
At most unblinded subjective measures can only tell us there is an effect. They don't, on their own, tell us what the effect is due to (and hence if it is genuinely therapeutic). That is why we need blinding and/or objective measures as well, to help tease out causal pathways.
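To make that concrete, here is a minimal toy sketch (entirely made-up numbers, not modelled on any real trial) of the scenario an unblinded trial cannot rule out: the treatment shifts questionnaire answers through response bias alone, while an objective measure such as activity shows no change. The subjective comparison comes out 'significant', the objective one does not, and the questionnaire result on its own cannot tell you which world you are in.

```python
# Toy simulation of an unblinded trial where the only "effect" is response bias
# on the questionnaire (all numbers are hypothetical, for illustration only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 100  # participants per arm

# Change in subjective fatigue score (higher = more reported improvement).
# Both arms get a modest non-specific improvement; the unblinded treatment arm
# gets an extra shift purely from response bias / wanting to please the therapist.
subj_control = rng.normal(loc=2.0, scale=5.0, size=n)
subj_treatment = rng.normal(loc=2.0 + 3.0, scale=5.0, size=n)

# Change in an objective measure (e.g. daily step count): no real effect in either arm.
obj_control = rng.normal(loc=0.0, scale=800.0, size=n)
obj_treatment = rng.normal(loc=0.0, scale=800.0, size=n)

print("Subjective:", stats.ttest_ind(subj_treatment, subj_control))
print("Objective: ", stats.ttest_ind(obj_treatment, obj_control))
# Typical output: a clearly "significant" difference on the questionnaire,
# nothing on the objective measure. Looking at the questionnaire alone, this
# is indistinguishable from a genuine therapeutic benefit.
```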
What the BPS school has done is construct an unfalsifiable 'methodology' that tries to maximise the effect of various confounders and artefacts, and arbitrarily relabels them as a therapeutic benefit.
This whole shitshow is going to come down to this technical issue. If the BPS school are not allowed to rely on unblinded subjective measures then they have nothing, and they know it. And so do their critics.
Wessely and Chalder:
"in the later stages of treatment patients are encouraged to increase their activity (which must ultimately be the aim of any treatment)"
Wessely, David, Butler, & Chalder – 1990
Change in activity level is objectively measurable. So there are no excuses for not measuring it.
Subjective measures might be reasonable things to include in an unblinded trial for a range of reasons, but they don't measure treatment efficacy.
Measuring how acceptable patients find a treatment might be legit, for example.
Another red herring I get is that blinded trials are hard to do for therapist-delivered treatments. Indeed they are, but that just highlights the weakness of the trials we have. It does not mean that it is OK to treat inadequate trials as if they were somehow adequate.
This excuse from them really gets up my nose. There is a minimum methodological standard to meet for any study wishing to claim scientific status. Trials that do not meet that minimum standard are not merely a weaker form of evidence, they are non-evidence to start with. They lack the necessary rigour and clarity to be interpreted and applied safely. No amount of hand waving and sophistry can change that.
It's like trying to build a house on a foundation of sand. No matter how well constructed the house is, it is still built on sand.