Jonathan Edwards
Senior Member (Voting Rights)
This excuse from them really gets up my nose. There is a minimum methodological standard to meet for any study wishing to claim scientific status. Trials that do not meet that minimum standard are not merely a weaker form of evidence; they are non-evidence to start with. They lack the necessary rigour and clarity to be interpreted and applied safely. No amount of hand-waving and sophistry can change that.
Yes, I like this idea of non-evidence. A system for assessing the usefulness of treatments, like Cochrane, needs to recognise that some forms of 'evidence' are simply not good enough to be worth even grading as 'low' or 'poor' or 'weak'. If, for instance, a higher score in a test group is actually rather less than the placebo effects known from other trials, it is not weak evidence for an effect. If anything it is weak evidence for a negative effect and rather strong evidence for no positive effect.
Basically, trials have to be considered in the real world context of trial psychology.
The central motivation for a Cochrane review is to see if the treatment is useful. You can only get evidence of even minimum value from prespecified primary outcome data. Other measures may be of peripheral interest if the primary outcome is clear, but we know that the problem of multiple analyses makes it all too easy to find some other 'evidence' of an effect. We know from past experience that people fiddle their experiments most of the time in all walks of science. We know that in trials of therapist-delivered treatments in ME/CFS they do it all the time - switching outcomes, truncating axes, and so on. It is reasonable to expect any trial that is to be taken seriously to have a clear primary outcome, and if this is subjective and the trial is unblinded then this is non-evidence.
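To put a rough number on how easily that other 'evidence' turns up, here is a minimal sketch, not a model of any real trial: the trial size, the number of outcome measures and the 5% threshold are assumptions chosen purely for illustration, and the measures are treated as independent, which real questionnaire scores are not. It simulates trials of a treatment that does nothing at all and counts how often at least one outcome still comes out 'significant'.

```python
# Illustrative sketch only: all numbers below (50 patients per arm,
# 10 outcome measures, 5% threshold) are assumptions for illustration,
# not figures from any real trial.
import random
import statistics

random.seed(1)

def null_trial(n_per_arm=50, n_outcomes=10):
    """One simulated trial of a treatment with no effect at all:
    both arms come from the same distribution and are compared on
    n_outcomes unrelated measures with a crude two-sample z-test."""
    spurious_hits = 0
    for _ in range(n_outcomes):
        treated = [random.gauss(0, 1) for _ in range(n_per_arm)]
        control = [random.gauss(0, 1) for _ in range(n_per_arm)]
        diff = statistics.mean(treated) - statistics.mean(control)
        se = (statistics.pvariance(treated) / n_per_arm
              + statistics.pvariance(control) / n_per_arm) ** 0.5
        if abs(diff / se) > 1.96:  # roughly p < 0.05, two-sided
            spurious_hits += 1
    return spurious_hits

n_sim = 2000
trials_with_a_hit = sum(null_trial() > 0 for _ in range(n_sim))
print(f"{trials_with_a_hit / n_sim:.0%} of simulated null trials show "
      f"at least one 'positive' outcome")
```

Under those assumptions roughly 1 - 0.95^10, i.e. about 40%, of trials of a completely ineffective treatment will still offer a 'significant' finding somewhere, which is exactly why picking over secondary measures is not a harmless exercise.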
And I think it is reasonable, in a real life context, to consider all other data relevant to usefulness as non-evidence. For a systematic review to pick over the range of data to try to find some secondary measure that looks a bit positive is simply to commit the same crime that the authors are expected to avoid: post hoc analysis.
I think it is interesting that this discussion probably rarely arises in the context of standard pharmacological treatments or complementary therapies, although for different reasons. For complementary therapies nobody in the establishment system is threatened by being upfront about lack of credibility in real life terms. Everyone is happy that non-evidence is non-evidence. In a sense the same is true for drugs because those who would be threatened are often in industry and in theory outside the establishment system. But in reality this is very blurred. I think the main difference for drugs is that they get evaluated before they are in widespread use (unlike homeopathy etc.).
The central problem I see with therapist-delivered treatments, and also with a range of 'medical procedures' like facet joint injections, acupuncture and, strangely, the use of radioisotopes (which do not require drug licenses), is that they get into routine use before evaluation, and there is a large body of people in the establishment system with a strong interest in finding evidence to support continued usage. The salient example is Chalder and Wessely giving detailed instructions on the best method of CBT for ME/CFS in 1989 (if I remember rightly), at a time before any evidence had been gathered from any sort of trial.
There is a strange sense of entitlement shown by people involved in these treatments. It is suggested that they should not be criticised too much because they are doing their best to do trials in difficult circumstances. It is assumed that it is in everyone's interest to do more and better trials to show how well the treatments work. But if, as of now, it is very unclear that these treatments work, or that they are based on any theory that makes it likely they will work, then why should these people be granted this sort of leeway and support?
If all that can be found is compatible with spurious influences on assessment, or random variation, or all the other things that mean a PhD lab student can always find a positive result somewhere buried in a week's work, then this is non-evidence. And very often the lack of anything more substantial is clear evidence of non-effect.