I realize you hold the position adamantly, and nothing I say will change that, but just for anyone who's seeing this and not reading the whole thread, this is my position: an entire trial is not necessarily valueless even if the data from one, or some, of its outcomes is biased. (I explain that
here.) Nor are subjective endpoints always valueless, even as measures of treatment effects, and sometimes there is no endpoint more valid than a patient-reported outcome.
The innuendo is still there, Hilda - 'adamantly'. I hold the position because it has been empirically verified thousands of times and it is
in effect a tautology. What we mean by a subjective outcome is one that can suffer spurious influences if people have a prior idea what the answer should be, and an unblinded trial is one where they do.
I will clarify my original comment in a minute, but to suggest that I hold this view through some sort of prejudice or vested interest is to commit exactly the impugnment that you were complaining of in others!
Thanks for replying and making your arguments clear - I and everyone else here really appreciate that. But the argument here is again irrelevant. As others have said, nobody would be so stupid as to imply that the presence of a single subjective endpoint makes a trial valueless - why should anyone be so irrational (that impugnment again?). And I have already dealt with the red herring about subjective endpoints not being valueless. I actually put it in capital letters last time to try to make sure it got read! Of course they are important and valuable - if they are free from bias - which they will be in a blinded trial.

Another red herring I get is that blinded trials are hard to do for therapist-delivered treatments. Indeed, that highlights the weakness of the trials we have. It does not mean that it is OK to treat inadequate trials as if they were somehow adequate. Reliability of evidence is a measurable factual matter, not something that shifts with your needs. Of course sometimes the subjective endpoint is more valid. Any fool can see that. But that is not the point.
Let me rephrase my original comment to give the meaning that is transparent to all members here but that you seem to have difficulty with. It was, I think,
self-evident in the context in which the comment was made.
The use of subjective outcomes (alone) from unblinded trials to judge the usefulness of a treatment is valueless.
The problem with the exercise review is that the trials included either do not have adequate objective primary endpoints (which relates to the
superiority of subjective ones that we have agreed on) or they were not meaningfully controlled because the psychological framing of the comparator was not comparable. That is a different issue, and one Ernst has interestingly commented on in the context of the Lightning Process recently. I remember noting that one study had fitness as an objective endpoint that improved, but, as we agree, fitness is not a good index of whether or not someone feels less ill, which is what matters. OK, the context is complex, but the point about the combination of subjective endpoints and lack of blinding is not. As other members have pointed out, all sorts of things about both the trials and the review made things worse, but we are discussing this one point.
I did not at any time imply that that is what you were saying.
I don't understand that comment. If you were not trying to defend the value of subjective endpoints - presumably against a suggestion that they were inferior - why do you keep mentioning it? I am baffled here.
It's not the case that the only relevant outcome of a trial of epidural analgesia in labor is pain relief. Even leaving aside other possible maternal outcomes (e.g. whether it increases the risk of forceps or cesarean delivery, and the potential harms of epidurals to women), the objective impact on the newborn of drugs given in labor is clearly critical. If the first studies had found marginal pain relief and major newborn harm, epidural analgesia would have been dead in the water.
As someone else has pointed out, this is now about harms and clearly falls outside the meaning of my original comment in the relevant context. Yes, trials can have value in showing harm. PACE would actually have great value in showing that CBT and GET do not work usefully if it were not for the dreadful mess about recruitment and generalisability. PACE
is of value in showing that the theoretical model was wrong - if it were right, the objective measures would have paralleled the subjective ones, and they definitely did not.
The first paper I ever wrote was a retrospective review of a surgical procedure for pain in the wrist. It was an unblinded study with a primary subjective endpoint. Its value was in the humbling learning experience for a Dr J Edwards who later realised he had made every mistake we are now discussing. But it was valueless as a guide to the usefulness of the treatment.
The real examples against the argument that an entire trial is worthless if it's unblinded and it includes even one subjective endpoint?
But that is not a sensible reading of my original comment - as I think everyone else here would agree. It makes no sense. I think I did actually use the term 'primary outcome' initially, and I am sorry if I did not put it in every time. But everyone else here is up to speed on the need for prespecified primary outcomes, the problems of multiple analyses, and so on. When I say an unblinded trial with a subjective outcome, I assume that people will understand me to mean a primary outcome, or an outcome used to decide whether or not the treatment is useful - that was the problem with the review.
I would highly recommend reading many of the threads here. We have had a long discussion about combining subjective and objective endpoints into composite prespecified primary outcomes. Rheumatologists have done this for 25 years with the ACR criteria for improvement in RA. You have to use a multiple-threshold system in which improvements in subjective features are combined with objective endpoints that corroborate their validity. It is probably the only way to assess many treatments for ME. It does not completely deal with the problem of bias (especially in very large trials, if endpoints are defined statistically), but it mitigates it quite well.
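To make the multiple-threshold idea concrete, here is a minimal sketch loosely modelled on the ACR20-style responder rule (at least 20% improvement in both joint counts, plus at least 20% improvement in at least 3 of 5 further measures that mix subjective and objective components). The measure names, dictionary layout, and exact thresholds below are illustrative assumptions for the sketch, not a quotation of the official criteria:

```python
def pct_improvement(baseline, followup):
    """Percent improvement, where lower scores are better."""
    if baseline == 0:
        return 0.0
    return 100.0 * (baseline - followup) / baseline

def composite_responder(baseline, followup, threshold=20.0):
    """ACR20-style multiple-threshold rule (illustrative): the two
    joint counts (objective) must both improve by >= threshold %, AND
    at least 3 of the 5 remaining measures - a mix of subjective
    (pain, global assessments, disability) and objective (acute-phase
    reactant) endpoints - must also improve by >= threshold %.
    Inputs are dicts keyed by measure name."""
    core = ["tender_joints", "swollen_joints"]
    others = ["patient_pain", "patient_global", "physician_global",
              "disability", "acute_phase_reactant"]
    # Both core joint counts must clear the threshold.
    if not all(pct_improvement(baseline[m], followup[m]) >= threshold
               for m in core):
        return False
    # Count how many of the remaining measures clear the threshold.
    improved = sum(pct_improvement(baseline[m], followup[m]) >= threshold
                   for m in others)
    return improved >= 3
```

The point of the design is that a purely subjective improvement cannot by itself make a patient a "responder": the subjective gains must be corroborated by the objective components before the composite threshold is met.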
I am afraid that you are straw-manning here, Hilda. It is a needless diversion from the work in hand - to try to get a Cochrane review that is fit for purpose.