Independent advisory group for the full update of the Cochrane review on exercise therapy and ME/CFS (2020), led by Hilda Bastian

I just don't grasp how someone can use poor-quality outcome indicators when better ones are available - presumably cost [EDIT "of e.g. actimetry"] is a factor, but if the evidence gathered using questionnaires is so poor then why do/publish the study?
Because the "scientists" doing these studies are not the real deal. Proving their hypotheses right, and the kudos that goes with that, is far more important to them than seeking scientific truths and thereby striving towards what is best for patients and science. It should be incredible that this is the real situation, but sadly it is all too credible.
 

Yeah, and the design of the system means that academics need to publish so many papers / get so many points - so I guess gaming the system [EDIT i.e. publishing poor-quality studies with subjective outcome indicators] is incentivised!
 
I know Miranda quite well - she used to be my boss.

My criticism isn't of her but of the appointment of someone so involved in Cochrane to an independent advisory group.

As well as the language Hilda uses around 'contested areas', which suggests she doesn't get that it's about good or bad methodology but thinks it's about debatable judgement calls.
 
I agree. Miranda is a Cochrane loyalist. So was not open to many of my more challenging ideas and opinions.
 

I remember David Tovey dismissing my complaint - that relying on subjective outcomes in unblindable trials (and reviews of those trials) was "terrible methodology" - as "simply an opinion". I sincerely hope that members of the IAG will recognise that it is terrible methodology to rely on self-reported outcomes when interventions are specifically designed to manipulate how you think and behave - for example, how positively you fill in a questionnaire about your private experience.

https://healthycontrol.org/2019/01/...iew-of-exercise-for-chronic-fatigue-syndrome/.

[NB I have a new blog address - healthycontrol.org. All the blogs on it have been transferred from the old one https://healthycontrolblog.wordpress.com/]

 
Tovey's response here seems to reflect the common idea that since ME/CFS is diagnosed on subjective criteria (it's a "private experience"), subjective outcomes must be the most appropriate outcomes to rely on. Never mind that you can't equate fatigue with anxiety and pain - fatigue is almost defined by the limiting impact it has on function and activity - it's generally very bad logic in itself.

Nothing actually directly measures the patient's "private experience", so you want to come up with the best measure for that. Given the key nature of ME/CFS as a function and activity limiting condition, and the difficulty with actually measuring something like "fatigue", even if you only cared about that "private experience" the best measure would still be objective measures of that person's level of function.

I hope the IAG will not let such high school philosophy level logic go unchecked.
 

I do find it shocking that they don't seem to have stopped to think through the argument they are making and whether it is sound. It just seems like a skill lacking in that community. But maybe the real skill that needs to be taught is to reason through an argument even when you believe in the outcome.


To me the point you make about direct measures is important beyond the simple arguments around subjective criteria. There is a more general point about what makes a good or bad proxy for that private experience, and about the properties such a proxy should have. They use the CFQ and SF-36 but never seem to reflect on the questions, and the mix of questions, and whether these are likely to produce an accurate proxy.

So with the SF-36 we have questions about walking a block and climbing a flight of stairs. For someone with ME these are neither identical (which a Likert scale would demand) nor independent: as you improve your ability to walk a block, you also improve your ability to climb a flight of stairs. But at the edges of the 'scales' - i.e. the questions which indicate very good or very bad abilities - the interactions between questions are weaker. What this really means is that the scales aren't linear, and therefore can't validly be summarised with means, mean differences, etc.
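The non-linearity problem can be sketched in a few lines of Python. All scores, group data and item meanings below are invented for illustration - they are not taken from the SF-36 itself - but they show how two groups can report an identical mean change even though the real-world meaning of their improvements is very different:

```python
# Sketch of why mean differences mislead on non-linear ordinal scales.
# Invented scores: 0 = cannot walk a block ... 3 = no limitation.
# Moving from 0 to 1 (housebound to walking a block) is a far bigger
# real-world change than moving from 2 to 3, yet both add "+1" to the mean.

group_a = {"before": [0, 0, 2, 2], "after": [1, 1, 3, 3]}  # big gains at the bottom
group_b = {"before": [2, 2, 2, 2], "after": [3, 3, 3, 3]}  # small gains at the top

def mean_change(group):
    """Average per-participant change in score - the kind of summary
    statistic a trial report would present."""
    diffs = [after - before
             for before, after in zip(group["before"], group["after"])]
    return sum(diffs) / len(diffs)

# Both groups report a mean improvement of 1.0, despite representing
# very different patient realities.
print(mean_change(group_a), mean_change(group_b))  # → 1.0 1.0
```

The summary statistic throws away exactly the information - where on the scale the change happened - that determines what the change means for the patient.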

With questionnaires like the HADS, they assume the questions relate to mental state rather than physical ability - again never really stopping to think through what the questions mean and why someone may answer the way they do.

I actually think that we need an alternative way of looking at results from questionnaires - perhaps based on clustering, and on how people transition between clusters of question answers. But that would be a lot more complex and involve some study. Something like labelling sets of answers as severe, moderate, mild, well, etc. and then looking for change. It also wouldn't give a simple measurement of improvement and would be very coarse-grained - but if an intervention really works then it should show in coarse-grained measures.
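A minimal sketch of that cluster-and-transition idea in Python. The labels, thresholds and answer data are all invented for the sketch; a real analysis would derive the categories from actual clustering of patient responses rather than from an arbitrary sum threshold:

```python
from collections import Counter

def label(answers):
    """Map one participant's set of 0-3 ordinal answers to a coarse
    severity label. The sum-based thresholds are placeholders for
    categories a real study would derive by clustering."""
    total = sum(answers)
    if total >= 9:
        return "severe"
    if total >= 6:
        return "moderate"
    if total >= 3:
        return "mild"
    return "well"

def transitions(pre, post):
    """Count (pre-label, post-label) pairs across participants."""
    return Counter((label(a), label(b)) for a, b in zip(pre, post))

# Invented example data: four questions, answers 0 (no problem) to 3 (severe),
# one inner list per participant, before and after an intervention.
pre  = [[3, 3, 2, 3], [2, 2, 1, 2], [1, 1, 0, 1]]
post = [[3, 2, 2, 3], [1, 1, 1, 1], [0, 0, 0, 0]]

for (before, after), n in sorted(transitions(pre, post).items()):
    print(f"{before} -> {after}: {n}")
```

The output is a coarse transition table (e.g. "severe -> severe: 1") rather than a single improvement score, which is exactly the point: it shows who moved between categories instead of averaging everyone together.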
 
I think your post encapsulates a vital argument that needs to be at the heart of the analysis of data from all the ME/CFS trials. The assumption in all the systematic reviews I have seen is that the data can be taken at face value as reflecting patients' reality, and that it is linear data. Neither of these is true for the SF-36 and CFQ, which are the most commonly used subjective measures in ME/CFS. There is a statistician on the Review panel. I wonder whether she ever looks below the numbers to what those numbers mean.

I think in any feedback we give on the protocol for the new review, we need not only to ensure there is a focus on objective outcome measures rather than subjective ones, but also that any subjective ones included are subjected to proper scrutiny - and to a redesigned reanalysis that takes into account the fact that the subjective measures are not a good proxy for patients' reality.
 
And address the quality of objective measures too. For example:

The six-minute walk is, in my view, hopeless at identifying changes in the health of the people most likely to sign up for and stick with a GET trial - that is, people with relatively mild illness. It is highly subject to motivation in those participants (and also to how close the person comes to running), which can be affected by therapist expectations and monitoring. For example, if I am well rested, I can easily walk briskly for six minutes if I have to.

Any activity monitoring has to be done for at least a month prior to the intervention and probably for at least 3 months after the intervention. It should involve a monitor rather than self-reported activity. It is relatively easy for someone with mild ME/CFS to increase their activity level for a month or so. The outcome should not be based on whether a person completed a specified activity such as a daily swim or a walk, as there could be activity swapping, where previous activities are given up in order to accommodate the new activity.

Any measure of work or school hours needs to be done many months after the intervention, as there are logistics involved in increasing hours, and then the deterioration that can come from such an increase can take several months to be obvious. For school hours, the study needs to be carefully timed so that long school holidays, or even periods of at-home study for exams don't confuse things.

Percentages of people actually finishing an exercise intervention versus how many people were offered it, and versus how many people started it do tell us something.
 
Especially in how it's always presented: "fatigue was reduced". No, it wasn't even measured: there was no baseline, and the non-linear grading of relative meaning is framed so ambiguously that the participant is usually not answering the question the way the researcher means it - an ambiguity that is deliberate and exploited.

At best, a proxy rating that is argued to represent "fatigue" - and what is meant by that varies a lot - was altered after deliberate attempts to make participants alter it in a specific direction. To say that "fatigue was reduced" is simply invalid: it is not something that can be measured, and it is not something those questionnaires produce a reliable proxy for. Quite the opposite, as their definitions of the illness have little to do with reality.

The "validity" of those questionnaires was not established in a reliable way; their own arbitrary definitions and thresholds are gauged against one another. This is meant to resemble the cosmic distance ladder, which measures distances to astronomical objects by building measures on one another - first close ones, then farther ones relative to the close ones. Except here there is not a single real measurement: it's guesstimates compared to other guesstimates. Using finger-widths, not a sextant. Whose fingers? Doesn't matter; the numbers don't mean anything anyway.

If the field of EBM were at all serious, it would care dearly that this is just as invalid as rating temperature not with a thermometer but with the uncalibrated 1-11 dials of a stove.
 

This is one of the building blocks of the problems with outcome measurements.

Validity of questionnaires

The way the results of questionnaires are interpreted, as @Adrian said above.

Objective measurements, and the lack of long-term pre- and post-intervention follow-ups, as @Hutan mentioned - plus of course just which part of the patient population are participating. I would suggest 3 months isn't nearly long enough, especially if it happens to coincide with someone's best or worst season.

Plus entry criteria in the first place.
 
There is an interesting side story about the SF-36 - well, I find it interesting anyway. At the Illness Behaviour Conference in 1985, at which Cott introduced the McMaster BPS model to the world, a paper was delivered by Dr J Ware of the Rand Corporation. It appears that he was at that time responsible for the development of the SF-36. No reason is given for that paper not being reproduced in the book of the proceedings.

There was also a paper by Dr A Detsky, a health economist with interest in cost/benefit analysis. That also failed to make it into the book.

The reasons may, of course, be entirely innocent.
 

I think Rand were trying to make money on it, so that could be the reason. I've also come across invited talks that don't get published because they are published elsewhere.
 

I did wonder about that and had a quick look at his other publications from about that time, but could see nothing obvious. It is a pity. One would have expected his paper to be tailored in some way to the circumstances and the audience, and should have been informative.
 
A post on an article about research integrity by Richard Smith, 'cofounder of the Committee on Publication Ethics (COPE), for many years the chair of the Cochrane Library Oversight Committee, and a member of the board of the UK Research Integrity Office', has been moved to its own thread here.
 
Until it's retracted, single-minded ideologues will keep pushing it and misleading people. Even though he says this is about LC, he thinks he is making a point here, and keeps somehow insisting that those trials considered PEM - which he knows nothing about, and which none of those trials even account for.

Try to follow this discussion and see if you can find any point other than him pushing his beliefs. He just can't accept that he is out of his depth here, especially as he is talking to people who, unlike him, understand what PEM is - he is just using it as a substitute for fatigue.

 