Independent advisory group for the full update of the Cochrane review on exercise therapy and ME/CFS (2020), led by Hilda Bastian

As we see from the authors' latest publication, they appear to think PEM = 'boom and bust'.
This is such an important point that I want to mention it. PEM (or PEF, or PENE) is not a single symptom; it's a specific pattern in how symptoms appear.

I remember reading in the PACE trial paper that it would be astonishing if GET helped, as PEM is (or "was") considered to be the main feature of CFS. So, in my words, by definition it rather should not work.

As a reader I would expect some explanation of why PEM suddenly and surprisingly no longer mattered, and could somehow be overcome by pushing through. As a reader of Cochrane I really should expect this.

As a curious person I would also expect some guesses as to why this PEM occurs at all. An interesting question.

However, there is no such attempt: merely observation (at best), then a statement that is perhaps not too untrue, and then the short conclusion that GET and CBT help.

This simply looks like cheating, and if Cochrane is not able to spot this problem, it is far from the spirit that made it grow important.
 
What I will never understand is this: if we were talking about unblinded trials using self-reported outcomes of homeopathy, crystal healing, astrology or dowsing, there would be no disagreement; scientists worldwide would agree that results obtained by that methodology should be disregarded as biased, misleading and without value.

Apply the exact same methodology to talking therapies, which are even worse in that they actively seek to influence how participants rate their symptoms, and now we are having a debate.
 
As they would for standard pharmacological treatments.
What is it about therapist-delivered treatments that is somehow shielded from these strictures?
 
Re the question on why there has been so much focus on the subjective outcome +unblinded trial issue:
I guess because it's such a fundamental issue when determining what studies provide useful evidence. And because there seems to be a surprising level of faith in subjective measures as reliable indicators of treatment utility in unblinded trials.

I know others will disagree but selecting the wrong cohort of patients is just as fundamental. Even if a body of research is perfectly conducted, its conclusions about efficacy and safety in a particular disease are meaningless if the cohort includes an overrepresentation of patients with some other disease.

Of course there are lots of ways that a study can be unreliable. But I'm not aware that the person leading a new review of GET for ME/CFS disputes that it's important that studies select cohorts actually with ME/CFS (and therefore with the core symptom of PEM). But @Hilda Bastian does seem to be open to the idea that subjective outcomes can be reliable measures of the utility of an intervention in an unblinded trial. So that seems worth discussing, to understand what we think about that idea and why we hold the views we do.
 
The very same comments were very popular with the BPS crowd commenting on the Rituximab trial. All valid points. They know these methods are problematic; they simply choose to wave the problems away for themselves.
 
From a psychologist's perspective: my understanding of GET and its underlying rationale.

GET is aimed at increasing physical activity despite ongoing symptoms and flare-ups of symptoms, because these ongoing symptoms are understood within the GET model of ME/CFS to be due to deconditioning, misattribution of benign bodily sensations as malign, and the patient focusing attention on these and becoming distressed and avoidant of activity as a result. This cycle goes round and round, maintaining the patient's symptoms.

The model asserts that flare-ups of pain are normal when people start rehab after being inactive, akin to the acute exacerbation of pain, stiffness and so forth after surgery or an accident, for example, during acute physio rehabilitation. This is largely 'to be expected'. Any increase in pain or other symptoms is purely down to deconditioning, lack of fitness, lack of stamina, lack of use, inflammation and so forth. Within the GET model of CFS there is no physical reason why patients cannot increase activity safely and consistently, other than the patient blocking the process due to fear of harm or of worsening symptoms like pain and debility. The GET model assumes that the symptoms the patients experience are due to the patients misinterpreting and misattributing benign bodily sensations as malign. Once the patient starts to move and gets going, bit by bit they can do more and are actively encouraged to do more and more. The theory is that this process can be additive until the person is functioning well and largely as normal, and has re-learned that their symptoms are benign. Lots of talk about two steps forward, one back, like in standard physio rehab of an acute injury.

So, that should be straightforward to do in practice and to demonstrate objectively. Easy peasey. (If it were true).

It is in essence a behavioural intervention to try and overcome a fear / phobia of movement, activity and exercise. Phobias are straightforward to treat and overcome in many instances and circumstances. Again easy peasey (If it were true).

However, this completely misses the point: the patients' main symptom is post-exertional malaise (PEM), with increased debility across a wide range of symptoms. The more activity (mental and physical) they do, the worse they feel and the more debilitated they become. There is objective evidence of increased activity making pwME/CFS worse: when objective measures of activity are used, patients who increase their activity then go on to do less activity and report more pain and lowered mood. That is the opposite of what the GET model would predict.

The patient's voice is completely absent from the GET way of working. The underlying clinician beliefs and the GET model being used are not openly shared with the patients. When this is subsumed within the MUS model (TC et al see these things as the same, e.g. CF = CFS = ME = ME/CFS = MUS = SSD = BDD = FMS = IBS etc), sharing the underlying model is actively discouraged. It's opaque. This is, in my view, unethical. There is no way a patient can truly give their informed consent. It is the opposite of good medical care. GET is 'done' to the patient, who is not fully informed. I have no doubt that the clinicians who are 'doing' this are well intentioned, but that is not enough for professional, ethical practice.

And no objective checks are made to see whether the process is effective or has construct validity, i.e. that what is being 'done' in research or clinic resembles or is based on what the model says it is doing. It only appears to matter whether the patient 'feels better'. Which they will report, because if they have failed to improve, it is, by definition of this model, the patient who has failed. And no one likes that, so there is huge psychological and social pressure to conform, continue and smile, whether it is working or not. Especially if the clinician was nice, welcoming, supportive, caring and so forth, and the patient had been pre-primed and given messages throughout the process that GET was effective, safe and so forth, and that change was down to the patient to take forward. Not achieving a small, positive effect under such circumstances would be more startling.

From a theoretical perspective, the GET model should easily show high levels of change if the model were correct and had good validity, including face and construct validity. I would expect large effect-size changes which can be measured objectively and subjectively and can be independently verified. Assessors pre- and post-therapy can be independent of the treating clinicians; that could and should be done to reduce bias too. Small subjective changes should ring large alarm bells. They do for me. Absence of change in objective measures, or the active dismissal / minimisation of the usefulness of objective measures by researchers, should be ringing massively large clanging bells of bias.

As humans are highly loss averse, approaching the idea that GET is not effective is psychologically a difficult process if the researcher/clinician has truly and wholly believed in it. The belief that GET for ME/CFS works will be maintained pretty much at all costs, and the believer will move the goalposts until the desired outcome is 'proven', i.e. persuade themselves, co-researchers, funding agencies, colleagues and peer reviewers that switching outcomes, relying on subjective measures and so on is OK, unless they are held to account independently (that should happen via peer review...) and by objective evidence. Otherwise it is all belief and wishful thinking, no matter how well intentioned or desired.

Joan Crawford
Counselling Psychologist
Brilliant. Right on point. Thank you!
 
Doing the above is what humans do to avert loss (of face, of professional identity, etc.) when they know they are wrong and lack the courage to face up to this fact. This kind of behaviour should alert others to the pretty obvious fact that there was a dud result that the researchers don't want to fess up to, or lack the capacity to accept. When that's coupled with a vice-like grip on the 'model' (I'm right, I'm right, I'm right... because I say so...), it won't be relinquished easily. And, if I'm being cynical, highly debilitated patients are the least likely group to kick up a fuss, so the researchers have carried on in the face of objections. However, the researchers have completely misunderstood the human spirit: the desire to be understood and the drive for knowledge, health and decency. People will not rest until they are understood.
Again -- brilliant! It is so appreciated to hear clear, accurate thinking on the subject.

This is what was needed a long time ago. But, as you say, we will not rest until this wrong has been righted.

:thumbup:
 
This was posted an hour ago. The physio account has 28,000 followers.


Clicking on the link they used, no warning that the review is problematic and under, uh, review. This is very... problematic. As is the "Published 21 May, 2020", makes the conclusions look brand new and refreshed. And the substance, of course, very problematic.

Ah, you have to click on "Read the full abstract" to see the easily missed italicized warning about the review update.

Knowing that people will continue to be harmed by this... ugh. Nobody deserves this.
 
Ideally the outcome measures should relate to things that are relevant to the patient in the real world.

Notably, there have been no studies that actually ask patients which outcome measures are most relevant. Likewise, none of the PROMs have been tested to determine whether patients find them relevant, despite two systematic reviews on PROMs suggesting this is a major problem.
 
@Joan Crawford, thank you very much for your ongoing engagement here. I agree with much of what you write but I'd like to discuss a couple of points that I think are relevant to the Cochrane review and ME/CFS research design.

Ideally the outcome measures should relate to things that are relevant to the patient in the real world. Examples might include: ability to work (or equivalent);...

The outcome measure 'ability to work (or equivalent)' is quite a difficult one. I've given the example of my son returning to school after a year of home schooling before. So, if my son had been asked in January what his ability to attend school was, he would have said '100%'. He started school at the beginning of February and was attending well at the end of that month. If you had asked him then, or had an objective measure based on school attendance, the rating might have been 90%. But by May he was bed-bound and having difficulty even being awake to answer questions, let alone getting to school, so the rating was 0%. The point I am making is that it is very difficult to work out what a sustainable level of work or school is until you have been doing it for some time. Any forecasting by the patient at the end of an intervention about what they might be able to sustain is likely to be optimistic. Particularly in the case of work, it takes time to apply for and obtain a job. And any objective measure of work hours or schooling needs to be made after enough time has passed for any cumulative impact to show itself. Enough time is probably at least three months after the increase in activity.

... actigraphy over a decent time period (a week or two) and followed up on at 12 / 24 months; ...
For the same reason (non-sustainability), a week or two is much too short a time to assess real changes in physical activity. As I've said before, a week or two of activity monitoring is really a subjective measure. Someone with ME who is motivated will often be able to push themselves to achieve higher levels of activity for that length of time and probably longer. A decent period of time for continuous monitoring is probably at least three months. That can be remeasured in a follow-up if desired, but the most important thing is to make the period of measurement long enough.
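To illustrate why the measurement window matters, here is a toy calculation with entirely invented step counts (hypothetical numbers, not data from any study): a motivated patient pushes to a higher activity level for the two weeks of monitoring, then pays for it afterwards.

```python
# Hypothetical daily step counts for one patient over 12 weeks (84 days).
baseline = 3000                       # sustainable daily steps
boosted = [baseline + 2000] * 14      # weeks 1-2: pushing through for the monitor
crashed = [baseline - 1500] * 28      # weeks 3-6: payback period (PEM)
recovering = [baseline] * 42          # weeks 7-12: back to baseline
days = boosted + crashed + recovering

two_week_mean = sum(days[:14]) / 14       # what a two-week trial window records
twelve_week_mean = sum(days) / len(days)  # closer to the sustainable level

print(f"two-week mean:    {two_week_mean:.0f} steps/day")
print(f"twelve-week mean: {twelve_week_mean:.0f} steps/day")
```

A two-week window records only the unsustainable boost; the twelve-week mean ends up below baseline because of the payback period, which is exactly the information a short monitoring window throws away.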

...
I think this is one area that patients can have more input - where they have been notably absent in the past...
:thumbup: Thank you.

Having secondary subjective outcomes measures, alongside subjective ones, in a trial is very helpful as it gives a wide range of functioning (activity, physical, emotional, mental health, self efficacy) that would also expect to improve. It's informative and provides a backdrop to support an intervention and could help translation into clinical practice from research.
(I'm sure you meant 'having secondary subjective outcome measures alongside objective ones'.) We've seen that secondary measures are often biased. If they are included, lots of thought needs to be given to ways to reduce the bias. Apps collecting real time assessments (e.g. daily) rather than a question asking how things were over the last month are likely to be better.

Many ME/CFS research proposals I've seen recently seem to want to collect huge amounts of data about mood and emotional wellbeing and the like, even though the intervention isn't aiming to change mood. I commented recently that answering long surveys about our emotional status often seems to be the price we have to pay for biomedical research. I'd like to see a lot more thought given to what data is really needed for a study. The less irrelevant (and probably highly biased subjective) data that is collected, the fewer the opportunities for cherry picking (e.g. our intervention didn't change activity but participants felt a bit happier!) or for later studies to torture the data to fit preconceived ideas.
 
Written over a couple of days, so some points have since been made by others, but anyway...

This has developed into a huge problem, as researchers are not using objective measures in unblinded RCTs. And they have not been pulled up for it, either by their institutions or via the peer review process.

The researchers into CBT and GET then mix up 'feeling better' (some small/modest improvement being pretty much inevitable after a face-to-face intervention where someone was nice to you) and 'being better' (recovered/cured/able to do largely as they please with no or few symptoms). The latter is 100% what patients want. The former has been used ad nauseam by researchers in RCTs of CBT and GET. Poor.

Small/modest improvements in subjective measures/questionnaires have then been 'sold' as a meaningful result, when all they demonstrate is a placebo effect. And they don't get anyone back to work or health, or anything close to what patients would consider recovery. This is a classic example of the psychological process of substitution. Humans are good at it, and slow and poor at identifying it and calling it out.
Well said.

I am firmly on the side of subjective outcome measures requiring adequate blinding, and/or being used alongside objective measures (which are given at least equal weight to the subjective measures). Anything less is not scientific, and is potentially very dangerous.

I don't object to subjective outcome measures, in fact I want them used. They can provide valuable information, in particular from the correlations between subjective and objective measures. But they cannot be used on their own and without blinding, especially in a trial of a treatment whose whole purpose and means is to alter patients' subjective self-perception.

That just becomes circular nonsense revolving entirely around patients' questionnaire scoring behaviour, independent of any practical real world changes or benefit. Changes in actual perceptions and cognitions must have measurable external agency and consequences well beyond mere questionnaire scoring behaviour, otherwise what is the point?

Science works by allowing us to discriminate, and quantify the difference, between subjective and objective elements in our perceptions and cognitions. Unblinded subjective measures on their own don't allow that discrimination and quantification, that necessary element of control (as in Randomised Controlled Trial). It is not possible in such trials to distinguish between genuine therapeutic benefit, and potential confounders and artefacts of that methodology, such as placebo effect, wanting to please (or avoid displeasing) the therapist, etc.

At most unblinded subjective measures can only tell us there is an effect. They don't, on their own, tell us what the effect is due to (and hence if it is genuinely therapeutic). That is why we need blinding and/or objective measures as well, to help tease out causal pathways.
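The point that unblinded subjective measures cannot discriminate genuine benefit from bias can be shown with a toy simulation (entirely invented numbers, not modelled on any real trial): a treatment with zero true effect, plus a modest reporting bias in the unblinded arm, produces what looks like a clear subjective benefit while the objective measure shows nothing.

```python
import random

random.seed(0)
N = 200  # participants per arm (hypothetical)

def simulate_arm(reporting_bias):
    # True change is zero for everyone: the treatment does nothing.
    # Objective measure = true change + measurement noise.
    # Subjective self-report = true change + noise + reporting bias
    # (expectation, wanting to please the therapist, etc.).
    objective, subjective = [], []
    for _ in range(N):
        true_change = 0.0
        objective.append(true_change + random.gauss(0, 1))
        subjective.append(true_change + random.gauss(0, 1) + reporting_bias)
    return objective, subjective

mean = lambda xs: sum(xs) / len(xs)

# Control arm: no bias. Unblinded treatment arm: bias shifts self-report only.
ctrl_obj, ctrl_subj = simulate_arm(reporting_bias=0.0)
trt_obj, trt_subj = simulate_arm(reporting_bias=1.0)

subj_diff = mean(trt_subj) - mean(ctrl_subj)  # looks like a treatment benefit
obj_diff = mean(trt_obj) - mean(ctrl_obj)     # shows nothing

print(f"apparent subjective effect: {subj_diff:.2f}")
print(f"objective effect:           {obj_diff:.2f}")
```

From the subjective questionnaire alone, this biased null treatment is indistinguishable from a genuinely effective one; only the objective measure (or blinding, which removes the bias term) lets the two be told apart.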

What the BPS school has done is construct an unfalsifiable 'methodology' that tries to maximise the effect of various confounders and artefacts, and arbitrarily relabels them as a therapeutic benefit.

This whole shitshow is going to come down to this technical issue. If the BPS school are not allowed to rely on unblinded subjective measures then they have nothing, and they know it. And so do their critics.

Wessely and Chalder:
"in the later stages of treatment patients are encouraged to increase their activity (which must ultimately be the aim of any treatment)"

Wessely, David, Butler, & Chalder – 1990

Change in activity level is objectively measurable. So there are no excuses for not measuring it.

Subjective measures might be reasonable things to include in an unblinded trial for a range of reasons, but they don't measure treatment efficacy.
Measuring a treatment's acceptability to patients might be legit, for example.

Another red herring I get is that blinded trials are hard to do for therapist-delivered treatments. Indeed, that highlights the weakness of the trials we have. It does not mean that it is OK to treat inadequate trials as if they were somehow adequate.
This excuse from them really gets up my nose. There is a minimum methodological standard to meet for any study wishing to claim scientific status. Trials that do not meet that minimum standard are not merely a weaker form of evidence, they are non-evidence to start with. They lack the necessary rigour and clarity to be interpreted and applied safely. No amount of hand waving and sophistry can change that.

It's like trying to build a house on a foundation of sand. No matter how well constructed the house is, it is still built on sand.
 
Another red herring I get is that blinded trials are hard to do for therapist-delivered treatments. Indeed, that highlights the weakness of the trials we have. It does not mean that it is OK to treat inadequate trials as if they were somehow adequate.

Yet that is exactly the view of researchers in the field. They cannot personally think of a better way, so they assume their inadequate trials are good enough.

Note: Professor Bentall co-authored the Powell 2001 et al. graded exercise 'education' trial and the FINE trial.

See also (Bentall's double standards with regards to evidence quality):
http://cepuk.org/2020/06/04/guest-b...a-classic-failure-of-evidence-based-medicine/
 
If mental health research was held to the same standard as the rest of medicine, large portions of it would simply collapse (and maybe that would be best for patients). But the self interest of researchers is considered more important than the well being of patients and that is so wrong.
 
Another thing is that the deconditioning theory, which GET relies on (explicitly in the PACE paper), does not apply well at all.

If patients simply failed to recover from, say, a viral infection and somehow artificially prolonged the symptoms, one would not expect to see symptoms like POTS or gut issues. In fact, the symptoms are so wide-ranging that they can hardly be classified as similar to a viral infection (although, interestingly, there may be some truth in the comparison).

In addition, people report having become ill from contact with more or less ordinary chemicals. I don't see the empirical evidence that contact with such ordinary chemicals could induce an illness that would lead to deconditioning.

Therefore one would need to propose that the ME symptoms appear after the virus, maybe gradually. This would match the gradual-onset pattern without any noticeable trigger.

But then the deconditioning theory doesn't apply, as in gradual onsets there is no deconditioning. And then only CBT should work. But the PACE trial says that GET works as well. This may indicate that the claimed success is indeed only a placebo effect.

Or one must propose that the two kinds of gradual onset differ. Then one would at least expect this to have been shown in the trial. But obviously they didn't care at all: people with this or that form of ME (even people probably without ME) were all treated the same, without this differentiation. And once more a placebo effect comes to mind.


(It may also be, by the way, quite likely that ME was already present at the start of the trigger event.)
 
Yet that is exactly the view of researchers in the field. They cannot personally think of a better way, so they assume their inadequate trials are good enough.
Another example:


https://twitter.com/SnowyPanthera/status/1222476712203575297

I realize that this is a difficult field, especially with regard to objective measures in mental illness, and I admit I can't remember how the argument that certain trials were not 'unblinded' but 'assessor-blinded' was discussed on the forum.

But as others proposed, there are at least some ways to carefully combine diverse 'suboptimal' objective outcomes.

I think that for ME, in contrast to depression or psychosis, it is even more feasible to define an adequate combination of objective outcomes, partially based on reproducible, specific symptom patterns induced by certain activities.

As we discussed on another thread, in MS research some researchers did find objective measures for motor fatigability (movement patterns). The problem may persist that these measures do not always correlate completely with the perception of fatigability. But the fact that it's complicated doesn't, I think, justify ignoring what could be better measures than those usually applied in the field of mental illness or MUS and ME.
 