Graded Exercise Therapy for Patients with Chronic Fatigue Syndrome in Secondary care: a benchmarking study, 2021, Smakowski, Chalder et al

So they looked at 92 patients
- 1 discontinued
- 20 "completed treatment early" with no outcome measures (in other words dropped out)
So figures are for 71 patients who completed
[my bold]

That's a very coy way of describing those who dropped out, and could be the most salient issue. Of those receiving GET, almost 22% disappeared off the radar, and nothing in the study - as usual! - gives any thought to the possibility their health may have declined due to GET. So how can they assert ...
In conclusion, GET therapy is a safe and efficacious treatment for patients with CFS/ME in a clinical specialist environment.
... when there is so much missing data! You simply cannot blindly assume that missing data does not potentially negatively impact the results; it's absurd. If just one of those patients dropped out because their health had been seriously impaired, you would have > 1% serious harms. Another one or two with significant impairment due to treatment, and you would maybe have > 3% harmed to at least a significant degree. Those numbers would surely be more than enough to trigger alarm bells for medicines. And in reality the % could be considerably more.
 
Yes you can, Barry! Yes you can!!!
!
Oh ... err ... yes, sorry David ... of course! It's part of the pseudo-scientific method isn't it :rolleyes: - how else could you get the results you want? Blindly assuming in unblinded trials.

To me this aspect of the study epitomises what we have been banging on about in the paused-NICE-guideline thread, and it's rather appropriate that it is a study of clinical practice. Harms, especially those where people's health has deteriorated to a greater or lesser degree, are simply airbrushed out of the equation, in many cases probably just like this - they disappear from clinical practice monitoring. It is after all a pretty likely scenario: if you know the stuff they have been making you do has made you significantly more ill, what is the likelihood you are going to go back like Oliver Twist with "please sir, can I have some more?"

I grant that missing data must be a pain, but that does not make it OK to assume it was insignificant to your results just because it is missing. The very reason the data is missing could be why it is highly significant to your results.

There just has to be a better way :confused: :banghead:.
 
"Completed treatment early" strongly implies that the benefits were so good the patients were cured so they did not need any more treatment.

This is taking spin to a fraudulent level.

"Patients dropped out" is at least neutral, though the usual assumption is that the reasons for dropping out are not connected with the trial (such as moving away from the area), and that the dropouts' results would have had the same profile as those who remained, so a similar percentage would have got better or worse.

In ME we suspect it is that people have become too sick to continue, but we have no proof as they never check, so we only have logic and anecdote.
 
If just one of those patients dropped out because their health had been seriously impaired, you would have > 1% serious harms. Another one or two with significant impairment due to treatment, and you would maybe have > 3% harmed to at least a significant degree. Those numbers would surely be more than enough to trigger alarm bells for medicines. And in reality the % could be considerably more.

Indeed strange that there is no specific monitoring of harmful effects, just assuming that everyone who dropped out was unharmed by the tested treatment. I'm wondering if there was indeed at least the one, though:

The standard number of treatment sessions was 12; see appendix 2 for more detail. Twenty-one patients did not complete the optimal number of therapy sessions; see appendix 1 for a summary. With regard to adverse events, one patient discontinued treatment after 11 sessions because they felt GET was “not suitable for them”; 11 sessions is nearly a full course of treatment; however, unfortunately, with no outcome data around this time we cannot say whether the patient’s symptoms were improving or getting worse. No other adverse events were reported by patients. Some patients completed or stopped treatment early due to situational factors, such as, only being funded for a certain number of sessions, moving away, or becoming asymptomatic and therefore, not requiring further treatment. It should be noted that some patients gave no reason for stopping treatment.

If the one person had nearly completed their GET course (just that final session is not going to make a landslide difference), they could have been asked to fill in the questionnaires, or to come back one last time for assessment, with a small note in the data tables that this patient was measured one session short. "We cannot say whether the patient's symptoms were improving or getting worse because we have no outcome data around this time" is pretty weak tea. As is "Yeah, we can't know because they just gave no reason." As if they emigrated to the South Pole without phone reception or email.


A bit off topic, and I'm not sure whether to post it here or in the NICE or a GET thread, but what percentage of harm from a tested treatment would be considered worth keeping an eye on? (I'd say every participant whose harm looks like it might have been caused by the treatment, as that's an important part of the data.)

I'm asking because in his first published CBT for CFS RCT, Sharpe mentions 2 participants out of the treatment group of 30 who scored "worse" or "much worse" and attributed their deterioration to the treatment.

"At the final assessment significant subjective improvement ("much improved" or "very much improved") was reported by 60% (18/30) of the patients who received cognitive behaviour therapy and 23% (7/30) of the patients who had only medical care. Deterioration ("worse" or "much worse") was reported by 13% (4/30) of the cognitive behaviour therapy group and 10% (3/30) of the medical care only group. Two patients given cognitive behaviour therapy attributed their deterioration to the treatment and two experienced only temporary benefit."

That's 6.7% (2/30) of the treatment arm reporting deterioration attributed to the treatment, and 13.3% (4/30) getting worse overall.
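For what it's worth, a quick sanity check of the percentages quoted in this thread (the 92 and 71 are from the Smakowski paper, the 30 is Sharpe's CBT arm; the 1-patient and 3-patient harm scenarios are the hypotheticals from earlier posts):

```python
# Sanity check of the percentages quoted in this thread.

def pct(numerator, denominator):
    """Return a percentage rounded to one decimal place."""
    return round(100 * numerator / denominator, 1)

# Smakowski et al.: 20 of 92 "completed treatment early" with no outcome data
print(pct(20, 92))  # 21.7 -- the "almost 22%" who disappeared off the radar

# Hypothetical harms among the missing: 1 or 3 of the original 92
print(pct(1, 92))   # 1.1  -- already > 1% serious harms
print(pct(3, 92))   # 3.3  -- > 3% harmed to at least a significant degree

# Sharpe's first CBT RCT: 2 of 30 attributed deterioration to the
# treatment; 4 of 30 reported deterioration at all
print(pct(2, 30))   # 6.7
print(pct(4, 30))   # 13.3
```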

I haven't looked further forward (already tons to look at in my chosen historic period), but I have my doubts that Sharpe monitored harm-from-CBT in his later trials, even though he knew it could happen.
 
Indeed strange that there is no specific monitoring of harmful effects, just assuming that everyone who dropped out was unharmed by the tested treatment.
Not that strange really, given the mindset of the investigators.

The following depends heavily on what they recognise (or, more to the point, do not recognise) as adverse events. Invariably, if patients' symptoms 'simply' get worse, that is not recognised as an adverse event - it is just interpreted as the patient not having persevered with their treatment.
No other adverse events were reported by patients.
...
It should be noted that some patients gave no reason for stopping treatment.
The highlights I've added here tend to suggest that possibility:

[attached screenshot with highlighting]
 
"Completed treatment early" strongly implies that the benefits were so good the patients were cured so they did not need any more treatment.


Could "early completion" also be because deterioration among participants was increasing in frequency, along with the numbers dropping out?
[ETA - not that I am suspicious or anything....]
 
"Completed treatment early" strongly implies that the benefits were so good the patients were cured so they did not need any more treatment.

This is taking spin to a fraudulent level.
Isn't it just. In a perverse way it is almost heartening, because it so clearly illustrates to anyone the absurd spin these people put on these things.

"Completed treatment", early or otherwise, can only mean ... the treatment was completed. Yet they make it abundantly clear those patients did not complete their treatment! You might as well say the train completed its journey early, whilst conveniently neglecting to mention that it broke down halfway.
 
Not that strange really, given the mindset of the investigators.

Oh yeah, I forgot to put on my special BPSCBT Way-Of-Thinking cap with observation-clouding ear flaps and dark motivation tassel. My bad. ;)

I understand and agree with what you're saying (and yes, it's indeed unsurprising); I was speaking from the point of view of what might be expected of anyone who claims to have done a study. In this case especially from someone who, if she paid attention, should be motivated to be more meticulous about such things, given the developments of the last couple of years.

I forgot to check the appendix - I meant to while reading the main text, but it fell straight out of my head. :rolleyes:
At least 15 people could have had worsening symptoms as their reason for stopping.

Invariably, if patients' symptoms 'simply' get worse, that is not recognised as an adverse event - it is just interpreted as the patient not having persevered with their treatment.
That's how they dealt with it, and they had room to do so, because no one with influence or monitoring duties was ever critical of them. But I wonder how long it will remain tenable for someone like Chalder to happily keep babbling from her own pet-theory bubble, bulldozing on with a repeat of her script, given the changing direction of science (quality control of research, patient-centred approaches), scientific findings on ME and other illnesses, and a slow but ongoing change in the attitude towards, and consideration of, ME by big bodies.
 
Might be good if a few of the patients who did drop out from this study would self-identify and be prepared to have a follow-up interview with a subsequent researcher...

Given this study was an evaluation of regular treatment as part of ongoing specialist service provision, patients may not be aware they were the subjects described in this specific study.

However, anyone who experienced harm from the standard service provision at the services involved would be relevant to understanding the drop-out rates in this study better.
 
This thread has been merged now that the paper the letters refer to has been identified.
_______________________

Patients with chronic fatigue syndrome can improve with graded exercise therapy: response to Vink et al. 2022 Chalder, Smakowski, Adamson and Turner
Paywalled.
Can't find a thread for either the original paper or Vink's response.

"Patients with chronic fatigue syndrome can improve with graded exercise therapy: response to Vink et al. 2022." Disability and Rehabilitation, ahead-of-print(ahead-of-print), pp. 1–2

https://www.tandfonline.com/doi/full/10.1080/09638288.2022.2059112
 
Basically it says 'We don't understand how to assess evidence'.



Evidence from gold-standard randomised controlled trials (RCTs) finds that graded exercise therapy (GET) is an efficacious treatment for chronic fatigue syndrome (CFS) [1,2]. We compared self-report clinical outcomes from patients who received GET in routine clinical practice with outcomes from RCTs and found that patients reported improvements in several outcomes [3]. We now respond to Vink et al. who raised a number of issues related to our evaluation [4].

We used a range of self-report measures to capture different aspects of the patient experience. We maintain that patients’ own perception of their symptoms is of utmost importance. We think it is important to assess self-report outcomes in routine clinical practice within the UK National Health Service, where the routine use of objective measures is precluded due to cost. In our evaluation, we acknowledged the drawbacks of self-report measures. While the therapists encouraged patients to use objective measures during therapy we did not collect objective audit data routinely. We are therefore not selectively reporting.

Vink et al. [4] state that our conclusion that GET is safe is unsupported. However, a recent review of randomised controlled trials (RCT) concluded there was no evidence of excess harm with graded exercise therapy by either self-rated deterioration or by withdrawing from GET, in comparison to control interventions [5]. This evidence is more robust than the survey conducted in Oxford Brookes which was subject to the many biases associated with non-randomised evaluations [6].

In our paper, we described GET as was used in our clinical service. We did not use fixed incremental increases in physical activity or exercise. Goals were not prescribed but individually negotiated with each patient. Patients were offered up to 12 sessions of GET and some people decided jointly with their therapist that they needed fewer than 12. 37/92 patients attended less than 12 sessions. This included those who completed GET early (n = 16) or dropped out (n = 21) for other reasons, such as their clinical need changed or there was a lack of funding described in the Appendix, in our original paper. More specifically, their clinical need changed or there was a lack of funding. Those who had less than 12 sessions, therefore, were not necessarily dropouts. Four patients indicated that they either got worse or did not engage with GET.

Vink et al. [4] suggest that the rate of adherence would not be acceptable in a research trial. Although this evaluation was not a randomised controlled trial but an evaluation of outcomes from a routine clinical setting, the dropout was not unusually high. There is much evidence to show that there is a general issue with adherence rates to psychological/behavioural treatment in healthcare services [7]. Additionally, adherence is generally poor in people with a condition that lasts for more than 6 months [8].

Vink et al. [4] state the problem of missing data is unacknowledged. They also state that using a mean score would inflate scores if a patient was deteriorating. In response, we clearly state the number of patients who completed follow-up data each time a result was reported. We also compared pre-treatment scores of those with data and those without. We used an accepted, standard multiple-imputation pro-rating procedure to minimise data misrepresentation and described this in our article. Mean scores of patients improving would be affected in a similar way to scores of patients deteriorating.

Vink et al. [4] assert our article used selective evidence to support our conclusions. They refer to the Oxford Brookes study [6], the new NICE guidelines [9] and the Larun Cochrane review [1] as well as the statement to update the review [10]. As previously stated the Oxford Brookes study was subject to biases associated with non-randomised evaluations. With regards to the NICE guidelines, our evaluation was carried out before the revised recommendations were available and were in line with the scientific evidence and the NICE guidelines at the time. The Cochrane review is in the process of being updated.

In our final summary, we state that effect sizes for improvement following treatment were not as high as those found in randomised controlled trials and as Vink et al. [4] stated some patients remained disabled. Although there were significant improvements in physical functioning and work and social adjustment, improvements were modest. Clearly, new treatments need to be developed and treatments such as GET with some evidence of efficacy, need to be improved.

Vink et al. [4] suggest that spontaneous recovery is likely as many patients had been ill for 2–3 years. However, evidence from a published review showed that prognosis is poor in people with CFS if left untreated [11]. Indeed, prolonged inactivity is harmful even in healthy people [12]. Furthermore, nocebo-related effects, the belief that a treatment will cause harm, is likely to be observed in CFS, as symptom worsening follows negative expectations in chronic pain [13].

Although our evaluation was uncontrolled, we used recommended bench-marking techniques to report findings. We stand by our conclusion that GET for patients is safe if delivered by trained health care professionals and can be an efficacious treatment for patients with CFS/ME in a clinical setting. Effect sizes were smaller in routine clinical practice than RCTs and we, therefore, need to improve treatments to bring about larger effect sizes.
 
And the letter quoted above is the response by the authors of the paper discussed in this thread to Vink's letter.

Edited now the threads have been merged.
 
The Vink letter is here (and is itself a response to an article, I think...)

https://www.tandfonline.com/doi/full/10.1080/09638288.2022.2048911

Patients with CFS remain severely disabled after treatment with graded exercise therapy in a specialist clinic in the UK Response to Smakowski et al.
Mark Vink, Alexandra Vink-Niese
& Sarah F. Tyson (Professor of Rehabilitation, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester and Manchester Academic Health Science Centre, Manchester, UK)
 