GETSET letters in The Lancet

When GETSET was published @Tom Kindlon put out a call to people to write in response. He pointed out that, like me, he does not have a degree (due to ME) but that has not prevented him from having letters and articles published in several peer-reviewed journals.

Without Tom’s encouragement I would never have considered writing to The Lancet or Nature. He and others on S4ME also gave me very helpful feedback prior to submission. So thank you to Tom and everybody else in this community who helped.

[edited to correct typos]
 
The authors’ response to my letter is very weak:
In response to Robert Saunders, we can confirm that all patients reported post-exertional malaise (PEM) and/or fatigue at baseline (as is required to meet NICE criteria for chronic fatigue syndrome); no patients were excluded based on having PEM. A previous trial showed that post-exertional fatigue improved more after graded exercise therapy compared to after both SMC alone and pacing therapy.5 The lack of a significant difference in safety outcomes between treatment arms reinforces the safety of GES for chronic fatigue syndrome.

1) They answer a question I didn’t ask. We know that they adhered to the NICE criteria. The problem is that it is not clear how they interpreted those criteria. What is meant by “post-exertional fatigue”, which the criteria differentiate from PEM, and which does not require that “physical or mental exertion makes symptoms worse”?

2) If ME/CFS is defined by the fact that exertion is contraindicated, how can a treatment that requires exertion lead to improvement?

3) The authors cite the “worthless” PACE trial in their defence.

4) They do not address the fundamental problem of the reliance on subjective outcome measures in a non-blinded trial. As @Jonathan Edwards has emphasised, such methodology “demonstrates a lack of understanding of basic trial design requirements” and a “disregard for the principles of science”.

I am also curious to understand why it has taken so long for these letters to be published. Mine was submitted in June, and I seem to recall there was a two-week deadline for submission. I don’t know whether that is an unusual delay for correspondence, but I was surprised that it took so long to be accepted (January) and then published. Anyway, credit to The Lancet for publishing five critical letters.
 
Ditto for me, but mine was rejected. @Tom Kindlon was very supportive of my endeavour nonetheless, as were a few others, I believe.
 
I’d like to second @Robert 1973’s acknowledgement of Tom and others.

It is the example of @Tom Kindlon, Keith Geraghty and other patients writing letters and articles in scientific journals that has encouraged me to start writing.

As @Jonathan Edwards has pointed out,

“The work of Matthees and Kindlon is just as much science as that of Lipkin and Fluge.”

https://www.s4me.info/threads/the-i...anuary-2018-katz-et-al.2013/page-2#post-36407 post 40

I was lucky enough to be healthy until age 30, so I did manage to get degrees, but you don’t need them to make a cogent argument.

Thanks to Tom and others for mentoring and collaborating with so many so effectively. :thumbup:
 
The authors’ response to my letter is very weak:

I'd be tempted to write back and explain that to them!
It either indicates that the authors don't really know what PEM is, or that they are being deliberately obfuscatory.

What has always concerned me, though, is that The Lancet seem to have no mechanisms to ensure that authors have actually answered the questions asked of them. I saw this countless times when I used to edit Correspondence, and it frustrated the hell out of me. It's the reason why I don't think Correspondence is a particularly good method for "correcting the record". Once a paper has been published, that's sort of it really, unless you can demonstrate fraud. Everything else is just column inches. Sorry for being so cynical, but it's why I didn't leap to my keyboard when Tom called for responses.

I am also curious to understand why it has taken so long for these letters to be published. Mine was submitted in June, and I seem to recall there was a two-week deadline for submission. I don’t know whether that is an unusual delay for correspondence, but I was surprised that it took so long to be accepted (January) and then published.

The reason for the deadline is to make sure that they don't send letters off to the study authors in dribs and drabs. I suspect the authors were given a choice over which letters to respond to, which is why they then accepted them in January (things can move slowly in Correspondence-land, but I guess it's also an indication of the priority they afforded the criticisms), and rejected the ones they didn't reply to. But I was never privy to the exact process involved, so that's just a guess.
 
Thank you, @Trish, on both counts.

Congrats to @Robert 1973 and the other authors. I really enjoyed reading everyone's points. When the letters are read together, a few themes emerge - the modesty of the reported changes in CFQ and SF36 PF scores, the adherence question, safety/harm and the need for objective measures. They read like a primer for critically reviewing trials of behavioural interventions for ME. Robert, I think yours will educate many about PEM!

If I’d had more than 250 words, I would have liked to cite Collin and Crawley’s (2017) data from adults who had undergone “specialist treatment of chronic fatigue syndrome/ME” at NHS specialist centres.

Median baseline SF36 PF scores, grouped by patients’ self-reported change in overall health at 2, 3, 4 or 5 year follow-up, were:

Improved (overall health “much better” or “very much better” than before treatment): 52
No change (little or no change in overall health): 47
Worse (overall health “much worse” or “very much worse”): 29.5

See table 5: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5513420/table/Tab5/

Full paper: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5513420/

These data suggest that patients with worse physical functioning at baseline do worse with “specialist treatment” than those with higher baseline physical functioning, not better, in line with the argument in my letter.
 
We suggest that using patient-reported outcome measures is sensible in symptom-defined illnesses such as chronic fatigue syndrome.
My bold.
People used to suggest the moon was made of cheese, but suggestion does not cut it in scientific debate unless it is backed with some ... science! They should justify their suggestion: why do they think a symptom-defined illness should be exempt from needing objective outcome measures, especially when so many of the symptoms are physical? Or do they believe symptoms can only ever be psychological? They are just ... weird!
 
Therapists reported that 29% of patients did not adhere more than “slightly” to the exercise programme, despite 88% attending at least 75% of their guided support. This area requires more research to understand non-adherence and how it might be improved.
[My bold]

Until you understand the reasons for non-adherence, you should not even presume that it should be improved! If the reason people did not comply is that the exercise made them more ill, then the presumption is wrong ... and stupid.
 
They simply cannot face the possibility that their beautiful little idea is completely wrong, and doing serious harm.
 