A Trial of ME - Elizabeth's Story. #MEAction article, November 2019

"The treatments (CBT/GET) that I did as part of the trial had had a hugely negative impact on my daily life, and so when the results showed only positive outcomes for the data and nobody made worse by the treatments, I was shocked and upset. I naturally questioned the results at the presentation and asked where I was represented on their graphs. I was told that it was because they had used a variety of the mean and median averages from the data, therefore discarding the people at either end of the scale."
 
We badly need research that analyzes experiences and outcomes from those who participated in the "positive" trials.

PACE reported no deterioration. This is obviously false. And frankly there's a good reason so much of the framing around releasing the PACE data was about the supposed risk of identifying participants, not that it would make any sense to identify them that way. Clearly, having testimonials from participants in those trials is something they are deathly afraid of.

I'm pretty sure the example above, of fudging the results to bury cases of serious deterioration, is the norm in all those trials.
 
It is interesting that Elizabeth observed that some others in the group, with fatigue but not PEM, improved. It shows just how useless the Oxford definition was, hiding people with ME among those with chronic fatigue so that serious harm such as Elizabeth suffered could be conveniently 'averaged out'. Disgraceful.

Thanks to Elizabeth for writing it. I think this is an important article. One for the NICE committee perhaps, though I know they are only taking information from studies and surveys, not individuals. @adambeyoncelowe, @Keela Too.
 
Thanks to Elizabeth for writing it. I think this is an important article. One for the NICE committee perhaps, though I know they are only taking information from studies and surveys, not individuals. @adambeyoncelowe, @Keela Too.
Yes, that's right, but lay members can raise individual experiences as anecdotes in the room. The difference is that NICE itself won't include them in its evidence-finding activities, so they will never appear in the literature reviews.
 
A huge well done and thank you to Elizabeth for sharing her story. It's so very sad, and angering, to hear about what happened to her.

Unfortunately this paragraph is a little confused:

Therefore, I was shocked to find that the PACE Trial had been approved and conducted, not very long after the end of the trial I took part in. This was followed very soon after by the set-up of the current NICE guidelines and roll out of the CFS clinics on the NHS (including one at the Frenchay pain clinic). This was all based on the results of the PACE trial which I felt, given my experience, couldn’t have been correct.

It makes it sound as if the NICE guidelines and the NHS clinics were set up after PACE was completed, but the NICE guideline dates from 2007 and the first PACE results were published in 2011. It's undoubtedly accurate that the same theories that underpinned the trial she participated in also underpinned everything else, but the NICE guideline wasn't based on the results of PACE specifically, since PACE had only just started in 2006/7 when the guideline was being developed.

I know it seems like nitpicking, but factual accuracy is important.
 
Could this be the paper?

https://www.ncbi.nlm.nih.gov/pubmed/17014748

Hazel O'Dowd, Philip Gladwell...

Yes.

I can't believe they are claiming the study was double-blinded, when it is obvious to both the patients and the therapists which group they are in. Only the assessor was blinded, and assessor blinding alone does not make a "blinded" trial. They didn't bother to ask the patients whether they knew their allocation, and gave a poor excuse: that assessors anecdotally noted that patients considered the education and support group to be a valid approach (which has nothing to do with blinding and will not control for response bias!).

From that paper:

Conclusion said:
Group CBT did not achieve the expected change in the primary outcome measure as a significant number did not achieve scores within the normal range post-intervention. The treatment did not return a significant number of subjects to within the normal range on this domain; however, significant improvements were evident in some areas. Group CBT was effective in treating symptoms of fatigue, mood and physical fitness in CFS/ME. It was found to be as effective as trials using individual therapy in these domains. However, it did not bring about improvement in cognitive function or quality of life. There was also evidence of improvement in the EAS group, which indicates that there is limited value in the non-specific effects of therapy.

At baseline, 30% of patients had an SF-36 physical score within the normal range and 52% had an SF-36 mental health score in the normal range. At 12 months, the physical score was in the normal range for 46% of the CBT group, 26% of the EAS group and 44% of SMC patients.

A subject was assumed to have a score in the ‘normal’ range if the score was on or above the fifth centile for the distribution (estimated as the mean –1.645 × SD for the gender-specific age group). The age and gender-specific means and SDs for the general population were obtained from the SF-36 user manual

Assuming the "normal" range is anyone above the 5th percentile is a statistical absurdity when more than 5% of the general population have chronic illnesses that would have prevented them from participating in this trial in the first place.
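
To see what that rule actually computes, here's a minimal sketch in Python; the mean and SD below are hypothetical placeholders, not the actual age- and gender-specific SF-36 manual norms the authors used:

Code:
# The paper's "normal range" rule: a score counts as normal if it is at or
# above the estimated 5th centile, i.e. mean - 1.645 * SD for the matching
# age/gender group (1.645 is the z-score cutting off the lowest 5% of a
# normal distribution).

def normal_range_threshold(pop_mean: float, pop_sd: float) -> float:
    """Estimated 5th-centile cut-off under a normality assumption."""
    return pop_mean - 1.645 * pop_sd

# Hypothetical population values, for illustration only:
threshold = normal_range_threshold(85.0, 20.0)
print(f"counted as 'normal' if score >= {threshold:.1f}")  # 52.1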

I am reminded of this: https://pmj.bmj.com/content/94/1117/613

Also, the discussion of the Chalder questionnaire:

A further limitation of the fatigue scale is the asking of respondents to compare him- or herself with how they were before (a method adopted by the GHQ). The proceedings of a workshop organised by the National Taskforce on CFS looked at research methodology in CFS. They concluded that a format of comparison with previous self could be perceived as insensitive in chronic conditions, since the ‘usual’ state here may be interpreted as one of illness. Furthermore, an accurate comparison relies on the ability to remember pre-fatigue, which may be difficult for respondents who have been fatigued for a long period. The workshop also concluded that the format could lead to confusion if, for example, the person feels better than usual compared with their recent level of symptoms, but worse than usual compared with their pre-morbid symptoms.5

The real problem is the confusion in the scoring: 'better than usual', 'no more than usual', 'worse than usual', 'much worse than usual' (Likert scoring, e.g. 0, 1, 2, 3, with a maximum of 33).

The overall difference of ~2-3 points on the 33-point scale is a small difference and unlikely to be clinically significant.

If "no more than usual" is judged against pre-illness health at entry to the trial, but after the trial is judged against the level at the start of the trial, then no actual change can register as an improvement. In that case it can no longer be considered a consistent scale.
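
To make that shifting reference point concrete, here's a minimal sketch of the Likert scoring described above (0-3 per item across the 11 CFQ items, maximum 33); the two response sets are invented to show how a patient whose "usual" silently moves from their pre-illness self to their trial-entry state can appear to improve with no real change:

Code:
# Chalder fatigue scale, Likert scoring: 0 = "better than usual",
# 1 = "no more than usual", 2 = "worse than usual",
# 3 = "much worse than usual"; 11 items, so totals run 0-33.
RESPONSES = {
    "better than usual": 0,
    "no more than usual": 1,
    "worse than usual": 2,
    "much worse than usual": 3,
}

def cfq_likert_score(answers):
    assert len(answers) == 11, "the CFQ has 11 items"
    return sum(RESPONSES[a] for a in answers)

# At entry, "usual" is read as the pre-illness self: everything is worse.
entry = ["much worse than usual"] * 11
# At follow-up, "usual" is read as the trial-entry state: nothing changed.
followup = ["no more than usual"] * 11

print(cfq_likert_score(entry))     # 33
print(cfq_likert_score(followup))  # 11 -- a big "improvement" from no change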
 
To me, this felt wrong and I felt their results should have shown everyone, including those made worse by the treatments. By using varying mathematical interpretations of the results, I felt people like me, who were made worse, were excluded from their results – therefore showing the results in a more positive light than I believe the raw data would have shown. When new drugs are going through testing, all side effects have to be declared, and I didn’t see why this should have been any different when testing a new treatment.
Exactly. How the hell do psychiatrists get away with this nigh-on criminal behaviour!
 
but she wasn't a PACE participant(?)
Agree, was pre-PACE by the look of it - 2002.
After losing my job and still not getting any better, I was invited to participate in a trial called ‘Chronic Fatigue Syndrome Research Project’ by my GP. It was run at the pain clinic at Frenchay Hospital in Bristol in 2002 and was being conducted into the effectiveness of cognitive behavioural therapy (CBT) and graded exercise therapy (GET) as a treatment for Chronic Fatigue Syndrome (CFS). Once I had been discharged from the neurologist with a diagnosis of Chronic Daily Headaches, I was eligible to take part and was enrolled on the trial in February 2002.

Do we have info on this trial?
 
Thanks @MEMarge - well found. That full report is too much for me to take in in detail, but some things stand out. On p43 of the PDF, one wonders whether this includes Elizabeth's data:

Chalder fatigue scale

The Chalder fatigue scale also showed some statistically significant differences. Initial analysis of the data suggested that mean scores were similar at 6 and 12 months (p = 0.19) and that the trend across the groups did not change significantly between the 6- and 12-month assessments (p = 0.13), but did indicate a difference between the three treatment conditions (p = 0.039). However, three influential outlying observations were identified and, after removing these values from the analysis, there was a suggestion of a change in trend across the groups between 6 and 12 months (p = 0.087), in addition to an overall significant difference between the groups (p = 0.027). For the CBT and EAS groups, there was no significant change in mean score with time (p = 0.90 and 0.45, respectively), but for the SMC cohort, the mean score was lower at 12 than at 6 months (mean score at 6 months 21.87 versus 19.41 at 12 months, difference 2.46, 95% CI 0.32 to 4.61, p = 0.024)


It would seem to me that if "influential outlying observations" are to be removed, there needs to be some discussion of the nature of the outliers. I do not see any, but that may be due to my limitations.
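
As a toy illustration of why that discussion matters (the numbers below are invented and have nothing to do with the trial's data), a handful of influential observations can be the whole difference between a non-significant and a significant between-group comparison:

Code:
# Invented scores for two hypothetical groups; three high outliers in the
# first group mask an otherwise clear between-group difference.
from scipy import stats

group_a = [14, 15, 15, 16, 16, 17, 17, 18, 30, 31, 32]  # 3 outliers at end
group_b = [19, 20, 20, 21, 21, 22, 22, 23]

print(stats.ttest_ind(group_a, group_b).pvalue)       # ~0.7, not significant
print(stats.ttest_ind(group_a[:-3], group_b).pvalue)  # far below 0.001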

From p52 of the PDF:

Methods for modelling drop-out explicitly as part of the analysis of longitudinal data are available but, with just two follow-ups and only three patients reporting explicitly that they were too ill to attend, it was felt that the additional complexity was not justified.


So three were well enough to attend for treatment, but not for follow-up, and some were clearly not benefitting greatly from the treatment. If there is a detailed report of those suffering deterioration, I have not noticed it.
 
Assuming the "normal" range is anyone above the 5th percentile is a statistical absurdity when more than 5% of the general population have chronic illnesses that would have prevented them from participating in this trial in the first place.
This is more than merely absurd. There is a complete breakdown in the entire process when something like this passes approval, funding, review and publication. Everyone involved should be expelled from the profession and their entire career reviewed as suspect. There is no excuse for allowing something this broken to pass through. The premise of the underlying model is that the patients are not even sick, so they should at least be expected to score in the highest decile for physical functioning.

At least we know where the PACE team got the green light that this was OK. However, it yet again blows their pathetic excuses about having to adjust to circumstances out of the water. PACE was the last of what looks to be at least a dozen or so almost identical smaller trials, and the treatments had been used in practice for a full decade by that point. There should not have been any major surprises, and there was no excuse not to pre-register a plan and follow it to the letter. But to find that they cheated the same way in earlier trials shows a pattern of deceit and a willingness to lie and cheat brazenly, amounting to complete indifference to what actually happens to the patients, and a willingness to do anything to show false evidence of benefit at all costs.

When cheating is condoned by the regulatory process, it is not only the cheaters who are to blame; the whole process needs top-to-bottom reform. When it is a pattern, in fact multiple patterns looking at Crawley's ethical lapses, there are no possible excuses, and these are pretty much the opposite of extenuating circumstances; they are the most damning circumstances.
 
Thanks @MEMarge - well found. That full report is too much for me to take in in detail, but some things stand out. On p43 of the PDF, one wonders whether this includes Elizabeth's data:

Chalder fatigue scale

The Chalder fatigue scale also showed some statistically significant differences. Initial analysis of the data suggested that mean scores were similar at 6 and 12 months (p = 0.19) and that the trend across the groups did not change significantly between the 6- and 12-month assessments (p = 0.13), but did indicate a difference between the three treatment conditions (p = 0.039). However, three influential outlying observations were identified and, after removing these values from the analysis, there was a suggestion of a change in trend across the groups between 6 and 12 months (p = 0.087), in addition to an overall significant difference between the groups (p = 0.027). For the CBT and EAS groups, there was no significant change in mean score with time (p = 0.90 and 0.45, respectively), but for the SMC cohort, the mean score was lower at 12 than at 6 months (mean score at 6 months 21.87 versus 19.41 at 12 months, difference 2.46, 95% CI 0.32 to 4.61, p = 0.024)


It would seem to me that if "influential outlying observations" are to be removed, there needs to be some discussion of the nature of the outliers. I do not see any, but that may be due to my limitations.

It's a bit strange, but they're basically wondering why fatigue dropped from 6 to 12 months in the SMC group but not in the CBT or EAS groups: a "change in trend across the groups". This meant the between-group differences on the CFQ were significant at 6 months but no longer at 12 months, which is why they pooled the 6- and 12-month data together as a cover-up.

Of course the most likely reason for this "trend" is simply regression to the mean, or natural variation in both the health and the questionnaire-answering behaviour of the participants.
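
For anyone who wants to see that effect, here's a minimal simulation under an assumed (purely hypothetical) model where each participant's score just fluctuates randomly around a stable true level:

Code:
# Regression to the mean: participants who score worst at one assessment
# tend, by chance alone, to score closer to average at the next one.
import numpy as np

rng = np.random.default_rng(0)
true_level = 20.0   # stable underlying fatigue score (hypothetical)
noise = 4.0         # random assessment-to-assessment variation

scores_6m = true_level + rng.normal(0.0, noise, 1000)
scores_12m = true_level + rng.normal(0.0, noise, 1000)

# Pick out whoever looked worst at 6 months...
worst = scores_6m > np.percentile(scores_6m, 80)
# ...and their mean "improves" by 12 months despite zero real change.
print(scores_6m[worst].mean())   # ~25.6
print(scores_12m[worst].mean())  # ~20.0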
 
We badly need research that analyzes experiences and outcomes from those who participated in the "positive" trials.

PACE reported no deterioration. This is obviously false. And frankly there's a good reason so much of the framing around releasing the PACE data was about the supposed risk of identifying participants, not that it would make any sense to identify them that way. Clearly, having testimonials from participants in those trials is something they are deathly afraid of.

I'm pretty sure the example above, of fudging the results to bury cases of serious deterioration, is the norm in all those trials.


I wholeheartedly agree with @rvallee that we need more testimonials from pwME who went through these trials. I don't know if they signed a non-disclosure agreement; I can't see why they would, as no money changed hands, but who knows?

Could the advocacy groups advertise for participants from these trials to come forward and tell their stories? Or has this already been done? If it has, could a second call go out?
 