Cochrane Review: 'Exercise therapy for chronic fatigue syndrome', Larun et al. - New version October 2019

Discussion in 'Psychosomatic research - ME/CFS and Long Covid' started by MEMarge, Oct 2, 2019.

  1. Esther12

    Esther12 Senior Member (Voting Rights)

    Messages:
    4,393
    While expecting others to have confidence in their own indefensible subjective judgements.

    It seems that CFS research has done a better job of providing insight into the problems with CFS researchers than CFS patients.
     
    Last edited: Oct 16, 2019
  2. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
    Another thought for the mix.

    Subjective questionnaires are going to produce pretty inconsistent answers even for people who are cognitively A1. But a key problem is that most pwME are far from A1 cognitively, and so they will tend to have real trouble answering such a questionnaire when so impaired. My wife is mild/moderate, and if well rested and not in PEM, would probably do a fair job, as much as anyone can with such subjective measures. But if she were done in, then filling it in would be like trying to think her way through mud or treacle, and her thoughts would be pretty askew. In such circumstances I suspect she would lean heavily on the support of others.
     
  3. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    4,001
    Location:
    Belgium
    10) Harmful effects

    I also had a look at possible harmful effects. Alongside fatigue, this was the primary outcome of the review. Larun et al. downgraded the quality of evidence for this outcome to very low quality (the lowest rating) because there was too little data to form any conclusion. Only one study, the PACE trial, reported on the rate of serious adverse reactions (SAR) between exercise therapy and treatment as usual. It reported 2 events in the intervention group and 2 in the control group. Larun et al. downgraded the quality of evidence by two levels for imprecision and note that “this available trial was not sufficiently powered to detect differences this outcome.”

    A closer look at the safety data from the PACE trial
    There were more serious adverse events (SAE) in the GET group (17 in total) than in the SMC group (7 in total; p=0.0433). That could mean one of two things: (1) there happened to be more serious adverse events in the GET group by coincidence and these had nothing to do with the therapy, or (2) there was a problem with the judgement of the raters, who might have been unwilling to attribute a deterioration to the treatments. The PACE trial said the raters were blinded to treatment allocation, but they might have been aware that not attributing serious adverse events to treatment would make GET and CBT look safer. After all, there weren’t many people claiming that APT would be harmful, and this was certainly not the case for SMC; safety was suspected to be an issue for GET/CBT. So in a way, the raters could not be fully blinded to the trial hypothesis that GET and CBT would be safe and effective treatments. If you look at the descriptions of the 10 SAR in the trial, most were related to depression, suicidal thoughts or episodes of self-harm. That suggests the raters were very reluctant to attribute a deterioration of physical health to the treatments: this happened only 5 times in a sample of 641 patients!
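    The group difference above can be sanity-checked with Fisher's exact test. This is only a sketch: it assumes arm sizes of 160 each (the published PACE design), and since the trial report may have used a different test or denominators, the result need not match the quoted p=0.0433 exactly.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables whose probability
    is no larger than that of the observed table."""
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2

    def hypergeom(k):
        # Probability of k events in group 1, with margins fixed
        return comb(row1, k) * comb(row2, col1 - k) / comb(n, col1)

    p_obs = hypergeom(a)
    lo = max(0, col1 - row2)
    hi = min(col1, row1)
    # 1e-9 tolerance guards against float rounding at the boundary
    return sum(hypergeom(k) for k in range(lo, hi + 1)
               if hypergeom(k) <= p_obs * (1 + 1e-9))

# 17 SAEs out of 160 in the GET arm vs 7 out of 160 in the SMC arm
# (arm sizes of 160 are an assumption based on the published design)
p = fisher_exact_two_sided(17, 143, 7, 153)
print(f"p = {p:.3f}")
```

    With these counts the two-sided p lands in the vicinity of 0.05, i.e. borderline, which is consistent with the point that the trial was underpowered for this outcome.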

    You could argue that few serious adverse events indicate that the treatments were safe, but SAE was defined quite strictly. It involved either death, a life-threatening event, hospitalisation, a new medical condition or - and this was probably most relevant to the trial - “Increased severe and persistent disability, defined as a significant deterioration in the participant’s ability to carry out their important activities of daily living of at least four weeks continuous duration.” This was determined by people involved in the trial such as a research nurse or centre leader. Four weeks is a long time, but given this description, I would have expected more patients to experience such an episode. For example, it was recorded only 8 times among the 161 patients in the CBT group. Even if no treatment were offered and it was just the natural course of the illness, I would expect more patients to deteriorate in this way.

    The PACE trial also had data on non-serious adverse events, but these were so common that they probably say very little about the safety of GET. Just about every patient had a non-serious adverse event during the trial. What we would be interested in is something between a non-serious and a serious adverse event: something that gives enough data on deterioration without becoming trivial.

    The PACE trial did have some other indicators of deterioration, such as a decline of 20 points in physical functioning or scores of ‘much worse’ or ‘very much worse’ on the PCGI scale, but the authors required this to be the case at two consecutive assessment interviews. As Tom Kindlon explained in his excellent commentary, this is not what the protocol specified: there they speak of a deterioration compared to the previous measurement. The change was not explained in the text and the protocol-specified outcome was never reported. That makes me a bit suspicious…
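    The difference between the two rules matters: requiring a drop at two consecutive assessments can miss a deterioration that the protocol rule would catch. A toy sketch with made-up scores (my reading of the two rules, not the exact trial definitions; the exact wording is in the protocol and trial papers):

```python
def deteriorated_vs_previous(scores, drop=20):
    """Protocol-style rule: any assessment at least `drop` points
    below the previous measurement counts as a deterioration."""
    return any(prev - cur >= drop for prev, cur in zip(scores, scores[1:]))

def deteriorated_two_consecutive(scores, baseline, drop=20):
    """Published-style rule: a drop of at least `drop` points from
    baseline at two consecutive assessments."""
    below = [baseline - s >= drop for s in scores]
    return any(a and b for a, b in zip(below, below[1:]))

# Hypothetical patient: baseline 65, then 40, 60, 55 at follow-ups.
# The 25-point drop at the first assessment counts under the
# protocol rule but not under the two-consecutive rule.
baseline = 65
followups = [40, 60, 55]
print(deteriorated_vs_previous([baseline] + followups))   # True
print(deteriorated_two_consecutive(followups, baseline))  # False
```

    A transient but genuine crash like this one would simply vanish under the stricter published rule.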

    The others trials
    In the other trials, harms were not specifically examined as an outcome. The only way patients could indicate deterioration was through the main outcome measures. There were some trends in some cases, but I don’t think any of the outcome measures used in any of the GET trials indicated a statistically significant deterioration in the GET group compared to the control group. Some trials used the clinical global impression scale, where patients could indicate they got ‘A little worse’ (score 5), ‘Much worse’ (score 6) or ‘Very much worse’ (score 7). In Fulcher & White 1997, only one out of 29 participants in the GET group gave a score of 5; none gave a higher score. The same was true for the Wallman et al. 2004 trial. In the Moss-Morris et al. 2005 trial, the clinical global impression scale was used but only the improvement scores were reported. The Jason et al. 2007 study had patients and clinicians rate overall improvement, but combined the scores for deterioration with those for ‘no change’. The percentage of patients in this category in the GET group was between 52-59%. I think this indicates how poorly these trials reported possible harmful effects of GET.

    One reason patients might fail to report harms of GET is that they are not blinded, and reporting deterioration might make the intervention, and thus the trial, the researcher or the therapist, look bad. Another reason is that the GET manuals and instructions were quite assertive in telling patients that GET is safe (normally establishing that would be the objective of the trial, not something you tell patients at the outset). The booklet used in the FINE and Powell et al. 2001 trials said: "Activity or exercise cannot harm you. Instead, controlled gradually increasing exercise programmes have been used successfully to build up individuals who suffer from CFS.” The PACE trial manual said: “there is no evidence to suggest that an increase in symptoms is causing you harm. It is certainly uncomfortable and unpleasant, but not harmful.” Other sections of these manuals try to convince patients that their symptoms are due to deconditioning, stress or anxiety and that interpreting them as signs of disease may worsen outcomes.

    Drop outs
    Some have argued that drop-outs should be seen as a measure of safety and harm. The (previous) Cochrane handbook advises caution with such interpretations, because there are reasons other than safety why patients might drop out: “Review authors should hesitate to interpret such data as surrogate markers for safety or tolerability because of the potential for bias.”

    The table below gives an overview of the dropouts in all the trials. In the two Wearden trials and the Powell et al. 2001 trial, there was a higher dropout rate in the GET groups compared to controls. But this wasn’t the case in the other trials, and the test for overall effect gave a non-significant p-value of 0.2. In the trial by Moss-Morris et al. 2005, a lot of patients refused to do the exercise test, but the same was true in the control group.

    EDIT: the graph has been updated to include the drop out of Wallman 2004
    upload_2019-10-18_13-58-57.png

    I’ve played a bit with the data. If the PACE trial were excluded from the Cochrane analysis, things change a little and the p-value comes close to significance (0.05). There’s also the issue that the FINE trial is given a very low weight because it had no dropouts in the control group. If more patients had dropped out in the control group, that would paradoxically increase the risk ratio for the entire analysis. On the other hand, the trial by Jason et al. 2007 was not included because it doesn’t give exact dropout figures; it did say the dropout rate was around 25% with no significant difference between the groups. The dropout in Wallman et al. 2004 was also not included, although it was higher in the control group than in the exercise group (it is rather odd that in this trial no participants dropped out during treatment, only during baseline testing). Overall, I don’t think there’s a statistically higher dropout rate in the GET group in this analysis.
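    The zero-cell paradox can be made concrete. With no events in one arm, the usual approach adds 0.5 to every cell (a common continuity correction), which inflates the variance of the log risk ratio and shrinks the study's inverse-variance weight. A sketch with illustrative counts (not the actual FINE figures):

```python
from math import log

def rr_with_weight(e1, n1, e2, n2):
    """Log risk ratio and inverse-variance weight for one study,
    adding 0.5 to event counts (and 1 to denominators) when either
    arm has zero events, as in a standard continuity correction."""
    if e1 == 0 or e2 == 0:
        e1, e2 = e1 + 0.5, e2 + 0.5
        n1, n2 = n1 + 1, n2 + 1
    log_rr = log((e1 / n1) / (e2 / n2))
    # Standard variance of the log risk ratio
    var = 1 / e1 - 1 / n1 + 1 / e2 - 1 / n2
    return log_rr, 1 / var

# Zero dropouts in the control arm: huge corrected RR, tiny weight
lr_zero, w_zero = rr_with_weight(10, 90, 0, 90)
# A single control dropout: the weight roughly doubles and the
# log risk ratio shrinks, shifting the pooled estimate
lr_one, w_one = rr_with_weight(10, 90, 1, 90)
print(lr_zero, w_zero)
print(lr_one, w_one)
```

    So adding a single dropout to the control arm simultaneously lowers the study's own risk ratio and raises its influence on the pooled result, which is the counter-intuitive behaviour described above.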

    Intention to treat or available case analysis?
    Finally, from the email correspondence we know that one editor raised an issue about the dropout rate. Atle Fretheim explains in an email dated 29 May 2019:
    It seems that the (old) Cochrane handbook says there are two options: an available-case analysis, where you report all the data you have and count each participant in the group to which she was randomized, or an intention-to-treat analysis, where you analyze all randomized patients and impute data for participants on whom you have none. The Cochrane handbook suggests it’s a matter of opinion and judgment which approach is best. They write: “Although imputation is possible, at present a sensible decision in most cases is to include data for only those participants whose results are known, and address the potential impact of the missing data in the assessment of risk of bias.” So my first impression would be that the authors have a point here. I would be interested to hear what others think.
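    The practical difference between the two approaches is easy to see with invented numbers: available-case analysis divides by the observed, while an ITT analysis divides by all randomized, here under the pessimistic assumption that missing participants did not improve.

```python
# Toy sketch (invented numbers): available-case vs intention-to-treat
# estimates of an improvement rate in one arm.
def improvement_rate_available(improved, observed):
    """Available-case: only participants with a known outcome."""
    return improved / observed

def improvement_rate_itt(improved, randomized):
    """ITT with a pessimistic imputation: missing = not improved."""
    return improved / randomized

improved, observed, randomized = 30, 80, 100  # 20 dropouts

print(improvement_rate_available(improved, observed))  # 0.375
print(improvement_rate_itt(improved, randomized))      # 0.3
```

    The gap between the two estimates grows with the dropout rate, which is why differential dropout between arms is the real worry.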
     
    Last edited: Oct 18, 2019
    Hutan, Cheshire, JohnTheJack and 8 others like this.
  4. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    4,001
    Location:
    Belgium
    I do think that the Cochrane review could have mentioned the many patient surveys indicating harms by GET.

    They span a period of almost 30 years and cover multiple countries. All together they represent a patient sample that is much larger than the one included in the randomized trials.
    Some surveys did not ask specifically about GET. In the 2010 MEA survey, for example, they just asked their membership how useful/harmful they found different treatments. These included hydrocortisone, sleep medication such as Provigil, Imunovir, allergy treatments, thyroxine, antidepressants etc. But still, it was GET that received the most negative responses. That suggests that not all of these reports can be explained by selection bias.

    I understand that the patient reports could not be included in the review because they are not randomized comparisons, but there are so many of them over the years that it would be informative to readers to mention them in the discussion section. Currently, the authors only refer to reviews such as the 2007 NICE Guideline that support their conclusion. I think it would be good to mention the discrepancy between patient reports indicating harm by GET and the 8 randomized trials where few harms by GET are indicated.
     
    Hutan, MEMarge, ladycatlover and 11 others like this.
  5. Lucibee

    Lucibee Senior Member (Voting Rights)

    Messages:
    1,498
    Location:
    Mid-Wales
    No, there wasn't, partly because the number of actual drop-outs was small. They worked hard to make sure that they at least collected primary outcome data at 52 weeks, even if patients couldn't attend the clinic.

    However, if you look at those who failed to complete the walking test at 52 weeks, there are differences between the groups in terms of physical function (52 weeks), an indication that failure to complete the WT is not random, and is more marked in the GET group (bars are SDs, not SEs):

    6minWTmissing.png
     
    Last edited: Oct 17, 2019
    Hutan, MEMarge, alktipping and 11 others like this.
  6. Sean

    Sean Moderator Staff Member

    Messages:
    8,061
    Location:
    Australia
    Yes, where is the magical threshold where patients transform from being unreliable witnesses of their own state into reliable ones?

    How do the psychs determine this, beyond the superficial behaviour of patients simply nodding and agreeing with them, which is all these measures really seem to be measuring?
     
    Last edited: Oct 19, 2019
  7. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    4,001
    Location:
    Belgium
    11) Selection bias
    Some have made the argument that the Cochrane review is wrong to claim that most studies had a low risk of selection bias. In the Cochrane risk of bias tool, however, selection bias refers to adequate randomization and allocation concealment. It’s about the risk of baseline differences at the start of the trial, which could mean an unfair advantage to one of the intervention groups. Put another way, it only addresses issues of internal validity, that is, whether the reported effect sizes are true or not. The argument that patients included in the trials are not representative of ME/CFS patients overall is a different question, one of external validity. I’m not really sure where problems with this issue are normally addressed in Cochrane reviews, but I guess it’s via the ‘indirectness of evidence’ category in the GRADE system.

    - Whether patients who attend the centres where GET trials are conducted are different is a difficult question to answer. I’m aware of two studies that compared biomedically oriented and BPS-oriented clinics, one by Peter White in the UK and one by Van Houdenhove in Belgium. From what I remember, these found only small differences.

    - Another question is whether the trials used selection criteria that excluded a particular group that might differ from the average CFS patient. PACE, FINE and Powell et al. used a minimum threshold for fatigue or physical function that patients had to meet to enter the trial. Even though these thresholds weren’t very strict, they might have excluded some mild ME/CFS patients from participating. More important is that, for practical reasons, the GET trials did not include homebound patients or patients in a wheelchair. Only the FINE trial attempted to see patients in their homes. So I think we can say that severe ME/CFS patients were not well represented in these trials, which was probably the reason the 2007 NICE guideline added a caveat that these trials do not apply to severe ME/CFS patients. Fulcher & White, 1997 excluded patients with “current psychiatric disorder or symptomatic insomnia”, but it’s the only trial that did so.

    - There can also be selection bias because some eligible patients refuse to participate in the trial. Overall, this percentage was relatively modest. In the trial by Fulcher & White, 1997, only 7% of eligible patients declined. In the trial by Wearden et al. 1998 this was 15.2%. In the trial of Powell et al., 2001, 7.5% of eligible patients were not randomized. For the trial by Wallman et al. 2004 no selection figure is given, only that 8.8% withdrew during baseline testing. Moss-Morris et al. 2005 and Jason et al. 2007 do not report a response rate. Of the 449 patients referred by their GP to the FINE trial, 13 declined because they disagreed with the treatment approaches, and a further 9 did not want to be randomized. After assessment of eligibility, none declined to participate. There is also a larger group that was referred to the trial but refused assessment (declined to be assessed n=78, no reason given n=24). The data are also difficult to interpret in the PACE trial. Of the 3158 screened for eligibility, 139 were unable to comply with the protocol and 533 did not meet primary consent criteria. Given that only 898 were assessed for eligibility, the 533 figure seems rather big. Of the 898 assessed, another 69 did not meet primary consent criteria to start the trial. So there does seem to be significant selection going on, but it’s hard to interpret, because patients might simply refuse because they don’t want to be in a trial rather than because they object to the treatment on offer. I’m not sure if the percentage is particularly higher in the GET trials than in other trials.

    - I’ve also looked at some baseline data. For my ease, I only looked at the relevant GET group, not the average with the control groups included.

    upload_2019-10-19_18-17-2.png

    Patients in the trials seem more debilitated than large or representative samples of ME/CFS patients. I think Vink has made the argument that the VO2max values in some of the trials are comparable to healthy controls and that this casts doubt on the diagnosis of CFS. I’m not sure about this, because I remember some of the 2-day CPET studies showing little difference in VO2max on the first day. A review on exercise testing concluded that VO2peak was on average 5.2 ml·kg⁻¹·min⁻¹ lower in CFS/ME patients versus healthy controls, which seems like a relatively modest difference.
     
    Last edited: Oct 19, 2019
    Hutan, Annamaria, rvallee and 4 others like this.
  8. Esther12

    Esther12 Senior Member (Voting Rights)

    Messages:
    4,393
    For so many important issues it seems that Cochrane can just let Larun choose to ignore them.

    I've never understood what was going on there. Given what else we've seen from the PACE researchers it's difficult to assume that there's no reason for concern, but I'm also not sure there's a strong enough argument there to convince others that we should be concerned.
     
  9. Dolphin

    Dolphin Senior Member (Voting Rights)

    Messages:
    5,791
    Small point: a norm-based score of 30 is I believe two standard deviations below the mean of the population. So probably 2 x 24 = 48. If the mean of the population is say 84, that would be 36, a lot different to 70.
     
    JohnTheJack, Amw66, Annamaria and 2 others like this.
  10. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    4,001
    Location:
    Belgium
    Yes sorry that was a mistake.

    I've redone the calculation with the figure from the powerpoint I've linked earlier: a mean of 83.3 and standard deviation of 23.7 (I suspect these represent the figures of the 1998 National Survey of Functional Health, which I couldn't find a publication of).

    So that would make: X = 23.7 × (30 − 50)/10 + 83 = 35.6.

    So exactly as you say. Thanks, Dolphin.
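    The conversion used here is just the T-score formula inverted. A minimal sketch, using the figures from the thread (population mean 83, SD 23.7, reportedly from the 1998 National Survey of Functional Health):

```python
def t_score_to_raw(t, pop_mean, pop_sd):
    """Invert the norm-based scoring T = 50 + 10 * (raw - mean) / sd
    to recover the raw SF-36 physical function score."""
    return pop_sd * (t - 50) / 10 + pop_mean

# A norm-based score of 30 (two SDs below the population mean)
raw = t_score_to_raw(30, pop_mean=83, pop_sd=23.7)
print(round(raw, 1))  # 35.6
```

    So a norm-based threshold of 30 corresponds to a raw SF-36 score of about 36, far below the 70 it might naively be confused with.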
     
    Hutan, JohnTheJack, Amw66 and 3 others like this.
  11. Caroline Struthers

    Caroline Struthers Senior Member (Voting Rights)

    Messages:
    966
    Location:
    Oxford UK
    I have deleted these posts as they are speculative (in part) and not constructive
     
    MSEsperanza likes this.
  12. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,175
    Location:
    London, UK
    Actually extremely useful from my perspective but now redundant. Thanks.
     
  13. Andy

    Andy Committee Member

    Messages:
    23,025
    Location:
    Hampshire, UK
    Interesting nugget of information. The minutes of the latest CMRC say
    https://www.actionforme.org.uk/uploads/images/2019/11/CMRC_minutes_16.10.19_.pdf
     
  14. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,947
    Location:
    betwixt and between
    Don't know where to put this so just leave it here:

    Cochrane seeks - Research Integrity Editor

    https://www.cochrane.org/news/cochrane-seeks-research-integrity-editor

     
  15. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,659
    Location:
    Canada
    We were promised engagement by the editor-in-chief. Months ago.

    Hopefully this is it. Maybe. Some day between now and the first proton decay.
     
  16. Sean

    Sean Moderator Staff Member

    Messages:
    8,061
    Location:
    Australia
    I see you take the long view.
     
  17. Andy

    Andy Committee Member

    Messages:
    23,025
    Location:
    Hampshire, UK
    I don't know if Keith Laws had a particular meta-analysis in mind with this tweet but I thought that it fit with the discussion here.
    https://twitter.com/user/status/1224965660016619521
     
  18. Kalliope

    Kalliope Senior Member (Voting Rights)

    Messages:
    6,568
    Location:
    Norway
  19. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,659
    Location:
    Canada
    Cochrane has no intention of doing anything, do they? Just letting the clock tick? It's been months since the promise of a new process, and not even a peep about intending to start thinking about it.

    They seem to be waiting on the NICE process, maybe? So a full year of nothing, then slow-walking the process that will ultimately lead to yet another cosmetic update years from now that keeps all the conclusions the same?
     
    Joh, Barry, Hutan and 4 others like this.
  20. NelliePledge

    NelliePledge Moderator Staff Member

    Messages:
    14,837
    Location:
    UK West Midlands
    hopefully some of the attendees at the CMRC in March will take the opportunity to ask some very direct questions of the editor https://www.eventbee.com/v/uk-cfsme...ience-conference/event?eid=105083142#/tickets
     
    Joh, Barry, Hutan and 14 others like this.
