Germany: IQWIG Report to government on ME/CFS - report out now May 2023

Discussion in 'Other reviews with a public consultation process' started by Hutan, Jul 1, 2021.

  1. Andy

    Andy Committee Member

    Messages:
    23,032
    Location:
    Hampshire, UK
    From https://www.nice.org.uk/guidance/ng206/documents/evidence-review-7 and in your order

    Page 54, PACE, "Serious population indirectness – Oxford criteria used; PEM is not a compulsory feature."

    Page 24, Janse et al, "Serious population indirectness – 1994 CDC criteria used; PEM is not a compulsory feature. "

    Page 13, GETSET, no comments about population indirectness, but the NICE 2007 criteria were used, which included this in the required symptoms: "characterised by post-exertional malaise and/or fatigue (typically delayed, for example by at least 24 hours, with slow recovery over several days)". I therefore think a reasonable argument is that the cohort might have had PEM, but use of these criteria doesn't definitively prove that it did.
     
  2. cassava7

    cassava7 Senior Member (Voting Rights)

    Messages:
    1,051
    Prof Carmen Scheibenbogen:

    “It is obvious that my expertise was not taken into account in this preliminary report. There is now the opportunity to submit statements, which should be used by everyone who can competently say something about it.

    The claim that GET is effective is rejected by all major international organizations such as CDC, NICE, EUROMENE and WHO due to the poor scientific quality of the studies and the well-established fact that it harms patients.

    The unsustainable concept of GET has hampered ME/CFS biomedical research and drug development for decades.”

    https://twitter.com/user/status/1580621210445959168
     
  3. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    4,001
    Location:
    Belgium
    Had a brief look at the Google Translated version of the document.

    They seem to have taken a rather stringent view on case definitions, in that only the more recent case definitions that require PEM (CCC, ICC, IOM) are believed to capture the correct patient population. In their view, older case definitions such as the Fukuda criteria identify a more heterogeneous group of patients, which can lead to misdiagnosis or overdiagnosis. At the same time, the report highlights the vagueness in the definition and operationalization of PEM and says that this should be a focus of further research.

    The section on case definitions focuses on the sole study (Nacul et al. 2011) that provided a prevalence estimate for the CCC, namely 0.1% in the UK. This was likely an underestimate because it was based on diagnoses that GPs had already made (and we have good reasons to believe GPs are underdiagnosing ME/CFS patients). The report extrapolates the 0.1% figure to Germany, resulting in approximately 70,000 adult patients. For children, there is a US estimate by Jason et al. which found a much higher prevalence of 0.75%, but the report says it could not extrapolate this to Germany because this prevalence study did not provide information on age standardization.

    When evaluating treatments, they first looked at existing reviews. They found 8 but only the NICE guideline was deemed sufficiently up-to-date and of high quality, so they used it as a starting point. The English translation states on page 30: “All data from NICE 2021 was extracted as part of the evidence mapping. The data were not cross-checked with or supplemented from the primary publications and no own calculations of results were performed.”

    So they are heavily reliant on the review by NICE, but then they did something rather strange. The literature search by NICE found 85 randomized controlled trials (RCTs) testing all sorts of treatments for ME/CFS. In the NICE guidance, all of these trials were reviewed and assessed. Trials in which fewer than 95% of patients had PEM were still included but downgraded by one level according to the GRADE system because of indirectness. This meant that if the trials were high quality, they could still provide important evidence, but that evidence was simply deemed a bit more uncertain because the patient population no longer fitted with what we now define as ME/CFS.

    IQWiG took a more radical approach: they disregarded 77 of the 85 RCTs because participants were not required to have PEM or because the trial did not provide enough information on this. On the other hand, they also tweaked the criterion that NICE used: instead of 95%, they used a threshold of 80%. Trials in which 80% or more of participants were reported to have PEM were included. This meant that the PACE trial was included even though it selected patients using the Oxford criteria. The Dutch trial of CBT by Janse et al. from 2018 was also included even though it selected patients using the Fukuda criteria.

    When evaluating the evidence on GET, only 2 trials were considered: the PACE trial and the GETSET trial. Similarly, when evaluating the evidence on CBT only 2 trials were considered: the PACE trial and the trial by Janse et al. In other words, it’s pretty clear that this report heavily relies on data from the PACE trial.

    When it comes to evaluating the quality of evidence, IQWiG seems to be on the same page as NICE. They say that the risk of bias was high for all 3 trials across all outcomes. The English translation explains (page 92): “The reason for this was the lack of blinding of the patients and the treating persons. Even if it is not possible to blind these people to the interventions examined, the open study design endangers the equality of treatment (performance including co-intervention bias in the case of concomitant treatments that can be influenced). In addition, in the absence of blinding, patients' specific expectations of the test intervention can be encouraged and thus the evaluation of subjective or subjectively collected endpoints can be statistically significantly influenced (detection bias)”. They even say that they didn’t evaluate the quality of evidence of the different outcome measures because all 3 trials were already considered high risk of bias. If I understand correctly, all estimates were considered the lowest quality of evidence, just like in the NICE review.

    Now the estimates themselves. For GET, they pooled the PACE and GETSET data and found an effect: a standardized mean difference (SMD) of 0.37 [95% CI 0.19 to 0.53]. If I understand correctly, the report itself states that this effect was not big or clear enough to be considered relevant. They used the following rule: only if the confidence interval for the SMD lies completely outside the irrelevance range, which they defined as [-0.2, 0.2], is an effect considered meaningful. And because 0.19 < 0.2, this wasn’t the case for the GET effect. There was also data on GET at longer follow-up times, but here the control group in the GETSET trial actually performed better, so the report says no conclusion can be made because the follow-up data were too heterogeneous.

    The data they used for the estimate above compared GET against usual care. So there really was no credible control group: in both trials, patients who received GET also received usual care. Something similar was true for the evaluation of CBT: in the trial by Janse et al., patients in the “control group” were simply put on a waiting list to receive CBT as well. So it was patients who got CBT versus patients waiting to get CBT. The effect size they found by pooling this trial with the PACE trial data on CBT was an SMD of 0.39 [95% CI 0.21 to 0.57]. This was considered a relevant effect because the lower bound (0.21) was higher than the relevance threshold (0.2).
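    As a reader's sketch (not IQWiG's code), the relevance rule can be expressed as a one-line check; the interval endpoints below are the pooled SMD confidence limits quoted above:

```python
# Sketch of the relevance rule described above: an effect only counts as
# relevant if the whole confidence interval for the SMD lies entirely
# outside the irrelevance range of [-0.2, 0.2].

def is_relevant(ci_low, ci_high, margin=0.2):
    """True if the SMD confidence interval lies entirely outside [-margin, margin]."""
    return ci_low > margin or ci_high < -margin

print(is_relevant(0.19, 0.53))  # GET vs usual care: False (0.19 < 0.2)
print(is_relevant(0.21, 0.57))  # CBT vs usual care: True  (0.21 > 0.2)
```

    This makes the knife-edge nature of the two results visible: the GET and CBT intervals differ at the lower bound by only 0.02 SMD, yet one falls inside the irrelevance range and the other just outside it.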

    Despite all of this, the report seems to suggest that there is evidence for GET and CBT. On page 154, for example, it states: “the available evidence is interpreted to the effect that GET can be beneficial at least for milder ME/CFS degrees of severity.”

    The surveys of patient-reported harms are disregarded because these were not considered a reliable enough source of evidence, and the RCTs did not show adverse effects of GET. The report states: “surveys (cross-sectional studies, retrospective before-after studies) are methodologically unsuitable for deriving reliable statements on the benefit or harm of a treatment.”

    The report also seems to suggest that reports of harm are due to improper delivery of GET. Again, on page 154 it states: “Patients should be warned before an undifferentiated use of a GET that this is only carried out by medical or physiotherapeutic specialists who have sufficient knowledge and experience with the clinical picture ME/CFS.”

    When it comes to other treatments, the report indicates no reliable conclusion can be made, which is similar to the conclusion by NICE and other reviews.

    The report recommends that the care for patients with ME/CFS should be strengthened, starting with factual information for the general public and better teaching content for healthcare workers. It states that “The information on ME/CFS will be published as a ‘topic’ on www.gesundheitsinformation.de after the project is completed and will be updated regularly.” The report also recommends biomedical research and says that studies were set up in 2022, including a Germany-wide ME/CFS registry coordinated by centres in Munich and Berlin.
     
  4. Hutan

    Hutan Moderator Staff Member

    Messages:
    29,374
    Location:
    Aotearoa New Zealand
    Thanks for that detailed report @ME/CFS Skeptic.

    So the analysis IQWIG did seems ok - I mean we could argue over 95% with PEM versus 80% with PEM, but the 80% seems defensible when there is no better information to go on, so long as the uncertainty it creates (not to mention the issues with the studies: moving recovery criteria, subjective outcomes in an unblinded trial, promotion of the treatment, waiting-list controls etc.) is used to push any very marginal findings of benefit into the 'irrelevant' category.

    It's the conclusion that is the weird bit:
    It really seems to be a leap of logic. Given the large amount of anecdotal evidence of harm caused by GET, you would think that the precautionary principle would lead you to conclude that there isn't enough evidence to support GET as a treatment for ME/CFS.


    Perhaps there is something to be said about dropout percentages in these trials, the lack of improvement in any objective outcomes, and any reports from participants in the trials about how symptom exacerbation was ignored?

    I do hope people will submit comments, even if it looks as though the conclusion was and is pre-determined.
     
  5. cassava7

    cassava7 Senior Member (Voting Rights)

    Messages:
    1,051
    I strongly disagree that 80% is an acceptable rate given the results of GET trials. PACE reported that approximately 20% in the GET arm “recovered”, while 18% of participants did not have PEM (according to NICE). It is not implausible that these participants could have been the major contributors to the recovery rate.

    More generally speaking, it is not methodologically acceptable to include 20% of participants who do not have the core symptom that defines a disease in trials for this disease.
     
  6. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,659
    Location:
    Canada
    Well at least one useful thing is the confirmation by another health authority that almost all BPS research on ME is of such terrible quality that it shouldn't be used. It's so bad, in fact, that it obviously should end, considering that after decades they are all basically identically bad, all for the same reasons. They use it anyway but only selecting 3 trials out of 300+ is a 99% rate of complete garbage, even as they admit that those 3 are very bad anyway.

    At least that could be of use to some. Which is just about the lowest standard anyone can imagine. And would you look at that: this is the standard they use to make clinical decisions about real people in real life without a care in the world about what happens after.

    But this is not negligible. For all the pretense about solid evidence base, this is the second (hell, forget this it's the 3rd, the NIH pretty much concluded the same) authority that evaluates all of it as garbage, rates it as low quality to the point of excluding almost all of them and marking those they decide to use anyway as unreliable. That decisions are made using garbage is also significant but different.

    I frankly can't believe the No true Scotsman is happening here, though. This is straight out of the PACE ideologues' mouths, even though it's been thoroughly debunked. Especially as it was established that the more rigid PACE-like approach yields the worst results. Which they decide to use anyway. Incredible.

    But the evidence base can be described as "thrice-declared as garbage by medical authorities".
     
  7. Hutan

    Hutan Moderator Staff Member

    Messages:
    29,374
    Location:
    Aotearoa New Zealand
    If you had one treatment trial and you knew that 20% of the participants didn't have the disease you are interested in, but 80% did, and there was a 70% rate of substantial improvement, then the trial would be useful, all other things being done properly. I think it would be methodologically acceptable to consider the trial, it's just that you would want to see a rate of improvement that makes it implausible that it is due only to the response of the participants without the disease. Which, as you point out, there was not. The consideration of the trial in this case was, I think, ok, it's just that the conclusion isn't sensible.
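    The arithmetic behind this can be sketched as follows (a worked example; the 70%/20% figures are the hypothetical ones above, and the 20%/18% pair uses the PACE figures cassava7 quoted):

```python
# Worked example of the dilution argument: assume the worst case, namely that
# every participant without the disease improved, and ask what improvement
# rate that still forces among the true patients.

def min_true_patient_rate(overall_rate, frac_without_disease):
    """Lower bound on the improvement rate among true patients."""
    improved_from_true = max(overall_rate - frac_without_disease, 0.0)
    return improved_from_true / (1 - frac_without_disease)

# Hypothetical trial: 70% improved overall, 20% of participants without the disease.
print(min_true_patient_rate(0.70, 0.20))  # 0.625 -> at least 62.5% of true patients improved
# PACE-like numbers (about 20% "recovered", 18% without PEM): proves almost nothing.
print(min_true_patient_rate(0.20, 0.18))
```

    With a 70% improvement rate the worst case still forces a substantial response among true patients; with a 20% rate it forces almost none, which is cassava7's point.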

    What I don't understand is, if it is possible to identify which participants had PEM, why you wouldn't analyse the data for them separately? i.e. essentially treat it as a separate trial. Maybe the analysis was done? If the authors are not releasing individual data including PEM status for independent researchers to analyse, then that would be a red flag to me that the reported results aren't to be trusted.
     
  8. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,947
    Location:
    betwixt and between
    Yes. I think that's the problem we discussed on the GRADE thread. This assessment tool and line of thinking allow every study that qualifies as an 'RCT', even if rated as having the highest possible risk of bias, to still provide evidence for the benefit of a treatment. It doesn't even matter if there are doubts about the proper reporting of harms in an RCT.

    If I understood correctly, even if they see only a slight 'hint' of a benefit of a treatment, in the absence of harm documented in RCTs, they will still recommend that treatment.

    I need to have another look at IQWiG's methods paper to judge whether it's possible to challenge this reasoning (see also the forum discussion here).

    In the preliminary report, the authors respond to the main points of the criticism of the PACE and similar trials.

    They state that Friedberg et al. 2020 (*) summarized the relevant points. Apparently they didn't bother to read other critiques in detail; critiques that were submitted in comments on the report plan -- e.g. Jonathan Edwards' expert testimony to NICE and the paper by Struthers/Tack/Tuller on 'Bias caused by reliance on patient-reported outcome measures in non-blinded randomized trials' -- are not listed in the preliminary report's references.

    Not sure if Vink's work had been submitted in any comment.

    I will copy and paste the section on how they address the criticism on behavioral/ exercise studies in the following posts.



    (*) Friedberg F, Sunnquist M, Nacul L. Rethinking the Standard of Care for Myalgic Encephalomyelitis/Chronic Fatigue Syndrome. J Gen Intern Med 2020; 35(3): 906-909.
    https://dx.doi.org/10.1007/s11606-019-05375-y.
     
    Last edited: Oct 20, 2022
  9. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,947
    Location:
    betwixt and between
    From the machine translation of the preliminary report:

    Dealing with critical comments on ME/CFS studies [on behavioral/exercise interventions], p. 155 ff. / p. 175 ff. in the machine translated PDF

    With regard to the methodology and the results of ME/CFS studies, there are a large number of critical publications, especially on the PACE study. The essential aspects, which are repeatedly mentioned, are discussed by Friedberg et al. in a publication from 2020 [131]. With these points in mind, the following explains how the methodology used in this report addresses the weaknesses of studies such as PACE.

    [1] Argument "lack of blinding"

    Friedberg et al. point out that studies on CBT and GET are not blinded and do not take into account potentially distorting effects in the results of the subjective endpoints, which were mainly collected by the patients themselves. The results of the subjective endpoints could show better disease management, stress reduction or acceptance of the ongoing health restrictions.

    This criticism is aimed in particular at the PACE study. In addition, parameters that are described as objective, such as the 6-minute walk test or social participation (“return to work”), do not deliver convincing results.

    “Blinding”/“lack of blinding” is always an aspect in the Institute’s benefit assessments when assessing the risk of bias [77]. In this report, for example, the PACE study was attested to have a high risk of bias. In the cross-endpoint overall assessment of all results, this contributes to the fact that statements on the benefit of CBT and GET were assigned to the lowest level of certainty ("hint").
     
    Last edited: Oct 17, 2022
  10. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,947
    Location:
    betwixt and between
    Continued from above

    [2] Argument: "Inadequate inclusion criteria"

    Friedberg et al. question the use of the Oxford diagnostic criteria as an inclusion criterion for PACE. As explained in this report, a study population diagnosed using these criteria can only be classified as relevant if PEM is also present as an obligatory symptom of ME/CFS disease.

    This is the case in PACE. The presence of the PEM symptom is documented for around 86% of the study participants (see Table 21).
     
  11. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,947
    Location:
    betwixt and between
    Continued from above

    [3] Argument: "Definition of Endpoint: Recovery”

    Friedberg et al. question the suitability of the combined endpoint recovery collected in PACE.

    The authors point out that the definition of this endpoint, contrary to the name, was not aimed at a complete restoration of health and the response criteria for this endpoint had been adjusted by PACE in the course of the study.

    In the available benefit assessments, the results of the outcome “recovery” were not used due to an unsuitable operationalization (including use of the Oxford and CDC diagnostic criteria).
     
  12. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,947
    Location:
    betwixt and between
    [4] Disease model argument

    [...]

    [Not relevant for the assessment of the methodological quality of treatment trials]
     
  13. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,947
    Location:
    betwixt and between
    (Continued from above)

    [5] Argument: "Lack of consideration of patient surveys"


    In general, Friedberg et al. criticized the limitation to clinical study data and the insufficient consideration of patient and user surveys to assess possible harm of CBT and GET. These should be given greater consideration, particularly when there are significant discrepancies between the clinical data and the results of the surveys. The authors refer to a publication by an association of ME/CFS sufferers in Ireland [132], which would show that, as a result of inadequate advice on increasing activity, 50% of those surveyed not only felt no improvement but often experienced a deterioration in their state of health.


    This publication is an evaluation of 9 surveys of ME/CFS patients according to their self-assessment after undergoing GET or CBT.


    By emphasizing the surveys of those affected, Friedberg et al., however, disregard the fact that such surveys must also be subjected to (the same) critical methodological evaluation. In general, surveys (cross-sectional studies, retrospective before-after studies) are methodologically unsuitable for deriving reliable statements on the benefit or harm of a treatment [133]. This is because, for example, the decision to participate or not to participate can distort the results of the survey. In the case of ME/CFS, the selection of participants in surveys naturally also raises the question of which catalog of criteria (with or without PEM) was used to diagnose the participants.

    In addition, the publication of the Irish Association of ME/CFS sufferers [132] also admits of interpretations different from those of Friedberg et al., as the following results demonstrate.

    In the patient surveys on CBT, between 7.1% and 38.0% of those questioned stated that their condition had deteriorated after CBT, and between 7.0% and 56.9% (own calculation) that their condition had improved.

    In the cited patient surveys on GET, between 28.1% and 82.0% of those questioned stated that their condition had deteriorated after GET, but between 13.1% and 60.8% (own calculation) also stated that their condition had improved.


    Edit: Couldn't find a reference to the Oxford Brookes University survey -- but too brain-fogged now to be sure.
     
    Last edited: Oct 14, 2022
  14. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,947
    Location:
    betwixt and between
    (Continued from above)

    [The authors' conclusion on their handling of the submitted methodological criticism]

    "Overall, based on these aspects, it can be shown that the methodology used in this report (and elsewhere by IQWiG) is expressly aimed at identifying and taking into account the methodological strengths and weaknesses of the various studies, both in the selection of studies and in the evaluation of the results. But this applies to all studies - regardless of the direction of the result."
     
  15. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,947
    Location:
    betwixt and between
    I think two points deserve particular scrutiny:

    1) Their handling of reports of harms (see argument 5 above)

    2) In the evaluation of interventions, how did they address the incoherence of benefits between endpoints that should show some relation?

    On 1)

    see also their assessment of benefit and harms in the 3 evaluated studies:

    p.149-150 (p. 169-170 in the machine translated PDF )

    Benefit-damage assessment

    In all 3 evaluated studies it is unclear to what extent the frequency of specific damage events was
    recorded. This applies, for example, to the essential damage aspect of repeated PEM events (crash)
    induced by the intervention-related cognitive, emotional or physical exertion. However, neither the
    data on study and therapy dropouts nor the direction of most of the effects on the examined
    interventions allow the conclusion that undocumented ME/CFS-specific adverse events could call
    the results of the studies into question in their entirety (see also the relevant section in chapter 6).

    On 2): see following post
     
    Last edited: Oct 14, 2022
  16. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,947
    Location:
    betwixt and between
    I wonder, in the evaluation of interventions, how did they address the incoherence of benefits between endpoints that should show some relation?

    Also, I wonder how they assessed the benefits for social participation -- what did the 3 included studies actually investigate with regard to that?

    Here's the translated text from the preliminary report:


    The assessment of benefits --
    p.149-150 (p. 169-170 in the machine translated PDF )

    Comparison CBT versus SMC


    When comparing CBT versus SMC, the studies showed statistically significant differences between
    the groups in favor of CBT for some endpoints, which were assessed as irrelevant. This affected the
    endpoints sleep quality, activity level, physical function and mental status. For the short and medium
    term, there was a benefit of the CBT compared to the SMC for the endpoint fatigue (reliability:
    indication or hint) and in the short and medium term for the endpoints social participation and general
    symptoms (reliability: hint). In the medium term, a benefit could also be derived for the outcome
    “feeling sick after exertion” (reliability of statements: hint). For all other endpoints, there were no
    statistically significant differences or no (evaluable) data were available (see Table 50).

    An assessment of possible damage aspects was only possible to a limited extent due to the insufficient PEM reporting (see above) and the lack of usability of the SAE data in the Janse 2018 study.

    When all outcomes are weighed up across outcomes, there is a hint of a benefit of CBT compared
    to SMC for patients with mild to moderate ME/CFS severity, both in the short and medium term. A
    benefit statement for the use of CBT in patients with a higher degree of ME/CFS is not possible due
    to the lack of data.


    Comparison GET versus SMC

    When comparing GET versus SMC, there were statistically significant differences between the groups in favor of
    GET for the morbidity endpoints of fatigue, sleep quality, physical function, physical fitness, social participation and
    psychological status for individual analysis times. Since the effect size was assessed as irrelevant in these cases,
    no hint of benefit or harm could be derived for these outcomes. For the outcome “general symptoms”, on the other
    hand, an indication of a benefit of GET could be derived for the short-term therapy effects. With regard to the
    outcome “feeling sick after exertion”, derived from the results of the PEM survey, there is a medium-term indication
    of a benefit of GET (see Table 50).

    For all other endpoints - pain, activity level, cognitive function, post-exertional malaise, health-related quality of life
    (HRQoL), all-cause mortality and serious adverse events - there were no statistically significant differences or no
    (evaluable) data were available.

    When all results are weighed up across outcomes, there is a hint of a benefit of GET compared to SMC for patients
    with mild to moderate ME/CFS severity in both the short and medium term. A benefit statement for the use of GET in patients with a higher degree of ME/CFS is not possible due to the lack of data.
     
  17. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,947
    Location:
    betwixt and between
    If I understood correctly, IQWiG could not find any additional usable data on PEM/non-PEM subgroups, except for their overall or estimated percentage in the complete study and in the trial arms.

    They did a sensitivity analysis on that on some PACE data, but only for the measure of fatigue in CBT compared to SMC:

    [Possible differences PEM vs. non-PEM sub-population in the PACE trial -- sensitivity analysis with the CBT and SMC groups in the PACE trial with regard to fatigue]

    [Not clear to me why they didn't do this sensitivity analysis with GET instead?]

    5.6.15 Sensitivity Analysis - p. 154-165 in the machine translated PDF


    In the PACE study, the proportion of patients with PEM at the start of the study across all groups was only 84.2% of the included patients. In principle, this proportion met the inclusion criteria (see Section A2.3.1.9).

    However, in order to get an impression of the extent to which the specific results of patients without PEM (as a key symptom of ME/CFS) can influence the results of the total population, a sensitivity analysis was carried out as an
    example for the data of the primary endpoint fatigue after 52 weeks after randomization, for which a hint of a benefit in favor of CBT could be derived for specific outcomes (see Section 5.6.1).

    For this purpose, different “true” effects of patients with PEM were assumed in 2 scenarios and it was examined in each case how strong the effect of patients without PEM would have to be in order to obtain the result actually observed in PACE. The number of subjects per arm included in the original endpoint analysis was not reported in the publications. It was assumed that the numbers of those included in the evaluations correspond to those for which there was a value for the endpoint at the last time of the survey.

    It was assumed that the proportion of patients with PEM is constant over the course of the study and that the
    standard deviation in the two subpopulations (patients with and without PEM) corresponds to the standard deviation in the total population. The formula of the two-sample t-test was used to calculate the standard deviations.
    In the 1st scenario for the endpoint fatigue, it was examined how strong the effect of the patients without PEM would have to be in order to obtain the result observed in PACE if the effect of CBT on the patients with PEM was on the threshold of a statistically significant difference in favor of the CBT.

    In order to obtain the overall effect of the total population of around -3.4 observed in PACE, the MWD of the subpopulation without PEM would have to be -13.0 in the CFQ (see Figure 14). For the subpopulation without PEM, this means that if the SMC group had a mean value of, for example, 23 CFQ points after 52 weeks (similar to the results of all PACE patients in the SMC group after 52 weeks), the fatigue in the CBT group would have to have dropped to only 10 points in the CFQ in order to achieve the observed effect of the overall population.



    In the second scenario, it was examined how strong the effect of the patients without PEM would have to be in
    order to obtain the result of the total population observed in PACE if the effect of CBT on the patients with PEM was on the threshold of a less benefit compared to SMC. In order to obtain the overall effect of the total population of around -3.4 observed in PACE, the MWD of the subpopulation without PEM would have to be -33.0 in the CFQ (see Figure 15). For the subpopulation without PEM, this means that after 52 weeks, the SMC group should have a mean CFQ value of 33 and thus maximum fatigue, and at the same time the CBT group should be completely fatigue-free (CFQ: 0 points) in order to achieve the observed effect of the total population.

    Overall, it is possible that the participation of patients without PEM, i.e. possibly without ME/CFS, in the study could
    have caused the measured effect on the endpoint fatigue to exceed the relevance threshold at the time point of 52
    weeks. However, based on the sensitivity analyses, it can be seen that the effect of the PEM group must be in the
    statistically significant range. Otherwise - even with an effect at the threshold of statistical significance in favor of
    the CBT in the PEM group - an unrealistic effect would be required in the group without PEM to give the observed
    overall effect. In order to mask an opposite effect of the people with PEM and to give the overall effect observed in
    PACE, an even more extreme effect of the people without PEM would be needed.
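    IQWiG's back-calculation can be approximated with a simple proportion-weighted average (a sketch of my reading, not IQWiG's actual calculation: they worked with per-arm numbers and the two-sample t-test formula, and the PEM-subgroup effects of -1.6 and +2.15 CFQ points below are my assumed stand-ins for "on the threshold of statistical significance" in the two scenarios):

```python
# Rough sketch: treat the overall mean difference as a proportion-weighted
# average of the PEM and non-PEM subgroups, then solve for the non-PEM effect.
# The 84.2% PEM share and the overall -3.4 CFQ effect are from the report;
# the assumed PEM-subgroup effects (-1.6, +2.15) are illustrative stand-ins.

def required_non_pem_effect(mwd_total, mwd_pem, p_pem):
    """Solve mwd_total = p_pem * mwd_pem + (1 - p_pem) * mwd_nonpem for mwd_nonpem."""
    return (mwd_total - p_pem * mwd_pem) / (1 - p_pem)

print(round(required_non_pem_effect(-3.4, -1.6, 0.842), 1))   # scenario 1: -13.0
print(round(required_non_pem_effect(-3.4, 2.15, 0.842), 1))   # scenario 2: -33.0
```

    Under these assumptions the simple weighted average reproduces the -13.0 and -33.0 CFQ figures quoted above: because the non-PEM subgroup is only ~16% of the sample, it must show an implausibly large effect to carry the whole result on its own.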
     
    Last edited: Oct 14, 2022
  18. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,947
    Location:
    betwixt and between
    What they say about the PEM measures in PACE is another topic that I think is worthwhile to have a look at. Hope I will be able to post later.

    For any comment to IQWIG and also for replies on Twitter it seems to me important to scrutinize how IQWIG addressed these issues I posted above. But it's also important to first look in detail at IQWIG's actual arguments and analysis, as they seem to already have taken into account much of the criticism and also much of patients' feedback.
     
    Last edited: Oct 14, 2022
  19. petrichor

    petrichor Senior Member (Voting Rights)

    Messages:
    321
    They didn't find a clinically relevant effect size for GET vs SMC for the primary outcomes so I assume they saw no point in including a sensitivity analysis for GET
     
  20. cassava7

    cassava7 Senior Member (Voting Rights)

    Messages:
    1,051
    If I understand correctly, IQWiG’s sensitivity analyses showed that, had CBT in PACE given either a positive or negative result for patients with PEM that was on the verge of statistical significance, the patients without PEM would have had to show a mean difference of -13 or -33 points, respectively, to obtain the result reported in the trial (-3.4 on the Chalder fatigue scale, 95% CI -5.0 to -1.8; p = 0.0001).

    IQWiG then go on to say that both cases are unrealistic, and that this implies that patients with PEM necessarily had a statistically significant result in favour of CBT in the first place. This seems fair enough, especially given that CBT is, generally speaking, not effective for chronic fatigue.

    I suppose they could rely on these sensitivity analyses to justify lowering the threshold of participants with PEM to 80% (compared to NICE’s 95%) as the same may apply to the other trials that were included, too.
     
    Last edited: Oct 14, 2022
