Germany: IQWiG Report to government on ME/CFS - report out now May 2023

Discussion in 'Other reviews with a public consultation process' started by Hutan, Jul 1, 2021.

  1. Sean

    Sean Moderator Staff Member

    Messages:
    8,064
    Location:
    Australia
    Even if you allow the 80% threshold, studies like PACE still haven't delivered a robust, unambiguous benefit.

    The debate about 80% vs. 95% is a bit of a red herring/straw man. Either way, they still have not delivered.
     
    bobbler, Hutan, RedFox and 7 others like this.
  2. adambeyoncelowe

    adambeyoncelowe Senior Member (Voting Rights)

    Messages:
    2,736
    Brilliant. That's helpful.

    So this seems to be a general policy that they applied, rather than something they specifically devised for this review.

    That makes it more acceptable, though I still can't shake the feeling that you can't cherry pick parts of one review to replace doing your own work (i.e., the in-depth evidence reviews that NICE did).

    It still seems so much flimsier than the NICE process and the IOM process.
    True, but it could have spoken to an intention to get the result they wanted, had they chosen the 80% threshold after seeing (or being informed of) how the 95% threshold worked out for NICE.

    I wasn't aware this was a general policy that was applied previously, but I am aware that certain...people tried to influence the NICE process (luckily to no avail).

    It wouldn't surprise me if the same people were also in contact with their German colleagues, and had tipped them off about the things likely to get GET ruled out, for instance.

    As for other tactics, I think the EMEC comments are really good. The section on harms seems to me to be very strong. That seems a worthwhile avenue to pursue.

    Also, their comments on subjective measures in unblinded trials start off repeating the common arguments (which I think are less persuasive, as IQWiG would consider them factored into the quality rating), but then go in new directions that are very helpful and more persuasive, in my view.

    E.g., the point about 0.5 SD being the natural variation you get on subjective outcomes compared with objective ones. That would mean that if the minimum clinically important difference in studies with blinding and objective outcomes were 0.5 SD, you'd need double that in trials lacking both, to be sure you were seeing a true benefit.

    MCIDs were an issue we had to discuss at length in NICE, because we don't have any agreed-upon standards there that are specific to ME. We defaulted to 0.5 SD, but had I known about the studies EMEC mentioned, I'd have pushed for 1 SD for those outcomes with only a subjective correlate.
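    As a very rough sketch of what doubling the bar would mean in practice (all the numbers below are invented for illustration, not taken from any actual trial):

    ```python
    # Hypothetical numbers only: shows how an effect on a subjective outcome in an
    # unblinded trial can clear a 0.5 SD threshold yet fall short of a stricter
    # 1.0 SD threshold.

    baseline_sd = 4.0        # pooled SD of the outcome measure (hypothetical)
    mean_difference = 2.3    # between-group difference at follow-up (hypothetical)

    smd = mean_difference / baseline_sd   # standardised mean difference

    for label, threshold in [("0.5 SD (common default MCID)", 0.5),
                             ("1.0 SD (stricter bar for subjective outcomes in open trials)", 1.0)]:
        verdict = "clears" if smd >= threshold else "falls short of"
        print(f"SMD = {smd:.2f} {verdict} the {label} threshold")
    ```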

    This would be a worthwhile area for future research, because if we could categorically show what the MCID is, that would stop a "hint" of evidence being used to make recommendations, as it currently is.

    I wonder if there is perhaps also an argument to be made for acceptability and tolerability? If patients state that a treatment is unacceptable and intolerable (in either sense of the latter word), then is it cost effective or even practical to offer it?

    Cost effectiveness is also an argument to be made in any country with socialised healthcare. I'm not sure there's any German-specific info, but they could extrapolate from UK data the way they have for other aspects of the review.

    In the UK, the cost-effectiveness threshold is £20,000 per QALY (quality-adjusted life year). If treatments exceed that but are still under £30K, you can recommend them if they're exceptional or unique. At over £30K you need really, really compelling evidence to use them.

    GET was, IIRC, ~£24K per QALY. And that's assuming the effects shown were real and not due to various forms of bias. When you combine an "almost" (cost-effectiveness) with a "barely" (clinical effectiveness, assuming it's even real) you don't really get a compelling argument to justify its recommendation. Then you factor in harms and clinical experience, and it's a no-go.
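    To make the arithmetic behind a cost-per-QALY figure explicit (the cost and QALY-gain numbers below are invented, chosen only so the ratio lands near the ~£24K figure mentioned above):

    ```python
    # Illustrative only: invented incremental cost and QALY gain, showing how an
    # incremental cost-effectiveness ratio (ICER) is computed and compared with
    # the NICE reference range.

    incremental_cost = 1_200.0   # extra cost per patient vs comparator, GBP (hypothetical)
    qaly_gain = 0.05             # extra QALYs per patient vs comparator (hypothetical)

    icer = incremental_cost / qaly_gain
    print(f"ICER = £{icer:,.0f} per QALY")

    if icer <= 20_000:
        print("Below £20K/QALY: normally considered cost effective")
    elif icer <= 30_000:
        print("£20K-£30K/QALY: recommendable only with additional justification")
    else:
        print("Above £30K/QALY: needs exceptionally compelling evidence")
    ```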

    Did IQWiG do any cost effectiveness analysis? I haven't seen it in my (admittedly brief) read of the paper. Do they even do cost effectiveness analyses?

    I'm pleased EMEC also challenge the notion that GET delivered by trained professionals won't cause harm. The evidence doesn't support that assumption. "Experts" were just as likely to harm patients as any other profession, except GPs, who were a third or so more likely to cause harm.

    And when you factor in, as Nina Muirhead's paper did, that those who consider themselves "experts" actually know less (rigid, inflexible thinking...), it makes sense. They appear to have "unhelpful illness beliefs" about ME.

    At this stage, I'm genuinely unsure if "big" changes can be made to the review, but I think it's worth getting as much objection as possible recorded officially.

    The only way to get a big change made would be to demonstrate a major fault in process. Though minor faults could result in tweaks.

    There are examples of poor reviews being pulled when they were intolerable to patients and had flawed processes (e.g. the CDC one) -- so maybe we can aim for that if the review isn't likely to be changed. A delay would be less helpful but is better than nothing.
     
    Last edited: Nov 4, 2022
    Hutan, RedFox, alktipping and 10 others like this.
  3. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,947
    Location:
    betwixt and between
    They have a section on cost-effectiveness analyses in their methods paper, but I don't recall now whether there is also one in the report plan and draft report on ME/CFS.

    Does anyone feel up to checking?

    (See the members only thread for links to the relevant documents).

    Edit: Link fixed.
     
    Last edited: Nov 7, 2022
  4. petrichor

    petrichor Senior Member (Voting Rights)

    Messages:
    321
    This is the draft of comments I've written up. It got a bit long, so I put some of the less important or more duplicated points in an attached file (generally there's some overlap between these comments and those made by others). Some, or all, of these points could be added to the EMEC comment, added to another comment, or anyone can take any of these points and add them to their own submitted comment (I could also help with that). I can rewrite these comments as required in any way, and I can also find the matching German quotes in the quoted documents to make translation easier, as well as provide the relevant PDFs. The comments:

    In 2012 a systematic review conducted by Haywood et al. on the quality and acceptability of PROMS (Patient Reported Outcome Measures) in ME/CFS found that “The poor quality of reviewed PROMs combined with the failure to measure genuinely important patient outcomes suggests that high quality and relevant information about treatment effect is lacking”. In particular it did not identify any evidence for content validity for two of the main outcomes used in ME/CFS research – the Chalder Fatigue Questionnaire and the SF-36. [1] Content validity refers to the extent to which an instrument measures the concept of interest, and is assessed through qualitative studies (generally with patients) to determine if the instrument is an appropriate and comprehensive measure for the relevant concept, population and use. [2]

    On page 59 the IQWiG General Methods handbook states “instruments are required that are suitable for use in clinical trials”, and it refers to guidance of the United States Food and Drug Administration on Patient-Reported Outcomes. [3] In this guidance the FDA states that the adequacy of a PROM depends on its content validity, and that “Evidence of other types of validity (e.g., construct validity) or reliability (e.g., consistent scores) will not overcome problems with content validity because we evaluate instrument adequacy to measure the concept represented by the labeling claim.” [2] This guidance, endorsed by the General Methods handbook, indicates that if a PROM has no evidence for its content validity, it would not be adequate.

    In addition to not identifying any evidence for the content validity of the SF-36 and the CFQ, Haywood et al. did not find any evidence for validity of other outcomes used in the IQWiG report, such as the PEM survey (feeling ill after exertion), the Work and Social Adjustment Scale (WSAS – used for social participation), and the Clinical Global Impression Scale (CGI – used for general complaints). [1] For the WSAS I can identify one study that assessed construct validity in ME/CFS, [4] but no study assessing content validity. For the CGI and the PEM survey, I cannot find any study assessing validity in ME/CFS. For the SF-36 and CFQ, I cannot find any studies that assess content validity in ME/CFS. One study stated that it assessed content validity for the SF-36; however, it did not assess it as defined by the FDA. [5]

    If sufficient evidence exists for the content validity of these scales in ME/CFS, then this should be referred to in the IQWiG report, and the adequacy of these scales should be justified in accordance with the FDA guidance. The validity of the Sickness Impact Profile 8 was considered (page 134) – this should also be done with the other PROMS. Conclusions should not be made on the basis of PROMS without any evidence for content validity.

    In addition to the lack of evidence of validity, potential ceiling effects or floor effects have been detected for the CFQ and the SF-36. [1,5,6] This means that patients are scoring at or close to the maximum symptom severity value, which allows for the possibility that patients could experience decline without it being detected by the PROMS. The FDA guidance also notes that floor or ceiling effects should be avoided. [2]
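    For illustration only, this is what a simple ceiling-effect check looks like (the scores and the 0-33 scale are hypothetical; the more-than-15%-at-the-extreme rule of thumb is one common convention, not an FDA requirement):

    ```python
    # Hypothetical baseline scores on a 0-33 scale where higher = worse symptoms.
    # A common rule of thumb flags a ceiling effect when a substantial share of
    # respondents (often >15%) already sit at the extreme score, because further
    # deterioration then cannot be registered by the instrument.

    scores = [33, 31, 33, 28, 33, 33, 30, 33, 27, 33]   # invented data
    max_score = 33

    share_at_ceiling = sum(s == max_score for s in scores) / len(scores)
    print(f"{share_at_ceiling:.0%} of respondents are at the maximum score")
    if share_at_ceiling > 0.15:
        print("Possible ceiling effect: decline may go undetected")
    ```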

    Page 152 of the report states “If ME/CFS-specific adverse events, in particular PEM, had occurred more frequently due to the intervention, it can be assumed that this would also have had a negative effect on the observed results and that no advantages, for example in the endpoint fatigue, could have been identified.” This cannot be assumed, given the poor quality of the PROMS, possible ceiling/floor effects, lack of blinding, and relatively short follow-up duration. Furthermore, there was heterogeneity in the results; for instance, at 15 months GETSET found a statistically significant effect in favor of the control group over GET for fatigue (page 97).

    Page 58 of the General Methods also states “The size of the effect observed is an important decision criterion for the question as to whether an indication of a benefit of an intervention with regard to PROs can be inferred from open studies.” Page 164 also notes that consideration of risk of bias issues, such as a lack of blinding, goes beyond the purely formal assessment. The effect sizes observed in the report are modest. On page 174 of the General Methods, for instance, a difference of 15% of a scale’s range is identified as a plausible threshold for a small but noticeable change, and the mean differences and the lower bounds of the confidence intervals of the effects found fall below this. The modesty of these effect sizes should have been considered in deciding whether a conclusion about benefit could be drawn, especially in light of the questionable quality of the PROMS.
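    For illustration only, this is the kind of check that such a threshold implies (scale range, effect estimate and confidence interval below are invented, not values from the report):

    ```python
    # Hypothetical numbers only: is the lower bound of the 95% CI above a relevance
    # threshold set at 15% of the instrument's score range?

    scale_min, scale_max = 0, 33                  # hypothetical 0-33 point scale
    threshold = 0.15 * (scale_max - scale_min)    # 15% of the range = 4.95 points

    mean_difference = 3.2                         # between-group difference (hypothetical)
    ci_lower, ci_upper = 1.1, 5.3                 # 95% confidence interval (hypothetical)

    print(f"Relevance threshold: {threshold:.2f} points; CI lower bound: {ci_lower}")
    if ci_lower > threshold:
        print("Effect demonstrably above the relevance threshold")
    else:
        print("Effect not demonstrably above the relevance threshold")
    ```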

    Page 171 of the General Methods handbook states “The problem of multiplicity cannot be solved completely in systematic reviews, but should at least be considered in the interpretation of results”. There is no indication this was considered in this review. The number of outcomes considered in the report substantially raises the likelihood that some outcomes will appear to have clinically relevant effects. In the case of GET vs SMC (page 150), for instance, where over 10 different outcomes across three time periods were considered, two outcomes that show a clinically relevant effect may be a matter of chance.
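    A rough illustration of how quickly this problem grows, under the simplifying assumption of independent tests at alpha = 0.05:

    ```python
    # Simplified multiplicity illustration: probability of at least one false-positive
    # "finding" when many outcome/timepoint combinations are each tested at alpha = 0.05.
    # Assumes independent tests, which is a simplification.

    alpha = 0.05
    for n_tests in (1, 10, 30):     # e.g. 10+ outcomes across three time points
        p_any = 1 - (1 - alpha) ** n_tests
        print(f"{n_tests:>2} tests: P(at least one false positive) = {p_any:.2f}")
    ```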

    1. Haywood KL, Staniszewska S, Chapman S. Quality and acceptability of patient-reported outcome measures used in chronic fatigue syndrome/myalgic encephalomyelitis (CFS/ME): a systematic review. Qual Life Res. 2012;21(1):35-52. doi:10.1007/s11136-011-9921-8
    2. Patient-Reported Outcome Measures: Use in Medical Product Development to Support Labeling Claims. U.S. Food and Drug Administration; 2009. Accessed November 8, 2022. https://www.fda.gov/media/77832/download
    3. General Methods Version 6.1. IQWiG; 2022. Accessed November 8, 2022. https://www.iqwig.de/methoden/general-methods_version-6-1.pdf
    4. Cella M, Sharpe M, Chalder T. Measuring disability in patients with chronic fatigue syndrome: reliability and validity of the Work and Social Adjustment Scale. J Psychosom Res. 2011;71(3):124-128. doi:10.1016/j.jpsychores.2011.02.009
    5. Davenport TE, Stevens SR, Baroni K, Mark Van Ness J, Snell CR. Reliability and validity of Short Form 36 Version 2 to measure health perceptions in a sub-group of individuals with fatigue. Disabil Rehabil. 2011;33(25-26):2596-2604. doi:10.3109/09638288.2011.582925
    6. Morriss R, Wearden A, Mullis R. Exploring the validity of the Chalder Fatigue Scale in chronic fatigue syndrome. J Psychosom Res. 1998;45(5):411-417. doi:10.1016/S0022-3999(98)00022-1
     

    Attached Files:

  5. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,947
    Location:
    betwixt and between
    Looks very good to me at a first skim-glance -- looking forward to being able to read in detail.

    The attached pdf also looks very helpful, and I think the text deserves an additional post too, so it's easier to quote and refer to.

    That's very helpful, thanks @petrichor .

    On that occasion, a reminder that the deadline for submissions has been postponed to 27 November 2022.

    So if anyone feels up to working on their own submission or helping with others' submissions, there's still some time.


    If you need help or want to offer help, it's good to also check the members only thread for that.
     
    Last edited: Nov 10, 2022
    bobbler, Hutan, Dolphin and 4 others like this.
  6. petrichor

    petrichor Senior Member (Voting Rights)

    Messages:
    321
    As suggested by @MSEsperanza, I will also post the comments I put in the pdf here, so they're easier to view and refer to. They follow on from the comments I made in the post above:

    The General Methods handbook states on page 44 “In the event of particularly serious or even life-threatening diseases, for example, it is usually not sufficient only to demonstrate an improvement in quality of life by application of the intervention to be assessed, if at the same time it cannot be excluded with sufficient certainty that serious morbidity or even mortality are adversely affected to an extent no longer acceptable.” On page 28 the IQWiG report notes ME/CFS causes substantial functional impairment, including leaving patients bedridden or housebound. Given the seriousness of ME/CFS, limited evidence of validity of PROMS and potential ceiling/floor effects should preclude any conclusions about harms.

    Page 58 of the General Methods states “On the other hand, in the assessment of therapeutic interventions for chronic diseases, short-term studies are not usually suitable to achieve a complete benefit assessment of the intervention.” and “As both benefits and harms can be distributed differently over time, in long-term interventions the meaningful comparison of the benefits and harms of an intervention is only feasible with sufficient certainty if studies of sufficient duration are available.” ME/CFS is a very long-term condition - it often lasts for decades. Results only from the medium term, or especially only from the short term, should significantly weaken any conclusions about harm or benefit, especially given that deterioration from repeated PEM typically has a time delay. (Note: I just added the part in italics)

    Page 146 of the report states “Overall, it is possible that in the study the participation of patients without PEM, i.e. possibly without ME/CFS, could have caused the measured effect for the endpoint fatigue to exceed the relevance threshold at 52 weeks. On the basis of the sensitivity analyses, however, it can be seen that the effect of the PEM group must certainly lie in the statistically significant range.” As described on pages 171-172 of the General Methods handbook, clinical relevance cannot be derived only from statistical significance. Therefore this sensitivity analysis should have been grounds for reducing confidence in the conclusion about CBT.

    Page 151 of the report states, “In view of the lack of international standardisation of the recording of PEM (see section 4.2.2.3), the initial PEM survey in the studies was accepted as adequate despite the uncertainties in this report”. However on page 139 it notes the lack of specificity of the PEM survey in the PACE trial, indicating that it is inadequate for determining if a patient has PEM.
     
  7. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    4,001
    Location:
    Belgium
    I don't think EMEC will be able to incorporate them at this point, so I think it would be best to submit them as their own comment. As far as I can see, anyone can submit comments on the report.
     
  8. petrichor

    petrichor Senior Member (Voting Rights)

    Messages:
    321
    That's understandable. Unfortunately I don't know German, so I would struggle to submit my own comment, but it would be good if anyone could incorporate any of the points I made into their own submission.
     
  9. Tom Kindlon

    Tom Kindlon Senior Member (Voting Rights)

    Messages:
    2,254
  10. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,947
    Location:
    betwixt and between
    The German Association for ME/CFS posted a summary of their submission on their website and also published an open letter by scientists from the UK, Europe, the US, Australia and New Zealand.

    Link to the summary:
    https://www.mecfs.de/wie-zucker-zur-behandlung-von-diabetes-zu-empfehlen/

    Google translate:
    https://www-mecfs-de.translate.goog...l=auto&_x_tr_tl=en&_x_tr_hl=de&_x_tr_pto=wapp

    Link to the open letter:

    https://www.mecfs.de/offener-brief-an-das-iqwig/

    (scroll down for the English original)


    Edited to add info about the Open letter.
     
    Last edited: Nov 22, 2022
  11. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,659
    Location:
    Canada
    Institute recommendation could endanger thousands of patients
    German: https://www.berliner-zeitung.de/ges...tausende-mecfs-patienten-gefaehrden-li.287807
    English translation: https://www-berliner--zeitung-de.tr....287807?_x_tr_sl=auto&_x_tr_tl=en&_x_tr_hl=en

    The core symptom of the severe neuroimmunological disease ME/CFS is exercise intolerance. Nevertheless, the German institute IQWiG recommends activating therapies.

    (Paywalled)
    https://twitter.com/user/status/1594455484408406017
     
  12. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,947
    Location:
    betwixt and between
    From another thread on Long Covid -- Article on the website of two German Associations for Psychosomatic Medicine by Peter Henningsen

    (Professor of Psychosomatic Medicine, Director of the 'Klinik und Poliklinik für Psychosomatische Medizin und Psychotherapie', Technical University of Munich, Associate Editor of the Journal of Psychosomatic Research, and a history of co-working with Per Fink etc.)

    Edit: The German word 'Verfechter' also means 'advocate', but in this context I think it is more aptly translated as 'proponents'. So the quote above would rather read: "Since the strongly anti-psychosomatic and also anti-psychotherapeutic attitudes are traditionally particularly pronounced in the group of ME/CFS proponents...."


    I think this helps to understand a bit of the situation in Germany, where I think most people with ME/CFS are still being referred to psychosomatic physicians ('Psychosomatische Ärzte' or 'Psychosomatiker' is how they refer to themselves).

    I have experienced myself, and hear from others, that this mostly has the consequences I think most forum members in other countries are also familiar with -- doctors and psychologists twist anything we tell them so that it fits their illness model, doubt the actual limits the illness sets on our daily activities, and pathologize our attempts to extend and overcome those limits, our experiences with PEM, and our coping strategies.

    And now there's also evidence that one of the German high-profile psychosomatic physicians, and the national professional associations he is affiliated with, promote a biased view of ME/CFS patient organizations and advocates in order to dismiss our critique of the biases documented in their research as categorically 'anti-psych'.
     
    Last edited: Dec 12, 2022
  13. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,947
    Location:
    betwixt and between
    ME/CFS charity Fatigatio e.V. refuses to evaluate content-deficient IQWiG information texts

    Google translation [link]:

    11/20/2022

    On behalf of IQWiG, the Hannover Medical School is looking for people affected by ME/CFS to assess the comprehensibility of information texts on the clinical picture. The texts show serious deficiencies in terms of content with regard to activating therapy methods and prevalence. At this point in time, we firmly reject a comprehensibility assessment by those affected.

    ---

    Recently, our regional and self-help groups received a request from the Medical University of Hanover to evaluate the understandability of information texts on the clinical picture of ME/CFS from the perspective of those affected. The information texts are part of the preliminary report on the current state of knowledge on ME/CFS written by IQWiG (Institute for Quality and Efficiency in Health Care), which was published on October 13, 2022. The health information texts are divided into four sections: (1) an overview of ME/CFS, (2) information about symptoms and diagnosis, (3) treatment, and (4) support in living with ME/CFS.

    After a thorough examination of the information texts written by IQWiG, we decidedly rejected the request to assess the comprehensibility from the perspective of those affected. The information texts – like the rest of the IQWiG preliminary report – show considerable deficiencies in terms of content. Of the numerous problematic formulations and statements, the following two are particularly noteworthy:

    • " According to estimates based on international studies, around 1 in 1000 adults in Germany suffers from ME/CFS - i.e. around 70,000 people. There are no studies in this country .” (Overview, p. 3)

      This estimate does not take into account the considerable number of unreported cases. ME/CFS is largely unknown due to a lack of adequate training in medicine and the medical profession. It is not uncommon for those affected to need decades before they are diagnosed - important years in which misdiagnosis and damaging therapy methods lead to a deterioration in their health. Not taking into account the number of unreported cases reproduces the trivialization of the frequency of the disease and contributes to the fact that doctors withhold diagnoses from those affected due to false ideas about the prevalence. Internationally renowned ME/CFS experts instead assume a significantly higher prevalence of around 250,000 people affected, a number which will increase drastically as a result of the pandemic (see Renz-Polster/Scheibenbogen 2022).

    • " Options for dealing with the disease include: (...)

      Activation: Here the focus is on gradually and slowly increasing the physical stress under medical and physiotherapeutic care. It is important to adjust the stress so that the symptoms do not worsen.

      Cognitive behavioral therapy (CBT): This therapy should help to better deal with the mental stress caused by the disease. In this way, strategies are to be learned that help in dealing with the symptoms and with consequences such as depressive thoughts and fears. This can also include better assessing the disease and learning to dose activities correctly.

      Good studies have not shown a clear advantage for any of these (...) options. However, individual studies indicate that cognitive behavioral therapy and physical activation can help some people with mild to moderate ME/CFS to at least temporarily reduce certain symptoms. Treatments for people with severe ME/CFS are poorly understood." (Overview, p. 5)

    Context:
     
    Last edited: Nov 24, 2022
    bobbler, adambeyoncelowe and Hutan like this.
  14. Sean

    Sean Moderator Staff Member

    Messages:
    8,064
    Location:
    Australia
    Since the strongly anti-psychosomatic and also anti-psychotherapeutic attitudes are traditionally particularly pronounced in the group of ME/CFS advocates,

    Massive-straw-man-in-desperate-attempt-to-smear-critics alert.

    We are not anti-psych. We are anti-unscientific claims of any kind. Can't speak for other groups of advocates, but anybody who has followed this forum for any length of time knows how hard we can be on substandard physiological claims too.

    I don't care where the science goes on ME. As long as it is methodologically robust. Which the current version of psych 'science' is most certainly not.

    That is the fundamental problem the psychs are doing everything they can to avoid confronting and fixing.

    None of this debate changes until they do that.

    The solution to their woes lies in their hands, not in the hands of their long suffering patients.
     
  15. Pustekuchen

    Pustekuchen Established Member (Voting Rights)

    Messages:
    31
  16. FMMM1

    FMMM1 Senior Member (Voting Rights)

    Messages:
    2,812
    Reply to the IQWiG report, by Prof. Matthias Kohl, a professional statistician (DeepL translation):

    "Benefit assessment of CBT and GET (Section 5).
    The three studies (Janse 2018, PACE, GETSET) used in Section 5 of the preliminary report for the benefit assessment of CBT and GET carry a high risk of bias, as also noted in Table 23 of the preliminary report. There are in fact a number of other methodological objections that can be and have been raised against these studies (see, for example, [1,2]). Overall, these studies can therefore only be said to be of poor methodological quality, and it must be assumed not only that they are at high risk of bias per se, but also that this bias could be quite large [3].

    As the description in Section A2.3.3.3 of the preliminary report also makes clear, great caution must be exercised in a meta-analysis with only a few studies. In particular, in the case of only two studies, the description suggests that it is difficult, if not impossible, to provide a reliable confidence interval for the estimated effect. For example, confidence intervals of the preferred method of Knapp and Hartung "are usually not presented in the case of 2 studies" (p. 178). This statement is generally true, i.e., even if the two underlying studies were of high methodological quality. However, in the present case, the complicating factor is that the three studies, each of which is compared in pairs (Janse 2018 and PACE, or GETSET and PACE), are of poor methodological quality.

    Thus, in the present case, one must ask the much more fundamental question of whether these meta-analyses, based on only two studies of poor methodological quality, should have been performed at all. In this context, one also speaks of "garbage in, garbage out", a long-known problem in the context of meta-analyses [4]. Egger et al. write: "if the 'raw material' is flawed, the findings of reviews of this material may also be compromised" [5]. The results of a meta-analysis can only be as reliable as the studies on which it is based [6]. Systematic bias in individual studies cannot be compensated for by a meta-analysis. In general, statistical methods should never be assumed to produce adequate results from inadequate data. On the contrary, there is even a risk that statistical analyses lend poor data an air of objectivity and thus lead to wrong conclusions [7]. These statements apply not only to the meta-analyses mentioned, but also to the sensitivity analyses described in Section 5.6.15 of the preliminary report. Furthermore, the question arises why neither a highly probable overestimation of the effect nor a relevance threshold was considered in the calculations for the scenarios presented there. This would change the results in a relevant way and would mean that the necessary effects in the subpopulation without PEM would have to be far less extreme.

    In summary, I can only strongly recommend IQWiG to discard the results of these meta- and sensitivity analyses as unreliable. Under no circumstances should IQWiG derive recommendations from results that are highly likely to be biased, neither in the current case nor in the future."
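    To illustrate the two-study point numerically, here is a self-contained sketch (the effect estimates and variances are invented, not the report's data): with k = 2 studies the Knapp-Hartung interval uses a t distribution with one degree of freedom, whose 97.5% quantile is about 12.7, so the confidence interval becomes enormous.

    ```python
    import math

    # Hypothetical standardised mean differences and their variances for two studies.
    effects = [0.30, 0.55]
    variances = [0.02, 0.03]
    k = len(effects)

    # DerSimonian-Laird estimate of the between-study variance (tau^2).
    w_fixed = [1 / v for v in variances]
    mu_fixed = sum(w * y for w, y in zip(w_fixed, effects)) / sum(w_fixed)
    q_stat = sum(w * (y - mu_fixed) ** 2 for w, y in zip(w_fixed, effects))
    c = sum(w_fixed) - sum(w ** 2 for w in w_fixed) / sum(w_fixed)
    tau2 = max(0.0, (q_stat - (k - 1)) / c)

    # Random-effects pooled estimate.
    w_re = [1 / (v + tau2) for v in variances]
    mu_re = sum(w * y for w, y in zip(w_re, effects)) / sum(w_re)

    # Knapp-Hartung standard error; the CI uses a t distribution with k - 1 = 1 df.
    se_hk = math.sqrt(sum(w * (y - mu_re) ** 2 for w, y in zip(w_re, effects))
                      / ((k - 1) * sum(w_re)))
    t_crit = 12.706  # 97.5% quantile of the t distribution with 1 degree of freedom
    print(f"Pooled effect {mu_re:.2f}, 95% CI {mu_re - t_crit * se_hk:.2f} "
          f"to {mu_re + t_crit * se_hk:.2f}")
    ```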

    @Caroline Struthers
     
  17. MSEsperanza

    MSEsperanza Senior Member (Voting Rights)

    Messages:
    2,947
    Location:
    betwixt and between
    Thanks for posting, @FMMM1. Where did you find that?
     
    adambeyoncelowe, Sean and alktipping like this.
  18. adambeyoncelowe

    adambeyoncelowe Senior Member (Voting Rights)

    Messages:
    2,736
    Thanks everyone for your excellent comments and input into the process! It looks like IQWiG faces some robust responses.

    Also, they're showing their hand.

    They claim we don't want an "all in the mind" hypothesis, but don't they understand what their own words mean?

    "All in the mind" doesn't mean "psychiatric", it means "imaginary, invented by the mind". I.e., psychosomatic.

    They're always pretending that they don't think ME is psychosomatic, just that it's about the mind and body. Then they criticise us for not being open to mind and body.

    Here they're basically admitting that is their model after all.
     
  19. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,734
    I haven't had the brain to put this together well as a whole concept but found the following paper:
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4688419/

    It is on the issue of drop-outs, and when they cause bias versus when they don't. The whole thing is reasonably good as a read on this topic, but I've quoted from the discussion:

    My bolding, as it clearly underlines that if you have high drop-out there is no guarantee that whatever is causing it isn't something in the treatment arm, unless you check and prove this. PACE used comparisons between different treatment arms, but there is good reason to say that SMC (with literature telling people they were missing out), CBT (which gaslighted people in certain ways), and GET (Krypton Factor-style physical endurance for a condition with PEM, meaning those who actually met the criteria IQWiG should only be interested in would be more affected) would each have had different issues at play for their sample.

    I do not understand how, with the same hand, IQWiG can say they will include PACE and other trials while at the same time not including the patient surveys of harm, which would be highly relevant to analysing whether the drop-out rates (plus the fact that only 80% had PEM, and not PEM consistently defined in the way others might define it) meant that the 'end sample' they analysed was not representative of the population they claim. I.e., you can't have one without the other in my book, if you read up on this type of issue and what the normal protocol should have been/be.

    Edit: because any claimed explanation for the drop-outs -- their reasons and who they were -- really does have light shed on it by the patient surveys of harm, and I do not know how a retrospective review like this can avoid a sensitivity analysis and not consider such data highly explanatory for this particular weakness of these trials.
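    To show the kind of sensitivity analysis I mean, a very crude sketch with invented numbers (not PACE data): the apparent treatment effect shifts substantially depending on what you assume about the people who dropped out.

    ```python
    # Rough best/worst-case sketch for drop-out, with entirely invented numbers.
    # Completers improve on average in the treatment arm; the question is how much
    # the conclusion depends on what is assumed about those who dropped out.

    n_randomised = 100          # per arm (hypothetical)
    n_dropouts_treatment = 20   # hypothetical drop-out count in the treatment arm
    mean_change_completers_tx = -2.0   # improvement on a fatigue scale (hypothetical)
    mean_change_control = -0.5         # control arm change (hypothetical)

    for label, assumed_change_dropouts in [("drop-outs same as completers", -2.0),
                                           ("drop-outs unchanged", 0.0),
                                           ("drop-outs deteriorated", +2.0)]:
        n_completers = n_randomised - n_dropouts_treatment
        mean_tx = (n_completers * mean_change_completers_tx
                   + n_dropouts_treatment * assumed_change_dropouts) / n_randomised
        print(f"{label}: treatment effect vs control = {mean_tx - mean_change_control:+.2f}")
    ```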
     
    Last edited: Dec 4, 2022
    Pustekuchen, RedFox and alktipping like this.
  20. Sean

    Sean Moderator Staff Member

    Messages:
    8,064
    Location:
    Australia
    The failure of the PACE authors to do a sensitivity analysis for their changes to the outcomes is one of their worst. Doing it could very well have exposed the fundamental problems much more starkly. Which, presumably, is why they didn't do it (or at least didn't publish it if they did).
     
