A new consensus? - ME/CFS skeptic blog

Discussion in 'ME/CFS research' started by ME/CFS Skeptic, May 17, 2023.

  1. petrichor

    petrichor Senior Member (Voting Rights)

    Messages:
    321
    I get the impression that the distinction between "supportive CBT" and "pace-style CBT" may not be as clear as it seems. Both of them would still operate under a similar framework (changing cognitions, behaviours, and coping skills) and goals (doing more of certain things or dealing with certain difficulties). I think it's likely there would be quite a bit of overlap in their elements and the manner they're applied in practice.

    I think there's a lot of bad research in medicine beyond ME/CFS, from what I've seen. In some regards it may be better here because of the increased scrutiny.
     
    MSEsperanza, Dolphin, obeat and 8 others like this.
  2. Hutan

    Hutan Moderator Staff Member

    Messages:
    29,384
    Location:
    Aotearoa New Zealand
I heard the other day that doctors who want to be accepted into a college to train to be a specialist are often required to undertake some research. So, some medical research is undertaken by people who may or may not be very interested in research, and may or may not care a great deal about finding a useful answer. They mostly need the study to fit into the 6 months that they have for the research and for it to tick the 'has done some research' box for the college. So, that means that there is often little interest in time-consuming consultation with relevant consumers.

    That's all very nice for the doctor, but it may be bad luck if you have the condition that the, for example, aspiring orthopaedic surgeon just banged out a C-grade study on.

    For sure, on the bad research not being confined to ME/CFS. I've seen some horrendous studies on schizophrenia in particular; I think people with schizophrenia have an even harder time than we do with stigma, and it's such a terribly difficult disease for the person and their supporters, with the result that there may not be much scrutiny.

For sure, it's something I've been thinking about lately. Any assessment of CBT for ME/CFS needs to look very closely at what the CBT was aiming to do. The phrase used recently, I think by Kuut?, 'activating CBT', could be a useful term: that is, CBT that aims to make the patient more active, as opposed to helping them cope better with the losses and frustration, or become better able to reduce activity to a level that doesn't cause PEM.
     
    MSEsperanza, Ariel, Dolphin and 10 others like this.
  3. Hutan

    Hutan Moderator Staff Member

    Messages:
    29,384
    Location:
    Aotearoa New Zealand
    I've recently done a course on systematic reviews that had a short section on GRADE. Now, I may have things wrong, but I did not find GRADE to be a very bad system. In a systematic review at least, GRADE is applied to the collected evidence for an outcome, not to individual studies.

    First off, you decide if a study is so egregiously flawed that the results must be rubbish e.g. it looked at a population that is mostly different to your target population, or there is overt evidence of fraud. In that case, you don't include it in the systematic review at all. I don't believe unblinded studies with only subjective outcomes fit in that category, because if all the patients were suddenly rising up from their beds and reporting 'cured' levels of SF-36, and 6 months later 90% were still in that 'healthy' range, then the study could well be indicating that the treatment is useful. If there were those sort of results in a decent sized unblinded study of people with well-defined ME/CFS, I'd be thinking hard about trying the treatment, even if the outcomes were subjective. So, I think the GET and CBT studies mostly qualify to be in reviews.

Then, my understanding is that an outcome from RCTs starts at 'high' certainty and is assessed against five criteria. Risk of bias is one of the five. For each criterion, you can downgrade the particular RCT outcome by one or two levels. So, I think it's clear that the CBT and GET outcomes drop by two categories to low certainty due to a high risk of bias. The other criteria almost certainly add some further downgrades: for indirectness, for inconsistency (different results from different studies), for imprecision, and for publication bias.
    On indirectness, most BPS studies would be downgraded for ME/CFS outcomes due to poor selection criteria.
    Outcomes could be downgraded for inconsistency if the few studies with objective criteria did not find an objective increase in activity, while the subjective reporting did, for example.
    For imprecision, if there is a risk that important harms were not considered (as would be the case in ME/CFS studies that did not track activity for a long enough period when there are hints of substantial harm coming from non-trial sources), then the outcome would be downgraded.
    Publication bias is where there is some evidence that studies that don't show a favourable effect of the intervention are not getting published. There are several ways to work that out, including looking at trial registers.

And then there's more. Having got the outcome, with its 'Very Low' certainty rating, you can make some comment about the size of the benefit. Even if all the studies are consistently reporting a benefit, if the benefit is small, within the range that would often be seen with ineffective but hyped treatments in unblinded situations, then you can report that there is a very low certainty that the reported small benefit is real.
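The downgrading arithmetic described above can be sketched in a few lines. This is a toy illustration only: the five domain names follow GRADE, but the simple tally and the floor at 'very low' are my simplification of what is in practice a judgement-based process.

```python
# Toy sketch of GRADE-style certainty downgrading (illustrative only).
# Evidence from RCTs starts at "high"; each domain can downgrade it
# by one or two levels, with a floor at "very low".
RATINGS = ["very low", "low", "moderate", "high"]

def grade_certainty(downgrades: dict[str, int]) -> str:
    """downgrades maps each GRADE domain to 0, 1, or 2 levels removed."""
    domains = {"risk_of_bias", "indirectness", "inconsistency",
               "imprecision", "publication_bias"}
    assert set(downgrades) <= domains, "unknown domain name"
    level = len(RATINGS) - 1  # start at "high" for RCT evidence
    for _domain, levels_removed in downgrades.items():
        level -= levels_removed
    return RATINGS[max(level, 0)]  # can't go below "very low"

# Example: unblinded trials with subjective outcomes (risk of bias -2)
# plus poor selection criteria (indirectness -1):
print(grade_certainty({"risk_of_bias": 2, "indirectness": 1}))  # very low
```

Under this tally, a high risk of bias alone already takes RCT evidence down to 'low', and any second downgrade lands it at 'very low', which matches the reasoning above about the CBT and GET outcomes.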

    As with many tools, it's not so much the quality of the tool, as the quality of the person using it. Yes, to a certain extent, the GRADE outcome is subject to the prejudices of the person using it. What GRADE does is provide a framework for analysis and a requirement that assumptions that have been made are reported and made explicit. I think it's ok.

    Here's a link to the handbook.
    (And edited to add publication bias, which I had forgotten about.)
     
    Last edited: May 18, 2023
    Ariel, Arvo, Dolphin and 9 others like this.
  4. Hutan

    Hutan Moderator Staff Member

    Messages:
    29,384
    Location:
    Aotearoa New Zealand
    Just on that finding that pacing has no effect compared to usual care:
    I agree that most people with ME/CFS learn fairly swiftly to reduce their activity in order to avoid post-exertional malaise. I think that within a year most people won't be trying to do the same amount of activity that they were doing before they became ill. They will be consciously reducing what they do, perhaps cutting out their weeknight social football game, or only having two showers a week, or changing to part-time work, or moving home to be looked after by Mum and Dad. So, I contend that maybe a year into the illness, most people have worked out how to pace in some way. Imperfectly, of course, but well enough that they are surviving.

That means that they are already pacing. Trying to then apply a treatment of "pacing" to people who may be years into their illness, probably often offered by therapists who really don't know what they are talking about, and who have got the message that pacing is some mistaken belief that patients have and/or that pacing should actually be GET-lite. Well, it's not surprising that the treatment shows no benefit.
     
    Last edited: May 18, 2023
    Dolphin, obeat, Lou B Lou and 15 others like this.
  5. Sean

    Sean Moderator Staff Member

    Messages:
    8,072
    Location:
    Australia
    Thanks for this, @ME/CFS Skeptic.

    One point:
    Don't know about the Norwegian trial, but in PACE the version they used was not pacing as generally understood and used by the patient community. It isn't a useful result in either direction.

    I think we can. Any further indulgence in this kind of research is just a waste of funds and patients' lives. I think the question has been answered. It's a bust. Time to move on.
    ...then it should also be demonstrable via blinded and/or objective measures.

    I find the whole notion that self-reported improvement in general activity capacity can exist without a clear result on more robust measures deeply problematic. All that leads to is the circularity of teaching to the test.

    The whole history of ME/CFS makes it clear why that approach is just not safe.
     
    Dolphin, Michelle, alktipping and 8 others like this.
  6. Simon M

    Simon M Senior Member (Voting Rights)

    Messages:
    996
    Location:
    UK
Thank you for a brilliant and very helpful analysis, @ME/CFS Skeptic. I've tried to summarise it for my own benefit, and would appreciate your view on whether I've got it right.

    "One fatal flaw is okay"
Rigid systems for assessing evidence quality don't allow reviewers to simply fail a study for being fatally flawed (in this case, because of subjective outcomes in non-blinded trials). (Worse, the PACE trial showed no meaningful objective benefits – and objective measures are valid in non-blinded trials.)

    So CBT and GET are not summarily dismissed as useful treatments and remain on the table.

    The Belgian compromise
Instead, what we have is expert consensus that bears very little relation to the published evidence, as below:

    Graded exercise therapy
The evidence for CBT and GET is the same, yet GET was ruled out as a treatment for additional, rather suspect reasons, the biggest of which is the controversy around its effectiveness and harms.

    Cognitive behaviour therapy
In contrast, CBT was seen as an acceptable treatment, so long as it is used to cope with the psychological impact of the illness, rather than as a curative treatment. Even though there is no published evidence that such supportive CBT is helpful.

    Pacing
Meanwhile, pacing is also promoted, despite there being no published research evidence that it helps to reduce symptoms or manage the illness. (@Jonathan Edwards pointed out after the NICE review that we need an evidence base about pacing, in case the prevailing political winds change direction.)

    As you note, this is a strange situation.
     
    Last edited: May 19, 2023
    MEMarge, Arvo, Dolphin and 12 others like this.
  7. Simon M

    Simon M Senior Member (Voting Rights)

    Messages:
    996
    Location:
    UK
It’s certainly true of migraine research, at least until the recent large and very well done trials of monoclonal antibodies. But I’ve looked at other illnesses for various reasons, and have been struck by how commonly the research is badly designed, underpowered, and biased.
     
    Last edited: May 22, 2023
    MSEsperanza, Ariel, MEMarge and 11 others like this.
  8. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    4,006
    Location:
    Belgium
    Yes sounds like a good summary, thanks.
     
  9. Trish

    Trish Moderator Staff Member

    Messages:
    55,446
    Location:
    UK
    I've moved my post discussing this to a new thread:
    Research on pacing as treatment for ME/CFS. Discussion of how to do it.
     
  10. Hutan

    Hutan Moderator Staff Member

    Messages:
    29,384
    Location:
    Aotearoa New Zealand
Sure, you wouldn't want to stop collecting evidence if you just have unblinded trials with subjective evidence. I'm just making the point that if, for example, you have multiple trials like that, with that being the only flaw, and they all showed that the majority of people were reporting lasting and substantial benefits, then they would constitute evidence. It would be low or very low quality evidence, probably not strong enough to support using the treatment outside of trials, but you wouldn't want to ignore the finding. The question is 'is the flaw major enough to exclude the trials from a systematic review' - and I don't think it is, because we can later make some assessment about the reported benefits (or lack of them).

While @ME/CFS Skeptic's writing is as clear and brilliant as usual, I disagree that this is an accurate representation of how GRADE works. If assessors decide that an evaluation of an outcome in a trial really is fatally flawed, then the trial should not be included in the assessment of the evidence for the outcome. But an unblinded trial with subjective outcomes is not necessarily fatally flawed. It can tell us something. It might tell us that there is no effect, even with all the biases stacked in favour of the intervention. It might tell us that an effect was reported, but it probably falls within the range of a placebo effect. Or it might tell us that there was an amazing result that exceeds the likely placebo response, and that this intervention shows real promise and should be investigated further, that it is worthwhile doing more expensive and difficult trials that are blinded and/or have objective outcomes.

    I don't think GRADE is particularly rigid. The handbook itself says that it is just a transparent system for making assumptions explicit. GRADE does not make it impossible to do useful, rigorous systematic reviews. In fact, I think it is helpful.
     
    Last edited: May 18, 2023
  11. Simon M

    Simon M Senior Member (Voting Rights)

    Messages:
    996
    Location:
    UK
Arguably, it’s an edge case in science more generally. This is an interesting tweet from Brian Nosek, one of the leaders of the open science movement, which is celebrating its 10th anniversary. Basically, he says that the trustworthiness and replicability of research findings is poor across many disciplines. He says the problem is that the current system is broken, favouring neat, tidy results that look good.

    The whole reward system is broken and something new is needed.

    This one tweet summarises this in a couple of graphics, and is part of a much longer thread.
    https://twitter.com/user/status/1661757186790371328
     
    Ariel, MEMarge, Dolphin and 8 others like this.
  12. Peter Trewhitt

    Peter Trewhitt Senior Member (Voting Rights)

    Messages:
    4,081
I would add another point, which is psychiatry and clinical psychology’s reliance on questionnaires to define a number of psychological/psychiatric conditions, while failing to recognise that many/most of these questionnaires confound psychological issues and physical ill health. This means much of psychological medicine struggles to distinguish physical and psychological issues.
     
    Ariel, MEMarge, Dolphin and 19 others like this.
  13. Dolphin

    Dolphin Senior Member (Voting Rights)

    Messages:
    5,801
I haven’t been reading much in-depth in recent years, but I previously felt that fatigue should be broken down into different elements more. I suppose the (generally very flawed) Chalder fatigue scale and, as I recall, the MFI questionnaire do this to an extent, but my recollection is that when they and other fatigue questionnaires are used in interventional trials, only a composite score tends to be reported.
     
    Kitty, Simon M, Hutan and 2 others like this.
  14. Dolphin

    Dolphin Senior Member (Voting Rights)

    Messages:
    5,801
    There are probably some things in medicine that at this time are not that amenable to measurement using objective measures e.g. a lot of types of pain. It would thus be hard to exclude subjective measures completely as not all interventions can be blinded.

    As I have mentioned before, when ME/CFS is combined with other conditions e.g. IBS, it is harder to argue objective activity measurement and the like are the best type of outcome measure.
     
    Kitty, Simon M, RedFox and 2 others like this.
  15. Dolphin

    Dolphin Senior Member (Voting Rights)

    Messages:
    5,801
This paper would include some other studies that show some evidence for pacing interventions, though Wallman’s paced exercise study is included.


    Pacing as a strategy to improve energy management in myalgic encephalomyelitis/chronic fatigue syndrome: a consensus document
    Ellen M Goudsmit et al. Disabil Rehabil. 2012
    https://pubmed.ncbi.nlm.nih.gov/22181560/
     
    Kitty, Simon M, oldtimer and 3 others like this.
  16. Simon M

    Simon M Senior Member (Voting Rights)

    Messages:
    996
    Location:
    UK
    This is a conundrum. Response bias does not go away if subjective measures are used in unblinded trials, e.g. for pain treatments.

    One option might be to set a higher threshold for success, e.g. a 0.75 SD gain instead of 0.5. And it might still be worth checking that the subjective gains (even if retained as a primary measure) are backed up by secondary objective ones - which could be other measures apart from activity such as hours worked/at school.
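The 'SD gain' being discussed here is a standardized mean difference (Cohen's d with a pooled standard deviation). A minimal sketch of checking a trial result against a raised threshold, with made-up numbers purely for illustration:

```python
import statistics

def cohens_d(treatment: list[float], control: list[float]) -> float:
    """Standardized mean difference (Cohen's d) using the pooled SD."""
    n1, n2 = len(treatment), len(control)
    s1 = statistics.variance(treatment)  # sample variances
    s2 = statistics.variance(control)
    pooled_sd = (((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

# Hypothetical symptom-score improvements, invented for illustration:
treatment = [1.0, 2.0, 3.0, 4.0, 5.0]
control = [0.0, 1.0, 2.0, 3.0, 4.0]
d = cohens_d(treatment, control)  # ~0.63
print(d >= 0.5)   # True: clears the conventional 0.5 SD bar
print(d >= 0.75)  # False: fails a stricter 0.75 SD bar
```

The point of the stricter bar is visible in the example: an effect of about 0.63 SD would count as a success at 0.5 but not at 0.75, which is the kind of margin that reporting bias in an unblinded trial can easily produce.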

    Ultimately, patients need robust evidence about what works. How to get that evidence in some scenarios requires a lot of thought (in ME, we have just had intransigence).
     
    Last edited: Jul 26, 2023
    FMMM1, Missense, lycaena and 10 others like this.
  17. Hutan

    Hutan Moderator Staff Member

    Messages:
    29,384
    Location:
    Aotearoa New Zealand
As Simon says, I think it's still an important principle that there is not a reliance only on subjective outcomes in unblinded studies. When pain is bad, people can't keep functioning as normal, not on any regular basis. Secondary objective outcomes could include:
time spent being sedentary or lying down;
medication taken to control pain;
alterations in gait;
results of cognitive tests.

These things would obviously need to be tracked over a sufficiently long period of time. If trials weren't allowed to rely solely on patient-reported outcomes, I bet reasonable objective outcomes would be found.
     
    Missense, lycaena, bobbler and 11 others like this.
  18. Arnie Pye

    Arnie Pye Senior Member (Voting Rights)

    Messages:
    6,422
    Location:
    UK
    I think the naming of Directive CBT and Supportive CBT as both "CBT" is a problem. When I first read about CBT being used in the PACE trial I didn't realise that CBT came in different flavours and assumed it was the supportive kind.

I wonder how many people in the general healthy population are aware that CBT isn't always the same. The same is true for people with illnesses like cancer - they might be offered the supportive CBT and not realise that the other kind even exists. I think the directive CBT should be given its own name, but I don't have any suggestions as to what that name might be.
     
    Missense, Ariel, Kitty and 10 others like this.
  19. Sean

    Sean Moderator Staff Member

    Messages:
    8,072
    Location:
    Australia
Subjective self-report measures are particularly problematic when the treatment is aimed at changing cognition/perception. It is just ripe for all sorts of bias and confounding. Worst case scenario, indeed. It actually makes blinding, or additional less subjective measures, more important, I think.

Need to distinguish between two different levels of causal knowledge: causal relationships, and causal mechanisms.

It is possible to demonstrate a causal relationship (A causes B, at least probabilistically), without knowing the process by which that occurs.

    Ideally, of course, you want to know both. But you have to be able to demonstrate at least the first to claim useful science-based knowledge.

Which is the problem with using unblinded subjective measures on their own. They don't allow us to say even that much. Causation has not been revealed and clarified, so we don't know if any effect is due to the treatment, or response bias, or something else. All we can say is that there is an effect, without knowing its causal nature, nor, hence, whether it is of actual practical benefit.

    Which leads to this sort of guff.
     
    Missense, Hutan, Kitty and 6 others like this.
  20. Kitty

    Kitty Senior Member (Voting Rights)

    Messages:
    6,811
    Location:
    UK
I agree about not using these measures on their own, but the reliability of subjective outcomes could be improved by, as @Hutan says, following trial participants for long enough. Outcomes at 18 to 24 months ought to be much more reliable than at two or three; if a treatment has improved symptoms, patients would be expected to report better quality of life and being able consistently to do more.

    It doesn't seem standard practice to follow cohorts for this long, except in diseases that are usually terminal if not treated. But with a condition that (a) can fluctuate naturally in some people and (b) has no biomarker or readily measurable signs, 12 months has to be the absolute minimum and probably still isn't long enough to be considered gold standard.
     