Graded activity is an important component in CBT to reduce severe fatigue: cancer survivors (Knoop et al., 2019)

Discussion in 'Other psychosomatic news and research' started by Dolphin, Sep 21, 2019.

  1. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    14,082
    Location:
    London, UK
    So maybe they have ME? Or maybe they have prolonged fatigue of another sort that can be triggered by lots of things. The authors here treat the problem as 'post-cancer fatigue' because they think it is all about the psychological state of having had cancer. Is this a real category? Does anybody have the faintest idea?
     
  2. chrisb

    chrisb Senior Member (Voting Rights)

    Messages:
    4,602
    Could the problem be that "post-cancer fatigue" is used in the same way that "post-viral fatigue" came to be used? They had cancer. Now they have fatigue. No causation is implied, merely correlation?

    I was unhappy about that in the case of post-viral fatigue too.
     
  3. Hoopoe

    Hoopoe Senior Member (Voting Rights)

    Messages:
    5,276
    Maybe this is similar to the Havana syndrome where exposure to toxic chemicals causes brain damage in some people that are predisposed to it.

    In more classical ME, the damage could come from the initial infection and immune response.
     
    alktipping likes this.
  4. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    14,082
    Location:
    London, UK
    My impression is that post-viral fatigue is quite well documented as a causal relation - by things like the decay curve for fatigue in the Dubbo study.

    What I see as more problematic here is that if you have a specialist unit set up to take referrals for 'post-cancer fatigue' they will get everyone with fatigue who has had cancer, and especially the people whose doctors think they are a bit, you know, psychological. It is like the situation with hypermobility and fatigue - and of course it is the same people. It was Knoop who first suggested that people with hypermobility commonly had generalised fatigue. These people seem to be disease inventors. Their Achilles heel is that they want to shoehorn everything into a psychological explanation, which at times leads to ridiculous contradictions - as here.
     
    Mij, ukxmrv, ladycatlover and 11 others like this.
  5. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    3,723
    Location:
    Belgium
    These authors have admitted that a lot of CFS patients have relatively normal activity levels. So I suspect their theory is that active CFS patients have a perception problem, namely thinking that their symptoms are due to an as-yet-undetected but chronic, neurological disease called ME that is preventing them from meeting their unrealistic, perfectionistic expectations. Characteristic of ME is that symptoms get worse after exercise.

    The treatment consists of changing those expectations and demonstrating that increases in physical activity don't lead to a worsening of symptoms.

    So by analogy, patients are terribly afraid of spiders (they think their symptoms indicate they have ME) even if they don't avoid them (their activity pattern is relatively normal), and by brief exposure to spiders (graded exercise), the treatment shows that they shouldn't be afraid and that thinking spiders are monsters (symptoms = ME) is unhelpful.

    Some patients do avoid spiders (the 25% 'passive' patients) and therefore their treatment should focus more on exposure and behavioural aspects to stop the fear-avoidance.

    That's the most sense I can make of it.
     
    Mij, MEMarge, ukxmrv and 8 others like this.
  6. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    14,082
    Location:
    London, UK
    Excellent, more sense than I can!!
     
    MEMarge, ukxmrv, alktipping and 2 others like this.
  7. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    12,998
    Location:
    Canada
    So the obvious conclusion is that fatigue-focused psychometric questionnaires are unreliable as a measure of fatigue. There is absolutely no reason why a reduction in fatigue should not lead to an increase in activity, especially in a population biased towards it by being younger than typical. Unless other symptoms like pain are in the way, but then the idea is the same for those, so that's a moot point.

    But Knoop et al can't accept that the questionnaires do not measure reality and instead speculate on imaginary reasons to explain the failure of the treatment. I don't even understand why so much speculation is allowed in a pragmatic trial. Either it works or it doesn't; a pragmatic trial is not the place for speculation, especially of the kind "we feel that it works and as such reject objective findings proving us wrong".

    And as with PACE, the whole thing was excessively biased towards producing a positive result, and it still failed exactly the same way it always does. It doesn't help that, because of the weird redefinitions of fatigue commonly used in psychosocial research, it's not really clear what the participants were actually suffering from, since in this context fatigue basically means any and all symptoms that do not have a clear and reliable biomarker.

    At best we can conclude that CBT is an effective tool to modify responses on psychometric questionnaires. Which is about as useful as Scientology's thetan-measuring machine. It measures something, just not something useful. At least this is becoming so consistent it can't be ignored anymore: the discrepancy between objective outcomes and psychometric evaluations cannot be explained any other way than that the questionnaires are a lousy way to measure "fatigue", whatever is actually meant by that in any given context.

    In a sense this is the same result as PACE, a failure to show any objective improvement, but with the opposite conclusion. The incompetence, it burns.
     
    MEMarge, alktipping, shak8 and 5 others like this.
  8. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    14,082
    Location:
    London, UK
    A pragmatic trial does not even tell you if it works - that is the definition of a pragmatic trial - it cannot tell you that. Goodness knows what use it is.
     
  9. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,386
  10. Snow Leopard

    Snow Leopard Senior Member (Voting Rights)

    Messages:
    3,858
    Location:
    Australia
    There were substantial drop-out rates; it is hard to say if this biased the results.

    I don't understand why they didn't include the T2 results in Table 2. Why would you only publish half the results of a crossover trial? It suggests to me that something is not quite right. If physical activity mediated a benefit, it should be shown in both arms...
     
    ukxmrv, alktipping, Annamaria and 6 others like this.
  11. Hutan

    Hutan Moderator Staff Member

    Messages:
    27,980
    Location:
    Aotearoa New Zealand
    Yeah.

    In the supplemental information, there's this chart. All a bit useless without controls.
    [attached chart from the supplemental information]

    So the patients who really don't like the treatment, or don't think it's much use, drop out, and more agreeable patients who are willing to say nice things on a questionnaire are recruited in their place. No bias at all, then...
     
  12. alex3619

    alex3619 Senior Member (Voting Rights)

    Messages:
    2,200
    To me this is the core of the problem. Lots of diagnoses in psychiatry have no objective basis. We know something is wrong, but we have no idea if the pigeonholes we want to plonk patients into have any basis in reality. They are all symptom clusters. Sadly CFS is in the same boat, and ME as well, though I think ME is in transition beyond that, especially given the replication of brain and exercise findings. ME research and diagnosis can get out of this by developing a reliable diagnostic test, though multiple tests would be preferable. All of these kinds of syndromes, including even things like depression, need such tests developed for them. It should be a priority, because not knowing which group you are treating or investigating is a major impediment to quality research and care.

    I do understand that the brain is the last great medical mystery, though some disciplines like proteomics still have a long way to go. That is no reason for lax science; indeed it's why scientific standards need to be high.
     
  13. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,386
    Coming back to this, because I want to understand, and I think it is important to understand, what really is meant by a pragmatic clinical trial.

    Wikipedia says:
    Whereas the article says:
    Is it that a pragmatic trial is only really valid if it follows on from an explanatory trial, where the latter has shown an intervention causing a desired effect under ideal conditions, and the pragmatic trial then checks whether there is a matching correlation under more real-life conditions? If something shows evidence of a cause/effect relationship under optimal but not very generalizable conditions, is it then valid to back it up with more generalizable evidence of a matching correlation? Feels to me like it makes sense, but I confess I cannot really put my finger on it properly.
     
    alktipping likes this.
  14. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    12,998
    Location:
    Canada
    The title is pure clickbait and not at all supported by the paper's conclusions. How are editors fine with hype straight in the title when there was a null result on the objective outcomes?

    Are standards really this low now? "Important component" even though it led to no objective change? This is Goop feng-shui infomercial level of making unevidenced claims.

    "If you buy this car, you will feel like a million bucks*"

    * You will not actually have a million bucks if you buy this car, you will merely feel like it and in the end, isn't that all that matters?
     
  15. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    14,082
    Location:
    London, UK
    I think the Wikipedia account is valid. The paper gives what people who do pragmatic trials want to claim they can do - which simply indicates they do not understand what they are doing.

    The pragmatic people have got this meme that these trials are in contrast to explanatory trials. Wrong. An explanatory trial addresses mechanism, not efficacy. Pragmatic trials contrast with formal trials of efficacy that adequately deal with confounders. Wikipedia gives it clearly.

    There is clearly a huge problem here - the same peer review problem we have had all along. The only people who call their trials pragmatic are the ones who do not understand that nothing other than a formal efficacy (or operational) trial will test the cause-and-effect relation of efficacy.

    I have good reason to believe that people involved in trials for ME, MUS, CBT, GET etc. belong to the pragmaticists who do not understand. The article you cited is by another. Presumably the editor and referees are others too.
     
    MEMarge, rvallee, Trish and 4 others like this.
  16. Snow Leopard

    Snow Leopard Senior Member (Voting Rights)

    Messages:
    3,858
    Location:
    Australia
    Sure, "pragmatic" trials can only ever be considered 'suggestive' quality evidence (along with case studies and pilot studies). The question is why do professionals assume that these trials can be considered 'moderately conclusive' quality evidence?
     
    MEMarge and Trish like this.
  17. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    14,082
    Location:
    London, UK
    Because there is a new breed of professionals only interested in personal and political gain rather than the right answer. And they are now in charge of quality control for Cochrane etc., it seems.
     
    ukxmrv, EzzieD, TrixieStix and 9 others like this.
  18. Hoopoe

    Hoopoe Senior Member (Voting Rights)

    Messages:
    5,276
    They seem to do pragmatic (and feasibility) trials to collect data to find out which outcomes at what timepoint are most likely to give a positive result in a later clinical trial.

    The outcomes are not chosen according to importance to patients, comparability with other trials or conditions, or reliability. They are chosen to support claims of efficacy (they dropped actometers from PACE because they knew these would show no improvement).

    I suspect they designed the Chalder scale in this way too, that is, it's the scale that most easily gives a positive result.

    It seems to be an exercise in maximizing bias and generating unreliable positive results.
     
    Last edited: Sep 24, 2019
    EzzieD, Hutan, Snow Leopard and 7 others like this.
  19. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    3,723
    Location:
    Belgium
    Spot on.
     
    Simbindi, MEMarge and rvallee like this.
  20. Sly Saint

    Sly Saint Senior Member (Voting Rights)

    Messages:
    9,636
    Location:
    UK
    This post from Gary explains the difference (based on his experience):
    https://www.s4me.info/threads/symptom-survey-for-those-with-me-cfs.3720/page-4#post-66728
     
    oldtimer and lycaena like this.
