Graded activity is an important component in CBT to reduce severe fatigue: cancer survivors (Knoop et al., 2019)

It's real just as ME fatigue is real.

So maybe they have ME? Or maybe they have prolonged fatigue of another sort that can be triggered by lots of things. The authors here treat the problem as 'post-cancer fatigue' because they think it is all about the psychological state of having had cancer. Is this a real category? Does anybody have the faintest idea?
 
Maybe this is similar to Havana syndrome, where exposure to toxic chemicals causes brain damage in some people who are predisposed to it.

In more classical ME, the damage could come from the initial infection and immune response.
 
I was unhappy about that in the case of post-viral fatigue too.

My impression is that post-viral fatigue is quite well documented as a causal relation - by things like the decay curve for fatigue in the Dubbo study.
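To make "decay curve" concrete, here is a minimal sketch of the kind of curve meant: the proportion still fatigued falling off smoothly with time since infection. The numbers and the simple exponential model are my own illustration, not the Dubbo study's actual data or fit.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical prevalence of post-infective fatigue over time (NOT Dubbo data)
months = np.array([1.5, 3.0, 6.0, 12.0])          # time since infection
prevalence = np.array([0.35, 0.27, 0.12, 0.09])   # fraction still fatigued

def decay(t, p0, tau):
    # Simple exponential decay: P(t) = p0 * exp(-t / tau)
    return p0 * np.exp(-t / tau)

(p0_fit, tau_fit), _ = curve_fit(decay, months, prevalence, p0=[0.4, 4.0])
print(f"fitted initial prevalence {p0_fit:.2f}, time constant {tau_fit:.1f} months")

A smooth, consistent decay of this sort across different triggering infections is the kind of pattern that supports reading the relation as causal rather than coincidental.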

What I see as more problematic here is that if you have a specialist unit set up to take referrals for 'post-cancer fatigue' they will get everyone with fatigue who has had cancer and especially the people whose doctors think they are a bit, you know, psychological. It is like the situation with hypermobility and fatigue - and of course it is the same people. It was Knoop who first suggested that people with hypermobility commonly had generalised fatigue. These people seem to be disease inventors. Their Achilles heel is that they want to shoehorn everything into a psychological explanation, which at times leads to ridiculous contradictions - as here.
 
But it sounds as if they are saying, by analogy, that after treatment people are no better at coping with spiders, they just say they are. (Which I would be prepared to bet is because that is what they think they are supposed to say!)
These authors have admitted that a lot of CFS patients have relatively normal activity levels. So I suspect their theory is that active CFS patients have a perception problem, namely thinking that symptoms are due to an as yet undetected but chronic neurological disease called ME that is preventing them from meeting their unrealistic, perfectionistic expectations. Characteristic of ME is that symptoms get worse after exercise.

The treatment consists of changing those expectations and demonstrating that increases in physical activity don't lead to a worsening of symptoms.

So by analogy, patients are terribly afraid of spiders (they think their symptoms indicate they have ME) even if they don't avoid them (their activity pattern is relatively normal), and by brief exposure to spiders (graded exercise), the treatment shows that they shouldn't be afraid and that thinking spiders are monsters (symptoms = ME) is unhelpful.

Some patients do avoid spiders (the 25% 'passive' patients) and therefore their treatment should focus more on exposure and behavioural aspects to stop the fear-avoidance.

That's the most sense I can make of it.
 
So the obvious conclusion is that fatigue-focused psychometric questionnaires are unreliable as a measure of fatigue. There is absolutely no reason why a diminution in fatigue should not lead to an increase in activity, especially in a population biased towards it by being younger than typical. Unless other symptoms like pain are in the way, but then the idea is the same for those, so that's a moot point.

But Knoop et al can't accept that the questionnaires do not measure reality and instead speculate on imaginary reasons to explain the treatment's failure. I don't even understand why so much speculation is allowed in a pragmatic trial. Either it works or it doesn't; a pragmatic trial is not the place for speculation, especially of the kind "we feel that it works and as such reject objective findings proving us wrong".

And as with PACE the whole thing was excessively biased into producing a positive result and it still failed exactly the same way as it always does. It doesn't help that because of the weird redefinitions of fatigue commonly used in psychosocial research, it's not really clear what the participants were actually suffering from, since in this context fatigue basically means any and all symptoms that do not have a clear and reliable biomarker.

At best we can conclude that CBT is an effective tool to modify responses on psychometric questionnaires. Which is about as useful as Scientology's E-meter. It measures something, just not something useful. At least this is something that is becoming so consistent it can't be ignored anymore: the discrepancy between objective outcomes and psychometric evaluations cannot be explained any other way than that the questionnaires are a lousy way to measure "fatigue", whatever is actually meant in any given context.

In a sense this is the same result as PACE, a failure to show any objective improvement, but with the opposite conclusion. The incompetence, it burns.
 
There were substantial drop-out rates; it is hard to say if this biased the results.

I don't understand why they didn't include T2 results in Table 2? Why would you only publish half the results of a crossover trial? It suggests to me that something is not quite right. If physical activity mediated a benefit, it should be shown in both arms...
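For context, in a generic two-period crossover design (the arm labels A and B here are placeholders, not the paper's actual arms), every participant receives both treatments, so an effect should be visible at both assessment points:

Arm 1:  baseline (T0) -> treatment A -> assess (T1) -> treatment B -> assess (T2)
Arm 2:  baseline (T0) -> treatment B -> assess (T1) -> treatment A -> assess (T2)

Reporting only T1 effectively throws away the entire second period of the trial.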
 
I don't understand why they didn't include T2 results in Table 2? Why would you only publish half the results of a crossover trial?
Yeah.

In the supplemental information, there's this chart. All a bit useless without controls.

The T1 assessment of 35 of 124 remaining patients was missing (28%), and the T2 assessment of three additional patients (3%). In the majority of missing assessments (22 out of 35; 63%), patients had stopped CBT prematurely. Most common reasons for drop-out were unmet expectations about CBT (n = 5) and cancer recurrence (n = 4). To prevent the study from being underpowered, patient recruitment was continued until reaching the required number of complete cases at T1.

So the patients who really don't like the treatment or don't think it's much use drop out, and more agreeable patients, willing to say nice things on a questionnaire, are recruited in their place. No bias at all, then...
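As a side check, the quoted attrition figures do at least add up, assuming the denominators as stated:

n_remaining = 124   # patients remaining in the trial
missing_t1 = 35     # T1 assessments missing
stopped_early = 22  # of those, stopped CBT prematurely

print(f"T1 missing: {missing_t1 / n_remaining:.0%}")                     # 28%
print(f"stopped early among missing: {stopped_early / missing_t1:.0%}")  # 63%

The deeper problem flagged above, though, is who replaces whom: topping up the sample after the least satisfied patients leave builds the selection bias into the recruitment itself.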
 
Is this a real category
To me this is the core of the problem. Lots of diagnoses in psychiatry have no objective basis. We know something is wrong, but we have no idea if the pigeonholes we want to plonk patients into have any basis in reality. They are all symptom clusters. Sadly CFS is in the same boat, and ME as well, though I think ME is in transition beyond that, especially given replication of brain and exercise findings. ME research and diagnosis can get out of this by developing a reliable diagnostic test, though multiple tests would be preferable. All of these kinds of syndromes, including even things like depression, need such tests developed for them. It should be a priority, because not knowing which group you are treating or investigating is a major impediment to quality research and care.

I do understand that the brain is the last great medical mystery, though some disciplines like proteomics still have a long way to go. That is no reason for lax science; indeed it's why scientific standards need to be high.
 
Coming back to this, because I want to understand, and I think it is important to understand, what really is meant by a pragmatic clinical trial.

Wikipedia says:
A pragmatic clinical trial (PCT), sometimes called a practical clinical trial (PCT), is a clinical trial that focuses on correlation between treatments and outcomes in real-world health system practice rather than focusing on proving causative explanations for outcomes, which requires extensive deconfounding with inclusion and exclusion criteria so strict that they risk rendering the trial results irrelevant to much of real-world practice.
Whereas the article says:
Pragmatic trials are designed to evaluate the effectiveness of interventions in real-life routine practice conditions, whereas explanatory trials aim to test whether an intervention works under optimal situations.
Is it that a pragmatic trial is only really valid if it follows on from an explanatory trial? Where the latter has shown an intervention causing a desired effect under ideal conditions, but then following it up with a trial to see if it shows a matching correlation under more real-life conditions? If something shows evidence of a cause/effect relationship under optimal but not very generalizable conditions, is it then valid to back it up with more generalizable evidence of a matching correlation? Feels to me like it makes sense, but I confess I cannot really put my finger on this properly.
 
The title is very clickbait and not at all supported by the paper's conclusions. How are editors fine with hype straight in the title when there was a null result on objective outcomes?

Are standards really this low now? "Important component" even though it led to no objective change? This is Goop feng-shui infomercial level of making unevidenced claims.

"If you buy this car, you will feel like a million bucks*"

* You will not actually have a million bucks if you buy this car, you will merely feel like it and in the end, isn't that all that matters?
 
Coming back to this, because I want to understand, and I think it is important to understand, what really is meant by a pragmatic clinical trial.

Is it that a pragmatic trial is only really valid if it follows on from an explanatory trial? Where the latter has shown an intervention causing a desired effect under ideal conditions, but then following it up with a trial to see if it shows a matching correlation under more real-life conditions? If something shows evidence of a cause/effect relationship under optimal but not very generalizable conditions, is it then valid to back it up with more generalizable evidence of a matching correlation? Feels to me like it makes sense, but I confess I cannot really put my finger on this properly.

I think the Wikipedia account is valid. The paper gives what people who do pragmatic trials want to claim they can do - which simply indicates they do not understand what they are doing.

The pragmatic people have got this meme that these trials are in contrast to explanatory trials. Wrong. An explanatory trial addresses mechanism, not efficacy. Pragmatic trials contrast with formal trials of efficacy that adequately deal with confounders. Wikipedia gives it clearly.

There is clearly a huge problem here - the same peer review problem we have had all along. The only people who call their trials pragmatic are the ones who do not understand that nothing other than a formal efficacy (or operational) trial will test the cause and effect relation of efficacy.

I have good reason to believe that people involved in trials for ME, MUS, CBT, GET etc. belong to the pragmatists who do not understand. The article you cited is by another. Presumably the editor and referees are others too.
 
There is clearly a huge problem here - the same peer review problem we have had all along. The only people who call their trials pragmatic are the ones who do not understand that nothing other than a formal efficacy (or operational) trial will test the cause and effect relation of efficacy.

Sure, "pragmatic" trials can only ever be considered 'suggestive' quality evidence (along with case studies and pilot studies). The question is why do professionals assume that these trials can be considered 'moderately conclusive' quality evidence?
 
Sure, "pragmatic" trials can only ever be considered 'suggestive' quality evidence (along with case studies and pilot studies). The question is why do professionals assume that these trials can be considered 'moderately conclusive' quality evidence?

Because there is a new breed of professionals only interested in personal and political gain rather than the right answer. And they are now in charge of quality control for Cochrane etc., it seems.
 
They seem to do pragmatic (and feasibility) trials to collect data to find out which outcomes at what timepoint are most likely to give a positive result in a later clinical trial.

The outcomes are not chosen according to importance to patients, ability to compare to other trials or conditions, or reliability. They are chosen to support claims of efficacy (they dropped actometers from PACE because they knew these would show no improvement)

I suspect they designed the Chalder scale in this way too, that is, it's the scale that most easily gives a positive result.
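For what it's worth, the scale's own scoring conventions show how easily a "positive result" can be produced. Here is a minimal sketch using the standard CFQ-11 scoring rules (11 items answered 0-3; Likert scoring sums the raw answers, bimodal scoring maps 0/1 to 0 and 2/3 to 1); the answer patterns are invented for illustration:

def likert(answers):
    # Likert scoring: sum of raw item scores, range 0-33
    return sum(answers)

def bimodal(answers):
    # Bimodal scoring: each item collapsed to 0 or 1, range 0-11
    return sum(1 if a >= 2 else 0 for a in answers)

before = [2] * 11   # one step above "no more than usual" on every item
after  = [1] * 11   # one step down per item after treatment

print(likert(before), "->", likert(after))    # 22 -> 11: a modest shift
print(bimodal(before), "->", bimodal(after))  # 11 -> 0: looks like full recovery

The same one-step-per-item change reads as a modest improvement under one scoring and as complete resolution under the other.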

It seems to be an exercise in maximizing bias and generating unreliable positive results.
 
They seem to do pragmatic trials to collect data to find out which outcomes at what timepoint are most likely to give a positive result in a later clinical trial.

The outcomes are not chosen according to importance to patients, ability to compare to other trials or conditions, or reliability. They are chosen to support claims of efficacy (they dropped actometers from PACE because they knew these would show no improvement)
Spot on.
 
So maybe they have ME? Or maybe they have prolonged fatigue of another sort that can be triggered by lots of things. The authors here treat the problem as 'post-cancer fatigue' because they think it is all about the psychological state of having had cancer. Is this a real category? Does anybody have the faintest idea?
This post from Gary explains the difference (based on his experience):
https://www.s4me.info/threads/symptom-survey-for-those-with-me-cfs.3720/page-4#post-66728
 