Trial By Error: An Open Letter to Dr Godlee about BMJ’s Ethically Bankrupt Actions

I remember reading, in relation to IAPT CBT, that the NHS estimates the overall cost to CCGs to be £100 per individual session, which is actually much higher than the cost would have been when the CBT therapist (or person-centred counsellor) was employed directly by the GP surgery.

However, overall the IAPT model saves [sic] the NHS money because most patients are assigned to the much cheaper-to-employ 'Wellbeing Practitioners' - and, unofficially, because only a small proportion of referrals actually complete a full (clinically meaningful) course (i.e. 8 weeks or more) of high-intensity CBT treatment or, indeed, see anyone at all.

Interesting point, of relevance to NICE members @adambeyoncelowe and @Keela Too.
It reminds me that Simon Wessely said to me that his only concern about PACE was that it would be followed by rolling out of 'CBT' done by subcontracted services using people who were inadequately trained.

Of course we have no idea who is 'adequately trained' here, emphasising the point that at present we have no idea what component, if any, of CBT is useful. It seems that psychologists assume that they can do it 'their way', which of course will work, even if trials are inconclusive.
 
Thanks. This is useful to know.
 
Some points of interest.

1. Clinical trials can be divided into explanatory and operational. An explanatory trial addresses mechanisms of disease or treatment action. An operational trial addresses whether a treatment works. Both sorts try to show cause and effect: explanatory trials try to get at the cause-and-effect mechanism, while operational trials just ask whether the treatment causes the effect of improvement.

In order to reliably show cause and effect, trials must factor out confounding effects like bias. A trial that does that adequately is a formal trial. Formal trials tend to employ randomisation, blinding, and matched controls. Evidence-based recommendations can use trials that are formal enough to be reasonably reliable.

In addition, there is a category of pragmatic trial. The Wikipedia definition looks to be well thought out. Pragmatic trials forfeit the opportunity to get cause-and-effect information in exchange for the benefit of being closer to real service provision. They forfeit the ability to indicate cause and effect because, by definition, they do not factor out confounding effects in the way formal trials do. This means that they tell us neither about mechanism nor about whether the treatment caused an effect of improvement.

It is said that pragmatic trials instead tell us about correlation rather than causation. I am not clear exactly how that helps us, but it may be that such trials can, for instance, tell us whether or not a treatment can be delivered in a certain setting in practice. But they tell us nothing about whether it works (the toy sketch at the end of this point illustrates why).

I have a strong impression, which I have included in my recent letter to Dr Godlee, that pragmatic trials are seen as rough-and-ready ('pragmatic') ways to gain evidence about treatment efficacy. But, by definition, they are not able to provide that. My impression is that there are major competing interests here in terms of health service research politics and economics.

Pragmatic trials have nothing to do with being 'pragmatic' in the sense of using whatever evidence is available for what works when definitive evidence is lacking. They don't help and may well mislead.
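
To make the contrast concrete, here is a toy simulation in Python (my own illustration; the numbers are invented and nothing here comes from any real trial). The treatment has zero true effect, yet a comparison based on self-selected, observational allocation shows an apparent benefit simply because milder patients are more likely to opt in, while random allocation of the same patients shows nothing.

```python
# Toy simulation (invented numbers, not real trial data): a treatment with
# ZERO true effect. Milder patients are more likely to opt in when allocation
# is self-selected, so a naive comparison shows a spurious "benefit";
# randomised allocation of the same patients does not.
import random

def simulate(n=10000, seed=42):
    rng = random.Random(seed)
    severity = [rng.gauss(0, 1) for _ in range(n)]       # baseline severity (the confounder)
    outcome = [s + rng.gauss(0, 1) for s in severity]    # outcome depends only on severity

    # Observational allocation: milder patients (severity < 0) opt in more often
    obs_treated = [rng.random() < (0.7 if s < 0 else 0.3) for s in severity]
    # Randomised allocation: coin flip, independent of severity
    rct_treated = [rng.random() < 0.5 for _ in range(n)]

    def mean_diff(treated):
        t = [o for o, x in zip(outcome, treated) if x]
        c = [o for o, x in zip(outcome, treated) if not x]
        return sum(t) / len(t) - sum(c) / len(c)

    print("apparent effect, self-selected:", round(mean_diff(obs_treated), 2))
    print("apparent effect, randomised:   ", round(mean_diff(rct_treated), 2))

simulate()
# self-selected comparison shows roughly -0.6 (looks like improvement);
# randomised comparison sits close to 0
```

The only difference between the two comparisons is how allocation was decided, which is exactly the confounding that randomisation exists to remove.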


2. My understanding of the basis of the CBT used in ME comes from reading the PACE trial and other documents such as the Wessely, Butler, David and Chalder paper of 1989. My understanding of the method may not match precisely the methods used by current psychologists.

However, if the methods used by current psychologists do not match what is in the formal trial literature then we have no way of knowing whether these new methods are of any value. If CBT is so poorly defined that current practice is not recognisably the practice that has been tested then it is hard to see how current practice can be recommended.


3. I have been asked to address the problems with trials in ME/CFS and have focused on the use of the unblinded, subjective-outcome format. I have pointed out that it is apparently universally agreed amongst disinterested academics (throughout science) that this format cannot produce reliable evidence.

The question then arises as to whether the evidence base for treatments such as CBT, used in other conditions, is more solid, or whether, if the trial methodology is similar, it is equally weak. I have not attempted to look into this and would not claim to know the answer. However, my feeling is that if the situation with trial methodology in ME/CFS is as bad as it seems to be then I would worry that at least to some extent this problem applies to use of similar treatments in other conditions.
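
As a rough illustration of that worry (my own sketch, with invented numbers, not data from PACE or any other trial): in an unblinded trial of a null treatment, a modest expectation-driven shift in how participants fill in a questionnaire is enough to manufacture an apparent benefit on the subjective endpoint, while an objective measure stays flat.

```python
# Toy illustration (invented numbers): a null treatment in an unblinded trial.
# Participants who know they got the "active" therapy shade their questionnaire
# answers upward by a modest amount; an objective measure is unaffected.
import random

def simulate(n_per_arm=150, expectation_shift=4.0, seed=7):
    rng = random.Random(seed)

    def arm(shift):
        objective = [rng.gauss(50, 10) for _ in range(n_per_arm)]        # e.g. actimetry-type measure
        subjective = [o + rng.gauss(0, 10) + shift for o in objective]   # self-rated questionnaire score
        return objective, subjective

    ctrl_obj, ctrl_subj = arm(shift=0.0)
    trt_obj, trt_subj = arm(shift=expectation_shift)   # reporting bias only, no real effect

    mean = lambda xs: sum(xs) / len(xs)
    print("objective difference :", round(mean(trt_obj) - mean(ctrl_obj), 1))
    print("subjective difference:", round(mean(trt_subj) - mean(ctrl_subj), 1))

simulate()
# objective difference is just noise around 0; subjective difference is about +4,
# produced purely by unblinded self-report
```

Blinding, or an objective primary outcome, is what stops this kind of reporting shift from masquerading as a treatment effect.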
 
if the methods used by current psychologists do not match what is in the formal trial literature then we have no way of knowing whether these new methods are of any value. If CBT is so poorly defined that current practice is not recognisably the practice that has been tested then it is hard to see how current practice can be recommended.
This is the case in Belgium.

The ME/CFS 'experts' recognize some of the absurdities of the CBT model, so they have adapted it to make it somewhat more reasonable. But they still claim it's evidence-based by referring to trials that used the fear-avoidance model of CBT. You can't have your cake and eat it!
 
However, my feeling is that if the situation with trial methodology in ME/CFS is as bad as it seems to be then I would worry that at least to some extent this problem applies to use of similar treatments in other conditions.

CBT for psychosis, and probably also therapies that attempt to cure unexplained illness by searching for and talking about presumed emotional trauma. The Lightning Process and similar "reprogram your brain" philosophies. My feeling is that none of these would pass a proper test.
 
Presumed emotional CHILDHOOD trauma is a whole paradigm here (ACEs) which is being embedded into everything, rather than addressing the underlying societal issues.
Talking therapy is so much more "effective" than addressing poverty and all its manifestations...
 
Of course we have no idea who is 'adequately trained' here
This is something that appears to be on the increase generally in the NHS.
It is a bit odd that those wanting parity for mental health issues are seemingly unconcerned with who actually treats patients, insofar as the practitioners do not appear to need any specific medical qualifications.*

As posted earlier, the LP website states that their practitioners are not medically trained, and other similar commercial treatments also appear to depend only on 'completing their training course'.

* This article (from the Guardian, sorry) takes you through the options:
Thinking of a career in therapy? Here are your options
CBT training ..
There are two ways into this: through a post-graduate diploma in CBT (listed on the BABCP website), or via the Improving Access to Psychological Therapies (IAPT), an initiative to help improve access to mental healthcare within the NHS. This is a cost-effective way to train as it is funded by the government and trainees earn a salary. If you don’t already have a prior qualification in a mental health related field, your professional experience may still count for something if you work in a setting where counselling skills are used, or if you have volunteered, for example, for a helpline.

https://www.theguardian.com/careers...-of-a-career-in-therapy-here-are-your-options
 
I recall an interview with an actress on the radio. During it, the topic of dealing with her children came up. She said there are four basic rules in their household, and when there is a dispute it almost always involves one of them, so she simply asks them 'what number is it?'

1] No one can touch me without my permission.
2] No one can touch my things without my permission.
3] No one can tell me what I am thinking.
4] No one can tell me what I am feeling.

To deny someone their own reality is an abuse, whether they are a child or an adult. To do so professionally, for money, is criminal.

Interventions by professionals who a priori deem your reality invalid are incompetent as well as criminally abusive. They also constitute fraud, by both practitioner and researcher.

There should be severe penalties, as they do not just defraud or abuse the few; they are able to promote the abuse of the masses and of entire population groups.
 
Interesting point, of relevance to NICE members @adambeyoncelowe and @Keela Too.
It reminds me that Simon Wessely said to me that his only concern about PACE was that it would be followed by rolling out of 'CBT' done by subcontracted services using people who were inadequately trained.

I looked into this many years ago when I wanted counselling via my GP's surgery. I had to battle to get that via IAPT and was shocked at what was happening in the mental health services.

I'll see if I can find the evidence I used on my old computer (the cost of IAPT was even higher earlier on), but I can't guarantee it. I also remember seeing this £100 figure being used in relation to the IAPT LTC rollout, so ditto if I can find that. The counsellor I saw via IAPT, but at my surgery, was part-time with the NHS - I could have seen her privately in her own establishment for £35 an hour at the time (she is now about £45 an hour)! It was so nonsensical.
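
For what it's worth, a back-of-envelope comparison using only the figures quoted in this thread (the £100 per session to CCGs mentioned earlier, the £35 private rate above, and the 8-session 'clinically meaningful course'; none of these are official costings):

```python
# Back-of-envelope course-cost comparison using only figures quoted in this
# thread; the 8-session course length is the 'clinically meaningful' minimum
# mentioned earlier, not an official IAPT figure.
iapt_per_session = 100   # £ per individual session charged to CCGs (as quoted)
private_per_hour = 35    # £ the same counsellor charged privately at the time
sessions = 8

print(f"IAPT course of {sessions} sessions:    £{iapt_per_session * sessions}")  # £800
print(f"Private course of {sessions} sessions: £{private_per_hour * sessions}")  # £280
```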
 
Interesting point, of relevance to NICE members @adambeyoncelowe and @Keela Too.
It reminds me that Simon Wessely said to me that his only concern about PACE was that it would be followed by rolling out of 'CBT' done by subcontracted services using people who were inadequately trained.

I think, with regard to MUS, they are intending for patients to see the 'High Intensity CBT therapists'. At least, the training manuals are directed at the more highly trained therapists and are currently being offered as 'top-up' training. I've seen the manual uploaded somewhere on this site, but I can't remember which thread.

However, IAPT uses a 'step-up' model of care, so whether all CFS/ME patients are given an 'experienced' [sic] therapist in practice remains to be seen I guess.
 
As I understood it, you had to have done the generic low-intensity "sausage machine" CBT to get a referral to the next level (the CFS clinic), but that could just be what happened locally four years ago.
 
That could hardly be called randomised then surely? Often patients pull out of studies if they don’t get the group they were hoping to be randomised to, but I’ve not heard of patients being shifted from one group to another before.
Participants in the sham control arms of PACE were allowed to try CBT or GET if they wanted after the initial trial had ended, making long-term follow-up completely uninterpretable (which didn't prevent follow-up papers from being published).

There are comments in the trial meeting notes that some control-arm participants were angry at not being in the active arms but were assured they could try those treatments afterwards. Those expectations came from hyping CBT and GET as safe and effective as part of the treatment, which in itself makes the whole trial completely meaningless.

Those facts are known, and no one who promotes PACE seems to have issues with them, so in a sense this has already been normalized and accepted as good practice. Unofficially, of course, but lack of rigor is basically the defining theme of this ideology. Since the LP requires participants to be willing and enthusiastic about taking part, it's doubtful that proper randomization was even possible.
 
That could hardly be called randomised then surely? Often patients pull out of studies if they don’t get the group they were hoping to be randomised to, but I’ve not heard of patients being shifted from one group to another before.

Neither have I! I have a longer reply I will post in this thread later when I have more energy, but it looks like this is what happened for 11 of 100 patients.

@rvalle I agree with you largely, except I think it makes itself plain if you dig around in the paper a little. If the statistician for this paper is in charge of assessing bias, it would be quite bad if he authored a paper where patients were shifted after randomization, with advance knowledge that this would bias the outcome, and then hid it. That's why I want to come back to it when my brain works.
 
Well I think I'm totally lost - is the primary outcome graph mislabelled?!

Treatment as allocated was received by 46 (94%) and 39 (76%) participants in the SMC and SMC+LP arms, respectively.

These numbers are consistent with the 11 patients switching and with 11 patients not being followed up at the primary outcome, although that would require that no one else was lost to follow-up, which would be odd.

But when you look at figure 2, the triangle line indicates the SMC+LP numbers (51 down to 46 patients) and the circle line indicates the SMC numbers (49 down to 38).

Even weirder. The subgroup analysis at 6 months (“web table 5”) references 44 or 45 patients in SMC+LP (e.g. 9 males, 36 females). So even if we include the 3 who did LP after the trial and were included in the analysis, the reported number is too high, since 5 others were excluded (51 - 9 - 5 + 3 = 40).

In fact, did they mislabel the subgroup analysis as well? Because there the SMC group reads 37 and the SMC+LP group reads 45. I find it unthinkable that this refers to the initial cohort regardless of treatment, because it is a graph that shows the outcome fatigue scores of the treatments and their respective numbers of patients.
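
To keep the arithmetic straight, here is a quick check (my own, using only the numbers quoted in this post; the arm labels reflect my reading of it, not the paper's wording):

```python
# Sanity check of the figures quoted in this post; all numbers come from the
# post itself, and the arm labels are my reading of which arm is which.
randomised = {"SMC": 49, "SMC+LP": 51}
received_as_allocated = {"SMC": 46, "SMC+LP": 39}

for arm, n in randomised.items():
    got = received_as_allocated[arm]
    print(f"{arm}: {got}/{n} received treatment as allocated = {100 * got / n:.0f}%")
# SMC: 46/49 = 94%, SMC+LP: 39/51 = 76%, matching the percentages quoted above

# The subgroup sum written above, reproduced as-is:
print(51 - 9 - 5 + 3)   # 40, versus the 44-45 referenced in web table 5
```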
 
Something else caught my eye.

1) In their final results, the authors reference a baseline dataset from before the cohort swapping, because their reference numbers are 49 and 51. Whatever the case, those are not the numbers of patients who began the LP and SMC sessions. They must have known that this misrepresented the situation when writing the paper.

The primary analysis compared mean SF-36-PFS scores at 6 months according to randomised allocation among participants with measured outcomes, using multivariable linear regression adjusting for baseline values of the outcome, baseline age and gender.

Baseline characteristics were similar between those who did (n=82) and did not (n=18) provide primary outcome data at 6 months (online supplementary table 4).

The weird thing is it looks like they did not use the secondary baseline characteristics, but the ones immediately following randomization.
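
For reference, this is roughly what the analysis described in the quoted methods looks like in code (a minimal sketch; the column names and values are my own placeholders, not the trial's dataset), which is why it matters which 'baseline' scores were fed into the model:

```python
# Minimal sketch of an ANCOVA-style primary analysis as described in the
# quoted methods: 6-month SF-36-PFS regressed on trial arm, adjusted for the
# baseline score, age and gender. Column names and values are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "sf36_6m":       [55, 60, 40, 70, 45, 65, 50, 75],
    "sf36_baseline": [45, 50, 35, 55, 40, 50, 45, 60],
    "age":           [13, 15, 12, 14, 16, 13, 15, 14],
    "gender":        ["f", "m", "f", "f", "m", "f", "m", "f"],
    "arm":           ["SMC", "SMC", "SMC", "SMC", "SMC+LP", "SMC+LP", "SMC+LP", "SMC+LP"],
})

model = smf.ols("sf36_6m ~ C(arm) + sf36_baseline + age + C(gender)", data=df).fit()
print(model.params)   # the coefficient on C(arm)[T.SMC+LP] is the adjusted arm difference
```

Whatever baseline column goes into that adjustment is what the model treats as the pre-treatment state, so it matters whether it was recorded before or after any switching.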



2) There is a leapfrogging of the cohorts between baseline and follow-up point 1 in figure 2. SMC starts out higher, but then SMC+LP quickly becomes higher. This represents the largest change from the interventions. We know the baseline is again a misrepresentation, because the numbers are again 49 and 51. So the cohort dropout took place between baseline and the first follow-up.

So the authors used the baseline taken immediately after randomization, not the one after the patients switched cohorts. It doesn't represent the baseline at the point the actual treatment began.

But they knew this couldn't be accurate:

Eligible children and adolescents who found out more about the trial but were not randomised had lower anxiety and depression scores and attended more school.

Instead they labelled them as "not followed up on", which implies they were unfilled questionnaires. They also, I think shockingly, left this out of the text. This info can only be found in the supplementary data, and the info about the phone call is from the feasibility trial.
 