Improving Access to Psychological Therapies (IAPT) in the United Kingdom: A systematic review and meta‐analysis of 10‐years of practice‐based evidence

Sly Saint


Sarah Wakefield
Melanie Simmonds‐Buckley
Daniel Stockton
Abigail Bradbury
Jaime Delgadillo

23 June 2020
Abstract
Objectives
Improving Access to Psychological Therapies (IAPT) is a national‐level dissemination programme for provision of evidence‐based psychological treatments for anxiety and depression in the United Kingdom. This paper sought to review and meta‐analyse practice‐based evidence arising from the programme.

Design
A pre‐registered (CRD42018114796) systematic review and meta‐analysis.

Methods
A random effects meta‐analysis was performed only on the practice‐based IAPT studies (i.e. excluding the clinical trials). Subgroup analyses examined the potential influence of particular methodologies, treatments, populations, and target conditions. Sensitivity analyses investigated potential sources of heterogeneity and bias.

Results
The systematic review identified N = 60 studies, with N = 47 studies suitable for meta‐analysis. The primary meta‐analysis showed large pre‐post treatment effect sizes for depression (d = 0.87, 95% CI [0.78–0.96], p < .0001) and anxiety (d = 0.88, 95% CI [0.79–0.97], p < .0001), and a moderate effect on functional impairment (d = 0.55, 95% CI [0.48–0.61], p < .0001). The methodological features of studies influenced ESs (e.g., whether intention‐to‐treat or completer analyses were employed).

Conclusions
Current evidence suggests that IAPT enables access to broadly effective evidence‐based psychological therapies for large numbers of patients. The limitations of the review and the clinical and methodological implications are discussed.

Practitioner points
  • IAPT interventions are associated with large pre‐post treatment effect sizes in depression and anxiety measures.
  • IAPT interventions are associated with moderate treatment effect sizes with regards to work and social adjustment.
  • A reduction in dropout and also the prevention of post‐treatment relapse via the offer of follow‐up support are important areas for future development.
Published in the British Journal of Clinical Psychology

https://onlinelibrary.wiley.com/doi/full/10.1111/bjc.12259
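For readers unfamiliar with how pooled figures like d = 0.87, 95% CI [0.78–0.96] are produced, here is a minimal sketch of DerSimonian–Laird random-effects pooling, the standard recipe behind a "random effects meta-analysis" of per-study effect sizes. The study effect sizes and variances below are invented illustrative numbers, not the Wakefield et al. data.

```python
import math

def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes with a DerSimonian-Laird
    random-effects model. Returns (pooled d, 95% CI, tau^2)."""
    w = [1.0 / v for v in variances]              # fixed-effect weights
    d_fixed = sum(wi * di for wi, di in zip(w, effects)) / sum(w)
    # Cochran's Q and the between-study variance estimate tau^2
    q = sum(wi * (di - d_fixed) ** 2 for wi, di in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    # random-effects weights fold tau^2 into each study's variance
    w_re = [1.0 / (v + tau2) for v in variances]
    d_re = sum(wi * di for wi, di in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return d_re, (d_re - 1.96 * se, d_re + 1.96 * se), tau2

# Hypothetical per-study pre-post d values and sampling variances
# (made up for illustration; the actual study data are not reproduced).
effects = [0.95, 0.80, 0.88, 0.70, 1.02]
variances = [0.010, 0.015, 0.008, 0.020, 0.012]
d, ci, tau2 = dersimonian_laird(effects, variances)
print(f"pooled d = {d:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```

Because every study here is a single-arm pre-post comparison, the pooled d measures change over time, not change relative to a control, which is the crux of the criticism later in this thread.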
 
Dr Mike Scott, CBT Watch (14th July)

Last week I wrote to Professor Grisham, the Editor of the Journal, complaining, inter alia, of IAPT’s failure to declare a conflict of interest over the paper by Wakefield et al (2020) in the current issue, see link https://doi.org/10.1111/bjc.12259. The Journal has responded by formally inviting me to write a commentary, which, subject to peer review, will appear alongside a response by the said authors.
In this paper all the authors declare ‘no conflict of interest’. But the corresponding author of the study Stephen Kellett is an IAPT Programme Director. This represents a clear conflict of interest that I believe you should alert your readers to. The study is open to a charge of allegiance bias.

I am concerned that in their reference to my published study “IAPT – The Need for Radical Reform”, Journal of Health Psychology (2018), 23, 1136-1147 https://doi.org/10.1177/1359105318755264 these authors have seriously misrepresented my findings.

full letter http://www.cbtwatch.com/british-jou...ology-responds-to-iapts-conflict-of-interest/
 
Mike Scott's words quoted by @Sly Saint above
these authors have seriously misrepresented my findings.

Anyone else similarly not surprised to find that some of these so-called mental health professionals seem completely unable to read or hear other people's opinions without misinterpreting them? Even when both parties are native speakers of the same language.

I can only conclude that either their language skills are so poor as to make them unfit to work as mental healthcare professionals, they are so blind to human nature & their own biases that they are unfit to work as mental healthcare professionals, or they're self-interested & ruthless to the extent that they wilfully risk the mental health of others in the pursuit of their own agendas. Whichever it is, they prove themselves to be wholly unsuited to careers in mental health.

It's a shame they taint all mental healthcare professionals and are allowed to prey on one of the most vulnerable and stigmatized patient groups.

Why aren't more mental healthcare professionals speaking out? If they don't they'll soon find their own jobs replaced by a barely trained therapist given free rein to harm some very ill and vulnerable people.
 
We have seen this so often, in people who should be expected to know better, that it almost seems as though there is a selection bias in favour of people who lack basic reading comprehension skills. Perhaps more worrying is that the errors are not picked up by peer reviewers, editors, or those reading the articles and for whom they are intended.
 
Follow‐up
There were four studies that had a post‐treatment follow‐up period, and this ranged from 4 to 52 weeks.
Since so few studies did follow-up assessments, they ignored these altogether.

Study limitations
The absence of any control comparators means that the observed effects may be confounded by statistical phenomena such as regression to the mean and/or a possible natural recovery phenomenon [...] The lack of studies with adequate post‐treatment follow‐up data means that the durability of IAPT interventions is still open to question.

So, no long term follow up data, no control group. So no conclusions possible. End of story.
 
This is what the "corresponding author" has to say of himself

Clinically, my real interest lays in helping people with long-standing interpersonal problems with the CAT model.

I suspect many have long standing problems with the CAT model. This, apparently, is it

It is a cognitive therapy which promotes new awareness of thoughts and behaviour patterns. In addition, it also aims to ‘reach the parts cognitive behaviour therapy doesn’t reach’, by understanding the unconscious aspects of our thinking, our emotions and our actions. In particular, it uses the therapeutic relationship as a vehicle for change and understands people in the context of their interpersonal relationships and their social context. https://sheffield.catalyse.uk.com/about-cat/

God help us.
 
I'm not really sure how any analysis can be made here, there are no reliable data to evaluate. The thinking is that if someone went through the treatment then they were "helped". Only 4/47 had follow-up "data", some as low as 4 weeks, all of which are arbitrary psychometric questionnaires of no utility here and wildly heterogeneous.

The conclusion of "access" is bonkers, of course if you provide a service you provide "access" to that service. People also have access to healing crystals and psychics, so what? And we know there is systemic cheating and perverse incentives on top of that, in addition to huge rates of attrition. This is little more than fiddling with spreadsheets until the numbers give out a desired pattern.

There was an explicit target of 50% "recovery":
The original aim of the IAPT programme was to increase access to evidence‐based talking treatments and there is evidence that large numbers are being treated annually, and that recovery rates are slowly increasing and achieving the 50% target (IAPT, 2019).
There is no discussion of "recovery" beyond this. There are only 4 mentions of the word recovery, the primary outcome of this multi-billion dollar program. Maybe I'm not looking for the right thing but I see no discussion or evaluation of "recovery". So basically "access" and "satisfaction", in the form of "I did not explicitly dislike the experience" is well and good. Still, there is this nugget here saying the rates are increasing. Where? How?

An average restaurant chain has more reliable data to evaluate their performance. This is all junk, a multi-billion boondoggle to inflate the BPS blimp of doom, filled with explosive gas straight out of natural tripe.
 
IAPT = sausage machine. McTherapy.

Good point by @rvallee that even a bog standard fast food chain would not have ‘performance’ data at the shoddy level demonstrated by IAPT.

Indeed they do.

Nearly 35 years ago I spent a summer working for a well known fast food place to earn some extra cash for my final year at uni - it's very hard work by the way.

Even then, every time an order went through a till it was recorded, very strict stock control. There was a bin for wasted food/orders - if a customer ordered and then there was a problem with it - this was inspected at the end of shift.

So they knew exactly what was ordered through the tills, what stock was used and what was wasted. They could even keep a running total hour by hour of how much business we did on the tills from the duty manager's office.

This level of inaccuracy simply wouldn't have been tolerated at one of those fast food joints back in the 80s. I'm sure that with new technology they would be even sharper today. These "scientists" are rank amateurs by comparison.
 
Ensuring that the Improving Access to Psychological Therapies (IAPT) programme does what it says on the tin
Michael J. Scott

16 August 2020
https://doi.org/10.1111/bjc.12264

I welcome the opportunity to comment on the recent paper ‘Improving Access to Psychological Therapies (IAPT) in the United Kingdom: A systematic review and meta‐analysis of 10‐years of practice‐based evidence’ by Wakefield et al. (2020) published in the Journal.
Allegiance bias and real‐world outcome
In the Wakefield et al. (2020) paper all the authors declare ‘no conflict of interest’. But the corresponding author of the study, Stephen Kellett, is an IAPT Programme Director. The study is therefore open to a charge of allegiance bias.
The failure to demonstrate an added value
The Wakefield et al. (2020) study did not include a comparison of IAPT’s claimed outcomes with an appropriate counterfactual. For a new service to warrant continued funding, it must demonstrate that it is better than if the service never existed. But data from psychological services that pre‐dated IAPT suggest that IAPT has conferred no added value.
IAPT’s studies fail to clear methodological bars for evidence supported treatments (ESTs)

The past decade has witnessed a refinement in the criteria necessary for psychological interventions to be regarded as evidence supported. This has included (1) more detailed examination of the risk of bias (Higgins et al., 2011), (2) the need for comparisons with active control conditions (Carpenter et al., 2018; Guidi et al., 2018), (3) the need for independent blind assessment, a combination of observer and patient‐reported outcome measures together with a determination of the duration of recovery (Guidi et al., 2018), and (4) the need for measures of treatment fidelity and the need to test out a supposed EST in real‐world settings with evaluators independent of those who developed the protocols (Tolin et al., 2015). In the same decade, IAPT has greatly expanded, but Wakefield et al. (2020) fail to acknowledge that IAPT’s studies are largely invalidated by these considerations. These authors seem unaware of a highering of the methodological bar, for interventions to be regarded as ESTs.

the last bit surprises me as I haven't seen much evidence of this 'highering of the methodological bar for interventions to be regarded as ESTs' in many if any psychosocial studies discussed here.

IAPT’s treatment infidelity
As Wakefield et al. (2020) acknowledge, IAPT does not utilize any measure of treatment fidelity, but they appear not to appreciate the gravity of this.

Dubious points of reference
More generally, IAPT’s client population is so heterogeneous that no meaningful comparisons can be made with the results of RCTs.

IAPT’s studies are of completers, despite most clients dropping out
Whilst Wakefield et al. (2020) acknowledge the importance of an intention to treat analysis, they fail to highlight how the absence of this undermines their review. IAPT’s studies focus primarily on completers, defined as attending two or more sessions.

Towards achieving outcomes that matter to clients
The Wakefield et al. (2020) study serves to legitimate current IAPT practice. These authors are remiss in not pointing out that IAPT’s studies reveal no evidence of enduring loss of diagnostic status. As such, they display an indifference to what clients would regard as evidence of treatment making a real‐world difference.

full commentary here
https://onlinelibrary.wiley.com/doi/10.1111/bjc.12264

blog
http://www.cbtwatch.com/british-journal-of-clinical-psychology-commentary-on-iapt/
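Scott's point about completer-only analysis can be made concrete with a toy simulation: if dropout correlates with lack of improvement, the mean change among completers overstates the intention-to-treat change for everyone who started treatment. All numbers below are invented for illustration, not IAPT data.

```python
import random

random.seed(0)

# Toy simulation (hypothetical numbers): every patient has a true
# pre-post improvement drawn from the same distribution, but those who
# improve least are more likely to drop out before re-measurement.
n = 10_000
improvement = [random.gauss(5.0, 6.0) for _ in range(n)]
completed = [imp > random.gauss(0.0, 6.0) for imp in improvement]

def mean(xs):
    return sum(xs) / len(xs)

itt = mean(improvement)                                   # intention-to-treat
completers = mean([i for i, ok in zip(improvement, completed) if ok])

print(f"ITT mean improvement:       {itt:.2f}")
print(f"Completer mean improvement: {completers:.2f}")    # noticeably larger
```

The completer average is inflated purely by the selection mechanism, with no change to anyone's actual outcome; the same mechanism applies whenever "completer" is defined as loosely as attending two or more sessions.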
 
"£5M isn't cool. You know what's cool? £4B."

Don't be a sucker, people. This is what suckers do, believe in the blatant lies of con artists. Complete dereliction of duty and mismanagement of public resources by those who pushed this turd through the bowels of medicine. Billions wasted on a mediocre pipe dream that never had a chance to deliver even a fraction of what was promised.

All because these people made manipulation their primary skill. They sure can manipulate people into believing they know what they're doing, but that was always what a snake oil salesman does. Nothing changed other than the charlatans working within the system of medicine, rather than outside.

Once again: evidence-based medicine is a complete and total disaster that has not only failed to improve outcomes but actually managed to regress them. A truly unique level of mediocrity among all the professions. Truly the poster child for the problem with toxic positivity.
 
From one of the quotes in post #14 :

These authors seem unaware of a highering of the methodological bar, for interventions to be regarded as ESTs.

This author seems to be struggling to find the word "raising" i.e.

These authors seem to be unaware of a raising of the methodological bar, for interventions to be regarded as ESTs.
 
Is Evidence Based Treatment Possible Without Evidence Based Assessment?

‘No’ is the take-home message from a just-published study by Moses et al. in the Journal of Anxiety Disorders https://www.sciencedirect.com/science/article/pii/S0887618520300931. An evidence-based assessment includes a diagnostic interview, as well as a clinical interview and psychometric tests.

Moses et al. (2020) summarise the literature showing that the inclusion of a diagnostic interview improves outcomes by minimising missed diagnoses and misdiagnoses.

These authors bemoan their finding that only a small minority of Australian psychologists use a diagnostic interview, but the position is even worse in the UK, as the largest provider of services, the Improving Access to Psychological Therapies (IAPT) programme, explicitly excludes the making of diagnoses/diagnostic interviews. IAPT cannot improve access to evidence-based psychological therapies because it does not operate the admission gate of an evidence-based assessment.

http://www.cbtwatch.com/is-evidence-based-treatment-possible-without-evidence-based-assessment/
 
Those are not on the same level of scientific validity, but the idea that it's possible to fix any problem that is a complete mystery makes exactly as much sense as trying to cure AIDS without knowing about viruses, let alone that a particular virus causes it. The whole premise here is absurd.

I don't understand how something this basic, as common sense as it gets, can be waved off entirely. It's possible to randomly stumble onto something by simply brute-forcing solutions, but there is no credible reason why this process should be any different with psychobabble than it is with drugs, where if one were to simply randomly try all the things it would take millennia or more to randomly chance upon the right treatment.

Nobody can even define the mechanism of so-called conversion disorder and anxiety has been stripped of all common meaning, it's left entirely to the imagination. But people think it's possible for someone to just randomly intuit the right answer? Setting aside the fact that conversion disorder is completely unscientific BS nonsense, it still makes no sense that it would be possible to randomly find the right answer without even a hint about where to begin, especially by doing the same things over and over again without learning anything from the experience.

This isn't evidence-based it's fantasy-based. The only way any rigorous assessment or scientific research will provide clues here is in invalidating the ridiculous fantasy of psychosomatic illness, whether en masse or in individual cases. Especially in a process that advises against rigorous assessment, knowing this is how the concept has been invalidated many times in the past.

There is no way to actually assess things here, the concepts are vague and impossible to validate. I still don't understand how it's common practice to "diagnose" anxiety or depression without any assessment whatsoever, let alone a reliable test. It's completely reckless and clearly invalid. Knowing what the problem is before fixing it is not optional. Science is not optional and fantasy-based evidence is definitely not the answer, no matter what the question.
 