There’s something rum here. Even the following paper, which is trying to argue for the cost-effectiveness of IAPT for anxiety and depression:
Cost of Improving Access to Psychological Therapies (IAPT) programme: An analysis of cost of session, treatment and recovery in selected Primary Care Trusts in the East of England region
cites that it was an economic argument by Clarke on which it was introduced on a mass scale,
and that two pilot areas ‘tested this’, and then quotes Clarke et al. (2009):
Improving access to psychological therapy: Initial evaluation of two UK demonstration sites
This 2024 paper quotes it as:
“Of the 55% who completed 2 or more sessions 5% went from unemployment to part or full time employment”
which, multiplying through, is at best just over 2.5% of all referrals that they sold it on.
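To make the headline arithmetic explicit (a quick sketch; the two percentages are taken directly from the quote above):

```python
# What the quoted headline works out to across everyone referred.
completed_share = 0.55   # "55% who completed 2 or more sessions"
moved_to_work = 0.05     # "5% went from unemployment to part or full time employment"

# 5% of the 55% who completed is the share of ALL referrals.
overall = completed_share * moved_to_work
print(f"{overall:.2%}")  # 2.75% of all referrals
```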
Except looking at this Clarke et al. (2009) paper itself: there were two pilot centres. For Doncaster, 4451 people were referred, and by the end it was 1654 who did two or more sessions. They’ve just excluded the ones who were deemed not suitable, who mutually decided elsewhere was better, or who refused treatment, but 1654/4451 ≈ 0.37.
Of those, only 1257 had their prior condition coded, and of those only 833 had been ill for 6 months.
All of this is in a table on page 4.
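The attrition funnel above can be laid out as a quick sanity check (a sketch; the stage counts are my reading of that page-4 table, and I use 4451 as the referral figure):

```python
# Doncaster attrition funnel from Clarke et al. (2009), as read above.
funnel = [
    ("referred", 4451),
    ("completed 2+ sessions", 1654),
    ("prior condition coded", 1257),
    ("ill for 6 months", 833),
]

referred = funnel[0][1]
for stage, n in funnel:
    # Each stage as a share of everyone originally referred.
    print(f"{stage:>24}: {n:5d}  ({n / referred:.1%} of referrals)")
```

Run it and the "two or more sessions" group comes out at about 37% of referrals, and the 833 who meet the illness-duration criterion at under 19%.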
At the end of treatment some 650 were ‘still cases’ by their own IAPT recovery measures. The recommendation in the paper is to step these patients up to high-intensity IAPT therapy, BUT the therapist was allowed to refer outside the service to counselling if patients expressed a preference for that.
It seems at that point there was a clear vote with their feet, as only 25 went for high-intensity IAPT CBT and some 400+ went to counselling.
Page 5, in the employment and benefits outcomes section, notes it only had data for 445 (27%), but that of those who ‘had 2 or more sessions AND had been on SSP’, 4% returned to work. They claimed this correlated with the claims made by Layard et al. (2007), which I guess was the economic case for setting IAPT up versus its claimed costs, of exactly 4% ‘of those who complete treatment would return to work’. However, I haven’t checked whether the Layard paper caveated that as applying only to those on SSP.
The follow-up measures for Doncaster were taken in Jan–Feb 2008, contacting those who had completed treatment by Sept 2007 (a minimum of 5 months later). The eligible group who’d completed treatment was 1444, but only 893 people (chosen at random) who’d completed treatment were mailed a survey.
They claim that of those who replied with employment data, 343 people, there was a 10% increase in employment: 190 were in work and not claiming sick pay, compared with 155 coded as such initially. Except it is commonly known that most sick pay ends at 6 months, and around that point ‘processes begin’ if you can’t start trying to return to work, as well as people simply having no sick pay left. So the remaining 153 of the 343 who did respond (and I’d imagine there is a bias against writing back with bad news) had no employment, and I’d guess by this point many will have exhausted their sick pay.
So actually we don’t know how many people ‘went the other way’, but the evidence amounts to 35 more people being coded as employed than had been coded as such before treatment, among the 343 who completed and replied, out of an eligible sample of 1444. They can’t assume that the 343 is an externally valid representation of all 1444 and extrapolate.
It’s 35 more people: 10% of 343, but not extrapolatable to claim even 10% of 1444. And no info on how many were employed and went the other way.
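The follow-up arithmetic above, written out (a sketch using the figures as I read them from the paper):

```python
# Doncaster follow-up employment figures as read above.
respondents = 343        # replied with employment data
employed_before = 155    # coded as in work, not claiming sick pay, initially
employed_after = 190     # coded as such at follow-up

net_gain = employed_after - employed_before        # the headline change
gain_rate = net_gain / respondents                 # the claimed "10%"
not_employed_after = respondents - employed_after  # still not in work

eligible = 1444
# Extrapolating gain_rate to all 1444 eligible completers assumes the 343
# respondents are representative; the non-response bias noted above makes
# that assumption doubtful.
print(net_gain, f"{gain_rate:.1%}", not_employed_after)  # 35 10.2% 153
```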
Page 7: then there is the Newham centre. They had only 135 people with pre- and post-treatment employment questionnaires. They say the change is 10% not on SSP, but then report that 4% of this were in the ‘other’ category, with no work, SSP or benefits.
Their follow-up (also Feb 2008) found only 161 eligible to be mailed the survey, and only 60 responded. And they don’t even mention employment for this group (just analysis of e.g. GAD scores).
I’m struggling to find any robust example of 5% in the Clarke et al. (2009) paper that corresponds to the quote this 2024 paper has referenced to it. And at longer term, the only actual evidence I can see is a change of 35 people; call it 10% of a very narrowed-down field of 343 if they want. But 4451 were referred there, and they’ve used a heck of a lot of ‘not gates’ to narrow it down: only 343/4451 ≈ 7.7% (35/4451 ≈ 0.8%) even got to the stage of completing a survey. A heck of a selective filter.
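The overall filter can be summed up in two lines (a sketch; same Doncaster figures as above, with 4451 as the referral count):

```python
# From referral to returning a follow-up survey with employment data.
referred = 4451
surveyed_with_data = 343  # respondents with employment data at follow-up
net_moved_into_work = 35  # net increase in those coded as employed

print(f"{surveyed_with_data / referred:.1%}")   # 7.7% even completed a survey
print(f"{net_moved_into_work / referred:.1%}")  # 0.8% net moved into work
```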