Treatment outcome in adults with CFS: a prospective study in England based on the CFS/ME National Outcomes Database, 2013, Crawley et al

Discussion in 'ME/CFS research' started by bobbler, Apr 8, 2024.

  1. bobbler Senior Member (Voting Rights)
    Copied from:
    UK:ME Association funds research for a new clinical assessment toolkit in NHS ME/CFS specialist services, 2023


    This is an aside and isn't to do with any of the PROMS authors here; however, I just needed to get it out because I couldn't quite believe it (certainly given what we know/is acknowledged now). From the Crawley and White (2013) paper: Treatment outcome in adults with chronic fatigue syndrome: a prospective study in England based on the CFS/ME National Outcomes Database | QJM: An International Journal of Medicine | Oxford Academic (oup.com)

    They detail below how they used a complex 'GEE model', apparently to adjust for follow-up questionnaires that were returned late, so as to be precise about what the 'fatigue level' would have been at 12 months. They say some teams obtained data at additional follow-up points (6 and 24 months), so I guess these were used to fit the model. I don't know how many of the sample this applied to, nor how 'far off' 12 months they were. But it seems a pretty astounding approach given they are using retrospective data on something that hasn't actually been researched before: it isn't as though they've got 25 trials' worth of data covering all these time periods to show some curve.

    Yet I can only assume they think they are being really diligent by doing this.

    "Follow-up interval
    Each team sent out follow-up questionnaires at 12 months. Variation in when the questionnaires were sent and delays in return of 12-month follow-up questionnaires led to variation in the exact time of follow-up. Also, some teams obtained data at additional follow-up points (e.g. 6 and 24 months). To maximize the availability of follow-up data for our analysis, we determined a margin of follow-up either side of 12 months. We did this by fitting fractional polynomial generalized estimating equation (GEE) regression models of fatigue against time.21,22 This method (implemented in Stata as fracpoly combined with xtgee) compares a linear GEE model with the best-fitting first and second degree models, each containing fractional polynomial terms (from a pre-defined set of integer, fractional and negative powers) for time. The differences in deviances between the linear and 1st-degree model and the first and second degree models are tested using a chi-squared test and the resulting P-values indicate whether the change in outcome over time is linear or whether it has a more complex shape. We inspected a plot of predicted values of fatigue against time from the model with the best fit to determine an appropriate follow-up interval in which observed fatigue scores could be assumed to be representative of the scores predicted at 12 months. In patients with more than one follow-up assessment, the closest to 12 months was used."
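    For anyone trying to picture what that method actually involves, here is a minimal sketch of the general idea: fit GEE models of fatigue against time (a linear term versus fractional-polynomial terms) and inspect the predicted curve to decide how wide a window around 12 months can be treated as 'the 12-month score'. The paper used Stata's fracpoly with xtgee; this Python/statsmodels version, and all the data in it, are purely hypothetical and only illustrate the technique, they are not the authors' code.

```python
# Illustrative sketch only, NOT the authors' code: the paper used Stata's
# fracpoly combined with xtgee. This approximates the same idea in Python
# with statsmodels, on an entirely made-up long-format dataset
# (one row per returned questionnaire, clustered by patient).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_patients, n_visits = 200, 3
df = pd.DataFrame({
    "patient_id": np.repeat(np.arange(n_patients), n_visits),
    # follow-up times scattered around 6, 12 and 24 months
    "time_months": np.tile([6.0, 12.0, 24.0], n_patients)
                   + rng.normal(0, 2, n_patients * n_visits),
})
df["time_months"] = df["time_months"].clip(lower=1.0)
# invented downward fatigue trajectory plus noise
df["fatigue"] = 25 - 6 * np.log(df["time_months"]) + rng.normal(0, 3, len(df))

# Candidate transformed time terms: a small subset of the usual
# fractional-polynomial powers {-2, -1, -0.5, 0, 0.5, 1, 2, 3}.
df["t"] = df["time_months"]
df["t_sqrt"] = np.sqrt(df["time_months"])
df["t_log"] = np.log(df["time_months"])

def fit_gee(formula):
    """GEE of fatigue on (transformed) time, clustering on patient."""
    return smf.gee(formula, groups="patient_id", data=df,
                   cov_struct=sm.cov_struct.Exchangeable(),
                   family=sm.families.Gaussian()).fit()

linear = fit_gee("fatigue ~ t")             # linear time
fp1 = fit_gee("fatigue ~ t_sqrt")           # example 1st-degree FP model
fp2 = fit_gee("fatigue ~ t_sqrt + t_log")   # example 2nd-degree FP model

# The paper compared deviances between the models; here we simply inspect
# predicted fatigue over time for one model to judge how wide a window
# around 12 months could be treated as representative of 12-month scores.
df["predicted"] = fp2.predict(df)
print(df.sort_values("time_months")[["time_months", "predicted"]].round(2).head(10))
```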
     
  2. bobbler Senior Member (Voting Rights)

    In relation to the conclusion of the Crawley and White (2013) paper: "Although NHS services are moderately effective in improving fatigue in patients with chronic fatigue syndrome, they are much less effective in improving physical function than similar treatments delivered in the PACE trial. This requires urgent investigation to determine whether it is due to differences in the delivery or the content of treatments offered by services."

    Here is the thing: is it really that different from what PACE actually found, now that:

    - we can look at the objective measures for physical function?

    - we know what we know regarding potential fraud, with the subjective measures analysed against changed 'recovery' thresholds?

    - we have had it confirmed, as many already knew, that short-term subjective measures were taken after newsletters and CBT had influenced participants 'to be positive in their scores'?


    So I do have to ask the question: is this PROMS thing a hangover of 'false belief' / self-denial, and of really not getting their heads around how their treatment, behaviour and the power differential skewed what they ever got to see and acknowledge?

    Are they all thinking the issue with GET has been 'those physios who overdo it', with each assuming that means someone other than themselves?

    A notion happily stoked by, e.g., Sharpe suggesting 'maybe it was how some physios did it', etc.


    I'm not saying that any or all of them are necessarily trying to 'turn back time' on the treatment. But I think they might have become functionally fixed, potentially from also being a bit gaslighted. As @Jonathan Edwards noted, they think the main issue/question needing to be answered is 'is it being delivered effectively?' rather than 'is it the right thing?'.


    So yes, a potential repetition or continuation of history, because they think the 'change in clinical practice' is an issue in a 'different place' from where the actual issue might be?

    EDIT: I'm also wondering whether, for some, there is a sense of getting a second bite of the cherry: others doing it wrong ruined it, and they really think that proper change would be a 'baby out with the bathwater' situation, because they believe they were already part-way there on 'changing things'.

    Whereas the guideline is emphasising an underlying issue of a magnitude that really needed, if not a new/different baby altogether, then one with a completely opposite 'gist/understanding' of what they will be required to deliver.

    So even if such evolutions might have seemed laudable to those on the inside in 2013... that isn't the level and type of change that needs to be measured today, or flagged as requiring a change of method?
     
  3. bobbler Senior Member (Voting Rights)

    Oh yes, and (sorry, my brain keeps picking up on things and then the train of thought spots another to summarise)... to draw attention back up to near the top of this: the paper by Crawley and White (2013) explicitly says 'physical function' in the abstract conclusion:

    "Patients who attend NHS specialist CFS/ME services can expect similar improvements in fatigue, anxiety and depression to participants receiving cognitive behavioural therapy and graded exercise therapy in a recent trial, but are likely to experience less improvement in physical function."

    but then makes its claims about 'chronic fatigue syndrome' in the wording of the conclusion itself:
    "Although NHS services are moderately effective in improving fatigue in patients with chronic fatigue syndrome, they are much less effective in improving physical function than similar treatments delivered in the PACE trial."

    In the statistical analysis section:
    "Our primary outcome measures were fatigue and physical function. Our secondary outcome measures were anxiety, depression and pain. "

    "We defined clinically useful improvements within each person as having a difference of 2 points on the total Chalder Fatigue Scale and a difference of 11 points on the SF-36 physical function subscale between the baseline and follow-up measurements. We chose these cut-offs because they equated to 0.5 SD of the distribution of the baseline measurements."

    It is that phrase, stating clearly 'likely to experience less improvement in physical function', that was admitted back in 2013 in a paper with both of those authors on it.

    And which part, now, would be deemed an accurate description of 'ME/CFS' these days: their 'physical function' measure, or the rest, which they defined in the conclusion as 'chronic fatigue syndrome' but in the abstract as 'fatigue, anxiety and depression' (and which that old CBT was supposed to tackle)?

    And the big difference over that time is quite a significant change in how the condition is understood and defined. Ergo, that first phrase is correct in the sense that it will always describe, for that point in time, '[some] patients who attended the CFS/ME services',

    but I'm unconvinced by claims that PEM, or an accurately defined version of it, was required for data collected on any of those dates. It is interesting that, for example, the papers with Gladwell make a point of claiming PEM was required even where the data would have been collected before e.g. 2018 (they then reference the 2007 guideline), despite the papers being published in 2023 and 2024.

    So yes, I've found it quite interesting spotting this 2013 paper and then looking at Gladwell's work on PROMS, which includes critique of the CFQ as the 'fatigue' measure and suggestions that better PROMS are needed, but which began much earlier than the recent publication dates might suggest and, in both cases, collected data prior to the new guideline.
     
