A general thread on the PACE trial!

Discussion in 'Psychosomatic research - ME/CFS and Long Covid' started by Esther12, Nov 7, 2017.

  1. Sly Saint

    Sly Saint Senior Member (Voting Rights)

    Messages:
    9,924
    Location:
    UK
    I thought they *were* 'the experts'... or at least the SMC keeps making out that they are.

    Or was the 'expert advice' from the same person/people who deduced that half the working population had a score under 85?
    [image attachment: upload_2019-8-17_9-42-45.png]
     
    Snow Leopard, pteropus, Barry and 9 others like this.
  2. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,563
    Location:
    UK
    They admit in the main PACE paper that the normal ranges used in the recovery paper were ad hoc. From the minutes of the PACE committee meetings it is unclear how the changes were made, or who, if anyone, actually approved them or even realised they had happened, beyond there being a stats plan.

    They dropped the recovery criteria as a secondary outcome for the trial in the stats plan (although, as far as I know, not in the protocol; they just assume the stats plan supersedes the protocol, even though it is only meant to add detail). This does appear to have happened before they unblinded the data, but they may have produced blinded summary data for one committee prior to that. Blinded as in not specifying the treatment arm, but at that point they could still make a good guess as to how each arm was doing.
     
    Barry, JohnTheJack, Lucibee and 2 others like this.
  3. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
    I'm never convinced by the implications they draw from this. Surely there must have been a wealth of tea-room chats and so forth in which they inevitably discussed how things seemed to be panning out, even though they doubtless should not have done so. Given that everything about the trial was unblinded, the way things were trending would have been obvious long before the data were fully released.
     
    MEMarge, Louie41 and Sean like this.
  4. Amw66

    Amw66 Senior Member (Voting Rights)

    Messages:
    6,769
    FINE data may have been available by then? If so, they would not have needed to see unblinded data.
     
    MEMarge, Louie41, Sean and 1 other person like this.
  5. large donner

    large donner Guest

    Messages:
    1,214
    Was it Schrödinger's cat?
     
  6. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,563
    Location:
    UK
    Unblinding is the formal process where they look at the data, or associate treatment labels with the data (I think they may have prepared summary stats for the data management committee). It doesn't mean they couldn't have had a good guess at what was happening. They were set up to have good knowledge.
     
  7. Sean

    Sean Moderator Staff Member

    Messages:
    8,064
    Location:
    Australia
    Plus they knew by then that actimeters in multiple studies were showing a null result (for CBT at least).
     
    Barry, MEMarge, Cheshire and 6 others like this.
  8. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
    Exactly. I find it highly implausible that the investigators wouldn't have been sounding out opinions and impressions from the therapists during the trial. Their subsequent behaviour has shown loud and clear that they lack the integrity needed to refrain from doing so. And of course it would all have been ever so innocent, innocuous and chatty, so no one would have any hint of hidden agendas.
     
  9. Mithriel

    Mithriel Senior Member (Voting Rights)

    Messages:
    2,816
    As people have said before, if you run a trial counting how many tall men there are in a group, you do not need to see any data to know that changing the definition of tall from 6 foot to 5 foot 9 inches will give you a higher number.

    'The changes were made before the figures were unblinded' is therefore a non sequitur and meaningless. Rather than an exoneration and a reassurance, stating it (or agreeing with it) highlights ignorance, not expertise.
     
  10. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,563
    Location:
    UK
    It is a little more complex than that, because they are not looking at raw counts but at whether there is a significant difference between groups. So it is more like saying there are more tall men than tall women in a group. That is likely to be true if you choose 6 ft as the threshold for tall, but if you choose 9 ft it won't be (no one in either group qualifies), and as you lower the threshold it also becomes less likely to be true (5 ft 9 will probably still show a difference, but at 5 ft 3 the numbers may be fairly even and not significantly different).
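    This threshold effect can be sketched in a few lines of Python. The heights and group sizes below are invented for illustration (they are not trial data): at an extreme cutoff in either direction the group difference vanishes, while a mid-range cutoff shows a large, significant difference.

    ```python
    # Hypothetical illustration: how threshold choice drives "significance".
    # Heights are invented (cm); this is not trial data.
    import math
    import random

    random.seed(0)
    men = [random.gauss(175, 7) for _ in range(500)]
    women = [random.gauss(162, 7) for _ in range(500)]

    def two_proportion_p(p1, n1, p2, n2):
        """Two-sided p-value for a two-proportion z-test (normal approximation)."""
        pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        if se == 0:
            return 1.0  # identical proportions: no detectable difference
        z = abs(p1 - p2) / se
        return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

    results = {}
    for cutoff in (135, 170, 205):  # roughly 4ft5, 5ft7, 6ft9
        p_m = sum(h >= cutoff for h in men) / len(men)
        p_w = sum(h >= cutoff for h in women) / len(women)
        results[cutoff] = two_proportion_p(p_m, len(men), p_w, len(women))
        print(f"cutoff {cutoff} cm: men {p_m:.2f}, women {p_w:.2f}, p = {results[cutoff]:.4f}")
    ```

    At 135 cm nearly everyone in both groups counts as 'tall', and at 205 cm almost no one does, so neither extreme distinguishes the groups; only the mid-range cutoff produces a significant difference.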
     
    Barry, MEMarge, alktipping and 3 others like this.
  11. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,175
    Location:
    London, UK
    Yes, the trick is quite subtle. I have come to think that the key thing is that the threshold has been brought down to come into contact with the outcome band within which systematic bias is expected to operate. You could not tell in advance where this band would end up (50-65 or 70-85), but having treated some patients it would be obvious, even if only subliminally, where to drop the threshold to. It is similar to cutting off the bottom of the Y axis in the figure on fatigue, except that it is not merely a presentational effect: it wreaks havoc with the statistics.
     
    Barry, pteropus, MEMarge and 8 others like this.
  12. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
    A thought experiment.

    Suppose there is a condition, previously proven by good science to be almost certainly due to unhelpful beliefs in patients. Suppose additionally that all the symptoms are just that: subjective symptoms. There are no objective measures, and this too is well accepted by good scientists. But the subjective symptoms are nonetheless very distressing to patients, and you want to trial an unblindable intervention that might eliminate them. By definition you would then have a cure for the condition if your trial can reliably show that these subjective symptoms disappear, and do not return even some time after stopping the intervention.

    For me, in the context of this thought experiment, that might have something going for it; I suspect it is the basis for many psychiatric trials. But crucially it all rests on the premise that the condition has already been well proven by good science to be due to unhelpful beliefs in the first place, and to be measurable only by subjective outcomes. That has to be a given, else the whole trial is worthless.

    And of course this is exactly what PACE did. The whole thing relied totally on the presumption that the illness is perpetuated due to fear avoidance behaviours, and that only subjective symptoms mattered. A classic psychiatric trial.

    Although they initially paid lip service to some objective measures, they soon got shot of them because of evidence from previous trials that objective outcomes were at odds with their own expectations; ironically that evidence was good.

    So they ended up with what looks like a classic psychiatric trial: fully unblinded, fully subjective outcomes. But it lacked the essential underpinning: prior good scientific proof that the condition was due to faulty beliefs and measurable only subjectively. That 'proof' existed only in their own little bubble.

    So PACE fell flat on its face at the very, very first hurdle. There was no prior good scientific evidence that the condition was due to unhelpful beliefs, nor that it could only be measured by subjective symptoms alone. But they had previously defined it to be such, on scant evidence, and then sought to prove their interventions based on their flawed definitions and assumptions.

    So to me PACE was premised on totally circular logic. Presume the condition is due to fear avoidance behaviours and only needs measuring on subjective outcomes (so unblinded interventions are not an issue). Then, if the trial results come out positive, take that as further 'proof' that the condition is due to fear avoidance behaviours and measurable on subjective outcomes alone.

    It feels to me that the nearest thing these particular psychiatrists have to objective outcomes is subjective ones, and they treat them as such.

    @adambeyoncelowe, @MEMarge. I'm sure this is all old hat now re the NICE guideline, but I just wanted to highlight my point in paragraph 3, re PACE being falsely hypothesised at the outset, given that no prior good scientific evidence was in place to justify its hypotheses. Without that, PACE simply cannot be used as supporting evidence of any kind, since its hypotheses rely on supporting evidence that does not itself exist.

    Just rambling ;).
     
    Last edited: Oct 2, 2019
  13. Snow Leopard

    Snow Leopard Senior Member (Voting Rights)

    Messages:
    3,860
    Location:
    Australia
    This brings home the point.

    By "normal" they meant within 1 SD of the mean or above, which, assuming a Gaussian distribution, would be the top 84% (68% within 1 SD plus 16% above) of the population. Using the actual distribution in the graph above, this would put the SF-36 physical function cutoff somewhere around 82-84. Meaning they either don't understand statistics or flat out lied when they specified 60.
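    The gap between the SD rule and the actual percentile can be made concrete with a toy dataset. The scores below are invented to mimic a ceiling-heavy distribution; they are not real SF-36 population data. On skewed data, "mean minus 1 SD" lands far below the score that actually separates off the bottom 16%.

    ```python
    # Invented, ceiling-heavy scores (NOT real SF-36 data): most people near
    # the 100-point ceiling, with a long tail of low scorers.
    import statistics

    scores = ([100] * 40 + [95] * 20 + [90] * 10 + [85] * 8 + [80] * 6
              + [70] * 5 + [60] * 4 + [40] * 4 + [20] * 3)  # n = 100

    mean = statistics.mean(scores)
    sd = statistics.pstdev(scores)
    cutoff_sd = mean - sd                                 # the "mean minus 1 SD" rule

    scores_sorted = sorted(scores)
    cutoff_pct = scores_sorted[int(0.16 * len(scores))]   # actual 16th percentile

    print(f"mean = {mean:.1f}, SD = {sd:.1f}")
    print(f"mean - 1 SD cutoff: {cutoff_sd:.1f}")
    print(f"true 16th-percentile cutoff: {cutoff_pct}")
    ```

    On these made-up numbers the SD rule puts "normal" at about 69, while the score that actually marks the bottom 16% of the sample is 80; the more skewed the data, the further the SD rule drifts below the percentile it is supposed to stand in for.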
     
  14. alex3619

    alex3619 Senior Member (Voting Rights)

    Messages:
    2,200
    This option. An SD-based 'normal range' is meaningless on this kind of skewed, ceiling-limited dataset. PDW wrote a paper in 2007 showing he understood it was a biased measure, one which in fact biases toward their hypothesis. Look up his paper on SD for SF-36 PF data. This is deliberate manipulation of data to obtain an answer biased toward their hypothesis.

    As a smell test, to see if it reeks, given that the maximum score is 100, ask yourself what their average PLUS 1 SD comes to. Then ask yourself why 65 counts as serious disability, in their own assessment, yet 60 is a basis for recovery. Then ask yourself what SF-36 PF scores apply to other disabling conditions, like heart failure.

    The whole thing is a deliberate manipulation of data to deliver a biased outcome to the medical profession. The real tragedy is most doctors fall for it.
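    The smell test above is simple arithmetic. With the invented figures below (hypothetical, not the actual trial numbers), mean + 1 SD overshoots the 100-point ceiling, so the distribution cannot be normal, and "mean - 1 SD" therefore does not mark the 16th percentile it is supposed to.

    ```python
    # Smell test with invented figures (NOT the actual trial numbers):
    SCALE_MAX = 100          # SF-36 PF scores run 0-100
    mean, sd = 85.0, 24.0    # hypothetical population mean and SD

    upper = mean + sd        # would be the 84th percentile IF data were normal
    lower = mean - sd        # would be the 16th percentile IF data were normal

    print(f"mean + 1 SD = {upper} (scale max {SCALE_MAX})")
    print(f"mean - 1 SD = {lower}")
    print("distribution cannot be normal" if upper > SCALE_MAX else "no ceiling overshoot")
    ```

    Whenever the "84th percentile" implied by the normal model lies above the highest score the scale can record, the Gaussian assumption has already failed before any cutoff is chosen.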
     
  15. Sean

    Sean Moderator Staff Member

    Messages:
    8,064
    Location:
    Australia
    From that paper:
    Is a Full Recovery Possible after Cognitive Behavioural Therapy for Chronic Fatigue Syndrome?
    Knoop H, et al.
    Psychother Psychosom 2007;76:171–176
    DOI: 10.1159/000099844

    https://www.karger.com/Article/Abstract/99844
     
  16. alex3619

    alex3619 Senior Member (Voting Rights)

    Messages:
    2,200
    This is interesting but not the paper I was referring to. I was referring to the paper identified by David Tuller in a thread on this forum. It is specifically on SD and SF36PF in CFS. I do not recall the exact name or reference.
     
  17. James Morris-Lent

    James Morris-Lent Senior Member (Voting Rights)

    Messages:
    903
    Location:
    United States
    @Barry

    I sort of see three 'levels' of failure with PACE/BPS as applied to ME and other things bucketed into MUS/functional/psychosomatic.

    -I'd classify what you've shown in your post as the widest, 'philosophical' defect: the whole BPS paradigm presupposes itself in order to generate evidence in its favour. Feeding into this, it is set up to be unfalsifiable. The only way for it effectively to fail is to appear so implausible, in light of generally accepted interpretations of reality, that it looks absurd; unfortunately, psychosomaticism is a centuries-old fetish in medicine and by default sounds plausible to probably most people.

    You're saying that this amounts to a lack of 'probable cause' or 'reasonable suspicion', to use legal/law enforcement terms. I think this is somewhat the case but that's not really the punchline here. For me, the punchline regarding the issues at hand for us is the insight into the theoretical structure which any serious scientist and policymaker ought to find quite suspicious once enlightened as to its nature, and thus subsequently begin to look at it critically and, frankly, take offense to it. Fortunately, I think we are seeing this happening to a promising degree.

    There are even broader lessons here for all scientific thought and even the necessarily speculative theorizations of the humanities. I mean, it really isn't even scientific thinking and is more on the intellectually lazy end of the humanities that piles bullshit on bullshit to build a dogma, attract zealots, and demonize detractors. It's disturbing... well, actually it's just plain fucked up, that this sort of trash can so easily enjoy a prominent and celebrated status in medicine and science.

    -The next 'level' is more where probable cause comes in. I think of these as the 'procedural' defects. It starts with the idea that @Jonathan Edwards shared a while back: (if I remember accurately) even on the most charitable reading, the magnitude of the effect size in PACE and similar trials is so small (and so overlapping across the individual study arms?) that, in real life, CBT/GET practitioners could not possibly have discerned it, i.e. if they had perceived reality dispassionately they would never have had 'probable cause' to think these treatments were worth investigating, because their experience would have said otherwise.

    The good thing, though, is that this has taught us all another valuable lesson! Clinicians do not perceive reality dispassionately - actually they are prone to wildly inaccurate assessments. Or, more formally: you can't put any scientific stake in claims of 'clinical experience'; if it's actually worth anything, it will prove out in a convincing trial without much trouble.

    And, actually, I think 'probable cause' is not really the definitive angle here when you think about it. Rituximab was trialed based on 'clinical experience' and with no actual 'theory' behind it, for instance. Or, nobody knows why ECT is so effective in many cases of severe depression, and it frankly doesn't even sound plausible, but it works. Which is to say, I'm not in principle upset that 'psychological' therapies were trialed, even though it came from doltish theory and, in retrospect, preposterously inaccurate interpretation of past experience.

    It's more what came after, in the actual execution of the trial - telling patients that CBT or GET was already known to be effective, sending out propaganda pamphlets, compelling patients to 'push through', changing outcome measures/thresholds, etc. Many of the sorts of things uncovered by enterprising patients and pursued by David Tuller. That's where the real 'miscarriage of justice' begins to take shape for me. It's sort of like if police raid your house on dubious - but not quite unforgivably so - suspicion, find nothing relevant, and then beat you up, plant narcotics, weapons and a dead body, shoot your pets, and then sue you for emotional distress. This is the BPS brigade.

    -The last level is the 'interpretive' defects: all the stuff we always talk about regarding evidence. This is where the real problem continues. As we have discussed ad nauseam, they are hopeless, for reasons that circle back to the 'philosophical' defect. Fortunately, they managed to (a) produce evidence that contradicts their theory in probably the most direct way possible (i.e. patients' beliefs 'improve' on questionnaires but objective measures do not improve); less fortunately, they (b) did the Lightning Process trial, which highlights the naked absurdity of the whole approach for all to see.


    Anyway that's my thoughts for the day.
     
  18. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
    I still think (especially think) that general trends in group differences could have become quite evident during "coffee room chats" etc. In such discussions it is usually the differences that stand out and that people start to comment on... it's human nature to home in on differences. And it would not have taken our "super high integrity" investigators long to realise they had a problem. I'm sure ongoing informal chats within the whole team would have given them a very good feel for how the intervention arms were diverging, at least in broad terms.

    Also: did the therapists all stick with just the one therapy? Or would they themselves have been moving between more than one?
     
    alktipping and Annamaria like this.
  19. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,420
    Sorry, can you just clarify where the '16' comes from please @Snow Leopard.
     
    alktipping likes this.
  20. Snow Leopard

    Snow Leopard Senior Member (Voting Rights)

    Messages:
    3,860
    Location:
    Australia
