A general thread on the PACE trial!

Discussion in 'Psychosomatic research - ME/CFS and Long Covid' started by Esther12, Nov 7, 2017.

  1. Lucibee

    Lucibee Senior Member (Voting Rights)

    Messages:
    1,498
    Location:
    Mid-Wales
    Not until a few days after that. Here is his reply:

    [Attached image: MSharpe_2jun_exchange.png]
     
  2. Lucibee

    Lucibee Senior Member (Voting Rights)

    Messages:
    1,498
    Location:
    Mid-Wales
    However, it *is* a trial statistician's job to be aware of bias - after all, that's what the trial design is there to address!

    The mathematical aspects of the stats used in PACE are not so much the problem. It's the measures that were used - and I suspect the statisticians didn't want to rock the boat in that regard. And there's not much they can do if they weren't consulted while the trial was being designed, particularly if those measures have been used in previous trials.
     
  3. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,416
    Location:
    London, UK
    It is a statistician's job to be aware of the concept of bias and its dynamics, but I don't think it is their job to know the realities of how bias arises in specific situations - because that is the job of the scientists doing the experiment. Having said that, one can reasonably expect statisticians to have an idea of the basic patterns, just as a scientist is expected to know what parametric statistics are about.

    What is so damning is that a whole group of people sat round tables with the blessing of the MRC and designed a trial of which, if I had been there, I would have said: 'Look guys, this is a non-starter; it is no better than the amateur physio trials of the 1980s. In real life, trials like this are incapable of telling us anything.' The fact that Fiona Watt insists on defending this group makes things all the worse.
     
  4. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,664
    Location:
    UK
    I think too many statisticians follow recipes and forget about the limitations of the techniques. Sometimes (or often) they seem simply to assume things like Gaussian error in readings and hence apply standard techniques, or to assume the techniques are robust even if the assumptions aren't quite true. I've seen occasional papers where they have used simulated data to analyse a technique and show how it breaks down as the assumptions break - but remarkably few of them.

    One thing I notice in PACE and other work is that if you call something a scale, statisticians seem to treat it as such without any thought about its properties.
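
     To make that concrete, here is a minimal sketch (all data and parameters invented, not anything from PACE) of the kind of assumption-checking simulation described above: generate data where the null hypothesis is true, and see how often a standard two-sample t-test rejects when the readings are roughly Gaussian versus when they are coarse, skewed questionnaire-style scores.

    ```python
    # Illustrative simulation only: how does a standard t-test behave when
    # the "measurements" are questionnaire-style scores rather than the
    # roughly Gaussian readings the textbook recipe assumes?
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_sims, n_per_group, alpha = 5000, 30, 0.05

    def false_positive_rate(sampler):
        """Fraction of null simulations in which the test (wrongly) rejects."""
        rejections = 0
        for _ in range(n_sims):
            a, b = sampler(), sampler()  # both groups drawn from the same distribution
            _, p = stats.ttest_ind(a, b, equal_var=False)
            rejections += p < alpha
        return rejections / n_sims

    # Assumption holds: continuous, roughly Gaussian measurements.
    def gaussian():
        return rng.normal(loc=50, scale=10, size=n_per_group)

    # Assumption strained: skewed latent values squashed onto 0-3 item responses,
    # then summed into a 0-33 total - crudely mimicking a questionnaire "scale".
    def questionnaire_score():
        latent = rng.exponential(scale=1.0, size=(n_per_group, 11))
        items = np.clip(np.floor(latent), 0, 3)
        return items.sum(axis=1)

    print("False-positive rate, Gaussian readings:", false_positive_rate(gaussian))
    print("False-positive rate, questionnaire scores:", false_positive_rate(questionnaire_score))
    ```

     Whether the rejection rate stays near the nominal 5% or drifts is something this kind of check finds out by running it, rather than by assuming robustness.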
     
    andypants, MEMarge, Esther12 and 5 others like this.
  5. James Morris-Lent

    James Morris-Lent Senior Member (Voting Rights)

    Messages:
    903
    Location:
    United States
    "I'm shocked, shocked, ..."
    Most unfortunate but not surprising, I suppose. If you want to skirt the law, find the right unscrupulous lawyer. If you want to skirt sound science, find a pliable statistician, among other things.

    I guess I have no idea what it's like to get a graduate degree in statistics or biostatistics. For whatever reason it's ingrained in me that the practice of statistics includes doing one's utmost to look at and critically examine the whole picture; the plug-and-chug is just the final piece after making sure everything is carefully set up to produce good findings. Maybe this gets lost or becomes impractical within the current reality of the profession.

    This seems like just one entry point into a wicked problem that, to address substantively, would require wholesale change in the incentive structure of scientific research. It seems like otherwise only a relatively minuscule number of certain types of people, who really care about doing things correctly and are bristly enough to stand up for it, will be trying to police an impossibly large mass of suspect research.
     
    rvallee, Keela Too and MeSci like this.
  6. Sean

    Sean Moderator Staff Member

    Messages:
    8,232
    Location:
    Australia
    They could always decline to get involved, or withdraw their name from the paper. They have minimum standards they must meet, and a basic understanding of trial methodology and how it relates to statistics is surely one of them.

    Since when has it been acceptable not to understand the problem with subjective-only outcomes in an unblinded trial that is testing 'treatments' consisting entirely of applying psychological manipulation/pressure to patients to report a good outcome?

    How on earth did they either miss, or have no concerns about, the misuse of the SF-36 population data, for example?

    And where's the Bayesian angle in their reckonings? How is any of this even plausible a priori?

    (Not mad at you, Lucibee. Just increasingly frustrated at nobody taking responsibility for this shit show. :banghead:

    Somebody is responsible, and it sure ain't patients. :mad: )
     
    MEMarge, Inara, MeSci and 3 others like this.
  7. Woolie

    Woolie Senior Member

    Messages:
    2,931
    I will try to explain as best as I can.

    Traditional scientific methodology is based on the idea of hypothesis testing. You come up with a hypothesis for your study. For example, your hypothesis might be that people's recall of words from a list will be poorer if they have recently drunk a certain amount of alcohol, compared to those who had a non-alcoholic drink (of course, the participants should be blinded too). The null hypothesis is simply the opposite of your hypothesis - in this case, it's the hypothesis that alcohol consumption will make no difference to recall in this task.

    Traditional hypothesis-testing is asymmetric in that you are supposed to assume the null hypothesis is correct until there is really quite a large amount of evidence to prefer the alternative (it's actually called that, the 'alternative hypothesis'). The null hypothesis is sort of the default position, and you try to gather evidence sufficient to confidently reject it (conventionally at the 95% confidence level - that is, rejecting only when the result would be expected less than 5% of the time if the null were true).

    So it's not the case that "we are extremely confident this null hypothesis is in fact true". It is that we have set ourselves up to be very strict about the conditions under which we will reject the null hypothesis - that is, not unless there's lots of evidence.

    The best analogy is with criminal justice. Most judicial systems take the default position that a person is innocent - unless there is really a lot of persuasive evidence to reject that position. So it's not as though we are really confident that every defendant who comes to trial is innocent. That's not the case. If we were that confident, we would not be putting this person on trial at all (similarly, if we were confident of the null hypothesis in that alcohol study, why even bother to run it?!). It's not a question of confidence, it's a question of where we decide to place the burden of proof: we decide ahead of time to set the stakes so that we don't reject the null hypothesis unless we can be really sure of the alternative.
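
     As a purely illustrative sketch (all numbers invented), this is roughly what testing the alcohol-and-recall example above might look like in practice: the null hypothesis is "no difference in mean recall", and we only reject it if the observed difference would be very unlikely were the null actually true.

    ```python
    # Toy worked example of a two-sample hypothesis test (invented data).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    recall_alcohol = rng.normal(loc=10, scale=3, size=40).round()  # words recalled per person
    recall_control = rng.normal(loc=12, scale=3, size=40).round()

    t_stat, p_value = stats.ttest_ind(recall_alcohol, recall_control, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("Reject the null: evidence that alcohol changed recall.")
    else:
        print("Fail to reject the null: not enough evidence to conclude a difference.")
    ```

     Note the asymmetry: failing to reject the null is not the same as showing the null is true; it just means the burden of proof was not met.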

    Hope that makes some sense to you, @Barry.

    You raise a lot of other points, but I wanted to just address this one, just to help clarify that issue.

    Edit to add: Sorry, lots of others have explained this on the thread before me - I didn't read ahead before posting!
     
    Last edited: Sep 17, 2018
  8. Woolie

    Woolie Senior Member

    Messages:
    2,931
    One other point. The study hypothesis is never a theory or an idea. It's always a specific outcome of a specific study. So the PACE authors weren't testing the hypothesis that PwMEs are deconditioned. That's not a hypothesis in the experimental sense. It might be a proposal, a theory, or a position, but not an experimental hypothesis. The hypotheses were much more specific: that people in the GET (and also CBT) trial arm would exhibit greater improvement in self-rated fatigue/physical function scores over the course of the trial than those in the control (and also APT) trial arm.
     
    JohnTheJack, Barry, Dolphin and 2 others like this.
  9. Woolie

    Woolie Senior Member

    Messages:
    2,931
    I would say it's unreasonable to expect a statistician to understand the psychological factors that may influence outcomes - recall bias, confirmation bias. They also can't be expected to know the nuances of which research designs are best suited to specific questions, what sort of participant sample would be appropriate and representative, or what sort of outcome measures would be most sensitive and indicative. That's the part the primary researchers are supposed to do - that's supposed to be their area of expertise.

    So for example, I do some work looking at the involvement of different brain regions in different types of cognitive problems. I know some regions will commonly appear to be involved in everything, because they're more vulnerable to certain types of brain damage, and I know it's important to correct for that. But I wouldn't expect a statistician to have that kind of nuanced knowledge. It's very specific to my research area.
     
    Last edited: Sep 17, 2018
    Indigophoton, Barry, Dolphin and 3 others like this.
  10. Lucibee

    Lucibee Senior Member (Voting Rights)

    Messages:
    1,498
    Location:
    Mid-Wales
    I know. We are sooooo dumb!

    I don't think it is unreasonable at all. It's why I went to LSHTM to study stats. But I guess my initial interest in stats was sparked while I was studying psychology at Cambridge. But hey, maybe I'm just 'special'.

    ---
    Statisticians are taught about those biases on MSc courses, so it is entirely reasonable to expect them to deal with them.

    I don't like the tone of elitism here - it is not fair to assume that just because someone has studied in one area, they are completely incapable of understanding another. That is a gross presumption to make. We have no idea about the backgrounds of the statisticians involved in PACE. But there should have been enough expertise among the others involved, and in the "endless rounds of peer review", to cover those factors. That they didn't spot the flaws is of great concern - and everyone involved should be ashamed of themselves.
     
    Last edited: Sep 17, 2018
  11. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,664
    Location:
    UK
    I think there is a difference between expecting a statistician to know about effects and expecting them to have a deep working knowledge of the psychology of those effects. Coming from a slightly different angle: when doing data analysis (and machine learning), the people doing it always like to have a subject-matter expert around to give a really good understanding of the data. Otherwise nasty assumptions about the form of the data can be made.

    As research is done in specialist areas, a statistician shouldn't be expected to know the nuances of the data and how it is collected. But they should know the right questions to ask to ensure that the stats are appropriate to the experiment.
     
    Barry, Trish, Keela Too and 1 other person like this.
  12. Kalliope

    Kalliope Senior Member (Voting Rights)

    Messages:
    6,650
    Location:
    Norway
    https://twitter.com/user/status/1041603375773638656
     
    inox, andypants, MeSci and 11 others like this.
  13. Inara

    Inara Senior Member (Voting Rights)

    Messages:
    2,734
    That's foul (and cheap; at times it's even a bit dumb): "If people had done it exactly like I said, nothing would have gone wrong. It's not my fault if others apply my idea incorrectly and, by doing so, harm people."

    Maybe individuals cannot be blamed for their ideas - maybe you cannot blame Alfred Nobel for inventing dynamite (although it is said he had good intentions) - but dynamite can still cause a lot of harm. I would expect an individual who claims to have good intentions to speak up when his idea is misused in a way that causes harm.

    Also, an invention can surprisingly turn out to be harmful in itself after it has been implemented, even if the idea was supposed to be "helpful and good". If good intentions were involved, the inventor will try to withhold, correct, or destroy his idea/invention in order not to cause harm.
     
    Barry, MEMarge, Lisa108 and 2 others like this.
  14. Woolie

    Woolie Senior Member

    Messages:
    2,931
    I think that's a little unfair. That's not at all what I said. I've seen many times on the forum people placing unreasonable expectations on statisticians - saying that because the PACE trial had statisticians, those people can't have done their job well and must have been really shit. I'm not a statistician, but from where I sit, the PACE statisticians looked pretty competent to me (okay, maybe you're an amazing genius, and know all the ins and outs of everybody's research area, but then I wasn't talking about you specifically). Those statisticians obviously took the basic design and outcomes from the researchers and worked on those. The problems with PACE were with the primary researchers, not the statisticians.

    And yes, it looks easy to see the flaws of PACE in retrospect, but I don't think it's reasonable to expect those statisticians would have been able to advise appropriately at the time, even if they had been given the opportunity. That's what the primary researchers are supposed to do, ffs!
     
    Last edited: Sep 17, 2018
    JohnTheJack, Barry and Lisa108 like this.
  15. Lucibee

    Lucibee Senior Member (Voting Rights)

    Messages:
    1,498
    Location:
    Mid-Wales
    The problem with this type of research is that those doing it are under the mistaken belief that because they are dealing with a "subjective" condition, the only way to analyse it is with subjective outcome measures. (And normal rules regarding blinding etc. need not apply.) I expect the statisticians have been told to defer to this greater knowledge and that they can't be expected to understand. But as I know only too well, questioning this "greater knowledge" will just end up with you being sacked or having to leave.

    Yes, it's good to have the subject expert on hand, but chances are, they don't really understand the measures they are using either. No-one could ever question the CFQ because Chalder was an author, so she must understand how it works, right?

    NEVER ASSUME ANYTHING!
     
    JohnTheJack, Barry, MEMarge and 3 others like this.
  16. Keela Too

    Keela Too Senior Member (Voting Rights)

    I don’t actually think statisticians are generally to blame for this. Although it would be good if they all insisted on caution being applied to outcomes derived from subjective scales.

    Assigning scale numbers to subjective responses is the real problem, as you rightly point out.

    Doing this gives the impression that those numbers represent an objective measurement. The numbers however are not objective.

    I’m not sure these scales are even useful comparators. Perceptions are not consistent, either over time or between individuals.

    E.g. measuring the length of a line with a ruler gives an easily replicable value, so long as the rulers are accurate and the ends of the line are clear. So we can be confident of the measured length, no matter when, where, or by whom the measure was taken.

    Measuring slightly less fixed things, e.g. the height of a horse to the withers, can be somewhat more tricky, so measurers are trained in how best to do this accurately. And in disputed situations the measure is taken more than once, and to avoid bias the repeat measures will be undertaken by different measurers who have no vested interest in the height of the horse in question. (Of course there are occasionally some shenanigans that go on to try to bias the outcome - I’ll not recount the stories I’ve heard here.)

    So there may still be some subjective room for error, but the process aims to remove as much of that as possible. And replication is important. Barring outright fraud, the system works well enough for competition fairness.

    However, imagine the chaos if we asked people simply to answer questions about each horse and then, based on their answers to those subjective questions, assigned a height number to each horse.

    How reliable might that be?

    Imagine then that the questioners, and horse assessors were also different for each horse measured.

    Add a confounder: perhaps many of the assessors were primed about the horse they were about to assess - perhaps they were told the horse was a beautiful creature of significant stature with amazing presence, etc. Or maybe they were told it was a pony of little consequence (but don’t worry, we’ll let you see the real horses in six months’ time: just be patient!).

    How might all these derived numbers translate? If we numerically lined all our assessed horses up, do you think there would be any real gradation of size?

    Do you think it might be fair to apply statistical techniques to sub groupings measured this way?

    Would a good statistician not be concerned?

    Would an unscrupulous investigator perhaps just keep going until they found someone happy enough (or paid well enough) to crunch the numbers anyway?


    *Note: No horses were harmed in the above imaginary situation.
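
     As a rough illustration of the thought-experiment above (all numbers invented), a toy simulation can show how priming and assessor-to-assessor noise corrupt "measurements" of perfectly real heights, and how priming alone can manufacture a group difference where none exists.

    ```python
    # Toy simulation of subjectively "assessed" horse heights (invented numbers).
    import numpy as np

    rng = np.random.default_rng(2)
    n_horses = 100

    true_height = rng.normal(loc=150, scale=10, size=n_horses)   # cm, ground truth
    priming = rng.choice([-8.0, 8.0], size=n_horses)              # told "pony" vs "magnificent creature"
    assessor_noise = rng.normal(loc=0, scale=6, size=n_horses)    # a different assessor each time

    assessed_height = true_height + priming + assessor_noise

    # How well do the assessed numbers track reality?
    r = np.corrcoef(true_height, assessed_height)[0, 1]
    print(f"Correlation between true and assessed height: {r:.2f}")

    # Two 'arms' of horses with identical true heights, differing only in priming.
    primed_up = true_height + 8.0 + rng.normal(0, 6, n_horses)
    primed_down = true_height - 8.0 + rng.normal(0, 6, n_horses)
    print(f"Spurious group difference from priming alone: "
          f"{primed_up.mean() - primed_down.mean():.1f} cm")
    ```

     Any statistical comparison run on the "assessed" numbers will dutifully report that difference, even though the horses themselves are identical - which is the worry with unblinded subjective outcomes.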

    ——————-

    Okay that’s my little bit of free wheeling thought over for this morning. I’d better get more coffee! Lol

    Some typos fixed later ;)
     
    Last edited: Sep 17, 2018
    andypants, MEMarge, Amw66 and 2 others like this.
  17. Woolie

    Woolie Senior Member

    Messages:
    2,931
    That's true. There's no guarantee that the primary researchers are any good either. I don't think the PACE psychologists and psychiatrists received a very strong training in psychological methodology (the psychiatrists would have received none, and the psychologist - Trudie - trained under one of those untrained dudes, so, by extension, probably didn't get much methodological training either). Most of my training was in that; it's really what I do, because psychology has little "content" and is all about methods and approaches and their strengths and weaknesses.
     
    JohnTheJack, EzzieD, Trish and 2 others like this.
  18. Lucibee

    Lucibee Senior Member (Voting Rights)

    Messages:
    1,498
    Location:
    Mid-Wales
    No. It's what others were saying:

    I'm definitely NOT an 'amazing genius' - I just refuse to accept that any area is closed off to me just because I didn't study it. Because that's elitism.

    I'm not slamming the statisticians in PACE. We don't know that they didn't raise concerns and were told to shut up and get on with it.
     
  19. Woolie

    Woolie Senior Member

    Messages:
    2,931
    Oh, okay. Point taken.
     
    MEMarge and Trish like this.
  20. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,416
    Location:
    London, UK
    I actually think that statisticians should be expected to know, and in that regard I agree with @Lucibee. I should be expected to know enough about stats to judge whether a statistician is giving properly informed advice. I have to know enough about the other person's subject to make sure things do not fall through the gaps. Same for them.

    But that is different from saying that assessing problems with bias in a particular context is the statistician's job. Or that it would be appropriate for a statistician to pass a trial whose major faults are in other areas - said statistician may not be in a position to probe the clinicians in the way that the one on the trial committee should have done.
     
