A general thread on the PACE trial!

Discussion in 'Psychosomatic research - ME/CFS and Long Covid' started by Esther12, Nov 7, 2017.

  1. Sean

    Sean Moderator Staff Member

    Messages:
    8,011
    Location:
    Australia
    They just can't stand the thought that patients may actually be more knowledgeable and competent than the 'expert pros'.

    They are just arbitrarily inserting themselves, for no good purpose. It is just empire building, income generation, and egos.
     
  2. Inara

    Inara Senior Member (Voting Rights)

    Messages:
    2,734
    If I read that chapter about Pacing as a newbie, I wouldn't know what pacing is. Well done.

    Also, if I read "professionals favor this, patients that", I would become keen-eared.
     
    rvallee, Woolie, MEMarge and 4 others like this.
  3. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,412
    Agreed. Real pacing is a very subtle balancing act between energy conservation versus expenditure, with look-ahead to account for delays between cause and effect.

    The previous definition is very good to my mind ...
    ... and fits extremely well with how I observe my wife's approach to pacing. The APT advocates have no clue (nor humility) how their "adaptive" modifier completely changes the strategy such that it is no longer pacing in any recognisable sense.
     
    Woolie, Sean, MEMarge and 2 others like this.
  4. Kalliope

    Kalliope Senior Member (Voting Rights)

    Messages:
    6,556
    Location:
    Norway
    Henrik Vogt is tweeting today

    https://twitter.com/user/status/1033301324207665152


    It is tempting to just laugh it off, but he is an MD and has founded a "patient organisation" for people who have recovered from conditions by their own means or by undocumented treatments. In practice that means ME patients who recovered via the Lightning Process, but it isn't said out loud, because in Norway you are not allowed to advertise alternative treatments, including by sharing success stories.

    People listen to him, and he continues to smear ME patients who are critical of the Lightning Process and the BPS approach to ME.

    Edit to add: The tweet might be a reaction to a recent and critical article in the Journal of the Norwegian Medical Association about the PACE trial.
    There is a post about that article in the thread Rethinking the treatment of chronic fatigue syndrome - A reanalysis and evaluation of findings from a recent major trial of graded exercise and CBT, with google-translation.
     
    Last edited: Aug 25, 2018
    inox, rvallee, Woolie and 11 others like this.
  5. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,100
    Location:
    London, UK
    Dr Vogt is right, with a slight twist:

    #mecfs: Those who think that current criticisms of the PACE trial is about a fair scientific discourse, read @TheLancet editorial from 2011. It is part of an aggressive campaign to discredit anything that smells of scientific argument.

    There is nothing but surmise in that editorial. And the editor needed to save face.
    Vogt is interesting because he is so unable to understand his own psychology. A perfect match for Sharpe.
     
    inox, rvallee, Woolie and 14 others like this.
  6. Cheshire

    Cheshire Moderator Staff Member

    Messages:
    4,675
    Action for ME:

    The PACE trial and behavioural treatments for M.E.

    https://www.actionforme.org.uk/news/pace-trial-and-behavioural-treatments-for-me/

    https://twitter.com/user/status/1034736087430823936


    Edit: thread here
     
    Last edited: Aug 29, 2018
    inox, Woolie, NelliePledge and 9 others like this.
  7. Dolphin

    Dolphin Senior Member (Voting Rights)

    Messages:
    5,742
  8. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,412
  9. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,412
    Bear with me, as this is - eventually - about PACE. More as an interesting exercise rather than any in-depth investigation.

    I'm getting into @Brian Hughes' book Psychology in Crisis, and have been grappling with the idea of the null hypothesis. I need to re-read and further inwardly digest it before I fully understand something that seems simple in principle, but that apparently many psychology researchers completely misunderstand. One of the points Brian makes is that although a clear understanding of a study's null hypothesis is crucial, many psychology researchers tend to short-circuit that bit and jump straight to, and only consider, their main hypothesis.

    As I understand it for PACE, the main (i.e. alternative?) hypothesis is along the lines that patients are locked into a vicious circle of deconditioning, reinforced and perpetuated by activity avoidance, and that GET reverses that vicious circle and thereby reconditions patients.

    So what would the null hypothesis have been? Was it ever reported? What would the p-value have been for data pertaining to that null hypothesis? Was it reported for PACE?

    Presumably the null hypothesis would be something along the lines that patients are not deconditioned, and there is no vicious circle to break out of. GET would therefore have no beneficial effect.

    I'm well aware that correctly stating the alternative and null hypotheses is crucial, and that I for sure will not have achieved that here. But I'm interested to see if anyone else finds this aspect of PACE interesting, and wants to add their two penn'oth.

    Given we are extremely confident this null hypothesis is in fact true, this would presumably have to mean that a soundly constructed and operated trial would produce data showing a very high p-value for that null hypothesis. What would the p-value be for PACE?

    I'm also (now) aware that p-values can be hugely distorted as a consequence of bad trial methodology etc.

    Am I on the right lines here?
     
  10. James Morris-Lent

    James Morris-Lent Senior Member (Voting Rights)

    Messages:
    903
    Location:
    United States
    @Barry

    This is the abstract paradigm/'theory' behind the study but you wouldn't put this language in a hypothesis for statistical testing; it's not quantifiable.

    Instead you would say something like:
    Null Hypothesis: "CFS patients treated with (CBT, GET, or APT) + SMC will not show any more improvement (i.e. equivalent improvement or less improvement) on (Chalder Fatigue Scale, SF-36 score, etc.) than patients treated with SMC alone."
    Alternative Hypothesis: "CFS patients treated with (CBT, GET, or APT) + SMC will show greater improvement on (Chalder Fatigue Scale, SF-36 score, etc.) than patients treated with SMC alone."

    Then you can run an experiment, and calculate a test statistic and a p-value from your observed results. A p-value is a measure of how unusual your observed result is, assuming the null hypothesis is true - i.e. is it reasonable to assume that the deviation of your observed results from the null hypothesis is due to random chance, or not? If p = .001, there is a 1/1000 chance that you would get a test statistic as extreme as, or more extreme than, the one you got, if the null hypothesis is true - so it's reasonable to assume in this case that the null is not true.
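    A minimal sketch of those mechanics in Python, using invented numbers (not PACE data): two groups are simulated, Welch's t statistic is computed by hand, and a two-sided p-value is approximated with the normal CDF, which is reasonable at this sample size. Group names and effect sizes here are illustrative assumptions only.

```python
import math
import random

random.seed(0)

# Invented questionnaire scores for two hypothetical trial arms - not PACE data.
control = [random.gauss(50, 10) for _ in range(100)]   # e.g. SMC alone
treated = [random.gauss(54, 10) for _ in range(100)]   # e.g. CBT + SMC

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    mean_a, mean_b = sum(a) / len(a), sum(b) / len(b)
    var_a = sum((x - mean_a) ** 2 for x in a) / (len(a) - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (len(b) - 1)
    return (mean_b - mean_a) / math.sqrt(var_a / len(a) + var_b / len(b))

t = welch_t(control, treated)

# Two-sided p-value via the normal approximation (fine for ~100 per arm).
p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

print(f"t = {t:.2f}, p = {p:.4f}")
```

    A small p here only says the difference in means is unlikely under the null; as discussed below, it says nothing about whether the measures themselves are valid or unbiased.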

    (PACE's p-values are in table 3 at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3065633/)

    PACE rejected the null hypothesis for CBT+SMC and GET+SMC. This simply means that it is very unlikely that the results (more improvement on Chalder scale/SF-36 in these groups vs SMC alone) could be explained by random chance. But does this support the abstract theory? Is the improvement on the questionnaires a reliable indicator of improvement in the patients' disease brought about by addressing unhelpful beliefs and reconditioning? No, because of all of the many problems that have been pointed out.

    Something to keep in mind is that, even without some of the weird issues specific to PACE, we should expect these results to be quite easily replicable because (a) the unblinded-treatment/subjective-outcome problem - CBT or GET will be tested as the intervention that's supposed to help, so patients receiving it will be biased to say they are better, (b) CBT and GET train patients to say they are better, so it's not an interesting result that patients say they are better after being treated. These results cannot be taken to support the 'deconditioning/unhelpful beliefs' hypothesis.

    I hope that's not too muddled. I'm not learned on all of the particulars of PACE methodology so I hope others who are will clean up any mess I made, but I think this will help to conceptualize the basics.

    Barry, a book I found helpful for learning the basic concepts of statistics is Statistics by Freedman, Pisani, and Purves. I think you would find it quite helpful in sorting out your questions about hypothesis testing, and it's quite fun to work through!
     
  11. Trish

    Trish Moderator Staff Member

    Messages:
    55,252
    Location:
    UK
    @James Morris-Lent has explained it very clearly.

    To put it even more bluntly (and to over-simplify), the statistician is not interested in the theories behind the study. Nor are they interested in what the treatment involves.

    The question their statistical test is addressing is, 'is there a statistically significant difference between the means of the two sets of data?'. It doesn't matter to the statistician whether the data represent the number of cups of tea the patients are drinking in a year, or the number of steps they can walk in a day - it's just numbers.

    The null hypothesis is 'there is no difference between the means', i.e. any difference is just a result of chance variation.

    You then work on the basis that this is true and calculate the probability of getting the observed difference by chance. If that probability is very small, you reject your null hypothesis and conclude that the treatment group has a 'statistically significant' difference in outcome from the untreated group.
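    A toy simulation of exactly that logic, as a permutation test with invented numbers: assume the null is true, so the group labels are arbitrary; shuffle the labels many times and count how often chance alone produces a difference in means at least as large as the observed one.

```python
import random

random.seed(1)

group_a = [3, 5, 4, 6, 5, 4, 7, 5]   # invented "untreated" scores
group_b = [6, 7, 5, 8, 7, 6, 9, 7]   # invented "treated" scores

observed = abs(sum(group_b) / len(group_b) - sum(group_a) / len(group_a))

pooled = group_a + group_b
n_a = len(group_a)
n_shuffles = 10_000
extreme = 0
for _ in range(n_shuffles):
    # Under the null the labels don't matter, so reassign them at random.
    random.shuffle(pooled)
    diff = abs(sum(pooled[n_a:]) / (len(pooled) - n_a)
               - sum(pooled[:n_a]) / n_a)
    if diff >= observed:
        extreme += 1

p = extreme / n_shuffles   # small p => the observed gap is rarely produced by chance
print(f"observed difference = {observed:.2f}, permutation p = {p:.4f}")
```

    As the post says, the numbers could be cups of tea or steps per day; the test neither knows nor cares what they measure.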

    It's then over to the researcher to decide whether the statistically significant difference in the number of steps or cups of tea is clinically significant and whether it supports or disproves their pet theory. They should also take into consideration confounding factors like whether the treatment involved persuading the patients to drink more tea or take more steps... But that's another story.
     
  12. Lucibee

    Lucibee Senior Member (Voting Rights)

    Messages:
    1,498
    Location:
    Mid-Wales
    This is why I asked Sharpe this question:

    https://twitter.com/user/status/1002911245664702464


    A trial can only tell you whether one treatment is better than another on the outcome measures you have chosen. It cannot test your underlying theory. That should have already been established in previous experiments, case studies etc. It also cannot tell you how much bias is involved in those measures. Again, that work should already have been done before it goes anywhere near a trial. If the measures you are using have not been properly validated for use in that circumstance, the stats are not going to tell you anything useful.
     
  13. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,412
    I guess statistical methods are simply a tool set, and it's the person using the tool that is responsible for its proper use. The same as a hammer cannot be blamed for smashing a priceless vase, when the user was trying to knock in a nail nearby.
     
  14. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,412
    Thank you @James Morris-Lent, @Trish and @Lucibee. I'll come back to this later when I have more time to think about it. I understand the mechanics of what you are saying, but need to ponder a bit more to get under the skin of it.
     
  15. MEMarge

    MEMarge Senior Member (Voting Rights)

    Messages:
    2,923
    Location:
    UK

    What was Sharpe's reply? Or did he block you?
     
    Lisa108 and Inara like this.
  16. Barry

    Barry Senior Member (Voting Rights)

    Messages:
    8,412
    So if the PACE hypotheses had been based purely on truly objective outcomes, rather than the usual subjective ones, then the p-value would presumably have come out very high, assuming all other things were done properly.

    I know it's been mentioned in the past that the statistical analyses done for PACE are OK, which may be true as far as it goes. From the comments above, the statistical analyses are essentially specialist number-crunching exercises; even if done with 100% competence, if the input data is rubbish then it's a case of garbage in, garbage out.

    So what is the scope of a trial statistician's remit? Is it purely the statistical calculations themselves? Or is the statistician supposed to understand, and advise on, what might constitute sane input data? Is the statistician supposed to understand trial methodology better than the trial authors themselves? And even if the statistical analyses themselves have been done correctly, is it still valid for a statistician to say that a trial's stats are all OK, even if the input data is clearly flawed? Where is the "trial stats" boundary? What is the accepted scope?
     
  17. James Morris-Lent

    James Morris-Lent Senior Member (Voting Rights)

    Messages:
    903
    Location:
    United States
    I second these questions. For me, the value of having people trained in statistics lies not so much in doing the sterile number crunching - for that you can just click a button and have a computer script calculate and output the results - but in critically examining the validity of the input for testing, and then, if statistical testing is warranted, making sure that the conclusion accurately reflects what the test is telling you.

    From PACE
    Should say, at best: "When added to SMC, CBT and GET had greater success in inducing participants to register improvements on questionnaires that we would like to think reflect fatigue and physical function."
     
  18. Inara

    Inara Senior Member (Voting Rights)

    Messages:
    2,734
    I would expect statisticians to have a mathematical training background, so I would expect a statistician to check whether the methods he applied can actually be applied, i.e. to check whether the theory holds. That's what numerical mathematicians always do (and we made "jokes" about situations where this hadn't been done, e.g. an oil platform that cracked on the open sea because the meshing hadn't been done correctly according to theory). I would expect the statistician to report to the researcher if he thinks the data/hypothesis don't fulfil the criteria - because then the theory no longer holds.

    E.g. computing the p-value is only possible if you assume the null hypothesis is true (this is where Type I and Type II errors come in).
     
    Barry and Trish like this.
  19. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    15,100
    Location:
    London, UK
    I agree these are key questions. The problem is that people trained in statistics have no particular reason to understand the more basic problems of trial design that arise from sources of bias - human nature. You cannot use numbers to assess the likelihood of bias. The jobs you want done are not jobs for statisticians. Which is why I was disappointed that the BBC said they were not going to do a Newsnight programme because they had asked a statistician who said that PACE was not too bad.

    I think it may have been a mistake to focus on statistical issues with PACE in the first place, but it got people interested and there is no doubt that there are major statistical problems as well.

    As with all these things it looks as if statisticians are increasingly oiling the wheels of garbage in garbage out. Statisticians tend to work on projects they are asked to look at by people who want positive results. They are asked 'how can we show this is significant'. Statisticians who reply 'I think you may be cherry picking' will not get asked again.
     
  20. Amw66

    Amw66 Senior Member (Voting Rights)

    Messages:
    6,757
