Cognitive behavioural therapy for adults with dissociative seizures (CODES): a pragmatic, multicentre, randomised controlled trial (2020) Goldstein, Chalder

Discussion in 'Other psychosomatic news and research' started by Sly Saint, May 22, 2020.

  1. Hoopoe

    Hoopoe Senior Member (Voting Rights)

    Messages:
    5,424
    The Wikipedia page has a list of possible indicators of pseudoscience. It is astonishing how well it describes the characteristics of the BPS research we discuss on this forum. I have highlighted some that I think are particularly relevant.

    A topic, practice, or body of knowledge might reasonably be termed pseudoscientific when it is presented as consistent with the norms of scientific research, but it demonstrably fails to meet these norms.[1]

    Use of vague, exaggerated or untestable claims
    • Assertion of scientific claims that are vague rather than precise, and that lack specific measurements.[37]
    • Assertion of a claim with little or no explanatory power.[38]
    • Failure to make use of operational definitions (i.e., publicly accessible definitions of the variables, terms, or objects of interest so that persons other than the definer can measure or test them independently)[Note 4] (See also: Reproducibility).
    • Failure to make reasonable use of the principle of parsimony, i.e., failing to seek an explanation that requires the fewest possible additional assumptions when multiple viable explanations are possible (see: Occam's razor).[40]
    • Use of obscurantist language, and use of apparently technical jargon in an effort to give claims the superficial trappings of science.
    • Lack of boundary conditions: Most well-supported scientific theories possess well-articulated limitations under which the predicted phenomena do and do not apply.[41]
    • Lack of effective controls, such as placebo and double-blind, in experimental design.
    • Lack of understanding of basic and established principles of physics and engineering.[42]
    Over-reliance on confirmation rather than refutation
    • Assertions that do not allow the logical possibility that they can be shown to be false by observation or physical experiment (see also: Falsifiability).[19][43]
    • Assertion of claims that a theory predicts something that it has not been shown to predict.[44] Scientific claims that do not confer any predictive power are considered at best "conjectures", or at worst "pseudoscience" (e.g., ignoratio elenchi).[45]
    • Assertion that claims which have not been proven false must therefore be true, and vice versa (see: Argument from ignorance).[46]
    • Over-reliance on testimonial, anecdotal evidence, or personal experience: This evidence may be useful for the context of discovery (i.e., hypothesis generation), but should not be used in the context of justification (e.g., statistical hypothesis testing).[47]
    • Presentation of data that seems to support claims while suppressing or refusing to consider data that conflict with those claims.[28] This is an example of selection bias, a distortion of evidence or data that arises from the way that the data are collected. It is sometimes referred to as the selection effect.
    • Promulgating to the status of facts excessive or untested claims that have been previously published elsewhere; an accumulation of such uncritical secondary reports, which do not otherwise contribute their own empirical investigation, is called the Woozle effect.[48]
    • Reversed burden of proof: science places the burden of proof on those making a claim, not on the critic. "Pseudoscientific" arguments may neglect this principle and demand that skeptics demonstrate beyond a reasonable doubt that a claim (e.g., an assertion regarding the efficacy of a novel therapeutic technique) is false. It is essentially impossible to prove a universal negative, so this tactic incorrectly places the burden of proof on the skeptic rather than on the claimant.[49]
    • Appeals to holism as opposed to reductionism: proponents of pseudoscientific claims, especially in organic medicine, alternative medicine, naturopathy and mental health, often resort to the "mantra of holism" to dismiss negative findings.[50]
    Lack of openness to testing by other experts
    • Evasion of peer review before publicizing results (termed "science by press conference"):[49][51][Note 5] Some proponents of ideas that contradict accepted scientific theories avoid subjecting their ideas to peer review, sometimes on the grounds that peer review is biased towards established paradigms, and sometimes on the grounds that assertions cannot be evaluated adequately using standard scientific methods. By remaining insulated from the peer review process, these proponents forgo the opportunity of corrective feedback from informed colleagues.[50]
    • Some agencies, institutions, and publications that fund scientific research require authors to share data so others can evaluate a paper independently. Failure to provide adequate information for other researchers to reproduce the claims contributes to a lack of openness.[52]
    • Appealing to the need for secrecy or proprietary knowledge when an independent review of data or methodology is requested.[52]
    • Substantive debate on the evidence by knowledgeable proponents of all viewpoints is not encouraged.[53]
    Absence of progress
    • Failure to progress towards additional evidence of its claims.[43][Note 3] Terence Hines has identified astrology as a subject that has changed very little in the past two millennia.[41][23] (see also: Scientific progress)
    • Lack of self-correction: scientific research programmes make mistakes, but they tend to reduce these errors over time.[54] By contrast, ideas may be regarded as pseudoscientific because they have remained unaltered despite contradictory evidence. The work Scientists Confront Velikovsky (1976, Cornell University) also delves into these features in some detail, as does the work of Thomas Kuhn, e.g., The Structure of Scientific Revolutions (1962), which also discusses some of the items on the list of characteristics of pseudoscience.
    • Statistical significance of supporting experimental results does not improve over time and is usually close to the cutoff for statistical significance. Normally, experimental techniques improve or the experiments are repeated, and this gives ever stronger evidence. If statistical significance does not improve, this typically shows the experiments have just been repeated until a success occurs due to chance variations.
    Personalization of issues
    • Assertion of claims of a conspiracy on the part of the scientific community to suppress the results.
    Use of misleading language
    • Creating scientific-sounding terms to persuade nonexperts to believe statements that may be false or meaningless: For example, a long-standing hoax refers to water by the rarely used formal name "dihydrogen monoxide" and describes it as the main constituent in most poisonous solutions to show how easily the general public can be misled.
    • Using established terms in idiosyncratic ways, thereby demonstrating unfamiliarity with mainstream work in the discipline.
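    As a rough illustration of the point above about significance hovering near the cutoff, here is a toy simulation (not anything from the Wikipedia article or from any real trial): if a null effect is tested over and over and only the "successful" run is reported, a nominally significant result will eventually appear by chance alone, and it will typically sit just under the 0.05 threshold rather than strengthening as evidence accumulates.

```python
# Toy simulation only: two groups drawn from the same distribution, tested
# repeatedly until a nominally "significant" result appears by chance.
import random
import statistics

def null_experiment(n=30):
    """Two groups from the same distribution, so any 'effect' is pure chance."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    return abs(statistics.mean(a) - statistics.mean(b)) / se  # crude z-like statistic

random.seed(42)
attempts = 0
while True:
    attempts += 1
    if null_experiment() > 1.96:  # roughly p < 0.05, two-sided
        print(f"Chance 'significant' result found after {attempts} repeated experiments")
        break
```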

     
    Last edited: May 29, 2020
    Sid, Michelle, MEMarge and 14 others like this.
  2. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,659
    Location:
    Canada
    Literally a check on every single feature other than assertions of claims of a conspiracy in the scientific community, but only because the pseudoscience has effectively succeeded at regulatory capture. Which is massively worse and represents complete system failure.

    Meanwhile "professional skeptics" love this even though it checks every single mark of pseudoscience. That seems somewhat problematic.
     
  3. Arnie Pye

    Arnie Pye Senior Member (Voting Rights)

    Messages:
    6,416
    Location:
    UK
    Your diagram shows that 5 of the subjective outcome results weren't even statistically significant. Non-significant results are shown where the red horizontal lines cross the dashed vertical line.
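    For anyone less used to reading forest plots, here is a minimal sketch of that rule using made-up confidence intervals (not the values from the CODES figure): an outcome is non-significant at the plotted confidence level whenever its interval crosses the null line (0 for a difference, 1 for a ratio).

```python
# Minimal sketch of the rule described above, using MADE-UP confidence
# intervals rather than values from the CODES figure.
def crosses_null(low: float, high: float, null: float = 0.0) -> bool:
    """True if the confidence interval contains the null value
    (0 for a mean difference, 1 for a ratio such as an odds ratio)."""
    return low <= null <= high

# Hypothetical mean-difference intervals, for illustration only.
hypothetical_outcomes = [
    ("outcome A", -0.4, 1.2),   # crosses 0 -> not significant
    ("outcome B",  0.3, 1.8),   # excludes 0 -> significant
]
for name, low, high in hypothetical_outcomes:
    verdict = "not significant" if crosses_null(low, high) else "significant"
    print(f"{name}: 95% CI [{low}, {high}] -> {verdict}")
```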
     
    Joan Crawford and ukxmrv like this.
  4. Snowdrop

    Snowdrop Senior Member (Voting Rights)

    Messages:
    2,134
    Location:
    Canada
    Interestingly, I think a case could be made for including a reference to this wiki page on the psychosomatic medicine wiki page or perhaps the psychological research wiki page.
     
  5. Joan Crawford

    Joan Crawford Senior Member (Voting Rights)

    Messages:
    701
    Location:
    Warton, Carnforth, Lancs, UK
    I submitted the following letter to Lancet Psychiatry, regarding Perez's commentary about the CODES trial, on Saturday:

    "It is encouraging to see Perez highlight the need for subgroups in future research with patients experiencing dissociative seizures.1 There are a range of presentations and degrees of distress which need further detailed examination. However, his review of future outcome measures falls short.

    The methodology used in CBT intervention trials is by its very nature unblinded. Therefore, the combination of subjective outcome measures and a lack of blinding will always introduce systematic bias. This can only be reduced by using objectively measured primary outcomes. Perez’s proposal of using subjective, or a combination of subjective and objective, primary outcome measures in future research needs challenging, as this would lead to misleading conclusions.

    Bias is evident in Perez’s commentary. For example, “In my opinion, CBT remains an effective treatment for dissociative seizures.”1 The CODES trial does not support his claim.2 He wishes this to be the case; but it is not so. Humans are largely loss averse. It can be hard to step back from a strongly invested belief even in the presence of contradictory evidence.3 His desire to continue to believe in the trial’s success is a powerful demonstration of how bias works.

    Perez cites evidence from his clinical practice. I do not doubt his sincere belief in robust improvement via CBT. However, we are unable to evaluate its objectivity. Whether he is confusing ‘feeling better’ (adapting to living well despite ongoing symptoms) with ‘being better’ (cured/recovered) is impossible to ascertain. Patients heartily desire the latter. Honesty about the limitations of current practice is needed.

    1 Perez DL. The CODES trial for dissociative seizures: a landmark study and inflection point. Lancet Psychiatry 2020; 7: 464-65.

    2 Goldstein LH, Robinson EJ, Mellers JDC, et al. Cognitive behavioural therapy for adults with dissociative seizures (CODES): a pragmatic, multicentre, randomised controlled trial. Lancet Psychiatry 2020; 7: 491–505.

    3 Kahneman D. Thinking, Fast and Slow. Farrar, Straus and Giroux, New York, 2011."

    ==

    I received the following reply today:

    Manuscript reference number: thelancetpsych-D-20-01336
    Title: Letter in correspondence to Comment on The CODES trial for dissociative seizures: a landmark study and inflection point. David L Perez Lancet Psychiatry 2020; 7: 464-65

    Dear Ms. Crawford,

    Thank you for your recent submission to The Lancet Psychiatry. We have discussed your letter. Dr Perez was invited to write an opinion piece, which he has done. Normally, we would welcome a debate spurred by a Comment but I am afraid that COVID-19 is dominating our Correspondence sections across all the Lancet journals and we have been asked to cut everything else back. So on this occasion, we have decided not to publish your letter.

    Although this decision has not been a positive one, I thank you for your interest in the journal and hope it does not deter you from considering us again in the future.

    Regards,
    Joan

    Joan Marsh MA PhD
    Deputy Editor

    ==

    I'm far from impressed. While we are in extraordinary times, the continued publication by The Lancet and associated publications of poor-quality trials and commentary, which has been highlighted on many an occasion, makes it hard to make any headway in changing attitudes. Depressing and sad. Peer review should be all over this - and it is not.

    Any ideas about how I should respond, if at all, to the editor (politely :) )? I'm thinking about suggesting the opportunity for debate/discussion in a few months' time. I want to keep channels open.

    Many thanks,
    Joan Crawford
    Counselling Psychologist



     
    Sid, Michelle, 2kidswithME and 15 others like this.
  6. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,659
    Location:
    Canada
    This is a terrible response from The Lancet. The op-ed is in June's edition; they were not overburdened until then, and I see no reason why a psychiatry journal should be so overwhelmed by an infectious disease pandemic. It's insulting in its weakness. This is not how scientific debate works. Stifling issues does not make them disappear, especially using blatantly fictitious excuses.

    Instead, this is what silencing actually looks like. It's not screaming unfiltered personal grievances from the privileged position of an internationally published special report. It's having a valid response denied publication altogether.

    However since most of the "professional skeptics" are on the wrong side it's hard to bring attention to it. I don't know if The Lancet is just under temporary terrible management or if its reputation was entirely unearned but this is not how a "top journal" operates at all.

    It's definitely worth publishing, it raises all the right points, and it's very telling that yet more critical points are being met with a resounding "meh" of indifference by the gatekeepers of science.

    BOOO Lancet, you are terrible at whatever it is you are doing.
     
    JemPD, EzzieD, ukxmrv and 4 others like this.
  7. Sean

    Sean Moderator Staff Member

    Messages:
    8,064
    Location:
    Australia
    Using COVID19 as cover for shitty quality work is pathetic. But not surprising.
     
    MEMarge, 2kidswithME, JemPD and 9 others like this.
  8. dave30th

    dave30th Senior Member (Voting Rights)

    Messages:
    2,447
    It's absurd to publish a major study that merited a separate comment and then say you have no room for correspondence. Then don't publish the fucking paper and commentary.
     
  9. Joan Crawford

    Joan Crawford Senior Member (Voting Rights)

    Messages:
    701
    Location:
    Warton, Carnforth, Lancs, UK
    My thoughts entirely :)
     
    MEMarge, 2kidswithME, JemPD and 7 others like this.
  10. Joan Crawford

    Joan Crawford Senior Member (Voting Rights)

    Messages:
    701
    Location:
    Warton, Carnforth, Lancs, UK
    It is poor isn't it? :)
     
    2kidswithME, ukxmrv, Simbindi and 2 others like this.
  11. Joan Crawford

    Joan Crawford Senior Member (Voting Rights)

    Messages:
    701
    Location:
    Warton, Carnforth, Lancs, UK
    Many thanks for your input and support, rvallee. It helps to have feedback that mirrors my own thoughts. I felt like it was a silencing job. I'm definitely going to respond assertively and positively to their rejection. I think it is so easy to dismiss or minimise or miss the point that patients are being subjected to unhelpful and possibly harmful psychological / psychosocial interventions with no or poor evidence base. That's poor ethics and not up to snuff in 2020. It never has been, but that hasn't stopped the nonsense from proliferating.

    I also might submit a second letter re the actual CODES paper and the CBT model - it was neither CBT for panic nor CBT for trauma. It was CBT to help patients be more avoidant - which seems so bizarre it makes my head hurt. So no surprise that there was a duff result - it needs more focused work, with decent underlying theory, good methodology and high face validity, before this train should be allowed to keep trundling on... Some reflection is needed on the part of the researchers, including the possibility of other ways of understanding DS given the uncertainty. It's ironic that in therapy, helping people to increase their tolerance of unpleasant feelings and of uncertainty can often play a big role - that those who have taken on the task of 'helping' and 'understanding' have done the complete opposite, and show an obvious lack of psychological resilience, is rather weird.
     
    Michelle, 2kidswithME, EzzieD and 8 others like this.
  12. Joan Crawford

    Joan Crawford Senior Member (Voting Rights)

    Messages:
    701
    Location:
    Warton, Carnforth, Lancs, UK
    I've replied to Dr Marsh, Deputy Editor, at The Lancet Psychiatry setting out the reasons for my displeasure at her response.

    I suspect it'll go nowhere but ya never know. Gotta stand up for ourselves when we think we're being silenced :)

    Bw
    Joan
     
  13. Sly Saint

    Sly Saint Senior Member (Voting Rights)

    Messages:
    9,922
    Location:
    UK
    KCL blog
    3 June 2020

    Cognitive behavioural therapy reduces the impact of dissociative seizures
    https://www.kcl.ac.uk/news/cognitive-behavioural-therapy-reduces-the-impact-of-dissociative-seizures

    eta:
    see also
    https://www.s4me.info/threads/curre...-disorders-espay-et-al-2018.4446/#post-193896
     
    Last edited: Jun 5, 2020
    ukxmrv and Simbindi like this.
  14. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,659
    Location:
    Canada
    This is basically false advertising. There is no credible basis for those claims. Making outlandish claims is a clear marker of fraudulent research.

    This is part of why people distrust experts. That and reactionaries who point at stuff like that and yell "See? All they do is lie about their own research". This is how you destroy the very notion of expertise, when authoritative sources engage in disinformation selling junk pseudoscience.

    Medical research needs serious systemic reform, this is completely unacceptable.
     
    MEMarge, Mithriel, EzzieD and 7 others like this.
  15. Sean

    Sean Moderator Staff Member

    Messages:
    8,064
    Location:
    Australia
    Which is why the rest of medicine and science need to pay very close attention to what the likes of Chalder are getting away with, and stop it.
     
    MEMarge, 2kidswithME, EzzieD and 5 others like this.
  16. NelliePledge

    NelliePledge Moderator Staff Member

    Messages:
    14,837
    Location:
    UK West Midlands
  17. Arnie Pye

    Arnie Pye Senior Member (Voting Rights)

    Messages:
    6,416
    Location:
    UK
  18. dave30th

    dave30th Senior Member (Voting Rights)

    Messages:
    2,447
    Buried in the CODES paper is this:

    "The p values in table 3 have not been adjusted for multiple testing.26 Considering conservative adjustment (eg, post-hoc Bonferroni correction of 0·05/17=0·003), five secondary outcomes suggest that CBT plus standardised medical care provided significant benefit to participants compared with standardised medical care: longest period of seizure freedom, psychosocial functioning, self-rated and clinician-rated global change, and treatment satisfaction (table 3)."

    So that means the results as reported in the abstract and main table, where they claimed benefits for 9-10 secondary outcomes, are not corrected for multiple comparisons, even though they had like 17 total secondary outcomes. That seems kind of sneaky to me.

    In the statistical analysis plan, here's the operative paragraph:

    Method for handling multiple comparisons

    There is only a single primary outcome, and no formal adjustment of p values for multiple testing will be applied. However, care should be taken when interpreting the numerous secondary outcomes.

    Would a statistical analysis plan normally elaborate beyond "care should be taken" when discussing more than a dozen secondary outcomes? Would the main findings normally be those before this correction or afterwards?

    Added: Actually, the methods section in the abstract has this sentence:

    "p values and statistical significance for outcomes were reported without correction for multiple comparisons as per our protocol."

    But the protocol also says to interpret secondary outcomes with care. Nine out of 16 secondary outcomes had statistically significant results. After correcting for multiple comparisons, only five were statistically significant. But that's only mentioned in passing.
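    To make the arithmetic concrete, here's a minimal sketch of that post-hoc Bonferroni adjustment. The only figures taken from the paper are the 0.05 alpha level and the 17 secondary outcomes; the p-values below are placeholders, not the trial's results.

```python
# Minimal sketch of the post-hoc Bonferroni adjustment quoted from the paper:
# alpha is split across the 17 secondary outcomes, giving 0.05 / 17 ≈ 0.003.
# The p-values below are PLACEHOLDERS for illustration, not the trial's results.
alpha = 0.05
n_secondary_outcomes = 17
bonferroni_threshold = alpha / n_secondary_outcomes  # ≈ 0.0029

placeholder_p_values = {
    "secondary outcome 1": 0.001,   # survives correction
    "secondary outcome 2": 0.004,   # nominally significant, fails correction
    "secondary outcome 3": 0.030,   # nominally significant, fails correction
    "secondary outcome 4": 0.200,   # not significant either way
}

for name, p in placeholder_p_values.items():
    nominal = p < alpha
    corrected = p < bonferroni_threshold
    print(f"{name}: p = {p:.3f}  uncorrected significant = {nominal}  "
          f"Bonferroni significant = {corrected}")
```

    On that threshold, anything with 0.003 < p < 0.05 flips from "significant" to "not significant" once the correction is applied, which is how nine nominal findings can shrink to five.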
     
    Last edited: Jun 8, 2020
  19. Joan Crawford

    Joan Crawford Senior Member (Voting Rights)

    Messages:
    701
    Location:
    Warton, Carnforth, Lancs, UK
    Good spot. It is usual to correct for multiple comparisons - that's standard practice at undergrad level... It is rather poor that a journal like Lancet Psychiatry has allowed this through peer review. Disheartening. I doubt it would have got through if it had been an RCT of a drug intervention... Mind you, I doubt that there would have been multiple secondary outcomes in a drug trial. CODES was a 'spread it wide' strategy - in the hope that something will turn up positive in the results... a fishing trip. Hardly speaks of confidence in their work/effectiveness and the underlying model.
     
    Michelle, Amw66, MEMarge and 7 others like this.
  20. Sean

    Sean Moderator Staff Member

    Messages:
    8,064
    Location:
    Australia
    Nor their integrity. :grumpy:
     
