How to prove that your therapy is effective, even when it is not: a guideline, 2015, Cristea & Cuijpers

Discussion in 'Research methodology news and research' started by rvallee, Jan 3, 2025.

  1. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    14,090
    Location:
    Canada
    How to prove that your therapy is effective, even when it is not: a guideline
    https://pmc.ncbi.nlm.nih.gov/articles/PMC7137591/

    Aims.
    Suppose you are the developer of a new therapy for a mental health problem or you have several years of experience working with such a therapy, and you would like to prove that it is effective. Randomised trials have become the gold standard to prove that interventions are effective, and they are used by treatment guidelines and policy makers to decide whether or not to adopt, implement or fund a therapy.

    Methods.
    You would want to do such a randomised trial to get your therapy disseminated, but in reality your clinical experience already showed you that the therapy works. How could you do a trial in order to optimise the chance of finding a positive effect?

    Results.
    Methods that can help include a strong allegiance towards the therapy, anything that increases expectations and hope in participants, making use of the weak spots of randomised trials (risk of bias), small sample sizes and waiting list control groups (but not comparisons with existing interventions). And if all that fails one can always not publish the outcomes and wait for positive trials.

    Conclusions.
    Several methods are available to help you show that your therapy is effective, even when it is not.
     
  2. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    14,090
    Location:
    Canada
Found by @dave30th and referenced in his latest Trial By Error post. Tongue-in-cheek, but also entirely spot on in describing the morass of garbage-quality evidence that makes up all non-pharmaceutical evidence-based medicine.

    I mean, this is literally the psychosomatic playbook!

    1. Express in all communications about the intervention that you as developer or expert believe it to be best intervention ever (helps to increase expectations in participants).

    2. Do everything else that can increase expectations, such as writing books about the intervention, going to conferences to convince other professionals that this is the best intervention ever, giving interviews in the media showing your enthusiasm, preferably seasoned with some personal stories of participants who declare they have benefited very much from the intervention.

    3. Use the ‘weak spots’ of randomised trials: let the assignment to conditions be done by research staff involved in the trial or do it yourself (not by an independent person not involved in the trial).

4. Do not conceal from the assessors of outcome the conditions to which participants were assigned.

    5. Analyse only participants who completed the intervention and ignore those who dropped out from the intervention or the study (and do not examine all participants who were randomised).

    6. Use multiple outcome instruments and report only the ones resulting in significantly positive outcomes for the intervention.

    7. Use a small sample size in your trial (and just call it a ‘pilot randomised trial’).

    8. Use a waiting list control group.

9. Do not compare the intervention to already existing ones (but do tell your colleagues that, based on your clinical experience, you expect this intervention to be better than the existing ones (good for the expectations)).

10. If the results are not positive, consider not publishing them and wait until one of the clinicians you have persuaded of the benefits of this intervention conducts a trial that does find positive outcomes.
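An aside of mine, not from the paper: point 6 (measure many outcomes, report only the significant ones) is easy to quantify with a short simulation. A minimal Python sketch, with made-up parameters (30 participants per arm, 10 independent outcomes), of how often a therapy with zero real effect still hands you at least one 'significant' result to report:

```python
import math
import random

def one_null_trial(n_per_arm=30, n_outcomes=10, rng=random):
    """Simulate a trial of a therapy with NO real effect on any of
    n_outcomes independent outcomes; return True if at least one
    outcome comes out 'significant' at p < .05 (two-sided z-test)."""
    z_crit = 1.96  # two-sided 5% critical value
    for _ in range(n_outcomes):
        treat = [rng.gauss(0.0, 1.0) for _ in range(n_per_arm)]
        ctrl = [rng.gauss(0.0, 1.0) for _ in range(n_per_arm)]
        diff = sum(treat) / n_per_arm - sum(ctrl) / n_per_arm
        se = math.sqrt(2.0 / n_per_arm)  # sd known to be 1 in this toy setup
        if abs(diff / se) > z_crit:
            return True  # report only this outcome, ignore the rest
    return False

random.seed(1)
n_sims = 2000
positive = sum(one_null_trial() for _ in range(n_sims))
print(f"Null trials with something 'significant' to report: {positive / n_sims:.0%}")
# Analytically, 1 - 0.95**10 ≈ 0.40: roughly 40% of trials of a
# completely ineffective therapy yield a publishable 'positive' outcome.
```

The per-outcome false-positive rate stays at the nominal 5%; it is the pick-the-winner step across 10 outcomes that pushes the trial-level rate towards 40%.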
     
  3. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    16,005
    Location:
    London, UK
    I like the final conclusion:

    For those who think this is all somewhat exaggerated, all of the techniques described here are very common in research on the effects of many therapies for mental disorders.
     
  4. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    14,090
    Location:
    Canada
The paper discusses issues with randomization, but it glosses over the main method of bypassing it in EBM, which is to randomize a heavily pre-selected, non-random sample. That is, the people referred to be assessed and randomized have already been filtered, sometimes in multiple ways. This is a serious issue specifically with every LP study, since the LP basically requires participants to pledge allegiance to the king. This is even heavily discussed in the psychosomatic literature: they find that participants who believe in the treatment seem to do better at reporting that they have done better. Which, duh.

All of this is equivalent to randomizing people from an astrology conference and pretending it's a representative sample of the whole population in terms of how astrology affects people in their daily lives. Every single person involved in clinical trials would point out this flaw in that scenario, but they completely exempt themselves because, hey, that's just the privilege some people have.
     
  5. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    14,090
    Location:
    Canada
Not only is there nothing exaggerated here, it actually holds back, because the whole naked truth is just... not pretty to the eyes.
     
    Ash, tornandfrayed, EndME and 8 others like this.
  6. dave30th

    dave30th Senior Member (Voting Rights)

    Messages:
    2,568
I think the whole thing is a gem. Lots of words of wisdom there. Such as: believing in your own therapy is already part of the solution to proving it works! Which, as we have seen over and over, is a very operative notion.
     
    EzzieD, hibiscuswahine, Ash and 11 others like this.
  7. NelliePledge

    NelliePledge Moderator Staff Member

    Messages:
    15,461
    Location:
    UK West Midlands
As we are now 10 years on, it's the ideal time for a new paper looking at the current situation. Perhaps @dave30th is thinking of contacting the authors to see if they have plans, or could be persuaded.
     
    tornandfrayed, EndME, Hutan and 3 others like this.
  8. Yann04

    Yann04 Senior Member (Voting Rights)

    Messages:
    1,416
    Location:
    Romandie (Switzerland)
    Fantastic read. It really makes me wonder if the near entirety of psychological “treatment” is expectation bias pseudoscience.
     
    EzzieD, Ash, tornandfrayed and 7 others like this.
  9. Sean

    Sean Moderator Staff Member

    Messages:
    8,510
    Location:
    Australia
    I think the single most important fact about human psychology is that we see what we want/expect to see, and it takes considerable disruption and contradiction to get us to see otherwise.

    Which is why robust methodology is so important.
     
    EzzieD, alktipping, Ash and 4 others like this.
  10. EndME

    EndME Senior Member (Voting Rights)

    Messages:
    1,332
    Interesting to see that the first author is a clinical psychologist in Amsterdam who publishes a lot on CBT and who is part of a Research Program that also includes Hans Knoop. They ought to have crossed paths more than once.
     
    bobbler, alktipping, Hutan and 5 others like this.
  11. EndME

    EndME Senior Member (Voting Rights)

    Messages:
    1,332
  12. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    4,206
    Location:
    Belgium
There's also a commentary on this paper by Ioannidis, but it mostly shows that he doesn't get the basic things right:
    https://europepmc.org/backend/ptpmcrender.fcgi?accid=PMC7137590&blobtype=pdf

    He writes things like:
     
    bobbler, EzzieD, Hutan and 5 others like this.
  13. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    14,090
    Location:
    Canada
    It's truly peak irony. Psychology is the discipline that includes this stuff, about bias and expectations, and yet they are some of the worst at it, nearly their entire evidence base is built out of failing at this. And for some reason that I can't even begin to explain, the expertise to run clinical trials mainly rests within psychology. Which has the absolute worst methodologies, custom-built to produce fake results.

    It took a long series of mistakes for things to get this bad, but they've been this bad for so long that it's no longer possible to change things. Especially because they are so bad that almost no one even dares voice it out loud, aside from, maybe, 10 people. Meanwhile there are lots more who say the same things, but in fact want more of the garbage methodology precisely where high standards are needed.
     
    bobbler, Sean, alktipping and 2 others like this.
  14. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    14,090
    Location:
    Canada
    Truly amazing stuff. The ethical and intellectual bankruptcy of this discipline goes all the way to 11:
Zero difference. It's like they're celebrating cheating and corruption, and find nothing wrong with it.

    Of course it's easy to put your finger on the scale, you can call that efficient if you want. And cheap. And reproducible. This is exactly why there has to be zero tolerance. Instead we find unlimited tolerance. Polar opposites.

    This 'placebo' stuff is truly medicine's Philosopher's stone. They will keep chasing the dragon until this entire nonsense is made obsolete.
     
    bobbler, Sean, alktipping and 2 others like this.
  15. dave30th

    dave30th Senior Member (Voting Rights)

    Messages:
    2,568
    wow, I hadn't noticed that. thanks for pointing that out.
     
    bobbler, Sean, alktipping and 2 others like this.
  16. dave30th

    dave30th Senior Member (Voting Rights)

    Messages:
    2,568
    and I hadn't seen this either. thanks for highlighting it. Very interesting.
     
  17. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    4,269
I’m reading from it that they’ve lost the alternative offering, where they diagnose the actual problem people have and use the best-fit content to address that underlying cause with a matched treatment.


The whole shoddy sausage-machine courses that they got conned into creating after being told to prove it works, and the same trap as we get with a lot of the tests on us: ‘an arbitrary amount of the same treatment for all’, when these are humans and they won’t all have exactly the same thing.

Well, of course they’ve ended up relying on bias.

And then they’ve sold their existence (just as they are trying to with us, by blocking any other medical clinics that would monitor people, be able to do OT adjustment reports and find out more about the condition) based on telling GPs they’ve nowhere else to send these people, ‘and at least the alliance means they seem to feel less awful than they would if you left ’em waiting with nothing for a year’.

But they are forgetting that, really, they might as well deprofessionalise it entirely, down to any old person giving them a thirty-minute chat on anything, as long as someone with a badge told them they’ll feel better for it in six weeks.

I think there’s a lot of coercion going on, though, and those put through this get the same ‘do you want to get better’ con: to be free and keep your job you need to tell us you had it and have insight, then tell us you feel better. And on top of that, these courses teach them how to fake it (which isn’t good mental health) and what the right answers are. And everyone likes scoring well on a test, even if they got confused or misled and it’s not really their test. But it is, because as well as being used for whatever stats, it’s also used to confirm to their boss and GP or whatnot that ‘they’ve progressed and can now e.g. go back to work/keep their job’.

It’s not until two years in that you stop being scared of losing your job as a priority, and that’s because you realise you aren’t well enough to kid anyone and it’s gone, because the fixes don’t work. But in that window you think it’s a ‘yet’, because it’s not placebo, it’s lying, so people can’t plan, and they think that if they go back before they get dismissed it’s OK, because ‘any moment now it will start working’ and ‘it said I’d feel better once I was working, because working is good for mental health’.


All this lying to play people and force them into decisions not in their best interests, based on misinformation, needs to be banned, not encouraged.

And yes, if they were called the sales department in the business school and were using this same method for these same therapies, but couldn’t call themselves ‘health’, then people would (and should) see it as manipulation. But they get to hide behind the pretence, because their department is supposed to be helping, so it’s all for good intentions.
     
    Hutan likes this.
  18. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    4,269
Could you imagine running it as an A/B test of a TED-talk video, where it’s the exact same paper read out, except in one version it’s someone dressed up to show they cared and worked in mental health, and in the other an anonymous billionaire who is just selling ‘whatever’ (but neutral)?

I’d be intrigued to see laypersons in general, but also students and professionals from e.g. business, medicine, psychosomatics, clinical psychology, or any other scientific area (say cognitive),

and ask them whether the methodology is ethical.
     
