How to Spot Hype in the Field of Psychotherapy: A 19-Item Checklist, Meichenbaum & Lilienfeld, 2018

Woolie

Meichenbaum, D., & Lilienfeld, S. O. (2018). How to spot hype in the field of psychotherapy: A 19-item checklist. Professional Psychology: Research and Practice, 49(1), 22.

Abstract:
How can consumers of psychotherapies, including practitioners, students, and clients, best appraise the merits of therapies, especially those that are largely or entirely untested? We propose that clinicians, patients, and other consumers should be especially skeptical of interventions that have been substantially overhyped and overpromoted. To that end, we offer a provisional “Psychotherapy Hype Checklist,” which consists of 19 warning signs suggesting that an intervention’s efficacy and effectiveness have been substantially exaggerated. We hope that this checklist will foster a sense of healthy self-doubt in practitioners and assist them to become more discerning consumers of the bewildering psychotherapy marketplace. This checklist should also be useful in identifying the overhyping of well-established treatments.
Scott Lilienfeld has written some exceptional pieces on the limits of psychotherapy. He recently passed away, a great loss to the field.

Edited to add Link (thanks @JohnTheJack): https://psycnet.apa.org/record/2018-05600-002

and fulltext here (thanks @Ravn):
https://melissainstitute.org/wp-content/uploads/2018/02/Don.HOW-TO-SPOTacceptedversionnovember13.pdf
 
Here is the full 19 item list:
  1. Substantial exaggeration of claims of treatment effectiveness

  2. Conveying of powerful and unfounded expectancy effects

  3. Excessive appeal to authorities or “gurus”

  4. Heavy reliance on endorsements from presumed experts

  5. Use of a slick sales pitch and the use of extensive promotional efforts, including sale of paraphernalia

  6. Establishment of accreditation and credentialing procedures

  7. Tendency of treatment followers to insulate themselves from criticism

  8. Extensive use of “psychobabble”

  9. Extensive use of “neurobabble”

  10. Tendency of advocates to be defensive and dismissive of critics; selective reporting of contradictory findings, such as the results of dismantling studies

  11. Extensive reliance on anecdotal evidence

  12. Claims that treatment “fits all”

  13. Claims that treatment is “evidence-based” on the basis of informal clinical observations

  14. Inadequate empirical support: Limited reports or omission of treatment outcome information, such as patient selection criteria, drop-out rates, and follow-up data

  15. No proposed scientific basis for change mechanisms; proposed theoretical treatment mechanism lacks “connectivity” with extant science

  16. Repeated use of implausible ad hoc maneuvers to explain away negative findings

  17. Comparison of treatment with weak and “intent to fail” treatment groups, or with only partial (incomplete) treatment conditions

  18. Failure to consider or acknowledge potential allegiance and decline effects

  19. Failure to consider differential credibility checks across treatment groups; failure to consider the role of non-specific factors, such as the therapeutic alliance.
 
It is interesting to see the open acknowledgement of the likely main source of bias in studies, and the fact that they even have a word for it - 'therapeutic alliance'. How far away is that from therapeutic connivance or conspiracy? In the 1989 Chalder paper it is overt connivance. The patient has to say what they are supposed to say in order to be doing the treatment right.

My worry is that this is not so much David against Goliath as 'mind your p's and q's'.

They seem to be saying:
Mind out for these cowboys - they are not doing things right.

Maybe they are saying:
Here's how to put down these cowboys when they threaten to take away your business.

I was told off by a psychologist referee for possibly implying that maybe all psychotherapy trials are rubbish. This neatly dodges that by focusing on treatments that are 'hyped' - implying that what psychotherapists do all day long normally is fine.

It seems uncomfortably close to the response to PACE criticism that 'oh we don't do it like that anyway, we do person-centred treatments'.
 
I've had the impression for a while that the covert purpose of some psychological and psychiatric treatments is not to help the patient but to lessen the emotional burden placed by the sick person on society. In simpler words, it's about getting the patients to stop expressing their suffering, stop asking for help, stop bothering others.

This can be achieved by convincing the patient that the problem is not the illness but their behaviour in response to it and that they're totally overreacting anyway and have unjustified negative thoughts and feelings.
 
It is interesting to see the open acknowledgement of the likely main source of bias in studies, and the fact that they even have a word for it - 'therapeutic alliance'. How far away is that from therapeutic connivance or conspiracy? In the 1989 Chalder paper it is overt connivance. The patient has to say what they are supposed to say in order to be doing the treatment right.

"Therapeutic alliance" is not one of the sources of bias (that's researcher allegiance you're thinking of), its the notion that therapy mainly works via generic mechanisms that related to the quality of the therapist-client relationship.

As an idea, it is fine, but the problem is that it is used to explain why all forms of psychotherapy seem to "work". Another equally plausible explanation is that they all "work" because they all benefit from the same sources of bias - that is, response biases due to the client's positive expectations of likely improvement.
 
"Therapeutic alliance" is not one of the sources of bias (that's researcher allegiance you're thinking of), its the notion that therapy mainly works via generic mechanisms that related to the quality of the therapist-client relationship.

Well, that would seem to me to be very much the source of bias - the bias being not from the researcher's allegiance to a theory but from the 'client's' allegiance to the therapist. The generic mechanism indicated by Chalder is that the patient has to accept the positive nature of the treatment and go along with the therapist and their method of changing their mindset.

So that is why GET and CBT gave the same result - because the therapeutic alliance for physio and psychotherapy is pretty equivalent. The patient wants to 'help' the research team to much the same extent - or at least wants to give that impression. In the context of a trial that might count as researcher allegiance, perhaps, but the behaviour is not due to there being any research - it is the usual clinical relationship.


My experience of patients, at least in the UK, is that they will almost invariably say they are better if they think it is part of the social contract to do so. If the question is rephrased to neutralise that requirement they will in most cases say they are much the same. Over the years I developed a variety of ways of asking people about how they were getting on that allowed me to get the real answer out before they felt under an obligation to be kind to me.
 
This may be an off-the-wall comment, but I'll say it anyway.
So that is why GET and CBT gave the same result - because the therapeutic alliance for physio and psychotherapy is pretty equivalent. The patient wants to 'help' the research team to much the same extent - or at least wants to give that impression.
This reminds me of my dislike of having to answer questions about English literature at school, especially poetry (it was all long-dead male English poets, largely incomprehensible to a science-oriented Australian schoolgirl like me). I thought the task was to get the 'right' answer to please the teacher. I had no idea it was OK to give my own interpretation.

I think this desire to give the correct answer on questionnaires is particularly problematic in therapy with children, who spend their school lives being tested and gaining marks for getting the 'right' answer.
 
I think this desire to give the correct answer on questionnaires is particularly problematic in therapy with children, who spend their school lives being tested and gaining marks for getting the 'right' answer.

I agree.

If it were called something like 'therapeutic attitude', meaning being supportive, respectful and kind to patients, then I agree that would be a non-specific factor that might introduce bias because it does good in a non-specific way.

But the term 'alliance' seems to me very specifically used to imply commitment on both sides, with the patient's commitment being under scrutiny and the therapist merely being scrutinised for whether or not they can engender commitment in the patient!
 
I've had the impression for a while that the covert purpose of some psychological and psychiatric treatments is not to help the patient but to lessen the emotional burden placed by the sick person on society. In simpler words, it's about getting the patients to stop expressing their suffering, stop asking for help, stop bothering others.

This can be achieved by convincing the patient that the problem is not the illness but their behaviour in response to it and that they're totally overreacting anyway and have unjustified negative thoughts and feelings.

Yes, agreed. And your ideas on this lead me directly to the cost savings for governments, etc., that this therapy provides. Blame the patient, who will hopefully see their health issues as their own fault and not seek biomedical testing and care.
 
And, in my opinion, we see this exact thing happening in things like the Lightning Process, where it is claimed that change can only happen if the patients 'buy' into the process presented to them. And obviously this relieves the 'therapist' of any responsibility for whether it actually works or not, as it's all down to the patient.
 
I have not seen a single BPS paper that did not score at least 10/19, and probably most scored 15+/19, with nearly half scoring 18-19. Not a single one is below that, which makes sense, since they replicate the same formula over and over again. A flaw in design will obviously be replicated on the production line; that is the point of a production line: to produce identical outcomes.

Unfortunately, though, it's clear that this is a feature, not a bug. The point is to exploit the placebo/lab-coat effect in bad faith - a point that is entirely wasted, of course, but nevertheless it is the point: to bias perception as much as possible.

I think that's the part most people struggle with about the crisis of replicability/validity: none of this is accidental; it is done entirely on purpose, because otherwise the work is too hard, too demanding, and almost nothing comes out positive. It is a self-reinforcing mechanism, of course: by constantly lowering the bar to make it easier to publish, all the field has actually accomplished is to allow anything and everything to be published, with no concern for quality or validity. Almost all of it is effortless busywork, all cookie-cutter formula.

But what has to be admitted in order for things to change is that this has been a choice, and it is comparable to corruption in politics, where corrupt politicians are the ones who would have to change the rules that made them "successful" (at merely remaining in office, not at accomplishing anything). This places almost the entire field's self-interest against real-life outcomes and the pursuit of scientific knowledge.

Humans gonna human: never put doing the right thing between people and their immediate self-interest; you will be disappointed almost every time.
 