Review Selective outcome reporting in trials of behavioural health interventions in health psychology & behavioural medicine journals, 2025, Matvienko-Sikar

Discussion in 'Other psychosomatic news and research' started by Dolphin, Feb 13, 2025.

  1. Dolphin

    Dolphin Senior Member (Voting Rights)

    Messages:
    6,230
    https://www.tandfonline.com/doi/full/10.1080/17437199.2024.2367613

    Selective outcome reporting in trials of behavioural health interventions in health psychology and behavioural medicine journals: a review
    Karen Matvienko-Sikar, Jen O'Shea, Stephen Kennedy, Siobhan D. Thomas, Kerry Avery, Molly Byrne, et al.
    Pages 824-838 | Received 27 Nov 2023, Accepted 09 Jun 2024, Published online: 26 Jun 2024

    Selective outcome reporting can result in overestimation of treatment effects, research waste, and reduced openness and transparency.

    This review aimed to examine selective outcome reporting in trials of behavioural health interventions and determine potential outcome reporting bias.

    A review of nine health psychology and behavioural medicine journals was conducted to identify randomised controlled trials of behavioural health interventions published since 2019.

    Discrepancies in outcome reporting were observed in 90% of the 29 trials with corresponding registrations/protocols.

    Discrepancies included 72% of trials omitting prespecified outcomes; 55% of trials introduced new outcomes.

    Thirty-eight percent of trials omitted prespecified and introduced new outcomes. Three trials (10%) downgraded primary outcomes in registrations/protocols to secondary outcomes in final reports; downgraded outcomes were not statistically significant in two trials.

    Five trials (17%) upgraded secondary outcomes to primary outcomes; upgraded outcomes were statistically significant in all trials.

    In final reports, two trials (7%) omitted outcomes from the methods section; two trials (7%) introduced new outcomes in the results that were not in the methods.

    These findings indicate that selective outcome reporting is a problem in behavioural health intervention trials.

    Journal- and trialist-level approaches are needed to minimise selective outcome reporting in health psychology and behavioural medicine.
    Woolie, Fero, BrightCandle and 14 others like this.
  2. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    1,202
    Location:
    Norway
    This is bad, but not unexpected.

    When does it become fraud?
     
  3. Sean

    Sean Moderator Staff Member

    Messages:
    8,668
    Location:
    Australia
    For ME/CFS at least, the psycho-behavioural crowd crossed that bridge very early on, IMHO.
     
  4. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    1,202
    Location:
    Norway
    Is 'science fraud' one of those self-policed terms? Is there anything we could do to make it have real consequences for the researchers?
     
    alktipping, Kitty, Yann04 and 4 others like this.
  5. dratalanta

    dratalanta Senior Member (Voting Rights)

    Messages:
    108
    This is a substantial result: that an entire field is suspect, with selective outcome reporting in a whopping nine out of ten papers.

    It’s disappointing though that this is not in a journal more widely read by the doctors and guideline writers who recommend bogus interventions as a result of this junk science.

    This is in Health Psychology Review, which is an important journal for the field, but that means it’s read by the people who are overwhelmingly perpetrating this selective reporting so this result is not going to be news to them. They’ll continue to insist this doesn’t affect the robustness of their results because… (insert inane reason here: “we adjust our outcomes to reflect clinical experience - do you want to block patients from finding effective treatments?”).

    If anything they’ll be heartened to know that everyone else is at it too. You might as well publish Lance Armstrong’s dope test results in Cheating Cyclist’s Newsletter.

    The fact that this entire field is junk needs to be trumpeted across the medical world before healthcare systems waste any more money, time and patients’ irreplaceable lives on this quackery.
     
  6. Peter Trewhitt

    Peter Trewhitt Senior Member (Voting Rights)

    Messages:
    4,621
    In general I don’t think the majority of researchers/authors involved are intending systematic fraud; rather, like PACE, they are not interested in answering scientific questions so much as in confirming their established biases.

    We have seen, with the likes of Wessely and Sharpe, how researchers in our field can operate in total doublethink: applying objective scientific standards in one context, i.e. biologically based research that could contradict their position, while showing total disregard for objectivity in confirmatory psychosomatic research. There was the telling metaphor from one of the PACE apologists (I forget which) describing the study as an ocean liner heading to its target port.

    Indeed, we see in this field that behavioural/psychological research which ought to be the reductio ad absurdum of their preferred experimental design, rather than leading to re-evaluation, results in advocacy of the unconscionable, as with Garner’s and others’ support for the Lightning Process.

    Having said that, whether fraud or confirmation bias, the result is the same: patient harm.

    [edited to include LP sentence]
     
    Last edited: Feb 14, 2025
  7. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    1,202
    Location:
    Norway
    Wouldn’t ‘manipulation of the research method’ be classified as fraud?

    You’ve changed data. It doesn’t matter if it’s the measurement data or the labelling of the data (e.g. changing something from a secondary to a primary outcome).
     
  8. Sean

    Sean Moderator Staff Member

    Messages:
    8,668
    Location:
    Australia
    Wessely.

    https://www.nationalelfservice.net/...syndrome-choppy-seas-but-a-prosperous-voyage/

    In an email exchange with Julie Rehmeyer, in 2016, he also said this:

    Simon Wessely, president of the UK Royal College of Psychiatrists, defended the trial in an email exchange with me. He argued that some patients did improve with the help of cognitive behavior therapy or exercise, and noted that the improvement data, unlike the recovery data, was statistically significant. “The message remains unchanged,” he wrote, calling both treatments “modestly effective.”

    Wessely declined to comment on the lack of recovery. He summarized his overall reaction to the new analysis this way: “OK folks, nothing to see here, move along please.”​

    "...nothing to see here, move along please."

    I'm sure he would like nothing more than to not have to lower himself to giving honest answers to informed, substantive questions from the patient rabble.

    https://www.statnews.com/2016/09/21/chronic-fatigue-syndrome-pace-trial/
     
    Last edited: Feb 14, 2025
  9. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    1,202
    Location:
    Norway
    I just want to add that the lack of ‘intent’ is irrelevant. The closest analogy would be the law - ignorance won’t save you if you go above the speed limit. There are some exceptions, but fraud is not one of them.
     
    alktipping, Kitty, rvallee and 3 others like this.
  10. EndME

    EndME Senior Member (Voting Rights)

    Messages:
    1,423
    I agree with @Peter Trewhitt. There may also be instances where there are good reasons to do such a thing. I suppose it depends entirely on the trial.

    This is only a review of 29 studies, so not very representative, but I wouldn't be surprised if something similar applied at larger scale. You will find that many biomedical trials (for example of drugs) do exactly the same things mentioned above. In some cases this can also be justified. Almost everyone is aware of these problems. They are largely also aware of other, far more fundamental problems in trial design. For the individuals there are gains to doing research in this fashion; the gains are simply largely not of a scientific kind. But then again, not everyone can be a good scientist.

    With my extremely limited view, the difference I see in the psychobehavioural field is that there is a shrugging of shoulders when poor methodological standards, despite everyone being aware of the problem, form the basis of evidence whenever suitable. At least in the case of biomedical trials there is a higher standard required for something to count as meaningful evidence (at least in the majority of fields).

    That's the thing. There are no positive dope test results for Armstrong. Famously the most tested athlete of all time, and full of drugs, he never tested positive. He was taken down because he was an ass****, not because he was doing what almost everyone else was doing, had been doing and is still doing.

    Here you're simply seeing a collection of people doing what everyone else is doing. They are largely aware of these problems, but prefer to neglect them when it comes to their own work. Just like many people elsewhere. Absurdly, psychologists seem particularly unable to comprehend the psychology of research and trials.
     
    Last edited: Feb 14, 2025
    alktipping, Kitty, rvallee and 2 others like this.
  11. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    1,202
    Location:
    Norway
    @EndME can you give some examples of when it’s justified to deviate from the study protocol? And contrast that with when it isn’t?
     
    Sean, alktipping, Kitty and 1 other person like this.
  12. EndME

    EndME Senior Member (Voting Rights)

    Messages:
    1,423
    That's a question for someone else with actual expertise to answer. But I can imagine that you might gain insights during a drug study that could make the study more meaningful if it is slightly adapted to reflect them, or that things could occur which couldn't be anticipated and which require adaptations.

    The problem, of course, is when, as is unfortunately far too often the case, you have no meaningful results and simply switch things around here and there to obtain a desired-looking outcome. And in the study above, all upgraded outcomes are, to no one's surprise, statistically significant.
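
    A quick simulation makes the incentive concrete. This is only a back-of-the-envelope sketch of my own, not anything from the paper: the eight independent outcomes and the 5% threshold are made-up assumptions. Under a true null effect, sticking with the prespecified primary outcome gives the expected ~5% false positive rate, but promoting whichever of eight outcomes happens to look best pushes it to roughly 1 - 0.95^8, about a third.

        import random

        random.seed(1)

        N_TRIALS = 100_000   # simulated trials of a treatment with no real effect
        K_OUTCOMES = 8       # hypothetical number of outcomes measured per trial
        ALPHA = 0.05         # nominal significance threshold

        honest = 0    # trials "significant" on the prespecified primary outcome
        switched = 0  # trials "significant" after promoting the best-looking outcome

        for _ in range(N_TRIALS):
            # Under the null hypothesis, each outcome's p-value is uniform on [0, 1].
            p_values = [random.random() for _ in range(K_OUTCOMES)]
            if p_values[0] < ALPHA:    # honest analysis: outcome #1 stays primary
                honest += 1
            if min(p_values) < ALPHA:  # outcome switching: report the smallest p-value
                switched += 1

        print(f"Prespecified primary: {honest / N_TRIALS:.1%} false positives")
        print(f"Post-hoc 'primary':   {switched / N_TRIALS:.1%} false positives")

    That prints roughly 5.0% against 33.7%, which is why a pattern of upgraded outcomes all reaching significance is such a red flag.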

    I would guess the default should be that there is no justification for making changes unless a rigorous one is given. That would unfortunately still leave wiggle room for those making changes as they wish.

    For something like PACE that shouldn't matter. A nothingburger is a nothingburger, even if you flip it around and put some truffle oil on top.
     
    Last edited: Feb 14, 2025
  13. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    1,202
    Location:
    Norway
    Wouldn’t it be better to complete the study as intended and then add more info if you encounter something unexpected?

    ‘X didn’t happen, but we found Y. We did some digging, and Z might explain it. We should look closer at this.’

    Like Fluge and Mella did - publish the null results, highlight other interesting aspects and use that to guide further research.

    I believe it would be very rare to find completely unexpected behaviour in treatment trials. Unless ‘unexpected’ means ‘my pet theory failed’…
     
    MeSci, Trish, Sean and 3 others like this.
  14. EndME

    EndME Senior Member (Voting Rights)

    Messages:
    1,423
    Some examples I could think of: new information and/or results from other trials; better biomarkers or outcome measures becoming available once the protocol has already been established; unusual side effects becoming known during the trial; etc. Some trials run over extremely long time periods, so that such knowledge can be quite inevitable (baricitinib for long Covid could potentially be such an example). I could imagine there are some rare occasions where you might even be running a trial where it's a priori clear that you will have to see how things develop and make adjustments along the way. I think there are examples of trials that a priori started without a primary endpoint but that are still sound trials.

    I suppose there has to be some form of independence between the desired data and the decision to change things. The question is on what basis these changes are being made: post hoc, during the study, after reviewing interim data, etc.

    I'm sure there's an abundance of publications on when trial outcomes can be changed and when not. I think most journals require justifications when outcome measures are switched. Of course that can often still leave much wiggle room, and there is often room for cherry-picking.
     
    MeSci, alktipping, Kitty and 2 others like this.
  15. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    14,257
    Location:
    Canada
    Fraud is mainly about deceiving people in order to benefit from that deception. In that regard, the entire psychobehavioral ideology is fraudulent, and a huge chunk of so-called evidence-based medicine is also fraudulent. They both make false claims of efficacy and cause-and-effect where none exist. The people making those claims benefit from the deception. It is absolutely fraud.

    It doesn't matter what they aim to do, because the intent here is deception, their very product itself is deception, and by definition that is fraudulent. Selling fake treatments is just as fraudulent as selling products that don't work; advertising and consumer laws on this are far stricter than research standards. The psychobehavioral approach is comparable to putting a bunch of rocks in a box with shiny graphics boasting of being able to do X, Y and Z, none of which it can do, based on a bunch of BS claims. It is entirely identical to all the fake pills pushed by red-faced muscle bros on podcasts, or anything fake found in conspiracy communities.

    And the entire industry benefits from the lie that they aren't failing millions of people, effectively refusing to do their job, including proper assessment of their own job performance, which is massively negative here. It would never pass any judicial review, because there is simply nothing that can account for this level of systemic fraud; most expert witnesses would defend it because it's usual practice. The whole system breaks apart when most of that system openly enables and rewards fraud.

    But I don't think anyone can claim innocence. No one is that good: this field is producing basically 90%+ success while pushing the same thing in a loop. It's completely absurd to get such results; they need to be questioned. And instead they are defended, making the entire industry complicit in massive fraud, on a scale that makes it impossible to stop because it's too fashionable.

    Hell, even the level of bias here reaches the level of fraud. A huge chunk of the fake evidence here basically comes from people who:
    1. Made the false models to begin with
    2. Invented their own treatments based on those models
    3. Manufactured fake evidence supporting their own claims
    4. Designed and ran trials of their own invented treatments based on their own models
    5. Performed meta-analyses and systematic reviews of their own research
    6. Are, in many cases, even editors or in other influential positions from which they push their own research
    You only reach this level of fraudulent bias in monarchies where everyone in charge is a cousin or close business partner of the ruler. No one can claim innocence here; these are completely ostentatious levels of fraud.
     
  16. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    1,202
    Location:
    Norway
    New info could be added without changing the primary outcome. Same with biomarkers. Negative side effects should already be covered indirectly by adverse events. Positive side effects could be added as secondary or tertiary outcomes.

    As a silly example: if you studied Viagra, the primary endpoints would be hypertension and angina pectoris. But during the trial, you discover that it helps with erectile dysfunction. So you report a negative result for the primary endpoints, but include a section about the unexpected side effects. That’s just reporting everything you found.

    That can obviously be valuable. On the other hand, changing the primary endpoint after the fact would require very heavy arguments. And going by what I’ve encountered so far, very few studies that change endpoints have valid reasons to do so. It’s mostly about getting a result, because publishers and employers don’t want null results. Or the researchers don’t want to be wrong.

    I would love to learn more about this approach if anyone has examples.

    Have you ever seen this kind of independence? And how independent can you be if everyone is in the same system together? Just take a look at the ‘independent’ peer review system.

    And we know from GRADE, Cochrane, etc. that manuals and guidelines are insufficient.
     
  17. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    14,257
    Location:
    Canada
    It can definitely make sense in drug trials, because they are so expensive. The larger ones cost more than the entire global ME/CFS research budget.

    An example I can think of is a drug tested to help with one thing that happens to help with another thing no one expected. So a drug may not help with a coagulation problem, but could make breathing easier, thus leading to better survival (a totally made-up example). Completing the trial as intended and starting over from scratch to properly check the other thing would be massively wasteful, as long as the results are reported honestly.

    But psychobehavioral outcomes are just too easy to select for a fake effect. It's not something that occasionally happens; it's the whole of what they do. Especially as they take the most random things and claim them as some sort of gotcha, whereas where changing outcomes is appropriate, it would be based on objective results showing a clear, self-evident benefit rather than one fished for.
     
  18. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    1,202
    Location:
    Norway
    That’s kind of why you do phase I and II. Surely, it would be exceedingly rare to encounter such interactions as late as phase III or IV. Besides, you would not enter phase III with an assumed ineffective drug, and it would be valuable to know that it wasn’t effective after all. That’s also good info, because you can eliminate options and models.
     
  19. EndME

    EndME Senior Member (Voting Rights)

    Messages:
    1,423
    If another drug trial, of a potentially life-threatening or life-saving drug, had already revealed null results for a primary outcome measure but actually showed the drug working in unexpected ways, I think it would be unethical to carry on just as planned and ignore this evidence when it has the potential to contribute to research and to saving lives. The same argument applies to biomarkers. In these cases the justification largely comes from observations independent of your own data. The goal is to help those suffering, and as such there are instances where changing outcome measures can be justified. Some trials run over years, others over several decades.

    Not if these side effects render your outcome measures useless (for example, by destroying blinding), which might require adaptations to the original protocol.

    If there is a genuine effect that could, for example, save lives, but it was not a prespecified endpoint, it might be that you wouldn't be able to obtain approval for it despite it saving lives. Or a biomarker becomes available, or similar.

    It would indeed require rigorous arguments, that go far beyond justifying sloppy cherry-picking.

    Certainly.

    I remember seeing such examples. I would have to Google around a bit to find them.

    The independence is the independence of your data from additional knowledge. At least in fully blinded trials there would be instances where ensuring such independence should be quite manageable.

    If you're running unblinded psychological trials with subjective outcome measures, guidelines are probably never going to achieve anything in the first place. What is required is common sense. What has been witnessed is a lack thereof.

    Phase 1 studies are largely safety and dosage studies. Usually you're doing Phase 2, or in other cases even going directly to Phase 3, depending on the drug and condition. I don't think it is exceedingly rare to experience similar issues in Phase 3 studies.

    Baricitinib would be an example of a long Covid study jumping straight to Phase 3 where there is essentially zero indication of how or why it should work and no evidence that it works. As such, similar problems apply to the outcome measures. Should new knowledge become available (which I highly doubt), there could be justifications for switching things up if done on a solid basis. In my eyes it would be far more sensible to do other things in the first place, but that is a different question.

    Alternative examples would be the trials targeting viral persistence that have been ongoing, seemingly without much thought behind them. Just imagine, hypothetically, that a meaningful biomarker for viral persistence were discovered after a trial had already been planned.
     
    Last edited: Feb 14, 2025
  20. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    1,202
    Location:
    Norway
    @EndME I’m running out of steam here, but I just want to acknowledge that you make some very good points and that I agree with the essence of them. Thank you for the good explanations and examples!
     
