Blog post: how I learned to doubt a paper

Siebe

Established Member
New blog post, in which I argue that science should be about delivering solutions, but too often functions as a hope-delivery system for patients.

"The irony is that when patients evaluate studies from the psychosomatic paradigm, they act like great scientists. They scrutinize inclusion criteria, they scrutinize the objectivity of outcome measures. When people defend their worldviews, it’s like they gain 15 IQ points!"

https://viralpersistence.substack.com/p/how-i-learned-to-doubt-a-paper

 
There's certainly an issue with patients blindly supporting studies from clinicians and researchers they see as allies (and, conversely, rubbishing all research they don't agree with, e.g., from known BPS types). Luckily, there's a healthy scepticism on this forum towards any and all research coming our way.
 
Thanks for this nice article @Siebe. I think most people with ME/CFS here have been through those phases you describe.

Scientists are highly biased towards publishing positive results and getting citations. They have to, to keep their jobs and secure grant money. It’s extremely rare for a scientific team to publish a paper saying “we tested our hypothesis and found nothing interesting”. Naive patients getting swept up in the hype make this much worse. And the more toned down and skeptical my tweets have become, the fewer likes I get. This dynamic turns science into a hope-delivery system for instant gratification, instead of the long-term solution-delivery system it is supposed to be.

I'd just add that there's nothing wrong with science being a hope-delivery system as well as a long-term solution-delivery system, and good science, even good science with a null result, can definitely be both.

If a scientific team does careful work and concludes 'we tested our hypothesis and found nothing interesting', that does give me hope. I think 'here is a reliable team that has a good chance of truly finding something useful next time' and 'well, that makes one possibility a whole lot less likely, so that we are closer to focusing on something that will be the breakthrough'.

We need to applaud skepticism in patients and value good quality null results in papers.
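
To see why the missing nulls matter, here's a toy simulation (my own sketch with invented numbers, not from the article): when only "significant" findings get written up, the published literature systematically overstates effects, even if every individual study was run honestly.

```python
import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.1  # a small true effect (standardised units) -- invented
n = 20             # per-group sample size: a typical underpowered study
n_studies = 10_000

published = []
for _ in range(n_studies):
    treatment = rng.normal(true_effect, 1, n)
    control = rng.normal(0, 1, n)
    # Standardised effect size (Cohen's d) and its approximate t statistic.
    d = (treatment.mean() - control.mean()) / np.sqrt(
        (treatment.var(ddof=1) + control.var(ddof=1)) / 2
    )
    t = d * np.sqrt(n / 2)
    if t > 1.96:  # crude "only significant results get published" filter
        published.append(d)

print(f"true effect:           {true_effect}")
print(f"mean published effect: {np.mean(published):.2f}")  # roughly 0.75, several-fold inflated
```

The unpublished nulls are exactly the information that would correct that inflation, which is one more reason rewarding their publication matters.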
 
Wow, I had the exact same arc; I was even buying random vagus nerve stimulators in 2023 because of "studies" that Putrino was doing. It's been awesome to read studies outside of CFS as well and gain a bit more insight into what makes a good design. I thank this forum immensely for opening my eyes!
 
Very good, @Siebe. I'm glad you found this forum. I have learned so much too about evaluating scientific papers. It's really opened my eyes to so many problems with the whole academic research world: perverse incentives to publish rubbish, egged on by desperate patients and amplified by social media.
 
Learning to doubt papers is very important but ... I would want to frame doubt probabilistically.
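
To make "probabilistic doubt" concrete, here's a back-of-the-envelope Bayes' rule calculation; the prior, power, and alpha below are my own illustrative assumptions, not numbers from the article or the thread.

```python
# P(hypothesis true | significant result), via Bayes' rule.
# All three inputs are illustrative assumptions.
prior = 0.10  # fraction of tested hypotheses that are actually true
power = 0.50  # P(significant | hypothesis true), generous for small trials
alpha = 0.05  # P(significant | hypothesis false)

p_significant = power * prior + alpha * (1 - prior)
ppv = power * prior / p_significant
print(f"P(true | significant) = {ppv:.2f}")  # ~0.53: a coin flip, not a confirmation
```

On those assumptions, a single significant result should move you from 10% to roughly 50%: doubt quantified, not doubt dismissed.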

Science has been made by frail and silly humans for a long time and has still made progress. You don't need every study to be the platonic form of crystalline perfect science to be able to press onward.

Prospectively, we should demand perfect science. Retrospectively, we should squeeze every weird little biased paper for every drop of juice.

Che, Brydges, Lipkin use a Bayesian approach to pan for gold in underpowered studies.
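
I haven't checked the exact model in that paper, but the general "panning for gold" idea can be sketched with a simple conjugate normal-normal update, using made-up effect estimates: a skeptical prior shrinks each noisy study towards zero, while pooling several weak studies sharpens the posterior.

```python
import numpy as np

# Skeptical prior on the true effect: centred on zero, fairly wide.
prior_mean, prior_var = 0.0, 0.5**2

# Made-up effect estimates and standard errors from three underpowered studies.
estimates = np.array([0.45, 0.30, 0.55])
ses = np.array([0.30, 0.35, 0.25])

# Conjugate normal-normal update: precisions add, means are precision-weighted.
post_precision = 1 / prior_var + np.sum(1 / ses**2)
post_mean = (prior_mean / prior_var + np.sum(estimates / ses**2)) / post_precision
post_sd = np.sqrt(1 / post_precision)

print(f"posterior effect: {post_mean:.2f} +/- {post_sd:.2f}")
# No single study here is convincing on its own, but the pooled
# posterior (about 0.41 +/- 0.16) sits well away from zero.
```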
 
Here's an interview with Erick Turner, MD.


He was the first psychiatrist to make the public aware that there was a problem at the FDA with approving antidepressants on the basis of cherry-picked, flawed studies. You could say that his specialty was publication bias.

Whenever I see low-dose repurposing of a drug, as we're now seeing in ME too, I am very skeptical, because this is a well-known strategy that has been established in the marketing of psych drugs for quite some time.
 
If a scientific team does careful work and concludes 'we tested our hypothesis and found nothing interesting', that does give me hope.
This increasingly looks to be the problem in clinical trials and evidence-based medicine: negatives can't be proven. And it's a fundamental, fatal flaw, especially when mixed in with positive bias that is turned up to 11. It means that the entire industry can easily get stuck in an infinite loop. Without the positive bias this usually works out fine, because it's not a good career move to obsess over a dead end. I can't remember a 'pragmatic' trial that actually accepted its negative conclusion in the way a falsified hypothesis is accepted in scientific research.

Not here; in fact, it's a great career move to massively over-hype, market, and deploy treatment models that don't work at all, as long as they are not pharmaceutical. Mainly because once a treatment has been deployed and used in real life, no one involved in its delivery wants to admit they got fooled by something that doesn't work. So they will always escalate. Instead, all that evidence-based medicine does is increase the sunk-cost commitment, making it even less possible to get out of the loop.

It's a lot like some of the early machine-learning systems that played video games, where some agents would get stuck if a video played on a wall. They couldn't get out of the loop, because the changing pixels gave them a false signal that they were moving around the level.

I really don't see a way to fix this. It can't be fixed internally. There is no outside pressure against this, and anyway false hope exploiting the "mind over matter" mythology is super popular at all levels of society, from the fringes of conspiracy fantasy communities and the general public, all the way to government officials, experts and their institutions.
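
On "negatives can't be proven": strictly, there is machinery for accepting a null in a disciplined way, equivalence testing via two one-sided tests (TOST); it's just rarely used in these trials. A minimal sketch with invented data and a pre-specified margin of clinical irrelevance:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Invented outcomes on a 0-100 scale; the treatment truly does nothing.
treatment = rng.normal(50, 10, 80)
control = rng.normal(50, 10, 80)

margin = 5.0  # differences smaller than this were agreed in advance to be clinically irrelevant

# TOST: show the difference is significantly above -margin AND below +margin.
p_lower = stats.ttest_ind(treatment + margin, control, alternative="greater").pvalue
p_upper = stats.ttest_ind(treatment - margin, control, alternative="less").pvalue
p_tost = max(p_lower, p_upper)

print(f"TOST p = {p_tost:.3f}")  # a small p lets the trial conclude "no clinically relevant effect"
```

A trial designed this way can genuinely falsify its own treatment, instead of ending in the usual "more research is needed".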
 
Thanks for the nice responses all :)

A couple of thoughts:
- yes, to some extent good data can be squeezed from bad studies, but that requires open data, which is still mostly unavailable (even though papers say it's available upon request), and you'll be limited by the experimental design and by which data they collected or failed to collect.
- I wanted to link to a blog post by Scott Alexander about science progressing despite its flaws, but Gemini gave a pretty good overview of his writing on this. However, science is currently very inefficient, and I think there are things we can do to improve that efficiency.
- one idea to reward the publication of negative results without spin is to give out prizes for it. Ideally, this adds prestige that can then be leveraged the same way citations get leveraged (to get further grants). An S4ME Yearly Prize for Best Methods, and one for Best Null Result?
- there isn't an IACI-specific journal, and journals can really raise standards by demanding, e.g., data availability and some field basics (like not using the Fukuda criteria). I think there's an opportunity here to found one.
 