This is more of a "my personal experience" rant than a high-quality discussion, as I don't have the energy for the latter right now, but here goes:

Why do a chunk of published studies in ME/LC feel absolutely worthless? Why is it almost easy to find errors in published papers in this discipline? Why do the descriptions of the illness in the literature tend to be littered with inaccuracies, sometimes even misspellings of the name and unfamiliarity with the diagnostic criteria? Why do the large majority of papers overstate their findings and have abstracts containing unsupported claims? Why do many papers pretend to exist in a sort of void (i.e. tested a couple of blood markers and found no difference with controls => conclusion: Long COVID is probably psychological, completely ignoring the mountain of research showing biological abnormalities)?

Perhaps this is the Dunning-Kruger effect speaking, but I feel like if I had the energy to do it, I could write more balanced and informative non-technical sections for a lot of these papers. And I have no specialisation in biology or medicine at all; all I have is this illness and a decent amount of time spent on this forum over the past year.

Is this a normal thing in medicine? Or is it specific to ME/LC? Or am I imagining things and being unfairly critical of researchers?

I want to note this criticism does not apply to every researcher, and there are plenty who are doing a wonderful job. And even those I criticise have far, far, far more medical expertise than I have a chance of ever achieving in my lifetime. But a non-negligible chunk of these papers written by professionals don't feel professional at all. It often feels like the authors either haven't bothered to read up much about LC/ME and their views are partly formed by stereotypes (ME = fatigue, etc.), or they are entrenched in some personal belief system about ME or LC and see everything through their pet theory, ignoring the fact that the theory hasn't been proven yet.
Medicine is one of the few scientific disciplines where controlled lab experiments are largely impossible. Science happens mostly in labs, with repeatable experiments where all unrelated factors are accounted for. Looking at how AI has grown over the last few years, those systems all depend on knowing the right answer, at least most of the time. The same applies to humans: a group of people taught wrong will almost never know that what they were taught was wrong. They don't know the right answer and can't compare against it, because no one knows it yet, and they lack any clear way of finding that out. If they ask the teacher, the teacher will give them the same wrong answers, and if they answer correctly, they will be marked as wrong.

But there's a much deeper problem here. Every expert discipline makes its students aware of the limits of their knowledge. Apparently physicians are taught at medical school that much of their knowledge is not fixed, will change by the end of their training, and will change again by the end of their career. I'm now questioning whether this is actually true, because they sure as hell don't ever apply it, having literally built several overlapping ideologies out of that exact logical fallacy. It sounds more like a myth to me, like the Hippocratic oath, which is entirely meaningless in real life. Everyone thinks they're doing great. Certainly a physician would never think that their ideas are wrong, let alone do harm.

So it looks to be a normal thing in medicine, but a very abnormal thing in general. It's the only expertise where people can fail miserably and still pretend that they succeeded, and where others will actually go along with it. Someone builds a structure with the wrong materials and load calculations? Eventually it fails. Spectacularly. Unquestionably. Expensively. Someone has to pay to clean up, then rebuild. Same with a piece of technology: other people expect to use it, and if it doesn't work they won't pretend, they'll return it for a full refund. In medicine the mistakes get either buried or silenced in broad daylight, gagged by privacy laws meant to protect patients but used far more often to shield against accountability. There is literally no way to redress mistakes. It's an entirely self-regulated industry, where everything happens in secret behind closed doors.

AIs learn by training. Training, training, training. They adjust their weights based on feedback: did it work? How well? They adjust accordingly to get an optimal outcome. But in health care, the supposedly optimal outcome can be us: total and miserable failure across the board. They simply don't know any better, don't seem all that interested in finding out, and have no way of knowing anyway. Every time they hear about their failures, they launch into defensive postures about being attacked. They simply don't have a feedback loop enabled. Like a factory producing gadgets that never hears back about how its gadgets perform in the real world, they can't ever know whether their quality control process works. Not because of any real constraints; they just choose not to, because they can. It's so much easier this way, and it's how it's always been done.

All the parts of medicine that work are based on science. All of them. So-called evidence-based medicine is barely needed in the real world: it literally can't tell good from bad, while the good parts are based on science anyway.
We are simply in a space where science has not produced results, and they have nothing else to work with. No plan B. They can't even tell the difference between failing and achieving something, but they have the privilege of lying endlessly about it, lies they even get to believe themselves. Which amounts to the same thing, even though it feels otherwise. For sure this level of failure is unique, and it's massively amplified with us: we only get the worst of the worst of the worst. That just never works anywhere. The only way to get out of this failure loop is to realize it and want to change, and they never realize it unless technology hands them the answers they want in the first place, answers they demand before they'll even get started. Also unique, in part because of a lack of market forces rewarding success and punishing failure. Nothing happens when they fail. Not to them, or to their systems. We have all the stakes and all the consequences. Most random groups of people would do better when it comes to us; the methods and training of medicine actually make them worse at it. But they can't imagine that, and they don't listen when people tell them. They never listen, they don't have to. It's like all the worst combinations imaginable lining up, then you add human flaws (obsessing over being cheated, bigotry, lack of empathy and so on), and it's basically the most perfect combination of bad factors ever, amounting to the worst failure of expertise in human history, probably by a wide margin.
Well, one thing I think is happening (in general, not just in ME/CFS) is that we are taught to look to the literature (fraught with errors) to find out how to write, and taught to write up our findings and make claims in a way that shows our work has some sort of importance. Otherwise, how can it be used to support funding for further work...? @rvallee I've taken methods courses with MD students, and we are told things will change. Not that that has helped any.
There's definitely a lot of bad science and poor papers in this field, and that's largely due to the nature of the illness: somewhat poorly defined, heterogeneous, and misunderstood. Conflation with normal fatigue is obviously a clear problem and results in myriad terrible papers from people who should know better. One thing to note is that on this forum we see a large proportion of the published literature in this space, whereas the average person elsewhere sees only a tiny subset of the scientific literature, and that happens to be the better-quality stuff that makes the news. I know from my interest in research integrity that this is an issue affecting every corner of scientific publishing.
As with any desirable resource, organisms will try to exploit it: crops need pest management, mineral resources need security systems, media needs rules and regulations. Research funding is no different; you need to take measures to reduce losses to pests. The problem is that medical research has a disconnect between the people responsible for the funding and the people who suffer the losses caused by the pests.
I really feel your pain @Yann04. So much research is rubbish.

Just yesterday, I listened to a young researcher on the radio. The interview was billed as 'why mindfulness or gratitude journalling may not be the way to happiness - the problems with research on happiness'. "Great!" I thought. For his PhD, the researcher had reviewed 600 or so psychology papers examining methods to achieve happiness, concentrating on two issues: sample size and prospective registration. He reported that only about 60 papers met his quality criteria of sufficient sample size and prospective registration. The researcher said that there had been huge improvements in the quality of psychology papers over the last ten years and that many of the papers he reviewed had been written before the importance of prospective registration was understood. He actually said that asking people if they were happier at the end of an intervention was a good way to assess the intervention.

The interviewer then went through some of the 60 papers that the researcher had said were OK. So most of the interview was about gratitude journalling, exercise... discussed in a context that made it seem as though things had been assessed much more rigorously than in the past. It was such a frustrating listen. A couple of other problems with research were mentioned in passing: could the increase in happiness just be a momentary thing? Could the effect of gratitude journalling wear off in a couple of weeks? But still, the net effect of the interview for many people will have been to strengthen their belief in reported findings of research about woo. And the researcher said that he had only examined the literature on happiness in healthy people, and that the evidence might be more reliable for people with clinical conditions, depression for example. I might send the interviewer a copy of Brian Hughes' book on the crisis in psychology.

Anyway, to come back to the topic:
* Incompetent research to do with being human is rewarded with attention and citations rather than criticism. The media love a good story about how doing something relatively easy will make life better, because their audience loves those sorts of stories.
* The tolerance of bad research practices can be exploited by people wanting to show that a product works, for reputational and monetary gain.
* We don't have good enough systems to improve research quality, or to prevent bad research. That surely includes training and research funding.

I don't think ME/CFS is singled out for incompetence. I think it might be a bit like when you see a news item about something you know well and spot all the mistakes and misleading statements, whereas for the rest of the news it's easy to assume it is true. But it certainly doesn't help that there is so much misinformation out there about ME/CFS, so new researchers and funders can easily be misled. Nor does it help that the lack of treatments and the desperation of patients make it a Wild West where almost any idea can get a following if marketed correctly. Nor that ME/CFS is not a prestige topic, so top-notch researchers have, at least until recently, tended to work in other fields.
There is also the fact that the well-funded insurance industry and governments had a substantial incentive to help ensure that ME/CFS was seen as a behavioural issue that was easily cured. And seeing ME/CFS as a personality problem fits nicely with people's propensity to view illness through a 'just world' lens: "those people who are sick deserve to be sick, but I am safe because I am a superior person". Our patient charities have tended to be inadequately resourced and often staffed by people subscribing to all sorts of misinformation...

I don't know what we can do to be more effective in making things better. For sure, the incompetence in much of ME/CFS research has greatly held back progress towards understanding the disease. And it is really, really hard to get rid of.
Must be one of those lessons most sleep through, I guess. Or it's the usual "yes, some things might change, but not those things". Humans gonna human.
I get particularly fed up with poor methods in research studies, especially those looking at compounds with a short half-life outside the body. For example, BH4 has a half-life of 4 hours in healthy adults. In PBS it has a half-life of 16 minutes at room temperature and is essentially completely destroyed by 90 minutes. Guess what Simmaron did with some of their samples. Yep, they used PBS. Thread: https://www.s4me.info/threads/detec...rance-a-pilot-study-2023-gottschalk-ea.33317/ That's just one example among many.
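As a rough sanity check (my own back-of-the-envelope arithmetic, assuming simple first-order exponential decay at the quoted 16-minute half-life), 90 minutes is about 5.6 half-lives, so only around 2% of the original BH4 would be left to measure:

$$\text{fraction remaining} = \left(\tfrac{1}{2}\right)^{t/t_{1/2}} = \left(\tfrac{1}{2}\right)^{90/16} \approx 0.02$$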
Some – and more than a handful – of the abstracts even contradict themselves. Typically something like 'we did not find a significant result, but will recommend psycho-behavioural interventions anyway'. Nope. Right on the money. The BPS gang had a huge propaganda win when they got the name changed from ME to CFS. The incentives are perverse.
I don't think ME/CFS attracts an abnormal amount of bad research. All science publishing, and especially all medical publishing, is full of bad research for the reasons discussed: incentives to publish low-quality work, poor peer review, poor baseline standards of reasoning in medical research, and benefits outweighing risks in publishing work which flouts ostensible standards, including misrepresenting data.

But I think ME/CFS attracts an abnormally low amount of good work. Good work persists in other areas of medicine despite the problems in the publication system because researchers believe the best work will rise to the top, will be rewarded with esteem and advancement, and will help patients. The publication system does not sufficiently favour good work, but the wider medical career structure does. Since the 1990s, however, a toxic combination of financial interests, egos and prejudice has ensured that good research in ME/CFS is not valued but is rather contested by vested interests, and good researchers have mostly been chased away unless they have an overriding personal motive to pursue ME/CFS. The field has become captured by research which is not only bad but also tendentious and often actively misleading, and which has debased the whole field.

This resembles Gresham's Law: the economic principle of traditional metal coinages that "bad money drives out good". Once debased coinage is widely accepted, people stop spending their good coins containing the official amount of metal, and eventually they start melting them down, because their face value is worth less than their constituent metal. In the same way, rewarding bad research with career advancement drives out good.

After thirty years of this debased work being praised, we are left with few leads and a field populated by psychs, grifters and cranks - with a few honourable exceptions, several of whom post on this forum. What promising researcher would choose to study a condition where work has to start almost from scratch, resources are negligible, and, far from being deployed to help patients, solid work could lead to ostracism from polite medical society? Such a researcher's raw talents - their precious metals, in the coinage analogy - will be better valued elsewhere.

These career reward levers need to be pulled to begin to address the problem. Decode ME may well flush out important clues, but researchers will still need funding and opportunities if the field is to get crowded enough to give us a chance of progress for patients in our lifetimes.
An MD student I met during an ethics course was upset that the lecture on fraudulent research and its impact on patients and society was not compulsory, and that most of her class didn't turn up. I'm sure those stories are not important, as obviously it's all in the past and researchers/MDs today would never not listen to patients.
Without any medical education myself, meaning this is all rather worthless blabbering, I wonder whether this might be a rather intrinsic problem in medical research, one that is often less apparent in some of the harder sciences (and possibly even more apparent in softer ones such as sociology), and one that is a particular problem in ME/CFS research.

The proof of a hypothesis in medicine generally relies on a clinical trial. I would imagine that many medical researchers will not have been part of a successful trial during their entire career. They will have learned all the principles, read papers and attended seminars on successful trials, and perhaps they'll be listed as co-authors on one, but they might struggle to get first-hand experience of what a hypothesis that actually turns out to be true looks like versus a hypothesis that ends in the dirt (and most ideas are bound to land in the dirt). Some other scientists have it a lot easier: they might be able to prove things within their lab over and over again. There are certainly cases in modern physics where the hurdles are even bigger than in modern medicine and a proof requires building a monumental apparatus, but that will not bother the theoretical physicists. The mathematician has it easiest: he will see thousands and thousands of proofs and "truths" before ever having to come up with a conjecture of his own, and yet even the most experienced and brilliant will regularly come up with conjectures that don't stand the test of time.

In ME/CFS we see this over and over again. A hypothesis gets made up by some researcher on the back of data that has never been replicated and that only they have produced, most likely in an experiment far outside their expertise, and they are left to believe that this finding is now the "truth" behind ME/CFS; after all, "the easiest person to fool is yourself". This might be especially problematic for a young researcher who intends to grow up in ME/CFS research. Without any actual meaningful findings in ME/CFS research, everyone finds a way to convince themselves that the p-value they have discovered, which might not even be statistically significant, has to be the ultimate truth behind ME/CFS, because nobody can know what significance looks like. Growing up in ME/CFS research might mean you never learn what a finding with a good chance of being true looks like versus a finding that is just noise. Researchers with more experience, coming from other fields, might be able to tell a true finding from chance far better, but might it not sometimes be rather hard to spark their interest when a whole field is muddled with findings that lead nowhere and were never reproduced, yet still receive so much prominence?
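To make the "nobody can know what significance looks like" point concrete, here is a minimal sketch of how screening many markers once, with no replication, throws up "significant" p-values from pure noise. The marker count and group sizes are invented for illustration; nothing here comes from any particular study.

```python
# A minimal simulation: patients and controls drawn from the SAME distribution,
# i.e. no real difference exists, then many markers are each tested once.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

n_markers = 50      # hypothetical number of markers screened once
n_per_group = 20    # hypothetical patients and controls per group
alpha = 0.05

false_positives = 0
for _ in range(n_markers):
    patients = rng.normal(0.0, 1.0, n_per_group)
    controls = rng.normal(0.0, 1.0, n_per_group)
    _, p = ttest_ind(patients, controls)  # two-sample t-test on pure noise
    if p < alpha:
        false_positives += 1

print(f"{false_positives} of {n_markers} null markers came out 'significant'")
# On average about alpha * n_markers (here 2-3) cross p < 0.05 by chance alone,
# which is why a single unreplicated p-value is weak evidence on its own.
```

Run it with different seeds and a couple of the null markers cross p < 0.05 almost every time, which is exactly the kind of result that can get dressed up as "the truth behind ME/CFS" if it is never replicated.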