Large-scale investigation confirms TRPM3 ion channel dysfunction in ME/CFS, 2025, Marshall-Gradisnik et al

Like another study from this group that was posted here, it looks like the p-values are artificially low due to pseudoreplication. I'll just quote the last time I said it, since it's the same issue, just with the sample size changed:
Do you think it is intentional? Or is it just difficult/unusual to avoid, and thus likely to be an accident?
It all sounded quite interesting!
 
Here's another resource explaining pseudoreplication, and they say it's likely usually researchers just being unaware it's an issue:

Pseudoreplication is unfortunately quite a big problem in biological and clinical research, probably because many people aren’t really aware of the issue or how to recognise whether they’re accidentally doing it in their analysis. Several review articles have investigated the incidence of pseudoreplication in published papers, and have estimated that as many as 50% of papers in various fields may suffer from this problem, including neuroscience, animal experiments and cell culture and primate research. In fields like ecology and conservation, the estimated figure is sometimes even higher.
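To make the quoted point concrete, here's a minimal simulation sketch in plain Python (all numbers invented for illustration): two groups with no true difference, 10 subjects per group, 20 cells measured per subject, and strong subject-level variation shared by all cells from the same subject. Treating every cell as an independent sample (n = 200 per group) drives the false-positive rate far above the nominal 5%, while testing one summary value per subject behaves roughly as expected.

```python
import math
import random

def two_sample_p(xs, ys):
    """Two-sample z-test p-value (normal approximation to the t-test).
    Crude, but good enough to illustrate the effect of inflating n."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    z = (mx - my) / math.sqrt(vx / nx + vy / ny)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def simulate(rng, n_subjects=10, n_cells=20):
    """One null dataset: no group difference, but each subject contributes
    a shared offset to all of their cells (the source of pseudoreplication)."""
    groups = []
    for _ in range(2):
        cells, subject_means = [], []
        for _ in range(n_subjects):
            subj = rng.gauss(0, 1)  # subject-level effect, shared by its cells
            obs = [subj + rng.gauss(0, 1) for _ in range(n_cells)]
            cells.extend(obs)
            subject_means.append(sum(obs) / n_cells)
        groups.append((cells, subject_means))
    return groups

rng = random.Random(0)
n_sims = 200
naive_hits = correct_hits = 0
for _ in range(n_sims):
    (cells_a, means_a), (cells_b, means_b) = simulate(rng)
    if two_sample_p(cells_a, cells_b) < 0.05:   # every cell treated as independent
        naive_hits += 1
    if two_sample_p(means_a, means_b) < 0.05:   # one value per subject
        correct_hits += 1

print("false-positive rate, cells as n:   ", naive_hits / n_sims)
print("false-positive rate, subjects as n:", correct_hits / n_sims)
```

The fix used here (averaging to one value per subject) is the simplest one; a mixed-effects model with a subject-level random effect is the more general tool for the same problem.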
 
Is it possible some researchers haven't done a great deal more than that?
Yup, my program only requires one very basic stats course (I'm taking more advanced courses because I'm specializing in bioinformatics; everyone else is doing biology). I don't know if the idea of pseudoreplication would even be brought up in that basic course.

You'd hope that scientists absorb more stats knowledge from others during their training and career, but if someone is surrounded by people who do the same thing, it might just fall low on the list of things they're motivated to question and verify for themselves.
 
Considering how central statistics are to all medical research, this is really odd. It also explains a lot. When we look at evidence-based medicine and how it abuses statistics, this might explain everything: researchers use the tools because they're told to use them, but don't really understand why they should use them.

I guess this is how an entire discipline that aims to influence the lives of billions ends up standardizing heavy use of mathematical tools meant for quantitative data on qualitative data, where most of the benefits of those tools don't apply and the results can be wildly distorted.
 
If a project is well funded and comprehensive enough, there will usually be a biostatistician collaborator who does the analysis and (hopefully) knows to account for these things. But often, with a smaller investigation like this study, it's just going to be a grad student or postdoc generating the data and learning to run a few tests in a specific stats program. It would be up to their PI to check the analysis, but if the PI is a biologist who didn't spend much time on statistics, the student will come away with the impression that what they did is good enough. And there's often no requirement that any of the reviewers have a strong stats background: if it's a small experiment in a mid-tier journal that doesn't involve much sophisticated analysis, the reviewers will probably be chosen for their familiarity with the biology.
 
The incentive structures for sticking to the facts in a measured way seem far weaker, albeit more worthy.

I suppose the counterargument is that, if a group really thinks they've got something but still needs to pin it down, they might have to produce something shiny every now and again to get the support they need to keep chasing it.

Which is kind of okay, as long as they're bright enough to know when it's time to pull the plug.
 