Hypotheses and Research Directions for ME/CFS

If, say, Daratumumab works for ME/CFS, then that study alone will be enough to tell us that something Dara affects is involved. We don’t need all of the info, just enough.

Yes, that’s mostly the case in ME/CFS, and science in general. Not because science is useless, but because there are too many scientists doing useless stuff.

Not at all. By dissecting the individual pieces of evidence we get a much better understanding of where the solution might lie. And if the researchers had bothered with the discussions beforehand, they might have been able to design studies that could have provided meaningful data.

I think that by far the most important contribution by S4ME is that people are willing to assess the evidence and not get too caught up in hypotheses.

And I think there is a general misunderstanding among both laypeople and scientists about what science is. You’re supposed to propose a testable hypothesis, figure out what would confirm or refute it, and then gather the data. Far too many pick a hypothesis and go out looking only for data that confirms it, completely neglecting anything that disproves it.

So when someone says «the mitochondria are not producing enough energy», and someone else says that «if that was the case, you’d expect to see symptoms X and Y like in other diseases where the mitochondria are faulty, but you don’t see those in ME/CFS», you’ve already got good data against the hypothesis.

Or pointing out that if ME/CFS is neurodegenerative, then we’d expect to see the symptoms of Alzheimer’s, Parkinson’s etc., but we don’t. And people have gone from being severe to healthy, so whatever ME/CFS is, it’s probably reversible.

And you don’t have to be a scientist to ask those kinds of questions or make those observations.
That’s exactly my point. If Fluge and Mella had asked in this forum back then whether they should test this, many people here would probably have advised against it, saying the evidence was too weak.

The idea that we first need to finish all discussions and only then design studies is unrealistic. Especially in poorly understood diseases, good studies often come from competing and even incomplete hypotheses.
Without hypotheses, there is no meaningful data collection. Data are not neutral, they are always shaped by assumptions.

I never said that science is useless. What I meant is that, in discussions like these, hypotheses from well-known researchers are often dismissed simply because the existing studies don’t confirm them 100%. I think some hypotheses now have enough support to be tested properly. B-cell depletion is one example. The mitochondrial hypothesis is another.

What I’m trying to say is that we may need to let go of the idea that we must fully understand the disease before trying therapies. Instead, we should test solid hypotheses and see whether they lead to treatments. Otherwise, I honestly don’t see how we’ll get effective therapies anytime soon.

Remissions do not rule out a biological cause. Even autoimmune diseases can partially or completely go into remission.

Saying “you don’t need a scientific background” sounds appealing, but it’s risky. Intuition cannot replace methodological understanding, especially in highly complex biological systems.

Critical thinking means more than saying “this isn’t proven.” It also means knowing that a lack of clear signs doesn’t necessarily disprove an idea.
 
That’s exactly my point. If Fluge and Mella had asked in this forum back then whether they should test this, many people here would probably have advised against it, saying the evidence was too weak.
No, they would have asked for their reasoning for trying X, assessed the reasoning, and asked if there are alternative ways to get good data that is cheaper, faster and/or safer.
The idea that we first need to finish all discussions and only then design studies is unrealistic.
Nobody has claimed that. But if patients here manage to poke serious holes in a hypothesis in a matter of days, then there are serious questions that should be asked of the researchers. We’re not asking for perfection, but for the bare minimum.
What I meant is that, in discussions like these, hypotheses from well-known researchers are often dismissed simply because the existing studies don’t confirm them 100%.
No, that is not why they are dismissed. They are dismissed because there is good reason to think they are wrong. It’s possible to prove that something is wrong without knowing what the truth is. It’s like knowing that adding two even numbers will always produce an even number, so any odd answer must be wrong. I don’t have to know the numbers, just some of their properties.
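The parity point can be made concrete with a quick sketch (a minimal illustration of the argument, not part of the original post): the property alone rules out whole classes of answers without ever knowing the specific numbers.

```python
# Any even number can be written as 2k, so the sum of two evens is
# 2a + 2b = 2(a + b), which is itself even. An odd answer can therefore
# be rejected purely from the inputs' properties, without knowing them.
def even_sum_is_even(a: int, b: int) -> bool:
    """Check that the sum of two even numbers is even."""
    assert a % 2 == 0 and b % 2 == 0, "inputs must be even"
    return (a + b) % 2 == 0

# Holds for every even pair we try.
print(all(even_sum_is_even(2 * i, 2 * j)
          for i in range(50) for j in range(50)))  # prints True
```

The same logic is what lets a hypothesis be refuted from its predicted consequences alone: if the prediction fails, the hypothesis is wrong, regardless of what the true mechanism turns out to be.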
What I’m trying to say is that we may need to let go of the idea that we must fully understand the disease before trying therapies. Instead, we should test solid hypotheses and see whether they lead to treatments. Otherwise, I honestly don’t see how we’ll get effective therapies anytime soon.
Nobody is claiming we need a full understanding first. Just that most people have a tendency to jump to treatments when you can get better answers sooner and cheaper with different methods.

Most people overestimate the probability of stumbling upon a treatment, and severely underestimate the probability of something being harmful. Experimenting willy-nilly is a net-negative game.

There is a long discussion about that here if you’ve got thoughts:
Remissions do not rule out a biological cause. Even autoimmune diseases can partially or completely go into remission.
I said reversible, not non-biological. We’re in agreement here.
Saying “you don’t need a scientific background” sounds appealing, but it’s risky. Intuition cannot replace methodological understanding, especially in highly complex biological systems.
I didn’t say in general. I said for the kinds of questions I gave examples of, questions that scientists somehow don’t seem to consider that often even though they are very basic.
Critical thinking means more than saying “this isn’t proven.” It also means knowing that a lack of clear signs doesn’t necessarily disprove an idea.
Nobody here is claiming that. They’re claiming that the presence of clear negative signs probably disproves the idea.
 
Maybe Kitty is referring to making sure we are trying to explain the right thing: is a proposed thing-to-explain real or misidentified?

Yessss... both the research and patient communities have wasted a lot of time trying to explain things without asking if they’re really there.

Look at the PEM factsheet threads. We discovered we thought we were talking about the same things, but often we weren't. Thinking we had a thing to define when we didn't, not really. It was repetitive and wearying and took far too long, but it's essential and not enough of it happens. Being sure you know nothing is an achievement.

I don’t think it’s necessarily an uncommon skill

I might take issue with that slightly. I worked in the performing arts, and saw how knowledge and experience and training and ego and expectation and reputation all got in the way of the truth. I think it could be the case in science too. It strikes me that people are allowed something of an open mind as undergraduates, but after that many don't get to ask really stupid questions like "What is wet?" again. They're stuck in a machine that doesn't allow it, and tend to lose the will or the ability.
 
I might take issue with that slightly. I worked in the performing arts, and saw how knowledge and experience and training and ego and expectation and reputation all got in the way of the truth. I think it could be the case in science too. It strikes me that people are allowed something of an open mind as undergraduates, but after that many don't get to ask really stupid questions like "What is wet?" again. They're stuck in a machine that doesn't allow it, and tend to lose the will or the ability.
Sure, I can see that to an extent. But just as often as I see that attitude, I see tons of examples of the opposite. Every time I’m in a meeting with multiple PIs at least half the time is spent on questions of “are we sure X means Y? Have we checked Z? What are we missing here?”

Just a few weeks ago I was at a meeting where we were talking about several examples of drugs that ended up working even though parts of the underlying logic got disproven or the foundational research could not be replicated (and, sometimes, the replication to the replication showed that the original findings were real after all).

Is it good to be as meticulous as possible? Absolutely. But the reality of science is that we will be constantly overturning even the most basic interpretations we thought we could safely make.

Taking a leap of faith on a theory where parts of it probably won’t hold up is where all science starts by necessity—the good as well as the bad. It’s good to spend time hammering out details so the theorizing doesn't go completely off the deep end of plausibility, but the hammering-out-of-details tends not to actually move things forward without the leap-of-faith theorizing alongside it.
 