Preprint: Initial findings from the DecodeME genome-wide association study of myalgic encephalomyelitis/chronic fatigue syndrome, 2025, DecodeME Collaboration

The FND GWAS had 22,000 participants and 3 hits across 2 different conditions. DecodeME had 15,000 participants and 8 hits. DecodeME gives the impression of being more on target.
Certainly, especially when the FND study is based on cohorts that are already known to be unsuitable for ME/CFS (such as UK Biobank). I'm not questioning the much higher quality of DecodeME. However, I don't think this should affect the argument that you can sporadically pick up genes due to confounders, and these confounders not generally being linked to "general chronic illness genes".
 
Can anyone point me to this study? I seem to have missed it.
 
I just used ChatGPT-5 with an entirely new approach in my prompts and I am posting the results here. The LLM is presented with the most solid evidence first (e.g. GWAS results), and then layers of information are given in subsequent prompts as newly added knowledge (e.g. metabolomic results for which we have replicated findings). It should be noted that I did not mention anything about Myalgic Encephalomyelitis or Long COVID. Here are the results for anyone interested:

[Screenshot: ChatGPT-5 output, 2025-08-07 23:24]

and

[Screenshot: ChatGPT-5 output, 2025-08-07 23:32]
 
What data are you inputting? I think most of the data besides DecodeME (for example, GWAS data from UK Biobank) might be entirely unreliable rather than "high confidence".
 

Good question. A second layer of data, with lower confidence than the GWAS, were the metabolomic results showing certain lipids (e.g. cholines) being low in several studies. A third layer is comorbidities, and so on.
 
The way I'm understanding this is that the 8 highlighted results are genomic loci / variants (SNPs). These point to a region of DNA, not to one specific gene. Each locus is then named after the closest gene, which is why they appear to be specific genes. But it's not guaranteed that the closest gene is actually the one involved in ME/CFS; it could be a more distant gene, or multiple genes. So these appear as genes, but they're actually regions.
We start with the genetic signal for ME/CFS. Candidate genes are nearby, but which of them are playing a part in causing ME? One way of finding out is to see if gene activity changes in people with ME.

Let's take a genetic signal for ME/CFS with a particular known variant. We want to know if a nearby gene behaves differently in people who have that variant. If there is no difference, the gene probably isn't doing anything relevant.

There is a large public database (GTEx) that has such data: it shows the expression of each gene in people with different variants. So the method is to find the variant associated with ME/CFS, then look to see whether nearby genes show different levels of activity in people with that variant.
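To make the logic concrete, here is a minimal sketch of that kind of lookup. All the numbers are invented for illustration, and this is not how GTEx itself is queried; it just shows the core idea of comparing a gene's expression across genotype groups (an expression quantitative trait locus, or eQTL, test):

```python
from statistics import mean

# Hypothetical expression values (arbitrary units) for a gene near an
# ME/CFS-associated variant, grouped by how many copies of the variant
# allele each person carries (0, 1, or 2). All numbers are made up.
expression_by_genotype = {
    0: [10.1, 9.8, 10.4, 10.0],
    1: [8.9, 9.2, 8.7, 9.0],
    2: [7.8, 8.1, 7.9],
}

# If mean expression shifts steadily with allele count, the variant may
# regulate this gene (an eQTL); a flat profile across the three groups
# would suggest this gene is not the one the signal acts through.
means = {g: round(mean(vals), 2) for g, vals in expression_by_genotype.items()}
print(means)
```

In this made-up example, expression falls as the allele count rises, which is the kind of dose-dependent pattern that would make a nearby gene a stronger candidate. Real eQTL analyses also need a statistical test and multiple-testing correction, not just a comparison of means.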
Thanks. I think I get the general idea now of how you get from 43 candidate genes to 29 priority candidates to 8 ‘headline’ candidates.

One remaining question, since those decisions on how to prioritise genes are based on existing knowledge about those genes, how reliable or complete is that knowledge?

Could you end up prioritising one candidate gene over another simply because one has been studied more, especially with respect to the areas we already think are relevant to ME?

In other words, how big is the risk here of falling into the trap of looking under the streetlight?
 

Presumably it won't have been worth their while starting funding applications until the initial DecodeME findings were public, and they might hold fire until more sharp pairs of eyes have looked at them. They'll want to make the best case they can, and that means understanding what we've already got.
 