Webinar 2pm today (Friday 6 June 2025): Genetics Centre of Excellence (Edinburgh Ponting lab): update on recent research

I want to understand what's going on in the black box but, equally, I don't understand what's going on in the Zhang black box!

To an extent I agree on that. My confidence in Zhang having picked up some real signals comes partly from the presentation being a tad more transparent and partly from the fact that they threw a load of incomprehensible gene codes at us which we discovered for ourselves were rather interesting. PL are telling us that they found special genes that explained symptoms in each subgroup - which, I'd guess, would be quite extraordinary if true.
 
To an extent I agree on that. My confidence in Zhang having picked up some real signals comes partly from the presentation being a tad more transparent and partly from the fact that they threw a load of incomprehensible gene codes at us which we discovered for ourselves were rather interesting.

People seemed to have reservations about the machine-learning approach in the Zhang paper, and I worry that we're trying to figure out whether the methodology is any good by seeing whether we can make sense of the results. I'm concerned about confirmation bias. If DecodeME confirming the results would confirm the methodology as good, and DecodeME not confirming the results would confirm the methodology as bad, surely the Zhang paper provides no information?
 
surely the Zhang paper provides no information?

I think one can have confidence in a set of results from a black box based on the pattern of those results.

Let us say that a friend calls to say that he has a new bird song app and you are initially a bit sceptical about how good it is. You tell him to go outside and turn it on. It reports the presence of wood warbler, pied flycatcher, raven and common redstart but not house sparrow, reed bunting or Cetti's warbler. You are already pretty confident it is good because no randomly inaccurate app would pick up those four birds - which are all quite restricted to the same habitat - and not some lemons.

The problem with Zhang seems to be that it may be using machine learning or AI to tell itself that if it hears a pied flycatcher then that other noise is very likely wood warbler. If that had a big effect I would worry. But I still find it difficult to see how it is going to come up with the list it does unless at least some of them are bona fide.
 
I did see it as positive that their results seemed both to overlap with and to differ in sensible ways from the new LC cohorts they sourced, and to replicate in a different ME/CFS cohort in the US, and I thought the collaboration with Metrodora sounded interesting. I understand the black box reservations, but a clustered phenotype set like this does seem to account for the heterogeneous nature and inconsistent results that we see.

They seemed extremely confident to me, much more so than the last time I saw them, so I am very much hoping that means they have undeclared positive results from robust additional analysis that they can't yet share as it's not written up.

I was less excited about the trial results so far, especially as the UK was not a priority due to its more difficult regulatory environment. That seemed a big leap, though I understand the commercial challenges limited transparency about that part of the research process.

I agree an exercise with an unrelated disease would be even more confidence-inspiring about the methodology. Can't wait to see what happens next.
 
If DecodeME confirming the results would confirm the methodology as good, and DecodeME not confirming the results would confirm the methodology as bad, surely the Zhang paper provides no information?

Is it possible for them both to have found meaningful information, even if it differs substantially?

It could turn out one technique just spotted bits of the jigsaw that we can't place until we can see the picture better.
 
I think @Sasha is right, and I’m not sure the black box itself is the communication issue. Do most of us really understand all the details of what Audrey is doing? It’s more about how well they communicated what they’re doing.

- This is the question we set out to answer and why we chose it
- Here’s how we went about doing so
- These are our results
- Here’s what we plan on doing next

That’s what Audrey did, and very well. But I still don’t really know what message PrecisionLife were trying to convey.
 
The problem with Zhang seems to be that it may be using machine learning or AI to tell itself that if it hears a pied flycatcher then that other noise is very likely wood warbler. If that had a big effect I would worry. But I still find it difficult to see how it is going to come up with the list it does unless at least some of them are bona fide.
My worry is that it's doing something self-reinforcing like that, and that a sensible-looking pattern would come out regardless of what was fed into it. I don't understand the method well enough to know whether my fear is reasonable.
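
To put a rough number on that fear - a toy Bayesian sketch with made-up probabilities, nothing from the actual paper: if a sensible-looking gene list is almost as likely to come out of a self-reinforcing pipeline as out of a sound one, then seeing a sensible-looking list barely shifts how much we should trust the method.

```python
# Toy Bayesian update: how much does a "sensible-looking" result tell us
# about whether the black box is sound? All numbers are illustrative
# assumptions, not anything taken from the Zhang paper.

prior_good = 0.5             # prior belief that the method is sound (assumption)
p_sensible_if_good = 0.9     # chance of a sensible-looking list if it is sound (assumption)
p_sensible_if_bad = 0.8      # chance of one anyway, via self-reinforcement (assumption)

# Bayes' rule
evidence = p_sensible_if_good * prior_good + p_sensible_if_bad * (1 - prior_good)
posterior_good = p_sensible_if_good * prior_good / evidence

print(f"Prior that the method is sound:    {prior_good:.2f}")      # 0.50
print(f"Posterior after a sensible result: {posterior_good:.2f}")  # ~0.53 - barely moves
```

The whole argument turns on how different those two likelihoods really are, which is exactly what we can't judge without seeing inside the method.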
 
That’s what Audrey did, and very well. But I still don’t really know what message PrecisionLife were trying to convey.

Audrey did standard science (cases vs controls, but with fancy tech), whereas PL did something that I think we're all unfamiliar with and don't understand. Unless they explain their methods, I don't know how they're going to convince us that we can trust the results. I understand that there are commercial issues here, but I think that's presenting a real problem.
 
Really interesting so far. Looks like Audrey has failed to reproduce this. No difference between people with ME/CFS and healthy controls. Paper submitted and should be live very soon!
I didn't hear the talk, but understand that she and @chillier attempted to replicate the most robust/replicable 'something in the blood' finding, and could not

Negative replication results are so important because they stop the field from having endless possible theories. Culling things that don't work out helps clear some paths so people can focus on more promising areas.
 
I didn't hear the talk, but understand that she and @chillier attempted to replicate the most robust/replicable 'something in the blood' finding, and could not
Wow, I just put together that chillier is one of the people from your 'something in the blood' post. The username makes sense now; I thought they were just cold.

Someone asked if she falsified the nanoneedle study. She said it's hard to compare her result to Ron Davis's, but that Fatima Labeed and Michael Hughes are attempting to replicate that.
 
I didn't hear the talk, but understand that she and @chillier attempted to replicate the most robust/replicable 'something in the blood' finding, and could not

Negative replication results are so important because they stop the field from having endless possible theories. Culling things that don't work out helps clear some paths so people can focus on more promising areas.

Thanks Simon :) We submitted the preprint to bioRxiv earlier in the week so hopefully it'll be out soon. Maybe Monday?

EDIT: As if on cue! It came out at the same time I made this comment.
 
Wow, I just put together that chillier is one of the people from your 'something in the blood' post. The username makes sense now; I thought they were just cold.

Someone asked if she falsified the nanoneedle study. She said it's hard to compare her result to Ron Davis's, but that Fatima Labeed and Michael Hughes are attempting to replicate that.

haha :D one of my ME symptoms is terrible thermoregulation so both are true!
 
Also appreciated Ryback's clarity, transparency and fastidious approach, even if the lack of replication is a little disappointing. But as was said, it shows us where not to look.
Negative replication results are so important because they stop the field from having endless possible theories. Culling things that don't work out helps clear some paths so people can focus on more promising areas.
This.

Am I being too critical?
No.
 
Negative replication results are so important because they stop the field from having endless possible theories. Culling things that don't work out helps clear some paths so people can focus on more promising areas.
Absolutely! I was very pleased to hear clear results and am excited to go through the paper.
haha :D one of my ME symptoms is terrible thermoregulation so both are true!
Same, both on the thermoregulation and, like @forestglip, on not realising who you were. Congratulations on the paper!
 
I find it hard to see how you can divide patients up into twenty-odd groups, each conveniently with a gene telling you what caused their brand of disease. Especially when previous experience indicates that with this number of subjects you are unlikely to find anything reliable in the genetics.
I find that surprising, even though the combinatorial method, in theory, is able to find true results from much smaller samples.
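
For a rough sense of scale - toy numbers only, not PrecisionLife's actual parameters or pipeline - the search space in any combinatorial approach is enormous, which is part of why reliability with modest sample sizes worries me:

```python
# Back-of-the-envelope scale of a combinatorial SNP search.
# Toy numbers, not PrecisionLife's actual pipeline or parameters.
from math import comb

n_snps = 100_000      # candidate SNPs after filtering (assumption)
combo_size = 3        # search for disease-associated 3-SNP combinations (assumption)

n_combinations = comb(n_snps, combo_size)
print(f"{n_combinations:.2e} possible {combo_size}-SNP combinations")  # ~1.67e+14

# A naive Bonferroni correction at alpha = 0.05 across that many tests
# would demand a per-combination p-value threshold of roughly:
alpha = 0.05
print(f"threshold ~ {alpha / n_combinations:.1e}")                     # ~3.0e-16
```

However the real method handles that multiple-testing burden, the sheer number of candidate combinations is why I want to see replication before believing any particular subgroup gene.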

The big problem I see in saying that a gene or genes explain the symptoms of any subgroup is that these are SNPs. SNPs are common variants, which indicates they are not generally harmful. Almost all of them do not change proteins themselves, but subtly change how the genes for those proteins are regulated. Typically these SNPs increase the risk of ME/CFS by less than 10%. When you consider that the risk of someone in the population is very low – under 1 percent – these are subtle effects. Could they explain the symptoms of everyone in a subgroup?
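
As a back-of-the-envelope illustration - the 0.4% prevalence and 10% risk increase below are round assumed figures, not numbers from the presentation:

```python
# Rough arithmetic on how small a typical risk-SNP effect is.
# Both input numbers are illustrative assumptions, not figures from the talk.
baseline_prevalence = 0.004   # assume ~0.4% of the population develops ME/CFS
relative_increase = 1.10      # assume the SNP raises that risk by ~10%

risk_with_variant = baseline_prevalence * relative_increase
absolute_increase = risk_with_variant - baseline_prevalence

print(f"Risk without the variant: {baseline_prevalence:.3%}")  # 0.400%
print(f"Risk with the variant:    {risk_with_variant:.3%}")    # 0.440%
print(f"Absolute difference:      {absolute_increase:.3%}")    # 0.040%
```

On those assumed numbers, carrying the variant shifts someone's absolute risk by only a few hundredths of a percentage point, which is why I struggle to see such variants, on their own, explaining the symptoms of everyone in a subgroup.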

I very much hope that PrecisionLife are making big strides forward. But like others, I find it hard to judge the results from what they were able to provide. Obviously, they are in a difficult situation as they're being funded by investors, who probably don't want to hear about the niceties of experiments and bumps along the road.

I hoped they would report results from their micro clinical trials with Metrodora, but it sounds like they haven't been able to share anything on those.

Science is hard, and it's fantastic that those developing new technologies – and investors – are taking an interest in our illness. But I'm not sure how much we're going to learn about their progress until they're able to share something definitive.
 
The big problem I see in saying that a gene or genes explain the symptoms of any subgroup is that these are SNPs. SNPs are common variants, which indicates they are not generally harmful. Almost all of them do not change proteins themselves, but subtly change how the genes for those proteins are regulated. Typically these SNPs increase the risk of ME/CFS by less than 10%. When you consider that the risk of someone in the population is very low – under 1 percent – these are subtle effects. Could they explain the symptoms of everyone in a subgroup?
Thanks for the really clear explanation and grounding of something that was just a feeling for me.

I think, like you, I’m really pleased they’re involved. But they will need to find a better way of telling their story and communicating with patients, especially as they seem focused on drug trials and that is going to require patient trust.
 
I hoped they would report results from their micro clinical trials with Metrodora, but it sounds like they haven't been able to share anything on those

Agree with everything you say in your post, but wanted to ask if you had heard whether these trials were already happening? I swear Sayoni Das said, in a presentation I saw once, that they would start after they had the locome results.
 
Obviously, they are in a difficult situation as they're being funded by investors, who probably don't want to hear about the niceties of experiments and bumps along the road.
Yeah. Given that. And their whole marketing thing is “personalised medicine”. Like this is the second thing they write on their front page:

Our unique ability to stratify patients by the molecular mechanisms of their disease enables us to accurately map patients to the targets, trials, and treatments that will be of greatest benefit to them.

With our understanding of causal biology, we support our partners to eliminate risk from drug development and maximize healthcare efficiency.
 