Developing a blood cell-based diagnostic test for ME/CFS using peripheral blood mononuclear cells, 2023, Xu, Morten et al

Discussion in 'ME/CFS research' started by Andy, Mar 20, 2023.

  1. duncan

    duncan Senior Member (Voting Rights)

    Messages:
    1,628
    Last edited: Sep 29, 2023
  2. Sly Saint

    Sly Saint Senior Member (Voting Rights)

    Messages:
    9,626
    Location:
    UK
    First ever diagnostic test for chronic fatigue syndrome sparks hope - Advanced Science News

    https://www.advancedsciencenews.com...est-for-chronic-fatigue-syndrome-sparks-hope/
     
  3. DMissa

    DMissa Senior Member (Voting Rights)

    Messages:
    108
    Location:
    Australia
    I appreciate the reply. Please understand that I do not criticise your expertise nor your intentions. I apologise if it sounded like I was accusing you of herding people into things; I instead meant to say that comments in isolation (as they will appear to a casual reader) will likely be considered in isolation.

    I understand your long-running and thoughtful contributions to the forum and I appreciate them. The point was mostly just that not everybody who stumbles upon this will have the benefit of having read all of those contributions, nor in order. So each time a point is raised from a position of expertise, it needs to be communicated with some detail to avoid people misconstruing it.

    In any case I'm not trying to lecture or rant away here, just clarify my previous comment's intention. I hope it comes across well. As always I appreciate and value our interactions and learn something from your wealth of experience each time.

    I'm particularly glad that this has sparked some great discussion around not only the value of a diagnostic marker but also the utility of its mechanistic relevance. It needed to be addressed, I think.

    I think another thing to mention here is that clinical judgement *is* lacking in so many cases where clinicians are not properly informed about the disease, which is much of where my contention is coming from. So that's why I'm not sold on clinical judgement as a counterargument. With better education and policy, sure.

    As a general comment, I will also defend the apparent lack of impact of much modern science compared with work done 50 years ago by saying that I think all of the low-hanging fruit has long been exhausted. I do not mean this in a snarky way. It's just harder to find transformationally new things when so much foundational work has already been done, and done so well.

    I should also say that my entire research philosophy and efforts are centred around mechanism, so I get it. And a biomarker associated with mechanism is undeniably so much more useful than one without. I'm just thinking in terms of a potentially more rapidly reachable stopgap to reduce the skepticism that people face, even if it only partially does so. Knowing what people go through with dismissal or disbelief absolutely crushes my heart. I've experienced it myself, so I understand it completely. We need to start ending it asap. So I remain open-minded to all efforts to do so, even if 90+% of my own work is probably more concerned with mechanism than diagnostics.

    This turned into a disjointed stream of consciousness typed from the lab bench but I hope it makes the contention clear.

    Best, dan
     
    Last edited: Oct 6, 2023
  4. LarsSG

    LarsSG Senior Member (Voting Rights)

    Messages:
    370
    Slides from a recent presentation by Prof Morten, the second half is mostly about this work.

    Near the end there is a slide about their next step on this project:

    This seems like a much more sensible approach than what they did in the paper. Good to see them trying to replicate in multiple centres, though I'm not sure how valuable this is compared to just testing more samples instead (or 40 different samples in each centre). I suspect the results from this paper won't replicate, but maybe they'll find something useful all the same.
     
  5. FMMM1

    FMMM1 Senior Member (Voting Rights)

    Messages:
    2,761
    Good point, e.g. those who participated in the Nath NIH intramural study (let's forget Walitt!) could be tested to see whether they were considered positive (for ME/CFS) on the basis of this (Raman spectroscopy) "test" - ditto Decode ME participants.
     
    Last edited: Mar 17, 2024
  6. forestglip

    forestglip Senior Member (Voting Rights)

    Messages:
    371
    It seems like they did have a test set - 20% of the samples - and this is what the accuracy figures describe:
    [Figure 5 from the paper: confusion matrices for the classification tasks]

    The figures above are called confusion matrices (good chance @chillier you already know what this is if you are proficient in R, describing for others).

    The squares on the diagonal running from bottom left to top right are correct guesses by the model on never-before-seen data. All other squares are incorrect guesses. The darker a square, the more samples the model guessed that way.

    The diagonal is much darker (and the numbers for correct guesses higher) in both tasks (separating into HC, MS, and ME, as well as separating HC, MS, and three severities of ME).

    So the accuracy seems very promising here on unseen samples.
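    To make the idea concrete, here is a minimal sketch in Python of how a confusion matrix and the accuracy figure are computed (the paper's own analysis pipeline is not shown here; the labels and predictions below are invented toy data, not results from the study):

    ```python
    # Toy example: rows of the matrix = true class, columns = predicted class.
    # "HC", "MS", "ME" stand in for the paper's groups; the data is made up.
    labels = ["HC", "MS", "ME"]
    y_true = ["HC", "HC", "MS", "MS", "ME", "ME", "ME", "HC"]
    y_pred = ["HC", "MS", "MS", "MS", "ME", "ME", "HC", "HC"]

    idx = {lab: i for i, lab in enumerate(labels)}
    cm = [[0] * len(labels) for _ in labels]
    for t, p in zip(y_true, y_pred):
        cm[idx[t]][idx[p]] += 1  # count each (true, predicted) pair

    # Correct guesses sit on the diagonal; accuracy = diagonal sum / total.
    correct = sum(cm[i][i] for i in range(len(labels)))
    accuracy = correct / len(y_true)
    ```

    The darker diagonal in the paper's figure corresponds to these diagonal counts being large relative to the off-diagonal cells.
    
    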

    Though it's also possible to "train to the test set": if they routinely tested the model on this set and tweaked it until the results on the test set looked good, the reported accuracy could be inflated, potentially by chance. Ideally, the test set should only be used once, at the very end, to verify the model. I don't know if they did this.

    Edit: Actually, reading through more comments, I'm not as excited about the accuracy after learning the test set isn't completely separate people from the training set.
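    The distinction matters because each person contributes many cells (spectra). If the 80/20 split is done over cells, cells from the same person can land in both the training and test sets, and the model may partly be recognising individuals rather than disease. A minimal sketch of a leak-free split, done by individual rather than by cell (all names and numbers here are illustrative, not the paper's):

    ```python
    import random

    # Hypothetical data: 10 people, 5 spectra (cells) each.
    # The key point: a train/test split must keep all of one person's
    # cells on the same side, otherwise information leaks between sets.
    cells = [(person, f"spectrum_{person}_{i}")
             for person in range(10) for i in range(5)]

    random.seed(0)
    people = list(range(10))
    random.shuffle(people)
    test_people = set(people[:2])  # hold out 20% of individuals, not 20% of cells

    train = [c for c in cells if c[0] not in test_people]
    test = [c for c in cells if c[0] in test_people]

    # No individual contributes cells to both sets:
    assert not ({p for p, _ in train} & {p for p, _ in test})
    ```

    With a per-cell split instead, the same person would almost certainly appear on both sides, and test accuracy would overstate how the model performs on genuinely new patients.
    
    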
     
    Last edited: May 29, 2024
  7. forestglip

    forestglip Senior Member (Voting Rights)

    Messages:
    371
    I asked the Morten Group about this on Twitter:

    https://twitter.com/user/status/1796166890789667137


    Though I'm not too hopeful they'll respond as they didn't respond to a similar tweet about their test set methodology from over a year ago:

    https://twitter.com/user/status/1644410334097309696
     
  8. NelliePledge

    NelliePledge Moderator Staff Member

    Messages:
    13,773
    Location:
    UK West Midlands
  9. Yann04

    Yann04 Senior Member (Voting Rights)

    Messages:
    534
    Location:
    Switzerland (Romandie)
    Text: Raman PBMC validation study. We have now collected data on samples from 86 individuals: 35 Healthy, 24 Mild/Moderate and 27 MS. Around 15,000 Raman spectra. Just putting the funding in place for the analysis.
     
  10. forestglip

    forestglip Senior Member (Voting Rights)

    Messages:
    371
    They posted this:

    https://twitter.com/user/status/1803670724780904601


    (I accidentally deleted the question they are responding to. Basically asking if they split their original study by individuals or simply by cells.)

    As I say in my response, I think I may have misinterpreted what they meant by "samples" in the paper, where they said "the train and test sets contained a balanced number of samples from five groups of MS, Severe ME, Moderate ME, Mild ME, and HC"; I had thought "samples" meant individual cells.

    But this seems to say samples refer to people, which I missed:
    They also said:

    https://twitter.com/user/status/1803671399120195685


    So we'll get a better idea of the power of the original model since they'll test on a totally new group.
     
    Last edited: Jun 20, 2024