Rethinking ME/CFS Diagnostic Reference Intervals via Machine Learning & Utility of Activin B for Defining Symptom Severity (2019) Lidbury et al.

John Mac

Abstract
Biomarker discovery applied to myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS), a disabling disease of inconclusive aetiology, has identified several cytokines to potentially fulfil a role as a quantitative blood/serum marker for laboratory diagnosis, with activin B a recent addition.

We explored further the potential of serum activin B as a ME/CFS biomarker, alone and in combination with a range of routine test results obtained from pathology laboratories.

Previous pilot study results showed that activin B was significantly elevated for the ME/CFS participants compared to healthy (control) participants. All the participants were recruited via CFS Discovery and assessed via the Canadian/International Consensus Criteria.

A significant difference for serum activin B was also detected for ME/CFS and control cohorts recruited for this study, but median levels were significantly lower for the ME/CFS cohort.

Random Forest (RF) modelling identified five routine pathology blood test markers that collectively predicted ME/CFS at ≥62% when compared via weighted standing time (WST) severity classes.

A closer analysis revealed that the inclusion of activin B to the panel of pathology markers improved the prediction of mild to moderate ME/CFS cases.

Applying correct WST class prediction from RF modelling, new reference intervals were calculated for activin B and associated pathology markers, where 24-h urinary creatinine clearance, serum urea and serum activin B showed the best potential as diagnostic markers.

While the serum activin B results remained statistically significant for the new participant cohorts, activin B was found to also have utility in enhancing the prediction of symptom severity, as represented by WST class.

https://www.mdpi.com/2075-4418/9/3/79
 
Can anyone explain this seeming contradiction? My brain simply computes this as illogical - what have I misunderstood?

1) A previous study found higher activin B in ME than HC.

2) They changed the test to better detect low levels of activin B.

3) The new study found lower activin B in ME than HC.

The authors say the change in the test could explain the different results. But, based on the first study, shouldn't the more sensitive test have found more low results in the HC, not in ME?

Another brain teaser: several of the results in the tables show the same unintuitive pattern we have recently seen in at least 2 other studies (McGregor and ???), namely that more severely affected patients look more like HC than less severely affected patients. This weird pattern is beginning to look like more than just a coincidence.
 
Which are the “five routine pathology blood test markers that collectively predicted ME/CFS” over half the time?
Does it say?
I think they mean these (but I got rather muddled trying to read this paper so please correct if I'm mistaken):
Activin B was also investigated as a member of a six-marker profile that included 24-h urinary creatinine excretion rate, mean corpuscular haemoglobin (MCH), alkaline phosphatase (ALP), serum urea, and total lymphocyte count.
 
Another brain teaser: several of the results in the tables show the same unintuitive pattern we have recently seen in at least 2 other studies (McGregor and ???), namely that more severely affected patients look more like HC than less severely affected patients. This weird pattern is beginning to look like more than just a coincidence.
Less severely affected still trying to be more active so more likely to be in a rolling PEM state, whereas the more severely affected are forced to pace themselves far more to avoid that state due to how much more it would affect them?
 
Another brain teaser: several of the results in the tables show the same unintuitive pattern we have recently seen in at least 2 other studies (McGregor and ???), namely that more severely affected patients look more like HC than less severely affected patients. This weird pattern is beginning to look like more than just a coincidence.

Wild guess - perhaps the test reflects "stability." Healthy controls and severe patients may both represent "stable" states (one of which is highly impaired). It might be that the patients that fall between these two extremes are "unstable," with the test results somehow reflecting that ongoing struggle. It would be somewhat like Dr. Klimas' bi-stable hypothesis.

1 & 3 are stable, but 2 is caught between forces pulling it in both directions (2 is also impaired, but not as much as 3).

[attached diagram illustrating states 1, 2 and 3]
 
Less severely affected still trying to be more active so more likely to be in a rolling PEM state, whereas the more severely affected are forced to pace themselves far more to avoid that state due to how much more it would affect them?
Wild guess - perhaps the test reflects "stability." Healthy controls and severe patients may both represent "stable" states (one of which is highly impaired). It might be that the patients that fall between these two extremes are "unstable," with the test results somehow reflecting that ongoing struggle.
Either one could conceivably explain why mildly affected patients can look worse on paper than severely affected ones.

More tricky though are McGregor's study results because he was comparing HC to ME without current PEM to ME with current PEM. It was the patients with PEM - which I would consider an unstable state, and an 'unpaced' one if there is such a word - that looked most like HC.

Only one thing's for sure: ME is tricky...

Anyway, back to the current study here: has anyone figured out what's going on with the activin B being down now when it was up in the previous study?
 
I've only skimmed the paper but this worried me:

Pairwise WST classes were analysed per ROC, both for the entire dataset, and for the correctly predicted cases for each WST class (0, 1, 2
or a comparatively small data set (for this study, 97 in total),

When using ML you typically split the data into training, validation and test sets, and report results on the test set, not on the entire dataset. I can't see any information about the relative sizes of the training and test splits, which makes me suspicious that they trained on all the data (which would be bad!). When only a small amount of data exists, n-fold cross-validation is often used instead, but I'm not seeing any mention of that either. That said, I have only skimmed the paper, and they may simply not have thought it important to say how they split their 97 samples into training, validation and test sets.
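To illustrate why this matters: a flexible model evaluated on the same data it was trained on can score near-perfectly even when there is no real signal at all. A minimal sketch (pure standard library; synthetic random data standing in for the study's 97 samples, and a 1-nearest-neighbour classifier standing in for any overfit-prone model — this is not the paper's random forest or data):

```python
import random

random.seed(0)

# Synthetic stand-in for the study's data: 97 samples, 6 numeric
# markers, labels assigned at random (so there is NO real signal).
N, D = 97, 6
X = [[random.gauss(0, 1) for _ in range(D)] for _ in range(N)]
y = [random.randint(0, 1) for _ in range(N)]

def predict_1nn(train_X, train_y, x):
    """1-nearest-neighbour prediction (a deliberately overfit-prone model)."""
    dists = [(sum((a - b) ** 2 for a, b in zip(tx, x)), ty)
             for tx, ty in zip(train_X, train_y)]
    return min(dists)[1]

# (a) Evaluate on the training data itself: each point's nearest
# neighbour is itself, so memorisation gives 100% accuracy.
train_acc = sum(predict_1nn(X, y, x) == t for x, t in zip(X, y)) / N

# (b) Leave-one-out cross-validation: each sample is predicted by a
# model that never saw it, so accuracy drops back to chance level.
hits = 0
for i in range(N):
    tr_X = X[:i] + X[i + 1:]
    tr_y = y[:i] + y[i + 1:]
    hits += predict_1nn(tr_X, tr_y, X[i]) == y[i]
loo_acc = hits / N

print(f"train-set accuracy:     {train_acc:.2f}")  # 1.00 despite random labels
print(f"leave-one-out accuracy: {loo_acc:.2f}")    # near chance (~0.5)
```

Leave-one-out is just the extreme case of n-fold cross-validation (n = number of samples), which is the usual workaround when 97 samples are too few for a held-out test set.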
 
Can anyone explain this seeming contradiction? My brain simply computes this as illogical - what have I misunderstood?

1) A previous study found higher activin B in ME than HC.

2) They changed the test to better detect low levels of activin B.

3) The new study found lower activin B in ME than HC.
Just read the study and can't make much sense of it either.

Also: it seems to be the middle group, the moderately affected, who show the largest differences from healthy controls. In the other two groups the differences were not statistically significant. I don't see much evidence here for a potential biomarker.
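On the ROC analyses the paper leans on: AUC has a handy interpretation as the probability that a randomly chosen patient value ranks above a randomly chosen control value. That makes direction flips like the activin B one easy to spot, since an AUC below 0.5 means the marker points the "wrong" way relative to expectation. A toy sketch with made-up numbers (not the study's data):

```python
def roc_auc(cases, controls):
    """ROC AUC as P(case value > control value), ties counted as 1/2
    (equivalent to the Mann-Whitney U statistic divided by n1 * n2)."""
    wins = 0.0
    for c in cases:
        for h in controls:
            if c > h:
                wins += 1.0
            elif c == h:
                wins += 0.5
    return wins / (len(cases) * len(controls))

# Made-up marker values: uniformly LOWER in patients than in controls,
# mimicking the direction of the activin B result in this study.
patients = [0.8, 1.0, 1.1, 1.2]
controls = [1.3, 1.4, 1.5, 1.6]

print(roc_auc(patients, controls))  # 0.0: perfect separation, inverted direction
print(roc_auc(controls, patients))  # 1.0: same data, direction flipped
```

So a marker can "separate" groups perfectly in either direction; what it can't do is separate in one direction in one cohort and the opposite direction in the next and still be called the same biomarker.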
 
podcast

Dr Brett Lidbury | Rethinking Myalgic Encephalomyelitis/Chronic Fatigue Syndrome Using Machine Learning
Jun 14, 2022
Dr Brett Lidbury from the Australian National University worked with colleagues to utilise machine learning techniques in a new strategy to identify biomarkers that could be used to help diagnose myalgic encephalomyelitis/chronic fatigue syndrome in patients. Their work represents a significant step forward in understanding, diagnosing and treating this challenging condition, particularly in relation to pathology, the results of which form a routine but important part of general health assessment.


transcript also available

https://www.scipod.global/dr-brett-...onic-fatigue-syndrome-using-machine-learning/
 