Multi-centre classification of functional neurological disorders based on resting-state functional connectivity, 2022, Weber et al.

Discussion in 'Other psychosomatic news and research' started by Andy, Jun 26, 2022.

  1. Andy

    Andy Committee Member

    Messages:
    22,399
    Location:
    Hampshire, UK
    Highlights

    • Using machine learning on multi-centre data, FND patients were successfully classified with an accuracy of 72%.

    • The angular- and supramarginal gyri, cingular- and insular cortex, and the hippocampus were the most discriminant regions.

    • To provide diagnostic utility, future studies must include patients with similar symptoms but different diagnoses.

    Abstract

    Background

    Patients suffering from functional neurological disorder (FND) experience disabling neurological symptoms not caused by an underlying classical neurological disease (such as stroke or multiple sclerosis). The diagnosis is made based on reliable positive clinical signs, but clinicians often require additional time- and cost-consuming medical tests and examinations. Resting-state functional connectivity (RS FC) showed its potential as an imaging-based adjunctive biomarker to help distinguish patients from healthy controls and could represent a “rule-in” procedure to assist in the diagnostic process. However, the use of RS FC depends on its applicability in a multi-centre setting, which is particularly susceptible to inter-scanner variability. The aim of this study was to test the robustness of a classification approach based on RS FC in a multi-centre setting.

    Methods
    This study aimed to distinguish 86 FND patients from 86 healthy controls acquired in four different centres using a multivariate machine learning approach based on whole-brain resting-state functional connectivity. First, previously published results were replicated in each centre individually (intra-centre cross-validation) and its robustness across inter-scanner variability was assessed by pooling all the data (pooled cross-validation). Second, we evaluated the generalizability of the method by using data from each centre once as a test set, and the data from the remaining centres as a training set (inter-centre cross-validation).
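
    A rough sketch (not the authors' code) of the three validation schemes just described, assuming whole-brain FC feature vectors X, patient/control labels y and a per-subject centre label, with a linear SVM from scikit-learn standing in for the multivariate classifier; the subject count, feature count and classifier choice are placeholders:

    # Sketch only: placeholder data, scikit-learn SVM as a stand-in classifier.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score, LeaveOneGroupOut

    rng = np.random.default_rng(0)
    n_subjects, n_features = 172, 6670               # e.g. upper triangle of a 116x116 FC matrix
    X = rng.normal(size=(n_subjects, n_features))    # placeholder FC features
    y = rng.integers(0, 2, n_subjects)               # 0 = healthy control, 1 = FND patient
    centre = rng.integers(0, 4, n_subjects)          # four acquisition centres

    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))

    # 1) Intra-centre cross-validation: train and test within each centre separately.
    for c in range(4):
        m = centre == c
        print(f"centre {c}:", cross_val_score(clf, X[m], y[m], cv=5).mean())

    # 2) Pooled cross-validation: all centres mixed into one dataset.
    print("pooled:", cross_val_score(clf, X, y, cv=5).mean())

    # 3) Inter-centre validation: each centre serves once as the test set,
    #    the remaining centres as the training set (leave-one-group-out).
    logo = LeaveOneGroupOut()
    print("inter-centre:", cross_val_score(clf, X, y, groups=centre, cv=logo).mean())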

    Results
    FND patients were successfully distinguished from healthy controls in the replication step (accuracy of 74%) as well as in each individual additional centre (accuracies of 73%, 71% and 70%). The pooled cross-validation confirmed that the classifier was robust with an accuracy of 72%. The results survived post-hoc adjustment for anxiety, depression, psychotropic medication intake, and symptom severity. The most discriminant features involved the angular- and supramarginal gyri, sensorimotor cortex, cingular- and insular cortex, and hippocampal regions. The inter-centre validation step did not exceed chance level (accuracy below 50%).

    Conclusions
    The results demonstrate the applicability of RS FC to correctly distinguish FND patients from healthy controls in different centres and its robustness against inter-scanner variability. In order to generalize its use across different centres and aim for clinical application, future studies should work towards optimization of acquisition parameters and include neurological and psychiatric control groups presenting with similar symptoms.

    Open access, https://www.sciencedirect.com/science/article/pii/S2213158222001553
     
  2. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,001
    Location:
    Canada
    The thing about machine learning is that the system "learns" all the biases that are present in the data.

    So if you feed it biased data that makes a pattern emerge where there is none, the neural network will "see" that pattern, for the same reason that if you swap the pictures of dogs and cats in the training data, it will not only see dogs where the cats are but see something close to a dog in any feline species.

    Basically, if you train a neural network to look for signs of alien civilizations in low-resolution satellite images, it will "see" those patterns, whether it's "canals" or faces.

    And in the end, all it really is is low resolution and lossy compression artifacts.

    There is no "face" on Mars. Some people did "see" it, though. Because they wanted to. And training a neural network in those conditions? It would see so many of those patterns you could start selling "alien-adjacent" real estate on Mars.

    [Attached image: face-on-mars-low-res.jpg, the low-resolution "face on Mars"]
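
    A toy sketch of that point (not from the paper): when a nuisance signal, here a made-up per-scanner offset, happens to correlate with the labels, a classifier will "learn" it even though there is no real class difference, and the shortcut disappears once it is tested on a scanner it has never seen:

    # Toy illustration: pure-noise features plus a per-scanner offset that is
    # accidentally correlated with the labels. All names and numbers are made up.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score, LeaveOneGroupOut

    rng = np.random.default_rng(1)
    n, d = 200, 500
    site = rng.integers(0, 2, n)                        # two hypothetical scanners
    # Labels carry no biological signal, but they track the scanner ~80% of the time.
    y = np.where(rng.random(n) < 0.8, site, 1 - site)
    X = rng.normal(size=(n, d)) + 0.5 * site[:, None]   # noise + per-scanner offset

    clf = SVC(kernel="linear")

    # Mixed cross-validation: the scanner offset leaks into every fold, so the
    # classifier "finds" a pattern well above chance despite the noise-only features.
    print("mixed CV:", cross_val_score(clf, X, y, cv=5).mean())

    # Leave-one-scanner-out: the shortcut is gone and accuracy drops to chance or below.
    logo = LeaveOneGroupOut()
    print("leave-scanner-out:", cross_val_score(clf, X, y, groups=site, cv=logo).mean())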
     
