AI resources related to research or healthcare

Jenny TipsforME

Senior Member (Voting Rights)
I’m getting quite obsessed with the AI Revolution stuff and thought it could be worth starting a thread to share resources that could be applied to ME in some way. I know some of the ME researchers do keep up with the forum. It might also be that others on the forum have the skills to take existing research data and analyse it differently, now that open-source AI tools are available?

The first is https://owkin.com/substra/

Substra is a ready-to-use, open source federated learning (FL) software developed by Owkin, now hosted by the Linux Foundation for AI and Data. Substra enables the training and validation of machine learning models on distributed datasets. It includes a flexible Python interface and a web application to run federated learning training at scale. Academic research centers and biopharma companies deploy Substra (formerly Owkin Connect) in a wide variety of federated learning settings for clinical research, drug discovery and development.
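
In case it helps to picture what federated learning actually does: each hospital trains on its own data locally and only the model updates travel, never the patient-level data. The snippet below is just an illustrative federated-averaging toy in numpy, not Substra's actual Python API, and all the names in it are made up.

```python
# Toy sketch of the federated-averaging idea behind tools like Substra:
# each site trains locally and only model weights are shared with a server.
# This is NOT Substra's API; it's a minimal illustration with numpy.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's local training: a few epochs of logistic-regression gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)      # logistic-loss gradient
        w -= lr * grad
    return w

def federated_average(site_weights, site_sizes):
    """Server step: average each site's model, weighted by its number of samples."""
    total = sum(site_sizes)
    return sum(w * (n / total) for w, n in zip(site_weights, site_sizes))

# Three 'hospitals', each with its own private (random placeholder) dataset.
rng = np.random.default_rng(0)
sites = [(rng.normal(size=(50, 4)), rng.integers(0, 2, 50)) for _ in range(3)]

global_w = np.zeros(4)
for round_num in range(10):                    # federated rounds
    updates = [local_update(global_w, X, y) for X, y in sites]
    global_w = federated_average(updates, [len(y) for _, y in sites])

print("Global model weights after 10 rounds:", global_w)
```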
 
https://glass.health/ai/


Welcome to Glass.
We've built the first digital notebook designed for the way doctors think, learn and practice.

Glass AI 2.0 combines a large language model (LLM) with a clinical knowledge database, created and maintained by clinicians, to generate differential diagnosis (DDx) and Clinical Plan outputs.

Glass has a Community Library of medical knowledge content.

This product is not intended for use by members of the general public for medical diagnoses or other purposes. Glass is available for use by clinicians, including physicians, pharmacists, nurse practitioners, physician assistants, and clinicians-in-training.

I’m not a medic so I didn’t go further.
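
For anyone curious about the general pattern Glass describes (an LLM answering against a curated, clinician-maintained knowledge base), here is a tiny generic sketch of the retrieval step. It is not Glass's internals, which aren't public; the knowledge snippets and query are made up.

```python
# Generic illustration of grounding an LLM prompt in a curated knowledge base.
# How Glass actually works internally is not public; everything here is a placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

knowledge_base = [
    "Orthostatic intolerance: consider a NASA lean test or tilt-table testing.",
    "Post-exertional malaise is a core feature of ME/CFS.",
    "Iron-deficiency anaemia: check ferritin and full blood count.",
]

query = "Patient reports dizziness on standing and worsening after exertion."

# Retrieve the most relevant entries; these would be placed in the LLM prompt
# alongside the clinical question, so answers stay grounded in curated content.
vectorizer = TfidfVectorizer()
kb_vectors = vectorizer.fit_transform(knowledge_base)
scores = cosine_similarity(vectorizer.transform([query]), kb_vectors)[0]
top = sorted(zip(scores, knowledge_base), reverse=True)[:2]

prompt = "Context:\n" + "\n".join(text for _, text in top) + "\n\nQuestion: " + query
print(prompt)  # this prompt would then be sent to the LLM
```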
 
Possibly of relevance to research around POTS:

Artificial intelligence-based model to classify cardiac functions from chest radiographs: a multi-institutional, retrospective model development and validation study (2023, The Lancet Digital Health)

Background
Chest radiography is a common and widely available examination. Although cardiovascular structures, such as cardiac shadows and vessels, are visible on chest radiographs, the ability of these radiographs to estimate cardiac function and valvular disease is poorly understood. Using datasets from multiple institutions, we aimed to develop and validate a deep-learning model to simultaneously detect valvular disease and cardiac functions from chest radiographs.

Findings
We included 22 551 radiographs associated with 22 551 echocardiograms obtained from 16 946 patients. The external test dataset featured 3311 radiographs from 2617 patients with a mean age of 72 years [SD 15], of whom 49·8% were male and 50·2% were female. For this dataset, the AUC (95% CI), accuracy, sensitivity, and specificity were:
- Left ventricular ejection fraction at a 40% cutoff: 0·92 (0·90–0·95), 86% (85–87), 82% (75–87), 86% (85–88)
- Tricuspid regurgitant velocity at a 2·8 m/s cutoff: 0·85 (0·83–0·87), 75% (73–76), 83% (80–87), 73% (71–75)
- Mitral regurgitation (none-mild versus moderate-severe cutoff): 0·89 (0·86–0·92), 85% (84–86), 82% (76–87), 85% (84–86)
- Aortic stenosis: 0·83 (0·78–0·88), 73% (71–74), 79% (69–87), 72% (71–74)
- Aortic regurgitation: 0·83 (0·79–0·87), 68% (67–70), 88% (81–92), 67% (66–69)
- Mitral stenosis: 0·86 (0·67–1·00), 90% (89–91), 83% (36–100), 90% (89–91)
- Tricuspid regurgitation: 0·92 (0·89–0·94), 83% (82–85), 87% (83–91), 83% (82–84)
- Pulmonary regurgitation: 0·86 (0·82–0·90), 69% (68–71), 91% (84–95), 68% (67–70)
- Inferior vena cava dilation: 0·85 (0·81–0·89), 86% (85–88), 73% (65–81), 87% (86–88)

Interpretation
The deep learning-based model can accurately classify cardiac functions and valvular heart diseases using information from digital chest radiographs. This model can classify values typically obtained from echocardiography in a fraction of the time, with low system requirements and the potential to be continuously available in areas where echocardiography specialists are scarce or absent.
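
The paper's actual model and code aren't reproduced here, but for anyone wondering what this kind of approach looks like in practice, below is a generic sketch of fine-tuning a pretrained image network on chest radiographs for one binary echo-derived label (for example LVEF below 40%) using PyTorch/torchvision. The file names and labels are placeholders, and the study's real architecture and training details differ.

```python
# Rough sketch of fine-tuning a pretrained CNN on chest radiographs for a binary
# echocardiography-derived label (e.g. LVEF < 40%). Not the Lancet study's code;
# paths and labels below are placeholders.
import torch
import torch.nn as nn
from torchvision import models, transforms
from torch.utils.data import DataLoader, Dataset
from PIL import Image

class CxrDataset(Dataset):
    """Pairs each radiograph file with a 0/1 label taken from echocardiography."""
    def __init__(self, items):
        self.items = items  # list of (image_path, label) tuples
        self.tf = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.Grayscale(num_output_channels=3),  # CXRs are greyscale
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        path, label = self.items[idx]
        return self.tf(Image.open(path)), torch.tensor(label, dtype=torch.float32)

# Pretrained ImageNet backbone with a single-logit head for the binary task.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

train_items = [("cxr_0001.png", 0), ("cxr_0002.png", 1)]  # placeholder data
loader = DataLoader(CxrDataset(train_items), batch_size=8, shuffle=True)

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images).squeeze(1), labels)
    loss.backward()
    optimizer.step()
```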
 
Sorry I didn’t keep this thread going. Are there similar threads?

I’m assuming you’re probably using ChatGPT’s scheduled tasks feature or similar to keep up with relevant research these days? I can’t read very much, so I find NotebookLM’s podcast feature useful when there are new ME papers.

Does anyone know whether any group has access to both this tool and ME blood samples:

Mal-ID (Machine Learning for Immunological Diagnosis) combines six machine learning models to analyze millions of immune cell sequences, identifying distinct patterns associated with various diseases[1][2]. This groundbreaking approach examines both B-cell and T-cell receptors (BCRs and TCRs), with B cell receptor sequences proving most effective for detecting HIV and SARS-CoV-2 infections, while T cell receptor sequences provide better insights into lupus and Type 1 diabetes[3]. The tool's combined analysis improves diagnostic accuracy across all conditions, regardless of patient demographics, and can even detect recent flu vaccinations[1][4].

Sources
I got information from https://www.perplexity.ai/page/ai-tool-diagnoses-diabetes-hiv-60yD.7CfT9OBJzcTZ.LYMw which cites:

[1] Disease diagnostics using machine learning of B cell and T cell receptor sequences https://www.science.org/doi/10.1126/science.adp2407
[2] Disease diagnostics using machine learning of B cell and ... - Science https://www.science.org/doi/abs/10.1126/science.adp2407
[3] Immune ‘fingerprints’ aid diagnosis of complex diseases in Stanford Medicine study https://med.stanford.edu/news/all-news/2025/02/immune-cell-receptors-complex-disease.html
[4] AI tool diagnoses diabetes, HIV and COVID from a blood sample https://www.linkedin.com/posts/dr-e...v-and-covid-activity-7298783474461605889-ubLc

The innovative "one-shot sequencing method" employed by Mal-ID captures comprehensive immune system exposures, providing a holistic view of an individual's health status. This approach allows for the simultaneous assessment of multiple diseases through a single blood test, streamlining the diagnostic process[1][2]. By analyzing millions of immune cell sequences, the system can detect subtle patterns indicative of various conditions, offering a more nuanced understanding of a patient's immune response[3]. This method's ability to provide a unified immune system analysis represents a significant advancement in diagnostic medicine, potentially reducing the time and resources required for accurate disease identification[4].

Sources
[1] AI tool diagnoses diabetes, HIV and COVID from a blood sample https://neuron.expert/news/ai-tool-diagnoses-diabetes-hiv-and-covid-from-a-blood-sample/11113/en/
[2] Disease diagnostics using machine learning of B cell and T cell receptor sequences https://www.science.org/doi/10.1126/science.adp2407
[3] Disease diagnostics using machine learning of B cell and ... - Science https://www.science.org/doi/abs/10.1126/science.adp2407
[4] Machine Learning Unlocks Immune System Secrets https://www.insideprecisionmedicine...chine-learning-unlocks-immune-system-secrets/
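
To make the Mal-ID idea a bit more concrete, here is a very rough sketch of the general pattern: one model per receptor type (BCR and TCR), with a meta-classifier blending their outputs. Everything below uses random placeholder data and ordinary logistic regression; the actual pipeline in the Science paper combines six models built on sequence-derived features.

```python
# Very rough sketch of Mal-ID-style ensembling: separate models on B cell and
# T cell receptor repertoire features, blended by a meta-classifier.
# All data here are random placeholders, not real immune sequencing features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_patients = 200
X_bcr = rng.normal(size=(n_patients, 30))   # placeholder BCR repertoire features
X_tcr = rng.normal(size=(n_patients, 30))   # placeholder TCR repertoire features
y = rng.integers(0, 2, n_patients)          # placeholder disease labels

X_bcr_tr, X_bcr_te, X_tcr_tr, X_tcr_te, y_tr, y_te = train_test_split(
    X_bcr, X_tcr, y, test_size=0.3, random_state=0)

# One model per receptor type, each trained only on its own features.
bcr_model = LogisticRegression(max_iter=1000).fit(X_bcr_tr, y_tr)
tcr_model = LogisticRegression(max_iter=1000).fit(X_tcr_tr, y_tr)

def blend_features(Xb, Xt):
    """Stack the two models' disease probabilities as inputs for the meta-model."""
    return np.column_stack([
        bcr_model.predict_proba(Xb)[:, 1],
        tcr_model.predict_proba(Xt)[:, 1],
    ])

# A real pipeline would use out-of-fold predictions here to avoid leakage.
meta = LogisticRegression().fit(blend_features(X_bcr_tr, X_tcr_tr), y_tr)
print("Held-out accuracy on placeholder data:",
      meta.score(blend_features(X_bcr_te, X_tcr_te), y_te))
```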
 
Thanks for suggesting the NotebookLM podcast feature for accessing research studies. I've found that particular feature very useful for other things, but hadn't thought of using it on research papers.

I've found Gemini very useful when writing letters etc.: I can put in a rough paragraph and it comes up with something coherent for me. I was looking at a long list of free-text patient feedback yesterday; I asked it to identify positive and negative themes, and it did so in a second or two. Obviously the responses need to be sanity-checked, but in the last six months or so the AI available without a subscription has improved enormously.
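
For anyone who would rather script that kind of feedback triage than paste text into the web app, a minimal sketch with Google's Gemini API (the google-generativeai Python package) might look like the following. The model name, prompt and example feedback lines are just assumptions, and the output still needs the same sanity checking.

```python
# Minimal sketch of scripting theme extraction from free-text patient feedback
# using Google's Gemini API. Model name, prompt and feedback are placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")              # placeholder credential
model = genai.GenerativeModel("gemini-1.5-flash")    # assumed model name

feedback = [
    "The clinic staff listened and adjusted my appointment times.",
    "I was left waiting for two hours with no explanation.",
]

prompt = (
    "Here is free-text patient feedback, one comment per line.\n"
    "List the main positive themes and the main negative themes separately:\n\n"
    + "\n".join(feedback)
)

response = model.generate_content(prompt)
print(response.text)                                  # themes, to be sanity-checked
```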
 