Artificial intelligence in medicine

Discussion in 'Other health news and research' started by RedFox, Apr 11, 2023.

  1. RedFox

    RedFox Senior Member (Voting Rights)

    Messages:
    1,293
    Location:
    Pennsylvania
    Glass AI--AI that generates differential diagnosis and clinical plans

    Recently I learned about this AI tool, which is currently available for anyone to use without cost or registration.

    Edit: Darn, I forgot the link: https://glass.health/ai

    I'm playing around with it to see how well it understands ME/CFS. It seems to be familiar with it. Here's the DDx it generates for one case I made up:
     
    Last edited by a moderator: Jul 31, 2023
    alktipping, Lisa108, Wyva and 6 others like this.
  2. RedFox

    RedFox Senior Member (Voting Rights)

    Messages:
    1,293
    Location:
    Pennsylvania
    I'm now playing with it extensively.
    It thinks CBT/GET is the answer to everything, apparently.
    I tested its response to being told CBT/GET made someone worse:
    How would it view my case, considering my mental health history, the presentation of my illness upon its onset, and my inability to accurately describe it due to not knowing the vocabulary of ME:
    I'm incredibly angry because this is exactly how all doctors think, and how they treated me until I learned I had ME. No doctor told me I had ME. I figured it out by Googling. Then I sought medical attention again. And the treatment plan says nothing about addressing exercise intolerance.
     
    Ash, alktipping, Lisa108 and 8 others like this.
  3. RedFox

    RedFox Senior Member (Voting Rights)

    Messages:
    1,293
    Location:
    Pennsylvania
    This is horrible. It recommended CBT/GET for someone totally bedridden due to severe ME:
     
    Trish, Lisa108, Hutan and 6 others like this.
  4. glennthefrog

    glennthefrog Established Member (Voting Rights)

    Messages:
    62
    Location:
    ARGENTINA
    I asked ChatGPT (OpenAI's GPT-3.5) what the treatments for ME/CFS were:
    upload_2023-4-10_22-3-49.png

    But then, I asked:
    upload_2023-4-10_22-4-16.png

    I guess its last response is better than nothing, given it's been fed literature on ME indiscriminately, without any form of quality control.
     
    alktipping, Lisa108, Hutan and 5 others like this.
  5. glennthefrog

    glennthefrog Established Member (Voting Rights)

    Messages:
    62
    Location:
    ARGENTINA
    There's more. The good thing about ChatGPT is that it's capable of re-evaluating its conclusions. You could NEVER, EVER have this discussion with a doctor; he would kick you out of his office:
    upload_2023-4-10_22-9-34.png
    and then, finally:
    upload_2023-4-10_22-10-1.png
     
    ukxmrv, alktipping, Lisa108 and 6 others like this.
  6. Sean

    Sean Moderator Staff Member

    Messages:
    8,064
    Location:
    Australia
    And without any understanding of what the symbols it is manipulating actually represent. Which is likely to remain the core problem with AI, at least in its current form and degree.
     
    LJord, ukxmrv, glennthefrog and 7 others like this.
  7. Trish

    Trish Moderator Staff Member

    Messages:
    55,414
    Location:
    UK
    @RedFox I agree it's disappointing to see these things recommending GET/CBT. But I'm not at all surprised that it does, given that most of the literature and most doctors would agree. Unless it is programmed to follow NICE and CDC guidelines, or to 'understand' the flaws that make the research unreliable, it will have no way of 'knowing' any better.
     
    Ash, glennthefrog, alktipping and 4 others like this.
  8. Shadrach Loom

    Shadrach Loom Senior Member (Voting Rights)

    Messages:
    1,053
    Location:
    London, UK
    It’s the core problem with exploiting large language models to predict meaningful outputs. It’s not necessarily the core problem with machine-trained pattern recognition or with trial-and-error brute-force solution design.

    It’s definitely a future problem for AI regulation that many people will see AI as synonymous with things like ChatGPT and DALL·E, though.
     
  9. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,563
    Location:
    UK

    Although alarming, this is a representation of the material it has been trained on, and is perhaps a way of pointing out the failures of the material that exists within the medical community.
     
    Amw66, NelliePledge, ukxmrv and 9 others like this.
  10. Solstice

    Solstice Senior Member (Voting Rights)

    Messages:
    1,216
    I wonder what would happen if you asked it whether recommending CBT/GET is compatible with the Hippocratic oath.
     
  11. Creekside

    Creekside Senior Member (Voting Rights)

    Messages:
    1,218
    Yes, using AI on scientific papers will definitely force a re-evaluation of how papers are judged for acceptance, or force AI trainers to teach AIs how to identify bad papers. Given human nature (reluctance to change), the latter option is probably more likely to succeed.
     
    glennthefrog, Sean, RedFox and 2 others like this.
  12. Solstice

    Solstice Senior Member (Voting Rights)

    Messages:
    1,216
    Imagine if an AI were trained to seek out open-label trials with subjective outcomes, brought them to light and forced them to be retracted. Professions would crumble and patients would be liberated.
     
  13. glennthefrog

    glennthefrog Established Member (Voting Rights)

    Messages:
    62
    Location:
    ARGENTINA
    It's been said that GPT-4 has some cause-and-effect understanding capability, so it doesn't only deal with symbols the way GPT-3 does.
     
    Sean and Peter Trewhitt like this.
  14. Hubris

    Hubris Senior Member (Voting Rights)

    Messages:
    317
    This is not surprising at all. A medical AI is trained on publication data, so it will repeat the same lies doctors tell themselves.

    The proper way to train a medical AI is to have it interact with patients and to give objective data stronger weight. A sort of inquisitive algorithm that tries to figure out the truth.
    This is easier said than done because a lot of the things patients say online do not include objective data at all, and if the AI was interacting with patients directly it wouldn't be able to get enough data to properly train itself. Still, with the current training models an AI will not be useful for an illness that doctors do not understand.
     
    Ash, Sean, rvallee and 1 other person like this.
  15. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,661
    Location:
    Canada
    Yup. This is a good example of how not to apply AI: trying to do the same thing, but with AI.

    AI changes what can be done entirely, a shift as radical as the one from the old paper-and-post mail system to modern instant messaging and social networking. Simply reading from academic/official sources is not the way to do this, and it shows that this AI was limited to the physician perspective and official sources. This is thinking small.

    It will require thinking differently, but more than anything it will open up market forces allowing patients to make choices. A poor platform will not be used by patients, who will prefer a much better one. Even if it's more official, a lousy platform will be ignored in favor of one that massively overperforms human physicians.

    But maybe this is just an early version and it will improve. Things will move very fast in the coming months over this. So much money. Ridiculous amounts of money at stake.
     
    Sean, Peter Trewhitt and RedFox like this.
  16. Creekside

    Creekside Senior Member (Voting Rights)

    Messages:
    1,218
    I see the main potential strength of AI as being its ability to improve itself through feedback. That requires objective outcomes, such as correct diagnoses that lead to effective treatments. Let the AI read all the scientific papers; since it doesn't blind itself with baseless belief in any of them, it should be able to learn how to judge the quality of papers. Papers with poorly defined buzzwords, small cohorts, or dubious selection processes would receive low weights. It might find that the number of citations has very little effect on whether a paper leads to a correct diagnosis and treatment.

    Let the AI view patient records, and see which papers include theories or data that correlate with successful patient outcomes. BTW, someone needs to deal with the issue of AIs being able to access "private" information without violating human laws about privacy. It's a machine; it doesn't make any emotional judgements about your bowel movements or sexual kinks, so there's no reason not to let it access that data in its search for solutions. That would need rules to prevent the AI from passing the data on for any other purpose, but that's no reason not to allow AIs to access the information as anonymous data.

    Yes, the entire global financial system is likely to go under AI control. There was already a big revolution due to super-fast stock trading, and that was trivial compared to the potential for AIs handling stocks. The big question is: "Who will get the money?" AIs can potentially replace managers, lawyers, artists. Hmmm, they could replace sports superstars too. The sports industry would fight it, but if AIs offered robot sports that appealed to humans (combat robots engaged in really brutal battles), viewers would switch. Please, please let them replace politicians! I'd certainly consider voting for an AI.

    Back to AI and medicine: AIs are already superior at some diagnostic tasks, so improve their feedback from successful patient outcomes, and give them more access to data.
     
    glennthefrog, Sean and Peter Trewhitt like this.
  17. Sly Saint

    Sly Saint Senior Member (Voting Rights)

    Messages:
    9,924
    Location:
    UK
    Merged thread

    Med-PaLM : Medical AI


    Med-PaLM (research.google)
     
    Last edited by a moderator: Jul 31, 2023
    RedFox and Amw66 like this.
  18. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,661
    Location:
    Canada
    Merged thread

    Artificial intelligence in medicine


    Artificial intelligence will revolutionize medicine in the coming year. No typo. It will not take years, it will begin this year, and it will be more transformative than even electrification was, mostly because it will happen much faster.

    This thread is for news, papers and releases of AI technology applied to medicine, healthcare and health in general.

    In the end, only technology matters. Humans are fallible, but give them a piece of reliable technology and anyone can master things that would otherwise take years of experience. The primary way AI accomplishes this is by reducing time. AIs can train on the equivalent of millions of years of experience and apply that knowledge in milliseconds. There is nothing else like it.
     
    Last edited by a moderator: Jul 31, 2023
    Robert 1973, Sean, Amw66 and 4 others like this.
  19. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,661
    Location:
    Canada
    Towards Generalist Biomedical AI
    https://arxiv.org/abs/2307.14334
    DeepMind

    Medicine is inherently multimodal, with rich data modalities spanning text, imaging, genomics, and more. Generalist biomedical artificial intelligence (AI) systems that flexibly encode, integrate, and interpret this data at scale can potentially enable impactful applications ranging from scientific discovery to care delivery. To enable the development of these models, we first curate MultiMedBench, a new multimodal biomedical benchmark.

    MultiMedBench encompasses 14 diverse tasks such as medical question answering, mammography and dermatology image interpretation, radiology report generation and summarization, and genomic variant calling. We then introduce Med-PaLM Multimodal (Med-PaLM M), our proof of concept for a generalist biomedical AI system. Med-PaLM M is a large multimodal generative model that flexibly encodes and interprets biomedical data including clinical language, imaging, and genomics with the same set of model weights. Med-PaLM M reaches performance competitive with or exceeding the state of the art on all MultiMedBench tasks, often surpassing specialist models by a wide margin.

    We also report examples of zero-shot generalization to novel medical concepts and tasks, positive transfer learning across tasks, and emergent zero-shot medical reasoning. To further probe the capabilities and limitations of Med-PaLM M, we conduct a radiologist evaluation of model-generated (and human) chest X-ray reports and observe encouraging performance across model scales.

    In a side-by-side ranking on 246 retrospective chest X-rays, clinicians express a pairwise preference for Med-PaLM M reports over those produced by radiologists in up to 40.50% of cases, suggesting potential clinical utility. While considerable work is needed to validate these models in real-world use cases, our results represent a milestone towards the development of generalist biomedical AI systems.
     
  20. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,661
    Location:
    Canada
    Zero-shot in the above is especially significant. It means problems the AI was not trained on and is seeing for the first time. State of the art refers to human experts, so they are claiming that this model is already as good or better than human medical doctors in some cases, even "often surpassing specialist models by a wide margin".
     
    RedFox and alktipping like this.