Artificial intelligence in medicine

Discussion in 'Other health news and research' started by RedFox, Apr 11, 2023.

  1. Amw66

    Amw66 Senior Member (Voting Rights)

    Messages:
    6,769
  2. SNT Gatchaman

    SNT Gatchaman Senior Member (Voting Rights)

    Messages:
    5,762
    Location:
    Aotearoa New Zealand
    Influence of believed AI involvement on the perception of digital medical advice (2024)
    Reis, Moritz; Reis, Florian; Kunde, Wilfried

    Large language models offer novel opportunities to seek digital medical advice. While previous research primarily addressed the performance of such artificial intelligence (AI)-based tools, public perception of these advancements received little attention.

    In two preregistered studies (n = 2,280), we presented participants with scenarios of patients obtaining medical advice. All participants received identical information, but we manipulated the putative source of this advice (‘AI’, ‘human physician’, ‘human + AI’). ‘AI’- and ‘human + AI’-labeled advice was evaluated as significantly less reliable and less empathetic compared with ‘human’-labeled advice. Moreover, participants indicated lower willingness to follow the advice when AI was believed to be involved in advice generation.

    Our findings point toward an anti-AI bias when receiving digital medical advice, even when AI is supposedly supervised by physicians. Given the tremendous potential of AI for medicine, elucidating ways to counteract this bias should be an important objective of future research.

    Link | PDF (Nature Medicine) [Open Access]
     
  3. glennthefrog

    glennthefrog Established Member (Voting Rights)

    Messages:
    62
    Location:
    ARGENTINA
    These results truly don't reflect my experience; the last thing I'd say is that I found human doctors to show more empathy than the simulated empathy of large language models. I also don't think they reflect the experience shared by most ME/POTS/Lyme/MCAS, etc. sufferers on patient groups. The problem with this study, I believe, is that the participating doctors were aware they were part of a study and were being monitored, so their behaviour doesn't reflect the typical behaviour of a medical practitioner working with the absolute lack of accountability they have in their common practice. I'm also pretty sure that neglected diseases weren't included as possible diagnoses.
     
    oldtimer, Peter Trewhitt and Trish like this.
  4. Yann04

    Yann04 Senior Member (Voting Rights)

    Messages:
    764
    Location:
    Switzerland (Romandie)
    Researchers say an AI-powered transcription tool used in hospitals invents things no one ever said - Associated Press

    Tech behemoth OpenAI has touted its artificial intelligence-powered transcription tool Whisper as having near “human level robustness and accuracy.”

    But Whisper has a major flaw: It is prone to making up chunks of text or even entire sentences, according to interviews with more than a dozen software engineers, developers and academic researchers. Those experts said some of the invented text — known in the industry as hallucinations — can include racial commentary, violent rhetoric and even imagined medical treatments.

    Experts said that such fabrications are problematic because Whisper is being used in a slew of industries worldwide to translate and transcribe interviews, generate text in popular consumer technologies and create subtitles for videos.

    More concerning, they said, is a rush by medical centers to utilize Whisper-based tools to transcribe patients’ consultations with doctors, despite OpenAI’s warnings that the tool should not be used in “high-risk domains.”

    https://apnews.com/article/ai-artif...lth-business-90020cdf5fa16c79ca2e5b6c4c9bbb14

     
    Sean, LJord and Peter Trewhitt like this.
  5. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,662
    Location:
    Canada
    :cry: it truly learned from the professionals. It's so beautiful.

    Joking aside, yikes at going ahead with it before it's ready. It will be ready soon, and jumping the gun here is a good way to turn the work culture against something that will soon be superior in all cases. Especially when the developers explicitly say not to use it in such cases.

    Health care systems have been essentially hostile towards telemedicine and any means of providing better access. In most cases they don't even have patient portals with ticketing and case management, even though that's all mature and tried-and-tested in many other industries. But they go right ahead with technology that hardly anyone else uses yet. Very odd people making bizarre choices.
     
    Amw66 and Peter Trewhitt like this.
  6. Creekside

    Creekside Senior Member (Voting Rights)

    Messages:
    1,218
    I need a retinal exam, and have to travel 300 km for it (and 300 back). I read an article about how well teleophthalmology is working in my town, but it wasn't offered to me: they say their camera isn't good enough, and the referral time is really long. So, despite the glowing reviews, it's really not available. Even with a lesser-quality camera, it should be good enough for an expert to judge whether a more detailed exam is necessary. The government would probably save money by setting up some regional adequate-quality cameras rather than paying for specialists' offices and staff. Yes, I'm annoyed about it.
     
    rvallee and Peter Trewhitt like this.