The Robot Doctor Will See You Now

Discussion in 'Research methodology news and research' started by Jaybee00, Feb 3, 2025.

  1. Jaybee00

    Jaybee00 Senior Member (Voting Rights)

    Messages:
    2,321
  2. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    1,143
    Location:
    Norway
    From the article:
    This research is early and may evolve. But the findings more broadly indicate that right now, simply giving physicians A.I. tools and expecting automatic improvements doesn’t work. Physicians aren’t completely comfortable with A.I. and still doubt its utility, even if it could demonstrably improve patient care.

    Replace AI (tools) with «patient input», and you’ve got ME/CFS in a nutshell.

    The most radical model might be complete separation: having A.I. handle certain routine cases independently (like normal chest X-rays or low-risk mammograms), while doctors focus on more complex disorders or rare conditions with atypical features.

    In the field of Responsible AI (RAI), one concern with handing the easy tasks entirely over to the machine is that the human gets less practice over time, and therefore becomes worse at the part they are still responsible for. An example is self-driving cars: if you only drive when the conditions are difficult, you'll drive so much less that your driving skills will deteriorate over time.

    Is this a concern here as well? I imagine that there might be enough complex cases to occupy the specialists full time?
     
    rvallee, MeSci, tornandfrayed and 4 others like this.
  3. EndME

    EndME Senior Member (Voting Rights)

    Messages:
    1,418
    Yes, and I would imagine you have to have seen thousands of "easy cases" to know what a complex case looks like. But I can imagine quite a bit of the work GPs do is quite replaceable. Will that replacement be ethical, though, on what grounds will it happen, and at whose will and to whose advantage?
     
  4. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    1,143
    Location:
    Norway
    The BPS lobby, of course!
     
    Trish and Peter Trewhitt like this.
  5. alex3619

    alex3619 Senior Member (Voting Rights)

    Messages:
    2,286
    Back in the 90s there were experimental robotic surgeons that were better than real surgeons at very specific surgeries (not general surgery), according to what I was hearing in the AI community. However, there was a huge issue that was not resolved at the time: if something goes wrong, who has liability?

    I currently expect AI might replace most doctor jobs within a few decades. It can replace most jobs. Where does that leave society? Who pays for it all? Who fixes things when it goes wrong, which is inevitable on occasion?

    There are economic, political, moral, ethical and legal issues aplenty, most of which we do not have answers for. Will we leave it up to some AI to decide for us? Will we get informed and treat this politically, scientifically, or pragmatically?

    Lots of questions remain unanswered. One thing I consider possible, but not inevitable, is that AI can move past the stigma of ME. Unless it doesn't. AI reflects the literature, and the literature is mired in stigma. We still need our medical researchers to make the big discoveries for AI to take advantage of them. Of course in time they will be using AI to assist that research, much more than they do now.

    My first degree was in AI, and back in the day I was calling AI artificial insanity, and alien intelligence. It won't be human, and while superficially it might adhere to human sensibilities, under the hood it is likely to be as human as an ant hill.
     
    MarcNotMark, MeSci, Trish and 2 others like this.
  6. Yann04

    Yann04 Senior Member (Voting Rights)

    Messages:
    1,700
    Location:
    Romandie (Switzerland)
    I’m just concerned because, while a lot of medicine is about objective biological signs, another big part of it is mostly based on social constructs and “stereotypes”.

    These AI doctors are inheriting those “stereotypes”; that’s how they learn. And unlike their human counterparts, once their training is done, they never learn again: their neural network stays exactly the same…
    So
    1) They cannot learn from their mistakes
    2) Their ingrained biases (which they learn from the medical databases that trained them) cannot go away until they are replaced with a new model. But when the new model is trained on some of the old model’s outputs… it’ll be a never-ending cycle of perpetuating biases (a toy sketch of that loop is below).
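
    To make the feedback loop concrete, here is a minimal sketch in Python. Everything in it is synthetic and the "group" feature is made up purely for illustration: a first model is trained on biased labels, a second model is then trained on the first model's outputs, and the bias carries straight over.

```python
# Illustrative sketch of the bias feedback loop (all data is synthetic).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic patients: one real signal feature and one irrelevant "group" attribute.
n = 5000
signal = rng.normal(size=n)
group = rng.integers(0, 2, size=n)           # e.g. a demographic flag
X = np.column_stack([signal, group])

true_label = (signal > 0).astype(int)        # the actual condition
# Biased training labels: cases in group 1 are systematically under-diagnosed.
biased_label = true_label.copy()
biased_label[(group == 1) & (rng.random(n) < 0.5)] = 0

model_v1 = LogisticRegression().fit(X, biased_label)

# "New" model trained on the old model's outputs rather than fresh ground truth.
model_v2 = LogisticRegression().fit(X, model_v1.predict(X))

for name, model in [("v1", model_v1), ("v2", model_v2)]:
    preds = model.predict(X)
    rate0 = preds[(group == 0) & (true_label == 1)].mean()
    rate1 = preds[(group == 1) & (true_label == 1)].mean()
    print(f"{name}: detection rate, group 0 = {rate0:.2f}, group 1 = {rate1:.2f}")
```

    Both versions end up detecting far fewer true cases in the under-diagnosed group, and the second model never saw the original labels at all.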
     
    CorAnd and Peter Trewhitt like this.
  7. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    1,143
    Location:
    Norway
    No. We don’t do that with anything today, so there’s no reason to believe that it will happen with AI. The only exception is the EU’s AI Act.

    Some do learn over time.

    There’s a case from Norway where they use AI to create 3D models of organs before surgery. That used to be done manually by doctors using a program that looks like Paint. Using AI, they were able to do the surgery the same day that they did the imaging - saving loads of time and money.

    There’s another case where they use AI for scheduling at hospitals. This drastically reduced the downtime for ORs.

    For now, AI is most suitable for «administrative» tasks.
     
    CorAnd, MeSci and Peter Trewhitt like this.
  8. Yann04

    Yann04 Senior Member (Voting Rights)

    Messages:
    1,700
    Location:
    Romandie (Switzerland)
    I’m not sure any of these are examples of the AI “learning” over time.

    How neural networks tend to work is that you have a training phase where the network is fed a bunch of data and establishes a set of nodes with weights, and once you’re satisfied with the performance, you can save those weights and you have a trained network.

    Then you apply that network (the weights) to real-world problems; the weights don’t change anymore.

    There do exist adaptive neural networks that can adapt their weights over time, but as of now those are rarely used.
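
    A rough sketch of that train-once-then-freeze lifecycle, assuming a scikit-learn-style toolkit and entirely synthetic data (the file name and task are made up): train, save the weights, and then only run the frozen model on new cases.

```python
# Rough sketch of the train-once-then-freeze lifecycle (synthetic data).
import joblib
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 10))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

# Training phase: the network's weights are adjusted until performance looks good.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(X_train, y_train)

# Save the trained weights; this frozen artifact is what gets deployed.
joblib.dump(net, "triage_model.joblib")

# Deployment: the saved network only runs forward passes on new cases.
# Nothing it sees here ever changes its weights.
deployed = joblib.load("triage_model.joblib")
new_cases = rng.normal(size=(5, 10))
print(deployed.predict(new_cases))
```

    (For what it's worth, incremental or online learning does exist, e.g. partial_fit-style updates, but as noted it's rarely what actually gets deployed.)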
     
    Peter Trewhitt likes this.
  9. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    1,143
    Location:
    Norway
    Yeah, I should have made that clear. My bad! Those are examples of how AI can be applied to healthcare today.
     
    Peter Trewhitt likes this.
  10. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    1,143
    Location:
    Norway
    If you let any of them loose on the web, someone will find a way to break them within a few hours.
     
    tornandfrayed and Peter Trewhitt like this.
  11. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    14,246
    Location:
    Canada
    The stuff they are the worst at? Nah. AIs will just comically overperform physicians here, like with most other things. In many cases, like with us, they are already much better. Truly, they get it so much more it's not even close, but that's mostly because things are maximally broken right now.

    One major problem when people think about AI's role here is that they're missing the most important lesson we got in the last few years: the easy stuff is actually the hard stuff. AIs still struggle at basic math, although that's rapidly improving, but have been able to make poetry, stories, songs, art and so on for years.

    Faking empathy is super easy. It literally only takes time and patience. Human physicians don't have that. AIs will have basically unlimited time per patient; they will do 1,000,000x better than humans here.

    You don't want physicians to focus on more complex or rare conditions. It's the very last thing anyone should want. Actually, we should want as little human judgment as possible, preferably none. Human judgment is the literal worst thing you can rely on in all circumstances, with relating to the experience of other people a close second.
     
    Yann04 and Peter Trewhitt like this.
  12. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    1,143
    Location:
    Norway
    Human judgement is 100% required to create an AI model. The human provides the training data, and decides which meaning(s) to assign to the numerical value(s) that the model outputs. The human also decides how and where to implement the model. Human judgement isn’t going anywhere anytime soon.
     
    Sean and Peter Trewhitt like this.
  13. Yann04

    Yann04 Senior Member (Voting Rights)

    Messages:
    1,700
    Location:
    Romandie (Switzerland)
    Yes, but we should minimise this as much as possible.

    There exist AI models where you don’t even have human-labelled training data; they just learn to distinguish patterns over time without human input. This is called “unsupervised learning”.
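
    As a minimal illustration (synthetic, unlabelled measurements; the two underlying "groups" are invented for the example), a clustering algorithm can separate the samples without ever being told what the groups are:

```python
# Minimal sketch of unsupervised learning: no human-provided labels anywhere.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two measurements per sample, drawn from two unknown underlying groups.
group_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
group_b = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(100, 2))
X = np.vstack([group_a, group_b])            # the group identities are never used

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(clusters[:5], clusters[-5:])           # cluster IDs the algorithm invented itself
```

    It's then still up to a human to decide what the resulting clusters actually mean.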
     
    Peter Trewhitt likes this.
  14. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    1,143
    Location:
    Norway
    The AI has to get the data from somewhere even if it isn’t labeled, and the human still has to decide on how to interpret the output.
     
    Sean and Peter Trewhitt like this.
  15. Yann04

    Yann04 Senior Member (Voting Rights)

    Messages:
    1,700
    Location:
    Romandie (Switzerland)
    Unsupervised learning is often used to differentiate biological species, or to group proteins and things like that, so the data has very little human influence.

    And yes, obviously humans will interpret the results; the goal, though, is to minimise the points where human biases can be inserted.
     
    Peter Trewhitt likes this.
  16. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    1,143
    Location:
    Norway
    That’s a trait of the problem it’s being applied to, not the type of algorithm.

    But I agree that minimizing human bias is the desirable path in most cases.
     
    Peter Trewhitt likes this.
  17. Yann04

    Yann04 Senior Member (Voting Rights)

    Messages:
    1,700
    Location:
    Romandie (Switzerland)
    Well, there is less interest in applying unsupervised learning to human language and things like that, because by definition it already has subjective labels.

    It’s more interesting to apply it where the data isn’t influenced as much by human cognition and social constructs… in my opinion.
     
    Peter Trewhitt and Utsikt like this.
  18. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    1,143
    Location:
    Norway
    I don’t have an opinion on that. I’m just for reducing any source of bias and using the best tools (and not the hyped ones) to solve any given problem.
     
    Peter Trewhitt likes this.
  19. Creekside

    Creekside Senior Member (Voting Rights)

    Messages:
    1,394
    I had the impression that at least some AIs were given goals, such as "identify which of these x-ray images shows cancer", and that they train on images with known findings. Once that's done, they should continue to adapt to feedback, such as false positives (they cut the patient open and found no cancer) and false negatives (the "cancer free" patient returned to the hospital and a large tumor was found in the same location). Do today's AIs not adapt? Can they not adapt to a different x-ray machine's images?
     
    Peter Trewhitt likes this.
  20. Yann04

    Yann04 Senior Member (Voting Rights)

    Messages:
    1,700
    Location:
    Romandie (Switzerland)
    What would happen in the case of “errors” with today’s AIs is that you might “fine-tune” a model by training it on previous errors to improve its accuracy. But that isn’t an automatic process, and in doing so, you’re creating a new model.
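
    A rough sketch of what that manual step might look like, assuming a scikit-learn-style model and made-up file names (synthetic data throughout): the corrected cases are folded in by a human-run job that continues from the old weights, and the result is saved as a new model version; nothing updates automatically in production.

```python
# Rough sketch of manual "fine-tuning" on previously mis-handled cases (synthetic data).
import joblib
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

# The currently deployed model, trained once on the original data.
X_old = rng.normal(size=(1000, 10))
y_old = (X_old[:, 0] > 0).astype(int)
model_v1 = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model_v1.fit(X_old, y_old)
joblib.dump(model_v1, "model_v1.joblib")

# Cases the deployed model got wrong, with corrected labels from follow-up
# (e.g. confirmed false negatives and false positives).
X_errors = rng.normal(size=(50, 10))
y_corrected = (X_errors[:, 0] > 0).astype(int)

# Fine-tuning is a human-initiated training run that continues from the old weights.
model_v2 = joblib.load("model_v1.joblib")
for _ in range(20):                      # a few extra passes over the corrected cases
    model_v2.partial_fit(X_errors, y_corrected)

# The output is a separate artifact; the deployed model stays as it was until swapped.
joblib.dump(model_v2, "model_v2.joblib")
```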
     
    Peter Trewhitt likes this.
