Artificial intelligence in medicine

Discussion in 'Other health news and research' started by RedFox, Apr 11, 2023.

  1. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,662
    Location:
    Canada
    Good stuff. I think that we'll be seeing a lot of these, and in growing numbers and impact over time.

    The individual papers are still worth discussing on their own, but I created a thread to consolidate most of the general discussion over AI models, papers and technology here: Artificial intelligence in medicine.
     
    Amw66 and alktipping like this.
  2. Creekside

    Creekside Senior Member (Voting Rights)

    Messages:
    1,218
    Now we just need the sensors small enough to fit in a saltshaker... I think we already have Star Trek hypospray injectors, just not in common use.
     
    alktipping likes this.
  3. John Mac

    John Mac Senior Member (Voting Rights)

    Messages:
    1,006
    I wonder if AI could replace some of the role GPs/MDs fulfil today?
    You wait weeks for an appointment to see a GP, and when you tell them what's wrong they send you away with a form for blood tests or a prescription for medication based on what you have told them.
    Why couldn't this be done via an AI system which you log into from home? After entering details of your problem and answering follow-up questions based on your previous answers, the AI system gives you its advice and you print out the necessary blood test forms and prescriptions there and then, all monitored and recorded officially in your medical records of course.
    Less than one hour instead of several weeks, and at far lower cost to the NHS, i.e. no doctor/receptionist/surgery to pay for.
     
    alktipping likes this.
  4. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,662
    Location:
    Canada
    Most of it. And this is why healthcare systems will embrace it wholesale. It will be massively cheaper, faster and more effective. Where they don't, private companies will swoop in anyway.

    Doctors will still have a role to play, but more of a hands-on thing, and for those who don't trust technology. There will always be those. Robotics is a different issue, separate from AI, despite all the sci-fi tropes being based around humanoid robots. There is hardly any need for it in most cases.

    Healthcare is probably the industry with the most unmet needs. AI can deal with at least half of those, reducing disability and sick leave, making people more productive and happier. It won't just change some of the roles MDs play, it will change everything, top to bottom. As big a change as the modern Internet was compared to mail by pony.

    Soon there will also come the question of whether human doctors even need to be in the loop, for prescriptions and such, as they would become the main bottleneck in a system that can move 100x faster than them. Once performance is clearly higher, there is literally no need, and it will be an easy choice for governments. This will likely be the main point of backlash from doctors, who will see their role diminished, and likely their salaries too, since those are high mostly because of scarcity. But everyone else will be better off, and the savings will be so massive that it can't be held back.
     
    Ariel and alktipping like this.
  5. JemPD

    JemPD Senior Member (Voting Rights)

    Messages:
    4,500
    What worries me (i am hoping someone can explain to me why i am wrong, or even just reassure me that i am!), is that surely the information it is given will be from what is already known - so it will draw on the information already held, and use that..... just faster and more efficiently and from more sources simultaneously than a human mind ever could?

    So what worries me is the rubbish that is already in all the textbooks about ME, the research already done, and the crap like Cochrane... surely it can only mine its responses from the information that is programmed into it?

    If that info is inaccurate or rubbish - which we know it is, in the main...

    Then how does that help us (pwme specifically)?
     
    TruthSeeker, RedFox, Ariel and 4 others like this.
  6. JemPD

    JemPD Senior Member (Voting Rights)

    Messages:
    4,500
    For example, if it were asked about FND, it's not going to say 'this is scientifically bankrupt', is it? Because all the knowledge that will have been put into it about FND = conversion disorder = emotional distress manifesting somatically... if that's what the computer is taught, if that's what is programmed into it, how can it challenge it or say 'that's rubbish'? If it's told certain theories are proven facts... then????

    I am sure i am revealing my ignorance, and i have a weird sense of deja vu. I may have asked this before and been answered before. If so i'm sorry, i can't remember the answer or whether i was reassured or not. :rolleyes:

    Am longing to hear it will be the answer to our problems (or part of it), before the robot apocalypse :D
     
    TruthSeeker, RedFox, Ariel and 6 others like this.
  7. Sean

    Sean Moderator Staff Member

    Messages:
    8,064
    Location:
    Australia
    Yep, as much promise as AI has, if the GIGO (garbage in, garbage out) problem is not dealt with, it will not only fail but could actually be extremely damaging, especially in the early, cruder phase of its application to actual human lives.
     
    RedFox, Ariel, EzzieD and 3 others like this.
  8. JES

    JES Senior Member (Voting Rights)

    Messages:
    209
    With all the risks, including garbage in, garbage out, I'd still take AI over a handful of BPS proponents having the largest say in how diseases like ME/CFS are treated. At least AI doesn't come with an inability to understand, or a complete ignorance of, the biology part of the BPS model. Even with all the garbage out there, I'm hopeful it will pick up a lot more of the opposing viewpoints than a typical BPS-oriented doctor would.
     
    RedFox, Ariel, EzzieD and 2 others like this.
  9. Sean

    Sean Moderator Staff Member

    Messages:
    8,064
    Location:
    Australia
    To be clear, I am certainly not against AI. Like every other technology, used well it has huge potential, in cost and time savings alone.

    Just wary of it being relied on too much, of it being seen as somehow more objective and less biased than humans, especially before it has been well tested and its limitations and risks understood and factored into its practical application.

    In particular its rate of development (evolution?), as @rvallee mentioned, worries me, as I think it is moving way too fast for humans to understand the full implications, and introduce adequate controls and restraints, before it is implemented widely.

    There will also always be hostile actors in this world more than willing to exploit new tech for their nasty purposes, so the rest of us need to understand it to be ready to counter them.

    Plus, we can't afford to let it completely displace human experts, as flawed as they are. We will always need those people to help keep AI honest, and as a backup if it fails (including being switched off if it goes rogue).
     
    Amw66, RedFox, EndME and 1 other person like this.
  10. mariovitali

    mariovitali Senior Member (Voting Rights)

    Messages:
    516
    Thank you @rvallee for this thread. As many of you know, I have been looking at various AI technologies for quite some time. The first AI-generated hypothesis on the origin of ME/CFS and other syndromes was generated in 2015, and it was sent to a number of ME/CFS researchers in December 2015 (why I write about this will become clear later on). I have also been in remission since 2015 (which was also explained in the email I sent to researchers).

    1) To this day, despite the rise of AI tools, there is not a single tool that looks at medical syndromes. ChatGPT and Google Bard are useless when specific questions are posed to them regarding ME/CFS and Long COVID.
    2) Since these tools are/will be disruptors, expect many difficulties when it comes to their use by researchers. Imagine that you have a tool that can tell you the most likely cause(s) of ME/CFS and where to put your time (and money).
    3) Then we have "research silos". An immunologist will be looking for immune-related causes. Other researchers will be looking at metabolites. Another researcher believes that the solution lies in the vagus nerve. We need to unify these silos, and to do so I find no better way than using AI tools to guide this unification.

    I find it extremely disappointing and frustrating that ever since 2015 I have been trying to convince researchers and patient organisations to use AI technology to speed up the research process. Eight years and counting.
     
    Ariel, Amw66, RedFox and 4 others like this.
  11. Creekside

    Creekside Senior Member (Voting Rights)

    Messages:
    1,218
    AI should be better than humans at filtering out garbage information and theories. Humans excel at self-delusions. Humans want to believe beneficial nonsense; they have for thousands of years. Humans can also care more about appearances (My theory was right!) than about reality or the welfare of people affected by those desires. AIs can be developed to focus on results. If they prescribe GET to 20 patients and get bad results, they weigh that against unverified theory and look for other options ... and share that with other AIs immediately. Harmful theories would likely be weeded out quite quickly, far more quickly than human institutions would respond.
     
    Sean and rvallee like this.
  12. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,662
    Location:
    Canada
    The biggest shift AI will enable is time. Right now doctors learn very little about us, and that includes researchers. They spend very little time with us, and mostly work from cookie-cutter standardized tropes, most of which are OK to good.

    Illness, the perspective of patients, is entirely missing from medicine. Any AI system built only from the perspective of doctors will only be good at doing the same work doctors already do, like you said, just a lot faster. It wouldn't be a very good system. The big shift is going to be in the ability of those systems to listen to all of us, everything we have to say, everything that happens to us, and to work with existing scientific knowledge and push it further.

    Doctors are trained in a standardized way, so that the worst doctor is almost as good as the best ones, because they all do the same things. There is a difference, but it's trivial. In a sense, they are trained like robots, to apply rote formulas. AIs won't have this limitation, they will be able to learn everything, remember everything, cross-reference everything. And they will learn from us, which doctors are unable, even seem forbidden, to do. It's a huge change that is hard to imagine until it's there.

    But really, AI systems built with the same biases as humans just wouldn't be very good. Those with a profound understanding of illness, not limited to the standard healthcare approach of "one symptom at a time", will massively outperform those that simply use the existing medical literature.

    Technology is awesome, but it's really when you put it in the hands of people that you get amazing results, and in healthcare this also means patients. There are so many more patients than there are doctors. There are more humans alive right now in need of healthcare than the number of doctors ever trained. And there is so much money in it for those who can produce better results, that it will get there eventually.
     
    Last edited: Jul 31, 2023
    Amw66, Sean, Ariel and 1 other person like this.
  13. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,662
    Location:
    Canada
    Yup. Well said. AIs won't have the investment that people who dedicate their careers to a pet theory currently have, willing to see benefits where there aren't any, because they are mainly interested in their own interests. Or willing to BS infinitely to get large research grants whose only aim is to get more research grants. This is all driven by academic careerism.

    An AI system that sees the huge rates of deterioration from BPS treatments will simply move on, and won't spend a nanosecond dithering over what it means the way someone who has spent years learning, researching and applying them would. They'll want results, and people who remain sick but can parrot that they're a bit better on some random questionnaire simply aren't a result worth replicating.

    The biopsychosocial model of illness is built entirely on logical fallacies, mainly the god of the gaps and the argument from ignorance. AIs won't ignore that; they won't be able to, otherwise they would be generally bad at their job. In our case, it's all biases and conflicts of interest that lead to this. All human needs based on social gratification. There is no knighthood in line for an AI that harms millions of people with its nonsense.
     
    Sean and JemPD like this.
  14. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,662
    Location:
    Canada
    AIs have only recently shown the promise of their potential. Mostly in the last year, really. I can't fault researchers for not spending much effort applying them, the technology has to mature first. And it's getting there very rapidly, but there isn't much point spending months trying to get some system to do something when, by the time you're done with the experiment, there are systems that are 10-100x better and can do the same stuff in a day.

    In the next 18 months, we'll see models trained that are 100x larger than GPT-4. And that's just one company; there are dozens more doing the same. The old trope where some small lab, or even a lone researcher, creates an AI is pure fiction. It's a technology that needs scale, that needs millions of people working on all its parts, compounding each other's work, and not just creating it, but finding innovative ways to maximize its potential. And it only got there this year.
     
    Sean likes this.
  15. JemPD

    JemPD Senior Member (Voting Rights)

    Messages:
    4,500
    Ariel likes this.
  16. Creekside

    Creekside Senior Member (Voting Rights)

    Messages:
    1,218
    If, for some reason, bitcoin type mining collapsed, could all that processing power be repurposed for AIs?
     
    oldtimer likes this.
  17. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,662
    Location:
    Canada
    Bitcoin-type mining (there are thousands of such coins) uses cheap GPUs, which are not really fit for that.

    AI training uses chips built on the same principle as graphics cards (actually, if it weren't for video games, there wouldn't even be an AI boom happening right now - amazing story), but they're not individual cards. Instead they're $500K+ machines that connect similar chips with ridiculous bandwidth, memory and storage on the same system, and the biggest players have thousands of those interconnected in data centers. Mostly built by Nvidia, although AMD recently announced theirs.

    The collapse of bitcoin mining would be good for gamers, but mostly useless for AI. Miners don't use the high-end models; they just buy lots and lots of cheap ones.
     
    Amw66, oldtimer and Trish like this.
  18. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,662
    Location:
    Canada
    Uh, well, it looks like I was mostly thinking of the biggest companies, without a thought for all the smaller players that can't afford the big machines, which are also only available in limited quantities.

    Turns out lots of them are buying GPUs and putting them to good use: https://www.tomshardware.com/news/evidence-shows-ai-driven-companies-are-buying-up-gaming-gpus. It definitely makes sense for startups and universities that don't have the budget for the expensive stuff.
     
  19. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,662
    Location:
    Canada
  20. John Mac

    John Mac Senior Member (Voting Rights)

    Messages:
    1,006
    AI offers huge promise on breast cancer screening

    Artificial intelligence can "safely" read breast cancer screening images, a Swedish study suggests.

    Researchers led by a team at Lund University found computer-aided detection could spot cancer at a "similar rate" to two radiologists.

    But they said more research was needed to fully determine whether it could be used in screening programmes.

    Experts in the UK agreed AI offered huge promise in breast cancer screening.

    This is not the first study to look at the use of AI to diagnose breast cancer in mammograms - X-rays of the breast.

    Previous research, including some carried out in the UK, has looked retrospectively, where the technology assesses scans which have already been looked at by doctors.

    But this research study saw AI-supported screening put head-to-head with standard care.

    The trial, published in Lancet Oncology, involved more than 80,000 women from Sweden with an average age of 54.

    Half of the scans were assessed by two radiologists, known as standard care, while the other half were assessed by the AI-supported screening tool followed by interpretation by one or two radiologists.

    In total, 244 women from AI-supported screening were found to have cancer, compared with 203 women recalled from standard screening.

    And the use of AI did not generate more "false positives" - where a scan is incorrectly diagnosed as abnormal.

    The false-positive rate was 1.5% in both the AI group and the group assessed by radiologists.

    https://www.bbc.co.uk/news/health-66382168
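    The detection figures quoted above can be sanity-checked with some quick arithmetic. A minimal sketch in Python, assuming the ~80,000 participants were split into two roughly equal arms of ~40,000 each (the article only gives the total, so the per-arm size here is an approximation):

```python
# Back-of-envelope check of the trial numbers quoted above.
# Assumption: two roughly equal arms of ~40,000 women each.

ARM_SIZE = 80_000 // 2   # approximate per-arm size (assumption)

ai_cancers = 244         # cancers found in the AI-supported arm
std_cancers = 203        # cancers found in the standard (two-radiologist) arm

ai_rate = ai_cancers / ARM_SIZE * 1000    # detections per 1,000 screens
std_rate = std_cancers / ARM_SIZE * 1000

print(f"AI-supported: {ai_rate:.1f} per 1,000 screens")    # -> 6.1
print(f"Standard:     {std_rate:.1f} per 1,000 screens")   # -> 5.1
print(f"Relative increase: {(ai_cancers / std_cancers - 1) * 100:.0f}%")  # -> 20%
```

    So, on these rough assumptions, AI-supported screening found about one extra cancer per 1,000 screens, roughly 20% more than standard double reading, at the same 1.5% false-positive rate.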
     
    Ariel, rvallee, Sean and 2 others like this.