The Robot Doctor Will See You Now

From the article:
This research is early and may evolve. But the findings more broadly indicate that right now, simply giving physicians A.I. tools and expecting automatic improvements doesn’t work. Physicians aren’t completely comfortable with A.I. and still doubt its utility, even if it could demonstrably improve patient care.

Replace AI (tools) with «patient input», and you’ve got ME/CFS in a nutshell.

The most radical model might be complete separation: having A.I. handle certain routine cases independently (like normal chest X-rays or low-risk mammograms), while doctors focus on more complex disorders or rare conditions with atypical features.

In the field of Responsible AI (RAI), one concern with completely handing the easy tasks over to the machine is that the human gets less practice over time and therefore becomes worse at their responsibility. An example is self-driving cars - if you only drive when the conditions are difficult, you’ll drive so much less that your driving skills will deteriorate over time.

Is this a concern here as well? I imagine that there might be enough complex cases to occupy the specialists full time?
 
Is this a concern here as well? I imagine that there might be enough complex cases to occupy the specialists full time?

Yes, and I would imagine you will have had to see thousands of "easy cases" to know what a complex case looks like. But I can imagine quite a bit of the work GPs do is quite replaceable. But will that replacement be ethical, on what grounds will it be made, and to whose will and advantage?
 
Back in the 90s there were experimental robotic surgeons that were better than real surgeons at very specific surgeries (not general surgery), according to what I was hearing in the AI community. However, there was a huge issue that was not resolved at the time - if something goes wrong, who has liability?

I am currently expecting AI might replace most doctor jobs within a few decades. It can replace most jobs. Where does that leave society? Who pays for it all? Who fixes things when it goes wrong, which is inevitable on occasion?

There are economic, political, moral, ethical and legal issues aplenty, most of which we do not have answers for. Will we leave it up to some AI to decide for us? Will we get informed and treat this as a political, scientific, or pragmatic matter?

Lots of questions remain unanswered. One thing I consider possible, but not inevitable, is that AI can move past the stigma of ME. Unless it doesn't. AI reflects the literature, and the literature is mired in stigma. We still need our medical researchers to make the big discoveries for AI to take advantage of them. Of course in time they will be using AI to assist that research, much more than they do now.

My first degree was in AI, and back in the day I was calling AI "artificial insanity" and "alien intelligence". It won't be human, and while superficially it might adhere to human sensibilities, under the hood it is likely to be as human as an ant hill.
 
I’m just concerned because, while a lot of medicine is based on objective biological signs, another big part of it is based mostly on social constructs and “stereotypes”.

These AI doctors are inheriting these “stereotypes”; that’s how they learn — and unlike their human counterparts, once their training is done, they never learn again — their neural network stays exactly the same…
So:
1) They cannot learn from their mistakes.
2) Their ingrained biases (which they learn from the medical databases that trained them) cannot go away until they are replaced with a new model. But when the new model is trained on some of the old model’s outputs… it’ll be a never-ending cycle of perpetuating biases.
 
Will we get informed and treat this as a political, scientific, or pragmatic matter?
No. We don’t do that with anything today, and there’s no reason to believe it will happen with AI. The only exception is the EU’s AI Act.

they never learn again — their neural network stays exactly the same…
Some do learn over time.

There’s a case from Norway where they use AI to create 3D models of organs before surgery. That used to be done manually by doctors using a program that looks like Paint. Using AI, they were able to do the surgery the same day that they did the imaging - saving loads of time and money.

There’s another case where they use AI for scheduling at hospitals. This drastically reduced the downtime for ORs.

For now, AI is most suitable for «administrative» tasks.
 
Some do learn over time.

There’s a case from Norway where they use AI to create 3D models of organs before surgery. That used to be done manually by doctors using a program that looks like Paint. Using AI, they were able to do the surgery the same day that they did the imaging - saving loads of time and money.

There’s another case where they use AI for scheduling at hospitals. This drastically reduced the downtime for ORs.

For now, AI is most suitable for «administrative» tasks.
I’m not sure any of these are examples of the AI “learning” over time.

How neural networks tend to work is that you have a training phase where the network is fed a bunch of data and establishes a network of nodes with weights. Once you’re satisfied with the performance, you can save those weights, and you have a trained network.

Then you apply that network (its weights) to real-world problems — the weights don’t change anymore.

There do exist adaptive neural networks — that can adapt their weights over time — but as of now those are rarely used.
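To make the train-then-freeze idea concrete, here is a minimal sketch using scikit-learn on synthetic data; the dataset, network size and file name are placeholders, not anything from a real deployment:

```python
# Minimal sketch of the train-then-freeze workflow described above.
# Synthetic data stands in for real clinical features; the network size,
# dataset and file name are placeholders.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
import joblib

# Training phase: the network is fed labelled data and its weights are fitted.
X_train, y_train = make_classification(n_samples=1000, n_features=20, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

# Once you're satisfied with the performance, save those weights.
joblib.dump(model, "trained_network.joblib")

# Deployment phase: load the frozen network and apply it to new cases.
# Nothing here updates the weights; predictions never feed back into training.
deployed = joblib.load("trained_network.joblib")
X_new, _ = make_classification(n_samples=5, n_features=20, random_state=1)
print(deployed.predict(X_new))
```

The adaptive case would amount to calling something like partial_fit on new data after deployment, which production systems rarely do automatically.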
 
while doctors focus on more complex disorders or rare conditions with atypical features
The stuff they are the worst at? Nah. AIs will just comically overperform physicians here, like with most other things. In many cases, like with us, they are already much better. Truly, they get it so much more it's not even close, but that's mostly because things are maximally broken right now.

One major problem when people think about AI's role here is that they're missing the most important lesson we got in the last few years: the easy stuff is actually the hard stuff. AIs still struggle at basic math, although that's rapidly improving, but have been able to make poetry, stories, songs, art and so on for years.

Faking empathy is super easy. It literally only takes time and patience. Human physicians don't have that. AIs will have basically unlimited time per patient; they will do 1000000x better than humans here.

You don't want physicians to focus on more complex or rare conditions. It's the very last thing anyone should want. Actually, we should want as little human judgment as possible, preferably none. Human judgment is the literal worst thing you can rely on in all circumstances, with relating to the experience of other people a close second.
 
Actually, we should want as little human judgment as possible, preferably none. Human judgment is the literal worst thing you can rely on in all circumstances, with relating to the experience of other people a close second.
Human judgement is 100 % required to create an AI model. The human provides the training data, and decides which meaning(s) to assign to the numerical value(s) that the model outputs. The human also decides how and where to implement the model. Human judgement isn’t going anywhere anytime soon.
 
Human judgement is 100 % required to create an AI model. The human provides the training data, and decides which meaning(s) to assign to the numerical value(s) that the model outputs. The human also decides how and where to implement the model. Human judgement isn’t going anywhere anytime soon.
Yes, but we should minimise this as much as possible.

There exist AI models where you don’t even have human-labelled training data; they just learn to distinguish patterns over time without human input. This is called “unsupervised learning”.
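For what it’s worth, here is a small sketch of what that looks like in practice, using k-means clustering on made-up, unlabelled numbers (standing in for, say, measurements of specimens):

```python
# Minimal sketch of unsupervised learning: the algorithm is never given labels,
# it just groups samples by the patterns it finds. The data here is synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
group_a = rng.normal(loc=0.0, scale=0.5, size=(50, 4))   # unlabelled measurements
group_b = rng.normal(loc=3.0, scale=0.5, size=(50, 4))   # unlabelled measurements
X = np.vstack([group_a, group_b])

# k-means discovers two clusters without ever seeing a label.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(clusters[:5], clusters[-5:])  # cluster assignments for the first/last samples

# A human still has to decide what cluster 0 and cluster 1 mean;
# that interpretation step is where judgement (and bias) can re-enter.
```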
 
There exist AI models where you don’t even have human-labelled training data; they just learn to distinguish patterns over time without human input. This is called “unsupervised learning”.
The AI has to get the data from somewhere even if it isn’t labeled, and the human still has to decide on how to interpret the output.
 
The AI has to get the data from somewhere even if it isn’t labeled, and the human still has to decide on how to interpret the output.
Unsupervised learning is often used to differentiate biological species, or to group proteins or things like that, so this data has very little human influence.

And yes, obviously humans will interpret the results; the goal, though, is to minimise the points where human biases can be inserted.
 
Unsupervised learning is often used to differentiate biological species, or to group proteins or things like that, so this data has very little human influence.
That’s a trait of the problem it’s being applied to, not the type of algorithm.

But I agree that minimizing human bias is the wanted path in most cases.
 
That’s a trait of the problem it’s being applied to, not the type of algorithm.
Well, there is less interest in applying unsupervised learning to human language and things like that, because that kind of data by definition already comes with subjective labels.

It’s more interesting to apply to places where it isn’t influenced as much by human cognition and social constructs… in my opinion.
 
I don’t have an opinion on that. I’m just for reducing any source of bias and using the best tools (and not the hyped ones) to solve any given problem.
 
There do exist adaptive neural networks — that can adapt their weights over time — but as of now those are rarely used.
I had the impression that at least some AIs were given goals, such as "identify which of these x-ray images shows cancer", and that they train from images with known findings. Once that's done, they should continue to adapt to feedback, such as false positives (they cut the patient open and found no cancer) and false negatives (the "cancer free" patient returned to the hospital and a large tumor was found in the same location). Do today's AIs not adapt? Can they not adapt to a different x-ray machine's images?
 
I had the impression that at least some AIs were given goals, such as "identify which of these x-ray images shows cancer", and that they train from images with known findings. Once that's done, they should continue to adapt to feedback, such as false positives (they cut the patient open and found no cancer) and false negatives (the "cancer free" patient returned to the hospital and a large tumor was found in the same location). Do today's AIs not adapt? Can they not adapt to a different x-ray machine's images?
What would happen in the case of “errors” with today’s AIs is that you might “fine-tune” a model by training it on previous errors to help its accuracy. But that isn’t an automatic process, and in doing so, you’re creating a new model.
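A rough sketch of that manual fine-tuning step, reusing the model file from the earlier sketch; the error cases, labels and file names are all made up, and partial_fit stands in for whatever extra training pass a real pipeline would run:

```python
# Rough sketch of "fine-tune the model on previous errors" as a deliberate,
# offline step. The error cases and file names are hypothetical.
import numpy as np
import joblib

# Load the existing, frozen model (saved after its original training run).
old_model = joblib.load("trained_network.joblib")

# Cases the old model got wrong, now with corrected labels
# (e.g. "cancer free" reads that later turned out to be tumours).
rng = np.random.default_rng(0)
X_errors = rng.normal(size=(20, 20))
y_corrected = rng.integers(0, 2, size=20)

# An extra training pass on the error cases nudges the weights.
old_model.partial_fit(X_errors, y_corrected)

# The result is effectively a *new* model artifact that has to be
# re-validated and redeployed; nothing about this happens automatically.
joblib.dump(old_model, "trained_network_v2.joblib")
```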
 