AI better than physicians. (With Eric Topol)
https://www.nytimes.com/2025/02/02/...ytcore-ios-share&referringSource=articleShare
Is this a concern here as well? I imagine that there might be enough complex cases to occupy the specialists full time?
> But will that replacement be ethical, on which grounds will it be and to whose will and advantage?

The BPS lobby, of course!
> Will we get informed and treat this as political, or scientific, or pragmatic?

No. We don’t do that with anything today, so there’s no reason to believe that it will happen with AI. The only exception is the EU’s AI Act.
> they never learn again — their neural network stays exactly the same…

Some do learn over time.
> Some do learn over time.

I’m not sure any of these are examples of the AI “learning” over time.
There’s a case from Norway where they use AI to create 3D models of organs before surgery. That used to be done manually by doctors using a program that looks like Paint. Using AI, they were able to do the surgery the same day that they did the imaging - saving loads of time and money.
There’s another case where they use AI for scheduling at hospitals. This drastically reduced the downtime for ORs.
For now, AI is most suitable for «administrative» tasks.
> I’m not sure any of these are examples of the AI “learning” over time.

Yeah, I should have made that clear. My bad! Those are examples of how AI can be applied to HC today.
> There do exist adaptive neural networks — that can adapt their weights over time — but as of now those are rarely used.

If you let any of them loose on the web, someone will find a way to break them within a few hours.
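For what “adapting weights over time” can look like in practice, here is a minimal sketch of online (incremental) learning, assuming scikit-learn is available. It uses a simple linear classifier rather than a neural network, and the streaming data is synthetic, but the idea of updating weights as new labelled examples arrive is the same.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(random_state=0)   # linear classifier trained by stochastic gradient descent
classes = np.array([0, 1])              # partial_fit needs the full label set up front

# Initial training batch (synthetic, illustrative data)
X0 = rng.normal(size=(100, 5))
y0 = (X0[:, 0] + X0[:, 1] > 0).astype(int)
model.partial_fit(X0, y0, classes=classes)

# Later, new labelled examples keep arriving and the same model keeps
# updating its weights instead of staying frozen after training.
for _ in range(10):
    X_new = rng.normal(size=(20, 5))
    y_new = (X_new[:, 0] + X_new[:, 1] > 0).astype(int)
    model.partial_fit(X_new, y_new)     # incremental weight update, no full retrain
```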
> while doctors focus on more complex disorders or rare conditions with atypical features

The stuff they are the worst at? Nah. AIs will just comically overperform physicians here, like with most other things. In many cases, like with us, they are already much better. Truly, they get it so much more it's not even close, but that's mostly because things are maximally broken right now.
> Actually, we should want as little human judgment as possible, preferably none. Human judgment is the literal worst thing you can rely on in all circumstances, with relating to the experience of other people a close second.

Human judgement is 100 % required to create an AI model. The human provides the training data, and decides which meaning(s) to assign to the numerical value(s) that the model outputs. The human also decides how and where to implement the model. Human judgement isn’t going anywhere anytime soon.
> Human judgement is 100 % required to create an AI model. The human provides the training data, and decides which meaning(s) to assign to the numerical value(s) that the model outputs. The human also decides how and where to implement the model. Human judgement isn’t going anywhere anytime soon.

Yes, but we should minimise this as much as possible.
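To make the point about assigning meaning concrete, here is a tiny illustrative sketch: the model only emits a number, and the threshold and labels below are invented, human-chosen conventions rather than anything the model prescribes.

```python
# The model produces a number; a human decides what that number means and
# where the decision cut-off lies. Threshold and wording are illustrative
# assumptions, not clinically validated values.

def interpret(probability: float, threshold: float = 0.5) -> str:
    """Map a raw model output (assumed probability of disease) to a human-chosen action."""
    return "refer to specialist" if probability >= threshold else "routine follow-up"

model_output = 0.73                               # pretend this came from a trained classifier
print(interpret(model_output))                    # policy with a human-chosen 0.5 threshold
print(interpret(model_output, threshold=0.9))     # a more conservative human-chosen policy
```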
> There exist AI models where you don’t even have human-labelled training data; they just learn to distinguish patterns over time without human input. This is called “unsupervised learning”.

The AI has to get the data from somewhere even if it isn’t labeled, and the human still has to decide on how to interpret the output.
> The AI has to get the data from somewhere even if it isn’t labeled, and the human still has to decide on how to interpret the output.

Unsupervised learning is often used to differentiate biological species, or to group proteins or things like that, so this data has very little human influence.
> Unsupervised learning is often used to differentiate biological species, or to group proteins or things like that, so this data has very little human influence.

That’s a trait of the problem it’s being applied to, not the type of algorithm.
> That’s a trait of the problem it’s being applied to, not the type of algorithm.

Well, there is less interest in applying unsupervised learning to human language and things like that, because it already has, by definition, subjective labels.
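As a concrete illustration of the last few comments, here is a minimal unsupervised-learning sketch, assuming scikit-learn. The "protein-like" feature vectors are synthetic, and naming what each discovered cluster means is still left to a human.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Two synthetic populations of 4-dimensional feature vectors; no labels are supplied.
group_a = rng.normal(loc=0.0, scale=1.0, size=(50, 4))
group_b = rng.normal(loc=5.0, scale=1.0, size=(50, 4))
X = np.vstack([group_a, group_b])

# The algorithm groups similar rows on its own ("unsupervised").
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:5], kmeans.labels_[-5:])   # only cluster ids, e.g. 0s and 1s

# Deciding what "cluster 0" and "cluster 1" actually are (species A vs. B,
# protein family X vs. Y, ...) remains a human judgement call.
```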
> There do exist adaptive neural networks — that can adapt their weights over time — but as of now those are rarely used.

I had the impression that at least some AIs were given goals, such as "identify which of these x-ray images shows cancer", and trained from images with known findings. Once that's done, it should continue to adapt to feedback, such as false positives (they cut the patient open and found no cancer) and false negatives (the "cancer free" patient returned to the hospital and a large tumor was found in the same location). Do today's AIs not adapt? Can they not adapt to a different x-ray machine's images?
> I had the impression that at least some AIs were given goals, such as "identify which of these x-ray images shows cancer", and trained from images with known findings. Once that's done, it should continue to adapt to feedback, such as false positives (they cut the patient open and found no cancer) and false negatives (the "cancer free" patient returned to the hospital and a large tumor was found in the same location). Do today's AIs not adapt? Can they not adapt to a different x-ray machine's images?

What would happen in the case of “errors” with today’s AIs is that you might “fine-tune” a model by training it on previous errors to help its accuracy. But that isn’t an automatic process, and in doing so, you’re creating a new model.
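A rough sketch of that fine-tuning workflow, assuming scikit-learn and synthetic data: the model, features, and labels are purely illustrative, and the point is only that the update is a deliberate, human-initiated step that produces a new model version rather than something the deployed model does on its own.

```python
import copy
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
classes = np.array([0, 1])                        # 0 = "no finding", 1 = "cancer" (illustrative labels)

# v1: trained once on the original labelled images (represented here as feature rows).
X_train = rng.normal(size=(200, 10))
y_train = (X_train[:, 0] > 0).astype(int)
model_v1 = SGDClassifier(random_state=0)
model_v1.partial_fit(X_train, y_train, classes=classes)

# Clinical follow-up later reveals cases v1 got wrong (confirmed false positives
# and false negatives). Those become new, corrected training examples.
X_errors = rng.normal(size=(20, 10))
y_errors = (X_errors[:, 0] > 0).astype(int)

# Fine-tune: start from v1's weights, continue training on the error cases, and
# treat the result as a separate model version to be re-validated before deployment.
model_v2 = copy.deepcopy(model_v1)                # keep v1 intact; nothing here is automatic
model_v2.partial_fit(X_errors, y_errors)
```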