“AI will cure cancer” misunderstands both AI and medicine

SNT Gatchaman

by Rachel Thomas, PhD

(Discusses long Covid, systemic medical failures and misdiagnoses)

https://rachel.fast.ai/posts/2024-02-20-ai-medicine/

An AI algorithm that reads MRIs more accurately would not have helped neurologist Ilene Ruhoy, MD, PhD, when she developed a 7 cm brain tumor. The key obstacle to her treatment was getting fellow neurologists to believe her symptoms and even order an MRI in the first place. “I was told I knew too much, that I was working too hard, that I was stressed out, that I was anxious,” Dr. Ruhoy recounts. Eventually, after her symptoms worsened further, she was able to get an MRI and was urgently sent in for a 7-hour surgery.

Again, MRI-reading AI cannot help patients whose doctors won’t order an MRI in the first place. On average, it takes lupus patients 7 years to receive a correct diagnosis, and 1 in 3 are initially misdiagnosed, with doctors incorrectly attributing their symptoms to mental health issues. Even healthcare workers are often shocked at how quickly they are dismissed and disbelieved once they become patients. For instance, interviews with a dozen healthcare workers revealed that their colleagues began discounting their expertise as soon as they developed Long Covid.

This disregard of patient experience and patient expertise severely limits medical knowledge. It results in delayed diagnoses, misdiagnoses, missing data, and incorrect data. AI is great at finding patterns in existing data, but it cannot solve the problem of missing and erroneous underlying data. Furthermore, there is a self-reinforcing feedback loop around the lack of medical data for poorly understood diseases: doctors disbelieve patients and dismiss them as anxious or as complaining too much, failing to gather the data that could illuminate the disease, which in turn keeps the disease poorly understood and the disbelief intact.

For medical research more generally, the Patient Led Research Collaborative (focused on Long Covid) is an encouraging model. I hope that we can see more efforts within medical AI to center patient expertise.
 
This is certainly true, but it misses the point that one major reason MDs so commonly disbelieve patients is that resources, including their time, are severely limited. Accelerating all our technological tools would lessen the burden of ordering tests that MDs are currently strongly encouraged to limit as much as possible.

There is a human problem, but the solution is mostly economic: removing the scarcity that lower levels of technology force. This applies as much to testing as it does to the availability of MDs. Once medical AIs are available 24/7, with roughly the capacity you would get if every single person on Earth had a personally assigned MD, everything changes. Doctors no longer have any reason to gaslight; it becomes evidently counterproductive, because everything that happens is recorded accurately.

Cultural changes usually follow technological changes, not the other way around. The culture of medicine is built this way because scarcity and limited technology have always been strongly limiting factors. Crossing to another level of technology is a bit like moving from letters and packages to modern telecommunications infrastructure: you are no longer limited by the availability of paper and ink, or by the time it takes to scribble the information you have onto a physical medium.
 