Why Chronic Illness Patients Feel Safer Talking to AI Than to Doctors

I agree with the concerns but also recognise benefits. Do people think AI really carries more risk than any other "tool" that has ever been created? (Genuinely wondering.) From the humble stick, which can be used to reach something out of range, to poke someone's eye out, or even possibly to commit homicide, it seems that every new tool we humans invent is going to be used by some percentage of people in a harmful way.

Engaging with human counsellors and therapists has likewise caused harm to some people. Engaging with the officially sanctioned medical system has widely caused harm through history, and I don't mean just as regards "illegal" illnesses that haven't been officially "medically sanctioned". Medical errors happen, resulting in death, injury or harm. I've certainly been harmed by official medicine. I have a friend who died due to medical error. And in the latter two cases there has certainly been no accountability or responsibility.

Having worked in healthcare in the past, I saw a number of examples where no accountability or responsibility was taken on occasions when it should have been, even at the level of statutory organisations that investigated and had a responsibility to do so.

For all the exceptional cases we hear of people misusing AI, there are also many we don't hear about who were helped. I do think education is needed about what AI in its current form actually is, its huge limits, how it can hallucinate information, and so on. But it's hard to see the world going back to not using it at all.
 
I agree with the concerns but also recognise benefits. Do people think AI really carries more risk than any other "tool" that has ever been created? (genuinely wondering)
In some ways, yes, I think there is more risk than with previous tools, because we've never before had a tool we can speak to in the way we are used to speaking to other humans. That has been a unique characteristic of people, and it no longer is. Regardless of any other capability, that is a huge change with huge ramifications, I think. It's where some of the downsides we already see come from, but also where some of the upsides are enabled.

For all the exceptional cases we hear of people misusing AI, there are also many we don't hear about who were helped. I do think education is needed about what AI in its current form actually is, its huge limits, how it can hallucinate information, and so on. But it's hard to see the world going back to not using it at all.
Agree. We’re not going back. I wish a few companies and people hadn’t pushed this technology in the ways they have. Some of the problems and pushback are entirely due to them. It’s made the job of educating and discussing the risks and rewards that much harder. There was an alternative path that more responsible people in the field were taking.

There are some great uses, and there will be more. But there are also areas I think we will, over time, want to keep human through choice or preference as much as anything, as well as areas for which today's tools are definitely not suitable or appropriate, and for which the tools of the future likely won't be either for a long time.

Of course, anybody who says they know where things are heading is wrong, me included. If you’d given me what we have now a decade ago, I would have been surprised and would have made all sorts of incorrect assumptions. People in the field I’ve spoken with, or just listened to, whom I respect say the same.
 
I wish a few companies and people hadn’t pushed this technology in the ways they have.
I agree with this and have in fact discussed the issue with an AI, which was only too happy to help write a letter to Microsoft spelling out the problems! It completely oversimplifies and generalises everything and produces a great deal of incorrect or out-of-date information.

Another problem with it, as is the case with everything on the internet, is that it tends to be US-biased, simply because, I guess, there is so much more material out there from the US.
Using chat-based AI, you do need to frame everything and repeat that you want information that is valid in the UK. This is particularly true with regard to health issues, as the NHS is a very restricted and particular set-up.

Examples of where I find it very useful are analysing blood tests, in particular lipids; comparing and analysing product ingredients; and fact-checking things I get told by NHS 'health professionals'.

It is also very good if you need to vent on any subject without fear of upsetting anyone!
 
I agree with the concerns but also recognise benefits. Do people think AI really carries more risk than any other "tool" that has ever been created?
What I see mostly is people who don't think that AI will improve enough, and are judging its final value entirely based on what they've seen so far. And I don't mean here, this is what's happening in general.

From that perspective, it kind of makes sense. Ironically, it's the same mistake psychosomatizers make: every model is coherent with itself when evaluated on what the model itself says. Current AI is still too limited. It has massively improved since the first loudly publicised blunders, but those blunders created a sort of checkpoint past which most people can't imagine it getting any better, even though it already has. The reasoning goes: since it's currently unable to fully replace people, it will never be able to. Fallacious, but consistent with itself.

All of those issues go away the second a medical AI is as good as an average physician. At that point, medical AIs will already be 100x better than the current system, because availability is a major component of performance, and instant 24/7 availability isn't just transformative, it's completely revolutionary. That is also probably the point at which AI is capable of replacing most non-manual jobs.

But it's not ready yet, therefore it will never get there: a comically famous mistake. And ironically, not especially different from "we don't know what causes ME/CFS, therefore we will never find anything, and that means it doesn't actually exist". It's definitely funny how humans can be ;)
 
Engaging with human counsellors and therapists has likewise caused harm to some people. Engaging with the officially sanctioned medical system has widely caused harm though history, and I don't mean just as regard "illegal" illnesses that haven't been officially "medically sanctioned". Medical errors happen, which result in death or injury or harm. I've certainly been harmed by official medicine. I have a friend who died due to medical error. And in the latter two cases there has certainly been no accountability or responsibility.
This has been pretty damning as well. There is all this talk about how people have committed suicide because they were encouraged by AIs, and how AIs should be banned as a result. Yet we know, for a fact, that it's common for sick people with disabling chronic illnesses to commit suicide because of the total lack of support, and outright denial, from health care, and... no one cares but us. And none of that is down to edge cases where someone went rogue; it is explicitly encouraged, even defended at all costs as perfectly good.

Biopsychosocial/psychosomatic ideology has killed and ruined far more lives than AI has, and likely always will, unless AI is used by people in power to commit atrocities; it's unlikely an AI would ever do that entirely on its own, for its own motivations. And yet the ideology goes completely unchallenged; the harm is completely denied.

Humans are so freaking weird.
 
Do people think AI really carries more risk than any other "tool" that has ever been created?
One difference is the availability of the tool. You can buy lab equipment and do self-diagnostics or create your own pharmaceuticals, but neither those tools nor the knowledge to use them is readily available. Medical specialists can be considered a tool for diagnosis and treatment, but they are also not readily available (there could be a multi-year wait). AI medical advice is at people's fingertips. Furthermore, doctors have some (flawed, inadequate) systems in place for quality control, while such systems for AIs have yet to be developed. There will be news reports about harm from these new tools, but that feedback leads to improvements. Weren't there deaths and injuries from airbags, anti-skid systems, and other such new safety features?

I don't see AI as being more risky; all new tools have risks and benefits and civilization simply has to adapt to them.
 
Article on the use of ChatGPT for medical issues:
Another draw to AI for some people is its perceived lack of judgment. ‘The most comforting thing is that it doesn’t judge me about my questions. It doesn’t roll its eyes or say “that’s an absurd thing to worry about”.’
Edward, 33, has spent years using the app for health advice, but says it’s his friend’s recent hospital stay for a heart attack that really tells the story of why it’s so useful.
‘My friend, in his late 70s, plugged the medical report into ChatGPT after he felt a bit disenchanted by the bedside manner of the doctor,’ the AI advisor explains.

‘The app, however, was empathetic, told him exactly what the report was saying, and how severe it was in a gentle way.’
It’s not just patients embracing AI, either. More than half of GPs use it in their clinical practice, according to a survey by the Royal College of General Practitioners (RCGP) and the health organisation Nuffield Trust.

However, it’s worth pointing out that most use algorithmic systems for note-taking, rather than clinical judgement.

Dr Becks Fisher, the director of research and policy at the Nuffield Trust, says GPs like her ‘expect’ patients to come in armed with ChatGPT advice.

‘It’s very difficult to make broad generalisations about the accuracy of AI tools, partly because how useful they are will depend on what information the user prompts them with,’ she adds.
 
From this I take it that AI removes the tone of voice, the eye-rolling, and the laughing at suggestions that I’ve received from in-person GP interactions, as well as misleading claims such as CFS clinic staff saying to your face that “we don’t do GET”. It wouldn’t remove the dismissal or downplaying inherent in the wording, e.g. churning out BACME “pacing up”, emphasis on anxiety, avoidance, etc.
 
The misleading claims such as CFS clinic staff saying to your face that “we don’t do GET”. It wouldn’t remove the dismissal or downplaying inherent in the wording, e.g. churning out BACME “pacing up”, emphasis on anxiety, avoidance, etc.
I see no reason why AI will be any better on these recommendations. Who knows what nonsense it might generate, depending on the questions the person asks. And I can do without the pseudo-empathy from a machine.
 
Let’s be clear: if you know how to prompt AI well, it will be more knowledgeable about ME/LC and PEM than 99%+ of the doctors out there.
Always ask for reference papers (and let the AI scavenge the internet for patient experiences),
and cross-check the S4ME forum for a second opinion.

Then hope to find that 1% doctor that’s willing to discuss these results and treatments with you.

At the speed things are moving now, I think doctors will start using AI more widely within 5 years.
- still, the bottleneck is the amount of published research and papers that feed the AI
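As an illustration only (not a real API or product; the function and wording here are hypothetical), the prompting pattern suggested above — ask for citable reference papers with every claim, so each one can be cross-checked (e.g. against the S4ME forum) before acting on it — could be sketched like this:

```python
# Hypothetical sketch: wrap a health question with instructions to cite
# sources, so every claim the AI makes can be verified independently.

def build_prompt(question: str) -> str:
    """Return the question plus an instruction to cite verifiable papers."""
    return (
        f"{question}\n"
        "For every claim, cite the supporting paper (authors, year, journal) "
        "so I can verify it myself, and say clearly when the evidence is "
        "weak or when you are uncertain."
    )

print(build_prompt("What is known about post-exertional malaise (PEM) in ME/CFS?"))
```

The point of the wrapper is only that the verification instruction travels with every question, rather than being remembered ad hoc; whatever chat tool is used, the same framing can be typed by hand.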
 
But, especially this early on in its development, there is way too much danger of it just being a more efficient version of spreading and reinforcing the same old ignorance and prejudice of living humans, with a superficial veneer of being more objective and neutral and authoritative.
GIGO still applies. Garbage in, garbage out. If the AI feeds on medical opinion then the quality of the opinion matters. As does the quality of studies. As does the volume of studies. If poor science dominates due to financial or other interests, it will have a tendency to bias the AI response. AI is a fantastic tool, and getting better, but it currently needs a lot of real world grounding by qualified people.
 
Let’s be clear: if you know how to prompt AI well, it will be more knowledgeable about ME/LC and PEM than 99%+ of the doctors out there.
Always ask for reference papers (and let the AI scavenge the internet for patient experiences),
and cross-check the S4ME forum for a second opinion.

Then hope to find that 1% doctor that’s willing to discuss these results and treatments with you.

At the speed things are moving now, I think doctors will start using AI more widely within 5 years.
- still, the bottleneck is the amount of published research and papers that feed the AI
While I agree that some S4ME members, scientists, doctors, and others who are sufficiently aware of the pitfalls may be able to make positive use of some AI, I think the majority of people can be very badly misled.

But I think this thread topic is not really about whether AI adds value or makes people feel more understood and supported.

I think the real message is - doctors, do better. Listen to patients, admit when you don't know, offer practical supportive care, keep learning, be open minded, and so on.
 