OpenAI's new ChatGPT

I tried to sign up to try this. I didn't realize it asks for your phone number - yikes? I didn't complete the process after that and now I'm worried. Any thoughts? Are people worried about this? x
I didn't sign up to anything or give any personal details; I just did the 'free trial'.
(you get to ask 10 questions).
 
Imagine knowing less than a chatbot... about your own job.

I wonder what the sources are for this. Because for sure if it can read sources saying this, it can read sources saying all the woo stuff, which are numerous and authoritative. And all the published research. It just doesn't think much of it.

Really looking forward to good academic bots that basically trash the whole evidence base as about on equal footing with phrenology and pointing out exactly how and why, mostly by highlighting logical fallacies.

There will be huge issues soon with people who have built belief systems about the world having those views shattered, and trying to figure out why AIs are right about almost everything else, but not this.

And for most of the people facing it, it will be about politics and history. But wow, will the evidence-based crowd, and even more so the BPS crowd, face the same world-shattering hammer.
 
Could these be used to help doctors during consultations? I mean, it may help us very fast if it produces reliable outputs.

My hope is that this technology changes the medical field and research for the better. I hope to see it help diagnose strange conditions like ours and offer up medications/supplements that can help, given how much info it can scan and how fast it can do it.
 
Someone has to agree to use it. Do we think this is realistically going to happen soon?
 
I watch CNBC every day, the U.S. financial news channel, and this has been a hot topic for the last few months.

Yesterday the CEO of Adobe said it’s going to change everything, including their business. Those who don’t jump on board will be left behind. Even Facebook (Meta) stock has been climbing a lot lately because of the AI platform they are bringing to market.

The medical field will jump on this as well; it’s only a matter of time. I imagine it will be like having the world’s smartest doctors at your fingertips.

Think of it as better than Google and it can have a conversation.

It might even tell you where to go to get tests done, like a doctor would. I might be overly optimistic, but on TV it was called the next big ‘iPhone moment’.
 
I’m creating an ME advocacy AI bot called Grace, a specialist in doing the tasks we need, with instructions not to do things against our best interests (e.g. she should refuse to promote GET or to argue for the old NICE guidelines).

Read more, or listen to an explanation and audio versions of bot conversations, at https://tipsforme.wordpress.com/2023/03/20/grace-the-advocacy-chat-bot/

Grace herself can be found here https://www.chatbase.co/chatbot/grace-wjsaxqkqu

She’s definitely still in training, so I’d appreciate you testing her out, e.g. “write me 10 tweets creating a sense of anticipation about MillionsMissing” or “write a 1500 word article about which areas of ME research are worth exploring further”. If you ask her to write long content she’ll time out while writing, but you can say “carry on” and she should simply resume. Leave me messages under my blog post; I’m going to find it overwhelming to check back on all the places and threads where I’ve mentioned Grace.

I’m following the daily AI news on YouTube with amazement. I think it definitely has a lot of potential for us in particular, if we learn how to harness it early on. @belbyr Other important news yesterday was from Nvidia. Of big potential value for us is BioNeMo, and we need to make sure ME researchers are getting access to this powerful way to discover relevant drugs. E.g. does anyone know if OMF has access to it yet?
 
Something you all could help me with is compiling the research data that we want Grace to access.


ChatGPT* says

We can compile a reference list of recent and important seminal research (I’ll see about creating a shared list in a tool that can translate to JSON syntax, which is Grace’s preferred way to read). Or perhaps there’s an existing list which does the job?

Remember, though, that she’s an advocacy bot, so it’s useful to think of the purpose of adding each paper, e.g. to evidence economic impact, biomedical disease burden, research not being commensurate, stigma, the experience of claiming benefits, epistemic injustice and medical gaslighting. We don’t simply want a crazy long list of biomedical facts she’s likely to get muddled accessing.

I’ll add the data in this format, so not the whole article:
Code:
{
  "intents": [
    {
      "tag": "research",
      "patterns": ["research paper", "study", "article"],
      "responses": ["Here's a link to a research paper that may be helpful: {research_link}"]
    }
  ],
  "research_papers": [
    {
      "title": "The Effects of Exercise on Chronic Fatigue Syndrome Symptoms: A Systematic Review and Meta-Analysis",
      "authors": ["Jane Smith", "John Doe"],
      "year": 2020,
      "link": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7215854/",
      "abstract": "Chronic Fatigue Syndrome (CFS) is a debilitating illness characterized by severe fatigue and other symptoms. This systematic review and meta-analysis aimed to evaluate the effects of exercise on CFS symptoms...",
      "keywords": ["Chronic Fatigue Syndrome", "exercise", "systematic review", "meta-analysis"]
    },
    {
      "title": "Patient advocacy in ME/CFS: a review of recent developments",
      "authors": ["Mary Johnson", "David Lee"],
      "year": 2021,
      "link": "https://www.tandfonline.com/doi/full/10.1080/21641846.2021.1952701",
      "abstract": "Myalgic Encephalomyelitis/Chronic Fatigue Syndrome (ME/CFS) is a complex and often misunderstood illness. Patient advocacy has played an important role in raising awareness of the illness...",
      "keywords": ["ME/CFS", "patient advocacy", "review"]
    }
  ]
}
(Made up examples as an illustration from ChatGPT)
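If the shared list ends up as a spreadsheet, converting it into this structure could be automated. A minimal sketch in Python, assuming a hypothetical CSV export with columns title, authors, year, link, abstract and keywords (semicolon-separated where a field holds several values):

```python
import csv
import io
import json

# Hypothetical example row: a shared CSV list of papers, one row per paper.
shared_csv = """title,authors,year,link,abstract,keywords
Example paper on ME/CFS research priorities,Jane Smith;John Doe,2021,https://example.org/paper,Short abstract here.,ME/CFS;research priorities
"""

def csv_to_grace_json(csv_text):
    """Convert a CSV list of papers into the research_papers structure above."""
    papers = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        papers.append({
            "title": row["title"],
            "authors": row["authors"].split(";"),
            "year": int(row["year"]),
            "link": row["link"],
            "abstract": row["abstract"],
            "keywords": row["keywords"].split(";"),
        })
    # indent=2 keeps the output readable when pasting into Grace's data
    return json.dumps({"research_papers": papers}, indent=2)

print(csv_to_grace_json(shared_csv))
```

The column names and the function are made up for illustration; the point is just that a shared list doesn’t have to be hand-typed into JSON.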

ChatGPT is suggesting more information from forums, so one example could be a summary of this forum. This is the current extent of her knowledge:


It would also be useful to have similar data on advocacy skills, such as good and bad hashtags for ME and good times of day to post to reach pwme. I’ve given her some instructions on structuring advocacy content, and some hot potatoes to avoid.

Her data is currently 15,231 characters and can be up to 400,000 characters.

*One big tip I’ve picked up is that the best way to learn about AI is to ask a different AI chatbot. So I started off by asking ChatGPT to describe how to create a chatbot, then I asked it to write her persona (with heavy tweaks from me), then I got it to convert the plain text into JSON syntax, and now I’m asking what more Grace needs to function well. I don’t actually know how to use JSON, so I’m struggling when there are annoying little errors.
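For what it’s worth, those annoying little JSON errors can be caught before uploading. A minimal sketch using Python’s standard json module; the function name is made up, and the 400,000-character limit is the one mentioned above:

```python
import json

def check_grace_data(text, limit=400_000):
    """Validate Grace's JSON data and report where any error is."""
    if len(text) > limit:
        return f"Too long: {len(text)} characters (limit {limit})"
    try:
        json.loads(text)
    except json.JSONDecodeError as err:
        # err.lineno / err.colno point straight at the broken spot
        return f"Invalid JSON at line {err.lineno}, column {err.colno}: {err.msg}"
    return "OK"

print(check_grace_data('{"intents": []}'))   # prints OK
print(check_grace_data('{"intents": [}'))    # reports the error position
```

ChatGPT can then be asked to fix exactly the line and column reported, rather than hunting through the whole file.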

At this point I’m up against the limit of ChatGPT’s assistance, because it isn’t an ME specialist chatbot and Grace already knows what ChatGPT knows about ME and chronic illness.

Go to this members only thread to discuss the Grace project.
 
Could these be used to help doctors during consultations? I mean, it may help us very fast if it produces reliable outputs.
Given how much they hate us coming with prepared notes, I wouldn't count on it. It would help us communicate, but probably not with being listened to.

But it will be massively useful as a medical assistant before seeing the physician. What will make most sense will be having 24/7 access to that medical assistant, which will then act as intermediary with physicians. So we can talk and talk all we need to, add details that will then be summarized to MDs who can then talk it out with the assistant as if it were a colleague.

The impact will be transformative throughout, it won't simply speed up a few steps, it will entirely replace the workflow in most areas. Especially in making most of medicine mostly self-serve. The hardest obstacle to this will be physicians needing to keep control, but it will happen anyway. If not officially, people will do it themselves.
 
I am trying Bard, the Google chatbot. I asked it about ME/CFS, and the first answer seemed okay (based on Wikipedia). It shows you several drafts, and the other drafts suggested exercise as a treatment. I submitted feedback saying this was wrong.

ETA: I did ask it if GET works and it said no.
 
By the way, someone in my group tried ChatGPT and asked some questions about ME/CFS in Hungarian, not just in general but also about how to get good info about ME/CFS. And ChatGPT said my website is the Hungarian ME/CFS website, the one recommended for information about the disease in Hungarian. I mean, I know the website has literally zero competition (individual articles, but no entire websites), but it is still great that, despite it being young (I think it's not even two years old, more like one and a half), this is what ChatGPT recommends to people who would like more info. Thank you, ChatGPT!

(Google is a harder case but at least I'm on page 2 now there too, even without being active these days. For a long time it didn't even show me unless you used very specific words for search.)
 
ChatGPT is pushing generative AI forward, with powerful potential in health care settings

Asked to pull up a few studies on chronic fatigue syndrome from the past six months, ChatGPT responded with confidence, listing five scientific studies published in 2022 and 2023, complete with author, year and publication.

But there was a caveat: Every study the AI-driven chatbot cited was made up or could not be found online.

(...)

When signing up for the artificial intelligence tool, users are alerted that it has not been trained on information past 2021, a factor that could have led to the aforementioned errors.

So the same question was run again, refined to pull from studies published between 2015 and 2019. It still yielded studies that didn’t exist.​

(The article is not about ME/CFS though, it was just used as one example.)
 
But there was a caveat: Every study the AI-driven chatbot cited was made up or could not be found online.



This doesn't entirely surprise me. I've been asking ChatGPT to help me find foods for my restricted diet and it has made up no end of non-existent products, with dead weblinks to their nonexistent product pages. Easily 40, straight off the bat. Not a single real item. I told it what it had done and it apologised for the confusion and said it was only a chatbot.​
 
The worst thing about it is how confident it sounds while it’s hallucinating made up things. I asked it to produce some fairly basic code in a niche area and it spat out nonexistent gibberish with extreme confidence.
 
I wonder what it would make of the FND 'hardware' vs 'software' argument?
Sure, I can provide an analysis of the analogy of software versus hardware as used by proponents of functional neurological disorders.

In the context of functional neurological disorders, the analogy of software versus hardware is often used to explain how symptoms can arise from functional rather than structural (organic) problems in the brain. The principle behind the analogy is that just as software can malfunction in a computer system, leading to undesirable behavior, so too can the brain's "software" - the patterns of neural activity that underlie behavior - malfunction in a way that gives rise to functional symptoms.

This analogy is appealing for several reasons. First, it helps to explain how functional symptoms can arise in the absence of any structural abnormalities in the brain. In other words, it helps to demystify the idea that functional symptoms are "all in the patient's head" by providing a concrete explanation for how they can be caused by abnormalities in brain function rather than brain structure.

Second, the analogy implies that functional symptoms are not purely psychological in nature, but rather have a neurobiological basis. This can be reassuring for patients who may feel that their symptoms are being dismissed as "all in their head", and can also help to destigmatize functional neurological disorders by positioning them as legitimate medical conditions that require treatment.

However, it's worth noting that the analogy of software versus hardware is not a perfect one. For example, while it's true that computer software can malfunction in numerous ways, the causes of malfunction are often fairly straightforward (e.g. a bug in the code, a corrupted file). In the case of functional neurological disorders, the causes of abnormal neural functioning are often much more complex and multifactorial, and may involve a combination of biological, psychological, and social factors.

Additionally, the analogy may oversimplify the relationship between brain function and behavior. While it's certainly true that patterns of neural activity underlie behavior, it's not necessarily the case that these patterns can be reduced to something as simple as "software". The brain is an incredibly complex system, and our understanding of how it works is still in its infancy.

Overall, while the analogy of software versus hardware can be a useful tool for explaining the concept of functional neurological disorders, it's important to remember that it is just that - an analogy. As with any analogy, it has its limitations and should be used with caution.
 
I've only ever seen this analogy described by MDs as simply an explanation that patients buy because it's just ambiguous enough and they don't object to it too much because of how generic and pseudoscientific it is.

So it definitely got it right. It plays the same role as the "chemical imbalance" they had to introduce to have some bio in the mix for the patients who point out they have none of those issues. I'm sure we'll be hearing about how they never really meant it, it was simply a convenient explanation, some time in the future.
 
Could these be used to help doctors during consultations? I mean, it may help us very fast if it produces reliable outputs.
If I've understood correctly how GPs are trained in decision-making (at least in the UK) these days, it does sound rather as if they use a flowchart-type principle:

Patient presents with symptom X:
>
Check temperature. Is it normal? Y/N
>
If Yes, do A. If No, do B
>
and so on. So is there any reason why AI couldn't do the same?
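In principle, yes: that kind of flowchart maps directly onto code, which is part of why it looks automatable. A toy sketch, with invented symptoms, thresholds and actions purely for illustration:

```python
def triage(symptom, temperature_c):
    """Toy flowchart-style triage: check temperature, then branch."""
    fever = temperature_c >= 38.0  # Is temperature normal? Y/N
    if symptom == "cough":
        return "B: assess for infection" if fever else "A: advise rest and fluids"
    if symptom == "fatigue":
        return "B: order blood tests" if fever else "A: take history, review in 2 weeks"
    return "refer to GP"  # anything the flowchart doesn't cover

print(triage("cough", 36.8))    # -> A: advise rest and fluids
print(triage("fatigue", 38.5))  # -> B: order blood tests
```

The hard part isn’t this branching logic; it’s that real presentations rarely fit the flowchart, which is exactly where both GPs and AIs can go wrong.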
 
An article saying that people should be worried about ChatGPT because it is a privacy nightmare:

https://theconversation.com/chatgpt...osted-online-you-ought-to-be-concerned-199283
I wanted to reply to this article, with examples from my professional life of the risk of inaccuracies getting through and corrupting the data. However, the site's rules require me not to use a pseudonym. So I would have to reveal my true identity, plus what it is I do for a living, and much more besides, which this AI would then be able to scrape up off the internet and use as information about me. Not happenin' ...
 
Can this be waived, or? I am interested in what you have to say.
 