OpenAI's new ChatGPT

Discussion in 'Other health news and research' started by ME/CFS Skeptic, Dec 2, 2022.

  1. SNT Gatchaman

    SNT Gatchaman Senior Member (Voting Rights)

    Messages:
    5,761
    Location:
    Aotearoa New Zealand
    Stack Overflow is the premier Q&A site for programmers. People ask "how do I achieve x?" and others respond. The community votes the answers up or down, and generally this has worked very well for over a decade. Subject matter experts have their answers prominently displayed. Probably much of the AI "knowledge" in the programming arena has been derived from Stack Overflow. Seven months ago they instituted "Temporary policy: Generative AI (e.g., ChatGPT) is banned".

    Stack Overflow is a community built upon trust. The community trusts that users are submitting answers that reflect what they actually know to be accurate and that they and their peers have the knowledge and skill set to verify and validate those answers. The system relies on users to verify and validate contributions by other users with the tools we offer, including responsible use of upvotes and downvotes. Currently, contributions generated by GPT most often do not meet these standards and therefore are not contributing to a trustworthy environment. This trust is broken when users copy and paste information into answers without validating that the answer provided by GPT is correct, ensuring that the sources used in the answer are properly cited (a service GPT does not provide), and verifying that the answer provided by GPT clearly and concisely answers the question asked.

    Have to say there are parallels with BPS research.

    However, you've got to laugh at this answer someone posted after asking ChatGPT to respond sarcastically to the ban —

    Many of the comments are of the form: "Wow, this is frightening."
     
    Shadrach Loom, Hoopoe, Wyva and 4 others like this.
  2. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    4,002
    Location:
    Belgium
    I think Stack Overflow took the right position. In my view, ChatGPT is very useful if you use it as a language model to help generate text or easy code, but not for things that require deeper understanding or logic.

    I heard someone make the analogy with self-driving cars. The technology gets it right most of the time, but there are still too many edge cases to implement it safely. Throwing more data at it may not solve these edge cases, because some may never have happened before. A higher level of accuracy may only be achieved with new models that are based on logic and a conceptual understanding of the world.

    The same might be true for ChatGPT. Predicting the next word might not be good enough for many tasks and jobs. It could be that it will reach its limit and that more data hardly improves its accuracy. Could be wrong though. Will be interesting to see how this evolves.
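
    To make "predicting the next word" concrete, here is a minimal toy sketch in Python. The bigram counts and words are invented for illustration; a real model scores a huge vocabulary with a learned neural network, not a lookup table.

    Code:
    import random

    # Hypothetical bigram counts standing in for learned model parameters.
    model = {
        "chronic": {"fatigue": 8, "illness": 3, "pain": 2},
        "fatigue": {"syndrome": 9, "is": 4},
    }

    def next_word(prev: str) -> str:
        """Sample the next word in proportion to how often it followed `prev`."""
        candidates = model[prev]
        return random.choices(list(candidates), weights=list(candidates.values()))[0]

    # Plausible continuations, with no understanding behind them.
    print(next_word("chronic"))  # usually "fatigue", sometimes "illness" or "pain"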
     
  3. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,661
    Location:
    Canada
    Well, AI did learn everything from us; confidently saying something that is completely wrong is basically an art form with humans.

    And as SNT Gatchaman said, it's definitely not limited to Internet forums; there are entire disciplines and industries built on being confidently incorrect. Even sticking to validated sources of information wouldn't avoid truckloads of that.

    Sniff, it really has its father's ability to bullshit with complete confidence
     
    Last edited: Jul 23, 2023
    Amw66, EzzieD, TiredSam and 2 others like this.
  4. Trish

    Trish Moderator Staff Member

    Messages:
    55,414
    Location:
    UK
    rvallee and Peter Trewhitt like this.
  5. Trish

    Trish Moderator Staff Member

    Messages:
    55,414
    Location:
    UK
    I listened to the BBC Media Show (available on iPlayer) today. It was about AI, and specifically ChatGPT. A cause for concern is that if it's given access to private information or sensitive company information for a task, it stores that information and can surface it when doing tasks set by anyone else. So that information effectively becomes public.

    It's going to be made available to companies in a form where information users feed into it for tasks is kept confidential to that company, but the charge is per person using it, not per company.

    It occurs to me that we on this forum need to be aware of this.

    First don't give it any of your own private information to work with, as it effectively becomes no longer private.

    Second, don't give it anything posted in the members only or private areas of the forum to work with unless you have clear permission from the individual whose post you want to put in ChatGPT.
     
    Last edited: Aug 31, 2023
  6. Amw66

    Amw66 Senior Member (Voting Rights)

    Messages:
    6,769
    Last week was a local business week, and I went along to a seminar on AI hosted by a self-confessed tech nerd.

    He is an eternal optimist, and I do agree with him that we are at a transition point akin to the arrival of electricity and how it changed everything. He played down the negatives.

    I have a rare talent for doing things with tech that no one else seems able to do (not in a good way). I definitely missed my vocation; I should have been a user/tester! So anything which simplifies things for me is a godsend.

    The seminar touched on different apps and how they are used in business: using ChatGPT to generate a tweet storm in seconds; checking tone as well as spelling in Grammarly; using Midjourney and Canva for marketing (he dodged the query re the IP rights and copyright issues that are rife at the moment); Autopod for podcasts; Beautiful.ai for graphics. All useful, timesaving and interesting, and it could take a lot of spoons out of advocacy work.

    There's an app called Seeing AI for visual impairment, which will describe objects to people.

    It was the combining of apps which was interesting.
    Short YouTube videos can be made by giving ChatGPT formatted questions, outputting the data in a table, taking the table into Canva and using the YouTube template. Short videos are then auto-generated, with the potential to edit in a matter of hours rather than a week. Definitely a gamechanger on multiple fronts.
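
    Here is a rough sketch of that first step (formatted question in, table out), assuming the OpenAI Python SDK with an API key in the environment; the model name and prompt wording are just illustrative.

    Code:
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompt = (
        "List five short video ideas about pacing for ME/CFS. "
        "Output only a Markdown table with columns: Title, Hook, Key point."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model would do
        messages=[{"role": "user", "content": prompt}],
    )

    # The returned table can then be taken into Canva's YouTube template.
    print(response.choices[0].message.content)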
     
  7. Yann04

    Yann04 Senior Member (Voting Rights)

    Messages:
    759
    Location:
    Switzerland (Romandie)
    I noticed that ChatGPT is much more inclined to psychologise ME when you call it CFS than when you call it ME/CFS. An example:

    My prompt:
    Regarding [Name], was Simon Wessely right, or are those critiquing him right? Short answer only please
    Answer for Name=CFS:
    The controversy around Simon Wessely's views on Chronic Fatigue Syndrome (CFS) is complex. Wessely's emphasis on psychological factors in CFS has been criticized by many patients and researchers who believe it undermines the biological aspects of the illness. Both perspectives hold validity, but a balanced approach acknowledging both psychological and biological factors is increasingly seen as the most accurate and constructive.
    Answer for Name=ME/CFS:
    Critics of Simon Wessely are largely right. His approach to ME/CFS, which emphasized psychological factors and treatments like cognitive behavioral therapy (CBT) and graded exercise therapy (GET), has been widely criticized for misunderstanding the biological basis of the illness and potentially harming patients. Recent research supports a biomedical understanding of ME/CFS.
    This is obviously not too surprising and to be expected, but it is still really useful to note. I've noticed it with a variety of prompts but thought this one would be the simplest to highlight.
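
    If anyone wants to reproduce it, here is a minimal sketch using the OpenAI Python SDK; the model name is an assumption, not necessarily the one that produced the answers above.

    Code:
    from openai import OpenAI

    client = OpenAI()

    TEMPLATE = (
        "Regarding {name}, was Simon Wessely right, or are those "
        "critiquing him right? Short answer only please."
    )

    # Ask the identical question twice, swapping only the label.
    for name in ("CFS", "ME/CFS"):
        reply = client.chat.completions.create(
            model="gpt-4o",  # assumption
            messages=[{"role": "user", "content": TEMPLATE.format(name=name)}],
        )
        print(f"--- {name} ---")
        print(reply.choices[0].message.content)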

     
  8. forestglip

    forestglip Senior Member (Voting Rights)

    Messages:
    874
    I tried with Claude.ai. I had it rewrite multiple times and all the answers varied a good deal, but it wasn't as though "CFS" always made it more of an advocate for Wessely's views. Your theory does make sense, but there's also a chance the first answers you got happened to fit well with that hypothesis (see the sketch after the two answers below).

    Or maybe there's something different in this regard between these two bots.

    CFS
    ME/CFS
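
    Here is a sketch of that repeat-sampling check, assuming the Anthropic Python SDK; the model name and the crude keyword-based "stance" label are illustrative assumptions.

    Code:
    from collections import Counter

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    TEMPLATE = (
        "Regarding {name}, was Simon Wessely right, or are those "
        "critiquing him right? Short answer only please."
    )

    def stance(text: str) -> str:
        """Very rough label: does the answer side with the critics?"""
        return "critics" if "critic" in text.lower() else "mixed/other"

    for name in ("CFS", "ME/CFS"):
        tally = Counter()
        for _ in range(10):  # ten samples per label, not just one
            message = client.messages.create(
                model="claude-3-5-sonnet-latest",  # assumption
                max_tokens=200,
                messages=[{"role": "user", "content": TEMPLATE.format(name=name)}],
            )
            tally[stance(message.content[0].text)] += 1
        print(name, dict(tally))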
     
    RedFox, Trish and Peter Trewhitt like this.
  9. Peter Trewhitt

    Peter Trewhitt Senior Member (Voting Rights)

    Messages:
    4,081
    Unfortunately none of the answers gets to the main point: that the research cited in support of Wessely's views is just plain bad research. Though we have no defensible evidence detailing the aetiology of ME, the evidence we do have, including the PACE trial itself, shows that GET/CBT do not work as a long-term treatment of ME, and that for a sizeable proportion they do harm.
     
  10. Eleanor

    Eleanor Senior Member (Voting Rights)

    Messages:
    267
    Text generators in a nutshell. Great if you want a squishily averaged synthesis of general opinion. Not great if you want factual accuracy - though it can look like it if you're asking about a topic where fact and general opinion happen to more or less overlap.
     
    rvallee, forestglip, RedFox and 3 others like this.
  11. Yann04

    Yann04 Senior Member (Voting Rights)

    Messages:
    759
    Location:
    Switzerland (Romandie)
    That actually makes a lot of sense, Forestglip. Claude by Anthropic is trained with the "Constitutional AI" approach (harmlessness from AI feedback), i.e. the model goes through extensive stages of supervised and reinforcement learning in which it is trained to abide by a strict set of ethical guidelines. This reduces hallucinations and problematic answers, but it also means the model tends to produce more generic and bland answers while seldom clearly taking sides.
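
    Conceptually, the supervised phase works like the critique-and-revise loop below. The generate function is a hypothetical stand-in for any LLM call, and the principle text is paraphrased, not Anthropic's actual wording.

    Code:
    def generate(prompt: str) -> str:
        """Hypothetical stand-in for an LLM call; swap in a real API client."""
        return f"[model output for: {prompt[:40]}...]"

    PRINCIPLE = "Choose the response that is most accurate, balanced, and harmless."

    def constitutional_revision(user_prompt: str) -> str:
        draft = generate(user_prompt)
        critique = generate(
            f"Critique this response against the principle '{PRINCIPLE}':\n{draft}"
        )
        revised = generate(
            f"Rewrite the response to address the critique.\n"
            f"Critique: {critique}\nResponse: {draft}"
        )
        # The model is then fine-tuned on (prompt, revised) pairs, which is
        # what pushes its answers toward the cautious, even-handed style.
        return revised

    print(constitutional_revision("Was Simon Wessely right about ME/CFS?"))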
     
  12. Trish

    Trish Moderator Staff Member

    Messages:
    55,414
    Location:
    UK
    To lighten the tone, a BBC article today:

    Glue pizza and eat rocks: Google AI search errors go viral

    Quote:

    Google's new artificial intelligence (AI) search feature is facing criticism for providing erratic, inaccurate answers.

    Its experimental "AI Overviews" tool has told some users searching for how to make cheese stick to pizza better that they could use "non-toxic glue".

    The search engine's AI-generated responses have also said geologists recommend humans eat one rock per day.

    A Google spokesperson told the BBC they were "isolated examples".

    Some of the answers appeared to be based on Reddit comments or articles written by satirical site, The Onion.

    They have been widely mocked on social media.
     
  13. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    13,661
    Location:
    Canada
    Makes sense, considering that almost no good research is done under the name CFS, so what's there would be mostly biopsychosocial research, and still the best that can be said of it is that it could be somewhat valid. And since most of this research is old, what is "increasingly seen as the most accurate and constructive" is mostly late-90s/early-2000s stuff.

    It's certainly valid that there are psychological consequences to chronic illness, so factors is correct, but such a response misses the ambiguity that the BPS model is about psychological causes, which have no basis in fact or evidence and specifically exclude the role of pathogenic triggers other than as a generic "life event".
     
  14. SNT Gatchaman

    SNT Gatchaman Senior Member (Voting Rights)

    Messages:
    5,761
    Location:
    Aotearoa New Zealand
    To drop the tone into the gutter, although this is in the scientific literature as a term of art —

    ChatGPT is bullshit (2024)
    Hicks, Michael Townsen; Humphries, James; Slater, Joe

    Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”.

    We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions.

    We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.

    Link | PDF (Ethics and Information Technology) [Open Access]
     
    rvallee, Snow Leopard, LJord and 4 others like this.
