Can Large Language Models (LLMs) like ChatGPT be used to produce useful information?

That's probably because 90% of texts and papers on ME/CFS talk about weak data and all these possible connections that have no sound basis.

I wonder if you would get better results if you called it illnessX and described it yourself with what are considered the most robust findings. You could also ask it to be very skeptical and to check whether a connection makes sense, and tell it that you want to avoid making an error at all costs.
 
I don't know how accurate those two AI statements above are (1, 2), but they seem clear and concise to me, which is always a big plus, if for no other reason than that it makes it much easier to spot the limitations and flaws.

It demonstrates what I have long thought is the main potential benefit of AI (at least in its current form and at its current level), which is efficiency gains. How long would it have taken a human, or even a team of humans, to produce that?

It also shows that the prompts an AI is given are very important (e.g. ME/CFS v. ME). That is where the human skill component of the equation matters.
 
That's probably because 90% of texts and papers on ME/CFS talk about weak data and all these possible connections that have no sound basis.

I wonder if you would get better results if you called it illnessX and described it yourself with what are considered the most robust findings. You could also ask it to be very skeptical and to check whether a connection makes sense, and tell it that you want to avoid making an error at all costs.

Yes, I got much better results after insisting that it limit its answers to only the most robust findings, be highly skeptical, and avoid overinterpretation. This was in response to a request for a hypothesis about type 1 diabetes and some other randomly selected genes.

It gave me this:

Got it — let's strip this right back to what can actually be said with high confidence from the literature and your GWAS list, without speculative functional chains.

Here's a skeptical, minimal interpretation:

1. Which genes have established, reproducible links to immune function or autoimmunity?

• ETV6 – Strong evidence from hematology: encodes a transcriptional repressor essential for hematopoiesis; germline and somatic variants cause altered blood cell development and immune phenotypes. No direct, replicated T1D link yet, but immune relevance is well-established.
• THRB – Encodes thyroid hormone receptor β. Thyroid hormones affect immune system function, and autoimmune thyroid disease often co-occurs with T1D, but there is no direct mechanistic evidence tying THRB variants to β-cell autoimmunity.

2. Which genes have plausible cellular roles but little to no direct autoimmune/T1D evidence?

• MTCL1 – Microtubule crosslinking factor; required for microtubule organization in polarized cells. Known in neuronal and epithelial contexts; β-cell or immune-specific roles not well validated in humans.
• TBC1D19 – Predicted Rab GTPase-activating protein; general role in vesicle trafficking is plausible, but no confirmed function in immune cells or β cells in the context of autoimmunity.
• STYXL1 – Catalytically inactive phosphatase; some evidence for involvement in stress responses in cell models, but no robust immune or diabetes link.
• ERICD, ZBTB40-IT1 – Long noncoding RNAs; regulatory potential is possible, but functional and disease-relevant roles are unconfirmed.
• OR1F12 – Olfactory receptor pseudogene or low-expression GPCR; no validated function outside olfactory tissue in the context of immunity or diabetes.

3. Mechanistic inferences we can make with caution

From this set, the only well-supported connection to an immune-mediated disease pathway is via ETV6 (hematopoietic regulation) and, more indirectly, THRB (thyroid autoimmunity co-morbidity patterns). The rest are largely uncharacterized in T1D or immune biology, and functional connections would need to be demonstrated from scratch.

4. Minimal working hypothesis

Certain loci in the GWAS may influence T1D risk through effects on immune cell development or function (ETV6), while others may reflect novel, as-yet-uncharacterized mechanisms. Most genes here lack direct experimental evidence in β-cell biology or immunity, so their relevance remains speculative.

This seems a lot more useful, but maybe still overoptimistic.
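
If anyone wants to try the same kind of skeptical prompting through an API rather than the chat interface, a minimal sketch might look something like the one below. To be clear, the model name, prompt wording, and gene list here are placeholders made up for illustration, not what was actually used above, and it assumes the OpenAI Python client (openai>=1.0) with an API key set in the environment.

```python
# Minimal sketch of the "skeptical prompt" approach described above.
# Assumptions: OpenAI Python client (openai>=1.0), OPENAI_API_KEY set in the
# environment, and a placeholder model name and gene list chosen for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# System prompt doing the same job as the instructions discussed above:
# restrict to robust findings, stay skeptical, avoid overinterpretation.
SKEPTICAL_SYSTEM_PROMPT = (
    "You are a highly skeptical reviewer. Limit your answers to the most "
    "robust, replicated findings only. Label anything speculative as "
    "speculative, check whether each proposed connection actually makes "
    "sense, and prioritise avoiding errors over completeness."
)

user_prompt = (
    "Here is a list of genes from a GWAS: ETV6, THRB, MTCL1, TBC1D19, "
    "STYXL1, ERICD, ZBTB40-IT1, OR1F12. Propose a minimal working "
    "hypothesis for their relevance to type 1 diabetes, clearly separating "
    "well-established links from unvalidated ones."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any chat-capable model
    messages=[
        {"role": "system", "content": SKEPTICAL_SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ],
    temperature=0.2,  # lower temperature to discourage speculative leaps
)

print(response.choices[0].message.content)
```

The low temperature and the "prefer avoiding errors over completeness" instruction mirror the approach that seemed to work above, but the usual caveat applies: the output still has to be checked.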
 
But here it makes sense for objective findings, though we need to be quite careful.
Yes, of course we need to be careful, just as we wouldn't blindly accept something a human medic or scientist said; I think most pwme are beyond that from experience. But we could miss useful insights by being too cynical.

The training data comes from all sorts of sources, but it also directly includes what we've been writing about for years. I'd be shocked if this forum isn't in the training data. I know something like 27,000 words from my blog were in the GPT-3.5 model. It also knows about BPS papers. It's a couple of years since it has presented anything like that to me as if it were valid knowledge, but we should be aware that it's in there.

Anyway, this is probably a topic for a different discussion thread.
 
I suppose you could say they’re a bit like seagulls. Good at regurgitation but you wouldn’t depend on them for reliable deduction.

Well-crafted prompts definitely help. Out of the box, their system prompts can give them a tendency to be sycophantic, although there are some recent examples of Gemini being more like Marvin the Paranoid Android.

I've found them incredibly useful tools for many things but, as @Yann04 says, not for reasoning, critically assessing information, or anything speculative. None of the LLMs seem capable of knowing when they're wrong. And I do worry when I see how some (individuals, companies and governments) use them.
 