Can Large Language Models (LLMs) like ChatGPT be used to produce useful information?

These tools can make great leaps and do unexpected, new things; AlphaGo is a great example of this, as are things like protein folding or the maths challenges. It requires very specific domains with clear rules, a clear concept of what is ‘correct’, and the ability to test that. But understanding why it is possible there, and not transferable to ‘solve this disease for me’, is important I think.
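To make that concrete, here is a toy sketch (my own illustration, not any lab’s actual method) of why a domain with a cheap, perfect verifier is so much easier. The polynomial and the `verify` function are invented for the example:

```python
import random

def verify(candidate: int) -> bool:
    # A mechanical, unambiguous definition of "correct" -- like a win
    # in Go or a checked maths result. No human judgement required.
    return candidate**3 - 6 * candidate**2 + 11 * candidate - 6 == 0

def search(trials: int = 10_000) -> int | None:
    # Even blind guessing makes progress when every attempt gets a
    # definitive right/wrong signal that could steer learning.
    for _ in range(trials):
        guess = random.randint(-100, 100)
        if verify(guess):
            return guess
    return None

print(search())  # prints 1, 2 or 3 -- the integer roots
```

There is no such `verify()` for ‘solve this disease for me’, which is exactly the gap being pointed at.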
AlphaGo is entirely different from LLMs in architecture. Not really a great comparison if we’re talking about LLMs here.

For LLMs to be true AGI, there is a hope that with enough data “reasoning” will become an emergent property, possibly even consciousness, recalling the old Chinese room debate.
 
Given the latest news about GPT-5, I would say LLMs have hit a wall and are way more hype than substance.

This is so funny to me: a few weeks ago, before GPT-5, everyone was still an AI futurist. One bad model and a stalled take-off, and now everyone’s a downer. I doubt this has changed the minds of true LLM AGI believers like Sam Altman, Thiel, etc.
 
This is so funny to me: a few weeks ago, before GPT-5, everyone was still an AI futurist. One bad model and a stalled take-off, and now everyone’s a downer. I doubt this has changed the minds of true LLM AGI believers like Sam Altman, Thiel, etc.
They are all full of shit and mostly just grifters. LLMs do not scale; that much is known. So they are really a dead end the way they are currently designed.

They will need another huge discovery, like transformers were for LLMs, before another major advance can be made. And guess what: no such discovery is on the immediate horizon. They are just hoping to “fake it until they make it” and praying that, by throwing trillions of dollars at the problem, a new discovery will turn up.
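(For anyone unfamiliar, the discovery being referred to is the transformer architecture from Vaswani et al.’s 2017 “Attention Is All You Need” paper. Here is a bare-bones NumPy sketch of its core operation, scaled dot-product attention; illustrative only, not how production models implement it:)

```python
import numpy as np

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    # Scaled dot-product attention: each token mixes the values of all
    # tokens, weighted by how similar their queries and keys are.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over keys
    return weights @ V

# 4 tokens with 8-dimensional embeddings, self-attending
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
print(attention(x, x, x).shape)  # (4, 8): every token attends to all others
```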
 
They are all full of shit and mostly just grifters. LLMs do not scale; that much is known. So they are really a dead end the way they are currently designed.

They will need another huge discovery, like transformers were for LLMs, before another major advance can be made. And guess what: no such discovery is on the immediate horizon. They are just hoping to “fake it until they make it” and praying that, by throwing trillions of dollars at the problem, a new discovery will turn up.
I think they were hoping that, with enough data, it would be emergent.

Genie3’s world consistency was an emergent property of more data in …. So there may be some value in the idea that “it can just happen”, but yes, it is hope at this point.
 
AlphaGo is entirely different from LLMs in architecture. Not really a great comparison if we’re talking about LLMs here.
I thought I made the distinction in my posts, and I think it’s pretty clear? It also seems most likely that LLMs will stick around in some form, but perhaps as the human interface to other forms of AI/ML; that’s what they’re most suited to, after all.

The scaling stuff for LLMs has always been a pipe dream pushed by a few. It’s an amazing grift, and it’s equally amazing they’ve managed to convince so many it may work. Like @leokitten says, more discoveries and technologies are needed, and the industry seemed to know that until ChatGPT blew up.
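For what it’s worth, the “scaling” being argued about usually means power-law fits like the Chinchilla law from Hoffmann et al. (2022): L(N, D) = E + A/N^α + B/D^β, where N is parameter count and D is training tokens. A rough sketch with the paper’s published constants shows the shape of the diminishing returns; the model sizes and token counts below are arbitrary examples, not claims about any specific model:

```python
# Chinchilla-style scaling law fit (Hoffmann et al., 2022, Approach 3)
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    # Predicted pretraining loss: an irreducible floor E plus two
    # power-law terms that shrink with parameters and data.
    return E + A / n_params**alpha + B / n_tokens**beta

# Each 10x jump in scale buys a smaller absolute drop in loss,
# and the floor E never goes away.
for n, d in [(1e9, 2e10), (1e10, 2e11), (1e11, 2e12), (1e12, 2e13)]:
    print(f"N={n:.0e}, D={d:.0e} -> predicted loss {loss(n, d):.3f}")
```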

Paper from Apple on why they aren’t pursuing LLMs:
I’m familiar with the paper. But saying Apple are not pursuing LLMs when they very clearly are (researching, developing and building products using them) seems to misrepresent things somewhat?
 
I was reading this the other day, and it is entertaining in some ways but highlights a number of the ways LLMs struggle.
A positive read could be that they feel they can fix these issues. But I think it also shows how much the earlier points made in this thread stand, and how hoping for an LLM to be able to do something new that it hasn’t encountered or been trained for (like finding a new disease solution by prompting alone) is somewhat wishful thinking.
 