You don't have to be an AI "hype bro" to take issue with the reductive and trite notion that LLMs are nothing more than stochastic parrots. There is a continuum between that and AGI.
Well, I take issue with the reductive and trite notion that just because an LLM can generate plausible text, it's suddenly maybe conscious and intelligent, maybe about to end humanity, etc.
It's exactly like the crypto hype wave: everyone dreaming up semi-plausible futures based on a whole chain of unfounded assumptions.
It's plausible text, and it's useful text. LLMs aren't just speculative vehicles in search of a problem, as most of crypto is; they are useful, today, right now. They don't require any assumptions to be so, nor do they have to be Skynet-style world-ending AGI to be that.
You can point out the doomers' problematic extrapolations without being reductive about the very real and very useful capabilities of LLMs.
The only thing the LLM is missing is a self-actuation loop.
We put a camera on a multimodal LLM, it interprets the visual world and sees before it a bunch of blocks. It looks at the task list it has that says "pick up red blocks, put them in blue bin". The visual component identifies the red blocks, and the textual component issues commands to its drive unit, which calculates the best path and how to use its manipulators to pick up the blocks.
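To make the shape of that loop concrete, here's a toy sketch. Every class in it (World, MultimodalLLM, DriveUnit) is a hypothetical stand-in I've stubbed out so it runs, not any real model or robotics API; the point is only the perceive, plan, act cycle:

```python
# Toy sketch of the perceive -> plan -> act loop described above.
# All classes are hypothetical stubs, not a real model or robot API.
from dataclasses import dataclass, field

@dataclass
class World:
    """Stub environment: some red blocks and a blue bin."""
    red_blocks: list = field(default_factory=lambda: ["block_1", "block_2"])
    blue_bin: list = field(default_factory=list)

class MultimodalLLM:
    """Stand-in for a vision + language model."""
    def perceive(self, world):  # world identification
        return f"I see {len(world.red_blocks)} red blocks and a blue bin."

    def plan(self, task, observation):  # actuation on motivation
        return ["locate_block", "grasp", "move_to_bin", "release"]

class DriveUnit:
    """Stand-in for the path planner and manipulators."""
    def execute(self, command, world):  # interaction with the environment
        if command == "release" and world.red_blocks:
            world.blue_bin.append(world.red_blocks.pop())

world, llm, drive = World(), MultimodalLLM(), DriveUnit()
task = "pick up red blocks, put them in blue bin"

while world.red_blocks:  # the self-actuation loop
    observation = llm.perceive(world)
    for command in llm.plan(task, observation):
        drive.execute(command, world)

print(f"done: {len(world.blue_bin)} blocks in the bin")
```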
This is a very basic chain of existence. We have world identification, we have actuation on motivation, we have interaction with the environment. We can do this now. These goals have already been achieved. Companies are already testing more complex models with much more general instructions such as "Pick up everything you think is trash" or "Organize this room" to see the emergent behaviors that come out of the models.
You seem to be a few years behind on what has already been done, and on why people are starting to get concerned.