
Could you learn everything needed to become fully human simply by reading books and listening to conversations? Of course not. You'd have no first-person experience of any of the physical experiences that arise from being an embodied agent. Until those (multitude of) lessons can be learned by LLMs, they will remain mere echoes of what it is to be human, much less superhuman.



AGI doesn't mean "has the same phenomenological experience as humans."

In fact, if we plan to use them as a tool, then that's definitely not what we want, since it would imply many of the same flaws and limitations as humans.


> Could you learn everything needed to become fully human simply by reading books and listening to conversations?

Does this mean our books and audio recordings are simply insufficient? Or is there some "soul" component that can't be recorded?


It isn't some "soul", but I think the parent is making the same point Yann LeCun usually makes: you can't have "true intelligence" (i.e. something akin to human intelligence, whatever that is, since we don't really know how it works) based on just next-token prediction plus band-aids.

A typical argument for that is that humans process 1-3 orders of magnitude more multimodal data (in multiple streams processed in parallel) in their first 4 years of life than the biggest LLMs we have right now do, using a fraction of the energy (over a longer timeframe, though), and a lot more in the subsequent formative years. For example, that accumulated "intelligence" eventually allows a teenager to learn how to drive in 18-24 hours of first-hand training. An LLM can't do that with so little training even if it has every other piece of human knowledge, and even if you get to train it on driving image-action pairs, I wish you good luck if it is presented with an out-of-distribution situation while it is driving a car.
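
For a rough sense of scale, here's a back-of-envelope comparison. Every constant below is an assumed, illustrative figure (roughly the kind LeCun cites in his talks), not a measurement:

    # Back-of-envelope only: every constant is an assumed, illustrative value.
    waking_hours    = 16_000     # assumed waking hours in a child's first ~4 years
    optic_nerve_Bps = 2e7        # assumed ~20 MB/s of visual input

    child_bytes = waking_hours * 3600 * optic_nerve_Bps   # ~1e15 bytes, vision alone

    llm_tokens    = 1.5e13       # assumed pretraining corpus of ~15T tokens
    bytes_per_tok = 4            # assumed ~4 bytes of text per token

    llm_bytes = llm_tokens * bytes_per_tok                 # ~6e13 bytes

    print(f"child, vision alone, ~4 years: {child_bytes:.1e} bytes")
    print(f"LLM pretraining text:          {llm_bytes:.1e} bytes")
    print(f"ratio: ~{child_bytes / llm_bytes:.0f}x")

Even counting vision alone, the child comes out roughly an order of magnitude ahead; add the other senses and the other modalities and you get to the 1-3 orders of magnitude range.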

Humans learn to model the world; LLMs learn to model language (even when processing images or audio, they process them as a language: sequences of patches). That is very useful and valuable, and you can even model a lot of things in the world using language alone, but it is not the same thing.
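
To make the "sequences of patches" point concrete, here is a minimal sketch (shapes and patch size are just illustrative assumptions, not any particular model's config) of how a ViT-style model flattens an image into a token sequence before the transformer ever sees it:

    import numpy as np

    image = np.random.rand(224, 224, 3)   # H x W x C input image
    P = 16                                 # assumed patch size

    # Cut the image into non-overlapping 16x16 patches and flatten each one,
    # the same way text is flattened into a sequence of token embeddings.
    patches = image.reshape(224 // P, P, 224 // P, P, 3)
    patches = patches.transpose(0, 2, 1, 3, 4).reshape(-1, P * P * 3)

    print(patches.shape)   # (196, 768): a sequence of 196 "visual tokens"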


I have personal experience with the human form of this - language learning in a vacuum.

For the last two years I've studied French every day, but only using language apps. Recently, I hired a one-on-one tutor. During lessons I find myself responding to what I think I heard with the most plausible response I can generate. Many times each session, my tutor asks me, "Do you really understand, or not?" I have to stop and actually think about whether I do.

I don't have much multi-modal input and increasing it is challenging, but it's the best way I have to actually connect the utterances I make with reality.


> LLMs learn to model language

Obviously not. Language is just a medium. A model of language is enough to describe how to combine words into legal sentences, not into meaningful ones. Clearly LLMs learn much more than just the rules for constructing grammatically correct language, otherwise they would just babble grammatically correct nonsense such as "The exquisite corpse will drink the young wine". That knowledge was acquired via training on language, but it is extra-linguistic. It's a model of the world.


That needs evidence; as far as I recall, this is a highly debated point right now, so there's no room for "obviously".

PS: Plus, most reasoning/planning examples coming from LLM-based systems rely on band-aids that work around said LLMs (RLHF'd CoT, LLM-Modulo, Logic-of-Thought, etc.), to the point that they're being differentiated by the name LRMs: Large Reasoning Models. So much for modelling the world via language using LLMs alone.
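
As a rough sketch of what that scaffolding looks like (LLM-Modulo style generate-and-verify; both callables below are hypothetical stand-ins, not any real library's API): the bare model only proposes, and soundness has to come from an external checker.

    def solve_with_scaffolding(problem, llm, verifier, max_rounds=5):
        # `llm` and `verifier` are hypothetical callables supplied by the caller.
        feedback = ""
        for _ in range(max_rounds):
            prompt = f"Think step by step.\n\n{problem}\n\n{feedback}"
            candidate = llm(prompt)             # the bare model only proposes
            ok, feedback = verifier(candidate)  # an external symbolic checker disposes
            if ok:
                return candidate
        return None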


No doubt, but no one is claiming that artificial humanity is an inevitability.

(At least, no one I'm aware of.)

The claim is about artificial intelligence that matches or surpasses human intelligence, not how well it evolves into full-fledged humanity.



