
> do we have any reason to think we can’t teach a LLM better logic?

I'll go for a pragmatic approach: the problem is that there is no data to teach the models cause and effect.

If I say "I just cut the grass", a human understands that there's a world where grass exists, it used to be long, and now it is shorter. LLMs don't have such a representation of the world. They could have one (and there's work on that), but the prevailing approach in modern NLP is "throw cheap data at it and see what sticks". And since nobody wants to hand-annotate massive amounts of data (not that there's any agreement on how you'd annotate it), here we are.




I call this the embodiment problem. The physical limitations of reality would quickly kill us if we didn't have a well-formed understanding of them. Meanwhile, AI is stuck in 'dream mode': much like when we're dreaming, it can do practically anything without physical consequence.

To achieve full AI, I believe our AIs will eventually have to have a 'real world' set of interfaces to bounds-check information against.



