Hacker News

You worded his unstated assumptions beautifully. I completely disagree, though: this demonstrates the exact opposite, that LLMs are using statistical methods to mimic the universal grammars that govern human linguistic faculties (which, IMO, are the core of all our higher faculties). Like, why did it break that way instead of producing clearer gibberish? I'd say it's because it's still following linguistic structures, incorrectly in this case, but it's not random. See https://en.m.wikipedia.org/wiki/Colorless_green_ideas_sleep_...
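To sketch the point (a toy illustration, not anything the article itself runs): even a crude bigram model trained on a handful of made-up sentences assigns higher probability to an unseen-but-grammatical word order than to the same words scrambled. The corpus, sentences, and scoring here are all hypothetical, but they show how a purely statistical method ends up tracking syntactic regularities rather than producing uniform noise.

```python
import math
from collections import Counter

# Hypothetical toy corpus; any small set of simple sentences works.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat ran to the dog",
    "the dog ran to a mat",
]

tokens = " ".join(corpus).split()
vocab = set(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
unigrams = Counter(tokens)

def score(sentence):
    """Add-one-smoothed bigram log-likelihood: higher = more English-like."""
    words = sentence.split()
    total = 0.0
    for w1, w2 in zip(words, words[1:]):
        p = (bigrams[(w1, w2)] + 1) / (unigrams[w1] + len(vocab))
        total += math.log(p)
    return total

# An unseen but grammatical ordering beats the same words with no syntax.
grammatical = "the cat ran to the mat"
scrambled = "mat the to ran cat the"
assert score(grammatical) > score(scrambled)
```

The model has never seen "the cat ran to the mat", yet it prefers it to the scramble because the local transitions it learned encode fragments of English word order, which is the sense in which a statistical failure mode can still look structured rather than random.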

Marcus’s big idea is that LLMs aren’t symbolic, so they’ll never be enough for AGI. His huge mistake is staying inside the scruffies-vs-neats dichotomy, when a win for either side is a win for both; symbolic techniques have been stuck for decades waiting for exactly this kind of breakthrough.

IMO :) Gary, if you’re reading this, we love you; please consider being a little less polemical lol



