The same way that we know interpolation of a linear regression is not the same as the deeply complex navigation of reality we do as living things.



I notice that often in these debates someone will make a comparison between a low-level mechanism driving LLMs and a high-level emergent behavior of the human mind. I don't think it's deliberate - we don't fully understand how the brain works, so emergent behaviors are all we have to point to - but how can you be so certain that deeply complex navigation of reality can't emerge from interpolation of a linear regression?


That's a good question. With sufficient dimensionality, interaction terms, and enough linear regressions, I suppose it's possible. But the dynamic, reactive coordination of many such regressions wouldn't itself be just a linear regression. The output of a linear regression is simplistic, just as LLM token prediction is simplistic, and saying something might be a component of eventual intelligence is far from saying it is intelligence. LLMs are episodic responses to a fixed context, produced by a fixed model that is programmed to predict tokens. Even the CoT models, while more complex, still use a static model with a recursive feed of model outputs back into the model. I think Dr. Chollet does an excellent job of identifying the fundamental difference between a potential AGI and static models in his ARC-AGI papers and presentations.
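For what it's worth, there is a concrete sense in which the coordination is what matters: compositions of plain linear regressions collapse back into a single linear regression, so any added expressive power has to come from nonlinear interactions between them. A minimal numpy sketch (the dimensions and the ReLU are arbitrary choices for illustration, not anything specific to LLMs):

  import numpy as np

  rng = np.random.default_rng(0)

  # Two "stacked" linear regressions: y = W2 @ (W1 @ x + b1) + b2.
  W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
  W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)
  x = rng.normal(size=4)

  stacked = W2 @ (W1 @ x + b1) + b2

  # The composition collapses to a single linear regression:
  # y = (W2 @ W1) @ x + (W2 @ b1 + b2).
  collapsed = (W2 @ W1) @ x + (W2 @ b1 + b2)
  assert np.allclose(stacked, collapsed)

  # Insert any nonlinearity between the two stages (here a ReLU)
  # and the collapse no longer holds - the system is no longer
  # "just" a linear regression.
  nonlinear = W2 @ np.maximum(W1 @ x + b1, 0) + b2
  assert not np.allclose(nonlinear, collapsed)

This is the standard reason neural networks need nonlinear activations at all, and it cuts both ways in this debate: "it's just regression" undersells what stacked nonlinear components can represent, while the per-component mechanism really is that simple.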


> but how can you be so certain that deeply complex navigation of reality can't emerge from interpolation of a linear regression?

That was pretty much my question: why are people so certain about this topic?


I wasn't trying to be flippant, just to challenge the excessive confidence people have on this topic.



