I notice that in these debates someone will often compare a low-level mechanism driving LLMs to a high-level emergent behavior of the human mind. I don't think it's deliberate - we don't fully understand how the brain works, so emergent behaviors are all we have to point to - but how can you be so certain that deeply complex navigation of reality can't emerge from the interpolation of a linear regression?
That's a good question. With sufficient dimensionality, interaction terms, and enough linear regressions, I suppose it's possible. But dynamic, reactive coordination of many multiple-linear-regression models wouldn't itself be just a linear regression. The output of a linear regression is simplistic, just as LLM token prediction is simplistic, and saying something might be a component of eventual intelligence is a long way from saying it is intelligence.

LLMs are episodic responses to a fixed context, produced by a fixed model trained to predict tokens. Even the CoT models, while more complex, still use a static model, recursively feeding the model's outputs back into it as input. I think Dr. Chollet does an excellent job of identifying the fundamental difference between static models and a potential AGI in his ARC-AGI papers and presentations.
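To make the first point concrete, here's a toy numpy sketch (my own illustration, not anyone's actual model; A and B are just random matrices standing in for fitted regressions). Composing linear maps collapses back into a single linear map, so "enough linear regressions" alone never escapes linearity, while an input-dependent gate, the crudest form of "dynamic and reactive coordination", already breaks it:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))  # first "regression" (a linear map)
B = rng.normal(size=(4, 4))  # second "regression"

x = rng.normal(size=4)
# Composition collapses: B(A(x)) is the single linear map (B @ A)(x).
assert np.allclose(B @ (A @ x), (B @ A) @ x)

def gated(v):
    # Route through A or B depending on the input itself.
    return A @ v if v[0] > 0 else B @ v

x1 = np.array([1.0, 0.0, 0.0, 0.0])   # routed through A
x2 = np.array([-2.0, 0.0, 0.0, 0.0])  # routed through B
# Additivity fails, so the gated system is no longer a linear regression:
print(np.allclose(gated(x1 + x2), gated(x1) + gated(x2)))  # False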
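And a minimal sketch of the second point, assuming a toy predict_next in place of a real forward pass (TOY_WEIGHTS is a hypothetical stand-in for frozen parameters, not any real LLM's API). Plain decoding and CoT-style decoding have the same shape: a fixed function fed its own outputs, with nothing learned between steps or episodes:

```python
import random

TOY_WEIGHTS = {
    "the": ["cat", "dog"],
    "cat": ["sat"],
    "dog": ["ran"],
    "sat": ["<eos>"],
    "ran": ["<eos>"],
}

def predict_next(weights, context):
    """One forward pass: a fixed function of fixed weights and the context."""
    return random.choice(weights.get(context[-1], ["<eos>"]))

def generate(weights, prompt, max_steps=8):
    """Autoregressive decoding: each predicted token is appended to the
    context and fed straight back into the same unchanged model."""
    context = list(prompt)
    for _ in range(max_steps):
        token = predict_next(weights, context)
        if token == "<eos>":
            break
        context.append(token)  # the recursive feed: output becomes input
    return context

print(generate(TOY_WEIGHTS, ["the"]))  # e.g. ['the', 'cat', 'sat']
```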