The debate over whether LLMs are "intelligent" seems a lot like the old debate among NLP experts over whether English must be modeled with a context-free grammar (pushdown automaton) or can be handled by a finite-state machine (regular expression). Yes, any language can be modeled using regular expressions, at least up to some bounded sentence length and embedding depth; you just need an insane number of FSMs (perhaps billions). And that seems to be how LLMs model cognition today.
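To make that concrete, here is a toy sketch (my own illustration, not something from the original debate): a context-free pattern like balanced parentheses, standing in for center-embedded clauses, can't be captured by any single regular expression, but it can be approximated by one if you cap the nesting depth, at the cost of a regex (equivalently, a finite-state machine) that keeps growing with the cap, and that multiplies out quickly once you add vocabulary and agreement features.

```python
import re

def balanced_up_to(depth: int) -> str:
    """Regex matching balanced parentheses nested at most `depth` deep."""
    pattern = r""                      # depth 0: only the empty string
    for _ in range(depth):
        pattern = rf"(?:\({pattern}\))*"
    return pattern

for k in (2, 4, 8, 16):
    rx = balanced_up_to(k)
    print(f"depth <= {k:2d}: regex length = {len(rx)} chars")
    assert re.fullmatch(rx, "()" * 3)                           # shallow input: accepted
    assert re.fullmatch(rx, "(" * k + ")" * k)                   # nesting at the cap: accepted
    assert not re.fullmatch(rx, "(" * (k + 1) + ")" * (k + 1))   # one level deeper: rejected
```

The finite-state approximation works only because the depth is capped; remove the cap and no regex suffices, which was exactly the crux of the old formal-language argument.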
LLMs seem to use little or no abstract reasoning (is-a) or hierarchical perception (has-a), as humans do -- both of which are grounded in semantic abstraction. Instead, LLMs can memorize a brute-force explosion of finite state machines (interconnected with Word2Vec-like associations) and then traverse those machines and associations as some kind of mashup, akin to a coherent abstract concept. Then as LLMs get bigger and bigger, they just memorize more and more mashup clusters of FSMs augmented with associations.
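As a purely illustrative toy (my sketch, not a claim about what transformers actually do internally): a memorized bigram chain plays the role of the "FSMs", hand-made 2-d vectors play the role of the Word2Vec-like associations, and generation wanders between the two, producing the kind of mashup described above.

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# 1. The "FSM": memorized bigram transitions from a tiny corpus.
transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

# 2. The "associations": made-up 2-d embeddings; similar words sit close together.
embed = {"cat": (1.0, 0.1), "dog": (0.9, 0.2), "mat": (0.1, 1.0),
         "rug": (0.2, 0.9), "sat": (0.5, 0.5), "on": (0.4, 0.6),
         "the": (0.5, 0.4), ".": (0.0, 0.0)}

def nearest(word):
    """Closest other word by squared Euclidean distance in embedding space."""
    wx, wy = embed[word]
    return min((w for w in embed if w != word),
               key=lambda w: (embed[w][0] - wx) ** 2 + (embed[w][1] - wy) ** 2)

def generate(start="the", steps=8, jump_prob=0.3, seed=0):
    random.seed(seed)
    word, out = start, [start]
    for _ in range(steps):
        word = random.choice(transitions[word])   # follow the memorized machine
        if random.random() < jump_prob:           # ...or drift to an associated word
            word = nearest(word)
        out.append(word)
    return " ".join(out)

print(generate())   # a short "mashup" of the two memorized sentences
```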
Of course, that's not how a human learns or reasons. It seems likely that synthetic cognition of this kind will fail to enable various kinds of reasoning that humans perceive as essential and normal (like common sense based on abstraction, physically grounded perception, or goal-based or counterfactual reasoning, much less insight into the thought processes and perceptions of other sentient beings). Even as ever-larger LLMs "know more" by memorizing ever more FSMs, I suspect they'll continue to surprise us with persistent cognitive and perceptual deficits that would never arise in organic beings that do use abstract reasoning and physically grounded perception.
> LLMs can memorize a brute-force explosion of finite state machines (interconnected with Word2Vec-like associations) and then traverse those machines and associations as some kind of mashup, akin to a coherent abstract concept.
That's actually the closest thing to a working definition of what a concept is. The discussion about language representation has little bearing on humans or intelligence, because it's not how we learn and use language. Similarly, the more people - be they armchair or diploma-carrying philosophers - try to pin down the essence of a word's meaning, the more they fail, because the meaning of any concept seems to be defined entirely through associations with other concepts and some remembered experiences. Which again seems pretty similar to how LLMs encode information: through associations in high-dimensional spaces.
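A minimal sketch of that "meaning as associations" picture, with made-up vectors standing in for what Word2Vec or an LLM's embedding layer would actually learn: the only thing the system "knows" about a word is its similarity profile over other words.

```python
import numpy as np

# Made-up 4-d vectors; the dimensions loosely track (royalty, male, female, fruit).
words = ["king", "queen", "man", "woman", "apple"]
vecs = np.array([
    [0.9, 0.8, 0.1, 0.0],   # king
    [0.9, 0.1, 0.8, 0.0],   # queen
    [0.1, 0.9, 0.1, 0.0],   # man
    [0.1, 0.1, 0.9, 0.0],   # woman
    [0.0, 0.0, 0.0, 1.0],   # apple (unrelated)
])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "What does 'queen' mean?" -- the only answer available here is a similarity
# profile over other concepts: close to woman and king, far from apple.
q = vecs[words.index("queen")]
for w, v in zip(words, vecs):
    if w != "queen":
        print(f"sim(queen, {w}) = {cosine(q, v):.2f}")
```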