On the contrary, few experts expected this level of performance from an AI this soon.
If you can identify one or two aspects of the human “general” intelligence that an AI cannot ever possess, even in principle, I think a lot of people would be grateful.
In animals, propositional knowledge is built from procedural knowledge, and it can't really be otherwise.
What AI does at the moment is approximate propositional knowledge with statistical associations, rather than take the procedural route. But this fails because P(A|B) doesn't say whether A causes B, B causes A, A is B, A and B are causally unrelated, and so on.
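A minimal sketch of that ambiguity (hypothetical Python, with made-up probabilities): two opposite causal structures over binary A and B produce the same joint distribution, so a purely observational quantity like P(A|B) is identical across them.

    # Hypothetical toy: two opposite causal structures over binary A, B
    # that produce the *same* joint distribution, so P(A|B) can't tell
    # them apart. Probabilities are made up for illustration.
    import random

    def a_causes_b():
        a = random.random() < 0.5                   # A ~ Bernoulli(0.5)
        b = random.random() < (0.8 if a else 0.2)   # B generated from A
        return a, b

    def b_causes_a():
        b = random.random() < 0.5                   # B ~ Bernoulli(0.5)
        a = random.random() < (0.8 if b else 0.2)   # A generated from B
        return a, b

    def p_a_given_b(model, n=200_000):
        hits = total = 0
        for _ in range(n):
            a, b = model()
            if b:
                total += 1
                hits += a
        return hits / total

    print(p_a_given_b(a_causes_b))  # ~0.8
    print(p_a_given_b(b_causes_a))  # ~0.8 -- observationally identical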
What is the procedural route? Performing actions with your body so as to disambiguate those cases. Animals have unambiguous causal models of their bodies; their actions are intentional and goal-directed, and effectively "express hypotheses" about the nature of the world. In acting this way, they can build actual knowledge of it.
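Continuing the same toy setup (again hypothetical, same made-up numbers): an intervention, setting A by fiat rather than observing it, is the analogue of acting, and it distinguishes the two structures where observation alone could not.

    # Intervention sketch: forcing A to a value -- the analogue of an
    # animal acting on the world -- shifts B only in the structure where
    # A actually causes B.
    import random

    def p_b_under_do_a(structure, a_forced=True, n=200_000):
        hits = 0
        for _ in range(n):
            if structure == "a_causes_b":
                # B is generated from A, so it responds to the forced value
                b = random.random() < (0.8 if a_forced else 0.2)
            else:  # "b_causes_a": B has its own cause; do(A) can't reach it
                b = random.random() < 0.5
            hits += b
        return hits / n

    print(p_b_under_do_a("a_causes_b"))  # ~0.8: B shifts under do(A=True)
    print(p_b_under_do_a("b_causes_a"))  # ~0.5: B unmoved -- A doesn't cause B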
There are at least some good reasons to suppose that "bodies which express hypotheses in their actions" require organic properties to do so: you have to have adaptation running both bottom-up and top-down for "the mind" to really grow the body in the relevant ways.
In other words, an animal's actions aren't clockwork: in acting, its body and mind change. Every action is a top-down, bottom-up change to the animal as a whole.
This is a very interesting hypothesis that could be quite true for living beings. What I disagree with is that having an animal-like body is necessary for the process of forming a world model. A simulation could be sufficient. And there is already work on that front. (Also, I would not characterize deep-learning-based AI as trying to form propositional knowledge. In fact, its great performance partly stems from not dealing with propositional knowledge directly.)