I'm ok with believing that all it takes to make intelligence (probably not the only way) is a sufficiently large neural net with the right architecture and weights.
I think it is easy to get distracted by the specific mechanisms by which these models work, but most of the technological detail exists because we need something that runs on systems at the scale of what we can actually build. We simply can't build a human-brain-scale neural net yet. We build what we can, and besides, maybe with all this research we will figure out something significant about what intelligence actually is.
The notion "This can't be all thought is" is as old as the idea of AI. I think it informed Turing when he proposed the Imitation Game. The insight is that people would resist the idea of a bunch of simple things stuck together amounting to thinking until they were faced with something whose behavior was so indistinguishable from their own that to doubt it was thinking would be akin to doubting everyone you meet.
In the end some people won't even accept an AI that does everything a human does as actually thinking, but then again, some people really are solipsists.
>The notion "This can't be all thought is" is as old as the idea of AI.
Older still:
>It must be confessed, moreover, that perception, and that which depends on it, are inexplicable by mechanical causes, that is, by figures and motions. And, supposing that there were a mechanism so constructed as to think, feel and have perception, we might enter it as into a mill. And this granted, we should only find on visiting it, pieces which push one against another, but never anything by which to explain a perception. This must be sought, therefore, in the simple substance, and not in the composite or in the machine.