Hacker News

Just to play the devil's advocate here, you can argue that ChatGPT is not really intelligent. It is one hugely complex probability distribution, true, but importantly, a static one. Human intelligence stretches a similar distribution into the temporal dimension, as the data our brains process reshapes the distribution in real time. In this manner, human brains are not Chinese Rooms, due to their continuous online learning, but ChatGPT is, since its weights could be carved in stone and the output would not be any different.



I don’t think I buy that this is playing the devil’s advocate, or that it’s even a meaningful argument.

1) Whether the weights are static or dynamic over time is not important. As a simple counterargument: if, for instance, the theory were true that LLMs could produce AGI if pushed to an absolutely colossal scale, then a planet-scale computer might produce a machine intelligence by the definition of this conversation. That's a big what-if, and it's about as useful as string theory, but it illustrates something I touch on in point #2.

2) A second counterargument to the "well, the weights are hardcoded and ChatGPT doesn't learn" argument: ChatGPT does learn. I've taught it conversational protocols, where it stores information, mutates it, and lets me retrieve it in a format I invented on the spot. This is the entire basis of ChatGPT: understanding the call-response of human conversation in the probabilistic abstract. The apparatus ChatGPT uses to "stretch the similar distribution into the temporal dimension" is that it can store new information passively in the continuing conversation thread. You could theoretically teach ChatGPT about 2023 by having a conversation about recent events. It probably wouldn't be as effective as having trained it on new information, but nonetheless.
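The mechanism described here can be sketched in a few lines. This is a toy illustration, not the real ChatGPT API or model: the point is only that all "learning" lives in the growing conversation history that is re-read on every turn, while the responder itself (standing in for the frozen weights) never changes. The `REMEMBER`/`RECALL` protocol is a hypothetical stand-in for a format "invented on the spot".

```python
# Toy sketch of in-context "learning": the model's weights stay fixed,
# but new information persists in the conversation thread itself.

conversation = []  # the only mutable state; the "weights" never change


def send(message, respond):
    """Append a user message and compute a reply from the full history."""
    conversation.append(("user", message))
    reply = respond(conversation)
    conversation.append(("assistant", reply))
    return reply


def respond(history):
    """A frozen stand-in 'model': it only sees the history it is handed."""
    facts = {}
    for role, text in history:
        # Rebuild everything "taught" so far by re-reading the thread.
        if role == "user" and text.startswith("REMEMBER "):
            key, _, value = text[len("REMEMBER "):].partition(" = ")
            facts[key] = value
    last = history[-1][1]
    if last.startswith("RECALL "):
        return facts.get(last[len("RECALL "):], "unknown")
    return "ok"


send("REMEMBER favorite_color = teal", respond)
print(send("RECALL favorite_color", respond))  # prints "teal"
```

Nothing about `respond` is updated between turns, yet the second call retrieves information from the first: the distribution is static, but the context it is conditioned on is not.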

3) Finally, a deeper argument: what you're arguing has nothing to do with the definition of intelligence I'm using here. That is: "an agent which is exceedingly effective at its game." It's important to characterize intelligence by the game that the agent in question aims to play. When we say a child is intelligent, we don't mean it in the same way that we say a doctor is intelligent. The two are playing entirely different games, yet both are considered intelligent. This is because the game parameterizes intelligence, and the intelligence explosion is all about the proliferation of specialized intelligences for the variety of "games" out there. ChatGPT is exceptional at its game of "general human conversational prediction up to the year 2021".


I'd argue that computers from the early 1900s, or even earlier, are intelligent. Intelligence is a pretty broad category; human intelligence is a very specific type of it.



