
seems like you'd need some sort of cognition before you could even begin to approach the "hump" of AGI



What do you think cognition is, if not a coherent flow of ideas?


> What do you think cognition is, if not a coherent flow of ideas?

With ChatGPT, the flow of ideas is coherent only to the reader (the user typing into the system). The "AI" itself has no understanding of the ideas, and does not "know" whether they are coherent or not. That is why it is far from AGI. It is not a general intelligence; it is a general pattern matcher.


Being a little bit of a devil's advocate, how do you know you have an understanding of the ideas? How do you know the AI does not? Both of these questions are tough to answer. (I'll put a relevant ChatGPT essay in a subsequent response digging into the philosophy a bit.)

I do think you are on the right track with your criticism, however. I agree GPT is not a general intelligence but a general pattern matcher. By design it produces flows of text similar to what it was trained on, without regard for how well these correspond to the actual world or even a hypothetical but rule-governed world.

The map is not the territory; GPT deals exclusively with the "map" side.


> How do you know the AI does not?

Easy: GPT-3 doesn't provide a coherent world view. Just ask it about the things it tells you, and it will quickly start directly contradicting itself, saying "A is true" and then "A is not true". Humans don't do that so blatantly, unless they have Alzheimer's or something similar, in which case it is also hard to tell whether the person is still intelligent.

GPT-3 is like looking up random parts of the internet with your queries; change them a little and you land on a site that tells a completely different story. That is the most reasonable explanation for why it behaves as above.
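To make that concrete, here is a rough sketch of the kind of consistency probe I mean, using the legacy openai Python client against text-davinci-003 (assumes OPENAI_API_KEY is set; the prompts and the claim being tested are made-up examples, not anything specific the model said):

    # Sketch of a self-consistency probe against text-davinci-003.
    # Assumes the pre-1.0 `openai` Python client and an API key in OPENAI_API_KEY.
    import openai

    def ask(prompt):
        resp = openai.Completion.create(
            model="text-davinci-003",
            prompt=prompt,
            max_tokens=64,
            temperature=0.7,
        )
        return resp.choices[0].text.strip()

    # Hypothetical probe: get a claim, then ask the model to argue its negation.
    claim = ask("State one fact about the history of the Eiffel Tower.")
    rebuttal = ask("Explain why the following statement is false: " + claim)

    print("Claim:   ", claim)
    print("Rebuttal:", rebuttal)
    # A system with no underlying world-model will often produce both a
    # confident claim and an equally confident "refutation" of that claim.

The point of the sketch is only that nothing in the model pushes back when you ask it to affirm and then deny the same thing; it happily pattern-matches both prompts.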


[me] Write a five paragraph essay on Daniel Dennett's responses to Searle's Chinese room thought experiment.

[GPT (text-davinci-003)]

In his responses to John Searle's Chinese Room thought experiment, philosopher Daniel Dennett offers a defense of strong artificial intelligence (AI). In his argument, Dennett maintains that Searle's experiment does not adequately account for how human beings use language, and that strong AI can potentially be achieved if the right conditions are met.

Dennett's first response to the Chinese Room argument is that language use is not merely an informational process, as Searle suggests. Rather, Dennett argues that language use involves a complex system of meaning and interpretation, and that meaningful language use cannot be reduced to a strictly algorithmic process. For example, Dennett points out that two people may use the same words but interpret them differently, depending on various contextual factors. Thus, the Chinese Room thought experiment fails to account for the complexity of language and its use.

Dennett also challenges the assumption that a computer can never understand language as well as a human being. To this end, he argues that if a computer is given enough data, it can learn to process language in a similar way to humans. This could involve the computer being programmed with a large corpus of data, such as literature, so that it can learn to understand language in context.

Dennett's third point is that the Chinese Room argument neglects the fact that computers can be programmed to interact with their environment. In other words, a computer can be programmed to respond to various external stimuli, just as a human being would. This means that a computer could potentially interact with its environment in a meaningful way, just as a human being can.

Finally, Dennett argues that the Chinese Room argument assumes that a computer must understand language in the same way as a human being, which is an oversimplification of how language works. Rather, he suggests that a computer could potentially understand language in its own way, and be able to process information in a meaningful way.

In sum, Daniel Dennett's responses to Searle's Chinese Room thought experiment offer a defense of strong AI, arguing that computers can potentially be programmed to understand language in a meaningful way. Dennett's argument draws upon various theoretical and practical considerations, and provides an interesting counterpoint to the conventional view that computers can never truly understand language in the same way as a human being.


That's the "brain in a vat" model of cognition.

The 4E model of cognition is much more demanding in its definition (embodied, embedded, enactive, and extended).

In other words, what's the purpose of a flow of ideas if it doesn't ultimately result in any real-world movement, meaning, or interaction?



