
Pretty much everything Searle has ever written on the subject can be predicted by starting with the argument "but only humans can understand meaning!" and working from there.

In part c of that link, he lays it out quite clearly: even if you manage to build a detailed working simulation of a human brain, even if you then insert it inside a human's head and hook it up in all the right ways, you still haven't "done the job", because a mere simulation of a brain can't have mental states or understand meaning. Because it's not a human brain.

In other words, he's an idiot. Or at least he's so committed to being "right" on this issue that he's willing to play the dirty philosophy game of sneakily redefining words behind the scenes until he's right by default.

But in any case, he's not talking about any practical or measurable effect or difficulty related to AI. He's arguing that even if you built HAL, he wouldn't acknowledge it as intelligent, because his definition of "intelligent" can only be applied to humans.




Is it Searle who redefines consciousness because he doesn't like computers, or is it you, because you like them? His argument is quite brilliant, because it's both clear and non-trivial. Most of the self-appointed internet philosophers lack both of these qualities.

For example, people who say that there is no difference between understanding addition and merely running an addition algorithm are wrong. Dead wrong. You don't need complex philosophy to show that. Yes, the results of the computation would be the same, but the consequences for the one doing the computing are not. We all know that a person who understands something can do much more with it than a person who has merely memorized a process. Everybody agrees with this when it comes to education, so why is the principle suddenly reversed when it comes to computers?


"Most of the self-appointed internet philosophers lack both of these qualities": what use is attacking the man here?

You are also misrepresenting Searle's argument. In the case of addition, the machine would not only be able to perform it, but also to answer any conceivable question about the abstract operation of addition. It would be able to do everything a human would do, excluding nothing. The underlying argument is that "understanding" is a fundamentally and exclusively human property (and this will not be fully rebutted until we discover in full the processes underlying learning and memory in humans).

Granted, a huge list of syntactic rules will probably not result in any useful intelligence, but a brain simulator would be exactly equivalent to a human (and Searle's response to that argument is completely unfounded).


I don't think I misrepresent his argument. I just interpret it using different examples. He uses a huge example, like speaking Chinese, which seems to confuse a lot of people. I use something much simpler, like doing addition.

His argument is based on the notion that doing something and understanding what you do are two different things. I don't see why this needs an elaborate thought-experiment when we all have experienced doing things without understanding them. We don't need to compare humans to computers to see the difference.

The problem is that this difference becomes apparent only when you go beyond the scope of the original activity or algorithm. And that's exactly where modern AI programs fail, badly. Take a sophisticated algorithm that does wonders in one domain, throw it into a vastly different domain, and it fails miserably, even though that second domain might be very simple.
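
To make that concrete, here is a minimal sketch (Python, with hypothetical names; not anyone's actual system) of the gap between a rote procedure and the general rule. The memorized version reproduces the right answers inside its original domain and breaks the moment you step outside it:

    # Hypothetical illustration: rote lookup vs. the general rule.
    memorized = {(a, b): a + b for a in range(10) for b in range(10)}  # rote table of single-digit sums

    def add_by_rote(a, b):
        return memorized[(a, b)]   # only valid inside the memorized domain

    def add_by_rule(a, b):
        return a + b               # the general procedure covers any integers

    print(add_by_rote(3, 4))    # 7
    print(add_by_rule(37, 45))  # 82
    add_by_rote(37, 45)         # KeyError: the rote process breaks outside its original scope

Whether that gap is the same thing as "understanding" is, of course, exactly the point in dispute.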


His argument is that, while a human can do something with or without understanding it (e.g. by memorizing), a machine can only ever do the doing and will never have the understanding. The argument may hold for current (simplistic) AI, but not for a future full brain simulator.



