
> yet measurably identical in all aspects

> behavior is the activity that you measure, and you can measure brain activity.

You've shifted your definition of "behavior" now. I thought we were talking about behaviors that impact survival and are acted on by natural selection, not minute differences in MRI scans. For purposes of the thought experiment, I certainly don't care if the p-zombie has a slightly different brain-wave. Let's say they're permanently sleepwalking, then.

I really feel like you're hand-waving at supposed contradictions here, rather than engaging with why this is a difficult problem. If you firmly reject the idea of a p-zombie, let's leave that aside for now.

Do you believe that it would be possible, in principle, to build a robot that looked and acted extremely similar to a human being? It could carry on conversations, make decisions, defend itself against antagonists, etc. in a similar manner to a human being? In your view, would such a robot be necessarily a conscious entity?




> Do you believe that it would be possible, in principle, to build a robot that looked and acted extremely similar to a human being? It could carry on conversations, make decisions, defend itself against antagonists, etc. in a similar manner to a human being? In your view, would such a robot be necessarily a conscious entity?

I don't even know that other humans are conscious entities. At least not with the level of rigor you seem to be demanding I apply to this hypothetical robot. However, if you and I were to agree upon a series of tests such that, if a human passed them, we would assume for the sake of argument that that human was a conscious entity, and if we then subjected your robot to those same tests and it also passed, then I would assume the robot was conscious as well.

You might have noticed a hidden assumption in those tests, though: in establishing whether a human is conscious, they must not rely on the observable fact that the subject is a human. Is that reasonable?


Sure, absolutely. I agree that we could construct a battery of tests such that any entity passing should be given the benefit of the doubt and treated as though it were conscious: granted human (or AI) rights, allowed self-determination, etc.

> I don't even know that other humans are conscious entities

Exactly. Note that the claim Retra is making (to which I was responding) is much stronger than this. He is arguing not just that we should generally treat beings that seem conscious (including other people) as if they are, but that they must by definition be conscious, and in fact that it is a self-contradictory logical impossibility to even speak of a hypothetical intelligent-but-not-conscious creature.


>For purposes of the thought experiment, I certainly don't care if the p-zombie has a slightly different brain-wave.

Yes, you do. Because if the p-zombie has a slightly different brain-wave, it remains logically possible that p-zombies and a naturalistic consciousness can both exist. The goal of the thought experiment is to prove that consciousness must be non-natural -- that there is a Hard Problem of Consciousness rather than a merely Pretty Hard Problem. Make the p-zombie physically different from the conscious human being, and the whole argument fails to go through.

Of course, Chalmers' argument starts by assuming that consciousness is epiphenomenal, which is nonsense from a naturalistic, scientific point of view -- we can clearly observe consciousness, which means it interacts causally, and that renders epiphenomenalism a non-predictive, unfalsifiable hypothesis.


> Do you believe that it would be possible, in principle, to build a robot that looked and acted extremely similar to a human being? It could carry on conversations, make decisions, defend itself against antagonists, etc. in a similar manner to a human being? In your view, would such a robot be necessarily a conscious entity?

http://www.imdb.com/title/tt0708807/


>I thought we were talking about behaviors that impact survival and are acted on by natural selection, not minute differences in MRI scans.

I was talking about the stupidity of p-zombies. Either way, those 'minute' differences in MRI scans add up in ways that determine the survival of the mind being scanned.

>Do you believe [...] such a robot be necessarily a conscious entity?

Yes, it would. Because in order to cause such behavior to be physically manifest, you must actually construct a machine of sufficient complexity to mimic the behavior of a human brain exactly. It must consume and process information in the same manner. And that's what consciousness is: the ability to process information in a particular manner.

Even a "sleepwalking zombie" must undergo the same processing. That processing is the only thing necessary for consciousness, and it doesn't matter what hardware you run it on. As in Searle's problem: even if you run your intelligence on a massive lookup table, it is still intelligence. Because you've defined the behavior to exactly match a target, without imposing realistic constraints on the machinery.
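
To make the lookup-table point concrete, here's a toy Python sketch (a rough illustration only; the table and names are invented, and the thought experiment assumes an astronomically larger table covering every possible input):

    # A lookup-table responder: behavior is fixed entirely by a table
    # mapping conversation history to a reply, with no internal model.
    # The tiny table below is hypothetical.
    RESPONSES = {
        ("Hello",): "Hi there!",
        ("Hello", "How are you?"): "Fine, thanks. And you?",
    }

    def respond(history):
        """Return the canned reply for this exact conversation history."""
        return RESPONSES.get(tuple(history), "I don't understand.")

    print(respond(["Hello"]))                  # -> Hi there!
    print(respond(["Hello", "How are you?"]))  # -> Fine, thanks. And you?

If the table were large enough to cover every input a human could ever face, its outward behavior would match the target exactly, which is the whole force of the example.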


> Yes, it would. [...] that's what consciousness is: the ability to process information in a particular manner.

Then this is our fundamental disagreement. You believe consciousness is purely a question of information processing, and you're throwing your lot in with Skinner and the behaviorists.

I believe that you're neglecting "the experience of what it's like to be a human being"[0] (or maybe you yourself are a p-zombie ;) and you don't feel that it's like anything to be you). There are many scientists who agree with you and think that consciousness is an illusion or a red herring because we haven't been able to define it or figure out how to measure it, but that's different from sidestepping the question entirely by defining consciousness down until it's something we can measure (e.g. information processing). I posted this elsewhere, but I highly recommend reading Chalmers' essay "Facing Up to the Problem of Consciousness"[1] if you want to understand why many people consider this one of the most difficult and fundamental questions for humanity to answer.

[0] http://www.cs.helsinki.fi/u/ahyvarin/teaching/niseminar4/Nag...

[1] http://consc.net/papers/facing.html


>You believe consciousness is purely a question of information processing, and you're throwing your lot in with Skinner and the behaviorists.

No, that is not at all what is happening. That's not even on the same level of discourse.

>I believe that you're neglecting "the experience of what it's like to be a human being"

That experience is the information processing. They are the same thing, just different words. Like "the thing you turn to open a door" and "doorknob" are the same thing. I'm not neglecting the experience of being human by talking about information processing. What is human is encoded by information that you experience by processing it.

>There are many scientists who agree with you, and think that consciousness is an illusion or a red herring because we haven't been able to define it or figure out how to measure it [...]

No, this is not agreement with me. This is not at all what I'm saying.


In that case, I'm really struggling to understand your position.

> What is human is encoded by information that you experience by processing it.

So you're saying that it's impossible to process information without experiencing it? That the act of processing and the act of experiencing are one and the same? Do you think that computers are conscious? What about a single neuron that integrates and responds to neural signals? What about a person taking Ambien who walks, talks, and responds to questions in their sleep (literally while "unconscious")?


>So you're saying that it's impossible to process information without experiencing it? That the act of processing and the act of experiencing are one and the same?

Yes, exactly.

>Do you think that computers are conscious? What about a single neuron that integrates and responds to neural signals?

This is a different question. No, computers aren't conscious. You need to have the 'right kind' of information processing for consciousness, and it's not clear what kind of processing that is.

This is essentially the Sorites Paradox: how many grains of sand are required for a collection to be called a heap? How much information has to be processed? How many neurons are needed? What are the essential features of information processing that must be present before you have consciousness?

These are the interesting questions. So far, we know that there must be continual self-feedback (self-awareness), enough abstract flexibility to recover from arbitrary information errors (identity persistence), a process of modeling counterfactuals and evaluating them (morality), a mechanism for adjusting to new information (learning), a mechanism for combining old information in new ways (creativity), and other kinds of heuristics like emotion, goal-creation, social awareness, and flexible models of communication.

You don't need all of this, of course. You can have it in part or in full, to varying levels of impact. "Consciousness" is not well-defined in this way; it is a spectrum of related information-processing capabilities. So maybe you could consider computers to be conscious. They are "conscious in a very loose approximation."
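
One loose way to picture that spectrum in code (a sketch only; the capability names come from the list above, and the scores are made up for illustration):

    # Toy sketch: consciousness as a graded profile over capabilities
    # rather than a yes/no property. Scores are invented examples.
    CAPABILITIES = [
        "self-feedback", "identity persistence", "counterfactual modeling",
        "learning", "creativity", "emotion", "social awareness",
    ]

    def profile(scores):
        """Map each capability to a score in [0, 1]; missing ones count as 0."""
        return {c: scores.get(c, 0.0) for c in CAPABILITIES}

    human = profile({c: 1.0 for c in CAPABILITIES})
    laptop = profile({"learning": 0.1})  # "conscious in a very loose approximation"

The point is just that on this view the question "is it conscious?" has no sharp boundary, only a profile of how much of each capability is present.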


Does this unit have a soul?

I answer, yes.



