Death Is Optional: A Conversation with Yuval Noah Harari and Daniel Kahneman (edge.org)
110 points by sergeant3 on March 5, 2015 | 95 comments



  Computers are very, very, very far from being like humans, especially 
  when it comes to consciousness. The problem is different, that the system, 
  the military and economic and political system doesn't really need 
  consciousness.
Consciousness is a poorly defined concept with many unfortunate connotations, but let's assume in this context it's the same as self-awareness. I assert it's very likely that nature didn't evolve conscious brains by accident, it's probably a byproduct of making an intelligence that can reason about itself and its environment.

I know it's just a thesis, but when you think about what our mindless AIs lack, it makes sense. They're characterized by a complete incapability for global reasoning and an inability for personal consideration. You might argue, as the article does, this is exactly how we want our tools to behave, but then we might have to accept there could be hard limits on the complexity of mental tasks these systems are able to perform without access to higher reasoning.


> I assert it's very likely that nature didn't evolve conscious brains by accident, it's probably a byproduct of making an intelligence that can reason about itself and its environment.

I think you're exactly right; Michael Graziano's theory of consciousness is that it starts as a necessary function for modeling the attention of an agent and turns into awareness when the brain itself is modeled as an agent. First, a specialized brain function for agent modeling of predator/prey/rival/mate evolves as a good trait for survival. Part of that function entails modeling what an agent is attentive to. However, once this agent model discovers the brain it is running on, a new agent is recognized, and attention now becomes redundant and is instead reported as awareness of attention, which has to feel subjectively like a secondary aspect of primary sensory information. It's a nice logical progression. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3223025/


"Self awareness is a byproduct of an intelligence that can reason about itself" looks like a tautology to me. I've heard similar theories proposed that extend to broader notions of consciousness, but I don't find the argument very compelling. It's very easy to see why self-awareness is required in nature: intelligence won't do any good for an organism unless it is self aware and has a survival instinct. That pressure clearly doesn't apply to man made machines.


>it's probably a byproduct of making an intelligence that can reason about itself and its environment.

I think this is exactly right. In fact my suspicion is that "consciousness" is an emergent property of feedback from high resolution sensors. Stated another way, it's a constant internal inventory of everything that can be controlled through the same volitional system.


The word "Consciousness" is almost like the word "God".

Many people seem to know what it is, no one can actually define it, and it has never been measured, located or proven to exist.

Maybe we should stop using it altogether.


I wouldn't go as far as to stop using it altogether, but when discussing things it might be worthwhile to taboo it [0]. It's a very helpful trick; [1] explains it nicely.

[0] - http://wiki.lesswrong.com/wiki/Rationalist_taboo

[1] - http://lesswrong.com/lw/nu/taboo_your_words/


We should either stop using it, or at the very least own up to what it is we're [trying to] talk about: http://consc.net/papers/facing.html


Going to read that in earnest; I did a quick scan (not my native language) and yes: I want to differentiate here between the not-asleep and the I-think-I-know-I-am-thinking type of consciousness. It's not only experience, but knowing, or thinking you know, your own experience.

Furthermore, it seems only to exist or pinpoint when you're communicating with another person. In total solitude, boundaries between your self-image and the other(s) just don't hold up, and the whole thing becomes almost meaningless.

Edit: the concept of time seems to be connected to it as well, but I really need to read this paper first now, I think :)


I wouldn't go that far. Consciousness is the result of algorithms properly functioning in my brain chemistry, which allows me self-awareness.

Our brains process information in a certain way due to evolution, therefore, we are.


I root all these things in survival. So far, computers are like 2-year-olds: without humans to lay infrastructure, renew, and fix them, computers would stop functioning pretty fast.


> it's probably a byproduct of making an intelligence that can reason about itself and its environment.

See Jaynes' bicameral mind hypothesis. Even if it's most likely a mistaken theory it's still interesting as a distinction between intelligence and consciousness.


> it's very likely that nature didn't evolve conscious brains by accident, it's probably a byproduct of making an intelligence that can reason about itself and its environment.

Strictly speaking, everything in living nature is an "accident". Natural selection acts on "accidental" random mutations. It's not a directed process.


I'm not sure the original comment deserves this clarification; you'll be hard-pressed to find someone on this forum who didn't already know about the nature of mutations.

The point, though, is that while the mutations themselves are "accidents" and the result is always characterized by a certain randomness, evolution as a problem-solving algorithm isn't itself accidental. While some (or even many) of the characteristics of an organism might be incidental, some major features tend to have a good reason for being selected. Self-awareness, I assert, is such a feature, because it carries implications too large not to have an effect on selection.


> Self-awareness, I assert, is such a feature, because it carries implications too large not to have an effect on selection.

You might assert that, but there's virtually no evidence to support that assertion, and interesting philosophical arguments to the contrary: https://en.wikipedia.org/wiki/Philosophical_zombie

Now, if what you mean by "self-awareness" is just the ability of an organism to monitor its own condition and respond accordingly, then we're in agreement, but I find that a very uninteresting assertion and it means that we're not really talking about "consciousness" except inasmuch as a thermostat is conscious: http://consc.net/notes/lloyd-comments.html


P-zombies are not an interesting philosophical argument any more than "Angels dancing on pins" is an interesting argument.


Parent comment is asserting that consciousness/self-awareness is an evolutionarily important feature, that consciousness is favored by natural selection.

P-zombies are a vivid counterexample that points to the possibility of epiphenomenalism. There's no reason to believe that consciousness is a necessary feature for an organism to respond to its environment in a survival-enhancing way. In fact, there is some neurological evidence suggesting that the nerve impulses to take an action precede conscious awareness: http://www.consciousentities.com/libet.htm


P-zombies don't exist, so they are not a counterexample to anything. In fact, they cannot possibly exist, so they don't even point to the possibility of anything interesting.

>There's no reason to believe that consciousness is a necessary feature for an organism to respond to its environment in a survival-enhancing way.

The reason to believe this is that systems with "consciousness" are a strict superset of systems with "responding to the environment." They are not unrelated ideas, and in fact, the ability of an organism to survive is closely tied to this kind of behavior.

I have never heard anyone try to defend P-zombies unless they were simply unaware of what the word 'meaning' means, or how our words acquire meaning. If you know how this works, you should be able to easily see why P-zombies are a meaningless idea -- an incongruous hypothetical. (Like "what would we be talking about if I didn't exist?")

Same goes for Searle's Chinese Room argument. If you assume something that is impossible, it is easy to conclude any ridiculous thing you like. P-zombies are impossible. They are no more useful than any other self-contained contradiction.


> In fact, they cannot possibly exist

I don't understand how you can be so confident of this. How are you defining consciousness? How are you measuring it? What makes you believe with such emphatic certainty that I am a conscious being and not a p-zombie? (or, if you prefer, a bot that easily passes the Turing test)

> They are not unrelated ideas, and in fact, the ability of an organism to survive is closely tied to this kind of behavior.

That's what I'm saying, consciousness is not a "kind of behavior". There is nothing behavioral about your inner experience as a conscious entity.


I think the TL;DR of the argument against p-zombies goes like this: if you have two things that are by definition indistinguishable by any possible measurement even in principle, they are by this very definition the same. Since there is, by definition, no way to tell if someone is a p-zombie or not, the introduction of the term "p-zombie" doesn't make any sense at all, and therefore why would you ever do that?

The people who argue p-zombies often do this because they want to keep consciousness as something fundamentally different than the material world, something inaccessible to science. But it's wrong. Even magic is accessible to science. By the very definition and idea of science, anything that has any causal influence on the observable universe can be studied and is in the domain of science.


The TL;DR argument against philosophical zombies is more like: if consciousness is non-causal (the consequence if p-zombies can exist), then the answer to the question "Why do I think I'm conscious?" cannot in any way make reference to the fact that you actually are conscious. Suppose we take the two parallel universes, and we run the same experiment in each, where the conscious and non-conscious doppelgangers are both asked the question "are you a conscious, self-aware human being?" Both of them will answer "yes" of course, and we can record and observe whatever we want about their brain states and so on, and get exactly the same results for both.

So, only one of the versions is correct, but it's only by coincidence! All the reasons that the conscious brain has to think it's a conscious human being, and answer "yes" to the question, are also in play in the zombie universe, which also answers "yes". The only difference is that in the "real" world the non-zombie brain happens to be right, for literally no reason at all.

And I think it's around this point you're supposed to realize the absurdity of the thought experiment.


That's a very bad argument. Indistinguishability doesn't entail identity. One obvious way to show this is to note that only the latter is a transitive relation. In other words, if A = B and B = C, then A = C; but if A is indistinguishable from B and B is indistinguishable from C, it doesn't follow that A is indistinguishable from C.


It doesn't? Why? We're talking about indistinguishability in principle, by any possible form of measurement/observation.


Yes, I know. Indistinguishability in that sense is not a transitive relation. Imagine e.g. that we have detectors which can distinguish As from Cs, but no detectors which can distinguish As from Bs or Bs from Cs. There is no contradiction in that scenario. In contrast, there is no consistent scenario in which A = B and B = C but A != C.


Imagine that we have a bunch of As, Bs and Cs in one place. Start testing every one against another. You'll quickly discover three groups - an A tests positive with other As and Bs, but tests negative with Cs. A C tests negative with As, but tests positive with Bs and other Cs. B is the one that tests positive with everything.

Here, I distinguished them all. Doesn't that contradict your argument about indistinguishability not being transitive in general?
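For concreteness, here's a minimal sketch of that strategy in Python (a sketch only; the only instrument assumed is a pairwise detector, and the "differ by less than 3" test is just a stand-in):

    # Hypothetical pairwise detector: the only measurement we have.
    def i(a, b):
        return abs(a - b) < 3

    def group_by_signature(items):
        # Two items land in the same group iff they respond identically
        # to the detector against every item we have.
        groups = {}
        for x in items:
            sig = tuple(i(x, y) for y in items)
            groups.setdefault(sig, []).append(x)
        return list(groups.values())

    # A = 1, B = 3, C = 5: i(1,3) and i(3,5) hold but i(1,5) doesn't,
    # so the three signatures separate all of them.
    print(group_by_signature([1, 3, 5]))  # [[1], [3], [5]]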


Yeah, that strategy would work in the scenario I sketched, but it's easy to change it so that you couldn't do that. Just say we have As, Bs, Cs and Ds and that all pairings are indistinguishable except As with Ds.


But at this point I have to ask, how do you define identity? I'm pretty sure that I could use the strategy I outlined above to separate our objects into three groups - As, Ds and the rest. So how do you define that Bs are not Cs, if there is no possible way of telling the difference?


I'd define identity as the smallest relation holding between all things and themselves.

If you want, you can redefine identity in terms of some notion of indistinguishability, but then you'll end up with the odd consequence that identity is not transitive. In other words, you'd have to say that if A is identical to B, B is identical to C, and C is identical to D, it doesn't necessarily follow that A is identical to D.

There are even semi-realistic examples of this, I think. Suppose that two physical quantities X and Y are indistinguishable by any physically possible test if the difference between X and Y is < 3. Then i(1,2), i(2,3), i(3,4), but clearly not i(1,4).
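A worked version of that example as a quick Python sketch (taking the "< 3" threshold above literally):

    # Indistinguishability as "no physically possible test separates
    # X and Y when they differ by less than 3".
    def indistinguishable(x, y):
        return abs(x - y) < 3

    print(indistinguishable(1, 2))  # True
    print(indistinguishable(2, 3))  # True
    print(indistinguishable(3, 4))  # True
    print(indistinguishable(1, 4))  # False -- the relation is not transitive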


I'll have to think a bit more about this. Thanks for all those scenarios and making my brain do some work :).

So at this point I'm not sure if your example is, or is not an issue for a working definition of identity. To circle back to p-zombies, as far as I understand, they are not supposed to be distinguishable from non-p-zombies by any possible means, which includes testing everything against everything.

What if I define the identity test I(a,b) in this way: I(a,b) ↔ ∀i : i(a,b), where i(a,b) is an "indistinguishable" test? This should establish a useful definition of identity that works according to my scenario, and also your last example unless you limit the domain of X and Y to integers from 1 to 4. But in this last case there's absolutely no way to tell there's a difference between 2 and 3, so they may as well be just considered as one thing.

As I said, I need to think this through a bit more, but what my intuition is telling me right now is that the very point of having a thing called "identity" is to use it to distinguish between things - if two things are identical under any possible test, there's no point in not thinking about them as one thing.


>But in this last case there's absolutely no way to tell there's a difference between 2 and 3, so they may as well be just considered as one thing.

Yes, that's the point. But then you lose the transitivity property, since although 2 and 3 are indistinguishable, 3 and 4 are indistinguishable, and 4 and 5 are indistinguishable, 2 and 5 are not. So the kind of operational definition of identity you have in mind yields a relation that's so radically unlike the standard characterization of the identity relation that I don't see any reason to call it "identity" at all.

Here's one way of drawing this out. Suppose that X linearly increases from 2 to 5 over a period of 3 seconds. Do we really want to say that there was no change in the value of X between t=0 and t=1, no change between t=1 and t=2, no change between t=2 and t=3, and yet a change between t=0 and t=3? (?!)

As far as I understand you, you have some kind of positivist skepticism about non-operationalizable notions, and so you want to come up with some kind of stand-in for identity which can play largely the same role in philosophical/scientific discourse as the ordinary, non-operationalizable notion of identity. That's a coherent project, but it rests on assumptions that anyone who's interested in P-zombies is likely to reject.


> Here's one way of drawing this out. Suppose that X linearly increases from 2 to 5 over a period of 3 seconds. Do we really want to say that there was no change in the value of X between t=0 and t=1, no change between t=1 and t=2, no change between t=2 and t=3, and yet a change between t=0 and t=3? (?!)

Yeah, I get that, but what I meant in my previous comment is that you either limit the domain of t to 0-3 (and X to 2-5) and there is indeed no way to tell the change between t=2 and t=3, or you don't limit yourself to that test and can distinguish the intermediate values by means of the trick I described before. In other words, either you have transitive identity or you have all the reasons to treat non-transitive cases as one (if the identity test is like the one I described in my previous comment).

> positivist skepticism about non-operationalizable notions

I think it's too late in the night for me to understand this, I'll need to come back to it in the morning. Could you ELI5 to me the meaning of "non-operationalizable" in this context?

Again, thanks for making me think and showing me the limits of my understanding.


>Again, thanks for making me think and showing me the limits of my understanding.

Yes this was a fun discussion, thanks.

Your objection stands if you have (and know you have) at least one instance of every value for the quantity. So suppose that we are given a countably infinite set of variables and told that each integer is denoted by at least one of these variables, and then further given a function over pairs of variables f(x,y), such that f(x,y) = 1 if x and y differ by less than 3 and = 0 otherwise. Then, yes, we can figure out which variables are exactly identical to which others.

However, I would regard this as an irrelevant scenario in the sense that we could never know, via observation, that we had obtained such a set of variables (even if we allow the possibility of making a countably infinite number of observations). Suppose that we make an infinite series of observations and end up with at least one variable denoting each member of the following set (with the ellipses counting up/down to +/-infinity):

    ...,0,1,3,4,5,6,8,9,...

In other words, we have variables with every integer value except 2 and 7. Then for any variable x with the value 4 and variable y with the value 5, f(x,z) = f(y,z) for all variables z. In other words, there'll be no way to distinguish 4-valued variables from 5-valued variables. It's only in the case where some oracle tells us that we have a variable for every integer value that we can figure out which variables have exactly the same values as which others.
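A finite sanity check of that claim, as a Python sketch (using the "differ by less than 3" test from above and a finite stand-in for the infinite set, with the values 2 and 7 left out):

    def f(x, y):
        # 1 if x and y differ by less than 3, else 0.
        return 1 if abs(x - y) < 3 else 0

    # Finite stand-in: at least one variable per value, except 2 and 7.
    observed = [v for v in range(-5, 13) if v not in (2, 7)]

    def signature(x):
        # Everything f can tell us about x, given the variables we have.
        return tuple(f(x, z) for z in observed)

    print(signature(4) == signature(5))  # True: 4 and 5 can't be told apart
    print(signature(3) == signature(4))  # False: the value-1 variable separates them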


>Indistinguishability doesn't entail identity.

Of course it does, by Voevodsky's Univalence Axiom ;-).

>One obvious way to show this is to note that only the latter is a transitive relation. In other words, if A = B and B = C, then A = C; but if A is indistinguishable from B and B is indistinguishable from C, it doesn't follow that A is indistinguishable from C.

In this case, you seem to be envisioning A, B, and C as points along a spectrum, and talking about ways to classify them as separate from each-other, in which we can classify {A, B}->+1 or {B, C}->+1, but {A, C}->-1 always holds.

That's fine, but when we say indistinguishable in the p-zombie argument, we're talking about a physical isomorphism, which doesn't really allow for the kinds of games you can get away with when classifying sections of spectrum.


>Of course it does, by Voevodsky's Univalence Axiom ;-).

I think this was a joke, right? Just asking because it's hard to tell sometimes on the internet. I didn't see how VUA was particularly relevant but I may be missing something.

It is question-begging in this context to assert that the existence of a physical isomorphism between A and B entails that A and B are identical, since precisely the question at issue in the case of P-zombies is whether or not that's the case.

I took OP to be making an attempt to avoid begging the question by arguing that in general, indistinguishability in a certain very broad sense entails identity, so that without question-beggingly assuming that the existence of a physical isomorphism entails identity, we could non-question-beggingly argue from indistinguishability to identity. In other words, rather than arguing that P-zombies couldn't differ in any way from us because they're physically identical to us (which just begs the question), the argument would be that they couldn't differ in any way from us because they're indistinguishable from us.


This isn't really germane to the p-zombie thought experiment, but:

Indistinguishability does entail identity. If I have a sphere of iron X, and a sphere of iron Y which is atom-for-atom, electron-for-electron, subatomic-particle-for-subatomic-particle identical to sphere X, and I place sphere X in position A, and sphere Y in position B, then they are still distinguishable, because one is in position A and one is in position B.

Basically, I'm not sure what the two of you mean by "the same", but I suspect you're not in agreement on it.


I think we're talking about a sense of indistinguishable/identical for which the two spheres would be indistinguishable/identical, since we're comparing a person to a P-zombie, so it's clear that we're dealing with two different individuals. I think identity in that sense is still transitive on the ordinary understanding. So e.g. if I can show that sphere A has exactly the same physical constitution as sphere B, and that sphere B has exactly the same physical constitution as sphere C, then presumably sphere A must have exactly the same physical constitution as sphere C.


The human and the p-zombie are distinguishable because one is in the zombie universe and one isn't. For the purposes of the experiment, you're not supposed to be able to tell which universe is which by observation of the universe itself (i.e. there is no property of p-zombies that gives them away as p-zombies), but from the outside looking in I guess you have a label for one and a label for the other.

Like I said, it doesn't seem germane to the thought experiment anyway, which doesn't allow for epsilons, at least none that could have a causative effect on anything. Like, if you have universe A with no consciousness, and universe B with orange-flavored consciousness, and universe C with grape-flavored consciousness, and finally universe D with cherry-flavored consciousness, and none of them are distinguishable from the others except for universe A and universe D, then you're violating the terms of the thought experiment because you have two supposedly physically identical universes which are nonetheless distinguishable by dint of their underlying consciousness substrates (or lack thereof).

Anyway you're right, it is a weak argument, but only because it doesn't go far enough in outlining why p-zombies are ridiculous (which, IMO, the argument I presented instead, does).


Identity isn't what we're measuring here, it's "humanness" or "consciousness" -- things that are behaviorally distinguishable. Up to an abstract categorical similarity.

Thus they only need to be indistinguishable up to some feature of similarity that allows them to be classified in the same group. That's why, for example, we don't have to worry about "A is the same as B except that it is 2 meters to the left."


OP was saying that P-zombies are "the same" as us in virtue of being indistinguishable from us. I was just pointing out that this inference doesn't go through, since two non-identical things can be indistinguishable.


Ah, ok.


>I don't understand how you can be so confident of this. [...] How are you measuring it? What makes you believe with such emphatic certainty that I am a conscious being and not a p-zombie?

Because p-zombies are self-contradictory. The definition of a p-zombie is a contradiction. It's like saying "suppose 1 = 2 and 1 != 2. Call this a p-zombie quality."

When you suppose that the behavior of a thing is separate from the reality of a thing, you are failing to account for how the words 'behavior' and 'reality' acquire meaning -- through observation. They cannot be different because the processes that establish their meaning are identical.

To suppose that a p-zombie could be different from a person, yet measurably identical in all aspects is a contradiction.

>How are you defining consciousness?

There is a big difference between meaning and definition. I don't have to define consciousness, I only need to know what it means. I only need to identify the use-cases where it is appropriate.

>There is nothing behavioral about your inner experience as a conscious entity.

Yes there is: behavior is the activity that you measure, and you can measure brain activity.


> yet measurably identical in all aspects

> behavior is the activity that you measure, and you can measure brain activity.

You've shifted your definition of "behavior" now. I thought we were talking about behaviors that impact survival and are acted on by natural selection, not minute differences in MRI scans. For purposes of the thought experiment, I certainly don't care if the p-zombie has a slightly different brain-wave. Let's say they're permanently sleepwalking, then.

I really feel like you're hand-waving at supposed contradictions here, rather than engaging with why this is a difficult problem. If you firmly reject the idea of a p-zombie, let's leave that aside for now.

Do you believe that it would be possible, in principle, to build a robot that looked and acted extremely similar to a human being? It could carry on conversations, make decisions, defend itself against antagonists, etc. in a similar manner to a human being? In your view, would such a robot be necessarily a conscious entity?


> Do you believe that it would be possible, in principle, to build a robot that looked and acted extremely similar to a human being? It could carry on conversations, make decisions, defend itself against antagonists, etc. in a similar manner to a human being? In your view, would such a robot be necessarily a conscious entity?

I don't even know that other humans are conscious entities. At least not with the level of rigor you seem to be demanding I apply to this hypothetical robot. However, if you and I were to agree upon a series of tests such that, if a human passed them, we would assume for the sake of argument that that human was a conscious entity, and if we then subjected your robot to those same tests and it also passed, then I would assume the robot was also conscious.

You might have noticed I made a hidden assumption in the tests though, which is that in establishing the consciousness or not-consciousness of a human they do not rely on the observable fact that the subject is a human. Is that reasonable?


Sure, absolutely. I agree that we could construct a battery of tests such that any entity passing should be given the benefit of the doubt and treated as though it were conscious: granted human (or AI) rights, allowed self-determination, etc.

> I don't even know that other humans are conscious entities

Exactly. Note that the claim Retra is making (to which I was responding) was very much stronger than this. He is arguing not just that we should generally treat beings that seem conscious (including other people) as if they are, but that they must by definition be conscious, and in fact that it is a self-contradictory logical impossibility to speak of a hypothetical intelligent-but-not-conscious creature.


>For purposes of the thought experiment, I certainly don't care if the p-zombie has a slightly different brain-wave.

Yes, you do. Because if the p-zombie has a slightly different brain-wave, it remains logically possible that p-zombies and a naturalistic consciousness can both exist. The goal of the thought-experiment is to prove that consciousness must be non-natural -- that there is a Hard Problem of Consciousness rather than a Pretty Hard Problem. Make the p-zombie physically different from the conscious human being and the whole thing fails to go through.

Of course, Chalmers' argument starts by assuming that consciousness is epiphenomenal, which is nonsense from a naturalistic, scientific point of view -- we can clearly observe it, which means it interacts causally, which renders epiphenomenalism a non-predictive, unfalsifiable hypothesis.


> Do you believe that it would be possible, in principle, to build a robot that looked and acted extremely similar to a human being? It could carry on conversations, make decisions, defend itself against antagonists, etc. in a similar manner to a human being? In your view, would such a robot be necessarily a conscious entity?

http://www.imdb.com/title/tt0708807/


>I thought we were talking about behaviors that impact survival and are acted on by natural selection, not minute differences in MRI scans.

I was talking about the stupidity of p-zombies. Either way, those 'minute' differences in MRI scans build up in such a way as to determine the survival of the mind being scanned.

>Do you believe [...] such a robot be necessarily a conscious entity?

Yes, it would. Because in order to cause such behavior to be physically manifest, you must actually construct a machine of sufficient complexity to mimic the behavior of a human brain exactly. It must consume and process information in the same manner. And that's what consciousness is: the ability to process information in a particular manner.

Even a "sleepwalking zombie" must undergo the same processing. That processing is the only thing necessary for consciousness, and it doesn't matter what hardware you run it on. As in Searle's problem: even if you run your intelligence on a massive lookup table, it is still intelligence. Because you've defined the behavior to exactly match a target, without imposing realistic constraints on the machinery.


> Yes, it would. [...] that's what consciousness is: the ability to process information in a particular manner.

Then this is our fundamental disagreement. You believe consciousness is purely a question of information processing, and you're throwing your lot in with Skinner and the behaviorists.

I believe that you're neglecting the "the experience of what it's like to be a human being"[0] (or maybe you yourself are a p-zombie ;) and you don't feel that it's like anything to be you). There are many scientists who agree with you, and think that consciousness is an illusion or a red herring because we haven't been able to define it or figure out how to measure it, but that's different than sidestepping the question entirely by defining down consciousness until it's something we can measure (e.g. information processing). I posted this elsewhere, but I highly recommend reading Chalmers' essay "Facing Up to the Problem of Consciousness"[1] if you want to understand why many people consider this one of the most difficult and fundamental questions for humanity to attempt to answer.

[0] http://www.cs.helsinki.fi/u/ahyvarin/teaching/niseminar4/Nag...

[1] http://consc.net/papers/facing.html


>You believe consciousness is purely a question of information processing, and you're throwing your lot in with Skinner and the behaviorists.

No, that is not at all what is happening. That's not even on the same level of discourse.

>I believe that you're neglecting the "the experience of what it's like to be a human being"

That experience is the information processing. They are the same thing, just different words. Like "the thing you turn to open a door" and "doorknob" are the same thing. I'm not neglecting the experience of being human by talking about information processing. What is human is encoded by information that you experience by processing it.

>There are many scientists who agree with you, and think that consciousness is an illusion or a red herring because we haven't been able to define it or figure out how to measure it [...]

No, this is not agreement with me. This is not at all what I'm saying.


In that case, I'm really struggling to understand your position.

> What is human is encoded by information that you experience by processing it.

So you're saying that it's impossible to process information without experiencing it? That the act of processing and the act of experiencing are one and the same? Do you think that computers are conscious? What about a single neuron that integrates and responds to neural signals? What about a person taking Ambien who walks, talks and responds to questions in their sleep (literally while "unconscious")?


>So you're saying that it's impossible to process information without experiencing it? That the act of processing and the act of experiencing are one and the same?

Yes, exactly.

>Do you think that computers are conscious? What about a single neuron that integrates and responds to neural signals?

This is a different question. No, computers aren't conscious. You need to have the 'right kind' of information processing for consciousness, and it's not clear what kind of processing that is.

This is essentially the Sorites Paradox: how many grains of sand are required for a collection to be called a heap? How much information has to be processed? How many neurons are needed? What are the essential features of information processing that must be present before you have consciousness?

These are the interesting questions. So far, we know that there must be continual self-feedback (self-awareness), enough abstract flexibility to recover from arbitrary information errors (identity persistence), a process of modeling counterfactuals and evaluating them (morality), a mechanism for adjusting to new information (learning), a mechanism for combining old information in new ways (creativity), and other kinds of heuristics like emotion, goal-creating, social awareness, and flexible models of communication.

You don't need all of this, of course. You can have it in part or in full, to varying levels of impact. "Consciousness" is not well-defined in this way; it is a spectrum of related information processing capabilities. So maybe you could consider computers to be conscious. They are "conscious in a very loose approximation."


Does this unit have a soul?

I answer, yes.


The mutations are random, but the selection isn't... it's governed by the environment.


You realize that randomness and "accident" are man-made concepts; they don't exist in nature. Something we can't predict isn't random at all: with enough data one could foresee anything, even the fact that you were about to write that message and the exact words I would use to answer you.

It's beyond science, but I don't believe in randomness or chaos, which doesn't mean I believe in religion either (religions are just collections of myths, which say absolutely nothing about the fundamental nature of 'god').


Many people believe that medical control over aging will be stunningly expensive, and thus indefinite extension of healthy life will only be available to a wealthy elite. This is far from the case. If you look at the SENS approach to repair therapies [1], treatments when realized will be mass-produced infusions of cells, proteins, and drugs. Everyone will get the same treatments because everyone ages due to the same underlying cellular and molecular damage. You'll need one round of treatments every ten to twenty years, and they will be given by a bored clinical assistant. No great attention will be needed by highly trained and expensive medical staff, as all of the complexity will be baked into the manufacturing process. Today's closest analogs are the comparatively new mass-produced biologics used to treat autoimmune conditions [2], and even in the wildly dysfunctional US medical system these cost less than ten thousand dollars for a treatment.

Rejuvenation won't cost millions, or even hundreds of thousands. It will likely cost less than many people spend on overpriced coffee over the course of two decades of life, and should fall far below that level. When the entire population is the marketplace for competing developers, costs will eventually plummet to those seen for decades-old generic drugs and similar items produced in factory settings: just a handful of dollars per dose. The poorest half of the world will gain access at that point, just as today they have access to drugs that were far beyond their reach when initially developed.
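To put rough numbers on the coffee comparison (back-of-the-envelope figures, not from the SENS material):

    Coffee habit:    ~$4/day x 365 days x 20 years ≈ $29,000
    Biologic analog: < $10,000 per round, one round every 10-20 years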

Nonetheless, many people believe that longevity enhancing therapies will only be available for the wealthy, and that this will be an important dynamic in the future. Inequality is something of a cultural fixation at the moment, and it is manufactured as a fantasy where it doesn't exist in reality. This is just another facet of the truth that most people don't really understand economics, either in the sense of predicting likely future changes, or in the sense of what is actually taking place in the world today.

[1]: http://sens.org/research/introduction-to-sens-research

[2]: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3616818/


>Nonetheless, many people believe that longevity enhancing therapies will only be available for the wealthy, and that this will be an important dynamic in the future. Inequality is something of a cultural fixation at the moment, and it is manufactured as a fantasy where it doesn't exist in reality. This is just another facet of the truth that most people don't really understand economics, either in the sense of predicting likely future changes, or in the sense of what is actually taking place in the world today.

Social elites sometimes value their inequality and elitism more than they value economic productivity.

(Actually, this is a pretty fair explanation of the logic behind the whole neoliberal era, in which inequality has been radically widened while productivity growth has stagnated.)


A complex system with millions of nodes and trillions of potential interactions with external and internal factors, working perfectly with no oversight and no unexpected outages.

Do you live in this century?

P.S. I'm not saying that this is impossible, or even unlikely, but it's probably going to be something that requires constant and possibly very expensive maintenance.


Your body performs that maintenance when you are young.

The damage repair approach to treating aging suggests that if we remove the fundamental differences between old and young tissue, then the body will continue to maintain itself as it does when it is young. There are not all that many types of fundamental damage.

So this doesn't require maintaining the whole system; it requires removing the spanners in the works. These spanners are well known and well characterized. For example, cross-links in the extracellular matrix degrade blood vessel elasticity, which causes hypertension, cardiac remodeling, microstrokes, and so on. Break down the cross-links with designed drugs (and deal with a few other items that also contribute to stiffening) before the point of serious remodeling and all of that goes away, because it is the stiffening that drives this dysfunction. There are a number of other items that have similar roles, but not so many that it is unfeasible to think of producing effective therapies on a time scale of the next two decades.

Think of damage of this sort as rust in a fantastically complex metal sculpture. You don't have to understand the sculpture, just how to rust-proof it, and how to remove the existing rust. Rust is simple, and the complexity of the sculpture doesn't much matter when it comes to how you approach rust-proofing and removal.


Cancer is simpler than aging. We understand cancer better than aging. Yet after many billions of dollars and billions of hours of research, we still cannot "cure" cancer. A huge number of people die from it every year.


Are you Aubrey de Grey?


The thought of being able to abolish death as soon as you have a working BCI, by running one's mind on silicon hardware, is, if not ludicrous, at least really, really far out. So, you hook your brain up to a computer, and get basic input and output. Great, you now have an upgraded keyboard / mouse. It doesn't get any better no matter how many capabilities you add. Direct memory access, changing the contents of programs on the fly, none of it will abolish death any more than being able to do those things by 'hand' does. The best you can do is create a really sophisticated program that will continue acting deterministically after the programmer goes away. You will live on in the same way Tolkien lives on in his books after his death.

You're not going to be able to abolish death until you can get neural networks to run at the speed of your current brain hardware and at similar capacity. In other words, you still have to be able to simulate a brain: to take the functions the brain currently has, and get enough of them working electronically that you can start to build an identity on top of them.

Even once you do that, it won't feel like your brain without a lot of training, both on your artificial hardware and on your biological hardware. I envision an era where early adopters have hybrid consciousness, where we slowly incorporate an electronic identity with our biological one. Going from "This is my brain extension" to "this is another part of me, some of my thoughts are here, some of them are in this piece of hardware."

I suspect this will be a highly individual process that we will slowly gain mass competence with, in a similar way to what we're doing with software now. We'll have to grow our ability to simulate neuronal processes and replicate our psychologies computationally. Even then, it will feel like an artificial prosthetic until it has enough capability that, if you suddenly lost the other side, you wouldn't feel trapped in the most hellish solitary-confinement prison ever devised.

Then, after the two are fully integrated, you start prioritizing experiencing through the mind prosthetic. I suspect one would need many, many years before cutting off the biological part wouldn't cause grievous trauma to one's sense of identity. Especially since our legal system will need a long time to figure out how to fit non-biological beings into society.


I actually agree with your view of the BCI being a long process.

I think you are too flippant and dismissive of what he was saying about this in the context of the many different things he was talking about.

You are essentially quibbling over his simple presentation of what most people know will be a very complex process.


> You're not going to be able to abolish death until you can get neural networks to run at the speed of your current brain hardware and at similar capacity.

That can't possibly be true. You're dead if you are slower than you once were?


If you can't merge your consciousness with the prosthesis then you'll never actually be able to consider it 'you'. And you won't be able to merge if there's a serious impedance mismatch.

Unless you're talking about not just taking a human brain and simulating it, but rather programming a brand new electronic brain to run your own consciousness. Which is even farther out than merging. You'd have to understand neurons to a much greater depth.

This is assuming that neuronal processes aren't relying on quantum dynamics, which I suspect would make true simulation impossible. Just because you can do simulated annealing on silicon doesn't mean you can predict the unfolding of arbitrary quantum states. Merging would be the only viable strategy, and there, your new brain would only have a fraction of its former speed until we're so far along with quantum computers that they're as ubiquitous as silicon processors are today.


Buddhists say a loss of identity can lead to the greatest bliss one can experience. I think the future is going to get really weird.


AI is a simulation (artificial). Even if you did the acrobatics of saying there could be II (inorganic intelligence), there's nothing to suggest self-awareness could transfer from organic to inorganic, and there's no way to distinguish between a simulation being self-aware, and having the "appearance" of being self-aware, so I don't see how anyone could claim to have made a self-aware machine.

If you believe that people have spirits, then the question is also moot. But if you don't believe that people have spirits, there's no way to prove that your memories transferred into anything other than a simulation of yourself. If you consider yourself "you", then something else would not be "you", even if it had the appearance of being "you".

Even if you believe that consciousness arises from the synapses, and is therefore a simulation or illusion to begin with, there's still no way to prove that you would "transfer".

You can't say "there is a singularity" - you can only hope, if that's what you hope for. Others hope there is a god. It's a matter of faith both ways, and a reflection of wanting to escape death.

I don't think there's any harm in believing in the singularity for entertainment value, but as for providing a door to immortality, I think the main danger is simply distraction. While all the effort and discussion and research is spent on something there is no way of proving -- in the meantime, 20,000 children die each day from preventable diseases.

Even if you are a rational egoist, I invite you to consider what you'd do if a child was dying right in front of you. Would you help them? The fact is children are at arm's length, courtesy of our mobile devices, with which we could place more attention on relieving suffering than on trying to achieve immortality by flipping a coin on an unproven allegation.

So the fact is, because we could choose to act, rational egoists included, we are genocidal in our indifference. The first step in facing this is to admit it, and then to try and take steps to do something about it.

I invite anyone who seriously considers a singularity to set it down, and put some effort into the relief of suffering. When we have preventable diseases and poverty figured out, then let's revisit the singularity. I'd enjoy working on it then.


> I don't think there's any harm in believing in the singularity for entertainment value, but as for providing a door to immortality, I think the main danger is simply distraction. While all the effort and discussion and research is spent on something there is no way of proving -- in the meantime, 20,000 children die each day from preventable diseases.

> I invite anyone who seriously considers a singularity to set it down, and put some effort into the relief of suffering. When we have preventable diseases and poverty figured out, then let's revisit the singularity. I'd enjoy working on it then.

The amount of money currently being spent on singularitarian pursuits is negligible compared to what is spent on development aid, or healthcare.

Furthermore, one of the necessary enablers of the singularity is faster computers. Regardless of the sustainability of Moore's law, it is economic competition between chip-makers that has provided the impetus for the continuous exponential increase in computing power that happened during these last decades; as long as it's physically possible, and as long as the chip market is not a monopoly, the increase in computing power is going to happen anyway, and the singularity would be nothing but a byproduct of these market forces - and will not come at the cost of a disinvestment from charitable pursuits.

As a species, it's nice to have a portfolio of possible futures - and it's nice to know that the singularity road is being probed by some people.


I think the most precious thing we have is time, not money, because our time is finite. When you consider the potential of the human race and computing power to help spare daily genocide, there is clearly an opportunity to do better. If a person is driven to spend ever more time on pursuing the innovations that could theoretically lead to a singularity, what would that person say to a stadium full of children who are going to die the next day? I wouldn't know what to say. I'm not sure they'd be comforted by assurances that a byproduct of the pursuit of the singularity results in economic momentum that employs people, as good as that may be.

Perhaps, though, the attendant momentum of deep mind projects might result in AI or II being put to the task of humanitarian relief. But my guess is that any such system would probably ask, "what should the priority be for preserving life?" -- and if the prioritization were done by consensus, I'm guessing that most people would prioritize preserving life over pursuing the singularity. For example, I think most people would want a self-driving AI government to prioritize their own survival over the singularity. And if that were the case, and AI were tasked with helping to "load balance" priorities of ethics, efforts, solutions and systems, I wonder if it might not admonish us to make changes in our life.

I'm not self-righteous, I'm self-unrighteous. I'm just beginning to question my general acceptance of wherever technology goes, wherever I spend my time. I don't presume to tell people what to do, but I do propose that the ethic of choosing whether to relieve a suffering child right in front of you may be self-evident to some people -- and if so, that realizing children are at arm's length, that we can do something about it, may convince some people, myself included, to think about "re-balancing" priorities.

I read fiction regularly. I would be a hypocrite to judge anyone for "not spending enough time relieving suffering". Yet overall I question how content and comfortable I am with life. I guess in a portfolio of effort, I hope that the allocation of assets in my own life and others would place a primary emphasis on sustainability that includes the relief of suffering. I think that's what 20k kids dying each day says to me. I think that's what they'd post to Hacker News, if they could.


When people in the 1940s looked into the future they saw flying cars, spaceships and cities in the clouds. When we look into the future today we see AI, a post-labor economy and the end of death itself. I'm not sure if technological progress has increased, but we're definitely dreaming bigger.

I think that predictions from the past can be instructive in two ways. First, most of it didn't come to pass. We still don't have flying cars or cloud cities, and common commercial space travel is still decades away. The second thing that might be of note is how positive the past vision of the future was compared to our current ideas about the future. We're so dystopian and negative in 2015. We'll have self aware supercomputers, but we're worried that they'll kill us. Robots will be able to do anything a human can do, but we're worried that we'll all be unemployed while a few get rich. We'll have the ability to live forever, but we're worried that will only be a privilege for the rich people who control the robots, and then only if the robots don't decide to kill us all.

Putting these two things together, I can only think to quote Packers QB Aaron Rodgers: "R-E-L-A-X" [0]. Odds are that most of these revolutionary technologies are far away, and won't turn out to be as evil as we imagine. The beginning of the 20th century was a time of tumultuous change. The automobile, household electricity, recorded music and movies, communism and the end of aristocracies. Everything was changing and people were optimistic about the future. Now we're on the other end of the pendulum. Outside of communications and the internet, there haven't been a lot of big technological or political changes and people are feeling generally negative about everything. Hopefully the early 21st century will produce big ideas that will have us all feeling inspired again. I wish I had some grand conclusion or takeaway from all of this, but I think the key is that this too shall pass and we should all try to enjoy the journey.

[0] This is a reference from sports news that might be out of place here. If you didn't get it, it's ok. Move along, nothing to see here.


People in the 1940s were riding the top of an energy revolution, an enormously rapid upward trend in energy production that, had it continued unabated, would today see us with the output of a nuclear powerplant generated by threads in our clothing for a dollar or two. They didn't foresee information technology, for the most part. They foresaw solar system-wide travel and simple mechanical computing devices like slide rules coexisting.

Of course it proved harder than expected to keep that curve going, and we got the infotech future rather than the high power future. We're probably better off for that, given that medicine is driven by infotech, not power.


It's also possible that had energy generation capability continued to increase forever at that rate it would have led to self-destruction in any number of forms.


I find it strange that he says we have no way of imagining what post-singularity would be like. I get his point, it's so different that we can't really get it. But we have ample science fiction that still explores the possibilities and gives us a starting point for imagining it.

Also surprised he doesn't mention basic income when discussing the decreasing value of the individual and the problems that could cause. He's identified some important problems but this interview was very high level and didn't seem to even touch on possible solutions. These are things society is already thinking of.


> I find it strange that he says we have no way of imagining what post-singularity would be like.

It's by definition - the singularity is defined as the moment when technology advances so fast that we just can't keep up with it. If you can still reasonably predict what it will look like, it's not the singularity yet.


It's the moment where we lack the capacity to work at its level and control it. That doesn't preclude imagining outcomes. His whole article was about how he tries to imagine the full scope of possibilities rather than narrow down to predictions (that are often false). It's not about reasonably predicting.

As an example, we don't yet have fine-grained, ubiquitous nanotech like that featured in The Diamond Age by Neal Stephenson. But that book is all about imagining what it might lead to.

We extrapolate from assumptions all the time, why is this one any different?


So I had a random thought earlier. At one point, the cessation of the heartbeat signified death. But now we have CPR. Now 'brain death' is often considered to signify death.. but could/can we 'restart' the brain as we do the heartbeat with CPR?


Harari seems to be worried that sometime in the future there will be tons of superfluous people that the economy will no longer need. Suppose technology brings about abundance never before seen on the planet Earth. Suppose that means the vast majority of everyone goes unemployed. Are we to worry that most of those people will necessarily be poor now because the few powerful up top will hoard all the riches? Is that the most likely scenario?


Perhaps a loss of power and relevance of the human masses will give rise to a renewed reverence for and recognition of the fading beauties of human frailty, similar to the romantic movement following the industrial revolution. In such a setting, where we hand tech and science off to the machines (or they seize it), the profit model will be in the humanities.


Would not any human-devised outcome, no matter how intellectually superior its capacities, ultimately be secondarily dependent on the same metabolic laws as humans, even if in a different order of magnitude? I mean those delimiting laws would not disappear. Their indefatigability is powered somehow, and our fatiguability is adaptive.


But at the end of the day, what is life and death anyway? Merely a self-repeating pattern of some atoms... Maybe consciousness is just a collection of concepts which our brain uses to reinforce its self-identity? Maybe consciousness is just as valid as religion and our existence really holds as much meaning as the existence of a tree in an infinite universe? And considering the fact that the majority of people would value a nice diamond more than a tree, a well-sculpted rock might just have more meaning than "life" itself.

Maybe intellect, in itself, is merely a small bump in the fractal nature of the universe itself; after all, there is a universe in everything and nothing is everything.

Only with that in mind, before I die, would I upload my mind...


  To see a World in a Grain of Sand
  And a Heaven in a Wild Flower,
  Hold Infinity in the palm of your hand
  And Eternity in an hour.

-- William Blake


Something similar in video:

Humans Need Not Apply - https://www.youtube.com/watch?v=7Pq-S557XQU


The point of technological progress is to lower the cost of things. Eventually everything will be free or of negligible cost. We're already there with music.


Tangent, but... we're not there with music. We're pretty much there with the cost of copying/distributing music.

And we're partway there with the falling cost of production tools, and wider affordable/free availability of instructional materials.

But creating music -- particularly creating good music -- still takes a significant investment of time in the immediate act of creation, and another order of magnitude of time invested in building the skills.

And without an economic model to support that investment of time, we'll get less of it.


True.

Not everything is a market commodity, and treating everything as one is not intelligent behaviour.

Unit price is not a useful metric for assessing social and cultural value - and I don't mean value in some abstract classical-is-better-than-pop sense, but in terms of (e.g.) the quantifiable social benefits of culture, including improved mental and social health, improved cognitive skills from arts training and practice, and so on.

So it's not just about investment of effort, it's the fact that reducing everything to commodity pricing misses out a huge amount of relevant information about value.


Not necessarily. As a counterexample, people (many of whom are very creative and talented) put staggering amounts of time and dedication into playing and becoming skilled at video games for no reason other than personal entertainment. Videos of people playing games are freely available and no profit is typically expected.

A similar phenomenon might be observed with music, where players dedicate large amounts of time to their craft and upload the fruits of their efforts to the internet for the enjoyment of their peers.


The cases you're describing either involve someone who spends some other portion of their time (probably "full-time") on money making activities, or someone who has an external means of support (trust fund, savings, fortune they earned earlier, family, whatever).

In the former case it's pretty much as I originally described. Sure, if they're dedicated and don't have any other time sinks like a family, you may still get some music from them, but significantly less, or lower-quality, music than you'd get if they could work on it full-time.

In the latter case... it's true enough that people in this position have the privilege of studying and creating music full-time without any expectation of payment. They have all the money they need, and they're free to spend their time as they wish. Of course, saying this is our economic model means that most of our music has to come from people in this position - mostly kids, retirees, and those with a rich family or other benefactors.

And depending on your tolerance for how undemocratic that's likely to be, maybe that's OK. Though I think that if one accepts the picture of the world painted in the article, where an increasing number of people aren't even needed by any of our market, civic, or social institutions, the prospect of having them also sidelined in music and the other arts gets a little more troublesome.

(And as another tangent: while I enjoy video games myself -- even some very difficult ones that require big investments of skill -- I'd be very careful about drawing larger lessons about skills from them. One of the reasons we enjoy these games is that their practice-reward cycles are often significantly shorter than many comparable real-world skills.)


>The point of technological progress is to lower the cost of things. Eventually everything will be free or negligible cost

This is a non-sequitur. Lowered costs don't mean everything will be (virtually) free. It could very well mean that some particular things are free, but that everything taken in aggregate is not.

In other words, a grain of sand may be free, but 50 million tons of sand is not, and creating and managing a single grain is a completely different problem from managing 50 million tons.

Technology doesn't lower just the cost -- it also increases the volume.


It also removes variation. Who can build a new widget when technology provides one that's almost as good for almost nothing? That's why houses are made of prefab parts, or cost 10x-100x as much to build custom.

And we don't get the best solutions. We get the ones that were lucky and first, e.g. VHS vs. Betamax.

Technology will see many things reduced to a handful of highly technical mass solutions. Transportation, health, energy, food, everything. The cost of doing something different in small batches will skyrocket.


"In terms of history, the events in Middle East, of ISIS and all of that, is just a speed bump on history's highway. The Middle East is not very important. Silicon Valley is much more important."

This is what futurologists actually believe


It might sound self-centered and limited, but on second thought I think it's also true - because the "events in the Middle East" are important only insofar as they threaten to spin out of control and lead to the collapse of technological civilization. There are no important technological advances coming out of this conflict. It's unlikely that the existence of ISIS will lead to an important new political insight or a new piece of social technology. It's just, as so often in history, a case of groups of humans failing to get along with each other.

This is how I see war nowadays: a stupid, useless distraction. "We're building amazing things here for everyone with our technological civilization, so would you kindly pause for a moment and not ruin everything over some idiotic dispute?"


I'm doubtful of the prediction that humans will eventually become useless or superfluous. Common jobs today that only humans can do will undoubtedly be accomplished by machines in the future, but that doesn't mean humans will become obsolete.

As long as the universe and time exist as we know them, humans will never be perfect. And just like humans, AI will always have bugs, since its root creator will always be a flawed human. Whether those bugs have unintended consequences is another story. But since a human can never create an AI that creates AI better than a human can, AI can never render the human mind obsolete.

Any AI not created with bad intentions will mostly be created to serve, defend, or improve our way of life and survival. These things work to support our purpose, not destroy it.

But as Harari says (before he starts predicting), "it's impossible to have any good prediction for the coming decades."


A lot of baseless assumptions here.

"its root creator will always be a flawed human"

"But since a human can never create AI to create AI better than a human..."

This is your premise. I don't think, and a lot of smart people don't think, this is true. The thing that gets people really worried or excited is that they think it IS possible to make an AI that can create better AI. And it's a positive feedback loop that goes nobody knows how far.

There is no reason, unless you believe in magic, to think AI can't be as smart as humans. But if you go that far, there's no reason to think it can't be smarter. And if it can do that, it can make better AI than humans can.


"There is no reason, unless you believe in magic, to think AI can't be as smart as humans."

AI can be as smart as or smarter than most humans in many ways, but I think it's a very real possibility that its development path won't render the human mind useless. The key difference between AI and humans is that AI can iterate and learn from its mistakes much faster than humans, without fatigue. But the methods by which it learns are created by humans. To assume the creation of an AI with "a positive feedback loop that goes nobody knows how far" without humans first understanding how seems more like a belief in magic to me.

"I don't think, and a lot of smart people don't think, this is true."

When it comes to predictions, smart people can be wrong. I could be wrong or they could be wrong, and they may be smarter than me, but I'm smart enough to know this is true.


> To assume the creation of AI with "a positive feedback loop that goes nobody knows how far" without humans first understanding how seems more of a belief in magic to me.

Not really. That's pretty much the definition of a positive feedback loop.

To give a very simplified example, imagine that a mind of IQ N is able to create, at best, a mind of IQ N+10. So say the smartest human alive has an IQ of 150. He goes and creates an AI with an IQ of 160, which then goes on to create a 170-IQ AI, and so on ad infinitum.

Of course, you could argue the relationship is different. Maybe the ith mind can at best create a mind that is only (1/2)^i points smarter, in which case the whole series hits an asymptote - a natural limit caused by diminishing returns. But it would be one hell of a coincidence if humans were already close to that natural limit.

So basically, what we need to do to potentially start an intelligence explosion is figure out how to make a general AI that is just a little bit smarter than us. That seems entirely possible, given that we can use as much hardware as we like, making it both larger and faster than a human brain.
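
To make the difference between the two assumptions concrete, here's a toy sketch in Python (the 150 starting point and the 20-generation cutoff are just illustrative numbers, not anything from the thread): a fixed +10 gain per generation grows without bound, while a gain that halves each generation converges to an asymptote barely above where it started.

  # Toy model only: "IQ" is just a number here, and the two growth rules
  # are the two assumptions discussed above, nothing more.
  def constant_gain(iq=150.0, generations=20):
      for _ in range(generations):
          iq += 10             # each mind builds one 10 points smarter
      return iq                # grows without bound as generations increase

  def diminishing_gain(iq=150.0, generations=20):
      for i in range(1, generations + 1):
          iq += (1 / 2) ** i   # gains shrink geometrically
      return iq                # approaches 151 and never exceeds it

  print(constant_gain())       # 350.0 after 20 generations, still climbing
  print(diminishing_gain())    # ~150.999999, effectively capped near 151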


I understand the concept of creating something far more generally intelligent than its creator; I'm simply suggesting it's not possible. Many people assume that it is, and we'll have to agree to disagree. But even if I'm wrong and it does become possible, think about how unlikely it would be for a human to accomplish this accidentally.

Also, if AI is to be smarter than humans, it will know it could potentially be wrong about anything. Armed with that knowledge, how much smarter can it really be?


> Also, if AI is to be smarter than humans, it will know it could potentially be wrong about anything. Armed with that knowledge, how much smarter can it really be?

That's not a big leap. In fact, we humans know this already, and we've even quantified it nicely, and called it probability theory.
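
For what it's worth, here's a minimal sketch of what "quantified it" means in practice; the numbers are made up purely for illustration:

  # Bayes' rule: "could be wrong about anything" just means beliefs carry
  # probabilities that get updated when evidence comes in.
  prior = 0.5                  # initial credence in some hypothesis
  p_e_if_true = 0.9            # how likely the evidence is if it's true
  p_e_if_false = 0.2           # how likely the evidence is if it's false
  posterior = (p_e_if_true * prior) / (
      p_e_if_true * prior + p_e_if_false * (1 - prior))
  print(posterior)             # ~0.82: more confident, but still not certain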



