Hacker News
Can We Copy the Brain? (ieee.org)
198 points by wjSgoWPm5bWAhXB on Jan 30, 2018 | hide | past | favorite | 82 comments



The best analogy to this field and the many, many problems it might run into is biotech. This is actually quite funny because in the early days the main analogy to biotech was the computer / state-machine.

For some background, biotech has undergone many booms and busts in the last half century; unlike the last tech bubble, these booms and busts have flown somewhat under the radar.

In fact, Jurassic Park, written in 1990, is actually a book /about/ the late 80’s biotech bubble. There is a really fun part in the book where they are talking about making miniature dinosaurs that will only eat Ingen-brand dinosaur pet food (Ingen was the company behind Jurassic Park).

Anyway, with biotech, the problem has been, again and again, the assumption that a system is radically simpler than it actually is. Biology is so incredibly complicated that it puts the largest engineering book I have, a 1,500-page TCP/IP protocol book (No Starch Press), to shame. I mean, at least we know how TCP/IP works. With biology, the manual might be closer to 900,000 pages, and we only have 40% of the table of contents and maybe 800 pages (out of order) so far.

BEST EXAMPLE: In the 70’s, once specific regions of the genome started to be identified and linked to genes known to cause particular traits or diseases, it was widely assumed that creating/reading/updating/deleting genes on DNA would be relatively straightforward, particularly as more of the genome was mapped. However, as was later discovered, many traits or diseases might be the result of 200+ genes that are also used elsewhere. Turn off just 1 gene for a disease, and 1). it won’t do anything because you didn’t turn off the other 199, and 2). oh wow that gene was actually used for something else and now you’ve lost the ability to form eyeballs / are born without anything in your eye sockets.
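The gene-interaction trap described above can be sketched as a toy model. To be clear: the gene names and the all-or-nothing rules below are invented for illustration, not real genomics.

```python
# Toy model of a polygenic disease plus pleiotropy. Names and rules invented.

DISEASE_GENES = {f"g{i}" for i in range(200)}   # disease driven by 200 genes
EYE_GENES = {"g17", "g42", "g901"}              # g17/g42 shared with the disease set

def disease_persists(knocked_out):
    # Toy rule: the disease persists while ANY contributing gene is active.
    return bool(DISEASE_GENES - knocked_out)

def eyes_develop(knocked_out):
    # Toy rule: development fails if ANY required gene is silenced.
    return EYE_GENES.isdisjoint(knocked_out)

ko = {"g17"}                 # silence just one of the 200 disease genes
print(disease_persists(ko))  # True: the other 199 still drive the disease
print(eyes_develop(ko))      # False: g17 was also needed for eye development
```

Even this cartoon version shows the trap: a single knockout is simultaneously useless for the disease and catastrophic for an unrelated trait.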


> Turn off just 1 gene for a disease, and 1). it won’t do anything because you didn’t turn off the other 199, and 2). oh wow that gene was actually used for something else and now you’ve lost the ability to form eyeballs / are born without anything in your eye sockets.

Certainly not on the same scale, but this resonates with my experiences with legacy code.

I think this similarity is more than superficial. Energetic systems evolve over time to become tangled, correlated messes, without some other force counteracting this tendency (i.e. refactoring). I wonder if DNA has analogous mechanisms.


Yeah, in some way it feels like there's essentially a huge software (reverse-)engineering project here.

We have the technical ability to read all the code in our DNA, understand what small parts of it do (e.g. making a particular protein), and model some of the small scale behavior.

And we've got a very, very, very large codebase of mishmashed, undocumented, homegrown legacy code that sort of does what we want, but in an unstable and occasionally buggy manner. And we've got a strong wish to fix some bugs (i.e. genetic diseases) and possibly add some features (e.g. a longer quality lifespan, increased capabilities). So we'd like to reverse-engineer this system.

The good part is that we only have to do it once and we can cooperate on it; the bad part is that the system is really complex and (more importantly) horribly interdependent; it implements pretty much all the practices that we know make code unmaintainable.

Anyway. The hypothesis I'm trying to advance is this: research on advanced methodologies and tools for analyzing and understanding large quantities of tangled (and possibly intentionally obfuscated) computer code, i.e. techniques and algorithms for computer-aided (machine learning?) understanding and reverse engineering of large codebases, seems likely to eventually have practical applications in biotech.

Yes, contemporary code behavior is quite far from protein interaction. That's OK; we're also quite far from being able to properly reverse-engineer biotech in this sense. With every decade, code (and its analysis) will become more complex and biotech better understood, until the two eventually meet. And when designing tools for the analysis of very complicated systems, the tools will in any case have to be adapted not to the systems but to the analyzer: to the limits of what structures human researchers can understand and keep in their heads, and to what needs to be automatically summarized/structured by tools.


> The good part is that we only have to do it once and we can cooperate on it;

I'm not a biology person, but I think everyone except identical twins has different DNA, which makes the problem much harder, since "doing it once" only solves one person's problem. Every variant site can potentially interact with every other one; with n = 3,000,000,000 base pairs, even the pairwise interactions alone number around n²/2 ≈ 4.5 × 10^18, which is an insane number. Granted, the genome almost certainly has structure that reduces the meaningful differences, but it will still be a huge number. Also, you have the whole nature-versus-nurture problem, which makes biology even harder.
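A quick back-of-the-envelope check of just the pairwise term (treating every base pair as an independently interacting variable, which overstates the real situation):

```python
import math

n = 3_000_000_000        # rough base-pair count of the human genome
pairs = math.comb(n, 2)  # number of distinct unordered pairs
print(f"{pairs:.2e} potential pairwise interactions")  # ~4.50e+18
```

And that is only second-order interactions; three-way and higher-order effects blow the number up further.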


Even identical twins are not all that identical. Reading and understanding the DNA is one thing; reading and understanding the regulatory network that controls which genes are expressed under which circumstances is at least as difficult.


As an identical twin, I can say my brother and I have 99.99% the same DNA. But after conception, epigenetic factors start to come into play. Or even just the fact that we don't live the same lives: for instance, I like doing math for fun; my brother doesn't care for math at all.


> I think this similarity is more than superficial. Energetic systems evolve over time to become tangled, correlated messes, without some other force counteracting this tendency (i.e. refactoring).

Well... yes. That's entropy. Any system tends towards disorder.

> I wonder if DNA has analogous mechanisms.

DNA works differently: there's an advantage to reusing code, which naturally leads to spaghetti. It's not about minimizing energy; it's just that you're more likely to get successful code by adding onto existing code than by adding a whole new section (unless you transclude it from bacteria/viruses).


There are also abstractions in nature, a non-spaghetti form of code reuse that emerges naturally when the same thing is reused many times. E.g., cells form the "atoms" of a body: they are very similar, but specialized, and many internal mechanisms are shared (metabolism; DNA itself and transcription are reused). It's not much, but we may find other abstractions, currently hidden, if they have been reused often enough.


There are also MASSIVE spans within the human genome that are copies and copies and copies and copies.


Yes, both are true.


There are fundamental limits on understanding computation: in general, the only way to predict what a system will do is to run it (the halting problem is the classic example).

Is the unmaintainability of legacy code at all related to this? Is the impenetrability of DNA at all related to this?


Though this is a solved problem for our field, if you're using a statically typed language.

You can't right-click on a gene and "find all references".


Civilization (II)’s “modern” techs are like a time capsule of the late 80s / early 90s.

- Cure for Cancer is a wonder of the world and requires “genetic engineering”

- You can build SDI lasers in every city, and they work. In 2018 the current state of the art is “it will probably shoot down an ICBM, maybe”

- Fusion power and room temperature superconductor are basic research projects. The technology immediately preceding superconductor is ... plastics

- SETI project is a wonder of the world and gives a huge boost to scientific research. The obvious choice now would be to change it for “the internet” but hindsight is 20/20


Part of that is because Civ II is meant to model a large span of technologies over time using a fairly uniform mechanism. It would be infeasible to include the vast number of technological branches that have occurred in modern times without making the game one that takes place mostly in modern times... So it only picks a few representative technologies to show you.


I understand that. My point was that the modern tech tree has many ...optimistic technologies that don't exist even now but were considered "close", or even already achieved, in that time period, like advanced biotech (as the parent comment mentioned) or fusion power (cold fusion).


A genetic algorithm once designed FPGA logic that couldn't possibly work under the usual rules, but actually did, and used fewer resources than any human design. As it turned out, the genetic algorithm exploited analogue properties of the FPGA that a human designer would never attempt to use.

Genetics seems to favor efficient systems over understandable and easily extendable ones.

Paper: Adrian Thompson, "An Evolved Circuit, Intrinsic in Silicon, Entwined With Physics."
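For a feel of the mechanism (this is not Thompson's actual setup, which evolved real FPGA bitstreams scored on physical hardware), here is a minimal truncation-selection genetic algorithm on bitstrings with a stand-in fitness function:

```python
import random

def fitness(bits):
    # Stand-in objective: count of 1s. (Thompson scored tone discrimination
    # on a live FPGA; any black-box score works for the GA loop itself.)
    return sum(bits)

def mutate(bits, rate=0.02):
    # Flip each bit independently with small probability.
    return [b ^ (random.random() < rate) for b in bits]

def evolve(genome_len=64, pop_size=20, generations=200, seed=0):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]                      # keep the fitter half
        pop = elite + [mutate(random.choice(elite)) for _ in elite]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # typically at or near 64 as the population converges
```

The key property, and the reason evolved designs are so opaque, is that nothing in the loop rewards comprehensibility: any trick that raises the score survives, including (in Thompson's case) undocumented analogue behavior of the silicon.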


So true about TCP/IP. Even rocket science is a joke compared to most biological systems. That said, even a set of rules as simple as TCP/IP can lead to complex, valuable systems (e.g. WWW).

TCP/IP rules leading to the web is nothing compared to physical rules leading to biological life, true. Even in 2018, one could say that computers' potential has barely been tapped.


the manual might be closer to 900,000 pages

Knowing a thing or two about neurophysiology, this seems like a vast underestimate, since a biology manual would also have to cover that topic, likely the most complicated of all. So vast that I wouldn't even dare put a number on it: there is so much we don't know about the brain that it is hard to even estimate how much we don't know. (Which, by the way, also puts some AI claims about replicating brain functionality in a different light.)


> many traits or diseases might be the result of 200+ genes that are also used elsewhere.

What are examples of 200+ gene diseases? At that level of genetic complexity it should probably be called a trait that causes disadvantages in some situations.


An excellent book that covers this topic is Permutation City by Greg Egan (1994): if we can copy all of you (physical characteristics, brain, memories, personality) into a simulation, what are the implications?

In it, a perfect replica of a person is scanned, and when it boots up, it continues its existence from the time of the scan. Hacking on the model is possible, so you can suppress certain memories, update your "physical" appearance, etc.

Given how prevalent virtualization has become recently, I thought it was a fairly modern book. I was surprised to find out it came out in the nineties, and not only has it held up well, it would be considered visionary even if it had been published yesterday.


You should try the Takeshi Covacs books by Richard Morgan. There is a TV show coming out in two days (good timing here) based on the first book (Altered Carbon). It'll probably be a mess, since the book has quite some violence, but the ideas above are played out in an interesting and entertaining way. All three books are set in the same universe with the "same" main character, but they are not a trilogy.


Can you link to info on the TV show? Because all I can find is Morgan's recent blog post recounting his visit to the set:

https://www.richardkmorgan.com/2018/01/fragments-of-a-jet-la...


oh wait, here it is,

http://www.imdb.com/title/tt2261227/

I'm disappointed -- it's on Netflix.


Why disappointed?


> the book has quite some violence

That's a huge understatement. Starting with the opening, which sets the context. And damn, that clinic strike! And the writing just puts you right there.

So yeah, I'm not optimistic about the video adaptation.


*Kovacs.


Well that's embarrassing...but it was late here ;)


Robin Hanson recently wrote a non-fiction book on the topic looking at what standard economic and sociological models say would happen if this were possible.

http://ageofem.com/


I've recently played through the game "Soma" which had a similar concept. Warning spoilers:

The main character goes to a doctor to get his brain scanned following a car crash. After the brain scan he wakes up in a distant future and discovers that his brain scan was used as a sort of programming template and that there are many (modified) versions of him living on in robots. Throughout the game this has many more implications.

Also, since Earth has become uninhabitable, humankind sends out a satellite called the Ark, which houses a kind of simulated reality inhabited by the brain scans of selected humans.


Awesome game, I recommend it to everyone. Worth it even if you've been spoiled the ending by the comment below.


Especially since this topic covers about half of the episodes in the most recent season of Black Mirror.


This IEEE report is actually a very good collection of topics and research currently being done in this field. They discuss specific problems and topics like the neocortex, IIT [1], neuromorphic engineering [2], pose cells, SLAM [3], and more.

For anyone interested in research being done in AI, ML, consciousness, etc., these are great articles written by actual scientists and researchers who are doing the work (as opposed to the hyperbolic articles or tweets you see online these days about AI).

[1] https://en.wikipedia.org/wiki/Integrated_information_theory

[2] https://spectrum.ieee.org/semiconductors/design/neuromorphic...

[3] https://spectrum.ieee.org/robotics/robotics-software/why-rat...


I'd like to add this article (with somewhat of a cheeky title): "The impossibility of intelligence explosion"

https://medium.com/@francois.chollet/the-impossibility-of-in...

It's written by François Chollet, creator of the Keras DL framework. The article argues that the environment and intelligence are interrelated. Some of the points are expressed in the IEEE Special Report as well (sensorimotor integration). There are many connections to the recent push towards simulation in AI: Atari, OpenAI Gym, AlphaGo, self-driving cars, etc. It's a new front of development, where simulation will create playgrounds for AI.

The main point is that intelligence develops in an environment and is a function of the complexity of the environment and the task at hand. There is no general intelligence, or intelligence in itself, only task-related intelligence. An intelligence explosion can't happen in a void (or in a brain in a vat, or in a supercomputer that has no interface to the world and can't act on it). The author concludes that AGI is impossible given environment and task limitations.

It's an interesting take, because we focus too much on reverse engineering "the brain" as if it existed in itself, outside the environment. We should learn about meaning and behaviour from the environment and from the structure of the problems the agent faces. Meaning is not "secreted" in the brain.


A related 'trend' in Cognitive Science is called Embodied Cognition [1]. Intelligence develops together with the body that it inhabits, and, as you mention, the environment that this body is living in.

Maybe dolphins are as 'intelligent' as we are, but having fins instead of hands and living in a marine environment just makes it impossible for them to invent fire, printing presses, and automobiles.

[1] https://en.wikipedia.org/wiki/Embodied_cognition


Far-fetched conclusions based on a misinterpretation of the "no free lunch" theorem. The theorem doesn't forbid an intelligence that is universal within our own universe, since our universe doesn't present us with a uniform distribution over all possible problems.


I tend to believe that a hole in François' argument is that a sufficiently powerful computer could simulate an environment internally, in which the AI could thrive.


A hole? In that Swiss cheese? Hardly surprising. He uses Chomsky's hypothetical language device to support his claim that "there couldn't be general intelligence", while there is a provably optimal intelligent agent (AIXI) and its computable approximations. He uses self-improvement trends established by entities which aren't intelligent agents (military empires, civilizations, mechatronics, personal investing) to predict what the self-improvement of an intelligent agent will be like. It's a prediction on the level of "the streets of London will be buried under horse manure".

I am not an extreme singularitarian; there are hard physical limits making exponential progress and a singularity impossible. But bad arguments are bad arguments, no matter how appealing the conclusions.


Sure... sorta. But there is a problem. Without continuous, highly correlated input from the environment, we know what happens to human consciousness: it rapidly dissolves and disappears. Consciousness is intimately and (thus far) inextricably bound up in its embodied existence. Facial paralysis has profound effects on people's ability to feel emotions 'internally', eventually resulting in an inability even to recall ever feeling those emotions or to recognize them in others. Should a consciousness be created divorced from a feedback loop with the same environment we share (or at least one similar in most respects; a simulation might work OK, I don't think we know), the odds that we could even recognize it as conscious are very low. Maybe a general measure of the system's tendency to reduce entropy in some region within or near itself?

We are feedback loops, and when the loop is broken, we stop being us.


Most people would accept being able to see and interact with the world as a prerequisite to consciousness. I understand this as an I/O problem: there must be continuous high-bandwidth input (visual, audio, tactile) as well as very detailed and dexterous output.

I wonder to what degree virtual realities can serve the I/O function. I think for the next half century virtual realities will mostly operate at a lower level of detail than meatspace. Can a mind function well stuck in a lower-detail world?

Alternatively, cybernetics could serve. You bring up a real concern, but it's a solvable problem.


> Most people would accept being able to see and interact with the world as a prerequisite to consciousness.

People lacking sight, hearing, mobility, and similar would disagree.

By all means, we want the ability to interact with the world, bidirectionally. But that's not a prerequisite for consciousness. And if the only thing we manage, in the short term, is preservation and continued function, that would be a massive improvement, far larger and more important than the subsequent incremental steps towards I/O.


> People lacking sight, hearing, mobility, and similar would disagree.

I think "to see" is meant as a placeholder for any kind of sensory input. I doubt that people who lack any form of sensory input with no way to communicate would disagree even if they could -- while they are in that state. As somebody who experienced some knock-out after an accident, I'd say the consciousness is a fragile something (but uses any straw to re-establish itself).


> I doubt that people who lack any form of sensory input with no way to communicate would disagree even if they could -- while they are in that state.

If you're in a sensory deprivation tank, but you can still think, are you no longer conscious? If you were in a perfect one, or you had something that somehow cut off the connection between the brain and the body, but you can still think, are you no longer conscious? The ability to interact with the world certainly makes life nicer, but consciousness does not depend on that.


> If you were in a perfect one, or you had something that somehow cut off the connection between the brain and the body, but you can still think, are you no longer conscious?

I think, if such an unlikely scenario were possible, that disturbing insanity would result, and eventually it would degrade into something that does not resemble human consciousness.

The reality check provided by sensory input stabilizes and fosters consciousness.

Perhaps it is possible to design a conscious machine that would be more robust against sensory deprivation, but humans certainly don't have that characteristic. A good example of this is the very damaging effects of solitary isolation prison cells.


The situation is different since the body is still intact. You and those people are not talking about copying the whole body though but about copying just the brain -- or rather some information that was extracted from the lump of cells that constitute the organ "brain".


Consciousness does depend on it. You cannot mature a recognisably human consciousness without input from the world and from other humans.

And after you mature a human consciousness, you need to keep it busy with external stimuli, otherwise it becomes badly damaged - as anyone who has spent a long time in solitary will tell you.

If you cut off all stimuli for long enough, you’ll have something left in your tank, but it won’t be sane enough to be recognisably human.


Even if it isn't a prerequisite, without I/O there would be two challenges:

1. How do we prove it is conscious?

2. How do we gain value from it if we cannot interact with it?


You're repeating the IT-inspired delusion that the brain is the CPU + memory and the body is the peripheral hardware. So why not just replace the signals from the peripheral hardware with some mock data, as we do in unit tests, you ask.

For your VR world to work, you'd have to raise the consciousness in the VR world.


I agree that I am extending the metaphor that the brain is a computer and an individual is a program. It's no more a delusion than any other metaphor that only approximately describes the world. All metaphors can cause poor understanding when taken too far, though I agree this particular one is often used to jump to poor conclusions.

I'm not convinced that it is impossible to transplant a consciousness from meatspace to VR. Am I missing some prerequisite reading that would lead to that conclusion?


That's not entirely true.

That certainly is how we evolved, but somehow we evolved a consciousness that isn't purely a function of its inputs; people who are "locked in" remain conscious.

That was quite the trick...


People who are 'locked in' are not cut off from perceptual stimulus; they're simply cut off from having their nervous system's output manipulate the world. I imagine this has a significant effect on their conscious experience, but it's not something that would be easy to study. Even studying how less severe damage to the body produces 'personal' changes can be difficult: studies of how spinal cord damage resulting in paraplegia leads to depression and reduced emotional range, beyond what should be expected from the pain or trauma alone, strike some people as 'insulting'. Our adamant refusal to recognize that the 'brain' is nothing more than a convenient formalism for talking about the entire nervous system and body, and that every part is necessary to give rise to what we recognize as human consciousness, runs very deep. Dualism (in the sense of a body/brain split) is wrong, but apparently far too alluring for many.


No, it's protected under the DMCA. If you wish, you may read the complaint at ChillingEffects.org.


Tom Scott did a video about that — “Welcome to Life: the singularity, ruined by lawyers”


You wouldn't download a brain!


I read somewhere that files have no meaning until we interpret them with programs. The idea is that files are almost schemaless, and we bestow a schema on them by the way we consume them with programs. I believe this was in a SQL book...

In any case, what does "copying the brain" mean if we don't fully understand how it works? How can we bestow meaning on the raw data, assuming we can even collect it, without knowing exactly how the brain itself interprets it?
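The "schema lives in the reader" point can be made concrete in a few lines (the byte string here is just an arbitrary example):

```python
import struct

raw = b"\x42\x48\x49\x21"   # the same four bytes, read under three "schemas"

print(struct.unpack("<I", raw)[0])  # as a little-endian uint32: 558450754
print(struct.unpack("<f", raw)[0])  # as a 32-bit float: a tiny positive number
print(raw.decode("ascii"))          # as ASCII text: BHI!
```

The bytes don't change; only the interpreting program does. A brain snapshot without the brain's own "interpreter" would be in the same position.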


What if there is no meaning. What if "meaning" is just a word we invented to describe whatever our brain does with information.


Ideally, if you are copying a brain, you are going to copy the I/O schema as well (spine connecting to a "body", eyes, ears, nose, mouth, etc.). When you run a brain as a computer program, you don't want a disembodied mind; you want a person who "really is" in a VR environment.


Discussed at length in the book "We Are Legion (We Are Bob)", where a disembodied mind creates a VR environment to preserve its sanity.


I'm pretty sure we know how disembodied brains would work - consider any quadriplegic. They may as well be disembodied, yet they can continue to mentally function at a high level.

Agreed, 'copying the brain' would have to copy/checkpoint the running software, or you risk failing to connect meaningfully with the environment. It'd be hard for a deaf/mute/quadriplegic brain to learn to speak or interact from scratch.


Why bother? It's a great general-purpose organic computer that you can power with sugar and fit into a head... but there's no reason some other solution to the problem might not be arrived at that works along entirely different parameters. There's lots of evidence that the brain's design has deep flaws, extreme limitations, and severe compromises. Look at the crazy stacked, hemispherical design of mammal brains -- nonsense!

Besides, if we need a brain, there's literally trillions of them already all over the planet. In fact they're so common they're already commoditized, everybody has one!


Being able to make a brain hopefully means being able to make a better brain, or even to make our brains better.


Copy at what level?

Cellular? Molecular? Atomic? Subatomic?


The subtitle appears to be "Intensive efforts to re-create human cognition will transform the way we work, learn, and play." So, functional copying, presumably.


One article of the series revolves around MOSFET-based analog circuitry for the building blocks: https://spectrum.ieee.org/computing/hardware/we-could-build-...


I believe the OpenWorm project works at the functional neural level. That's today's state of the art; who knows what level of fidelity we'd need for digital immortality.


When they do “copy” it and successfully put it in a human, it will make Pet Sematary the movie look like Driving Miss Daisy



And one of the linked articles was explicitly engaging with that.

https://spectrum.ieee.org/computing/hardware/can-we-quantify...



We've no idea how neurons actually work, but hey, sure, let's copy the whole brain! Most recent articles concerning the brain are some kind of infotainment porn.


Downvoted, as you probably didn't click the link and just left a flippant comment. It links to a page of interesting articles related to this topic.


Yeah, there’s still a lot of fundamental research that needs to be done here. We’re probably 3-4 Nobel-worthy breakthroughs away from having the necessary knowledge to do such things with brains.


We kind of have an idea of how individual neurons work; the problem is that all the interesting aspects seem to emerge in tangled systems of millions (or many more) of neurons, and we don't understand those.

If we want to study the collective behavior of neurons, then we do pretty much need to analyze or simulate large-ish portions of brains.


Identical twins handily prove that copying a brain still produces different people.

It’s true for conjoined twins, and it’d be just as true for cloned genetic stem cells 3D-printed from a snapshot of the exact cellular structure of your brain at a moment in time, transplanted into your body.

If I copy your brain, you won’t be inside it, even if it’s effectively you, as far as anyone else can tell.

So, I can comfort myself with a replica of each dead parent. But the parents who raised me, those people are dead. They won’t be there to see me sharing Christmas with their atomic-precision duplicates.

Virtual simulations aren’t people, aren’t human beings, no matter the fidelity of the simulation. Maybe you can hold a conversation with a hologram, but then what?

Would one expect to experience what it feels like to become and persist as a haptic-enabled hologram? Pretty sure that’s a bogus concept, turtles on down.


Why do you assume that identical twins have identical brains?

The brain's physical structure is altered by lifetime experience; learning changes large-scale physical properties of neurons, e.g. forming some new synapses and pruning others.

Even at birth, identical twins have different fingerprints, different retinas and yes, different cellular structures of their brains. And that only diverges later.

It's not a given that a brain "3D printed from a snapshot of the exact cellular structure of your brain" would be a sufficiently good copy; we know there are other things that matter (e.g. various intra-cell properties and chemical concentrations at particular locations in those neurons), and there are likely other things that matter which we don't know about yet.

However, assuming we could make a sufficiently good copy, it would be indistinguishable from you. It would have the same behavior, reactions, memories, skills, and understanding; it would have the same beliefs as you do, including the belief that it is you and has been you since your birth, supported by memories of living your life. If the copy is sufficiently good then, barring the signs of the operation itself (scar tissue? machinery? video evidence of it being done), neither your relatives nor (new?) you yourself would be able to tell the difference.

Yes, you could argue that the new copy is different from the previous one, that it's not the same, that it lacks continuity. But that's arguing about what we mean when we say "the same". If you didn't know that your parents had died, you'd have no way to tell when their copies came to celebrate Christmas with you.


Suppose someone invented a "matter transporter", that was to all external appearances approximately equivalent to the standard device that is a science fiction trope.

But what actually happens with this device is that once a person steps into the chamber and is scanned, and the information to recreate them is sent to a receiver elsewhere by some beaming method, the original is dropped into a "recycling chamber" where their body is shredded, compacted, and made into soylent green or something.

Would you voluntarily use this device? All the people who go through it report at the other end everything went well, since it in fact makes perfect copies.

One tangential idea I also want to mention: there is a mental disorder in which people incorrectly perceive things as false and strange. So, in your scenario of your parents coming to Christmas, there exists a disorder (the Capgras delusion) where your brain tells you they are some sort of duplicates or imposters, but you can't explain why. I don't have a reference offhand, but it's probably something I read in a book by the neurologist Oliver Sacks, or in some book about schizophrenia.


There’s something circular about this argument, which reduces to “We can’t make perfect copies. But if we could, they’d be perfect.”

Without evidence - unlikely anytime soon - there’s no way to know if a copy really would feel identical, or whether - more likely IMO - it might start “close enough” but errors would accumulate over time and lead to noticeable differences, which might be experienced in both subjective and objective ways.

If there are quantum-level effects, consciousness will not be copyable (the no-cloning theorem forbids duplicating an unknown quantum state).

Even if there aren’t, we’re talking about exactly duplicating a specific arrangement of atoms in space to reproduce all the neuronal configurations and chemical gradients.

And we’re talking about doing that with no significant errors. And implanting that configuration inside an extended nerve and tissue network it has to be matched to.

At the very least, the functional information structures created by the chemistry and biology have to be duplicated exactly - probably instantaneously, to avoid mistakes - with no significant inaccuracies.

All of that seems quite challenging to me.


[flagged]


Please don't be a jerk in HN comments.

https://news.ycombinator.com/newsguidelines.html


I wonder if a person dying and their clone waking up can be compared to a person falling into a deep coma after an accident and waking up a year later. In both cases, a consciousness disintegrated and a consciousness was integrated.


> clone

Being genetically identical is not equivalent to having the same memories and experiences. Such things are not stored in DNA.


Would you commit suicide to find out?

Would you believe it if you witnessed a minutes-long, violent, extremely painful suicide, and the dead individual’s clone said they remember everything, and that despite all the screaming and begging, it was an unremarkable experience?


I wouldn't get into a deep coma to find out either!


>If I copy your brain, you won’t be inside it, even if it’s effectively you, as far as anyone else can tell.

If you wake up tomorrow, will you be inside of your brain, even if it's effectively you, as far as anyone else can tell?

What if your awareness is reconstructed daily from the structure of your synapses? If you copy that structure, that copy is you, in the same way that you are now almost the same person as the one that went to sleep yesterday. (Quite like the movie 'The Prestige'.)



