
When I was a young person in the 1970s reading about technology, space travel was the focus of futurism and predictions. I read a lot about the "accelerating rate of progress" in space technologies, and many experts confidently predicted permanent lunar colonies by around 1990 and luxurious Martian and orbital space-colony living after 2000. The reasoning was more or less the same as that of today's "AI singularitarians": if you look at the "curve of progress" in space tech from WWII to 1970 and extrapolate from it, we ought to have the full "Star Trek" technologies by now.

However, as we know, the pace of progress in space travel and settlement technology slowed and nearly stagnated, despite many very smart people who considered rapid space colonization a sure, inevitable outcome.

I think the same goes for extrapolations from our current computing technology. As exciting as the blue-sky scenarios are, and as fervently as I hope that brain uploading and similar goals will be achieved, I suspect the barriers to computationally transcending the current limits of our bodies will be at least as challenging as the barriers to permanent off-Earth colonization.




The same situation existed for life-extension technologies in the 1970s. People were far more optimistic than the science merited. Here's an explanation of why things are different now for that field, and you can perhaps draw the parallels for the other parts of the SMILE (space migration, intelligence increase, life extension) equation:

http://www.fightaging.org/archives/2009/05/learning-from-the...

But in the 1970s, getting proto-SENS to work in mice would not have been a billion-dollar, ten-year proposal, as it is today. It would have required a far greater initiative, one to rival the space program or the formation of the NIH. In 2009 we live in an age of biotechnology and life-science capabilities that is far, far removed from the rudiments of 1970. Work that today can be achieved by a single postgrad in a few months for a few tens of thousands of dollars would have required entire dedicated laboratories, years of labor, and tens of millions of dollars in the 1970s.

The advocates of 1970 who predicted engineered longevity in their lifetimes were wrong for two reasons: a) the technology of that period, while advancing rapidly, would remain inadequate to the task for decades, and b) there was no set of events likely to lead to the enormous outlay of resources required to make significant progress in life-extending medicine.


Another reason, I think, is that life-science applications are inherently difficult for ethical reasons. It's one thing to risk the lives of a few people sent to space once in a blue moon, and another to deliberately put patients' lives at risk in experiments. Experimenting with computers, on the other hand, is harmless.


In sci-fi stories of a libertarian bent, ethics were a little more mutable in pursuit of the ultimate goals, and such externalities were often avoided or ignored.


As Kurzweil writes in the article, this is exactly one of the arguments against the singularity that he brings up and discusses in his book.

Lots of people argued that if space exploration or automotive engineering continued growing at an exponential pace, we'd have flying cars and space colonies. They didn't explain why the growth should remain exponential rather than flattening into an S-curve.

In "The Singularity is Near" Kurzweil argues and shows why computation and reverse engineering of the human brain are subject to exponential growth. Short answer: there are lots of competing technologies and researchers that have different ways of ensuring that current computational advances continue well into the future - we are far from reaching any physical laws that will stop us.


I'm a huge believer in the transformative power of technology, and AI in particular. I don't see, though, why AI is so fundamentally different from nanomachines, "fully programmable" genetic engineering, and "free energy" technologies in either feasibility or potential impact on humanity.

You might say my problem with the singularity talk is that I think we have ALREADY gone through a "technological singularity" under the current, rather flaky definition. We have already experienced technology-driven change that made the future completely unpredictable from the past; the integrated circuit alone has revolutionized the nature of human life. The claim that future changes will be orders of magnitude more important seems difficult to measure objectively, which is part of why the topic has a bit of the "code smell" of pseudoscience.

Once I really start thinking about how technology has changed the human condition, it seems to me that the invention of both spoken and written language is actually the TRUE "singularity" because human civilization is radically different from how other animals live, and it is the ability to transmit and preserve information that is the most important enabling technology.

Even if I can upload my brain into a computer, isn't that just an upgrade to the capability I already have, which is to preserve my brain contents by writing and transmitting that information into the future?

tl;dr - Humanity has already experienced fundamentally transformative technological change, and we will certainly experience more in the future. Is there any objective way to measure the impact of technologies, and are any potential future changes actually "more important" than those we have already experienced?


Again, I really recommend reading The Singularity is Near before arguing against it (actually, I'm not sure you're arguing against it anymore). One of the topics there is a long argument about how the pace of change is increasing.

(E.g., compare life for someone born in the 13th century with life for someone born in the 14th century: not much of a difference. Then compare how life has changed in just the last 50 years.)


Imagine that you are playing a strategy game, and that there is an upgrade that makes all other upgrades go 1000x faster, and cost 1/1000 as much. Surely you would want to get this one as soon as possible?

This is the promise of AI.


Imagine a silicon brain that we create that is 10% more capable than our own. It's nothing astounding, but it's smarter, and it has access to all knowledge within seconds. So this brain is able to actively comb through that information, look at its own design, and, being 10% smarter, design a more efficient AI. That AI, now smarter still, will be able to do the same, and so on. It's not unlike what we do now: you couldn't design and build a modern computer without a modern computer. But the AI will be able to do it better, and faster. So it's not about a potential advancement that changes the world a little. It's about the possibility of jumping human knowledge forward by years or even centuries in a very short period of time.


The problem is that designing a smarter AI may itself get harder at each step, so the outcome would depend on the balance between rising difficulty and rising capability.
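
That balance can be made explicit with a toy model (all numbers here are invented for illustration): let each generation's improvement be capability divided by difficulty, with difficulty itself a power of capability. The same recursion either takes off, compounds steadily, or peters out, depending on that one exponent.

    # Toy model of recursive self-improvement (parameters are invented).
    # Each generation improves by capability/difficulty; difficulty itself
    # rises with capability. One exponent decides the outcome.
    def run(difficulty_exponent, generations=30):
        capability = 1.0
        for _ in range(generations):
            difficulty = capability ** difficulty_exponent
            capability *= 1 + 0.1 * capability / difficulty  # 10% gain at parity
        return capability

    print("difficulty lags capability: ", run(0.5))  # explosive takeoff
    print("difficulty keeps pace:      ", run(1.0))  # steady 10% per generation
    print("difficulty outpaces:        ", run(1.5))  # diminishing returns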


Comparison to space travel is silly. I'm your age, and I remember clearly what happened. Don't you? We kids were so excited about getting closer and closer to a man walking on the moon. The closer we got, the more exciting. Finally, we watched it happen. Woohoo! We did it! And then we all lost interest. It was done. It wasn't entertaining anymore and it wasn't worth any money to anybody, so we got on with our real lives and did things that WERE fun and profitable.

It was a government program built for reasons other than economic, so it didn't grow organically as an economic system serving and adapting to the needs of customers. When the non-economic reason went away, the growth rate dropped to near zero.

Compare that to information technology. Is that one centralized federal program divorced from economic reality? No, it's growing organically, serving countless different markets demanding countless things. Machines can do more every year, and the more they can do, the more diverse the demands on them, the more people willing to pay, the more people working on better IT, the more investment, and so on.

Will there be a singularity? No, we're not all converging on a point in technology or time. We already have domains where machines are orders of magnitude better than people, and others where the reverse is true, and the machines are sliding past us gradually. There isn't one "general intelligence" model, either. People do some cognitive activities so poorly that they wouldn't be considered intelligent at all if we didn't use ourselves as the benchmark for general intelligence. We'll find ways to improve machines in specific domains, and other ways to generalize their intelligence across various categories of domains, one step at a time, without ever crossing any visible "general intelligence" threshold.

The process has been underway for a long time. We'll eventually reproduce whatever Nature has done, and much that it hasn't, if we don't kill ourselves first.

How soon? How soon what? A gradual change is already underway and will just continue. Maybe at some point augmented humans will notice that they can't think of any specific thing an unaugmented human can do better than a pure machine, but others will debate it and the augmented people and machines will keep improving until they all think it was sometime in the past.

Okay, how soon will it be possible to upload my mind to a long-lived substrate so that I won't disappear like an unsaved term paper when the power goes off? Probably not long after my power goes off....


You are missing a key difference: there is no profit in reaching the stars. Beyond that, there are no barriers here that come close to the physics and energy requirements of space travel. Unlike a warp drive, which we don't even know is possible, the human brain is an actual physical device that exists; it simply uses chemicals as its logic gates and transistors. We both run on electricity.

I'm a biologist by schooling, and a programmer by occupation. So I understand the science on both sides, and it's not a matter of if, it's a matter of when. And it's coming a lot sooner than people think.


  > And it's coming a lot sooner than people think.
This is an indicator that you hardly understand the science behind it. We barely have any understanding of how the brain works, so even if the singularity is possible, it is very, very far away.


A broad understanding of how the brain functions is not necessary (and would not be sufficient) to replicate it. To make a model of a brain that runs on a computer, we only need to be able to replicate its basic building blocks, and then copy the structure of a brain over.

The basic technologies to do this already exist, and are presently used to reverse engineer microchips (among other things). What you do is first freeze a brain, and then carefully slice a few micrometers off the top. Then record all the connections between neurons in the top layer. Slice another layer off and continue.

This would take a very long time, and cost hundreds of billions. It would not, however, require any advancement in technology over our present level. The key technology we presently lack is a computing substrate sufficient to run the simulated brain.
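
A back-of-envelope estimate gives a feel for the scale (all figures below are rough assumptions, not established numbers): dividing the brain's volume into electron-microscopy-grade slices and imaging each one at synapse-resolving resolution lands the raw data in the zettabyte range, before any simulation even starts.

    # Back-of-envelope sizing of the slice-and-scan approach.
    # All figures are rough, order-of-magnitude assumptions.
    BRAIN_VOLUME_MM3 = 1.2e6   # ~1.2 liters
    SLICE_AREA_MM2 = 15_000    # rough cross-section of a whole brain
    SLICE_THICKNESS_NM = 30    # electron-microscopy-grade sections
    PIXEL_SIZE_NM = 5          # needed to resolve individual synapses
    BYTES_PER_PIXEL = 1

    stack_height_mm = BRAIN_VOLUME_MM3 / SLICE_AREA_MM2
    num_slices = stack_height_mm / (SLICE_THICKNESS_NM * 1e-6)
    pixels_per_slice = SLICE_AREA_MM2 * (1e6 / PIXEL_SIZE_NM) ** 2
    total_bytes = num_slices * pixels_per_slice * BYTES_PER_PIXEL

    print(f"slices: {num_slices:.1e}")                       # ~2.7e6 slices
    print(f"raw data: {total_bytes / 1e21:.1f} zettabytes")  # ~1.6 ZB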


That's one approach, but I don't think there's any evidence that you would end up with a working brain at the end, let alone a copy of the individual you started with. And good luck finding a volunteer. :)

The brain's functionality is not just due to the physical connections between neurons, it's the chemical and electrical connections as well, operating on various simultaneous levels (that is, the frequency of firing as well as the strength of firing of neurons matters). A frozen brain isn't working, and you need to copy all that stuff too, and the technology to do that doesn't exist. We can barely measure it, let alone reproduce it.

I don't think you can say that a thorough understanding of all those multiple levels of complexity in the brain just isn't necessary. I really don't think it is that easy.


To make a physics analogy: I don't need to understand the gas laws to reproduce them by simulating Newtonian mechanics.
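
A minimal sketch of what that analogy means in practice (a deliberately crude simulation, not a serious one): bounce non-interacting particles around a box using nothing but Newton's laws, and the ideal-gas relation PV = NkT falls out of the measured wall pressure without the simulation "understanding" thermodynamics at all.

    # Recover the ideal gas law from plain Newtonian mechanics.
    # N point particles bounce elastically in a 1x1x1 box; pressure is
    # the momentum they transfer to the walls per unit time and area.
    import random

    N, MASS, STEPS, DT = 500, 1.0, 4000, 1e-3
    pos = [[random.random() for _ in range(3)] for _ in range(N)]
    vel = [[random.gauss(0, 1) for _ in range(3)] for _ in range(N)]

    impulse = 0.0
    for _ in range(STEPS):
        for p, v in zip(pos, vel):
            for k in range(3):
                p[k] += v[k] * DT
                # Elastic bounce only when moving outward through a wall.
                if (p[k] < 0 and v[k] < 0) or (p[k] > 1 and v[k] > 0):
                    v[k] = -v[k]
                    impulse += 2 * MASS * abs(v[k])

    pressure = impulse / (6.0 * STEPS * DT)   # 6 unit walls on a unit cube
    # Temperature from kinetic energy, with Boltzmann's constant set to 1:
    # (3/2) N kT = sum of (1/2) m v^2.
    kT = sum(MASS * sum(c * c for c in v) for v in vel) / (3 * N)
    print(f"PV = {pressure:.0f}, NkT = {N * kT:.0f}")  # match within noise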


Someone might have said the same thing about molecular biology before Watson and Crick. Neuroscience has been growing by leaps and bounds in the past decade, and precisely because it is still in its infancy, there is significant low-hanging fruit to be picked as visualization and stimulation techniques improve and are deployed at scale. It actually does look like we will be able to figure out how the brain works in a decade (decades?). Once that is figured out, we already have the computing framework to do massive simulations of the brain.

Allen himself funds a brain research institute, so one wonders why he's pessimistic about the future of neuroscience.


I bet at least one brain behind the many eyes reading this knows how human minds work, at least the significant functions.

Hey, I am talking to you! Time to speak out loud! Tell us how you function.


How we work is completely irrelevant to reaching a singularity. These arguments are like predicting we will need only a few computers as large as cities to handle our needs, or like building OS/2 out of Intel assembly.

When we have a GitHub of standardized computations, plus deep-learning (or evolutionary) algorithms that pull it all together with no human intervention, we will have the singularity. Then it can try to figure us out, or do something useful instead.

It won't look magical, it won't take a huge market force, it won't answer your religious questions or guess what you are thinking... But it will change how we approach and automate solutions to what are currently intellectual problems. Reality is always mundane.


Well, that statement was a lot more correct even just a few years ago, but we're starting to get a much better grasp on it. In the past 10 years or so we have learned more about how the brain functions than in all of prior human history. True, we're not yet experts, but you're assuming we need to understand entirely how the human brain works in order to build something that supersedes it in capability, and that is simply untrue. I don't need to know anything about how a gasoline engine works to build a faster and more efficient electric motor; in truth, such thinking might even impede my ability to do so. The singularity isn't about recreating the human brain in chips. It's about the point at which the chips can process more than our brains can. What form that takes, and its consequences, are simply unforeseeable. Simple math proves it's coming.
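
For what it's worth, the "simple math" in question is the usual Kurzweil-style extrapolation. A sketch of it below; the brain's throughput and the hardware baseline are loose, contested assumptions, not established facts:

    # Kurzweil-style crossover extrapolation (constants are loose assumptions).
    # If affordable compute keeps doubling every ~2 years, when does $1000
    # of hardware match a rough estimate of the brain's raw throughput?
    import math

    BRAIN_OPS_PER_SEC = 1e16   # common rough estimate; hotly debated
    OPS_PER_1000_USD = 1e12    # ~$1000 of hardware, early-2010s ballpark
    DOUBLING_YEARS = 2.0
    BASE_YEAR = 2013           # rough vintage of this thread

    doublings = math.log2(BRAIN_OPS_PER_SEC / OPS_PER_1000_USD)
    print(f"crossover around {BASE_YEAR + doublings * DOUBLING_YEARS:.0f}")
    # Prints ~2040 -- but only if the curve stays exponential instead of
    # flattening into the S-curve raised upthread.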


I'm sorry, I mean no disrespect, but being a programmer by occupation doesn't really mean you understand what it would take to bring about the sort of rapture predicted by Kurzweil.

Last night I read several posts on HN about people calling themselves "hackers" or "programmers" because they played with WordPress and learned a little JavaScript on YouTube. This is not to be elitist, but I think claiming expertise on this topic requires a little more insight than what passes for being a "programmer" these days.

And you're somewhat wrong, I think. There is no inherent reason that traveling "to the stars" has to be unprofitable. In fact, many of the colonization predictions the parent discussed had some sort of capitalist, profit-driven motive, and the pace of "accelerating progress" could yet produce a cheap means of such travel. I don't know that speculative physics about faster-than-light travel (warping space-time, etc.) is any more unrealistic than some of these prognostications about AI and spiritual transcendence, whatever that is.

But honestly, responding to this post sort of reminds me of responding to Way of the Master types anyway. :)


And it's not just a matter of profit. In fact, research toward superhuman intelligence must hijack the ordo cognoscendi and jump to the front of the line, because with superhuman intelligence humans can leave the rest of science to the machines (hopefully).


>There is no profit in reaching the stars.

There's no profit in the singularity either.


There's stupendous profit in every small incremental step toward it. AI research in general has proven unbelievably profitable -- even when you fail at your stated goals (like most AI research to date), every minor partial result can probably be turned into a multimillion-dollar business.

Before the AI winter, there was a lot of criticism of how the US govt spent a lot of money on a lot of projects which fizzled out. One of the results built from the ashes of those DARPA projects was the Dynamic Analysis and Replanning Tool, used to optimize and schedule transportation of supplies and personnel.

From Wikipedia: "Introduced in 1991, DART had by 1995 offset the monetary equivalent of all funds DARPA had channeled into AI research for the previous 30 years combined."


Imagine having a computer with the same level of cognition as a human. I think most people would disagree with you on that not being valuable.


You might not want to pay to have your brain preserved in perpetuity, but I'm pretty sure some would.


Strong AI could potentially dwarf all past profits; you're simply not correct.


Are you kidding? Because I don't want to be a jerk, but you cannot have possibly thought that through.


Actually, he is correct: if everything is virtualized (and therefore abundant), then the price of everything drops to zero.


True, if everybody has access to strong AI. I doubt that is how it will play out, at least at the beginning. Whoever invents it first will probably hoard it as an advantage over everybody else. The technology will eventually spread, but by that time our economy will probably have adapted.


You're a little unclear on why it's called the singularity: it means a point so drastically different that we cannot predict what the world will look like after it happens. However, should the planet still exist the day after, you can pretty much guarantee that the company controlling this technology will make Apple, Google, and Microsoft look like technological infants. Well, unless it is Google, which seems to be the play they are making by hiring RK.

Also, World of Warcraft is virtualized, and everything in that game is potentially infinitely abundant. And yet, it still seems to pull in billions. So...


Yes, but my point is essentially that talking about profits in a post-singularity world is probably as sensible as talking about Mao as CEO of China. After the singularity we will likely have a different socio-economic system, and profits will probably be something like what a noble rank is today.



