
I'm a huge believer in the transformative power of technology, and AI in particular. I don't see, though, why AI is so fundamentally different from nanomachines, "fully programmable" genetic engineering, and "free energy" technologies in either feasibility or in potential impact on humanity.

You might say my problem with the singularity talk is that I think we have ALREADY gone through a "technological singularity" under the current, rather flaky definition. We have already experienced technology-driven change that made the future completely unpredictable from the past. The integrated circuit has already revolutionized the nature of human life. The claim that future changes will be orders of magnitude more important is difficult to measure objectively, which is part of why the topic has a bit of a "code smell" of pseudoscience.

Once I really start thinking about how technology has changed the human condition, it seems to me that the invention of both spoken and written language is actually the TRUE "singularity" because human civilization is radically different from how other animals live, and it is the ability to transmit and preserve information that is the most important enabling technology.

Even if I can upload my brain into a computer, isn't that just an upgrade to the capability I already have, which is to preserve my brain contents by writing and transmitting that information into the future?

tl;dr: Humanity has already experienced fundamentally transformative technological change, and we will certainly experience more in the future. Is there any objective way to measure the impact of technologies, and are any potential future changes actually "more important" than those we have already experienced?




Again, I really recommend that you read The Singularity Is Near before arguing against it (actually not sure if you are arguing against it anymore). One of the topics there is a long argument about how the pace of change is increasing.

(E.g., compare the difference in life between someone born in the 13th century and someone born in the 14th century: not much of a difference. But look at how much life has changed in just the last 50 years.)


Imagine that you are playing a strategy game, and that there is an upgrade that makes all other upgrades go 1000x faster, and cost 1/1000 as much. Surely you would want to get this one as soon as possible?

This is the promise of AI.


Imagine a silicon brain that we create that is 10% more capable than our brain. It's nothing astounding, but it's smarter. And it has access to all knowledge within seconds. So this brain is able to actively comb through that information and look at its own design, and being 10% smarter, it's able to design a more efficient AI. That AI, now X% smarter, will be able to do the same, etc, etc... It's not unlike what we do now: you couldn't design and build a modern computer without a modern computer. But the AI will be able to do it better, and faster. So it's not about a potential advancement that changes the world a little. It's about the possibility of jumping human knowledge forward by years or even centuries in a very short period of time.
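The compounding effect described above can be sketched as a toy model (purely illustrative; the 10%-per-generation gain is the figure assumed in the comment, not anything empirical):

```python
# Toy model of recursive self-improvement: each AI generation designs a
# successor 10% more capable than itself, so capability compounds
# geometrically, like compound interest.
def generations_to_reach(target: float, gain: float = 0.10) -> int:
    """Count how many design generations it takes for capability to
    multiply by `target`, given a fixed per-generation `gain`."""
    capability = 1.0
    generations = 0
    while capability < target:
        capability *= 1.0 + gain  # each AI builds a slightly smarter successor
        generations += 1
    return generations

print(generations_to_reach(2.0))     # doubling: 8 generations
print(generations_to_reach(1000.0))  # a 1000x jump: 73 generations
```

The point of the sketch is just that at any fixed relative improvement per generation, growth is exponential: a 1000x jump takes only about nine times as many generations as a 2x jump.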


The problem is that designing a smarter AI may itself get harder with each generation, so the outcome depends on the balance between rising difficulty and rising capability.
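That balance can be made concrete with another toy model (illustrative only; both growth rates are made-up parameters): if design difficulty compounds faster than capability, each generation contributes less than the last and total progress converges to a ceiling instead of exploding.

```python
# Toy model: capability and design difficulty each grow by a fixed rate
# per generation. Progress per generation is capability/difficulty, so
# total progress is a geometric series whose ratio decides the outcome.
def total_progress(n: int, cap_gain: float, difficulty_gain: float) -> float:
    """Sum of per-generation progress over n generations."""
    capability, difficulty, progress = 1.0, 1.0, 0.0
    for _ in range(n):
        progress += capability / difficulty
        capability *= 1.0 + cap_gain
        difficulty *= 1.0 + difficulty_gain
    return progress

# Difficulty outpaces capability: the series converges (a ceiling near 12).
print(total_progress(1000, 0.10, 0.20))
# Capability outpaces difficulty: progress keeps exploding.
print(total_progress(100, 0.20, 0.10))
```

With a 10% capability gain against a 20% difficulty gain, even 1000 generations never push total progress past about 12 units; flip the rates and 100 generations already yield tens of thousands. The "singularity" question, in this framing, is which side of that ratio real AI research lands on.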



