The first problem with "The Singularity" is that nobody can actually agree on a definition. For instance:
1. The first comment on this article: The singularity refers to the time in human history when computers first pass the Turing test.
2. Wikipedia: A technological singularity is a hypothetical event occurring when technological progress becomes so rapid that it makes the future after the singularity qualitatively different and harder to predict -- hey, y'know what I call the point beyond which the future is hard to predict? I call it "the present".
... and I'm too lazy to keep looking up other definitions, but they're definitely out there.
The second problem with the idea is that some folks seem to have this flawed logical chain in mind:
1. Assume that a human brain can make a computer smarter than itself.
2. In that case, the computer smarter than the human can make a computer smarter than itself, which can in turn make a still smarter computer, and so on, leading to vastly smarter computers very quickly.
This ignores the fact that if we ever do make a computer smarter than a human it will either be via (a) reverse-engineering of the human brain or (b) some kind of evolutionary algorithm. The slightly-smarter computer is then no more capable of building an even-smarter computer than we are, since it also has to fall back on one of these two dull mechanical processes.
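To make concrete what "some kind of evolutionary algorithm" means as a dull mechanical process, here is a minimal toy sketch: blind mutation plus selection grinding toward a target, with no understanding anywhere in the loop. The target, fitness function, and parameters are all invented for illustration and carry no claim about how a real smarter-than-human system would be built.

```python
import random

# Toy evolutionary loop: evolve bit-strings toward an arbitrary target by
# blind mutation and selection. Nothing here "understands" the problem.

TARGET = [1] * 20                       # arbitrary, made-up goal: all ones

def fitness(genome):
    """Count how many bits match the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        print(f"solved at generation {generation}")
        break
    # keep the best half, refill with mutated copies of the survivors
    survivors = population[:25]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(25)]
```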
The human brain doesn't need to build a smarter brain. It just needs to build something of equivalent smartness (which should be theoretically possible; there's no reason to believe the human brain is the upper bound for all generalised reasoning ability) on a substrate like silicon, which is subject to Moore's Law (and thus gets inherently faster with time) and which is immortal and duplicable.
Build 1 functioning brain in silicon, and:
- 18 months later you can build one that's twice as fast using the same principles
- duplicate this brain and get the power of multiple people thinking together (but with greater bandwidth between them than any human group)
- run this brain for 100 years and get a mind with more accumulated years of intellectual functioning than any human has ever had
- duplicate whatever learning this brain has accumulated over 100 years (which, say, brings it to the level of an Einstein) as many times as you have physical resources for (so, clone Einstein)
All those are paths to super-human AI from the production of a human-intelligence brain in a non-biological form.
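To put rough numbers on the first point in that list, here is a back-of-the-envelope sketch of the compounding implied by a Moore's-Law-style doubling. The 18-month doubling period and the time horizons are just the assumptions from the comment above, not a forecast.

```python
# Compounding implied by "twice as fast every 18 months".

DOUBLING_PERIOD_YEARS = 1.5   # assumed 18-month doubling, per the comment

def speedup_after(years):
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for years in (1.5, 10, 30, 100):
    print(f"after {years:5.1f} years: {speedup_after(years):,.0f}x the original speed")
```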
So, if a human brain can make a computer brain, which is a reasonable assumption, then a human brain can make a brain smarter than itself.
But (part of) the point is, building a human brain in a non-biological substrate is not a miracle. It would be a miracle in the same way that transistors and penicillin are, not in the way that Jesus' resurrection is. I.e., a fantastic, happy, unlikely but possible event that will change the world for the better.
After all, we know that human brains can be built in some way: we have the evidence for that claim inside billions of skulls. The challenge, then, is not to push computational capability beyond some theoretical limit, but merely to achieve artificially what already exists.
We've managed to copy birds and fish, we've sent people to the moon, we've sent probes outside the solar system, we've beaten countless diseases, we've extended our own lifespans by decades, we've created monuments of human culture... so why assume that we won't achieve this too?
> Exactly. Skulls. With connection to living and feeling flesh.
And, unless you claim there is something inherently magical and miraculous in that, it can be reproduced or abstracted.
> We cannot even model the brain of the simplest worm…
I don't believe that's true. I am quite sure we have some models of varying accuracy (as in "good enough") for those. Maybe we cannot run the most accurate ones (modeling the chemical processes within individual neurons) in real time, but, by 2012, we'll be able to run them twice as fast as we do now.
Exactly. Skulls. With connection to living and feeling flesh.
The human brain provides a minimum theoretical limit, that's all. The existence of the human brain proves categorically that it is physically possible to build a computing device that fits inside a skull and has the computational capabilities of a brain. It exists, therefore it can exist.
So any argument that "it's impossible to build such a device" is refuted by our very existence.
We cannot even model the brain of the simplest worm
See Moore's law and the relationship between computational capacity and neural network modelling.
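As a rough sense of scale (and only that; this says nothing about biological fidelity), a toy network the size of a worm's nervous system is computationally trivial on current hardware. The connectivity, parameters, and inputs below are entirely made up for illustration.

```python
import numpy as np

# Toy scale argument, not a biological model: simulate ~300 leaky
# integrate-and-fire neurons (C. elegans has 302) with random synapses.

N, STEPS = 302, 10_000                    # neurons, 1 ms timesteps
weights = np.random.randn(N, N) * 0.05    # made-up synaptic weights
v = np.zeros(N)                           # membrane potentials
threshold, decay = 1.0, 0.95

for _ in range(STEPS):
    spikes = v > threshold
    v[spikes] = 0.0                       # reset neurons that fired
    external = np.random.rand(N) * 0.05   # arbitrary background input
    v = decay * v + weights @ spikes + external

# The inner loop is a 302x302 matrix-vector product per step: roughly
# 10^5 multiply-adds, trivially real-time on a modern machine.
```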
Eliezer Yudkowsky has actually gone and laid out just what the different things people seem to mean by "the Singularity" are: http://yudkowsky.net/singularity/schools ; of course, this may be incomplete, but it seems pretty good.
The problem then is people just referring to "the Singularity" without specifying what they're talking about.
"This ignores the fact that if we ever do make a computer smarter than a human it will either be via (a) reverse-engineering of the human brain or (b) some kind of evolutionary algorithm."
There is a larger discussion to be had, and I believe I have some novel points to make on the subject, but for now I would simply note that just because you can't imagine another way this could be done doesn't mean it can't be done another way. You called this a fact, and that's a strong claim that I urge you to retract.
To approach this from a different perspective, imagine somebody saying the following at about 1890:
"This ignores the fact that if we ever do make a heavier-than-air flying machine it will either be via (a) reverse-engineering of birds or (b) some kind of evolutionary algorithm."
I think the misunderstanding comes from confusing "strong AI" with "human-like AI". It may be impossible for a meatbrain to build another meatbrain-like AI, but if we are willing to compromise on that and build an AI that functions at or above meatbrain level in some of its functions, we already have a couple of examples of very clever parts we could eventually combine.
A recent post on LessWrong, http://lesswrong.com/lw/3gv/statistical_prediction_rules_out... , suggests that won't help much, at least for most people. Even when they have superior tools and information available, most people prefer their own inferior judgement.
>If this is not amazing enough, consider the fact that even when experts are given the results of SPRs, they still can't outperform those SPRs (Leli & Filskov 1985; Goldberg 1968).
>So why aren't SPRs in use everywhere? Probably, we deny or ignore the success of SPRs because of deep-seated cognitive biases, such as overconfidence in our own judgments. But if these SPRs work as well as or better than human judgments, shouldn't we use them?
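For context on what an SPR usually amounts to: typically just a fixed weighted sum of a handful of cues, in the spirit of Dawes's "improper" unit-weighted linear models. The cues, weights, and cutoff below are invented placeholders, not taken from any study cited in the post.

```python
# Minimal sketch of a statistical prediction rule: a weighted sum of a few
# standardized cues with weights fixed in advance (here, unit weights).

def spr_score(cues, weights):
    """SPR prediction: just a dot product, nothing clever."""
    return sum(w * c for w, c in zip(weights, cues))

# hypothetical applicant cues, already standardized (z-scores)
applicant = {"test_score": 1.2, "gpa": 0.4, "interview": -0.3}

score = spr_score(applicant.values(), weights=[1, 1, 1])  # unit weights
print("admit" if score > 0 else "reject", round(score, 2))
```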
A standard alternative path to superhuman intelligence (which is the precondition for a singularity, not AI specifically) is IA: Intelligence Augmentation. I.e., work on making humans smarter than they are. It's reasonable to say that the Internet has achieved a lot of progress in that direction already. Get brain/computer interfaces working, and suddenly everyone gains 10 effective IQ points through instant access to encyclopaedias and calculators. Etc, etc.
Computers can solve mathematical equations faster than humans, recall vast amounts of historical knowledge and information in fractions of a second and passably translate between many languages.
Are you saying that your definition of 'smart' wouldn't include many of the capabilities of Wolfram Alpha?
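As one narrow illustration of the kind of capability in question, symbolic equation solving of the sort Wolfram Alpha offers is also available in the open-source sympy library; the specific equations below are arbitrary examples.

```python
import sympy as sp

# Symbolic solving and integration, well beyond unaided human speed.
x = sp.symbols('x')
print(sp.solve(sp.Eq(x**2 - 5*x + 6, 0), x))             # [2, 3]
print(sp.integrate(sp.exp(-x**2), (x, -sp.oo, sp.oo)))    # sqrt(pi)
```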