Whilst I agree with the author that there are religious aspects to the whole Singularity-with-a-capital-S movement, it seems to me his arguments are equally unsound.
First of all, this is not an attack on the singularity idea, but on the "strong AI" idea.
With respect to strong AI, Nick is very close to the problem, and that's the issue. Like any good engineer/scientist, he sees problems everywhere. Yes, there are many problems before we get to strong AI. That's not news. There were many problems to resolve before people could pay a small sum of money and let a giant metal tube take them through the air to a destination halfway round the world without killing them - but those problems got resolved, one by one. Many problems do not amount to an impossibility, only a damn hard problem (which we knew strong AI was anyway).
As for his other assertion, that the concept of the singularity is a fantasy, Nick's main argument is that the singularity will only last "for a time", and that it will turn into a classic S-curve. He waves Feynman's name around as supporting evidence, but does not address the fact that intelligence (and artificial intelligence in particular) is not subject to the Malthusian laws which have caused other phenomena to follow S-curves. Yes, we only have access to so many particles, but the whole point of exponential AI is figuring out better ways to use the same number of particles. There may be a theoretical limit to how efficiently we can use those particles, but even so there are a lot of particles, and if we can manufacture even just human-equivalent computing matter in factories, that's already enough to achieve a singularity.
So, in summary:
- Yes, there is a problem with Singularity stuff being quasi-religious (I've been to a Singularity event and agree - though the proportion of kooks was relatively low, there were certainly a few, some of them on the podium speaking)
- No, this article is not a convincing argument against either strong AI or the potential of a singularity-like event to occur.
Indeed. Belief that strong AI is never possible is also religious. If the human mind is simply a chemical computer (and, science currently says it is), it can (eventually) be simulated with sufficiently powerful hardware.
I haven't yet heard any critic of Singularity religion claim that strong AI is never possible.
What we are skeptical of is the claim that we need just a little more development and strong AI will take off just like that, without any regard to physical-world constraints.
Well, one criticism I usually hear about singularity subjects is that "you can't upload your mind into a computer because the emotional/soul/bla aspects of a human being could never be emulated"... which is obviously religious. Yes, it's very, very hard, but that's certainly no reason to call it impossible.
And even Kurzweil doesn't say that exponential curves go on forever. But he does take the trouble to analyze what the physical limits actually are. Based on that, Moore's Law can go on for at least another fifty years.
The amount of energy needed by brains is fixed. The amount needed by computers can be decreased.
Also, the upper bound on how much energy is available to us, should we really need it, from the nearest power source is some 6-9 orders of magnitude greater than the amount we currently take from it as a whole planet (most of it being wasted). Food growth, on the other hand, is much more limited by many other factors such as ecosystem impact, limited land area, failing crops, the speed of seed production, and all sorts of pesky logistical issues...
I was referring to going and harnessing the sun's output, which is, iirc, somewhere between 10^7 and 10^9 times bigger than what the Earth gets (can't be bothered to do the calculation to figure out exact order of magnitude).
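For what it's worth, here's a back-of-envelope version of that calculation (the solar luminosity, Earth radius, and Earth-Sun distance are standard textbook values I'm pulling in for the estimate, not numbers from this thread):

```python
# Rough ratio of total solar output to the slice Earth intercepts.
import math

L_SUN = 3.8e26     # W, total solar luminosity (standard value, assumed)
R_EARTH = 6.37e6   # m, Earth's radius
AU = 1.496e11      # m, Earth-Sun distance

# Earth catches only the fraction of the Sun's output that falls on its
# cross-sectional disc out of the whole sphere of radius 1 AU.
fraction = (math.pi * R_EARTH**2) / (4 * math.pi * AU**2)

print(f"Earth intercepts ~{fraction:.1e} of the Sun's output")       # ~4.5e-10
print(f"i.e. ~{L_SUN * fraction:.1e} W out of {L_SUN:.1e} W")        # ~1.7e17 W
print(f"so the total is ~{1 / fraction:.1e} times what Earth gets")  # ~2.2e9
# That puts the ratio right at the top of the 10^7-10^9 range guessed above.
```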
If our hypothetical AI saturates insolation output and minimises waste (most of our 5% usage is utter waste from a computational point of view. We mostly spend our energy to produce poop and move physical stuff around pointlessly), it will already have a lot more energy to spend on computation than we do today, and then if it really needs more there's millions of times more energy available just one AU away, easily harvested if it's already figured out how to do so efficiently on Earth.
So my point is, there may be a Malthusian limit of sorts, but by the time we reach that, the bottom part of the S curve is so far behind that we can't even remember it anymore. It's as if the classical Malthusian limit was that Earth can "only" feed 10 million billion people.
Oh, and I forgot another source of huge amounts of energy: Jupiter. 75% hydrogen, 1.8986×10^27 kg, which works out as ... well, a whole lot of fusion energy if you can harvest it.
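To put a hedged number on "a whole lot": assuming hydrogen-to-helium fusion releases roughly 0.7% of the reacting mass as energy and that, very optimistically, all of that hydrogen could be burned:

```python
# Rough upper bound on the fusion energy locked up in Jupiter's hydrogen.
M_JUPITER = 1.8986e27    # kg (figure quoted above)
H_FRACTION = 0.75        # hydrogen mass fraction quoted above
MASS_TO_ENERGY = 0.007   # ~0.7% of mass released by H -> He fusion (assumed)
C = 3.0e8                # m/s, speed of light

energy_j = M_JUPITER * H_FRACTION * MASS_TO_ENERGY * C**2
print(f"~{energy_j:.1e} J")   # on the order of 1e42 J
# For scale, the Sun radiates roughly 1.2e34 J per year, so this is
# comparable to ~1e8 years' worth of total solar output.
```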
I believe the Fermi Paradox[1] is very compelling evidence that the evolution of computation is logistic[2] and further that saturation comes before practical interstellar travel. Otherwise I feel it's likely we would have already seen evidence of Clarke's Overmind[3].
That intelligence becomes increasingly specialized rather than general is a reasonable explanation for why such saturation may happen (in a hand wavy way).
I also think that much of the discussion about the Singularity, downloading brains, etc, makes many anthropomorphic assumptions that are entirely unjustified. I suspect it's more likely that we'll have better luck building artificial chemical brains than we ever will simulating some captured representation of the state of a human brain on a digital computer.
On the upside, if computation's growth is logistic then certainly we're in the exponential portion right now, which means humanity is still likely to see generations of at least near-linear improvement. We may be essentially trapped in this solar system, but we'll likely have quite astounding information processing capabilities.
His one good point is that "The Singularity" is a bad name for a concept that doesn't refer to a single event, which makes it easy to dismiss with the "rapture of the nerds" strawman. Aside from that, it's mostly an argument from personal incredulity, which is even less valid here than usual. I wouldn't have believed you a year ago if you told me that a search and advertising company had self-driving cars roaming around Mountain View, but here we are.
Despite all the sensory inputs we can attach to computers these days, and vast stores of human knowledge like Wikipedia that one can feed to them, almost all such data is to a computer nearly meaningless.
> His one good point is that "The Singularity" is a bad name for a concept that doesn't refer to a single event, which makes it easy to dismiss with the "rapture of the nerds" strawman.
Right, so why are we even still discussing the concept? If we want to discuss what's likely to happen in the future then let's throw away the silly term "the singularity" and start talking about specifics.
The first problem with "The Singularity" is that nobody can actually agree on a definition. For instance:
1. The first comment on this article: The singularity refers to the time in human history when computers first pass the Turing test.
2. Wikipedia: A technological singularity is a hypothetical event occurring when technological progress becomes so rapid that it makes the future after the singularity qualitatively different and harder to predict -- hey, y'know what I call the point beyond which the future is hard to predict? I call it "the present".
... and I'm too lazy to keep looking up other definitions, but they're definitely out there.
The second problem with the idea is that some folks seem to have this flawed logical chain in mind:
1. Assume that a human brain can make a computer smarter than itself.
2. In that case, the computer smarter than the human can make a computer smarter than itself, which can in turn make a still smarter computer, and so on, leading to vastly smarter computers very quickly.
This ignores the fact that if we ever do make a computer smarter than a human it will either be via (a) reverse-engineering of the human brain or (b) some kind of evolutionary algorithm. The slightly-smarter computer is then no more capable of building an even-smarter computer than we are, since it also has to fall back on one of these two dull mechanical processes.
The human brain doesn't need to build a smarter brain. It just needs to build something of equivalent smartness (which should be theoretically possible, there's no reason to believe the human brain is the upper bound for all generalised reasoning ability) on a substrate like silicon which is subject to Moore's Law (and thus gets inherently faster with time) and which is immortal and duplicable.
Build 1 functioning brain in silicon, and:
- 18 months later you can build one that's twice as fast using the same principles
- duplicate this brain and get the power of multiple people thinking together (but with greater bandwidth between them than any human group)
- run this brain for 100 years and get an intellect that has been functioning and accumulating knowledge for longer than any human ever has
- duplicate whatever learning this brain has accumulated over those 100 years (which, say, brings it to the level of an Einstein) as many times as you have physical resources for (so, clone Einstein)
All those are paths to super-human AI from the production of a human-intelligence brain in a non-biological form.
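A minimal sketch of the compounding implied by the "twice as fast every 18 months" item above, assuming (and it is only an assumption) that the doubling period simply holds once the first silicon brain exists:

```python
# How fast a human-equivalent silicon brain becomes relative to the
# first one, if its substrate keeps doubling in speed every 18 months.
def speedup(years, doubling_period_years=1.5):
    return 2 ** (years / doubling_period_years)

for years in (3, 10, 30):
    print(f"after {years:>2} years: ~{speedup(years):,.0f}x the original speed")
# after  3 years: ~4x
# after 10 years: ~102x
# after 30 years: ~1,048,576x
```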
So, if a human brain can make a computer brain, which is a reasonable assumption, then a human brain can make a brain smarter than itself.
But (part of) the point is, building a human brain in a non-biological substrate is not a miracle. It would be a miracle in the same way that transistors and penicillin are, not in the way that Jesus' resurrection is. I.e., a fantastic, happy, unlikely but possible event that will change the world for the better.
After all, we know that human brains can be built in some way: we have the evidence for that claim inside billions of skulls. The question is then not to push the theoretical boundaries of computational capability beyond some theoretical level - but merely to achieve it again artificially.
We've managed to copy birds, fish, we've sent people to the moon, we've sent probes outside the solar system, we've beaten countless diseases, we've extended our own lifespans by decades, we've created monuments of human culture... why assume that we won't achieve this too?
> Exactly. Skulls. With connection to living and feeling flesh.
And, unless you claim there is something inherently magical and miraculous in that, it can be reproduced or abstracted.
> We cannot even model the brain of the simplest worm…
I don't believe that's true. I am quite sure we have some models of varying accuracy (as in "good enough") for those. Maybe we cannot run the most accurate ones (modeling the chemical processes within individual neurons) in real time, but, by 2012, we'll be able to run them twice as fast as we do now.
Exactly. Skulls. With connection to living and feeling flesh.
Human brains provide a minimum theoretical limit, that's all. The existence of the human brain proves categorically that it is physically possible to build a computing device that fits inside a skull and has the computational capabilities of a brain. It exists, therefore it can exist.
So any argument that "it's impossible to build such a device" is refuted by our very existence.
We cannot even model the brain of the simplest worm
See Moore's law and the relationship between computational capacity and neural network modelling.
Eliezer Yudkowsky has actually gone and laid out just what the different things people seem to mean by "the Singularity" are: http://yudkowsky.net/singularity/schools ; of course, this may be incomplete, but it seems pretty good.
The problem then is people just referring to "the Singularity" without specifying what they're talking about.
"This ignores the fact that if we ever do make a computer smarter than a human it will either be via (a) reverse-engineering of the human brain or (b) some kind of evolutionary algorithm."
There is a large discussion to be had and I do believe that I have some novel points to make on the subject, but what I would like to do at this point is note that just because you can't imagine another way that this could be done, it doesn't mean that it can't be done another way. You called this a fact, and that's a strong claim that I urge you to retract.
To approach this from a different perspective, imagine somebody saying the following at about 1890:
"This ignores the fact that if we ever do make a heavier-than-air flying machine it will either be via (a) reverse-engineering of birds or (b) some kind of evolutionary algorithm."
I think the misunderstanding comes from confusing "strong AI" with "human-like AI". It may be impossible for a meatbrain to build another meatbrain-like AI, but, if we are willing to compromise on that and make some AI that can function at or above meatbrain levels in some of its functions, we already have a couple of examples of very clever parts that we can, eventually, combine.
A recent post on LessWrong, http://lesswrong.com/lw/3gv/statistical_prediction_rules_out... , suggests that won't help much, at least for most people. Even when they have superior tools and information available, most people prefer their own, inferior, judgement.
>If this is not amazing enough, consider the fact that even when experts are given the results of SPRs, they still can't outperform those SPRs (Leli & Filskov 1985; Goldberg 1968).
>So why aren't SPRs in use everywhere? Probably, we deny or ignore the success of SPRs because of deep-seated cognitive biases, such as overconfidence in our own judgments. But if these SPRs work as well as or better than human judgments, shouldn't we use them?
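For anyone who hasn't clicked through: the SPRs in that literature are typically very simple linear models over a handful of cues. A minimal sketch (the cue names and the cutoff here are purely made up for illustration):

```python
# A toy unit-weighted statistical prediction rule (SPR): count how many
# of a fixed set of cues are present and compare against a cutoff.
def unit_weighted_spr(cues, cutoff=2):
    """Predict 'yes' when at least `cutoff` cues are present."""
    return sum(bool(v) for v in cues.values()) >= cutoff

applicant = {
    "completed_prior_program": True,   # illustrative cue names only
    "positive_reference": False,
    "relevant_experience": True,
}
print(unit_weighted_spr(applicant))  # True: 2 of the 3 cues are present
# The striking empirical claim is that rules this crude routinely match
# or beat expert judgement made from the very same cues.
```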
A standard alternative path to superhuman intelligence (which is the precondition for a singularity... NOT AI specifically) is IA: Intelligence Augmentation. I.e., work on making humans smarter than they are. It's reasonable to say that the Internet has achieved a lot of progress in that direction already. Get brain/computer interfaces working, and suddenly everyone will gain 10 effective IQ points through instant access to encyclopaedias and calculators. Etc, etc.
Computers can solve mathematical equations faster than humans, recall vast amounts of historical knowledge and information in fractions of a second and passably translate between many languages.
Are you saying that your definition of 'smart' wouldn't include many of the capabilities of Wolfram Alpha?
Article Summary: attack the very premise of achieving general intelligence by attacking narrow AI techniques such as machine learning and their obvious inadequacies, then acknowledge that they have not, in fact, accomplished general intelligence.
This is, typically, yet another diatribe that generalizes various "the Singularity" concepts and denigrates them in a single fell swoop.
I adhere to many of the arguments put forth by Ray Kurzweil, whom many dismiss as a crank. However, unlike most other self-declared prophets, Kurzweil has a solid track record of predictions going back at least 21 years. While of course not all of the things he has said have come to pass, or they have come to pass at somewhat different times, more often than not they have been spot on.
Of course, some nutcases really do subscribe to The Singularity as a faith. These people don't pay attention to the various sub-disciplines of biology, computer science, and materials science that are among the core fields generally related to The Singularity.
Folks like myself who think that something like TS is ahead of us base our reasoning on the rapid advances that humanity is making in the core fields, not on some religious belief. I'm an atheist and don't believe shit without proof.
It's easy to tear down, but much harder to make a valid argument. Nick Szabo is stating that something will never happen because it hasn't happened yet. I suggest he start listening to what's going on in science.
Confirmation bias. You're not looking hard enough at the stuff he says that didn't come true.
Even so, "past performance is no indicator of future success. You could have said the same thing about the goldilocks economy in 2001 but the notion that the economy could continue exponentially forever was pretty dumb. All I have to say about believing in exponential things, is, please, please consider the bacterial growth curve.
(that's a log graph) See, if you were an intelligent e coli during growth phase, you might notice that everything was exponential and hunky dory, and there was no end in sight.
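To make the E. coli point concrete, here's a minimal logistic-vs-exponential sketch (the carrying capacity, growth rate, and starting population are all illustrative):

```python
# During the growth phase a logistic curve is nearly indistinguishable
# from a pure exponential; the difference only shows up near the cap.
import math

K = 1e9    # carrying capacity (illustrative)
r = 1.0    # growth rate per unit time (illustrative)
N0 = 1.0   # starting population

def logistic(t):
    return K / (1 + (K / N0 - 1) * math.exp(-r * t))

def exponential(t):
    return N0 * math.exp(r * t)

for t in (5, 10, 15, 20, 25):
    print(f"t={t:>2}  logistic={logistic(t):.3e}  exponential={exponential(t):.3e}")
# Up to t~15 the two columns agree to within a fraction of a percent;
# by t~25 the exponential has blown past the cap while the logistic
# has flattened out against it.
```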
Luckily as humans we can fight the logistic death phase after the singularity 'period'. Fight it with more and more superior AIs; perhaps something like what Adams envisioned.
But we'll most likely lose to ourselves. It's a M.A.D. world.
I think prophecies (or predictions) are a long standing cultural hack. I remain extremely unimpressed by Kurzweil's predictions. Just because most people are uninterested in making predictions doesn't make them especially hard. Just about everything he's claimed as 'correct' was being researched in some form at the time he made the predictions, and was certainly in science fiction long before that.
So considering that, mostly he put dates on a random set of things coming to pass. Then he uses extremely generous criteria to define when something he predicted has occurred and essentially throws out the actual timeframe when doing so. And that was the only bit he really predicted in the first place.
Somehow it works on people though. Just go ahead and anoint him your High Priest, save yourself the confusion.
I think it would be religion to not believe that it is theoretically possible to build a device which emulates the human brain - sort of like the old belief that it was impossible to make 'organic' compounds out of compounds of 'inorganic' origin because they lacked the vital force.
Whether it will ever be possible to do it on a physically possible Turing machine in real time is perhaps slightly more debatable.
Whether or not we reach "The Singularity" doesn't matter much to me; I just want us to reach a point where there are hundreds of thousands of nanobots aiding our individual immune systems and preventing the degradation of our internal organs. Is that too much to ask? According to all the research we shouldn't be too far off from that, though probably not in my lifetime. Whether or not we reach an AI singularity doesn't matter to me, so long as we have our nanobots.
Hm. It appears Blogspot ate my comments. So I guess I'll just paste it here.
---
> The "for a time" bit is crucial. There is as Feynman said "plenty of room at the bottom" but it is by no means infinite given actually demonstrated physics. That means all growth curves that look exponential or more in the short run turn over and become S-curves or similar in the long run, unless we discover physics that we do not now know, as information and data processing under physics as we know it are limited by the number of particles we have access to, and that in turn can only increase in the long run by at most a cubic polynomial (and probably much less than that, since space is mostly empty).
Yes, but the fundamental limit is so ridiculously high that it might as well be infinite. Look at Seth Lloyd's bounds in http://arxiv.org/abs/quant-ph/9908043 . He can be off by many orders of magnitude and still Moore's law will have many doublings to go.
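The arithmetic behind "many doublings to go", using the flavour of bound in that paper (a system with energy E can perform at most about 2E/(πħ) elementary operations per second); the ~1e15 ops/s comparison point is my own assumption for a present-day petaflop-class machine:

```python
# How many doublings separate a petaflop-class machine from the
# physical limit Lloyd derives for 1 kg of matter.
import math

HBAR = 1.055e-34             # J*s
E = 1.0 * (3.0e8) ** 2       # rest-mass energy of 1 kg, ~9e16 J

ops_limit = 2 * E / (math.pi * HBAR)   # ~5e50 ops/s
ops_today = 1e15                       # assumed petaflop-class machine

doublings_left = math.log2(ops_limit / ops_today)
print(f"physical limit: ~{ops_limit:.1e} ops/s")
print(f"doublings left: ~{doublings_left:.0f}")
# ~120 doublings; at 18 months per doubling that is very roughly 180
# years of headroom before this particular bound binds.
```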
(Incidentally, the only quasi-Singulitarian I am aware of who has claimed there will be an actual completed infinity is Frank Tipler, and as I understand it, his model required certain parameters like a closed universe which have since been shown to not be the case.)
> As for "the Singularity" as a point past which we cannot predict, the stock market is by this definition an ongoing, rolling singularity, as are most aspects of the weather, and many quantum events, and many other aspects of our world and society. And futurists are notoriously bad at predicting the future anyway, so just what is supposed to be novel about an unpredictable future?
WHAT. We have plenty of models of all of those events. The stock market has many predictable features (long-term appreciation of x% a year and fat-tailed volatility for example), and weather has even more (notice we're debating the long-term effects of global warming in the range of a few degrees like 0-5, and not, I dunno, 0-1,000,000). Our models are much better than the stupidest possible max-entropy guess. We can predict a hell of a lot.
The point of Vinge's singularity is that we can't predict past the spike. Will there be humans afterwards? Will there be anything? Will world economic growth rates and population shoot upwards as in Hanson's upload model? Will there be a singleton? Will it simply be a bust and humanity go on much as it always has except with really neat cellphones? Max-ent is our best guess; if we want to do any better, then we need to actively intervene and make our prediction more likely to be right.
> Even if there was such a thing as a "general intelligence" the specialized machines would soundly beat it in the marketplace. It would be very far from a close contest.
And the humans? If there is a niche for humans, that same niche is available to the general intelligence, and since it can be self-improving and graft on the specialized machines better than humans ever will, it ought to dominate.
The economic extinction of humanity? Seems like a Singularity to me. (The economy was driven by human desires before, but what do the machines compete for when the humans are gone? I certainly can't predict it.)
> Nor does the human mind, as flexible as it is, exhibit much in the way of some universal general intelligence. Many machines and many other creatures are capable of sensory, information-processing, and output feats that the human mind is quite incapable of.
And pray, how do we know of these feats? How were these machines constructed? (Whether I kill you with my bare hands or by firing a bullet, you are still dead.)
5678:
> The null hypothesis would be: there does not exist an algorithm and a computer which achieves the goals of AGI. I await the proof, or a modicum of scientific evidence to support this.
Shouldn't the null hypothesis be the Copernican hypothesis? - there is nothing special about humans.
Every success of AI, every human function done in software, is additional Bayesian evidence that there is nothing special about humans. If there were something special, one ought to observe endless failure, much like trying to trisect the angle or prove the parallel axiom; but instead, one observes remarkably rapid progress. (Where was chemistry after its first 60 years? Feel free to set your starting point anywhere from the earliest Chinese alchemists to after Robert Boyle.)
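A toy numerical version of that Bayesian point, with entirely made-up prior and likelihoods just to show the direction of the update:

```python
# If "humans are special" makes any given AI capability less likely to
# be reproduced in software, each observed success shifts belief away
# from the hypothesis. All numbers here are illustrative.
p_special = 0.5                 # prior
p_success_if_special = 0.3      # assumed likelihood of an AI success
p_success_if_not = 0.8          #   under each hypothesis

for i in range(1, 6):
    num = p_success_if_special * p_special
    p_special = num / (num + p_success_if_not * (1 - p_special))
    print(f"after success #{i}: P(humans are special) = {p_special:.3f}")
# 0.273, 0.123, 0.050, 0.019, 0.007 -- each success erodes the hypothesis,
# and endless failures would push it the other way.
```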
Kevembuangga:
> More to the point, some singularitarians pretend to have a definition of "Universal Intelligence" which curiously enough isn't computable. :-)
If you have an actual argument, please feel free to give it and not just engage in mockery.
Were you aware that uncomputable algorithms are useful and can easily be made computable?
(You can solve the Halting Problem for a Turing machine with bounded memory, for example. And by Rice's theorem, every non-trivial static analysis like type-checking is attempting something undecidable in the general case, yet such analyses are still far from useless. When employing theorems, beware lest you prove something different from what you think you are proving.)
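The bounded-memory point is worth spelling out, since it's the crux of why "uncomputable in general" doesn't mean "useless in practice". A sketch (the configuration encoding, step function, and halt test are whatever your concrete machine supplies):

```python
# A machine with bounded memory has only finitely many configurations,
# so halting is decidable: simulate it, and if a configuration repeats
# it must loop forever.
def halts(initial_config, step, is_halted):
    seen = set()
    config = initial_config
    while not is_halted(config):
        if config in seen:        # revisited a configuration: infinite loop
            return False
        seen.add(config)
        config = step(config)
    return True

# A 3-bit counter that stops when it wraps around to 0: it halts.
print(halts(1, lambda c: (c + 1) % 8, lambda c: c == 0))   # True
# A machine that ping-pongs between states 0 and 1 and never reaches 2.
print(halts(0, lambda c: 1 - c, lambda c: c == 2))         # False
```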
I think of it as kind of an inverted religion. Instead of having faith that something magical happened in the past, a singularitarian has faith that something magical (or maybe terrifying) will happen in the future.
That someone would claim to "debunk" the entire Singularity concept without showing any indication that they've ever read anything written on the topic by Eliezer Yudkowsky is laughable (in particular, Szabo doesn't appear to really grok the difference between the time scales of exponential growth in Moore's law and the exponential growth we envision when we talk about an intelligence FOOM).
Yes, if what you mean by "Singularity" is "that thing that Kurzweil says will save us all, and it will happen by 2029 because Exponential Curves Are Awesome And Go On Forever", then sure, it's a religious and borderline psychotic idea.
The rest of us realize that exponentials don't last forever, that strong AI is a hard problem, that brain-scans won't solve the whole problem even if we did have the tech to do them, which we won't for quite a while, that the ML algos we have today are not strong enough, that even defining a metric to optimize against is extremely difficult, etc. We got it - we live in the real world, AI is really tough, and we haven't solved it yet. Further, we're mostly worried about the prospect of AI, rather than naively hopeful for some geek rapture, because self-improving AI is likely to turn out quite badly for us unless we're exceptionally careful.
"Faith" only enters into this when deciding whether you believe the probability that we can build self-improving AI any time soon is significant or not, and I'd argue that it's not so much faith as slightly-educated guessing (predicting the probability of a tech breakthrough is an extremely difficult thing to do, so sure, there are high margins of uncertainty). You may disagree as to whether that probability is 1% over the next 20 years or 1% over the next 100, but we're hardly talking about the odds of virgin-birth here, so I find it completely unfair to call the whole idea "religious"...frankly, either of those probabilities is, IMO, worth worrying about, given the stakes. And you're flat out delusional if you don't admit a 1% chance of strong AI within 100 years.
I was going to more specifically tackle the other claims in this post, but ultimately, it's not worth it. When it comes down to it, Szabo assumes that the main thing that would make strong AI "easy" (or at least relatively easy), the existence of a general algorithm for reasoning, doesn't exist. If you take that as gospel, then sure, it's going to be pretty difficult to construct AI. I can even see why he'd think that, having used genetic programming and other ML techniques and finding little success (IMO we haven't pushed the genetic programming paradigm anywhere near far enough, but that's a story for another day).
Plenty of people see things differently, and think that a general purpose intelligence algorithm is within our grasp; I'd argue that there's some loose evidence that human cognition largely owes itself to exactly such an algorithm, though in our case it's probably a bit more complex than we'll be able to come up with from scratch.