The Singularity is Far (scottaaronson.com)
37 points by __ on Sept 7, 2008 | 71 comments



a world quickly approaching its carrying capacity, exhausting its natural resources, ruining its oceans...

A great example of how a smart person, and a decent writer, can still drive an essay into inane irrelevancy when trying to wrap up with a snappy conclusion. No--the problems associated with agricultural runoff and AI tech are completely different, have completely different economic contexts, and should never be in the same article.

I'm surprised the last sentence wasn't "Vote for Obama!"


You took your smart, decent criticism and made it inane and irrelevant with an ad hominem attack as your snappy conclusion. You related the singularity to this year's US Presidential candidates, two topics that have completely different contexts.

Please tell me the irony was intentional... :-)


Actually, the author says this in the first paragraph:

"...nor do they obviate the need to address mundane matters such as war, poverty, disease, climate change, and helping Democrats win elections."

So I'd say that it was a fair statement by the commenter above.


Missed that. My mistake.


You seem to have missed one of the points of the article. If the singularity is possible, there is still the question of whether humanity will go extinct first. At some point, AI researchers are going to have trouble breathing Earth's atmosphere -- or do AI researchers not breathe air like the rest of us?


At some point, AI researchers are going to have trouble breathing Earth's atmosphere

Why might that be?


This essay is not very well written. The author's points are pretty hard to grasp, and it's even harder to see how they support his central point. I'll just go over them:

-No civilization has been more advanced than ours. There is no historical reason to claim that we are about to "return to normality".

+Predicting future technological development has (almost) never worked.

+We might in the future find out that Kurzweil's ideas were bad. -> History is overflowing with examples of this.

-"Suppose that all over the universe, civilizations arise and continue growing exponentially until they exhaust their planets’ resources and kill themselves out. [...] I wish reading the news each morning furnished me with more reasons not to be haunted by this vision of existence". -> Suppose instead that reality is just a dream, and you are the only conscious mind in existence. There is little reason to believe any of this. And (ad hominem alert) no doomsayer has ever been right, although the profession always has a comfortable number of practitioners.

-"AI hasn't made any progress". -> No one has ever sent anything into orbit, so it might be impossible.

-"There is a ceiling to computational expressive power. Get up to a certain threshold, and every machine can simulate every other one, albeit some slower and others faster". -> Correct, but the consequences of this are just speculation. My speculation: Being 1000 times faster gives you entirely different capabilities. What's the use of decoding an MP3 in anything less than real-time?

+"Yet while I believe that the latter kind of singularity is possible, I’m not at all convinced of Kurzweil’s thesis that it’s near". -> Yes, specific technological predictions have never been very successful. We will probably have some of these technologies by 2045, and also some we weren't able to think of.


Turing complete: "Get up to a certain threshold, and every machine can simulate every other one, albeit some slower and others faster".

This view has always fascinated me: if AI is possible, it's possible right now (in terms of hardware). What's missing is knowing how to do it (software inventions).

But it's true that faster hardware can open our eyes: decades ago, the inventor of a new way to use silicon to make CPUs faster said that once the extra silicon was available, the solution was obvious to him. But before then, no one had even tried to think of it. It was already possible before, but the greater power (more silicon) made it practical to think in those terms.
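
To make the universality point concrete, here is a toy sketch: a complete Turing machine simulator in a few lines of Python. The table below just inverts a bit string, but any machine's (state, symbol) table fits the same loop - only the speed differs.

    # Toy universal-simulation sketch: a complete Turing machine simulator.
    table = {                   # (state, symbol) -> (write, move, next_state)
        ("run", "0"): ("1", 1, "run"),
        ("run", "1"): ("0", 1, "run"),
        ("run", "_"): ("_", 0, "halt"),
    }

    tape, head, state = list("10110") + ["_"], 0, "run"
    while state != "halt":
        write, move, state = table[(state, tape[head])]
        tape[head] = write
        head += move
    print("".join(tape).rstrip("_"))   # -> 01001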


"What's missing knowing how to do it (software inventions)".

Not really; the scale of the hardware plays an important role. Consider that you could perform the processing needed to play Half Life 2 on an abacus. But even if the abacus wielder followed the commands exactly and emulated the x86 instructions with paper and pencil, would anything enlightening actually emerge from the exercise?

Probably not, since it would take a few thousand years (I'm handwaving here) to compute the contents of the first frame.


You're right, if the difference is that great. E.g. your handwaving of 1/50 sec vs. 4000 years: 4000 years is about 1.3*10^11 seconds, so the factor is 50 * 1.3*10^11 ≈ 6*10^12 - about 6 trillion, or roughly 13 orders of magnitude.
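
A quick sanity check of that arithmetic in Python, assuming a 50 fps frame rate and the 4000-year handwave:

    # Sanity check of the slowdown factor: one 1/50-second frame vs. 4000 years.
    SECONDS_PER_YEAR = 365.25 * 24 * 3600      # ~3.16e7
    frame_time = 1 / 50                        # seconds of computation per frame
    abacus_time = 4000 * SECONDS_PER_YEAR      # the handwaved abacus estimate
    print(f"{abacus_time / frame_time:.1e}")   # ~6.3e12: about 13 orders of magnitude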

I was thinking that 1 second's thought in a few months would be reasonable - but you are quite right, I don't have any real idea of the factor. Maybe that would be plausible if we found very efficient AI implementations, but the initial attempts at actually doing it will undoubtedly be quite inefficient. And the slowness of feedback is a huge factor in coding, as well....


Correct, but the consequences of this are just speculation. My speculation: Being 1000 times faster gives you entirely different capabilities. What's the use of decoding an MP3 in anything less than real-time?

That's correct. I'm sure flies and dogs are Turing machines as much as we are (if we are), yet we are considered to be much more intelligent than both of them. A computer whose intelligence stands to ours as ours stands to a dog's would be pretty impressive. But I think he's not talking about that, as he says: "The one notion I have real trouble with is that the AI-beings of the future would be no more comprehensible to us than we are to dogs." If the singularity mind were bigger than us but of the same nature, we would be able to compute it with pencil and paper, or in a computer, and analyze it to see how it works (very slowly, as a Turing machine).


If the singularity mind were bigger than us but of the same nature, we would be able to compute it with pencil and paper, or in a computer, and analyze it to see how it works (very slowly, as a Turing machine).

My impression is that we couldn't, just as we couldn't decode an audio stream that runs forever without real-time decoding. Once we "compute it" "very slowly," the AI will already have improved twice, tenfold, a thousandfold. It will be fruitless.

Not to mention, even if the AI is simply a Von Neumann machine with really impressive specs, what makes you think we'd be able to decode it? If you gave me a RAM chip, I wouldn't know what to do with it without an EE book. Now imagine a technology a thousand times more advanced.


What I mean: given enough time, we'd be able to analyze a specific computation (not all real-time computations of such a mind). However, our patience, our lives, and our universe are all finite, and that could set practical limits on the task. Say decoding signal X would take longer than the universe will last; then it's an impossible task in practice. But we should be able, in principle and with infinite time, to analyze any computation. (Well, we should first settle on a meaning for "analyze" in this context.)


I'm a singularitarian, but I think it's going to be 500-1000 years. I also think it's going to be a close call with the doomsday scenario. I took special note of the closing remarks:

"...before we transcend the human condition and upload our brains to computers, a reasonable first step might be to bring the 17th-century Enlightenment to the 98% of the world that still hasn’t gotten the message..."


Where do you get those numbers from? I don't see how you possibly could make any guesses about the state of the world so far into the future.

You just mean it is a hell of a lot longer than 20-30 years before we are there?


"I don't see how you possibly could make any guesses "

Really?

3

Okay. Let's try it again.

42

Looks to me like guessing is a pretty easy thing to do. Perhaps you were looking for some kind of formula? As if a piece of math would make you more comfortable about even 5 years from now?

I think you're taking the underlying logic to the singularity a little too literally. It's all general trends with general statements to back them up. Wonderful piece of speculation, which I agree with. But it ain't science. It ain't logic. Kurzweil just speculates using a lot of mathematics. I grok it. But I'm not selling my house to buy a bunch of EverReady batteries to power my nanobot Zoot suit any time soon.

It's a guess. Get over it. It's all guessing.


What I said: "I don't see how you possibly could make any guesses about the state of the world so far into the future."

You quoted as: "I don't see how you possibly could make any guesses "

Not the same... Let's keep this an honest debate, shall we?

---

No, on the contrary, I am not taking it literally but with a big grain of salt.

Of course there is guessing. But there are educated guesses and blurbs.

Your guess is pointless; it has no validity at all. You could just as well say "blahaba" because it is not based on anything (at least it doesn't seem to be).


I might just as well say "blahaba"? OK. Blahaba!

Somehow I feel like I'm in bizarro world here.

It's a guess. There is nothing to debate. And no, it's not just noise. It's a guess. We all guess. Come on, you can guess too. Just try it. It's fun.

If I had wanted to give an educated guess, I'd have said so. (And anybody who tells you they can give you an educated guess about decades in the future is guilty of first-degree BS.) I'm very happy with guessing about things. I like guessing, and I plan to continue guessing as much as possible in the future. It is not pointless or a waste of time. I like it.

So there. (grin)


OK, guess I got hung up on something pointless. I'll keep guessing too. Peace.


The essay seems directed against a straw man. The author implies that because people think a singularity is near they are somehow expending energy to bring it to fruition instead of spending energy on what he feels to be more important problems.

He would have a valid target if he were only talking about a small number of people involved in projects like the Singularity Institute, but he seems to be aiming wider. He even acknowledges that the people working at the institute are justified in the same way people worrying about asteroid collisions are justified.

So who is he criticizing, then? Who are these people who are wasting energy chasing a singularity instead of solving modern problems? The only other targets I can think of are scientists and engineers moving their respective fields up the exponential, but they are doing their respective work precisely because of current problems in the world, not because of some far off possibility.


I liked this description of our current world: "I see a world that really did change dramatically over the last century, but where progress on many fronts (like transportation and energy) seems to have slowed down rather than sped up; a world quickly approaching its carrying capacity, exhausting its natural resources, ruining its oceans, and supercharging its climate; a world where technology is often powerless to solve the most basic problems, millions continue to die for trivial reasons, and democracy isn’t even clearly winning out over despotism; a world that finally has a communications network with a decent search engine but that still hasn’t emerged from the tribalism and ignorance of the Pleistocene." We are a world in great deficit, except for: a) the Internet, b) Google. Wow!


The first half of the last century was defined by physics, the latter by chemistry. The first part of this century appears to be seeing technological advances at similar rates in biology. We're still making headway in physics and chemistry, but it's in the fields of genetics, stem cell research and so on where we're doing things that were unthinkable less than two decades ago. Whilst I agree that the problems with AI are more due to a lack of knowledge of how to implement rather than a limitation of current technology (i.e. our approach sucks rather than the tech), I believe that our singularity will arrive when we stop having to think about how to translate the carbon wetware to silicon hardware.


Personally I believe the singularity will never happen.

For one reason: The entire premise of the singularity is that humans will be able to invent a machine that is smarter than them. Or at the very least invent a machine that can invent another machine smarter than itself.

I think that premise is simply wrong. I think it's impossible.

Let's assume you give the smartest people on earth an unlimited budget and computing power. None of them will be able to even propose an idea for a working AI.

No one has any idea how to program an AI - no matter the computing speed.

Humans are not getting smarter. If we can't even propose an idea for an AI today - why would you think we can do one tomorrow?

One argument is basically evolutionary design - let the machine invent itself. The big problem with that is no one even knows how to create an algorithm to measure smartness! If you can't create a fitness function, you can never create an evolutionary design.
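
To make that concrete, here's a toy sketch in Python. The target string is a stand-in for "smartness" - the point is that the whole method hinges on fitness(), which nobody knows how to write for intelligence:

    import random

    TARGET = "intelligence"          # toy stand-in for a measure of smartness
    ALPHABET = "abcdefghijklmnopqrstuvwxyz"

    def fitness(candidate):
        # The crux: without a scoring function, evolution has no direction.
        return sum(a == b for a, b in zip(candidate, TARGET))

    def mutate(s, rate=0.1):
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in s)

    population = ["".join(random.choices(ALPHABET, k=len(TARGET)))
                  for _ in range(100)]
    for generation in range(2000):
        population.sort(key=fitness, reverse=True)
        if population[0] == TARGET:
            break
        survivors = population[:20]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(80)]
    print(generation, population[0])

Delete fitness() and the loop is just random shuffling - which is the point above.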

So don't wait for the singularity - it's not coming. Work on upgrading humanity, not computers.


Humans are not getting smarter. If we can't even propose an idea for an AI today

That's kind of silly. First of all, the Flynn Effect (http://en.wikipedia.org/wiki/Flynn_Effect) says we are getting smarter in a very real sense.

Second, we stand on the shoulders of giants. We each get to start where the greatest minds before us ended. Today, we're taught what great men had to struggle for 50 years ago - so each new great man will get a little further.

A hundred years ago we couldn't propose an idea for landing a man on the moon - yet it was done 40 years ago.

Who knows what another 50 years will bring?


We each get to start where the greatest minds before us ended.

Disagree. Pretty soon, you're going to have to spend 40 years in specialized schooling just to even try to understand where the giants before us went, let alone ended. Human knowledge only seems sequential, but I don't think it is. The fact that archeologists are continually surprised by how advanced the "ancients" were shows how much knowledge humanity has lost and reinvented (or in some cases, not).


Not if you have silicon neurons that can process information a hundred times faster (since they use electromagnetic signals, not chemical agents, to pass information)! ;-)

But I beg the question...

EDIT: I sort of disagree with you. I'm in graduate school right now, and 90% of a modern mathematician's life is spent on keeping up with the progress of others, particularly of the last 50 years. That is true. However, over time, we also filter out the unimportant stuff. While automorphic forms may be an important field today so that we can bring in fresh minds to try to tackle the Langlands Program, once that is solved there won't be as much of an emphasis. I knew "more" calculus than Isaac Newton when I was 12, in the sense that I had learned things like Stokes' theorem that he had never known about; but Newton's knowledge was much more of a mess, as well. The stuff you learn from textbooks is not how researchers originally discovered things. Think of the analogy of a fresh startup codebase that looks horrible, but becomes very polished and clean after several refactorings; it may take weeks to introduce a person to the former codebase, but for the latter a good one-hour tutorial may suffice.


You nailed it with the code analogy; human knowledge is continually refactored and slowly but surely increased with each successive generation. To say we aren't getting smarter is absurd; we are getting both smarter and more knowledgeable at an ever increasing rate.


You are assuming that the required knowledge can be presented in a modular way, to reduce its complexity.

Knowledge can only be simplified so much... what if the knowledge required for AI, after being made as simple as possible, is still more complex than the most gifted human being can understand?

I like to think that the ultimate nature of reality is simple and beautiful. But all I can be sure of is that the things that I can see are simple enough for me to grasp.


We've already designed, or grown might be a more accurate description, working (most of the time) systems which defy understanding [in full] by the most gifted humans. See the world financial system, our national electricity grid, Windows, Wikipedia, etc. etc. We may not be able to design AI, but we are going to develop it.


We can design/grow some things that we cannot understand.

Does it follow that we can therefore design/grow all things that we cannot understand?


Well the thing is, the code analogy works really well for things like algorithms, math, and factual knowledge. Really, the things you are talking about that cannot be modularized are things that require large amounts of training. However, the training algorithm itself may be able to be modularized and encoded, thus enabling the AI to learn these things. This is the goal of machine learning.
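
A minimal sketch of that idea, assuming a toy perceptron learning OR from examples: the training rule is written once, and the "knowledge" (the weights) comes from data rather than being hand-coded.

    # Encode the training algorithm, not the knowledge: a perceptron learns OR.
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    w, b = [0.0, 0.0], 0.0
    for _ in range(20):                       # a few passes over the examples
        for (x1, x2), target in data:
            out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
            err = target - out
            w[0] += 0.1 * err * x1
            w[1] += 0.1 * err * x2
            b += 0.1 * err
    print(w, b)   # the weights now implement OR; nobody coded the rule itself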


A training algorithm selects a hypothesis from a hypothesis space.

What happens if the hypothesis space does not include the true hypothesis?

Defining the hypothesis space is tricky - though the training/search part is also tricky :-).


>A hundred years ago we couldn't propose an idea for landing a man on the moon - yet it was done 40 years ago.

What!?! That is not true. How about the book From the Earth to the Moon, written in 1865? And the ideas there were correct! They couldn't do it, but they could imagine it.

Even 2000 years ago you could imagine throwing a rock really really high and getting to the moon. Again, you couldn't do it, but you could imagine it.

No one has any ideas for how to implement AI - none. It's not a matter of not being able to execute the ideas - there aren't any.

And BTW the Flynn Effect has stopped.


the Flynn Effect (http://en.wikipedia.org/wiki/Flynn_Effect) says we are getting smarter in a very real sense.

Are Flynn-Effect gains g gains?


So don't wait for the singularity - it's not coming. Work on upgrading humanity, not computers.

One of Kurzweil's really strong points was to treat "AI" as a very loose term. That is, it could for example mean replacing every human neuron with a manmade (e.g., silicon) counterpart. This has already been done on a small scale in rats (I can't find the reference, but it's in his book).

"Upgrading humanity" IS artificial intelligence if you take artificial to be functionally equivalent to manmade: man has "upgraded" himself, and hence created something artificially intelligent.


There are many ways to invent. Intentional design is just one of them, and so far the least effective one.

Many great discoveries in fields like medicine and biology were by accident and enlightened stabs in the dark, not by careful prediction, modeling and subsequent measurement.

Let's assume you give the smartest people on earth an unlimited budget and computing power. None of them will be able to even propose an idea for a working AI.

If you had given doctors around the 1920s an infinite budget and armies of test subjects, they would still not have been able to even propose an idea for a working wide-spectrum anti-bacterial pill to vanquish most of that age's deadliest diseases. Yet, by accident, they stumbled on penicillin, and did beat out all those diseases.

Accident is a powerful force.

So don't wait for the singularity - it's not coming. Work on upgrading humanity, not computers.

The singularity doesn't have to come through pure-silicon computing. If we enhance human intelligence enough, we get a similar result.


You missed the point with your example of the anti-bacterial pill.

It's not the execution that is missing with AI, it's the ideas. In 1920 you most certainly could come up with the idea of killing bacteria by eating something - people ate things every day! Sure, they had no idea if it would work, or why, or what to eat. But the basic idea - eat something and make the illness go away - was there.

There are no ideas in AI. AI researchers these days are working on sub-problems - vision recognition, speech, pretending to reply to someone - but none of them have any ideas on how to make AI work.

They just hope that by putting all these things together something will come out - and they will probably be able to make a robot that is as smart as a dog, for example.

But they have no ideas for how to program creativity - no one does.


I'm afraid you're the one who's missing the point. People in the 1920s weren't used to eating something and making diseases go away. Diseases were accepted as a fact of life (or rather death), and if you had told someone around that time that within a few decades they'd be able to swallow a little pill and cure most of the deadliest diseases of that time, they'd have laughed in your face and considered you a hopeless optimist.

Imagine someone came up to you today and suggested that in 20 years' time you could get a jab that will protect you from cancer, AIDS, degenerative diseases, heart disease, cholesterol problems, baldness, and erectile dysfunction all at once - you'd laugh at them and call them a spammer. Yet the biotech and nanotech revolutions are just around the corner and who knows, they might bring just that, though anyone who believes that they will is indeed very optimistic.

AI researchers might be working on sub-problems, but who's to say that they won't stumble on a bigger solution while working on those sub problems? And, again, you're missing the point even more by failing to see that AI is not the only path to a singularity. Enhancing human intelligence can be just as effective. If we create "trans-humans" who are vastly more intelligent than we are, we still get a singularity. We can do that biologically, cybernetically... etc.


Are you not reading what I wrote? You yourself just suggested a method of curing all those things you listed. So obviously it's possible to imagine them. You don't know how exactly, but you have a starting point: inject some sort of protective something that will do it.

Why do you think I would laugh at them? It seems quite plausible to me.

You are totally wrong about people in 1920. People even 2000 years ago were eating various plants for their medicinal effects. Some of the books from that time (or a little later) still exist! And some of the remedies even work (willow bark for pain, for example).

That does not exist with AI. Give me any idea at all of a programming method (besides random permutation) that will, eventually, even with a lot of work, result in AI.

You don't realize just how much harder creating creativity is than anything else humanity has ever done. Create a mechanism that will create something new that has never existed before? (And is not simply a reshuffling of other things.) Computers cannot do that, and I don't think they will ever be able to. (Again, except by trying everything randomly.) Even humans rarely do that - and the ones that do have no idea how they do it, so what makes you think we can program such a thing?

Read some quotes from Mark Kac about "magician" geniuses and you'll start to understand the problem. No one has any idea how they make their leaps. A regular genius simply cannot do what they do. So why do you think we can program a computer to do such a thing, when we cannot even understand how it's done in the first place?

AI is not coming. And on top of all that, computing speed has hit a wall. The entire premise depends on computers getting faster and faster. They aren't right now. Intel cannot make CPUs run any faster than they are right now. Instead they are adding cores.

Now I expect that to be solved at some point, but it means the exponential growth curve broke. And that means no more singularity.

That's something I don't see much talk of. But it's over, there is no more exponential growth in computing speed.


I agree, in that I think we can't make a machine that is fundamentally smarter than us.

But I think it's possible to create a machine that is as smart as a human. Now, since some human beings seem to be smarter than others, I think it's possible to build a machine that is closer to the "smart" end. And, of course, with the help of such machines, we will be able to do this better and better (as with knowledge and technology in general). Therefore, we could build many Einstein-level intelligences (or a bit better, since he wasn't perfect).

Behind this, my key point is that "more intelligence" is an illusion. Einstein was a function of his times (there was the unexplained lack of motion through the ether, for one thing) and, like Newton, saw further than most because he stood on the shoulders of giants. He wasn't fundamentally smarter than you - though he might have been able to keep track of more ideas than you, and track their relations to other ideas, and he might have been more free of convention (the humility to admit he didn't know, and the arrogance that you don't either), and more dedicated than you. I.e. he was probably smarter, but not fundamentally.

Being able to accept that you don't know, and therefore to be open to see, is the important thing. Not intelligence.


So if we can have something that is just like the person who invents an equal-to-humanity intelligence, but operating a hundred times faster, with perfect recall, and able to shut down senses or arbitrarily reorder goals, you don't think we could tell this being "Do what your meatspace predecessor did, approximately" and have something even faster, in less time than the original took? And that this process couldn't be repeated?


We could still build faster Einsteins.


Yes, and more of them.

But they're not fundamentally smarter than us; it's just another technology (which is a great thing).


And cheaper Einsteins.


I don't agree we can't build a machine that out-thinks us, because the early iterations at least would be specialist self-improvement tools - NOT general intelligence. Glorified whole-program optimizers.

AI needs to be built from a theory of mind and a theory of intelligence - but there is reason for hope, as a combination of neuroscience and mathematics (cf: AIXI) is attacking this question. AI also needs to be built from a theory of "friendliness" (goals compatible with humanity) and there are people (cf: SIAI) working on this.

Your message strikes me as having the "mankind will never achieve heavier-than-air flight" quality. (As opposed to the "mankind will never exceed lightspeed" quality - there is no fundamental physics in the way.)


...there is no fundamental physics in the way

He didn't say that: http://news.ycombinator.com/item?id=297576


You don't have to design the machine. Once you have enough computer power, you can use genetic algorithms (or another method).

Con: It can be slow.

Pro: It worked at least once.


Ugh, no. You would just create a super-parasite. Evolution is parsimonious. You would need to know how to build a mind the simple way in order to direct evolution down just that one path that leads to minds - and meanwhile you'd have cruelly killed millions of by-then thinking feeling persons who didn't quite measure up. And who knows what other tendencies you'd have created aside from intelligence? The whole thing would be a nightmare.


That only works if you have a fitness function. No one does.


No one has a fitness function for a search interface. But Google does a lot of A/B tests and the interface evolves.

I think that they are not using random mutations in the code. They design the "mutations". But the important point is that they don't have a theoretical fitness function.


Of course they have a fitness function! What do you think an A/B test IS? It's a fitness function!
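
A minimal sketch of the point (the click probabilities below are invented for illustration): the fitness function is an empirical measurement, not a theory.

    import random

    # A/B test as an empirical fitness function: "fitness" is measured
    # click-through rate, not a formula derived in advance.
    def measured_ctr(variant, users=10000):
        click_prob = {"A": 0.030, "B": 0.034}[variant]   # made-up user behavior
        clicks = sum(random.random() < click_prob for _ in range(users))
        return clicks / users

    scores = {v: measured_ctr(v) for v in ("A", "B")}
    print(scores, "-> ship", max(scores, key=scores.get))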


So don't wait for the singularity - it's not coming. Work on upgrading humanity, not computers.

I thought that was the entire point.


Do you believe in evolution? The process of human evolution is a tale of smarter and smarter animals coming from less intelligent ones.

Also, humans can invent chess algorithms that can beat the best chess player in the world.


No, the process of human evolution is a tale of organisms adapting to their environment. There is no guarantee that human beings will evolve into something more intelligent.

If the average person is granted the ability to manufacture their own reality without any need for earthy concerns, one might think evolutionary forces would push people into becoming marginally intelligent beings focused on decadence.


No, there is no guarantee. However, it is safe to say that in the past, on average, the human evolutionary path involved an increase in intelligence. Whether that is 'the tale' I will leave to poetic license. I did this to disprove the parent post's notion that beings cannot produce beings of greater intelligence. To disprove a general statement you only have to show one contrary instance.

Here is a link you may find useful

http://en.wikipedia.org/wiki/Logic


Back at ya: http://en.wikipedia.org/wiki/Evolution

Sexual activity aside, human creative activity did not 'produce beings of greater intelligence'.


> Sexual activity aside

That was my point. That there exists a process (evolution) where greater intelligence can spring from lesser intelligence. I agree the mechanism is quite different but it demonstrates there is no fundamental obstacle to the process of increasing intelligence.

Perhaps this is one of the mental leaps that creationists are not able/willing to make. Their belief in an ultimate intelligence producing lesser intelligences is at least logically consistent.


>Also, humans can invent chess algorithms that can beat the best chess player in the world.

Except the machine is extremely stupid! It doesn't have any idea how to play chess; it just tries every possibility, one after the other, until it finds a good one.

That's OK for chess - it has well-defined rules and, more importantly, a fitness function. That's where all the work is: defining a function that tells you if your permutation was a good one.
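
A minimal sketch of the pattern behind such engines - brute-force minimax over a game tree, where a toy nested list stands in for chess and the hand-written evaluate() is the "function that tells you if it was good":

    # Minimax: enumerate moves, recurse, let the evaluation function decide.
    def minimax(node, maximizing, children, evaluate):
        kids = children(node)
        if not kids:                       # leaf: ask the evaluation function
            return evaluate(node)
        scores = [minimax(k, not maximizing, children, evaluate) for k in kids]
        return max(scores) if maximizing else min(scores)

    # Toy "game tree": nested lists; integer leaves are position scores.
    tree = [[3, [5, 1]], [6, [0, 9]]]
    best = minimax(tree, True,
                   children=lambda n: n if isinstance(n, list) else [],
                   evaluate=lambda n: n)
    print(best)   # 6: best score reachable against a minimizing opponent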

But it doesn't work for AI, because there are no fitness functions there.


> Except the machine is extremely stupid! It doesn't have any idea how to play chess; it just tries every possibility, one after the other, until it finds a good one.

It is also possible that each individual neuron in the brain is stupid! Neurons don't have any idea of whatever task is being processed; human intelligence is just the result of random connections being made and reinforced by positive feedback.

> But it doesn't work for AI, because there are no fitness functions there.

I think it would be accurate to say that the fitness functions are more difficult to define, and perhaps they are not available on such a constant basis. But if you are saying that there is no test for AI, you are basically defining AI as an impossibility.

In general I think that whenever a decision-making process is automated it tends to lose its mystique, and people then view it less as intelligence and more as a dumb process. Yes, chess is just a baby step, but it does demonstrate that something humans used to think of as a significant indicator of intelligence can be achieved.


>But if you are saying that there is no test for AI, you are just basically defining AI as an impossibility.

What I'm saying is that to evaluate AI you need I (artificial or otherwise).

So I don't think random permutations will ever get you there, because if we ever manage to write the fitness function, we are almost there anyway. (And no one knows how to do that.)

Maybe one day all of humanity will take part in the great AI race, and every person will help evaluate AIs to find the best one. I'm not sure it'll be enough, though; you simply need far, far too many permutations for it to ever work.

It would be interesting to use, say, WoW as a test bed for AI: make sure it always joins a team and see how good you can make it (emphasize the helping-other-members-of-the-team part). Then send it to Second Life.

But I still don't think it'll ever be enough. In particular, it'll never be able to design for creativity.


You do know about the Turing test, right?

With all the searching and interactions on the internet these days I think there will be plenty of feedback from humans to develop AI if it can be done.


I know about the Turing test, but it will never be able to test for creativity. Only basic speech - and that's not enough to launch the singularity, which requires a machine that can invent.


It isn't just basic speech. It is whatever is required to convince the examiner that the subject is a human. You could ask the subject for some creative input on a particular topic. That is allowed under the Turing test.

I think that creativity in solving a task is actually quite amenable to automation for a Turing-passing program. Generate a whole bunch of 'ideas' by combining existing concepts in different ways and then test them for usefulness. (Now imagine doing this with a million different Turing processes on your desktop computer - I think there would be some pretty creative ideas generated.) If the program can't tell if an idea is good or at least OK then it is never going to pass the Turing test.
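
A toy sketch of that generate-and-test loop; the concept list and the usefulness() scorer here are made up, and writing a real scorer is of course the hard part:

    import itertools, random

    # Hypothetical "idea generator": combine existing concepts, score, keep best.
    concepts = ["solar", "kite", "battery", "algae", "drone", "membrane"]

    def usefulness(idea):
        # Made-up stand-in; a Turing-passing program would need a real one.
        return random.random()

    ideas = list(itertools.combinations(concepts, 2))
    ideas.sort(key=usefulness, reverse=True)
    print(ideas[:3])      # the "most promising" combinations under the scorer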

If you are talking about creative in the sense of 'amuses humans' like music or art then the computer is of course at a huge, unfair and pointless disadvantage.

If a program can complete the Turing test then it is likely it will already be a lot smarter than the examiner.


I think his chief objection is the timetable laid out. He is not saying he doesn't believe in evolution. He disagrees with someone's extrapolation, that is all.


I was replying to a comment, not the parent post. The one where the guy proposed that it would be impossible for humans to create anything smarter than ourselves. Sometimes the indentation on comments can be a little tough to keep up with.


The first singularity has already happened - it happened the moment human beings started to create perpetuating social structures. A corporation, a society, a nation, a community are all instances of gestalt intelligence.


I misread it as "The Singularity is Fair" at first. I am slightly disappointed. An article about the distributional aspects of ever-increasing technological progress would have been interesting.


I studied AI at University, and it was mostly about search.

I learned Prolog, which was a real eye-opener about functions that can find paths backwards, a kind of reverse search.

I also learned about Bayes, and have come up with an algorithm (myself) which is similar to Bayes, but it's top secret at the moment.
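
For reference, the non-secret part - Bayes' rule itself - fits in a few lines (the spam-filter numbers below are invented for illustration):

    # P(hypothesis | evidence) from prior, hit rate, and false-positive rate.
    def posterior(prior, likelihood, false_positive_rate):
        evidence = likelihood * prior + false_positive_rate * (1 - prior)
        return likelihood * prior / evidence

    # P(spam) = 0.2; "free" appears in 60% of spam and 5% of legit mail.
    print(posterior(0.2, 0.60, 0.05))   # 0.75 = P(spam | contains "free")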


There is a bit of a renaissance going on within some arcane fields of AI. There is work underway to try to create a real thinking machine, with the idea that we have come a long way in our understanding of neuroscience, etc.


How about, "The Singularity does a Grover Impression?"

"Near!...Far!...Near!...Far!..."


Have none of you seen The Thirteenth Floor? We are already living in the singularity.





