The Future Arrives in 2045 (theconnectivist.com)
14 points by hackerjam on Dec 9, 2014 | 19 comments



The shift from current statistical modelling to sentient software is not a matter of degree, but a difference in kind. Nothing we have now is even close to being able to perceive and think in the way that a human mind does. We won't get there through incremental progress, better hardware, or clever algorithms. The shift from "applied statistics" to "Commander Data" will be sudden and unexpected, as big as or bigger than any technological change in human history that I can think of. We couldn't put a date on that shift any more than Henry Ford could have predicted the common adoption of driverless cars, had they been explained to him in 1908.

A current deep learning neural network cluster and "Hard AI" seem similar, but they really bear no relation to each other. It's like comparing a bird to an airplane: both fly and have wings, but the ability to make one isn't related to making the other. Right now we're building better birds; true AI is a stealth bomber.

People say that whenever computers achieve a goal, then the goal is no longer considered AI. For example people 20 years ago didn't think that computers could play chess or compete on Jeopardy, but now that they've done those things they aren't thought of as impressive demonstrations of intelligence any more. There's some truth to this, but for the majority of people the goalposts have never moved. They associate the term "artificial intelligence" with an artificial mind that functions in the same way that a human mind does, "Hard AI" in the tradition of Asimov and other science fiction writers. We're as far away from that as we've ever been. It could happen in 2045 or 4045, or anywhere in between. No evidence exists that we're getting closer and there's no reasonable way to predict when what we can't imagine will become reality.


I don't think it's possible to just "think up" true AI.

I think the easiest way to achieve it would be to first upload an actual human brain to a computer environment - by slicing a brain into molecule-thin slices, scanning them, and then observing the neural responses of the resulting simulated brain.


No, it doesn't. Kurzweil has been making this claim for the last decade based solely on the increasing speed of computers, ignoring the fact that we don't yet have any clue how general intelligence actually works. It doesn't matter how fast our computers are if we don't know what algorithms will give rise to "intelligence", and we've made virtually no headway in this field.

The examples of "AI" cited in the article are remarkable, but are still extremely specific or not really intelligence at all. Siri, etc, are nothing more than text parsers that give a canned set of responses. The work on neural networks is interesting but still, at best, only a small component of actual AI. (Note: I'm not going to define an "actual AI". Yes, I know we keep moving the goalposts on what that would be. I'll know it when I see it, and so will you).

I'm not saying it won't happen, but it will require a type of conceptual breakthrough that we simply haven't had yet. To hype "the singularity is nigh!" at this point is dishonest, trivializes the real problems and sets false expectations for industry and policy-makers.


His argument, if you read his work, isn't based solely on the increasing speed of computers.

His idea is that progress is exponential in all of the requisite areas. That includes algorithms, hardware, biology, neuroscience, and more.

> Siri, etc, are nothing more than text parsers that give a canned set of responses

We say the same about everything once we can do it with computers, because progress doesn't come magically. It's incremental. Yet much of what we have already is what used to be "science fiction" and, before that, "magic". I'm sure when AIs are passing the Turing test, we won't think any more of computers, but less of the Turing test.

This isn't an argument against Kurzweil's predictions, it's just moving the goalposts as you say. If strong AI comes, we'll move them all the way there and maybe even a bit past.

I disagree that we'll know it when we see it though. I think we'll deny it until we die and the new generation grows up in a world in which computers have rights.

> I'm not saying it won't happen, but it will require a type of conceptual breakthrough that we simply haven't had yet.

Finally, Kurzweil's arguments do not require a breakthrough as a premise. Kurzweil's idea is to scan the brain at the neuronal level, then brute-force its simulation at the biochemical level. It's straightforward extrapolation.

You could propose that we will come across some roadblock in our exponential progress towards that goal, but in the absence of one, the null hypothesis is that progress will continue as it has. Then indeed, the singularity is nigh.
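For concreteness, here's a minimal sketch of that extrapolation. Every number in it (the starting compute level, the doubling time, and the brain-simulation targets) is an illustrative assumption on my part, roughly in the spirit of Kurzweil's ballpark estimates, not anything from the article:

    from math import log2

    # Rough sketch of the "null hypothesis" extrapolation.
    # Every constant here is an illustrative assumption, not an established fact.
    BASE_YEAR = 2014
    BASE_FLOPS = 1e15          # assumed: order of today's largest supercomputers
    DOUBLING_TIME_YEARS = 1.5  # assumed: exponential compute growth simply continues

    FUNCTIONAL_SIM = 1e16      # assumed target: "functional equivalent" of a brain
    NEURON_LEVEL_SIM = 1e19    # assumed target: neuron/synapse-level simulation

    def year_reached(target_flops):
        """Year at which compute hits the target, if the doubling trend holds."""
        doublings_needed = log2(target_flops / BASE_FLOPS)
        return BASE_YEAR + doublings_needed * DOUBLING_TIME_YEARS

    print("Functional simulation:   ~%.0f" % year_reached(FUNCTIONAL_SIM))
    print("Neuron-level simulation: ~%.0f" % year_reached(NEURON_LEVEL_SIM))

On those assumptions the crossover lands comfortably before 2045; the real dispute in this thread is whether raw compute is the binding constraint at all.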


Well, I'm skeptical of Kurzweil's prediction too, but I see where he's coming from. His prediction is solely based on computing power because he also sees general AI as a pure brute force problem. It's the same approach he took to predict that a computer would beat a human in chess before 2000 - and he got that one right.


Another one he got right was that we wouldn't think more of computers when it happened -- we would simply think less of chess.


Yes, and that's precisely why he's wrong.


If intelligence is just an emergent property of a complex-enough system, we won't need more than brute force. That seems a very reasonable hypothesis considering the wetware prior art, so I wouldn't be so quick to say he's wrong.

What I'm skeptical about are the predictions that we will reach something more intelligent than humans (how would we even quantify intelligence?), that it will improve our culture, and other sci-fi stuff...

Even if those predictions turn out to be wrong, brute force could still reach something interesting: not human-like intelligence, but something new and complementary.


If we look back 30 years to 1984 and try to estimate how much progress we've made, that might give us some indication of how much change we'll see in the next 30 years.

I'd argue, however, that since the rate of change is accelerating, maybe we should actually compare to 60 years ago. In 1954 we were practically in the stone ages.
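A back-of-the-envelope way to see what an accelerating rate of change does to that comparison (the doubling time here is an assumed, purely illustrative figure):

    from math import log

    # If the rate of progress doubles every D years, measure total change in
    # "years of change at today's rate". Purely illustrative numbers.
    D = 15.0   # assumed doubling time of the rate of progress
    T = 30     # window of years, looking forward and backward

    # Integrating the rate r(t) = 2**(t / D) over each window:
    progress_next_30 = (2 ** (T / D) - 1) * D / log(2)    # about 65 "today-years"
    progress_last_30 = (1 - 2 ** (-T / D)) * D / log(2)   # about 16 "today-years"

    print("Next 30 years ~ %.0f years of change at today's rate" % progress_next_30)
    print("Last 30 years ~ %.0f years of change at today's rate" % progress_last_30)

On those made-up numbers the next 30 years pack in roughly four times the change of the last 30, which is the intuition behind reaching back to 1954 rather than 1984.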


Well, I added a reminder into my Google Calendar. Whatever happens in AI, if I make it to 1/1/2045 I'll be freaked out at midnight by a long-forgotten "Future Arrives" notification popping up in my ocular implant or whatever.


Assuming Google Calendar still exists in 31 years :P


I don't think anyone wants actual artificial intelligence. What they want is a tool that interacts with them in a way that seems intelligent, but always does what it is told.

Intelligent things don't always do what they're told; that's what makes them intelligent. If you told someone to jump off a cliff, and they immediately did it, would you think "wow, that person was very intelligent"? Of course not. What if you tried to push them? I would expect an intelligent person to fight back.

Now replace that person with a robot. Do we really want a robot that will refuse to follow our commands? Do we really want a robot that will fight back against us? Even Asimov put self-preservation #3 in the list of rules, after following human commands. But I challenge you to think of an intelligent being that is not dangerous in some way, when threatened. I propose that this attribute is not separable from intelligence.

It seems to me that it is impossible to conceive of a truly intelligent artificial being without considering it dangerous. Bumblebees are dangerous; dogs are dangerous; people are certainly dangerous. But who is working on creating robots that are designed from day one to be dangerous to humans? I can't remember ever hearing of such a research program.

And I don't think that just "happens" when algorithms get complex enough. Not when the algorithms and even hardware are designed and built from an inherent assumption of obedience and compliance.


I mean no disrespect to Ray; he's a smart person and all, but I don't think we know how to define what qualifies as hard AI, let alone how to get there.

This article seems overly optimistic.


I don't think that machines in 2045 will have become intelligent in the sense of having free will (whatever that means) and being able to think critically. What I can imagine is that most people will have become dumb enough that they can't tell the difference anymore. Just look at how many people already take marketing claims at face value, or use Facebook and Google's filter bubbles without the slightest idea of the consequences.


2045 is an estimate based on our current pace. Couldn't we take steps to accelerate progress and cut it by 10 or 15 years? In the 1960s, for example, we put the first human into space and reached the moon, all within a decade.


In the last century, has anyone ever accurately predicted the future 30 years in advance?


I think there are quite a few examples, but those same people also made horrible predictions at the same time. What really matters is your "hit ratio" - your number of hits vs misses.


I think we were supposed to have flying cars for Y2k ... instead we had the "famous" bug ;-)


People really are insane.



