He wrote this about a year before AlphaGo beat Lee Sedol, a result that came roughly a decade before most experts predicted. The singularity in the mirror may be much closer than it appears, and everything he writes tells me he knows little about AI.

This piece is obsolete, and it is full of sloppy thinking besides. Calling corporations superhuman AIs doesn't clarify the problem; it introduces oranges into a discussion of apples. And even on this irrelevant tangent, he is wrong. As we so often see in government and the private sector, many of us can be dumber than a few of us. Collective decision-making has pernicious emergent properties, which means we should consider many corporations subhuman AIs.

> The most successful and profitable AI in the world is almost certainly Google Search.

This, too, is false. Google's ads business might qualify as the most lucrative, but other parts of Google beyond search, notably DeepMind, are pushing AI forward far more successfully. Autonomous cars and drones are two other very successful examples of tech built on AI.

The fact that he even brings up Watson's Jeopardy win in a discussion of AI shows that he knows little about the state of the art, which is light years ahead of IBM's question-answering system.

Ethical issues will not prevent nation-states and corporations from continuing to pursue the AI arms race.

And there are huge incentives to be the one who gets this right, which is why governments and the private sector alike are making enormous investments in AI. Google's DeepMind plans to more than double in size, from 400 to 1,000 people, about half of them AI researchers. DeepMind is obviously a research powerhouse, and that expansion alone must cost hundreds of millions of dollars on top of the £400M acquisition price.

AI advances hand in hand with hardware capacity. Distributed computing and faster chips will continue to progress and will pull AI along with them. A breakthrough in quantum computing would entail a huge stepwise leap in computing power, and therefore in AI. So progress will be non-linear, but not in the sense he thinks.




Rodney Brooks and Marvin Minsky, pioneers in the fields of robotics and AI, don't think we're anywhere close to general purpose AI. Minsky doesn't think we've made much progress in that area in the last several decades. The things you mention were worked on in the 60s (leaving aside Quantum Computing, which is probably a red herring for AI).


First, Marvin Minsky is dead, so you shouldn't refer to him in the present tense; he can no longer have opinions about current events. Second, Minsky was skeptical of neural nets, and he was ultimately proven wrong. Even great minds make mistakes. In the 1960s we did not have the confluence of big data, much faster hardware, and certain algorithmic advances that makes current deep learning performance possible. So what you say is only partially true: we had some of the ideas in the 60s, but we were missing the conditions needed to support and prove them out. Those conditions are now in place, and AI progress has greatly accelerated.


I don't see how AlphaGo changes the content of this essay in any way.


DeepMind beat Go roughly a decade before anyone predicted. The point is that AI progress is accelerating.


He addresses this specifically in the article, using Deep Blue instead of AlphaGo. Game-playing AI is not progress toward AGI.


He does not address it in the article, and, as with most of his points, that one is irrelevant. What's important here is that AI is beating more and more complex games, such that, directionally, it will eventually be able to play (cough) the game of life, or some important subset of that game, which would be close to AGI. Chess has a search space small enough that brute-force search plus handcrafted heuristics conquered it decades ago. Go's search space is astronomically larger, on the order of 10^170 legal positions versus roughly 10^47 for chess, so exhaustive search was never an option; AI won at it recently by learning instead. It solved a much more complex problem. What's more, Deep Blue has zero to do with the boom in AI that is under way. AlphaGo does. It is part of a wave of recent progress of which the author apparently knows little, because he focuses on Watson and other systems that are not central to what matters in AI now.
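A back-of-the-envelope comparison makes the point. The position counts are the standard published estimates (Tromp's for Go); the snippet itself is just my own toy arithmetic:

    # Why Deep Blue's brute force could never carry over to Go.
    # Position counts are widely cited estimates, not my own results.
    chess_positions = 10**47         # rough upper bound on legal chess positions
    go_positions = 2 * 10**170       # legal 19x19 Go positions (Tromp)
    atoms_in_universe = 10**80       # common order-of-magnitude estimate

    print(go_positions // chess_positions)   # ~2 x 10^123: Go dwarfs chess
    print(go_positions > atoms_in_universe)  # True: enumeration is hopeless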

What would constitute progress toward AGI if not the ability to solve more and more complex problems? Winning at Go involved convolutional networks of the kind behind high-performance machine vision, and I think we'll all admit that vision is an important part of how an intelligence will operate in the world. It also involved reinforcement learning, i.e. goal-oriented learning, another crucial strategy for an eventual AGI.
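To make those two ingredients concrete, here is a minimal sketch in Python/PyTorch of a convolutional policy network trained with a REINFORCE-style update. It is not AlphaGo's actual architecture; the board size, layer sizes, and training step are all illustrative assumptions of mine:

    # Toy sketch of the two ingredients above, not the real system:
    # a convolutional "policy network" over the board (the vision part),
    # reinforced by the game outcome (the goal-oriented part).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    BOARD = 9  # toy 9x9 board; the real thing used 19x19 and far deeper nets

    class PolicyNet(nn.Module):
        def __init__(self):
            super().__init__()
            # Convolutions treat the board position like an image.
            self.conv1 = nn.Conv2d(1, 32, kernel_size=3, padding=1)
            self.conv2 = nn.Conv2d(32, 32, kernel_size=3, padding=1)
            self.head = nn.Linear(32 * BOARD * BOARD, BOARD * BOARD)

        def forward(self, board):  # board: (batch, 1, BOARD, BOARD)
            x = F.relu(self.conv1(board))
            x = F.relu(self.conv2(x))
            return self.head(x.flatten(1))  # one logit per possible move

    net = PolicyNet()
    opt = torch.optim.SGD(net.parameters(), lr=1e-3)

    # One REINFORCE step: sample a move, then nudge its probability
    # up or down according to the final game reward (+1 win, -1 loss).
    position = torch.zeros(1, 1, BOARD, BOARD)   # empty toy position
    dist = torch.distributions.Categorical(logits=net(position))
    move = dist.sample()
    reward = 1.0                                 # pretend we won this game
    loss = -dist.log_prob(move) * reward
    opt.zero_grad()
    loss.backward()
    opt.step()

In the real system the reward came from self-play, and the policy network was combined with a value network and Monte Carlo tree search.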


What year do you think the singularity will happen?


Nobody knows exactly, so any precise estimate is all but guaranteed to be wrong. But people smarter than I am, and deeply involved in current research, have said they think strong AI could arrive in 10 years or so. Others think it's much further away, and that's probably the consensus view among respected researchers. It's been 20 years away for the last 80 years, right? /s


Obviously nobody actually knows; I just find it interesting to ask people for their predictions.

> It's been 20 years away for the last 80 years, right?

Well, this is how I feel, which is why I like asking people to make a prediction: it leaves a public record to look back at.



