Yes, RNNs, autoencoders, etc. do seem to encode concepts as embedding vectors in higher-dimensional spaces. But see the argument Doug Hofstadter makes about language translation using RNNs [2]. We have also seen how CNN layers can "visualize" higher-level features in images as we go deeper into the network. And many think that some variation of this basic approach, done repeatedly with more data, computing power, etc., will be sufficient to lead us to AGI in the future.
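To make "encoding concepts via embedding vectors" concrete, here's a minimal autoencoder sketch in Keras; the narrow middle layer is the embedding. The layer sizes and the random data are arbitrary placeholders for illustration, not anyone's actual model:

```python
# Minimal autoencoder sketch (Keras): the network is forced to squeeze
# each input through a narrow middle layer -- that layer's activations
# are the "embedding vector" for the input.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(784,))                       # e.g. a flattened 28x28 image
encoded = layers.Dense(32, activation="relu")(inputs)    # the 32-dim embedding
decoded = layers.Dense(784, activation="sigmoid")(encoded)

autoencoder = keras.Model(inputs, decoded)
encoder = keras.Model(inputs, encoded)                   # maps input -> embedding
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

x = np.random.rand(256, 784).astype("float32")           # stand-in data
autoencoder.fit(x, x, epochs=1, batch_size=32, verbose=0)
embeddings = encoder.predict(x, verbose=0)               # shape (256, 32)
```

The reconstruction loss is what forces the 32 numbers in the middle to capture the structure of the input; nothing here guarantees those numbers correspond to human-legible "concepts", which is part of what the skeptics below are getting at.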
However, many of the leading figures in AI are very skeptical of the approaches they themselves pioneered - including Geoffrey Hinton, the father of deep learning, who recently stated: "My view is throw it all away and start again." [0]
Francois Chollet, the author of the deep learning framework Keras, has said: "For all the progress made, it seems like almost all important questions in AI remain unanswered. Many have not even been properly asked yet." [1]
And of course there is Doug Hofstadter, who thinks it is going to take a lot more to come close to human-level intelligence and understanding, even when you consider the most advanced RNNs of the day: those that run Google Translate.
The counterpoint to Douglas Hofstadter's argument is pretty simple: he is defending horse-drawn carriages against cars.
Take a great horse-drawn carriage and compare it to the first cars. My god, those cars SUCKED! They were bumpy (no suspension). They rarely got from one city to the next without needing repairs [1]. Getting them to start at all required an engineering degree AND more than average muscle. Despite many cities having maybe 20 cars total, one blew up every 10 days or so. Horses actually had a higher top speed than cars! And the fuel was WAY more expensive than horse feed.
Compared to the AVERAGE horse-drawn carriage, which also had no suspension and constantly needed repairs... well, cars started off about even, perhaps a little worse, and after a few years they were so much better it isn't even funny.
Likewise, compared to a PhD-level expert human translator in a specific language pair, Google Translate sucks. Of course... there are a few tens of thousands of expert human translators, and they might know 3 languages; perhaps there are 100 that know 4. But as can be plainly seen on the Indian channel in Australia, Google Translate outperforms the translators used there, by a wide, wide margin. Google Translate can translate between any pair of over 100 languages. Let's face facts here: on most metrics, Google Translate wipes the floor with even those PhD-level translators, though not quite yet on every last metric. Compared to the average human translator, Google Translate wins on every last metric.
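"On most metrics" in practice usually means automatic scores like BLEU, which count n-gram overlap between the system output and a human reference. A toy illustration with NLTK (the sentences are made up):

```python
# Toy BLEU score with NLTK -- the standard (if imperfect) automatic metric
# behind most "system X beats system Y" translation claims.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = ["the cat sat on the mat".split()]   # human reference translation(s)
candidate = "the cat sat on a mat".split()       # machine output being scored

smooth = SmoothingFunction().method1             # avoid zero n-gram counts on short texts
print(f"BLEU: {sentence_bleu(reference, candidate, smoothing_function=smooth):.3f}")
```

Worth keeping in mind that BLEU rewards surface overlap, not meaning, which is exactly the gap the literary-translation objection further down is pointing at.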
Aside from that, Google Translate is always available (even mostly available offline, with no Google involved beyond a one-time binary download to your phone), it's cheap, and frankly it translates from Chinese to English better than a Chinese person who has taken two years of English courses can express themselves in English.
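Google's production models aren't public, of course, but the same class of neural sequence-to-sequence translation is a few lines these days with open models. A sketch assuming the Hugging Face transformers library and one of the public Helsinki-NLP Marian models (not Google's system, just the same family of networks):

```python
# Chinese -> English with an open neural MT model (MarianMT).
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-zh-en"        # one of many public language pairs
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["今天天气很好"], return_tensors="pt", padding=True)
tokens = model.generate(**batch)                 # beam-search decode
print(tokenizer.batch_decode(tokens, skip_special_tokens=True))
```

Once downloaded, this runs entirely offline, which is the point about availability: the marginal cost of a translation is near zero.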
The truth is that Douglas Hofstadter is... wrong. Okay, you might argue he's only 80% wrong and the remaining 20% is shrinking; that's fair. And of course, Google Translate is not AGI.
You know, there are social scientists - in fact, quite a few of them - who claim "AGI" is simply an AI solving 2 problems:
1) any problem, like survival in groups
2) explaining the actions it took to achieve (1) to other (usually new) members of its group
That's AGI. Systems like that are a pretty abstract/advanced form of auto-encoder: encode what happened, decode it back out as an explanation. We don't know that for sure, but... it's not far off the truth.
But yes, you can find a few exercises where humans still outperform Google Translate. They're mostly unfair: humans outperform the AI because they have side-channel information available, e.g. knowledge of events that happened outside the content of the actual text. A good test would exclude that, and then humans would be 100% left behind, but in the press...
A lot of disciplines are currently like that. Humans are utterly beaten by AI at just about every last thing that was used to "prove" humans have "intelligence" and machines "don't" just 10 years ago. Even the most human of things: AIs actually outperform humans at chatting up humans. Can you think of anything you can do on a computer that's more human than that?
And now we're at the point where it's becoming more and more obvious that while AIs don't beat the best experts in specific fields, they do beat the average human. It's getting ridiculous. AI robots are better at navigating forest terrain than humans, to take an example I recently saw. On expert-level medical analysis, AI is not just better, but has error rates solidly and consistently below those of humans. Expert-level mathematics and physics without AIs doing most (or all) of the work has been dead for 2 decades, and in the 2 decades before that I would argue forms of AI already made particular researchers far more successful than their peers.
Where exactly is the point when "but they can't yet do X" gets the answer "oh yeah? Hold my beer"? Is it really that far off?
So here's Mr. Hofstadter, and with all due respect, he's merely moving the goalposts again. He'd best go home and dive into his books, urgently, because in 2 years we'll have crushed even the most expert human translator using AIs, and he'll need a new place to put those goalposts. I look forward to seeing where he puts them this time; it'll be fun!
How did we historically "prove" computers "aren't intelligent" or "don't have a soul"? Well, they can't analyze a problem, can they! (And then we had expert systems.) But they can't strategize! Take chess (and then we had Deep Blue). But they'll never recognize unstructured data like images, will they? (Oops.) Okay, but never video and reading people's intentions! (Oops.) Okay, but at least humans are better at voice recognition (AIs win consistently now)! And translation (90% there)! Okay, sure, but they'll never control a robot in the real world! (And now pretty much every research robot does, and of course there are self-driving cars.) Okay, but they'll never handle dynamically stable robots as well as humans (that one's a TODO at the moment). Okay, but they'll never deal as well with partial-information games and bluffing (poker: humans beaten; StarCraft: TODO).
Hinton might very well be right. There are 5 major chapters in an introduction-to-ML course, and Hinton is a big name in 3 of them. Frankly there ought to be a 6th (Hebbian learning). When it comes to exploring deeply, we have only really done that for one of those chapters: symbolic reasoning. We're getting deeper into the neural network chapter, but symbolic reasoning got a head start of a millennium or four or five, which I would say we've not quite matched yet. We are very far from out of ideas to push the field further, so I wouldn't worry about that yet.
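For the curious: Hebbian learning ("neurons that fire together wire together") fits in a few lines. Plain Hebb's rule (delta_w = eta * x * y) blows up without normalization, so this toy numpy sketch uses Oja's normalized variant; the input statistics are invented for illustration:

```python
# Toy Hebbian learning via Oja's rule: strengthen weights between
# co-active units, with a decay term that keeps the weights bounded.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=4)    # small random initial weights
eta = 0.01                           # learning rate

for _ in range(2000):
    # Made-up statistics: inputs 0 and 1 fire together, 2 and 3 are noise.
    shared = rng.normal()
    x = np.array([shared, shared, rng.normal(), rng.normal()])
    y = w @ x                        # linear output unit
    w += eta * (y * x - y * y * w)   # Oja's rule: Hebbian term + decay

print(np.round(w, 2))  # weight mass concentrates on the correlated pair (0 and 1)
```

With no labels and no backprop, the unit converges toward the principal component of its inputs, which is a taste of why this "chapter" arguably deserves more exploration than it has gotten.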
He also has a good point: the overwhelming majority of current AI research focuses on too narrow a slice of the field.
I would like to point out that Hinton is a theoretical AI researcher. That he believes theoretical advances are necessary to advance the field is almost a tautology: he wouldn't have become Geoffrey Hinton if he believed otherwise. I mean, this argument has its place, but it's a statement of faith. A very successful statement of faith, by a very impressive researcher, but ultimately it's about as informative as finding out Mr. Nadal likes tennis.
> But yes, you can find a few exercises where humans still outperform Google Translate. They're mostly unfair: humans outperform the AI because they have side-channel information available, e.g. knowledge of events that happened outside the content of the actual text. A good test would exclude that, and then humans would be 100% left behind, but in the press...
That's a very reductionist view of translation. I'm of the opposite opinion: it requires human-level intelligence to translate anything but the simplest and driest texts. Translators of literary texts are no less authors than the original writers.
> poker: humans beaten
AIs beat humans only in the simplest variant of poker: heads-up (two players). In the more complex variants, AIs are nowhere near humans.
[0] https://cacm.acm.org/news/221108-artificial-intelligence-pio...
[1] https://twitter.com/fchollet/status/837188765500071937
[2] https://www.theatlantic.com/technology/archive/2018/01/the-s...