When is Google going to offer a system that shows how its AI interpreted a question or query, as a semantic/knowledge/NLP graph that we could then adjust interactively in order to train its deep learning system?
Yes. Wolfram Alpha tells you how it interpreted your query, so if you get back an answer of "42", you know how it got that answer. Google likes to be more opaque than that.
As Google gets more into answering questions rather than returning links, this will become more of a problem.
The quote regarding "a couple of students in a lab" annoys me. I can't think of any information that is actually reliably conveyed by that image.
Are students in a lab less capable than professionals elsewhere? Not necessarily.
Can we be sure that professionals elsewhere were not already trying these techniques? Not necessarily.
Were the advancements in the field driven solely by the work of these students, as opposed to general advancements in all areas of computing? Not necessarily.
All of that said, it is cool to see how far we have come in these areas. I'm looking forward to where we will be in the rest of my lifetime.
I took that quote as an example of the resources put into that evaluation: if two students in a lab can quickly get something competitive up and running, then we're likely to do very well if we throw more resources and expertise at the approach.
But this is exactly the conclusion I disagree with. I definitely think the approach is worth exploring; I just see no reason to think it is more likely to pan out than many other possibilities.
When Hinton’s group tested this model, it had the benefit of something unavailable at the time neural nets were first conceived — super fast GPUs (Graphic Processing Units). Though those chips were designed to churn out the formulae for advanced graphics, they were also ideal for the calculations required in neural nets. Hinton bought a bunch of GPUs for his lab and got two students to operate the system. They ran a test to see if they could get the neural network to recognize phonemes in speech. This, of course, was a task that many technology companies — certainly including Google — had been trying to master. Since speech was going to be the input in the coming age of mobile, computers simply had to learn to listen better.
How did it do?
“They got dramatic results,” says Hinton. “Their very first results were about as good as the state of the art that had been fine-tuned for 30 years, and it was clear that if we could get results that good on the first serious try, we were going to end up getting much better results.” Over the next few years, the Hinton team made additional serious tries. By the time they published their results, the system, says Hinton, had matched the best performance of the existing commercial models. “The point is, this was done by two students in a lab,” he says.
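For what it's worth, the reason those graphics chips fit so well is that a neural net's core computation reduces to large matrix multiplications, exactly the parallel multiply-accumulate work GPUs were built for. A minimal sketch of that workhorse operation in plain NumPy (layer sizes and names are purely illustrative, not Hinton's actual phoneme system):

    import numpy as np

    rng = np.random.default_rng(0)

    # A toy batch of inputs and one hidden layer's parameters.
    # Shapes are illustrative only.
    batch = rng.standard_normal((256, 429))   # e.g. a batch of acoustic feature frames
    W = rng.standard_normal((429, 2048))      # hidden-layer weight matrix
    b = np.zeros(2048)                        # bias vector

    # The dominant cost of a neural net is this one dense matrix multiply,
    # the same multiply-accumulate pattern GPUs already did for 3D graphics.
    hidden = np.maximum(0, batch @ W + b)     # affine transform + ReLU nonlinearity

    print(hidden.shape)  # (256, 2048)

On a GPU those same multiplies run across thousands of cores at once, which is where the speedup Hinton is describing comes from.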
I'm not sure how you're drawing a conclusion other than "GPUs performed this task extremely well given the resources put into optimizing their performance."
This mentality is pretty much precisely my gripe. There are plenty of cases where "two students in a lab" can show considerable progress. I would wager there are more cases where that progress will not scale beyond the initial result than cases where it indicates a clearly superior method.
Especially when the comparison is just to the state of the art in computing. Consider: people today can solve problems in minutes that the likes of Knuth were probably unable to solve years ago. I do not accept that the likes of Knuth were less capable than today's students; the hardware and tooling simply got better.
So, just to be clear, you're taking issue with the mentality expressed as:
“They got dramatic results,” says Hinton. “Their very first results were about as good as the state of the art that had been fine-tuned for 30 years, and it was clear that if we could get results that good on the first serious try, we were going to end up getting much better results.”
I mean, these were Hinton's two students in Hinton's lab. They did some experiments and found promising results. Subsequent research built on and validated that promise. I'm having trouble understanding your problem. How would you propose people identify promising research? Surely not by choosing approaches that do poorly despite substantial resources and effort!
No no. I take issue with the mentality that two students in a lab quickly approaching the state of the art necessarily implies they have hit on a revolutionary approach.
I am all for promising results. But the "two students in a lab" framing is borderline meaningless to me. The real story is, "we performed some research, and it looked like it had promise, so we kept at it."
Basically, I reject the "it was clear that if we could get results that good on the first serious try..." claim. First off, was it the first try or the first serious try? Why the qualifier? Secondly, science and progress are full of things that were "clear" but wrong.
I don't like arguing this point, as I do want to preserve the optimism that is in the air. However, it does grate on me at times. It is essentially survivorship bias in action.
What it means is that the technique was so good that a couple of students in a lab made it work as well as the state-of-the-art commercial offerings of that time.