"The second is that there needs to be a political or a regulatory response. We need to have a national and an international conversation about redistribution, about safety nets, about measuring this technology and correctly anticipating its arrival. People are aware of this."
With similar motivations, we did some work measuring public perception of AI (to appear in AAAI 2017). Recently, there's been a clear increase in concern about AI displacing human jobs (Figure 3F): https://arxiv.org/pdf/1609.04904.pdf
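For anyone curious what that kind of measurement looks like mechanically, here's a rough sketch - note this is not the paper's actual pipeline (which relies on annotated articles and trained classifiers); the corpus input and keyword lists here are placeholders I made up:

    # Rough keyword co-occurrence trend; NOT the paper's actual method.
    # Assumes `articles` is an iterable of (year, text) pairs from some corpus.
    from collections import Counter

    AI_TERMS = {"artificial intelligence", "machine learning", "robot"}
    JOB_TERMS = {"jobs", "employment", "unemployment", "workers"}

    def concern_trend(articles):
        """Fraction of each year's articles mentioning both AI and job terms."""
        hits, totals = Counter(), Counter()
        for year, text in articles:
            lower = text.lower()
            totals[year] += 1
            if any(t in lower for t in AI_TERMS) and any(t in lower for t in JOB_TERMS):
                hits[year] += 1
        return {y: hits[y] / totals[y] for y in sorted(totals)}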
I'm hesitant to generalize the NYT corpus to "public perception". I'd be very curious to see the regional differences within the United States in these trends.
I agree! I wonder if these regional differences correlate with voting for Trump, given that he promised to bring back factory jobs, many of which are now automated. Could it be that some Trump voters are thinking these jobs are recoverable, because they underestimate the extent to which the jobs were automated?
Big fan of Jack Clark and his work. From what I gather, he is really focused on helping a wide, not particularly technical audience understand the near-term implications of socioeconomic change caused by emerging technology.
One of the advantages of GOFAI, and expert systems in particular, was that their reasoning process was a lot more transparent, and in many cases you could ask them to explain it. Much harder if all you have is dozens of arrays of connexion weights.
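To make the contrast concrete, here is a toy sketch (my own illustration, with made-up rules and facts) of the kind of explanation trace an expert system could produce, which a pile of weight matrices cannot:

    # Toy forward-chaining expert system that records WHY it concluded something.
    # Rules and facts are made up for illustration.
    RULES = [
        ("R1: has_feathers & lays_eggs -> bird", {"has_feathers", "lays_eggs"}, "bird"),
        ("R2: bird & cannot_fly -> flightless_bird", {"bird", "cannot_fly"}, "flightless_bird"),
    ]

    def infer(facts):
        """Apply rules to a fixpoint, keeping a human-readable trace."""
        facts, trace = set(facts), []
        changed = True
        while changed:
            changed = False
            for name, premises, conclusion in RULES:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    trace.append(name)  # the step-by-step justification GOFAI could offer
                    changed = True
        return facts, trace

    _, trace = infer({"has_feathers", "lays_eggs", "cannot_fly"})
    print(trace)  # ['R1: ...', 'R2: ...'] - each rule firing is an explanation step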
Perhaps this is more an exposé of human ability to generate rationalizations for chaotic biological workings of their brains, than it is a failing of neural nets to make rational decisions.
If you read ProPublica's R script rather than their anecdote-filled news article, you discover the algorithm is accurate (p < 0.001) and unbiased (bias tests had p > 0.05).
Unfortunately an honest title/article "sentencing algorithm is accurate and eliminates racism" wouldn't be viable clickbait, so they decided to ignore their own conclusions.
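For context on what a bias test like that can look like, here's a hedged sketch of one common check - whether false-positive rates differ by group among people who did not reoffend. The column names and file path are assumptions on my part; ProPublica's real analysis is an R script, and this is just the general shape of such a test:

    # One common bias check: do false-positive rates differ by group?
    # Column names and the CSV path are assumptions; the real analysis is in R.
    import pandas as pd
    from scipy.stats import chi2_contingency

    df = pd.read_csv("compas-scores.csv")       # hypothetical local copy
    non_recid = df[df["two_year_recid"] == 0]   # people who did NOT reoffend

    # Contingency table: group vs. (flagged high-risk or not) among non-reoffenders
    table = pd.crosstab(non_recid["race"], non_recid["score_text"] == "High")
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"chi2={chi2:.2f}, p={p:.4f}")  # p > 0.05 here would mean no detectable gap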
Great interview. I have been trying to understand how the success of deep learning will affect my career (I started using neural networks in the 1980s: DARPA projects, commercial projects, etc., but now less than half of my professional time is spent on deep learning and general machine learning). I am trying to decide whether to totally jump back into the field to take advantage of a lot of practical work experience, or keep doing general consulting. I am concerned that the field of deep learning is saturated right now.
One small nitpick about the comment "a young child can see a chicken once, and if you then ask them to draw a chicken, they’ll typically be able to. The child has a representation of a chicken from one sighting, and is able to abstract that into a drawing." This hypothetical child watches the chicken, moves their head for a slightly different view, the chicken itself is probably moving, and so on. There is a lot of training data collected by the child.
Hiya (I'm Jack Clark) - this is a good point. The child probably gets about 50 to several hundred distinct 'frames' of the chicken. Still, a remarkably small number of examples.
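Back-of-envelope for where that range comes from (all numbers are my own illustrative guesses, not measurements):

    # Crude estimate of distinct views from one chicken sighting.
    # Both numbers below are illustrative assumptions.
    viewing_seconds = 30            # a short look at the chicken
    distinct_views_per_second = 2   # head and chicken both moving
    print(viewing_seconds * distinct_views_per_second)  # 60 - within '50 to several hundred'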
It would also be a terrible drawing. An adult who had seen many other types of birds would be able to do much better, as they would have learned more abstractions - for example, the ability to condense a complex visual pattern into, say, "mottled".
AI can engage in financial transactions, especially now with decentralized currencies. At scale, that would be really hard to regulate and could lead to a taxation crisis.
Hmmm. I thought this was 'censorship week'[1] where everyone was supposed to throw themselves on the "flag" link the moment anything "political" was mentioned.[2]
I can't force myself to do this, but there does seem to be some mention of the forbidden word here.