Yann LeCun's comment on AlphaGo and true AI (facebook.com)
322 points by brianchu on March 14, 2016 | 204 comments



Preface: AlphaGo is an amazing achievement and does show an interesting advancement in the field.

Yet ... it means almost nothing of what people are predicting it to mean. Slashdot went so far as to say that "We know now that we don't need any big new breakthroughs to get to true AI". The field of ML/AI is in a fight where people want more science fiction than scientific reality. Science fiction is sexy, sells well, and doesn't require the specifics.

Some of the limitations preventing AlphaGo from being general:

+ Monte Carlo tree search (MCTS) is really effective at Go but not applicable to many other domains we care about. If your problem can be framed in terms of {state, action} pairs and you're able to run simulations to predict outcomes, great, but otherwise, not so much (a minimal sketch of the idea follows this list). Go also has the advantage of perfect information (you know the full state of the board) and deterministic simulation (you know with certainty what the state is after action A).

+ The neural networks (NN) were bootstrapped by predicting the next moves in more matches than any individual human has ever seen, let alone played. It then played more against itself (cool!) to improve - but it didn't learn that from scratch. They're aiming to learn this step without the human database but it'll still be very different (read: inefficient) compared to the type of learning a human does.

+ The hardware requirements were stunning (280 GPUs and 1920 CPUs for the largest variant) and were integral to how well AlphaGo performed - yet adding hardware won't "solve" most other ML tasks. The computational power primarily helped improve MCTS, which roughly equates to "more simulations yield a better solution" (though with NNs to guesstimate the outcome of a position instead of having to simulate all the way to an end state).
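To make the {state, action} framing in the first bullet concrete, here is a minimal, generic MCTS sketch in Python. This is my own illustration rather than anything from the AlphaGo paper, it ignores details like alternating players, and the legal_actions / simulate / estimate_value functions are assumed to be supplied by a hand-built model of the domain - which is exactly the requirement that limits the method's generality:

    import math, random

    class Node:
        def __init__(self, state, parent=None):
            self.state, self.parent = state, parent
            self.children = {}               # action -> child Node
            self.visits, self.value = 0, 0.0

    def uct_select(node, c=1.4):
        # Pick the child with the best exploration/exploitation score (UCT).
        return max(node.children.values(),
                   key=lambda ch: ch.value / (ch.visits + 1e-9)
                   + c * math.sqrt(math.log(node.visits + 1) / (ch.visits + 1e-9)))

    def mcts(root_state, legal_actions, simulate, estimate_value, n_iter=1000):
        # legal_actions(state) -> list of actions     (hand-written, domain-specific)
        # simulate(state, action) -> next state       (a deterministic, hand-built model)
        # estimate_value(state) -> expected outcome   (random rollouts, or a value net)
        root = Node(root_state)
        for _ in range(n_iter):
            node = root
            # 1. Selection: descend through fully expanded nodes.
            while node.children and len(node.children) == len(legal_actions(node.state)):
                node = uct_select(node)
            # 2. Expansion: try one untried action, if any remain.
            untried = [a for a in legal_actions(node.state) if a not in node.children]
            if untried:
                a = random.choice(untried)
                node.children[a] = Node(simulate(node.state, a), parent=node)
                node = node.children[a]
            # 3. Evaluation: estimate how good the reached position is.
            v = estimate_value(node.state)
            # 4. Backup: propagate the estimate to the root.
            while node is not None:
                node.visits += 1
                node.value += v
                node = node.parent
        # Play the most-visited action from the root.
        return max(root.children.items(), key=lambda kv: kv[1].visits)[0]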

Again, amazing, interesting, stunning, but not an indication we've reached a key AI milestone.

For a brilliant overview: http://www.milesbrundage.com/blog-posts/alphago-and-ai-progr...

John Langford also put his opinion up at: http://hunch.net/?p=3692542

(note: copied from my Facebook mini-rant inspired by Langford, LeCun, and discussions with ML colleagues in recent days)


I took a closer read through the AlphaGo paper today. There are some other features that make it not general.

In particular, the initial input to the neural networks is a 19×19×48 grid, and the layers of this grid include information like:

- How many turns since a move was played

- Number of liberties (empty adjacent points)

- How many opponent stones would be captured

- How many of own stones would be captured

- Number of liberties after this move is played

- Whether a move at this point is a successful ladder capture

- Whether a move at this point is a successful ladder escape

- Whether a move is legal and does not fill its own eyes

Again, all of this is computed before the neural nets even get involved. Some of these layers are repeated 8 times for symmetry. I would say for some of these, AlphaGo got some domain-specific help in a non-general way.
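To give a feel for how much Go-specific knowledge is baked in before any learning starts, here is a rough sketch of assembling such an input tensor. This is my own illustration: the plane layout and the helper functions are placeholders, not the paper's actual encoding, and in a real system each helper is hand-written Go logic (liberty counting, ladder reading, capture simulation):

    import numpy as np

    BOARD, PLANES = 19, 48

    # Placeholder helpers, stubbed out so the sketch runs. Their existence is the
    # point - they encode domain knowledge before any learning happens.
    def turns_since_move(history, i, j): return 0
    def count_liberties(board, i, j): return 0
    def capture_size(board, i, j): return 0
    def is_ladder_capture(board, i, j): return False

    def build_input(board, history):
        # 48 feature planes over the 19x19 board, mostly binned one-hot encodings.
        x = np.zeros((PLANES, BOARD, BOARD), dtype=np.float32)
        for i in range(BOARD):
            for j in range(BOARD):
                x[0 + min(turns_since_move(history, i, j), 7), i, j] = 1   # turns since move
                x[8 + min(count_liberties(board, i, j), 7), i, j] = 1      # liberties
                x[16 + min(capture_size(board, i, j), 7), i, j] = 1        # stones captured
                x[24, i, j] = float(is_ladder_capture(board, i, j))        # ladder capture
                # ...and so on for the remaining planes.
        return x

    x = build_input(board=None, history=None)   # the stubs ignore their arguments
    print(x.shape)                              # (48, 19, 19)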

It is of course still groundbreaking academically. The architecture is a state-of-the-art deep learning setup and we learned a ton about how Go and games in general work. The interaction between supervised and reinforcement learning was interesting, especially how the latter behaved worse in practice in selecting most likely moves.

disclaimer: Googler, not in anything AI.


Note that these features are for the fast rollout policy. That policy needs to be fast, so rather than a net they use a linear policy. For a linear policy to work, it requires good feature selection, which is what this is. At some point in the future, when we have better hardware, you can imagine removing the rollout policy and having just the one network.


I think you're confusing this list of attributes with a separate list used for rollouts and tree search. The ones above are definitely used for the policy and value networks. See: "Extended Data Table 2: Input features for neural networks. Feature planes used by the policy network (all but last feature) and value network (all features)."


I'm having trouble identifying what algorithmic innovation AlphaGo represents. It looks like a case of more_data + more_hardware. Some are making a big deal of the separate evaluation and policy networks. So, OK, you have an ensemble classifier.

The most theoretically interesting thing to me is the use of stochastic sampling to reduce the search space. Is there any discussion of how well pure Monte Carlo tree search performs here compared to the system incorporating extensive domain knowledge?


Wow, this really took me by surprise. I thought the only input was (s_1...s_final, whowon) during training, where the s are states, and (s_current) during play, and that the system would learn the game on its own. That's the way it worked with the Atari games anyway.


I expect the Atari games, if we're thinking of the same articles, had much less strategic depth than playing a Go champion.


- As all problems can be converted to Markov Decision Processes, this is a moot point. The conversion may not be efficient in terms of states/actions, but since we just nearly solved a problem with more states than there are atoms in the universe, that hardly matters. In addition, most problems for humans actually are in the form of {state, action}. Just because ML is now popular and it's all about putting labels on data does not make this less true. Other algorithms already exist (POMCP) which are direct transformations of MCTS for uncertain environments. Determinism is not needed for MCTS or for POMCP, so that point is void too.

- I don't see what the problem is with that. Nearly all tasks humans would currently gain from automating have plenty of human experience behind them. This is why training in any field even exists. Sure, it would be cool to send a drone into unexplored territory with no knowledge and come back later to a self-built city, but I don't see how putting that milestone a bit later is in any way a problem.

- The hardware requirements for any new, interesting breakthrough that does something important have always been immense. Just as with Deep Blue, or the graphene experiments where we can produce only milligrams at a time, or solar panels, each new discovery innovates in a specific direction. It is just a matter of time before the new method is substantially improved and able to run on single machines. To expect otherwise is foolish; to complain about it is pointless.

EDIT:

This is not to contradict the general point. That we don't need new breakthroughs is utterly false in so many ways it's not even funny. However, this certainly is an AI breakthrough, one that many AI researchers (myself included) thought would take AT LEAST 10 years to come to pass. Compressing 10 years into 1 seems like a breakthrough to me.


Thanks for the good discussion :)

+ The issue with MCTS was not that it couldn't be extended to non-determinism (you're correct re: POMCP) but that it requires a simulator which produces at least a reasonably accurate model of the world. This simulator is almost always hand engineered. Determinism and perfect information simplify both the task and the creation of the simulator. The state in Go also contains all history required to perform the next optimal computation - i.e. the system doesn't have to have any notion of memory - yet the state in most real world tasks is far more complicated, at least in regards to what needs to be remembered and how that should be stored.

+ The point of the hardware requirements is that hardware advances will advance the state of the art in Go but will not do the same in many other ML tasks. Whether we have 1x or 10x distributed AlphaGo's computing power in our pocket is not the issue - the issue is that such computing power won't help on many tasks, as the potential of our ML models is not compute bound.

There's also disagreement about how real the "10 year jump" was, which is discussed in the article by Miles Brundage. Many people (including Michael Bowling, the person who designed the system that "solved" limit heads-up Texas Hold Em) predicted professional-level Go play around now. While the estimate might be held by many, I also feel it was a media-reinforced one.


Simulators should be fast, sure; however, I don't believe they need to be deterministic to be fast. We already have millions of programs that make use of pseudo-random numbers, and they don't seem to suffer performance problems because of it.

As for the state and memory concern, MCTS does not care about it directly, since the simulator is used as a black box and its internals are irrelevant to it. As long as any environment configuration can be described as a state (essentially, a unique state->number conversion must be possible - and even that is not always required), MCTS will work. And since it also does not care about the size of the state space, the concern that having memory as one of the factors in the state would be problematic is also unfounded.

I also disagree on the specificity of AlphaGo. MCTS has been used successfully in many fields after its initial use and tuning for Go. I did my thesis on similar algorithms. In the same way, it does not matter whether AlphaGo can be directly used on other problems. What matters is the new idea of using NNs to substantially improve, with little overhead, the value estimates MCTS uses to explore the decision tree. This is the true breakthrough. The fact that the first implementation of this idea is a Go-playing program is irrelevant; it's more of a showcase for the strength of the approach.


This is beside your main point, but I just want to add that the 'problem' of Go is not nearly solved. Rather, what is nearly solved is the problem of beating a human Go player. There is quite a difference.


I wouldn't expect global maxima to even be definable for most problems.


I think a "solution" for Go is a program that can play to a win (or a draw, I guess) in any circumstance, right? I mean, like, you can create a tic-tac-toe program that plays perfectly and will never lose, but you cannot do this for go.


... Yet.

Growth mindset.


> the 'problem' of Go is not nearly solved.

And it is very likely that it never will be; the number of combinations is simply too large.


That doesn't (necessarily) mean a proof can't be found that a certain ruleset leads to optimal play. Tic-tac-toe can be solved without examining the entire state space of the game.


That's mostly because of trivial symmetry, and even taking that into account, the Go space is in the most literal sense astronomical; simple numbers don't work for describing just how large it is.


> The hardware requirements were stunning (280 GPUs and 1920 CPUs for the largest variant) and were an integral part to how well AlphaGo performed

Is that really true? Demis stated that the distributed version of AlphaGo beats a single machine version only 75% of the time. That's still stronger than virtually all human players, and probably would still beat Lee Sedol at least once.


The best discussion in terms of hardware-adjusted algorithmic progress I've seen is by Miles Brundage[1]. The additional hardware was the difference between amateur and pro level skills. It's also important to note that the non-distributed AlphaGo still had quite considerable compute power.

Now was a sweet spot in time. Ten years ago this algorithm would likely have been pathetic, and in ten years' time it will likely be superhuman in the same way chess engines are.

None of these constitute a general advance in AI however.

[1]: http://www.milesbrundage.com/blog-posts/alphago-and-ai-progr...


I don't really buy his argument. Lots of other companies with plenty of resources have been attacking this problem, including Facebook and Baidu. People have been talking about Go AIs for decades. If it was just a matter of throwing a few servers at Crazy Stone or another known Go algorithm, it would have been done already.


The companies may have plenty of resources, but those resources were not solely dedicated to this problem. You mention Facebook, and they were indeed on the verge of putting more time into this - though their team is far smaller (1-2 people) and they still used fewer compute resources. From the linked Miles article:

"Facebook’s darkfmcts3 is the only version I know of that definitely uses GPUs, and it uses 64 GPUs in the biggest version and 8 CPUs (so, more GPUs than single machine AlphaGo, but fewer CPUs). ... Darkfmcts3 achieved a solid 5d ranking, a 2-3 dan improvement over where it was just a few months earlier..."


That can't be right. Amateurs do not beat pro players 25% of the time, yet single machine AlphaGo beats distributed AlphaGo 25% of the time.


I don't think comparing win loss distributions is particularly insightful.

The fact that a single machine wins many games against the distributed version really just says that the value/policy networks matter more than the Monte Carlo tree search. The main difference is the number of tree-search evaluations you can do; it doesn't seem like they have a more sophisticated model in the parallel version.

This suggests that there are systematic mistakes that the single 8 GPU machine makes compared to the distributed 280 GPU machine, but MCTS can smooth some of the individual mistakes over a bit.

I would suspect that the general Go-playing population of humans do not share some of the systematic mistakes, so you likely won't be able to project these win/loss distributions to playing humans.


Then it would seem that AlphaGo on a single machine isn't equivalent to an amateur. Presumably it's really good even when running on a single machine, and the marginal improvement from each additional machine tails off quite quickly.

But when you're pitching it at a top class opponent in a match getting global coverage, you want all the incremental improvement you can get.


It is a major AI breakthrough, but not the sci-fi kind. AlphaGo is a true AI for Go in the sense that it develops intuition and anticipation by evaluating probabilities, same as any human player. A Go-specific philosophical zombie, in short. And when we have superhuman philosophical zombies for each problem, what will be the point of a general AI?

Edit: typo


To add to that, even our brain seems to work with dedicated, specialized subsystems. Wild supposition, but maybe general AI could be something like a load balancer/reverse proxy in front of lots of problem-specific AIs. When a human learns to play Go, is some small part of the brain retrained specifically for this task? If yes, then AlphaGo could in fact be a building block. The architecture would look like "reverse proxy" -> subsystem dedicated to the learning process -> subsystem trained for learning board games -> subsystem trained for playing Go. I'm absolutely not an expert, so this has surely been thought of before, by the way. /wild digression


Marvin Minsky thought exactly along these lines. Check out the interview below, especially the last few minutes. All his interviews in that series are well worth watching.

http://youtu.be/wPeVMDYodN8


> The hardware requirements were stunning (280 GPUs and 1920 CPUs for the largest variant)

Stunningly big, or stunningly small?

The CPUs would cost around $65/hour on Google Cloud. I can't immediately find pricing for GPUs on either Amazon or Google, but let's suppose it doubles that price.
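Backing the implied per-core rate out of that $65/hour figure - this is just arithmetic on the numbers above, not a quote of any actual price list:

    cpus = 1920
    cpu_cost_per_hour = 65.0                    # the estimate quoted above
    implied_rate = cpu_cost_per_hour / cpus     # ~$0.034 per core-hour
    with_gpus = cpu_cost_per_hour * 2           # "suppose it doubles that price"
    print(round(implied_rate, 3), with_gpus)    # 0.034 130.0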

It's pretty small potatoes.

Especially if you put it in the context of a project with ~15 researchers.


Given Langford's locality vs. globality argument, this also becomes quite obvious for the 4th-game mistake and the overconfidence that AlphaGo showed. The rate of growth of the compounding error for a local decision maker is going to cause these kinds of mistakes more often than not.


I don't know if I'd agree that unsupervised learning is the "cake" here, to paraphrase Yann LeCun. How do we know that the human brain is an unsupervised learner? The supervisor in our brains comes in the form of the dopamine feedback loop, and exactly what kinds of things it rewards aren't totally mapped out but pleasure and novelty seem to be high on the list. That counts as a "supervisor" from a machine learning point of view. It's not necessary to anthropomorphize the supervisor into some kind of external boss figure; any kind of value function will do the trick.


Wouldn't that be self-supervised learning? Yes, as the brain learns, it gets rewarded or not, but not by any external organism - rather by itself (or a part of itself). The "learning instruction manual" is in there somewhere.

So if you are able to code an AI so that it can run experiments on itself and gradually build this "learning instruction manual", eventually it will become able to do things you never thought of in the first place.


I think that's what the AlphaGo team did - they trained their agent against itself, and it learned new moves not explicitly programmed in! With an evaluation function just saying ahead / not ahead.


Well, a really interesting AI could tell us things about questions where we don't even know where to start, rather than needing to be primed with tons of data.


That IS unsupervised learning - you are maximizing an optimization function, you are not training using a set of "truth" values.


Hmm, I don't think so. From wikipedia: "Unsupervised learning is the machine learning task of inferring a function to describe hidden structure from unlabeled data. Since the examples given to the learner are unlabeled, there is no error or reward signal to evaluate a potential solution." (emphasis mine)


Yes, "unlabeled" means that the examples are not marked as "true" or "false", and the "inferring" you are doing is making your function fit the data (e.g. minimizing some error term). Life doesn't come with pre-packaged answers; humans use unsupervised learning.


Right, but we do have a reward function that says pleasurable/painful and novel/boring and probably other stuff too. So that can be viewed as a labeling on the data. Earlier data can be associated with the reward labeling through induction; that's how a recurrent neural net works. Doubtless that's an oversimplification though.


Ah, the joys of arguing about artificial intelligence without ever defining intelligence.

It is the perfect argument, everyone can forcefully make their points forever, and we'll be none the wiser whether this AI is 'true AI' or not.


That's right. 'Intelligence' isn't a useful technical concept. Think of it like 'beauty' or 'interestingness'. It's a quality of the impression something has on you, and not an external objective thing.

By this definition, a successful AI makes the impression of intelligence on people that observe its behaviour.

This definition is pretty well known, though not universally agreed on, and it serves me well in my professional AI-research life by removing this otherwise tediously unresolvable argument.


So then the discussion is all about defining intelligence, beginning with a fuzzy conception of the required qualities. The word literally means the ability to select, read, choose between. The discussion can be a means to judge the AIs or, for the sake of the argument, to judge and improve intelligence with AI as a heavily simplified model, which is old hat by now.

Do you think that's irrational? Do you expect neuroscience, or are you rather interested in mathematics? I thought that's how one learns in the absence of external inputs: by recombining old inputs, i.e. the fuzzy notions, to generate new ones. I'd think that's how recurrent networks work. What do you know about it (honest question)?

Fuzzy doesn't mean wrong. Underspecification, as little as I know about it, is a feature.


I remember similar arguments in the early 90s when I was doing my PhD. They got about as far then. The same arguments will be happening in another 25 years. And beyond; even when a computer is super-human in every conceivable way, there'll be arguments over whether it is 'real AI'. And nobody will define their terms then either. Ultimately the discussion will be as irrelevant then as it is now, and, then as now, it will mostly take place between well-meaning undergrads and non-specialist pundits.

(Philosophers of science have a discussion that sounds similar, to someone just perusing the literature to bolster their position, but the discussion is rather different though also not particularly relevant, in my experience.)

> What do you know about it (honest question)?

About recurrent NNs? Not much beyond the overview kind of level. My research was in evolutionary computation, though I did some work on evolving NN topologies, and using ecological models to guide unsupervised learning.


But then the super-human intelligence will give us some convincing arguments for one or the other side :)


> About recurrent NNs?

Yes, obviously, as that's the topic, but also the rest of what I mentioned: neuroscience, maths, leaning on logic and philosophy.

> I remember similar arguments in the early 90s

That's why I mention neuroscience, the ideas are much older.

> even when a computer is super-human in every conceivable way, there'll be arguments over whether it is 'real AI'

Of course. Just because it's superhuman doesn't mean we humans would know what it is, whether it is what we think it is, and whether that's all there could be.

Real (from res (matter (from rehis (good as in the goods))) + ~alis (adjective suffix (from all?))) means worthy, and obviously an AI is only as good as its contestants are bad. It won't be 'real' for long before it's thrown in the trash once a better AI is found.

That'll stop when the AI can settle the argument convincingly. That's what's going on: evangelizing. And we do need that, because if not for the sake of the art itself, then as proof for the application of its answers and insights to other fields.

> And nobody will define their terms then either. Ultimately the discussion will be as irrelevant then as it is now

LeCun has sure come a lot further since then, and he defines the terms in software. As I said, the discussion is just about what to make of it. Of course many come up basically empty; that's why the discussion is important, and that's why I asked what you know about it. I think it's a very basic question and not one that's easy to grow tired of. If you work in that, maybe it's different and specialized to computation.

There might not be much to say about it, all the easier then to summarize in a short post. Or, if there's indeed more to it, then I'd appreciate a hint, to test my own understanding and learn.

I don't really know what LeCun talks about, or the techniques you studied, so I'm saying this just for perspective. I'm just generally interested in learning, and computation is just one relevant and informative angle. Maybe that's what bothers you, learning to learn, and that's why it's freshmen who bother with it, but learning to learn is maybe really just learning, or impossible. That's the kind of logical puzzle that's to be taken half jokingly. Don't beat yourself up over it.


As I have said elsewhere, I will agree we have achieved true AI when a program, uncoached, creates a persuasive argument that it is intelligent. One nice feature of this test is that you don't have to define intelligence precisely beforehand. Of course, that does not give a specification for developers to work to, but that is the case we have now, anyway.


Nice to know your criteria. There have been many. The trick is not to decide when you'll 'agree' (whatever your agreement is worth), but to form a consensus in the discussion.

I've known people who've been unable to create a persuasive argument that they are intelligent (or unwilling), and I've known intelligent dogs unable to argue for anything, persuasively or not. I don't fancy your chances of having your definition become the standard.


The possible downside to appealing to consensus is that it doesn't always agree with you. That doesn't bother me in the slightest, but perhaps that's because I think people will recognize AI when it shows up.


Would you like to propose a definition?


I think we need more advances in neuroscience and, I know this will be controversial, psychology before we really know what the cake even is.

Edit:

I actually think the major AI breakthrough will come from either of those two fields, not computer science.


I disagree. In engineering we've made great advances not by exactly reproducing biology, as cases like transportation (wheels vs. legs) or flight (engine and wing vs. flapping feathers) show. We can't even mass-produce synthetic muscles, and instead use geared motors.

The growth in building increasingly sophisticated AI is faster than our efforts to reverse engineer biology. I could see that changing with improved observational techniques like optogenetics or bacteria and viruses we can "program" to explore.

Researchers are already focusing on concrete insights from cognitive science, neuroscience, etc., such as one-shot learning or memory, that we haven't yet fit into a cohesive machine learning framework. For the time being I'd bet on more advances coming without significant changes in biological understanding.


I think you're right, and AlphaGo is an example. The big advancements in flight give us supersonic jets far faster than birds, but we've only recently developed microdrones that can land on a tree branch. We have wheeled vehicles far faster than horses, but haven't got a single vehicle that can complete an Eventing course. So in AI we have computers far better at Chess than any human, far better at indexing and searching databases, and now (or certainly very soon) far better at playing Go.

But we are still nowhere near developing a computer that can learn to play Monopoly, Risk or Axis & Allies just from reading the rule book and looking at the components. If you aim your machine at a very narrow subset of a problem, you can optimise it to amazing degrees, far exceeding humans or nature. But developing a machine that has the whole package is staggeringly hard in comparison.

But you know what? That's fine. Special purpose, single domain tools are fantastically useful and are easier to make highly reliable, with well understood limitations.


I feel that your analogy does not disprove OP's claim. You would first have to prove that the analogy is actually applicable. Theoretically, one could argue that motion is substantially different from intelligence such that the analogy does not hold.

But I am with you in that engineering / CS / ... should not wait for neuroscience to make further discoveries but continue the journey.


Interesting point, but which do you think is a more reasonable prior: that intelligence is a deterministic process similar to others we've encountered, or that there is something uniquely different about intelligence compared to other physical phenomena? I think the latter requires more assumptions, so I would argue philosophically it is the one that requires the proof, or more evidence :)

Regardless, my feeling is that there is a healthy dose of human hubris around intelligence. If I train a dog to go fetch me a beer from the fridge, that seems pretty smart. It learned how to understand a request, execute a complex sequence of actions for motion and planning, reason around occluded objects, understand depth and three-dimensional space, differentiate between objects, and more without me writing a sequence of rules to follow. I'd be happy to have a robot that intelligent. Plants don't have brains or neurons, yet they react to sensory stimuli such as light touch or sound, communicate, and even have memory and learn. It's not at a scale to do something interesting within one plant, but communities of plants are arguably the most successful organisms on the planet.

Andrew Ng likes to point to a Ferret experiment [0] where experimental neuroscientists rewired the visual inputs to the auditory cortex and the auditory cortex learned to "see" the visual signals! This suggests that there may be some amount of unified "learning" rules to the brain. Biology is never so clean but if humans have whatever this intelligence thing is that lesser organisms do not, there is another angle to look at things. We have a lot of neurons which suggests less per neuron specialization than say a C. elegans; basically large populations of neurons perform tasks that in lesser creatures single or few neurons may perform. While the trees are complex and important to understand for biology and medicine, the forest may have some high level rules.

Looking at something that appeared intelligent 50-100 years ago but seems mechanical now, we have text-to-speech. NETtalk was a simplified computational-neuroscience model from the 80s that could synthesize human speech. Today we have far better quality techniques that came out of R&D focused on things like large, high-quality labeled datasets for training, better sound cards, more processing power, and algorithmic improvements. Researchers didn't continue trying to model the brain and instead threw an HMM and a couple of other tricks at it. Now we're going full circle back to neural networks, but they aren't using any advances from biology and certainly aren't produced by computational neuroscientists like Terry Sejnowski.

It's funny because at the time of NETtalk they thought that learning to read would be an extremely hard problem because it incorporates so many components of the human brain [1]. While it certainly wasn't a trivial problem, state of the art OCR and object recognition came from similar artificial neural networks a decade later with LeNet and MNIST * . And no, ANNs != biological neuronal networks. The models of computational neuroscientists are different; for example look at [2, 3] for high level models or [4] for a tool.

Now I'm even more convinced than before that understanding the brain is great for humanity but wont be necessary for building intelligent systems that can perform tasks similar to biological ones.

[0] http://www.nature.com/nature/journal/v404/n6780/full/404871a...

[1] https://en.wikipedia.org/wiki/NETtalk_(artificial_neural_net...

[2] http://science.sciencemag.org/content/338/6111/1202

[3] http://ganguli-gang.stanford.edu/pdf/InvModelTheory.pdf

[4] http://neuralensemble.org/docs/PyNN/index.html

* Perhaps you could argue that convolutions are loosely inspired by neuron structure, but that sort of knowledge had existed for quite some time, with inspiration arguably within Camillo Golgi's amazing neuronal physiology diagrams from the 1870s let alone the 1960-80s. It's telling that papers on CNNs have little to no neuroscience and a lot of applied math :)


I did not mean to have implied that I think there is anything magical about intelligence. Of course, it is based on physical phenomena. I am doing my PhD right now and I try to incorporate as much AI/ML into it as possible.

What I meant to say is that our ANNs are such ridiculously simplified versions of real neural networks that there might still be something to be learnt from the real brain. This shall not imply that to achieve intelligence, the solution necessarily has to mimic a biological brain.

(Thank you for your detailed response. I love to read about this stuff!)

edit: missing word


That's equivalent to saying that advancements in airplanes will come from biology rather than from engineering fields. Biology can at best give hints for improving aerodynamics, and even then the problem can be solved better mathematically. The same will be true of neuroscience.

I believe it's more likely that engineering of AI will bring new ideas to neuroscience instead, just like after building helicopters we gained some intuition and understanding on why certain features of dragonflies exist.


The Wright brothers studied bird flight extensively, and drew important ideas from birds. An aeronautical engineer today has the luxury of a more mature field, and can probably afford to put less thought into bird flight.

Despite the history and significant progress in AI we still don't know that much about what approach will result in the first strong AI, or even if it's possible to make one. In an important sense AI is more like aeronautical engineering in 1902 than aeronautical engineering today, so it's possible that better understanding of biology will result in an important innovation.


> I know this will be controversial, psychology

Why would that be controversial? It seems to make extremely good sense and even though it may be doubted in some circles I think that most people involved in AI research are painfully aware of our limited understanding of our own psyche.


I disagree with hacknat (and the reason I suspect it's controversial) because psychology is generally far too high-level (e.g. focused on things like traits and high-level behavior and vague "perceptions") and has far too little empirical rigor for engineers to be able to build any sort of actual system from insights gleaned from psychology. The recent replication crisis in psychology does little to help this reputation.

Neuroscience does concern itself a great deal with low level biological mechanisms and architectures, and is more amenable to cross-pollination with machine learning.

Though I would like to point out that thus far deep learning has taken few ideas from neuroscience.


Neural networks are the big driver and arguably were taken from neuroscience.

The reason why psychology is 'too high-level' is exactly what is meant by 'we don't understand it': we're approaching the psyche at the macro level of observable traits, but there is a very large gap between the 'wiring' and the 'traits'. Some of that gap belongs to neuroscience, but quite possibly the larger part belongs to psychology. The two will meet somewhere in the middle.

A similar thing happens in biology, with DNA and genetics on the one side and embryology on the other.


They were taken from neuroscience... 50 years ago. Since then, very few ideas have been explicitly taken from neuroscience.


Exactly, and even then, half of the details of how real neurons work were ignored or discarded. NNs are now often taught by sticking closely to the math and avoiding the term and any reference to biology completely.


Does it matter? As long as the original source for the idea is remembered I don't particularly care which field originated it. There are so many instances and examples of cross pollination between IT and other sciences that in the end whether or not we stick to the original in a literal fashion or not should not matter (and if we did we'd probably lose out on a good bit of progress).


That's like saying there has been no development in automobiles since 1890. Sure, at first glance the cars from then are still like the cars from now. ICE (or electrical) power source, maybe a gearbox, some seats, a steering wheel and something to keep you out of the weather. But that's at the same time ignoring many years of work on those concepts in order to further refine them.

The neural networks that were taken 'explicitly from neuroscience' have gone through a vast transformation and that + a whole lot of work on training and other stuff besides is what powers the current crop of AI software. All the way to computers that learn about games, that label images and that drive cars with impressive accuracy to date.

The problem is - and I think that was what the original question was about - that neuroscience is rather very low level. We need something at the intermediate level, a 'useful building block' approach if you will, something that is not quite a fully formed intelligence but also not so basic as plumbing and wiring.


This analogy doesn't make any sense. Since automobiles were never patterned off anything in biology (they were the improvement of horse drawn carriages), I'm not sure what point you're trying to make with the analogy.

I never said there weren't any developments in neural nets, I'm just saying that few ideas have been taken from neuroscience (there certainly have been some ideas, like convolutions). In fact most things (including most tricks in convolutional neural nets in their current state) a neural net does, we know a brain does not do.


All it took for us to make aircraft was to look at birds and to realize something could be done (flying even though you weigh more than the air). The first wanna-be aircraft looked like birds, and some even attempted to work like them. Most of those designs went nowhere. Just as legs got replaced by the wheel in ground transportation, aircraft engines and eventually jets replaced the muscles of birds. It doesn't really matter what sets you on a particular path; what matters is the end result.

Right now, we have a state of AI that is very much limited by what we know about how our own machinery works. Better understanding of that machinery (just like better understanding of things like aerodynamics, rolling resistance and explosions led to better transportation) will help us in some way or other. And it may take another 50 years before we hit that next milestone, but the current progress all leads more or less directly back to one poor analogy with biological systems. Quite probably there are more waiting in the wings, whether through literal re-implementation or merely as inspiration it doesn't really matter.


> It doesn't really matter what sets you on a particular part, what matters is the end result.

To put it in other words: Birds are limited by evolution. They are not an optimal design - they are a successful reproductive design in a wide ecosystem where flying is a tool.

Our intelligence is no different.

This is something Feynman addressed in this beautiful talk (in the Q&A iirc): https://www.youtube.com/watch?v=EKWGGDXe5MA


I like the comparison to developing wings. What's interesting about the development of plane wings is that, while we used the underlying physics of how wings work to make plane wings, a plane gets its thrust differently than a bird, and looks and flies differently. Flapping wasn't very useful to us for planes; what things about the way minds work will not be useful for AI? I think once we understand the algorithms behind AI / intelligence / learning, what we choose to make may be very different from what a lot of people currently imagine AI or robots will be like.


> and has far too little empirical rigor for engineers

I think this is one of the key points of the problem: we, as engineers, expect this problem (imitating the human psyche / getting to true AI) to be definable rigorously. What if it can't be?

I'm not a religious person and I don't generally believe in something inside us that is "undefinable", but looking at our cultural and intellectual history of about 2,500 years I can see that there are lots of things that we haven't been able to define rigorously, but which are quintessential to the human psyche: poetry, telling jokes, word puns, the sentiment of nostalgia, and I could go on and on.


If anything the replication crisis in Psychology is proof of just how little we really know about intelligence. I think Psychology is necessary, because I think it needs to come up with an adequate definition of what intelligence is first, then neuroscience can say, "this is how that definition actually works", then engineers can build it.


Even now, deep learning is basically 1980s-era backprop with a few tweaks and better hardware - it turns out that despite decades of backlash against it, plain old backprop pretty much is the shit, as long as you have a lot of compute power and a few tricks up your sleeve!


Well, it just seems unlikely that psychology will produce significant insights here, because it mostly just looks at how the brain behaves. While this can be insightful, I doubt that it will explain intelligence in the end, because that happens a layer below. That's precisely what neuroscience covers. The other approach (just thinking about the problem and trying to build an AI from first principles) is CS.

So I would expect strong AI from CS, possibly with a paradigm shift caused by an advance in neuroscience.


I think someone needs to come up with a good theory of what intelligence even is, then we can try to discover its mechanism(s).


Here are four potential definitions:

1) Acting like a human (this is the Turing test approach)

2) Thinking like a human (this is cognitive science, and for now is focused on figuring out how humans think)

3) Thinking rationally, ie, following formal logic (the difficulty here is encoding all the information in the world as formal logic)

4) Acting rationally. That is, entities that react rationally to their goals (this one is notable because it allows fairly stupid entities).

These are all explained in more detail in Artificial Intelligence: A Modern Approach by Russell and Norvig.


Those aren't broad definitions at all! Quick question though, what is "acting", what is "thinking", what is "rational"? Your definition of intelligence practically includes the term in its definition(s).


I like the acting human approach (Turing) because it can be stated more precisely. One test is the classic Turing test -- fool a human judge.

Another line for acting human would be the ability to self direct learning in a variety of situations in which a reasonably intelligent human can learn. That means a single algorithmic framework that can learn go, navigate a maze, solve Sudoku, carry on a conversation and decide which of those things to do at any given time. The key is that the go playing skill would need to be acquired without explicitly programming for go.

I believe a lot of our intelligence is the ability to apply solutions to solved AI problems given the situation. The key is combining those skills (whether as a single algorithm or a variety of algorithms with an arbiter) and the ability to intelligently direct focus. That's why most researchers aren't confusing AlphaGo with general intelligence. It can play Go - period.


You can go to the source I cited to find further explanation.


AI has some good definitions as far as intelligence is concerned. Perhaps you are worried about consciousness or something, but this is not needed for a definition of what intelligence means.


Shane Legg (a DeepMind founder) and Marcus Hutter collected and categorized 70 different definitions of intelligence: http://arxiv.org/abs/0706.3639

Their attempt at a definition that synthesizes all the others is:

> Intelligence measures an agent's ability to achieve goals in a wide range of environments. - S. Legg and M. Hutter


Just like ornithologists invented the first airplane, and horse vets built the first automobile, etc.


While they weren't card-carrying ornithologists, the Wright brothers studied bird flight extensively and explicitly used it for their designs.

But they concluded that the flapping motion wasn't essential for propulsion, and could more easily be achieved by a propeller.


Though airplane designers probably looked at bird wings for inspiration. Similar to how Deep Mind are looking at the brain.


Early airplane inventors tried very hard to imitate flapping bird wings for propulsion.

https://en.wikipedia.org/wiki/Ornithopter

Some even worked.


I disagree. I think we need advances in philosophy to truly answer these questions in depth.


I'm not so sure about this. If I had to guess, I suspect things will go like this:

1. Scientists get a really good understanding of how learning works.

2. One third of the philosophers claim that their philosophical ideas are vindicated, another third claim that their models aren't really contradicted by this new scientific model, and the final third claim that the scientists have somehow missed the point, and that special philosophical learning is still unexplained.


Read David Deutsch's book eh?


Can someone more knowledgeable explain why biological systems are considered unsupervised instead of reinforcement based systems?

While it seems intuitive that most individual "intelligent" systems in animals can be seen as unsupervised, isn't life itself driven in a reinforced manner?


There is not enough reinforcement volume to learn anything complex.

If a child gets an external reward for every waking minute until she is 12 years old, that's still just 4.2 million signals.
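The arithmetic behind that figure, assuming roughly 16 waking hours a day and one reward signal per waking minute:

    signals = 12 * 365 * 16 * 60    # years * days * waking hours * minutes
    print(signals)                  # 4,204,800 -- about 4.2 million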

Reinforcement learning works for fine motor control and other tasks where the feedback loop is tight and immediate. Reinforcement and conditioning can also modulate high level cognition and behavior, but it's not the secret sauce of learning.


I could not follow your argument. Could you elaborate? (Honest question)


Reinforcement learning is learning by interacting with an environment. An RL agent learns from the consequences of its actions.

Biological systems don't live long enough to get enough feedback to learn complex behavior through consequences. An animal or human must be able to generalize and categorize what they have learned correctly without external feedback teaching them how to derive the function that does it.

For example, if you want to learn how to tie a complex knot through trial and error, you might have to try it a million times if you improve your behavior mainly through the consequences of your actions. In practice you probably try only 5-10 times before you learn to do it, and it involves pausing and looking at the problem. There is some kind of unsupervised model building happening that does not involve external input.
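For concreteness, "learning through the consequences of your actions" in the RL sense looks roughly like this tabular Q-learning sketch (my own minimal illustration; the env object with reset/step/actions is an assumed gym-style interface, and the large episode count is the point - this style of learning needs a lot of trials):

    import random
    from collections import defaultdict

    def q_learning(env, episodes=10000, alpha=0.1, gamma=0.99, eps=0.1):
        # env is assumed to expose reset() -> state, step(action) -> (state, reward, done),
        # and a list env.actions; the agent only ever sees the consequences of its actions.
        Q = defaultdict(float)
        for _ in range(episodes):
            s, done = env.reset(), False
            while not done:
                # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
                a = (random.choice(env.actions) if random.random() < eps
                     else max(env.actions, key=lambda a: Q[s, a]))
                s2, r, done = env.step(a)
                # Nudge the value estimate toward the observed reward plus the
                # discounted value of the best follow-up action.
                best_next = max(Q[s2, a2] for a2 in env.actions)
                Q[s, a] += alpha * (r + gamma * best_next - Q[s, a])
                s = s2
        return Q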


Ok, I see what you mean now.

But it seems like (or one could misunderstand you in a way that suggests) you see those concepts as mutually exclusive. I would assume a combination of reinforcement learning and unsupervised (and supervised) learning.

Rats have been trained to detect landmines and then go back to their trainers and show them the mine. This is complex behaviour that was taught using reinforcement (at least on a top level). There will be some unsupervised learning going on in the rat's brain on a lower level. But it is complex behaviour and it's been reinforcement learnt.


>see those concepts as mutually exclusive.

I certainly don't. Reinforcement or conditioning is part of it, but it's not the cake.


The brain needs to form structure before any sort of reinforcement learning is even possible. This structure comes from an "unsupervised" process we don't understand well.


Isn't it from millions of years of natural selection? The reason a human brain can think is just chance.


Personally I think a lot of it has to do with evolution. We certainly have different sections of the brain that are more utilized in different tasks: auditory, visual, motor, etc. I think it is likely a lot of the initial feedback is preset -- the instinctual positive and negative responses to certain stimuli.


As much as it doesn't help to call it intelligent design, chance is not any good either, if asking for a reason. Random chance is as good as no reason at all.


Mixing natural selection and intelligent design is a problem, but that's not a problem for natural selection.


What's the context? The GP probably meant that intelligence is an emergent property not directly coded in DNA.

Natural selection is not pure chance. At least seeing it that way doesn't yield anything except maybe lowered expectations. You want some expectations.

Hiding your theory behind Nature is not any less religious. "Nature" is pretty much a synonym for living things, for existence in general, and actually stems from a word for being born. Nature as the reason for things being born is thus kind of true by definition. The rationale behind generational evolution is the exact opposite of pure chance, though. Chance is only applicable if there are two possible outcomes. In hindsight, only one outcome was possible, so there are no chances, just facts.

The argument I am getting at is, self similarity seems to be such a common property, that it's maybe inevitable somewhere in a complex system. From there to self awareness and critical thinking is a long way to go, but what you call millions of years is a blink of an eye on a cosmological scale.

The idea of complexity includes enough variety to appear random to us, so we might agree that it's just a difficult to express concept and this is only semantic quibbling.


>While it seems intuitive that most individual "intelligent" systems in animals can be seen as unsupervised, isn't life itself driven in a reinforced manner?

I think apparently unsupervised systems could be explained by models that predict a future input given their current state and input. Correct predictions are reinforced. An RNN-like model.

(If the network is small, it should learn some compressed representation, which can be used as an input to a more abstract layer that makes more general predictions over a longer time period)
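A minimal version of that idea, sketched with a plain linear predictor instead of an RNN (all the names and numbers here are my own toy setup; the point is just that the "label" is whatever input arrives next, so no external supervisor is needed):

    import numpy as np

    rng = np.random.default_rng(0)
    d, T, lr = 8, 5000, 0.01

    # A toy "sensory stream" with hidden linear structure the learner can discover.
    A = 0.25 * rng.standard_normal((d, d))
    x = np.zeros((T, d))
    x[0] = rng.standard_normal(d)
    for t in range(T - 1):
        x[t + 1] = A @ x[t] + rng.standard_normal(d)

    # Learn to predict the next input from the current one. The "label" is just
    # whatever arrives next, so no external supervisor is involved.
    W = np.zeros((d, d))
    for t in range(T - 1):
        err = x[t + 1] - W @ x[t]
        W += lr * np.outer(err, x[t])    # delta rule: reduce the prediction error

    # Mean squared prediction error over the stream; it falls toward the noise floor
    # as W approaches the hidden dynamics A.
    print(np.mean((x[1:] - x[:-1] @ W.T) ** 2))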


In AI the definition of reinforcement learning vs unsupervised is that reinforcement requires external identification of the correct result to compare with the given result.

If going through life you had artificial reality display that flashed "apple" in front of every apple and "orange" in front of every orange you would have reinforcement learning. There is certainly a major component of reinforcement in teaching, but that does not account for all of learning.

With humans we are able to transfer concepts to different situations without examples and there are a lot of categories that are identified without any explicitly reinforced examples.


You're describing supervised learning. Reinforcement learning is more like you stick random things in your mouth, and if one tastes good, you learn to stick that kind of thing in your mouth more often.


Well, yes, I was describing supervised learning (as what doesn't happen); when labels are not present, it's unsupervised (which makes things much more difficult). Reinforcement is more like a special case between supervised and unsupervised: difficult like unsupervised because the labels aren't known, but with some feedback that is not a direct answer. Sorry for the confusion. There is very little in the way of purely unsupervised learning. I think the distinction between unsupervised and reinforcement is not very well defined. I would argue that even a "pure" unsupervised algorithm like k-means has a kind of reinforcement in the group means.
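For what it's worth, here is the standard k-means loop that remark alludes to (a textbook version, nothing specific to this thread); the only "feedback" the algorithm ever gets is the re-computed group means at each iteration:

    import numpy as np

    def kmeans(X, k, iters=100, seed=0):
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=k, replace=False)]   # random initial means
        for _ in range(iters):
            # Assignment step: each point "labels itself" with its nearest center.
            dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
            labels = dists.argmin(axis=1)
            # Update step: the recomputed group means are the only feedback there is.
            centers = np.array([X[labels == c].mean(axis=0) if np.any(labels == c)
                                else centers[c] for c in range(k)])
        return centers, labels

    X = np.vstack([np.random.randn(50, 2) + 3, np.random.randn(50, 2) - 3])
    centers, labels = kmeans(X, k=2)
    print(centers.round(1))   # roughly [3, 3] and [-3, -3], in some order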


I would agree.

Could one not argue that even unsupervised learning is kind of reinforced by let's say the emotion you obtain from successfully performing a learning task? Then the reward signal does not come from the external environment but from resolving cognitive dissonance in the brain.

Happy to be proved wrong by experts here.


I would like future competitions between AIs and humans to have a "power budget" for training and during gameplay. For example, a chess grandmaster that has played for 20 years would have spent X amount of energy training. The AI should get an equivalent budget to train with. During gameplay, the AI would get the same 20 watts [1] that a human has. This would drive the development of more efficient hardware instead of throwing power at the problem :)

[1] http://www.popsci.com/technology/article/2009-11/neuron-comp...


I'm surprised he'd make such an optimistic statement. I think a better analogy would be:

We figured out how to make icing, but we still don't really know what a cake is.


He addresses your query, not in the analogy but in the additional comment.

"And that's just an obstacle we know about. What about all the ones we don't know about?"

With sufficient scrutiny all analogies break down, so readers must be generous with their interpretation.


We can describe Solomonoff-based agents like AIXI. None of them are fully sufficient for true general AI, but you could probably accomplish quite a bit with an AIXI-like agent.


As far as I know, the only thing existing AIXI implementations have demonstrated is learning to play Pac-Man at a somewhat reasonable, but in no way stellar, level.


Yes, it is not tractable. It serves as an example of a definition of an agent though.


I thought full AIXI wasn't computable though?


You can do time-bound AIXI and for a large enough time bound it's sufficient for all practical situations.

AIXI is not tractable, but I'm responding to the parent comment saying we don't even know what a cake is.


I don't consider this mathematical model of induction/prediction to be an accurate description of "intelligence" in the sense of "True AI" (which itself is open to interpretation).

This is the cake: human intelligence. Right now we have pieces of it, but even the end goal isn't well defined. We know the human mind makes predictions, recognizes patterns, can formulate plans, works with both concrete and fuzzy information, and so on. But we still don't understand what human intelligence really is, overall.


It sounds reversed to me - shouldn't the "cherry" be supervised learning and the "icing" be reinforcement learning? At least insofar as reinforcement learning is closer to the "cake" of unsupervised learning, as there is less feedback required for a reinforcement learning system to work (a binary correctness signal rather than an n-dimensional label signal).

It might also be argued that most "unsupervised learning" in animals can be broken down into a relatively simple unsupervised segment (e.g., an "am I eating nice food" partition function) and a more complicated reinforcement segment (e.g. a "what is the best next thing to do to obtain nice food?" function.) I'm sure someone like Yann LeCun is familiar with such arguments, though.


I wish the term "true AI" were replaced with "strong AI" or "artificial general intelligence" or some such term. We already have true AI - it's a vast, thriving industry. AlphaGo is obviously a true, legitimate, actual, real, nonfictional example of artificial intelligence, as are Google Search, the Facebook Newsfeed, Siri, the Amazon Echo, etc.


> We already have true AI - it's a vast, thriving industry.

Or how about calling that vast, thriving industry "weak AI," or "clever algorithms," which is what they really are. The original definition of AI was what we now call strong AI, but after some lesser problems were solved without actually creating strong AI, we had to come up with some name for those.


I want to see an AI that can improve itself by developing new algorithms for arbitrary tasks. I wonder how far off we are from that now?


You know, if you're at the point where you can give a human-readable spec of the problem and the AI can make a passable attempt at it, that's basically the Turing Test -- hence why I think it deserves its status as holy grail. Something that passes would really give the impression of "there's a ghost inside here".


Rather than a ghost, I wonder if we'll ever have the average person looking at brains and thinking "there's a program inside here."

And then to reverse it, imagine that the world really is some kind of massive simulation... and that there are backups of the save()-ed :)



The problem is that fundamentally all our AI techniques are heavily data-driven. It's not clear what sort of data to feed in to represent good/bad algorithm design.


Interestingly, just ~5 years ago the term "AI" was frowned upon and people insisted on using the term "machine learning". The reasoning was that "AI" is just too convoluted a term, because people will insist on comparing with humans, and then the whole question of consciousness invariably arises, which turns scientific inquiry into undesirable debates.


I think that the refusal to seriously engage with consciousness is the main obstacle to progress toward general AI.


I guess it boils down to a difference in opinion as to what it means to be intelligent.

IMO, AlphaGo isn't "intelligent" because all it can do is play Go. For example, it can't be taught to play chess without completely reprogramming it.

Surely it's using a lot of clever algorithms, but where's the intelligence?

TBH, there's not much point to this post, because the intelligence in AI has been twisted more towards your usage almost since the beginning.


I agree with you and would like to double down: Intelligence is defined as "...the ability to acquire and apply knowledge and skills."

Even if AlphaGo could play chess, manage air traffic control, and play Go at the same time, it would never know it was doing that. As you mentioned above, AlphaGo's "intelligence" is specifically pre-programmed algorithms and accurate numeric inputs. If it can't create its own algorithms or translate abstract data into numbers it can crunch on its own, then there is no intelligence there. It just "smells" like intelligence.


I tried to argue once that the term "synthetic AI" would be better than AGI, as it removes any notion that artificial implies fake. [Note to surprised self: apparently there is a Wikipedia page describing this exact argument https://en.wikipedia.org/wiki/Synthetic_intelligence]


"Scotsman AI"


This seems like semantic quibbling. All those terms seem synonymous to me.


I feel there are legitimate reasons to split them into those terms, as there are some significant distinctions to be made in terms of the "level of intelligence" of the AI.

For a much more detailed exploration of this topic, I think Wait but Why's article did a pretty thorough job: http://waitbutwhy.com/2015/01/artificial-intelligence-revolu...


I prefer computational intelligence to AI, so maybe CI?


Strong AI means a different thing.


"Useful AI"?


No one is claiming that alphaGo is close to AGI. At least not anyone that understands the methods it uses. What alphaGo is, is an example of AI progress. There has been a rapid increase in progress in the field of AI. We are still a ways away from AGI, but it's now in sight. Just outside the edge of our vision. Almost surely within our lifetime, at this rate.


There has been a lot of progress in AI research, and AlphaGo does represent some of that. But AlphaGo is not any kind of progress from the AI before it towards AGI. It's progress, but progress mostly perpendicular to the goal of "true" or "general" intelligence. It's similar to the same kind of AI that was built for chess, just for a much more difficult problem in that same class of problems. No problems useful to general intelligence were solved in making AlphaGo, just harder problems in the field of special intelligence. It's still a tremendous accomplishment, just not one that puts AGI happening within any of our lifetimes.


> We are still a ways away from AGI, but it's now in sight.

These people in 1956 also thought that AGI was in sight:

https://en.wikipedia.org/wiki/Dartmouth_Conferences


Not most of them, and not with any certainty. The Dartmouth conference was just the very first investigation into whether it was even possible and what it would take. Of course they didn't know when AGI would be developed. The computers they had access to were smaller than my calculator!

Now we have massive computers and 60 years of progress. And lots of real world successes of AI technology. Surely we are in a much better position to predict than people of 60 years ago!


> Not most of them, and not with any certainty.

This probably holds true also today.


> but it's now in sight. Just outside the edge of our vision.

Well if it's outside the edge of our vision, it is out of sight.


It's progress, very neat progress, but it's not even progress in the sense that new kinds of problems are being solved. AI for Go and similar games has existed for ages but, given the huge search space of the game of Go, it had never been good enough at playing it to beat a human.

Not to diminish the achievement, but let's put it into context. To talk about being closer to AGI, a different kind of progress would need to be shown: AI solving different -new- kinds of problems.


"AI solving different -new- kinds of problems."

Which of course, it does. Do you honestly believe Go and other games are the only AI problems where progress is being made?


OK, but we were talking about AlphaGo here, and what it represents as an example of progress in AI.


If you look at how a child learns, there's a huge amount of supervised learning. Parents spend lots of time on dos and don'ts, giving specific instructions on everything from how to use the toilet to how to construct a correct sentence. Language development, object identification, pattern matching, comprehension, math skills, motor skills, developing logic - these activities involve a huge amount of supervised training that runs day after day and year after year. There are certainly unsupervised elements, like the ability to recognize phonemes in speech, track objects, infer despite occlusion, stand up and walk, make meaningful sounds, identify faces, construct sequences of actions to achieve a goal, avoid safety risks learned from past experience, and so on. However, the typical child goes through an unparalleled amount of supervised learning. There was a case of a child who was locked in a room for over a decade, and she never developed most of her language, speech, or social skills. It seems unsupervised learning can't be all of the cake.


Interesting. I have heard the opposite of this.

Supervised learning may be how it looks from the outside, but consider that out of the >65,700,000 waking seconds of a child's life up to age 5, there are maybe only a few dozen instances of supervised adult instruction per day. Besides those, what do neurons do the remaining 99.99% of the time?

Part of the problem might be that comparing supervised and unsupervised learning 'effectiveness' is a bit apples-and-oranges. Their effect together is highly collaborative. Children have to develop abstractions on their own before you can supervise them on those abstractions. It is probably fair to say that a key part of human general intelligence is creating high-level representations of low-level stimuli. It might also be fair to say that this is what the brain is doing 100% of the time.

So if I may hand-wave a little: while supervised learning can make a child better at maximizing objectives over those high-level representations (objectives they may be aware of through unsupervised observation), for the most part it does not fundamentally change the structure of those things in the child's brain. This makes unsupervised learning almost all of the cake to me.

My post has the caveat that children undergo a lot of other objective-based learning besides explicit instruction from adults, and all of this maps only fuzzily to supervised vs unsupervised learning in AI, which is the issue from the submitted post.


> there are maybe only a few dozen instances of supervised adult instruction per day

There might be only a few dozen instances but I think each instance has a lasting effect which makes up for this.

If you scold a child for something stupid it did then it will remember this for a long-ish time. Same for teaching him things or correcting stuff.

I guess you show the child some correct behaviour in a few instances, and this is then used internally as a guideline for self-learning.


That is a great point. Talking about supervised vs unsupervised vs reinforcement learning is most straightforward with tasks like language learning, audio processing, image processing, and playing discrete games. It is possible to see broad similarities between deep learning approaches and human cognition for several of these tasks. But when you start getting into tasks like the formation of narrative identity, things get very complicated.

Maybe one major difference between playing a game and forming a personality is that these early important interactions don't just adjust wirings in the cerebral cortex, the part of the brain most responsible for general intelligence. It goes straight to our emotional memory bank in the limbic system, which is all about learning an incredibly important objective function: to survive. But very high level features formed by unsupervised learning can do this, not just reptilian predator detection routines. Being scolded or corrected can have a powerful effect on future motivation. Suffice to say, artificial intelligences don't currently worry about this.


> Supervised learning may be how it looks from the outside, but consider that out of the >65,700,000 waking seconds of a child's life up to age 5, there are maybe only a few dozen instances of supervised adult instruction per day. Besides those, what do neurons do the remaining 99.99% of the time?

This seems like a really facile analysis. For example, if I read a child a storybook, I'm deliberately providing several signals every second. That's a "single instance" but I've effectively provided a lot of training information. At least enough to keep a child's mind busy for 3600 seconds.


I understand what you're saying and I agree with you. Human learning is based almost entirely around feedback and trial and error. But by comparison, machine learning requires a lot of explicitly labelled data, and that's what I think Yann LeCun was talking about.

For example, to train a classifier to identify birds you need a large number of pictures of birds, maybe millions of varied examples. And then you'd need an equivalent number of images of things that aren't birds, or things that look similar to birds. Humans are able to make that same classification with a very small number of examples, maybe even n = 2. If someone saw a bird for the first time and then another one shortly after, they would be able to put the two together immediately and make a lot of inferences on top of that. Machine learning algorithms aren't even close to that yet.
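
To make that concrete, here's a minimal toy sketch (my own made-up features and numbers, not anyone's real pipeline) of why the supervised regime is so data-hungry: the same classifier that works well with thousands of labelled examples is nearly useless at n = 2.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def fake_features(label, n):
        # Stand-in for image features; a real system would use a CNN or similar.
        center = 1.0 if label == 1 else -1.0
        return rng.normal(center, 4.0, size=(n, 64))

    # The regime supervised learning needs: thousands of labelled examples.
    X_big = np.vstack([fake_features(1, 5000), fake_features(0, 5000)])
    y_big = np.array([1] * 5000 + [0] * 5000)

    # The regime a human handles easily: one bird, one non-bird.
    X_tiny = np.vstack([fake_features(1, 1), fake_features(0, 1)])
    y_tiny = np.array([1, 0])

    X_test = np.vstack([fake_features(1, 500), fake_features(0, 500)])
    y_test = np.array([1] * 500 + [0] * 500)

    big = LogisticRegression(max_iter=1000).fit(X_big, y_big)
    tiny = LogisticRegression(max_iter=1000).fit(X_tiny, y_tiny)
    print("trained on 10,000 labels:", big.score(X_test, y_test))
    print("trained on 2 labels:     ", tiny.score(X_test, y_test))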

Supervised learning sucks because it requires vast quantities of labelled and prepared data, which is expensive. Top ML researchers must feel like they're sitting in a Formula One racer but can't afford any gas for it. AlphaGo shows that a computer can do almost anything, but only with a significant investment of resources for each specific task. Unsupervised algorithms tend to be less effective than supervised methods right now, but once that changes it will open up a new world.


As a parent, I have quite an opposite experience to your claims.

> Parents spend lots of time in do and don't and giving specific instructions on everything from how to use toilet to how to construct a correct sentence.

What you're missing is that the child is initially a blank page. He has no knowledge of language either, so giving instructions to somebody that doesn't understand your language is challenging, to say the least. And acquiring language is something they do just by listening and observing others, in a very cool game of trial and error. At some point a child starts mimicking what the parent does, repeating words or gestures and then notices the triggered response.

They learn best by observing what you do and not by what you say. They also learn by discomfort. I taught my boy to use the chamber pot, not by language, but by leaving him without diapers and letting him pee on himself, until he got the hint that he should use the chamber pot :-)

And of course, you might classify this as "supervised learning", but these are just shortcuts. Because of our ability to communicate in speech and writing, we learn from the acquired knowledge of our ancestors. Isolate a couple of toddlers from the world and you'll eventually see that they'll invent their own language and learn by themselves not to shit where they eat or sleep.


If a child got an external reward every waking minute until she was 12 years old, that would be just 4.2 million feedback signals (12 years x 365 days x 16 waking hours x 60 minutes is roughly 4.2 million). That's not enough to learn complex behavior.

Reinforcement learning works for fine motor control and other tasks where the feedback loop is tight and immediate. Reinforcement and conditioning can also modulate high level cognition and behavior, but it's not the secret sauce of learning.


This sign language spontaneously developed mostly without supervision:

https://en.wikipedia.org/wiki/Nicaraguan_Sign_Language


Err. A human is capable of unsupervised learning. Try locking two kids in a room for over a decade. Maybe she didn't develop speech and social skills because there was no one to speak to or socialize with.



Is anyone working on an embodied AI? Even a simulated body might help. Ultimately intelligence is only useful insofar as it guides the body's motion. We often tend to minimize the physical act of, say, writing down a theorem or actually applying paint to the canvas, but there are certain actions, like playing a musical instrument, that blur the distinction between "physical" and "mental". Indeed, even 'purely mental' things like having an "intuition" about physics are certainly guided by one's embodied experience.


You might look at the work of Rodney Brooks. There was a wonderful documentary called "Fast, Cheap, and Out of Control", and I'm pretty sure it was in there that he explains his notion that true intelligence is embodied. That was years ago and I haven't kept up with the field, but perhaps it's a useful starting point for you.


Cool, thanks. I think the idea is also important to Zoltan Torey, who believed that "consciousness" is essentially a simulation of a kind of arm in our mind. ("The Conscious Mind"). But I've never heard of anyone attempting to train an AI within a simulated world. The nice thing about doing that is that you aren't limited by the physics of robotics. I suspect that for basic intelligence, a relatively low-fidelity simulation would be sufficient. (Although there are some very important moral issues presented by such a program!)


> But I've never heard of anyone attempting to train an AI within a simulated world.

Ah, then you might want to follow the trail from SHRDLU:

http://hci.stanford.edu/winograd/shrdlu/

https://en.wikipedia.org/wiki/SHRDLU


Thanks for the link! That's pretty close to what I mean, but I was thinking more like an embodied AI living in a simulated world. So, for example, you might start from a Counter-Strike bot: input to the bot would be its 'camera', and output would be its position and actions.


A while ago I helped develop iSpike - http://ispike.sourceforge.net/, which is an interface between the iCub robot (http://www.icub.org/) and a Spiking Neural Network simulator (We used SpikeStream - http://spikestream.sourceforge.net/)

The coolest part is that there exists a pretty complete simulator for the iCub robot that anyone who is interested can run on their computer - http://eris.liralab.it/wiki/Simulator_README


The physical world is slow to give you feedback. I think DeepMind has the right approach with video games: you can move to 3D games, then to games with physics where the agent directly controls artificial muscles, all while iterating much faster than you can in the real world.

https://www.youtube.com/watch?v=nMR5mjCFZCw&feature=youtu.be



I wouldn't go as far as calling that embodied AI, since the only percept is a camera.


What we need next are more systems which can predict "what is likely to happen if this is done". Google's automatic driving systems actually do that. Google tries hard to predict the possible and likely actions of other road users. This is the beginning of "common sense".


> As I've said in previous statements: most of human and animal learning is unsupervised learning.

I don't think that's true. When a baby is learning to use the muscles of its hands to wave them around, there's no teacher to tell it what its goal should be. But physics and pain teach it fairly efficiently which moves are a bad idea.

It has a built-in face detection engine, and orienting toward a face and attempting to move and reach for it is a clear goal. Reward circuits in the brain do the supervision.


The difference between supervised and unsupervised is that the inputs are paired with known outputs for supervised. In unsupervised the agent has (initially) no knowledge of what the outputs will be, given the inputs.

The baby does not know (initially) that something will cause pain, or the extremities of its joints. It must learn this over time and through experience. The baby must also learn how to use the built-in components, as it has no idea what outputs will occur given the inputs.

As you allude to, there are built in mechanisms/configurations in the brain which provide various forms of feedback, as well as built in behaviours and responses. If there was no basic structure to the brain, I think it would be almost impossible for an unsupervised agent to develop and learn to the complexity and level of a human brain. These basic behaviours significantly speed the initial development process up.


> The difference between supervised and unsupervised is that the inputs are paired with known outputs for supervised. In unsupervised the agent has (initially) no knowledge of what the outputs will be, given the inputs.

I'd still call learning to move supervised (or reinforced), then. You're feeding the world some input (muscle contractions), and the world immediately gives you the output in terms of pain. You use it to adjust your internal function. After a while you have a pretty good function that maps your muscle contractions to whether a move is valid or not, and you can generalize it to when your position is different, and then get to other stuff, like testing which moves can alter what you see and feel (apart from your hands, which you already know about).

> If there was no basic structure to the brain, I think it would be almost impossible for an unsupervised agent to develop and learn to the complexity and level of a human brain.

I agree that there's some stuff built in, but I think it's surprisingly little. How little, I think, we can see when we learn about people who are blind from birth or have deformities. They still learn to operate their bodies as well as is physically possible.

Whatever a person can relearn after physical brain damage, I think, can't be built in. I think the structure we see in the brain is the result of built-ins plus various structural optimizations that make some stuff faster (or more energy efficient) than if the structure were different.

For me, the real trick in neural networks is finding out how exactly natural neurons learn, because it's not back-propagation, and that matters. Do we know that? In detail? How does scratching yourself on the face as a baby translate into chemical changes in the synapses of neurons that fired recently?


Sorry but what you describe is not what supervised learning means. It has a specific meaning in AI. At best, what you describe is reinforcement learning. And I don't think this is how we learn to recognize people's faces.


I'm not sure if supervised learning is such a narrow term, but I'll take your word for it. As for recognizing faces:

I think we have that pretty much baked into the hardware. Faces are recognized immediately, and not just by humans - also by animals. I think people who have lost the ability to see faces can't re-learn it.


If artificial intelligence is the cake, true AI is the ability to argue about whether cake is a useful analogy.


The ability to argue, or the capacity to reason that argument? If the AI can convincingly argue that the analogy is apt, without actually reasoning or "thinking", does that make it the holy grail? I'd suggest that self awareness, and the ability to reason, would be true AI, and not just a glorified Turing test in the form of an effective ability to pose an argument.


(1) Adversarial learning is unsupervised and works great. Most of language modeling is unsupervised (you predict the next word, but it's not real supervision because it's self-supervision). There are many works in computer vision that are unsupervised and still give more or less reasonable performance. See e.g. http://arxiv.org/pdf/1511.05045v2.pdf for unsupervised learning in action recognition, also http://arxiv.org/pdf/1511.06434v2.pdf and http://www.arxiv-sanity.com/search?q=unsupervised

(2) ImageNet supervision gives you a lot of information for solving other computer vision tasks. So perhaps we don't need to learn everything in an unsupervised manner; we might learn most of the features relevant to most tasks using a handful of supervised tasks. It's a kind of cheating, but a very reasonable one.

Moreover,

(3) We are now seeing a fantastic decrease in perplexity (btw, it's all unsupervised = self-supervised). It's quite probable that in the very near future neural chat bots will write reasonable stories, answer intelligibly with common sense, and discuss things. All of this would be a mere consequence of low enough perplexity. If a neural net says something inconsistent, it means it assigns too much probability to some inappropriate words, i.e., its perplexity isn't optimized yet.

(4) It's quite probable that this would open the final stretch toward human-level AI. AI would be able to learn from textbooks, scientific articles, and video lectures. Btw, http://arxiv.org/pdf/1602.03218.pdf offers the potential to synthesize IBM Watson with deep learning. Maybe that final stretch toward human-level AI has been opened already.
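
For anyone unfamiliar with the perplexity point in (3), here's a tiny sketch of what "self-supervision" and perplexity mean for a language model. The corpus, the bigram model, and the add-alpha smoothing are purely illustrative (nothing from the linked papers):

    import math
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    # Self-supervision: the "label" for each word is simply the next word.
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    vocab = sorted(set(corpus))

    def prob(prev, nxt, alpha=1.0):
        # Add-alpha smoothing so unseen continuations keep nonzero probability.
        counts = bigrams[prev]
        return (counts[nxt] + alpha) / (sum(counts.values()) + alpha * len(vocab))

    test = "the dog sat on the mat .".split()
    log_prob = sum(math.log(prob(p, n)) for p, n in zip(test, test[1:]))
    perplexity = math.exp(-log_prob / (len(test) - 1))
    print("perplexity:", round(perplexity, 2))  # lower = the model is less surprised

A real neural language model replaces the bigram table with a network, but the training signal and the metric have the same shape: no human labels, just the next token.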


There's also a huge issue around problem-posing and degrees of freedom, that doesn't necessarily get better as your AI tools improve. Go has a fairly large state space, but limited potential moves per turn, well-defined decision points, limited time constraints, and only one well-defined victory condition. The complexity is minuscule compared to even something relatively well-structured like "maximize risk-adjusted return via stock trades".


Can someone elaborate on the difference between reinforcement learning and unsupervised learning? It seems that I mistakenly think that humans learn through reinforcement learning, that we learn from feedback from the outside world. I mean, without feedback from adults, can a baby even learn how to walk?


Not an expert, but my understanding is that humans can learn many things through reinforcement learning, but most of our intelligent decisions are a result of unsupervised learning.

For instance, if the stove element was red hot and you touched it, you'd receive a feeling of pain. Reinforcement learning would suggest that you shouldn't do this again, and you might learn to not touch stove elements when they are red hot anymore.

However, a human is likely to additionally realize that touching a red hot marshmallow stick would burn them as well. And at the same time, a human can also tell that a red ball is safe to touch. This sort of behaviour (internally labeling things and deciding what it is that made the stove element dangerous so you can apply that to other things) would be unsupervised learning.
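
A toy way to see that distinction (my own framing, made-up objects and features, not a real learning algorithm): a pure reward association memorises "stove element means pain", while a learner that has formed a "hot" feature can generalise to things it has never been burned by.

    # One painful experience with the hot stove.
    experiences = [
        {"object": "stove element", "hot": True, "pain": True},
    ]

    # 1) Object-level reward memory: remembers only the exact thing touched.
    reward_memory = {e["object"]: e["pain"] for e in experiences}

    # 2) Feature-level rule induced from the same single experience.
    hot_things_hurt = any(e["hot"] and e["pain"] for e in experiences)

    queries = [
        {"object": "stove element",     "hot": True},
        {"object": "marshmallow stick", "hot": True},   # never touched before
        {"object": "red ball",          "hot": False},  # red, but safe
    ]

    for q in queries:
        memorised = reward_memory.get(q["object"], "never seen")
        generalised = hot_things_hurt and q["hot"]
        print(f'{q["object"]:>17}: reward memory pain={memorised}, '
              f'feature rule pain={generalised}')

The hard part, of course, is that the human invents the "hot" feature without anyone labelling it; that internal labeling is the unsupervised piece the parent comment describes.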


Interesting, so it's the ability to generalize attributes selectively? That sounds like it would make for the difference between the specialized deep learning we have and a more general intelligence.

Could that be accomplished if NN problems are broken into features, and those features are individually tested against new information? Though you'd need a layer for feature selection, and it still lacks the ability to pick features without training.


> Interesting, so it's the ability to generalize attributes selectively?

In technical terms it is the ability to generate its own features and classifications for making decisions, where in supervised learning a human provides the features and classifications.

> Could that be accomplished if NN problems are broken into features, and those features are individually tested against new information?

If I understood your example correctly, that would be an example of supervised learning, as you pointed out it lacks the ability to pick features without being told what they are. There are types of unsupervised learning that exist and work quite well, for example cluster analysis [1].

[1] https://en.wikipedia.org/wiki/Cluster_analysis
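
As a tiny concrete example of the cluster analysis mentioned in [1] (toy 2-D points, my own numbers): the algorithm is never given labels, yet it recovers the two groups on its own.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Two blobs of unlabelled points; no class information is ever provided.
    points = np.vstack([
        rng.normal([0.0, 0.0], 0.5, size=(100, 2)),
        rng.normal([5.0, 5.0], 0.5, size=(100, 2)),
    ])

    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
    print(kmeans.cluster_centers_)  # ends up near (0, 0) and (5, 5)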


Computers follow instructions written for them. Humans write their own instructions (and can even disregard instructions).


Unsupervised learning in a technical sense means things like clustering algorithms where the goal is to discover structure in a data set without known labels (or numeric response).

If you show a human a single unlabeled picture of a platypus (or whatever) they will know instantly with a high degree of certainty that it is a new category of thing and be able to recognize additional images of it as belonging to that class even without being told a label for it.

Our best image classification algorithms can't do that, even with labels they need a lot of example images to learn to identify a new class.


Exactly what I was going to ask! Is there such a thing as unsupervised learning? A human mind is constantly fed a stream of learning data. And while some things may be predisposed by our DNA, I think for the most part we learn directly from the rules and guidelines dictated by our parents first and our society later.


So, it's like when I learned salsa dancing: I couldn't hear the 'beat' of the music, and was always out of time. So what I did was just listen to salsa music whilst cycling and walking for a couple of weeks. Now I can hear the 'beat', and lots more structure in the music to boot.


Ah, yes. This is a great example. I see what you're saying.


The statement that he's critiquing does reflect the widespread, overly simplistic view of AI. Contrary to the hype, recent events represent only a partial development/peeling of the top layer from the AI onion, which has more known unknowns and unknown unknowns than known knowns.


Totally agree. It's a bit like when some physicists were convinced that there wouldn't be any other great breakthroughs after Maxwell's theory of electromagnetism. Maybe Yann LeCun is the Einstein of machine learning? haha


It seems that AI does well when the problem and the performance metrics are well defined: chess, Go, various scheduling problems, pattern recognition, etc. At the very least we can track, quantitatively, how far off we are from a satisfactory solution, and we know we can only ever get closer.

"True", or general-purpose AI, is harder to pin down, and thus harder to define well. I'd argue that the moment we have define it formally (and thus provided the relevant performance metrics) is the moment we have reduced it to a specialized AI problem.


The Turing test is one such measure that is unbeatable by specialized AIs. Language understanding in general requires some degree of intelligence.

I don't think the Turing test should be an actual goal of AI researchers. Turing just proposed it as a hypothetical example.


It seems to me one of the higher hurdles for creating a general-purpose intelligence is human empathy. Without it you are left with creating a nearly infinite-length rules engine.

When you ask your AI maid to vacuum your house, you would prefer it not to plow through the closet door to grab the vacuum, rip the battery out of your car and hardwire it to the vacuum, and then proceed to clean your carpets. If you don't want to create a list of rules for every conceivable situation, the AI will need to have some understanding of human emotions and desires.


I'm not sure why you connect that behaviour to empathy. There are two simple rules here that apply to almost every possible situation, as well as the one you presented. They aren't even connected to human emotions. That's pure economy.

1. Minimise work (plugging into socket has lower cost / effort than what you described)

2. Minimise irreversible changes (or cost of reversing them)

There are so many people with low empathy who are useful that I don't think this is an issue until someone needs a personal companion rather than a general-purpose AI.
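
A rough sketch of those two rules as a single cost function (the actions, numbers, and weight below are all made up for illustration):

    # Candidate ways to get the vacuum running: (action, effort, irreversible damage)
    candidates = [
        ("plug vacuum into wall socket",             1.0,  0.0),
        ("open closet door and take the vacuum out", 2.0,  0.0),
        ("plow through the closet door",             4.0, 50.0),
        ("rip the car battery out and hardwire it",  8.0, 80.0),
    ]

    IRREVERSIBILITY_WEIGHT = 10.0  # assumed: how strongly rule 2 dominates rule 1

    def cost(effort, damage):
        # Rule 1: minimise work. Rule 2: heavily penalise irreversible changes.
        return effort + IRREVERSIBILITY_WEIGHT * damage

    best = min(candidates, key=lambda c: cost(c[1], c[2]))
    print("chosen action:", best[0])

The open question, as the replies point out, is where the effort and damage estimates come from for situations nobody enumerated in advance.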


These rules cannot be applied in every situation. Then you're back to writing a bunch of rules. If you told a general AI to take care of a puppy for a few days, it would end up putting a diaper on it and keeping it in a crate 24 hours a day. That would be the least amount of work to take care of the puppy, and would minimize the chance of the puppy hurting itself or damaging anything else.


This is close to what happens when you leave your pet at a cheap pet hotel though. General purpose humans do it the same way.


> If you don't want to create a list of rules for every conceivable situation, the AI will need to have some understanding of human emotions and desires

My roomba does a good job vacuuming around table legs. Also sofa legs. Also stationary human legs. Also lamps. Also random poles sticking out of the floor. I imagine it would even do a good job vacuuming around a stalactite that made it to the floor. Are you saying someone programmed every one of these situations into it? Or does it instead have some understanding of human emotions? Either way, I'm surprised because it was so cheap I figured it has some generalizable built-in rules that applied to some categories of inputs, the infinite possible instances/variations of members of these categories not having to be enumerated. :)


There is a common situation that a Roomba does horribly at: toys on the floor that a person would just pick up or move before vacuuming.

Also, Roombas don't have any capacity to learn. They are just executing their bump and clean algorithm (except for the newest one which actually maps out rooms).


Roombas could learn easily if they just asked you how good of a job they did every time.


I don't think empathy is an appropriate word for this. It is defined as the ability to share the feelings of others, while in this case what you want the robots to have is what most people would call 'common sense'.


What a strange response. You read that as me comparing human capabilities to roomba's?

The point, which you couldn't have missed more blatantly, is that one doesn't need to program every conceivable situation into an automaton for it to be able to behave properly in a whole range of normal situations.


But you have to program every conceivable 'normal' situation, so really you are just using a term with a variable definition, like 'normal', to argue tautologically.


If you haven't read them yet, I would suggest Isaac Asimov's Robot series, which discusses the implications of hardware-level 'laws' to which robots must abide. His take was that these laws would constrain behavior according to human judgments of harm and benefit. I wonder if there is another way of looking at it, one where AI action is not only constrained by human sensibility, but informed by it.


We keep trying to engineer AI rather than reverse-engineer it. The thing with living organisms is that the neural network underlying their intelligence is a product of the evolutionary design of an organism situated in the real physical world, with its laws of physics, space, and time. This is where the bootstrapping comes in. Unsupervised learning is built on top of this. Trying to sidestep this could make it difficult to get to general AI.


To be fair AlphaGo never decided Go was a fun and worthy challenge; Lee Sedol, however, did.


Clickbait titles aside, it's an amazing achievement.


A beautifully concise statement of an incredibly common misconception as to the current state of the field.


I have Facebook blocked for the next week (because, you know, productivity). Can someone post LeCun's comment here?


Statement from a Slashdot post about the AlphaGo victory: "We know now that we don't need any big new breakthroughs to get to true AI" That is completely, utterly, ridiculously wrong. As I've said in previous statements: most of human and animal learning is unsupervised learning. If intelligence was a cake, unsupervised learning would be the cake, supervised learning would be the icing on the cake, and reinforcement learning would be the cherry on the cake. We know how to make the icing and the cherry, but we don't know how to make the cake. We need to solve the unsupervised learning problem before we can even think of getting to true AI. And that's just an obstacle we know about. What about all the ones we don't know about?


Like, duh.


True life has no rules


The cake is a lie. Obviously :)


reddit is over there ->


Stop looking at the red dot. Take a step back and look around you. "True" AI is here and it's been here for some time. You're communicating with it right now.

It's just that we find it so hard to comprehend its form of "intelligence", because we're expecting true AI to be a super-smart super-rational humanoid being from sci-fi novels.

But what would a super-smart super rational being worth 1 billion minds look/feel like to one human being? How would you communicate with it?

Many people childishly believe that "we" have control over "it". You don't. We don't.

The more we get used to it being inside our minds, the harder it becomes to shut it down without provoking total chaos in our society. Even with the chaos, there is no one person (or group) who can shut it down.

But "we" make the machines ! Well... yes, a little bit..

Would we be able to build this advanced hardware without computers? Doesn't this look like machines reproducing themselves with a little bit of help from "us"?

Think about the human beings from the Internet's perspective - what are we for it? Nodes in a graph. In brain terms - we are neurons, while "it" is the brain.

But it's not self-aware! What does that even mean?

Finally, consider that AlphaGo would have been impossible without the Internet and the hardware of today.

And that "true" AI that everybody expects somewhere on the horizon will also be impossible without the technology that we have today.

If so, then what we have right now is the incipient version of what we'll have tomorrow - that "true" AI won't come out of thin air, it will evolve out of what we have right now.

Just another way of saying the same thing - it's here.

Is this good or bad? Well, that's a totally different discussion.


Once an AI algorithm (even just one for Go) realizes that it can hijack the bank accounts of all the world's other 9 dan players in order to demand an analysis of its planned move, and figures out how to do that, then we've made the cake.

N.B. the genericity of the deepmind stuff that is the basis of AlphaGo makes this seem not entirely far-fetched.

Yum, cake.



