
I suspect that with time Lee may be able to even out the advantage. It would be surprising if AlphaGo hadn't been trained on historic matches with Lee, giving it an early edge until Lee can adapt.

Probably one of the most exciting achievements in AI in quite a while.




At the press conference after the 4th game it was mentioned that AlphaGo was not trained on professional games at all.

They used strong amateur player games from public archives.

It was also mentioned that AlphaGo needs millions of games to be trained. Even if they added a few hundred Lee Sedol games, the impact would be very small.
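
As a rough back-of-the-envelope illustration (the figures below are assumptions, loosely based on the ~30 million KGS training positions reported in the Nature paper), a few hundred extra games would be a tiny fraction of the data:

    # Rough proportion sketch; the numbers are assumptions, not exact figures.
    kgs_positions = 30_000_000    # order of magnitude of training positions from amateur games
    lee_games = 300               # hypothetical: a few hundred Lee Sedol games
    positions_per_game = 200      # rough average length of a Go game
    share = lee_games * positions_per_game / kgs_positions
    print(f"{share:.2%}")         # about 0.20% of the training data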


You are correct. They used a large number of amateur games from the KGS Go server.

By the way, AlphaGo has a wikipedia page: https://en.wikipedia.org/wiki/AlphaGo

And the Nature paper is: http://www.nature.com/nature/journal/v529/n7587/full/nature1... (paywalled, but scihub can get it for you). The paper is fairly readable, and worth having a look at if you are interested in this topic.


> It would be surprising if AlphaGo hadn't been trained on historic matches with Lee, giving it an early edge until Lee can adapt.

This kind of learning doesn't work like that. It doesn't learn from specific examples or make meaningful inferences from single data points; it accumulates tiny gradient updates over millions of examples. If we had an approach whose strategy changed meaningfully depending on whether every game Lee Sedol has ever played was included in or excluded from the training set, that would be wildly more significant than just beating him.
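
A minimal sketch of that point (plain logistic regression with NumPy assumed, generic SGD, nothing to do with DeepMind's actual code): train the same model with and without a few hundred of the examples and the learned parameters barely move.

    # Minimal sketch (assumes NumPy): training averages tiny gradient updates
    # over a huge number of examples, so dropping a few hundred of them barely
    # changes the learned parameters.
    import numpy as np

    rng = np.random.default_rng(0)
    N, D = 100_000, 20
    X = rng.normal(size=(N, D))
    true_w = rng.normal(size=D)
    y = (X @ true_w + 0.1 * rng.normal(size=N) > 0).astype(float)

    def train(X, y, epochs=5, lr=0.1, batch=256):
        w = np.zeros(X.shape[1])
        for _ in range(epochs):
            order = rng.permutation(len(X))
            for start in range(0, len(X), batch):
                b = order[start:start + batch]
                p = 1 / (1 + np.exp(-(X[b] @ w)))        # logistic prediction
                w -= lr * X[b].T @ (p - y[b]) / len(b)   # tiny gradient step
        return w

    w_all = train(X, y)
    w_minus = train(X[300:], y[300:])   # drop 300 of the 100,000 examples
    print(np.linalg.norm(w_all - w_minus) / np.linalg.norm(w_all))  # small relative difference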


> I suspect that with time Lee may be able to even out the advantage. It would be surprising if AlphaGo hadn't been trained on historic matches with Lee, giving it an early edge until Lee can adapt.

I hope that Lee can adapt, but I doubt it. Lee will be training at a slow pace, analyzing 4 or 5 games. AlphaGo will be playing hundreds of millions of games against itself - and it is a worthy competitor of itself already.

Although Go and AlphaGo are very different from chess and Deep Blue, there is one interesting analogy... Once chess engines beat us, they pulled ahead very quickly and the gap isn't closing. The Elo ratings of top chess programs [0] are pulling away from those of top humans [1] (see the sketch after the links for what a rating gap means in practice). There are natural differences between computational superiority and creativity, but it seems like AlphaGo has captured a lot of the latter.

[0] https://www.chess.com/article/view/the-best-computer-chess-e...

[1] https://en.wikipedia.org/wiki/FIDE_World_Rankings
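
For a sense of scale, here is the standard Elo expected-score formula; the example ratings are only illustrative, not quoted from the linked lists.

    # Standard Elo expected-score formula; example ratings are illustrative.
    def elo_expected(r_a, r_b):
        return 1 / (1 + 10 ** ((r_b - r_a) / 400))

    # e.g. a ~400-point gap between a top engine and a top human
    print(elo_expected(3200, 2800))   # ~0.91, i.e. about a 91% expected score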


Is there currently anything in Go like centaur chess, where computer-assisted humans play best? I did a brief internet search, but nothing specific came up.


I suspect that this will start happening. Until now the computers weren't good enough to help! (Even now AlphaGo takes a tremendous amount of computing power)


I think one of the most interesting things is that it was mostly trained on millions of games against itself, so the professional-level moves it's playing are just emergent strategy.
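
To make "emergent from self-play" concrete, here is a toy sketch: a tiny Nim-like counting game with tabular learning, assuming nothing about AlphaGo's actual networks or tree search. A sensible strategy falls out of the agent playing only against itself, with no human game records at all.

    # Toy self-play sketch - a "race to 10" game with tabular learning,
    # not AlphaGo's neural networks or MCTS.
    import random
    from collections import defaultdict

    TARGET = 10                    # whoever brings the running total to 10 wins
    ACTIONS = (1, 2, 3)            # each turn you add 1, 2 or 3
    Q = defaultdict(float)         # Q[(total, action)], from the mover's point of view
    ALPHA, EPS = 0.1, 0.2

    def choose(total, greedy=False):
        if not greedy and random.random() < EPS:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(total, a)])

    for _ in range(50_000):        # self-play training games
        total, history = 0, []
        while total < TARGET:
            a = choose(total)
            history.append((total, a))
            total += a
        # The player who made the last move won. Walk back through the game,
        # flipping the outcome sign each move, and nudge each Q-value toward it.
        outcome = 1.0
        for seen, a in reversed(history):
            Q[(seen, a)] += ALPHA * (outcome - Q[(seen, a)])
            outcome = -outcome

    # Greedy policy after training: it rediscovers "move to totals 2 and 6".
    print([choose(t, greedy=True) for t in range(7)])
    # usually 2, 1, _, 3, 2, 1, _  (moves at the lost totals 2 and 6 are arbitrary)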


AlphaGo hasn't been trained on Lee's games.


David Silver of Google DeepMind has stated that the program has not been specifically trained using Lee's previous games.

"specifically" Not that he was not trained on his games at all


It was in the press conference for Game 4 IIRC.


Also in the pre-game commentary for game 3.

It was mentioned that AlphaGo was trained on so many games that any Lee Sedol games used for training would be no more than a drop in the ocean. This was the answer to a question about whether AlphaGo was trained specifically against Lee Sedol.


Demis was asked about this (training AlphaGo on Lee Sedol's games) and said that it would take tens of thousands of games to make any difference.



