I wrote a Go playing program ("Honinbo Warrior") in UCSD Pascal on my Apple II in the late 1970s. I made some money selling it commercially, but it was mostly a hobby.
Also in the 1970s, I had the privilege of playing the women's world champion and also the national champion of South Korea. They both gave me huge handicaps, and still easily beat me - I am not a very strong player. Go really is a great game.
I bought Crazy Stone for my droid phone, and it really is a fine program.
I have asked Albert if he can find the old ALGOL code, as it is of some historical value. The code might be stored on Scotch brand IBM tape, which can still be read by a data conversion service (http://3480-3590-data-conversion.com/). He's going to look for his old dissertation as well, to see if the listing was included. Otherwise, the tapes will be mailed for a data dump.
Reading Zobrist's paper on computer Go sparked my interest in Go playing programs. It would be great to have his old code, papers, etc. on something like GitHub for a historical record.
> Caught between atheism and a crippling fear of death, Ray Kurzweil and other futurists feed this mischaracterization by trumpeting the impending technological apotheosis of humanity, their breathless idiocy echoing through popular media.
An unexpected jab in the second-to-last paragraph; it seems the author has some strong opinions about AI. I wonder how he'd respond to Hawking's recently newsworthy worries.
Yeah, that felt like a weird and very out-of-place insult. I also thought this was an odd assertion:
> In fact, computers can’t “win” at anything, not until they can experience real joy in victory and sadness in defeat, a programming challenge that makes Go look like tic-tac-toe.
It seems like when it comes to AI advances, there's always a shifting of goalposts as soon as an AI is able to do a task that was once thought to be a defining characteristic of human cognition. Why does it matter if a computer "feels good or bad" at the outcome of a competition for that competition to be somehow valid?
I'm unsure sometimes if the AI moniker isn't misleading people as to what is really happening. We (human beings) aren't creating an artificial intelligence. We are learning about our own intelligence and then trying to give a machine instructions so it can process them and simulate our intelligence. Beating Kasparov wasn't the result of powering up a machine and letting it learn on its own: on its own it would just sit there with a light on indicating power is passing through the on/off switch. People spent perhaps thousands of hours studying how chess experts approached the game and developed algorithms from it. I think of what they did as not so much artificial intelligence as collective intelligence in which the computer was just a medium.
That's not a correct characterization of what happened with chess. Computer chess programs do not work the same way as the human mind. The human mind uses very strong heuristics and then simulates the game over a relatively small set of positions. We do not really understand how that heuristic part works. Computers, on the other hand, use very simple and weak heuristics but search a huge space of possible moves. I'd argue that we learned relatively little about how humans play chess from chess AI. The same goes for most other kinds of AI. AI/machine learning isn't studying how the human mind does it and replicating that; it is developing a method that is effective for computers.
People often seem to miss the "artificial" part of artificial intelligence, despite it being half of the term. Any time someone says of an AI system "well yeah, but that's not real intelligence," just say "in fact, it's artificial intelligence."
Great explanation of how the process actually works for automating these complex systems. What you are describing though is how we build narrow AIs, which are considered part of the "Artificial Intelligence" portfolio.
The idea of naive learning is very much a goal of most interpretations of what "Artificial Intelligence" encompasses. The scope of the self-directed learning, though, is I think what distinguishes narrow AI from AGI.
Thanks for taking the time to explain this to me: I was a little off target. I don't have a background in AI but find it fascinating. (I'm preparing to undertake a formal study of it within the next 2 years. If you can recommend introductory books, I welcome it.)
> It seems like when it comes to AI advances, there's always a shifting of goalposts as soon as an AI is able to do a task that was once thought to be a defining characteristic of human cognition.
Usually it's because people all over the discussion fail to correctly distinguish between generally adaptive intelligence and specifically engineered "intelligence" (tool that appears to perform a task intelligently), and also between conscious and unaware mechanical intelligence.
A Go or Chess program isn't a goalpost for generally adaptive intelligence, it's a milestone for specifically engineered intelligence. And it isn't conscious, which is the writer's point, though the expression is imprecise enough it could be misunderstood as criticizing the more specific form.
Yeah, exactly. If anything, I would like to avoid emotions in AI, since emotions make for an unpredictable outcome. That is the last thing you want in an AI, imho.
TL;DR don't conflate emotions with instinctive response.
Emotions are orthogonal to rationality, that's why you can devise rational plans to minimize or maximize your likelihood of feeling an emotion. In other words it's perfectly rational to act in such a way to prevent sadness or pain, if that's the value you want to optimize for.
You can imagine a creature (natural or artificial, it doesn't matter) that feels emotions but nevertheless doesn't start acting irrationally whenever the emotion is strong enough.
I could argue that human beings are in many cases such creatures; even apparently irrational behaviour caused by emotional response usually is not a bad strategy in the original environment where that instinctive reaction was selected for.
I'm not sure it would be a good idea to create AI that simulate humans in all aspects, including unpredictability, just for the sake of it. But I understand some might be tempted to explore that area in order to research creativity, assuming it had something to do with unpredictability.
Perhaps we are just fooling ourselves into thinking that our very failures are what enables us to be so $special (put whatever aspect you prefer in $special). Unfortunately we don't have much means of comparing ourselves with anything else.
Interestingly an irrational-emotional creature can outperform a rational player in some games, because an irrational-emotional player can make credible threats.
Consider the ultimatum game[0]. "The ultimatum game is a game often played in economic experiments in which two players interact to decide how to divide a sum of money that is given to them. The first player proposes how to divide the sum between the two players, and the second player can either accept or reject this proposal. If the second player rejects, neither player receives anything. If the second player accepts, the money is split according to the proposal. The game is played only once so that reciprocation is not an issue."
If you are rational and the other player offers you a cent, you will accept. But humans will become disgusted and angry and therefore refuse. A rational counterparty is hence forced to offer more.
Taking the money isn't necessarily the most rational, it depends on your culture. For instance, taking the money could introduce an unspoken obligation, refusing 60% of the split is not unknown.
The players are anonymous so no obligations can be introduced. Taking the money is the most rational. (Well, I guess the group was so small that egoistic behaviour could cause you to lose long term because of reduced willingness to cooperate, even though no one knew it was you.)
Emotions are what enable us to judge something as right vs. wrong. This has been conjectured philosophically since forever, and has recently been proven to be true by scanning a brain while it solves mathematical questions: every right step activates the emotional center of the brain, QED.
EDIT: This fact has been used to argue against the 'enlightenment' era approach to solving problems. Unfortunately I am unable to find the relevant talk on YouTube right now, since all the results for 'enlightenment' return mystic sadhu BS.
Emotions are still a physical process that can be modeled, and it may be possible to give computers a sense of "correctness" that doesn't require "real" emotion.
I think emotions are simply hardwired shortcuts for some very specific outcomes. Mostly related to mammal reproduction.
Emotions are not really unpredictable, just very limited in comparison to rationality. They essentially can't deal with unexpected or unprogrammed situations.
It also seems like this author isn't aware of even the most basic concepts in the philosophy of mind, like the philosophical zombie problem, which deals precisely with the idea that the outward expression of emotions seems to be indistinguishable from "actually" experiencing emotions.
As a professor of philosophy, I very much doubt he's ignorant of Philosophical Zombies. He probably just disregards them; they're not universally accepted as remotely meaningful, or even possible. (Note that the most meaningful forms of the zombie argument require them to be metaphysically possible, not merely logically.)
They're also not one of "the most basic concepts in philosophy of mind." They're a very pointed way of illustrating the Hard Problem of Consciousness, and specifically Epistemic Asymmetry, but it's the latter two that are fundamental to modern philosophy of mind, not zombies.
He still doesn't even recognize that the idea exists. He just claims that "winning" requires internal emotion, without any support or recognition that other opinions might exist. To me, that sounds philosophically ignorant.
I noticed that the author is an "assistant professor of philosophy and religion." When I ran across this part of the article, I couldn't help but think of the philosophers bursting in on Deep Thought in HHGTTG:
"You just let the machines get on with the adding up," warned Majikthise, "and we'll take care of the eternal verities thank you very much. You want to check your legal position you do mate. Under law the Quest for Ultimate Truth is quite clearly the inalienable prerogative of your working thinkers. Any bloody machine goes and actually finds it and we're straight out of a job aren't we? I mean what's the use of our sitting up half the night arguing that there may or may not be a God if this machine only goes and gives us his bleeding phone number the next morning?"
"That's right!" shouted Vroomfondel, "we demand rigidly defined areas of doubt and uncertainty!"
> Caught between atheism and a crippling fear of death
So is that line really necessary? It seems to do nothing but make assertions about the motives and beliefs of others, and to do so in a negative light. It adds no value to the article, but creates a negative impression in the reader's mind without giving any evidence as backing.
To reply anyway and contribute to the discussion: there is a lot of fear mongering about cognizant AI from people who don't create these systems themselves.
With all of the marketing around things like big data, deep learning and other topics, it makes it seem like we're closer to cognizant AI than we really are.
At the heart of all this is still statistics and machine learning, which in and of itself is just statistics with a lot more data, used to achieve a set of tasks such as predicting a future value or labeling something (such as a fraudulent event).
Edit:
I'd love to see some discussion on what people think we are close to.
We have to ask ourselves, what is AI and what is considered intelligent? Is it mimicking the human brain as closely as possible (with all the flaws that come to mind such as bias) or is it making the heuristically best decision given a set of circumstances (game playing AI, maximum likelihood learning).
Both of these mindsets can be beneficial. Let's just make sure we treat it for what it is: math and binary information.
A non-cognizant AI that is capable of generating a similar AI better at generating AIs than itself would still run into the issue, regardless of whether it is cognizant.
I don't know Kurzweil's real position, but I believe in one of his books he expected singularity by 2045. A lot of people don't protest the idea itself, but the optimistic schedule.
Vernor Vinge (who wrote one of the first essays on singularity) recently stated that we can tell the singularity has happened when a robot can clean an un-prepped bachelor's pad (robots can currently clean a prepared room, but they can't clean a room in an "as-is" state).
Yep, Kurzweil was actually just interviewed by CNN (on their new show Inside Man) about singularity and that's still around the time he predicts it'll happen.
Wired magazine has always been anti-strong-AI. One of their founding editors, Kevin Kelly, has written a lot critiquing strong AI and the singularity as Kurzweil describes it.
> The first chess programs were written in the early fifties, one by Turing himself
When I read that, I wondered what computer that chess program could possibly have run on. The amazing answer is: none. Turing executed the orders of the program he wrote himself, acting as the CPU.
A chess-playing computer had been suggested as early as 1864, and the first machine able to carry out a chess "program" was invented in Spain in 1914 by the engineer Leonardo Torres-Quevedo and called El Ajedrecista: http://en.wikipedia.org/wiki/El_Ajedrecista
"It is neither daring nor original to predict that within decades the world's best chess player will be a machine ... Other games such as Go are presently less susceptible to machine analysis, a fact which sometimes provokes incredible displays of intellectual snobbery from Go players who like to denigrate chess as a child's game. When in due course a machine also becomes the top Go player, such people will no doubt move to games like snakes-and-ladders where computers have no detectable advantage." -- David Langford (1979)
One falsehood in the article is that Go is the only game where computers "don't stand a chance."
In fact, computers are substantially worse than the best humans at Go, Arimaa, Hex and maybe Havannah [1], to take some games that I know.
Arimaa is underexplored for both humans and computers, but there are several programmers working on it, and there is a modest prize available. Hex and Havannah are less explored, but they also have academic work done on them, and their human communities are also small, which means that we're not getting the best humans can do.
[1] Havannah has a pretty good bot, Castro, but I think it's still quite beatable.
One of these games was designed to be hard for computers, the others are way less popular than go and have far less history, and hence competition/interest.
Yes, but as I pointed out, there's also less human interest in those games. That affects both humans and computers. No one is a grandmaster at Arimaa, Hex or Havannah. If those games were more popular, then both computers and professionals would be better.
As for the degree of exaggeration, I think it is misleading. Shogi programs are just catching the best humans in 2013-2014. Go may not resist artificial intelligence for another ten years. That's a noticeable difference, but also not that grand of one. I'd just like a little accuracy: we may not be dealing with more than moderate differences in difficulty.
Shogi programs only match up on the standard 9x9 board. They fall far behind the moment you introduce a larger variant.
As to Go, computers have a hard time with 9x9 and a handicap. 13x13, 19x19, or larger is even further out of the question. The fundamental issue is that the problem must be solved entirely with heuristics. We have trouble modeling cat brains. We are a long way away from human-level pattern-recognition heuristics.
Your knowledge of the situation in Go seems out of date. On the 9x9 board, bots are perhaps equal with professionals, or just behind them. On 19x19, they regularly win with 4 stone handicaps. See: http://www.computer-go.info/h-c/index.html
From the article, a criterion often missed in the discussion is "...of all the world’s perfect information games..."
Virtually all tabletop grognard games are perfect information games. I say virtually because there might be one I can't think of at this time, maybe a card-driven game. I suppose the COIN series is partially imperfect information... Even a noob player can crush an AI trying to play grognard games, so usually the computer cheats or is given a massive handicap, like, say, your human-controlled division being up against two, maybe three AI divisions. I've played a lot of grognard games, both computer and tabletop, and I've never found a worthy, evenly matched AI opponent, although humans can kick my butt. Some computer implementations implement a fog of war, which would certainly be imperfect information, but not tabletops or faithful reimplementations of tabletops. A "good" grognard scenario/game doesn't depend on FoW anyway, or at least many people share that belief.
You can model a stereotypical grognard game as "chess with a large hex board, more pieces, and a wider variety of pieces". I guess that gives some idea what a computer-proof abstract game would look like: Go on a 171x171 grid, perhaps, or chess with 50 kinds of pieces.
You must have a very restrictive definition of grognard games. There are hundreds of wargames out there that would not qualify as perfect information. For instance, anything made by Columbia Games.
If you study them mathematically, many of those games are actually quite a bit simpler than chess or go, because despite the large number of pieces, turn limits, order limits and very limited setup options will shrink the decision space in a major way.
Those games are also easier to solve because, often to provide at least a slight resemblance to how history actually turned out, the rules are set up so that entire avenues of approach are restricted.
So why are AIs worse? Because when you make a videogame, having the hardest AI possible is not really a selling point. We worry instead about not taking too much time or, in the case of a mobile game, not eating the processor alive. Those limits make entire sets of approaches to AI unworkable altogether. If a machine were allowed to think as long as the human does, and we had a reason to actually build said AI, you'd see humans losing a whole lot in wargames. You'd get a similar thing in Euros too, as entire avenues of play just go away because they are mathematically inferior.
Strong AI is for sure a selling point, though. Who wants to play an easy shooter, for example? I believe you when you say that if we threw the amount of power required at it, it would work, but then why don't we? Is that amount of computing power not available at all?
I'm not sure what exactly a grognard game is, but if it's just wargames, then off the top of my head Stratego is a game without perfect information. A lot of fun, too. The traditional board version takes three participants to play, but computer/electronic versions get rid of the referee.
> Even a noob player can crush an AI trying to play grognard games, so usually the computer cheats or is given a massive handicap, like, say, your human-controlled division being up against two, maybe three AI divisions. I've played a lot of grognard games, both computer and tabletop, and I've never found a worthy, evenly matched AI opponent, although humans can kick my butt.
How hard have people tried? I've never had the impression that tabletop wargames had a lot (or any...) programming/AI talent working on them. Too many, too unpopular.
Captchas are (like Go and the other games) a legit information processing distinguisher between humans and computers; unlike spin the bottle, it's not just an issue of having the right peripherals or social standing.
(I know you were probably joking; this is just in case anyone gets the idea that the "captcha game" is just as trivial in this respect as spin-the-bottle.)
Heh, my first thought reading the article was designing a game that is simple, yet inherently difficult for an AI player. Thanks for introducing me to Arimaa!
One additional issue for computers is that humans don't play the game tree out to its conclusion - humans recognise winning and losing positions rather than playing a much longer (perhaps infinite) prove-out game.
A naive game-theory strategy runs into a disastrous problem: the game tree extends vastly beyond the 200 moves that a human will play.
So, a computer has to recognise a winning position, in a heuristic way, as well as the best players do. But, even as a total beginner, it's easy to notice Go programs sometimes get the end-of-game scoring totally wrong.
Once you've overcome this first problem, you've reduced Go to "just" a 200-move game tree.
I don't know what you're trying to say. Monte Carlo based programs play out entire games, and evaluate them by scoring the end of the game--no heuristics needed for the evaluation, only for the playouts. And yes, they can accurately score a completed game.
I'm trying to express why Go is a difficult problem to solve.
I'm saying this (scoring, play-out) has been a difficult problem, which was not solved even a small number of years ago, and is still not considered straightforward. Even if it is solved correctly now, that's only a start.
I would also note that Monte Carlo is by definition a heuristic method - it is statistical and not guaranteed to be optimal.
So is the endpoint evaluation in chess programs. This isn't a difference between Go and the other games. (In fact, pretty much nothing in your original post is.)
Anyone who has a modest understanding of chess can codify very simple rules for identifying a winning endgame.
The exact same thing can be done in Go; scoring a finished game is trivial. The strength of the program simply isn't determined much by correctly identifying and scoring terminal positions, but by the intermediate ones. And for both chess and Go, one very much uses heuristics which are often wrong.
If the heuristics were never wrong, you wouldn't have to do the tree search part at all.
Endgame databases only have a tiny effect on the strength of chess programs (a common misunderstanding!) just because it's not very common for the game to be still "flippable" by the time they become relevant. In all the other positions, you need the heuristics.
The situation for checkers on the other hand, is very different. There the endgame databases were critical for solving the game, but due to mandatory capturing rules the search space reduces much faster.
Edit: Not sure why HN won't let me reply. But anyway: the error that you're both making is assuming that the positions at which humans stop and score are "endgame" or "finished" positions. That's not at all the case! A game is finished when there are no more legal moves besides filling one's own eyes. Counting at that point is trivial because all the life & death situations are "resolved". Monte Carlo programs play until those positions, not the ones where a human would stop the game.
It seems you've studied this more than me, so perhaps I'm just misunderstanding you, but it sounds like you are equating a finished game with the endgame. Scoring a finished game is obviously trivial, but even when a small number of positions remain open on a Go board, the game tree is still combinatorially vast, and I don't think there are any "simple rules" for determining optimal play.
Scoring is trivial, if the game is actually finished. No human wants to play Go games to the finish, so essentially all human games are scored by agreement. But a Monte Carlo program just simulates the game vs. itself. See the addendum to my post above.
Those rulesets weren't written by mathematicians. What do they say about life & death disputes while scoring "after the game has ended"? I'm going to guess: Resume the game.
Humans agreeing on a score and outcome has little to do with what a program has to do to calculate this score.
It's complicated. Most rulesets (AGA, New Zealand, and I believe Chinese) say "resume the game". The Japanese ruleset is a lot more complicated.
I still don't see your argument. Humans playing humans are capable of agreeing on a score with those rulesets. Computers are also capable of following the ruleset and agreeing on the score. Computer-vs-human games are not played by a different ruleset. By that ruleset, the game is over when both sides pass.
I think the above is trying to say that the percentage of total possible trees that a human would bother considering is much lower in Go than most games, like chess. So the computational resources needed to do Monte Carlo and get results competitive with human players are much greater.
But the poster is wrong in saying that this means computers need to use heuristics. Indeed, the recent breakthroughs in Go-playing programs come from removing heuristics that previously helped the programs prune the tree space and instead just using Monte Carlo to simulate all possibilities.
There's a tricky balance, because incorporating too much knowledge/heuristic use can make the program miss good but surprising moves. Nonetheless, the trend seems to be heavy playouts.
Yes, the strategy for rollouts (the rest of the game after you leave the currently explored tree) are still an area of research. The successful programs usually involve some sort of local shape matching or other heuristic, sometimes trained using ML techniques.
One of the surprising results is that having stronger play between opponents in the rollout phase does not always equate with a successful search. Having a balanced strategy -- that is, one that does not introduce a bias for either player -- appears more important.
Not sure what you mean? That's exactly how go programs work today. They don't need to play out the games with an optimal strategy. Just an unbiased, sufficiently random one.
No, a game is only over when both sides concede, and players concede when they recognize that any move they could make either does not change the score or would cost them a point (playing in territory that is already yours effectively reduces your territory, as only empty points are counted as points under standard Japanese rules).
If one player would never concede, you would keep playing until the board is full except for the unfillable eyes of the player that wins. Obviously that would take a long time and many negative moves by the losing player.
So you stop playing when both players are confident the game is over and agree on who has won.
That problem of "negative moves" is solved by the Chinese rules. A point is scored under them for each surrounded intersection plus each stone on the board. So you can defend any supposed weakness without losing points (you still lose sente and points if you needlessly defend when the score is not settled yet). You can even reduce your territories to two eyes each if you feel like it.
By the way, the Chinese rules neatly resolve all those cases handled by special rules in the Japanese rules. The bent four in the corner is the most notable one. Playing it out under the Chinese rules is a neat explanation of why that corner is defined to be dead: the surrounding player defends any weakness without losing points and starts the ko. The other player has no ko threats and dies. Those defensive moves lose points under the Japanese rules, so they have to make a special rule for that shape and many others.
The only problem with the Chinese rules is that scoring takes longer and completely destroys the shape of the game: you fill in the territory of one player, take out the other's stones, and count by grouping the stones into convenient shapes. Furthermore, if you want to count during play, you must remember how many stones have been captured, because prisoners are returned to their bowl and are not stored in plain view (the score penalty is paid by not having those stones on the board). Japanese rules are a shortcut that makes scoring easy, but the tradeoff is the dictionary of special cases at the end of the game.
"Conceding" means passing. If you don't pass, you have to play. If you play when the game is effectively over, you either play in your opponent's territory and get captured, or your own, which reduces your score. If you play in your own territory enough then you can actually end up losing your eyes, and then be captured.
I guess the remark comes from the "perhaps infinite" proof game. I don't know why they put "perhaps infinite" there as most Go rulesets have rules against repeating board positions.
A mathematically established lower bound on the longest possible go game is 10^10^48 [1]. While this isn't infinite, it's certainly large enough to be considered infinite in practice.
For what may or may not be an example, Graham's number is astonishingly huge, but MathWorld (http://mathworld.wolfram.com/GrahamsNumber.html) claims the answer to the problem it is an upper bound for may even be larger than 11:
"Graham and Rothschild (1971) also provided a lower limit by showing that N must be at least 6. More recently, Exoo (2003) has shown that N must be at least 11 and provides experimental evidence suggesting that it is actually even larger."
In this case, I think it is safe to claim that the answer is at least 361!/8, though (but that may already include many truly silly games with suicidal moves in the opening, or games that continue way past the time experienced players think they are over).
A computer would only win by playing out pre-proved positions (joseki) against an amateur who has studied those positions without understanding them.
As for infinite games, they don't really happen much; in rulesets that make it possible for them to happen, the game is usually called a "no result", and this has happened only a very few times in hundreds of thousands of recorded games. Modern rulesets have "patched" this out by implementing superko or similar rules: playing a stone that would put the board in a state it was in previously (positions of the stones and which player's turn it is) is an illegal move.
For each of the (several) traditional rulesets there are positions that the ruleset can't deal with. So in order to allow a mathematical or computer analysis the rules often need to be tweaked.
> Crazy Stone and Nomitan are locked in a game of Go, the Eastern version of chess.
Ummm, the Japanese version of chess is…chess. It's called shogi. There's also a Chinese version of chess, xiangqi. There are, I believe, Vietnamese, Korean, Burmese and perhaps other varieties of chess. There's certainly no common 'Eastern version of chess' any more than there's a common 'Eastern version of food.'
Go is a Japanese game of considerably greater complexity than chess.
Thumbs up on the comments on "the Eastern version of chess".
Go is not a Japanese game, though. It originated in China more than 2000 years ago. It is very popular in China, Japan, and South Korea. [I don't know how it is in North Korea.]
As "an abstract strategy game that is considered culturally to be a symbol of intelligence, deep thought and deep tradition", the analogy works for me.
Chess as an indication of intelligence is an American cultural quirk. In other parts of the world, people just think that means someone is good at chess.
There should be a term for this sort of error, in which one notices that some phenomenon isn't necessarily universal and then condemns the provincialism of those who think it is common, when in fact the phenomenon in question happens to be quite common.
Am I condemning it or just pointing it out as a faulty assumption? I don't see any adjectives like provincial in my comment. In the context of this thread, a game's analogous position to chess is predicated on it being seen as a sign of intelligence. One cannot simply assume that such games are seen by all cultures as signs of intelligence. In fact, one might ask: What kind of intelligence for which culture, exactly?
In case it wasn't clear, by "the phenomenon" I meant the widespread existence of a belief that chess skill and general intelligence are positively correlated. As attested in this thread, the phenomenon exists in North America and in much of Europe. In my experience it is common in East Asia as well, although in many places the emphasis is more on go than on chess. Is it wrong to say this phenomenon is common?
How sure are you that you haven't missed any nuances? I bet you just did a loose pattern match with your own cultural notions and moved on. This is precisely my point.
(EDIT: This one is going to be especially tricky, since everyone from the generations around ours and younger has been inundated with TV and movies from the US, and the "Chess Player is a genius" trope is all over that.)
At this point I'm just confused. If you want to maintain that "Chess as an indication of intelligence is an American cultural quirk" then please do so. The rest of us will try to be satisfied with our apparently superficial understandings of various cultures. I would like to visit a place where chess is the game for dullards, so cheers if you have any advice on that.
Natural intelligence aside, it's certainly been a leading, news-making target (and best analogy) for artificial intelligence work, as long as I've followed AI. Since the 1970s at least, I've been reading articles about trying to beat chess and go as major milestones, but not Shogi.
I can't speak to the rest of the world, I was honestly thinking more of Turkey than America (where I spent hours and hours ruining my college career through this game, a national obsession second only to backgammon).
First paragraph of article body:
"Chess was introduced to Persia from India and became a part of the princely or courtly education of Persian nobility."
First of all, let me share that what Remi achieved with bots is incredible for Go players. When I was 16, the best bots out there were 6kyu, 5 ranks below the median rank for Go players.
Now, Crazy Stone and Zen consistently achieve 5d on Go servers; that's 5 ranks above the median. I have played Zen once, and I was utterly impressed by the quality of its play.
However, my game with the bot confirmed to me that bots WILL NOT beat professionals for a LONG LONG time. The bot excelled at tactical situations and the endgame, maybe at almost professional level. But its strategy is so weak that the game turns into taking an advantage in the first stage of the game, and then not losing it for the rest of it.
Until the computational power is strong enough to develop new openings, computers will always lag behind professionals in Go.
I don't agree with this. You can handle openings by using a pre-computed book.
If you think about it, this is actually how humans handle openings. There is an opening theory, into which a huge amount of analysis (pre-computation) has gone. Even professionals don't do impromptu opening analysis over the board.
On the other hand, in my opinion computer tactics are nowhere close to professional level. Not even close.
Openings can't be handled by a pre-computed book. What we humans call the opening can be 40 moves, and professional games often feature unique moves within the first 10-20 moves.
The situation with regard to the opening is quite different from chess.
Of the corner opening patterns alone (called joseki), there are tens of thousands, and it's very common to use novel ones. Many joseki, like the well-known 4-4/3-3 invasion, can range from great plays to disasters depending on board position. And there are four corners.
The approach of trying to repeat the human process for pattern recognition is probably the main reason why bots were so weak, until a few computer scientists decided that Go should also be battled with brute force.
In one of the Zen vs. Takemiya(?) or other professional games, the bot manages to kill one of the professional's groups with razor-sharp precision, including several tesujis. Bots excel at local tactical situations, and in terms of killing groups on a big board, they are excellent at poaching eyes.
I've always wondered if it's possible to inductively teach a computer to play Go using some form of machine learning by playing tens of thousands of games; perhaps games with annotations by humans following a simple format to guide the fledgling AI: "Good move", "Bad move", etc.
I used to play Go but stopped a long time ago. It was a game where you had to train your brain to recognise patterns and act on them.
Monte Carlo Tree Search uses Machine Learning indeed! The balance between exploration and exploitation of the game tree is achieved using "bandit" algorithms (a type of Reinforcement Learning algorithm). I recommend reading (Kocsis & Szepesvari, 2006): http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.102....
I've been experimenting with exactly this, in an effort to create a trading bot (with an eye on bitcoin/crypto-currencies, in particular). Starting from a lesser-known python library and example that solves the cart-pole task, I managed to apply it to create a pretty neat inventory balancer. Anyone curious about it should email me (email in profile).
"teach a computer to play Go using some form of machine learning"
It would work, even unsupervised. Let random programs play against each other. Let the losers die and the winners breed offspring with mutations. If you let that run long enough, I tend to think that a new world champion would arise with certainty.
The question is how long "long enough" is. Probably "too long" on current hardware.
Would be nice to let this experiment run and draw a graph showing the Elo of the best program over time.
For a game this complex the evolution will be so slow you'll likely never see any progress. My experience with genetic programs is they'll go through hundreds of thousands of generations just to play tic-tac-toe properly.
Different parts of the game are amenable to different techniques.
I had to do a genetic algorithm for my HS AI class, and I found that checkers opening through middle was quite amenable.
In the late-middle to endgame of checkers it is easy to evaluate positional strength, which allowed a very good fitness function. This meant I was able to create a genetic program to learn to play the first half of the game fairly effectively.
Yeah, I know what genetic programming is. But your programs don't just have to beat each other, they have to beat expert humans. Since there isn't a way to "crossover" operations from human players to the next generation of programs, crossover is of limited use. That means the only way to progress will be through random mutation, which will take a very long time.
I actually tried this for fun during my PhD. I made sure the program instruction set was Turing-complete and ran it along Koza's lines. As predicted, it took too long. On any reasonable time scale, the best evolved program was one that played randomly.
> The victory was not quite a Deep Blue moment; Crazy Stone was given a small handicap, and Ishida is no longer in his prime.
As I heard the story, the computer received something like 4 stones (a moderate handicap), and was playing against someone who no longer was top-of-the-world, but was still a strong player.
This is very different from beating the world champion in a fair game.
A four-stone handicap is not small: a beginning professional player could give any top Go player (there are several major championships) a run for his money with 4 stones.
Just to clarify: a beginning professional player (as in, a player who is considered of professional rank in the Go world, but just in the beginning of their career) will beat any top Go player with 99%+ certainty at 4 stones. The difference between professional players are far less than 4 stones.
If you meant a student studying to be a professional, then 4 stones would probably be true for an even chance game.
How about a game with a much smaller search-tree complexity first, like poker?
We don't have computers beating professional poker players yet. They are quite good at the heads-up game (one-on-one), but a full table is beyond them. The problem in poker is not the complexity of the game; it's in learning how your opponent thinks and how they adjust to your play.
> How about a game with a much smaller search-tree complexity first, like poker?
I presume you're talking about limit poker. In pot-limit or no-limit poker, the search tree is large, especially (as you say) for 6-max or full-ring. I don't have much of a frame of reference, but I'm tempted to say "very large".
> The problem in poker is not the complexity of the game; it's in learning how your opponent thinks and how they adjust to your play.
I don't understand this distinction. It seems to me that learning how your opponent thinks and how they adjust to your play are a central part of the difficulty in most deep games of strategy.
> It seems to me that learning how your opponent thinks and how they adjust to your play are a central part of the difficulty in most deep games of strategy.
For human players perhaps, but my impression is that most Chess playing software simply tries to find the strongest move, without considering the opponent to any great extent.
The opposite extreme here is Rock Paper Scissors AI, where there is evidently no strongest move, and the only way to do better than a draw is to identify the opponent's strategy.
I think you may be talking past each other. Chess programs, for example, certainly "consider" the opponent. Indeed, they consider the opponent to the extent of mapping out tens of billions of possible opponent moves.
What I believe you're saying is that they don't do is consider the opponent as a unique individual, rather than as a generic opponent to be brute-forced. They don't say, "Oh, well I'm playing against Kasparov, who plays aggressively, so I'll set a trap," versus "I'm playing against Karpov, who overvalues his knights, so I'll threaten them." (Note: these are not actual foibles of Kasparov or Karpov).
Knowing your particular opponent is, it seems to me, a tree-pruning technique, in much the same way as the Monte Carlo approach highlighted in the article. If you can anticipate ALL possible opponent responses to an acceptable depth, that's clearly better than making assumptions about how your opponent will respond.
Yeah, they said they didn't understand the distinction, so I was clarifying. Poker leans a lot more towards the Rock Paper Scissors style of AI, where determining the best move becomes much more dependent on understanding your opponent's strategy.
While it's not perfect information, it's been studied quite a bit, and heads up play is getting very good for some games. Not sure if that's limit, no-limit or what.
Loving the article so far; interesting read, and it makes me interested in reading more about the AI Go scene. Any HNers have links that would suit an excited learner?
also
> computer game theory genius Alfred Zobrist
I couldn't help but laugh, because I read this in an entirely different way than the author intended. I assume he just meant 'game theory' without the 'computer' prefix.
I see on the Wikipedia article for Zobrist hashing[1] there is a reference to Albert Zobrist, who doesn't appear to have a Wikipedia page, but has pages at [2] and [3].
The problem is inherently parallel, but it's not obvious what each task should be doing. The experienced brain clearly can see the whole board and recognize what to do next, but a computer can't right now.
If I had more time this would be a lot of fun to work on. My approach would be to find some way to evolve a Go player. People clearly learn over time by playing a lot, which is basically programming their brain to recognize state and options.
Isn't this really mis-titled? "Computers" don't win anything. The software is what can't "win" right now. Even though I'm sure there is a lot of effort by various companies, academia, and individuals going into AI, is there really much development being put into beating a Go grandmaster?
> Caught between atheism and a crippling fear of death, Ray Kurzweil and other futurists feed this mischaracterization by trumpeting the impending technological apotheosis of humanity, their breathless idiocy echoing through popular media.
Wat. Why was this bit of out-of-place and mostly off-topic divisiveness dropped into an article about Go?
I don't expect "the Singularity" to come in my lifetime, although smaller victories in "AI" have already proven themselves very useful; but let's leave God out of AI debates. Whether there exists a God or gods (all "atheism" means is not believing in gods, and nothing either way about life after death), whether there is life after death, and what AI can do are orthogonal matters. The one that we can do something about, is the last of these.
For me, I'm a huge fan of AI and would love to see more research in that vein, but I don't dread or fear my eventual death. I don't intend to bring death on prematurely, but my curiosity has me looking forward to it. I certainly don't expect to see a "Singularity" by 2045. That said, if I could program a computer to (say) cure cancer, or extend the human lifespan, I would do it in a heartbeat.
The stereotype that all AI researchers are "atheists" (as if that were a negative, or even a meaningful category) driven by a "fear of death" is (a) untrue, and (b) irrelevant. The great thing about science is that it doesn't matter what your religious beliefs are, and it seems to work regardless of whether gods exist.
Not sure why he put the atheism bit in, but considering how many pills Ray Kurzweil takes per day (200?) to keep himself young, it won't be surprising if he has a genuine fear of death.
Some of his claims are quite ridiculous, and PZ has done a good job of pointing out why.
> "Death is bad," said Harry, discarding wisdom for the sake of clear communication. "Very bad. Extremely bad. Being scared of death is like being scared of a great big monster with poisonous fangs. It actually makes a great deal of sense, and does not, in fact, indicate that you have a psychological problem." -- HJPEV
Also, I think there might have been some tuning by search engines/web pages (a couple of years ago, I remember, it was more difficult to get a good result).
Go has been around for thousands of years--- Golang will be dead and gone soon enough (in a decade or two?) and we won't have that nasty keyword collision.
However, English-speaking Go players are adapting to this problem. Go is the Japanese name for the game--- and we're starting to google for "baduk", the Korean name for the game, instead.