"The software allows him to play more chess, which allows him to make more mistakes, which allows him to accumulate experience at a prodigious pace."
The same can be said about the young generation of online poker pros from the last 5 years, who regularly decimate the old-school pros who learned live poker first.
This piece stands in stark contrast with how computers are received in Go. Past the beginner level, you are discouraged from playing a computer player at all, lest you pick up bad habits that will hinder your progress later on.
Although, admittedly, this is because Go AI hasn't progressed to the level where it can defeat professionals at all, much less consistently. Once this happens, I can only begin to imagine the level that will be attained by some future kid with plenty of time to practice against such a formidable opponent.
Disclaimer: I don't play a lot of Go, I could be completely wrong.
That's probably due to the fact that in Go there are fewer intermediate goals. In chess, capturing a piece, defending a piece, or simply improving your position by putting a bishop on a long diagonal can be seen as a step towards winning. Thus a chess AI doesn't need to go very deep before knowing which strategies are bad. For example, chess AIs written in JS often don't go much deeper than two moves ahead but still play quite decently.
But with Go you need to build chains made of several (often > 10) stones. And that sort of calculation is very heavy for brute-force style AI, which is about all a computer can resort to at this moment.
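To make the "shallow search still plays decently" point concrete, here is a minimal sketch of a material-only search, assuming the python-chess package is available; the piece values and the two-ply default depth are illustrative, not taken from any real engine.

    import chess  # assumes the python-chess package is installed

    # Illustrative piece values in centipawns; real engines tune these carefully.
    VALUES = {chess.PAWN: 100, chess.KNIGHT: 300, chess.BISHOP: 300,
              chess.ROOK: 500, chess.QUEEN: 900, chess.KING: 0}

    def material(board):
        # Material balance from White's point of view.
        score = 0
        for piece in board.piece_map().values():
            value = VALUES[piece.piece_type]
            score += value if piece.color == chess.WHITE else -value
        return score

    def negamax(board, depth):
        # Plain negamax on material only: no pruning, no positional terms, no mate scores.
        if depth == 0 or board.is_game_over():
            sign = 1 if board.turn == chess.WHITE else -1
            return sign * material(board)
        best = -float("inf")
        for move in board.legal_moves:
            board.push(move)
            best = max(best, -negamax(board, depth - 1))
            board.pop()
        return best

    def best_move(board, depth=2):
        # Pick the move with the highest score at the given (shallow) depth.
        best_score, choice = -float("inf"), None
        for move in board.legal_moves:
            board.push(move)
            score = -negamax(board, depth - 1)
            board.pop()
            if score > best_score:
                best_score, choice = score, move
        return choice

    print(best_move(chess.Board()))

Even something this crude avoids outright blunders like hanging a queen within its two-ply horizon; the point is that there is no equally cheap stand-in for "territory" in go.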
I don't play a lot of chess, but it seems like there are more intermediate goals in go, but they're also more nebulous. Gaining center influence, threatening an opponent's weak group, cutting an opposing group into two groups, defending your own territory, making eyes, and reducing your opponent's territory are all important goals.
The best moves are ones that further two or more of these at the same time; it's very easy to lose if you're only concerned about one at a time.
I don't play a lot of Go either, but I've read some on the topic.
There are a few problems with modeling Go in AI. One of the biggest is that in chess you can more easily evaluate intermediate positions by material advantage. Positioning still matters, and there are plenty of times when a sacrifice is worth it, but material advantage is a serviceable enough quantitative measure. Go does not see captures as commonly and emphasizes positioning more, which makes positions harder to quantify.
The other problem is that (if I remember right) Go has more possibilities, so it's harder to look ahead as far. Every grid intersection on the board is a possible move, and the board is bigger, while in chess the range of moves is more constrained. But really, this isn't the important part, because the forecast horizon of both games is finite, which means in both games at some point you have to stop looking at possibilities and evaluate the board positions at the leaf nodes. And chess has a less imperfect method for that.
Actually, that makes a big difference. Because you can look ahead farther in chess, the intermediate scoring can be less accurate, as you have a real chance of looking far enough ahead to have turned your positional advantage into a material advantage.
Using the size of the board as a proxy for the number of legal moves, looking ahead N moves in chess requires evaluating approximately 64^N intermediate states, and go requires 361^N. That means that with equivalent computing power, looking ahead N moves in chess will let you look about N^0.177 in go. Thus, a computer that can look ahead 40 moves in chess wouldn't quite be able to look 2 moves ahead in go, without very aggressive pruning of the moves that it considers.
Right idea, wrong execution. If 64^N = 361^M then N log 64 = M log 361, so M = (log 64 / log 361) N ~= 0.71N. So if you could look 12 moves ahead in chess, this estimate would say you could manage 8 or 9 moves in go.
Unfortunately, it's worse than this. In practice, the effective branching factor in a chess tree-search is more like 3 than 64, roughly because lots of moves can quickly be found to be bad. I would be very surprised if the corresponding figure for go were less than, say, 30. So now your factor is log 3 / log 30, or more like 1/3.
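A quick back-of-the-envelope check of those numbers, as a Python sketch; the effective branching factors of 3 and 30 are the rough guesses above, not measurements.

    import math

    def equivalent_go_depth(chess_depth, chess_branching, go_branching):
        # Solve go_branching ** d == chess_branching ** chess_depth for d,
        # i.e. the go depth reachable with the same node budget.
        return chess_depth * math.log(chess_branching) / math.log(go_branching)

    # Naive board-size proxy: 64 vs. 361 candidate moves per ply.
    print(equivalent_go_depth(12, 64, 361))  # ~8.5 plies, i.e. the 0.71 factor
    # Rough effective branching factors after pruning: ~3 for chess, ~30 for go.
    print(equivalent_go_depth(12, 3, 30))    # ~3.9 plies, roughly the 1/3 factor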
On top of that, go games are much longer than chess games, so your search depth in go is a much smaller fraction of the whole game, so the search is a worse proxy for the final result. Slightly related to that: there's no nice simple measurement of how well you're doing that's comparable to measuring material in chess. (Your ultimate goal is to build territory, but it takes quite a while before any sort of quantitative estimate of how much territory each player has is feasible, especially if you want it to be anywhere near as cheap as a material count in chess.)
Sadly, the article exaggerates the amount of computer chess Magnus plays. Few grandmasters do. Chess software is good for analysis and study, but not so much for actually playing against. For direct quotations from Magnus on this issue, see http://www.nationalpost.com/life/story.html?id=2392166
It is true, there are other aspects to Magnus's chess proficiency. Agdestein's coaching when Magnus was in his early teens must have been a factor. It is hard for a teen with other pressures to transition from a promising player (with much hype, one must add) into a real premier player.
Notice though, that Magnus mentions that he is not even sure where his real chess board is. His chess time is spent at the computer.
Personally, I have trouble playing on a regular board after playing tons of computer chess, but then again I am an old patzer (2400 FIDE).
I have this sort of experience in programming as well.
For the whole 5 years of my programming hobby and career, I wrote primarily in Ruby (now starting to do JavaScript), struggling with my own problems and bugs and generally learning how to write good code. I didn't think I was learning much, since I was doing the whole thing in Ruby and Rubygame.
It took a beginner programmer asking for help for me to see the extent of my knowledge. From there, I efficiently dispatched each problem one by one, pointing out the simple errors the newbie had made. He was so thankful that he gave me 10 bucks.
Even so, I still felt that I wasn't so smart, given the vast domain of knowledge that exists in programming and computer science. Surely there is someone my age who is way smarter than me and has programmed all sorts of cool stuff. Surely there is someone who can code a Tetris clone with unfathomable beauty.
I reckon that each programmer in the world only solves a fraction of the problem space in the area they're interested in.
It would be nice if there were more meat in the discussion of what intuition really is.
This is a truism of expertise. Although we tend to think of experts as being weighted down by information, their intelligence dependent on a vast set of facts, experts are actually profoundly intuitive. When experts evaluate a situation, they don't systematically compare all the available options or consciously analyze the relevant information.
References? Studies? It sounds very plausible but...
It would help if the one link in the article weren't behind a paywall, too.
Hofstadter's book GEB talked about chess and intelligence a bit, and it mentioned some studies. There's one I most definitely remember. The study involved taking two groups of people, normal people and chess pros, showing them a chess board for like 5 seconds, and having them reconstruct as much of the board position as they could.
The normal people did exactly like you thought they would, placing a few pieces around, with a lot of off-by-one errors. The chess pros, on the other hand, got board positions comparatively "more wrong" than the beginners. Multiple pieces were radically moved across the board. However, when these board positions were looked at by other pros, the responses were almost universally along the lines of "well, the positions are different, but strategically they're actually kinda similar..."
It's qualitative feedback, to be sure, and difficult to verify and certainly not to be trusted in a scientific sense. Nonetheless, it does reveal some of the "intuition" going on. Pro chess players are good at seeing high-level patterns in the positioning of the pieces, related to strategies and advantages and relative positioning. They see these things so well that they stop seeing the actual pieces, to an extent. It reminds me of that scene in the first Matrix movie where the one guy (Cypher?) is pointing to the screen with numbers trailing down, saying "I don't see the numbers anymore, it's just blonde, brunette, redhead, ..."
This seems to relate more to perception than to intuition as it's traditionally thought of, but I would argue that the two are more strongly coupled than most people realize. One type of intuition is perceiving emergent phenomena directly without noticing the underlying layers.
My memory isn't all that great, so I hate to go out on a limb here, but what I think was discussed in GEB was that if shown a sensible board layout that could be reached in real play, the pros could nail it in five seconds, whereas normal people could do only tolerably well in five seconds. Show the pros a nonsense board that could never come up in real play, and the normal people do the same as they did before, but the pros crash and burn. This is meant to show that the mental conception of a board is different between a pro and a novice.
I would vouch for this being the correct interpretation, and the parent being quite wrong. It's originally from a study by W.G. Chase and H.A. Simon in 1973, but I can't find the full text free online.
After consulting GEB, I replied to the parent with corrections; we're both largely right in point but got some details wrong. The text claims the study comes from Adriaan de Groot in the 1940s; the bibliography refers to a book published in 1965 by Mouton under the title Thought and Choice in Chess. If you can summarize the other study, I'd love to hear it.
It's not the most scholarly thing in the world, but this Chessbase biography of de Groot suggests that where de Groot measured the distinction between novices and experts, Chase and Simon were the first to measure differences between random positions and game-like positions.
Thank you for the addition, your memory is mostly correct. I drudged out my copy of GEB, first part of Chapter X, and we're both correct. Both groups of players are given a 5-second glance at the board. If the game is a real game, the pros can assemble the board very quickly, and they do make mistakes but they're mistakes that other pros recognize as strategically invariant, while the normal people take longer (accuracy not mentioned). Given a random board, they both perform the same.
The general point is the same: the pros perceive the board differently, because they see the emergent patterns. My contention is that this is the heart of intuition (or at least an important subset of intuition).
Hmm, so this would mean that a normal player sees the game as just a bunch of pieces, whereas a pro looks at the position (strategy, advantages for one player, etc.). Well, it would explain some things about pros, but what does it mean when you train with an AI? I mean, yes, you get more experience, but a typical game against a computer is not a typical game against a human at all. It would be nice to know which AI Magnus Carlsen used to train with and compare its playstyle to that of the pros. Maybe his strategy is just different enough to catch people with something they don't expect/know and give him a slight advantage.
This is not really accurate. The strongest chess programs (mostly Rybka, Shredder, and Fritz) are commercially available, and all top grandmasters are using more or less the same ones.
The advantage of training with an AI is that the AI will happily play you for ten or twelve hours a day, from any position you please, without tiring; you can ask it for the evaluation of any position, and it will generally provide an extremely accurate one; it will immediately reveal errors in your calculations and suggest tactical possibilities that few humans would see; and it will play perfect (for small sets of pieces) and near-perfect endgames against you.
Last, but not least, computer chess databases are like nothing available 20 years ago. Chess professionals (and amateurs) have access to all the games they have ever played, hundreds of games played by any potential rival, and hundreds or thousands of games played by IMs and GMs in any potential opening line.
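As a small illustration of the database point, here's a sketch that scans a PGN file for one player's games in a given opening line, assuming the python-chess package; the file name, player, and ECO prefix are placeholders.

    import chess.pgn  # assumes the python-chess package is installed

    def games_in_line(pgn_path, player, eco_prefix):
        # Yield the headers of games where `player` had either color and the
        # ECO opening code starts with eco_prefix (e.g. "B9" for Najdorf lines).
        with open(pgn_path) as handle:
            while True:
                game = chess.pgn.read_game(handle)
                if game is None:  # end of file
                    break
                h = game.headers
                if player in (h.get("White", ""), h.get("Black", "")) and \
                        h.get("ECO", "").startswith(eco_prefix):
                    yield h

    # Placeholder file and player, purely illustrative.
    for h in games_in_line("rival_games.pgn", "Carlsen, Magnus", "B9"):
        print(h.get("Date", "?"), h.get("White"), h.get("Black"), h.get("Result", "*"))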
@mqander (for some reason I can't reply directly):
Yeah, you're probably right. I had also forgotten about those chess databases; those are quite powerful. The problem is probably that the last commercial chess AI I used dates from ~1995, when they weren't that smart yet. Or, more accurately, the computers didn't have enough computing power to play strongly while still moving in a reasonable amount of time.
I remember reading that study in GEB. It's kind of a big step from that to clarifying what "intuition" is, even, especially, if we feel we understand it easily.
I don't think this thing called "intuition" is all that mysterious. It seems to me it's just a result of extensive training of the neural network we all carry underneath our hats. Study a lot of chess games, and you'll absorb a lot of chess knowledge, even if you can't output that knowledge in the form of a set of rules. Really, it reminds me a lot of how after a few years of foreign language study, a lot of the grammar rules one would learn in a first-year course fade away into the background and become more of a gestalt knowledge of basic language forms (which is really just a fancy-shmancy way of saying "you know what sounds right and what sounds wrong").