In this case, they were fairly reasonable handicaps -- six to eight stones, though certainly large, is not unheard of. You could see that kind of difference between a low-level and a high-level amateur, for example.
Sure, it's not unheard of, but it's still huge! It's extremely hard to become six or eight stones stronger from that level, and by all appearances it represents an enormous gap in ability. The most talented humans take years of constant study and practice in their teens and 20s (at the height of their cognitive power) to go from 2d or 3d amateur -- roughly the level of the strongest programs -- to high-dan professional. For further perspective: various professionals have remarked that they believe the best humans are only about three handicap stones weaker than God.
Consider this: perhaps you play chess. David Levy's famous chess AI bet was that no computer could beat him, an international master, in a match by 1978 (I don't know his exact strength, but presumably 2400-2500 Elo -- much, much stronger than an expert!). In 1978 the best computer could not beat him in a match, though it did win one game and draw another. Even so, it took nearly another 20 years from there before a computer could beat the world champion.
But Go computers, relatively speaking, are not even as good at Go as that 1978 computer was at chess. The equivalent of a 2d amateur at Go might be a 2100 Elo human at chess -- a strong expert, certainly not an IM! So it's not obvious to me that Go computers will be at a world-class level in a shorter timeframe than 20 years. Indeed, I believe more effort has gone into computer Go circa 2010 than had gone into computer chess circa 1978, so it might be reasonable to expect progress to be even slower.
"So it's not obvious to me that Go computers will be at a world-class level in a shorter timeframe than 20 years."
Don't get me wrong: I agree with you 100% on this. If anything, I'd go further and wager against that happening. Go has an absolutely silly branching factor, no solid method of pruning, and no straightforward position evaluation function. Counting material doesn't work nearly as well as it does in chess, and the sheer amount of information contained in the board makes the "mean value of this board position in past grandmaster games" approach completely infeasible.
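Just to put the branching-factor point in perspective, here's a toy back-of-the-envelope sketch in Python. The ~35 (chess) and ~250 (19x19 Go) average legal-move counts are commonly cited ballpark figures rather than anything from this thread, and plain exhaustive fixed-depth search is only a stand-in for "brute force":

    # Rough illustration: how much faster the game tree blows up in Go
    # than in chess under plain exhaustive fixed-depth search.
    CHESS_BRANCHING = 35   # commonly cited average legal moves per chess position
    GO_BRANCHING = 250     # commonly cited average legal moves per 19x19 Go position

    def leaf_nodes(branching_factor, depth):
        """Leaf positions visited by exhaustive search to a fixed depth."""
        return branching_factor ** depth

    for depth in (2, 4, 6):
        chess = leaf_nodes(CHESS_BRANCHING, depth)
        go = leaf_nodes(GO_BRANCHING, depth)
        print(f"depth {depth}: chess ~{chess:.1e} leaves, Go ~{go:.1e} leaves "
              f"({go / chess:,.0f}x more)")

Even at a shallow 6 plies that's a gap of roughly five orders of magnitude, and that's before you account for having no cheap evaluation function to call at the leaves -- which is exactly why alpha-beta-style engines never got traction in Go.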
My only point is that researchers are, indeed, making progress. It's just very limited.
Recently, bots have gotten a lot better at Go -- they just passed 3-dan rank, which is above where most amateurs ever reach. http://www.lifein19x19.com/forum/viewtopic.php?f=18&t=13...
But they still have a ways to go.
Here is a (fake-money) prediction market on whether a bot will beat a pro in an even game by 2020 -- it's running at 20% right now. http://www.ideosphere.com/fx-bin/Claim?claim=GoCh