I used to be pretty active on ICCup, and I was a Masters-level SC2 player for a while during the beta and when the game was first released. So I'm definitely familiar with StarCraft and what it takes to become a good player.
I think you're misunderstanding a big part of what is "easy" and "difficult" for humans vs. AI. Yes, Go is absolutely a more challenging game for humans than StarCraft (I also play Go, although not very well - currently around ~7k on IGS). StarCraft is strategically a much simpler game than Go. You are correct in stating that mechanics are what make StarCraft hard for most people, and yes, if the computer knew exactly what to do, it would be able to execute it faster and without making any multi-tasking mistakes. But strategy is not what makes StarCraft a challenge for AI. Tasks that are trivial for humans can be extremely difficult for AI.
Computers are way better at tree search than humans, for the obvious reason that they run much faster than brains. So games with relatively small state spaces, like checkers, get solved. But as you increase the state space, it becomes impossible to search all possible future moves, and this is why Go was intractable for such a long time.
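For anyone who hasn't written one, the brute-force search being described looks roughly like the sketch below - a minimal version assuming a hypothetical Game object with is_over(), score(), legal_moves(), and play(move):

```python
# A minimal sketch of exhaustive game-tree search (plain negamax), assuming a
# hypothetical Game interface. This kind of brute force is fine for
# checkers-sized games but blows up combinatorially with Go's branching
# factor of ~250.

def negamax(game, depth):
    """Best achievable score for the player to move, searching `depth` plies."""
    if depth == 0 or game.is_over():
        return game.score()              # static evaluation at the leaf
    best = float("-inf")
    for move in game.legal_moves():      # every branch gets fully explored
        best = max(best, -negamax(game.play(move), depth - 1))
    return best
```

With ~250 legal moves per turn the tree is astronomically large after only a handful of plies, which is exactly the intractability in question.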
The big advancement in AlphaGo is that by using deep learning it is able to evaluate different board states without doing any search, using a neural net. This allows it to massively prune the search space. Humans are able to do this through "intuition" gained through experience - talk to any advanced Go player about specific moves and they will tell you things like "this shape is bad" or "it felt like this was a point of thinness". AlphaGo was able to gain this "intuition" by training on a massive dataset of Go board positions.
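As a toy illustration (nothing like AlphaGo's actual convolutional policy/value networks), the idea is a learned function that maps a position straight to a score, which you can then use to look at only the most promising moves; the weights here are random placeholders where the real system has trained ones:

```python
import numpy as np

# Toy stand-in for a trained value network: in the real system the weights
# come from training on huge numbers of positions; here they're random.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(19 * 19, 128)) * 0.01
W2 = rng.normal(size=(128, 1)) * 0.01

def value(board):
    """Score a 19x19 board (+1 own stones, -1 opponent, 0 empty) in [-1, 1]."""
    h = np.tanh(board.reshape(-1) @ W1)
    return np.tanh(h @ W2).item()

def prune_moves(board, legal_moves, top_k=10):
    """Keep only the top_k moves whose resulting positions the net rates best."""
    scored = []
    for (r, c) in legal_moves:
        child = board.copy()
        child[r, c] = 1                  # play our stone (captures ignored here)
        scored.append(((r, c), value(child)))
    scored.sort(key=lambda item: item[1], reverse=True)
    return [move for move, _ in scored[:top_k]]
```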
In Go, the rules are very simple: a 19x19 board, and each turn you can place a stone on any open intersection (as long as it isn't suicide). It's also a turn-based game, and the state at any given time is fully known. StarCraft is real-time, there are tons of different actions you can take, the actions are not independent (pressing attack does something different depending on whether you have a unit selected), the game state is not fully known, and a given state can mean different things depending on what preceded it. Not to mention that the search space is massively, massively larger. Creating a representation of this that can be fed into a neural net and give meaningful results (something like: at a given tick, score all possible actions and find the best one) is going to be incredibly difficult. An order of magnitude more difficult than Go, imo.
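To make that concrete, here's a deliberately over-simplified, hypothetical encoding - coarse spatial planes plus a few global numbers, scored against a fixed menu of abstract actions. Even this toy version has to invent an action set, treat fog of war as a feature, and throws away unit-level targeting entirely:

```python
import numpy as np

MAP = 64                                   # coarse map resolution (made up)
ACTIONS = ["build_worker", "build_army", "expand", "attack", "scout"]

def encode_state(own, enemy, visible, creep, minerals, gas, supply):
    """Stack spatial planes (fog of war makes 'visible' itself a feature)
    plus a few global scalars."""
    spatial = np.stack([own, enemy, visible, creep])       # (4, MAP, MAP)
    scalars = np.array([minerals, gas, supply], dtype=float)
    return spatial, scalars

def score_actions(spatial, scalars, weights):
    """Stand-in for a real network: one weight vector per abstract action,
    dotted against the flattened state, giving a score for every action."""
    x = np.concatenate([spatial.reshape(-1), scalars])
    return {a: float(x @ weights[a]) for a in ACTIONS}
```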
>The big advancement in AlphaGo is that by using deep learning it is able to evaluate different board states without doing any search, using a neural net.
It still uses a Monte-Carlo Tree Search to get to the level where it can beat human pro players.
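For readers who haven't seen it, a bare-bones UCT loop looks something like the sketch below; in AlphaGo the expansion priors and the leaf evaluation come from the neural nets, whereas evaluate() here is just a caller-supplied stub:

```python
import math

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value_sum = {}, 0, 0.0

def ucb(parent, child, c=1.4):
    """Upper confidence bound: exploitation plus an exploration bonus."""
    if child.visits == 0:
        return float("inf")
    exploit = child.value_sum / child.visits
    explore = c * math.sqrt(math.log(parent.visits) / child.visits)
    return exploit + explore

def mcts(root, legal_moves, apply_move, evaluate, iterations=1000):
    for _ in range(iterations):
        node = root
        # 1. Selection: walk down by UCB until we hit an unexpanded node.
        while node.children:
            node = max(node.children.values(), key=lambda ch: ucb(node, ch))
        # 2. Expansion: add a child for each legal move.
        for m in legal_moves(node.state):
            node.children[m] = Node(apply_move(node.state, m), parent=node)
        # 3. Evaluation: a stub here; AlphaGo mixes a value net with rollouts.
        value = evaluate(node.state)
        # 4. Backup: propagate the value to the root, flipping sign per ply.
        while node is not None:
            node.visits += 1
            node.value_sum += value
            node, value = node.parent, -value
    # Play the most-visited move at the root.
    return max(root.children, key=lambda m: root.children[m].visits)
```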
>StarCraft is real-time, there are tons of different actions you can take, the actions are not independent (pressing attack does something different depending on whether you have a unit selected), the game state is not fully known, and a given state can mean different things depending on what preceded it.
And yet StarCraft is extremely primitive as far as strategy games go. Most of the stuff you can do in the game simply doesn't matter, and the stuff that matters could be modeled at a much coarser level than what people see on the screen. Knowing how this stuff works, I'm willing to bet this is exactly how Deep Mind will approach the problem. They will try many different sets of hand-engineered features and game representations, then not mention any of the failed efforts in their press releases and research papers.
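For what it's worth, "a much coarser level" would look something like this - purely speculative on my part, since nobody outside DeepMind knows what representation they'll actually use:

```python
# A speculative, hand-picked summary of a StarCraft game state: a handful of
# numbers instead of every unit and pixel. Field names are illustrative only.
from dataclasses import dataclass

@dataclass
class CoarseState:
    minerals: int
    gas: int
    supply_used: int
    supply_cap: int
    workers: int
    army_value: int          # total resource cost of fighting units
    bases: int
    tech_tier: int           # 1/2/3, roughly which tech buildings are up
    enemy_army_seen: int     # last scouted estimate, not ground truth
    enemy_bases_seen: int

def to_features(s: CoarseState) -> list:
    """Flatten to a plain feature vector for whatever model sits on top."""
    return [s.minerals, s.gas, s.supply_used, s.supply_cap, s.workers,
            s.army_value, s.bases, s.tech_tier,
            s.enemy_army_seen, s.enemy_bases_seen]
```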
The choice of StarCraft as their next target reeks of a PR stunt. Sure, there might be no AIs that play at pro level now, but there wasn't any serious effort or incentive to build one either, and now Google will throw millions of dollars and a data-center worth of hardware at this problem.
As far as I'm concerned, real AI research right now isn't about surpassing human performance at tasks where computers are already doing okay. It's about achieving a reasonable level of performance in domains where computers are doing extremely badly. But that won't get you a lot of coverage from the clueless tech press, I guess.
What are their other options besides StarCraft 2? This doesn’t seem like a PR stunt (not that the PR isn’t a bonus): there’s already a history of AI competitions for Brood War, the game is more balanced than arguably any other RTS, and even though it is “primitive” as a strategy game in your estimation, AI isn’t ready to tackle a more advanced strategy game.
It uses MCTS, but that's not the same thing as the claim, now is it? Look at the win rates in the AlphaGo paper for the raw NN vs. MCTS+NN, then consider the performance curve, the use of a single TPU, the crushing superiority of Master's flawless 60-0 run of fast online games and the Ke Jie matches despite very fast moves, the released self-play games, and the comparison with FB's Dark Forest. It's clear that the AlphaGo NN all on its own, without any MCTS, is a truly formidable player that would likely crush many pros, although I don't know if it would reach Lee Sedol or Ke Jie levels of play.
> Not to mention that the search space is massively, massively larger
That's what I'm not really convinced about. The build-order space is not that big (compared to Go's positions), and once you have a good micro-management engine I'm afraid this will lead to something like: if the opponent is Protoss or Zerg, pick Protoss, then 8 gate -> 9 pylon -> scout; if there's no counter to the 4-gate, then 4-gate and win from out-microing.
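Spelled out, the degenerate policy I'm worried about is basically a hard-coded rule table like this (race and build names are just placeholder strings; whether any such rule would actually survive decent scouting and counters is exactly the open question):

```python
# A toy rule table for the "solved" policy described above -- illustrative only.
def pick_strategy(opponent_race, scout_info):
    """Return (race_to_pick, build_order) from a few hard-coded rules."""
    if opponent_race in ("protoss", "zerg"):
        build = ["8 gate", "9 pylon", "scout"]
        if not scout_info.get("counters_4gate", False):
            build += ["4-gate all-in, win on micro"]
        else:
            build += ["expand and play standard"]
        return "protoss", build
    # vs. terran: fall back to something safer (placeholder)
    return "protoss", ["9 pylon", "gateway", "expand"]
```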
1. The preferred opening for Protoss in PvZ (on most maps) is the forge fast-expand. If the Zerg player doesn't want to play an economic game in response, they have a variety of all-in strategies available. There is a lengthy article on Team Liquid about how Protoss should respond to these.
2. Good scouting is required for most of these situations.
3. There are terrain-based considerations all over the place.
4. There are considerations based on how many units were lost in earlier engagements all over the place.
Enumerating all the build orders (#1) is pretty easy (as you said, the build-order space isn't that big), but the interaction between terrain and building placement (#3) is a lot more complex and ties into the full game's massive search space, and the follow-ups are dynamic (#2, #4), so I don't think the game will degenerate into a solved solution as long as it looks anything like regular play.
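To put a sketch behind that: enumerating build orders against a tiny, made-up prerequisite table is a few lines of code, and none of the hard parts above (#2-#4) appear anywhere in it:

```python
# Hypothetical, heavily trimmed prerequisite table: item -> what it needs.
TECH = {
    "pylon": [],
    "nexus": [],
    "gateway": ["pylon"],
    "cybernetics_core": ["gateway"],
    "zealot": ["gateway"],
    "stalker": ["cybernetics_core"],
}

def enumerate_builds(prefix=(), max_len=5):
    """Yield every legal build sequence up to max_len items long."""
    yield prefix
    if len(prefix) == max_len:
        return
    built = set(prefix)
    for item, prereqs in TECH.items():
        if all(p in built for p in prereqs):
            yield from enumerate_builds(prefix + (item,), max_len)

# Terrain, scouting information, and earlier engagements don't appear in
# this state at all -- that's where the real search space hides.
```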
It's possible that there's some degenerate micro-based solution that turns everything on its head, of course. Bot-based vulture micro might rewrite part of the Terran matchups, but it doesn't seem insurmountable yet. My own bot gets units across the map 5% to 10% faster than normal, but that doesn't look like enough to break the game even with a 4pool.