I could be mistaken but this is what I gathered from what the people who worked on AlphaGo said in various interviews and videos:
AlphaGo does not use brute force at all. It uses a neural network to suggest interesting moves, which in effect limits the search space.
In addition, when AlphaGo "reads" moves (i.e. performs a tree search), it doesn't go all the way to the end of the game. It stops somewhere and asks another neural network to "evaluate" the board position and figure out who it is favorable for. I'm thinking the neural network also places some kind of probability on its evaluation, so it might give something like "this board position is 58% favorable for black".
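A minimal sketch of that idea, with hypothetical policy_net and value_net stand-ins (the real system uses Monte Carlo tree search rather than this plain depth-limited recursion):

    # Policy network proposes a handful of candidate moves (pruning the
    # search); value network scores the position when the search stops.
    # All names here are hypothetical placeholders, not AlphaGo's real API.

    def search(position, depth, policy_net, value_net, top_k=5):
        # Return estimated win probability for the player to move.
        if depth == 0 or position.is_terminal():
            # Don't play to the end: ask the value network instead,
            # e.g. 0.58 means "58% favorable" for the player to move.
            return value_net(position)

        best = 0.0
        # Only a few "interesting" moves are considered, out of the ~250
        # legal ones -- this is what keeps it from being brute force.
        for move in policy_net.top_moves(position, k=top_k):
            child = position.play(move)
            # The child's value is from the opponent's perspective.
            best = max(best, 1.0 - search(child, depth - 1,
                                          policy_net, value_net, top_k))
        return best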
In that way, it's very similar to how a professional player plays the game.
One difference I have noticed, which is probably the weakness that Lee Sedol exposed in game #4, is that AlphaGo does not do a thorough reading of "delicate" or "complicated" situations. I believe that a top professional would look at a complex situation and start considering moves that wouldn't normally be considered.
Another thing is that a professional will imagine a board position they want to arrive at, then search for a sequence of moves that lets them reach that position. This is especially the case in complex situations like the one in game #4 where AlphaGo made a mistake.
That doesn't sound different from chess AIs to me. Chess AIs can't "go all the way to the end of the game" until very late in the game (it's simply impossible with current, or even projected future, computing power). They too "stop somewhere" to evaluate the board position and figure out how favorable it is for them:
For most chess positions, computers cannot look ahead to all possible final positions. Instead, they must look ahead a few plies and compare the possible positions, known as leaves. The algorithm that evaluates leaves is termed the "evaluation function", and these algorithms are often vastly different between different chess programs.
(It's worth noting that if computers could "go all the way to the end of the game", they could play perfectly, which they can't.)
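For concreteness, here is the standard shape of such a search: depth-limited alpha-beta (in negamax form), where evaluate stands in for the engine's evaluation function. Real engines add quiescence search, transposition tables, and much more:

    # Depth-limited negamax with alpha-beta pruning. `evaluate` and the
    # position API are placeholders; scores are from the perspective of
    # the side to move.

    def alphabeta(position, depth, alpha=float("-inf"), beta=float("inf")):
        if depth == 0 or position.is_game_over():
            # The search "stops somewhere": score the leaf heuristically
            # instead of playing out to checkmate.
            return evaluate(position)
        for move in position.legal_moves():
            score = -alphabeta(position.play(move), depth - 1, -beta, -alpha)
            alpha = max(alpha, score)
            if alpha >= beta:
                break  # prune: the opponent will avoid this line anyway
        return alpha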
While there are similarities to how professional players play (either game), there are important differences. In particular, professional human players usually look ahead relatively little, but have developed a very accurate intuition for evaluating the board position and for what "feels" like the right next move. This intuition is still expected to be more accurate than the evaluation functions of even the best computer players, which is why anti-computer tactics typically involve "playing conservatively for a long-term advantage": https://en.wikipedia.org/wiki/Anti-computer_tactics
"the difference with chess is there's no simple known heuristic to evaluate a board position"
While it is indeed hand-coded in most chess engines (I'm sure there are some experimental ones that learn it, but that's not the path that has led to beating human grandmasters), these heuristics are anything but "simple".
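As a toy illustration of what "hand-coded" means here (the position and piece API is a hypothetical placeholder; real evaluation functions combine dozens of carefully tuned terms such as pawn structure, king safety, and piece-square tables):

    # Toy material-plus-mobility evaluation in the spirit of classical
    # chess engines. Values are centipawns from White's perspective;
    # a negamax search would flip the sign when Black is to move.

    PIECE_VALUES = {"P": 100, "N": 320, "B": 330, "R": 500, "Q": 900}

    def evaluate(position):
        score = 0
        for piece in position.pieces():
            value = PIECE_VALUES.get(piece.symbol.upper(), 0)
            score += value if piece.is_white else -value
        # Crude mobility term: more legal moves is usually better.
        score += 10 * (len(position.legal_moves(white=True))
                       - len(position.legal_moves(white=False)))
        return score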
AlphaGo does do a tree search as you say, but the "value" of a leaf node is half the value network you mention, and half the result of a Monte Carlo rollout played by medium-strength 'players' (around 2d amateur). These 'players' are really a fast, lightweight version of the policy network that suggests which moves to consider in the first place.
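A minimal sketch of that blend, assuming hypothetical value_net and rollout helpers (the 0.5 mixing weight is the value reported in the AlphaGo paper):

    # Sketch of AlphaGo-style leaf evaluation: blend the value network's
    # estimate with the outcome of a fast rollout. `value_net`,
    # `fast_policy`, and the position API are hypothetical stand-ins.

    LAMBDA = 0.5  # mixing weight; 0.5 is what the AlphaGo paper reports

    def rollout(position, fast_policy, player):
        # Play to the end of the game with the fast (weaker) policy.
        while not position.is_terminal():
            position = position.play(fast_policy.sample_move(position))
        return 1.0 if position.winner() == player else 0.0

    def leaf_value(position, player, value_net, fast_policy):
        v = value_net(position)                     # learned judgment
        z = rollout(position, fast_policy, player)  # actual playout result
        return (1 - LAMBDA) * v + LAMBDA * z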