This was pretty much my PhD thesis topic. The summary no one wants to hear is that all the incremental algorithms for changing graphs suffer the same problem, and no one talks about it.
They are all optimised for changes near the start of the search. The better ones are optimised for changes anywhere on or beyond the 'search perimeter' (the perimeter of the region consisting of nodes whose cost-to-goal has been calculated). They all suffer catastrophic slow down when costs change near the goal region, because the incremental update requires changing almost all the saved information. Due to the overhead of extra calculations and storage, this can be slower than throwing away all the saved state and starting again from scratch.
What to do about this is a much more interesting question, and inadequately explored in my opinion.
If I imagine trying to make the path-finding agent think "like a human" rather than "optimally" (which is what you'd expect in, say, an RTS game), a solution springs to mind:
1. Do your initial global pathing to figure out what the optimal path is at some point back in time;
2. set a sequence of waypoints (local to the agent) to walk the agent through that path;
3. if something changes near the goal, draw a circle out to a heuristic distance for the maximum "roundabout path" it could have created, and eliminate all waypoints within that circle;
4. at each waypoint, instead of just continuing on to the next one, recalculate the path from where the agent is to the next waypoint, with the search bounded to the space between the agent and that waypoint plus some squeeze-room (a sketch of this follows the list);
5. if the recalculated path is sufficiently worse than the previously-known path, throw out the waypoint and repath to the next one after that, increasing the pathing search bounds equally in all directions around the agent [not just in the direction of the next-furthest waypoint];
6. if you end up throwing out the goal, do a global repath;
7. either way, re-divide the new path you created (between you and the remaining waypoint) into new waypoints [down to some quantum], and continue on.
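To make steps 4 and 5 concrete, here's a rough Python sketch on a 4-connected grid. The rectangular window, the doubling margin, and the "sufficiently worse" tolerance are just illustrative choices, not the only way to do it:

```python
import heapq

def bounded_astar(grid, start, goal, bound_rect):
    """A* on a 4-connected grid, restricted to cells inside bound_rect.
    grid[y][x] is the cost of entering a cell (None = blocked).
    bound_rect = (min_x, min_y, max_x, max_y) is the local search window.
    Returns (cost, path) or (inf, None) if no path exists inside the window."""
    min_x, min_y, max_x, max_y = bound_rect
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    g, parent = {start: 0.0}, {start: None}
    open_heap, closed = [(h(start), start)], set()
    while open_heap:
        _, node = heapq.heappop(open_heap)
        if node in closed:
            continue
        closed.add(node)
        if node == goal:
            path = [node]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return g[goal], path[::-1]
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= ny < len(grid) and 0 <= nx < len(grid[0])):
                continue
            if not (min_x <= nx <= max_x and min_y <= ny <= max_y):
                continue                        # outside the local window: don't search there
            if grid[ny][nx] is None or (nx, ny) in closed:
                continue
            ng = g[node] + grid[ny][nx]
            if ng < g.get((nx, ny), float("inf")):
                g[(nx, ny)], parent[(nx, ny)] = ng, node
                heapq.heappush(open_heap, (ng + h((nx, ny)), (nx, ny)))
    return float("inf"), None

def window(a, b, margin):
    """Axis-aligned box around two points, padded by 'margin' cells of squeeze-room."""
    return (min(a[0], b[0]) - margin, min(a[1], b[1]) - margin,
            max(a[0], b[0]) + margin, max(a[1], b[1]) + margin)

def advance(grid, agent, waypoints, expected_cost, margin=2, tolerance=1.5):
    """Steps 4-5: repath locally to the next waypoint; if the result is much worse
    than expected, drop that waypoint, widen the bounds, and try the one after it."""
    while waypoints:
        target = waypoints[0]
        cost, path = bounded_astar(grid, agent, target, window(agent, target, margin))
        if path is not None and cost <= tolerance * expected_cost.get(target, cost):
            return path                         # good enough: keep walking this segment
        waypoints.pop(0)                        # too much worse: skip to the next waypoint
        margin *= 2                             # widen the bounds around the agent
    return None                                 # the goal itself got thrown out: global repath
```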
The resources consumed for each recalculated path will be much smaller, since each agent only has to figure out how to "put one foot in front of the other" to get to the next place it was planning to go. But, it is very possible that something can change that will cause the agent to get to a "stop" on its way, realize there's a problem, and have to turn around and run back the other way because it has now recalculated that there's a better route that it can't "merge into" from where it is. It'll also (if you do the pathing for an agent as a thread local to that agent) spend a minute standing around figuring out which way it needs to go, while everyone else runs past it--with a more "surprising" change necessitating more "thinking."
...which is exactly what, say, a human riding the metro while periodically checking Google Maps would do :)
In fact, thinking about it, if the agent just naively draws the initial set of waypoints from its "memory" of the route (at worst, confabulating a straight line), it can optimistically "just start walking" toward the goal, and then actually think about which way it has to go only once it's actually on its way. Sort of like only checking a map once you're on the highway :)
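A throwaway sketch of that confabulated first guess (evenly spaced points on the straight line to the goal; the spacing is arbitrary):

```python
def straight_line_waypoints(start, goal, spacing=5):
    """Evenly spaced points on the segment from start to goal; a stand-in for the
    agent's 'memory' of the route until the real planning catches up with it."""
    (x0, y0), (x1, y1) = start, goal
    steps = max(1, max(abs(x1 - x0), abs(y1 - y0)) // spacing)
    return [(round(x0 + (x1 - x0) * i / steps), round(y0 + (y1 - y0) * i / steps))
            for i in range(1, steps + 1)]

print(straight_line_waypoints((0, 0), (23, 11)))   # last waypoint is the goal itself
```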
So, is this strategy anything near "useful", or is there a horrible flaw here I'm not seeing?
You've essentially just derived my Parallel Hierarchical Replanner (PHR)... :-) However, your step 5 isn't always useful, because you don't only discover new obstacles (higher costs); occasionally you also discover new shortcuts (lower costs).
In the PHR, you spawn two (or more, but ignore that for now) threads. One runs the globally optimal planner on as up-to-date data as it can; slow, but important to keep working upon. The other runs a local window planner from the agent's current location to as far ahead along the most recent global plan as fits within the window.
The result is a pretty good trade-off between actually executing the plan, and waiting to finish computing a globally optimal one. Essentially you're tracking along a stale but once-upon-a-time optimal global plan, locally optimising subsections of it. There are also some more generally useful trade-offs to this too, like using an anytime planner for the global thread.
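A toy skeleton of that two-thread arrangement, with trivial stand-ins where the real global and windowed planners would go:

```python
import threading
import time

def plan_global(world, start, goal):
    """Stand-in for the slow, globally optimal planner (full-map search)."""
    time.sleep(0.5)                             # deliberately slow
    return [start, goal]

def plan_local(world, start, target):
    """Stand-in for the fast planner that only searches a small local window."""
    return [start, target]

class PHRSketch:
    def __init__(self, world, start, goal, window=20):
        self.world, self.pos, self.goal, self.window = world, start, goal, window
        self.global_plan = [start, goal]        # stale-but-usable plan to track along
        self.lock = threading.Lock()
        self.done = False

    def run_global(self):
        # Keep replanning globally on the freshest data; publish each finished plan.
        while not self.done:
            plan = plan_global(self.world, self.pos, self.goal)
            with self.lock:
                self.global_plan = plan

    def run_local(self):
        # Track along the most recent (possibly stale) global plan, locally
        # optimising from the agent's position to the furthest plan point in range.
        while not self.done:
            with self.lock:
                stale = list(self.global_plan)
            in_range = [p for p in stale
                        if abs(p[0] - self.pos[0]) + abs(p[1] - self.pos[1]) <= self.window]
            target = in_range[-1] if in_range else self.goal
            segment = plan_local(self.world, self.pos, target)
            self.pos = segment[-1]              # "execute" the local segment
            self.done = (self.pos == self.goal)

if __name__ == "__main__":
    phr = PHRSketch(world=None, start=(0, 0), goal=(10, 10))
    threading.Thread(target=phr.run_global, daemon=True).start()
    local = threading.Thread(target=phr.run_local)
    local.start()
    local.join()
    print("reached", phr.pos)
```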
My PhD was then an attempt to generalize these strategies into a cohesive framework. It starts from the assumption that what you really want is not to travel along 'optimal' paths, but just to get to the damn goal as fast as possible. To that end, measuring everything in terms of time becomes sensible. Ultimately, it comes down to solving y* = argmin over Y of (p + e + c). y* is the optimal set of adjustable parameters in your system (e.g. choice of algorithm, number of threads to use, amount of RAM to use, tuning parameters in the algorithm, etc.) in the space Y of all possible parameter sets. p is planning time, e is execution time, and c is the time taken to solve that optimisation equation itself. i.e. it's a transcendental equation (c depends on the act of solving it), so the only way to solve it is to estimate each of those time factors.
Finally, everything gets wrapped up in a nice package that combines a learning model, state estimator, optimiser, and planning system, and solves that equation repeatedly. Every time it does, it "applies" the y* parameters to the system (whatever that means depends upon what they represent), and starts again. If it helps, you can think of it as a feedback loop that adjusts its own controller gains, and where one of the inputs is the time taken to run the controller.
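In code, one pass of that loop is just "estimate p and e for each candidate parameter set, charge yourself for the time it took to decide, apply the winner, repeat". The candidate sets and the two estimators below are invented for illustration; in the real system they would come from the learning model and state estimator:

```python
import time

# Hypothetical parameter sets Y: in practice this space is much richer
# (algorithm choice, thread count, RAM budget, tuning knobs, ...).
CANDIDATES = [
    {"algorithm": "anytime_dstar", "heuristic_weight": 2.5, "threads": 2},
    {"algorithm": "astar",         "heuristic_weight": 1.0, "threads": 1},
    {"algorithm": "local_window",  "heuristic_weight": 1.5, "threads": 2},
]

def estimate_planning_time(y, state):
    """p: made-up model -- bigger maps plan slower; more threads and a more
    inflated heuristic plan faster."""
    return state["map_size"] / (1e6 * y["threads"] * y["heuristic_weight"])

def estimate_execution_time(y, state):
    """e: made-up model -- sloppier plans take proportionally longer to walk."""
    return state["straight_line_time"] * y["heuristic_weight"]

def choose_parameters(state):
    """One iteration of y* = argmin over Y of (p + e + c)."""
    started = time.perf_counter()
    best, best_pe = None, float("inf")
    for y in CANDIDATES:
        pe = estimate_planning_time(y, state) + estimate_execution_time(y, state)
        if pe < best_pe:
            best, best_pe = y, pe
    c = time.perf_counter() - started           # cost of solving the optimisation itself
    return best, best_pe + c

if __name__ == "__main__":
    state = {"map_size": 4_000_000, "straight_line_time": 30.0}
    y_star, total = choose_parameters(state)
    print(y_star, round(total, 3))              # picks plain A* for this toy state
```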
Data structures called "distance oracles" handle such problems. However, most research results are for static graphs only.
If the graphs are grids (as in the cstheory.stackexchange.com question) they are planar and some dynamic data structures exist (unclear whether the constants are small enough for the application in question):
Exact shortest paths:
Jittat Fakcharoenphol, Satish Rao: Planar graphs, negative weight edges, shortest paths, and near linear time. J. Comput. Syst. Sci. 72(5): 868-889 (2006)
Approximate shortest paths:
Philip N. Klein, Sairam Subramanian: A Fully Dynamic Approximation Scheme for Shortest Paths in Planar Graphs. Algorithmica 22(3): 235-249 (1998)
While on this topic (not sure if this is fundamentally the same problem, it might be): imagine a crowded market full of AI agents. They all want to walk to their destinations while trying not to bump into each other.
In the naive case with N actors, each would run A* at the AI update rate F (which might not correspond to the main frame rate), so that's NxF A* runs per second.
It might not be that bad, except that some of the obstacles are moving (the other agents), so there is a need to predict each other agent's movement based on some approximately constant speed and direction.
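One cheap way to handle that is to extrapolate each other agent along its current velocity and treat the cells it will pass through as temporarily blocked (or as higher cost) before each replan. A sketch, with the horizon picked arbitrarily:

```python
def predicted_blocked_cells(agents, horizon_frames=3):
    """agents is a list of ((x, y), (dx, dy)) pairs: current position and an
    assumed roughly constant per-frame velocity. Returns the grid cells those
    agents are predicted to occupy over the next few frames."""
    blocked = set()
    for (x, y), (dx, dy) in agents:
        for _ in range(horizon_frames + 1):
            blocked.add((round(x), round(y)))
            x, y = x + dx, y + dy
    return blocked

# Example: two other agents, one moving right, one moving down.
others = [((3, 4), (1, 0)), ((10, 2), (0, -1))]
print(predicted_blocked_cells(others))   # cells to avoid in each of the NxF A* runs
```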
This seems like another case where the constant factors of these super-smart algorithms end up nullifying any benefit you get, even producing worse performance.
This is actually completely false. Most of these algorithms were specifically designed to make a specific intractable subcase of some algorithm tractable, not only in theory, but in practice. Most of them are completely practical, and together they constitute a really large portion of what we know about motion planning.
I definitely think anytime planning is underused in games. It's like a modified A* planner with a weighted heuristic, but which reduces this weighting factor after every successful iteration. The Anytime-D* algorithm then efficiently re-uses calculations to make subsequent plans faster. I'm not a games programmer, but I used this in a tech-demo planning paths for >5000 agents in a constantly changing map.
The biggest win in my opinion is that it degrades gracefully. If you're low on CPU time, you get worse paths. When CPU is readily available, they get better again.
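For flavour, here's a stripped-down version of the anytime pattern: weighted A* with an inflated heuristic, re-run with a smaller weight while the time budget lasts, always keeping the best plan found so far. (This toy restarts each iteration from scratch; the real win of ARA*/Anytime D* is re-using the previous iteration's search, which is deliberately omitted here for brevity.)

```python
import heapq
import time

def weighted_astar(grid, start, goal, weight):
    """Weighted A* on a 4-connected grid; grid[y][x] is step cost or None (blocked).
    The heuristic is inflated by 'weight', so the path may be up to 'weight' times
    longer than optimal, but the search finishes much sooner."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    g, parent = {start: 0.0}, {start: None}
    heap, closed = [(weight * h(start), start)], set()
    while heap:
        _, node = heapq.heappop(heap)
        if node in closed:
            continue
        closed.add(node)
        if node == goal:
            path = [node]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return g[goal], path[::-1]
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx] is not None:
                ng = g[node] + grid[ny][nx]
                if ng < g.get((nx, ny), float("inf")):
                    g[(nx, ny)], parent[(nx, ny)] = ng, node
                    heapq.heappush(heap, (ng + weight * h((nx, ny)), (nx, ny)))
    return float("inf"), None

def anytime_plan(grid, start, goal, budget_s, start_weight=3.0, decay=0.5):
    """Return the best path found before the budget runs out: a small CPU budget
    means a sloppier (more inflated) path, more budget means closer to optimal."""
    deadline = time.perf_counter() + budget_s
    weight, best = start_weight, (float("inf"), None)
    while time.perf_counter() < deadline:
        cost, path = weighted_astar(grid, start, goal, weight)
        if cost < best[0]:
            best = (cost, path)
        if weight == 1.0:
            break                               # heuristic no longer inflated: optimal
        weight = max(1.0, weight - decay)
    return best

if __name__ == "__main__":
    grid = [[1] * 20 for _ in range(20)]
    grid[5][3:17] = [None] * 14                 # a wall with gaps at both edges
    print(anytime_plan(grid, (0, 0), (19, 19), budget_s=0.01)[0])
```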
FWIW, if you're reading this Valve, I'm still waiting for your call... ;-)