I can confirm the first trick, reserving a block of memory - I learned this in the early 90s coding on the Mac as the "rainy day fund" memory allocation.
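Roughly, it looks something like this (just a sketch in C++; the names and the 2 MB figure are made up for illustration):

    // "Rainy day fund": quietly reserve a block at startup, hand it back near ship.
    #include <cstdio>
    #include <cstdlib>

    static void* g_rainy_day_fund = nullptr;
    static const std::size_t kFundSize = 2 * 1024 * 1024;  // set aside ~2 MB early

    void reserve_rainy_day_fund() {
        // Grab the block at startup, before the rest of the game eats the heap.
        g_rainy_day_fund = std::malloc(kFundSize);
    }

    void spend_rainy_day_fund() {
        // Near ship, when memory is inevitably tight, give the block back
        // and "find" the headroom you quietly set aside months earlier.
        std::free(g_rainy_day_fund);
        g_rainy_day_fund = nullptr;
        std::printf("Recovered %zu bytes.\n", kFundSize);
    }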
I haven't seen it with memory, but I work with a graphics programmer who swears he's done the same trick with CPU cycles: just insert a busy loop that does nothing for a couple of milliseconds every frame (a busy loop, not a sleep, so to the casual observer the rendering engine still appears to be working as hard as it can). Then, when the game is getting ready to launch and the framerate is inevitably dropping, remove the loop.
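For anyone who wants to picture it, a sketch of what that looks like (not his actual code; the function name and the 2 ms budget are invented):

    // Hidden per-frame busy loop: spin, don't sleep, so the core looks busy.
    #include <chrono>

    void burn_hidden_frame_budget() {
        using clock = std::chrono::steady_clock;
        const auto budget = std::chrono::milliseconds(2);  // the hidden 2 ms
        const auto start  = clock::now();
        volatile unsigned long sink = 0;
        // Spin rather than sleep, so profilers and the task manager show the
        // core pegged, as if the renderer genuinely needed the time.
        while (clock::now() - start < budget) {
            sink = sink + 1;  // pointless work; volatile keeps it from being optimized away
        }
    }

    // Call this once per frame during development; delete it shortly before
    // ship to "win back" a couple of milliseconds of frame time.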
I don't think he has a loop like that in our current code base. But it's possible.
Another method that occurred to me, which might sort of work on the PC, would be to finish a game and then delay the release for a year or so. During that time (thanks to Moore's law), everyone's graphics cards and processors will have gotten faster, and hence the average user's experience of the game would improve.
I think the bigger titles already account for hardware advancements and aim for the beefy, top-of-the-line machines that will be common after the several years it takes to develop the title or engine.
Although there is still something to be said for a low memory footprint, even on modern hardware. If you can reasonably fit all of your resources into memory, you don't need any load delays and don't have to worry about asynchronous loading. Half the reason retro games are so fun to make is that you can easily do everything you always wanted to do on the Super NES or the Genesis or whatever.
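For what it's worth, the "everything fits" version is about as simple as loading gets (a sketch; the asset list and container are hypothetical):

    // Load every asset straight into RAM at startup: no loading screens
    // mid-game, and no asynchronous streaming code to maintain.
    #include <fstream>
    #include <iterator>
    #include <map>
    #include <string>
    #include <vector>

    std::map<std::string, std::vector<char>> g_assets;

    void load_all_assets(const std::vector<std::string>& paths) {
        for (const auto& path : paths) {
            std::ifstream in(path, std::ios::binary);
            g_assets[path] = std::vector<char>(std::istreambuf_iterator<char>(in),
                                               std::istreambuf_iterator<char>());
        }
    }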
I'm amused to find that this has been written up as a pattern, the Memory Overdraft Pattern: http://www.charlesweir.com/papers/Mempres8AsHtml.html