The time that matters isn't how long it takes to restart the app; it's how many hours of changes just got eaten because the app crashed while the data was only resident in memory, or because the crash corrupted the save file. The latter scenario, again, was more common in the past, when correctly shunting the right bytes to disk was a dance between "application" code and "OS toolkit" code rather than the responsibility of an isolated kernel with safeguards against interference in mission-critical tasks.
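For what it's worth, that "dance" has since collapsed into a small, well-known pattern that application code can get right on its own: write to a temporary file, flush it to stable storage, then atomically rename it over the old save, so a crash leaves either the old file or the new one but never a corrupt mix. A minimal sketch in Python (my own illustration, not any particular app's code; the function name is made up):

```python
import os
import tempfile

def save_atomically(path: str, data: bytes) -> None:
    """Hypothetical helper: crash-safe save via temp file + fsync + atomic rename."""
    dir_name = os.path.dirname(os.path.abspath(path)) or "."
    # Create the temp file in the same directory so the final rename
    # stays on one filesystem and therefore stays atomic.
    fd, tmp_path = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())      # push the bytes to stable storage
        os.replace(tmp_path, path)    # readers see the old file or the new one, never a partial write
        if os.name == "posix":
            # Also fsync the directory so the rename itself survives a power loss (POSIX only).
            dir_fd = os.open(dir_name, os.O_RDONLY)
            try:
                os.fsync(dir_fd)
            finally:
                os.close(dir_fd)
    except BaseException:
        # Clean up the temp file if something failed before the rename.
        if os.path.exists(tmp_path):
            os.unlink(tmp_path)
        raise
```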
OTOH, lower runtime performance of modern apps eats into how much people can produce with them - both directly and indirectly, by slowing the feedback loop ever so slightly.
While there are a couple of extra layers of abstraction on our systems that make them safer and more stable, hardware has accelerated far more than enough to compensate. Today's software doesn't need to be as slow as it is.
In general, people will take a fixed, predictable cost over a high-variance one, so even if the slower tools are nibbling at our productivity, that's preferable to moving fast and breaking things.
I'm not claiming there's no room for optimization, but 90% of the things that make optimization challenging are also what make the system reliable.