That's an important detail, as it explains why the tests aren't entirely flawed: testing ten reloads every second really distorts the numbers if a reload every hour, or every few hours, is more realistic. But if reloads are part of what amounts to live testing on a large share of production traffic, then the exercise makes more sense.
And to be sure, being able to do ten reloads every second with few ill effects enables a different, more nimble style of systems engineering.
But if we assume 2000 requests per second per box, fighting ~100 reset connections a day (assuming two haproxy reconfigs) doesn't really seem worth the effort; packet loss and other outages would probably(?) dominate anyway.
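For scale, here's the arithmetic behind that claim as a quick sketch (the 2000 req/s figure and the two-reconfigs-per-day reset estimate are the assumptions stated above, not measured data):

```python
# Back-of-the-envelope check using the numbers from the comment above.
requests_per_second = 2000
seconds_per_day = 86_400
resets_per_day = 100  # rough estimate for two haproxy reconfigs/day

daily_requests = requests_per_second * seconds_per_day  # 172,800,000
reset_fraction = resets_per_day / daily_requests

print(f"{daily_requests:,} requests/day")
print(f"reset fraction: {reset_fraction:.2e}")  # ~5.79e-07, i.e. ~0.00006%
```

At roughly six resets per ten million requests, ordinary network-level packet loss would indeed swamp this signal.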
> Testing ten reloads every second really distorts the numbers if a reload every hour, or every few hours, is more realistic.
It depends on what you do. I've seen shops (successfully and, IMO, correctly) scaling AWS instances for services with a threshold of every fifteen minutes, and I've seen Mesos clusters dynamically spinning up web instances much more nimbly than that (think every two minutes under spiky load; the instances would come up in five seconds, so it didn't hurt to take them down).
Sure, but if it doesn't work there, I don't trust it to work if, say, a piece of my scheduler goes nuts and suddenly starts upping and downing containers every few seconds. The problem remains; it's just less acute, and it still has to be fixed.