I think another part of it is that an API actually makes more sense. While the need for the non-technically inclined is pretty obvious, auto-scaling shouldn't usually be done reactively; doing so comes at the expense of optimal service delivery.
That said, I can imagine a good number of times it could come in handy, even for the techiest of us.
It might simply not have been the lowest-hanging fruit. One thing I keep noticing with AWS is that there are only two types of features they roll out -- new SKUs (i.e., something new they can charge money for) and services they should have written a long time ago, but probably didn't, because they were too busy launching new SKUs.
> auto-scaling shouldn't usually be done reactively, and doing so is at the expense of optimal service delivery.
You can react predictively (or predict reactively, whichever way you want to say it).
Set up a cascade control system and train it as it runs on (process load × cost-of-scale). It will begin to "see the signs" of load about to occur and adjust in advance to absorb it. You know, just like any modern thermostat.
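A minimal sketch of what "seeing the signs" could look like in practice: smooth the recent load with a level-plus-trend forecast (Holt's double exponential smoothing) and scale to the *predicted* load a few steps ahead, rather than to the current one. All names, parameters, and capacity figures here are made up for illustration; a real controller would also fold in the cost-of-scale term.

```python
import math

def forecast(history, alpha=0.5, beta=0.3, horizon=3):
    """Holt's double exponential smoothing: track a level and a trend,
    then extrapolate `horizon` steps ahead."""
    level, trend = float(history[0]), 0.0
    for x in history[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + horizon * trend

def replicas_needed(predicted_load, capacity_per_replica=100.0):
    # Scale to the forecast, not the current reading; keep at least one.
    return max(1, math.ceil(predicted_load / capacity_per_replica))

# On a rising load curve the forecast overshoots the latest sample,
# so capacity is added before the peak actually arrives.
load = [100, 120, 145, 175, 210, 250]
predicted = forecast(load)
```

With the sample series above, the forecast lands above the last observed value of 250, so the scaler provisions for the coming peak instead of chasing it.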
You can. I even qualified that in the next line. ;-)
Even easier, you could just say "Scale it up, we're demoing this to 3,000 users at PyCon" and do it proactively that way -- that said, doing it by hand likely comes at the expense of efficiency or cost.