Hacker News
AWS Management Console - Auto Scaling Support (aws.typepad.com)
90 points by turbo_pax on Dec 13, 2013 | hide | past | favorite | 46 comments



It always struck me as odd that one of the original defining features of AWS never had a web interface. Maybe they were keeping it API-only to protect people from shooting themselves in the foot with exorbitant bills. Regardless, though, I'm really glad to see this finally!


I think another part of it is that an API interface actually makes more sense. While the appeal for the non-technically inclined is pretty obvious, auto-scaling shouldn't usually be done reactively, and doing so comes at the expense of optimal service delivery.

That said, I can imagine a good number of times it could come in handy, even for the techiest of us.

It might simply not have been the lowest-hanging fruit, and one thing I keep noticing about AWS is that there are only two types of features they roll out -- new SKUs (i.e., something new that they can charge money for) and services that they should have written a long time ago, but probably didn't, because they were too busy launching new SKUs.


> auto-scaling shouldn't usually be done reactively, and doing so is at the expense of optimal service delivery.

You can react predictively (or predict reactively, whichever way you want to say it.)

Set up a cascade control system, training it as it runs on (process load × cost-of-scale). It will begin to "see the signs" of load being about to occur and adjust accordingly to reduce it. You know, just like any modern thermostat.
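Concretely, the thermostat idea might be sketched like this: smooth the load signal, look at its slope, and size the group off the projection rather than the raw reading. All the constants and the per-instance capacity here are invented for illustration; a real controller would be trained, as the parent says, on load versus cost-of-scale.

```python
# Illustrative sketch of trend-aware ("predict reactively") scaling:
# an exponential moving average smooths the load signal, and the
# slope of that average is used to scale *before* raw load crosses
# the threshold. All constants are made up.

def desired_capacity(samples, per_instance_capacity=100.0,
                     alpha=0.3, headroom=1.2):
    """Return an instance count from a recent window of load samples."""
    ema = samples[0]
    prev = ema
    for s in samples[1:]:
        prev = ema
        ema = alpha * s + (1 - alpha) * ema
    trend = ema - prev                   # per-sample slope of the EMA
    projected = ema + 5 * trend          # project five samples ahead
    needed = max(projected, ema) * headroom / per_instance_capacity
    return max(1, int(needed + 0.999))   # round up, keep at least one

# Rising load scales up ahead of the raw signal; flat low load stays at 1.
print(desired_capacity([100, 120, 150, 190, 240]))
print(desired_capacity([50, 50, 50, 50, 50]))
```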


You can. I even qualified that in the next line. ;-)

Even easier, you could just say "Scale it up, we're demoing this to 3,000 users at PyCon", and do that proactively as well -- that said, it's likely done at the expense of efficiency or cost.


Yeah, I've been wondering this myself. I think that's how Heroku really emerged on the scene: AWS gave you the ability to scale but didn't make it easy by any means (relative to 'heroku scale web n'). I doubt billing was the issue either, considering tools for billing monitoring and email warnings have been in place for a while. My guess is that the things that don't scale easily (db, workers, etc.), which are what most of EC2 is used for, have to be scaled with an API anyway, and that consequently deprioritized web-server-type autoscaling.


> to protect people from shooting themselves in the foot with exorbitant bills

As a matter of fact this is why new EC2 instances don't get swap -- running into swap can cost a pretty penny.


What do you mean by this?

You can swap to local (ephemeral) storage at no charge. Or you can create a provisioned-IOPS EBS volume and swap all you'd like. There's no separate I/O charge for these volumes.


Hi Jeff!

I assume this is a reference to the charge per 1M I/O requests on EBS.
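Back-of-the-envelope: assuming the then-standard EBS rate of $0.10 per million I/O requests (an assumption based on the 2013-era us-east price list; rates varied by region and have changed since), even a modest sustained swap rate adds up:

```python
# Rough cost of swapping on a standard EBS volume, which was charged
# per I/O request. The $0.10-per-million rate is an assumption based
# on 2013-era us-east pricing; check current price lists.

price_per_million_io = 0.10          # USD, assumed
sustained_iops = 200                 # a box lightly thrashing swap
seconds_per_month = 30 * 24 * 3600   # 2,592,000

monthly_io = sustained_iops * seconds_per_month
monthly_cost = monthly_io / 1_000_000 * price_per_million_io

print(f"{monthly_io:,} I/O requests ≈ ${monthly_cost:.2f}/month")
```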


You can, but it doesn't come on by default.


Amazon rolls out all their features API/CLI first, then builds a web interface later. They have taken their time with this particular feature, though.


After I just spent ages setting this up via the API!

Actually there is a nice, cheap service called ezautoscaling.com - looks like a hacker side project but supports the full API including schedules, which I think are missing from the official AWS offering.
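For reference, the kind of schedule being described boils down to a mapping from time of day to group size. A toy stand-in (the hours and sizes are invented; the real thing would be scheduled actions applied to the auto-scaling group, not an in-process check):

```python
from datetime import datetime, time

# Hypothetical business-hours schedule: scale to 10 during the day,
# 3 in the evening, and a floor of 2 overnight.
SCHEDULE = [
    (time(8, 0), time(20, 0), 10),
    (time(20, 0), time(23, 59), 3),
]
DEFAULT_SIZE = 2  # overnight floor

def scheduled_size(now: datetime) -> int:
    t = now.time()
    for start, end, size in SCHEDULE:
        if start <= t < end:
            return size
    return DEFAULT_SIZE

print(scheduled_size(datetime(2013, 12, 13, 12, 0)))  # midday
print(scheduled_size(datetime(2013, 12, 13, 3, 0)))   # overnight
```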


You can do scheduling with OpsWorks, I believe.


I actually hate Amazon for exposing these features - now, my boss will be mucking around with them! As a DevOps guy, nothing beats AWS CLI and CloudFormation!


That's what IAM is for. Create a playground for your boss and let him or her muck around in safety.
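A read-only "playground" policy along those lines might look like the following (shown here as a Python dict for illustration; the action list is a plausible starting point, not an exhaustive one):

```python
import json

# Illustrative IAM policy granting read-only (Describe/Get/List)
# access to the services discussed in this thread. Attach something
# like this to the boss's IAM user instead of full-admin credentials.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:Describe*",
                "autoscaling:Describe*",
                "cloudwatch:Describe*",
                "cloudwatch:Get*",
                "cloudwatch:List*",
                "elasticloadbalancing:Describe*"
            ],
            "Resource": "*"
        }
    ]
}

print(json.dumps(read_only_policy, indent=2))
```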


Well, you tell my boss that he shouldn't be a full admin. :)


How about trying to create a change management process (backed by a story about a time someone other than your boss made a disastrous change)? Get him on board with how important that process is, especially when working in a team and with something as critical as AWS. Hint hint..


That's already in place. I personally use the web console as a read-only tool, but it's hard to keep a manager with a developer background from poking around -- policies are for the mere mortals! :)


A question for other startups running a small-time stack on top of AWS: how are you managing your scale-up/scale-down/fault tolerance on top of EC2? For a small startup, we are not really talking OpsWorks -- so what tools allow you to do this? I'm trying to do this in a really small way for a project, but there are way too many deployment/monitoring/management tools to wrap my head around.


You can use CloudWatch, which is already part of EC2, to monitor your instances for you. You can also have autoscaling scale up/down based on those metrics. And you can have Elastic Load Balancer monitor the health of your back-end web hosts' port state (i.e.: has the nginx port been down for more than three health checks? OK, kill that instance and replace it with a new one behind the ELB).

All of these functions are built into AWS, and with the new web interface it is now much easier to configure autoscaling to do these kinds of things.
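The "down for more than three health checks" behavior above is essentially a consecutive-failure counter: a single blip doesn't trigger a replacement, but a sustained outage does. A toy version (threshold and names invented):

```python
# Toy model of the ELB health-check logic described above: an
# instance is marked unhealthy only after N consecutive failed
# checks, so one transient failure doesn't get it replaced.

UNHEALTHY_THRESHOLD = 3

def classify(check_results, threshold=UNHEALTHY_THRESHOLD):
    """check_results: iterable of booleans (True = check passed).
    Returns 'unhealthy' once `threshold` consecutive failures occur."""
    streak = 0
    for ok in check_results:
        streak = 0 if ok else streak + 1
        if streak >= threshold:
            return "unhealthy"
    return "healthy"

print(classify([True, False, False, True, False]))  # blips only
print(classify([True, False, False, False]))        # three in a row
```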


Along the same lines, I'm interested in how people handle the case of auto-scaling their web app (or anything that updates frequently via deploys). Is there a better solution than imaging a machine on every deploy and updating the auto scaling group to use the new AMI?


How are you deploying? You 'should' push a tarball to s3 and have web nodes pull it down.

Red-black deploys via AMI changes are one way for sure, but those seem to usually be done via entirely separate scaling groups. Netflix has a lot of automation around this (see Asgard and Aminator).
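The tarball-pull approach usually ends with an atomic symlink flip, so a half-extracted release is never served. A minimal local sketch (the S3 download step is omitted, and all paths and names are invented for illustration):

```python
import os
import tarfile
import tempfile

# Sketch of the "push a tarball, have nodes pull it down" deploy:
# extract each release into its own directory, then atomically
# repoint a `current` symlink at it.

def activate_release(tarball_path, releases_dir, current_link, name):
    release_dir = os.path.join(releases_dir, name)
    with tarfile.open(tarball_path) as tar:
        tar.extractall(release_dir)          # unpack the new release
    tmp_link = current_link + ".tmp"
    os.symlink(release_dir, tmp_link)
    os.replace(tmp_link, current_link)       # atomic rename on POSIX
    return release_dir

# Self-contained demo: build a fake release tarball, then activate it.
root = tempfile.mkdtemp()
src = os.path.join(root, "build")
os.makedirs(src)
with open(os.path.join(src, "app.txt"), "w") as f:
    f.write("release-1")
tarball = os.path.join(root, "r1.tar.gz")
with tarfile.open(tarball, "w:gz") as tar:
    tar.add(os.path.join(src, "app.txt"), arcname="app.txt")

releases = os.path.join(root, "releases")
os.makedirs(releases)
current = os.path.join(root, "current")
activate_release(tarball, releases, current, "r1")

with open(os.path.join(current, "app.txt")) as f:
    print(f.read())
```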


If you're small you can just use the tools Amazon provides you and get pretty far -- CloudFormation and/or OpsWorks, combined with AutoScaling (now configurable from the console! :) ).


RightScale was the first with this feature, before it was even available via the AWS API. I'm looking forward to seeing how each cloud provider handles the UI for all this data.


I'm going to piggy-back on this comment and mention that Rackspace recently added auto-scale as well, FWIW.


This is excellent news! I was just about to start exploring the auto-scaling capability for our cluster. Even if the API has more flexibility/options, this should make it much easier to get started with the basics.

Now I just wish AWS management console itself got some TLC.....it can be pretty difficult and cumbersome to use sometimes. Even products for fellow devs should have beautiful UIs.


Is it just me, or is there still no option to do scheduled scaling from the web interface? Regardless, this is still cool!


I just searched for this again this week. Finally it's in the console. I do prefer the API once I understand something, but the console is great for a first walk-through.


Funny. Google App Engine auto scaled from day one.


EC2 has had auto-scaling for ages -- this is just the web interface.


People still use Google AppEngine with its outrageous pricing?


Thanks for the downvote, but my experience is that for a relatively small project we started to receive $200+ monthly bills and had to switch to EC2.


Very small, but frequent DB updates. I wasn't responsible for the software, but simply put, it was a backend for a not-so-popular Chrome extension, which was doing authentication and persisting some state (a few integers). I'm sure the architecture could've been optimized specifically for App Engine to reduce the cost, but why invest so much effort in optimizing something so basic that works just fine on an EC2 m1.small or Digital Ocean for a few bucks per month?


FWIW, I'm currently maintaining a project that does half a billion API calls per month for ~$400/month


What was costing you $200+ monthly bills? CPU? Storage? Datastore operations?


Nice one!


I would rather see better CPU performance first, then auto-scaling. Awful CPU performance is what's keeping me away from EC2; I'm trying Linode nowadays.


The new c3.large is cheaper than the m1.large (and only marginally more expensive than the old m1.medium), but its CPU performance is roughly twice as good. Swapping over made a notable difference for our web nodes (perhaps a 30-40% reduction in server time for the web-facing tier).

C3's instance-level SSD drives are a very nice touch as well.
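Roughly, taking the parent's "twice the CPU" estimate and late-2013 us-east on-demand rates (assumed here as $0.24/hr for m1.large and $0.15/hr for c3.large; verify against a current price list), the price-performance gap works out like this:

```python
# Back-of-the-envelope price-performance comparison. The hourly
# rates are assumptions based on late-2013 us-east on-demand
# pricing; the 2x CPU figure is the parent comment's estimate.

m1_large_hr, c3_large_hr = 0.24, 0.15   # USD/hour, assumed
cpu_ratio = 2.0                          # c3.large vs m1.large, per parent

cost_ratio = c3_large_hr / m1_large_hr            # fraction of m1 price
price_performance_gain = cpu_ratio / cost_ratio   # compute per dollar

print(f"c3.large costs {cost_ratio:.1%} of m1.large per hour")
print(f"~{price_performance_gain:.1f}x compute per dollar")
```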


It should be noted, though, that the c3.large has half the RAM of the m1.large.


Have you tried the new c3 instances?


For the last two days I've had a script trying to allocate a c3 instance, with no luck. AWS is having crazy capacity problems in us-east (all zones).


Same here. The ones we've managed to snag have been great, but it has made it difficult to scale out in a few situations when they haven't had the capacity.


We are also getting multiple 500 ServerInternalErrors trying to allocate them.


Good to know - we were salivating and planning to switch from M1 to C3.


In my experience it's only us-east-1a that is having problems. I can get instances in b, c, and d no problem.

c3.large instances are awesome!


Keep in mind that your us-east-1a is not necessarily the same as someone else's us-east-1a: the zone names are randomly assigned per account, to reduce the effect of everyone congregating in us-east-1a because it sounds like the best one.


Really? That's hilarious. Thanks for the info.



