Easy Amazon EC2 Instance Comparison (ec2instances.info)
90 points by benhomie on Nov 3, 2013 | 43 comments



I still don’t understand this industry’s obsession with predefined fixed limits on unrelated resources.

Just because I want lots of RAM, why do I necessarily need lots of disk and/or lots of transfer?

Or vice versa, why do I need to pay for lots of transfer and RAM to get lots of disk?

I get that AWS has separate billing for data, but they still tie CPU, RAM and Disk space together, as do most “traditional” VPS hosts.

And even more confusing to me is why anyone with any sense would pay for these things.


> I still don’t understand this industry’s obsession with predefined fixed limits on unrelated resources.

1. It keeps their billing simpler. They would have to charge different rates for different resources (or otherwise make it up elsewhere), making billing relatively confusing and increasing support costs.

2. Much easier to forecast resources. If you know that you can fit X instances of type Y on a box, or W instances of type Z, it's easier to understand when/where you will need more hardware.

It's not perfect, I agree, but if an ad-hoc VPS product was profitable I'm sure we'd have seen it by now.


It's more of a capability thing. If you're running, say, Piston cloud you're using ceph over ethernet to back your disks, so you can easily decouple disk usage and ram usage. If you're stuck using local disks (ie. rackspace/joyent/linode/amazon to a point/etc.), then it's a lot harder to provide that sort of product.

That being said there are providers out there that sell it, and have been for years.


I’m very aware there are, and we use one of them.

How is it harder, technically? (I realise billing is more complex.) You have some software that provisions a VPS with the requested resource limits.


That’s the thing - they DO exist, it’s just that the “big” players don’t offer them.

I almost get why big companies (Rackspace, MT, etc) don’t offer it - if you can make a schmuck pay $X for hundreds and hundreds of GB of disk he will never use, just because he needs 4GB of RAM or a lot of transfer, in theory you can overprovision the hardware.

What I really don’t get, though, is the technically savvy people who think they’re somehow getting a reasonable service.


This makes sense when you keep in mind how the older clouds work (essentially VPS providers 101).

You have a server with some disks, some ram and some cpus. You aggregate the disks together, then split them to form the individual disks for the virtual machines. You then use kvm/xen to provide isolation as well as to split the ram/cpu between the virtual machines.

So to answer your question: Storage/ram/cpu is sold in lock step because otherwise there would be resources sitting on servers that are unable to be sold. Bandwidth isn't constrained like that because bandwidth isn't a thing tied to a machine.

There are some providers out there that don't lock ram/disk together. This is mostly because they use a distributed storage pool rather than local disks. This is significantly more complex and is a 'fairly' new addition to the scene (~2010?).

This is also why certain providers still charge you for ram even when your machine is turned off, and why backups/migrations/plan upgrades can be a bit of a pain in the neck at times.
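
A toy sketch of the lockstep point (the numbers are invented, purely to show why arbitrary ratios strand resources on a host):

    # Toy model with made-up numbers: a host with 64 GB RAM and 2 TB of
    # local disk, i.e. 32 GB of disk per GB of RAM.
    HOST_RAM_GB, HOST_DISK_GB = 64, 2048

    # Fixed-ratio plans (1 GB RAM : 32 GB disk) pack the host exactly:
    # sixteen 4 GB / 128 GB VMs use all the RAM and all the disk.
    fixed_plans = [(4, 128)] * 16

    # Arbitrary-ratio orders strand resources: these customers exhaust the
    # RAM while leaving ~1.3 TB of disk that can never be sold.
    custom_orders = [(16, 100), (16, 200), (32, 400)]

    def leftovers(orders):
        ram = HOST_RAM_GB - sum(r for r, d in orders)
        disk = HOST_DISK_GB - sum(d for r, d in orders)
        return ram, disk

    print(leftovers(fixed_plans))    # (0, 0)    -- nothing wasted
    print(leftovers(custom_orders))  # (0, 1348) -- stranded disk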


This isn't EC2, but I'd imagine they have the same limits with pricing vs hardware vs options:

http://digitalocean.uservoice.com/forums/136585-digital-ocea...

"Unfortunately given how physical resources are segmented if we gave users the ability to arbitrarily select CPU, RAM, or HDD independent of each other they would actually end up paying more for this 'custom' plan than using one of the pre-defined plans.

As I'm sure you are well aware, the resources are not equal and are not priced equally; it's cheaper to get more disk than to get more RAM, which is why we've done our best to cut it up into units and provide the best cost savings to our customers."


Maybe it is easier to keep the maximum number of virtual machines running on your hardware when you only offer fixed sizes? You can plan your physical hardware so that there is always a certain number of VMs on one host and all CPU/mem/disk is allocated to them.

With more flexible allocation of resources the pool would start to fragment. Without local disk the defragmentation process would be fairly easy, as you could just restart the VMs on another host, but local disk makes this more difficult (or more annoying for the customer).


AWS has a bunch of options for exactly what you're looking for.

Just want better disk performance? Use instance storage or high provisioned IOPS EBS.

Just want a lot of memory? Pick an M2 or CR1 memory-optimized instance type and pay for more RAM without adding CPU.

Just want more CPU power? Stick with the same amount of RAM as the m1.large but add 5 times the CPU power with the c1.xlarge.

More disk space? On EBS you just pay per GB.
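
As a rough sketch of what picking a single dimension looks like with boto (the Python SDK; the AMI ID, region, sizes and IOPS below are placeholders, not recommendations):

    # Sketch only -- AMI ID, region, sizes and IOPS are placeholders.
    import boto.ec2

    conn = boto.ec2.connect_to_region('us-east-1')

    # Memory-heavy workload: pick a memory-optimized type rather than
    # scaling up a general-purpose instance.
    conn.run_instances('ami-xxxxxxxx', instance_type='m2.4xlarge')

    # Need faster disk, not more RAM/CPU: attach a provisioned-IOPS EBS
    # volume ('io1'), sized and tuned independently of the instance type.
    vol = conn.create_volume(size=500, zone='us-east-1a',
                             volume_type='io1', iops=2000)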


That your examples are so different proves my point.

This is all ridiculously over-complicated, and no doubt overpriced like everything AWS offers.


I think it's more useful to be able to build up a more real-world deployment with storage costs, etc all built in, like you can do with PlanForCloud.

That gives you a monthly/yearly final number, which is more useful for comparing against other 'cloud' providers, or for making the point that often it's cheaper to use standard VM or dedicated server providers.

This tool is great, but only if you're looking solely at EC2, and I think that's a mistake these days.


Another site that is also helpful to make provider comparisons is http://www.cloudorado.com/


In case anyone wants to estimate the cost of other services, here's the AWS Simple Monthly Calculator: http://calculator.s3.amazonaws.com/calc5.html


Here's a different service I made that includes other regions, continuously updated spot instance prices, and a few other nice features: http://ec2pricing.iconara.info/

It's also available as an API: http://ec2pricing.herokuapp.com/api/v1/eu-west-1/


AWS has a full-blown calculator for every service they provide, not just EC2. It gives you monthly pricing, upfront costs [for reserved instances], etc.

http://calculator.s3.amazonaws.com/calc5.html


These costs don't include the cost of provisioned IOPS, right? Without provisioned IOPS, the I/O performance is going to range from "very low" to "low", not "low" to "very high".


Provisioned IOPS is just a method for other large corporations to donate to AWS without C-level approval.


If you view it as money spent on reducing a risk that ended up not materializing, I believe you are correct.

However, one major aspect of business that private individuals do not understand is the value of risk reduction. In a major sense, many businesses are in the business of reducing or quantifying risk.

To an individual, it may not be worth $100k to make sure everything runs correctly. To a business, that might be an easy decision.

It is especially compounded because private individuals don't typically calculate the cost of their time. If a business can spend $100k to make sure their website doesn't go down, their ROI calculation weighs that $100k against: probability_of_downtime * (cost_per_unit_downtime + cost_to_fix + cost_to_report_causes + cost_to_explain_why_it_happened).

At $100 - $300 per hour per employee, costs rise FAST.

Businesses need to control risk, and will pay money to do so.
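
To make the shape of that calculation concrete (all of these figures are hypothetical):

    # Hypothetical figures, only to show the shape of the calculation.
    p_downtime          = 0.10    # chance of a serious outage this year
    cost_per_hour_down  = 50000   # lost revenue / SLA penalties per hour
    expected_hours_down = 8
    cost_to_fix         = 30000   # engineer time at $100-$300/hr adds up
    cost_to_report      = 20000   # post-mortems, customer comms, exec time

    expected_loss = p_downtime * (cost_per_hour_down * expected_hours_down
                                  + cost_to_fix + cost_to_report)
    mitigation_cost = 100000

    print(expected_loss, mitigation_cost)  # 45000.0 vs 100000
    # Whether the $100k is "worth it" comes down to these estimates, plus
    # how much the business values certainty over expected value.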


what do you mean by donate?


As in I give you this money and you make sure I can keep giving you money.


Am I missing something or does this not include options for the reserved instances? Because _that_ is the part of the EC2 pricing that is most confusing to me.


To get an idea of the relative savings of reserved instance prices compared to on-demand ones, you can try http://promptcloud.com/ec2-ondemand-vs-reserved-instance-pri... . It was on HN a couple of months back.


Reserved instances are most likely for people not like us. You should be modeling your apps to take advantage of spot pricing, not reserved instances.


Is there a reason you specifically want spot pricing over reserved? Do you save more money?


Yes, spot instances are significantly cheaper than even reserved instances. For example, a 3-year heavy utilization cc2.8xlarge instance costs $0.49/hr (with $10K up-front!), whereas it costs only $0.27/hr on the spot market (with nothing up-front). Your spot instance could run happily for several days or it could get killed 5 minutes later if there's a price spike. You have to accept uncertainty in exchange for the cheaper price.

Combining reserved instances with spot instances in the same pool can be a good strategy because you get cost-savings while also maintaining a minimum capacity. You can also spread spot instances over different availability zones, since spot price varies across zones.
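
Using the figures above, the rough three-year totals look like this (assuming the $0.49/hr already reflects the amortized up-front; the last line shows the total if it doesn't):

    HOURS_3YR = 3 * 365 * 24                          # 26,280 hours

    spot_total            = 0.27 * HOURS_3YR          # ~ $7,100, nothing up-front
    reserved_incl_upfront = 0.49 * HOURS_3YR          # ~ $12,900
    reserved_plus_upfront = 10000 + 0.49 * HOURS_3YR  # ~ $22,900

    # Either way the spot instance is far cheaper; the trade-off is that it
    # can be terminated whenever the spot price rises above your bid.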

FYI, I created an app that charts spot prices over time across various instance types, regions, and availability zones:

http://ec2price.com/


I just started using AWS yesterday and was pretty annoyed by the really counter-intuitive AWS website. I've used other IaaS providers before, but AWS's information is all over the place.

Thanks a lot, really helpful for me at least!


Having an option to show the effective monthly rate (incl. the amortized up front cost) for light/medium/heavy reserved instances would be great ... I always end up creating a spreadsheet for those.
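
Roughly the calculation the spreadsheet ends up doing (the rates below are placeholders, not actual AWS pricing):

    # Effective monthly rate for a reserved instance, amortizing the
    # up-front fee over the term. Rates are placeholders.
    def effective_monthly(upfront, hourly, term_years, hours_per_month=730):
        months = term_years * 12.0
        return upfront / months + hourly * hours_per_month

    # e.g. a hypothetical 1-year heavy RI: $1,000 up-front + $0.10/hr
    print(effective_monthly(1000, 0.10, 1))  # ~ $156/month
    # vs. the same box on-demand at a hypothetical $0.25/hr
    print(0.25 * 730)                        # ~ $182/month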


This is useful, but it doesn't cover reserved instances or IOPS, which is unfortunate.

AWS in general makes pricing opaque and hard to reason about, more simple tools like this would be useful.


There is also http://www.awsnow.info/, now with RESTful access to pricing.


This is nice, but can you add the machine name (i.e. c1.xlarge) and add cost scenarios for reserved instances?


The question is, why would you use EC2 instead of DigitalOcean?


AWS has a lot of managed infrastructure components like RDS, EMR, S3.


ELB, CloudFront, SQS, SES ... it’s nice to have those things at your disposal even though alternatives exist. Consolidated billing, fewer accounts to log into (admin overhead) etc.


For applications that don't need guaranteed uptime or SSDs, you can save a lot of money with EC2 spot instances. I can get an m1.large instance and 80GB of EBS storage for about $27/month; comparable specs would cost $80 from DigitalOcean.
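
Back-of-the-envelope for that figure (the spot and EBS rates here are assumptions; spot prices fluctuate and vary by zone):

    # Assumed rates for illustration -- spot prices move around constantly.
    spot_hourly   = 0.026   # assumed m1.large spot price, $/hr
    ebs_per_gb_mo = 0.10    # assumed standard EBS price, $/GB-month
    hours_per_mo  = 730

    monthly = spot_hourly * hours_per_mo + 80 * ebs_per_gb_mo
    print(round(monthly, 2))  # ~ $27/month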


How is DO's network uptime compared to AWS?


I can't select any of the menu options on my phone.


Please add other regions, especially us-west-1.


always wanted this. ended up putting numbers in excel. thanks.


It would be nice to have a column for a 30.44 day cost (average month).

EC2 is ridiculously expensive, considering the prices at Linode, Hetzner, DigitalOcean, OVH, LeaseWeb and 100tb.


10x in cost isn't necessarily the most important factor to a business.

From an individual's perspective, that's hard to understand. To a business, it really isn't that difficult to justify the cost.

One of my criticisms of HN commenters is the inability to empathize from a company's POV. Just because it doesn't make sense to you doesn't mean it doesn't make sense!

Companies (especially big ones) have different priorities than individuals. I may think it's stupid to 10x infrastructure costs by using AWS. A company may say "that's 1% of my budget, and it keeps my development running smoothly and developers happy with the familiarity and flexibility. I make that 10x cost back in 30 minutes, every day. Not worth optimizing."


Having just checked out the companies you mentioned, they are cheaper for small to midsize hosting solutions.

For running large, memory- and bandwidth-hungry servers they just can't deliver. On Amazon you can get 256GB of RAM with dedicated 10 Gbps clustered networking. None of the options you listed can go above 1 Gbps, and it won't be dedicated (you'll get a 1 Gbps port onto a shared network, and you'll be at the mercy of the traffic conditions inside their data center).

Amazon also has a huge amount of cloudy solutions which is not to be sneezed at.


This one allows up to 384GB RAM:

http://www.hetzner.de/hosting/produkte_rootserver/dx290

And it costs a fraction of EC2, which would cost you $2556 for 30.44 days.

I could get 10 servers with 1 Gbit each, and it will still be cheaper than EC2.


Ok, that's interesting. It's non-obvious from their site that it was possible. But the network is still a major problem. 10 servers @ 1 Gbit are not equal to 1 @ 10 Gbit, depending on your use case. In my case the database is network limited, and it's extremely not-nice to have to partition it horizontally.



