This is fantastic. Amazon is the leader, and having them drop prices is a boon to the market as a whole. Companies that resell or add services on top of Amazon's services see an immediate benefit, as do their customers. I can't imagine how much money this saves companies like DropBox, who build on top of it [cperciva is right: it's exactly $15,245/month, since they're in the 1 PB+ range].
I'm definitely wondering if and when we'll see 10 cents/GB - prices like that (minus the data transfer charges - see: http://www.nasuni.com/news/nasuni-blog/whats-the-cost-of-a-g...) put it within striking range of high-availability spinning disk in your local data center.
Disclaimer: I work for a company building on top of/reselling S3 in addition to other providers.
Or $182,940 per year. Depending on benefits and payroll taxes, that's somewhere between one and two very qualified engineering hires right there, for free.
Wouldn't a company that is such a significant user (by volume) have already negotiated a rate of their own? It would seem silly to be in the 1+ PB range and pay the going market rate.
I would guess that the point of Amazon's pricing tiers is so they can say "look, you're already getting a discounted rate" and not waste time negotiating with each customer separately.
I would guess they do too. Netflix recently moved their streaming to AWS infrastructure, and it was just revealed that Netflix accounts for 20 percent of all internet traffic during peak times. I highly doubt Amazon simply said "go read our webpage" in a situation like this.
They have to. I've never negotiated with Amazon, but their list bandwidth prices were 3-4 times more expensive than what I got negotiating with CDNs and carriers directly.
That's to say nothing of Cogent, who will sell you bandwidth at $4/Mbps. AWS's cheapest pricing (at 150 TB/mo) still comes out to $28/Mbps, which is a joke.
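Rough sanity check on that $28/Mbps figure, assuming a fully utilized link and AWS's cheapest ~$0.08/GB outbound tier (both assumptions on my part; billing on peak rather than average utilization pushes the effective number higher, toward the quoted $28):

    # Convert a per-GB transfer price into an effective monthly $/Mbps,
    # assuming the link runs flat out all month.
    SECONDS_PER_MONTH = 30 * 24 * 3600  # ~2.59M seconds

    def dollars_per_mbps(price_per_gb):
        # 1 Mbps sustained = 125,000 bytes/s, ~324 GB over a month
        gb_per_month = 125000 * SECONDS_PER_MONTH / 1e9
        return price_per_gb * gb_per_month

    print(dollars_per_mbps(0.08))  # ~25.92, vs $4/Mbps from Cogent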
What I have learned from building out the @Grooveshark infrastructure:
Their pricing on bandwidth is still 3x to 4x more than what it costs to buy transit above the 10 Gbps level, and still noticeably more expensive at the 1 Gbps level. A 3x to 4x increase in bandwidth costs means millions of extra dollars a year to run on AWS.
Does that "3x to 4x more" figure take into account the associated costs of handling 10GigE drops versus having your stack on AWS? (Routing/switching gear, server hardware, rack space/power, network engineers, etc.)
The size of the volume discount (up to 60%) is pretty surprising. If they're making any money at 5.5 cents/GB then they must be making a lot of money at 14 cents.
I think once you factor in the costs related to creating and maintaining accounts (including support and billing) they aren't making much more off the small users than large ones, percentage-wise.
I wonder what kinds of things people are using this for, that they even explicitly mention a tier for 5 petabytes. That's like $275,000/month, not counting transfers.
Isn't DropBox in the 5 PB+ range? I have a vague recollection of concluding that they were in S3's top tier a few months ago, but I can't remember if it was based on them announcing how much data they were storing, based on an estimate from their burn rate, or based on an estimate from their number of users.
I worked out the first 5 PiB to be $462,336/month at the older pricing, and $433,433.60/month at the new rate.
Edit: fat-fingered a couple of columns. New figures are $449,536 and $433,423.36, for $16,112.64/month in savings, or 3.6%. All the savings come in the first 1 PiB.
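For anyone who wants to check the arithmetic, here's a sketch. The tier schedules are my reconstruction from the quoted totals (not an official table), with each tier "TB" counted as 1024 GB and "5 PiB" taken as the first 5000 such TB - that's what reproduces the figures exactly:

    # (span in TB, $/GB-month); schedules inferred from the totals above
    OLD_TIERS = [(50, 0.150), (50, 0.140), (400, 0.130), (500, 0.105), (4000, 0.080)]
    NEW_TIERS = [(1, 0.140), (49, 0.125), (450, 0.110), (500, 0.095), (4000, 0.080)]

    def monthly_cost(tb, tiers):
        total = 0.0
        for span_tb, price_per_gb in tiers:
            chunk = min(tb, span_tb)
            total += chunk * 1024 * price_per_gb  # tier TB -> GB
            tb -= chunk
        return total

    old, new = monthly_cost(5000, OLD_TIERS), monthly_cost(5000, NEW_TIERS)
    print(f"${old:,.2f} ${new:,.2f} ${old - new:,.2f}")
    # $449,536.00 $433,423.36 $16,112.64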
Just in case I'm not the only person who was confused by this at first: stellar678 is referring to the free bandwidth between EC2 and S3, not the free AWS upload bandwidth (which no longer exists).
Marginally related trivia: Thanks to said free EC2-S3 bandwidth, if you want to move more than 1 MB between EC2 nodes in different availability zones, it's cheaper to PUT the data to S3 from one node and then GET and DELETE it from the other node than it is to transfer it directly.
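The back-of-envelope behind that, using what I believe the relevant prices are (inter-AZ transfer at $0.01/GB, PUTs at $0.01 per 1,000 requests, GETs at $0.01 per 10,000, DELETEs free, EC2-to-S3 bandwidth free):

    # Relaying through S3 costs one PUT + one GET per object;
    # sending directly between availability zones costs $0.01/GB.
    INTER_AZ_PER_GB = 0.01
    RELAY_PER_OBJECT = 0.01 / 1000 + 0.01 / 10000  # $0.000011

    breakeven_gb = RELAY_PER_OBJECT / INTER_AZ_PER_GB
    print(breakeven_gb * 1024)  # ~1.13 MB: above this, the S3 relay wins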
Just remember this is durability, not availability. Jeff states this clearly: "If you store 10,000 objects with us, on average we may lose one of them every 10 million years or so."
Eleven nines of availability would mean the service not working/responding for only about 0.0003 seconds per year ... indistinguishable from perfect.
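The arithmetic behind both figures:

    # Durability: 1 loss per 10,000 objects per 10,000,000 years
    annual_loss_per_object = 1 / (10000 * 10000000)  # 1e-11: eleven nines

    # Availability: eleven nines would allow this much downtime per year
    downtime_seconds = (1 - 0.99999999999) * 365 * 24 * 3600
    print(annual_loss_per_object, downtime_seconds)  # ~1e-11, ~0.000315 s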
I was actually thinking about this claim. It seems kind of unreasonable; the amount of data lost should be proportional to the size of the data, not the number of objects it's split into.
I just figured they're going off an average object size stat. Data size can be equally meaningless - a single bit of data loss might be catastrophic in a 10 GB file, or it might not be noticeable in a 1 KB file.
I'm not sure what you claim is meaningful. A single bit of data loss is a data loss, no matter what the file size is.
If you meant that you could fix the 1-bit error easily in the 1 KB case, since you have just 8K bits to flip through, then it makes much more sense. If you split the big 10 GB file into smaller chunks of 1 KB (at which level error detection/correction is done), then the fault becomes much more manageable.
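A toy sketch of that idea - per-chunk checksums confine a single-bit error to one small chunk instead of invalidating the whole file (the chunk size and CRC choice here are arbitrary):

    import zlib

    CHUNK = 1024  # checksum the file in 1 KB chunks

    def chunk_crcs(data):
        return [zlib.crc32(data[i:i + CHUNK]) for i in range(0, len(data), CHUNK)]

    original = bytes(10 * 1024 * 1024)   # stand-in for a big file
    expected = chunk_crcs(original)

    corrupted = bytearray(original)
    corrupted[5000000] ^= 0x01           # flip a single bit
    crcs = chunk_crcs(bytes(corrupted))
    print([i for i, c in enumerate(crcs) if c != expected[i]])
    # [4882] - only one 1 KB chunk needs repair or re-fetch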
In the last couple of months they also introduced new 'Micro' EC2 instances, billed at 2-3 cents/hr, and ran a promotion giving free access to a single Micro instance for a full year. It seems like price wars are on the horizon for 2011. Great news for startups using EC2 (like mine).
They even dropped the reduced redundancy storage rate.
I wonder if, with reduced redundancy plus export to your own hardware (using Amazon's sneakernet, i.e. AWS Import/Export), you could mimic the standard durability at a lower cost? Maybe too much work.