Lower Cost S3 Storage Option and Glacier Price Reduction (amazon.com)
170 points by jeffbarr on Sept 17, 2015 | 71 comments



This looks like it would be perfect for Tarsnap -- the data Tarsnap stores is almost always kept for 30+ days, and it's almost always in objects of 128kB or more. The $0.01/GB for reads would be annoying (one of the reasons Tarsnap is hosted in EC2 is that it has free data transfer to and from S3; data is regularly retrieved and stored back after filtering out blocks marked for deletion), but it would be cheaper.

One thing concerns me however: Standard – IA has an availability SLA of 99%.

If this is just a reduced SLA but the actual availability is likely to be similar, that's fine. But if the actual availability is not expected to hit 99.9% -- say, if the backend implementation is "one copy in online storage, plus a backup in Glacier which gets retrieved if the online copy dies" -- that would be completely inadequate.

Hopefully we'll get more details over time.


This is online storage.

If a GET fails, just retry as usual (most higher-level libraries do this automatically, sometimes with a backoff mechanism).


If a GET fails, just retry as usual

Thanks! This is a very important detail which isn't documented anywhere: Retries are likely to succeed. A service where 1% of requests fail but failures are completely uncorrelated is far more usable than a service where 0.01% of requests fail but they keep on failing no matter how many times you retry them.
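
For what it's worth, a minimal retry-with-backoff sketch in Python (boto3; the bucket and key names are made up):

    import time
    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    def get_with_retries(bucket, key, attempts=5):
        # Uncorrelated failures are very likely to succeed on a later
        # attempt, so back off and retry before giving up.
        for i in range(attempts):
            try:
                return s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            except ClientError:
                if i == attempts - 1:
                    raise
                time.sleep(2 ** i)  # 1s, 2s, 4s, ...

    data = get_with_retries("example-tarsnap-blocks", "blocks/abc123")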


Additionally, assuming your block data is being hash-addressed, i.e. not changing the S3 objects once they are in S3, adding CloudFront in front of your buckets may go a long way to increasing that percentage.

However, its SLA is a little more involved (for better or worse): http://aws.amazon.com/cloudfront/sla/

I'm not accessing S3 from EC2, though, so another benefit for me was it brought my S3 network costs way down.


(constructive review)

I had to read these sentences a few times to understand what you were trying to say: "You now have the choice of three S3 storage classes (Standard, Standard – IA, and Glacier) that are designed to offer 99.999999999% (eleven nines) of durability.‎ Standard – IA has an availability SLA of 99%."

No availability is mentioned for the others, but I assume it's 100%? Perhaps a simple table could help readers to scan and visually compare the values of two properties across three service classes?


S3 Standard: "Designed for 99.99% availability over a given year" [1]

Standard - IA: "Designed for 99.9% availability over a given year" [2]

Glacier: "Retrieval jobs typically complete within 3-5 hours." [3]

[1] https://aws.amazon.com/s3/storage-classes/#Amazon_S3_Standar...

[2] https://aws.amazon.com/s3/storage-classes/#Infrequent_Access

[3] http://aws.amazon.com/glacier/faqs/#How_can_I_retrieve_data


Thanks for rounding those up. There's a nice table just below this URL with all the relevant comparisons too: https://aws.amazon.com/s3/storage-classes/#Archive

It does contradict the introductory blog post here, but I'm assuming the actual documentation is more accurate.


Jeff, could you elaborate on the trade-offs?

When choosing between standard and this it would be helpful to understand the pros and cons. With the current description (below) it's as if the difference is only in pricing. But I assume there is a technical difference as well.

Also, the availability number could be explained better -- why it is different.

    Standard - IA offers the high durability, throughput, and low latency
    of Amazon S3 Standard, with a low per GB storage price and per GB retrieval fee.


One possibility is that the reduced SLA accounts for code bugs and the service's youth, not for any expected difference.

They might expect it to be just as good as regular S3 in the happy path, but are under-promising out of fear of some code bug or other issue.

Either way, I'd definitely not migrate too soon for exactly the above reason.


Based on the post, it seems availability drops but durability remains the same. You might need to retry to get an object, but you'll be successful eventually.


Right. But it matters how much availability drops, and also what the correlation is between failures -- if they're completely uncorrelated but there's a 1% failure rate, you just retry, but if 1% of objects are going to be unavailable for the next four hours, that's a problem.


From one of the customer quotes on the page:

> it’s vital that customers have immediate, instant access to any of [our photos] at a moment’s notice – even if they haven’t been viewed in years. [IA] offers the same high durability and performance ... so we can continue to deliver the same amazing experience for our customers.

The way this was phrased implies that this customer's use-case had a hard requirement that all of their data be in "online" storage at all times, and their satisfaction implies that IA does, in fact, hit this requirement.

I'm not sure what the 99% SLA means given that.


Huh, I hadn't noticed that quote. Now I'm really confused.


To me it sounds like:

S3 Classic: Your file gets replicated on 3 live HDs (and/or HDs backed by RAID arrays—not sure about the internal S3 storage topology).

S3 Infrequent: Your file gets stored on 1 live HD (or single hardware redundancy component) and a copy in Glacier. If your live HD dies, your file will be automatically restored from Glacier to a new HD (but your data may be inaccessible during the automatic re-deploy).

Glacier: offline Blu-ray combined with error correction, accessed by robot arms and temporarily restored to live HDs on demand.


I thought Glacier used "green" consumer drives clocked down even further to save power?


Whichever they use, most disks are likely unpowered most of the time. Like the Facebook equivalent where they can only power 1 out of 12 disks at any time.


I understand that 99% doesn't mean that 1% of the objects will always be inaccessible. Instead, I guess what they mean is that they allow themselves up to roughly 88 hours a year of downtime (1% of 8,760 hours) for any bucket.


Updated cross-provider comparison:

http://gaul.org/object-store-comparison/


Thanks for that. After looking at that comparison, I came across https://www.runabove.com/index.xml, which offers a good deal.

Storage at 1c/GB/month, outgoing traffic at 1c/GB, and no charge for incoming traffic. Data is replicated 3 times.


RunAbove is managed by OVH. Very cheap, but not the most reliable, at least on their dedicated servers.


How does Amazon claim 11 9's of durability when the chance of an asteroid extinction event is roughly 1000x as high?

This isn't a joke: I can't find any documentation on the risk model that lets them estimate 11 9's and what class of risks it includes.


With the cost per GB for Glacier storage now, any small to medium company would be fiscally irresponsible not to use it as a primary disaster recovery option. $84 per year per TB is ridiculous for geographically diverse storage.
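
For reference, the arithmetic behind that figure (assuming the post-reduction Glacier price of $0.007 per GB-month):

    # $0.007 per GB-month (assumed post-reduction Glacier price)
    0.007 * 1000 * 12    # = 84.0 dollars per TB per year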


Everything related to Glacier is ridiculously complicated.

The pricing is tricky (the per-GB price is cheap, but the retrieval can get horribly expensive). There's the fixed 4-hour delay for all actions (including listing stored files), which makes any interaction a pain. And there aren't really any good clients or high-level libraries that abstract away this complexity.

For disaster recovery, I would certainly go for something simpler and easier to use. When everything is on fire, the last thing I need is to be dealing with a tricky API to restore the company files.


I've used it a bit, and I have to say I agree.

Look, Glacier is great and the prices are really good. But it isn't something an SMB wants to be using directly. A large enterprise that can dedicate engineers to it, sure, but an SMB really wants to be using Glacier by means of a third-party service, in my opinion.

I think it is wise to think of Glacier as cold storage. So if you need recoveries RIGHT NOW, well, it may not be for you. If you can wait 24 hours? Sure (and yes, I realise you can recover faster than that, but between transfer times and actually starting the transfer, it can take a while).


There are some companies that sell products to IT people that use Glacier natively. Veeam makes a very good backup product for backing up VMs that uses Glacier natively. No need to use APIs: you just select the VM you want to restore and it brings it down from your Glacier store. You have to remember that most IT people have no idea how to use an API, and aren't on Hacker News.


The Glacier API is a miserable piece of crap. A couple months ago, something regressed on Amazon's end causing uploads to fail for no good reason. We could have worked around it client-side with some changes to Boto, but that would have been painful enough that it literally would have been less work to start from scratch on Google Nearline Storage.

For better or for worse, Amazon fixed it, so we're still using Glacier.


There are Glacier clients that allow you to manage the "Restore Speed" so you don't get hit with ridiculous price hikes.

Glacier is PERFECT if you just need to restore a photo or document, and not the entire repo.


But you have to store things in larger archives or the per-object overhead hurts your pricing. When Glacier first came out I really wanted to use it, but it had so much complexity over just treating it like an object store that I didn't use it. Then add the fact that S3 Standard kept coming down in price and Glacier just stood still (thus the name).


I'm currently storing my entire photo collection of about 50GB and ~20,000 photos. It cost me about $3 to upload the entire library. I pay around 50 cents a month for storage. YMMV, but I'm very happy with it.


Not that large. 25MB objects will cost you $.002 per GB to transfer, much lower than the storage and bandwidth costs. 5-10MB objects are perfectly feasible at low cost.
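
Rough arithmetic behind that, assuming Glacier upload requests are billed at about $0.05 per 1,000:

    # 25 MB objects -> 40 objects per GB
    objects_per_gb = 1000 / 25
    # assumed request price: $0.05 per 1,000 uploads
    objects_per_gb * 0.05 / 1000    # = 0.002 dollars per GB in request fees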


Glacier pricing is cheap only if you don't need to access it quickly.

  Glacier pricing is surprisingly complicated, and the actual
  cost can be much higher than $0.01 per GB-month if you don't
  read the fine print.

  The biggest gotcha is that you can only access 0.17% of the data
  you've stored in any given day without extra charges. So if you've
  stored 1000 GB, you can only access 1.7 GB per day for free.
https://news.ycombinator.com/item?id=9184466


I think the math for Glacier actually gets a lot simpler if you look at it this way:

  Glacier is only cost effective if you never
  want to access that data ever again.
There are actually use cases like this, where you will almost certainly never want to restore the data but put it in Glacier just in case. For anything you might ever reasonably want to restore within a reasonable timeframe (like a MySQL database backup), it just doesn't make sense: it's too slow and too expensive.


Disaster Recovery != storage

Disaster recovery is being able to get your business up and running again. It would be pretty rare to have a disaster that wipes all your data but leaves your hardware intact and in perfect working condition. Your DR strategy needs to include hardware, location, people, everything.


I totally agree. A lot of small to medium businesses can't/won't afford true DR. This is a lower cost way to get your data offsite, and should you have a DR event that you can't fix with local or semi local backups, then you have this. A lot of businesses are unable to afford geographically diverse storage.


Glacier is archival, not Disaster Recovery. A four-hour wait to start restoring functionality is not acceptable for most online services.

Glacier may be part of a 'scorched earth' disaster recovery, but it (almost definitively) can't be a primary option.


In most cases a half-decent IT department has insurance that covers a DR event, so the cost is irrelevant. Even if that is not the case, the cost of getting that data out of the cloud will be cheaper than building geographically separate backup locations. I'm not talking about large companies; I'm talking about small to midsize companies that need DR but don't have the money to build several DR sites to do proper DR.

And I'm not talking about regular backup data that is accessed quickly as needed. I'm talking about true DR that is only accessed when you have a catastrophic data center event.


Are we certain that Glacier is "geographically diverse storage"?

We know that to be true for S3 Standard. Even S3 Reduced Redundancy claims "The RRS option stores objects on multiple devices across multiple facilities".

But I haven't seen Amazon make a similar diversity claim for Glacier. Perhaps it's implied by the "durability of 99.999999999% of objects" claim they make? Hard to achieve that durability in a single data center if there's a non-zero probability of something like a fire or other catastrophe.


What do you mean by 'geographically diverse'? AWS has a model of 'availability zones' and regions. A region like us-east-1 means a bunch of AZs (a.k.a. datacenters) that are close to each other.

S3 (and most AWS services) copies your data across multiple AZs but they are all pretty close together.


What do you mean by 'geographically diverse'?

I used those words because the person I was responding to said:

   $84 per year per TB is ridiculous
   for geographically diverse storage.
The interesting question to me is "how close is too close"? Here in the Pacific Northwest we've had quite a few wildfires this summer. Even if Amazon's Oregon data center isn't anywhere near a forest, there are still failure modes that can affect a widespread area. For example, fires can disrupt power lines. They can also result in mandatory evacuations of large areas. They can also cause highways to be shut down for many days. All of which can impact multiple AZs that are "close to each other".


In many cases the recovery times won't be sufficient for that. If the bulk is archived or log data, a couple of days/weeks of recovery is OK, but if you need everything immediately...


These prices make the new Standard-IA storage significantly cheaper than the Reduced Redundancy Storage, even if you end up reading the data back.

However, I find it interesting that in addition to the cost per GB to retrieve data, this new storage class also has a significantly higher per-request cost, too. Actually, it looks cheaper to upload an object as a different storage class and then transition it to Standard-IA, since PUT of IA costs $0.01 per 1000, but PUT of another class costs $0.005 per 1000, and the cost to transition another class to IA is $0.01 per 10000. It's a small difference ($0.04 per 10k objects), but if you store an obscene amount of data on S3, that seems like enough difference to matter.
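
Spelled out per 10,000 objects, using the prices quoted above:

    direct_ia_put = 10000 * 0.01 / 1000                      # = $0.10
    standard_put_then_transition = (10000 * 0.005 / 1000     # = $0.05 to PUT
                                    + 10000 * 0.01 / 10000)  # + $0.01 to transition
    direct_ia_put - standard_put_then_transition             # = $0.04 saved per 10k objects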


I'm going to go out on a limb and guess that Standard-IA is basically Standard except with slow disks instead of fast. Slow disks have lower $/GB, but higher $/IOPS.


Seems unlikely, given some of the other properties it has; more likely, Standard-IA has fewer copies of the data, and then a backup in Glacier, which explains why it has high durability but low availability.


Doubtful. Almost certainly the same storage backend, but IA requests are more aggressively throttled, so there's more chance of requests being rejected. S3's capacity planning then requires less request processing infrastructure to serve IA, so it's cheaper to provide than regular S3.


This seems like a smart response to Google Cloud Nearline Storage. Slightly more expensive, but with the S3->IA->Glacier lifecycle mapping, the ultimate costs may be lower with AWS; certainly the flexibility will be there.

Amazon, staring at its mighty armory, goes on the hunt for a tiny chink to repair.


Just a clarification: the difference in cost-per-byte stored is 25% (1.25 cents vs 1). The rates are all so small that it's hard to see, but when you say $12.50 vs $10 per TiB/month, I think that makes it more "visible". As you get into the PiB range ($10k/month on Nearline) and then consider storage for a year, you've got a difference of $30k/year/PiB. For individuals doing a small backup, even 25% isn't huge in absolute numbers, but in the petabyte range it matters a lot.
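
The same numbers spelled out (rounding to decimal units, 1 TB = 1,000 GB and 1 PB = 1,000,000 GB):

    0.0125 * 1000             # = $12.50 per TB-month (Standard-IA)
    0.0100 * 1000             # = $10.00 per TB-month (Nearline)
    0.0025 * 1000000 * 12     # = $30,000 per year difference at 1 PB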

Disclaimer: I work on Compute Engine (and not GCS or Nearline).


Yep, looks like AWS is catching up with Google in areas like these (Nearline, Container Engine).


Just released a tool yesterday to give insight into your S3 bucket sizes: https://github.com/EverythingMe/ncdu-s3


Can you say a few words about this? What's the advantage over doing a recursive ls with something like s3cmd?


Just like the difference between ncdu and regular du. It lets you drill down into the directory hierarchy and spot the exact dir/file that consumes a lot of space.


Aren't these services still wildly overpriced?

At $0.0125/GB/month, that means it costs $75 for 6TB per month.

But a 6TB hard drive costs less than $300, which means that assuming the data is stored on 3 hard drives for redundancy, they break even in less than a year.

However, hard drives seem to last at least 3-5 years on average, so this service seems to be priced at least 3-5 times as much as it costs Amazon.

And there is even a $0.01/GB charge for retrieval on top.

There are other costs, but they should be relatively small at scale.

Am I missing something? If not, why doesn't anybody compete with Amazon and provide more reasonable pricing?


You'd need 3 pairs of 6TB hard disks in 3 data centers to match the reliability that S3 provides.

So at least 6 HDDs, $1,800 of H/W, and the monthly hosting costs for the 3 DCs and presumably 3 server chassis... Plus software to keep them in sync, and you need to deal with the inevitable failures that will happen eventually.

Amazon guarantees 11 9's worth of durability (99.999999999%).

Backblaze sees between 2.41% and 7.77% annual failure rates on 6TB HDDs (source: https://www.backblaze.com/blog/best-hard-drive/) and they're the only ones publishing numbers like these over a sensible number of HDDs.


Go look up one of James Hamilton's talks; he breaks down the cost of running datacenters/cloud. 'Servers' are ~50% of the cost, and hard drives are of course less than 100% of server costs.

So you need to at least double the cost of the hard drive. Tripling the cost of hard drives might be a good assumption for S3/Glacier because you could assume they are building servers where hard drives make up 2/3 the cost of the server.


Amazon offers features of reliability which you normally wouldn't attempt to tackle yourself, such as geographically diverse storage and really high reliability rates. You don't have to worry about what technology they use behind the scenes.

If you set up your own system, you have to manage all the technology and risk management strategies. Then you can see if you can do it cheaper than Amazon.


My guess is off-site backup is expensive. Also add the electricity costs, CPU, engineers, rent, etc.

That probably takes the margin from 80% of the price to maybe 40%, in line with most retailers and whatnot.


Glacier isn't even using hard drives, so their per TB costs should be way lower (raw storage and infrastructure).


> Glacier isn't even using hard drives

That's still speculation, right? Some have theorized they use offline hard disks.


I would expect offline HDDs as well, possibly combined with tapes for secondary copies to meet durability requirements.


Yeah, except you forgot to account for:

* electricity

* labor

* bandwidth

* cooling

* servers, routers, wiring, and other infrastructure

* taxes on everything

* rent

* insurance


Would be nice to apply this to EBS snapshot storage too -- that can add up mighty quick.


If I could have a git API to Glacier, I would be happy.

I have tried various Glacier clients and they all seem to suck, so I have trouble sanely tracking exactly what I have stored there. :( Unusable.


Are you using Glacier-specific clients? I just use an S3 client and configure the bucket to put everything in Glacier.

For example, I have a cron job calling `s3cmd sync` for my photos on my iMac once a day.


I'm aware of that easy way of getting stuff into Glacier, but once it "expires" into Glacier, how do you check what's actually there?

It quickly becomes a nightmare for me. Hence I need git! http://natalian.org/2015/04/13/How_I_organise_my_media/
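
If the objects went into Glacier via an S3 lifecycle rule (rather than the raw Glacier API), you can still list them through S3 and see each key's storage class. A minimal boto3 sketch, with a made-up bucket name:

    import boto3

    s3 = boto3.client("s3")

    # Lifecycle-archived objects still appear in bucket listings,
    # with StorageClass reported as GLACIER.
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket="example-media-backup"):
        for obj in page.get("Contents", []):
            print(obj["Key"], obj["StorageClass"], obj["Size"])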


What about s3backer with a filesystem (https://code.google.com/p/s3backer/wiki/CreatingANewFilesyst...)? Then you could use regular git.

I think any active use of glacier is going to suck though. Just use that for archive.


git-annex has a glacier remote which is supposed to work (I haven't personally used it yet).

http://git-annex.branchable.com/special_remotes/glacier/


Spent the past 10 minutes just looking for a simple grid chart of S3 pricing and gave up.

And "infrequent access" is not on the price calculator.

WTF, why is it so complicated to compare?


Did you see this?

https://aws.amazon.com/s3/pricing/

It has quite a lot of info.

Edit: you must view that page with JS enabled, otherwise no prices are shown. Perhaps that was your problem?


Didn't show in Firefox; had to open it in Chrome to see the table, thanks.

Firefox logs the dreaded

"Error: InvalidStateError: A mutation operation was attempted on a database that did not allow mutations."

which is a known symptom of bad coding.


So... do S3 objects become IA automatically if they're accessed infrequently, or do we have to implement that on our side?


No, but you can configure your bucket to behave that way (and then, after even more time, move objects to Glacier if you wish).
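
A sketch of such a lifecycle rule with boto3 (bucket name and day counts are made up):

    import boto3

    s3 = boto3.client("s3")

    # Transition objects to Standard-IA after 30 days and to Glacier after 90.
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-bucket",
        LifecycleConfiguration={
            "Rules": [{
                "ID": "tier-down-old-objects",
                "Filter": {"Prefix": ""},   # apply to the whole bucket
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
            }]
        },
    )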


You have to state the storage class at object creation time. Prior to this, there were only Standard (the default) and Reduced Redundancy.
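
For example, with boto3 (hypothetical bucket, key, and file), the class is just a parameter at upload time:

    import boto3

    s3 = boto3.client("s3")

    # The storage class is chosen per object when it is written.
    s3.put_object(
        Bucket="example-bucket",
        Key="logs/2015-09-17.gz",
        Body=open("2015-09-17.gz", "rb"),
        StorageClass="STANDARD_IA",   # STANDARD is the default
    )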



