
I think the best option for backups at AWS these days is the "Infrequent Access" storage class introduced a few months ago (probably as a reaction to Nearline): https://aws.amazon.com/blogs/aws/aws-storage-update-new-lowe...

It's almost as cheap as Glacier, but requires no waiting and has no complicated hidden costs: just somewhat higher request pricing, a 30-day minimum storage duration, and an extra $0.01 per GB for data retrievals.
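
For what it's worth, picking the storage class is just one extra argument at upload time. A rough boto3 sketch (bucket and file names are made up):

    import boto3

    s3 = boto3.client("s3")

    # Upload a backup archive into the Infrequent Access storage class.
    s3.upload_file(
        Filename="backup.tar.gz",                  # hypothetical local archive
        Bucket="my-backup-bucket",                 # hypothetical bucket
        Key="nas/backup.tar.gz",
        ExtraArgs={"StorageClass": "STANDARD_IA"},
    )

Retrieval afterwards is a normal S3 GET, no restore step like Glacier.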




Sadly, there's still the "small file gotcha":

> Standard - IA has a minimum object size of 128KB. Smaller objects will be charged for 128KB of storage.

Rounding all the small files up to 128 KB can be a huge deal. I, for example, use Nearline to "rsync" my NAS box directly for offsite backup (yeah, I know, I should use something that turns it into a real archive, but I'm lazy and Synology has this built in). If those hundreds of thousands of (often) small files were rounded up, S3-IA would easily be more expensive than S3/GCS.
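
A back-of-the-envelope sketch of that (file counts and per-GB prices are made up / approximate, just to show the shape of it):

    NUM_FILES = 300_000   # made-up: lots of small files on a NAS
    AVG_SIZE_KB = 20      # made-up average file size
    NEARLINE = 0.01       # $/GB-month, GCS Nearline (approx.)
    S3_IA = 0.0125        # $/GB-month, S3 Standard-IA (approx.)

    actual_gb = NUM_FILES * AVG_SIZE_KB / 1024 / 1024
    billed_gb = NUM_FILES * max(AVG_SIZE_KB, 128) / 1024 / 1024  # 128 KB minimum

    print(f"Nearline, actual size:    {actual_gb:6.1f} GB -> ${actual_gb * NEARLINE:.2f}/month")
    print(f"S3-IA, rounded to 128 KB: {billed_gb:6.1f} GB -> ${billed_gb * S3_IA:.2f}/month")

With numbers like these, the rounding makes IA several times more expensive per month even though its per-GB price is lower.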

Disclaimer: I work at Google on Compute Engine and am a happy Nearline customer myself.


You can also try the duplicity backup tool. It uses tar files to store the primary backup and its binary incremental deltas. It supports a number of backup back-ends, but you can also just generate the backup to a local disk and then rsync it to a remote location.
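
Something like this is the "generate locally, then rsync" workflow (a hedged sketch with placeholder paths; duplicity also normally wants a GPG passphrase or --no-encryption, which is omitted here):

    import subprocess

    SOURCE = "/volume1/data"                       # hypothetical NAS share
    LOCAL_TARGET = "file:///volume1/backups"       # duplicity writes its tar volumes here
    REMOTE = "user@offsite.example.com:/backups/"  # hypothetical rsync destination

    # 1. duplicity builds the full or incremental tar-based archive locally.
    subprocess.run(["duplicity", SOURCE, LOCAL_TARGET], check=True)

    # 2. push the resulting archive volumes to the remote machine.
    subprocess.run(["rsync", "-a", "--delete", "/volume1/backups/", REMOTE], check=True)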


IMO, only crazy people upload anything to the cloud without compression+encryption.


You could solve that by using tar/zip to bundle your backups, but obviously that's an extra step.
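
A minimal sketch of that step (paths made up), which turns thousands of small files into a single large object so the 128 KB minimum only applies once:

    import tarfile

    # Bundle the whole directory into one compressed tarball before uploading.
    with tarfile.open("nas-backup.tar.gz", "w:gz") as tar:
        tar.add("/volume1/data", arcname="data")  # hypothetical NAS share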



