
It's more like 700GB now on the new server, and we were about to have to migrate to a higher tier on Atlas.

> maybe you don't need that much uptime for your particular use case.

Correct. Thanks for reading!



Yep, we just migrated to Atlas, and the disk size limitation of the lower instance tiers pushed us to do a round of data cleaning before the migration.

Also, we noticed that after the migration, the databases that occupied ~600GB of disk in our (very old) on-premises deployment were around 1TB on Atlas. After talking with support for a while, we found that Atlas was using Snappy compression with a relatively low compression level, and we couldn't change that ourselves. After requesting it through support, we switched to zstd compression and rebuilt all the storage, and a day or two later our storage was under 500GB.
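For reference, on a self-managed MongoDB deployment the WiredTiger block compressor is a storage setting you can change yourself (which is presumably what Atlas support changed behind the scenes here). A sketch of the relevant `mongod.conf` fragment, for a hypothetical self-hosted setup:

```
# mongod.conf fragment: use zstd instead of the default snappy for
# newly created collections. Existing collections keep the compressor
# they were created with, so data has to be rebuilt (e.g. initial sync
# or dump/restore) before the savings show up, as described above.
storage:
  wiredTiger:
    collectionConfig:
      blockCompressor: zstd
```

The compressor can also be set per collection at creation time via `db.createCollection(..., {storageEngine: {wiredTiger: {configString: "block_compressor=zstd"}}})`, if only some collections are worth the extra CPU cost.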

And backup pricing is super opaque. The docs don't show concrete pricing, just ranges. And depending on which cloud provider you deployed to, snapshots are priced differently, so you can't just multiply your storage by the number of snapshots, and they aren't transparent about the real size of the snapshots either.

All the storage stuff is messy and expensive...


Did you have a very aggressive backup schedule?



