Oh, running databases on instance storage is one of my favorite topics! :-)
Here is the start of a chain of comments where you'll find some interesting ideas and alternatives: https://news.ycombinator.com/item?id=17098783 (It is just amazing who is on HN....)
Doesn't really seem like a fair comparison since it relies on compression for the local storage to be cheaper. Compare similarly sized EBS volumes + an EC2 instance to the local-storage instances and it's pretty much the exact same cost.
Obviously local NVMe is going to smoke a networked replicated drive performance-wise, but that comes with its own set of trade-offs.
Is Amazon AWS the new reality distortion field? Gosh, just rent two dedicated boxes, one master, one slave, switch over to the slave manually in the extremely rare case that the master fails, and be done. This entire article screams "right tool for the job, and this is not the right tool".
RDS with reserved instances / VPC and some security rules. A LOT of maintenance goes away.
Who is managing these two dedicated boxes at the OS and app level? Who is managing the networking of these boxes, including a non-internet-accessible subnet and an internet gateway to let a whitelisted developer host connect to a dev instance of the database with some sample data?
Using Secrets Manager, which handles key rotation, works great on AWS; how does this work on the rented boxes?
Who sets up the slave and confirms replication to it is current?
Who does the backups of all of this (in case of a DROP TABLE)?
I used to do things the harder way with rented boxes. BUT every time someone tells me "oh, just rent some boxes and you could do X much cheaper," I roll my eyes. X always comes with massive cuts to security and durability, plus a massive uncounted overhead in other areas (wasted time).
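On the key-rotation point: what Secrets Manager gives you on the client side is essentially "fetch the current secret, cache it briefly, re-fetch after rotation". A minimal sketch of that pattern in Python (the fetch function here is a stand-in for the real API call; names and TTL are illustrative, not any particular library's API):

```python
import time

class SecretCache:
    """Client-side cache that re-fetches a secret once a TTL expires,
    so rotated credentials get picked up without restarting the app."""

    def __init__(self, fetch, ttl_seconds=300):
        self._fetch = fetch        # stand-in for a real Secrets Manager call
        self._ttl = ttl_seconds
        self._value = None
        self._fetched_at = 0.0

    def get(self):
        now = time.time()
        if self._value is None or now - self._fetched_at >= self._ttl:
            # TTL expired (or first call): rotation may have happened,
            # so fetch the current value again.
            self._value = self._fetch()
            self._fetched_at = now
        return self._value
```

On rented boxes you would have to build and operate something like this yourself, plus the rotation side.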
While I agree with you, "switch over to slave manually" just doesn't seem right. Who would monitor when to switch? Why would somebody want to monitor that? Having said that, there are probably solutions that allow switching when the database is waiting on I/O.
RDS doesn't fail over to a slave anyway. You would need to do that manually.
If you mean 'I ticked the multi-AZ box', that only applies in the event that the AWS availability zone goes down - it will move RDS to another zone for you. If you have services that cache IP addresses from DNS, you will need to rediscover the database.
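To that last point: the usual fix is to re-resolve the endpoint's hostname on every reconnect attempt instead of holding onto an IP cached at startup. A minimal sketch (the hostname and port are placeholders):

```python
import socket

def fresh_db_addr(hostname, port):
    """Resolve the database endpoint right now, rather than reusing an IP
    cached at startup. After a multi-AZ failover the endpoint's DNS record
    points at the new host, so only a fresh lookup picks up the change."""
    infos = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    family, socktype, proto, canonname, sockaddr = infos[0]
    return sockaddr[0]   # current IP for the endpoint
```

Some runtimes (the JVM, notably) cache DNS results aggressively, so you may also need to turn that cache down for this to help.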
I think it's also about that things grow. It's easy to set up one thing anywhere but then you may want more things connected to that thing. All of a sudden you want to focus on your app and not infrastructure problems.
AWS is expensive but for a company it's very cheap for the problems they avoid completely.
I was looking into it a bit before. The smallest RDS instance was like 11 dollars per month and can't use spot instances (naturally).
If you do use a spot instance with a local Postgres on EBS, it comes down to about 4 dollars per month altogether. With instance store, that should be around 3 dollars.
These are for t2.micro. If you use bigger instances, the cost difference will be huge.
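Back-of-the-envelope, the arithmetic behind those numbers looks roughly like this (the hourly and per-GB rates are illustrative assumptions; actual AWS pricing varies by region and changes over time):

```python
HOURS_PER_MONTH = 730   # average hours in a month

# Illustrative rates only -- not current AWS price-list values.
rds_t2_micro_hr  = 0.015    # on-demand RDS db.t2.micro, per hour
spot_t2_micro_hr = 0.0035   # EC2 t2.micro spot, per hour
ebs_gb_month     = 0.10     # EBS gp2, per GB-month
ebs_gb           = 8        # small root + data volume

rds_monthly  = rds_t2_micro_hr * HOURS_PER_MONTH
spot_monthly = spot_t2_micro_hr * HOURS_PER_MONTH + ebs_gb * ebs_gb_month

print(round(rds_monthly, 2))    # on the order of the ~11 USD figure above
print(round(spot_monthly, 2))   # a few dollars, in line with the ~4 USD figure
```

Running Postgres yourself on the spot instance is what makes the gap; the rates themselves scale the same way for bigger instance classes.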
It's my private project: trying to build an interesting browser plug-in using AWS services. I want to keep the costs super low since I need it running all the time, but will only have a handful of computers running the plug-in while developing.
I do it partly to learn about aws features, partly because I would like if this plug-in existed myself and it doesn't :)