> we’ve replaced our $10k/month Aurora instances with a $200/month Elastic Block Storage (EBS) volume.
Without any intent to insult what you've done (because the information is interesting and the writeup is well done)... how do the numbers work out when you account for actually implementing and maintaining the database?
- Developer(s) time to initially implement it
- PjM/PM time to organize initial build
- Developer(s) time for maintenance (fix bugs and enhancement requirements)
- PjM/PM time to organize maintenance
The cost of someone to maintain the actual "service" (independent of the development of it) is, I assume, either similar or lower, so there's probably a win there. I'm assuming you have someone on board who was in charge of making sure Aurora was configured / being used correctly; and it would be just as easy, if not easier, to do the same for your custom database.
The cost of $120,000/year for Aurora seems like it would be less than the cost of development/organization time for the custom database.
Note: It's clear you have other reasons for needing your custom database. I get that. I was just curious about the costs.
This sounds like what a big company or a disorganized company would need. For an efficient enough company, a project like this needs just one or two dedicated engineers.
In fact, I can't imagine why this project needs a PM at all. The database is used by engineers and is built by engineers. Engineers should be their own PMs. It's like saying we need a PM for a programming language - but no, the compiler writer must be the language designer and must use the language. Those who do not use a product or do not have in-depth knowledge of the domain should not be the PM of the product.
>For an efficient enough company, a project like this needs just one or two dedicated engineers.
Maybe for a research project or a hobby project, but not for a real, high performance database to be used in a business-critical application.
FTA:
"Databases are a nightmare to write, from Atomicity, Consistency, Isolation, and Durability (ACID) requirements to sharding to fault recovery to administration - everything is hard beyond belief."
>Engineers should be their own PMs.
For small projects, sure (your "one or two dedicated engineers"). But once you start tackling projects that require larger teams, or even teams of teams, you need someone to track and prioritize the work remaining and the work in progress (as well as the corresponding budgets for personnel, services, and other resources). Similar to the way a sole proprietor can do their own accounting, but a multi-million dollar business probably should have an accountant.
As an aside, I wonder if this might be a use case for a bitmap db engine like Featurebase (https://www.featurebase.com/).
> what we’ve built is just a cursor streaming a binary file feed with a very limited set of functionality - but then again, it’s the exact functionality we need and we didn’t lose any features.
The trick is that they didn't need a database that provides "Atomicity, Consistency, Isolation, and Durability (ACID)". By only implementing what they need they were able to keep the project small.
It's like people are scared of doing anything without making it into some huge multi-hundred-developer effort. They've written a super simple append-only document store. It's not rocket science. It's not a general purpose arbitrary SQL database.
If it's really true that they only needed a "cursor streaming binary file feed with a very limited set of functionality", then why claim in the headline they wrote their own database and begin the article with the observation that doing ACID databases is really hard?
Maybe they really built only what they needed. If so, good on them, but ditch the clickbait title. On the other hand, a lot of people believe that they've built only what they need, only to discover in a week or a month or a year that they underestimated their own requirements. Then they gradually re-learn all the lessons that led to the development of real databases, to their eventual sorrow.
Everything we are teaching people to do is centered around solving the problems with already built services, preferably from one of the big 3 cloud vendors.
We are churning out “qualified” engineers who don’t know how anything works and we pay them to configure a few AWS services.
> In fact, I can't imagine why this project needs a PM at all. The database is used by engineers and is built by engineers. Engineers should be their own PMs.
What about when two different projects have two different requirements they need supported by the database? Which one is implemented first? What about if there is only engineering capacity to implement one?
I don’t think a database is the place for “just send a PR for adding your required feature and ping the team that owns it” kind of development. It requires research, planning, architecture review, testing, etc. It’s not a hobby project, it’s a critical tool for the business.
I feel like the word "database" is throwing people off because they're comparing it with something like MySQL/Postgres, when this seems only slightly more complex than a k/v store written to a file, with some extra indexing, where data integrity is a low priority. That shouldn't take too much time and should be fairly isolated on the tech side, so little involvement from product/project managers.
> The cost of 120,000/year for Aurora seems like it would be less than the cost of development/organization time for the custom database
Only if they planned on hiring someone just to develop this new database, and would let them go immediately if they switched to Aurora. If said developer was already costing them $250k to maintain and develop the application on top of Aurora, this seems like a good way to save $100k/year.
True. Also, to your point, one could argue that if that developer leaves, they'd have an easier time hiring anyone with Aurora experience as opposed to someone to learn and maintain the custom database.
But at the same time, Aurora costs could also scale with usage. It may cost $120k one year, $180k the next year, $500k the year after. If the database they have now is well designed, then after it's built it may not need active development every year beyond adding a feature here and there. And switching back to Aurora would carry its own opportunity cost: "we should have written our own thing and could have saved millions ...".
Agreed. A developer who can pull this off is pretty good, if maybe distracted by shiny objects - what could they do working on the actual product instead of this technological terror?
RDS also outperforms Aurora. My 12 year old Dell R620s outperform both for certain types of queries (admittedly they have some fast NVMe over insanely fast Mellanox).
SANs add latency, who knew? What did surprise me during testing is that Aurora seems to have added latency even if the entire dataset could easily fit into buffers, with zero disk reads required.
If you want actually fast cloud DBaaS, the only way is with instances that have local NVMe for caching, like AWS ??gd.db instances.
Mine thinks we’re going to be able to shift a massive, resource-hungry PG BDR mesh onto Aurora. I don’t buy it, purely from a performance capability standpoint. The only reason the shit queries we have run at all is through sheer performance.
I think for this kind of thing their needs are so simple and well-suited to a bespoke implementation that it probably paid for itself in less than 4 months. This doesn't seem like a db implementation that's going to need dedicated maintenance.
They're operationally using a funny spelling of SQLite and I don't imagine anyone arguing that such a thing needs constant attention.
Don’t forget this is a largely one time cost vs Aurora, which scales cost with usage.
Also they said their current volume is around 13k/second. They’ve built the new platform for 30k/sec per node. This should last them a long time with minimal maintenance.
Using Aurora is also not free in terms of cost of development. Developers need to be trained on its implementation, features, constraints, performance implications, keeping up with API changes, etc.
The problem is all those people you listed would still exist plus the 120k bill to Amazon.
They may or may not be doing other things depending on the company size and state.
You could drop the PM, engineers writing for engineers don’t need a PM.
You will likely hire similarly costed engineers to maintain the database stack anyways.
You basically hit all the talking points big cloud has brainwashed people into believing, but every day we see stories of a handful of engineers doing something we are told can't be done and saving millions in cloud cost.
It's so painful to watch. Software engineering became a thing because you could hire an engineer to solve your problem, but big businesses stepped in, told you that your problem was something else, and have gone out of their way to stifle innovation by setting industry standards for how to do things that only guarantee you use cloud services.
Any company that wants to own its destiny knows to stay away from lock-in.
I would imagine, as someone with no special insight into goings-on at Hivekit, that the answer is intended scale.
They mention 13.5k simultaneous connections. The US alone has 4.2 million tractors - just the US, just tractors. If they get 10% of those tractors on the network, that's a 30x increase in their data storage needs. So multiply that across the entire planet, and all the use cases they hope to serve.
Investing time early on so that they can store 50x data-per-dollar is almost certainly time well spent.
Presumably those tractors wouldn't be connecting directly to the db though. Not sure why they don't just go the standard IoT events route and store data in a data lake and propagate into an analytics db/warehouse from there. Add a layer to make recent events available immediately.
I actually disagree with you here. There are costs above and beyond the engineer's effect on the balance sheet. There's the partial salary of management to manage them, plus asking them to document their work and train others so that the database won't have a bus factor of 1. So in well-run engineering departments, there's no such thing as paying for a "single" engineering salary. You have teams; a team maintains the system and it has a pre-existing workload.
A large part of the value of popular platforms is precisely that they are not bespoke. You can hire engineers with MySQL/Postgres experience. You cannot hire engineers who already have experience with your bespoke systems.
The median home price is under 400K, so a 120k salary is not really stretched.
Now, median in the Seattle metro, or in San Francisco, sure. But 120k in, say, St Louis is still going to get you an intermediate dev, no problem, and they can afford a house by themselves too. There are 4 bedroom houses in my neighborhood for 300K.
So now every change takes 10x as long because you need to explain it to somebody whose bonus depends on it not being done? I love how the solution to too much management is always more management.
Everyone seems fixated on the word database and the engineering cost of writing one. This is a log file. You write data to the end of it. You flush it to disk whenever you've filled up some unit of storage that is efficient to write to disk. Every query is a full table scan. If you have multiple writers, this works out very nicely when you have one API server per disk; each server writes its own files (with a simple mutex gating the write out of a batch of records), and queries involve opening all the files in parallel and aggregating the result. (Map, shuffle, reduce.)
Atomic: not applicable, as there are no transactions. Consistent: no, as there is no protection against losing the tail end of writes (consider "no space left on device" halfway through a record). Isolated: not applicable, as there are no transactions. Durable: no, the data is buffered in memory before being written to the network (EBS is the network, not a disk).
So with all of this in mind, the engineering cost is not going to be higher than $10,000 a month. It's a print statement.
If it sounds like I'm being negative, I'm not. Log files are one of my favorite types of time series data storage. A for loop that reads every record is one of my favorite query plans. But this is not what things like Postgres or Aurora aim to do, they aim for things like "we need to edit past data several times per second and derive some of those edits from data that is also being edited". Now you have some complexity and a big old binary log file and some for loops isn't really going to get you there. But if you don't need those things, then you don't need those things, and you don't need to pay for them.
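To make that concrete, here's a minimal sketch of that pattern - one file per writer, batched appends behind a mutex, every query a full scan. The record layout and names are made up for illustration, not whatever the article actually uses:

```python
import glob
import struct
import threading

# Hypothetical record layout (not the article's actual format):
# uint32 device_id, float64 unix_ts, float64 lat, float64 lon  -> 28 bytes
RECORD = struct.Struct("<Iddd")
FLUSH_EVERY = 4096  # records buffered before a write; arbitrary batch size

class LogWriter:
    """One writer per server/file; a simple mutex gates each batch write."""
    def __init__(self, path):
        self._f = open(path, "ab")
        self._buf = []
        self._lock = threading.Lock()

    def append(self, device_id, ts, lat, lon):
        with self._lock:
            self._buf.append(RECORD.pack(device_id, ts, lat, lon))
            if len(self._buf) >= FLUSH_EVERY:
                self._flush_locked()

    def _flush_locked(self):
        self._f.write(b"".join(self._buf))
        self._f.flush()  # only as durable as the OS / EBS make it
        self._buf.clear()

def scan(pattern, predicate):
    """Every query is a full scan: open all files, filter, yield (map/reduce)."""
    for path in glob.glob(pattern):
        with open(path, "rb") as f:
            data = f.read()
        # step in fixed-size strides; a torn record at the tail is simply ignored
        for off in range(0, len(data) - RECORD.size + 1, RECORD.size):
            rec = RECORD.unpack_from(data, off)
            if predicate(rec):
                yield rec

# e.g. all updates for device 42:
# list(scan("/data/locations-*.log", lambda r: r[0] == 42))
```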
The question you always have to ask, though, is have you reasoned about the business impacts of losing data through unhandled transactional conflicts? "read committed" or "non-durable writes" are often big customer service problems. "You deducted this bill payment twice, and now I can't pay the rent!" Does it matter to your end users? If not, you can save a lot of time and money. If it does, well, then the best-effort log file probably isn't going to be good for business.
If you only need those things there’s also an off the shelf solution for log files. Time you spend reinventing the wheel is time you aren’t spending finding product-market fit (if you’ve already found it you wouldn’t even consider it because you’d be too busy servicing the flood of customers.)
Unless your company is so far past product-market fit that it hires qualified applicants by the classful, or this whatever-it-is is their product, they have no business coding up custom infra bits. The opportunity cost alone is a sufficient argument against, though far from the only one.
I think that EBS is the difficult engineering problem that they purchased instead of built from scratch here. Writing binary records to a file and reading them all into memory is not going to be a time sink that prevents you from finding product/market fit. The $120,000/year burn rate on Aurora they had seems alarming; an alarm that strongly implies "we didn't use the right system for this problem".
My guess for "why didn't they use something off the shelf" is that no existing software would be satisfied with the tradeoffs they made here. Nobody else wants this.
I mean, that's basically my default. If I have some problem involving storing data, I pretty much 100% of the time type "pgx.Connect(" and go from there. If that's not good enough, only then do I start thinking about the problem. 99.9% of the time, PostgreSQL is going to get the job done for you.
Aurora is a nice drop-in replacement for Postgres. Kind of a good way of dealing with "oh shit, I built way too much stuff on top of a database that can't scale to this workload." It was good enough to find product/market fit, but now it's too slow. Solution: pay Amazon some money. If that works, that works.
If it doesn't, then you need to take a different approach. At this point, you're like 8 levels of weirdness down the stack, and that's when it's time for innovation. Nobody says "I need to make a CRUD app, so I'm going to build a team to build a database first." No! You just use Postgres!
When that starts failing in a measurable business metric kind of way, then your creative juices start flowing and you start thinking about making your own database.
(An aside: I make a database at work. We use Postgres for all the stuff that doesn't need to scale. "Is this auth token one associated with an authorized user?" That's Postgres. "Are these bytes being written to object storage approximately the same as bytes that have been written before?" That's our thing.)
(Another aside: My all-time favorite database is Spanner. I used that a lot when I worked at Google. I remember someone on my team working on the Spanner integration with our product, which was basically an in-memory database, but we needed durability. I've never seen a piece of software laugh at our 20,000 writes per second workload so hard. I imagined outages, rollbacks, furrowed brows... nope. I thought there was no way a durable replicated storage system could handle our workload. I was wrong. It just accepted all of our data and never caused a problem. So to me, that was the solution to every problem. But after I left Google, my thought was to use Cloud Spanner for everything. I quickly realized I did not have enough money to pay for Spanner! Nobody had enough money. So postgres is what I settled on. I have enough money to afford it ($0 is the starting cost) and it has never done me wrong. So I don't blame OP for starting there. It's what I would have done, anyway. BTW, I think they lowered the price a lot since I last looked, so if it sounds like the kind of thing you need, you should probably just buy it. But never underestimate "beefy computer with Postgres". By the time that fails you, you will probably have a lot of money if people like the thing you're making.)
Anytime I hear "we need to blast in per-second measurements of ..." my mind jumps to "well, have you looked at the bazillions of timeseries databases out there?" Because the fact those payloads happen to be (time, lat, long, device_id) tuples seems immaterial to the timeseries database, and they can then be rolled up into whatever level of aggregation one wishes for long-term storage.
It also seems that just about every open source "datadog / new relic replacement" is built on top of ClickHouse, and even they themselves allege multi-petabyte capabilities <https://news.ycombinator.com/item?id=39905443>
OT1H, I saw the "we did research" part of the post, and I for sure have no horse in your race of NIH, but "we write to EBS, what's the worst that can happen" strikes me as ... be sure you're comfortable with the tradeoffs you've made in order to get a catchy blog post title
ClickHouse is one of the few databases that can handle most of the time-series use cases.
InfluxDB, the most popular time-series database, is optimised for a very specific kind of workload: many sensors publishing frequently to a single node, and frequent queries that don't go far back in time. It's great for that. But it doesn't support doing slightly advanced queries such as an average over two sensors. It also doesn't scale and is pretty slow at querying far back in time due to its architecture.
TimescaleDB is a bit more advanced, because it's built on top of PostgreSQL, but it's not very fast. It's better than vanilla PostgreSQL for time-series.
The TSM Bench paper has interesting figures, but in short ClickHouse wins and manages well in almost all benchmarks.
Unfortunately, the paper didn't benchmark DuckDB, Apache IoTDB, and VictoriaMetrics. They also didn't benchmark proprietary databases such as Vertica or BigQuery.
If you deal with time-series data, ClickHouse is likely going to perform very well.
I work on a project that ingests sensor measurements from the field, and in our testing found TimescaleDB was by far the best choice. The performance, plus all their timeseries-specific features like continuous aggregates and `time_bucket`, plus access to the Postgres ecosystem was killer for us. We get about a 90% reduction in storage with compression without much of a performance hit too.
Apache Parquet as data format on disk seems to be popular these days for similar DIY log/time series applications. It can be appended locally and flushed to S3 for persistence.
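For instance, a rough sketch of that pattern with pyarrow and boto3 - accumulate a time window in memory, write one Parquet file locally, then push it to S3 (bucket, paths and schema are hypothetical):

```python
import datetime
import boto3
import pyarrow as pa
import pyarrow.parquet as pq

s3 = boto3.client("s3")
BUCKET = "my-location-archive"  # hypothetical bucket name

def flush_window(rows, window_start: datetime.datetime):
    """rows: list of (device_id, ts_ms, lat, lon) tuples for one time window."""
    table = pa.table({
        "device_id": pa.array([r[0] for r in rows], pa.uint32()),
        "ts":        pa.array([r[1] for r in rows], pa.timestamp("ms")),
        "lat":       pa.array([r[2] for r in rows], pa.float64()),
        "lon":       pa.array([r[3] for r in rows], pa.float64()),
    })
    local_path = f"/tmp/locations-{window_start:%Y%m%dT%H%M}.parquet"
    # columnar and compressed; per-column encodings do most of the size reduction
    pq.write_table(table, local_path, compression="zstd")
    s3.upload_file(
        local_path, BUCKET,
        f"locations/dt={window_start:%Y-%m-%d}/{window_start:%H%M}.parquet",
    )
```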
> but "we write to EBS, what's the worst that can happen" strikes me as ... be sure you're comfortable with the tradeoffs you've made in order to get a catchy blog post title
EBS latency is all over the place. The jitter is up to the 100ms scale, even on subsequent IOPS. We’ve also had intermittent failures for fsync(), which is a case that should be handled but is exceptionally rare for traditionally-attached drives.
The author does note in the writeup that they are comfortable with some (relatively rare) data loss; like server failure and the like. Given their use cases, it seems like the jitter/loss of EBS wouldn't be too impactful to them.
There's different kinds of data loss. There's data loss because you lose the whole drive; because you lost a whole write; because a write was only partially written. Half the problem with NIH solutions is, what happens when you try to read from your bespoke binary format, and the result is corrupted in some way? So much of the value of battle-tested, multi-decade-old databases is that those are solved problems that you, the engineer building on top of the database, do not need to worry about.
Of course data loss is alright when you're talking about a few records within a billion. It is categorically unacceptable when AWS loses your drive, you try to restore from backup, the application crashes when trying to use the restored backup because of "corruption", the executives are pissed because downtime is reaching into the hours/days while you frantically try to FedEx a laptop to the one engineer who knows your bespoke binary format and can maybe heal the backup by hand except he's on vacation and didn't bring his laptop with him.
> Half the problem with NIH solutions is, what happens when you try to read from your bespoke binary format, and the result is corrupted in some way?
restoring an EBS snapshot seems pretty similar to restoring Aurora/RDS, binary format or not. if you know you have problems. (they don't mention checksums in the blog post. or any kind of error handling, just that they can buffer some writes.)
usually the problem with NIH solutions is that evolving/extending them is hard. (of course multi-decade projects are also not magically free of ossified parts, we just euphemistically think of them as trade-offs.)
How are EBS snapshots going to be consistent? I mean AWS takes them at a random time, so half of a write may be captured. Another, less common scenario is silent data corruption you never notice until you need to restore.
I assume they do the snapshotting from some script. (Stop writes, flush, snapshot, restart writes.) If not, then they probably have some way of detecting partial writes. It seems that they do fsync() every second.
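Something along these lines, assuming their server exposes some way to pause and flush (the `db` handle and its methods here are purely hypothetical; only the boto3 call is real):

```python
import boto3

ec2 = boto3.client("ec2")
VOLUME_ID = "vol-0123456789abcdef0"  # hypothetical volume id

def consistent_snapshot(db):
    """Rough shape of a 'stop writes, flush, snapshot, restart writes' script."""
    db.pause_writes()
    db.fsync()  # make sure the buffered second of data is actually on the volume
    try:
        # the snapshot is point-in-time as of this call; I/O can resume once it returns
        snap = ec2.create_snapshot(
            VolumeId=VOLUME_ID,
            Description="location-log consistent snapshot",
        )
        return snap["SnapshotId"]
    finally:
        db.resume_writes()
```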
> Another, less common scenario is silent data corruption you never notice until you need to restore.
I tried to find what kind of guarantees EBS offers, but they only talk about durability and availability ( https://aws.amazon.com/ebs/sla/ ), nothing about data corruption, individual sector failure. (So this could mean that they either won't even notice ... or you might get back a full zero page - assuming they detect some underlying error, but due to encryption at rest the internal checksum fails, assuming there is one.)
I mean if you spun up Postgres on EC2 you would be directly writing to EBS, so that's not really the part I'm worried about. I'm more worried about the lack of replication, seemingly no way to scale reads or writes beyond a single server, and no way to failover uninterrupted.
I'm guessing it doesn't matter for their use-case, which is a good thing. When you realize you only need this teeny subset of db features and none of the hard parts, writing your own starts to get feasible.
Replication and reads can be scaled with something like Patroni or even a DIY replication setup (if one knows what they’re doing, of course), but writes are difficult.
Where C* databases seem to fall down is point updates and, in this case, the requirement to implement your own aggregations.
For these workloads you are much better off (unless you are already running C* somewhere and are super familiar with it) with something like Clickhouse or if you need good slice and dice then Druid or Pinot.
Yeah, point updates are less than stellar, but for aggregation it's fine unless you really want ad-hoc low-latency queries. For anything substantial you'd want Spark/Beam for aggregation anyway, and the Cassandra model is really fast at loading up time series type data in parallel, and Spark etc. make it really easy to do. The tradeoff is just the high startup cost of those kinds of jobs.
Wasn't this already discussed here yesterday? The main criticism of the article is that they didn't write a database, they wrote an append-only log system with limited query capabilities. Which is fine. But it's not a "database" in the sense that someone would understand when reading the title.
Why isn’t that a database? In my understanding a DB needs to be able to store structured data and retrieve it, so not sure what’s missing here? Many modern DBs are effectively append only logs with compaction and some indexing on top as well as a query engine, so personally I don’t think it’s weird to call this a DB.
I agree. Is there an industry accepted definition of what a system must do before it can be called a database?
I also wrote a KV system to keep track of metadata (tags) for an object store I invented. I discovered that it could also be used to create relational tables and perform fast queries against them without needing separate indexes.
I started calling it a database and many people complained that I was misusing the term because it can't yet do everything that Postgres, MySQL, or SQLite can do.
Databases have a long history that reaches back much further than the modern, full featured SQL databases we have today. What you built sounds like it would fit in well amongst the non-sql databases of the world, like Berkeleydb, indexeddb, mongo, redis, and so on.
Don't be absurd. By your standard, cat, grep and a file form a database. Sure, if you interpret literally what a database is, that fits. But once again, it's not what people have in mind when they read "we cut cloud costs by writing our own database".
File systems are databases. Different file systems choose different trade offs, different indexing strategies, etc.
Git is also a database. I got into this argument with someone when I proposed using Github as a database to store configuration entries. Our requirements included needing the ability to review changes before they went live, and the ability to easily undo changes to the config. If your requirements for a DB include those two things, Github is a damn good database platform! (Azure even has built in support for storing configurations in Github!)
I don’t know what point you are really trying to make. At uni the DBMS that everyone learns in their database course is an SQL database. The database part is technically just a binary file, but it’s not what people usually mean when they say they need a database for their project. Just like a search engine doesn’t have to be anything more than indexOf and a big text file. It’s just not very useful to think of it like that.
It's difficult to be pedantic about an ambiguous term like database without additional qualification or specificity.
There are more types of databases than those that end in "SQL".
A CSV file alone is a database. The rows are, well, rows. So is a DBM file, which is what MySQL was originally built on (might still be). Or an SQLite file.
The client or server API doesn't have to be part of the database itself.
> But it's not a "database" in the sense that someone would understand when reading the title.
Sure, because it is common for people to mix up a "database" (aka data in some kind of structure) with a paradigm (relational, SQL, document, kv) and with a "database system", aka an app that manages the database.
> We want to be able to handle up to 30k location updates per second per node. They can be buffered before writing, leading to a much lower number of IOPS.
> This storage engine is part of our server binary, so the cost for running it hasn’t changed. What has changed though, is that we’ve replaced our $10k/month Aurora instances with a $200/month Elastic Block Storage (EBS) volume. We are using Provisioned IOPS SSD (io2) with 3000 IOPS and are batching updates to one write per second per node and realm.
I would be curious to hear what that "1 write per second" looks like in terms of throughput/size?
That’s with their delta-compressed format. Postgres isn’t that. An empty row on its own is 24 bytes. In PostGIS, a point is 32 bytes. If you implemented it with two floats instead, that’s still 16 bytes. A timestamp is 8 bytes. Assuming a bigint PK (8 bytes) and perhaps an int tenant id (4 bytes), that’s at best 60 bytes per row, ignoring any alignment padding. Likely more.
If they’re instead using something more random as a PK, like a UUIDv4, the page splits alone will bloat that well past the raw storage.
Similarly, presumably they’re indexing at least one other column, so add that to the overhead.
That overhead is still just a factor 2x, so might still be fine?
Anyway, TimescaleDB adds compression (including delta encodings) to Postgres. Pretty sure it would handle this load just fine, as long as the inserts are batched.
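For illustration, "batched" here would mean something like one multi-row insert per flush rather than 30k single-row inserts; a sketch with psycopg2 against a hypothetical `location_updates` hypertable (DSN, table and column names are made up):

```python
import psycopg2
from psycopg2.extras import execute_values

conn = psycopg2.connect("dbname=tracking")  # hypothetical DSN

def flush_batch(rows):
    """rows: list of (device_id, ts, lat, lon) accumulated over ~1 second.
    One multi-row INSERT per flush instead of one statement per update.
    Assumes create_hypertable('location_updates', 'ts') was run beforehand."""
    with conn, conn.cursor() as cur:
        execute_values(
            cur,
            "INSERT INTO location_updates (device_id, ts, lat, lon) VALUES %s",
            rows,
            page_size=10_000,  # how many rows go into each generated statement
        )
```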
> EBS has automated backups and recovery built in and high uptime guarantees, so we don’t feel that we’ve missed out on any of the reliability guarantees that Aurora offered.
It may not matter for their use case, but I don't believe this is accurate in a general sense. EBS volumes are local to an availability zone while Aurora's storage is replicated across a quorum of AZs [0]. If a region loses an AZ, the database instance can be failed over to a healthy one with little downtime. This has only happened to me a couple times over the past three years, but it was pretty seamless and things were back on track pretty fast.
I didn't see anything in the article about addressing availability if there is an AZ outage. It may simply not matter or maybe they have solved for it. Could be a good topic for a follow up article.
I built a similar system in 2002 using JGroups (JavaGroups at the time, before the open source project was acquired by JBoss) while persisting asynchronously to a DB (Oracle at the time). Our scale even in 2002 was much higher than 13,000 vehicles.
The project, I believe, still appears as a success story on the JGroups website after 20+ years. I am surprised people are writing their own databases for location storage in 2024 :). There was no need to invent new technology in 2002 and definitely not in 2024.
> There was no need to invent new technology in 2002 and definitely not in 2024.
Sure we have had roads for hundreds of years but they're not the same ones we have today, even though it's the same concept and function.
You can't just take advantage of the technology we have today and at the same time refuse to acknowledge it and hand-wavingly claim it was just the same in '02. It's easy to say so about anything if we're going to conveniently gloss over the details.
Believe it or not, many technologies that changed the world and that we depend on today have origin stories similar to the one shared in the article. Many of them started as custom internal tools that I imagine you would've been similarly critical of for trying to invent new technology needlessly.
Eh, Kafka isn't easy to fuck up. If anything it's stupidly hard to fuck up, assuming you aren't completely incompetent and have read up on how it works and the main caveats. I say that because there really isn't much to Kafka in the first place (as long as you aren't including things like Kafka Streams or Kafka Connect etc).
Operationally there are some annoying things in OSS Kafka (hot partitions, controller failover slowness pre-KRaft, etc) but it's overall bog simple and easy to work with if you can accept the things it doesn't do (queue-like behavior).
I don't love Kafka these days but the fear mongering is a bit much.
Sidenote: If you think you want Kafka you should probably check out Pulsar first; in most cases you either want Pulsar outright, or changing requirements will mean you would have been better off going with Pulsar from the start.
I don't know what geospatial features are needed, but otherwise time series databases are great for this use case.
I especially like Clickhouse, it's generic but also a powerhouse that handles most things you throw at it, handles huge write volumes (with sufficient batching), supports horizontal scaling, and offloading long-term storage to S3 for much smaller disk requirements.
The geo features in clickhouse are pretty basic, but it does have some builtin geo datatypes and functions for eg calculating the distance.
I love the attitude, we didn't see a good fit so we rolled ours.
Sure it won't cover the bazillion cases the DBs out there do but that's not what you need. The source code is small enough for any team member to jump in and debug while pushing performance in any direction you want.
It's amazing what can happen when software companies start doing something approximating real engineering, rather than just sitting a UI on top of some managed services.
As others had mentioned, probably hosting your own clickhouse instance could yield major savings while allowing for much more flexibility in the future for querying data. If your use case can be served by what clickhouse offers, gosh is it an incredibly fast and reliable open source solution that you can host yourself.
Yeah, and for most companies without a huge supply of developers there's the financial risk of having all your stuff blitzed when your home-spun solution fails.
I've no doubt this is true; however, anyone I've ever met who exclaimed "let's create our own database" would be viewed as dangerous, unprofessional or downright uneducated in any business meeting. There's just too much that can go badly wrong, for all the sunk cost in getting anything up and running.
That is such a problem in today's world. Of course you don't want to re-invent the wheel and all that, but we must be open to the idea of having to do it. Innovation stagnates if people suggesting redoing something are immediately seen as "dangerous, unprofessional or downright uneducated"
I think the issue is that you rarely get to see a neat new solution to a given problem. Usually you'll see some kind of half-baked attempted solution that's worse than the already existing alternatives.
The real problem is that the people who want to write a database have usually never written one and have only a superficial understanding of the domain and its complexities. Another problem is that such a database can become an engineering bottleneck: other teams need new features easily found in a conventional db, but the "core team" is unable to meet their demands. I've seen this in practice; the result is a dead product.
> would be viewed as dangerous, unprofessional or downright uneducated in any business meeting
Sounds like a great place to work.
> There's just too much can go badly wrong, for all the sunk cost in getting anything up and running.
Engineering is the art of compromise. In many cases the compromises would not be worth it, but that doesn't mean there are zero places where they would be, and eschewing the discussion out of fear of how it would be perceived is the opposite of engineering.
True, if it's some limited in-memory key-value store then that's a lot more readily implemented than something resembling Oracle's latest enterprise offering, and potentially could fall under the umbrella of it being a "database".
A lot of people here are making very confident-sounding assertions, yet some are saying it's just an append-only log file and some imply it's sharded. Something everyone does agree on is that they are very vague about what geospatial features they need.
The one thing they do say is "no ACID". That implies no b-trees, because an unexpected stop means a corrupted b-tree. Perhaps they use a hash instead, but it would have to be a damned clever hash tree implementation to avoid the same problem. Or perhaps they just rebuild the index after a crash.
Even an append-only log file has to be handled carefully without ACID. An uncontrolled shutdown on most file systems will at least leave blocks of nulls in the file, and half-written blocks if a record crosses disk block boundaries.
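A common way to cope with that short of full ACID is to length-prefix and checksum each record, then truncate the log at the first bad record on startup. A rough sketch (record format and names invented for illustration):

```python
import struct
import zlib

HEADER = struct.Struct("<II")  # payload length, crc32 of payload

def append_record(f, payload: bytes):
    f.write(HEADER.pack(len(payload), zlib.crc32(payload)))
    f.write(payload)

def recover(path):
    """Scan from the start; truncate at the first torn or corrupt record."""
    good_end = 0
    with open(path, "r+b") as f:
        data = f.read()
        off = 0
        while off + HEADER.size <= len(data):
            length, crc = HEADER.unpack_from(data, off)
            end = off + HEADER.size + length
            # zero-length guard stops runs of nulls from parsing as valid empty
            # records; the CRC check catches half-written payloads.
            if length == 0 or end > len(data) or \
                    zlib.crc32(data[off + HEADER.size:end]) != crc:
                break
            good_end = end
            off = end
        f.truncate(good_end)  # drop the corrupt tail; everything before it is intact
    return good_end
```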
It's a tantalising headline, but after reading the 1,200 words I'm none the wiser on what they built or whether it meets their own specs. A bit of a disappointment.
It would be interesting to see a database built from the ground up for being trivial to maintain.
I use managed databases, but is there really that much to do for maintaining a database? The host requires some level of maintenance - changing disks, updating the host operating system, failover during downtime for machine repair, etc. If you use a database built for failover, I imagine much of this doesn't actually affect operations that much, assuming you slightly over-provision.
For a database alone I think the work needed to maintain is greatly exaggerated. That being said I still think it’s more than using a managed database, which is why my company still does so.
In this case though, an append log seems pretty simple imo. Better to self host.
I had a similar idea (except using Kafka): have all the nodes write to a Kafka cluster, used for buffering, and let some consumer write those data in batch into whatever database engine(s) you need for querying, with intermediate pre-processing steps whenever needed. This lets you trade latency for write buffering, while not losing data thanks to Kafka's durability guarantees.
What would you use for streaming directly to s3 in high volumes ?
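Roughly this shape, with kafka-python (the topic, group id and the `write_batch` sink are placeholders):

```python
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "location-updates",              # hypothetical topic
    bootstrap_servers="kafka:9092",
    group_id="batch-writer",
    enable_auto_commit=False,        # commit only after the batch is persisted
    value_deserializer=lambda b: b,  # keep raw bytes; decode downstream
)

def run(write_batch):
    """write_batch(list_of_values) persists one batch to whatever store you query."""
    while True:
        polled = consumer.poll(timeout_ms=1000, max_records=30_000)
        batch = [rec.value for recs in polled.values() for rec in recs]
        if not batch:
            continue
        write_batch(batch)  # e.g. one bulk insert, or one Parquet file to S3
        consumer.commit()   # at-least-once: offsets advance only after the write
```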
Yeah, Kafka would handle it, except in my experience I would like to avoid Kafka if possible, since it adds complexity. (Fair enough, it depends on how precious your data is, and whether it is acceptable to lose some of it if a node crashes.)
But somehow they are ingesting the data over the network. Would writing files to S3 be slower than that? Otherwise you don't need much more than a RAM buffer?
Edit: to be clear, kafka is probably the right choice here, it is just that kafka and me is not a love story.
But it should be cheaper to store long term data in s3 than storing it in kafka, right?
I could be missing something, but I can't really wrap my head around "unlimited parallelism".
What they say is that the logic is embedded into their server binary and they write to a local EBS. But what happens when they have two servers? EBS can't be rw mounted in multiple places.
Won't adding the second and more servers cause trouble like migrating data when new server joins the cluster, or a server leaves the cluster?
I understand Aurora was too expensive for them. But I think it is important to note their whole setup is not HA at all (which may be fine, but the header could be misleading).
i got caught up on that statement too. i interpreted it as they can spin up more servers with their own volumes, but that's not really 1 ebs volume anymore. maybe at their current load they only need 1? the op mentioned copying stuff to s3 at a certain size so it sounds like the disk isn't very big at any moment. i don't think there would be much to do if another server joins.
> Amazon EBS offers a higher durability volume (io2 Block Express), that is designed to provide 99.999% durability with an annual failure rate (AFR) of 0.001%, where failure refers to a complete or partial loss of the volume.
if they take snapshots often enough to feel comfortable with that low failure rate, it does seem kind of reasonable to me. really low risk of a given volume failing.
> So - given that we don’t know upfront what level of granularity each customer will need, we store every single location update.
Maybe I'm cynical, but it's interesting that "the business" didn't start questioning this to cut costs. I know that customers love this feature. Cynically, I can see it costing more, so some customers might drop it.
Also, it looks like they rewrote a log / timeseries "database" / key value store? As others mention, it sounds like reinventing the wheel to get a cool blog post and boost a career by solving "problems".
> we’ve replaced our $10k/month Aurora instances with a $200/month Elastic Block Storage (EBS) volume
Reminds me how I implemented mssql active-active log replication over Dropbox shares back in 2010 to synchronise two databases in the US and in the UK. Worked perfectly fine except for that one hurricane that took them out for longer than 14 days. This was more than the preconfigured log retention period.
How fast can reads be, though? Even if skipping along a fixed offset, reading 4-byte identifiers to filter out location updates for vehicles, that's still a sequential scan of a massive file. Wouldn't this read issue become a choke point to a degree that would make growth a curse? Then you get into weird architectures that exist solely to facilitate predigested reads?
They mention all those features of databases, presenting them as important:
> Databases are a nightmare to write, from Atomicity, Consistency, Isolation, and Durability (ACID) requirements to sharding to fault recovery to administration - everything is hard beyond belief.
Then they talk about their geospatial requirements, PostGIS etc, making it seem they need geospatial features ("PostGIS for geospatial data storage" - wtf? you need PostGIS for geospatial queries, not merely storage...)
In reality, they did not require any of the features they mention throughout the article. What a weird write-up!
I guess the conclusion is "read the F*-ing specs". Don't grab a geospatial DBMS just because you heard the words "longitude" and "database" once.
if i'm reading the op right, they kind of use ebs as a buffer for fresh data until it ages out to s3. they use a "local" disk to hold the stuff used by the queries that people actually make and the queries run quick. they let the old stuff rot in s3 where its almost never used. that sounds like a good idea to save money plus the stuff that's done often is fast.
the ebs slas look reasonable to a non expert like me and you can take snapshots. it sounds like you need to be careful when snapshotting to avoid inconsistencies if stuff is only partially flushed to disk. so you'd need to pause io while it snapshots if those inconsistencies matter. that sounds bad and would encourage you to take less frequent snapshots...? you also pay for the snapshot storage but i guess you wouldn't need to keep many. i like that aws defines "SnapshotAPIUnits" to describe how you get charged for the api calls.
with aurora, it looks like you can synchronously replicate to a secondary (or multiple secondaries) across azs in a single region. it sounds nice to have a sync copy of stuff that people are using. op says they're ok with a few seconds of data loss so i'm wondering how painful losing a volume right before taking a snapshot would be.
i wonder if anything off the shelf does something similar. it sounds like people are suggesting clickhouse. i saw buffer table in their docs and it sounds similar https://clickhouse.com/docs/en/engines/table-engines/special.... it looks like it has stuff to use s3 as cold storage too. i even see geo types and functions in the docs. i've never used clickhouse so i don't know if i'm understanding what i read, but it sounds like you could do something similar to whats described in the post with clickhouse if the existing geo types + functions work and you are too lazy to roll something yourself.
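for concreteness, roughly what that could look like (untested sketch via clickhouse_driver; the table, storage policy and volume names are made up, and the s3-backed 'cold' volume would have to be configured server-side):

```python
from clickhouse_driver import Client

client = Client("clickhouse-host")  # hypothetical host

# Base table: MergeTree ordered for per-device time scans; old parts age out to an
# S3-backed volume (the 'tiered' policy and 'cold' volume are assumed server config).
client.execute("""
CREATE TABLE IF NOT EXISTS locations (
    device_id UInt32,
    ts        DateTime64(3),
    lat       Float64,
    lon       Float64
) ENGINE = MergeTree
ORDER BY (device_id, ts)
TTL toDateTime(ts) + INTERVAL 30 DAY TO VOLUME 'cold'
SETTINGS storage_policy = 'tiered'
""")

# Buffer table: absorbs the high-rate inserts in memory and flushes them into
# `locations` once the time / row / byte thresholds are crossed.
client.execute("""
CREATE TABLE IF NOT EXISTS locations_buffer AS locations
ENGINE = Buffer(currentDatabase(), 'locations',
                16, 1, 10, 100000, 1000000, 10000000, 100000000)
""")
```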
"Of course, that’s an unfair comparison, after all, Postgres is a general purpose database with an expressive query language and what we’ve built is just a cursor streaming a binary file feed with a very limited set of functionality - but then again, it’s the exact functionality we need and we didn’t lose any features."
Why, because you think the surveillance implies that it’s nonconsensual and thus unethical, or the very small scale (<100k clients) means this isn’t actually a very difficult engineering challenge?
Honestly, this doesn’t seem like that high of requirements. There are tens of thousands of companies that are doing more spatial data processing and are using standard cloud databases just fine.
I'm surprised to see the (mostly) critical posts. My reaction before coming to the comments was:
- This is core to their platform, makes sense to fit it closely to their use cases
- They didn't need most of what a full database offers - they're "just" logging
- They know the tradeoffs and designed appropriately to accept those to keep costs down
I'm a big believer in building on top of the solved problems in the world, but it's also completely okay to build shit. That used to be what this industry did, and now it seems to have shifted in the direction of like 5-10% of large players invent shit and open source it, and the other 90-95% are just stitching together things they didn't build in infrastructure that they don't own or operate, to produce the latest CRUD app. And hell, that's not bad either, it's pretty much my job. But it's also occasionally nice to see someone build to their spec and save a few dollars. It's a good reminder that costs matter, particularly when money isn't free and incinerating endless piles of it chasing a (successful) public exit is no longer the norm.
I get the arguments that developer time isn't free, but neither is running AWS managed services, despite the name. And they didn't really build a general purpose database, they built a much simpler logger for their use case to replace a database. I'd be surprised if they hired someone additional to build this, and if they did, I'd guess (knowing absolutely nothing) that the added dev spends 80% of their time doing other things. It's not like they launched a datacenter. They just built the software and run it on cheaper AWS services versus paying AWS extra for the more complex product.