> We want to be able to handle up to 30k location updates per second per node. They can be buffered before writing, leading to a much lower number of IOPS.
> This storage engine is part of our server binary, so the cost for running it hasn’t changed. What has changed though, is that we’ve replaced our $10k/month Aurora instances with a $200/month Elastic Block Storage (EBS) volume. We are using Provisioned IOPS SSD (io2) with 3000 IOPS and are batching updates to one write per second per node and realm.
I would be curious to hear what that "1 write per second" looks like in terms of throughput/size?
That’s with their delta-compressed format. Postgres isn’t that. An empty row on its own is 24 bytes. In PostGIS, a point is 32 bytes. If you implemented it with two floats instead, that’s still 16 bytes. A timestamp is 8 bytes. Assuming a bigint PK (8 bytes) and perhaps an int tenant id (4 bytes), that’s at best 60 bytes per row, ignoring any alignment padding. Likely more.
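To make the tally explicit, here’s the arithmetic behind that 60-byte floor, using the same per-field sizes as above (alignment padding and index overhead deliberately ignored):

```python
# Back-of-envelope per-row size in vanilla Postgres, using the figures
# from the comment above. Alignment padding and index entries are ignored,
# so this is a lower bound, not actual on-disk size.
TUPLE_HEADER = 24   # empty-row (heap tuple header) overhead
COORDS = 2 * 8      # two float8 columns instead of a 32-byte PostGIS point
TIMESTAMP = 8       # timestamp column
PK = 8              # bigint primary key
TENANT_ID = 4       # int tenant id

row_bytes = TUPLE_HEADER + COORDS + TIMESTAMP + PK + TENANT_ID
print(row_bytes)  # 60
```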
If they’re instead using something more random as a PK, like a UUIDv4, the page splits alone will bloat that well past the raw storage.
Similarly, presumably they’re indexing at least one other column, so add that to the overhead.
That overhead is still only about a 2x factor, though, so it might still be fine?
Anyway, TimescaleDB adds compression (including delta encodings) to Postgres. Pretty sure it would handle this load just fine, as long as the inserts are batched.
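For a sense of what those batched inserts would amount to, here’s a rough estimate combining the 30k updates/sec figure from the original post with the ~60-byte row floor above. This ignores WAL, index, and page-header overhead, so treat it as a lower bound:

```python
# Rough throughput for one batched write per second per node, assuming
# 30k location updates/sec (from the post) at ~60 bytes/row (lower bound).
# Real Postgres overhead (WAL, indexes, page headers) would add to this.
UPDATES_PER_SEC = 30_000
ROW_BYTES = 60

batch_bytes = UPDATES_PER_SEC * ROW_BYTES  # rows accumulated per 1s flush
print(batch_bytes)           # 1_800_000
print(batch_bytes / 1e6)     # ~1.8 MB per batched write, before compression
```

So even uncompressed, a once-per-second batch is on the order of single-digit megabytes, which a modest io2 volume handles easily.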