
Actually, one more question...do you guys scale compute and data layers separately, or are they tightly coupled within the same container?

I was looking at containerized PostgreSQL on AWS because I want to colocate a job scheduling tool (pg_cron) with the database process, but RDS doesn't support that extension. Apparently (or at least I hope), ecs-cli compose supports Docker volumes backed by EBS, which is the same underlying storage as EKS persistent volumes. There's next to no information on ECS + EBS though; everybody seems to run Postgres either directly on EC2 or on full-on EKS.
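For what it's worth, here's a rough sketch of what I have in mind, written as a boto3 task-definition call rather than ecs-cli compose files: an ECS task whose Postgres container mounts a Docker volume provided by the rexray/ebs plugin (which would have to be installed on the container instances). Names, sizes, and the image are placeholders, not a tested setup:

    # Sketch: register an ECS task definition whose Postgres container mounts a
    # Docker volume backed by EBS via the rexray/ebs plugin (the plugin must
    # already be installed on the ECS container instances). Names/sizes are
    # placeholders.
    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")

    ecs.register_task_definition(
        family="pg-with-cron",
        requiresCompatibilities=["EC2"],  # EBS-backed Docker volumes need the EC2 launch type
        volumes=[{
            "name": "pgdata",
            "dockerVolumeConfiguration": {
                "scope": "shared",        # keep the volume across task restarts
                "autoprovision": True,    # create the EBS volume if it doesn't exist
                "driver": "rexray/ebs",
                "driverOpts": {"volumetype": "gp2", "size": "20"},
            },
        }],
        containerDefinitions=[{
            "name": "postgres",
            "image": "postgres:12",       # would be a pg_cron-enabled image in practice
            "memory": 1024,
            "essential": True,
            "mountPoints": [{"sourceVolume": "pgdata",
                             "containerPath": "/var/lib/postgresql/data"}],
        }],
    )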

I was just thinking: if you needed to handle excessive read load on small quantities of data, having a separate data layer would let you autoscale DB instances against the same volumes, instead of adding an entirely separate caching layer that could introduce bugs and increase maintenance overhead. If you had native HA with docker exec access and passed the savings on to customers, that would be huge for me and my use cases.




I’m experimenting with this. They have Redis at every edge, with a way (SELECT 2) to send commands to all edges with eventual consistency. No RDBMS yet; they said they’re looking at CockroachDB.

I’m running a single central Postgres server on Heroku and planning to use the Redis edges to cache.
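Roughly, the pattern I'm trying looks like the sketch below. The env var name and the DB-2 convention are just how I've understood their setup, so treat them as assumptions:

    # Sketch of the edge-cache split described above: reads hit the nearest
    # Redis edge (DB 0, local); writes go to DB 2, which, per my reading of
    # their docs, is replicated to every edge with eventual consistency.
    # The env var name is a placeholder.
    import os
    import redis

    url = os.environ["FLY_REDIS_URL"]
    local_cache  = redis.Redis.from_url(url)         # local, fast reads
    global_cache = redis.Redis.from_url(url, db=2)   # the "SELECT 2" database

    def get_fragment(key):
        return local_cache.get(key)                  # no cross-region hop

    def put_fragment(key, html, ttl=300):
        global_cache.set(key, html, ex=ttl)          # eventually visible at all edges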


Right now we're best suited for app servers; databases won't (yet) run very well on fly.io. We're trying really hard to focus on what we have because it's so valuable, but we love DBs so much we might end up trying to "solve" them soon.


But your most valuable customers will need to interact with an app server plus a database for any real-life use case. Can you share some applications where only placing the app server close to the user works? Is the database back in Virginia?


You're mostly right, but there are a surprising number of problems that don't need much database interaction: lots of image generation, video workloads, game servers, etc.

One of the things we want to do, though, is make "boring" apps really fast. My heuristic for this is "can you put a Rails app on fly.io without a rewrite?".

Many of these applications add a caching layer. Normally, if someone wants to make a Rails app fast, they'll start by minimizing database round trips and caching views or model data. If someone has already done this work, fly.io might just work for their app, since we have a global Redis service (https://fly.io/docs/redis/).
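The caching pattern itself is nothing fancy. A minimal read-through sketch (in Python rather than Rails, with made-up names, not our API):

    # Minimal read-through cache: serve hot model data from the nearby Redis,
    # and only fall back to the (distant) Postgres primary on a miss.
    # get_product / fetch_product and the key scheme are made up for the example.
    import json
    import redis

    cache = redis.Redis()     # the Redis instance in the same region as the app
    TTL = 60                  # seconds before we pay the round trip again

    def get_product(product_id, db):
        key = f"product:{product_id}"
        hit = cache.get(key)
        if hit is not None:
            return json.loads(hit)               # no database round trip
        row = db.fetch_product(product_id)       # one slow trip back to the primary
        cache.set(key, json.dumps(row), ex=TTL)
        return row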

We have experimented with using CockroachDB in place of Postgres to get us even farther, but it doesn't work with most frameworks' migration tools.

We're also thinking of running fast-to-boot read replicas for Postgres, so people could leave their DB in Virginia but bring up replicas alongside their app servers.
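Sketched out, the split that would enable looks like this (hypothetical hostnames, not something we ship today):

    # Writes still go to the primary in Virginia; reads go to a replica in the
    # same region as the app server. The hostnames/DSNs are hypothetical.
    import psycopg2

    primary = psycopg2.connect("host=db-primary.internal dbname=app")  # Virginia
    replica = psycopg2.connect("host=db-replica.internal dbname=app")  # local region
    replica.autocommit = True  # pure reads, no transaction needed

    def get_user(user_id):
        with replica.cursor() as cur:            # low-latency local read
            cur.execute("SELECT * FROM users WHERE id = %s", (user_id,))
            return cur.fetchone()

    def rename_user(user_id, name):
        with primary.cursor() as cur:            # the write crosses the country once
            cur.execute("UPDATE users SET name = %s WHERE id = %s", (name, user_id))
        primary.commit()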

If you've seen anyone do anything clever to "globalize" their database we're all ears.


I’m extremely impressed with how slick your Heroku integration is. We thought about moving over to Render, but the dev UX just isn't there the way it is with Heroku. I would be fine with paying for an always-running read replica on the west coast if you can make it as easy as the rest of your Heroku integration.


I've seen https://macrometa.co take a stab at an edge database, but their consistency and correctness guarantees don't really inspire confidence in me [0]. https://yugabyte.com is another global-scale database that competes squarely with CockroachDB, though I haven't used either.

Cloudflare Workers KV has the simplest model: a central DB that transparently (and eventually) replicates read-only, hot data out to each DC, while writes continue to incur a heavy penalty in terms of operations per second, cost, and latency.

In our production setup, we back Workers KV with a single-region, source-of-truth DynamoDB [1] and use DynamoDB Streams to push data into Workers KV [2] (a sketch of the Streams consumer follows the flows below). That is:

Writes (control-plane): clients -> (graphql) DynamoDB -> Streams -> Workers KV

Reads (data-plane): clients -> Workers KV

Reads (control-plane): clients -> (graphql) DynamoDB
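The Streams-to-KV piece is just a small consumer. A rough sketch of it as a Lambda that pushes each changed item into Workers KV through Cloudflare's REST API; the account/namespace IDs, the "pk" key attribute, and the env var names are placeholders from our setup, not anything standard:

    # Control-plane pusher: a Lambda subscribed to the table's DynamoDB Stream
    # writes each changed item into Workers KV via Cloudflare's REST API.
    import json
    import os
    import urllib.request

    KV_BASE = ("https://api.cloudflare.com/client/v4/accounts/"
               f"{os.environ['CF_ACCOUNT_ID']}/storage/kv/namespaces/"
               f"{os.environ['CF_KV_NAMESPACE_ID']}/values/")

    def handler(event, context):
        for record in event["Records"]:
            if record["eventName"] not in ("INSERT", "MODIFY"):
                continue                              # deletes handled separately
            image = record["dynamodb"]["NewImage"]    # DynamoDB-typed JSON
            key = image["pk"]["S"]
            req = urllib.request.Request(
                KV_BASE + key,
                data=json.dumps(image).encode(),
                method="PUT",
                headers={"Authorization": "Bearer " + os.environ["CF_API_TOKEN"]},
            )
            urllib.request.urlopen(req)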

[0] https://news.ycombinator.com/item?id=19307122

[1] We really should switch to QLDB once it supports Triggers.

[2] We do this mainly because we don't want to be locked into Workers KV, especially at this very nascent stage.


Hi Ignoramus, founder and CEO of Macrometa here. I regret that our first attempt at explaining our consistency model caused confusion last year. Here's a link to the research paper that describes our architecture and consistency model:

https://bit.ly/HPTS-Macrometa

We were accepted at High Performance Transaction Systems (HPTS) last year for our innovations around CRDTs for strong eventual consistency (SEC) with low read and write latencies.
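For readers unfamiliar with SEC: the property is that replicas accept writes locally and are guaranteed to converge once they exchange state, whatever the message order. The classic textbook G-Counter illustrates the idea (this is just the toy example, not Macrometa's actual data structures):

    # G-Counter: each replica increments its own slot, and merge takes the
    # element-wise max, which is commutative, associative, and idempotent,
    # so all replicas converge no matter how merges are ordered.
    class GCounter:
        def __init__(self, replica_id):
            self.replica_id = replica_id
            self.counts = {}                  # replica_id -> local increments

        def increment(self, n=1):
            self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

        def merge(self, other):
            for rid, n in other.counts.items():
                self.counts[rid] = max(self.counts.get(rid, 0), n)

        def value(self):
            return sum(self.counts.values())

    # Two edges take writes independently, then converge after exchanging state.
    sfo, fra = GCounter("sfo"), GCounter("fra")
    sfo.increment(); fra.increment(2)
    sfo.merge(fra); fra.merge(sfo)
    assert sfo.value() == fra.value() == 3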

I'm trying to figure out how to provide a simple, lightweight way for fly.io users to use our global DB in their apps. It would allow a full stack to run at the edge, with the compute on fly.io and the data on Macrometa, either directly on fly.io or in a nearby PoP (same city). Will update.


Fair enough! I signed up, looking forward to DBs on Fly.io!

(Also, I got permission denied when the curl'd script tried to write to /usr/local/bin; I needed sudo. I'm on Ubuntu 19.10 Eoan Ermine. Not sure whether the security implications of `curl | sh` outweigh the convenience, but I trust you guys and my connection. :P)


Heh curl to sudo slippery slope :P

The script just picks the binary for your OS/arch and puts it in your PATH. We have instructions for doing it yourself here: https://fly.io/docs/getting-started/installing-flyctl/#comma...

Or you can download straight from github: https://github.com/superfly/flyctl/releases

Hopefully we can get on snap soon!



