
We have our own queue, because it was easy, fun, and above all exceedingly reliable. Far more so than other things we had tried. Cough Gearman cough SQS cough

One endpoint accepts work for a named queue and writes it to a file in an XFS directory. Another locks a mutex, moves the file to an in-progress directory, and unlocks the mutex before passing the content to the reader. A third and final endpoint deletes the in-progress job file. There is a configurable timeout, after which in-progress jobs end up in a dead-letter box. I am simplifying only a little bit. It's a couple hundred lines of Go.
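
Roughly this shape, if you want a sketch (the directory layout, names, and error handling here are made up since the real code isn't public, and the dead-letter timeout is elided):

    package queue

    import (
        "os"
        "path/filepath"
        "sync"
    )

    type Queue struct {
        mu         sync.Mutex
        pending    string // e.g. /data/queues/emails/pending
        inProgress string // e.g. /data/queues/emails/in-progress
    }

    // Enqueue writes the job body to a file in the pending directory.
    func (q *Queue) Enqueue(id string, body []byte) error {
        return os.WriteFile(filepath.Join(q.pending, id), body, 0o644)
    }

    // Claim moves one pending file into the in-progress directory under
    // the mutex, so a job is only ever handed to one worker, then
    // returns its content.
    func (q *Queue) Claim() (id string, body []byte, err error) {
        q.mu.Lock()
        entries, err := os.ReadDir(q.pending)
        if err != nil || len(entries) == 0 {
            q.mu.Unlock()
            return "", nil, err // empty queue, or a real error
        }
        id = entries[0].Name()
        // A rename within one XFS volume is a cheap metadata operation;
        // the file's data never moves.
        err = os.Rename(filepath.Join(q.pending, id), filepath.Join(q.inProgress, id))
        q.mu.Unlock()
        if err != nil {
            return "", nil, err
        }
        body, err = os.ReadFile(filepath.Join(q.inProgress, id))
        return id, body, err
    }

    // Ack deletes the in-progress file once a worker reports success.
    func (q *Queue) Ack(id string) error {
        return os.Remove(filepath.Join(q.inProgress, id))
    }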

The way this is set up means a message will only ever be handed to one worker. That simplifies things a lot. The workers ask for work when they want it, rather than constantly listening.

It took a little tuning but we process a couple billion events a day this way and it's been basically zero maintenance for almost 10 years. The wizards in devops even figured out a way to autoscale it.




> The workers ask for work when they want it, rather than constantly listening

Can you elaborate more on this? How do the workers know when they have to process a new job?

Also, am I right in assuming this is typically a single-node setup only, as all the files are mounted on a non-"share-able" XFS disk?


They ask for work after they finish the previous job (or jobs, they can ask for more than one). Each worker is a single process built just for one task.

If there's no work for them, there's a small timeout and they ask again. Simple loop. It's all part of a library we built for building workers. For better or worse, it's all done over HTTP.
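
The loop is roughly this (a sketch, not our actual library; the endpoint paths, the X-Job-ID header, and the 500ms backoff are made up for illustration):

    package main

    import (
        "io"
        "net/http"
        "time"
    )

    // process stands in for the one task this worker is built for.
    func process(job []byte) { /* ... */ }

    func workLoop(queueURL string) {
        for {
            resp, err := http.Get(queueURL + "/claim?queue=emails")
            if err != nil {
                time.Sleep(500 * time.Millisecond)
                continue
            }
            if resp.StatusCode == http.StatusNoContent { // no work right now
                resp.Body.Close()
                time.Sleep(500 * time.Millisecond) // small timeout, then ask again
                continue
            }
            id := resp.Header.Get("X-Job-ID")
            job, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            process(job)
            // Report done to the same instance that handed the job out.
            http.Post(queueURL+"/done?id="+id, "text/plain", nil)
        }
    }

    func main() { workLoop("http://localhost:8080") }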

You are right, though, it is one XFS volume per queue instance.

We just run multiple instances (EC2) behind a load balancer. Each instance of the queue gets its own set of workers, though, so the workers know the right server to report done to.

We want a way to have a single pool of workers, rather than a pool per queue instance, and have them talk to the load balancer rather than directly, but we haven't come up with a reasonable way to do that.


I like how GCP Cloud Tasks reverses the model. Instead of workers pinging the server asking for work, the queue pings the worker, and the worker is effectively an HTTP endpoint. So you send a message to the server; it queues it and then pings a worker with the message.

https://cloud.google.com/tasks/docs/dual-overview
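
So a worker is just a handler. Something like this sketch (the /work path and logging are arbitrary; Cloud Tasks treats a 2xx response as success and retries anything else per the queue config):

    package main

    import (
        "io"
        "log"
        "net/http"
    )

    // Cloud Tasks POSTs the task body to this endpoint.
    func handleTask(w http.ResponseWriter, r *http.Request) {
        body, err := io.ReadAll(r.Body)
        if err != nil {
            http.Error(w, "bad body", http.StatusBadRequest) // will be retried
            return
        }
        // Headers like X-CloudTasks-TaskName identify the task.
        log.Printf("task %s: %d bytes", r.Header.Get("X-CloudTasks-TaskName"), len(body))
        w.WriteHeader(http.StatusOK) // ack: done, don't redeliver
    }

    func main() {
        http.HandleFunc("/work", handleTask)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }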


Ooh, that's kind of interesting. Am I reading this right that it holds the HTTP connection open for up to thirty minutes waiting for the work to complete? That's kind of wild.


Indeed. If you're hitting App Engine or GCP Functions, they auto-scale workers up for you to manage long-running tasks. Ideally though, you finish as quickly as possible by breaking the work down into more tasks. That way, you can parallelize as much as possible.

It is all configurable, but I've scaled up to hundreds of workers at a time to blast through tasks and it wasn't expensive at all.

Workers being an HTTP endpoint makes them super easy to implement and even better... write tests for.


I love Task Queues. We are using them extensively. Also, they give you deduplication for free, plus a lot of other nice features like delayed tasks, storing tasks for up to 30 days, extremely detailed rate limits, etc.
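
Dedup comes from naming the task, and delays from ScheduleTime. A sketch with the v2 Go client (project, queue, task name, and URL are placeholders):

    package main

    import (
        "context"
        "log"
        "time"

        cloudtasks "cloud.google.com/go/cloudtasks/apiv2"
        "cloud.google.com/go/cloudtasks/apiv2/cloudtaskspb"
        "google.golang.org/protobuf/types/known/timestamppb"
    )

    func main() {
        ctx := context.Background()
        client, err := cloudtasks.NewClient(ctx)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        queue := "projects/my-project/locations/us-central1/queues/my-queue"
        task, err := client.CreateTask(ctx, &cloudtaskspb.CreateTaskRequest{
            Parent: queue,
            Task: &cloudtaskspb.Task{
                // Naming the task is what buys deduplication: a second
                // CreateTask with the same name is rejected.
                Name: queue + "/tasks/order-1234-confirmation",
                // Delayed task: don't dispatch before this time.
                ScheduleTime: timestamppb.New(time.Now().Add(10 * time.Minute)),
                MessageType: &cloudtaskspb.Task_HttpRequest{
                    HttpRequest: &cloudtaskspb.HttpRequest{
                        HttpMethod: cloudtaskspb.HttpMethod_POST,
                        Url:        "https://worker.example.com/work",
                        Body:       []byte(`{"order":1234}`),
                    },
                },
            },
        })
        if err != nil {
            log.Fatal(err)
        }
        log.Println("created", task.GetName())
    }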


GCP is really underrated in this regard.

Are there any open source implementations of Task Queues? It feels like something that has been missing for years.


Yea, this is the only thing I don't like about them: I can't test them locally.

More generally, is there something like an "on-prem cloud" that just replicates, say, Cloud Tasks (but also other Cloud APIs) using local compute and a local db? For testing / development this would be very cool.


I implemented my tasks as cloud functions, so I just test Tasks the same way I tested functions... by calling the handler function directly.
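
e.g. with httptest, assuming a handler like the handleTask sketch above:

    package main

    import (
        "net/http"
        "net/http/httptest"
        "strings"
        "testing"
    )

    // Skip the queue entirely and invoke the handler the way
    // Cloud Tasks would.
    func TestHandleTask(t *testing.T) {
        req := httptest.NewRequest(http.MethodPost, "/work", strings.NewReader(`{"order":1234}`))
        rec := httptest.NewRecorder()

        handleTask(rec, req)

        if rec.Code != http.StatusOK {
            t.Fatalf("got status %d, want 200", rec.Code)
        }
    }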


It's just like using JMS, MSMQ, or similar queues; I fail to see what is so great about it.


JMS is a specification, not an implementation.


Playing pedantic?


Ok, if you can't see what is so great about it, then I'd suggest spending some time in the GCP documentation to figure it out.


Likewise there are plenty of JMS implementations to play around with.


Sure. Name one implementation that does exactly what Cloud Tasks does.


But this way you can lose messages, though I guess that's fine for your use case. Having to provide redundancy is usually when things get complicated.


I'm curious what throughput you're moving. How many tasks per second on average, and how long does each task take to be serviced, on average?


Built something very similar but on S3. Jobs have statuses, land in /jobs, and are indexed by status at /indexes-jobs/PENDING, etc. A scheduler polls for jobs in the PENDING index, acquires a lock, passes the job to a processor, and changes its status to COMPLETE or DEAD.

~300 LOC and fairly easy to test. Wouldn't take that approach every time, but definitely worth it when you're aiming for a simple architecture.
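
The claim step comes out roughly like this with aws-sdk-go-v2 (a sketch; the RUNNING status and the copy+delete "move" are my choices here, and the lock itself is elided):

    package main

    import (
        "context"
        "log"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/s3"
    )

    const bucket = "jobs-bucket"

    // claimOnePending finds one job in the PENDING index and re-indexes
    // it as RUNNING. S3 has no rename, so "move" is copy then delete,
    // which is not atomic; that's what the lock guards against.
    func claimOnePending(ctx context.Context, c *s3.Client) (string, error) {
        out, err := c.ListObjectsV2(ctx, &s3.ListObjectsV2Input{
            Bucket: aws.String(bucket),
            Prefix: aws.String("indexes-jobs/PENDING/"),
        })
        if err != nil || len(out.Contents) == 0 {
            return "", err // empty index, or a real error
        }
        pendingKey := *out.Contents[0].Key // indexes-jobs/PENDING/<id>
        id := pendingKey[len("indexes-jobs/PENDING/"):]

        _, err = c.CopyObject(ctx, &s3.CopyObjectInput{
            Bucket:     aws.String(bucket),
            CopySource: aws.String(bucket + "/" + pendingKey),
            Key:        aws.String("indexes-jobs/RUNNING/" + id),
        })
        if err != nil {
            return "", err
        }
        _, err = c.DeleteObject(ctx, &s3.DeleteObjectInput{
            Bucket: aws.String(bucket),
            Key:    aws.String(pendingKey),
        })
        return id, err
    }

    func main() {
        ctx := context.Background()
        cfg, err := config.LoadDefaultConfig(ctx)
        if err != nil {
            log.Fatal(err)
        }
        id, err := claimOnePending(ctx, s3.NewFromConfig(cfg))
        if err != nil {
            log.Fatal(err)
        }
        if id == "" {
            log.Println("no pending jobs")
            return
        }
        log.Println("claimed job", id)
    }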


Why files though, and why move them into different directories? You said billions a day. With files, the physical drive must be taking a beating, not to mention potential issues with directory file limits (depending on OS and file system). Why not use some kvdb?


As I understand it (correct me if I'm wrong, it's been forever since I've worked with filesystems), file renames are very cheap, as the actual data does not get moved; only the directory metadata gets updated (and journaled).


Sounds neat. Do you have the go code anywhere for folks to poke at?


Nah, afraid not. I wish.

We always wanted to open source it, but we got bought out by a big and very IP protective company before we got the chance.


I'd just be aware that XFS has no data journaling, only metadata journaling.


What if a task fails/crashes?



