
You really don't need anything fancy to implement a queue using SQL. You need a table with a primary id and a "status" field. An "expire" field can be used instead of the "status"; we used the latter because it allows easy retries.

1. SELECT item_id WHERE expire = 0. If this is empty, no items are available.

2. UPDATE SET expire = some_future_time WHERE item_id = $selected_item_id AND expire = 0. Then check whether UPDATE affected any rows. If it did, item_id is yours. If not, loop. If the database has a sane optimizer it'll note at most one document needs locking as the primary id is given.

All this needs is a very weak property: a document-level atomic UPDATE which can return whether it changed anything. (How weak? MongoDB could do that in 2009.)
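
In plain SQL, the two steps above sketch out roughly like this (column names as described; the table name and parameter placeholders are hypothetical):

  -- step 1: find a candidate item; no locks taken yet
  SELECT item_id FROM queue_items WHERE expire = 0 LIMIT 1;

  -- step 2: try to claim it; succeeds only if nobody claimed it in the meantime
  UPDATE queue_items
     SET expire = :some_future_time
   WHERE item_id = :selected_item_id
     AND expire = 0;
  -- affected rows = 1: the item is yours; 0: someone beat you to it, loop back to step 1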

Source code at https://git.drupalcode.org/project/drupal/-/blob/9.2.x/core/... (We cooked this up for Drupal in 2009 but I am reasonably sure we didn't invent anything new.)

Of course, this is not the fastest job queue there is but it is quite often good enough.




For other readers, the `UPDATE` step is an exact analogue of the "compare-and-swap" atomic instruction [0]. It's really cool to see how you've realized it in SQL!

As a capability, compare-and-swap has an infinite consensus number [1], meaning it's sufficient to implement wait-free consensus algorithms with an arbitrary number of participants. That makes it a perfect fit for managing a scalable pool of workers that need to coordinate on consuming from a queue!

[0] https://en.wikipedia.org/wiki/Compare-and-swap

[1] https://en.wikipedia.org/wiki/Consensus_(computer_science)#C...


Postgres’s transactional semantics are really useful when building a queue, because of how it interacts with the various pieces.

Connection 1

  LISTEN "job-update";
Connection 2

  BEGIN;
  INSERT INTO jobs VALUES ('a-uuid', …);
  SELECT PG_NOTIFY('job-update', 'json blob containing uuid and state change info');
  COMMIT;
Connection 3 (used when Connection 1 is notified)

  BEGIN;
  SELECT id, … FROM jobs WHERE id = 'a-uuid' FOR UPDATE SKIP LOCKED;
  UPDATE jobs SET state = 'step1_completed' WHERE id = 'a-uuid';
  SELECT PG_NOTIFY('job-update', 'json blob containing uuid and state change info');
  -- do the thing here: computation, calling external API, etc. If it fails then rollback.
  COMMIT;
Because notify has transactional semantics, the notify only goes out at transaction commit time. You want to use a dedicated connection for the notify.

The only downsides I immediately think of are you will have every worker contending to lock that row, and you’ll need to write periodic jobs to cleanup/retry failures.


Yes, the UPDATE command is the exact equivalent of LOCK CMPXCHG (the SELECT can be seen as computing the memory address). So the discussion about that in the comments of https://stackoverflow.com/a/59022356/308851 totally applies: if two queue runner threads pick the same item, exactly one will succeed, so it can't happen that both keep retrying the same item. And there's no busy wait (reminder: a busy wait is where a thread does nothing but test a condition); it just goes over every candidate until one succeeds.


I have been asked this question in interviews: how do you prevent two people from booking the same seat? Interviewers don't seem to be satisfied by the answer that I will let the query go to the database and see who ends up booking. They want some advanced locking mechanism using Redis. Redis does serialize queries using Lua, thus avoiding two people booking the same seat.
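
(For other readers: the "let the database decide" answer can literally be a single conditional UPDATE — a sketch, with a hypothetical seats table and column names:)

  -- both buyers race to run this; exactly one of them matches the row
  UPDATE seats
     SET booked_by = :user_id
   WHERE seat_id = :seat_id
     AND booked_by IS NULL;
  -- affected rows = 1 means you won the seat; 0 means someone else already has it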


I think this being a good answer hinges on the resulting user experience. If one person spends several minutes going through the process of putting in their payment and other info only to be told you don't have a ticket, they're going to be pissed. So you need a good answer on how to avoid that experience, however you implement the locking.

Saying this as someone who has been on both sides of that interview question, and also evaluated performance in it as a hiring manager.


Send bookings to the database early, with an expiry time and a counter visible on the screen?

Thing is, because purchasing is often shared across third-parties, even then you can’t guarantee that you’ll have the reservation until it’s paid and entered into the shared booking backend… unless the third party backend has a locking mechanism, at which point you’re probably not using your own Postgres to do this, but their mainframe instead.


So what do you propose here? I see that Stack Overflow mentions some preemptive locking for a seat in cache for a few minutes. During that time the seat is taken out of the pool for potential booking. If the time lapses without the seat getting booked, the seat returns to the pool. This avoids going to the DB to see who wins.


Is there any other solution other than going full STM on this?


Yes a simple SCRM will do.


What is SCRM?


Based on experience arriving very last minute to a flight, nobody has a problem with letting two people book the same seat.


Let the check-in attendant handle it; that's what I call a no-code solution! You're hired


Reminds me of "Starbucks Does Not Use Two-Phase Commit". https://news.ycombinator.com/item?id=6229004


That's solid middle management thinking right there!


Wouldn’t the attendant be in trouble if two people at the gate start fighting for the same window seat as printed on their ticket?


mmmm no

They allow overbooking the flight but you can't book the same seat twice.

Overbooking can make financial sense -- even if you need to hand out a little compensation from time to time, it's still better than flying with empty seats. Of course it sucks big time for the travelers, especially if there are no volunteers. IDB (involuntary denied boarding) is miserable. https://www.transportation.gov/individuals/aviation-consumer...


They obviously don't have the transaction semantics worked out. I have been at the gate and the person in front of me had the same seat number. They read it out. When I presented my pass the girl politely took my ticket and printed another one. If the person was not right in front of me I would never have known why. It is not unusual.


>They allow overbooking the flight but you can't book the same seat twice.

ok, but how do you overbook a flight without booking at least one of the seats twice?


You don't assign all the seats up front. Some are marked to be assigned at the gate.


Well, then the problem just moves a little to the left. Say there are x seats to be booked up front and y seats to be allocated later. How many times can each of the x seats be booked, and how do you prevent the n+1th booking once some seat hits its limit of n? The problem is still there.


Airlines have been data mining for a very, very long time now and have a reasonable guess of the ratio of people booking vs showing up at the gate to fly. So you set your buckets to oversell by that much and assign seats later. If more people show up than expected, then you throw them a voucher to go away. Hopefully enough will.

Sometimes it doesn't work out and that's when airlines bleed. I was flying home one day from some US city (was it Denver? the years and cities blend together) when a Drupal conference and some major sports thing ended the same day and the airport was a total gong show. United first offered a paltry 120 USD, but when they raised it to 400 USD to go home the next day instead, my hand went up -- and I was such a greenhorn I didn't know they would pay for the hotel, too. A buddy pocketed no less than 1200 USD from Delta.


This looks like one solution but I don’t think you want this design in medicine slot booking or stock selling. You want it to be much more definite if there is no way for a conflict to be resolved. This solution won’t work there.


Well, sure. But what if you want to avoid such a scenario? Two people booking vaccine slots shouldn’t be allowed the exact same slot. They are going to be pissed realising one of them reached the center but didn’t get the vaccine.


The "what actually happens" answer is often "let them both book successfully and then rebook the person with lower loyalty status".


I had a similar question asked of me in an interview. The interviewer was also surprised by my solution of letting the database decide the winner.


What exactly is the right answer here?


I have done something similar to this to implement queues in Postgres. And at quite a large scale with lots of workers all pulling items off the queue at once.

One huge advantage over, say, SQS, is that you also get easy visibility! You can query and see what’s been completed, what’s still not worked on, etc.!

Oh, and do you want a priority queue? Just add a priority field and sort before picking an item! Do you need to change the priority of an item while it’s already in the queue? Go right ahead. Do you want to enforce some constraints so that duplicates don’t get added to the queue? It’s a database — add the constraints.
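
A sketch of what the "pick an item" query could look like in Postgres, with a priority column, leaning on the FOR UPDATE SKIP LOCKED trick mentioned upthread (table and column names hypothetical):

  UPDATE jobs
     SET state = 'running', started_at = now()
   WHERE id = (SELECT id
                 FROM jobs
                WHERE state = 'queued'
                ORDER BY priority DESC, id
                LIMIT 1
                  FOR UPDATE SKIP LOCKED)
  RETURNING id, payload;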

If you’re already using a relational database, and now you need a queue, start with this approach. Odds are, it’ll work just fine, and it’ll give you the perks above.


Interesting! What kind of scale did you run it at?

Also keen to know if you saw any disadvantages with this approach?


Oh please stop using databases as queues. I spent a disproportionate amount of time in the last 20 years undoing that decision. It doesn’t scale at all well.


You’re dealing with survivorship bias - those companies whose products were successful enough that they needed your services. You don’t speak to the teams who went bankrupt because they spent too much time fiddling with RabbitMQ, when Postgres would have been fine.

As always, there are trade offs, no silver bullets.


> You’re dealing with survivorship bias - those companies whose products were successful enough that they needed your services

In fairness, Redis has only been available for 12 years so someone who has been in the industry longer has probably encountered systems using DBs as queues for no better reason than an alternative was not available when the system was made. (Rabbit just a couple years older I think.)

But in this day and age, you have better options to start with c'mon.

> they spent too much time fiddling with RabbitMQ, when Postgres would have been fine.

Rather ironic claim in a thread full of tips, tricks, and gotchas just to make PG behave like a queue no?

I can't speak for RabbitMQ personally but Redis has got to be among the simplest drop-in dependencies I've ever encountered. For a simple queue, the defaults work out of the box.

Gut feel also says Redis defaults for queuing would serve you further than PG defaults for queuing, so you could leave it untouched for far longer to focus on your product, get to profitability, etc. Feel free to debunk with data.


There's one incredibly valuable feature you get with PG that you will never get with Redis or RabbitMQ or any other task queue: Enqueueing tasks transactionally with your unit of work.

If your load requirements fit using your database as a queue, it can radically simplify your system.

I've personally done a fair bit of migration from formal queue systems into an RDBMS. You simply get a lot more control and visibility that way. But I use task queues heavily too. It's all very application-specific and I don't think you can generalize.


Not sure why you were downvoted, but perhaps they were thinking about Redis transactions (lua scripts that can do multiple operations at once). If Redis is your datastore (groan), then you can transactionally enqueue tasks with your unit of work.


If Redis is your primary datastore then sure. But the problem domain for which "Redis is your primary datastore" is appropriate is relatively narrow.


Real time trading systems can totally get away with only using Redis, outside of historical data (which obviously you'd stick into Postgres or your favorite flavor of DB)


You can actually do this with RabbitMQ - it has a feature called "Publisher Confirmations", where you send a message, and then wait for an "ack" in return.


If you’re using the same database for your queue as your business entities, then you can wrap both in the same transaction - I think that’s what the parent post is getting at. For example, save a customer entity and add a job to send them a confirmation email. As far as I know, that’s something you can’t do natively if you’re using RabbitMQ and PostgreSQL. Of course you can work around it, but when your workloads are low you don’t need to, which is great for early stage products.
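
A sketch of that, with hypothetical table names, just to make the point concrete:

  BEGIN;
  INSERT INTO customers (id, email) VALUES (:id, :email);
  INSERT INTO jobs (kind, payload)
       VALUES ('send_confirmation_email', :payload);
  COMMIT;
  -- either both rows exist or neither does: no orphaned email job,
  -- and no saved customer whose confirmation job silently went missing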


In that case it doesn't help you that much since you can't transactionally send an email but you would get at least one attempt.


It doesn’t help guarantee the email is sent, but it makes it easier to find and handle unhappy edge cases, which is where a lot of time is spent in an early-stage startup.


> Rather ironic claim in a thread full of tips, tricks, and gotchas just to make PG behave like a queue no?

There are libraries in many languages written specifically to accomplish the "make PG behave like a queue" part. All these people are explaining the principle of how it would work, but in practice, you can just pull in one of those libraries and be done. (Or, more accurately, you use an abstract external-job-queuing library and tell it to use its Postgres adapter, which in turn uses that library.)

In exchange, you get the ability to not have to set up / maintain any more infra than you already have. Which can be important if you're already "at scale", where every component you set up needs to be duplicated per geographic-region, clustered for High Availability, etc. If you've already done all that for Postgres, it'd be nice to not also have to do all that again, differently for Redis, RabbitMQ/Kafka/NATS, etc.

> Gut feel also says, Redis defaults for queuing would also serve more than PG defaults for queuing

Don't know about what you've got your queues doing, but our service only does (persistent, out-of-process) queuing for two purposes:

1. Global scheduled background-job queuing. (Think "cron jobs.") There are only a static number of these; they don't scale up with load.

2. Lifecycle email dispatch. This scales up with MAU, but with very very low coefficients.

For those cases, Postgres would totally be enough. (Anything would be enough!)

Most "queues" needed in the real world are requirements-light like this. It's a rare company whose product is doing something Apache BEAM-alike with its data, where a high-throughput reliable MQ is a fundamental part of the data workflow. (And those companies, I hope, know that they need an actual MQ, not a piece of software that does queuing in its spare time.)

Our company doesn't use Postgres queuing; we do, in fact, use Redis for it instead. But we only ended up doing that, because we already needed Redis for other stuff; and if you already have Redis in play (including an ops plan for scaling it), then it's the better tool for the job.


> In exchange, you get the ability to not have to set up / maintain any more infra than you already have.

And, you mean, a PGQ will not need any special monitoring other than the common DB metrics?

For instance, if I ever run a queue, it's just due ops diligence to know the length of the queue, average time an item spends in queue, throughput over time, among others. Are there standard monitoring modules that would check this for a PGQ? Because in exchange for setting up a proper queue cluster, compatibility and documentation for common stuff like this is also what you get.

The first one is particularly a sticking point for me. If you don't do it right, you will end up issuing a COUNT on a table periodically. You might say it's acceptable for most companies, because they don't need high-performance queues, but I specifically need my monitoring to be reliable during times of unexpected high load. Also, ideally, there is close to zero chance that my telemetry is the one that ends up causing me trouble.
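
(For concreteness, the kind of periodic check I have in mind is something like this, against a hypothetical jobs table:)

  SELECT state,
         count(*)         AS queue_length,
         min(enqueued_at) AS oldest_item
    FROM jobs
   GROUP BY state;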

Also, why does no one consider the added "surface for human error" that using a DB for a queue introduces? For instance, anyone can drop into the DB and run a query you were not supposed to. A malicious actor can update all outgoing emails in the queue to another recipient. If these are sensitive emails like password resets or OTPs, good luck. A dedicated queue process does not allow such operations.

I have to say, it's very perplexing to me that people (especially in HN) would avoid using the right structure just because they have to invest a bit more work.


> I have to say, it's very perplexing to me that people (especially in HN) would avoid using the right structure just because they have to invest a bit more work.

You're imagining this as if it's just the CapEx (the work of setting up Redis), but in reality it's the OpEx (the work of maintaining a Redis cluster) that kills you. Think of it in terms of the number of ops salaries you have to pay. At scale, this number goes up in proportion to the complexity of your stack.


Actually, if you read the rest of my previous reply (as opposed to an offhand comment at the end) you will see that I'm considering the logistics of operating an in-DB queue as well.

Using an in-DB queue doesn't give you zero OpEx does it? Maybe you can make the argument that it's cheaper but it's cheaper for a reason: the job wasn't done properly, so to speak.

Both options introduce new failure modes and op costs into your system. Might as well do it properly if (and this is a huge IF in my opinion) for slightly more cost. When you run into a failure it's standard, maybe the solution is even one Google search away as opposed to realizing, one hour into a late-night debugging session that, shit, that table should be treated as a queue!


I read your whole reply. I didn't see any mention of anything I'd consider an OpEx cost.

Re: Monitoring

Presuming you're already monitoring anything at all, adding monitoring for a new piece of infrastructure is a pure CapEx cost. You presumably already have monitoring infrastructure running. Its ingestion, by design, will be 99% idle — i.e. it won't need to be scaled horizontally proportionally to the number of components. The only thing needed, then, will be careful up-front design for the monitoring.

Which is usually also a cost already paid for you in advance when you use an abstract job-queue library. They all know how to expose Prometheus metrics endpoints, and they do the accounting to serve those endpoints efficiently (usually using process-local per-job-producer and per-job-consumer counters, which you must then roll up yourself at the PromQL level, taking the irate() to find spikes.)

Re: AAA

> Also, why does no one consider the added "surface for human error" that using a DB for a queue introduces.

Because the type of person who reaches for their DB first to solve a novel problem, is the same type of person who understands and makes use of their DB's security features, to the point that doing more DB-security-config isn't a complex one-off problem for them, but something they can whip off in a few minutes.

For a simple option, you can create two service-account DB roles: a queue_producer, and a queue_consumer. You can put all the job-queue implementation tables in a schema owned by the queue_producer; and then grant the queue_consumer SELECT privileges on all the tables, and UPDATE privileges on some of them. Then, nobody but the queue_producer (or a DB superuser) can create or destroy jobs; and nobody but a queue_producer, a queue_consumer, or a superuser, can read or modify jobs. (Your job-queue abstraction library within your service usually maintains its own DB connection pool anyway, so it's no sweat to have those connections use their own credentials specific to their job-queue role.)
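
A rough sketch of that setup in Postgres DDL (schema and table names hypothetical):

  CREATE ROLE queue_producer LOGIN;
  CREATE ROLE queue_consumer LOGIN;
  CREATE SCHEMA job_queue AUTHORIZATION queue_producer;
  -- the producer owns the schema and its tables; consumers only get what is granted
  GRANT USAGE ON SCHEMA job_queue TO queue_consumer;
  GRANT SELECT ON ALL TABLES IN SCHEMA job_queue TO queue_consumer;
  GRANT UPDATE ON job_queue.jobs TO queue_consumer;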

For a more complex — but perhaps useful? — option, the tables themselves can be "hidden" behind stored procedures (DDLed into existence by the abstract job-queue library), where nobody has any privileges (incl. SELECT) on the tables, only EXECUTE rights on the sprocs. And the sprocs are carefully designed to never do anything that could have an unbounded CPU time. Then anyone can "check up on" or even "insert into" the job-queue, but nobody can do anything "funny" to it. (My god, it's an encapsulated API!)

Once again — the libraries that abstract this away, already think about concerns like this, and choose one or the other of these options. That's why libraries like this exist, when the "core premise" is so simple: it's so there's a place to put all the fine details derived from man-years of thought on how to make this approach robust.

-----

On a completely different track, though: having a queue in the DB can sometimes be the optimal (i.e. the "if we had infinite man-hours for design + implementation") engineering decision. This case comes when the thing the queue is operating upon is data in the DB. In such a case, the DB data modification, and the job's completion, can succeed or fail together atomically, as part of a single DB transaction.

To accomplish the same thing when your queue lives outside the DB, you usually end up either needing some really gnarly distributed-locking logic that both your DB and your app layer need to know everything about (leaky abstraction!); or you need to completely remodel your data and your job queue into an event-streaming paradigm, so that you can "rewind" one side when the other side fails.


> For a simple suggestion, the job queue can live in a schema that is owned by a job-queue service-account role. Nobody other than that user (or a DB superuser) can issue any query in there.

Yes service accounts are fine and dandy but [a] isn't that more hoops borne by choosing the wrong tool for the job (gotta stay mindful of those permissions, you can have an Ansible typo, etc) and [b] as I said, proper queue services would not even allow tampering of enqueued data. For no extra cost other than installing the service. In contrast, your service account falling into a malicious actor is still a threat vector.

> For a more complex — but perhaps useful? — option ...

Now this is just a queue service with extra steps running in a relational DB instead of natively as an OS process. You did cite it as just an option but I don't see why this is an attractive option.

> I didn't see any mention of anything I'd consider an OpEx cost.

It seems to me we have differing definitions of OpEx and, judging by your previous comment, you value this almost exclusively in terms of "the number of ops salaries you have to pay". Even if I play by that definition, I could tell from experience (and also corroborated by other commenters here, in other subthreads of this discussion), that operating a Redis cluster does not need extra warm bodies. The people who monitor that your app responds, your servers aren't running out of memory, could take on the task with little to no additional training.

The experience I want to cite, bearing in mind of course that everyone's mileage varies: in my previous job, customer base of 2M and growing across two countries, we operated and monitored a redundant Redis cluster with an ops team of 3. In my first job I talked about in another subthread here, we managed three high-traffic Redis queues (and a few other smaller-traffic special cases) with a team of 7. In both jobs, we also did active across-the-stack development; we weren't dedicated to monitoring, nor to the components related to the queues.

In fact I would argue running a DB (queue or no queue) is just more complex than running a queue service. Rather telling is the need for "DB Administrators/Specialists" but no such equivalent for Redis or other proper queueing solutions.

> Which is usually also a cost already paid for you in advance when you use an abstract job-queue library. They all know how to expose Prometheus metrics endpoints...

Honestly, this sounds fine but I'm rather wary of the caveats that might be present. How do they do it efficiently? What do I need to keep in mind to keep things efficient? As such, unless you can point me to your preferred job-queue library which does all these wonders, I have to reserve judgment.

Edit: I just remembered mentioning this in another subthread. Even if you have a fantastic library/middleware abstracting all those queue concerns, that ties you in with that library. If someone wants to build integrations into your queue, they have to play by that library. If majority of your app is in Java (and so is the library), and Data Science (who, in this example, uses Python, not improbable to happen) wants to share your data pipeline, if that library isn't available for them, tough luck I guess? More dev time for someone.

And also, whatever underlying DB features or semantics your library might rely on in order to enforce a queue structure, you can't be assured that future versions of your DB would support those features because, you know, your DB isn't really in the business of queues. It opens you up to running an outdated DB version just to keep that queue functionality.


> Now this is just a queue service with extra steps running in a relational DB instead of natively as an OS process. You did cite it as just an option but I don't see why this is an attractive option.

Your DB then shares a WAL log with your queue. Meaning a single managed physical replication pipeline for them both. Meaning only one set of leader-election issues to debug, not two. Meaning one canonical way to do geographic high-latency async replication. Meaning disaster recovery brings back a whole-system consistent snapshot state. Etc.

Honestly, if I had my way, every stateful component in the stack would all share a single WAL log. That’s what FoundationDB and the like get you.

> In fact I would argue running a DB (queue or no queue) is just more complex than running a queue service.

Well, yeah, but you usually need a DB. So, if you’re going to be paying the OpEx costs of the DB either way, then you may as well understand it deeply in order to wring the most productive use you can out of each OpEx dollar/man-hour spent.

(I feel the same way about Redis, as it happens: if you need it, and are locking your code into its model anyway, then you may as well take advantage of its more arcane features, like Redis Streams, Lua scripting, etc.)

However, maybe our company is uncommon in how much our service literally is doing fancy complex DB queries that use tons of DB features. We’re a data analytics company. Even the frontend people know arcane SQL here :)

> that ties you in with that library

The difference between what you / apps / abstracting libraries do in Redis, and what they do in an SQL DB, is that in the DB, the shape of everything has to be explained in a vendor-neutral manner: SQL DDL.

Sometimes Redis-based solutions converge on conventional schemas; see e.g. Sidekiq’s informal schema, which several other queuing systems are implemented in terms of. But when they don’t, there’s nothing you can really do — beyond hacking on the libraries involved — to bring them into sync.

In an SQL DB, anything can be adapted into the expected shape of anything else, by defining SQL views. (Heck, in an SQL DB with Redis support, like Postgres with redis_fdw, the Redis data can be adapted into any shape you like using SQL views.)
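
(A trivially small sketch of that adaptation idea, with hypothetical names: suppose a library expects a jobs table but your existing schema calls it tasks:)

  -- expose the existing tasks table under the shape the library expects
  CREATE VIEW jobs AS
  SELECT task_id AS id, kind, payload, state
    FROM tasks;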

And that’s further enabled by the fact that the DB had received from the app, through DDL, a schema, that you can examine, manipulate, and refactor; or even synthesize together with other schemas.

> you can't be assured that future versions of your DB would support those features

You can if those features are in the SQL standard. I’ve never heard of a DBMS regressing on its level of SQL standard support.


> Your DB then shares a WAL log with your queue...

I feel like the understanding we can come to here is that we have differing definitions of necessary complexity.

You did mention you work in data analytics and I have worked and am working in more traditional product-oriented gigs. Everything you mentioned are nice, impressive even, but to me they are toys. I don't need a queue with a WAL stream, or PITR backups. Queues, to me, are inter-process communication mechanisms, or a means to concurrency. In fact, worst case, you can delete the queues or restart Redis even without triggering a disk write (though I note this is another feature that comes free with Redis); it would inconvenience our customers but they can always just retry.

Of all the benefits you mentioned, leader-election is the only one I could make a good case for.

> then you may as well understand it deeply in order to wring the most productive use you can out of each OpEx dollar/man-hour spent.

Understanding it is one thing but using that understanding to create/develop (CapEx) and then maintain (OpEx) an in-DB queue feels like unnecessary costs. CapEx to install Redis is practically nil, and you seem to agree that running it is far simpler OpEx than PG too ("Well, yeah, but...").

Of course, I keep in mind your point about libraries doing the dirty work. But if it all boils down to a third party in the end, the point I'm replying to above is rather moot no? And if it comes to a third dependency anyway, I've already reasoned out my preference for an actual queue solution.

> explained in a vendor-neutral manner: SQL DDL

> You can if those features are in the SQL standard. I’ve never heard of a DBMS regressing on its level of SQL standard support.

While this is an ideal case, my experience so far shows that every major DB package relies on their own tweaks to the SQL standard, and so "If those features are in the SQL standard" turns out to be a pretty big "if" to ask. I don't worry about vendors regressing on SQL standard support but rather that the libraries for DB queues are relying on the non-SQL-standard features/behavior to mock out a full queue functionality---non standard behavior that has no guarantee of consistency across versions.

I mean, if there's a DB queue middleware/library that works across DB vendors, be it Postgres, Oracle, Maria, MySQL (with a reasonable storage engine, perhaps) then that's a convincing argument that the SQL standard can support queuing sufficiently. But otherwise, this counterpoint is a hard sell for me.


> it would inconvenience our customers but they can always just retry.

We have very different things in our queues :) An MQ, to me, is a place to put stuff that needs reliable, at-least-once delivery. Like, say, invoices. User never receives that, bad things happen.

There's very little difference between that type of data, and the data in your DB, while it's "active." It's only once the data in the queue is fully ACKed and no longer relevant that the data behaves more ephemerally than first-class DB state.

> and you seem to agree that running it is far simpler OpEx than PG too

I did not say that. In our stack, we run a Redis cluster on top of Kubernetes, because it's nominally semi-stateless. In reality, it's our biggest SPOF. It fights the container daemon around memory allocations (why does a Redis container that uses 2GB memory steady-state need a 64GB memory limit to finish starting up!?) and has weird dynamics around startup readiness (i.e. Redis started from a large AOF-with-RDB-preamble file will start serving requests for just a moment, then immediately stop again to finish loading. This confuses k8s into thinking it's done with startup-probes and is ready for readiness-probes. Which it's not.)

Redis, when configured to be both Highly Available and durable (which is not your use-case, I know) is really just less-robust than a DB, with less management tooling, fewer docs, fewer people who know how to scale it, fewer third-party solutions pre-adapted to expect it, etc. Because, to paraphrase: Redis isn't really in the business of being a database. And yet that's exactly what you end up with, when you try to use Redis as a reliable at-least-once MQ: a database. Just, a kinda sucky one. (One that doesn't even have a concept of users with different privileges; let alone RBAC!)

And yet, that actually is the niche of Redis. It's the durable, client-server cousin to in-process peer-mesh state-sync solutions like Hazelcast/Mnesia. Which makes it kind of weird that it's not particularly tuned at solving the problems in the one niche unique to it.

(I have nothing against Redis! It has non-unique use-cases where it works great, e.g. when it's being used as "memcached with more data structures", or as a coordination/locking server instead of Zookeeper, etc. A robust reliable MQ it is not. My argument is that your average RDBMS is closer to being a robust, reliable MQ than Redis is. Though, of course, you'll get the best robustness+reliability for that use-case out of a "real" MQ — Kafka or NATS or something.)

> but rather that the libraries for DB queues are relying on the non-SQL-standard features/behavior to mock out a full queue functionality

They don't have to, because everything you need to implement reliable queuing semantics is already part of [the lowest-common-denominator part of the] SQL standard [that everything that calls itself an RDBMS implements.]

These libs have DB-specific adapters for two reasons:

1. There's no such wire protocol as "SQL" — every RDBMS speaks its own wire protocol to convey your SQL statements to the database and convey result-sets back; and so, unless you're using a language with a common API abstraction for RDBMS client libs to be built against (e.g. JDBC), then each such client lib ends up presenting its own distinct API to callers, which needs a separate adapter written for it.

2. with SQL, reliable queuing is "easy", but knowing about enqueued items (with as minimal an overhead as possible, scanning as few additional items as possible) is hard. SQL says nothing about how to get push-notification-to-the-client type information out of the DB, and so every DBMS implements its own mechanism for this. These are pure optimizations using pure-ephemeral code, though; the schema of the queue in the DB remains the same either way, such that the data could be exported from one RDBMS to a .sql file, imported from that .sql file to a different RDBMS, and the only thing needed on the client side would be switching adapters.

(In theory, presuming a common wire-protocol abstraction interface lib like JDBC, you could have a "standard SQL" adapter. It'd just be annoyingly low-throughput. Like using HTTP polling when you could be using WebSockets.)


I've got about 6 years operational experience with Redis. I would not use it for a queue. I struggle to consider using it as a data store these days as well with other options on the table. But that's a whole new thread so I'm not going into details here.

If you want a queue, get a queue.

If you want a relational database, get a relational database.

If you want a key-value store, get a key-value store.

Don't cross the streams.


Not really knowing the details of Redis. What makes it a better key-value store than a Postgres table with a key-column and a value-column?


What would you use redis for, if anything?


I assume a key-value store.


Correct


Perhaps I misinterpreted this part?

> I struggle to consider using it as a data store these days as well with other options on the table.

It sounds like you wouldn't use it for any purposes.


Recommending a tool without knowing the reasonable bounds of the domain, the current requirements, and how the tool (Redis in this case) solves the problem isn't good advice.

Case in point: our team went with Redis, just the defaults, and chose to use it blindly without fully understanding our requirements and how redis helps scale.

2 years later we spent 2 sprints, holding back the release trying to understand RDF vs AOF, and why we're seeing massive spikes in consumption and performance triggering pod eviction, running comparison tests to prove which works better and explaining why, running qa tests (regression, performance, load), introducing postgres for queuing, redoing our code to bypass the sync mechanism between how data flows between redis and postgres, updating dependencies, migrating existing customer data (various on-site locations), explaining this to team members, managers and their managers, installation technicians, support engineers and presenting it at engineering townhalls as a case of bad decisions.

Not worth it.


Well, by your admission, you used Redis for a problem domain it wasn't suited for in the first place. How is this an argument for using in-database queues?

> use it blindly without fully understanding our requirements and how redis helps scale

I'm sorry, I don't get how I could come across as advocating the use of Redis blindly. My point is: if your data flow looks like a queue, then use a queue; don't hack a relational DB to become a queue. I think that's a reasonable rule of thumb, not going in blind.


We needed queues. We used Redis. That fits the domain.

Problem was there wasn't a good answer to "How much redis does your team need to know to put it in production".

We thought we knew it well enough, we thought we knew what we were getting into, and we thought that with so many others using it for this, we should be good. That makes a difference, clearly.


I believe you meant "RDB vs AOF".

Also, reading your reply I get the impression that the "sync mechanism between redis and postgres" was the bottleneck. Wondering if you can add some details around it, and also whether this was something that couldn't be fixed by fine-tuning the Redis config rather than completely removing it from your stack.


Yes I did mean RDB. Thanks for pointing out.

There were many problems, but at the core of it, this was us having Redis write huge amounts of data to disk very frequently.

We could not reduce the frequency (product would not allow) and we couldn't find a way to make the writes reliably fast.

I like to believe there exists a possible way of handling this, but point being, our team had no time to find out how Redis works internally and to gain confidence that the new way wouldn't bring up new surprises.


You (or whoever is doing ops) will have to pay with your time to operate Redis until the product dies though.

I don't think adding another component (especially one with storage/backup requirements/more complex failover procedures) should be taken lightly.

Paying attention to the tips/tricks/gotchas is something you'd pay once and hopefully document in source code/documentation.

If you piggyback off someone else's work (another team at your job, a cloud-based solution that handles everything, etc) the calculus would change, of course.


> You (or whoever is doing ops) will have to pay with your time to operate Redis until the product dies though.

Repeating my other comment: do you mean to say using an in-DB queue will not need special monitoring other than what's already standard metrics for DBs?

Since a queue is a different structure, I would want to know metrics that might be unfamiliar for standard DB monitoring. Queue length, queue throughput, etc. Can I get that from my DB queue with negligible cost?

Another thing, there might be standard practices and patterns for in-DB queues, often encapsulated in a nice library/middleware. But as databases are not queues there is no assurance from the vendor that those patterns will remain applicable in future versions.

> Paying attention to the tips/tricks/gotchas is something you'd pay once

Actually, I disagree with this. Every time someone connects to that database, they have to remember that one special table which is used as a queue, and remember to tread differently where it is concerned.

Setting up a queue? You do that once and everyone who interacts with it will not forget it is a queue. Heck you can't even do your usual DB operations there. How can you not remember it is a queue?


You are wayyy overestimating how complex something like RabbitMQ or Redis is. You don’t need to hire entire teams of engineers or go bankrupt setting up an instance. It is less work than implementing a production-level queue in Postgres for sure.


Have worked at multiple companies that successfully used redis or rabbit with teams of less than five engineers.

It's a little insane that a highly rated thread on HN is telling people to use Postgres as their queuing solution. The world is wide, I'm sure that somewhere out there there is a situation where using Postgres as a queue makes sense, but in 99% of all cases, this is a terrible idea.

Also, SQS and nsq are simple.


For years I've been using RabbitMQ in small teams, and me as a one-man team too.

As long as you don't need clustering (even a single node can handle some pretty heavy load), it's actually really simple to setup and use - way easier than Postgres itself, for example.

My biggest beefs with it have historically been the Erlang-style config files (which are basically unreadable), and the ridiculous spiel of error messages you get if you have an invalid configuration. But thankfully RabbitMQ switched to a much simpler config file format one or two years back, and I understand the Erlang OTP is working on better error messages and stack traces.


If your data storage pattern involves a database write followed by a message queue insert, then the ability to wrap those in a transaction can be a good trade-off to avoid consistency failures between the writes.

Avoid consuming that queue directly though -- which is probably what you're thinking when saying this is a dumb idea and I tend to agree. Typically, you want to have a worker that loads entries into a more optimal queue for your application.

Bottom line though, there is not a single best "queue" product. There are lots of queues offering wildly different semantics that directly impact use cases.


“Production-level” means different things to different groups. Many (most?) groups don’t operate at the scale where they need a specialized queue in a dedicated message broker. Simple queues in a DB will work fine. Even if it isn’t very complex, why not already use the infrastructure you probably already have setup — an RDBMS?

Now, if you’re using a nosql approach for data storage, then you already know your answer.


My main concern would be monitoring. Most queue systems connect to alerting systems and can page you if you suddenly stop processing things or retry the same query many many times. For a DB, since the scope of access is much larger, you don't get these sorts of guarantees for access patterns and you essentially need to reinvent the wheel for monitoring.

All to save 2 to 3 hrs of Googling for the best queue for your use case and finding a library for your language.

It makes sense if you don't care about reliability and just need something easy for many many people to deploy (ex: Drupal).


We use pingdom that hits a page that gives it the health of various systems. Queue included.

> All to save 2 to 3 hr of Googling for the best queue for your use case and finding s library for your language.

The cost of using a new piece of tech in production is not just 2 or 3 hours.


If you're at the scale where your Postgres DB can be used as a queue and no one on your team has experience running these systems (most startups), then pretty much anything will work to begin with, and as long as you have a clear interface that separates your code from your deps for the queue, it'll be easy to swap out.

At $JOB-1 I wrote a `Queue` and `QueueWorker` abstraction that used an environment variable to switch between different queue backends while providing the same interface. Because of this I got everyone up and running with Redis lists as a queue backend and then could add in things like MQTT or RabbitMQ as things evolved. I also defined very loose constraints for the queue interface that kept things easy for the implementer. Essentially there was a `push()` which added something into the queue or failed and returned an error. Then there was an `onWork()` which was called whenever there was work, "at least once", meaning your system had to handle multiple instances being delivered the same work item. We would only ack the queue message after `onWork()` completed successfully.

There's not really anything preventing a team from doing this, putting a pin into it, and coming back to it when there's a scalability or reliability concern.


The issue is not necessarily the complexity of RabbitMQ or Redis. The complexity comes from having to manage another (stateful) process that has to be available at all times.


Yes. Pay Amazon to do it and add use all that saved time to add business value instead.

They'll also manage the consumers of the queue and scale them too! Serverless is bliss.


I agree. I’ll also add that Redis is a level easier to operate than RabbitMQ.


In my experience proper abstraction is the key. If you have a clear abstraction and structure you can replace the implementation when you need to without too much disruption. If it’s too leaky you’re screwed.


In my experience, great abstractions without a good data migration blueprint is as about as useful as no abstraction at all.

In this example - great, you have a “queue” abstraction. When you switch to RabbitMQ or Postgres, how do you move your existing data without a quality of service or data loss? It can be difficult with production datastores even if the abstraction within your service is beautiful.


Isn't migrating a queue super simple? Have a window in which you only add to the new system and you listen to both. When the old queue is empty, delete it.

If you need to keep track of old data, then yes, migration is hard. But queues don't save old data—that's the point of a queue.


100%! Create a good enough solution and put it behind an abstraction that remains stable when you later create an ideal solution


Yes. Use a queue abstraction for which there are numerous queues available off the shelf!


There is a lot of space between Postgres and RabbitMQ. If performance is a factor, it takes a lot of work to make a Postgres table fast enough. These days, and for the past at least 5 years, you're better off using a service like SQS, or setting up Redis yourself.


Sure, but I also feel like this comment is based on an assumption that everyone works at startups. A lot of us work for larger, already established companies that are past that will-we-or-won't-we survival stage. We are trying to design systems to solve needs for already validated products.

It might seem like everyone works for a startup, but big companies have a lot of engineers, and we still have to solve new problems there even though the company is not new.


no. he/she actually has a point. a database is not a queue

https://mikehadlow.blogspot.com/2012/04/database-as-queue-an...


The blog post you mention brings two arguments: first, that the database would require polling; second, that the database would need trade-offs between reading (polling) and writing (adding to and removing from the queue).

The original article handles the first argument: Postgres doesn't need polling. Instead, it provides a notify mechanism that informs the application when the table changes (something was added to or removed from the queue), via the SQL NOTIFY statement.

For the second point it also provides a solution: since optimization for reading is no longer needed thanks to NOTIFY, the trade-off looks different; we now need an efficient way to write. For this the article provides an efficient UPDATE statement with special lock behavior. This helps to make writes efficient, too.

It looks like both points from the blog post you linked are handled in the original article.


Not sure. All messaging brokers walked away from relational-database-based storage a long time ago (I've seen how badly JBoss 4 messaging with persistence sucked), and for a good reason: either the database was occasionally killed by messaging traffic, or messaging traffic was crawling because the database was too slow.

These days, with Docker and ready cloud solutions, getting a message broker is so simple that there is no reason to use the database as a broker. If we know that our traffic will be small, we can just install the DB and the message broker on the same VM to avoid spending more on hardware or hosting.


You're not wrong, but if anyone on the team has already worked with a queue system they'd need to manage, then utilize something that scales. Otherwise there's no reason not to use the managed solution from the cloud provider you're hosting with, which will hand-hold you into making it work quickly with your integration.


Great comment: Been reading Taleb’s Black Swan this morning on ascertainment bias, so this resonated well.


You say that as if rabbitmq takes more than an 8th grader to set up in a few hours?


To be fair, you can pull and run the Docker image in a few minutes, and the defaults will work out-of-the-box for a lot of folks.

If you don't need clustering, RabbitMQ really is pretty simple to setup.


I’ve watched companies maintain their non-scaling DBs all the way to the grave.

It’s a killer.

If it’s “just a cache” OK. Fine. If you need it for something important like queuing or certain caching scenarios - be aware it won’t scale well.


This. So much this.


This please. I feel like "How to Get Away with Just PostgreSQL" and the GP comment fall squarely under "so preoccupied with whether you could, you didn't stop to think if you should."

Whatever happened to using the proper data structures for the job? PostgreSQL and MySQL are, at the end of the day, b-trees with indices. Throw in relational properties and/or ACID too. Those aren't properties you need or want in a queue structure.

I know I don't have a solid argument against not doing it; it's just experience (and dare I say, common sense) telling me not to. Not quite like parent but I spent the first two years of my professional career in a team that had the brilliant idea to use DBs as queues. The big task I partook in for that stint is moving them off that v2 into a v3 which used---wait for it---Redis. Everyone's quality of life improved with every migration, proportional to the size of the v2 cluster we retired.


What has me wanting to stick with postgres is that I work on a small team (two developers) and adding more technologies to our stack is extra overhead that's hard to justify. At our peak we're currently handling one request per second, and postgres for a queue is more than sufficient for that. Is there any good reason for us to add, learn, and maintain a technology neither of us yet knows? Or would we do just as well to abstract away the queue in the code so that we can switch to redis when we do run into scaling problems?


> Is there any good reason for us to add, learn, and maintain a technology neither of us yet knows?

Absolutely and that reason is, you are still a small team, with a small user base to boot. That's fantastic opportunity to learn a new technology and build on it properly! Remember everything is easier in software engineering if you assume you have no users[1] and your situation is as close as it gets to this ideal. Leverage it.

Plus, as me and others keep saying, Redis (and other proper queues) isn't a complex addition to your infra. This isn't Hadoop, or Kafka, which is a very special type of queue (one way to put it, at least).

> one request per second, and postgres for a queue is more than sufficient for that

Yes I agree but...

> Or would we do just as well to abstract away the queue in the code so that we can switch to redis when we do run into scaling problems?

What I read when I see such statements is this mythical software engineering ideal that with enough abstraction, a migration is just a matter of writing a new class that implements some interface and then changing a config. For a sufficiently complex app infra, that happens almost never because you could never keep the abstraction leaks to an acceptable level.

Another thing, abstraction does not solve all your problems if the underlying implementation is poor fit to begin with. Let me paint you a plausible scenario:

Once you are large enough, you might find your PGQ acting weird and you realize it's because someone in the team wrote code that accesses your queue table like it's an actual table of records, not a queue. So you think, okay let's prevent that from happening. Maybe you add users and permissions to distinguish connections that need to access between tables proper and queue. Maybe you start writing stored procs to check and enforce queue invariants periodically.

Well, guess what, all those problems would've been solved for free if you invested maybe one work day getting a Redis server running when you were a two-person op serving one request per second.

Lastly, scaling a relational DB is an entirely different beast from scaling a queue. Scaling anything does not ever come painless but you can reduce the suffering when it comes. Would you rather scale PG so it can keep acting as a queue or scale a queue that's, you know, really a queue in the first place? Heck the latter might even be solvable by throwing money at the problem (i.e., give it more compute).

[1] Except for the part where you need to make money, of course.


> Absolutely and that reason is, you are still a small team, with a small user base to boot. That's fantastic opportunity to learn a new technology and build on it properly! Remember everything is easier in software engineering if you assume you have no users[1] and your situation is as close as it gets to this ideal. Leverage it.

I have to disagree. Of course code quality is important, but building things "properly" because "we may need it later" is a great way to kill a project with complexity. KISS, YAGNI. An early startup is, IMHO, not a good place to learn about new frameworks while getting paid - you're on borrowed time.

Make a back-of-the-envelope calculation about how much throughput you need. E.g., if you expect to have 10,000 users, and each may make one request per hour, you're dealing with 3 qps. Anybody who wants to bring in a new dependency for this, needs some talking to.

(If you already need Redis anyway and it's a better fit than Postgresql, then sure, go ahead.)


> but building things "properly" because "we may need it later" is a great way to kill a project with complexity

Emphasis added because I feel like I addressed this in the paragraph immediately after the one you quoted:

> Plus, as me and others keep saying, Redis (and other proper queues) isn't a complex addition to your infra

I'm speaking out of experience and, as I already pointed out in another subthread, Postgres is far more complex than Redis. Consider the presence of "DB Admins/Specialists" and the lack of counterpart thereof for Redis and other queuing solutions.

Of course, if queues are not central to how your platform operates, you might be able to get away with Postgres. I still advise using Redis as a reasonable hedge against someone famous tweeting organically about your service because in this case, you don't want your DB to go down because some queue table had a surplus of transactions (or vice versa).

Not to mention, at an early stage, your tech decisions set precedents for the team. Maybe you have 10K users with a low qps but soon you are sending marketing emails to them and your system has periodic bursts of queue activity for all 10K users at once. When discussing this marketing "feature" rarely anyone thinks, "Hey we can't do that with our Postgres queue", rather "Yeah I saw functions in our codebase for queuing---this is doable". This is a small effort but a huge technical investment for later on.


> I still advise using Redis as a reasonable hedge against someone famous tweeting

Early stage startups die because of lack of PMF. Diverting focus and resources away from finding PMF kills companies. Most companies should focus on the product, tech debt be damned.


> Whatever happened to use the proper data structures for the job?

This so much. People too often treat databases as magical black boxes that should handle anything. The database is most often the bottleneck, and choosing the proper storage engine with appropriate data structures can be 100x more efficient than just using the defaults. 1 server vs 100 can definitely make a noticeable difference in costs and system complexity.

While premature optimization is bad, choosing the right tool for the job is still somewhat important and will usually pay off in the long run.


I think your "most often" is more like 0.01%. I'd say the inverse is true: _most_ would be fine with a single SQLite host or something like rqlite.


rqlite author here. Happy to answer any questions about it.

https://github.com/rqlite/rqlite


Are you planning on adding websockets or something similar in the near future to support things like, e.g., data change notifications [0]?

[0] https://www.sqlite.org/c3ref/update_hook.html


What would you then consider to be the most common bottleneck?

I agree that there are many cases with low workload where that would be plenty.


Most common bottleneck is lack of competence.

Direct visible effects are wrong decisions entangled in spaghetti-like complexity.

It's hard to reach a technical bottleneck in well-designed systems. Computers are really fast nowadays. Bottlenecks will vary greatly depending on what kind of system it is. Of the resources – CPU, memory, network, disk IO – the weakest of them, often the network, will likely be saturated first. But that's not a rule; it's easy to have a system that saturates, e.g., the CPU before anything else.


Competence is expensive :) While I mostly agree, even well designed systems have (sometimes considerable) tradeoffs.

> It's hard to reach a technical bottleneck in well-designed systems. Computers are really fast nowadays.

I have been hearing about how fast modern computers are for the better part of the past two decades, yet as a user I still have to deal daily with plenty of slow software and slow web services.


Somebody once said "cheap things are expensive". This idea applies to developers as well. Cheap developers will drive a company down bumpy roads towards uninteresting plains. Good developers not only pay for themselves but bring in orders of magnitude more cash. The only thing that touches on this that I can find is the "software craftsmanship manifesto".


True


This is the best post in this thread.

A lot of people don't see the effects of their decisions. They leave a company after 3-5 years and go and work somewhere else where they get to make the same mistake again. The bottleneck indeed is lack of competence.

As for technical bottlenecks, it's quite easy to hit a wall. Be it through layers of stupid or unexpected success. We have unexpectedly reached the limit of what is possible with x86-64 on a couple of occasions due to stupid decisions made over 10 years previously for which there is now no longer the budget or attention to fix.


Money. The most common bottleneck is money and customers. Use whatever helps you get new customers faster.

Don't be scared of having to make changes in the future. Do the small amount of work it takes today to make sure your transition in the future is easy.

Transitioning from a SQL queue to Redis is only difficult if you have a bunch of SQL scattered throughout your code. If you have that, you did it wrong.


> Whatever happened to use the proper data structures for the job? PostgreSQL and MySQL are, at the end of the day, b-trees with indices. Throw in relational properties and/or ACID too. Those aren't properties you need or want in a queue structure.

How is relational storage engine with support for transactions, document/json, ACID, hot-standby "the wrong data model" for a queue?


Your comment sort of explains why you would use your DB as a queue. It is a big task to migrate to a new system. If you already have Postgres or MySQL integrated and deployed, using it as a queue may be the simplest option.


For the scale of most tasks, the existing RDBMS that a small app is using for its data is perfectly sufficient for the queuing needs of said app. This reduces complexity in the app and removes the need to maintain additional tools.

Frameworks like Laravel make this easy because they allow you to choose different backends for your DB, queue, and cache, among other things, all or some of which can be an RDBMS like Postgres or MySQL/MariaDB.

When your app's needs outscale what the RDBMS can rationally handle, then you can dedicate the resources to switch to a more specific tool, as your growth has indicated you should.

I can't say what percentage of apps can handle things this way, but the anecdotes mentioned in this thread are clearly cases where growth outstripped what a traditional RDBMS like Postgres is well-suited for, and so migrating was the right thing. The problem was likely identifying the need too late, or delaying the transition until it was too painful. I don't think there's a magic bullet here, but I also don't think that starting with a dedicated queuing tool is always the right thing to do.


The solution I use now is to have the queue in the DB and then have a single process pushing to another queue better suited to dealing with workers.

I drank the “keep your queue out of the DB” Kool-Aid once, but then you end up worrying about transaction boundaries and atomic writes to the queue and the DB. It totally depends on your workload, but in my case I've found the halfway solution (the queue in the DB, so you get to write to it within your transactions, plus a dedicated queue for the workers) to be a much better solution.


The outbox pattern. It can in many cases be solved by using topics.
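
For readers who haven't seen it, here is a minimal sketch of the outbox idea in Postgres-flavored SQL (table and column names are made up): the business write and the outbox insert share one transaction, and a separate relay later forwards outbox rows to the real broker.

  BEGIN;

  -- the business write and the outbox entry commit or roll back together
  UPDATE accounts SET balance = balance - 100 WHERE id = 42;

  INSERT INTO outbox (topic, payload)
  VALUES ('payment-completed', '{"account_id": 42, "amount": 100}');

  COMMIT;

  -- a separate relay process polls the outbox, publishes each row to the
  -- broker, and deletes (or marks) it once the hand-off succeeds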


Ah ha. Yup, that’s exactly it. Thanks for the pointer.

Edit: can you explain what you mean by solving it with topics? I thought topics were a broadcast mechanism?


If you have the case "if A, then do B and C", you can commit to a topic and let B and C be triggered separately. This does not solve cases like storing to a DB and then sending a db-updated event.


Are you sure it works properly when something fails? That's a wonderful question I like to ask people, because the answer is usually no. Then someone gets billed twice for something, and the guy who said this was a good idea gets shot.


Well, I only ever enqueue idempotent work, so again, in my scenario it can fail and be rerun, but I’m also unsure about what you think is a bad idea here.

Honestly, I used to have more code managing two-phase commits so I would only push to the queue when I was sure (ish) the DB commit was OK.

Say you need to trigger something to send an email, but there's an error condition afterwards and the transaction is rolled back. How do you handle that failure? In my scenario the queue is rolled back too, no harm, no foul.


You're not wrong, but every solution involves compromises. More broadly, though, I would argue that almost nobody actually needs queues.

Sure, there are cases where you actually need strict ordering, but they are relatively few, in my opinion, involving things like transaction processing or other areas where the order of operations changes their effect and must be strictly sequential.

Ordering itself brings significant technical challenges. For example: with a queue, you can only have strict ordering as long as you process one item at a time, with no concurrency. Ordering also complicates failure handling and retries, because every newer item has to wait until the failed item has been dealt with; if you use things like dead-letter queues, order is violated.

In almost all use cases -- anything from image processing to newsletter delivery -- a better system is an unordered pool of items that anyone can take items out of, in any order. Then you throw as many workers at it as you need, and design it so that only minimal locking is required to prevent workers from stepping on each other's toes.

There are queue systems that have a purpose in a stack, specifically pub/sub brokers, which can be used to schedule work along several dimensions. You use the pub/sub system to signal work to be done, but each payload just refers to some state that's stored elsewhere.


I think you're missing something important here by applying the worker-pool simplification to the idea. Order is important in that case as well. It forces fair scheduling of the work, and the metrics from the queue can easily describe the overall latency of the work being done as well as the capacity and utilisation. The latter can be used to scale the consumers of the queue and/or alert when things aren't working properly.

If you throw a proper messaging broker like RabbitMQ on the table, then you're getting message acknowledgement as well, which is important if the work must get done. A failed consumer will result in the work being rescheduled on another consumer transparently. Of course there are also metrics you can watch to get a heads-up when your consumers are not working.

Ergo, don't simplify something too soon. Even reliable delivery of newsletters is rather important; otherwise, why would all the tracking pixels exist? (They are a form of message acknowledgement.)


If latency is important, I would argue that ordering doesn't help you.

That is, if you have the computational resources to service your entire queue within the required latency, then ordering doesn't matter, because workers could simply claim the first available random task, and it doesn't matter who finishes first or last. If you don't have the resources, then ordering still doesn't matter, because you are blowing your latency budget anyway.

For example, say users are uploading photos which go into a queue for scaling, cropping, etc. Let's say 10 users per second are uploading a photo. In the best-case scenario, the queue is empty, so all users should see their photo fully uploaded quickly. Worst-case scenario, your queue has a million tasks pending, which benefits a handful of early users who see their photos completed, whereas most users will simply wait forever. Prioritizing early users (i.e. queue order) doesn't benefit anyone when everyone will be miserable, so it's an arbitrary choice.

The ordering imposed by queues can create an illusion of fairness when task execution time (i.e. end-to-end latency) is not taken into account. If one user uploads a 1 gigapixel photo, it's most likely going to take up more processing time than other users, and again, there's no reason to prioritize that user over any other user just because their request arrived earlier.

If you have random-order delivery, you can still measure latency, capacity, and utilization, as well as have retries and acking, and utilization-based scaling.


I think you'd still want loose ordering even if you didn't care about strict ordering. If you always have 50 items in the queue, you could be blowing the latency budget on random items because they keep getting passed over, whereas with some form of loose chronological order you could still be completing everything within a bounded time.

SQS without FIFO works about like this AFAIK.

I agree with the gist of everything you are saying though, particularly the point about signaling. As soon as you want to start keeping track of a process that state is going to be stored somewhere outside the queues.


I use queues for retries, buffering, and parallelization. It's pretty easy to write most programs to support at-least-once delivery and to run multiple queue workers that you auto-scale.


Can you explain exactly what doesn't scale well?

Databases scale well in general, it's one of the things they're built for.

A single database server with an indexed table (b-tree) and an SSD is extremely performant. (And yes there are other indexes that could be even more performant, but b-trees are already extremely fast.)

But the huge advantage of putting a queue in your database is being able to create transactions across both the queue and data in other tables that needs to change at the same time.

Not to mention one less extra tool to maintain, existing database tools for backups replication etc. automatically include the queue as well, etc.

As well as the fact that it's incredibly easy to make your queue behave according to whatever kind of custom business logic you might need. (Which is where additional indexes and b-trees can turn out to be necessary, and which is precisely what databases are great at.)

So to the contrary -- unless you're at Facebook-level scale, using modern databases for queues generally seems like an excellent choice. (And in the special cases where it's not, that's usually quite obvious.)


I very strongly agree with the idea of using the tools you already have and know to solve the problem at hand and release it. Then observe where it could use help, then seek out specific products solving those problems, and only then can we get to the really good questions that pierce the veil of marketing and the comfort of herd mentality.


I believe most people simply have no concept of how performant modern RDBMSes are. "Joins are slow" and all that.


Skype at its peak used a Postgres queue extension, PgQ, at its core. Seemed to go OK for them. The extension did work around some limitations of running a pure SQL queue in Postgres. These days I would run Pulsar, but if you only have a million users and already have Postgres, you will be fine.


I worked at a place using PgQ (the person who created it worked there before going to Skype, IIRC) and maybe it was working well operationally, but it was not a great experience to develop on. It was hard to generate events and debug/observe them (it was eight years ago, so the details are a bit fuzzy).

MS also moved away from it after they acquired Skype, but I guess that was more because of the shift to SQL Server.


Marko Kreen? He still maintains it so I guess some people still use it and the associated logical table distribution system.


Look at SKIP LOCKED in PostgreSQL and MySQL.


It’s not that. It usually ends up in the same storage engine as the main business functionality, which leads to IO contention. Then some sick and twisted bastard does a join to it from somewhere abhorrent, which means it requires some serious heavy lifting to fix.

This is unfortunately the status quo in the slightly-less-ideal universe of “it seemed like a good idea at the time”.

Use SQS or RabbitMQ or something.

Edit: Also, if you’re using something SQL-based for queues, you’re going to have to build out your monitoring stack yourself, based on your implementation, rather than scrape a metrics endpoint with queue stats or pull from CloudWatch.


The benefit of having the queue in the same engine as the main business functionality is that you can do stuff in a single transaction.

Having the queue on its own means you have to handle exceptions like "job is done but queue didn't get updated" or get into distributed transactions, which is a very different challenge.

Also, operationally having one inherently stateful component (db) versus two (db, queue) makes things a lot simpler to manage, deploy, update, etc.

Not claiming this is a perfect solution, just that the tradeoff isn't obvious.


The point of queues is sequencing, storage and decoupling of events for the sake of scalability and durability.

By putting it in the same transactional store with the same transactional boundaries, you’re instantly defeating the whole point.

Not only that, most queues define boundaries between different logical systems, each with its own transactional boundaries, at which point your database’s transaction scope is extremely limited in capability.

In the real world of messaging transactions span more than just the database engine.

It’s just bad architecture. Full stop.


> The point of queues is sequencing, storage and decoupling of events for the sake of scalability and durability.

The point of queues in any particular application is some subset of that; the calculus of implementation approaches that make sense depends not on the abstract point of queues in general but on the concrete point of queues in your use case.


Having a separate queue makes sense across application boundaries. If you need queuing mechanisms in your own application you should be able to get by with a dB for 99% of use cases


So use a separate database.

Is contention with the application your only objection? That’s pretty weak.

What’s the real compelling reason not to use a database? You haven’t said.


Contention; architectural separation; you have to build your own monitoring stack; it’s not transactional with concerns outside the database without introducing distributed transactions and risk; no routing or distribution capability, so you have to build it yourself; and point-in-time message durability is somewhat dubious depending on your acknowledgement process, which of course you also had to invent yourself.

Not to mention numerous database administrators crying into their cups of coffee.

Enough?

Like I said 20 years of people fucking this up is my experience.


Meh. All your objections are hand-wavy and not factual.

Databases actually work fine as a queue but emotionally you don’t like it. That’s fine it’s just not real strong objections.

What you have not said is “it physically does not work”, and that’s because it does work fine.


I don't think he's saying 'it cannot be physically made to work (given enough effort and discipline)'. The impression I got was more like 'if you go down this route then the incremental path of least resistance leads somewhere bad'.


The range where databases work fine is pretty small. Sudden spikes in job creation will kill the performance of the database. Often enough the job creations come from external systems you can't control.


I’m not saying it doesn’t work. I’m saying you’re shooting your toes off down the line.


> you have to build your own monitoring stack,

How is this done in Redis automatically? At some point you are writing queries (albeit Redis ones) to pull metrics. Other queueing systems may expose an API but there is always some level of integration. With Postgres/other db I would write SQL which is their API and a powerful one at that.

I can couple events with triggers to auto-generate other events etc, have built in audit capability, roll up reporting etc, all with standard SQL. (all in the context of monitoring and reporting)


Um, I'm not suggesting using Redis. In actual fact I said elsewhere I wouldn't use Redis for queues.

As for triggers, reporting, audit, I'm laughing now because you miss the point. That's another bunch of IO in your monolithic black box, and you're still building it.

Start here: https://www.rabbitmq.com/prometheus.html


Currently at work we use a Postgres DB for queues for long-running persistent jobs, but I'd like to move away from that model since I have doubts about how it'll scale.

I've thought about using RabbitMQ, but have a few questions: it appears that when a job is consumed, it's gone. How would I maintain a record of the jobs and the status of each job in RabbitMQ? If I wanted to display job updates, how would I handle that? I didn't think you could update a message once it's already in the queue.

Or am I missing the mark? Do I want to separate the business entity "job" that maintains a status and updates and such from the "thing that dispatches jobs to workers"? And when a worker Has the job it just updates the business entity in the database?


> Do I want to separate the business entity "job" that maintains a status and updates and such from the "thing that dispatches jobs to workers"? And when a worker Has the job it just updates the business entity in the database?

If you ever really need to, that's one way to do it, but consider that I'm imagining 50-100k state updates per second as being reasonable, depending on how the data and queries are laid out. You can always use NOTIFY as discussed to cut down on the number of worker queries for new work, before having to push signaling entirely to a separate system (which would likely still require occasional polling and/or a background process to re-signal work that hasn't been picked up).

It's interesting to consider that AWS Step Functions charges (or did charge) by the state transition.


You might want to look at something like SWF or Temporal


I dunno. If it’s usually the same problem, it should be easy to identify and fix.

Unless we’re talking about a high load, there should be no problem doing this.


It’s only the same problem until someone dumps 25,000 entries in your queue.


I’ve seen all manner of hokey queue implementations in SQL going back about 20 years, and all of them could handle 25k enqueue bursts. That wasn’t a problem for a Sybase database on a commodity host circa 2000.

I think if I were going to argue against using DBs as queues it would be around: heavy parallel write use cases, latency concerns of both reads/writes and scaling to millions of events per second.

If you don’t have those concerns, using a properly normalized and protected schema for your queues (which you are doing anyway, right? Because if not, you are already shooting your toes off) goes a very long way and removes a very big operational burden and tons of failure modes.


I agree.

Going from a single database to a (database + queue) means two server processes to manage, maintain, observe, test etc.

I actually start with SQLite to reduce as much distributed state as possible, then move to something else once it’s proven it will not work.


It wasn't a problem for Sybase on a commodity host circa 2000 because clearly that host wasn't doing a whole lot of other stuff. It's a big problem for our 48 core nodes with 2TiB of RAM and a metric shit ton of DAS NVMe. Ergo anecdotes don't scale either.

To clarify we just moved this entire problem to SQS.


What database can't handle an extra 25,000 entries?

That's... nothing. Databases handle billions of rows effortlessly using a b-tree index. So not really sure what point you're trying to make?


It can absolutely scale well if you design the database properly. However, it’ll probably be fairly high latency compared to a Redis or Kafka stream.

The thing that often makes databases appear slow is that they’re typically configured for large random reads rather than small uniform writes. And they have insane initial response times because of all the IPC they typically do when starting a session.

But aside from that, if you use a heap table without an index you can probably get similar throughput to Kafka. And if you turn fsync off you might even get up to Redis speeds.
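
As a rough illustration of those knobs in Postgres-flavored SQL (names made up; whether this actually approaches Kafka or Redis throughput is workload-dependent):

  -- UNLOGGED skips the WAL: writes are fast, but the table is
  -- truncated after a crash, so jobs are not durable
  CREATE UNLOGGED TABLE job_queue (
    payload jsonb NOT NULL
  );

  -- per-session: don't wait for the WAL flush on COMMIT
  -- (a softer variant of "turn fsync off", which is a server setting)
  SET synchronous_commit = off;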


It's possible at scale, but the engineers who can do that are usually outnumbered 1:20 within most organisations (excluding FAANG and co). Asking a developer to design a database is not even a normal process today; unless the ORM does it for you, very little is actually done with the database. It might as well be a flat file...


That’s sad. I learned relational DB design before I even really got into systems programming and it was really a valuable thing to learn. You can be so much more efficient and less error prone with normalized data structures.

There’s a reason SQL is still around after all these years. It’s basically math and while the syntax is a little moldy the principles are eternal.


One of my businesses runs a software solution built in 2012. Ever since, it has been using queuing via a database. It's simple, easy to maintain, and scales really well. We are talking about 10k queue operations per second. Yes, it puts off developers like you; they usually prefer the latest fancy trending tool. Luckily, those types are often responsible for overengineering things and overly complex solutions.


There are A LOT of use cases for a queue that aren't particularly compute intensive, where you aren't worried about ever needing to scale - especially not beyond your Postgres database.

So while this might be good advice if you will need to scale, it's certainly not blanket advice.

In the first case, I would much rather just have Postgres and not a separate queue to manage.


The other side of the coin is that queue servers are databases too, just optimized for the use case.


In your experience, roughly at which point did it stop scaling and why?


I also have a lot of issues with people using tables and cronjobs instead of just using queues.

I was once the one implementing this stupid idea. It's very easy to shoot yourself in the foot. It seems OK at first, but it is a stupid idea and will give the operations team nightmares.

It works as long as the table is small. It gets especially problematic when there are sudden spikes in the workload, so that the table is growing rapidly. People don't test these scenarios enough.

Often it's important to do the jobs in the order they came in. This means you need an index set up for that order or you'll end up with full table scans (see the sketch below). You also need to remove jobs which are done, but those deletions will also affect the table's performance.
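
For instance, a hedged Postgres-flavored sketch (schema is hypothetical): a partial index over only the pending rows keeps the "next job in arrival order" lookup cheap even when the table accumulates millions of finished rows.

  CREATE INDEX jobs_pending_created_idx
      ON jobs (created_at)
   WHERE status = 'queued';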

There are tricks with databases to help scaling this, but at the end of the day it's much simpler to just use a queue.


It stopped scaling when we put in the PO for a $150k x86-64 box to run it on and finance said WTF.


What about in terms of number of requests and requests per second?


Exactly, GP does not sound like someone who has worked on anything with decent scale, because if they had, they would know that it is a really, really bad idea.


Or maybe GP is someone who has worked on projects from their infancy. If your queue will be handling less than a request per second for the foreseeable future, why not just use the technologies already in your stack and plan a migration path now? What do I gain from adding Redis (and thereby making my system even more distributed, with all the hazards that carries) if my project isn't even guaranteed to make it to 10 rps before going bankrupt?


Very few things have "decent scale". 99% of apps can run on a single server with modern hardware.


Lean startup begs to differ :p



The article mentions that. The comment by chx was rather a more generalized idea for implementing a queue that is not database-specific.


`FOR UPDATE` is also a good idea.


Yeah, most of the NoSQL solutions boil down to not learning how powerful SQL and its programming extensions are.


That’s a bit of rewriting history. NoSQL, particularly document-style stores, came about at least as much as a reaction to the overweight policies and ORMs the SQL world had made rampant at the time.

You’d be trying to store a tiny bit of simple state and all the books/articles would have you standing up read-only views and stored procedures for all your CRUD ops. The document stores came along with a fresh perspective and easy scaling.

Then there were the columnar stores and time-series stores that really did solve the newer scale problems in ways the existing SQL stores didn’t.

I’m a SQL guy through and through, but it’s important to recognize that the NoSQL movement was a reaction to real pain points. It also made the SQL databases better.


What history rewriting?

I have been using RDBMSes all along since 1996.

Nokia NetAct was scaling GBs of data across multiple clusters with an OLAP reporting engine in 2005, with no hiccups.

The experience I had with DynamoDB kind of proved to me I haven't lost anything by staying in the SQL path.

Most NoSQL deployments I have seen could have been easily done in Oracle or SQL Server, provided they actually had a DBA on the team.


But that’s sort of the point. You are saying “if you have a DBA and use proprietary products and have the skill to understand the trade offs of running a db with another data warehouse product layered in you can handle GB of data”.

Mongo said “Throw data at me and I’ll scale with very little work”.

Now, I’ve always largely believed that’s penny wise and pound foolish but it’s certainly a good pitch.


"mongo said" - the obligatory mongo DB is webscale video:

https://m.youtube.com/watch?v=b2F-DItXtZs


Which is what I originally stated "boil down to not learning how powerful SQL and its programming extensions are".

One can only have bought into Mongo's story, by lacking the skills to understand how fake it was.


DynamoDB was designed to solve a very different problem than a traditional SQL database. When DynamoDB (and the other Cassandra flavors) were released, there were no databases doing multi-master failover with high write throughput - we are talking about TBs, not GBs. It's not a coincidence that Google, Facebook, and Amazon all had to write their own databases at around the same time (BigTable, Cassandra, Dynamo).

With those new tools you had other companies building on top of those databases for far cheaper than a license to MSSQL or any other OLAP of choice would give you.


How so?


You really really wanna do everything (locking, working, unlocking, deleting or completing) in a transaction, and use SKIP LOCKED to find work. It's actually pretty high performance when you do this.
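
A rough sketch of that flow in Postgres (table, column, and placeholder names are made up); the claim, the work, and the completion all live in one transaction, so a crashed worker releases its lock and the job becomes claimable again:

  BEGIN;

  -- claim one job; rows locked by other workers are skipped
  SELECT id, payload
    FROM jobs
   WHERE status = 'queued'
   ORDER BY id
   LIMIT 1
     FOR UPDATE SKIP LOCKED;

  -- ... do the work for the returned id ...

  -- complete it (or DELETE it) while still holding the lock;
  -- $claimed_id stands in for the id returned above
  UPDATE jobs SET status = 'done' WHERE id = $claimed_id;

  COMMIT;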


Do you have a more detailed example of how this could look?


The article starts by discussing SKIP LOCKED



Problems with this approach:

1. Recovery.

2. Concurrency.

3. Polling.

In more detail:

1. Recovery. Suppose a worker dies while processing a job. The job should be retried, but how do you know this, since expire > 0? You can impose a timeout, but that has drawbacks -- there might be long jobs you repeatedly start but never finish. To recover failed jobs without imposing a timeout, you'd have to run both the UPDATE and the job processing inside a transaction. That way, a failed job can result in an implicit ROLLBACK, freeing the job for future worker attempts.

2. Concurrency. So now recoverable workers are trying to claim jobs inside a transaction. If there are 10 jobs and 10 workers, we want all 10 jobs processing concurrently. However, each worker SELECTs the first item_id and tries to UPDATE that same row. 1 worker wins. The other 9 workers block while that transaction completes, then update zero rows. Concurrency will hover around 1.

3. Polling. In the transactional UPDATE approach, there's no way to tell whether there are free jobs short of trying to claim them via UPDATE, which blocks. So you must poll, and either burn cpu on quiet queues, or introduce an artificial delay into job processing times. The beauty of the SKIP LOCKED approach is that it can tell whether there are free (not currently processing) jobs. Even if you wake all 10 workers up via LISTEN / NOTIFY for a lone job, 9 workers in the thundering herd will fail to claim the job, and go back to sleep.

I blame TFA for not sufficiently explaining the SKIP LOCKED approach. A much better explanation is here: https://www.2ndquadrant.com/en/blog/what-is-select-skip-lock...


1. I interpreted expire to be the timeout. There could either be a standalone transaction, run periodically, that sets the expiration back to zero, or you just update the queries to read `expire < now()`.

Thus neither is running in the same transaction as anything else. The SELECT will never block, since it is not specified as FOR UPDATE. The UPDATE itself specifies the key.

On the other hand, the SKIP LOCKED approach is really nice for allowing the queue processing to be one single transaction that commits atomically, and thus has nicer semantics than a timeout-based retry strategy. I do like that approach. The only downside is that not all RDBMSes support those semantics, but for an app that only needs to support Postgres, it does look like the smart approach.
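
A minimal sketch of that timestamp variant (Postgres-flavored, with an illustrative five-minute lease and made-up names): the claim succeeds only when the item is unclaimed or its previous lease has lapsed, so crashed workers are retried without keeping a transaction open during the work.

  -- affected rows = 1 means this worker now owns the item
  UPDATE queue
     SET expire = extract(epoch FROM now()) + 300
   WHERE item_id = $selected_item_id
     AND expire < extract(epoch FROM now());

  -- on success the worker deletes the row; if it crashes, the lease
  -- simply lapses and the item becomes claimable again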


A single Rabbit node seems easier than rolling this yourself and doing all the work to test it.


It's a tradeoff between software and infra complexity.

I would argue that if you've built a system up from scratch, this is much easier to debug and maintain than a foreign piece of software. Rabbit is just way overkill if you have a couple of jobs per hour, for example.


But if we always do what's easiest, then we will make a bunch of horrible choices.

Other things that matter, often more than what's easiest:

* Flexibility. If you aren't 100% sure what you'll need, but need something in place today, it can make sense to choose an unopinionated, generic solution.

* Familiarity. Maybe you've never used Rabbit.

* Ops simplicity. Maybe you already have a bunch of postgres deployed. Adding a new type of system means you need new ways of monitoring, new dependencies, different deployment.


Of these, I think the last is the most compelling. Why introduce something new if you already have all of the infra and tooling configured for managing postgres? The parent mentioned testing the postgres implementation, but that seems like quite a lot less work than configuring the infra-as-code, monitoring, alerting, etc required for a new type of thing.


There are off-the-shelf libraries to do this in nearly every language.

It's always about tradeoffs in context, but I have seen plenty of instances of someone setting up a rabbit cluster for something that could be done trivially in their existing db cluster via the above.


The way we did it (in MySQL) was "UPDATE tasks SET processid=<unique id for the queue consumer task> WHERE processid IS NULL ORDER BY taskid LIMIT 1" and then look for processid to find your task. I guess the LIMIT on UPDATE might be a MySQL extension? Then if your task crashes or dies you can look for invalid processids and do cleanup.


Our initial version did that too, but generating a unique ID was removed soon after, as we had expire in there anyway for retries, as I mentioned, and that is enough.


Right. And this article reminds me of people trying to use a SQL database like a filesystem. "Lock file before update otherwise it will get overwritten".

Don't use a record lock as if it were an attribute of the row. Add a column on the table called "status" and use it.


Thanks for posting this! It is relevant to me right now and a great example, especially since I'm using a database that's new to me, Spanner.

Specifically, I similarly pull rows into a 'queue' like that linked SQL, using WHERE primary key, primary key, and queue_status ORDER BY timestamp, uuid LIMIT 10. I was worried about what would happen with many requests around the same time, whether it would accidentally pull duplicates into the 'queue'.

I think (hopefully) I've implemented Spanner's locking read-write transactions correctly and that this won't cause problems at scale (we're not even close to Google scale, lol).


> You need a table with a primary id and a "status" field.

For a queue, you need a table with an insertion order and a status indicator. “Primary id” is not enough because “primary id” may not be ordered (e.g., a UUID primary key is a primary id.)

OTOH, while a DB can do this, there are plenty of well-tested, open-source, purpose-built message-queueing solutions, so the set of scenarios where “roll your own with a DB” isn’t a misuse of effort is limited.


Why on earth would you make a queue table with a non-serial primary key? What benefit do you gain from a UUID here?


What happens if the process that performed step 2 crashes before it was able to complete whatever processing it was supposed to do?


The idea, I think, is you wouldn't delete the job from the queue until the processing was done.

Of course, this relies on the jobs being something that can be retried.


But you will have marked it as in progress by setting the "expire" to a non-zero value, preventing any other workers from trying to work on it. How will they know that the worker which marked it actually crashed and will never finish?

By using SELECT ... FOR UPDATE SKIP LOCKED, the record will automatically get unlocked if the worker crashes.


Not sure. Maybe something else is responsible for identifying "stuck" jobs and kicking them out to humans to decide what to do (assuming worker crashes are rare)

If we're talking PostgreSQL specifically, and newer-ish Postgres (9.5+ I think), then you can leverage its abilities to do all this in one atomic query:

  UPDATE jobs
  SET status='working'
  WHERE id = (SELECT id
              FROM jobs
              WHERE status IS NULL
              LIMIT 1
              FOR UPDATE
              SKIP LOCKED)
  RETURNING id


If you are willing to use SELECT ... FOR UPDATE SKIP LOCKED, then you can implement the queue without any UPDATE query or status column at all. Just lock the record to mark it as in progress; that way, when the worker crashes, the record is automatically returned to the queue.


This approach has its own gotchas.


Looking at the code, some steps are missing in the description:

3. When processing is complete the matching table entry is deleted.

4. There is a "reaper" process that checks for stale jobs (ones whose expire time has passed) and resets them to 0.
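
A sketch of such a reaper in Postgres-flavored SQL, assuming the epoch-seconds expire column described above (run it periodically, e.g. from cron):

  -- any claimed item whose lease has lapsed goes back to the
  -- unclaimed state so another worker can retry it
  UPDATE queue
     SET expire = 0
   WHERE expire <> 0
     AND expire < extract(epoch FROM now());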

This of course raises more questions.


I think they want to select expire < now instead of expire = 0. Delete the row on completion.


> SELECT item_id WHERE expire = 0

I come across a lot of SQL databases that use 0 instead of false.

Is there any good reason for this?


A boolean "false" would fit a field called "expired", but this is called "expire" because it is a timestamp: the epoch time when the item expires. The worker presumably finishes before that time, and if it does, the item is deleted. So, if any item has a nonzero expire in the past, it needs to be reset to 0 so it can be retried.


Only in databases that have no Boolean type.

Sometimes it's an artifact of the orm mapping.


So that you don't need a separate column for an associated datum?


Great! So I learned this technique. But one thing that's not clear to me: how can an "expired" datetime field allow easy retries? Can anyone give me an example? Thanks!


Successful items are deleted from the queue before they expire. So an item with expired > current time needs to be retried because it failed processing. So, a cronjob resets these to expire 0 and the next worker can pick them up.


Couldn't the process delete the row, and not commit until it has finished processing? If it fails, the transaction is rolled back when the DB connection ends. This would not require a timeout or clean-up.

One would need the appropriate transaction isolation, but shouldn't it work?


That requires maintaining more DB locks.


Doesn’t sound like a robust solution. What if the job takes longer than your estimated expire time? Does the worker keep updating the expiry date on a background thread while doing the processing?


Same for any queuing system. You need to set the expiry time long enough for the expected task duration.

In SQS, for example, you use a visibility timeout set high enough that you will have time to finish the job and delete the message before SQS hands it off to another reader.

You won’t always finish in time though, so ideally jobs are idempotent.


Thanks for the explanation; I'll need some time to digest it... I guess one might start with the simple "status" field approach ;)


The status doesn't help if the worker died during processing. If you've got something that's been "in progress" for 20 minutes on a job that takes 2, your job might not be getting done. That's what expired is for.


I see now! So the 'expired' field is for detecting dead/incomplete workers.


You mean expired < current time, right?


No, you don't, but it's a lot better to do an atomic select/update, IMHO.

Basically you have status = 0, 1, 2 for queued, processing, done. You have a filtering CTE that returns the queued rows sorted by id, and you update status = status + 1 on the top one (sketched below).

You can even get fancy and disallow updates on the status column for other users to make it immutable.

I've done this in SQL Server in a very similar way to what's explained in the post.
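
A rough sketch of that in T-SQL (names made up; the READPAST/UPDLOCK hints play the role SKIP LOCKED plays elsewhere in the thread):

  -- status: 0 = queued, 1 = processing, 2 = done
  WITH next_job AS (
      SELECT TOP (1) id, status
        FROM jobs WITH (ROWLOCK, UPDLOCK, READPAST)
       WHERE status = 0
       ORDER BY id
  )
  UPDATE next_job
     SET status = status + 1
  OUTPUT inserted.id;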


What am I missing? With your description, nothing ever sets expire to 0. Can't look at the code comfortably as I'm on mobile.


So you need to keep polling the db?


Postgres has LISTEN/NOTIFY for this.


I believe something like this would have issues with multiple consumers.


I recently switched from rolling my own table to SQS.

The main issue is building a job queue dispatcher and runner, which most people aren't familiar with.


This sounds dangerous. Without some good way to prevent workers stepping on each other, that will result in lots of useless queries and failed updates if a lot of jobs get scheduled at the same time.

In the Drupal case, the database returns a single row which is pretty much guaranteed to be the same for all workers in between updates. You really don't want that in the same database your page views hit. At least selecting N rows at a time and then claiming a random item from them would be slightly better.


I find the downvotes weird without any explanation of why the raised issue isn't important.



