> It's annoying but also understandable how often people just dump stuff into an unbounded queue and punt on making sure things work until the system is falling down.
It's annoying if it is done by the infrastructure team (I mean, they should know the details of the queue they are managing). It's understandable if it is done by product developers (they are more into "inheritance vs composition" kind of things).
I've seen plenty of product engineers not understand this fundamental aspect of queues and just add them because it "felt right" or "for scale" or something silly...
There's a lot of "let's add a queue to deal with scaling issues" thinking that doesn't really work in practice. Like 95% of the time the system works better without one.
I think this is insightful, thanks. We did this recently, and combined with the OP article, I'm really reminded of how fundamental and ubiquitous queues are. They aren't always obvious, or even intentional. I generally don't set out to design a queue; it just sort of happens while I solve a problem.
So yes, adding an MQ specifically just embeds another, explicit queue inside your original, implicit one. If your scaling problem is an unbounded, maybe unintentional queue, then the MQ can provide just the throttling you need to keep from overloading your consumer end.
Yep, the system just got more complicated, but now it's also more manageable, because we hacked a governor into our unregulated queue.
As discussed elsewhere, you still have to deal with the back-pressure on the producers' side.
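To make that concrete, here's a minimal sketch of the "governor" idea using Python's stdlib (no real MQ involved; the blocking put() stands in for whatever back-pressure mechanism your broker exposes):

```python
# A bounded queue throttles producers by blocking them (back-pressure)
# instead of growing without limit.
import queue
import threading
import time

work = queue.Queue(maxsize=10)  # the bound is the whole point

def producer():
    for i in range(100):
        # put() blocks once the queue is full. This is the producer-side
        # back-pressure: the producer slows down to the consumer's pace
        # instead of piling up work.
        work.put(i)

def consumer():
    while True:
        item = work.get()
        time.sleep(0.01)  # simulate slow processing
        work.task_done()

threading.Thread(target=consumer, daemon=True).start()
producer()
work.join()  # wait until every enqueued item has been processed
```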
Aren’t queues simply a requirement to do asynchronous processing? And MQs are a way to do it while keeping your application stateless, and with features to make it easier to recover from failure (e.g. persistence, DLQs).
I love discovering simpler solutions to problems! Could you explain this a bit more - how could you design things that seemingly need a queue, without a queue?
Abstractly, everything has a queue size of 1. Synchronous vs asynchronous just refers to what the producer does while its message is being processed. In synchronous programming, the producer blocks and waits until their message is processed to proceed. In async programming, the producer does other things and optionally receives a notification from the consumer once the task is complete.
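A toy illustration of the distinction in plain Python (nothing MQ-specific): the work is identical, only what the producer does while its message is processed changes.

```python
# Same task, two producer behaviors: block and wait (sync) vs. hand off
# and continue, with an optional completion notification (async).
from concurrent.futures import ThreadPoolExecutor
import time

def process(msg):
    time.sleep(0.1)  # pretend this is the consumer doing work
    return f"done: {msg}"

# Synchronous: the producer blocks until its message is processed.
print(process("hello"))

# Asynchronous: the producer hands the message off, does other things,
# and is notified via the future once the task completes.
with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(process, "hello")
    # ... producer does other work here ...
    future.add_done_callback(lambda f: print(f.result()))
```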
How does this apply to not needing queues? I suppose you can rely on your language runtime to juggle the various jobs and concurrent threads (analogous to workers), but then you lose a lot of the benefits of having an explicit MQ system. If your system goes down, for example, you’ll lose all the in-progress work.
Actually, is that the point I was missing? That the benefits of an explicit MQ system are not always required, so it can be simpler to just rely on the async primitives of your language?
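For illustration, here's roughly what leaning on the language's async primitives instead of an external MQ might look like (a Python asyncio sketch; note the trade-off above: all in-flight work lives in process memory and dies with the process):

```python
# An in-process asyncio.Queue instead of an external MQ. Simpler, no
# broker to run, but nothing is persisted across a crash.
import asyncio

async def worker(q: asyncio.Queue) -> None:
    while True:
        job = await q.get()
        await asyncio.sleep(0.01)  # stand-in for real processing
        q.task_done()

async def main() -> None:
    q = asyncio.Queue(maxsize=10)  # bounded, so producers await when full
    worker_task = asyncio.create_task(worker(q))
    for i in range(50):
        await q.put(i)  # back-pressure via await, without blocking a thread
    await q.join()      # wait until everything enqueued is processed

asyncio.run(main())
```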
You have a problem so you implement a queue. Now you have two problems.
It succinctly illustrates the problem, because you should build your application to account for the queue being down. So you still have your original problem: what do I do when I can't process everything I need to?
> you should build your application to account for the queue being down
Maybe, but the whole idea is that the queuing infrastructure is intrinsically more reliable than the rest of the system. So while you may still design for it, you can do so with different severity thresholds for the consequences.
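As a sketch of what "designing for it" might mean on the producer side (broker.publish is a hypothetical client call here, not any particular MQ's API):

```python
# Treat enqueue failure as a first-class outcome: buffer a little,
# then shed load. The original problem hasn't gone away; you've just
# chosen where and how severely it bites.
import logging

def submit(job, broker, local_buffer):
    try:
        broker.publish(job)  # hypothetical MQ client call
    except ConnectionError:
        if len(local_buffer) < 1000:
            local_buffer.append(job)  # small local fallback buffer
        else:
            logging.error("queue down and buffer full; rejecting job")
            raise
```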
I've never met an application developer who was unaware of what a "queue" was or the problems they purport to solve. Pretty sure stacks/queues are among the first data structures that students create, which inevitably leads to the question of "what to do when full?", i.e. circular queues, double-ended queues, priority queues. I know that the enterprise-grade queuing systems we're talking about are a lot more involved than that, but to suggest that developers don't grok queues is pretty disingenuous. And the implications of rate in > rate out are pretty obvious to anyone who's ever had a clogged sink.
I didn't mean queues in general, my bad. I meant, as you pointed out, enterprise-grade queuing systems: there's a lot going on there that isn't exactly what one learns in a Data Structures 101 course.
> And the implications of rate in > rate out are pretty obvious to anyone who's ever had a clogged sink.
Well, many developers I know are in their early 20s. I'm not sure they ever had to deal with clogged sinks :)