You're not wrong, but every solution involves compromises. More broadly, though, I would argue that almost nobody actually needs queues.
Sure, there are cases where you actually need strict ordering, but they are relatively few, in my opinion, involving things like transaction processing or other areas where the order of operations changes the effect of the operations and must be strictly sequential.
Ordering itself brings significant technical challenges. For example: With a queue, you can only have strict ordering as long as you only process one item at a time, with no concurrency. Ordering also complicates handling of failures and retrying, because it means every newer item has to wait until the failed item has been dealt with; if you use things like dead letter queues, order is violated.
In almost all use cases -- anything from image processing to newsletter delivery -- a better system is to have an unordered list of items that anyone can take items out of, in any order. Then you throw as many workers as you like at this system and design it so that there's minimal locking needed to prevent workers from stepping on each other's work.
There are queue systems that have a purpose in a stack, specifically pub/sub brokers, which can be used to schedule work along several dimensions. You use the pub/sub system to signal work to be done, but each payload just refers to some state that's stored elsewhere.
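The unordered-pool-plus-signaling pattern described above can be sketched in a few lines. This is a hypothetical illustration, not any particular broker's API: the "queue" carries only task IDs as signals, the real task state lives in a separate store, and workers claim items in whatever order the signals happen to arrive.

```python
import threading
import queue

# Hypothetical sketch: the signal channel carries only task IDs; the actual
# state lives elsewhere (here, a dict). Ordering is irrelevant to correctness.
state = {i: {"status": "pending", "payload": f"photo-{i}"} for i in range(20)}
state_lock = threading.Lock()
signals = queue.SimpleQueue()

def worker():
    while True:
        task_id = signals.get()
        if task_id is None:  # sentinel: no more work
            return
        with state_lock:  # the only locking needed: claiming the item
            if state[task_id]["status"] != "pending":
                continue  # another worker already claimed it
            state[task_id]["status"] = "in_progress"
        # ... do the actual work outside the lock ...
        with state_lock:
            state[task_id]["status"] = "done"

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for task_id in state:
    signals.put(task_id)  # signal work; payload is just a reference
for _ in threads:
    signals.put(None)
for t in threads:
    t.join()

print(all(v["status"] == "done" for v in state.values()))  # True
```

The design point is that contention is limited to the brief claim step; the expensive work happens with no lock held.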
I think you're missing something important here by applying the worker pool simplification to the idea. Order is important in that case as well. It enforces fair scheduling of the work, and the metrics from the queue can easily describe the overall latency of the work being done, as well as its capacity and utilisation. The latter can be used to scale the consumers of the queue and/or alert when things aren't working properly.
If you throw a proper message broker like RabbitMQ into the mix, then you also get message acknowledgement, which is important if the work must actually get done. A failed consumer will result in the work being rescheduled on another consumer transparently. And of course there are metrics you can watch to get a heads-up when your consumers are not working.
Ergo, don't simplify something too soon. Even reliable delivery of newsletters is rather important; otherwise, why would all the tracking pixels exist? (They are a form of message acknowledgement.)
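The acknowledgement semantics being described can be shown with a toy in-memory broker. To be clear, this is not RabbitMQ's actual API, just an illustration of the contract: a delivered-but-unacknowledged message is redelivered after a visibility timeout, so a crashed consumer doesn't lose the work.

```python
import time

# Toy broker illustrating ack + redelivery semantics (not a real broker API).
class AckBroker:
    def __init__(self, visibility_timeout=0.1):
        self.pending = []      # messages waiting for delivery
        self.in_flight = {}    # delivery tag -> (message, ack deadline)
        self.timeout = visibility_timeout
        self.next_tag = 0

    def publish(self, msg):
        self.pending.append(msg)

    def deliver(self):
        now = time.monotonic()
        # Requeue anything whose consumer never acked in time.
        for tag, (msg, deadline) in list(self.in_flight.items()):
            if now >= deadline:
                del self.in_flight[tag]
                self.pending.append(msg)
        if not self.pending:
            return None
        msg = self.pending.pop(0)
        tag = self.next_tag
        self.next_tag += 1
        self.in_flight[tag] = (msg, now + self.timeout)
        return tag, msg

broker = AckBroker()
broker.publish("send newsletter #42")

tag, msg = broker.deliver()   # first consumer takes the message...
# ...and crashes before acking. After the timeout, it is redelivered:
time.sleep(0.15)
tag, msg = broker.deliver()
del broker.in_flight[tag]     # second consumer finishes and acks
print(msg)                    # send newsletter #42
```

Real brokers implement the same idea with per-channel acknowledgements (RabbitMQ) or visibility timeouts (SQS).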
If latency is important, I would argue that ordering doesn't help you.
That is, if you have the computational resources to service your entire queue within the required latency, then ordering doesn't matter, because workers could simply claim the first available random task, and it doesn't matter who finishes first or last. If you don't have the resources, then ordering still doesn't matter, because you are blowing your latency budget anyway.
For example, say users are uploading photos which go into a queue for scaling, cropping, etc. Let's say 10 users per second are uploading a photo. In the best-case scenario, the queue is empty, so all users should see their photo fully uploaded quickly. Worst-case scenario, your queue has a million tasks pending, which benefits a handful of early users who see their photos completed, whereas most users will simply wait forever. Prioritizing early users (i.e. queue order) doesn't benefit anyone when everyone will be miserable, so it's an arbitrary choice.
The ordering imposed by queues can create an illusion of fairness when task execution time (i.e. end-to-end latency) is not taken into account. If one user uploads a 1 gigapixel photo, it's most likely going to take up more processing time than other users, and again, there's no reason to prioritize that user over any other user just because their request arrived earlier.
If you have random-order delivery, you can still measure latency, capacity, and utilization, as well as have retries and acking, and utilization-based scaling.
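A minimal sketch of that claim: if every item carries its enqueue timestamp, end-to-end latency percentiles (and from them, capacity and utilization figures) can be computed regardless of the order in which workers claim items. The shuffle below stands in for random-order delivery.

```python
import random
import time

# Attach an enqueue timestamp to each item; delivery order is then irrelevant
# to measuring end-to-end latency.
items = [{"id": i, "enqueued_at": time.monotonic()} for i in range(100)]
random.shuffle(items)  # workers claim items in arbitrary order

latencies = []
for item in items:
    # ... the worker would do the actual job here ...
    latencies.append(time.monotonic() - item["enqueued_at"])

latencies.sort()
p50 = latencies[len(latencies) // 2]
p99 = latencies[int(len(latencies) * 0.99)]
print(f"p50={p50:.6f}s p99={p99:.6f}s")
```

The same timestamps also drive utilization-based scaling: if the latency percentiles climb, add workers.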
I think you'd still want loose ordering even if you didn't care about strict ordering. If you always have 50 items in the queue, you could be blowing the latency budget on random items because they keep getting passed over, whereas with some form of loose chronological order you could still be completing everything within a bounded amount of time.
SQS without FIFO works about like this AFAIK.
I agree with the gist of everything you are saying, though, particularly the point about signaling. As soon as you want to start keeping track of a process, that state is going to be stored somewhere outside the queues.
I use queues for retries, buffering, and parallelization. It's pretty easy to write most programs to support at-least-once delivery and to run multiple queue workers that you auto-scale.
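The retry pattern mentioned here can be sketched as follows (single-threaded for brevity; a real deployment would run many such workers). The handler and its failure mode are hypothetical; the point is that a failed task is simply requeued with its attempt count bumped, and tasks are assumed idempotent so an occasional duplicate run under at-least-once delivery is harmless.

```python
import queue

MAX_ATTEMPTS = 3
tasks = queue.SimpleQueue()
done, dead = [], []

def flaky_handler(task):
    # Hypothetical handler: fails transiently on the first attempt for even IDs.
    if task["id"] % 2 == 0 and task["attempt"] == 1:
        raise RuntimeError("transient failure")
    return task["id"]

for i in range(6):
    tasks.put({"id": i, "attempt": 1})

while not tasks.empty():
    task = tasks.get()
    try:
        done.append(flaky_handler(task))
    except RuntimeError:
        task["attempt"] += 1
        if task["attempt"] > MAX_ATTEMPTS:
            dead.append(task)  # would go to a dead-letter store for inspection
        else:
            tasks.put(task)    # requeue for retry; order doesn't matter

print(sorted(done))  # [0, 1, 2, 3, 4, 5]
```

The queue doubles as a buffer: bursts of failures just grow the backlog rather than dropping work.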