Not all queues serve (soft) real-time requests, and not all queues are bounded (1). If the rate of incoming queue items is predictable but not constant (for example, average load on weekends or at night tends to be lower for many systems), you can often get away with provisioning enough capacity to serve the average load while using the queue as a buffer. This is fine when the work is not time-critical, like sending most types of marketing emails.
You might argue that a queue with many items in it that can still accept more is not really "full", but it is clearly not empty either.
(1) Obviously all queues are bounded by the capacity of the hardware, but since most queueing software can persist to disk and/or lives in the cloud with elastic storage, this is not usually a problem in practice.
The difference is that the requests being served suffer extreme latency. You are serving the same number of requests per unit time, but every single one is X days old.
No, there are use cases where new entries are not ingested via requests; sometimes you know all the work that needs to be done upfront. And sometimes you have a trickle of requests coming in over time, but also periodically need to take everything you have seen earlier and flush it through the queue once more.