I guess "relatively small load" is... relative, but I've written that kind of thing to handle ~70k sustained OLTP requests per second (including persistence to postgres) when load testing locally on my laptop. In any case the same thing applies to external queues: your workers will often be more efficient if you pull chunks of work off the queue and process them together.
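To make the "pull off chunks of work" idea concrete, here's a minimal sketch of a worker draining up to N items from an in-process queue at once. The function name `drain_batch` and the batch size are my own illustration, not anything from the thread:

```python
import queue

def drain_batch(q, max_items=100):
    """Pull up to max_items off the queue in one go so the worker
    can process (and persist) them together instead of one at a time."""
    batch = []
    while len(batch) < max_items:
        try:
            batch.append(q.get_nowait())
        except queue.Empty:
            break
    return batch

# Demo: 250 queued items drain as batches of 100, 100, 50.
q = queue.Queue()
for i in range(250):
    q.put(i)

batches = []
while True:
    b = drain_batch(q, max_items=100)
    if not b:
        break
    batches.append(b)

print([len(b) for b in batches])
```

The efficiency win is that each batch can be persisted in one transaction (e.g. a single multi-row INSERT instead of many single-row INSERTs), which amortizes per-request overhead.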
By "small load" I mean payload size, not rate. If you're getting 70k batch jobs per second, the frames in each job must be tiny and few in number.
What you're doing here is closer to streaming than batch processing. In that case my velocity measurements are more applicable: you want your queues to be empty in general. A batch job runs every hour or so, which is not what you're doing here.
If you run your load test for 5 minutes and your queues end up 50 percent full, then 10 minutes in you'll hit OOM, assuming your load test runs at a constant rate.
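The extrapolation above is simple linear growth; a one-line sketch, with a hypothetical helper name of my own choosing:

```python
def minutes_until_full(test_minutes, fill_fraction):
    """If the queue reached fill_fraction of capacity after test_minutes
    at a constant input rate, it fills completely (and the process OOMs)
    at test_minutes / fill_fraction."""
    return test_minutes / fill_fraction

# 50% full after a 5-minute test -> full (OOM) at the 10-minute mark.
print(minutes_until_full(5, 0.5))
```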
If your queues are mostly empty, then the system can handle the load you gave it and still has room for spikes. It's just math.