
Presumably it's okay that this loses work if your task runner has an error?



If you read my guide, you’ll see that I embed it in a transaction that doesn’t COMMIT until the companion code is complete :)

For example, I run the above query to grab a queued email, send it using Mailgun, then COMMIT. Nothing is changed in the DB unless the email is sent.
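For illustration, here's a minimal sketch of that pattern in Python with psycopg2. The email_queue table, its columns, and the send_email function are assumptions for the example, not the guide's actual schema:

  import psycopg2

  def process_one_email(conn, send_email):
      # psycopg2: `with conn` commits on normal exit and rolls back if
      # the body raises, so a failed send leaves the row queued for retry.
      with conn:
          with conn.cursor() as cur:
              # SKIP LOCKED lets concurrent workers claim different rows
              # instead of blocking on each other.
              cur.execute(
                  """SELECT id, recipient, body
                     FROM email_queue
                     ORDER BY id
                     FOR UPDATE SKIP LOCKED
                     LIMIT 1"""
              )
              row = cur.fetchone()
              if row is None:
                  return False  # queue is empty
              task_id, recipient, body = row
              send_email(recipient, body)  # external call, e.g. Mailgun
              # Reached only if the send succeeded; the COMMIT on leaving
              # `with conn` makes the deletion permanent.
              cur.execute("DELETE FROM email_queue WHERE id = %s", (task_id,))
              return True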


Holding a transaction open for the duration of a request to an external service makes me nervous. I've seen similar code lock up the database and bring down production. Are you using timeouts and circuit breakers to control the length of the transactions?


Yes, you absolutely need to set a reasonable idle transaction timeout to avoid a disaster (bugs in the code happen) - this can also be done globally in the database settings.


Long running transactions can lead to an accumulation of dead tuples: https://brandur.org/postgres-queues


This article was written in 2015, a year before the idle_in_transaction_session_timeout parameter was added (in Postgres 9.6) - which is unfortunately still disabled by default, but that's the easiest way to make sure no transaction sits idle for too long.
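For example, a sketch of setting it per session from Python (the DSN is a placeholder), with the cluster-wide alternative shown in comments:

  import psycopg2

  conn = psycopg2.connect("dbname=app")  # placeholder DSN
  conn.autocommit = True
  with conn.cursor() as cur:
      # Terminate this session if it sits idle inside an open
      # transaction for more than 30 seconds.
      cur.execute("SET idle_in_transaction_session_timeout = '30s'")
      # Cluster-wide default (requires superuser), then reload:
      #   ALTER SYSTEM SET idle_in_transaction_session_timeout = '30s';
      #   SELECT pg_reload_conf();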


This is from 2015, does it still hold true in 2023?


Yes, in that Postgres still uses oldest-to-newest tuple ordering, and its MVCC hasn’t changed, so you can still cause the issues listed.

Careful monitoring and tuning of the parameters mentioned in the sibling comment can help mitigate this, though (a rough monitoring sketch follows below).

Ultimately, at scale, no, an RDBMS shouldn't be a queue. But most systems have a long way to go before they hit that point.
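For instance, a rough way to spot long-open transactions (the ones that hold back vacuum), using standard pg_stat_activity columns; the DSN is again a placeholder:

  import psycopg2

  conn = psycopg2.connect("dbname=app")  # placeholder DSN
  with conn, conn.cursor() as cur:
      # Oldest open transactions first.
      cur.execute(
          """SELECT pid, now() - xact_start AS open_for, state, query
             FROM pg_stat_activity
             WHERE xact_start IS NOT NULL
             ORDER BY open_for DESC
             LIMIT 5"""
      )
      for pid, open_for, state, query in cur.fetchall():
          print(pid, open_for, state, query[:80])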


Gotcha, apologies for responding without reading!


From the linked article:

> The task row will not be deleted if sendEmail fails. The PG transaction will be rolled back. The row and sendEmail will be retried.



