
For 1., we went with a per-goroutine bytes.Buffer that was batch-"inserted" every x milliseconds or n messages. We don't care if we lose some messages on a crash. For integrity, some queues are set to 0 ms / 1 msg because we can't afford to lose anything, but when it's OK for messages to be lost, this is great for perf.

For 2. you could probably do something like this:

    BEGIN TRANSACTION;
    -- Select the oldest pending messages
    SELECT id, message FROM queue WHERE status = 'pending' ORDER BY created_at ASC LIMIT 100;
    -- Mark the batch as 'processing', binding the max created_at from the select
    UPDATE queue SET status = 'processing' WHERE status = 'pending' AND created_at <= ?; -- assuming created_at is monotonic
    COMMIT;
Basically, select a batch and then abuse the ordering properties to mark the whole batch in one statement. Then you can dispatch all messages from your select evenly to sender threads. Sender threads signal a buffered channel when they've completed or failed, and the database gets updated accordingly. At startup, you just SELECT WHERE status = 'processing' and recover.

This is a pretty decent translation of how ours works.
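The dispatch/ack half could be sketched in Go like this, with the database access stubbed out (`msg`, `result`, `dispatch`, and the `send` callback are all illustrative names, not the actual code):

```go
package main

import (
	"fmt"
	"sync"
)

type msg struct {
	ID   int
	Body string
}

type result struct {
	ID int
	OK bool
}

// dispatch fans a batch of 'processing' messages out to nWorkers sender
// goroutines. Each sender reports completed/failed on a buffered channel
// so the DB update can happen in one place after the batch drains.
func dispatch(batch []msg, nWorkers int, send func(msg) bool) []result {
	jobs := make(chan msg)
	results := make(chan result, len(batch)) // buffered ack channel

	var wg sync.WaitGroup
	for i := 0; i < nWorkers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for m := range jobs {
				results <- result{ID: m.ID, OK: send(m)}
			}
		}()
	}
	for _, m := range batch {
		jobs <- m
	}
	close(jobs)
	wg.Wait()
	close(results)

	out := make([]result, 0, len(batch))
	for r := range results {
		// Here you'd UPDATE queue SET status = 'done' (or back to
		// 'pending' on failure) for r.ID.
		out = append(out, r)
	}
	return out
}

func main() {
	batch := []msg{{1, "a"}, {2, "b"}, {3, "c"}}
	res := dispatch(batch, 2, func(m msg) bool { return m.ID != 2 })
	fmt.Println(len(res)) // 3
}
```

Because failed messages are flipped back to 'pending' (and crashed-on messages stay 'processing'), the startup SELECT described above picks them up again.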



