"Instead, a received notification is immediately sent into buffered channel, which means it’s discarded if the channel is full"
Shouldn't the channel rather block than discard if full?
Anyway, nice and relevant article for me as I've recently added a few listeners to my app. I chose the naive approach since I only have two topics and a surplus of connections.
Author here. The Go channel send behavior could certainly be altered depending on the particular semantics of the application, but the reason I chose a non-blocking send on a buffered channel is so that no particular subcomponent can slow down the distribution of notifications for everybody.
> Shouldn't the channel rather block than discard if full?
In Go, an unbuffered channel (one initialized without a size; see [1]) blocks the sender until a receiver is ready. You could use one and have the sender wrap the send in a `select/default` so it discards when the receiver isn't ready, but that leaves very little margin for error on the receiver's side. If it's still processing message 1 when message 2 comes in and the notifier tries to send it, message 2 is gone.
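To make that concrete, here's a toy sketch (the channel and message names are mine, not from the article): a bare send on an unbuffered channel blocks until a receiver is ready, while wrapping it in `select/default` drops the message instead:

```go
package main

import "fmt"

func main() {
	ch := make(chan string) // unbuffered: a bare `ch <- ...` would block until someone receives

	select {
	case ch <- "message 2":
		fmt.Println("delivered")
	default:
		// nobody is ready to receive (still busy with message 1), so message 2 is gone
		fmt.Println("dropped")
	}
}
```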
IMO, it's better to use a buffered channel with some leeway in terms of size, and then write receivers in such a way that they clear incoming messages as soon as possible, e.g. if messages are expected to take time to process, the receiver spins up a goroutine to handle them, or keeps an internal queue of its own where they're placed, so that new messages from the notifier never get dropped.
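Something like this is the receiver pattern I mean (a rough sketch; the buffer size, message type, and `process` helper are made up for illustration):

```go
package main

import (
	"fmt"
	"time"
)

func process(msg string) {
	time.Sleep(50 * time.Millisecond) // simulate slow work
	fmt.Println("processed:", msg)
}

func main() {
	incoming := make(chan string, 64) // buffered with some leeway

	// The loop draining the channel never does slow work itself, so the
	// notifier's buffered send always finds room.
	go func() {
		for msg := range incoming {
			go process(msg) // hand off the slow part; keep draining immediately
		}
	}()

	for i := 0; i < 5; i++ {
		incoming <- fmt.Sprintf("notification %d", i)
	}
	time.Sleep(200 * time.Millisecond) // let the workers finish in this toy example
}
```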
The reason you'd use a non-blocking send is to make sure that one slow consumer can't slow down the entire system.
Imagine a scaled-out version of the notifier in which it's listening on hundreds of topics and receiving thousands of notifications. Each notification is received one by one using something like Pgx's `ListenForNotification`, and then distributed via channel to the subscriptions that were listening for it.
With a blocking send and no `default`, one slow consumer taking too long to receive and process its notifications would back up all the other notifications the notifier's supposed to send, so a single bad actor would degrade the time-to-receive for every listening component.
With buffered channels, a poorly written consumer could still drop messages for itself, which isn't optimal (it should be fixed), but all other consumers will still receive theirs promptly. Overall preferable to the alternative.
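For anyone curious what that fan-out looks like, here's a rough sketch (not the article's actual code; the `subscription` struct and topic names are invented): each subscriber gets its own buffered channel, and the non-blocking send means a full buffer only hurts that one subscriber:

```go
package main

import "fmt"

type subscription struct {
	topic string
	ch    chan string
}

// distribute fans one notification out to every subscription on the topic.
// A non-blocking send means a slow subscriber only drops its own copy.
func distribute(subs []subscription, topic, payload string) {
	for _, sub := range subs {
		if sub.topic != topic {
			continue
		}
		select {
		case sub.ch <- payload:
			// delivered promptly
		default:
			fmt.Println("dropped notification for a slow subscriber on", topic)
		}
	}
}

func main() {
	fast := subscription{topic: "orders", ch: make(chan string, 8)}
	slow := subscription{topic: "orders", ch: make(chan string)} // never drained

	distribute([]subscription{fast, slow}, "orders", "order created")
	fmt.Println("fast subscriber got:", <-fast.ch)
}
```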