I don't think this holds. You can enforce filtering in the encoding step (i.e., be strict about what you output) but always decode, even if the input contains a blocked word. That way you also stay backwards compatible when the blocklist is updated. In short, the old maxim: be strict about your outputs and lenient about your inputs.
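To make the idea concrete, here's a minimal self-contained sketch of that principle. This is a toy base-26 codec with a hypothetical blocklist, not the actual sqids algorithm: the encoder is strict (it tries alternative encodings until none contains a blocked word), while the decoder is lenient (it never consults the blocklist, so IDs minted under an older list still decode).

```python
ALPHABET = "abcdefghijklmnopqrstuvwxyz"
BLOCKLIST = {"bad"}   # hypothetical blocked word
VARIANTS = 16         # alternative encodings available per number

def to_base26(n: int) -> str:
    digits = ""
    while True:
        digits = ALPHABET[n % 26] + digits
        n //= 26
        if n == 0:
            return digits

def from_base26(s: str) -> int:
    n = 0
    for ch in s:
        n = n * 26 + ALPHABET.index(ch)
    return n

def encode(n: int) -> str:
    # Strict output: skip any candidate that contains a blocked word.
    for variant in range(VARIANTS):
        candidate = to_base26(n * VARIANTS + variant)
        if not any(word in candidate for word in BLOCKLIST):
            return candidate
    raise ValueError("no clean encoding found")

def decode(id_str: str) -> int:
    # Lenient input: decode unconditionally; the blocklist is never consulted,
    # so IDs generated under an older (smaller) blocklist keep working.
    return from_base26(id_str) // VARIANTS
```

The point of the `VARIANTS` trick is that every number has several spellings, so the encoder can always route around the blocklist without breaking the decoder.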
From their FAQ: "The best way to ensure your IDs stay consistent throughout future updates is to provide a custom blocklist, even if it is identical to the current default blocklist."
The *encoding* changes. The decoding stays consistent:
> Decoding IDs will usually produce some kind of numeric output, but that doesn't necessarily mean that the ID is canonical. To check that the ID is valid, you can re-encode decoded numbers and check that the ID matches.
> The reason this is not done automatically is that if the default blocklist changes in the future, we don't want to automatically invalidate the ID that has been generated in the past and might now be matching a new blocklist word.
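The re-encode check the docs suggest can be sketched in a few lines. This is a deliberately trivial stand-in codec (lowercase hex, not sqids itself) just to show the shape of the validity test: decoding is many-to-one, so the only way to know an ID is canonical is to re-encode and compare.

```python
def encode(n: int) -> str:
    return format(n, "x")     # canonical form: lowercase hex

def decode(id_str: str) -> int:
    return int(id_str, 16)    # lenient: accepts "FF" as well as "ff"

def is_canonical(id_str: str) -> bool:
    # Re-encode the decoded number and compare, as the docs suggest.
    return encode(decode(id_str)) == id_str

is_canonical("ff")  # True
is_canonical("FF")  # False: decodes fine, but not what encode would emit
```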
In that case it sounds like a shortcoming on their part. There's no fundamental reason for that limitation. I understand that omitting it makes the implementation easier, but in my opinion being agnostic to blocklist changes would be a much better value proposition.