I'm just clarifying my interpretation of OP's comment, not necessarily agreeing with it.
Anyway, involving humans (funded by NPM's commercial revenue) doesn't reduce the options to "letting the spam happen and dealing with it after the fact" or "holding all submissions until a human reviews [them]".
If I were trying to solve this problem, I'd be open to a solution that automatically classified submissions as either legitimate or spammy, with an associated confidence level. If the confidence fell below a given threshold, I'd involve a human.
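A minimal sketch of that triage idea, in Python. Everything here is hypothetical: `classify` stands in for whatever real model would do the scoring (its marker-matching logic is a toy placeholder, not an actual spam heuristic), and the `0.9` threshold is an arbitrary example value.

```python
# Hypothetical sketch: act automatically on confident classifications,
# and queue low-confidence ones for human review.

def classify(submission: str) -> tuple[str, float]:
    """Toy stand-in for a real classifier: returns (label, confidence)."""
    spam_markers = ("free crypto", "click here")  # placeholder signals
    hits = sum(marker in submission.lower() for marker in spam_markers)
    if hits == 0:
        return "legitimate", 0.55          # weak signal either way
    return "spam", min(0.5 + 0.25 * hits, 0.99)

def triage(submission: str, threshold: float = 0.9) -> str:
    label, confidence = classify(submission)
    if confidence < threshold:
        return "human-review"              # too uncertain to auto-decide
    return label                           # confident enough to act on

print(triage("a normal package description"))   # low confidence -> human-review
print(triage("click here for free crypto!!!"))  # high confidence -> spam
```

The point of the structure is that the human queue only receives the ambiguous middle, so reviewer cost scales with classifier uncertainty rather than with total submission volume.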