Sentiment analysis tools haven't yet proven accurate enough to be useful for HN moderation; historically they've misclassified too many things because they have no access to intent. It does seem like LLMs have a chance at doing this better, though, and if anyone wanted to work on that, I'd certainly be interested in what they find.