A better technique is one where even if the spammers know what your countermeasures are, they still can't spam you.
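One concrete family of countermeasures with that property is proof-of-work (hashcash-style): the scheme can be completely public, and each post still costs the sender CPU time, which is negligible for one comment and expensive at spam volume. A minimal sketch in Python (the difficulty value is an arbitrary illustration, not a recommendation):

    import hashlib
    import itertools

    DIFFICULTY = 20  # leading zero bits required; illustrative only

    def mint(message: str) -> int:
        # Search for a nonce whose SHA-256 hash with the message
        # has DIFFICULTY leading zero bits. Expensive on purpose.
        for nonce in itertools.count():
            digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
            if int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0:
                return nonce

    def verify(message: str, nonce: int) -> bool:
        # One hash to check: the server's cost stays trivial even
        # though the whole mechanism is known to the spammer.
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        return int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0

Knowing the mechanism doesn't help the spammer here: the only way through is to pay the hash cost per message.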



Good luck! :)

Unless you're taking a blood sample from every user, you are going to have spam that you can't catch.


If a robot makes a meaningful contribution to your website / service, do you really care if they are a robot?

If you could tell which comments are insightful / relevant / interesting / unique it wouldn't matter which ones were produced by humans and which ones by algorithms.

Likewise, humans often create spam by hand: daft comments / contributions that hurt your site even though they come from legitimate humans.


The assumption with any kind of collaborative filtering is that the opinions of many people produce a better result when combined than the opinions of one person. If you are allowing machines to vote then you're letting one person have an arbitrary number of votes, which totally breaks the model.
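A toy illustration of the break (all numbers invented): if the aggregate is an average over votes rather than over distinct people, one operator with enough accounts owns the result.

    import statistics

    # 50 independent humans rate an item around 2/5.
    human_votes = [2, 3, 2, 2, 3] * 10

    # One spammer's 200 bot accounts all vote 5.
    bot_votes = [5] * 200

    print(statistics.mean(human_votes))              # 2.4  <- the honest consensus
    print(statistics.mean(human_votes + bot_votes))  # 4.48 <- one person's opinion

The "wisdom of crowds" assumption only holds while one vote means one independent judgment.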


How would blood samples even help? We're talking about excluding content, not people.

It's a certainty that the spam you can't define is the spam you can't catch. The "I know it when I see it" test doesn't scale.


That's the only way you're going to be able to ensure that everyone interacting with your site is a human. (Preferably, take a blood sample on every interaction, just in case.)



