
Yes, but it's only a matter of time until it gets gamed. Google could index it to find less spammy content for a while.

Ultimately you need to know who is who and who is posting what. Is that what their internet ID lobbying is about?




That's the real problem. The stakes are huge, so eventually most systems get gamed by spammers.

Maybe they could have meta-moderators (crowd-sourced?) like Slashdot has (had? haven't been there in a while). People who review what people have reported as spam, and only the top 5% most trusted and consistent accounts would be used to actually influence live rankings.

For a spammer to get there, he would have to do lots of good work for a while, and risk losing that top spot as soon as he starts trying to do spammy stuff.

Maybe it would work... Any obvious way to game such a system?
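
Roughly what I have in mind, as a sketch (all names, thresholds, and penalties here are made up for illustration, not any real Slashdot or Google mechanism):

  # Hypothetical sketch of the "top 5% trusted meta-moderators" idea.
  from dataclasses import dataclass

  @dataclass
  class Moderator:
      name: str
      agreed: int = 0      # spam reports later confirmed by meta-moderators
      disagreed: int = 0   # reports that were overturned

      @property
      def trust(self) -> float:
          total = self.agreed + self.disagreed
          return self.agreed / total if total else 0.0

  def trusted_pool(mods: list[Moderator], top_fraction: float = 0.05) -> list[Moderator]:
      """Only the top ~5% most consistent accounts influence live rankings."""
      ranked = sorted(mods, key=lambda m: (m.trust, m.agreed), reverse=True)
      cutoff = max(1, int(len(ranked) * top_fraction))
      return ranked[:cutoff]

  def record_metamod_verdict(mod: Moderator, report_was_correct: bool) -> None:
      """A bad verdict costs much more than a good one earns, so a spammer
      who 'turns' after building trust loses the top spot quickly."""
      if report_was_correct:
          mod.agreed += 1
      else:
          mod.disagreed += 5  # trust is slow to earn, fast to lose

The asymmetric penalty is the part doing the work: it makes the "do good work, then go rogue" strategy expensive.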


/.'s system works because the users who want to have a discussion outnumber the people who want to shout obscenities in all caps. That is only true because there is no money in shouting obscenities in all caps on /., but there is plenty of money in upvoting spam on Google. If they switched to crowdsourced curation today, you'd have a spammer-to-user ratio of 10:1 before February 1st.


Very good points. Thanks for the reply!


I think anything will be gamed until gaming it costs more than it earns. There is plenty of cheap labour right now willing to game the system. I wonder if there is a correlation between the decline in search quality and the growth of internet access in countries where people are willing to work for very low wages (I could not find precise enough data on Google about how internet usage has evolved over the last few years in third-world countries).


That's kind of the point of my system. It takes a lot of work to become "trusted", and it's trivially easy for a trusted meta-mod to remove that status from you.

If Wikipedia can work, this might just work. But Google would no doubt prefer an algorithmic way to solve the problem. Maybe some form of narrow AI is close enough in the pipeline...


For one thing, it is difficult to scale trust: you would get people who keep reporting everything as spam until they overload the "spam" committee you are suggesting (if I understand you correctly). Also, many people are not very good at telling spam from non-spam (is efreedom spam or not? What if you are not a programmer?).

I also suspect that just taking the top 5% will produce a lot of false rejections, which is a real issue.

But when I say there is a cost/revenue issue, it goes both ways: everybody thinks about raising the cost of spamming, but decreasing the income from spamming would fight spam just as well. Decreasing that income without decreasing Google's revenue may be challenging.


It's harder to game delicious because you can narrow your search to sites that have been bookmarked by people you have chosen to be in your network. You are essentially searching across curated lists from several trusted curators.
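
Something like this, conceptually (hypothetical data structures, not delicious's actual API):

  # Restrict results to pages bookmarked by curators in my own network.
  def network_search(query_results: list[str],
                     my_network: set[str],
                     bookmarks: dict[str, set[str]]) -> list[str]:
      """Keep only URLs that at least one trusted curator has bookmarked."""
      curated: set[str] = set()
      for user in my_network:
          curated |= bookmarks.get(user, set())
      return [url for url in query_results if url in curated]

  # Example: only URLs bookmarked by 'alice' or 'bob' survive.
  results = network_search(
      ["http://good.example/post", "http://spam.example/page"],
      my_network={"alice", "bob"},
      bookmarks={"alice": {"http://good.example/post"}, "bob": set()},
  )

A spammer can't buy their way in; they would have to get into each user's hand-picked network first.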





