I keep thinking of a hierarchical invite model for these sorts of problems. I'd be very curious if someone could give a second opinion on this idea.
The mechanism:
Everyone has to be invited by someone, so it all traces back to the creator. The creator knows they themselves are legit, but let's say someone online asked for an invite and bad inputs keep coming from somewhere down that branch of the invite tree. Either the person the creator invited is a spammer, or someone they've invited in turn is the spammer. All accounts leading up to them can be progressively killed (and their inputs nullified), starting with the ones actually causing trouble → if it keeps happening in that branch then kill one layer up, and so on.
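To make the pruning concrete, here's a minimal sketch in Python of what the tree and the escalation could look like. All the names (Account, ban_subtree, nullify_inputs, escalate) are hypothetical, and the escalation rule is just one possible policy, not a finished design:

    # Minimal sketch of the invite tree (hypothetical names throughout;
    # no persistence, auth, or rate limiting).

    class Account:
        def __init__(self, user_id, inviter=None):
            self.user_id = user_id      # random pseudonymous ID
            self.inviter = inviter      # parent in the invite tree
            self.invitees = []          # children in the invite tree
            self.banned = False
            self.strikes = 0            # bad inputs recorded so far
            if inviter is not None:
                inviter.invitees.append(self)

    def nullify_inputs(account):
        # Placeholder: discard every input this account ever submitted.
        pass

    def ban_subtree(account):
        # Kill an account and everyone it invited, recursively,
        # and nullify their inputs.
        account.banned = True
        nullify_inputs(account)
        for child in account.invitees:
            ban_subtree(child)

    def escalate(account, layers_up=0):
        # Start at the troublemaker; on repeated trouble in the same
        # branch, callers pass a larger layers_up to kill one layer
        # closer to the root (but never the root/creator itself).
        target = account
        for _ in range(layers_up):
            if target.inviter is None:
                break
            target = target.inviter
        ban_subtree(target)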
Incentives:
People risk losing their own account when inviting someone they don't trust to be a good netizen. Maybe there needs to be an incentive to care about your account in the first place, or maybe (looking at Wikipedia or OpenStreetMap, or HN with its voting system and the homepage meaning a lot of attention for your page) a majority of people are simply honest and happy to contribute, and that suffices.
Problem it solves:
Wouldn't such a hierarchical invite system work around the online identification problem?
If you DNA-checked everyone and banned people who abused the system in the last decade, you'd also not have any spam online, but that's way too invasive (besides not being legal and prohibitively expensive, it's also not ethical). However, a pseudonymous invite tree (all that is known about you is a random user ID) seems to me like it would have similar properties. It might require banning the same person perhaps a hundred times until they run out of people willing to give them invites, but wouldn't it eventually distill the honest people from the population? (Which is probably almost everyone if there is no gain from systematic cheating and there's social pressure not to ask for multiple invites, because account holders know that means you were either messing with the system they enjoy using or invited someone onto it who did that.)
(Implementation details: One bad input isn't an instant ban: people misclick or misunderstand interfaces, but eventually it gets to a point where, if they can't click the right buttons, there's also no point having them be moderators of the search engine (or whatever this is used for), and so their account is removed. If multiple removals happen in a branch that's deep and recent, remove more than one layer at a time to get rid of malicious sockpuppet layers. The tree's maximum depth can be limited to something on the order of 50: it doesn't take many steps to find a chain of relationships linking two random people on the planet, so a fairly low depth is enough for the whole world. People should be told on the invitation page how many bad apples were removed in each layer below them, so if they're one bad apple away from having their own account pruned, they know to only invite people they're very sure about. One problem I see with the system is that it reveals social graphs, which not everyone is happy about. If that means being able to kill virtually all spam, content farms, etc., maybe it's worth it, but of course more research is needed beyond an initial proposal. A sketch of these policy details follows.)
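Continuing the sketch above (it assumes Account and ban_subtree from the earlier snippet), the parenthetical's policy details could look roughly like this; STRIKE_LIMIT and MAX_DEPTH are illustrative numbers, not tuned values:

    # Builds on Account/ban_subtree from the earlier sketch;
    # thresholds are assumptions for illustration only.

    STRIKE_LIMIT = 5   # misclicks tolerated before removal
    MAX_DEPTH = 50     # short acquaintance chains cover the whole world

    def depth(account):
        # Distance from this account up to the creator/root.
        d = 0
        while account.inviter is not None:
            account = account.inviter
            d += 1
        return d

    def can_invite(account):
        # Enforce the maximum tree depth at invitation time.
        return not account.banned and depth(account) < MAX_DEPTH

    def record_bad_input(account):
        # One bad input isn't an instant ban; repeated ones are.
        account.strikes += 1
        if account.strikes >= STRIKE_LIMIT:
            ban_subtree(account)

    def removals_per_layer(account):
        # What the invitation page would show: banned accounts counted
        # per layer of the subtree below this account.
        counts, layer = [], account.invitees
        while layer:
            counts.append(sum(1 for a in layer if a.banned))
            layer = [c for a in layer for c in a.invitees]
        return counts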
This is the traditional way societies enforce totalitarianism. You're not allowed to sleep outside, because your landlord will punish you. He has to, because if he doesn't, the city will punish him. It has to, because if it doesn't, the state will punish it.
Maybe totalitarianism is what you want for search engine moderation. But it does feel a bit messed up when you frame it like I just did. Another thing that happens in these power delegation hierarchies is that people undermine each other in order to move up the hierarchy.
Isn't that what's happening to websites today? Everyone is jostling for a higher position in the rankings to stay afloat on advertising returns, and dancing to the advertisers' tune or they get kicked out of the system.
I'm also not sure I agree that a system with this much leeway (anyone can invite you onto the system, and it asks nothing about who you are) is comparable to a system where your landlord gets to dictate how you must behave in frivolous detail. All it asks of you is to not promote spam. Furthermore, physically moving to another landlord, who in such a scenario would want a reference, is a whole other ordeal compared to going online and asking any one of the people you're close to for a code that takes them 5 seconds to generate.