Web-of-trust-based spam filtering takes some effort to use, but it is a workable solution.



WoT is pointless when people can manufacture identities that trust each other and build up a fake trust score for an ID used to spam or whatever.


WoT can't be effectively spammed, unless those spam accounts are trusted by people you trust. The spam accounts can vouch for each other all they want (a la Twitter), but you control whose trust you value.


I was thinking about this the other day. From an information perspective, it shouldn't be impossible to design the system you describe (including the implied nuances), because spam usage patterns do, and must, look different from normal usage patterns under any system that penalizes new accounts.

Hypothetically, the worst you could do would be astroturfing (a.k.a. the US/Chinese military style of "slightly biased posts from a large number of centrally controlled but seemingly unrelated accounts").

However, the idea of slight bias over longer periods is somewhat antithetical to the idea of spam, in that it might influence you to buy Sparkle towels (honestly, with Amazon prices that low and shipping that easy... [meta :p]) over a competitor, but it isn't going to convince you to navigate to {insert sketchy get-rich-quick spam scheme here}.

Weeding out astroturf is a far more interesting problem, though...


It only takes one break in the chain to compromise the entire web of trust. With such widespread connections across the planet these days, the chance that someone you trust three steps removed accidentally breaks that chain is quite real.


Let's say you see some spam. You could have the software tell you which part of the web has made the spam trusted, and then you could manually mark that part of the web as untrusted. If there are only a few breaks in the chain like that, it'd be a workable solution.
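
As a rough sketch of what that tooling could look like (the graph and all names here are hypothetical), the client just needs to walk the trust graph from you to the spammer and show you the chain of vouches, so you can cut the right link:

    from collections import deque

    # Hypothetical trust graph: who each identity directly trusts.
    trusts = {
        "me":      ["alice", "bob"],
        "alice":   ["carol"],
        "bob":     [],
        "carol":   ["spammer"],   # the break in the chain
        "spammer": [],
    }

    def trust_path(root, target):
        """BFS from `root`; return the chain of vouches reaching `target`."""
        queue = deque([[root]])
        seen = {root}
        while queue:
            path = queue.popleft()
            for nxt in trusts.get(path[-1], []):
                if nxt == target:
                    return path + [nxt]
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

    print(trust_path("me", "spammer"))   # ['me', 'alice', 'carol', 'spammer']
    # Cutting the last edge (or distrusting carol outright) removes the
    # spammer for you without touching the rest of the web:
    trusts["carol"].remove("spammer")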


Yeah, that's the problem. Unless I personally build a trust score for every member in the web, all I can do is rely on a score based on friend-of-friend rankings. Eventually a friend and I will disagree on a friend-of-friend ranking. Do I lower the trust of the friend, or the friend-of-friend? Is that even possible? And if it is, how much time do I really want to spend pruning the web?
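
For what it's worth, most proposals answer this by letting your own rating override anything inherited, so a disagreement costs you one explicit entry rather than a downgrade of the friend. A toy sketch (the ratings, decay factor, and depth limit are all made up for illustration):

    # Hypothetical model: each identity publishes ratings in [0, 1];
    # inherited trust is discounted per hop.
    ratings = {
        "me":     {"friend": 0.9},
        "friend": {"fof": 0.8},
    }
    DECAY = 0.5  # arbitrary per-hop discount

    def trust(who, viewer="me", depth=2):
        mine = ratings.get(viewer, {})
        if who in mine:            # an explicit rating overrides inheritance
            return mine[who]
        if depth == 0:
            return 0.0
        # otherwise inherit through whoever the viewer rates, discounted
        return max((trust(who, viewer=f, depth=depth - 1) * r * DECAY
                    for f, r in mine.items()), default=0.0)

    print(round(trust("fof"), 2))   # 0.36, inherited through friend
    ratings["me"]["fof"] = 0.0      # I disagree: one explicit override
    print(round(trust("fof"), 2))   # 0.0, friend's rating stays untouched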

Good example: I want to see everything my Aunt Susan is doing in her personal life on Facebook. I do not want to see anything ever for any reason that has to do with her Zynga games.

A single trust score doesn't really encapsulate that relationship, and it's very possible she would effectively breach the WoT by allowing Zynga to send me messages, notices, or e-mail in exchange for a shiny new Farmville tractor or something.
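
One way around the single-number problem is to key trust on (person, context) pairs instead of on a person alone; purely as an illustration (the categories and threshold are invented):

    # Hypothetical: trust is (person, context) -> score, not person -> score.
    trust = {
        ("aunt_susan", "personal"): 1.0,   # show me everything
        ("aunt_susan", "games"):    0.0,   # never show Zynga stuff
    }

    def allow(person, context):
        # Unknown contexts default to distrust rather than inheriting
        # the person's overall standing.
        return trust.get((person, context), 0.0) > 0.5

    print(allow("aunt_susan", "personal"))  # True
    print(allow("aunt_susan", "games"))     # False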

I stopped using Facebook because of this kind of crap. I don't have the time, energy, or interest to deal with people I do know sending me crap I don't want, and, more importantly, with Facebook's flexible definitions of privacy and customer service. Facebook changed my settings away from my desired state more than once as part of a "policy" or "feature" update.

So I guess the meta discussion is about whether you trust the holder of the trust. LOL.


There are trust metrics for web-of-trust systems that are resistant to attackers who can create unlimited dummy identities that trust each other. For example:

http://www.advogato.org/trust-metric.html
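
Advogato computes a maximum network flow from a seed of known-good accounts, with per-node capacities that shrink with distance, so dummy identities certifying each other get no flow unless real accounts feed them. Below is a much-simplified greedy sketch of that flow idea, not Advogato's actual algorithm; the graph and capacities are invented:

    from collections import deque

    certs = {
        "seed":    ["alice", "bob"],
        "alice":   ["carol", "mallory"],
        "bob":     ["carol"],
        "carol":   [],
        # Mallory's sock puppets certify each other enthusiastically...
        "mallory": ["sock1", "sock2", "sock3"],
        "sock1":   ["sock2", "sock3"],
        "sock2":   ["sock1", "sock3"],
        "sock3":   ["sock1", "sock2"],
    }
    CAPACITY = [8, 4, 2, 1]  # made-up: flow a node may pass on, by distance

    def accepted(seed="seed"):
        """Greedy single-pass approximation of capacity-limited flow."""
        inflow = {seed: CAPACITY[0]}
        level = {seed: 0}
        order = deque([seed])
        while order:
            node = order.popleft()
            d = level[node] + 1
            cap = CAPACITY[d] if d < len(CAPACITY) else 0
            # keep one unit for the node itself, share the rest downstream
            budget = min(inflow[node] - 1, cap * len(certs.get(node, [])))
            for peer in certs.get(node, []):
                if budget <= 0:
                    break
                share = min(cap, budget)
                budget -= share
                if peer not in inflow:
                    inflow[peer] = 0
                    level[peer] = d
                    order.append(peer)
                inflow[peer] += share
        return {n for n, f in inflow.items() if f >= 1}

    print(sorted(accepted()))
    # ['alice', 'bob', 'carol', 'mallory', 'seed']: no sock puppets accepted.

Mallory herself stays in (a real account certified her), but everything behind her is bounded by her own small inflow, which is roughly the bounded-damage property the Advogato page describes.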


It depends on how things are implemented. WoT isn't inherently just a vote-based system. If it's actually a web, then there is a requirement of a trust connection (or route) between you and the content, and that's much harder to game.
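
A toy comparison of the two designs (the data is hypothetical): counting vouches is trivially gamed by sybils, while requiring a route from you is not:

    # Sybils vouch for each other; nobody I trust vouches for them.
    vouches = {
        "me":     ["friend"],
        "friend": [],
        "sybil1": ["spammer"], "sybil2": ["spammer"], "sybil3": ["spammer"],
    }

    def vote_score(who):                  # gameable: sybils inflate this
        return sum(who in v for v in vouches.values())

    def reachable(src, dst, seen=None):   # route-based: they can't fake this
        seen = seen or set()
        if src == dst:
            return True
        seen.add(src)
        return any(reachable(n, dst, seen)
                   for n in vouches.get(src, []) if n not in seen)

    print(vote_score("spammer"))          # 3: looks popular
    print(reachable("me", "spammer"))     # False: no route from me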


That's a nice theory, but has it been demonstrated to work in any real systems? Is it known to be a workable solution at the scale of Usenet/Reddit?


That's a very high bar to set :)

No, it hasn't been validated for that use case at that scale. Some examples where it has been used: PGP uses a web of trust to validate keys, and Freenet boards used a web of trust to successfully stave off spam attacks. Both are a lot smaller than Reddit, of course.


It's not clear that the PGP web of trust will survive well under an attack, at least in terms of most users not being fooled.

Someone made a fake PGP key in my name several years ago, and many people have chosen it over my genuine key when e-mailing me, just because the fake key is newer, even though my genuine key has lots of signatures and the fake key has none at all. (It was probably Enigmail making the choice for them rather than a clearly informed decision.)

Meanwhile, there is already a complete clone of the strong set with colliding key IDs. That is, people have spent the computing time needed to make a fake version of every single public key, with the same name and key ID and signatures as the real one, just with a different fingerprint. (There's one at https://evil32.com/, but I think at least one other group has done the same thing!)

If someone uploaded those to the keyservers, there would be a fake copy of each PGP public key with the same key ID and the same signature structure (signed, of course, by other fake keys rather than by real ones). At that point you would have a 50% chance of getting a fake key every time you tried to use PGP to contact a new person, unless you consciously used an out-of-band fingerprint verification mechanism to bootstrap your selection of which key to use. You would never be safe just guessing because you "found a key out there" for someone and it "looked right" and "had a bunch of signatures"!
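
For context on why generating those fakes is cheap: a v4 key's short ID is just the low 32 bits of its SHA-1 fingerprint, so a collision takes on the order of 2^32 key generations, well within reach of a single GPU. Illustration (the fingerprint below is made up):

    # A v4 PGP fingerprint is 160 bits of SHA-1; the "short key ID" that
    # tools historically displayed is just its last 32 bits.
    fingerprint = "0123456789ABCDEF0123456789ABCDEFDEADBEEF"  # made up

    long_id  = fingerprint[-16:]   # low 64 bits
    short_id = fingerprint[-8:]    # low 32 bits, what evil32 collides

    print(long_id, short_id)       # 89ABCDEFDEADBEEF DEADBEEF
    # Brute-forcing a 32-bit match is trivial, which is why only the full
    # fingerprint (or a signature path you've verified) can be relied on.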

I'm willing to be more charitable toward the web of trust than someone like Moxie is -- I think more users could be taught to be more cautious, and software could help automate key exchange better -- but my own experiences with having a fake key out there in my name don't make me very optimistic about the way the web of trust is being used today. It's also sad to ponder, as Moxie has, that it seems PGP isn't even being used widely enough to make it worthwhile for attackers to try to DoS the web of trust, let alone to try to trick people into using the wrong keys on a large scale. (That is, PGP hasn't even reached Gandhi's "then they fight you" stage in the mass market.) This isn't to deny that PGP has provided major communications security benefits to smaller communities and groups that have consciously adopted it and use it carefully.



