
> "Lots of people think NAT is a show-stopper for peer to peer communication, but it isn’t. More than 90% of NATs can be traversed, with most being traversable in reliable and deterministic ways."

All the traversal methods require coordination with a 3rd-party (i.e., centralized) server, so - yes - this is a show-stopper for P2P.
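To illustrate the dependency, here's a rough sketch of UDP hole punching (the rendezvous protocol and the server address are made up; the point is only that steps 1 and 2 need a mutually reachable third party):

    // Rough C++ sketch of UDP hole punching. The rendezvous protocol and
    // server address (192.0.2.1) are hypothetical placeholders.
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main() {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);

        // Step 1: register with the rendezvous server, which observes our
        // NAT-mapped (public) address and port on the incoming packet.
        sockaddr_in rendezvous{};
        rendezvous.sin_family = AF_INET;
        rendezvous.sin_port = htons(3478);
        inet_pton(AF_INET, "192.0.2.1", &rendezvous.sin_addr);
        sendto(sock, "HELLO", 5, 0, (sockaddr*)&rendezvous, sizeof rendezvous);

        // Step 2: receive the *other* peer's public endpoint from the server.
        // Neither peer can learn this on its own - hence the third party.
        char buf[64];
        recvfrom(sock, buf, sizeof buf, 0, nullptr, nullptr);
        sockaddr_in peer{};  // ...would be parsed out of buf...

        // Step 3: both peers send simultaneously; each outbound packet opens
        // a mapping in its own NAT that admits the other side's packets.
        sendto(sock, "PUNCH", 5, 0, (sockaddr*)&peer, sizeof peer);
        close(sock);
    }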

As public addresses become more scarce, and carrier NAT becomes common, the problem of finding that intermediary will only get worse.

IPv6 should be a solution, but it won't get off the ground if carrier NAT gets priority, for example. (Or if ISPs just put firewalls everywhere, among other "best practices"...)


> All the traversal methods require coordination with a 3rd-party (i.e., centralized) server, so - yes - this is a show-stopper for P2P.

Third party and centralized aren't the same thing. Any peer with a real public address - or even a manual port mapping, or a router that supports NAT-PMP or PCP - can play the role of the "server" in this context.
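For example, with a NAT-PMP capable router, a peer can open a mapping for itself with libnatpmp - this is roughly the library's documented usage, with arbitrary port numbers:

    // Ask the gateway to map public UDP port 4754 to this host for an hour.
    // Roughly libnatpmp's documented usage pattern; ports are arbitrary.
    #include <natpmp.h>
    #include <sys/select.h>
    #include <cstdio>

    int main() {
        natpmp_t natpmp;
        natpmpresp_t response;
        initnatpmp(&natpmp, 0, 0);
        sendnewportmappingrequest(&natpmp, NATPMP_PROTOCOL_UDP,
                                  4754 /* private */, 4754 /* public */,
                                  3600 /* lifetime, seconds */);
        int r;
        do {  // wait for the gateway's reply, retransmitting as needed
            fd_set fds;
            struct timeval timeout;
            FD_ZERO(&fds);
            FD_SET(natpmp.s, &fds);
            getnatpmprequesttimeout(&natpmp, &timeout);
            select(FD_SETSIZE, &fds, NULL, NULL, &timeout);
            r = readnatpmpresponseorretry(&natpmp, &response);
        } while (r == NATPMP_TRYAGAIN);
        printf("public port %hu -> private port %hu for %u seconds\n",
               response.pnu.newportmapping.mappedpublicport,
               response.pnu.newportmapping.privateport,
               response.pnu.newportmapping.lifetime);
        closenatpmp(&natpmp);
    }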


In order to bootstrap a connection to a P2P network, one must contact a well-known server. It doesn't matter whether the well-known server is a "peer" that happens to be running the same software, or one of the BitTorrent DHT bootstrap servers; what matters is that that peer has a disproportionate amount of authority and influence over the network, amounting to a single point of censorship or failure.

NAT (and firewalls anywhere except at the networked computer itself - even at the subscriber's router) contributes to and creates this asymmetry, where one side has to beg to connect to the other, and so everyone winds up settling on the most memorable 3rd party. Everyone has heard of the king (or Verisign), and so the rich get richer...


There is no requirement that it be a single node or that they all be operated by the same party.


It's not a requirement; it's just that the network tends toward it naturally due to the asymmetric addressability. As long as there are two or more global addresses available to the public on which to run STUN, UPnP, etc., there will be "competition", but it is immeasurably weak compared to what would be possible with direct (non-NAT) addressability. In an environment without those obstacles, systems are naturally designed in a P2P way - simply from the need for scalability.

Case in point A: Skype leveraged an initial P2P design at a time when direct addressability was the norm (and there were many freeware alternatives that allowed direct dialing)... Now that Skype has become dominant, it has switched to a centralized infrastructure 1) because its owners can (it makes administration, censorship, and surveillance easiest), and 2) because a P2P model no longer makes sense with most users relying on their centralized bootstrap servers.

Case in point B: Dropbox and similar services have replaced self-hosted FTP, I would argue, simply because no one wants to maintain static port mappings and Dropbox is easier.

Even without other incentives, the presence of NAT is a centralizing force - one that, taken to the extreme (as with carrier NAT), outright precludes P2P - and that is undesirable. In an Internet with NAT (or any other violation of the end-to-end principle), all systems suffer the same fate: centralization (the antithesis of P2P).


Is your argument that we need to adopt IPv6? Because you'll get no disagreement from me. But something has to be done in the meantime.

I guess I'm going to have to plug my software: http://trustiosity.com/snow

The idea is that it doesn't actually eliminate the horrors of NAT traversal; it just makes it my problem instead of yours.

The current solution is to use other nodes as relays, using DHT-style routing, and I put a VM on AWS to bootstrap. The interesting thing is that the bootstrap peer is only required for the first connection. Once there is an existing path A-B-C-D, it doesn't matter that zero of them have a public address; you can still use it to send a hole-punch message from A to D.
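A rough sketch of the relay idea (toy types and a made-up message flow, not snow's actual code):

    // Toy model of relaying a hole-punch request along A-B-C-D (C++14).
    // Illustration only - not snow's actual implementation.
    #include <iostream>
    #include <string>

    struct Node {
        std::string name;
        Node* next = nullptr;  // next hop toward the target

        void relay_hole_punch(const std::string& origin_public_endpoint) {
            if (next) {
                // Intermediate hop: no public address needed, because each
                // link on the path was opened outbound when it was formed.
                std::cout << name << " -> " << next->name << "\n";
                next->relay_hole_punch(origin_public_endpoint);
            } else {
                // Target: send UDP toward the origin's public endpoint while
                // the origin does the same, punching through both NATs.
                std::cout << name << " punches toward "
                          << origin_public_endpoint << "\n";
            }
        }
    };

    int main() {
        Node a{"A"}, b{"B"}, c{"C"}, d{"D"};
        a.next = &b; b.next = &c; c.next = &d;
        a.relay_hole_punch("203.0.113.7:40000");  // A's NAT-mapped endpoint
    }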

The real problem is that trusting random peers to relay messages allows them to DoS you by filling up the network with Sybils and then not forwarding your messages. So I'm in the process of coming up with solutions to that, probably something along the lines of allowing particular nodes to be designated as trustworthy and preferring those.
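In rough code, that preference might look something like this (a hypothetical sketch, not anything implemented in snow yet):

    // Hypothetical sketch: prefer relays the operator has explicitly marked
    // trustworthy; fall back to unknown peers only when necessary. (C++14)
    #include <string>
    #include <vector>

    struct Peer {
        std::string id;
        bool trusted = false;    // designated by the operator, not the DHT
        bool reachable = false;
    };

    const Peer* pick_relay(const std::vector<Peer>& peers) {
        const Peer* fallback = nullptr;
        for (const Peer& p : peers) {
            if (!p.reachable) continue;
            if (p.trusted) return &p;      // trusted relays win outright
            if (!fallback) fallback = &p;  // first untrusted candidate
        }
        return fallback;  // nullptr means no relay is reachable at all
    }

    int main() {
        std::vector<Peer> peers{{"sybil1", false, true},
                                {"friend", true, true}};
        const Peer* r = pick_relay(peers);
        return (r && r->trusted) ? 0 : 1;  // picks "friend"
    }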


Very cool.

Thanks for the link and thanks for taking a stab at a hard problem! Snow looks very promising so far... (I can ping nodes on my LAN over it too, which is usually a sticking point for traversal-oriented software - one node is doubly-NAT'd, and there's an SSH server with an Ubuntu banner reachable from afar, the UDP packets cutting through both brick walls nicely.) I'm impressed! :)

(FYI, building snow on a fresh Debian Squeeze 686-pae (packages: make g++ libssl-dev libminiupnpc-dev libnatpmp-dev) fails for me at dht.cpp line 220 (ambiguous function call), though; I'll have to read the source more to find the right cast or ::namespace to fix it, but it compiles fine on amd64 with an identical set of packages.)

I'll definitely be reading the code more closely!


Thanks.

I can see the bug: the function is overloaded as taking either a uint64_t or a pointer, and I'm passing "0UL" to it. On 64-bit that's an exact match for uint64_t, but when unsigned long is 32 bits the compiler doesn't know whether to convert it to a uint64_t or a null pointer. It probably just needs a cast to uint64_t.
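Reduced, the failure mode looks like this (overload types assumed, not snow's exact signatures):

    // Reduction of the reported ambiguity; overload shapes are assumptions.
    #include <cstdint>

    void f(uint64_t) {}
    void f(const void*) {}

    int main() {
        // f(0UL);
        // On LP64 (amd64): unsigned long is 64 bits, an exact match for
        // uint64_t, so it compiles. On 32-bit: 0UL is both convertible to
        // uint64_t (integral conversion) and a valid null pointer constant
        // (pointer conversion) - equal rank, so the call is ambiguous.
        f(static_cast<uint64_t>(0));  // the cast disambiguates on both
    }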


So you define P2P so as to force users to type in IP addresses of other users?


Unfortunately, that does not provide a solution. Since public IP addresses are becoming scarcer, and since at least one side needs a global address, all of the workarounds for NAT will tend to centralize the 3rd-party coordination/proxy role.

Even IPv6, which should provide direct addressability in the long term (assuming ISPs provide it on the wire), may wind up increasing centralization in the short term (creating a single point of failure and censorship) if the only way to connect to the IPv6 Internet is to tunnel into a major tunnel broker. Rather than hundreds of ISPs, there may be only a handful of brokers - easy targets for mandatory kill-switches, censorship, and surveillance - and what started with more addresses than stars in the universe will have degenerated into a global hub-and-spoke network.


The Web is about statelessness and hyperlinking, so I'll note that:

1) JavaScript breaks stateless linking.

2) Mobile versions and browser detection break stateless linking.

3) Censorship headers like Prefer: Safe (shown after this list), and censorship in general, break stateless linking.

And the list goes on. Some state and inability to link is inevitable, but this is not.
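(For reference, the whole proposal boils down to the browser adding one request header, riding on RFC 7240's Prefer mechanism, which a server may acknowledge with Preference-Applied - the hosts here are illustrative:)

    GET /article HTTP/1.1
    Host: example.com
    Prefer: safe

    HTTP/1.1 200 OK
    Preference-Applied: safe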

Mozilla: please get back on track for a strong/stateless and cite-able web. Headers are not the place to build a censorship "UX."

Also, "safety"? That's not helping. Almost no one uses NetNanny, or similar software and, while I'd like to think we've grown as a species, even if we haven't, it doesn't make sense to force something most Web surfers have already rejected back down their throats. (And it is forcing them, even if it's optional. The social implications of even "optional" headers will be with us for a very long time.)

A browser especially should strive to be neutral, unless you want to start getting requests from governments and industry to block sites directly in the browser. Google handles a million or more such requests every day, and it is just an index... You can't expect a different fate without discarding neutrality as a core principle.

Cite-ability requires availability, and censorship - the Web equivalent of a frontal lobotomy - contradicts the very essence of your product. I'm starting to feel ashamed to be using a browser made by an organization that doesn't understand that.


> "I would see no reason to ever move."

(Obligatory) Apart from the single political point of control. AWS is to web apps what GoDaddy is to DNS and Verisign is to SSL.


You're right, actually. I've dealt with all three; AWS is definitely more friendly (if we're talking about support here), but they all have monopolies. AWS, honestly, is the only one that actually has an advantage: Verisign is overpriced, and GoDaddy is just a typical domain registrar.


Could you justify that comparison? It basically suggests that any father is the same father to any son or daughter. I can't agree that every child's father treats them equally well (or badly)...

In summary, it's fine to be alert, but be sure not to over-generalize...


> "it doesn't seem entirely irrational." (Yes it does...)

> "It doesn't have to be perfect to be better." (Yes it does...)

The problem used to be approached by presuming innocence (demanding perfection), rather than with a willingness to accept false positives (20 years ago, spam filters weren't available as an analogy...). It is always possible to wrongfully judge someone, but that was never a valid or acceptable outcome ("It is better that ten guilty persons escape than that one innocent suffer" - Blackstone). We accept that spam filters give false positives (not to mention that one person's spam is another person's opportunity), so I think comparing the justice system to detecting spam is a mistake, and moreover that a goal of "prevention" is itself a red herring.

The goal of prevention encourages us to accept lower thresholds of guilt probability, and that is wrong. In other words, if prevention is an end, then it is worth deliberately (rather than accidentally) restricting innocent people on the basis of virtually any nonzero probability of guilt. 80% "guilty" by association (for using Tor, for example), 45%, etc., would all be enough to justify legal action - and the thresholds would certainly depend on whoever is in power and has access to the database that week. This is a very different model from presuming innocence, which has not only a goal of zero false positives, but also provides satisfaction when the justice system is in error.

I think today we are mostly talking around the fact that a crime has to have been committed in order for punishment to be deserved, and that, for that reason, prevention cannot be a valid goal in itself (but it's nice when it happens).

Rationalizing surveillance as a tool to "prevent", rather than to justly punish wrongdoers (which centralized surveillance does not accomplish, because it is centrally operated and has a conflict of interest; everyone owning a camcorder, on the other hand...), implies IMHO that the central database needs to go (and that individuals need to be empowered instead).


Hold on there, friend. I was not suggesting we replace the judicial system with a filter. Rather, arrests.

I.e., make the arrest based on the filter, then run the trial in the same old jury-of-your-peers.

Convictions should be false-positive-free. But our system would not work if arrests also needed to be 100% false-positive-free.

I'm also not advocating punishment for crimes that have not yet been committed. Rather, think of it as looking for flags for crimes that have already been committed or are in progress. For example, there are all sorts of small flags thrown by embezzlement or salami-slicing that, put together, identify the operation.


> make the arrest based on the filter, then run the trial in the same old jury-of-your-peers.

LOL, "jury of peers". You mean the jury that is left after the prosecutors and defenders screen out the most competent jurors? The same jurors who typically believe you are guilty because you've been arrested? Have you been in a typical criminal courtroom lately? Any public defender will tell you that going to trial in cuffs and jailhouse orange will almost certainly get you a conviction.

There are lots of things that need to be fixed in the justice system. Let's not give them more tools to make it worse.


Granted, arrests are held to a different standard than convictions: they merely require "probable cause" rather than proof of guilt, and this lower standard does make it look like the spam-filtering analogy may fit. But in calculating this new "guilt probability", our spam filter relies increasingly on the "testimony" and "facts" presented by the surveillance database itself, and it is the objectivity of this database in practice - or rather of the ones accessing it - that I am directly calling into question (though I didn't elaborate above).

Unfortunately, the database cannot be trusted, by virtue of its centralized nature and administration (even if that centralization is justifiable, for example to protect everyone's privacy). The hardware may be objective, but people are not - people lie, cheat, and steal when they can get away with it - and there are simply too few separate and competing interests to hold the small number of people with access to the database and tools accountable, and to ensure their objective application. We have seen centralized data collected and used for private interests in the past (and books censored, and guns regulated, and...), whether by fascist governments or through police protectionism (lying under oath; evidence tampering; racial "profiling"), economic fraud, etc. It is human nature to use one's control to one's advantage, and it is simply too tempting for police to shoot first (detain, seize, etc.), especially when it is in their interest, and ask questions later (check the database for cause; use "parallel construction"; take incriminating speech out of context).

It would be worse if that extended all the way to conviction, but it presents the same kind of problem for arrests, detentions, searches, etc., since it is effectively the word of the administrators (whom we trust not to abuse the data and tools) against that of the person arrested. The more centralized the data and tools become, the less we can trust them to be applied objectively without accountability.

Unfortunately, there are no checks and balances on absolute power (centralization), and so we cannot allow centralization to continue indefinitely. Absolute power corrupts absolutely, and it is my "thesis" that arrests are not a suitable application of these tools. The risk is too great. Police already have a high level of responsibility (the authority, training, and tools/weapons to control us by force) and what feels like decreasing accountability (because the kids, because the drugs, because I said so, because I can, because of cronyism, and because wealthy people don't like hearing criticism), and since they are nonetheless "only human" - I don't recommend giving them more.

Granted, you are merely describing a potentially objective algorithm, but my point is that the objectivity of any given tool is moot given the human element. Guns don't kill people, people do, and they will continue to do so even with checks and balances (like laws against murder; if prevention were the goal, we would be failing daily). It is only the distribution of accountability (peer juries, private-key sharing, democratic voting, citizen groups, etc.) that keeps such roles in check.

Anyways, thanks for the opportunity to flesh my thoughts out more.


I guess my theory partly depends on the filter being too sophisticated for any one person to co-opt. We can design machine learning systems, but there can't be many people capable of wrapping their head around a running one, reaching in right here to peek/poke some weight, and - bam - your nephew is arrested in Texas. On the bright side, most of those people are probably not officers, whom you seem to be most afraid of.

As for the objectivity of feeding the filter data, I envision something completely automatic. No selective entry for this or that suspicious person - the filter is fed a database of all people, and perhaps monitors the Internet's traffic on its own. Maybe ACH traffic too. Financial crime could be this system's biggest win - computers are far better suited than humans to uncovering financial crime.

Basically, when it's big enough and sophisticated enough and automated enough that no one person can fully understand it, it becomes significantly harder to pervert. And, as I mentioned before, it needn't be perfect - our current system is pervertable too (see: papers please, racial profiling, etc), so this one would just need to be less pervertable...


http://boingboing.net/2009/12/08/farm-family-put-unde.html

Edit: The title and link of this HN article have changed. The link changed from a BoingBoing article to the original German article, and the headline used to be a question ("Who is the NSA spying on..." or similar) that gave the GP comment more context.


I think the cooperation necessary would be for the "big guys" to not have a vested interest in selling out privacy, which has been the prevailing business model for a long time. And, since the big guys only listen to their bottom line, that means not using them until they support privacy. It may mean not using the Internet substantially at all. (It's more than a little ironic to be saying this on the preeminent "business hacker" (or "startup") community, which has a visible subset who sympathize with some of the NSA's programs, or at least have been able to rationalize them...)

As you say, the tools have always been there, but no one uses them. That might be because it's a chicken-and-egg problem. At the same time, it might be because the people in a position to develop and promote the tools, even if only for their own use, are being prevented by a one-track culture that encourages them to sell out their clients' privacy in addition to discouraging them from working on projects like Tor. (Again, the HN forum is an example of that conflict - being a largely business-oriented forum; surveillance technology sells... Even DuckDuckGo, a favorite startup in this community, has filters to protect us.) Rather than peer-to-peer solutions like Gnutella, GNUnet, Tor, and even open wireless, people continue to make websites with JavaScript encryption, despite the proven MITM threat.

I don't think JavaScript and CSS will get us out of this, but if this latest revelation doesn't wake people up in the tech community specifically, nothing will, since BoingBoing's readership includes a large number of them. To me that means the tech and programmer categories are themselves a primary focus of the surveillance that some highly respected tech pundits (and HN forum members) have defended and rationalized as only being used against terrorists and perverts. That definition now includes anyone with enough knowledge to build or use strong privacy tools. The definition now includes everyone on this forum.


"No one" uses it because it is too complicated for "every one".

