A sweet hack, and full marks for humor in the FAQ.[1]
Q: Is it secure?
A: Security is not binary.
Q: OK, how secure is it?
A: It seems like you just asked that question.
Q: No, the first question was if it's secure,
the second question was how secure is it.
A: Well now that wasn't even a question at all.
Tell you what, if you find an unreported security
vulnerability I'll buy you a beer.
Personally I find this really unsettling for non-technical users. Almost asshole-ish. It's funny to us... but seriously, providing a real answer after all that would be enough. Not just answering with a "yes", but with a little blurb on how it is secure.
Oh, so when someone doesn't lie to you it's unsettling?
The only way to know if something is secure is when it's adopted en masse and you see if it really was secure or not. You could read the WinXP pamphlet on security back when it was released; it had endless bullet points about how secure it was. It was probably the least secure software in the history of computing, based on actual attacks after the fact.
Security isn't something you provide an answer to unless you're selling snake oil. Luckily, it seems most people prefer buying snake oil and are happy to eat up a vendor telling them how secure an utterly untested product is.
Security theory is not something you can understand as a non-technical user anyway.
I think one should start by explaining what "a layer 3 virtual network that uses public keys instead of IP addresses" would mean, or what a network is, depending on what "non-technical" means.
If one doesn't immediately understand what this means, they should stay away. The intended audience is clearly people who have some grounding in networking.
Q: What do you mean "No"?
A: We believe we have done a good job in securing it.
Q: So did you do a good job?
A: We hope so!
Q: You "hope so", what sort of answer is that?
A: Trust us. It's secure. We are not hackers. We don't want to steal your data. We did not put in any back doors. We audited the code ourselves. There are not any kernel level hacks, root kits, or otherwise. This has been tested against a variety of anti-virus scanners and none of them flagged anything. We're very good. Please please trust us?
The last answer could be even better if it included an actual list of the things that have been checked:
What testing methodology did you use? What forms of vulnerabilities or classes of errors does it prevent (valgrind, ...)? Has the code been formally verified?
What are the attack scenarios that you have considered? Which ones don't you prevent (physical access, system compromise, user compromise)?
:-) The slight tone of sarcasm was there if you were looking for it.
Ultimately it comes down to "Trust us". Unless you are well versed in computer security, anything other than what I wrote is meaningless. Even the rootkit stuff I put there is above the head of the average computer user (we're probably talking the 98th percentile and above who would understand what a rootkit is).
Probably talking the 99.99th percentile for what's above.
> Personally I find this really unsettling for non-technical users.
There is of course the counter-argument that if you're non-technical, you probably shouldn't be trying to implement a cryptographic layer-3 network for any reason other than "the lols".
That just means we move the bar a little further. We write an answer for programmers who know next to nothing about cryptography and security measures.
I’m not part of the Snow project, but I have the impression it’s still pretty experimental. If so, it’s probably better for non-technical users to remain unsettled about it for a while yet.
This is similar to what I thought IPv6 IPsec should have been: auto-generated addresses, where the address is derived from the hash of a public key. Sure, the addresses would have to be longer (in a /48, you only have 80 bits of choice), but if IPv6 addresses were long enough to accommodate strong hashes, it would solve much of the problem of secure computer-to-computer communication in a decentralized way.
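Purely as illustration, a rough Python sketch of that kind of derivation. The documentation prefix and the SHA-256 truncation are my assumptions, not any standard:

    import hashlib
    import ipaddress

    # Rough sketch, not any standard: derive the 80 free bits of a /48 IPv6
    # address from a hash of the public key, so the address itself commits to
    # the key. Truncating SHA-256 to 80 bits is exactly the weakness mentioned
    # above: real security would want a longer address for a stronger hash.
    def address_from_key(prefix: ipaddress.IPv6Network, pubkey: bytes):
        host_bits = int.from_bytes(hashlib.sha256(pubkey).digest()[:10], "big")
        return ipaddress.IPv6Address(int(prefix.network_address) | host_bits)

    prefix = ipaddress.IPv6Network("2001:db8:1234::/48")  # documentation prefix
    print(address_from_key(prefix, b"example public key bytes"))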
Right now, IPsec practically requires PKI. But at Google or Amazon's scale, PKI is far from an easy problem: distributing keys to millions of nodes must be painful. And auditing the system must be its own level of hell, as I doubt many internal PKI systems attempt to manage devices at that scale. Unlike a smartphone or a laptop, where you can rely on two-factor authentication, a server must be single-factor authenticated. The server is the server, and that places a huge burden on correctly allocating certificates.
And then there's the chicken-and-egg problem: if you want to deploy PKI to millions of existing servers, how do you do that and ensure every server is what it says it is? There are too many shaky links of trust involved for a system like that to stand up.
I really like this idea; in many ways it's better than the idea I had about IPv6, because it uses the DNS layer to advertise public keys. It's inarguably more extensible, to boot. My idea would have fixed IPv6 to a single standard for IPsec; this is much more flexible.
Not really. cjdns seems to need an invite to get onto its network.
Not saying that's necessarily a bad idea; cjdns seems to be useful to the people that use it. But if I want to build an app that communicates P2P over such a network, a manual step to join the network won't fly.
cjdns does not exactly need an invite. If you want to peer over the internet you'll need to swap public keys with someone already part of the network. But most people have turned on autopeering, which will discover peers on the local network and take care of peer swapping for you. Keep in mind that the intended purpose of cjdns is not to be a p2p library, it is to be a mechanism for connecting local mesh networks together into a mesh internet.
The project is in the middle of a partial rewrite. The existing DHT has several issues and I'm replacing it.
The change is going to break compatibility, which made it into a much bigger change because it provided an opportunity to make several other compatibility-breaking changes. So I haven't been promoting the project recently and the DHT bootstrap node is currently offline.
There should be new code some time around the end of summer.
I gave this a quick try, as I've been looking for something like this that works for several years.
Looks like the DHT used for NAT traversal and resolving .key addresses is not currently online; at least my (very well connected) test machine wasn't able to connect to the 1 pre-seeded DHT peer.
Anyone gotten it to work outside of a single machine, and ideally through NAT?
Interesting, but I can't immediately think of any real-world use cases that this solves that aren't already solved with existing technology. The docs don't really describe anything beyond "automatic NAT traversal and end-to-end encryption with no configuration". No configuration? Not entirely true. :)
IP addresses (normal, unicast ones) like 1.2.3.4 and a9c::890 are meant to reach a computer somewhere on the Internet, through any number of routers. Routers are meant to forward IP packets until they reach their goal.
Try it with your neighbor's computer, it's not going to work. Did he enable DMZ or port forwarding? Alright, that works when your neighbor is home. Now try it when your neighbor is at work. His IP address changed, so there goes the reachability. This seems like "duh, obviously," and you're right. But I just want to perform the fundamental action of connecting two computers.
I personally used Tor to solve this problem: run a hidden service on one, connect to the .onion address on the other (you can configure ssh to work with .onion addresses).
This public key system would solve the problem in a much better way, without going through Tor. Not that Tor is bad, it's just not meant for this. Connecting the two machines directly without thinking about the intermediary network is what I wanted.
When trying to design a decentralized Internet platform of any kind, one always runs into the issue that if it is built on IP addresses, it will be fundamentally centralized. This fixes that at a low level.
First of all, no: a protocol or application is not fundamentally centralized because it uses IP. IP already supports multiple forms of addressing and routing, and both centralized and decentralized services use IP.
Second, snow does not change whether an application is centralized or not. It's the application which is centralized, not the address. Your host's address can be "1.2.3.4" or "abcdefghijklmnop", this does not change how the application works at all.
Third, snow is just a tunnel. Any tunnel would "fix" an application the same way by simply translating addresses and encapsulating communication.
This is basically just onion routing, but snow doesn't really exist to be an onion router. The real purpose of snow appears to be that the author wanted the features of IPv6 (secure connections and the ability to address and connect to a host behind a network firewall) without having to actually use IPv6 in his application, doing all this on top of an IPv4-only network. This is what sets it apart from every other NAT tunnel. The public key stuff is a red herring.
IP - and especially IPv4 - is a challenging protocol for decentralized networking. The protocol is really more suitable for a local network within a single routing and administration realm. That does not reflect today's internet.
Applications tend to assume that IP addresses are globally unique. ISPs depend a lot on each other to handle routing properly. Occasionally we see a route leak when someone screws up. Sometimes it even happens deliberately. And it's entirely possible that malicious routes are announced on a regular basis to conduct clandestine MITM attacks. Technical solutions for automatically determining which ASNs should be allowed to announce an IP prefix remain problematic. And BCP 38 - while it helps to deal with DoS attacks and certain security issues - also breaks some very useful approaches to deploying high performance/scale applications.
The internet is currently far more centralized than most people like to admit. The reality is that both DNS and IP are handled by delegation from a central authority. For instance, proof of IP address ownership remains outside the scope of the protocols. Network connectivity still remains based on trust relationships. That is fundamentally incompatible with a decentralized and ad-hoc approach to networked applications.
There are many network operators who have been shown untrustworthy. The design of the internet hasn't quite caught up yet.
Today's internet supports multicast, unicast, anycast, broadcast, and geocast addressing. IP is a connectionless protocol designed to facilitate communication from one network node to another. The protocol is designed to be routed through dynamic, unreliable networks. IP is really not that challenging and there's a lot of other layers that make its job easier.
And it really has nothing to do with centralization or decentralization. It's peer to peer. Your peers can be anywhere and you can send and receive anything, out of order, connectionless. This is fantastic for decentralized distributed networking.
Applications can 'assume' anything they want; that's the application, not the addressing protocol. Everyone who has read RFC1918 knows IP addresses are not unique.
And there is no way to ensure a route doesn't have a malicious actor. It's been shown time and again with networks like Tor that it doesn't matter what layers of security or obfuscation or decentralization you add. A bad actor on a route will be able to identify or mess with your traffic. Your application is the deciding factor in the security of the connection.
DNS and IP are not handled for everyone by a central authority. Both are independent protocols which can be used across the internet without a central authority's authorization. Of course IP addresses are more closely guarded, but like you mentioned before, advertising an invalid range of addresses works all the time. And DNS is not even needed to use the internet! Public domain registration using specific TLDs does have centralized control bodies, of course, but that's necessary to prevent conflict.
The internet is a web of trust. That will never, ever change. The reason it will never change is we all want something for free.
If you wanted, you could pay for and bury fiber-optic cable from your home to every place on planet earth that you want to make a network connection to. Then you wouldn't have to trust anyone, and when someone taps into your fiber or cuts the connection, you could (hopefully) determine that your connection is no longer "safe" or "reliable". But this is not very practical.
The internet fixes this by allowing any network to help any other network get around common network problems. We help each other because it is mutually beneficial. When that mutual assistance breaks down you get problems like the Comcast-Netflix debacle. No internet protocol or addressing scheme will route around a monopoly on the network. The only "decentralized" solution is a bunch of people on a wireless mesh network and a satellite link, which will still result in Netflix not being practically usable.
But please, keep believing that an addressing scheme will somehow keep you from having to trust a foreign network. Good luck getting House of Cards to stream.
I'm trying to place my finger on it too, but I feel an itch in the back of my brain that says "This is important and can be used for the core of something really amazing".
Isn't this a similar concept to Tor addresses without the onion routing being part of it?
In Tor, servers are one IP and one public key, so yeah, you can't talk to that IP if they change their public key. It's closely related. But this works because someone is giving you a table of that IP <-> public key association. If an attacker changes the public key of an IP in that table then... That's why OP's idea is stronger.
> I can't immediately think of any real-world use cases
The problem with SSL is that it needs certificates. You need a domain name and a certificate if you want to run anything over SSL in a reasonable manner.
If address == identity then those requirements vanish because learning about the address already provides you all the information you need to establish a secure connection.
He's thinking on the level of "replacing IPs", not "replacing domain names". Such addresses would be like .onion domains: only marginally human-meaningful, and only at significant computational expense. But since IPs aren't very human-meaningful anyway, it's a step forward.
You would still have the problem of name resolution. However since the address would be the public key, once you had resolved the address the identity of the other party would be assured. Assuming that an attacker cannot feasibly generate an equivalent public key, you remove key exchange+authentication as an attack surface.
Key management could be a downside. If you want to update your key you have changed your address, which would look like a name-resolution poisoning attack. There are feasible ways around this, but none are ideal (particularly if your key was compromised: mechanisms like signed forwarding records would become extremely hazardous). It would probably have to rely on name-resolution mechanisms similar to those used for current IP addresses.
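As a toy illustration of why resolution poisoning becomes detectable, assuming an invented address encoding (a bare SHA-256 hex digest here, not snow's actual format):

    import hashlib

    # Toy check with an invented encoding: if the address is just a hash of
    # the public key, any resolved name->address mapping can be verified
    # locally once the peer presents its key, with no certificate authority
    # in the loop.
    def address_matches_key(address: str, peer_pubkey: bytes) -> bool:
        return address == hashlib.sha256(peer_pubkey).hexdigest()

    # During the handshake the peer sends its public key; we recompute the
    # address from it. A poisoned record points at an address whose matching
    # key the attacker cannot feasibly generate.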
I haven't looked into it deeply, but it sounds like it'd be great if you wanted to build a decentralized P2P system that didn't require a centralized routing system like DNS or IP: in practice, something like email or Twitter but decentralized, and about as simple to use as HTTP over DNS.
I had the same thought, but am curious if you've solved the following problems:
Would you bake the private key into the container or set it at runtime? If you set it at runtime how will two containers in different places know who to talk to?
Perhaps you generate the keys at build time and add the public keys to the partner containers, then at run time you inject the private key into the container via an env var. Now you have to securely manage and transport private keys and you've got two problems.
There must be other things I'm not considering.
And, of course, whatever system is running the container can step into it and read the private keys (as can any malicious containers on the host that are able to break out of their container). But we can sidestep that by assuming the hosts are our own hardware.
I was thinking that you could bake the Snow keys of a centralized set of discovery servers, which also happen to provide key transport, into the containers (so, basically, something like SkyDNS). The containers would generate private keys at runtime and then advertise themselves with the discovery servers.
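A hedged sketch of that runtime flow, using PyNaCl for key generation; the discovery endpoint and payload are entirely invented for illustration:

    import requests                       # third-party, for the invented API call
    from nacl.signing import SigningKey   # third-party: PyNaCl

    # Invented flow: generate the container's keypair at startup, then
    # register the public key with a baked-in discovery server. The URL and
    # payload are hypothetical; neither snow nor SkyDNS defines this API.
    key = SigningKey.generate()
    pubkey_hex = key.verify_key.encode().hex()

    requests.post("https://discovery.internal/register",  # hypothetical endpoint
                  json={"service": "myapp", "pubkey": pubkey_hex})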
@gruez, an IPv4 address would not allow for a future-proof-enough address length. IPv6 might just do it, I can't say.
@y0ghur7_xxx, Let's say I want to expose some snow-unaware service via snow to only a single host. I have no way to set up an iptables rule to do that atm.
> When you resolve a key name an address is assigned to that key. The address remains assigned to the key as long as there is traffic, but never for less time than the TTL on the DNS record and never for less than 5 minutes (and generally for much longer than that).
So it's a DHT, so it's decentralized. Pretty nice, but I still don't understand the purpose of this. Is this trying to improve security? If so, how?
What I'm more interested in is a protocol that can let people share data on a DHT which is resistant to denial of service and other security issues. I guess Freenet is that already (somehow), but it's really not usable.
There are so many things in bitcoin I'd love to see in other standards, especially for messaging and forums. It would make things so much harder for the NSA and advertisers.
Does this or CJDNS have any mechanisms in place to limit metadata exposure?
From what I could gather, both use public DHTs for routing, and AFAIK public DHTs in general can be rather trivially crawled for metadata.
The current generation Internet already offers plenty of methods to protect message contents, but very few can also obfuscate metadata, which can be just as revealing, but almost always much more readily accessible.
You can run this through a VPN with a port forwarded for UDP. And that VPN can be tunneled through a nested chain of VPNs. That provides some metadata obfuscation.
The main problem with using a public key as your identity is that once the private key is leaked, you no longer have an "identity" (or more exactly, it's not uniquely you). There is no way you can change your identification without changing your identity.
Multi-key addresses would be more robust. Plus, when it's an Open Asset on the Bitcoin blockchain, you can "rotate" keys by moving your identity to a different multisig address when one of the previous keys gets compromised.
> The main problem with using a public key as your identity is that once the private key is leaked, you no longer have an "identity" (or more exactly, it's not uniquely you). There is no way you can change your identification without changing your identity.
The main problem with using a public key as your identity is that it's a horrible string of gibberish that people can't remember. What you want is some way of mapping some friendly name to your key.
But you can resolve friendly names to keys however you like. You can update that mapping however you like. That is orthogonal to what snow does.
True, you could have a bunch of SRVs in your DNS that redirect from _snow._tcp.my.best.friend.snow to aaa<..>.key or bbb<...>.key. You can even expire your keys after any amount of time for free, thanks to DNS. Hey, that actually looks like a nice thing to add to snow!
It already supports CNAME. You can do example.com CNAME aaa<..>.key. SRV is on the todo list, not least because it also lets you look up the IP address and port in DNS as an alternative to the DHT.
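For example, a client could chase that mapping with the third-party dnspython library; a hedged sketch with hypothetical names:

    import dns.resolver  # third-party: dnspython

    # Hypothetical: example.com is assumed to carry a CNAME pointing at a
    # snow .key name. Resolving the CNAME yields the key name, which snow's
    # DNS handling then turns into a tunnel address.
    answer = dns.resolver.resolve("example.com", "CNAME")
    for rr in answer:
        print(rr.target)  # e.g. something of the form aaa<...>.key.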
Depends if the network has memory. If you can publish a "this key is compromised" message then at least you can take down your own identity so the thief can't use it, and work on re-establishing a new one.
> So if I'm following this right, querying the builtin DNS server for a .key actually triggers DHT lookup and NAT setup?
That's the idea.
> How are NAT entries recycled?
DNS responses have a TTL. The mappings last at least as long as the TTL and get extended if any traffic is sent to the address or there is another name lookup. After there is no traffic for a period of time the address goes back into the pool.
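A minimal sketch of that recycling scheme as described (assumed pool behavior, in Python; not the actual snow implementation):

    import time

    # Minimal sketch of the recycling scheme described above (assumed
    # behavior): an address stays bound to a key for at least the DNS TTL
    # (with a 5-minute floor, per the docs quoted earlier), and traffic or
    # repeat lookups extend the lease.
    class AddressPool:
        def __init__(self, addresses, min_lease=300):  # 5-minute floor
            self.free = list(addresses)
            self.min_lease = min_lease
            self.leases = {}  # address -> (key_name, expiry timestamp)

        def assign(self, key_name, ttl):
            addr = self.free.pop()
            self.leases[addr] = (key_name, time.time() + max(ttl, self.min_lease))
            return addr

        def touch(self, addr, ttl):
            # Called on traffic or another name lookup: extend the lease.
            key_name, _ = self.leases[addr]
            self.leases[addr] = (key_name, time.time() + max(ttl, self.min_lease))

        def reap(self):
            # Return expired addresses to the pool.
            now = time.time()
            for addr, (_, expiry) in list(self.leases.items()):
                if expiry < now:
                    del self.leases[addr]
                    self.free.append(addr)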
> Also couldn't SNI be used here to do this via a single IP?
SNI is HTTP. Doing it this way works with other protocols too.
SNI isn't just for HTTP. It's basically just a field in the TLS handshake that tells the server what name the client is interested in.
I can't think of a way to make this compatible with the DNS method you're using now, though... you'd need a new address class that only ever returns a fixed IP by DNS, used exclusively when the requested .key name is determined by other means. You could do this for TCP/80 with the HTTP Host header and TCP/443 with SNI, for instance. I'm wondering if one way to do it would be with haproxy, to avoid having to implement this yourself.
Since a lot of connections going over Snow are going to be HTTP or HTTPS, this might make sense, at least for IPv4, where your IP space is limited.
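For reference, the SNI value is easy to get at server-side; a hedged Python sketch (placeholder certificate files, hypothetical .key name; not snow's implementation):

    import ssl

    # Hedged sketch: read the SNI field from the TLS ClientHello to learn
    # which name the client wants, so one listening IP can front many .key
    # names. The certificate files below are placeholders.
    requested = {}

    def on_sni(tls_sock, server_name, context):
        requested[tls_sock] = server_name  # e.g. a hypothetical "aaa.key"
        return None  # returning None lets the handshake continue normally

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("cert.pem", "key.pem")  # placeholder files
    ctx.sni_callback = on_sni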
You're right that SNI is part of TLS rather than HTTP. It's the same trouble though. Most applications don't support TLS+SNI.
The way Tor does it is to use a SOCKS proxy. Then you don't need an IP address and it's protocol-agnostic but the client application has to support SOCKS.
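A minimal sketch of that pattern using the third-party PySocks library (the .onion address is a placeholder, not a real service):

    import socks  # third-party: PySocks

    # Minimal sketch: any TCP protocol can be pointed at Tor's local SOCKS5
    # proxy, which resolves the (placeholder) .onion name itself. The client
    # app needs SOCKS support, but no IP address for the hidden service.
    s = socks.socksocket()
    s.set_proxy(socks.SOCKS5, "127.0.0.1", 9050)  # Tor's default SOCKS port
    s.connect(("exampleonionservice.onion", 80))  # hypothetical hidden service
    s.sendall(b"GET / HTTP/1.0\r\nHost: exampleonionservice.onion\r\n\r\n")
    print(s.recv(4096))
    s.close()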
I'm not sure address space limitations are even a major problem. Address assignments are local, not global, and there are millions of RFC1918 addresses.
For HTTP you have to solve it from the other side anyway. An HTTP server would be more likely to run out of addresses than a client would. But running out of IPv4 addresses is what IPv6 is for. We could even return both IPv4 and IPv6 addresses until the IPv4 addresses run out. Then if you want to burn through millions of peers you just have to support IPv6.
This is amazing. The ability to create private networks from arbitrary subsets of networked machines without any (serious) restrictions opens up all sorts of new possibilities. For some reason though the first use case that comes to mind for me is operating botnets.
Maybe I'm missing something, but what is the point of using the public keys if they need to be mapped to IPs? Don't you still need DNS servers to resolve the IP addresses?