I welcome the current stance of the IETF regarding privacy, opportunistic encryption and mass surveillance. I hope we can stop mass surveillance by technical means.
Our political leaders have made it clear that they are unwilling and/or incapable of stopping this continued human rights violation.
You'll recognise me from the CFRG threads about that topic.
The subject is moot now: Kevin M. Igoe is retiring, so he's stepping down at the end of the year (and probably not doing anything major until then, other than participating as anyone else could, with of course full knowledge of where he works). The other current co-chair is also stepping down, in favour of two new unrelated co-chairs who have been extremely open and forthright with their status - and, to the best of my knowledge, don't work for a Nation State Adversary. Things seem to have turned out okay.
In the meantime, CFRG has been documenting - in the public eye - a ChaCha20-Poly1305 AEAD (the djb alternative to the NIST-preferred AES-GCM; a fast, constant-time AEAD built on a 256-bit-key stream cipher) and has, after a lively discussion, selected djb's Curve25519 (the most mature of the so-called "SafeCurves") as the preferred curve for new IETF protocols, alongside other lively and open discussion about the promises and perils of PAKEs, signature schemes and other such things. They'll be coming to protocols near you soon: various WG drafts are in progress now.
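For the curious, here's roughly what those two primitives look like when glued together. This is only a minimal sketch using the Python "cryptography" package, not the construction from any actual CFRG/IETF draft; the HKDF info string and messages are made-up placeholders.

    # Minimal sketch: X25519 (Curve25519) key agreement feeding a
    # ChaCha20-Poly1305 AEAD, via the Python "cryptography" package.
    # Illustrative only; real protocols add nonce management, key
    # schedules, transcripts, authentication of the exchange, etc.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Each party generates a Curve25519 key pair and swaps public keys.
    alice_priv = X25519PrivateKey.generate()
    bob_priv = X25519PrivateKey.generate()

    # Both sides compute the same shared secret...
    shared = alice_priv.exchange(bob_priv.public_key())

    # ...and stretch it into a 256-bit symmetric key with HKDF.
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"example handshake").derive(shared)

    # ChaCha20-Poly1305 encrypts and authenticates in one pass.
    aead = ChaCha20Poly1305(key)
    nonce = os.urandom(12)  # 96-bit nonce; must never repeat under one key
    ct = aead.encrypt(nonce, b"hello, IETF", b"associated data")
    assert aead.decrypt(nonce, ct, b"associated data") == b"hello, IETF"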
> Making networks unmanageable to mitigate PM is not an acceptable outcome
Not sure that I'm reading this correctly, but "managed network" was a euphemism for deep packet inspection used as a censoring tool (by government-controlled ISPs) some years ago. This reads a bit like the IETF had to unanimously yield, not to the "but who will think of the children" arguments of the time, but to the spying threat; it makes the Snowden revelations an interesting linchpin in making surveillance acceptable as a weapon against surveillance.
I think they're speaking more about the much less exciting, everyday network monitoring that goes on inside the enterprise - troubleshooting, performance tuning, etc. But you've got an interesting, if a bit cynical, interpretation there.
“You can't solve social problems with software.” – Marcus Ranum
We begin therefore where they are determined not to end, with the question whether any form of democratic self-government, anywhere, is consistent with the kind of massive, pervasive surveillance into which the United States government has led not only us but the world.
This should not actually be a complicated inquiry.
>“You can't solve social problems with software.” – Marcus Ranum
This is the first time I've seen that quote or that name. I have no idea why I should believe that quote, though. For one, the NSA, CIA, Russian intelligence (what's the preferred name these days anyway?), etc all invest significant time and money into software.
Additionally, I feel like the Internet has solved lots of problems for me personally, even if those aren't "social" problems per se.
Right now, on one hand, we're spending billions of dollars for this Myth of Homeland Security in the hopes of protecting against terrorists, rogue states, and ideological nutcases. But, on the other hand, corporate America is lining the pockets of executives...
Additionally:
A Web 2.0 site may allow users to interact and collaborate with each other in a social media dialogue as creators of user-generated content in a virtual community, in contrast to Web sites where people are limited to the passive viewing of content. Examples of Web 2.0 include social networking sites, blogs, wikis, folksonomies, video sharing sites, hosted services, Web applications, and mashups.
What are the odds of something like this becoming core to how the IETF writes its recommendations? Is this an actual attempt to address the surveillance issues that have become so prevalent, or should I not get my hopes up?
> What are the odds of something like this becoming core to how the IETF writes its recommendations?
This is a document mandating just that.
Status-wise, this is a BCP (Best Current Practice) document, so it's as senior and authoritative as IETF documents get.
BCPs are on the same top tier as the IETF's Standards Track documents; they go through a heavy (as far as the IETF goes) process involving IESG approval etc.
This means that specs have to take this BCP into account in their (mandatory) security considerations sections, and spec reviewers are required to take issue if a spec isn't doing that.
Like rules in general, this is subject to gaming on various levels, so if a working group is manned by snooping enthusiasts they can conspire to do this weakly. But it's still something!
Yes, HTTP/2 is finally taking opportunistic encryption seriously (think STARTTLS in SMTP), whereas the pre-Snowden http://tools.ietf.org/html/rfc2817 "Upgrading to TLS Within HTTP/1.1" never took off at all.
In fact, there are now not one but at least four different post-Snowden drafts aiming at seamless, behind-the-scenes encryption on top of the HTTP:// address scheme, which is exactly the solution I'm personally advocating for, since it's backwards compatible with essentially all of the existing infrastructure, has negligible cost, and has no tie-ins, unlike full-blown HTTPS://.
There are also proposals for lower-layer opportunistic encryption, like Tcpcrypt. Being an OS feature, it would take considerably longer to reach widespread deployment, but it would add opportunistic encryption to all TCP connections, including regular HTTP.
Opportunistic encryption without authenticated certificates is basically useless. Actively decrypting and re-encrypting the entire current internet at line speed can be achieved for only a few million dollars.
It's neither a technical nor economic deterrent to surveillance.
An SSH-style TOFU (trust-on-first-use) auth model might provide some safety here, but at a significant cost in user experience when certificates do change.
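For concreteness, the TOFU idea is roughly the following (a minimal sketch using only Python's standard library; the cache file name and return strings are invented for illustration):

    # Sketch of trust-on-first-use (TOFU) pinning, in the spirit of
    # SSH's known_hosts. Illustrative only: a real client would handle
    # key rotation, expiry and user interaction far more carefully.
    import hashlib
    import json
    import os
    import ssl

    PIN_FILE = os.path.expanduser("~/.known_tls_hosts.json")  # hypothetical cache

    def check_pin(host, port=443):
        # Fetch the server certificate (PEM) and fingerprint it.
        pem = ssl.get_server_certificate((host, port))
        fingerprint = hashlib.sha256(pem.encode()).hexdigest()

        pins = {}
        if os.path.exists(PIN_FILE):
            with open(PIN_FILE) as f:
                pins = json.load(f)

        known = pins.get(host)
        if known is None:
            # First contact: trust and remember (the "leap of faith").
            pins[host] = fingerprint
            with open(PIN_FILE, "w") as f:
                json.dump(pins, f)
            return "first use: pinned"
        if known == fingerprint:
            return "ok: matches pin"
        # Mismatch: could be a MITM, or just a legitimately renewed cert.
        # That ambiguity is exactly the UX cost mentioned above.
        return "WARNING: certificate changed since first use"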
I'm going to have to disagree with you there. See, even without authentication, in a world of pervasive opportunistic encryption, Eve is pretty much screwed for spying on everyone's data, and we can cause trouble for a fair amount of metadata collection, too. And a pretty large amount of the pervasive surveillance infrastructure out there now relies quite a lot on the information gleaned just from being Eve.
So, you say, a Nation State Adversary will just switch to being Mallory and MITM all the things? Well, that might not be a choice they want to make. In a good protocol design allowing for opportunistic encryption, Mallory can't actually tell for sure if Alice and Bob are being opportunistic or not. That means Mallory has to guess whether they can safely run active interference without getting caught, or whether Alice or Bob are in a position to find out about it and, thanks to things like Certificate Transparency and others, loudly tell the world about it - and then Mallory's going to have to fend off awkwardly pointed questions about why exactly they're spying on Alice the source and Bob the journalist, or Carol the perfectly normal person and Dave their lover having webcam sex over Yahoo [hypothetical names; real GCHQ operation, search platform cover-named OPTIC NERVE].
Opportunistic Encryption techniques give us the opportunity to, with no configuration necessary, close the door on several of the easiest ways to be able to spy on everyone covertly. That's a big improvement from where we are right now. It forces a potential Nation State Adversary to weigh up carefully the risks of using such a capability, given they can't be certain to hide it: the more they use it, the more likely they'll get caught. So they can either act in the open, where everyone can see it, or choose to not act at all, or (most likely) act more selectively (which isn't perfect, but it's a start). That is a hard choice for many of them to make and potentially opens the door to actual discussion about oversight (or lack thereof), necessity (or lack thereof), usefulness (or lack thereof), and the incredible risks posed by such technologies' deployment.
I don't think there's a hell of a lot we can technically do about a pervasive determined well-funded doesn't-give-a-damn-about-anything jackbooted Mallory, using what I'll call constructive techniques (as opposed to destructive techniques, which, for example, actively disrupt surveillance infrastructure - which of course are, from the adversary's point of view, perfectly okay for them but highly illegal when you do it). That's not the threat we face in all places, however: the situation isn't hopeless, everywhere, yet.
As the BCP points out, we need non-technical (i.e. political, etc) solutions to this attack model too: this isn't a problem we can solve on our own. But piece by piece, we're doing what we can to combat this attack - and this BCP is a clear statement of the overwhelming IETF consensus that we do regard it as having a malicious impact on the internet as a whole, and that it's an attack we need to address every practical way we can.
We already know that these agencies carry out real MITM attacks on users regularly. We already asked them pointed questions as to why they do this. The reality is that they don't care yet. They have near-complete legal freedom to carry out these attacks.
Maybe OE would raise more eyebrows? I don't really think anyone will even care if the NSA are caught spying on your cobbled together private Web mail server. The applications that OE will be deployed on just aren't politically sensitive enough IMO.
I don't think OE will likely make anything worse. I just don't think it will actually achieve anything. I'm certainly not going to tell someone who can't use a CA cert not to deploy it though.
If I'm reading it right, this RFC is seeking public input on a proposed policy of considering pervasive monitoring (PM) another attack vector among a host of attack vectors that should be considered when designing protocols.
I am blissfully ignorant on the politics of how the IETF works so who knows what the actual odds are. But I think it's good that it's at least being considered. That said, I don't know how or if "pervasive monitoring" is mitigated any differently than plain old MITMs. Seems a bit like tilting at windmills if the corporations that host our data are either voluntarily complying with "PM" or being compelled to by subpoena.
> That said, I don't know how or if "pervasive monitoring" is mitigated any differently than plain old MITMs.
One of the primary concerns with pervasive monitoring is metadata. We could see an increase in protocols that make efforts to not only protect what you're communicating but who you're communicating with.
> Seems a bit like tilting at windmills if the corporations that host our data are either voluntarily complying with "PM" or being compelled to by subpoena.
The general idea is to change the status quo. Use distributed systems rather than centralized ones.
Yay, a whole new class of protocols which will make it harder to communicate with each other.
I'm all for secure protocols but metadata protection is so nebulously defined, and introduces such ridiculous problems, that I really don't want it built into core protocols.
There is no solution to it - moving traffic from here to there via a third party requires telling someone how to do it. So long as that third party exists, the problem is unsolvable outside of distributed-risk models like Tor.
> There is no solution to it - moving traffic from here to there via a third party requires telling someone how to do it.
Why do I have to do everything myself? OK, here's IPv8: The only unencrypted part of the packet is the destination address; the rest of the packet is encrypted with CurveCP. Moreover, every router has a public key, and you can set the "destination address" to any router on the path to the actual destination and encrypt the entire actual packet with its public key. That router will receive the packet and decrypt it, only to discover that it contains another encrypted packet, which it sends on its way to the next destination. If you like, you can do this more than once. It's onion routing at the IP level. Tor without the inefficiency, because the "relays" are already devices on the path to the destination, and in some ways with better security, because each "relay" doesn't inherently know whether the previous hop was also the previous relay or whether the next relay is the actual destination.
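To make the layering concrete, here's a toy sketch using PyNaCl's Curve25519-based SealedBox (CurveCP itself is a full transport protocol, and the "to:" routing labels and key handling here are invented purely for illustration):

    # Toy sketch of onion-style wrapping with Curve25519 sealed boxes
    # (PyNaCl). Not CurveCP, and not a real packet format; it only
    # demonstrates the nesting described above.
    from nacl.public import PrivateKey, SealedBox

    # Pretend keys: the final destination plus two on-path routers.
    dest_key = PrivateKey.generate()
    router2_key = PrivateKey.generate()
    router1_key = PrivateKey.generate()

    payload = b"inner packet: real headers and data, all encrypted"

    # The sender wraps inside-out: destination first, then each router.
    layer = SealedBox(dest_key.public_key).encrypt(payload)
    layer = SealedBox(router2_key.public_key).encrypt(b"to:dest|" + layer)
    layer = SealedBox(router1_key.public_key).encrypt(b"to:router2|" + layer)

    # Each hop peels exactly one layer and learns only the next hop.
    hop1 = SealedBox(router1_key).decrypt(layer)            # b"to:router2|..."
    hop2 = SealedBox(router2_key).decrypt(hop1.split(b"|", 1)[1])
    final = SealedBox(dest_key).decrypt(hop2.split(b"|", 1)[1])
    assert final == payload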
"destination address" to any router on the path to the actual destination
1) In normal IP, the source does not know the route to the destination. You can try to guess with traceroute, but that's not authoritative. And the route out may not be the same as the route back. There may not even be a single well defined route.
2) This has the same pattern as the various reflection/amplification attacks and facilitates DDoS. I think that's what "network management" means; if you provide means for people to flood the network with bad traffic, then it becomes unusable for everyone, or at least it becomes possible to silence a site or endpoint on an ongoing basis.
3) You've assumed that there's no legal responsibility attached to re-emitting these packets.
> 1) In normal IP, the source does not know the route to the destination. You can try to guess with traceroute, but that's not authoritative. And the route out may not be the same as the route back. There may not even be a single well defined route.
This isn't normal IP, it's new IP. Moreover, using a relay would be optional (just encrypting the source address would give 90% of the benefit) and the worst case for having chosen one that isn't a router on the preferred route is that the packet would take a mildly suboptimal path to the destination.
> 2) This has the same pattern as the various reflection/amplification attacks and facilitates DDoS. I think that's what "network management" means; if you provide means for people to flood the network with bad traffic, then it becomes unusable for everyone, or at least it becomes possible to silence a site or endpoint on an ongoing basis.
I don't see amplification. You send one packet, each router forwards it once. I suppose the attacker could have a packet go back and forth between the same routers multiple times but that is already possible in existing IP using source routing and is trivially mitigated by adding a hop count / TTL field.
Amplification means you can send a small amount of data and cause someone else to send a large amount of data to the target. For example, if you send an EDNS query to a DNS server with many records for a particular name, the query is very small and the response could be very large. I don't see that here.
Reflection is much more benign. It doesn't allow an attacker with 100Mbps of bandwidth to convert it into 10000Mbps of bandwidth, it only allows an attacker who doesn't care about receiving a response to remain anonymous. So your complaint about a technical measure designed to allow people to remain anonymous is that it would allow people to remain anonymous. Feature not bug.
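To put rough numbers on that distinction (purely illustrative figures, not measurements):

    # Back-of-the-envelope amplification factors (illustrative numbers only).
    dns_query = 64           # bytes: a small EDNS query
    dns_response = 3000      # bytes: a large response for a record-heavy name
    print(dns_response / dns_query)        # ~47x: attacker bandwidth multiplied

    relayed_packet = 1200    # bytes the attacker sends into the relay scheme
    forwarded_packet = 1200  # bytes each hop re-emits, once
    print(forwarded_packet / relayed_packet)  # 1x: reflection, no amplification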
> 3) You've assumed that there's no legal responsibility attached to re-emitting these packets.
The existing routers on the existing internet are already re-emitting all the packets. That's what routers are for.
Obviously Congress could pass whatever law making it legal or illegal after the fact, but that's orthogonal to the technical question of how it can be done at all.
> This isn't normal IP, it's new IP. Moreover, using a relay would be optional (just encrypting the source address would give 90% of the benefit) and the worst case for having chosen one that isn't a router on the preferred route is that the packet would take a mildly suboptimal path to the destination.
"Mildly suboptimal" is the difference between playable latency in online gaming and unplayable. It's the difference between VOIP and video calling working and not working.
You're sweeping those issues away under the guise of "probably not so bad", yet we've had decades of experience finding out that, yeah, they are that bad, which is why the modern internet has ended up the way it is.
Your argument is that we can't have an optional feature that provides stronger anonymity because when you use it there could be a few ms of latency that would be intolerable to some applications that aren't required to use it?
Yup, the only way to really hide metadata is for every device to talk to every other device constantly, either with "real" data if they have something to send or "padding" if they don't. That way a sniffer can't tell who you're sending real data to. Everything else is vulnerable.
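A crude sketch of that constant-rate idea (the frame size, tick interval, framing and "send" hook are all arbitrary placeholders):

    # Crude sketch of constant-rate cover traffic: emit one fixed-size
    # frame per tick, padding when there's no real data, so an observer
    # sees the same pattern either way. Transport/framing are placeholders.
    import os
    import queue
    import time

    FRAME_SIZE = 1024      # bytes per frame (arbitrary)
    TICK_SECONDS = 0.1     # one frame every 100 ms (arbitrary)

    outbox = queue.Queue()  # real application messages (bytes) go here

    def next_frame():
        try:
            data = outbox.get_nowait()[:FRAME_SIZE - 1]
            return b"\x01" + data.ljust(FRAME_SIZE - 1, b"\x00")  # real data
        except queue.Empty:
            return b"\x00" + os.urandom(FRAME_SIZE - 1)           # padding

    def run(send, ticks):
        # 'send' is whatever writes a frame onto the (encrypted) channel.
        for _ in range(ticks):
            send(next_frame())
            time.sleep(TICK_SECONDS)

    # Example: a list stands in for a real socket.
    sent = []
    run(sent.append, ticks=5)
    assert all(len(frame) == FRAME_SIZE for frame in sent)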
> That said, I don't know how or if "pervasive monitoring" is mitigated any differently than plain old MITMs.
The difference is that with passive monitoring, you don't have to worry about authentication, and opportunistic encryption without any sort of authentication is good enough. Think STARTTLS in SMTP.
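In code terms, "encryption without authentication" is just a TLS handshake with verification switched off; a minimal sketch with Python's ssl module (the host name is a placeholder):

    # Minimal sketch of opportunistic, unauthenticated TLS: encrypt the
    # channel but skip certificate verification. This defeats a passive
    # eavesdropper (Eve) but not an active MITM (Mallory).
    import socket
    import ssl

    def opportunistic_connect(host, port=443):
        ctx = ssl.create_default_context()
        ctx.check_hostname = False        # don't check the host name...
        ctx.verify_mode = ssl.CERT_NONE   # ...or the certificate chain
        sock = socket.create_connection((host, port))
        return ctx.wrap_socket(sock, server_hostname=host)

    # e.g. tls = opportunistic_connect("example.org")  # placeholder host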
I actually found this RFC7258 through http://tools.ietf.org/html/draft-nottingham-http2-encryption... "Opportunistic Encryption for HTTP URIs", so, there's that. Personally very interested in this, as I have a dozen different domains and subdomains for my non-profit reference websites like http://mdoc.su/, and I also care about supporting legacy clients and having minimal administrative costs, hence I cannot deploy HTTPS://.
In your last paragraph, you've actually provided a good example of why opportunistic encryption is dangerous. Instead of deploying proper HTTPS, you're thinking that opportunistic encryption is good enough for you, that it is a valid replacement for HTTPS.
> Instead of deploying proper HTTPS, you're thinking that opportunistic encryption is good enough for you, that it is a valid replacement for HTTPS.
But I will never deploy HTTPS:// regardless - no backwards compatibility with http-only clients, not even much compatibility with HTTPS-enabled clients that lack SNI support (e.g. Android 2.3 or Windows XP), the tie-in of the new address scheme, and huge opportunity costs for non-commercial multi-dozen-site owners: certificates for wildcard domains like "*.example.net" cost several times more than the domain names themselves, multi-domain certificates covering both "example.net" and "example.com" simultaneously (to avoid having to deal with SNI) cost even more, and those $50+/domain multi-domain certs don't even seem to offer wildcard support, either.
So, even though I will never deploy HTTPS://, you think opportunistic encryption is still a waste of time for me? IETF now begs to differ! And I'm very glad they finally do!
Your comment provides an excellent example of why the current view of HTTPS is dangerous to security. It blurs the line between encryption, authentication, and trust.
Opportunistic encryption makes encryption a fundamental part of the network, whereas HTTPS adds features on top of a primitive network design. So long as people understand what HTTPS does, putting encryption into the transport layer will highlight HTTPS as an authentication system and make network security more understandable, more reliable, and, in the long run, better.
I thought I had been paying attention to this as a proposed draft, but actually didn't notice that it had achieved BCP status in May!: https://datatracker.ietf.org/doc/rfc7258/
This was pleasantly speedy (six months, from November '13 to May '14), by IETF standards.