Opportunistic Encryption for Firefox (bitsup.blogspot.com)
109 points by cpeterso on March 27, 2015 | 82 comments



I'm really glad to see this.

I really dislike that browsers seem to treat self-signed certificates as worse than plain HTTP (in that self-signed certificates cause a big scary yellow warning that looks similar to the big scary red warning for invalid certs).

Self-signed certs are bad insofar as you can't prove that someone else isn't MITMing the connection and serving you an encrypted, but untrustworthy, proxy page. But with plain HTTP, you already have no guarantees that that isn't happening![0]

If I understand it correctly, this seems to combine the best of both: encouraging the use of self-signed certificates over plain HTTP, while still rewarding verified certs (ie, signed by a trusted third party) over self-signed.

[0] If you really care, you should use a CA-signed cert. But every attack that's possible when using self-signed certs is not only possible when using plain HTTP, but much easier to execute, and also much easier to execute silently.


I won't defend it, but I can imagine that an argument for why self-signed certs could be worse than plain HTTP might be that if you're using HTTP, the communication isn't meant to be secure/trustworthy, but the presence of a certificate means that your communication is meant to be one or both, and since the cert is self-signed, it may not be either. This argument does presume that site administrators know what does and doesn't need to be confidential, which of course may not be the case.

Some argue that all communication should be encrypted, but that's another issue.


It's an argument made through misleading categories. A self-signed cert specifically is not an indication that the connection needs to be trustworthy, so treating it like HTTP is fine. If the site is marked with HSTS, then fail it just as you would plain HTTP. Don't send secure cookies. Easy.


But if someone clicks a link that says https://, then the person who created that link wanted there to be authentication, so an error would be the correct behavior.


Why not just show an easily visible indication that says:

>Connection is encrypted but authenticity cannot be confirmed. Connection is not secure unless you trust the fingerprint!


Because security is already confusing enough for the average user, and screen real estate is valuable. We don't need an extra type of distinction that they need to know. We don't want to have to say to someone, "before logging into your bank, make sure that it is https AND that there isn't some little warning somewhere about unconfirmed identity."


Then create a new name, like ohttp. If it is the name of the protocol that is causing a roadblock for better security, then changing the name should be the obvious choice. Alternatively, we could always treat http as being both an unencrypted protocol and an opportunistically encrypted protocol.


They are treating http as being both an unencrypted protocol and an opportunistically encrypted protocol.


1. Not true if that link replaced one that said http via an automatic process, such as HTTPS Everywhere or a search engine doing the wrong thing.

2. Certificate errors are much more confusing than a failure to connect.


I don't want my UA showing me the wrong protocol though.


Depends on how you define "wrong". The S stands for secure, that's been pounded into people's heads. HTTP 1 and 2 are miles apart and get the same protocol marker. So a secure connection with TLS gets shown as HTTPS, and an insecure connection with TLS gets shown as HTTP.


> The S stands for secure, that's been pounded into people's heads.

The S stands for "this connection is on port 443 and has been negotiated with TLS". That means it's being encrypted and has been authenticated.

> HTTP 1 and 2 are miles apart and get the same protocol marker.

Which was and is a terrible terrible idea and among the many reasons I dislike HTTP2.

> So a secure connection with TLS gets shown as HTTPS, and an insecure connection with TLS gets shown as HTTP.

Which is a terrible terrible idea. The protocol marker should denote the protocol used, not some fluffy stuff that doesn't really mean anything to the user.


> port 443

That's just a default.

>That means it's being encrypted and has been authenticated.

Except that TLS does not always imply encryption and authentication. You can use TLS with a self-signed cert. You can use TLS with a broken cipher. You can use SSL_NULL_WITH_NULL_NULL. Browsers will helpfully provide things like a slash through the https in cases like this.

Whether the connection is actually encrypted and authenticated is what the vast majority of users care about. I'm not sure how you can call that fluff.

(Now as far as http 2, it's tricky because it's kind of at a lower layer that still runs http on top. And when you're linking to something you don't care about whether that layer exists. So there are pluses and minuses to having http2:// or spdy:// or whatnot. Not that any of that matters, because cocksure middleboxes nearly force it to continue to use http://.)
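
To make that concrete, here's a rough sketch (Python, with example.com as a placeholder; this is not how any browser is actually implemented) of the difference between a verifying handshake and a "TLS, but take whatever cert you get" handshake. Both encrypt; only the first authenticates, and the negotiated cipher is visible either way:

    # Compare a verifying TLS handshake with a non-verifying one.
    import socket
    import ssl

    host = "example.com"  # placeholder host, just for illustration

    # Strict context: verifies the certificate chain and hostname.
    strict = ssl.create_default_context()

    # Permissive context: still encrypts, but skips verification entirely,
    # so a self-signed (or swapped-in) cert is accepted without complaint.
    permissive = ssl.create_default_context()
    permissive.check_hostname = False
    permissive.verify_mode = ssl.CERT_NONE

    for name, ctx in [("verified", strict), ("unverified", permissive)]:
        try:
            with socket.create_connection((host, 443), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=host) as tls:
                    # tls.cipher() -> (cipher name, TLS version, secret bits)
                    print(name, "->", tls.cipher())
        except ssl.SSLError as err:
            print(name, "-> handshake failed:", err)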


> That's just a default.

And your point is? If there is no port specified, that's what it's on if the protocol is HTTPS?

> You can use TLS with a broken cipher. You can use SSL_NULL_WITH_NULL_NULL. Browsers will helpfully provide things like a slash through the https in cases like this.

As they should, because allowing such things to happen automatically is contrary to what the spec is expecting to happen:

> If the hostname does not match the identity in the certificate, user oriented clients MUST either notify the user (clients MAY give the user the opportunity to continue with the connection in any case) or terminate the connection with a bad certificate error.

elsewhere

> In order to prevent this form of attack, users should carefully examine the certificate presented by the server to determine if it meets their expectations.

- RFC 2818 "HTTP over TLS" (http://tools.ietf.org/html/rfc2818)

(for the second quote, the User Agent automates that process to make sure the certificate meets trust and expiration requirements and the cipher suite meets expectations.)


A self-signed certificate provides a defense against a passive attacker, and plain HTTP does not. That's an important difference IMO.


That's good in that it requires a slightly more sophisticated attack, but I don't really view that as a very effective defense.

However, the bigger thing a self-signed cert allows you is the ability to verify later if your communication has been snooped by an active attacker--which is very significant. It's problematic that the communications aren't authenticated before they occur, but that doesn't mean they can't be authenticated later (via CAs or other means).
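
As a rough sketch of the "verify later" idea (the helper name and host below are made up): record the SHA-256 fingerprint of whatever certificate you were served, and compare it out of band afterwards.

    import hashlib
    import socket
    import ssl

    def leaf_cert_fingerprint(host, port=443):
        """Return the SHA-256 fingerprint of the certificate the server presented."""
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE  # accept self-signed certs; we only want the bytes
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)  # raw DER bytes of the leaf cert
        return hashlib.sha256(der).hexdigest()

    print(leaf_cert_fingerprint("example.com"))

If that value ever disagrees with what the operator publishes, or with what you recorded on an earlier visit, something was in the middle.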


> That's good in that it requires a slightly more sophisticated attack, but I don't really view that as a very effective defense.

A slightly more sophisticated attack, indeed — but one that doesn't scale well. It ups the cost of mass surveillance tremendously.


I hear this frequently but I don't buy it. What about the MITM scales worse than a passive listener?

As far as I can tell, both insertion of a passive listener and a MITM are algorithmically O(n) on the number of connections being surveilled. All you're increasing for the MITM is the constant factor.

When you're on the order of billions of connections being surveilled, even linear growth is hard, but we know that the NSA already has done that. Increasing the difficulty by a constant factor is not much harder, and there's no question that the NSA has the budget to do so. And in fact, the constant factor isn't even large: it's whatever the resource costs of two connection handshakes per connection is, plus decryption/encryption on the data flowing across, all of which are highly optimized algorithms at this point.


No. Passively listening only requires dumping whatever flies over the interface of some router. Done! Very hard to detect. You can also just scan for keywords and only start dumping traffic when triggered. With SSL, you cannot retroactively decide that you'd want to dump traffic. You have to MITM from the beginning.

That's point 1.

Point 2: Detection.

Actively MITM-ing an SSL connection requires you to (if the CA Chain-O'-Trust works as supposed) give away the fact that you own a (valuable!) compromised CA authoritative cert. You would want to use such a cert for a targeted attack, not waste it on blanket surveillance and get found out (and called out) within a couple of days.

If you don't own a root cert but rely on vulnerable implementations, or implementations such as this one which do not rely on the CA infra, same story. You do not want to waste that on blanket surveillance and get caught. You'd save it for the /special occasions/.

SSL-MITMing everyone, all the time, as in blanket surveillance, is infeasible even without CA chains. You'll get called out on it by people that do check cert fingerprints once in a while.

This is a different kind of scalability than the computational order-of-complexity one that you seem to be thinking about.


Can anyone who's in the "HTTPS without certificate verification is bad, m'kay?" camp (I'm not directing this to the author of the post I'm replying to) compare and contrast that approach to the approach of SSH, where there is (usually) no such validation, and the breaking of which could have much larger consequences on global security than just HTTPS?

Why not adopt the SSH model of accepting the server's cert at the first connection and then complaining loudly if the fingerprint changes? Or even go a step further and have browsers manage private keys (client certs) which identify the user? Some Linuxes already do this with SSH: if you have a private SSH key, an SSH agent will remember its decrypted form on first use, and use it for SSH auth.

The client certs can be exchanged in a standard format throughout browsers and devices, and mobile phone-based two-factor-authentication can be used to increase security.

So - bam, good enough MITM protection (sure, let the banks retain the PKI system), no passwords, and a completely ready implementation requiring no new standards. There is no requirement for the client certs to be "blessed" with strong identification with the person, so anonymity is achievable by having an arbitrary number of different client certs.
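
To sketch what that SSH-style policy would look like for certificate fingerprints (the file name and format here are invented, and a real browser would need much better UX than a warning string):

    import json
    from pathlib import Path

    PIN_FILE = Path("known_hosts.json")  # hypothetical pin store, one fingerprint per host

    def check_pin(host, fingerprint):
        pins = json.loads(PIN_FILE.read_text()) if PIN_FILE.exists() else {}
        if host not in pins:
            pins[host] = fingerprint                 # first visit: remember the cert
            PIN_FILE.write_text(json.dumps(pins))
            return "accepted (trust on first use)"
        if pins[host] == fingerprint:
            return "accepted (matches pinned certificate)"
        # The certificate changed: either the site re-keyed or someone is in the middle.
        return "WARNING: fingerprint changed since last visit"

As with SSH, this is only as strong as the first connection, and a legitimate re-key looks exactly like an attack; that's the hard UX problem.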


The self-signed certificate can be MITM'd.


What part of passive did you miss?


It is a GUI problem. How it is presented to the user.

Typical mildly advanced user thinks this way:

I see https:// in URL bar and no scary red/yellow untrusted cert dialog? -- Ok I am all set to type my credit card number in.

There is the green background in there too, which I forget what it means, but it suggests some extra "security/authentication" thing.

What should the encrypted but not authenticated mode show? https:// with a yellow or gray bar? Try to teach the user the difference between authenticated vs encrypted. Warn them once and put a "Got it, do not bother me again" dialog.

Remember windows vista with "deny or allow?" dialog every single time you did anything with your computer. The intent was good but how the users perceived it was the problem. Here it is the same problem.


The problem with that argument is that HTTPS is being increasingly used for all traffic, rather than just traffic meant to be confidential/authenticated. As a result, the semantic difference between HTTP and HTTPS is being lost in those cases.


The argument against this is that a URL starting with https means something, at least to some users, and your proposal would undermine that. When I want to go to my bank, I type chase into the address bar and choose one of the suggestions that starts with https and then go about my business. Maybe I should be paying attention to whether or not there's a green lock in the upper left and maybe I'd notice if it were missing, but I don't intentionally look for the lock. In your proposed change, the absence of a lock icon would be the only way I'd notice if my connection were being MITM'ed. So, in that way it would be making the web less secure for users who are entering an https URL and expect it to be secure.

That's what's attractive to me about Firefox's opportunistic encryption proposal. It really makes no change in the UX compared to an unencrypted connection, not in the URL and not in the time to connect.


I'd be interested to know how many users notice the HTTPS as opposed to the padlock. After all, no-one types the "http" in a web address anyway.


There are different classes of users. Above average users will notice.

My mom for example won't notice. She doesn't know what a URL bar is. She doesn't even look at that place. She clicks on her toolbar for Gmail and other sites she uses. She wouldn't care if it said http:// https:// or foobar://. She will be scared and be afraid to proceed if she would see the "Warning, untrusted certificate!" dialog.

My wife knows something about http vs https. If it says https she knows it is safe to type in her credit card number. https means "secure". But she doesn't know about authentication vs encryption. And will be equally scared about "Warning, untrusted certificate!" dialog. That might as well say "Your computer is hacked and you have a virus".

Anyway, just giving a perspective on how this affects different classes of users. As I put it in another post, it is in large part about the UI.


How about an open padlock? If I saw that, my initial reaction would be "WTF?", I'd mouse over it, and then get a dialog explaining it.


The problem is a "real" certificate being MitM'd and replaced with a self signed one. Things like certificate pinning might be able to mitigate that. Even though it's been a bit of a farce (in no small part due to the quizzically large set of trusted root CAs) the idea is that the CAs that you trust do their due diligence and don't sign fraudulent requests.

>[0] If you really care, you should use a CA-signed cert. But every attack that's possible when using self-signed certs is not only possible when using plain HTTP, but much easier to execute, and also much easier to execute silently.

Right, but the owner of the server you're connecting to is the one that has to choose if the data they're serving or receiving qualifies for protection of this sort. By omission they've basically taken the position that their pages or data are not worth preventing an attacker from MitM'ing you. If you allow self-signed certificates to fly without a warning at that point you're taking the decision to secure from a MitM attack away from the server owner that wants to protect their pages. Sure, a savvy user would notice a change, but what if they've never been to that site before?

Here's a scenario: Bob's Widgets sells Widget 9000 and has a secure site with a certificate signed by a CA. Alice hops on free WiFi at a Widget convention and visits for the first time, hoping to buy some of these amazing widgets Bob sells. Except Mallory intercepted the traffic and swapped out a self signed certificate. Bob's only way of telling Alice that his certificate and therefore traffic is authentic is by using one signed by a trusted third party.

Now that isn't to say that I don't think your idea has merit. It just doesn't fit into how we currently do things. What might work is keeping port 80 and port 443. Port 80 is secured by a self-signed certificate but the user isn't informed (via their browser's indicators, like a green icon or what-have-you) that their connection is secure, because it isn't in the sense that it will protect them from MitM attacks. It is however encrypted, and you can have the browsers trigger warnings when the key changes, similar to the way SSH does. 443 still behaves exactly the same, where users only trust CA signed certs by default.


I will again state my solution to this: drop all support for HTTP. There still seem to be a large number of people that don't get why illusion of security is worse than knowing for a fact that there is no security. However, dropping HTTP solves this confusion and a host of other problems. Along with that, perhaps we can revamp the current CA system and have something that more closely resembles decentralized secure web.

Once again, if you advocate using HTTP, go ahead and drop ssh in favor of telnet.


Do you really mean dropping HTTP? HTTP itself just requires an underlying reliable byte stream and does not dictate whether that is bare TCP, TLS, or any other concoction.

Assuming you mean drop HTTP over bare TCP, and are advocating for HTTP over TLS, then everyone knows that is the right thing to aim for. But advocating revolution won't get anywhere since there is a phenomenal number of systems to update. Just like IPv6 deployment, it will take decades before HTTP over bare TCP is removed. The article is an excellent proposal because it helps migration.

Certificates do need to be sorted out. Heck they currently hinge on names, yet what name does a random router bought at Best Buy and plugged into my network have? Or the armada of IOT devices coming along - thermostats, fridges, light bulbs?


You are technically correct (the best kind), but of course I mean dropping HTTP in favor of HTTPS. I don't think many people could possibly confuse what I meant. We all know the way this works and I don't want to get bogged down in semantics. I don't see anything there as constructive.

I am advocating for a revolution. We have the technology today, and after Let's Encrypt goes live all we really need is for Google to announce that Chrome will run for six months with a warning when using HTTP (similar to what it does now for self-signed certs) and then will stop supporting HTTP altogether. How quickly do you figure people will move if this was announced?


There is no way the scheme could possibly work. For example my printer is managed over HTTP. I'd have to keep two browsers around - the current (non-http supporting) version, and an older (insecure etc) version for all the things still using HTTP. I also don't see Netgear/Linksys/Dlink/Foscam doing firmware updates for all their devices out in the field, and even if they did they would require management over HTTP in order to transform them to TLS.

Or in other words, you could cut off support for the major sites/devices using HTTP, but could never do it for everything in the next 6 months.

Since you are advocating TLS support being mandatory, then dropping HTTP for a different protocol is worth considering. HTTP/2 is a strong consideration, and currently is always over TLS.


Perhaps browsers could include a "legacy mode" where stuff behind an IPv4 NAT could still be accessible over HTTP. This would of course require all manner of warnings. In general though, my router has a self-signed cert and works over HTTPS. I can even load my own cert. OpenWRT FTW.


> This would of course require all manner of warnings

Ok, now try and word one of those so that regular folk understand what the heck it is saying. And your "legacy mode" means HTTP is still supported.

There is nothing wrong with deprecating HTTP over bare TCP, but being completely unsupported in the near future is unworkable. Everyone including this effort is about a slow steady migration.


In a world where users automatically equate encryption with safety, self-signed should be regarded with skepticism.


I would even go so far as to say that shipping with root certificates undermines the security of every user by APPEARING to verify identity, when in reality you're just trusting Mozilla, Apple, Google, Microsoft, etc., and they cannot possibly verify any fraction of the CAs that are handed out daily.


This is very interesting.

I think a better approach might be to separate encryption from trust.

I'm thinking back to Chrome's announcement that they were considering making http:// show some sort of warning.

What if:

* We make HTTP show up as "insecure" in browsers

* We make HTTPS work with self-signed certificates, and display websites encrypted that way the same way we currently show http

* We make HTTPS with Trust show up the way we currently show HTTPS

* Keep EV Certs the same


> * We make HTTP show up as "insecure" in browsers

> * We make HTTPS work with self-signed certificates, and display websites encrypted that way the same way we currently show http

Clear text and opportunistic encryption should have exactly the same UI. Security UIs are already too confusing, and we shouldn't introduce more complexity. Besides, one should never make a decision based on a clear text vs OE distinction, since OE is so easily defeated.


I guess this is why most administrators use telnet instead of ssh, since they don't have time to check ssh fingerprints... Well, not really. As it is, many people do make a decision based on a clear text vs OE distinction.

Now if just Firefox would store and check fingerprints of self-signed certificates, we would get the exact same benefit when telnet was replaced by ssh.


You don't check ssh fingerprints?


Administrators are a much more sophisticated group than average users. Presenting telnet/SSH as equivalent to HTTP/HTTPS is silly because they have drastically different use cases.


I think his point is that ssh is just opportunistic encryption if you don't compare the fingerprint at the first connection. Anybody could just MitM your first connection to the server without you noticing, and then MitM every future connection as well.

In a sense telnet/ssh is exactly like http/http_with_oe. And still we make a big deal out of the telnet/ssh difference while saying that http with OE is only marginally better than http.

Of course it's not nearly that black and white because ssh stores the fingerprint, meaning you are safe if (and only if) your first connection wasn't intercepted.


I might be wrong, but it seems like trust and encryption are fundamentally linked.

For example, in the case of self-signed certificates, the browser knows it is encrypting a message to the holder of the self-signed certificate, but there's no information on who that is. The certificate sent could be changed, and there's no way to know something is wrong.

Put another way, plain text is readable by anyone. Text properly encrypted to a known recipient is readable only by that recipient. But text encrypted to an unknown recipient is readable by only one entity, but that entity could be anyone, so it's equivalent to plain text.


Trust (authentication) and encryption are linked... iff your threat model includes defending against MitM attacks. For threat models that merely try to defend against trivial passive eavesdropping (such as bulk XKEYSCORE style surveillance or even just the local busybody that just learned how to use tcpdump), encryption alone is very useful.

I strongly recommend using full authentication whenever possible, of course, but in the cases where that is not (yet?) practical, enabling auto-generated self-signed certificates for TLS should usually be a very simple change.

Really, the only people that I expect would be against such an easy change are people doing NSA-style spying or businesses with a surveillance-based business model.


The local busybody - how are they attacking you? On WiFi or arp poisoning scenarios, they can MitM with the same setup they're already using.

If there was a big shift to encryption without verification, passive attacks could just start shifting to active ones. In most cases they're in place to do so. I suppose the major exception is optical splitter scenarios. But what's stopping them from replacing the splitter with another hop?


> If there was a big shift to encryption without verification

Shift from WHAT? Remember, nobody is suggesting shifting anything from proper encryption, or changing the UI in any way that might mislead a user into thinking they were using a "secure" connection. This is a replacement for plaintext only.

> But what's stopping them from replacing the splitter with another hop?

Cost and complexity. Which is to say, if anybody really wants to throw a few megabucks at the problem, they will succeed, but that is not a contingency that some threat models worry about.

I will refer to Dan Geer when he said[1] (and I suggest this is true for the corporation as well as government):

    The central dynamic internal to government is, and always
    has been, that the only way ... to control the many sub-units
    of government is by way of how much money [it] can hand out.
    ...
    Suppose, however, that surveillance becomes too cheap to meter,
    that is to say too cheap to limit through budgetary processes. 

This is what technology has done for the surveillance of plaintext. The marginal cost of recording "everybody"[2] went from about 1:1 in man-hours to practically free, largely because it is possible to log all the plaintext you want to disk, to be processed as needed (or to data mine). By raising the cost at all, we might be moving at least some types of eavesdropping back from "too cheap to limit".

Make the eavesdropper work for it, instead of simply folding because they might decide to upgrade to a MitM.

An even better question, though, is why are you so interested in keeping the internet in plaintext?

[1] http://geer.tinho.net/geer.blackhat.6viii14.txt

[2] for whatever definition of "everybody" you prefer


Some proposals were to warn about HTTP without OE, and then display the current HTTP plaintext UI for HTTP with OE. So long as there's no distinction to the end user UI so plaintext looks as good as OE, then I'm all for it.



I think that's basically what the Chrome guys proposed.


I think so too, but I don't recall them opening the doors for self-signed HTTPs as on-par with HTTP


>We make HTTPS work with self-signed certificates, and display websites encrypted that way the same way we currently show http

Then how do you know you are getting any sort of encryption?


Because the HTTP shows up as insecure, for one.


Opportunistic Encryption is harmful because people think it's actually useful. Here's the problem with opportunistic encryption:

  OE provides unauthenticated encryption over TLS for data that would otherwise be
  carried via clear text. This creates some confidentiality in the face of passive 
  eavesdropping,

You should never assume that 'eavesdropping' is passive. In almost every practical context of traffic interception, if you can read the transmission, you can write as well. If you're going to the trouble of installing some kind of tap, it makes more sense to make it read-write so you can actually do something with that intercepted connection. Collecting data is great, but hijacking is even better.

  and also provides you much better integrity protection for your data than raw
  TCP does when dealing with random network noise

This is a red herring, and if you need real integrity it is totally useless, since it doesn't prevent an active attacker from corrupting the data once they've hijacked your unauthenticated connection. For the majority of plaintext traffic, a small amount of corruption is way more efficient to allow for than breaking down and re-creating a connection every time a single bit gets flipped.

To use OE, you have to set up an SSL service in the first place, so just take the extra 15 minutes and make a real signed certificate. There is no such thing as "kind of" secure, after all. Encryption is intended to provide security. OE is not secure.


MitM takes more effort and is less common. You're only fooling yourself by thinking about it as black and white. How about "it's just as easy to MitM me, but I have reduced how common eavesdropping will be by 80%"? I want that. And with how common ISP surveillance is, the number is probably much higher than 80.

>There is no such thing as "kind of" secure, after all.

If you really want to get absolute, then you have to admit that any computer you have purchased parts for is not secure.

Levels of security exist.


Of course levels of security exist. And OE is not on the spectrum of levels of security, because it is not a secure connection: http://www.techopedia.com/definition/13266/secure-connection

It's an arms race. If I have sensitive data, you install a tap to capture it. If I begin obscuring the data from your tap, you begin injecting traffic to unobscure it.

If someone is trying to steal your data, they will not stop trying to steal it just because you applied ROT13 to it. Why do you think someone would just give up just because they now need to use an active attack? Active attacks are trivial once a tap is in place. We're not talking NSA research projects here; this shit has been going on since World War II. If they have a means to exploit your data, they will use it.


OE isn't about targeted attacks to get your data. This is about raising the cost of bulk eavesdropping. Yes, there are a lot of ways to easily do MitM, but all of those still cost more and have more opportunities for error and detection than a passive tap that is basically a giant tcpdump on the backbone.


If I paint my car an ugly color, it's half as likely to get stolen as if I paint it red. It's not a "security" measure because it doesn't result in a "secure car", but it is a measure that makes me less likely to be hit by a marginal effort attack.

Even if I leave the door unlocked and the keys under the seat.


> In almost every practical context of traffic interception, if you can read the transmission, you can write as well

Let's look at a real-world example here. Say an adversary is tapping one of the Atlantic cables between US and EU. That is what, a few Tb/s of data, and a few millions of concurrent connections. Then try to do a practical MITM attack (a write) for each of those connections, while maintaining data speed. Also, you must be 100% undetectable.

Can you do that?


> Opportunistic Encryption is harmful because people think it's actually useful. [...] You should never assume that 'eavesdropping' is passive.

- Google driving by with its Street View cars and capturing all your wifi traffic for later analysis is passive.

- People sitting in a coffee shop and capturing your wifi traffic is passive.

OE protects against these attack vectors. It does not protect against other attack vectors, but that does not mean it's harmful or useless.

Moreover, with pins it could be a first step to get rid of CAs.


>- People sitting in a coffee shop and capturing your wifi traffic is passive.

OE does NOT protect against this. If you can read wifi packets you can write them and MITM. You're basically arguing that at the moment it's more likely that they are using a snooping package that doesn't MITM. Sorta like "use Mac/Linux cause there's no viruses/malware/keyloggers". If there's a shift to OE, then sniffing packages will provide plugins to MITM OE when possible.

So sure, turn on OE, it won't _hurt_ if we don't tell the users via UI that there's anything special going on. It's like a CRC check to prevent minor accidents.


These are both examples of how OE would _not_ help you.

With google street view you're barely even in range long enough to pick up a couple packets, if the person was even using their computer while the car drove by; this is not what OE is designed to protect against. If the car sat outside their house, they'd still get owned, and OE would still not be useful.

Sitting on a cafe's wireless is literally the de-facto example of how to actively sniff or inject traffic on an unsuspecting victim. The only more bluntly useless case for OE is a state-sponsored mitm using coercion-induced signed certificates.

And you can't get rid of CAs.


You just continue to state the same thing: that OE does not protect against an active attacker. We know that. Nobody says that it does. What it does is protect against a passive attacker. And that makes it useful for some use cases.


fwiw the reference in the blog post about integrity protection and random network noise was a reference to the inadequacy of the tcp/ip checksums for protecting your data from normal network errors of the accidental non-attacking variety.

This is a real problem for cleartext http data, one which data transported over TLS does not have.


I would agree with you if this were released with great fanfare to regular users. I assume this won't be the case.


The problem with allowing self-signed certificates has always been distinguishing if a site should be signed by a CA or not. Consider the following situation:

Alice sends Bob a link: http://example.com

Bob trusts Alice and now knows that example.com is probably meant to be accessed over HTTP. Now for the next example:

Alice sends Bob a link: https://example.com

With the current implementation of browsers Bob knows that example.com should present a CA signed certificate. But what if example.com wants to encrypt their data, but for whatever reason uses a self-signed certificate? Some people say that Bob's browser should not display a "big scary" warning, but instead display a UI similar to when accessing a HTTP site. However, in this situation HTTPS has lost some meaning. I think http2 should work as follows:

http2:// - encrypted, not verified

https2:// - encrypted and verified

This way the protocol still conveys the same level of information.

However, if it were completely up to me, I would say ditch the CAs and use namecoin to verify certificates.


That's more or less what OE does. It allows the browser to use HTTP/2 (and encryption) to connect to a site, but keeps the user experience the same as unencrypted HTTP.

That's why self-signed certificates work in this context; the identity of the server's not supposed to be validated (unencrypted HTTP can't validate server identities), so the browser can accept a self-signed certificate without warning.

There's no change to how certificates are authenticated when accessing a site via an https:// URL.


> However, if it were completely up to me, I would say ditch the CAs and use namecoin to verify certificates.

Please, please, please, please no. Any kind of blockchain is far too vulnerable to attacks here to be a good source of authority.


This is a great step, one that I've been hoping for. Of course encryption without authentication is much worse than true encrypted transport (as in, with authentication), since it only prevents passive adversaries, but any sort of encrypted transport is better than plaintext. I'm also hoping this will ease the transition to TLS, since you can get it up and running without worrying about mixed-resource problems, then fix those one at a time.


I'd argue that the CA system is false authentication because it's fairly easy for the right players to tamper with. In that case, unauthenticated/encrypted transport is only a little less safe than "authenticated"/encrypted transport, but with the latter giving a higher illusion of safety than the former.

The only real trust that would work is distributed trust. The CA system is kind of a joke.

That said, yes, it does protect coffee shop HTTPS browsing better than a self-signed cert.


Does someone know how to set this up serverside? Does apache support this? I skimmed through the mod_spdy docs, but found nothing about opportunistic encrypion.

This, together with TACK or Certificate Transparency could be a CA killer.
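
For what it's worth, the blog post suggests the trigger is just an Alt-Svc response header on the existing plain-HTTP service, pointing at a TLS listener (which may use a self-signed cert) serving the same content on 443. Here's a minimal sketch of the plaintext side; the exact protocol token Firefox expects ("h2" here, though at the time it may have been a draft id like "h2-14") is a guess:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class AltSvcHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"hello over plain http\n"
            self.send_response(200)
            self.send_header("Alt-Svc", 'h2=":443"')  # advertise the TLS endpoint
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8080), AltSvcHandler).serve_forever()

With Apache the equivalent would presumably be emitting the same header via mod_headers plus a TLS vhost on 443, but I haven't tried it.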


This is a neat idea, but we really need to define a standard for self signed certificates. Something like certificate pinning should be mandatory. It should be done in a way so there is no confusion with a connection with identity protection but at the same time it is essential that the user knows they are at least getting the protection of the self signed certificate. Perhaps we need something like a httpq:// resource identifier.

For bonus points such a standard should incorporate a web of trust system that can not be overridden by a bogus certificate in the regular system. Ideally a self signed certificate provided by someone you can physically visit should be more secure than what we have now.

Added: I guess my point is that we are thinking about this backwards. A verified self signed certificate is the gold standard, not some inferior alternative. I should be able to walk into my bank and then walk out with a certificate on a USB key that can not be messed with. If we are going to change things we should strive to end up with the possibility of something better than what we have now.


This is only for HTTP/2. It could have been for HTTP/1.1 also, if there had existed a registered ALPN name for “HTTP/1.1 with TLS”. As it is now, HTTP/2 has both variants, but HTTP/1.1 stands alone¹.

https://www.iana.org/assignments/tls-extensiontype-values/tl...


fwiw the h1 barrier was the lack of a scheme indication in an h1 transaction - not really the alpn ids (which can always be registered if need be). But without a scheme the server just needs to infer http vs https from the port/address and that wouldn't work with alt-svc


Since this is a backport of a new feature from a new protocol to its older predecessor protocol, it is not necessary to be so wary of slightly uglier features. Simply having “Alt-Svc: http/1.1:443” imply HTTPS would do fine to solve this specific problem, and I doubt anyone would really have a problem with it.


> 443 is a good choice :)

If it's self signed, and going to throw massive warnings with a direct connection, shouldn't you use anything other than 443?

Any subtleties I should be aware of?


443 is more likely than a random port to be allowed through a firewall, for example.


The main reason I would think it's a good choice is because if you decide to get a CA certificate later, you just drop it in and you're done; no additional configuration required.

If you don't have a CA certificate, you're probably not advertising your https:// URLs anyway, so unless search engines are aggressively looking/prioritizing for https transport, it wouldn't seem to hurt anything to run a self-signed certificate there.


HTTPS everywhere. And I would not trust an entire site to go unindexed.

There's a lot more to change if I want real HTTPS support. Changing a single port number is the least of my worries.


This is good, but still not as good as a service that requires at least unauthenticated encryption. The problem is that the attacker has to be active, but very little effort is needed to break this without the user noticing - it's enough to inject some packets to disrupt the TLS connection.

However, for HTTP it's the best thing possible at this point.


Since the request and response sizes can reveal what public page you are browsing over https, OE in the proposed form would not prevent user tracking: http://sysd.org/stas/node/220


Please, please, can we have the same for WiFi?! Trade a key with the AP when you first associate, and be done with it. The entire concept of a WiFi password is a *%^$ waste of time.



