Progress Towards 100% HTTPS, June 2016 (letsencrypt.org)
220 points by dankohn1 on June 22, 2016 | 106 comments



I keep hoping they will help address non-Internet TLS. For example, if you run an HTPC, fridge, printer, device controller or anything similar on your LAN and want to talk to it over that same LAN using TLS, getting a workable cert is currently not possible: for one thing, the LAN names aren't going to be unique.

Plex did solve this in conjunction with a certificate authority, but that solution only works for them. The general approach could work for others if someone like letsencrypt led the effort. https://blog.filippo.io/how-plex-is-doing-https-for-all-its-...


It's certainly possible to get a trusted certificate for a LAN-only device. DNS-based validation is your best option here. The only requirement is that you use an ICANN ("public") domain. This is not a requirement made up by Let's Encrypt, but rather one set by the CA/B Forum, and it applies to all CAs (for good reasons[1]!)

The Plex approach would be possible with Let's Encrypt, though you would have to find a way to avoid running into rate limits (via the PSL or by making users use their own domains, which is admittedly only an option if you're catering to a technical audience).
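
To make the DNS-based validation concrete: for an ACME DNS-01 challenge, the TXT record value is just the unpadded base64url SHA-256 hash of the key authorization (the challenge token plus your account key thumbprint). A minimal sketch, with placeholder token and thumbprint values:

  import base64
  import hashlib

  def b64url(data):
      # ACME uses unpadded base64url encoding throughout
      return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

  # Placeholders: the token comes from the CA's challenge; the thumbprint
  # is the JWK thumbprint (RFC 7638) of your ACME account key.
  token = "token-from-ca"
  thumbprint = "account-key-thumbprint"

  key_authorization = token + "." + thumbprint
  txt_value = b64url(hashlib.sha256(key_authorization.encode()).digest())

  # Publish this as a TXT record at _acme-challenge.<your-hostname>
  print('_acme-challenge.fridge.example.com. IN TXT "%s"' % txt_value)

Since the challenge is answered entirely in public DNS, the device itself never has to be reachable from the Internet.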

[1]: https://cabforum.org/wp-content/uploads/Guidance-Deprecated-...


An important point is that you need to own that public (sub)domain, even if it is only for use on a private LAN.


Not necessarily. As an example, an IoT vendor could set up a domain for their devices and delegate a subdomain to each device (basically what Plex does). They could also provide an API that allows those devices to provision a TXT record to solve the domain validation challenge. The actual ACME client would still run on the device itself, and validate using that API.

You'd need a way to get past the rate limits (either via PSL or with a rate limit exception), but other than that this is doable.
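
A sketch of what the device-side flow could look like, against a purely hypothetical vendor API (the endpoint, payload and credentials are all invented for illustration):

  import requests  # third-party: pip install requests

  VENDOR_API = "https://api.vendor.example"   # hypothetical endpoint
  DEVICE_ID = "fridge-1234"                    # hypothetical device ID
  DEVICE_TOKEN = "per-device-credential"       # provisioned at manufacture

  def publish_challenge_record(txt_value):
      """Ask the vendor's API to create the _acme-challenge TXT record
      for this device's delegated subdomain (hypothetical API)."""
      resp = requests.put(
          "%s/devices/%s/acme-txt" % (VENDOR_API, DEVICE_ID),
          json={"value": txt_value},
          headers={"Authorization": "Bearer " + DEVICE_TOKEN},
          timeout=10,
      )
      resp.raise_for_status()

  # The ACME client on the device computes the TXT value, publishes it
  # via the API above, then asks the CA to validate. The private key
  # never leaves the device.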


The fact that it would not be unique would fundamentally undermine the security of the CA system.

Nothing would stop someone from getting a certificate for the hostname "myfridge" on their LAN, then going to your LAN and using the same one to perform MitM for your "myfridge".

The Plex approach is very interesting though! There would be a lot to think out, but LetsEncrypt could do it if anyone could.


> Nothing would stop someone from getting a certificate for the hostname "myfridge" on their LAN, then going to your LAN and using the same one to perform MitM for your "myfridge".

Which brings up an important point that is often lost amongst the "encrypt everything!" "hype" prevalent today: you should be able to MITM the traffic of every device you own, or else you do not really own them and cannot tell what information they are actually communicating. Keep in mind that incidents like the smart TVs spying ( http://arstechnica.com/security/2013/11/smart-tv-from-lg-pho... ) were easily noticed because the data was in plaintext.

When pushing for more security, I think it is extremely important to be aware of all the consequences and pause to think deeply before we end up locking ourselves out of things we own, because by the time we realise, it will be too late.


You can still MiTM your own devices which use SSL: just add a custom certificate to them (all major MiTM tools support this).

The problem is not encryption... the problem is buying black box devices which are not transparent to their users. Bad manufacturers will always be able to do this whether we advocate for encryption everywhere or not.


> Getting a workable cert is currently not possible: for example the LAN names aren't going to be unique

Connectivity [1] and using a global namespace are orthogonal: you can use the global DNS namespace just fine independent of connectivity. So from the naming perspective it Just Works if you get certs for printer.yourhouse.you.tld and fridge.yourhouse.you.tld.

(Of course you'd still like an automated cert renewal system for this disconnected case, but that's just a "simple matter of programming".)

[1] assuming by "LAN" you meant "network disconnected from the Internet"


By "you" I also meant people in general, rather than an individual tech person. Every connected printer, fridge, light switch etc user should have better security, and approximately none of them are going to set up global DNS space for their house!


I don't have any newish connected general-purpose devices (fridges, printers, etc.). Do they give you an option to upload a cert?


My experience is that some may have self signed certs, or something similarly pointless, and that the vast majority have nothing so communications with them are in the clear.

If someone like letsencrypt helped solve the issue (probably in a manner substantially similar to Plex), then the devices would be able to get and renew their own certificates automatically, and clients talking to them would just work.


The self-signed cert is not entirely pointless in this case. On your LAN, you can be fairly sure (or even ensure) that the first connection is secure. This allows you to accept the specific self-signed cert as valid and trusted. From then on, that's what you keep on trusting.

There's no CA to tell you that it's a valid cert for this specific location. But you're the owner and you're the authority in this case.
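
A minimal sketch of that trust-on-first-use approach, pinning a device's certificate fingerprint on first contact (the hostname and pin file are assumptions):

  import hashlib
  import json
  import os
  import ssl

  PIN_FILE = "pinned_certs.json"  # assumed local pin store

  def fingerprint(host, port=443):
      """Fetch the server's cert (unvalidated, since it may be
      self-signed) and return its SHA-256 fingerprint."""
      pem = ssl.get_server_certificate((host, port))
      der = ssl.PEM_cert_to_DER_cert(pem)
      return hashlib.sha256(der).hexdigest()

  def check_pin(host):
      pins = {}
      if os.path.exists(PIN_FILE):
          with open(PIN_FILE) as f:
              pins = json.load(f)
      fp = fingerprint(host)
      if host not in pins:
          pins[host] = fp  # first use: trust and record
          with open(PIN_FILE, "w") as f:
              json.dump(pins, f)
      elif pins[host] != fp:
          # SSH-style loud warning: the cert changed since last time
          raise RuntimeError("%s: fingerprint changed, possible MitM!" % host)

  check_pin("fridge.local")  # example LAN hostname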


You are right in theory, and in older times that worked, but these days trying to get a browser to accept a self-signed certificate is difficult. Remember that it would need to be done on a multitude of different platforms (e.g. Safari on iOS, Chrome on Windows) and by regular users. And no one is going to repeat this for the combinatorial explosion of browsers in the home with the devices (fridges, printers, other computers, device controllers, your garage door and what have you).

If you were a device manufacturer what would you do? One approach is to make all communication happen with "the cloud" as an intermediary, but that means that local operation is dependent on the Internet and some backend running on it somewhere. Or you have to come up with some hack, confusing instructions for users, lots of documentation etc. We see the results in Matthew Garrett's most recent post, and it isn't pretty.


I'm not saying it's a great solution. Just that it's not pointless to put a self-signed cert on a device like that.

What would I do as a device manufacturer? I'd try to change the game entirely. The current system simply did not have devices like that in mind. One idea would be "device certificates": basically known CAs that can certify MAC addresses. This should be enough for local networks.


I think we can agree that while self-signed certs aren't useless in theory, they are close to impractical these days, especially for folks who don't even know what one is and are confronted by scary warning dialogs (if you are even lucky enough to see one!)

Amusingly, MAC addresses aren't as constant as you'd think. On my home LAN I use a Netgear wifi range extender. The way it works is to give fake (transformed) MAC addresses for devices connected to it. For example, if the real MAC address is 11:22:33:44:55:66 when connected to the main access point, then when connected to the wifi range extender the rest of the network will see the MAC address as aa:bb:cc:44:55:66. (The range extender also has ethernet ports - this affects wired and wireless connections to it.)


That's why I mentioned the local network. Once we extend that with bridges, of course the verification fails. You can also assume any MAC you want, so that's not good security at all.

But that's all we've got. Unless we start integrating some kind of factory-sealed, guaranteed-unique identity chips in all devices, there's simply no workaround. Once something is on your network, you manage the identities. Either you have to manage some kind of CA, or trust whatever the factory provided you.


My wifi extender is the local network. That is why I mentioned you can't even trust a MAC address on the local network!

I think this can be solved the same way Plex did it: by combining separate DNS space with a certificate authority that issues certs programmatically, like letsencrypt does.


Amen! While it's possible to get certs for things like firewalls and other embedded devices, it's a big PITA. Factor in the short expiration times, and buying a 2-5 year cert becomes a lot more attractive for those use cases.


Small note: since a while back, they've been limited to a maximum of 3 years.


For local traffic, why do you need a public certificate authority?


Because every device and program uses more or less the same list of trusted root certs. Yes, you can install your own root cert and run your own CA for stuff like this, but it's a huge pain in the ass to actually install the cert on all your devices. Sure, you can put that cert in your Chrome trust store. But what about Firefox, Safari, IE/Edge? What about your iPhone with Safari and Chrome? What about curl/telnet? What about your daughter's tablet, your son's gaming rig, your wife's e-reader? It's all just so much work. Businesses that have in-house IT departments can get away with it because they provision hundreds/thousands of identical installs of the same software. You don't have that in your house or small office.


Because some devices and browsers have difficulty determining if they're talking to something on the local network or not. And they don't try to guess. So if your router requires you to connect via HTTPS, which is a good idea, have fun clicking past a nasty warning and then have nasty icons everywhere telling you that you're not secure.

And before you tell me to set up my own local authority and add it to the chain on every device ... come on, really? Nobody wants to do that.


That's not a problem of browsers having difficulty determining whether they're talking to something on a local network or not; just because you're on a local network doesn't mean you can't be the victim of a MiTM.


Because browsers have a UX for certs that's designed around the common/high-risk case, e.g. average users going to banking websites, and a deliberately terrible UX for self-signed certs.

Compare with SSH, which prompts to save the fingerprint on first connect, and warns loudly if the fingerprint changes. This is a superior way to handle self-signed certificates.


Presumably DHCP should tell you a CA for .local or similar?


Just as the entire world is going HTTPS, my faith in the system is seriously waning. When Symantec bought Blue Coat, it made me start to think about how fragile this is. How long before Symantec gets an NSL demanding an appliance that can mint bogus certs on the fly for dropbox.com, facebook.com, twitter.com, etc.?

How effective is something like certificate pinning against fraudulent certs?


> How long before Symantec gets an NSL demanding an appliance that can mint bogus certs on the fly for dropbox.com, facebook.com, twitter.com, etc...?

If the bogus certs are not logged in Certificate Transparency, they will be rejected by Chrome: https://security.googleblog.com/2015/10/sustaining-digital-c...

If they are logged in Certificate Transparency, then the world will know, the offending certificates will be immediately blacklisted, and Symantec will be booted from root programs.

With the ongoing advancements in Certificate Transparency, your faith in the Internet PKI should be growing, not waning.


From the link you posted:

> However, we were still able to find several more questionable certificates using only the Certificate Transparency logs and a few minutes of work. We shared these results with other root store operators on October 6th, to allow them to independently assess and verify our research.

So finding questionable certificates is trivially easy, but nobody ever bothers to look? What good is that?


"Nobody"? Google monitors for their domains. So does Facebook. I'll bet a lot of other high value sites are monitoring too but haven't said so publicly.

As for everyone else, give it some time. The ecosystem is still very young and we're still developing tooling.
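
Even crude tooling helps here. A sketch of a minimal monitor using crt.sh's JSON interface (the known-issuer list is an assumption; a real monitor would poll on a schedule and alert on new entries):

  import json
  import urllib.request

  DOMAIN = "example.com"                  # the domain you want to watch
  KNOWN_ISSUERS = ("Let's Encrypt",)      # assumption: CAs you actually use

  def ct_entries(domain):
      """Fetch certificates logged for a domain via crt.sh (%25 = wildcard)."""
      url = "https://crt.sh/?q=%%25.%s&output=json" % domain
      with urllib.request.urlopen(url, timeout=30) as resp:
          return json.load(resp)

  for entry in ct_entries(DOMAIN):
      issuer = entry.get("issuer_name", "")
      if not any(known in issuer for known in KNOWN_ISSUERS):
          print("unexpected issuer for %s: %s"
                % (entry.get("name_value"), issuer))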


I'm happy that Google and Facebook are discovering fraudulent certs. When they pop up, hopefully those companies aren't prevented from going public with the information.

Are there any end-user tools? When I open twitter.com, I would love for my browser (or my phone if I'm using an app) to tell me that the certificate fingerprint has changed unexpectedly since the last time I visited.


If you don't mind the risk of false positives (detecting changes that are legitimate), you can get that information from

https://addons.mozilla.org/en-US/firefox/addon/certificate-p...


There is Certificate Patrol, or something like that. It is very verbose and will annoy you.


It's a fairly new project (in "internet standard/security technology" years), first introduced in 2013. CT monitoring/auditing is basically a group effort, most effective when everyone from domain owners and browser vendors to CAs uses it. There are usually no easy and instant solutions to complex problems, but it's getting there.


On the subject of bothering to look, I built this service to answer that: https://ctadvisor.lolware.net


Very cool. Do you know if Lets Encrypt is monitoring on behalf of their users? It seems like that should be part of the job.

Why are all these tamper detection tools targeted to domain owners rather than end users?


I don't really see how Lets Encrypt could monitor on behalf of their users. If a CT log shows <randomCA> created a certificate for my domain, I know it's unauthorised for the reason that I did not choose to authorise it. Lets Encrypt cannot assume they are the only company that I deal with.


oh yeah, that's a good point. I didn't think that one through.


Now that I think about this more, I'm wondering how the certificate transparency program can be protected. The certificate information would have to be submitted out-of-band to be sure that it hasn't been tampered with, right? It wouldn't make sense to communicate about certificate security using infrastructure that depends on the same technology.

I was thinking about this because I was wondering if you could use secure DNS to store certificate fingerprints. That doesn't make sense, though, because secure DNS also depends on a PKI.


I don't think they would be considered fraudulent at all, and in fact I'm pretty sure that's the "safety valve" built into the system and why public encryption is now being encouraged. I share your tinfoil hatted feelings wholeheartedly.


It would certainly be considered fraudulent by all major browser vendors, possibly leading to a death sentence for the CA (i.e. root removal) in cases of deliberate misissuance or massive negligence. Key pinning mechanisms and Certificate Transparency would make it quite likely that this kind of misissuance would be detected as well.

My personal opinion on the "nation-state adversary forces CA to misissue" topic boils down to this: It's unsuitable for mass surveillance as it's easy to detect. It would work for targeted attacks, but in all likelihood your adversary will use other means to get in (zero-days, someone on the inside, physical compromise, etc.). Even for a targeted attack, my guess is that these other means would be less likely to be detected and would be significantly cheaper (if you take into account the cost of causing a root CA to get removed).


How would it be identified as anything other than a routine cert rotation? You would have to have proof that you were being served different certs for the same endpoint from different devices or locations, AND find someone who cares and is not under some influence. That sounds difficult for you, but easy for a central entity to pull off: ads follow me around between devices and locations, so I am quite sure a certificate could, too.


The certificate is presented when fairly little information has been sent by your browser, so it's tough to target it more specifically than source IP address and OS/browser.

There are a lot of ideas for catching this and some of them are starting to work, like HPKP pinning and preloads, and in the long run Certificate Transparency (including not accepting certs that haven't been publicly disclosed).

If you use HTTPS Everywhere, you have an option to submit certs that you see to the EFF SSL Observatory.

I don't mean to minimize the threat; I think there are lots of sites and browsers against which misissued certs can still be successfully used today without detection, and it's important to keep working on making that no longer true.


If a site uses key pinning (e.g. via HPKP), the site is in control of telling you which keys are acceptable, so there would be no way to pretend it's a routine rotation after your first site visit (unless the site is cooperating, in which case you're screwed anyway).

Certificate Transparency would also help here, as it would allow site owners to monitor the logs for any certificates they did not request. This is more about detection as opposed to prevention (though the chilling effect of easier detection would help with prevention, I suppose). Admittedly, this is probably something only a couple of large, security-conscious organizations do (like, for example, Facebook, which detected that someone was issuing certificates against internal policy, though not maliciously in that case).
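
For reference, an HPKP pin is just the base64 of the SHA-256 hash of the certificate's SubjectPublicKeyInfo. A sketch using the pyca/cryptography library (the certificate path is an assumption, and a real deployment also needs a pin for a backup key):

  import base64
  import hashlib

  # third-party: pip install cryptography
  from cryptography import x509
  from cryptography.hazmat.primitives.serialization import (
      Encoding, PublicFormat)

  # Assumed path to the site's PEM-encoded certificate
  with open("site.pem", "rb") as f:
      cert = x509.load_pem_x509_certificate(f.read())

  spki = cert.public_key().public_bytes(
      Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
  pin = base64.b64encode(hashlib.sha256(spki).digest()).decode()

  # Goes into the HTTP response header (plus a backup pin):
  print('Public-Key-Pins: pin-sha256="%s"; max-age=5184000' % pin)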


> I don't think they would be considered fraudulent at all

Er, by whom?


Well, they would be as technically valid as any other updated cert, and they would not violate any laws, since a judge somewhere would have rubber-stamped it, so I just don't think the word fraudulent could apply in any actionable sense.

Morally wrong, sure, but you know, politicians and law enforcement...


Fraudulent is not usually a term that refers to legal assessments when talking about the CA system. Various browsers and root programs are (to keep the comparison) both the executive and judiciary branches of the CA system, and if certificates are issued in a way that's not allowed by those policies (which would certainly be the case in this scenario), it would be considered fraudulent and could lead to root removal.


I'm still bitter about this chain of trust model. The fact that I have to get some other party to tell my users that they can trust me just seems wrong. They trust me because of personal history, not because some banner says they should.

Browsers and OS vendors shipping CAs seems to be the root of the problem, in my mind. Those should be distributed by the service providers, who are the actual trustworthy entities in the user's minds.


> They trust me because of personal history, not because some banner says they should.

The chain of trust is not to tell your users to trust you. It's to tell your users not to trust me, even if I look just like you.


This isn't about trusting you, it's about trusting that your domain belongs to the IP address it's supposed to.


IP addresses are not in certificates. Only hostnames. Not a rule, but I've set up hundreds and seen thousands, and never seen a single purchased cert that contained an IP.


That's because the certificate is used to determine if the IP address you're connecting to has control of the domain name you expected to visit.

Were the certificate to contain an IP address (or IP addresses), it would need to be updated every time the site started or stopped using a public-facing IP address.


> Those should be distributed by the service providers, who are the actual trustworthy entities in the user's minds.

That's what HPKP does, basically, unless I'm misinterpreting what you mean with service providers.

HPKP is Trust on First Use, so it's not perfect, but the alternative - some kind of Web of Trust - is not really practical for non-technical, not-security-conscious users, IMO.


This looks like it still uses HTTPS. Currently I am giving OpenKeychain on Android a try. Its trust establishment process is very interesting.

Sadly, I understand HTTPS much better than the alternatives, due to web hosting experience. I am trying to catch up.


Aren't the root authorities in browsers/OSes just a means of short-circuiting chain-of-trust validation by eliminating the need to validate the chain up to a single root?

I share your frustration, and I understand that trying to manage levels of trust is a tough problem, compounded by the fact that a user's expectations are fluid.


It's a bit like gun control laws; ultimately the criminals won't follow them. I was reading about some recent attacks and how hackers just steal certs or fool CAs into making certs for them. My understanding is that this is trivial for them to do in most cases. Turns out most CAs are run like security shitshows.

Meanwhile at work we're juggling dozens of certs left and right, each with their own expiry, as a handout to CAs. There's no reason why CAs can't sell me a cert that has a decade-long expiry. If the cryptography it uses goes bad, we'll just replace it. Why am I constantly buying these things?

Everything about CAs and browsers is wrong. Especially when many browsers ship with root certs from entities controlled by autocratic governments with zero accountability that are involved in cybercrime and cyberspying. I'm giving incredible access to these nation states by downloading Firefox, Chrome, or IE. How is this "secure" again?


Nailed it, but my concerns about manipulation extend farther than criminal abuse. It's more about privacy to me.


Correct me if I'm wrong, but is your argument "Quis custodiet ipsos custodes" (who will guard the guards themselves)?


And when we visit your site for the first time, having never heard of you before, why should we trust you?

That's the point: having some authority who did anywhere from minimal to extensive checking, and who will verify you really are who you purport to be. Trust but verify probably plays a part in this.

But, remember, you don't have to go to HTTPS. There is no requirement for you to do so.


> They trust me because of personal history

That does not mean that you know something about security.

> ... why should we trust you?

That's exactly the point. This is the Internet; we don't trust anyone. It's a dangerous place for that sort of thing... but we have to, otherwise it's better to go live up in the mountains.

So, I prefer to trust Symantec/Google/DigiCert/etc... instead of some small business that does not even know the meaning of updating software or changing default passwords.

The chain of trust is a burden, I know. Why should we trust anyone? But there has to be some level of trust between two parties, and if we can have a third one (like an escrow) that can ensure that trust, I think it's great. Even using asymmetric encryption you need to trust the other party's public key...

A quick example of unencrypted, cert-less networks, insecure ones with tons of vulnerabilities, are the SS7 and GPS systems... Since they cannot add certificates to their BTSes (base transceiver stations) or their satellites, because of roaming technology, it's quite easy to set up an antenna and spoof them[1] and gain full control over your phone and GPS[2]

[1] https://julianoliver.com/output/log_2014-02-13_17-17

[2] http://permalink.lanl.gov/object/tr?what=info:lanl-repo/lare...


I take cash, and always let folks try before they buy. :) I do have solid means of establishing trust. None of it has anything to do with technology security. Old school, baby!

That said, I am actually trying to move to a rather isolated place, and that is a perfectly valid option, so don't knock it.


Why should I trust you even if you have an HTTPS cert? All you needed to get one was a domain name.


People seem to be misinterpreting the intent of HTTPS; it doesn't give you any reason to trust a given site. HTTPS only verifies that the site you are talking to is in fact the domain name in the URL, rather than a government agency, ISP snooper/intermediary, or other man-in-the-middle attacker. It's up to you whether you trust the operator of that domain.


Why should you trust me if you have never met me? If you like what I do, trust me, and please give me money. :)

Cert companies only do a phone call check for the very expensive EV certs. There is no minimal to extensive checking. That is a scam.

Web tech is all https now. I can't even browse a lot of https sites with some of my older devices. There is a requirement and I dislike it.


>There is no minimal to extensive checking. That is a scam.

You generally have to modify the root domain to host a random value in a text file the cert company gives you. This demonstrates that you have control of the domain.

Aka, minimal checking.

Granted, that doesn't prove that you're the domain owner, but if you aren't the domain owner and you've got enough access to pass that challenge, the real owner has security problems a cert isn't going to fix, so hey.

All things considered, it's a hell of a lot better than nothing.
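
That check is about as simple as validation gets. A minimal sketch of serving such a challenge file, with placeholder token and content (the path follows the ACME http-01 convention):

  import http.server
  import os

  # Placeholders: the CA supplies these during the challenge
  TOKEN = "random-token-from-ca"
  KEY_AUTHORIZATION = "random-token-from-ca.account-thumbprint"

  # ACME convention: serve the value at /.well-known/acme-challenge/<token>
  os.makedirs(".well-known/acme-challenge", exist_ok=True)
  with open(".well-known/acme-challenge/" + TOKEN, "w") as f:
      f.write(KEY_AUTHORIZATION)

  # Serve the current directory on port 80 (needs privileges) so the
  # CA's validator can fetch the file over plain HTTP.
  http.server.HTTPServer(
      ("", 80), http.server.SimpleHTTPRequestHandler).serve_forever()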


> Why should you trust me if you have never met me? If you like what I do, trust me, and please give me money.

What if a customer who trusts you returns to your site, but ends up on an impostor's site instead? He has no way to discern the difference.


I would argue strongly that such users do not have those abilities even with https. A valid cert is a valid cert. My supporting point would be the major browser vendors' recent backpedal on throwing mixed-content errors, demonstrating that a smooth ride for the user is far more important to them than safety.

Actually I called shutterfly.com on the phone about that mixed content issue. I emailed them screenshots of the error from 6 different operating system and browser combinations, from 3 other users even. They claimed nothing was wrong. They were serving javascript via http on an https page and told me I was wrong and needed to update java, for weeks, on the phone, in chat, and in email, and declined to send the report to their webmaster. Even those wanting to be trusted are incapable of using these tools, from what I have seen. The whole thing is broken.


> I can't even browse a lot of https sites with some of my older devices.

What devices do you have that don't support TLS?

Also, the point is not to trust you or not, it's to trust that I'm actually talking to you and not a MitM.


Libretto 50ct. If 301s from http:// to https:// didn't exist, then I wouldn't have anything to complain about.


> Let’s Encrypt has issued more than 5 million certificates in total since we launched to the general public on December 3, 2015. Approximately 3.8 million of those are active, meaning unexpired and unrevoked. Our active certificates cover more than 7 million unique domains.

How can you cover 7 million unique domains if you've only issued 5 million certificates?


A certificate can cover many domains thanks to the subjectAltName extension.


One certificate can be for more than one domain.


For example, a single cert can serve www.example.com as well as example.com


That is true, but in this case I think Let's Encrypt and also parent to your comment mean different domains as in e.g. one certificate to cover all three of example.com, example.net and example.org.


The same mechanism in cert generation provides that functionality. Hostnames are hostnames. SAN certs just take a list of them.
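
You can see this on any live site by reading the SAN extension out of its cert. A quick sketch with Python's standard ssl module (the hostname is just an example):

  import socket
  import ssl

  def san_names(host, port=443):
      """Connect and return the DNS names in the server cert's SAN."""
      ctx = ssl.create_default_context()
      with socket.create_connection((host, port)) as sock:
          with ctx.wrap_socket(sock, server_hostname=host) as ssock:
              cert = ssock.getpeercert()
      # subjectAltName is a tuple of (type, value) pairs, e.g. ('DNS', ...)
      return [v for t, v in cert.get("subjectAltName", ()) if t == "DNS"]

  print(san_names("letsencrypt.org"))  # example host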


This is great; I use LetsEncrypt for my company. However, the graph is a little misleading. Let's look closer:

LetsEncrypt is almost built upon the idea of frequently (and automatically) re-issuing your certificate(s). The graph's line shows what appears to be an accumulated sum of certificates issued by day.

If most certificates expire every 90 days, of course the graph will look like that!

What's most interesting to me is the steps up in the graph. It appears that the steps in the graph roughly occur on 70-90 day intervals.

Impressive growth for a great mission/service, but I wanted to point out the mechanics behind the graph. Hopefully others can offer some alternative perspectives!

edit: Grammar, illogical sentence structure.


Is it still problematic to issue lots of certs for lots of subdomains? I mean, still no wildcard certs, and crazy rate limits that disallow issuing thousands of certs per day for user-generated subdomains?


Yep. You can get 20 different certificates per domain per week, with up to 100 names on each.

https://community.letsencrypt.org/t/rate-limits-for-lets-enc...


Wildcard certs are also a huge need for platforms like Sandstorm.io, which opens documents on arbitrary/randomly-generated subdomains. And as someone who hosts a lot of things on various subdomains in general, the idea of having a bunch of different certs is far less appealing than a wildcard cert.

But unfortunately it doesn't seem like Let's Encrypt currently has any plans to add wildcard certs any time soon.


If you're generating that many subdomains (and you control the subdomains), it's probably worth investing in a traditional wildcard cert.

Though it would be nice if the likes of dyndns names were given an exception, since they are effectively second-level TLDs.


LE uses the Public Suffix List to decide what counts as a "domain". Their really low rate limits have caused a flood of applications that are overwhelming the PSL's maintainers.

https://community.letsencrypt.org/t/dyndns-no-ip-managed-dns...


Though if you are a hosting provider, for example, I'm sure you could try to negotiate a deal with Let's Encrypt for more tolerant rate limits in exchange for a bit of sponsoring.


We do, in limited cases for large providers (we can only handle so many requests), adjust rate limits. Such adjustments are never dependent on sponsorship, though sponsorship is nice.


It sounds like you are doing something serious enough that Let's Encrypt might not meet your needs in other ways. Pay up for a wildcard cert or refactor subdomains out of your architecture.


My understanding is that for an intranet, you could use Let's Encrypt. For example, if I own foo.com and I want my intranet to be *.internal.foo.com, I need to put *.internal.foo.com in the DNS in order to verify I own *.internal.foo.com, correct? But then doesn't that expose my 'internal' network? Hope there is a different way to solve this problem.


You don't need to "open up" your internal network (the ownership validation can happen via DNS), but the hostname would be public through Certificate Transparency.

Generally, if you're relying on your internal hostnames being secret (which is a terrible idea anyway), you should consider using an internal CA, because there's a good chance all public CAs will start logging every single certificate they issue to public logs, and that would include all the domains the certificate is valid for¹. Better yet, don't treat your hostnames as secrets.

¹ I think there have been some discussions about allowing CAs to censor DNS labels below the TLD+1 level for Certificate Transparency. Not sure if that's going to happen; I'm not a fan. This would still require that your CA support this mechanism, something I don't think Let's Encrypt would do.


This is extremely exciting. I've been supporting these folks since the beta. It's great for offering free SSL to clients.


Am I the only person that is wary of 100% https ?

Remember, once you encrypt a web resource in SSL, you add a ton of baggage on top of any methods that might be used to access it.

I like a world in which I can 'nc' a web resource and manipulate it with unix primitives without a truckload of software dependencies.

If sensitive information is involved, then certainly - use SSL. I understand that we must give up conveniences for that functionality.

But there are a lot of web resources that have existed, do exist, and may yet exist that are completely benign ... I think we're shackling ourselves by chasing after this perfection.

Or, put another way, we're chaining ourselves to a world where web resources are only accessed by web browsers, and only by those web browsers that are chaining themselves to a fairly dubious security scheme...


Just as you can use "nc" for an HTTP resource, you can use "openssl s_client" or "ncat --ssl" (from the nmap project) or "socat" to manipulate an HTTPS resource using the same unix primitives. Which truckload of dependencies does this require? The Debian package for OpenSSL only depends on libc.

I do fully agree that the web is getting more tied to browsers, and to me that's worrying, but TLS is mostly a transparent tunnel over which you can use the same protocols; it's not part of that trend, in my opinion.
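
To make that concrete, here is roughly the TLS equivalent of an "nc host 80" session using nothing but Python's standard library (the host and request are examples):

  import socket
  import ssl

  HOST = "letsencrypt.org"  # example host

  ctx = ssl.create_default_context()
  with socket.create_connection((HOST, 443)) as sock:
      with ctx.wrap_socket(sock, server_hostname=HOST) as ssock:
          # The same raw protocol you would type into nc, inside a TLS tunnel
          req = "HEAD / HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n" % HOST
          ssock.sendall(req.encode())
          data = ssock.recv(4096)
          while data:
              print(data.decode(errors="replace"), end="")
              data = ssock.recv(4096)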


Is there any alternative to SSL and TLS out there? Sshttp, anyone?


Tor Onions are technically an alternative. You access the hash of the public key, and you are only able to put up that URI if you have control over the private key. I guess something similar based on hashing and public-key cryptography might be possible outside of Tor, but it's not exactly user friendly to begin with.


Blackberry 10 browser refuses to recognize Lets Encrypt certs :(


Is there any work being done on being able to easily switch out standards?

That way when https is found to lack some feature, we can easily upgrade to httpz almost immediately?


This will likely never be the case due to how HTTPS actually works. As someone else stated, HTTPS is HTTP + TLS.

The "s" in HTTPS is for "secure", and TLS provides that security.

TLS is an evolving standard which is updated over time to add new features when necessary. When HTTPS is negotiated, it can seamlessly choose which version of TLS to use, based on what the client and server want to use.

So, HTTPS will never die due to lack of features. A new version of TLS will just be approved and deployed, and newer devices can use that while older devices can get by on an older version of TLS.

TLS is the successor to SSL. They are backwards compatible, so devices that support TLS also support SSL. The full version history, from newest to oldest, is: TLS 1.2, TLS 1.1, TLS 1.0, SSL 3, SSL 2. In reality, very few servers still use SSL 3 or SSL 2, due to known weaknesses, but colloquially, all the versions are just called "SSL".

TLS 1.3 is underway and will shortly be ready for primetime. Firefox and Cloudflare have already written some implementations based on the draft spec (sorta like how routers implement the newest 802.11 standards before they are 100% official).


Plus, even if we did decide to fully replace TLS, nothing would necessarily need to happen with certificates. We call them "SSL certificates", but the certificate standard - X.509 - actually predates SSLv1 by several years. A TLS alternative/replacement could adopt the X.509 standard as its certificate format and automatically work with the existing CA system.


The s in https just means secure. It has evolved from SSL to TLS with various versions of each.

The http in https just means http. It has evolved from http/1.0 to http/1.1 to http/2.

I'm not sure what you're asking or how it is relevant to Let's Encrypt.


The situation is not ideal. But the consensus among browser makers is that the previously-relevant standards bodies move too slowly. They can implement new transport features independently (like Chrome did with SPDY and QUIC). But the downside is that fragmentation is more likely, as most browsers implemented SPDY's features in HTTP/2 but only Opera has added QUIC.


I see this as security theater. Most web pages don't need to be encrypted. Anything with a form should be, but if you're just viewing static content, there's little point. Yes, it obscures what content you're viewing, slightly. An observer often could figure that out from the file length.

Encrypting everything increases the demand for low-rent SSL certs. Anything below OV (Organization Validated) is junk, and if money is involved, an EV (Extended Validation) cert should be used. Trying to encrypt everything leads to messes such as Cloudflare's MITM certs which name hundreds of unrelated domains. This is a step backwards.


> Most web pages don't need to be encrypted. Anything with a form should be, but if you're just viewing static content, there's little point.

Some really cool HTML and JS functionality will only work over HTTPS.

> Yes, it obscures what content you're viewing, slightly. An observer often could figure that out from the file length.

If you have an attacker that can identify content solely from its length, you have bigger problems than an SSL cert can solve.

> Trying to encrypt everything leads to messes such as Cloudflare's MITM certs which name hundreds of unrelated domains. This is a step backwards.

I do not see the problem. All those domain owners consciously choose to have Cloudflare host their stuff. The cert might be a few KB bigger, but who cares?


> > Most web pages don't need to be encrypted. Anything with a form should be, but if you're just viewing static content, there's little point.

> Some really cool HTML and JS functionality will only work over HTTPS.

What "really cool" HTML feature requires HTTPS? There can be problems with mixed secure/insecure content, but that's more of an offsite content issue.

> > Yes, it obscures what content you're viewing, slightly. An observer often could figure that out from the file length.

> If you have an attacker that can identify content solely from its length, you have bigger problems than an SSL cert can solve.

An eavesdropper knows the IP address and the length of the content, even if it's encrypted.

> > Trying to encrypt everything leads to messes such as Cloudflare's MITM certs which name hundreds of unrelated domains. This is a step backwards.

> I do not see the problem. All those domain owners consciously choose to have Cloudflare host their stuff. The cert might be a few KB bigger, but who cares?

When sites share an SSL cert, and you can break into one of the sharing sites, there's a way to impersonate others. Cloudflare customers for their lower tiers of "security" often don't realize this. The customer doesn't pick which sites share certs; that's up to Cloudflare.[1]

[1] http://john-nagle.github.io/certscan/whoamitalkingto04.pdf


> What "really cool" HTML feature requires HTTPS? There can be problems with mixed secure/insecure content, but that's more of an offsite content issue.

One example would be the Geolocation API, with more to come[1]. Another example (specifically for HTML) would be Mozilla showing a user-visible warning when it encounters a type="password" field in a form served via HTTP (or with an HTTP target - I'm not certain). This is currently only enabled in the Developer Edition, but will eventually land in stable.

> When sites share an SSL cert, and you can break into one of the sharing sites, there's a way to impersonate others. Cloudflare customers for their lower tiers of "security" often don't realize this. The customer doesn't pick which sites share certs; that's up to Cloudflare.

This is a non-issue for services such as CloudFlare. Site owners do not have access to the private key, only CloudFlare does. Breaking into one of the other sites won't give you access to the private key, only breaking into CloudFlare would, and such a vulnerability would have nothing to do with the fact that you're sharing a SAN certificate with other sites. I'm not aware of any other cross-site vulnerabilities that stem from shared certificates in an environment where every site on that certificate is served by the same frontend.

[1]: https://www.chromium.org/Home/chromium-security/deprecating-...


"One example would be the the Geolocation API, with more to come[1]."

Ugh. Why would they do that ?

I can understand that geolocation could be tremendously sensitive and you absolutely would want to offer the option of SSL ... but why limit it to SSL ?

geolocation is also something that you'd want to hack into and build into things ... and maybe even things with limited processing power and memory.

Wouldn't it be nice to have the option to interact with a geolocation API (over http) with stdio and not include a giant truckload of dependencies and libraries and megabytes of packages ?


> I can understand that geolocation could be tremendously sensitive and you absolutely would want to offer the option of SSL ... but why limit it to SSL ?

I think you answered your own question. ;-)

> geolocation is also something that you'd want to hack into and build into things ... and maybe even things with limited processing power and memory.

Presumably, once your device is capable of running a modern browser such as Chrome or Firefox (which is what we're talking about here), TLS is a drop in the bucket in terms of resource usage. Or were you talking about the server?


Sorry, but your comment is literally all wrong.


Have you ever built toolchains on top of parsing web resources with unix primitives?

Adding SSL makes it a lot more complex and limits your toolset dramatically.

If your source is sensitive, by all means - use SSL. I don't think anyone would argue with that.

But if you provide a useful resource that isn't sensitive or controversial (say, for instance, the weather) why would you want to chop off so much interoperability ?

I guess if the only way you've ever used the web is with a web browser, this doesn't make any sense to you.


This has nothing to do with complexity or tooling.

Many people, including you, forget that TLS protects content from tampering. ISPs, captive portals, and other network entities are known for injecting ads or intercepting transmissions. Imagine your liability if a bad actor injects child porn into your site for a large portion of your audience. This isn't unheard of, by the way. Arguably less criminal, even Comcast is known for injecting content into pages.[1]

[1]: https://gist.github.com/ryankearney/4146814


mholt is the lead developer of Caddy[1], so it's probably safe to assume that he knows a bit about this.

The goal behind the HTTPS everywhere effort isn't just to encrypt private data, but also to provide authentication for your content. ISPs are known to interfere with HTTP requests, injecting ads, malware and whatnot. That's something that affects anyone, even static sites.

[1]: https://caddyserver.com/



