Chromium and Mozilla to enforce 1 year validity for TLS certificates (googlesource.com)
203 points by vld on June 28, 2020 | 363 comments



With the tightening of certificate trust, demise of self-signed certificates, etc., is there any remaining way to establish a consumer-oriented HTTPS server on a local network? Thinking of things like routers, printers, and self-hosted IoT devices here. Some of the label printers we support at work have simply atrocious workarounds to get them to work, and I'm wondering if it's the manufacturer's fault or if that use case has been completely abandoned in the push for tighter security on the Internet.


It’s a glaring security hole, IMHO. I create such devices and the only way I know is self-signed certs, but the browsers complain a lot about that. Ideally there’d be a way to sign .local domains, with browsers handling it while letting people know to verify the identity of their local devices/services themselves, since that identity isn’t verified by HTTPS the way it is for most sites.

The issue lies between the browsers and the HTTPS system. SSH can do encryption without requiring identity verification. It handles it by asking "Do you want to trust this new server?". Then, if the key changes, it informs you of that. Browsers could easily implement that for .local with self-signed certs.
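A minimal sketch of that SSH-style trust-on-first-use flow in Python (the storage path and the return strings are hypothetical, purely to illustrate the logic a browser could apply to .local hosts):

```python
import hashlib
import json
import os

# Hypothetical pin store, analogous to SSH's known_hosts file.
KNOWN_HOSTS = os.path.expanduser("~/.local-tofu.json")

def check_tofu(hostname, cert_der):
    """Trust on first use: pin the first certificate seen for a host,
    and warn loudly if it later changes (like SSH's host-key check)."""
    fingerprint = hashlib.sha256(cert_der).hexdigest()
    pins = {}
    if os.path.exists(KNOWN_HOSTS):
        with open(KNOWN_HOSTS) as f:
            pins = json.load(f)
    if hostname not in pins:
        pins[hostname] = fingerprint          # first contact: pin it
        with open(KNOWN_HOSTS, "w") as f:
            json.dump(pins, f)
        return "new-host: ask user to confirm"
    if pins[hostname] == fingerprint:
        return "ok"
    return "WARNING: device identity changed"  # the SSH-style nastygram
```

The key property is that after the first visit, a MITM presenting a different self-signed cert triggers a hard warning rather than the same "accept this cert?" prompt the user already clicked through once.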

Of course browser developers assume everyone has internet all the time and you only access servers with signed domains. I’ve wondered what it’d take to get an ITEF/W3C RFQ published for .local self-signed behavior.

(Edit: RFQ, not my autocomplete’s RTF)


> Ideally there’d be a way to sign .local domains with browsers handling it while letting people know to verify the identity of their local devices/services and that the identity isn’t verified by https like most sites.

For these types of sites we run a local CA, sign regular certificates for these domains, and then distribute the CA certificate to our Windows clients through a GPO. When put into the correct store, all our "locally-signed" certificates show as valid.

In other instances, where I haven't been able to do that, like for disparate VPN clients and such I will generally assign a RFC1918 address to it. Like service.vpn.ourdomain.com resolves to 10.92.83.200. As long as I can respond to a DNS challenge, I can still get a letsencrypt certificate for that domain.


> In other instances, where I haven't been able to do that, like for disparate VPN clients and such I will generally assign a RFC1918 address to it. Like service.vpn.ourdomain.com resolves to 10.92.83.200. As long as I can respond to a DNS challenge, I can still get a letsencrypt certificate for that domain.

This is basically what I've been doing lately as well. I'll create a wildcard letsencrypt cert for .vpn.ourdomain.com and then point the subdomains to internal IPs. You can even set up a split-dns where it responds to the challenge txt records for letsencrypt, but only the internal side responds to requests under .vpn.ourdomain.com.


That works if you have control over the device/network. IoT devices usually don’t work that way. It’d be nice to be able to ensure the passcode to the device isn’t broadcast in cleartext HTTP.


> It handles it by asking "Do you want to trust this new server?"

Asking the end user to accept downgraded security is a huge security antipattern.

Also, if I’m operating an evil wifi AP at a coffee shop and I intercept your web request for bankofamerica.com with a redirect to bankofamerica.local, would HSTS prevent the redirect? Or could I then serve you a bad cert and trick you into accepting it?

Also, what sokoloff said makes a lot of sense. Encryption without authentication is worthless, and that cert chain only works insofar as someone at the top vouches for someone’s identity. If that’s your print server, then you are the one vouching for its identity. It makes more sense for you to be the certificate authority and just build your own cert chain.


If the browser correctly explained what you were doing, and warned you that this is an attack unless you are in control of the entire network and the machines on it, I don't see the problem.


What would it say?

“You’re connecting to an IoT device that has a worthless certificate. Would you like me to open up a completely pointless AES256 session with it and pretend that you have a secure connection?”

Just use HTTP.


The identity isn’t trustworthy, but as mentioned below there are ways to handle that with device IDs. Also, it’s not pointless to encrypt communication with a device you’ve verified the identity of in the past. It prevents hijackers from later hijacking the device without the user knowing it, just the same as the nastygram you get when your SSH server changes its private key.

You’re effectively claiming SSH is pointless and/or useless encryption because it doesn’t use certificate chains to verify the URL/domain. Your argument is the same as saying that any devops ssh’ing into a new local server is pointless and they should just use telnet.


(Disclaimer: I'm not a security expert).

Ideally, I think, something like this: "You're trying to connect to a new device on your local network. To ensure security, please check that the device has a display or a printed label that says 'HTTPS certificate ID: correct horse battery staple couple more random words'." (Mobile devices may suggest scanning a QR code instead.)

I'm pretty sure that if at least one major browser vendor implemented something like this (denoted by a special OID on the certificate), IoT vendors would be happy to follow. Verifying a phrase or scanning a code is not a big burden, and it resolves the trust issues.

The fingerprint could come either from a private key generated on the device (for devices that have a display and can show dynamic content) or from the vendor's self-signed "CA" with special critical restrictions (no trust for any signatures unless individually verified, and signed certs only valid on what clients consider to be a local network), whose private keys are not on the device itself (for devices with printed labels, to avoid having the same private key on all devices).
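The "few random words" label could be derived deterministically from the certificate, e.g. by indexing a word list with bytes of its SHA-256 fingerprint. A sketch, where the tiny word list is a stand-in (a real scheme would use a large standardized list, consuming more bits of the hash per word):

```python
import hashlib

# Stand-in word list; a real encoding would use something like a
# 2048-entry list so each word carries 11 bits of the fingerprint.
WORDS = ["correct", "horse", "battery", "staple", "cloud", "anchor",
         "lantern", "pebble", "violet", "walrus", "ember", "quartz",
         "meadow", "tundra", "falcon", "ripple"]

def label_phrase(cert_der, n_words=6):
    """Turn a certificate's SHA-256 fingerprint into a short phrase
    that can be printed on the device or shown on its display."""
    digest = hashlib.sha256(cert_der).digest()
    return " ".join(WORDS[b % len(WORDS)] for b in digest[:n_words])
```

The browser would compute the same phrase from the certificate it received and ask the user to compare it against the label, so a MITM's substitute cert would produce a visibly different phrase.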


It certainly sounds a lot better than simply asking browser vendors to give .local a pass on cert validity.

I’m still wary of any flow that would have browser users “accepting” a device as secure - could I impersonate that device on the local network? Could I convince someone to accept a site on the wider internet as their IoT device? Someone smarter than me needs to think hard about these questions.

Maybe another approach would be to build infrastructure (like protocols and client software) to make building a home cert chain easy? A windows client that would let you create a root cert, install it in your cert store, and then give you server certs to hand out to devices? Give it a consumer friendly brand name or something and get IoT vendors to add a front-and-centre option to adopt a new server cert.

Authentication isn’t a tricky problem; it’s the trickiest.


> I’m still wary of any flow that would have browser users “accepting” a device as secure -

It’s about accepting that communication with the device is secure, not guaranteeing that the device itself is secure. In reality you don’t know if your bank’s servers are secure or if they encrypt passwords properly, etc, but you do know it’s them and your communication isn’t tampered with.

> could I impersonate that device on the local network?

Not readily, with a device-specific ID check and TOFU (trust on first use) similar to SSH. If the device certificate were stored permanently for .local URLs like `my-device-23ed.local`, then anyone who tried intercepting or MITM’ing that device would cause the user to receive a "warning: device identity has changed, please check your device is secure" message.

Not having any browser support .local certificate or identity "pinning" means that anyone who compromised your network (WiFi PSK hacking, anyone?) can impersonate a device and you’d not know it. Browsers forget self-signed certs regularly, if they let you "pin" the certificate at all. A hacker can intercept the .local URL (trivial) and use another self-signed cert. The user’s only real option is to blindly accept it whenever it happens. Then an intruder can MITM the connection to the device all they want. Is your router’s config page really your router? Who knows.

> Could I convince someone to accept a site on the wider internet as their IoT device? Someone smarter than me needs to think hard about these questions.

Any `.local` domain isn’t allowed to have a normal cert or global DNS name. They could trick them on first use, but again, a device-specific ID on first use would make that harder to do. After trust-on-first-use, any access afterward couldn’t be tricked without an explicit warning to the user about the changed device identity and that something funny might be happening.

If browsers implemented entering a device-specific code as part of the "do you accept this device" prompt on first use, that’d make it a much more usable and secure pattern. It’d standardize the pattern and encourage IoT shops to set up the device ID checking properly.

To impersonate a device using .local certificate/identity pinning, a hacker would need physical access to the device to get its device ID code, then hijack the mDNS request with the correct device-specific .local address (on first use!), then set up a permanent MITM in order to impersonate the device. Otherwise the user would get a warning. Possible, but serious resources required. With physical access you can modify hardware, possibly install a false cert on the user's machine, etc., so security in that scenario would be largely compromised already.

Perhaps some IoT devices use custom apps and certificates, but many just use HTTP or self-signed HTTPS. In my experience, IoT device makers have little experience with something like creating a CA. Getting users to install it would be a headache. Time is money on those projects, and if an entire factory goes down because they can’t figure out how to install a certificate chain on Windows 7, well, most users will complain loudly. Currently IoT is split between full-on IT-installed certificate chains or no security at all. Many therefore go with none at all.

Therefore the current status quo with browser certificates on .local domains encourages far more security gaps and effectively makes it difficult for non-internet-connected devices to operate securely without a fairly expensive and complicated IT setup.


I like this idea, as we already add a device serial ID to the AP-hosted mode, partly for this reason. Chromecasts show a random code and users seem to be able to handle this fine.


That works fine with a brand new device you just unboxed. But what should happen 3 years later? Should the IoT device have a certificate that is valid forever?


I would say yes, as long as certificates are unique per-device. The harm of my light bulb's cert being compromised is less than the harm of my light bulb no longer working if the manufacturer's CA goes offline and it can't renew its cert.


HTTP traffic would be unencrypted, so everyone (esp. on Wifi) could record passwords etc. flying around. With HTTPS, you at least need to MITM the connection to do that. If you establish trust in some other way (cert ID printed on the device?), the connection is secure.


Not just intercept: Using HTTPS prevents messing with the connection in-flight. So an attacker won't be able to inject their own payload into that web page you just requested.


Except you’re using garbage certificates so anyone could MITM you and inject whatever they like.


In what way are self-signed certs garbage? They're essentially the same as SSH host keys.


That's a separate kind of attack, not intrinsic to the protocol. If the keypair got leaked or a CA misbehaves and issues multiple certificates that can be used for a host, then yes, it can happen.


It's like calling someone you don't know, but over a secure line.


I love this idea. There are enough influential tech people who read HN, can we make this happen please?


Thirded. This sounds like a sound solution.


How is that different than how self signed certs work now?

My browser warns me, I can accept the warning for that particular certificate, and it warns me again if it changes.


At least for Chrome and Firefox, I can't easily accept a self-signed cert permanently. It asks again after I exit the browser.


Do you have them configured to clear those settings on exit? Is the certificate actually the same when you visit the site again?

Chrome and Firefox remember the acceptance of self-signed certs for a long time on my PC.


Why do you need to use self-signed certs? (As contrasted with CA-signed certs that happen to be signed by a CA that you own and trust as suggested by arwineap?)

It took me a little over an hour one evening to figure out how to create my own CA, trust it, and sign certs for all my local devices (except my UniFi cloud controller, which I admit I gave up on due to time).


Because 99.99%+ of users don't have the technical skill to do this, but still need to be able to access local devices and it would sure be good if they could do so in a secure manner?


So the answer is to subvert the global certificate infrastructure that protects web traffic? No, it isn’t. Your IoT device has no security at all if a non-technical user is setting it up or if it doesn’t have a way of accepting a user configured certificate, and you shouldn’t pretend otherwise by dressing it up in bad certificates and worthless encrypted tunnels.

Just use HTTP.


I mean if they’re worthless tunnels then so is every SSH tunnel. Should we just go back to telnet?


On your own network? Does it really matter whether you use telnet or ssh? And if it’s on a shared network, don’t you have an IT department that can set up the local key infrastructure and push out certificates?

The argument here is that we should enable lots of shitty IoT devices to masquerade as being secure, and inure browser users to click ‘yes’ to accepting a broken certificate.

If it’s on a managed network, IT can set up a certificate and push that out to client machines. If it’s on your home network you can do that (unless your IoT device can’t take a user-configured cert, in which case it’s rubbish anyways), and if you can’t then you might as well use HTTP.


Ok we can always use HTTP instead. I personally hate how using HTTPS gets harder and harder every single year.


A lot of the devices mentioned ("routers, printers, and self-hosted IoT devices") don't give ways to change the certificate. Besides, not every individual or company wants to or is knowledgeable enough to manage their own CA. For the audience in HN it might be an hour one evening, to others the instructions read like black magic.


Routing is scary magic. DNS is scary magic. Wifi is scary magic. Everything in computing is scary magic until someone writes an app with good UX for it. It's not a fundamental problem.


It is a fundamental problem, because you need specialized knowledge to understand what is being presented and asked of you in the UI.

The average user knows absolutely nothing about routing, most will throw their hands up or their eyes will swim if you so much as mention something like IP address.

They also know nothing about DNS and don't have to: because we always give them defaults that they never see and they go along with their lives.

As for wifi, once again, largely automated. Most people never change the default SSID and password. There's some manufacturers that will make a good UI, but it stops at the SSID and passwords because that's the extent of most users' understanding. Some users have a vague understanding that 2.4GHz and 5GHz is different, but don't know the significance of the difference. Channel, authentication type, and other options aren't given to users in those UIs because they simply wouldn't know what to do with it and people don't read manuals anyways.


> SSH can do encryption without requiring identity verification. It handles it by asking "Do you want to trust this new server?".

The problem is to figure out whether to trust the server you need to get its fingerprint through another channel. Is there an HTTPS equivalent of that?


You don't need to get the fingerprint through another channel. Getting the fingerprint through another channel prevents some classes of attacks. Blindly storing the first fingerprint offered also prevents a variety of attacks.


> It handles it by asking "Do you want to trust this new server?"

That's basically how it works though; your OS packages a group of trusted CA certs. You can add additional trusted CA certs, even ones minted by you, to ensure your apps trust the connection.


There are two options:

* Manually install a root certificate, which is a confusing process for most end users and a non-starter for anyone who cares about security. (Imagine walking your parents through the process.)

* Trust a self-signed certificate, which is an increasingly difficult and counterintuitive process since Chrome and Firefox started competing to see who could destroy their usefulness faster. I'm not even sure if it's possible anymore.

Neither of these are acceptable.


I mean, I’m not sure there’s a solution that will make everyone happy then. Making trusting self-signed certs easy and not scary has real security implications, because users just click through warnings.


Making casual users create their own root certificates sounds like an even worse problem. Now an attacker isn't restricted to impersonating your lightbulbs. They can impersonate any domain if they can get your private CA. Now imagine if an IoT vendor engages in questionable practices like creating the CA for you and the user only has to download an exe that automatically installs the root certificate. The benefit for the vendor would be that all devices you order from their website would be shipped with correctly signed https certificates. Later a hacker dumps the database with root CAs and uses it to impersonate your bank.


That’s why self-signed and "locally signed" should be distinct concepts, IMHO. The .local domain is already special-cased, and could provide a different UI path more akin to how SSH works. AFAICT, you can’t get an HTTPS cert for a .local domain, so it wouldn’t break the existing HTTPS security model. It’d provide a more secure way for apps like Syncthing to provide a secure local UI as well. Getting browsers to accept my self-signed certificate is a pain and makes people just use HTTP.


Honestly, I don't trust most end users to install root certificates

If you are doing something for an end user, I think it makes a lot of sense just to get a certificate; it's just not a large barrier anymore.


The mechanism SSH uses is called Trust on First Use ("TOFU") and is closer to what used to be HTTPS certificate pinning. In this scheme, certificates never expire, and if they do, clients warn about the unexpected change in certificate.

It is different from the CA PKI system, where the client trusts any certificate signed by a trusted CA without prompting the user at all, and doesn't prompt the user if the certificate for a site changes.


Self-signed certificates give you basically this. It's a bit of a hassle to mark them as trusted, but you only have to do it once.


That has not been my experience with current versions of Chrome.


Crazy idea: Why not serve an initial page over HTTP, and then implement encryption in JS using webcrypto for all subsequent calls.

I'm not sure self-signed HTTPS can do much better than this anyways.

(Yes, yes, it's a crazy idea, hehe)


You can no longer do webcrypto because the initial page is compromised.

Self signed HTTPS works for this case as long as you know the fingerprint/cert to accept.


Oh, yeah... webcrypto only works on HTTPS.

So you would need to ship a crypto library in JS, hehe :)

Self-signed certs probably do work, if you install the certificate root on your machine. It's just not something you would advise end users to do.


The other problem is that shipping a crypto library when the entire page and script are not served over HTTPS means it's no longer useful, because the crypto library itself is compromised.


> (Edit: RFQ, not my autocomplete’s RTF)

Sorry for the mostly insubstantial comment but it may help you in the future: it’s RFC (Request For Comments) not RFQ.

And it’s IETF (Internet Engineering Task Force) not ITEF.


Plex uses a combination of wildcard certificates and a custom DNS resolver to offer HTTPS on local networks, but it does require a working internet connection. [1]

You can also get a certificate through the Let's Encrypt DNS challenge without having to expose a server to the Internet, but you'll still need ownership of a domain name and either an internet connection or a local DNS server to support HTTPS using that certificate.

There is always the option of creating a local certificate authority for your devices, but this is kind of a pain. There are some new applications that aim to make this easier [2], but there is no easy way around having to install the root certificate on each device.

[1] https://blog.filippo.io/how-plex-is-doing-https-for-all-its-... [2] https://github.com/smallstep/certificates


If you just want the green lock to show up, the device can get a certificate from Let's Encrypt. The manufacturer would need to provide an API that lets the device pass the DNS challenge.

For example, you could have serialnumber.manufacturer-homedevices.net, and each device would get a cert for its serial's host name. Ideally, you should properly secure that API with some form of attestation key included on the device. Alternatively, the host name could be e.g. the hash of the device's generated key (that way you could ship the devices without placing individual keys on them, but the host name would change after a factory reset).
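The hash-of-the-key variant might look like this sketch (the zone name and truncation length are illustrative, not anything a real vendor uses):

```python
import hashlib

def device_hostname(device_public_key: bytes,
                    zone: str = "example-homedevices.net") -> str:
    """Derive a per-device hostname from the device's generated public key.
    After a factory reset the key changes, so the hostname changes too."""
    digest = hashlib.sha256(device_public_key).hexdigest()
    return f"{digest[:20]}.{zone}"
```

The vendor's API would then issue DNS challenge records for that hostname only, and the device would request its own cert from Let's Encrypt under that name.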

Making this actually secure is hard, though, because you need the user to visit the URL for his device. If an attacker can simply get a cert for differentserial.manufacturer-homedevices.net and direct the victim there, you don't win much actual security.


I'm not a huge fan of it, but, it seems like the way things are going is to simply run a service which is basically a large proxy.

Your device connects out with some kind of persistent connection to their central service; then requests to your device go to their server, which does AAA and routes to your local device. It fixes the SSL issue, avoids any NAT headaches, enables fully remote access, and, most importantly for PMs, it makes the device useless without your server-side components. If there is any local accessibility at all, it can be neutered or reduced.

I don't entirely hate this model. It's not my favorite, but it's the way things are going.


I would be happy if my router supported letsencrypt.

Why would I even bother copying and distributing self-signed certificates if I can just properly get a certificate for my own personal router?

It’s idiotic that people still trust pure HTTP and have no option of switching.


If your router needs configuring before it can access the Internet, then it can’t use certificates that require the internet to generate or validate.

Or if you change ISPs and need to change your router's internet connection configuration, your router cannot be accessed.


When was the last time you needed to configure your router to access the internet?

I understand if that router is something industrial, but then you can probably figure out how to do that over SSH anyway (which is secure).


I think this is the only practical answer, unfortunately. Everything else might possibly be made to work for a personal project, but definitely isn't an option at scale.


The way plex does it works great at scale.

It requires people to care more about "self hosted" than "PM says this will centralize user access and allow us to collect data and better monetize"

I'm not knocking either model. They both work (technically) but you need to understand your market and what works better for them.


Buy a domain, create a subdomain for local use, and issue ACME certs with Let's Encrypt every 60 days.

If your vendor device or software doesn't support automated certificate rotation, put nginx/haproxy/envoy in front of it.
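The "every 60 days" cadence works because Let's Encrypt certs are valid for 90 days, leaving a 30-day buffer for failed renewals. A small sketch of that renewal decision (the window constant is my assumption of the common convention, not an LE requirement):

```python
from datetime import datetime, timedelta, timezone

# Renewing with 30 days left (i.e. around day 60 of a 90-day cert)
# leaves a buffer for retries if a renewal attempt fails.
RENEW_BEFORE = timedelta(days=30)

def needs_renewal(not_after, now=None):
    """Return True once the cert is inside the renewal window."""
    now = now or datetime.now(timezone.utc)
    return not_after - now <= RENEW_BEFORE
```

A cron job or systemd timer would run this daily and trigger the ACME client (and a proxy reload) only when it returns True.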


This won't work either, btw: you'd have to request a new certificate from Let's Encrypt for each individual device. LE has several rate limits that will prevent that from working for anything more than a trivial number of devices: https://letsencrypt.org/docs/rate-limits/

The only way I see how this would work is if you not just purchase a domain but also an internet-facing server, and do the renewal and certificate management centrally for all devices - at which point your device is definitely not standalone anymore.


This will work fine. LetsEncrypt will raise ratelimits for you. I've done it for a commercial CDN and they were very accommodating and helpful.

Plex does this, for example, though they use DigiCert's free certificates: https://www.plex.tv/blog/its-not-easy-being-green-secure-com...


The LE rate limits are (mostly?) for new cert issuance. I’ve never run into a rate limit on automated renewals and seem to recall it was either non-existent or comically far away from anything any individual would hit.


You can do wildcard certs with LE, I run hundreds of k8s services all secured with LE and wildcard certs.


We're talking about customer hardware. If someone looks at the insides of the device and finds, of course, the private key for your one shared wildcard certificate, the issuer is required to invalidate it immediately.


You can, but that wouldn't quite work for the prosumer router manufacturer case the OP mentioned: LE would revoke the cert once you distributed it.


You can, but you can't (by policy) distribute keys across multiple customers.


I have a nasty habit of requesting revocation of such compromised keys whenever I find them. CAs are required to revoke within 24 hours, I think, though unfortunately revocation is surprisingly ineffective.


Do you actually find those often? I've actually never seen one. I will admit I've also never specifically looked very hard.


I'd say one every couple of years.

https://letsencrypt.org/docs/certificates-for-localhost/ has great documentation on that topic, including more examples.


This is a ridiculous requirement that is not at all practical.


How is it not practical? It's really not hard to set up, and there is great documentation out there.


I get the feeling you have no real experience either running a company network or dealing with end users and home networks. Any of these solutions work fine for a majority of people who just use their laptop in Starbucks, but they really break down when you need to start doing anything more complicated than that.


Please, educate me as to what I am overlooking. The requirements of buying a domain name and getting a LE wildcard cert should be trivial to someone with the experience you seem to have.


Why would the average customer of IoT products have the same expertise as the person you are replying to?


All of these juvenile "It's easy! Just implement ${SUPER COMPLICATED INFRASTRUCTURE WITH SPECIFIC REQUIREMENTS AND LIMITATIONS I'M GOING TO PRETEND AWAY}" replies from eager-idiot hacker tweens are just trolling.


You can also run your own CA.


> This is a ridiculous requirement that is not at all practical.


Really. This: https://jamielinux.com/docs/openssl-certificate-authority/ gives you a CA in about an hour. HashiCorp Vault will give you a CA in 5 minutes. certstrap will give you a CA in 15 seconds. It’s 2020, it ain’t voodoo anymore.
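To make that concrete, here is a minimal sketch of the whole flow, driving the openssl CLI from Python (subject names are placeholders; a production CA would add SANs, name constraints, and key protection, and assumes the openssl binary is on PATH):

```python
import os
import subprocess

def run(*args):
    """Run a command, raising on failure."""
    subprocess.run(args, check=True, capture_output=True)

def make_ca_and_leaf(workdir):
    """Create a self-signed CA, then issue a leaf cert for a local device."""
    ca_key = os.path.join(workdir, "ca.key")
    ca_crt = os.path.join(workdir, "ca.crt")
    dev_key = os.path.join(workdir, "dev.key")
    dev_csr = os.path.join(workdir, "dev.csr")
    dev_crt = os.path.join(workdir, "dev.crt")
    # 1. Self-signed CA certificate (10 years)
    run("openssl", "req", "-x509", "-newkey", "rsa:2048", "-nodes",
        "-keyout", ca_key, "-out", ca_crt, "-days", "3650",
        "-subj", "/CN=My Home CA")
    # 2. Device key + certificate signing request
    run("openssl", "req", "-newkey", "rsa:2048", "-nodes",
        "-keyout", dev_key, "-out", dev_csr, "-subj", "/CN=printer.local")
    # 3. Sign the CSR with the CA
    run("openssl", "x509", "-req", "-in", dev_csr, "-CA", ca_crt,
        "-CAkey", ca_key, "-CAcreateserial", "-out", dev_crt,
        "-days", "825")
    return ca_crt, dev_crt
```

Trusting `ca.crt` in the OS or browser store then makes the device cert validate like any CA-signed certificate.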


Just spinning up a CA is a couple of commands. Running one sanely (to include security of the private keys, availability and auditability of the signing machine, keeping backups, publishing a CRL, setting up ACME if you want any kind of automation) is significantly more involved.


But this is silly. If this isn’t completely trivial to add to your app then something has gone horribly wrong.

* Every machine in your infra already has backups, right? Nothing about your signing boxes is special in this regard.

* All your services are already HA, right? The API servers that now have to run some glorified OpenSSL commands aren’t any different than your normal API endpoints.

* You already have to protect secrets on your machines. DB passwords, API keys. What’s one more?

* You don’t have to implement ACME. These are your devices talking to your devices.


Nothing different than any long lived component in any infrastructure. There is no reason to look for reasons not to use a CA.


For a company? Absolutely not. In private? Probably not worth the effort, just skip the cert warning.


Most hardware in this category really needs to be set-and-forget, whether online or not. You can't have every random sound system and light controller having to dial out to a third party every month. You need to be able to come back five years later and still be able to configure the hardware.


Without buying a domain (and continuously spending money to keep it owned).


Run your own CA internally and handle the CA distribution problem with MDM tools.


I admit, that's a solution, even if a very unpleasant one: installing a custom root CA is intentionally complicated, so this is hardly doable as an onboarding experience. The setup must be repeated for every single client device that should access the server.

There remains the question how I would get the CA certificate onto client devices in the first place.

Lastly, with asking consumers to install a CA certificate, I ask for a significantly more powerful permission than if I could just have them trust my certificate. This seems like a step backwards security-wise.


> Lastly, with asking consumers to install a CA certificate, I ask for a significantly more powerful permission than if I could just have them trust my certificate.

CA certificates can be constrained. https://tools.ietf.org/html/rfc5280#section-4.2.1.10
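As a concrete illustration, an openssl x509v3 extension section along these lines (the permitted DNS subtree is a made-up example) yields a CA that RFC 5280-honoring clients will only trust for names under that subtree; marking it critical means implementations that don't understand the constraint should reject the CA rather than silently ignore the limit:

```ini
[ constrained_ca ]
basicConstraints = critical, CA:TRUE
keyUsage        = critical, keyCertSign, cRLSign
nameConstraints = critical, permitted;DNS:.home.example.net
```

The section is then referenced when creating the CA cert, e.g. `openssl req -x509 -config openssl.cnf -extensions constrained_ca ...`.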


Are common certificate validation libraries honoring these constraints?

When I tried to use this many moons ago, most things ignored the constraints; although I could mark the extension critical, and then some (but not all, yay) of the things that didn't understand would refuse the CA.


IDK NSS seems to have code to verify it:

https://searchfox.org/mozilla-central/source/security/nss/li...

As does webpki:

https://github.com/briansmith/webpki/blob/482627c40dad2148da...

But haven't tested it (or checked other libraries).


Update: tested it with openssl and webpki. both claim to have support but it only works with openssl. For webpki I had to file two bugs:

https://github.com/briansmith/webpki/issues/134

https://github.com/briansmith/webpki/issues/135


Can you name and shame those that ignored the critical extension? Sounds CVE-worthy. A date to guess the versions you used would also help.


No, it was on the order of 5 years ago; everybody was garbage back then. But, if this had become usable, I would expect to have seen articles about using it since then.


How do you actually generate a constrained CA certificate? I have tried to do this for a long time but openssl is inscrutable.


There seems to be a guide for openssl here [0] but it seems kinda complicated. This discussion inspired me to add name constraints support to rcgen [2]. If you aren't afraid to write Rust, you should give using it a try.

[0] https://www.marcanoonline.com/post/2016/09/restrict-certific...

[1] https://tools.ietf.org/html/rfc5280#page-41

[2] https://github.com/est31/rcgen/commit/059cc19fcd1b8bb57feed5...


Thanks!


It is no more complicated than a self-signed certificate. Two clicks in Firefox, 4 taps in iOS.


Every time, and no tracking if device identity changes.


That is also an insane and unrealistic suggestion.


~$5/year (US) for a domain, and a one time investment of setting up a few scripts might save you a lot of time in the long run.


Honestly, my main issue is not even the price, it's that devices cannot be stand-alone anymore. Even if my device is purely for LAN use and wouldn't need the internet at all, I now need to ensure it has an internet connection and I have to keep a domain owned that must be constantly renewed.

The device will also only be accessible if an internet connection is present, even if both the device and the client are in the same LAN - because the client has to access the device through the domain.

This means, should I ever lose the capacity to support the device and renew the domain, the device will become useless, even if technically, it is still completely functional.


> Honestly, my main issue is not even the price, it's that devices cannot be stand-alone anymore.

That’s not true at all. I’ve created a CA and a script to generate and sign server certificates, and I’ve generated them left, right and centre now for my very standalone, local-network-only services with no access to the internet whatsoever. I added my CA to my browsers and my iPhone and everything works perfectly.
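A sketch of what such a script boils down to with plain openssl (all filenames and hostnames below are made-up placeholders, not the commenter's actual setup):

```shell
# One-time: create the local CA
openssl req -x509 -new -nodes -newkey rsa:2048 -subj "/CN=Home Lab CA" \
  -keyout homelab-ca.key -out homelab-ca.crt -days 3650

# Per service: key + certificate signing request
openssl req -new -nodes -newkey rsa:2048 -subj "/CN=printer.lan" \
  -keyout printer.key -out printer.csr

# Sign it with the CA; modern browsers require a subjectAltName
# and ignore the CN on its own
printf "subjectAltName=DNS:printer.lan\n" > printer.ext
openssl x509 -req -in printer.csr \
  -CA homelab-ca.crt -CAkey homelab-ca.key -CAcreateserial \
  -days 398 -extfile printer.ext -out printer.crt
```

Then homelab-ca.crt is what gets imported into the browsers and the iPhone; the per-service key and cert go on the device.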


Will you also add it to the iPhones of other people that would want to use the device? (Or more realistically, would they let you add it?)


Depends on them I guess. If it's a corporate phone then it's no problem. The rest can either add it or get used to cert warnings.


If you're in a context where you can personally install it on phones of friends and relatives, that will work, I agree.

I'm thinking of an example to illustrate what I mean. (Sorry if this appears to be moving the goalposts)

Imagine some small business is selling a home surveillance camera, or a network printer or whatever else. The thing is that it's a product intended for private, layman consumers and intended for LAN use.

With HTTP, you could add a local web server as a simple way to manage the device pretty easily: Just open a server, communicate the IP address to the user, done. No internet connection required, no continuing support from the company required. Even if the company went bust, the existing units continued to work and the web interface stayed accessible.

There seems to be no good way to replicate this with HTTPS. The closest seems indeed to be a custom root CA - however, then you need to communicate to your users how to install the CA certificate on their own devices, clicking through all kinds of scary warnings and dismissing "this section is for admins only" notices. I predict that not a lot of people would do that.

This also leaves you with the challenge of safely getting the certificate to your users. You could serve the certificate from the device over HTTP - however, then you'll require that your customers download a root certificate over an unencrypted connection without any integrity checks and install it on their device. This seems like ripping open a major security hole.

Meanwhile, even if the company purchases a domain and attempts to get a certificate from a public CA, deployment will be difficult as described in all the other branches of this thread.

In short, I think you can pick any three of the following four conditions, but I see no way to achieve all four at the same time.

(1) use modern web features (all recently added and all future features require https)

(2) have your site usable on a client device that does not belong to you

(3) present a non-confusing user experience (no cert warnings, etc)

(4) have the device stay accessible even after you stop actively supporting it (by purchasing domains, running cloud services, having deals with CAs, etc etc)


> This also leaves you with the challange to safely get the certificate to your users.

Because the hardware vendor does not own nor configure the private network, they are not able to certify to the network’s users that a particular network node is the device it’s supposed to be, and not an impersonator. Only the network administrators can do that, and so it is the network administrators that must generate the certificate and install it on the device. In this way the admins bestow a programmatic declaration of trust on the network node.

The device manufacturers can only provide tools for showing that the device was not tampered with. TLS/SSL certificates are not for that purpose.


This only applies if they want to access internal services without cert warnings, so asking them to install a cert seems reasonable?


> Honestly, my main issue is not even the price, it's that devices cannot be stand-alone anymore.

I'm wondering where the impression of "not anymore" comes from. Really, the situation hasn't changed much. You can have your HTTP web interface. You can have HTTPS with a self-signed cert and click away the warning. The only thing that really has changed is that for your HTTP connection you will get a warning that the connection is not secure.

I don't think the ability of browsers to load HTTP pages will go away any time soon.


Aren't browsers preventing submission of form data over http now?


No. Your DNS can also be locally, so you have no Internet dependence.


A .net is 83 cents a month.


That's usually a limited special offer. Not everyone wants to change domains every year.



There’s always the .local TLD, which is reserved for this use case:

https://en.m.wikipedia.org/wiki/.local


That works, but you can't get public certs for it, because you can't prove you own that domain (indeed, you don't :))


Please remind me where I said anything about public certificates. ;-)

xg15 is going to have to run a self-hosted Certificate Authority (CA) and generate certificates himself.


That article goes on to state that .local is reserved by RFC6762 (multicast DNS), which if you use that domain on your network, will cause problems with any services using it, usually Macs or iPhones.

This document specifies that the DNS top-level domain ".local." is a special domain with special semantics, namely that any fully qualified name ending in ".local." is link-local, and names within this domain are meaningful only on the link where they originate. [...] Any DNS query for a name ending with ".local." MUST be sent to the mDNS IPv4 link-local multicast address 224.0.0.251 (or its IPv6 equivalent FF02::FB).

I'd recommend using something like .lan instead.


I think you’re talking about running DNS locally (not sure) and resolving .local addresses by DNS. In that case, yes, the devices that do lookups by mDNS will experience a delay caused by first querying mDNS before falling back onto DNS. The solution is to set up mDNS for the internal resources.

Using an unregistered domain like .lan has serious security implications. See here: https://serverfault.com/a/17566


.lan is called out in appendix G of the MDNS RFC as "not recommended, but many people do this".

Personally speaking, I'm not too worried about .lan getting registered as a gTLD anytime soon. I'm a lot more worried about forgetting to renew my domain and having things horrifically break if/when that domain gets picked up by someone else. This is a lot more likely...


I’m not sure I understand... What would break on your local network if a public domain you own and use only for internal resources is registered by someone else? How is this different from making up a domain name? In both cases you have to set up something to resolve the names to IP addresses on your local network, be it a hosts file or DNS. I would expect that to keep working regardless of the ownership of the domain name.


I would have said the same about .dev, and did, until Google came along and registered it.


Why not just use mDNS too and stick with .local?


I haven't heard about that yet. This sounds interesting indeed. But how would I get a valid certificate for a .local domain?


You need to set up your own “chain of trust” to verify your self-generated certificates. You can run your own Certificate Authority for example. (There are other approaches too.)


If only we had NameConstraints: we could have a CA limited to *.clientdevices.manufacturer.com, installed in everyone's trust root.


Installed? Everyone?

It would be enough to send it as an intermediate CA cert, no need to install.

Going the self-signed DNS name restricted CA way would likely still not fly with browsers, because there's no way to securely deploy the trust root. (Because if it requires user interaction to install that can be exploited by malicious actors.)


You can always issue the TLS certificate on an internet-facing system that can do the corresponding ACME challenge, then give it to the internal service and have all clients of that service resolve the domain name for the certificate via a static configuration in /etc/hosts. That's how I do TLS for my intranet-only LDAP server.
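For anyone copying this setup, the client-side piece is just a static hosts entry that pins the publicly-validated name to the intranet address (the name and address below are made up for illustration):

```
# excerpt from /etc/hosts on each client machine:
# the cert was issued for ldap.example.org via an internet-facing
# ACME challenge, but clients resolve the name to the LAN address
192.168.1.10   ldap.example.org
```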


It would be nice if there were a way to be a CA for a subdomain. Then each manufacturer could sign *.mydevice.com.


I've just resigned myself to accepting the fact that I am not the browser vendor's target market anymore and I'll have to keep an older copy of FF around for the sole purpose of accessing devices I own that will never see an update to modern crypto/certificate standards.


Do self-signed certs not work? Yes, you have to tell your browser to permanently accept them the first time you connect, but after that, they work.
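For reference, generating such a self-signed cert is a one-liner (names are placeholders; the -addext flag needs OpenSSL 1.1.1 or newer):

```shell
# A minimal self-signed cert with a SAN, which browsers require
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=device.local" \
  -addext "subjectAltName=DNS:device.local" \
  -keyout device.key -out device.crt -days 365
```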


For some reason, iOS Safari won't do like all the other browsers, show a warning and then let you access. No, it outright rejects self-signed certs. You have to go through the trouble of installing the root CA into the phone, which is not practical.


Why, as a vendor, would you use a self-signed certificate that causes the browser to scream at the customer when you could just not use TLS and serve plain old HTTP?


Some features require Secure Context. Browsers just don't enable those features in a context that isn't secure (HTTP is only considered "secure" on the loopback network to your own machine). If it's a Javascript API it returns an error, if it's an HTML or HTTP feature it doesn't work. "Here's a nickel kid, get yourself a secure context".

Both Chrome / Chromium and Firefox have explicitly set policy that new features (as opposed to tidier ways to do things that already exist like DOM improvements) will require Secure Context, and there's already a weak assumption that even some tidying up will go into secure context when the rationale for not doing so is shaky (e.g. some of the web crypto features that needn't technically require Secure Context do anyway).


I mean yes, browsers have amply demonstrated they don't care about secure local device communication at all. Mostly from ignorance and disinterest. It's their loss, just means everyone has to install an app or that smart lightbulb only talks to the vendor cloud.


Because the alternative is to embed a TLS private key that would allow you to MITM every other one of those devices. Someone extracted it? Looks like you have to either (a) bury your head in the sand or (b) rollout an expensive recall to change certs on those devices.

Why use slightly compromised HTTPS versus plaintext HTTP? Same reason they have those super cheap locks on diaries from the 90s: it's a deterrent. Makes it a little harder to do a bad thing.


You have already answered why no one in their right mind would embed a shared certificate across all devices. I don't think you are being realistic with yourself when you believe people use self-signed certificates; they don't.

You are missing what happens instead. There is simply no web management interface on the device anymore. You need to download the vendor's app to configure and use the device. Maybe, if the vendor cares, they use their own CA to secure a local connection to the device. Much more likely, the app and device exclusively talk to their cloud and use that as a middleman to exchange information.


But it also makes it a little harder for the user to do what they want too because they have to click through a (correctly) scary-looking security warning.


Chrome seems to intentionally forget you accepted self-signed certificates after some period of time.


I feel like Chrome has generally become much more amnesic as of the past few months. I've had to sign in again to various services more often, which isn't a bad thing, I'm just not sure what (if anything) changed.


Unfortunately the answer is no.


Backdate your self-signed cert. So far that works around any validity-length restrictions.



The answer to that may be to stop using the web browser to do damn near everything. Your computer will load an app, distributed by a trusted app store or downloaded directly from the device, that talks only to the device on the local network, and to a whitelist of allowed internet hosts, and nothing else. The app will have a client certificate so the device won't blindly trust the computer either. Firefox will only communicate over the local network in a limited way, enough to download the app from the device or get a link to the manufacturer's app store profile. Or that discovery can become part of the operating system and browsers will stop talking over the local network at all.


This CCADB vote provides the context missing from this link to a Chromium patch. After the CA issuers rejected 2017 and 2019 proposals (Ballot 185, Ballot SC22) to reduce certificate lifetimes to ~1 year, Apple announced enforcement of the rejected 398-day limit across all platforms starting 01 Sep 2020, the CAs reversed their position while complaining that they were being forced to, and Chromium is now implementing the policy as well.

https://cabforum.org/2017/02/24/ballot-185-limiting-lifetime...

https://archive.cabforum.org/pipermail/servercert-wg/2019-Se...

https://ccadb-public.secure.force.com/mozillacommunications/...

> SUB ITEM 3.1: Limit TLS Certificates to 398-day validity Last year there was a CA/Browser Forum ballot to set a 398-day maximum validity for TLS certificates. Mozilla voted in favor, but the ballot failed due to a lack of support from CAs. Since then, Apple announced they plan to require that TLS certificates issued on or after September 1, 2020 must not have a validity period greater than 398 days, treating certificates longer than that as a Root Policy violation as well as technically enforcing that they are not accepted. We would like to take your CA’s current situation into account regarding the earliest date when your CA will be able to implement changes to limit new TLS certificates to a maximum 398-day validity period.


Nit: It's CA/B Forum (Certificate Authority / Browser Forum, a standing meeting between the major browser vendors - which are also roughly the set of major OS vendors except Mozilla stands in for the Free Unixes - and the major publicly trusted Certificate Authorities). The original purpose of this meeting was to find common ground between these two groups and this has borne considerable fruit over the years in the form of the Baseline Requirements.

CCADB is a totally different service run by Mozilla and Microsoft (using Salesforce, I presume because they both agree this is terrible but neither can accuse the other of using their preferred pet technologies?) notionally open to other trust stores to track lots of tedious paperwork for the relationship with trusted CAs. Audit documents, huge lists of what was issued by who and to do what, when it expires, blah blah blah. Like a public records office it's simultaneously fascinating and a total snooze fest. Mozilla is using it in this case to conduct their routine survey of CAs to check they understand what they're obliged to do, they're not asleep at the wheel and so on.


Sounds like CAs will be forced to keep shrinking cert length until everyone standardizes on 1 month. They no longer have any real power.


A less labor-intensive approach would be require CAs to revalidate the 'proof of ownership' basis of issued certificates monthly, and publish a revocation via CRL if the validation times out or fails for 1 month + 1 day. This would further encourage automation of the ecosystem without requiring redeployment in the cases where automated verification passes each month.


Less labour intensive for whom?

Anyone using email validation now needs to click a link every month, or their cert goes away.

I used to have the unfortunate task of managing a massive SAN cert used for white-label hosting with a bunch of our customer's domains.

Getting every single customer to get their tech person to look at the mailbox and click a link was often a multi-month process.


Less labor intensive than requiring validation and deploying a new signed certificate every month.


>and publish a revocation via CRL if the validation times out or fails for 1 month + 1 day.

If you're in a position to MITM using a stolen certificate, you're probably also in a position to block the CRL response from going through. Since failing to get an updated CRL doesn't result in a security warning, your CRL proposal is essentially useless.


> you're probably also in a position to block the CRL response from going through

Not if the certificate is OCSP-Must-Staple.


One of the arguments that I've seen for shorter-lived certs is that revocations aren't honored particularly well. If we could fix that, then your proposal would make sense (but I'm not sure that's doable)


Misses the point. The concern is all historic traffic being vulnerable to a single encryption failure.

Short cert lives make certain decloaking much, much more difficult.


It looks like 84% of sites [1] use forward secrecy with modern browsers, which should mean historic traffic is not vulnerable to a leaked key.

It seems like driving this number up is a better way of dealing with historic traffic than quickly expiring certs. Limiting the duration of leaks of future traffic seems like the right justification for short lived certs.

[1] https://www.ssllabs.com/ssl-pulse/


However in TLS 1.2 and earlier in most cases there is also a potentially long-lived key inside the server to enable faster (1-RTT) resumption. Bad guys who obtain this key get to decrypt all TLS sessions protected with that key, even if the client never used resumption at all. This is fixed in TLS 1.3, where having that long term key only lets you see inside subsequent resumptions that don't redo the DH key exchange.

That recent GnuTLS bug resulted in bad guys not even needing to steal that resumption key for any servers using affected versions of GnuTLS because GnuTLS was just initialising it to zero...


I heard perfect forward secrecy is intended to prevent decrypting past traffic.


Good!


CAs are resisting any and all changes because it's easier; it makes sense.


CAs are resisting because the only people who buy from them are those who can't set up certbot and Let's Encrypt. As soon as they can't issue for longer than a year, their market is being whittled away.


> the only person to buy from [CAs] is someone who can't set up certbot and lets-encrypt

Digicert is in the process of migrating their customers to ACME (the issuance protocol used by Let's Encrypt and certbot). Where's your god now? :)


And that's two out of how many?


Will browsers start allowing self signed certificates though?


As long as you first create a root certificate, you can create as many certificates as you want.


Assuming non-chained root CAs remain trusted.

I can foresee the browsers eventually treating self-created CAs like they currently treat self-signed certs. If they're not traceable to a trusted root CA then there's no accountability, from a browser perspective, in the event of abuse or breach.


Then people will create their own root CA and use it to sign the existing root CAs. Whatever it takes. Corporate users need internal certificates.


Self-signed certificates are insecure, so, no.


Aren't they allowed already, with a click-thru warning screen? And you can also choose to trust them permanently, aka trust on first use.


No... web of trust is an important aspect to https.


s/web of trust/centralization/


s/centralization/validating ownership

Without centralization I can MITM at the coffee shop and steal passwords.


WoT would fix that, unless the other coffee shop patrons have (directly or indirectly) trusted you.


They should at least allow for local addresses


Remember the good old times when it was not an almighty cartel of browsers that controlled your internet?

This is such an arbitrary decision and so much a pain in the ass. Again, a limited number of people used their corporate interests to decide for the whole world with almost no discussion.

The worst is that the "security" argument for this change is quite weak. Yes, we can think that shorter certificates are a little bit better to trust for the user, but that should be the choice of the website that you visit.

Now, you as a user are so stupid that browsers will decide for you which websites are deemed safe for you to visit, the same as with app stores. Compare that to the good old times, like traditional PC software installation, where it was you, the user, who was free to decide which websites you wanted to trust: google.com vs myshaddyfraudyweb.com


I'm surprised to see this as the highest comment on this post.

This is a clear security win, and thus good for users. And no, I don't trust websites to have my best interests in mind, not remotely. Hell, if browsers hadn't started warning about insecure connections then I suspect that even to this day most websites would still be insecure. We used to leave it up to the choice of each website, and that was a clear failure, and now they're being forced to provide better security, which is a clear win.


I agree with you about publicly available websites, but I'm not convinced this policy makes sense for IoT devices, especially for ones that aren't connected to the internet.


CA/B isn't a cartel, indeed it jumps through a bunch of hoops to ensure it isn't a cartel. Cartels are illegal in many countries (the one you're most likely thinking of right now, OPEC, doesn't need to care that cartels are illegal because its members are sovereign entities, and thus they decide what the law is)

Moreover, this didn't come from CA/B anyway, it was rejected there. CA/B agreed the previous 825 day limit, and the 39 month limit before that, but this new rule did not get support at CA/B so Apple imposed it unilaterally (and with some really poor communication but whatever).

Google and Mozilla have just decided that since they wanted this limit, and Apple has effectively imposed it anyway, they might as well go along for the ride.


I'm pretty sure Google, Mozilla, and Apple are who they meant in the first place, not CA/B.


>Compared to the good old time, like traditional pc software installation, where it was you, the user that was free to decide the websites that you wanted to trust: google.com vs myshaddyfraudyweb.com

People can barely tell whether it's really microsoft calling them saying their computer is infected. What makes you think they'll be able to tell the difference between google.com and google-secure-login.com, or whether they should download the "codec pack" that their shady streaming site is offering?


Corporations think they have to protect people from themselves now because people are now required, even encouraged, to blindly run all remote code they're sent. It's because browsers have become the OS. And now it's standard to metaphorically open every email attachment you receive.


> Yes, we can think that shorter certificates are a little bit better to trust for the user, but that should be the choice of the website that you visit.

That sounds like a disagreement; it benefits the user, so let the website opt out? Because websites are known to have users' well-being in mind?


“Yes, we can think that shorter certificates are a little bit better to trust for the user, but that should be the choice of the website that you visit.”

I would think the choice on how long to trust a certificate should be on the user, possibly using the hint that the creator of the certificate gave. You wouldn’t trust a certificate from evil-empire.com, no matter its expiration date, would you?

The discussion should be about whether the browser should make that decision on behalf of the user. I’m not sure I’m in favor of that. On the other hand, browsers already do a lot in that domain, for example by their choice of trusted root certificates (and changes to that list)


Yes, maybe it was not clear, but that was what I wanted to say: it should be the job of the website to decide the expiration date of its certificate. So they decide if they want to look shady and careless and use 10-year certs, or look trustworthy and serious and use 6 months. And indeed users would be able to use that to determine the trust they give to a website.

So in the end, websites determine their 'trust value' without the browsers 'police', that will let the possibility for special cases.

For example, if I make a device that is to be used without internet for 3 years, logically the user will not see an issue with a 5-year certificate.


You mean the good old days of IE 5.5?


No, I don't. Browsers have always controlled the internet since the web became the dominant way the internet is used. And I'm really quite happy for them to do this, because lord only knows helping my various relatives with their computers has proven to me that someone needs to.


Yeah those good old times when Comodo was hacked and issued certificates for gmail.com and nobody really cared. Or when some shady CAs sold intermediate certificates in devices so you could man in the middle all your network connections (and everyone else's, too).

So bad those times are over and we have this browser cartel enforcing some basic security standards for TLS. Screw them!


Shortening the validity duration does not stop any of those issues. It just shortens the duration of a potential attack to one year.


The validity time is part of a process where browser vendors have tightened rules for CAs over time.

We got plenty of gradual improvements over time. Validity time does not stop incidents, but it makes the impact smaller and allows ecosystem improvements to propagate faster.

Take for example Certificate Transparency, which is one of the most important ecosystem improvements. It was required for new certificates in 2018. But we still can't rely on Certificate Transparency logging for all certificates, as the certificate lifetimes were so long.

In the future such improvements will take at most 1 year until all certificates have them.


Eliminate, no. But the goal of security is generally not to make breaches impossible, but to mitigate them / make them harder to achieve.

It's an uphill battle but I'm glad browser vendors are fighting it.


From the source code:

https://chromium.googlesource.com/chromium/src/+/ae4d6809912...

  // For certificates issued on-or-after the BR effective date of 1 July 2012:
  // 60 months.

  // For certificates issued on-or-after 1 April 2015: 39 months.

  // For certificates issued on-or-after 1 March 2018: 825 days.

  // For certificates issued on-or-after 1 September 2020: 398 days.
The source code also requires certificates issued before 1 July 2012 to expire on Jul 1st, 2019 at the latest.


On 30 April 2018 it became a requirement (in Chrome) for all certificates issued after that date to be recorded in a public Certificate Transparency log[0]. A certificate issued on 28 February 2018 could therefore be issued without being logged, while having a validity period of 39 months. Such a certificate would be valid until 28 May 2021.

Does that mean that next May, for the first time ever, the domains of all HTTPS sites on the web will be recorded in a public log? I think the only caveat to that is wildcard certificates.

[0] https://www.feistyduck.com/bulletproof-tls-newsletter/issue_...


In practice it's probably already true or very close to true that names from certificates in the Web PKI that are intended to be publicly accessible are all logged. As you observe if the name listed is a wildcard this doesn't tell you which (if any) of the names implied by that wildcard actually exist, and indeed no names for which certificates were issued need necessarily exist, the rule is only that if they did exist they'd belong to the subscriber.

Although the Chrome mandate only technically kicked in on 30 April in practice most CAs were considerably ahead of that date, in addition some of the logs are open to third parties uploading old certificates, Google even operates logs that deliberately accept certain untrustworthy certificates, just because it's interesting to collect them.

If you're excited to know what names exist, the Passive DNS suppliers can give you that information for a price today, their records will tell you about names that aren't associated with any type of certificate, and lots of other potentially valuable Business Intelligence. They aren't cheap though, whereas harvesting all of CT is fairly cheap, you can spin up a few k8s workers that collect it all and store it wherever (this is one of the tasks I did in my last job).


This is Google and Mozilla aligning with Apple's earlier announcement (https://support.apple.com/en-us/HT211025).

The CABF has talked about doing this before, most recently in SC22 (https://cabforum.org/2019/09/10/ballot-sc22-reduce-certifica...). In that case all browsers supported it, but it wasn't passed by the CA side.


This may be good for security, but it is extra burden for small web developers and individuals. Big players will have cert renewals automated.

It's possible and free for small players to use Let's Encrypt, but that still takes some time to set up, manage and maintain over time.

Without automation, you've got an annual chore to do or your site goes offline.

I think some hosts are already starting to offer free and easy SSL certs to their small customers, but I do expect automated SSL management to be generally available for the masses before this takes effect.
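If your host doesn't do it for you, the "automation" can be as small as a single cron entry (assuming certbot is already configured for your domains; `certbot renew` only touches certs that are nearing expiry):

```
# crontab entry: attempt renewal twice a day, quietly
17 3,15 * * *  certbot renew --quiet
```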


Check out Caddy Server. It was only a few days ago that I was still managing my own certs and renewing them with a cron job. Caddy now acts as my proxy for my various web domains and it handles certs automatically. Like, literally, you fill out a few lines in the config (called a Caddyfile), run `caddy run`, and it gets the certs itself. And as long as it's running, it renews them automatically.
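For the curious, the few lines in question look roughly like this (Caddy v2 Caddyfile syntax; the domain and upstream port are placeholders):

```
example.com {
    # Caddy obtains and renews the certificate for example.com on its own
    reverse_proxy localhost:8080
}
```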


Giving an Internet-connected program autonomous write-access to system-critical filesystems is not considered good practice in production environments.

Much better to have a separate central cert management system that handles renewals and pushes the certs outwards to the DMZ systems.


This is true for enterprise, but for small business Caddy or Traefik is totally fine.


Caddy is good for enterprise use too because of its configurable, pluggable storage backends (doesn't have to be a file system). You can achieve the permission segmenting your company requires.

You can also use it as a certificate manager independently of a web server if you want.


Responsibilities have changed a bit. If you're going to host a website you are going to have to put a modicum of effort into ensuring that you are not harming others by doing so.


>are not harming others

How is HTTP harmful when you visit my website about amateur radio? An expired cert is no more harmful than bare HTTP in this non-commercial, non-institutional, personal context. It's the one being discussed in this sub-thread in case you missed it and assumed the normal HN business context.

The burden is real and completely unnecessary for personal websites. This makes the web more commercial by imposing commercial requirements on everyone.

It's what killed off self-signing as a speed bump against massive surveillance and centralized everyone into the benign dictatorship of Let's Encrypt. But centralization will lead to problems when money is involved. Just look at dot org.

The real harm comes from this fetishism of commercial/institutional security models.


> How is HTTP harmful when you visit my website about amateur radio?

"Unharmful" HTTP sites are used to silently hack people's computers and keep them under observation for months. Every unsecured site contributes its small piece to keeping the web unsafe for people who need it to be safe.

https://www.amnesty.org/en/latest/research/2020/06/moroccan-...


This is exactly the same as blaming getting shot at a neighbor's BBQ on the neighbor for not hiring private security to deal with the government army specifically attacking you.

If your threat model includes nation state attacks you're gonna have problems no matter what. Change your personal behavior accordingly. Don't tell everyone else they need to wear bullet proof vests around the house and hire corporate security goons. They don't and doing so is burdensome.


You're right, today. But as everything, stuff that starts as very advanced tools only at the disposal of big agencies, with time ends up being reachable for more mundane users, or in this case, criminals.

So, in a way, it's probably just a matter of time before the kind of silent hack depicted in the Amnesty article is used for attacks targeted at more general victims. I don't look forward to the day when just reading an unprotected HTTP site is enough to get my phone compromised as part of a widespread scamming effort from someone trying to get credit card details or banking stuff... but it will probably end up coming if we don't move all together toward a more secure WWW.


I know complaining about it won't change the future course of things but these are all problems coming from treating the browser like the OS and exposing more and more low level functionality.

They aren't problems with security in HTTP versus HTTPS for a personal or small business static website.


Totally agree. The current situation with the complexity of browsers is crazy. Its implications spill over into other technologies and cause all sorts of issues, like this one.


The problem with your analogy is that it downplays certain aspects and plays up others.

Attacks on HTTP sites are known threats that we have evidence for; they aren't ridiculous or unheard of. The defense is not "everyone wear a bulletproof vest", it's get a certificate and set up HTTPS - a one-time cost that will protect thousands of people.

You're making the choice for your users, who may not be as informed as you are, to not protect them. That's very different from asking them to wear a bulletproof vest.


Agreed. I'm sick to the back teeth of fscking with HTTPS/SSL on all the client static sites I manage. Certbot-apache was so flaky I had to switch every client to Nginx so that I could use certbot-nginx. The web has become a no-go zone for do-it-yourselfers. If I didn't set up my clients on VPSs I don't know how we would manage all the mailserver blacklisting and endless HTTPS/SSL requirements.


dehydrated is nice and painless.


These days, it's harder to set up a website _without_ SSL/TLS. If you're buying a domain name, they'll likely offer free HTTPS. If you're setting up a site via Shopify / Wix / etc, it'll use HTTPS. From what I've seen, sites without a valid certificate are either ancient and no longer maintained, or are built by devs learning the ropes of web development and haven't bothered to set up certbot on their server just yet.

There was already a fetishism of commercial/institutional security, and LetsEncrypt gave it quite the blow. Now companies that you used to have to pay a yearly fee for a certificate are offering their certificates for free.

It does stink that corporations have to be in the middle in the first place, but that's due to the difficult problem of "trust." I'm not sure it's possible to decentralize it, besides some sort of blockchain solution that would be unworkable in the real world.


> It's the one being discussed in this sub-thread in case you missed it and assumed the normal HN business context.

I did miss that but I did not assume a business context.

> The burden is real and completely unecessary for personal websites.

Users who visit your website are still at risk of having their connection hijacked - they could be phished, exploited, etc. This is maybe not something you consider important, it is certainly a sort of "boil the ocean" approach, but given the efforts put in up until this point I think it's already the case that most users are probably not visiting HTTP sites on the average day. Continuing that effort seems reasonable.

> This makes the web more commercial by imposing commercial requirements on everyone.

I'm not sure what you mean.


We have had this conversation to death: https://doesmysiteneedhttps.com/


I find some of the arguments on that page, uh ... unhelpful at best. Circular, even.


I guess I could see that for a small subset of the arguments presented, but that leaves all the rest. Honestly, "There's nothing sensitive on my site anyway." covers 90% of arguments I've seen against HTTPS and that answer is strong. The presence of weak arguments doesn't undermine the strong arguments.


There are injection tools that don't seek to steal data, but to weaponize your client.


It’s free to own a certificate today, so doesn’t matter.


This is silly to use as a blanket statement. There is nothing harmful about hosting a website. Especially personal sites, internal sites, or small businesses who use it as little more than a brochure that serves static content. My roof repair guy is not harming anyone by posting his information on a basic website.


What if I visit your roof repair guy's site and content is injected, informing me that they now take payments online? Or that I can download their special Roof Repair App to manage my bookings? Or it contains an exploit payload?

It is extremely uncommon for me to actually visit an HTTP website - I even have HTTPSEverywhere block them by default, so I'd know if I were. That means that I am relatively protected to such avenues until I visit your roof repair guy's website.


If I were the bad actor I would simply purchase Google ads in the name of the target business, sending traffic to my own site with a wonderful green padlock - it's cheaper and has bigger reach than trying to hijack TCP/IP traffic.

More to the point - if I am running a collection of Karl Marx works it is highly unlikely that he would request payments.


You're describing a completely different attack vector, which is the entire point - to push attackers to different attacks. If we eliminate HTTP, we can focus more effort on the attacks you're describing.

Regardless of the content, hijacking is a danger to users.


It's worrisome that you injected yourself into the conversation between me and my users. How is this any of your business?


I don't really understand your point. You're upset because I am advocating for your users despite not being one? I... don't care at all.

It isn't my business so I've done nothing to reach out to your users or interfere in your website. We're having a discussion about technology on a technical forum.

It is the browser developers' business though since they are tasked with protecting users from these specific threats.


Because he might be a user too.

And are you blocking all web traffic except from people who are your users, somehow? If not, then everyone is your user.


One reason for these proposals was to put pressure on the SSL certificate ecosystem to provide (CAs) and adopt (hosting) automated SSL renewal practices. Businesses have had three years since Let's Encrypt first went live to adopt such practices, but many chose not to — not just hosting providers, but e.g. bigcorp load balancers too.


Guess what - not all websites are businesses. In fact not all websites are dynamically-generated so why the fsck do we all have to put up with this madness? Make HTTPS/SSL necessary for transactional sites but for simple static sites give me a break.


did you know there are ISPs out there that inject garbage code into the html of unprotected sites?

get out of here with this HTTPS is unnecessary tedium.


Better not do business with shady companies then.


or you could follow established best practices and secure your site with TLS.


Both. I am not responsible if you chose a shitty ISP.


But this makes no sense because the user has basically zero control of how their traffic is routed to/from you.

If we could trust the entire network we wouldn’t need TLS for any site.


There are lots of good options for low-maintenance SSL certificates, from self-hosted (Let's Encrypt) to CDNs (CloudFlare) to hosting platforms (Netlify).

Even so, this doesn't actually change much. I've never bought a certificate valid for more than a year. I'm not aware of any major player that sells certificates valid for more than a year. So this rule has existed for a long time in practice, but is only now being codified.


I genuinely believe that in 2020 the myriad of ways of getting automated TLS is easier than logging into a website and uploading a CSR and then placing that certificate somewhere.


Check out Certera https://docs.certera.io

It's PKI for Let's Encrypt certificates. Helps you issue, renew, revoke certs from a central place. Also get alerts so you know when things have changed, expired, failed to renew.

While a lot of places give you certs built in, there's a whole world of places you still need certs. Like FTP, mail, behind load balancers, disparate environments and systems, etc.

In the future, I'm planning on creating a way to automate the certificate exchange process. This should help with using and exchanging certs used in client authentication and things like SAML SSO. If expiration get down to a month or less, I see a need for a system to help do all of these things and more.


This looks interesting as a log of Let's Encrypt certificate operations, but is it more than that, and why would I want to use it?


To centrally manage all of your LE certificates, keys, alerting, etc. You can also more easily use LE certs in a wider array of scenarios too. Check out the docs to learn more.


I did have a look at the docs, but they more explained the how, rather than the why - I missed some kind of intro/overview explaining the value proposition.

I'm still a bit fuzzy on this - why would I want alerting, for example? Automation is a big part of LE, and my certs are configured to auto-renew. If that was to fail for some reason, then LE will send me an email - is it this part where this tool comes in, providing improved alerts where automation has failed?


That's great feedback. I'll update the docs to better explain the why.

To elaborate on the why for alerting, there are many situations that I've seen where things change and subsequently fail silently. Perhaps some dependencies, or maybe configuration changes, caused things to break. Also, alerting doesn't only have to be for your certificates. You can point to any endpoint to monitor as well. There are three aspects of alerting: changes to the cert (perhaps you care about a 3rd party certificate and its underlying key changing), failure to renew, and expirations. Each comes with its own benefits and use cases.

To expand on the why a bit further for the project as a whole, it's really as a way to help consolidate and centralize things. I've seen many disparate ways of using Let's Encrypt. From various clients to some hacks to better support more complicated scenarios. By separating obtaining the certificate from applying, it helps facilitate many things, like using LE certs behind load balancers & proxies, non-standard ports, things that don't speak HTTP, etc.

If certificate expiration continues to decrease in time, we'll need some capabilities to exchange certificates in an automated fashion as well. I'd also like to incorporate Certificate Transparency logs so you can be sure no one has issued certs for your domain(s). There are many cool and interesting scenarios but mostly the challenges come when managing things at scale. So, it's not really all that useful if you're only managing one or two certs.


You've got it backward. It is a boon for small players and troubles for large companies.

Small players can easily get certificates manually or automate. The platforms/tools they use often give certificates out of the box (cloudflare, heroku, wordpress, etc...).

Large players can't manage certificates. Developers/sysadmins can't use let's encrypt because it's prohibited by higher up and blocked. Even if they could use it, it's not supported by their older tools and devices. The last large company I worked for had no automation around certificates and the department that handled certificate requests was sabotaging attempts to automate, possibly out of fear of losing their jobs.


> that still takes some time to set up, manage and maintain over time.

I’d say it takes less time than going through a single paid certificate store… Assuming you already have a tool. If you don’t, then maybe it’s the same or 5 minutes more.


Can you describe the kind of person who hosts their own website but cannot easily set up Let's Encrypt automatic renewal?



There's no cert because there's no need for one in the first place. Mentioning that is pretty silly - it's obvious that there's nothing wrong with a static site with no cert, and no one is arguing against that.


> no need for one in the first place ... it's obvious that there's nothing wrong with a static site with no cert

Oh yes, there is.

https://doesmysiteneedhttps.com

> YES

> Your site needs HTTPS.


> there's nothing wrong with a static site with no cert

Not really. Google says "switch to HTTPS or lose ranking":

https://webmasters.googleblog.com/2014/08/https-as-ranking-s...


Good to note. But I think you're distracting from the article's talking point.

I disagree with "switch to HTTPS or lose ranking", but that's an HTTP vs. HTTPS issue with Google's search ranking, not about Chromium or Mozilla. This article is about Chromium & Mozilla making stricter rules for HTTPS certificates. That's not a bad thing, to hold HTTPS sites to a better standard.


The whole "Let's Encrypt should solve all your problems" attitude is arrogant and short-sighted.

1) In my experience the user experience even for technical admins is still flakey on at least some popular platforms. In other words, it's not as incredible as you think.

2) It's not available to a host that doesn't connect to the internet but does occasionally get connected to by a local browser (eg. IoT firewalled inside my LAN is one obvious such case; I'm sure there are others).

And most importantly:

3) You'd have to be insane or naive to accept an architecture that leaves you dependent on a single vendor (especially if you need that vendor more than they need you!).


How fortunate, then, that LE isn't the only vendor. Not even the only ACME vendor, nor the only free vendor (https://zerossl.com/features/acme/).


If your device never connects to the internet then how would any public cert work? It would expire like any other?


Me. I use shared hosting on a server that runs a reverse nginx proxy to my nginx server. I don't have root on the server. I have a LE cert that I need to manually fiddle with DNS settings every 3 months to get. If you know how to automate it I'd love to hear about it.


Why doesn't their nginx proxy /.well-known/ requests for your domain to your nginx? Then you could just use `certbot certonly --webroot --webroot-path /path/to/webroot/for/your/domain -d your.domain.name -d www.your.domain.name` once and put `certbot renew` and nginx reload in crontab weekly, and you're good to go.
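To make that concrete, the crontab entry might look something like this (the schedule and reload command are assumptions; certbot only renews certs nearing expiry, so running it often is harmless):

```shell
# crontab entry: attempt renewal twice a week at 04:00.
# --deploy-hook only runs when a certificate was actually renewed,
# so nginx is reloaded just when needed.
0 4 * * 1,4  certbot renew --quiet --deploy-hook "nginx -s reload"
```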

If you can't use HTTP-01 and must use DNS-01 challenge, I would check whether the software that runs your host's DNS management panel has an API in addition to manual mode. If not, I would check for ability to automate HTTP requests to that tool (parse the HTML, submit the forms, basically). My hope would be that the tool is popular and someone already did the work and code exists to operate it as if it had an API.

If you can do that, you can write (or find one already written) a certbot plugin that performs the DNS challenge using your credentials to the host provided DNS settings. certbot has number of plugins for the big hosting providers: https://github.com/certbot/certbot

certbot is the most popular Let's Encrypt client, but it's not the only one. Maybe another client has support for your situation. I would maybe ask the support of your hosting provider, maybe they know something.


Letsencrypt is broken or an incredible pain in so many different setups its not even funny.


What are setups where it’s broken? Sincerely.

If you can’t accept inbound http traffic then you use DNS verification and if you never contact the internet then no public cert could work for you.


Devices with web based interface (KVM over IP, IPMI, etc).


iDRAC has a CLI tool that can be instrumented to install new certs regularly. I’m sure other vendors do as well.


That's me! I'm technical enough to self-sign for ssl for my sites (it and tor are what I do instead) but I run on lots of old hardware and old (>5 years) OSes. The tools for constantly re-updating letsencrypt simply don't work and all the containerizations didn't exist yet. I've tried nearly a dozen LetsEncrypt updates solutions, compiled from source, from debs, "standalone" only bash solutions, etc, there's always a catch that prevents it from working.


Are those >5 year OSes receiving security patches?


They probably receive more security patches than Centos 8 and by that I mean Centos 8 is lagging behind.


Shared hosting

Unless they set up LE for their customers

(And as much as I like LE, I think it's complicated to depend on one issuer only)


Semaphor asks "can you describe the kind of person". Since when is "shared hosting" a person?

People who know how to set up a website on a shared hosting platform probably also know how to renew a LE certificate, I think.


Lots of people. Such arrogance from those who post on hackernews.


it takes absolutely no time at all to set up for an individual on their VPS, compared to the faff of going through the OpenSSL CSR process + buying from a CA


"On their VPS". What are you smoking?


It is not much harder to do this every year as opposed to doing this every few years. It’s just an incentive to streamline the process.


> It's possible and free for small players to use letsencrypt, that still takes some time to set up, manage and maintain over time.

If you want to run a webserver but are unable to set up a cronjob that does

  certbot renew
you don't deserve external users. Full stop.

If it's just you and you don't care about your own security, then do whatever you want in your own browser.


I have tried to look at the documentation for certbot, and the amount of effort they put into optimizing the fast path makes it incredibly difficult to do things manually. The documentation is absolutely awful. Certbot uses .pem files, which are practically useless to any JVM-based application. So now you've got to add your --deploy-hook and a custom script to convert everything. Don't use any of the blessed DNS providers? Again, write your own authentication and cleanup hooks. Suddenly your simple certbot setup involves 3 different scripts that have to be tailored to your specific situation. Sure, there are nice blog posts that go through the entire thing, but the official documentation basically pretends that your use case doesn't even exist, because everyone is running Apache and Nginx, right guys?
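For the JVM case specifically, the conversion can at least be a small deploy-hook script. A sketch, with the keystore path, alias, and password all assumptions (certbot does export RENEWED_LINEAGE to deploy hooks):

```shell
#!/bin/sh
# certbot --deploy-hook script: convert the renewed PEM pair into a
# PKCS#12 keystore that JVM applications can load directly.
# $RENEWED_LINEAGE is set by certbot, e.g. /etc/letsencrypt/live/example.com
openssl pkcs12 -export \
    -in "$RENEWED_LINEAGE/fullchain.pem" \
    -inkey "$RENEWED_LINEAGE/privkey.pem" \
    -out /etc/myapp/keystore.p12 \
    -name myapp -passout pass:changeit

# Restart or signal the JVM app here so it picks up the new keystore.
```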


> you don't deserve external users. Full stop.

It’s shit attitudes like this that killed the old internet we all loved


Don't feed the 5 hour old troll account.


It's not wrong to build whatever you want for yourself.

But if you have external users on your site sending data to your site, you have a responsibility to not treat your users' data as meaningless.


Internet starts to have 1y memory retention.

Unless refreshed by active learning, aka someone doing the refresh job.

Or unless delegating the work to large players—either the memory or the hosting.

EDIT: This feels wrong, even when done for right reasons. And I wonder whether this would fly without LE and whether this means we are officially making LE THE critical part of Internet infrastructure.


Websites marked "insecure" are still fully accessible.


Not always. You may also end up with an incompatible set of ciphers (happened to me).

"Get off my Internet lawn if you can't be up to date" is what we're saying and I just do wonder whether we haven't exchanged too much of accessibility for too little of security.


Not always. Sometimes the browser presents a full-page response to the effect that the site is dangerous at which point, even if it's a harmless site, the non-savvy user will leave. Blanket HTTPS/SSL + Letsencrypt is a disaster.


This only happens if the site used to be HTTPS and no longer has a certificate or the site has long-lasting HSTS.


On the contrary… LE is unaffected by this, since from the beginning it has enforced a much shorter certificate expiry time: 90 days. Which effectively forces you to set up automated renewals. Doing that does not require the help of "large players"; you stick certbot or another tool in your crontab, or use something like Caddy or Apache mod_md to have your web server do it by itself.


Fun fact: you can use Caddy to manage certificates independently of its web server, with just a few lines of config: https://caddy.community/t/using-caddy-to-keep-certificates-r...

This approach is more reliable than cron in case of failures/errors. Not only are there fewer moving parts, Caddy's error handling logic and retries are smarter than just "try again in <interval>".


To clarify, this is the limit for how long they can be to be considered valid.

Certificates are encouraged to have shorter lifetimes because it reduces their potential for abuse. If compromised, a certificate with a long lifespan could be used for years without anyone noticing. A system which doesn't check for revocation is especially vulnerable (though of course, browsers do).

Let's Encrypt certificates are only valid three months, which works well because it's largely automated. It would be good to extend that philosophy elsewhere: automation, and with shorter cycles.

Note the actual limit is 398 days, which gives a small buffer over 1 year.
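If you want to check a live site's validity window yourself, one common way from the command line (the domain is just an example) is:

```shell
# Print the notBefore/notAfter dates of the certificate a server presents.
echo | openssl s_client -servername example.com -connect example.com:443 2>/dev/null \
    | openssl x509 -noout -dates
```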


which makes websites ephemeral and at the mercy of a few authorities. my torrent website could disappear within a few months behind a scary "this site is dangerous" notice


What does that have to do with certificate lifespan? Authorities go after your domain, not (typically) the CA. Longer certs don't help.


It's just gonna be a red strike through the lock in Firefox.


Is your torrent website hosting illegal/pirated content? It's probably already dangerous.


> it reduces their potential for abuse.

It will also increase the number of errors. The more often a task is performed, the more total errors will occur performing it.


There's a countervailing effect where the more often you do something, the better you get at it.

You're right that the absolute number of errors will certainly rise, but the fraction of attempts which have errors will likely fall. As legacy certs expire, the aggregate quality of certs will likely be higher.

A secondary question is whether the gain in security is worth the required effort. Obviously Apple believes this, and LetsEncrypt is pretty easy, so even for hobbyists, it's probably at worst an annoyance.


A world-class chef may have nicked their fingers more times than I have with a knife, but I suspect their food is still better than mine.


You're right that more attempts means more chances at failure, but I don't think it's a 1-to-1 relationship. It's when I don't perform a task for a few years that I tend to make mistakes.

Even if it's not an automated process (which I think this encourages), then it's easier to keep your skills sharpened by doing something more often.

Would Mozilla have accidentally forgotten to renew their browser certificate recently if it were a more frequent task? It's hard to say, but I think it's likely there'd be a stronger procedure in place. There would need to be.


> To clarify, this is the limit for how long they can be to be considered valid.

to be fair, there's already the concept of certificate revocation lists and OCSP (Online Certificate Status Protocol), which help check the validity of a certificate (that is, whether it has been revoked or not).

While short-lived certificates are fine for letsencrypt, pushing the same for the rest of the world looks a bit like an abuse to me.


The problem with certificate revocation is that a lot of software treats it as a soft-fail if revocation status can't be verified, rather than a hard fail.

That is one of the main reasons for LE's short lifespan. Certificate revocation is not reliable in practice.


I’m so torn here. Personally I like this a lot and think it will really help enforce good practices and allow easier things like root/int key rotation. Professionally it sucks, as there are a ton of valid use cases for real certs in areas that require manual work and tracking them all is a hard problem. If internal PKIs were easier to make work across all OS and Browser combos I’d just use those instead.


> If internal PKIs were easier to make work across all OS and Browser combos I’d just use those instead

Even if you use your own PKI, if your certs have a validity > 1 year, won't browsers still complain?


The fact that, as a user, I can't tell a browser "hey, this internal PKI that you already don't trust, just go ahead and trust it no matter what because I tell you to," feels bad.


TOFU is a viable alternative for "long-living" certs, too. The very fact that the cert has longer validity makes it somewhat easier to trust it directly in the client.


TOFU doesn’t actually work. If you set up a TOFU cert environment, 100% of non-security people will click right through it, and 95% of security people will also click right through it.

They’ll just assume that because it was untrusted the first time, that cert errors are normal and ignore it. Especially since they will have a “first use” for every new device and every new browser they visit with.


Funny how some people claim no one will ever click thru the TOFU warning screen because it's too scary and unfamiliar, whilst others say users will just click thru everything.


There’s an important distinction. My claim is that once a user is trained how to ignore a cert error for a particular site and add an exception, they will no longer pay any mind to that site or environment giving cert errors.

The general public, when surfing and hitting a cert error on a random site, will usually disengage.


If you've added a persistent exception, that means you've trusted that cert on your device so getting further cert errors would surely be unexpected.


I don't like seeing how the SSL hurdle affects small read-only websites.


Agreed. Talk about a sledgehammer to crack a nut. Typical sysadmin solution to a problem, assuming every Joe Blogger is going to set up his own VPS and fsck with certbot.


If Joe Blogger can set up LAMP, he can set up ACME. If Joe can't set up LAMP, Joe will use a web host that does all of it for him, including HTTPS. If Joe picked a host that's incapable of securing sites, Joe needs to switch to any of the dozens (if not hundreds) of competitors that can get this right.


Joe Blogger is not expected to setup a VPS, Joe Blogger is using shared hosting or a blog-as-a-service, and thus leaves worrying about how to implement HTTPS to someone else.


So the death of the self-sufficient, independent Joe Blogger, especially if that "someone else" is his hosting provider who doesn't handle Let's Encrypt.


It's a positive for security, but unless you're going through Let's Encrypt it adds another entity that you have to disclose PII to simply to host your own blog or side project.


What are some valid reasons not to use LetsEncrypt?


It's a single point of failure that has to follow US laws and sanctions.

If you only have one domain it isn't an issue, as you can just go get a certificate somewhere else. But if you have 1000+ domains it's an issue.


If Letsencrypt was the only CA left I would call it a big failure. Without a choice there cannot be trust.


ACME is an open standard (RFC 8555); any CA can decide to implement it.


The only other CA I know that has this service available is https://www.buypass.com/ssl/products/acme


https://en.m.wikipedia.org/wiki/Automated_Certificate_Manage....

According to Wikipedia there's several large CA's that already support ACME



ZeroSSL are a commercial CA... I can't figure out what's in it for them to offer free 90-day certs with auto ACME renewal?


I assume on-ramp/freemium. Free certs help them sell paid certs.


If you accidentally leave DNS pointing at an old IP that gets recycled to someone else, you've authorized LetsEncrypt to issue a certificate to the lucky winner.

Most old school CAs do domain validations against the root of the domain, so it's a lot harder to accidentally delegate that.

That's not a reason not to use LetsEncrypt, but it's a reason not to include it in certificate pinning.


> If you accidentally leave DNS pointing at an old IP that gets recycled to someone else, you've authorized LetsEncrypt to issue a certificate to the lucky winner.

Yeah, but only for that particular subdomain. Sounds like a pretty contrived attack. For it to work, it needs to be some website that you forgot about, but still have enough users that it's viable to attack it.

>Most old school CAs do domain validations against the root of the domain, so it's a lot harder to accidentally delegate that.

Source for this? If there's even a handful of paid CAs that validate at the subdomain level this is a moot point.


> Yeah, but only for that particular subdomain. Sounds like a pretty contrived attack. For it to work, it needs to be some website that you forgot about, but still have enough users that it's viable to attack it.

Not really, something similar happened recently (forgot the company details but it was discussed on HN). Somebody left dangling DNS pointed at AWS; the new IP holder was apparently using domain-scoped cookies / etc. to grab browser data. Of course, cert pinning in browsers is largely dead, so there's not a lot an average person can do here (other than not f* up their DNS). Larger entities can still get one-off cert pinning by emailing Chrome/other browsers.

>> Most old school CAs do domain validations against the root of the domain, so it's a lot harder to accidentally delegate that.

> Source for this? If there's even a handful of paid CAs that validate at the subdomain level this is a moot point.

This was from personal experience, could be obsolete. But if you're pinning to a couple of commercial roots, you only need to confirm that those roots don't issue certs from subdomain authentication.


It’s extremely insecure if you’re worried about things beyond passive mass surveillance.

If someone can intercept traffic to your server IP, they can get a Let’s Encrypt certificate. If they can’t reliably man in the middle that IP, then HTTP is reasonably secure already.

Such “certificates without certification” are one reason browsers have added new UI elements for certified domains.


MITM'ing the connection between LE and a server is generally much more difficult and targeted than between any client and the server. Two different scenarios there.


> This is one reason browsers have added new UI elements for certified domains.

Can you elaborate?


Good. I know too many places (large, established places) that could be using ACME but currently just put up with manually replacing 2 year certificates which means every 2 years the live certs are expired for at least half a day.


Help me understand why > 1 year server certs are problematic but issuers have 20 year roots. Isn’t the issuer’s cert a bigger concern?


Root certificates have their private keys in hardware security modules, which are kept in safes in secure facilities, only brought online when needed to sign intermediate certificates. Plus, it takes quite a while for new ones to be widely trusted - Let's Encrypt's root cert was issued in 2015 and still isn't trusted by a large percentage of older Android phones.

Intermediate certificates have shorter lifetimes. Even though they're kept online, they're also stored in HSMs. Even if the CA were compromised, the chance of the private key itself leaking is very small.

End user certificates, on the other hand, are usually handled much more cavalierly. Sure, you could store the key in an HSM, but most servers just keep them in memory (and in the file system). A server certificate's key is far, far more likely to be compromised than a CA key.


It is; people got screwed en masse by the AddTrust expiry in the last few weeks, and there is a boatload of root certs expiring in the next ten years.


This is a tangent, and I apologize. Is there any good infrastructure for creating self-signed TLS CA/host certificates these days for people who don't sysadmin full time (grok OpenSSL)?

I would like to create a self-signed CA with a name-constraint for certain internal (sub)domains, and have my browser trust the CA. And have it sign end-host certificates. And have httpd use those certificates (or certificate chains) such that the end result is a trusted HTTPS connection I don't have to click-through Advanced every time.

Is there a collection of PKI software that makes this remotely easy to do? OpenSSL objectively does not.

I have a good understanding of public and secret key cryptography as well as hash functions and other primitives, but I don't understand any of how PKI works — it's just crazy complicated for what it seems to do.
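For what it's worth, the core of this can be done with the plain openssl CLI, ugly as it is. A minimal sketch of a name-constrained private CA plus one leaf certificate — the domain `internal.example`, the hostnames, the key sizes, and all file names are my own placeholders, not a recommendation:

```shell
# Hypothetical sketch: name-constrained private CA with the openssl CLI.

cat > ca.cnf <<'EOF'
[req]
distinguished_name = dn
x509_extensions = ca_ext
prompt = no
[dn]
CN = My Private CA
[ca_ext]
basicConstraints = critical, CA:TRUE
keyUsage = critical, keyCertSign, cRLSign
nameConstraints = critical, permitted;DNS:.internal.example
EOF

# Self-signed root, valid ~10 years (roots may be long-lived, see thread)
openssl req -x509 -newkey rsa:3072 -nodes -days 3650 \
  -keyout ca.key -out ca.crt -config ca.cnf

# CSR + leaf certificate for one host, 398 days per the new limit
openssl req -newkey rsa:2048 -nodes -keyout host.key \
  -subj "/CN=printer.internal.example" -out host.csr
printf 'subjectAltName=DNS:printer.internal.example\n' > host.ext
openssl x509 -req -in host.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 398 -out host.crt -extfile host.ext

# Sanity check: the leaf should chain to the root
openssl verify -CAfile ca.crt host.crt
```

Import `ca.crt` into the OS/browser trust store and point httpd at `host.key`/`host.crt`. The nice part of the critical name constraint is that even if the CA key leaks, it can only sign names under `.internal.example`.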


https://github.com/redredgroovy/easy-ca

Pretty easy to use.

Letsencrypt with DNS validator also works great for servers that aren’t accessible externally.


XCA is reasonably easy to use, it's open-source and cross-platform: https://hohnstaedt.de/xca/


Why exactly 398 days? Seems a little odd, as it's approximately 13 months plus an additional 2-3 days.


> The choice of 397 days represents the maximum legitimate interpretation of a "thirteen-month" period; it's calculated from 366 days (considering leap years) along with a 31-day month, the longest in the calendar used by certificates. And the “Must Not Exceed 398 days” also accommodate the different time zones and any other unexpected error.

https://sslretail.com/news/ssl-validity-limiting-to-one-year...


Why 13 months though?


Because it's 1 year + 1 month + 1 day.

1 year (366 days): See the ballots and accompanying discussion linked in other comments about the proposal to reduce validity to 1 year.

1 month (31 days): Grace period for human beings, to permit weekends, vacations, and continuity handoffs.

1 day (timezones): Grace period for browsers and shared libraries, to survive the timezone math issues with "It's one day greater than today somewhere in the world". This ends up baked into the process as follows: Certificates will be issued for "397 days or less", browsers and libraries will validate as "398 days or less".
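The arithmetic, as a quick illustration (my own sketch, not Chromium's actual code):

```python
from datetime import date, timedelta

# 366 days (leap year) + 31 days (longest month) = 397-day issuance cap;
# browsers and libraries allow one extra day of slack for timezone math.
issuance_cap = timedelta(days=366 + 31)            # 397 days
validation_cap = issuance_cap + timedelta(days=1)  # 398 days

not_before = date(2020, 9, 1)                      # enforcement start date
not_after = not_before + issuance_cap

print((not_after - not_before).days)               # → 397
print((not_after - not_before) <= validation_cap)  # → True
```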


With 13 months, you can renew your certificate once per year and still have one month of buffer time. But in 2 or 3 years the maximum length might be tightened even further. At least there has been such a trend of length decreases in the past.


I am just guessing but I think it's because of renewals. I know in the past when I've purchased a certificate before the expiration date, the CA gives me that extra time on the new cert so the expiration date stays the same the following year.

Totally speculating here that 30 days is probably the earliest one can renew a yearly cert.


You can of course renew earlier, but limits like these prevent the CA from giving you "extra" time beyond the limit so it'd be silly to renew an annual certificate six months before it expires, as you'd only add about 7 months to the lifespan for the full price.

Without this extra margin there'd be an incentive to cut it as fine as possible on renewal (or even not renew until the expiry causes problems) which is bad for security, bad for business continuity and bad for the CA businesses.

The practice of adding unused time to new certificates goes back a long way and probably is a business practice copied from other things you need to renew in this way. After the CA/B Forum came into existence they standardised a limit of 39 months (3 years + 3 months) to support this existing business practice while forbidding new very long lived certificates, this didn't take effect immediately, instead it was allowed to phase in by 2015.

That limit is a bit vague, which wasn't good. Machines don't really do vague, you can see what Chromium does about that in the linked source code - they pick 1188 days as "39 months" on the argument that while 39 months might sometimes be shorter than 1188 days it can't be longer.

In 2018 the CA/B forum agreed a new limit, 825 days. The specification in days is to avoid vagueness; 825 is two years plus three months plus a very generous allowance for various holidays and other accidents, and I think that getting votes for 825 days was judged better than losing votes for some slightly shorter lifespan like 798 days.

Proposals to reduce the limit further this year or next fell through, and apparently Apple decided that rather than negotiate they'd take the nuclear option, which is always something they could do. With Apple eating the PR cost there's no reason why Chromium shouldn't enforce the same limit.


366 (days in a leap year) + 31 (longest month) = 397. So it's exactly one more than that.


12 mths plus 30 days to pay maybe?


Does it also apply to certs issued by a private/own CA or just public certificates?


EDIT: Sorry, replied to the wrong comment!

---

cf. https://support.apple.com/en-us/HT211025:

> This change will affect only TLS server certificates issued from the Root CAs preinstalled with iOS, iPadOS, macOS, watchOS, and tvOS.

> This change will not affect certificates issued from user-added or administrator-added Root CAs.


But what about Chromium and Mozilla?


    if (verify_result->is_issued_by_known_root &&
        HasTooLongValidity(*cert)) {
      verify_result->cert_status |= CERT_STATUS_VALIDITY_TOO_LONG;
    }

Chromium's code (linked as the story) only applies these rules to certificates from the Web PKI, not to a private CA.

Mozilla has no checks, I presume the story title names them because they've agreed on this policy but they don't actually enforce policy in the browser code itself.

Or at least they didn't when I asked them months ago about this topic.


> verify_result->is_issued_by_known_root

Let's take Windows as an example, as it has a root certificate store. Now, if I operate a private CA, I install my private root certificate to the root certificate store - does that make it a "known_root" for Chromium, or does this check only cover a specific set of known-to-Chromium CAs?


Generally speaking locally installed certs have been exempted from most of the requirements levied on public certificates.


I can understand this can cause pain for existing deployments, but I don't see any reason not to use something like Let's Encrypt to issue 90-day certificates for new services/deployments. With certbot, renewals are automatic and with something like Caddy, the certificates can be managed across load balancers. I'm only talking about the web service here and not IoT or any use case that makes this sort of frequent renewals difficult.
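As an illustration, a minimal Caddyfile is all it takes (the hostname and upstream port are placeholders); Caddy obtains the Let's Encrypt certificate and renews it automatically with no further configuration:

```
example.com {
    reverse_proxy localhost:8080
}
```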


> Enforce publicly trusted TLS server certificates have a lifetime of 398 days or less, if they are issued on or after 2020-09-01.

Fortunately not enforced for currently issued certs.

Will this ever be part of the TLS spec?


> Will this ever be part of the TLS spec?

I'm pretty sure TLS itself doesn't specify anything about certificate lifetimes. I could be wrong; I have actually read it, but as sibling comment notes, TLS is used in a lot more places than browsers, including mutual TLS between random services that don't use an external CA at all.


hopefully not. there is a huuuuge number of services that are not http based that use tls and certificates.


It’s disgusting. It’s not up to them to decide how long a certificate should be valid. Especially when they’re so expensive to buy and complicated to replace.


>Especially when they’re so expensive to buy and complicated to replace.

Letsencrypt is free and easy to replace (it's automatic, and takes maybe 5 minutes to set up on a new server). EV certificates might be harder, but I've heard good things about certsimple.


>EV certificates might be harder

EV certs are completely worthless (as in they provide no extra value above that provided by regular DV certs) so nobody should care if they're harder to obtain.


Sadly CertSimple just disappeared all of a sudden. Not much real use for EV certs though - the only thing they can be a little helpful for is if you want to do cert-pinning at the CA-level


For most people, they can be free, and replace themselves, thanks to Let's Encrypt and their automation tools.


Isn't this going from real security measures towards security theater?


It really is. Why should we trust short-lived certs which might have been issued under shady circumstances, like a BGP hijack an hour ago? How do shorter validity terms protect against an attack that takes days at most? Why are they so sure a rotated key will not be stolen as well, if the previous one was? How do they ensure a specific public/private key pair was not already used before? Do they actually check?

This is security theater, and I think it's intended to make TLS maintenance unbearable for non-IT businesses and to push them toward cloud hosting providers like Google Cloud and Cloudflare.

Also, the latest drafts of the TLS ESNI/ECH feature were written by Cloudflare for Cloudflare's needs.


How does this affect a private enterprise CA? Assuming all clients trust the internal CA infrastructure, will Chrome complain about internal certificates valid for longer than 398 days after the change?


I'd quite like to see this eventually getting to more like 1 month, maybe 7 days - forcing continuous automated issuance.

Ideally, something more like 1 hour - like a JWT - would be nice, but that's not particularly practical, as you need to allow some margin for incorrect local clocks.


How would 1 hour be ideal? Have you considered the immense increase of logistic costs and power consumption this would incur?


Ok I'm in favor of shorter cert times but one hour is ridiculous. If you need a key pair for that amount of time generate an ephemeral one! The cert can be valid for months and still be secure.


OCSP stapling may be what you're looking for. The certificate stays the same, but an additional short-lived signature indicating that it wasn't revoked yet is attached.

Aside from not spamming the CT log and possibly making it easier to offload the generation of the OCSP responses to a more efficient architecture than the one needed to issue certificates, I'm not sure how mandatory OCSP stapling is better than just reissuing the certificate every day/week.
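For comparison, enabling stapling server-side is only a few lines of config. An nginx sketch (the chain path and resolver address are placeholders):

```
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/ssl/certs/chain.pem;
resolver 1.1.1.1 valid=300s;
```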


This summary should emphasize that it's a 398 day _maximum_.


Letsencrypt's default validity is a year, right? Is this not a game?


Let's Encrypt doesn't offer anything longer than 90 days.

https://letsencrypt.org/2015/11/09/why-90-days.html


It's 90 days, and they renew it when there are 30 days left.
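In other words (a sketch of the default schedule; the issuance date is arbitrary):

```python
from datetime import date, timedelta

issued = date(2020, 6, 1)
expires = issued + timedelta(days=90)      # Let's Encrypt lifetime
renew_from = expires - timedelta(days=30)  # renewal attempted with 30 days left

print(expires)     # → 2020-08-30
print(renew_from)  # → 2020-07-31
```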


I can see this policy being used for censorship in this age of cancel culture. Don't virtue signal hard enough for the latest outrage mob? No cert for you.


> I can see this policy being used for censorship in this age of cancel culture. Don't virtue signal hard enough for the latest outrage mob? No cert for you.

How does that work with a largely automated process like Let's Encrypt?


A couple of lines of code to enforce domain blacklists, if the relevant activists apply enough pressure.


Why should anyone do this? It's way easier and more effective to put pressure on hosters, anti-DDOS services and the payment providers to get Nazis booted off the net, see e.g. Stormfront.


It is easier to apply pressure when a single provider has a quasi monopoly, which let's encrypt is quickly building up. And I am not suggesting these attacks are mutually exclusive.


Nobody is talking about Nazis. And all of those things are not easier, especially if the site is self-hosted. His point is that it’s one more gatekeeper and point of failure. It doesn’t matter that there are existing means to target websites. Adding another makes freedom even more fragile.

It’s very naive to assume that censorship is only a problem for Nazis that the world shouldn’t listen to anyway.


Seeing how hard this comment is being downvoted somewhat illustrates your point. I don’t see how anything you just said is controversial or offtopic.


Charitably, it could be seen as inflammatory.



