1. The requirement to involve a 3rd party certificate authority is a needless power grab. Giving in ends the hope that it will ever get changed.
2. There is currently only one free cert provider; if there are ever issues with it, your users will see a scary error message which will make them think there are security issues with your website.
3. Downloading and running code from a 4th or 5th party and giving it access to your config files is not "more secure".
4. The culture of fear around HTTPS, meaning only the "most secure" or "newest" protocols and cipher suites are to be used. This prevents older clients from working, where HTTP works just fine.
5. HTTPS is needlessly complex, making it hard to implement. There have been several security vulnerabilities introduced simply by its use.
6. If you can't comply with the OpenSSL license, implementing it yourself is a hopeless endeavour.
SSL was developed by corporations, for corporations. If you want some security feature to be applicable to the wider Internet, it needs to be community driven and community focused. Logging in to my server over SSH has far more security implications than accessing the website running on it over HTTPS. Yet, somehow, we managed to get SSH out there and accepted by the community without the need for Certificate Authorities.
> The requirement to involve a 3rd party certificate authority is a needless power grab. Giving in ends the hope that it will ever get changed.
Genuinely curious - what alternatives do you have in mind? Are there any WoT models that interest you more?
> There is currently only one free cert provider, if there are ever issues with it, your users will see a scary error message
Isn't this the point?
> Downloading and running code from a 4th, or 5th party and giving it access to your config files is not "more secure".
Could you elaborate? Have you written your whole stack from scratch? You are running millions of lines of code that you will never read but have been implemented by other parties.
> HTTPS is needlessly complex making it hard to implement.
Isn't this done with robust battle-tested libraries and built-in support in modern languages?
---
Mainly I'm just wondering why you're letting perfect be the enemy of good. There's always room for improvement in everything, but I don't think user privacy is a reasonable sacrifice to make.
> Giving in ends the hope that it will ever get changed.
Abstaining from HTTPS won't be seen by anyone as a protest, but as incompetency, whether you find that justifiable or not.
DNSSEC is superior to both PKI and WOT. It's basically free. It makes chains of accountability transparent (hint: it's the dots in the URL). It provides the benefits of hierarchical trust except with democratic control, and is operated on film in public ceremonies.
We don't have a robust understanding of who exactly operates PKI, but we do know that it's de facto governed by a company on Charleston Road, since CAs only have their root keys listed in things like web browsers at their pleasure. We also know that Charleston Road rewards CAs for their loyalty by red-zoning and down-ranking the folks who don't buy their products. Products which should ideally be deprecated, since SSL with PKI is much less secure.
Can anyone guess who's stymied progress in Internet security, by knuckle-dragging on DNSSEC interoperation? It reminds me of the days of Microsoft refusing to implement W3C standards. Shame on you, folks who work on Charleston Road and don't speak up. You can dominate the Internet all you like, but at least let it be free at its foundation.
Who's knuckle-dragging on DNSSEC interop? The entire Internet community. It's been almost 25 years, and 3 major revisions of the protocol, and still it has almost no adoption --- virtually none of the most commonly queried zones are signed. Why is that? Because DNSSEC is awful.
Obviously, you can't replace "SSL with PKI" (you mean TLS, and/or the WebPKI) with DNSSEC, because DNSSEC doesn't encrypt anything. Whether or not you enact the ritual of adding signature records to your DNS zone, you will still need the TLS protocol to actually do anything securely, and the TLS protocol will still not need the DNS in order to authenticate connections.
Instead, what DNSSEC (DANE, really) hopes to do is replace LetsEncrypt, which is not "basically" but instead "actually" free, with CAs run by TLD owners. Who owns the most important TLDs on the Internet? The Five Eyes governments and China. Good plan!
What we mean by DNS security is that when you visit your bank's website, you know it's actually your bank. We're less concerned about concealing DNS queries from routers and more concerned about preventing them from forging responses. Eavesdropping won't empty your bank account. Spoofing can, and encryption doesn't matter if the remote endpoint isn't authentic.
Right now you need to ping Google's servers each time you visit a website to ask if it's safe. We love Google but they're a private company that can do anything they want. If you feel comfortable with them being the source of truth for names on the Internet, then the problem is solved.
Most of us would prefer it be controlled by ICANN, which is a non-profit, not controlled by any one government, that lets anyone around the world who cares enough show up and take part in Internet governance. Controlling names was the purpose they were founded to serve. I say let them.
DNSSEC doesn't protect your bank account. Your bank uses TLS to establish connections with you, and TLS is authenticated, and does not rely on the DNS when establishing connections.
DNSSEC is in fact controlled by world governments, who have de facto authority over the most important TLDs. When a CA misbehaves, Google and Mozilla can revoke them, as they've done with some of the largest and most popular CAs. You can't revoke .COM or .IO.
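For concreteness, since DANE keeps coming up in this sub-thread: it publishes certificate associations as TLSA records inside a DNSSEC-signed zone. A hypothetical zone entry pinning the SHA-256 of a server's public key, with placeholder name and digest, would look like this:
; DANE-EE (3), SPKI (1), SHA-256 (1): pin the hash of the server's public key
_443._tcp.www.example.com. IN TLSA 3 1 1 <sha256-digest-of-the-server-public-key>
Note that even with such a record published, TLS still performs the actual encryption and authentication of the connection; the record only changes who vouches for the key.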
>> There is currently only one free cert provider, if there are ever issues with it, your users will see a scary error message
> Isn't this the point?
The point is to secure the communication between client and server, and warn/stop it, if it is insecure (MITM et al.). It is counter-productive to stop the communication because an unrelated party (CA) is having issues.
The CA is not an unrelated party. If the client cannot verify the validity of the cert against the CA, then it should throw up a warning message. If the server cannot get a cert signed by the CA, then it too should throw up a warning message, because it does not have the trust of clients by itself.
> 4. The culture of fear around HTTPS, meaning only the "most secure" or "newest" protocols and cipher suites are to be used. This prevents older clients from working, where HTTP works just fine.
This is important. I have several devices at home that cannot display many web sites because they don't have the ability to use the latest ciphers.
Um, the number of people connected to my ssh server I can count on my fingers, and I have generally communicated with them beforehand. The number of people communicating with my https server is larger than I could ever count to monotonically.
If you don't get the difference in scale between the two, you might have an issue understanding the real problem.
I remember a discussion here on HN about how it makes life very hard for organizations like a school in Africa where the Internet connection is slow and expensive. Although many requests go repeatedly to the same pages (e.g. Wikipedia), HTTPS makes it impossible to cache them with a cheap local proxy.
HTTPS is cargo-cult'ish in this aspect. One obviously should not accept or serve personal data over HTTP, but why encrypt public information? (Having said that, I'm guilty here too, as I blindly followed the instructions given to me by my hosting company and my plain open site redirects to HTTPS.)
Soon we can properly sign HTTP requests using DNS for the PKI. Stuff like SRI inside HTML is paving the road to allow verification of hashes transmitted via header for the main page request, including a signature of that hash and url or such.
Sort of similar to how linux package managers employ GPG and package mirrors.
Or maybe we can provide caching based on signed-exchange.
Why do browsers punish non-verified certs much harder than no-cert?
If I want to quickly host my page and use encryption, then I have to go through all that hassle to make it work. Perhaps allow use of self-signed certificates on the same level as http instead of blocking my website.
Since there's no way to distinguish a non-verified (self-signed or not) certificate from an attack, browsers have to treat them identically to an attack (otherwise an attacker would simply pretend to be a non-verified certificate, to get the more lenient treatment).
On the other hand, a no-cert (unencrypted) connection can be distinguished from an attack on an encrypted connection: the browser knows a priori (through the protocol in the URL) that the connection is supposed to be unencrypted.
I think the point here is that there's also no way to distinguish a http request from an attack.
It's fair enough to argue that a self-signed cert could be an attack, but so could any http request.
> a no-cert (unencrypted) connection can be distinguished from an attack on an encrypted connection: the browser knows a priori (through the protocol in the URL) that the connection is supposed to be unencrypted.
I don't understand how that allows one to distinguish it from an attack. Knowing that a connection is supposed to be unencrypted is just equivalent to knowing that a connection could be under attack.
Rightly punishing the connection for having the trappings of security when it actually lacks it doesn't mean we need to punish openly insecure traffic. End users have been told time and again that http is insecure, and so it's fine to leave it. End users should also be able to trust that https means "secure", without having to distinguish between "secure" and "secure unless I'm being MITM'd" and needing to understand what any of that means.
To echo @mrob's comment (not sure why they've been downvoted), relying on user understanding of HTTP -vs- HTTPS is considered a failed experiment, and actively discouraged. Chrome in particular is moving to bring this into the browser UI by marking HTTP sites as insecure (rather than relying on users understanding that HTTPS is secure, which they don't).
Most end users have no idea what HTTPS is. They've just been (incorrectly) taught that the padlock means it's secure. Disable the padlock for self-signed HTTPS, and disable the CA-signed HTTPS-only features, and it becomes strictly better than HTTP.
Especially because, with perfect forward secrecy, there is no way to MITM only those connections that end up serving a self-signed certificate: the connection first negotiates an ephemeral key with which everything, including the certificate, will be encrypted.
This means that with eSNI and at least one CA-signed cert on the IP, any attacker runs the risk of having to spoof the CA-signed certificate.
A sophisticated attacker might know that you were going to connect to a self-signed site, though. Interestingly though, private DNS (DoH, etc.) might help further shroud this fact from the attacker.
All in all, I'd say that the browser should still throw up a full-page warning because of the implications of TOFU, but it can be one where the "continue to site" option is clearly shown even to a naïve user, and not hidden behind a spoiler.
I don't see that as the biggest problem. If we repeat what was said in the above comment:
> ...disable secure cookies ... for self-signed certs. ... the user ... enable[s] them.
So you make a self-signed cert for your website which needs secure login, and you tell your users to turn on secure cookies so that you can safely store their credentials in the browser. Then your website gets MITM'ed with another self-signed cert, which either
1. can access the same cookies, because the domain is the same
2. can't, because the cert is different
But in the second case, you've already conditioned users to log in to your website with the cert being self-signed, so they'll just log in again. If the browser complains that the attacker's cert isn't the same as the old cert, or makes the user re-enable secure cookies with a warning, then the user has been conditioned to do that too - and an extra message of "we changed the cert, ignore your security warnings" will convince lots of users with doubts.
The convoluted and unlikely scenario you describe is currently possible with HTTP and non-secure cookies (the website admin is setting the cookies and can choose to define them as secure or not).
Using a self-signed cert. isn't secure. What's being discussed is whether it's worse than HTTP. It isn't.
Browsers now try to detect and warn about credential forms which are submitted over HTTP. Any website admin who tries to convince users to ignore security warnings about HTTP is somewhere between seriously negligent and evil.
>What's being discussed is whether it's worse than HTTP. It isn't.
I disagree. The self-signed cert approach tries to carry with it the trappings of proper HTTPS, but it results in a bigger attack surface. Every additional bit of complexity that can be added to describing the "safe browsing experience" to the end user is an additional chink in the public armour. This is why I originally called it "open users up to social engineering attacks to make my web-dev life easier".
Since the self-signed cert is not secure, admins should have no reason not to simply use HTTP. In fact, this is where the discussion has gotten to now: self-signed certs can't even do safe login. What makes the self-signed cert worse is that for some reason people are insisting on using it anyways.
A known phishing message in gmail gets a red banner. An expired cert gets a full page block and buries the actual page link. It does seem disproportionate.
I manage 100+ servers, hosting a significantly larger number of domains, on a variety of linux and FreeBSD operating systems. Under both Apache & Nginx.
"..all of that hustle.." to initially setup is under 2 minutes with LetsEncrypt.
The renewal (via a cron job) is completely out-of-sight/out-of-mind.
The execution is shockingly simple. If you think it's "all that hassle" I guarantee you haven't even tried.
You're a professional plumber working on hundreds of households saying it's shockingly simple and should take no time at all for a first time home owner to fix their own plumbing. You've already got the knowledge, experience, and tools/parts in the van - of course you don't think it's a hassle!
And he's not even right. There is no hassle only if you take plenty of risk and rely on a random crappy ACME client to do it well, on its dependencies always working, on disks, OSes, and servers not failing, and on the ACME protocol not changing and not deprecating anything.
Otherwise you need some infrastructure: logging, monitoring, some way to manage upgrades, backups, testing recovery. Oh, and those private keys had better not be leaked anywhere, so you need encryption for backups, which brings key management and so on.
Everything in your comment has to do with general server maintenance, and is not specific to automating certificate renewal with certbot or a similar tool which is what is being discussed. Adding HTTPS to your site and setting up automatic renewal is literally three steps on an Ubuntu system and you can copy and paste it from the certbot documentation [1].
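For reference, the sort of three-step setup being referred to looks roughly like this on Ubuntu with nginx; the package and plugin names are the ones from the certbot docs, and your distro or web server may differ:
sudo apt install certbot python3-certbot-nginx   # certbot plus the nginx plugin
sudo certbot --nginx -d example.com -d www.example.com   # obtain a cert and wire it into the nginx config
sudo certbot renew --dry-run   # confirm the packaged timer/cron job will be able to renew on its own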
Dealing with certificates is more critical than "general server maintenance": things people often neglect doing suddenly become required. It might take from a few months to even a couple of years to get from neglected infrastructure to infrastructure ready for reliable automated issuance of certificates.
I actually evaluated a bunch of acme clients, wasn't satisfied with the code of any of them and wrote my own. But even from those I looked at certbot was always the worst choice, it's ridiculous letsencrypt is promoting it, better choices were POSIX shell clients or statically linked clients, like those written in Go and other compiled languages.
It sounds like you are super critical about any potential security issues (because what else could it be, other than that it just works or it doesn't). If a given machine's security is super important (oh, it's running a web server...), then why not just run certbot elsewhere and sync the files in a manner that satisfies your security needs?
To be reasonably fair, Let's Encrypt makes it easy. I even have a $5 a year shared hosting account that gives me Let's Encrypt SSL certs through cpanel. I can't imagine this feature is unique only to this one random shared hosting provider.
Your $5 instance with cPanel management isn't really quite the same as someone manually managing 100 servers hosting these services on disparate configurations and environments.
In other words, what your tooling of choice automates in a specific situation doesn't necessarily apply to any one else's usage, and scalability-wise, would be even more detached.
> Your $5 instance with cPanel management isn't really quite the same as someone manually managing 100 servers hosting these services on disparate configurations and environments.
Actually, it is. Some reverse proxies such as Traefik handle TLS certs automatically. You practically need to explicitly not want to do it.
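A sketch of what "automatic" means there, assuming Traefik v2 run as a Docker container; the flag and label names are from its ACME documentation as I recall them, and the resolver name, domain and image are placeholders, so treat this as an illustration rather than a copy-paste config:
# start traefik with an ACME certificate resolver using the TLS-ALPN challenge
docker run -d -p 80:80 -p 443:443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  traefik:v2.1 \
  --providers.docker \
  --entrypoints.websecure.address=:443 \
  --certificatesresolvers.le.acme.email=you@example.com \
  --certificatesresolvers.le.acme.storage=/acme.json \
  --certificatesresolvers.le.acme.tlschallenge=true
# any container labelled like this gets a certificate requested and renewed for it
docker run -d \
  --label 'traefik.http.routers.app.rule=Host(`app.example.com`)' \
  --label 'traefik.http.routers.app.entrypoints=websecure' \
  --label 'traefik.http.routers.app.tls.certresolver=le' \
  my-app-image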
Extended with: "and all his customers are based in a country with strong plumbing standards and regulatory guidelines" - rendering his advice less valid for every country which doesn't.
A practical example I ran into lately, we had a small system run on GKE and Google Cloud Loadbalancer and struggled to automate the certificate renewal process. Because the cluster/project was for an internal tool this automation was given a low priority and we still have to "manually" swap a certificate every few months (and if we forget to we get an angry slack DM).
TL;DR: there are still many combinations of networked services that do not ~easily~ support certificate automation, even ones you'd expect really should by now.
I found this to be a problem too and created Certera (shameless plug). It helps simplify a lot of things and fixes some of the pain points of the typical ACME clients.
Now, in our setup, everything is running on a different port, so it is easy to set up additional services all coming from the same hostname with the same IP address.
If you think this is "shockingly simple", I'd like to hear from you again in 10 years as your environment has grown, as your number of operating systems explodes, as you have to deal with restrictive network policies, as LetsEncrypt has been replaced a few times with new up-and-coming latest-and-greatest solutions, as bugs have been found, as clocks have skewed, as domain ownership rules have changed, as domain ownership verification policies have changed a half dozen times...
If you think something is set-it-and-forget-it, you haven't been around long enough.
Have you audited the source code of everything running on your computer? If not then you've had to trust that people aren't being evil or that someone is doing that checking for you. Why is this any different?
One reason that comes to mind immediately: self-signed certificates offer no protection against MITM attacks. It's worse than without a cert, since it gives a false sense of security.
The problem is that a http: protocol specifier implies no protection; the moment you follow the link, you know that the connection is not secured. Whereas a self-signed https: connection could be due to someone MITM'ing a site that generally uses CA's, in which case "no warning message" implies that the site is secured. The browser message has to make it clear to the user that something possibly unexpected is going on.
Yep, for years when everyone was talking about NSL's and other corporate strong-arming by the gov, I started saying I suspect most major CA's are compromised. At least you know your threat model though, because only the nation states are going to have that.
CA's and DNS are two parts of the internet that have become way too centralized in my opinion.
Do they? IME, they just ask you if you want to trust the self-signed certificate and allow you to optionally store that "trust" indefinitely, ending up with something like Trust On First Use. The warnings have to be scary initially because the security model is so radically different from the usual case of CA's; specifically, getting that "first use" validation correct is critically important.
Well, a XHR cannot programmatically decide whether a self-signed cert should be trusted. Perhaps browsers should pop up a warning bar in such cases, explaining that some site functionality is being blocked for security reasons. Clicking it would take the user to the big scary warning page, where they would be allowed to indicate that they trust the self-signed cert (permanently or not) and reload the original page.
How do you imagine this to work? XHR caller and XHR endpoint are both coming from untrusted sources at that point — if you allow either side to define a trust root, you are fully opening up to MITM attacks.
For development purposes, I imagine the approach akin to cross-origin support in browsers for loopback networks might work (i.e. don't enforce checks on them).
I apologise, I still don't understand your claims.
Without previous proof that the calling code was not eg. modified in-flight (eg. over HTTP or over HTTPS without a valid certificate), allowing it to use TLS and to either modify the trust root or pin a new certificate would severely reduce the security of the communication, and would be completely against what HTTPS is designed to solve in the first place (mainly MITM snooping and attacks).
And if there is "previous proof" of authenticity (eg. by serving through properly encrypted HTTPS), what is the benefit of allowing those clients to pin certs? I.e. they'll still need the existing "proper" HTTPS for all the other first-time visitors (and return visitors using new browsers/OSes/devices)?
I don't mean we should allow the fetch API to mess with the browsers trust configuration. It should only allow a temporary override of trust rules, similar to DANE TLSA-RRs, but provided by JavaScript instead of DNSSEC-verified DNS lookups.
Imagine e.g. combining this with an SPA bootloader contained in a data-url (like a bookmarklet), which the user scans via a QR-code or receives via text-based messaging.
CORS would still be in-play, and maybe the insecure nature of the caller is communicated to the API.
The benefit of this pinning would be e.g. allowing direct communication with IoT hardware, or even just preventing passive content analysis.
You could talk to IPs directly and still use TLS without weird wildcards like *.deviceid.servicedevices.com where the dns just has these zone entries:
deviceid.servicedevices.com DNAME has-a.name
, but that's ugly and leaks the device's IP through a DNS lookup.
The "trust this site" can be disabled, and usually is by many corporate policies. Additionally they use words like "unsafe" and "not trusted", but never use those words, nor big red screens, to warm against plain HTTP requests.
> Why do browsers punish non-verified certs much harder than no-cert?
Because it's taking time to build enough acceptance to flag http as insecure, whereas bad https connections that can't guarantee the expected security properties have been flagged as insecure from the beginning.
At this point, though, modern browsers show http sites as various flavors of "not secure" in the address bar, and limit what those sites can do. Browsers will increase the restrictions on insecure http over time, and hopefully get to the point where insecure http outside the local network gets treated much like bad https.
Because there is only one free certificate provider (lets encrypt) and it does not allow wildcard certificates via server authentication.
Having the DNS credentials lying around on the server is not a good idea. So creating wildcard certs via letsencrypt is a huge pain in the ass.
If a webmaster has control over somedomain.com I think that is enough to assume he has control over *.somedomain.com. So I think letsencrypt should allow wildcards to the owner of somedomain.com without dabbling with the DNS.
The way things are now, I don't use ssl for my smaller projects at smallproject123.mydomain.com because I don't want the hassle of yet another cronjob and I sometimes don't want the subdomain to go into a public registry (where all certificates go these days).
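For what it's worth, the wildcard DNS-01 dance can be done without leaving DNS credentials on the web server, at the cost of a manual step at each renewal. A sketch with certbot's manual mode and a placeholder domain:
# certbot prints a TXT record value for _acme-challenge; you add it at your DNS provider, then continue
certbot certonly --manual --preferred-challenges dns -d 'example.com' -d '*.example.com'
Automating that renewal is exactly where the credentials-on-the-server problem comes back, which is why people delegate _acme-challenge to a throwaway zone, as mentioned further down the thread.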
> Cloudflare will also put SSL in front of your origin for free.
Used to be everyone complained about CF putting SSL in front of HTTP origins.
However, CF can also issue a CF-signed certificate with a stupid long expiration for your origins[1] and validate it. This is how I fully SSL many of the things while avoiding potential headaches with LE / ACME. Combine with Authenticated Origin Pulls[2] and firewalling to CF's IP ranges[3] for further security.
Of course, that still leaves CF doing a MITM on all my things.
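A rough sketch of the origin side of that setup on nginx; the file paths are placeholders, the origin cert/key are the ones issued from the Cloudflare dashboard, and the origin-pull CA is the certificate file Cloudflare publishes for Authenticated Origin Pulls:
server {
    listen 443 ssl;
    server_name example.com;
    # long-lived Cloudflare Origin CA cert: trusted by Cloudflare's edge, not by browsers
    ssl_certificate     /etc/ssl/cf-origin.pem;
    ssl_certificate_key /etc/ssl/cf-origin.key;
    # Authenticated Origin Pulls: require a client cert signed by Cloudflare's origin-pull CA
    ssl_client_certificate /etc/ssl/cloudflare-origin-pull-ca.pem;
    ssl_verify_client on;
}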
Static hosts like Netlify & GitHub also enable free SSLs. The barrier is so low most people trip over it.
I am sure there are still very unique edge cases though. If I had one of those edge cases I would sit down & really weigh the pros & cons though of not using HTTPS. I would not take it lightly.
"Free", but you can only use them on AWS stuff. AWS makes it nice and easy (and does a bunch behind the scenes for you). Part of that behind-the-scenes is that they have control of the private key on their side. You want to use the AWS generated cert locally, or on another provider, too bad.
You’re right, but it’s pretty simple to slap CloudFront (or Cloudflare) ahead of those origins if you need to in a pinch. I don’t work for Amazon (and have no dog in the fight) but I am a fan of AWS. And if you’re ever using AWS for anything, there’s no reason to _not_ use their free certs.
Someone else mentioned Azure having a similar offering (I’ve never played with Azure so I can’t speak to it). And if 2/3 of the providers offer it, I’d imagine GCP will at some point as well.
I love how easy it’s becoming to launch SSL. LetsEncrypt did a lot to make it mainstream. I’ve never used LE but I am grateful for their impact on our industry.
> I think the barrier is low enough that I SSL all the things (including my small side projects).
Same here. If you have a domain then you should have a cert, it's not that hard today.
My wife wanted a website that's pictures of our dog as a joke, right now it's a single img tag. The second thing I did after that was getting an HTTPS cert and forcing redirection.
Maybe you saw this, but you can make _acme-challenge.domainA.tld a CNAME to _acme-challenge.domainB.tld. Where domainB is a throwaway domain used only for validation. There are some TLDs that are pretty cheap per year.
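In zone-file terms the delegation is just this, with placeholder names; the credentials that can write TXT records then only ever control the throwaway zone:
; in domainA.tld's zone: send ACME DNS-01 lookups to the throwaway domain
_acme-challenge.domainA.tld.  IN  CNAME  _acme-challenge.domainB.tld.
; the validation TXT records are only ever created under domainB.tld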
Certbot might not do this out of the box but ACME lets you pass one challenge at a time, collect a new one, repeat. The tokens which show you passed a challenge will "keep" for at least hours and it might even be days (when Let's Encrypt was new it was weeks!) so you can collect them up to get your cert over a time period.
So, as long as the challenge taking is serialised you can get away with just giving a single TXT answer at a time.
True, though running your own DNS server or paying for another DNS provider may be similar in effort or expense...as compared to a throwaway cheap TLD domain that comes with DNS.
Your post said several things, but one was "there is only one free certificate provider (lets encrypt)". Pointing out that there's actually a second ACME one is a useful response, at least to me, since I think a lot of us still thought LE was the only option.
StartCom/StartSSL used to issue free certificates even before LetsEncrypt appeared, and it was a much bigger hassle to get verified, but at least they were valid for a full year. Not sure if they still do, and they didn't allow for multiple servernames in one cert.
Sure, there is Let's Encrypt, and if you are facing the Internet you are probably good to go.
If you are on an internal network, then good luck. You need to build a PKI, and then put into your devices the right certificate so that it is trusted.
If it was simpler, Apache would sing out its "It works!" in HTTPS and not HTTP.
So here's how I do it for internal network devices. I have a RaspberryPi running on 192.168.100.1 on my local network. On https://www.duckdns.org/ or whatever your favorite DNS provider is, I signed up for a free account and created myRaspberryPi.duckdns.org and pointed it to 192.168.100.1. While you're logged in, grab the DuckDNS API key.
Next you need to use ACME or Caddy (I use the latter) and tell it to do the Let's Encrypt DNS challenge using DuckDNS. It looks like this for Caddy:
# in the Caddyfile
tls {
dns duckdns
}
# in the CaddyEnvfile
DUCKDNS_TOKEN=your-api-key-goes-here
Then you start it like this:
nohup caddy -http-port 80 -conf /etc/caddy/Caddyfile -envfile /etc/caddy/CaddyEnvFile -agree -email you@email.com &
That's it, now I can go to https://myRaspberryPi.duckdns.org and I've got HTTPS on my local network without anything exposed on the internet EXCEPT my device's internal IP. You've got to evaluate how much of a threat that is.
Wouldn't this be subject to Let's Encrypt's rate limit of 50 certs per week for duckdns.org? Do they have an exception or are not enough people using this trick for it be a problem (yet)?
Let's Encrypt only works on public domains that happen to not route externally. I can never (or at least, should never) get LE certificates for *.pikachu.local, but that's a perfectly valid hostname for a local machine.
This is probably a bad idea and I'd recommend migrating off such names as a background task.
Realistically you can't entirely deconflict these names. So you always have a risk of shadowing names from the public Internet.
The public CAs spent years in denial over this (yes they used to sell publicly trusted certs for "private" names, this is now prohibited). Create internal.example.com and things get easier. To the extent security by obscurity is worth trying it's just as available this way (split horizon DNS etcetera)
> Realistically you can't entirely deconflict these names. So you always have a risk of shadowing names from the public Internet.
It's totally safe and legitimate for ycombinator to use secret.ycombinator.com on their intranet without telling anything about it to the outside internet.
Those are names you own, and a CA will happily issue you certs for those names (but Let's Encrypt won't without a DNS record saying the name at least exists)
The grandparent was, as I understand it, talking about names they don't own, for which you've no assurance somebody else won't own them (on the public Internet) tomorrow. This used to be very common, decades ago Microsoft even advised corporations to do it for their AD, but it's a bad idea.
If you could get certificates for them, so could anyone else including your adversaries, since there is no system of ownership for them. It would be like issuing certs for https://192.168.1.1
They're likely to be part of a cafe/hotel/guest wlan or a poorly managed "intranet" full of vulnerable stuff that needs to be shielded from CSRF. That's in addition to having ambiguous addresses. So they should definitely be treated as less safe.
My point was that HTTPS is (much) more complicated than bare HTTP, and this is probably one of the reasons it is not taking over the web by storm (though progress is undoubtedly there).
There is one "good" reason against https: handshakes take enormous amounts of CPU, relatively speaking. It's quite easy tp DoS server by skipping the expensive part on your end. You can load a core with 10~30Mbit@2k rps if your not even optimized.
Whereas the same server could tank 40k rps HTTP requests.
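If you want to sanity-check numbers like that on your own hardware, the usual quick-and-dirty measurements are something like the following; exact subcommand names vary a bit between OpenSSL versions:
openssl speed rsa2048 ecdh   # raw signing / key-agreement operations per second, per core
openssl s_time -connect localhost:443 -new -time 10   # full TLS handshakes per second against a real listener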
This is an argument I hear often, but I have yet to see an effective L7 DoS with the TLS handshake being the bottleneck. It's almost always the application code that gives up, rather than the CPU spikes due to TLS.
I have a 1 vCPU 2GB server that terminates TLS with dual Prime256v1/curve25519 + RSA 2048 setup with a 10 minute keepalive time, running AES 128, 256 (CPU has AES-NI), and CHACHA20-POLY1305 comfortably handling several millions of requests a day and CPU load hovering 10-20%.
The number of ECC handshakes is surprisingly high, and CHACHA works wonders too with user agents today.
Given the threats from passive attacks today, this is a cost that must be paid. It just looks quite affordable with modern protocols.
The only place I've had to care about this was on an embedded hardware server. Even then, if the handshakes were too much, it'd just drop the connections and continue to serve those it could. It wasn't enough to knock the whole thing offline.
If a 16-bit 200MHz microprocessor can handle a few thousand connections/second, then a modern processor should definitely be able to stay upright fairly easily.
It's not exactly apples to apples... but my 64MHz embedded processor is doing way more than 10,000 chacha20-poly1305 encodes of 64 bytes with another 64 bytes of additional data for the AEAD per second. Granted, it has some hardware crypto functions.
I am still skeptical TLS handshake on site visit is actually bogging down anyone’s computer.
The stream cryptography is not the issue here. Neither is "TLS handshake on site visit".
The issue is that you have to spend the handshake cost before you can look into the request at all.
In my testing of high-throughput scenarios like copies over ssh/rsync/https/smb (I tried them all), encryption was a big hit to throughput in every case. Hardware assistance (built into the CPU) helped a lot, but it was still a massive boost to shut off encryption - saving literal minutes on every bulk transfer, multiple transfers per day.
For the average case it probably doesn't matter, and you can optimize it, but I think it is totally understandable that the average novice could end up with bad https performance if only because the defaults are bad or they made a mistake. If hardware assist for the handshake and/or transfer crypto is shut off (or unavailable, on lower-spec CPUs) your perf is going to tank real hard.
I ended up using ssh configured to use the weakest (fastest) crypto possible, because disabling crypto entirely was no longer an option. I controlled the entire network end to end so no real risk there - but obviously a dangerous tool to provide for insecure links.
Also worth keeping in mind that there are production scenarios now where people are pushing 1gb+ of data to all their servers on every deploy - iirc among others when Facebook compiles their entire site the executable is something like a gigabyte that needs to be pushed to thousands of frontends. If you're doing that over encrypted ssh you're wasting cycles which means wasting power and you're wasting that power on thousands of machines at once. Same would apply if the nodes pull the new executable down over HTTPS.
How long ago was this — and how fast was your network? On hardware less than a decade old you shouldn’t be seeing that unless you’re talking about 10+Gb networking.
Oh, yeah, for good clients it's totally fine.
But e.g. a machine I'll try an http benchmark on in a couple hours (2 cores; 4780 BogoMIPS each) only managed 4177 ops/s using the fastest-available curve X25519 with
openssl speed ecdh
gatling -V -n -p 80 -u nobody
I know this is somewhat extreme, but on a CPU that was about 30% faster I got 40k rps for small files using the kernel's loopback, which is where the CPU spent most of its time.
Depends on the stack used. If you have persistent connections you'll incur far fewer handshakes than requests. If you use an elliptic-curve scheme, key exchange costs are negligible. But sure, if you do one 4096-bit RSA exchange for every request it will be costly.
Assuming this is true, 2000 rps per CPU core seems pretty reasonable. That would only be a bottleneck when serving static files. Only the most basic apps are going to be able to serve that much traffic per core.
My biggest gripe with the current de facto recommended approach (even mandated in HSTS) is that you need to redirect to https from untrusted http.
So you are being forced to either not serve http, or to condition users to trust MITM-able redirect. How many people will notice a typoed redirect to an https page with a good certificate?
The solution is simple: browsers should default to https, and fall back to http if unavailable. Sure, some sites have broken https endpoints, but browsers have enforced crazier shit recently.
That's what HSTS is for - you set an HSTS policy, and the browser will remember this site for a certain time you can set (usually 1-2 years).
And going further, you can enable HSTS preloading, meaning the next release of each browser is going to hardcode your website as always and only ever to be accessed over HTTPS.
See for example my domain https://hstspreload.org/?domain=kuschku.de, which is currently in the preload lists of all major browsers including Chrome, Firefox, Edge and even Internet Explorer.
I also deploy the same for mail submission with forced STS, and several other protocols.
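For anyone wanting to copy this: the whole policy is a single response header, shown here as an nginx directive. A max-age of at least a year, includeSubDomains and the preload token are, as far as I know, what the preload list requires; only add "preload" once you're sure you can commit to HTTPS-only.
# send HSTS on HTTPS responses
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;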
Right, so HSTS will protect a visitor who has visited your web site at most max-age ago using that particular browser and device.
Or, as I stated, for preload, you have to either not have HTTP at all, or have a redirect to HTTPS: it should be clear from my above post why I think a redirect is a bad idea. I also dislike turning off HTTP for those that don't have any other option.
To me it seems that browsers just switching to https-by-default and http-as-fallback is a much simpler, better, backwards-compatible change that should just work. What am I missing and why do you feel HSTS is a good idea compared to that?
Because some websites serve something different on 443 and 80, and you won’t get the right result by visiting 443.
The preload list allows you to specifically say that for your own website clients should always use HTTPS, which is a good solution, as it means no one is ever going to visit kuschku.de on port 80, except for curl and dev tools, for which the redirect is useful.
We have differing views of "everywhere, today": you acknowledged yourself there are cases where it won't happen, it's just how much we think that's important where we differ. That's ok, I appreciate your point and thanks for spending the time to explain.
As for what browsers can or cannot do, they also can't introduce DNS-over-HTTPS, introduce stricter cookie policies breaking a bunch of web sites, reduce the effectiveness of ad-blockers, drop Flash, or... Sure, defaulting to https is too high a bar (not expressing an opinion on any of those — eg. good riddance to Flash :) — but browsers can and have done stuff that's just as bad, forcing web site creators to adapt their web sites).
Annoyingly, if you want to get a let's encrypt cert you have to serve http. Back when I was manually purchasing & installing certs I didn't even listen on 80 for several services.
Weirdly nature.com seems to actually redirect to https, as does zara.com, lenovo.com, genuis.com, and senate.gov. Is this list stale, or did no one spot-check this?
It seems to meet the requirement for exclusion from the list. Data updated 16 Dec 2019, so I don't think it's stale.
I've also checked from Australian and a European connection, so I don't think it's a regional thing. The other genuis.com doesn't work for me, the other sites redirect and set a cookie.
Article states they allow multiple 301 or 302 redirects. What is not allowed are JS based redirects. There might also be a limit to the number of redirects followed, but that isn't mentioned in the article.
Oops! You're right, the W3C only helped author it.
I was also wrong to say that w3.org never redirects to HTTPS. If the browser sends an Upgrade-Insecure-Requests HTTP header, then it redirects. That allows it to support all browsers as securely as possible.
Sites like whynohttps.com and observatory.mozilla.org should really test for this pattern.
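A sketch of that pattern in nginx, for anyone who wants to copy it; the config is hypothetical, and $http_upgrade_insecure_requests is simply nginx's view of the request header:
server {
    listen 80;
    server_name example.com;
    # redirect to HTTPS only for clients that declare they can upgrade
    if ($http_upgrade_insecure_requests = "1") {
        return 301 https://$host$request_uri;
    }
    # legacy clients keep getting plain HTTP
    root /var/www/html;
}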
I noticed it as well. I first thought it was a result of using CDN services or recycled IP addresses, but gnu.org doesn't use a CDN, and its IPv4 and IPv6 are both served by Hurricane Electric, which never did any business in mainland China.
One annoyance with this system, from the linked webpage:
>an expectation that a site responds to an HTTP request over the insecure scheme with either a 301 or 302
Doing things this way is the final nail in the coffin for Internet Explorer 6, since IE6 does not use any version of SSL which is considered secure here in 2019. And, yes, I have seen people in the real world still using ancient Internet Explorer 6 as recently as 2015, and Windows XP as recently as 2017.
Which is why I instead do the http → https redirection with Javascript: I make sure the client isn’t using an ancient version of Internet Explorer, then use Javascript to move them to the https version of my website. This way, anyone using a modern secure browser gets redirected to the https site, while people using ancient IE can still use my site over http.
(No, I do not make any real attempt to have my HTML or CSS be compatible with IE6, except with https://samiam.org/resume/ and I am glad the nonsense about “pixel perfect” and Flash websites is a thing of the past with mobile everywhere)
That’s actually a good idea. It was simpler to set up the Javascript redirect. If I were to go that way, I would probably redirect IE6 to a "neverssl" subdomain (which also would be useful for dealing with WiFi captive portals).
Can you use old crypto for IE6 using some kind of agent detection while using new crypto for modern browsers? I thought Cloudflare does something like that. But there's a danger of MITM downgrade attack with this approach...
Most people in this space want to do SHA-1 which is prohibited so you need a deal with a CA that uses a "pulled root" to do this. That means they told the trust stores this CA root will not comply with the SHA-1 prohibition and so it's untrusted in a modern browser, but IE6 doesn't know that so it trusts the SHA-1 cert. The CA obviously wants actual money for sorting this out for you. In fact I don't even know if this idea ended up successful enough to be commercially available at all.
If you don't do this to get SHA-1 then you're relying on the users somehow having applied enough updates to not need SHA-1 but for some reason insisting on IE6 anyway. That's a narrower set of users. At some point you have to cut your losses.
The preload list is an absolute kludge that does not and will never scale, creates a huge deal of problems, and works only for specific browsers.
The task is not as simple as using DNS to store strict-https flags (as DNS can be manipulated by an intermediary), but hardcoding the lists in the browsers and keeping the lists in Chrome's code is definitely not a solution.
I mostly have port 80 egress traffic blocked on Little Snitch. The web is painful to use like that but gives you an idea of the sorry state of websites.
A lot of websites just don't serve over HTTPS, or serve them with domains whose CN or SAN don't match the host.
Many that do support https have links that downgrade you back to http on the same domain.
Most captive portals I've seen use HTTP redirection to the actual domain of the captive portal, so it would still fail as soon as it follows the redirected URL.
I mean whitelist port 80 for captive.apple.com. Sorry if that wasn't clear.
macOS has a background daemon which automatically hits captive.apple.com on connection to a WiFi network, to detect if it's behind a captive portal (and opens up a browser window to let you complete the flow, if it gets a 302). So that much should work even if you block egress port 80 but whitelist captive.apple.com.
...that is, assuming the portal to which you get redirected would be served over https, but I guess that isn't a given either.
One thing that surprised me was how hard it was to set up https and redirects for websites on AWS and Google Cloud. I needed to set up a load balancer to do https.
The redirects are also hard. I have a static site using Google storage and I have to create a server instance and redirect from there, because it's not possible to do an automatic redirect. I don't know why the big cloud hosting providers aren't cooperating to make full https implementation easier.
Recently an OpenShift cluster I admin went down because of long-lived certs not being rotated in time. There are many clients, servers, nodes, services, and configs involved, so rotating is non-trivial, so of course it's automated, and of course because it's not tested regularly, the automation just doesn't work after a while. Using the automation only seems to make things worse, and getting everything working again ends up taking days.
PKI is technically the best practice for these systems, but it's also the most fragile and complicated. At a certain point, if the security model is so complex that it becomes hard to reason about, it's arguable that it's no longer a secure model, to say nothing of operational reliability.
I also have a whole rant about how some business models and government regulations literally require inspecting TLS certs of critical transport streams, and how the protocols are designed only to prevent this, and all the many problems this presents as a result, but I don't think most people care about those concerns.
Oh, and gentle reminder that there are still 100% effective attacks that allow automated generation of valid certs for domains you don't control. It doesn't happen frequently (that we know of) but it has happened multiple times in the past decade, so just having a secure connection to a website doesn't mean it's actually secure.
Is it still the case that when you think you connect in https to a website, only the segment to cloudflare is encrypted and the segment cloudflare to the web server might not be?
Yes, that's SSL termination. Generally this happens at the CDN, load balancer or proxy (e.g. nginx used as a cache) layer and is pretty common since the fleet of servers handling the request after being routed are in a private network. With CF, the request from CF to the origin is over a public network and it will depend on how the user has configured their CF setup as to whether or not that hand-off is then encrypted. If they are doing SSL termination in CF, then it won't be encrypted from CF to the origin server.
If we migrate to HTTPS everywhere we can get rid of HTTP for general use and switch to a different UI, where HTTPS websites don't have any special icon but HTTP ones get a warning icon.
It's already effectively how password form submissions work in many browsers.
You can't have HTTPS everywhere until we can get HTTPS for IoT devices. My router doesn't serve its configuration screen via HTTPS. How could it? I have to connect to it to configure it before it's on the internet.
Same with my IoT cameras and all the various local apps I run that can start a web server. Heck, my iPhone has tons of apps that start webservers for uploading data since iPhone's file sync sucks so bad.
We need a solution to HTTPS for devices inside home networks.
I agree that having an elegant and secure solution to enable HTTPS on non-internet-facing equipment would be nice. I work mainly on embedded devices and all my admin interfaces are over HTTP because there's simply no way to ship a certificate that would work anywhere. It would be nice if you could easily deploy self-signed certificates that would only work for local addresses and only for specific devices, although of course doing that securely and with good UI would be tricky.
In the meantime having big warnings when connecting to these ad-hoc web interfaces makes sense I think, since they can effectively easily be spoofed and MitM'd (LANs are not always secure in the first place so it makes sense to warn the user not to reuse a sensitive password for instance). It's annoying for us embedded devs but I think it's for the greater good.
The problem is that, for better or worse, generations of internet users have been taught to look for the padlock before sharing any sensitive info (especially banking credentials and the like). Suddenly removing this prompt is probably going to confuse and worry many people.
This is exactly the opposite of a forcing HTTPS problem. When HTTPS isn’t everywhere, HTTPS gives a false sense of security. When it is, browsers can stop emphasizing it. We’re already well down that path, with padlocks no longer being green/being hidden entirely, EV certificates losing their confusion vector, insecure pages being assigned the icon that sticks out…
> The biggest problem with forcing everything HTTPS
No it isn't. Https not being 100% bulletproof is unrelated to using it everywhere. And it's lightyears away from its biggest problem.
Maybe I’m wrong, but I feel SSL has a downside of relying on more centralization. If a visitor to my totally-static webpage wants to bypass that layer and request the http version directly, I’m going to let them. (Obviously not excited about the idea of being mitm’d but it’s not a security risk, so leave that tradeoff up to the visitor).
MITM can do anything to your site, so your totally-static site may not be static any more at the victim's end. It may be a site collecting private details, attacking the browser, or using the victim to attack other sites.
Your static HTTP site is a network vulnerability and a blank slate for the attacker.
Thanks for the reply. I've seen that site but it seems to be aimed at people who don't offer any https at all. At this point I'm still more comfortable offering visitors the decision. (Not many people visit my site by the way.)
That won't do anything. If someone can Man-in-the-Middle you, then they can easily forge a 302 redirection to a malicious web page that could be HTTPS.
Ok, cool, I found a new numerical overflow in your browser's image rendering library. Now I can shove an <img> tag into the insecure stream and exploit you.
I see your point. But we trade security for convenience 24/7/365. We could all have bulletproof glass in our homes, personal security cameras everywhere, backup generators, panic rooms, etc, but we don't, because it's not convenient (and I know the expense is primarily what makes it not convenient, but I think it's still a valid argument).
Providing access to Wikipedia over http to people in third world countries may be worth the risk of someone MITMing the site with propaganda.
The suggestion is only to give some users the option.
Mitm with propaganda is the least of the worries. Full on exploit code is.
The fact is, as an ecosystem develops, complexity increases. Lifeforms in that ecosystem have to spend more time and effort protecting themselves from outside attacks as time progresses.
TLS 1.3 with 1-RTT should improve this situation at least somewhat. I suspect HTTP3 will help in high packet loss situations but it's going to be a while until that's deployed. Also wikipedia is still perfectly reachable via http if you disable the HSTS preloading in your browser
I'm not an expert, but would this be fixable by installing a new root certificate on the computers who want to use the caching server, and then having the caching server sign the pages it transmits using the new root certificate?
- If you are hosting a simple static page or blog, your hosting provider probably has Let's Encrypt plugin.
- If you have your own VPS, Caddy has you covered with file serving, fastcgi support for PHP, and proxying to (g)unicorn/nodejs/Go/.NET, and has HTTPS enabled by default (see the sketch after this list).
- If you have more advanced setup (e.g. containers), traefik supports HTTPS with just a few lines of configuration.
- If you are big enough to afford cloud, it takes a few lines of Terraform code to provision a certificate for load balancers (speaking for AWS, and assuming others have similar solutions).
For other cases (e.g. lots of traffic with custom haproxy/nginx/etc. setup), you are probably smart enough to find out how to enable Let's Encrypt support.
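To make the Caddy case above concrete: in the v1 Caddyfile syntax used elsewhere in this thread, automatic HTTPS is what you get just by naming the site; the domain and paths here are placeholders.
# Caddyfile: a certificate for example.com is obtained and renewed automatically
example.com {
    root /var/www/site
    proxy /api localhost:8080
}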
1) Not everything is running bare Apache. In fact, some services might have some rather strange web-driven GUI (or, more interestingly, curses-like) that requires you to carefully load a certificate, a CSR, and so forth in a somewhat arcane manner. Some pretty niche serving exists out there and I have had to deal with a bunch of them, to the point where I had to write extensive documentation on keeping the certificates up to date on each separate weird service. Many of these services have "no user-serviceable parts inside, your warranty will be voided ..." clauses in the service contract which deter spelunking.
2) Some services require wildcards, like proxies.
3) Some organizations have, due to someone far away making strange decisions, policies about certificate authorities, and people to audit for compliance. Therefore, a cert costs money and, for a site which is purely informational, that's a hard sell.
4) Because we're not running on a hosting provider, a VPS, containers, or cloud.
5) Because not everyone wants to deal with some combination of the above every three months due to Let's Encrypt's expiration policy.
I consider myself young, but I've been around long enough not to rely on One True Service Provider for anything.
And "Let's Encrypt" is not an answer to "HTTPS is not free". It's not. We all are going to see our projects outlive Let's Encrypt (or their free tier).
In the end, nothing is secure. A dedicated attacker will find a way, given enough resources. Any security measure is just a deterrent.
My deterrent is that it's not worth MITM'ing my personal website with, like, 10 monthly visitors. (The reader might gasp that I lock my bicycle with a chain that can be snapped in a second, and that a strong enough human can probably bash my home door in).
Anyway. It's almost 2020, and if you are still advocating on moving the entirety of the Web to reliance on Big Centrally Good Guys, I really don't know what else to say to you.
Sure, depending on your setup it's easy, but for a lot of setups it isn't. Instead of trying to say HTTPS is easy and shaming everybody who isn't doing it, more effort should be diverted into creating an actual fully encrypted network that doesn't need CAs.
What actually happens when you try to force HTTPS over the internet: you centralize it, you make it harder for the small player, hobbyist, personal homepage guy, and make it easier for the big corporation.
It isn't just web sites. Many software repos still use http or native rsync. Some would argue that you validate the packages with GPG, but you would be amazed if you saw how many people install the GPG public key from the same mirror they download software from.
I set up Let's Encrypt for an older Exchange server a while ago. While I love the result, it was NOT a simple, one-line exercise.
Up to date documentation was near-impossible to find, and the scripts that came out of the box on the recommended client needed some fixing. The whole thing took about half a day, plus some hours a few weeks later once the unforgiving anti-abuse thresholds I accidentally triggered during end-to-end testing finally expired. Definitely wasn't a pleasant experience.
It is relatively straightforward if you have a single site hosted on a well-supported operating system and web server.
It suddenly becomes really, really complicated if you have multiple servers, multiple domains, nginx configurations that the tool does not expect (but insists on rewriting).
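One mitigation, for what it's worth: certbot's webroot mode only fetches certificates and leaves the web server configuration alone, so you keep managing nginx yourself. A sketch with placeholder paths and domains:
# obtain/renew the cert without touching nginx config; point ssl_certificate at the result yourself
certbot certonly --webroot -w /var/www/example -d example.com -d www.example.com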
Yes, but at that point it's not two lines with let's encrypt any more.
For my part, I had to write around a thousand lines of script and alter various existing code in order to switch from manual ssl (whenever the client paid for it) to automatic ssl (everywhere), because there was no way I was going to manually buy hundreds of certificates a year when I took over this role. Nowadays we're 100% ssl but it was harder for an existing person already accustomed to the existing system than doing nothing. I'm just too lazy to check a site every week and renew many certificates manually and copy around stupid files and generally go crazy. Plus, if it's automated, I think there's less chance of the keys being copied. So in my mind it was worth the effort, but it was surely effort.
So true. Even on hosting that fully supports Let's Encrypt through a web-based admin like cPanel or DirectAdmin, the process can be confusing and error prone.
If we're purely talking about Let's Encrypt, it's not straightforward to set up on Azure either.
It's easy to set up a standard cert through Azure, but if you want to use Let's Encrypt there's a whole dance you have to go through to get there, and for many people it's not worth the time and they'll happily pay a bit of money to make it a few-clicks thing.
When I looked at doing it, I'd have to bump up my hosting plan for my vanity blog to somewhere in the neighborhood of $100/month to apply an SSL cert for my custom domain, which is just stupid for a site that gets a couple thousand visits a month and maybe earns me $5 in referral fees.
It looks like you have to go up to at least a B1 app service, which at $50/month doesn't make a lot of sense for me, unless I can figure out how to get my MSDN credits associated with that Azure subscription instead of one of the other two accounts I don't use, but that's a whole other can of worms...
It is simple for a one-server website.
When you're on Alexa 1M, you certainly have a load balancer, multiple servers for redundancy, etc. It makes things not straightforward, and you certainly don't want to use the default certbot which overwrites your config.
I am in the Alexa 1M (top 50k, even). I do not have a load balancer, I do not have multiple servers for redundancy. This isn't even a static site; most of our page views are the wiki, and the server running all of this has 8 cores, 4 of which are constantly maxed out by a non-website-related process. Most websites nowadays are over-engineered.
Checked my old site's rank: ~250,000. One VPS, €4/month. Mostly static, but a decent part is served by a not-so-light Perl CGI script (!). I'm sure I wouldn't get away with that in the top 1k websites, but in the top 1M?
> "I am on alexa 1m (50k even). I do not have a load balancer, I do not have multiple servers for redundancy. This isn't even a static site, most of our page views are the wiki, the server running all of this has 8 cores and 4 are constantly maxxed out by a non-website related process.
Most websites now and days are over engineered."
That's awesome! Mind sharing some more details? (hosting plan/CDN/etc). Or even the URL?
Rented dedicated server running a 9900K. A Windows hypervisor runs the VMs: a database VM, a website VM, and 3 game server VMs on this machine. Each game server VM runs 2 instances of the game server, but only 1 ever has high pop.
Most of our traffic goes to our wiki: we are the most active open source video game on GitHub. Most SS13 servers run their own codebase, forked from ours, but will still frequently point their players to our wiki rather than set one up on their own.
A Cloudflare caching layer was added back in March when we got a 4x spike in web traffic from a YouTuber talking about the game.
I mean the next more complicated case isn't that bad either. You set up a sidecar VM/container/machine/whatever-you-want that either instruments your DNS or gets the traffic from .well-known/acme-challenge and just renews your certs every day.
Then your load balancers pull the current cert from the sidecar every day with NFS/Gluster/Ceph/HTTP/whatever-you-want and reload the web server if it changed.
Assuming that you can catch a failure of your sidecar server in 89 days or so you don't need much more redundancy.
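For the pull side, here is a minimal sketch of what that daily job on a load balancer could look like, assuming a hypothetical sidecar reachable at http://acme-sidecar.internal that serves the current fullchain.pem and privkey.pem, and nginx as the web server to reload; the hostname and paths are illustrative, not taken from the comment above.

```python
#!/usr/bin/env python3
"""Daily cron job (sketch): pull the current certificate from the ACME sidecar
and reload nginx only if something changed. Hostname and paths are assumptions."""
import hashlib
import subprocess
import urllib.request
from pathlib import Path

SIDECAR = "http://acme-sidecar.internal"   # hypothetical sidecar; keep this on a private network
DEST = Path("/etc/nginx/tls")              # where this load balancer keeps its certs


def fetch(name: str) -> bytes:
    with urllib.request.urlopen(f"{SIDECAR}/{name}") as resp:
        return resp.read()


def main() -> None:
    changed = False
    for name in ("fullchain.pem", "privkey.pem"):
        new = fetch(name)
        target = DEST / name
        old = target.read_bytes() if target.exists() else b""
        if hashlib.sha256(new).digest() != hashlib.sha256(old).digest():
            target.write_bytes(new)
            changed = True
    if changed:
        # Reload rather than restart, so existing connections are not dropped.
        subprocess.run(["nginx", "-s", "reload"], check=True)


if __name__ == "__main__":
    main()
```

Run it from cron once a day on each load balancer; as noted above, even a broken sidecar leaves you roughly 89 days to notice before the last issued certificate expires.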
IMHO it is easier to set up SSL on the LB: you don't need to set it up server by server, and all services (HTTP, SMTP, POP, IMAP and others) are protected by the same SSL certificate and cipher suite with an SSL-terminating LB. Also, many LBs support auto-renewal.
While I appreciate the efforts of certbot to make it as user-friendly as possible, I still find this state of things unforgivable. I don't know where it went wrong so that today a developer must spend time learning and tweaking low-level encryption tools. I'm just saying HTTPS will never reach 100% unless it becomes a baked-in feature of all hosting.
Developers don't need to, unless they're the ones hosting your website. In which case, yes, I expect them to be able to configure web hosting software.
There are myriad other cases; basically every time you diverge a bit from the 80% path, you're in for a treat and will deal with all the intricacies of SSL management.
Certbot, and most other standalone ACME clients, are just stop-gaps.
The end game is first-party support for automatic HTTPS in all web (and other) servers. It is happening (e.g. mod_md), it's just going to take time. For example, to get it packaged for all distributions.
For shared hosting, if you ignore the few providers at the top who are either CAs (e.g. GoDaddy) or are in contracts with CAs (e.g. Namecheap), the overwhelming majority of them are already providing free and automatic SSL for all hosted domains.
> The end game is first-party support for automatic HTTPS in all web (and other) servers.
There's still a need for certbot et al when you have multiple services (e.g. web and mail and XMPP) running on a single domain name. In fact, I actively avoid servers that insist on doing ACME themselves because it breaks my unified ACME process.
A management fad called DevOps is what went wrong; before, you could count on your sysadmin to take care of that :) Apart from that, not everything always makes sense to use in production without a good level of understanding, which might otherwise lead to, for example, a false sense of security.
If Microsoft baked auto-cert-install into IIS, allowing you to cherry-pick a provider and/or just select their own free CA, that would really solve the problem for Windows-based web servers. In my experience Certbot/ACME-style renewal doesn't work reliably for Windows/IIS.
Most things would benefit from encryption. Even if you don't need integrity protection, and you don't have any need of privacy, and you don't care about authenticating your peers you still want encryption because otherwise middleboxes ossify everything.
If the middlebox can't see inside your flow because it's encrypted it can't object to whatever new thing it's scared of this time whether that's HD video or a new HTML tag.
Not a significant issue in practice as far as I can tell. I deliver text over the internet, and sometimes binaries over the internet, and it happens very fast because there is no useless cruft in the process to satisfy some security twonk's paranoid delusions.
Maybe not everyone hosts their website on a platform where you can easily install these things.
For example, I have a simple web app hosted on Heroku's free plan, and I have to use Cloudflare SSL to get it served over HTTPS on my custom domain. But it's actually only half encrypted, as the connection between Cloudflare and Heroku is plain HTTP.
To add to your point, a lot of insurers only provide cyber insurance with a certificate from a specific range of CAs, and LetsEncrypt is not one of them. Frustratingly, Symantec is allowed.
> i cannot stand is people who can do it, but refuse to out of laziness
(Raises guilty hand)
I run a couple of sites on my hosted server that are still http. They both sit behind a varnish setup and to be honest I just have not found the time to get it done. Usually when I mess with my configurations I lose a week to troubleshooting stupid stuff and I just can't bring myself to do it.
Assuming you are talking about software developers, you can't expect people to do extra work out of virtue. They will do it only if there is an economic incentive. Setting up transport layer security is not in a software developer's interest or competence.
This is about managers and executives who call the shots on implementing these features. It is not your responsibility as a software dev working for a big company to implement something they do not pay you for.
I mean, if you don't value your users' privacy, of course I'm not going to think you're a very swell person.
Again this really only applies to people in a comfortable position to do this and choose not to. The average developer is not my target here, it's the big guys.
I don't do it on my own site. I'm capable of doing it, and certainly did it for my job. But my own site... It's free with HTTP, but they charge for every level that includes HTTPS. I'm its major user (so far).
I currently use a mini CDN (content delivery network) of three different OpenVZ servers in the cloud to host my content, so getting things to work with Let's Encrypt took about two or three days of writing Bash and Ansible scripts: get the challenge response from Let's Encrypt, upload it to all my cloud nodes, have Let's Encrypt verify it got a good response, upload the new cert to all of the cloud nodes, then use Ansible to log in to all the nodes, put the new cert where the web server can see it, and restart the web server.
Point being, the amount of effort needed to get things to work with Let’s Encrypt varies, and can be non-trivial.
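The scripts themselves aren't shown, but the challenge-distribution half of that workflow can be sketched as a certbot --manual-auth-hook for HTTP-01: certbot exposes the token and validation string via environment variables, and the hook copies the challenge file to every node. The node names, SSH user, and webroot path below are assumptions, not the commenter's actual setup.

```python
#!/usr/bin/env python3
"""Sketch of a certbot --manual-auth-hook (HTTP-01) that pushes the challenge
file to every CDN node before Let's Encrypt validates the domain."""
import os
import subprocess
import tempfile

NODES = ["cdn1.example.net", "cdn2.example.net", "cdn3.example.net"]  # assumed
WEBROOT = "/var/www/html"                                             # assumed

token = os.environ["CERTBOT_TOKEN"]            # filename under acme-challenge/
validation = os.environ["CERTBOT_VALIDATION"]  # contents Let's Encrypt expects

with tempfile.NamedTemporaryFile("w", delete=False) as tmp:
    tmp.write(validation)

for node in NODES:
    # Make sure the challenge directory exists, then copy the file into place.
    subprocess.run(["ssh", f"deploy@{node}", "mkdir", "-p",
                    f"{WEBROOT}/.well-known/acme-challenge"], check=True)
    subprocess.run(["scp", tmp.name,
                    f"deploy@{node}:{WEBROOT}/.well-known/acme-challenge/{token}"],
                   check=True)
```

The second half (copying the issued fullchain.pem and privkey.pem to each node and reloading the web server) follows the same pattern, for example from a --deploy-hook or an Ansible playbook as the commenter describes.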
I started using lets-encrypt before it supported Nginx (using standalone mode). I recently tried the Nginx-based mode, and it wrecked my reverse proxy config pretty thoroughly.
Still, the stand-alone mode is pretty dang easy. I've also considered the /.well-known mode but there was some tiny snag.
It encrypts traffic to prevent others from reading, modifying, or replacing requested responses? What is your security? I don't see a reason that your site wouldn't be vulnerable to a MITM attack.
Last time I described it here on HN there was confusion.
It's just "single serving server salt" (try saying that fast 3 times) sent to "client for secret hashing" and then "sent back to server again", so it's insecure on registration (just like all security with MITM without common pre-shared secret) but after that it's pretty rock solid, even quantum safe. Requires two request/responses per auth. though.
This tech is nothing new and has been used by many big actors since forever. It's simpler than public/private-key encryption because it only requires hashing math to work.
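As best I can reconstruct the scheme from that description: the server issues a fresh, single-use salt, the client hashes its secret together with the salt, and the server compares the result against the same computation over its stored copy of the secret. A toy sketch under those assumptions follows; note it only authenticates the client after registration, and it does not encrypt the rest of the traffic or authenticate the server.

```python
#!/usr/bin/env python3
"""Toy sketch of a salted challenge-response login: one round trip to fetch the
single-serving salt, one to send back the salted hash. Wire format is assumed."""
import hashlib
import hmac
import secrets

# --- server side --------------------------------------------------------
STORED_SECRET = b"correct horse battery staple"   # shared at registration time


def issue_salt() -> bytes:
    """Single-serving salt, generated per authentication attempt."""
    return secrets.token_bytes(32)


def verify(salt: bytes, client_digest: bytes) -> bool:
    expected = hashlib.sha256(salt + STORED_SECRET).digest()
    return hmac.compare_digest(expected, client_digest)   # constant-time compare


# --- client side --------------------------------------------------------
def respond(salt: bytes, secret: bytes) -> bytes:
    return hashlib.sha256(salt + secret).digest()


# --- the two request/responses ------------------------------------------
salt = issue_salt()                     # response to request #1
digest = respond(salt, STORED_SECRET)   # client hashes with its own copy
assert verify(salt, digest)             # request #2 is accepted
```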
It should be my choice to use whatever encryption I want without having google scare away my customers with "Not Secure".
But "common pre-shared secret" (well, public key allowing verification that there is a trusted secret being used) is at the root of https security today (a preset list of root certificates distributed with OSes/browsers).
If someone impersonates your web site to a first-time visitor (or a previous visitor on a new device), there is no way for them to really trust your web site. Basically, it's the equivalent of you using self-signed certs, and likely even worse, because there are more attack vectors even outside the initial connection.
Can you explain how root certificates make anything secure? Why can't you just hack the root cert store on the local computer, for example?
There must be a million attack vectors to that system too, with a lot of attackers working on them since the payout is good when everyone uses the same system?
Even if it makes sense, since all governmental offices and some corporations have their own, doesn't that make you skeptical of that kind of centralized security?
I'd rather take my risks with something I can understand, modify and improve; than using what everyone else uses.
And again: it should be MY choice! Not Google's. Now I have to compile my own browser, which takes like 24 hours on a modern home PC!!!
If you are on an untrusted device (e.g. someone else could have hacked the root cert store in the OS/browser), all bets are off: they could have also just hacked the browser to drop any and all warnings and always display a green padlock icon.
If you are talking about someone else hacking your machine, well, then it's pretty much the same: they can get most stuff by adding keyloggers, screen recorders and just scraping your disk for useful data.
If you are on a trusted device, you can "hack" the root cert store all you want to add root certificates you trust. As long as you trust them, no trust has been lost.
Root certificates are not really "centralized": they are issued by different CAs, and different browsers trust different root CAs too, and it was even more prominent in the past where you had some certs "work" in only some browsers. Still, there are multiple recognised attack vectors there as well (each individual CA, their certificate issuing servers which have access to the root or intermediate signing cert, browsers and OSes and their trusted-CA components...), and the big difference is that the attack vectors are known and heavily monitored.
PGP/GPG keyrings were basically the same approach without the root certificates, and the (in)famous signing parties did not bring a trust level that is ultimately needed on the internet today. I would love to see a development in that direction (one could say it was an early consensus-building approach on who to trust), but we are not there yet.
It certainly is your choice to how you want to protect yourself and your web site visitors, and it's your web site visitors' choice whether they want to trust you with their data (for instance, I personally would recommend you to set up a self-signed cert and add that root cert to your keyring for services that you plan to only access yourself through untrusted networks).
Except that most people won't understand where the risks are in either approach, and that's half the battle.
I host a single site on a host (so: no login, subject name or path information to leak), which only contains details of how to connect to my IRC server at the same address.
If the message is altered then the most pain anyone will have is connecting somewhere else for the first time. (They won’t be automatically logging in if they’re using this page).
Why does everything need to be TLS? It feels like a cargo cult. A requirement: “because!”
In other scenarios it’s worth modelling threats and I agree that it’s good to err on the side of caution but aside from the modification of my connection information there’s no good tangible reason to incur an overhead in administration.
Although it should be noted: part of the reason that web server even exists is to do Let's Encrypt for a globally geo-balanced IRC network.
> Why does everything need to be TLS? It feels like a cargo cult. A requirement: “because!”
Traditionally, people have only encrypted things that are deemed sensitive (logins, money, health). However, when the majority of traffic is non-encrypted, actually ciphered data is very noticeable to anyone monitoring the network, and it screams "look at me! I am important!".
However, when >90% of the traffic on the Internet is encrypted, there is no 'extra' information to be gained from that fact. It further forces any surveillance program to expend extra resources either trying to decrypt everything, or choosing to focus only on those people it actually deems important, instead of wholesale surveillance of the entire population.
Further, encrypting content prevents it from being modified, reducing the potential for your traffic to be leveraged against others:
> The Great Cannon of China is an Internet attack tool that is used to launch distributed denial-of-service attacks on websites by performing a man-in-the-middle attack on large amounts of web traffic and injecting code which causes the end-user's web browsers to flood traffic to targeted websites.[1]
"herd immunity" is a good argument; but herd immunity exists for outliers. The people who for some reason cannot get a vaccine, yet they are not exposed to the hypothetical disease because everyone they are surrounded by is immune.
That's kinda my argument, not that https is bad. I agree with widespread adoption and taking it as a default even for a static page.
But in my environment I have many dozens of nodes, and I don't know where Let's Encrypt fits in because of geo-balanced DNS. I also serve many domains with this project, so I don't have the nice DNS-01 ACME verification features, because not all DNS providers have an API.
So I have a web server on each node, which reverse proxies .well-known/ to some central server that runs certbot. Then I distribute those certs outwards to those nodes.
It goes against certain sysadmin principles about transportation of private key materials, but it's what works.
But, given that architecture, which caters to a latency-sensitive product, Let's Encrypt is a serious overhead, to the point where I'm considering going back to 2-year paid certs.
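For illustration, the edge-node side of that arrangement is usually just a one-line reverse-proxy rule forwarding /.well-known/acme-challenge/ to the central certbot host; the toy Python equivalent below exists only to make the flow explicit, and the central hostname is an assumption.

```python
#!/usr/bin/env python3
"""Toy stand-in for the edge node's reverse-proxy rule: forward ACME HTTP-01
challenge requests to the central host that runs certbot."""
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

CENTRAL = "http://acme-central.internal"   # hypothetical central certbot host


class AcmeForwarder(BaseHTTPRequestHandler):
    def do_GET(self):
        if not self.path.startswith("/.well-known/acme-challenge/"):
            self.send_error(404)
            return
        try:
            with urlopen(CENTRAL + self.path) as upstream:
                body = upstream.read()
        except OSError:
            self.send_error(502)
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # Port 80 needs elevated privileges; this is only a sketch.
    HTTPServer(("", 80), AcmeForwarder).serve_forever()
```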
My work's DNS provider does not have a handy API, so if we want a cert for the internal-only foo.example.com, we point _acme-challenge.foo to _acme-challenge.foo.dnstest.example.com. And the NS server for the dnstest.example.com lives in our DMZ and is only there to answer ACME queries from Let's Encrypt. We set up some scripting to allow updates to the NS server via nsupdate.
And there are ACME clients written specifically around the idea of having the client run on a different system than the web server:
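As for the CNAME-delegation trick itself, it maps naturally onto a certbot --manual-auth-hook: certbot hands the hook the domain and the validation string in environment variables, and the hook pipes an update to the DMZ nameserver via nsupdate. The server name, zone, suffix handling and TSIG key path below are assumptions based on the example names in the comment above.

```python
#!/usr/bin/env python3
"""Sketch of a certbot --manual-auth-hook (DNS-01) for the CNAME-delegation
trick: add the TXT record in the dnstest.example.com zone hosted in the DMZ."""
import os
import subprocess

domain = os.environ["CERTBOT_DOMAIN"]          # e.g. foo.example.com
validation = os.environ["CERTBOT_VALIDATION"]  # TXT value Let's Encrypt expects

# _acme-challenge.foo.example.com is a CNAME pointing at the record we update.
host = domain.removesuffix(".example.com")
record = f"_acme-challenge.{host}.dnstest.example.com"

commands = f"""server ns.dnstest.example.com
zone dnstest.example.com
update add {record} 60 TXT "{validation}"
send
"""
# -k points at a TSIG key so only authorised hosts may update the DMZ zone.
subprocess.run(["nsupdate", "-k", "/etc/acme/tsig.key"],
               input=commands, text=True, check=True)
```

A matching --manual-cleanup-hook would delete the record again after validation.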
> If the message is altered then the most pain anyone will have is connecting somewhere else for the first time
If the page is altered so it loads 3rd party tracking code, then the pain is to be tracked.
If the page is altered so it opens a "Please enter your ebay login" phishing site in the background, a user might switch tabs, think "Oh, I logged out of ebay somehow" and enter their password into the attackers site. Exposing them to the pain of ecommerce fraud.
If the page is altered to use a 0-day exploit, the pain is to have a zombie machine afterwards.
If you hijack the DNS request and respond with the IP of a different server, that server will not have a valid certificate for the domain in question. Why are any extra features required?
As long as the worlds greatest surveillance system continues to be given deliberate access to the plaintext, I will continue not caring about HTTPS for websites that don't have users logging into an account or submitting forms.
I clicked through to the list of sites. Embarrassing to see that mit.edu is not https by default! The same institution invented Kerberos. Come on MIT, fix this please.
Without HSTS preload anyone on your local network can arp/dns spoof your traffic, MITM you, and automatically inject malicious javascript (cryptominers, credential-stealers, etc.), access all of the page content, and manipulate the page or response.
If you are connecting to a "Free Public WiFi" and the malicious actor is the one broadcasting the access-point; it's even easier to MITM you.
Without cert and key pinning, your employee laptop can be MITM'd by corporate to eavesdrop on all of your HTTPS traffic. The browser will show that the connection is secure, but it isn't. When you pin the cert and key, even with a compromised corporate computer, the insecure-site warning will show and you'll be alerted to the fuckery.
> Doing things this way is the final nail in the coffin for Internet Explorer 6
- Fucking great! Nothing else to say here.
> handshakes take enormous amounts of CPU
- This is vastly overstated (enormous?). Also, this is called a tradeoff. Security isn't free in time, money, or performance.
> Preloads list is an absolute kludge that does not and will never scale... and works only for specific browser
- The preload list, right now, is 10.6 MB and contains 90,862 entries. This seems to function and scale just fine. Seeding your browser with known values is really the best way to do this until 99.X% of web traffic is provided over HTTPS... Also Chrome, Firefox, Safari, IE/Edge, and Opera make up 98% of all browser traffic today, and they have all supported this standard for years.
> The biggest problem with forcing everything HTTPS is a false sense of security.
- Defense in depth. Layering security controls is the only way to go. Also; this is some crazy mental gymnastics to take the position "wearing a seatbelt is a false sense of security because you can still crash".
> Because it's hard and a pain.
- Feeling that pain is offset onto the attackers trying to compromise your site. If you don't feel the pain; they don't either.
> Secure websites can make the web less accessible for those who rely on metered satellite internet... TLS 1.3 with 1-RTT should improve this situation.
- Even if your entire business depended upon delivering data to metered satellite internet users, the risk of not encrypting your traffic outweighs the cost. WARNING: DON'T IMPLEMENT 0-RTT OR 1-RTT WITHOUT UNDERSTANDING YOUR APPLICATION-SPECIFIC REQUIREMENTS. You can really fuck this up by not properly managing tokens between your webserver and application layer. Not recommended. (A sketch of one way to guard early-data requests follows after this list of quotes and replies.)
> I don't get it. With Lets Encrypt, it's like one or two lines to get everything set up.
- True, but it gets confusing really fast if you don't 100% match the certbot use-case.
> HTTPS is not an obligation.
- For 99% of people running businesses; it is.
> Recently an OpenShift cluster I admin went down because of long-lived certs not being rotated in time.
- If you have had certbot running for a long time I would suggest you check your server logs TODAY and make sure your cron job is still working correctly. Recently there was a change with the certbot acme version requirement and your reissue might be failing. Seriously, take a quick look right now.
> Because frankly, I neither trust letsencrypt nor the certificate authority system in general... but won't help against industrial (e)spionage
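Picking up the 0-RTT warning above: the usual guard is to have the TLS terminator mark requests that arrived as early data (RFC 8470 defines an "Early-Data: 1" header for this; nginx can set it from its $ssl_early_data variable) and have the application refuse non-idempotent early requests with 425 Too Early, so the client replays them after the handshake. A minimal WSGI sketch, assuming that header is being forwarded:

```python
"""Sketch: reject non-idempotent requests that arrived as TLS 1.3 early data,
relying on the TLS terminator to forward the RFC 8470 "Early-Data: 1" header."""

SAFE_METHODS = {"GET", "HEAD", "OPTIONS"}


def reject_early_data(app):
    """Wrap any WSGI application."""
    def middleware(environ, start_response):
        early = environ.get("HTTP_EARLY_DATA") == "1"
        if early and environ.get("REQUEST_METHOD") not in SAFE_METHODS:
            # 425 Too Early: the client should retry once the handshake is done.
            start_response("425 Too Early", [("Content-Type", "text/plain")])
            return [b"retry after handshake\n"]
        return app(environ, start_response)
    return middleware
```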
HTTPS is not an obligation. Most people believe it's a must these days, but it's not. There is a nice rebuttal of Troy's arguments on N-gate (via webcache as direct links from HN end up in an endless pseudo-captcha):
Though I'm on the "encrypt all the things!" camp, let me play devil's advocate for a moment.
If I set up a purely static HTTP-only site in 1998, it would still work with today's browsers, more than 20 years later.
If I set up a purely static HTTPS-only site in 1998, and didn't follow the upgrade treadmill, it would have stopped working for modern browsers some time ago.
Even with a static HTTP-only website, there's tons of stuff that you have to update anyway. Hardware gets outdated and needs replacing, at which point you cannot postpone the kernel update anymore because you need the new device drivers, etc. etc. You also don't want to stop updating your HTTP server, CVEs get discovered quite frequently. You can of course draw a line between that churn and the churn of updating your TLS config every few years, but it's more arbitrary than you think.
> at which point you cannot postpone the kernel update anymore because you need the new device drivers, etc. etc
Irrelevant as I didn't upgrade the hardware
>You also don't want to stop updating your HTTP server, CVEs get discovered quite frequently
it's a server for serving a single static page from 1998; nothing bad will happen if that machine is compromised, well, nothing worse than what could be done by not having HTTPS
> it's a server for serving a single static page from 1998; nothing bad will happen if that machine is compromised, well, nothing worse than what could be done by not having HTTPS
Here's one: the server has a remote code execution vulnerability, which is exploited to gain root permissions, and your server is serving child porn. The cops are knocking on your door.
Granted, this isn't a pro-HTTPS argument, but you do need to keep your stuff updated, even if it is only a static site.
It would still work; it would just show a warning. For a page that hasn't been updated since 1998 that's OK imo. On the other hand, it needs to be hosted somewhere: either a VPS (which also needs updating) or a web hosting package (which tends to provide auto-renewing certificates). Just because the code is static doesn't mean nothing about the website has changed for 20 years.
You assume it's not a box in my basement or my company's that has been running for 20+ years. I wouldn't be surprised to hear things like this still exist.
Of course, migrating to even a raspberry Pi would be a net performance and perf/watt improvement.
My static website is a sand castle on the beach. When I'm not around, kids may break it, or a random person may impersonate its creator. That is alright; it is just a sand castle. The only purpose of its existence is to provide casual onlookers a nice view (or read) for a few minutes.
Having to set up a "certificate" for that would be an unacceptable burden.
It is a sand castle on a private beach owned by you.
Have you ever posted a link to your site anywhere? Imagine you sent me a post card saying "Come to my beach to look at my cool sandcastle" and then when I got there the sandcastle was actually a robot that stole my credit card.
You could say that it wasn't your fault - somebody broke into your private beach and replaced the sandcastle.
But I would probably still blame you for not securing the area and double-checking the contents before inviting people. Even if I didn't blame you, I probably wouldn't respond to another invitation.
I like this. Postcards are a better analogy for plain HTTP than the sand castle.
Of course postcards are not "secure": everybody can read them, and you can trivially impersonate somebody else sending a postcard. Any serious snail-mail communication must go via safer channels. Yet postcards are a very nice thing to have; it would be a shame if they weren't possible. My kids can easily send a postcard to their grandparents, all by themselves. My grandfathers will probably (but not surely) receive the postcard, and they will recognize who wrote it (though they can never be sure, really), and everybody will be happy.
In the same vein, HTTPS is better than plain HTTP for serious communication, and there's nothing wrong with it. Yet, the existence of HTTP is another fundamental part of the internet, and I make a point of using it as much as possible.
Do you propose that anonymous postcards shouldn't exist, and that the post office should only accept letters certified by adults who had identified themselves at the local police station? I would hate that! And for the same reasons I hate a world without http.
Our asshat twin n-gate has something to say about this:
> Horseshit. Users must keep themselves safe. Software can't ever do that for you. Users are on their own to ensure they use a quality web client, on a computer they're reasonably sure is well-maintained, over an internet connection that is not run by people who hate them. None of the packets I send out are unsafe, so my site does not need HTTPS.
> None of those things are my problem. If people don't want to see my site with random trash inserted into it, they can choose not to access it through broken and/or compromised networks. If other website operators are concerned about this sort of thing, they are free to use HTTPS, but I have no reason to do so. Encryption should be available to anyone who wants to serve encrypted content, but I have no interest in using it for my website. It's a shame that people are using web browsers (note: not my website, but BROWSERS) as attack vectors. The legions of browser programmers employed by Mozilla, Google, Apple, and Microsoft should do something about that. It's not my flaw to fix, because it's a problem with the clients. My site does not need HTTPS.
> Earlier you recommended letsencrypt, and now suddenly you want me to pick a competent certificate authority? The only reason they didn't leak my info already is because my site does not need HTTPS.
> Obviously my site does not display ads; as has [been pointed out](https://news.ycombinator.com/item?id=14666391), it does not even appear to be monetized. This is because I have a real job and the entire web ad industry can fuck itself off a cliff. So, while mixed-content warnings are pretty obnoxious, my site does not need HTTPS.
Can't read the article because the captcha won't load, but this reply doesn't make any sense. What can the browsers do without the cooperation of the server? You don't really need encryption to deal with that specific problem, but you do need signatures, which means you need a certificate anyway. It's quite a strange attitude toward the problem.
The website is actually quite useful: I notice that the intersection between the threads discussed there and the ones I comment on is almost exactly the empty set. So it's a great check to see whether I'm doing a good job ;)
Because frankly, I neither trust letsencrypt nor the certificate authority system in general. This might prevent eavesdropping in your coffee shop wifi, but won't help against industrial espionage powered by three-letter agencies who probably control some of these authorities.
So because you think it's not a good enough defense against three-letter agencies, you'll let everyone else continue to eavesdrop too? Controlling a CA isn't even enough to strip confidentiality: there also needs to be an active attack (private keys!), ideally one that's publicly detectable (Certificate Transparency, CAA), on top of that, which entities who can pull it off definitely won't want to waste.