OpenSSL 3 seems to be largely compatible (it's a much easier upgrade than previous big ones) but the part of this that breaks clients is odd.
If your peer closes the connection without sending close_notify, you get an unexpected EOF. 1.1.1 ignored this; 3 passes the error to the client, which needs to set SSL_OP_IGNORE_UNEXPECTED_EOF to recover the old behaviour - but this is only safe as long as the client checks for truncation attacks itself. This peer behaviour seems to be pretty common for Google servers, so if you use Google logins or APIs it'll trip you up (from doing these upgrades at work, almost all of the errors I saw were talking to Google).
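For what it's worth, Python 3.10+ built against OpenSSL 3 exposes the flag as ssl.OP_IGNORE_UNEXPECTED_EOF. A minimal sketch of opting back into the old behaviour (the getattr guard is there because the constant doesn't exist on builds against 1.1.1):

```python
import ssl

ctx = ssl.create_default_context()

# ssl.OP_IGNORE_UNEXPECTED_EOF only exists when Python (3.10+) was built
# against OpenSSL 3; fall back to a no-op bit on older builds.
opt = getattr(ssl, "OP_IGNORE_UNEXPECTED_EOF", 0)
if opt:
    # Restore the 1.1.1 tolerance of peers that skip close_notify.
    # Only safe if you verify response completeness some other way
    # (e.g. Content-Length), otherwise you reopen truncation attacks.
    ctx.options |= opt
```

The C equivalent is SSL_CTX_set_options(ctx, SSL_OP_IGNORE_UNEXPECTED_EOF).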
What's odd to me is most of the patches I saw for this were just setting that flag, and not doing anything about truncation attacks? I'm sure this will bite us all somewhere down the line.
Anyway, some notes for admins of old systems who got to this too late:
It was noticeable that this patch got backported into PPAs that maintain old/deprecated versions of some software, e.g. the ondrej PHP PPA has it backported to EOL'd versions of PHP. But this is not the case for the Python deadsnakes PPA - I didn't look at Python 3.8, but 3.9 was fixed at source, and the fix hasn't been backported to the (EOL'd) 3.7 anywhere. You should be off 3.7 anyway, but if you're not, it's now urgent.
Another notable backport is OpenResty. If you're using nginx with Lua and OAuth, it's a popular choice, but it lags the latest nginx by a long way. The latest version is 1.21.4.2, which means it's based on nginx 1.21; but nginx is on 1.25, and nginx only shipped OpenSSL 3 support in 1.22. If you proxy to https servers, this is a concern. However, digging in, I found that OpenSSL 3 support was backported to OpenResty a few months ago, so the _current_ version does work without falling over.
> What's odd to me is most of the patches I saw for this were just setting that flag, and not doing anything about truncation attacks? I'm sure this will bite us all somewhere down the line.
It sounds like those are workarounds to get the apps to work again by reintroducing the old behavior. Once they are unblocked, they can refocus on addressing the issue.
It's either that, or remain broken and unusable until further notice.
correct, and specifically with backports there is no other reasonable way to do this. Since the mitigation needs to happen outside of the openssl integration, you need to expose the flag to clients and allow them to make the choice to ignore the error or not, until they handle this. But in a backport, you can't expose new apis like this to clients, because that code isn't getting updated.
So was the truncation attack already there in 1.1.1?
I think it's OK for a patch that changes the dependency to 3 to just retain existing behavior. Addressing the problematic behavior should be a distinct patch.
Yes, the fix for the truncation attack was originally rolled into 1.1.1e (March 2020) and then withdrawn in 1.1.1f because the new errors were too disruptive; IIRC the fix was already in the 3.x prereleases by that stage. So effectively if you stuck on 1.1.1, you've been living with this risk for 3 years.
The truncation attack is one where the client thinks their SSL session has ended but a malicious actor prevents the session from ending correctly; the threat is something like: after doing this, your Gmail can be messed with even though you think you've logged out. From the descriptions it seems like you'd need to layer it with another attack or two to do anything useful with it. It's not something that keeps me awake at night.
When I dug into the Python code we had hitting this a few months back (Python 3.7 on recent Ubuntu), it appeared that the error would not only trip up a client expecting 1.1.1 behaviour, but could also mean the last buffer read from the socket was never flushed. So you can't just catch the error in Python and assume it's OK because the other end had closed the connection anyway - you may be missing some of the response. I wasn't certain, but that possibility was a big bucket of NOPE for me; far easier to tell people to get the upgrade done and stop using software that had been built against 1.1.1.
I don't know about other Linux distributions (or operating systems), but a while ago NixOS marked Sublime Text as insecure and wouldn't let me install it without setting the option to allow insecure packages (since it depends on OpenSSL 1.1.1u). I don't know how dangerous it would be for me, a regular user, to run software with an out-of-date TLS library, but I just switched to Emacs. So sad, because ST is an excellent editor, with a great set of features and plugins while still being extremely fast (which is my problem with VSCode).
I briefly looked into it (from the same NixOS issues), there's a GitHub issue somewhere about it [1]
Effectively, as I recall it, a large number of Sublime Text plugins internally rely on an old version of Python (3.3), which in turn relies on OpenSSL 1.1.1. There is concern that forcing the Python version to something newer would largely break the plugin ecosystem, as a lot of the plugins expect Python 3.3 and may not be compatible with 3.8 or later versions that ship with a supported OpenSSL.
It's probably not a major security risk unless the plugins are making network connections, but it is an unfortunate situation.
The thing that frustrates me about Sublime’s position is this (from that link):
> With most packages being no longer maintained there's little chance to get that file into existing repos.
So you can’t break backwards compatibility because “most packages” aren’t maintained. But that means those packages are no longer receiving bug fixes.
As a new Sublime Text user (within the last year), the package situation is definitely frustrating. There seems to be a lot of pride within the Sublime staff and core volunteer group, some of which is deserved, Sublime is an amazing editor. But at some point they’re going to have to admit that the current set of packages isn’t perfect and deprecate a bunch of them.
This unfortunately already completely broke Gentoo for the moment :-(
Maintainers masked openssl-1.1.1, but there are currently a ton of packages with a (sometimes needless) hardcoded requirement on old OpenSSL, including things like Rust etc. This will be painful.
If anyone is getting the same results as me, where you could not run emerge world --deep because random already-installed packages pulled openssl-1.1.1 back in, you can use the following command:
emerge -DavuUN @world @system --changed-deps
This should recalculate all the dependencies from scratch, not using the (possibly) cached openssl-1.1.1, allowing you to run the above emerge without problems :-)
I use Gentoo on a few machines, and I'm confused by what GP says: I have no problems using OpenSSL 3, and the 1.1 releases have been masked (not installable by default) for a while now.
The following mask changes are necessary to proceed:
(see "package.unmask" in the portage(5) man page for more details)
# required by net-libs/nodejs-20.5.1::gentoo
# required by www-client/firefox-102.15.0::gentoo
# required by @selected
# required by @world (argument)
=dev-libs/openssl-1.1.1v
And many, many more packages. I went through like 20 yesterday and created custom ebuilds with the references to slot :0= removed, but there are plenty more.
Something seems unusual about your system. Are you using an unusual profile or heavily customized make.conf? net-libs/nodejs-20.5.1 is satisfied with dev-libs/openssl-3.0.10 out of the box here.
I have a fully updated world with a KDE desktop, and don't have OpenSSL 1.1 installed at all.
eselect profile list says I have selected the default one: default/linux/amd64/17.1 (stable) *
I would not say it is something unusual...
But the funny thing is that if I emerge that one specific package with -1, it does not pull in this dependency. It seems like something is broken inside portage.
I experienced a similar sounding issue, but was able to decipher the blocked emerge output from portage to find that app-crypt/tpm2-tss-engine was blocking the whole system from getting onto openssl-3. Once I dropped tpm2-tss-engine, things went forward swimmingly. No other unmasking/masking of anything was needed.
It seems like portage did remember that those packages were built against that openssl version - when running as "emerge -DavuUN @world @system --changed-deps" the problem went away :-)
Probably some caching issue/race condition in portage...
Gentoo is very much alive - from time to time I try some other distro but I keep going back. I do some development and I need headers and generally latest versions of some libraries and in Gentoo this is by default.
In Debian there is no way to do that - and the last time I tried to switch to unstable, my Debian committed suicide :)
Interesting - you can purchase support for 1.1.1x for between 15,000 and 50,000 USD per year. I can see some large companies in panic mode now, so the option exists for them to continue with older versions.
This explains why NetBSD 10.0 people are holding the release until they get the new version working in Base and pkgsrc.
> Interesting, you can purchase support for 1.1.1x between 15,000 -- 50,000 USD per year.
Nitpick, from the announcement, only the Premium contract ($50K per year) provides for extended 1.1.1 support:
Another option is to purchase a premium support contract which offers extended support (i.e. ongoing access to security fixes) for 1.1.1 beyond its public EOL date. There is no defined end date for this extended support and we intend to continue to provide it for as long as it remains commercially viable for us to do so (i.e. for the foreseeable future).
Can you get support for 15k? My reading of that page is that extended support is only part of the top 50k-a-year package. Though in reality Red Hat and others will keep OpenSSL 1.1.1 patched for quite a while yet.
I imagine there is a clause in the support contract that prevents disclosure of the source code? I am wondering if we will need a community-supported fork. I have said for months that this transition will be a mess; so many LTS distributions and packages are built against 1.1.1.
This is going to be a big deal. So many applications and appliances that have received bare-minimum updates won't get a new OpenSSL major version. Expect new attacks on your shitty IoT devices in the coming year.
The majority of those applications and devices are already stuck on 0.9.x anyway, so it's not like it makes that much of a difference. It doesn't seem like that big of a deal to me.
The majority of "shitty IOT devices" only make outgoing TLS connections to relatively trusted external servers (i.e. the company's server), and it's hard enough to MITM connections at scale that you're probably fine. Just don't let a malicious stranger on your wifi.
Adding to that, it's also pretty hard to develop real exploits for the supposed RCE CVEs that you see. Like, most of them are "there's a buffer overrun, this is probably RCE", but most of the time it's actually "no, for all real builds of openssl, this is a crash and that is it".
You don’t need to do complicated MITM attacks to abuse these issues if you control the DNS servers, like if you’re the ISP or one of the parties that provide fashionable DNS servers like Google or Cloudflare.
Most openssl-CVEs suffer from massive severity inflation.
”If you can fool a client running on AIX to connect to a server with deprecated hashing algorithm from 1986, you can make the client take an extra 40 milliseconds to process the packets”
Shitty IoT devices are too small to use OpenSSL anyway. The big ones get it updated along with the rest of their stack.
We started upgrading our stack about half a year ago, and 3.x should be ready for the next release.
The biggest change was removing outdated RC4 - it was a good thing to get rid of that cruft.
Plenty of IoT devices out there run on full-blown Raspberry Pis or equivalent boards. Paying someone to write embedded firmware is expensive; throwing together a UI in Python is cheap.
Not that this poses such an issue for those devices; I doubt they ever received firmware updates in the first place. Still, you'll find OpenSSL in the weirdest places.
The worst IoT devices are based on old, EoL Android versions, hacked on without any understanding by each party in the chain: each adds a little and resells it as the next higher-margin product, like nesting matryoshka dolls, while planning to dissolve the respective business before anyone sues them for the abomination they cobbled together.
Isn't that almost always the case with FOSS projects? For instance, throw enough money at Red Hat and I bet they'd support anything you'd like for as long as you'd want.
No, not at all! It can only be the case if the maintaining entity is the one that sells support. That's actually the definition of commercial software!
It may be the case for company-backed FOSS projects (that use FOSS as a marketing channel and have a business model around it). You might be used to that because of Red Hat, Elastic, Mongo, ...
But for a very large number of FOSS projects, that's not the case. Either there is no paid support, or it is done by a third party. And then this third party doesn't call the shots on what's deprecated and what's backported where.
I was under the impression that OpenSSL is a community project, and now it appears that that's not the case.
I think GnuTLS is probably the second most popular TLS library, after openssl.
Though actually, maybe it's Firefox's NSS, now that I think about it a second more. Firefox is at least some fraction of the C-library-based SSL traffic out there.
> LibreSSL is the drop-in replacement for openssl that everybody should have embraced back when heartbleed happened.
100% and 1000x this. LibreSSL does have some minor incompatibilities, but those were mostly about removing obscure and very rarely useful APIs that were badly designed in the first place. It was quite likely that if your program used those APIs, it had security issues that OpenSSL couldn't solve with an update.
(Usability is security. If it's hard to use correctly, then it's easy to create a security hole.)
Remember folks, LibreSSL comes from the same people who made OpenSSH, and that "other" OS that had 2 remote holes in almost 30 years. You already trust them, you maybe just don't know it.
> but a year or 2 ago the went back to OpenSSL for some reason which I did not fully understand.
That would be Void Linux [1]. One of the reasons iirc, is PEP 644 [2], in which CPython drops support for LibreSSL due to it not being fully compatible with OpenSSL 1.1.1 APIs.
That was either Void Linux (see sibling comment) or Alpine Linux.
You might also have read Gentoo Linux announcement (https://www.gentoo.org/support/news-items/2021-01-05-libress... ); technically doesn't fit your description since Gentoo never "switched" or "went back", but rather supported them simultaneously, and just pulled support for LibreSSL at some point.
OpenSSL is very much part of the problem set here with a long history of dubious API choices, a performance sapping rewrite in 3.0, and many things retained for far too long like VMS support.
The problem is that killing it requires a lot of buy-in and dealing with FIPS 140-2 and no one was willing to do that last time.
The problem is that if you want Red Hat or Canonical to support integrating your new library across all the applications that need it, it has to support FIPS. If it doesn't people who need FIPS and have the money to support maintenance will avoid integrating it.