Ugh, that's a horrible vulnerability. We found something similar in nginx a few years ago, and the result is that you can repeatedly open up client connections and dump server memory as it changes, revealing keys and, without any real effort, authentication info and cookies.
> DTLS is designed to secure traffic running on top of unreliable transport protocols. Usually such protocols have no session management. The only mechanism available at the DTLS layer to figure out if a peer is still alive is performing a costly renegotiation. If the application uses unidirectional traffic there is no other way. TLS is based on reliable protocols, but there is not necessarily a feature available to keep the connection alive without continuous data transfer.

> The Heartbeat Extension as described in this document overcomes these limitations. The user can use the new HeartbeatRequest message, which has to be answered by the peer with a HeartbeatResponse immediately.
"Don't roll your own parsers" should really be up there with "Don't roll your own crypto". This advisory is scant on details, but this extension protocol[0] neither looks complex nor beyond mechanical code generation to me. Just simple enough to be dangerous. And it's pretty new, so this must be recently authored vulnerable code.
Even though the code got better with this fix, I still wouldn't accept code that looks like this in a review. Why are 1, 2, 3, 16 not defines? What's up with the code duplication between files? Where are the unit tests?
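The patched bounds check is "if (1 + 2 + payload + 16 > s->s3->rrec.length)". A sketch of how the same check might read with the magic numbers named (the constant names are invented here, but the quantities are straight out of RFC 6520):

    #define HB_TYPE_LEN            1   /* one-byte message type */
    #define HB_PAYLOAD_LENGTH_LEN  2   /* two-byte payload length field */
    #define HB_HEADER_LEN  (HB_TYPE_LEN + HB_PAYLOAD_LENGTH_LEN)  /* = 3 */
    #define HB_MIN_PADDING         16  /* RFC 6520 minimum padding */

    /* Silently discard the message if the claimed payload length
     * cannot fit inside the record that actually arrived. */
    if (HB_HEADER_LEN + payload + HB_MIN_PADDING > s->s3->rrec.length)
        return 0;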
I'm starting to feel that a lot of software that has been around for 10+ years and is commonly used does not live up to current best practices regarding writing good system-level software.
> I'm starting to feel that a lot of software that has been around for 10+ years and is commonly used does not live up to current best practices regarding writing good system-level software.
I get the impression that this applies to openssl far more than other software. The code base is a mess, and it's security sensitive. So people dare not touch it.
It's a shame that there isn't a better incentive for this particular code base to be fixed.
Yep, and no braces after the 'if' statement in the patch, even after the previous SSL vuln (I thought it was GnuTLS; struggling to find the relevant HN discussion) was caused by an omission of braces after an 'if' statement.
You are thinking of "goto fail", a bug in Apple's Security framework. I would not claim that it was "caused" by a lack of braces: even with the braces, that bug, along with a wide class of similar bugs, is still quite possible; in a few models of how the bug arose, braces merely make it slightly less likely. The best place to lay blame for that kind of error is a stubborn insistence that error handling should involve boilerplate return value checks strewn throughout the code, with no attempt at abstraction or structure: it leads to numerous potential mistakes. Please read the various discussions attached to this article that made this claim:
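For reference, the bug (condensed from the published Security framework source) was a duplicated "goto fail;":

    if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
        goto fail;  /* duplicated line: always taken, with err == 0 */
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
        goto fail;
    ...
    fail:
        ...
        return err;  /* err is still 0, so the unverified signature is accepted */

Note that even with braces, the same duplicated line pasted after a closing brace would have had the same effect; the structural fix is centralizing the error handling, not the brace style.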
I felt much better about having a fix before I looked at the actual code. It is functionally secure (looks right to me and thousands of others by now), but the way it is written would guarantee you an instant fail in an exam or interview.
I've been thinking along these lines for a long time now, that parsing is such a critical activity that we should treat it with far more reverence than we do. Ideally, we would define languages to describe the format of the data that we want to parse (something like a BNF perhaps), and the OS/environment would parse it and populate variables/provide a dictionary in response. Ensuring that the input to your algorithm is exactly as expected is such a critical task that no one should ever be doing it manually.
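One way to approximate that today is a table-driven parser: the format is plain data, and every bounds check lives in exactly one generic routine. A rough sketch in C (all names here are invented):

    #include <stddef.h>
    #include <stdint.h>

    typedef enum { F_U8, F_U16BE, F_BYTES } field_kind;

    typedef struct {
        field_kind kind;
        size_t len_from;   /* F_BYTES only: index of the field holding the length */
    } field_desc;

    typedef struct {
        uint32_t value;        /* F_U8 / F_U16BE */
        const uint8_t *bytes;  /* F_BYTES */
        size_t len;
    } field_val;

    /* Returns 0 on success, -1 if the input doesn't match the description. */
    static int parse(const field_desc *fmt, size_t nfields,
                     const uint8_t *in, size_t in_len, field_val *out)
    {
        size_t off = 0;
        for (size_t i = 0; i < nfields; i++) {
            switch (fmt[i].kind) {
            case F_U8:
                if (in_len - off < 1) return -1;
                out[i].value = in[off];
                off += 1;
                break;
            case F_U16BE:
                if (in_len - off < 2) return -1;
                out[i].value = ((uint32_t)in[off] << 8) | in[off + 1];
                off += 2;
                break;
            case F_BYTES: {
                size_t want = out[fmt[i].len_from].value;
                if (in_len - off < want) return -1;  /* the check OpenSSL forgot */
                out[i].bytes = in + off;
                out[i].len = want;
                off += want;
                break;
            }
            }
        }
        return 0;
    }

    /* The heartbeat message, declared rather than hand-parsed: */
    static const field_desc heartbeat_fmt[] = {
        { F_U8, 0 },     /* type */
        { F_U16BE, 0 },  /* payload_length */
        { F_BYTES, 1 },  /* payload: length taken from field 1 */
    };

Describe the format once, and it becomes impossible to "forget" a length check in any individual parser.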
> "Don't roll your own parsers" should really be up there with "Don't roll your own crypto".
.. and if you do, don't do it in a highly memory-unsafe language. Especially when it's for a security-critical piece of central internet infrastructure!
Looks interesting, but static code generation along the lines of Ragel (but more oriented toward binary structures like Protocol Buffers) would be a lot better for performance.
If so, that must be an awful lot of web servers, with a horrendous cost for everyone to buy new certificates etc. if there's no reliable way to determine what, if anything, was compromised.
Would any of our resident security experts like to suggest best practices under such circumstances?
(Edit: It looks like the page I linked above has been updated and a patch is going into Wheezy security as I write this.)
(Edit 2: Confirmed that Wheezy security updates now include openssl 1.0.1e-2+deb7u5 and related libssl changes.)
All reasonable certificate authorities will — at no cost — revoke your existing certificate and issue you a new certificate with the same expiration date as your old certificate. You'd just need to send the CA a new certificate signing request created from a newly-generated RSA key pair.
If your CA wants you to buy a new certificate to recover from a key compromise, your CA is taking you for a ride, and you should find a less horrible CA to throw your money at.
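For anyone who hasn't done this before: generating the fresh key pair and CSR is two openssl commands (the 2048-bit key size and the filenames here are just example choices):

    openssl genrsa -out example.com.key 2048
    openssl req -new -key example.com.key -out example.com.csr

Send the .csr to your CA, keep the .key private, and remember to revoke the old certificate once the new one is installed.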
Totally agreed on the over-complexity and un-securability of TLS, which is too often deployed where something simpler should be used instead.
However, wouldn't OpenSSH be the thing spiped replaces most of the time? And that has a better security track record (I mean, better than OpenSSL for sure).
Basically, for internal infrastructure, where autossh won't work and/or where something simpler than ssh is desired.
So the strawman arguments about it not replacing TLS miss the point.
stud, nginx, stunnel, F5 load balancers and CloudFlare will still be needed for now, until 'moxie0 or someone comes up with a viable CA alternative AND something way, way simpler than TLS (ASN.1 makes my brain hurt, even with Wireshark).
Well, since you mention it, why did you write spiped? It seems like if you just wanted to protect network services from the internet, you could have A) segmented your network, B) used ssh, or C) used one of the myriad other existing non-TLS tunneling protocols. Doing A might expose you to less risk than B or C, since with tunnels, if your client is owned, your server is still vulnerable. Of course, if you just wanted to code something for fun, I totally understand that too. But it seems like there were already alternatives to stunnel (and I don't really get why people use stunnel to begin with).
Segmenting my network isn't an option when "my network" involves machines on multiple continents.
I avoided ssh because sshd is an effectively unauditable mess, and breaks the "transient network glitches don't kill quiescent connections" assumption.
How do transient network glitches kill the connection? I'm not completely familiar with the ssh wire protocol, but to my knowledge TCP is largely responsible for ensuring the reliability of the virtual circuit even in the event of a transient lower-layer failure.
ssh frequently uses either application-level or TCP-level keepalives. But it doesn't have to; you can just turn off ssh keepalives and your quiescent connections will survive network outages.
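If I'm reading the ssh_config man page right, that's these two knobs (the Host name here is arbitrary, and ServerAliveInterval 0 is already the default):

    Host long-lived-tunnel
        TCPKeepAlive no        # no TCP-level keepalives
        ServerAliveInterval 0  # no application-level keepalives (the default)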
Keep in mind that you have to run this with OpenSSL v1.0.1 and above. Running it on a stock OS X Mavericks install will not detect the extension because v0.9.8 of OpenSSL is installed.
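Easy to check which you have (the version string below is what I'd expect from a stock Mavericks box, so treat it as illustrative):

    $ openssl version
    OpenSSL 0.9.8y 5 Feb 2013

Anything reporting 1.0.1 through 1.0.1f is affected; 0.9.8 lacks the heartbeat extension entirely.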
At least in my Bash (4.2.25(1)), there seems to be a difference between "2>&1|" and "2>&1 |" – the latter works as expected, whereas the former doesn’t give any output.
You are protected now. But you were not before, so if any attacker figured this out before the public disclosure then you have [possibly] already been attacked and compromised.
This is perhaps somewhat misleading. It's possible that this bug was being actively exploited before now, so you should change your keys even if you use a CDN (all the majors have already fixed this as far as I'm aware).
>>There is no total of 64 kilobytes limitation to the attack, that limit applies only to a single heartbeat. Attacker can either keep reconnecting or during an active TLS connection keep requesting arbitrary number of 64 kilobyte chunks of memory content until enough secrets are revealed.
An Ubuntu update would be nice right about now. Outside of disabling everything that uses openssl or compiling a new one manually, there's not much I can do to secure my servers at this moment. Meanwhile, I'm guessing a lot of not so nice people are racing to scan IP ranges for this bug.
By either compiling & installing it with -DOPENSSL_NO_HEARTBEATS, or by waiting for the security fix to be backported by the Ubuntu devs. http://heartbleed.com/
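The recompile route looks roughly like this (you'll want to match your distro's configure flags and install prefix, which I'm glossing over here):

    ./config -DOPENSSL_NO_HEARTBEATS
    make
    sudo make install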
The binary package name is "libssl1.0.0". You want "sudo apt-get update && sudo apt-get install libssl1.0.0", but I suggest that you take all security and regular updates (or set sources.list to security only updates if you insist). Then you can just run "sudo apt-get update && sudo apt-get dist-upgrade" to pick up all updates, without worrying about package names.
If you want to verify if a particular vulnerability is fixed, look in /usr/share/doc/<package>/changelog.Debian.gz. In this case, you want /usr/share/doc/libssl1.0.0/changelog.Debian.gz. In this file, you'll see CVE-2014-0160 mentioned as fixed, which is the universal identifier of this vulnerability.
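A quick one-liner version of that check:

    zgrep CVE-2014-0160 /usr/share/doc/libssl1.0.0/changelog.Debian.gz

If it prints the changelog entry, the fixed package is installed (don't forget to restart any services still holding the old library in memory).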
Note that until it gets patched, you can go to your ELB config and disable TLS support, which I believe (someone please correct me if not) will protect you from this particular attack. Whether the cure is better than the disease is up to you.
If one were using ASLR, would that have mostly mitigated this? (I just rebuilt without the heartbeat extension, but I'm curious.) Also, how exploitable is this?
I've been running the exploit against our test app (through AWS ELB), and have managed to get a fair bit of data out. Got snippets from HTTP requests on other threads including session cookies and even login passwords.
I wonder how many service providers with big OpenSSL deployments (cloudflare, google, facebook, etc.) will do the sane thing and roll their authenticity keys. I'm guessing zero.
(Assuming they are deployed in such a way that their long-term authenticity keys are in the memory space of the network service, and not kept on another system or HSM.)