OpenSSL Security Advisory: TLS heartbeat read overrun (openssl.org)
297 points by moonboots on April 7, 2014 | hide | past | favorite | 85 comments



Ugh, that's a horrible vulnerability. We found something similar in nginx a few years ago, and the result is that you can repeatedly open up client connections and dump server memory as it changes, revealing keys and, without any real effort, authentication info and cookies.


not sure what you have to maintain, but it sure sucks having to scramble and fix this right away.

our (quick) fixes are almost all done:

- recompile openssl where necessary (web, chat, mail, windows binaries) without heartbeat support

- roll related certs and keys ASAP

and then comes the painful process of suggesting all web service users roll their certs and auth.

oh, and rotate personal passwords at other sites that issue a warning about openssl...


I had to google what "heartbeat extension" does:

   DTLS is designed to secure traffic running on top of unreliable
   transport protocols.  Usually such protocols have no session
   management.  The only mechanism available at the DTLS layer to figure
   out if a peer is still alive is performing a costly renegotiation.
   If the application uses unidirectional traffic there is no other way.

   TLS is based on reliable protocols but there is not necessarily a
   feature available to keep the connection alive without continuous
   data transfer.

   The Heartbeat Extension as described in this document overcomes these
   limitations.  The user can use the new HeartbeatRequest message which
   has to be answered by the peer with a HeartbeatResponse immediately.

https://tools.ietf.org/html/draft-ietf-tls-dtls-heartbeat-01

Edit: here is the commit patching the bug https://github.com/openssl/openssl/commit/7e840163c06c7692b7...


"Don't roll your own parsers" should really be up there with "Don't roll your own crypto". This advisory is scant on details, but this extension protocol[0] neither looks complex nor beyond mechanical code generation to me. Just simple enough to be dangerous. And it's pretty new, so this must be recently authored vulnerable code.

[0] http://tools.ietf.org/html/draft-ietf-tls-dtls-heartbeat-04



Ouch, pretty basic lack of bounds checking.

Even though the code got better with this fix I still wouldn't accept code that looks like this in a review. Why are 1, 2, 3, 16 not defines? What's up with the code duplication between files? Where are the unit-tests?

I'm starting to feel that a lot of software that has been around for 10+ years and is commonly used does not live up to current best practices regarding writing good system-level software.


> I'm starting to feel that a lot of software that has been around for 10+ years and is commonly used does not live up to current best practices regarding writing good system-level software.

I get the impression that this applies to openssl far more than other software. The code base is a mess, and it's security sensitive. So people dare not touch it.

It's a shame that there isn't a better incentive for this particular code base to be fixed.


    if (1 + 2 + 16 > s->s3->rrec.length)
    if (1 + 2 + payload + 16 > s->s3->rrec.length)
Come on. At least use a macro or something.

This always makes me cringe during code reviews.
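For illustration, here is a sketch of what the patched check could look like with the magic numbers named. The constant names and the helper function are hypothetical (the actual patch in OpenSSL uses the bare literals against `s->s3->rrec.length`); this just shows the two bounds checks the fix adds, factored out for readability.

```c
#include <stddef.h>

/* Hypothetical names for the literals in the patch. */
#define HB_TYPE_LEN     1   /* one-byte message type */
#define HB_LENGTH_LEN   2   /* two-byte payload length field */
#define HB_PADDING_LEN 16   /* minimum random padding */

/* Sketch of the bounds check the patch adds: reject a heartbeat
 * record whose claimed payload does not fit inside the record. */
static int heartbeat_record_ok(size_t record_len, size_t payload)
{
    /* Record must at least hold type + length + minimum padding. */
    if (HB_TYPE_LEN + HB_LENGTH_LEN + HB_PADDING_LEN > record_len)
        return 0;
    /* The attacker-controlled payload length must also fit; this
     * is the check whose absence caused Heartbleed. */
    if (HB_TYPE_LEN + HB_LENGTH_LEN + payload + HB_PADDING_LEN > record_len)
        return 0;
    return 1;
}
```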


Yep, and no braces after the 'if' statement in the patch. Even after the previous SSL vuln (I thought it was GnuTLS; struggling to find the relevant HN discussion) was caused by an omission of braces after the 'if' statement.


You are thinking of "goto fail", a bug in Apple's Security framework. I would not claim that was "caused" by a lack of braces: even having the braces, that bug--in addition to a wide class of similar bugs--is still quite possible, even if in a few models of how the bug was caused it becomes slightly less likely. The best place to lay blame for that kind of error is a stubborn insistence that error handling should involve boilerplate return value checks strewn throughout the code, with no attempt at abstraction or structure: it leads to numerous potential mistakes. Please read the various discussions attached to this article that made this claim:

https://news.ycombinator.com/item?id=7318039


Yes, that is exactly what I was thinking of. Thanks for the link. The discussion I was recalling is here

https://news.ycombinator.com/item?id=7282005

but the thread is a lot heftier now.


I felt much better about having a fix before I looked at the actual code. It is functionally secure (looks right to me and thousands of others by now), but the way it is written would guarantee you an instant fail in an exam or interview.


2011-12-31, it looks like, so it's been there for a couple of years already (introduced between 1.0.0f and 1.0.1):

http://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=...


Wow, a code change introduced around New Year's Eve (depending on timezone), and authored by Robin Seggelmann, who also brought us DTLS-SCTP.


I've been thinking along these lines for a long time now, that parsing is such a critical activity that we should treat it with far more reverence than we do. Ideally, we would define languages to describe the format of the data that we want to parse (something like a BNF perhaps), and the OS/environment would parse it and populate variables/provide a dictionary in response. Ensuring that the input to your algorithm is exactly as expected is such a critical task that no one should ever be doing it manually.
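A minimal sketch of the idea, under my own assumptions: rather than a full grammar-driven generator, even a small bounds-checked "cursor" abstraction would have prevented this bug, and it is the kind of code a generator would emit. All names here (`cursor`, `read_u8`, `parse_heartbeat`) are hypothetical, not OpenSSL's API.

```c
#include <stddef.h>
#include <stdint.h>

/* A cursor over an input buffer; every read checks remaining length. */
typedef struct {
    const uint8_t *p;
    size_t len;
} cursor;

static int read_u8(cursor *c, uint8_t *out) {
    if (c->len < 1) return 0;
    *out = c->p[0];
    c->p += 1; c->len -= 1;
    return 1;
}

static int read_u16be(cursor *c, uint16_t *out) {
    if (c->len < 2) return 0;
    *out = (uint16_t)((c->p[0] << 8) | c->p[1]);
    c->p += 2; c->len -= 2;
    return 1;
}

static int read_bytes(cursor *c, const uint8_t **out, size_t n) {
    if (c->len < n) return 0;
    *out = c->p;
    c->p += n; c->len -= n;
    return 1;
}

/* Parse a heartbeat-shaped message: type(1) | length(2) | payload | padding(>=16).
 * The read_bytes check is exactly what the vulnerable code skipped: it trusted
 * the attacker-supplied length field instead of the actual record length. */
static int parse_heartbeat(const uint8_t *buf, size_t buflen,
                           uint8_t *type, const uint8_t **payload, uint16_t *plen)
{
    cursor c = { buf, buflen };
    if (!read_u8(&c, type)) return 0;
    if (!read_u16be(&c, plen)) return 0;
    if (!read_bytes(&c, payload, *plen)) return 0;
    return c.len >= 16;   /* require the minimum padding */
}
```

With this shape, a lying length field simply makes `read_bytes` fail instead of walking off the end of the record.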


> "Don't roll your own parsers" should really be up there with "Don't roll your own crypto".

... and if you do, don't do it in a highly memory-unsafe language. Especially when it's for a security-critical piece of central internet infrastructure!


How do the Ruby-YAML and Python-Pickle vulnerabilities get cataloged?


Looks like a good use case for Hammer: https://github.com/UpstandingHackers/hammer


Looks interesting, but static code generation along the lines of Ragel (but more oriented toward binary structures like Protocol Buffers) would be a lot better for performance.


"Heartbleed Bug" Q&A: http://heartbleed.com/


Can we please vote this link higher? It's got a ton of information in it.


It's already at the top of the homepage, as a separate submission.


It is now. It wasn't at the time of vacri's post. Top of the homepage is best.


Ouch. Does this mean almost every Debian 7 web server out there is probably vulnerable to having its private data for supporting HTTPS compromised?

https://security-tracker.debian.org/tracker/CVE-2014-0160

If so, that must be an awful lot of web servers, with a horrendous cost for everyone to buy new certificates etc. if there's no reliable way to determine what if anything was compromised.

Would any of our resident security experts like to suggest best practices under such circumstances?

(Edit: It looks like the page I linked above has been updated and a patch is going into Wheezy security as I write this.)

(Edit 2: Confirmed that Wheezy security updates now include openssl 1.0.1e-2+deb7u5 and related libssl changes.)


All reasonable certificate authorities will — at no cost — revoke your existing certificate and issue you a new certificate with the same expiration date as your old certificate. You'd just need to send the CA a new certificate signing request created from a newly-generated RSA key pair.

If your CA wants you to buy a new certificate to recover from a key compromise, your CA is taking you for a ride, and you should find a less horrible CA to throw your money at.


I think startssl requires $$$$ to revoke and/or reissue those "free" certs before they expire :-/


Is there another good CA that doesn't charge $$$ for both issuing and revocations?


I just got a revocation request accepted with no charge there.


In case anyone was wondering why I wrote spiped...


Totally agreed on the over-complexity and un-securability of TLS, which is too often deployed where something simpler should be used instead.

However, wouldn't OpenSSH be the thing spiped replaces most of the times? And that has a better security track record (I mean, better than OpenSSL for sure).


A lot of people are doing spiped-like things using stunnel.


Basically, for internal infrastructure, where autossh won't work and/or where something simpler than ssh is desired.

So the strawman arguments about it not replacing TLS miss the point.

stud, nginx, stunnel, f5 load balancers and cloudflare will still be needed for now, until 'moxie0 or someone comes up with a viable CA alternative AND something way, way simpler than TLS (brain-hurt ASN1, even with Wireshark).


Oh. Sigh.


> Totally agreed on the over-complexity and un-securability of TLS, that too often is deployed where something simpler should be used instead.

What would be something simpler and less error-prone that would give the same benefits in a client-server connection?

EDIT: Spiped is one, I got it (I'm on it right now and might even use it actually on a side-project), anything else that we should know about? :-)


Well, since you mention it, why did you write spiped? It seems like if you just wanted to protect network services from the internet you could have A) segmented your network, B) used ssh, C) used one of the myriad other existing non-TLS tunneling protocols. Doing A might expose you to less risk than B or C, since with tunnels if your client is owned your server is still vulnerable. Of course if you just wanted to code something for fun I totally understand that too. But it seems like there were already alternatives to stunnel (and I don't really get why people use stunnel to begin with)


Segmenting my network isn't an option when "my network" involves machines on multiple continents.

I avoided ssh because sshd is an effectively unauditable mess, and breaks the "transient network glitches don't kill quiescent connections" assumption.


How do transient network glitches kill the connection? I'm not completely familiar with the ssh wire protocol, but to my knowledge TCP is largely responsible for ensuring the reliability of the virtual circuit even in the event of a transient lower-layer failure.


ssh frequently uses either application-level or TCP-level keepalives. But it doesn't have to; you can just turn off ssh keepalives and your quiescent connections will survive network outages.


But isn't spiped mostly irrelevant here?

I mean, it's not a TLS replacement, as it's based on PSK (thus only useable between two mutually trusting peers like me and myself), not PKI.


spiped should be irrelevant here. But there are a lot of people using PKI where they could be using PSK.


Who, exactly? Distributing shared secrets securely is a non-trivial exercise.


You mean like wifi-passwords? :) Everybody seems to manage distributing those just fine?


But it's written in C. So it can't be good!

(Sorry, I'm just still pissed at HN's simple mindedness and try to get more downvotes: https://news.ycombinator.com/item?id=7549916)


Please don't post comments to HN that have no real content.



So they managed to notify Cloudflare in advance, but not debian/ubuntu security teams?


They may have had someone involved in the research / identification and fix.


Unfortunately, if you have been using an AWS ELB to terminate SSL traffic, they are vulnerable to this particular exploit. See this forum post: https://forums.aws.amazon.com/thread.jspa?threadID=149690


Check for the extension:

    $ echo -e "quit\n" | openssl s_client -connect google.com:443 -tlsextdebug 2>&1| grep 'TLS server extension "heartbeat" (id=15), len=1'
    TLS server extension "heartbeat" (id=15), len=1
This doesn't tell you that the server uses OpenSSL, or that it is vulnerable, simply that it supports the extension.


I wrote a bash script to check the top 1000 websites, and a huge percentage of them (30-40%) responded with the heartbeat extension:

  INPUT=websites.csv
  OLDIFS=$IFS
  IFS=,
  [ ! -f "$INPUT" ] && { echo "$INPUT file not found"; exit 99; }
  while read -r rank website
  do
    echo "checking $website for heartbeat..."
    echo -e "quit\n" | /usr/local/bin/openssl s_client -connect "$website:443" -tlsextdebug 2>&1 | grep 'TLS server extension "heartbeat" (id=15), len=1'
  done < "$INPUT"
  IFS=$OLDIFS
You can download a list of top 1 million websites from Alexa and Quantcast: http://www.seobook.com/download-alexa-top-1-000-000-websites...

Chinese websites timeout on port 443 so you'll have to skip them.


Keep in mind that you have to run this with OpenSSL v1.0.1 and above. Running it on a stock OS X Mavericks install will not detect the extension because v0.9.8 of OpenSSL is installed.


At least in my Bash (4.2.25(1)), there seems to be a difference between "2>&1|" and "2>&1 |" – the latter works as expected, whereas the former doesn’t give any output.

   $ echo -e "quit\n" | openssl s_client -connect chubig.net:993 -tlsextdebug 2>&1| grep 'TLS server extension "heartbeat" (id=15), len=1'
   $ echo -e "quit\n" | openssl s_client -connect chubig.net:993 -tlsextdebug 2>&1 | grep 'TLS server extension "heartbeat" (id=15), len=1'
   TLS server extension "heartbeat" (id=15), len=1
   $ 

Does anybody know why?


If your site is protected by CloudFlare (like HN is), you are automatically protected from this vulnerability (see: http://blog.cloudflare.com/staying-ahead-of-openssl-vulnerab...).


You are protected now. But you were not before, so if any attacker figured this out before the public disclosure then you have [possibly] already been attacked and compromised.


Not entirely correct; as the blog post states:

> We fixed this vulnerability last week before it was made public.

Although there's still the other 103 weeks this was vulnerable to worry about.


This is perhaps somewhat misleading. It's possible that this bug was being actively exploited before now, so you should change your keys even if you use a CDN (all the majors have already fixed this as far as I'm aware).


Perhaps CloudFlare should note that the "up to 64kB" isn't entirely correct.

http://heartbleed.com/

>>There is no total of 64 kilobytes limitation to the attack, that limit applies only to a single heartbeat. Attacker can either keep reconnecting or during an active TLS connection keep requesting arbitrary number of 64 kilobyte chunks of memory content until enough secrets are revealed.


An Ubuntu update would be nice right about now. Outside of disabling everything that uses openssl or compiling a new one manually, there's not much I can do to secure my servers at this moment. Meanwhile, I'm guessing a lot of not so nice people are racing to scan IP ranges for this bug.


Debian already updated: http://www.debian.org/security/2014/dsa-2896

Ubuntu should follow really soon, if not already.

Edit: Ubuntu updated: http://www.ubuntu.com/usn/usn-2165-1/


Now that this has been made public, they did get the CVE page up (I last checked that link around noon PST): http://people.canonical.com/~ubuntu-security/cve/2014/CVE-20...


FreeBSD updated (ran the update about 40 minutes ago).


Are Android or iOS affected? Android seems to ship openssl 1.0.

Could a malicious server attack clients? Perhaps expose a browser's cookie jar or other saved passwords in memory?

The number of installed openssl clients across all devices and computers must be quite large.


It seems that Android has dodged this bullet by compiling OpenSSL with NO_HEARTBEATS: https://twitter.com/agl__/status/453472368589942785


Yes, the vulnerable code is used by both client and server, so any client using an affected OpenSSL version is vulnerable too.


Which parts of Android really use OpenSSL to do TLS with this heartbeat feature enabled?

The browsers? All Apps running on Dalvik? All apps running on ART?


OpenSSL doesn't seem to be installed on my jailbroken iPhone.


How does one go about installing this update on Ubuntu? "sudo apt-get upgrade openssl" didn't do it.


The fix has now been released by Ubuntu, so you can upgrade via the normal methods (apt-get update && apt-get upgrade)


Make sure you check running daemons too... apt-get install debian-goodies; checkrestart


By either compiling & installing it with -DOPENSSL_NO_HEARTBEATS or waiting for the security fix to be backported by ubuntu devs. http://heartbleed.com/


See: http://www.ubuntu.com/usn/usn-2165-1/

The binary package name is "libssl1.0.0". You want "sudo apt-get update && sudo apt-get install libssl1.0.0", but I suggest that you take all security and regular updates (or set sources.list to security only updates if you insist). Then you can just run "sudo apt-get update && sudo apt-get dist-upgrade" to pick up all updates, without worrying about package names.

If you want to verify if a particular vulnerability is fixed, look in /usr/share/doc/<package>/changelog.Debian.gz. In this case, you want /usr/share/doc/libssl1.0.0/changelog.Debian.gz. In this file, you'll see CVE-2014-0160 mentioned as fixed, which is the universal identifier of this vulnerability.


Is this something I have to worry about as someone who uses AWS ELB SSL offloading? Hard to tell from the docs.



Note that until it gets patched, you can go to your ELB config and disable TLS support, which I believe (someone please correct me if not) will protect you from this particular attack. Whether the cure is better than the disease is up to you.


Disabling TLS in the ELB config seemed to work for me until AWS finishes the patch rollout. (Looks like they're partway on the rollout.)


If one were using ASLR would this have mostly mitigated this? (I just rebuilt without the heartbeat extension but I'm curious). Also how exploitable is this?


I don't think ASLR helps here one single bit. The overread is relative to the heartbeat record buffer on the heap, so the attacker never needs to guess an address.


I've been running the exploit against our test app (through AWS ELB), and have managed to get a fair bit of data out. Got snippets from HTTP requests on other threads including session cookies and even login passwords.


It should really be possible to easily detect which features your OpenSSL deployment actually uses and recompile with only those enabled.


Does anybody know if the bug can be triggered in OpenSSH? (I believe it uses the same lib.)


Most likely not, since SSH is a different protocol from TLS; OpenSSH uses OpenSSL's libcrypto for its ciphers, not the TLS code where the bug lives.


Go C!

Hope everyone had forward secrecy on by now.


I wonder how many service providers with big OpenSSL deployments (cloudflare, google, facebook, etc.) will do the sane thing and roll their authenticity keys. I'm guessing zero.

(Assuming they are deployed in such a way that their long-term authenticity keys are in the memory space of the network service, and not kept on another system or HSM.)


If you'd like to update the keg-only OpenSSL brew on OS X, and don't care for legacy and crap:

    ( export CONFIGURE_OPTS='no-hw no-rdrand \
        no-sctp no-md4 no-mdc2 no-rc4 no-fips no-engine'; \
      brew install https://gist.github.com/steakknife/8228264/raw/openssl.rb )
Beware that, by default on OS X/iOS, pretty much everything links to sketchy CommonCrypto or a crusty, quasi-deprecated 0.9.8.


Of course, anyone using 0.9.8 is fine.



