Decrypting TLS Browser Traffic with Wireshark (jimshaver.net)
132 points by WestCoastJustin on Feb 12, 2015 | 37 comments



That article seems to be a rewrite of this 2012 article: http://www.root9.net/2012/11/ssl-decryption-with-wireshark-p...

An SSLKEYLOGFILE environment variable makes Firefox and Google Chrome silently log their crypto keys as plaintext. What could possibly go wrong?
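
For anyone who hasn't seen it in action, the whole mechanism is roughly this (the log path is just an example):

    # launch the browser with key logging enabled
    SSLKEYLOGFILE=/tmp/tls-keys.log firefox &

    # then point Wireshark at that file:
    #   Edit -> Preferences -> Protocols -> SSL -> "(Pre)-Master-Secret log filename"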

This 2014 discussion (https://groups.google.com/forum/#!topic/mozilla.dev.tech.cry...) says that originally the documentation for this feature said that it was only turned on in debug builds of Firefox. Then, somehow, it was quietly turned on for all builds, with no notification.

Take a look at the code in Firefox for this:

https://hg.mozilla.org/projects/nss/file/65605e800fd1/lib/ss...

Firefox understands several suspicious environment variables: SSLBYPASS, SSLKEYLOGFILE, NSS_SSL_REQUIRE_SAFE_NEGOTIATION, NSS_SSL_ENABLE_RENEGOTIATION, and NSS_SSL_CBC_RANDOM_IV, all of which have security implications. Some were put in for some Thunderbird problem involving Microsoft mail servers, but the common library may enable them in Firefox as well. That whole section of code should be present only in debug builds, if at all.

It's funny how backdoors like this keep creeping into mainstream software, isn't it? Look at the code histories and see who put in those changes.


This does seem, at the least, irresponsible of the browser vendors. I'll be especially angry if Chrome doesn't have very prominent warnings, considering they harass some developers and power users with a warning about an "insecure" configuration Google basically forced them to use.[0] By analogy, I expect nothing less than a blinking red warning covering the entire browser window whenever keys are being logged because of an environment variable.

I also don't think readers should be encouraged so readily to put this into the persistent global environment.

[0] https://code.google.com/p/chromium/issues/detail?id=337734


Chrome uses NSS, so the GP applies exactly to it: https://code.google.com/p/chromium/codesearch#chromium/src/n...


Some of those environment-variable "features" were workarounds for problems in the past. Many of them could probably be removed outright. All of them should be removed from non-debug builds.


Why are you calling this a backdoor?


It could be named "a rather unfortunate architectural decision that can make browsers write keys in plaintext without users explicitly enabling it", but that would be a little verbose. A malicious program can add an SSLKEYLOGFILE environment variable (as long as it isn't set systemwide, that doesn't require root privileges).
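
To make that concrete, roughly all it would take from something running as the user - no root - is one appended line; ~/.profile is just one of several candidate locations, and whether a desktop-launched browser picks it up varies by platform and login manager:

    # append a per-user export; takes effect for later login shells
    echo 'export SSLKEYLOGFILE="$HOME/.cache/.keys"' >> ~/.profile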

I guess we can haggle over whether or not this meets the "definition" of a backdoor. "Can give unrestricted access to encryption keys" is a backdoor in my book, but I'm not a fan of taxonomy, the retarded child of sciences :).

I can understand why this... um, this feature, would be useful in developer builds. Why it's present in anything that is shipped to users is beyond my ability to comprehend. It should be behind an ENABLE_DANGEROUS_PLAINTEXT_LOGGING switch that is off by default. What's the thought process by which someone decides giving this to users is a good idea? Is there something I'm missing here?


If the attacker can add an environment variable, they can add a command-line switch or about:config value or modify my firefox binary or install an addon that saves my password fields.


If I carelessly execute ~/Downloads/some-binary, my environment variables and config settings are now suspect, while overwriting my firefox binary requires root access.

As such, having the browser loudly warn (irrespective of settings) about such unsafe defaults is still vastly better.


> As such, having the browser loudly warn (irrespective of settings) about such unsafe defaults is still vastly better.

Yes. Loudly. As in "surrounds entire window with a red frame with INSECURE TEST MODE ENABLED", not as in "pops up notification that quietly fades out".


But poisoning your Firefox binary does not require root access. Just preload what you want.
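
i.e. something along these lines, where evil.so stands in for a hypothetical shared object that hooks whatever library calls the attacker cares about:

    # no root needed: the dynamic linker loads the user-writable library first
    LD_PRELOAD="$HOME/.local/lib/evil.so" firefox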


While technically true, that did not prevent Shellshock from being a big deal.


Shellshock was a big deal because the attacker only needed to control the value of an environment variable, not its name. There are many vectors to provide a string that will end up stored in some environment variable, far fewer that will allow you to specify a particular name/value pair.
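
The canonical test string makes that concrete: the variable's name (x below) is completely arbitrary, the exploit lives entirely in the value, which is why anything that copied attacker-supplied strings into the environment (CGI headers and the like) was a viable vector:

    # prints "vulnerable" on an unpatched bash; the name 'x' doesn't matter at all
    env x='() { :;}; echo vulnerable' bash -c "echo this is a test"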


Small super-pedantic note: the correct file 'so that it is set every time you log in' on both OS X and Linux would be .bash_profile.

- .bashrc is run every time an interactive (non-login) shell is started (eg, subshells, su without -l or -)

- .bash_profile is run only when a login shell is started, eg, a tty, su -l, su -, a new tab, etc. (see the sketch below)
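
So, assuming bash, the persistent version of the article's setup would be something like this (path and filename are the usual defaults, adjust for your setup):

    # in ~/.bash_profile -- sourced by login shells only
    export SSLKEYLOGFILE="$HOME/tls-keys.log"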


Very interesting combo!

Another technique that often, sadly, works is the old school man-in-the-middle.

I was recently looking at the API of an app on my phone, using ettercap to ARP-poison the phone and capture its traffic on my laptop. The API call turned out to be over HTTPS (forward secrecy and all, looking good), so I tried an SSL MITM with a fake cert... no errors in the app, clear text on my laptop. Happy and sad at the same time. :/
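
For anyone who wants to try the same against their own devices, the ARP poisoning half is roughly this (the IPs are placeholders, the target syntax differs a bit between ettercap versions, and I'm glossing over the fake-cert setup):

    # ARP-poison the phone <-> gateway pair from the laptop
    ettercap -T -q -M arp:remote /192.168.1.1// /192.168.1.50//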


Wait a second...

Was the fake cert trusted by your phone? If so, did you manually add the root cert from your fake CA, or at least accept that cert as trusted?

If you didn't explicitly trust that cert and the application didn't complain, we're looking at yet another completely broken use of SSL.


No, not trusted. That's why I was sad. The app failed to verify the certificate chain and silently continued over the (in)secure connection.


What an amazing and scary tool (maybe those always go together). I now have a much deeper appreciation for why folks are afraid of Starbucks WiFi.


Any decent browser will display a certificate error for a MITM attack. But yes, if you are using other programs/apps over an untrusted network, you can never be sure.


It's not actually about the WiFi, secure or not. I was on a WPA2 WiFi network. I attacked my phone at a higher network layer, so unless there's an active IDS in place, there isn't much you can do about it.

The issue is not the network, but the way the app continued over an insecure connection.


I can't help but wonder how many people are going to apply this handy trick to people's unlocked Windows sessions...


I'm looking forward with interest to seeing how long it takes for a piece of malware to start reconfiguring exploited machines' browsers and exfiltrating session keys.

(Or, to the news that the NSA/GCHQ/ASIO/et al. have been actively doing this for years already...)


Have we seen malware ripping keys out of memory? It seems a stretch to think that making this slightly easier to do will result in it being more widespread. What reason does malware have to do this that isn't better served by DNS Hijacking + installing a root cert?


This gets an attacker session keys for TLS sessions with forward secrecy - which'd be kinda handy if you were a (the?) "global passive adversary" who's already syphoning off _all_ the traffic in and out of the major cables and datacenters.


This might get around pesky certificate pinning ;-)


I have seen malware that repeatedly adds a compromised root certificate to a machine's trusted authorities.


This is handy, but you can very easily do this automatically with Fiddler. It automatically swaps out all of the certs though.


Do you mean using Fiddler (great program, definitely a musthave) in place of Wireshark?

In that case you can add its root certificate to the trusted authorities in Firefox, and then it swaps nothing; everything seems to be signed properly... unless I misunderstood your comment.


No, Fiddler is an active man-in-the-middle attack. With HTTPS interception it always substitutes the certificate (which contains the public key).

The substituted public key allows the proxy to negotiate a TLS session between the browser and the proxy, impersonating the real server.

You avoid a certificate error if you install the signing CA certificate in the browser, but you are still tampering with the traffic. There are scenarios where MITM doesn't work, for example client-authenticated TLS. Things like certificate pinning, where the browser expects a specific public key, also break interception.


I meant that Fiddler switches the original certificates with certificates that it generates. It's not a big deal if you trust them (on Windows, Chrome and IE work automatically since Fiddler adds them to the trusted root store); for Firefox you just have to trust the Fiddler-issued certificate. However, if you inspect the certificate of an HTTPS site while Fiddler is running, you see the CA is "DO_NOT_TRUST_FiddlerRoot".


I believe mitmproxy also sniffs HTTPS, but I think it uses a different method, dynamically generating a cert based on the true one: http://mitmproxy.org/doc/howmitmproxy.html (bottom)
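
For reference, trying this is roughly just the following (the port is arbitrary; you then point the phone's WiFi proxy setting at the laptop and, for a fair test, deliberately don't install mitmproxy's CA on the device):

    # mitmproxy listens as an HTTP(S) proxy and forges a leaf cert per host on the fly,
    # signed by its own locally generated CA
    mitmproxy -p 8080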


Spaces. Use them. I read "musthave" as "mutshave" :)


Come to think of it, I'm never sure how to write those two-part words, being a non-native speaker. I just go with what seems right :)


Sure, but if you want to see the whole stack (below HTTP/S) you can't do that with Fiddler.


I find HttpWatch with Firefox and IE much better than Fiddler for HTTP debugging purposes. However, it is not free.


I just dealt with this a couple of months ago, and had to explain to my boss why I reverted from best-practice SSL settings in the course of trying to solve a tricky problem. Another issue I ran into was that the packaged version of Wireshark in Ubuntu had bugs that also prevented me from decrypting traffic (it didn't tell me this; it just didn't work, and I had to track down the problem myself). I had to compile the latest from their website to finally get everything working.


If you're going to debug your own services (where you own the certificate's private key), you can also do this easily with tcpdump, then importing the encrypted dump into Wireshark and decrypting it there: http://support.citrix.com/article/CTX116557

I had to verify how two services (not web browsers) were communicating the other day and this was the easiest solution for me.
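
Roughly what that looks like in practice (interface, paths, and the server IP are just examples; as the reply below points out, this only works for plain-RSA key exchange, not forward-secret suites):

    # on the server: capture full packets on port 443
    tcpdump -i eth0 -s 0 -w /tmp/service.pcap port 443

    # then in Wireshark's SSL protocol preferences, add an "RSA keys list" entry:
    #   <server-ip>,443,http,/path/to/server-private.key
    # and open /tmp/service.pcap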


Take a look at the first paragraph. With forward secrecy, having the private key is not enough; you need the session keys.



