SLOTH – Security Losses from Obsolete and Truncated Transcript Hashes (mitls.org)
112 points by mukyu on Jan 6, 2016 | 20 comments



More interesting than the attack itself is their overall effort of combining formal verification with protocol implementation. Along the way, they found all these problems in the other protocols, whose implementations didn't use such rigorous methods. Quite an argument in favor of using high-assurance techniques for at least critical, slow-changing protocols like TLS.

Anyway, I found this paper...

http://www.ieee-security.org/TC/SP2015/papers-archived/6949a...

...that reminds me of older, high-assurance designs. The classic way to do it is the so-called abstract or interacting state machine models. Each component is a state machine where you know every success or failure state that can happen, plus you argue that security is maintained. Then you compose these in a semi-functional way to describe the overall system. Seems the miTLS people did something similar for theirs, which they call "composite state machines." The result was a clean implementation and verification of what got really messy in other protocol engines. Plus new techniques for handling that, of course.
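To make the idea concrete, here's a toy Python sketch of that style: each component is a state machine with an explicit transition table, so any out-of-order message is rejected instead of silently tolerated (the class of bug the state-machine papers kept finding in other TLS stacks). This is purely illustrative and assumes a simplified handshake; it's not miTLS's actual implementation, which is written in F#/F*.

```python
# Toy protocol state machine: every state lists exactly which
# messages it may accept next, so skipped or reordered handshake
# messages raise an error instead of being processed anyway.
# Illustrative sketch only -- simplified message names, not TLS's real flow.

class HandshakeStateMachine:
    TRANSITIONS = {
        "START":         {"ClientHello": "NEGOTIATING"},
        "NEGOTIATING":   {"ServerHello": "KEY_EXCHANGE"},
        "KEY_EXCHANGE":  {"ServerKeyExchange": "WAIT_FINISHED"},
        "WAIT_FINISHED": {"Finished": "CONNECTED"},
    }

    def __init__(self):
        self.state = "START"

    def receive(self, message):
        allowed = self.TRANSITIONS.get(self.state, {})
        if message not in allowed:
            raise ValueError(f"unexpected {message} in state {self.state}")
        self.state = allowed[message]
        return self.state

sm = HandshakeStateMachine()
for msg in ["ClientHello", "ServerHello", "ServerKeyExchange", "Finished"]:
    sm.receive(msg)
assert sm.state == "CONNECTED"

# A peer that tries to skip straight to Finished is rejected:
sm2 = HandshakeStateMachine()
sm2.receive("ClientHello")
try:
    sm2.receive("Finished")   # skipping key exchange
except ValueError as e:
    print("rejected:", e)
```

Composing several of these (record layer, handshake, alerts) and proving properties about each machine is roughly the shape of the "composite state machines" approach.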

Really good stuff. Worth extending and improving in new projects.


Formal verification of protocols like TLS is essentially the research thesis behind this group at INRIA, which was also responsible for Logjam (as well as the paper you linked to).


Oh yeah, they're kicking ass on every front of this sub-field. Then Leroy et al. are doing that for compilers and language analysis at INRIA as well, and Astrée is leading it for static analysis of C.

France, especially INRIA, seems to be in the lead on verified software with a real-world focus. It will be great when more groups elsewhere follow suit.


Galois just gave a talk on high-assurance crypto at RWC (they made Cryptol and other open-source tools to give formal proofs of security).


Oh yeah, Galois is another one of the greats in the field. They just keep cranking out one practical thing after another. At a higher pace than most it seems. Here are a few good things on their end.

Scrolling their blog yields endless insights: https://galois.com/blog/

CRYPTOL's open source page http://www.cryptol.net/

SMACCMPilot has things like Ivory language http://smaccmpilot.org/

GitHub site with 8 pages worth of their stuff https://github.com/galoisinc


They mention that tls-unique is used by FIDO. Does this include the U2F specification that is just starting to gain acceptance for two-factor authentication? If so, what does it mean for U2F going forward? Are there (potentially/in theory) issues with using it for 2FA?

(I'm not a crypto guy, obviously...)


The link's not working for me, nor is any reference to SLOTH given on http://www.mitls.org/wsgi/tls-attacks.



Try clearing your recent browsing history, cookies, and such. The link wasn't working for me either, but it started working after I had cleared everything.


It's not clear whether JRE6 and JRE7 are impacted (does the JSSE shipped with them support TLS 1.2?). If so, that's pretty worrying, as they're no longer supported yet still widely deployed.


I keep seeing this in a lot of places, but Oracle are still releasing JDK6 and JDK7 updates every quarter. AFAIK TLSv1.2 is not supported on 6.

Edit: ah, but only to customers paying for support.


Java7u79 is supposed to be the last public release. From the SLOTH page, it looks like there might be another release to address this.


OpenJDK7 (not sure about 6) is still getting updates. I think Red Hat is sponsoring that.


It's not clear to me that this work establishes that truncated hashes are dangerous, so much as that tls-unique is just not a very good protocol.


I wonder why you're speculating, given that they used the attacks against default configurations of Java apps, Firefox, some Chrome builds, BouncyCastle, and more. Amusingly, the clients often accepted the weaker crypto even when it was disabled. Fits my memes nicely. Then some of the flaws that led to that were fixed.

Not sure how applicable it is now, outside of being a checklist item for a situation that can repeat in a new client. Yet the paper said they used it on real stuff. Was there something specific you were thinking of that's not covered by that?


I don't understand your question. I'm not doubting the impact of the paper, just one of its conclusions.


That's all I was wondering.


I am very much a novice, but here goes...

I don't think the paper was trying to speak to truncation generally. Truncation is a weakening of the crypto by definition (fewer bits => less security): it is easier (cheaper) to find a collision if one uses a truncated hash. That doesn't make it dangerous or safe, just quantitatively reduced.
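To put a rough number on "cheaper": a birthday search finds a collision in an n-bit hash after about 2^(n/2) attempts. A self-contained Python sketch against SHA-256 truncated to 32 bits (a length I picked so the demo runs in under a second; SLOTH's real targets were bigger MD5/SHA-1 constructions):

```python
# Birthday-search demo: colliding a hash truncated to 4 bytes.
# Expect a collision after roughly 2**16 (~65k) attempts.
# Illustrative only; the 4-byte truncation is my choice for speed.
import hashlib
from itertools import count

def truncated_hash(data: bytes, nbytes: int = 4) -> bytes:
    """SHA-256 deliberately truncated to nbytes for the demo."""
    return hashlib.sha256(data).digest()[:nbytes]

def find_collision(nbytes: int = 4):
    """Hash distinct messages until two share a truncated digest."""
    seen = {}
    for i in count():
        msg = str(i).encode()
        h = truncated_hash(msg, nbytes)
        if h in seen:
            return seen[h], msg, i
        seen[h] = msg

m1, m2, tries = find_collision()
assert m1 != m2 and truncated_hash(m1) == truncated_hash(m2)
print(f"collision after {tries} tries: {m1!r} vs {m2!r}")
```

Scale the same math up: truncating to 96 bits (the 12-byte Finished value) puts a collision around 2^48 hashes, which the paper shows is within reach for a motivated attacker, whereas the full 256-bit output stays at an infeasible 2^128.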

If the authors are trying to establish anything, it seems to be, "Yes, collisions do matter!"


I read somewhere that it was being considered for deprecation in the TLS 1.3 spec.


How would you modify tls-unique to not be vulnerable to collision attacks?
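One candidate direction, in line with the authors' general recommendation: stop truncating. As specified (RFC 5929), tls-unique exposes the Finished value, which in TLS 1.2 is 12 bytes. A hedged sketch contrasting a 12-byte binding with keeping the full MAC output (the field names and derivation here are stand-ins I made up for illustration, not the RFCs' actual constructions):

```python
# Contrast a short, truncated channel binding with a full-length one.
# Illustrative only: 'transcript' and 'master_secret' are stand-in
# values, and this derivation is NOT the TLS PRF or any RFC's layout.
import hashlib
import hmac

transcript = b"ClientHello|ServerHello|...|Finished"  # stand-in bytes
master_secret = b"\x00" * 48                          # stand-in secret

mac = hmac.new(master_secret,
               hashlib.sha256(transcript).digest(),
               hashlib.sha256).digest()

# tls-unique-style: only 12 bytes survive, so the binding's collision
# resistance is ~2**48 work regardless of the underlying hash.
short_binding = mac[:12]

# Full-length binding: collision resistance tracks the hash itself.
full_binding = mac

print(len(short_binding), len(full_binding))  # 12 32
```

The other half of the fix is the underlying transcript hash: a longer binding doesn't help if the handshake hash itself is MD5 or SHA-1, since a transcript collision still yields identical bindings.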



