More interesting than the attack itself is their overall effort to combine formal verification with protocol implementation. Along the way, they've found all these problems in other protocol implementations that weren't built with such rigorous methods. Quite an argument in favor of using high-assurance techniques for at least critical, slow-changing protocols like TLS.
Formal verification of protocols like TLS is essentially the research thesis behind this group at INRIA, which was also responsible for Logjam (as well as the paper you linked to).
Oh yeah, they're kicking ass on every front of this sub-field. Leroy et al. are doing the same for compilers and language analysis at INRIA, and Astrée is leading it for static analysis of C.
France, especially INRIA, seems to be in the lead on verified software with a real-world focus. It will be great when more groups elsewhere follow suit.
Oh yeah, Galois is another one of the greats in the field. They just keep cranking out one practical thing after another. At a higher pace than most it seems. Here are a few good things on their end.
They mention that tls-unique is used by FIDO. Does this include the U2F specification that is just starting to gain acceptance for two-factor authentication? If so, what does it mean for U2F going forward? Are there (potentially/in theory) issues with using it for 2FA?
Try clearing your recent browsing history, cookies, and such. The link wasn't working for me either, but it started working after I had cleared everything.
It's not clear whether JRE 6 and JRE 7 are affected (does the JSSE shipped with them support TLS 1.2?). If they are, that's pretty worrying, since they're no longer supported yet still widely deployed.
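For what it's worth, you can ask the JSSE on a given JRE directly what it supports versus what it enables by default. A minimal check, assuming you can run code on the box in question:

    import java.util.Arrays;
    import javax.net.ssl.SSLContext;

    public class TlsVersions {
        public static void main(String[] args) throws Exception {
            SSLContext ctx = SSLContext.getDefault();
            // What the provider can do at all vs. what clients get unless they opt in.
            System.out.println("Supported: "
                + Arrays.toString(ctx.getSupportedSSLParameters().getProtocols()));
            System.out.println("Enabled by default: "
                + Arrays.toString(ctx.getDefaultSSLParameters().getProtocols()));
        }
    }

That only tells you what the stock JSSE reports, of course, not what any particular app layered on top actually negotiates.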
I wonder why you're speculating, given they used the attacks against default configurations of Java apps, Firefox, some Chrome builds, BouncyCastle, and more. Amusingly, the clients often accepted the weaker crypto even when it was disabled. Fits my memes nicely. Since then, some of the flaws that led to that have been fixed.
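On the Java side, the knobs people normally reach for are setEnabledProtocols/setEnabledCipherSuites on the socket. A hedged sketch, with the host and cipher suite as placeholder choices rather than recommendations; the paper's point was precisely that state-machine bugs sometimes let clients accept things outside these lists anyway, so this is necessary but not sufficient:

    import javax.net.ssl.SSLSocket;
    import javax.net.ssl.SSLSocketFactory;

    public class StrictClient {
        public static void main(String[] args) throws Exception {
            SSLSocketFactory f = (SSLSocketFactory) SSLSocketFactory.getDefault();
            // example.com and the single suite below are illustrative placeholders.
            try (SSLSocket s = (SSLSocket) f.createSocket("example.com", 443)) {
                s.setEnabledProtocols(new String[] { "TLSv1.2" });
                s.setEnabledCipherSuites(new String[] {
                    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" });
                s.startHandshake();  // should fail if the server can't meet the constraints
                System.out.println("Negotiated: " + s.getSession().getCipherSuite());
            }
        }
    }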
Not sure how applicable it is now, beyond being a checklist item for a situation that could repeat in a new client. Still, the paper said they used it on real stuff. Was there something specific you were thinking of that's not covered by that?
I don't think the paper was trying to speak to truncation generally. Truncation is a weakening of the crypto by definition. (Reduce bits => reduce security.) It is easier (cheaper) to find a collision if one uses a truncated hash. That doesn't make it Dangerous or Safe; just quantitatively reduced.
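Rough back-of-the-envelope with the generic birthday bound, just to put numbers on "quantitatively reduced" (nothing here is specific to the paper's attacks):

    generic collision cost on an n-bit hash  ~ 2^(n/2)
    full SHA-256, n = 256                    -> ~2^128 work
    truncated to n = 96 (e.g. the 12-byte
      Finished verify_data in TLS)           -> ~2^48 work

2^48 is unpleasant but within reach of a motivated attacker, which is the sense in which truncation matters.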
If the authors are trying to establish anything, it seems to be, "Yes, collisions do matter!"
Anyway, I found this paper...
http://www.ieee-security.org/TC/SP2015/papers-archived/6949a...
...that reminds me of older, high-assurance designs. The classic way to do it is with so-called abstract or interacting state machine models. Each component is a state machine where you know every success or failure state that can occur, plus an argument that security is maintained. Then you compose these in a semi-functional way to describe the overall system. Seems the miTLS people did something similar, which they call "composite state machines." The result was a clean implementation and verification of what got really messy in other protocol engines. Plus new techniques for handling that, of course. (Toy sketch of the basic idea below.)
Really good stuff. Worth extending and improving in new projects.
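To make the state-machine idea concrete, here's a toy sketch in Java (my own illustration, not miTLS's actual structure; the states and messages are simplified and made up for the example): every state enumerates exactly which messages it accepts, and anything unexpected lands in a terminal FAILED state rather than being silently tolerated.

    import java.util.EnumMap;
    import java.util.EnumSet;
    import java.util.Map;

    // Toy client-side handshake machine: each state lists the only messages
    // it accepts; any out-of-order or unexpected message is an explicit failure.
    public class HandshakeMachine {
        enum State { START, WAIT_SERVER_HELLO, WAIT_CERT, WAIT_FINISHED, COMPLETE, FAILED }
        enum Msg { CLIENT_HELLO, SERVER_HELLO, CERTIFICATE, FINISHED }

        private static final Map<State, EnumSet<Msg>> ACCEPTS = new EnumMap<>(State.class);
        static {
            ACCEPTS.put(State.START,             EnumSet.of(Msg.CLIENT_HELLO));
            ACCEPTS.put(State.WAIT_SERVER_HELLO, EnumSet.of(Msg.SERVER_HELLO));
            ACCEPTS.put(State.WAIT_CERT,         EnumSet.of(Msg.CERTIFICATE));
            ACCEPTS.put(State.WAIT_FINISHED,     EnumSet.of(Msg.FINISHED));
        }

        private State state = State.START;

        public State step(Msg msg) {
            if (state == State.COMPLETE || state == State.FAILED
                    || !ACCEPTS.get(state).contains(msg)) {
                state = State.FAILED;   // no silent tolerance of unexpected messages
                return state;
            }
            switch (msg) {              // the only legal transitions
                case CLIENT_HELLO: state = State.WAIT_SERVER_HELLO; break;
                case SERVER_HELLO: state = State.WAIT_CERT;         break;
                case CERTIFICATE:  state = State.WAIT_FINISHED;     break;
                case FINISHED:     state = State.COMPLETE;          break;
            }
            return state;
        }
    }

The miTLS work, as I understand it, does this at the level of the composed protocol (all the ciphersuite, version, and extension variants folded into one machine) and then proves properties about the transitions, which is where it gets interesting.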