Failures in NIST’s ECC standards [pdf] (cr.yp.to)
66 points by lisper on Oct 7, 2016 | 18 comments



And yet, unfortunately, LibreSSL still doesn't support any safe curves ( https://safecurves.cr.yp.to ) like Curve25519. Nor does it support negotiating between multiple curves, which matters for the 99% of browsers that only support the NIST curves (only the most recent Firefox nightlies and Chrome support it). The former landed in OpenSSL 1.1.0, and the latter in 1.0.2, whereas LibreSSL is a fork of 1.0.1. And after Dual EC DRBG, it is negligent to keep trusting curves that the NSA has tampered with.

The curve has been out for a long time now, and it really doesn't take much code to do this. Here's a recent implementation of it that I wrote, which runs in constant time and is based in part on curve25519-donna-c64, with some help from MerryMage:

http://hastebin.com/raw/lutiloyaxa

As you can see, most of the work is just simulating a 256-bit type efficiently. But they should already have the field element types in their signify code (which uses Ed25519), so they'd only have to write the first 86 lines of that code.
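To make "simulating a 256-bit type" concrete, here is a minimal sketch of the usual radix-2^51 representation for GF(2^255 - 19); the type and function names are illustrative, not taken from the snippet above:

    #include <stdint.h>

    /* An element of GF(2^255 - 19), stored as five 64-bit limbs of ~51 bits
     * each, so that limb products fit in 128 bits with room for carries. */
    typedef struct { uint64_t v[5]; } fe;

    #define MASK51 ((UINT64_C(1) << 51) - 1)

    /* Add two field elements limb-wise; limbs stay well below 2^64, so the
     * result can be reduced lazily later. */
    static void fe_add(fe *out, const fe *a, const fe *b) {
        for (int i = 0; i < 5; i++)
            out->v[i] = a->v[i] + b->v[i];
    }

    /* Propagate carries so every limb drops back under ~2^51.  The carry out
     * of the top limb wraps around multiplied by 19, since 2^255 = 19 mod p. */
    static void fe_carry(fe *f) {
        uint64_t c = 0;
        for (int i = 0; i < 5; i++) {
            f->v[i] += c;
            c = f->v[i] >> 51;
            f->v[i] &= MASK51;
        }
        f->v[0] += 19 * c;
    }

Multiplication and squaring follow the same pattern, with 128-bit intermediate products and the same wrap-around factor of 19.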

At this point, I've pretty much given up hope of being able to use the even nicer Curve448 within the next five years with TLS. I'm unaware of any TLS libraries or browsers that support it.


OpenSSL has supported X25519 since August this year.

Apple uses 25519 for iCloud encryption as well as encrypting AirPlay and wireless communication with various accessories.

X25519 is also supported in Chrome: https://www.chromestatus.com/feature/5682529109540864


> OpenSSL has supported X25519 since August this year.

"The former landed in OpenSSL 1.1.0"

> X25519 is also supported in Chrome

"only the most recent Firefox nightlies and Chrome support it"

> Apple uses 25519 for iCloud encryption as well as encrypting AirPlay and wireless communication with various accessories.

I was referring to its use in HTTPS/TLS, but that's good to hear! :D Hopefully they'll add it to Safari soon.


I wasn't contradicting you, just adding a bit of info. Does Firefox actually support X25519 now? I haven't seen it in the builds (it's also not on the reference list I use). I know Edge is going to support it by the end of the year; no clue on Safari yet.

But anyhow, if you want a full list of software that supports Curve25519/X25519 (DH) as well as other open crypto standards, IANIX keeps an up-to-date list: https://ianix.com/index.html


Oh okay, in that case my apologies for the misunderstanding ;)

Firefox is said to support it in the nightly releases, but not yet in the official releases. That was the state of things the last time I checked, about a month ago (I no longer use Firefox).


The problem is that, despite being deployed and having a lot of support, there is still no official RFC for X25519 in TLS.

We only have an RFC for the curve as a cryptographic primitive (RFC 7748). It's a bit unfortunate that the IETF process runs so slowly in this regard.


I can also tell you that cryptography libraries are complete piles of failure if you want to run on an embedded device.

C++ is a non-starter. Non-open licenses are non-starters since nobody will review the library. GPL libraries are radioactive legally. Opinionated libraries are wonderful--unless you have to interoperate (see NaCl and libsodium). Dynamic allocation is death in embedded (looking at you libgcrypt and your stupid s-expressions garbage).

And, if you want some hardware acceleration, your choices are ...

Zip. Nada. Zilch.

And then everybody wonders why security in IoT sucks.

(Botan is probably the least bad. It's C++, but they went out of their way to provide a C API and even have Python bindings in the source(!). Unfortunately, it's lots of atomized C++ classes and inheritance to trace through for what looks like not a lot of gain. It's not immediately clear how to inject hardware acceleration either.)


Fellow embedded engineer here.

I think ARM's mbedTLS (formerly PolarSSL) isn't that shabby. wolfSSL is pretty OK too. And then there are things like TweetNaCl by Schwabe et al. that work really well in the embedded space.

HW acceleration for ECC is very rare in MCUs, but quite a few sport AES with a couple of modes, typically CTR and CBC. Bigger ones (based on the Cortex-M4) may even support AEAD modes like GCM.
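For what it's worth, driving mbedTLS's AES-CTR interface is only a few lines; a rough sketch (the ctr_crypt wrapper and its key/nonce handling are illustrative):

    #include "mbedtls/aes.h"

    /* Encrypt (or decrypt -- CTR is symmetric) buf in place with AES-128-CTR.
     * Sketch only: key and nonce management is up to the caller. */
    int ctr_crypt(const unsigned char key[16],
                  unsigned char nonce_counter[16],
                  unsigned char *buf, size_t len)
    {
        mbedtls_aes_context ctx;
        unsigned char stream_block[16];
        size_t nc_off = 0;
        int ret;

        mbedtls_aes_init(&ctx);
        ret = mbedtls_aes_setkey_enc(&ctx, key, 128);  /* CTR always uses the encrypt key schedule */
        if (ret == 0)
            ret = mbedtls_aes_crypt_ctr(&ctx, len, &nc_off, nonce_counter,
                                        stream_block, buf, buf);
        mbedtls_aes_free(&ctx);
        return ret;
    }

On parts with an AES peripheral, vendor SDKs typically substitute a hardware-backed implementation underneath the same API.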


> And then there are things like TweetNaCl by Schwabe et al. that work really well in the embedded space.

I stumbled on TweetNaCl by accident. It's amazingly useful.

But, it's like it's being kept hidden. It's not at all easy to find when you are digging through crypto libraries looking for stuff. That's so terribly unfortunate because there are so many other libraries that are so amazingly difficult to port and use.
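For anyone who hasn't tried it: the entire X25519 key agreement with TweetNaCl is a handful of calls. A rough sketch (the x25519_demo wrapper is illustrative; randombytes() is the one function the library expects you to supply, usually from the MCU's hardware RNG):

    #include "tweetnacl.h"

    /* TweetNaCl leaves the RNG to the integrator. */
    extern void randombytes(unsigned char *buf, unsigned long long len);

    /* Derive a raw shared secret with a peer whose public key we've received. */
    void x25519_demo(const unsigned char their_pk[32], unsigned char shared[32])
    {
        unsigned char our_sk[32], our_pk[32];

        randombytes(our_sk, 32);                      /* secret scalar */
        crypto_scalarmult_base(our_pk, our_sk);       /* our public key; send to peer */
        crypto_scalarmult(shared, our_sk, their_pk);  /* raw shared secret */
        /* hash `shared` before using it as a symmetric key */
    }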


> C++ is a non-starter.

Uh?


C++ is a no-go in the embedded world for a lot of reasons.

First, you have to have a C++ compiler. That's not always a given.

Second, everybody likes to use a different subset of C++ in the embedded world. Some people have heaps, some don't. Some people use exceptions, some don't. etc.

Third, it's quite a bit more difficult to figure out what your max stack usage is if you have to traipse through virtual inheritance chains. C isn't easy, but C++ makes it horrifically difficult.

Does that answer your question?


I do embedded work for a living and can confirm all this. Perhaps bsder is a little too categorical, but C++ is definitely a language that poses additional difficulties for embedded developers. Many people feel that it offers too little in exchange -- that is, when a decent, reasonably non-buggy compiler exists for your platform.

This is not to say that there isn't a lot of embedded C++ code (I don't have numbers, but I suspect C and C++ are a lot closer together than C++'s haters would think) -- or that there isn't a lot of good embedded C++ code. I've seen a lot of good C++ code, and the only reason why I haven't written any is that my C++ skills are so out of date that I have a good chance of borking anything longer than a hundred lines.

And indeed, as bsder points out, a lot of the problems are people/politics-related. It's not that "C++ sucks", it's just that there is so much of it that as soon as you get ten people together, two of them are bound to know and use some gimmick that the other eight don't know about, four of them want to avoid exceptions while the other six used to do Java so they have no idea how else you're supposed to deal with failures, and three of them love C++11's new features but that's irrelevant because your compiler (which, by the way, costs about as much as half a kidney on the black market) barely holds together and the team it was outsourced to back in 2007 can barely patch up the null pointer dereferences, let alone add new features.

If you have full control over your codebase and don't (or barely) need to integrate anything from the outside world, it's actually OK. Otherwise, there's a good chance that you're in for such a rough ride that you'll swear you'll never use C++ again.


Yeah, C++ seems to have happily taken over the "write-only language" trophy from Perl. That's not a compliment.

And, I don't think I have ever seen a language mutate so much over time. C++ from 1996 looks nothing like C++ from 2006 which looks nothing like C++ from 2016. Also not a compliment.

I just simply no longer care to allocate the brain space for a language that buys me so very little in the embedded space. If I could find a decent interpreted language for the embedded space (256K flash/16K RAM), I'd never touch C++ again.


djb's papers always make for good reading. I wish more papers were written as straightforwardly as his.


There's a previous discussion about this somewhere (it's from earlier this year, written for a NIST summit on next-generation elliptic curve standards).

I can offer a quick cheat sheet:

* The NIST ECC problem everyone is aware of is that the seeds used to generate the curves have totally unknown provenance. Pretty much everyone agrees that's a problem; it's not really a live issue. Whatever the next curve standard is, it won't have this problem.

* NIST's ECC standards included an ECC-based PKRNG, which is a CSPRNG based around a public key operation rather than a hash or cipher. Pretty much everyone agrees that the only practical reason to use a PKRNG is to allow the holder of the secret private key to extract the state from people's RNGs and predict their random numbers; it's a form of key escrow. It was very bad that NIST standardized a PKRNG.
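Roughly, each Dual EC DRBG step looks like this, with the output truncation omitted (P and Q are the standardized public points, s is the secret internal state, and x(.) takes a point's x-coordinate):

    s' = x(s * P)     (state update)
    r  = x(s' * Q)    (output block)

If someone knows the scalar d with P = d * Q, they can lift an observed output r back to a point R with x(R) = r and compute x(d * R) = x(s' * d * Q) = x(s' * P), which is the next internal state; from there, every subsequent output is predictable.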

The meat of the paper is about Curve25519 and Ed25519, Bernstein's competing curves. These constructions are basically responses to the flaws of the NIST standards:

* The NIST curves are misuse-prone. When receiving ECC parameters from your counterparty, there's a validation routine you must perform precisely, or else an attacker can trick you into performing a computation using your secret on an unexpected and insecure curve. This attack is very simple to perform and pretty spectacular: you can use it to suck the private key out of a remote server. There are some variants of the attack (oversimplifying: there's the generic problem of not doing validation, and the more sophisticated problem of how you send coordinates, and whether protocol details like this can allow attackers to force attacks onto related-but-insecure "twist" curves). A minimal sketch of the validation check appears right after this list.

* The NIST curves are difficult to implement without timing leaks. That's what the Montgomery Ladder stuff is about: it's an algorithm that's simultaneously constant-time, efficient, and simple. NIST's curves are difficult to work with in these terms, and Bernstein's aren't. (A conditional-swap sketch, the heart of the ladder, appears at the end of this comment.)

* The NIST curves have a naive design with respect to modern computer architecture, and so it's easier to generate fast curve software with other curves (like Curve25519; the "25519" refers to the prime 2^255 - 19 that defines the field it works in, and that prime was chosen to be efficient to compute with in software).

* NIST originated DSA and ECDSA, the two most popular public key signature protocols. DSA/ECDSA sucks. A lot. They rely on random nonces and blow up spectacularly if those nonces are biased; it's become a bit of a sport in ECC research to get practical attacks out of extremely small nonce biases (a problem that's easy to have).

* Modern signature schemes have a bunch of nice features that DSA signatures don't; they're bulletproofed internally against attacks that could isolate weaknesses in specific parameters (the hash used, the nonce and the key), the math is tweaked so that things like modular divisions that are hard to do in constant time are replaced with multiplications, and the system is hardened to make it difficult to launch batch attacks (attacks where it costs only marginally more to attack 1000 signatures than to just attack 1). The next NIST signature standard should be like those nice signature systems, not DSA, which, again, sucks.
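As referenced above, here is a minimal sketch of that validation step for a received P-256 point, using OpenSSL's EC API (the peer_point_ok wrapper is illustrative and error handling is abbreviated; P-256 has cofactor 1, so no separate subgroup check is shown):

    #include <openssl/ec.h>
    #include <openssl/obj_mac.h>

    /* Return 1 if buf decodes to a valid, non-infinity point on P-256. */
    int peer_point_ok(const unsigned char *buf, size_t len)
    {
        int ok = 0;
        EC_GROUP *grp = EC_GROUP_new_by_curve_name(NID_X9_62_prime256v1);
        EC_POINT *pt = grp ? EC_POINT_new(grp) : NULL;
        BN_CTX *ctx = BN_CTX_new();

        if (grp && pt && ctx &&
            EC_POINT_oct2point(grp, pt, buf, len, ctx) == 1 &&  /* rejects malformed encodings */
            EC_POINT_is_on_curve(grp, pt, ctx) == 1 &&          /* y^2 = x^3 + ax + b actually holds */
            !EC_POINT_is_at_infinity(grp, pt))
            ok = 1;

        BN_CTX_free(ctx);
        EC_POINT_free(pt);
        EC_GROUP_free(grp);
        return ok;
    }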

The paper is worth reading! I think it's a little easier to read with the roadmap. :)
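And on the constant-time point: the heart of a Montgomery ladder is that every iteration does identical work, and the only secret-dependent step is a branch-free conditional swap, roughly like this sketch (fe_cswap and the five-limb layout are illustrative):

    #include <stdint.h>

    /* Swap the two 5-limb field elements iff bit != 0, without branching
     * on the secret bit: the mask is all-ones or all-zeros. */
    static void fe_cswap(uint64_t a[5], uint64_t b[5], uint64_t bit)
    {
        uint64_t mask = 0 - bit;     /* bit must be 0 or 1 */
        for (int i = 0; i < 5; i++) {
            uint64_t t = mask & (a[i] ^ b[i]);
            a[i] ^= t;
            b[i] ^= t;
        }
    }

    /* In the ladder itself, each scalar bit selects which of the two running
     * point representations gets doubled and which gets added, via cswap
     * before and after a fixed sequence of field operations. */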


Interesting reading w.r.t. DSA from this past week: http://eprint.iacr.org/2016/961


I think that paper is "submission worthy" itself.




