How to defeat Ed25519 and EdDSA using faults (kudelskisecurity.com)
171 points by lisper on Oct 6, 2017 | 61 comments



The most important thing to understand about this attack is that it's premised on attackers physically flipping bits, usually in targeted memory locations, during particular phases of computation. Even for attackers with physical access to a device, it's not an easy attack to pull off. It's for the most part not a line of attacks relevant to client/server and cloud software (your cloud provider has less dramatic ways of destroying your security).

The textbook example of a setting where fault attacks matter is smart cards: they're computing devices that perform crypto operations and their users are naturally adversarial (the card controls their access to some resource they'd naturally prefer unfettered access to). The crypto in a smart card needs to work even when the custodian of the card is zapping it with lasers.

The paper mentions IOT as a setting that will increasingly implicate fault attacks. To some extent, maybe. Most IOT devices aren't adversarial with the people whose custody they're in; in other words, if you can get your hands on an IOT device once deployed, the threat model for that device basically already says you own all the "secrets" in the device.

Another pretty important thing to know is that while the paper is interesting, it's not unexpected that DFA attacks would compromise EdDSA; they compromise pretty much everything, don't they?

Finally: I think you really don't care what your crypto library has to say about DFA attacks on its EdDSA implementation. All the mainstream implementations will be "vulnerable" to the attack (EdDSA is unworkably slow with the most obvious workaround of checking the signatures you yourself generate). If you need protection against fault attacks in your threat model, you are way past the point where library safeguards will help you. This is probably the point at which you can't responsibly ship products without getting an actual crypto consultancy to sign off on it.

These are relatively straightforward attacks, both conceptually (the paper has a pretty useful breakdown of "nine" attacks on EdDSA that are basically the same fundamental attack applied to different phases of the algorithm) and to simulate in software, so if you really want to get your head around it, boot up a Sage shell, get EdDSA working, and just play with fake "bit flips".
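If you'd rather not set up Sage, a rough Python sketch of the underlying algebra (scalars mod the group order only, no real curve or hash; all names here are illustrative, not any library's API) looks like this:

  import secrets

  L = 2**252 + 27742317777372353535851937790883648493  # Ed25519 group order

  def sign_scalar(a, r, h):
      # s = r + h*a (mod L): the scalar half of an EdDSA signature
      return (r + h * a) % L

  a  = secrets.randbelow(L)   # long-term secret scalar
  r  = secrets.randbelow(L)   # deterministic nonce, identical in both runs
  h1 = secrets.randbelow(L)   # stand-in for Hash(R | A | M) in the correct run
  h2 = h1 ^ (1 << 7)          # "bit flip": the faulted run hashed a corrupted M

  s1, s2 = sign_scalar(a, r, h1), sign_scalar(a, r, h2)

  # Same r, two different hashes: the secret scalar falls straight out.
  recovered = ((s1 - s2) * pow((h1 - h2) % L, -1, L)) % L
  assert recovered == a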


I have some customers with a stronger threat model than 'physical access owns the secrets' for reasonably mundane 'IoT' products. IP protection / late-point differentiation / any as-a-service model requirements can bump you out of that category.

Fault attacks themselves require exploiting the structure of whatever primitive you're targeting, so in that sense whilst 'everything is susceptible' isn't false, it really does vary quite a bit, and attack strategies can be really quite complex. Some fault attacks on block ciphers are effectively augmented differential cryptanalysis.

In general you're not going to be able to take a library and make any claims about protection against invasive or non-invasive SCA without first knowing about the hardware it will be run on, so I agree that it's not usually meaningful to add countermeasures against the aforementioned types of attacker.

IMO applied crypto is at its hardest when you're close to bare metal and you have a physical attacker.


I'm not surprised that there are IOT products that need to be secure against their custodians, but the likelihood that the median example product from that set is so secure that fault attacks are a serious concern seems pretty low.

It's a sort of intuitive Venn Diagram situation here, where we're capturing:

1. The subset of hardware products that are "IOT" devices

2. The subset of that that are security sensitive

3. The subset of that that must be secure against their custodians (an extremely narrowing step here)

4. The subset of those products that rely on cryptographic primitives in order to operate.

To put this in perspective: most automotive and industrial secure hardware elements don't make it all the way down to (4), and are compromised by plain ol' bog standard software security attacks.


Yes, I don't really disagree with the general thrust of that, particularly once you start considering the technical barriers to executing an attack.

My main point was that the threat model includes the manufacturer needing to maintain ownership of secrets more often than you'd expect. I'd suggest that your step 3 isn't as narrowing.

Whether meeting that model warrants building in resistance to classes of SCA is an almost independent question, and I would agree that it's not very likely in most cases. The consumer-facing industries in which you do see that (e.g. set-top box, printing) aren't really IoT ones, either.


You seem to have read the second paper, which was published on the same topic shortly after this work was presented at FDTC. However, both papers are about the same kind of fault attacks. And since you mention playing around, the first paper and the blog post here also provide some Python code for you to play with bit flips.

Regarding the IOT setup, I know first hand that Nest products are trying very hard to be secure even in case of physical possession. Especially on the firmware signature verification part. But they rely on some fun post-quantum scheme, not on EdDSA.

It's true that in the end it all boils down to your threat model, but I bet smartcard producers are glad someone researched this before they got on the EdDSA train.


> but I bet smartcard producers are glad someone researched this before they got on the EdDSA train.

If the only thing controlling whether they implement something is if they can find negative results on google, I don't think they'll be glad for very long.


Can you elaborate on the Nest part? How do fault attacks apply to firmware verification?


Nest products are using hash-based signatures?


Tangentially, we as consumers often lament that some keys in the devices we own remain hidden from us. In such cases we possess the devices and we are the "adversaries" from the point of view of the companies that produced them.

The article shows that maybe it's still somewhat easier to get them than we'd often assume?


Sure, but the whole security model behind smartcards is premised on this problem, so it's not surprising. Fault attacks aren't new, and the kinds of side channels that smart cards and similar devices need to protect against are also a lot more interesting than just timing. There's a whole subfield of cryptography that focuses on this very narrow problem space.

I'm just saying: it's probably not all that easy to pull a key off a smartcard.

Remember that in most cases, the security model baked into the device is also that it need only be secure enough to resist economical attacks --- meaning, if the value of the secret you obtain from a smart card is less than the value of the equipment and expertise you'd need to carry out the attack, the smart card has probably "won" even if it can be compromised.


As you say it is very much not, at least for now, a trivial attack even with unconstrained physical access, nor is it an unknown problem. That said I think you might be a little too blasé about the ever increasing importance of maintaining at least some security in the face of physical attacks.

>The textbook example of a setting where fault attacks matter is smart cards: they're computing devices that perform crypto operations and their users are naturally adversarial (the card controls their access to some resource they'd naturally prefer unfettered access to).

Smart cards (and more specialized HSMs), along with on-cpu/soc dedicated crypto black boxes, are also utilized by users to provide an additional layer of redundancy or at least compromise-detectability for keys and authenticity of operations in the face of attackers who gain physical access. In many countries national IDs, financial tokens and such depend on this. Obviously modern smartphones make use of this too.

>The paper mentions IOT as a setting that will increasingly implicate fault attacks. To some extent, maybe. Most IOT devices aren't adversarial with the people whose custody they're in; in other words, if you can get your hands on an IOT device once deployed, the threat model for that device basically already says you own all the "secrets" in the device.

This is definitely at least somewhat wrong. Particularly as IOT gets deployed more, they will commonly be in settings that aren't "public" per se but certainly do not enjoy high levels of physical security against anyone who might pass by either. Private individuals, educational institutions, businesses and organizations often have guests who they want to have temporary limited resource access but do not want to let leverage that into persistent/unlimited resource access.

How much this matters depends on whether this sort of attack could ever be made easy and fast enough, and it can be at least partially hardened against for relatively low cost. But I wouldn't discount the importance of manufacturers at least keeping in mind whether that threat scenario applies to their customers or not. This shouldn't be a big deal and is very much not headline territory, but as part of the shift to ever more ubiquitous networked devices we may have to rethink, from time to time, how much physical access restrictions really are a defense, right?


> EdDSA is unworkably slow with the most obvious workaround of checking the signatures you yourself generate

Checking a signature is twice as expensive as generating one. Yes, a 3X cost is significant, but it's not "unworkably slow". It's just a tradeoff of performance vs security. And for some applications, like bitcoin wallets, it might well be a price worth paying.


Are you going to check with a different implementation? Where does this rabbit hole end?

This is an integrated hardware/software problem, right?

I think my comment is being read to suggest that fault attacks are generally unimportant, which is not my point. I think if there's a subtextual reaction to anything in my comment, it's the notion that crypto library developers should be implementing and marketing fault attack countermeasures. If you need defenses against fault attacks, as a general class of attacks, you need something no crypto library is likely to provide you.


The security rabbit hole never ends. I did not mean to imply that an internal check is a panacea, only that it might be a cost that some not-entirely-unreasonable person might be willing to pay in certain applications. I was just pushing back against the glib dismissal of this countermeasure as "unworkably slow."


It's not a good defense, because a double fault injection attack could bypass the signature verification as well.

The only software defense that actually works reliably is preventing duplicate r-values from being generated in the first place. This can be accomplished by augmenting the fully deterministic r with some additional randomness, producing a synthetic r which is still guaranteed to be unique-per-message even if we have an RNG failure. See:

https://moderncrypto.org/mail-archive/curves/2017/000925.htm...

Note this is quite different from the k-value in ECDSA, which is not synthesized from the message contents at all.
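A minimal sketch of the idea in Python, assuming SHA-512 and the Ed25519 group order (the linked proposal fixes the exact input layout and domain separation; this only shows the shape):

  import hashlib, secrets

  L = 2**252 + 27742317777372353535851937790883648493  # Ed25519 group order

  def synthetic_r(nonce_key: bytes, message: bytes) -> int:
      rand = secrets.token_bytes(32)  # fresh randomness; even a stuck RNG is not
                                      # fatal because the message is still hashed in
      h = hashlib.sha512(nonce_key + rand + message).digest()
      return int.from_bytes(h, "little") % L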


libsodium can be compiled with ED25519_NONDETERMINISTIC defined in order to compute r as recommended in the generalized EdDSA proposal.

libhydrogen always worked that way.


Yep, that looks like a good approach.


> If you need protection against fault attacks in your threat model, you are way past the point where library safeguards will help you. This is probably the point at which you can't responsibly ship products without getting an actual crypto consultancy to sign off on it.

I wish there was a way to make comment text bold, because this definitely seems like it needs to be emphasized.


Crap, now I have to update my manual¹, and advise the paranoid to verify the signature they just generated. Perhaps even provide a helper function for it?

(I still think letting the user control `r` is too dangerous.)

[1]: http://loup-vaillant.fr/projects/monocypher/
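Something like this, perhaps (sketched in Python with PyNaCl rather than Monocypher's C API, just to show the shape of such a helper, and with the caveat from elsewhere in the thread that a double fault can still bypass the self-check):

  from nacl.exceptions import BadSignatureError
  from nacl.signing import SigningKey

  def sign_checked(sk: SigningKey, message: bytes) -> bytes:
      """Sign, then verify our own signature before releasing it."""
      sig = sk.sign(message).signature
      try:
          sk.verify_key.verify(message, sig)  # roughly 2x the cost of signing
      except BadSignatureError:
          # A failed self-check hints at a fault (or broken hardware); don't leak sig.
          raise RuntimeError("signature self-check failed; withholding signature")
      return sig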


You don't really want to verify the signature you've just generated, if you care about perf, because verification is twice as costly as signing.


I know, it would slow down the whole thing by a factor of 3.

Monocypher can "only" produce 7000 signatures per second on a single thread on my laptop (i5 Skylake Intel). With the verification, I would waste a whopping 0.3ms on each signature, reducing the throughput to 2000 signatures per second.

Unless you're running a signing-intensive application on the scale of Let's Encrypt, you won't really care about the slow-down. If you don't fully trust your hardware, however, faults might be a real problem.


Even Let's Encrypt is issuing only a handful of certificates per second. Perhaps it'd matter more if used on a per-connection basis.


1. Nobody is attacking Let's Encrypt with fault attacks.

2. If you can induce faults on Let's Encrypt's signing infrastructure you have better attacks to deploy.

3. Let's Encrypt doesn't use elliptic curves in the first place.


Of course, I was only speculating about the scale required to produce several thousand signatures per second. Your second point stands, though. It's probably a waste of CPU to mitigate fault attacks in a Let's Encrypt-like use case.


I wasn't suggesting any of those things weren't true, merely a back-of-the-napkin estimate of the sort of load a function like this would be sufficient for.


Let's Encrypt uses HSMs to sign, and more than 90% of signatures produced are for OCSP, not for issuance. Though I have failed to find the link for this, I saw them publish these numbers.


Yet if you then send your signed document over a network, only for it to be rejected 10 minutes later because of a bad signature, and you have to start from the beginning again, that verification cost is irrelevant.


How frequent do you expect failures to be? (That would be attack-induced failures.) There's a threshold below which the "10 minutes later" costs less than wasting 3ms every single time.


It is much more likely that the fault is due to corruption during data transfer anyway and you get to handle this case.


The paranoid should be doing that regardless of the algorithm, should they not?


It depends on the algorithm.

If the algorithm is immune to faults, the worst thing that can happen is that you produce an invalid signature, which will then be rejected. This turns the security concern into a more mundane performance concern.


> It depends on the algorithm ... If the algorithm is immune to faults

The truly paranoid would never believe any algorithm is immune to faults.

Even if full mathematical code analysis shows it to be so, can you prove that the proving analysis algorithm itself is immune to faults? It is potentially faulty turtles all the way down!


The higher that tower of turtles, the stronger the confidence.

The proof checker can be run several times to account for faults. Then you don't need to run it at all (it's proven). And you can use another proof checker (or that proof checker itself) to prove that the proof checker works. Etc.

Now there's the problem of the compiler, which gives an upper bound on the confidence you can have: if there's a bug in it, you're screwed even with a correct algorithm.

> Even if full mathematical code analysis shows it to be so

Come to think of it, that may not be possible for lots and lots of algorithms —whose mathematical security is only conjectured to begin with. Thanks for pointing that out.


You can skip the compiler-fault step by directly verifying the machine code, or by provably refining an algorithm into machine code.

This is what the seL4 people are doing.

The remaining part is verifying that your machine model is correct. Not trivial for a CPU and memory controller as complex as x86. (Even though only a subset of operations is used.)

Doing this also allows you a proof against fault attacks and timing attacks.


I expect (though I'm not a theoretical mathematician, so I'm happy to be told how wrong I am!) that any formal proof method will have algorithms it can't do anything with, and that there will exist algorithms that none can deal with. Presumably there is a code version of Gödel's incompleteness theorems somewhere in Turing's work or similar.


In particular, a system usually cannot prove itself consistent. However, we have developed systems expressly for the purpose of being as consistent as our intuition for logic could allow it to be. I'm willing to trust Metamath [0] based on the formalism that they've chosen, which is a dead-simple term-rewriting formalism that is suitable for building pretty much any mathematical structure, given enough time and effort.

A better question would be whether, in a theoretical sense, fault-free algorithms even exist. There are similar questions, like whether one-way functions exist, where we have lots of evidence to suggest that the answer is "yes" but we have yet to prove it.

[0] http://us.metamath.org/


Correct, you cannot prove everything.

But you can prove many things. Many programs we care about can be proven correct (for whatever definition of "correct" we manage to formally specify), and those we can't prove correct can probably be transformed into provable versions —assuming they're short enough.


It has always seemed like a footgun for the algorithm to totally cave if the 'r' value is reused for the same key and different messages. This attack uses the deterministic 'r' generation from the key and the message (to ensure that 'r' is only reused if the message is the same), but then glitches the hash function used on the message during signing (so the same 'r' is used on a different message).

Edit: Removed silly question on why the order has to be 'r' before the second hash. Spoiler: the signature has two parts, and at least one needs to be fixed to prevent a forger from just modifying both. That needs to be the 'r' part, so it has to come before the S.
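For reference, this is why a shared 'r' over two different messages caves so completely (a is the secret scalar, R and A the nonce point and public key; key-derivation details glossed over):

  s1 = r + Hash(R | A | M1) * a  (mod L)
  s2 = r + Hash(R | A | M2) * a  (mod L)
  s1 - s2 = (h1 - h2) * a        (mod L)   =>   a = (s1 - s2) / (h1 - h2)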


I have another, perhaps less silly question.

The way EdDSA works, one needs to hash the message twice, in a way that makes incremental hashing impossible (you need to complete the first pass before you begin the second):

  r = Hash(secret_key | message)
  R = Base_point × r
  H = Hash(R | public_key | message)
  // use H to compute the signature
Because of this, we cannot have a streaming interface. Thus, signing and verifying huge messages is a hassle. I think there are ways to avoid this, if we're willing to mess with EdDSA (I'm not).

We could for instance change the second hash to this:

  H = Hash(public_key | message | R)
By appending R to the rest, instead of pre-pending it, we can perform the two hashes in parallel, and use the result of the first hash only for the last bit of the second hash.

A more involved tweak would hash the message first:

  r = Hash(message | secret_key)
  R = Base_point × r
  H = Hash(message | R | public_key)
That way you can hash the message once, then duplicate the hash context to complete the two hashes.
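A rough hashlib sketch of that context-duplication trick (the input order is the tweaked one above, not RFC 8032's; scalar_mult stands in for the curve arithmetic that isn't shown):

  import hashlib

  def streaming_sign_hashes(message_chunks, secret_key, public_key, scalar_mult):
      base = hashlib.sha512()
      for chunk in message_chunks:    # single streaming pass over the message
          base.update(chunk)

      h_r = base.copy()
      h_r.update(secret_key)          # r = Hash(message | secret_key)
      r = h_r.digest()

      R = scalar_mult(r)              # R = Base_point x r (curve math not shown)

      h_H = base.copy()
      h_H.update(R + public_key)      # H = Hash(message | R | public_key)
      return r, R, h_H.digest()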

---

Now the question: would those modifications work? Are they still immune to collision attacks? How about length extension attacks?

Subsidiary question: if my modifications are secure, how come DJB et al. didn't think of them? DJB in particular is reputed to be quite performance-sensitive.


EdDSA is NOT online, but it's trivial to use it in an online way: sign a hash of the data rather than the data itself. But then a hash collision breaks you (plain EdDSA is resilient to hash collisions). So it's a trade-off you have to make. As for DJB's take on this, there were lengthy debates about it on the CFRG list, and you can find out for yourself (spoiler: he's very big on not having to rely on collision resistance). Of course, the need for online algorithms is real, but there's more than one way to skin that cat.


Do you have a link to those debates? I'm searching through the mailing list, no luck so far.


The list archives are difficult to link to.

Look for posts around June 2015, e.g., with Subject:s like "Summary of the poll: Elliptic Curves - signature scheme: friendliness to low memory implementations" or subjects containing "IUF" (Init, Update, Final) here:

https://www.ietf.org/mail-archive/web/cfrg/current/mail5.htm...

(The archives' main page is https://www.ietf.org/mail-archive/web/cfrg/current/maillist....)


Shouldn't the "message" being signed by EdDSA (or others like RSA) be a hash of the real data? That way you can do a streaming interface just fine, and the "message" being hashed within EdDSA is small and constant-sized.

I think the idea with the ordering of the inputs to the hash functions was to always put the most secret data first, to make things harder for the attacker. It was probably expected that the "message" would always be small, like it must be with, for instance, RSA.


> Shouldn't the "message" being signed by EdDSA (or others like RSA) be a hash of the real data?

It could be, and that's what I currently recommend. But then isn't there a problem with collision attacks? Or does one still have to perform a pre-image attack on the hashes of previous messages?


RFC 8032 already specifies an "Ed25519ph" variant which prehashes the message and therefore facilitates Initialize-Update-Finalize (IUF) APIs:

https://tools.ietf.org/html/rfc8032#section-4
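In code, the hand-rolled prehash approach discussed above is just this (PyNaCl as an example Ed25519 implementation; note this is plain prehashing, not Ed25519ph's domain-separated construction, and it makes the hash's collision resistance load-bearing again):

  import hashlib
  from nacl.signing import SigningKey

  def sign_streaming(sk: SigningKey, chunks) -> bytes:
      h = hashlib.sha512()
      for chunk in chunks:                  # IUF: hash the data as it arrives
          h.update(chunk)
      return sk.sign(h.digest()).signature  # sign the 64-byte digest, not the data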


Anyone have any idea how this affects Olm[1] (and possibly NaCl and libsodium)? I know it uses Ed25519 signatures for both the device fingerprint and the one-time pre-keys.

[1]: https://git.matrix.org/git/olm/about/docs/signing.rst


On the Matrix side we’re reading through the paper and seeing if/how it applies to Olm.


Why would you need to read through the paper to see if it applies? Are you deployed in settings where attackers can physically flip bits or not? That question, the only one that matters, is posed essentially in the abstract of the paper.


Yup, Olm could conceivably be deployed in settings where attackers could physically flip bits, but we're not aware of anyone doing so yet (all current usage is on non-embedded devices where the hassle of bit flipping would be enormous).

Assuming I've correctly skimmed the paper, it looks like it relies on your Ed25519 impl incorrectly reusing the same 'r' value when signing messages - and then deliberately flipping bits whilst it does so, in order to gradually recover the private key. Our usage of Ed25519 doesn't reuse 'r' values, so I think Olm is okay.


The article acknowledges RFC6979, which is the standard way to produce a deterministic r that doesn't have this problem intrinsically. It also refs RFC8032, which has EdDSA in it (both variants).

The fault attack acts after that deterministic generation, so the fact that Olm doesn't reuse r values explicitly doesn't matter. You're not _erroneously_ reusing r: the attack involves you trying to sign the same message twice, with a fault occurring on the second signing.


When signing the same message, all EdDSA implementations are supposed to reuse the same `r`, as per RFC 8032, and thus to produce the same signature under the same key. For smartphones and instant messaging I wouldn't be worried about fault attacks; there are probably easier ways in.
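That determinism is easy to check for yourself (PyNaCl shown as one example implementation):

  from nacl.signing import SigningKey

  sk = SigningKey.generate()
  assert sk.sign(b"same message").signature == sk.sign(b"same message").signature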


It's a fault attack. It doesn't really matter what implementation you use; the attack is an interaction between the hardware (flipping bits in targeted memory at specific points in computation) and the algorithm itself.

You care about these kinds of attacks if you build or rely on a device that uses EdDSA for security and is in the physical custody of adversaries. It's not an attack that is especially relevant for client/server software or cloud cryptography (there are less theatrical ways your cloud provider can destroy your security).


It mostly does not matter for user-level software. Anything capable of creating faults on your system is almost certainly already able to read your private key and send it around.


I wonder how long before someone demonstrates fault attacks triggered from a sandbox in the cloud..

I imagine something like rowhammer could be used..


This attack uses physical fault injection, so it shouldn't be trivial for remote attackers to use, but it is a nice demonstration, and it exposes vulnerabilities on chips that generate and store such keys, like maybe pay-TV cards (remember the business of Kudelski/Nagra).


Just a while ago there was a paper here about how to create faults from software, using the CPU's power management features.


Sure, and Rowhammer flips bits from software too. The point here is you need relatively precise control not just in location but in timing of your bit flips to break crypto like this. Rowhammer attackers have the luxury of attacking bits that stay high-value for long periods of time. DFA attackers get very tiny windows.


Not very long. Flip Feng Shui does exactly that: a malicious program in a VM can affect another one co-hosted in the same cloud server if the hypervisor uses deduplication:

https://www.vusec.net/projects/flip-feng-shui/


For a moment, I thought somebody wrote an article on how to beat edw519's HN karma.


I believe the only way to defeat Ed209 is also through exploiting its faults.


Referring to its first demo? Really, the engineers were asking for that… "glitch":

1) Failed to test Ed209 on demo conditions (medium sized meeting room).

2) Failed to anticipate the failure of the sound recognition algorithm (gun on carpet instead of gun on concrete).

3) Failed to implement software failsafes for the demo: a layer on the motor system so it never touches anything (especially moving things such as humans), a weapons system that doesn't actually fire…

4) Failed to implement a proper shut down button, such as a physical plug one could pull from behind.

5) Loaded the damn thing with live ammo.



