Crap, now I have to update my manual [1], and advise the paranoid to verify the signature they just generated. Perhaps even provide a helper function for it?
(I still think letting the user control `r` is too dangerous.)
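A minimal sketch of what such a helper might look like, assuming the `crypto_sign()`/`crypto_check()` interface from the Monocypher manual (other versions may use different names; treat this as illustrative rather than as the library's actual API):

```c
#include <monocypher.h>

/* Sign, then immediately verify what we just signed, as a fault-attack
 * mitigation.  Returns 0 on success, -1 if the fresh signature does not
 * verify (i.e. something may have gone wrong during signing). */
int sign_checked(uint8_t        signature[64],
                 const uint8_t  secret_key[32],
                 const uint8_t  public_key[32],
                 const uint8_t *message, size_t message_size)
{
    crypto_sign(signature, secret_key, public_key, message, message_size);
    return crypto_check(signature, public_key, message, message_size);
}
```

The extra verification call is what costs the ~0.3 ms discussed below.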
I know, it would slow down the whole thing by a factor of 3.
Monocypher can "only" produce 7000 signatures per second on a single thread on my laptop (Intel Skylake i5). With the verification, I would waste a whopping 0.3 ms on each signature, reducing the throughput to 2000 signatures per second.
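(Back of the envelope: 1/7000 s ≈ 0.14 ms to sign, plus roughly 0.3 ms to verify, is about 0.44 ms per signature, i.e. roughly 2,000–2,300 signatures per second, which is where the factor of 3 comes from.)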
Unless you're running a signing-intensive application on the scale of Let's Encrypt, you won't really care about the slow-down. If you don't fully trust your hardware, however, faults might be a real problem.
Of course, I was only speculating about the scale required to make several thousand signatures per second. Your second point stands, though: it's probably a waste of CPU to mitigate fault attacks in a Let's Encrypt-like use case.
I wasn't suggesting any of those things weren't true; I was merely making a back-of-the-napkin estimate of the sort of load a function like this would be sufficient for.
Let's Encrypt uses HSMs to sign, and more than 90% of the signatures it produces are for OCSP, not for issuance. I saw them publish these numbers, though I have failed to find the link.
Yet if you then send your signed document over a network, only to have it rejected 10 minutes later because of a bad signature, forcing you to start over from the beginning, that verification cost is irrelevant.
How frequent do you expect failures to be? (That would be attack-induced failures.) There's a threshold below which the "10 minutes later" costs less than wasting 0.3 ms every single time.
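(Rough break-even, using the 0.3 ms verification cost from upthread: re-doing the work costs about 10 minutes ≈ 600,000 ms, so always verifying is the cheaper option once the per-signature failure probability exceeds 0.3 / 600,000 ≈ 5 × 10⁻⁷, i.e. more than about one bad signature in two million.)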
If the algorithm is immune to faults, the worst thing that can happen is that you produce an invalid signature, which will then be rejected. This turns the security concern into a more mundane performance concern.
> It depends on the algorithm ... If the algorithm is immune to faults
The truly paranoid would never believe any algorithm is immune to faults.
Even if full mathematical analysis of the code shows it to be so, can you prove that the analysis algorithm itself is immune to faults? It is potentially faulty turtles all the way down!
The higher that tower of turtles, the stronger the confidence.
The proof checker can be run several times to account for faults. After that, you don't need to run it again (the program is proven once and for all). And you can use another proof checker (or that proof checker itself) to prove that the proof checker works. Etc.
Now there's the problem of the compiler, which gives an upper bound on the confidence you can have: if there's a bug in it, you're screwed even with a correct algorithm.
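Here is a toy sketch of the "run it several times" idea in C (the names are made up for this example, not taken from any real checker): run a yes/no check N times and trust only a unanimous answer, so a single transient fault shows up as a disagreement rather than as a silently wrong result.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical redundancy wrapper: run a yes/no checker several times and
 * accept only a unanimous answer.  A single transient fault flips one run,
 * causing a mismatch, which we report instead of a possibly wrong result. */
typedef bool (*checker_fn)(const void *input);

typedef enum { CHECK_OK, CHECK_FAIL, CHECK_INCONSISTENT } check_result;

check_result check_redundant(checker_fn check, const void *input, size_t runs)
{
    bool first = check(input);
    for (size_t i = 1; i < runs; i++) {
        if (check(input) != first) {
            return CHECK_INCONSISTENT;  /* runs disagree: suspect a fault */
        }
    }
    return first ? CHECK_OK : CHECK_FAIL;
}
```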
> Even if full mathematical code analysis shows it to be so
Come to think of it, that may not be possible for lots and lots of algorithms, whose mathematical security is only conjectured to begin with. Thanks for pointing that out.
You can skip the compiler problem by directly verifying the machine code, or by provably refining an algorithm down to machine code.
This is what the seL4 people are doing.
The remaining part is verifying that your machine model is correct. That's not trivial for a CPU and memory controller as complex as x86 (even though only a subset of operations is used).
Doing this also allows you to prove resistance against fault attacks and timing attacks.
I expect (though I'm not a theoretical mathematician, so I'm happy to be told how wrong I am!) that any formal proof method will have algorithms it can't do anything with, and that there will exist algorithms no method can deal with. Presumably there is a code version of Gödel's incompleteness theorems somewhere in Turing's work or similar.
In particular, a system usually cannot prove itself consistent. However, we have developed systems expressly for the purpose of being as consistent as our intuition for logic could allow it to be. I'm willing to trust Metamath [0] based on the formalism that they've chosen, which is a dead-simple term-rewriting formalism that is suitable for building pretty much any mathematical structure, given enough time and effort.
A better question would be whether, in a theoretical sense, fault-free algorithms even exist. There are similar questions, like whether one-way functions exist, where we have lots of evidence to suggest that the answer is "yes" but we have yet to prove it.
But you can prove many things. Many programs we care about can be proven correct (for whatever definition of "correct" we manage to formally specify), and those we can't prove correct can probably be transformed into provable versions, assuming they're short enough.
[1]: http://loup-vaillant.fr/projects/monocypher/