> The security of a program is a function of its own design and testing, as well as of the design, testing, and basic correctness of its underlying platform: everything from userspace to the kernel to the compilers themselves.
You don't even have to worry about bugs: many cryptographic primitives require constant-time execution to guarantee security, which means the latency of each machine-code instruction can be vital to security. The variation in instruction latencies between vendors within the same architecture is already a concern.
This is a good point! But to be maximally precise: variations between each ISA implementation's latencies are not particularly important; what's important is that instructions operating on secret data are not themselves data-dependent in timing.
In other words: it's okay if AMD64 takes 9 cycles to execute an instruction while IA32e takes 7; the problem arises when AMD64 takes 9 + k cycles for k bytes of input while cryptographic engineers have tested on IA32e and assumed a constant overhead of 7 cycles.
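To make the shape of that problem concrete, here's a toy sketch in C (purely illustrative, not from any real library): the loop runs once per input byte, so the running time is roughly constant + k for k bytes, and anyone who can time the call learns k without ever seeing the data.

    #include <stddef.h>

    /* Toy example of length-dependent timing: one iteration per byte,
     * so total time ~ c + k. A timing observer learns k. */
    unsigned char checksum(const unsigned char *buf, size_t k)
    {
        unsigned char acc = 0;
        for (size_t i = 0; i < k; i++)
            acc ^= buf[i];
        return acc;
    }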
> the problem arises when AMD64 takes 9 + k cycles for k bytes of input while cryptographic engineers have tested on IA32e and assumed a constant overhead of 7 cycles.
Why is that a problem? Doesn't that just leak something you could determine by just looking at the CPU type?
That also leaks how many bytes of data are being processed. This sometimes matters.
When checking for a password match, you have to compare all characters in the string; otherwise you leak the position of the first mismatch. Even in a properly salted and hashed scheme, that makes breaking the password easier.
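A minimal sketch of the standard fix in C (the function name is mine, not any library's API): instead of returning at the first mismatch, XOR every byte pair and accumulate the differences, so the time taken is independent of where, or whether, the inputs differ.

    #include <stddef.h>
    #include <stdint.h>

    /* Constant-time comparison: always touches every byte, then tests
     * the accumulated difference once at the end. */
    int ct_equal(const uint8_t *a, const uint8_t *b, size_t len)
    {
        uint8_t diff = 0;
        for (size_t i = 0; i < len; i++)
            diff |= a[i] ^ b[i];
        return diff == 0;  /* 1 if equal, 0 otherwise */
    }

In practice you also have to stop the compiler from "optimizing" this back into an early exit; real libraries add optimization barriers for that.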
In an ideal world, a cryptography library wouldn't have to care about these kinds of details. It would be written in a completely cross-platform, mathematically pure fashion, and it would be the job of the platform to prevent information leaks (constant time, constant power consumption, ...).
I wonder if there is a CPU that you can switch to "constant resource mode" where it runs everything in fixed time, albeit slower? Or maybe you could run cryptographic operations in a black box (like an extra chip or a networked node) that replies on a fixed schedule?
> I wonder if there is a CPU that you can switch to "constant resource mode" where it runs everything in fixed time, albeit slower? Or maybe you could run cryptographic operations in a black box (like an extra chip or a networked node) that replies on a fixed schedule?
I believe you can disable the CPU cache on x86, which at least gets closer.
However, “slower” in this case means thousands of times slower.
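For the curious, the mechanism is the CD bit in the CR0 control register; this is a ring-0 sketch from memory of the Intel SDM, so treat the details as approximate:

    /* x86 kernel-mode only: set CR0.CD (bit 30) to disable cache fills,
     * clear CR0.NW (bit 29), then flush with WBINVD. Illustrative only. */
    unsigned long cr0;
    __asm__ volatile ("mov %%cr0, %0" : "=r"(cr0));
    cr0 |=  (1UL << 30);   /* CD: cache disable */
    cr0 &= ~(1UL << 29);   /* NW: not write-through */
    __asm__ volatile ("mov %0, %%cr0" :: "r"(cr0));
    __asm__ volatile ("wbinvd" ::: "memory");  /* write back + invalidate */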
no problem.. i'll just hang out here until you come up with a general way to transform an O(n) operation, for arbitrary n, into a constant-time one that's only 1000x slower.
Bounded time. If you have some operation that could leak information, like checking whether a key can decrypt something, take the longest possible time it could need, add some margin, and only return after that time has elapsed. If that means 50 ms per request, then so be it.
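A rough sketch of that idea in C, assuming POSIX clocks (check_key here is a hypothetical stand-in for the variable-time operation): take a timestamp, do the work, then sleep until a fixed absolute deadline so every call returns after the same wall-clock interval.

    #include <stdbool.h>
    #include <time.h>

    /* Do variable-time work, but always return after deadline_ms. */
    bool bounded_check(bool (*check_key)(const void *), const void *arg)
    {
        const long deadline_ms = 50;
        struct timespec deadline;
        clock_gettime(CLOCK_MONOTONIC, &deadline);

        bool ok = check_key(arg);  /* the variable-time part */

        deadline.tv_nsec += deadline_ms * 1000000L;
        deadline.tv_sec  += deadline.tv_nsec / 1000000000L;
        deadline.tv_nsec %= 1000000000L;

        /* Sleep to the absolute deadline; retry if a signal interrupts. */
        while (clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME,
                               &deadline, NULL) != 0)
            ;
        return ok;
    }

Note this only evens out the timing a remote caller can observe; it does nothing against local side channels like cache behavior or power draw.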
Edit in response to sibling: Imagine it doesn't just take 1s but also makes a clicking noise like a relay. You could still do 10000 requests/second if you install 10000 of them :-D. I wonder whether for certain use cases it would be worth the trade-off, if you had proof of being absolutely side-channel free.