
As naz pointed out, you can use sleep(1.0 - time_spent) to accomplish this.
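A minimal sketch of that padding pattern in C, assuming a fixed 1.0 s budget and a placeholder do_sensitive_check() (both are illustrative, not from the thread):

    #define _POSIX_C_SOURCE 199309L
    #include <stdint.h>
    #include <time.h>

    /* Hypothetical security-sensitive operation we want to pad. */
    int do_sensitive_check(const char *input);

    /* Run the check, then sleep whatever is left of a fixed budget
       (here 1.0 s) so the caller always waits the same wall-clock time. */
    int padded_check(const char *input)
    {
        struct timespec start, end, pause;
        const int64_t budget_ns = 1000000000LL;  /* 1.0 s, arbitrary */

        clock_gettime(CLOCK_MONOTONIC, &start);
        int ok = do_sensitive_check(input);
        clock_gettime(CLOCK_MONOTONIC, &end);

        int64_t spent_ns = (end.tv_sec - start.tv_sec) * 1000000000LL
                         + (end.tv_nsec - start.tv_nsec);
        int64_t left_ns = budget_ns - spent_ns;
        if (left_ns > 0) {
            pause.tv_sec  = left_ns / 1000000000LL;
            pause.tv_nsec = left_ns % 1000000000LL;
            nanosleep(&pause, NULL);  /* accuracy limited by timer resolution */
        }
        return ok;
    }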

The downside is that while the solution given in the OP uses a constant amount of time, that constant amount of time isn't itself fixed: as processors get faster over the years, the OP's code can execute faster, but the sleep-padded version above cannot.




> As processors get faster over the years, the OP's code can execute faster, but the sleep-padded version above cannot.

Sure it can. You can drop the limit whenever you change processors. In fact, you can tune the limit for a given call to be the max time taken by that call. As the implementation or resources change, the limit changes.


You're better off just using a constant-time algorithm which uses the same number of operations for all inputs. Few systems, if any, have sub-nanosecond-resolution timers, let alone thread schedulers which have quanta of that size.


> You're better off just using a constant-time algorithm which uses the same number of operations for all inputs.

That's easier said than done. Also, you have to come up with one for every security-relevant operation.

And "algorithm" isn't really the issue; the implementation is. Given caching and the like, a genuinely constant-time implementation is really hard to achieve.

An easy way to implement the "sleep to worst case" idea is to simply time each call and keep track of the max. If a given call instance returns in less time, sleep the difference. (It's probably good to add a decay mechanism.)
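One possible shape for that bookkeeping in C (pad_call(), the global maximum, and the decay rate are all illustrative choices, not something prescribed in the thread):

    #define _POSIX_C_SOURCE 199309L
    #include <stdint.h>
    #include <time.h>

    static int64_t max_seen_ns = 0;  /* worst case observed so far */

    static int64_t elapsed_ns(struct timespec a, struct timespec b)
    {
        return (b.tv_sec - a.tv_sec) * 1000000000LL + (b.tv_nsec - a.tv_nsec);
    }

    /* Time the call, remember the slowest run, and sleep the difference
       so every call appears to take as long as the worst one.  A mild
       decay keeps one freak outlier from padding everything forever. */
    int pad_call(int (*fn)(void))
    {
        struct timespec t0, t1, pause;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        int result = fn();
        clock_gettime(CLOCK_MONOTONIC, &t1);

        int64_t spent = elapsed_ns(t0, t1);
        if (spent > max_seen_ns)
            max_seen_ns = spent;

        int64_t gap = max_seen_ns - spent;
        if (gap > 0) {
            pause.tv_sec  = gap / 1000000000LL;
            pause.tv_nsec = gap % 1000000000LL;
            nanosleep(&pause, NULL);
        }

        max_seen_ns -= max_seen_ns / 1000;  /* decay ~0.1% per call */
        return result;
    }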


Perhaps it'll mean more if someone else says it:

Although nanosleep() takes a timespec struct that allows you to give an accuracy with nanosecond precision, nanosleep doesn't really give anywhere near that level of accuracy. Under Linux, the kernel only checks on timers once every "jiffie," which defaults to 1 ms under the 2.6 kernel (on older kernels it was 10 ms, and 10 ms is still a compile option in the 2.6 kernel). Once upon a time, nanosleep would busy-wait if the sleep time given was less than 2 ms, but not anymore. On the other hand, don't be too discouraged—as long as you don't need submillisecond accuracy, nanosleep() is very good…

http://www.ddj.com/linux-open-source/184402031
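If you want to see that granularity on your own machine, a tiny probe like the following works (illustrative only; on a modern tickless/hrtimer kernel the overshoot can be far smaller than a jiffie):

    #define _POSIX_C_SOURCE 199309L
    #include <stdio.h>
    #include <time.h>

    /* Ask nanosleep for 100 microseconds and print how long it really took.
       On a tick-based kernel the overshoot tends to land near the jiffie. */
    int main(void)
    {
        struct timespec req = {0, 100 * 1000};  /* 100 us */
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        nanosleep(&req, NULL);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        long actual_ns = (t1.tv_sec - t0.tv_sec) * 1000000000L
                       + (t1.tv_nsec - t0.tv_nsec);
        printf("asked for 100000 ns, slept %ld ns\n", actual_ns);
        return 0;
    }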


That means that any sleep will disguise a call's duration down to jiffie resolution, which is a good thing. (If the call's variance is small enough, that may mean there's no need to figure out how long to sleep at all.)

Note that caching and branch prediction and the like make "constant time" implementations almost impossible.


No, it will add millisecond-accuracy jitter to the timing of your algorithm. And the attacker will continue to be able to extract microsecond-accuracy data from your timing attack. You need profoundly accurate hardware and a real-time operating system. Your attacker needs… more measurements.

Caching is equally unlikely to play a role. You're probably thinking of DJB's timing attacks on AES (http://cr.yp.to/antiforgery/cachetiming-20050414.pdf), but that attack revolves around the timing of S-box lookups in which the index of the lookup is key-dependent. In the constant-time algorithm proposed by Nate Lawson and others, the only array lookup is indexed by the incrementing counter of a for loop (i.e., not data-dependent). The other operations are XOR and OR, both of which are constant-time.

Branch prediction doesn't come into play because a constant-time comparison doesn't involve conditionals until all the data has been compared.

A constant-time implementation of AES? Hard. A constant-time implementation of an array comparison? Dead simple.
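For reference, the dead-simple comparison being described looks roughly like this (a sketch of the usual XOR/OR accumulator, not any particular library's implementation):

    #include <stddef.h>
    #include <stdint.h>

    /* Compare two equal-length byte buffers without early exit: the loop
       always touches every byte, only the accumulated OR of the XORs
       decides the answer, and the single branch happens after the loop. */
    int constant_time_equal(const uint8_t *a, const uint8_t *b, size_t len)
    {
        uint8_t diff = 0;
        for (size_t i = 0; i < len; i++)
            diff |= a[i] ^ b[i];
        return diff == 0;
    }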

Not sure why you're so resistant to the idea.


Are you sure that (1.0 - time_spent) executes in constant time? Or that sleep() is precise enough?



