# Re: Random delay as a countermeasure to timing attacks

Francois Grieu <fgrieu@xxxxxxxxx> writes:

I'm trying to quantitatively evaluate, using
information-theoretic arguments, how efficient the addition of
random delays is as a countermeasure against timing
attacks on cryptographic algorithms, e.g. DJB's
http://cr.yp.to/papers.html#cachetiming

I assume the added random delays are independent, truly random,
with a continuous distribution fully known to the adversary,
and are the only randomness in the adversary's measurements.

One of the key features of an algorithm is that it be fast. So you want to
slow it down. And you need to slow it down by much more than the slowest
part, so that your random slowdown can swamp the significant difference
between the fastest and slowest runs. It is far better to design the
algorithm properly so that all runs take the same amount of time than to
slow it down by a factor of 5 and still not be certain that a statistical
analysis cannot differentiate between the slowest and fastest cases.
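The "swamping" problem can be seen in a toy simulation (all numbers here are made up for illustration): even a random delay with a mean 50 times larger than the secret-dependent difference does not hide that difference from an adversary who can average many runs.

```python
import random
import statistics

random.seed(1)

def measure(leak):
    """One timing measurement: a secret-dependent duration plus a
    uniform random delay on [0, 500) (hypothetical cycle counts)."""
    base = 100 + leak          # 100 cycles, or 105 when the leak fires
    delay = random.uniform(0, 500)
    return base + delay

n = 100_000
mean_fast = statistics.mean(measure(0) for _ in range(n))
mean_slow = statistics.mean(measure(5) for _ in range(n))

# The 5-cycle leak reappears in the difference of averages, despite
# the added delay being two orders of magnitude larger.
print(mean_slow - mean_fast)
```

The random delay only raises the *number* of measurements needed; it does not remove the signal.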

What are the appropriate hypotheses under which the
number of measurements necessary to carry out an attack
scales as the square of the mean random delay, when we
scale up our distribution? Of course, that can only hold
up to some point, beyond which other attacks such as brute force
become an issue.
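A back-of-envelope argument for the quadratic scaling (under the usual independent-noise assumptions): to distinguish two durations differing by delta under noise of standard deviation sigma, the standard error of the mean, sigma/sqrt(n), must drop below roughly delta, giving n ~ (sigma/delta)^2. Since sigma scales with the mean delay for a fixed distribution shape, doubling the delay quadruples n.

```python
import math

def samples_needed(sigma, delta):
    """Rough sample count for sigma/sqrt(n) to reach delta,
    i.e. n ~ (sigma/delta)**2 (order-of-magnitude sketch only)."""
    return math.ceil((sigma / delta) ** 2)

delta = 5.0
print(samples_needed(100.0, delta))   # baseline noise level
print(samples_needed(200.0, delta))   # doubled noise: 4x the samples
```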

Can we conjecture with some confidence that it is enough
that the standard deviation of the mean of n of our delays
exceeds the maximum (or maybe even the standard) deviation in
one execution of the attacked algorithm attributable to the
timing effect the adversary uses, to ensure that she
learns at most one (or few enough) bits of information
from all her n measurements?
[To back this conjecture, I rely on the fact that a
weighted sum of n measurements, with the sum of the absolute
values of the coefficients normalized to 1, has a standard
deviation at least equal to that of the average of the
n measurements.]
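The bracketed fact can be checked numerically. Assuming i.i.d. noise of variance sigma^2, a linear combination sum(a_i * X_i) has variance sigma^2 * sum(a_i^2), and Cauchy-Schwarz gives sum(a_i^2) >= 1/n whenever sum(|a_i|) = 1, with equality for the plain average a_i = 1/n:

```python
import random

n = 10
sigma2 = 1.0   # assumed noise variance of each measurement

def combo_var(weights):
    """Variance of sum(a_i * X_i) for i.i.d. X_i with variance sigma2."""
    return sigma2 * sum(w * w for w in weights)

avg = [1.0 / n] * n                    # the plain average

random.seed(0)
raw = [random.random() for _ in range(n)]
s = sum(raw)
other = [w / s for w in raw]           # arbitrary weights, |a_i| summing to 1

print(combo_var(avg))                  # sigma^2 / n, the minimum
print(combo_var(other) >= combo_var(avg))
```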

Are actual attacks even close to this conjectured
information-theoretic threshold?

If we use a uniformly distributed random delay, the best
estimator from measurements of the SAME actual duration is
not the average of all the experiments (the average
of the extremes is better); is that a good reason to use another
distribution? Which distribution?
Is it enough to sum two uniform delays, each half as long?
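The point about the extremes can be illustrated with a simulation (toy parameters assumed): for a uniform delay on [0, w], the midrange estimator (min + max)/2 - w/2 has error on the order of w/n, converging much faster than the sample mean's w/sqrt(12 n).

```python
import random
import statistics

random.seed(2)
true_t, w, n = 100.0, 50.0, 10_000     # hypothetical duration and delay width
obs = [true_t + random.uniform(0, w) for _ in range(n)]

# Both estimators correct for the delay's known mean w/2.
mean_est = statistics.mean(obs) - w / 2
midrange_est = (min(obs) + max(obs)) / 2 - w / 2

print(abs(mean_est - true_t))       # error on the order of w/sqrt(12 n)
print(abs(midrange_est - true_t))   # error on the order of w/n, much smaller
```

This fast convergence comes from the hard edges of the uniform distribution; a distribution with smooth tails (e.g. the triangular one obtained by summing two half-length uniforms) removes the edges the midrange estimator exploits.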

Any idea or pointer much appreciated.

François Grieu