# Re: cold boot attack question

On 25.09.2008, Kusigrosz <tvkerase0196@xxxxxxxxxxxx> wrote:

> The reason I thought about those simple and very fast operations is
> that, for a given time limit to restore the expanded key, the speed
> of the algorithm translates to the area of memory the expanded key
> can occupy, and so to the expected number of decayed bits.

You do have a point there.

Still, the speed difference doesn't have to be that much. For
example, compare the following two loops:

uint32_t buf[BUFSIZE], sum = 0;
for (size_t i = 0; i < BUFSIZE; i++) {
    sum += buf[i];
}

and

uint32_t buf[BUFSIZE], sum = 0, mul = 1;
for (size_t i = 0; i < BUFSIZE; i++) {
    uint32_t tmp = buf[i] * mul;
    mul *= 0x4e45a683; /* random, 3 mod 8 */
    sum ^= sum >> 7;   /* pulled out of a hat */
    sum += tmp;
}

On my computer[1], the former takes about 1.7 ms to sum four megabytes
(BUFSIZE = 1<<20) of data. The latter takes about 2.7 ms, or about
one and a half times as long as the former.

So using the former code would let you increase the data size by about
50%, at the cost that decays in the same or nearby bit positions
combine linearly. The latter code, meanwhile, _ought_ to achieve a fairly
good approximation of perfect mixing (though I haven't actually
subjected it to any statistical tests); if there's any major weakness,
it's in the high bits, whose mixing is entirely due to the shift-xor.

[1]: AMD Athlon64 X2 dual core, 2.2 GHz, compiled with gcc 4.2.3,
target i486-linux-gnu, -O3

> What about generating the chunk key from the position of the given
> chunk on the disk (encrypting it with K1 and K2 before use)? Then
> there would be no need to store it at all.

Yes, that should work too. It has the (minor) weakness that the same
chunk in the same position always encrypts to the same value -- but
that weakness is shared by _every_ disk encryption system that doesn't
salt chunks somehow.
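As a sketch of the idea -- note that the splitmix64-style mixer below
is only a toy stand-in for "encrypt the position with K1"; it is not a
secure PRF, and a real system would use a proper block cipher here.
The function names are mine, purely for illustration:

```c
#include <stdint.h>

/* Toy keyed mixer (splitmix64-style finalizer) standing in for a
 * real block cipher keyed with K1.  NOT cryptographically secure. */
static uint64_t toy_prf(uint64_t key, uint64_t x)
{
    uint64_t z = x ^ key;
    z += 0x9e3779b97f4a7c15u;
    z = (z ^ (z >> 30)) * 0xbf58476d1ce4e5b9u;
    z = (z ^ (z >> 27)) * 0x94d049bb133111ebu;
    return z ^ (z >> 31);
}

/* Per-chunk key derived purely from the master key and the chunk's
 * position on disk, so no per-chunk key material needs storing. */
static uint64_t chunk_key(uint64_t k1, uint64_t chunk_index)
{
    return toy_prf(k1, chunk_index);
}
```

Since every step of the mixer is a bijection, distinct positions are
guaranteed distinct chunk keys under the same master key; but as noted
above, the same plaintext in the same position still always encrypts
to the same value.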

--
Ilmari Karonen