Re: Randomness using computers



On 2011-01-08, Maaartin <grajcar1@xxxxxxxxx> wrote:
> On Jan 8, 8:16 am, unruh <un...@xxxxxxxxxxxxxxxxxxxxxxx> wrote:
>> On 2011-01-08, Maaartin <grajc...@xxxxxxxxx> wrote:
>> Unfortunately the precision means nothing. The system clock is not
>> accurate to nanoseconds. If you try you will find that the times have
>> huge gaps between the possible values of the time.

> Forget the system clock, I was speaking about
> http://en.wikipedia.org/wiki/Rdtsc
> which ticks with every single CPU clock cycle (or thereabouts). It's not
> really good for measuring time intervals, and it's obsolete for
> profiling, but the resolution is very high.
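
For reference, reading the counter takes only a couple of lines (a
minimal sketch, assuming x86 and GCC or Clang, where __rdtsc() comes
from x86intrin.h). Note that on many modern CPUs the TSC runs at a
constant rate rather than stepping with every core cycle:

/* Read the TSC twice and look at the delta. */
#include <stdio.h>
#include <x86intrin.h>

int main(void)
{
    unsigned long long t0 = __rdtsc();
    unsigned long long t1 = __rdtsc();
    printf("tsc = %llu, delta = %llu cycles\n", t0, t1 - t0);
    return 0;
}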

>>> Why not? I write a simple crazy multithreaded program reading a couple
>> Because your attacker knows that multithreaded program.

> But he doesn't know all the circumstances influencing its behavior.
> The system is about as easy to predict as the climate, although there
> are not so many liars making their money off of this.

>>> of files and hashing all the measured times. The program's behavior
>>> depends on the task scheduler and indirectly on all the times obtained
>>> so far. The system is IMHO too complex to be analyzed easily.

>> No need. You just run it to find out the possibilities.

> Ever tried?
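
To make this concrete, here is a minimal sketch of the kind of toy
program under discussion. The file names are placeholders, and the
mixing function is NOT a cryptographic hash; a real collector would
feed the samples into something like SHA-256. It assumes POSIX threads
and x86 (__rdtsc() from x86intrin.h); compile with -pthread.

#include <stdio.h>
#include <pthread.h>
#include <x86intrin.h>

static unsigned long long pool = 0;          /* toy entropy pool */
static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

/* Toy mixer: NOT a cryptographic hash, just enough to smear bits.
   A real design would use SHA-256 or similar here. */
static void mix(unsigned long long x)
{
    pthread_mutex_lock(&pool_lock);
    pool ^= x;
    pool *= 0x9E3779B97F4A7C15ULL;           /* odd 64-bit multiplier */
    pool ^= pool >> 29;
    pthread_mutex_unlock(&pool_lock);
}

/* Each thread reads a file and timestamps every read with the TSC;
   the jitter comes from I/O and the task scheduler. */
static void *reader(void *arg)
{
    const char *path = arg;
    char buf[4096];
    FILE *f = fopen(path, "rb");

    if (!f)
        return NULL;
    while (fread(buf, 1, sizeof buf, f) > 0)
        mix(__rdtsc());
    fclose(f);
    return NULL;
}

int main(void)
{
    /* Hypothetical input files; anything causing I/O jitter will do. */
    const char *paths[] = { "/var/log/syslog", "/etc/services", "/bin/ls" };
    pthread_t tid[3];
    int i;

    for (i = 0; i < 3; i++)
        pthread_create(&tid[i], NULL, reader, (void *)paths[i]);
    for (i = 0; i < 3; i++)
        pthread_join(tid[i], NULL);

    printf("pool: %016llx\n", pool);
    return 0;
}

Whether the scheduler jitter it collects is unpredictable to an
attacker is exactly the question at issue.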

> Again, don't get me wrong: I do NOT propose restarting a PC as a
> source of entropy. Quite the opposite is true: with any activity, and
> especially with user interaction, it gets harder and harder to crack.
> I was speaking about the restart since it's the most easily
> reproducible situation.

>> A random number generator needs to be robust, not "it works sometimes
>> and sometimes crashes."

> That's surely true, although I've no idea how you came to "crashing".
> Of course, I always prefer using /dev/urandom to any such toy program;
> however, my claim is that such a toy program may be strong enough.

The problem is not that your procedure is completely predictable, but
that the predictability is difficult to gauge. I believe that your
procedure will deliver a few bits of randomness which are truly
unpredictable. Just that most of it is not. /dev/urandom uses things
like that to help seed itself, but makes conservative assumptions about
how much randomness it really delivers, and uses a number of different
sources to spread the load. Note that /dev/urandom is far, far easier to
use than your "toy program".
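
For comparison, using it takes only a few lines (a sketch, assuming a
Linux-like system that provides /dev/urandom):

#include <stdio.h>

int main(void)
{
    unsigned char buf[16];
    size_t i;
    FILE *f = fopen("/dev/urandom", "rb");

    if (!f) {
        perror("/dev/urandom");
        return 1;
    }
    if (fread(buf, 1, sizeof buf, f) != sizeof buf) {
        fclose(f);
        return 1;
    }
    fclose(f);
    for (i = 0; i < sizeof buf; i++)
        printf("%02x", buf[i]);    /* print 128 random bits as hex */
    putchar('\n');
    return 0;
}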

> Are there any pointers in the literature about analyzing it?
It would be pretty tough to do, since one has to make so many
assumptions about the load on the machine, the architecture of the CPU,
etc. E.g., if the machine is only running your program, the timing is
probably completely predictable. If running on a virtual machine, it is
perhaps unpredictable to microseconds, or even milliseconds. Do you
really want your users to sit there figuring out the machine load in
order to estimate the randomness they are getting?
Having it as one amongst many other sources is probably fine, as those
other sources, properly chosen, could compensate for the times when this
one is really not delivering much in the way of unpredictability.

Anyway, why not do some experiments? Try it and report the results -- on
lots of different machines with lots of different load factors.
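
One possible experiment, as a sketch (x86 and GCC/Clang assumed; link
with -lm): collect consecutive TSC deltas, histogram their low byte,
and report a crude min-entropy estimate, -log2 of the most frequent
value's relative frequency. Comparing the number across machines and
loads would show how much the unpredictability actually varies.

#include <stdio.h>
#include <math.h>
#include <x86intrin.h>

#define N 100000

int main(void)
{
    static unsigned long count[256];   /* histogram of low delta bytes */
    unsigned long max = 0;
    unsigned long long prev = __rdtsc();
    int i;

    for (i = 0; i < N; i++) {
        unsigned long long now = __rdtsc();
        count[(now - prev) & 0xff]++;  /* low byte of consecutive delta */
        prev = now;
    }
    for (i = 0; i < 256; i++)
        if (count[i] > max)
            max = count[i];
    /* crude min-entropy: -log2(max relative frequency) */
    printf("min-entropy estimate: %.2f bits per sample\n",
           -log2((double)max / N));
    return 0;
}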
