Re: RSA keygen recommendations

On Jan 25, 6:49 am, Tom St Denis <t...@xxxxxxx> wrote:
On Jan 24, 6:10 pm, Samuel Neves <samuelnev...@xxxxxxxxx> wrote:

On Jan 22, 6:32 pm, Tom St Denis <t...@xxxxxxx> wrote:

Looking at X9.31 and of course the NIST DSS spec, it seems they still
promote a contrived key generation process for RSA keys despite the
fact that p ± 1 attacks are no longer viable.  I get that there is
some value in deterministic key generation, but couldn't they just
say: seed some PRNG, generate two large numbers, and test for
primality?  Also, they still cite 1/4 for the error bound of MR, but
I've read [and should look up again] that for random bases or numbers
the bounds are in fact much tighter [and lower] than that.

Is this just a case of X9.31 not ever getting updated and/or used, or
are there valid reasons to use the contrived key gen processes?
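For concreteness, the straightforward process described in the question (seed a CSPRNG, draw random candidates, test with Miller-Rabin) can be sketched as below. This is an illustrative sketch, not the X9.31 procedure; the round count, bit sizes, and helper names are arbitrary choices.

```python
import secrets

def is_probable_prime(n, rounds=40):
    """Miller-Rabin primality test with uniformly random bases."""
    if n < 2:
        return False
    # Quick trial division by small primes.
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = secrets.randbelow(n - 3) + 2   # random base in [2, n-2]
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                    # a witnesses that n is composite
    return True

def random_prime(bits):
    """Draw random odd candidates with the top bit set until one passes."""
    while True:
        cand = secrets.randbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(cand):
            return cand

# An RSA modulus from two random primes, with no "strong prime" shaping.
p = random_prime(512)
q = random_prime(512)
n = p * q
```

With random bases the per-round failure probability is at most 1/4 in the worst case, so even a modest round count drives the error below any practical threshold.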


There is no mathematical reason to generate "strong primes" in RSA key
generation. See:

The averages for random numbers in Miller-Rabin are described in:

What I don't get, then, is if this paper was published in '93, why
does rDSA from DSS and X9.31 recommend 8+ rounds of MR plus the LL
test?

Thanks for the citations, though.  I think the results from the paper
on MR are also to be found in HAC [at least that's where I seem to
recall reading it].
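To make the bounds discussion concrete: the 1/4 figure is a worst-case per-round bound on the fraction of "strong liar" bases, nearly attained by some special composites, while many composites have far fewer liars (and the average-case results for random candidates are much stronger still). A small exhaustive count, my own illustration rather than anything from the cited papers:

```python
def strong_liars(n):
    """Count bases a in [1, n-1] for which odd composite n passes
    a single round of Miller-Rabin (i.e. a is a strong liar)."""
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    liars = 0
    for a in range(1, n):
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            liars += 1
            continue
        for _ in range(s - 1):
            x = (x * x) % n
            if x == n - 1:
                liars += 1
                break
    return liars

# 91 = 7*13 comes close to the 1/4 worst case, while 105 = 3*5*7 and
# the Carmichael number 561 have only a handful of strong liars.
for n in (91, 105, 561):
    print(n, strong_liars(n), (n - 1) // 4)
```

The theorem guarantees at most roughly (n-1)/4 liars for any odd composite n > 9; the spread between 91 and 105 above is the gap the average-case analyses exploit.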

One needs to be aware of the political situation that was present
inside of ANSI X9F at that time.

The committee consisted of a small number of cryptographers (me, Paul,
Matt Wiener was sometimes there, a competent rep from the NSA), a
large number of bank representatives, plus a representative from
Certicom who shall remain nameless (but whom I respect).

There were a lot of political roadblocks to getting X9.31 established
as a standard.  Certicom clearly wanted to keep it from ever becoming
a standard; RSA was their competition.

A number of people kept bringing up "red herring" reasons for putting
all kinds of obstacles into the standard.

The bankers wanted "strong primes" because it gave them a warm fuzzy
feeling.  They didn't understand the math behind elliptic curve
factoring methods, nor the reason why ECM made Pollard P-1 and Pollard
P+1 obsolete.  The representative went along with this, of course.
The feeling was "generating strong primes is cheap; why not do it
anyway?"  In fact, for years before I became involved, the standard
had been STALLED precisely because no one knew a way to generate
so-called strong primes quickly.  I gave them a method to do it.
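For readers wondering what "strong primes" were meant to defend against: Pollard's P-1 method factors n quickly when some prime factor p has p-1 smooth, which is why the standards wanted p-1 (and p+1) to have a large prime factor. A textbook sketch, my own illustration; the example modulus is chosen so that 65537 - 1 = 2^16 is smooth while 1019 - 1 = 2 * 509 is not:

```python
import math

def pollard_p_minus_1(n, bound):
    """Textbook Pollard P-1: finds a prime factor p of n whenever
    p - 1 is built entirely from primes (with multiplicity) below bound."""
    a = 2
    for j in range(2, bound):
        a = pow(a, j, n)        # a = 2^(2*3*...*(bound-1)) mod n
    g = math.gcd(a - 1, n)
    return g if 1 < g < n else None

# 65537 - 1 = 2^16 is very smooth; 1019 - 1 = 2 * 509 has a large
# prime factor, so only 65537 is exposed to the attack.
n = 65537 * 1019
print(pollard_p_minus_1(n, bound=20))   # finds 65537
```

ECM made this requirement moot: it succeeds when the order of a random elliptic curve group near p is smooth, a condition key generation cannot control, so shaping p ± 1 buys essentially nothing.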

It also gave people a warm fuzzy feeling to require multiple MR tests
for PRP as well as the LL test.  The claim was: "why not do it, since
the cost of a few extra tests is small?" [keys do not get generated
all that often].

For the sake of getting a standard accepted and in place, my boss
(Burt) basically told me to give in on all these technical points.
RSA Security needed to have a standard put in place.

BTW, for those who have not guessed, I was Ron Rivest's co-author on
the strong prime paper.  And I was one of the main authors of X9.31.

You needed to have been part of the politics to really understand what
was going on.
