Re: Not helpful statements from recent papers.....
 From: pubkeybreaker <pubkeybreaker@xxxxxxx>
 Date: Wed, 26 Aug 2009 05:25:46 -0700 (PDT)
On Aug 26, 4:27 am, Samuel Neves <samuelnev...@xxxxxxxxx> wrote:
On Aug 25, 2:37 pm, pubkeybreaker <pubkeybrea...@xxxxxxx> wrote:
On Aug 25, 9:19 am, Tom St Denis <t...@xxxxxxx> wrote:
On Aug 25, 8:56 am, pubkeybreaker <pubkeybrea...@xxxxxxx> wrote:
1024 bits is secure for at least 10 years (barring an algorithmic
improvement in factoring). We can barely do 768 bits today (in
progress; it is a very, very large effort). 1024 bits is about 1200
times harder (in time) than 768 bits and requires about 35 times as
much memory.
Out of curiosity, why aren't there projects to factor numbers in
smaller increments? Like 640-bit [that is properly balanced...]?
We have already done RSA-200 (i.e., about 664 bits). People are
shooting for 768 and skipping 736 and 672 bits because 768 is a major
milestone.
If the goal is to see how the GNFS scales as an algorithm and on tech
as it emerges, wouldn't it be better to have more datapoints?
We have many, many datapoints from 400 to 650 bits. It would indeed
be better to have more datapoints above 650 bits. But one needs the
resources!
Our computer capabilities will need to double about 10 times from what
we have now.
Can anyone see this happening in the next 10 years?
I thought the bottleneck was the backend and specifically the memory
bandwidth? Or is the sieving step more significant in terms of time?
We 'only' need memory to increase by 5 doublings.......
The linear algebra and sieving take roughly the same time
asymptotically. Memory is more of a bottleneck right now. Even if we
had code to do the sieving for 1024 bits, we would not have enough
individual sieving processors with enough memory. Even today, systems
with more than 8 GB of memory are rare.
And unlike sieving, where machines can run independently (and be
unavailable for periods of time), the linear algebra requires a
tightly coupled group of machines. Furthermore, during the LA, one
machine being unavailable can stall ALL of the others.
Availability of tightly coupled systems with large memory is very
limited. Also, such machines generally have more important
applications to run.
I won't claim to be a factoring expert, but last I knew the memory
requirements for GNFS are the sqrt() of the time it takes. A 1024-bit
modulus needs 1 TB of tightly coupled memory and 2^86 time.
1 TB is an underestimate.
Is there a "back of the envelope" method of estimating the memory from
the time?
Yes. Memory requirements scale linearly with the square root of the
time (as you have stated). Take the reported memory for any benchmark
over 600 bits and scale accordingly.
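That scaling rule can be sanity-checked against the textbook GNFS
heuristic. A minimal sketch in Python, assuming the standard
L_n[1/3, (64/9)^(1/3)] running-time estimate (the absolute values are
meaningless; only the 1024-vs-768 ratios matter here):

```python
import math

def gnfs_work(bits):
    """Heuristic GNFS effort: L_n[1/3, c] = exp(c * (ln n)^(1/3) * (ln ln n)^(2/3))."""
    ln_n = bits * math.log(2)          # natural log of an n-bit modulus
    c = (64.0 / 9.0) ** (1.0 / 3.0)    # standard GNFS constant, ~1.923
    return math.exp(c * ln_n ** (1.0 / 3.0) * math.log(ln_n) ** (2.0 / 3.0))

# Relative effort for 1024 vs 768 bits; memory scales as sqrt(time).
time_ratio = gnfs_work(1024) / gnfs_work(768)
mem_ratio = math.sqrt(time_ratio)
print(f"time:   ~{time_ratio:.0f}x harder")   # roughly 1200x
print(f"memory: ~{mem_ratio:.0f}x larger")    # roughly 35x
```

The two ratios land near the "about 1200 times harder, about 35 times
the memory" figures quoted earlier in the thread.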
Kleinjung et al. recently made available a preprint detailing
estimates for factoring a 1024-bit modulus, based mostly on the
ongoing RSA-768 factorization: http://eprint.iacr.org/2009/389
There is a MAJOR error in the above paper. It validates the statement
I made that RSA-1024 would be about 1200 times harder and require
about 30 times the space. But it also says:

"For a 768-bit RSA modulus 2 gigabytes of RAM per core works nicely,
half of it still works too but is noticeably less efficient. For 1024
bits, 2 gigabytes of RAM per core would cut it too closely, but
memories of 32 or 64 gigabytes per multicore or multithreaded
processor (or even per mainboard) would work."

This quote is in direct contradiction to their own remarks (with which
I agree) that space requirements scale linearly with sqrt(run time).
Since 768 bits requires 2 GB of RAM for the siever, and since the
factor base sizes will need to increase by a factor of ~30 for 1024
bits, there is NO WAY that 1024 bits can be sieved in 2 GB per core.
A substantial fraction of the space required by the siever is taken up
by the factor bases and the roots of the two polynomials modulo those
primes.
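The contradiction is simple arithmetic. A back-of-the-envelope sketch,
using only the figures stated in this thread (2 GB per core for
768-bit sieving, factor bases growing by roughly 30x for 1024 bits):

```python
# Figures taken from the post above; nothing here is measured.
ram_768_gb = 2.0            # per-core siever RAM that "works nicely" at 768 bits
factor_base_growth = 30.0   # approximate factor-base size increase for 1024 bits

# If factor bases and polynomial roots dominate siever memory,
# per-siever RAM should grow roughly in proportion:
ram_1024_gb = ram_768_gb * factor_base_growth
print(f"estimated siever RAM for 1024 bits: ~{ram_1024_gb:.0f} GB")
```

The ~60 GB result is in line with the paper's own "32 or 64 gigabytes
per ... mainboard" remark, and far beyond 2 GB per core.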