Re: Not helpful statements from recent papers.....

On Aug 25, 8:56 am, pubkeybreaker <pubkeybrea...@xxxxxxx> wrote:
1024 bits is secure for at least 10 years.  (Barring an algorithmic
breakthrough in factoring).  We can barely do 768 bits today (in
progress; it is a very, very, very large effort).  1024 bits is about
1200 times harder (in time) than 768 bits and requires about 35 times
as much memory.
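As a rough sanity check of those ratios, here is a sketch using the
standard heuristic GNFS complexity L_N[1/3, (64/9)^(1/3)] and the
memory ~ sqrt(time) rule of thumb mentioned later in this thread.
This is back-of-the-envelope only, not an operational estimate:

```python
import math

def gnfs_work(bits):
    """Heuristic GNFS running-time exponent for N ~ 2^bits:
    ln L_N[1/3, c] = c * (ln N)^(1/3) * (ln ln N)^(2/3),
    with c = (64/9)^(1/3).  Returns ln(L) so ratios stay finite."""
    ln_n = bits * math.log(2)
    c = (64 / 9) ** (1 / 3)
    return c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)

# Time ratio between 1024-bit and 768-bit moduli
time_ratio = math.exp(gnfs_work(1024) - gnfs_work(768))
# Memory is heuristically ~ sqrt(time), so the memory ratio is its sqrt
mem_ratio = math.sqrt(time_ratio)

print(f"time ratio   ~ {time_ratio:.0f}")   # on the order of 1200
print(f"memory ratio ~ {mem_ratio:.0f}")    # on the order of 35
```

The heuristic ignores constant factors and o(1) terms, so only the
ratios are meaningful, and they come out close to the ~1200x and ~35x
figures quoted above.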

Out of curiosity, why aren't there projects to factor numbers in
smaller increments, like 640-bit [that is properly balanced...]? If
the goal is to see how GNFS scales, both as an algorithm and on
hardware as it emerges, wouldn't it be better to have more datapoints?

Our computing capabilities will need to double about 10 times from
what we have now.
Can anyone see this happening in the next 10 years?

I thought the bottleneck was the back end, specifically the memory
bandwidth? Or is the sieving step more significant in terms of time?

I won't claim to be a factoring expert, but last I knew the memory
requirement for GNFS is roughly the sqrt() of the time it takes.
1024-bit needs 1TB of tightly coupled memory and 2^86 time.

1TB is an under-estimate.

Is there a "back of the envelope" method of estimating the memory from
the time?
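The sqrt() relationship quoted above is itself the back-of-the-envelope
method: halve the time exponent to get a memory exponent. A minimal
sketch, where the bytes-per-entry figure is my own assumption (real
GNFS matrices are sparse and far more complicated):

```python
import math

def memory_from_time(time_log2, bytes_per_entry=1):
    """Back-of-the-envelope: memory ~ sqrt(time), i.e. halve the
    base-2 exponent of the time estimate.  bytes_per_entry is a
    placeholder guess for the cost of each stored matrix entry."""
    mem_log2 = time_log2 / 2
    total_bytes = bytes_per_entry * 2 ** mem_log2
    return mem_log2, total_bytes

# Using the 2^86-time figure for 1024-bit quoted earlier:
exp2, nbytes = memory_from_time(86, bytes_per_entry=1)
print(f"~2^{exp2:.0f} entries, ~{nbytes / 2**40:.0f} TB at 1 byte each")
```

At 1 byte per entry this gives about 8 TB, which at least agrees with
the remark that 1TB is an under-estimate.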