Re: Not helpful statements from recent papers.....

On Aug 26, 4:27 am, Samuel Neves <samuelnev...@xxxxxxxxx> wrote:
On Aug 25, 2:37 pm, pubkeybreaker <pubkeybrea...@xxxxxxx> wrote:

On Aug 25, 9:19 am, Tom St Denis <t...@xxxxxxx> wrote:

On Aug 25, 8:56 am, pubkeybreaker <pubkeybrea...@xxxxxxx> wrote:

1024 bits is secure for at least 10 years (barring an algorithmic
breakthrough in factoring).  We can barely do 768 bits today (in
progress; it is a very, very large effort).  1024 bits is about 1200
times harder (in time) than 768 bits and requires about 35 times as
much memory.
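As a sanity check (my own back-of-the-envelope, not a figure from any paper), the standard heuristic GNFS running time L_n[1/3, (64/9)^(1/3)] reproduces both ratios quoted above:

```python
import math

def gnfs_l(bits):
    """Log of the heuristic GNFS cost L_n[1/3, (64/9)^(1/3)] for an
    n-bit modulus; ratios of costs are exp() of differences."""
    ln_n = bits * math.log(2)
    return (64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)

time_ratio = math.exp(gnfs_l(1024) - gnfs_l(768))  # ~1200x more work
mem_ratio = math.sqrt(time_ratio)                  # memory ~ sqrt(time), ~35x
print(round(time_ratio), round(mem_ratio))
```

The constant in front of the L-function is ignored here; it cancels when taking the ratio of two cost estimates.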

Out of curiosity, why aren't there projects to factor numbers in
smaller increments?  Like 640-bit [that is properly balanced...]?

We have already done RSA-200  (i.e. about 664 bits).  People
are shooting for 768 and skipping 736 and 672 bits because 768
is a major milestone.

If the goal is to see how the GNFS scales as an algorithm and on
hardware as it emerges, wouldn't it be better to have more datapoints?

We have many, many datapoints from 400 to 650 bits.  It would indeed
be better to have more datapoints above 650 bits.  But one needs the
computational resources to run such efforts.

Our computer capabilities will need to double about 10 times from what
we have now.
Can anyone see this happening in the next 10 years?

I thought the bottleneck was the backend and specifically the memory
bandwidth?  Or is the sieving step more significant in terms of time?

We 'only' need memory to increase by 5 doublings.......
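The "double about 10 times" and "5 doublings" figures follow directly from the ~1200x time and ~35x memory factors stated earlier (this is just my arithmetic check, not anything from the thread):

```python
import math

# Express the 1024-vs-768-bit ratios as numbers of doublings.
time_doublings = math.log2(1200)  # ~10.2 -> "double about 10 times"
mem_doublings = math.log2(35)     # ~5.1  -> "5 doublings" of memory
print(round(time_doublings, 1), round(mem_doublings, 1))
```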

The linear algebra and sieving take roughly the same time.

Memory is more of a bottleneck right now.  Even if we had code to do
sieving for 1024 bits, we would not have enough individual sieving
machines with enough memory.  Even today, systems with more than 8 GB
of memory are uncommon.

And unlike sieving, where machines can run independently (and be
offline for periods of time), the linear algebra requires a tightly
coupled group of machines.  Furthermore, during the LA, one machine
being unavailable can stall ALL of the others.

Availability of tightly coupled systems with large memory is very
limited.  Also, such machines generally have more important
applications to run.

I won't claim to be a factoring expert, but last I knew the memory
requirements for GNFS are the sqrt() of the time it takes.  1024-bit
needs 1TB of tightly coupled memory and 2^86 time.  

1TB is an under-estimate.

Is there a "back of the envelope" method of estimating the memory from
the time?

Yes.  Memory requirements scale linearly with the square root of the
run time.  (As you have stated.)

Take the reported memory for any benchmark over 600 bits and scale
accordingly.
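To make that scaling rule concrete (my own sketch, taking the 2 GB/core figure reported for the 768-bit sieving as the benchmark):

```python
import math

def gnfs_l(bits):
    # Log of the heuristic GNFS cost for an n-bit modulus.
    ln_n = bits * math.log(2)
    return (64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)

# Benchmark: ~2 GB/core reportedly sufficed for 768-bit sieving.
mem_768_gb = 2.0
scale = math.exp((gnfs_l(1024) - gnfs_l(768)) / 2)  # sqrt of the time ratio
mem_1024_gb = mem_768_gb * scale                    # rough per-core estimate
print(round(mem_1024_gb))
```

On this estimate the per-core memory for 1024 bits comes out around 70 GB, which is the kind of figure that makes "2 GB per core" look implausible.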

Kleinjung et al. recently made available a preprint detailing
estimates for factoring a 1024-bit modulus, based mostly on the
ongoing RSA-768 factorization.

There is a MAJOR error in the above paper.  It validates the statement
I made that RSA-1024 would be about 1200 times harder and require
about 30 times the space.

But it also says:

"For a 768-bit RSA modulus 2 gigabytes of RAM per core works nicely,
half of it still works too but is noticeably less efficient.  For 1024
bits, 2 gigabytes of RAM per core would cut it too closely, but
memories of 32 or 64 gigabytes per multi-core or multi-threaded
processor (or even per mainboard) would work."

This quote is in direct contradiction to their own remarks (with which
I agree) that space requirements scale linearly with sqrt(run time).
Since 768 bits requires 2 GB of RAM for the siever, and since the
factor base sizes will need to increase by a factor of ~30 for 1024
bits, there is NO WAY that 1024 bits can be sieved in 2 GB per core.
A substantial fraction of the space required by the siever is taken up
by the factor bases and the roots of the two polynomials modulo those
primes.
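To give a feel for why the factor bases alone eat a big chunk of a 2 GB siever, here is a rough storage estimate.  Every parameter below (the bound, the bytes per entry) is an illustrative assumption of mine, not a figure from the thread or the preprint:

```python
import math

bound = 2 ** 30                     # hypothetical factor-base bound
n_primes = bound / math.log(bound)  # prime counting: pi(B) ~ B / ln B
bytes_per_entry = 8                 # one prime plus one root, packed
n_polys = 2                         # roots kept for both NFS polynomials

gb = n_primes * bytes_per_entry * n_polys / 2 ** 30
print(round(gb, 2))  # factor-base storage in GiB under these assumptions
```

Even with these modest assumptions the factor bases approach a gigabyte, and growing them by a factor of ~30 clearly cannot fit in 2 GB per core.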