Re: Not helpful statements from recent papers.....



On Aug 26, 4:27 am, Samuel Neves <samuelnev...@xxxxxxxxx> wrote:
On Aug 25, 2:37 pm, pubkeybreaker <pubkeybrea...@xxxxxxx> wrote:





On Aug 25, 9:19 am, Tom St Denis <t...@xxxxxxx> wrote:

On Aug 25, 8:56 am, pubkeybreaker <pubkeybrea...@xxxxxxx> wrote:

1024 bits is secure for at least 10 years.  (Barring an algorithmic
improvement in factoring).  We can barely do 768 bits today (in
progress; it is a very very very large effort).  1024 bits is about
1200 times harder (in time) than 768 bits and requires about 35 times
as much memory.
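
Those ratios follow from the standard heuristic GNFS running-time
estimate L_N[1/3, (64/9)^(1/3)].  A rough Python sketch that drops the
o(1) term, so it is an order-of-magnitude check rather than a cost
model:

from math import exp, log

def gnfs_L(bits, c=(64.0 / 9.0) ** (1.0 / 3.0)):
    # Heuristic GNFS cost L_N[1/3, c] for a modulus N of the given
    # bit length, ignoring the o(1) term (back-of-the-envelope only).
    ln_n = bits * log(2)
    return exp(c * ln_n ** (1.0 / 3.0) * log(ln_n) ** (2.0 / 3.0))

ratio = gnfs_L(1024) / gnfs_L(768)
print(ratio)          # ~1200x more time
print(ratio ** 0.5)   # ~35x more memory, taking memory ~ sqrt(time)

This reproduces the ~1200x time and ~35x memory figures above.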

Out of curiosity, why aren't there projects to factor numbers in
smaller increments?  Like 640-bit [that is properly balanced...]?

We have already done RSA-200  (i.e. about 664 bits).  People
are shooting for 768 and skipping 736 and 672 bits because 768
is a major milestone.

 If
the goal is to see how the GNFS scales as an algorithm and on tech as
it emerges, wouldn't it be better to have more datapoints?

We have many, many datapoints from 400 to 650 bits.  It would indeed
be better to have more datapoints above 650 bits.  But one needs the
resources!

Our computer capabilities will need to double about 10 times from what
we have now.
Can anyone see this happening in the next 10 years?
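
(Ten doublings is 2^10 = 1024, i.e. roughly the ~1200x factor in time
mentioned above.)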

I thought the bottleneck was the backend and specifically the memory
bandwidth?  Or is the sieving step more significant in terms of time?

We 'only' need memory to increase by 5 doublings.......
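
(Five doublings is 2^5 = 32, i.e. roughly the factor of ~30-35 in
memory mentioned above.)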

The linear algebra and sieving take roughly the same time
asymptotically.

Memory is more of a bottleneck right now.  Even if we had code to do
the sieving for 1024-bits, we would not have enough individual sieving
processors with enough memory.  Even today, systems with more than
8 GB of memory are rare.

And unlike sieving, where machines can run independently (and be
unavailable for periods of time), the linear algebra requires a
tightly coupled group of machines.  Furthermore, during the LA, one
machine being unavailable can stall ALL of the others.

Availability of tightly coupled systems with large memory is very
limited.
Also, such machines generally have more important applications to run.

I won't claim to be a factoring expert, but last I knew the memory
requirement for GNFS scales as the sqrt() of the time it takes.
1024-bit needs 1TB of tightly coupled memory and 2^86 time.

1TB is an under-estimate.

Is there a "back of the envelope" method of estimating the memory from
the time?

Yes.  Memory requirements scale linearly with the square root of the
time.
(As you have stated)

Take the reported memory for any benchmark over 600 bits and scale
accordingly.
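
A minimal sketch of that scaling rule, again dropping the o(1) term in
the GNFS estimate; the 2 GB/core sieving figure used for illustration
is the one quoted from the Kleinjung et al. preprint further down:

from math import exp, log

def gnfs_L(bits, c=(64.0 / 9.0) ** (1.0 / 3.0)):
    # Heuristic GNFS cost, o(1) term dropped (rough estimate only).
    ln_n = bits * log(2)
    return exp(c * ln_n ** (1.0 / 3.0) * log(ln_n) ** (2.0 / 3.0))

def scale_memory(reported_gb, from_bits, to_bits):
    # Scale a reported sieving-memory benchmark by sqrt(time ratio).
    return reported_gb * (gnfs_L(to_bits) / gnfs_L(from_bits)) ** 0.5

print(scale_memory(2.0, 768, 1024))   # ~70 GB per core at 1024 bits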

Kleinjung et al. recently made available a preprint detailing
estimates for factoring a 1024-bit modulus, based mostly on the
ongoing RSA-768 factorization.

http://eprint.iacr.org/2009/389

There is a MAJOR error in the above paper.  The paper does validate
the statement I made that RSA-1024 would be about 1200 times harder
and require about 30 times the space.

But it also says:

"For a 768-bit RSA modulus 2 gigabytes of RAM per core works nicely,
half of it still works
too but is noticeably less efficient. For 1024 bits, 2 gigabytes of
RAM per core would cut it
too closely, but memories of 32 or 64 gigabytes per multi-core or
multi-threaded processor
(or even per mainboard) would work."


This quote is in direct contradiction to their own remarks (with which
I agree) that space requirements scale linearly with sqrt(run time).
Since 768 bits requires 2GB of RAM for the siever, and since the
factor base sizes will need to increase by a factor of ~30 for 1024
bits, there is NO WAY that 1024 bits can be sieved in 2GB per core.
A substantial fraction of the space required by the siever is taken up
by the factor bases and the roots of the two polynomials modulo those
primes.
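
(Spelling out the arithmetic behind that objection: applying the
sqrt(run time) rule to the preprint's own 768-bit figure, 2 GB per
core times a factor-base growth of roughly 30 to 35 gives on the order
of 2 x 30 = 60 GB to 2 x 35 = 70 GB of sieving memory at 1024 bits,
which is nowhere near 2 GB per core.)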
