Prime numbers and primality testing is a Restricted Group with 1114 members.
We don't need no stinking factorizer to break RSA... duh!!!!!!!!!
front page of NY Times just featured this paper:

http://eprint.iacr.org/2012/064.pdf

which is hilarious!

Just when you thought you'd seen the ultimate in human stupidity...
--- In primenumbers@yahoogroups.com,
"WarrenS" <warren.wds@...> wrote:

> http://eprint.iacr.org/2012/064.pdf

"among the 4.7 million distinct 1024-bit RSA
moduli that we had originally collected, more
than 12500 have a single prime factor in common"

Hint to hackers: take GCDs with Arjen's database.

David
On Wed, 2012-02-15 at 02:26 +0000, djbroadhurst wrote:
> --- In primenumbers@yahoogroups.com,
> "WarrenS" <warren.wds@...> wrote:
>
> > http://eprint.iacr.org/2012/064.pdf
>
> "among the 4.7 million distinct 1024-bit RSA
> moduli that we had originally collected, more
> than 12500 have a single prime factor in common"
>
> Hint to hackers: take GCDs with Arjen's database.

Easier said than done. First, you have to get Arjen's database. Once you have it, finding factors through GCDs is a nontrivial task. That is the limit of the help which I am prepared to give.

In case you haven't already done so, read the Acknowledgements paragraph.

Paul
On Wed, 2/15/12, Paul Leyland <paul@...> wrote:
> On Wed, 2012-02-15 at 02:26 +0000, djbroadhurst wrote:
> > --- In primenumbers@yahoogroups.com,
> > "WarrenS" <warren.wds@...> wrote:
> >
> > > http://eprint.iacr.org/2012/064.pdf
> >
> > "among the 4.7 million distinct 1024-bit RSA
> > moduli that we had originally collected, more
> > than 12500 have a single prime factor in common"
> >
> > Hint to hackers: take GCDs with Arjen's database.
>
> Easier said than done. First, you have to get Arjen's database.

Presumably you get it from the same places that Arjen got it?

> Once you have it, finding factors through GCDs is a nontrivial
> task. That is the limit of the help which I am prepared to give.

I think I can do it for about the cost of a dozen half-gigabyte GCDs. Looking at some GMP benchmarks and extrapolating, that should be doable in well under a day. I wouldn't be surprised if DJB (the UIC one, not the Open one) didn't document such an algorithm some time in the nineties or noughties, as it's very close to some of the other combine-and-conquer techniques he was looking at.

> In case you haven't already done so, read the Acknowledgements
> paragraph.

Yeah, I spotted that, and was about to drop you a private email, as I was interested in what the title of the paper referred to. So I'll do that here and now instead!

Phil
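The kind of extrapolation Phil alludes to is simple arithmetic: if each doubling of operand size multiplies the measured cost by an empirical factor, scale that factor up by the number of doublings. A hypothetical helper of my own (the 205 s / 2.3-per-doubling figures are the ones Phil quotes later in the thread):

```python
def extrapolate_time(t_measured, doubling_factor, doublings):
    """If each doubling of operand size multiplies the measured cost by
    `doubling_factor` (an empirical figure, not an asymptotic claim),
    a run `doublings` size-doublings larger costs this much."""
    return t_measured * doubling_factor ** doublings

# e.g. a 205 s GCD at 16M digits, growing by ~2.3x per doubling:
# six doublings reach ~1G digits in roughly 2.3^6 * 205 seconds
print(round(extrapolate_time(205, 2.3, 6)))  # 30347, i.e. ~30000 s
```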
On Wed, 2012-02-15 at 15:09 +0000, Phil Carmody wrote:
> > Easier said than done. First, you have to get Arjen's database.
>
> Presumably you get it from the same places that Arjen got it?

That's one way. It's also consistent with my statement that it's easier said than done.

> > Once you have it, finding factors through GCDs is a nontrivial
> > task. That is the limit of the help which I am prepared to give.
>
> I think I can do it for about the cost of a dozen half-gigabyte
> GCDs.

If you think you can do it, you can easily verify whether you can do it in practice. The paper gives statistics on how many keys were found, the distribution of key lengths, and the distribution of keys which share at least one prime. A few hours with a good random number generator should enable you to produce a synthetic data set with the same properties as the keys found in the wild. Once you've got your data, go ahead and find the factorizations, then report back on how hard it was.

Paul
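Paul's suggested experiment is easy to mock up. The sketch below is my own stdlib-only Python illustration, not anyone's actual test harness; the 48-bit primes and the counts are toy parameters (real moduli are 1024-bit). It generates mostly independent semiprimes plus a few pairs that deliberately reuse a prime:

```python
import random
from math import gcd

def is_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test (stdlib-only sketch)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(bits, rng):
    """Random prime with the top bit set, so products have full size."""
    while True:
        p = rng.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_prime(p):
            return p

def synthetic_moduli(n_keys, n_shared, bits, seed=1):
    """Generate n_keys toy RSA moduli, n_shared of whose primes are
    deliberately reused in two moduli each, mimicking keys in the wild."""
    rng = random.Random(seed)
    moduli = []
    for _ in range(n_shared):
        shared = random_prime(bits, rng)
        moduli.append(shared * random_prime(bits, rng))
        moduli.append(shared * random_prime(bits, rng))
    while len(moduli) < n_keys:
        moduli.append(random_prime(bits, rng) * random_prime(bits, rng))
    rng.shuffle(moduli)
    return moduli

# hypothetical toy run: 12 moduli, 2 deliberately reused 48-bit primes
ms = synthetic_moduli(12, 2, 48, seed=7)
leaked = sum(1 for i in range(len(ms)) for j in range(i + 1, len(ms))
             if gcd(ms[i], ms[j]) > 1)
print(leaked)  # at least 2 pairs share a prime by construction
```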
--- In primenumbers@yahoogroups.com,
Phil Carmody <thefatphil@...> wrote:

> I was interested in what the title of the paper referred to

I suggest that

Ron = R.L. Rivest
Whit = W. Diffie

but I am not sure if they had a direct argument?

David
--- In primenumbers@yahoogroups.com,
"djbroadhurst" <d.broadhurst@...> wrote:

> > I was interested in what the title of the paper referred to
>
> I suggest that
> Ron = R.L. Rivest
> Whit = W. Diffie
> but I am not sure if they had a direct argument?

R[SA] is not keen on D[H], but more relaxed about D[VW]:

http://www.rsa.com/rsalabs/node.asp?id=2248

>> The authenticated Diffie-Hellman key agreement protocol,
or Station-to-Station (STS) protocol, was developed by
Diffie, van Oorschot, and Wiener in 1992 [DVW92] to
defeat the man-in-the-middle attack on the
Diffie-Hellman key agreement protocol. <<

David
--- In primenumbers@yahoogroups.com,
Paul Leyland <paul@...> wrote:

> That is the limit to the help which I am prepared to give.

https://listserv.nodak.edu/cgi-bin/wa.exe?A2=ind0503&L=nmbrthry&P=R684

gives a lot of help, by publicly certifying semiprimality. Might Paul care to try to use that help to determine the factors?

David (who never encrypts, as a matter of firm principle)

"He who receives an idea from me, receives instruction
himself without lessening mine; as he who lights his
taper at mine, receives light without darkening me."
Thomas Jefferson (1743-1826)
On Wed, 2/15/12, Paul Leyland <paul@...> wrote:
> Phil Carmody wrote:
> > > Easier said than done. First, you have to get Arjen's database.
> >
> > Presumably you get it from the same places that Arjen got it?
>
> That's one way. It's also consistent with my statement
> that it's easier said than done.

All hail the great goddess Obscurity, saving the day once again!

> > > Once you have it, finding factors through GCDs is a nontrivial
> > > task. That is the limit of the help which I am prepared to give.
> >
> > I think I can do it for about the cost of a dozen half-gigabyte
> > GCDs.
>
> If you think you can do it, you can easily verify whether you can do it
> in practice. The paper gives statistics on how many keys were found,
> the distribution of key lengths and the distribution of keys which share
> at least one prime. A few hours with a good random number generator
> should enable you to produce a synthetic data set with the same
> properties as the keys found in the wild. Once you've got your data go
> ahead and find the factorizations, then report back on how hard it was.

Well, I don't have a machine with multiple gigabytes of RAM, which would be necessary for the GCDs I have in mind. But what's wrong with just modelling the problem?

Do you agree that if the number of keys massively dominates the number of common factors, the amount of work once you have the product of all common factors is negligible compared to the work required to generate that product?

The algorithm I have in mind is asymptotically faster than the following, but this is trivial. Do you agree that this finds the product of all common primes?

  Number all keys K_0 .. K_{N-1}
  For b in [0 .. ceil(lg(N))]:
    Build products L and R such that
      L = prod_{i=0..N-1} (K_i if i has bit b set, else 1)
      R = ditto, bit test flipped
    Compute G_b = GCD(L, R)

Asymptotically, the GCD and the multiplications should be similar in cost, and the arguments are large enough that deviations from the asymptotes are relatively minor.

Pari/GP on this old crate can do GCDs as follows:

 1M digits:   6.5s
 2M digits:  15.5s (factor ~2.4)
 4M digits:  37.2s (factor ~2.35)
 8M digits:  98.3s (factor ~2.32)
16M digits: 205s   (factor ~2.3)

Assuming pessimistically that the per-doubling factor is 2.3, that can be extrapolated (we're already out of L3, so there shouldn't be any surprising jumps) to 30000s for a gigadigit GCD.

The tree multiplication can similarly be done in approximately the following times (I calculated the time for one layer and multiplied by the depth; assuming you code for the nonlinear access times of the memory hierarchy, this is a clear overestimate):

 1M digits:   8s
 2M digits:  18.9s (factor ~2.36)
 4M digits:  44.2s (factor ~2.34)
 8M digits:  99s   (factor ~2.24)
16M digits: 218s   (factor ~2.2)

Assuming again pessimistically that the per-doubling factor is 2.2, extrapolation again gives us about 24000s for gigadigit product generation.

That's clearly less than a day per bit, and there are probably machines which are between 5 and 10 times faster than this one. I think also that George Woltman's GIMPS bignum library would dominate GMP for most of the operations. So this phase (generating the product of all and only duplicated factors) is no more than 3 days' work on a single core. And of course the above is embarrassingly parallel.

And that's the "naive" version of what I have in mind. Some of the multiplications are redundant, for example, and eliminating them with only a small constant memory increase (add another gig for this size of problem) would reduce the mul cost by a factor of 2. A bit more RAM, and the stage after this, extracting the individual factors and identifying which key they come from, becomes trivial: cheaper than one 'b' step above.

Are there any estimations above that you disagree with? (Or do you think any two keys are not getting GCDed with each other in the above process?)

And is the title of the paper also shrouded in obscurity that you're sworn not to reveal, or was that just an oversight in your attempt to swiftly assert that the algorithm I hadn't even yet told you about could never work? Does it just refer to the fact that *R*SA with multiple prime secrets is "wrong" compared to the single prime secret of *D*H?

Phil
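Phil's split-by-bit scheme is easy to prototype at toy scale. The sketch below is my own stdlib-Python illustration (not Phil's Pari/GP code); the tiny semiprime "keys" are made up for the demonstration, and `math.lcm` merges the per-bit GCDs so a prime caught at several bit positions isn't counted twice:

```python
from math import gcd, lcm, prod

def product_of_shared_primes(keys):
    """Split-by-bit GCD sketch: for each bit position b, multiply the
    keys whose index has bit b set (L) and those whose index has it
    clear (R).  Any prime shared by two keys whose indices differ in
    bit b divides gcd(L, R); since every pair of distinct indices
    differs in some bit, every shared prime is caught.  (A prime shared
    by 3+ keys may survive to a higher power; this sketch ignores that.)"""
    n = len(keys)
    result, b = 1, 0
    while (1 << b) < n:
        L = prod(k for i, k in enumerate(keys) if i & (1 << b))
        R = prod(k for i, k in enumerate(keys) if not i & (1 << b))
        result = lcm(result, gcd(L, R))
        b += 1
    return result

# Toy "keys": 33=3*11, 35=5*7, 55=5*11, 91=7*13 share the primes 5, 7, 11.
shared = product_of_shared_primes([33, 35, 55, 91])
print(shared)           # 385 = 5 * 7 * 11
print(gcd(33, shared))  # 11 -- one GCD per key then exposes its shared factor
```

The final GCD-per-key pass is the cheap "stage after this" Phil mentions: once the product of duplicated factors exists, each key gives up its shared prime with a single small GCD.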
--- In primenumbers@yahoogroups.com,
"djbroadhurst" <d.broadhurst@...> wrote:

> Hint to hackers: take GCDs with Arjen's database.

Paul Leyland quite correctly remarked that Arjen's database is private. However, the GCD idea seems to work OK:

https://freedomtotinker.com/blog/nadiah/newresearchtheresnoneedpanicoverfactorablekeysjustmindyourpsandqs

> We were able to factor 0.4% of the RSA keys in our SSL scan.
> We did this by computing the greatest common divisor (GCD) of
> all pairs of moduli from RSA public keys on the Internet.
> We identified 1724 common factors which allowed us to factor
> 24,816 SSL keys, and 301 common factors which allowed us to
> factor 2422 SSH host keys.

David (with thanks to Peter Kosinar, for the info)
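The experiment the quoted blog describes, a GCD of every pair of moduli, is the quadratic baseline. A minimal stdlib-Python sketch of my own, with hypothetical toy moduli:

```python
from math import gcd

def all_pairs_shared_factors(moduli):
    """Quadratic baseline: gcd of every pair of moduli.  A result > 1
    is a shared prime factor, which breaks both keys involved."""
    found = {}
    for i in range(len(moduli)):
        for j in range(i + 1, len(moduli)):
            g = gcd(moduli[i], moduli[j])
            if g > 1:
                found[(i, j)] = g
    return found

# Toy moduli: 33=3*11, 35=5*7, 55=5*11, 91=7*13
print(all_pairs_shared_factors([33, 35, 55, 91]))
# {(0, 2): 11, (1, 2): 5, (1, 3): 7}
```

At Internet scale this O(n^2) pass is infeasible, which is why the tree-based methods discussed elsewhere in the thread matter.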
On Wed, 2012-02-15 at 18:26 +0000, Phil Carmody wrote:
> And is the title of the paper also shrouded in obscurity that you're
> sworn not to reveal, or was that just an oversight in your attempt to
> swiftly assert that the algorithm I hadn't even yet told you about
> could never work?

Two things.

1) Never ascribe to malice that which is adequately explained by incompetence.

2) I never said your algorithm couldn't work, only that it might not be as easy to implement and run as it is to assert that you have an algorithm.

> Does it just refer to the fact that *R*SA with multiple prime
> secrets is "wrong" compared to the single prime secret of *D*H?

Yup.

Paul
On Wed, 2012-02-15 at 17:20 +0000, djbroadhurst wrote:
> --- In primenumbers@yahoogroups.com,
> Phil Carmody <thefatphil@...> wrote:
>
> > I was interested in what the title of the paper referred to
>
> I suggest that
> Ron = R.L. Rivest
> Whit = W. Diffie
> but I am not sure if they had a direct argument?

Give that man a {coconut, cigar, token prize of your choice}.

Paul
--- In primenumbers@yahoogroups.com,
Phil Carmody <thefatphil@...> wrote:

> I wouldn't be surprised if DJB (the UIC one, not the Open one)
> didn't document such an algorithm

I think that Phil might be recalling this paper:

http://cr.yp.to/lineartime/dcba20040404.pdf

David (the Open DJB :)
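The technique most often cited from that line of work is the Bernstein-style batch GCD: a product tree down, then a remainder tree back up, giving gcd(N_i, product of all other moduli) for every modulus at once. The sketch below is my own hedged stdlib-Python reading of that construction, not code from the linked paper:

```python
from math import gcd, prod

def batch_gcd(moduli):
    """Bernstein-style batch GCD sketch.  Build a product tree over the
    moduli; then push the root P back down, reducing modulo the square
    of each node, so the leaf for N_i holds P mod N_i^2.  Dividing that
    by N_i gives (P / N_i) mod N_i, whose gcd with N_i is the part of
    N_i shared with the other moduli.  If the result equals N_i itself,
    both of its primes are shared and a little extra work splits them."""
    # product tree: level 0 is the moduli; each level pairs up products
    tree = [list(moduli)]
    while len(tree[-1]) > 1:
        level = tree[-1]
        tree.append([prod(level[i:i + 2]) for i in range(0, len(level), 2)])
    # remainder tree: reduce the root modulo the square of each node below
    rems = tree[-1]
    for level in reversed(tree[:-1]):
        rems = [rems[i // 2] % (n * n) for i, n in enumerate(level)]
    return [gcd(n, r // n) for n, r in zip(moduli, rems)]

# Toy moduli: 33=3*11, 35=5*7, 55=5*11, 91=7*13
print(batch_gcd([33, 35, 55, 91]))  # [11, 35, 55, 7]
```

With fast multiplication this does the work of all pairwise GCDs in essentially linear (up to log factors) total bit operations, which is what makes Internet-scale scans feasible.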
On Wed, 2/15/12, djbroadhurst <d.broadhurst@...> wrote:

> However, the GCD idea seems to work OK:
>
> https://freedomtotinker.com/blog/nadiah/newresearchtheresnoneedpanicoverfactorablekeysjustmindyourpsandqs

Using an algorithm from the (other) DJB. I said that might be the case, didn't I?

Apparently I'd only need to do ~400M-digit GCDs rather than gigadigit ones, and less than half the RAM. That drops the estimate to about a day, which is in remarkable agreement with that team's results. Most remarkable, as it's a distinct algorithm from DJB's. It might be worth actually implementing, to see how the two compare to each other. To get apparently the same big-O and a similar constant as a DJB algorithm from just a back-of-a-fag-packet algorithm doesn't happen every day.

Phil
--- In primenumbers@yahoogroups.com,
Phil Carmody <thefatphil@...> wrote:

> Using an algorithm from the (other) DJB.
> I said that might be the case, didn't I?

Bang on, good Sir!

> To get apparently the same big-O and a similar constant
> as a DJB algorithm from just a back-of-a-fag-packet algorithm
> doesn't happen every day.

Fortune favours the unsecretive :)

David (the Open DJB)
On Wed, 2/15/12, Paul Leyland <paul@...> wrote:
> Phil Carmody wrote:
> > And is the title of the paper also shrouded in obscurity that you're
> > sworn not to reveal, or was that just an oversight in your attempt to
> > swiftly assert that the algorithm I hadn't even yet told you about
> > could never work?
>
> Two things.
>
> 1) Never ascribe to malice that which is adequately explained by
> incompetence.
>
> 2) I never said your algorithm couldn't work, only that it might not be
> as easy to implement and run as it is to assert that you have an
> algorithm.

I was trying to give as little away as possible, as clearly Lenstra et al. didn't want too much made public about the technique they used, and whilst I'm under no obligation to honour that, I thought I'd play along. Anyone who could implement such an algorithm would probably recognize its ingredients, and vice versa. When such things get questioned, my steepest gradient is simply to leak parts of the algorithm I had in mind, obviously.

It would have been quicker, but far less fun, to run off to my archive of DJB papers and hunt out the relevant "essentially linear time" paper. So in some ways I'm glad I had to concretise the ideas. When I first had them, they were just so obviously workable that I didn't bother with anything apart from 1-significant-figure guesstimates (as you can tell from the factor of 2.5 in my estimate of the size of the input); I'm quite pleased how close they appear to be to potential reality.

> > Does it just refer to the fact that *R*SA with multiple prime
> > secrets is "wrong" compared to the single prime secret of *D*H?
>
> Yup.

OK, thanks for that. It was not clear if there was some rump or coffee (or wine?) session comment that had historically been made between the pair (or to a third party) that I wasn't aware of. If the above is the full story, then I think it's unfair to say that "Ron is wrong", as the mathematics and the algorithmics of composite secrets are still bulletproof. It's just that some people (apparently not too many, fortunately) simply can't implement the algorithm correctly. I believe it's even specified that twice as much apparent entropy must be fed into the system as the length of the primes extracted from it. Clearly nothing like that has been honoured in the failing cases. (Can anyone say "15 bits of entropy"?)

Phil
--- In primenumbers@yahoogroups.com,
Paul Leyland <paul@...> wrote:

> > I suggest that
> > Ron = R.L. Rivest
> > Whit = W. Diffie
> > but I am not sure if they had a direct argument?
>
> Give that man a {coconut, cigar, token prize of your choice}.

It's a poor title for what purports to be a serious article (on insufficient entropy). Try to imagine Pauli using the title "Izzy hat geirrt; Bert hat recht" ("Izzy was wrong; Bert is right") in Encyklopädie der Mathematischen Wissenschaften, Vol. 5, 539-775 (1921).

David
On Wed, 2/15/12, djbroadhurst <d.broadhurst@...> wrote:
> Phil Carmody <thefatphil@...> wrote:
>
> > Using an algorithm from the (other) DJB.
> > I said that might be the case, didn't I?
>
> Bang on, good Sir!
>
> > To get apparently the same big-O and a similar constant
> > as a DJB algorithm from just a back-of-a-fag-packet algorithm
> > doesn't happen every day.
>
> Fortune favours the unsecretive :)

My method's obvious enough that I'm sure it must have occurred to Dan, and therefore I'm pretty sure there must be something like a log log N of leverage in his method that makes it win in the long run. (That would be a factor of 20-ish at these sizes, and not to be sniffed at.) However, I will confess to not having read his paper for about 6 years, including today.

Phil