Prime numbers and primality testing is a Restricted Group with 1120 members.

New Factoring Method...
 Hello all,
I don't know if this has been thought of before, or if it's been
investigated before (couldn't find any references on the net), but let
me ask some establishing questions first:
1) Does x^2 == y^2 (mod n) work only because all n can be expressed as
the difference of two squares? (i.e., if not all numbers could be
represented as such, would the quadratic sieve still work?)
2) Could we use something like x^3 == y^3 (mod n)? I don't think all
numbers are representable as the difference of cubes, but, could we use
something like a 'cubic sieve' (analogous to the quadratic sieve, but
cubes instead of squares)? And when we've found such a pair (x, y) we
would take the gcd like: gcd(x - y, n) or gcd(x^2 + xy + y^2, n) to try
and find a factor. [or possibly a 'divisor' depending on what came
out... ;) ]
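[For anyone who wants to experiment: the congruence-of-squares idea behind question 1 can be sketched in a few lines of Python. This is a toy Fermat/Kraitchik-style search, not a real quadratic sieve -- the sieve exists precisely to find such pairs much faster.]

```python
from math import gcd, isqrt

def congruence_of_squares(n):
    # search upward from ceil(sqrt(n)) for x with x^2 - n a perfect square;
    # then x^2 == y^2 (mod n), and gcd(x - y, n) divides n
    x = isqrt(n) + 1
    while True:
        y = isqrt(x * x - n)
        if y * y == x * x - n:
            return gcd(x - y, n)
        x += 1

factor = congruence_of_squares(8051)   # 8051 = 83 * 97
assert factor == 83
```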
I've looked at some of the exponent arrays that are produced from this
method, but since I don't know how the whole partial and double partials
thing works, I can't really explore creating a cubic sieve on my own.
Also, I was curious: how do you make sure (in MPQS) that you are
producing "squares" on the RHS of r == Ax^2 + Bx + C (mod n)? With an
understanding of this, it might be possible to create a "multiple
polynomial cubic sieve" (MPCS).
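[For what it's worth, one standard way MPQS arranges this -- sketched here with illustrative numbers and the 2B convention -- is to pick A to be a square and B a square root of N mod A. Then A*Q(x) is literally (Ax+B)^2 - N, so each Q(x) is congruent mod N to a square divided by the known square A.]

```python
# Toy MPQS polynomial (2B convention): A = d^2, B^2 == N (mod A),
# C = (B^2 - N) / A.  Then A*Q(x) = (A*x + B)^2 - N for every x, so
# Q(x) == ((A*x + B) * d^-1)^2 (mod N) really is a square mod N.
N = 8051
d = 5
A, B = d * d, 1                 # 8051 % 25 == 1 == B^2 (mod A)
C = (B * B - N) // A            # exact division because B^2 == N (mod A)
Q = lambda x: A * x * x + 2 * B * x + C
assert all(A * Q(x) == (A * x + B) ** 2 - N for x in range(-50, 50))
```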
And, how do you know how many "relations" to collect in order to be able
to solve (or make sure you will more than likely solve) the quadratic
sieve? And how do you know how many primes to put into your factor
base?
Anyway, just thought I'd throw this out there to see if it interests
either the egroup or the cabal. ;) Please let me know what you think
about this possibly new factoring method that I have dubbed the cubic
sieve.
David C.

On Thu, 01 November 2001, David Cleaver wrote:
> Hello all,
>
> I don't know if this has been thought of before, or if it's been
> investigated before (couldn't find any references on the net), but let
> me ask some establishing questions first:
>
> 1) Does x^2 == y^2 (mod n) work only because all n can be expressed as
> the difference of two squares? (i.e., if not all numbers could be
> represented as such, would the quadratic sieve still work?)
>
> 2) Could we use something like x^3 == y^3 (mod n)? I don't think all
> numbers are representable as the difference of cubes, but, could we use
> something like a 'cubic sieve' (analogous to the quadratic sieve, but
> cubes instead of squares)? And when we've found such a pair (x, y) we
> would take the gcd like: gcd(x - y, n) or gcd(x^2 + xy + y^2, n) to try
> and find a factor. [or possibly a 'divisor' depending on what came
> out... ;) ]

There's a simple mapping from a posited C = A.B factorisation, where C is
an odd composite, such that <A,B> -> <X,Y> where X^2 - Y^2 = C.
You then solve the problem in the X,Y domain using the
Fermat/Gauss/Kraitchik/QS techniques.
You then map back from the X,Y domain to the A,B domain, et voila, a
factorisation. (lather, rinse, repeat)
i.e. You need to provide both mappings, else you can't get the answers out.
Avoidance of operations such as root extraction in a composite base is
often a good thing.
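[Phil's two mappings are concrete enough to write down -- a sketch; since C = A*B is odd, both factors are odd and the halving below is exact.]

```python
def ab_to_xy(a, b):
    # <A,B> -> <X,Y> with X = (A+B)/2, Y = (A-B)/2, so X^2 - Y^2 = A*B
    return (a + b) // 2, (a - b) // 2

def xy_to_ab(x, y):
    # the inverse mapping: back from a difference of squares to the factors
    return x + y, x - y

x, y = ab_to_xy(97, 83)
assert x * x - y * y == 97 * 83     # X^2 - Y^2 = C
assert xy_to_ab(x, y) == (97, 83)   # both directions, as required
```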
Phil
Mathematics should not have to involve martyrdom;
Support Eric Weisstein, see http://mathworld.wolfram.com
David Cleaver wrote:
> either the egroup or the cabal. ;) Please let me know what you think
> about this possibly new factoring method that I have dubbed the cubic
> sieve.

It is possible to create the cubic sieve (CS).
However it is probably slower than QS:
(ax+b)^3 - N is greater than (ax+b)^2 - N.
This fact implies that smooth data is found more rarely
by CS than by QS.

Satoshi Tomabechi
Paul Leyland wrote to me in a private mail. Thanks to him I find I was wrong.

>> It is possible to create the cubic sieve (CS).
>> However it is probably slower than QS.
>> (ax+b)^3 - N is greater than (ax+b)^2 - N.

> Are you sure? Why then does NFS, which uses higher-degree polynomials,
> run asymptotically faster than QS?

>> This fact implies that smooth data is found more rarely
>> by CS than by QS.

> I think your conclusion is correct, but please check your reasoning
> very carefully.

The complexity of CS is the same as the complexity of QS.
Smooth data is found more rarely by CS than by QS.
However the factorbase of CS can be smaller than
that of QS. The sieving region of CS can be smaller as well.
This compensates for the rareness of smooth data.
Thus it is concluded that the complexity of CS is
exp(lnN*lnlnN).

I thank Paul Leyland for correcting me.
Pardon me for quoting your mail without permission.
Satoshi Tomabechi

Satoshi Tomabechi and Paul Leyland have been thinking
hard about cubic versus quadratic sieve:

> The complexity of CS is the same as the complexity of QS.
> Smooth data is found more rarely by CS than by QS.
> However the factorbase of CS can be smaller than that of QS.
> The sieving region of CS can be smaller as well.

Satoshi: do you plan to write a crude PPSICS?
It need not be faster than PPSIQS;
any experiment would be interesting enough....

David

David Broadhurst wrote:
> Satoshi: do you plan to write a crude PPSICS?
> It need not be faster than PPSIQS;
> any experiment would be interesting enough....

No, I don't. Because I have enough time to do it.
If someone wants to do it, I may give advice.

I should turn to NFS again.
The Japanese government decided to give support
to Prof. Yuji Kida and his project.
I am a member of the project.
Our main object is to study and complete NFS.
I have no confidence to overcome many difficulties.
But it is my duty.

Satoshi Tomabechi

I wrote:
> No, I don't. Because I have enough time to do it.

Sorry, I made a mistake. That should have been:
No, I don't, because I don't have enough time to do it.

Satoshi Tomabechi

On Tue, 06 Nov 2001 15:04:06 +0900, Satoshi Tomabechi wrote:
>David Broadhurst wrote:
>
>> Satoshi: do you plan to write a crude PPSICS?
>> It need not be faster than PPSIQS;
>> any experiment would be interesting enough....
>
>No, I don't. Because I have enough time to do it.
>If someone wants to do it, I may give advice.
>
>I should turn to NFS again.
>The Japanese government decided to give support
>to Prof. Yuji Kida and his project.
>I am a member of the project.
>Our main object is to study and complete NFS.
>I have no confidence to overcome many difficulties.
>But it is my duty.
>
>Satoshi Tomabechi

Satoshi,

May I ask what you mean by 'complete NFS'?
I am currently running Prof. Kida's NFSX on Ubasic and it seems to work
very well. It is certainly faster than PPSIQS for numbers of the right
form.

Thank you,
Steve

> > Are you sure? Why then does NFS, which uses higher-degree
> > polynomials,
> > run asymptotically faster than QS?
>
> > This fact implies that smooth data is found more rarely
> > by CS than by QS.
>
> > I think your conclusion is correct, but please check your reasoning
> > very carefully.
>
> Thanks to him I find I was wrong.
> The complexity of CS is the same as the complexity of QS.
> Smooth data is found more rarely by CS than by QS.
> However the factorbase of CS can be smaller than
> that of QS. The sieving region of CS can be smaller as well.
> This compensates for the rareness of smooth data.
> Thus it is concluded that the complexity of CS is
> exp(lnN*lnlnN).

There are two distinct questions to ask here:
a) What is the asymptotic complexity of CS?
b) What is the practical speed of an implementation?
I believe there is a simple typing mistake in your complexity formula.
The complexity of QS is exp((1+o(1)) * sqrt(lnN * lnlnN)) -- you
forgot the sqrt(). I added the (1+o(1)) term for completeness because
it can be important when addressing the other question.
I'm not sure, without thinking about the matter *much* more deeply,
whether the smaller factorbase and sieving region compensates exactly
for the larger values to be found smooth. My guess, and it is only a
guess, is that it does not and will generate a larger o(1) term for
numbers of practical interest, which leads me to believe it will be
slower for these numbers. By way of analogy, compare QS and ECM. Both
have the same asymptotic complexity, but QS is much faster in practice
than ECM when factoring integers where both factors are large.
Actually, I'm not even certain that both CS and QS have the same
(heuristic) asymptotic complexity. It seems very plausible that CS has
complexity exp ((c+o(1)) * sqrt (lnN * lnlnN)) but I don't see any good
reason why c=1. On the other hand, I don't see why c should not be 1
either. Compare the complexity of GNFS and SNFS, which differ only in
the value of c.
> I thank Paul Leyland for correcting me.
I'm not so sure I did correct you 8) What I think we can agree on is
that the question needs much closer analysis before we can say anything
definite.
> Pardon me for quoting your mail without permission.
No problem! It has led to a useful concentration on an algorithm which
may turn out to be valuable.
To make the algorithm practical, we will almost certainly need the cubic
analogue of Montgomery's trick for finding multiple polynomials.

Paul

> Satoshi
>
> May I ask what you mean by 'complete NFS'?
> I am currently running Prof. Kida's NFSX on Ubasic and it
> seems to work
> very well. It is certainly faster than PPSIQS for numbers of the right
> form.

Satoshi is the only person to give a complete answer, but I can point
out several ways in which NFSX fails to be a "complete NFS".
a) It is an implementation of the special NFS. A complete NFS would
include an implementation of GNFS.
b) It is limited to polynomials of odd degree. An even-degree
implementation is under development, it appears.
c) One polynomial must be linear.
d) It is limited to two polynomials. Multiple polynomial NFS has been
implemented at CWI though not, as yet, had much use in practice.
e) NFSX runs only on a single processor, as far as I am aware. NFS is
quite easy to parallelise and running on multiple machines is the only
practical way of factoring large integers with the algorithm.
f) NFSX uses Gaussian elimination for the linear algebra. While
Gaussian elimination is fine for small matrixes, it is very
uncompetitive for large ones, not least because the matrix becomes
denser as the algorithm progresses and the memory usage becomes
prohibitive.
g) The choice of sieving region and size of factorbases is frequently
very good, but can be wildly wrong in some cases. I discovered this the
hard way when factoring 158 * 5^158 - 1 with the polynomial
158*(5^32)^5 - 25. The factorbases were too small, and the sieving
region far from square.
Paul

Paul Leyland wrote:
>No problem! It has led to a useful concentration on an algorithm which
>may turn out to be valuable.
>
>To make the algorithm practical, we will almost certainly need the cubic
>analogue of Montgomery's trick for finding multiple polynomials.

Has any progress been made on this? It's mentioned in Murphy's thesis on
polynomial selection, but only in passing -- there is a reference to some
unpublished work by Montgomery that indicated that the problem is likely to
be difficult, but not much more.

Chris
Let r be a prime and hope that for n = pq, that a^r=b^r
gives a=b mod p (or mod q). (r=2 is QS, r=3 is CS)
Sadly, only r=2, QS, gives a *universal* method.
When r>2 we require at least one of p-1, q-1, to be
divisible by r, else the map a -> a^r is 1-1
mod both p and q, thus a^r=b^r implies a=b, so that
no usable solutions exist.
If r=3, CS, the method may be used on numbers of the form
6m+5, which are known to have exactly two prime factors,
since exactly one of two factors must be =1 mod 3. If 3
factors, method could still fail, with all factors =2
mod 3. Not exactly universal?
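[This 1-1 behaviour is easy to check numerically for small primes -- a quick sketch; three-argument pow is modular exponentiation.]

```python
def num_cube_values(p):
    # count distinct values of x^3 mod p
    return len({pow(x, 3, p) for x in range(p)})

assert num_cube_values(5) == 5     # 3 does not divide 5-1: cubing is 1-1
assert num_cube_values(11) == 11   # 3 does not divide 11-1: cubing is 1-1
assert num_cube_values(7) == 3     # 3 divides 7-1: cubes collapse to {0,1,6}
```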
Just wanted to get this to the list before someone spent
a lot of time programming CS, when it might not be all
that useful. Please correct me where needed.
My first post! Been following this list for quite a few
months now, and find it very stimulating.
My thanks go out to Dick Boland, msg. 3671, for the link
http://www.crank.net/maths.html
which was long overdue, IMHO.
Probably too much for 1 msg. sorry. done for now.
Paul Leyland wrote:
> Satoshi is the only person to give a complete answer, but I can point
> out several ways in which NFSX fails to be a "complete NFS".

Paul understands what we shall do better than I do. :)
My intention may be different from Kida's.
At first I intend to achieve GNFS without polynomial selection.
This means treating a polynomial of even degree
and implementing a Montgomery(-like) method.
Kida's NFSX and my NFS have not implemented it as yet.

Another thing to do is decreasing the size of the matrix,
in other words decreasing the number of nonzero entries.
Indeed we find square relations by the Lanczos method,
but the matrix is heavy.
We need a sparser matrix so that we can run NFS on a
PC with limited main memory.

If I solve the problems above, I will address the remaining
problems pointed out by Paul.
However I think that we cannot catch up with "the" cabal soon.

Satoshi Tomabechi

Hi Satoshi,
Satoshi TOMABECHI <tomabeti@...> wrote in part:
> At first I intend to achieve GNFS without polynomial selection.
> This means treating a polynomial of even degree
> and implementing a Montgomery(-like) method.

I don't understand the connection between these two. I can see that
handling even-degree polynomials is essential for efficiently factoring
numbers which don't lie in the range for cubic or quintic polynomials. But
how do you get rid of polynomial selection? I was under the impression that
good polynomial selection was a key way of reducing the size of the matrix,
as well as reducing the sieving time.

> Another thing to do is decreasing the size of the matrix,
> in other words decreasing the number of nonzero entries.
> Indeed we find square relations by the Lanczos method,
> but the matrix is heavy.
> We need a sparser matrix so that we can run NFS on a
> PC with limited main memory.

Does anyone know what the theoretical smallest size of the matrix is? In
other words, how much smaller can we expect to be able to make the matrix?

Chris

>>Another thing to do is decreasing the size of the matrix,
>>in other words decreasing the number of nonzero entries.
>>Indeed we find square relations by the Lanczos method,
>>but the matrix is heavy.
>>We need a sparser matrix so that we can run NFS on a
>>PC with limited main memory.

>Does anyone know what the theoretical smallest size of the matrix is? In
>other words, how much smaller can we expect to be able to make the
>matrix?
There are two separate concepts that might be getting confused here: the
size and the density of the matrix.
The size is generally regarded as being the number of rows and columns in
the matrix, or perhaps their product. The density is the number of
nonzero elements, or the fraction of elements which are nonzero.
Often it is phrased as the average number of nonzero elements per row
or per column.
Satoshi says that he needs to lower the density (total, not fractional
number of nonzero elements).
Chris appears to ask how small the matrix can be.
Please correct me if I've misunderstood the two positions.
In the real world, the answer is as it often is: it depends. Small
matrixes can be obtained with small factorbases, but the computation
needed to find enough relations rises substantially. Small matrixes can
also be obtained by combining relations to eliminate primes, but the
density then rises. Very sparse matrixes can be obtained easily enough,
but at a cost of increased size. Optimizing all these factors is more
akin to a black art than a mathematically rigorous algorithm. Even the
acknowledged experts in the field spend quite a lot of time experimenting
with various factors to see which give the best results. Research
papers are still being published, for instance Stefania Cavallar's paper at
ANTS-IV last year, which are improving my abilities in this field
substantially -- though I'm still an amateur compared with Stefania,
Peter Montgomery and Bruce Dodson.
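[The size/density distinction can be made concrete with a toy sparse GF(2) representation -- illustrative data only, not from any real NFS run.]

```python
# Each relation is a GF(2) row: the set of factor-base primes appearing
# to an odd power.  Memory tracks density (total nonzeros), not rows*cols.
rows = [{0, 3, 7}, {1, 3}, {0, 1, 7}]          # 3 relations, 8 primes
density = sum(len(r) for r in rows)            # 8 nonzero entries in all
# adding two rows over GF(2) is symmetric difference; it can eliminate
# a prime (shrinking the matrix) but tends to densify the result
combined = rows[0] ^ rows[1]
assert combined == {0, 1, 7}                   # prime 3 eliminated
```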
Paul

Chris Card wrote:
> But how do you get rid of polynomial selection?

I cannot do several things at a time, because I'm not smart :)
At first step I implement GNFS without polynomial selection.
Then I try to factor a large number with it as SNFS.
True GNFS will be closer at hand, if I succeed in factoring.

> Does anyone know what the theoretical smallest size of the matrix is? In
> other words, how much smaller can we expect to be able to make the matrix?

I don't know who knows it.
But I know that Montgomery et al. at CWI deal with a sparser matrix.
In comparison with it, we deal with a denser matrix.
A sparser matrix will make the "linear algebra" and "square root" easier.

Satoshi Tomabechi

Paul Leyland wrote:
>
> >>Another thing to do is decreasing the size of the matrix,
> >>in other words decreasing the number of nonzero entries.
> >>Indeed we find square relations by the Lanczos method,
> >>but the matrix is heavy.
> >>We need a sparser matrix so that we can run NFS on a
> >>PC with limited main memory.
> >Does anyone know what the theoretical smallest size of the matrix is?
> >In other words, how much smaller can we expect to be able to make the
> >matrix?
>
>There are two separate concepts that might be getting confused here: the
>size and the density of the matrix.
>
>The size is generally regarded as being the number of rows and columns in
>the matrix, or perhaps their product. The density is the number of
>nonzero elements, or the fraction of elements which are nonzero.
>Often it is phrased as the average number of nonzero elements per row
>or per column.
>
>Satoshi says that he needs to lower the density (total, not fractional
>number of nonzero elements).
>Chris appears to ask how small the matrix can be.
>Please correct me if I've misunderstood the two positions.

You're quite right, I was confusing the two. I'm not sure I had a position,
except that it was confused!
I suppose "small" could be interpreted as "amount of memory required" in
this context. So a sparse matrix can be stored in less memory, but then so
can a matrix with fewer rows and columns. I also had in mind the possibility
of taking a factorisation done with GNFS and working backwards to an ideal
(no pun intended) set of relations that represents the best we could hope to
get. This would provide some kind of bound on what is possible. But stop me
now, if I'm talking rubbish :)
Chris
OK, I have a question for ya: (see below)
mulkeyg@... wrote:
> Let r be a prime and hope that for n = pq, that a^r=b^r
> gives a=b mod p (or mod q). (r=2 is QS, r=3 is CS)

Do both p and q, from above, have to be prime? If not, then let me show
you something (if they are, just let me know).

> Sadly, only r=2, QS, gives a *universal* method.
> When r>2 we require at least one of p-1, q-1, to be
> divisible by r, else the map a -> a^r is 1-1
> mod both p and q, thus a^r=b^r implies a=b, so that
> no usable solutions exist.
>
> If r=3, CS, the method may be used on numbers of the form
> 6m+5, which are known to have exactly two prime factors,
> since exactly one of two factors must be =1 mod 3. If 3
> factors, method could still fail, with all factors =2
> mod 3. Not exactly universal?

If a number had 3 factors that were all = 2 mod 3, then (assuming like
above that p or q can be composite) we can just multiply two of the
factors together to get a number that has one of its divisors = 1 mod 3
[this is cuz 2*2 = 1 (mod 3) for those wondering] and we could therefore
use the CS to factor this number.

So, if this is possible, then we have that only those numbers with 2
PRIME factors = 2 (mod 3) cannot be factored using CS. Although, I
don't understand where you got the fact that the power used in the
sieve, you used "r", must divide one of p-1 or q-1. Could someone
explain that to me, or just give me a web link, or a name I could use to
look it up? And what happens when the power isn't prime? Just curious
to learn more that I don't seem to know yet. ;)
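[The modular arithmetic in the grouping argument does check out, as a quick sketch confirms -- the factors here are illustrative; whether the trick actually rescues CS is addressed in the replies.]

```python
# three factors, all = 2 (mod 3); grouping any two gives a divisor = 1 (mod 3)
factors = [2, 5, 11]
assert all(f % 3 == 2 for f in factors)
n = 2 * 5 * 11
grouped = 5 * 11                   # 2*2 = 1 (mod 3), so 5*11 = 1 (mod 3)
assert grouped % 3 == 1 and n % grouped == 0
```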
David C.
David Cleaver wrote:
> > Let r be a prime and hope that for n = pq, that a^r=b^r
> > gives a=b mod p (or mod q). (r=2 is QS, r=3 is CS)
[snip]
> PRIME factors = 2 (mod 3) cannot be factored using CS. Although, I
> don't understand where you got the fact that the power used in the
> sieve, you used "r", must divide one of p-1 or q-1.

[Fact]
Given an integer r and a prime p,
there is an integer x with x != 0 and x != 1 such that x^r = 1 mod p
if and only if r divides p-1, i.e. r | p-1.

This fact is derived from the properties of finite fields
and cyclic groups.

[Fact] implies that if r does not divide p-1 and x^r = 1 mod p,
then x = 0 or 1.
In other words, the congruence a^r = b^r mod p implies a = b.

It might be better to search for the phrase "primitive root".

Satoshi Tomabechi

On Thu, 08 November 2001, Satoshi TOMABECHI wrote:
> David Cleaver wrote:
> > > Let r be a prime and hope that for n = pq, that a^r=b^r
> > > gives a=b mod p (or mod q). (r=2 is QS, r=3 is CS)
> [snip]
> > PRIME factors = 2 (mod 3) cannot be factored using CS. Although, I
> > don't understand where you got the fact that the power used in the
> > sieve, you used "r", must divide one of p-1 or q-1.
>
> [Fact]
> Given an integer r and a prime p,
> there is an integer x with x != 0 and x != 1 such that x^r = 1 mod p
> if and only if r divides p-1, i.e. r | p-1.
>
> This fact is derived from the properties of finite fields
> and cyclic groups.

Given r=30 and p=7,
let x be 2; 2 != 0 and 2 != 1, and
certainly 2^30 == 1 (mod 7),
but 30 does not divide 6.
I think the correct relationship is "r must be a multiple of a factor of
(p-1)", or gcd(r, p-1) > 1.
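[Both the counterexample and the corrected condition are one-liners to verify:]

```python
from math import gcd

# Phil's counterexample: r = 30, p = 7
assert pow(2, 30, 7) == 1     # a nontrivial 30th root of unity mod 7...
assert (7 - 1) % 30 != 0      # ...even though 30 does not divide p-1 = 6
assert gcd(30, 7 - 1) > 1     # the corrected condition: gcd(r, p-1) > 1
```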
Phil
Don't be fooled, CRC Press are _not_ the good guys.
They've taken Wolfram's money -- _don't_ give them yours.
http://mathworld.wolfram.com/erics_commentary.html

Sorry, I had some assumptions which in retrospect were not quite as
clear as I had thought. Let me fill in some lines:
1. a^r = b^r => c^r = 1 mod n, if gcd(b,n) = 1, with c = a*b^(-1)
2. r may be taken to be prime by a law of exponents: a^(bc) = (a^b)^c
3. a^r = b^r mod n yields gcd(a-b, n) a factor of n => a /= b
=> c /= 1
4. n = pq, c^r = 1 mod n => c^r = 1 mod p => r divides p-1, since c /= 1,
and similarly for q, in fact for any p dividing n. I am not detailing
the case where n is divisible by a prime power, but it is similar.
******************************
So, yes, p and q have to be prime, in your question below.
******************************
These are elementary results from Fermat's Little Theorem and Euler's
generalization thereof, and found in almost all basic number theory
books. Or you might try a google search. (google.com)
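[Step 4's conclusion can be checked numerically for small n = pq -- toy primes, small enough to enumerate every residue.]

```python
# If r divides neither p-1 nor q-1, then a -> a^r is a bijection mod p*q
# (via the Chinese Remainder Theorem), so a^r == b^r (mod n) forces a == b.
r, p, q = 3, 5, 11                  # 3 divides neither 4 nor 10
n = p * q
assert len({pow(a, r, n) for a in range(n)}) == n   # bijection, no collisions

p, q = 7, 13                        # 3 divides both 6 and 12
n = p * q
assert len({pow(a, 3, n) for a in range(n)}) < n    # collisions do exist
```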
In primenumbers@y..., David Cleaver <wraithx@m...> wrote:
> OK, I have a question for ya: (see below)
>
> mulkeyg@m... wrote:
> >
> > Let r be a prime and hope that for n = pq, that a^r=b^r
> > gives a=b mod p (or mod q). (r=2 is QS, r=3 is CS)
>
***************************
> Do both p and q, from above, have to be prime? If not then let me show
> you something (if they are just let me know).
***************************
>
> > Sadly, only r=2, QS, gives a *universal* method.
> > When r>2 we require at least one of p-1, q-1, to be
> > divisible by r, else the map a -> a^r is 1-1
> > mod both p and q, thus a^r=b^r implies a=b, so that
> > no usable solutions exist.
> >
> > If r=3, CS, the method may be used on numbers of the form
> > 6m+5, which are known to have exactly two prime factors,
> > since exactly one of two factors must be =1 mod 3. If 3
> > factors, method could still fail, with all factors =2
> > mod 3. Not exactly universal?
>
> If a number had 3 factors that were all = 2 mod 3, then (assuming like
> above that p or q can be composite) we can just multiply two of the
> factors together to get a number that has one of its divisors = 1 mod 3
******************************
sorry, but some *prime* factor must be 1 mod 3, by the above.
so there are *many* numbers for which a CS would never find c^3=1
with c /= 1 mod n
******************************
> [this is cuz 2*2 = 1 (mod 3) for those wondering] and we could therefore
> use the CS to factor this number.
I have seen a reference to a similar method by
Hiroshi, Naoki, and Akira at sig@ips_j (?)
in IPSJ SIGNotes by The Information Processing Society of
Japan. I don't have access to this, and would appreciate
a copy of the paper, or a good link to it.
It is algorithm No. 046003
Thanks,
Milton L. Brown
miltbrown@...
----- Original Message -----
From: "Satoshi TOMABECHI" <tomabeti@...>
To: "Prime numbers group" <primenumbers@yahoogroups.com>
Sent: Thursday, November 08, 2001 11:52 PM
Subject: Re: [PrimeNumbers] Re: New Factoring Method...

Milton L. Brown wrote:
> I have seen a reference to a similar method by
> Hiroshi, Naoki, and Akira at sig@ips_j (?)
> in IPSJ SIGNotes by The Information Processing Society of
> Japan. I don't have access to this, and would appreciate
> a copy of the paper, or a good link to it.

Since I am not a member of the society,
I could not get a copy of the paper.
The abstract of the paper says:

Factoring an integer N is interpreted as finding
a lattice point (x,y) on the hyperbolic curve xy=N.
In this paper we propose a method that determines
lattice points in the neighbourhood of the hyperbolic
curve.

# Why is my post quoted? It has nothing to do with the wisdom of Brown.

Satoshi Tomabechi

David Broadhurst wrote:
> Re: New Factoring Method...
> > Hiroshi, Naoki, and Akira
> None of these surnames produced a single
> hit in MathSciNet Reviews since 1970.

They are not surnames. Their full names are
Hiroshi Nagase, Naoki Takeda, and Akira Nagai.
If you can read Japanese, please visit
http://www.ipsj.or.jp/members/SIGNotes/Jpn/16/1995/046/article003.html
Their paper is also published in Japanese.

Satoshi Tomabechi

Satoshi Tomabechi explained:
> Their full names are
> Hiroshi Nagase, Naoki Takeda, and Akira Nagai.

Thanks Satoshi; I suspected that Milton had mistaken something.
I found:

Items Authored by Nagase, Hiroshi
[1] 83d:93061 Haratani, Naomi; Nagase, Hiroshi; Takahashi, Shinichi
Realization of two-dimensional recursive filters.
Electron. Comm. Japan 61 (1978), no. 8, 10-19 (1980).
[2] 83d:93029 Tajima, Jun; Nagase, Hiroshi; Takahashi, Shinichi
Synthesis problem of voltage transfer function matrix.
Electron. Comm. Japan 61 (1978), no. 5, 17-25 (1980).

but nothing for the other two names.

Broadhurst, D*
(hits = 18 :)