
Factoring

  • David Litchfield
    Message 1 of 7 , Apr 2 12:43 PM
      Howdy all,
      Just a quick question.

      Given a number that is the result of multiplying two primes together, one can
      work out what these two primes are by taking the square root of the given
      number and attempting to divide the number by each whole number below the
      square root. This will, though admittedly laborious, eventually give the two factors.

      Assuming our number is, say, 1268428227, the square root is c. 35615 and
      therefore we'd need to try a maximum of 35615 numbers to get the factors
      using division. Obviously we only need to try odd numbers, so this could be
      reduced to 17807 attempts.

      If this number of attempts could be reduced further to 2374 (6% of the
      original 35615 attempts) would that be a "good result"? If indeed the number
      of attempts needed to get the factors _could_ be reduced to 2374, how does
      that compare with other methods of factoring?
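A minimal sketch of the trial division just described (the function name is illustrative; only 2 and the odd candidates up to the square root are tried):

```python
def trial_divide(n):
    """Return the smallest factor of n found by trial division,
    or n itself if no divisor exists below sqrt(n) (i.e. n is prime)."""
    if n % 2 == 0:
        return 2
    d = 3
    while d * d <= n:        # stop once d exceeds the square root
        if n % d == 0:
            return d
        d += 2               # even candidates can never divide an odd n
    return n

# For 1268428227 the loop stops almost at once, since 3 divides it.
```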

      Cheers,
      David Litchfield
    • Ravi Kiran.G
      Message 2 of 7 , Apr 2 10:11 PM
        Hi All,
        I'm a new entrant into this group and my interests in primes are
        confined to factorisation. So here's the mail: I have a similar kind of
        programme which does try to do what you said in the mail, but I think any
        regular and systematic method of trying to factorise a large integer is
        bound to fail, as it becomes exponentially more difficult as the number of
        digits increases. So I figured out that a semi-random approach would be
        best, but my programme has some subtle problems which I feel could be worked
        out if I knew what the previous methods for factorising large
        integers were and how they failed.
        So, sir, could you please tell me what the previous
        methods were (even the failed ones) for factorising large integers, or could
        you direct me to a website which may contain this information?

        yours sincerely,
        G.RAVI KIRAN.


        On Mon, 2 Apr 2001, David Litchfield wrote:

        > Howdy all,
        > Just a quick question.
        >
        > Given a number that is the result of multiplying two primes together, one can
        > work out what these two primes are by taking the square root of the given
        > number and attempting to divide the number by each whole number below the
        > square root. This will, though admittedly laborious, eventually give the two factors.
        >
        > Assuming our number is, say, 1268428227, the square root is c. 35615 and
        > therefore we'd need to try a maximum of 35615 numbers to get the factors
        > using division. Obviously we only need to try odd numbers, so this could be
        > reduced to 17807 attempts.
        >
        > If this number of attempts could be reduced further to 2374 (6% of the
        > original 35615 attempts) would that be a "good result"? If indeed the number
        > of attempts needed to get the factors _could_ be reduced to 2374, how does
        > that compare with other methods of factoring?
        >
      • paulmillscv
        Message 3 of 7 , Apr 22 4:48 AM
          Hello Phil,
          Nice comments. I had not heard of `the work factor' as a concept
          and parameter in algorithmic design. It seems a very natural way to
          unite the two fields of mathematics and computer science. Back in
          the 80s we were aware of the time-space trade off in chip design.
          Yes, firmware (a combination of algorithms in hardware) is very
          underexplored for practical implementation of algorithms. So it
          remains now to define a unit of work in mathematics/computer
          science. I imagine something like the 16 bit signed multiplication
          is the unit?

          Regards,
          Paul Mills.
        • Paul Leyland
          Message 4 of 7 , Apr 22 5:03 AM
            > Hello Phil,
            > Nice comments. I had not heard of `the work factor' as a concept
            > and parameter in algorithmic design. It seems a very natural way to
            > unite the two fields of mathematics and computer science. Back in
            > the 80s we were aware of the time-space trade off in chip design.
            > Yes, firmware (a combination of algorithms in hardware) is very
            > underexplored for practical implementation of algorithms. So it
            > remains now to define a unit of work in mathematics/computer
            > science. I imagine something like the 16 bit signed multiplication
            > is the unit?

            There are two "natural" units --- that is, units which have some
            theoretical justification and have some practical application.

            One work unit is "arithmetical operations in the underlying
            field/ring/group/whatever". Mathematicians tend to prefer this one as
            it sweeps under the carpet all the messy details of the machine
            architecture underlying actual computations. It also makes comparison
            between different classes of algorithms easier.

            The other one is "bit operations", which tends to be favoured by the
            engineers, programmers and other such pigs in the rose garden. This
            measure counts how many times a bit is
            set/reset/toggled/compared/whatever and is fairly clearly well matched
            to the operations of binary computers. It's relatively easy to cast
            bit-operations into the language of some other machines, classical
            Turing machines for instance, but the concept tends to fall down badly
            when applied to some of the weirder models of computation that have been
            proposed.
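As a crude illustration of the gap between the two measures (the op-counting model below is a simplification invented for this sketch, not a standard cost model): one multiply of two n-bit numbers is a single "arithmetical operation" at the field level, but schoolbook shift-and-add costs on the order of n^2 bit operations underneath.

```python
def schoolbook_multiply(a, b):
    """Multiply two non-negative integers by binary shift-and-add,
    counting a rough number of bit operations along the way."""
    product, bit_ops = 0, 0
    while b:
        bit_ops += 1                    # examine the low bit of b
        if b & 1:
            product += a
            bit_ops += a.bit_length()   # rough cost of one addition
        a <<= 1                         # shift for the next bit position
        b >>= 1
    return product, bit_ops

# One "arithmetical operation" to the mathematician...
product, bit_ops = schoolbook_multiply(1234, 5678)
# ...but many bit operations to the engineer.
```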

            The above applies only to computation per se. Some algorithms are
            limited not by computation but by storage or communications. For
            instance, it's possible to perform cryptographic key search for
            symmetrical algorithms in constant time. It's the codebook attack.
            Suppose you want to break 56-bit DES on a 64-bit block. The approach
            is simple: in a fixed time build a table of all possible blocks
            encrypted under all keys. When this one-off precomputation has been
            performed any DES-encrypted message can be broken with a single look-up
            in memory. Building the storage space is left as an exercise for the
            student.
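Scaled down to toy size, the codebook idea can be sketched like this (the 8-bit "cipher" is invented purely for illustration; with real DES the table would need an entry for every key and block, which is where the storage exercise bites):

```python
# Toy "cipher" with an 8-bit key and an 8-bit block (illustrative only).
def toy_encrypt(key, block):
    return ((block ^ key) + key) % 256

KNOWN_PLAINTEXT = 0x41

# One-off precomputation: encrypt the known block under every possible key.
codebook = {toy_encrypt(k, KNOWN_PLAINTEXT): k for k in range(256)}

def recover_key(ciphertext):
    # After the precomputation, "breaking" a message is one table look-up.
    return codebook[ciphertext]
```

(Different keys can collide on the same ciphertext for one known block; the look-up still yields a key consistent with the observation.)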

            Paul
          • alpertron
            Message 5 of 7 , Apr 22 5:23 AM
              --- In primenumbers@y..., "paulmillscv" <paulmillscv@y...> wrote:
              > Hello Phil,
              > Nice comments. I had not heard of `the work factor' as a concept
              > and parameter in algorithmic design. It seems a very natural way to
              > unite the two fields of mathematics and computer science. Back in
              > the 80s we were aware of the time-space trade off in chip design.
              > Yes, firmware (a combination of algorithms in hardware) is very
              > underexplored for practical implementation of algorithms. So it
              > remains now to define a unit of work in mathematics/computer
              > science. I imagine something like the 16 bit signed multiplication
              > is the unit?
              >
              > Regards,
              > Paul Mills.

              The unit depends on the algorithm. For example, in QS most of the time
              the computer is sieving. In this stage multiplication is not used,
              only addition and array access. But in ECM the unit is the modular
              multiplication.
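A rough sketch of the two kinds of inner loop being contrasted (the parameters are toy values, not taken from any real implementation):

```python
# QS-style sieving step: nothing but additions and array accesses.
def sieve_with_prime(log_sums, p, log_p, start):
    for i in range(start, len(log_sums), p):
        log_sums[i] += log_p   # accumulate log(p) at every multiple of p

# ECM-style unit of work: one modular multiplication.
def mulmod(a, b, n):
    return (a * b) % n
```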

              I compared the performance of my factoring applet located at:

              http://www.alpertron.com.ar/ECM.HTM

              For the ECM part of the program I found that the speed is almost the
              same as that of a program written in C (GMP-ECM) when the number to
              factor has fewer than 100 digits. This is because in this algorithm
              the unit is the modular multiplication, and I was able to write a fast
              modular multiplication routine.

              But for the Quadratic Sieve part of the program I found that the
              applet is about 3 times slower than factor.exe from the MIRACL
              package (written in C) and 10 times slower than PPSIQS (written in
              assembler). This is because most of the time the program must perform
              additions and array accesses. The latter are very slow in Java
              because it has to check the array bounds on every access.

              Best regards,

              Dario Alpern
              Buenos Aires - Argentina
              http://www.alpertron.com.ar/ENGLISH.HTM
            • Phil Carmody
              Message 6 of 7 , Apr 22 7:16 AM
                --- Paul Leyland <pleyland@...> wrote:
                [quoth P.M.]
                > > I imagine something like the 16 bit signed multiplication
                > > is the unit?
                >
                > There are two "natural" units --- that is, units which have some
                > theoretical justification and have some practical application.
                >
                > One work unit is "arithmetical operations in the underlying
                > field/ring/group/whatever". Mathematicians tend to prefer this one
                > as it sweeps under the carpet all the messy details of the machine
                > architecture underlying actual computations. It also makes
                > comparison between different classes of algorithms easier.
                >
                > The other one is "bit operations", which tends to be favoured by
                > the engineers, programmers and other such pigs in the rose garden.
                > This measure counts how many times a bit is
                > set/reset/toggled/compared/whatever and is fairly clearly well
                > matched to the operations of binary computers. It's relatively easy
                > to cast bit-operations into the language of some other machines,
                > classical Turing machines for instance, but the concept tends to
                > fall down badly when applied to some of the weirder models of
                > computation that have been proposed.

                Absolutely. The first one, with your wordy title, is so preferred by
                mathematicians that, by default, if you hear someone with a
                mathematical bent describing an algorithm, the units will simply be
                called "operations". I cite Crandall & Pomerance here, as they
                unambiguously raise the issue of the two measures in their /Prime
                Numbers: A Computational Perspective/.

                Describing the algorithm in terms of operations is the first stage,
                describing the implementation of those operations at the lowest level
                of your (maybe hypothetical) machine architecture is the second
                stage. For real-world work factors, you need to "multiply the two
                together", so to speak. Breakthroughs in either stage are welcomed,
                of course, but if you manage to improve a low-level primitive, then
                you may well have improved the work-factor of _all_ algorithms that
                make use of said primitive.

                However, it's not black and white! Many of the high level algorithms
                (e.g. find a quadratic non-residue), which rely on lower level
                primitives (e.g. expmods, which rely on mulmods) can be considered
                'low level primitives' to other algorithms!
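That layering can be made concrete in a small sketch: here expmod (square-and-multiply) is built on a mulmod primitive, and the number of mulmod calls is what a higher-level analysis would count as "operations" (the explicit counter is only for illustration):

```python
def expmod(base, exp, mod, counter):
    """Square-and-multiply exponentiation, counting calls to mulmod."""
    def mulmod(a, b):
        counter[0] += 1        # one unit of work at this layer
        return (a * b) % mod

    result = 1
    base %= mod
    while exp:
        if exp & 1:            # fold in the current power when the bit is set
            result = mulmod(result, base)
        base = mulmod(base, base)
        exp >>= 1
    return result

counter = [0]
print(expmod(3, 45, 100, counter), "using", counter[0], "mulmods")
```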

                This does mean that evaluating complexities can be very tricky
                indeed, as you sometimes have to rely on more than just 2 layers.
                Browse sci.crypt for subject lines including 'Bernstein' recently to
                see quite how up-in-the-air NFS's complexity is. (Sure, he's changed
                the 'space' parameters, but has given most of the time ones quite a
                shuffle in the process. So much so that no one yet has /any idea/ how
                useful the asymptotic complexity of his modified algorithm is for
                describing the process for the ranges of numbers it's likely to be
                used for.)

                Phil

              • Phil Carmody
                Message 7 of 7 , Apr 22 7:28 AM
                  --- paulmillscv <paulmillscv@...> wrote:
                  > Yes, firmware (a combination of algorithms in hardware) is very
                  > underexplored for practical implementation of algorithms. So it
                  > remains now to define a unit of work in mathematics/computer
                  > science. I imagine something like the 16 bit signed multiplication
                  > is the unit?

                  If you know that you will be spending all your time doing bignum
                  multiplies, then you will look at the best bignum multiplication
                  algorithm at the firmware level. You can do bignum multiplies in
                  almost constant time, but it all depends on how much silicon you
                  want to devote to it. If you know that you're unlikely to
                  go over 512 bits, then you might want a 512*512 multiplier unit for
                  example. Again, you have choices of whether you want to minimise
                  latency or not. If you can pipeline large quantities of operations,
                  then you don't need a low latency. If everything must be serialised,
                  then you've got to go for a more expensive low latency design.
                  (Either that or you explicitly work on two or more different problems
                  in parallel, but that in turn requires some smarts...)

                  Phil
