Re: [hackers-il] Do Faster Computers Imply Slower Code?

  • Nadav Har'El
    Message 1 of 24, Jun 2, 2002
      On Sun, Jun 02, 2002, Oleg Goldshmidt wrote about "Re: [hackers-il] Do Faster Computers Imply Slower Code?":
      > One side of it is that nowadays people routinely write and run
      > programs that are too heavy for older computers. For instance, I

      Yes. An interesting paper about the problems people faced on older
      computers with very limited memories and slow CPUs is M. D. McIlroy's
      "Development of a Spelling List", http://www.cs.dartmouth.edu/~doug/spell.ps.gz

      This paper was published in 1982 (IEEE Trans. on Communications) and
      describes how the UNIX "spell" works. Many of the design choices that
      you'll find there were dictated by the wish for "spell" to be quick
      and to take little (by our standards) memory.

      A programmer today might have forgone all this (interesting) waste of
      time and used the STL (for example) to hold the data structures. This
      might make spellchecking a 5500-word document take 2 seconds and 2MB of
      memory instead of 0.2 seconds and 200KB of memory - perfectly acceptable
      now, but not at the time when McIlroy wrote his spell-checker, when
      checking that 5500-word document using his algorithms took as much as
      65 seconds on his PDP11/70 and put a real strain on its memory (which
      was as little as 64 KB, if I remember correctly; this was a 1975 machine
      whose 22-bit memory management was limited to 4MB, see
      http://www.village.org/pdp11/faq.pages/11model.html and
      http://www.psych.usyd.edu.au/pdp-11/11_70.html).
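
      (To make the comparison concrete, here is a minimal sketch of the
      "lazy" modern approach - my illustration, not McIlroy's code, and it
      assumes the conventional Unix word-list path: the entire dictionary
      simply goes into an in-memory STL set.)

          // Naive spell checker: hold the whole dictionary in a std::set.
          // McIlroy's 1982 version could not afford this and used affix
          // stripping plus a compressed hash to fit in tens of kilobytes.
          #include <fstream>
          #include <iostream>
          #include <set>
          #include <string>

          int main(int argc, char** argv) {
              if (argc < 2)
                  return 1;                        // usage: spell <document>

              std::set<std::string> dict;          // megabytes, not kilobytes
              std::ifstream words("/usr/share/dict/words");
              std::string w;
              while (words >> w)
                  dict.insert(w);                  // O(log n) per insert

              std::ifstream doc(argv[1]);
              while (doc >> w)
                  if (dict.find(w) == dict.end())  // O(log n) per lookup
                      std::cout << w << "\n";      // report unknown words
              return 0;
          }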

      --
      Nadav Har'El | Sunday, Jun 2 2002, 22 Sivan 5762
      nyh@... |-----------------------------------------
      Phone: +972-53-245868, ICQ 13349191 |A Life? Cool! Where can I download one of
      http://nadav.harel.org.il |those from?
    • Ofir Carny
      Message 2 of 24, Jun 2, 2002
        The STL has more performance problems than just lazy developers. If I
        remember correctly, I once used an STL list as a queue, in code
        developed on GNU/Linux, and called its size() method. This works great
        as long as you don't use the MS implementation, which, instead of
        keeping the size in a variable, actually counts the nodes in the
        list... I then ported to MS Windows...

        The STL specification recommends (I think the word it uses is
        'should') an O(1) implementation, and MS do have a good reason for
        their choice, but as long as different implementations use different
        algorithms, STL code is not portable, at least performance-wise.
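
        (A minimal sketch of the trap - my reconstruction, not the original
        code. Under the pre-C++11 rules, list<T>::size() was allowed to be
        O(n), so a loop that calls it per iteration degrades to O(n^2) on an
        implementation that counts nodes; empty() is the portable O(1) test.)

            // Draining a list-based queue: prefer empty() over size() == 0,
            // because size() may walk the whole list on some implementations.
            #include <iostream>
            #include <list>

            int main() {
                std::list<int> queue;
                for (int i = 0; i < 1000; ++i)
                    queue.push_back(i);

                long total = 0;
                while (!queue.empty()) {   // guaranteed O(1)
                    // calling queue.size() here instead could cost O(n)
                    // per call on a node-counting implementation
                    total += queue.front();
                    queue.pop_front();
                }
                std::cout << total << "\n";
            }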

        Ofir
      • Shlomi Fish
        Message 3 of 24, Jun 2, 2002
          On Sun, 2 Jun 2002, Ofir Carny wrote:

          > STL has more performance problems than just lazy developers: If I
          > remember correctly, I once used an STL list as a queue, in code
          > developed on GNU/Linux, and called its size() method. This works
          > great as long as you don't use the MS implementation, which,
          > instead of keeping the size in a variable, actually counts the
          > nodes in the list... I then ported to MS Windows...

          Gee, O(n) linked list length? What's next: arrays implemented as linked
          lists? Oh wait, they did that:

          http://groups.yahoo.com/group/hackers-il/message/433

          > The STL specification recommends (I think the word it uses is
          > 'should') an O(1) implementation, and MS do have a good reason
          > for their choice, but as long as different implementations use
          > different algorithms, STL code is not portable, at least
          > performance-wise.

          Sometimes, I am amazed at how APIs can have a brain-dead design. I
          think that when you call a function you expect it to have a
          reasonable time order of growth. You do agree that O(n) list length
          and O(n) array lookup times are unacceptable? Also, glib's hash and
          binary tree comparison functions do not accept context arguments
          (not to mention qsort() and bsearch()). Guy Keren suggested I use
          thread-local storage with them...
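
          (An illustrative sketch of the context problem - my example, not
          glib's or Guy's code: qsort()'s comparator sees only the two
          elements, so any extra context must leak through a global or
          thread-local variable, whereas an STL functor carries the context
          as a member.)

              // Sort numbers by distance from an "origin" chosen at runtime.
              #include <algorithm>
              #include <cstdlib>
              #include <vector>

              static int g_origin;  // context qsort() forces into a global

              extern "C" int cmp_distance(const void* a, const void* b) {
                  int da = std::abs(*(const int*)a - g_origin);
                  int db = std::abs(*(const int*)b - g_origin);
                  return da - db;
              }

              struct ByDistance {   // the functor owns its context
                  int origin;
                  bool operator()(int a, int b) const {
                      return std::abs(a - origin) < std::abs(b - origin);
                  }
              };

              int main() {
                  std::vector<int> v;
                  for (int i = 0; i < 10; ++i)
                      v.push_back((i * 7) % 10);

                  g_origin = 5;
                  std::qsort(&v[0], v.size(), sizeof(int), cmp_distance);

                  ByDistance cmp = { 5 };           // no global needed
                  std::sort(v.begin(), v.end(), cmp);
              }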

          Are there any other guidelines, besides these, that you think API
          vendors should follow?

          Regards,

          Shlomi Fish




          ----------------------------------------------------------------------
          Shlomi Fish shlomif@...
          Home Page: http://t2.technion.ac.il/~shlomif/
          Home E-mail: shlomif@...

          "Let's suppose you have a table with 2^n cups..."
          "Wait a second - is n a natural number?"
        • Shlomi Fish
          Message 4 of 24, Jun 2, 2002
            On 2 Jun 2002, Oleg Goldshmidt wrote:

            > Shlomi Fish <shlomif@...> writes:
            >
            > > I had a discussion with two distinguished haifuxers the other day,
            > > about whether the fact that computers are becoming faster, has a
            > > negative impact on the algorithms that people use inside their
            > > code. It is a well accepted fact that some newer programs cannot be
            > > effectively run on slower computers.
            >
            > One side of it is that nowadays people routinely write and run
            > programs that are too heavy for older computers. For instance, I
            > witnessed myself the transition from extensive research into advanced
            > algorithms, approximation techniques, and numerical methods to
            > straightforward heavy simulations (in science and finance) which
            > happened simply because now such simulations are feasible even on a
            > "PC". Simpler, more generic and straightforward algorithms tend to be
            > developed faster, with fewer mistakes, and are usually more generic,
            > flexible and extendable than highly optimized, custom ones.
            >

            Isn't it possible to write APIs that would abstract some of that
            functionality away, making the simulations faster but just as
            straightforward to program? In the Freecell solving world, there
            is a friendly competition among Freecell solver coders in which
            they (including me) try to make their solvers better at solving a
            range of deals. This is one case where you can't rely on the speed
            of the computer to save you, because a comparable solver would
            still run faster.

            Do you think we will always find certain tasks that will require more
            computing power than we have? Examples I can think of at present:

            1. Calculating the air thrust on airplanes at sub-sonic speeds (a
            very heavy task).

            2. The choice of programming language can dictate more power:
            Slashdot runs on 5 pretty spiffy servers because it is written in
            Perl and uses an SQL database.

            3. Solving a range of games.

            4. Chess playing a la Deep Blue (though I heard that the Sony
            PlayStation 3 would have more power than it).

            5. Other heavy simulations - Orna can testify that at Raphael
            they have a cluster of Linux workstations, and they still need to
            run some jobs overnight.

            6. Searching a lot of information - Google has a farm of several
            thousand computers.

            7. More generally - being able to sustain a large amount of load.

            Regards,

            Shlomi Fish




            ----------------------------------------------------------------------
            Shlomi Fish shlomif@...
            Home Page: http://t2.technion.ac.il/~shlomif/
            Home E-mail: shlomif@...

            "Let's suppose you have a table with 2^n cups..."
            "Wait a second - is n a natural number?"
          • Omer Musaev
            Message 5 of 24, Jun 2, 2002
              On Sunday 02 June 2002 13:56, Shlomi Fish wrote:
              > On 2 Jun 2002, Oleg Goldshmidt wrote:
              > > Shlomi Fish <shlomif@...> writes:
              > > One side of it is that nowadays people routinely write and run
              > > programs that are too heavy for older computers. For instance, I
              > > witnessed myself the transition from extensive research into advanced
              > > algorithms, approximation techniques, and numerical methods to
              > > straightforward heavy simulations (in science and finance) which
              > > happened simply because now such simulations are feasible even on a
              > > "PC". Simpler, more generic and straightforward algorithms tend to be
              > > developed faster, with fewer mistakes, and are usually more generic,
              > > flexible and extendable than highly optimized, custom ones.
              >
              > Isn't it possible to write APIs that would abstract some of that
              > functionality away?

              I guess it is possible:)

              However, the process of abstracting that functionality away in
              favor of readability/clarity is not trivial.

              As one way (not the only one, and IMHO not the one preferred by
              most hackers-il members) to manage that process, I can recall
              "Policy-based Design".

              The application of Policy-based Design is described nicely in
              MC++D ch. 1 (http://www.aw.com/samplechapter/0201704315.pdf).

              The quintessence of Policy-based Design is that a clear
              separation is maintained between needed functionality and
              existing functionality. Wherever there are multiple ways to get
              the needed functionality from entities that provide the
              existing functionality, a policy is defined.
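
              (For concreteness, a minimal sketch of the idea - my
              illustration with made-up names, not code from MC++D: the host
              class defers its storage strategy to a policy chosen at compile
              time.)

                  // Policy-based design: one host class, two storage
                  // policies. The host requires only that a policy provide
                  // add() and size(); how it does so is the policy's business.
                  #include <cstddef>
                  #include <iostream>
                  #include <list>
                  #include <vector>

                  template <typename T>
                  struct VectorStorage {           // contiguous, O(1) indexing
                      std::vector<T> data;
                      void add(const T& x) { data.push_back(x); }
                      std::size_t size() const { return data.size(); }
                  };

                  template <typename T>
                  struct ListStorage {             // node-based, cheap splicing
                      std::list<T> data;
                      void add(const T& x) { data.push_back(x); }
                      std::size_t size() const { return data.size(); }
                  };

                  template <typename T, template <typename> class StoragePolicy>
                  class Bag : public StoragePolicy<T> {};

                  int main() {
                      Bag<int, VectorStorage> a;
                      Bag<int, ListStorage> b;
                      a.add(1); b.add(2);
                      std::cout << a.size() + b.size() << "\n";  // prints 2
                  }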


              > Do you think we will always find certain tasks that will require more
              > computing power than we have? Examples I can think of at present:

              Yes, IMHO there will always be such tasks.


              > Regards,
              >
              > Shlomi Fish
              [40 lines of junk deleted]
            • Nadav Har'El
              Message 6 of 24, Jun 2, 2002
                On Sun, Jun 02, 2002, Omer Musaev wrote about "Re: [hackers-il] Re: Do Faster Computers Imply Slower Code?":
                > The application of Policy based Design is described nicely in
                > MC++D cp.1 (http://www.aw.com/samplechapter/0201704315.pdf)

                I feel really silly asking this, but which book is this
                chapter a part of?

                --
                Nadav Har'El | Sunday, Jun 2 2002, 22 Sivan 5762
                nyh@... |-----------------------------------------
                Phone: +972-53-245868, ICQ 13349191 |Can Microsoft make a product that doesn't
                http://nadav.harel.org.il |suck? Yes, a vacuum cleaner!
              • Shlomi Fish
                Message 7 of 24, Jun 2, 2002
                  On Sun, 2 Jun 2002, Chen Shapira wrote:

                  >
                  >
                  > > The STL specification recommends (I think the word it uses is
                  > > 'should') an O(1) implementation, MS do have a good reason for
                  > > their choice, but as long as different implementations use
                  > > different algorithms, STL code is not portable, at least
                  > > performance-wise.
                  >
                  > 1. Checking the specification here:
                  > http://www.sgi.com/tech/stl/index.html
                  > The only complexity specification for Deque there is the
                  > general one for a Container - that is, linear in the
                  > number of elements. So it actually is standard-conforming,
                  > but less efficient than other implementations. You were
                  > counting on a non-standard feature.
                  >
                  > 2. I have no pointer here, but AFAIK, they actually
                  > compute the size of the deque by pointer arithmetic. They
                  > keep the deque in several fixed-size arrays, and they go
                  > over those arrays. It's still O(n), but not as silly as
                  > going over all the elements.

                  One can explicitly track the size of the deque, according to
                  the number of elements that have been inserted and extracted.
                  It makes modifications slightly slower, but length lookup
                  much faster.
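
                  (A minimal sketch of that idea - my illustration, not code
                  from the thread: a thin wrapper maintains the count itself,
                  so size() is O(1) no matter what the underlying container
                  does.)

                      // Queue wrapper with an explicit element counter.
                      #include <cstddef>
                      #include <list>

                      template <typename T>
                      class CountedQueue {
                          std::list<T> items_;
                          std::size_t count_;      // updated on every push/pop
                      public:
                          CountedQueue() : count_(0) {}
                          void push(const T& x) {
                              items_.push_back(x);
                              ++count_;
                          }
                          T pop() {                // precondition: not empty
                              T front = items_.front();
                              items_.pop_front();
                              --count_;
                              return front;
                          }
                          std::size_t size() const { return count_; }  // O(1)
                      };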

                  As a side note, I should add that it is possible to
                  initialize an array in O(1) time, this time at the cost of
                  slower lookups (though still O(1) ones):

                  http://www.cis.ohio-state.edu/europa/projects/0015/#references
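
                  (A sketch of that classic trick - my reconstruction of the
                  well-known technique, not the code behind the link: an entry
                  counts as initialized only if two auxiliary arrays vouch for
                  each other, so no upfront clearing is needed.)

                      // Array with O(1) "initialization" and O(1) get/set.
                      #include <cstddef>
                      #include <vector>

                      class LazyArray {
                          // Note: std::vector zero-fills, which hides the
                          // benefit; with raw uninitialized buffers the
                          // construction really is O(1).
                          std::vector<int> value_;
                          std::vector<std::size_t> when_, which_;
                          std::size_t initialized_;

                          bool is_set(std::size_t i) const {
                              // Garbage in when_[i] fails the first test, so
                              // the second index is always in bounds.
                              return when_[i] < initialized_ &&
                                     which_[when_[i]] == i;
                          }
                      public:
                          explicit LazyArray(std::size_t n)
                              : value_(n), when_(n), which_(n),
                                initialized_(0) {}

                          void set(std::size_t i, int v) {
                              if (!is_set(i)) {       // first touch: record it
                                  when_[i] = initialized_;
                                  which_[initialized_++] = i;
                              }
                              value_[i] = v;
                          }
                          int get(std::size_t i, int dflt) const {
                              return is_set(i) ? value_[i] : dflt;
                          }
                      };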

                  Regards,

                  Shlomi Fish




                  ----------------------------------------------------------------------
                  Shlomi Fish shlomif@...
                  Home Page: http://t2.technion.ac.il/~shlomif/
                  Home E-mail: shlomif@...

                  "Let's suppose you have a table with 2^n cups..."
                  "Wait a second - is n a natural number?"
                • OmerM
                  Message 8 of 24, Jun 2, 2002
                    On Sunday 02 June 2002 14:39, Nadav Har'El wrote:
                    > On Sun, Jun 02, 2002, Omer Musaev wrote about "Re:
                    > [hackers-il] Re: Do Faster Computers Imply Slower Code?":
                    > > The application of Policy based Design is described nicely in
                    > > MC++D cp.1 (http://www.aw.com/samplechapter/0201704315.pdf)
                    >
                    > I feel really silly to ask this, but what book is this chapter a part of?

                    Oops. MC++D == "Modern C++ Design" by Andrei Alexandrescu.
                  • Chen Shapira
                    Message 9 of 24, Jun 2, 2002
                      > The STL specification recommends (I think the word it uses is
                      > 'should') an O(1) implementation, MS do have a good reason for
                      > their choice, but as long as different implementations use
                      > different algorithms, STL code is not portable, at least
                      > performance-wise.

                      1. Checking the specification here:
                      http://www.sgi.com/tech/stl/index.html
                      The only complexity specification for Deque there is the
                      general one for a Container - that is, linear in the
                      number of elements. So it actually is standard-conforming,
                      but less efficient than other implementations. You were
                      counting on a non-standard feature.

                      2. I have no pointer here, but AFAIK, they actually
                      compute the size of the deque by pointer arithmetic. They
                      keep the deque in several fixed-size arrays, and they go
                      over those arrays. It's still O(n), but not as silly as
                      going over all the elements.
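
                      (A rough sketch of that guess - my reconstruction, not
                      actual vendor code: with elements stored in fixed-size
                      blocks, size() costs one addition per block rather than
                      one step per element - still O(n), but with a much
                      smaller constant.)

                          // Sum the occupancy of each block instead of
                          // walking every element.
                          #include <cstddef>
                          #include <vector>

                          struct Block {
                              std::size_t used;  // elements held in this block
                              // T elems[kBlockSize]; (payload omitted here)
                          };

                          std::size_t deque_size(
                                  const std::vector<Block>& blocks) {
                              std::size_t total = 0;
                              for (std::size_t i = 0; i < blocks.size(); ++i)
                                  total += blocks[i].used;
                              return total;
                          }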
                    • Ofir Carny
                      Message 10 of 24, Jun 2, 2002
                        It was over a year ago, so my memory is not that clear, but see:

                        http://www.sgi.com/tech/stl/Container.html
                        "For many containers, such as vector and deque, size is O(1). This satisfies the requirement that it be O(N)."

                        The fact that this line is so hard to find in the
                        specs is another problem.

                        I am pretty sure that what I saw there was clearer
                        about the O(1) recommendation; it might have changed
                        since, though.

                        But this is beside the point: the point is the
                        inherent problem of using a library whose algorithmic
                        basis changes radically between implementations.

                      • Ofir Carny
                        Message 11 of 24, Jun 2, 2002
                          Replying to myself:

                          The document to look at is the ISO standard. Here is
                          the official excuse I was referring to:

                          "This choice is permitted by the C++ standard. The
                          standard says that size() "should" be constant time,
                          and "should" does not mean the same thing as "shall".
                          This is the officially recommended ISO wording for
                          saying that an implementation is supposed to do
                          something unless there is a good reason not to."

                        • Chen Shapira
                          Message 12 of 24, Jun 2, 2002
                            I see your point.

                            I'll just mention that the excuse is from SGI's
                            FAQ. This means that both SGI and MS preferred not
                            to use the extra variable, but to calculate the
                            size directly. Those are 2 of the 4 large and
                            common implementations of the STL (the other two
                            are ObjectSpace and STLport, and I'm not sure what
                            they did). So if it's both standard-conforming and
                            damn common, it should probably have been expected.
                            In any case, never again will I count on O(1)
                            performance in Deque::size.



                          • Oleg Goldshmidt
                            Message 13 of 24, Jun 2, 2002
                              Shlomi Fish <shlomif@...> writes:

                              > Sometimes, I am amazed at how APIs can have a brain-dead design.

                              This is not an API issue, it is an issue of the underlying data
                              structure and/or algorithms.

                              > I think when you call a function you expect it to have a reasonable
                              > time order of growth.

                              Putting your faith in the implementor.

                              > You do agree that O(n) list length and O(n)
                              > array lookup times are unacceptable?

                              Lists usually have O(n) length methods (this is a lisper talking). You
                              should use data structures that are suitable for your problem - that
                              is why there are both arrays and lists, in particular.

                              Note that the M$ STL problem alluded to was with
                              queue, not list. Standard algorithms texts do
                              not suggest using lists for queue
                              implementations, AFAIK.

                              --
                              Oleg Goldshmidt | ogoldshmidt@...
                              "A sense of the fundamental decencies is parceled out unequally at birth."
                            • Oleg Goldshmidt
                              Message 14 of 24, Jun 2, 2002
                                Shlomi Fish <shlomif@...> writes:

                                > Isn't it possible to write APIs that would abstract some of that
                                > functionality away?

                                What has it got to do with API? I was referring to the underlying
                                algorithms.

                                > Do you think we will always find certain tasks that will require more
                                > computing power than we have? Examples I can think of at present:

                                To add to your list: finance, hydrodynamics
                                (you mentioned some problems involving that,
                                but there are many more), cosmology, climate
                                models, (thermo)nuclear bombs (now that many
                                countries - albeit not all - have signed the
                                Comprehensive Test Ban Treaty), et cetera, et
                                cetera...

                                --
                                Oleg Goldshmidt | ogoldshmidt@...
                                "A sense of the fundamental decencies is parceled out unequally at birth."