
RE: [feyerabend-project] RE: AI

  • Mike Beedle
    Apr 3, 2002
      Patrick D. Logan wrote:
      [snip]
      > Regarding what's happening with AI and the current XML & RPC
      > technologies... I wonder if there is some value
      > in looking at this in a static vs. dynamic dichotomy.
      > Kind of a "BIG AI UP FRONT" vs. "DO THE SIMPLEST AI
      > THAT COULD POSSIBLY WORK" dichotomy?
      >
      > The latter can be seen in this collection of papers,
      > Cambrian Intelligence by Rodney Brooks...
      >
      > http://www.amazon.com/exec/obidos/ASIN/0262522632/qid=1017757865/sr=2-2/ref=sr_2_2/104-1193081-9073537
      >
      > XML technologies in general seem to fit in those two buckets,
      > static and dynamic. Some uses of XML appear to be
      > "HEAVYWEIGHT XML". More complicated uses of SOAP, WSDL,
      > UDDI, etc. Maybe RDF, but I have not read too
      > much about it. "LIGHTWEIGHT XML" could include XML-RPC, etc.
      [snip]

      Patrick:

      Thanks for the links.

      I do believe in "evolutionary" design but I don't believe
      in "choosing the wrong tool for the job".

      The major problems I see come from constraining the very
      thing we are proposing in the first place (distributed AI)
      from the get-go -- and with no clear way to fix them.

      For example, you can't do mobility: you can't migrate an
      agent, or any "executing" part of an agent, through DAML
      to another location, because you can't send functions,
      classes, patterns, rules, etc. This means no
      "genetic programming" over the network, for example; no
      true distributed BPM (business process management), where
      the workflows and the business and workflow rules travel,
      compete, and execute elsewhere; no ontology rules dynamically
      installed "as is" by being transferred over the network; etc.
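      To make the contrast concrete, here is a minimal sketch (in
      Python, purely for illustration -- `install_rule`, `rule_source`,
      and the discount rule are all made-up names, not anyone's real
      API) of what mobility looks like in a dynamic language: behavior
      travels as source text and is installed "as is" at the far end.

```python
# Hypothetical sketch: shipping behavior, not just data, between nodes.
# A dynamic language can treat code as data, so a business rule can be
# sent over the wire and executed at the receiving node.

# The "sending" node serializes a rule as source text.
rule_source = """
def discount(order_total):
    # Business rule: 10% off orders over 100.
    return order_total * 0.9 if order_total > 100 else order_total
"""

# The "receiving" node compiles and installs it dynamically.
def install_rule(source, name):
    namespace = {}
    exec(compile(source, "<wire>", "exec"), namespace)
    return namespace[name]

discount = install_rule(rule_source, "discount")
print(discount(200.0))  # the migrated rule now executes locally
```

      A DAML/XML document, by contrast, can only *describe* such a
      rule; the receiving node would still need the logic rebuilt
      locally before anything could run.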

      I could go on, and on, and on.

      Ten years down the road I have a strong feeling we _will_
      want to do something else, more advanced, but then we
      will realize that we chose the wrong paradigm to implement
      "distributed AI". Our code will look ugly and messy, and it
      will be a nightmare to debug. The equivalent AI knowledge
      in terms of code and libraries will be "unusable", and
      only a small fraction of it will be reproduced in
      the "Semantic Web". But worst of all, we won't have anywhere
      to go. Once you choose XMLish technologies, your
      only programming choice, to be able to do mobility or
      the like, is to put XSLT on steroids (Yuck!!)... make
      it do what LISP does, so to speak.

      I personally don't like to see that future. I think we
      can do much better than that:

      we already have the tools...

      They are just not all that popular.

      The way I'd make the comparison: it is as if you asked
      someone to choose between a free, available 1958
      Porsche whose engine still runs well, and an expensive
      1991 skateboard that needs a small gas motor to
      reach 30 mph, is hard to use, and comes with
      no safety warranty. (I like this analogy
      because no matter what you do, you won't be able
      to fit a much larger engine on the skateboard...)

      Which of the two would you rather take out for a drive?

      - Mike