- Just sharing some views on distributed AI, and trying to rally some
support against XML as the technology to do it.
AI made The Economist recently,
and The Economist predicts a strong future for AI:
Why is this interesting? Because, by definition, if The Economist
covers it... it has to be big enough, at least in perception.
These are some of the drivers:
1) "The semantic Web" and related technologies:
2) Darpa with DAML:
3) Microsoft with .NET, check:
Actively advertising "intelligent" technologies.
4) Intelligence required for B2B, ebXML, RosettaNet, EAI, etc.
EAI/Workflow: App Servers, EAI, Queues (MQ, Open JMS, etc.), Web Services,
Workflow, and Business Process Management (Vitria, etc.)
5) IBM's biggest iron is advertised as "intelligent":
6) Lifestreams, Linda on the Web, etc.; all requiring some basic
AI techniques like pattern-matching, and distributed agents that
know about ontologies.
So, AI seems to be coming back from many different, but _major_, areas:
Tim Berners-Lee (inventor of HTTP/HTML)
Business Standards (B2B, ebXML, EAI, RosettaNET, etc.)
Research Projects (aLife, Biological Metaphors, Digital Life)
But help me cut through some Gordian Knots... why does AI have to
be implemented through RPC, client-server, XMLish technologies that:
1) have many layers of bloat (serializations/deserializations),
2) bring discomfort and confusion by introducing
"disconnected layered languages", and
3) don't have the appropriate semantics, facilities, libraries
and power to do AI jobs?
As early as 1975 (written in 1975, published in 1982), "intelligent"
business exchanges were proposed, like "The Common Business
Communication Language" (CBCL), where basic exchanges like:
(REQUEST-QUOTE (ADJECTIVE (PENCILS #2) YELLOW))
have replies like:
(WE-QUOTE (OUR-STOCK-NUMBER A7305))
Even KQML and other ACLs are LISP-like.
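For example, a typical KQML performative looks something like this (a
sketch from memory; the exact fields vary by implementation, and the
agent names are made up):
(ask-one
  :sender     client-agent
  :receiver   stock-server
  :content    (PRICE IBM ?price)
  :reply-with q1
  :ontology   NYSE-TICKS)
Same parenthesized, symbolic style all the way down.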
You can also do this with X12 and now XML, but using a LISP-like syntax,
we can _also_ send rules, computable things (classes, functions, patterns,
etc.), do pattern-matching, send/share ontologies, do knowledge exchanges,
etc. So, imo, the infrastructure that LISP provides is superior
for doing AI because it:
1) provides a larger number of existing resources (libraries,
programs, etc.) for:
a) knowledge representation
b) logic programming
c) expert systems
d) genetic programming
e) game playing (plans, strategies, intentions, actions, etc.)
f) parsing natural languages
all of which are important for what we want to implement;
2) requires the least amount of conversion
(serialization/deserialization) when the app servers are
themselves LISP-based;
3) provides greater computational power; and
4) is more intuitive, since the parsing language can be
the same as the exchange language.
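As a minimal sketch of point 4 (plain Scheme; the handler names are
invented just for illustration), "parsing" an incoming message is
nothing more than READ, because the wire format already is the host
language's data format:

(define (handle-message port)
  (let ((msg (read port)))                     ; s-expression straight off the wire
    (case (car msg)
      ((REQUEST-QUOTE) (quote-for (cdr msg)))  ; hypothetical handlers
      ((WE-QUOTE)      (record-quote (cdr msg)))
      (else (error "unknown performative" msg)))))

No intermediate schema, no marshalling layer; the message arrives as the
same lists the program itself is written in.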
To me, it doesn't make any sense to reinvent the AI wheel with
XMLish technologies... this may in fact contribute
to the second commercial failure of AI.
- Thanks for all that great information, Mike.
Another entry high on my list is Understanding Computers and Cognition,
Winograd and Flores.
Regarding what's happening with AI and the current XML & RPC technologies...
I wonder if there is some value in looking at this as a static vs. dynamic
dichotomy. Kind of a "BIG AI UP FRONT" vs. "DO THE SIMPLEST AI THAT COULD
POSSIBLY WORK" dichotomy?
The latter can be seen in this collection of papers, Cambrian Intelligence
by Rodney Brooks...
XML technologies in general seem to fit in those two buckets, static and
dynamic. Some uses of XML appear to be "HEAVYWEIGHT XML": the more
complicated uses of SOAP, WSDL, UDDI, etc., and maybe RDF, though I have
not read too much about it. "LIGHTWEIGHT XML" could include XML-RPC, etc.
In general each generation of software technology seems to reinvent a lot of
ideas, for better or worse. Rather than build the same ideas into the
current layers of technology (DotNet, Java, SOAP, XSD Schema, HTTP, etc.)
maybe it would be useful to view these technologies as a "NEW ASSEMBLER".
Instead of PDP-11 assembler or worse, and building from the bits up, we get
to build a new dynamic base on top of some fairly powerful components.
Rather than building "In Java", we get to build "On Java". I've been doing a
good bit of programming in JScheme, which is "On Java" and so can take
advantage of every Java class on the Internet. It has a simple syntax,
using Java reflection to make this painless. Java and all those classes
are the assembler language. (To the point where JScheme code can be
"compiled" to a .class to remove the reflection overhead.)
The result is powerful *and* lightweight. I think the simplest AI that could
possibly work could be built on this platform. Perhaps more emphasis is
required on simplicity, pulling us up out of the complexity of the current
popular technology. Today's technology is so much better than yesterday's,
but I think we forget how complex it *still* is nevertheless!
- Patrick D. Logan wrote:
> Regarding what's happening with AI and the current XML & RPC[snip]
> technologies... I wonder if there is some value
> in looking at this in a static vs. dynamic dichotomy.
> Kind of a "BIG AI UP FRONT" vs. "DO THE SIMPLEST AI
> THAT COULD POSSIBLY WORK" dichotomy?
> The latter can be seen in this collection of papers,
> Cambrian Intelligence by Rodney Brooks...
> XML technologies in general seem to fit in those two buckets,
> static and dynamic. Some uses of XML appear to be
> "HEAVYWEIGHT XML". More complicated uses of SOAP, WSDL,
> UDDI, etc. Maybe RDF, but I have not read too
> much about it. "LIGHTWEIGHT XML" could include XML-RPC, etc.
Thanks for the links.
I do believe in "evolutionary" design but I don't believe
in "choosing the wrong tool for the job".
The major problems that I see are related to constraining
the very thing we are proposing in the first place
(distributed AI) from the get-go -- and with no clear ways
to fix them.
For example, you can't do mobility: you can't migrate
an agent, or any "executing" part of an agent, through DAML
to another location, because you can't send functions,
classes, patterns, rules, etc. This means no
"genetic programming" over the network, for example; no
true distributed BPM (business process management), where
the workflows and the business/workflow rules travel
and compete and execute elsewhere; no ontology rules dynamically
installed "as is" by being transferred over the network; etc.
I could go on, and on, and on.
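To make one of these concrete, here is a minimal sketch (plain Scheme;
the ports, the ORDER-TOTAL accessor and the vetting step are all
invented for illustration) of the kind of mobility I mean: the rule
travels as ordinary data, and the remote side turns it back into
running code.

;; Sender: write the rule straight onto the connection.
(define discount-rule
  '(lambda (order)
     (if (> (order-total order) 1000)
         (* 0.95 (order-total order))   ; 5% off large orders
         (order-total order))))
(write discount-rule out-port)

;; Receiver: read it back and, after whatever vetting you trust,
;; turn it into a live function and apply it.
(define rule (eval (read in-port) (interaction-environment)))
(rule some-order)

With DAML or plain XML there is simply no counterpart to that last EVAL.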
10 years down the road I have a strong feeling we _will_
want to do something else, more advanced, but then we
will realize that we chose the wrong paradigm to implement
"distributed AI". Our code will look ugly, messy and it
will be a nightmare to debug. The equivalent AI knowledge
in terms of code and libraries will be "unusable" and
only a small fraction of it will be reproduced in
the "Semantic Web". But worse of all, we won't have a
way to go. Once you choose XMLish technologies your
only programming choice, to be able to do mobility or
such, is to put XSLT on steroids (Yuck!!).... make
it do what LISP does so to speak.
I personally don't like to see that future. I think we
can do much better than that:
we already have the tools...
They are just not all that popular.
The way I make this comparison is as if you asked
someone to choose between a free, available 1958
Porsche whose engine still runs well, and an expensive
1991 skateboard that needs a small gas engine to
get up to 30 mph, that is hard to use, and that
comes with no safety warranty. (I like this analogy
because no matter what you do, you won't be able
to fit a much larger engine on the skateboard...)
Which of the two would you rather take out for a drive?
- Patrick wrote:
> > Which of the two would rather take out for a drive?
> I am with you and the 1958 Porsche to ride alongside the skateboard.
These are good examples of such "fast running" vehicles.
This _is_ the "semantic web" now ... All of these applications
are INTERNET killer apps built on LISP and I doubt any
of the upcoming "semantic web" apps could ever do better than
this in functionality, performance, maintainability, features, etc.:
Yahoo! Store (was Viaweb):
Fujitsu's INTERSTAGE AGENTPRO:
There's a few more ....