
Re: [feyerabend-project] Re: Living Metaphor of Organizations (was RE: Grand challenge: Language evolution (Re: Autonomic email client?))

  • Pascal Costanza
    Dec 3, 2001
      Joseph Bergin wrote:

      > We realize of course that not knowing how to do something constrains what
      > we can build. But we should also take the attitude that what should be
      > built (what is ethical, desirable, human potential enhancing...) should
      > also constrain how we build it.

      I totally agree with Joe, especially given one of the proposed
      characteristics of autonomic computing systems:

      "5. A virtual world is no less dangerous than the physical one, so an
      autonomic computing system must be an expert in self-protection. It must
      detect, identify and protect itself against various types of attacks to
      maintain overall system security and integrity."

      This makes me feel uneasy. Does "self-protection" include protection
      against humans trying to manipulate/control/influence the system?
      Where's the difference between an attempt at manipulation and an attack?
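      The manipulation-vs-attack question can be made concrete with a toy
      sketch (all names here are hypothetical, not from any real autonomic
      system): a policy that guards its own protection settings sees only
      the action, never the intent, so it must treat its owner and an
      attacker identically.

```python
from dataclasses import dataclass


@dataclass
class Command:
    issuer: str   # "owner", "admin", "unknown", ... (unverifiable in general)
    action: str   # e.g. "read_log", "disable_protection"


# Actions that touch the system's own protection machinery.
PROTECTED_ACTIONS = {"disable_protection", "rewrite_policy"}


def self_protect(cmd: Command) -> bool:
    """Return True if the command is allowed.

    The policy cannot distinguish a legitimate operator override from an
    attack: it either refuses both, or trusts a credential that an
    attacker might equally well present.
    """
    return cmd.action not in PROTECTED_ACTIONS


# Both the owner and an attacker are rejected alike:
print(self_protect(Command("owner", "disable_protection")))    # False
print(self_protect(Command("unknown", "disable_protection")))  # False
print(self_protect(Command("owner", "read_log")))              # True
```

      The point of the sketch is only that "self-protection" collapses the
      distinction the paragraph above asks about: whatever channel lets a
      human legitimately influence the system is exactly the channel an
      attacker would use.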

      I am not thinking about extreme Terminator/Schwarzenegger (or HAL/Space
      Odyssey, depending on which generation you belong to ;) scenarios here,
      but, at least, there's a philosophical question involved: How much
      "self-protection" do we have to achieve in order to really call a system
      "self-protecting"? Is there a logical paradox involved - can we only
      achieve true "self-protection", if this includes protection against
      ourselves? Is this amoral?

      Perhaps there's no logical paradox, but merely a contradiction in our
      goal, trying to produce "independent" systems that, on the other hand,
      should still do what we want. This contradiction might unconsciously
      keep us from solving the inherent problems.

      Of course, when talking about "self-protection", we naturally think of
      "protection against bad things" while "ensuring good things". But this
      is thinking in black and white, and I assume that things are much more
      complicated in practice.

      > Here I think we are fortunate that the social constraints also make it more
      > rather than less buildable. We do know how to build systems that learn from
      > human feedback much better than completely autonomous _intelligent_ agents.
      > Joe


      ----------------------> Extended Deadline <----------------------
      ... for the Second German Workshop on AOSD: December 10, 2001 ...

      Pascal Costanza
      Email: costanza@...
      University of Bonn, Institute of Computer Science III
      Roemerstr. 164, D-53117 Bonn (Germany)
      Fon: +49 (0)228 73-4505
      Fax: +49 (0)228 73-4382
      Homepage: http://www.pascalcostanza.de