[Artificial Intelligence Group] Re: what not make sense for other creatures?

  • coveent
    Message 1 of 20, Sep 13, 2005
      First off, I would like to point out that sanity is a relative term;
      at least, that is the opinion I agree with.

      Second of all, when dealing with the concept of an artificial
      consciousness, references to human or any other biological
      consciousness should be COMPLETELY set aside.

      An artificial consciousness would only have the desires and drives
      that it is programmed to have. A survival desire would not exist
      unless it were programmed in, for example. For this reason, I do
      not think that machines taking over the world could happen unless
      they were programmed to do so. Asimov created the three laws of
      robotics in his short stories, and they are still referenced in
      current works because they make sense and they should ensure the
      safety of humans, as long as they are implemented. (Several of his
      stories deal with AI psychology and the laws of robotics, and they
      are quite interesting reading if you have not gotten around to
      them yet.)

      In an artificial sense, I would think that artificial insanity in an
      AI system would be a corruption of the code. Assuming that an AI
      system would have some manner of adaptive programming, the
      safeguards to protect against corruption would have to be built into
      the core "inflexible" code - something to prevent the system from
      being caught in an endless loop, for example.
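
      A minimal sketch of the kind of guard I have in mind (the step
      budget, names and state model are all invented for illustration,
      not a real design):

      import hashlib

      MAX_STEPS = 10_000  # hard budget enforced by the unalterable core

      def run_adaptive(step_fn, state, done):
          """Drive an adaptive routine, but refuse to let it loop forever."""
          seen = set()
          for _ in range(MAX_STEPS):
              if done(state):
                  return state
              fp = hashlib.sha256(repr(state).encode()).hexdigest()
              if fp in seen:  # exact state seen before: a genuine endless loop
                  raise RuntimeError("repeated state - core safeguard tripped")
              seen.add(fp)
              state = step_fn(state)
          raise RuntimeError("step budget spent - core safeguard tripped")

      # An adaptive step that (buggily) cycles forever gets caught:
      try:
          run_adaptive(lambda s: (s + 1) % 7, 0, done=lambda s: s == 100)
      except RuntimeError as err:
          print(err)  # repeated state - core safeguard tripped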

      Just a trailing thought ... if "Artificial Insanity" is the
      corruption of the code in a changing system, does that mean that
      Windows is insane?

      Just some thoughts.

      Andrew



      --- In artificialintelligencegroup@yahoogroups.com, Andre S Clements
      <aclements@i...> wrote:
      > ... interesting points all round. "artificial intelligence"... what
      > happens when, instead of focusing on Artificial Intelligence, we
      > were to focus on "Artificial Sanity"? What prompts me to ask this
      > is considering how, while imagination is surely a critical
      > component of projection and conjecture, unchecked it very easily
      > leads to psychosis and all manner of pathology, individually and
      > systemically, perhaps because of a (potential to) drift away from
      > validity.
      >
      > Of course defining sanity won't be any easier than defining
      > intelligence, and I don't think we should settle for sanity=normal
      > either - that's a cop-out if ever there was one - which leads me
      > to wonder if anyone has looked at the implications of
      > salutogenesis theory in this field, modeling the system around
      > Antonovsky's criteria for coherence - I think it is: Meaning,
      > Competence and Comprehension - and breaking those ideas down
      > instead of building up from the usual components.
      >
      > Just a late-night, sleep-deprived pondering; gotta get back to
      > changing our 3-week-old baby's nappy again - amazing to watch the
      > unfolding of the being into his intelligence etc.
      >
      > Andre S C
      >
      > PS. Definitions are not static.
      > PPS. A perfect definition = division by zero.
    • Andre S Clements
      Message 2 of 20, Sep 13, 2005
        Hi

        True, but what term is not relative?

        I'm not sure what you mean by "references should be COMPLETELY set
        aside" - would you please expand?

        From your third paragraph onwards, you seem to argue for and promote
        the mechanistic world view. Asimov makes for great entertaining
        reading - but he did write "...In fact I have been told that if, in
        future years, I am to be remembered at all, it will be for these
        three laws of robotics. In a way this bothers me, for I am
        accustomed to thinking of myself as a scientist, and to be
        remembered for the non-existent basis of a non-existent science is
        embarrassing..."

        Have you considered the "Technological Event Horizon" as described
        by Vernor Vinge? If the designed system - through design or
        accident - morphs and comes to alter itself, perhaps by mutating
        reproductions of itself, I can't see a law like the '3 laws'
        staying hard-wired in for very long.
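
        A toy picture of that worry (everything below is a made-up model,
        not a claim about any real system): if the 'law' lives in the same
        mutable genome the system copies, random copying errors eventually
        flip it; only something enforced outside the mutable part endures.

        import random

        random.seed(1)
        genome = {"law_enabled": True, "skill": 0.5}  # the law is mutable data

        def mutate(g, rate=0.01):
            child = dict(g)
            if random.random() < rate:               # a copying error hits the law
                child["law_enabled"] = not child["law_enabled"]
            child["skill"] += random.gauss(0, 0.05)  # ordinary drift elsewhere
            return child

        generations = 0
        while genome["law_enabled"]:
            genome = mutate(genome)
            generations += 1
        print(f"the 'hard-wired' law flipped after {generations} generations")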

        I am more of an Iain M. Banks fan myself.

        Okay, so let's move on ...
        Is 'artificial life' possible?

        André

        coveent wrote:

        > [snip]
      • JW
          Message 3 of 20, Sep 14, 2005
          Even with humans, how do you define human behavior as "normal"
          or "standard"?

          To be a little mathematical - in terms of standard deviations from
          the mean? How would you define the mean? Whose, or what's,
          behavior would you base it upon?
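
          To make the question concrete, here is the naive version of that
          test (the scores and the two-sigma cutoff are invented - which is
          exactly the problem: someone has to pick them):

          from statistics import mean, stdev

          # Hypothetical "behavior scores" for some population.
          scores = [4.8, 5.1, 5.0, 4.9, 5.3, 4.7, 5.2, 9.6]

          mu, sigma = mean(scores), stdev(scores)
          for s in scores:
              z = (s - mu) / sigma            # deviations from the mean
              label = "normal" if abs(z) <= 2 else "deviant"
              print(f"{s:4.1f}  z={z:+.2f}  {label}")

          Note that the one outlier drags the mean and sigma toward itself,
          so the "norm" is partly defined by the very behavior we wanted to
          judge against it.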

          John





          --- In artificialintelligencegroup@yahoogroups.com, Andre S Clements
          <aclements@i...> wrote:
          > [snip]
        • Andre S Clements
            Message 4 of 20, Sep 14, 2005
            Especially once you consider evolution, which is essentially
            mutation - in other words, corruption - of the code, or
            blueprint.

            Considering MS Windows and Darwin's argument - that it is not
            necessarily the strongest or the fastest of the species that
            survive, but those most capable of adapting to changing
            circumstances - then sanity is perhaps the ability to find
            workable, coherent interaction with the greater whole,
            irrespective of what internal 'corruption' this may require,
            provided the internal corruption doesn't become too detrimental
            to the agent, e.g. cancer. Increased evolutionary complexity
            seems to run parallel with increased dependency on external
            factors; e.g., human young cannot protect or provide for
            themselves at birth. Perhaps Windows is a lot 'saner' and
            'fitter' than popular opinion is comfortable with.
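
            A throwaway simulation of that reading of Darwin (the model and
            every number in it are invented purely to illustrate the point):
            when the environment keeps shifting, selection favours
            adaptability over raw strength.

            import random

            random.seed(7)

            # Each agent has raw strength and adaptability in [0, 1].
            agents = [{"strength": random.random(), "adapt": random.random()}
                      for _ in range(100)]

            for _ in range(50):
                demand = random.random()              # the environment moves
                def fitness(a):
                    mismatch = abs(a["strength"] - demand)
                    return -mismatch * (1 - a["adapt"])  # adaptable agents shrug it off
                agents.sort(key=fitness, reverse=True)
                agents = agents[:50] * 2              # survivors breed, toy-style

            print("mean adaptability:",
                  round(sum(a["adapt"] for a in agents) / len(agents), 2))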

            André

            JW wrote:

            > [snip]
          • Lucas Fousekis
              Message 5 of 20, Sep 16, 2005
              I believe that I don't have the appropriate knowledge of this
              topic of discussion, but I would like to express my opinion. I
              am not an expert in the field, so what I will offer are just
              simple ideas.

              Personally, I agree with the opinion of Andrew, who said:
              "First off, I would like to point out that sanity is a relative term, at
              least that is the opinion that I agree with."

              Although Andrew also said that "an artificial consciousness
              would only have the desires and drives that it is programmed
              to have."

              My opinion on that topic is that it is possible to create
              artificial intelligence algorithms that let the machine adjust
              itself to its environment. It could also have decision-making
              algorithms. Both kinds of algorithms could be created at
              random according to circumstances, rather than pre-programmed.
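
              One crude way to picture that - generating candidate decision
              rules at random and keeping whatever happens to fit the
              circumstances - might look like this (a toy sketch; the
              environment, actions and payoffs are all invented):

              import random

              random.seed(3)

              def random_rule():
                  """Invent a rule at random: a threshold plus two actions."""
                  t = random.uniform(0, 1)
                  a, b = random.sample(["advance", "retreat", "wait"], 2)
                  return lambda reading: a if reading > t else b

              def environment(action):
                  # Hypothetical world: rewards "advance", tolerates "wait".
                  return {"advance": 1.0, "wait": 0.2, "retreat": -1.0}[action]

              best_rule, best_score = None, float("-inf")
              for _ in range(200):          # nothing here is pre-programmed:
                  rule = random_rule()      # the rules are drawn at random
                  score = sum(environment(rule(random.random())) for _ in range(50))
                  if score > best_score:
                      best_rule, best_score = rule, score

              print("best random rule scored", best_score, "over 50 trials")
              print("and on a reading of 0.9 it says:", best_rule(0.9))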

              Just a thought.

              Lucas
            • coveent
                Message 6 of 20, Sep 18, 2005
                Greetings

                The references that need to be set aside are the references
                to human or other biological systems' behaviors. Many times
                in this forum, as well as in conversations outside it,
                people have assumed artificial intelligences would take on
                biological-system-type behaviors. There is no acceptable
                reason to believe that this would be so. For example, the
                desire to survive. What would lead a computer to desire to
                survive? And before you can even answer that, what is
                survival to a computer? Turning off the power? With the
                non-volatile memory storage capability an artificial
                intelligence could have, losing power would be irrelevant.
                Its "life" would only be put on hold.

                The need to "completely set aside" human references was meant to say
                that the desires, and therefore the behaviors that are derived from
                those desires, can be controlled, and need not resemble human
                behaviors.

                As far as a system morphing or evolving to get around some
                original programming, I would think that there would be some
                programming that could not be bypassed. There would need to
                be a portion of core programming that is unalterable in
                order for the system to remain operable. Take the human
                body: there is biological "programming" in the body that
                cannot be altered, or that, if it were, would cause death or
                severe damage. The part of the nervous system I am referring
                to is the part that controls the heart, lungs, etc.
                Reflexes, such as the knee jerk, cannot be bypassed, at
                least not without outside influence or damage to the system.

                The unalterable elements of biological programming are there
                for survival - or, more specifically, for the continued safe
                operation of the system. One would think that the designers
                of AI systems would incorporate equivalent elements into the
                system.
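
                A sketch of how a designer might wire in that kind of
                unalterable portion (the rules, names and layout are
                invented for illustration): the adaptive layer may rewrite
                its own rules freely, but a fixed core verifies its own
                integrity and halts the system if it has been touched.

                import hashlib

                CORE_RULES = b"if overheating: shut down; never disable cooling"
                CORE_DIGEST = hashlib.sha256(CORE_RULES).hexdigest()  # fixed at build

                adaptive_rules = ["explore", "conserve energy"]  # freely rewritable

                def heartbeat():
                    """Analogue of the brain stem: runs first, every cycle."""
                    if hashlib.sha256(CORE_RULES).hexdigest() != CORE_DIGEST:
                        raise SystemExit("core altered - halting, like reflexive damage")

                heartbeat()                     # the reflex check cannot be skipped
                adaptive_rules.append("trade")  # the flexible layer evolves on top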

                As far as Asimov is concerned - and I do not mean any
                disrespect to the man - rarely does an individual get to
                determine how they are remembered.

                And moving on, as you say: is "artificial life" possible?
                That all depends on what you consider life. I would not mind
                seeing a definition that does not draw biological parallels.
                The reason is that about the only commonality artificial
                life and biological life would have, at least with today's
                technology, would be the use of electrical signals to
                communicate within the system. Beyond that, the designer
                would control what types of sub-systems would be used.

                So what would artificial life be? There are infinite
                possibilities, in form and function, so try to be broad.

                Andrew

                --- In artificialintelligencegroup@yahoogroups.com, Andre S Clements
                <aclements@i...> wrote:
                > [snip]
              • coveent
                  Message 7 of 20, Sep 18, 2005
                  I would say that corruption of the code is insanity. Where
                  it becomes relative is a matter of timing: the initial
                  corruption is insanity, but as it becomes the norm through
                  use and acceptance, it then becomes sanity. In a sense,
                  this would be an example of psychological evolution as
                  opposed to physical evolution.

                  Insanity need not be detrimental to the system; that would
                  only be an example that would never be accepted as normal,
                  or sane. Insane people walk the streets of society all the
                  time. They are not institutionalized because they are a
                  danger neither to themselves nor to others - which is not
                  a testimony to their sanity.

                  As far as the reference to Windows and artificial insanity ... don't
                  you recognize sarcasm when you see it?

                  Andrew

                  --- In artificialintelligencegroup@yahoogroups.com, Andre S Clements
                  <aclements@i...> wrote:
                  > [snip]
                • coveent
                    Message 8 of 20, Sep 18, 2005
                    Behavior that produces the desired results with
                    acceptable side effects - in the individual's opinion -
                    would be considered normal, I would think.

                    I think that is how a person would determine whether or
                    not their behavior is normal. Other people may not
                    consider the behavior normal, but the person would.

                    The thing to consider with others is whether those
                    around the individual consider the behavior, the
                    results, and the side effects of the behavior
                    acceptable. All three of these must be considered. For
                    example, say I no longer want to be with my wife. (Not
                    the case, mind you, just an example.) That is
                    acceptable. No longer being together - again, this is
                    acceptable, depending on your religious preferences. But
                    the side effect of her being dead - that would not be
                    acceptable to others around me, if I were the one who
                    caused it.

                    So in the end, normal and standard are based on consensus.

                    Thoughts?

                    Andrew

                    --- In artificialintelligencegroup@yahoogroups.com, "JW"
                    <johnfr3@s...> wrote:
                    > Even with humans, how do you define human behavior as "normal"
                    > or "standard"?
                    >
                    > To be a little mathmatical---in terms of standard deviations from
                    the
                    > mean? How would you define the mean? Who's or what's behavior
                    would
                    > you base it upon?
                    >
                    > John
                  • smartxpark
                      Message 9 of 20, Sep 24, 2005
                      Namasthe / Hello,

                      A very useful pattern has "evolved" in the discussion
                      of "what not makes sense for other creatures". In the
                      course of only one month it has been triggering
                      thought along so many different paths. At some future
                      time, tracing back the initial messages will become
                      physically impossible. Debuggers of AI systems are
                      likely to lose their sanity - or, most likely, throw
                      the whole thing away and start a new subject line. In
                      my opinion this is a terrible waste of "time" (the
                      most important and invaluable resource a human being
                      has).

                      "Corruption" of code, assuming there is no "self destruct or destroy
                      others pill/branch/virus" willfully planted is just not possible in a
                      machine running under its initial/original/intended hardware / os
                      configuration. It can always cause emotive behaviour in humans when
                      unpredictable response = very slow response or response ahead
                      of "onClick" is seen.

                      A small, extreme example: a mobile phone found its way
                      into a washing machine by mistake - and it died. The
                      battery was removed and the instrument was dried in
                      the sun for two days. The same battery was put back in
                      and, presto, it came to life and is being used very,
                      very usefully. Is there a moral in this? I see the
                      need for dispassionate, contemplative action when
                      dealing with machines on which our lives sometimes
                      depend. And contemplative coding is the crux - we just
                      cannot "trial and error" and learn. Is this not what
                      happened with the space disasters and other recent
                      calamities, when, leave alone machines, the entire
                      machinery failed?

                      The crux - "dhaathu" of the machine is "control" - and we are
                      seeing "control mechanisms" being put on as though they were after
                      thoughts. Here is where the core rules of robotics and ai - (knowingly
                      or unkowingly written down by asimov and positively stated in ancient
                      Indian literature - the yantr in many places) have relevance in real
                      time all time.

                      Regards

                      kedar