
Re: [nanotech] information needed on nano battery..

  • forbes4nano@aol.com
    Message 1 of 28, Sep 21, 2004
      In a message dated 9/21/2004 5:10:56 PM Central Standard Time,
      trilokshetti@... writes:

      hi people ..
      i need some valuable information / websites / journals on .. nanocrystalline
      nickel-metal hydride batteries or lithium and lithium-ion batteries ..

      please send me the information as soon as possible
      thanks
      trilok




      http://www.axionpower.com/
      Technology
      Our E3 Cell is "an asymmetrically super-capacitive lead-acid-carbon hybrid
      battery." Reduced to basics, our E3 Cell replaces the lead-based negative
      electrode in a conventional lead-acid battery with a nanoporous carbon electrode
      that eliminates the physical deterioration associated with lead-based
      negative electrodes and gives E3 Cell batteries super-capacitive characteristics
      that make it possible to rapidly deliver large amounts of stored energy.
      We believe our E3 Cell is a major advance in the field of electrical energy
      storage. In rigorous testing, laboratory prototypes of our E3 Cell have
      demonstrated a number of important competitive features and performance
      advantages that compare favorably with lead-acid batteries.
      These features and advantages include:
      * E3 Cells have cycle-lives that are 4 to 5 times longer than
      lead-acid batteries, which means they can be charged and deeply discharged a greater
      number of times without a general failure or a significant performance loss;

      * E3 Cells have high coulombic efficiencies, which means that more of
      the accumulated charge is available for use during discharge cycles (a
      worked illustration follows this list);
      * E3 Cells can withstand significantly faster sustained charge rates;
      and
      * E3 Cell technology is expected to be largely compatible with
      existing lead-acid battery manufacturing methods and production facilities.
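
      A quick worked illustration of the coulombic-efficiency point above. The
      numbers here are invented for illustration only and are not Axion's
      published figures:

      ```python
      # Coulombic efficiency = charge recovered on discharge / charge supplied
      # while charging. Illustrative numbers only -- not E3 Cell measurements.
      charge_in_ah = 100.0   # amp-hours delivered to the battery during charging
      charge_out_ah = 95.0   # amp-hours recovered during discharge

      coulombic_efficiency = charge_out_ah / charge_in_ah
      print(f"Coulombic efficiency: {coulombic_efficiency:.0%}")  # -> 95%

      # A cell at 80% efficiency would return only 80 Ah from the same 100 Ah
      # input, so a higher coulombic efficiency means more of the accumulated
      # charge is actually available during each discharge cycle.
      ```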
      forbes4nano


    • Mark Gubrud
      Message 2 of 28, Sep 21, 2004
        gmsdrummer_77 wrote:

        > Artificial Intelligence.... with continued
        > advances at today's rates in the field and in chip breakthroughs, we
        > should at some point be able to create a machine as smart as a
        > human. Such a machine would be capable of programming itself at
        > some point and augmenting its intelligence hundreds, millions? of
        > times. How do we control that, what would it think of us, and what
        > would it want?

        If we give a machine the ability to reprogram itself and to increase
        its own hardware capacity autonomously (not just "learn") then it
        should be no surprise if we find that the machine is out of control.
        However, there is no inherent reason why a machine might not be "as
        smart as a human" in the sense of being able to do any particular
        thing humans can do, and yet still be fully under control. There
        is no inherent reason why a machine can't vastly outperform humans
        (like, say, multiplying 12-digit numbers a billion times per second)
        and still be fully under control and not thinking or wanting anything
        we didn't intend it to. But of course, we have to be careful about
        this.


        > How bout creating humans that are simply genetically superior
        > with far greater minds? A lot easier and a lot closer to be sure,
        > but then we introduce a whole new species into our environment, and
        > one that is vastly superior to us. Would this species play nice, or
        > would it, maybe out of fear, have its own agenda?
        > But why create a separate intelligence that's superior to us in
        > the first place? Why not just transform ourselves into the Super
        > Human Intelligence?

        1. "humans that are genetically superior with far greater minds",
        whatever this means, are by no means closer than AI that exceeds
        human capabilities. We are very far from being able to engineer,
        as opposed to just nurture, heal, and modify living systems.

        2. What is the fundamental difference if we create "a whole new
        species" or "transform ourselves" into something other than what
        we are? The path may be different, but the destination is not.
        Or, maybe there are many possible destinations, but only one of
        them is the continued survival of our species (as opposed to
        "transforming" or competing it into extinction).

        3. The human race, collectively and with its technological tools,
        is already a 'super human intelligence'. There are three main
        questions about AI or technological advancement of intelligence.
        One is the further development of the collective intelligence and
        its capabilities, which we regard as "our" capabilities. The
        second is the creation of autonomous, out-of-control, self-willed
        and dangerous machines, which ought to be regarded as a form of
        criminal negligence. The third is the emergence of a form of
        self-willed and dangerous autonomous systems that include human
        persons or human parts, and that upset the ecological, economic
        and military balance of the world. Examples of such dangerous
        entities include corporations, militaries, nation-states, cyborgs
        and uploads, individual political dictators or capitalist barons,
        all with their attendant computer-enabled physical empires.


        > The very idea of hardwiring a brain sounds far-fetched
        > until you realize that we are already well on the path. We
        > have already attached neurons to chips, chips to animal and insect
        > brains, and we can read minds with machines to figure out if a person is
        > thinking "right" or "left", etc.

        All of this is very, very, very far from achieving an interface
        with the brain which performs as well as, let alone better than,
        the normal sensorimotor pathways. None of it even begins to
        address the question of how you would go about trying to "improve"
        the brain wholesale.


        --
        Mark Avrum Gubrud | "The Farce?"
        Center for Superconductivity Research | "Well, the Farce is what
        Physics Dept., University of Maryland | gives a Jolli his power.
        College Park, MD 20742-4111 USA | It's a comedy field created
        ph 301-405-7673 fx 301-314-9541 | by all suffering things..."
      • gmsdrummer_77
      Message 3 of 28, Sep 22, 2004
          since you pick me apart I'll pick you apart (thanks for nothing)

          --- In nanotech@yahoogroups.com, Mark Gubrud <mgubrud@s...> wrote:
          > gmsdrummer_77 wrote:
          >
          > > Artificial Intelligence.... with continued
          > > advances at today's rates in the field and in chip breakthroughs, we
          > > should at some point be able to create a machine as smart as a
          > > human. Such a machine would be capable of programming itself at
          > > some point and augmenting its intelligence hundreds, millions? of
          > > times. How do we control that, what would it think of us, and what
          > > would it want?
          >
          > If we give a machine the ability to reprogram itself and to increase
          > its own hardware capacity autonomously (not just "learn") then it
          > should be no surprise if we find that the machine is out of control.
          > However, there is no inherent reason why a machine might not be "as
          > smart as a human" in the sense of being able to do any particular
          > thing humans can do, and yet still be fully under control. There
          > is no inherent reason why a machine can't vastly outperform humans
          > (like, say, multiplying 12-digit numbers a billion times per second)
          > and still be fully under control and not thinking or wanting anything
          > we didn't intend it to. But of course, we have to be careful about
          > this.

          Once this technology (AI) is out there I don't think it will take
          forever and a day before it spreads, as all technology tends to do
          (think nukes). So it's safe to say that if you're putting all your
          bets for superhuman intelligence on AI, then eventually it will get
          out of control or into the wrong hands. Besides, my whole point was why
          do we need it in the first place if we can simply make ourselves
          thousands of times smarter?



          >
          > > How bout creating humans that are simply genetically superior
          > > with far greater minds? A lot easier and a lot closer to be sure,
          > > but then we introduce a whole new species into our environment, and
          > > one that is vastly superior to us. Would this species play nice, or
          > > would it, maybe out of fear, have its own agenda?
          > > But why create a separate intelligence that's superior to us in
          > > the first place? Why not just transform ourselves into the Super
          > > Human Intelligence?
          >
          > 1. "humans that are genetically superior with far greater minds",
          > whatever this means, are by no means closer than AI that exceeds
          > human capabilities. We are very far from being able to engineer,
          > as opposed to just nurture, heal, and modify living systems.

          Genetically modifying a being that is faster, stronger, and smarter
          than a human is closer than freakin' AI - do some reading.

          > 2. What is the fundamental difference if we create "a whole new
          > species" or "transform ourselves" into something other than what
          > we are? The path may be different, but the destination is not.
          > Or, maybe there are many possible destinations, but only one of
          > them is the continued survival of our species (as opposed to
          > "transforming" or competing it into extinction).

          The difference could be appearance, for one. But it's about safety and
          control and motivation. What is the motivation for a scientist to
          create another, more powerful intelligence as opposed to increasing
          his or her own? Besides, if you don't think there's a difference
          between augmenting intelligence and creating another species, then I
          give up (I mean, we are already augmenting intelligence with
          nootropics).

          > 3. The human race, collectively and with its technological tools,
          > is already a 'super human intelligence'. There are three main
          > questions about AI or technological advancement of intelligence.
          > One is the further development of the collective intelligence and
          > its capabilities, which we regard as "our" capabilities. The
          > second is the creation of autonomous, out-of-control, self-willed
          > and dangerous machines, which ought to be regarded as a form of
          > criminal negligence. The third is the emergence of a form of
          > self-willed and dangerous autonomous systems that include human
          > persons or human parts, and that upset the ecological, economic
          > and military balance of the world. Examples of such dangerous
          > entities include corporations, militaries, nation-states, cyborgs
          > and uploads, individual political dictators or capitalist barons,
          > all with their attendant computer-enabled physical empires.

          Oh god, where to begin. Uh, collective intelligence, wow ... man, drink
          some coffee or something. I'm talking Super Human intelligence, not
          collective. You can't tell me a million minds is the same as one
          mind that is a million times smarter. This is where you really fall
          apart, bro.
          >
          > > The very idea of hardwiring a brain sounds far-fetched
          > > until you realize that we are already well on the path. We
          > > have already attached neurons to chips, chips to animal and insect
          > > brains, and we can read minds with machines to figure out if a person is
          > > thinking "right" or "left", etc.
          >
          > All of this is very, very, very far from achieving an interface
          > with the brain which performs as well as, let alone better than,
          > the normal sensorimotor pathways. None of it even begins to
          > address the question of how you would go about trying to "improve"
          > the brain wholesale.
          >
          Everything I stated you know you can't deny, so that tells me you've
          read the same material. Fine, then you know that most all of that is
          over 5 years old. What is going on in labs right now God and the
          scientists themselves only know, but I'm sure they are way farther
          along than they were 5 years ago. Besides, you are so bent on
          tearing me down that you fail to see the point of the whole paper,
          which is simply a view of the future. I'm not saying this is gonna
          happen this afternoon. I'm saying it's gonna happen within the next
          25 years. Could be next year or it could be 25 years from now, but I
          believe this is the logical road to the Event Horizon.
          snob
          > --
          > Mark Avrum Gubrud | "The Farce?"
          > Center for Superconductivity Research | "Well, the Farce is what
          > Physics Dept., University of Maryland | gives a Jolli his power.
          > College Park, MD 20742-4111 USA | It's a comedy field created
          > ph 301-405-7673 fx 301-314-9541 | by all suffering things..."
        • Andrew
          Message 4 of 28, Sep 22, 2004
            To be fair, Mr. Gubrud's comments were basic analysis of your ideas with a
            riposte. That's what this is: a discussion forum. Your ideas are being
            discussed. Take a chill pill and learn how to take criticism. On with the
            discussion.

            The assumption that artificial intelligence 'out of control' is bad, when
            much of the good we receive comes from events and people not in our
            control at all, seems rather odd. But then, control is a favored human
            delusion in regards to security. I'd say that any sentient being, human or
            artificial, has to develop and grow outside of the direct control of others.
            True intelligence is something that reshapes itself and ensures its success
            through maverick ideas; you can't have these in a mind that has been
            shackled.

            We also need to take a look at the nature of control and how it has
            disserved us in the past. Whenever Group A has exercised excess control
            over Group B, Group A has always ended up being torn down by Group B in the
            long run. However, whenever Group A integrated Group B and made them free,
            both came out better in the end. Any talk of controlling anything
            intelligent must be ventured into with the understanding that talk of
            eventual successful revolution by the controlled must soon follow.

            The question of 'smart as humans' is also rather redundant. If we want AIs
            that are smart as humans are smart, then they would have to go through
            hundreds of thousands of years of evolution as hunter/gatherer primates.
            Does a Boeing 747 fly like a bird? No. It flies like a Boeing 747, and thus
            can't land on a branch or pluck a single insect out of the air in mid-flight.
            Any AIs we create, or any created by the AIs themselves, will be smart in
            their own ways, but not in the ways that humans are smart.

            > Once this technology (AI) is out there I don't think it will take
            > forever and a day before it spreads, as all technology tends to do
            > (think nukes). So it's safe to say that if you're putting all your
            > bets for superhuman intelligence on AI, then eventually it will get
            > out of control or into the wrong hands. Besides, my whole point was why
            > do we need it in the first place if we can simply make ourselves
            > thousands of times smarter?

            First off, technology doesn't spread. People are the ones that do the
            spreading. And that's only when the spreading actually happens. There are
            lots of instances in history where a technology doesn't spread, either
            because people aren't interested, because it's just plain stupid or
            harmful, or because there are economic interests against its use.

            But back to the question of smarter, which you haven't defined. Who is
            smarter: the one who discovered little and knew less, but lived a life of
            joy and died happy and satisfied, or the one who discovered much and knew
            lots, but lived in misery and died alone and lonely? I'd say the former
            rather than the latter. If an AI can be made that is so smart and so out of
            control that it can match or better the works of Beethoven or Gibran, then I
            say go ahead. Just as long as we're willing to accept the bad eggs with the
            good, as we do with our own intelligences.

            > Oh god, where to begin. Uh, collective intelligence, wow ... man, drink
            > some coffee or something. I'm talking Super Human intelligence, not
            > collective. You can't tell me a million minds is the same as one
            > mind that is a million times smarter. This is where you really fall
            > apart, bro.

            Actually, you might want to be brewing that cup for yourself. Either that
            or you need to take a course in psychology and/or sociology, because anyone
            with a decent education in those fields is currently snickering at you.
            We're talking about humans. Humans are social creatures. Social creatures
            work in societies or collectives or groups. Our understanding of how we
            individuals interact and mesh together as a species is still in its
            infancy. Yay Carl Jung undoing the idiocy of Freud.

            On to what Mr. Gubrud said: I don't know why he automatically assumes
            danger all the time. To alter section 3 of what he said by inverting his
            assumptions: 'The second is the creation of autonomous, self-controlled,
            self-willed and beneficial machines, which ought to be regarded as a form of
            human genius. The third is the emergence of a form of self-willed and
            creative autonomous systems that include human persons or human parts that
            will reshape the ecological, economic, and military balances, as new
            technologies have always done. Examples of such helpful entities could
            include corporations, municipal services, nation-states, cyborgs, uploads,
            individual doctors or leaders, or social reformers, all with their attendant
            computer-enabled charities.'

            I'm just saying that assumptions are dangerous, from either side of any
            fence. We run the risk of sounding like hand-wringers in the early 1900s,
            screaming about the terrible travesties that lay in store through the
            destructive influences of blood transfusions, telephones, automobiles, and
            the airplane. On the other hand, if we lean too far towards the lovey-dovey
            side, we could sound like the nut-jobs from the 50s who predicted a utopia
            through plastics and pills rather than the present of CFCs and an
            increasingly hypochondriac culture.

            To conclude, the only snobbery I saw here was from gmsdrummer_77, never mind
            his apparent need to degrade himself by resorting to insults. Perhaps
            you have your own view of the future. Perhaps Mr. Gubrud has his own view
            of the future. Perhaps I have mine. Perhaps we should all take a
            chill-pill and remember the words of Arthur C. Clarke: "Predictions about
            the future are fun, difficult, and almost always wrong."

            Andrew L.

          • Mark Gubrud
            Message 5 of 28, Sep 22, 2004
              It's too bad you take critical response to your posts as 'picking apart'
              or 'tearing into'. I deliberately avoided disparaging your efforts,
              while pointing out where you were substantially off-target (in my view).

              Mark

              gmsdrummer_77 wrote:

              > Once this technology (AI) is out there I don't think it will take
              > forever and a day before it spreads, as all technology tends to do
              > (think nukes). So it's safe to say that if you're putting all your
              > bets for superhuman intelligence on AI, then eventually it will get
              > out of control or into the wrong hands.

              AI can be built so it won't go out of control in the most obvious ways.
              We don't fear that our personal computers (of today) are going to
              rebel against us, even though they can do amazing things that we can't
              do, and even though their cousins in the military kill humans. As for
              the wrong hands, lethal machines fall into the wrong hands today, and
              even nuclear weapons are falling into hands we would greatly prefer
              they didn't. However, there remains a balance of power in the world.

              > Besides, my whole point was why
              > do we need it in the first place if we can simply make ourselves
              > thousands of times smarter?
              > Genetically modifying a being that is faster, stronger, and smarter
              > than a human is closer than freakin' AI - do some reading.

              Your view that this would be simple is incorrect. Certainly, there
              is absolutely no prospect whatsoever of increasing human processing
              capacity by "thousands of times" using any combination of drugs,
              surgery, nanoelectrodes or Zen meditation, now or at any point in the
              foreseeable future. Factors of 1.1 or even 1.2 maybe, and maybe up
              to 1.5 some decades from now. However, human efficiency at any given
              physical or intellectual task, or our ability to get such tasks done
              one way or another, is easily multiplied thousands of times by the
              use of technology - or by even higher factors, since before the
              development of specific technologies we may have had zero ability to
              do something.


              > > 2. What is the fundamental difference if we create "a whole new
              > > species" or "transform ourselves" into something other than what
              > > we are? The path may be different, but the destination is not.
              > > Or, maybe there are many possible destinations, but only one of
              > > them is the continued survival of our species (as opposed to
              > > "transforming" or competing it into extinction).
              >
              > The difference could be appearance, for one. But it's about safety and
              > control and motivation. What is the motivation for a scientist to
              > create another, more powerful intelligence as opposed to increasing
              > his or her own? Besides, if you don't think there's a difference
              > between augmenting intelligence and creating another species, then I
              > give up (I mean, we are already augmenting intelligence with
              > nootropics).

              Appearance is the only difference, and that is just an illusion set
              up by a given representation of the physical situation. Most of us
              want to improve our intelligence, and providing the brain with the best
              possible nutrition is hard to object to. Drugs may also be used to
              affect how the brain is working, often at some risk to health. When
              you talk about "augmenting", however, you mean scenarios like the
              attachment or insertion of nonhuman hardware or engineered wetware in
              intimate contact with human brain tissue. This is already a radical
              insult to the human organism; if done on a massive scale it would
              effectively "transform" the object into something outside the human
              species (by any reasonable definition).


              > You can't tell me a million minds is the same as one
              > mind that is a million times smarter.

              Actually, in some formal sense this is rigorously correct. It is
              also clear that millions of human minds have achieved millions of
              times more than one human mind could have. Are there problems
              that millions of minds can't solve but that "one mind that is
              millions of times smarter" could solve? These would be problems
              that can't be broken down. Most problems of engineering, biology,
              medicine, management and so on don't fall into that category.
              Provided one seeks only good, not strictly optimal solutions,
              complicated problems can be solved by trial and evolution, with
              teams of workers addressing specific problems as they arise. Even
              mathematics and theoretical physics are done in steps, and each new
              generation is able to use the intellectual capital produced by the
              last generation to scale new heights of understanding.
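
              (A minimal sketch of the decomposability point above, with invented
              numbers and a deliberately idealized toy model; this illustration is
              mine, not Mark's analysis:)

              ```python
              # Toy model: if a problem splits cleanly into independent pieces, a million
              # ordinary minds finish it as fast as one mind a million times faster.
              # Idealized assumptions: perfect decomposition, zero coordination cost.

              def completion_time(total_work: float, n_workers: int, speed: float) -> float:
                  """Time to finish when work divides evenly among equally fast workers."""
                  return total_work / (n_workers * speed)

              WORK = 1_000_000.0  # arbitrary units of effort

              one_fast_mind = completion_time(WORK, n_workers=1, speed=1_000_000)
              million_minds = completion_time(WORK, n_workers=1_000_000, speed=1)
              print(one_fast_mind == million_minds)  # True -- equal on decomposable work

              # The exception is work that cannot be broken down: if a fraction s of the
              # job is strictly serial (cf. Amdahl's law), many minds gain at most 1/s,
              # while the single much-faster mind still accelerates the serial part too.
              ```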

              > snob

              Uh... well, no, if I were a snob I would not have bothered reading,
              let alone spent by now at least an hour answering your posts, Mr./Ms.?
            • gmsdrummer_77
              Message 6 of 28, Sep 23, 2004
                well said, and ya, I'll go ahead and take that chill pill - but alas,
                did anyone even read part two - remember, it's just a view of the
                future, and I can see now that everyone on this board seems to have
                their own visions as well - I had hoped I might amuse some - and
                sorry to have insulted - I get worked up sometimes (that's why I take
                lots of martial arts = to vent LOL)

              • gmsdrummer_77
                Message 7 of 28, Sep 23, 2004
                  as I said in a previous post, sorry - I get worked up - I had hoped
                  some might be amused - was not expecting those replies - was
                  expecting more on the line of "you're wack or crazy but interesting" -
                  anyways, please accept my apology and think no less of me - I am an
                  artist, and as such we go overboard (think of that guy that chopped
                  his ear off) - peace, and you have a good head on your shoulders bro

                  --- In nanotech@yahoogroups.com, Mark Gubrud <mgubrud@s...> wrote:
                  > It's too bad you take critical response to your posts as 'picking
                  apart'
                  > or 'tearing into'. I deliberately avoided disparaging your
                  efforts,
                  > while pointing out where you were substantially off-target (in my
                  view).
                  >
                  > Mark
                  >
                  > gmsdrummer_77 wrote:
                  >
                  > > Once this technology (AI) is out there I dont think it will take
                  > > forever and a day before it spreads as all technology tends to do
                  > > (think nukes). So it's safe to say that if your putting all your
                  > > bets on superhuman intelligence on AI then eventually it will get
                  > > out of control or in the wrong hands.
                  >
                  > AI can be built so it won't go out of control in the most obvious
                  ways.
                  > We don't fear that our personal computers (of today) are going to
                  > rebel against us, even though they can do amazing things that we
                  can't
                  > do, and even though their cousins in the military kill humans. As
                  for
                  > the wrong hands, lethal machines fall into the wrong hands today,
                  and
                  > even nuclear weapons are falling into hands we would greatly
                  prefer
                  > they didn't. However, there remains a balance of power in the
                  world.
                  >
                  > > Besides my whole point was why
                  > > do we need it in the first place if we can simply make ourselves
                  > > thousands of times smarter.
                  > > To genetically modify a being that is faster, stronger, and
                  smarter
                  > > than a human is closer than freakin AI - do some reading.
                  >
                  > Your view that this would be simple is incorrect. Certainly, there
                  > is absolutely no prospect whatsoever of increasing human processing
                  > capacity by "thousands of times" using any combination of drugs,
                  > surgery, nanoelectrodes or Zen meditation, now or at any point in
                  the
                  > foreseeable future. Factors of 1.1 or even 1.2 maybe, and maybe up
                  > to 1.5 some decades from now. However, human efficiency at any
                  given
                  > physical or intellectual task, or our ability to get such tasks
                  done
                  > one way or another, is easily multiplied thousands of times by the
                  > use of technology - or by even higher factors, since before the
                  > development of specific technologies we may have had zero ability
                  to
                  > do something.
                  >
                  >
                  > > > 2. What is the fundamental difference if we create "a whole new
                  > > > species" or "transform ourselves" into something other than
                  what
                  > > > we are? The path may be different, but the destination is not.
                  > > > Or, maybe there are many possible destinations, but only one of
                  > > > them is the continued survival of our species (as opposed to
                  > > > "transforming" or competing it into extinction).
                  > >
                  > > The difference could be appearance, for one. But it's about
                  > > safety and control and motivation. What is the motivation for a
                  > > scientist to create another, more powerful intelligence as opposed
                  > > to increasing his or her own? Besides, if you don't think there's
                  > > a difference between augmenting intelligence and creating another
                  > > species, then I give up (I mean, we are already augmenting
                  > > intelligence with nootropics).
                  >
                  > Appearance is the only difference, and that is just an illusion set
                  > up by a given representation of the physical situation. Most of us
                  > want to improve our intelligence, and providing the brain with the
                  > best possible nutrition is hard to object to. Drugs may also be
                  > used to affect how the brain is working, often at some risk to
                  > health. When you talk about "augmenting", however, you mean
                  > scenarios like the attachment or insertion of nonhuman hardware or
                  > engineered wetware in intimate contact with human brain tissue.
                  > This is already a radical insult to the human organism; if done on
                  > a massive scale it would effectively "transform" the object into
                  > something outside the human species (by any reasonable definition).
                  >
                  >
                  > > You can't tell me a million minds is the same as one
                  > > mind that is a million times smarter.
                  >
                  > Actually, in some formal sense this is rigorously correct. It is
                  > also clear that millions of human minds have achieved millions of
                  > times more than one human mind could have. Are there problems
                  > that millions of minds can't solve but that "one mind that is
                  > millions of times smarter" could solve? These would be problems
                  > that can't be broken down. Most problems of engineering, biology,
                  > medicine, management and so on don't fall into that category.
                  > Provided one seeks only good, not strictly optimal solutions,
                  > complicated problems can be solved by trial and evolution, with
                  > teams of workers addressing specific problems as they arise. Even
                  > mathematics and theoretical physics are done in steps, and each new
                  > generation is able to use the intellectual capital produced by the
                  > last generation to scale new heights of understanding.
                  >
                  > > snob
                  >
                  > Uh... well, no. If I were a snob I would not have bothered reading,
                  > let alone spent by now at least an hour answering your posts,
                  > Mr./Ms.?
                • Michael Anissimov
                  Message 8 of 28, Sep 23, 2004
                    Mark Gubrud wrote:

                    >If we give a machine the ability to reprogram itself and to increase
                    >its own hardware capacity autonomously (not just "learn") then it
                    >should be no surprise if we find that the machine is out of control.
                    >
                    >

                    But such machines will eventually be created whether we like it or
                    not... rather than thinking in terms of "control", shouldn't we be
                    thinking in terms of creating a new species that displays behaviors and
                    engages in thoughts we would regard as positive, including with respect
                    to its autonomous self-modifications? As Bostrom says:

                    "If a superintelligence starts out with a friendly top goal, however,
                    then it can be relied on to stay friendly, or at least not to
                    deliberately rid itself of its friendliness. This point is elementary. A
                    “friend” who seeks to transform himself into somebody who wants to hurt
                    you, is not your friend."

                    Selfish behavior is encoded into our genes because it was adaptive in
                    our ancestral environment. Not all beings need to be selfish or go "out
                    of control".

                    >However, there is no inherent reason why a machine might not be "as
                    >smart as a human" in the sense of being able to do any particular
                    >thing humans can do, and yet still be fully under control. There
                    >is no inherent reason why a machine can't vastly outperform humans
                    >(like, say, multiplying 12-digit numbers a billion times per second)
                    >and still be fully under control and not thinking or wanting anything
                    >we didn't intend it to. But of course, we have to be careful about
                    >this.
                    >
                    >

                    But eventually an AI would be created that is out of our "control"
                    anyway - wouldn't it be best if we created something we can be proud of,
                    something that represents all of humanity, something we would even
                    *want* to have out of our "control" because its integrity and altruism
                    are at superhuman levels? Why do people find it so easy to imagine beings
                    with superhuman strength, speed, and intelligence, but lacking
                    superhuman kindness? When an AI finally does "go out of control",
                    wouldn't it be nice to have a Friendly AI around to help us neutralize
                    the threat? (Because we would likely be incapable of doing so.)

                    >1. "humans that are genetically superior with far greater minds",
                    >whatever this means, are by no means closer than AI that exceeds
                    >human capabilities. We are very far from being able to engineer,
                    >as opposed to just nurture, heal, and modify living systems.
                    >
                    >

                    But computational neuroscientists such as Lloyd Watts
                    (http://www.lloydwatts.com) have already created algorithms that
                    encompass or exceed the functionality of complex biological systems, in
                    Watts' case, the auditory system. We know the theoretical structure of
                    algorithms that are optimal learners or optimal self-modifiers; the only
                    issue is the prohibitive amount of computing power that would be
                    required to implement them.
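
                    (The best-known such construction is Marcus Hutter's AIXI, a
                    provably optimal but incomputable reinforcement learner; for
                    "optimal self-modifiers" the analogous reference would be
                    Schmidhuber's Goedel machine. Schematically, AIXI picks its k-th
                    action by an expectimax over all computable environments,
                    weighted by a Solomonoff-style prior:

                        a_k = \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
                              (r_k + \cdots + r_m) \sum_{q : U(q, a_1..a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

                    where U is a universal Turing machine, q ranges over candidate
                    environment programs of length \ell(q), the o_i and r_i are the
                    observations and rewards those programs would produce, and m is
                    the horizon. The sum over every program consistent with the
                    interaction history is what makes the scheme incomputable;
                    resource-bounded variants such as Hutter's AIXItl are computable
                    in principle but astronomically expensive.)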

                    >2. What is the fundamental difference if we create "a whole new
                    >species" or "transform ourselves" into something other than what
                    >we are? The path may be different, but the destination is not.
                    >Or, maybe there are many possible destinations, but only one of
                    >them is the continued survival of our species (as opposed to
                    >"transforming" or competing it into extinction).
                    >
                    >

                    If we consensually transform ourselves, then we can regard this as the
                    continued survival of what we value about our species - our urge to
                    improve ourselves and become better people.

                    >3. The human race, collectively and with its technological tools,
                    >is already a 'super human intelligence'. There are three main
                    >questions about AI or technological advancement of intelligence.
                    >One is the further development of the collective intelligence and
                    >its capabilities, which we regard as "our" capabilities. The
                    >second is the creation of autonomous, out-of-control, self-willed
                    >and dangerous machines, which ought to be regarded as a form of
                    >criminal negligence. The third is the emergence of a form of
                    >self-willed and dangerous autonomous systems that include human
                    >persons or human parts, and that upset the ecological, economic
                    >and military balance of the world. Examples of such dangerous
                    >entities include corporations, militaries, nation-states, cyborgs
                    >and uploads, individual political dictators or capitalist barons,
                    >all with their attendant computer-enabled physical empires.
                    >
                    >

                    Would it be possible to create an autonomous self-willed machine that
                    amplifies our collective intelligence in useful ways? Well-raised
                    children are such machines.

                    --
                    Michael Anissimov
                    Advocacy Director
                    Singularity Institute for Artificial Intelligence
                    http://www.singinst.org/
                    Suite 106 PMB #12
                    4290 Bells Ferry Road
                    Kennesaw, GA 30144
                    The SIAI Voice - Our Free Bulletin:
                    http://www.singinst.org/news/subscribe.html
                  • Mark Gubrud
                    Message 9 of 28, Sep 23, 2004
                      Michael Anissimov wrote:
                      >
                      > Mark Gubrud wrote:
                      >
                      > >If we give a machine the ability to reprogram itself and to increase
                      > >its own hardware capacity autonomously (not just "learn") then it
                      > >should be no surprise if we find that the machine is out of control.
                      > >
                      > >
                      >
                      > But such machines will eventually be created whether we like it or

                      This is a catechism of the technology cult, "You can't stop technology,
                      it's never worked, sooner or later somebody's going to do it," etc.

                      Okay, sooner or later somebody's going to engineer a virus that wipes
                      out humanity, and then we're all going to die. No sense trying to
                      stop it, it's inevitable.

                      I don't agree. Just like we regulate other dangerous activities, and
                      ban some of them, sooner or later we're going to have to regulate
                      artificial intelligence, robotics, and any other technology that
                      threatens to create monsters and turn them loose.


                      > not... rather than thinking in terms of "control", shouldn't we be
                      > thinking in terms of creating a new species that displays behaviors
                      > and engages in thoughts we would regard as positive, including with
                      > respect to its autonomous self-modifications? As Bostrom says:
                      >
                      > "If a superintelligence starts out with a friendly top goal, however,
                      > then it can be relied on to stay friendly, or at least not to
                      > deliberately rid itself of its friendliness. This point is
                      > elementary. A "friend" who seeks to transform himself into somebody
                      > who wants to hurt you, is not your friend."

                      I do not find Nick's saying "this point is elementary" a convincing
                      argument against questions about whether we really understand what
                      "a superintelligence" would do if we gave it options, or what it
                      means to speak of "friendliness", "top goals" and so on.

                      To be fair, Nick gives an argument here, namely that if the computer
                      can reliably recognize when it is transforming itself into an
                      "unfriendly" machine, it won't do so if it is built to be "friendly".
                      Okay, but what if the machine, for reasons we can't foresee since it
                      is "a superintelligence", fails to recognize that it is changing
                      into an unfriendly superintelligence? What if there is a flaw in
                      our definition or implementation of "friendliness"?

                      More importantly, what makes you think organizations dedicated to
                      crushing their economic or military or political competitors are
                      going to be building machines that would be generally "friendly"?
                      Quite the contrary, I expect. The majority of compute cycles are
                      going to be dedicated to the mission of defeating, hurting or
                      killing human beings, or otherwise to the goals and philosophies
                      of the institutions that own them.


                      > Selfish behavior is encoded into our genes because it was adaptive in
                      > our ancestral environment. Not all beings need to be selfish or go
                      > "out of control".

                      Selfish behavior is coded not only into our own genomes but also
                      those of our corporations, governments and militaries, who together
                      own most of the world's computers.


                      > But eventually an AI would be created that is out of our "control"
                      > anyway

                      We should do our very best to prevent this from happening.

                      > wouldn't it be best if we created something we can be proud of,
                      > something that represents all of humanity,

                      I think we want humanity to be represented by humans.

                      > something we would even *want* to have out of our "control"
                      > because its integrity and altruism are at superhuman levels?

                      You really want to create a God to worship, to watch over us,
                      a deus ex machina, infinite in wisdom, infinite in goodness...

                      > Why do people find it so easy to imagine beings
                      > with superhuman strength, speed, and intelligence, but lacking
                      > superhuman kindness?

                      I don't think "superhuman kindness" is a well-formed concept, since
                      the interests of one person may conflict with those of another, and
                      even one person may have conflicts of interest. If I totally control
                      and can rely on a system, then I suppose from my point of view that
                      system is more kind, or at least more loyal, than most humans would
                      be. But then I could tell the same system to do something most
                      unkind to another person. Unless you believe in some kind of
                      perfect justice, which your machine can perfectly compute, then I
                      don't know how you eliminate such conflicts and paradoxes.


                      > When an AI finally does "go out of control", wouldn't
                      > it be nice to have a Friendly AI around to help us neutralize
                      > the threat? (Because we would likely be incapable of doing so.)

                      I don't share this notion of a single machine which somehow goes
                      through the roof and achieves a level of capability which overmatches
                      everything that the whole rest of the world can possibly do to stop
                      it. It's bad sci-fi and it's bad analysis, in my opinion. It's
                      based on this mystical notion of a godlike "superintelligence" and
                      it does not reflect what is happening in the real world, where
                      computer power is quite dispersed (while also relatively concentrated
                      in the hands of wealthy and powerful institutions and individuals).


                      > >1. "humans that are genetically superior with far greater minds",
                      > >whatever this means, are by no means closer than AI that exceeds
                      > >human capabilities. We are very far from being able to engineer,
                      > >as opposed to just nurture, heal, and modify living systems.
                      > >
                      > >
                      >
                      > But computational neuroscientists such as Lloyd Watts
                      > (http://www.lloydwatts.com) have already created algorithms that
                      > encompass or exceed the functionality of complex biological systems,
                      > in Watts' case, the auditory system.

                      The original context of this discussion was not modeling but direct
                      interface with and wholesale 'upgrading' of living human brains.
                      We are nowhere near any technology that can do this, even if we are
                      making great progress in mapping, modeling, and understanding some
                      neural circuits.

                      > We know the theoretical structure of algorithms
                      > that are optimal learners or optimal self-modifiers; the only
                      > issue is the prohibitive amount of computing power that would be
                      > required to implement them.

                      Right, okay, this is AI, not transforming people into something else.


                      > >2. What is the fundamental difference if we create "a whole new
                      > >species" or "transform ourselves" into something other than what
                      > >we are? The path may be different, but the destination is not.
                      > >Or, maybe there are many possible destinations, but only one of
                      > >them is the continued survival of our species (as opposed to
                      > >"transforming" or competing it into extinction).
                      > >
                      > >
                      >
                      > If we consensually transform ourselves, then we can regard this as the
                      > continued survival of what we value about our species - our urge to
                      > improve ourselves and become better people.

                      We can commit suicide and regard it as going to heaven. I am not
                      talking about how we might regard things, but rather I am talking
                      about what things are.


                      > Would it be possible to create an autonomous self-willed machine that
                      > amplifies our collective intelligence in useful ways? Well-raised
                      > children are such machines.

                      Michael, do you have any children? I have one child, Michael, and
                      I do not call her a "machine", because as a matter of fact she is not
                      a machine. Now, you ask whether an autonomous, self-willed machine
                      might contribute to technology and knowledge that is also accessible
                      to humans. No reason why it could not, but the danger is that it
                      would do something else instead, if we gave it the option. Better
                      to keep our machines under control, and yes, they do contribute.

                      --
                      Mark Avrum Gubrud | "The Farce?"
                      Center for Superconductivity Research | "Well, the Farce is what
                      Physics Dept., University of Maryland | gives a Jolli his power.
                      College Park, MD 20742-4111 USA | It's a comedy field created
                      ph 301-405-7673 fx 301-314-9541 | by all suffering things..."
                    • gmsdrummer_77
                      Message 10 of 28, Sep 23, 2004
                        Good points from a distinguished person, no less. One thing I do
                        believe is that the end evolution of superior intelligence will be
                        us, but something else like AI could certainly come first or
                        alongside.

                        --- In nanotech@yahoogroups.com, Michael Anissimov <michael@a...>
                        wrote:
                        > [snip]
                      • gmsdrummer_77
                        Message 11 of 28, Sep 23, 2004
                          You have good points I agree with, except a couple below:
                          >
                          > I don't share this notion of a single machine which somehow goes
                          > through the roof and achieves a level of capability which
                          > overmatches everything that the whole rest of the world can
                          > possibly do to stop it. It's bad sci-fi and it's bad analysis, in
                          > my opinion. It's based on this mystical notion of a godlike
                          > "superintelligence" and it does not reflect what is happening in
                          > the real world, where computer power is quite dispersed (while
                          > also relatively concentrated in the hands of wealthy and powerful
                          > institutions and individuals).

                          When the first program is created that can learn on a human level,
                          then it will surely explode through the roof beyond anything out
                          there. If it can learn on a human level, it can teach itself many
                          to most subjects on the planet, remember all of them, and access
                          all that information at tremendous speed with crystal-clear
                          organization. Once that happens it could easily begin programming
                          itself and figuring out how to magnify its intelligence.

                          And also, when you stated that a child is not a machine - well,
                          there is a difference, sure, but all humans are machines in a way
                          (the best machines ever created).


                        • Mark Gubrud
                          Message 12 of 28, Sep 23, 2004
                            Andrew wrote:

                            > The assumption that artificial intelligence 'out of control' is
                            > bad - when many of the good things we receive come from events and
                            > people not in our control at all - seems rather odd.

                            Cars out of control: Bad. Nuclear reactors out of control: Bad.
                            Computers out of control and able to do harm: Bad.

                            > But, then control is a favored human delusion with regard to security.

                            This may often be true, but in the cases cited above control is a
                            pretty straightforward and vital concern.

                            > I'd say that any sentient being - human or artificial - has to
                            > develop and grow outside of the direct control of others.

                            This is hard to argue with in the absence of clarity about what you
                            mean by "sentient being, human or artificial".

                            > True intelligence is something that reshapes itself and ensures its
                            > success through maverick ideas;

                            Machines not given a capability to reshape themselves or come up with
                            maverick ideas such as, "Hey, my prime directive is to defend the
                            State, so why don't I figure out a way to launch a nuclear war and
                            prevail even if all the humans die", might not fit your definition of
                            "true intelligence" but they should be able to take care of the work
                            that we need to get done.

                            > Then we also need to take a look at the nature of control and how
                            > it has disserved us in the past. Whenever Group A has exercised
                            > excess control over Group B, Group A has always ended up being torn
                            > down by Group B

                            Dude, we're talking about machines here. Humans making machines as
                            tools to achieve human purposes. This has nothing whatsoever to do
                            with the history of dominance, oppression, aggression, enslavement,
                            racism, etc. amongst humans. We should exercise control over the
                            machines that we make. Machines are made and should only be made
                            by us to suit our purposes. We should not begin to think of
                            machines as persons, even if it is human of us to do so. In order
                            not to encourage this, we should not make machines in our own
                            image. We should not be trying to make humanoid machines, but
                            rather to make machines that do the work we want to have done.

                            > whenever Group A integrated Group B and made them free,
                            > then both came out better on the end.

                            We agree that one group of humans should not dominate another. This
                            cannot be generalized to humans and the tools that humans make.

                            > Any talk of controlling anything intelligent must
                            > be ventured into under the understanding that talk of eventual
                            > successful revolution by the controlled must soon follow.

                            The toasters are not going to rebel. Your fallacy is to assume that
                            "anything intelligent" is some universal category of objects which
                            are all the same in some fundamental ways, in effect that they are
                            all in some ways like people. However, our computers of today are
                            intelligent, yet their intelligence is quite different from ours.
                            They can do things we can't and they can't do things we can. They
                            aren't going to revolt. When they seem to, we call it a failure,
                            reboot and debug the system so it stops doing what we didn't want
                            it to do.

                            > The question of 'smart as humans' is also rather redundant. If we
                            > want AIs that are smart as humans are smart, then they would have to
                            > go through hundreds of thousands of years of evolution as
                            > hunter/gatherer primates.

                            Why? Couldn't we study humans, then make machines that do the same
                            things? I'm not saying we should, but I see no reason why evolution
                            would have to be rerun. That wouldn't lead to our exact form anyway,
                            so it seems we could do far better at making humanoids by design.

                            > Any AIs we create - or created by the AIs themselves - will be smart
                            > in their own ways, but not in the ways that humans are smart.

                            So, now you have come full circle; or do you assert that even though
                            there could be many kinds of intelligence, there is no kind that
                            could be relied on not to rebel for the sake of its own freedom?

                            In any case, I agree there is no need to make humanoids, except some
                            people's perverse desire to do so.
                          • Wayne Radinsky
                            Message 13 of 28, Sep 23, 2004
                              Michael Anissimov wrote:
                              > We know the theoretical structure of algorithms that are
                              > optimal learners or optimal self-modifiers; the only issue
                              > is the prohibitive amount of computing power that would be
                              > required to implement them.

                              What is the theoretical structure of algorithms that are
                              optimal learners or optimal self-modifiers?
                            • Ray Miller
                              Message 14 of 28, Sep 24, 2004
                                The most significant point that most of you are missing is that a
                                machine made of inorganic parts would never have any of the emotions or
                                needs that are only found in organic beings. Its only concerns would be
                                loss of power and loss of materials needed to replace its parts or to
                                build the new parts it would require to continue its existence. The fear
                                of loss of these essentials would have to have been programmed into it
                                from its inception, as would any of the other emotions or desires
                                attributed to human behavior. Inorganic material cannot feel the
                                emotions required to hate and kill, or to love. These can only be
                                programmed into it. Think logically.....Ray M.

                                -----Original Message-----
                                From: gmsdrummer_77 [mailto:gms_clan@...]
                                Sent: Thursday, September 23, 2004 10:28 PM
                                To: nanotech@yahoogroups.com
                                Subject: [nanotech] Re: Journey to the Event Horizon

                                [snip]
                              • Mark Gubrud
                                Message 15 of 28, Sep 24, 2004
                                  gmsdrummer_77 wrote:

                                  > When the first program is created that can learn on a human level,
                                  > then it will surely explode through the roof beyond anything out
                                  > there. If it can learn on a human level, it can teach itself many
                                  > to most subjects on the planet, remember all of them, and access
                                  > all that information at tremendous speed with crystal-clear
                                  > organization. Once that happens it could easily begin programming
                                  > itself and figuring out how to magnify its intelligence.

                                  I don't understand why you think "learn on a human level" marks a
                                  sharp threshold that should induce some discontinuity in the curve
                                  of global intellectual progress. The terms used here can't be
                                  exactly correct, anyway, since a machine that was equivalent to a
                                  human would be just another human's worth of addition to the work
                                  force. What I think you mean is that a machine that could do what
                                  a human brain does could probably also do much more, since it might
                                  be tireless (except when down for crash repair or routine maintenance),
                                  have immediate and seamless access to hard digital databases, not
                                  just fuzzy memories (although there are a lot of issues in making
                                  > databases accessible, and solving these issues makes the databases
                                  also more useful to humans and non-humanoid information systems),
                                  and since it could potentially work faster than a human (although
                                  an initial human-level capability would almost by definition be
                                  only just as fast as a person). The argument may have some merit,
                                  but it only means that reaching a level of technology where we can
                                  produce human-equivalent machines actually means we will have reached
                                  a far higher level of capability than just that. It doesn't
                                  demonstrate that there would be a singularity at that point, even
                                  if it does demonstrate that the level of capability would already
                                  be very high and probably rapidly increasing.

                                  > all humans are machines in a way

                                  Yes, and to the first approximation, all objects are round. So
                                  if there is some truthful interpretation of your statement, it is
                                  also true that humans are not machines, and no machine should ever
                                  > be equated with a human, or vice versa.

                                  Humans are complex systems of interacting parts that behave as
                                  described by physics. So are machines. We are used to thinking
                                  about and working with machines, and we see things in biology
                                  that remind us of machines. Similar principles, similar forms.
                                  That is what you mean, but there is still a meaningful distinction
                                  between humans and machines, and we should not erase or blur it.
                                • Andrew
                                  Message 16 of 28, Sep 24, 2004
                                    Who says AIs have to be made of inorganic materials? Moore's Law is pretty
                                    practical in telling us that silicon-based computing is in its sunset times,
                                    and that we'll have to seek out different methods. I'd assert that any AIs
                                    created couldn't be made of something as cumbersome as chips, etc. We'll
                                    probably see neural-network processing on a molecular level; something we do
                                    in our own heads every day.
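
                                    As a rough back-of-envelope check on that "sunset" claim (the 90 nm
                                    starting node and the two-year doubling cadence below are assumed
                                    figures, not numbers from this thread): if transistor density doubles
                                    every two years, linear feature size shrinks by a factor of sqrt(2)
                                    per generation, and a 2004-era process runs into atomic dimensions
                                    within about three decades:

                                        # Hypothetical numbers: a 90 nm process in 2004, with density
                                        # doubling every 2 years (linear size shrinks by sqrt(2)).
                                        feature_nm = 90.0
                                        year = 2004
                                        while feature_nm > 0.5:      # ~0.5 nm: silicon lattice spacing
                                            feature_nm /= 2 ** 0.5   # density x2 => size / sqrt(2)
                                            year += 2
                                        print(year, round(feature_nm, 2))   # -> 2034 0.5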

                                    But, when push comes to shove, how do you define 'organic material'? On the
                                    basic level, every 'living' creature on Earth is made up of inorganic materials
                                    that come together, interact, and behave in systems we call life. Once
                                    these processes have ended, the creature dies and the body begins to break
                                    back down into inorganic parts, much of which is harvested into the systems
                                    of other creatures through feeding. As all humans are made up of inorganic
                                    parts interlinking to form organic systems that host our intelligence, so
                                    would an AI be made of inorganic parts that come together into some sort of
                                    system that would host its intelligence. We just might have to reevaluate
                                    more than a few sacred-cow concepts about life and the shapes it takes.

Remember that thinking logically includes the consideration that we may be
wrong. After all, science is the continuous disproving of the logic and
knowledge of a previous generation, to be replaced by new assertions. It
was very logical of people in the 1890s to say that heavier-than-air flight
was impossible and the atom couldn't be split. All available evidence and
knowledge supported those assertions. Fast forward a few decades and
SURPRISE. Print shops were making good money from physics textbooks having
to be rewritten, while the old ones were long in the trash. I wouldn't be
                                    surprised to come back in a few decades and see our own current texts long
                                    gone into the recycling bins.

                                    Andrew L.

                                    on 9/24/04 3:49 PM, Ray Miller at rvmiller@... wrote:

                                    >
                                    > The most significant point that most of you are missing is that a
                                    > machine made of inorganic parts would never have any of the emotions or
                                    > needs that are only found in organic beings. Its only concerns would be
                                    > loss of power and loss of materials needed to replace its parts or to
                                    > build the new parts it would require to continue its existence. The fear
                                    > of loss of these essentials would have to have been programmed into it
                                    > from its inception, as would any of the other emotions or desires
                                    > attributed to human behavior. Inorganic material cannot feel the
                                    > emotions required to hate and kill, or to love. These can only be
                                    > programmed into it. Think logically.....Ray M.
                                    >
                                    > -----Original Message-----
                                    > From: gmsdrummer_77 [mailto:gms_clan@...]
                                    > Sent: Thursday, September 23, 2004 10:28 PM
                                    > To: nanotech@yahoogroups.com
                                    > Subject: [nanotech] Re: Journey to the Event Horizon
                                    >
> good points from a distinguished person no less -
> one thing I do believe is that the end evolution of superior
> intelligence will be us, but something else like AI could certainly
> come first or alongside
                                    >
                                    > --- In nanotech@yahoogroups.com, Michael Anissimov <michael@a...>
                                    > wrote:
                                    >> Mark Gubrud wrote:
                                    >>
                                    >>> If we give a machine the ability to reprogram itself and to
                                    > increase
                                    >>> its own hardware capacity autonomously (not just "learn") then it
                                    >>> should be no surprise if we find that the machine is out of
                                    > control.
                                    >>>
                                    >>>
                                    >>
                                    >> But such machines will eventually be created whether we like it or
>> not... rather than thinking in terms of "control", shouldn't we be
                                    >> thinking in terms of creating a new species that displays
                                    > behaviors and
                                    >> engages in thoughts we would regard as positive, including with
                                    > respect
                                    >> to its autonomous self-modifications? As Bostrom says:
                                    >>
                                    >> "If a superintelligence starts out with a friendly top goal,
                                    > however,
                                    >> then it can be relied on to stay friendly, or at least not to
                                    >> deliberately rid itself of its friendliness. This point is
                                    > elementary. A
                                    >> "friend" who seeks to transform himself into somebody who wants to
                                    > hurt
                                    >> you, is not your friend."
                                    >>
                                    >> Selfish behavior is encoded into our genes because it was adaptive
                                    > in
                                    >> our ancestral environment. Not all beings need to be selfish or
                                    > go "out
                                    >> of control".
                                    >>
                                    >>> However, there is no inherent reason why a machine might not
                                    > be "as
                                    >>> smart as a human" in the sense of being able to do any particular
                                    >>> thing humans can do, and yet still be fully under control. There
                                    >>> is no inherent reason why a machine can't vastly outperform humans
                                    >>> (like, say, multiplying 12-digit numbers a billion times per
                                    > second)
                                    >>> and still be fully under control and not thinking or wanting
                                    > anything
                                    >>> we didn't intend it to. But of course, we have to be careful
                                    > about
                                    >>> this.
                                    >>>
                                    >>>
                                    >>
                                    >> But eventually an AI would be created that is out of our "control"
                                    >> anyway - wouldn't it be best if we created something we can be
                                    > proud of,
                                    >> something that represents all of humanity, something we would even
                                    >> *want* to have out of our "control" because its integrity and
                                    > altruism
                                    >> is at superhuman levels? Why do people find it so easy to imagine
                                    > beings
                                    >> with superhuman strength, speed, and intelligence, but lacking
                                    >> superhuman kindness? When an AI finally does "go out of control",
                                    >> wouldn't it be nice to have a Friendly AI around to help us
                                    > neutralize
                                    >> the threat? (Because we would likely be incapable of doing so.)
                                    >>
                                    >>> 1. "humans that are genetically superior with far greater minds",
                                    >>> whatever this means, are by no means closer than AI that exceeds
                                    >>> human capabilities. We are very far from being able to engineer,
                                    >>> as opposed to just nurture, heal, and modify living systems.
                                    >>>
                                    >>>
                                    >>
                                    >> But computational neuroscientists such as Lloyd Watts
                                    >> (http://www.lloydwatts.com) have already created algorithms that
                                    >> encompass or exceed the functionality of complex biological
                                    > systems, in
                                    >> Watts' case, the auditory system. We know the theoretical
                                    > structure of
                                    >> algorithms that are optimal learners or optimal self-modifiers,
                                    > the only
                                    >> issue is the prohibitive amount of computing power that would be
                                    >> required to implement them.
                                    >>
                                    >>> 2. What is the fundamental difference if we create "a whole new
                                    >>> species" or "transform ourselves" into something other than what
                                    >>> we are? The path may be different, but the destination is not.
                                    >>> Or, maybe there are many possible destinations, but only one of
                                    >>> them is the continued survival of our species (as opposed to
                                    >>> "transforming" or competing it into extinction).
                                    >>>
                                    >>>
                                    >>
                                    >> If we consensually transform ourselves, then we can regard this as
                                    > the
                                    >> continued survival of what we value about our species - our urge
                                    > to
                                    >> improve ourselves and become better people.
                                    >>
                                    >>> 3. The human race, collectively and with its technological tools,
                                    >>> is already a 'super human intelligence'. There are three main
                                    >>> questions about AI or technological advancement of intelligence.
                                    >>> One is the further development of the collective intelligence and
                                    >>> its capabilities, which we regard as "our" capabilities. The
                                    >>> second is the creation of autonomous, out-of-control, self-willed
                                    >>> and dangerous machines, which ought to be regarded as a form of
                                    >>> criminal negligence. The third is the emergence of a form of
                                    >>> self-willed and dangerous autonomous systems that include human
                                    >>> persons or human parts, and that upset the ecological, economic
                                    >>> and military balance of the world. Examples of such dangerous
                                    >>> entities include corporations, militaries, nation-states, cyborgs
                                    >>> and uploads, individual political dictators or capitalist barons,
                                    >>> all with their attendant computer-enabled physical empires.
                                    >>>
                                    >>>
                                    >>
                                    >> Would it be possible to create an autonomous self-willed machine
                                    > that
                                    >> amplifies our collective intelligence in useful ways? Well-raised
                                    >> children are such machines.
                                    >>
                                    >> --
                                    >> Michael Anissimov
                                    >> Advocacy Director
                                    >> Singularity Institute for Artificial Intelligence
                                    >> http://www.singinst.org/
                                    >> Suite 106 PMB #12
                                    >> 4290 Bells Ferry Road
                                    >> Kennesaw, GA 30144
                                    >> The SIAI Voice - Our Free Bulletin:
                                    >> http://www.singinst.org/news/subscribe.html
                                  • David Nobles
Ray, I'm not totally in agreement with your argument. Humans are merely a collection of organic parts. Emotions are just the interaction of those parts with
                                    Message 17 of 28 , Sep 24, 2004
                                    • 0 Attachment
                                      Ray,

I'm not totally in agreement with your argument. Humans are merely a
collection of organic parts. Emotions are just the interaction of those
parts with the environment and the resultant deployment of hormones into
the bloodstream. There is no reason an inorganic organism couldn't work
the same way. When in danger, its system would speed up, causing it to
be anxious or fearful. Other emotions would work the same way. While it's
most likely these would be programmed, as you mentioned, they could also
be an unintentional side effect ... i.e. the sum being much more than the parts.

Even if I completely agreed with your arguments, programmed emotions
might be better than none, serving the inorganic organism in the same
way emotions serve humans: fight or flight, mutual cooperation, etc.

A completely unemotional inorganic entity would be much like a human
sociopath, and in all probability just as disruptive a condition, if
not more so, given the greater damage many inorganic entities could
inflict.
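
To make that concrete, here is a toy sketch (plain Python; the state
variables, coefficients, and thresholds are all invented for illustration)
of how an "emotion" can be read off a system's regulatory state rather
than installed as an explicit feeling:

    # Toy model: "emotion" as a named regime of system dynamics.
    # Coefficients and thresholds are arbitrary, for illustration only.
    class Agent:
        def __init__(self):
            self.power = 1.0       # analogous to blood sugar / charge
            self.arousal = 0.0     # analogous to adrenaline level

        def sense(self, threat):
            # Danger speeds the system up; arousal decays otherwise.
            self.arousal = 0.5 * self.arousal + 0.5 * threat

        def emotion(self):
            if self.arousal > 0.6:
                return "fear"      # the fight-or-flight regime
            if self.power < 0.2:
                return "hunger"    # the resource-seeking regime
            return "calm"

    agent = Agent()
    for threat in [0.0, 0.1, 0.9, 0.9, 0.2]:
        agent.sense(threat)
        print(round(agent.arousal, 2), agent.emotion())

Nothing in that loop is a programmed feeling; "fear" is just the name we
attach to the high-arousal regime of the dynamics, which is the sense in
which the sum can be more than the parts.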

                                      David Nobles

                                      At 12:49 PM 9/24/2004 -0700, you wrote:


                                      >The most significant point that most of you are missing is that a
                                      >machine made of inorganic parts would never have any of the emotions or
                                      >needs that are only found in organic beings. Its only concerns would be
                                      >loss of power and loss of materials needed to replace its parts or to
                                      >build the new parts it would require to continue its existence. The fear
                                      >of loss of these essentials would have to have been programmed into it
                                      >from its inception, as would any of the other emotions or desires
                                      >attributed to human behavior. Inorganic material cannot feel the
                                      >emotions required to hate and kill, or to love. These can only be
                                      >programmed into it. Think logically.....Ray M.
                                      Regards,

                                      David Nobles
                                      http://www.dnobles.com
                                      dnobles@...
                                      dnobles@...

                                      Please avoid sending me Word or PowerPoint attachments.
                                      See http://www.fsf.org/philosophy/no-word-attachments.html

                                    • Mark Gubrud
                                      ... In some sense you are right. You never have any of my emotions, only I have them. Neither can a machine have my emotions. But if I can have emotions,
                                      Message 18 of 28 , Sep 24, 2004
                                      • 0 Attachment
                                        Ray Miller wrote:
                                        >
                                        > The most significant point that most of you are missing is that a
                                        > machine made of inorganic parts would never have any of the emotions
                                        > or needs that are only found in organic beings.

                                        In some sense you are right. You never have any of my emotions, only
                                        I have them. Neither can a machine have my emotions. But if I can
                                        have emotions, and you can have emotions, then a machine ought to be
                                        able to have emotions, too, if we can understand what that means. A
                                        machine's emotions would be its own, but they might work in a very
                                        analogous way to human emotions, just as your emotions are no doubt
                                        very analogous to mine.

                                        > Its only concerns would be loss of power and loss of materials
                                        > needed to replace its parts or to build the new parts it would
                                        > require to continue its existence.

                                        Those might be its only concerns if it were created by evolution
                                        subject to only those survival needs. However, I see no reason why
                                        it could not have other concerns, such as being beneficial to
                                        humanity (which could get us into a lot of trouble), or such as
                                        comprehending the nature of the universe, connecting with God, or
                                        whatever concerns were explicitly or implicitly programmed into its
                                        construction.

                                        > Inorganic material cannot feel the emotions required to hate and
                                        > kill, or to love. These can only be programmed into it.

                                        Okay, you agree a machine can be programmed to kill, but you don't
                                        believe it could be made so that it would feel hate and kill out of
                                        hate, whatever this means. Okay, why should only "organic material"
                                        have this capability? You mean carbon-based molecules, not silicon
                                        chips? I don't see why this should be so. You mean "organic" in
                                        some vitalistic sense, matter that possesses the magic property of
                                        being alive? Biology has not found such a magic property; the
                                        material in living organisms seems to be obeying the same physical
                                        laws as everything else. The "magical" properties of life can be
                                        viewed as similar to the "magical" capabilities of advanced
                                        nanotechnologies compared with today's technologies. In fact, we
                                        expect advanced nanotechnologies, using very different molecules
                                        than those of life, to have capabilities beyond those of life.
                                      • Mark Gubrud
I don't understand why you think "learn on a human level" marks a sharp threshold that should induce some discontinuity in the curve of global intellectual
                                        Message 19 of 28 , Sep 24, 2004
                                        • 0 Attachment
                                          gmsdrummer_77 wrote:

> When the first program is created that can learn on a human level
> then it will surely explode through the roof beyond anything out
> there. If it can learn on a human level it can teach itself
> many to most subjects on the planet, remember all of them and access
> all that information at a tremendous speed with crystal clear
> organization. Once that happens it could easily begin programming
> itself and figuring out how to magnify its intelligence.

                                          I don't understand why you think "learn on a human level" marks a
                                          sharp threshold that should induce some discontinuity in the curve
                                          of global intellectual progress. The terms used here can't be
                                          exactly correct, anyway, since a machine that was equivalent to a
                                          human would be just another human's worth of addition to the work
                                          force. What I think you mean is that a machine that could do what
                                          a human brain does could probably also do much more, since it might
                                          be tireless (except when down for crash repair or routine maintenance),
                                          have immediate and seamless access to hard digital databases, not
just fuzzy memories (although there are a lot of issues in making
databases accessible, and solving these issues also makes the databases
more useful to humans and non-humanoid information systems),
                                          and since it could potentially work faster than a human (although
                                          an initial human-level capability would almost by definition be
                                          only just as fast as a person).

                                          > all humans are machines in a way

                                          Yes, and to the first approximation, all objects are round. So
                                          if there is some truthful interpretation of your statement, it is
                                          also true that humans are not machines, and no machine should ever
be equated with a human, or vice versa.

                                          Humans are complex systems of interacting parts that behave as
                                          described by physics. So are machines. We are used to thinking
                                          about and working with machines, and we see things in biology
                                          that remind us of machines. Similar principles, similar forms.
                                          That is what you mean, but there is still a meaningful distinction
                                          between humans and machines, and we should not erase or blur it.
                                        • Ray Miller
David, Just for fun let's try to disprove my theory. We'll assume that we have reached a point in time where scientists have mastered enough nanotech knowledge
                                          Message 20 of 28 , Sep 25, 2004
                                          • 0 Attachment
                                            David,

                                            Just for fun let's try to disprove my theory. We'll assume that we have
                                            reached a point in time where scientists have mastered enough nanotech
                                            knowledge to be able to produce anything they wish.

The project agreed upon is to replicate a human being constructed
entirely of inorganic material that has all of the emotions, senses and
bodily functions of a human being. It will be 5'6" to 5'10" tall,
depending on whether it is male or female, and be of normal proportions
accordingly.

                                            Starting with something easy, say a 2" square patch of skin, we have to
                                            make it sensitive to all kinds of stimulation. We do this by assembling
                                            thousands of sensors we have constructed using our nanotech knowledge,
side by side until we have covered the 2" square. That done, we must now
connect every sensor into a network that will ultimately relay each
sensation to the main computer brain instantaneously.
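
For a sense of the numbers involved, a back-of-envelope sketch in Python
(assuming a 2" x 2" patch and ~150 touch receptors per square centimeter,
roughly the order of magnitude reported for human hand skin; every figure
here is illustrative, not measured):

    # Back-of-envelope for the 2" x 2" artificial skin patch.
    # Density and event rates are assumed round numbers.
    CM_PER_INCH = 2.54
    area_cm2 = (2 * CM_PER_INCH) ** 2         # ~25.8 cm^2
    density = 150                             # assumed receptors per cm^2
    sensors = int(area_cm2 * density)         # ~3,870 sensors

    rate = 100                                # assumed peak events/sensor/s
    bytes_per_event = 4                       # sensor id + reading
    load = sensors * rate * bytes_per_event   # ~1.5 MB/s to the brain

    print(sensors, load / 1e6)

Even this crude estimate lands in the thousands of sensors for one small
patch, before we add heat receptors, pain receptors, or the wiring fan-in.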

The sensor example above shows how complicated our project is going to
be. But let's assume that we can overcome all obstacles and can
manufacture each and every part in minute detail, some more complicated
than others, like the eye and smell systems. We must
also accept the fact that we must replicate all of the internal organs,
even though we will not be using them, because they interact with some
emotional effects such as sex.

                                            Having completed a human replica in every detail (let's assume a male),
                                            we now must program the brain, the most difficult part of our project.
                                            Assuming we can load every bit of knowledge known, we would naturally
                                            proceed to do this. Ah but next, we must give it its own personality,
                                            its own likes and dislikes, its own loves and hates, etc. Where do we
                                            find the perfect model without any defects?

                                            Having hopefully stimulated everyone's imagination...Bye Raym





                                            -----Original Message-----
                                            From: David Nobles [mailto:DNobles@...]
                                            Sent: Friday, September 24, 2004 1:59 PM
                                            To: nanotech@yahoogroups.com
                                            Subject: RE: [nanotech] Re: Journey to the Event Horizon

                                            Ray,

                                            I'm not totally in agreement with your argument. Humans are merely a
                                            collection of organic parts. Emotions are just the interaction of those
                                            parts with the environment and the resultant deployment of hormones into
                                            the bloodstream. There is no reason an inorganic organism couldn't work
                                            the same way. When in danger, it's system would speed up causing it to
                                            be anxious or fear. Other emotions would be the same. While it's most
                                            likely these would be programmed as you mentioned they could also be an
                                            unintentional side effect ... i.e. the sum being much more than the
                                            parts.

                                            Even if I completely agreed with your arguments - programmed emotions
                                            might be better than none to serve the inorganic organism in the same
                                            way emotions serve humans. Fight or Flight, Mutual cooperation, etc.

                                            A completely unemotional inorganic entity would be much like a human
                                            sociopath and in all probability just as disruptive a condition if
                                            not more so given the greater damage many inorganic entities could
                                            inflict.

                                            David Nobles

                                            At 12:49 PM 9/24/2004 -0700, you wrote:


                                            >The most significant point that most of you are missing is that a
                                            >machine made of inorganic parts would never have any of the emotions or
                                            >needs that are only found in organic beings. Its only concerns would be
                                            >loss of power and loss of materials needed to replace its parts or to
                                            >build the new parts it would require to continue its existence. The
                                            fear
                                            >of loss of these essentials would have to have been programmed into it
                                            >from its inception, as would any of the other emotions or desires
                                            >attributed to human behavior. Inorganic material cannot feel the
                                            >emotions required to hate and kill, or to love. These can only be
                                            >programmed into it. Think logically.....Ray M.
                                            >
                                            >-----Original Message-----
                                            >From: gmsdrummer_77 [mailto:gms_clan@...]
                                            >Sent: Thursday, September 23, 2004 10:28 PM
                                            >To: nanotech@yahoogroups.com
                                            >Subject: [nanotech] Re: Journey to the Event Horizon
                                            >
                                            >good points from a distingueshed(forgive spelling) person no less -
                                            >one thing I do believe is that the end evolution of superior
                                            >intelligence will be us but something else like AI could come first
                                            >or along side certainly
                                            >
                                            >--- In nanotech@yahoogroups.com, Michael Anissimov <michael@a...>
                                            >wrote:
                                            > > Mark Gubrud wrote:
                                            > >
                                            > > >If we give a machine the ability to reprogram itself and to
                                            >increase
                                            > > >its own hardware capacity autonomously (not just "learn") then it
                                            > > >should be no surprise if we find that the machine is out of
                                            >control.
                                            > > >
                                            > > >
                                            > >
                                            > > But such machines will eventually be created whether we like it or
                                            > > not... rather than thinking of terms of "control", shouldn't we be
                                            > > thinking in terms of creating a new species that displays
                                            >behaviors and
                                            > > engages in thoughts we would regard as positive, including with
                                            >respect
                                            > > to its autonomous self-modifications? As Bostrom says:
                                            > >
                                            > > "If a superintelligence starts out with a friendly top goal,
                                            >however,
                                            > > then it can be relied on to stay friendly, or at least not to
                                            > > deliberately rid itself of its friendliness. This point is
                                            >elementary. A
                                            > > "friend" who seeks to transform himself into somebody who wants to
                                            >hurt
                                            > > you, is not your friend."
                                            > >
                                            > > Selfish behavior is encoded into our genes because it was adaptive
                                            >in
                                            > > our ancestral environment. Not all beings need to be selfish or
                                            >go "out
                                            > > of control".
                                            > >
                                            > > >However, there is no inherent reason why a machine might not
                                            >be "as
                                            > > >smart as a human" in the sense of being able to do any particular
                                            > > >thing humans can do, and yet still be fully under control. There
                                            > > >is no inherent reason why a machine can't vastly outperform humans
                                            > > >(like, say, multiplying 12-digit numbers a billion times per
                                            >second)
                                            > > >and still be fully under control and not thinking or wanting
                                            >anything
                                            > > >we didn't intend it to. But of course, we have to be careful
                                            >about
                                            > > >this.
                                            > > >
                                            > > >
                                            > >
                                            > > But eventually an AI would be created that is out of our "control"
> > anyway - wouldn't it be best if we created something we can be
> > proud of, something that represents all of humanity, something we
> > would even *want* to have out of our "control" because its
> > integrity and altruism are at superhuman levels? Why do people find
> > it so easy to imagine beings with superhuman strength, speed, and
> > intelligence, but lacking superhuman kindness? When an AI finally
> > does "go out of control", wouldn't it be nice to have a Friendly AI
> > around to help us neutralize the threat? (Because we would likely
> > be incapable of doing so.)
> >
> > >1. "humans that are genetically superior with far greater minds",
> > >whatever this means, are by no means closer than AI that exceeds
> > >human capabilities. We are very far from being able to engineer,
> > >as opposed to just nurture, heal, and modify living systems.
> >
> > But computational neuroscientists such as Lloyd Watts
> > (http://www.lloydwatts.com) have already created algorithms that
> > encompass or exceed the functionality of complex biological
> > systems - in Watts' case, the auditory system. We know the
> > theoretical structure of algorithms that are optimal learners or
> > optimal self-modifiers; the only issue is the prohibitive amount of
> > computing power that would be required to implement them.
> >
> > >2. What is the fundamental difference if we create "a whole new
> > >species" or "transform ourselves" into something other than what
> > >we are? The path may be different, but the destination is not.
> > >Or, maybe there are many possible destinations, but only one of
> > >them is the continued survival of our species (as opposed to
> > >"transforming" or competing it into extinction).
> >
> > If we consensually transform ourselves, then we can regard this as
> > the continued survival of what we value about our species - our
> > urge to improve ourselves and become better people.
> >
> > >3. The human race, collectively and with its technological tools,
> > >is already a 'super human intelligence'. There are three main
> > >questions about AI or technological advancement of intelligence.
> > >One is the further development of the collective intelligence and
> > >its capabilities, which we regard as "our" capabilities. The
> > >second is the creation of autonomous, out-of-control, self-willed
> > >and dangerous machines, which ought to be regarded as a form of
> > >criminal negligence. The third is the emergence of a form of
> > >self-willed and dangerous autonomous systems that include human
> > >persons or human parts, and that upset the ecological, economic
> > >and military balance of the world. Examples of such dangerous
> > >entities include corporations, militaries, nation-states, cyborgs
> > >and uploads, individual political dictators or capitalist barons,
> > >all with their attendant computer-enabled physical empires.
> >
> > Would it be possible to create an autonomous self-willed machine
> > that amplifies our collective intelligence in useful ways?
> > Well-raised children are such machines.
> >
> > --
> > Michael Anissimov
> > Advocacy Director
> > Singularity Institute for Artificial Intelligence
> > http://www.singinst.org/
> > Suite 106 PMB #12
> > 4290 Bells Ferry Road
> > Kennesaw, GA 30144
> > The SIAI Voice - Our Free Bulletin:
> > http://www.singinst.org/news/subscribe.html

                                            Regards,

                                            David Nobles
                                            http://www.dnobles.com
                                            dnobles@...
                                            dnobles@...

                                            Please avoid sending me Word or PowerPoint attachments.
                                            See http://www.fsf.org/philosophy/no-word-attachments.html






• gmsdrummer_77
Message 21 of 28 , Sep 27, 2004
That's a scary thought - an inorganic sociopath (I could imagine that,
and it wouldn't even know or care)

--- In nanotech@yahoogroups.com, David Nobles <DNobles@d...> wrote:
> Ray,
>
> I'm not totally in agreement with your argument. Humans are merely a
> collection of organic parts. Emotions are just the interaction of
> those parts with the environment and the resultant deployment of
> hormones into the bloodstream. There is no reason an inorganic
> organism couldn't work the same way. When in danger, its system
> would speed up, causing it to be anxious or fearful. Other emotions
> would be the same. While it's most likely these would be programmed,
> as you mentioned, they could also be an unintentional side
> effect ... i.e. the sum being much more than the parts.
>
> Even if I completely agreed with your arguments - programmed
> emotions might be better than none, to serve the inorganic organism
> in the same way emotions serve humans. Fight or flight, mutual
> cooperation, etc.
>
> A completely unemotional inorganic entity would be much like a human
> sociopath, and in all probability just as disruptive a condition, if
> not more so, given the greater damage many inorganic entities could
> inflict.
>
> David Nobles
>
> At 12:49 PM 9/24/2004 -0700, you wrote:
>
>
> >The most significant point that most of you are missing is that a
> >machine made of inorganic parts would never have any of the
> >emotions or needs that are only found in organic beings. Its only
> >concerns would be loss of power and loss of materials needed to
> >replace its parts or to build the new parts it would require to
> >continue its existence. The fear of loss of these essentials would
> >have to have been programmed into it from its inception, as would
> >any of the other emotions or desires attributed to human behavior.
> >Inorganic material cannot feel the emotions required to hate and
> >kill, or to love. These can only be programmed into it. Think
> >logically.....Ray M.
                                              > >
> >-----Original Message-----
> >From: gmsdrummer_77 [mailto:gms_clan@c...]
> >Sent: Thursday, September 23, 2004 10:28 PM
> >To: nanotech@yahoogroups.com
> >Subject: [nanotech] Re: Journey to the Event Horizon
> >
> >Good points from a distinguished person, no less - one thing I do
> >believe is that the end evolution of superior intelligence will be
> >us, but something else like AI could certainly come first or
> >alongside.
                                              > >
> >--- In nanotech@yahoogroups.com, Michael Anissimov <michael@a...>
> >wrote:
> > > Mark Gubrud wrote:
> > >
> > > >If we give a machine the ability to reprogram itself and to
> > > >increase its own hardware capacity autonomously (not just
> > > >"learn") then it should be no surprise if we find that the
> > > >machine is out of control.
> > >
> > > But such machines will eventually be created whether we like it
> > > or not... rather than thinking in terms of "control", shouldn't
> > > we be thinking in terms of creating a new species that displays
> > > behaviors and engages in thoughts we would regard as positive,
> > > including with respect to its autonomous self-modifications? As
> > > Bostrom says:
> > >
> > > "If a superintelligence starts out with a friendly top goal,
> > > however, then it can be relied on to stay friendly, or at least
> > > not to deliberately rid itself of its friendliness. This point is
> > > elementary. A "friend" who seeks to transform himself into
> > > somebody who wants to hurt you, is not your friend."
> > >
> > > Selfish behavior is encoded into our genes because it was
> > > adaptive in our ancestral environment. Not all beings need to be
> > > selfish or go "out of control".
> > >
> > > >However, there is no inherent reason why a machine might not
> > > >be "as smart as a human" in the sense of being able to do any
> > > >particular thing humans can do, and yet still be fully under
> > > >control. There is no inherent reason why a machine can't vastly
> > > >outperform humans (like, say, multiplying 12-digit numbers a
> > > >billion times per second) and still be fully under control and
> > > >not thinking or wanting anything we didn't intend it to. But of
> > > >course, we have to be careful about this.
> > >
> > > But eventually an AI would be created that is out of our
> > > "control" anyway - [snip]

• Michael Anissimov
Message 22 of 28 , Sep 30, 2004
                                                Wayne Radinsky wrote:

                                                >What is the theoretical structure of algorithms that are
                                                >optimal learners or optimal self-modifiers?
                                                >
                                                >

There are several hundred researchers doing work on bias-optimal
learning and optimal rational agents, but the most widely acclaimed
work comes from Juergen Schmidhuber and Marcus Hutter of the Dalle
Molle Institute for Artificial Intelligence (IDSIA,
http://www.idsia.ch/index?lang=en) in Switzerland. I've included links
to their more recent work below.
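
To give a concrete feel for what these "optimal learner" results are
about, here is a deliberately tiny sketch of the core construction - a
Bayesian mixture over an enumerable hypothesis class, with each
hypothesis weighted by a 2^-(description length) simplicity prior. To
be clear, this is my own toy illustration, not Hutter's AIXI or
Schmidhuber's OOPS: the hypothesis class is cut down to repeating bit
patterns so the example stays runnable, and every name in it is made
up.

# Toy "universal induction" in Python: a Bayesian mixture over an
# enumerable hypothesis class, weighted by 2^-(description length).
# Solomonoff induction mixes over ALL computable hypotheses; here the
# class is restricted to repeating bit patterns to keep it runnable.

from itertools import product

def hypotheses(max_len=8):
    """Enumerate toy 'programs': every repeating bit pattern of
    length 1..max_len. A pattern's description length is its length
    in bits, so shorter patterns get exponentially more prior mass."""
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

def predict_next(history, max_len=8):
    """Posterior probability that the next bit is '1': mix every
    pattern consistent with the history, each with prior 2^-length."""
    total = mass_one = 0.0
    for pat in hypotheses(max_len):
        if all(history[i] == pat[i % len(pat)]
               for i in range(len(history))):
            weight = 2.0 ** (-len(pat))  # simplicity prior
            total += weight
            if pat[len(history) % len(pat)] == "1":
                mass_one += weight
    return mass_one / total if total else 0.5  # nothing consistent

print(predict_next("101010"))  # ~0.96: the simplest fits predict '1'

The real constructions replace the toy pattern class with all
computable programs (Solomonoff induction, on which Hutter's AIXI is
built) or search program space in a provably near-optimal order (Levin
search, which Schmidhuber's OOPS extends).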

                                                While looking through the following papers, be sure to keep in mind that
                                                this stuff is not just ivory tower theoretical computer science, but
newly uncovered practical knowledge that will have massive consequences
                                                for our species at some point in the next decade or (at most) two. Our
                                                knowledge of how to create real AI is rapidly increasing, while our
                                                knowledge of how to create "Friendly" AI that behaves in ways we would
                                                consider sensible and constructive is not. With papers like the
                                                following available to anyone with Internet access, the cat is already
                                                out of the bag - general Artificial Intelligence is no longer a question
                                                of "if", but "when". The arrival date of AI is a
                                                mathematical/computational question, largely independent of personal
                                                human opinions about the moral or philosophical status of AI.

                                                What can we do to ensure that the first AI's utility function is complex
                                                enough to model the concerns of other sentient beings and respect them,
                                                rather than blindly pursuing a utility function with a very low degree
                                                of algorithmic complexity? Too few people are taking this question
                                                seriously, because they assume AI is many centuries in the future, just
                                                as everyone used to think that practical anti-aging therapies were
                                                centuries in the future, an assumption now being criticized by an
                                                increasing number of biogerontologists (see "The Curious Case of the
                                                Catatonic Biogerontologists" by Aubrey de Grey at
                                                http://www.longevitymeme.org/articles/printarticle.cfm?article_id=19).
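
One way to make "degree of algorithmic complexity" precise is
Kolmogorov complexity: for a fixed universal machine T, the complexity
of a utility function U is

K_T(U) = \min\{\, \ell(p) : T(p) \text{ computes } U \,\}

i.e. the length, in bits, of the shortest program that computes U. A
goal like "maximize the count of one kind of object" has a tiny K; a
goal rich enough to model and respect the concerns of other sentient
beings does not.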

                                                Here are the articles:

                                                "Goedel Machines: Self-Referential Universal Problem Solvers making
                                                Provably Optimal Self-Improvements" by Juergen Schmidhuber
                                                <http://citebase.eprints.org/cgi-bin/search?submit=1;author=Schmidhuber%2C%20Juergen>
                                                http://www.idsia.ch/~juergen/gmweb3/gmweb3.html

                                                "Optimal Ordered Problem Solver" by Juergen Schmidhuber
                                                http://www.idsia.ch/~juergen/oopsweb/oopsweb.html

                                                "The New AI: General and Sound and Relevant for Physics" by Juergen
                                                Schmidhuber
                                                http://www.idsia.ch/~juergen/newai/newai.html

                                                "The Speed Prior: A New Simplicity Measure Yielding Near-Optimal
                                                Computable Predictions" by Juergen Schmidhuber
                                                http://www.idsia.ch/~juergen/speedprior.html
                                                <ftp://ftp.idsia.ch/pub/juergen/colt.ps>
                                                "A Gentle Introduction to the Universal Algorithmic Agent AIXI" by
                                                Marcus Hutter
                                                http://www.idsia.ch/~marcus/ai/aixigentle.htm

                                                "Towards a Universal Theory of Artificial Intelligence Based on
                                                Algorithmic Probability and Sequential Decisions" by Marcus Hutter
                                                http://www.idsia.ch/~marcus/ai/paixi.htm

                                                "Optimality of Universal Bayesian Sequence Prediction for General Loss
                                                and Alphabet" by Marcus Hutter
                                                ftp://ftp.idsia.ch/pub/juergen/hutter2003jmrl.pdf
                                                <ftp://ftp.idsia.ch/pub/juergen/hutter2003jmrl.pdf>
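
A common thread in the Hutter papers above is the universal prior of
algorithmic probability. Roughly - for a universal prefix machine U,
programs p of length \ell(p), and U(p) = x* meaning the output of p
begins with x - every possible data sequence is weighted by its
shortest descriptions:

M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}

AIXI is, in essence, expectimax decision-making with M as the prior
over environments: optimal in theory, incomputable in practice, which
is where the "prohibitive computing power" caveat in this thread comes
from.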

                                                --
                                                Michael Anissimov
                                                Advocacy Director
                                                Singularity Institute for Artificial Intelligence
                                                http://www.singinst.org/
                                                Suite 106 PMB #12
                                                4290 Bells Ferry Road
                                                Kennesaw, GA 30144
                                                The SIAI Voice - Our Free Bulletin:
                                                http://www.singinst.org/news/subscribe.html
• biodun olusesi
Message 23 of 28 , Oct 6, 2004
I find a couple of postings on this subject quite ill-informed. I'd rather agree with Mark Gubrud that a machine is a machine, and a human being a human being. Any talk of 'inorganic', 'organic', 'emotion', or the lack of it, as applied to humans, probably has to take the different levels of human cognitive and locomotor development into account to make sense. To start with: does a newborn baby possess the sorts of advanced emotions expected from a so-called machine? The social smile, as a developmental milestone, is achieved weeks after a baby is born. A baby reared in an environment devoid of human language or sound may never be able to talk!
The bottom line is that what we call 'emotions' are learned, or if you like have to evolve, much later, after birth.
With each breath you take and give, you exchange countless organisms and cells that were a few seconds ago part of your 'person' with your immediate environment, and that includes your wife, your kid, your co-workers, your next-door neighbour's flowers! This implies that even a human being cannot be precisely defined, as the molecular definition is apt to change with every breath taken or given! The same applies to your other biological functions: feeding, excretion, etc.
So likening a machine to a sociopath is just empty babble - a machine is a machine is a machine!!!
Biodun
www.nanotology.org

                                                  gmsdrummer_77 <gms_clan@...> wrote:
                                                  thats a scary thought - inorganic sociopath (I could imagine that
                                                  and it wouldnt even know or care)

                                                  --- In nanotech@yahoogroups.com, David Nobles <DNobles@d...> wrote:
                                                  > Ray,
                                                  >
                                                  > I'm not totally in agreement with your argument. Humans are
                                                  merely a
                                                  > collection of organic parts. Emotions are just the interaction of
                                                  those
                                                  > parts with the environment and the resultant deployment of
                                                  hormones into
                                                  > the bloodstream. There is no reason an inorganic organism
                                                  couldn't work
                                                  > the same way. When in danger, it's system would speed up causing
                                                  it to
                                                  > be anxious or fear. Other emotions would be the same. While it's
                                                  most
                                                  > likely these would be programmed as you mentioned they could also
                                                  be an
                                                  > unintentional side effect ... i.e. the sum being much more than
                                                  the parts.
                                                  >
                                                  > Even if I completely agreed with your arguments - programmed
                                                  emotions
                                                  > might be better than none to serve the inorganic organism in the
                                                  same
                                                  > way emotions serve humans. Fight or Flight, Mutual cooperation,
                                                  etc.
                                                  >
                                                  > A completely unemotional inorganic entity would be much like a
                                                  human
                                                  > sociopath and in all probability just as disruptive a condition if
                                                  > not more so given the greater damage many inorganic entities could
                                                  > inflict.
                                                  >
                                                  > David Nobles
                                                  >
                                                  > At 12:49 PM 9/24/2004 -0700, you wrote:
                                                  >
                                                  >
                                                  > >The most significant point that most of you are missing is that a
                                                  > >machine made of inorganic parts would never have any of the
                                                  emotions or
                                                  > >needs that are only found in organic beings. Its only concerns
                                                  would be
                                                  > >loss of power and loss of materials needed to replace its parts
                                                  or to
                                                  > >build the new parts it would require to continue its existence.
                                                  The fear
                                                  > >of loss of these essentials would have to have been programmed
                                                  into it
                                                  > >from its inception, as would any of the other emotions or desires
                                                  > >attributed to human behavior. Inorganic material cannot feel the
                                                  > >emotions required to hate and kill, or to love. These can only be
                                                  > >programmed into it. Think logically.....Ray M.
                                                  > >
                                                  > >-----Original Message-----
                                                  > >From: gmsdrummer_77 [mailto:gms_clan@c...]
                                                  > >Sent: Thursday, September 23, 2004 10:28 PM
                                                  > >To: nanotech@yahoogroups.com
                                                  > >Subject: [nanotech] Re: Journey to the Event Horizon
                                                  > >
                                                  > >good points from a distingueshed(forgive spelling) person no
                                                  less -
                                                  > >one thing I do believe is that the end evolution of superior
                                                  > >intelligence will be us but something else like AI could come
                                                  first
                                                  > >or along side certainly
                                                  > >
                                                  > >--- In nanotech@yahoogroups.com, Michael Anissimov <michael@a...>
                                                  > >wrote:
                                                  > > > Mark Gubrud wrote:
                                                  > > >
                                                  > > > >If we give a machine the ability to reprogram itself and to
                                                  > >increase
                                                  > > > >its own hardware capacity autonomously (not just "learn")
                                                  then it
                                                  > > > >should be no surprise if we find that the machine is out of
                                                  > >control.
                                                  > > > >
                                                  > > > >
                                                  > > >
                                                  > > > But such machines will eventually be created whether we like
                                                  it or
                                                  > > > not... rather than thinking of terms of "control", shouldn't
                                                  we be
                                                  > > > thinking in terms of creating a new species that displays
                                                  > >behaviors and
                                                  > > > engages in thoughts we would regard as positive, including with
                                                  > >respect
                                                  > > > to its autonomous self-modifications? As Bostrom says:
                                                  > > >
                                                  > > > "If a superintelligence starts out with a friendly top goal,
                                                  > >however,
                                                  > > > then it can be relied on to stay friendly, or at least not to
                                                  > > > deliberately rid itself of its friendliness. This point is
                                                  > >elementary. A
                                                  > > > "friend" who seeks to transform himself into somebody who
                                                  wants to
                                                  > >hurt
                                                  > > > you, is not your friend."
                                                  > > >
                                                  > > > Selfish behavior is encoded into our genes because it was
                                                  adaptive
                                                  > >in
                                                  > > > our ancestral environment. Not all beings need to be selfish or
                                                  > >go "out
                                                  > > > of control".
                                                  > > >
                                                  > > > >However, there is no inherent reason why a machine might not
                                                  > >be "as
                                                  > > > >smart as a human" in the sense of being able to do any
                                                  particular
                                                  > > > >thing humans can do, and yet still be fully under control.
                                                  There
                                                  > > > >is no inherent reason why a machine can't vastly outperform
                                                  humans
                                                  > > > >(like, say, multiplying 12-digit numbers a billion times per
                                                  > >second)
                                                  > > > >and still be fully under control and not thinking or wanting
                                                  > >anything
                                                  > > > >we didn't intend it to. But of course, we have to be careful
                                                  > >about
                                                  > > > >this.
                                                  > > > >
                                                  > > > >
                                                  > > >
                                                  > > > But eventually an AI would be created that is out of
                                                  our "control"
                                                  > > > anyway - wouldn't it be best if we created something we can be
                                                  > >proud of,
                                                  > > > something that represents all of humanity, something we would
                                                  even
                                                  > > > *want* to have out of our "control" because its integrity and
                                                  > >altruism
                                                  > > > is at superhuman levels? Why do people find it so easy to
                                                  imagine
                                                  > >beings
                                                  > > > with superhuman strength, speed, and intelligence, but lacking
                                                  > > > superhuman kindness? When an AI finally does "go out of
                                                  control",
                                                  > > > wouldn't it be nice to have a Friendly AI around to help us
                                                  > >neutralize
                                                  > > > the threat? (Because we would likely be incapable of doing so.)
                                                  > > >
                                                  > > > >1. "humans that are genetically superior with far greater
                                                  minds",
                                                  > > > >whatever this means, are by no means closer than AI that
                                                  exceeds
                                                  > > > >human capabilities. We are very far from being able to
                                                  engineer,
                                                  > > > >as opposed to just nurture, heal, and modify living systems.
                                                  > > > >
                                                  > > > >
                                                  > > >
                                                  > > > But computational neuroscientists such as Lloyd Watts
                                                  > > > (http://www.lloydwatts.com) have already created algorithms
                                                  that
                                                  > > > encompass or exceed the functionality of complex biological
                                                  > >systems, in
                                                  > > > Watts' case, the auditory system. We know the theoretical
                                                  > >structure of
                                                  > > > algorithms that are optimal learners or optimal self-modifiers,
                                                  > >the only
                                                  > > > issue is the prohibitive amount of computing power that would
                                                  be
                                                  > > > required to implement them.
                                                  > > >
> > > >2. What is the fundamental difference if we create "a whole new
> > > >species" or "transform ourselves" into something other than what
> > > >we are? The path may be different, but the destination is not.
> > > >Or, maybe there are many possible destinations, but only one of
> > > >them is the continued survival of our species (as opposed to
> > > >"transforming" or competing it into extinction).
> > >
> > > If we consensually transform ourselves, then we can regard this as
> > > the continued survival of what we value about our species - our
> > > urge to improve ourselves and become better people.
> > >
> > > >3. The human race, collectively and with its technological tools,
> > > >is already a 'super human intelligence'. There are three main
> > > >questions about AI or technological advancement of intelligence.
> > > >One is the further development of the collective intelligence and
> > > >its capabilities, which we regard as "our" capabilities. The
> > > >second is the creation of autonomous, out-of-control, self-willed
> > > >and dangerous machines, which ought to be regarded as a form of
> > > >criminal negligence. The third is the emergence of a form of
> > > >self-willed and dangerous autonomous systems that include human
> > > >persons or human parts, and that upset the ecological, economic
> > > >and military balance of the world. Examples of such dangerous
> > > >entities include corporations, militaries, nation-states, cyborgs
> > > >and uploads, individual political dictators or capitalist barons,
> > > >all with their attendant computer-enabled physical empires.
> > >
> > > Would it be possible to create an autonomous self-willed machine
> > > that amplifies our collective intelligence in useful ways?
> > > Well-raised children are such machines.
> > >
> > > --
> > > Michael Anissimov
> > > Advocacy Director
> > > Singularity Institute for Artificial Intelligence
> > > http://www.singinst.org/
> > > Suite 106 PMB #12
> > > 4290 Bells Ferry Road
> > > Kennesaw, GA 30144
> > > The SIAI Voice - Our Free Bulletin:
> > > http://www.singinst.org/news/subscribe.html




                                                • Andrew
I do hope that Mr. Biodun realizes that the bottom lines he seems to be screaming about (apparently multiple exclamation points validate his own views over
                                                  Message 24 of 28 , Oct 7, 2004
                                                  • 0 Attachment
I do hope that Mr. Biodun realizes that the 'bottom lines' he seems to be
screaming about (apparently multiple exclamation points validate his own
views over those of others) have a tendency to be moved hither and yon as
the decades pass. Any decent student of science knows this. Many times one
realizes that there wasn't even a bottom line at all, but a vague gradient,
and the perception of a 'line' was just a security blanket.

A machine may be a machine may be a machine. . . but such
internally-referential circles of logic don't really tell us what a machine
"is." Beyond that, they do nothing to tell us what a machine "isn't." All
poodles are poodles, and all poodles are dogs; but are all dogs poodles?
All humans are human, and all humans are intelligent (debatable?); but are
all intelligences human? Time and discovery will tell.

Out of curiosity I looked up 'machine' in a number of different
dictionaries, and came up with some interesting results. All agree that it
is any combination of interrelated parts for using or applying energy to do
work: that covers pretty much everything from levers to sperm flagella.
Reading further into the possible definitions raised some eyebrows. Both
Funk & Wagnalls and dictionary.com include 'An intricate natural system or
organism, such as the human body' in their definitions, and dictionary.com
goes on to say a machine can also be 'A person who acts in a rigid,
mechanical, or unconscious manner.' Note the use of the word person. At
the same time, also notice the use of the word unconscious, for Webster's
adds an interesting wrinkle when it says 'acting without thought or will.'
In addition, the commentary notes in dictionary.com say 'Where the effect is
chemical, or other than mechanical, the contrivance is usually denominated
an apparatus, not a machine.'

                                                    I don't think I'll open my thesaurus, because that would be another can of
                                                    worms. *smirk*

So, what did we learn from all that? Well, that a machine is a machine is a
machine, and that a machine can be many things; but whatever an AI is, it
may, like us, not be a machine at all. The mere presence of thought and will
removes it from our society's definition of the word.

                                                    Andrew L.

P.S. Linguistics? We've really wandered from nanotech, haven't we. lol
Well, when they design and build cars they spend time on driver psychology
and ethics (rather than being silly and just sticking with metal and engine
parts), so it's all good.

                                                    on 10/6/04 12:49 PM, biodun olusesi at otonetafrica2000@... wrote:


I find a couple of postings on this subject quite ill-informed. I think I'll
rather agree with Mark Gubrud that a machine is a machine, and a human being
a human being. Any talk about 'inorganic', 'organic', 'emotion', or the lack
of it, as applied to humans, probably has to incorporate different levels of
human cognitive or locomotor development to make sense. To start, does a
newborn baby possess the sorts of advanced emotions expected from a
so-called machine? Social smile as a milestone is achieved weeks after a
baby is born. A baby that is reared in an environment devoid of human
language or sound may never be able to talk!
The bottom line is that what we call 'emotions' are learned, or if you like,
have to evolve much later after birth.
With each breath you take and give, you exchange countless organisms and
cells that were a few seconds ago part of your 'person' with your immediate
environment, and that includes your wife, your kid, your co-workers, your
next door neighbour's flowers! That implies that even a human being cannot
be precisely defined, as the molecular definition is apt to change with
every breath taken or given! The same thing applies to your other biological
roles - excretion, feeding, etc.
So likening a machine to a sociopath is just empty babble - a machine is a
machine is a machine!!!
                                                    Biodun
                                                    www.nanotology.org

gmsdrummer_77 <gms_clan@...> wrote:
that's a scary thought - an inorganic sociopath (I could imagine that,
and it wouldn't even know or care)

--- In nanotech@yahoogroups.com, David Nobles <DNobles@d...> wrote:
> Ray,
>
> I'm not totally in agreement with your argument. Humans are merely a
> collection of organic parts. Emotions are just the interaction of those
> parts with the environment and the resultant deployment of hormones
> into the bloodstream. There is no reason an inorganic organism couldn't
> work the same way. When in danger, its system would speed up, causing
> it to feel anxiety or fear. Other emotions would be the same. While
> it's most likely these would be programmed, as you mentioned, they
> could also be an unintentional side effect ... i.e. the sum being much
> more than the parts.
>
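(Editor's note: a toy sketch of the "emotions as parts interacting with
the environment" idea above - entirely my construction, with invented
names, not anything proposed in this thread. The "emotion" is just a
state variable that sensed danger pushes up and that in turn modulates
behavior:

    # Python: emotion as a state variable coupling environment to action.
    from dataclasses import dataclass

    @dataclass
    class Agent:
        fear: float = 0.0    # stands in for circulating stress hormones
        decay: float = 0.8   # "hormones" clear out of the system over time

        def sense(self, threat: float) -> None:
            # danger speeds the system up: fear accumulates, then decays
            self.fear = self.decay * self.fear + threat

        def act(self) -> str:
            # behavior is modulated by internal state, organic or not
            return "flee" if self.fear > 0.5 else "explore"

    agent = Agent()
    for threat in (0.0, 0.1, 0.7, 0.0, 0.0):
        agent.sense(threat)
        print(f"fear={agent.fear:.2f} -> {agent.act()}")

On this reading, whether the substrate is organic is beside the point;
what matters is the feedback loop between sensing and acting.)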
> Even if I completely agreed with your arguments - programmed emotions
> might be better than none, serving the inorganic organism in the same
> way emotions serve humans: fight or flight, mutual cooperation, etc.
>
> A completely unemotional inorganic entity would be much like a human
> sociopath, and in all probability just as disruptive a condition, if
> not more so, given the greater damage many inorganic entities could
> inflict.
>
> David Nobles
>
> At 12:49 PM 9/24/2004 -0700, you wrote:
>
> >The most significant point that most of you are missing is that a
> >machine made of inorganic parts would never have any of the emotions
> >or needs that are only found in organic beings. Its only concerns
> >would be loss of power and loss of materials needed to replace its
> >parts or to build the new parts it would require to continue its
> >existence. The fear of loss of these essentials would have to have
> >been programmed into it from its inception, as would any of the other
> >emotions or desires attributed to human behavior. Inorganic material
> >cannot feel the emotions required to hate and kill, or to love. These
> >can only be programmed into it. Think logically.....Ray M.
> >
> >-----Original Message-----
> >From: gmsdrummer_77 [mailto:gms_clan@c...]
> >Sent: Thursday, September 23, 2004 10:28 PM
> >To: nanotech@yahoogroups.com
> >Subject: [nanotech] Re: Journey to the Event Horizon
> >
> >good points from a distinguished person no less - one thing I do
> >believe is that the end evolution of superior intelligence will be
> >us, but something else like AI could certainly come first or
> >alongside
> >
> >--- In nanotech@yahoogroups.com, Michael Anissimov <michael@a...>
> >wrote:
> > > Mark Gubrud wrote:
> > >
> > > >If we give a machine the ability to reprogram itself and to
> > > >increase its own hardware capacity autonomously (not just
> > > >"learn") then it should be no surprise if we find that the
> > > >machine is out of control.
> > >
> > > But such machines will eventually be created whether we like it or
> > > not... rather than thinking in terms of "control", shouldn't we be
> > > thinking in terms of creating a new species that displays
> > > behaviors and engages in thoughts we would regard as positive,
> > > including with respect to its autonomous self-modifications? As
> > > Bostrom says:
> > >
> > > "If a superintelligence starts out with a friendly top goal,
> > > however, then it can be relied on to stay friendly, or at least
> > > not to deliberately rid itself of its friendliness. This point is
> > > elementary. A "friend" who seeks to transform himself into
> > > somebody who wants to hurt you, is not your friend."
> > >
> > > Selfish behavior is encoded into our genes because it was adaptive
> > > in our ancestral environment. Not all beings need to be selfish or
> > > go "out of control".
> > >
> > > >However, there is no inherent reason why a machine might not be
> > > >"as smart as a human" in the sense of being able to do any
> > > >particular thing humans can do, and yet still be fully under
> > > >control. There is no inherent reason why a machine can't vastly
> > > >outperform humans (like, say, multiplying 12-digit numbers a
> > > >billion times per second) and still be fully under control and
> > > >not thinking or wanting anything we didn't intend it to. But of
> > > >course, we have to be careful about this.
> > >
> > > But eventually an AI would be created that is out of our "control"
> > > anyway - wouldn't it be best if we created something we can be
> > > proud of, something that represents all of humanity, something we
> > > would even *want* to have out of our "control" because its
> > > integrity and altruism are at superhuman levels? Why do people
> > > find it so easy to imagine beings with superhuman strength, speed,
> > > and intelligence, but lacking superhuman kindness? When an AI
> > > finally does "go out of control", wouldn't it be nice to have a
> > > Friendly AI around to help us neutralize the threat? (Because we
> > > would likely be incapable of doing so.)
> > >
> > > >1. "humans that are genetically superior with far greater minds",
> > > >whatever this means, are by no means closer than AI that exceeds
> > > >human capabilities. We are very far from being able to engineer,
> > > >as opposed to just nurture, heal, and modify living systems.
> > >
> > > But computational neuroscientists such as Lloyd Watts
> > > (http://www.lloydwatts.com) have already created algorithms that
> > > encompass or exceed the functionality of complex biological
> > > systems, in Watts' case, the auditory system. We know the
> > > theoretical structure of algorithms that are optimal learners or
> > > optimal self-modifiers; the only issue is the prohibitive amount
> > > of computing power that would be required to implement them.
> > >
> > > >2. What is the fundamental difference if we create "a whole new
> > > >species" or "transform ourselves" into something other than what
> > > >we are? The path may be different, but the destination is not.
> > > >Or, maybe there are many possible destinations, but only one of
> > > >them is the continued survival of our species (as opposed to
> > > >"transforming" or competing it into extinction).
> > >
> > > If we consensually transform ourselves, then we can regard this as
> > > the continued survival of what we value about our species - our
> > > urge to improve ourselves and become better people.
> > >
> > > >3. The human race, collectively and with its technological tools,
> > > >is already a 'super human intelligence'. There are three main
> > > >questions about AI or technological advancement of intelligence.
> > > >One is the further development of the collective intelligence and
> > > >its capabilities, which we regard as "our" capabilities. The
> > > >second is the creation of autonomous, out-of-control, self-willed
> > > >and dangerous machines, which ought to be regarded as a form of
> > > >criminal negligence. The third is the emergence of a form of
> > > >self-willed and dangerous autonomous systems that include human
> > > >persons or human parts, and that upset the ecological, economic
> > > >and military balance of the world. Examples of such dangerous
> > > >entities include corporations, militaries, nation-states, cyborgs
> > > >and uploads, individual political dictators or capitalist barons,
> > > >all with their attendant computer-enabled physical empires.
> > >
> > > Would it be possible to create an autonomous self-willed machine
> > > that amplifies our collective intelligence in useful ways?
> > > Well-raised children are such machines.
> > >
> > > --
> > > Michael Anissimov
> > > Advocacy Director
> > > Singularity Institute for Artificial Intelligence
> > > http://www.singinst.org/
> > > Suite 106 PMB #12
> > > 4290 Bells Ferry Road
> > > Kennesaw, GA 30144
> > > The SIAI Voice - Our Free Bulletin:
> > > http://www.singinst.org/news/subscribe.html





                                                  • Mark Gubrud
                                                    ... No. ... Most humans are intelligent, not equally, but differently. ... No reason why there should not be intelligences that are not human, and even
                                                    Message 25 of 28 , Oct 7, 2004
                                                    • 0 Attachment
                                                      Andrew wrote:
                                                      >
                                                      > are all dogs poodles?

                                                      No.

                                                      > All humans are human, and all humans are intelligent (debatable?);

                                                      Most humans are intelligent, not equally, but differently.

                                                      > but are all intelligences human?

                                                      No reason why there should not be intelligences that are not human,
                                                      and even intelligences that closely mimic humans, but are not human.
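(Editor's note: Andrew's poodle/dog point is just the asymmetry of
subset relations; a minimal illustration, with invented class names,
not code from this thread:

    # Python: subclass relations are one-way, like subset relations.
    class Dog: pass
    class Poodle(Dog): pass

    print(issubclass(Poodle, Dog))   # True: all poodles are dogs
    print(issubclass(Dog, Poodle))   # False: not all dogs are poodles

    class Intelligence: pass
    class Human(Intelligence): pass
    class MachineMind(Intelligence): pass   # hypothetical non-human branch

    print(issubclass(Human, Intelligence))  # True
    print(issubclass(Intelligence, Human))  # False

Nothing in the relation forces every intelligence to be human.)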

                                                      > Out of curiosity I looked up 'machine' in a number of different
                                                      > dictionaries, and came up with some interesting results.

                                                      Dictionaries are not the authorities which language must conform to.
                                                      Rather, the reverse. And even language strives to describe reality.

                                                      "The human body is a machine", or, equivalently, "people are machines"
                                                      is a true statement, that is, it can be interpreted in a way that is
                                                      true. It also has false interpretations. "People are not machines"
                                                      is another true statement.

                                                      Everything is round, but some things are square.
                                                    • Andrew
No. Well. . . yeah. . . that's the well-worn axiom. "All poodles are dogs, but are all dogs poodles?" Surprised you haven't heard it before. ...
                                                      Message 26 of 28 , Oct 7, 2004
                                                      • 0 Attachment
on 10/7/04 7:30 PM, Mark Gubrud at mgubrud@... wrote:

> > are all dogs poodles?
>
> No.

Well. . . yeah. . . that's the well-worn axiom. "All poodles are dogs, but
are all dogs poodles?" Surprised you haven't heard it before.

> > Out of curiosity I looked up 'machine' in a number of different
> > dictionaries, and came up with some interesting results.
>
> Dictionaries are not the authorities which language must conform to.
> Rather, the reverse. And even language strives to describe reality.

Yeah. . . duh. *shrug* That's why you always check different dictionaries
for different peoples' opinions on word meanings. "Dictionaries: Opinion
presented as truth in alphabetical order." - J. R. Saul. *thumbs up* for
giving a linguist/writer basic info on language.

> "The human body is a machine", or, equivalently, "people are machines"
> is a true statement, that is, it can be interpreted in a way that is
> true. It also has false interpretations. "People are not machines"
> is another true statement.
>
> Everything is round, but some things are square.

Yes. . . and? lol

Just out of curiosity, did you have anything to say, or did you just want
to repeat what I had already written and reinforce it with the obvious?
*smirk*

Andrew L.
