Re: [nanotech] Friendly AI

  • Chris Phoenix, Aug 3, 2001
      Brian Atkins wrote:
      > Mind control via a superintelligent AI is a legitimate concern, but
      > normally it is thought to only be possible by a general intelligence
      > that has moved onto superintelligence. ....
      > I don't think we could have the fine level of control you
      > are worrying about using only specialized AI.

      Good point. But it remains to be seen. Also, the feasibility depends
      on the type of control desired. If you want to change someone's belief
      system, that's pretty hard and probably requires a superintelligent AI.
      But if you merely want to implant a picture or phrase that will persist
      for a while, that may be a lot easier.

      > The other thing to keep
      > in mind is the power of the Net and associated technologies. Unless
      > you assume no one out there knows this software exists, you would
      > quickly have people reading about this new "speech technique" and
      > attempting to both use it for themselves and also protect themselves
      > from the effects.

      Most people would be too apathetic.

      > While this might result in a lot of competition at
      > first, it would also make the people vastly more aware of it and
      > the potential dangers. At that point you'd have the government most
      > likely banning it.

      Except that industry lobbyists would keep the government from banning
      it.

      > Sorry to ramble, so my answer is yes it might be a somewhat legitimate
      > concern, but not to the extent you worry about. The end result of
      > such software would probably be to have it banned (who would develop
      > it in the first place?)

      Any large advertising agency or political party.

      Chris

      --
      Chris Phoenix cphoenix@... http://www.best.com/~cphoenix
      Interests: nanotechnology, dyslexia, caving, filk, SF, patent
      reform... Check out PriorArt.org!
    • Eugene Leitl, Aug 3, 2001
        On Thu, 2 Aug 2001, Brian Atkins wrote:

        > 'gene are you ever going to break down and read and really criticize
        > Creating Friendly AI? Because you come across as rather ignorant,

        Possible, when I've got time to read that darn thing. Too many words, too
        little time.

        > which is unusual for you.

        Since I never got responses to what I see as definitive
        showstoppers (the undecidability thing, for starters), only
        pointers to large ASCII deserts which do not address those
        showstoppers, I somehow don't perceive the burden of proof to be
        on my side.

        This will change once you have something to strut which you
        should ordinarily not be able to achieve. Then I'll go over
        Eliezer's stuff with a magnifying glass. My motivation would be
        sky-high, because I would be shitting my pants.
      • Samantha Atkins, Aug 4, 2001
          Brian Atkins wrote:
          >

          > >
          > > But we already know that there are people who want other people to be
          > > docile thoughtless sheep. If those people get AIs (*especially*
          > > special-purpose software) and use them to that end...
          > >
          >
          > Mind control via a superintelligent AI is a legitimate concern, but
          > normally it is thought to only be possible by a general intelligence
          > that has moved onto superintelligence. I mean, effectively we are
          > talking about some software here that can accurately simulate lots
          > of human minds or through some other means come up with specific
          > speeches or even smaller sets of words that would effectively control
          > large numbers of people.


          Effective control does not require any such large-scale
          simulation at all. All that is required is such a level of
          observation and interference/punishment for modifying behavior
          that the majority will conform rather than risk the possible
          consequences. Every dictator has known this. With modern
          technology, the level of observation and control through
          reinforcement can be near-total even with minimal AI.

          A minimally more active control program might disrupt certain
          types of brain activity except when the person was engaged in
          actions deemed desirable. A somewhat more thorough control
          would require an advance (though not a large one) in our
          ability to read the brain and note unacceptable thoughts and
          association patterns.

          At a more mundane level, the successful manipulation of human
          needs and fears through memetic campaigns and staged events
          can be greatly enhanced with only reasonably good
          computational support.

          Overall, a lot of nastiness can be accomplished in turning
          people into sheep without requiring a superintelligence.

          - samantha