TRANSTOPIA

  • super_intelligence1234@yahoo.com
    Message 1 of 4 , Nov 2, 2001
      We're at a crossroads. For thousands of years mankind has been the
      dominant species on earth, the pinnacle of evolution. Now, as we
      enter the 21st century, this is about to change. A new and radically
      different chapter of evolution is about to begin, for, as Vernor
      Vinge put it at the 1993 NASA VISION-21 Symposium:
      `Within thirty years, we will have the technological means to create
      superhuman intelligence. Shortly after, the human era will be ended.'
      This event, the relatively sudden emergence of superintelligence
      (SI), is often referred to as the Singularity in Transhuman circles.
      The longer definition is:
      "SINGULARITY: the postulated point or short period in our future when
      our self-guided evolutionary development accelerates enormously
      (powered by nanotech, neuroscience, AI, and perhaps uploading) so
      that nothing beyond that time can reliably be conceived". [Vernor
      Vinge, 1986] (Lextropicon).
      Whether these new, Posthuman beings (aka SIs, Powers or PSEs -- Post-
      Singularity Entities) will be augmented humans, artificial
      intelligences (AIs) or some hybrid form, they will no doubt change
      life as we know it rapidly and profoundly. For better or for worse;
      what happens to those who are left behind in this burst of self-
      directed hyperevolution is by definition unknown, "unknowable" even,
      but extinction is definitely one of the more realistic options.

      Though it may take a bit longer than Vinge's proposed 20-30 years
      (then again, it could be upon us tomorrow for all we know), the
      Singularity is virtually inevitable; it is a logical conclusion of
      our (technological) evolution. Only massive, world-wide social,
      economic, technological or natural disasters could stop or delay it.
      Currently, it seems highly unlikely that creating a Superintelligence
      (in one form or another) is technically impossible, or so difficult
      that it will take centuries or more. Even according to fairly
      conservative predictions, based on current trends and scientific
      knowledge, raw computing power will match, and then quickly surpass
      that of the human brain in less than 20 years, possibly even much
      sooner (approx. 5 years). With that kind of power available so soon,
      one can only imagine what the world will look like 30, 40 years from
      now...
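
      The arithmetic behind such estimates is easy to sketch. Below is a
      minimal back-of-the-envelope extrapolation; the brain estimate,
      the 2001 baseline, and the doubling time are illustrative,
      Moravec-style assumptions rather than established figures:

      import math

      # Illustrative assumptions only, not established facts:
      BRAIN_OPS_PER_SEC = 1e16      # assumed human-brain processing estimate
      BASELINE_OPS_PER_SEC = 1e12   # assumed ~2001 supercomputer (~1 TFLOPS)
      DOUBLING_TIME_YEARS = 1.5     # assumed Moore's-law doubling period

      # Years until the baseline machine, doubling every DOUBLING_TIME_YEARS,
      # reaches the assumed brain estimate:
      years = DOUBLING_TIME_YEARS * math.log2(BRAIN_OPS_PER_SEC /
                                              BASELINE_OPS_PER_SEC)
      print(f"Rough years to parity: {years:.0f}")  # ~20 with these inputs

      Start instead from a ~1 GFLOPS desktop PC and the same formula
      gives roughly 35 years, the same order of magnitude as the "30, 40
      years from now" horizon above.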

      Directed efforts to stop some of the more dangerous technologies
      which might cause a "malevolent" Singularity (or global destruction
      in general) can only slow the process down somewhat, not stop it
      entirely. Unless, of course, one is prepared to sacrifice
      civilization itself, which given the fact that we need progress to
      survive isn't a very good idea. Still, less destructive Singularity-
      delaying actions might buy valuable time and should therefore not be
      dismissed altogether. What exactly these "actions", if any, will
      entail is pure speculation at this point; it ultimately depends on
      future developments: the rate of progress in various scientific
      and technological fields, socio-economic and political trends, etc.
      Right now, sentient AI seems to be the most likely candidate for a
      Singularity (not good at all, see below), but sudden advances in
      nanotech or genetic engineering, for example (both fields are going
      strong), could dramatically change the picture.

      "Evolution says organisms are replaced by species of superior
      adaptability. When our robots are tired of taking orders, they may,
      if we're lucky, show more compassion to us than we've shown the
      species we pushed into oblivion. Perhaps they will put us into zoos,
      throw peanuts at us and make us dance inside our cages."
      -- Dr. Michio Kaku in "The Future of Technology" (Time magazine cover
      story, June 2000).
      The Posthuman future may be glorious, filled with wonders far beyond
      our current comprehension, but what good is that to a person if he
      can't be part of it? If AIs become superintelligent before humans do,
      this will reduce us to second-rate beings that are almost completely
      at the mercy of this new "master race". During our domination of the
      earth, we have wiped out countless animal species, brought others
      (including our "cousins", the apes) to the brink of extinction, used
      them for scientific experiments, put them in cages for our enjoyment
      etc., etc. This is the privilege of (near-absolute) power. If we lose
      our top position to our own creations, we will find ourselves in the
      same precarious position that animals are in now. As said before,
      it's impossible to tell what an "alien" Superintelligence would do
      to/with lower life forms such as humans, but the mere fact that we'd
      be completely at its mercy should be reason enough for concern.

      Needless to say, from a personal perspective it doesn't matter much
      who or what exactly will become superintelligent (AIs, genetically
      engineered humans, cyborgs) -- in each case you'd be faced with an
      unpredictable, vastly superior being. A god, in effect. Because one's
      personality would almost certainly change, perhaps completely beyond
      recognition, once the augmentation process starts, it doesn't even
      really matter whether the person would be "good" or "bad" to begin
      with; the result would be "unknowable" anyway. Many (most? all??) of
      our current emotions and attitudes, the legacy of our evolutionary
      past, could easily become as antiquated as our biological bodies in
      the Posthuman world. Altruism may be useful in an evolutionary
      context where weak, imperfect beings have to rely on cooperation to
      survive, but to a solitary god-like SI it would just be a dangerous
      handicap. What would it gain by letting others ascend? Most likely
      nothing. What could it lose? Possibly everything. Consequently, if
      its concept of logic will even remotely resemble ours, it probably
      won't let us become its peers. And even if it's completely, utterly
      alien, it could still harm or kill us for other (apparently
      incomprehensible) reasons, or even more or less accidentally, as a
      side-effect of its ascension, for example. So what's the moral of the
      story here? Well, make sure that you're one of the first Powers,
      obviously, but more on that later.

      True, the future doesn't necessarily have to be bad for the less-than-
      superintelligent: the SIs could be "eternal" philanthropists for all
      we know, altruism might turn out to be the most logically
      stable "Objective Morality", they could be our obedient, genie-like
      servants, or they might simply choose to ignore us altogether and fly
      off into space. But depending on such positive scenarios in the face
      of unknowability is dangerously naive (wishful thinking). Yet, though
      lip service is occasionally paid to the dangers of the Singularity
      and powerful new technologies in general, there is no known
      coordinated effort within the transhuman community to actively
      prepare for the coming changes. This has to do with the generally
      (too) optimistic, idealistic and technophilic attitude of many
      Transhumanists, and perhaps a desire to make/keep the philosophy
      socially acceptable and [thus] easier to proliferate. Visions of a
      harsh, devouring technocalypse, no matter how realistic, simply don't
      fit this rather mild, almost "politically correct" image. Of course
      lethargy, defeatism, strife and conservative thinking also contribute
      to the lack of focus and momentum in Transhumanism, but the main
      problem seems to be that "we" aren't taking our own ideas seriously
      enough and fail to fully grasp the implications of things like
      nanotech, AI and the Singularity. It's all talk and no action.

      Enter Transtopianism. This philosophy follows the general outlines of
      Transhumanism in that it advocates overcoming our biological and
      social limits by means of reason, science and technology, but there
      are also some important differences. Principally, these are: 1) a
      much heavier emphasis on the Singularity, 2) the explicit inclusion
      of various philosophical, political & technological elements which
      are "optional" (or nonexistent) in general Transhumanism, and 3) the
      intention to become a movement with a clearly defined organizational
      structure instead of just a loose collection of more or less like-
      minded individuals (which is what Transhumanism, and to a somewhat
      lesser extent Extropianism, are). Essentially, Transtopianism is an
      attempt to realize Transhumanism's full potential as a practical way
      to (significantly) improve one's life in the present, and to survive
      radical future changes. Unlike regular Transhumanism or even
      Extropianism, this is a "holistic" philosophy; a complete worldview
      for those who seek "perfection" in all fields of human endeavor. It
      is also a strongly dualistic philosophy, motivated by equal amounts
      of optimism and pessimism, instead of blind (or at least weak-
      sighted) technophilia.

      Note: A complete overview of Transtopian goals, means & values can be
      found in the Principles section. The Enlightenment Test is a summary
      of the same, and the Singularity Club goes into more detail regarding
      the organizational and (other) practical aspects of the Transtopian
      movement. For "radical" socio-politics, see the True Enlightenment
      WebRing page.

      Transtopianism's main message is as simple as it is radical: assuming
      that we don't destroy ourselves first, technological progress will
      profoundly impact society in the (relatively) near future,
      culminating in the emergence of superintelligence and [thus] the
      Singularity. Those who will acquire a dominant position during this
      event, a classical Darwinian struggle, will likely reap enormous
      benefits; they will become "persons of unprecedented physical,
      intellectual, and psychological capacity. Self-programming, self-
      constituting, potentially immortal, unlimited individuals". Those
      who for whatever reason won't participate or fall behind will face a
      very uncertain future, and quite possibly extermination (or worse?
      With mind uploading, the concept of Hell, i.e. infinite, hideous
      torture, becomes a real possibility).

      Wealth and power can not only make the present considerably
      more "interesting"; they're also a logical imperative for those who
      are serious about realizing their Transhuman hopes and dreams. It is,
      after all, nearly always the rich & powerful that have access to new
      technologies before anyone else. Unlike, for example, cars, TV sets
      and cell phones, the technologies that will enable people to
      become/create our evolutionary successors aren't likely to eventually
      trickle down to the general public. Godhood won't be for sale at your
      local supermarket 30-80 years from now for pretty much the same
      reasons why one can't buy nukes in gun shops, even though the basic
      design is now more than half a century old (hell, you can't even
      legally purchase a wussy machine gun, let alone more advanced
      military hardware, in most countries). Bottom line: the really
      powerful, dangerous stuff is always restricted to (self-proclaimed)
      elite groups, and nothing is more powerful and potentially dangerous
      than Superintelligence. Even all-out WW3 would be peanuts compared to
      the damage that an SI (with its arsenal of advanced nanoweapons and
      God knows what else) could inflict. By "ascending" you don't obtain
      the ultimate weapon -- you become it. It doesn't seem very likely, or
      indeed sane, that SIs will freely hand out such power to anyone who
      asks for it.

      Thus, it logically follows that we should put great effort into
      acquiring a good starting position for the Singularity, which
      includes things like gathering as much wealth as possible, keeping
      abreast of the latest technological developments, and implementing
      them to become more efficient and powerful individuals. Our primary
      interim subgoal is (must be) becoming part of the economic &
      technological elite. The Players. Our primary interim supergoal is
      (obviously) to become inorganic, "digital", which will open the door
      to virtually unlimited additional enhancements, and ultimately
      godhood itself.

      This is a tall order indeed, and to increase one's chances of
      success, cooperation with like-minded individuals is essential
      (unless, perhaps, one happens to be a tech-savvy billionaire
      already, but I think it's fairly safe to assume that whoever reads
      this isn't one of those). Hence the Transtopian movement and its
      Singularity Club, an "elite" mutual aid association for those who
      want to fully enjoy the present while preparing for the radical
      future. Our radical future. This is the only such (known) group in
      existence. The WTA, Extropy Institute, and various national/regional
      Transhumanist groups have a social and meme-spreading function.
      Useful? Certainly. Adequate? Not at all. They do not specifically aim
      to improve their members' social, mental, physical, financial etc.
      situation (sure, occasionally it does happen, but the effect is
      usually very limited, short-lived, and not part of a clear strategy).
      And as for working towards early ascension...Well, there is, of
      course, the Singularity Institute for Artificial Intelligence, but
      their goal is "merely" the creation of a self-enhancing,
      superintelligent AI (which, if and when it becomes operational, could
      do anything from just sitting there to killing us all), not personal
      enhancement. The Foresight Institute, finally, though it does
      genuinely help to advance the development of nanotechnology, and
      seems fairly well-balanced concerning the potential threats and
      benefits of their work, is ultimately yet another public
      clearinghouse which depends on the wisdom and cooperation of the
      Establishment and/or "the masses" (uh, oh!) to realize its vision of
      a prosperous, peaceful nano-enhanced society. This serious,
      potentially fatal flaw is common to all Transhumanism-related special
      interest groups, including cryonics organizations and such.

      To each his own. We salute the Transhuman, Extropian, and
      Singularitarian giants on whose shoulders we stand, but we have seen
      further and now it's time to move on. Transtopians don't like to
      depend on the whims of society, governments, fate or luck for their
      current and future well-being; they want to take control of their
      destiny, make their own rules, their own "luck". Of course, the
      chances of success may be slim, but since there's nothing to lose
      (steady degeneration and death are the default for all living things)
      and a universe to gain, why not give it a try? Given the
      circumstances, it is the rational thing to do. As the saying goes:
      shoot for the moon, even if you miss you'll land among the stars
      (well, maybe it will just be your disassembled molecules that will
      land among the stars, but you get the idea; it never hurts to try).
      When crossing uncharted territory, it is wiser to travel in groups.
      And even if the road leads to a dead-end, there are still plenty of
      useful and pleasant things we can do along the way. Join us at
      http://www.technocalypse.org/
    • Keith Wiley
      Message 2 of 4 , Nov 2, 2001
        Do you ever get the sensation that a blazing flash of lightning has just
        blurred through your living room in a chaotic scramble of mayhem,
        leaving upturned sofas, disarrayed trinkets, and a squall of assorted
        paperwork wafting around on lazy breezes in the aftermath?

        That's how I feel after the Transtopia email came flashing through our
        mailing list like a Tasmanian Devil in heat.

        :-) Cheers guys!

        ________________________________________________________________________
        Keith Wiley kwiley@...
        http://www.unm.edu/~keithw http://www.mp3.com/KeithWiley

        "Yet mark his perfect self-contentment, and hence learn his lesson,
        that to be self-contented is to be vile and ignorant, and that to
        aspire is better than to be blindly and impotently happy."
        -- Edwin A. Abbott, Flatland
        ________________________________________________________________________
      • Randal Koene
        Message 3 of 4 , Nov 5, 2001
          Hi Keith,

          Yeah, my feeling exactly... and this isn't the first time that's happened
          on our list. It's kind of funny as long as it doesn't happen too often.
          Every once in a while, a transhumanist or whatever they wish to call
          themselves in this particular case posts a generic diatribe to our list -
          and probably a host of others - with a lot of humdrum and no new info.
          Have a great day!

          Cheers,
          Randal

          _______________________________________________________________________
          RANDAL A. KOENE
          Computational Neurophysiology Laboratory, Boston University
          minduploading.org, rak.minduploading.org, www.bu.edu/people/hasselmo
          randalk@..., (617)-353-1431/1433
          _______________________________________________________________________

          On Fri, 2 Nov 2001, Keith Wiley wrote:

          > [...]
        • Eugene Leitl
          Message 4 of 4 , Nov 5, 2001
            This is off-topic. Please stop spamming multiple lists with your message.
            If you continue doing this (this is the second or third incident of
            spam), I will make new subscribers mute.

            On Sat, 3 Nov 2001 super_intelligence1234@... wrote:

            > [...]

            -- Eugen* Leitl <a href="http://www.lrz.de/~ui22204/">leitl</a>
            ______________________________________________________________
            ICBMTO: N48 04'14.8'' E11 36'41.2'' http://www.lrz.de/~ui22204
            57F9CFD3: ED90 0433 EB74 E4A9 537F CFF5 86E7 629B 57F9 CFD3