
Re: Long post on: Immortality, Singularity, Religiosity, & Zen

  • Kevin D. Keck
    Message 1 of 21, Apr 23, 2004
      I haven't gotten back to this religion thread because I've been swamped, not
      because I didn't have anything else to add.

      If you go back and look, some of you might be surprised to realize that I did
      not in fact profess to be on either side of the "science is just another
      religion" debate, because in fact I'm not on either side. I do appreciate
      Chris Phoenix's exuberant confirmation of my up to that point thinly
      supported assertions about one of the common stances, and I hope he won't
      attribute too malicious an intent to my deliberately delayed confession of
      sympathy for both viewpoints.

      The problem, as it so often is, is that the sides are talking right past each
      other. Of course it's not really true that science is just another belief
      system, and it is true that some of the people on the other side of academia
      mean to flatly deny this. But there is another contingent which will concede
      that science is in fact a more sophisticated and theoretically distinguished
      belief system, while still insisting that this distinction is not very
      significant. And their point is much more than just that scientific
      "knowledge" is always by definition both contingent and incomplete -- the
      much bigger point is that much of our "reality", including particularly most
      of the morally and politically important aspects of it, is socially
      constructed, and thus in a much more profound sense our reality really is
      _not_ objective.

      Ironically, in fact, the more advanced our scientific and technological
      knowledge becomes, the less and less relevant it becomes to moral and
      political issues. While on the one hand technology often seems to take issues
      out of the hands of legislators, by distributing capabilities to such an
      extent as to make them beyond governmental control, and on the other hand it
      produces issues the political system and culture are ill-prepared to deal
      with, both of these are merely the immediate, incremental effects. The
      overarching broader effect is to successively remove scientific and
      technological constraints on the range of feasible political, economic, and
      cultural systems people can adopt, thereby putting a progressively greater
      demand on our collective capacity for imagination, courage, and discretion in
      order to successfully determine and follow wise paths, rather than go down
      very dystopian ones.

      Stewart Brand made a similar observation in his book, "How Buildings
      Learn" -- the most successfully adaptable buildings turn out to be those with
      constraints, such as support columns, which greatly reduce the "design space"
      which can be considered when contemplating modifications. (Perhaps
      professional architects could do more with fewer constraints, but most
      building dwellers are not architects themselves, so apparently less quite
      often turns out to be more.) I think many video game critics (and some movie
      critics) have also similarly suggested that games (or movies) were better
      back when designers (or directors) couldn't fall back on eye-popping graphics
      (or stunts & f/x, or sex and violence) to keep players (audiences)
      entertained. And Jaron Lanier is one among several who's voiced the opinion
      that while the capabilities of software have in fact gone up as hardware has
      improved, they have not maintained the same pace of improvement, largely
      because the quality of the _code_ has at the same time gone very much
      downhill.

      This doesn't bode well for our ability to "cope", as it were, with the
      continually expanding possibilities that accelerating scientific and
      technological progress will continue to bring us. JFK observed that we had
      the power to eliminate hunger in the world back in the '60s, and yet it still
      hasn't happened. Instead our politicians spend their time, for example,
      facilitating ever greater abuse of increasingly counter-productive IP laws to
      hinder all kinds of things from online music sharing to the provision of
      patented drugs to third world patients. Both are due not to technological
      constraints but rather to political ones. I don't want to preach to the choir
      so I'll stop there, but I'm sure all of you have at least a couple of other
      widely-recognized problems that come to mind which society is either failing
      to address or is continuing to cause itself because of "political
      constraints".

      On a related theme, "Mark L."'s musings on the likely nature of a native or
      innate philosophy in AIs actually made something click for me though, in a
      moment of tiredness when I let my guard down enough to truly consider it. One
      of the memes Jaron Lanier puts forward in his Half a Manifesto is "cybernetic
      totalism", which is basically the digerati version of George Soros's "market
      fundamentalism" schtick. It's also a fair definition of the philosophy that I
      think could fairly be considered the obvious predisposition, if there is any,
      of any A.I. system. It is essentially a perfection of the reductionist
      hypothesis, holding that not only is reductionism valid, but that perception
      _is_ reality, and that recognizing this "fact" is essential to true
      understanding and sound moral judgment. The problem, of course, is it's
      exactly the same type of ends-trump-means philosophy which produced the
      devastating seduction of much of the world by Nazism, fascism, and despotic
      communism last century. This philosophy _is_ dangerous, to an even greater
      extent than Lanier tried to explain.

      Fortunately (for my own sanity), I'm still in the John Holland camp (as he
      articulated it at the 2000 Stanford "Spiritual Robots" debate, shortly after
      the publication of Bill Joy's infamous Wired article), and don't believe the
      emergence of A.I. will be nearly as automatic, inevitable, or early as
      Kurzweil and company expect, so I'm not terribly worried about it. Barring,
      of course, the frightening possibility of Lanier's inversion hypothesis being
      validated, and producing a perceived success by moving the goalposts. If we
      let this happen, then we will in fact create our own dystopia, but only by
      (at least implicit) choice, not due to any force of technological
      determinism.



      I'll try to elaborate my thoughts on Zen and the self-other dichotomy soon as
      well.
      --
      Kevin D. Keck
    • Kevin Keck
      Message 2 of 21, Apr 24, 2004
        @#$%&! That wasn't how it appeared in the so-called
        "preview". (One guess how soon I'll use the Yahoo!
        Groups web posting form again.)

        This one should come out properly:


        I haven't gotten back to this religion thread because
        I've been swamped, not because I didn't have anything
        else to add.

        If you go back and look, some of you might be
        surprised to realize that I did not in fact profess to
        be on either side of the "science is just another
        religion" debate, because in fact I'm not on either
        side. I do appreciate Chris Phoenix's exuberant
        confirmation of my up to that point thinly supported
        assertions about one of the common stances, and I hope
        he won't attribute too malicious an intent to my
        deliberately delayed confession of sympathy for both
        viewpoints.

        The problem, as it so often is, is that the sides are
        talking right past each other. Of course it's not
        really true that science is just another belief
        system, and it is true that some of the people on the
        other side of academia mean to flatly deny this. But
        there is another contingent which will concede that
        science is in fact a more sophisticated and
        theoretically distinguished belief system, while still
        insisting that this distinction is not very
        significant. And their point is much more than just
        that scientific "knowledge" is always by definition
        both contingent and incomplete -- the much bigger point
        is that much of our "reality", including particularly
        most of the morally and politically important aspects
        of it, is socially constructed, and thus in a much
        more profound sense our reality really is _not_
        objective.

        Ironically, in fact, the more advanced our scientific
        and technological knowledge becomes, the less and less
        relevant it becomes to moral and political issues.
        While on the one hand technology often seems to take
        issues out of the hands of legislators, by
        distributing capabilities to such an extent as to make
        them beyond governmental control, and on the other
        hand it produces issues the political system and
        culture are ill-prepared to deal with, both of these
        are merely the immediate, incremental effects. The
        overarching broader effect is to successively remove
        scientific and technological constraints on the range
        of feasible political, economic, and cultural systems
        people can adopt, thereby putting a progressively
        greater demand on our collective capacity for
        imagination, courage, and discretion in order to
        successfully determine and follow wise paths, rather
        than go down very dystopian ones.

        Stewart Brand made a similar observation in his book,
        "How Buildings Learn"�the most successfully adaptable
        buildings turn out to be those with constraints, such
        as support columns, which greatly reduce the "design
        space" which can be considered when contemplating
        modifications. (Perhaps professional architects could
        do more with fewer constraints, but most building
        dwellers are not architects themselves, so apparently
        less quite often turns out to be more.) I think many
        video game critics (and some movie critics) have also
        similarly suggested that games (or movies) were better
        back when designers (or directors) couldn't fall back
        on eye-popping graphics (or stunts & f/x, or sex and
        violence) to keep players (audiences) entertained. And
        Jaron Lanier is one among several who's voiced the
        opinion that while the capabilities of software have
        in fact gone up as hardware has improved, they have not
        maintained the same pace of improvement, largely
        because the quality of the _code_ has at the same time
        gone very much downhill.

        This doesn't bode well for our ability to "cope", as
        it were, with the continually expanding possibilities
        that accelerating scientific and technological
        progress will continue to bring us. JFK observed that
        we had the power to eliminate hunger in the world back
        in the '60s, and yet it still hasn't happened. Instead
        our politicians spend their time, for example,
        facilitating ever greater abuse of increasingly
        counter-productive IP laws to hinder all kinds of
        things from online music sharing to the provision of
        patented drugs to third world patients. Both are due
        not to technological constraints but rather to
        political ones. I don't want to preach to the choir so
        I'll stop there, but I'm sure all of you have at least
        a couple of other widely-recognized problems that come
        to mind which society is either failing to address or
        is continuing to cause itself because of "political
        constraints".

        On a related theme, "Mark L."'s musings on the likely
        nature of a native or innate philosophy in AIs
        actually made something click for me though, in a
        moment of tiredness when I let my guard down enough to
        truly consider it. One of the memes Jaron Lanier puts
        forward in his Half a Manifesto is "cybernetic
        totalism", which is basically the digerati version of
        George Soros's "market fundamentalism" schtick. It's
        also a fair definition of the philosophy that I think
        could fairly be considered the obvious predisposition,
        if there is any, of any A.I. system.
        It is essentially a perfection of the reductionist
        hypothesis, holding that not only is reductionism
        valid, but that perception _is_ reality, and that
        recognizing this "fact" is essential to true
        understanding and sound moral judgment. The problem,
        of course, is it's exactly the same type of
        ends-trump-means philosophy which produced the
        devastating seduction of much of the world by Nazism,
        fascism, and despotic communism last century. This
        philosophy _is_ dangerous, to an even greater extent
        than Lanier tried to explain.

        Fortunately (for my own sanity), I'm still in the John
        Holland camp (as he articulated it at the 2000
        Stanford "Spiritual Robots" debate, shortly after the
        publication of Bill Joy's infamous Wired article), and
        don't believe the emergence of A.I. will be nearly as
        automatic, inevitable, or early as Kurzweil and
        company expect, so I'm not terribly worried about it.
        Barring, of course, the frightening possibility of
        Lanier's inversion hypothesis being validated, and
        producing a perceived success by moving the goalposts.
        If we let this happen, then we will in fact create our
        own dystopia, but only by (at least implicit) choice,
        not due to any force of technological determinism.



        I'll try to elaborate my thoughts on Zen and the
        self-other dichotomy soon as well.
        --
        Kevin D. Keck
      • Chris Phoenix
        Message 3 of 21, Apr 24, 2004
          For another approach to the problem of science, rationality, and the
          real world, I encourage anyone following this discussion to read my
          recent Extropy-chat post:
          http://www.lucifer.com/pipermail/extropy-chat/2004-April/005790.html

          I begin by talking about rationality, building a case that the validity
          of thoughts must be considered within their particular context. Usually,
          the context is only within our heads, but we have the cognitive error of
          believing that it extends much farther. If someone else's thought makes
          no sense, it's probably because their context is different. Likewise,
          your thoughts, however rational, are generally unlikely to be
          trustworthy if applied too widely.

          Then I discuss the consistent real world, and how it exists but we have
          trouble addressing it even with science. I'll quote myself rather than
          trying to restate:

          "It's tempting to think that the world is a single context that
          everything can be compared to. But this is equivalent to reductionism.
          There are lots of things in the world that can be understood far more
          completely by approximation than by first principles. For example,
          human psychology has some really weird phenomena (phobias, optical
          illusions, passive-aggressive behavior, etc) that a study of physics
          will not help you understand. To a psychoanalyst or a politician, or
          even a medical doctor, a study of shamanism will have more concrete
          utility than a study of electromagnetism.

          In fact, when dealing with people, not studying at all--not trying to
          form postulates and practice formal thought, but just going on instinct,
          intuition, and experience--may be more effective. This is because
          people are incredibly complex, and we have a strong evolved non-rational
          toolset to help us deal with them. In addition to people, things like
          ecology may still be too complex for rational thought to improve on
          accumulated heuristics, because we simply don't yet know the postulates
          and methods. And then there are things like immunology and cosmology
          where none of our tools really work yet, so the only way to approach
          them is by study and rationality. Eventually, we can expect that study
          and rationality will encompass psychology (including religion and
          parapsychology) and ecology and everything else as well.

          You mentioned the undesirability of chaos. The alternative to chaos is
          the belief that a self-consistent real-world context exists. But even
          though it exists, we can't access it directly. Science is motivated by
          the desire to build conceptual contexts that map to the real-world one.
          Its methods include cataloging (an underrated skill these days),
          categorization, experiment, creativity, criticism, and more. In some
          sub-contexts like electromagnetism, scientists have been very
          successful; the mapping is very close. In protein folding, the end is
          in sight. Pedagogy, psychology, and oncology are quagmires, though
          oncology may be ready for a synthesis.

          But back to the practice of science: the trouble is that scientists,
          like everyone else, are prone to the illusion that their chosen context
          extends everywhere. Let's be clear: I don't mean that scientists should
          leave room for the paranormal or magical. They should not. I mean that
          chemists should leave room for physics, and physicists should leave room
          for psychology, and psychologists should leave room for chemistry.
          Otherwise you get absurdities like chemists declaring that Drexler's
          physics and mechanics work is worthless, when it's obvious they don't
          even understand it.

          One thing I never see addressed in discussions of rationality: How does
          a rational thinker know when to keep their ears open and their mouth
          shut? Obviously, the belief that a rational thinker will be an expert
          in everything is irrational. But it's far too common. Scientists are
          slowly learning enough to be rational in certain limited contexts. And
          in a few glorious areas, those contexts have spread enough to merge.
          But anyone who aspires to rationality should learn from the
          overconfidence of scientists who, secure in their rationality, talk
          nonsense outside their field. That's as big a mistake--I would argue
          that it's the same mistake--as religious people talking nonsense while
          feeling secure in their irrationality. The mistake is assuming that
          their mental context extends farther than it actually does.

          And scientists and rationalists have even less excuse than
          irrationalists. If as great a scientist as Lord Kelvin could be wrong
          about something as mundane and technical as heavier-than-air flight,
          surely the rest of us should be extremely cautious when talking outside
          our field of study--or even inside it, for many fields. But no, we keep
          making the same mistake: our context defines our universe, and
          everything we see must be made to conform. Appeals to rational thought,
          in the end, are usually just another way to rationalize this process."

          Chris

          P.S. Note the very awkward formatting of your post; please correct that.

          P.P.S. I should have cited a source in the Extropy-chat article: the
          mundane explanation for the "loaves and fishes miracle" comes from a
          book called "The Robe."

          Kevin D. Keck wrote:

          > I haven't gotten back to this religion thread because I've been
          > swamped, not because I didn't have anything else to add.
          >
          > If you go back and look, some of you might be surprised to realize
          > that I did not in fact profess to be on either side of the "science
          > is just another religion" debate ...


          --
          Chris Phoenix cphoenix@...
          Director of Research
          Center for Responsible Nanotechnology http://CRNano.org
        • J. Andrew Rogers
          Message 4 of 21, Apr 24, 2004
            On Apr 24, 2004, at 12:41 PM, Chris Phoenix wrote:
            > I begin by talking about rationality, building a case that the validity
            > of thoughts must be considered within their particular context.
            > Usually,
            > the context is only within our heads, but we have the cognitive error
            > of
            > believing that it extends much farther. If someone else's thought
            > makes
            > no sense, it's probably because their context is different. Likewise,
            > your thoughts, however rational, are generally unlikely to be
            > trustworthy if applied too widely.


            It is probably worth pointing out that one can prove this
            mathematically for algorithmically finite systems (which includes a
            subset of non-finite state machines in addition to all finite state
            machines). In fact, the mathematical expression of this is one of the
            more useful theorems of algorithmic information theory. An interesting
            theoretical direction of this is that one can compute the limits of
            correctness for a particular model in a particular context (the
            "predictive limit" of a finite model).

            Or to put it in simpler terms: In any finite subcontext, rationality
            does not imply correctness, and correctness does not imply rationality.
            But it is theoretically possible to compute the maximum probability
            that a rational model is also a correct model. For some arbitrary
            brain/machine, the actual probability will be of the form:

            0 < x < predictive limit < 1

            where "x" is the actual probability that some rational model is correct
            in some context, and the predictive limit is the maximum theoretical
            probability that a model might be correct in that context. Why there
            is often a significant difference between "x" and the predictive limit
            for intelligent systems is a complex topic that I'll simply avoid.
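
            To make that inequality concrete, here is a minimal Python sketch --
            my own illustration with made-up names and numbers, not a result from
            the theory itself. A toy binary process with irreducible noise plays
            the role of the context, 1 - NOISE plays the role of the predictive
            limit, and the measured hit rate of a deliberately crude model plays
            the role of "x":

                import random

                random.seed(0)

                # Toy world: the next bit copies the previous bit, but flips with
                # probability NOISE. No predictor can beat 1 - NOISE on average,
                # so 1 - NOISE stands in for the "predictive limit" of this context.
                NOISE = 0.2
                PREDICTIVE_LIMIT = 1.0 - NOISE

                def generate(n):
                    bits = [0]
                    for _ in range(n - 1):
                        bits.append(bits[-1] ^ (1 if random.random() < NOISE else 0))
                    return bits

                def crude_model(history):
                    """A 'rational' but limited model: predict the majority bit seen so far."""
                    return 1 if sum(history) * 2 > len(history) else 0

                def accuracy(model, bits):
                    hits = sum(model(bits[:i]) == bits[i] for i in range(1, len(bits)))
                    return hits / (len(bits) - 1)

                bits = generate(2000)
                x = accuracy(crude_model, bits)  # this model's actual hit rate
                print(f"0 < x = {x:.3f} < predictive limit = {PREDICTIVE_LIMIT} < 1")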

            Humans have an extremely poor grasp of the predictive limits of the
            model of the universe that they build in their brains. Not only are
            many (most?) people unaware that rationality does not imply
            correctness, but just about everyone is also oblivious to the
            predictive limits
            of their rationality with respect to correctness. There are many
            things in the universe that can only be modeled to such low predictive
            limits in the human brain that one would have to be skeptical of any
            claim as to the correctness of those models.

            j. andrew rogers
          • Chris Phoenix
            Message 5 of 21, Apr 24, 2004
              You mean there's theoretical justification for what I said? Cool! Is
              it thought to extend to systems that are not algorithmically finite as
              well? What about algorithmic approximations to non-A.F. systems? Can
              you give me a reference or two for this?

              Chris

              J. Andrew Rogers wrote:

              > On Apr 24, 2004, at 12:41 PM, Chris Phoenix wrote:
              >
              >>I begin by talking about rationality, building a case that the validity
              >>of thoughts must be considered within their particular context.
              >>Usually,
              >>the context is only within our heads, but we have the cognitive error
              >>of
              >>believing that it extends much farther. If someone else's thought
              >>makes
              >>no sense, it's probably because their context is different. Likewise,
              >>your thoughts, however rational, are generally unlikely to be
              >>trustworthy if applied too widely.
              >
              >
              >
              > It is probably worth pointing out that one can prove this
              > mathematically for algorithmically finite systems (which includes a
              > subset of non-finite state machines in addition to all finite state
              > machines). In fact, the mathematical expression of this is one of the
              > more useful theorems of algorithmic information theory. An interesting
              > theoretical direction of this is that one can compute the limits of
              > correctness for a particular model in a particular context (the
              > "predictive limit" of a finite model).
              >
              > Or to put it in simpler terms: In any finite subcontext, rationality
              > does not imply correctness, and correctness does not imply rationality.
              > But it is theoretically possible to compute the maximum probability
              > that a rational model is also a correct model. For some arbitrary
              > brain/machine, the actual probability will be of the form:
              >
              > 0 < x < predictive limit < 1
              >
              > where "x" is the actual probability that some rational model is correct
              > in some context, and the predictive limit is the maximum theoretical
              > probability that a model might be correct in that context. Why there
              > is often a significant difference between "x" and the predictive limit
              > for intelligent systems is a complex topic that I'll simply avoid.
              >
              > Humans have an extremely poor grasp of the predictive limits of the
              > model of the universe that they build in their brains. Not only are
              > many (most?) people unaware that rationality does not imply
              > correctness, just about everyone is oblivious to the predictive limits
              > of their rationality with respect to correctness. There are many
              > things in the universe that can only be modeled to such low predictive
              > limits in the human brain that one would have to be skeptical of any
              > claim as to the correctness of those models.
              >
              > j. andrew rogers

              --
              Chris Phoenix cphoenix@...
              Director of Research
              Center for Responsible Nanotechnology http://CRNano.org
            • J. Andrew Rogers
              Message 6 of 21, Apr 25, 2004
                On Apr 24, 2004, at 2:44 PM, Chris Phoenix wrote:
                > You mean there's theoretical justification for what I said? Cool! Is
                > it thought to extend to systems that are not algorithmically finite as
                > well? What about algorithmic approximations to non-A.F. systems? Can
                > you give me a reference or two for this?


                It is only true for algorithmically finite cases, but since this seems
                to cover all likely "real" spaces, you get a lot of bang for that buck
                as a pragmatic matter. In terms of references, they are sparse but
                what you are looking for is probably "non-axiomatic reasoning systems",
                and Pei Wang's work in this area is probably the best and most
                accessible on the Internet. There has been an interesting bit of
                activity over the last year or two toward the unification of the fields
                of probability theory, information theory, computational theory,
                reasoning/logics, and a couple other bits and pieces as different
                facets of a single elegant universal conceptual model for
                algorithmically finite systems. My theoretical point comes from some
                of the bridgework that is unifying reasoning logics and algorithmic
                information theory. There isn't a lot out there; the first mentions of
                this general result are implied in some papers from the early '90s on
                universal predictors and Pei Wang's stuff, but we've really only worked
                it all out in the last couple of years (and it is still a work in progress).

                Finite versus Infinite mathematics:

                Algorithmically infinite systems are actually the standard assumption
                for classic theory in these areas, and it is of limited utility. That
                is how you end up with things like standard first-order logics. The
                problem is that we missed a lot because of this. Some very interesting
                things emerge when you restrict the mathematics purely to the finite
                case, often in areas that were considered mathematically "undefined" in
                the general case (mostly because the inclusion of infinite parameters
                force an undefined value for theorems and functions that have rich,
                interesting, and definable properties when restricted to purely finite
                parameters).

                As for what "algorithmically finite" means:

                The classic "finite state" is an inadequate system descriptor for the
                above area of mathematics, and the term "algorithmically finite"
                denotes something distinct from "finite state", though there are
                conceptual similarities. I actually coined the distinction a couple
                years ago. I used to regularly argue with a math-savvy retired
                Christian lady about the nature of religion and God in a mathematical
                context -- I've developed a lot of good pure theory angles in the
                course of trying to prove mathematical points to her, best exercise of
                theory I ever got. She made the poignant observation that the apparent
                algorithmic finiteness of the universe did not seem to have any obvious
                dependency on the universe actually being a finite state machine in the
                classical sense. And she seemed to have a point after I thought about
                it for a bit, which I later formalized.


                "Algorithmically finite" means (very roughly) a system that can only
                express finite intrinsic Kolmogorov complexity in finite time. A
                properly rigorous definition is fairly difficult to express well, and
                tonight is not that night. Interesting things that fall out of this
                are:

                1.) This is inclusive of all finite state systems.
                2.) The effective Kolmogorov complexity of these systems can vary in
                time.
                3.) This is inclusive of some infinite state systems.

                The second property looks mundane, but is actually relatively
                interesting. This essentially replaces an important given constant in
                classic computational theory with a function. Since expressible
                intelligence also varies with Kolmogorov complexity, this has
                interesting implications. It is worth noting that this can also break
                the assumptions of some theorems from classic theory.
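
                A small Python sketch of that second property -- my own toy
                construction, not Rogers' formalism. Kolmogorov complexity is
                uncomputable, but compressed size gives a computable upper bound
                on it, so compressing successive prefixes of a system's state
                trace shows an "effective complexity" that changes over time;
                the two-phase toy system below is invented purely for the
                example:

                    import random
                    import zlib

                    def complexity_bound(data: bytes) -> int:
                        """Compressed size: a crude, computable upper bound on Kolmogorov complexity."""
                        return len(zlib.compress(data, 9))

                    random.seed(1)
                    trace = bytearray()
                    for t in range(4000):
                        if t < 2000:
                            trace.append(0)                      # simple, repetitive phase
                        else:
                            trace.append(random.randrange(256))  # incompressible-looking phase

                    for cut in (1000, 2000, 3000, 4000):
                        print(cut, complexity_bound(bytes(trace[:cut])))
                    # The bound stays nearly flat over the first half of the trace, then
                    # grows roughly linearly: the effective complexity varies in time.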

                The third property is interesting in that you can have infinite state
                systems that are mathematically bound to express the computational
                properties of finite systems over any finite span of time. An example
                of such a system would be a system with a countably infinite state
                fabric (say, at the resolution of the Planck length) and a finite bound
                on information propagation (say, the speed of light), resulting in a
                system which would be mathematically required to do things like express
                an analog of the Laws of Thermodynamics that falls out of algorithmic
                information theory. While such a system is nominally infinite state,
                it is theoretically limited to the expression of finite algorithms with
                a Kolmogorov complexity limit that varies in finite time.
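
                As a hedged illustration of that third property -- a toy
                construction of my own, not the formal definition -- a
                one-dimensional automaton on an unbounded lattice with a
                nearest-neighbor update rule has a built-in "speed of light":
                after t steps any given cell depends on at most 2t + 1 initial
                cells, so finite time bounds the complexity expressible at any
                one site even though the lattice itself is infinite.

                    # Rule-90-style automaton on an unbounded lattice; cells not stored
                    # in the dict are implicitly 0, so the lattice is conceptually infinite.
                    def step(cells: dict) -> dict:
                        touched = set()
                        for i in cells:
                            touched.update((i - 1, i, i + 1))
                        get = lambda i: cells.get(i, 0)
                        return {i: get(i - 1) ^ get(i + 1) for i in touched}

                    state = {0: 1}  # one excited cell, everything else empty
                    for t in range(1, 6):
                        state = step(state)
                        live = [i for i in sorted(state) if state[i]]
                        # After t steps, cell 0 can only have been influenced by cells -t..t:
                        print(f"t={t}: light cone size <= {2 * t + 1}, live cells {live}")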

                From a functional standpoint, I would say that the AF model is more
                general than the classic finite state machine model.

                Okay, it's past my bedtime,

                j. andrew rogers