
Re: [nanotech] Singularity and moral outrage

  • Chris Phoenix
    Message 1 of 100 , Aug 3, 2001
      Mark Gubrud wrote:
      > Chris Phoenix wrote:
      > >
      > > Mark Gubrud wrote:
      > > > Even if some of Drexler's proposals are flawed ...
      > > Specifics, please?
      > Nope. On the whole, I believe that K. Eric Drexler and Christine
      > Peterson are intellectuals of the highest caliber and their books are
      > extremely well-written, .... I do see a lot to criticize in their
      > writings, .... Let me just ask, do you believe that every one of
      > Eric's proposals is flawless and that Chris is dead-on right about
      > absolutely everything?

      Obviously not. I'm sure that there are errors in there. But I don't
      know where they are. We would all benefit from some well-informed
      technical criticism of their work. I've read Nanosystems and didn't see
      anything to criticize (as long as the reader remembers that much of it
      is intended to establish lower bounds, not propose useful devices). EoC
      obviously has speculations that are unlikely to come true as described,
      but the basic message of "Nanotech implies self-replication implies very
      powerful technology" seems quite sound. So it goes...

      > However, as it stands, in my
      > opinion, the feasibility of the self-replicating diamondoid assembler
      > has neither been proven nor disproven. K. Eric Drexler and Ralph
      > Merkle remain the only two authors of any significant studies on
      > this question, and their work collectively amounts to little more
      > than a sketch of the proposed technology.

      True. But barring some fatal flaw that can't be worked around, there's
      no reason to think it won't work, and lots of reason to think it will.
      Economically, the only thing that could out-compete it is a better

      > > > At least, the prospect of immortality
      > > > is dizzying and probably accounts for a lot of the passion and irrationality.
      > >
      > > Gee, it's irrational to want to live a long time? What's the proper
      > > lifespan of a human, Mark? Or to be more specific, what's the cutoff
      > > beyond which someone is no longer human?
      > Your reaction to my very mild comments is a typical example of the
      > "passion and irrationality" I referred to.

      Considering that if you were able to ban the technologies you want to,
      you would likely reduce my lifespan by several hundred years at least, I
      think your "mild comments" have some hidden teeth in them.

      > ... But it gets twisted into some pretty strange forms, such as the
      > idea that having a computer simulate you would somehow be equivalent
      > to going on living. That is not only irrational, it is patently
      > illogical, and founded on disguised superstition.

      Hm. I'm curious what you think of Michael Korns' idea of "inloading".
      You add artificial neurons to an existing brain, and let them learn what
      to do from the existing neurons. Eventually they know enough to act as
      the primary brain, and it doesn't matter if the original ones die over
      time. Assuming that an artificial neuron can simulate a brain neuron
      (note I don't specify the tech), and given that our brains are
      continually adding and killing neurons already, at what point would an
      "inloaded" person stop being human? (And please don't say "When the
      first neuron is added"...)
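      The gradual-replacement idea above can be caricatured in a toy sketch
      (all names here are invented for illustration; it assumes, as the
      paragraph does, that an artificial unit can learn a neuron's
      input-output behavior — modeled trivially as linear here):

      ```python
      # Toy model of "inloading": a "brain" of units, each a fixed
      # input-to-output mapping. Artificial units learn by observing a
      # biological unit, then take over; overall behavior is checked to
      # stay the same as units are swapped one by one.

      def make_bio_unit(weight):
          # A biological unit: a fixed input-to-output mapping.
          return lambda x: weight * x

      def learn_copy(unit, probes):
          # An artificial unit learns the mapping by observing responses.
          # The mapping is linear here, so one probe recovers the weight.
          learned_weight = unit(probes[0]) / probes[0]
          return lambda x: learned_weight * x

      def brain_output(units, x):
          # Overall behavior: the combined response of all units.
          return sum(u(x) for u in units)

      weights = [0.5, -1.0, 2.0, 3.5]
      units = [make_bio_unit(w) for w in weights]
      baseline = brain_output(units, 10.0)

      # Replace each biological unit with its trained artificial copy,
      # one at a time; behavior is unchanged at every intermediate stage.
      for i in range(len(units)):
          units[i] = learn_copy(units[i], probes=[1.0])
          assert abs(brain_output(units, 10.0) - baseline) < 1e-9

      print(brain_output(units, 10.0))  # → 50.0, same as the all-biological brain
      ```

      The point of the sketch is only that at no single step does the
      system's behavior change, which is what makes the "when did he stop
      being human?" question hard to answer.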

      > I would like not to be facing aging and death within a few decades
      > at most. So if we had a technology that could arrest and reverse the
      > effects of aging, cure any disease and repair the body in the case
      > of accident, then I would be very happy to avail myself of it.

      Uh? Even if it involved modifying your genes?

      > But some people seem so driven by the
      > desire for "immortality" that they are prepared to accept fakes for it.
      > That's very irrational.

      Maybe... lots of people have felt for centuries that they "live on" in
      their children, their works, or their acquaintances' memories. They
      derive great comfort from this. I'd rather have a little more accurate
      record, is all.

      > > My motivation for joining Foresight is 1) to keep up to date on nanotech
      > > (fringe benefit: lots of interesting conversations!) and 2) to help
      > > nanotech happen sensibly. And I have to admit, 3) whatever personal
      > > advantage I can get by being close to some of the people at the cutting
      > > edge.
      > All very reasonable, but what makes you so interested in this stuff,
      > compared with other people?

      Perhaps a better grounding in science than most people, which makes this
      nanotech stuff more real to me? For most people, nanotech is either
      myth or magic. For me, it's technology, and so rather more immediate.

      > > (NOTE: If this sounds like I'm a communist, read it again, because I'm
      > > not.)
      > Don't worry, Chris, I don't think anyone would mistake you for a commie.

      You'd be surprised. People's brains tend to short-circuit when I
      discuss my beliefs and morals about property with them. I don't know
      why. But I have been called a communist even by a certain very
      intelligent senior associate after several hours of discussion.

      > > He is intelligent and knowledgeable--but some of his opinions are
      > > inflexible to the point of fundamentalism.
      > You're pretty black yourself, Mr. Pot.

      My ideas are well-grounded. Yours are inflexible. His are insane. :-)

      > > Even when they don't make sense.
      > What I have said does not make sense to you?

      For example, the one about how the tiniest genetic modification makes
      someone non-human. It's not true biologically, linguistically, or

      > > The fact that he would prefer language-deprived morons with
      > > pure-human genes to happy, healthy humanoids with slightly augmented
      > > genes says it all.
      > Happy, healthy humanoids? Transhumanoids? I say, AVOID THE NOID!

      Not even trans-humanoids. Just people a little bit smarter or healthier
      due to tweaked genes, and/or a little more capable due to implanted
      tools. And I said "humanoids" rather than "humans" only to avoid the
      inevitable argument that an augmented organism could not be human.

      > Since I haven't ever said anything publicly about "language-deprived
      > morons," I should explain that Chris wrote me and asked whether a
      > human child raised in a closet was more human than a nonhuman robot
      > engineer.

      No. I asked whether an engineer with bush-robot hands and a modest
      genetic-based intelligence enhancement was human. I think most of us
      would say that he was. Since the answer is in doubt, it was obviously
      not stated within the question. Geez, sometimes I wonder why I bother.
      As to mental rigidity, at least I know when my position is fringe...

      > Since the answer was stated explicitly within the question,
      > I simply noted Chris' tautology. Human is human, nonhuman is nonhuman,
      > and I'd rather not know what "humanoid" means. But alas, I do know:
      > "humanoid" means accidentally resembling or intentionally simulating
      > a human being. It does not mean "human," but explicitly "nonhuman".

      Wrong again. American Heritage says "adj. Having human characteristics
      or form. n. A being having human form." Webster says "having human form
      or characteristics" and gives <humanoid dentition> as an example of
      usage. Thus "humanoid" carries no connotation of naturalness or
      artificiality. That's why I used it, to avoid an argument on that
      question... so instead you made up this bogus claim about the meaning
      of the word.

      You are a humanoid. So is the augmented engineer. So, tragically, is
      the language-deprived child. But who has more "human form or
      characteristics", the engineer, or the moron?

      Choose what kind of future you want. On the one hand, if we relinquish
      nanotech, our non-sustainable lifestyle will probably cause a population
      and technology crash within 100 years, and we'll end up illiterate
      savages with pure genes and little else. On the other hand, if we use
      nanotech (and/or biotech), our descendants will probably be augmented
      people with many human characteristics including language, emotion, and
      curiosity. Which future would you rather see?

      Chris Phoenix cphoenix@... http://www.best.com/~cphoenix
      Interests: nanotechnology, dyslexia, caving, filk, SF, patent
      reform... Check out PriorArt.org!
    • Martin O. Baldan
      Message 100 of 100 , Oct 31, 2003
        --- In nanotech@yahoogroups.com, Mark Gubrud <mgubrud@s...> wrote:
        Martín Baldán wrote:

        >> That's why I think that the biological definition
        >> of 'human' is not very useful in a moral and
        >> philosophical context.

        >Who's talking about a "biological definition"? I'm talking about
        >us. We are what the word humanity refers to, not any abstraction,
        >not any definition, which can never do more than describe what we
        >are. If there is a final, ultimate reference, a one true
        >"definition" of humanity, it is simply, us. (BTW, ain't nobody
        >here but us humans.) Any dictionary definition you propose must be
        >judged on two criteria: How well it describes us, and how clearly
        >it distinguishes us from everything else.
        "Us" who? You and me? All the members of this mailing list? "Us" is by no means an acceptable definition of "human". It's not even a definition, it's a you-know-what-I-mean kind of call for consensus on what you think are basics. Yes, I know what you mean and I'm not trying to obfuscate. But there's no such thing as a language-independent "reference" for every word. Every language tries to make sense out of the world by dividing it into categories, defining objects, qualities and processes. For evolutionary reasons, most human languages coincide to a great extent in the way they do this, but this is a property of known human languages, not a property of the world.
        For example, have you seen a movie called "The Englishman who Went up a Hill but Came down a Mountain"? The characters argued about an elevation of the terrain near the village. The inhabitants proudly said that it was a mountain, but officially it was a hill. It was too low for the official definition of mountain. Who was right? If it were possible to find the reference of the concept "mountain" then there would be no room for arguments, and definitions would be unnecessary, but this is not the case. People would argue about which objects to include in the reference group of the concept "mountain". Definitions try to break a problematic concept into other concepts on which people agree, so that people can communicate.
        No matter how imperfect and fuzzy your definition of "human being" is, it is good enough for present situations, where homo sapiens is the only intelligent sentient we know of. You and I are human, and so are all the living organisms who belong to the same species as we do, and our ancestors (as long as they can arguably be considered homo sapiens) and our descendants, as long as they can be considered homo sapiens. And, according to your definition, nothing else. If our descendants changed so much (for whatever reasons) that they cannot be considered of the same species as us, then they would not be human, at least not if they have changed too much. Too much by which standards? I guess you would analyse the differences in DNA and other biomolecules, among other things. But these are biological standards; that's why I say you are using a biological definition of "human". Nothing wrong with that, except that it will be useless for lawmaking and moral reasoning if we have to coexist with other intelligent sentients, like aliens, uploads or AIs. Some other concept should be used in those contexts, such as "person", "intelligent sentient" ("sentient" includes non-human animals), "cogitrone" or whatever.

        >> it seems that your willingness to include a
        >> new species of human beings is influenced by safety
        >> criteria (the requirement that they are not dangerous to
        >> plain old human beings). It is tempting to do so, but I
        >> don't think it's a good idea, since it needlessly blurs the
        >> discussion. If they count as 'people' then they do, no matter
        >> how dangerous or evil they may be (Hitler was a person). If
        >> they don't, then they don't, no matter how much you like them.

        >If a new, dangerous species came into existence, we would have
        >reason not to call them human, to emphasize the distinction. You
        >are just arguing about words, Martin, because you seem to think
        >they express categories which have some fundamental metaphysical
        >significance, as if there is a correct answer to the question of
        >whether your hypothetical new species is human or not. That is not
        >the basis of any argument I've ever made. Rather, I object to the
        >abuse of words to deceive people about what things physically are.
        No, that's not my position. I know that "human" is a human concept and, no matter how much time and effort we invest in re-defining it, there's always the possibility that in some context it would be unusable, it would fall apart as described. That's why I insist that, before using a concept, you have to make sure that it is suitable *for that context*, that it stands on its own and doesn't fall apart, not in some absolute, context-independent way, but for the context under discussion. This simply means that we have to agree about the meaning of words before we can use them to discuss ideas. A concept "falls apart" (or dissolves, or call it as you like), in a given context, when people no longer agree on its meaning, because some characteristics that were supposed to be inseparable, actually are not, in that context. "Human", "life", "alive", "dead" and "dying" are words that have to be clearly defined, or redefined, before we can talk about mind uploading, because in that context we don't agree on their meaning.

        >It is a matter of using words to communicate, not to obfuscate.
        >This has nothing to do with the notion of there being Platonic
        >universals or metaphysical categories carved in stone.
        I hope that now it's clear that my insistence on using suitable concepts has much more to do with, say, Popper than with Plato.

        >> Now I expect you to at least admit that in a gradual uploading
        >> process, where the human being slowly alters his brain until it
        >> becomes something very different, but without altering his
        >> thoughts and behaviour, then the 'person' was conscious (and the
        >> same) all the time and just changed from human to ...

        >This "person" you refer to in this sense is just a fiction. In the
        >scenario you describe, one physical human is destroyed, one
        >computer, which you want me to call a person, is created. The
        >intuition that there must at all times be a single "person" which
        >is conscious, is here being exploited to create a mental illusion
        >of soul transfer. This is voodoo you are practicing, my friend.

        >Are we single individuals? Or "societies of mind"? We are single
        >individuals only by reference to our singular bodies and lives.
        >The unity of mind is synthetic; the unity of the human organism is
        >a fact, but it is destroyed in your scenario.
        I think it's the other way around. The unity of the human organism as a physical object, like a chair or a rock, is an illusion. You are exchanging molecules with the environment all the time. There's no physical object called Mark that has been preserved from your birth to your present state. Most, if not all, of the original molecules are scattered in the environment. Only in a biological context does it make sense to identify a group of molecules as an object called a human being and to say that this is the same human being who was born a few years ago.
        On the other hand, no matter which theories of consciousness are accepted, no matter how they explain the workings of my brain, there's something about being conscious that physical theories simply don't address. Call it "subjective experience", "conscious experience", "qualia" or whatever. It's not a dualistic concept, it's a fact, a subjective fact, and it cannot be proved or disproved by any physical theory or experiment. I *know* what it feels like to be aware, and I *guess* that you feel something like that, but I can't be sure, or prove it to you. Theories of mind try to make sense out of this subjective fact. Dualistic physical theories can be disproved, or challenged, by experiments because they postulate a "thinking matter", a "res cogitans" that in some way interacts with the ordinary physical world. Theories of mind are not physical, in the sense that no physical experiment is useful to build consensus on which of them is better. Being compatible with accepted physical theory is a prerequisite for a theory of mind to join the game. After that, all the tools we can use to judge them are philosophical tools such as consistency, simplicity and explanatory power.

        >The material fact is that the human in your scenario dies, slowly
        >and imperceptibly. The voodoo argument is that this can't be the
        >case, because the system continues to function just as a conscious
        >human does, therefore there must be a single "person" (soul) which
        >transfers from one body to another. But this only means the human
        >is prevented from knowing that she is dying.
        Material fact according to which theory? There's no concept of "death" in physics.

        >Nobody said you'd know it if you died. I know this is difficult to
        >understand; it is easier to suspend disbelief instead, and buy
        >into the voodoo that you (following Moravec) are proposing.
        Do you mean that it is possible for a sentient to believe that he's "alive" while actually he's "dead"? It wouldn't be absurd; it would just mean that your concept of "death", philosophically, is no more useful in the context of mind uploading than your concept of "human".

        >> 2) should they be ...
        >> On question #2, I have little doubt that, unless we blow
        >> ourselves up before, some day someone will create an upload, or
        >> many of them.

        >I have little doubt that this is actually impossible by any means
        >besides cutting up the brain, and before even that method becomes
        >possible, it is likely that we will find ways of extending life,
        >restoring health, etc. So I am not so pessimistic about this as
        >you are.
        With molecular nanotechnology I think it would be quite easy, and at least the initial steps would be highly desirable as a means of preserving health against accidents: at first, a molecular framework to reinforce blood vessels in the brain; later, a molecular scaffolding around each neuron's membrane, to protect it and record its state, just in case it has to be repaired. There's nothing problematic about repairing neurons, right? But the knowledge gathered to repair them could also be used to model their behaviour and build artificial neurons that could interact with natural ones without altering their workings. Once this can be done, many people will be tempted to slowly replace their natural neurons with artificial ones. Why? Because when all the neurons are artificial, chemical signalling can be replaced by data buses, water can be drained, and the "brain" (call it what you like; it doesn't matter for this particular argument) can run a million times faster. It can also be stopped, restarted, slowed and easily augmented.
        Yes, I agree that MNT would bring perfect health for unaltered humans before it brings the technical possibility of mind uploading. A few weeks earlier is my bet.

        >> Some of them might get dizzy with ...

        >This is also a bit of a delusion. There is little reason to think
        >an "upload" would be more powerful than humans assisted by
        >technology, including advanced nonhumanoid artificial systems,
        >which will likely exist before "uploads" become possible.
        Everything a human being can design, including servant AIs, can be designed by an upload a million times faster. Sensing, thinking and acting just as humans do but much faster is the very least you can expect of upload capabilities. In comparison, human actions would be as slow as the growth of trees. Humans could try to fight or control uploads by making AIs which are as fast as them, but they would have to give the AIs carte blanche to design their strategies, to design other, faster and better AIs and so on, without asking anything of humans, because there would be no time for that. It would be like trees trying to control beavers by calling lumberjacks for help.

        >It is also socially unhealthy that people are dreaming of
        >achieving power over others by means of "becoming" machines, which
        >in fact would be a form of suicide.
        Personally, the only power over other people I would like to have is the power to prevent them from hurting me. But even that is asking too much. Uploads and AIs would always have reasons to be afraid of other, more powerful uploads and AIs.

        >> ... set of important criteria from the old concept of 'human'
        >> must be used to create a new concept ('person') for moral
        >> discussions.

        >It would be better if it were a word not chosen from the set of
        >words heretofore understood as synonyms for "human being," in
        >order to prevent equivocation and related strategies for the
        >deceptive use of words. Someone proposed "sentient"... or we could
        >invent a new word, such as "cogitrone." Would you have a problem
        >with that? Why? We could agree, for example, that all cogitrones
        >have a right not to be tortured.
        No problem at all. Contrary to what you may think, I'm not obsessed with words. My only objection is that "cogitrone" sounds so funny to me that I don't think I would be able to defend my "cogitrone rights" in court in a serious tone. I prefer "intelligent sentient" or something like that, but won't complain if you use "cogitrone" in your posts.

        >"Humanoid" is also a useful adjective, communicating that
        >something may be like a human in some respects, yet not like a
        >human in other respects, and not ...
        I think that a concept where humans are included would be more useful, but no objection on this.

        >> You have admitted that your concept of 'human' falls apart in a
        >> world where extreme genetic modification, uploads and AIs are a
        >> reality. The same could be said of your concept of 'life',
        >> 'death' or 'dying'. They apply to the present world but they are
        >> useless in some of the described situations.

        >No such world exists, no such situations exist. But if they came
        >into being, yes, they would appear to deprive concepts such as
        >"life" and "death" of meaning, reducing our human world to a dead
        >world of atoms, bits, and operations, devoid of purpose.
        But my point is that, from a moral and philosophical point of view, it doesn't matter whether some day we have to face these situations or not. The mere fact that they are physically possible is enough to challenge our convictions and moral values if they are based on the assumption that these situations are impossible.
        I think that you have a humanistic worldview. You think that, in order to enjoy life, we have to assume that we, humans, unaltered humans, are the center of the universe and the measure of all things. You know that, physically speaking, this is ridiculous. Nothing in the laws of physics or what we observe in nature indicates that there's a special place for us. The universe has no purpose or moral values. Moral values are an illusion, a cozy nest we build for our minds, because the truth is too cold. In your view, we are like children whose happiness depends on believing in Santa Claus, or cartoon characters who can walk in the air defying gravity as long as they don't look down.
        I say, look at your feet. You haven't fallen yet, and you can't walk in the air, so there must be something under your feet to support you. It won't disappear as soon as you look at it.
        I know I'm conscious, I don't have to prove it to myself, it's something previous to my ability to analyse arguments. This feeling is more real than anything I could learn about the physical world. It doesn't matter how much or how little value I place on myself or other people place on me, I'm conscious.
        Now, according to what I've learned, there's something called "the external world" which affects what I feel. My brain is a part of the external world (as defined) because nothing in my conscious experience indicates that manipulating my brain is different from manipulating any other piece of matter. It just seems to be the case that manipulating my brain affects my conscious experience more deeply than manipulating other things does. I'm just trying to find simple and consistent theories about how the external world affects my conscious experience, my "mind".
        I've assumed that other creatures who behave in a way that seems conscious are actually as conscious as I am, in every sense. But uploads and some AIs would exhibit conscious behaviour, while dead humans do not. So, having an unaltered human form has nothing to do with having a conscious behaviour and, in my view, with being conscious.
        There's something else I know about *my* conscious experience: I am the same person I was a moment ago. No physical theory, no philosophical argument could make me think I'm not the same person, because this feeling of "continuity" is exactly what I understand as "being the same person". If someone told me that my molecules were put in place a second ago, I would ask: How did you know where to put each molecule? If they told me they used a computer simulation, then I would conclude that, a second ago, my brain was the computer and my thought was the computer simulation.
        And, once I have assumed that other people, uploads or AIs with a conscious behaviour are as conscious as I am, I find no reason to think that their conscious experiences lack this feeling of continuity. As I assumed that having a conscious behaviour involves being conscious, now I assume that behaving as the same conscious creature involves being the same conscious creature. If an upload modified its "brain" until nothing is left of the original one, or if it simply sent its program to a different hardware, and the new hardware behaved just as if nothing had happened, then I would conclude that I'm talking to the same upload, who changed its brain.
        Now, having accepted that there's nothing special about the human form, that it is just one of the many possible conscious systems, I find no reason to think that replacing the human brain with another system which is conscious and behaves like the original human being is different from replacing a particular kind of upload "brain" with another kind of upload "brain" with the same conscious behaviour.

        >Or, to put it another way, such situations would be hellish,
        >erasing any meaningful distinction between humanity and
        >technology, life and death, depriving life itself of meaning,
        >making a mockery of our most fundamental values. You are quite
        >right about this, Martin. But only if we choose to believe that
        >life and death HAVE no meaning, that there is no difference
        >between natural and artificial, or human and fake.
        But we know that there's a difference between conscious and unconscious. We know we are conscious (at least, I *know* I am) and we *suppose* that this has a lot to do with our brains. We only have to investigate and find out what characteristics of our brains are important for conscious behaviour, which is observable, and assume that conscious behaviour involves being conscious.

        >> So you can't state as a fact that "a destructive uploading
        >> involves the death of the human being" because your concept of
        >> 'death' was not defined for that future ...

        >You just stated it yourself... "destructive uploading".
        Now it's you who is playing with words. Instead of "destructive uploading" I could call it "one step uploading", "fast uploading" or whatever. The names we pick prove nothing about the facts they describe.

        >> That's a fairly good description of the theory of consciousness
        >> I'm using. I don't claim that it is mine but I'm not sure that
        >> it's exactly the one used by a particular author. I don't really
        >> think that the mind is 'produced' by the brain, but there's no
        >> mind without a brain *somewhere*. I tend to believe in the
        >> ontological reality of mathematical objects; actually I'm not a
        >> dualist because I think that what we call 'the physical world'
        >> is but one of the mathematical structures that include conscious
        >> processes (beings).

        >What, then, is substance? What distinguishes the things that exist
        >from those that we only imagine? For example, the elephant in your
        >bedroom? I'm willing to believe that mathematics can be used to
        >describe structures in the universe, but there seems to be a
        >particular universe here, not just an infinite range of logically
        >consistent possibilities.
        Yes, as you said, there's a particular universe here. What this theory states is that there's an infinite number of other universes elsewhere, elsewhere in the mathematical space of logically consistent worlds. The other worlds seem unreal to us, because we don't live in them, but those with conscious inhabitants seem to them just as real as this one seems to us.

        >> (concepts that now are intuitive but in this
        >> situation would fall apart) or re-defining death, possibly
        >> as 'irreversible loss of a particular mind' and then stating that
        >> the uploading process causes 'death', which is deduced from a theory
        >> of how the brain and the mind are related. I don't think you can
        >> build a concept of 'death' which is applicable in this context
        >> without using a concept of 'mind' or something equivalent. And once
        >> you have defined 'mind', you have to pick a theory of how it relates
        >> to the brain.
        >It is encouraging, at least, that you admit that your "theory of
        >consciousness" is as described, because it is something that makes less
        >sense the more you think about it (and I have in fact thought about it,
        >and would have said something like it, some 20 years ago).
        So it seems we have travelled in opposite directions. Some seven years ago my view was pretty much like yours. I accepted that some enhancements could be made to the human brain, but worried about becoming some kind of dead thinking machine if the modifications were too fast or too radical. Mind uploading seemed ridiculous to me. It was my interest in proving it wrong, at least to myself, that made me search for a simple and consistent theory of consciousness. But, to my anguish, it was more difficult than it seemed.
        I couldn't say that our very atoms are the key, because they are replaced all the time. Yes, the replacement is not too fast, but what is the maximum admissible rate, and why? Slow enough for my mind to accept the new molecules and let them join the old ones, kind of "warming" the molecules before using them? Hey, wait, that smacks of dualism (although not exactly) and, anyway, how on earth could you know what that rate is? No experiment can tell.
        So I had to discard the notion that the particular molecules in my brain are what counts, and accept that the way they are ordered is all that matters. But it can't be the relative position of every molecule, which is wildly changing all the time due to the flow of fluids, thermal noise and other factors. There must be some characteristics in the way they are ordered which are stable enough in the face of disturbing factors to encode memory, personality and thought. Otherwise, conscious behaviour (an observed fact) would be impossible. I think that we could find out what these characteristics, these patterns of behaviour, are by studying the brain. We know very little of them at present, but even now it seems that synapses are important, and that the exact position of free-floating molecules is not. Nanotechnology would make it possible to study the brain in as much detail as necessary, though it's not strictly required (freeze-slice-scan would probably be enough).

        >As for your definition of death, suppose Sheila's brain is sliced up and
        >scanned... now, according to me, Sheila is dead, because you had to kill
        >her to get that brain and feed it into your scanner, but according to
        >you, she is still alive because her "mind" has not been
        >lost, whatever that means. Well, now, suppose Sheila had wanted to live
        >on Pluto. Everyone knows Pluto is too cold to support human life, but
        >there are supercomputers powered by nuclear reactors and running human
        >simulations on Pluto, and Sheila, being a complete fool, had agreed to
        >be killed so that her brain pattern could be "uploaded" to the Plutonian
        >Net. So her brain is scanned, the data collected, and it is radioed to
        >Pluto, and the records on Earth erased to prevent questions from being
        >asked that the Uploading Authority does not want to have to answer.
        Records are erased to avoid an accidental running of Sheila's mind program in two different places, which would cause the existence of two different Sheilas in this world. No philosophical problem, just an undesired outcome. It would be much safer to keep the records until the transmission is confirmed to be successful, but anyway.

        >Suppose that, due to a technical glitch, the receivers on Pluto
        >malfunction and the transmission is lost. So is Sheila dead now? The
        >transmission is still speeding on its way, out of the Solar System,
        >toward the stars. The loss is irreversible, from a Solar System point
        >of view, since we can't go chasing after the lost transmission which
        >is traveling away at the speed of light. But perhaps in some other
        >star system, an advanced civilization will pick up the transmission,
        >and load it into a computer. Would Sheila only then come back to life,
        >or had she been alive all along? But it may not have been a
        >predetermined fact that this advanced civilization would be pointing
        >their receivers in the right direction at just the right time... You
        >might see some connection between this scenario and cryonics.
        Yes, I see the connection. To put it explicitly: Sheila is cryonically suspended. After a few decades, cell repair machines try to repair her brain. Let's suppose that their strategy is to record the position of a molecule, then remove this molecule (and store it) to access the next molecule, and so on. Then, with all the information, they could calculate where to put every molecule to reconstruct Sheila's brain in a healthy state. Now let's suppose they need some advice from an AI on Pluto, so they send the information to Pluto. Let's say that, after sending it, they accidentally erase the copy of the information they had here on Earth. Now Sheila's fate depends on what happens on Pluto, just as you have described.
        Now, does this mean that you are also against cryonics? Why? After all, the revived brain would be, in a physical sense, the same object as the brain which was frozen. The same molecules, ordered the same way.
        But you asked what my theory has to say about this. Well, in some worlds the information will be recovered, and in some other worlds the information will be lost. In some worlds Sheila is alive, in some worlds she's dead. In an absolute sense, Sheila is alive as long as she keeps on living *somewhere*, that is, as long as there are worlds which contain structures corresponding to some of the mind states included in the set of "possible next mind states" defined by her current mind state (the last mind state she had when she was frozen). But for every mind state, there's an infinite set of possible worlds that contain structures which correspond to this mind state, and they are all real. So, in an absolute sense, Sheila never dies. The worlds where she dies are as irrelevant to her conscious experience as the worlds where she is never born.
        So why bother preserving information about her mind state, if she can't die anyway? Well, she can't die in an absolute sense, she can't die *from her own point of view*, but she could die *from our point of view*. That is, Sheila is dead *for us*, inhabitants of this particular world, if we never meet her again, if the information of her mind state is irretrievably lost for us, out of our reach. We can speak of losing this information just as we speak of losing any important file. If we accidentally erase it, we say that the file is probably lost. If we find a backup later, then we say that the file was not lost after all. Lost for whom? For us.
        >But the question I would pose is, why should Sheila have agreed to have
        >her brain sliced up, if she didn't actually intend to commit suicide?
        >How could anything that happened after Sheila's brain was sliced up
        >alter the fact of Sheila's death?
        A medieval thinker would say: "If you stop a person's heart, you kill her. If you open her chest, cut her heart out and throw it away, you are making sure that you killed her. Why would someone want to have this done to them, if they didn't intend to commit suicide? How could anything that happened after her heart was thrown away alter the fact of her death?"
        If you showed to this medieval man a heart transplant patient who has recovered, he would say: "This is not the same person. The original person died, I've seen you kill her, it's a fact. This is just another person who happens to behave very much like the original person."
        There are many reasons for mind uploading: uploads would think faster; they would be much safer in the face of accidents (by means of tough materials, redundancy and maybe backups); they could also think in slow motion (to communicate with distant places in subjective real time); and they would be easy to repair and not as difficult to augment as a biological brain is. If anything went wrong, the thought process could be stopped immediately and the problem calmly studied by other uploads or AIs. An upload's state of consciousness could be altered in many ways without the need of drugs. It could waste very little energy, which would be very important in the long run.

        >Your theory is in fact a physical theory, because you are claiming that
        >in addition to the physical brain, there exists another thing, which you
        >call "mind." The notion of "production" that I introduced was an
        >attempt to capture the image you had of a relationship of "mind" to
        >brain. It's a common notion, as if the brain were some kind of
        >projector or radio transmitter which, by its physical activity, produces
        >the mind just as a transmitter by its physical activity produces an
        >electromagnetic field. But this can't be correct. We have no evidence
        >of any kind of extra thing, any kind of field produced by the brain,
        >other than the physical fields it produces, which don't seem to equate
        >to "mind."
        A common notion, but not mine. I don't claim that something called "mind" must exist in order to explain some phenomena. My mind *is* the phenomenon I want to explain. Its existence is all I really know.
        Theories of mind try to explain this fact and the way something called "the external world" seems to affect my mind. They are not physical, in the sense that the outcome of physical experiments can say nothing about subjective experience, about my mind. So, the notion of a "mind field" that could be physically detected has no relation whatsoever with my concept of mind.
        >> You'll find that the object evolves from a stable state to the next
        >> one because those components that influence its output behave
        >> like that. You could conclude that any of those transistors could be
        >> replaced by another one and the object's behaviour as defined would
        >> be essentially the same.

        >Only if you started with a working definition of "the object's
        >behavior," which, again, is not written into physics and is required
        >only to prop up your increasingly fragile theory. Again, an alien
        >scientist might be able to figure this out, but only if she knew
        >something about devices like transistors and what they might be used
        >for. Your theory requires the intelligence of an alien scientist to
        >somehow be coded into the structure of the universe so that it can
        >figure in the routine "production of mind."
        Okay, let's say we start with a lowest-level description of the object according to physical law. Let's say it's quarks and electrons (or replace it with another, lower-level description if possible). Now, from a physics point of view, the information about the state of every quark and electron is all you need to describe the object's behaviour. You needn't know anything about higher-level structures such as transistors to know what quarks and electrons will do. But still, these higher-level structures are present in the object (the computer) and *not* in other objects (such as rocks). The imaginary alien scientist might never find them, but if you told him to check whether the object has this pattern of behaviour, he could in principle find it. If the object were a rock, he would never find this pattern of behaviour, not even in principle. So, this pattern of behaviour is not privileged by physical law, but its presence or absence is a physical fact, not some arbitrary decision we make.
        It needn't be the only high-level pattern of behaviour in the computer. The scientist could find a pattern of ink on the processor which spells the name of the manufacturer, if you told him to look for it. This would be another pattern, no less real than transistors, but irrelevant for computation. The point is that, if you asked the scientist "is this object a computer?" and gave him a formal description of what you mean by "computer" (a mathematical structure, a pattern of behaviour to look for), then he could answer "yes, this object is, among other things, a computer" or "no, the computer pattern is not present in this object" (if it were a rock), and the answer would depend only on the object, not on the scientist's knowledge about computers.
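        The idea that "the pattern's presence is a physical fact" can be put in a toy program. This is my own analogy, not anything from the original exchange, and every name in it is invented for illustration: a high-level pattern such as "behaves like a NAND gate" is just a formal predicate over observed behaviour, and its verdict depends only on the object tested, not on the tester's prior knowledge.

```python
# Toy analogy (illustration only): a high-level pattern is a formal
# predicate over a system's observed input/output behaviour. Its
# verdict depends only on the system, not on who checks it.

def behaves_as_nand(device):
    """True iff the device's observed behaviour matches the NAND pattern."""
    return all(device(a, b) == (not (a and b))
               for a in (False, True)
               for b in (False, True))

# A "computer-like" object: only its behaviour is tested, not its
# low-level composition.
nand_gate = lambda a, b: not (a and b)

# A "rock": it responds to prodding, but not with the NAND pattern.
rock = lambda a, b: False

print(behaves_as_nand(nand_gate))  # True
print(behaves_as_nand(rock))       # False
```

        An alien scientist handed this predicate could evaluate it without knowing what a NAND gate is for; that is the sense in which the pattern's presence or absence is a fact about the object.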

        >> From a subjective point of view, there's no difference as long as
        >> all the computers are running the same process (all give the same
        >> inputs to the program). There would be only one conscious
        >> experience, only one mind. If they begin to differentiate, then the
        >> mind branches. From a subjective point of view, you end up in one of
        >> those brains, at random.

        does it mean to say "you end up in one of those" computers at
        >random or
        otherwise, as opposed to "you end up in" any of the others?
        >What is the
        difference? Each computer thinks it is you, while actually
        >none of them
        are the human being which was destroyed. There is no
        initially, between any of them.
        It means that you can't have many different conscious experiences at the same time. Assuming you survive the process (and we don't agree on this), what you can expect is the conscious experience of ending up in one of the computers. If a number were assigned to each of the computers, and shown on a screen to each of the uploads, what you can expect to actually see is either 1 or 2 or 3... but not a superposition of them.
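        The branching picture in this paragraph can be mimicked with a small sketch. This is a loose illustration under a toy model of my own devising ("mind states" as histories of inputs), not a claim about real minds:

```python
# Toy model (my own illustration): a "mind state" is just the tuple of
# observations received so far. Identical copies fed identical inputs
# remain one state; once inputs differ, the states branch.

def step(state, observation):
    """Advance a toy mind state by appending one observation."""
    return state + (observation,)

start = ("wake up",)

# Three copies receive the same input: their histories stay identical,
# so on this model there is only one distinct state.
copies = [step(start, "see screen") for _ in range(3)]
print(len(set(copies)))  # 1

# Each copy is now shown a different number: the histories diverge
# into three distinct states -- the "branching" described in the text.
branches = [step(c, str(i + 1)) for i, c in enumerate(copies)]
print(len(set(branches)))  # 3
```

        The point the sketch makes is only structural: divergence begins exactly where the inputs begin to differ, and each branch sees one number, never a superposition.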

        >The only way to interpret what you are saying is that this "you" you
        >refer to is a soul. The same old idea that people have been using from
        >time immemorial, and that actually refers to nothing but the body.
        The old concept of "soul" refers to the mind, but with the assumption that it can exist without a brain. Also, it was often assumed that it could directly act on the human body, regardless of physical law (Descartes thought that the pituitary was the point of interconnection). Others (Leibnitz, I think) proposed that God made sure that body and soul, being independent, never behaved in incompatible ways.

        >Death of an animal occurs when the body is destroyed or otherwise ceases
        >to function as a living organism. This definition does not fall apart
        >just because you might be able to make a copy of a person which acts
        >like that person in some ways.
        It's not your definition which falls apart, it's the concept of "death" as something that needn't be defined because we agree on its meaning. The concept of "death" falls apart and breaks into different concepts for different people. That's why, when you say "death" in this context, you have to give a definition, so that other people understand what you mean.
        In order to build your concept of "death", you have chosen to use another concept, "living organism", which also falls apart for the same reason. Your definition of "living organism" is a biologist's definition, in the sense that, according to it, an object has to be very closely related to the living organisms we know today (same lineage, similar DNA, proteins and other biomolecules, and so on) to be considered a living organism. So, yours is a biologist's definition of death. Very strict and narrow, perfectly acceptable but useless for many moral and philosophical discussions.
        The definition of "death" I'm using has more to do with the reasons why people fear death and the reasons why people cry at funerals. My definition is "loss of a particular mind", or "loss of the information about a person's mind pattern". And by "mind pattern" I mean the structure that determines the person's memory, personality and thought.
        >> In mind space, it would be required that your first mind state in
        >> the morning is included in the set of 'possible next mind states' of
        >> the last mind state you had the night before, for you to be the same
        >> person who went to bed.

        >I am the same person because I am the same animal and I just slept for a
        >while. As I slept, some of my molecules changed or were exchanged with
        >the environment, but only a small fraction. That would have happened
        >even if I stayed awake. But I slept. That's all.
        "being the same person" is another concept which falls apart in this context. Your definition is something like "being the same object, that is, the same molecules arranged the same way". But, as I said, the notion that the body is one and the same physical object (in this sense) throughout life is an illusion. Strictly speaking, most of your molecules are changing their relative positions all the time, and you are exchanging molecules with the environment every time you breathe. It's no use to say that it was only a small fraction. From a physics point of view, as soon as one molecule is out of place the object as defined disappears. to avoid this, you would have to allow some amount of variation in the relative positions of molecules to account for blood flow, breathing and so on. But it wouldn't be enough to say "I allow some variation", you would have to specify the amount and the kind of variation you allow. This is what I mean by "structures": conditions about the physical state of a sytem that determine, by definition, whether the system exhibits a particular  behaviour. It's a matter of choice which structure to define, but it's not a matter of choice whether this structure is present or not.
        Or you could simply say that your definition of "being the same person" is intuitive and only applies in ordinary, day-to-day situations, not in extreme ones. But then, when these extreme situations are being discussed, you can't use your concept. You can't say "in these situations the person is not the same", because you haven't defined what that means for you, and the old concept, on which there was agreement, is not available in this context.
        Another option would be to define the new concept as follows: "if the old concept of being the same person is applicable, then use it. If not, then the person is not the same." Maybe you are doing just this with all the other concepts ("human", "alive", "life", "death", "dying"). A strict, conservative approach: whenever there's doubt about whether something we value is preserved, conclude that it is not. As I said, from a philosophical point of view, this kind of definition is both perfectly acceptable and rather useless, as it captures very little of the meaning of the original word.

        >>> But we can imagine transplanting into my cranium Malkovich's brain,
        >>> or a copy of it. Whether you would call the resulting creature
        >>> Gubrud or Malkovich or something else is not a very interesting
        >>> question, in my opinion.

        >> It's a thought experiment to illustrate a point.

        >What is the experiment? You described a situation. There is no
        >calculation to process through here, as there is in a physics "thought
        >experiment." Your description does not lead to testable implications
        >or logical contradictions, if we just take it as a description of a
        >possible physical operation.
        The logical contradiction is trying to access a mind state without respecting the logical path of possible next mind states. Of course, it's just a logical contradiction in the context of my theory of mind.

        >I have tried to show you that the ideas you are using are dualistic
        >and magical, and require proliferating, implausibly complicated
        >assumptions in order to cope with even simple questions. I have tried
        >to show you simple ways of thinking about these matters which are
        >fully consistent and do not require such assumptions.
        >However, according to these ways of thinking the physical human being
        >is all there is of the human being; there is no place for the idea
        >that in addition to a physical human being there exists an incorporeal
        >"person" or "mind" or soul which can be transferred to or which can
        >"become" a nonhuman object. In any such process, the human being is
        >destroyed. Anything that is created is something else.

        >Please think about it, reread the discussion, try to understand before
        >replying again. My guess is it will take about five years for you to
        >think your way through this. It took me at least that long.

        As I told you, it took me about that long to come to think as I do now, and I was not enthusiastic about mind uploading, quite the contrary. It's not that I think that my current opinion, my theory of consciousness, is perfectly worked out and must be true, but I find that all the alternatives I've considered (including something like the one you are proposing) are weaker. Either they suffer from fatal flaws, or they make arbitrary assumptions, or they deny my conscious experience, which is more real to me than any theory I may accept: not a postulate to explain something, but the thing I want to explain.
        Despite our disagreements, I think that we both value pretty much the same things. Not DNA and proteins, but love and interesting conversation. Not oxygen molecules, but the feeling of fresh air. Maybe I'm more willing to discover new experiences and you are more worried about preserving the ones we know.
                                  Martín O. Baldán