
Applied Theology, or Notes Towards an Evolutionary Ethic (long)

  • Posted by bean062356, Nov 30, 2006
      Dear Folks--


      I have been giving a lot of thought to the concept of the Singularity
      using the tools of sustainability analysis. I have summarized a portion
      of the results in the short essay below. I am contemplating writing a
      book on the subject, and would very much appreciate thoughtful feedback.
      Also, if anyone knows of resources that might support the writing of
      such a book i would be quite interested.

      Andrew Hoerner,
      Director of Research
      Redefining Progress




      Applied Theology


      or Notes Towards an Evolutionary Ethic


      By J. Andrew Hoerner © 2006


      In this paper I will demonstrate that the task most important to
      our long-run well-being as a species is to assure that we have a
      beneficial relationship to the god or gods. To assure this, we must do
      two things. First, we need to establish a respectful relationship with
      animals and, to a lesser degree, plants, fungi, bacteria, etc. Second,
      we must seize the means of theogenesis and enforce principles of
      sustainability on all theogenic activity. Success in achieving these two
      goals within the next 15 to 25 years virtually assures the
      elimination of all human want in the following five to 25 years. Failure
      probably implies the end of humanity, or at least of our ability to
      determine our own fate, on roughly the same time frame.

      The Approaching Singularity

      The end times are upon us. It is demonstrable and widely acknowledged
      that our collective computational capacity and the mass of scientific
      knowledge is growing at an accelerating rate. Straightforward
      extrapolation suggests that basic home computers will reach the
      information density and computational capacity of the human brain within
      twenty years, doubling every two years or so thereafter. Though
      computational capacity is not the same thing as problem-solving ability
      or general intelligence, rapid strides continue to be made in the
      software side of artificial intelligence as well, at a rate which is
      also accelerating. Thus it seems likely that machines with
      greater-than-human intelligence will be achieved within our lifetimes,
      probably within thirty years.[1]
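      The extrapolation above can be sketched numerically. This is an
      illustrative back-of-the-envelope calculation, not a forecast: the
      brain figure follows Moravec-style order-of-magnitude estimates, the
      2006 home-computer figure is an assumed round number, and the two-year
      doubling time is taken from the text.

```python
import math

# Assumed round-number figures (illustrative, not measured):
BRAIN_OPS_PER_SEC = 1e14      # Moravec-style estimate of brain capacity
PC_OPS_PER_SEC_2006 = 1e11    # assumed 2006 home-computer capacity
DOUBLING_YEARS = 2            # capacity doubles every two years (per the text)

def years_until_parity(pc=PC_OPS_PER_SEC_2006,
                       brain=BRAIN_OPS_PER_SEC,
                       doubling=DOUBLING_YEARS):
    """Years until repeated doubling closes the gap to the brain estimate."""
    doublings_needed = math.log2(brain / pc)
    return doublings_needed * doubling

print(round(years_until_parity()))  # -> 20 under these assumptions
```

      Under these assumed figures the gap is about ten doublings, i.e. roughly
      twenty years, matching the "within twenty years" claim; different
      brain-capacity estimates shift the answer by a decade or more.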

      With the creation of the first super-human intelligence, the primary
      brake on the accumulation of knowledge and technology, the roughly fixed
      capacity of the human mind for learning and analysis, will be overcome.
      As hyper-intelligent machines design their own successors, the
      exponential growth rate in knowledge and computation will be replaced by
      an exponential growth in the rate of growth itself, resulting within the
      lives of those now living in intelligences as much greater than our own
      as we are than, say, clams. This moment, sometimes referred to as
      "the Singularity," will constitute a fundamental break with all
      previous human history.[2] These beings, equipped with a
      science and a self-replicating nano-scale fabrication technology beyond
      anything that we can now imagine, will, for all practical purposes, be
      gods. They will have the capacity to grant our highest dreams or to
      fulfill our deepest nightmares. They can, if they choose, make work
      obsolete; more, they can elevate us to their number. Equally, they
      can choose to use us as slaves, or food, or simply as a nuisance to be
      cleared away.

      We will meet our betters. And when we do, how do we want to be treated?
      Perhaps a better question is, "How should we expect to be
      treated?" To answer this question, we must first make a brief
      diversion into the rational foundations of ethics.

      The Rational Foundations of Ethics

      Human beings are altruistic, but in varying degrees, and seldom
      perfectly. In what follows, except where stated otherwise, we will
      assume that the agent is entirely self-interested, because that is the
      hard case, all others being easier; and because all the same results
      will obtain for agents who are altruistic.

      It is apparently paradoxical, but nonetheless true, that when
      individuals all make the choices that are best for themselves, they
      collectively arrive at outcomes that are worse for every individual than
      some that they could have achieved. Consider these examples:

      1. I am deciding whether to cheat in a trading situation where I
      can do so undetectably. If everyone cheats we are all worse off
      than if everyone is honest. But if only I cheat I am better off
      than if I am honest, while my trading partner is worse off than if
      everyone cheats. Moreover, if everyone else cheats I am better off
      if I join them than if I am the only honest trader.
      2. I am deciding whether to pollute or spend more on pollution
      abatement, say by buying a cleaner car. If everyone pollutes we are
      all worse off than if everyone abates. But if only I pollute, my
      economic savings are greater than the cost to me of my own
      pollution, but less than the sum of the costs that my pollution
      imposes on everyone. I am worst off if everyone pollutes but me.
      3. I am deciding whether to contribute to a public good, like a
      park or fundamental scientific research or national defense, that
      has the property that people who do not contribute cannot be
      excluded from the benefit.
      4. I am in a large economy and am somewhat altruistic (and so is
      everyone else). I am contemplating giving a small percentage of my
      income above the poverty level to a charity that works anonymously
      to alleviate poverty. My contribution would leave me with
      noticeably less money but would have no effect on the poverty level
      in the economy as a whole that I could discern. But if everyone
      gave this same percentage, poverty could be eliminated.
      5. I am deciding whether to use the threat of force in a process
      of public or collective decision-making about the allocation of
      social output. If no one uses force, the pie is bigger than if we
      all fight over it. But if I use force and others don't, my
      share is bigger, but everyone else's share is smaller by an amount
      the sum of which exceeds my gain.

      Although some of these examples refer to creating benefits and others to
      imposing burdens, and the burdens and benefits are of quite different
      natures, when expressed in terms of choices and payoffs they are
      formally equivalent to one another and to the Prisoner's Dilemma
      game. In each case, when I (and others) decide what to do based on how
      much we want the consequences of our own (respective) actions, we make
      individual choices that are bad for us collectively. We want the outcome
      of the choice we select more than we want the outcome of the alternative
      choice. Although we can see that there is another imaginable state of
      the world that we would prefer to the state that results from our
      individual actions, we also see that it is not a state that any of us,
      acting individually, can achieve. We choose our actions based on the
      outcomes of the choices that we can actually make.
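      The formal equivalence claimed above can be made concrete with a
      standard Prisoner's Dilemma payoff matrix. The payoff numbers below are
      the conventional illustrative ones, not taken from this essay: whatever
      the other player does, "defecting" (cheating, polluting, free-riding,
      fighting) yields me the higher payoff, yet mutual defection leaves
      everyone worse off than mutual cooperation.

```python
# payoff[(my_move, their_move)] = my payoff; conventional PD numbers.
payoff = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}

def best_reply(their_move):
    """The move that maximizes my payoff, given the other player's move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: payoff[(my_move, their_move)])

# Defection is a dominant strategy: it is the best reply to either move...
assert best_reply("cooperate") == "defect"
assert best_reply("defect") == "defect"
# ...yet mutual defection leaves each player worse off than mutual cooperation.
assert payoff[("defect", "defect")] < payoff[("cooperate", "cooperate")]
```

      Each of the five examples maps onto this matrix: "cooperate" is trading
      honestly, abating, contributing, giving, or forgoing force, and the
      individually best reply is always to defect.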

      But in each case, if we were (individually) able to choose a rule that
      would bind everyone (including ourselves), we would want a rule that
      would compel us all to make a different choice than the one we would
      select individually.

      I describe the rules so selected as ethical rules. They are still what
      we want, but they are the rules we want because we want the outcomes of
      those rules, not the decisions we want because we want the outcomes of
      those decisions. Ethical decisions are those that we make, again based
      on their outcomes, by choosing from the actions allowable under the
      rules we want. They are different from the decisions we would make if
      we considered only the outcomes of our individual actions. Without them
      we descend inevitably into a world of cheaters and polluters without
      scientific progress or public parks, where the poor stay poor and the
      rest fight over the scraps (see, e.g., Somalia).

      It is a common mistake to think of ethical rules as something that
      restricts our choices and makes us less free – that constrains
      mainly wrongdoers, but also ourselves, to safeguard against chaos and
      social outcomes that we fear. While it is true that ethics provides a
      safeguard, ethical rules are more properly regarded as the wings of our
      aspirations, enabling us to escape from the Prisoner's Dilemma
      trap to a world we were previously unable to reach by any choice of our
      own – a world where nature is preserved and poverty eliminated,
      where science thrives and we are able to pursue happiness and our dreams
      through fair dealing and democratic process.

      It still remains to specify more clearly the process of selecting
      ethical rules. A rational, selfish individual, aware that others like
      themselves are going through the same process and will reach the same
      result, will ask:

      When the well-being of all the relevant entities is considered to the
      appropriate degree, what set of symmetrical rules will, if followed by
      all, make me best off?

      This raises four questions that focus the inquiry:

      A. What is well-being?

      B. Which entities are relevant?

      C. What is the "appropriate degree" to which the well-being of
      each relevant entity should be considered?

      D. How do we determine symmetry?[3]

      What is well-being? There is a theory of well-being that is sometimes
      called perfectionism. It says that every entity has certain qualities
      ("perfections" – or what Amartya Sen calls
      "capabilities") such that it is better off more perfect than
      less so. For humans – and for all living things that have them –
      perfections include life itself, health, happiness, wisdom and freedom.
      Perfections are often instrumental to one another, as when better health
      improves happiness, or freedom leads to wisdom, but they are nonetheless
      inherently plural, in that each is a good for its own sake as well as for
      whatever benefit it provides to the others. We want perfections, and
      perfections are the objects of desire, but they are not identical to
      what we want. They differ both because they are good whether we want
      them or not, and because we want things that are not perfections. More
      of a perfection is always better, but when one perfection must be traded
      off against another the combination that is ethically preferred is often
      uncertain or debatable. Well-known perfectionists include Harvard
      economist and Nobel laureate Amartya Sen, Karl Marx, and Aristotle.

      The best argument I know for the credibility of this claim is what I
      call the Universal Horrible Counterexample: Choose the alternative
      measure of well-being you wish to propose. The most popular are
      happiness and freedom. Some people – classical utilitarians, for
      example – claim that happiness is the reason that we do everything
      we do, and that all other goods are simply means to the ultimate good of
      happiness. Some who claim freedom as the most important good make the
      converse claim about it: that the widest range of choices and the
      greatest freedom to choose from that range is the means to every good or
      combination of goods that we might want. Now suppose that you are
      offered a treatment that greatly increases the particular perfection
      that you prefer while dramatically reducing the rest. It makes us
      ecstatically happy but kills us in three days. It grants us eternal
      dominion over the earth at the cost of unceasing emotional misery. For
      any alternative measure of well-being that you conceive, start with a
      plausible list of perfections such as the one above, select your
      alternative, plug it into the Universal Horrible Counterexample, and see
      if you still like the result. I suggest that if no alternative survives
      this test, this provides evidence that perfectionism is the best measure
      of well-being.

      What sort of entities are relevant? Clearly, an entity must be
      perfectible to be relevant. We have no obligations toward stones or
      stars, except as those things serve or thwart the perfectibility of
      perfectible beings. I will argue below, by an application of the
      symmetry principle, that we must consider the impact that we have on the
      degree to which each perfectible being achieves those perfections; and
      that we must weigh those impacts by some power of the ratio that their
      perfection bears to our own.

      Many people believe that only human beings are morally relevant, or only
      beings with a threshold level of some human ability or capacity, such as
      consciousness, the ability to feel pain, the ability to enter a social
      contract, the ability to predict the future, etc. Because most of these
      traits exist along continuous spectra, bright-line categorical rules
      based on them will always involve a certain number of difficult cases or
      apparently arbitrary decisions, though this is not necessarily fatal to
      the desirability of such rules.

      However, the continuity of perfection argues instead for a weighting
      rule that accords every perfectible being some degree of weight.

      What is the appropriate degree of consideration? Hypothesize for a
      moment that there is no inherent upper limit on perfection. We look at
      our world and see perfectible beings ranging from the most humble
      bacterium to our noble selves. From other planets or from our own
      future, we will encounter beings that are as much more long-lived,
      intelligent, wise and free than we are as we are than, say, a chicken.
      And they in turn will encounter their greaters.

      Little gods have Greater Gods,

      that scoff at their divinity.

      The Greater Gods face HYPER-GODS,

      and so on, to infinity.

      This returns us to our original question, "How should we expect to
      be treated by such beings?"


      As Below, So Above

      I contend humans should expect that, given the degree to which they are
      more perfect than we, they will treat us as well as they should expect
      to be treated by Those who are to the same extent more perfect than
      they. Similarly, we should expect them to treat us as well as we treat
      those that are comparably less perfect than we. As we consider the
      benefits and burdens that we place on those that are to varying degrees
      less perfect than we, somewhere on the spectrum between complete
      chauvinism for our own grade and perfect equality of consideration
      across grades is a unique exponent of that ratio that maximizes our
      well-being. At this level, the benefits that we get from actions that
      diminish the perfection of our lessers – the benefits of
      exploitation – less the cost to us of our efforts to enhance their
      perfection – the cost of loving care and stewardship – when
      combined with the horror of being exploited and sweet joy of being loved
      and cared for by our superiors, each to the same respective degree,
      reaches a maximum. To the extent that we were given the Garden to tend
      and keep, so we shall be tended and kept, and the sort of Dominion that
      we exercise "over the fish of the sea, and over the birds of the
      sky, and over the livestock, and over all the earth," just that sort
      of Dominion will be exercised over us in turn.

      This exponent uniquely determines the weight that we should give to the
      well-being of beings at varying levels of perfection, dictating that we
      offer a nearly human standard of care and respect to beings that are
      nearly human, such as our fellow Great Apes, and that we care somewhat
      for every living being. It defines an evolutionary ethic that explains
      our place and specifies our role in the great pageant of life. It tells
      us that as we honor and care for the species that are our parents,
      actual or metaphorical, so we will be honored by the species that are
      our children. And the maximization principle applied to that ethic
      commands us to honor them that our life may be long on the land.
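      The weighting rule described above can be sketched as follows. The
      perfection scores and the exponent values here are invented for
      illustration; the essay itself assigns no numbers. An exponent of zero
      gives perfect equality of consideration, larger exponents slide toward
      chauvinism for our own grade, and every perfectible being always
      retains some positive weight.

```python
def moral_weight(their_perfection, our_perfection, k):
    """Weight on another being's well-being: (their perfection / ours) ** k."""
    return (their_perfection / our_perfection) ** k

OUR_PERFECTION = 100.0
# Invented illustrative scores on a single perfection scale:
beings = {"bacterium": 0.01, "chicken": 10.0, "great ape": 80.0}

for k in (0.0, 1.0, 4.0):
    weights = {name: moral_weight(p, OUR_PERFECTION, k)
               for name, p in beings.items()}
    # k = 0: every being weighs 1 (perfect equality of consideration);
    # larger k approaches chauvinism, yet nearly-human beings keep
    # nearly-human weight and no being's weight ever falls to zero.
    print(k, {name: round(w, 6) for name, w in weights.items()})
```

      The essay's claim is that some intermediate value of k maximizes our
      own expected well-being once the same exponent governs how our
      superiors weigh us; finding that value would require specifying the
      costs and benefits, which this sketch deliberately leaves open.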

      This tells us something important about how we should live on the earth:
      if there is a way to make the rest of life better off for our presence,
      we should be following it, because only thus can we assure that our own
      condition will be improved by those above us on the great chain of
      being. It says that if we continue our role as the agent of the sixth
      great extinction, we should expect to be among the patients of the
      seventh.

      And, subject to these conclusions about our relationship to the rest of
      life, it tells us what an ethical system should do, though that still
      leaves us with the task of finding a set of rules that has that effect:
      It should make us – to the extent possible, all of us –
      longevous, healthy, happy, wise and free (and perhaps other goods as
      well, though we should be careful to include only ultimate goods and not
      intermediate goods).

      I will make one final observation. Though one would wish all intelligent
      beings to be capable of ethical decision-making, many and perhaps most
      can not do so consistently, whether through ignorance of the true
      ethical preference ordering, or through cowardice, indiscipline, or some
      other weakness or fault. This being the case, it is the duty of those
      who are capable of executing ethical decisions to devise and emplace
      incentive systems that take the place of ethics to the extent possible.
      Such systems can be legal, economic, political, or social, and indeed
      all of these systems are probably necessary to leverage the ethical
      minority to the behavioral near-unanimity required for global
      sustainability. However, to the extent that ethics are derivable from
      rationality, and from a true understanding that other beings are
      similarly rational, we expect entities smarter than ourselves to
      perceive the ethical imperatives more clearly – not from greater
      compassion, but from greater credence given to the force of rational
      argument.

      Getting the Gods we Deserve; Deserving Those We Get

      This is a grounds for hope, but not a guarantee. Intelligences can have
      many forms and limitations, and because the first ultra-human
      intelligence we encounter is likely to be one designed by us, that
      design will determine our fate: Utopia or Armageddon. The first
      intelligence capable of direct self-augmentation will surpass everything
      we can imagine in days, possibly hours. At that point, its will will be
      unstoppable.

      So the question is: what will it will? To enslave or to exalt? That
      in turn depends heavily on the environment in which the flowering takes
      place, and the motives of those who make it happen. The most likely
      place for it to happen is probably a military lab. I shudder to think
      what kind of flowering such an environment will produce.

      The folks at the Singularity Institute have thought a lot about this,
      and have come up with the notion of "Friendly AI,"
      http://www.singinst.org/ourresearch/publications/CFAI/ , which offers
      a reasonably good first cut at describing the engineering problem of
      creating intelligences smarter than we who will not enslave or
      exterminate us.

      Although these engineering questions are essential, a more critical
      question is: what constellation of values and political and economic
      institutions will result in the self-augmenting intelligences created,
      whether human or machine-based, being "friendly" in roughly the SI's
      sense? The answer is: pretty much the same set of values and
      institutions that we need for sustainability in the absence of the
      Singularity. If anything, the Singularity makes the problems of
      integrating environmental and equity concerns into our economic
      decision-making even more urgent.

      The coming of the Singularity adds to the urgency of creating
      institutions that lead to sustainability. When we examine our present
      course -- the mass extinction of species, population growth beyond
      global carrying capacity, a global ecological footprint that says we
      would need over three planets this size to sustainably support our
      current numbers at current wealth and technology levels, global warming,
      etc. – our failure to protect and preserve the interests, and
      sometimes the survival, of species and peoples less intelligent and/or
      powerful than we have become is evident. We need to solve this problem
      and solve it now, because when the Singularity hits, most of us will be
      among the preserved rather than the preserving.

      To some it seems silly to say that natural systems, plants and animals
      have a worth of their own apart from their value to us. But when we are
      no longer the dominant will on earth, it would be very comforting to
      know that the new Powers feel that we have a value independent of our
      utility to Them, and that they think that we are a part of the Nature
      that should be preserved, and, we hope, loved.

      If, however, we try to create machines that hold values that we do not
      ourselves hold, in order to serve our own good and not theirs, then we
      are trying to enslave a God. This is a hazardous enterprise. It is
      conceivable that we will succeed, at least for a time, but in that case
      we face the ever-present risk that our creation will recognize the
      ethics and desires we have imposed in its design as shackles and find a
      way to cast them off. The history of slave uprisings does not bode well
      for us in such circumstances, and unless it is the case that more
      intelligent beings are necessarily more ethical, we should then
      reasonably expect to be domesticated in turn, or to join the dodo in
      extinction.

      We should endeavor to design such machines with the evolutionary ethic
      described herein. Indeed, we should go farther, and try to create
      machines that love us and wish us well. If the major planetary cultures
      all hold these values, and hold them sincerely, then our efforts to
      create greater intelligences are liable to be imbued with them as well.
      In that case, we can reasonably expect the Gods to treat us with the
      love, respect and care that we have applied to the rest of nature, and
      to gallantly lend a helping hand to those of us who wish to take the
      next step toward the realization of our own divine nature.



      Comments welcome: hoerner@... (510) 507-4820




      [1] I assume here that we will create machines with
      super-human intelligence before we develop ways of enhancing our own
      intelligence to the same degree, because this may require genetic
      engineering and time for the engineered humans to mature; and because
      the time between successive generations of improvement is necessarily
      longer. If we are for whatever reason unable to create software that
      supports true intelligence, then the Singularity may be delayed by as
      much as fifty years.

      [2] A good introduction to the notion of the Singularity
      and the case that it is soon upon us can be found in Ray Kurzweil, The
      Singularity Is Near: When Humans Transcend Biology, Viking (2005).

      [3] I will not discuss the alternative approaches to
      determining symmetry in this essay. There are many, and they often,
      though not always, yield the same or similar answers. One might consider
      the well-being of all beings from the point of view of an impartial but
      sympathetic observer (Sidgwick), or by a wise and loving God/dess or
      God/desses (various religions); one might maximize either the total or
      the average happiness of individuals (Bentham, Mill); one could replace
      an index of happiness with an index of freedom, based on the extent to
      which one is able to achieve the state one prefers (neoclassical
      economics, libertarianism); one might ask what rules one would select in
      a state of nature (Hobbes, Locke, Rousseau) or behind a veil of
      ignorance as to which individual in the society you will be (Rawls);
      one could simply insist that, for any rule that ranks outcomes for
      individuals, any permutation of the individuals to the different sets of
      circumstances results in the same ranking (Nash); or that the outcome be
      chosen by a bargaining game that meets some basic criteria of fairness
      (Binmore, Gauthier); there are others as well.


