
Re: [MAGIC-list] Re: An argument against intelligent design (the simulation argument)

  • Eray Ozkural
    Feb 8, 2014
      Hello,

      This is quite a good angle of attack, which I did not fully consider in the paper, beyond explaining in plain English that resources would be devoted elsewhere since they are finite. That is the basic reason. If resources were infinite, it's anyone's guess what would happen. If resources are finite, then you need to increase your intelligence as much as you can to solve the "final question".

      This could be tied to the decision that, to survive, you always need more intelligence. It's the rational decision.

      You use the same method of reasoning to invalidate the argument; I think that's fair.

      Additional support could be provided as follows: uploads proliferate arbitrarily, so for every human there will be many post-humans. For instance, I could have a million copies of myself, and so could you. There would certainly be much more incentive to invest in the future than to get stuck in a strange pseudo-scientific archaeological study of the past (which is irrational).

      Put simply, intelligent beings have better things to do than playing video games.

      Regards,


      On Sat, Feb 8, 2014 at 5:51 PM, Gabriel Leuenberger <gabriel.leuenberger@...> wrote:
      Thanks Eray, I've read the paper now. Here's my attempt at a simpler rebuttal of the simulation hypothesis:

      If you are a conscious being, the probability that you seem to be an ancestor is P( seems I'm ancestor ) = (S+H)/(L+S+H),
      where H is the number of real ancestors, S is the number of simulated ancestors, and L is the number of other artificial conscious beings.

      Scenario A: Advanced civilisations control large amounts of computational power and use a small part of it to run ancestor simulations.
      A much larger part of this computational power will probably be used for more productive things, like developing new technologies.
      In order to keep a high rate of technological/artistic/scientific progress, a lot of general intelligence is necessary.
      So the population of artificial conscious beings that are working on (or consuming) new technologies is much larger than the population of simulated ancestors,
      i.e.  L >> S+H.
      Hence  P( seems I'm ancestor | Scenario A ) = (S+H)/(L+S+H)  is close to zero.

      Scenario B: Civilisations never get to control large amounts of computational power, so there are essentially no simulated beings and P( seems I'm ancestor | Scenario B ) is close to one.

      We give both scenarios equal a priori probability:
      P( A ) / P( B ) = 1
      Now we observe that it seems like we're ancestors and calculate the updated ratio of probabilities with Bayes' rule:
      P( A | seems I'm ancestor ) / P( B | seems I'm ancestor ) = P( A ) / P( B ) * P( seems I'm ancestor | A ) / P( seems I'm ancestor | B ), which is close to zero.
      Therefore Scenario A is probably false. Ergo the simulation hypothesis is probably false.
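
      To make the update concrete, here is a minimal Python sketch of the calculation. The population counts are invented placeholders, chosen only to satisfy L >> S+H; nothing here estimates real numbers.

# Hedged sketch: the counts below are arbitrary placeholders, chosen
# only so that L >> S + H; nothing here estimates real populations.
H = 10**10   # real ancestors
S = 10**8    # simulated ancestors
L = 10**15   # other artificial conscious beings

p_obs_given_A = (S + H) / (L + S + H)  # likelihood of "seems I'm ancestor" under A
p_obs_given_B = 1.0                    # under B, nearly every conscious being is a real ancestor

prior_odds = 1.0                       # P(A) / P(B) = 1 by assumption
posterior_odds = prior_odds * p_obs_given_A / p_obs_given_B
print(f"P(A|obs) / P(B|obs) = {posterior_odds:.2e}")  # ~1.01e-05

      Any choice of counts in which L dominates gives the same qualitative result: the posterior odds collapse toward zero.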


      I used the technique of the doomsday argument; try to debunk it.


      On Saturday, 8 February 2014 01:23:55 UTC+1, Eray Özkural wrote:
      [crossposting to ai-philosophy and analytic]

      Thanks Gabriel!

      I'm pleased when somebody takes the time to read a paper that serves only to extend our philosophical understanding, rather than to make money or gain any personal benefit.

      There are probably weak points in the paper, but nothing major that I can see. It boils down to a very simple reason: the bland indifference principle doesn't work here.

      The implausibility comes from:

      a) information-theoretic incompleteness necessitated by the quantum simulation.

        a1) I argue explicitly that trying to "detect attempts to detect the simulation" and prevent them is itself infeasible, probably harder than the quantum simulation.

      b) the underspecification of cosmology:

        b1) What is the size of the "real" universe?

        b2) What is the age of the "real" universe?

      Since these are not adequately addressed by the simulation argument, the whole argument boils down to a quite unscientific rephrasing of young-earth creationism: that the earth is 6000 years old and was planted with dinosaur fossils to deceive us.

      Of course, what Bostrom seems to believe is much odder, judging from the most probable scenarios he outlines in his paper. He seems to believe that the world is a 100-year-old simulation (in simulated time; in reality, much shorter!), that only the earth and the local solar neighborhood are simulated, and that, basically, the scenario of the movie The Matrix is true.

      Regards,


      On Mon, Jan 20, 2014 at 3:53 AM, Gabriel Leuenberger <gabriel.l...@hotmail.com> wrote:
      I also want to say that Eray did a good job of pointing out that realistic simulations within simulations within simulations are implausible.
      And my hypothesis about the length of the third part of the program is not based on any evidence.

      On Monday, 20 January 2014 02:42:32 UTC+1, Gabriel Leuenberger wrote:
      I think it would be more rigorous if we used a combination of Hutter's "observer localization" and Orseau's "space-time embedded agents" to debunk Bostrom's thesis.
      Assume there's a 'real' universe which contains real humans but also contains computers with simulated humans.
      Let's assume the software has a good enough AGI to run a very efficient and realistic simulation (which would give a whole new meaning to panpsychism).
      At present the simulated humans have no way of telling if they live in a simulation or not.

      Now we should try to argue that the shortest description of a conscious brain state of a simulated human is longer than the shortest description of a conscious brain state of a real human. By "conscious brain state" I mean data which represents a connectome and the corresponding brain activity of a thinking brain. The shortest description of this data would be a program composed of three parts: (1) a ToE, (2) observer localisation, and (3) a program which transforms the physical data into the brain-state information.
      My hypothesis is that the third part of the program would be longer for sophisticatedly simulated humans. It would therefore be less probable that we are part of such a simulation.
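
      To illustrate the bookkeeping (and only that: Kolmogorov complexity is uncomputable, so the bit lengths below are pure placeholders, and the helper description_length is a hypothetical name), here is a minimal Python sketch of the three-part comparison under a Solomonoff-style prior.

# Illustrative only: Kolmogorov complexity is uncomputable, so the bit
# lengths below are placeholder assumptions, not measured quantities.

def description_length(toe_bits, localization_bits, transform_bits):
    """Length in bits of the three-part program:
    (1) ToE, (2) observer localisation, (3) physics-to-brain-state transform."""
    return toe_bits + localization_bits + transform_bits

# Parts (1) and (2) are taken as shared; the hypothesis under discussion
# is only that part (3) is longer for a simulated human.
real_human      = description_length(10_000, 500, 300)
simulated_human = description_length(10_000, 500, 900)

# Under a Solomonoff-style prior, a description of length n carries prior
# weight ~ 2**(-n), so extra bits translate into an exponential penalty.
extra_bits = simulated_human - real_human
print(f"a real brain state is ~2^{extra_bits} times more probable a priori")

      The only assumption doing any work here is that part (3) is longer for the simulated human; the shared parts cancel out of the ratio.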

      But then there's this paradox: the simulated philosophers would also arrive at the conclusion that they are not simulated. The solution to this paradox is that an unlimited AIXI would know whether it lives in the simulation or not, because the simulation is different from reality. But the simulated philosophers don't have enough computational resources to come to this conclusion.


      On Monday, 13 May 2013 02:45:47 UTC+2, Eray Özkural wrote:
      I had written, a couple of months ago, an essay on the simulation
      argument. It contains a *generic* argument against any version of
      intelligent design, based on induction and AI. I think I had explained
      this argument to Laurent Orseau as well during AGI-12. Let me know
      what you think about it!

      http://www.examachine.net/blog/why-the-simulation-argument-is-invalid/

      Best,

      --
      Eray Ozkural




      --
      Eray Ozkural, PhD. Computer Scientist
      Founder, Gok Us Sibernetik Ar&Ge Ltd.
      http://groups.yahoo.com/group/ai-philosophy




      --
      Eray Ozkural, PhD. Computer Scientist
      Founder, Gok Us Sibernetik Ar&Ge Ltd.
      http://groups.yahoo.com/group/ai-philosophy