
Re: Emergence

  • Michael Olea
    May 1, 2007
      --- In ai-philosophy@yahoogroups.com, "John J. Gagne" <john_j_gagne@...>
      wrote:

      > --- In ai-philosophy@yahoogroups.com, "Michael Olea" <oleaj@> wrote:
      > >
      > > So, a dynamical system can generate, store, or discard information.

      > Of the three examples you gave, the second and third are fairly
      > straightforward, but I'm not sure I understand how the loss of
      > precision results in the generation of information?

      Well, you know what the weather is like right now. That is not enough to tell
      you what the weather will be like next week. The weather keeps making
      news, keeps getting reported, keeps generating new information (worthy of
      broadcasting bandwidth, and advertising revenue) because knowing the
      state of the atmosphere to some precision now does not allow you to know
      the state of the atmosphere to the same precision at a later time. The loss of
      precision means that initial uncertainties are amplified. Information is,
      always, a reduction in uncertainty. The phase of the moon 10 years from
      now, given the phase of the moon today, is known with great certainty.
      Measure the phase of the moon today, and there is little new information to
      be had by measuring it a week from now, precisely because there is very
      little loss of precision (amplification of uncertainty) over time. There is not
      much uncertainty to be reduced by a new measurement. The weather is just
      the opposite. Precision is lost over time. That means that uncertainties are
      amplified. That means that there is much uncertainty to be reduced by a new
      measurement, much information to be gained. So a chaotic system, or more
      generally, an unpredictable system, generates new information over time
      precisely because of the loss in precision (increase in uncertainty) of
      extrapolations from the present to the future.
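      As a concrete illustration (a minimal numerical sketch; the logistic map
      and the 1e-10 perturbation are illustrative choices, not anything from
      this thread), here are two trajectories of a chaotic map started a tiny
      distance apart:

          # Two trajectories of the chaotic logistic map x -> 4x(1-x),
          # started 1e-10 apart, showing initial uncertainty being amplified.
          x, y = 0.4, 0.4 + 1e-10
          for step in range(1, 61):
              x, y = 4*x*(1-x), 4*y*(1-y)
              if step % 10 == 0:
                  print(step, abs(x - y))

      The separation grows roughly like 1e-10 * 2**step (about one bit of
      precision lost per iteration), so after thirty-odd steps the two
      trajectories are macroscopically different, and a new measurement
      genuinely reduces uncertainty - it carries new information.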

      > you said:
      >
      > "Over time the little ball expands, gets smeared out over phase
      > space (at a rate quantified by an n-dimensional ellipsoid, whose axes
      > are related to Lyapunov exponents). This is a loss of information -
      > fewer bits of resolution".

      > When you say "This is a loss of information - fewer bits of
      > resolution" you're talking about a loss of information about the
      > current state relative to the initial condition which was known to a
      > higher precision.

      Right. You could think of it as a kind of "half-life": how long before the
      number of bits of precision with which you can specify a location in phase
      space is cut in half? The rate at which uncertainty increases is the rate
      at which the system is generating new information.
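      The arithmetic of that half-life is simple enough to sketch in a few
      lines (the bit counts and rates below are made-up illustrative numbers):

          # With b bits of initial precision and a loss of lam bits per step
          # (lam = largest Lyapunov exponent, in bits), half the bits are
          # gone when lam * t = b / 2.
          def precision_half_life(bits, lam):
              return bits / (2 * lam)

          print(precision_half_life(bits=48, lam=1.0))   # chaotic: 24 steps
          print(precision_half_life(bits=48, lam=1e-4))  # moonlike: 240000

      The second case is the moon-phase situation: almost no precision is
      lost per step, so a later measurement adds almost nothing.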

      > I suspect it's the reverse of the mass on a spring, which settles to a
      > specific point (and loses the initial condition), whereas the point
      > in phase space expands to occupy many points.

      Yes.

      Imagine a marble rolling around in a bowl till it settles at the bottom.
      Observing the marble at rest at the bottom of the bowl tells you nothing
      about where in the bowl it was released - it could have been anywhere. On
      the other hand, no matter where the marble starts out, your predictions of
      where it will be become more precise, less uncertain, the farther into the
      future your prediction goes - it gets ever closer to the bottom of the bowl.
      This is, as you suspected, the reverse of sensitive dependence on initial
      conditions.
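      The same kind of numerical sketch works for the contracting case (again,
      the damping constant and release points are just illustrative):

          # Overdamped "marble in a bowl": x' = -k*x. Three different release
          # points all converge to the bottom, so information about the
          # initial condition is discarded over time.
          k, dt = 0.5, 0.1
          xs = [1.0, -0.7, 0.3]
          for step in range(1, 101):
              xs = [x - k*x*dt for x in xs]
              if step % 25 == 0:
                  print(step, [round(x, 6) for x in xs])

      The spread of the three trajectories shrinks exponentially; a late
      measurement tells you less and less about where the marble was released.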

      -- Michael
    • Sergio Navega
      Sep 12, 2012
        > "The genetic code is surely not an accidental adhesion of molecules.
        > It is an instrumental message, an energy directive created by a
        > meta-biological intelligence"
         
        I'd like to send a message to the author of the above passage.
        I have a beautiful landscape to sell, it's on sale, and I'm
        offering this property to him for a very good price. It is a
        very large area with a beautiful view, something that is not
        easy to obtain. And the price is really very good! This area
        is situated (glad you asked) on the dark side of the Moon, a
        quiet and valued space, ready to be used for any purpose. One
        time offer only! Buy now, call me at:
         
         
        My Nigerian operators will be glad to process your order.
         
        SN
         
         
         
        Sent: Wednesday, September 12, 2012 4:09 PM
        Subject: Re: [ai-philosophy] A junk-free genome? (was Emergence)
         
         

        Judging from the arguments at this link: http://deoxy.org/8_int.ht
        it seems we are only scratching the surface. Maybe we are limited by our current development, of which there could be much more to come.
        The 'results' could be the best current map we have, where the map is a diagram representing the territory.

        I am referring to these two sections of 'ideas':
        ""The genetic code is surely not an accidental adhesion of molecules. It is an instrumental message, an energy directive created by a meta-biological intelligence.

              This intelligence is astrophysical and galactic in scope, pervasive, ubiquitous, but miniaturized in quanta structure. Just as the multi-billion year blue-print of biological evolution is packaged within the nucleus of every cell, so may the quantum-mechanical blueprint of astronomical evolution be found in the nucleus of the atom.""

        ""Exo-psychology hypothesizes that the evolution of astro-physical structures involves a contelligence as superior to DNA as DNA is to neuron-brains.

              The direction of organic evolution now can be stated. Starting with unicell organisms, life produces a series of neural circuits and increasingly more complex and efficient bodies to transport and facilitate higher contelligence. The culmination of this biological process is the seven-circuit brain which is able to communicate with DNA, i.e. receive, integrate and transmit information at the level of RNA.""

        Other material there I still cannot grasp.

        Michael Olea wrote:
         

         

        On Sep 10, 2012, at 9:19 AM, Sergio Navega <snavega@...> wrote:

         
         
        > The term "junk DNA" always struck me as presumptuous. I preferred a
        > more neutral term like "non-coding DNA" (though that too turns out to
        > be off the mark).
         
        > I agree with that, "junk" doesn't seem to be the right word. I would
        > call it WDKY DNA (We Don't Know Yet).
         
        Yes, that would have been the right approach, but it's moot now. We now have the results of a massive international project, the ENCODE Consortium, comparable to the original Human Genome Project. About 80% of human DNA is now known to have an active biological function, so it no longer falls under the category of "we don't know yet".

        You wrote elsewhere that the proportion of "junk" to "useful" DNA is still under debate. You may have sources of information that indicate otherwise, maybe I'm missing something, but it looks to me like that debate has been pretty much over for some time now. While it was just last week that the ENCODE project published 30 papers summarizing current results, the results themselves have been made continuously available in public databases as they were obtained over the last 5 years.
         
        A little background. In 2003 the National Human Genome Research Institute (NHGRI) invited biologists to propose pilot projects for a systematic approach to identify functional pieces of DNA in just 1% of the genome (non-coding DNA, WDKY at the time). The results, published in 2007, "transformed biologists' view of the genome". I'm quoting from a summary article in Nature:
         
         
        After the pilot studies NHGRI asked for a second round of proposals covering the entire genome. It is the results of that second round that were just recently summarized. Another summary overview article is available on yahoo:
         
         
        Most of the non-coding (formerly WDKY) DNA turns out not to be junk at all but rather part of an intricate regulatory network controlling where, when, and how a relatively small number of genes are expressed. It's time for that particular debate to move on.

        > One thing's for sure: the complexity
        > of this whole thing is mind-boggling.
         
        Yes.

        > Which should teach us to rely
        > a little bit more on natural selection to solve really hard problems
        > (in engineering, science, technology, AI, etc.). But I'm sure many
        > will consider this to be preposterous. It's my random mind at work...
         
         
        I don't think AI students and practitioners would find this at all preposterous. A variety of genetic algorithms have been explored both in theory and in practice for quite some time.

        I once wrote a genetic algorithm to solve a sub-problem in a "Legal Line" parser for an automated check reader. This reader was a program that took the digital image of a check as input and wrote, among other things, the dollar amount as output. The "legal line" is the portion of the check that has the dollar amount written out in words. The problem I was facing was that the image segmenter sometimes made mistakes in its demarcation of boundaries between words (ran some words together, split some words apart). I used the edit distance between proposed words in the split and known words in a custom lexicon as the basis of a fitness function, designed some mutation operators to join and split words, created a population of legal line partitions from the segmenter's initial hypothesis, and let the population evolve.

        It actually converged quickly enough to be feasible for use in production code, and it got good results. But before I went very far with it I realized I could use a variation of dynamic programming that was much more efficient and always got optimal results for this formulation of the problem. It was too easy.
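        To make the shape of that GA concrete, here is a minimal sketch (the lexicon, operators, and parameters are invented toy stand-ins, not the production code described above):

            import random

            # Toy lexicon of legal-line words (illustrative only).
            LEXICON = ["one", "hundred", "twenty", "five", "dollars", "and"]

            def edit_distance(a, b):
                # Standard dynamic-programming (Levenshtein) edit distance.
                prev = list(range(len(b) + 1))
                for i, ca in enumerate(a, 1):
                    cur = [i]
                    for j, cb in enumerate(b, 1):
                        cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                                       prev[j - 1] + (ca != cb)))
                    prev = cur
                return prev[-1]

            def fitness(partition):
                # Lower is better: each proposed word is scored by its
                # distance to the closest lexicon word.
                return sum(min(edit_distance(w, lw) for lw in LEXICON)
                           for w in partition)

            def mutate(partition):
                # Join two adjacent pieces, or split one piece in two.
                p = list(partition)
                if len(p) > 1 and random.random() < 0.5:
                    i = random.randrange(len(p) - 1)
                    p[i:i + 2] = [p[i] + p[i + 1]]
                else:
                    i = random.randrange(len(p))
                    if len(p[i]) > 1:
                        cut = random.randrange(1, len(p[i]))
                        p[i:i + 1] = [p[i][:cut], p[i][cut:]]
                return p

            def evolve(initial, generations=200, pop_size=30):
                # Start from the segmenter's hypothesis; keep the best
                # pop_size candidates each generation (elitist selection).
                pop = [initial]
                for _ in range(generations):
                    children = [mutate(random.choice(pop))
                                for _ in range(pop_size)]
                    pop = sorted(pop + children, key=fitness)[:pop_size]
                return pop[0]

            # Segmenter ran two words together and split another apart:
            print(evolve(["onehundred", "twen", "ty", "five", "dollars"]))

        (The edit_distance function above is itself a small dynamic program; presumably the more efficient approach mentioned next replaced the whole evolutionary search with a single dynamic program over candidate split points.)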
         
        Of course these really simple GAs bear about as much resemblance to biological natural selection as the PDP feedforward "neural nets" bear to actual brains.
         
        - m.
         