
Re: Philosophy of AI: What is a pattern?

Expand Messages
    Message 1 of 45, Nov 2, 2004
      Re this thread: I have finished the draft of my paper BIOLOGICAL
      NATURE OF KNOWLEDGE IN THE LEARNING ORGANIZATION, which provides
      an epistemological and analytical framework for addressing this
      kind of question. I will post it to my web site tonight when I
      get home.

      Having finished it for the time being, I've had the chance to do some
      new reading and have started on the papers of Luis Rocha, of the
      Complex Systems Modeling Team at Los Alamos, who extends Howard
      Pattee's notion of semantic closure and some of his epistemological
      ideas. A good place to start seems to be Rocha's paper "Syntactic
      Autonomy: Or why there is no autonomy without symbols and how
      self-organizing systems might evolve them" -
      http://informatics.indiana.edu/rocha/sa2.html.

      His web site - http://informatics.indiana.edu/rocha/lr_form.html#POL -
      has a cornucopia of other papers that seem relevant.

      Happy reading,

      Bill Hall

      Documentation Systems Analyst
      Head Office, Engineering
      Tenix Defence
      Williamstown, Vic. 3016
      Phone: 03 9244 4820
      Email:bill.hall@...
      URL: http://www.tenix.com

      Honorary Research Fellow
      Knowledge Management Lab
      School of Information Management & Systems
      Monash University
      Caulfield East, Vic. 3145
      URL: http://www.sims.monash.edu.au/research/km/

      Evolutionary Biology of Species and Organizations
      URL: http://www.hotkey.net.au/~bill.hall


      --- In ai-philosophy@yahoogroups.com, "Sergio Navega" <snavega@i...>
      wrote:
      > From Eray Ozkural:
      > > --- In ai-philosophy@yahoogroups.com, "Sergio Navega" <snavega@i...>
      > > wrote:
      > >
      > > > In this new way of
      > > > seeing things (that I very much agree), brains search for
      > > > regularities and "compress" them through inductive
      > > > abstractions, but the primary purpose of this process is not
      > > > to compress for the sake of compression, but to obtain a
      > > > *measure* of how good they are to the organism's life.
      > >
      > > Hmm, yes. However, ultimately, obtaining a measure of how good X is
      > > to the organism's life is to derive an expectation of utility with
      > > respect to the specific act, e.g. predicting the future in general.
      > > It's interesting, we can never get rid of this "predict the future"
      > > part. Do you think it is fundamental?
      >
      > Well, actually I think we must be cautious with the expression
      > "predict the future". I often use it, but it's necessary to
      > qualify it a bit. It seems obvious that an organism doesn't
      > consistently "predict the future". When we jump out of bed
      > in the morning, it is hard to have a perfectly planned
      > day ahead of us. Lots of things happen that we couldn't
      > possibly have predicted. Instead of saying "predict the future",
      > I should have said "isn't surprised by the future".
      >
      > If Aunt Teresa calls me on the phone, I will not be surprised
      > (although I could not have predicted that she would do so).
      > Thus, in this case, I did not predict the future, but I was
      > not "surprised" by that event. On the other hand, if
      > Aunt Eunice called me on the phone, I would be *very*
      > surprised (to say the least): she died many years ago.
      > Therefore, I can say that Aunt Eunice will not call me
      > on the phone tomorrow (I'm predicting the future based on
      > my previous knowledge that dead people don't use telephones),
      > but I cannot really predict everything that will happen during
      > my day. Perhaps we should talk not of predicting the future,
      > but of the range of possible *expectations* that we can
      > entertain about tomorrow. The fundamental point for AI in
      > all this is that "breaches of expectation" are important
      > sources of information about our models of the world: they
      > suggest that our models must be amended.
      >
      > >
      > > BTW, the minimum length principle probably doesn't work well in
      > > principle, because such codes are
      > > a) not fault tolerant
      > > b) not efficient
      > >
      > > in general. I suspect we'd have to take into account some other
      > > abstract optimization goals for a real intelligence.
      >
      > Interesting point, agreed.
      >
      > > > I understand Barlow's idea as saying that sometimes
      > > > organisms will retain some compressed patterns that are a
      > > > little bit worse (read: longer) than some other patterns,
      > > > just because the former is more *useful* than the latter.
      > >
      > > We'd have to formalize "useful" here.
      >
      > Perhaps this is not a simple task. It is really a vague
      > word.
      >
      > >
      > > > This is something that can only be evaluated when the
      > > > organism is interacting with its environment, and I take
      > > > this as another suggestion that perception of regularities
      > > > is only half of the equation: the agent must be allowed to
      > > > "play" with its world.
      > >
      > > Well, the agent must be allowed to act. The world is lawful,
      > > therefore we learn the laws, and apply these laws to predict the
      > > future, then we can plan, and act.
      >
      > Most of the world seems lawful, but there are many processes
      > that don't fit nicely. Perhaps intelligence is related to the
      > ability to classify the things that are surprising (but
      > tend to be lawful when "understood") versus the things that
      > are also surprising but random (and therefore irrelevant).
      >
      > Sergio Navega.
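
Sergio's closing point about "breaches of expectation" lends itself to a small
illustration. The toy model below is only a sketch of that idea, not anything
proposed in the thread: the event names, the add-one frequency estimate, and
the surprise threshold are all illustrative assumptions. It keeps running
counts of discrete events and flags an observation as a breach when its
surprisal (in bits) is high, which is the signal that the model of the world
needs amending.

from collections import Counter
import math

class ExpectationModel:
    """Toy 'breach of expectation' detector over discrete events."""

    def __init__(self, surprise_threshold_bits=4.0):
        self.counts = Counter()
        self.total = 0
        self.surprise_threshold_bits = surprise_threshold_bits

    def surprise(self, event):
        # Crude add-one frequency estimate; surprisal = -log2 p(event).
        p = (self.counts[event] + 1.0) / (self.total + 1.0)
        return -math.log2(p)

    def observe(self, event):
        breach = self.surprise(event) > self.surprise_threshold_bits
        # The breach, not the routine observation, carries the useful
        # information: it tells us the model should be amended.
        self.counts[event] += 1
        self.total += 1
        return breach

model = ExpectationModel()
for _ in range(30):
    model.observe("aunt_teresa_calls")
print(model.observe("aunt_teresa_calls"))   # False: expected, no surprise
print(model.observe("aunt_eunice_calls"))   # True: a breach of expectation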
    • John J. Gagne
      Message 45 of 45, Nov 18, 2004
        feedbackdroids:

        you said:

        >
        Shannon's theory has to do with the reliability of channels, not with
        the meaning of the data flowing over the channels, although he does
        assign a higher weighting to less-frequent data values. In that
        sense, novel data is more important - regardless of what they are.
        >

        Yes, I believe I said very much the same thing:

        "that an optimal code can always be developed with the desired
        amount of redundancy to safeguard the message no matter how noisy
        the channel
        is. It does not address how the message acquired its "meaning" from
        the perspective of the transmitter or receiver of the message."
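
As a toy illustration of that point (purely a sketch: the 3x repetition code,
the majority vote, and the bit-flip probability are arbitrary choices for the
example), redundancy can protect a message across a noisy channel while saying
nothing at all about what the bits mean:

import random

def encode(bits):
    # Repeat each bit three times (redundancy added at the transmitter).
    return [b for b in bits for _ in range(3)]

def noisy_channel(bits, flip_prob=0.05):
    # Flip each transmitted bit independently with probability flip_prob.
    return [b ^ 1 if random.random() < flip_prob else b for b in bits]

def decode(bits):
    # Majority vote over each group of three received bits.
    return [1 if sum(bits[i:i + 3]) >= 2 else 0
            for i in range(0, len(bits), 3)]

message = [1, 0, 1, 1, 0, 0, 1, 0]
received = noisy_channel(encode(message))
print(decode(received) == message)   # usually True, despite the noise

The code protects the bit pattern equally well whether it spells a phone
number or gibberish; the "meaning" question never enters.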

        >
        Also, I do recall reading that Donald Mackay had some measures of
        information that were more in line with mind and meaning, and not
        with Shannon's ideas - although I haven't read any of DM's work.
        =============
        >

        I have never read any of DM's work…

        >
        What if you're interested in having your robot get around in the 3-D
        spatial world, and identify visual objects?
        >

        I believe that the representations I'm proposing could represent
        data of any number of dimensions (spatial or otherwise). All objects
        are multi-dimensional. Each type of sensor represents some aspect or
        dimension of our reality. The trick is to standardize the data so
        that we can standardize the algorithms that process the data.

        >
        BTW, you might be interested in knowing that vision arose early in
        evolution, during the Cambrian explosion, about 550 MYA. Trilobites
        were the first to have eyes, although they eventually went
        extinct. "In the Blink of an Eye", by Andrew Parker 2003, has a good
        account of this.
        ====================
        >

        Thanks for the info.

        >
        This sounds ok, but how well would it work in a truly general
        environment, such as a 2-D or 3-D spatial world, where you really do
        need a lot of parallel channels? [thus, the CCD cam].
        >

        There is nothing inherently wrong with using arrays such as CCDs.
        The problem is treating the data produced by the CCD as an array.
        What I'm proposing is that if you intend to use a CCD array, then
        you should treat each pixel within the array as a separate
        simple-sensor data channel.
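
One way to picture this (only a sketch; the tiny frame size, the channel
naming, and the use of NumPy are illustrative assumptions rather than part of
the proposal) is to split each CCD frame into per-pixel channels, each
accumulating its own history independently of its neighbours:

import numpy as np

HEIGHT, WIDTH = 4, 4   # a tiny stand-in for a CCD array

def frame_to_channels(frame):
    # One reading per pixel, addressed as an independent "simple sensor".
    return {f"pixel_{y}_{x}": int(frame[y, x])
            for y in range(HEIGHT) for x in range(WIDTH)}

# Each channel keeps its own history; the array is never processed "as a whole".
histories = {name: [] for name in frame_to_channels(np.zeros((HEIGHT, WIDTH)))}

for _ in range(10):   # ten successive frames
    frame = np.random.randint(0, 256, size=(HEIGHT, WIDTH))
    for name, value in frame_to_channels(frame).items():
        histories[name].append(value)

print(len(histories["pixel_0_0"]))   # 10 readings for that single channel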

        >
        Regards ".... These could be strung together to form higher level
        hypothesis ...", etc, I don't know if it will help your scheme or
        not, you might consider something more hierarchical in structure.
        Your scheme sounds rather flat.
        >

        I disagree. That's like saying that symbolic language is too flat
        because of its one-dimensional appearance. Also remember that my
        description is an oversimplification, if for no other reason than
        that all the details are not fully worked out. I do believe that
        each "bit" (for lack of a better term) should have both magnitude
        and direction rather than just direction, as I suggested with the
        (U, D, S) basic patterns.
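
One possible reading of that last remark, sketched below (encoding each step
as an Up/Down/Same symbol paired with the size of the change is my assumption
about how magnitude and direction might be combined; the thread itself does
not fix these details), turns one channel's successive readings into
(direction, magnitude) pairs:

def encode_channel(readings):
    # Turn one channel's successive readings into (direction, magnitude)
    # pairs: U = went up, D = went down, S = stayed the same.
    encoded = []
    for prev, curr in zip(readings, readings[1:]):
        delta = curr - prev
        if delta > 0:
            encoded.append(("U", delta))
        elif delta < 0:
            encoded.append(("D", -delta))
        else:
            encoded.append(("S", 0))
    return encoded

print(encode_channel([10, 12, 12, 9, 15]))
# [('U', 2), ('S', 0), ('D', 3), ('U', 6)]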

        >
        I say this mainly because the brain is arranged in a hierarchical
        manner, where each next level up involves a higher level of
        abstraction of the incoming data. This is especially true as
        regards the visual system. At the retina, you're dealing with
        instantaneous pixel intensities; by the time you get 4-5 levels up
        into the cortex, you're dealing with identification of shapes,
        faces, moving objects, texture, binocular disparity, etc.
        Abstraction and specificity of stimulus increase, and sensitivity
        to position and transience decreases. Some of the cells higher up
        will only respond to very specific stimuli, but once having been
        presented the correct stimulus, they continue to fire away for
        minutes and even hours - obviously some form of memory of past
        events. OTOH, as soon as the light pattern changes, the responses
        of cells at lower levels immediately change.
        >

        Please understand that I am in no way suggesting that this method of
        coding information even remotely resembles how we code information
        within our brains. What I am suggesting is that if it works, then it
        must resemble what we do somehow or other (at some level of
        abstraction of what we do).

        John J. Gagne


        --- In ai-philosophy@yahoogroups.com, "feedbackdroids"
        <feedbackdroids@y...> wrote:
        >
        > --- In ai-philosophy@yahoogroups.com, "John J. Gagne"
        > <fitness4eb@c...> wrote:
        >
        > > Now, while I appreciate the abstract value of Shannon's
        > > Information theory, it seems to be limited to transmitting
        > > "Pre-existing Meaningful information" over a single information
        > > channel.
        >
        >
        > Shannon's theory has to do with the reliability of channels, not
        > with the meaning of the data flowing over the channels, although
        > he does assign a higher weighting to less-frequent data values.
        > In that sense, novel data is more important - regardless of what
        > they are.
        >
        > Also, I do recall reading that Donald Mackay had some measures of
        > information that were more in line with mind and meaning, and not
        > with Shannon's ideas - although I haven't read any of DM's work.
        > =============
        >
        >
        > > But, in my opinion, it is too often the case that robotic
        > > engineers tend to want to equip their creations with very fancy
        > > sensor arrays like digital cameras (or whatever is fashionable
        > > this week) and then try to deal with the information produced by
        > > these arrays as a whole. In my opinion, this is the wrong
        > > approach.
        >
        >
        > What if you're interested in having your robot get around in the
        > 3-D spatial world, and identify visual objects?
        >
        > BTW, you might be interested in knowing that vision arose early in
        > evolution, during the Cambrian explosion, about 550 MYA. Trilobites
        > were the first to have eyes, although they eventually went extinct.
        > "In the Blink of an Eye", by Andrew Parker 2003, has a good account
        > of this.
        > ====================
        >
        >
        > > Again, in my opinion, the integrity of each data channel within
        > > the array must be preserved. As long as we do preserve the
        > > integrity of the individual data channels, then we are sure not
        > > to limit the machine's ability to interpret the information as
        > > it is or as it may be.
        > >
        > ............
        > > These strings are stored and analyzed over time for statistical
        > > occurrences. The more frequent the occurrence the more likely
        > > the relationship.
        > >
        > ............
        > > These could be strung together to form higher level hypothesis
        > > formatted as high level sentences composed of high level
        > > symbols. How far do we take it? Good question… I don't know.
        > > As far as it takes I guess.
        > >
        > > I know this is a bit lacking in details but you should be able
        > > to get the basic idea of how the subjective symbolic
        > > representations are formed by this type of machine. Patterns
        > > within patterns within patterns all represented as a text string
        > > "language" of simple sensor data channels.
        > >
        >
        >
        > This sounds ok, but how well would it work in a truly general
        > environment, such as a 2-D or 3-D spatial world, where you really
        > do need a lot of parallel channels? [thus, the CCD cam].
        >
        > Regards ".... These could be strung together to form higher level
        > hypothesis ...", etc, I don't know if it will help your scheme or
        > not, you might consider something more hierarchical in structure.
        > Your scheme sounds rather flat.
        >
        > I say this mainly because the brain is arranged in a hierarchical
        > manner, where each next level up involves a higher level of
        > abstraction of the incoming data. This is especially true as
        > regards the visual system. At the retina, you're dealing with
        > instantaneous pixel intensities; by the time you get 4-5 levels up
        > into the cortex, you're dealing with identification of shapes,
        > faces, moving objects, texture, binocular disparity, etc.
        > Abstraction and specificity of stimulus increase, and sensitivity
        > to position and transience decreases. Some of the cells higher up
        > will only respond to very specific stimuli, but once having been
        > presented the correct stimulus, they continue to fire away for
        > minutes and even hours - obviously some form of memory of past
        > events. OTOH, as soon as the light pattern changes, the responses
        > of cells at lower levels immediately change.