
Re: [SeattleRobotics] Re: language - neural nets

  • Matthew Tedder
    Message 1 of 53 , Sep 30, 2008
      The bulk of what I've said about neurons comes straight from Eric Kandel
      et al., in "Principles of Neural Science", with other supporting evidence
      in various peer-reviewed journals.

      And, yes, I realize a lot of what I've covered could be discussed almost
      endlessly. For the sake of brevity, it might look as if I make assumptions
      here, but I guarantee any assumptions are few and far between--and only
      because I have not yet recognized them as such. I have been quite rigorous
      about this. As for my bibliography, I collect and reference down to the
      level of the paragraph from peer-reviewed journal articles (primary
      research) and only those textbooks with well-recognized credibility. Next
      to each, I write notes on why I feel the excerpt is important. And, as a
      personal rule, I never cite an author's interpretation of their own
      findings. You will find that I disagree with such interpretations roughly
      half of the time. At present, I have far more references than your typical
      dissertation. I realize this is still not sufficient, as views in this
      area can be very stubborn. And, in Artificial Intelligence, very
      political.

      I'd like to think I no longer make any presumptions, but my own history of
      thinking this has proven false too many times to count. But when have I
      found a published article that didn't make presumptions, seemingly
      unwittingly? I don't think I ever have. It's sad, I think, that it has
      seemingly become almost mandatory for authors to suggest a genetic cause
      behind observed phenomena--and fitting that these suggestions so often
      appear in the discussion section.

      So what makes a theory useful? That it validates well and is predictive.
      Non-experimentally, I have a personal tradition of predicting what I will
      find in the literature before I do.

      What makes a philosophy useful? In this case, even if it didn't validate,
      the ability to create predictive, spatial-temporal models of passively and
      interactively observed external phenomena is not yet otherwise
      accomplished. And this is very useful in the development of autonomous
      robotics.

      I consider that I lean strongly in the direction of nurture over nature,
      but you might notice I recognize specific ways that nature plays a role.
      The relative locations of neurons in the nervous system have enormous
      influence on how functional regions develop. In Steven Pinker's book "The
      Blank Slate", in the chapter "The Blank Slate's Last Stand", even Pinker
      seems to have reduced the clearly hard-coded part largely to the relative
      geographic locations of regions of the visual cortex.

      But seriously--if robotics is to be based on the belief that hard-coding
      is required for interaction in each individual environment, then we are
      constraining its applicability to the miserably limited AI approaches of
      the 1980s--symbolic logic and neural nets.

      Sometimes no part of a system can be understood until you first
      understand every other part. Take, for example, how RTL (Register
      Transfer Language) works. RTL is the intermediate language behind the GNU
      Compiler Collection: each language, such as C, C++, or Fortran, compiles
      first to RTL, which is then optimized generically, then compiled to the
      specific CPU architecture and optimized for it. There is no way to learn
      RTL in a strictly sequential manner. And for the same reason, incremental
      research cannot resolve all questions; ground-breaking research is
      necessary for progress. Such research is lacking, and many (not just I)
      believe it is socially and politically restricted by the ruling dogmas in
      the field of Artificial Intelligence.
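
      The multi-stage idea behind GCC can be sketched in miniature: several
      front ends feed one shared intermediate form, which gets a generic
      optimization pass and is then lowered separately to each target. The
      names below (front_end, the two pretend targets) are invented for
      illustration only; GCC's real RTL is vastly richer than this toy.

```python
# Toy sketch of a multi-stage compiler: one shared intermediate
# representation (IR) between front ends and back ends. Everything here is
# hypothetical, for illustration of the pipeline shape only.

def front_end(expr):
    """'Compile' a tiny infix expression like '2 + 3 * 4' to a stack IR."""
    prec = {'+': 1, '*': 2}          # minimal shunting-yard, + and * only
    out, ops = [], []
    for tok in expr.split():
        if tok.isdigit():
            out.append(('PUSH', int(tok)))
        else:
            while ops and prec[ops[-1]] >= prec[tok]:
                out.append((ops.pop(), None))
            ops.append(tok)
    while ops:
        out.append((ops.pop(), None))
    return out

def optimize(ir):
    """Target-independent pass: fold a fully constant expression."""
    stack = []
    for op, val in ir:
        if op == 'PUSH':
            stack.append(val)
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if op == '+' else a * b)
    return [('PUSH', stack[0])]      # the whole IR folds to one constant

def back_end(ir, target):
    """Lower the shared IR to either of two imaginary instruction sets."""
    if target == 'stackcpu':
        return ['push %d' % v for _, v in ir]
    if target == 'regcpu':
        return ['mov r0, %d' % v for _, v in ir]

ir = optimize(front_end('2 + 3 * 4'))
print(back_end(ir, 'stackcpu'))      # ['push 14']
print(back_end(ir, 'regcpu'))        # ['mov r0, 14']
```

      The point of the sketch is the shape: the front end and back end never
      see each other, only the IR, which is why no single stage can be
      understood in isolation.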

      My goal would demonstrate this. But win or lose, it is a sad fact that
      robotics projects in universities typically involve little more than
      line-following robots. I think it greatly behooves us to encourage
      radical ideas, whether or not you (or I) agree with them.

      Matthew


      On Tue, Sep 30, 2008 at 12:50 PM, dan michaels <oric_dan@...> wrote:

      > --- In SeattleRobotics@yahoogroups.com, "Matthew Tedder"
      > <matthewct@...> wrote:
      > >
      > > Now I think I have solidified an exceptionally beautiful (informal)
      > > theory of neural intelligence--both low-level phenomena and
      > > cognition. I will publish on this, when ready. I am very busy
      > > writing and considering every criticism I can think of, against it.
      > > And while I may not be able to regenerate the functional regions of
      > > the visual cortex, for example, I certainly can automatically
      > > generate systems of different functional regions that often seem,
      > > intuitively, logically necessary.
      > >
      >
      > Well, you might go over to one of the forums, like
      > google-comp.ai.philosophy, and put out some of your ideas [without
      > totally spilling your guts] to see the sort of reactions you get.
      > Basically, I take issue with just about every single thing you've said so
      > far, and could argue about any of it for hours [if I saw fit to spend
      > the time], but this is not the forum for it. As I see it, most of your
      > ideas just do not jibe with my own [very] extensive reading. You
      > handwave away stuff too glibly. The first step in analyzing any idea
      > is to look at the original research paper[s], and see what they're
      > really saying.
      >
      > What I've found is the FIRST thing to ask whenever someone holds forth
      > on anything is .... what are his underlying assumptions?
      >
      > The answer is always very telling. E.g., the underlying assumptions are
      > what turn most arguments in pure philosophy into pure pudding. And I
      > haven't yet seen any neural "model" which doesn't make exceedingly
      > simplistic assumptions from the getgo, just in order to make things
      > mathematically tractable.
      >
      > >
      > > My biggest current problem is that I do not understand what causes a
      > > receptor to form as depolarizing versus hyperpolarizing--but I do know
      > > when/why this is necessary and how to artificially for it.
      > >
      >
      > OTOH, an evolved system doesn't have this quandary. If some random
      > feature happens to work, then keep it. Inhibitory signals happen to
      > add 6 or 8 or so highly useful functions to the nervous system. In
      > different places, inhibition has different effects.
      >
      > >
      > > For the sake of practical use, I have spent the last year trying to
      > > come up with principles for engineering animals based on this theory.
      > > I don't think you could just throw a pile of mixed-up neurons in any
      > > machine and expect it to learn and behave intelligently...
      > >
      >
      > Yes, tabula rasa, blank-slate, random mishmash, just doesn't cut it.
      >
      > >
      > > although, a couple of experiments with rat cortical neurons in
      > > petri dishes wired to the senses and controls of flight simulators
      > > have, in fact, evidenced against me.
      > >
      >
      > I think you missed something here. This is another example of where
      > analyzing the original paper is the first necessary thing in the
      > discussion queue.
      >
      >
      >


      [Non-text portions of this message have been removed]
    • Matthew Tedder
      Message 53 of 53 , Oct 1, 2008
        Actually, I have to completely agree with your points about the
        importance of focusing hard on the fundamentals of reading, writing,
        and arithmetic. It's just astonishing that so many kids spend six hours
        a day at school and miss so much of the basics... the basics are
        teaching them to fish, instead of giving them fish. But another
        fundamental, I think, is the scientific method and critical thinking.
        That should be taught, rigorously, in high schools, too.

        I didn't mean to suggest line-following robots aren't important. I just
        hope this could be, exactly as you said, a building block to bigger and
        better things.

        As for a non-line following robot:

        That's a fun thing to think about. It doesn't feel right to measure
        intelligence as a linear property. Each animal evolves differently for
        survival in its particular environment. Chimpanzees are said to have
        evolved more than humans since our split from a common ancestor (more
        genetic mutations since then).

        But I think there are linear aspects as well. One of these is that
        greater intelligence necessarily implies greater independence, by
        definition. To not do the same thing under the same conditions (such as
        the rules for following a line) is to explore other ways--for example,
        thinking outside the box. Of course, this predicts that increasingly
        intelligent machines increase the risk of rebelling against us. Is
        this ominous?

        <going to sound a bit weird>
        One night at a local bar, a physicist friend (Dr. George Lake) and I
        were discussing the concept of Gaia and the idea that planets are
        intelligent life forms. He said that was ridiculous because if it
        doesn't evolve, it couldn't "get better" and thus become intelligent. I
        agree the idea is ridiculous, but the presumption that evolution is the
        only means to "get better" bothered me--only because it was a
        presumption. I pondered the idea for a time thereafter and eventually
        came up with a principle I call the "wiggly" principle.

        While no physical entity can objectively be said to have a "purpose" in
        the universe (without invoking God), everything does seem to have
        "direction". For example, a rock flying through space has the direction
        in which it flies... normally an orbit around a star or other massive
        body. A purpose is, after all, just a direction with a specific
        end-point (a goal). So, I postulated that the more complex the
        direction an entity has, the more "intelligent" it could become. By
        intelligent, I mean adaptive, to better achieve its direction. For
        example, a rock flying around a star might smack into and push away
        smaller bits of dust or rock but maintain its direction. When it hits
        something its momentum cannot defeat, it may shatter, eventually
        leaving only rocks with stable and secure orbits. The solar system
        itself thus ultimately adjusts itself to a state of harmony. Likewise,
        water has a very complex direction, and a stream of it can overcome
        almost any obstacle by twisting and turning and rising above whatever
        is in its way.

        I imagine this like a key in a keyhole. Putting it in and turning may
        or may not unlock it, but pushing and pulling and wiggling while you do
        it stands a better chance. Or prey trying to escape a predator's grip.
        Vigor alone provides an increased chance....

        I think evolution is one sub-category of this higher, wiggly
        principle--a principle of how entities in the physical world become
        "better". And so, a non-line-following robot would do well with this
        principle.
        </going to sound a bit weird>

        Matthew

        On Wed, Oct 1, 2008 at 11:00 AM, Randy M. Dumse <rmd@...> wrote:

        > Matthew Tedder said: Wednesday, October 01, 2008 1:06 AM
        >
        > > But win or lose, it is a sad fact that robotics
        > > projects in universities typically involve little
        > > more than line-following robots.
        >
        > I think it is wonderful if we have at least line following
        > projects at universities. Many useful motion control and
        > robotics points are covered by line following.
        >
        > It is one of the low hanging fruit, a fairly simple minimal
        > machine configuration that demonstrates a motion based utility.
        > At least it is a first step. Better than no step at all.
        >
        > Likewise I don't think it is sad that grade school doesn't
        > teach much more than foundational reading, writing, 'rithmetic. In
        > fact, I wish they would stick more to the fundamentals. (As an
        > example, in 4th grade, my step-daughter came home talking about
        > the rain forest. We asked her if she remembered when we took
        > her to the rain forest. She said she'd never been. Turns out
        > the pictures of rainbows and pretty birds didn't register at all
        > in her mind with what she actually saw in the Yucatan peninsula.)
        >
        > I agree with your zeal for advanced projects, but I object to
        > any criticism of teaching fundamentals to as wide an audience as
        > possible, lowering the entry barrier for someone who might want
        > to go further.
        >
        > BTW, here's a mind twist for you. Can you write a
        > non-line-following program? And just how intelligent would that
        > look?
        >
        > Randy
        >
        >
        >
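
        Randy's point that line following covers real motion-control ground
        can be sketched in a few lines: read a sensor, compute the error,
        steer proportionally. The sensor model and the gain kp below are
        hypothetical, chosen only to make the toy simulation converge; no
        particular robot is implied.

```python
# Minimal sketch of the motion-control core of a line follower:
# proportional steering from a simulated line-position sensor.
# All values here are illustrative, not from any real hardware.

def steer(error, kp=0.8):
    """Proportional steering command, clamped to the motors' range [-1, 1].
    error = the line's offset from the sensor's center."""
    return max(-1.0, min(1.0, kp * error))

def simulate(offset, steps=50, dt=0.1):
    """Start the robot off the line and let the controller pull it back."""
    for _ in range(steps):
        offset -= steer(offset) * dt  # steering shrinks the offset each tick
    return offset

print(abs(simulate(1.0)) < 0.05)      # True: the robot settles on the line
```

        Even this toy shows the pieces a student has to get right on a real
        robot: sensing, an error signal, a control law, and actuator limits.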

