
Re: [bafuture] Intelligence (was Sleep)

  • Joschka Fisher/joseph anderson
    Message 1 of 17, Dec 1, 2003
      --- Bill Rowan <whrowan@...> wrote:
      > Hi,
      >
      > I don't think the only people who would object to what you are
      > saying have the flaws you single out (being "soft science types" or
      > "having an inferiority complex.") But even if they all did, ad
      > hominem attacks are not a respectable form of argumentation.
      >
      > As to whether your definition of intelligence answers all the
      > questions that need to be answered about intelligence, I think maybe
      > your assessment of the applicability of your theory might be a
      > little overblown. As a mathematician myself, I know that it is easy
      > to feel that way. You may have to let yourself be guided somewhat by
      > other people's comments in that regard. If there is no one you trust
      > to give you feedback, then you have a problem.
      >
      > Bill Rowan

    • J. Andrew Rogers
      Message 2 of 17, Dec 1, 2003
        On 11/30/03 11:57 PM, "Bill Rowan" <rowan@...> wrote:
        >
        > I don't think the only people who would object to what you are saying have
        > the flaws you single out (being "soft science types" or "having an
        > inferiority complex.") But even if they all did, ad hominem attacks are
        > not a respectable form of argumentation.


        Granted about the ad hominem, but the reasons most people (regardless of
        profession or education) disagree with this are reasons that are
        intellectually pretty weak. Yes, this paints with a broad brush, but that
        does not invalidate the use of such a broad brush. A lot of people have
        their own little bit of religion, even when they are nominally very rational
        and thoughtful about many other topics. This particular topic brings about
        a pretty severe religious streak among many reasonably intelligent people
        (e.g. Penrose) that reflects a desire for things to work the way they wish
        rather than the way they are.

        There ARE some significantly unsatisfying consequences to the best theory
        available, an argument often used against it, but none of those
        consequences is contradicted by the evidence, nor is dissatisfaction a
        good reason to discard a theory.


        > As to whether your definition of intelligence answers all the questions
        > that need to be answered about intelligence, I think maybe your assessment
        > of the applicability of your theory might be a little overblown. As a
        > mathematician myself, I know that it is easy to feel that way. You may
        > have to let yourself be guided somewhat by other people's comments in that
        > regard. If there is no one you trust to give you feedback, then you have
        > a problem.


        Here is the problem, as I see it. This has been hashed out *numerous* times
        in forums far more rigorous than this and by people eminently qualified to
        dissect it, and non-axiomatic information theoretic models are exceptionally
        sturdy as a foundation for ANY discussion of intelligence. Anybody who
        claims to have a valid opinion on the matter who knows nothing about this is
        not sufficiently versed in the basics to be taken seriously.

        If you follow these things, there are only two major camps of core theory.
        The first is the people who agree that the model is "it" and then argue over
        expression and implementation (which is at least as complicated a topic).
        The second is the people who agree that it is technically correct but then
        say it is not important for reasons of dubious value to the core argument,
        like handwavy metaphysical arguments, "what if" arguments about the brain
        for things of which there is no evidence (and many times aren't relevant),
        or simply arguments from incredulity -- a philosophical bait and switch.

        Just about every active researcher in core theory today is in the first
        camp, and for good reason. The model is sufficiently strong that when it
        comes to hypothesis selection one can handily bludgeon all other major
        competing hypotheses or show the competing hypothesis to be a narrow
        expression of the general model. There IS only one correct answer, and
        currently only one strong candidate to be that answer.

        Respecting people's opinions for their own sake has no place in math and
        the hard sciences. You can have an opinion, but you had better have a
        damn good reason for it and be able to defend it. As it happens, I was
        one of the very first
        people to propose and unify the currently accepted model many years ago.
        Every argument you can think of, I've probably hashed out with some of the
        best current AI and cognitive theorists, apparently convincingly enough that
        this has slowly become standard theory and I don't have to explain it much
        any more. A lot of proofs and papers have been done on the topic over the
        last few years that bolster the core concept as it has gained popularity.

        At this point, it is kind of like General Relativity in physics; it is
        strong enough theoretically and from evidence that it is difficult to argue
        against (and few arguments are new arguments), but there is no conclusive
        proof that it is actually correct. I will gladly defend against attacks on
        the theory, but it seems futile (now) not to assume that this model is
        correct by default given the evidence available. It is a standard piece of
        core theory for the field.

        You have to understand that from my perspective, it feels like I am arguing
        with Flat-Earthers when people arbitrarily question the validity of the
        model.


        --
        J. Andrew Rogers (andrew@...)
      • Chris Phoenix
        Message 3 of 17, Dec 1, 2003
          "J. Andrew Rogers" wrote:
          > You have to understand that from my perspective, it feels like I am arguing
          > with Flat-Earthers when people arbitrarily question the validity of the
          > model.

          You have to understand that many of the people you're arguing with are
          not questioning the validity, but the applicability. You wouldn't go to
          an economist to find out why your kid prefers chocolate ice cream over
          vanilla. You wouldn't go to a librarian to help you file your email.
          And you have certainly not convinced me that your definition of
          intelligence has anything to do with practical questions of human
          performance.

          If I want to find out whether today's humans are more intelligent than
          today's computers, I'll certainly talk to you. But if I want to find
          out whether a kid will pass a class, or whether an employee is competent
          to do a job, do you have anything whatsoever to contribute to that
          discussion? I really want to know--please answer this question. Does
          your study of intelligence have anything to say about individual human
          performance on specific tasks?

          If not, then your original entry into this conversation was simply
          off-topic. "Emotional intelligence" may have no more to do with the
          kind of intelligence you study than the "bug" that gives you a runny
          nose has to do with the "bug" that entomologists study. That doesn't
          mean the concept is wrong, flawed, or useless. It means you are
          off-topic. An entomologist might correct someone who called a spider
          a "bug". That would be really pedantic, but at least it would make
          some sense. But an entomologist who corrected someone who called a
          cold a "bug" would be... well, you get the idea. I hope.

          Chris

          --
          Chris Phoenix cphoenix@...
          Director of Research
          Center for Responsible Nanotechnology http://CRNano.org
        • wayne radinsky
          Message 4 of 17, Dec 1, 2003
            J. Andrew Rogers said:

            [definition of intelligence]
            > It does not seem to be controversial to mathematicians at
            > all, most of whom seem quite happy with the definitions
            > they currently use. And indeed, one can show how all these
            > various "intelligences" fit within the single construction
            > used for the purposes of computational theory. In fact,
            > there are published proofs in algorithmic information
            > theory that all "forms" of intelligence are ONLY
            > expressible under such a construction. I have seen almost
            > no disagreement as to whether or not the mathematical
            > definition I referenced above is a good universal
            > description. The hard theory AI guys argue about damn near
            > everything, but there is a lot of general agreement on this
            > point.

            Ok, I went to Google and tried to look up the math on this,
            just so I could get a handle on what you are talking about.

            Let's start with this.


            http://mathworld.wolfram.com/KolmogorovComplexity.html


            Kolmogorov Complexity -- The complexity of a pattern
            parameterized as the shortest algorithm required to
            reproduce it. Also known as algorithmic complexity.
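
            (A quick aside: true Kolmogorov complexity is uncomputable, but any
            off-the-shelf compressor gives a computable *upper bound* on it,
            which makes the idea easy to poke at. A little Python sketch of
            mine, nothing authoritative:)

                import os
                import zlib

                patterned = b"01" * 500        # 1000 bytes with an obvious short description
                random_ish = os.urandom(1000)  # 1000 bytes with (almost surely) no short one

                # Compressed size upper-bounds Kolmogorov complexity; the true
                # minimum program length cannot be computed.
                print(len(zlib.compress(patterned)))    # tiny
                print(len(zlib.compress(random_ish)))   # ~1000, i.e. incompressible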

            Ok, so I'm thinking of a simple example, which has been
            studied to death by computer scientists. I figure, using a
            concrete example will make this easier to think about. And
            my example is sorting algorithms. You have a big list of
            numbers, and you want them sorted from smallest to biggest.
            We know that the QuickSort algorithm


            http://ciips.ee.uwa.edu.au/~morris/Year2/PLDS210/qsort.html


            is, in practice, usually the fastest at sorting the list. (For
            those not familiar, the above link has a pretty javascript
            animation showing quicksort in action.) At least, if the list is
            random to begin with -- if it is "almost sorted" then another
            algorithm (insertion sort) ends up being faster. Also important
            to note: "fastest" is a practical claim, not a proven one. For
            comparison-based sorting there *is* a proven lower bound of
            roughly n*log(n) comparisons, which QuickSort meets on average,
            but nobody has proven that some other algorithm can't beat it by
            constant factors -- so a faster sort could still turn up.

            But QuickSort is not the simplest way to sort a list. The
            simplest way is probably the Bubble Sort


            http://linux.wku.edu/~lamonml/algor/sort/bubble.html


            Again, nobody's ever proven this is the simplest way to
            sort numbers.

            We can think of the "Kolmogorov Complexity" of each
            algorithm as being roughly the length of the code needed to
            perform each sort.
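
            To make both "size" and "speed" concrete, here's a rough sketch of
            my own (only a loose proxy -- source length depends on language and
            coding style, and real Kolmogorov complexity is defined relative to
            a fixed reference machine):

                import inspect
                import random
                import time

                def bubble_sort(a):
                    # repeatedly swap adjacent out-of-order pairs until done
                    a = list(a)
                    for i in range(len(a)):
                        for j in range(len(a) - 1 - i):
                            if a[j] > a[j + 1]:
                                a[j], a[j + 1] = a[j + 1], a[j]
                    return a

                def quick_sort(a):
                    # recursively partition around a pivot element
                    if len(a) <= 1:
                        return list(a)
                    pivot, rest = a[0], a[1:]
                    return (quick_sort([x for x in rest if x < pivot]) + [pivot]
                            + quick_sort([x for x in rest if x >= pivot]))

                data = [random.randrange(10**6) for _ in range(2000)]

                for sort in (bubble_sort, quick_sort):
                    start = time.perf_counter()
                    assert sort(data) == sorted(data)
                    elapsed = time.perf_counter() - start
                    # "size" proxy: length of the source text; "speed": wall time
                    print(sort.__name__, len(inspect.getsource(sort)), round(elapsed, 4))

            The timing gap is dramatic on random data, while the source-length
            gap is small and style-dependent -- which is exactly why code
            length is only a crude stand-in for Kolmogorov complexity.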

            Now, if we look at web pages about Algorithmic Information
            Theory,


            Introduction to Algorithmic Information Theory
            http://szabo.best.vwh.net/kolmogorov.html


            we find that the AIT wonks consider the "simplest"
            algorithm to be the most "intelligent".

            Why the emphasis on size rather than speed? Well, it seems
            that if you focus on size, it allows you to do some
            interesting things. You can draw a connection between it
            and the concept of entropy in thermodynamics, and develop a
            sort of information-theory-centric version of entropy. To
            quote G. J. Chaitin,


            "Program-size complexity in AIT is analogous to entropy in
            statistical mechanics. Just as thermodynamics gives limits
            on heat engines, AIT gives limits on formal axiomatic
            systems. "

            http://www.cs.auckland.ac.nz/CDMTCS/chaitin/unm2.html


            And you can develop a sort of "incompleteness theorem"
            similar to what Gödel developed for provable theorems, and
            apply it to computation.


            [PDF] Computers, Paradoxes, and the Foundations of
            Mathematics
            http://www.cs.auckland.ac.nz/CDMTCS/chaitin/amsci.pdf


            Now, here's where I come back to the concept of genetic
            fitness.

            Living organisms survive by meeting the survival
            requirements of the environment in which they live. If you
            are a sort organism (you need a little imagination here),
            and all the problems you ever encounter in your life are
            sorting problems, then we could say that the QuickSort
            species is the most "intelligent" (if we judge by speed) or
            the Bubble Sort species is the most intelligent (if we
            judge by size).

            But the nature of evolution is such that the "survival
            problem" is constantly changing -- so at some point,
            perhaps you can't survive if the only problem you know how
            to solve is how to sort things. You might need to know a
            few other things, like sorting words, and knowing that the
            umlauted ö in Gödel comes after the normal o, even though
            they come in a completely different numerical order. You
            might need algorithms for drawing windows and buttons and
            menus, and displaying the numbers you are sorting in
            columns, and adding and multiplying them. And so on, and
            eventually you end up with something like, say, VisiCalc.

            Now of course computer programs are written by people, while real
            organisms are not programmed but evolve; still, you can think of
            the DNA plus various exogenic factors as representing the
            algorithm that the organism uses for survival.

            And here's where I think you are getting into trouble when
            you try to talk about "general" intelligence. When you talk
            about Kolmogorov Complexity, you're talking about the
            complexity of an algorithm that solves a *specific* problem
            -- whether it is simple like sorting, or complex like
            computer vision. But in evolution, the "problem that needs
            to be solved" to survive is open-ended and always changing.

            And there isn't much evidence that our brains possess any
            "general intelligence" system, but rather that we possess
            various "modules" (I'm using Pinker's terminology here) for
            various aspects of intelligence, and that as we use more
            brain-scanning technology, we are able to map out the brain
            anatomy and see what parts of the brain are used for
            various high-level functions. For example, there are parts
            of the brain devoted to vision, and there are parts of the
            brain devoted to visualization. And there are parts
            of the brain devoted to language -- different parts for
            processing what we hear and for generating speech. And
            there are parts of the brain for dealing with other people
            -- recognizing faces, dealing with office politics. And
            there are parts dealing with emotions and goal-setting and
            deciding what's important to pursue in life. And you can go
            on and on. But the important point is: the human brain does
            *not* work by just throwing a few hundred billion neurons
            into a skull and magically getting all forms of
            intelligence out the other end.

            There are countless examples of people who are very
            intelligent in one area and poor in another. Mariah Carey
            is the world's greatest singer, but she sucks as a movie
            star. And the classic example is the "geeks" who are good
            at solving math problems but bad at throwing footballs or
            getting dates with girls. What is popularly referred to as
            "emotional intelligence", I would call "political
            intelligence", since it is really about succeeding in an
            environment where alliances are formed and broken and people
            lie and back-stab each other and so on. I suspect the world
            has many great mathematicians who are really bad at
            politics. Although it also has mathematicians who are good
            at it -- those are the famous ones.

            It seems to me that, at the end of the day, the ultimate
            decision about what's "intelligent" is genetic fitness.
            Ultimately, what fails to survive just disappears. Simple
            as that. That's why I think genetic fitness is the proper
            objective measurement of intelligence.

            Some questions.

            - Why emphasize size instead of speed? After all, when you
            take an IQ test, the test is timed. But no measurement is
            made of the size of your head or the number of neurons in it.

            It seems to me that natural selection operates on *both*.
            Larger genomes are more subject to error from mutation. But
            a faster-responding organism has an edge.

            Another data point: In practice, QuickSort is in more
            widespread use than Bubble Sort. This suggests that speed
            has greater survival advantage, though admittedly an
            unscientific measurement.

            John Smart's theory of technological advance by MEST
            compression emphasizes all four axes -- matter, energy,
            space, and time. I can't prove the theory, but from an
            observational point of view it seems correct. So it seems a
            mistake to define intelligence by only one axis -- size
            (matter or space) or speed (time).

            - Can evolution be proven to be open-ended? I've long
            suspected that Gödel's incompleteness theorem implies that
            evolution has no end. Can it?

            - One final question, which is how can you define
            intelligence with such certainty?

            I used to believe that mathematics was a system of "pure
            logic" and therefore immune to the logical fallacies of the
            human thought.

            But I now know that mathematics is really an imperfect
            system that has been patched together for thousands of
            years whenever inconsistencies arise. For example, I can
            prove to you that 1 = 0.

            Start with:

            x = 1

            Multiply both sides by x:

            x^2 = x

            Subtract 1 from each side:

            x^2 - 1 = x - 1

            Factor x^2 - 1 into (x+1)(x-1):

            (x+1)(x-1) = x - 1

            Divide both sides by x-1:

            x + 1 = 1

            Subtract 1 from each side:

            x = 0

            Q.E.D.
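
            (Mechanize the steps and the bogus one announces itself; a trivial
            sketch:)

                x = 1
                # Every step is fine until "divide both sides by x - 1",
                # which for x = 1 is a division by zero:
                print((x**2 - 1) == (x - 1))  # True: both sides equal 0
                print((x**2 - 1) / (x - 1))   # raises ZeroDivisionError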

            Of course, mathematicians, when they encountered this
            problem, just made up a new rule that prevents the
            inconsistency from showing up any more. And the rule is:
            you can't divide by 0 (because x - 1 is 0). But that's how
            all mathematics works: you trudge along until some logical
            inconsistency crops up, then you invent a new rule that
            resolves the conflict, and then continue. You continue on
            until, say, the concept of "infinity" causes a problem. So
            then you invent "countable infinities" and "uncountable
            infinities" to resolve the conflict, and so on. One of the
            things I find a bit unsettling about AIT, as I have read so
            far, and granted I have just started learning about it, is
            that it depends so much on unmeasurable things like the
            "minimum size algorithm" -- Bubble Sort looks like the
            simplest algorithm, but you can't prove it's the simplest,
            and yet you are building all this mathematics on the
            theoretical concept of a minimum possible algorithm, and
            then using those proofs to define "intelligence". But perhaps I
            shouldn't be bothered by this.

            At any rate, mathematics is not a perfect system but rather
            a system patched together by humans. And the main result of
            this imperfection of mathematics isn't that mathematical
            theorems are wrong (they are provably right, after all),
            but rather that there are infinitely many of them and that
            we'll never be able to know them all.

            Here's a webpage written by G. J. Chaitin where he claims
            that mathematics proves that "The world of mathematical
            ideas has INFINITE complexity!" (emphasis his):


            From Philosophy To Program Size
            http://www.cs.auckland.ac.nz/CDMTCS/chaitin/eesti.html


            While I have no doubt that AIT theorists are proving many
            things, I question whether you can use those proofs as an
            absolute definition of "intelligence". I would say
            intelligence, in the real world, means adaptability to the
            real world environment. The ultimate laws of physics are
            not known, and it seems to me like the problem of
            adaptability in the real world cannot be described as a
            mathematical function. You can define it mathematically in
            a particular domain perhaps (a fitness function for a
            particular problem) and then find algorithms with minimal
            Kolmogorov complexity. What you can't do is define the
            algorithm with the lowest Kolmogorov complexity that solves
            all possible problems. (Of course then you have to ask if
            you're talking about the infinite set of all possible
            problems or just the ones that could actually happen in the
            finite known universe.)

            So if mathematics has infinite complexity, does evolution
            also? And if evolution has infinite complexity, does not
            the intelligence needed to survive in exponentially
            increasing complexity also potentially have infinite
            complexity? Apparently -- see my webpage:

            Exponential Change
            What is the Technological Singularity? It is the end result
            of the exponential rate of change of technology.
            http://www.singularityinvestor.com/exponential.php


            which illustrates evidence that the local complexity of the
            universe is increasing exponentially.

            Oh, but wait, when we talk about "infinite complexity" are
            we talking about a countable infinity or an uncountable
            infinity here?

            On this page G. J. Chaitin says "I must confess that AIT
            makes a large number of important hidden assumptions! What
            are they?"


            On the intelligibility of the universe and the notions
            of simplicity, complexity and irreducibility
            http://www.cs.auckland.ac.nz/CDMTCS/chaitin/bonn.html


            Wayne



            --- "J. Andrew Rogers" <andrew@...> wrote:
            > Bill Rowan <whrowan@...> wrote:
            >>
            >> I tend to agree, but this is still controversial and
            >> deservedly so. What makes you such a big authority?
            >
            > Controversial to who? The only people who this is
            > controversial to are either 1.) "soft science" types who
            > aren't so hot at doing rigorous construction, and 2.)
            > people who have an inferiority complex whenever they feel
            > they are being measured by a rigorous standard. In other
            > words, most of the "controversy" is nothing more than
            > emotional protectionism nominally bolstered by really thin
            > reasoning. (And then there is the huge audience of people
            > who haven't thought too much about the topic one way or
            > another.)
            >
            > It does not seem to be controversial to mathematicians at
            > all, most of whom seem quite happy with the definitions
            > they currently use. And indeed, one can show how all these
            > various "intelligences" fit within the single construction
            > used for the purposes of computational theory. In fact,
            > there are published proofs in algorithmic information
            > theory that all "forms" of intelligence are ONLY
            > expressible under such a construction. I have seen almost
            > no disagreement as to whether or not the mathematical
            > definition I referenced above is a good universal
            > description. The hard theory AI guys argue about damn near
            > everything, but there is a lot of general agreement on this
            > point.
            >
            > (My only claim to authority is in algorithmic information
            > theory, which is exceedingly relevant to this discussion.
            > If any "science" or math was to be done on this matter, it
            > would be in this domain.)
            >
            >> I think these questions need to be resolved through
            >> scientific research. Investigation of the human (and
            >> nonhuman) genome will certainly play a role in this
            >> research. And something else that I think will help
            >> society to reach more of a consensus on these questions is
            >> our future ability to manipulate genetics to ensure that
            >> children are more intelligent or have other good qualities.
            >> Availability of such methods to ordinary people would go a
            >> long way toward getting people to look at the science, and
            >> not just fight about all of this.
            >
            > Most mathematicians would assert that intelligence is an
            > intrinsic machine property independent of machine
            > implementation, and they would be correct as a matter of
            > core theorems assuming one accepts the mathematical
            > definition of "intelligence". The study of genetics will
            > tell us almost nothing about intelligence, as it only has a
            > limited control of the expression, and the degrees of
            > freedom allowed in expression by the math means that one
            > will have a difficult time reverse-engineering intelligence
            > -- expression is sort of a one-way hash function of the
            > core algorithms at anything above the substrate level.
            >
            > Most of the controversy these days, from a theoretical
            > standpoint, is in implementation and design of substrates
            > that meet the mathematical description. A very difficult
            > problem.
            >
            >
            > J. Andrew Rogers






          • J. Andrew Rogers
            Message 5 of 17, Dec 1, 2003
              On 12/1/03 4:33 PM, "Chris Phoenix" <cphoenix@...> wrote:
              >
              > I really want to know--please answer this question. Does
              > your study of intelligence have anything to say about individual human
              > performance on specific tasks?


              In the theoretical abstract, yes. If you are looking for discrete metrics
              for a specific task, not really. But then that would have nothing to do
              with intelligence other than demonstrating a nominal baseline capacity that
              means little.

              One cannot make any assertions about the Kolmogorov complexity (read:
              "intelligence") of a machine by its expression of an algorithm (read:
              "task") beyond the Kolmogorov complexity *of* the algorithm expressed. In
              other words, one will not be able to distinguish the output of someone
              with a frigid IQ from that of a real genius in any trivially
              quantifiable domain. If one makes (shaky) assumptions about internal
              organization (e.g. identical average model distribution) one can
              theoretically *test* relative intelligence using the predictive limit
              theorems for finite state machinery, though one cannot actually measure it
              in absolute terms.

              In short, there is nothing you can do outside of a black box intelligence,
              like humans, that will generate a correct metric for intelligence as it
              relates to any specific task since the intelligence at a task is bound by
              the intelligence of the system, short of peering inside to measure the
              effective Kolmogorov complexity of the machine. You can fake metrics if you
              make a number of assumptions, but those metrics are only as good as the
              assumptions (i.e. "not very") and the assumptions themselves are generally
              untestable. This, among other reasons, is why I generally consider IQ
              measurements of black box intelligences to be of very limited utility.


              > If not, then your original entry into this conversation was simply
              > off-topic. "Emotional intelligence" may have no more to do with the
              > kind of intelligence you study than the "bug" that gives you a runny
              > nose has to do with the "bug" that entymologists study. That doesn't
              > mean the concept is wrong, flawed, or useless.


              "Emotional intelligence" is wrong, flawed, and useless because it is
              semantically null. It isn't like I am not familiar with the term and I have
              gone rounds arguing it with the "emotional intelligence" proponents (not
              very interesting). If it can't be define in rigorous terms, you sure as
              hell can't measure it meaningfully. And the source of this digression is
              using the term "intelligence" to nominally denote something that has no
              relation to "intelligence" as used in any other domain.

              So what we are left with is words used outside of their standard usage for
              no good reason, aggravated by the fact that no one really knows what the
              non-standard usage actually means. In my original words, "nonsense".

              I personally don't care if people prattle on about "emotional intelligence"
              as long as they don't expect that they should be taken seriously. Unless,
              of course, they give people a reason to take them seriously, but none have.
              Maybe I'm just being curmudgeonly, but I don't think so.

              My primary concern: People value "intelligence", which in general usage has
              a clear relation to more rigorous definitions, and somebody somewhere is
              trying to ride on the social coattails of that term but with a fuzzy
              definition that they can manipulate to suit their agenda. Thanks, but no
              thanks.

              It has less to do with me being pedantic and more to do with it actually
              being nonsense in almost any way it can be framed. I could make up new terms
              all week, but those terms don't have instant meaning, never mind
              credibility, simply because I made them up using words that already exist in
              the English language.


              --
              J. Andrew Rogers (andrew@...)
            • J. Andrew Rogers
              Message 6 of 17, Dec 2, 2003
                On 12/1/03 7:35 PM, "wayne radinsky" <spodware@...> wrote:
                >
                > Kolmogorov Complexity -- The complexity of a pattern
                > parameterized as the shortest algorithm required to
                > reproduce it. Also known as algorithmic complexity.


                An important concept in algorithmic information theory closely related to
                this which you missed is Solomonoff induction. Conceptually, Solomonoff
                induction is the inverse of an algorithm generating a pattern; that is, it
                allows you to generate the optimal algorithm *from* the pattern.
                Theoretically it is a much deeper and extremely important construct with
                some very interesting properties, but this gives a good conceptual idea of
                it. It is worth noting that it is the domain of some "hard problems" in
                theoretical computer science.
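
                A toy caricature, for intuition only (my sketch: the real
                construction sums a 2^-length prior over *all* programs of a
                universal machine and is uncomputable, and the mini-"programs"
                and their bit lengths below are made up):

                    # Weight each candidate "program" by 2^-(length in bits),
                    # keep the ones consistent with the observed data, and
                    # predict the next symbol from the weighted vote.
                    observed = "01010101"

                    hypotheses = [
                        # (name, assumed description length in bits, generator)
                        ("repeat '01'", 4, lambda n: ("01" * n)[:n]),
                        ("repeat '0101'", 6, lambda n: ("0101" * n)[:n]),
                        ("'01010101' then ones", 10, lambda n: ("01010101" + "1" * n)[:n]),
                        ("all zeros", 2, lambda n: "0" * n),
                    ]

                    consistent = [(2.0 ** -bits, gen) for _, bits, gen in hypotheses
                                  if gen(len(observed)) == observed]
                    total = sum(w for w, _ in consistent)
                    p_next_zero = sum(w for w, gen in consistent
                                      if gen(len(observed) + 1)[-1] == "0") / total
                    print(p_next_zero)  # ~0.99: shortest consistent programs dominate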


                > Again, nobody's ever proven this is the simplest way to
                > sort numbers.
                >
                > We can think of the "Kolmogorov Complexity" of each
                > algorithm as being roughly the length of the code needed to
                > perform each sort.


                Of course, one of the core theorems that is worth keeping around in your
                head is that all functionally equivalent algorithms have the same Kolmogorov
                complexity independent of nominal implementation complexity. Or to put it
                another way, the Kolmogorov complexity is invariant even in the hands of an
                awful programmer.


                > Now, if we look at web pages about Algorithmic Information
                > Theory,
                > we find that the AIT wonks consider the "simplest"
                > algorithm to be the most "intelligent".


                This would be an incorrect characterization. The smallest implementation is
                also one with the highest entropy, but all functionally equivalent
                algorithms have the same Kolmogorov complexity. The term that should be
                used here is "efficiency" rather than "intelligent". (Unfortunately, you
                carry this through the rest of the post.)


                > Why the emphasis on size rather than speed? Well, it seems
                > that if you focus on size, it allows you to do some
                > interesting things.


                Important point: The "speed" of a machine only limits how long it takes to
                get a correct answer for a given algorithm. The "size" (Kolmogorov
                complexity) of a machine determines whether or not a correct answer is even
                possible. To make it more complicated, even if a correct answer isn't
                possible, a good approximation *is* possible, with a limit of accuracy that
                is a function of relative Kolmogorov complexity on a machine too small to
                give a correct answer. The first point is basic, while the second is quite
                a bit more complicated to explain but I thought I would throw it in anyway.
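
                To illustrate the second point loosely (my own toy, not a
                theorem): give a frequency-counting predictor too little state
                to represent a period-6 source and it can only approximate;
                give it enough state and it locks on.

                    import collections

                    def online_accuracy(seq, k):
                        # Predict each symbol from a table over k-symbol
                        # contexts; the table is the "machine", and its
                        # potential size grows with k.
                        counts = collections.defaultdict(collections.Counter)
                        correct = 0
                        for i in range(k, len(seq)):
                            ctx = seq[i - k:i]
                            guess = counts[ctx].most_common(1)[0][0] if counts[ctx] else "0"
                            correct += (guess == seq[i])
                            counts[ctx][seq[i]] += 1
                        return correct / (len(seq) - k)

                    source = "001011" * 200            # a period-6 pattern
                    print(online_accuracy(source, 1))  # too small: ~2/3, approximate only
                    print(online_accuracy(source, 5))  # big enough: near-perfect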


                > You can draw a connection between it
                > and the concept of entropy in thermodynamics, and develop a
                > sort of information-theory-centric version of entropy. To
                > quote G. J. Chaitin,
                >
                > "Program-size complexity in AIT is analogous to entropy in
                > statistical mechanics. Just as thermodynamics gives limits
                > on heat engines, AIT gives limits on formal axiomatic
                > systems. "


                There are more useful formulations of this. The laws of thermodynamics are
                perfectly describable as a finite state transaction theoretic computational
                system. Very neat.

                As an aside without the required digression, one can have a mathematically
                infinite state system that can be modeled as a purely finite state system
                for most intents and purposes if one bounds a couple of additional
                properties of the system.


                > And you can develop a sort of "incompleteness theorem"
                > similar to what Gödel developed for provable theorems, and
                > apply it to computation.


                As an extension of this, one can also prove that all finite state machines
                have a property that is essentially analogous to "free will" while still
                being fundamentally deterministic. It is interesting in that it makes a
                very convincing model of "free will" that is not the binary "free or not"
                perspective that many people assert on the issue. This falls out of an
                inequality related to the theorem I mentioned above regarding functional
                equivalence having equivalent Kolmogorov complexity.

                Note: This is related to the Invariance Theorem, which is normally given in
                a form that is inclusive of infinite state systems (e.g. in Li & Vitanyi).
                The purely finite state version is different in an important way, for those
                really interested in getting into this.


                > Now, here's where I come back to the concept of genetic
                > fitness.
                >
                > Living organisms survive by meeting the survival
                > requirements of the environment in which they live. If you
                > are a sort organism (you need a little imagination here),
                > and all the problems you ever encounter in your life are
                > sorting problems, then we could say that the QuickSort
                > species is the most "intelligent" (if we judge by speed) or
                > the Bubble Sort species is the most intelligent (if we
                > judge by size).


                Just to stop you right here, I would refer you back up this post a bit. Two
                different sorting algorithms have the same Kolmogorov complexity and
                therefore the same "intelligence"; one can make several arguments from
                theory why this has to be the case. From the standpoint of AIT, you are not
                running two different algorithms, but two different implementations of the
                SAME algorithm. Another part where this might get confusing is that while
                the *apparent* complexity of the machines is identical, the *intrinsic*
                complexity of the machines may actually be different depending on
                implementations, something that isn't measurable without peeking inside the
                box.

                The importance of Kolmogorov complexity to intelligence is that it puts a
                hard theoretical limit on the capacity and accuracy of the
                discovery/learning process described by Solomonoff Induction.


                > And here's where I think you are getting into trouble when
                > you try to talk about "general" intelligence. When you talk
                > about Kolmogorov Complexity, you're talking about the
                > complexity of an algorithm that solves a *specific* problem
                > -- whether it is simple like sorting, or complex like
                > computer vision. But in evolution, the "problem that needs
                > to be solved" to survive is open-ended and always changing.


                See Solomonoff Induction, and machinery based on it. It isn't just
                adaptive, but *optimally* adaptive in all possible environments. Provably
                so.


                > And there isn't much evidence that our brains possess any
                > "general intelligence" system, but rather that we possess
                > various "modules"... But the important point is: the human brain does
                > *not* work by just throwing a few hundred billion neurons
                > into a skull and magically getting all forms of
                > intelligence out the other end.


                The modularity of function is adaptive to the environment, and the physical
                conformation of the machine will be generated as an adaptation. The
                "modules" are clusters of high-order information theoretic patterns.
                Functionally specialized modules are a natural emergent property of any
                universal computational substrate that is also an effective expression of
                Solomonoff induction as a response to its environment. For real systems it
                is slightly more complicated, but entirely consistent with everything we
                know about the brain as an expression of an intelligent machine.

                One of the convincing arguments regarding the correctness of this particular
                model is that one can show that it fundamentally demonstrates adaptive
                emergent structural forms that closely map to biological models.



                > There are countless examples of people who are very
                > intelligent in one area and poor in another. Mariah Carey
                > is the world's greatest singer, but she sucks as a movie
                > star. And the classic example is the "geeks" who are good
                > at solving math problems but bad at throwing footballs or
                > getting dates with girls. What is popularly referred to as
                > "emotional intelligence", I would call "political
                > intelligence", since it is really about succeeding in an
                > environment where alliances are formed and broken and people
                > lie and back-stab each other and so on. I suspect the world
                > has many great mathematicians who are really bad at
                > politics. Although it also has mathematicians who are good
                > at it -- those are the famous ones.


                The differences are in where intelligence is applied. For example, savvy
                politicians are extremely competent at manipulating every nuance of human
                interaction. Human interaction is a complex protocol like any other, and it
                takes real functional intelligence to perceive and manipulate the human
                patterns that good politicians do. The same could be said about many other
                types of skill domains.

                A brain is a finite resource, but highly adaptable, and will allocate
                "complexity" resources as needed. If you are extremely good at math, a
                disproportionate amount of brain resources will be put to use for that
                domain. At the same time, this may leave relatively few resources for
                domains like politics. Short of an excess of brain resources, a polymath
                is unlikely to be able to compete with a domain specialist, and most
                people become domain specialists in life as a matter of utility
                maximization.


                > Some questions.
                >
                > - Why emphasize size instead of speed? After all, when you
                > take an IQ test, the test is timed. But no measurement is
                > made of the size of your head or the number of neurons in it.


                Covered above. The speed just says how long it takes to come up with an
                answer. The size determines whether or not you can even come up with an
                answer, and determines how good the answer will be if you do come up with
                one. Incidentally, the best physical measurement of intelligence is
                neurons, but the number of links determines efficiency. Given that we all
                have roughly the same number of neurons, efficiency really matters. Note
                that the relationships here between physical quantity and intelligence are
                not neatly linear either.


                > John Smart's theory of technological advance by MEST
                > compression emphasizes all four axes -- matter, energy,
                > space, and time. I can't prove the theory, but from an
                > observational point of view it seems correct. So it seems a
                > mistake to define intelligence by only one axis -- size
                > (matter or space) or speed (time).


                For practical purposes, yes, though I would limit it to time and space (one
                could argue that the other two parameters are a function of these two).
                Intelligence is only a measure of capability, which is arguably the most
                important aspect. No amount of time is a substitute for inadequate
                intelligence. On the other hand, given enough time a machine with
                sufficient space can find the correct answer eventually. Furthermore, one
                can often substitute space for better time performance in implementation if
                necessary, but there is no inverse capability.

                In short, time is fungible, space is not. Limits on learning and high-order
                algorithm discovery are a direct function of space.
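
                (A mundane illustration of the space-for-time substitution,
                sketched in Python with a deliberately extreme example:)

                    from functools import lru_cache

                    def fib_slow(n):
                        # no extra space: exponential time
                        return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

                    @lru_cache(maxsize=None)
                    def fib_fast(n):
                        # spend O(n) space on a cache: linear time
                        return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

                    print(fib_fast(100))  # instant; fib_slow(100) would take ages

                No amount of waiting buys the reverse substitution, which is
                the asymmetry above.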


                > - Can evolution be proven to be open-ended? I've long
                > suspected that Gödel's incompleteness theorem implies that
                > evolution has no end. Can it?


                Evolution is just systems theory. There is no theoretical limit to the
                complexity that can be generated by such processes.


                > - One final question, which is how can you define
                > intelligence with such certainty?


                Nothing really seems to be unexplainable within the current model, which
                makes it very persuasive. The best of the current models maps to and
                predicts reality so well, in addition to being fundamentally elegant, that
                there is little that competes with it. I don't have the certainty of a
                Koolaid drinker, but like thermodynamics, I will accept it as
                conditionally axiomatic until something better comes along. It is new
                enough that there are still outlier bits and pieces being filled in, but
                everything is proving as expected. Personally, I like it a lot because of
                its very high elegance factor and that it is strongly grounded in and
                derived purely from mathematics, with premises that are consistent with our
                general reality as we perceive it (e.g. effective finite state-ness).

                There are also some other interesting points. For example, there have
                been quite a number of interesting proofs and papers that assert that all
                possible forms of domain intelligence are expressible within a universal
                Solomonoff machine construct, and that this is the only possible
                expression of a universally intelligent machine.


                > I used to believe that mathematics was a system of "pure
                > logic" and therefore immune to the logical fallacies of the
                > human thought.
                >
                > But I now know that mathematics is really an imperfect
                > system that has been patched together for thousands of
                > years whenever inconsistencies arise. For example, I can
                > prove to you that 1 = 0.


                This is the other part, which I haven't even mentioned. The "logic" system
                is not a traditional kind used in either mathematics or AI in general. As
                you are suggesting indirectly, there seems to be a real serious brittleness
                in all axiomatic or semi-axiomatic reasoning systems. Which is true.

                Most core theory people today use non-axiomatic reasoning systems, which are
                unusual for many people used to axiomatic logic and reasoning systems. If
                you are willing to forego "correct" answers and accept "good" answers, and
                deal with the occasional apparent irrationality, non-axiomatic reasoning
                systems are generally near-optimal for finite state machinery and can do far
                more with far less than axiomatic systems. I mentioned above that one can
                build a machine that approximates an algorithm that it has insufficient
                space to model "correctly" (i.e. in an axiomatic model-theoretic sense).
                This is possible by using a non-axiomatic calculus that is nonetheless
                deterministic.

                For a very good example of non-axiomatic reasoning, google Pei Wang's NARS.


                > One of the
                > things I find a bit unsettling about AIT, as I have read so
                > far, and granted I have just started learning about it, is
                > that it depends so much on unmeasurable things like the
                > "minimum size algorithm" -- Bubble Sort looks like the
                > simplest algorithm, but you can't prove it's the simplest,
                > and yet you are building all this mathematics on the
                > theoretical concept of a minimum possible algorithm, and
                > then using those proofs to define "intelligence". But perhaps I
                > shouldn't be bothered by this.


                Heh. It is much more complicated than this and definitely not intuitive in
                many ways. Don't worry, you are only scratching the surface. The more you
                dig into it, the more you'll understand a lot of things. You need to read
                up (deeply) on Solomonoff induction; it will change the way you look at
                things. Go ahead, take the Red Pill. :-)

                Seriously though, algorithmic information theory at large is emerging as a
                deeply foundational part of many areas of mathematics. I often describe the
                field as "computational pattern dynamics theory", which gives a more
                concrete idea of what the field is actually about.


                > While I have no doubt that AIT theorists are proving many
                > things, I question whether you can use those proofs as an
                > absolute definition of "intelligence".


                Who knows about "absolute", but they make a bloody good working definition.
                Or more precisely, no one can seem to come up with a case where they don't
                apply.


                > I would say
                > intelligence, in the real world, means adaptability to the
                > real world environment.


                Which roughly paraphrases the definition used in the introduction of many
                related papers.


                > Oh, but wait, when we talk about "infinite complexity" are
                > we talking about a countable infinity or an uncountable
                > infinity here?


                That depends. At most, our universe seems to be countably infinite, as that
                is as much as can be granted and still come up with a model that is
                consistent with what we see.


                > On this page G. J. Chaitin says "I must confess that AIT
                > makes a large number of important hidden assumptions! What
                > are they?"


                Which is somewhat amusing, because the assumptions that are pervasive in
                Chaitin's work are what make it have almost no relevance to AI work. The
                most pervasive and damaging assumption is the myriad of implicit infinities.
                One of the biggest theoretical problems is the assumption of UTMs as the
                default model for computational systems in mathematics. All practical
                systems have to be engineered from FSM assumptions, and many theorems have
                different expressions, or can assert additional properties, when constrained
                to this case. Yet most published expressions are for the universal case.


                --
                J. Andrew Rogers (andrew@...)
              • Troy Gardner
                Message 7 of 17, Dec 2, 2003
                  > "Emotional intelligence" is wrong, flawed, and useless because it is
                  > semantically null.

                  I disagree, and I think you are imposing your definition of intelligence;
                  here's the one from dictionary.com:

                  1) The capacity to acquire and apply knowledge.
                  2) The faculty of thought and reason.
                  3) Superior powers of mind.

                  Notice they don't say what these apply to. If intelligence is the ability
                  to acquire knowledge about and manipulate a complex system in a given
                  domain in order to achieve a desired result, who says that the system has
                  to be logic/math? A state machine and weather patterns can be roughly
                  described through analytic means; what if it's an emotional state machine?
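
                  (A throwaway sketch of what I mean -- the states and
                  transitions are invented purely for illustration:)

                      # A crude "emotional state machine": named states plus
                      # stimulus-driven transitions, nothing more exotic.
                      transitions = {
                          ("calm", "insult"): "angry",
                          ("calm", "praise"): "happy",
                          ("angry", "apology"): "calm",
                          ("angry", "insult"): "furious",
                          ("happy", "insult"): "angry",
                          ("furious", "apology"): "angry",
                      }

                      state = "calm"
                      for stimulus in ["insult", "insult", "apology", "apology"]:
                          state = transitions.get((state, stimulus), state)
                          print(stimulus, "->", state)
                      # calm -> angry -> furious -> angry -> calm: predictable,
                      # hence learnable and manipulable, as in (2) above.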

                  As far as (1): since emotions are more subjective they may be harder to
                  test for, but that doesn't mean they are untestable. FACS, EEGs,
                  biochemical markers, etc. are external ways.

                  Even in my own life, through journaling and introspection, I've learned
                  volumes about my own emotional state machine and, more importantly, have
                  through (2) learned to predict, avoid, and manipulate it.

                  Our heads are wired such that the higher functioning (logic and
                  reasoning) measurable with various IQ tests depends greatly on our
                  ability to deal with emotional impulse control. If you don't think
                  that's the case, I can try making you do math while you're running, or
                  while I shoot your spouse, or while I waft a strong pheromone under
                  your nose. Using definition (3), a more intelligent person would be
                  able to deal with greater degrees of these interruptions to achieve
                  whatever.

                  But averaged out over time, even these interruptions aren't necessary.
                  Take a person with poor impulse control and a high IQ, and a person with
                  a medium IQ and great impulse control, and run them through the race of
                  life, with the works they've achieved/performed/discovered at the end as
                  the measure of applied collective intelligence. The intelligence devoted
                  to impulse control will have the greater effect. If they were black
                  boxes, and one generates more consistent, higher-quality/complexity
                  output than the other, then regardless of whether one graduated from MIT
                  with honors and ends up a street bum while the other flunked out of a
                  junior college but goes on to have a gabillion kids and a mega
                  corporation, what difference does it make how they scored on the SAT?
                  Which would you want on your team?


                  =====
                  Troy Gardner -"How you live your seconds, is how you live your days, is how you live your life..."

                  http://www.troygardner.com -my world, philosophy, music, writings.
                  http://www.troyworks.com -consulting & training in Flash, Java, and C#
                  http://www.intrio.com -helping bridge the gap between the humans and machines. Home of the Flickey.
                • J. Andrew Rogers
                  Message 8 of 17, Dec 2, 2003
                    On 12/2/03 8:46 AM, "Troy Gardner" <thegreyman@...> wrote:
                    >
                    > Notice they don't say what these apply to. If intelligence is the
                    > ability to acquire knowledge about and manipulate a complex system
                    > in a given domain in order to achieve a desired result, who says
                    > that the system has to be logic/math? A state machine and weather
                    > patterns can be roughly described through analytic means; what if
                    > it's an emotional state machine?
                    [...elided...]


                    You are agreeing with me.

                    Analysis of emotional state machines would be standard vanilla domain
                    intelligence. The reduction of experience to (simplified) state machines is
                    what intelligence is all about. It isn't a different "kind" of intelligence
                    but arbitrary domain expertise using regular old-fashioned intelligence. It
                    is no different than the intelligent skill of chefs, sharpshooters, or tax
                    attorneys, just an intelligent machine fed a different body of experiential
                    data that has been highly processed.
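                    To put "reduction of experience to simplified state machines" in
                    concrete terms, here is a toy Python sketch (my own illustration; the
                    moods, stimuli, and log are all invented) that tabulates observed
                    transitions and then predicts from the table:

                    from collections import Counter, defaultdict

                    # Hypothetical experience log: (mood, stimulus, next mood).
                    observations = [
                        ("calm", "insult", "angry"),
                        ("calm", "praise", "happy"),
                        ("angry", "apology", "calm"),
                        ("angry", "insult", "angry"),
                        ("happy", "insult", "angry"),
                        ("calm", "insult", "angry"),
                    ]

                    # "Learning" is just counting which transition followed each
                    # (state, stimulus) pair -- a deliberately simplified model.
                    counts = defaultdict(Counter)
                    for state, stimulus, nxt in observations:
                        counts[(state, stimulus)][nxt] += 1

                    def predict(state, stimulus):
                        """Predict the most frequently observed next state, if any."""
                        seen = counts.get((state, stimulus))
                        return seen.most_common(1)[0][0] if seen else None

                    print(predict("calm", "insult"))    # angry
                    print(predict("angry", "apology"))  # calm

                    The same tabulation works whether the domain is moods, tax law, or
                    wind shear; that is the sense in which it is ordinary domain expertise.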

                    But if this is the case, call it what it is: domain expertise. The
                    problem is that while you may use the term this way, many proponents
                    of "emotional intelligence" try to assert that there is some ineffable
                    "more-ness" to emotional intelligence that they can't articulate,
                    which is the part I object to. I have no problem with the idea that
                    some people have a great deal of domain intelligence when it comes to
                    emotions.


                    > Our heads are wired such that higher functioning (logic and
                    > reasoning), measurable with various IQ tests, depends greatly on
                    > our ability to deal with emotional impulse control. If you don't
                    > think that's the case, I can try making you do math while you're
                    > running, or while I shoot your spouse or waft a strong pheromone
                    > under your nose. Using definition (3), a more intelligent person
                    > would be able to deal with greater degrees of these interruptions
                    > and still achieve the goal.
                    >
                    > But averaged out over time, even these interruptions aren't
                    > necessary. Take a person with poor impulse control and a high IQ,
                    > and a person with a medium IQ and great impulse control, and run
                    > them through the race of life, with the works they've
                    > achieved/performed/discovered at the end as the measure of applied
                    > collective intelligence.


                    This is something only sort of related to intelligence. What it seems you
                    are talking about is biological biasing factors. These affect both long-
                    and short-term goal systems rather than intelligence, which can have some
                    odd effects on the expression of whatever intrinsic intelligence a person
                    has. The metric for dealing with this in most places is "discipline", which
                    has some component of intelligence to it. It is rational to be disciplined,
                    but I think there is evidence to suggest that the magnitude of particular
                    biasing factors varies widely with the individual, which may be part of the
                    reason results are uneven even if you take intelligence into consideration.

                    I personally think that being either too narrow or too broad in domain
                    expertise is suboptimal as a practical matter. People who have excellent
                    math skills but no social or business skills are going to find a loss of
                    practical utility as a result. A rational person actively tunes the balance
                    of their various skill domains in an attempt to maximize the overall
                    utility.

                    As a side note, I'm not sure where you got the apparent impression
                    that I think non-utilitarian metrics (like SAT or IQ scores) or the
                    ability to complete otherwise uninteresting and non-utilitarian work
                    (like a degree program) have any relation to intelligence beyond
                    demonstrating a very basic baseline capacity, particularly since I've
                    stated the contrary many times. Intelligence is best expressed in a
                    person's ability to extract maximum utility out of their life, with
                    discipline (or lack thereof) being a substantial force multiplier of
                    intelligence.
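                    A deliberately crude numerical sketch of the "force multiplier" claim
                    (all figures invented purely for illustration): if realized utility
                    scales roughly like intelligence times discipline, the disciplined
                    medium-IQ profile from your example wins.

                    # Crude sketch: realized utility ~ intelligence * discipline.
                    # The numbers are invented to illustrate the multiplier effect.
                    people = {
                        "high IQ, poor impulse control":     (140, 0.3),
                        "medium IQ, strong impulse control": (105, 0.9),
                    }
                    for label, (iq, discipline) in people.items():
                        print(label, "->", iq * discipline)
                    # 140 * 0.3 = 42.0 < 105 * 0.9 = 94.5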


                    --
                    J. Andrew Rogers (andrew@...)
                  • Bill Rowan
                    Message 9 of 17, Dec 2, 2003
                      I have to add my agreement that the proposed abstract definition of
                      intelligence is less than a full answer to lots of human questions.
                      For example, is it possible to structure education so that people
                      learn faster, or better? Lots of people try to figure out ways to do
                      that. Also, does this abstract definition make predictions about
                      such things that can be tested by experiment? For an example of
                      something along those lines, there is a book titled "Cultural
                      Literacy" in which simple reading comprehension experiments are used
                      to argue for the author's contention that there should be a codified
                      body of fact and fiction that all children in the country should be
                      taught.

                      Bill Rowan
                    • J. Andrew Rogers
                      Message 10 of 17, Dec 3, 2003
                        On 12/2/03 11:22 PM, "Bill Rowan" <whrowan@...> wrote:
                        > I have to add my agreement that the proposed abstract definition
                        > of intelligence is less than a full answer to lots of human
                        > questions. For example, is it possible to structure education so
                        > that people learn faster, or better? Lots of people try to
                        > figure out ways to do that. Also, does this abstract definition
                        > make predictions about such things that can be tested by
                        > experiment? For an example of something along those lines, there
                        > is a book titled "Cultural Literacy" in which simple reading
                        > comprehension experiments are used to argue for the author's
                        > contention that there should be a codified body of fact and
                        > fiction that all children in the country should be taught.


                        People are admittedly a special case, primarily in that their brains also
                        come with a whole bunch of very active biasing vectors that were selected by
                        evolution in conjunction with biology. These are constantly messing with
                        our goal systems in significant ways, and we don't get to select what kinds
                        of biasing go on in our brains. Biasing has no real intrinsic reason to
                        exist in a synthetic machine intelligence, and can be included in a very
                        controlled fashion to bootstrap goal systems if needed.

                        Biasing isn't discussed much with respect to intelligence in the
                        theoretical literature, mostly because machine intelligence will
                        neither need nor have much of it, and because it is really an
                        external force acting on the machinery rather than within it.
                        There is good theory surrounding it, and it is well understood
                        that human brains are frequently dominated by it, but it is
                        usually left out of abstract theoretical discussions of machine
                        intelligence. It is more the domain of biological brain and
                        behavioral specialists.

                        An interesting question is what the optimum amount of biasing actually is
                        from the standpoint of an intelligent machine. Humans likely have too much,
                        but having none is probably too little. I don't think anyone has ever tried
                        to come up with an answer to this question.

                        ObSpeculativeTheory: I have theorized in the past that the evolutionary
                        reason for built-in emotions (a very strong collection of biasing vectors in
                        higher animals) is that it provides a bootstrap goal system that encourages
                        the organism to interact with its environment from birth, thereby
                        bootstrapping effective higher brain function. Without some kind of basic
                        goal system to drive interaction, the brain is just so much underused
                        tissue.
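                        A minimal Python sketch of the bootstrap idea (my own toy, not a
                        claim about biology; the states and the novelty bonus standing in
                        for emotional biasing vectors are invented): before any learned
                        goals exist, a built-in bias alone is enough to force broad
                        interaction with the environment.

                        # Bootstrap goal system sketch: a built-in novelty bonus (a
                        # stand-in for emotion-like biasing vectors) drives exploration
                        # before any learned goal system exists.
                        STATES = ["nest", "meadow", "stream", "woods"]
                        visits = {s: 0 for s in STATES}

                        def novelty_bonus(state):
                            """Built-in bias: unfamiliar states are intrinsically rewarding."""
                            return 1.0 / (1 + visits[state])

                        for step in range(12):
                            # With no learned values yet, the bias alone picks the action.
                            state = max(STATES, key=novelty_bonus)
                            visits[state] += 1

                        print(visits)  # even coverage: the bias forced broad interaction

                        Once learned values exist, the bonus could be annealed away, which
                        is one way to read "included in a very controlled fashion."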


                        --
                        J. Andrew Rogers (andrew@...)
                      • Chris Phoenix
                        Message 11 of 17, Dec 3, 2003
                          "J. Andrew Rogers" wrote:
                          > ObSpeculativeTheory: I have theorized in the past that the evolutionary
                          > reason for built-in emotions (a very strong collection of biasing vectors in
                          > higher animals) is that it provides a bootstrap goal system that encourages
                          > the organism to interact with its environment from birth, thereby
                          > bootstrapping effective higher brain function. Without some kind of basic
                          > goal system to drive interaction, the brain is just so much underused
                          > tissue.

                          Which kind of emotions are you talking about here--the "speed of
                          thought" ones like fear, or the "emotional state" ones like sadness, or
                          the "brain feature" ones like boredom?

                          There are a few features like boredom, attention, and disorientation
                          that seem to be basic consequences of neural-net thinking. We attach
                          emotions and interpretations to them, but I think the emotions are a
                          side effect, not a cause, of the features. Infants have a tendency to
                          look at unfamiliar things, and it seems more instinctive than
                          emotional. Is lack-of-boredom an emotion? Or is the brain wired up at
                          a low level to select away from boring stimuli? I'm thinking the
                          latter.

                          Fast emotions seem to be useful in adults to direct attention and switch
                          brain mode for different situations. I'm not sure they're even involved
                          in motivating interaction with the environment--I think a bit of "brain
                          feature" response tuning is enough to keep an organism interactive. But
                          the fast emotions we associate with the brain features (e.g. boredom ->
                          discomfort) may have a useful reinforcing effect. Note that
                          fast-emotions can be separated from brain-features, as meditation is
                          boring but not distasteful.

                          The emotional states may be an implementation of goal selection and
                          maintenance. How do you select between plans? While you think through
                          them, your emotional state is a chemical summation of your reaction. If
                          you're happy at the end of rehearsing a goal, you act on it. And your
                          emotional state can keep you on task even when the intermediate steps
                          are not rewarding.
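                          A caricature of that "chemical summation" in a few lines of
                          Python (my sketch; the plans and affect numbers are invented):
                          rehearse each plan, sum an affect score over its imagined steps,
                          and act on the plan whose rehearsal ends on the highest note.

                          # Plan selection by affect summation: simulate each plan and
                          # pick the one that leaves the rehearsed mood highest.
                          plans = {
                              "write the paper": [-0.2, -0.3, +0.1, +0.9],  # drudgery, then payoff
                              "watch TV":        [+0.4, +0.1, -0.1, -0.2],  # fun now, flat later
                          }

                          def rehearse(steps):
                              """Think a plan through; return the summed affect."""
                              mood = 0.0
                              for feeling in steps:
                                  mood += feeling
                              return mood

                          print(max(plans, key=lambda p: rehearse(plans[p])))
                          # "write the paper": 0.5 beats 0.2, even though its early steps
                          # are negative -- the summed state keeps you on the long task.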

                          Chris

                          --
                          Chris Phoenix cphoenix@...
                          Director of Research
                          Center for Responsible Nanotechnology http://CRNano.org
                        • J. Andrew Rogers
                          Message 12 of 17, Dec 3, 2003
                            >
                            > Which kind of emotions are you talking about here--the "speed of
                            > thought" ones like fear, or the "emotional state" ones like sadness, or
                            > the "brain feature" ones like boredom?


                            I haven't put a lot of detailed thought into it at this level (hence the
                            "speculative" part), nor is it really in my domain of expertise. But
                            what you wrote sounds reasonable at first blush, and it would seem to be
                            an interaction of all three of these. Once the system gets up and
                            running as it were, it would seem that the goal systems that drive one
                            to interact with the environment are self-sustaining.

                            If a default goal system of sorts is required (and it seems one might
                            be) to bootstrap functional intelligence and help force initial
                            learning, then chemically driven emotional vectors seem to be a
                            reasonable mechanism for accomplishing this biologically. Obviously
                            there is a lot more to it in practice, since parents also tend to
                            interact with their offspring in the smarter animals, providing an
                            additional bootstrap mechanism there.
                          • Joschka Fisher/joseph anderson
                            Message 13 of 17, Dec 3, 2003
                              Well, since neither of you yahoos looks up anything,
                              let's start with a definition of "emotional
                              intelligence":

                              http://www.byronstock.com/whatisei1234.html


                              Stephen Jay Gould's debunking of the misuse of
                              intelligence and intelligence tests (in particular
                              the book "The Bell Curve") is in the following:
                              http://www.dartmouth.edu/~chance/course/topics/curveball.html

                              ...and then the Medical definition of intelligence
                              http://cancerweb.ncl.ac.uk/cgi-bin/omd?intelligence+test
                              <hmmm, not very helpful>

                              However, the psychologists seem to have enough
                              INTELLIGENCE to understand the shortcomings in both
                              the definitions and the measurement tools.

                              Looky-here!
                              http://chiron.valdosta.edu/whuitt/col/cogsys/intell.html

                              (In all personal prejudice... I'd go with the
                              psychologists. They seem to not only know what
                              they're talking about but... hey! It's their field,
                              not the mathematicians'. Especially the pompous ones!)



                              And then there is "CiteSeer", if you want to know the
                              state of this study re: algorithms etc., a.k.a. how
                              the mathematicians, computer people, and others short
                              on human qualities define this amorphous quantity, or
                              at least feign to measure or emulate it.
                              http://citeseer.nj.nec.com/hernandez-orallo99beyond.html


                              Aside from the cat-yelling (here at bafuture)... this
                              should at least get us started. Forward! I hope.

                              joschka fischer ( who else? )



                              --- "J. Andrew Rogers" <andrew@...> a
                              écrit : > >
                              > > Which kind of emotions are you talking about
                              > here--the "speed of
                              > > thought" ones like fear, or the "emotional state"
                              > ones like sadness, or
                              > > the "brain feature" ones like boredom?
                              >
                              >
                              > I haven't put a lot of detailed thought into it at
                              > this level (hence the
                              > "speculative" part), nor is it really in my domain
                              > of expertise. But
                              > what you wrote sounds reasonable at first blush, and
                              > it would seem to be
                              > an interaction of all three of these. Once the
                              > system gets up and
                              > running as it were, it would seem that the goal
                              > systems that drive one
                              > to interact with the environment are
                              > self-sustaining.
                              >
                              > If a default goal system of sorts is required (and
                              > it seems one might
                              > be) to bootstrap functional intelligence and help
                              > force initial
                              > learning, then chemically driven emotional vectors
                              > seem to be a
                              > reasonable mechanism for accomplishing this
                              > biologically. Obviously
                              > there is a lot more to it in practice, since parents
                              > also tend to
                              > interact with their offspring in the smarter
                              > animals, providing an
                              > additional bootstrap mechanism there.
