
Re: Carnap's theory of meaning

  • Peter D Jones
    Message 1 of 7 , Jul 1 1:38 PM
      --- In ai-philosophy@yahoogroups.com, "Eray Ozkural" <erayo@...> wrote:
      > That is, I am hoping that it is completely obvious to you that you can
      > have a probabilistic point of view that doesn't mention any iffy
      > entities like possible worlds.

      For the umpty-umpth time: probabilities are possibilities with numbers
      attached. And you can talk about PW's without reifying them.
    • zeb_6662001
      Message 2 of 7 , Jul 9 9:19 AM
        --- In ai-philosophy@yahoogroups.com, "jrstern" <jrstern@...> wrote:
        >
        > The strong point of a probabilistic approach is that you need have
        > no knowledge of any causal chain from A to Z.
        skipping...
        > To this day, I don't know what to do with any of this. If it
        > works, it works, but I see it as an absence of strong theories,
        > I cannot see the statistical approach as foundational.

        Well, I suppose if you do have the statistics, then you have a
        foundation :)

        I do not want to underestimate the controversies over the
        interpretation of statistics, but IMHO the promising thing about
        statistical inference is the recently developed framework of
        graphical models, which provides a mathematically rigorous way to
        model causal relations among entities and to couple the local
        knowledge about them (in terms of distributions) into a global model.
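        The chain-rule factorization behind such graphical models can be
        sketched in a few lines of Python (a toy example of my own, not from
        the thread): local conditional tables for a three-node chain
        A -> B -> C are coupled into the global joint
        P(A, B, C) = P(A) * P(B | A) * P(C | B).

```python
# Toy Bayesian network A -> B -> C: each table is a piece of "local
# knowledge" (a distribution); the joint is assembled by the chain rule.
p_a = {True: 0.3, False: 0.7}
p_b_given_a = {True: {True: 0.9, False: 0.1},
               False: {True: 0.2, False: 0.8}}
p_c_given_b = {True: {True: 0.6, False: 0.4},
               False: {True: 0.1, False: 0.9}}

def joint(a, b, c):
    """Global joint probability built from the local tables:
    P(a, b, c) = P(a) * P(b | a) * P(c | b)."""
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

# Sanity check: the global joint sums to 1 over all assignments.
total = sum(joint(a, b, c)
            for a in (True, False)
            for b in (True, False)
            for c in (True, False))
print(abs(total - 1.0) < 1e-9)  # True
```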

        By themselves, they yield expert systems: automatons that perform
        specified tasks. But through LEGO-style splits and merges, led by an
        appropriate cognitive architecture, I think bigger problems could be
        solved.

        The lack of strong theories is a problem for proposing such an
        architecture, I think; not for statistical inference itself, nor
        for choosing it as the foundation...

        Murat UNEY
      • jrstern
        Message 3 of 7 , Jul 9 11:43 AM
          --- In ai-philosophy@yahoogroups.com, "zeb_6662001" <e120532@...>
          wrote:
          >
          > --- In ai-philosophy@yahoogroups.com, "jrstern" <jrstern@> wrote:
          > >
          > > The strong point of a probabilistic approach is that you need
          > > have no knowledge of any causal chain from A to Z.
          > skipping...
          > > To this day, I don't know what to do with any of this. If it
          > > works, it works, but I see it as an absence of strong theories,
          > > I cannot see the statistical approach as foundational.
          >
          > Well, I suppose if you do have the statistics, then you have a
          > foundation :)

          Not at all. You have a behaviorist model, and the traditional
          problems of determining whether the categories for your model even
          make sense. The virtue you suggest is that there is no bright line
          between analytic and synthetic, or between schema and data, so any
          combination that works, works. But it is completely non-foundational
          in the way that it works.


          > I do not want to underestimate any controversies on the
          > interpretation of statistics but IMHO the promising thing about
          > statistical inference is the recently developed framework of
          > graphical models which provides a mathematically rigorous way to
          > model causality relations among some entities and couple the local
          > knowledge on them (in terms of distributions) constructing the
          > global.
          >
          > By themselves, they yield expert systems; automatons that perform
          > some specified tasks. But through LEGO type split/merges led by an
          > appropriate cognitive architecture I think bigger problems could be
          > solved.
          >
          > Lack of strong theories is a problem for proposing such an
          > architecture I think; not for statistical inference itself neither
          > for choosing it as the foundation...

          Well hey, it's not that humans are the most accurate possible
          predictive agents. Back in expert system days, I heard it repeatedly
          stated that simple statistical regressions were more accurate than
          most expert systems, and far, far easier to construct. In fact, such
          quantitative systems were often more accurate than the human experts,
          it was said, much to the shock of the human experts (and more
          especially the human non-experts) involved.

          Might be true, for a large class of systems. Wouldn't surprise or
          shock me. But it's not AI, or cognition, or anything of the sort.
          AI is NOT about computing the right answer, or a simple calculator
          would rate very highly indeed, and we would have to rate the pool
          table as most "intelligent" about working out the answers to various
          mechanical problems. It is accurate, sure, in fact it is
          determinative, but that does not make it "intelligent".

          It's a standard argument. For simplicity, I take the instrumentalist
          view that algorithm X is best called AI whatever the results, and
          algorithm Y is best NOT called AI, whatever the results.


          Josh
        • zeb_6662001
          Message 4 of 7 , Jul 10 9:53 AM
            --- In ai-philosophy@yahoogroups.com, "jrstern" <jrstern@...> wrote:
            >
            > Well hey, it's not that humans are the most accurate possible
            > predictive agents.

            ...

            > But it's not AI, or cognition, or anything of the sort.
            > AI is NOT about computing the right answer,

            ...

            I agree, and indeed what I was trying to say is that an
            architecture that would play with those models could lead to AI
            or to something else (non-AI but an expert system; non-AI but a
            fancy automaton). My point is that a probability space is fine
            enough to start with (in the simplest sense, given two random
            variables we are equipped with nice concepts revealing their
            underlying connection). Moreover, these relations are modular in
            some sense when we introduce more variables and some relations
            with the previous ones.

            The term "statistical inference" should not mislead one into
            thinking it would produce the most accurate answer. It would
            yield an approximate but computable answer, depending on the
            model, the algorithm, and the physical entities involved.

            The thing is that probabilistic modeling seems to provide modular
            modeling tools which could be used as bricks by higher-level
            constructs. If these constructs merge and build more complex
            ones, they would still show more or less the same properties as
            the brick models: some sort of scalability, plus one of the
            following: accurate results, approximate results in the long run,
            or totally misleading results.
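            One way to make the "brick" metaphor concrete (an illustrative
            sketch; the tables and names are my own assumptions): two local
            models that share a variable can be merged into a new model with
            the same interface, by summing the shared variable out.

```python
def merge(p_mid_given_in, p_out_given_mid):
    """Compose two conditional tables sharing a middle variable:
    P(out | in) = sum over mid of P(out | mid) * P(mid | in)."""
    out = {}
    for x, mids in p_mid_given_in.items():
        out[x] = {}
        for m, p_m in mids.items():
            for y, p_y in p_out_given_mid[m].items():
                out[x][y] = out[x].get(y, 0.0) + p_m * p_y
    return out

# Two "bricks": P(B | A) and P(C | B), over binary variables 0/1.
p_b_given_a = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}
p_c_given_b = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.4, 1: 0.6}}

# The merged brick P(C | A) has the same shape as its parts, so it can
# be merged again - a small instance of the claimed modularity.
p_c_given_a = merge(p_b_given_a, p_c_given_b)
print(all(abs(sum(row.values()) - 1.0) < 1e-9
          for row in p_c_given_a.values()))  # True
```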

            These are some nice properties which I expect from an
            architecture - a meta-program structure which would be labeled
            as an AI. That's why I ironically pronounced them "foundational":
            not because they would provide accuracy, but because they would
            provide modularity and scalability.

            We talk about probabilities for the umptiest-humptiest time
            because we seem to "naturally" morph the concepts of AI that
            have been mentioned here more than umptiest-humptiest times into
            those of the theory of probability.

            I do think that AI is a matter of architecture...

            MU
          • Eray Ozkural
            Message 5 of 7 , Jul 11 8:01 AM
              On 7/10/07, zeb_6662001 <e120532@...> wrote:

              > The thing is that probabilistic modeling seem to provide kind of
              > modular modeling tools which would be used as bricks by higher level
              > constructs. If these constructs merge and build more complex ones, they
              > would still show more or the less same properties of the brick models;
              > some sort of scalability and one of the following; accurate results,
              > approximate results in the long run or totally misleading results.

              Right. This is also what I meant when I said that a
              probabilistic framework might be able to combine smaller
              machines, which we already have: classifiers,
              function-inference machines, logical planners, etc. I don't
              know whether we can tie together such a variety of machines,
              but why not?
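              One minimal way to tie such machines together probabilistically
              (a sketch under my own assumptions, not a proposal from the
              thread) is a mixture: each machine reports a distribution over
              the same labels, and the combination is a weighted average.

```python
def mixture(expert_outputs, weights):
    """Combine distributions over the same labels:
    P(y) = sum over experts k of w_k * P_k(y)."""
    labels = expert_outputs[0].keys()
    return {y: sum(w * p[y] for w, p in zip(weights, expert_outputs))
            for y in labels}

# Two toy "machines" disagreeing about a binary decision.
classifier_out = {"yes": 0.9, "no": 0.1}  # e.g. a statistical classifier
planner_out = {"yes": 0.4, "no": 0.6}     # e.g. a rule-based component

# Equal weights; weights could instead reflect each machine's
# reliability on held-out data.
combined = mixture([classifier_out, planner_out], [0.5, 0.5])
print(abs(sum(combined.values()) - 1.0) < 1e-9)  # True
```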

              Best,

              --
              Eray Ozkural, PhD candidate. Comp. Sci. Dept., Bilkent University, Ankara
              http://www.cs.bilkent.edu.tr/~erayo Malfunct: http://myspace.com/malfunct
              ai-philosophy: http://groups.yahoo.com/group/ai-philosophy