
Re: [ai-philosophy] Re: Can you learn a language just from text

  • Eray Ozkural
    Message 1 of 35, Jul 1, 2006
      On 7/1/06, Peter D Jones <peterdjones@...> wrote:
      --- In ai-philosophy@yahoogroups.com, "Eray Ozkural" <erayo@...> wrote:

      > The simulator may be perfect, but it is not the fastest. Nor is it
      > connected to other machines fast enough. That's the only significant difference.

      No. It is error-prone. That is why programmers write bugs.

      I don't think even you could dispute that every bug
      in the world was written by a programmer.


      That's not the reason why computer programs are buggy. It is because
      program-writing does not always require one to simulate the program in
      advance, although that is what is usually required of computer
      engineering students in exams and lab work.

      Of course this is a deeper discussion than it seems. Let me just say
      that program-writing is a complex process. For instance, we sometimes
      write programs without simulating anything; we just have a tentative
      proof that they would work. Sometimes people even write experimental
      programs just to see how they turn out, because simulating them in
      one's head would be too slow, or infeasible for the kind of input one
      is interested in.

      So, although the programmer has to know the PRECISE SEMANTICS
      of the programming language, i.e. he has a FULL SIMULATOR in his head,
      its use is much more complex than you portray here. In the
      program-writing process, lots of alternative programs pop up in the
      programmer's head, and only the promising ones survive, for instance.
      In other cases, the programmer tries to break the problem down into
      problems he has solved before, and so on. But at any rate, DURING
      program writing FULL SIMULATION is sometimes expensive, and so the
      programmer exploits the larger cycle of machine-brain interaction to
      help with the design process. That is, he makes use of the simulator
      in the computer; there is nothing strange about that.

      I am talking about the case of evaluating a program, which any
      competent programmer can do. That is another, and in fact much more
      basic, matter entirely. It is simply the fact that the programmer has
      memorized the semantic rules of the language. He can then carry out
      the derivation in his head or, if the program is large, using pen and
      paper. That is what the Turing Machine formalism was all about. In the
      case of a nice and small language this can be really easy.
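
      For concreteness, here is a minimal sketch of such a "nice and small"
      language (the toy language and the eval_expr helper below are
      hypothetical, made up for illustration, not from the original
      discussion). Its whole semantics is five rules, few enough that a
      person can carry out exactly the same derivation by hand:

          # A hypothetical toy language: integer constants, addition,
          # multiplication, variables, and let-bindings. Its full semantics
          # fits in five rules, so evaluating a term by hand is a matter of
          # applying them mechanically, as this function does.
          def eval_expr(expr, env=None):
              """Evaluate a term given as nested tuples:
                   ("num", n)          -> the integer n
                   ("var", x)          -> look x up in the environment
                   ("add", e1, e2)     -> eval(e1) + eval(e2)
                   ("mul", e1, e2)     -> eval(e1) * eval(e2)
                   ("let", x, e1, e2)  -> eval e2 with x bound to eval(e1)
              """
              env = env or {}
              tag = expr[0]
              if tag == "num":
                  return expr[1]
              if tag == "var":
                  return env[expr[1]]
              if tag == "add":
                  return eval_expr(expr[1], env) + eval_expr(expr[2], env)
              if tag == "mul":
                  return eval_expr(expr[1], env) * eval_expr(expr[2], env)
              if tag == "let":
                  _, name, bound, body = expr
                  return eval_expr(body, {**env, name: eval_expr(bound, env)})
              raise ValueError("unknown term: %r" % (expr,))

          # let x = 2 + 3 in x * x  ->  25
          print(eval_expr(("let", "x", ("add", ("num", 2), ("num", 3)),
                           ("mul", ("var", "x"), ("var", "x")))))

      The derivation a programmer carries out on paper is the same tree of
      rule applications this function performs mechanically.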

      Best,

      --
      Eray Ozkural (exa), PhD candidate.  Comp. Sci. Dept., Bilkent University, Ankara
      http://www.cs.bilkent.edu.tr/~erayo  Malfunct: http://www.malfunct.com
      ai-philosophy: http://groups.yahoo.com/group/ai-philosophy
      Pardus: www.uludag.org.tr   KDE Project: http://www.kde.org
    • Eray Ozkural
      Message 35 of 35, Jul 12, 2006
        On 7/13/06, Peter D Jones <peterdjones@...> wrote:
        --- In ai-philosophy@yahoogroups.com, "Eray Ozkural" <erayo@...> wrote:
        >
        > On 7/2/06, Peter D Jones <peterdjones@...> wrote:
        > >
        > > --- In ai-philosophy@yahoogroups.com, "Eray Ozkural" <erayo@> wrote:
        > > > too.
        > > > So what? The sim. in your head is just as good as the one on the
        > > computer,
        > >
        > > Nobody would use a human to do the job when a machine was
        > > available. You are not thinking in terms of engineering --
        > > finding the most resource-efficient solution.
        >
        >
        >
        > Resource efficiency is not a big concern.


        In academia, perhaps. Try telling an IT manager that getting a job
        done in five months is the same as doing it in five days.


        Peter, it is just that simulation has a very specific definition in
        the theory of computation, and that definition works pretty well as
        far as I can tell. So I prefer to use that established definition.

        The simulation business is how Solomonoff's invariance theorem goes.
        You have to look at it in terms of program size: I can run exactly
        the same program through a simulator, since a universal computer can
        simulate any other computer, including other universal computers.
        Simple as that.
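
        For the record, a standard statement of the invariance theorem (the
        notation is mine, not from this thread): for a universal machine U
        and any machine V there is a constant c_{UV}, the length of a
        U-program that simulates V, such that for every string x,

            K_U(x) \le K_V(x) + c_{UV}

        where K_M(x) is the length of the shortest M-program that outputs x.
        The constant depends only on the pair of machines, never on x, which
        is the "exactly the same program" point above.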

        Best,


        --
        Eray Ozkural (exa), PhD candidate.  Comp. Sci. Dept., Bilkent University, Ankara
        http://www.cs.bilkent.edu.tr/~erayo   Malfunct: http://www.malfunct.com
        ai-philosophy: http://groups.yahoo.com/group/ai-philosophy
        Pardus: www.uludag.org.tr   KDE Project: http://www.kde.org