
Re: [ai-philosophy] Axons communicate in reverse

  • Eray Ozkural
    Message 1 of 16, Mar 4, 2011
      All right, so BP works to cover what exactly? Do you think it is a valid model of LTP?

      --
      Eray Ozkural



      On Feb 20, 2011, at 9:21 PM, Rafael Pinto <kurama.youko.br@...> wrote:

      There are at least two BP versions for RNNs: BPTT and RTRL. And the default BP works for RNNs too, as with the Elman network (in practice it's almost as good as the recurrent variants, but cheaper). Schmidhuber's RNN (actually it was created by one of his students) is the LSTM, a complete RNN architecture trained by a hybrid of BPTT and RTRL.
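      For concreteness, here is a minimal sketch of the Elman-style setup described above, assuming plain per-step backpropagation (the copied-back context is treated as a constant input at each step, so no gradient flows back through time); all sizes, names, and the toy task are illustrative, not anything from this thread:

      import numpy as np

      # Minimal Elman-network sketch: the hidden state is copied back as a
      # "context" input, and plain BP treats that context as a constant at
      # each step (no backpropagation through time). Illustrative only.
      rng = np.random.default_rng(0)
      n_in, n_hid, n_out, lr = 3, 8, 2, 0.1
      W_in  = rng.normal(0, 0.1, (n_hid, n_in))
      W_ctx = rng.normal(0, 0.1, (n_hid, n_hid))
      W_out = rng.normal(0, 0.1, (n_out, n_hid))

      def step(x, context):
          h = np.tanh(W_in @ x + W_ctx @ context)   # new hidden state
          return h, W_out @ h                       # hidden state, linear output

      def train_sequence(xs, targets):
          global W_in, W_ctx, W_out
          context = np.zeros(n_hid)
          for x, t in zip(xs, targets):
              h, y = step(x, context)
              err = y - t                           # squared-error gradient
              dh = (W_out.T @ err) * (1 - h ** 2)   # tanh derivative
              W_out -= lr * np.outer(err, h)
              W_in  -= lr * np.outer(dh, x)
              W_ctx -= lr * np.outer(dh, context)   # context treated as constant
              context = h                           # Elman feedback for next step

      # toy usage: learn to echo the previous input's first two components
      xs = [rng.normal(size=n_in) for _ in range(50)]
      targets = [np.zeros(n_out)] + [x[:n_out] for x in xs[:-1]]
      for _ in range(200):
          train_sequence(xs, targets)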

      []'s

      Rafael C.P.

      PS: sorry, the first e-mail was sent only to you, Eray.

      On Sun, Feb 20, 2011 at 1:04 PM, Eray Ozkural <examachine@...> wrote:
      I suppose if it were indeed that way, it would have to work on RNNs, not just MLFF nets. There are some back-propagation variants for RNNs; one of them is due to Schmidhuber, I suppose.

      Best,

      On Sun, Feb 20, 2011 at 2:13 PM, Rafael Pinto <kurama.youko.br@gmail.com> wrote:


      Exactly what I've commented on physorg.

      "Actually, it was long thought that the backpropagation algorithm wasn't biologically plausible because it needs backwards connections. This study shows that such backwards signaling does occur, and it brings more biological plausibility to the Multi-Layer Perceptron (just one type of artificial neural network among many)."

      Rafael C.P.


      On Sat, Feb 19, 2011 at 10:56 PM, Eray Ozkural <erayo@...> wrote:
       

      Well, obviously that's what the mechanism brings to mind first, right? That it is a kind of back-propagation learning.

      On Sat, Feb 19, 2011 at 11:55 PM, Antoan Bekele <antoan@...> wrote:


      Say what?

       

      From: ai-philosophy@yahoogroups.com [mailto:ai-philosophy@yahoogroups.com] On Behalf Of Eray Ozkural
      Sent: 19 February 2011 02:41
      To: ai-philosophy@yahoogroups.com
      Cc: Ray Gardener
      Subject: Re: [ai-philosophy] Axons communicate in reverse

       

       

      Or it's back propagation? :P

      On Fri, Feb 18, 2011 at 11:19 PM, Ray Gardener <rayg@...> wrote:

      Neat... it's almost like a higher-level effect. The immediate firing is
      the low-level, knee-jerk reaction, then the later firing is as if there
      was some deeper meditation over inputs going on.

      Ray



      On 2/18/2011 7:19 AM, Eray Ozkural wrote:
      > http://www.physorg.com/news/2011-02-rewrite-textbooks-conventional-wisdom-neurons.html
      >
      > Looks like a significant discovery. Would neuroscience buffs care to
      > comment?
      >
      > -- Eray Ozkural, PhD candidate.  Comp. Sci. Dept., Bilkent

      --
      Eray Ozkural, PhD candidate.  Comp. Sci. Dept., Bilkent University, Ankara
      http://groups.yahoo.com/group/ai-philosophy
      http://myspace.com/arizanesil http://myspace.com/malfunct








    • Rafael Pinto
      Message 2 of 16, Mar 4, 2011
        The only thing I said is that BP wasn't considered biologically plausible because it needs backwards connections, which were thought not to exist in our brains. Now that argument doesn't hold anymore. That's all: an obvious observation, much like your "Well, obviously that's what the mechanism brings to mind first, right? That it is a kind of back-propagation learning." (Actually, I don't understand your objections after making that comment, other than as internet noise.) I'm not personally an advocate of BP.

        Rafael C.P.


      • Eray Ozkural
        Message 3 of 16, Mar 4, 2011
          Yes, that's what it made me think of. I'm trying to ask which model of long-term neural learning you favor, if not BP?

          Best,


          --
          Eray Ozkural, PhD candidate.  Comp. Sci. Dept., Bilkent University, Ankara
          http://groups.yahoo.com/group/ai-philosophy
          http://myspace.com/arizanesil http://myspace.com/malfunct

        • Rafael Pinto
          Message 4 of 16, Mar 4, 2011
            Ah, I don't discard BP, but generalized Hebbian learning sounds more plausible to me. Anyway, it's not a very educated guess, since I'm more comfortable with artificial NNs than with natural ones.
            How about you?
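
            For reference, one common reading of "generalized Hebbian learning" is Sanger's generalized Hebbian algorithm, i.e. Oja's rule extended to several units; a minimal sketch under that assumption, with purely illustrative sizes and data:

            import numpy as np

            # Sanger's generalized Hebbian algorithm: Hebbian updates plus a
            # deflation term, so the weight rows drift toward the leading
            # principal components of the input. Illustrative sketch only.
            rng = np.random.default_rng(0)
            n_in, n_units, lr = 5, 2, 0.01
            W = rng.normal(0, 0.1, (n_units, n_in))

            X = rng.normal(size=(1000, n_in))
            X[:, 0] *= 3.0                   # give the data a dominant direction
            X -= X.mean(axis=0)

            for epoch in range(20):
                for x in X:
                    y = W @ x                # unit activations
                    # dW = lr * (y x^T - lower_triangular(y y^T) W)
                    W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

            # rows of W now approximate the two leading principal directions
            print(W / np.linalg.norm(W, axis=1, keepdims=True))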

            Rafael C.P.


          • Eray Ozkural
            Message 5 of 16, Mar 4, 2011
              I think BP may be an oversimplification. People say that the thalamocortical loops function a lot like reinforcement learning, so I would look into that, but, as you said, also into unsupervised development.

              There is no reason to suppose learning is singular, either; I remember that there are many kinds of LTP.

              Lets just look at reinforcement learning, so you have some kind of a reward path, and I am not sure that's been modeled so well by reinforcement learning. There was an invited talk at AGI 10 about how some kind of network organizations might be implementing reinforcement learning, perhaps it's a lot more complicated than what we give the brain credit for.

              Best,

              --
              Eray Ozkural, PhD candidate.  Comp. Sci. Dept., Bilkent University, Ankara
              http://groups.yahoo.com/group/ai-philosophy
              http://myspace.com/arizanesil http://myspace.com/malfunct

            • Eray Ozkural
              Message 6 of 16, Mar 4, 2011
                On Sat, Mar 5, 2011 at 5:57 AM, Eray Ozkural <examachine@...> wrote: 
                Lets just look at reinforcement learning, so you have some kind of a reward path, and I am not sure that's been modeled so well by reinforcement learning. There was an invited talk at AGI 10 about how some kind of network organizations might be implementing reinforcement learning, perhaps it's a lot more complicated than what we give the brain credit for. 

                Sorry, that came out garbled. What I meant to say is that nervous systems' reinforcement learning circuits probably aren't modeled well by any kind of ANN learning algorithm!

                However, I think they're now really scratching at the biological nets. Now we have RAM and some possible mechanism for LTP, etc. The future's bright! I am hoping that they now see how important live high-res recording is! I personally hope that with more data computational neuroscientists will be able to crack some of the problems therein, but I suppose a lot of the brighter minds must be all abuzz now, busy writing their own accounts :)

                It's interesting for me, with respect to my new RNN algorithms, currently under investigation. I can't reveal much about it, but you can guess what it is about. So, the last time I was indeed thinking of making a model that has all the elements... and then I see this news bit on the kurzweilai news site, and I think to myself: all right, so that was one of the features I was thinking of :) You see, the traditional ANN textbooks are missing out on all the really interesting stuff! Hopfield networks, Kohonen networks, MLFF networks: they basically don't mean much, as you can tell by implementing them and seeing them fail miserably over and over again.

                Modeling the brain, if it is possible at all, is one of the most challenging computer science problems, yet I always see non-computer scientists (electrical engineers, mathematicians, physicists) working on this problem. It's a little weird, because the best understanding is through the theory of computation. Everything else is... well... naive, really :)

                I personally suspect that the newfound complexity of some of the brain circuitry and the variety of mechanisms will first come as a shock. Marvin Minsky had some similar reservations; he often said that the brain has 400-something specialized machines. I definitely wouldn't be surprised if that were the case.

                Best,
                 
                --
                Eray Ozkural, PhD candidate.  Comp. Sci. Dept., Bilkent University, Ankara
                http://groups.yahoo.com/group/ai-philosophy 

              • Rafael Pinto
                Message 7 of 16, Mar 4, 2011
                  "I think BP may be an oversimplification. People say that the thalamocortical loops function a lot like reinforcement learning, so I would look into that, but, as you said, also into unsupervised development."
                  What do you mean exactly by "like RL"? What aspect exactly? Almost all neural approaches to RL that I know use BP!

                  "There is no reason to suppose learning is singular, either; I remember that there are many kinds of LTP."
                  Totally agreed! Talking about BP, it seems the cerebellum could be a nice place for it. Associative learning is suspected to exist in the hippocampus, while soft competition may happen in the cortex.
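                  To make "soft competition" concrete, here is a minimal soft competitive (soft winner-take-all) update in the spirit of soft k-means; the temperature, sizes, and toy data are illustrative assumptions rather than anything from this thread:

                  import numpy as np

                  # Soft competitive learning sketch: every unit moves toward the
                  # input, weighted by a softmax over similarity, so nearby units
                  # share the win instead of one hard winner taking all.
                  rng = np.random.default_rng(0)
                  n_units, n_in, lr, temperature = 4, 2, 0.05, 2.0
                  W = rng.normal(size=(n_units, n_in))       # one prototype per unit

                  def soft_competitive_step(x):
                      d2 = np.sum((W - x) ** 2, axis=1)      # squared distances
                      r = np.exp(-d2 / temperature)
                      r /= r.sum()                           # soft responsibilities
                      W[:] += lr * r[:, None] * (x - W)      # all units move a little

                  # toy usage: prototypes drift toward four input clusters
                  centers = np.array([[2, 2], [-2, 2], [-2, -2], [2, -2]], float)
                  for _ in range(2000):
                      c = centers[rng.integers(len(centers))]
                      soft_competitive_step(c + 0.3 * rng.normal(size=n_in))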

                  "It's interesting for me, with respect to my new RNN algorithms, currently under investigation. I can't reveal much about it, but you can guess what it is about."
                  It's 1:30 AM here, so no, I can't :P. Anyway, it's nice to see more people interested in both RNNs and AGI. Do you also think RNNs are essential components for AGI? I'm developing a new RNN too, with the long-term goal of AGI in mind. I think I'll have something written by May, so I can share it here if you or anyone else would like to read it. I'd be interested in seeing your work when it gets ready too!
                  But back to RNNs: do you know LSTMs and ESNs? Any opinions?

                  "You see, the traditional ANN textbooks are missing out on all the really interesting stuff! Hopfield networks, Kohonen networks, MLFF networks: they basically don't mean much, as you can tell by implementing them and seeing them fail miserably over and over again."
                  Haha, so the problem isn't me, nice.

                  []'s

                  Rafael C.P.



                • Eray Ozkural
                  Message 8 of 16, Apr 18, 2011
                    Hi Rafael,

                    On Sat, Mar 5, 2011 at 6:48 AM, Rafael Pinto <kurama.youko.br@gmail.com> wrote:
                    "I think BP may be an oversimplification. People say that the thalamocortical loops function a lot like reinforcement learning, so I would look into that, but, as you said, also into unsupervised development."
                    What do you mean exactly by "like RL"? What aspect exactly? Almost all neural approaches to RL that I know use BP!

                    That's right. But what happens when there is no reward? How about Hebbian/unsupervised learning? The convincing story for me is that RL is implemented at a higher level than pure learning mechanisms; maybe it's only one kind of learning.
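                    To make that contrast concrete, here is a toy sketch of the two update styles side by side: a reward-driven TD(0) value update next to a reward-free Hebbian update (tabular/linear toys with illustrative names and sizes, nothing from this thread):

                    import numpy as np

                    rng = np.random.default_rng(0)

                    # Reward-driven learning: TD(0) needs an explicit reward signal r.
                    V = np.zeros(5)                        # value estimate per state
                    def td_update(s, r, s_next, lr=0.1, gamma=0.9):
                        V[s] += lr * (r + gamma * V[s_next] - V[s])

                    # Reward-free learning: a Hebbian update needs only co-activity.
                    W = np.zeros((4, 4))                   # synaptic weight matrix
                    def hebb_update(pre, post, lr=0.01):
                        W[:] += lr * np.outer(post, pre)   # "fire together, wire together"

                    # toy usage
                    for _ in range(100):
                        s = int(rng.integers(4))
                        td_update(s, r=float(s == 3), s_next=s + 1)  # reward only after state 3
                        x = rng.normal(size=4)
                        hebb_update(pre=x, post=x)                   # pure correlation capture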

                    "There is no reason to suppose learning is singular, either; I remember that there are many kinds of LTP."
                    Totally agreed! Talking about BP, it seems the cerebellum could be a nice place for it. Associative learning is suspected to exist in the hippocampus, while soft competition may happen in the cortex.

                     
                    Interesting. Any references that you could share with us?

                    "It's interesting for me, with respect to my new RNN algorithms, currently under investigation. I can't reveal much about it, but you can guess what it is about."
                    It's 1:30 AM here, so no, I can't :P. Anyway, it's nice to see more people interested in both RNNs and AGI. Do you also think RNNs are essential components for AGI? I'm developing a new RNN too, with the long-term goal of AGI in mind. I think I'll have something written by May, so I can share it here if you or anyone else would like to read it. I'd be interested in seeing your work when it gets ready too!
                    But back to RNNs: do you know LSTMs and ESNs? Any opinions?

                    I don't think RNNs are essential. I like them because they are a flexible low-level computer architecture. So I'm not really interested in neural learning models in my research, but I am interested in RNNs as another universal computer.

                    You can send a version to me for review.

                    Yes, I do know about LSTMs and ESNs; it now seems that the ESN had it basically right?
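
                    For reference, a minimal echo state network sketch of that idea: keep a fixed random recurrent reservoir and train only the linear readout (here by ridge regression); the spectral radius, sizes, and toy task below are illustrative assumptions:

                    import numpy as np

                    # Echo state network sketch: reservoir weights are random and
                    # never trained; only the linear readout is fit.
                    rng = np.random.default_rng(0)
                    n_in, n_res, ridge = 1, 100, 1e-4
                    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
                    W_res = rng.normal(size=(n_res, n_res))
                    W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))  # spectral radius ~0.9

                    def run_reservoir(u_seq):
                        x, states = np.zeros(n_res), []
                        for u in u_seq:
                            x = np.tanh(W_in @ np.atleast_1d(u) + W_res @ x)
                            states.append(x)
                        return np.array(states)

                    # toy task: one-step-ahead prediction of a sine wave
                    u = np.sin(0.2 * np.arange(500))
                    X, y = run_reservoir(u[:-1]), u[1:]
                    W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
                    print(float(np.mean((X @ W_out - y) ** 2)))  # should be small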


                    "You see, the traditional ANN textbooks are missing out on all the really interesting stuff! Hopfield networks, Kohonen networks, MLFF networks: they basically don't mean much, as you can tell by implementing them and seeing them fail miserably over and over again."
                    Haha, so the problem isn't me, nice.


                    And they don't even approach the most important problem in neural learning: development.

                    No, the problem isn't you; their models are really crude from a machine learning perspective, even meaningless. I'll use an SVM over feed-forward neural nets any day. However, the work on RNNs is the only thing that holds any interest for me. That piano improvisation work by Doug Eck was great, wasn't it? :) That's real AI research, and fun, too! :)))


                    Best,

                    --
                    Eray Ozkural, PhD candidate.  Comp. Sci. Dept., Bilkent University, Ankara
                    http://groups.yahoo.com/group/ai-philosophy
                    http://myspace.com/arizanesil http://myspace.com/malfunct
