
Re: [Artificial Intelligence Group] how long it takes to train

  • predictorx
    Message 1 of 5, Mar 18, 2003
      Regarding using a 5 bit number to represent 26 letters of the alphabet
      as neural network output representation:

      Kenneth Bull wrote:
      "Why are you using 26 units? Why not use 5 units to represent a binary
      number between 0 and 31 instead of 26 for 0 to 25? (A=00000, B=00001,
      C=00010, ..., Z=11010)"


      This would require the neural network to learn a much more complicated
      mapping for each individual bit: in essence, learning the classification
      and also learning to be a 32-to-5 encoder. Consider bit number 4 in
      the examples you've given. Classes "A" and "B" have bit number
      4 (counting from the left) the same (zero), while classes "C" and "Z"
      have it as a one. This grouping is arbitrary and probably does not
      reflect structural differences among the classes.

      This sort of representation is suggested frequently online but is
      dreadful in practice since the model needs to learn mappings to
      classes and also learn to turn off those mappings in a convoluted way.

      Keep in mind that the basis functions used by most artificial neural
      networks in practice are simple, monotonic transfer functions.
      Learning to turn "on" one of your bits requires one or more basis
      functions. Learning to turn them "off" for this arbitrary
      representation may take many more.
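
      To make the contrast concrete, here is a small sketch in plain Python
      of the two target encodings under discussion; the function names are
      illustrative, not from the thread.

```python
# Two ways to encode the 26 letters as network targets (illustrative names).

def one_hot(letter):
    """One output unit per class: 'A' -> 1.0 in position 0, zeros elsewhere."""
    vec = [0.0] * 26
    vec[ord(letter) - ord('A')] = 1.0
    return vec

def five_bit(letter):
    """The proposed binary code: 'A' -> 00000, 'B' -> 00001, 'C' -> 00010."""
    idx = ord(letter) - ord('A')
    return [float(b) for b in format(idx, '05b')]

# With one-hot targets, each output unit answers one question ("is this a C?").
# With the binary code, each unit must fire for an arbitrary half of the
# alphabet -- the arbitrary grouping criticized above.
print(one_hot('C'))   # 1.0 in position 2, zeros elsewhere
print(five_bit('C'))  # [0.0, 0.0, 0.0, 1.0, 0.0]
```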
    • touseef liaqat
      Message 2 of 5, Mar 27, 2003
        hi all

        sorry for replying late, because of my exams. and thanks to
        predictorx and kenneth bull for replying. but my
        question remains the same and unanswered: is my
        network efficient enough to learn the alphabet, and how
        long will it take to train?

        Touseef
        --- predictorx <no_reply@yahoogroups.com> wrote:
        > [quoted text from Message 1 trimmed]


      • predictorx
        Message 3 of 5, Mar 28, 2003
          This will depend, of course, on many factors, but assuming that you've
          got relatively current hardware, I'd say something was wrong if most
          classes couldn't be distinguished accurately in less than an hour.

          One difficulty in this problem is that 26 separate outputs are being
          trained simultaneously, which permits the possibility that some will
          finish training before others. Consequently, modeling of some classes
          may be overfit while others are underfit.

          "Touseef Liaqat" <paramount01us@y...> wrote:
          "i am doing a project on character recognition and i am using the
          back-propagation algorithm for training the net. the sample data set
          consists of bmp pics of characters. each character is 5x7 pixels,
          so the input layer contains 35 units. i first trained two characters,
          so the output layer had 2 units, and i made only one hidden layer
          with 20 units. this network works well and trained with noisy
          data. but things become worse when i increase the output units to 26
          for all alphabets and the hidden layer to 30 units. this new network
          is not training, with all other parameters the same. i waited 2 to 3
          days for its training but nothing is happening.

          my question is: how long does it take to train a network of this
          kind (input units = 35, hidden units = 30, output units = 26)?"
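
          For scale, here is a minimal numpy sketch of the network described
          (35 inputs, 30 hidden units, 26 one-hot outputs). It assumes
          sigmoid units, cross-entropy training on the outputs (so the
          output delta is simply prediction minus target), and random
          placeholder bitmaps standing in for the real BMP data; the
          learning rate, epoch count, and all names are illustrative, not
          from the thread.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder data: one random 5x7 "bitmap" per letter, 10 noisy copies each.
# Real data would be the BMP character images from the original post.
n_classes, n_in, n_hidden = 26, 35, 30
prototypes = rng.integers(0, 2, size=(n_classes, n_in)).astype(float)
X = np.repeat(prototypes, 10, axis=0)           # 260 samples
X += rng.normal(0.0, 0.1, size=X.shape)         # pixel noise
y = np.repeat(np.arange(n_classes), 10)
T = np.eye(n_classes)[y]                        # one-hot targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Small random initial weights, as in a standard backprop setup.
W1 = rng.normal(0.0, 0.1, size=(n_in, n_hidden))
W2 = rng.normal(0.0, 0.1, size=(n_hidden, n_classes))

lr = 0.3
for epoch in range(5000):
    H = sigmoid(X @ W1)                 # hidden activations
    O = sigmoid(H @ W2)                 # output activations
    d_out = O - T                       # cross-entropy + sigmoid delta
    d_hid = (d_out @ W2.T) * H * (1.0 - H)
    W2 -= lr * H.T @ d_out / len(X)
    W1 -= lr * X.T @ d_hid / len(X)

# Evaluate on the training set (a sketch, not a proper generalization test).
H = sigmoid(X @ W1)
O = sigmoid(H @ W2)
pred = O.argmax(axis=1)
print("overall accuracy:", (pred == y).mean())
# Per-class accuracy illustrates the point above: with 26 outputs trained
# simultaneously, some classes can "finish" training before others.
per_class = [(pred[y == c] == c).mean() for c in range(n_classes)]
print("worst class accuracy:", min(per_class))
```

          On current hardware a toy problem of this size trains in seconds,
          which is consistent with the expectation above that most classes
          should be distinguishable in well under an hour.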