  • Derek James
    Message 1 of 4, Aug 24 4:16 PM
      Welcome to the NEAT Users Yahoo Group.

      Presumably you're here either because you use NEAT or
      are interested in using NEAT (or you created NEAT).

      You already know the purpose of this group and list,
      but I thought I'd introduce myself and Philip Tucker,
      and give you an idea of how we got interested in NEAT,
      what we're doing right now, and where we're going.

      My name is Derek James, and my background is not in
      either Artificial Intelligence or Computer Science.
      It's in education. I graduated from U.T. Austin in
      1993 with a B.A. in English and teacher certification
      in Math, Science, and English. But I've always
      considered AI an avocation, and I've read extensively
      on the subject since I was in high school. Philip and
      I met at Baylor University in 1989, where he was
      working on an undergraduate degree in CS. He went on
      to get a Master's, also in CS.

      We've been friends since then, and had discussions
      regarding AI, but never really took any steps to
      actively develop AI. That changed last year. Philip
      enrolled in an introductory neural network class here
      at the University of Texas at Dallas. I audited the
      class. We met two other guys, one who was nearly
      finished with a Master's in Cognitive Science from
      UTD, the other a developer who was working on AI in
      his spare time. The four of us started a bi-weekly
      discussion group centered around using neural networks
      and genetic algorithms.

      Like many AI researchers, our group discussed a
      variety of domains, but we were primarily interested
      in the domain of classic board games, specifically Go.

      Around the end of 2002, I began to read published work
      in the field. At one group member's urging, I read
      Fogel's book Blondie24, about evolving ANNs with fixed
      topologies as part of a Checkers-playing program.
      I read about the work done at UT Austin over the past
      decade or so. I read first about SANE, then found the
      paper on NEAT. It was exactly the sort of approach I
      was looking for. I showed it to Philip, and we
      decided that it was a technique we would be interested
      in experimenting with.

      We looked at existing implementations of NEAT, but for
      a variety of reasons we decided to implement our own
      version. It is built around two existing open-source
      packages: JOONE (Java Object-Oriented Neural Engine)
      and JGAP (Java Genetic Algorithms Package).
      There are a number of features in these packages,
      especially JOONE, which we are not using in our
      initial implementation, but which might be useful
      later on.

      We wanted to implement NEAT so that it would be easily
      adapted to a distributed computing environment (we
      believe that a high-input/output domain like Go will
      require such an environment). We also wanted to
      implement it so that it would run through a browser
      interface. This version will also persist a number of
      graphical diagnostics, including graphical
      representations of the evolved neural nets,
      evolutionary progress, etc., in the XML-based
      graphical language, SVG.

      As of this writing, the core NEAT engine is
      implemented and working, though we have not
      incorporated speciation yet. We have tested it so
      far, without speciation, on XOR, and gotten it to
      converge in multiple trials. It will probably be a
      few more months before we have a stable,
      fully-implemented version of NEAT, though.
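      For illustration, the XOR fitness scoring described above might be
      sketched like this in Java. (This is not our actual JOONE/JGAP-based
      code; the class and method names are made up for the example, and the
      squared scoring follows the common NEAT XOR setup.)

```java
import java.util.function.BiFunction;

// Illustrative XOR fitness function. The network is abstracted as a
// (double, double) -> double mapping; a real NEAT implementation would
// activate an evolved network here instead.
public class XorFitness {
    // The four XOR training cases: {in1, in2, expected}.
    private static final double[][] CASES = {
        {0, 0, 0}, {0, 1, 1}, {1, 0, 1}, {1, 1, 0}
    };

    // Fitness in [0, 16]; 16 means all four cases answered exactly.
    public static double evaluate(BiFunction<Double, Double, Double> net) {
        double error = 0.0;
        for (double[] c : CASES) {
            error += Math.abs(net.apply(c[0], c[1]) - c[2]);
        }
        double raw = 4.0 - error;   // perfect net has zero error
        return raw * raw;           // squared to sharpen selection pressure
    }

    public static void main(String[] args) {
        // A hand-written net that solves XOR exactly scores 16.
        BiFunction<Double, Double, Double> perfect =
            (a, b) -> (double) (a.intValue() ^ b.intValue());
        System.out.println(evaluate(perfect)); // prints 16.0
    }
}
```

      Convergence on XOR is a useful sanity check because the problem is
      not linearly separable, so the evolved network must contain at
      least one hidden node.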

      At that time, we plan to initiate an open-source
      project on SourceForge and make the source freely
      available to anyone who is interested. We would also
      like to have two or three demo tasks available for our
      implementation before we release it.

      As far as our own research path, we would like to
      explore competitive coevolutionary techniques,
      specifically applied to game domains. Our first game
      domain will be Tic-Tac-Toe, and we hope to scale up to
      GoMoku (5-in-a-row on a 13x13 board), and then Go. We
      also hope to explore some form of indirect encoding to
      exploit the inherent symmetry in such game domains.
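      As a sketch of the board symmetry mentioned above: a square board
      such as GoMoku's 13x13 has eight symmetries (four rotations, each
      optionally mirrored), so an encoding that treats symmetric positions
      alike can shrink the search space roughly eightfold. The helpers
      below are purely illustrative and not part of any NEAT
      implementation.

```java
// Illustrative symmetry operations on a square game board, where each
// cell holds 0 (empty) or a player id. Composing rotate and reflect
// yields all eight symmetries of the board.
public class BoardSymmetry {
    // Rotate a square board 90 degrees clockwise.
    public static int[][] rotate(int[][] b) {
        int n = b.length;
        int[][] r = new int[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                r[j][n - 1 - i] = b[i][j];
        return r;
    }

    // Mirror a square board left-to-right.
    public static int[][] reflect(int[][] b) {
        int n = b.length;
        int[][] r = new int[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                r[i][n - 1 - j] = b[i][j];
        return r;
    }

    public static void main(String[] args) {
        int[][] board = new int[13][13];   // a 13x13 GoMoku board
        board[0][0] = 1;                   // a stone in one corner...
        int[][] once = rotate(board);
        System.out.println(once[0][12]);   // ...maps to the adjacent corner: prints 1
    }
}
```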

      So that's who we are, where we're at right now, and
      where we hope to be in the near future. If you join
      the group, please consider introducing yourself
      (though you don't have to be as verbose as me) and
      letting the group know how you're either using NEAT or
      would like to.

      Thanks for joining, and we'll see you around...

    • kenstanley01
      Message 2 of 4, Aug 24 9:47 PM
        Hi, I'm Kenneth Stanley. I think Derek had a great idea starting this
        group. I have been in contact with a lot of people who are working
        with NEAT independently, and it makes sense to have a place where we
        can pool our experience to help each other out. The group can also
        facilitate the discussion of ideas, both for applying NEAT and
        extending it. I look forward to participating!

        As for introducing myself, many of you already know who I am, so I
        won't go too long on the details, but for any who don't, I am
        completing my Ph.D. in computer science at the University of Texas at
        Austin. I am hoping to become a professor after graduation so that I
        can continue research into evolving increasingly complex neural
        networks, as well as other structures. I don't know where I will be a
        professor; it's not an easy job market! In the future, I hope to lead
        research towards the next generation of neuroevolution systems
        building on NEAT.

      • gbravoescobar@yahoo.es
        Message 3 of 4, Aug 4 7:13 AM
          Hi Group:

          My name is "Germán"; it sounds like "her man" (but I'm not a
          playboy...unfortunately). I'm from Spain, and AI has been my hobby
          for the past 15 years. One side of my job is as an engineering
          consultant (designing systems for industry); the other side is as
          a business consultant (improving organizational performance).

          I'm very interested in NEAT because of its ability to find simple
          solutions, its reduction of the search space through
          complexification, and its ability to keep several possibilities
          open in the solution space through speciation.
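          For readers new to NEAT, the speciation mentioned here groups
          genomes by a compatibility distance over aligned connection
          genes, delta = c1*E/N + c2*D/N + c3*W (E excess genes, D
          disjoint genes, W mean weight difference of matching genes),
          following the original NEAT paper. Below is a simplified
          sketch, with genomes reduced to innovation-number-to-weight
          maps and all names invented for the example.

```java
import java.util.Collections;
import java.util.Map;

// Simplified sketch of NEAT's compatibility distance. A genome is
// reduced to a map from innovation number to connection weight.
public class Compatibility {
    // delta = c1*E/N + c2*D/N + c3 * mean |w1 - w2| over matching genes.
    // Genes beyond the shorter genome's max innovation are "excess";
    // other unmatched genes are "disjoint".
    public static double distance(Map<Integer, Double> a, Map<Integer, Double> b,
                                  double c1, double c2, double c3) {
        if (a.isEmpty() && b.isEmpty()) return 0.0;
        int cutoff = Math.min(a.isEmpty() ? 0 : Collections.max(a.keySet()),
                              b.isEmpty() ? 0 : Collections.max(b.keySet()));
        int excess = 0, disjoint = 0, matching = 0;
        double weightDiff = 0.0;
        for (Map.Entry<Integer, Double> e : a.entrySet()) {
            Double other = b.get(e.getKey());
            if (other != null) { matching++; weightDiff += Math.abs(e.getValue() - other); }
            else if (e.getKey() > cutoff) excess++;
            else disjoint++;
        }
        for (Integer innov : b.keySet()) {
            if (!a.containsKey(innov)) {
                if (innov > cutoff) excess++; else disjoint++;
            }
        }
        int n = Math.max(a.size(), b.size());
        double meanWeightDiff = matching == 0 ? 0.0 : weightDiff / matching;
        return c1 * excess / n + c2 * disjoint / n + c3 * meanWeightDiff;
    }

    public static void main(String[] args) {
        Map<Integer, Double> g1 = Map.of(1, 0.5, 2, -0.5, 4, 1.0);
        Map<Integer, Double> g2 = Map.of(1, 0.5, 2, 0.5, 3, 0.2);
        // One excess gene (4), one disjoint gene (3), mean weight diff 0.5:
        // delta = 1/3 + 1/3 + 0.4 * 0.5
        System.out.println(distance(g1, g2, 1.0, 1.0, 0.4));
    }
}
```

          Genomes whose distance to a species representative exceeds a
          threshold are placed in a new species, which is what keeps
          several possibilities open at once.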

          Some of my thoughts about ANNs:

          - Right now I'm studying meta-learning, the process by which
          systems learn to learn. I think the future lies in that kind of
          system. Why?

          - I think an "intelligent" system is one that is not only able to
          solve a task, but is also able to use everything it has learned so
          far to accelerate its learning in the immediate future. Along
          these lines, I've seen some successful experiments with NEAT and
          the game of Go (successive tasks).

          - But what is that experience? It is not only useful parts to be
          reused; it is also modifications to the search space of ANN
          structures. That is: if there is a space of ANN solutions (N)
          that is projected onto the solution space of inputs/outputs (S),
          and there are n tasks (r1, r2, ..., rn) that my system has been
          able to solve, then my system must not merely be a mix of neurons
          in the N space with the ability to solve all of those tasks. It
          must also be able to deform the search space (inside N)
          dynamically, using the previously learned information to find new
          solutions quickly for task r_n+1 (obviously, provided that task
          is related to the previous ones).

          - So, what has to change dynamically in a GA so that it looks for
          solutions in the right places? Mutation and crossover. Imagine
          this Yahoo group is a GA in which we are the "solutions", and
          everybody has his own knowledge about ANNs. Each message we send
          is a "crossover" in which we share experience and knowledge.
          However, we don't do fixed or random crossover; we are
          intelligent "solutions", able to incorporate the new material
          into our knowledge. We do an "intelligent crossover", or
          "probabilistic crossover", based on the new information and on
          our previous experience. Likewise, when our solution "mutates"
          because we try something new in an ANN, we try to maximize our
          probability of getting a successful result (normally in an
          intuitive/probabilistic way).
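          One concrete, well-known form of this "intelligent mutation"
          idea is the self-adaptive mutation rate used in evolution
          strategies (it is not part of NEAT itself): each genome carries
          its own mutation step size, which is itself mutated, so
          selection favors step sizes that tend to produce good
          offspring. A sketch, with all names invented for the example:

```java
import java.util.Random;

// Sketch of self-adaptive mutation from evolution strategies: the
// mutation step size (sigma) is part of the genome and evolves too.
public class SelfAdaptiveMutation {
    static final Random RNG = new Random(42);

    public static class Genome {
        public double[] weights;
        public double sigma;   // per-genome mutation step size
        public Genome(double[] w, double s) { weights = w; sigma = s; }
    }

    // Mutate sigma first (log-normal, so it stays positive), then
    // mutate the weights using the NEW sigma: good step sizes hitch
    // a ride with the good offspring they produce.
    public static Genome mutate(Genome g) {
        double tau = 1.0 / Math.sqrt(g.weights.length);
        double newSigma = g.sigma * Math.exp(tau * RNG.nextGaussian());
        double[] w = new double[g.weights.length];
        for (int i = 0; i < w.length; i++)
            w[i] = g.weights[i] + newSigma * RNG.nextGaussian();
        return new Genome(w, newSigma);
    }

    public static void main(String[] args) {
        Genome parent = new Genome(new double[]{0.1, -0.3, 0.7}, 0.5);
        Genome child = mutate(parent);
        System.out.println(child.sigma > 0);   // step size stays positive: prints true
    }
}
```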

          - I think indirect encoding, as in DNA, is a way to reduce the
          search space, but DNA doesn't work like a puzzle book for
          building life. DNA itself has evolved too. Through evolution it
          has acquired mechanisms for managing other parts of the DNA, and
          probably for mutating in the right way depending on the
          competitive pressure of natural selection. In fact, recent
          studies have shown that since the beginning of evolution there
          have been immutable pieces of DNA shared by all living beings.
          Probably beneath those pieces lies the secret mechanism for
          evolving quickly.

          So... my current studies are about "indirect encoding": not only
          encoding ANN structures (I think encoding them alone is a losing
          battle, because GAs need something more to deal with such complex
          search spaces), but also encoding a genotype able to modify the
          search space dynamically and to manage old useful parts
          (meta-learning).


          Germán Bravo.
        • Sandor Murakozi
          Message 4 of 4, Feb 6, 2006
            Hi everyone,

            I'm new to the NEAT group, so let me introduce myself briefly:
            I live in Hungary and work as a software developer (currently at Lufthansa Systems), mostly in Java.
            I've been interested in genetic/neural computing since university, but have only been able to play with it in my free time.
            I found NEAT some time ago and, as it seems to be a nice combination of the two fields, found it very interesting.
            I've read the whole list (lots of brilliant ideas, a very good value-to-noise ratio, congratulations to everyone!) and collected about 50 interesting problems/ideas.

            I decided to implement my own version of NEAT, mainly to gain a better understanding of it. Additionally, I'd like a system that is as flexible as reasonably possible, so that I can integrate most of those nice ideas.

            Using NEAT, my main domain would be prediction of time series (e.g. stock prices). I've seen that some people have played with this, but there weren't many (positive) results.
            I wonder whether those guys are already rich, or whether their experiments just didn't have much success...
