
My blog (^_^)

  • Eray Ozkural
    Message 1 of 12, Feb 1, 2011
      I don't know how long I will keep writing, but I have started some posts; I think they will remain more permanent than my long-winded mails that nobody bothers to read. There are two AI-related posts already. Enjoy:



      --
      Eray Ozkural, PhD candidate.  Comp. Sci. Dept., Bilkent University, Ankara
      http://groups.yahoo.com/group/ai-philosophy
      http://myspace.com/arizanesil http://myspace.com/malfunct

    • Michael Olea
      Message 2 of 12, Feb 1, 2011

        When will we see your creation on Jeopardy?

        ;-)

      • jgkjcasey
        Message 3 of 12, Feb 1, 2011
          Thoughts after reading your blog:


          Reinforcement Learning:

          > ... reinforcement learning is trivial once you have a general-purpose learning algorithm (which is precisely what I'm working on). That is to say, reinforcement learning can be trivially reduced to general-learning, but not the other way around.

          JC:
          Isn't reinforcement learning simply a feedback system? If you take an action, you need to be able to analyse the results to see what the effect of that action was. Based on those results you may change (reinforce or weaken) the way you analyse a sensory input, or synthesise any future actions based on the current sensory input and some goal.
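JC's picture of reinforcement learning as a feedback loop can be made concrete with a minimal tabular sketch. This is a hypothetical toy, not anyone's actual system: the agent acts, observes a reward from the world, and strengthens or weakens the value of the action it took in that situation.

```python
import random

random.seed(0)

# Toy feedback loop: two states, two actions. The hidden "world" rewards
# action 1 in state 0 and action 0 in state 1; the agent never sees this
# table directly, only the reward signal that follows each action.
BEST = {0: 1, 1: 0}

def reward(state, action):
    return 1.0 if action == BEST[state] else 0.0

alpha, epsilon = 0.1, 0.2   # learning rate and exploration rate
Q = {(s, a): 0.0 for s in BEST for a in (0, 1)}

for step in range(2000):
    s = step % 2                                  # alternate between states
    if random.random() < epsilon:                 # occasionally explore
        a = random.choice((0, 1))
    else:                                         # otherwise act greedily
        a = max((0, 1), key=lambda x: Q[(s, x)])
    r = reward(s, a)                              # feedback from the world
    Q[(s, a)] += alpha * (r - Q[(s, a)])          # reinforce or weaken

policy = {s: max((0, 1), key=lambda a: Q[(s, a)]) for s in BEST}
print(policy)   # the feedback loop alone recovers the rewarded actions
```

Nothing here requires a model of the world in advance, which is exactly the "feedback system" reading of reinforcement learning JC describes.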


          ==========================

          Role of neocortex:

          > Scientists think that the higher-level emotions of the mammals are the product of architectural innovations in the nervous system. The presence of a neocortex can make a difference, it seems. Will the AI then have the capabilities afforded to us by the neo-cortex?

          JC:
          My understanding is that the neocortex adds power to the sensory analysis and the motor synthesis. I imagine it like giving a class of students (lower brain systems) high-powered calculators to do their math, or giving English students an electronic dictionary or spell checker.

          For example, the alternating task requires the frontal cortex, but simple association does not require the neocortex.

          The neocortex allows a finer analysis of audio input for speech recognition and a finer synthesis of motor control for speech generation. The number and pattern of connections between cortical areas may be critical as well. That is, the communication between the various cortical areas for sharing their results and posing problems.

          ==============

          Requirement for a Goal:

          > An AGI has universal intelligence, because it has in its possession a universal computer, which it can use to learn any computable probability distribution in the world. Therefore, it has a universal learning capability which it can enact in any environment, e.g., an alien planet.

          JC:
          The ability to model a world doesn't mean anything without a goal as to what you are supposed to do in this world. An act is only deemed intelligent to the extent we can discern its goal.
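The blog's claim about learning computable distributions can be caricatured in a few lines. This is only a cartoon of Solomonoff-style induction, under loud simplifying assumptions: the "programs" here are just short repeating bit patterns rather than arbitrary programs on a universal computer, weighted by an Occam prior of 2^-length, and consistent hypotheses vote on the next bit.

```python
from fractions import Fraction

# Hypothesis space: every bit pattern up to max_len, read as a repeating
# generator; a pattern of length n gets prior weight 2**-n (Occam prior).
def hypotheses(max_len=8):
    for n in range(1, max_len + 1):
        for code in range(2 ** n):
            pattern = [(code >> i) & 1 for i in range(n)]
            yield pattern, Fraction(1, 2 ** n)

def predict_next(observed, max_len=8):
    votes = {0: Fraction(0), 1: Fraction(0)}
    for pattern, prior in hypotheses(max_len):
        generated = [pattern[i % len(pattern)] for i in range(len(observed) + 1)]
        if generated[:len(observed)] == observed:  # hypothesis fits the data
            votes[generated[-1]] += prior          # vote with its prior weight
    return max(votes, key=votes.get)

print(predict_next([0, 1, 0, 1, 0, 1]))
```

With the alternating sequence above, the short "01 repeating" pattern dominates the vote and the prediction is 0. Note the sketch also illustrates JC's point: the machinery predicts, but says nothing about what the agent should *do* with the prediction; that requires a goal.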

          ======================

          Morality:

          > Such an AGI does not need to be taught anything about morality whatsoever. It could work out its own moral philosophy from the very ground up, just like philosophers can.

          JC:
          I think AGI and philosophers need an evolved base from which to think about morality, which I see as social behaviors that enhance survival.
        • Eray Ozkural
          Message 4 of 12, Feb 2, 2011
            Hi John,

            Would you also please consider posting your comment on the comment box in the blog?

            Best Regards,

            ------------------------------------

            Yahoo! Groups Links

            <*> To visit your group on the web, go to:
               http://groups.yahoo.com/group/ai-philosophy/

            <*> Your email settings:
               Individual Email | Traditional

            <*> To change settings online go to:
               http://groups.yahoo.com/group/ai-philosophy/join
               (Yahoo! ID required)

            <*> To change settings via email:
               ai-philosophy-digest@yahoogroups.com
               ai-philosophy-fullfeatured@yahoogroups.com

            <*> To unsubscribe from this group, send an email to:
               ai-philosophy-unsubscribe@yahoogroups.com

            <*> Your use of Yahoo! Groups is subject to:
               http://docs.yahoo.com/info/terms/




            --
            Eray Ozkural, PhD candidate.  Comp. Sci. Dept., Bilkent University, Ankara
            http://groups.yahoo.com/group/ai-philosophy
            http://myspace.com/arizanesil http://myspace.com/malfunct

          • Eda Utku
            Message 5 of 12, Feb 2, 2011
              Hi Eray and All:
               
              I have a friend who is interested in launching a project, and I was hoping someone interested in it could get in touch with him.  His name is Eric Stetson and his e-mail contact is: ericstetson@...
               
              I'm so excited about this and would really like to have it at my fingertips soon!  It would be such a practical way to get to your highly informative and thought-provoking blogs :)
               
              Here's Eric's idea in his own words:
               
              " The basic idea is to create a visually attractive online directory of the best periodicals and blogs on the web, broken into categories, and provide links to the latest headlines from each one automatically derived from their news feeds.  Beyond that, users will be able to create a personalized page with the specific sources they want to read on a regular basis, and will also be able to see what their Facebook friends are reading and what sources and articles are the most popular.  The business will make money by selling advertising on the various pages of the site, targeted by subject depending on what the links on each page are about.
               
              I have already developed a comprehensive plan for how to launch this business and a plan for how the website will need to be programmed.  It will take me a tremendous amount of work over the next six months or so; and in addition to my own work, I will also need to hire a website developer with good programming skills to work on the complex programming aspects of the project.  You mentioned that one of your relatives works in the field of artificial intelligence.... I wonder if either you or he knows any people in the programming field who might be looking for a side project?  I need somebody who is familiar with the PHP programming language and with building websites from data imported into and exported from databases.  I would like either to hire a freelancer and pay in cash or, more ideally, to find somebody who believes strongly in the business idea and would be interested in working part-time on a longer-term basis primarily for stock, at least at the beginning.  Of course I have a detailed business plan already developed, but I would only show it to someone if they sign a non-disclosure agreement."
               
              Regards,
               
              E



            • Eray Ozkural
              Message 6 of 12, Feb 2, 2011
                My blog has temporarily moved to http://examachine.net/blog; sorry for any 404 errors.


                --
                Eray Ozkural, PhD candidate.  Comp. Sci. Dept., Bilkent University, Ankara
                http://groups.yahoo.com/group/ai-philosophy
                http://myspace.com/arizanesil http://myspace.com/malfunct

              • Eray Ozkural
                Message 7 of 12, Feb 2, 2011
                  Eda, what does this have to do with the ai-philosophy list?

                  --
                  Eray Ozkural, PhD candidate.  Comp. Sci. Dept., Bilkent University, Ankara
                  http://groups.yahoo.com/group/ai-philosophy
                  http://myspace.com/arizanesil http://myspace.com/malfunct

                • Eda Utku
                  Message 8 of 12, Feb 2, 2011

                    Hey Eray,
                     
                    I just remember talking to you about how AI could replace human effort in such repetitive updating, so I was trying to be inspirational about getting you all to set up AI for tasks like the one my friend described.  Also, AI could come up with radio playlists, DJing and VJing, much as the Genius feature on the iPod finds similar music by taking the tastes of the individual into account.  There could be a motion sensor to detect dancing movements, so an AI booth DJ could tap into the rhythms of a particular audience and map out cultural differences too?  I remember I was at a Paul van Dyk concert and he had a hard time initially getting Turkish people into the groove; then he started doing a bit of 9/8 :) and success!  Let's get rid of all these human egos in the booth in favor of more efficient machines with no personality disorders :)
                     
                    Wow, I need to improve my BS skills :)  I was trying to see if anyone would be interested in my friend's project. 
                     
                    Sowwwy,
                     
                    E


                  • Abram Demski
                    Message 9 of 12, Feb 3, 2011
                      Eray,

                      Thanks for the blog! Unfortunately I may be unable to comment seriously for a few days, but know that comments are coming...

                      --Abram


                      --
                      Abram Demski
                      http://lo-tho.blogspot.com/
                      http://groups.google.com/group/one-logic
                    • Eray Ozkural
                      Message 10 of 12, Feb 3, 2011
                        Please shoot down what I say; it's mostly a train of thought, much like my long and never-ending posts on ai-philosophy. I am also thinking about what I can do to extend the ai-philosophy community onto the web.

                        I've heard of something called "argument mapping". Have any of you used such a thing?

                        Best,

                        --
                        Eray Ozkural, PhD candidate.  Comp. Sci. Dept., Bilkent University, Ankara
                        http://groups.yahoo.com/group/ai-philosophy
                        http://myspace.com/arizanesil http://myspace.com/malfunct

                      • jgkjcasey
                        Message 11 of 12, Feb 3, 2011
                          --- In ai-philosophy@yahoogroups.com, Eray Ozkural <erayo@...> wrote:
                          >
                          > Hi John,
                          >
                          > Would you also please consider posting your comment on the
                          > comment box in the blog?


                          Ok. But I do see blogs and forums as serving a different role.

                          My real interest was any arguments you had for saying reinforcement
                          learning wasn't necessary for AGI.

                          You indicated a narrow interest in solving only scientific problems
                          with regards to AI, whereas I see AI as much broader than that, and
                          feel the ability to solve scientific problems evolved out of, and
                          makes use of, mechanisms for solving everyday human problems. Thus
                          I have an interest in what the human brain can tell us about any
                          notion of building "thinking machines".

                          Scientific thinking isn't all about induction either.

                          As far as I know there are no mechanical rules of induction
                          that can generate hypotheses and theories from empirical data.

                          I am not a mathematician, but aren't imagination and free invention
                          part of the process, even if the results are subject to
                          being validated by deductive reasoning?

                          We use logic for emotional reasons, to get what we want, and
                          when logic fails to achieve that need we reject it; at least most
                          people do.


                          You wrote:
                          > My claim here is thus the independence of autonomous operation from
                          > the basic definition of intelligence!

                          If you mean a logical processing of data, I guess you are right, but
                          we are more than that, although you don't seem interested in that.

                          I have just googled the "argument mapping" you alluded to; it seems
                          interesting as a means to unscramble a bunch of conflicting ideas.

                          JohnC
                        • Eray Ozkural
                          Message 12 of 12, Feb 3, 2011
                            On Thu, Feb 3, 2011 at 10:23 PM, jgkjcasey <jgkjcasey@...> wrote:
                            > --- In ai-philosophy@yahoogroups.com, Eray Ozkural <erayo@...> wrote:
                            > >
                            > > Hi John,
                            > >
                            > > Would you also please consider posting your comment on the
                            > > comment box in the blog?
                            >
                            > Ok. But I do see blogs and forums as serving a different role.


                            Thanks
                             
                            > My real interest was any arguments you had for saying reinforcement
                            > learning wasn't necessary for AGI.
                            >
                            > You indicated a narrow interest in solving only scientific problems
                            > with regards to AI, whereas I see AI as much broader than that, and
                            > feel the ability to solve scientific problems evolved out of, and
                            > makes use of, mechanisms for solving everyday human problems. Thus
                            > I have an interest in what the human brain can tell us about any
                            > notion of building "thinking machines".

                             
                            OK
                             
                            > Scientific thinking isn't all about induction either.


                            I think we now have a lot of reason to think induction is at the basis of all scientific thinking.
                             
                            > As far as I know there are no mechanical rules of induction
                            > that can generate hypotheses and theories from empirical data.

                            There are, in fact. That's what Levin Search is.
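For readers unfamiliar with it, the schedule behind Levin search (universal search) can be sketched on a deliberately tiny program space. This is an assumption-laden toy, not the system under discussion here: the "programs" are strings over a two-instruction language, and in phase i every program p gets a time budget proportional to 2**(i - len(p)), so shorter programs are tried earlier and longer ones get exponentially less time.

```python
from itertools import product

# Tiny program space: each instruction maps an integer to an integer.
OPS = {'+1': lambda x: x + 1, '*2': lambda x: x * 2}

def run(program, x, max_steps):
    # Charge one step per instruction; refuse to run past the budget.
    if len(program) > max_steps:
        return None
    for op in program:
        x = OPS[op](x)
    return x

def levin_search(x, target, max_phase=12):
    # Phase i: every program of length l <= i runs for 2**(i - l) steps.
    for phase in range(1, max_phase + 1):
        for length in range(1, phase + 1):
            budget = 2 ** (phase - length)
            for program in product(OPS, repeat=length):
                if run(program, x, budget) == target:
                    return program    # first (roughly shortest) solution
    return None

print(levin_search(3, 13))
```

Here it finds ('*2', '*2', '+1'), i.e. (3*2)*2+1 = 13. The toy halts on every program, so the step budget never actually bites; in the real construction it is what keeps non-halting programs from stalling the search.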
                             

                            > I am not a mathematician, but aren't imagination and free invention
                            > part of the process, even if the results are subject to
                            > being validated by deductive reasoning?

                            Right, and in a cognitive architecture we are actually modeling imagination and invention based on memory. So, as in a human being, experience colors the imagination. Yet imagination is essentially infinite!

                            The results cannot always be checked against something. If it's a well-defined problem then we can check them, and we can most certainly solve it with Levin Search.

                            But for an unsupervised task there is no such check, so you have to rely on your predictions, and perhaps estimate how likely they are to be correct.

                            That is to say, it is better to rely on a principle of induction, which prevents our scientific minds from being cluttered with redundant information.
                             

                            > We use logic for emotional reasons, to get what we want, and
                            > when logic fails to achieve that need we reject it; at least most
                            > people do.

                            That's true

                             

                            > You wrote:
                            > > My claim here is thus the independence of autonomous operation from
                            > > the basic definition of intelligence!
                            >
                            > If you mean a logical processing of data, I guess you are right, but
                            > we are more than that, although you don't seem interested in that.


                            All right, we are doing lots of things; for instance, we walk, and if you are not solving that problem (learning to walk, etc.) then you are missing at least one part of how a human brain works. I am not at all modeling human-like behavior, so that is not a problem for me. However, in the future I can implement an autonomous agent using my algorithms, and then we will see whether they work well in that domain as well. If it's a true AGI, it should be able to deal with any problem, including that. It would be a good test of the system.
                             
                            > I have just googled the "argument mapping" you alluded to; it seems
                            > interesting as a means to unscramble a bunch of conflicting ideas.


                            Yes, and it's sometimes better to have a visual representation of those. There are many such conflicts, counter-arguments, counter-counter-arguments, and so forth in philosophy of AI, right?

                            Best,


                            --
                            Eray Ozkural, PhD candidate.  Comp. Sci. Dept., Bilkent University, Ankara
                            http://groups.yahoo.com/group/ai-philosophy
                            http://myspace.com/arizanesil http://myspace.com/malfunct
