Jan 25, 2007
Hi all,

It seems to me that "running a GA dynamically at each time step" is similar
to some work of mine from a while ago on a specific military action game
where the scenario conditions changed dynamically. More specifically, we
implemented a game that recreated an armed-forces plan to transport a
military vehicle, placed in a hostile enemy region (i.e., the scenario),
from an origin position to a destination one. Each scenario consisted of a
two-dimensional, non-toroidal, heterogeneous, hostile, dynamic grid-world:
heterogeneous because the terrain was not uniform, hostile because there
existed enemy agents whose mission was to destroy the vehicle, and dynamic
because the solution search tree changed continuously depending on the
actions executed by the vehicle and the enemy agents.

The purpose of the game was to move the military vehicle from the origin
location to the destination while avoiding the natural obstacles of the
world and direct confrontation with the enemy agents, in order to prevent
the vehicle's destruction. Initially, the vehicle had some resources (i.e.,
fuel, resistance, and time) that had to be managed carefully until it
reached the target. (We also implemented some off-line versions where the
vehicle itself was driven by an evolutionary algorithm; see
http://tracer.lcc.uma.es/problems/cgpf/The%20Computer%20Game%20Path%20Finding%20Problem.htm.)

In the on-line version of the game, we let a human player control the
vehicle and decide its actions in real time, while each of its enemies was
driven by a very simple evolutionary algorithm whose purpose was to
prevent, as soon as possible (i.e., by minimizing the game time), the
(player-controlled) vehicle from reaching the target. At each stage of the
game, each enemy had information about its (Euclidean) distance to both
the human player (i.e., the vehicle) and the target position. To speed up
the computation we considered only 4 possible enemy actions (enough for
our action game): shoot the vehicle, attack the vehicle to destroy it by
collision, move nearer the vehicle's destination, and move randomly (this
last action made the enemy behavior less predictable and introduced
intentional enemy errors in order to increase the user's satisfaction).

A chromosome represented one of the possible actions to be executed by the
enemy. Very basically, our fitness function calculated the quality of a
chromosome by considering the set of all possible situations, taking into
account both the distances and the vehicle's remaining resources. This was
done by giving weights to the best actions and penalizing the worst
situations for the enemy. The EA controlling the enemies ran successfully
in real time. The key to this on-line use was to employ a reduced number
of enemy agents (5 at most) and a reduced number of actions, so that the
EA could evolve very quickly, in less than a second (in addition, we used
only 10 generations and a population of only 5 chromosomes).
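For what it's worth, a minimal sketch of such a per-time-step enemy EA
might look like the following. Everything here is illustrative (the action
names, fitness weights, and selection scheme are my own assumptions); only
the four actions, the population of 5 chromosomes, and the 10 generations
come from the description above:

```python
import random

# Illustrative sketch only: the action names, fitness weights, and the
# selection scheme are assumptions; the four actions, the population of
# 5 chromosomes, and the 10 generations come from the post.

ACTIONS = ["shoot", "ram", "block_destination", "move_randomly"]

def fitness(action, dist_to_vehicle, dist_to_target, vehicle_resources):
    # Toy scoring: reward closing in on the vehicle or its destination,
    # and press the attack when the vehicle's resources run low.
    if action == "shoot":
        return 10.0 / (1.0 + dist_to_vehicle)
    if action == "ram":
        return 8.0 / (1.0 + dist_to_vehicle) + (1.0 - vehicle_resources)
    if action == "block_destination":
        return 6.0 / (1.0 + dist_to_target)
    return 1.0  # move_randomly: small constant utility for unpredictability

def choose_action(dist_to_vehicle, dist_to_target, vehicle_resources,
                  pop_size=5, generations=10):
    # Each chromosome encodes a single action; evolve by truncation
    # selection plus random replacement, then return the fittest survivor.
    score = lambda a: fitness(a, dist_to_vehicle, dist_to_target,
                              vehicle_resources)
    pop = [random.choice(ACTIONS) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        survivors = pop[: pop_size // 2 + 1]
        pop = survivors + [random.choice(ACTIONS)
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=score)
```

Calling choose_action once per game tick keeps the whole evolution inside
a single time step, which is what made the real-time use feasible.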

Part of this work was published in the following reference:

@incollection{afernandez:action-games-2004,
  author    = {Antonio J. Fern\'{a}ndez and Javier Jim\'{e}nez},
  title     = {Action Games: Evolutive Experiences},
  year      = {2004},
  booktitle = {Computational Intelligence: Theory and Applications},
  series    = {Advances in Soft Computing},
  editor    = {Bernd Reusch},
  publisher = {Springer},
  pages     = {487--501}
}

If anyone is interested in a PDF version of this paper, do not hesitate
to e-mail me.

best regards,
Antonio

==============================================
| Dr. Antonio J. Fernandez Leiva
| E-mail: afdez@...
|
| Departamento de Lenguajes y Ciencias de la Computacion
| E.T.S.I. Informatica (3.2.50)
| Campus de Teatinos
| 29071 - Malaga (SPAIN)
|
| http://www.lcc.uma.es/~afdez
|
| Phone: (+34) 95 213 33 15 Fax: (+34) 95 213 13 97
==============================================

At 17:40 24/01/2007 -0800, Russ Abbott wrote:

>Hi Simon,
>
>I think you misunderstood my question. Isaacs' work was published in 1965,
>long before GAs, and Hughes' work appears to be on evolving checker players
>in the traditional way: run a GA to produce a set of rules that the player
>will follow when playing. Neither one seems to involve running a GA
>dynamically at each time step.
>
>-- Russ
>
>On 1/24/07, Lucas, Simon M <sml@...> wrote:
>
> >
> > There are many possible variations on this theme -
> > prey / predator(s) problems were explored analytically
> > by Rufus Isaacs in his work on Differential Games,
> > and many others experimentally since then.
> >
> > The proposed solution is a *bit* similar to Monte Carlo
> > approach to playing Go, and a very similar method has
> > been used by Evan Hughes to play checkers.
> >
> > I don't have the exact references to hand - you'll have to
> >
> > Best regards,
> >
> > Simon Lucas
> >
> >
> > ________________________________
> >
> > From: genetic_programming@yahoogroups.com
> > [mailto:genetic_programming@yahoogroups.com] On Behalf Of Bob MacCallum
> > Sent: 24 January 2007 08:56
> > To: genetic_programming@yahoogroups.com
> > Subject: {Disarmed} Re: [GP] Re: GA and Sudoku
> >
> > so I just want to be clear (too lazy to follow the link): there isn't a
> > population of boats, instead the boat has a population of scenarios and
> > chooses the best one?
> >
> > On 1/24/07, Russ Abbott <Russ.Abbott@...> wrote:
> > >
> > > Hi All,
> > >
> > > I recently came across this example
> > > <https://netfiles.uiuc.edu/wdavis1/www/> of using a genetic algorithm
> > > in an agent-based model. It's about a decade old and was apparently
> > > assigned as a student homework exercise. Briefly, it's a simulation of
> > > a boat being chased by a fleet of torpedoes. (You can think of the
> > > boat and the torpedoes as agents in a simple agent-based world.) The
> > > boat is safe if it reaches its home base without getting hit. The boat
> > > is slower but more maneuverable than the torpedoes.
> > >
> > > Apparently the boat uses a GA to plot its course. At each time step it
> > > does something like select the best 20-step move sequence and take the
> > > first step. The torpedoes are stupid. At each time step they turn in
> > > the direction of the boat to the extent that they can. Because the
> > > torpedoes are so stupid and deterministic, the boat can simulate them
> > > perfectly in its GA.
> > >
> > > Has anyone seen this or anything like it? Does anyone know of similar
> > > work?
> > >
> > > -- Russ
> > >
>
>--
>-- Russ Abbott
>_____________________________________________
>Professor, Computer Science
>California State University, Los Angeles
>o Check out my blog at http://russabbott.blogspot.com/
>
Jan 25, 2007
Hi again,

You are right, Simon; here is the reference I mentioned:

@inproceedings{hughes:-CEC05,
  author    = {Evan J. Hughes},
  title     = {Checkers using a Co-evolutionary on-line
               evolutionary algorithm},
  booktitle = {Congress on Evolutionary Computation},
  editor    = {David Corne et al.},
  publisher = {IEEE Press},
  year      = {2005},
  pages     = {1899--1905}
}

best regards,
Antonio

At 14:50 25/01/2007 +0000, Lucas, Simon M wrote:

>Hi Russ,
>
>
>The prey / predator work of Isaacs was
>followed up using evolutionary algorithms
>by George Burgin (working with Larry Fogel)
>among others, though I can't recall exactly
>how similar this was to the scenario you mention.
>
>In addition to Evan's work on Brunette 24,
>he has also explored using EAs at each move;
>I think he had a paper on this at one of the recent
>CECs (05 or 06).
>
>Best regards,
>
>Simon
>
>________________________________
>
>From: Russ Abbott [mailto:russ.abbott@...]
>Sent: 25 January 2007 01:40
>To: Lucas, Simon M; genetic_programming@yahoogroups.com
>Subject: {Disarmed} Agents and dynamic GAs
>
>[...]
Jan 25, 2007
Thanks for the reference. The actual paper is available here:
http://ieeexplore.ieee.org/iel5/10417/33080/01554919.pdf?isnumber=&arnumber=1554919

If I understand it correctly, what's evolved are sequences of 100 numbers,
each in the range [0 .. 1). Each number is translated into a checkers move
by multiplying it by the number of moves available and rounding. Thus
although a sequence of numbers can be used to play a game, it doesn't
represent anything like a game strategy. Depending on the opponent's moves,
the n-th move in the sequence can be virtually anything. Yet these sequences
are given fitness values depending on how they do against all sequences of
opponent moves. Two populations representing the two players are played off
against each other when the GA is run.
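If that reading is right, the decoding step could be sketched roughly as
follows (the move-list and state-update functions are invented for
illustration, and the paper's exact rounding rule may differ):

```python
def decode_move(x, legal_moves):
    # x is a gene in [0, 1); scale it by the number of legal moves and
    # round, clamping so values near 1.0 cannot index past the end.
    i = min(int(round(x * len(legal_moves))), len(legal_moves) - 1)
    return legal_moves[i]

def play_sequence(genes, get_legal_moves, apply_move, state):
    # Replay a fixed-length chromosome of numbers from a given position;
    # the same gene yields a different move whenever the move list changes.
    for x in genes:
        moves = get_legal_moves(state)
        if not moves:
            break  # game over before the chromosome ran out
        state = apply_move(state, decode_move(x, moves))
    return state
```

This makes the point above concrete: the mapping from gene to move depends
entirely on the current list of legal moves, so the same chromosome plays
different moves against different opponents.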

Since only one move is made after any run (the first move in the sequence
of numbers), the remaining elements in the sequence serve only to evaluate
the outcome generated by that first move. So rather than using the move of
the best-performing sequence, Hughes modified the system to select the most
frequently chosen first move in the population. The reasoning is presumably
that the population as a whole evolves strings of moves that are best
overall, so the most commonly selected first move should be the best one.
This approach apparently worked somewhat better than the more traditional
scheme in which the first move in the best string determines which move to
make. But neither approach worked well when compared to a more conventional
minimax player.
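The majority-vote step itself is tiny; assuming each chromosome's first
gene has already been decoded into a concrete move, it amounts to:

```python
from collections import Counter

def vote_first_move(first_moves):
    # Pick the first move that the largest share of the evolved population
    # agrees on, instead of the single best individual's first move.
    return Counter(first_moves).most_common(1)[0][0]
```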

In such turn-taking games, one can't expect a GA to do better than an
approach (like minimax) which explores the entire game tree to some
reasonable depth. The hope (as expressed in the paper) is that the GA
approach would explore the game tree to a greater depth than minimax since
it explores only selected paths through the tree. The results as reported
in the paper don't seem to support this hypothesis except when minimax is
limited to a very shallow depth, e.g., 2 ply.

On the other hand, the GA in the boat-torpedo example does very well. The
boat-torpedo example is different from a game like checkers in that it isn't
symmetric. The boat uses a GA; the torpedoes use a more deterministic
strategy. If both sides in a predator/prey exchange use the same search
strategy, it's unlikely that a GA will help since both sides are searching
the same search space. A GA seems more useful in asymmetric scenarios
against perhaps stronger but less flexible opponents.

-- Russ

On 1/25/07, "Antonio J. Fernández Leiva" <afdez@...> wrote:
>
> [...]
>
> At 14:50 25/01/2007 +0000, Lucas, Simon M wrote:
>
> Hi Russ,
>
> No, I understood your question.
>
> The prey / predator work of Isaacs was
> followed up using evolutionary algorithms
> by George Burgin (working with Larry Fogel)
> among others, though I can't recall exactly
> how similar this was to the scenario you mention.
>
> In addition to Evan's work on Brunette 24,
> he has also explored using EAs at each move;
> I think he had a paper on this at one of the recent
> CECs (05 or 06).
>
> Best regards,
>
> Simon
>
> [...]

Mar 7, 2007
--- In genetic_programming@yahoogroups.com, "Russ Abbott"
<Russ.Abbott@...> wrote:
> [...]
>
This probably comes a bit late, but recently our paper, which features
a dynamic GA for solving the Mastermind game, has been published. It's
entitled "Finding a needle in a haystack using hints and evolutionary
computation: the case of evolutionary MasterMind", and it's available
at http://dx.doi.org/10.1016/j.asoc.2004.09.003

Cheers,

JJ