
Re: New paper on why modules evolve, and how to evolve modular artif

  • Ken
    Message 1 of 10 , Mar 2, 2013
      Hi Jeff, Stef, and Martin, I hope you don't mind, since all of you addressed me, if I try to reply to all of you at once to keep the thread (and my brain) from branching in three directions. Many of your points follow a similar theme, so I think it makes sense to respond collectively. This response is practically an article, but oh well, it's nice to get the ideas down even if it's a bit too long (it just shows you are asking me great questions that are challenging).

      Martin offers a good unifying question: "My question to Ken would be here: what is the additional ingredient that makes a
      bias in the encoding better / more plausible than *any* implementation of the bias in the fitness function?"

      After some thought, I believe one of the difficulties in this discussion is that we often conflate artificial EC-style fitness-based experiments with open-ended scenarios when these are entirely different situations (I take blame myself as well for this tendency). That is, when we talk about something being "better" or "solving" a problem, we are often talking about artificial and unnatural experimental setups that have little relationship to open-ended evolutionary scenarios like nature.

      Why does that matter? It matters because in discussions that try to dovetail engineering-oriented mechanisms (like a connectivity penalty) with explanations of what happened in nature (such as the emergence of modular connectivity), it cannot simply be ignored that nature in fact is first and foremost an open-ended evolutionary system, and that that open-ended dynamic is a significant factor in the explanation of its products. What that means to me is that if you think your proposed mechanism actually *explains* something that happened in nature, then it is essential that the explanation speaks to the question of how the particular mechanism you are advancing combined historically with the open-ended evolutionary dynamics in nature to produce the result you expect.

      But because we conflate very closed-ended artificial scenarios with monumentally open-ended searches like nature, it leads to a lot of dangerous inferences. So ideas that would make sense in one context end up sounding reasonable when they don't really make any sense in the other context. The difficulty of squaring fitness-pressure objectives with nature is more serious when you consider it in this perspective. (Note that I am defining "fitness pressure" as selection based on relative performance to other organisms on a measure of some property that varies over a range of possible values, such as degree of connectivity.)
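      To make that definition concrete, here is a minimal sketch of what I mean by "fitness pressure" (in Python; the bit-string genome, the connectivity measure, and the survival cutoff are all hypothetical and purely illustrative, not any particular system's):

```python
import random

def connectivity(genome):
    # Hypothetical property measure: fraction of possible connections present.
    return sum(genome) / len(genome)

def fitness_pressure_selection(population, keep=0.5):
    # Selection based on performance relative to the other organisms on a
    # measured property (here, lower connectivity ranks higher). Note that
    # the same pressure is applied identically every generation.
    ranked = sorted(population, key=connectivity)
    return ranked[:max(1, int(len(ranked) * keep))]

# Toy usage: 20 random 10-bit genomes; the sparsest half survives.
population = [[random.randint(0, 1) for _ in range(10)] for _ in range(20)]
survivors = fitness_pressure_selection(population)
```

The point of the sketch is that the ranking criterion never changes: the same relative judgment is re-applied forever, which is exactly the "eternal" quality at issue.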

      The problem is that fitness pressures that preserve a degenerate niche for eternity are definitively not like nature, so whether they work or not, or whether I am somehow indicting them or not, should not be the issue. The issue should be that we should be worried that nature does not use a mechanism even remotely like that yet still achieves the "same" result (i.e. beautiful variations of pseudo-modular design). If you are advancing the hypothesis that this kind of constant "pressure" is somehow essential to the emergence of modularity in nature, then you must somehow explain why you needed to use a setup with these bizarre and unnatural side effects (like eternal degenerate niches) instead of whatever nature actually supposedly does use.

      And the fact that you cannot come up with anything similar to what nature does, i.e. something that does not involve creating such a deadweight pocket, reasonably may suggest that your hypothesis about nature could be wrong. That is, it may not be this endless "fitness pressure" after all that explains what is happening there, because fitness pressure in general in EC is almost always creating some kind of unintended deadweight niche.

      I think it is particularly fascinating that in fact nature obtains not really the same result, but a far more awesome result (in terms of modularity or anything else), without such an ad hoc mechanism.
      If you think about it, as long as you insist on cheering for fitness pressure, it prevents you from asking how this could be - how is it possible that you can get these kinds of results without such an unnatural side effect?

      I need to emphasize here the difference between being a better engineering mechanism and a better explanation. I am focusing now primarily on the explanatory power of the proposed mechanism. But because nature is so much more accomplished than anything artificial, the explanatory gap here implies a dangerous potential to overlook what will ultimately amount also to a major engineering gap as well. There is no evidence that anything except nature in its open-ended way can create anything like the connectivity of natural brains.

      Furthermore, it is always important to acknowledge nuance and subtlety in nature, which has not really been acknowledged yet in this conversation. Nature is almost never all one way. So it is misleading and potentially confusing to talk about brains as simply modular or not. The recent discussion on the Connectionists list, where scientists have been giving all kinds of subtle and conflicting perspectives on modularity in natural brains in response to Jeff and JBM's article, echoes this nuance. The beauty of the human brain to me is not that it is modular, but that it is modular to an extent, but not entirely so, and what modularity there is is hard to pin down. This kind of nuance is not to me a mere footnote to the achievement of nature, but the central point of it: what nature achieves in spades is nuance.

      And the idea of a constant pressure of any kind is directly in conflict with the achievement of nuance, because nuance is a delicate balancing act that is easily tipped off its perch if constant pressure in *any* direction is applied without relief. Jeff is concerned with short-term versus long-term issues (which isn't really as clearly defined in an open-ended context), but even if we honor that concern, it is potentially naïve to believe that pressure in either direction from the start, or even an encoding bias in either direction from the start, is somehow going to directly align with the level of nuance observed millions of years in the future. However, while fitness pressure is eternal, encoding bias is malleable, so pushing in the "right" direction from the start is not essential for encoding. It's more like a hint to get you started, whereas fitness pressure is more like a gun forever pointed at your back.

      For example, who is to say that we should not have the opposite of Jeff's short-term worry – he worries that an encoding bias towards low connectivity "might evolve away because of fitness pressure," but can't we just as easily worry about *too much* modularity? In that case, Jeff's evil twin "opposite-Jeff" might be worried that an initial encoding bias towards *high* connectivity might evolve away. It is neither clear nor established fact (see Connectionists) that the exact form of the final "solution" is particularly modular or non-modular. What it is, is subtle and somewhere in the middle. So none of this kind of panicking about what nature "needs" to harass it into such an astronomically complex future configuration makes much sense. We cannot say definitively the extent to which the final structure is "closer" to modular or non-modular, whatever that even means. Fortunately, an encoding that begins with a bias towards modularity can tone it down as needed, or ramp it up even more.

      Yet Jeff also worries about the radiation of evolutionary lineages being blocked because of implicit penalties. He says, "You assume that evolution will branch out and explore all these options even in the face of fitness penalties for that exploration. But that is not how evolution works."

      But branching out and exploring many (not necessarily all of course) of the options is the only way that natural evolution works. That's what open-endedness is (unless you don't believe natural evolution to be open-ended). The tree of life is ever-branching. The worry about "fitness penalties" here is a red herring because it originates from closed-ended artificial EC experiments where you can end up on the wrong path. But nature does not have any single "fitness penalty" or "right path" throughout its run because the landscape is always changing as it branches and branches. For example, before trees, being an extremely tall herbivore would incur a fitness penalty, but after trees giraffes were perfectly viable. The penalty is not consistent.

      More generally, how can there be what you call a "default fitness penalty" if there is no final goal? Penalty with respect to what? Keep in mind here that the origin of pseudo-modular organization in nature likely predates the emergence even of neurons. The first neural structures piggy-backed on previously evolved organizational structure that likely influenced the subtle pseudo-modularity of connectivity from the start for reasons entirely unrelated to connection cost because these organizational conventions evolved long before neurons even existed: the bias in the encoding was in part already there.

      Which brings me back to the origin of all such conventions - canalization - which is the key here. Stephane talks about a bias that exists "all along" in evolution, but ultimately the ability to *change* bias eclipses choosing one up front. Again, in the context of artificial scenarios, it's a good engineering hack to force in some kind of bias into the encoding or into fitness that you expect to control things for a moderate number of generations. But in nature the scope is so vast that it can't be the final word; it's only the initial hint. While that hint can help, nature in the long term needs to choose and commit to its own biases, and to slither out of them from time to time, and only encoding offers that potential. Canalization is the way nature can make long-term (though not necessarily permanent) commitments. It's how conventions are established in specific lineages.

      In a genuine open-ended scenario like nature, modularity will emerge and proliferate over vast stretches of time only if modularity leads to more species emerging. Of course, the species we observe at the end are the consequence of organizational principles that supported generating many species (which is almost tautological). So it need not relate to being better or worse, or "solving" anything. It has to do with open-ended dynamics. Air will escape through a hole in a balloon if you wait long enough. If that hole leads to a whole other world, you will eventually see that other world. Modularity, to the extent it actually exists in nature, has served as such a hole. But the only way such a hole can be exploited, the only way you can keep focused on that area, is if it can be canalized. An encoding that can be canalized allows you to maintain the subtle convention that is responsible for spreading diversity.

      Stef nevertheless reminds me that "selection pressure has strong impact," and I entirely agree of course. But there are two very different classes of selection pressure. One is about pushing you towards the new, and the other is about forcing you to commit to the old. There are many ways to push towards the new, and novelty search is just one. In contrast, these things we call "fitness pressures" (whether part of a MOEA or not) are the opposite – they are toxic straitjackets applied for eternity. They presume that we know what we need with no nuance whatsoever eons before anything remotely related has appeared. Again, in engineering, fair enough – it can work. But it is not an *explanation* of the products of open-ended evolution in nature, and likely is not a good way to produce open-endedness artificially either.

      So the only escape I see here from my argument is if you can argue somehow that you can do all these amazing things *without* open-ended evolution. Then all your pressures and constraints might make sense. But I don't think you can argue that, which, to finally circle back to Martin's broad question, is why encoding is ultimately superior. A canalizable encoding is the perfect partner for an open-ended process. But it is not (as Martin puts it) because it makes a particular "bias in the encoding better." Rather, it is because encoding lets evolution delicately modify its own biases on the fly and explore all of them in parallel. That is, the ability to change, the ability to be flexible, to commit but to uncommit in increments of subtlety, to radiate diversity while still committing to certain biases in certain chains, is the power that made everything happen. Any forced competition, any constant bias, any eternal relative judgment, which are all things that constant fitness pressure offers, will diminish that flexibility. It will not necessarily destroy the open-ended process, but it will reduce its power and ultimately therefore cannot explain or account for it.


      Best,

      ken


      --- In neat@yahoogroups.com, "martin_pyka" <martin.pyka@...> wrote:
      >
      > I just would like to point out that, in my opinion, part of the disagreement between you and Jeff and Ken comes from the fact that Ken somehow made the statement "it is better to implement the bias in the encoding than in the fitness function" but in actual fact argues for a specific type of implementation in the encoding.
      >
      > Thus, I think the discussion should not center around the general question of whether a bias should be incorporated in the fitness or in the encoding, because in both areas there are better and worse ways to do it. The question is more: why is a specific implementation (that Ken obviously has in mind; my impression was he thought about approaches similar to LEO) better than another?
      >
      > My question to Ken would be here: what is the additional ingredient that makes a bias in the encoding better / more plausible than *any* implementation of the bias in the fitness function?
      >
    • martin_pyka
      Message 2 of 10 , Mar 5, 2013
        Thank you for this long answer which has a lot of inspiring ideas that I would like to pick up. But let me first point out the different topics that have been discussed before because I am afraid that the arguments you brought forward could be assigned to the wrong topics ;)

        I think there are several questions (or topics of interest) that are discussed here.

        Related to Jeff's paper:
        a) Does connection cost represent a natural bias necessary for the emergence of modules?

        I think this question is hard to answer and remains controversial ;) Prior beliefs can lead us to experiments that can support or refute this theory.

        b) When someone wants to show that connection cost leads to modularity, does it matter whether a bias is implemented in the fitness function or in the encoding *given the currently known techniques in both domains*?

        My impression is that Jeff and Stef mostly address this question and understand Ken's arguments as a response to this question, while Ken argues and understands Jeff's and Stef's arguments as a response to the question:

        c) What does a biologically plausible mechanism for any bias look like?

        I think Ken's long answer here is a strong statement for encodings that are able to canalize (and abandon) certain design principles, and I mostly agree with Ken's opinion related to question c. However, I don't think that an artificial encoding with such good canalizing properties exists yet. So you are really arguing for spending more time on developing such an encoding instead of investigating the influence of a bias on network function / architecture with available means. Because, when we have such an encoding, it might help us to understand how strong a certain bias (like connection cost) actually is in parts of the evolutionary process, and to what extent we can find modularity in the network.

        So, as far as I got it, Ken does not say "use the encoding instead of the fitness function to bias the search towards modularity" (as previous postings suggested) but rather: "An encoding with certain properties (e.g. canalization) would allow us to reassess the value of concepts like 'connection cost' and 'modularity'"!?


        And in direct response to Ken's posting some random thoughts:

        I also think that open-ended evolution is driven by a canalizable encoding, and I think one can come up with many detailed properties that the encoding has to fulfill in order to be regarded as canalizable. The challenge is now: how to develop such an encoding, given the fact that all these properties are emergent properties of the encoding and therefore don't lead in a deductive manner to the particular form of the encoding? I think we face similar problems here as at the lower abstraction levels.

        My first idea was: let's apply evolutionary algorithms to encodings to find the best encoding (something similar already exists for L-systems), but what would such a meta-language look like?

        I think that the idea of CPPNs already represents a good starting point for canalizable encodings, but its implementation in HyperNEAT does not seem to me to be carried through consistently enough, and it loses a lot of robustness because, for example, self-organizing mechanisms (like structural plasticity) are not implemented. Of course, it is always easier to provide critique than to suggest a better solution ;-). I wish I had more time to work on these things on my own.
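        (To illustrate the kind of encoding I mean, here is a minimal hand-wired CPPN sketch in Python. The particular functions and the fixed topology are my own toy choices for illustration; in NEAT/HyperNEAT the CPPN topology is itself evolved.)

```python
import math

def cppn(x1, y1, x2, y2):
    # A CPPN composes pattern-producing functions; querying it at the
    # coordinates of two substrate neurons yields their connection weight.
    h1 = math.sin(2.0 * (x1 - x2))       # periodic function -> repetition
    h2 = math.exp(-((y1 - y2) ** 2))     # Gaussian -> locality bias
    return math.tanh(h1 * h2)

# Query every pair of points on a small 3x3 substrate: the whole
# connectivity pattern is generated from one compact function, so a
# small change to the function re-biases the entire pattern at once.
coords = [(x / 2.0, y / 2.0) for x in range(3) for y in range(3)]
weights = {(a, b): cppn(*a, *b) for a in coords for b in coords}
```

The point is that regularities like repetition and locality fall out of the encoding's structure rather than being enforced by the fitness function.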

        Best,
        Martin
      • stephane.doncieux
        Message 3 of 10 , Mar 6, 2013
          Hi Ken,

          We are getting closer to the point.

          I think one ambiguity comes with the expectations of models like that of Jean-Baptiste, Jeff and Hod. I agree that it does not answer the whole question of open-ended evolution (even with regard to modularity), but I don't think that was their intention (I haven't seen such claims in the paper). I don't believe in a global model that would take into account every single aspect of natural evolution, at least not until every piece of the puzzle has been well understood. This is the classical reductionist approach of trying to separate each aspect. It is clearly not easy to apply such a methodology to these questions, as many different aspects are dependent on one another. It nevertheless remains a classical and efficient approach in science. Succeeding in isolating a single effect is, for me, a major breakthrough because of the difficulty of doing it. It should actually be what we are looking for, because the contribution is then highly localized, which makes it easier to build upon, and this is really what I like in Jeff, JB and Hod's work.

          The question of knowing to what extent multi-objective evolutionary algorithms are a good model of natural (i.e. open-ended) evolution is not critical in the work we are talking about. MOEAs have interesting features, but also drawbacks, as you have mentioned, Ken. Jeff, JB and Hod have proposed a mechanism to address these limitations with their stochastic domination, and I think it is enough to address their problem. The results of the article should be neither overestimated nor underestimated. What it shows is that a pressure towards low connectivity (i.e. a goal-independent selection pressure) has the nice side effect of creating more modular structures. This is an interesting and valuable result. How to use such pressures in an open-ended perspective is another (and in my opinion different and also interesting) question. I completely agree with you, Ken, when you say that this question is not properly addressed by the model, but once more I am not sure that it is their point.
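          (For readers following along: as I understand their stochastic domination, the secondary connection-cost objective only participates in a Pareto-dominance comparison with some probability, which softens the pressure. Here is a rough Python sketch of that idea, not their exact implementation; the objective tuples and the probability value are illustrative.)

```python
import random

def dominates(a, b, p_cost=0.25, rng=random):
    # a and b are (performance, connection_cost) pairs; performance is
    # maximized, connection cost minimized. The cost objective is only
    # included in the comparison with probability p_cost, so the pressure
    # towards low connectivity is applied stochastically, not always.
    objs_a, objs_b = [a[0]], [b[0]]
    if rng.random() < p_cost:
        objs_a.append(-a[1])   # negate so "higher is better" throughout
        objs_b.append(-b[1])
    no_worse = all(x >= y for x, y in zip(objs_a, objs_b))
    better = any(x > y for x, y in zip(objs_a, objs_b))
    return no_worse and better
```

With p_cost=0 this reduces to ordinary single-objective comparison; with p_cost=1 it is full Pareto dominance over both objectives.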

          I agree with your point, Ken, that a constant fitness pressure that remains the same over the whole course of evolution is very unlikely to lead to a truly open-ended evolution. By the way, it seems to me that you suggest discarding all fitness pressures (i.e. all goal-oriented objectives, be they constant or not). That is in line with novelty search, but I don't think that it is a good idea. You will have solved one part of the problem, but also introduced other limitations. One of them is related to the size of the search space. If your behavior space is large enough, you will just begin to do something interesting and then switch to something else without trying to push what you have discovered to its limits.
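          (For concreteness, the score such a search typically selects on can be sketched as follows in Python; the behavior descriptors and the choice of k are illustrative. Note that nothing in it says what counts as good, only what counts as new, which is exactly the source of my worry above.)

```python
def novelty(behavior, others, k=3):
    # Mean Euclidean distance from a behavior descriptor to its k nearest
    # neighbors among the current population plus an archive of past
    # behaviors. Selecting on this score rewards doing something new,
    # with no fixed objective anywhere in the loop.
    dists = sorted(
        sum((x - y) ** 2 for x, y in zip(behavior, other)) ** 0.5
        for other in others
    )
    nearest = dists[:k]
    return sum(nearest) / len(nearest)

# A behavior far from everything seen so far scores higher than one
# sitting in an already-crowded region of behavior space.
archive = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0)]
```

The larger the behavior space, the more directions score as novel, which is why the search can wander on before exploiting any one discovery.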

          Lots of biologists actually think in terms of fitness pressures and try to find what fitness pressure has favored a given transition (the appearance of an organ, of a particular bone, etc.). Lots of those pressures have been proposed in the literature to explain the evolution of birds' wings, legs, etc. Does it mean that such pressures were the same throughout all of evolution? Of course not. It does not mean either that they haven't played a critical role at some point in evolution. Thinking in terms of fitness pressures is no more than a convenient way to model what happens in evolution at a relatively local scale. You are right, Ken, to say that most of our work remains local. If I were to study open-ended evolution, I would try to study under what conditions these changes in fitness pressure occur, and I would propose algorithms to reproduce them. In this perspective, any meaningful fitness pressure is interesting and is a potentially significant piece of the puzzle. From your point of view, I guess that your research program is different. Which one will lead to a better understanding of natural evolution is a question that we cannot decide now. Interestingly, your work on novelty search with local competition is actually a nice way to combine both aspects.

          By the way, I really think that it is useless to work on the encoding without also studying selection pressures that would get the best out of it (be they constant or not, goal-oriented or goal-independent). So that is why I disagree with the dichotomy you make between encoding and fitness pressures. Anyway, considering all the work you have done in the field on both aspects, I guess that it is more a question of terminology than a deep disagreement.

          Best,

          stef

          --- In neat@yahoogroups.com, "Ken" <kstanley@...> wrote:
          >
          > Stef nevertheless reminds me that "selection pressure has strong impact," and I entirely agree of course. But there are two very different classes of selection pressure. One is about pushing you towards the new, and the other is about forcing you to commit to the old. There are many ways to push towards the new, and novelty search is just one. In contrast, these things we call "fitness pressures" (whether part of a MOEA or not) are the opposite – they are toxic strait jackets applied for eternity. They presume that we know what we need with no nuance whatsoever eons before anything remotely related has appeared. Again, in engineering, fair enough – it can work. But it is not an *explanation* of the products of open-ended evolution in nature, and likely is not a good way to produce open-endedness artificially either.
          >
          > So the only escape I see here from my argument is if you can argue somehow that you can do all these amazing things *without* open-ended evolution. Then all your pressures and constraints might make sense. But I don't think you can argue that, which, to finally circle back to Martin's broad question, is why encoding is ultimately superior. A canalizeable encoding is the perfect partner for an open-ended process. But it is not (as Martin puts it) because it makes a particular "bias in the encoding better." Rather, it is because encoding lets evolution delicately modify its own biases on the fly and explore all of them in parallel. That is, the ability to change, the ability to flexible, to commit but to uncommit in increments of subtlety, to radiate diversity while still committing to certain biases in certain chains, is the power that made everything happen. Any forced competition, any constant bias, any eternal relative judgment, which are all things that constant fitness pressure offers, will diminish that flexibility. It will not necessarily destroy the open-ended process, but it will reduce its power and ultimately therefore cannot explain or account for it.
          >
          >
          > Best,
          >
          > ken
          >
          >
          > --- In neat@yahoogroups.com, "martin_pyka" <martin.pyka@> wrote:
          > >
          > > I just would like to point out that, in my opinion, part of the disagreement between you and Jeff and Ken comes from the fact that Ken somehow made the statement "it is better to implement the bias in the encoding than in the fitness function" but in actual fact argues for a specific type of implementation in the encoding.
          > >
          > > Thus, I thing the discussion should not center around the general question whether a bias should be incorporated in the fitness or in the encoding because in both areas there are better and worse ways to do it. The question is more, why a specific implementation (that Ken has obviously in mind, my impression was he thought about approaches similar to LEO) is better than another.
          > >
          > > My question to Ken would be here: what is the additional ingredient that makes a bias in the encoding better / more plausible than *any* implementation of the bias in the fitness function?
          > >
          >
        • Jeff Clune
          Message 4 of 10 , Mar 7, 2013
            Thanks for clarifying your positions Ken. I believe we have reached the point at which reasonable minds can respectably disagree, as you put it. Were I to respond, I would mostly repeat myself, since I have already expressed earlier in this thread the ideas that I believe answer your comments and questions. That usually is an indication of coming to an agreement, even if the agreement is to disagree. :-)

            That said, I'll summarize my main points: I do think nature has default fitness penalties, and all we have done is change the default to one that encourages modularity and evolvability. As such, I don't think we're subject to general attacks on fitness pressures: by default there are fitness pressures (e.g. a cost for materials); we just change them to be better with respect to modularity and evolvability. Moreover, I think a connection cost encourages modularity in a way similar to how things work in nature, even if nature more resembles a combination of performance and costs into a single objective than a multi-objective algorithm. Our work is not dependent on an MOEA: we suspect a connection cost will encourage modularity in general, irrespective of the details of the algorithm. As I've mentioned, I think nuance and subtlety are possible with a connection cost because nature can pay for more connectivity via performance gains, allowing all sorts of exceptions to the general rule.

            I still don't see how an initial encoding bias will have any long-term effect on evolution, so the only hope for canalization and helpful encoding biases is via fitness. I thus believe that fitness penalties are a great way to encourage good encodings via canalization. For example, we know that default penalties exist (such as a default tendency NOT to produce modularity, which has been empirically observed in our systems repeatedly), and nature will not explore those areas unless the fitness function is changed (just as nature never explored certain classes of extremely inefficient metabolisms, or birds with bones made of lead).

            Ultimately, since the effect of your initial encoding-bias hint will vanish over millennia, your argument amounts to saying that we should not try to bias evolution at all. But this whole conversation was about the best way to bias evolution, so maybe your position is that we shouldn't be biasing evolution at all. That's fine if we don't have any default fitness penalties in the environment that will hurt us, but we have no guarantee of that. As you point out, nature has done impressive things, but it also had a very different environment than the environment in our setups so far. What we've shown in this paper is that some things we originally thought just happened to exist in nature, but were unnecessary vis-à-vis evolvability (such as a cost for materials), may actually play a role in the evolution of modularity and evolvability. I think it is worthwhile to investigate what else we may have skipped from the natural world that may be an important driver of evolvability and, ultimately, open-ended evolution.
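To make the combination of performance and cost concrete, here is a toy sketch of a single-objective fitness with a connection cost. The weighting `lambda_cost` and all the numbers are invented for illustration; this is not the actual setup from the paper.

```python
def fitness(performance, num_connections, lambda_cost=0.05):
    """Toy combination of task performance and a wiring cost.

    A network can 'pay' for extra connections with performance gains,
    which is what allows exceptions to the sparsity-encouraging rule.
    """
    return performance - lambda_cost * num_connections

# A sparse network with slightly lower raw performance can still outrank
# a denser one once the wiring cost is charged.
sparse = fitness(performance=0.90, num_connections=10)  # 0.90 - 0.50 = 0.40
dense = fitness(performance=0.95, num_connections=40)   # 0.95 - 2.00 = -1.05
```

In a multi-objective variant, performance and connection count would be kept as separate objectives rather than summed, but the underlying cost pressure is the same.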

            A final point: your criticism that 'nothing we learn matters if we don't have an open-ended algorithm' discounts all the work that has been done to date in computational evolutionary biology, not just our paper. You may be right that all of our lessons are worthless once we figure out an open-ended evolutionary algorithm, but I doubt that will be the case. I think many interesting, worthwhile insights have been gained through simulated evolution, and that much of that work will prove informative even if the underlying algorithms change.


            Best regards,
            Jeff Clune

            Assistant Professor
            Computer Science
            University of Wyoming
            jeffclune@...
            jeffclune.com

            On Mar 2, 2013, at 4:16 PM, Ken <kstanley@...> wrote:

             



            Hi Jeff, Stef, and Martin, I hope you don't mind since all of you addressed me if I try to reply to all of you at once to keep the thread (and my brain) from branching in three directions. Many of your points follow a similar theme so I think it makes sense to respond collectively. This response is practically an article, but oh well, it's nice to get the ideas down even if it's a bit too long (it just shows you are asking me great questions that are challenging).

            Martin offers a good unifying question: "My question to Ken would be here: what is the additional ingredient that makes a
            bias in the encoding better / more plausible than *any* implementation of the bias in the fitness function?"

            After some thought, I believe one of the difficulties in this discussion is that we often conflate artificial EC-style fitness-based experiments with open-ended scenarios when these are entirely different situations (I take blame myself as well for this tendency). That is, when we talk about something being "better" or "solving" a problem, we are often talking about artificial and unnatural experimental setups that have little relationship to open-ended evolutionary scenarios like nature.

            Why does that matter? It matters because in discussions that try to dovetail engineering-oriented mechanisms (like a connectivity penalty) with explanations of what happened in nature (such as the emergence of modular connectivity), it cannot simply be ignored that nature in fact is first and foremost an open-ended evolutionary system, and that that open-ended dynamic is a significant factor in the explanation of its products. What that means to me is that if you think your proposed mechanism actually *explains* something that happened in nature, then it is essential that the explanation speaks to the question of how the particular mechanism you are advancing combined historically with the open-ended evolutionary dynamics in nature to produce the result you expect.

            But because we conflate very closed-ended artificial scenarios with monumentally open-ended searches like nature, it leads to a lot of dangerous inferences. So ideas that would make sense in one context end up sounding reasonable when they don't really make any sense in the other context. The difficulty of squaring fitness-pressure objectives with nature is more serious when you consider it in this perspective. (Note that I am defining "fitness pressure" as selection based on relative performance to other organisms on a measure of some property that varies over a range of possible values, such as degree of connectivity.)

            The problem is that fitness pressures that preserve a degenerate niche for eternity are definitively not like nature, so whether they work or not, or whether I am somehow indicting them or not, should not be the issue. The issue should be that we should be worried that nature does not use a mechanism even remotely like that yet still achieves the "same" result (i.e. beautiful variations of pseudo-modular design). If you are advancing the hypothesis that this kind of constant "pressure" is somehow essential to the emergence of modularity in nature, then you must somehow explain why you needed to use a setup with these bizarre and unnatural side effects (like eternal degenerate niches) instead of whatever nature actually supposedly does use.

            And the fact that you cannot come up with anything similar to what nature does, i.e. something that does not involve creating such a deadweight pocket, reasonably may suggest that your hypothesis about nature could be wrong. That is, it may not be this endless "fitness pressure" after all that explains what is happening there, because fitness pressure in general in EC is almost always creating some kind of unintended deadweight niche.

            I think it is particularly fascinating that in fact nature obtains not really the same result, but a far more awesome result (in terms of modularity or anything else), without such an ad hoc mechanism.
            If you think about it, as long as you insist on cheering for fitness pressure, it prevents you from asking how this could be - how is it possible that you can get these kinds of results without such an unnatural side effect?

            I need to emphasize here the difference between being a better engineering mechanism and a better explanation. I am focusing now primarily on the explanatory power of the proposed mechanism. But because nature is so much more accomplished than anything artificial, the explanatory gap here implies a dangerous potential to overlook what will ultimately amount also to a major engineering gap as well. There is no evidence that anything except nature in its open-ended way can create anything like the connectivity of natural brains.

            Furthermore, it is always important to acknowledge nuance and subtlety in nature, which has not really been acknowledged yet in this conversation. Nature is almost never all one way. So it is misleading and potentially confusing to talk about brains as simply modular or not. The recent discussion on the Connectionists list, where scientists have been giving all kinds of subtle and conflicting perspectives on modularity in natural brains in response to Jeff and JBM's article, echoes this nuance. The beauty of the human brain to me is not that it is modular, but that it is modular to an extent, but not entirely so, and what modularity there is is hard to pin down. This kind of nuance is not to me a mere footnote to the achievement of nature, but the central point of it: what nature achieves in spades is nuance.

            And the idea of a constant pressure of any kind is directly in conflict with the achievement of nuance, because nuance is a delicate balancing act that is easily tipped off its perch if constant pressure in *any* direction is applied without relief. Jeff is concerned with short-term versus long-term issues (which isn't really as clearly defined in an open-ended context), but even if we honor that concern, it is potentially naïve to believe that pressure in either direction from the start, or even an encoding bias in either direction from the start, is somehow going to directly align with the level of nuance observed millions of years in the future. However, while fitness pressure is eternal, encoding bias is malleable, so pushing in the "right" direction from the start is not essential for encoding. It's more like a hint to get you started, whereas fitness pressure is more like a gun forever pointed at your back.

            For example, who is to say that we should not have the opposite short-term worry as Jeff does – he worries that an encoding bias towards low connectivity "might evolve away because of fitness pressure," but can't we just as easily worry about *too much* modularity? In that case, Jeff's evil twin "opposite-Jeff" might be worried that an initial encoding bias towards *high* connectivity might evolve away. It is not clear nor established fact (see Connectionists) that the exact form of the final "solution" is particularly modular or non-modular. What it is, is subtle and somewhere in the middle. So none of this kind of panicking about what nature "needs" to harass it into such an astronomically complex future configuration makes much sense. We cannot say definitively the extent to which the final structure is "closer" to modular or non-modular, whatever that even means. Fortunately, an encoding that begins with a bias towards modularity can tone it down as needed, or ramp it up even more.

            Yet Jeff also worries about the radiation of evolutionary lineages being blocked because of implicit penalties: he says, "You assume that evolution will branch out and explore all these options even in the face of fitness penalties for that exploration. But that is not how evolution works."

            But branching out and exploring many (not necessarily all of course) of the options is the only way that natural evolution works. That's what open-endedness is (unless you don't believe natural evolution to be open-ended). The tree of life is ever-branching. The worry about "fitness penalties" here is a red herring because it originates from closed-ended artificial EC experiments where you can end up on the wrong path. But nature does not have any single "fitness penalty" or "right path" throughout its run because the landscape is always changing as it branches and branches. For example, before trees, being an extremely tall herbivore would incur a fitness penalty, but after trees giraffes were perfectly viable. The penalty is not consistent.

            More generally, how can there be what you call a "default fitness penalty" if there is no final goal? Penalty with respect to what? Keep in mind here that the origin of pseudo-modular organization in nature likely predates the emergence even of neurons. The first neural structures piggy-backed on previously evolved organizational structure that likely influenced the subtle pseudo-modularity of connectivity from the start for reasons entirely unrelated to connection cost because these organizational conventions evolved long before neurons even existed: the bias in the encoding was in part already there.

            Which brings me back to the origin of all such conventions - canalization - which is the key here. Stephane talks about a bias that exists "all along" in evolution, but ultimately the ability to *change* bias eclipses choosing one up front. Again, in the context of artificial scenarios, it's a good engineering hack to force in some kind of bias into the encoding or into fitness that you expect to control things for a moderate number of generations. But in nature the scope is so vast that it can't be the final word; it's only the initial hint. While that hint can help, nature in the long term needs to choose and commit to its own biases, and to slither out of them from time to time, and only encoding offers that potential. Canalization is the way nature can make long-term (though not necessarily permanent) commitments. It's how conventions are established in specific lineages.

            In a genuine open-ended scenario like nature, modularity will emerge and proliferate over vast stretches of time only if modularity leads to more species emerging. Of course, the species we observe at the end are the consequence of organizational principles that supported generating many species (which is almost tautological). So it need not relate to being better or worse, or "solving" anything. It has to do with open-ended dynamics. Air will escape a hole in a balloon if you wait long enough. If that hole leads to a whole other world, you will eventually see that other world. Modularity, to the extent it actually exists in nature, has served as such a hole. But the only way such a hole can be exploited, the only way you can keep focused on that area, is if it can be canalized. An encoding that can be canalized allows you to maintain the subtle convention that is responsible for spreading diversity.

            Stef nevertheless reminds me that "selection pressure has strong impact," and I entirely agree of course. But there are two very different classes of selection pressure. One is about pushing you towards the new, and the other is about forcing you to commit to the old. There are many ways to push towards the new, and novelty search is just one. In contrast, these things we call "fitness pressures" (whether part of a MOEA or not) are the opposite – they are toxic strait jackets applied for eternity. They presume that we know what we need with no nuance whatsoever eons before anything remotely related has appeared. Again, in engineering, fair enough – it can work. But it is not an *explanation* of the products of open-ended evolution in nature, and likely is not a good way to produce open-endedness artificially either.

            So the only escape I see here from my argument is if you can argue somehow that you can do all these amazing things *without* open-ended evolution. Then all your pressures and constraints might make sense. But I don't think you can argue that, which, to finally circle back to Martin's broad question, is why encoding is ultimately superior. A canalizable encoding is the perfect partner for an open-ended process. But it is not (as Martin puts it) because it makes a particular "bias in the encoding better." Rather, it is because encoding lets evolution delicately modify its own biases on the fly and explore all of them in parallel. That is, the ability to change, the ability to be flexible, to commit but to uncommit in increments of subtlety, to radiate diversity while still committing to certain biases in certain chains, is the power that made everything happen. Any forced competition, any constant bias, any eternal relative judgment, which are all things that constant fitness pressure offers, will diminish that flexibility. It will not necessarily destroy the open-ended process, but it will reduce its power and ultimately therefore cannot explain or account for it.

            Best,

            ken

            --- In neat@yahoogroups.com, "martin_pyka" wrote:
            >
            > I just would like to point out that, in my opinion, part of the disagreement between you and Jeff and Ken comes from the fact that Ken somehow made the statement "it is better to implement the bias in the encoding than in the fitness function" but in actual fact argues for a specific type of implementation in the encoding.
            >
            > Thus, I think the discussion should not center around the general question of whether a bias should be incorporated in the fitness or in the encoding, because in both areas there are better and worse ways to do it. The question is more why a specific implementation (that Ken obviously has in mind; my impression was he thought about approaches similar to LEO) is better than another.
            >
            > My question to Ken would be here: what is the additional ingredient that makes a bias in the encoding better / more plausible than *any* implementation of the bias in the fitness function?
            >


          • Ken
            Message 5 of 10 , Mar 8, 2013
              Hi Martin, I appreciate your summary of the discussion and agree with a lot of your points. You're right that my point isn't only simply to use an encoding to impose a bias, which is why my explanation turned out so long.

              Regarding which encodings are canalizable, I think the jury is still out on that question. Canalization remains a somewhat mysterious subject and I don't think it's clear whether CPPNs can canalize as well as DNA or not because we have not seen CPPNs yet with as many nodes as there are genes in the DNA of many animals (on the order of tens of thousands). Once you get up to that level (e.g. a 30,000-node CPPN), the amount of redundancy and interdependence among all the parts might lead to a similar level of canalization. As I noted before, even on Picbreeder some canalization is already evident in the more complex images.
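For readers less familiar with CPPNs, here is a minimal sketch of the underlying idea: a composition of simple functions, queried with neuron coordinates, produces connection weights. The particular node functions and constants below are invented for illustration and do not correspond to any specific published network; in a real CPPN the composition itself is evolved.

```python
import math

def gaussian(x):
    return math.exp(-x * x)

def cppn_weight(x1, y1, x2, y2):
    """Query a tiny, fixed function composition for the weight between
    a neuron at (x1, y1) and one at (x2, y2)."""
    # A Gaussian of the x-distance makes nearby neurons connect strongly,
    # one hypothetical way a modularity-like bias can live in the
    # encoding itself rather than in the fitness function.
    locality = gaussian(3.0 * (x1 - x2))
    symmetry = math.sin(math.pi * (y1 + y2))
    return locality * symmetry

near = abs(cppn_weight(0.1, 0.25, 0.2, 0.25))  # short connection
far = abs(cppn_weight(0.1, 0.25, 0.9, 0.25))   # long connection
```

Because the bias lives in the functions, a mutation to the composition (e.g. widening the Gaussian) can tone the locality bias up or down, which is the malleability argued for above.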

              But I think we have a bigger problem in the field than canalization. The real problem we face, and the source of tension in this particular discussion (as well as many others), is that evolution seems to be most powerful when it is open-ended, but we as researchers in EC want to control evolution for our own purposes, which in effect closes it. Resolving this delicate contradiction is a fundamental problem in EC right now. Somehow we need to "open" evolution enough for it to flourish while still imposing some level of control - a very uneasy balancing act with no easy answers.


              Best,

              ken

              --- In neat@yahoogroups.com, "martin_pyka" <martin.pyka@...> wrote:
              >
              > Thank you for this long answer which has a lot of inspiring ideas that I would like to pick up. But let me first point out the different topics that have been discussed before because I am afraid that the arguments you brought forward could be assigned to the wrong topics ;)
              >
              > I think there are several questions (or topics of interest) that are discussed here.
              >
              > Related to Jeffs Paper:
              > a) Does connection cost represent a natural bias necessary for the emergence of modules?
              >
              > I think this question is hard to answer and remains controversial ;) Prior beliefs can lead us to experiments that can support or refute this theory.
              >
              > b) When someone wants to show that connection costs leads to modularity, does it matter whether a bias is implemented in the fitness function or in the encoding *given the currently known techniques in both domains*?
              >
              > My impression is, that Jeff and Stef mostly address this question and understand Ken's arguments as a response to this question, while Ken argues and understands Jeffs and Stefs arguments as a response to the question:
              >
              > c) What does a biologically plausible mechanism for any bias look like?
              >
              > I think Ken's long answer here is a strong statement for encodings that are able to canalize (and abandon) certain design principles and I mostly agree with Ken's opinion related to question c. However, I don't think that an artificial encoding with such good canalizing properties already exists. So you are arguing more for spending time on developing such an encoding instead of investigating the influence of a bias on network function / architecture with available means. Because, when we have such an encoding, it might help to understand how strong a certain bias (like connection cost) in parts of the evolutionary process actually is and to which extent we can find modularity in the network.
              >
              > So, as far as I got it, Ken does not say "use the encoding instead of the fitness function to bias the search towards modularity" (as previous postings suggested) but rather "an encoding with certain properties (e.g. canalization) would allow us to reassess the value of concepts like 'connection cost' and 'modularity'"!?
              >
              >
              > And in direct response to Ken's posting some random thoughts:
              >
              > I also think that open-ended evolution is driven by a canalizable encoding, and I think one can come up with many detailed properties that the encoding has to fulfill in order to be regarded as canalizable. The challenge is now: how to develop such an encoding, given the fact that all these properties are emergent properties of the encoding and therefore don't lead in a deductive manner to the particular form of the encoding? I think we face similar problems here to those at the lower abstraction levels.
              >
              > My first idea was: let's apply evolutionary algorithms on encodings to find the best encoding (something similar exists already for L-Systems), but how would such a meta-language look?
              >
              > I think that the idea of CPPNs already represents a good starting point for canalizable encodings, but its implementation in HyperNEAT seems to me not to take the idea far enough, and it loses a lot of robustness because self-organizing mechanisms (like structural plasticity) are not implemented. Of course, it is always easier to offer critique than to suggest a better solution ;-). I wish I had more time to work on these things on my own.
              >
              > Best,
              > Martin
              >
            • Ken
              Message 6 of 10 , Mar 8, 2013
                Hi Stef,

                I think I should clarify a couple points based on your thoughts. I really have not intended to suggest that JBM, Jeff, and Hod's paper is somehow insignificant or unhelpful. I think it's an important contribution to highlight the role of connection length in modularity. I really appreciate how it also suggests the prior "modularly varying goals" hypothesis may not always be necessary. I brought up the issue of encoding simply to make the point that there remain questions about the different ways that such a connection length bias can be achieved, but more broadly to make the point that there will often be such encoding vs. pressure questions in many areas of EC and biology too, beyond only the question of modularity. It was not intended as an attack on the value of the paper's contribution, though perhaps it started to seem that way as the debate became more intense.

                I also want to distance myself from the idea that anything I said is anti-MOEA. If you notice in my latest response, I said almost nothing about MOEAs. My point is about fitness pressure, whether applied through an MOEA or not. The same questions would come up even without an MOEA. In fact, MOEAs can be used elegantly to create an open-ended dynamic without "pressures." For example, one objective can be novelty and the other genetic diversity. I think the pros and cons of MOEAs as practical search algorithms are largely orthogonal to the discussion about fitness pressure.
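A minimal sketch of that bi-objective idea, with hypothetical (novelty, diversity) scores and a bare non-dominated filter; this is not any particular MOEA such as NSGA-II, just the core Pareto logic.

```python
def dominates(a, b):
    """True if candidate a Pareto-dominates b, with both objectives maximized."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(population):
    """Keep the non-dominated candidates. Note that neither objective
    pushes toward a fixed target; each simply rewards being different,
    which is what makes the dynamic open-ended rather than a 'pressure'.
    """
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

# (novelty, genetic diversity) scores for four hypothetical individuals
pop = [(0.9, 0.2), (0.5, 0.5), (0.2, 0.9), (0.4, 0.4)]
front = pareto_front(pop)  # (0.4, 0.4) is dominated by (0.5, 0.5)
```

The surviving front spans very novel, very diverse, and balanced individuals at once, so no single region of the space is privileged for eternity.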

                Finally, I think we should treat fitness-pressure-based explanations from biologists with skepticism. As you point out, of course biologists like such explanations (for bird wings, legs, etc.), but if biologists really had a full explanation of why certain features emerge, then EC would be much more successful than it is.

                My feeling is that fitness pressure in nature accounts for optimization (e.g. gazelles getting faster) but not for novelty (e.g. an entirely new organ or appendage emerging). Because of that, "explanations" of why certain traits arose are very confusing - are they explaining how the trait first appeared, or how it was optimized after it appeared? Interestingly, it is possible to argue that the higher the fitness pressure, the less the chance for novelty (because fitness pressure is about closing off paths), so the fitness pressure would be in direct opposition to the kinds of emergent traits that it is often used to explain - but we miss that point because it explains everything about that trait *except* its origination.

                However, as I said to Martin, I do acknowledge your concern with entirely removing fitness pressure. It would be unsettling if that were the only way we could get evolution to do something interesting. But I think we have to digest this bitter pill before we can cure it, and digesting it requires acknowledging a bunch of uncomfortable facts. But my hope is still aligned with yours in the end and like you I do not believe much useful will come from doing only 100% open-ended searches and simply wishing for them to solve every problem in robotics. However, they can still be very useful for our research in learning how such searches work in the meantime.

                Best,

                ken

                --- In neat@yahoogroups.com, "stephane.doncieux" <stephane.doncieux@...> wrote:
                >
                > Hi Ken,
                >
                > We are getting closer to the point.
                >
                > I think one ambiguity comes with the expectations of models like that of Jean-Baptiste, Jeff and Hod. I agree that it does not answer the whole question of open-ended evolution (even with regard to modularity), but I don't think it was their intention (I haven't seen such claims in the paper). I don't believe in a global model that would take into account every single aspect of natural evolution, at least not until every piece of the puzzle has been well understood. This is the classical reductionist approach of trying to separate each aspect. It is clearly not easy to apply such a methodology to these questions, as many different aspects depend on one another. It nevertheless remains a classical and efficient approach in science. Succeeding in isolating a single effect is, for me, a major breakthrough because of the difficulty of doing it. It should actually be what we are looking for, because the contribution is then highly localized, which makes it easier to build upon, and this is really what I like in Jeff, JB and Hod's work.
                >
                > The question of knowing to what extent multi-objective evolutionary algorithms are a good model of natural (i.e. open-ended) evolution is not critical in the work we are talking about. MOEAs have interesting features, but also drawbacks, as you have mentioned, Ken. Jeff, JB and Hod have proposed a mechanism to address these limitations with their stochastic domination, and I think it is enough to address their problem. The results of the article should be neither overestimated nor underestimated. What it shows is that a pressure towards low connectivity (i.e. a goal-independent selection pressure) has the nice side effect of creating more modular structures. This is an interesting and valuable result. How to use such pressures in an open-ended perspective is another (and in my opinion different and also interesting) question. I completely agree with you, Ken, when you say that this question is not properly addressed by the model, but once more I am not sure that it is their point.
                >
                > I agree with your point, Ken, that a constant fitness pressure that remains the same over the whole course of evolution is very unlikely to lead to a truly open-ended evolution. By the way, it seems to me that you suggest discarding all fitness pressures (i.e. all goal-oriented objectives, be they constant or not). It is in line with novelty search, but I don't think that it is a good idea. You will have solved one part of the problem, but also introduced other limitations. One of them is related to the size of the search space. If your behavior space is large enough, you will just begin to do something interesting and then switch to something else without trying to push what you have discovered to its limits.
                >
                > Lots of biologists actually think in terms of fitness pressures and try to find what fitness pressure has favored a given transition (the appearance of an organ, of a particular bone, etc.). Lots of those pressures have been proposed in the literature to explain the evolution of birds' wings, legs, etc. Does it mean that such pressures were the same throughout all of evolution? Of course not. It does not mean either that they haven't played a critical role at some point along evolution. Thinking in terms of fitness pressures is no more than a convenient way to model what happens in evolution at a relatively local scale. You are right, Ken, to say that most of our work remains local. If I were to study open-ended evolution, I would try to study under what conditions these fitness pressure changes occur, and I would propose algorithms to reproduce them. In this perspective, any meaningful fitness pressure is interesting and is a potentially significant piece of the puzzle. From your point of view, I guess that your research program is different. Which one will lead to a better understanding of natural evolution is a question that we cannot decide now. Interestingly, your work on novelty search with local competition is actually a nice way to combine both aspects.
                >
                > By the way, I really think that it is useless to work on the encoding without also studying selection pressures that would take the best of it (whether they are constant or not, goal-oriented or goal-independent). So that is why I disagree with the dichotomy you make between encoding and fitness pressures. Anyway, considering all the work you have done in the field on both aspects, I guess that it is more a question of terminology than a deep disagreement.
                >
                > Best,
                >
                > stef
                >
                > --- In neat@yahoogroups.com, "Ken" <kstanley@> wrote:
                > >
                > >
                > >
                > > Hi Jeff, Stef, and Martin, I hope you don't mind since all of you addressed me if I try to reply to all of you at once to keep the thread (and my brain) from branching in three directions. Many of your points follow a similar theme so I think it makes sense to respond collectively. This response is practically an article, but oh well, it's nice to get the ideas down even if it's a bit too long (it just shows you are asking me great questions that are challenging).
                > >
                > > Martin offers a good unifying question: "My question to Ken would be here: what is the additional ingredient that makes a
                > > bias in the encoding better / more plausible than *any* implementation of the bias in the fitness function?"
                > >
                > > After some thought, I believe one of the difficulties in this discussion is that we often conflate artificial EC-style fitness-based experiments with open-ended scenarios when these are entirely different situations (I take blame myself as well for this tendency). That is, when we talk about something being "better" or "solving" a problem, we are often talking about artificial and unnatural experimental setups that have little relationship to open-ended evolutionary scenarios like nature.
                > >
                > > Why does that matter? It matters because in discussions that try to dovetail engineering-oriented mechanisms (like a connectivity penalty) with explanations of what happened in nature (such as the emergence of modular connectivity), it cannot simply be ignored that nature in fact is first and foremost an open-ended evolutionary system, and that that open-ended dynamic is a significant factor in the explanation of its products. What that means to me is that if you think your proposed mechanism actually *explains* something that happened in nature, then it is essential that the explanation speaks to the question of how the particular mechanism you are advancing combined historically with the open-ended evolutionary dynamics in nature to produce the result you expect.
                > >
                > > But because we conflate very closed-ended artificial scenarios with monumentally open-ended searches like nature, it leads to a lot of dangerous inferences. So ideas that would make sense in one context end up sounding reasonable when they don't really make any sense in the other context. The difficulty of squaring fitness-pressure objectives with nature is more serious when you consider it in this perspective. (Note that I am defining "fitness pressure" as selection based on relative performance to other organisms on a measure of some property that varies over a range of possible values, such as degree of connectivity.)
                > >
                > > The problem is that fitness pressures that preserve a degenerate niche for eternity are definitively not like nature, so whether they work or not, or whether I am somehow indicting them or not, should not be the issue. The issue should be that we should be worried that nature does not use a mechanism even remotely like that yet still achieves the "same" result (i.e. beautiful variations of pseudo-modular design). If you are advancing the hypothesis that this kind of constant "pressure" is somehow essential to the emergence of modularity in nature, then you must somehow explain why you needed to use a setup with these bizarre and unnatural side effects (like eternal degenerate niches) instead of whatever nature actually supposedly does use.
                > >
                > > And the fact that you cannot come up with anything similar to what nature does, i.e. something that does not involve creating such a deadweight pocket, reasonably may suggest that your hypothesis about nature could be wrong. That is, it may not be this endless "fitness pressure" after all that explains what is happening there, because fitness pressure in general in EC is almost always creating some kind of unintended deadweight niche.
                > >
                > > I think it is particularly fascinating that in fact nature obtains not really the same result, but a far more awesome result (in terms of modularity or anything else), without such an ad hoc mechanism.
                > > If you think about it, as long as you insist on cheering for fitness pressure, it prevents you from asking how this could be - how is it possible that you can get these kinds of results without such an unnatural side effect?
                > >
                > > I need to emphasize here the difference between being a better engineering mechanism and a better explanation. I am focusing now primarily on the explanatory power of the proposed mechanism. But because nature is so much more accomplished than anything artificial, the explanatory gap here implies a dangerous potential to overlook what will ultimately amount also to a major engineering gap as well. There is no evidence that anything except nature in its open-ended way can create anything like the connectivity of natural brains.
                > >
                > > Furthermore, it is always important to acknowledge nuance and subtlety in nature, which has not really been acknowledged yet in this conversation. Nature is almost never all one way. So it is misleading and potentially confusing to talk about brains as simply modular or not. The recent discussion on the Connectionists list, where scientists have been giving all kinds of subtle and conflicting perspectives on modularity in natural brains in response to Jeff and JBM's article, echoes this nuance. The beauty of the human brain to me is not that it is modular, but that it is modular to an extent, but not entirely so, and what modularity there is is hard to pin down. This kind of nuance is not to me a mere footnote to the achievement of nature, but the central point of it: what nature achieves in spades is nuance.
                > >
                > > And the idea of a constant pressure of any kind is directly in conflict with the achievement of nuance, because nuance is a delicate balancing act that is easily tipped off its perch if constant pressure in *any* direction is applied without relief. Jeff is concerned with short-term versus long-term issues (which isn't really as clearly defined in an open-ended context), but even if we honor that concern, it is potentially naïve to believe that pressure in either direction from the start, or even an encoding bias in either direction from the start, is somehow going to directly align with the level of nuance observed millions of years in the future. However, while fitness pressure is eternal, encoding bias is malleable, so pushing in the "right" direction from the start is not essential for encoding. It's more like a hint to get you started, whereas fitness pressure is more like a gun forever pointed at your back.
                > >
                > > For example, who is to say that we should not have the opposite short-term worry as Jeff does – he worries that an encoding bias towards low connectivity "might evolve away because of fitness pressure," but can't we just as easily worry about *too much* modularity? In that case, Jeff's evil twin "opposite-Jeff" might be worried that an initial encoding bias towards *high* connectivity might evolve away. It is not clear nor established fact (see Connectionists) that the exact form of the final "solution" is particularly modular or non-modular. What it is, is subtle and somewhere in the middle. So none of this kind of panicking about what nature "needs" to harass it into such an astronomically complex future configuration makes much sense. We cannot say definitively the extent to which the final structure is "closer" to modular or non-modular, whatever that even means. Fortunately, an encoding that begins with a bias towards modularity can tone it down as needed, or ramp it up even more.
                > >
                > > Yet Jeff also worries about the radiation of evolutionary lineages being blocked because of implicit penalties: He says, "You assume that evolution will branch out and explore all these options even in the face of fitness penalties for that exploration. But that is not how evolution works."
                > >
                > > But branching out and exploring many (not necessarily all of course) of the options is the only way that natural evolution works. That's what open-endedness is (unless you don't believe natural evolution to be open-ended). The tree of life is ever-branching. The worry about "fitness penalties" here is a red herring because it originates from closed-ended artificial EC experiments where you can end up on the wrong path. But nature does not have any single "fitness penalty" or "right path" throughout its run because the landscape is always changing as it branches and branches. For example, before trees, being an extremely tall herbivore would incur a fitness penalty, but after trees giraffes were perfectly viable. The penalty is not consistent.
                > >
                > > More generally, how can there be what you call a "default fitness penalty" if there is no final goal? Penalty with respect to what? Keep in mind here that the origin of pseudo-modular organization in nature likely predates the emergence even of neurons. The first neural structures piggy-backed on previously evolved organizational structure that likely influenced the subtle pseudo-modularity of connectivity from the start for reasons entirely unrelated to connection cost because these organizational conventions evolved long before neurons even existed: the bias in the encoding was in part already there.
                > >
                > > Which brings me back to the origin of all such conventions - canalization - which is the key here. Stephane talks about a bias that exists "all along" in evolution, but ultimately the ability to *change* bias eclipses choosing one up front. Again, in the context of artificial scenarios, it's a good engineering hack to force in some kind of bias into the encoding or into fitness that you expect to control things for a moderate number of generations. But in nature the scope is so vast that it can't be the final word; it's only the initial hint. While that hint can help, nature in the long term needs to choose and commit to its own biases, and to slither out of them from time to time, and only encoding offers that potential. Canalization is the way nature can make long-term (though not necessarily permanent) commitments. It's how conventions are established in specific lineages.
                > >
                > > In a genuine open-ended scenario like nature, modularity will emerge and proliferate over vast stretches of time only if modularity leads to more species emerging. Of course, the species we observe at the end are the consequence of organizational principles that supported generating many species (which is almost tautological). So it need not relate to being better or worse, or "solving" anything. It has to do with open-ended dynamics. Air will escape a hole in a balloon if you wait long enough. If that hole leads to a whole other world, you will eventually see that other world. Modularity, to the extent it actually exists in nature, has served as such a hole. But the only way such a hole can be exploited, the only way you can keep focused on that area, is if it can be canalized. An encoding that can be canalized allows you to maintain the subtle convention that is responsible for spreading diversity.
                > >
                > > Stef nevertheless reminds me that "selection pressure has strong impact," and I entirely agree of course. But there are two very different classes of selection pressure. One is about pushing you towards the new, and the other is about forcing you to commit to the old. There are many ways to push towards the new, and novelty search is just one. In contrast, these things we call "fitness pressures" (whether part of a MOEA or not) are the opposite – they are toxic strait jackets applied for eternity. They presume that we know what we need with no nuance whatsoever eons before anything remotely related has appeared. Again, in engineering, fair enough – it can work. But it is not an *explanation* of the products of open-ended evolution in nature, and likely is not a good way to produce open-endedness artificially either.
                > >
                > > So the only escape I see here from my argument is if you can argue somehow that you can do all these amazing things *without* open-ended evolution. Then all your pressures and constraints might make sense. But I don't think you can argue that, which, to finally circle back to Martin's broad question, is why encoding is ultimately superior. A canalizeable encoding is the perfect partner for an open-ended process. But it is not (as Martin puts it) because it makes a particular "bias in the encoding better." Rather, it is because encoding lets evolution delicately modify its own biases on the fly and explore all of them in parallel. That is, the ability to change, the ability to be flexible, to commit but to uncommit in increments of subtlety, to radiate diversity while still committing to certain biases in certain chains, is the power that made everything happen. Any forced competition, any constant bias, any eternal relative judgment, which are all things that constant fitness pressure offers, will diminish that flexibility. It will not necessarily destroy the open-ended process, but it will reduce its power and ultimately therefore cannot explain or account for it.
                > >
                > >
                > > Best,
                > >
                > > ken
                > >
                > >
                > > --- In neat@yahoogroups.com, "martin_pyka" <martin.pyka@> wrote:
                > > >
                > > > I just would like to point out that, in my opinion, part of the disagreement between you and Jeff and Ken comes from the fact that Ken somehow made the statement "it is better to implement the bias in the encoding than in the fitness function" but in actual fact argues for a specific type of implementation in the encoding.
                > > >
                > > > Thus, I think the discussion should not center around the general question of whether a bias should be incorporated in the fitness or in the encoding, because in both areas there are better and worse ways to do it. The question is more why a specific implementation (that Ken obviously has in mind; my impression was he thought about approaches similar to LEO) is better than another.
                > > >
                > > > My question to Ken would be here: what is the additional ingredient that makes a bias in the encoding better / more plausible than *any* implementation of the bias in the fitness function?
                > > >
                > >
                >
              • Ken
                Message 7 of 10, Mar 8, 2013
                  Hi Jeff,

                  You make a great case for your position, and like I said to Stef, the major point on how connection length relates to modularity that you and your coauthors have contributed is important. In my reply to Stef I made a few general last points and I think we can safely assume we will have some very interesting further discussions at GECCO!

                  Best,

                  ken


                  --- In neat@yahoogroups.com, Jeff Clune <jclune@...> wrote:
                  >
                  > Thanks for clarifying your positions Ken. I believe we have reached the point at which reasonable minds can respectably disagree, as you put it. Were I to respond, I believe I would mostly repeat myself as the ideas I believe answer your comments and questions I've already expressed earlier in this thread. That usually is an indication of coming to an agreement, even if the agreement is to disagree. :-)
                  >
                  > That said, I'll summarize my main points: I do think nature has default fitness penalties, and all we have done is change the default to one that encourages modularity and evolvability. As such, I don't think we're subject to general attacks on fitness pressures, because by default there are fitness pressures (e.g. a cost for materials), we just change them to be better with respect to modularity and evolvability. Moreover, I think a connection cost encourages modularity in a way similar to how things work in nature, even if nature more resembles a combination of performance and costs into a single objective instead of multi-objective algorithms. Our work is not dependent on an MOEA: we suspect a connection cost will encourage modularity in general irrespective of the details of the algorithm. As I've mentioned, I think nuance and subtlety are possible with a connection cost because nature can pay for more connectivity via performance gains, allowing all sorts of exceptions to the general rule. I still don't see how an initial encoding bias will have any long-term effect on evolution, so the only hope for canalization and helpful encoding biases is via fitness. I thus believe that fitness penalties are a great way to encourage good encodings via canalization. For example, we know that default penalties exist (such as a default tendency to NOT produce modularity, which has been empirically observed in our systems repeatedly), and nature will not explore those areas unless the fitness function is changed (just as nature never explored certain classes of extremely inefficient metabolisms, or birds with bones made of lead). Ultimately, since the effect of your initial encoding bias hint will vanish over millennia, your argument amounts to saying that we should not try to bias evolution at all. But this whole conversation was on the best way to bias evolution, so maybe your position is that we shouldn't be biasing evolution at all. 
                  > That's fine if we don't have any default fitness penalties in the environment that will hurt us, but we have no guarantee of that. As you point out, Nature has done impressive things, but it also had a very different environment than the environment in our setups so far. What we've shown in this paper is that some things that we originally thought just happened to exist in nature, but were unnecessary vis a vis evolvability (such as a cost for materials) actually may play a role in the evolution of modularity and evolvability. I think it is worthwhile to investigate what else we may have skipped from the natural world that may be an important driver of evolvability and, ultimately, open-ended evolution.
                  >
                  > A final point: your criticism that 'nothing we learn matters if we don't have an open-ended algorithm' discounts all the work that has been done to date in computational evolutionary biology, not just our paper. You may be right that all of our lessons are worthless once we figure out an open-ended evolutionary algorithm, but I doubt that will be the case. I think a lot of interesting, worthwhile understandings have been gained by simulated evolution, and that much of that work will prove informative even if the underlying algorithms change.
                  >
                  >
                  > Best regards,
                  > Jeff Clune
                  >
                  > Assistant Professor
                  > Computer Science
                  > University of Wyoming
                  > jeffclune@...
                  > jeffclune.com
                  >
                  > On Mar 2, 2013, at 4:16 PM, Ken <kstanley@...> wrote:
                  >
                  > >
                  > >
                  > > Hi Jeff, Stef, and Martin, I hope you don't mind since all of you addressed me if I try to reply to all of you at once to keep the thread (and my brain) from branching in three directions. Many of your points follow a similar theme so I think it makes sense to respond collectively. This response is practically an article, but oh well, it's nice to get the ideas down even if it's a bit too long (it just shows you are asking me great questions that are challenging).
                  > >
                  > > Martin offers a good unifying question: "My question to Ken would be here: what is the additional ingredient that makes a
                  > > bias in the encoding better / more plausible than *any* implementation of the bias in the fitness function?"
                  > >
                  > > After some thought, I believe one of the difficulties in this discussion is that we often conflate artificial EC-style fitness-based experiments with open-ended scenarios when these are entirely different situations (I take blame myself as well for this tendency). That is, when we talk about something being "better" or "solving" a problem, we are often talking about artificial and unnatural experimental setups that have little relationship to open-ended evolutionary scenarios like nature.
                  > >
                  > > Why does that matter? It matters because in discussions that try to dovetail engineering-oriented mechanisms (like a connectivity penalty) with explanations of what happened in nature (such as the emergence of modular connectivity), it cannot simply be ignored that nature in fact is first and foremost an open-ended evolutionary system, and that that open-ended dynamic is a significant factor in the explanation of its products. What that means to me is that if you think your proposed mechanism actually *explains* something that happened in nature, then it is essential that the explanation speaks to the question of how the particular mechanism you are advancing combined historically with the open-ended evolutionary dynamics in nature to produce the result you expect.
                  > >
                  > > But because we conflate very closed-ended artificial scenarios with monumentally open-ended searches like nature, it leads to a lot of dangerous inferences. So ideas that would make sense in one context end up sounding reasonable when they don't really make any sense in the other context. The difficulty of squaring fitness-pressure objectives with nature is more serious when you consider it in this perspective. (Note that I am defining "fitness pressure" as selection based on relative performance to other organisms on a measure of some property that varies over a range of possible values, such as degree of connectivity.)
                  > >
                  > > The problem is that fitness pressures that preserve a degenerate niche for eternity are definitively not like nature, so whether they work or not, or whether I am somehow indicting them or not, should not be the issue. The issue should be that we should be worried that nature does not use a mechanism even remotely like that yet still achieves the "same" result (i.e. beautiful variations of pseudo-modular design). If you are advancing the hypothesis that this kind of constant "pressure" is somehow essential to the emergence of modularity in nature, then you must somehow explain why you needed to use a setup with these bizarre and unnatural side effects (like eternal degenerate niches) instead of whatever nature actually supposedly does use.
                  > >
                  > > And the fact that you cannot come up with anything similar to what nature does, i.e. something that does not involve creating such a deadweight pocket, reasonably may suggest that your hypothesis about nature could be wrong. That is, it may not be this endless "fitness pressure" after all that explains what is happening there, because fitness pressure in general in EC is almost always creating some kind of unintended deadweight niche.
                  > >
                  > > I think it is particularly fascinating that in fact nature obtains not really the same result, but a far more awesome result (in terms of modularity or anything else), without such an ad hoc mechanism.
                  > > If you think about it, as long as you insist on cheering for fitness pressure, it prevents you from asking how this could be - how is it possible that you can get these kinds of results without such an unnatural side effect?
                  > >
                  > > I need to emphasize here the difference between being a better engineering mechanism and a better explanation. I am focusing now primarily on the explanatory power of the proposed mechanism. But because nature is so much more accomplished than anything artificial, the explanatory gap here implies a dangerous potential to overlook what will ultimately amount also to a major engineering gap as well. There is no evidence that anything except nature in its open-ended way can create anything like the connectivity of natural brains.
                  > >
                  > > Furthermore, it is always important to acknowledge nuance and subtlety in nature, which has not really been acknowledged yet in this conversation. Nature is almost never all one way. So it is misleading and potentially confusing to talk about brains as simply modular or not. The recent discussion on the Connectionists list, where scientists have been giving all kinds of subtle and conflicting perspectives on modularity in natural brains in response to Jeff and JBM's article, echoes this nuance. The beauty of the human brain to me is not that it is modular, but that it is modular to an extent, but not entirely so, and what modularity there is is hard to pin down. This kind of nuance is not to me a mere footnote to the achievement of nature, but the central point of it: what nature achieves in spades is nuance.
                  > >
                  > > And the idea of a constant pressure of any kind is directly in conflict with the achievement of nuance, because nuance is a delicate balancing act that is easily tipped off its perch if constant pressure in *any* direction is applied without relief. Jeff is concerned with short-term versus long-term issues (which isn't really as clearly defined in an open-ended context), but even if we honor that concern, it is potentially naïve to believe that pressure in either direction from the start, or even an encoding bias in either direction from the start, is somehow going to directly align with the level of nuance observed millions of years in the future. However, while fitness pressure is eternal, encoding bias is malleable, so pushing in the "right" direction from the start is not essential for encoding. It's more like a hint to get you started, whereas fitness pressure is more like a gun forever pointed at your back.
                  > >
                  > > For example, who is to say that we should not have the opposite of Jeff's short-term worry – he worries that an encoding bias towards low connectivity "might evolve away because of fitness pressure," but can't we just as easily worry about *too much* modularity? In that case, Jeff's evil twin "opposite-Jeff" might be worried that an initial encoding bias towards *high* connectivity might evolve away. It is neither clear nor established fact (see Connectionists) that the exact form of the final "solution" is particularly modular or non-modular. What it is, is subtle and somewhere in the middle. So none of this kind of panicking about what nature "needs" to harass it into such an astronomically complex future configuration makes much sense. We cannot say definitively the extent to which the final structure is "closer" to modular or non-modular, whatever that even means. Fortunately, an encoding that begins with a bias towards modularity can tone it down as needed, or ramp it up even more.
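The malleability being claimed for encoding bias can be sketched concretely. Below is a minimal Python toy (all names are illustrative, not drawn from any actual NEAT or HyperNEAT codebase) in which a connection-density bias is part of the heritable genome, so mutation can tone it down or ramp it up over generations, in contrast to a penalty term in a fitness function, which stays fixed for the whole run.

```python
import random

random.seed(0)

class ToyGenome:
    """Toy indirect encoding: a mutable bias parameter controls how
    densely connections are expressed. Illustrative only."""

    def __init__(self, n_nodes=8, density_bias=0.2):
        self.n_nodes = n_nodes
        self.density_bias = density_bias  # heritable and mutable

    def mutate(self, sigma=0.05):
        # The bias itself is subject to mutation, so a lineage can
        # strengthen, weaken, or abandon the initial hint.
        child = ToyGenome(self.n_nodes, self.density_bias)
        child.density_bias = min(1.0, max(0.0,
                                 self.density_bias + random.gauss(0, sigma)))
        return child

    def expressed_connections(self):
        # Each potential link is expressed with probability density_bias.
        return [(i, j) for i in range(self.n_nodes)
                for j in range(self.n_nodes)
                if i != j and random.random() < self.density_bias]

g = ToyGenome()
lineage = [g]
for _ in range(200):
    lineage.append(lineage[-1].mutate())
# The initial hint (0.2) can drift far away; nothing pins it in place.
print(lineage[0].density_bias, round(lineage[-1].density_bias, 3))
```

The point of the sketch is only that the bias starts as a hint and remains a degree of freedom of the search, rather than a constraint imposed on it.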
                  > >
                  > > Yet Jeff also worries about the radiation of evolutionary lineages being blocked because of implicit penalties. He says, "You assume that evolution will branch out and explore all these options even in the face of fitness penalties for that exploration. But that is not how evolution works."
                  > >
                  > > But branching out and exploring many (not necessarily all of course) of the options is the only way that natural evolution works. That's what open-endedness is (unless you don't believe natural evolution to be open-ended). The tree of life is ever-branching. The worry about "fitness penalties" here is a red herring because it originates from closed-ended artificial EC experiments where you can end up on the wrong path. But nature does not have any single "fitness penalty" or "right path" throughout its run because the landscape is always changing as it branches and branches. For example, before trees, being an extremely tall herbivore would incur a fitness penalty, but after trees giraffes were perfectly viable. The penalty is not consistent.
                  > >
                  > > More generally, how can there be what you call a "default fitness penalty" if there is no final goal? Penalty with respect to what? Keep in mind here that the origin of pseudo-modular organization in nature likely predates the emergence even of neurons. The first neural structures piggy-backed on previously evolved organizational structure that likely influenced the subtle pseudo-modularity of connectivity from the start for reasons entirely unrelated to connection cost because these organizational conventions evolved long before neurons even existed: the bias in the encoding was in part already there.
                  > >
                  > > Which brings me back to the origin of all such conventions - canalization - which is the key here. Stephane talks about a bias that exists "all along" in evolution, but ultimately the ability to *change* bias eclipses choosing one up front. Again, in the context of artificial scenarios, it's a good engineering hack to force in some kind of bias into the encoding or into fitness that you expect to control things for a moderate number of generations. But in nature the scope is so vast that it can't be the final word; it's only the initial hint. While that hint can help, nature in the long term needs to choose and commit to its own biases, and to slither out of them from time to time, and only encoding offers that potential. Canalization is the way nature can make long-term (though not necessarily permanent) commitments. It's how conventions are established in specific lineages.
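Canalization, as described here, is loosely analogous to self-adaptation in evolution strategies, where the mutation width of a trait is itself inherited and mutated, so a lineage can narrow (commit to) or widen variation in that trait. A minimal sketch of that analogy under stabilizing selection (illustrative names and parameters, not a model of any biological system):

```python
import math
import random

random.seed(1)

def child(trait, sigma, tau=0.3):
    # ES-style self-adaptation: the mutation width sigma is inherited
    # and perturbed log-normally before it is used on the trait.
    s = sigma * math.exp(tau * random.gauss(0, 1))
    return trait + random.gauss(0, s), s

# Stabilizing selection around trait = 0: from several offspring,
# keep the one nearest the optimum. Lineages whose sigma shrinks
# produce nearer offspring, so variation in the trait tends to be
# canalized over time, yet sigma remains free to grow again later.
trait, sigma = 0.0, 1.0
for _ in range(300):
    offspring = [child(trait, sigma) for _ in range(10)]
    trait, sigma = min(offspring, key=lambda ts: abs(ts[0]))
print(round(sigma, 4))
```

The commitment is long-term but not permanent: a change in what selection favors lets sigma inflate again, which is the "slither out of them from time to time" property.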
                  > >
                  > > In a genuine open-ended scenario like nature, modularity will emerge and proliferate over vast stretches of time only if modularity leads to more species emerging. Of course, the species we observe at the end are the consequence of organizational principles that supported generating many species (which is almost tautological). So it need not relate to being better or worse, or "solving" anything. It has to do with open-ended dynamics. Air will escape a hole in a balloon if you wait long enough. If that hole leads to a whole other world, you will eventually see that other world. Modularity, to the extent it actually exists in nature, has served as such a hole. But the only way such a hole can be exploited, the only way you can keep focused on that area, is if it can be canalized. An encoding that can be canalized allows you to maintain the subtle convention that is responsible for spreading diversity.
                  > >
                  > > Stef nevertheless reminds me that "selection pressure has strong impact," and I entirely agree of course. But there are two very different classes of selection pressure. One is about pushing you towards the new, and the other is about forcing you to commit to the old. There are many ways to push towards the new, and novelty search is just one. In contrast, these things we call "fitness pressures" (whether part of a MOEA or not) are the opposite – they are toxic strait jackets applied for eternity. They presume that we know what we need with no nuance whatsoever eons before anything remotely related has appeared. Again, in engineering, fair enough – it can work. But it is not an *explanation* of the products of open-ended evolution in nature, and likely is not a good way to produce open-endedness artificially either.
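The "pushing towards the new" class of pressure can be illustrated with a minimal novelty-search-style selection step: individuals are scored by their distance to their nearest neighbours in behaviour space, not by performance. This is a simplified one-dimensional sketch of the idea (illustrative names), not the full algorithm.

```python
def novelty(behavior, archive, k=3):
    """Mean distance to the k nearest neighbours in behaviour space."""
    if not archive:
        return float("inf")
    dists = sorted(abs(behavior - b) for b in archive)
    nearest = dists[:k]
    return sum(nearest) / len(nearest)

# Selection rewards being *different*, not being "better": the
# candidate far from everything seen so far scores highest.
archive = [0.0, 0.1, 0.2, 0.9]
candidates = [0.15, 0.5]
scores = {c: novelty(c, archive) for c in candidates}
most_novel = max(candidates, key=lambda c: scores[c])
print(most_novel)  # → 0.5
```

Note there is no fixed target anywhere in this scoring, which is the sense in which it pushes towards the new rather than committing to the old.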
                  > >
                  > > So the only escape I see here from my argument is if you can argue somehow that you can do all these amazing things *without* open-ended evolution. Then all your pressures and constraints might make sense. But I don't think you can argue that, which, to finally circle back to Martin's broad question, is why encoding is ultimately superior. A canalizable encoding is the perfect partner for an open-ended process. But it is not (as Martin puts it) because it makes a particular "bias in the encoding better." Rather, it is because encoding lets evolution delicately modify its own biases on the fly and explore all of them in parallel. That is, the ability to change, the ability to be flexible, to commit but to uncommit in increments of subtlety, to radiate diversity while still committing to certain biases in certain chains, is the power that made everything happen. Any forced competition, any constant bias, any eternal relative judgment, which are all things that constant fitness pressure offers, will diminish that flexibility. It will not necessarily destroy the open-ended process, but it will reduce its power and ultimately therefore cannot explain or account for it.
                  > >
                  > > Best,
                  > >
                  > > ken
                  > >
                  > > --- In neat@yahoogroups.com, "martin_pyka" wrote:
                  > > >
                  > > > I just would like to point out that, in my opinion, part of the disagreement between you and Jeff and Ken comes from the fact that Ken somehow made the statement "it is better to implement the bias in the encoding than in the fitness function" but in actual fact argues for a specific type of implementation in the encoding.
                  > > >
                  > > > Thus, I think the discussion should not center around the general question whether a bias should be incorporated in the fitness or in the encoding because in both areas there are better and worse ways to do it. The question is more, why a specific implementation (that Ken has obviously in mind, my impression was he thought about approaches similar to LEO) is better than another.
                  > > >
                  > > > My question to Ken would be here: what is the additional ingredient that makes a bias in the encoding better / more plausible than *any* implementation of the bias in the fitness function?
                  > > >
                  > >
                  > >
                  >
                • Ken Lloyd
                  Message 8 of 10 , Mar 8, 2013

                    All,

                     

                    After lurking in the background on this interesting discussion, I found that length of connection has different dimensions of meaning vis-à-vis strength of connection in coupling, and therefore on modularity (or clustering at a given level). Of course, these form topological (topoi), not merely topographical mappings. The pictures proved more helpful than the words (natural language). Perhaps a mathematical representation could help with clarity of understanding?
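One standard mathematical representation of the clustering in question is Newman's modularity Q, which compares the within-community edge weight to what a random graph with the same degree sequence would give: Q = (1/2m) Σ_ij [A_ij - k_i k_j / 2m] δ(c_i, c_j). A minimal sketch for an unweighted, undirected graph (function names are illustrative):

```python
def modularity(adj, communities):
    """Newman's modularity Q for an undirected graph.
    adj: symmetric adjacency matrix (list of lists of 0/1);
    communities: community label per node."""
    n = len(adj)
    two_m = sum(sum(row) for row in adj)  # equals 2m for a symmetric matrix
    if two_m == 0:
        return 0.0
    deg = [sum(row) for row in adj]
    q = 0.0
    for i in range(n):
        for j in range(n):
            if communities[i] == communities[j]:
                q += adj[i][j] - deg[i] * deg[j] / two_m
    return q / two_m

# Two triangles joined by a single bridge edge: a clearly modular graph.
adj = [
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
]
print(round(modularity(adj, [0, 0, 0, 1, 1, 1]), 3))  # → 0.357
```

Connection length would enter such a formalization through edge weights (e.g. cost- or distance-weighted A_ij), which is one way to make the coupling-versus-distance distinction precise.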

                     

                    Ken Lloyd

                     

                    ==========================

                    Kenneth A. Lloyd, Jr.

                    CEO - Director, Systems Science

                    Watt Systems Technologies Inc.

                    Albuquerque, NM USA

                     

                     

                     

                    From: neat@yahoogroups.com [mailto:neat@yahoogroups.com] On Behalf Of Ken
                    Sent: Friday, March 08, 2013 2:52 AM
                    To: neat@yahoogroups.com
                    Subject: [neat] Re: New paper on why modules evolve, and how to evolve modular artif

                     

                     



                    Hi Jeff,

                    You make a great case for your position, and like I said to Stef, the major point on how connection length relates to modularity that you and your coauthors have contributed is important. In my reply to Stef I made a few general last points and I think we can safely assume we will have some very interesting further discussions at GECCO!

                    Best,

                    ken

                    --- In neat@yahoogroups.com, Jeff Clune <jclune@...> wrote:
                    >
                    > Thanks for clarifying your positions Ken. I believe we have reached the point at which reasonable minds can respectably disagree, as you put it. Were I to respond, I believe I would mostly repeat myself as the ideas I believe answer your comments and questions I've already expressed earlier in this thread. That usually is an indication of coming to an agreement, even if the agreement is to disagree. :-)
                    >
                    > That said, I'll summarize my main points: I do think nature has default fitness penalties, and all we have done is change the default to one that encourages modularity and evolvability. As such, I don't think we're subject to general attacks on fitness pressures, because by default there are fitness pressures (e.g. a cost for materials), we just change them to be better with respect to modularity and evolvability. Moreover, I think a connection cost encourages modularity in a way similar to how things work in nature, even if nature more resembles a combination of performance and costs into a single objective instead of multi-objective algorithms. Our work is not dependent on an MOEA: we suspect a connection cost will encourage modularity in general irrespective of the details of the algorithm. As I've mentioned, I think nuance and subtlety are possible with a connection cost because nature can pay for more connectivity via performance gains, allowing all sorts of exceptions to the general rule. I still don't see how an initial encoding bias will have any long-term effect on evolution, so the only hope for canalization and helpful encoding biases is via fitness. I thus believe that fitness penalties are a great way to encourage good encodings via canalization. For example, we know that default penalties exist (such as a default tendency to NOT produce modularity, which has been empirically observed in our systems repeatedly), and nature will not explore those areas unless the fitness function is changed (just as nature never explored certain classes of extremely inefficient metabolisms, or birds with bones made of lead). Ultimately, since the effect of your initial encoding bias hint will vanish over millennia, your argument amounts to saying that we should not try to bias evolution at all. But this whole conversation was on the best way to bias evolution, so maybe your position is that we shouldn't be biasing evolution at all. 
                    > That's fine if we don't have any default fitness penalties in the environment that will hurt us, but we have no guarantee of that. As you point out, Nature has done impressive things, but it also had a very different environment than the environment in our setups so far. What we've shown in this paper is that some things that we originally thought just happened to exist in nature, but were unnecessary vis a vis evolvability (such as a cost for materials) actually may play a role in the evolution of modularity and evolvability. I think it is worthwhile to investigate what else we may have skipped from the natural world that may be an important driver of evolvability and, ultimately, open-ended evolution.
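The multi-objective mechanism Jeff summarizes, performance maximized against a connection cost, can be sketched with plain Pareto dominance. This is an illustration of the general idea, not the actual algorithm or code from the Clune, Mouret and Lipson paper.

```python
def dominates(a, b):
    """Pareto dominance for (performance, connection_cost):
    maximize performance, minimize cost."""
    perf_a, cost_a = a
    perf_b, cost_b = b
    return (perf_a >= perf_b and cost_a <= cost_b) and \
           (perf_a > perf_b or cost_a < cost_b)

def pareto_front(population):
    # population: list of (performance, connection_cost) tuples
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]

# Two networks with equal performance: the sparser (cheaper) one
# survives on the front; the denser one is dominated.
pop = [(0.9, 40), (0.9, 25), (0.7, 10), (0.95, 60)]
front = pareto_front(pop)
print(front)  # → [(0.9, 25), (0.7, 10), (0.95, 60)]
```

This also shows the "exceptions to the general rule" point: the expensive (0.95, 60) network stays on the front because it pays for its extra connectivity with performance.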
                    >
                    > A final point: your criticism that 'nothing we learn matters if we don't have an open-ended algorithm' discounts all the work that has been done to date in computational evolutionary biology, not just our paper. You may be right that all of our lessons are worthless once we figure out an open-ended evolutionary algorithm, but I doubt that will be the case. I think a lot of interesting, worthwhile understandings have been gained by simulated evolution, and that much of that work will prove informative even if the underlying algorithms change.
                    >
                    >
                    > Best regards,
                    > Jeff Clune
                    >
                    > Assistant Professor
                    > Computer Science
                    > University of Wyoming
                    > jeffclune@...
                    > jeffclune.com
                    >
                    > On Mar 2, 2013, at 4:16 PM, Ken <kstanley@...> wrote:
                    >

