Saaf testimony

  • lucas@gonze.com
    Message 1 of 14, Oct 2, 2002
      From hearings on the Berman-Coble bill, online at
      http://www.house.gov/judiciary/saaf092602.htm

      On decentralization:
      ====================
      The most threatening aspect of P2P networking to the copyright holders is
      the growing trend of decentralization. All of the most popular P2P
      networking technologies in the world are either completely or partially
      decentralized. Decentralization means that there is no central entity to
      sue or regulate using the law. Even if all the courts agreed to shut a
      decentralized network down, it could not be done because it is simply a
      free floating technology protocol on the Internet, similar to FTP or HTTP.
      The original completely decentralized P2P protocol, Gnutella, continues to
      be the leader in the decentralized P2P world. Thousands of computer
      scientists have developed hundreds of programs to hook into this ethereal
      network that floats on the Internet. Any programmer can very simply code a
      software client to hook into the network. Nobody owns Gnutella and nobody
      regulates it. However, the clear and primary use of the network is for the
      downloading of copyrighted material. This intuitive conclusion has been
      verified by MediaDefender's years of research. Gnutella was born out of a
      backlash in the online world toward the Napster lawsuit, and it was
      created to be an unstoppable P2P technology. Any person can see the
      breadth of pirated material on Gnutella by putting a generic search
      string, such as a period ("."), into any Gnutella client. When I typed a
      period (".") and hit search on a Gnutella client this morning, I received
      over 1000 returns with content ranging from Eminem to Harry Potter. I
      advise anyone to perform this simple experiment if they still need to
      convince themselves P2P networks are primarily used for piracy. Copyright
      law never anticipated a completely decentralized P2P network on the
      Internet and cannot prevent the piracy. Sometimes you have to use
      technology to regulate technology because there is no other practical
      means. Decentralized P2P networking is a case where there is no other
      solution beyond MediaDefender's anti-piracy technology. MediaDefender
      feels that it is important that the current laws do not stand in the way
      of non-invasive anti-piracy technology on the Internet. The concern is
      always that hacking and computer use laws not intended to address P2P
      anti-piracy technologies will be misapplied.
      ====

      On measures of the kind the bill is trying to sanction:

      ====
      Interdiction works by getting in front of potential downloaders when
      someone is serving pirated content using a P2P network. When
      MediaDefender's computers see someone making a copyrighted file available
      for upload, our computers simply hook into that computer and download the
      file. The goal is not to absorb all of that user's bandwidth but to block
      connections to potential downloaders. If the P2P program allows ten
      connections and MediaDefender fills nine, we are blocking 90% of illegal
      uploading. The beauty of Interdiction is that it does not affect anything
      on that computer except the ability to upload pirated files on that
      particular P2P network. The computer user still has full access to e-mail,
      web, and other file sharing programs.
      ====

      This is obviously a denial of service attack, and a fairly stupid one.
      Any limited number of providers that attempts to DoS an entire megacluster
      is trying to turn the logic of a DDoS upside down. If that interdiction
      approach works, then DDoS attacks don't work.
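
      The testimony's arithmetic only holds if downloaders never retry. A
      minimal sketch (the nine-of-ten figure is from the testimony; the retry
      counts are made up, and each retry is assumed to hit a different peer
      serving the same file):

          def p_blocked(free_slots, total_slots, retries):
              """Chance a downloader never lands a free slot, if each
              attempt independently finds free_slots of total_slots open."""
              p_miss = 1.0 - float(free_slots) / total_slots
              return p_miss ** (retries + 1)

          # MediaDefender's example: nine of ten slots squatted.
          for retries in (0, 5, 20):
              print(retries, p_blocked(1, 10, retries))  # 0.9, ~0.53, ~0.11

      A client that simply retries erodes the claimed 90% blocking rate fast.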

      Ok, so let's say the purpose is not to DoS the whole network, it's to
      bother an individual user. But the attack only targets user uploads.
      Assuming that the user is uploading out of generosity or laziness, this is
      no attack at all. She waits until the attack is over, totally unbothered,
      and goes back to uploading.

      But so what -- it doesn't matter whether this attack works. What
      attacks would work?

      - Lucas
    • coderman
      Message 2 of 14, Oct 2, 2002
        lucas@... wrote:
        > ====
        > Interdiction works by getting in front of potential downloaders when
        > someone is serving pirated content using a P2P network. When
        > MediaDefender's computers see someone making a copyrighted file available
        > for upload, our computers simply hook into that computer and download the
        > file. The goal is not to absorb all of that user's bandwidth but to block
        > connections to potential downloaders. If the P2P program allows ten
        > connections and MediaDefender fills nine, we are blocking 90% of illegal
        > uploading. The beauty of Interdiction is that it does not affect anything
        > on that computer except the ability to upload pirated files on that
        > particular P2P network. The computer user still has full access to e-mail,
        > web, and other file sharing programs.
        > ====
        >
        > This is obviously a denial of service attack, and a fairly stupid one.
        > Any limited number of providers that attempts to DoS an entire megacluster
        > is trying to turn the logic of a DDoS upside down. If that interdiction
        > approach works, then DDoS attacks don't work.

        This type of DDoS is different, in that it is not relying on sheer traffic
        to implement a DoS, but issuing a number of file requests to tie up available
        download slots on peers sharing copyrighted content.

        This is certainly technically feasible, and if they used a distributed network
        themselves to implement the attacks it would be hard to defend against.


        > Ok, so let's say the purpose is not to DoS the whole network, it's to
        > bother an individual user. But the attack only targets user uploads.
        > Assuming that the user is uploading out of generosity or laziness, this is
        > no attack at all. She waits until the attack is over, totally unbothered,
        > and goes back to uploading.

        Yes, but if you are tying up a large number of peers it is going to start
        affecting everyone regardless. It sounds like these attacks are intended to
        be much longer-lived than a traditional DoS as well, since the bandwidth
        required to simply establish a connection that barely trickles data
        through is actually very low...



        > But so what -- it doesn't matter whether this attack works. What
        > attacks would work?

        This one would work fairly well if they did it right. Attacking namespaces
        and search domains is also annoying, like the false query hits and bogus
        music / movie files...


        --
        _____________________________________________________________________
        coderman@... http://cubicmetercrystal.com/
        key fingerprint: 9C00 C63E A71D D488 AF17 F406 56FB 71D9 E17D E793
        ( see html source for public key )
        ---------------------------------------------------------------------
      • Lucas Gonze
        Message 3 of 14, Oct 2, 2002
          coderman wrote:
          > This type of DDoS is different, in that it is not relying on sheer traffic
          > to implement a DoS, but issuing a number of file requests to tie up available
          > download slots on peers sharing copyrighted content.
          >
          > This is certainly technically feasible, and if they used a distributed network
          > themselves to implement the attacks it would be hard to defend against.

          Hm, ok, so servents stop using upload slots and instead let uploaders use
          all available bandwidth, just like standard web servers.

          > Attacking namespaces
          > and search domains is also annoying, like the false query hits and bogus
          > music / movie files...

          A couple of things work against the bogus files:

          * Survival of the fittest. Bogus files tend to get deleted, so they
          aren't available for further upload. I guess the trick here is to
          introduce new bogus files at a rate that matches the rate of deletion.

          * Bitzi and other metadata. Files with bitzi-type metadata are likely to
          be the real thing.
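
          A minimal sketch of the bitzi-style check, assuming the trusted hash
          comes from a Bitzi lookup or a similar metadata service:

              import hashlib

              def matches_trusted_hash(path, trusted_sha1_hex):
                  """Hash a downloaded file and compare it against the
                  published value before keeping or resharing it."""
                  h = hashlib.sha1()
                  with open(path, "rb") as f:
                      for chunk in iter(lambda: f.read(65536), b""):
                          h.update(chunk)
                  return h.hexdigest() == trusted_sha1_hex

          A bogus file fails the check no matter how plausible its filename,
          which automates the survival-of-the-fittest deletion step.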

          The interesting part is in the back-and-forth. The crypto world puts
          cryptanalysts and cryptographers both on display, and cryptanalysts aren't
          generally held to be more evil than cryptographers. ...Makes me think
          about becoming a p2p attack hacker...

          - Lucas
        • Wes Felter
          Message 4 of 14, Oct 2, 2002
            On Wed, 2002-10-02 at 15:07, Lucas Gonze wrote:
            > coderman wrote:
            > > This type of DDoS is different, in that it is not relying on sheer traffic
            > > to implement a DoS, but issuing a number of file requests to tie up available
            > > download slots on peers sharing copyrighted content.
            > >
            > > This is certainly technically feasible, and if they used a distributed network
            > > themselves to implement the attacks it would be hard to defend against.
            >
            > Hm, ok, so servents stop using upload slots and instead let uploaders use
            > all available bandwidth, just like standard web servers.

            An obvious extension of these two ideas is to open a very large number
            of idle connections to each Gnutella node, possibly exploiting hard
            limits or non-scalability in the Windows 9x TCP stack.

            --
            Wes Felter - wesley@... - http://felter.org/wesley/
          • Brian Behlendorf
            Message 5 of 14, Oct 2, 2002
              On Wed, 2 Oct 2002 lucas@... wrote:
              > This is obviously a denial of service attack, and a fairly stupid one.
              > Any limited number of providers that attempts to DoS an entire megacluster
              > is trying to turn the logic of a DDoS upside down.

              Not to mention the likelihood that Gnutella software authors will simply
              implement the same technique the antispam community uses to identify IP
              addresses and address ranges of known spammers, and block them before
              their packets even hit the server -- aka real-time block lists. I know you
              said "limited"; I suppose there may be some way these parties could
              implement a distributed system that would be much more difficult to keep
              up with, by embedding malware in some otherwise widely distributed
              application.
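
              A minimal sketch of such a block list (the ranges are made up; a
              real list would be compiled and shared the way spam RBLs are):

                  import ipaddress

                  # hypothetical published ranges of known interdiction hosts
                  BLOCKED_NETS = [ipaddress.ip_network(n) for n in
                                  ("192.0.2.0/24", "198.51.100.0/24")]

                  def drop_before_handshake(addr):
                      """Refuse the connection before any Gnutella traffic flows."""
                      ip = ipaddress.ip_address(addr)
                      return any(ip in net for net in BLOCKED_NETS)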

              Brian
            • Kevin Prichard
              Message 6 of 14, Oct 2, 2002
                On 2 Oct 2002, Wes Felter wrote:

                > On Wed, 2002-10-02 at 15:07, Lucas Gonze wrote:
                > > coderman wrote:
                > > > This type of DDoS is different, in that it is not relying on sheer traffic
                > > > to implement a DoS, but issuing a number of file requests to tie up available
                > > > download slots on peers sharing copyrighted content.
                > > >
                > > > This is certainly technically feasible, and if they used a distributed network
                > > > themselves to implement the attacks it would be hard to defend against.
                > >
                > > Hm, ok, so servents stop using upload slots and instead let uploaders use
                > > all available bandwidth, just like standard web servers.
                >
                > An obvious extension of these two ideas is to open a very large number
                > of idle connections to each Gnutella node, possibly exploiting hard
                > limits or non-scalability in the Windows 9x TCP stack.

                Even if the attack is distributed, my hunch is that an attack coming from a
                given IP is an IP that will never be connected to a valid, human-operated
                gnutellanet client. Blocking said IPs could be done, but identifying when
                a downloader is Them (on a per-IP basis) may be difficult, as all
                characteristics of today's clients can be mimicked.

                Identifying IPs belonging to Them may require pattern analysis of these
                "attacks" across many nodes, which means sharing, possibly pooling,
                knowledge -- not really good for decentralization. And, p2p being
                distributed, they can present data themselves, to bias away from their
                pool of IPs. Hrm.

                Just about anything that an author can build into a client can be
                reverse-engineered and added to the DoS code. I wonder if there exists a
                kind of "proof of membership" scheme whereby peer connection records could
                be signed or encrypted and deposited in a distributed pool for analysis.
                Records from actual human-operated clients, possessing a proper
                proof-of-membership credential, could be separated from DoS client
                deposits.
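
                A minimal sketch of such a signed record, using a shared
                membership key -- which also illustrates the catch noted above:
                any secret shipped inside a client can be extracted and reused
                by the DoS code (the names and fields here are hypothetical):

                    import hashlib, hmac, json, time

                    MEMBERSHIP_KEY = b"hypothetical-shared-secret"

                    def signed_connect_record(local_ip, peer_ip):
                        rec = {"from": local_ip, "to": peer_ip, "ts": int(time.time())}
                        body = json.dumps(rec, sort_keys=True).encode()
                        rec["sig"] = hmac.new(MEMBERSHIP_KEY, body, hashlib.sha1).hexdigest()
                        return rec

                    def record_is_valid(rec):
                        rec = dict(rec)              # don't mutate the caller's copy
                        sig = rec.pop("sig")
                        body = json.dumps(rec, sort_keys=True).encode()
                        want = hmac.new(MEMBERSHIP_KEY, body, hashlib.sha1).hexdigest()
                        return hmac.compare_digest(sig, want)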

                NNTP is a kind of decentralized, rolling database that could be used for
                depositing and pooling connect records. Just about all ISP accounts have
                NNTP access, and it hasn't been legislated away (yet). Problem is, any
                decentralized means by which records get analysed can be used against
                clients wanting to deny service to the DoSers. Unless they are encrypted
                with a public key, and the analysis is carried out by a central node (on
                Sealand ;^).

                kevin
              • Eric Mathew Hopper
                Message 7 of 14, Oct 2, 2002
                  On Wed, 2002-10-02 at 16:04, Wes Felter wrote:
                  > On Wed, 2002-10-02 at 15:07, Lucas Gonze wrote:
                  >> coderman wrote:
                  >>> This type of DDoS is different, in that it is not relying on sheer
                  >>> traffic to implement a DoS, but issuing a number of file requests to
                  >>> tie up available download slots on peers sharing copyrighted
                  >>> content.
                  >>>
                  >>> This is certainly technically feasible, and if they used a
                  >>> distributed network themselves to implement the attacks it would be
                  >>> hard to defend against.
                  >>
                  >> Hm, ok, so servents stop using upload slots and instead let uploaders
                  >> use all available bandwidth, just like standard web servers.
                  >
                  > An obvious extension of these two ideas is to open a very large number
                  > of idle connections to each Gnutella node, possibly exploiting hard
                  > limits or non-scalability in the Windows 9x TCP stack.

                  The response to that is to auto-kick people whose bandwidth use isn't
                  over a certain threshold. I do that anyway. If I'm only transferring
                  to them at 2-3k/sec, the file is going to take forever to get there, and
                  I'd rather give the bandwidth to someone else.
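
                  A minimal sketch of that rule (the grace period is made up; the
                  threshold is the 2-3k/sec mentioned above), which also blunts the
                  idle-connection attack Wes describes:

                      import time

                      KICK_BELOW_BPS = 3 * 1024   # the 2-3k/sec cutoff
                      GRACE_SECONDS = 30          # let a transfer ramp up first

                      def should_kick(bytes_sent, started_at):
                          """Free the slot if an upload idles or only trickles."""
                          elapsed = time.time() - started_at
                          if elapsed < GRACE_SECONDS:
                              return False
                          return bytes_sent / elapsed < KICK_BELOW_BPS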

                  Also, this technique wouldn't work well on a well-designed P2P system
                  that did a lot of caching, like Freenet for example. Attempts to block
                  really popular things this way would merely result in the popular things
                  being so widely replicated that the attack became impossible.

                  Have fun (if at all possible),
                  --
                  The best we can hope for concerning the people at large is that they
                  be properly armed. -- Alexander Hamilton
                  -- Eric Hopper (hopper@... http://www.omnifarious.org/~hopper) --
                • Rod Price
                  Message 8 of 14, Oct 2, 2002
                    A co-worker of mine has implemented a de-centralized version of an
                    artificial immune system that would seem ideal for this application.
                    The system can recognize "self" and will flag "not-self." I know
                    this description is vague, but my co-worker isn't around right
                    now (9:00 pm) to help me out. For details on artificial immune
                    systems, look at http://www.cs.unm.edu/~forrest/papers.html,
                    particularly "Architecture for an Artificial Immune System" on
                    that site.

                    Curiously, although this work is ideally suited to a de-centralized
                    system, all the implementations so far have been on a single
                    machine, with the exception of my co-worker's. I'll ask him
                    tomorrow if he would be willing to post his recent conference
                    paper.

                    My initial thought for implementation is that "self" is defined as
                    the network's normal peers, and the attacking IP addresses are
                    "not-self." Packets coming from "not-self" addresses get logged and
                    ignored, while packets from "self" addresses are allowed in as normal.

                    -Rod


                    Kevin Prichard wrote:
                    > On 2 Oct 2002, Wes Felter wrote:
                    >
                    >
                    >>On Wed, 2002-10-02 at 15:07, Lucas Gonze wrote:
                    >>
                    >>>coderman wrote:
                    >>>
                    >>>>This type of DDoS is different, in that it is not relying on sheer traffic
                    >>>>to implement a DoS, but issuing a number of file requests to tie up available
                    >>>>download slots on peers sharing copyrighted content.
                    >>>>
                    >>>>This is certainly technically feasible, and if they used a distributed network
                    >>>>themselves to implement the attacks it would be hard to defend against.
                    >>>
                    >>>Hm, ok, so servents stop using upload slots and instead let uploaders use
                    >>>all available bandwidth, just like standard web servers.
                    >>
                    >>An obvious extension of these two ideas is to open a very large number
                    >>of idle connections to each Gnutella node, possibly exploiting hard
                    >>limits or non-scalability in the Windows 9x TCP stack.
                    >
                    >
                    > Even if the attack is distributed, my hunch is that an attack coming from a
                    > given IP is an IP that will never be connected to a valid, human-operated
                    > gnutellanet client. Blocking said IPs could be done, but identifying when
                    > a downloader is Them (on a per-IP basis) may be difficult, as all
                    > characteristics of today's clients can be mimicked.
                    >
                    > Identifying IPs belonging to Them may require pattern analysis of these
                    > "attacks" across many nodes, which means sharing, possibly pooling,
                    > knowledge -- not really good for decentralization. And, p2p being
                    > distributed, they can present data themselves, to bias away from their
                    > pool of IPs. Hrm.
                    >
                    > Just about anything that an author can build into a client can be
                    > reverse-engineered and added to the DoS code. I wonder if there exists a
                    > kind of "proof of membership" scheme whereby peer connection records could
                    > be signed or encrypted and deposited in a distributed pool for analysis.
                    > Records from actual human-operated clients, possessing a proper
                    > proof-of-membership credential, could be separated from DoS client
                    > deposits.
                    >
                    > NNTP is a kind of decentralized, rolling database that could be used for
                    > depositing and pooling connect records. Just about all ISP accounts have
                    > NNTP access, and it hasn't been legislated away (yet). Problem is, any
                    > decentralized means by which records get analysed can be used against
                    > clients wanting to deny service to the DoSers. Unless they are encrypted
                    > with a public key, and the analysis is carried out by a central node (on
                    > Sealand ;^).
                    >
                    > kevin
                  • Gordon Mohr
                    Message 9 of 14, Oct 2, 2002
                      --- Kevin Prichard <decent@...> wrote:
                      > Just about anything that an author can build into a
                      > client can be
                      > reverse-engineered and added to the DoS code.

                      Except for: the activity the DoS'ers are trying to
                      stop, itself.

                      So, once resharing of partial file fragments is
                      widespread, a relatively simple countermeasure
                      against this particular "slot-depleting" attack
                      would be to check those downloading from you, to
                      ensure that they are in fact sharing the segments
                      they've already received out to others. Perhaps
                      do this through a proxy, so they don't know it's
                      you checking.

                      If they are, great, they're helping the network.
                      If they aren't, start to contain the damage they're
                      attempting: give them only a trickle of data, open
                      another slot, bias your provisioning towards those
                      peers who are verifiably resharing.

                      Perhaps even keep referring third parties to the
                      malicious node, so that they can independently
                      discover their unhelpful status, and be generally
                      known as a mischief-maker.
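
                      A minimal sketch of this provisioning bias, assuming a
                      probe_reshares(peer) check that is run through a proxy:

                          TRICKLE_BPS = 256   # keep a suspected squatter occupied, cheaply

                          def provision(downloaders, probe_reshares):
                              """Full speed for verified resharers; a trickle for
                              peers that hold slots without resharing."""
                              rates = {}
                              for peer in downloaders:
                                  if probe_reshares(peer):   # verified via third party
                                      rates[peer] = None     # None = unthrottled
                                  else:
                                      rates[peer] = TRICKLE_BPS
                              return rates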

                      - Gojomo


                    • brandon@blanu.net
                      Message 10 of 14, Oct 2, 2002
                        > said "limited", I suppose there may be some way these parties could
                        > implement a distributed system that would be much more difficult to keep
                        > up with, by embedding malware in some otherwise widely distributed
                        > application.

                        I suggested on blanu.net that Curious Yellow would be good for this
                        purpose, easily deploying an ever-expanding and hard-to-trace network
                        for DDoSing. However, as I learned at Cory Doctorow's recent EFF speech,
                        while DoS attacks are okay, actually penetrating machines, such as to
                        spread a worm, requires that the user sign a EULA first.

                        In which case, as you suggest, the way to spread it is by bundling it as
                        malware instead of having it aggressively spread via exploits.

                        It's too bad for the RIAA that Kazaa is their enemy, as Altnet is the
                        closest thing to CY-malware yet, having already established a
                        self-updating decentralized network where all users have implicitly
                        agreed to a EULA allowing arbitrary code to run on their machines.
                      • coderman
                        Message 11 of 14, Oct 2, 2002
                          Eric Mathew Hopper wrote:
                          > ...
                          >
                          > Also, this technique wouldn't work well on a well designed P2P system
                          > that did a lot of caching. Like Freenet for example. Attempts to block
                          > really popular things this way would merely result in the popular things
                          > being so widely replicated that the attack became impossible.

                          The only problem is that caching networks have their own set of troubles,
                          like coordinated attacks using many malicious peers to insert and request
                          bogus data, flushing most legitimate data out of the network and filling
                          caches with junk...

                          It is unfortunate for us that attack resistance is much more difficult in
                          truly decentralized networks.
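
                          A toy model of that flushing attack against a plain LRU-style
                          cache (the sizes are made up, and Freenet's actual store is
                          more involved):

                              from collections import OrderedDict

                              def legit_left(cache_size=100, legit=50, junk=500):
                                  """Insert junk keys into an LRU cache and count how
                                  many legitimate entries survive."""
                                  cache = OrderedDict(("legit-%d" % i, 1) for i in range(legit))
                                  for i in range(junk):
                                      cache["junk-%d" % i] = 1
                                      if len(cache) > cache_size:
                                          cache.popitem(last=False)   # evict the oldest entry
                                  return sum(1 for k in cache if k.startswith("legit"))

                              print(legit_left())   # 0 -- every legitimate entry flushed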



                          --
                          _____________________________________________________________________
                          coderman@... http://cubicmetercrystal.com/
                          key fingerprint: 9C00 C63E A71D D488 AF17 F406 56FB 71D9 E17D E793
                          ( see html source for public key )
                          ---------------------------------------------------------------------
                        • Lucas Gonze
                          Message 12 of 14, Oct 3, 2002
                            Rod Price wrote:
                            > A co-worker of mine has implemented a de-centralized version of an
                            > artificial immune system that would seem ideal for this application.
                            > The system can recognize "self" and will flag "not-self." I know
                            > this description is vague, but my co-worker isn't around right
                            > now (9:00 pm) to help me out. For details on artificial immune
                            > systems, look at http://www.cs.unm.edu/~forrest/papers.html,
                            > particularly "Architecture for an Artificial Immune System" on
                            > that site.

                            Any chance of getting more of an executive summary, Rod? My first thought
                            is that the idea of 'us' and 'them', as opposed to the 'me' and 'everybody
                            else' that decentralized designs normally use, might be too slippery to
                            work with.

                            (posting that conference paper would be a good thing -- please do!)

                            - Lucas
                          • Rod Price
                            Message 13 of 14, Oct 3, 2002
                              I've attached the conference paper. Its contribution to the field
                              lies in the fact that Keith's implementation is distributed over a
                              small network. The fundamental ideas are found elsewhere (see the
                              link below).

                              So, an executive summary...

                              An artificial immune system is a model loosely based on the human
                              immune system. The objective is to rapidly identify invading
                              antigens (viruses, bacteria, or offending IP addresses) so that
                              other systems can halt the attack. In the body, this is done by
                              B-cells which bind specifically to a particular antigen. The trouble
                              is, there can only be about 10^8 (100 million!) different varieties
                              of B-cell in the body at any given time, but there are 10^12 to
                              10^16 possible antigens out there. The B-cells collectively form
                              a memory capable of storing 10^8 patterns, but must recognize 10^16
                              patterns.

                              The body overcomes this problem by attempting to store the *right*
                              10^8 patterns, since there are probably not more than 10^8 antigens
                              it is likely to encounter. Besides, cells in the body itself
                              represent a large set of patterns, and it wouldn't do to have some
                              B-cells identify human cells as antigens.

                              So, the body manufactures immature B-cells in the thymus. Each new
                              B-cell recognizes some pattern in a space of 10^16 possibilities.
                              If the immature B-cell recognizes a human cell / pattern, a process
                              in the thymus kills it. If the B-cell does not recognize a human
                              pattern, it is let loose into the wild (your body).

                              Think of the B-cells as initially randomly distributed in this high-
                              dimensional space of 10^16 possible patterns. Someone sneezes near you and
                              a set of antigens (viruses) lands in that space. One or two B-cells
                              happen to be near the antigen location and they bind to a few of the
                              virus particles. Most of the viruses get by and multiply like crazy,
                              causing you to get sick.

                              In the meantime, the B-cells that bound to viruses have signaled that
                              an attack is in progress. In response, the body begins manufacturing
                              copies of those B-cells by the truckload. Now it's a race between
                              two exponentially growing populations. Most of the time the B-cells
                              win and you get better.

                              After the fact, however, the distribution of B-cells in that high-
                              dimensional space is no longer random. In the vicinity of the
                              antigen pattern the B-cells are very dense. The next time that
                              particular antigen attacks (someone you gave your cold to sneezes),
                              that thick set of B-cells can grab just about every virus particle
                              that got in and kill it. The immune response is very quick and you
                              stay healthy.

                              So after some time, your B-cells form a distributed memory of the
                              antigens you've encountered before. The distribution is thick in
                              the regions where likely antigens live and thin in regions where
                              they don't live.

                              --

                              That's how the body does it. Keith implemented his system with
                              mobile agents. These agents acted as carriers for digital B-cells,
                              moving patterns at random around the network. Antigens were
                              recognized in one part of the network, lots of digital B-cells were
                              generated in defense, and before long all the machines in the network
                              were able to recognize those antigens.

                              The trick lies in generating the patterns for the "B-cells" to bind
                              to. In the present case it should be quite simple: offending IP
                              addresses are the patterns. An attack on one part of the network
                              is detected and the appropriate "B-cells" then spread throughout the
                              network. The first attack succeeds to a degree but subsequent ones
                              fail. Moreover, the system is entirely de-centralized -- nowhere
                              can you find a single point of failure. This is achieved by simply
                              making each computer in the network act as its own "thymus".
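
                              A toy sketch of the "thymus" step, using /24 address
                              prefixes as the patterns (the paper's mobile agents and
                              matching rules are far richer than this):

                                  import random

                                  # prefixes of known-good peers: "self"
                                  SELF = set("10.0.%d" % i for i in range(16))

                                  def negative_selection(n):
                                      """Generate random detectors, killing any that
                                      would match self -- each node its own thymus."""
                                      detectors = set()
                                      while len(detectors) < n:
                                          cand = "10.%d.%d" % (random.randrange(256),
                                                               random.randrange(256))
                                          if cand not in SELF:
                                              detectors.add(cand)
                                      return detectors

                                  def flags(ip, detectors):
                                      """A hit marks the source as antigen; the node
                                      would then clone this detector out to peers."""
                                      return ".".join(ip.split(".")[:3]) in detectors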

                              It's fascinating work. I'm interested to hear what this group thinks
                              of it.

                              -Rod

                              Lucas Gonze wrote:
                              > Rod Price wrote:
                              >
                              >>A co-worker of mine has implemented a de-centralized version of an
                              >>artificial immune system that would seem ideal for this application.
                              >>The system can recognize "self" and will flag "not-self." I know
                              >>this description is vague, but my co-worker isn't around right
                              >>now (9:00 pm) to help me out. For details on artificial immune
                              >>systems, look at http://www.cs.unm.edu/~forrest/papers.html,
                              >>particularly "Architecture for an Artificial Immune System" on
                              >>that site.
                              >
                              >
                              > Any chance of getting more of an executive summary, Rod? My first thought
                              > is that the idea of 'us' and 'them', as opposed to the 'me' and 'everybody
                              > else' that decentralized designs normally use, might be too slippery to
                              > work with.
                              >
                              > (posting that conference paper would be a good thing -- please do!)
                              >
                              > - Lucas
                            • Lucas Gonze
                              Message 14 of 14, Oct 11, 2002
                                What I haven't been able to puzzle out of this paper is the strategy for
                                keeping antigens and viruses from flipping. I'm thinking of this bit from
                                the curious yellow rant at http://blanu.net/curious_yellow.html:

                                "Curious Blue [antigen mobile agents] could act as an ideal platform for
                                the initial stage of a Curious Yellow [virus mobile agents] infection. All
                                that is needed is an exploit in the Curious Blue code. Once one is found,
                                the entire Curious Blue network can be turned, like a clever move in a
                                game of Othello."

                                Is it in there and I'm missing it?

                                - Lucas