Competitive and cooperative interactions in biologically inspired AI

In this essay, written as my final essay in the course IT3708 spring 2010, I discuss two phenomena that are the driving forces behind many of the complex biological systems in nature in which several simple components interact with each other: competition and cooperation. I will discuss examples ranging from the process of evolution, where competition plays a crucial part, to ant colonies, where cooperation between ants enables these simple creatures to do extraordinary things. All of the examples presented here draw inspiration from biology and have been applied successfully to computer AI problems. I start by defining competitive and cooperative interactions among individual components in a broad sense. Then six examples with competitive and cooperative interactions are presented. I finish off with a conclusion on the advantages and disadvantages of using these two phenomena in AI systems.

Definitions

Both cooperative and competitive interactions imply that there is some sort of contest or, more generally, something to be gained, benefited from, or won. If there is nothing to be gained, cooperative and competitive interactions are meaningless. This leads me to my definitions of competitive and cooperative interactions between components.

Competitive interactions occur when a positive change for one component comes at the expense of other components.

This definition states that competitive interactions occur when a component's increase in gain implies a decrease in the benefit, or performance, of other components.

Cooperative interactions occur when a positive change for one component also increases the collective benefit of a group of components.

Hence cooperative interactions occur when a component shares its prize or gain with other components. These two types of interactions are not mutually exclusive: the interaction among components can be both cooperative and competitive. One example of this is Kohonen's self-organizing maps, which will be discussed later in this essay.

For each example in this essay I will also try to explain whether the cooperation or competition is explicitly programmed or emergent. The word emergent refers to a property of a collection of components that comes about only through the interactions between the components and is not a property of any component by itself [Floreano & Mattiussi, 2008]. So the question boils down to whether the cooperation or competition is unexpected and simply emerges from the system, or whether we know that the components will cooperate or compete because we have explicitly programmed them to do so. Of course, all of these examples are computer AI examples, so all of them are in a way programmed. But how "emergent" the cooperation or competition is varies from example to example, depending on how trivial it is to see whether the components will cooperate or compete.

Competitive

Evolutionary Algorithms

Evolutionary algorithms (EAs) use the principles of natural evolution to solve a variety of engineering problems. In a basic evolutionary algorithm there is a population of individuals, each of which is a solution to the problem we are trying to solve. Every individual is assigned a fitness value, which is a measure of how well the individual solves the problem. In natural evolution, fitness is the measure of an individual's ability to survive and produce offspring in a given environment [Biology, 2005]. In the process of evolution there is a contest over survival and over who gets to mate. So the next phase in the basic evolutionary algorithm is to select which individuals are allowed to mate and survive: the higher the fitness, the more likely an individual is to mate and survive. During mating new individuals, or solutions, are made. These new individuals inherit some traits from their parents and may be subjected to mutation.

Evolutionary algorithms can be seen as a parallel search technique where individuals are locations in the solution space. When the best solutions are allowed to mate and recombine into new solutions, we are in effect exploring the parts of the search space near the current solutions for better ones.

There exist many variations on the basic evolutionary algorithm presented here, and one of them is competitive co-evolutionary algorithms. In competitive co-evolution, fitness is assigned to each individual by making it compete directly with other individuals in the population. Another variation is elitism, which allows one or more of the best individuals to survive unchanged into the next generation. This prevents the EA from losing good solutions before better ones come along [Downing, 2009].
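
To make this concrete, here is a minimal sketch of a basic evolutionary algorithm with tournament selection and elitism. The problem it solves, maximizing the number of 1-bits in a bit string (the classic OneMax toy problem), and all parameter values are my own illustrative choices.

```python
import random

# Minimal EA sketch for OneMax: maximize the number of 1-bits in a genome.
GENOME_LEN, POP_SIZE, GENERATIONS = 20, 30, 50

def fitness(individual):
    return sum(individual)  # how well this individual "solves" OneMax

def tournament(pop):
    # Competitive selection: two individuals contend, the fitter gets to mate.
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    cut = random.randrange(1, GENOME_LEN)
    return p1[:cut] + p2[cut:]  # the child inherits traits from both parents

def mutate(individual, rate=0.05):
    return [bit ^ 1 if random.random() < rate else bit for bit in individual]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    elite = max(pop, key=fitness)  # elitism: the best individual always survives
    pop = [elite] + [mutate(crossover(tournament(pop), tournament(pop)))
                     for _ in range(POP_SIZE - 1)]
print(max(fitness(ind) for ind in pop))
```

The selection step is where the competition enters: an individual that keeps losing its tournaments passes nothing on to the next generation.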

Many of the driving forces in natural evolution, and in the artificial evolution performed by evolutionary algorithms, are based on competitive interactions among individuals. Going back to my definition of competitive interactions, I see that many of these variations of evolutionary algorithms fall under the definition in different ways. In the basic evolutionary algorithm, being selected to mate is a positive change for an individual, as it is allowed to pass on its "genes". This comes at the expense of other individuals, because only a limited number of individuals are allowed to mate. Likewise, with elitism an individual's positive change, or gain, is being allowed to survive into the next generation; this comes at the expense of other individuals because the number of individuals that can be counted as elites is limited. An increase in fitness is also a positive change for an individual, and in competitive co-evolution this increase in fitness for one individual can lower the fitness of other individuals.

Hence I conclude that competitive interactions among individuals are the driving force of both natural and artificial evolution.

The variations of EAs that introduce competitive interactions among individuals differ in how emergent those interactions are, but in most of them the competition is quite explicitly programmed. In elitism, for instance, it is very clear that competition will occur.

Competitive neural networks

One type of neural network, called competitive neural networks, has nodes which compete for activity. Such networks exist in large parts of the brain, and some of them form topological maps [Downing2, 2009]. These maps are structural isomorphisms between the physical world and the neurons in the brain. For example, locations in the visual field of our eyes map isomorphically to the visual neurons in the brain. Isomorphic here means that if two points in our visual field are close in the physical world, then the neurons which receive sensory input from these points are close together in the brain, and two points that are far away from each other will have sensory neurons that are far apart in the brain.

Kohonen invented a method for constructing such an isomorphic mapping, called self-organizing maps. The method has been used successfully both for solving the traveling salesman problem (TSP) and for image segmentation [Ong, 2002]. Kohonen's method involves a training phase in which all neurons compete to win each input pattern: the winning node is the one that is closest to that specific pattern, and it then updates its position so that it moves even closer to the pattern. This is repeated for all patterns until the network has converged.

There are competitive interactions between the nodes in the network: they all compete to win a pattern, and when one node wins, it comes at the expense of the other nodes, because only one node can win; hence this falls under my definition of competitive interactions. But Kohonen also added a cooperative twist to this neural network. After a winning node has adjusted its position, its neighboring nodes within a defined radius are also allowed to move closer to the current pattern, and the radius is reduced over several iterations. A positive gain for one node thus means a positive gain for a collection of nodes, which is my definition of cooperative interactions. So this neural network has both competitive and cooperative interactions between its components.
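
The following is a minimal sketch of this training loop, with the competitive step (finding the winner) and the cooperative step (updating the winner's neighborhood) marked in the comments. The grid size, learning-rate schedule and radius schedule are illustrative assumptions, not values from Kohonen's work.

```python
import numpy as np

# Minimal self-organizing map sketch: a 10x10 grid of 2-D weight vectors
# learns the structure of random 2-D input patterns.
rng = np.random.default_rng(0)
grid = rng.random((10, 10, 2))           # map nodes, each with a 2-D weight
patterns = rng.random((200, 2))          # input patterns in the unit square

for t in range(1000):
    lr = 0.5 * (1 - t / 1000)            # learning rate decays over time
    radius = 5.0 * (1 - t / 1000) + 1.0  # neighborhood radius shrinks too
    x = patterns[rng.integers(len(patterns))]
    # Competition: every node competes, the one closest to the pattern wins.
    dists = np.linalg.norm(grid - x, axis=2)
    wi, wj = np.unravel_index(np.argmin(dists), dists.shape)
    # Cooperation: the winner and its grid neighbors all move toward x.
    for i in range(10):
        for j in range(10):
            grid_dist = np.hypot(i - wi, j - wj)
            if grid_dist <= radius:
                pull = lr * np.exp(-grid_dist**2 / (2 * radius**2))
                grid[i, j] += pull * (x - grid[i, j])
```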

With self-organizing maps, the construction of the isomorphic map is, as the name suggests, emergent: it is not trivial to see that Kohonen's algorithm will produce an isomorphic mapping between the neurons and the input patterns. But the competitive interactions in Kohonen's method are not very emergent. The competition and cooperation between the neurons are explicitly programmed by a simple loop that measures the distance from each neuron to the current pattern and selects the closest one.

Artificial Immune System

An artificial immune system (ARTIS) is inspired by the natural immune system, which has many desirable properties such as distributed computation, adaptation and error tolerance. In natural organisms the immune system protects the organism against disease. Steven A. Hofmeyr and S. Forrest described in their paper [Steven A. Hofmeyr, S. Forrest, 2000] a system, LISYS, which is a network intrusion detection (NID) system based on an artificial immune system. The artificial immune system they used is quite simple compared to the human immune system, and it does not mount any kind of response like the natural immune system does; it only detects intrusions.

In our immune system there are several different kinds of cells and molecules that play their part. The ARTIS in this example models only one type of immune cell, the lymphocyte. Lymphocytes have many thousands of receptors on their surface which can bind to the surfaces of other cells, and the types of receptors a lymphocyte carries determine which types of cells it can bind to. When a lymphocyte binds to another cell it can become activated, which triggers a chain of reactions that can lead to the elimination of the cell it bound to. The lymphocytes circulating in our body will in general not have receptors which bind to our own cells and molecules, though exceptions occur; allergies, for instance, happen when lymphocytes are activated and create a response against a harmless foreign material entering the body. Lymphocytes are initially created with a random set of receptors and can therefore bind to our own body material as well as to foreign material. One type of lymphocyte, the T-cell, develops in a location in the body called the thymus. Most of our own body's material that T-cells will ever encounter is present in the thymus, and if an immature T-cell is activated during its development there, it is terminated. Those that survive will have receptors that serve as detectors for foreign material, and they leave the thymus and circulate through the body.

When a lymphocyte encounters a new pathogen (a biological agent which causes disease), detection and response are slow, because only a few lymphocytes can bind to the new pathogen. This is called the primary response, and it can take several weeks to combat the infection. To increase the efficiency of the immune system, the activated lymphocytes start to clone themselves and therefore grow exponentially in number. In the meantime the pathogens also replicate and grow exponentially. Hence competitive interactions arise between these two kinds of components: the pathogen struggles to survive while the lymphocytes struggle to drive the pathogen out of the body. A benefit for one kind of component comes at the expense of the other, and this therefore falls under the definition of competitive interactions in this essay.

To combat future infections by a pathogen it has already encountered, the immune system creates, after the infection has been eliminated, memory cells which are long-lived and have receptors that are very likely to bind to the same pathogen. The memory cells are produced in such large quantities that if the body is infected by the same pathogen again, detection and response will be so efficient that there are no clinical signs of reinfection. This memory-based detection is the basis for immunization against pathogens.

Our immune system is quite complex, and the description I have given here doesn't really do it justice, but it is needed to understand the intrusion detection system of Steven A. Hofmeyr and S. Forrest. Their ARTIS uses binary strings of a fixed length as its cells and molecules, so the task of the NID system boils down to distinguishing the strings that represent legitimate traffic on the local area network from the strings that represent intrusions (hackers, worms, viruses, etc.). A string in the NID system is a triple consisting of the source IP address, the destination IP address and the protocol or port used; it is basically a connection identifier. The lymphocytes of this system are called detection nodes: each computer in the network is a detection node, and each node has a set of detectors. These detectors, the counterpart of receptors in the natural immune system, are also modeled as binary strings. Two strings match if they have r contiguous bits in common, where r is a given threshold. If a connection matches a detector on a detection node, that detector becomes activated and an intrusion is detected. If multiple detectors at a node are activated by the same connection string, they enter a competition to become memory detectors: those with the closest match are selected. Memory detectors are long-lived, make copies of themselves and spread to other nodes, which makes the entire network more efficient at identifying future intrusions. There is also a limit to how many memory detectors can exist in the network, so there are competitive interactions among the memory detectors themselves, and only the most frequently used stay in the system.
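
As a small illustration of the matching rule, here is a toy sketch of r-contiguous-bits matching between a detector and a connection string. The strings and the threshold r below are made-up examples, not values from Hofmeyr and Forrest's paper.

```python
# Two equal-length bit strings match under the r-contiguous-bits rule if
# they agree on at least r bits in a row.
def r_contiguous_match(a: str, b: str, r: int) -> bool:
    run = best = 0
    for x, y in zip(a, b):
        run = run + 1 if x == y else 0   # extend or reset the current run
        best = max(best, run)
    return best >= r

detector   = "1011001110"
connection = "1011010110"  # e.g. an encoded (src IP, dst IP, port) triple
print(r_contiguous_match(detector, connection, r=4))  # True: first 5 bits agree
```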

As explained, there are several competitive interactions between the different components in the natural immune system as well as in the artificial immune system described here (and in more detail in [Steven A. Hofmeyr, S. Forrest, 2000]). I have described competitive interactions among the connections, the detectors and the memory detectors. These interactions qualify as competitive because the survival of each component comes at the expense of the survival of the others.

The competitive interactions in the LISYS system between the connections, the detectors and the memory detectors are clear and are based on survival. In this NID example the competition is quite explicitly programmed, but in the complex natural immune system it is much more emergent.

Cooperative

Cooperating ants

Ants are simple creatures and not very impressive by themselves, but a colony of ants can achieve extraordinary things through cooperation. They can, for instance, find the shortest path between a food source and the nest. Some ant species deposit what are called pheromones, chemical signals which can be sensed by other members of the species. [Deneubourg et al., 1990] showed that ants presented with a choice of several paths will choose the path with the strongest pheromone trail. So if ants have two paths to choose from, back and forth between the nest and a food source, the ants will in the beginning distribute themselves equally along the two paths, because neither has a strong pheromone trail. But the ants that selected the shorter path will return to the nest earlier than the ants that selected the longer path. This increases the pheromone amount on the shorter path before the ants on the other path have returned, and new ants will then select the shorter path because it has a much stronger pheromone trail. The pheromone trail on the shorter path thus keeps increasing in strength, and most of the ants end up selecting this path. Note that the ants are never guaranteed to find the shortest path.

Optimization algorithms that use this kind of pheromone-based strategy from ant foraging are called ant colony optimization (ACO). As we have seen, ACO algorithms can be used to solve the shortest path problem. They can also be used to solve the traveling salesman problem, the problem of finding the shortest cycle in a graph which visits every node exactly once. Using ACO for the shortest path problem is much slower than dedicated algorithms such as Dijkstra's algorithm, but one type of ACO algorithm, the ant colony system (ACS), see [Dorigo et al., 1996], can solve the TSP for graphs of moderate size (about 30 nodes) as well as or better than other algorithms for this problem.

In ACS for solving the TSP, we start by randomly distributing ants on the nodes of the graph. Each ant chooses, from its current location, an edge it has not visited before, with a probability governed by the amount of pheromone on that edge. When all the ants have completed a full tour, each ant retraces its tour while depositing a pheromone amount that is inversely proportional to the length of the tour, so that the shortest tour receives the largest pheromone amount. We then repeat this procedure while slightly evaporating the pheromone on every edge in each iteration. Again the system does not guarantee finding the shortest tour, but it will converge towards one of the shortest because of the probabilistic choices and the pheromone evaporation.
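
The sketch below illustrates this pheromone loop on a small random TSP instance. It is a simplified variant, closer to the basic ant system than to the full ACS of [Dorigo et al., 1996] (it has no heuristic visibility term or local pheromone update), and the instance size and parameters are made up.

```python
import random

# Simplified ant-colony sketch for a random 10-city TSP instance.
random.seed(1)
cities = [(random.random(), random.random()) for _ in range(10)]
n = len(cities)
dist = [[((cities[i][0] - cities[j][0])**2 +
          (cities[i][1] - cities[j][1])**2) ** 0.5
         for j in range(n)] for i in range(n)]
pher = [[1.0] * n for _ in range(n)]      # pheromone on every edge
best_tour, best_len = None, float("inf")

for _ in range(100):                      # iterations
    for _ant in range(10):                # ants per iteration
        tour = [random.randrange(n)]      # each ant starts at a random city
        while len(tour) < n:
            cur = tour[-1]
            unvisited = [c for c in range(n) if c not in tour]
            weights = [pher[cur][c] for c in unvisited]  # pheromone-biased choice
            tour.append(random.choices(unvisited, weights)[0])
        length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if length < best_len:
            best_tour, best_len = tour, length
        for i in range(n):                # deposit inversely proportional to length
            a, b = tour[i], tour[(i + 1) % n]
            pher[a][b] += 1.0 / length
            pher[b][a] += 1.0 / length
    for i in range(n):                    # evaporation on every edge
        for j in range(n):
            pher[i][j] *= 0.95
print(best_len)
```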

This is probably the most emergent example in this essay, because it is not trivial to see that the ant behaviors just described make the ants cooperate in finding a solution to the TSP.

This example falls under my definition of cooperation because a positive change for one ant, such as finding a short path to a food source, increases the collective benefit of the ant colony through the pheromone trail which leads other ants to the short path.

Particle swarm optimization

Particle swarm optimization (PSO) is a search technique with a biological inspiration from birds that move collectively and cooperate in the search for food [Floreano & Mattiussi, 2008]. The idea is that birds fly to different locations and give a loud cry with an intensity proportional to the amount of food at that location; by listening to the cries of other birds, the flock can find the global optimum of food in a search space.

In PSO the birds are called particles. In the beginning all particles are placed at random locations in the search space and then move, or "fly", according to local information. Each particle is characterized by its position and its performance, which is the value of the function being optimized at the particle's current location, and it communicates its performance to neighboring particles. Based on all of this information, each particle can keep moving in the same direction, return to the location where it had its best performance, or move toward a neighboring particle that reports a higher performance. By introducing some randomness into the direction updates, we allow particles to explore new areas of the search space and avoid getting stuck in local optima.
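
Here is a minimal sketch of these update rules, as a global-best PSO minimizing the toy function f(x, y) = x^2 + y^2, where the whole swarm acts as a single neighborhood. The inertia and attraction coefficients are conventional illustrative values, not prescribed by any particular source.

```python
import random

def f(p):
    return p[0] ** 2 + p[1] ** 2          # toy performance measure (lower is better)

swarm = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(20)]
vel = [[0.0, 0.0] for _ in swarm]
pbest = [p[:] for p in swarm]             # each particle's own best position
gbest = min(pbest, key=f)[:]              # best position reported by any particle

for _ in range(200):
    for i, p in enumerate(swarm):
        for d in range(2):
            r1, r2 = random.random(), random.random()  # randomness for exploration
            vel[i][d] = (0.7 * vel[i][d]               # keep moving the same way
                         + 1.5 * r1 * (pbest[i][d] - p[d])   # pull toward own best
                         + 1.5 * r2 * (gbest[d] - p[d]))     # pull toward swarm's best
            p[d] += vel[i][d]
        if f(p) < f(pbest[i]):
            pbest[i] = p[:]
        if f(p) < f(gbest):
            gbest = p[:]                  # cooperative information sharing
print(gbest, f(gbest))
```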

If we compare this search technique to evolutionary algorithms we see that both are parallel search techniques, but while evolutionary algorithms have competitive interactions, PSO has cooperative interactions among its individual components.

In this example our components, the particles, work together by communicating their performance to each other, and they receive a collective benefit by doing so because every particle can then improve its performance by combining the information from other particles with its own local information. According to my definition of cooperative interactions, these particles are cooperating: a positive gain for one particle increases the collective benefit of a group of particles, because the other particles will move towards the new location of higher performance and hence increase the collective performance.

The cooperation in this example is quite obvious and can't be characterized as very emergent. The particles are explicitly programmed to tell other particles about their performance and to move towards particles reporting higher performance.

Boid flocking

In nature we often see flocks of animals moving together through space, for instance birds and fish that move together so that no individual in the flock can be singled out and eaten by predators. To achieve this behavior in older animation systems, the path of each individual had to be programmed explicitly, an overwhelming task for large flocks.

To achieve flocking behavior, Reynolds suggested in his 1987 article three simple rules based on collision avoidance, flock centering and velocity matching, and named the agents following these rules boids [Reynolds, 1987]. Each boid can perceive the distance and velocity (velocity in the vector sense, with both speed and direction) of all other boids in a small neighborhood around it. The rule of collision avoidance says that if a boid is too close to the other boids in its neighborhood, it moves away from them. The second rule, flock centering, says the opposite: if the boid is too far away from the other boids in the neighborhood, it moves towards them. And the last rule, velocity matching, specifies that if a boid's speed and orientation differ from the main speed and orientation of the boids in its neighborhood, it should align and adjust its velocity to match.
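
The sketch below implements one update step of these three rules for a small swarm. The neighborhood radius, separation distance and rule weights are illustrative assumptions, not Reynolds' original values.

```python
import numpy as np

# Minimal boids sketch: 30 boids in a 100x100 world, one update step.
rng = np.random.default_rng(0)
pos = rng.random((30, 2)) * 100
vel = rng.standard_normal((30, 2))

def step(pos, vel, radius=15.0, too_close=3.0):
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbr = (d < radius) & (d > 0)      # neighbors within perception range
        if not nbr.any():
            continue
        center = pos[nbr].mean(axis=0)
        new_vel[i] += 0.01 * (center - pos[i])                 # flock centering
        new_vel[i] += 0.05 * (vel[nbr].mean(axis=0) - vel[i])  # velocity matching
        crowd = (d < too_close) & (d > 0)
        if crowd.any():                   # collision avoidance: move away
            new_vel[i] += 0.1 * (pos[i] - pos[crowd].mean(axis=0))
    return pos + new_vel, new_vel

pos, vel = step(pos, vel)
```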

One use of this method has been in movies, to simulate realistic animal flocking behavior, but it has also been used for swarms of mobile robots to make sure that the robots remain together while moving through cluttered space. In both cases the flock needs to be directed somewhere. In a movie one usually wants the flock of animals to move from one specific place to another and not, for instance, onto the characters on the screen. To solve this, Reynolds suggested having a global goal position which the flock can "migrate" to. This migration behavior can be programmed as another rule which steers the boids gradually towards the goal point. To avoid hitting obstacles in the environment, further rules are needed. Steer-to-avoid is one such rule, which simulates how natural birds avoid collisions by vision: the boid finds obstacles in front of it and changes its direction towards the edge of the obstacle. Because of the three main rules of flocking, the other boids will follow and avoid the obstacles too.

A positive change for a boid is defined in this example as becoming "more" a part of the flock, or part of a larger flock. This example thus falls under my definition of cooperative interactions, because a positive change for one component, or boid, also creates a positive change for the entire flock of boids: the flock becomes larger and more compact.

The cooperation in this example is also quite obvious and can't be characterized as very emergent: the boids follow a few simple rules that make them flock together.

Conclusions

As mentioned in the introduction, the two phenomena of cooperation and competition are the driving forces behind many of the complex biological systems in nature. The advantages these dynamics provide are tied to the fact that such systems consist of several, usually quite simple, individual components. These components usually follow simple behaviors based on the local information they have or can perceive. This enables AI systems that use such components and dynamics to do parallel processing: each component can often operate separately, so on a computer each component can run in its own thread on its own core. On multi-core computers this can give a significant increase in speed, and massive parallelism is, for instance, one reason why our brain, consisting of a vast number of individual neurons, is superior to a regular computer at many tasks.

The individual components also enable systems that are more robust and error tolerant, a fact displayed by the immune system. In this essay I have looked at an artificial immune system in which, instead of a single computer or router acting as the network intrusion detection system, all the computers in the network have detectors for intrusions, making the system more robust against an attack on any single computer. I have also looked at systems inspired by ant colonies, which can for instance be used to find the shortest path in a graph. Though this method is not optimal and not as fast as Dijkstra's algorithm, it is very robust against changes in the network graph. If one link or router goes down, the ants will simply adjust and start finding a new path, while with Dijkstra's algorithm the entire network would have to exchange information again and every node would have to update its shortest-path tables.

As with all things, there are also some disadvantages to using these components and the dynamics of cooperation and competition. The algorithms and systems inspired by biological systems with these dynamics seldom guarantee any optimality. For instance, Dijkstra's algorithm for finding the shortest path from A to B in a graph will always find the optimal solution, but the ants will not necessarily do so. For simple problems it is therefore often best to use traditional algorithms. And though many of the biologically inspired AI systems described in this essay can be sped up through parallelization, they don't necessarily solve the problem as fast as possible.

In this essay I have also discussed how emergent the cooperation or competition in each example is. I characterized most of the examples as having their cooperation or competition hard-wired, or explicitly programmed, even though most of the biological systems they are inspired by have emergent dynamics. It turns out that the question of emergence is a question of complexity, and the more complex the system is, the more computation is usually needed. I conclude that systems with emergent dynamics of competition or cooperation are therefore usually slower than explicitly programmed systems. As they are more complex, they are also harder to design and may have worse search efficiency. On the other hand, systems with emergent dynamics can find creative ways to solve a problem that we humans might never come up with.

References

  • [Floreano & Mattiussi, 2008] – Dario Floreano and Claudio Mattiussi, Bio-Inspired Artificial Intelligence, The MIT Press
  • [Biology, 2005] – Neil A. Campbell and Jane B. Reece, Biology, seventh edition, Pearson
  • [Downing, 2009] – Keith L. Downing, Introduction to Evolutionary Algorithms, http://www.idi.ntnu.no/emner/it3708/lectures/evolalgs.pdf, accessed 29 April 2010
  • [Reynolds, 1987] – Craig W. Reynolds, Flocks, Herds, and Schools: A Distributed Behavioral Model, Computer Graphics 21(4), SIGGRAPH '87
  • [Deneubourg et al., 1990] – J.-L. Deneubourg, S. Aron, S. Goss and J. M. Pasteels, The Self-Organizing Exploratory Pattern of the Argentine Ant, Springer Netherlands
  • [Dorigo et al., 1996] – Marco Dorigo and Luca M. Gambardella, Ant Colony System: A Cooperative Learning Approach to the Traveling Salesman Problem, Université Libre de Bruxelles, Belgium
  • [Steven A. Hofmeyr, S. Forrest, 2000] – Steven A. Hofmeyr and Stephanie Forrest, Architecture for an Artificial Immune System, Santa Fe
  • [Downing2, 2009] – Keith L. Downing, Unsupervised Learning in Neural Networks, http://www.idi.ntnu.no/emner/it3708/lectures/learn-unsup.pdf, accessed 28 April 2010
  • [Ong, 2002] – S.H. Ong, N.C. Yeo, K.H. Lee, Y.V. Venkatesh and D.M. Cao, Segmentation of Color Images Using a Two-Stage Self-Organizing Network, Elsevier
