DIGITAL ANTS: ABSTRACT



    CHAPTER-1

    INTRODUCTION

Ants are relatively simple beings. With their small size and small number of neurons, they are incapable of dealing with complex tasks individually. The ant colony, on the other hand, is often seen as an intelligent entity for its great level of self-organization and the complexity of the tasks it performs. In this paper, we will focus on one of the resources ant colonies use for their achievements: pheromone trails. We will try to show some relationship between the stigmergic behavior facilitated by pheromones and the process of representation in a complex system.

Such behavior was once studied in laboratories in order to find out how pheromones were used by ants. In a very inspiring experiment, Deneubourg et al. [3] demonstrated how ants use pheromones to optimize the round-trip time from nest to food source. In this experiment, denoted the double bridge, ants in the nest are separated from the food location by a restricted path with various branches of different lengths. As ants travel the trail back and forth, they leave their scent trail behind. The experiment shows that pheromone concentration becomes higher on the shorter path, thereby further increasing the probability that an ant will choose the shortest path.

It is, however, naive to think that pheromones are the only resource ants use to navigate. In another experiment, Bethe [4] proved that pheromone trails are not polarized, as was once thought to be the case. The experiment consists of a nest, a food source and a pivot table between the food location and the ants' nest. After a pheromone trail is formed over the pivot table, one ant is released from the nest. While the ant is in the middle of the trail, the pivot table is turned 180 degrees. If the ant kept its heading, it would end up back in the nest, since the table was rotated; but, remarkably, the ant also turns its direction and ends up at its original destination, the food source. This experiment demonstrates that ants also depend on other senses to navigate, such as the position of the sun in the sky (or a strong enough light source), gravity, slope, etc.

1.1 PNNL's research:

    Research coming out of Pacific Northwest National Laboratory (PNNL)

always interests me. First, one of the lab's missions is to resolve cybersecurity issues. Second, their conclusions can be unorthodox. Case in point: Glenn Fink, a senior research scientist at PNNL, believes nature provides examples of how we can protect computers by using collective intelligence.

    To help defend his position, Fink enlisted Errin Fulp, associate professor of

    Computer Science at Wake Forest University, specifically because of Fulp's

    ground-breaking work with parallel processing. Together, the two researchers

developed software capable of running multiple security scans concurrently, with each scan targeting a different threat. It is a technique, it seems, that Fink acquired from studying behavior exhibited by ant colonies.


1.2 Why ants?

    In the Wake Forest University article, "Ants vs. Worms" by Eric Frazier,

    Professor Fulp describes why the researchers chose to mimic ants:

    "In nature, we know that ants defend against threats very successfully. They can

    ramp up their defense rapidly, and then resume routine behavior quickly after an

    intruder has been stopped. We are trying to achieve that same framework in a

    computer system."

    All one has to do is watch a National Geographic special about ants to appreciate

their collective capabilities. So the doctors' reasoning does make sense.

1.3 Swarm Intelligence:

    The researchers call their technology Swarm Intelligence and for a good

    reason. According to Wikipedia, Swarm Intelligence is a system:

    "Typically made up of a population of simple agents or boids interacting locally

    with one another and with their environment. The agents follow very simple rules,

    and although there is no centralized control structure dictating how individual

    agents should behave, local, and to a certain degree random interactions between

    such agents lead to the emergence of "intelligent" global behavior, unknown to the

individual agents."

The digital Swarm Intelligence consists of three components:

Digital ant: software designed to crawl through computer code, looking for evidence of malware. The researchers mentioned that ultimately there will be 3,000 different types of Digital Ants employed.

Sentinel: the autonomic manager of the digital ants congregated on an individual computer. It receives information from the ants, determines the state of the local host, and decides whether any further action is required. It also reports to the Sergeant.

Sergeant: also an autonomic manager, albeit of multiple Sentinels. If I understand correctly, the size of the network determines how many Sergeants are used. Also, Sergeants interface with human supervisors. The following slide, courtesy of the researchers and the IEEE, depicts the collective arrangement:
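As a minimal sketch of how these three roles might fit together in code (purely illustrative: the class and method names, the averaging of ant scores and the alert threshold are assumptions, not details of the PNNL/Wake Forest software), consider:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DigitalAnt:
    """Roams a host looking for one specific kind of evidence."""
    threat_type: str
    def scan(self, host_state: dict) -> float:
        # Return a score in [0, 1]; here simply whether the marker is present.
        return 1.0 if self.threat_type in host_state.get("markers", []) else 0.0

@dataclass
class Sentinel:
    """Autonomic manager of the ants on a single host."""
    host_name: str
    ants: List[DigitalAnt] = field(default_factory=list)
    def assess(self, host_state: dict) -> float:
        if not self.ants:
            return 0.0
        return sum(ant.scan(host_state) for ant in self.ants) / len(self.ants)

@dataclass
class Sergeant:
    """Autonomic manager of multiple Sentinels; interfaces with humans."""
    sentinels: List[Sentinel] = field(default_factory=list)
    alert_threshold: float = 0.5      # illustrative cut-off
    def report(self, states: dict) -> List[str]:
        alerts = []
        for s in self.sentinels:
            score = s.assess(states.get(s.host_name, {}))
            if score >= self.alert_threshold:
                alerts.append(f"{s.host_name}: suspicious (score {score:.2f})")
        return alerts
```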

    Meanwhile, McAfee says it continually adds incremental improvements,

    even though it does a splashy marketing message changeover this time of year,


which can be misleading, as The Tech Herald's security editor Steve Ragan points out in this story.

Still, security suites remain, by and large, reactive, and effective less than half the time, as Cyveillance recently reported.

Now comes Wake Forest computer science professor Errin Fulp, who, with the aid of a couple of ace grad students, Brian Williams and Wes Featherstun, says he is on to a promising new approach to defending your computer against cyber threats, especially invasive Internet worms like Conficker and Koobface.

    Fig1.1: digital ants

    Fulp is developing a pioneering defense he calls swarm intelligence,

    modeled after the behavior of ants. You can read his full report here.

When one of Fulp's digital ants detects a threat residing on a PC or in a

    network, it sets off a digital scent, attracting compatriot ants to converge, which

    then should draw the attention of a systems or network administrator.

"Our idea is to deploy 3,000 different types of digital ants, each looking for evidence of a threat," Fulp says. "As they move about the network, they leave digital trails modeled after the scent trails ants in nature use to guide other ants. Each time a digital ant identifies some evidence of malicious coding, it attracts more ants, producing the swarm that marks a potential computer infection."

LastWatchdog would like to hear from security experts as to whether this appears to be derivative of some existing technology or whether it could, indeed, be a breakthrough paradigm shift.

    1.4 Existing work:

    The Artificial Intelligence community is seeing a shift toward techniques based


on evolutionary computation. Inspiration comes from several natural fields, such as genetics, metallurgy (simulated annealing) and the mammalian immune system. The growing interest in ant colony and swarm algorithms is a further demonstration of this shift toward algorithms from this paradigm.

Marco Dorigo leads the research on optimization techniques using artificial ant colonies [5]. Since 1998, Dorigo has been organizing a biennial workshop on Ant Colony Optimization and swarm algorithms at the Université Libre de Bruxelles. Dorigo and his colleagues have successfully applied ant algorithms to the solution of difficult combinatorial problems, such as the traveling salesperson problem, the job scheduling problem and others. Ramos [6] and Semet [7] use the ant colony approach to perform image segmentation. Heusse et al. [8] apply concepts of ant colonies to the routing of network packets. A more detailed summary of these studies can be found in [9].

In simulation, ant colony behavior offers a clear demonstration of the notion of emergence: that complex, coordinated behavior can arise from the local interactions of many relatively simple agents. Stigmergy appears to the viewer almost intentional, as if it were a representation of aspects of a situation. Yet the individuals creating this phenomenon have no awareness of the larger process in which they participate. This is typical of self-organizing properties: visible at one level of the system and not at another. Considering this, Lawson and Lewis [10] have suggested that representation emerges from the behavioral coupling of emergent processes with their environments. We hope here to reveal, through experiments with a simple ant colony, the variety of parameters which affect this self-organizing tendency.

    1.5 Introduction to Ant Box Simulator:

The Ant Box Simulator (ABS) idea started from curiosity about ant colony behavior and the amazing feats ants demonstrate. The concept of the simulator is of a two-dimensional digital box, bounded by a perimeter, where ants are represented by small objects. They are inserted into the environment around a spot denoted the IN Hole, and their goal is to find the OUT Hole. Ants have a limited sensorial radius with which to smell pheromones or to detect the OUT Hole. The probability of one ant finding the exit of the box is directly related to the area of the arena and the distance between the IN and OUT holes. The next sections of this paper demonstrate a series of experiments using the simulator, varying several parameters and strategies that affect the ants' behavior.

1.6 Are Pheromones Good Enough?

    As stated previously, ants rely heavily on pheromones to guide

themselves. Most ant colony algorithms utilize the pheromone concept and a highly parallel architecture to solve hard problems. It is true that, being computer scientists seeking inspiration to solve engineering problems, we do not need to be fixated on a high-fidelity model of an ant colony. However, it is very important to keep in mind that our inspiration comes from a reduced biological model of how


    ants navigate.

The fundamental question we try to answer here is: can we solve the problem of searching for the OUT hole using only the pheromone concept? Are pheromone trails a good enough metaphor to solve the proposed problem? Moreover, with these experiments we can ask what other parameters in this computational model provide the flexibility to produce useful behavior in the context of various problems.

As a baseline, we know that in other lines of research the pheromone concept has proven useful, achieving excellent results in combinatorial optimization problems, for example [5].

1.7 Simulator Architecture:

The ABS is an application that runs on the Windows 32 platform. Due to its demand for graphic computation, a Pentium IV or greater with a fast video card is recommended. It also relies on the platform-specific DirectX technology, version 7 or higher. The software was developed using Borland's Delphi, a Pascal-based language. The main reasons for this choice were the high productivity and the fast performance of native code offered by this tool.

The application was built with expansion in mind; therefore the object model (Fig. 1.2) implements not only the Ant concept, but also a digital environment where objects and even other insects could be placed together in later experiments. The following is a brief explanation of this object model.

The Main Form is responsible for drawing, creating and managing all kinds of simulation objects, as well as presenting a user interface for interacting with the parameters. It keeps a dynamic list of simulation objects in SimsList. All simulation objects are descendants of TSimObject. TSimObject is responsible for basic aspects of the object, such as position, size, identification and a virtual method for drawing. A TMobileObject implements basic animation methods such as wall collision, collision with other objects, current speed and direction. An ant (TAnt) is a specialization of a TMobileObject that refines and extends these behaviors.


    Fig1.2: Simulator Architecture
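The simulator itself was written in Delphi; the following Python sketch merely mirrors the class hierarchy described above (TSimObject, TMobileObject, TAnt). Any attribute or method beyond those named in the text, such as the sensor radius default or the bounce logic, is an assumption added for illustration.

```python
from dataclasses import dataclass

@dataclass
class TSimObject:
    """Basic aspects: position, size, identification, and drawing."""
    obj_id: int
    x: float = 0.0
    y: float = 0.0
    size: float = 1.0
    def draw(self) -> None:            # virtual method in the original design
        raise NotImplementedError

@dataclass
class TMobileObject(TSimObject):
    """Adds basic animation: speed, direction, and collision handling."""
    speed: float = 0.0
    direction: float = 0.0             # heading angle in radians
    def collide_with_wall(self) -> None:
        self.direction = (self.direction + 3.14159) % 6.28318   # simple bounce

@dataclass
class TAnt(TMobileObject):
    """Ant-specific refinement: limited sensorial radius for pheromones."""
    sensor_radius: float = 5.0
    carrying_pheromone: bool = True
    def draw(self) -> None:
        print(f"Ant {self.obj_id} at ({self.x:.1f}, {self.y:.1f})")
```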


    CHAPTER-2

SWARM INTELLIGENCE

    Fig2.1: swarm intelligence

In a broad sense, machine learning is concerned with the algorithms and techniques that allow a system to learn [1]. Depending upon how the system learns, many categories of algorithms are available, including Swarm Intelligence. In Swarm Intelligence, the population is made up of agents. These agents interact locally, i.e., with each other and with the environment, to find the solution, but they do not have any central authority to control them. Their interactions thus lead to the global behavior of the system. This technique is clearly inspired by elements of nature such as the teamwork of ants, birds flying together, animals moving in herds, etc. [2].

Three variations of this swarm technique are currently available: Ant Colony Optimization (ACO), Stochastic Diffusion Search (SDS) and Particle Swarm Optimization (PSO). ACO was introduced by Marco Dorigo in his doctoral thesis in 1992. In ACO, each agent or ant moves along the problem graph and deposits artificial pheromone on it, just like a real ant, in such a way that future


artificial ants can build better solutions. SDS was first described by Bishop in 1989 as a population-based, pattern-matching algorithm. In SDS, each agent searches for a solution probabilistically and communicates its hypothesis to other agents on a one-to-one basis, and the positive feedback system is tuned such that, after some time, all the agents converge on one globally best solution. Due to this approach, the method not only searches for a solution but also finds the optimal one. In PSO, each agent or particle is initially seeded into the n-dimensional solution surface with a certain initial velocity and a communication channel to the other particles. Using some fitness function, the particles are evaluated at certain intervals and are accelerated towards those particles which have higher fitness values. Since there is a very high number of particles in the population, the method is less likely to converge to a local minimum, which is one of its advantages over other search algorithms.

In this paper, the first section covers ant colony optimization, its general algorithm, and its advantages and pitfalls; the second section covers stochastic diffusion search; and the third section covers particle swarm optimization. Finally, the paper closes with the conclusions and acknowledgements.

    Fig2.2: robots of swarm intelligence


    2.1 Stochastic Diffusion Search:

This technique is a two-phase scheme. In the first phase, all agents explore the search space randomly. All agents have an atomic data unit (ADU), and when an agent hits a solution, i.e., it matches the ADU, it selects other agents randomly and communicates a message about its hit. This is the diffusion phase. Whenever a sufficient number of agents point to the same solution, the search is terminated.

Let us take an example of pattern matching to illustrate what an ADU is and how this search functions. Let the search space be a picture of a crowded street. We want to find and locate a particular person in the picture. A picture of the person made under optimal conditions will be our data model. However, in the crowd the person may appear partially occluded, rotated with respect to the position in the model picture, may not be wearing glasses, etc. This means that the image of the person from our picture does not match perfectly with that in the scene. Moreover, there may be some people in the crowd with similar body constitutions, similarly clothed, etc. Due to their potential similarity to the given person, they constitute partial matches.

In this example the search space is a bitmap and ADUs can be defined as single pixel intensities. The locations of ADUs common to the object in the search space thus constitute partial solutions to the search. Stochastic Diffusion Search is performed in parallel by a pre-specified number of elements called agents. An agent is characterized by a pointer to a position in the search space and by a binary variable called activity. It assumes the value 1 if the agent points to a potentially correct position within the search space (the agent is active); otherwise it is equal to 0 (the agent is inactive).

Initially all agents are inactive; they are assigned to randomly chosen positions within the search space. Then each of them probabilistically evaluates its position in the search space by comparing a randomly chosen ADU from the data model with the corresponding one from the search space (i.e., with the ADU in the same relative position to the reference point as in the target). If the test is successful, the agent becomes active; otherwise it remains inactive. The activity label thus acts as an indicator of a potentially correct solution found by the corresponding agent. However, it does not exclude the possibility of false positives (signaling 1 for non-targets), nor does it rule out false negatives (failing to activate on the best possible match when the ideal instantiation does not exist in the search space). Next, in the diffusion phase, all of the inactive agents, and only them, individually and randomly select one agent for communication. As a result, the inactive agent is reassigned to the position in the search space pointed to by the chosen agent if the latter was active; otherwise it is randomly re-initialized. All agents


then undergo a new test, and the whole process iterates until a termination condition based on statistical equilibrium is fulfilled: the search is terminated when the maximal number of agents pointing to the same position in the search space exceeds a certain threshold and remains within specified bounds over a number of iterations.

The main disadvantage of this scheme is that, in the case of search spaces heavily distorted by noise, the diffusion of activity due to disturbances will decrease the average number of inactive agents taking part in the random search and, in effect, will increase the time needed to reach the steady state [6].
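To make the test and diffusion phases concrete, here is a minimal Python sketch of SDS applied to approximate substring matching, a one-dimensional analogue of the crowded-street example; the single-character micro-feature test, the agent count and the consensus-based stopping rule are simplifications assumed for the sketch.

```python
import random
from collections import Counter

def sds_search(search_space: str, model: str, n_agents=50, max_iters=200,
               consensus=0.8, seed=0):
    """Stochastic Diffusion Search for the best starting position of
    `model` inside `search_space` (an exact match is not required)."""
    random.seed(seed)
    n_pos = len(search_space) - len(model) + 1
    positions = [random.randrange(n_pos) for _ in range(n_agents)]
    active = [False] * n_agents

    for _ in range(max_iters):
        # Test phase: each agent checks one randomly chosen micro-feature (ADU).
        for a in range(n_agents):
            k = random.randrange(len(model))
            active[a] = search_space[positions[a] + k] == model[k]
        # Diffusion phase: inactive agents copy a randomly chosen active agent,
        # or restart at a random position if the chosen agent is also inactive.
        for a in range(n_agents):
            if not active[a]:
                other = random.randrange(n_agents)
                positions[a] = positions[other] if active[other] \
                    else random.randrange(n_pos)
        # Termination: a large enough cluster agrees on one position.
        best_pos, count = Counter(positions).most_common(1)[0]
        if count >= consensus * n_agents:
            return best_pos
    return Counter(positions).most_common(1)[0][0]

if __name__ == "__main__":
    text = "xxxhelloxworldxxhelpxx"
    print(sds_search(text, "help"))   # expected to converge near index 16
```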

    2.2 Particle Swarm Optimization:

Particle Swarm Optimization is modeled by particles in a multidimensional space that have a position and a velocity. These particles fly through hyperspace and remember the best position that they have seen. Members of a swarm communicate good positions to each other and adjust their own position and velocity based on these good positions. Communication concerns the best position known to the whole swarm and the local best positions known in neighborhoods of particles.

Position and velocity are updated at each iteration following the formulas

v ← w·v + c1·r1·(p̂ − x) + c2·r2·(ĝ − x)
x ← x + v

where:

w is the inertia constant and is typically slightly less than 1.

c1 and c2 are constants that say how much the particle is directed towards good positions. Good values are usually right around 1.

r1 and r2 are random values in the range [0, 1].

p̂ is the best position the particle has seen.

ĝ is the global best position seen by the swarm. This can be replaced by l̂, the local best, if neighborhoods are being used.

The general algorithm can be listed as:

a. Initialize x and v of each particle to random values. The range of these values may be domain specific.
b. Initialize each p̂ to the particle's current position.
c. Initialize ĝ to the position that has the best fitness in the swarm.
d. Loop while the fitness of ĝ is below a threshold and the number of iterations is less than some predetermined maximum.
e. For each particle do the following:
   1. Update v and x according to the above equations.
   2. Calculate the fitness of the new position.
   3. If it is better than the fitness of p̂, replace p̂.
   4. If it is better than the fitness of ĝ, replace ĝ.
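Purely as an illustration of the update rule and the loop above, the following Python sketch implements global-best PSO minimizing the sphere function; the swarm size, iteration limit and coefficient values are illustrative assumptions.

```python
import random

def pso(fitness, dim, n_particles=30, iters=200, w=0.7, c1=1.0, c2=1.0,
        lo=-5.0, hi=5.0, seed=1):
    """Global-best PSO minimizing `fitness` over the box [lo, hi]^dim."""
    random.seed(seed)
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    p_best = [xi[:] for xi in x]                    # best position per particle
    p_val = [fitness(xi) for xi in x]
    g_best = p_best[min(range(n_particles), key=lambda i: p_val[i])][:]
    g_val = min(p_val)

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (p_best[i][d] - x[i][d])
                           + c2 * r2 * (g_best[d] - x[i][d]))
                x[i][d] += v[i][d]
            f = fitness(x[i])
            if f < p_val[i]:                        # update personal best
                p_val[i], p_best[i] = f, x[i][:]
                if f < g_val:                       # update global best
                    g_val, g_best = f, x[i][:]
    return g_best, g_val

if __name__ == "__main__":
    sphere = lambda p: sum(c * c for c in p)
    print(pso(sphere, dim=3))
```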

    CHAPTER-3

    ANT COLONY OPTIMIZATION

    Social insects that live in colonies, such as ants, termites, wasps, and

    bees, develop specific tasks according to their role in the colony. One of the main

    tasks is the search for food. Real ants, when searching for food, can find such

    resources without visual feedback (they are practically blind), and they can adapt

    to changes in the environment, optimizing the path between the nest and the food

    source. This fact is the result of stigmergy, which involves positive feedback,

    given by the continuous deposit of a chemical substance, known as pheromone.

    A classic example of the construction of a pheromone trail in the search for

    a shorter path is shown in Figure 2 and was first presented by Colorni et al. (1991).

    In Figure 2A there is a path between food and nest established by the ants. In

    Figure 2B an obstacle is inserted in the path. Soon, ants spread to both sides of the

    obstacle, since there is no clear trail to follow (Figure 2C). As the ants go around

    the obstacle and find the previous pheromone trail again, a new pheromone trail

    will be formed around the obstacle. This trail will be stronger in the shortest path

    than in the longest path, as shown in Figure 2D.

    Fig3.1: ant colony optimization

As shown in Parpinelli et al. (2002), there are many differences between real ants and artificial ants, mainly: artificial ants have memory, they are not completely blind, and time is discrete. On the other hand, an ant colony system allows simulation of the behavior of real-world ant colonies, such as: artificial ants have


    preference for trails with larger amounts of pheromone, shorter paths have a

    stronger increment in pheromone, and there is an indirect communication system

    between ants, the pheromone trail, to find the best path.

3.1 Related work

    Korostensky and Gonnet (2000) presented an alternative method, named

    circular sum, for obtaining the sequence of branches that will give the smallest

    tree. This method models the problem as a circular traveling salesman problem

    (cTSP), so that for a complete tour, the distance from the last city to the first one is

    added to the tour distance. The tour corresponds to the sequence of species, and the

    tour distance is the smallest score for this sequence. To construct the tree, a simple

    idea is used: the correct tree will have the same score that is found by means of the

    cTSP. In this way, a second algorithm is developed, constructing trees and

comparing their scores with the one found by the cTSP. This search method is somewhat similar to the maximum parsimony method, and thus requires a large

    computational effort for constructing a phylogenetic tree for a large number of

    species.

    Kumnorkaew et al. (2004) presented a new strategy for constructing trees.

    In this algorithm, a preprocessing step defines a number of intermediary nodes, by

    means of the intersection of the input species, which are the ancestral species.

    From this point on, input species are considered source nodes and the intermediary

    nodes are compulsory passing points. This strategy is similar to the well-known

Steiner problem. Kumnorkaew et al. (2004) reported that the trees obtained were equivalent to those constructed using the neighbor-joining method. However, considerable preprocessing is necessary to define proper intermediary points, which are underused.

To define how ant colony optimization (ACO) is applied to the reconstruction of phylogenetic trees, we used a fully connected graph,

    constructed using the distance matrix among species (Figure 3). In this graph,

    nodes represent the species and edges represent the evolutionary distances between

    species.


    Fig3.2: related path

Initially, ants start at a randomly selected node. Then they travel across the structured graph, and at each node a transition function (Equation 2) determines their direction. This equation represents the probability that the k-th ant, being at node i, goes to node j in its next step:

P_k(i,j) = [τ(i,j)]^α · [1/d(i,j)]^β / Σ_{u ∉ J_i^k} [τ(i,u)]^α · [1/d(i,u)]^β      (Equation 2)

where P_k(i,j) is the probability of transition between nodes i and j, τ(i,j) is the pheromone trail between the two nodes, d(i,j) is the evolutionary distance between nodes i and j, J_i^k is the set of nodes connected to node i and already visited by the k-th ant, and α and β are arbitrary constants.

Equation 2 is composed of two terms: the first is based on the evolutionary distance between species i and j, and the second is based on the accumulated experience, the pheromone trail. This trail is represented as a matrix (like that for the distances between species), whose values are dynamically changed by the algorithm and determined according to the paths chosen by the ants. Therefore, τ(i,j) represents the attractiveness of node j while the ant is at node i. The objective of a given ant is thus to find a path in the graph that maximizes the transition probabilities, thereby obtaining a sequence of species that produces the smallest evolutionary distance.
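As an illustration of how such a transition rule can be sampled and how an ant can build and score a path, consider the Python sketch below; the α and β values, the toy distance matrix and the helper names are assumptions made for the example, not the authors' implementation.

```python
import random

def transition_probabilities(i, visited, tau, dist, alpha=1.0, beta=2.0):
    """Probability of moving from node i to each not-yet-visited node j,
    combining the pheromone tau[i][j] with the inverse evolutionary distance."""
    candidates = [j for j in range(len(dist)) if j != i and j not in visited]
    weights = [(tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta)
               for j in candidates]
    total = sum(weights)
    return {j: w / total for j, w in zip(candidates, weights)}

def construct_path(start, tau, dist, rng=random):
    """One ant builds a complete path; its score is the sum of the
    transition probabilities of the moves it actually made."""
    n = len(dist)
    path, visited, score = [start], {start}, 0.0
    current = start
    while len(visited) < n:
        probs = transition_probabilities(current, visited, tau, dist)
        r, acc, nxt = rng.random(), 0.0, None
        for j, p in probs.items():          # roulette-wheel selection
            acc += p
            if r <= acc:
                nxt = j
                break
        if nxt is None:                     # numerical safety net
            nxt = j
        score += probs[nxt]
        path.append(nxt)
        visited.add(nxt)
        current = nxt
    return path, score

if __name__ == "__main__":
    # Toy symmetric distance matrix for 4 species, uniform initial pheromone.
    dist = [[0, 2, 5, 9], [2, 0, 4, 8], [5, 4, 0, 3], [9, 8, 3, 0]]
    tau = [[1.0] * 4 for _ in range(4)]
    print(construct_path(0, tau, dist))
```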

    Differently from a traditional ACO, where moves are made between nodes,

    our system creates an intermediary node between the two previously selected ones.

    This node will represent the ancestral species of the other two, and it will not be in


the list of nodes (species) to be set in the tree. Using such an intermediary node, the distances to the remaining nodes (species) are recomputed by means of Equation 3, as follows:

d_nu(i,j) = γ · d(i,u) + (1 − γ) · d(u,j)      (Equation 3)

where u is a node that does not belong to the set of nodes connected to node i and already visited by the k-th ant, d_nu(i,j) is the distance between the new node n and node u, based on the previous distances d(i,u) and d(u,j), d(i,u) is the distance between nodes i and u, and γ is a scale constant that defines the distance between the new node n and its descendants i and j.

    This procedure is repeated until all nodes belong to the list of already

    visited nodes, and then a path is constructed. The score of this path is given by the

    sum of the transition probabilities of the adjacent nodes of the path.

    Paths constructed by the ants are used for updating the pheromone trail. An

    increment of the pheromone trail is made at all nodes belonging to at least one

    path, created in an execution cycle. This key point avoids fast convergence to a

local maximum. The pheromone trail matrix is updated according to Equation 4:

τ(i,j) ← (1 − ρ) · τ(i,j) + Δτ(i,j)      (Equation 4)

where ρ is the rate of evaporation of the pheromone, which reduces the persistence of the environment to the ants. In this system, the rate of increment of pheromone, Δτ(i,j), was modified to allow an increment proportional to all the obtained paths, given by the division of the score of the current path by that of the best path, as shown in Equation 5:

Δτ(i,j) = Σ_{k=1}^{K} S_c(t) / S_best,  summed over the ants k whose path c(t) contains the pair (i,j)      (Equation 5)

where K is the number of ants, c(t) is the path constructed by an ant up to time t, S_c(t) is the score of path c(t), and S_best is the score of the best path found up to now.

    Using this procedure, ants travel through the graph, and at the end of a

    predefined number of cycles, it is possible to reconstruct the tree using the best

    path found.
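A minimal Python sketch of the evaporation-plus-deposit update described by Equations 4 and 5 might look as follows; the evaporation rate, the list-of-node-indices path representation and the symmetric trail matrix are assumptions made for the example.

```python
def update_pheromone(tau, paths, scores, best_score, rho=0.2):
    """Evaporate all trails, then reinforce every edge that appears in at
    least one constructed path, proportionally to score / best_score."""
    n = len(tau)
    # Evaporation: tau <- (1 - rho) * tau
    for i in range(n):
        for j in range(n):
            tau[i][j] *= (1.0 - rho)
    # Deposit: each ant's path reinforces its own edges.
    for path, score in zip(paths, scores):
        increment = score / best_score
        for i, j in zip(path, path[1:]):    # consecutive node pairs in the path
            tau[i][j] += increment
            tau[j][i] += increment          # keep the trail matrix symmetric
    return tau

if __name__ == "__main__":
    tau = [[1.0] * 4 for _ in range(4)]
    paths = [[0, 1, 2, 3], [0, 2, 1, 3]]    # toy paths from two ants
    scores = [0.8, 0.6]
    print(update_pheromone(tau, paths, scores, best_score=0.8))
```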


3.2 Construction of the phylogenetic tree

    The execution of the ACO algorithm, as detailed above, gives a linear

    sequence of species and a measure of closeness between them, using the

    pheromone matrix. Using these elements, the phylogenetic tree can be constructed,

    as shown by the algorithm of Figure 4.

    To evaluate the methodology that we have proposed, we used two data sets.

    The first is a set of complete mitochondrial genomes (mtDNA) from 20 species of

    mammals, previously used in other studies (see, for instance, Cao et al., 1998). The

    second data set was especially constructed for this work and is based on DNA

    sequences of gene p53 from eight eutherian species. The data for this latter data set

were found at the NCBI site.

    Results of the construction of phylogenetic trees were compared with the

    well-known PHYLIP package using the programs NEIGHBOR and FITCH (Fitch

    and Margoliash, 1967).

The comparison of two trees is based on the analysis of their structure and on the total distance between nodes (Equation 6), proposed by Kumnorkaew et al. (2004):

D = Σ_{i=1}^{n} (d_obs(i) − d_exp(i))²      (Equation 6)

where d_obs is the distance obtained by the algorithm, d_exp is the expected distance, taken from the distance matrix, between two species, and n is the number of species. This distance measure is somewhat similar to the computation of the quadratic error.

Two trees obtained with the mtDNA data set are shown in Figure 5.

    They were obtained using the proposed ACO and the neighbor-joining method,

    respectively. Although species were similarly grouped, there are small differences

    in the order of groupings. This is what causes the differences in the distances

    between branches.

Regarding the distance between branches, the proposed ACO obtained better values when compared with the Fitch and neighbor-joining methods, for both data sets (Table 3.1).


    Table3.1

3.3 Sensitivity of parameters

    Several experiments were done with different parameters, and, for both

data sets, the best results were found using the parameters shown in Table 3.2.

    Table3.2

Parameter α controls the exploration of the search space by weighting the importance of the pheromone trail in the decision of an ant when it arrives at a branch. The algorithm is sensitive to high values of this parameter, which lead to a fast convergence to a local optimum.

Parameter β defines the relative importance of the distance between species in the transitions between nodes. In practice, we observed that it has to be higher than α, but values that are too high make the algorithm converge to a tree that groups species sequentially.

The pheromone trail evaporation is controlled by the parameter ρ, which is influenced by the number of ants (K) and the number of cycles. Experimentally, we observed that values higher than 0.8 do not allow convergence to the same tree, and values lower than 0.2 make the algorithm find trees with larger distances


    between branches. It is supposed that this is a consequence of the convergence to a

    local optimum at the beginning of the run.

Regarding the number of ants (K), we found two distinct behaviors. When K is too low (say, K < 50) or too high (say, K > 400), random behavior is observed in the resulting trees over repeated runs. For intermediary but high values of K (say, 200 < K < 350), a well-defined tree can be obtained, but with distances greater than those obtained by the other approaches. The range within which the best trees were obtained was 90 < K < 120, although we believe that this value may depend on other parameters. Future work will address this issue.

The evolutionary distance between an ancestor and its two descendant species is controlled by parameter γ. For the p53 data set, we observed that the best tree was obtained using γ = 0.5, meaning that the distance between the ancestor and the two descendants is the same for both branches. For the mtDNA data set, this parameter was set to 0.3, meaning that the distance between the descendants and the ancestral species will be divided into 30% for the first descendant and 70% for the other.

CHAPTER-4

    CONCLUSIONS

In this paper we introduced the idea of the ABS, a software program that simulates the stigmergic behavior of biological ants when faced with an artificial problem: finding the exit of a two-dimensional box.

After running the experiments, we were able to show that pheromones are really useful if used together with a good strategy. We were also able to see that different strategies may serve different purposes. The settings used in the experiment shown in Figure 4 seem, for that problem setting, to be effective. Clearly, the settings used in the experiment shown in Figure 5 are not as effective. This and other parameter variations modify the colony behavior notably. Such is the case with almost all demonstrations of emergent computation. It is, however, interesting to keep in mind the notion of exaptation, in which phenomena with no initial merit become utilized by an evolutionary process for an entirely different purpose. We believe that for a distributed, emergent system like an ant colony (or other emergent systems of a wide variety) the variation in parameters is exactly the feature that can be leveraged in an evolutionary wrapping of the emergent system. That is how a system that has no representation for some feature of a domain can come to have a representation that is effective and novel.

As in other studies of ant algorithms, our artificial ants used only the pheromone concept to guide themselves, and as we saw in xp4 and xp5, this approach can sometimes lead to catastrophic behavior. The main reason why the xp4 and xp5 experiments failed is related to the discussion of Section 1: biological ants do not rely only on pheromones to navigate. It would be interesting to research the creation of a framework of ant colony algorithms that includes methods inspired by

    other resources used by biological ants such as gravity, light sources and vision.


Also, the enrichment of the simulator environment with obstacles and even other types of insects, possibly predators, is a very interesting idea.

To conclude this paper, we share our feeling that the toughest problem we faced in dealing with evolutionary techniques such as ant algorithms is finding the right parameters in order to direct the system to solve a specific problem. If we found a way to pressure the population of such a system to change its own parameters and naturally evolve into a body capable of solving a specific problem, then our task would be defining problems in ways that would be understandable to our population. Perhaps genetic algorithms would be a good approach. Would this then materialize our dream of a machine capable of solving problems with no need of being pre-programmed? Would it eliminate the brittleness problems found in many approaches to artificial intelligence?

These questions are the main focus of research in many AI studies, and in the authors' point of view, biologically inspired ideas have a great probability of success; after all, for many problems that we still do not know how to solve using machines, nature has proven methods that work every day, almost effortlessly.

    REFERENCES:

[1] R.W. Matthews, J.R. Matthews, Insect Behavior. University of Georgia, New York: Wiley-Interscience, 1942.

[2] D. Gordon, Ants at Work: How an Insect Society is Organized. New York: The Free Press, 1999.

[3] S. Goss, S. Aron, J.L. Deneubourg, J.M. Pasteels, "Self-organized shortcuts in the Argentine ant," Naturwissenschaften, 76:579-581, 1989.

[4] A. Bethe, "Recognition of nestmates, trails," Arch. Gesamt. Physiol., 70:17-100, 1898.

[5] M. Dorigo, G. Di Caro, L.M. Gambardella, "Ant Algorithms for Discrete Optimization," Artificial Life, vol. 5, no. 3, pp. 137-172, 1999.

[6] V. Ramos, F. Almeida, "Artificial Ant Colonies in Digital Image Habitats: A Mass Behavior Effect Study on Pattern Recognition," Proc. of ANTS'2000, 2nd International Workshop on Ant Algorithms, pp. 113-116, Brussels, Belgium, Sept. 2000.

[7] Y. Semet, U. O'Reilly, F. Durand, "An Interactive Artificial Ant Approach to Non-Photorealistic Rendering," in K. Deb et al. (eds.): GECCO 2004, LNCS 3102, pp. 188-200, Springer-Verlag, Berlin, 2004.

[8] M. Heusse, S. Guérin, D. Snyers, P. Kuntz, "Adaptive Agent-driven Routing and Load Balancing in Communication Networks," Technical Report RR-98001-IASC, Département Intelligence Artificielle et Sciences Cognitives, ENST Bretagne, 1998.

[9] P.E. Merloti, "Optimization Algorithms Inspired by Biological Ants and Swarm Behavior," Artificial Intelligence Technical Report CS550, San Diego State University, San Diego, 2004.

[10] J. Lawson and J. Lewis, "Representation Emerges from Coupled Behavior," Self-Organization, Emergence, and Representation Workshop, Genetic and Evolutionary Computation Conference Proceedings, Springer-Verlag, 2004.


[11] B.J. Ford, "Brownian Movement in Clarkia Pollen: A Reprise of the First Observations," The Microscope, vol. 40, 4th quarter, pp. 235-241, Chicago, Illinois, 1992.

[12] P.E. Merloti, Ants Box Simulator, http://www.merlotti.com/EngHome/Computing/AntsSim/ants.htm, 2004.
