MASCOT: Manycast Architecture for Service-Oriented Tactical Operations
Request for Proposals: PG11 Effort to Stimulate University Research in C2 Systems & Enablers

1. Professors:
Principal Investigator: Dr. Vinod M. Vokkarane, Assistant Professor, Computer and Information Science, University of Massachusetts Dartmouth. Expertise: design and analysis of architectures and protocols for optical and wireless networks.
Co-Principal Investigator: Dr. Ramprasad Balasubramanian, Associate Professor, Computer and Information Science, University of Massachusetts Dartmouth. Expertise: computer vision, artificial intelligence, distributed operating systems, autonomous mobile robotics, and human-computer interfaces.

2. University Information:
285 Old Westport Rd., North Dartmouth, MA 02747
Phone: (508) 910-6692; Fax: (508) 999-9144
Email: [email protected] and [email protected]

3. Student:
Neal Charbonneau, Graduate Student, Computer and Information Science, University of Massachusetts Dartmouth. Received his B.S. in Computer Science in 2008 from UMass Dartmouth. Research interests include computer networks and software design and development.

4. Website: http://acnl.umassd.edu/mascot

5. Project Report:
This document reports on the MASCOT project between the University of Massachusetts Dartmouth and the United States Marine Corps. It begins with the objectives and an overview of the project and then describes the work that went into completing the project in the subsequent sections.

6. Project Objective:
The objective of this project is to develop a manycasting service architecture that is capable of dynamically and flexibly deploying tactical field resources in order to support next-generation net-centric warfare. To this end, we develop and implement a Service-Oriented Architecture (SOA) capable of optimizing tactical field resource allocations, and compare its performance with traditional resource allocation techniques.

7. Significance of the Project:
The successful implementation of this project will help achieve and maintain battle rhythm between the Command Operations Center (COC) and other tactical field resources, thereby providing an efficient and optimized service engine for net-centric warfare within the Marine Corps.

8. Project Overview:
One of the important challenges in modern net-centric warfare is the efficient utilization of tactical field resources.


In order to direct task specifications to the "best" tactical field resources based on the COC's service request criteria, as well as the constraints typically encountered on the battlefield, including geographical, resource-availability, and communication constraints, we propose a Manycast Service Engine (MSE), wherein the COC communicates with many types of field resources chosen from a group of field resources. The "best" choice of resources may depend on their availability, relative location to the target, security capability, current task load, cost, and trust. Many tactical operations require significantly more resources than are available at a single location. Distributed field resources can provide a mechanism for the cooperative sharing of a vast array of geographically distributed resources interconnected by a communication network. Key challenges for effectively utilizing such resources include how best to select and locate the desired resources in the field using the network, and how to transfer information reliably to and from these resources.

In broadcast communication, data is sent from one node (the sender) to all other nodes in the network, and in multicast communication, data is sent from the sender to a well-defined subset of nodes in the network. In manycast communication [1], only the number of intended recipients is specified, not the actual destinations themselves. The subtle difference between manycast and multicast is that, in manycast, the actual destinations are to be determined rather than given, as in the case of multicast. As such, manycasting is well suited for the purpose of directing tactical task specifications to field resources.
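To make the distinction concrete, the following minimal Java sketch (hypothetical names and scoring, not part of the MSE implementation) selects the k best candidates from a scored candidate set, which is the essence of manycast; in multicast, the destination list itself would be supplied by the sender.

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ManycastExample {
    // Manycast: only k is specified; the k "best" candidates are chosen,
    // here by a supplied combined score (e.g., availability, proximity, trust).
    // In multicast, the caller would instead pass the exact destination list.
    static List<String> manycast(Map<String, Double> candidateScores, int k) {
        return candidateScores.entrySet().stream()
                .sorted((a, b) -> Double.compare(b.getValue(), a.getValue()))
                .limit(k)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Three candidate field nodes; only k = 2 destinations are requested.
        Map<String, Double> scores = Map.of("M1", 0.9, "M2", 0.4, "M3", 0.7);
        System.out.println(manycast(scores, 2)); // [M1, M3]
    }
}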

Fig. 1. The Manycast Service Engine (MSE) Architecture.

The MSE is designed to assist the COC operator in making intelligent decisions based on the large amount of data coming from various tactical data systems (TDS). The TDS Data Adapter (1; refer to Fig. 1) formats the data from existing TDSs, such as the Advanced Field Artillery Tactical Data System (AFATDS) and the Target Location Designation Handoff System (TLDHS), into a form that can be operated on by the proposed MSE. The COC then submits a Task Specification (2) to the MSE, which will, in the end, provide the COC with a list of manycast recommendations. The recommendations are based on six parameters passed to the MSE: pi, ti, ki, target location, target heading, and target speed. The first parameter, pi, represents the priority of different service features, such as load (li,j, where i is the node and j is the resource type), location (ci,j), security (si,j), and reliability (ri,j). The second parameter, ti, represents the resource types, such as Hummers, Marines, and Blackhawks, that are chosen to respond to the task specification. The third parameter, ki, is the required number of resources of type ti. The target's location, heading, and speed describe the target. Consider a "Tactical Edge" situation with the need for 3 Marines and 2 Hummers at location (x,y). This can be encoded in the following task specification format: {[location, load, trust], [Marine, Hummer], 3, 2, (x,y), North, z}. This states that these resources are needed to handle the target located at (x,y), heading North at z m/s. Also, these resources should be chosen based on their relative location, current load, trustworthiness (reliability and security), and the characteristics of the target. The ordering of the priority parameters, pi, specifies their relative importance.
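For concreteness, the task specification tuple above could be represented by a simple Java value class along the following lines. This is a hypothetical sketch; the field names are ours and do not reflect the MSE's internal data model.

import java.util.List;

// Hypothetical encoding of a task specification:
// {[location, load, trust], [Marine, Hummer], 3, 2, (x,y), North, z}
public class TaskSpecification {
    List<String> priorities;    // p_i: ordered service-feature priorities
    List<String> resourceTypes; // t_i: requested resource types
    List<Integer> counts;       // k_i: required number of each type t_i
    double targetX, targetY;    // target location (x, y)
    String targetHeading;       // e.g., "North"
    double targetSpeed;         // in m/s

    TaskSpecification(List<String> priorities, List<String> resourceTypes,
                      List<Integer> counts, double x, double y,
                      String heading, double speed) {
        this.priorities = priorities;
        this.resourceTypes = resourceTypes;
        this.counts = counts;
        this.targetX = x;
        this.targetY = y;
        this.targetHeading = heading;
        this.targetSpeed = speed;
    }
}

// Usage, for the example above (x, y, z as given by the operator):
// new TaskSpecification(List.of("location", "load", "trust"),
//         List.of("Marine", "Hummer"), List.of(3, 2), x, y, "North", z);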


The interface to the MSE allows the COC operator to prioritize the task in terms of response time, task completion time, and/or task completion accuracy.

The Task Parser (3) formats the task for the Weight Calculator (4). The Weight Calculator defines a weight function, WF(pi,pi+1,…,px), to compute apt task recommendations. The resulting weights, α,β,…,γ, are calculated so that α>β>…>γ and α+β+…+γ=1. Different sets of weight functions will be defined to simulate different task priorities and are passed to the Resource Selector (5). For each resource type, ti, the weight functions are used with the information in the Parameter Data Store (PDS) to find the best matches. The PDS contains an MxN resource data matrix for each MSE service feature, where M is the number of resource types and N is the number of nodes. The Resource Selector is executed on the resource data matrix and returns a list of resource types, individually ranked in order from best suited to worst for the mission. The Filter/Joiner (6) then puts the individual recommendations together according to the original requirements.

The Task Response (7) provides the list of possible manycast combinations, any of which will meet the tactical edge requirement. Each response is represented by Ti,j (where i is the node and j is the resource type) and information about how the nodes were selected. A possible response may include a list of two manycast combinations, {M1,M8,H2,H5,H6,α,β,γ} and {M1,M9,H1,H2,H5,α`,β`,γ`}, where Mi is Marine i, Hi is Hummer i, and α, β, and γ represent the weights assigned during the manycast computation. The COC operator will then be able to choose one of the recommendations based on the importance of each parameter pi. The COC's selection is automatically reported back to the MSE (7). The COC may also provide the performance characteristics of the completed mission to the MSE. This information is passed to the Learner Module and the Update Monitor. The Learner Module uses feedback from the COC to make adjustments to the Filter and the Weight Calculator so as to provide better recommendations in the future, based on past preferences. The Update Monitor updates the PDS according to the recommendation chosen by the COC. The execution command, based on the accepted recommendation of the MSE, is transmitted onto the network, and the tactical resources also update their status with the COC (8) as an acknowledgement. The network provides statistics to the Network Monitor (9), which in turn updates the PDS with relevant information, such as the updated availability of resources.
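As an illustration of the weight-and-rank computation described above, the sketch below makes simplifying assumptions of ours: it considers one row of the resource data matrix (a single resource type), with a normalized score in [0,1] per node for each service feature; the weights (α, β, γ, summing to 1) are applied to the per-feature scores and the nodes are sorted by the weighted sum.

import java.util.Arrays;
import java.util.List;

public class ResourceSelectorSketch {
    // scores[n][f]: normalized score of node n for service feature f
    // weights[f]: weight of feature f, ordered by priority, summing to 1
    static List<Integer> rankNodes(double[][] scores, double[] weights) {
        Integer[] nodes = new Integer[scores.length];
        double[] combined = new double[scores.length];
        for (int n = 0; n < scores.length; n++) {
            nodes[n] = n;
            for (int f = 0; f < weights.length; f++) {
                combined[n] += weights[f] * scores[n][f];
            }
        }
        // Best-suited node first
        Arrays.sort(nodes, (a, b) -> Double.compare(combined[b], combined[a]));
        return Arrays.asList(nodes);
    }

    public static void main(String[] args) {
        // Three nodes, features ordered [location, load, trust]
        double[][] scores = {{0.9, 0.2, 0.8}, {0.5, 0.9, 0.6}, {0.3, 0.4, 0.9}};
        double[] weights = {0.5, 0.3, 0.2}; // alpha > beta > gamma, sum = 1
        System.out.println(rankNodes(scores, weights)); // [0, 1, 2]
    }
}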

9. Project Tasks:
Task 1: Research into existing COCs and implementation of the task analyzer and joiner modules.
Task 2: Implementation of weight calculation functions and the parameter data store.
Task 3: Implementation of update monitoring and network monitoring.
Task 4: Implementation of the learner module, GUI development, and test cases.
Task 5: Integration of MASCOT with the existing COC architecture.

Schedule: Phase I demo – 07/01/08 and Phase II demo – 10/08/08.


8.1 Technology Used
Before describing the prototype and the process of completing the project, we briefly mention the tools we used. For all coding, we used the Java language along with the NetBeans IDE. We used a Subversion repository for version control. The application uses Apache Derby as an embedded SQL-compliant database. To create the user interface prototypes, we used Omnigraffle. For the design of the application, we used Rational Rose to create UML diagrams. The database schema was created using Microsoft Visio.

8.2 Initial Prototype
The first step we took for this project was to build a GUI prototype before the initial kickoff meeting in June. The purpose of this prototype was to get a better understanding of the USMC's needs and to learn what was missing or what needed to be removed. The first prototype was created using the Omnigraffle application on OS X. Once the prototype was created in Omnigraffle, it was implemented in Java using the Swing UI toolkit. The initial prototype had no optimization functionality built into it; it was simply a user interface. An example of our first interface can be seen in Figure 2.


Fig. 2. Original Java prototype (top) and original Omnigraffle prototype (bottom).

10. Design
Once we had the prototype complete and had received feedback during the kickoff meeting, the next step was to build an overall architecture for the core of the application that would be doing the optimization. This core was designed to be generic enough to work with any kind of optimization. The input to the optimization engine is the mission information, the requested asset types, and the requested service priorities. Given this information and the information in our data store (see the next section), the engine provides a list of recommended assets. The design was done using Rational Rose to create UML diagrams. A sequence diagram explaining the optimization process is shown in Fig. 3, after the sketch below.
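At this level, the engine's contract can be summarized by a small Java interface. This is a sketch under our own hypothetical naming; the actual class names come from the Rational Rose design, which we do not reproduce here.

import java.util.List;

// Hypothetical top-level contract for the optimization engine:
// mission information, requested asset types/counts, and service
// priorities go in; a ranked list of recommendations comes out.
interface OptimizationEngine {
    List<Recommendation> recommend(MissionInfo mission,
                                   List<AssetRequest> requestedAssets,
                                   ServicePriorities priorities);
}

// Placeholder types so the interface is self-contained.
class MissionInfo { String missionType; double targetX, targetY; }
class AssetRequest { String assetType; int count; }
class ServicePriorities { String delaySensitivity, readiness, expertise, agility; }
class Recommendation { List<String> assetIds; double score; }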


Fig. 3. Optimization Sequence Diagram.


11. JC3 and the Parameter Data Store
One of the goals of our application was to be as realistic as possible. To achieve this, we had to find out what kind of data would be available to it if it were actually deployed. We used the Joint Consultation, Command and Control Information Exchange Data Model, or JC3IEDM, as a basis. Looking at the kinds of data represented in this model, we decided on the information that our application would require. Our application uses an internal database, the Parameter Data Store, to keep track of assets on the field and their status. The information stored in the data store is based on, and can be mapped to, the JC3IEDM. A snapshot of our database schema is shown in Figure 4.

The Parameter Data Store itself is implemented using Apache Derby, an open-source embedded SQL-compliant Java database. Using an embedded database allows it to be easily deployed and does not require any centralized SQL server to be running for our prototype.
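For reference, the snippet below is a minimal sketch of how an embedded Derby database is created and queried through standard JDBC. The table and column names here are illustrative placeholders of ours, not the actual PDS schema shown in Fig. 4.

import java.sql.*;

public class PdsDerbySketch {
    public static void main(String[] args) throws SQLException {
        // The embedded driver creates the database in the working
        // directory; no external SQL server is required.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:derby:pdsDemo;create=true");
             Statement st = conn.createStatement()) {
            // One-shot demo: fails if the table already exists.
            st.executeUpdate("CREATE TABLE asset ("
                    + "id INT PRIMARY KEY, "
                    + "asset_type VARCHAR(32), "
                    + "readiness DOUBLE)");
            st.executeUpdate("INSERT INTO asset VALUES (1, 'Marine', 0.9)");
            try (ResultSet rs = st.executeQuery(
                    "SELECT id, readiness FROM asset "
                    + "WHERE asset_type = 'Marine'")) {
                while (rs.next()) {
                    System.out.println(rs.getInt(1) + " " + rs.getDouble(2));
                }
            }
        }
    }
}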

Fig. 4. Parameter Data Store Schema.

12. Improved Graphical User Interface
The original user interface that we created was not optimal; there was too much information on a single screen. We wanted to streamline the process and make each step simple and fast. We again prototyped in Omnigraffle first, and once we had decided on a design, it was implemented in Java. The new interface is broken down into several screens instead of three cluttered ones. Each screen also contains a summary of all the information entered so far, to allow easy updates. The new interface also provides default settings for different options, based on past missions, to make the recommendation process even faster. At this phase, we also added mapping information to the prototype.


The assets are located at static locations on the map, and the user can use the map to view the assets, targets, and the paths between them. An example frame from the second, and most recent, version of the application can be seen in Figure 5.


Fig. 5. A frame from the second Java prototype (top) and Omnigraffle prototype (bottom).

An example of the new map interface can be seen in Figure 6. The new GUI also provides a help section that describes how to use each of the application screens.


Fig. 6. The map interface.

13. Optimization Research
The main problem being solved by MASCOT is an optimization problem: the service engine must choose an optimal set of assets to satisfy the request specified by the user. We researched a variety of optimization methods, which we discuss in this section.

The simplest method is individual ranking followed by joining to create a recommendation. In this case, each available asset is ranked according to the service priorities specified for the mission. The service priorities can be thought of as objective functions, each of which gives a score to the asset. We can then use a weight function to combine these scores into a single overall score for each asset. Once the best assets are chosen, they can be joined together to form recommendations, with each team ranked by its combined score.

Another option we considered was using genetic algorithms or simulated annealing. These techniques are often used for multi-objective optimization problems.


With either of these approaches, assets would not be ranked individually; rather, teams of assets satisfying the request would be ranked. We would then define crossover and mutation operations on the set of assets for the genetic algorithm, and a neighbor operation for simulated annealing. This approach would allow us to search more regions of the search space than the previous approach, but at the cost of much higher complexity.

We also considered using Integer Linear Programming (ILP) to solve the optimization problem. One issue with this solution is that it would provide a single optimal solution, not a set of solutions as we require. It would also require encoding the problem as an ILP, which could then be solved using techniques such as the simplex method or branch and bound.

14. Optimization Algorithm
The algorithm that we decided to implement is a modified version of the ranking-and-joining approach described in the previous section. The algorithm finds a minimum of 10 recommendations and displays the top 10 to the user. It is based heavily on the Service Priorities specified by the user and on a weight function that combines these priorities. When the user is creating the request, each of our four Service Priorities (Delay Sensitivity, Readiness, Expertise, and Agility) is given one of three values: Low, Medium, or High. A value of High for Expertise means the user is requesting only assets with a high level of expertise for the given mission. Each asset in the field can be given a score for each of the Service Priorities to determine which category it falls in. Once each asset has a score for each Service Priority, the scores are combined using a weight function. The weight function is based on the current mission and gives higher weights to certain Service Priorities. For example, some missions may require high levels of agility; in this case, the Agility Service Priority is given a higher weight in the weight function compared to the other Service Priorities.

There are two phases to the algorithm: the first phase filters the available assets, while the second phase joins the results together to form the final recommendations. The filter uses the Service Priority values specified by the user to order the results by how well each asset satisfies those Service Priorities. The filter starts by searching for assets that match the user's original request and then relaxes each of the Service Priority values one by one, recording the additional assets that now match the settings. When this phase is complete, the best assets, and the corresponding filter values those assets meet, are stored in sorted order from best to worst.

The joiner then considers the user's request and how many assets of each type the user requires. The joiner begins by trying to find enough assets that satisfy the user's original Service Priority values and by searching only the possible combinations of these assets. This reduces the search space. For example, if there are enough assets meeting the user's specified Service Priority levels to satisfy the request, the joiner searches combinations of these assets to create teams for the mission. If there are not enough assets, the joiner relaxes the Service Priority levels so that it can search for combinations from a bigger pool of assets that may not all satisfy the user's original request. All of this information is already available from the filter phase.
Once the joiner is able to find enough assets, either on the first pass or by relaxing the Service Priority levels, each combination is enumerated and each team of assets is ranked according to the sum of the scores of its assets (using the Service Priorities and the weight function). If all the assets come from a single unit, then the recommendation's score is increased by 10%. We do this because we want to favor requests that can be satisfied from a single unit, to increase the safety of the team. Once all the combinations are ranked, the top 10 are presented to the user.
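The joiner's enumerate-and-rank step can be sketched as follows. This is a simplified illustration under assumptions of ours: a single asset type, a precomputed weighted score per asset (as produced by the filter phase), and a unit label used for the 10% single-unit bonus; the real implementation also handles multiple asset types and the relaxation logic described above.

import java.util.ArrayList;
import java.util.List;

public class FilterJoinSketch {
    record Asset(String id, String unit, double score) {}
    record Team(List<Asset> members, double score) {}

    // Enumerate k-combinations of the filtered assets, score each team as
    // the sum of its members' weighted scores, apply a 10% bonus when all
    // members come from a single unit, and return the top N teams.
    static List<Team> joinAndRank(List<Asset> filtered, int k, int topN) {
        List<Team> teams = new ArrayList<>();
        enumerate(filtered, k, 0, new ArrayList<>(), teams);
        teams.sort((a, b) -> Double.compare(b.score(), a.score()));
        return teams.subList(0, Math.min(topN, teams.size()));
    }

    static void enumerate(List<Asset> assets, int k, int start,
                          List<Asset> current, List<Team> out) {
        if (current.size() == k) {
            double score = current.stream().mapToDouble(Asset::score).sum();
            boolean sameUnit = current.stream()
                    .map(Asset::unit).distinct().count() == 1;
            if (sameUnit) score *= 1.10; // favor single-unit teams
            out.add(new Team(List.copyOf(current), score));
            return;
        }
        for (int i = start; i < assets.size(); i++) {
            current.add(assets.get(i));
            enumerate(assets, k, i + 1, current, out);
            current.remove(current.size() - 1);
        }
    }

    public static void main(String[] args) {
        List<Asset> filtered = List.of(
                new Asset("M1", "U1", 0.9), new Asset("M2", "U1", 0.7),
                new Asset("M3", "U2", 0.8), new Asset("M4", "U2", 0.6));
        joinAndRank(filtered, 2, 10).forEach(t ->
                System.out.println(t.members() + " -> " + t.score()));
    }
}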


15. Performance Evaluation:
The recommendations provided by the MSE can be evaluated based on several metrics, including how well the recommendation ranks according to the service priorities, the distance traveled, and the time to target. When the MSE provides the top 10 recommendations, they can be compared side by side. An example of this comparison is shown in the screenshot below.

In the figure, we are comparing the recommendations' (x-axis) overall score (red), expertise (green), and readiness (gray). The overall score is based on the sum of the individual assets' scores. We can see that all of the overall scores are very close to one another, but the user at the COC can see all the components of the overall score and choose the recommendation best suited to the particular scenario. In this example, Recommendation 4 has the highest Expertise, while Recommendation 8 has the highest Readiness. Recommendation 1 seems to have the lowest of both, but it ranks highest because all of its assets come from the same group, so its score was increased (this information is available from the map view). There is another panel of graphs available to show the distance traveled and the time to target for each recommendation. If there are multiple groups (for diversions, in our example), then each group has its own graph for the different recommendations.


16. Future Work:
Enhancements for improved interaction between the COC and the MSE will be developed. We plan to integrate with existing tactical service architectures (i.e., AFATDS and TLDHS); integration with situational awareness systems would also help incorporate battlefield terrain information and environmental conditions into the MSE computations. Other planned work includes: incorporating GIS information to create a graphical, map-based asset and target selection capability; evaluating other multi-objective optimization techniques for selecting assets; creating a web service that runs the core algorithm and accepts requests from other systems; adding the ability to simulate mission performance before a mission is executed, to track mission progress, and to update the mission in real time (simulation can be performed based on past missions, while tracking and updating require access to GPS data and feedback from the assets on the field); handling larger missions and multiple concurrent sub-missions; and adding a learner module to take advantage of team dynamics and past performance.

Appendix A: Sample Execution
In this section we walk through a sample execution of the application. The first step is to choose the target for the mission, as seen in the screenshot below. There are four targets to choose from in this example. We will choose the Hostage.


Fig. 7. The target selection screen.

After the target is chosen, the mission type must be selected. Along with the mission type, there are other options, such as covert, clandestine, diversion, and medic.


Fig. 8. The mission selection screen.

Here we have selected a "Hostage Rescue" mission. We are not selecting any other options for this example.


Fig. 9. Service priorities.

Next, the service priorities are chosen. There are four possible service priorities: Delay Sensitivity, Expertise, Readiness, and Agility. Each service priority can take one of three values: High, Medium, or Low. In this example, we are specifying Expertise as High, so the search for assets should favor assets that meet this criterion.


Fig. 10. Asset type selection.

Once the service priorities are selected, we need to choose the number and types of assets to include in the mission. There are four predefined asset types in the application: Marines, Abrams, Cougars, and HUMVEEs. In this example, we are choosing 4 Marines, 3 Abrams, and 4 Cougars. The recommendations we provide will contain all the required assets with the best combined score, based on the algorithm described in the Optimization Algorithm section.


Fig. 11. The top 10 recommended assets.

Once the algorithm completes, we are presented with 10 recommendations; the top three are visible here. The recommendations contain the unique ID of each asset selected by the optimization algorithm. From this screen we can view the locations of the assets and the target on the map (shown earlier), or we can compare the various recommendations, as shown next.


Fig. 12. Comparison of the top 10 recommendations.

Here is a comparison of the top 10 recommendations. Along the x-axis are the recommendation numbers, and along the y-axis are the normalized scores. We can see the scores based on the various metrics listed on the right-hand side: the Expertise of each group in the green line, the Readiness in the gray line, and the overall score in the red line. From this information, we can see that we may have to make trade-offs: if we want the recommendation with the highest Expertise, we should choose either 2 or 4, while the highest Readiness comes from recommendations 6 or 9.


Fig. 13. The final confirmation and execution page.

Once we choose a recommendation, we are presented with a summary screen that contains the mission metrics. From here we can execute the mission or cancel.

References:

1. Huang, X., She, Q., Vokkarane, V.M., and Jue, J.P., "Manycasting over Optical Burst-Switched Networks," Proceedings, IEEE International Conference on Communications (ICC), 2007.

2. Konak, A., Coit, D.W., and Smith, A.E., "Multi-objective optimization using genetic algorithms: A tutorial," Reliability Engineering and System Safety, vol. 91, no. 9, pp. 992-1007, 2006.

3. Colson, G. and de Bruyn, C., "Models and methods in multiple objectives decision making," Mathematical and Computer Modelling, vol. 12, no. 10-11, pp. 1201-1211, 1989.

4. Tekinalp, O. and Karsli, G., "A new multiobjective simulated annealing algorithm," Journal of Global Optimization, vol. 39, no. 1, pp. 49-77, Sep. 2007.

5. Kewley, R.H. and Embrechts, M.J., "Computational military tactical planning system," IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 32, no. 2, pp. 161-171, May 2002.

6. Rosenberg, B., Richards, M., Langton, J.T., Tenenbaum, S., and Stouch, D.W., "Applications of multi-objective evolutionary algorithms to air operations mission planning," in Proceedings of the 2008 GECCO Conference Companion on Genetic and Evolutionary Computation, Atlanta, GA, USA, July 2008.


7. Rosenberg, B., Burge, J., and Gonsalves, P., "Applying Evolutionary Multi-Objective Optimization to Mission Planning for Time-Sensitive Targets," in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), 2005.