
SPIE – Enabling Technologies for Simulation Science XIII, Paper SPIE 7348-16


Five-Dimensional Simulation for Advanced Decision Making

Craig Lammers1, Jeffrey Steinman1, Maria Valinski1, Karen Roth2 1WarpIV Technologies, Inc., 5230 Carroll Canyon Road Suite 306, San Diego, CA 92121

2Air Force Research Laboratory, AFRL/RISB 525 Brooks Road, Rome, NY 13441

ABSTRACT

This paper describes the application of a new parallel and distributed modeling and simulation technology known as HyperWarpSpeed to facilitate the decision-making process in a time-critical simulated Command and Control environment. HyperWarpSpeed enables the exploration of multiple decision branches at key decision points within a single simulation execution. Whereas the traditional Monte Carlo approach re-computes the majority of calculations for each run, HyperWarpSpeed shares computations between the parallel behaviors resulting in run times that are potentially orders of magnitude faster.

Keywords: Estimation and Prediction, Dynamic Situational Assessment and Prediction, HyperWarpSpeed, WarpIV Kernel, SPEEDES, PDMS

1. INTRODUCTION

Computer simulation plays an important role in the decision-making process. Factors such as financial constraints, safety concerns, physical impracticalities, and time constraints often prohibit conducting real-world studies, making simulation a viable alternative. Achieving a valid statistical representation from a simulation often requires executing many independent Monte Carlo runs with different parameter values, scenario excursions, and/or random number seeds. Time constraints prevent the majority of experiments from exploring the full set of experimental permutations, although the ability to do so efficiently would be of tremendous value.

A new five-dimensional approach to modeling and simulation, known as HyperWarpSpeed [5], was developed to address this very problem. HyperWarpSpeed is unique in its ability to explore multiple decision branches within a single simulation execution while eliminating redundant computations between overlapping timelines. In effect, HyperWarpSpeed supports the mutual interaction of multiple event timelines continually being created and merging as they converge within a five-dimensional simulated universe.

HyperWarpSpeed transparently runs on multicore computing architectures using discrete-event optimistic processing. Rollbackable event processing allows this technology to support real-time predictions with live data feeds that continually calibrate the models. In this manner, five-dimensional predictions are always based on best estimates of the dynamically changing current world state.

This technology is being developed in conjunction with the Air Force Research Laboratory’s Dynamic Situation Assessment and Prediction (DSAP) program. The goal of the DSAP program is to explore ways to support dynamic re-planning in real time by taking into account live operational data in an effects-based Command and Control environment. This project leverages the five-dimensional simulation capabilities of HyperWarpSpeed along with other technologies to accomplish this goal.

2. FIVE-DIMENSIONAL SIMULATION

Traditional computer simulations represent systems in four dimensions, i.e., space plus time. HyperWarpSpeed is unique in that it can internally spawn multiple behavior timelines at key decision points within a single simulation execution. These behavior timelines interact as necessary within a five-dimensional parallel universe.

One way to understand how this technology works is to consider a chess game where, instead of two players taking turns exploring possible moves, an unlimited number of players explore possible moves whenever necessary. This capability provides an invaluable mechanism for efficiently conducting "what if" analysis. Depending on the scenario, billions of decision permutations can be explored within a single simulation execution that may take only a few times longer to complete than the execution of a single permutation. Whereas Monte Carlo simulation experiments may re-compute up to 90% of the same calculations within each replication, HyperWarpSpeed eliminates these redundancies by sharing computations between identical timelines.

2.1 Technical Background

As opposed to cloning an entire simulation [1][2] or cloning simulation objects [3] at key decision points, HyperWarpSpeed offers a more flexible, robust, scalable, and efficient solution to exploring multiple decision paths [5]. Coordinating how the alternative timelines interact is completely automated and transparent through unique multivariable state management structures, event splitting, and event merging techniques. These techniques are briefly discussed in the following subsections.

2.1.1 Replication Sets

Fundamental to HyperWarpSpeed is the notion of replication numbers and replication sets. Monte Carlo simulations uniquely identify individual runs with a replication number. A replication number can be used to (1) set different starting seeds for pseudo random number generation, (2) specify a set of model performance parameters, (3) identify a particular set of decisions in a battle plan, and/or (4) identify enemy responses.

A replication set is simply a collection of replication numbers. HyperWarpSpeed represents replication sets as bit fields, where each bit set to one identifies a replication in the set. For example, with 32 potential replications, a replication set containing replications {0, 1, 4, 9, 15, 29} would be stored as binary 0010 0000 0000 0000 1000 0010 0001 0011, where the rightmost bit represents replication 0, the next bit to the left represents replication 1, and so on.

To achieve scalable performance, high-speed bit-masking instructions automate most of the critical bookkeeping operations for replication sets. Both events and state variables are associated with replication sets. State variables manage an array of values where the array index is associated with the replication number. Events keep track of which replication numbers are represented by the event. At the start of the simulation before any branching occurs, (1) state variables store the same value for all possible replications and (2) all initially scheduled events represent the full set of replications. Branching, event splitting, event merging, and multireplication state variable assignments occur as events are processed.
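To make the bit-field representation concrete, the following is a minimal sketch, not the actual WarpIV Kernel implementation (ReplicationSet and MakeSet are illustrative names), showing how membership tests and the critical bookkeeping operations reduce to single bitwise instructions:

```cpp
// Minimal sketch of replication sets as 32-bit fields.
#include <cstdint>
#include <cstdio>
#include <initializer_list>

using ReplicationSet = std::uint32_t;  // 32 potential replications

ReplicationSet MakeSet(std::initializer_list<int> reps) {
  ReplicationSet s = 0;
  for (int r : reps) s |= (1u << r);  // set one bit per member replication
  return s;
}

int main() {
  ReplicationSet a = MakeSet({0, 1, 4, 9, 15, 29});  // the set from the text
  ReplicationSet b = MakeSet({5, 9, 12, 15, 20});

  ReplicationSet shared   = a & b;   // replications in both sets: {9, 15}
  ReplicationSet combined = a | b;   // union of the two sets
  ReplicationSet onlyB    = b & ~a;  // members of b not in a: {5, 12, 20}

  int contains9 = (a >> 9) & 1u;     // membership test for replication 9

  std::printf("a=%08x shared=%08x combined=%08x onlyB=%08x contains9=%d\n",
              (unsigned)a, (unsigned)shared, (unsigned)combined,
              (unsigned)onlyB, contains9);
  return 0;
}
```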

2.1.2 Branching

Branching enables a HyperWarpSpeed model to simultaneously explore multiple decision points within a special event type known as a process. An event is a method on a C++ object; a process is an event that is able to pass time without exiting the method. Special modeling constructs are provided by the WarpIV Kernel to allow processes to (1) wait specified periods of time before waking up, (2) wait for interrupts to occur, and/or (3) wait for necessary resources to become available before continuing processing. Processes can have multiple wait statements within a single method.

HyperWarpSpeed allows processes to branch using syntax similar to the standard switch-case statement. Each decision point is currently limited to twenty branches. At each decision point, model developers specify the probability of taking each branch, and HyperWarpSpeed schedules new processes that continue execution with subdivided replication sets based on the specified probabilities for each branch label. The maximum number of branch points within an individual timeline depends on the replication set size; for example, a replication set of 128 could subdivide two ways 7 times, since 2^7 = 128 unique permutations. Any additional branch statements encountered beyond that point flip a coin to determine which branch to evaluate, as in a Monte Carlo run.
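The actual branching syntax is not reproduced in this paper. As a hypothetical illustration of the underlying bookkeeping, the sketch below subdivides a replication set among branches in proportion to their specified probabilities (SubdivideSet and its interface are assumptions, not the WarpIV Kernel API):

```cpp
// Hypothetical sketch: partition the members of 'parent' among branches,
// giving each branch a share of replications proportional to its
// probability. Probabilities are assumed to sum to 1.
#include <cstddef>
#include <cstdint>
#include <vector>

using ReplicationSet = std::uint32_t;

std::vector<ReplicationSet> SubdivideSet(ReplicationSet parent,
                                         const std::vector<double>& probs) {
  // Collect the member replication numbers.
  std::vector<int> members;
  for (int r = 0; r < 32; ++r)
    if ((parent >> r) & 1u) members.push_back(r);

  std::vector<ReplicationSet> subsets(probs.size(), 0);
  std::size_t next = 0;
  double cumulative = 0.0;
  for (std::size_t b = 0; b < probs.size(); ++b) {
    cumulative += probs[b];
    // Assign members until this branch holds its cumulative share.
    while (next < members.size() && next < cumulative * members.size()) {
      subsets[b] |= (1u << members[next]);
      ++next;
    }
  }
  return subsets;  // each subset continues as a new process timeline
}
```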

2.1.3 Multireplication State Variables

Multireplication state variables manage an array of values, where each element in the array corresponds to a particular replication number. State variables are potentially accessed and/or modified by events with arbitrary replication sets. Multireplication data types are implemented as C++ objects representing integers, doubles, and Boolean values. With operator overloading, these data types behave as normal variables; under the hood, they manage multiple values for different replication subsets.
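A minimal sketch of the idea follows, assuming a fixed universe of 32 replications (MultiRepVar is an illustrative class, not the WarpIV Kernel's data type; the real types also overload arithmetic and comparison operators so they read like plain variables):

```cpp
// Sketch of a multireplication state variable: one stored value per
// replication number, assigned and read per replication set.
#include <array>
#include <cstdint>

constexpr int kMaxReps = 32;
using ReplicationSet = std::uint32_t;

template <typename T>
class MultiRepVar {
 public:
  explicit MultiRepVar(T initial) { values_.fill(initial); }

  // Assign 'v' only for the replications in 'reps'.
  void Assign(T v, ReplicationSet reps) {
    for (int r = 0; r < kMaxReps; ++r)
      if ((reps >> r) & 1u) values_[r] = v;
  }

  // Read the value seen by a single replication.
  T Get(int rep) const { return values_[rep]; }

 private:
  std::array<T, kMaxReps> values_;  // one slot per replication number
};

// Usage: a Boolean attribute that starts true for all replications and
// is then set false for replications {0, 1, 4}.
// MultiRepVar<bool> commsUp(true);
// commsUp.Assign(false, (1u << 0) | (1u << 1) | (1u << 4));
```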

2.1.4 Event Splitting

Event splitting occurs when an event or process with a particular replication set accesses multireplication variables that store different values across that set. For example, a Boolean multireplication state variable might store the value true for replications {0, 1, 4, 9, 15, 29} and false for all other replications. An event or process with replications {5, 9, 12, 15, 20} will likely produce different results depending on the stored value, so HyperWarpSpeed automatically processes the event twice: once for replications {5, 12, 20} and again for replications {9, 15}. During event splitting operations, HyperWarpSpeed automatically tracks replication set dependencies. Any event or process may access several multireplication state variables, causing arbitrarily complex event splitting.
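The splitting rule can be sketched as a partition of the event's replication set by the distinct values the accessed variable stores (an illustration of the concept building on the MultiRepVar sketch above, not WarpIV internals). Applied to the example, an event with replications {5, 9, 12, 15, 20} reading the Boolean variable partitions into {9, 15} for true and {5, 12, 20} for false:

```cpp
// Sketch: partition an event's replication set by the distinct values a
// multireplication variable stores for those replications. The event is
// then processed once per partition entry.
#include <cstdint>
#include <map>

constexpr int kMaxReps = 32;
using ReplicationSet = std::uint32_t;

template <typename Var>
auto PartitionByValue(ReplicationSet eventReps, const Var& var)
    -> std::map<decltype(var.Get(0)), ReplicationSet> {
  std::map<decltype(var.Get(0)), ReplicationSet> partitions;
  for (int r = 0; r < kMaxReps; ++r)
    if ((eventReps >> r) & 1u)
      partitions[var.Get(r)] |= (1u << r);  // group replication by value
  return partitions;
}
```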

2.1.5 Event Merging

Event merging enables identical timelines to recombine whenever possible. It is entirely possible for (1) identical events to be scheduled during event splitting, and (2) pending unprocessed events to be identical. HyperWarpSpeed merges identical events before they are processed to consolidate event scheduling and processing overheads. Without this merging capability, HyperWarpSpeed would branch and split events in an uncontrolled exponential manner. With it, when decision points are independent of one another, the simulation continually diverges and re-merges between decision points.
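Under the same illustrative assumptions, merging can be sketched as keying pending events by their identity and OR-ing replication sets when a duplicate is scheduled (the fields of EventKey here are hypothetical; a real event identity would also have to cover event parameters):

```cpp
// Sketch: pending events that are otherwise identical are combined by
// OR-ing their replication sets, so the shared computation runs once.
#include <cstdint>
#include <map>
#include <tuple>

using ReplicationSet = std::uint32_t;

// Simplified event identity: (simulation time, handler id, object id).
using EventKey = std::tuple<double, int, int>;

struct PendingEvents {
  std::map<EventKey, ReplicationSet> queue;

  void Schedule(EventKey key, ReplicationSet reps) {
    queue[key] |= reps;  // identical events merge instead of accumulating
  }
};
```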

2.2 A Simple Example

Imagine within a simulated world that two aircraft, EAGLE and HAWK, are en route to a target of opportunity. During the engagement, there is a probability that EAGLE's communication system is damaged and, as a result, becomes inoperable. In HyperWarpSpeed, EAGLE branches into two overlapping parallel universes: in one universe, EAGLE's communication system is functional; in the other, it is inoperable. Meanwhile, other missions continue unaffected by the state of EAGLE's communication system, so both branches (universes) of EAGLE share (overlap) the modeling of these independent missions.

During battle, HAWK notices shots fired at EAGLE and radios him. At this point, HAWK automatically splits into two: in one universe, HAWK successfully communicates with EAGLE; in the other, HAWK is unable to. After successfully completing the mission, EAGLE and HAWK return to base. Assuming they do not speak to one another, the universes that had previously branched and split now merge, because the flight back to base is independent of the state of EAGLE's communication system.

While en route to base, EAGLE attempts communication with air traffic control (ATC). This automatically causes EAGLE to split again: the EAGLE that exited battle unscathed establishes communication, while the EAGLE in the alternate universe whose communication system was damaged cannot. The split occurs because the state of EAGLE's communication system depends on the outcome of events earlier in the day. The Futures Graph of this scenario is shown in Figure 1.

Figure 1: Futures Graph representation of EAGLE and HAWK.


2.3 Preliminary Performance Results

Executing within the WarpIV Kernel, HyperWarpSpeed has demonstrated orders of magnitude speedup gain over the traditional brute force Monte Carlo approach executed in a serial manner. Figure 2 shows the results of a benchmark developed to exercise the branching, event splitting, and event merging features of HyperWarpSpeed. Specifically, events branched and later merged within objects of type A while other objects of type B queried multireplication state variables for type A objects. Spin loops burning CPU cycles were added to the models in order to mimic computational workloads representing actual event processing that might be performed by more realistic applications [5].

The results indicated that 128 individual Monte Carlo runs would have required processing 68,096,000 events. With event merging disabled, HyperWarpSpeed would have required processing only 2,253,000 events; with event merging enabled, this number dropped to 642,000 events, roughly a 106-fold reduction in total event processing. HyperWarpSpeed running on a four-processor Mac Pro executed 236 times faster than the statistically equivalent 128 serial Monte Carlo replications. While these results are preliminary and do not necessarily indicate comparable performance for all scenarios, they are extremely positive and promising.

Figure 2: Performance of the branching stress test executing on a four-processor Mac Pro.

3. REAL-TIME ESTIMATION & PREDICTION

HyperWarpSpeed event processing is rollbackable, which makes the technology ideal for supporting real-time predictions with live data feeds that continually calibrate the models. Calibrating the simulation with live data ensures that reasonable futures are produced and eliminates the need to discard a potentially invalid execution.

The conceptual operation of using optimistic simulation to support real-time estimation and prediction is straightforward. The simulation must always be running and predicting the future up to a specified time or time window, while occasionally rolling back affected models to process real-time inputs. The real-time inputs are handled as externally generated events. They can take the form of aiding terms that describe accurately known changes to the system, or of noisy measurements that require balancing the believability of the measurements against the system uncertainty defined in the model. In either case, the objects affected by the inputs are rolled back to the current real-world time. Processing these real-time inputs can be thought of as calibrating, or estimating, the real-time state of the simulation. After the inputs have been received and processed, the system must quickly roll forward and/or reprocess all events that were rolled back, to continue predicting the future.
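A conceptual sketch of that loop follows; Simulation, RollbackTo, ProcessInput, and AdvanceTo are illustrative stand-ins rather than the WarpIV Kernel API:

```cpp
// Sketch of the estimate-and-predict loop: absorb live inputs by rolling
// back, then roll forward to re-predict the future.
#include <queue>

struct LiveInput {
  double realTime;  // real-world timestamp of the input
  double value;     // aiding term or noisy measurement
};

class Simulation {
 public:
  void RollbackTo(double t) { if (t < simTime_) simTime_ = t; }
  void ProcessInput(const LiveInput& in) { /* calibrate models here */ }
  void AdvanceTo(double t) { simTime_ = t; /* (re)process rolled-back events */ }

 private:
  double simTime_ = 0.0;
};

// One pass of the loop, called continuously as real time advances.
void EstimateAndPredict(Simulation& sim, std::queue<LiveInput>& inputs,
                        double now, double window) {
  while (!inputs.empty()) {
    const LiveInput in = inputs.front();
    inputs.pop();
    sim.RollbackTo(in.realTime);  // roll affected objects back
    sim.ProcessInput(in);         // estimate: calibrate to the input
  }
  sim.AdvanceTo(now + window);    // predict: roll forward / reprocess
}
```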


4. DYNAMIC SITUATIONAL ASSESSMENT & PREDICTION

The Air Force Research Laboratory is investigating a next-generation planning capability that combines advanced optimistic discrete-event modeling and simulation technologies with operational databases to enable dynamic situational assessment and prediction in a live effects-based dynamic Intelligence, Surveillance and Reconnaissance (ISR) Joint Command and Control (C2) environment. Dynamic situational assessment refers to the ability to determine the current operational state of the real-world battlespace based on the arrival of potentially incomplete, inaccurate, and/or noisy data from multiple sources in real time. Dynamic situational prediction refers to the ability to rapidly predict potential outcomes of alternative courses of action. These combined capabilities were designed to facilitate dynamic re-planning in an effects-based C2 environment.

Specifically, the ongoing effort will demonstrate a semi-automated system that provides alternative courses of action when it detects that the real-world execution of a commander's plan has gone awry and requires re-planning. The commander's plan is in the form of an Air Tasking Order (ATO) that declares mission objectives for aircraft assets. Live operational databases provide real-time insight into the state of neutral, friendly, and hostile forces. Scenarios that require dynamic re-planning include the (1) failure of an asset to complete a mission, (2) unexpected destruction of a pivotal asset, (3) introduction of a newly detected hostile threat into the battlespace, and (4) change in behavior of a neutral, friendly, or hostile entity. The system automatically generates, evaluates, and ranks the alternative courses of action it presents.

Determining when the commander's plan has gone awry requires knowledge and representation of both the intended plan and the real-world execution of that plan over time. Operational databases that provide insight into the state of the real-world battlespace contain data that is potentially noisy, inaccurate, and/or delayed, and as such cannot be taken at face value. Representing and analyzing a military scenario also requires consideration of decisions by neutral, friendly, and hostile forces.

4.1 Scenario Definition

The baseline scenario used for testing and demonstration simulates an aircraft-bombing mission. In this scenario, twelve blue-force aircraft squadrons are distributed among five airbases outside of hostile territory, awaiting orders. Each squadron carries a specific munitions load whose effectiveness varies depending on the target. An initial ATO assigns each squadron to a specific red-force target of opportunity at a specific time. Targets consist of military leaders, vehicle and troop convoys, command centers, radar and surface-to-air-missile sites, bridges, power grids, and manufacturing depots. Some targets are time sensitive and must be destroyed within a specific time window. Aircraft are instructed to return to base after engaging their targets.

4.2 System Overview

Figure 3 illustrates a high-level system design of the ideal dynamic situational assessment and prediction framework. Several elements are required to address this problem. First is the need for a real-time simulation technology capable of receiving, interpreting, and utilizing live real-world intelligence data during execution to provide an improved state estimate of the real world. Second is the need for a capability to determine when the real-world execution of the commander's plan has gone awry; this triggers the re-planning process. Third is the need for a capability to automatically generate viable alternative courses of action. Fourth is the need for an advanced predictive simulation environment to rapidly evaluate these alternative courses of action. Fifth is the need for a plan-ranking capability that automatically determines the best overall course of action based on predicted measures of effectiveness. Sixth is the need for an underlying architecture that ties all of these components together. Since a completely automated re-planning capability is unrealistic and potentially dangerous, the commander must be able to override the system and manually generate alternative courses of action.


Figure 3: Ideal dynamic situational assessment and prediction framework.

An ATO representing the commander's intended plan is used to initialize the system. The WarpIV Kernel provides (1) a real-time scripted execution of the ATO, (2) an improved state estimate of the real-world execution of the mission based on incoming data from operational databases such as the Theater Battle Management Core Systems (TBMCS), and (3) a utility to detect deviation between the intended plan and reality. When re-planning is necessary, an external plan generation tool produces alternative courses of action. Each alternative course of action is simulated within a predictive HyperWarpSpeed (HWS) execution and ranked based on its effectiveness. At this point, the commander can either accept one of the alternative courses of action as the new path forward or manually generate a new ATO. The system then repeats the cycle using the new plan.

4.3 Real-time Simulation

The real-time simulation functions to (1) represent the scripted execution of the commander's intended plan, (2) provide an improved state estimate of the real-world execution of the mission based on data from operational databases, and (3) determine when the real-world execution of the mission deviates from the intended plan and re-planning is required.

Figure 4: High-level representation of an entity in the real-time simulation. Each entity contains a component modeling the scripted behavior of the commander’s plan and perceived real-world behavior based on incoming live data. Although not shown, each entity also contains a component that automatically compares the state of the plan to that of reality whenever any of its attributes change.

A high-level design of a typical entity in the real-time simulation is shown in Figure 4. Each simulation object (SimObj) contains a component representing the (1) commander's scripted plan, and (2) real-world execution of the commander's plan based on incoming live data. This design enables the comparison between plan and reality to occur locally at the entity level, and only when an entity's attribute changes, as opposed to an expensive global comparison between all entities at some delta time interval. The plan and reality components are further decomposed into specific components modeling the entity's behavior, munitions, sensors, and communications (not shown). In contrast to a scripted motion type, the components modeling reality feed noisy real-time sensor detections through a Kalman Filter to account for uncertainty.
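As an illustration of the kind of update the reality components might apply to noisy detections, here is a minimal one-dimensional Kalman filter; the paper does not specify the filter's actual form, so the structure and tuning values below are assumptions:

```cpp
// Minimal 1-D Kalman filter: balance the believability of a noisy
// measurement against the model's own uncertainty.
#include <cstdio>
#include <initializer_list>

struct Kalman1D {
  double x;  // state estimate (e.g., position along one axis)
  double p;  // estimate variance
  double q;  // process noise: uncertainty growth per time step
  double r;  // measurement noise variance

  void Predict() { p += q; }  // uncertainty grows between detections

  void Update(double z) {     // fold in a noisy detection z
    double k = p / (p + r);   // Kalman gain: weight given to measurement
    x += k * (z - x);
    p *= (1.0 - k);
  }
};

int main() {
  Kalman1D f{0.0, 1.0, 0.01, 0.25};          // illustrative tuning values
  for (double z : {0.9, 1.1, 1.05, 0.98}) {  // noisy detections near 1.0
    f.Predict();
    f.Update(z);
  }
  std::printf("estimate = %.3f (variance %.3f)\n", f.x, f.p);
  return 0;
}
```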

Interfaces were developed to enable the real-time simulation to receive live data updates from an external process. Currently, the real-time simulation can receive live data updates (1) in the form of Distributed Interactive Simulation (DIS) Protocol Data Unit (PDU) messages, (2) from a second real-time execution of the commander’s intended plan initialized with a different random seed, or (3) from a command-line utility. Eventually, the real-time simulation will interface directly with operational databases such as the Air Operations Database (AODB) and Modernized Integrated Database (MIDB).

The real-time simulation can be viewed in any DIS-compliant visualization tool, such as JSAF. Currently, the WarpIV real-time simulation generates EntityState, Fire, and Detonation PDUs allowing the operator to visualize the movement, fire engagement, and detonation of entities.

4.4 Predictive Simulation

HyperWarpSpeed is used to rapidly evaluate and predict the outcome of each of the alternative courses of action. Preliminary models allow the simulation to branch based on (1) attack sequence, and (2) detonation result. In this manner, the simulation considers the impact of blue- and red-force entities attacking first, and the impact of an entity surviving or being destroyed. If the entity receiving fire is not destroyed, it returns fire if munitions are available.

To gain a statistical understanding of the results, multireplication state variables were used to represent entity location, health, and quantity of munitions. These variables provide probabilistic results in the form of a statistical distribution. A statistical algebra utility was developed in the WarpIV Kernel to map multireplication state variables into histograms. Basic measures of effectiveness and measures of performance will be statistically computed at the completion of the simulation in order to rank the alternative courses of action.
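A sketch of the histogram mapping follows, reusing the illustrative MultiRepVar and bit-field conventions from Section 2.1 (the actual interface of the WarpIV statistical algebra utility is not shown in the paper):

```cpp
// Sketch: map a multireplication state variable into a histogram by
// counting how many replications in 'reps' hold each distinct value.
#include <cstdint>
#include <map>

constexpr int kMaxReps = 32;
using ReplicationSet = std::uint32_t;

template <typename Var>
auto Histogram(const Var& var, ReplicationSet reps)
    -> std::map<decltype(var.Get(0)), int> {
  std::map<decltype(var.Get(0)), int> bins;
  for (int r = 0; r < kMaxReps; ++r)
    if ((reps >> r) & 1u)
      ++bins[var.Get(r)];  // e.g., munitions remaining -> replication count
  return bins;
}
```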

5. BROADER VISION

Several recent factors are converging that demand a broader vision for developing complex systems of systems. These factors include the net-centric transformation of military armed forces, the emerging multicore computing revolution, plug-and-play component/composite interoperability methodologies, Service Oriented Architectures, web technologies, Live, Virtual and Constructive (LVC) interoperability standards, cognitive modeling, open source software, standardization of data models, the Department of Defense Architecture Framework, and now technologies such as HyperWarpSpeed that introduce a new simulation paradigm for supporting advanced decision making [6].

To accomplish this vision, WarpIV Technologies, Inc. is in the process of forming a new study group within the Simulation Interoperability Standards Organization (SISO) known as the Parallel and Distributed Modeling and Simulation (PDMS) Standing Study Group (SSG). This study group will explore standards-based M&S architectures and ultimately provide a comprehensive report on M&S and multicore computing. The intent is for the report to raise awareness within the M&S community concerning the need for a standard multicore-ready architecture for Force Modeling and Simulation.

6. SUMMARY & CONCLUSIONS

This paper described the application of HyperWarpSpeed, a five-dimensional, multiple-hypothesis, decision-branching simulation technology, to the dynamic situational assessment and prediction problem. Instead of running many independent replications of a simulation, HyperWarpSpeed can efficiently evaluate multiple decisions within a single simulation execution.

The HyperWarpSpeed technology has virtually unlimited applications including command and control, traffic management, failure analysis, forecasting, finance, and gaming. With preliminary benchmarks indicating excellent performance over the traditional brute force Monte Carlo approach, HyperWarpSpeed could change how the simulations of tomorrow are built.


7. ACKNOWLEDGMENTS

The authors of this paper would like to thank Dawn Trevisani and Dr. Timothy Busch for their continued support of this work and insight into Air Force command and control operational systems. WarpIV Technologies, Inc. is performing this work for the Air Force Research Laboratory, Rome Research Site in New York, under contract FA8750-08-C-0070.

8. BIOGRAPHIES

Craig Lammers, Vice President and Senior Analyst at WarpIV Technologies, Inc., graduated with high honors in 2001 from the University of Michigan, Dearborn, with a B.S. in Industrial & Systems Engineering. He is currently the program manager supporting this research and development effort with the Air Force Research Laboratory, Information Directorate, applying new parallel and distributed simulation technologies to support formal estimation and prediction of complex systems.

Jeffrey Steinman, President and CEO of WarpIV Technologies, Inc., received his Ph.D. in High Energy Physics from UCLA in 1988, where he studied the quark structure function of high-energy virtual photons at the Stanford Linear Accelerator Center. Upon completing his Ph.D. dissertation, Dr. Steinman worked at the Jet Propulsion Laboratory/Caltech in the area of parallel and distributed computing technologies on hypercube supercomputers. In 1990, Dr. Steinman developed the Synchronous Parallel Environment for Emulation and Discrete Event Simulation (SPEEDES) framework as the next-generation replacement for the Time Warp Operating System (TWOS). This work transitioned to industry in 1996 when Dr. Steinman joined the staff at Metron, Inc., where he formed a new High Performance Computing Division. SPEEDES eventually transitioned into several mainstream programs including BMDS SIM, JMASS, JSIMS, EADTB, and many other smaller programs and R&D efforts. In 2000, Dr. Steinman left Metron to join RAM Laboratories as Chief Scientist, where he directed all of their high performance computing R&D programs. Dr. Steinman formed WarpIV Technologies, Inc. in 2005 to focus on the development of new technologies, composable simulation architectures, and open system standards for high performance computing. Dr. Steinman is the principal developer of the WarpIV Kernel.

Maria Valinski, Vice President and Senior Software Engineer at WarpIV Technologies, Inc., graduated in 1997 from East Stroudsburg University with a B.S. in Computer Science. She is currently the Lead Software Engineer for the Joint-Virtual Network Centric Warfare (JVNCW) program, which is developing an open-source, government-owned product for simulating communications links of Battlefield Mobile Ad-Hoc Networks on massively parallel supercomputers using the WarpIV Kernel.

Karen Roth is an Associate Computer Engineer at the Air Force Research Laboratory. She received her B.S. in Software Engineering from Rochester Institute of Technology and is currently finishing a Master of Engineering in Systems Engineering at Cornell University. Her research focuses on the analysis of next-generation simulation capabilities for use in command and control systems and on systems engineering for command and control.

9. REFERENCES

[1] Chen, D., et al., 2004. "Incremental HLA-Based Distributed Simulation Cloning." In Proceedings of the 2004 Winter Simulation Conference.

[2] Gilmour, D., et al., 2005. "Real-Time Course of Action Analysis." In Proceedings of the 10th International Command and Control Research and Technology Symposium.

[3] Hybinette, M., and R. M. Fujimoto, 2001. "Cloning Parallel Simulations." ACM Transactions on Modeling and Computer Simulation, 11: 378-407.

[4] Steinman, J., 2005. "The WarpIV Simulation Kernel." In Proceedings of the 2005 Workshop on Principles of Advanced and Distributed Simulation.

[5] Steinman, J., et al., 2008. "Simulating Parallel Overlapping Universes in the Fifth Dimension with HyperWarpSpeed Implemented in the WarpIV Kernel." In Proceedings of the Spring 2008 Simulation Interoperability Workshop.

[6] Steinman, J., 2008. "The Need for Change in Modeling and Simulation." Article to be published in Chem-Bio Defense Quarterly.