
Slide 1

OneSAF Objective System (OOS) Overview

Marlo Verdesca, Eric Root, Jaeson Munro

SAIC

Marlo.K.Verdesca@saic.com

11/28/2005

Slide 2

Background

OneSAF consists of two separate program efforts:

OneSAF Testbed Baseline (OTB) is:
• An interactive, high-resolution, entity-level simulation that represents combined arms tactical operations up to the battalion level
• Currently supporting over 200 user sites
• Based on the ModSAF baseline; ModSAF was officially retired as of April 2002

OneSAF Objective System (OOS) is:
• A new development
• A composable, next-generation CGF that can represent from the entity to the brigade level
• A new core architecture
• Needed for cost savings through replacement of legacy simulations: BBS, OTB, Janus, CCTT/AVCATT SAF
• Commonly referred to as "OneSAF"

Slide 3

What is One Semi-Automated Forces (OneSAF) Objective System (OOS)?

A composable, next-generation CGF that can represent a full range of operations, systems, and control processes (TTP) from the entity up to the brigade level, with variable levels of fidelity, supporting multiple Army M&S domain (ACR, RDA, TEMO) applications.

Software only

Platform Independent

Field to:
• RDECs / Battle Labs
• National Guard Armories
• Reserve Training Centers
• All Active Duty Brigades and Battalions

Constructive simulation capable of stimulating Virtual and Live simulations to complete the L-V-C triangle

Automated

Composable

Extensible

Interoperable

Slide 4

Target Hardware Platform

Target PC-based computing platform: The PC-based computing platform is envisioned as being one of the standard development and fielding platforms for the OneSAF Objective System. The hardware identified in this list is compatible with current OneSAF Testbed Baseline supported Linux PC based hardware, the WARSIM Windows workstation configuration, and the Army's Common Hardware Platform.

• CPU: 2.8 GHz Xeon / 533 MHz FSB
• Memory: 1 GB DDR at 266 MHz
• Monitor: 21 inch (19.8 in viewable image size)
• Video Card: NVIDIA Quadro4 700XGL / 64 MB RAM
• Hard Drives: two 80 GB Ultra ATA/100, 7200 RPM
• Floppy Disk Drive: 3.5 inch
• NIC: 10/100/1000 Ethernet PCI card
• Network Connection: minimum of 10BaseT Ethernet
• CD-RW: 40x/10x/40x


Slide 6

OneSAF Program Schedule

[Timeline chart, FY98-FY06. OTB track: OneSAF Testbed Baseline – ORD V1.0, ORD V1.1, OTB V1.0, OTB V2.0, MS A. OOS track: STOC Award, Build A, Build B, Block A, Block B, Block C, Block D, MS B/C, OOS V1.0, V2.0, P3I, FOC.]

Slide 7

Line of Sight (LOS) Service in OneSAF

Slide 8

LOS Variants

1. Low-Resolution Sampling (Geometric)

2. High-Resolution Ray-Trace (Geometric)

3. Attenuated LOS
• Atmospheric
• Foliage

4. Ultra-High Resolution Buildings

Slide 9

Low Resolution Sampling & High Resolution Ray-Trace

Sampled LOS as used in WARSIM

Ray-Trace LOS includes strict segment-polygon intersection tests

(Picture courtesy of WARSIM)

WARSIM legacy reuse gives us the sampled LOS model for use by OOS.

OneSAF enhancements include an exact ray-trace LOS model.
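The exact ray-trace model comes down to strict segment-triangle intersection tests. A minimal sketch of such a test follows, using the standard Moller-Trumbore formulation; the function names and the formulation choice are our illustration, not OneSAF's actual code.

```python
# Sketch of the strict segment-triangle intersection behind ray-trace LOS
# (Moller-Trumbore style; illustrative only, not the OneSAF implementation).

def segment_hits_triangle(p, q, v0, v1, v2, eps=1e-9):
    """True if the segment p->q intersects triangle (v0, v1, v2)."""
    sub = lambda a, b: (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    cross = lambda a, b: (a[1]*b[2]-a[2]*b[1],
                          a[2]*b[0]-a[0]*b[2],
                          a[0]*b[1]-a[1]*b[0])
    dot = lambda a, b: a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

    d = sub(q, p)                  # unnormalized direction, so t lies in [0, 1]
    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(d, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:             # segment parallel to the triangle plane
        return False
    inv = 1.0 / det
    tvec = sub(p, v0)
    u = dot(tvec, pvec) * inv      # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return False
    qvec = cross(tvec, e1)
    v = dot(d, qvec) * inv         # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return False
    t = dot(e2, qvec) * inv        # hit parameter along the segment
    return 0.0 <= t <= 1.0

def has_los(p, q, triangles):
    """LOS holds when no triangle blocks the sight line p->q."""
    return not any(segment_hits_triangle(p, q, *tri) for tri in triangles)
```

Leaving the direction vector unnormalized means the hit parameter t falls in [0, 1] exactly when the intersection lies between the two endpoints, which is what distinguishes a segment test from a full ray test.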

Slide 10

LOS Attenuation

Compute path loss through canopy – individual trees or forest area: 35% LOS

Compute path loss through atmosphere: 10% LOS

The sensor model accounts for attenuation path loss, usually by thresholding combined with a random draw against the LOS result.
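The thresholding-plus-random-draw step can be sketched as follows. The 35% (canopy) and 10% (atmosphere) figures come from the slide; combining media multiplicatively and the 0.05 threshold are our own illustrative assumptions, not OneSAF's sensor model.

```python
import random

# Illustrative sketch of how a sensor model might consume an attenuated LOS
# result. Multiplying media together and the 0.05 threshold are assumptions.

def attenuated_los(transmittances):
    """Combine per-medium path-loss fractions into a single LOS fraction."""
    frac = 1.0
    for t in transmittances:      # e.g. 0.35 through canopy, 0.10 through atmosphere
        frac *= t
    return frac

def sensor_detects(los_fraction, threshold=0.05, rng=random.random):
    """Thresholding combined with a random draw against the LOS result."""
    if los_fraction < threshold:  # too little energy gets through: never detect
        return False
    return rng() < los_fraction   # probabilistic detection otherwise

combined = attenuated_los([0.35, 0.10])   # canopy and atmosphere on one path
```

Injecting `rng` makes the draw repeatable in tests; a production sensor model would also fold in sensor performance curves rather than a bare probability.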

Slide 11

LOS inside Ultra High Resolution Buildings (UHRBs)

Slide 17

The UNIVERSITY of NORTH CAROLINA at CHAPEL HILL

Hybrid GPU/CPU Approach

GPU used to conservatively cull LOS queries

• Create a depth map (4k x 4k) from above using an orthographic projection
• Test line segments against the depth buffer with a reversed depth test
• Line segments with no pixels passing the depth test have LOS

CPU:
• Exact tests using ray-casting for non-culled queries (using double-precision arithmetic)
• Culling eliminates most of the worst-case (non-intersecting) queries
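A CPU-only sketch of the conservative cull: a real implementation renders a 4k x 4k depth map on the GPU, but here a coarse max-height grid plays the depth buffer's role, and all the names are our illustrative stand-ins, not OneSAF code.

```python
# Software stand-in for the GPU depth-buffer cull. Each grid cell stores the
# maximum terrain height in that cell, making the test conservative.

def cull_segment(p, q, height, cell, samples=32):
    """Return 'clear' if every sample along p->q is above the stored max
    height (the segment definitely has LOS), else 'exact' (needs CPU test)."""
    for i in range(samples + 1):
        t = i / samples
        x = p[0] + t * (q[0] - p[0])
        y = p[1] + t * (q[1] - p[1])
        z = p[2] + t * (q[2] - p[2])
        h = height[int(y // cell)][int(x // cell)]  # conservative max height
        if z <= h:            # sample could be occluded: cannot cull safely
            return "exact"
    return "clear"
```

Because each cell stores a maximum, a "clear" answer is always safe, while "exact" may be a false positive that the double-precision CPU test then resolves.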

Slide 18

[Pipeline diagram: LOS queries → GPU culling → exact tests on CPU1/CPU2 → results]

Render Terrain (once): the terrain is rendered into the depth buffer.

Slide 19

A set of LOS queries is received


Slide 20

The first batch of queries is culled using the GPU


Slide 21


The non-culled queries are tested using the CPU while the GPU processes batch 2


Slide 22

Conservative GPU Culling

• Render the terrain only once for static scenes
• Use GPU memory for fast rendering of dynamic scenes
• Conservativeness leads to "false positives" requiring more CPU tests

• Increased resolution decreases false positives
• Resolution is limited to 4k x 4k on current GPUs
• We use multiple buffers to increase the effective resolution

Slide 23

Exact Test Details

Based on ray-casting: the ray traverses the spatial grid, testing the triangles in each grid cell it visits.

Parallelizable: we use one thread for each CPU.

[Figure: terrain with uniform grid]
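The grid walk can be sketched in 2-D with an Amanatides-Woo style traversal; a real ray-caster would test the triangles stored in each visited cell, and this simplified version is ours, not the OneSAF implementation.

```python
# 2-D uniform-grid traversal sketch: enumerate the cells a segment crosses,
# in order, so triangle tests can stop at the first hit.

def grid_cells_on_ray(p, q, cell):
    """Yield (ix, iy) cells a segment p->q passes through, in order."""
    ix, iy = int(p[0] // cell), int(p[1] // cell)
    end = (int(q[0] // cell), int(q[1] // cell))
    dx, dy = q[0] - p[0], q[1] - p[1]
    step_x = 1 if dx > 0 else -1
    step_y = 1 if dy > 0 else -1
    # parametric distance along the ray to the next x / y cell boundary
    t_max_x = ((ix + (step_x > 0)) * cell - p[0]) / dx if dx else float("inf")
    t_max_y = ((iy + (step_y > 0)) * cell - p[1]) / dy if dy else float("inf")
    t_dx = abs(cell / dx) if dx else float("inf")
    t_dy = abs(cell / dy) if dy else float("inf")
    cells = [(ix, iy)]
    while (ix, iy) != end:
        if t_max_x < t_max_y:          # x boundary comes first
            ix += step_x
            t_max_x += t_dx
        else:                          # y boundary comes first
            iy += step_y
            t_max_y += t_dy
        cells.append((ix, iy))
    return cells
```

In the full 3-D version each visited cell holds a bucket of triangle indices, and one thread per CPU processes an independent batch of queries, which is what makes the exact test parallelizable.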

Slide 24

LOS Integration into OneSAF

Slide 25

LOS Integration Process

OneSAF/GPU Requirements (SAIC/UNC) → OneSAF Technical Report (SAIC) → GPU Algorithm Creation (UNC) → Execute Unit Test (SAIC/UNC) → OneSAF Scenario Creation (SAIC) → OneSAF Benchmark Results (SAIC) → Integration into OOS (SAIC)

Integration into OOS

(SAIC)

• Add several OpenGL DLLs to the ERC libraries

• Place C++ header files for OpenGL among the ERC code

• Create a new directory among the ERC code

- Set up a new makefile/buildfile to allow the GPU code to build as its own library

• Add calls to ERC initialization to:

- Gather all the triangles in the entire database

- Gather all features in the database

- Pass all triangles and features into the GPU initialization

• Replace all original LOS calls with the GPU counterpart


Slide 26

LOS Past Performance

November 2004

• Baseline: OOS Build 23, Block C
• Database: Joint Readiness Training Center (JRTC); Ft. Polk, LA
• Scenario:
  - 200 total entities: 100 blue tanks vs. 100 red tanks
  - Performing LOS while traveling and shooting
• 100x speedup in LOS query
• 3x simulation speedup


May 2005

• Baseline: OOS Build 24 of Block D
• Database: Joint Readiness Training Center (JRTC); Ft. Polk, LA
• Scenario:
  - 3,000 total entities: 1,500 blue tanks vs. 1,500 red tanks, performing LOS, stationary, not shooting
  - 33 blue RWAs vs. 33 red RWAs, performing LOS while traveling
• 100-200x speedup in LOS query
• 10x simulation speedup


Slide 27

August 2005

• Baseline: OOS Build 24 of Block D
• Database: Ft. Hood, Texas
• Scenario:
  - 5,000 total entities: ~2,500 blue tanks vs. 2,500 red tanks, performing LOS, stationary, not shooting
  - 2 blue RWAs vs. 2 red RWAs, performing LOS while traveling
• 100-200x speedup in LOS query
• 15-20x simulation speedup


LOS Current Performance

Slide 28

LOS Demonstration

Slide 29

Results

• Average time for standard LOS service call: 1-2 milliseconds
• Average time for GPU LOS service call: 12 microseconds
• 100-200x speedup in LOS query
• Overall speedup: 20x simulation improvement

Slide 30

Route Planning in OneSAF

Slide 31

Types of Routes in OneSAF

• Direct Routes

– Follow the route waypoints exactly as entered by the user

• Networked Routes

– Follow a linear feature, such as rivers or roads

• Cross Country Routes

– Utilize a grid of routing cells that form an implicit network for the A* algorithm to search

Slide 32

Route Planning Algorithms in OneSAF

• Apply the A* algorithm to routing networks

• The cost function assigns a weight (cost) for a route segment

– Segment data includes the segment’s coordinates, slope, terrain type, and an indication of features

• The routing algorithm evaluates possible route segments based on their cost; a lower cost indicates a more favorable route.

• While evaluating a route segment, the cost function may call other ERC services, such as line of sight.
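The search described above can be sketched compactly: A* over an implicit grid of routing cells, with a per-segment cost function assigning a weight to each move. The terrain types and costs below are invented for illustration; they are not OneSAF's actual cost model.

```python
import heapq

# A* sketch over routing cells. Lower segment cost = more favorable route;
# impassable cells cost infinity. The cost table is an invented example.

def segment_cost(cell, terrain):
    return {"road": 1.0, "field": 3.0, "water": float("inf")}[terrain[cell]]

def plan_route(start, goal, terrain):
    """terrain maps (x, y) routing cells to a terrain-type string."""
    frontier = [(0.0, start, [start])]
    best = {start: 0.0}                      # cheapest known cost-to-reach
    while frontier:
        _, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dx, cell[1] + dy)
            if nxt not in terrain:
                continue
            g = best[cell] + segment_cost(nxt, terrain)
            if g < best.get(nxt, float("inf")):
                best[nxt] = g
                # Manhattan distance: admissible since the cheapest step is 1.0
                h = abs(goal[0] - nxt[0]) + abs(goal[1] - nxt[1])
                heapq.heappush(frontier, (g + h, nxt, path + [nxt]))
        # (a production cost function could also call other ERC services,
        # such as line of sight, while weighting a segment)
    return None
```

Because the grid graph is implicit, only the segments A* actually asks about are ever costed, matching the lazy evaluation described for cross-country routing.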

Slide 33

Cross Country Routing

Routing Cell

Each page in the runtime database is decomposed into routing cells. These routing cells form an implicit graph that can be searched using A*. Only segments requested by A* are computed at runtime.

[Figure: routing cells with a river and a lake. Segment a' has a river and a lake associated with it; segment b' has no features associated with it.]

Slide 34

Cross Country Routing

[Figure: generated cross-country route from the start point to the end point, avoiding features such as the lake and the river (features include lakes, rivers, trees, buildings, etc.)]

Slide 35

Motivation

In OneSAF, the route-planning bottleneck is feature analysis (intersection tests)

Accounts for over 50% of computation time

Feature analysis is the best (obvious) candidate as the starting point for performance improvement

The route-planning algorithm in OneSAF can use parallel feature-analysis computations

Slide 36

GPU-based Route Planning

[Pipeline diagram: terrain features and route segments → render features (once) into a static feature buffer → cull segment set against feature set (GPU) → cull feature set against reduced segment set (GPU) → cull feature set against single segments (GPU) → exact feature/segment tests (CPU) → results]

Slide 37

3 phases of GPU-based culling

GPU-based culling proceeds in 3 phases:
1. Cull segments against the full feature set
2. Cull features against the reduced set of segments
3. Cull the reduced feature set against individual segments
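A CPU sketch of the three phases: axis-aligned bounding-box overlap stands in for the GPU rasterization, and the structure follows the list above, but the code itself is our illustration, not UNC's implementation.

```python
# Three-phase cull sketch. Boxes are (xmin, ymin, xmax, ymax) tuples; AABB
# overlap plays the role of the GPU occlusion-query overlap test.

def overlaps(a, b):
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def three_phase_cull(segments, features):
    # Phase 1: cull segments against the full feature set
    segs = [s for s in segments if any(overlaps(s, f) for f in features)]
    # Phase 2: cull features against the reduced set of segments
    feats = [f for f in features if any(overlaps(s, f) for s in segs)]
    # Phase 3: cull the reduced feature set against individual segments,
    # producing the feature/segment pairs the exact CPU tests must resolve
    return {s: [f for f in feats if overlaps(s, f)] for s in segs}
```

Each phase shrinks the input to the next, so the expensive exact feature/segment tests on the CPU only ever see candidate pairs that survived all three culls.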

Slide 38

Route Planning Integration into OneSAF

Slide 39

Route Planning Integration Process

OneSAF/GPU Requirements (SAIC/UNC) → OneSAF Technical Report (SAIC) → GPU Algorithm Creation (UNC) → Execute Unit Test (SAIC/UNC) → OneSAF Scenario Creation (SAIC) → OneSAF Benchmark Results (SAIC) → Integration into OOS (SAIC)

Integration into OOS (SAIC)

• Add several OpenGL DLLs to the ERC libraries

• Place C++ header files for OpenGL among the ERC code

• Create a new directory among the ERC code

- Set up a new makefile/buildfile to allow the GPU code to build as its own library

• Add calls to ERC initialization to:

- Gather all features in the database

- Convert our feature class into the UNC feature class

• Modify our A* algorithm to cost multiple segments at once (batching process)

• Replace all original route planning calls with the GPU counterpart


Slide 40

Route Planning Past Performance

May 2005

• Baseline: OOS Build 24, Block D
• Database: Joint Readiness Training Center (JRTC); Ft. Polk, LA; ~15K features
• Scenario: four tank platoons
  - Cross-country traveling
  - Routes 25-27 km, 40 total segments per route
  - Full physical models, behavior models, and sensing models
• 10x speedup in feature analysis computations
• 2x simulation speedup


Slide 41

Route Planning Current Performance

August 2005

• Baseline: OOS Build 24, Block D
• Database: Ft. Hood, Texas; ~50K features
• Scenario: one tank platoon, ~50 Individual Combatants (ICs)
  - Tactically traveling through a dense urban environment
  - Routes 5-8 km, 11 total segments per route
  - Full physical models, behavior models, and sensing models
• 30-50x speedup in feature analysis computation
• 10x simulation speedup


Slide 42

Route Planning Demonstration

Slide 43

Route Planning Results

• Average time for standard feature intersection checking: 68,000 milliseconds
• Average time for GPU feature intersection checking: 2,200 milliseconds
• Average overall time for standard route planning service call: 45 seconds
• Average overall time for GPU route planning service call: 4.5 seconds
• Improvements: 30-50x speedup in feature intersection checking; 10x overall simulation speedup
• Greater improvement with a more complex urban terrain database or an increased entity count

Slide 44

Collision Detection in OneSAF

Slide 45

Basic Architecture

• Basic architecture of collision in OneSAF – two types of collision:
  - Entity collides with entity
  - Entity collides with environment feature

• Collision detection is performed for each entity

• Medium and High resolution entities perform collision detection once per tick (15 Hz)

• Requires that the footprint of the entity be checked for intersections against features with linear, circular, and polygonal geometry.

• OneSAF Environment provides a service (get_features_in_area) to assist in detection of collisions with environmental objects
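The footprint check can be sketched for a circular footprint against one polygonal feature. Modeling the footprint as a circle and all helper names here are our illustration, not the actual OneSAF service API (only `get_features_in_area` is named on the slide, and we do not reproduce its signature).

```python
import math

# Sketch of an entity-footprint vs. polygonal-feature intersection test:
# the circle collides if its center is inside the polygon or within the
# footprint radius of any polygon edge. Illustrative only.

def point_segment_dist(p, a, b):
    """Distance from point p to the segment a-b."""
    ax, ay = b[0] - a[0], b[1] - a[1]
    t = ((p[0]-a[0])*ax + (p[1]-a[1])*ay) / (ax*ax + ay*ay or 1.0)
    t = max(0.0, min(1.0, t))                 # clamp to the segment
    cx, cy = a[0] + t*ax, a[1] + t*ay
    return math.hypot(p[0]-cx, p[1]-cy)

def point_in_polygon(p, poly):
    """Even-odd ray-crossing test."""
    inside = False
    for (ax, ay), (bx, by) in zip(poly, poly[1:] + poly[:1]):
        if (ay > p[1]) != (by > p[1]) and \
           p[0] < ax + (bx-ax) * (p[1]-ay) / (by-ay):
            inside = not inside
    return inside

def footprint_collides(center, radius, poly):
    """Circle/polygon intersection: center inside, or within radius of an edge."""
    if point_in_polygon(center, poly):
        return True
    edges = zip(poly, poly[1:] + poly[:1])
    return any(point_segment_dist(center, a, b) <= radius for a, b in edges)
```

A per-tick collision pass would first ask the environment service for candidate features near the entity, then run a test like this against each candidate's geometry (linear and circular features need analogous distance tests).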

Slide 46

Current Status of Collision Detection

• OneSAF continues to finalize collision detection capabilities for version 1.0
  - The OneSAF Build 24, Block D baseline contained minimal collision detection capabilities

• The SAIC/GPU team incorporated a basic collision detection algorithm into our GPU Build 24, Block D baseline:
  - The incorporated algorithm performs only the exact entity/feature tests (similar to UNC)
  - The timing results collected are estimates of the true functionality

• Once collision detection is finalized and integrated into OneSAF:
  - The collision detection algorithm will be better defined
  - Timing comparisons of GPU-based to non-GPU-based algorithms will be more accurate

Slide 47

Proximity & Collision Queries

Geometric reasoning of spatial relationships among objects (in a dynamic environment)

[Figure: closest points and separation distance d; penetration depth d; collision detection contact points and normals]

Slide 48

Applications

Rapid Prototyping – tolerance verification

Dynamic Simulation – contact force calculation

Computer Animation – motion control

Motion Planning – distance computation

Virtual Environments – interactive manipulation

Haptic Rendering – restoring force computation

Simulation-Based Design – interference detection

Engineering Analysis – testing & validation

Medical Training – contact analysis and handling

Education – simulating physics & mechanics

Slide 49

Motivation

Collision and proximity queries can take a significant amount of time (up to 90%) in many applications:

dynamic simulation – automobile safety testing, cloth folding, engineering prototyping, avatar interaction, etc.

path planning – routing, navigation in VE, strategic planning, etc.

Slide 51

Algorithm

Object-level pruning → subobject-level pruning → exact tests

The first two stages are GPU-based PCS (potentially colliding set) computation; the exact tests use the CPU.

Slide 52

Real-Time Collision Detection for Avatar Motion

Slide 53

Self-Collision Detection for Deformable Models:

• First interactive self-collision detection algorithm
• 1.5 orders of magnitude improvement over the state of the art

Slide 54

Real-time Surgical Simulation

Path planning for a deformable catheter

1 order of magnitude performance improvement

Slide 55

3 Stage Algorithm

Stage 1: Entities are removed that can be culled conservatively on the GPU

[Figure: input entities 1-5; entities 4 and 5 survive the cull]

Slide 56

3 Stage Algorithm

Stage 1: Entities are removed that can be culled conservatively on the GPU

Stage 2: Features are checked against entities and conservatively culled on the GPU

[Figure: features are checked against the surviving entities 4 and 5]

Slide 57


3 Stage Algorithm

Stage 1: Entities are removed that can be culled conservatively on the GPU

Stage 2: Features are checked against entities and conservatively culled on the GPU

Stage 3: Remaining features are checked per entity on the CPU


Entity 5 is the only colliding entity

Slide 58

3 Stages of Collision Detection

Each stage reduces the complexity of the next phase

If features are static, they are rasterized only once for all entity-based occlusion queries

Dynamic terrains can be easily supported. Dynamic features are easily supported by the algorithm; this requires one extra rasterization of the feature set.

Slide 59

3 Stages of Collision Detection

Terrain features are first rendered into a depth buffer at startup. At runtime, entities are checked against the feature depth buffer, features are checked against the entities, and the CPU performs the final exact test.

[Pipeline diagram: terrain features and entities → render features (once) into a static feature buffer → cull entity set against feature set (GPU) → cull feature set against single entity (GPU) → exact entity/feature tests (CPU) → results]

Slide 60

Occlusion Queries for Overlap Tests

Standard feature of current GPUs

When two objects are rendered/drawn, the queries check for overlap

We draw features and entities with occlusion queries to find the features and entities that overlap. Drawing a feature or an entity takes only a few microseconds.
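What an occlusion query reports can be sketched in software: we rasterize into a small pixel grid and count covered samples, whereas a real GPU returns the passed-pixel count in hardware. All the names and the grid parameters below are our illustrative stand-ins.

```python
# Software stand-in for an occlusion-query overlap test. A "query" reports
# how many of an object's pixels pass against what is already drawn; zero
# passed pixels means no overlap, so the pair can be culled.

def rasterize(box, res=16, extent=16.0):
    """Pixels an axis-aligned box (xmin, ymin, xmax, ymax) covers."""
    scale = res / extent
    return {(x, y)
            for x in range(int(box[0] * scale), int(box[2] * scale) + 1)
            for y in range(int(box[1] * scale), int(box[3] * scale) + 1)}

def occlusion_query(entity_box, feature_pixels):
    """Pixel count the query would report; 0 means culled (no overlap)."""
    return len(rasterize(entity_box) & feature_pixels)

feature_pixels = rasterize((2.0, 2.0, 6.0, 6.0))   # feature drawn once
```

Because the feature set is drawn only once and each per-entity query costs microseconds, overlap testing scales with the number of entities rather than the number of entity/feature pairs.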

Slide 61

Collision Detection Integration into OneSAF

Slide 62

Collision Detection Integration Process

OneSAF/GPU Requirements (SAIC/UNC) → OneSAF Technical Report (SAIC) → GPU Algorithm Creation (UNC) → Execute Unit Test (SAIC/UNC) → OneSAF Scenario Creation (SAIC) → OneSAF Benchmark Results (SAIC) → Integration into OOS (SAIC)

Integration into OOS (SAIC)

• Add several OpenGL DLLs to the ERC libraries

• Place C++ header files for OpenGL among the ERC code

• Create a new directory among the ERC code

- Set up a new makefile/buildfile to allow the GPU code to build as its own library

• Add calls to ERC initialization to:

- Gather all features in a given area

- Convert our feature class into the UNC feature class

• Replace all original collision detection calls with the GPU counterpart


Slide 63

Setting the Stage

• OneSAF Configuration
  - OneSAF Plan View Display (PVD) with a GPU toggle switch (route calculation time)
  - OneSAF SimCore

• OneSAF Terrain Database:
  - Ft. Hood, Texas
  - Future database: South West Asia (SWA)

• Scenario Overview:
  - Multiple tank platoons and ICs performing LOS, route planning, and collision detection through a complex urban environment
  - Full physical models, behavior models, and sensing models

Slide 64

Collision Detection Demonstration

Slide 65

Collision Detection Results

• Average time for Standard Collision Detection query: 20 milliseconds

• Average time for GPU Collision Detection query:2.1 milliseconds

• Improvements: 10x speedup in collision detection query5x simulation speedup

• Greater improvement if ….– Use of a complex urban terrain database– Increase entity count

Slide 66

GPU Performance Improvement Schedule

[Timeline, 2004-2005:
• Nov. 2004 DARPA Review (Block C) – LOS: 100x LOS query, 3x sim. speedup
• May 2005 DARPA Review (Block D) – LOS: 100-200x LOS query, 10x sim. speedup; Route Planning: 10x feature int., 2x sim. speedup
• Aug. 2005 DARPA Tech, TODAY (Block D) – LOS: 100-200x LOS query, 20x sim. speedup; Route Planning: 30-50x feature int., 10x sim. speedup; Collision Detection: 10x collision query, 5x sim. speedup]
