
Next Generation 4-D Distributed Modeling and Visualization of Battlefield

Avideh Zakhor, UC Berkeley

September 2004

Participants

Avideh Zakhor (UC Berkeley)
Bill Ribarsky (Georgia Tech)
Ulrich Neumann (USC)
Pramod Varshney (Syracuse)
Suresh Lodha (UC Santa Cruz)

Battlefield Visualization

A detailed, timely, and accurate picture of the modern battlefield is vital to the military. Many sources of information are used to build this “picture”:

Archival data, road maps, GIS, and databases: static
Sensor information from mobile agents at different times and locations
The scene itself is time varying, with moving objects
Multiple modalities: fusion

How do we make sense of all of this without information overload?

Visualization Pentagon

[Pentagon diagram linking five research areas:]
Decision Making under Uncertainty
Uncertainty Processing / Visualization
4D Modeling / Update
Visualization and Rendering
Tracking / Registration

Research Agenda for 2003-2004

Modeling

Visualization and rendering:
  Mobile situational visualization
  Augmented virtual environments

Add the temporal dimension (4D):
  Tracking of moving objects in scenes
  Modeling of time-varying objects and scenes
  Dynamic event analysis and recognition

Path planning under uncertainty

Acquisition setup for dynamic scene modeling

[Diagram of the acquisition rig. Components:]
Rotating mirror
IR line laser
Digital camcorder with IR filter
Halogen lamp with IR filter
Visible-light (VIS) camera
PC
Sync electronics
Reference object for the horizontal line
Roast with vertical slices

Captured IR Frames

Horizontal line scans from top to bottom at about 1 Hz

Video intensity and IR captured synchronously

IR video stream: frame rate 30 Hz (NTSC)
VIS video stream: frame rate 10 Hz, synchronized with the IR video stream
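As a small illustration of what “captured synchronously” means here, the hedged sketch below pairs each 10 Hz visible-light frame with the nearest 30 Hz IR frame by timestamp; the timestamps and tolerance are made-up values, not the actual hardware synchronization.

```python
import numpy as np

# Hypothetical capture timestamps in seconds.
ir_times = np.arange(0.0, 2.0, 1 / 30.0)    # IR stream at 30 Hz (NTSC)
vis_times = np.arange(0.0, 2.0, 1 / 10.0)   # visible-light stream at 10 Hz

def pair_frames(vis_times, ir_times, max_offset=0.02):
    """For each VIS frame, return the index of the nearest IR frame,
    or -1 if no IR frame lies within `max_offset` seconds."""
    idx = np.searchsorted(ir_times, vis_times)
    idx = np.clip(idx, 1, len(ir_times) - 1)
    # Choose between the IR frame just before and just after each VIS timestamp.
    before, after = ir_times[idx - 1], ir_times[idx]
    nearest = np.where(vis_times - before <= after - vis_times, idx - 1, idx)
    offsets = np.abs(ir_times[nearest] - vis_times)
    return np.where(offsets <= max_offset, nearest, -1)

pairs = pair_frames(vis_times, ir_times)   # VIS frame k corresponds to IR frame pairs[k]
```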

Processing steps

• Compute depth at the horizontal line
• Track computed depth values along vertical lines
• Intraframe and interframe tracking
• Dense depth estimation
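To make the first step concrete, here is a minimal, hypothetical sketch of computing depth at the laser line by triangulation; the calibration constants and the exact geometry are illustrative assumptions, not the system's actual calibration.

```python
import numpy as np

# Hypothetical calibration constants (not from the slides).
FOCAL_PX = 800.0      # camera focal length in pixels
BASELINE_M = 0.30     # vertical laser-to-camera baseline in meters

def laser_line_rows(ir_frame, min_intensity=40):
    """For each image column, find the row of the brightest IR pixel
    (assumed to be the projected laser line); NaN where the line is not seen."""
    rows = np.argmax(ir_frame, axis=0).astype(float)
    rows[ir_frame.max(axis=0) < min_intensity] = np.nan
    return rows

def triangulate_depth(rows, laser_angle_rad, cy):
    """Convert detected line rows plus the current mirror/laser-plane angle
    into per-column depth by simple triangulation (illustrative geometry)."""
    cam_angle = np.arctan2(rows - cy, FOCAL_PX)   # ray elevation per column
    denom = np.tan(laser_angle_rad) + np.tan(cam_angle)
    depth = np.full_like(rows, np.nan)
    valid = np.isfinite(denom) & (np.abs(denom) > 1e-6)
    depth[valid] = BASELINE_M / denom[valid]
    return depth

# One depth profile per IR frame; profiles are then tracked along vertical
# image lines (intra- and interframe) to build a dense depth map over time.
ir_frame = np.random.randint(0, 256, (480, 640)).astype(np.uint8)  # stand-in frame
profile = triangulate_depth(laser_line_rows(ir_frame), np.deg2rad(35.0), cy=240.0)
```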

Results

[Results: depth video shown alongside color video]

Video analysis

− Segmenting and tracking moving objects (people, vehicles) in the scene

− Determines regions of interest/change and allows for dynamic and rapid modeling (see the sketch below)
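As a rough, generic illustration of this step (not the project's actual tracker), the sketch below flags moving regions by differencing each frame against a running-average background and labeling connected components; the thresholds and background model are assumptions.

```python
import numpy as np
from scipy import ndimage

def update_background(background, frame, alpha=0.05):
    """Exponential running average as a simple background model."""
    return (1.0 - alpha) * background + alpha * frame

def moving_regions(frame, background, diff_thresh=25, min_area=200):
    """Return bounding boxes of connected regions that differ from the background."""
    mask = np.abs(frame.astype(float) - background) > diff_thresh
    labels, n = ndimage.label(mask)
    boxes = []
    for sl in ndimage.find_objects(labels):
        if sl is None:
            continue
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        if h * w >= min_area:
            boxes.append((sl[1].start, sl[0].start, w, h))  # (x, y, width, height)
    return boxes

# Usage on a synthetic grayscale sequence.
background = np.zeros((240, 320))
for frame in np.random.randint(0, 30, (10, 240, 320)).astype(float):
    boxes = moving_regions(frame, background)
    background = update_background(background, frame)
```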

Dynamic Event Analysis

Video Scene Analysis: Activity Classification with Uncertainty

Example activities: sitting, bending and standing

The blue pointer indicates the level of certainty in the classifier decision
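A minimal sketch of reporting a certainty level alongside the activity decision, assuming a generic softmax classifier over some per-frame feature vector; the features and weights below are placeholders rather than the classifier used in the slides.

```python
import numpy as np

ACTIVITIES = ["sitting", "bending", "standing"]

def softmax(scores):
    e = np.exp(scores - scores.max())
    return e / e.sum()

def classify_activity(features, weights, bias):
    """Return (activity label, certainty) for one feature vector.
    Certainty is the posterior probability of the chosen class."""
    posterior = softmax(weights @ features + bias)
    k = int(np.argmax(posterior))
    return ACTIVITIES[k], float(posterior[k])

# Usage with made-up parameters.
rng = np.random.default_rng(0)
weights, bias = rng.normal(size=(3, 8)), rng.normal(size=3)
label, certainty = classify_activity(rng.normal(size=8), weights, bias)
print(f"{label} (certainty {certainty:.2f})")  # low certainty -> flag for the analyst
```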


Audio-Enhanced Visual Processing with Uncertainty

[Block diagram with stages: video acquisition, sound acquisition, video processing and classification, audio processing and classification, fusion with uncertainty, description generation, and visualization. A minimal fusion sketch follows.]
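One hedged way to read the fusion-with-uncertainty block: combine the class posteriors from the video and audio classifiers with a weighted log-linear rule and attach the entropy of the fused posterior as an uncertainty measure. The class set, weights, and fusion rule below are assumptions for illustration, not the slides' actual method.

```python
import numpy as np

CLASSES = ["person walking", "vehicle moving", "no activity"]  # placeholder classes

def fuse_posteriors(p_video, p_audio, w_video=0.6, w_audio=0.4):
    """Log-linear (weighted product) fusion of two posteriors over the same classes."""
    fused = (p_video ** w_video) * (p_audio ** w_audio)
    return fused / fused.sum()

def entropy(p):
    """Shannon entropy in bits; higher means a less certain fused decision."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Usage with made-up classifier outputs.
p_video = np.array([0.70, 0.20, 0.10])
p_audio = np.array([0.50, 0.40, 0.10])
p = fuse_posteriors(p_video, p_audio)
print(CLASSES[int(np.argmax(p))], "uncertainty:", round(entropy(p), 2))
```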

VE: captures only a snapshot of the real world, and therefore lacks any representation of the dynamic events and activities occurring in the scene

AVE Approach: uses sensor models and 3D models of the scene to integrate dynamic video/image data from different sources

AVE: Fusion of 2D Video & 3D Model

Visualize all data in a single context to maximize collaboration and comprehension of the big picture

Address dynamic visualization and change detection
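To make the 2D-video-onto-3D-model fusion concrete, here is a minimal sketch (not USC's AVE implementation) that projects scene-model vertices through an assumed pinhole camera to get texture coordinates, so the current video frame can be draped over the geometry it observes; occlusion handling is omitted.

```python
import numpy as np

def project_vertices(vertices, K, R, t, image_size):
    """Project Nx3 world-space vertices into a video frame.
    K: 3x3 intrinsics, R: 3x3 rotation, t: length-3 translation (world -> camera).
    Returns normalized (u, v) texture coordinates and a visibility mask
    (in front of the camera and inside the image); occlusion is ignored."""
    cam = vertices @ R.T + t              # world -> camera coordinates
    in_front = cam[:, 2] > 1e-6
    pix = cam @ K.T
    pix = pix[:, :2] / np.where(in_front[:, None], cam[:, 2:3], 1.0)
    w, h = image_size
    inside = in_front & (pix[:, 0] >= 0) & (pix[:, 0] < w) & (pix[:, 1] >= 0) & (pix[:, 1] < h)
    uv = pix / np.array([w, h])           # normalized texture coordinates
    return uv, inside

# Usage with a made-up camera looking down the +Z axis.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.zeros(3)
verts = np.array([[0.0, 0.0, 5.0], [1.0, 0.5, 6.0], [-2.0, 1.0, 4.0]])
uv, visible = project_vertices(verts, K, R, t, image_size=(640, 480))
```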

Mobile Situational Visualization System

[Interface screenshot: drawing area, buttons, pen tool; mobile team in the field]

Collaboration Example


Shared observations of vehicle location, direction, speed

Optimal route planning for battlefield risk minimization

[Map with source and goal positions; terrain shaded by risk level: high risk, moderate risk, low risk, risk free. A minimal planning sketch follows.]
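A minimal sketch of risk-minimizing route planning, assuming each map cell carries a traversal cost derived from its risk class (the specific costs and the 4-connected grid are illustrative assumptions); Dijkstra's algorithm then returns the path with the lowest accumulated risk from source to goal.

```python
import heapq

# Illustrative per-cell costs for the four risk classes.
RISK_COST = {"risk_free": 1, "low": 3, "moderate": 8, "high": 25}

def min_risk_path(grid, source, goal):
    """Dijkstra over a 2D grid of risk labels; returns the lowest-cost path."""
    rows, cols = len(grid), len(grid[0])
    dist = {source: 0}
    prev = {}
    heap = [(0, source)]
    while heap:
        d, cell = heapq.heappop(heap)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + RISK_COST[grid[nr][nc]]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Reconstruct the path from goal back to source.
    path, cell = [goal], goal
    while cell != source:
        cell = prev[cell]
        path.append(cell)
    return path[::-1]

# Usage on a tiny hand-made risk map.
grid = [["risk_free", "high", "low"],
        ["low",       "high", "low"],
        ["risk_free", "low",  "risk_free"]]
print(min_risk_path(grid, source=(0, 0), goal=(2, 2)))
```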

LiDAR Data Classification

[Classification results compared for three feature sets: height and height variation; LiDAR data only (no aerial image); all five features. A minimal feature/classifier sketch follows.]
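In the spirit of the supervised parametric classification paper listed under Publications (but not its actual features or model), the hedged sketch below grids an aerial LiDAR point cloud, computes per-cell height and height-variation features, fits one diagonal Gaussian per class from a few labeled cells, and labels the remaining cells by maximum likelihood.

```python
import numpy as np

def grid_features(points, cell=1.0):
    """points: Nx3 array of (x, y, z). Returns {grid cell -> [mean height,
    height variation (std of z)]} computed over the points in each cell."""
    keys = np.floor(points[:, :2] / cell).astype(int)
    feats = {}
    for key in map(tuple, np.unique(keys, axis=0)):
        z = points[(keys == key).all(axis=1), 2]
        feats[key] = np.array([z.mean(), z.std()])
    return feats

def fit_gaussians(feats, labels):
    """Fit a diagonal Gaussian per class from the labeled cells."""
    params = {}
    for cls in set(labels.values()):
        X = np.array([feats[c] for c, l in labels.items() if l == cls])
        params[cls] = (X.mean(axis=0), X.var(axis=0) + 1e-6)
    return params

def classify(feat, params):
    """Maximum-likelihood class under the per-class diagonal Gaussians."""
    def loglik(mean, var):
        return float(-0.5 * (((feat - mean) ** 2) / var + np.log(2 * np.pi * var)).sum())
    return max(params, key=lambda cls: loglik(*params[cls]))

# Usage on a tiny synthetic cloud: flat ground plus a raised "building" corner.
rng = np.random.default_rng(1)
pts = rng.random((500, 3)) * [10, 10, 2]
pts[:50, :2] = rng.random((50, 2)) * 2      # confine building returns to one corner
pts[:50, 2] += 8
feats = grid_features(pts)
labeled = {c: ("building" if f[0] > 4 else "ground")
           for c, f in list(feats.items())[:20]}   # pretend these were hand-labeled
params = fit_gaussians(feats, labeled)
predictions = {c: classify(f, params) for c, f in feats.items()}
```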

Adaptive Stereo/LiDAR-based registration for modeling outdoor scenes

[Aerial view with overlaid acquisition paths: stereo-based registration vs. LiDAR-based registration]

• The LiDAR-based approach seems better at turns.

• The stereo-based approach captures terrain undulations.

Punctuated Model Simplification
• Our initial implementation considers planar loops.
• The mesh containing the loops is a topological 2-manifold.

[Example on a simple object: detected loops, the “inside/outside” binary tree, and the simplification path. A planarity-test sketch follows.]
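The slides don't spell out how planar loops are detected, so the sketch below only illustrates one plausible ingredient: a least-squares planarity test that decides whether a candidate loop of mesh vertices lies (within a tolerance) on a single plane. The tolerance and data layout are assumptions.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through Nx3 points: returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def is_planar_loop(loop_vertices, tol=0.05):
    """True if every vertex of the closed loop lies within `tol` of its best-fit plane."""
    pts = np.asarray(loop_vertices, dtype=float)
    centroid, normal = fit_plane(pts)
    distances = np.abs((pts - centroid) @ normal)
    return bool(distances.max() <= tol)

# Usage: a rectangular window frame (planar) vs. a warped loop (not planar).
window = [(0, 0, 0), (2, 0, 0), (2, 1, 0), (0, 1, 0)]
warped = [(0, 0, 0), (2, 0, 0.4), (2, 1, 0), (0, 1, 0.4)]
print(is_planar_loop(window), is_planar_loop(warped))  # True False
```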

Interactions on AVE

Collaboration with Northrop Grumman
- Install v.1 AVE system (8/03) for demonstrations
- Install v.2 AVE system (9/04) for demonstrations and evaluation license

Tech transfer
- Source code for LiDAR modeling to Army TEC labs
- Integration into ICT training applications for MOUT after-action review

Demos/proposals/talks
- NIMA, NRO, ICT, Northrop Grumman, Lockheed Martin, HRL/DARPA, Olympus, Airborne1, Boeing

Transitions for 3D modeling
• Carried out a two-day modeling of Potomac Yard Mall in Washington, DC in December 2003 for the Army Night Vision Lab and GSTI
  - Shipped equipment ahead of time
  - Spent one day driving around acquiring data
  - Spent half a day processing the data
  - Delivered the model to Jeff Turner of GSTI / Army Night Vision Lab
• Carried out another two-day modeling of Ft. McKenna in Georgia in December 2003 in collaboration with Jeff Dehart of ARL
  - Drove the equipment from DC to Georgia in a van
  - Collected data in one day, processed it in a few days
  - Delivered the 3D model to Larry Tokarcik’s group
• In discussions with Harris to transition the 3D modeling architecture/software/hardware
• Invited talk at the registration workshop at CVPR

Technology Transfer on Sitvis

•We are continuing work centered around the mobile augmented battlefield visualization testbed with both the Georgia Tech and UNC Charlotte homeland security initiatives.

•Dr. Ribarsky is on the panel to develop the research agenda for the new National Visual Analytics Center, sponsored by DHS. Mobile situational visualization will be part of this agenda.

•The system is being used as part of the Sarnoff Raptor system, which is deployed to the Army and other military entities. In addition, our visualization system is being used as part of the Raptor system at Scott Air Force Base.

Publications (1)

C. Frueh and A. Zakhor, "An Automated Method for Large-Scale, Ground-Based City Model Acquisition" in International Journal of Computer Vision, Vol. 60, No. 1, October 2004, pp. 5 - 24.

C. Frueh and A. Zakhor, "Constructing 3D City Models by Merging Ground-Based and Airborne Views" in Computer Graphics and Applications, November/December 2003, pp. 52 - 61.

C. Frueh and A. Zakhor, "Reconstructing 3D City Models by Merging Ground-Based and Airborne Views", Proceedings of the VLBV, September 2003, pp. 306 - 313 Madrid, Spain

C. Frueh, R. Sammon, and A. Zakhor, "Automated Texture Mapping of 3D City Models With Oblique Aerial Imagery" in 2nd International Symposium on 3D Data Processing, Visualization, and Transmission, 2004.

U. Neumann, “Approaches to Large-Scale Urban Modeling” in IEEE Computer Graphics and Applications.

U. Neumann, “Visualizing Reality in an Augmented Virtual Environment”, accepted in Presence.

U. Neumann, “Augmented Virtual Environments for Visualization of Dynamic Imagery”, accepted in IEEE Computer Graphics and Applications.

Publications (2)

U. Neumann, “Urban Site Modeling from LiDAR”, CGGM’03

U. Neumann, “Augmented Virtual Environments (AVE): Dynamic Fusion of Imagery and 3D models”, VR 2003

U. Neumann, “3D Video Surveillance with Augmented Virtual Environments”, accepted in SIGMM 2003.

Sanjit Jhala and Suresh K. Lodha, ``Stereo and Lidar-Based Pose Estimation with Uncertainty for 3D Reconstruction'', To appear in the Proceedings of Vision, Visualization, and Modeling Conference, Stanford, Palo Alto, CA November 2004.

Hemantha Singamsetty and Suresh K. Lodha, ``An Integrated Geospatial Data Acquisition System for Reconstructing 3D Environments'', To appear in the Proceedings of the IASTED Conference on Advances in Computer Science and Technology (ACST), St. Thomas, Virgin Islands, USA, November 2004.

Publications (3)

Amin Charaniya, Roberto Manduchi, and Suresh K. Lodha, ``Supervised Parametric Classification of Aerial LiDAR Data", Proceedings of the IEEE workshop on Real-Time 3D Sensors and Their Use, Washington DC, June 2004.

Sanjit Jhala and Suresh K. Lodha, ``On-line Learning of Motion Patterns using an Expert Learning Framework", Proceedings of the IEEE Workshop on Learning in Computer Vision and Pattern Recognition, Washington DC, June 2004.

Srikumar Ramalingam, Suresh K. Lodha, and Peter Sturm, ``A Generic Structure-from-Motion Algorithm for Cross-Camera Scenarios'', Proceedings of the OmniVis (Omnidirectional Vision, Camera Networks, and Non-Classical Cameras) Conference, Prague, Czech Republic, May 2004.

Srikumar Ramalingam and Suresh K. Lodha, ``Adaptive Enhancement of 3D Scenes using Hierarchical Registration of Texture-Mapped Models", Proceedings of 3DIM Conference, IEEE Computer Society Press, Banff, Alberta, Canada, October 2003, pp. 203-210.

Publications (4)

Suresh K. Lodha, Nikolai M. Faaland, and Jose Renteria,``Hierarchical Topology Preserving Compression of 2D Vector Fields using Bintree and Triangular Quadtrees'', IEEE Transactions on Visualization and Computer Graphics, Vol. 9, No. 4, October 2003, pages 433--442.

Suresh K. Lodha, Krishna M. Roskin, and Jose C. Renteria, ``Hierarchical Topology Preserving Simplification of Terrains", Visual Computer, Vol. 19, No. 6, September 2003.

Suresh K. Lodha, Nikolai M. Faaland, Grant Wong, Amin P. Charaniya, Srikumar Ramalingam, Arthur Keller, ``Consistent Visualization and Querying of Spatial Databases by a Location-Aware Mobile Agent'', Proceedings of Computer Graphics International (CGI), pp. 248-253, IEEE Computer Society Press, Tokyo, Japan, July 2003.

Christopher Campbell, Michael M. Shafae, Suresh K. Lodha and Dominic W. Massaro, ``Discriminating Visible Speech Tokens using Multi-Modality'', Proceedings of the International Conference on Auditory Display (ICAD), pp. 13-16, Boston, MA, July 2003.

Publications (5)

Amin Charaniya and Suresh K. Lodha, ``Speech Interface for Geo-Spatial Visualization'', Proceedings for the Conference on Computer Science and Technology (CST), Cancun, Mexico, May 2003.

William Ribarsky, editor (with Holly Rushmeier). 3D Reconstruction and Visualization of Large Scale Environments. Special Issue of IEEE Computer Graphics & Applications (December, 2003).

Justin Jang, Peter Wonka, William Ribarsky, and C.D. Shaw. Punctuated Simplification of Man-Made Objects. Submitted to The Visual Computer.

Tazama St. Julien, Joseph Scoccinaro, Jonathan Gdalevich, and William Ribarsky. Sharing of Precise 4D Annotations in Collaborative Mobile Situational Visualization. To be submitted, IEEE Symposium on Wearable Computing.

Ernst Houtgast, Onno Pfeiffer, Zachary Wartell, William Ribarsky, and Frits Post. Navigation and Interaction in a Multi-Scale Stereoscopic Environment. Submitted to IEEE Virtual Reality 2004.

Publications (6)

G.L. Foresti, C.S. Regazzoni and P.K. Varshney (Eds.), Multisensor Surveillance Systems: The Fusion Perspective, Kluwer Academic Press, 2003.

R. Niu, P. Varshney, K. Mehrotra and C. Mohan, ``Sensor Staggering in Multi-Sensor Target Tracking Systems'', Proceedings of the 2003 IEEE Radar Conference, Huntsville AL, May 2003.

L. Snidaro, R. Niu, P. Varshney, and G.L. Foresti, ``Automatic Camera Selection and Fusion for Outdoor Surveillance under Changing Weather Conditions'', Proceedings of the 2003 IEEE International Conference on Advanced Video and Signal Based Surveillance, Miami FL, July 2003.

H. Chen, P. K. Varshney, and M.A. Slamani, "On Registration of Regions of Interest (ROI) in Video Sequences", Proceedings of IEEE International Conference on Advanced Video and Signal Based Surveillance, CD-ROM, Miami, FL, July 21-22, 2003.

R. Niu and P.K. Varshney, “Target Location Estimation in Wireless Sensor Networks Using Binary Data,” Proceedings of the 38th Annual Conference on Information Sciences and Systems, Princeton, NJ, March 2004.

Publications (7)

L. Snidaro, R. Niu, P. Varshney, and G.L. Foresti, ``Sensor Fusion for Video Surveillance'', Proceedings of the Seventh International Conference on Information Fusion, Stockholm, Sweden, June 2004. 

E. Elbasi, L. Zuo, K. Mehrotra, C. Mohan and P. Varshney, "Control Charts Approach for Scenario Recognition in Video Sequences," in Proc. Turkish Artificial Intelligence and Neural Networks Symposium (TAINN'04), June 2004.

M. Xu, R. Niu, and P. Varshney, ``Detection and Tracking of Moving Objects in Image Sequences with Varying Illumination'', to appear in Proceedings of the 2004 IEEE International Conference on Image Processing, Singapore, October 2004.

R. Rajagopalan, C.K. Mohan, K. Mehrotra and P.K. Varshney, "Evolutionary Multi-Objective Crowding Algorithm for Path Computations," to appear in Proc. International Conf. on Knowledge Based Computer Systems (KBCS-2004), Dec. 2004.

Future Work

• Important to make sense of the “world”, not just model it or visualize it

• Vast amounts of data are being collected by a variety of sensors all over the globe, all the time

• How do we process or digest the data in order to:

  • Recognize significant events

  • Make decisions despite uncertainty, and take actions

• The current MURI is mostly concerned with “presenting” the data to military commanders in an uncluttered way: visualization

• Future work: automatically constructing the “big picture” of what is happening by combining a variety of data modalities: audio, video, 3D models, sensors, pictures

Battlefield Analysis

[Block diagram with stages: distributed sensors; physical-layer processing; model/update environment; visualize; analysis/reasoning; recognize events; make decisions and take actions; accomplish tasks. All of this changes dynamically with time.]

Outline of Talks

9:00 - 9:15   Avideh Zakhor, U.C. Berkeley, "Overview"
9:15 - 10:00  Chris Frueh and Avideh Zakhor, U.C. Berkeley, "3D modeling and visualization of static and dynamic scenes"
10:00 - 10:45 Ulrich Neumann, U.S.C., "Data Fusion in Augmented Virtual Environments"
10:45 - 11:30 Bill Ribarsky, Georgia Tech, "Testbed and Results for Mobile Augmented Battlefield Visualization"
1:00 - 1:45   Suresh Lodha, U.C. Santa Cruz, "Uncertainty in Data Classification, Pose Estimation and 3D Reconstruction for Cross-Camera and Multiple Sensor Scenarios"
1:45 - 2:30   Pramod Varshney, Syracuse University, "Decision Making and Reasoning with Uncertain Image and Sensor Data"