
Image Understanding
Cybernet Systems Corporation
Ann Arbor, MI 48108

Cybernet Proprietary 1

Image Understanding

Cybernet Systems Corporation (www.cybernet.com) has been a leader in image understanding systems since 1989. It focuses on development and rapid deployment services in computer vision, robotics, situational awareness systems and software-oriented interoperability. Our core competences are sensor systems integration and algorithm development, robotics, man-machine interface design, medical devices and applications, modeling and simulation (with focus on massive multiplayer scale simulations), and network appliances and security.

Gesture Recognition ......................................................................................................................... 2

Some Prior Work in Gesture Recognition .................................................................................. 3

Gesture Products ......................................................................................................................... 4

Radar ATR and Image Understanding ............................................................................................ 5

Some Prior Work in Radar ATR and Understanding ................................................................. 6

Navigation, Rendezvous & Docking .............................................................................................. 6

Some Navigation Projects Completed and Underway ................................................................ 9

Some Prior Work in Navigation, Rendezvous & Docking ......................................................... 9

Video Based and IMU-based Dead Reckoning and Position Determination Systems ................. 10

Some Prior Work in Dead Reckoning and Position Determination ......................................... 10

Ammunition Peculiar Equipment and Inspection: Ordnance ID and Inspection .......................... 11

Some Prior Work in Ammunition Manufacturing and Inspection ............................................. 12

Optical Character Recognition ...................................................................................................... 13

Some Prior Work in Optical Character Recognition ................................................................ 13

Cybernet Model-based Vision Systems ........................................................................................ 15

Some Prior Work in Model-based Vision ................................................................................. 16

Unique Sensor Systems Development .......................................................................................... 19

Some Prior Work in Sensor Systems Development ................................................................. 19

Eye Tracking, Body Tracking, Head Tracking ............................................................................. 22

Some Prior Work in Eye Tracking, Body Tracking, Head Tracking........................................ 23

Augmented Reality ....................................................................................................................... 24

Some Prior Work in Augmented Reality .................................................................................. 25

Special Facilities ........................................................................................................................... 25

DoD Field Support ........................................................................................................................ 26

Cybernet Systems Corporation 3885 Research Park Drive

Ann Arbor, MI 48108

Cybernet Systems Image Understanding & Sensors Capabilities 2012

Gesture Recognition

Cybernet is an early developer of user interfaces to robotic and computerized devices based on detection of motion in video and inertial tracker data streams, recognition of intentional human motion in these motion streams, and conversion of the human intention into machine control or computer control commands. This technology is now being popularized in computer gaming by Microsoft through its Kinect product. Cybernet began developing this technology in the early 1990s and holds 11 patents1 (and 11 more pending) in this area, with the technology applied to fields ranging from computer gaming to augmented reality training and human-intention-cued automated multi-camera surveillance.
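The recognition step described above — matching a tracked motion path against stored templates of intentional gestures — can be sketched with dynamic time warping, which tolerates gestures performed at different speeds. This is a minimal illustration, not Cybernet's patented method; the template shapes and the acceptance threshold are hypothetical:

```python
import math

def dtw_distance(track_a, track_b):
    """Dynamic time warping distance between two 2D motion tracks,
    so gestures performed at different speeds can still be matched."""
    n, m = len(track_a), len(track_b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(track_a[i - 1], track_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # skip a point in A
                                 cost[i][j - 1],      # skip a point in B
                                 cost[i - 1][j - 1])  # advance both
    return cost[n][m]

def classify_gesture(track, templates, threshold=5.0):
    """Return the closest gesture template's name, or None if no template
    is near enough to call the motion intentional (threshold is illustrative)."""
    best_name, best_d = None, threshold
    for name, template in templates.items():
        d = dtw_distance(track, template) / max(len(track), len(template))
        if d < best_d:
            best_name, best_d = name, d
    return best_name

# Hypothetical templates: a horizontal "wave" and a vertical "nod".
templates = {
    "wave": [(float(x), 0.0) for x in range(10)],
    "nod":  [(0.0, float(y)) for y in range(10)],
}
observed = [(x * 1.1, 0.2) for x in range(9)]   # noisy horizontal motion
print(classify_gesture(observed, templates))     # -> wave
```

Motion that matches no template closely enough is treated as unintentional and ignored, which is what separates gesture recognition from raw motion tracking.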

Figure 1. UseYourHead eye/head tracking for PC game control

Figure 2. Augmented reality training

1 U.S. Patents:
7,852,262 Wireless mobile indoor/outdoor tracking system
7,684,592 Realtime object tracking system
7,668,340 Gesture-controlled interfaces for self-service machines and other applications
7,460,690 Gesture-controlled interfaces for self-service machines and other applications
7,121,946 Real-time head tracking system for computer games and other applications
7,050,606 Tracking and gesture recognition system particularly suited to vehicular control applications
7,036,094 Behavior recognition system
6,950,534 Gesture-controlled interfaces for self-service machines and other applications
6,801,637 Optical body tracker
6,299,308 Low-cost non-imaging eye tracker system for computer control
6,173,066 Pose determination and tracking by matching 3D objects to a 2D sensor

Range Channel

Reflectance Channel

Figure 3. Applying human motion tracking and intention detection to video surveillance


Some Prior Work in Gesture Recognition

An Automatic Learning Gesture Recognition Interface for Dismounted Soldier Training Systems, Army STRICOM, N61339-00-C-0084, F33615-00-M-6041
Gesture-Based Multimedia Information Kiosk to Enhance Science Understanding, NASA JSC, NAS9-99079, NAS9-98068
Recognition of Computer-Based Human Gestures for Device Control and Interacting with Virtual Worlds, Army ARI, DASW01-99-C-0004, DASW01-98-M-0791
Recognition of Human Gestures for Device Control, Interacting with Virtual Worlds, and Interpreting Human Activities, DARPA, DAAH01-97-C-R136


Figure 4. Gesture recognition identifies hands and head (a), tracks and compares the motion tracks to purposeful actions (b, c), and uses this information to recognize instances of FM 21-60 Army hand gestures (d, e)


Figure 5. GestureStorm tracks the TV presenter over a green screen, overlays the presenter on the presentation video, and tracks the presenter’s hand gestures to control the presented material (in this case, a weather map)

Figure 6. UseYourHead allows the player to move and nod the head as an alternative pointing device while playing a first-person shooter game

Gesture Products

GestureStorm (www.gesturestorm.com): Cybernet markets a gesture-based product for controlling TV weather display software. This system, known as GestureStorm, leverages Cybernet’s patented vision tracking and gesture recognition software to allow the weather forecaster to manipulate on-air weather displays with simple hand movements. For example, performing a circle motion around a city on the display directs the software to display a zoomed-in image of the city. The GestureStorm software can be used in conjunction with either the FasTrac or VIPIR weather display programs made by Baron Services Inc., and Cybernet has an agreement with Baron Services to sell GestureStorm to their customers. WKMG (channel 6) in Orlando uses the GestureStorm software to manipulate live weather forecasts, removing the need to create weather sequences before performing on-air forecasts and thereby allowing WKMG to provide quicker storm coverage than competitors in its market.

UseYourHead (www.gesturecentral.com) is a commercial software product that works in conjunction with a USB PC digital camera to track a player’s head movements and translate these movements into game commands. For instance, when using the software with a flight simulator, head movements can be used to command the plane up, down, left, and right, freeing the hands for other keyboard commands. Using one’s head as an additional game command device adds an exciting new dimension of fun to playing PC games such as flight simulation, adventure, children’s, edutainment, and first- and third-person shooter games.


Radar ATR and Image Understanding

Cybernet has built a number of automatic target recognition (ATR) and radar understanding systems. These systems’ features include scale-invariant target detection, generation of ATR test data sets from imagery and three-dimensional models, recognition of structure in SAR imagery, and tracking in MTI (Moving Target Indicator) data.


Figure 7. (a) Radar Image of Ship (One angle); (b) 3D signal processing to reduce clutter; (c) Extracted probable wireframe representation


Some Prior Work in Radar ATR and Understanding

LCLW Multi-modal Radar, ITT, RJF01947 – audit systems for the LCLW multi-modal radar
Inverse Synthetic Aperture Radar Image Extraction, Naval Air Warfare AD, N68335-07-C-0346
Orientation Invariant Combat Identification, US Navy, N68335-06-C-00651 – develop a SAR and EO/IR orientation-invariant ATR for ground vehicles
Using Machine Vision Techniques to Create a Multi-Resolution Obstacle Database, Naval Air Systems Command, N00421-05-C-0022 – develop a radar ATR truthing system from multiple radar images
Automatic 3-D Structure Creation and Target Identification, U.S. Air Force (Wright-Patterson Air Force Base), F33615-02-M-1210 – generate 3D structure from aerial SAR radar imagery

Figure 8. 3D Model capture/match from SAR imagery

Dynamic Modelbase from Motion Vision, Army MICOM, DAAH01-00-C-R010, DAAH01-98-C-R068 – generate ATR models from radar data

Navigation, Rendezvous & Docking

The Cybernet team’s experience with 3D optical radar systems traces back to the first ERIM 3D optical radars delivered to the Autonomous Land Vehicle program, and later to the U.S. Postal Service. We also have a strong practice in vehicle-embedded computer vision applications (ATR, inspection, and navigation/docking) and miniature pointing and location solutions (video trackers and gesture recognition systems, MEMS INS-augmented GPS, parachute descent tracking systems, and GPS/compass combinations). Figure 9 shows the first optical LADAR our team built (while at the Environmental Research Institute of Michigan, in


Ann Arbor, Michigan, as part of the DARPA Autonomous Land Vehicle Program). Figure 10 shows a LUX sensor we integrate and distribute to civilian and military programs in the U.S. for IBEO, a division of SICK. IBEO is the leading company building LADARs for automotive applications – the LUX in Figure 10 is presently being quoted into major automotive applications in volume for several hundred Euros per unit (two orders of magnitude lower than the R&D LADARs used in previous robotics programs).

Figure 9. ERIM LADAR for DARPA ALV & Navlab

Team Cybernet was chosen, as the only small business-led, Michigan-based team, to participate as a semifinalist in the 2007 DARPA Urban Challenge. Cybernet’s team placed in the Grand Challenge using the modified COTS vehicle shown below. Our approach of minimal-cost controls, driver fusion systems, and industrial LADAR foreshadowed the future cost-sensitive Autonomous Mobility Appliqué Systems (AMAS) needed by the U.S. Army, providing the full strap-on Grand Challenge-capable driving kit for less than $35,000 per vehicle (and only $250,000 in non-recurring team development cost, leveraging the innovative work Cybernet and its teammates have done for over twenty years).

Figure 11. Cybernet Automated Minivan “Cybervan” (www.cybernet.com/urbanchallenge).

Figure 12. Identification of safe road surface based on surface texture and color

Figure 10. IBEO LUX Distributed by Cybernet


This effort has led to further autonomous navigation systems development efforts supporting ammunition handling, forklift control and Gladiator combat vehicle navigation/control.

Figure 13. Cybernet’s autonomous ammunition pallet handling forklift (left), and the Gladiator combat vehicle that is being refit from teleoperated control to autonomous control (right)

Figure 14 shows an example of the three-dimensional vision work that has been a Cybernet technology development focus for over 25 years (and for key members of the Cybernet technical team, over 30 years). Cybernet has several patented methods2 for accomplishing these types of computer vision tasks as well as a wealth of industrial and military experience performing similar machine vision controlled robotic operations. The system shown in Figure 14 is a vehicle automated coupling demonstrator that was shown to TARDEC (U.S. Army).

Figure 14. A coupling rendezvous and docking demonstrator (TACOM DAAE07-99-C-L045)

2 U.S. Patents:
7,050,606 Tracking and gesture recognition system particularly suited to vehicular control applications
7,036,094 Behavior recognition system
6,950,534 Gesture-controlled interfaces for self-service machines and other applications
6,173,066 Pose determination and tracking by matching 3D objects to a 2D sensor


Some Navigation Projects Completed and Underway

Sealandaire Gatekeeper Navigation Systems, PSC-F927C-CSC-01, subcontract to Sealandaire
UAV River Scout, Office of Naval Research (ONR), PCS-F2234-CSC-01, subcontract to Sealandaire
Material Handling System Automation Kit, Alion Science and Technology (prime) for the U.S. Army, STM1213705

Some Prior Work in Navigation, Rendezvous & Docking

Autonomous Rendezvous and Docking Techniques, NASA, NAS8-03028
Automatic 3-D Structure Creation and Target Identification, U.S. Air Force, F33615-02-M-1210
Semi-Autonomous Telemaintenance of Robotic Platforms, U.S. Army, DAAB07-02-C-P618
Dynamic Modelbase from Motion Vision, U.S. Army, DAAH01-00-C-R010
An Articulated Joint for the High-Mobility, Articulated, All-Wheel Drive, Modular Vehicle (HAAMR), U.S. Army, DAAE07-99-C-L045 (example: Figure 14)

Figure 15. Image from identifying UUV/USV motion through sonar flow analysis

Figure 16. Automated Material Handling Equipment. U.S. Army Picatinny Arsenal, W15QKN-10-C-0121

Figure 17. System for tracking refueling drogue based on active LED targets on the drogue tracked by a simple computer localization system on the UAV; Terminal Guidance for Autonomous Aerial Refueling, U.S. Navy – NAVAIR, N68936-10-C-0115


Figure 18. Video/IMU based navigation architecture

Video-based and IMU-based Dead Reckoning and Position Determination Systems

Cybernet has accumulated over 8 years of experience implementing video, miniature inertial, and GPS-based guidance and tracking systems. This area of expertise was initiated in 1999 when our engineers built the first electronic Automated Parachute Activation Devices (AAD) to enhance airborne soldier safety. These AAD devices measure airborne soldier jump exit, static line release, parachute opening, and descent from the jump exit through to ground touchdown.

From this project starting point we have miniaturized and improved accuracy for video tracking, magnetometer, inertial measurement, and GPS-fused navigation solutions in applications focused on vehicle, personnel, and other object tracking. One system built by Cybernet was used in its 2007 Grand Challenge vehicle (Figure 18). This system achieved nominally 10 cm accuracy with GPS and maintained this accuracy for up to a mile when GPS was denied, using only $5,000 of sensor and computation equipment. A later unit supporting precision UUV navigation achieves better than 1 milliradian pointing resolution, and when used in a human (first responder) gait measurement system, better than 2% of distance traveled for dead reckoning.
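Between GPS fixes, a dead-reckoning solution of this kind propagates position by integrating a gyro-derived heading together with odometry. The sketch below is a generic illustration of that principle, not Cybernet’s fielded filter; the sensor stream is synthetic:

```python
import math

def dead_reckon(pose, samples):
    """Propagate a 2D pose (x, y, heading) through a stream of
    (yaw_rate, speed, dt) samples from a gyro and an odometer."""
    x, y, heading = pose
    for yaw_rate, speed, dt in samples:
        heading += yaw_rate * dt              # integrate gyro to heading
        x += speed * math.cos(heading) * dt   # project odometry onto heading
        y += speed * math.sin(heading) * dt
    return x, y, heading

# Synthetic drive: 10 s straight at 5 m/s, then 10 s of a gentle left arc.
straight = [(0.0, 5.0, 0.1)] * 100
arc      = [(0.1, 5.0, 0.1)] * 100
x, y, h = dead_reckon((0.0, 0.0, 0.0), straight + arc)
print(round(x, 1), round(y, 1), round(h, 2))
```

Real systems of the kind described above fuse magnetometer and GPS corrections into this propagation whenever they are available, which is what bounds the drift to figures like the 2%-of-distance-traveled result quoted here.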

Some Prior Work in Dead Reckoning and Position Determination

U.S. Patent 7,852,262, Wireless mobile indoor/outdoor tracking system
Terminal Guidance for Autonomous Aerial Refueling, U.S. Navy – NAVAIR, N68936-10-C-0115 – UAV tracking and rendezvous sensor to support refueling (Figure 17 above)
Non Line of Sight Weapon Orientation, PEO-STRI, W900KK-11-C-0021 – weapons pointing sensor, better than 1 milliradian (Figure 19, right)

High Accuracy Navigation Systems for Low Power UUVs, Office of Naval Research, N00014-11-M-0234 – inertial navigation sensor array for accurate UUV navigation, augmented with sonar bottom tracking (Figure 20, left)

[Figure 18 diagram: precision GPS, odometry, an inertially compensated magnetometer (3 MEMS gyros, 3 MEMS accelerometers, 3 magnetometers, 1 temperature sensor), a fiber optic gyro, and video tag ID/tracking feed persistent location and heading solutions.]


Figure 21. (right) Autonomous Rendezvous and Docking Techniques, NASA, NAS8-03028 – recognition of holographic targets for spacecraft rendezvous and docking

Figure 22. (left) Graphics, Sensors and Planning for Robotics, NASA JPL, NAS7-1324 – 3D recognition for many applications in space: robotic repair, spacecraft rendezvous and docking, inspection, structure deflection measurement, etc.

DARPA Grand Challenge GPS-Denied Precision Navigation, 2007 (used in the vehicle shown in Figure 11 previously)

Scalable Geo Telemetry Networks, U.S. Army, W9132V-05-C-0011, 2005-2007
Wearable Wireless Fall Event Detector, NIH, 2005-2007
NLOS Pointing Device for OneTESS Application, AT&T for PEO-STRI, 2005
Enhanced GPS/INS Tracking and Vehicle Dynamics Monitoring System, 2005
Low-Cost Navigation System to Augment GPS Receivers in General Aviation Aircraft, DOT
Enhanced Accuracy INS/GPS System Utilizing Low-Cost Sensors and Geophysical Models, U.S. Army
Parachute Automatic Activation Device (AAD) for Low Altitude Jumps, 1999-2003

Ammunition Peculiar Equipment and Inspection: Ordnance ID and Inspection

Cybernet’s Automated Tactical Ammunition Classification System (ATACS) is an automated ammunition identification, inspection, and sorting system that separates rounds by condition (good versus bad), type, and caliber by comparing as-built inspection criteria against inspection for surface damage and corrosion. The system sorts bulk turn-in ammunition from 9mm to .50 cal at a rate of approximately 100,000 rounds per day, saving the Army approximately $25 million in labor avoidance costs per machine per quarter. Early versions were built to operate in normal desert particulate and temperature ranges (up to 140 degrees F) and break down into four shipping containers for flexible system transport. These ATACS units are operating in Kuwait and at the National Training Center, three shifts per day.
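Conceptually, the sorting step compares each round’s measured features against a table of as-built criteria and rejects anything out of tolerance. A minimal sketch of that comparison; the dimension values and tolerances below are hypothetical illustrations, not real Army inspection data, and the fielded system also inspects for surface damage and corrosion:

```python
# Hypothetical as-built criteria: (name, diameter in mm, overall length in mm).
CRITERIA = [
    ("9mm",     9.0,  29.7),
    ("5.56mm",  5.7,  57.4),
    (".50 cal", 12.9, 138.4),
]

def classify_round(diameter, length, tol_d=0.5, tol_l=2.0):
    """Match measured dimensions against the as-built criteria table;
    anything out of tolerance is rejected for manual handling."""
    for name, d, l in CRITERIA:
        if abs(diameter - d) <= tol_d and abs(length - l) <= tol_l:
            return name
    return "reject"

print(classify_round(9.1, 29.5))    # -> 9mm
print(classify_round(12.8, 120.0))  # -> reject (length out of tolerance)
```

Sorting by explicit tolerance bands like this is what lets one machine both separate calibers and pull damaged rounds from the good stream in a single pass.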

Because there is a need for ammunition identification, inspection, and reclamation in less improved environments, current generation ATACS machines are being packaged into ISO


containers – with self-contained power and facilities in the container so that a flat area is all that is needed to set up shop.

Some Prior Work in Ammunition Manufacturing and Inspection

ATACS Logistical Support, U.S. Army, W912DY-06-D-0008
ATACS (Army Corps of Engineers Purchase Order), U.S. Army, W912DY-05-P-0112
Contact: U.S. Defense Ammunition Center

Figure 23. (a) Bulk turn-in ammunition (b) ATACS bulk ammunition inspect and sort

The Cybernet team is also integrating these technologies into full ammunition manufacturing lines to support new types of specialized ammunition (Figure 24). The resulting line of ammunition inspection and manufacturing products is listed at atacs.cybernet.com.


Figure 24. (a) Ammunition case welding machine; (b) Ammunition case measurement machine; (c) Automatic munitions assembly line machine

Cybernet’s Projectile IDentification System (PIDS) was developed and demonstrated to Picatinny Arsenal as an identification and inspection sensor for large-caliber ammunition, in order to support future force mortars. This device uses an inspection and identification technology similar to that proven on the ATACS, can identify and inspect ordnance based on color, shape, and OCR/bar-coded identifiers inline as the ordnance is loaded, and is fully field-ruggedized for rapid deployment.


Contract: Optical Projectile Identification and Inventory System, U.S. Army, DAAE30-03-C-1060; Contact: TACOM-ARDEC


Figure 25. (a) Concept future force mortar with PIDS system inline with loading port; (b) PIDS identification and inspection system as implemented for port inline application

Optical Character Recognition

Cybernet staff began developing and supporting OCR systems for flexible robotic manufacturing systems in 1987. While at the Environmental Research Institute of Michigan, current Cybernet staff developed the technology that currently sorts the daily U.S. first-class mail stream. In 1990 this was a very demanding application, requiring identification of 5- and 9-digit ZIP codes for both hand-written and machine-printed mail pieces, at a rate of nominally 10 pieces per second with error rates of less than 1%.

Figure 26. USPS letter and flats sorting machines using handwritten and machine print OCR automation developed by Cybernet staff while they worked at the Environmental Research Institute of Michigan, 1987-1990

Subsequently we developed proprietary cost-effective document processing and searching technology for paper to digital capture and OCR systems supporting English and Arabic.

Some Prior Work in Optical Character Recognition

Arabic-Ruq Recognition, U.S. Army, W15P7T-08-C-G201 – apply Cybernet OCR expertise to Arabic (see Figures 27-29 below)


Figure 27. Text Document with Horizontal and Vertical Projection

Figure 28. Vertical Projection for Word Segmentation

Figure 29. Extracting a fully-connected character
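The projection technique shown in Figures 27 and 28 sums ink pixels along each column and splits the text line wherever the profile drops to zero. A minimal sketch of projection-profile segmentation on a synthetic binary image — illustrative only, not the delivered Arabic OCR system:

```python
def vertical_projection(image):
    """Column-wise ink counts for a binary image (1 = ink, 0 = background)."""
    return [sum(col) for col in zip(*image)]

def segment_columns(image):
    """Split a text line into spans at columns containing no ink,
    returning (start, end) column ranges for each connected run."""
    proj = vertical_projection(image)
    spans, start = [], None
    for x, count in enumerate(proj):
        if count > 0 and start is None:
            start = x                    # a run of ink begins
        elif count == 0 and start is not None:
            spans.append((start, x))     # a gap ends the run
            start = None
    if start is not None:
        spans.append((start, len(proj)))
    return spans

# Synthetic 3-row text line: two ink blobs separated by an empty column gap.
line = [
    [1, 1, 0, 0, 1, 1, 1],
    [1, 1, 0, 0, 0, 1, 0],
    [0, 1, 0, 0, 1, 1, 1],
]
print(segment_columns(line))   # -> [(0, 2), (4, 7)]
```

The same idea applied to rows (Figure 27’s horizontal projection) separates text lines before the columns are split into words; fully connected scripts such as Arabic then need the further character-extraction step shown in Figure 29.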

Automated Capture of Technical Manuals into IETM Format for Electronic Review & Distribution, NASA/GSFC, NAS5-32671
U.S. Patent 5,963,966, Automated capture of technical documents for electronic review and distribution


Cybernet Model-based Vision Systems

Cybernet has built a number of model-based computer vision system applications, including robotic guidance vision systems, rendezvous and docking vision systems, automatic target recognition (ATR) systems, and inspection systems. These systems include detection of scale- and position-invariant targets, generation of ATR test data sets from imagery and three-dimensional models, recognition of structure in imagery, identification and tracking of vehicles and people, and correlation of real and virtual model imagery.

Cybernet’s Machine Vision System software can determine the three-dimensional location and orientation of an object visible in an image. The system makes use of a pre-configured CAD model of the object and one or more camera views of the object (one is sufficient, but each additional camera increases accuracy). The original system design was intended for location and identification of partially buried unexploded ordnance, so partial occlusions of the object in the image are also handled. This technology is covered by U.S. Patent 6,173,066, Pose determination and tracking by matching 3D objects to a 2D sensor.

Figure 30. The image is (a) edge-feature detected, one edge per grid cell (8x8 to 16x16); (b) matched to a CAD wire frame for one to multiple expected objects; (c) highly tolerant of clutter
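The first stage in Figure 30 — keeping one dominant edge feature per grid cell — can be sketched with a simple gradient test. This is a generic illustration of per-cell edge extraction, not the patented matcher; the step-edge image is synthetic:

```python
def grid_edge_features(image, cell=8):
    """Keep only the strongest gradient response in each cell-by-cell block,
    reducing the image to a sparse set of (x, y, strength) edge features."""
    h, w = len(image), len(image[0])

    def grad(y, x):
        # Central-difference gradient magnitude, clamped at the borders.
        gx = image[y][min(x + 1, w - 1)] - image[y][max(x - 1, 0)]
        gy = image[min(y + 1, h - 1)][x] - image[max(y - 1, 0)][x]
        return (gx * gx + gy * gy) ** 0.5

    features = []
    for cy in range(0, h, cell):
        for cx in range(0, w, cell):
            strength, x, y = max(
                (grad(y, x), x, y)
                for y in range(cy, min(cy + cell, h))
                for x in range(cx, min(cx + cell, w)))
            if strength > 0:           # skip featureless cells
                features.append((x, y, strength))
    return features

# Synthetic 8x8 image with a vertical step edge between columns 3 and 4.
img = [[0] * 4 + [10] * 4 for _ in range(8)]
feats = grid_edge_features(img, cell=8)
print(feats)   # -> [(4, 7, 10.0)]
```

Reducing the image to at most one feature per cell is what keeps the subsequent wireframe match cheap and, because clutter contributes only a bounded number of spurious features, tolerant of busy backgrounds.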


Some Prior Work in Model-based Vision

Awareness and Recognition of Behavioral Threat within Complex Environments, U.S. Army – RDECOM – ARL Aberdeen Proving Grounds, W911QX-07-C-0030

Figure 31. The basic image processing system: A) input image; B) foreground codebook elements (in white); C) running-average foreground/background segmentation; D) the addition of the two methods; E) the final blob after connected-component analysis

Figure 32. Moving from left to right: the background model, the addition of an object that gets placed in a codebook layer, the addition of a new object in front of the object, the new object being placed in a new codebook layer, the removal of the background objects
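The running-average model in Figure 31C keeps a slowly updating estimate of the static scene and flags pixels that deviate from it as foreground. A minimal per-pixel sketch of that idea — illustrative only; the learning rate and threshold values are hypothetical:

```python
def update_background(bg, frame, alpha=0.05):
    """Blend the new frame into the running-average background model."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def foreground_mask(bg, frame, threshold=20):
    """Mark pixels that deviate from the background model as foreground (1)."""
    return [[1 if abs(f - b) > threshold else 0 for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

# Synthetic scene: flat gray background, then a bright object appears.
bg = [[50.0] * 4 for _ in range(3)]
empty = [[50] * 4 for _ in range(3)]
with_object = [[50, 50, 200, 200],
               [50, 50, 200, 200],
               [50, 50, 50, 50]]

for _ in range(10):                  # let the model settle on the empty scene
    bg = update_background(bg, empty)
mask = foreground_mask(bg, with_object)
print(mask)   # -> [[0, 0, 1, 1], [0, 0, 1, 1], [0, 0, 0, 0]]
```

Combining this running average with codebook layers, as in Figure 31D, lets the system both adapt to gradual lighting change and remember objects that are added to or removed from the scene.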


Figure 34. Initial Outdoor Classification Data

Figure 35. The Vigilance post-processing display showing an object (human) inside the area of interest (a restricted area), and therefore highlighted in red to indicate an event detection

A Behavior Recognition System for Identifying and Monitoring Human Activities, AFRL Kirtland, F29601-98-C-0096

Figure 33. Two unidentified objects

An Optical Health Monitor for High Power Lasers, U.S. Air Force (Wright-Patterson Air Force Base), FA8650-04-C-1699, F29601-02-C-0116
Figure 36. (left) Quantum Image Processing Toolkit, U.S. Air Force (Rome Air Force Base), F30602-03-C-0064
Quantum Information Science, U.S. Air Force (Rome Air Force Base), F30602-02-C-0166


Content Addressable Image Database Architecture, NASA, NAS5-99025
Content Addressable Graphics, Image and Video Retrieval Systems, Navy, N68335-98-C-0159
A Perception-Based Image Rendering System, Army TACOM, DAAE07-97-C-X046, DAAE07-95-C-R072

Figure 37. (left) Machine Recognition for Reality Registration in Ordnance Fuse Removal, AF Tyndall AFB, F08637-95-C-6047, F08637-94-C-6030
Perception-Based Image Interpretation, MDA (formerly BMDO), DASG60-95-C-0061
Performance Enhancement of Operator Image Interpretation, DOT, DTRS-57-93-C-00131
A Highly Leveraged Real-Time Pose Determination System, White Sands Missile Range, DAAD07-95-C-0111 (Figure 38)

Figure 38. Cybernet model-based recognition and tracking of White Sands missile test film

Human-Centered, Task Specific Visual Planning for Robotic Applications, Army MICOM for ARPA, DAAH01-95-C-R023, DAAH01-93-C-R231
A Generalized Video Compression Model, CECOM, DAAB07-94-C-D608, DAAB07-91-C-B031 (Figure 39, right)
A High-Accuracy Absolute Navigational System for Rapid Runway Repairers, AF Eglin AFB, F08637-94-C-6003, F08635-91-C-0194
Advanced Vision-Based Decision Systems, MDA (formerly BMDO), DASG60-91-C-0070
U.S. Patent 7,684,592, Real-time object tracking system
U.S. Patent 6,173,066, Pose determination and tracking by matching 3D objects to a 2D sensor


Unique Sensor Systems Development

Cybernet has built a number of specialized optical sensors and imaging systems. These include x264 capture and compression systems, security monitoring systems (which identify vehicles, people, and suspicious behaviors), video identification and monitoring systems for small animals in their natural environments, low-error-rate optical character recognizers, 3D ladars, and laser-optical shearographic and interferometric three-dimensional measurement systems. Some of these projects are described below.

Some Prior Work in Sensor Systems Development

High-Speed FPGA Image Decoder, NASA – JPL, NNX09CD79P
Robust CCSDS Image Data to JPEG2K Transcoding, NASA – GSFC, NNG07CA00C

Grasp Algorithms for Optotactile Robotic Sample Acquisition, NASA – JSC, NNX10CA94C, NNX09CD93P (Figure 40, right)

Figure 41. Visual display of 3D reconstruction of fiducials – Force equals fiducial displacements


Multispectral Desert Fauna S&R, U.S. Air Force, FA9302-10-M-0002

Figure 42. Results of our dynamic background subtraction algorithm

Fire Control System for Small Caliber Weapons, U.S. Army – Picatinny, W15QKN-12-C-0003
Closed Loop Fire Control, U.S. Army – Picatinny Arsenal, W15QKN-11-C-0019 (Figure 43, right)
Multi-Sensor Suites, Army CECOM, DAAB07-96-C-J625
Early Warning Aircraft Damage Detection Using Electronic Laser Speckle Pattern Interferometry, AF Robins AFB, F-09650-97-C-0217, F-09650-96-C-0381

Shearography Motion Correction, U.S. Navy, N68936-10-C-0069 (Figure 44, left)
Battlefield Image and Text Distribution Using a Personal Communications System (PCS), Army CECOM, DAAB07-97-C-A254
Safe-to-Load Sensors, Army Watervliet, DAAA22-96-C-0013
Active Identify Friend/Foe Intruder System, AF Hanscom Electronic Systems Center, F19628-95-C-0179

Multispectral Imaging Comparative Design Study, Navy NRL, N00014-95-C-2130
Non-Scanning, No-Moving-Parts Computed Tomographic Imaging Spectrometer, Phillips Lab/PKW, F29601-95-C-0088

(Figure 42 panels: raw image; average-subtracted and thresholded result)


Figure 45. Shearographic image of an internal composite defect

Rapid Nondestructive Optical Inspection of Advanced Composite Structures, Army ARO, DAAH04-95-C-0029 (Figure 45)
Reducing Teleoperated System Data/Image Bandwidth Requirements, Army MICOM/AMSMI-RD-PC-HB, DAAH01-95-C-R125, DAAH01-93-C-R318
Virtual Panoramic Vision Blocks, DARPA, DTRS-57-94-C-0018
Moiré Interferometry Measurement Device, U.S. Army (TACOM-ARDEC), DAAE30-02-C-1038 (Figure 46 below)

Test  Object
1     Paper Plate
2     Paper Plate
3     PVC Pipe
4     PVC Pipe
5     Wooden Block with Sloped Edges

(For each test object, Figure 46 shows the corresponding phase map, mask, and depth map.)

Figure 46. Moiré sensing of 3D object structures
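The pipeline in Figure 46 runs from phase map to depth map. The core step, converting wrapped interference phase to relative height, can be sketched as follows; the fringe-to-height scale `mm_per_fringe` is system-dependent and assumed here purely for illustration:

```python
import numpy as np

def phase_to_depth(wrapped_phase, mm_per_fringe):
    """Convert a wrapped moiré phase map (radians) to relative depth.

    Each 2*pi of unwrapped phase corresponds to one fringe, i.e.
    `mm_per_fringe` of height (a system-dependent calibration,
    assumed here).  Row-wise 1-D unwrapping only -- a sketch of the
    principle, not a production unwrapper.
    """
    unwrapped = np.unwrap(wrapped_phase, axis=1)
    return unwrapped / (2 * np.pi) * mm_per_fringe
```

In practice a mask (as in the figure) would exclude low-modulation pixels before unwrapping, and a robust 2-D unwrapping algorithm would replace the row-wise `np.unwrap`.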


A GPS-Based Digital Image and Video Capture, Editing and Transmission System for Special Forces Applications, Army CECOM, DAAB07-94-C-D608, DAAB07-91-C-B031
Implementation of a Three-Dimensional Mapping/Inspection System for Inside-Tank or Containment Areas, DOE, DE-FG02-92-ER-81391
Robot Vision Using Holographic Targets, NASA Marshall Space Flight Center (MSFC), NAS8-38916
U.S. Patent 6,043,870, Compact fiber optic electronic laser speckle pattern interferometer
U.S. Patent 6,040,900, Compact fiber-optic electronic laser speckle pattern shearography

Eye Tracking, Body Tracking, Head Tracking

Cybernet has built body trackers, eye trackers, and head trackers, both as standalone deliverables and as components of larger HMD or projective graphics systems.

Cybernet’s Eye Tracking System tracks the movement of a person’s pupil and provides a real-time visual display of where they are looking. The system detects changes in pupil size, blinks, and other eye-related parameters.

The system consists of two lightweight head-mounted cameras, which allow the wearer a full range of head motion and freedom of movement within the limits of the 8 ft cable supplied with each tracker. The eye tracker control software has an intuitive graphical user interface and runs in a Windows or Unix environment. The system also includes a PC with hardware for video capture and processing.

The eye tracker works by illuminating the eye with safe infrared (IR) light, allowing the video capture card and processing software to analyze the reflected IR captured on camera. The result is a measurement of the user's point of regard (gaze) and other eye-related parameters, such as pupil size and blink occurrence.
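The dark-pupil principle behind this kind of IR tracker can be sketched minimally (a hypothetical illustration, not the production algorithm): under IR illumination the pupil images as the darkest compact region, so thresholding and taking the centroid yields a pupil-center estimate, and an empty result doubles as a crude blink detector:

```python
import numpy as np

def pupil_center(ir_frame, dark_thresh=40):
    """Estimate the pupil center in a grayscale IR eye image.

    A minimal dark-pupil sketch: threshold the darkest pixels and
    return the centroid as (x, y), or None when no dark region is
    visible (e.g. during a blink).  `dark_thresh` is an assumed
    illustrative value.
    """
    ys, xs = np.nonzero(ir_frame < dark_thresh)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())
```

A fielded tracker would add corneal-glint compensation and a per-user calibration mapping pupil position to on-screen gaze coordinates.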

Figure 47. Cybernet Eye Tracker -- www.gesturecentral.com/eyetracker/index.html


Ideal applications for the eye tracking system include hands-free control of computer interface systems, replacing standard mouse or joystick controls. Other applications include human-operator performance assessment and other psycho-physiological research studies.

The Firefly is a high-speed, real-time, full-body motion tracking system that uses a multiprocessor DSP system to analyze data and track up to 32 key positions on a subject's body. The tracking points, called tags, connect to a small lightweight controller box containing a high-speed microcontroller that synchronizes each tag's output for the stationary motion capture system. Three dedicated DSPs gather position information and feed it, over high-speed serial connections, to a fourth DSP that compiles all the data.
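Once a tag has been located in two synchronized camera views, recovering its 3D position is classical stereo triangulation. A minimal linear (DLT) sketch, assuming calibrated 3x4 projection matrices for two cameras (illustrative only, not the Firefly's DSP implementation):

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one optical tag seen by two
    calibrated cameras, e.g. two cameras on a tracking camera bar.

    P1, P2: 3x4 projection matrices; uv1, uv2: the tag's pixel
    coordinates in each view.  Returns the 3D tag position.
    """
    u1, v1 = uv1
    u2, v2 = uv2
    # Each view contributes two linear constraints on the
    # homogeneous 3D point X; stack them and solve A X = 0 via SVD.
    A = np.stack([u1 * P1[2] - P1[0],
                  v1 * P1[2] - P1[1],
                  u2 * P2[2] - P2[0],
                  v2 * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]   # dehomogenize to a 3D position
```

With noisy detections, a least-squares refinement over all cameras that see a tag would follow this linear initialization.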

Firefly - Full Body Tracking System

Some Prior Work in Eye Tracking, Body Tracking, Head Tracking

Adaptive High Resolution HMD, NAVAIR Training Systems Division, N61339-07-C-0100
Adaptive High Resolution HMD, 20/20 Immersive Display System Based on Eye Tracking, Navy, NAVAIR – PM Training Systems Division, N61339-09-C-0022
Automatic HMD Alignment Device, U.S. Navy, N61339-06-C-0031

Figure 48. Cybernet's Firefly body tracker; motion is captured using wireless optical tags and the patented real-time tag-tracking camera bar

U.S. Patent 6,801,637 Optical body tracker


Figure 50. Cybernet’s Augmented Reality Live Fire Trainer

Figure 49. Cybernet HMD with integrated Eye Tracker

Integrated Head-Mounted Display Interface for Hands-Free Control, Army ARL, DAAL01-98-C-005, DAAL01-97-C-0026
Optical Head Tracking for Eye Tracking Localization, Fed Labs / Rockwell Science Center, PO# B1K431431

Augmented Reality

Cybernet Systems' computer vision technologies have been used to correlate real-world captured imagery with virtual- or augmented-reality graphics. One example, Cybernet's Augmented Reality for Live Fire Training system, enables the rapid creation of realistic live-fire training scenarios using basic software and hardware components. The hardware consists of a COTS heads-up display for rendering models, a forward-looking camera for video input and pose detection, and our proprietary inertial measurement (I3M) unit for rifle pose detection. The software uses optical beacons, affixed to building faces, to determine the position of the soldier and the computer models associated with each building. Using this data, 3D computer models of Non-Player Characters (NPCs) that simulate civilians and enemies are rendered to the soldier's heads-up display. The locations of target hits and misses are determined using rifle position and orientation data from our I3M unit, and as


Figure 51. Pose extraction process: (A) intrinsic camera calibration; (B) luminosity of the scan line (yellow), high-frequency signal (green), smoothed signal (blue); (C) extracted barcode lines; (D) completed line scans for corner detection; (E) reconstructed barcode pose; (F) excellent low-light performance demonstration (contrast enhanced).

with traditional computer simulations, the NPCs respond appropriately to the trainee's actions. To expedite the creation of realistic scenarios, our system uses COTS software to create the building models and a human-readable scripting language to dictate character behavior.
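Geometrically, the hit/miss scoring described above reduces to casting a ray from the I3M-reported rifle pose against the modeled building faces. A hedged sketch of that ray-plane test (a hypothetical helper, not the fielded code):

```python
import numpy as np

def shot_impact(muzzle, direction, plane_point, plane_normal):
    """Intersect a shot ray with a flat target surface.

    muzzle, direction: rifle position and aim vector (e.g. from an
    IMU-reported pose); plane_point, plane_normal: any point on the
    target surface (e.g. a building face carrying an NPC) and its
    normal.  Returns the 3D impact point, or None if the shot is
    parallel to the surface or the surface lies behind the shooter.
    """
    d = np.asarray(direction, float)
    n = np.asarray(plane_normal, float)
    denom = d.dot(n)
    if abs(denom) < 1e-9:
        return None                       # ray parallel to surface
    t = (np.asarray(plane_point, float) - muzzle).dot(n) / denom
    if t < 0:
        return None                       # surface behind the shooter
    return np.asarray(muzzle, float) + t * d
```

Scoring a hit would then check whether the impact point falls inside the 2D extent of the NPC model rendered on that face.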

Some Prior Work in Augmented Reality

Augmented Reality Serious Gaming for Medical Care, OSD/Army – USAMRAA, W81XWH-08-C-0761 – Augmented-reality trainer for medics
Tactile Panel (Ph-II), Navy – NAVAIR – PM NAWC TSD, N61339-08-C-0041 – Wireless vision-based tactile realistic flight simulators built with rapid prototyping technology
Augmented Reality for Live Fire Training, U.S. Army – RDECOM – PM STTC, W91CRB-08-C-0130 – Immersive fires trainer that combines video- and IMU-based player and weapons tracking
Augmented Reality for Live Fire Training, U.S. Army, W91CRB-08-C-0013
Augmented Reality, MARCORSYSCOM, M67854-07-C-6526 – Augmented-reality overlays on systems to be maintained – live IETMs

Special Facilities

Cybernet Systems Corporation's main development facilities are at 3885 Research Park Drive, Ann Arbor, Michigan. These facilities include project management, CPA-level financial accounting, purchasing, administration, quality control, publication services, engineering office space and meeting rooms, system-level testing, a metal-working machine shop, and in-house plastic FDM (Fused Deposition Modeling) rapid prototyping for complex usable plastic part

Figure 52. (A) Completed system screen shot with NPC; (B) SketchUp model of the simulation environment for environment modeling; (C) Example of occlusion mapping from environment model; (D) Completed system hardware.


shapes, a light electronics assembly lab, an optics lab, ProEngineer, AutoCAD, and SolidWorks CAD tools, OrCAD circuit design tools, supporting software development environments (Windows, Windows CE/Mobile, Linux, various embedded), and hardware development tools (spectrum analyzers, scopes, logic analyzers, magnetometer and IMU calibration, etc.). Cybernet also draws on pre-qualified outside sources for parts fabrication and electronic board and module assembly when volume demands become high enough. Our facilities meet federal, state, and local environmental laws and regulations for airborne emissions, waterborne effluents, external radiation levels, outdoor noise, solid and bulk waste disposal practices, and the handling and storage of toxic and hazardous materials.

In addition to the Ann Arbor headquarters, Cybernet operates small office facilities in Orlando, FL; the Washington, DC area; San Diego, CA; and Johnstown, PA, supporting the USMC, Navy, Army, SOCOM, and prime contractors. Cybernet is a Mentor-Protégé company affiliated with SAIC, is certified as an 8(a) firm by the SBA (SBA Case # 111553), and has been in continuous positive cash-flow operation since 1988 (23 years). Cybernet facilities are cleared to the DoD Secret level, with key personnel and staff members currently holding clearances for contract performance.

Ann Arbor Facility Front Aerial

Figure 53. Cybernet Integration Facilities in Ann Arbor, MI

DoD Field Support

Cybernet supports ATACS robotic equipment for the U.S. Army worldwide, including in OIF, through 24/7 phone support, rapid-response logistics, and, when needed, field support engineering.


Our main development facility offers 45,000 sq. ft. of space, including five vehicle highbays, a light machine shop and rapid prototyping, software development space, seating for over 150 engineers and staff, and nearby roadway and off-road test areas. We also have access to the Chelsea Proving Grounds and the Ford Romeo Proving Ground.

(Figure 53 areas: mechanical highbay; electronics & test; vehicle highbay #1; vehicle highbays #2 & #3; vehicle highbay #4; demo highbay #5; open office & experiment space; open office space.)

Think of Cybernet for your challenging systems integration and unique robotic sensors and systems development requirements.

www.cybernet.com

[email protected] Ph: 734.668.2567 Fx: 734.668.8780

Cybernet Systems Corporation 3885 Research Park Drive

Ann Arbor, MI 48108