Navigation for Planetary Approach & Landing

FINAL REPORT

ESA Contract Reference 15618/01/NL/FM – May 2006


CONTENTS

1 PROJECT OVERVIEW
1.1 STUDY CONTEXT AND MOTIVATION
1.2 INDUSTRIAL ORGANISATION
1.3 STUDY LOGIC AND PROGRAMME OF WORK
1.4 MAIN MILESTONES & WAY FORWARD

2 NPAL DESIGN
2.1 SYSTEM SPECIFICATIONS
2.2 DEVELOPMENT & VALIDATION APPROACH
2.3 HARDWARE ARCHITECTURE & DEVELOPMENT APPROACH

3 FEIC DEVELOPMENT
3.1 IMAGE PROCESSING FOR SAFE LANDING, DEVELOPMENT & VALIDATION APPROACH
3.1.1 References
3.1.2 Critical parameters
3.1.3 Objective criteria and IP modelling
3.1.4 The camera model
3.1.5 Image database
3.1.6 Algorithms benchmarking and performance evaluation
3.1.7 Conclusion
3.2 HW/SW CO-DESIGN
3.2.1 Hardware/software functions co-design
3.2.2 Hardware/software interfaces co-design
3.3 FPGA IMPLEMENTATION & VALIDATION
3.3.1 Image Processing Timing Constraints
3.3.2 Internal FEIC Components
3.3.3 External (SpaceWire) View
3.3.4 Development and Continuous Validation
3.3.5 Key Architectural Design Decisions
3.3.6 External FEIC Register Interface
3.3.7 FEIC Technology Snapshots: Object Stores
3.3.8 FEIC Technology Snapshots: Extractor Select/Sort Unit
3.3.9 FEIC Technology Snapshots: Pixel Streams and Line Buffers
3.3.10 FEIC Technology Snapshots: Correlation Measure
3.3.11 Acceptance Testing: Further Image Processing Validation
3.3.12 Acceptance Testing: User/OBC Interface Validation
3.3.13 Stress Testing
3.3.14 Summary

4 VBNC DEVELOPMENT
4.1 HIGH LEVEL FUNCTIONAL DESCRIPTION
4.2 MAIN FEATURES
4.3 FUNCTIONAL CHARACTERISTICS
4.4 IMAGE PROCESSING FEATURES
4.5 OPTICAL DESIGN
4.6 THERMO-MECHANICAL DESIGN
4.7 ELECTRONIC DESIGN
4.8 SOFTWARE
4.9 SYNCHRONISATION
4.10 PERFORMANCE ANALYSIS
4.11 BREADBOARD
4.12 ELECTRICAL GROUND SUPPORT EQUIPMENT
4.13 TESTING

5 NAVIGATION DEVELOPMENT
5.1 NAVIGATION ARCHITECTURE
5.2 REAL TIME IMPLEMENTATION
5.3 NAVIGATION FUNCTIONAL ARCHITECTURE
5.4 REAL TIME PROFILING
5.4.1 Time performance evaluation using inside timers
5.4.2 Time performance evaluation using profiling of the code
5.4.3 Correlation between profiling and analytic performance estimation
5.4.4 Java Navigation Filter V2 Profiling Results
5.4.5 Conclusions & References
5.5 PERFORMANCES CAPACITY
5.6 MONTE CARLO PERFORMANCE ASSESSMENT ON SCENARIOS
5.6.1 Mercury Campaign
5.6.2 Mars Campaign

6 VBNAT – END TO END VALIDATION
6.1 PANGU: VIRTUAL SCENE GENERATOR EMBEDDED
6.1.1 PANGU: Planet and Asteroid Natural scene Generation Utility
6.1.2 PANGU Architecture
6.1.3 Large Models
6.1.4 Camera Model
6.1.5 Image Library
6.1.6 TCP/IP Facilities for VBNAT
6.1.7 Summary
6.2 VBNAT V3: LANDER GNC DESIGN
6.2.1 Tool Functionalities
6.2.2 Architecture and interfaces
6.2.3 Covariance Analysis Tool
6.3 VBNAT V4: HARDWARE IN THE LOOP
6.3.1 Description of validation test
6.3.2 Strategy with 4 neighbours
6.3.3 Strategy with 8 neighbours
6.3.4 Simulated performances
6.4 INTRODUCING THE DEVELOPMENT ENVIRONMENT FOR A VISION-BASED NAVIGATION SYSTEM
6.5 END TO END VALIDATION
6.5.1 Non Real Time Test Bench
6.5.2 Real Time Test Bench
6.5.3 End to End Validation conclusions

7 GENERAL CONCLUSIONS


TABLE OF FIGURES

Figure 2-1: Transtech board coupled with the test bench
Figure 2-2: Hardware interface between VBNC camera and FEIC
Figure 2-3: VBNC camera architecture
Figure 2-4: VBNC camera breadboard, exploded and assembly views
Figure 3-1: Objective criteria were defined on the basis of navigation requirements
Figure 3-2: Kinematic distortions: sensitivity of RBF to Tz and Rz. About 50% of points are repeated with 2% approach or 10° rotation.
Figure 3-3: Radiometric distortions: sensitivity to noise level: Gaussian (left) and dead pixels (right)
Figure 3-4: Sensitivity to MTF variations between consecutive frames
Figure 3-5: Detection of global maxima (left), detection of local maxima (right)
Figure 3-6: Sensitivity of P(GM) with respect to Rz and Tz
Figure 3-7: Hardware/software co-design
Figure 3-8: Data flow in FEIC function
Figure 3-9: Tlist array
Figure 3-10: FEIC communication interface
Figure 3-11: Frame timing diagram
Figure 3-12: Internal structure of the FEIC
Figure 3-13: FEIC Image Processing Data Flow
Figure 3-14: FEIC SpaceWire Ports
Figure 3-15: Development and Validation Approach
Figure 3-16: Correlation Measure
Figure 3-17: Comparison of FEIC Harris with floating-point
Figure 3-18: Stress test image pair
Figure 3-19: FPGA Utilisation By Function
Fig. 4-1: Sample analysis outputs
Tab. 4-2: VBNC transient temperatures
Fig. 4-3: VBNC mechanical layout, axonometric view
Fig. 4-4: VBNC mechanical layout, sectional view
Fig. 4-5: VBNC structural and thermal models
Fig. 4-6: VBNC electrical scheme
Figure 5-1: LOS to one landmark concept
Figure 5-2: Vertical Landing Simple Example
Figure 5-3: NPAL extended Kalman filter main equations
Figure 5-4: Navigation filter real-time implementation
Figure 5-5: Navigation function interfaces
Figure 5-6: Percentage of total time used by unitary methods
Figure 5-7: The two types of feature point generators (left: four points, right: random points)
Figure 5-8: Two types of trajectories used for performance evaluation (left: Mars landing trajectory, right: Mercury landing trajectory)


Figure 5-9: Along track and cross track velocity error and standard deviation (Mercury trajectory). Estimation of the along track velocity is below 2 m/s 10 seconds before landing, and below 0.25 m/s cross track.
Figure 5-10: Along track and cross track velocity error and standard deviation (Mars trajectory). The Mars trajectory is more favourable (vertical descent) and allows very good cross track velocity estimation.
Figure 5-11: Landing site position estimation errors at Tga (along track error) and Tgc (cross-track error). Percentage of runs better than the value in abscissa.
Figure 5-12: Velocity estimation errors at Tga (along track error) and Tgc (cross-track error)
Figure 6-1: NASA Spirit rover (top-left), NASA Viking lander (top-right), PANGU synthetic image (main)
Figure 6-2: Simple Crater Model
Figure 6-3: PANGU Architecture
Figure 6-4: Apollo Image (left) and PANGU Image (right)
Figure 6-5: PANGU Hierarchical Model
Figure 6-6: VBNAT Development: an incremental approach
Figure 6-7: VBNAT functionalities
Figure 6-8: Links between VBNAT environment and PANGU
Figure 6-9: Global architecture of the simulator. Definition of Camera Model, FEIC Extraction & Tracking, FEIC/OBC Interface.
Figure 6-10: Java functions and Simulink environment for VBNAT V4.0
Figure 6-11: Reference image of the VBNAT V4.0 test: translation between 2 consecutive images
Figure 6-12: Four-neighbour strategy: subpixel accuracy calculated using linear interpolation
Figure 6-13: Tracks statistics for 4-neighbour strategy: norm of the error
Figure 6-14: Plot of the tracks on the image used in the VBNAT V4.0 simulation with 4-neighbour strategy (tracks in red and blue crosses for new points)
Figure 6-15/6-16: Eight-neighbour strategy: subpixel accuracy calculated using biquadratic interpolation
Figure 6-17: Tracks statistics for 8-neighbour strategy: norm of the error
Figure 6-18: Plot of the tracks on the image used in the VBNAT V4.0 simulation with 8-neighbour strategy (tracks in red and blue crosses for new points)
Figure 6-19: Tracks obtained in the VBNAT V3 simulation with 8-neighbour strategy
Figure 6-20: Tracks statistics for 8-neighbour strategy: norm of the error
Figure 6-21: Examples of reused environment developments
Figure 6-22: FEIC FPGA and functional test bench architecture
Figure 6-23: ESG test bench architecture
Figure 6-24: Real time test bench architecture
Figure 6-25: Scenario #1: image samples #1, #20 and #40
Figure 6-26: Mercury Descent: 1st set of images for validation, image samples #1, #258 and #516
Figure 6-27: Mercury Descent: 2nd set of images for performance evaluation, images #1, #516 and #1033
Figure 6-28: Mars Descent: image samples #1, #718 and #1437
Figure 6-29: Mercury performance test: correlation and Id plots for tracked points
Figure 6-30: Mercury performance test: error on tracked point position and estimation of distance to mean plane
Figure 6-31: Mercury performance test: cross-track and along track velocity errors
Figure 6-32: Mars performance test: correlation and Id plots for tracked points
Figure 6-33: Mars performance test: error on tracked point position and estimation of distance to mean plane


Figure 6-34: Mars performance test: cross-track and along track velocity errors
Figure 6-35: Real time testbench architecture
Figure 6-36: Real time synchronisation between FEIC and navigation software


ACRONYMS

ACK Acknowledge
ADC Analog/Digital Converter
AOCMS Attitude and Orbit Control Management System
AOCS Attitude and Orbit Control System
APS Active-Pixel Sensor
ASIC Application-Specific Integrated Circuit
AST Autonomous Star Tracker
CCD Charge-Coupled Device
CCSDS Consultative Committee for Space Data Systems
CPM Chemical Propulsion Module
CTE Charge Transfer Efficiency
CVE Correlation Value Error
DEM Digital Elevation Model
DMA Direct Memory Access
DOB De-Orbit Boost
DPRAM Dual Port RAM
DSN Deep Space Network
DSNU Dark Signal Non Uniformity
ESA European Space Agency
ESG Electrical Stimuli Generator
EU Electronic Unit
FDIR Failure Detection, Identification and Recovery
FEIC Feature Extraction Integrated Circuit
FOV Field Of View
FPGA Field Programmable Gate Array
FPN Fixed Pattern Noise
FVE Feature Extraction Value Error
G-List Gate-List
GNC Guidance, Navigation and Control
HF High Frequency
HRC High Resolution Camera
ICD Interface Control Document
I/F Interface
IMU Inertial Measurement Unit
INS Inertial Navigation System
IP Image Processing
IPQ Inertial Planetocentric eQuatorial frame
ITT Invitation To Tender
IVN Integrated Vision and Navigation for Planetary Exploration
LF Low Frequency
LiGNC Lidar Based GNC for Rendezvous and Landing


LMS List of Maximum Strength
LOS Line of Sight
LS Landing Site
LT Track points list (before rejection)
LULA Lunar Landing Systems Study
MAS Mission Analysis Section
MCM Multi-Chip Module
MECO Main Engine Cut Off
MER Mars Exploration Rover
MMH Mono Methyl Hydrazine
MMO Mercury Magnetospheric Orbiter
MOC Mission Operation Centre
MON Mixed Oxides of Nitrogen
MPL Mars Polar Lander
MPO Mercury Planetary Orbiter
MSE Mercury Surface Element
MSR Mars Sample Return
MTF Modulation Transfer Function
MWGS Modified Weighted Gram-Schmidt
NACK No-Acknowledge
NPAL Navigation for Planetary Approach and Landing
OBC On-Board Computer
OBDH On-Board Data Handling
OH Optical Head
PANGU Planet and Asteroid Natural scene Generation Utility (tool used to generate virtual scenes from a given terrain model)
PCI Peripheral Component Interconnect (computer bus)
PGM Portable Gray Map
PRNU Photo Response Non Uniformity
RBF Lander Body-fixed Frame
RPQ Rotating Planetocentric eQuatorial frame
S/C Spacecraft
SEU Single Event Upset
SNR Signal to Noise Ratio
SRAM Static Random Access Memory
STR Star Tracker
TC List of Tracked Points
TN List of New Points
UD Upper Diagonal
USB Universal Serial Bus
VBN Vision Based Navigation
VBNAT Vision Based Navigation Analysis Tool
VBNC Vision Based Navigation Camera
VGPEP Visual Guidance Entry Point
VHDL VHSIC Hardware Description Language


VHSIC Very High-Speed Integrated Circuit
WP Work Package


1 PROJECT OVERVIEW

1.1 STUDY CONTEXT AND MOTIVATION

Autonomous navigation is a key enabling technology for future planetary exploration missions. For planetary landers, vision-based navigation concepts, working on complex, unstructured scenes, appear as the most promising and, at the same time, the most challenging technology.

Identified as a central element as early as the Lunar Landing studies, work on the definition of navigation for planetary landers has been conducted around three main themes:

• GNC concepts supportive of a vision-based navigation,

• camera design to deliver the required images,

• image processing to extract the necessary observable.

First of all, GNC concepts were identified and studied in the frame of the Lunar Landing Systems Study. The basic concepts for piloting, site selection, guidance and retargeting were defined, and the parametric performances were demonstrated. Vision-based navigation was further studied in the frame of IVN, and some image processing was introduced at that stage. The need to develop an artificial scene generator, to overcome the natural limitations of mock-up systems, was highlighted.

Concerning the second theme, the camera design itself is challenging: a large FOV, high line-of-sight resolution requirements, a very large dynamic range, and strong uniformity and quality requirements. In this very domain, progress has been observed in recent years, in which Astrium and Galileo Avionica (formerly Officine Galileo) have been instrumental: the Rosetta NavCam is certainly the first camera developed for this very purpose. In parallel, APS detector technology has demonstrated its maturity and offers broad perspectives for adaptive, programmable devices.

The last theme is image processing. A most promising possibility raised in the Lunar Landing studies for navigation is the extraction and tracking of feature points. The key criteria are tracking robustness and tracking duration, while at the same time new feature points must be identified adaptively. Feature-point tracking offers the navigation a converging performance pattern, highly suitable for high-precision landers. However, it involves a large amount of data that must be processed at a high rate.
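As an illustration of this processing step, the sketch below shows a Harris-style corner extractor of the kind the FEIC implements in hardware (cf. Figure 3-17, which compares the FEIC Harris with a floating-point reference). The window size, the constant k, the point count and all function names are illustrative choices, not the NPAL values.

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response for a 2-D grayscale image (illustrative 3x3 window)."""
    # Image gradients via central differences.
    Iy, Ix = np.gradient(img.astype(float))
    # Structure-tensor terms, smoothed over a 3x3 box window.
    def box(a):
        p = np.pad(a, 1, mode="edge")
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0
    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2          # both gradients strong -> corner
    trace = Sxx + Syy
    return det - k * trace ** 2

def extract_features(img, n_points=20):
    """Return the n strongest Harris responses as (row, col) tuples."""
    r = harris_response(img)
    idx = np.argsort(r, axis=None)[::-1][:n_points]
    return [tuple(np.unravel_index(i, r.shape)) for i in idx]
```

In the flight chain this computation runs in the FPGA directly on the pixel stream; the sketch only conveys the structure-tensor criterion that makes a feature point repeatable and trackable from frame to frame.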

The NPAL study builds upon this R&D perspective. Its ambition is to bring two major contributions that secure the concept for a planetary lander and give real chances for an operational concept to fly on board BepiColombo or any other planetary lander in the coming years, namely:

• a complete camera design to demonstrate state-of-the-art capability for compact, low-mass and large field-of-view cameras, suited for navigation purposes. Recent developments in Europe in the field of APS technology will serve that purpose,

• an ASIC implementing the most demanding image processing algorithm for on-line extraction and tracking of feature points at a high rate.


1.2 INDUSTRIAL ORGANISATION

Astrium, as prime contractor, is committed to the proper execution of the study on schedule and is ultimately responsible vis-à-vis ESA for all the outputs of the work performed by the study team. Astrium is responsible for the study management and technical co-ordination and directly in charge of the following tasks, combining the expertise of the Toulouse and Stevenage teams:

- During the Engineering and Specification phase: Mission review and lander design, Image Scene Analysis, VBNC specification, and Navigation concept review,

- During the design phase: Navigation on-board software design and development, GNC conceptual design, market and application survey, to freeze the FEIC definition,

- During the ASIC development phase: feasibility analysis and functional specification for the FPGA design; then, in close coordination with the University of Dundee, implementation and validation of the prototype FEIC, detailed FEIC design, and FEIC validation board implementation and testing,

- During the Performance evaluation phase: the VBNAT, in its incrementally augmented versions v2 to v4. Astrium operates the tool to run the functional tests and performance campaigns: navigation performance tests, image processing performance tests in close cooperation with INETI, and FEIC performance assessment. Finally, the performance synthesis is under direct Astrium responsibility,

- Astrium is also directly in charge of coordinating closely with ESA the interface with the “BepiColombo Definition Studies”, of establishing contacts and collecting technical information from planetology scientists, and of ensuring maximum cross-fertilisation with the other BepiColombo TDAs, in particular in the fields of miniaturised sensors, integrated electronics and horizon sensors, for use in the harsh Mercury environment.

Galileo Avionica, as sub-contractor, teams with Astrium on this navigation study. Galileo Avionica is directly in charge of the following tasks:

- During the design phase: Navigation camera preliminary design;

- During the camera development phase: Navigation camera optics design, Navigation camera detailed design, Navigation camera manufacture to an “elegant breadboard” level.

- Integration of the FEIC in the camera.

- Development and manufacture of an EGSE for electrical stimulation

- Perform functional tests using electrical stimulation and point target optical stimulation.


The University of Dundee, as laboratory subcontractor, brings to the team two complementary facets of its activity. UofD is directly in charge of the following tasks:

- During the Engineering and Specification phase: support to the image scene analysis by collecting data, providing PANGU-generated scenes and comparing actual and generated planetary scenes; provision of a reference set of well-calibrated planetary scenes generated by PANGU, serving for the specification of the navigation and vision algorithms,

- During the Design phase: upgrade of the PANGU tool to enhance its functionality and adapt it to support its integration in the VBNAT environment,

- During the ASIC Development activity: the architectural design of the FEIC, VHDL design and synthesis, VHDL testing, support to the implementation and validation of the prototype FEIC, in cooperation with Astrium.

Science Systems (Space) Limited, as subcontractor, mainly brings the following:

- During the Design phase: review of existing simulator tools, adaptation and development of the BepiColombo lander dynamics simulator into VBNAT v1,

- Integration of the PANGU tool into the VBNAT simulation tool.

INETI, as laboratory subcontractor, will support navigation design and validation with its expertise in image processing. In the frame of this study, INETI is directly in charge of the following tasks:

- During the Engineering and Specification phase: Image processing theoretical background,

- During the Design phase: Image processing algorithms detailed design and prototyping, Image processing algorithms extensive testing on the reference and generated planetary scenes,

- During the Performance Evaluation phase: integration of the image processing algorithms into the VBNAT. Interface to the PANGU for image generation from PANGU scenes. The performance campaign is conducted in close coordination with Astrium.

ATMEL, as subcontractor, has been selected for the ASIC manufacturing, for its unique capacity in production of space quality integrated circuits. ATMEL will be directly in charge of:

- FEIC manufacturing, guaranteeing the proper manufacturing of the ASIC.


1.3 STUDY LOGIC AND PROGRAMME OF WORK

The study starts with the analysis of the mission and system requirements, allowing the major design drivers of the on-board navigation system to be identified and described (trajectory, environment conditions, surface visibility, camera viewing angle). A first reference concept is established, describing the overall avionics building blocks: relative navigation, navigation camera, inertial sensor, control thrusters.

At this early stage of the study, the expected scene characteristics are analysed, in relationship with planetary scientists, and a representative set of planetary images is gathered. Existing image processing algorithms are reviewed. The relative navigation concept is identified and traded off, with a particular emphasis on data fusion algorithms.

Camera requirements can then be derived in terms of photometry, dynamics and functional architecture. Analysis and design of the environment, vehicle and GC algorithms lead to an initial version of the navigation simulation environment (VBNAT 1.0).

Once these analyses are performed, the design of the different core items can be done in a parallel fashion in order to optimise the overall study planning. Three main activities are identified, which feed the incremental constitution of the navigation simulation environment:

1. Design of the vision-based relative navigation algorithm. Covariance analyses allow the performances of the navigation and associated sensors to be assessed. The navigation algorithm is defined and prototyped at this stage, through a comprehensive review of the IVN concept, together with the identification of re-usable building blocks developed for fast and optimal inertial navigation and data fusion from CapRee. The resulting navigation algorithms are integrated into the simulation environment, together with a model of the feature extraction function, leading to the VBNAT 2.0.

2. Design of the image processing algorithms. Image processing algorithms are designed, taking into account their future implementation in hardware. Tests are conducted on static images, to characterise algorithm efficiency in terms of number of extracted points and density and distribution of points, and on image sequences, to check tracking robustness. The navigation simulation environment is updated to take the image processing algorithms definition into account, leading to the VBNAT 3.0.

3. Design of the navigation camera. The navigation camera architecture is established, with a particular emphasis on optical and electronics aspects.
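The data-fusion core of activity 1 above follows the classical predict/update cycle of a Kalman filter (the NPAL extended Kalman filter of Figure 5-3 follows this structure, with line-of-sight measurements to tracked feature points). The sketch below shows that cycle on a deliberately simple 2-state vertical-descent model; the state choice, noise levels and the noiseless altitude-like measurement are illustrative assumptions only, not the NPAL filter design.

```python
import numpy as np

def predict(x, P, F, Q):
    """Propagate the state estimate and its covariance through the dynamics."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, H, R):
    """Fuse a measurement z, with measurement model z = H x + noise."""
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)              # corrected state
    P = (np.eye(len(x)) - K @ H) @ P     # corrected covariance
    return x, P

# Illustrative 2-state model: [altitude, vertical velocity].
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])    # altitude integrates velocity
Q = 1e-4 * np.eye(2)                     # small process noise
H = np.array([[1.0, 0.0]])               # altitude-like observation only
R = np.array([[0.25]])                   # assumed measurement noise

x = np.array([100.0, 0.0])               # initial guess: velocity unknown
P = 10.0 * np.eye(2)
for k in range(1, 51):
    x, P = predict(x, P, F, Q)
    z = np.array([100.0 - 5.0 * dt * k])  # truth: constant -5 m/s descent
    x, P = update(x, P, z, H, R)
# x now estimates both the altitude and the unobserved descent velocity.
```

The point of the example is the converging-estimate behaviour mentioned above: the velocity, never measured directly, is recovered through the filter's cross-covariances, just as the lander's velocity is recovered from tracked feature-point directions.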

Building on top of these design activities, two main wrap-up activities are performed:

4. Incremental validation of the navigation concept,

5. Analysis of the hardware implementation of the feature extraction algorithms, yielding the specification, feasibility report and development plan of the FEIC.

The outputs of the activities allow the phase 1 Review to be held.


[Phase 1 study-logic diagram: the reference mission, planetology inputs and the BCDS/IVN heritage feed the scene analysis, the trajectory and system needs and the navigation concept; PANGU, the image library and the environment, vehicle and GC design lead to the simulation environment (VBNAT 1.0); the navigation design, feature extraction design and camera functional architecture feed the GNC simulation campaign, the navigation performance assessment and the hardware implementation analysis, producing VBNAT 2.0 and 3.0, the FEIC definition, feasibility and development plan, the NavCam definition and development plan, and the navigation simulation environment and performance results, with links to the Miniaturized Inertial Measurement Package, the Miniaturized Star Tracker for Harsh Environments and the Highly Integrated Control & Data Systems activities, leading to the Phase 1 Review and on to the FEIC architecture design and prototype.]

Phase 1 Study Logic


The phase 2 study logic is organised around two main tracks:

6. Development, implementation and validation of the FEIC. This ASIC implementation flow includes a prototyping stage, leading to an FPGA version of the FEIC. Then the detailed ASIC design is performed, possibly taking into account some minor adaptations from the navigation camera detailed design activity, and the resulting final design is manufactured into ASICs with several complementary packagings. Finally, a validation board is built, allowing the FEIC functions and performances to be verified.

7. Development, implementation and testing of the navigation camera. The detailed design of the navigation camera is performed, and the camera is manufactured in the form of an elegant breadboard. The functional and performance capabilities of the camera are then demonstrated.

At the end of the study, a wrap-up activity synthesises the major findings, and provides recommendations about the vision-based navigation concept.


[Phase 2 study-logic diagram: from the Phase 2 ATP, the FEIC architecture design and prototype lead to the FEIC detailed design, FEIC manufacture and FEIC validation board; in parallel, the NavCam detailed design and NavCam manufacture lead to the camera demonstration; both tracks feed the navigation validation with the simulation tool V4.0 and the study synthesis.]

Phase 2 Study Logic


1.4 MAIN MILESTONES & WAY FORWARD

The project was kicked off on 29 October 2001. Phase 1 was successfully completed in the summer of 2003 with a consolidated design, an operational validation environment making extensive use of virtual scene generation (the VBNAT), and a preliminary architecture for the real-time implementation.

Two major decisions affected the course of the study:

- The BepiColombo lander was abandoned by the project; the NPAL project nevertheless kept the Mercury scenario as the most demanding (sizing) one. A complementary scenario, based on the emerging Mars missions (either ExoMars or the future Mars Sample Return), was introduced for study in Phase 2.

- The FEIC ASIC was abandoned: it was demonstrated at the end of Phase 1 that the complexity of the FEIC was at the limit of the 0.5-micron technology retained for NPAL. An FPGA implementation on a 6-Mgate device was decided instead. The savings from ASIC manufacturing were used to develop and validate the real-time prototype of the complete navigation chain, making the complete NPAL set-up eligible for a "next step to flight" approach.

The second phase was completed in April 2006. The FPGA implementing the "smart video compressor", the FEIC, was first achieved in November 2004, followed by the camera integration and a complete demonstrator of the function. The real-time integration was completed by January 2006 and the validation demonstrated in March 2006. The final presentation was held at ESA/ESTEC on 25 and 26 April 2006.

The NPAL set-up is now undergoing reinforcement and porting to flight-compatible hardware in order to support a flight campaign on a helicopter drone in September 2007. At that point NPAL will have reached a TRL of 5 to 6.


2 NPAL DESIGN

2.1 SYSTEM SPECIFICATIONS

The vision system aims to cover the descent and landing sequence towards an unknown, possibly hazardous terrain, taking the lander to a soft landing and controlling the dynamics to within 1 m/s of residual velocity at contact. Hazards larger than 10 cm are detected and avoided. The orientation of the vehicle in the local gravity field is monitored and controlled to within a few degrees to limit the conservativeness of the landing gear design. Both atmospheric and non-atmospheric planetary bodies are considered. Although the original activity was geared towards landing on a non-atmospheric planet (Mercury), the concept has also been successfully tested for atmospheric planetary bodies (Mars), proving great robustness and flexibility.


2.2 DEVELOPMENT & VALIDATION APPROACH

[Figure: Development & validation approach. Centre: the Vision-Based Navigation Analysis Tool V3.5 (VBNAT) closed-loop simulation model with Environment, Orbit & Dynamics, Propulsion, IMU, Camera, Navigation, and Guidance and Control blocks, a closed-loop switch, and covariance analysis and trajectory outputs. One side: FEIC development and open-loop testing (the Transtech board, ESG stimulation). Other side: camera development and open-loop testing (opto-mechanical concept, detector management, optical performance verification). All converge into integration and closed-loop testing with the camera breadboard]

The complete navigation solution has been developed and prototyped. An elegant breadboard of the camera has been assembled; it differs from the flight design mainly in the optics and in some electronic components. The compactness of the electronics is affected only to a limited extent by the use of commercial components (FPGAs) instead of dedicated optimised implementations (ASICs).


Figure 4 – The NPAL Breadboard in Laboratory Test Set-up

The elegant breadboard respects the flight design in most respects. The use of commercial optics and commercial-grade components favours a demonstrator approach for on-ground testing and fine tuning.

Camera size: 13 cm × 13 cm × 8 cm (baffle incl.)

Camera mass budget: 500 g

Camera power: 2 W

Communication link: SpaceWire, 100 Mbit/s

Memory capacity: up to 200 FPs and textures for tracking

Table 1 – NPAL Breadboard Characteristics Summary

The breadboard is used to perform the complete proof of concept of the NPAL navigation solution. It is used in two configurations: either with direct picture acquisition on the APS, or through electrical stimulation. An Electrical Stimulation Generator makes it possible to play a complete video sequence in real time, bypassing the detector to feed the camera memory directly. Sequences previously played in non-real time through the navigation filter can thus be replayed, completely validating the real-time behaviour.

A Mercury scenario at 10 and 20 Hz has been used extensively for this purpose. The measured duty cycle of the internal loop controlling the FEIC is demonstrated to be compatible with the real-time requirements. This loop is a unique feature: it controls the FEIC through navigation aiding, with the navigation designating the points to be selected and tracked.

NPAL FOR MARS

The validation with the real electronics is performed on virtual scenes, making it possible to test numerous scenarios and terrain conditions. A complementary validation has been performed on a Mars scenario. The first picture depicts the motion observed under parachute from high altitude; the second picture concerns the last 100 metres, where the acquisition of terrain characteristics and hazards such as boulders can be observed.


2.3 HARDWARE ARCHITECTURE & DEVELOPMENT APPROACH

2.3.1 Hardware FEIC development and integration

The trade-off activity at the beginning of the project showed that the camera had to integrate a large part of the picture analysis, in order to act as a filter between the acquired image and the navigation software algorithm and so reduce the required bandwidth of the communication interface.

Manufacturing an ASIC for the feature extraction and tracking analysis was envisaged; nevertheless, the complexity of the project, together with the sharing of the activity between several sites in Europe, meant that a reprogrammable solution carried less risk.

The hardware camera activity was split into two main activities. The first was the FEIC development, i.e. the feature extraction and tracking function integrated in the FPGA; the second was the development and integration of a camera by Galileo Avionica in Florence.

Both activities were developed in parallel and separately. Since the FEIC is eventually integrated in the camera, the interfaces between the two functions had to be defined keeping in mind that they should work separately, yet in the end communicate very efficiently, with the difficulty of redesigning and revalidating in case of failure. This approach was nevertheless made feasible by the FPGA reprogramming capability. The whole camera architecture design was done taking this requirement into account.

The FEIC function has been developed at the University of Dundee. To develop the VHDL set of functions, commercial simulators such as ModelSim are well adapted, and for several months each VHDL function was tested using this software tool on a PC. Nevertheless, when the overall integration had to be done, a reprogrammable board was necessary.

Two requirements needed to be met. The first was to avoid having to modify a single line of VHDL code when the FEIC function was ported to the camera.

The second was the possibility to use the same board for the functional test activity with the navigation software. By using the same solution for the FEIC validation and for the functional test integration, we could, in case of problem or failure, benefit from the experience accumulated on the board by the University of Dundee, and from a set of FEIC function tests that served as a working base.

It was decided to propose that the University of Dundee use a commercial Transtech board together with an SRAM board developed at ASTRIUM. This solution (see Figure 2-1) was used during the FEIC development and validation at the University of Dundee, then during the VBNAT V3 validation in Toulouse.

The benefit of this solution is that, instead of investing in two boards, one for the FEIC validation and a second for the functional test validation (activities that are only weakly linked), the same board was reused.


[Figure: test bench set-up. The FEIC Transtech board, with an SRAM mezzanine board holding the flip/flop RAM banks, runs the tracking & correlation function in the FEIC; synchronisation signals and address/data/control buses link the boards, and SpaceWire ports through a SpaceWire mini-router and a USB/SpaceWire interface connect to an IBM-compatible PC]

Figure 2-1: Transtech board coupled with the test bench

Below: picture of the Transtech board with the SRAM mezzanine mounted on top. The SpaceWire cable can be seen


2.3.2 Hardware interfaces between the FEIC and the VBNC camera

The VBNC camera and the FEIC FPGA were developed separately; nevertheless, the interfaces between the two functions are strong, because the camera stores an image in a memory that the FEIC must quickly read and process before a new image is available.

During the hardware design, care was taken to verify that the camera could be validated and integrated without the FEIC function, that the FEIC worked on the camera without the camera function (i.e. the camera does not disturb the FEIC function), and finally that both functions could communicate together efficiently.

To meet these requirements, it was decided to minimise the interfaces between the two functions and to provide pins that allow the input/output interfaces to be put in high impedance, or some commands to be disabled.

• The interface between the FEIC and the VBNC memory is a SRAM interface.

• A ping-pong signal output from the VBNC to the FEIC signals that the picture is ready to be read.

• A signal PIXMEM_TRI, when activated, puts the pixel memory manager pins in high impedance and allows the SRAM memory to be written through the FEIC using the SpaceWire port.

• A signal TEST_IMG, when activated, enables programming of the SRAM memory through a FEIC-specific SpaceWire port. When the pin is not activated, the download is impossible. This solution protects the memory even if a user tries to transfer data to the FEIC through the SpaceWire link. This protection avoids simultaneous memory access from the VBNC camera memory manager and the FEIC, which could create a short circuit on the RAM pins.

• FEIC_TRI is a pin which puts the output pins of the FEIC in high impedance when activated; this signal is used to disable the FEIC activity when the camera is in validation mode.

• The reset pin is used by the camera to perform a hardware reset of the FEIC. In this case the FEIC function is reprogrammed from the PROM contents.

• The FEIC FPGA integrates a SpaceWire mini-router with three input/output links (towards the camera, the nominal host computer and the redundant computer) and two internal SpaceWire ports (the mini-router port and the FEIC port). When TEST_IMG is activated, several FEIC registers are dedicated to SRAM memory write access.

During the camera validation, the FPGA was not mounted on the printed circuit board and all camera validation tests were performed; the FPGA was then soldered and programmed. By activating the test pins, it was verified that the FEIC has no influence on the camera validation tests.

The pixel memory manager was put in high impedance, and functional tests were performed on the FEIC in the same conditions as on the Transtech board, the only difference being that the camera and its FEIC were used instead of the Transtech board with the mezzanine SRAM board.

Once both validation tests were successful, the interface between the VBNC camera and the FEIC was quickly tested.


[Figure: the pixel RAM manager and the FEIC share the RAM_A and RAM_B banks through address, data and control lines (OEN, CSN, WEN), gated by the TEST_IMG, PIXMEM_TRI and FEIC_TRI control signals]

Figure 2-2: Hardware interface between the VBNC camera and the FEIC


2.3.3 VBNC camera design architecture

The main specification of the camera is the capacity to extract and track up to 200 points on a 1024 × 1024-pixel picture. The image rate is specified at 20 Hz. The camera also needs the ability to send complete images to the computer.

The specifications require that the camera can also be used as a star tracker, so a processor was mandatory to meet the requirements. Taking this opportunity, a LEON2 processor has been integrated in the camera FPGA to provide several additional functions, such as real-time pixel correction, a standardised communication protocol, telemetry measurements, PROM programmability and several functional modes.

The VBNC must be able to transmit an image. If the real-time specifications are to be respected, the theoretical bandwidth is 20 × 1024 × 1024 × 10 bits/s, i.e. about 210 Mbit/s (exactly 200 × 2²⁰ bit/s). The technology used does not allow this performance, due to the FPGA frequency limitation.
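As a quick arithmetic check of this figure (the report's "200 Mega bits/s" corresponds to the binary prefix):

```python
# Arithmetic check of the raw-image bandwidth quoted above:
# 20 frames/s x 1024 x 1024 pixels x 10 bits per pixel.
bandwidth_bps = 20 * 1024 * 1024 * 10
print(bandwidth_bps)                   # 209715200 bit/s
print(bandwidth_bps / 2**20)           # 200.0 "binary" Mbit/s
print(round(bandwidth_bps / 1e6, 1))   # 209.7 Mbit/s in decimal units
```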

The FEIC maximum bandwidth corresponds to the T-LIST sent by the FEIC to the host computer after each image extraction and tracking analysis; this bandwidth is no more than 2 Mbit/s.

These analyses drove the choice of the SpaceWire link as the communication interface. Nevertheless, it is a point-to-point link, so two ports were required, one for the FEIC and one for the camera. The redundancy requirement would then impose four ports, which is not compatible with a space architecture (cable mass, volume, etc.). Integrating a SpaceWire mini-router in the FEIC resolved this difficulty.

The LEON2 microprocessor was preferred in the camera over a proprietary Altera processor because, if an ASIC is to be produced for camera industrialisation, the fault-tolerant LEON2 is available for an ESA project upon request, and no further hardware study is required apart from the design of the ASIC itself.

In order to simplify the design, an SRAM flip-flop buffer has been developed. While the camera writes into one buffer, the other one is read by the FEIC for extraction and tracking. The camera is the master: when an image has been stored, a ping-pong signal is activated to inform the FEIC that the buffers have flipped. If the FEIC has not finished its activity, the data are lost and an error 13 is sent to the host computer; if the FEIC is in active mode, a new extraction and tracking starts, provided all necessary information has been transmitted from the navigation software on the host computer.
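The buffer-flip protocol can be sketched behaviourally as follows (a minimal illustration, not flight code; the class and method names are invented for the example):

```python
class PingPongFEIC:
    """Behavioural sketch of the ping-pong buffer protocol (names invented)."""
    ERROR_FRAME_LOST = 13

    def __init__(self):
        self.active = True    # FEIC in active mode
        self.busy = False     # extraction & tracking in progress
        self.errors = []      # error codes reported to the host computer

    def on_ping_pong(self):
        """Camera (the master) signals that the image buffers have flipped."""
        if self.busy:
            # FEIC did not finish in time: the new image is lost
            self.errors.append(self.ERROR_FRAME_LOST)
            return "frame lost"
        if self.active:
            self.busy = True  # start extraction & tracking on the new buffer
            return "processing started"
        return "idle"

    def on_processing_done(self):
        self.busy = False

feic = PingPongFEIC()
assert feic.on_ping_pong() == "processing started"
assert feic.on_ping_pong() == "frame lost"    # flip arrived while still busy
assert feic.errors == [PingPongFEIC.ERROR_FRAME_LOST]
```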

Commands are sent to the VBNC camera through CCSDS protocol commands defined in the VBNC ICD document; commands are sent to the FEIC registers through low-level commands and a protocol specific to the FEIC function.

The camera actually contains two FPGAs.

The first FPGA contains the pixel memory manager associated with the LEON2 processor, 512 kbytes of EEPROM and PROM, and 2 Mbytes of SRAM. The APS controller is soldered on the bottom of the box.

The FEIC function is integrated in a 6-million-gate FPGA.


[Figure: the APS detector and APS scan controller feed the pixel memory manager; the LEON processor has PROM, EEPROM and SRAM on its memory bus plus SpaceWire A and B links; pixel RAM buffers A and B are shared with the FEIC via the ping-pong signal; the FEIC provides SpaceWire ports 1 and 2]

Figure 2-3: VBNC camera architecture

Figure 2-4: VBNC camera breadboard exploded + assembly views


3 FEIC DEVELOPMENT

3.1 IMAGE PROCESSING FOR SAFE LANDING, DEVELOPMENT & VALIDATION APPROACH

To optimally support the navigation processing, the selection and definition of the image processing (IP) algorithms to be implemented in the ASIC is a crucial step. The main issue for vision-based relative navigation lies in the selection of physical (fixed) points on the scene. These points should be salient enough to support long-term tracking along the landing sequence. Performance is driven by the robustness to the lander dynamics and to the changing scene appearance during the approach. The object of the performance analysis was to confront an extensive review of feature point extraction and tracking algorithms with the various needs of the mission on the basis of objective criteria.

The overall validation approach consisted in:

1. Review the state of the art

2. Define the image processing criteria and critical parameters

3. Specify and develop a relevant camera model

4. Build a complete image data set based on PANGU and true images

5. Benchmark most of the algorithms on a complete dataset for selection

6. Evaluate the performances of the selected algorithm

7. Extrapolate performances to navigation criteria (length and accuracy of tracks)

8. Define criteria and associated dataset for FEIC implementation validation

3.1.1 References

[1] E. Loupias and N. Sebe, Wavelet-based Salient Points for Image Retrieval, RR 99.11, Laboratoire Reconnaissance de Formes et Vision, INSA Lyon, November 1999. URL: http://rfv.insa-lyon.fr/~loupias/points/

[2] C. Schmid, R. Mohr, C. Bauckhage, Evaluation of Interest Point Detectors, Int. Journal of Computer Vision. URL: ftp.inrialpes.fr/pub/movi/publications/Schmid-ijcv00.ps.gz

[3] TN1400

[4] TN2400

3.1.2 Critical parameters

Extraction and tracking algorithms are based on the invariance of local feature points. As a consequence, algorithm design and validation must account for an exhaustive analysis of their behaviour with respect to distortion sources. In terms of kinematic distortions, we distinguish between (the following values are provided as an indication for planetary landing missions):


• TXY – translations parallel to the image plane; they induce mainly parallax effects due to the 3D nature of the terrain. Maximal expected translations are about 15 pixels. Maximum parallax effects are around 2/100 of a pixel.

• RXY – rotations about axes perpendicular to the optical axis; they have a secondary effect on the point appearance. During S/C manoeuvres (~50°/s), they can reach 2.5°.

• RZ – rotations around the optical axis have a direct effect on the point appearance, whose texture rotates. This is a very penalising effect, but in the frame of landing missions RZ rates generally remain low (~1°/s).

• TZ – translations along the optical axis produce a zoom effect that strongly modifies the point appearance. The expected approach rate (Vz/z) lies between 0 and 5%.

RZ and TZ are obviously the most disturbing effects regarding extraction and tracking performances.
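To illustrate why TZ dominates, the zoom factor between consecutive frames implied by the approach rate can be computed as follows (a sketch assuming the rate applies per frame at a 20 Hz acquisition frequency; the function name is illustrative, while the altitude/speed pairs are the high-gate and low-gate values given later in this report):

```python
def per_frame_zoom(altitude_m, vz_ms, frame_rate_hz=20.0):
    """Apparent scale change between consecutive frames: z / (z - Vz*dt)."""
    dt = 1.0 / frame_rate_hz
    return altitude_m / (altitude_m - vz_ms * dt)

# High gate (10 km altitude, 1000 m/s): about 0.5% zoom per frame.
print(per_frame_zoom(10000.0, 1000.0))
# Low gate (100 m altitude, 40 m/s): about 2% zoom per frame.
print(per_frame_zoom(100.0, 40.0))
```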

For radiometric distortions:

• MTF – the modulation transfer function at the Nyquist frequency (sampling frequency / 2) is a common blurring measure. Contributors to the MTF are the optics, the detector, motion and micro-vibrations. A coarse estimation gives an MTF between 0.13 and 0.2. It has an impact on the salience of points.

• MTF variations between successive acquisitions are due to motion blur and thus to accelerations, which are maximal during manoeuvres (~40°/s²). A coarse estimate considers a few per cent of MTF variation in the worst cases.

• Random noise has two sources: detector noise (Gaussian, with SNR ~70 to 110) and dead pixels, which follow a uniform distribution over the image.

• Illumination conditions may vary according to the sun elevation, the acquisition time adjustment and optical transmission variations within the optical field of view (0 to 40%).

3.1.3 Objective criteria and IP modelling

A major effort was devoted to the definition of objective criteria that could capture the intrinsic behaviour of the IP algorithms. The main objective was to fit the exact behaviour of the extraction and tracking algorithms in order to finely understand and decouple the main influences. Objective criteria were defined on the basis of navigation requirements.

The most relevant objective criteria for feature extraction proved to be the spatial distribution of points (SDP), the distribution of relative accuracy (RAC) and the repeatability between consecutive frames (RBF).

SDP introduces a metric to compare the spreading of the extracted feature points. The adopted metric is based on Loupias' work [1] and uses the entropy of the distribution of feature points. SDP requires the definition of a grid over the image and a probability measure, estimated as the number of points within a cell over the total number in the image. It is defined by the following entropy:

SDP = − (1 / log N) · Σ_{i=1}^{N} p_i log p_i


where N is the number of cells. When the distribution of points is uniform over the image, the entropy is maximal (SDP = 1); with aggregations, the entropy decreases towards 0.
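The SDP criterion can be sketched as follows (an illustrative implementation; the grid size and the point sets are arbitrary choices for the example, not the study's settings):

```python
import math
from collections import Counter

def sdp(points, grid=4, size=1024):
    """Entropy of the point distribution over a grid x grid partition of a
    size x size image, normalised by log N so that 0 <= SDP <= 1."""
    cell = size / grid
    counts = Counter((int(x // cell), int(y // cell)) for x, y in points)
    total = sum(counts.values())
    n_cells = grid * grid
    return -sum((c / total) * math.log(c / total)
                for c in counts.values()) / math.log(n_cells)

# Uniform spread (one point per cell) maximises SDP; clustering minimises it.
uniform = [(i * 256 + 1, j * 256 + 1) for i in range(4) for j in range(4)]
clustered = [(10 + i, 10) for i in range(16)]
assert abs(sdp(uniform) - 1.0) < 1e-12
assert sdp(clustered) == 0.0
```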

For the different kinematic transformations of the image (translations and rotation), RAC was found to follow approximately a Rayleigh distribution, confirming the assumption of a 2D Gaussian distribution for the error.

RBF [2] is related to the ability of an extractor to detect the same feature in consecutive frames. Let us consider two images F1 and F2. E1 is the set of extracted points in F1, E2 is the set of extracted points in F2, and E2′ is the projection of E1 into F2. RBF(ε) is defined by:

RBF(ε) = card{ (x_2, x_2′) : x_2 ∈ E_2, x_2′ ∈ E_2′, dist(x_2, x_2′) < ε } / card(E_2′)

where card denotes the cardinality of a set and ε is the maximum acceptable distance between two homologous points.

This criterion is crucial in the determination of the track duration.
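A minimal sketch of the RBF computation (the point sets are illustrative, and the projection of E1 into F2 is assumed to be given):

```python
import math

def rbf(e2, e2_proj, eps):
    """Fraction of projected points E2' that have an extracted point of E2
    within distance eps (the homologous-point criterion)."""
    matched = sum(1 for xp in e2_proj
                  if any(math.dist(x, xp) < eps for x in e2))
    return matched / len(e2_proj)

e2_proj = [(10.0, 10.0), (50.0, 50.0), (90.0, 90.0)]   # E1 projected into F2
e2 = [(10.4, 10.2), (51.0, 49.5), (200.0, 200.0)]      # extracted in F2
print(rbf(e2, e2_proj, eps=2.0))  # 2 of the 3 projected points are matched
```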

Figure 3-1 - Objective criteria were defined on the basis of navigation requirements

For simplicity of the functional architecture and interfaces, we focused on tracking algorithms operating as purely frame-to-frame matching. On this basis, we proposed an efficient way to characterise a tracking algorithm:

1. to evaluate its matching characteristics by means of transition probabilities capturing the ability of the algorithm to associate homologous points, namely the probability of good match P(GM)

2. to extrapolate its behaviour over longer tracks, predicting quantities such as the distribution of track length or the mean number of false tracks:

• The distribution of track length:

P(N = n) = (RBF · P(GM))^n · (1 − RBF · P(GM))

where n is an occurrence of the track length.

• The mean length of tracks

Mean length = (RBF · P(GM)) / (1 − RBF · P(GM))

• The % of good (false) tracks
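The extrapolation formulas above amount to a geometric law with per-frame survival probability RBF · P(GM); a small sketch, with purely illustrative probability values:

```python
def track_length_pmf(n, rbf, p_gm):
    """P(N = n) = q**n * (1 - q), with q = RBF * P(GM)."""
    q = rbf * p_gm
    return q**n * (1.0 - q)

def mean_track_length(rbf, p_gm):
    """Mean of the geometric law above: q / (1 - q)."""
    q = rbf * p_gm
    return q / (1.0 - q)

rbf, p_gm = 0.9, 0.95                 # illustrative values only
print(mean_track_length(rbf, p_gm))   # ~5.9 frames for these values
# The distribution sums to 1 over all track lengths:
print(sum(track_length_pmf(n, rbf, p_gm) for n in range(1000)))
```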

[Figure content: Extraction requirements (minimum number of tracks, uniform distribution, minimal accuracy) with quality measures (number of extracted points, spatial distribution, accuracy, repeatability); Tracking requirement (minimum track length) with quality measures (matching characteristics: good and false matching rates)]


3. to evaluate its dynamic behaviour by means of a reduced set of continuous measurements.

This modelling of extraction and tracking proved efficient in decoupling the different influences. Moreover, the validity of the extrapolation model was confirmed by comparing the real histogram of track lengths with the extrapolated one. These results confirmed that this model could serve as a reference for further comparison and performance evaluation.

3.1.4 The camera model

In order to efficiently validate the image processing algorithms, a camera model has been designed that accounts for the overall camera transfer function. This model is applied to raw images provided either by the PANGU tool or by a real imagery database.

Global architecture and interfaces

The camera model is a combination of three successive models.

Images:

- RawImage is a raw image oversampled by a factor of 2. This factor is necessary to model aliasing effects. The calibration of RawImage (radiance or irradiance model) has to be defined with respect to the PANGU image generation.

- DistortedImage is the distorted image after geometrical distortions. It has the same size as RawImage.

- BlurImage is the distorted image after blurring effects (optical, detector and motion) and undersampling (aliasing effects).

[Figure: camera model pipeline. PANGU raw image generation produces RawImage; the camera model (distortion, MTF and radiometry modelling) transforms it successively into DistortedImage, BlurImage and NumericalImage]


- NumericalImage is a digitised image including all the detector defects and noises. It has the same size as BlurImage.

We have considered the raw images to be monochromatic. This implies working with a mean wavelength, which determines the quantum efficiency, the optical transmission and the optical MTF. All these quantities are represented by their mean values.
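The three-stage camera model can be sketched as follows (a highly simplified stand-in: identity distortion, 2×2 box filtering for the MTF and undersampling, Gaussian noise and 10-bit quantisation; these placeholder operations illustrate the pipeline structure, not the actual NPAL models):

```python
import random

def distort(raw):
    # geometric distortion model; identity placeholder here
    return raw

def blur_and_undersample(img):
    # 2x2 averaging stands in for the optics/detector/motion MTF and removes
    # the factor-2 oversampling of RawImage (BlurImage is half the size)
    h, w = len(img), len(img[0])
    return [[(img[2*i][2*j] + img[2*i][2*j+1]
              + img[2*i+1][2*j] + img[2*i+1][2*j+1]) / 4.0
             for j in range(w // 2)] for i in range(h // 2)]

def radiometry(img, sigma=1.0, full_scale=1023):
    # additive detector noise, then 10-bit digitisation (NumericalImage)
    return [[max(0, min(full_scale, round(v + random.gauss(0.0, sigma))))
             for v in row] for row in img]

raw = [[100.0] * 8 for _ in range(8)]  # 2x-oversampled flat scene
numeric = radiometry(blur_and_undersample(distort(raw)))
print(len(numeric), len(numeric[0]))   # 4 4
```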

3.1.5 Image database

An image database was constituted with two underlying objectives:

• The definition of the sequences required for the image processing validation plan.

• The proposition of a validation protocol for the simulated images.

Validation on virtual scenes

The image sequences required for the detailed simulation plan and performance study of image processing have been defined in accordance with the concerned industrial partners. The sequences are categorised as follows:

Translations in the image plane: Ia, Ib and Ic

Rotation perpendicular to the optical axis: II

Translation along the optical axis: IIIa, IIIb and IIIc

Complex motion: IV

Illumination conditions: Va and Vb

Analysis of a rotation for different illumination conditions: VIa and VIb

Analysis of zoom effects for different illumination conditions: VIIa and VIIb

Reference trajectory: VIII

The nominal assumptions made for the kinematics and illumination conditions are the following:

• High gate: altitude = 10 000 m, speed = 1000 m/s

• Low gate: altitude = 100 m, speed = 40 m/s

• Rotations: from 0°/s to 50°/s

• Sun elevation: 5.9°

The sequences are defined below; each entry gives the sequence name, its description and its objective.

Ia: Place the camera at the High Gate (altitude = 10 km), the optical axis being vertical, and apply a translation parallel to the image plane. Take one image every 10 m for 30 iterations (in order to simulate speeds from 200 to 6000 m/s at a 20 Hz acquisition frequency). The motion in the image should lie between 1 and 30 pixels. Objective: model the effect of parallax on purely 2D translational motions.

Ib: Similar to Ia but with altitude = 1 km (boulders must not be observable in this sequence). Take one image every 1 m for 30 iterations (speeds from 20 to 600 m/s at 20 Hz). The motion in the image should lie between 1 and 30 pixels. Objective: check the fractal assumption of the ground with respect to the elevation in the field of view and quantify its effect.

Ic: Similar to Ia but with altitude = 100 m (boulders should be observable in this sequence). Take one image every 10 cm for 30 iterations (speeds from 2 to 60 m/s at 20 Hz). The motion in the image should lie between 1 and 30 pixels. Objective: as Ib, plus analysis of the performance of extraction and tracking on boulders.

II: Place the image plane parallel to the ground at an altitude of 2000 m. Apply a rotation around an axis perpendicular to the optical axis. Take one image every 0.2° from 0° to 5° and one image every 2° from 5° to 21° (about 33 images). Objective: quantify the effect of image distortion due to rotations in the focal plane (rotations around the optical axis may be simulated using a single image).

IIIa: Place the camera at the High Gate (altitude = 10 km), the optical axis being vertical, and apply a translation along the optical axis. Take one image every 10 m for 20 iterations (speeds from 200 to 4000 m/s at 20 Hz).

IIIb: Similar to IIIa but with altitude = 1 km. Take one image every 2.5 m for 20 iterations (speeds from 50 to 1000 m/s at 20 Hz).

IIIc: Similar to IIIa but with altitude = 100 m. Take one image every 25 cm for 20 iterations (speeds from 5 to 100 m/s at 20 Hz). Objective of IIIa-IIIc: analyse the effect of ground-approach rates.

IV: Apply a constant complex motion to the camera, combining translations, rotations and zoom effects. Sequence duration: 2 s; frequency: 20 Hz; altitude = 2000 m; translation in the image plane: Vx = Vy = 100 m/s; translation along the optical axis: Vz = 100 m/s; rotation perpendicular to the image plane: Ωx = Ωy = 5°/s; rotation around the optical axis: Ωz = 10°/s. Objective: quantify the effect of a complex combination of motions.

Va: Place the camera at the High Gate (altitude = 10 000 m) and acquire images for 4 sun elevations: 2°, 30°, 60° and 90°. Objective: impact of the sun elevation on the appearance of the ground surface and the stability of extracted points.

Vb: Similar to Va but at the Low Gate (altitude = 100 m). Objective: as Va, with boulders.

VIa: At an altitude of 2000 m and a sun elevation of 5°, apply a rotation around the optical axis and take one image every 1° for 20 iterations.

VIb: Similar to VIa but with a sun elevation of 80°.

VIIa: At an altitude of 2000 m and a sun elevation of 5°, apply a translation along the optical axis and take one image every 5 m for 20 iterations.

VIIb: Similar to VIIa but with a sun elevation of 80°. Objective of VIa-VIIb: effect of the sun elevation on repeatability for the most demanding motions (rotation around the optical axis and zoom effects), and evaluation of the most interesting sun elevation with respect to the relevance of extracted points.

VIII: Part of a type II trajectory (see RD2) in which the images contain more than one hierarchical surface (nominal values may be adjusted accordingly).
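The speed ranges quoted in the sequence definitions follow directly from the sampling step and the 20 Hz acquisition frequency: pairing the reference frame with frame i gives a baseline of i × step, i.e. an apparent speed of i × step × f. A small sketch (illustrative helper, not project code):

```python
def simulated_speeds(step_m, freq_hz, n_iter):
    """Apparent speeds simulated by pairing the reference frame with frame i:
    baseline i*step_m traversed in one frame period gives i*step_m*freq_hz."""
    return [i * step_m * freq_hz for i in range(1, n_iter + 1)]

# Sequence Ia: 10 m steps, 20 Hz, 30 iterations -> 200 ... 6000 m/s
speeds = simulated_speeds(10.0, 20.0, 30)
```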


Representativity of the scenes at IP level

In the context of image processing, the validation of PANGU images cannot be based on a visual analysis of realism, as a planetology scientist would perform. The image processing tasks analysed in the NPAL project (extraction and tracking) work on local measures of the radiometry: the principle is to extract local information from small windows of the image (typically 10×10 pixels). The performance of the algorithms is thus very sensitive to the local texture.

A way to validate the local radiometry of PANGU is to compare the behaviour of selected algorithms on both PANGU and real sequences. A preliminary analysis was performed on the available Apollo and PANGU images, concluding that the algorithms exhibit the same behaviour on both (see [3]). Several test cases were run to compare the behaviour of the IP algorithms on PANGU and Apollo images:

Sensitivity test cases and parameter values:

• MTF: Gaussian blur, σ from 0 to 5, step 0.5

• Noise, Gaussian (random): σ = 0.025, 0.050, 0.100

• Noise, salt & pepper (random): density = 0.01, 0.05, 0.10

• Translation, horizontal (x): 0 to 30 pixels, step 3

• Translation, vertical (y): 0 to 30 pixels, step 3

• Translation, diagonal (x, y): H: 0 to 30 step 3, V: 0 to -29 step 3

• Rotation: 0° to 90°

• Scale (zooming): -5% to +20%, step 2%

• Illumination: -30% to +30%, step 6%


For each of these test cases, the following quality measures were computed for both Apollo and PANGU images: NEP, SDP, RBF, MSS and transition probabilities. A preliminary analysis of the translation results (see [4] for the complete report) reveals that, for every algorithm, all these measures differ only slightly between the two image sets. This small difference may be explained by the difference in camera models (the Apollo images are visibly more blurred than the PANGU ones); the overall tendency is nevertheless well respected. The only exception is the SUSAN algorithm, whose behaviour depends much more on the image used. This may be explained by the fact that, unlike the other algorithms, SUSAN applies no prior filtering.

Remark: since the camera model of the real sequences is not known, only relative comparisons can be performed, because the simulated and real images do not correspond to the same camera model.
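Since the exact definitions of the quality measures live in the referenced reports, the following sketch only illustrates the idea behind a repeatability-style measure such as RBF: the fraction of points extracted in one image that reappear, within a tolerance, among the points extracted in the other. The function name and tolerance are assumptions.

```python
def repeatability(points_a, points_b, tol=1.5):
    """Fraction of points in points_a that have a neighbour in points_b
    within tol pixels -- an assumed, simplified stand-in for the report's
    RBF stability measure (exact definition in the referenced reports)."""
    if not points_a:
        return 0.0
    hits = 0
    for (xa, ya) in points_a:
        if any((xa - xb) ** 2 + (ya - yb) ** 2 <= tol ** 2
               for (xb, yb) in points_b):
            hits += 1
    return hits / len(points_a)
```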

3.1.6 Algorithms benchmarking and performance evaluation

Recall that the image processing chain is a succession of three steps:

• Criterion map: Harris, Beaudet, Moravec, SUSAN, Loupias, Spoke, Tomasi

• Selection: global, local

• Tracking: correlation, optical flow

The selection process consisted in:

• evaluating the individual performances in order to propose the most adequate strategy;

• confirming the expected performance on worst cases.

Criterion map

A crucial factor in the choice of the criterion map is the RBF criterion:

RBF        Txy    Tz (2%)   Rz (10°)   Noise (SNR=70)   Illumination (6%)
Harris     0.97   0.4       0.4        0.6              1
Beaudet    0.98   0.5       0.36       0.4              1
SUSAN      0.97   0.2       0.25       0.3              0.87
Tomasi     0.94   0.4       0.75       0.9              0.35
Moravec    0.97   0.32      0.05       0.15             0.97
Spoke      0.62   0.1       0.1        0.1              1
Loupias    0.04   0.05      0.05       0.25             0.75


This table shows that Harris, Beaudet and Tomasi have the best stability of extracted points.


The following curves show the sensitivity of Harris with respect to the most critical distortion sources:

Figure 3-2: Kinematic distortions, sensitivity of RBF to Tz (%) and Rz (°). About 50% of points are repeated with a 2% approach or a 10° rotation.

Figure 3-3: Radiometric distortions, sensitivity of RBF to the noise level: Gaussian noise versus SNR (left) and dead-pixel fraction in % (right).

Figure 3-4: Sensitivity of RBF to MTF variations (%) between consecutive frames.

We observed that RBF is very sensitive to kinematic and radiometric distortions, but within a comfortable domain of camera parameters the Harris extractor shows interesting performance. The following table summarises the expected performance of Harris on a reference case:


Reference case            RBF
Rz 0.05°                  0.98
Tz 2%                     0.4
Gaussian noise SNR=70     0.6
MTF variations 5%         0.9
Total                     0.21

The total was computed assuming multiplicative contributions.
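The multiplicative model can be checked directly (values taken from the table above):

```python
# Reference-case RBF contributions; the total assumes the distortion
# sources act independently (multiplicative model, as stated in the text).
contributions = {"Rz 0.05 deg": 0.98, "Tz 2%": 0.4,
                 "Gaussian noise SNR=70": 0.6, "MTF variations 5%": 0.9}
total = 1.0
for v in contributions.values():
    total *= v
print(round(total, 2))  # 0.21
```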

Point selection

Two types of point selection over the criterion map were compared:

• Global selection, by means of a global threshold over the criterion map

• Local selection, by means of local maxima

A comparison of the SDP criterion in both cases clearly revealed the advantage of local maxima over global selection.

SDP        Global maxima   Local maxima
Harris     0.52            0.8
Beaudet    0.69            0.82

Figure 3-5: Detection of global maxima (left) and of local maxima (right).


We also noted that local maxima improve the good-match probability relative to the false-match probability, the correlation surface being more salient in such cases. Point aggregations are also avoided.
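A minimal sketch of local-maxima selection over a criterion map (an 8-neighbourhood non-maximum suppression; the threshold and the strict-uniqueness rule are illustrative choices, not the FEIC implementation):

```python
import numpy as np

def local_maxima(criterion, threshold=0.0):
    """Return (row, col, value) for pixels that are the unique maximum of
    the criterion map over their 8-neighbourhood and exceed a threshold,
    sorted by decreasing value (take the first N for an N-best selection)."""
    h, w = criterion.shape
    out = []
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            v = criterion[r, c]
            nb = criterion[r - 1:r + 2, c - 1:c + 2]
            if v > threshold and v == nb.max() and (nb == v).sum() == 1:
                out.append((r, c, v))
    return sorted(out, key=lambda t: -t[2])
```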

Tracking

The good-match probability was evaluated on two algorithms: correlation maxima computation and the Tomasi optical-flow approach.

P(GM)                    Tz (2%)   Rz (10°)   Noise (SNR=70)
Harris + correlation     1         1          0.99
Tomasi + optical flow    0.89      0.96       0.99

Correlation proved to be more robust to kinematics and illumination conditions.
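The correlation-based tracker can be sketched as a normalised cross-correlation scan over a prediction window. This is a simplified stand-in for the FEIC correlation function; the search radius and patch handling are assumptions.

```python
import numpy as np

def track_point(template, image, pred, search=5):
    """Find the integer-pixel homologue of a point: scan the prediction
    window around `pred` and keep the position maximising the normalised
    correlation between the stored texture patch and the image."""
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -np.inf, pred
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = pred[0] + dy, pred[1] + dx
            patch = image[y:y + th, x:x + tw]
            if patch.shape != template.shape:
                continue          # prediction window partly outside the image
            p = patch - patch.mean()
            denom = np.sqrt((t * t).sum() * (p * p).sum())
            score = (t * p).sum() / denom if denom > 0 else -np.inf
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best
```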

3.1.7 Conclusion

We then had to choose between two approaches.

The first approach consists, at each step, in extracting N points from image n, extracting N points from image n+1 and then matching both sets of points. The advantage of this approach is to limit tracking drift, but the expected track length depends strongly on the product RBF × P(GM). As an example, for RBF × P(GM) = 0.98, the expected track length is around 50. The previous evaluations demonstrated that RBF is much lower than 0.98 under typical distortion conditions, which rules out this approach.

The second approach considers every point of a predicted window as a candidate for matching; in this case RBF = 1. This is very attractive but induces a random walk whose standard deviation is around σ√N (σ being the matching standard deviation and N the track length). As an example, for σ = 0.1 pixel, a 100-frame track has a 1-pixel drift, which is quite low.
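Both numbers quoted above are consistent with a simple geometric-survival model of track length, assuming a per-frame survival probability p = RBF × P(GM). This model is an assumption that matches the figures in the text, not a stated NPAL derivation.

```python
import math

def expected_track_length(p_survive):
    """Geometric model: a track survives each frame with probability p,
    so its expected length is 1 / (1 - p)."""
    return math.inf if p_survive >= 1.0 else 1.0 / (1.0 - p_survive)

def random_walk_drift(sigma_px, n_frames):
    """1-sigma drift accumulated by sub-pixel matching noise over a track."""
    return sigma_px * math.sqrt(n_frames)

print(round(expected_track_length(0.98)))  # 50 frames, as quoted for approach 1
print(random_walk_drift(0.1, 100))         # 1.0 pixel, as quoted for approach 2
```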

As a consequence, the chosen approach is:

TRACKING
1 For every point p of image n-1, let G be the prediction window
2 For every point m of G: 2.1 compute the correlation between p and m
3 The homologue of p is the point that maximises the correlation

EXTRACTION
1 Let WN be the window in which new points are extracted in image n
2 Evaluate the Harris criterion over WN
3 Identify the local maxima
4 Reject the maxima coinciding with old tracks
5 Select the N best maxima


Expected robustness

The following curves show the sensitivity of P(GM) to kinematic distortions:

Figure 3-6: Sensitivity of P(GM) to Rz (°) and Tz (%).

For an 8° rotation or a 20% approach, the algorithm is able to match two points with a probability better than 0.99, which induces an expected track length of around 100.

The following table shows the robustness to noise conditions:

         Gaussian noise (SNR=70)   Dead pixels (0.1%)
P(GM)    0.995                     0.979

The following table summarises the performance on a reference test case.

Reference case      P(GM)
Rz 0.05°            1
Tz 5%               1
Noise SNR=70        0.995
Total               0.995

The total was computed assuming multiplicative contributions. It corresponds to an expected track length of about 200.

3.2 HW/SW CO-DESIGN

3.2.1 Hardware/software functions co-design

The next figure presents the FEIC extraction and tracking function, which is in charge of providing the image measurements. It contains the hardware part of the algorithms.

Inside the FEIC, point coordinates are integer values. As a consequence, outside the FEIC, the absolute location of points must be updated according to a sub-pixel interpolation: the FEIC provides the homologue of a point with integer coordinates (the nearest pixel, not exactly the tracked point), and the observed motion of the point must then be applied to its sub-pixel location, which is stored in the OBC.

The Harris detector computes a criterion value for every pixel of the image, producing a criterion map whose dimensions are those of the processing window. The retained selection algorithm extracts the local maxima of the criterion map; the output of the local-maxima function (LM) must also contain the neighbouring criterion values, in order to refine the position of the extracted point if required, or to perform a second selection step based on the local criterion map.

The N-Best function provides the N local maxima with the highest criterion values.

Inherent to the choice of new points is the constraint that old points must not be selected again (correlated_points are the points output by the correlation function). The distance computation must be as simple as possible; the nominal distance measure is the sum of the differences in x and y.
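A sketch of this rejection rule; the min_dist threshold is an illustrative assumption, since the text only fixes the distance measure as the sum of the differences in x and y:

```python
def reject_near_tracks(new_points, tracked_points, min_dist=3):
    """Drop newly extracted maxima that coincide with existing tracks,
    using the simple |dx| + |dy| distance mentioned in the text."""
    kept = []
    for (xn, yn) in new_points:
        if all(abs(xn - xt) + abs(yn - yt) >= min_dist
               for (xt, yt) in tracked_points):
            kept.append((xn, yn))
    return kept
```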

The objective of the List Management function is to store the list of new and tracked points with the corresponding textures. The "clever" part of the list management is performed inside the OBC, which is in charge of deciding:

• The points to keep

• The points to correlate

• The points to reject

LT and LTC correspond to the same object at different stages of the process: LT is the list of points before correlation, LTC the list after correlation. T is simply the concatenation of LN and LTC, without the textures; it is the output of the FEIC.

The OBC may indicate to the FEIC that some points must be removed from the LT list. The OBC also indicates that new points must be added to LT (either by filling the holes left by removed points, if any, or by appending points).

The OBC updates the x, y, dx and dy fields in LT.


Care must be taken that the FEIC is the reference for the list of points; the OBC receives a copy of the points, and its responsibility is to give:

• commands to move points from the LN to the LT list,

• commands to update the x, y, dx and dy fields in LT,

• commands to delete, freeze or unfreeze points of the LT list.

Before the FEIC implementation in VHDL, the FEIC algorithms were implemented in C++ (feic_sim): first in floating point, then using 32/64-bit integers, and tested on Apollo and PANGU image sequences. feic_sim accepts an image sequence and simulates the FEIC/OBC; it was later extended with register-level simulation facilities.

The hardware components were designed in stages, starting with the feature extractor, the largest task: the component designs were implemented in register-level C++, feic_sim checked the C++ results against the register-level simulation, and the port signals of the register-level C++ design were logged to file.

The VHDL implementation then used the C++ port signals as test-bench stimuli: inputs were supplied on each clock cycle and outputs validated on each cycle, and any differences were used to update the C++ and/or the VHDL as required.

[Block diagram of the FEIC extraction and tracking function, with labels recovered from the original figure: the image I feeds HARRIS, which produces the criterion map C; LOCAL MAXIMA (LM) and N-BEST (LMS) perform the extraction, and REJECTION yields the new-point list LN. In parallel, the CORRELATION SURFACE function produces the tracked list LTC. LIST MANAGEMENT assembles LN and LTC into the output list T. Other figure signals: G, W, L.]


During the FEIC VHDL development, a C++ command-line SpaceWire driver program was developed in parallel to allow FEIC registers to be read and written. The C++ simulation/validation program was updated to generate a script of register read/write commands that can be executed by the SpaceWire driver program, including checks on the number of points extracted, time stamps, etc.

The C++ simulation program was used to generate a script for a specific image sequence; the same sequence and script were then fed to the command-line driver, validating the results read back from the FPGA hardware. A similar script was used in the ModelSim tests.

Figure 3-7: Hardware/software co-design. The FEIC processes the camera image and produces the T-List; the software side returns the G-List and performs the sub-pixel tracking, computes the navigation aidings and runs the navigation filter.

The decision whether a function should be written in VHDL and integrated in the FEIC, or implemented in a high-level language, was driven by several factors, summarised here:

• Floating-point computation must be avoided in a hardware function; when a calculation requires floating point, do it on a processor.

• Repetitive functions operating on the same variables suit hardware.

• Branch-free, sequential functions are efficient in VHDL.

• Interfaces should be simplified; if possible there should be only one interface between hardware and software, mainly to ease validation, because hardware/software interfaces are generally difficult to verify.

• A first filtering step should be placed as close as possible to the camera, in order to decrease the transmission bandwidth.

In the present design, the entire FEIC function has been developed in VHDL, while the sub-pixel tracking, the navigation aidings and the navigation filter have been developed in a high-level language. The interface is the standardised SpaceWire serial link.

3.2.2 Hardware/software interface co-design

The optical signal is the image taken by the APS.


The optical signal is digitised and stored in memory. Pixel failures are corrected in the camera before the data are transferred to the flip-flop buffer.

The picture is 1024 × 1024 pixels; each pixel is 10 bits wide.

The date of the picture needs to be transferred to the FEIC and OBC.

The OBC and the camera can be synchronised by a common timer function located on the OBC. One solution is to use the time code of the SpaceWire link: a time value is sent to the camera processor in order to program a timer, and the programming of the camera timer is triggered by the time-code event on the SpaceWire link. This solution gives good time accuracy, because the SpaceWire protocol prioritises time-code messages.

The timer can be refreshed every second.

The numerical picture interface allows sending data from the camera to the FEIC, each picture is dated by the camera.

The date gives the number of milliseconds of the day; 30 bits are sufficient to encode this value, and it is reset every 24 hours.

The date is coded in CCSDS format. The most significant bits (bits 30 and 31), which are not needed for millisecond dating, are set to 0.
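The 30-bit budget works because a day contains 86 400 000 ms, which is below 2^30. A sketch of the packing (the exact CCSDS field layout is not reproduced here; only the bit budget and the zeroed top bits are shown):

```python
def encode_ms_of_day(ms):
    """Pack the millisecond-of-day date into a 32-bit word: a 30-bit field
    (86_400_000 < 2**30), with bits 30 and 31 left at 0 as in the text."""
    assert 0 <= ms < 24 * 3600 * 1000
    return ms & 0x3FFFFFFF  # top two bits remain 0

def decode_ms_of_day(word):
    return word & 0x3FFFFFFF
```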

The picture requirement is currently 1024 × 1024 pixels, but the camera can also deliver a 768 × 768 pixel window.

The interface between the VBNC camera and the FEIC is a flip-flop (ping-pong) buffer. Each buffer is 2 Mbytes, enough to store a 1024 × 1024 image with 10-bit pixels.

The memory manager stores the image in the buffer starting at address 0; care must be taken that the first eight addresses store the image date.

The pixel memory manager fills one flip-flop buffer with pixels coming from the APS while the second buffer is read by the FEIC. When the pixel memory manager has finished its transfer, it toggles the ping-pong signal and starts storing a new image. When the ping-pong signal toggles, the FEIC DMA starts a new access to extract feature points; when the FEIC receives the start-tracking command, it starts the tracking function. Both functions must be finished before the ping-pong signal toggles again.

The FEIC is divided into 4 main functions:

1 The feature point extracting function

2 The feature point tracking function

3 The list management

4 The SPACEWIRE mini-router.


Figure 3-8: Data flow in the FEIC function. The feature extractor produces the selected feature list (LMS); the feature rejecter compares it with the tracked points and keeps only new points, yielding the new list (LN). The feature tracker processes the to-be-tracked list (LT) and produces the tracked list (LTC). LN and LTC are concatenated into the transfer list (T-List) sent to the on-board computer, which returns the good list (G-List): the points to keep, the LN points to move into the LT list, and the predicted positions and position errors.

The FEIC manages the list of feature and tracking points. The role of the navigation software is to modify this list, which is always kept in the FEIC; the navigation computer may hold a copy, but the master list of points always resides in the FEIC function.

At the beginning, when the first image is available, only a list of new points is transmitted from the FEIC to the host computer; no tracking analysis is, of course, done. The navigation software decides which points to keep by ordering the transfer of some of them from the new-points list to the tracking-points list; it also gives the predicted position and error, and the status to delete, freeze or unfreeze each point.

When a new image is available, the FEIC transmits the information to the navigation software, which answers by ordering the start of tracking. This command indicates that no more points will be modified on the previously received list.

The feature point analysis and the tracking point analysis are done in parallel; the result of the tracking analysis is a list of points with the strengths of the neighbouring points and the status of each point (validated, frozen or not, tracked or not).

The list of new points is compared to the list of tracked points, and equivalent points in both lists are eliminated from the new list. Both lists are concatenated and transmitted to the navigation software, and a new cycle begins with the transfer of interesting points from the new list to the tracking list.

The cycle is short: at 20 Hz, the timing is limited to 50 ms. Several measurements show that the feature and tracking analyses complete in less than 16 ms, provided that the start-tracking command is transmitted immediately after a new image becomes available.

The necessary theoretical bandwidth to communicate from FEIC to the navigation software is

(3 + 50*7 + 200*15)*32*20 ~ 2.14 Mbits/s

The G LIST theoretical bandwidth can be calculated as

(53 +203+203+203 +3)*32*20 ~ 450 Kbits/s
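These figures can be reproduced from the word counts (a 3-word header, up to 50 new points of 7 words, up to 200 tracked points of 15 words, 32-bit words at 20 Hz):

```python
def list_bandwidth_bps(words, word_bits=32, rate_hz=20):
    """Bandwidth of a list transmitted once per frame."""
    return words * word_bits * rate_hz

# T-List: 3-word header + 50 new points (7 words) + 200 tracked (15 words)
tlist_words = 3 + 50 * 7 + 200 * 15        # 3353 words
tlist_bps = list_bandwidth_bps(tlist_words)  # 2_145_920 b/s, ~2.14 Mbit/s

# G-List word counts as given in the text
glist_words = 53 + 203 + 203 + 203 + 3     # 665 words
glist_bps = list_bandwidth_bps(glist_words)  # 425_600 b/s
```

The exact G-List product is 425.6 kbit/s, consistent with the ~450 kbit/s order of magnitude quoted above.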


The T-List is presented in Figure 3-9. It consists of 3 main parts:

The first part is 3 words wide:

• The number of words in the packet

• Number of new points and tracked points

• Image dating

The second part is the list of new points and can vary from 0 to 50 points; each entry contains:

• Texture handle

• Position

• Pixel strength

• Pixel strength to north

• Pixel strength to south

• Pixel strength to west

• Pixel strength to east

The third part is the list of tracked points and can vary from 0 to 200 points; each entry contains:

• Texture handle

• Position

• Point error

• Last correlation date

• Reserved word

• Pixel strength

• Pixel strength to north

• Pixel strength to south

• Pixel strength to west

• Pixel strength to east

• Pixel strength to north west

• Pixel strength to north east

• Pixel strength to south west

• Pixel strength to south east

• Correlation result status


This list is sent automatically by the FEIC; nevertheless the host computer can request more detailed information, such as the texture of a feature point.

Figure 3-9: T-List array. List header (3 words): number of words in the packet, number of LN and LT points (one byte each), and the image time stamp. List of new points (NC × 7 words): texture handle, feature position (x, y), feature point strength H, and the pixel strengths to north, south, west and east. List of tracked points (NT × 15 words): texture handle, feature position (x, y), feature point error (dx, dy), last correlation time stamp, a reserved word (0xDEAD), the correlation strengths of the feature point and of its eight neighbours (north, south, west, east, north-west, north-east, south-west, south-east), and a status word. All fields are 32-bit words.
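For illustration, the 3-word list header can be packed as follows. The field positions are assumptions read off Figure 3-9 and little-endian byte order is chosen arbitrarily; this is not a normative layout.

```python
import struct

def pack_tlist_header(n_new, n_tracked, timestamp_ms):
    """Pack an assumed 3-word T-List header: total word count,
    point counts (one byte each, per Figure 3-9), and the image date."""
    assert 0 <= n_new <= 50 and 0 <= n_tracked <= 200
    n_words = 3 + 7 * n_new + 15 * n_tracked
    counts = (n_new & 0xFF) | ((n_tracked & 0xFF) << 8)
    return struct.pack("<III", n_words, counts, timestamp_ms & 0x3FFFFFFF)
```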

To decrease the transmission delay, a SpaceWire port has been integrated into the FEIC, and the command protocol enables access to the FEIC registers.

Camera commandability is considered part of the data handling system, while FEIC commandability is part of the navigation control. Communication with the camera, which is generally done only at initialisation, therefore uses a high-level protocol; the CCSDS protocol is actually used to communicate with the camera.

The FEIC uses a lower-level protocol that allows fast and simple communication with the hardware FEIC registers through dedicated commands, with several register command types that simplify the dialogue.

To simplify the interface, the camera and the FEIC each have a dedicated port; a SpaceWire mini-router has therefore been implemented in the FEIC to give access to both functions from a single external connector.


During the FEIC validation phase, the registers can be accessed in order to fill the memory with a picture over the SpaceWire link; once the camera is integrated into the VBNC camera, this command is disabled by hardware control.

Figure 3-10: FEIC communication interface. A SpaceWire mini-router (ports 0 to 5) connects the prime and redundant SpaceWire links to the OBC and the SpaceWire link to the camera to three internal ports: the router configuration port, the FEIC register access port (image test DMA registers, feature list registers, control/status registers) and the event/data port with its event/data generator.


3.3 FPGA IMPLEMENTATION & VALIDATION

This section of the report highlights the development, VHDL implementation and validation of the FEIC FPGA design.

3.3.1 Image Processing Timing Constraints

The FEIC FPGA was designed to receive 10-bit grey-scale mega-pixel images from the camera FPGA at a rate of 20 frames per second (20 Hz) via double-buffered image memory. Whilst the FEIC is processing an image frame from one buffer the camera will be writing to the other image buffer. Immediately following the reception of a new image frame the FEIC must begin the feature extraction process and must complete this within 62% of the 50 ms frame time, i.e. 31 ms. At some point during feature extraction the OBC will instruct the FEIC to begin feature tracking; this operation must be completed within 20% of the frame time, i.e. 10 ms. Following successful extraction and tracking the FEIC must reject any newly extracted feature point that is found to be too close to any of the tracked points. This operation must not take more than 4% of the frame time, i.e. 2 ms. Feature extraction and tracking must take place in parallel, with the results being made available to the OBC within 66% of the frame time, i.e. 33 ms after the frame arrived.

The time between the transmission of the list of tracked points (T-List) to the OBC and the reception of the list of good points for tracking (G-List) from the OBC is needed by the OBC to evaluate the T-List and to prepare the next G-List. As a result the OBC will normally instruct the FEIC to perform feature extraction on a subregion of the image buffer. These constraints are shown graphically in Figure 3-11. Note that the FEIC does not allow feature tracking on the first image frame following a device reset.

Figure 3-11: Frame timing diagram (after a reset, the first frame undergoes extraction, rejection and T-List transmission only; from the second frame onwards each frame interleaves G-List reception, tracking, extraction, rejection and T-List transmission)

A further constraint on the system design was that the double-buffered image memory ought to be read at no more than the pixel rate, i.e. 20 MHz. Since the camera stored two pixels in each word it was possible to read the image memory at 20 MHz while providing two parallel streams of pixels, each at 20 MHz, for the feature extractor and the feature tracker.

These constraints drove the design and implementation of the chosen feature extraction (Harris) and feature tracking (correlation) algorithms. Additional constraints imposed by the size of the target FPGA (a Xilinx Virtex-II V6000) and the hardware resources that it provided led to significant effort being made to optimise the final design. The FEIC was required to extract up to 50 feature points and to track 200.

3.3.2 Internal FEIC Components

The internal FEIC design consists of an interface to the image memory buffers which provides six different pixel streams to the rest of the chip. Two pixel streams (one for reading and one for writing) allow the OBC and test equipment to access the image memory directly via the FEIC controller and register file; two further pixel streams are provided to the feature extractor, one to the feature tracker and one to the list manager. A SpaceWire mini-router provides the external interface between the FPGA and the OBC. The top-level blocks of the FEIC are shown in Figure 3-12 with arrows representing the main data paths. The off-chip arrows to the OBC and camera represent SpaceWire links.

Figure 3-12: Internal structure of the FEIC (image memory interface feeding the list manager, extractor and tracker; FEIC controller and register file; SpaceWire mini-router linking the FPGA/ASIC to the camera and the OBC)

The FEIC presents a simple register/memory interface to the OBC: extraction and tracking parameters, FEIC status and feature point lists are all accessible by reading and writing specific FEIC registers. A set of special command registers allows the OBC to instruct the FEIC to perform certain actions. For example, a write to one particular register will be interpreted by the FEIC as an instruction to begin feature extraction using the parameter values stored in other FEIC registers. A variety of test facilities are provided to assist with ground-based testing of the FEIC, such as timers to measure extraction and tracking times. When a camera is not connected to the FEIC image memory buffers the OBC can send test images to the image memory via the FEIC. The FEIC can then be instructed to process the image as if it had been sent by the camera.

Figure 3-13: FEIC Image Processing Data Flow (extraction path: pixels → gradients → three convolutions → Harris → local maxima → select/sort; tracking path: pixels → average → correlate → global maxima)

The extractor unit shown in Figure 3-13 consists of a component to compute image gradients, three others to perform a smoothing convolution operation, a Harris corner detector, a local maximum filter and a large component to store the top 50 points with the largest Harris value. The feature tracker unit consists of a component to compute the average value of the pixels in the prediction gate, a correlation unit containing fully pipelined square root and division functions and a global maximum filter to identify the point in the prediction gate with the largest correlation strength. The relationship between these components is shown in Figure 3-13. The list manager shown in Figure 3-12 accepts new feature points from the feature extractor, drives the feature tracker and liaises with the OBC. The camera is only allowed to communicate with the FEIC FPGA via the image memory interface: the FEIC/camera SpaceWire link is only to be used to communicate with the OBC.

3.3.3 External (SpaceWire) View

The FEIC FPGA has six SpaceWire ports: three internal (virtual) ports and three external (physical) ports. A diagram is shown in Figure 3-14. Internal port 0 is used to configure the SpaceWire mini-router embedded in the FEIC FPGA. This can be used to start and stop the external links, to control their speed and behaviour etc. External ports 1-2 provide physical SpaceWire links between the FEIC/Camera unit and the OBC while external port 3 provides a physical SpaceWire link between the camera and the OBC via the mini-router.

Figure 3-14: FEIC SpaceWire Ports (internal port 0: router configuration; external ports 1-2: prime and redundant SpaceWire links to the OBC; external port 3: SpaceWire link to the camera; internal port 4: control port; internal port 5: event/data port)

Internal port 4 is the control port of the FEIC and connects directly to the FEIC controller and register file of Figure 3-12. Internal port 5 is used by the FEIC to send the feature point lists along with small event messages to the OBC.

3.3.4 Development and Continuous Validation

The development of a large FPGA design involving mathematical image processing algorithms such as the Harris detector and the cross-correlator led to a careful and incremental design and validation process. This began with an implementation of the key feature extraction and tracking algorithms using floating-point arithmetic in C++, followed by initial testing and performance evaluation. The result was a FEIC simulator program that would accept a sequence of images as input and perform automatic feature extraction and tracking using a simple controller to represent the OBC. The simulated OBC assumed that each feature point would be at the same position in the next image frame as in the previous one: provided that the motion of the feature point was not enough to leave the prediction gate, the correlator would be able to identify the new position of the feature point and therefore provide a new location for the next prediction gate. The tracked list was kept as full as possible, with any empty spaces filled with the strongest newly extracted feature points. Points that failed to be tracked were deleted. This system was tested on various synthetic image sequences obtained from PANGU models of Mercury. It was also tested on some (poor quality) Lunar image sequences from the Apollo space programme. The next step was to produce a comparable C++ implementation using 32-bit and 64-bit integer arithmetic and to compare its behaviour with the initial floating-point implementation on the test image sequences.

Following the successful validation of the integer algorithms in the FEIC simulator, specific hardware feature extraction components were designed down to the register level starting with the gradient computation unit needed for the Harris algorithm. Each component was then implemented as a C++ thread designed to mirror the final hardware implementation. In particular, each C++ simulation component had the same set of input and output ports as the hardware unit and executed under control of a simulated hardware system clock. On every (simulated) clock cycle the sampled inputs and computed outputs of the C++ component were logged to an output file. The C++ component was then used in parallel with the “normal” C++ code for image processing within the FEIC simulator. The results from the C++ simulated hardware unit were continuously compared against the output of the “normal” image processing code to ensure that they were identical at all times.

Figure 3-15: Development and Validation Approach (C++ floating-point FEIC simulation → C++ 64-bit integer FEIC simulation → C++ FEIC hardware simulation → VHDL FEIC hardware components; each step is validated against and refined from the previous one, with the hardware simulation generating test data for the VHDL test bench)

After making any refinements to the design of the hardware component simulated in C++, to ensure that it behaved exactly like the “normal” integer C++ implementation, the design was implemented in VHDL. For each component a separate VHDL test bench was created which supplied the VHDL component with a set of signal values for each input port on every clock cycle and validated the signal values of each output port on every clock cycle. The input signal values and the expected output signal values used for validation were read directly from the log files produced by the C++ hardware simulation.

This process of designing hardware components, implementing them in the C++ simulator to validate the algorithmic/functional behaviour and then implementing them in VHDL ensured that all the VHDL components behaved exactly like the original C++ implementations. The incremental approach allowed the complete FEIC design to be constructed in small steps and its behaviour to be tested continually throughout the development process. This design process is summarised in Figure 3-15.

During the final stages of development, when all the FEIC algorithms were implemented on the FPGA, the C++ FEIC simulation program was used to perform high-level validation of the complete design using the Transtech test board containing a Xilinx Virtex-II V6000 FPGA and double-buffered image memory. The FEIC simulation program was supplied with an image sequence as input. The program recorded the commands used to control the simulated FEIC in a script file which could be replayed to the FPGA on the test board. The expected high-level outputs (lists of extracted feature points and tracked points) obtained from the simulated FEIC could be compared directly with those obtained from the FPGA. A similar technique was used to generate the test scripts for the chip-level VHDL test bench of the FEIC FPGA.

3.3.5 Key Architectural Design Decisions

There were four main architectural design decisions. The first was to use SpaceWire instead of PCI for the link between the FEIC/Camera and the OBC. Not only did this significantly reduce the number of pins required on the chip package (particularly with the need for a redundant link) but it also enabled long cables to be used. This improves the design options for the spacecraft itself since the OBC/GNC unit can now be placed a long distance from the FEIC/Camera, which must be mounted on the outside of the vehicle.

The second key architectural decision was to use stream-based processing. This followed from an idea from the FEIC FPGA/ASIC algorithm feasibility study and significantly reduces the amount of on-chip memory that is required. The feature extraction sub-components store either four or eight complete image lines for a total of 44 full-width image lines of storage. Special line buffers are used to convert the single continuous stream of pixels into a continuous stream of four (or eight) parallel image lines for the filter components. The output of each component is a single pixel stream which is passed on to the line buffers of the next component in the sequence.

The third architectural decision was to use as few hardware multipliers as possible to reduce the cost and complexity of a future ASIC implementation. Multiplexing was used to reduce the number of multipliers in the correlation function: with the component running at twice the pixel rate, the numerator terms are computed on one clock cycle and the denominator terms on the next. For the Harris smoothing operation, based on 7×7 convolution kernels applied to three parallel gradient streams, the 147 multipliers were reduced to just 12 by exploiting symmetry within the Gaussian convolution kernel and by adjusting the shape of the Gaussian to reduce the number of unique coefficients within the kernel.

The final key architectural decision was to use a register-based external interface rather than designing a system that would respond to many different forms of command packet. Instead a uniform packet protocol was defined, with the packet payload carrying the register address to be accessed along with any data (for write operations). The packet header contains control bits which identify the operation type (read or write). With this scheme a small component was designed which accepts SpaceWire packets, then decodes and validates them. For valid commands the register address is set up before the write enable is strobed (for writes). On the cycle after the address is set up, the register contents are retrieved from the data bus and used for the payload of the SpaceWire reply packet.

3.3.6 External FEIC Register Interface

The FEIC external interface consists of 1024 registers divided into four segments of 256 registers. The first segment is used for configuration and control of all the FEIC features while the remaining three segments each provide write access to an aspect of the tracked point lists. The first segment of 256 registers is further subdivided into four blocks of 64 registers: command, write-only, read-only and read-write. The command registers cause the FEIC to execute a command when they are written, with the data written being used as a parameter if necessary. For example, writing to register 5 will execute the “start-tracking” command. The write-only register block is mainly used to allow the OBC to configure the 49 coefficients of the Harris smoothing convolution kernel. The read-only block provides access to status information, such as the number of feature points extracted, and to details of the hardware configuration, such as the version number of the VHDL code. Finally, the read-write block is used to store configuration parameters such as the origin and dimensions of the feature extraction window, the SpaceWire address of the OBC (for FEIC-generated event messages) and the correlation threshold.


This register system provides a consistent, simple-to-understand and easy-to-use interface which drastically simplifies the FEIC interface that needs to be implemented in the OBC.

3.3.7 FEIC Technology Snapshots: Object Stores

There is a lot of technology within the FEIC design but there are a few important features which deserve to be highlighted. The first is the use of object stores to simplify the storage and management of feature points in the feature point lists. An object store holds a number of objects of the same type where each object may be subdivided into multiple fields. When a client needs a new object it can request one from the object store and is given a handle (a small integer) in return. The client must use this handle to identify the object when reading from or writing to fields of the object within the store. Once the client has finished with the object it can be deleted, allowing the storage associated with the object to be re-used.

Within the FEIC there are three object store instances of a single VHDL component. One holds the details of the newly extracted feature points, one holds the tracked feature points and the third holds the 49 pixels of texture associated with each feature point. Whenever a feature point is extracted from an image its position, frame number, Harris strength and neighbouring Harris strengths are written to the LN object store. A free texture object is allocated for the new feature point and the texture handle written to the LN store as well. As a result the list of new points is reduced to a list of object handles, which simplifies its storage and manipulation. A similar scheme is used for the list of tracked points, with tracked feature points having Harris strengths replaced by correlation strengths plus other additional fields.

A key benefit of keeping the texture of a feature point in a separate texture store is that the 49 texture pixels don’t need to be moved around when transferring points from the new list to the tracked list: only the texture handle needs to be moved. A further feature of object stores is that when data is read from the store the handle and field number are provided along with the data itself. This allows the object store to be implemented in different memory technologies, such as RAM or registers, with different latencies without affecting the rest of the FEIC design.

3.3.8 FEIC Technology Snapshots: Extractor Select/Sort Unit

One of the most critical components in the FEIC, and one which can consume large amounts of logic, is the feature point select/sort unit. This component must receive 112-bit feature points (excluding the 49 pixels of texture) from the Harris/local maximum filter, sort them by Harris strength and retain only the N strongest features, where N can be as high as 100 for a Virtex-II V6000 target FPGA. This component is expected to process as many as 65536 points per frame, with points arriving with as little as three pixel clock cycles between them. Within this time the component must insert them into the sorted list of N feature points and write the seven fields of feature point data to the new point object store. With a 40 MHz system clock there are six cycles available: one for determining the location within the list for the new feature point (or for determining that the feature can be discarded) and four to write the seven fields of data into the object store in two parallel streams. This component is highly tuned for speed and area but still consumes 20-30% of the device. It is this component that restricts the number of feature points that can be extracted from each frame based on the target FPGA (or ASIC) size.

3.3.9 FEIC Technology Snapshots: Pixel Streams and Line Buffers

All the image processing algorithms used in the FEIC are based on small windows of 3×3 or 7×7 pixels. In a “normal” implementation these windows would be moved across the source image to generate the output processed image. However, in the FEIC the source image must be received as a single continuous pixel stream and the output image must be generated as a single continuous pixel stream. The algorithms to do this have been developed as part of this project but still rely on receiving three (or seven) parallel lines of pixels as input. This is achieved by a system of line buffers which store the last three (or seven) lines of image data received from the input pixel stream. While these lines are fed to the component in parallel, the fourth (or eighth) line is being filled from the pixel stream.

To enable the pixel coordinates to be obtained from the pixel stream when needed, and to enable filters to determine when valid pixels are being received, the pixel stream is augmented with three control signals. One signal marks the last pixel on each line, another identifies which pixels on a line are valid (as opposed to invalid border pixels), and the last identifies which are valid lines within the feature extraction window.

3.3.10 FEIC Technology Snapshots: Correlation Measure

The cross-correlation unit evaluates the intensity-insensitive correlation measure shown in Figure 3-16 over a 7×7 pixel window (N=7). In common with the other FEIC image processing components, this measure is evaluated using a fully pipelined stream-based implementation: the pipelined square root operator accepts a continuous stream of 32-bit integer inputs and produces a stream of 16-bit integer outputs with a latency of 16 cycles. The division operation is optimised to improve accuracy by shifting the 32-bit numerator left by as many places as possible before it and the 16-bit denominator are passed to the pipelined integer division unit. The I*T numerator and I*I denominator terms are computed using a shared multiplier on alternate clock cycles, with the component running at twice the pixel rate. This reduces the number of multipliers from 98 to just 49.

c(x, y) = \frac{\sum_{i=1}^{N}\sum_{j=1}^{N}\left[I(x+i,\,y+j)-\bar{I}_{xy}\right]\left[T(i,j)-\bar{T}\right]}{\sqrt{\left[\sum_{i=1}^{N}\sum_{j=1}^{N}\left(I(x+i,\,y+j)-\bar{I}_{xy}\right)^{2}\right]\left[\sum_{i=1}^{N}\sum_{j=1}^{N}\left(T(i,j)-\bar{T}\right)^{2}\right]}}

where \bar{I}_{xy} is the mean of the image pixels in the window at (x, y) and \bar{T} is the mean of the template pixels.

Figure 3-16: Correlation Measure

3.3.11 Acceptance Testing: Further Image Processing Validation

The delivered FEIC FPGA design was subjected to further validation using an independent set of tests generated by Astrium. These acceptance tests consisted of a set of images containing synthetic patterns with high and low gradients for validating the feature extraction algorithms, images with high and low variance for validating the feature tracking algorithms and some sample Mercury images for validating feature extraction and tracking together.

Three criteria were defined for measuring the performance of the feature extractor and another three for measuring the performance of the feature tracker. These included the Feature Value Error criterion, which measures the relative distribution of the error in the Harris value computed by the FEIC compared to a floating-point algorithm, and the Correlation Value Error, which measures the relative distribution of the error in the correlation strength of the FEIC relative to a floating-point algorithm. For all tests the FEIC was required to show less than 0.5% difference in the mean and standard deviation, which provided a stringent test for the implemented FEIC design.

Initial results from these tests led to some minor optimisations in the FEIC algorithms to improve their performance. Further analysis of the FEIC algorithms in light of the final results has identified areas in which the FEIC could be improved to increase the number of feature points extracted from dark images or images with low contrast. If “ideal” images of Mercury correspond to a camera gain of 1.00 then features will be extracted if the camera gain exceeds 0.07, i.e. for images more than ten times darker than the ideal. The Feature Value Error and Correlation Value Error criteria of less than 0.5% error are satisfied for images with a gain of at least 0.40, i.e. for images more than twice as dark as the ideal. These minimum gain limits could be further reduced if some changes were made to the computation paths. However, the current implementation provides results that are as good as floating-point, even if it could do better for low-contrast images.

Figure 3-17: Comparison of FEIC Harris with floating-point

An example of the performance of the FEIC integer Harris computation relative to a C++ floating-point implementation is shown in Figure 3-17. This graph plots the FEIC Harris strengths against the floating-point Harris strengths for each pixel of the Astrium acceptance test image Egrid_inf. For comparison, the line labelled f(x)=x/1024 shows the position that the FEIC Harris points ought to have if they perfectly matched the floating-point computation. The initial horizontal part of the graph is due to the fact that the FEIC clamps negative Harris strengths to zero.

3.3.12 Acceptance Testing: User/OBC Interface Validation

Before the FEIC design running on the Transtech FPGA test board was delivered to Astrium, the system was validated using a sequence of independent high-level system tests developed by Astrium. This included validation of the use of SpaceWire to interact with the FEIC mini-router for configuration and control, and validation of the SpaceWire interface to the FEIC. Further tests included packet routing, FEIC modes of operation and list management. No problems were discovered by these tests.

3.3.13 Stress Testing

A stress-test system was developed for testing the FEIC design in both the Transtech test board and the final camera design. A driver program on the test PC downloads the two images shown in Figure 3-18 to the camera image memory. The driver then enters a continuous loop: the FEIC is instructed to extract feature points from the first image (the eight corners of the two squares) and the results are verified. The image buffers are then swapped and the FEIC is instructed to extract new feature points and track the old points. The new points are verified along with the tracked points: only the four points associated with the white square can be tracked whereas the points from the grey square will be lost. This loop is intended to be left running for hours or days and can be used to identify any problems that might occur after extended use.

Figure 3-18: Stress test image pair

3.3.14 Summary

The design, implementation and testing of the FEIC FPGA implementation in VHDL has been a significant amount of work and has resulted in the delivery of a high-performance image processing chip which can extract 50-100 feature points from a 750x750 pixel window of a mega-pixel image while tracking up to 200 feature points at 20 frames per second. Larger extraction windows can be used if the OBC needs less time to process the FEIC feature point lists or if a lower frame rate is used.

The feature extraction, tracking and management facilities provide the “heavy lifting” in a single chip allowing the OBC to dedicate itself to running the GNC algorithms needed for vision-based landing. Furthermore, the use of SpaceWire as the OBC/FEIC/Camera interface enables the OBC and the camera to be located at large distances from each other providing extra flexibility in the design of the lander vehicle.

The FEIC FPGA design has an extensive feature set, including:

• configurable feature extraction window size and location within the frame buffers
• configurable prediction gate size for feature tracking
• the ability to manipulate the list of tracked feature points, including being able to freeze points so that they won’t be used for tracking but will be preserved for future use
• optional feature point texture update following successful tracking
• image memory read and write (only in test mode)
• event notification (list ready, extraction error)
• a wide range of statistics such as extraction and tracking times, list sizes etc.

Testing and validation is extremely important and has been performed continuously at every level of the design from the low-level components to the high-level components. A novel C++ hardware simulation system has enabled the VHDL components to be validated directly against the “standard” C++ Harris and cross-correlation algorithms implemented using 64-bit integer arithmetic derived from the original floating-point algorithms.

The final FEIC design consists of 36000 lines of VHDL source code with an additional 34000 lines of VHDL test benches. A further 80000 lines of C++ code were developed for support programs, the FEIC simulation programs and the testing and validation approach. A sample SpaceWire driver program which can be used to control the FEIC for test and validation purposes was written in about 6500 lines of Java.

The resulting FPGA design uses 70 hardware multiplier blocks, under 2 Mbits of on-chip RAM and every FPGA logic block on the device. The process of analysing and “compiling” the VHDL to generate a bitstream for the FPGA takes over six hours on a 3 GHz Pentium 4 with 1 GB of RAM. A breakdown of the FPGA usage is given in Figure 3-19.


Figure 3-19: FPGA Utilisation By Function — pie chart over the memory interface, image processing, list manager, SpaceWire, registers and texture store blocks (slices of 52%, 21%, 10%, 9%, 6% and 2%)


4 VBNC DEVELOPMENT

4.1 HIGH LEVEL FUNCTIONAL DESCRIPTION

The Vision Based Navigation Camera (VBNC) is composed of:

• an objective
• a detector
• the electronics for the detector control and readout
• the image processing electronics
• the electronics for the camera control and for the communication with the external world

From a functional point of view, the VBNC can be schematically represented as in Fig. 4.1-1. The light collected by the objective is converted by the detector into a digital electrical signal. The VBNC control electronics, which is in charge of the overall control of the camera, drives the detector, collects its output signal and manages all the camera operations, including the data exchange via the SpaceWire I/F and the provision of images to the image processing section, whose task is to extract from them the information necessary for the lander navigation. The camera works following the instructions from the On Board Computer (OBC), to which it sends the telemetries containing the requested information, in terms of raw images, processed data and housekeeping data.

Fig. 4.1-1 VBNC functional scheme

4.2 MAIN FEATURES

The main features of the VBNC are the following:

• FOV: 70°
• Detector:
  • Type: APS
  • Pixel number: 1024x1024
• Outputs: raw and processed images, at rates up to 20 Hz
• Functions:
  • Image processing capabilities
  • Programmable, also in flight (possibility to patch the SW by TC)
  • Checkout and self test
• I/F:
  • Power I/F: single 3.3 V
  • Dual redundant SpaceWire, 100 Mbit/s speed
  • Test connector for synthetic image injection
• Dimensions: 130 x 130 x 94 mm3
• Mass: 500 g
• Power consumption: 4 W



4.3 FUNCTIONAL CHARACTERISTICS

The VBNC is organised in operating modes, as shown in Fig. 4.3-1. Five different modes are implemented:

• Initialisation (INI) Mode
• Standby (STB) Mode
• Tracking (TRK) Mode
• Image (IMG) Mode
• SW Maintenance (SWM) Mode

Fig. 4.3-1 VBNC operating modes and transitions

At switch-on or after a reset, the VBNC enters the Initialisation (INI) Mode, where HW and SW are initialised.

After reception of the Stand By (STB) TC, the Application SW is copied from EEPROM to RAM and executed. In STB Mode the camera executes standard operations (such as HK acquisition and HC execution), manages telecommands and telemetries, and is ready to switch to any of the operative modes. The return to STB from an operative mode can be either autonomous (after a commanded number of cycles) or commanded by TC. The STB Mode is also used to carry out calibrations, by checking, on command, for defective pixels (in addition to the ones detected during the in-house calibration). This is done by looking at a uniformly black or uniformly white scene, to detect hot or cold pixels respectively. The information gathered during this operation is then used for real-time correction of defective pixels in tracking mode.
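The calibration principle described above — flagging hot pixels against a uniformly black scene and cold pixels against a uniformly white scene, then correcting the flagged pixels from their neighbours in real time — can be sketched as follows. The threshold values, the 3x3 neighbourhood correction and all function names are illustrative assumptions, not the VBNC's actual algorithm:

```python
import numpy as np

def find_defective_pixels(dark_frame, bright_frame, hot_thresh=200, cold_thresh=800):
    """Flag defective pixels from two uniform calibration scenes (10-bit data).

    Hot pixels read high against a uniformly black scene; cold (dead) pixels
    read low against a uniformly white scene. Thresholds are illustrative.
    """
    hot = dark_frame > hot_thresh      # should be near 0 on a black scene
    cold = bright_frame < cold_thresh  # should be near full scale on a white scene
    return hot | cold                  # boolean defect map, same shape as a frame

def correct_defective_pixels(frame, defect_map):
    """Replace each flagged pixel with the mean of its valid 3x3 neighbours."""
    out = frame.astype(np.float64).copy()
    for r, c in zip(*np.nonzero(defect_map)):
        r0, r1 = max(r - 1, 0), min(r + 2, frame.shape[0])
        c0, c1 = max(c - 1, 0), min(c + 2, frame.shape[1])
        patch = frame[r0:r1, c0:c1]
        valid = ~defect_map[r0:r1, c0:c1]
        if valid.any():
            out[r, c] = patch[valid].mean()
    return out.astype(frame.dtype)

# Example: one hot pixel on an otherwise uniform 10-bit sensor
dark = np.zeros((8, 8), dtype=np.uint16); dark[3, 3] = 900    # hot pixel
bright = np.full((8, 8), 1000, dtype=np.uint16)
defects = find_defective_pixels(dark, bright)
frame = np.full((8, 8), 512, dtype=np.uint16); frame[3, 3] = 1023
corrected = correct_defective_pixels(frame, defects)
print(int(defects.sum()), int(corrected[3, 3]))  # 1 512
```

The defect map is computed once, on command, and then applied frame by frame, matching the in-flight calibration followed by real-time correction described in the text.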

The Tracking (TRK) Mode is the nominal mode for feature extraction, where all the images acquired by the detector are transmitted to the FEIC (via shared memory banks), which processes them in real time.

In Image (IMG) Mode the VBNC transmits raw or processed images or data, via the SpaceWire I/F.

4.4 IMAGE PROCESSING FEATURES

In addition to the feature extraction carried out by the FEIC, the VBNC has additional, general-purpose image processing features. They are:

• Identification and correction of defective pixels
• Windowing
• Thresholding (all pixels below a commanded threshold are set to zero)
• Dynamic range reduction (from 10 bits down to 1 bit)
• Star tracking (simplified), where up to 16 stars are detected and their positions tracked at subpixel accuracy
• Horizon detection, where position and curvature of a planetary limb are measured
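Three of these general-purpose operations (windowing, thresholding and dynamic range reduction) are simple enough to sketch directly. The function names and the bit-shift implementation of the range reduction are illustrative assumptions, not the VBNC's actual code:

```python
import numpy as np

def window(frame, row0, col0, height, width):
    """Windowing: extract a commanded region of interest."""
    return frame[row0:row0 + height, col0:col0 + width]

def threshold(frame, level):
    """Thresholding: all pixels below the commanded level are set to zero."""
    out = frame.copy()
    out[out < level] = 0
    return out

def reduce_dynamic_range(frame, out_bits):
    """Dynamic range reduction from 10 bits down to out_bits (1..10),
    sketched here as dropping the least significant bits."""
    return (frame.astype(np.uint16) >> (10 - out_bits)).astype(np.uint16)

img = np.array([[1023, 512], [100, 0]], dtype=np.uint16)   # 10-bit test pattern
print(threshold(img, 200))           # pixels 100 and 0 are zeroed
print(reduce_dynamic_range(img, 1))  # 1-bit image: [[1, 1], [0, 0]]
```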

4.5 OPTICAL DESIGN

The design of the objective for the VBNC has been based on the following main drivers:

• Field Of View (FOV) of 70°
• Very low distortion (1%)

These are the two most demanding requirements, especially when requested at the same time. An additional difficulty (typical of any optical design for space applications) is the limited number of glasses that can be used, which must be selected among the available radiation-resistant materials. The resulting design meets all the requirements, including the above two. This was made possible partly by working with more relaxed values (with respect to standard objectives) for other parameters, in particular the optical aperture, which is kept small to limit the quantity of light coming from the reference light source, the sunlit surface of Mercury, which is evidently very bright.

The selected layout for the VBNC objective is reported in Fig. 4.5-1, which shows in more detail the single lenses, the overall dimensions and the ray tracing. As can be seen, there are six lenses within a very compact layout, with an overall length of less than 27 mm and a maximum diameter of 12 mm. All lenses are spherical, which simplifies manufacturing. All lens materials have been selected to ensure resistance to the space radiation environment. This also means that there is no need for additional protective windows in front of the optics.

Focal Length: 16.23 mm
Relative Aperture (f#): 8.2
Field of View (full cone): 70°
Back Focal Length: 5.79 mm
Spectral range: 0.45–0.80 µm
Optical transmittance: > 0.9 over the whole spectral range
Distortion vs. FOV: 0.18% at 10°, 0.58% at 20°, 1.02% at 30°, 0.97% at 35°
MTF at 22.7 lp/mm (best focus): on axis 0.81; at 24.5° Tang = 0.77, Sag = 0.82; at 35° Tang = 0.70, Sag = 0.82
Global uniformity: 9%
Local uniformity: 3%
Optical materials: Fused Silica (lens 1), SF6G05 (lenses 2 and 5), LAK9G15 (lenses 3, 4 and 6). With reference to Fig. 4.5-1, the lenses are numbered from left to right; all glasses are radiation resistant.
Detector: 1024 x 1024 pixels, pixel size 22.5 x 22.5 µm

Tab. 4.5-1 Optical characteristics

Tab. 4.5-1 reports a summary of the optical characteristics in nominal conditions, i.e. at ambient temperature, manufactured and assembled with zero deviations from the design values.

As can be verified, the design meets the optical requirements (FOV = 70°, MTF > 0.58, distortion < 1%, global uniformity better than 30%, local uniformity over 100 pixels better than 4%), with the exception of the distortion, which is just slightly over the required 1%. Since no specific filter is present within the optics, the spectral characteristics reported in Tab. 4.5-1 are essentially due to the spectral response of the detector.

Fig. 4.5-1 VBNC objective and baffle – lenses detail and ray tracing

The design is such that manufacturing and assembly are not critical, while at the same time being robust in terms of tolerance sensitivity.


The figures of Tab. 4.5-1 have been obtained by carrying out several optical analyses (see also Fig. 4-1), also aimed at verifying the actual performance of the objective in its operational environment:

• MTF vs. spatial frequency
• Field curvature
• Distortion
• Thermal analyses (showing that the performance requirements are met from –30°C to +70°C)
• Ghost analysis (showing that ghost effects are negligible)
• Monte Carlo tolerance sensitivity analysis (showing that standard manufacturing tolerances can be used)

Fig. 4-1 Sample analysis outputs


4.6 THERMO-MECHANICAL DESIGN

A simple mechanical layout, designed to minimise the mass and to optimise the thermal behaviour, has been implemented, starting from the following main design drivers:

• Optical axis normal to the mounting plane (for thermal reasons)
• No thermal control for the detector
• Camera fully covered with a Multi Layer Insulation (MLI) thermal blanket
• Two PCBs linked by a rigid-flex connection

The mechanical structure is shown in Fig. 4-3 and Fig. 4-4. Objective and detector are assembled within a single subsystem, in order to ensure the best optomechanical stability. The optomechanical assembly is directly attached to the sensor baseplate, which, with the optical axis normal to the mounting plane, optimises the heat exchange between the I/F and the detector.

The objective housing material is titanium, selected to ensure the best match with the thermo-mechanical properties of the glass, so as to minimise stability problems.

The focal plane assembly consists of the detector, glued on a support of TZM. This material has been selected because it not only matches the thermo-mechanical properties of the detector and of the objective, but also ensures high conductivity, allowing the detector to be kept at a temperature as close as possible to that of the I/F plate.

The overall optical assembly (objective plus focal plane assembly) is screwed onto the baseplate through a pin and a slot, to ensure a decoupling between the two parts, necessary to cope with the different thermal expansions when the camera is subjected to large temperature variations. In fact, since the baseplate is aluminium and the support is TZM, they expand differently; the slot on the support permits a mutual shift between the two parts without stress.

The baffle is thermally decoupled from the objective and is linked to the box with a low-conductance spacer. In this way the high temperatures experienced by the baffle when hit by the Sun are not transmitted to the rest of the camera.

Structural and thermal analyses have been executed to verify the robustness of the design. The structural analysis, based on an ANSYS Finite Element Model, has shown that the mechanical design withstands the launch environment, in terms of shock, sine and random vibrations, with only minor updates necessary to stiffen the box cover.

For the thermal analysis, different sets of cases have been considered, all based on the Mercury environment, taking into account the effect of the direct solar flux and the planetary IR flux:

• Steady state analyses have shown that the main problems occur for the baffle and the spacer that connects it to the box, which experience very high temperatures; reasonable temperatures have been observed for the internal components
• Transient analyses during descent have therefore also been conducted. They have shown that, provided some measures are taken at mission level (shadowing the camera with respect to direct sunlight), the temperatures of all the components (baffle included) remain at acceptable values for a 60 s descent (see Tab. 4-2)


Temperature vs. time (°C)

Item       0 s     60 s    360 s   3600 s
Baffle     26.08   156.64  333.85  342.23
Spacer     42.04   85.08   200.77  209.74
Box        60.18   60.24   77.88   83.92
Board      60.30   60.33   63.10   65.49
Housing    59.75   59.83   66.62   85.09
Lens       59.05   59.71   66.79   88.57
Detector   62.69   62.73   63.81   70.29
Support    60.46   60.48   63.10   66.97
Baseplate  60.23   60.27   63.01   65.63

Tab. 4-2 VBNC transient temperatures

Fig. 4-3 VBNC mechanical layout. Axonometric view

Fig. 4-4 VBNC mechanical layout. Sectional view


Fig. 4-5 VBNC structural and thermal models

Some elements of the structural models

Thermal model


4.7 ELECTRONIC DESIGN

The VBNC electronics, shown in Fig. 4-6, is composed of the following functional blocks:

1. APS sensor
2. VBNC ASIC
3. Processor program/data and boot memory
4. FEIC ASIC
5. Pixel memory
6. Interface drivers/receivers

Fig. 4-6 VBNC electrical scheme

The selected detector is an APS (Active Pixel Sensor) currently under development at FillFactory, in the frame of an ESA contract. The APS technology has been chosen for reasons of compactness, because detectors of this kind do not require all the ancillary circuits required by CCDs for control and readout, resulting in a net reduction of the camera electronics.



From a physical point of view, the VBNC control, image processing and SpaceWire I/F blocks are implemented by means of all-digital electronic circuits with a high integration level, achieved through two Application Specific Integrated Circuits (ASICs). In more detail:

• The VBNC control is handled by the VBNC ASIC, designed by GA, which embeds a microprocessor, a SpaceWire router and the APS HW control logic. The microprocessor, through SW specifically developed for the VBNC, acts as supervisor of the whole camera.
• The image processing is handled by the FEIC ASIC, developed by Astrium. The FEIC also embeds a SpaceWire router that provides the physical link with the external connectors, as well as a link to the router embedded within the VBNC ASIC.
• PROM, EEPROM and RAM memory banks are available for SW code storage and execution, and for data and image storage.

All the electronic parts are mounted on two boards linked with a rigid-flex connection and enclosed within a box.

The VBNC ASIC is the backbone of the camera system. This very large scale integrated logic has several tasks in the system. In particular it embeds:

• a high-performance microprocessor (the Leon II, based on the Sparc architecture) with the relevant peripherals
• the APS HW control logic
• the control of the pixel memory
• the management of the system interface and the system buses for external peripheral control
• two SpaceWire routers (only one used within the VBNC)

The processor's main task is to manage and synchronise the different camera activities. Four separate memories are available:

• Boot PROM (32K x 8 bits), implemented on a single chip
• Program EEPROM (256K x 8 bits), implemented on 2 chips
• Data and code RAM (512K x 32 bits), implemented on 4 chips
• Pixel memory (two banks of 512K x 24 bits), implemented on 6 chips

The Boot PROM contains the minimal code for system maintenance and interface management. The Program EEPROM contains the application code for the VBNC main operating modes; this memory can be patched in flight using the software contained in the Boot PROM. The EEPROM content is copied into the code RAM before execution. The Data and Code RAM is the memory for program execution and data management.

The Pixel RAM is a very large memory area, composed of two identical banks (bank A and bank B), used for storing acquired pixels. Its size is sufficient for the storage of two successive frames. This allows the operations of acquisition (by the APS scan logic), pre-processing (correction of bad pixels by the VBNC processor) and processing (by the FEIC) to be carried out in parallel and in synchrony with the pixel flow coming from the APS. This element acts as a fast internal I/F between the two ASICs.

One of the tasks of the VBNC ASIC, under supervision of the control SW, is to manage this memory through a specific signal (the PINGPONG signal) that makes the two banks available for read and write operations to the VBNC and the FEIC alternately. In TRK Mode the bank switching is enabled and the camera alternately fills the two banks with the detected images, carrying out on them the defective pixel correction.
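The ping-pong (double-buffering) scheme above can be sketched in a few lines. The class and method names are illustrative assumptions; the point is only the role swap that lets acquisition and processing run in parallel on alternate banks:

```python
class PingPongMemory:
    """Sketch of the two-bank pixel memory: while the camera side writes one
    bank, the image-processing side reads the other; the PINGPONG toggle
    swaps the roles at each frame boundary."""

    def __init__(self):
        self.banks = {"A": None, "B": None}
        self.write_bank = "A"   # bank currently filled by the APS/VBNC side

    @property
    def read_bank(self):
        return "B" if self.write_bank == "A" else "A"

    def write_frame(self, frame):
        self.banks[self.write_bank] = frame

    def read_frame(self):
        return self.banks[self.read_bank]

    def toggle(self):
        """PINGPONG signal: swap read/write roles."""
        self.write_bank = self.read_bank

mem = PingPongMemory()
mem.write_frame("frame-0"); mem.toggle()   # frame-0 now visible to the FEIC side
mem.write_frame("frame-1")                 # next frame acquired into the other bank
print(mem.read_frame())  # frame-0
```

While "frame-1" is being written, "frame-0" remains stable for the reader — the property that lets the FEIC process one image while the next is acquired.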

The estimated power consumption of the VBNC is about 4 W, even though the power measured during the tests carried out on the BB was noticeably higher. The reason for this discrepancy is that the BB uses FPGAs in place of ASICs, and two oscillators instead of one.

4.8 SOFTWARE

The software is divided into two parts. The boot SW, stored in PROM, contains the minimum code necessary for camera survival and for ensuring communication with the OBC. The application SW, stored in EEPROM, implements all the VBNC functionalities. With this configuration it is possible to update the application SW even in flight, while keeping the boot SW protected from degradation induced by the environment or by erroneous patches.

4.9 SYNCHRONISATION

The synchronisation between the OBC and the VBNC occurs through the “time update” TC and the SpaceWire “ticks”, the latter being used for fine synchronisation at millisecond accuracy. The OBC time (from the current year down to millisecond resolution, in accordance with the CCSDS standard) is passed to the VBNC through a specific TC. In addition, every second (if enabled by the OBC), a sync signal, automatically generated by the SpaceWire I/F, is sent to keep the two clocks locked.

The synchronisation is intended as a means to align the OBC and VBNC clocks, the mutual operations being in general asynchronous. This time is then used by the VBNC to tag not only the telemetries, but also the images exchanged with the FEIC, by adding to them a “Time Stamp” that allows the FEIC to identify unambiguously the acquisition time of each image.

4.10 PERFORMANCE ANALYSIS

The main parameter for determining the performance of the camera and the associated image processing is the Signal to Noise Ratio (SNR), required to be higher than 70. It is defined as follows:

SNR = (N_S − N_B) / σ    (EQ 4.10-1)

where:

• N_S is the useful signal, the output of the detector in response to the image of the terrain
• N_B is the background level
• σ is the rms noise

The SNR requirement can be met without any cooling system. It is therefore assumed (and verified by the thermal analysis) that the detector temperature is below 80°C. It has been calculated that, in these worst-case conditions, the SNR is 93, sufficiently higher than the required 70.
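EQ 4.10-1 can be evaluated directly; a minimal sketch follows. The numerical values are purely illustrative and are not taken from the report's worst-case analysis:

```python
def snr(n_signal, n_background, sigma_noise):
    """EQ 4.10-1: SNR = (N_S - N_B) / sigma."""
    return (n_signal - n_background) / sigma_noise

# Illustrative detector counts: signal 950, background 150, rms noise 10
print(snr(950.0, 150.0, 10.0))  # 80.0 — above the required SNR of 70
```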

The robustness of the VBNC operations is ensured by the following mechanisms:

• Use of EDAC-protected memories to cope with errors induced by the environment (SEUs)
• Cyclic Redundancy Check (CRC) applied during the code transfer from EEPROM to RAM
• The whole application SW, stored in EEPROM, can be updated, in part or completely, even in flight
• The survival code for initialisation and communication (not patchable) is stored in PROM (hence immune to SEUs) and tested very accurately
• A RAM scrubbing procedure is automatically activated at switch-on and after any reset
• A Watch Dog mechanism saves the SW from endless loops, by commanding an automatic reset in this case. The wait for a manual TC at the end of the INI mode (after reset) returns full control to the OBC, which can then decide to carry out a SW update (solving the problem that generated the endless loop)

The expected nominal error rate is lower than 0.01 events/day, with large margins.
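The CRC check on the EEPROM-to-RAM code transfer can be illustrated as follows. The report does not specify the CRC polynomial or width used on board; CRC-32 and the function names here are assumptions for illustration only:

```python
import binascii

def copy_with_crc_check(eeprom_image: bytes, stored_crc: int) -> bytes:
    """Copy the application code to 'RAM' only if its CRC-32 matches the
    checksum stored alongside it; otherwise refuse to execute it.
    (Illustrative: the on-board CRC scheme is not specified in this report.)"""
    if binascii.crc32(eeprom_image) & 0xFFFFFFFF != stored_crc:
        raise ValueError("CRC mismatch: application code corrupted")
    return bytes(eeprom_image)  # the verified copy that would be executed

code = b"\x01\x02\x03\x04application-code"
crc = binascii.crc32(code) & 0xFFFFFFFF
ram = copy_with_crc_check(code, crc)
print(ram == code)  # True
```

A corrupted image (e.g. after an SEU that escaped EDAC) fails the check and is never executed, which is the protection the mechanism provides.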

4.11 BREADBOARD

An elegant BB has been realised and tested. It differs from the flight version mainly in:

• the use of a commercial objective
• the presence of commercial electronics
• FPGAs instead of ASICs
• the use of a different APS detector (STAR1000 from FillFactory): radiation hardened, 1024x1024 pixels, 10-bit digital output at 12 MHz

4.12 ELECTRICAL GROUND SUPPORT EQUIPMENT

Two EGSEs have been realised:

• the VBNC SpaceWire I/F (VSIF)
• the Electrical Stimuli Generator (ESG)

The VBNC SpaceWire I/F (VSIF) essentially provides a user interface to the camera during all on-ground operations, when the VBNC is not directly connected to the OBC. In other words, the VSIF allows an operator to command the VBNC and to monitor its outputs. This is done by exploiting the standard databus of the VBNC, i.e. the SpaceWire serial line. It also allows sequences of TCs and TMs to be built and sent to the VBNC through the databus. These sequences can be stored within specific files and loaded when needed. All TCs and TMs exchanged between the VBNC and the VSIF can be monitored on the PC screen and logged to file.

The ESG is based on an acquisition board (PCI-6534, by National Instruments) hosted within a PC. By connecting the outputs of this board to the test connector of the VBNC it is possible to inject a sequence of bytes that, bypassing the VBNC detector, reproduces a sequence of consecutive images at the real frame rate, i.e. up to 20 Hz, for a maximum duration of about 30 s. The sequence to be transmitted must be prepared off-line, prior to the image transmission, and stored within the memory of the PC. It can consist of one of the following:

• A short movie built by the ESG itself
• A PANGU movie
• A sequence of static images
• Some simple shapes and patterns (such as squares, rectangles, grey scales), selected by the user and directly built by the ESG SW


The ESG SW allows the user to select one of these possibilities and translates the selected sequence into a continuous stream of bytes, stored within the I/O board memory. At the end of this activity the ESG is ready to transmit the sequence to the VBNC on a command from the user.

The ESG also allows the converse operation, i.e. the storage into the PC memory of an image sequence taken from the VBNC through the I/O board. This sequence can then be sent back to the VBNC when desired.

4.13 TESTING

The BB has been subjected to an intense test campaign. Several tests have been carried out to fully validate the design:

• Physical tests: dimensions and mass measured
• Electrical tests: bonding and insulation checked; power consumption measured
• Functional tests:
  • all mode transitions verified, by generating, through the VSIF, an automatic test sequence
  • transition time from switch-on to stand-by measured at 2 seconds, as required
  • all SpaceWire link speeds verified at the oscilloscope
  • calibration function (automatic detection of defective pixels) tested by injecting (through the ESG) several random error patterns (white pixels on black background, black pixels on white background): all bad pixels successfully detected and stored in memory (for real-time correction)
  • parameter setting capabilities verified (e.g. change of the thresholds for bad pixel detection)
  • SW downloading and uploading capabilities verified
  • synchronisation mechanisms fully validated
  • acquisition and processing chain tested with synthetic images generated by the ESG:
    • many synthetic images generated and checked pixel by pixel
    • all image processing features (windowing, thresholding, dynamic range reduction) verified
    • star tracking function validated
    • horizon detection performance measured with several (96) simulated planetary images, with variable position, radius and orientation: limb position measured at subpixel accuracy, planet radius within 2% accuracy
  • optical chain tested with real images:
    • for general functional verification
    • for an end-to-end check of the star tracking and horizon detection features, by placing the camera in front of a simulated star pattern and a simulated planetary image
• Performance tests:
  • at detector level (no optics), measuring its performance in terms of uniformity and noise:
    • dark current negligible at ambient temperature (this confirms the analyses carried out for higher temperatures)
    • very low noise values measured (below the quantisation level at 10 bits)
    • very good SNR (392 vs. 70 required) at ambient temperature
  • synchronisation accuracy within 1 ms
  • at camera level, measuring all optical parameters (distortion, MTF, uniformity), even if not representative of the flight model, but useful as inputs for the navigation model

The majority of tests have been successful. The main non-conformance is the power consumption but, as already stated in Section 4.7, the problem is mainly due to the presence, within the BB, of FPGAs in place of ASICs.


5 NAVIGATION DEVELOPMENT

5.1 NAVIGATION ARCHITECTURE

NPAL navigation is based on unknown landmark tracking. Unknown landmarks are distinctive portions of an image that can be tracked over time. Observation of the LOS variation over time provides observability of the relative position of the landmark with respect to the vehicle.

The simplest form of unknown landmark navigation is to track a single landmark. The principle is illustrated in Figure 5-1. Starting from an initial uncertainty, the first LOS measurement reduces the initial positioning uncertainties to the level of the LOS error in the two directions orthogonal to the LOS, while the range error cannot be reduced. If we assume that the landmark LOS orientation changes with time due to the vehicle trajectory, a later observation can further reduce the range error by intersection of the two observations' uncertainty ellipsoids.

This view is useful to introduce the navigation principle, but it makes simplifying hypotheses that are not completely valid. As long as the motion of the camera is known between the two observations, the concept applies; but if the motion is not completely known, observability can no longer be analysed with this simple view. In a space vehicle, an Inertial Measurement Unit (IMU) enables measurement of the position variation between the two observations. The IMU provides measurements of the vehicle angular rates and specific acceleration. To compute the vehicle position variation between the two observation points, the navigation filter integrates the state vector using these measurements. The propagated position error is equal to the initial error increased by the impact of the IMU measurement noises and of the velocity estimation error. The velocity estimation error is a key parameter limiting the rate of convergence.

So, the main question is: is it possible to estimate the full vehicle state by looking at LOS variations of unknown feature points? To answer this question, let us take a simple example where the vehicle is on a vertical landing trajectory. The descent scenario is illustrated in Figure 5-2.

The observation is the position of features in the focal plane of the camera, which is more or less equivalent to the observation of the Line Of Sight (LOS) of the features. In this first analysis, let us suppose that the measurement is the LOS separation θ between two features, as indicated in Figure 5-2.

The measurement equation is:

θ = arctan(D / x)    (EQ 5-1)

This measurement equation is non-linear, so we will consider an intermediate measurement variable ν = 1/tan(θ) to simplify the analysis. With this definition, the measurement equation simplifies to:

ν = x / D = k x    (EQ 5-2)

where k = 1/D is an unknown constant. The navigation problem is then to estimate x and D through successive measurements.
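EQ 5-1 and EQ 5-2 can be checked numerically for the vertical-descent geometry of Figure 5-2. The altitude and separation values below are purely illustrative:

```python
import math

def los_separation(x, D):
    """EQ 5-1: LOS separation between two features a distance D apart on the
    ground, seen from altitude x: theta = arctan(D / x)."""
    return math.atan(D / x)

def nu(theta):
    """Intermediate measurement variable: nu = 1/tan(theta) = x/D = k*x."""
    return 1.0 / math.tan(theta)

D = 100.0        # ground separation of the two features, metres (unknown to the filter)
k = 1.0 / D
for x in (1000.0, 500.0):            # two altitudes along the descent
    theta = los_separation(x, D)
    print(round(nu(theta), 6), round(k * x, 6))   # the two columns agree
```

The agreement of the two columns is exactly the linear relation ν = k·x that the observability argument below exploits.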


Figure 5-1: LOS to one landmark concept (initial covariance propagation using inertial navigation; LOS variations reduce the error covariance)

Figure 5-2: Vertical Landing Simple Example


After these simple transformations, the problem can be formulated as follows: the observation is equal to the distance to the ground multiplied by an unknown constant.

Without any further information, the system is not observable, since the feature points are unknown. One way to gain observability is to make use of the dynamics of the system and to observe the same feature under different conditions. The observability analysis can be made by differentiating EQ 5-2. As the features are not moving (dk/dt = 0), it follows that:

dν/dt = k (dx/dt)    (EQ 5-3)

The derivative of the observation is easy to obtain, and if the vehicle velocity dx/dt is known independently (e.g. from initial conditions propagated through the inertial navigation filter), the constant k can be estimated and consequently the distance to the ground x is obtained through EQ 5-2. However, for a landing vehicle, the relative velocity error increases as the vehicle approaches the ground (the error is constant while the velocity tends to zero), and consequently the distance estimation error diverges unless the velocity can be estimated. In order to gain velocity observability, we can differentiate the measurement equation again:

d²ν/dt² = k (d²x/dt²)    (EQ 5-4)

Thanks to the knowledge of the acceleration (through IMU measurements) and observation of the second derivative of the measurement, the constant can now be estimated independently of the velocity knowledge. Once the constant is estimated, the velocity can be estimated as well.

Without any acceleration, the constant cannot be estimated and the system is unobservable. The key feature of any lander is that there is, at some point of the trajectory, a significant level of acceleration, and consequently the vehicle state is observable.
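The observability argument of EQ 5-3/EQ 5-4 can be demonstrated with a small finite-difference sketch: given the acceleration from the IMU and three successive ν measurements, k is recovered from EQ 5-4 and the range then follows from EQ 5-2. The descent state values are illustrative, and the noiseless finite-difference estimator stands in for the actual filter:

```python
# Sketch of EQ 5-4: with known acceleration d2x/dt2 (from the IMU) and the
# measured nu = k*x, the unknown constant is k = (d2nu/dt2) / (d2x/dt2),
# after which the range follows from EQ 5-2. Values are illustrative.
D = 100.0; k_true = 1.0 / D
x0, v0, a = 2000.0, -50.0, 2.0       # altitude (m), velocity (m/s), acceleration (m/s^2)
dt = 0.1
x = [x0 + v0 * t + 0.5 * a * t * t for t in (0.0, dt, 2 * dt)]
nu = [k_true * xi for xi in x]       # three successive noiseless measurements

nu_ddot = (nu[2] - 2 * nu[1] + nu[0]) / dt**2   # second finite difference of nu
k_est = nu_ddot / a                             # EQ 5-4 rearranged
x_est = nu[0] / k_est                           # EQ 5-2: range recovered
print(round(k_est, 6), round(x_est, 3))
```

With a = 0 the second difference of ν vanishes and k cannot be recovered, which is exactly the unobservable case noted above.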

The NPAL filter takes advantage of the previous observations by observing the LOS of several feature points. Each point is tracked independently of the others. Prediction errors in the LOS are used to update the vehicle state vector.

The NPAL filter main equations are shown in Figure 5-3. The state vector includes the classical vehicle navigation states (position, velocity and attitude) and three components per tracked feature point (the relative position estimate). When a feature point track ends, the states associated with this feature point are “recycled” to be assigned to a new feature point.
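The state-recycling mechanism can be sketched at the covariance level: when a track ends, the three states of that feature slot are reassigned by clearing their cross-covariances and resetting their variance to a large prior. The state ordering, slot count and reset variance below are illustrative assumptions, not the NPAL filter's actual values:

```python
import numpy as np

VEHICLE_STATES = 9      # position (3) + velocity (3) + attitude error (3)
N_FEATURES = 3          # feature slots of 3 states each, appended after them

def recycle_feature_slot(P, slot, initial_var=1.0e6):
    """When a feature track ends, reuse its 3 states for a new feature:
    zero the cross-covariances and reset the slot variance to a large prior."""
    i = VEHICLE_STATES + 3 * slot
    P = P.copy()
    P[i:i + 3, :] = 0.0
    P[:, i:i + 3] = 0.0
    P[i:i + 3, i:i + 3] = np.eye(3) * initial_var
    return P

n = VEHICLE_STATES + 3 * N_FEATURES
P = np.full((n, n), 0.5) + np.eye(n)   # stand-in for a converged covariance
P = recycle_feature_slot(P, slot=1)
print(P[12, 12], P[12, 0])   # reset variance; cleared cross-covariance
```

The vehicle states and the other feature slots keep their converged covariance, so only the new, unknown feature starts from scratch.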


[The detailed state propagation and measurement equations of Figure 5-3 are not reproduced here; the measurement update is linearised as δm = H·δX. The notation used in the figure is:]

• p_v^i : vehicle position in IPQ

• v_v^i : vehicle absolute velocity in IPQ

• v_v/r^r : vehicle velocity with respect to RPQ, expressed in RPQ

• Q_i->v : vehicle attitude w.r.t. IPQ

• p_f^r : feature point position in RPQ

• m = (u, v) : the two components of the feature point projection in the focal plane

• w^c : the three components of the feature point position in the camera frame (homogeneous coordinates of the projection)

• X : filter propagation state vector

• δX : filter error state vector

• ω^c : angular rate vector of the vehicle in the camera frame

• δψ : attitude error micro-rotation (the 3 angle components of the rotation δQ)

Figure 5-3: NPAL extended Kalman filter main equations


5.2 REAL TIME IMPLEMENTATION

The real-time implementation is shown in Figure 5-4. The filter is divided into two tasks:

• The high-frequency (HF) task, which performs the IMU measurement integration and the compensation of the scale factors and biases estimated by the filter. The gravitational acceleration is integrated in the fast task using a simplified model computed by the LF task.

• The low-frequency (LF) task, which takes as input the history of the HF task integration and the sensor measurements. The state vector is then estimated and updated. At the end of the LF task, the state vector is extrapolated and new initial values of the HF task integrators are computed.

This implementation maximizes the decoupling between the HF task and the LF task, minimizing the overhead in the HF task resulting from interaction with the navigation filter. Thanks to this decoupling, it is even possible to validate the filter directly on trajectories rather than on IMU measurement outputs, which decouples the potential impact of both blocks on the performance.
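The two-rate scheme can be sketched as follows (rates and increment values are illustrative; the assumed 20 Hz / 2 Hz split matches the figures quoted in section 5.4):

```python
# Schematic two-rate loop illustrating the HF/LF decoupling described above:
# the HF task only accumulates IMU increments into a buffer; the LF task
# consumes the buffered history, updates the state, and hands fresh integrator
# initial values back to the HF task.

HF_PER_LF = 10            # e.g. 20 Hz HF / 2 Hz LF

def hf_task(buffer, dv):
    buffer.append(dv)     # cheap: just accumulate the IMU increment

def lf_task(state, buffer):
    # expensive: use the whole HF history, then re-initialise the integrators
    state += sum(buffer)
    buffer.clear()
    return state

state, buffer = 0.0, []
imu_increments = [0.1] * 40          # 2 s of fake IMU data at 20 Hz

for i, dv in enumerate(imu_increments, start=1):
    hf_task(buffer, dv)
    if i % HF_PER_LF == 0:           # every 10th HF cycle, run the LF task
        state = lf_task(state, buffer)

assert abs(state - 4.0) < 1e-12      # all 40 increments of 0.1 accounted for
```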

5.3 NAVIGATION FUNCTIONAL ARCHITECTURE

The navigation functional architecture is shown in Figure 5-5. It is composed of four functions:

• The navigation filter, which provides the vehicle state estimation and the feature point distance estimation. A continuous estimate of the mean plane fitting the last tracked feature points is provided and propagated, in order to keep a minimal prediction capability even when all tracks end simultaneously (following an anomaly or a fast retargeting attitude manoeuvre).

• The camera aiding, which provides feature point position estimates to the image processing and selects the points to be tracked by the FEIC. The point selection logic is mainly based on the Harris criterion (highest Harris weight preferred), but can be improved by selecting points in a certain region of the image (for landing site distance estimation) or with good correlation properties. Feature point position prediction is based on a homography computed from the state estimate.

• The feature point selection function, which selects the feature points to include in the estimation among the 200 points tracked by the FEIC. In order to provide good state estimation, feature points shall be spread across the FOV, not too close to the horizon (points at the horizon do not contain much information about the distance to the plane), not too close to one another (so as to provide different kinds of information) and not too close to the edge (to avoid very short tracks).

• The landing site position estimation, which provides the landing site position estimate after an initial designation in one image. This is a simple propagation of the LOS, updated by measurements made after the designation, to provide a best estimate of the LOS.
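The geometric part of the selection rules listed above can be sketched as a simple filter (all thresholds and image dimensions below are made up; the real function also accounts for horizon proximity and correlation properties):

```python
# Illustrative, simplified version of the feature-point selection rules:
# prefer high Harris weight, keep points away from the image edge, and keep
# selected points well separated from one another.

IMG_W, IMG_H = 1024, 1024
EDGE_MARGIN = 50          # avoid very short tracks near the border
MIN_SEPARATION = 100      # minimum pixel distance between selected points

def select_points(candidates, n_wanted):
    # candidates: list of (harris_weight, x, y); strongest weight first
    selected = []
    for w, x, y in sorted(candidates, reverse=True):
        if not (EDGE_MARGIN <= x <= IMG_W - EDGE_MARGIN and
                EDGE_MARGIN <= y <= IMG_H - EDGE_MARGIN):
            continue  # too close to the edge
        if any((x - sx) ** 2 + (y - sy) ** 2 < MIN_SEPARATION ** 2
               for _, sx, sy in selected):
            continue  # too close to an already selected point
        selected.append((w, x, y))
        if len(selected) == n_wanted:
            break
    return selected

pts = [(0.9, 10, 10),      # rejected: too close to the edge
       (0.8, 500, 500),
       (0.7, 520, 520),    # rejected: too close to the previous point
       (0.6, 800, 200)]
assert [p[1:] for p in select_points(pts, 2)] == [(500, 500), (800, 200)]
```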


Figure 5-4: Navigation filter real-time implementation

[Figure 5-5 is a block diagram linking the Navigation Camera, the Inertial Measurement Unit and the navigation functions (Navigation Filter, Camera Aiding, Feature Points Selection, Landing Site Position Estimation) with the Guidance, Piloting, Landing Site Selection and 3D terrain reconstruction interfaces. The exchanged data are the feature point tracks, the navigation aiding, the candidate feature points, the table of feature points to track, the landing site position (in camera coordinates), the feature points position, the state vector, and the vehicle position/velocity and landing site position outputs.]

Figure 5-5: Navigation function interfaces


In the current implementation a mean plane estimator is used for several purposes:

• to compute homography parameters.

• to initialize feature points depth when a new track is acquired.

• to predict landing site position and depth.

The landing site mean plane estimator is a least-squares plane estimator using all the tracked feature points. A weighting is used to put more weight on close points than on far points, since close points move faster and need better prediction. The new plane parameters are not updated if the error covariance is not improved with respect to the previous estimate, which keeps a reasonable estimate in case of loss of all tracks. Landing site depth prediction relies on the same plane estimate for simplicity. However, it is foreseen that better accuracy could be achieved by putting more weight on the points surrounding the landing site LOS direction; this is a potential improvement.
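A minimal sketch of such a weighted least-squares plane fit is given below (the 1/depth weighting law and the covariance-gating logic of the real estimator are not reproduced; the weighting here is an assumption for illustration):

```python
# Sketch (not the flight code) of a weighted least-squares plane estimator:
# fit z = a*x + b*y + c to the tracked feature points, with heavier weights
# on close points, as described in the text.

def fit_plane(points, depths):
    # Build the 3x3 normal equations sum(w * g * g^T) * p = sum(w * g * z),
    # with regressor g = (x, y, 1) and weight w = 1/depth (close points
    # weigh more).
    M = [[0.0] * 3 for _ in range(3)]
    rhs = [0.0] * 3
    for (x, y, z), d in zip(points, depths):
        g = (x, y, 1.0)
        w = 1.0 / d
        for i in range(3):
            rhs[i] += w * g[i] * z
            for j in range(3):
                M[i][j] += w * g[i] * g[j]
    # Solve M p = rhs by Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            rhs[r] -= f * rhs[col]
            for j in range(col, 3):
                M[r][j] -= f * M[col][j]
    p = [0.0] * 3
    for r in (2, 1, 0):
        p[r] = (rhs[r] - sum(M[r][j] * p[j] for j in range(r + 1, 3))) / M[r][r]
    return p  # (a, b, c)

# Exact plane z = 0.1x - 0.2y + 5 sampled at a few points, arbitrary depths:
pts = [(x, y, 0.1 * x - 0.2 * y + 5.0) for x in (-50, 0, 50) for y in (-40, 40)]
a, b, c = fit_plane(pts, depths=[100.0 + i for i in range(len(pts))])
assert abs(a - 0.1) < 1e-9 and abs(b + 0.2) < 1e-9 and abs(c - 5.0) < 1e-9
```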


5.4 REAL TIME PROFILING

Profiling of the Java navigation filter (with timers or with direct profiling) and an analytic operation count have been performed to estimate the expected real-time performance of the navigation filter and, where needed, to identify the parts of the filter that should be improved with respect to the real-time constraints. Based on the results of this activity, the navigation filter has been significantly redesigned to optimize real-time performance. The second version of the filter is then compared with the first one and provides significant margins with respect to the real-time constraints of the NPAL application.

The profiling test consists in simulating only the Java code of the navigation filter and recording the time spent in each Java method. The Java language allows a statistical overview to be retrieved of the time spent in the methods called while a program runs.

This allows each main function of the navigation filter algorithm to be distinguished and the real-time performance of the filter to be evaluated (see ref. [RD3]).

5.4.1 Time performance evaluation using inside timers

The first time performance evaluation consists simply in probing the Java code with specific timers, in order to record the time spent in the high-frequency and low-frequency steps of the navigation filter. Four timers have been inserted into the Java code:

- the first timer is dedicated to the HF task

- the second and third timers record respectively the beginning and the end of the common part of the HF-LF task

- the last timer is dedicated to the LF task

Results are shown in the following table. The timers output the mean value of each measurement, the sum of the measured durations, the sum of the squared measured durations and the number of calls to the function. These data give an overview of the expected real-time performance.

Task | Mean duration (ms) | Σ tᵢ (ms) | Σ tᵢ² (ms²) | Number of calls
HF Task | 0.25667 | 265.14 | 0.8753 | 1033
HF-LF Task (without HF part) | 9.5145·10⁻³ | 0.9895 | 0.01055 | 104
LF Task | 100.85 | 10488.55 | 1855585.69 | 104

Table 5-1: Output of the Java timers inserted into the navigation filter
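The per-task statistics of Table 5-1 (mean, sum, sum of squares, number of calls) can be gathered with a minimal accumulator like the following sketch (this is an illustration of the probing principle, not the Java probes actually used):

```python
# Minimal timing probe accumulating the same statistics as Table 5-1:
# mean duration, sum of durations, sum of squared durations, call count.
import time

class Probe:
    def __init__(self):
        self.n = self.t_sum = self.t2_sum = 0

    def measure(self, fn, *args):
        t0 = time.perf_counter()
        result = fn(*args)
        dt = (time.perf_counter() - t0) * 1e3   # milliseconds
        self.n += 1
        self.t_sum += dt
        self.t2_sum += dt * dt
        return result

    @property
    def mean(self):
        return self.t_sum / self.n

probe = Probe()
for _ in range(100):
    probe.measure(sum, range(1000))   # stand-in for the HF or LF task body

assert probe.n == 100
assert probe.mean > 0.0
```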

5.4.2 Time performance evaluation using profiling of the code

The Java language allows the code to be profiled and a statistical analysis of the time spent within the various methods to be retrieved. The results can be analysed in several ways:

- time spent per method (sorted by caller or callee): allows the time spent in each method to be evaluated;

- method time duration sorted by line number: allows evaluation of which part of a method requires the majority of the time.


The following figure presents the percentage of the total time used by individual methods. Only 5 methods represent three quarters of the total simulation time, and more than 40% of the total time is used by a single method, the Modified Weighted Gram-Schmidt (MWGS) algorithm.

astro.matrix.MWGS.computeUD

astro.matrix.MatrixKit.mult

astro.matrix.MatrixKit.add

astro.matrix.MatrixKit.scalarMult

astro.matrix.MatrixKit.multT

astro.gnc.cov.models.landingcam.NpalCameraCovModel.buildStateTransitionMatrix

astro.matrix.MatrixKit.identityMatrix

astro.matrix.MatrixKit.copy

astro.gnc.cov.models.navigation.NavigationStatesCovModel2.buildStateTransitionMatrix

astro.gnc.cov.test.WrapperRefTestHF.<init>

astro.gnc.cov.NpalKalmanFilterLF2.getCyclicBufferIndex

astro.gnc.cov.test.SchedNPAL.buildNavigationFilter

astro.gnc.cov.KalmanOperationsKitUD.update

other methods

Figure 5-6 Percentage of total time used by unitary methods

The method computeUD in astro.matrix.MWGS represents 41.76% of the total simulation duration. This is due to the update phase of the LF task, which requires many operations: between LF step N and LF step N+1, the optimal gain must be computed and the UD covariance updated r·p times (where r is the ratio between the sensor frequency and the low frequency, and p is the number of primary points during each HF step). For example, if p = 10, the LF frequency is 2 Hz and the HF frequency is 20 Hz, then r = 10 and, during each LF step, the optimal gain computation and UD covariance update (with the MWGS algorithm) are performed p·r = 100 times.

The profiling analysis also shows that the LF task requires 79.41% of the total time and the HF task requires only 0.59% of the total simulation duration.

5.4.3 Correlation between profiling and analytic performance estimation

It is interesting to compare the performance results obtained from the profiling of the Java code with an analytic performance evaluation based on a systematic operation count of the algorithms.

The following table presents the analytic operation count (in terms of additions and multiplications) required by the various algorithms used in the LF task (see the references in section 5.4.5).


Algorithm | # of multiplications | # of additions
MWGS (mwgs(m,n)) | n·(m+1)·(3n-1)/2 | m·n²
MWGS update, n states, m measurements (mwgs(n+m, n+m)) | (n+m)·(n+m+1)·(3(n+m)-1)/2 | (n+m)³
Matrix product A·B (A: [m×n], B: [n×p]) | m·n·p | m·(n-1)·p
P = U·D·Uᵀ decomposition (n) | n³/6 + n²/2 - 2n/3 | (n³ - n)/6

Table 5-2: Analytic operation count of various algorithms

The LF task is the task that requires most of the navigation filter computation time. A detailed review of the analytic operation count has been performed for the LF task, decomposed into its main steps.

The total number of elementary operations required by the LF task is 21283487 (12504329 multiplications and 8779158 additions).

We can link the number of operations to the LF task duration obtained through the profiling of the Java code. The LF task mean duration measured during the profiling is 100.85 ms. The number of cycles per flop is given by the following formula:

n = t · fµp / N

where:

- n is the number of cycles per floating point operation (flop)

- t is the algorithm duration (in s)

- N is the number of elementary operations required by the algorithm

- fµp is the frequency of the microprocessor used to run the algorithm (t·fµp is the total number of cycles)

In our case, the navigation filter was simulated on a computer with an internal clock at 2.8 GHz. The duration of the LF task is 100.85 ms and the total number of operations is 21283487.

This leads to a total number of cycles per flop of:

n_LF = (100.85·10⁻³ · 2.8·10⁹) / 21283487 = 13.3 cycles per flop.

With the profiling test, we have seen that the MWGS algorithm accounts for 51.86% of the LF task duration. The analytic operation count gives a total of 17396300 operations for the MWGS algorithm (Kalman gain computation and update of P). The corresponding number of cycles per flop is:

n_MWGS = (51.86% · 100.85·10⁻³ · 2.8·10⁹) / 17396300 = 8.4 cycles per flop.


The difference between the two results (total operation count against total duration, versus MWGS operation count against its corresponding time) may come from the fact that the estimate of the number of cycles per flop for the MWGS algorithm is closer to the true value: we have isolated a very small part of the code, for which we had both an analytic estimate of the number of operations and a good measurement of the simulation duration.

For the LF task, we have estimated the number of elementary operations required by the various algorithms, which amounts to assuming that the time spent in this task is due only to those operations. In fact, apart from the elementary operations required by the core navigation filter algorithm, other operations are needed, which have not been evaluated or taken into account in the estimate of the number of cycles per flop. This can explain the difference observed between the two results (n_LF and n_MWGS).

However, the two values remain relatively close, and a figure of around 10 cycles per flop can be expected. For a LEON microprocessor at 100 MHz, this leads to an LF task duration of:

t_LEON = n · N / f_LEON = between 1.8 and 2.8 s
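The arithmetic of sections 5.4.2 and 5.4.3 can be reproduced directly:

```python
# Reproduction of the cycles-per-flop estimates above, n = t * f / N.

N_LF = 21283487          # elementary operations in one LF step
t_LF = 100.85e-3         # measured LF mean duration (s)
f_pc = 2.8e9             # PC clock (Hz)

n_LF = t_LF * f_pc / N_LF
assert abs(n_LF - 13.3) < 0.1                   # ~13.3 cycles per flop

# 100 MWGS updates per LF step (section 5.4.2) give the MWGS operation count:
N_MWGS = 100 * (105042 + 68921)
assert N_MWGS == 17396300

n_MWGS = 0.5186 * t_LF * f_pc / N_MWGS
assert abs(n_MWGS - 8.4) < 0.1                  # ~8.4 cycles per flop

# Extrapolation to a 100 MHz LEON, t = n * N / f, for n between 8.4 and 13.3:
f_leon = 100e6
t_low = n_MWGS * N_LF / f_leon
t_high = n_LF * N_LF / f_leon
assert 1.7 < t_low < 1.9 and 2.7 < t_high < 2.9  # ~1.8 s to ~2.8 s
```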

5.4.4 Java Navigation Filter V2 Profiling Results

The new version of the filter was designed to overcome the limitations of the first filter version. The main modifications are:

• Perform scalar updates using the Agee-Turner scalar update algorithm as modified by Carlson. This could have been implemented in the filter V1 but would not have brought much gain, since 30% of the time was spent in the matrix multiplications required to update the states.

• Manage only one state vector at a time, to simplify state update management.

• Compute the state transition matrix by numerical integration (currently implemented with a fourth-order Runge-Kutta scheme).

• Optimize the multiplication of sparse matrices.

A similar simulation case has been run with the new filter version.

Function | % | Time (s)
Total | 99.89 | 5.50
Other than filter | 31 | 1.71
Agee-Turner | 32.89 | 1.81
Sparse matrix mult. | 21 | 1.16
Other | 15 | 0.83

Table 5-3: Filter V2 profiling results

The operation count of the Agee-Turner update is:

Multiplications: (3/2)n² + (7/2)n - 1        Additions: (3/2)n² + (3/2)n - 1
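For the NPAL case (n = 39 states), these formulas reproduce the Agee-Turner figures used in Table 5-4:

```python
# Agee-Turner scalar update operation counts, evaluated for n = 39 states.
# Integer forms of (3/2)n^2 + (7/2)n - 1 and (3/2)n^2 + (3/2)n - 1.

def agee_turner_mults(n):
    return (3 * n * n + 7 * n - 2) // 2

def agee_turner_adds(n):
    return (3 * n * n + 3 * n - 2) // 2

assert agee_turner_mults(39) == 2417   # matches Table 5-4
assert agee_turner_adds(39) == 2339    # matches Table 5-4
```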

The next table shows a comparison between the MWGS and the Carlson/Agee-Turner update implementations. It can be seen that the new version is about 20 times faster than the previous one. It can also be seen that the analytical prediction


and the profiling are in good accordance: assuming 10 cycles per flop, the predicted time spent in the Agee-Turner algorithm was 1.9 s, where 1.8 s was measured!

However, this was not the main modification of the filter architecture. As can be seen in Table 5-3, the time spent in matrix multiplications has been reduced in the same proportion, and this is what required the redesign of the filter architecture. With the new version, a good balance between the Agee-Turner update and the other tasks has been found, and the current design is very close to the optimum. The current version can process 10 feature points 20 times faster than real time.

Agee-Turner computing time prediction (number of primary points: 10; number of states: 39):

Operation count | * | +
MWGS | 105042 | 68921
Number of calls per image | 10 | 10
MWGS flops per image | 1050420 | 689210
Agee-Turner | 2417 | 2339
Number of calls per image | 20 | 20
Agee-Turner flops per image | 48340 | 46780
Ratio | 21.73 | 14.73
Number of calls in the test (100 s, 20 Hz) | 2000 | 2000
Total number of flops | 96680000 | 93560000
Total number of flops (+ and *) | 190240000
Cycles per flop | 10
PC frequency | 1 GHz
Predicted simulation time | 1.9024 s

Table 5-4: MWGS and Agee-Turner update implementation comparison
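The figures of Table 5-4 can be checked with straightforward arithmetic (the closed-form MWGS expression below is inferred from the tabulated counts, with n + m = 41, i.e. 39 states plus 2 measurement components):

```python
# Arithmetic check of Table 5-4.

q = 41                                       # n + m = 39 states + 2 measurements
mwgs_mults = q * (q + 1) * (3 * q - 1) // 2  # inferred closed form
mwgs_adds = q ** 3
assert (mwgs_mults, mwgs_adds) == (105042, 68921)

at_mults, at_adds = 2417, 2339               # Agee-Turner counts for n = 39

# Per image: 10 MWGS updates versus 20 scalar Agee-Turner updates.
ratio_mults = (10 * mwgs_mults) / (20 * at_mults)
ratio_adds = (10 * mwgs_adds) / (20 * at_adds)
assert round(ratio_mults, 1) == 21.7 and round(ratio_adds, 1) == 14.7

# Predicted Agee-Turner time for the 100 s, 20 Hz test (2000 images),
# at 10 cycles per flop on a 1 GHz PC:
total_flops = 2000 * 20 * (at_mults + at_adds)
t_pred = total_flops * 10 / 1e9
assert abs(t_pred - 1.9024) < 1e-9           # matches Table 5-4
```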

5.4.5 Conclusions & References

The V2 version of the navigation filter has been optimized based on the results of the profiling activities.

It is now possible to significantly increase the number of tracked points (with 30 points tracked at 20 Hz on a 1 GHz PC, i.e. with a state matrix of 99 states, the filter still runs faster than real time, using 63% of it!), giving some room for robustness improvement.

Very good computation time predictions have been derived from a combination of benchmarking and analytical computation, enabling further performance optimization, for example through:

• intelligent selection of measurements (discard intermediate measurements?)

• selection of the filter frequency (is 2 Hz the right choice?)

• selection of the number of tracked points.


References

[RD2] "Sub-optimal filter - implementation description and verification document - Output of WP 5101", NPAL Project, EADS Astrium Technical Note, ref.: NPAL.ASTR.TN.001.04, issue 2, dated March 4th 2004.

[RD3] "Profiling of Navigation filter - Output of WP 7100", NPAL Project, EADS Astrium Technical Note, ref.: NPAL.ASTR.TN.008.05, issue 2, dated December 2nd 2005.

[RD4] "VBNAT v3 Simulation Report - Output WP 5200", NPAL Project, EADS Astrium Technical Note, ref.: NPAL.ASTR.TN.018.03, issue 2, dated October 29th 2003.

[RD5] "Vision-Based Navigation for safe landing on Mars - Output of WP 8300", NPAL Project, EADS Astrium Technical Note, ref.: NPAL.ASTR.TN.006.04, issue 1 rev. 1, dated May 2nd 2005.

[RD6] "VBNAT Software Documentation", NPAL Project, EADS Astrium Technical Note, ref.: NPAL.ASTR.TN.010.05, issue 2, dated November 29th 2005.

[RD7] "Real-Time Test Bench Performance Document", NPAL Project, EADS Astrium Technical Note, ref.: NPAL.ASTR.TN.009.05, issue 2, dated April 24th 2006.


5.5 PERFORMANCE CAPACITY

In order to evaluate the navigation performance capacity, a systematic test of all components contributing to the performance has been conducted:

• Unit tests of the Kalman filter on very simple scenarios (single or four points, three measurements, no measurement error). These tests are not reported in this document but validate the correct implementation of the measurement equations in the linear domain. They revealed that the filter fails to converge when the initial error in the feature distance is too large; solving this problem was not the objective of these tests, however, since the effect was largely amplified by setting the measurement error to zero. The sensitivity of the filter to this error was therefore expected to be lower with nominal measurement errors.

• Tests with simulated measurements (no image processing). The terrain model is simple (a plane or a sphere) and the point repartition on the surface can be chosen between 4 deterministic points or any number of randomly distributed points. These tests allow the filter to be completely validated in realistic conditions, with noise and with points entering and leaving the state estimation. They validate the theoretically achievable performance, assuming the image processing behaves as expected.

• Tests with simple synthetic images and image processing. These tests are identical to the previous ones, but the images are generated and processed with the FEIC simulator. The simulated images have also been used for real-time tests with the camera.

• Tests with PANGU-generated images and simulated FEIC. It was not foreseen to implement this at simulation level with the real-time version of the navigation filter, but it was finally done to improve the means of investigation when interpreting the test results.

All the validation steps have been conducted successfully, providing results in accordance with predictions, except the last one. The errors observed when PANGU images are used are significantly outside the expected values, revealing some modelling error which is still under investigation.

The most complete validation step performed without PANGU images uses synthetic images, as shown in Figure 5-7. The left image is a "four-point image", where only four points are tracked, symmetrically around the FOV centre. These images have been used for debugging purposes. The right image shows images generated by selecting 20 points at random on a surface (plane or sphere) and keeping them for a random duration, to simulate the fact that point tracking ends after some time due to the degradation of the correlation caused by the zoom effect. These images are the most representative ones used for navigation validation independently of image processing validation.

No DEM was used, but the spherical model allows the robustness of the filter to the DEM to be validated (since the filter only performs plane fitting).

Three trajectories have been used: a simple vertical descent trajectory for early validation, a Mars landing trajectory under parachute, and a Mercury landing trajectory.


Figure 5-7: The two types of feature point generators (left: four points, right: random points)

[Figure 5-8 shows 3D plots of the two descent trajectories, with the X, Y and Z positions in metres: left, the Mars descent scenario; right, the Mercury descent scenario.]

Figure 5-8: Two types of trajectories used for performance evaluation (left: Mars landing trajectory, right: Mercury landing trajectory).


The objective of the performance assessment was to evaluate the best achievable performance. For this purpose, no IMU errors have been considered, only initial state estimation errors and camera measurement errors. The performance obtained can thus be considered as the best that can be achieved using optical navigation with the expected image processing performance.

The camera tracking noise considered is a white noise of 1 pixel (1 sigma). However, it was found necessary to increase this value to 2 pixels to make the filter robust to large initialisation errors (when a new point is tracked). This can be considered conservative, since the measured tracking errors are significantly smaller.

Typical simulation results are shown in Figure 5-9 for a Mercury scenario and in Figure 5-10 for a Mars scenario. The Mars scenario is much more favourable and lower velocity errors can be achieved: cross-track errors go below 2 cm/s well before the end of the trajectory.

These results confirm the feasibility of full state estimation using only a visual sensor. The image content is simplified with respect to PANGU-generated images, but all the components are present, from image processing to navigation.

In order to assess the filter robustness, performance simulations have been run varying the most probable sources of unmodelled errors. The asymptotic performance is maintained with errors of up to:

• IMU/NavCam alignment error < 0.01°

• Camera scale factor error < 0.1%

• Measurement time tag error < 5 ms

These values are easily achieved with minimal care in the mechanical layout and calibration campaign (lower levels are commonly achieved with star trackers). However, they can explain the unexpected results obtained when the navigation was integrated with PANGU, since the PANGU geometric calibration has not been validated and errors of this order of magnitude (a fraction of a pixel) could easily have been introduced.


With the FEIC in the loop, there is almost no noise, since synthetic images are used;

with simulated tracks, a 1 pixel (1 sigma) measurement noise is present.

Figure 5-9: Along track and cross track velocity error and standard deviation (Mercury trajectory)

The estimate of the along-track velocity is below 2 m/s 10 seconds before landing, and the cross-track velocity estimate is below 0.25 m/s.

Figure 5-10: Along track and cross track velocity error and standard deviation (Mars trajectory)

The Mars trajectory is more favourable (vertical descent) and allows very good cross-track velocity estimation.


5.6 MONTE CARLO PERFORMANCE ASSESSMENT ON SCENARIOS

The validation of the vision-based navigation concept has followed an incremental process:

- VBNAT version 2 campaign: validation of the G&C laws and the navigation algorithm, and performance sizing

- VBNAT version 3 campaign: validation of PANGU, the camera model and the image processing algorithms; performance assessment of the whole concept

The performance of the vision-based navigation has been fully assessed through an intensive evaluation, using Monte Carlo campaigns with dispersions in knowledge and realisation at the beginning of each simulation. Two different scenarios have been simulated in the VBNAT environment:

- Mercury scenario

- Mars scenario

5.6.1 Mercury Campaign

The Mercury campaign corresponds to the VBNAT V3 campaign, using version 3.3 of the VBNAT environment.

The reference trajectory is based on the reference trajectory defined for BepiColombo by EADS Astrium UK. It has been adapted to the NPAL mission (position of the landing site on the planet, orbit, planet model, etc.).

The campaign plan aims to:

• Demonstrate the operability of VBNAT V3:

- evaluate the performance of the image tracking, and compare it to the prediction

- demonstrate the operability of the OBC modules

- demonstrate the operability of the aiding function, and the non-divergence of the navigation filter in the first moments after VGPEP

• Evaluate the navigation performance:

- compare the results of VBNAT V2 and VBNAT V3; evaluate the performance without any smart selection of the tracked feature points, and determine the necessity of such a selection

- compare two smart selection methods on a simple run, and analyse the results of each method with representative dispersions of the initial errors

Initial Dispersions

The following table gives an overview of the main contributors to the dispersion expected at High Gate. It is derived from a lunar landing system study.


These dispersions are used to initialise the simulations. Hence, the reference initial errors taken into account for the Monte Carlo simulations are normally distributed, with standard deviations equal to the root sum squares below.

Contributor | GNC dispersion: altitude error (m) | GNC dispersion: velocity error (m/s) | Navigation dispersion: altitude error (m) | Navigation dispersion: velocity error (m/s)
Orbit determination | 108 | 2 | 108 | 2
Descent orbit burn | 264 | 0.1 | - | -
J3 uncertainty | 161 | 0.2 | 161 | 0.2
Propagation error | 100 | 4 | 100 | 4
Main burn accelerometer | 110 | 0.6 | 110 | 0.6
Thrust vector misalignment | 773 | 0 | - | -
Thrust magnitude estimation | 0 | 12.2 | - | -
Mass uncertainty | 0 | 5.9 | - | -
ROOT SUM SQUARE | 852.59 | 14.28 | 244.31 | 4.52

Table 5-5: Dispersion analysis results

Note: the navigation dispersion stands for estimation errors; the GNC dispersion concerns the deviation of the complete vehicle state from the nominal trajectory.
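The root-sum-square totals of Table 5-5 can be checked directly:

```python
# Check of the ROOT SUM SQUARE row of Table 5-5 (metres and m/s).
from math import sqrt, isclose

def rss(values):
    return sqrt(sum(v * v for v in values))

gnc_alt = [108, 264, 161, 100, 110, 773]
gnc_vel = [2, 0.1, 0.2, 4, 0.6, 0, 12.2, 5.9]
nav_alt = [108, 161, 100, 110]
nav_vel = [2, 0.2, 4, 0.6]

assert isclose(rss(gnc_alt), 852.59, abs_tol=0.01)
assert isclose(rss(gnc_vel), 14.28, abs_tol=0.01)
assert isclose(rss(nav_alt), 244.31, abs_tol=0.01)
assert isclose(rss(nav_vel), 4.52, abs_tol=0.01)
```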

Terrain model and PANGU version

PANGU Viewer v1.51 was used to generate the images seen by the camera from a Digital Elevation Model (512×512 pixels). The DEM (Digital Elevation Model) used in the framework of this campaign is the so-called "smooth model" with 12 layers, delivered by the University of Dundee. The sun elevation considered is 5.9 deg.

Campaign Summary Results

Test description                                   | Runs | Performance  | Position error (m)          | Velocity error (m/s)
Specification driven from GC                       |  -   | -            | 8                           | 2.4 (CT) / 2.3 (AT)
Finite tracks (10 s) / all noises /                | 100  | Value at 93% | 38 (LS CT) / 29 (LS AT) /   | 0.6 (CT) / 1.5 (AT)
  Mercury smooth DEM                               |      | of the runs  | 2.2 (FP, MECO)              |
True tracks / no camera noise /                    | 100  | Value at 93% | 39 (LS CT) / 59 (LS AT)     | 0.22 (CT) / 2.9 (AT)
  Mercury smooth DEM, 10 points chosen at random   |      | of the runs  |                             |
True tracks / all noises / Mercury smooth DEM,     | 300  | Value at 99% | 32 (LS CT) / 10.7 (LS AT) / | 1 (CT) / 1.8 (AT)
  selection of the 10 points with highest          |      | of the runs  | 10 (FP, Tgc)                |
  Harris value                                     |      |              |                             |
True tracks / all noises / Mercury smooth DEM,     | 300  | See *        | 35 (LS CT) / 54 (LS AT) /   | 0.85 (CT) / 6 (AT)
  multi-criterion smart selection, 10 points       |      |              | 20 (FP, Tgc)                |

Table 5-6 : Summary of the performances obtained in the framework of the Mercury campaign

CT = Cross Track (values at Tgc, 5s before MECO)


AT = Along Track (values at Tga, 2s before MECO)

LS : Landing Site Estimation

FP : Feature Point Estimation

*: The performance at Tgc (cross-track) corresponds to the value at 99% of the runs. However, since there is an unexpected update at t = 3491 s, the figures at Tga (t = 3491.7 s) are not representative of the actual performance. The quoted performance corresponds to the worst case at t = 3490.9 s (0.7 s before Tga).

Performance Detailed Results

The objective of this test was to demonstrate the convergence of the full navigation chain, with all noises (IMU and camera) set to their typical values, and to evaluate its performance with the tracked points of highest Harris value (which should correspond to long and precise tracks).

Table 5-7 : Detailed results of Mercury Campaign performance test

* Relative position estimation error : landing site estimation error, divided by the real distance to the landing site. Here, it is the maximum value before Tga (along-track error) or Tgc (cross-track errors)

** Relative velocity estimation error: velocity estimation error, divided by the true velocity. Here, it is the maximum value before Tga (along-track error) or Tgc (cross-track errors)
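The statistics reported for each error in Table 5-7 (mean, 3 sigma, value at 99% of the runs) can be computed from the Monte-Carlo error samples. A minimal sketch follows; the function name and the exact percentile convention (99% of runs have an absolute error below the reported value) are our assumptions:

```python
import math
import random

def run_statistics(samples, percentile=0.99):
    """Mean, 3-sigma and value-at-percentile of Monte-Carlo error samples,
    matching the columns of Table 5-7. Mean and sigma are signed; the
    percentile is taken on the absolute error (assumed convention)."""
    n = len(samples)
    mean = sum(samples) / n
    sigma = math.sqrt(sum((s - mean) ** 2 for s in samples) / (n - 1))
    ordered = sorted(abs(s) for s in samples)
    # "Value at 99% of the runs": 99% of runs have an error below this value.
    at_pct = ordered[min(n - 1, math.ceil(percentile * n) - 1)]
    return mean, 3.0 * sigma, at_pct

# Illustration on 300 synthetic runs (the campaign size), N(0, 0.5) errors.
random.seed(0)
errors = [random.gauss(0.0, 0.5) for _ in range(300)]
mean, three_sigma, p99 = run_statistics(errors)
```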

                                              | Spec (3 sigma) | Mean value            | 3 sigma             | Value at 99% of runs
During the whole descent                      |                |                       |                     |
Relative position estimation error (%) *      |      4.3       | 50.6 / -0.2 / -10.8   | 16.6 / 2.94 / 2.16  | 64.6 / 1.85 / 12.6
  (along-track / cross-track hor. / 3rd axis) |                |                       |                     |
Relative velocity estimation error (%) **     |      3.2       | -2.33 / 0.72 / -0.46  | 12.00 / 1.57 / 2.02 | 7.99 / 1.59 / 1.40
  (along-track / cross-track hor. / 3rd axis) |                |                       |                     |
At Tgc before MECO (5 s)                      |                |                       |                     |
Landing site position estimation error (m)    |      8         | -1.5 / -27.4          | 4.4 / 5.6           | 4.6 / 31.9
  (cross-track hor. / 3rd axis)               |                |                       |                     |
Velocity estimation error (m/s)               |      2.4       | 0.28 / -0.53          | 0.62 / 0.55         | 0.91 / 0.99
  (cross-track hor. / 3rd axis)               |                |                       |                     |
At Tga before MECO (2 s)                      |                |                       |                     |
Landing site position estimation error (m)    |      8         | 7.0                   | 3.4                 | 10.7
  (along-track)                               |                |                       |                     |
Velocity estimation error (m/s)               |      2.3       | 0.66                  | 1.36                | 1.80
  (along-track)                               |                |                       |                     |
At MECO                                       |                |                       |                     |
Attitude estimation error (deg)               |      -         | -0.07 / -0.13 / -0.05 | 0.75 / 0.50 / 0.51  | 0.84 / 0.54 / 0.56
  (RBF frame)                                 |                |                       |                     |



Figure 5-11: Landing site position estimation errors at Tga (along-track error) and Tgc (cross-track error). Percentage of runs better than the value on the abscissa.


Figure 5-12: Velocity estimation errors at Tga (along-track error) and Tgc (cross-track error). Percentage of runs better than the value on the abscissa.


Main achievements

In the framework of this VBNAT V3 campaign, several objectives have been achieved:

Integration of feature extraction and tracking algorithm prototype into VBNAT

Integration of the camera model as developed by INETI into the VBNAT.

Update of VBNAT to be able to generate and utilize images using the capabilities of PANGU. These images can be processed in closed loop by the implemented feature extraction techniques. Pangu images can also be generated and stored into files to be used in open loop.

Performances of both image processing strategies (with and without texture update) have been evaluated and compared in different situations.

The Aidings function performance is in line with the needs of Image Processing (convergence in the first moments after VGPEP is ensured).

As far as the Navigation Filter is concerned, the main objective of VBNAT V3 has been reached: convergence of the filter is ensured with the actual errors of the IP algorithm on the feature point tracks.

Moreover, two “smart selection” methods have been implemented and their improvement in robustness and performance demonstrated. One is based on an Image Processing track quality criterion (the Harris value); the other is based on a combination of the feature position in the camera Field Of View and the Harris criterion.
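The multi-criterion selection can be sketched as a weighted ranking of the candidate features. The exact weighting used in the study is not given in this report, so the function below is a hypothetical illustration (names, weights and normalisation are ours): it favours points with a high Harris value that also sit close to the centre of the Field Of View.

```python
import math

def smart_select(features, n=10, w_harris=0.7, w_fov=0.3,
                 img_w=1024, img_h=1024):
    """Multi-criterion smart selection sketch: rank candidate feature
    points by a weighted combination of normalised Harris value and
    centrality in the image, then keep the n best.
    `features` is a list of (x, y, harris_value) tuples."""
    max_h = max(f[2] for f in features) or 1.0
    half_diag = math.hypot(img_w / 2, img_h / 2)

    def score(f):
        x, y, h = f
        # 1.0 at the image centre, 0.0 at the far corner.
        centrality = 1.0 - math.hypot(x - img_w / 2, y - img_h / 2) / half_diag
        return w_harris * (h / max_h) + w_fov * centrality

    return sorted(features, key=score, reverse=True)[:n]
```

For example, a strong corner at the image centre outranks an equally strong corner near the border, which in turn outranks a weak central one.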

The remaining robustness problems identified in VBNAT V2 are now more clearly understood: upgrades of the Navigation Filter design have been identified (reinforcing robustness close to the limits of the linearity hypothesis of the Extended Kalman Filter at the initialisation of new feature point positions).

Using the results of the VBNAT V3 campaign, the following points have been identified to improve the current performance:

o At Navigation design level, improved robustness could make it possible to lower the measurement noise matrix level. The improvements presented in the previous paragraph could therefore speed up convergence of the filter while preserving robustness.

o At Image Processing level, processing 1024×1024-pixel images (that is, PANGU would deliver 2048×2048 images) would ensure a relative reduction of tracking errors, which would also allow the measurement noise matrix level of the Kalman Filter to be lowered and hence speed up convergence.

Regarding the Image Processing status, the performance on frame-to-frame criteria is in line with expectations in terms of:

• Track length statistics

• Robustness to distortions

• Matching accuracy


Long-term performance using a strategy with no texture update is also in line with our expectations: tracks show low drift, but track-length statistics degrade as distortions increase.

In this campaign, the tests performed were adapted to extract the most conclusive demonstrations achievable with the VBNAT tool version 3.3 and its interface. Following this approach, the results obtained with the Monte-Carlo simulations and VBN6 constitute a good demonstration of the validity of the NPAL concept and give a first overview of the performance level achievable with such a solution.


5.6.2 Mars Campaign

In the second phase of the NPAL study, an analysis was performed to assess the applicability, interest, and system impact of the NPAL concept for future Mars missions.

Vision-Based Navigation has been identified as a baseline option in the framework of the MSR project. More generally, Vision-Based Navigation can be considered for any soft landing mission, as it can be used during quite a long portion of the trajectory (from a range of 20 km to the very end of the trajectory). One of the main advantages of vision is the capacity to identify hazards, which allows landing site retargeting. The main drawback of an optical sensor is its lack of robustness to dust storms or other climatic perturbations, which could blur the images.

A second possible application of Vision-Based Navigation (VBN) has been identified in the framework of landers using airbags. In these missions (MER- or Exomars-like), VBN makes it possible to estimate the horizontal velocity at parachute separation, in the few seconds before touchdown (cf. DIMES in the MER missions). Cancelling this horizontal velocity is necessary to stay within the airbag tolerance to transverse velocity at touchdown.

- Semi-hard landing: lander equipped with an airbag: MER or Exomars (“light mission”)

- Soft landing: after parachute separation, 3-axis control to perform a soft landing on tripods (e.g. MPL/MSR)

From these preliminary analyses, two reference scenarios are defined.

Particularities of a Martian Scenario

The main differences between Mars Landing and Mercury Landing are listed and commented hereafter:

Dust Storm

VBN, using by definition an optical sensor, is not robust to a dust storm that would completely hide the terrain. This point shall be studied in the analyses performed. It could be modelled by adding a mask over a portion of the images generated by PANGU.

Conversely, the use of a passive sensor permits a very large field of view and a large dynamic range, which make it usable at long range, covering a large surface of terrain (typically a few hundred km). One possibility to make the system more robust would be to implement a dust-identification feature (to be defined/secured) that would prevent feature points from being selected inside a dust storm area. The large field of view of the vision camera could therefore maintain the navigation capacity even in the presence of a geographically limited dust storm.

Atmosphere influence

The presence of an atmosphere might influence Image Processing performance.

Moreover, in the presence of an atmosphere, the flames generated by the main engine could be visible in the images. This aspect may constrain the camera position on the lander.


Final Blur due to Engine thrust

The exhaust plume generated by the main engine is expected to raise a dust cloud at the very end of the trajectory, which will obscure the camera field of view.

Martian geology

The Martian surface is quite different from the Mercury one. Apart from the existence of volcanoes and dunes, the atmosphere makes craters much scarcer than on the Mercury surface. The effect of erosion is also much more important and leads to globally smoother terrain.

The expected impact of this different geology concerns the ability of the Image Processing to track feature points on images with poor intensity variation, a worst case being a uniform image obtained when observing a perfect plain with no boulders.

However, taking into account the width of the terrain portion that the camera can see during the descent, this worst case seems unlikely to persist along the whole descent. Moreover, NPAL Phase 1 demonstrated that the Navigation Filter requires only a limited number of feature points to estimate the lander states.

Uniformity also concerns texture: the absence of significant relief says nothing about the variety of soils and terrain nature, which can be sufficient to support NPAL navigation. Images from MER show large plains of rocks and boulders, presumably generated by erosion, which are most favourable for NPAL VBN.

Entry Trajectory

For a Mercury landing, VBN can be used as soon as terrain visibility is ensured, i.e. from a range of 20 km. This range is set by the capability of the camera. Over these last 20 km, the velocity at the start of the visual phase is high (600 m/s is considered in the NPAL reference trajectory).

For a Martian landing, the trajectory has an additional atmospheric entry phase. During this phase the camera is behind the heat shield and cannot be used, so VBN is forced to begin at lower altitude. However, the parachute phases induce lower velocities than for a Mercury landing; globally, the duration of the phase where vision is used is quite similar. The scenarios selected tend to be the more conservative ones, to ensure that the demonstration applies to the whole range of candidate missions.

Past and Future Mars Missions

Among Mars landers, we can distinguish two types of missions, corresponding respectively to so-called semi-hard and soft landers. In both scenarios, VBN is assumed to be used only after parachute opening. Indeed, in the previous phases the camera cannot be exposed: before and during atmospheric entry, a thermal shield protects the lander, so a camera can only be used after separation of the thermal shield.

Hence, in the following scenarios, we focus on trajectories after parachute opening.

Semi-hard landing: Lander with AIRBAG: MER / Exomars

In such missions, Vision-Based Navigation could be used during the parachute phase only. Indeed, during the final “airbag” phases, the camera field of view is assumed to be hidden. Before the final “airbag” phase, the residual transverse velocity is reduced to its minimum by an appropriate ignition of the transverse thrusters. The success of this operation is obviously linked to the accuracy of the transverse velocity estimation performed by the


embedded navigation system. Coupled with the IMU, VBN can estimate this residual transverse velocity. The feasibility and performance of such an estimation are evaluated in this study.

Soft Landing : MPL / MSR

From the parachute phase to the very end of the descent, VBN can be used, coupled or not with an additional sensor (altimeter). It is an essential system for hazard mapping.

On Mars missions the landing site area is assumed to be quite large (several kilometres); however, it shall be possible to perform a retargeting at the very end of the trajectory if boulders are identified around the landing site point. For this purpose, vision is essential, as rocks can be located in camera images.

“NPAL for Mars” scenarios

From this review of the Martian missions, we can define a representative scenario to further analyse the capacity of VBN for Mars landing. Two types of application are foreseen for the VBN technology:

- A Navigation solution for Soft Landing missions

- An autonomous additional system for high-accuracy estimation of the residual horizontal velocity before parachute jettisoning in the framework of “airbag” missions (semi-hard landers).

Trajectory

For the navigation analysis, a similar trajectory can be used for the two types of missions. Indeed, the soft landing and “airbag” trajectories of the final Martian descent are broadly similar. Both trajectories include a parachute phase, even if in the case of an “airbag” landing this phase is longer. The velocity profile during the main chute phase can be represented, to first order, as a constant-vertical-velocity phase, with 80 m/s of vertical velocity and up to 30 m/s of horizontal velocity.

In soft landing missions, the trajectory ends with a 3-axis-controlled phase that can be modelled, to first order, as a quasi-vertical descent with a constant vertical deceleration. For semi-hard landing, this is replaced by a longer parachute phase, followed by thruster activation and airbag deployment to absorb the shock at contact.
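As a quick consistency check of this first-order profile, the sketch below uses the constants quoted in this section (the 5200 m VBN activation altitude and the 40 s end-of-parachute epoch appear later in the campaign results and are assumptions of this illustration):

```python
# First-order Mars descent profile under the main chute:
# constant vertical velocity from the VBN activation altitude.
V_VERTICAL = 80.0     # m/s, vertical velocity under the main chute
H_VBN_START = 5200.0  # m, assumed VBN activation altitude
T1 = 40.0             # s, end-of-parachute-phase epoch used in Table 5-8

altitude_at_t1 = H_VBN_START - V_VERTICAL * T1
print(altitude_at_t1)  # → 2000.0
```

The result (2000 m) matches the roughly 2 km altitude quoted at t1 in the cross velocity estimation discussion below Table 5-8.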

VBNAT adaptation to Mars Scenario

An upgrade of VBNAT was necessary to include the Mars environment models (wind and atmospheric drag) and also a “dynamics under parachute” model for the main chute phase.

Moreover, the virtual scene generation tool (PANGU) and the terrain models have been adapted to be representative of Martian scenes.

Mars scenario synthesis

Mars Plains

The reference Mars scenario used in the framework of this study contains a phase where the portion of terrain seen by the camera is comparable to a plain. During this portion of the trajectory there are very few craters in the field of view, and the boulders are still too far away and too small to be correctly tracked. There are few long tracks in this period; therefore convergence of the navigation filter is poor.


From this observation, we can derive a recommendation: Vision-Based Navigation should start early in the mission. The earlier it is activated, the better the chances of having geological specificities in the field of view. In our tests, the hypothesis is that VBN starts during the parachute phase, at an altitude of 5200 m. Therefore, by the time a plain occupies the field of view, the Navigation Filter has already converged.

Navigation Performance

Estimation error wrt simulation time:

                                       | t0 = 0 s | t1 = 40 s (end of | t2 = final point
                                       |          | parachute phase)  |
No IP in the loop (V2 campaign)        |          |                   |
  Vision + Altimeter | Position (m)    |   10     |  40               |  1.5
                     | Velocity (m/s)  |   10     |  2.3 / 0.5*       |  0.1
                     | Attitude (deg)  |    5     |  3.1 / 0.5*       |  3.1 / 0.2*
  Vision only        | Position (m)    |  300     |  110 / 40*        |  2
                     | Velocity (m/s)  |   10     |  6 / 0.2*         |  0.2
                     | Attitude (deg)  |    5     |  4 / 0.3*         |  4 / 0.1*
IP in the loop (V3 campaign)           |          |                   |
  Vision + Altimeter | Position (m)    |   10     |  26               |  1.7
                     | Velocity (m/s)  |   30     |  1 / 0.6          |  0.3 / 0.04
                     | Attitude (deg)  |    5     |  3.2 / 0.8        |  2.7 / 0.4
  Vision only        | Position (m)    |  300     |  50               |  4
                     | Velocity (m/s)  |   30     |  3 / 0.6          |  0.05 / 0.02
                     | Attitude (deg)  |    5     |  1.5 / 0.5        |  1.5 / 0.4

Table 5-8 : Navigation performance results of Mars simulation campaign

*: x / y : x = along track performance; y= cross track performance

Note: the altimeter is implemented only at initialisation (initial altitude) and for the reduction of the mean plain error covariance.

Cross velocity estimation

The Martian simulations confirm that the cross velocity estimation is well observable. The cross velocity has converged below 0.5 m/s at t1, i.e. still at 2 km altitude with 50 seconds to go (separation of the parachute and beginning of the 3-axis-controlled phase). This represents a very good robustness property, and performances largely exceeding the need expressed for a DIMES-like measurement system for semi-hard landers.

The NPAL concept is applicable to high-speed landing missions (like Exomars or MER) where the transverse velocity has to be estimated in order to be cancelled at the end of the parachute phase.

Altimeter benefit

The use of an altimeter brings robustness to the filter. Further hybridisation (full implementation as a complementary sensor) could improve performance further.

Moreover, it contributes to improving the speed of convergence of the along-track velocity. This can be very useful to ensure the stability of the GNC closed loop.

Impact of Vertical Trajectory

On Mars scenarios the trajectory is close to vertical, whereas the flight path angle considered in the Mercury scenarios was only 20 degrees. This element gives more stability to the filter. For instance, in a very


inclined trajectory, some tracked points can be very far from the vehicle, and an attitude error can translate into a very large error on the position of the tracked points close to the horizon.

Conclusion for NPAL Martian simulations results

Tolerance of the concept to high angular rates has been demonstrated:

Lander oscillations under parachute cause phases with large angular slews. In those phases, the Aidings provided by the Navigation Filter to the Image Processing are essential to ensure correct tracking of the points. The closed-loop tests (Image Processing and Navigation) validate the good behaviour and strong robustness of the NPAL concept in such conditions. Indeed, even with high initial uncertainties (30 m/s in velocity and 5 deg in attitude), the Aidings delivered by the Navigation Filter to the IP allow an efficient tracking of feature points, which enables the convergence of the Navigation Filter.

In addition, a large FoV appears as a key to success if we want to keep the same terrain in visibility despite a large relative vehicle motion.

NPAL concept has shown good behaviour in presence of Martian geological particularities:

A main concern is the presence of plains with few craters and little relief. In the phases where such terrain occupies the main part of the field of view, two consequences can be feared: a shortage of feature points, and a high proportion of “fake” points (resulting from aliasing effects). Both can affect the Navigation Filter performance.

The tests conducted here have shown the good tolerance of the concept to phases with few consistent tracks once the filter has already converged. However, VBN should be activated early in the mission (at altitudes of several kilometres) in order to increase the chance of spotting feature points likely to be tracked over several frames, and to ensure the good initial convergence of the filter.

Conversely, this reinforces the requirement that the camera and the IP be capable of working optimally with low-contrast, complex-textured images.

Applicability for Airbag and Soft Landing missions

The NPAL concept is well adapted to estimating the cross velocity during the parachute phase. No additional altimeter is required for this function. With an initial uncertainty of 30 m/s on velocity, the filter converges to around 0.5 m/s by the end of the main parachute phase. A trade-off should be made between the estimation accuracy required and the computational load of the filter.

For soft landing missions, the use of an altimeter guarantees a better stability of the filter and a quicker convergence, which could be essential when closing the GC loop. In the presence of a dust storm occupying the whole field of view during, for instance, the entire second phase of the trajectory, the altimeter, by ensuring an earlier convergence, would help maintain the estimation function within the required performance.


6 VBNAT – END TO END VALIDATION

6.1 PANGU: VIRTUAL SCENE GENERATOR EMBEDDED

6.1.1 PANGU: Planet and Asteroid Natural scene Generation Utility

Landing a spacecraft on the surface of another body in the solar system is a difficult and hazardous undertaking. If a vision-guided lander system is to be used, then it needs to be thoroughly tested, before being considered, on realistic images obtained under different lighting conditions and from different positions and attitudes relative to the target landing site. Physical test systems, such as CamRobII, suffer from many problems, including illumination, calibration, scale, flexibility and cost. As a result, PANGU was developed to generate detailed synthetic models of cratered planetary surfaces and to allow images of those models to be generated from any position and under any lighting condition. The images are generated using OpenGL to leverage the powerful 3D rendering facilities of modern graphics cards. A TCP/IP interface allows clients to obtain images from specific locations and attitudes, as well as to perform back-projection queries to obtain the physical 3D location of feature points extracted from synthetic images. This enables the navigation filters to be validated.

Although PANGU was initially developed to model the Moon and Mercury, it has been developed further to provide models of asteroids and of Mars (including sand-filled craters). On the image generation side there have been improvements in rendering speed and quality, along with the addition of facilities such as RADAR and LIDAR instrument simulation. An example of an early Mars model is shown in Figure 6-1.

Figure 6-1: NASA Spirit rover (top-left), NASA Viking lander (top-right), PANGU synthetic image (main)


6.1.2 PANGU Architecture

The architecture of the PANGU system is summarised in Figure 6-3 and consists of two sets of tools. The first set of tools is used to generate the synthetic surfaces. They can be used to generate fractal surfaces and lists of craters, and then add those craters to the surface in a realistic way, ensuring that the crater size, age and density distributions defined by the user are obeyed. Young craters tend to have prominent rims and visible ejecta blankets, while older craters have their rims eroded and their bowls filled in to varying degrees. A cross-section of a crater is shown in Figure 6-2, highlighting the key parts of the crater which are modelled in detail by PANGU. Once the appropriate models have been created and the craters added, the result can be written to a special polygon file for use with the second set of tools.

[Figure: crater cross-section, showing the terrain level, the crater height H, the rim height Hr and the crater diameter D]

Figure 6-2: Simple Crater Model

The second set of tools comprises the visualisation tools. These include a tool for generating shadow maps for a particular model and Sun position/direction, and the main image generation tool: the PANGU OpenGL viewer. Shadows are vital for producing realistic images but take a significant amount of time to compute, particularly for the large models required by the NPAL study. As a result, shadow generation is an off-line task which must be performed separately before a model is viewed. However, for a specific model and Sun orientation the shadow map only needs to be generated once. Several maps can be generated for different Sun orientations and the appropriate one selected when the viewer is launched.

[Figure: PANGU architecture diagram — a surface generator GUI drives the surface generator (surface parameters, crater list, crater and boulder models, texture map and shadow map generation, surface DEM and polygons); an image generator GUI drives the surface viewer through a socket interface (camera model parameters, illumination parameters, flight path, camera coordinates) to produce images]

Figure 6-3: PANGU Architecture

An example of a real Apollo image compared with a synthetic PANGU image is given in Figure 6-4. The side of the Apollo spacecraft has been added to the PANGU image to assist with the visual comparison.


Figure 6-4: Apollo Image (left) and PANGU Image (right)

6.1.3 Large Models

A requirement of the NPAL study was to be able to view the synthetic model with a 70 degree field of view from high gate down to low gate. This required a model which was 500 km in size along each edge and which could be viewed at the landing site with a grid resolution of 25 cm. With current technology it is not possible to construct and manipulate models of such size at such a high resolution, so PANGU uses a hierarchical modelling technique. Using a base digital elevation model of 1025×1025 samples, the 500 km area was synthesised. Then the central section was removed and replaced by a new 1025×1025 sample DEM at twice the physical resolution, with sides of 250 km. This process was repeated to obtain a 12-layer model in which the central layers had the desired resolution of 25 cm. The resulting model can be viewed over a wide range of distances from the central landing site while using only 500 MB of disk space and 1 GB of RAM. An example of several layers of a 65×65 base model is shown in Figure 6-5.
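The per-layer grid resolution of the hierarchy follows directly from the construction above: each layer halves the physical extent while keeping a 1025×1025 DEM (1024 grid intervals), so the resolution doubles at every level. A minimal sketch:

```python
# Hierarchical model resolutions: layer 0 covers 500 km with 1024 grid
# intervals; every subsequent layer halves the extent (doubles resolution).
SIZE_M = 500_000.0  # outermost layer edge length, m
INTERVALS = 1024    # 1025 samples -> 1024 grid intervals
LAYERS = 12

resolution_m = [SIZE_M / 2**layer / INTERVALS for layer in range(LAYERS)]
print(f"{resolution_m[0]:.1f}")   # → 488.3 (m/sample, outermost layer)
print(f"{resolution_m[-1]:.2f}")  # → 0.24 (m/sample at the landing site)
```

The innermost layer thus reaches about 0.24 m per sample, i.e. the roughly 25 cm resolution required at the landing site.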

Figure 6-5: PANGU Hierarchical Model

6.1.4 Camera Model

PANGU uses a simple pin-hole camera model. The aim is to provide “perfect” images which can be post-processed by other tools to obtain images that are representative of those generated with a specific camera.
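An ideal pin-hole projection of this kind can be sketched as follows. The function name, axis convention and default parameters (70 degree field of view from Section 6.1.3, a 1024×1024 image) are our assumptions for illustration, not PANGU's actual interface:

```python
import math

def project_pinhole(p_cam, fov_deg=70.0, width=1024, height=1024):
    """Project a point given in camera coordinates (x right, y down,
    z along the boresight, metres) to pixel coordinates, using an ideal
    pin-hole model with a square pixel grid and no distortion."""
    x, y, z = p_cam
    if z <= 0.0:
        return None  # point behind the camera
    # Focal length in pixels from the horizontal field of view.
    f = (width / 2.0) / math.tan(math.radians(fov_deg) / 2.0)
    u = width / 2.0 + f * x / z
    v = height / 2.0 + f * y / z
    return (u, v)
```

A point on the boresight projects to the image centre, e.g. `project_pinhole((0.0, 0.0, 100.0))` returns `(512.0, 512.0)`; a point at 35 degrees off-axis lands on the image border.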


6.1.5 Image Library

PANGU was used in the early stages of the NPAL project to create a library of image sequences. These covered different trajectories and lighting conditions for two types of model: one model was fairly smooth, while the other was rough and heavily cratered. These image sequences, which included rotations of the camera about a single axis, were used in the feature extraction and tracking algorithm selection process, along with a sequence of (poor quality) images from the Apollo Lunar landing programme.

6.1.6 TCP/IP Facilities for VBNAT

In addition to providing the early test images, PANGU was used throughout the NPAL project as the image generator for the VBNAT tool. The VBNAT tool passes the position and orientation of the FEIC camera to the PANGU viewer running on a separate PC. PANGU then generates an image from this viewpoint within a few seconds and returns it to VBNAT.

PANGU also provides various back-projection facilities: given a position and a direction, PANGU can return the 3D coordinates of the closest point on the model that is visible along that direction. Additionally, given the coordinates of a pixel from a previously generated image, PANGU can be used to determine the corresponding 3D location of the point on the model which defines the colour of that pixel. This back-projection facility enables VBNAT to compare the 3D coordinates of feature points obtained from the navigation filter against the true 3D position as returned by PANGU.

6.1.7 Summary

PANGU has provided vital facilities for the generation of realistic Mercury-like images for various positions, attitudes and Sun illumination conditions. These images have been used to guide the selection of suitable image processing algorithms for the FEIC feature extraction and tracking, to validate and test the behaviour of the FEIC FPGA implementation, and to support the development and use of the VBNAT tool for vision-guided navigation.


6.2 VBNAT V3: LANDER GNC DESIGN VBNAT stands for Vision-Based Navigation Analysis Tool. It is an overall simulation environment, including all the modules needed for vision-based navigation. It has been developed following an incremental approach, through progressive upgrades of the simulation environment.

[Figure 6-6 shows the incremental development: PANGU inputs, the navigation algorithms, the image processing algorithms and the FEIC board feed successive versions VBNAT 1.0 to 4.0, covering set-up of the simulation environment and PANGU integration, navigation performance, Nav & IP algorithm performance, and VBNAT update and improvement.]

Figure 6-6 VBNAT Development : an incremental approach

Throughout the successive versions of the VBNAT software, updates have been made to the CAMERA block, the Dynamics and the GNC.

6.2.1 Tool Functionalities VBNAT is a multi-user environment that brings together all the functionalities developed for vision-based navigation simulations.

[Figure 6-7 shows the top-level Simulink diagram of the Vision-Based Navigation Analysis Tool V3.5, comprising: Actuators (main engine, AOCS thrusters); Sensors (IMU, Camera, IP); Environment (S/C dynamics, atmosphere, parachute, sloshing, gravity); Navigation (navigation filter, LS position estimation); G&C (guidance laws, position and attitude control); and post-processing (covariance analysis).]

Figure 6-7 VBNAT functionalities

Some tools have been developed across the various VBNAT versions to store, visualise and post-process images and tracks. All these tools are Simulink blocks based on Matlab functions and S-functions.


Generic Model blocks

Guidance & Control Design

The Guidance and Control block can be decomposed into four main blocks in series: the guidance block, whose role is to provide a reference state for the Lander to follow; the position control block, which maintains the Lander on the trajectory provided by the guidance; the attitude control block, which enables manoeuvres; and the thruster selection block, which generates the commands for each individual thruster of the Lander.

Navigation Design

The main elements of the Navigation function are:

- An Extended Kalman Filter: the navigation filter is a two-frequency extended optimal Kalman filter.

- A Mean Plane Estimator (least-squares estimation of a mean plane containing the tracked feature points, using the distances from the vehicle to each feature point and taking their uncertainty into account)

- A Landing Site Estimator (least-squares estimation of a "local" plane: same principle as the mean plane estimation, but with weights depending on each feature point's distance to the landing site, in order to estimate the mean plane around the landing site)

- An Image Processing Aiding Function based on a homography principle (providing the position of a feature point on the camera screen at time t+dt from its position at time t)
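The mean plane and landing site estimators both reduce to a (weighted) least-squares plane fit to the tracked feature points. A minimal sketch of that operation, with hypothetical names; the actual NPAL implementation is not reproduced here:

```python
import numpy as np

def fit_mean_plane(points, weights=None):
    """Weighted least-squares fit of the plane z = a*x + b*y + c to a set of
    feature points. `weights` can down-weight points far from the landing
    site, as in the landing site estimator. Returns the plane coefficients
    and the unit normal."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    z = pts[:, 2]
    if weights is not None:
        w = np.sqrt(np.asarray(weights, dtype=float))
        A, z = A * w[:, None], z * w
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
    normal = np.array([-a, -b, 1.0])
    return (a, b, c), normal / np.linalg.norm(normal)

# Feature points lying exactly on the plane z = 0.1*x + 0.2*y + 5
pts = [(0, 0, 5), (10, 0, 6), (0, 10, 7), (10, 10, 8)]
(a, b, c), n = fit_mean_plane(pts)
```

Since the sample points lie exactly on a plane, the fit recovers its coefficients; in practice the residuals reflect the terrain roughness and measurement errors.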

Sloshing Model

A sloshing model is included in the VBNAT Dynamics block. Fuel slosh is notoriously difficult to model accurately: the behaviour of the fuel mass depends on a number of factors, including the presence of a gravitational field, S/C manoeuvres, and the shape, composition (surface tension, etc.) and partitioning (bladder, etc.) of the tank.

If we assume a spherical tank in a gravitational field, the fuel will sit in one part of the tank and, when disturbed, its surface will oscillate (with surface tension effects tending to damp the motion and distort the surface). The surface motion is of pendulum type.

Since the manoeuvres in this simulation are mainly constant accelerations, the pendulum model is the best fit for these conditions, with added damping to represent dissipative fuel forces. This simple model is an approximation of the general fluid behaviour and is used to test the sensitivity of the design to sloshing effects, not to predict fluid behaviour precisely.
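A minimal sketch of such a damped pendulum slosh model under a constant thrust acceleration; all names and parameter values are illustrative, not the VBNAT implementation:

```python
import math

def slosh_pendulum(theta0, omega0, accel, length, damping, dt, steps):
    """Damped pendulum stand-in for fuel slosh: the constant thrust
    acceleration `accel` (m/s^2) provides the restoring term, and a linear
    damping term represents dissipative fuel forces. Semi-implicit Euler
    integration; returns the history of the slosh angle (rad)."""
    theta, omega = theta0, omega0
    history = [theta]
    for _ in range(steps):
        # Angular acceleration: restoring term + linear damping
        alpha = -(accel / length) * math.sin(theta) - damping * omega
        omega += alpha * dt
        theta += omega * dt
        history.append(theta)
    return history

# 0.2 rad initial offset, 1 m pendulum arm, 3 m/s^2 thrust, light damping
hist = slosh_pendulum(0.2, 0.0, 3.0, 1.0, 0.1, 0.01, 5000)
```

The oscillation decays under the damping term, mimicking the way fuel motion slowly dissipates after a disturbance.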

Parachute model

The parachute model has been derived from the sloshing model. Starting from the pendulum equations, there are two differences: the link is not rigid, and drag forces are applied to the pendulum.

IMU

The following errors of the Inertial Measurement Unit have been modelled:

- Gyrometer scale factor, angular noise, bias, and random drift


- Accelerometer scale factor, bias and noise

Atmospheric perturbations model

A parameterised planetary atmosphere model has been implemented to compute the air perturbations on the Lander, from which the aerodynamic forces and torques applied to the S/C are derived. A wind profile model completes the atmospheric perturbation module and allows wind profile scenarios and random wind gusts to be simulated.

The aerodynamic forces and torques are computed as a function of the S/C velocity and attitude. The model is assumed to cover incidence angles below 30 degrees; in this domain, lift forces and torques are assumed to be linear with respect to the incidence angle. The linear coefficients CL and CT are high-level tunable parameters.
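The linear lift model can be sketched as follows. Only the linearity in incidence and the 30-degree domain limit come from the text; the function name, the scaling by dynamic pressure and the numerical values are illustrative assumptions:

```python
import math

def lift_and_torque(incidence, dyn_pressure, C_L, C_T,
                    limit=math.radians(30.0)):
    """Return (lift force, torque) under the linear-in-incidence assumption.
    `C_L` and `C_T` are the high-level tunable linear coefficients; the
    model is only valid inside the 30-degree incidence domain."""
    if abs(incidence) > limit:
        raise ValueError("incidence outside the 30 deg linear domain")
    return C_L * dyn_pressure * incidence, C_T * dyn_pressure * incidence

# 10 deg incidence, 500 Pa dynamic pressure, illustrative coefficients
F, T = lift_and_torque(math.radians(10.0), 500.0, 1.2, 0.3)
```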

Image Generation / Image Processing The image generation process, using the PANGU software, has been directly included within the VBNAT environment. The Simulink™ model sends its image generation requests through a TCP/IP interface to the computer where PANGU is running. PANGU uses the Digital Elevation Model (DEM) of the terrain and the camera position and attitude to generate the images at the camera frequency, and returns each image to the VBNAT environment.

[Figure 6-8 shows the VBNAT PC sending the vehicle's true position and attitude over TCP/IP to the PANGU PC, which combines them with the DEM to render an image of the Mercury descent scenario (S/C trajectory and attitude plotted in metres along X, Y and Z) and returns the image.]

Figure 6-8 Links between VBNAT environment and PANGU

The generated images are used as inputs by a camera model, which reproduces the realistic noise introduced by a real camera: the camera transfer function, radiometry and distortions.

The Image Processing (IP) prototype has been developed in C++, these being the most complex and time-consuming functions. The whole image processing algorithm (extraction + tracking) is a single function encapsulated in a Simulink S-function.

The output of the FEIC Extraction & Tracking function is a concatenation of two lists: the list of new points (Tn) and the list of tracked points (Tc). The FEIC module uses the aidings provided by the Navigation filter to track the selected feature points.

The VBNAT environment also provides dedicated functionalities related to image generation and image processing:

- write generated images in PGM format files


- For open loop simulations, one can read successive images from PGM files

- The DisplayImage S-function allows the current frame to be displayed

- A Back Projection function has been extracted from the original PANGU function: its objective is to locate image points on the ground surface, to serve as a physical reference for further performance evaluation.

Post-processing A set of post-processes has been defined, to be used after a simulation or a set of simulations to easily assess the performance reached. After a single run, the appropriate post-process computes and plots the relevant data. Typical post-processes provide, for example:

- Cross-track and along-track Position and Velocity estimation error

- Estimation error on the feature point positions using the results of the back-projection tool

- Position in IPQ, velocity in IPQ and attitude in RBF estimation errors

- True and estimated Landing Site position in LDS frame

- Mean plane parameters in LDS frame: estimated distance to plane and estimated normal to plane

- Errors made by the navigation aiding function: difference between the real and predicted positions of the feature points in the camera frame at the following time step.

- IP post-processing: evaluating the tracking error evolution (TEE) using back-projection of the real and estimated point positions

SpaceLab Compatibility Some of the modules of the VBNAT environment have been formatted to be compatible with the ESA Spacelab library:

- Camera V1: simple model of the camera

- Detailed camera model: adds noise representative of the camera to an input raw image

- PANGU interface (PGIF): retrieves images from the PANGU viewer running on another PC, through a TCP/IP link

- Read and write PGM images
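PGM is a simple grayscale format with a short ASCII header followed by raw pixel bytes. A minimal sketch of binary PGM (P5) writing and reading, the format used by VBNAT to store generated frames (this simplified reader ignores comment lines; all names are illustrative):

```python
import os
import tempfile

def write_pgm(path, pixels, maxval=255):
    """Write an 8-bit grayscale image (list of rows of ints) as binary
    PGM (P5): magic line, width/height line, maxval line, then raw bytes."""
    h, w = len(pixels), len(pixels[0])
    with open(path, "wb") as f:
        f.write(b"P5\n%d %d\n%d\n" % (w, h, maxval))
        f.write(bytes(v for row in pixels for v in row))

def read_pgm(path):
    """Read a binary PGM (P5) file back into a list of rows of ints.
    Minimal: assumes a 3-line header with no comment lines."""
    with open(path, "rb") as f:
        magic, dims, _maxval = f.readline(), f.readline(), f.readline()
        assert magic.strip() == b"P5"
        w, h = map(int, dims.split())
        data = f.read(w * h)
    return [list(data[r * w:(r + 1) * w]) for r in range(h)]

# Round-trip a small synthetic frame
img = [[(x + y) % 256 for x in range(8)] for y in range(4)]
path = os.path.join(tempfile.gettempdir(), "vbnat_frame.pgm")
write_pgm(path, img)
round_trip = read_pgm(path)
```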

6.2.2 Architecture and interfaces The following figure shows the global simulator architecture and the interfaces of the image processing functionalities.


[Figure 6-9 shows the global simulator architecture: PANGU images pass through the camera model to the FEIC Extraction & Tracking block, whose Tn/Tc lists reach the navigation filter via the FEIC/OBC interface; guidance, control and the spacecraft dynamics close the loop.]

Figure 6-9 Global architecture of the simulator. Definition of Camera Model, FEIC Extraction & Tracking, FEIC/OBC Interface.

As mentioned in previous sections, functionalities have been added to the simulator to store and read images. This avoids having to run PANGU and the camera model in the loop: a first simulation is performed to store the sequence of images, which can then be read off-line inside the loop.

6.2.3 Covariance Analysis Tool A covariance analysis facility has been added to the VBNAT environment. This tool enables two types of analysis:

- Inertial navigation dispersion errors (taking into account initial estimation errors, IMU errors, projection errors and gravity field model errors),

- Dynamic dispersions (taking into account initial dispersions, boost realisation errors, gravity field model errors and attitude control errors).

Inertial Navigation This tool propagates the initial uncertainty on position, velocity and attitude along a reference trajectory, under assumptions on the Inertial Measurement Unit characteristics and the gravity model.

The navigation error sources taken into account are the following:

- Initial dispersion = a priori knowledge errors

- Accelerometer errors

- Gyro errors

- Projection errors: attitude errors induce errors in the projection of the acceleration and angular velocity vectors

- Gravity field errors: due to position errors (Schuler effect) and gravity model errors.
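The core of such a tool is the discrete propagation of the error covariance along the reference trajectory, P_{k+1} = F_k P_k F_k^T + Q_k. A minimal sketch; the 1D position/velocity example and all numerical values are illustrative, not the NPAL error budget:

```python
import numpy as np

def propagate_covariance(P0, F_list, Q):
    """Discrete-time covariance propagation P_{k+1} = F P_k F^T + Q along a
    reference trajectory: the basic operation behind the covariance tool."""
    P = np.asarray(P0, dtype=float)
    for F in F_list:
        P = F @ P @ F.T + Q
    return P

# 1D position/velocity example: constant-velocity transition over dt, with
# a small process noise standing in for accelerometer errors
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = np.diag([0.0, 1e-4])
P = propagate_covariance(np.diag([1.0, 0.01]), [F] * 100, Q)
```

The position variance grows along the trajectory through the coupling with the velocity uncertainty, which is exactly the dispersion growth this tool quantifies.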


Dynamic Dispersion Dynamic dispersions are defined as the difference between the actual state vector (under hypotheses on boost and attitude realisation errors) and the nominal pre-calculated state vector.

This tool propagates an initial dispersion (covariance matrix) along a reference trajectory. It makes it possible to:

- simulate an uncontrolled boost (“Open Loop”)

- simulate a controlled boost (“Closed Loop”) at first order:

o perfect Guidance in Position and Attitude,

o perfect control in Position

o non perfect control in Attitude (if Limit Cycle option is activated)

o Attitude Navigation accuracy = accelerometer accuracy

The dynamic dispersion sources taken into account are the following:

- initial dispersions around nominal trajectory start point

- Boost realization errors

- Attitude realization errors

- Gravity errors : due to position errors (Schuler effect), and Gravity model errors.


6.3 VBNAT V4: HARDWARE IN THE LOOP The real-time configuration of VBNAT has been validated in several steps. The initial step consists in interfacing with real hardware through the Simulink interface. The VBNAT V4.0 version of the simulator corresponds to a transition between the simulated benchmark, within the full Matlab/Simulink environment, and the real-time benchmark, in a full Java environment. The objective is to connect the FEIC hardware into the VBNAT V3 environment, i.e. hardware in the loop in the Simulink environment.

To test the interfaces of the FEIC hardware connected to the Simulink environment, the two main modules have been extracted:

- IP: simulates the FEIC behaviour. This block will be replaced with the Java FEIC interface module, in order to communicate with the FEIC hardware

- FEIC/OBC interface : retrieves and generates the G-List required by the FEIC (including the predicted points). It also provides the measured points to the Navigation module.

The objective of the VBNAT V4.0 simulator is then to stimulate those modules (FEIC and G-List) with the correct inputs/outputs, with the FEIC hardware dialoguing directly with the Simulink module through the FEIC Java interface module.

The following figure presents the simplified architecture of the VBNAT V4.0 simulator. The blocks developed in pure Simulink are shown in blue (Navigation G-List with associated interfaces) and the blocks developed in pure Java are shown in magenta (i.e. the interface to the FEIC, which dialogues with the FEIC hardware through the USB SpaceWire). One part of the FEIC interface is coded in Java but runs under the Simulink environment.

[Figure 6-10 shows the Simulink side (Navigation: G-List I/F) connected through paired FEIC interface blocks (Java) to the FEIC over the USB SpaceWire.]

Figure 6-10 Java functions and Simulink environment for VBNAT V4.0

6.3.1 Description of validation test

The test performed in the VBNAT V4.0 environment consists in sending 50 similar images (403x403 pixels) to the FEIC with a simple linear shift between two consecutive images: -1.5 pixel along X and -0.5 pixel along Y. This translational motion can be penalising: since the translation does not correspond to an integer number of pixels, for a given point the light level of the associated pixel will be slightly different between two consecutive images (due to the interpolation of the image after translation). This could lead to oscillating errors.
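A fractional shift like the one above can be reproduced with bilinear interpolation, which also exhibits the light-level changes mentioned in the text. This is a sketch with illustrative names; the actual test image generation is not reproduced:

```python
import numpy as np

def shift_image(img, dx, dy):
    """Shift `img` by a (possibly fractional) number of pixels using
    bilinear interpolation. Fractional shifts slightly alter pixel light
    levels, which is what makes this test case penalising for a tracker."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    xs, ys = xs - dx, ys - dy                     # sample positions in source
    x0, y0 = np.floor(xs).astype(int), np.floor(ys).astype(int)
    fx, fy = xs - x0, ys - y0
    x0, y0 = np.clip(x0, 0, w - 2), np.clip(y0, 0, h - 2)
    return ((1 - fx) * (1 - fy) * img[y0, x0] +
            fx * (1 - fy) * img[y0, x0 + 1] +
            (1 - fx) * fy * img[y0 + 1, x0] +
            fx * fy * img[y0 + 1, x0 + 1])

# 50 frames of 403x403 synthetic texture, each shifted (-1.5, -0.5) pixels
# with respect to the previous one, as in the VBNAT V4.0 test
rng = np.random.default_rng(0)
frames = [rng.random((403, 403))]
for _ in range(49):
    frames.append(shift_image(frames[-1], -1.5, -0.5))
```

Because each shifted frame is a weighted average of four source pixels, a given feature's brightness oscillates slightly from frame to frame, reproducing the interpolation effect described above.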


[Figure 6-11 shows the reference image of the test, annotated with the translation between two consecutive images: -1.5 pixel along the X axis and -0.5 pixel along the Y axis.]

Figure 6-11 Reference image of the VBNAT V4.0 test : translation between 2 consecutive images

Both options for sub-pixel accuracy, 4 neighbours or 8 neighbours, have been tested. The Harris window origin was set to x = 170 pixels, y = 170 pixels and its size to 200x200 pixels. Four sub-windows have been used, with overlapping areas 10 pixels wide.

6.3.2 Strategy with 4 neighbours

For each image, the FEIC provides the T-list, which contains the points that have been tracked from the previous image to the current one. For each tracked point, the FEIC provides the strengths of the eight pixels neighbouring the tracked point. The 4-neighbour strategy consists in using only 4 of these pixels (North, South, East and West) to obtain sub-pixel accuracy.

[Figure 6-12 illustrates the four-neighbour strategy: the sub-pixel position of the tracked point is derived from the current tracked point and its North, South, East and West neighbours.]

Figure 6-12 Four neighbours strategy : subpixel accuracy calculated using linear interpolation

This strategy is obviously less precise than the one using the 8 neighbours; however, it is simpler and requires less computation.
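One way to realise a 4-neighbour sub-pixel estimate is a 1D parabola fit per axis using the centre strength and the two opposite neighbours. The report does not give the exact formula used by the FEIC, so the closed form below is illustrative only:

```python
def subpixel_4n(C, N, S, E, W):
    """Sub-pixel offset of a Harris-strength maximum from its four direct
    neighbours. A 1D parabola is fitted independently along each axis;
    x grows towards East, y towards South (image convention).
    Returns (dx, dy) in pixels, each within (-0.5, 0.5) for a true peak."""
    dx = 0.5 * (E - W) / (2.0 * C - E - W)
    dy = 0.5 * (S - N) / (2.0 * C - N - S)
    return dx, dy

# Strengths sampled from the paraboloid 1 - (x - 0.2)^2 - (y + 0.1)^2,
# i.e. a true maximum at offset (+0.2, -0.1) from the central pixel
dx, dy = subpixel_4n(C=0.95, N=0.15, S=-0.25, E=0.35, W=-0.45)
```

Since these sample strengths come from a pure quadratic, the parabola fit recovers the (+0.2, -0.1) offset exactly; real Harris surfaces deviate from a quadratic, which is one source of the residual tracking error.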

In the VBNAT simulator, the navigation module computes an aiding, which allows the FEIC to predict the future position of each tracked point. In the test performed on VBNAT V4.0, the aiding coming from the navigation module is set to zero. This means that the FEIC will try to track the points at wrong locations, with an error corresponding to the movement between two images: ∆x = 1.5 pixel and ∆y = 0.5 pixel. The track statistics are computed over the 50 images and all tracked points.

The norm of the x and y errors is around 0.4 pixel. The following figures show the results of the 4-neighbour strategy: the result is plotted in green, the standard deviation (1 σ) about the result in blue, and the minimum and maximum values in red.


[Figure 6-13 plots the norm of the pixel error (sub-pixel translation error without texture update) against frame number for the 50 frames; the error settles at around 0.4 pixel.]

Figure 6-13 Tracks statistics for 4-neighbour strategy : norm of the error

[Figure 6-14 shows the 403x403 test image with the tracks overlaid.]

Figure 6-14 Plot of the tracks on the image used in the VBNAT V4.0 simulation with 4-neighbour strategy (tracks in red and blue crosses for new points)

6.3.3 Strategy with 8 neighbours

With the 8-neighbour strategy, sub-pixel accuracy is obtained using not only the 4 direct neighbours (North, South, West and East), but also the 4 diagonal pixels (North-West, North-East, South-West, South-East). This strategy should lead to an improved determination of the tracked point locations.

[Figure 6-15 illustrates the eight-neighbour strategy: the sub-pixel position of the tracked point is derived from the current tracked point and all eight surrounding pixels.]

Figure 6-15 Eight neighbours strategy : subpixel accuracy calculated using biquadratic interpolation


The algorithm used to obtain sub-pixel accuracy in the eight-neighbour case is based on biquadratic interpolation: it determines the coordinates of the maximum using the strengths of the nine pixels (central pixel + 8 neighbours). Here also, the aiding coming from the navigation is set to zero, which means that the FEIC will try to track the points at wrong locations, with an error corresponding to the movement between two images: ∆x = 1.5 pixel and ∆y = 0.5 pixel. The track statistics are computed over the 50 images and all tracked points.
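The 9-point biquadratic scheme can be sketched as a least-squares quadratic surface fit whose stationary point gives the sub-pixel offset. This is an illustrative reconstruction; the FEIC's exact formulation is not reproduced here:

```python
import numpy as np

def subpixel_8n(strengths):
    """Fit f(x,y) = a + b x + c y + d x^2 + e x y + g y^2 to the 3x3 block
    of strengths (central pixel + 8 neighbours) by least squares and return
    the (dx, dy) offset of the surface maximum."""
    S = np.asarray(strengths, dtype=float)        # S[row, col], row = y
    xs, ys = np.meshgrid([-1, 0, 1], [-1, 0, 1])  # pixel offsets
    x, y, z = xs.ravel(), ys.ravel(), S.ravel()
    A = np.column_stack([np.ones(9), x, y, x * x, x * y, y * y])
    a, b, c, d, e, g = np.linalg.lstsq(A, z, rcond=None)[0]
    # Stationary point of the fitted quadratic: solve the 2x2 linear system
    H = np.array([[2 * d, e], [e, 2 * g]])
    dx, dy = np.linalg.solve(H, [-b, -c])
    return dx, dy

# Strengths sampled from a paraboloid peaking at offset (+0.3, -0.2)
peak = lambda x, y: 1.0 - (x - 0.3) ** 2 - (y + 0.2) ** 2
S = [[peak(x, y) for x in (-1, 0, 1)] for y in (-1, 0, 1)]
dx, dy = subpixel_8n(S)
```

Using all nine strengths constrains the cross term of the quadratic as well, which is why this scheme locates the peak better than the 4-neighbour one.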

With the 8-neighbour strategy, the norm of the x and y errors is around 0.2 pixel (green curve), twice as good as the performance obtained with only 4 neighbours. The following figures show the results of the 8-neighbour strategy: the result is plotted in green, the standard deviation (1 σ) about the result in blue, and the minimum and maximum values in red.

[Figure 6-17 plots the norm of the pixel error (sub-pixel translation error without texture update) against frame number for the 50 frames; the error settles at around 0.2 pixel.]

Figure 6-17 Tracks statistics for 8-neighbour strategy : norm of the error

[Figure 6-18 shows the 403x403 test image with the tracks overlaid.]

Figure 6-18 Plot of the tracks on the image used in the VBNAT V4.0 simulation with 8-neighbour strategy (tracks in red and blue crosses for new points)


6.3.4 Simulated performances

To compare the FEIC performance with the simulated one, we present results obtained with the C++ prototype of the FEIC as integrated in the VBNAT V3 test bench. These results relate to the sub-pixel translation sequence; the tracking uses the 8-neighbour interpolation.

Figure 6-19 - Tracks obtained in the VBNAT V3 simulation with 8-neighbour strategy.

Figure 6-20 - Tracks statistics for 8-neighbour strategy : norm of the error

We observe that the mean and standard deviation of the norm of the error closely match the FEIC performance.


6.4 INTRODUCING THE DEVELOPMENT ENVIRONMENT FOR A VISION-BASED NAVIGATION SYSTEM

During the first phase of the NPAL project, PC software tools under the Windows XP or Linux operating systems were used for algorithm development in Simulink and image generation with PANGU.

Software development is done on the VBNAT simulation platform, which has been enriched throughout the project and used as the central development environment.

The FEIC VHDL development is done with a commercial tool, and its functions are tested separately in ModelSim or an equivalent tool. Nevertheless, when all the functions had to be tested together, the proposal to lend a test tool based on a Transtech board and an SRAM board was accepted.

This tool is later used to execute the functional validation phase of the navigation software. This solution avoids any modification of the VHDL source code between the FEIC validation and the functional validation phase, except if a FEIC failure is found.

The interfaces between the PC and the FEIC have also been chosen to keep the same hardware configuration during the FEIC development, the camera development, the functional validation and the real-time validation.

The following figure shows the common tools used throughout the project, from the camera and FEIC development to the real-time validation.

[Figure 6-21 shows the reused environments: VHDL code development with ModelSim and the Transtech board, SpaceWire mini-router and FPGA validation software under Windows XP; VBNAT V3 (Matlab) and VBNAT V4; camera development and FEIC/camera integration with the camera validation software; and FEIC/camera validation with VBNAT V4 on the EGSE — all fed by the PANGU picture generator.]

Figure 6-21: examples of reused environment developments


During the VHDL development and validation, the following test bench, combining a Transtech board, an SRAM board and a PC, was used to validate the complete FEIC function. The test bench also provided the possibility to download images through the SpaceWire link.

Then, after the FEIC validation in Dundee, the test bench was delivered to ASTRIUM and used to replay the FEIC tests and start the functional validation phase.

Only the software application on the PC was replaced; the SpaceWire interface and the drivers were kept, and the memory was filled with images transmitted through the SpaceWire communication link. The test bench was used to verify the navigation software behaviour.

[Figure 6-22 shows VBNAT V4 connected through a USB SpaceWire interface to the SRAM and Transtech boards hosting the FEIC and its SRAM.]

Figure 6-22: FEIC FPGA and functional test bench architecture

For the camera development, two software test applications have been developed.

The first application (VSIF) was devoted to camera command and telemetry; it provides a graphical user interface to initialise the camera, command mode changes, download or upload memory, and receive telemetry and images from the optics. During this first phase the FEIC is not needed; nevertheless, the communication interface between the camera and the test bench uses the same SpaceWire interface as the other parts of the NPAL project.

A second application (ESG) was developed to download or upload images to the camera without needing the optics. This solution is used to validate the camera electronics and FEIC behaviour in real time without being disturbed by the distortions that the camera optics can introduce.

The results are compared with the output of the functional validation tests.

The test bench uses a National Instruments board to transfer pictures between the PC DRAM and the camera memory through a specific parallel test interface. The camera is the master of the data transmission, and 2 gigabytes of memory are necessary to store enough pictures. The test bench is then used again during the real-time validation phase.


[Figure 6-23 shows the ESG test bench: host station 1 (the camera test bench, running the ESG picture transfer software through NI 6534 boards on the PCI bus and the VSIF software communicating with the camera through the USB SpaceWire interface, under Windows XP) connected to the camera with its two buffers and FEIC SpaceWire link.]

Figure 6-23: ESG test bench architecture

The real-time test bench is a merge of the previously used test benches: the ESG, which allows the real-time transfer of images generated by PANGU, and VBNAT V4, which is the latest status of the navigation software. The camera is used because it is the instrument that allows the real-time behaviour of the whole project set-up to be verified.


[Figure 6-24 shows the real-time test bench: host station 1 (the ESG test bench, with the picture transfer function through the NI 6534 board on the PCI bus, under Windows XP) feeding the NPAL camera (buffers 1 and 2, FEIC SpaceWire), which is connected through the USB SpaceWire interface to host station 2 (VBNAT V4: navigation algorithms running under Matlab, the FPGA ModelSim programming tool, camera and FEIC interface functions, under Windows XP).]

Figure 6-24: real time test bench architecture


6.5 END TO END VALIDATION

The end-to-end validation gives a final overview of the major performance figures of merit of the navigation chain, in two major steps:

- Performance measured on the non-real-time implementation, with the Transtech board configuration, necessary for mission preparation and design verification

- Performance measured on the real-time implementation, with the camera itself connected to the ESG image generator

6.5.1 Non Real Time Test Bench

The non-real-time test bench corresponds to the final version of the VBNAT real-time simulator, except that it uses the FEIC on the Transtech board instead of the FEIC inside the camera. This allows the application to run in an environment close to real time, while remaining non-real-time so that debugging and validation tests are easier. The objective of the non-real-time simulations is to ensure that the behaviour of the various modules (navigation, interfaces, image processing, list processing) corresponds to what is expected, without adding the real-time constraint. Images are sent to the FEIC memory, and the reception of each image synchronises all the other modules.

Non-real-time test bench architecture

[Architecture diagram: the navigation filter, synchronised with the image sequences, communicates through a USB SpaceWire interface and a SpaceWire mini-router with the FEIC on the Transtech board, which hosts the tracking and correlation function. The image and the aidings are sent to the FEIC, and the T-list is returned.]


The test bench uses the Transtech board and the SpaceWire interface already used during the FEIC function development, together with a PC.

The PC hosts the navigation filter and a software application that loads the PANGU images through the SpaceWire link.

After FEIC initialisation, the VBNAT transmits a picture to the FEIC and waits for the T-list answer. The navigation software processes the extracted points and sends a G-list command.

The VBNAT then transmits a new image and sends the Start Tracking command; at this point the FEIC is ready to run the feature extraction and point tracking functions.

When these functions have completed, the cycle begins again.

This set-up provides the functional validation; timing measurements can also be made on the FEIC and on some algorithms of the VBNAT platform. The real-time behaviour is perturbed by the delay introduced by the image transmission, which takes a large part of the available bandwidth on the communication interface. Nevertheless, the measurements made during this test phase gave a good knowledge of the real-time behaviour and demonstrated the advantages of a step-by-step validation.
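The picture / T-list / G-list cycle just described can be sketched as a small simulation loop. This is a hypothetical stand-in, not the real SpaceWire interface: `FeicStub`, `run_cycle` and the fabricated point ids are illustrative only.

```python
# Hypothetical sketch of the VBNAT <-> FEIC command cycle. Command and list
# names mirror the report (T-list, G-list, Start Tracking); the FEIC here is
# a stand-in stub, not the real hardware interface.

class FeicStub:
    """Minimal stand-in for the FEIC on the Transtech board."""
    def __init__(self):
        self.image = None

    def load_image(self, image):
        self.image = image          # picture sent over SpaceWire

    def start_tracking(self):
        # Feature extraction + tracking would run here; return a T-list
        # (tracked-point list). Point ids are fabricated for illustration.
        return [{"id": i, "x": 10 * i, "y": 5 * i} for i in range(4)]

def run_cycle(feic, images):
    """One non-real-time validation run: image -> T-list -> G-list -> next image."""
    exchanges = []
    for image in images:
        feic.load_image(image)          # VBNAT transmits a picture
        t_list = feic.start_tracking()  # Start Tracking command, wait for T-list
        # The navigation software processes the extracted points ...
        g_list = [p["id"] for p in t_list]  # ... and answers with a G-list
        exchanges.append((t_list, g_list))
    return exchanges

cycles = run_cycle(FeicStub(), images=["img0", "img1"])
print(len(cycles))  # one T-list/G-list exchange per image
```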

Description of reference scenarios

For the validation and performance evaluation of the real-time simulator, various scenarios have been defined:

- A simple scenario to fully validate the navigation filter without the perturbations of a disturbed descent trajectory. It consists of a simple vertical descent with a constant acceleration of 3 m/s². The spacecraft starts its descent at an altitude of 2500 m above the planet surface and stops at an altitude of 100 m. The simulation considers a planet with a 7000 km radius and lasts 40 s.

- A fully mission-representative scenario, corresponding to the Mercury landing scenario used in the VBNAT V3 simulation campaign, i.e. the BepiColombo scenario of landing on the surface of Mercury. All parameters have been initialised with the Mercury data. The descent is an inclined approach with small attitude changes, and the simulation lasts around 51 s. This more realistic test allows the performance of the image processing and of the navigation filter to be evaluated.

- Mars landing scenario: all parameters have been initialised with the Mars data. The descent is a vertical descent with an initial phase under parachute and a final phase without parachute. The simulation lasts

[Sequence diagram, test bench / FEIC: the test bench initialises the FEIC and the frame buffers, sends a picture by SpaceWire command and a Flip Buffer command; the FEIC returns a T-list; the test bench sends a G-list, the next picture, a Flip Buffer command and Start Tracking, and the cycle repeats.]


around 72 s. This more realistic test allows the performance of the image processing and of the navigation filter to be evaluated.
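As a quick consistency check of the simple vertical-descent scenario: assuming the descent starts from rest (the report does not state the initial velocity), constant-acceleration kinematics reproduces the quoted 40 s duration.

```python
import math

# Consistency check on the simple vertical-descent scenario: 2500 m down to
# 100 m at a constant acceleration of 3 m/s^2. Assuming the descent starts
# from rest (an assumption, not stated in the report), d = a t^2 / 2 gives
# the descent duration.
a = 3.0                      # m/s^2
d = 2500.0 - 100.0           # m travelled
t = math.sqrt(2.0 * d / a)   # s
print(t)                     # 40.0, matching the quoted simulation length
```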

Description of image sequences

The validation of the real-time environment followed a dedicated procedure. The first steps consist in testing the navigation filter and the FEIC behaviour with a simplified scenario, i.e. a vertical descent. More realistic trajectories, corresponding to the Mercury and Mars scenarios, are then tested.

For each scenario, associated images must be available in order to feed the FEIC with data coherent with the navigation data. To validate the navigation filter behaviour, perfect simulated images, without any noise, are generated as seen by the S/C camera. With these perfect images, containing only four points available for tracking, the navigation filter should be able to perfectly estimate the S/C position and velocity.
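The simplified validation images can be thought of as the perspective projection of a few fixed surface points. A minimal pinhole sketch, with an illustrative focal length and point layout (neither is specified in the report):

```python
def project(point, cam_alt, focal_px):
    """Pinhole projection of a surface point for a nadir-looking camera.

    point    : (x, y) position on the surface, metres from the boresight
    cam_alt  : camera altitude above the surface, metres
    focal_px : focal length expressed in pixels (illustrative value below)
    """
    x, y = point
    return (focal_px * x / cam_alt, focal_px * y / cam_alt)

# Four surface points, imaged at two altitudes of the vertical descent.
points = [(100.0, 100.0), (-100.0, 100.0), (-100.0, -100.0), (100.0, -100.0)]
high = [project(p, cam_alt=2500.0, focal_px=1000.0) for p in points]
low  = [project(p, cam_alt=1000.0, focal_px=1000.0) for p in points]
print(high[0], low[0])   # the point pattern expands as the spacecraft descends
```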

For each scenario, a set of images has been generated:

- Validation scenario – simple vertical descent with simple images: a set of 40 images (1024×1024 pixels) has been generated, corresponding to a camera sampling frequency of 1 Hz. To simplify the post-processing of the results, only 4 feature points are plotted on each image.

Figure 6-25 Scenario #1: image samples #1, #20 and #40

- Scenario for validation and performance evaluation – Mercury descent with simple or PANGU images: the first set of images contains only four feature points plotted on each image, which allows the navigation algorithm to be tested without complex image processing on real images. The second set of images uses the PANGU images directly, associated with the BepiColombo trajectory.

Figure 6-26 Mercury Descent: 1st set of images for validation – image samples #1, #258 and #516


Figure 6-27 Mercury Descent: 2nd set of images for performance evaluation – images #1, #516 and #1033

- Scenario for performance evaluation – Mars descent with simple or PANGU images.

Figure 6-28 Mars Descent: image samples #1, #718 and #1437

Validation tests

Several tests have been performed to validate the full-loop behaviour and the interfaces between the navigation filter, the Java interface with the FEIC, and the FEIC hardware. For these validation tests, the reference images are simulated images containing only 4 points; the number of primary points is therefore always 4.

The global behaviour of the FEIC (point detection and tracking) and of the navigation filter has been validated, with simple and realistic trajectories and various initial covariance conditions for the tracked-point position estimation.

Performance tests

For the evaluation of the navigation filter performance, two scenarios have been tested:

- Mercury descent: a non-vertical approach with a small angle between the S/C velocity and the planet surface.

- Mars descent: an almost vertical descent with an initial phase under parachute and a final phase without.


Mercury scenario

The objective of this performance test is to evaluate the behaviour of the navigation filter and the FEIC with the same Mercury scenario as for the validation tests, but with more realistic images generated by PANGU. Several performance indicators were saved and plotted in order to assess the performance:

- Correlation values and Ids for the tracked points: the decrease of the correlation is clearly visible, indicating that the points are tracked with less accuracy. At the beginning the tracks are very long; a few seconds before landing they become shorter. The attitude manoeuvre around 8 s after the beginning of the simulation is correctly tracked (no loss of points), which demonstrates the ability of the FEIC to track points on real images.

Correlation value for tracked point #11: the maximum value in blue corresponds to the central pixel; below are the correlation values for its 8 neighbours. Ids of the 20 tracked points.

Figure 6-29 Mercury performance test : correlation and Id plots for tracked points
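The correlation values over the central pixel and its 8 neighbours, as plotted in Figure 6-29, can be illustrated with a normalised cross-correlation evaluated over a 3×3 search neighbourhood. The patch size, image data and function names below are illustrative, not the FEIC implementation:

```python
import math

def ncc(a, b):
    """Normalised cross-correlation of two equally sized patches (flat lists)."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den

def correlate_3x3(image, template, cx, cy, tsize=1):
    """NCC of `template` against the 3x3 neighbourhood centred on (cx, cy)."""
    scores = {}
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            patch = [image[cy + dy + j][cx + dx + i]
                     for j in range(-tsize, tsize + 1)
                     for i in range(-tsize, tsize + 1)]
            scores[(dx, dy)] = ncc(patch, template)
    return scores

# 7x7 test image: the template pattern sits one pixel to the right of the
# tracked position (3, 3), so the best match is found at offset (1, 0).
pattern = [1, 2, 1, 2, 5, 2, 1, 2, 1]
image = [[0] * 7 for _ in range(7)]
for j in range(3):
    for i in range(3):
        image[2 + j][3 + i] = pattern[3 * j + i]
scores = correlate_3x3(image, pattern, cx=3, cy=3)
best = max(scores, key=scores.get)
print(best)  # (1, 0): the point has moved one pixel to the right
```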

- Maximum error for tracked point positions and distance to mean plane: the maximum normalised error does not increase much. Apart from a few points with a large maximum normalised error, which are rejected by the filter through the maximum-error threshold parameter, the plot shows that the maximum error is kept below 5σ. Even with the distortion of the image due to the S/C descent, the FEIC is able to track the points with a reasonable error.

Maximum normalised error for all tracked point positions

Estimation of the distance to plane (true distance in blue, estimated distance in green; x axis in s, y axis in m)


Figure 6-30 Mercury performance test : error on tracked point position and estimation of distance to mean plane
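The rejection through the maximum-error threshold parameter can be sketched as a simple gate on the normalised error. The 5σ threshold follows the report; the error and sigma values are illustrative:

```python
def gate_points(errors, sigmas, threshold=5.0):
    """Keep tracked points whose normalised error |e|/sigma stays below threshold."""
    return [i for i, (e, s) in enumerate(zip(errors, sigmas))
            if abs(e) / s < threshold]

# Illustrative data: point 2 exceeds the 5-sigma gate and is rejected.
errors = [0.8, -2.1, 12.0, 3.9]    # pixel errors on tracked point positions
sigmas = [1.0, 1.0, 2.0, 1.0]      # assumed 1-sigma uncertainties
print(gate_points(errors, sigmas)) # [0, 1, 3]
```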

- Cross-track and along-track velocities: the cross-track covariance has converged very quickly, and the cross-track innovations are almost always higher than it. Even though the cross-track velocity error is very small, it remains higher than its covariance, which could explain the instabilities observed in the maximum normalised error. The same phenomenon can be observed on the along-track velocity innovation: even though the final performance is good, around 2 m/s, the innovation remains higher than its covariance, which has converged very quickly. This could also explain the instabilities and oscillations observed.

Cross-track innovation of the S/C velocity projected on the three S/C axes (x in blue, y in green, z in red), compared to the velocity covariance (positive part in blue). Along-track innovation of the S/C velocity (in green), compared to the velocity covariance (in blue).

Figure 6-31 Mercury performance test: cross-track and along-track velocity errors

This first performance test gives good results in terms of velocity error and point tracking, but also shows oscillations of the navigation filter, with innovations higher than their respective covariances.

Mars scenario

The same performance evaluation has been done on the Mars scenario. Of particular interest here is the initial phase of the descent, a parachute phase during which the S/C oscillates under the parachute. The same performance indicators were saved and plotted in order to assess the performance:

- Correlation values and Ids for the tracked points: the following figure shows the impact of the parachute phase on the correlation value, with oscillations due to the S/C movement under the parachute.


Correlation value for tracked point #0: the maximum value in blue corresponds to the central pixel; below are the correlation values for its 8 neighbours. Ids of the 20 tracked points.

Figure 6-32 Mars performance test: correlation and Id plots for tracked points

- Maximum error for tracked point positions and distance to mean plane: even with the initial parachute phase and the distortion of the image due to the S/C descent, the FEIC is able to track the points with a reasonable error. At the end, due to the terrain morphology, fewer points can be detected and the performance is degraded.

Maximum normalised error for all tracked point positions

Estimation of the distance to plane (true distance in blue, estimated distance in green; x axis in s, y axis in m)

Figure 6-33 Mars performance test: error on tracked point position and estimation of distance to mean plane

- Cross-track and along-track velocities: the parachute phase has no impact on the velocity innovations. The cross-track velocity innovation performance is good (below 25 cm/s more than 60 s before landing) but, here also, the innovation is higher than the covariance. The along-track velocity innovation is higher than its covariance and reaches very high values at the beginning of the descent; the filter has difficulty estimating the velocity correctly, probably due to a poor estimation of the tracked points. The final performance is good, with an almost null innovation, but the maximum innovation 10 s before landing is a little below 5 m/s, which is still high.


Cross-track innovation of the S/C velocity projected on the three S/C axes (x in blue, y in green, z in red), compared to the velocity covariance (positive part in blue). Along-track innovation of the S/C velocity (in green), compared to the velocity covariance (in blue).

Figure 6-34 Mars performance test: cross-track and along-track velocity errors

Results summary of performance tests

The following table summarises the performance obtained on both the Mercury and Mars scenarios with the navigation filter and the FEIC using real PANGU images.

                                                     Mercury scenario   Mars scenario
Final maximum velocity innovation                    1.7 m/s            0.074 m/s
Maximum velocity innovation 10 s before landing      2.25 m/s           4.95 m/s
Final maximum cross-track innovation                 0.22 m/s           0.073 m/s
Maximum cross-track innovation 10 s before landing   0.5 m/s            0.25 m/s
Final maximum along-track innovation                 1.83 m/s           0.015 m/s
Maximum along-track innovation 10 s before landing   2.48 m/s           5.76 m/s
Final distance to plane error                        115.5 m            0.3 m

Table 6-1 Summary of navigation filter performance with non-real-time tests

6.5.2 Real Time Test Bench

The main difference between using the Transtech board and using the camera is that with the Transtech board the image is downloaded through the SpaceWire link, which does not correspond to real-time behaviour: the picture download takes about 500 ms, which is not compliant with the real-time requirements. When using the camera with the ESG test bench, only the T-list is transferred through the SpaceWire interface, so the real-time requirements are respected. The real-time performance tests therefore consist in replacing the Transtech FEIC with the camera and performing the tests with images coming in real time from the camera.
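The ~500 ms download time is consistent with shipping one full frame over a modest SpaceWire link. The link rate below is an assumption for illustration, not a figure from the report; SpaceWire encodes each data byte as a 10-bit character:

```python
# Rough consistency check on the ~500 ms image download quoted above.
# Assumptions (not from the report): one 1024x1024 8-bit frame and a
# 20 Mbit/s SpaceWire link, with 10 link bits per data byte.
pixels = 1024 * 1024
bits_per_pixel = 10          # 8 data bits + SpaceWire character overhead
link_rate = 20e6             # bit/s, assumed
transfer_s = pixels * bits_per_pixel / link_rate
print(round(transfer_s, 3))  # ~0.524 s, the order of the quoted 500 ms
```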


[Architecture diagram, real-time test bench: the VBNAT V4 host station (navigation algorithms, camera interface functions, FEIC interface functions, Windows XP) communicates with the camera over SpaceWire; the ESG host station (picture transfer function through the PCI NI 6534 board, Windows XP) feeds the camera with images.]

Figure 6-35 Real-time test bench architecture

The real time validation requires two PCs and the VBNC camera itself.

The camera has to be considered as the master. Once the VBNC camera initialisation is achieved, the camera is in initialisation mode; then, following commands from the VBNAT, it moves successively into stand-by mode and tracking mode.

Two sub-modes exist within tracking mode: a normal mode, which transfers images from the camera optics to the FEIC, and a hibernated mode, which transfers images from the ESG interface to the FEIC until no more images are available in the ESG.

The real-time synchronisation is done through information exchanges between the camera and the navigation software. When the VBNC camera SRAM has been filled with a picture, an event is sent to the navigation software through the SpaceWire link, and the navigation software answers by commanding the start-tracking function. When the feature point extraction and tracking functions have been performed, the rejection function runs and a T-list is transmitted to the navigation software. The application answers by generating a G-list command and waits until a new picture is available; this availability is signalled by the event, and the cycle begins again.

The real-time cycle is paced by the picture reception period programmed on the camera. The specification allows 50 ms and 100 ms periods; nevertheless, for validation purposes, longer periods can be programmed.


[Timing diagram, camera / FEIC activity / navigation software: the flip event signals image availability; the FEIC performs feature extraction (13 ms), then, on the Start Tracking command, tracking (3 ms), followed by reject, concatenate and transmit of the T-list (about 15 ms in total). The HF thread of the navigation software processes the T-list and sends the G-list, while the LF navigation thread runs in parallel; the cycle repeats with each 50 ms image period.]

Figure 6-36 real time synchronisation between FEIC and navigation software
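The 13 ms feature-extraction and 3 ms tracking figures from Figure 6-36 leave a comfortable margin within the 50 ms camera period; a simple budget check (list-processing and thread overheads are ignored here, and the exact overlap of the activities is simplified):

```python
# Timing-budget check for the 50 ms camera period of Figure 6-36. The FEIC
# durations (13 ms extraction, 3 ms tracking) are taken from the figure.
period_ms = 50.0
feature_extraction_ms = 13.0
tracking_ms = 3.0
feic_busy_ms = feature_extraction_ms + tracking_ms
margin_ms = period_ms - feic_busy_ms
print(feic_busy_ms, margin_ms)   # 16 ms of FEIC work, 34 ms left for the
                                 # navigation threads before the next flip event
```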

Description of reference scenarios

For the evaluation of the navigation filter performance in real time, the same scenarios have been run in non-real-time and in real-time simulations and compared with each other:

- Mercury descent: a non-vertical approach with a small angle between the S/C velocity and the planet surface.

- Mars descent: an almost vertical descent with an initial phase under parachute and a final phase without.

Results summary of performance tests

The following table summarises the performance obtained on both the Mercury and Mars scenarios with the navigation filter and the FEIC using real PANGU images, on the real-time bench. The results are similar to those obtained with the non-real-time tests.

                                                     Mercury                      Mars
                                                     Non real-time   Real-time    Non real-time   Real-time
Final maximum velocity innovation                    2.27 m/s        3.23 m/s     0.18 m/s        0.42 m/s
Maximum velocity innovation 10 s before landing      6.1 m/s         6.68 m/s     5.18 m/s        3.6 m/s
Final maximum cross-track innovation                 0.185 m/s       0.282 m/s    0.05 m/s        0.11 m/s
Maximum cross-track innovation 10 s before landing   0.53 m/s        0.7 m/s      0.26 m/s        0.27 m/s
Final maximum along-track innovation                 2.5 m/s         3.55 m/s     0.18 m/s        0.41 m/s
Maximum along-track innovation 10 s before landing   6.8 m/s         7.5 m/s      4.3 m/s         4.95 m/s
Final distance to plane error                        40 m            138 m        0.6 m           2 m

Table 6-2 Summary of navigation filter performance with real-time tests

6.5.3 End to End Validation Conclusions

The results obtained with the real FEIC and the navigation filter integrated and simulated in the same environment are similar to, or even better than, those obtained with the fully simulated loop.

The end-to-end validation also allowed the various interfaces to be validated:

• Between image generation and FEIC: several functionalities have been implemented, such as:

- generation of simulated images associated with specific reference trajectories, with a specified number of tracked points,

- image format adaptation (from pgm to png or osp, binary or ASCII),

- download of scenario images to the FEIC memory for image processing.

• Between FEIC and navigation filter: the lists provided by the FEIC (list of new points and list of tracked points) are analysed by the "list processor", which extracts the raw data provided by the FEIC and, after processing (sub-pixel interpolation, ordering of points of interest…), provides a formatted list of tracked points to the navigation filter.

• Between navigation filter and FEIC: the navigation filter provides aiding data, which allow the next positions of the tracked points to be estimated. This helps the FEIC to look for the points in the following image.
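The sub-pixel interpolation performed by the list processor can be sketched with the common parabolic fit through the correlation peak and its two neighbours along one axis. The report does not specify the exact method, so this is only an assumed illustration:

```python
def subpixel_offset(left, peak, right):
    """Parabolic sub-pixel refinement along one axis.

    left, peak, right : correlation values at x-1, x and x+1, with the
    maximum at `peak`. Returns the fractional offset in [-0.5, 0.5].
    """
    denom = left - 2.0 * peak + right
    if denom == 0.0:
        return 0.0
    return 0.5 * (left - right) / denom

# Correlation samples around a tracked point: the true peak lies slightly
# to the right of the central pixel, so the offset is positive.
print(subpixel_offset(0.80, 0.95, 0.90))   # 0.25 pixel toward the right
```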

Intensive and successful work has been done to deploy the navigation filter into the final simulation environment and to validate the various interfaces between all the VBNAT components: FEIC, camera and navigation filter.

Page 133: Navigation for Planetary Approach & Landing …emits.sso.esa.int/emits-doc/ESTEC/AO6080-RD5-NPAL_Final...INS Inertial Navigation System IP Image Processing Final Report Navigation

Final Report

Navigation for Planetary Approach & Landing ESA Contract Reference 15618/01/NL/FM – May 2006 - 133 -

7 GENERAL CONCLUSIONS

An ambitious research programme was set up to give Europe access to proprietary solutions in the field of soft landing. Key technology breakthroughs made this effort possible, notably the evolution of APS detectors and the capacity of the new generation of LEON processors. A long-running effort in navigation design and real-time implementation proved to be a major contributor to the NPAL programme.

The NPAL camera exists at elegant-breadboard level, and its real-time operation has been demonstrated.

The NPAL system offers powerful possibilities that remain to be investigated at system level: the large number of feature points, the high robustness and the impressive performance of its navigation make many usage scenarios possible.

A “lost in space” initialisation can be envisaged at initial vision acquisition, removing the need for external sensors such as star trackers.

Mean terrain reconstruction can support hazard qualification and thus enhance the mission success probability.

Analysis of the mean motion of the tracks makes it possible to isolate areas that do not follow the global motion: dust storms and masked areas could thus be identified and rejected.

The NPAL camera will be used in real flight conditions in 2006/2007, in the frame of the ESA “Precision Landing GNC Test Facility” development.