
TERRAIN-RELATIVE PLANETARY ORBIT DETERMINATION

Kevin Peterson1, Heather Jones1, Corinne Vassallo2, Allen Welkie3, and William “Red” Whittaker1

1 The Robotics Institute, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA 15213, USA
2 The Department of Physics, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA 15213, USA
3 The Department of Engineering, Swarthmore College, 500 College Ave, Swarthmore, PA 19081, USA

ABSTRACT

This paper presents an approach to visual navigation for a spacecraft during planetary orbit. Spacecraft orbital parameters are determined autonomously by registration to terrain. To accomplish this, individual camera images are registered to pre-existing maps to determine spacecraft latitude and longitude. Using knowledge of orbital dynamics, the time history of sensed latitudes and longitudes is fit to a physically realizable trajectory. Outlier measurements are rejected using robust estimation techniques. Simulation results using data from the Lunar Reconnaissance Orbiter show that the method determines the orbit semi-major axis with an average error less than 6 km for a 500 km altitude lunar orbit.

Key words: orbit determination; vision; robotics.

1. INTRODUCTION

Visual navigation for precise, reliable guidance of crewed and robotic planetary missions offers the opportunity to revolutionize space exploration by improving targeting of landing sites, substantially reducing mission risk, and extending human reach through autonomous missions to distant locations such as the Jovian moons. State-of-practice radio-based navigation exhibits unacceptable latency at long range, relies on interaction with Earth, and cannot achieve the landing accuracy possible through visual means. Autonomous navigation via camera offers an alternative to radio without the mass and power required for traditional altimetry (e.g., RADAR and laser), and without communication to Earth. Visual navigation will enable autonomous spacecraft to land in radio-dark locations like craters and the far sides of planets.

Visual navigation provides three primary advantages over alternatives: accuracy, low latency, and autonomy. During descent, accuracy and latency are important factors in mission success. Far from Earth, visual navigation is critical, as communication latency precludes precision maneuvering. Visual navigation for all mission phases will enable fully autonomous operation from launch to landing.

Deep Space 1 demonstrated visual navigation far from planetary surfaces by triangulation to distant asteroids. These observations relied on the asteroid's appearance as a single pixel in an imager, reducing measurements to discriminating a point source from a black background. Visual navigation during terminal descent has been shown in terrestrial demonstrations to be more accurate than radio navigation and operates with latency independent of distance to Earth. With new high-resolution Mars and Moon imagery, terrain relative navigation can achieve 30-meter localization precision. However, autonomous visual-only navigation is undeveloped for planetary approach, capture, and orbit.

This paper describes Terrain Relative Planetary Orbit Determination (TROD), an approach to visual navigation for orbiting spacecraft. TROD matches terrain appearance to on-line imagery and utilizes knowledge of orbital dynamics to determine position, velocity, and orbital parameters of the spacecraft trajectory. Simulation results using data from the Lunar Reconnaissance Orbiter show that the method determines the semi-major axis with an average error less than 6 km for a 500 km lunar orbit.

1.1. Related Work

Orbit determination has a history spanning centuries, and a full survey of the field is beyond the scope of this paper. Rather, we focus on spacecraft orbit determination and computer vision approaches to trajectory modeling.

In deep space, camera-based navigation measures angles to distant asteroids to triangulate location [RBS+97, BRK+02]. These observations rely on the asteroid's appearance as a single pixel in an imager, reducing measurements to discriminating a point source from a black background. Objects that appear larger than a few pixels introduce significant angular error. The Voyager missions to Jupiter, Saturn and beyond used optical measurements along with radio to Earth for navigation. Images of the planets' natural satellites against a star background were used, with knowledge of ephemerides, to update a navigation filter run on Earth [CSB83].



Significant work has been done on camera-based terrain relative navigation. While this technique has proved quite successful in simulation and sounding rocket testing [TMR+06, TMR+07, PBC+10], the focus has mostly been on the final stages of descent to a planetary surface, not on higher altitude trajectories. Singh and Lim implement an Extended Kalman Filter (EKF) to track spacecraft position and velocity in orbit; they use vision-based measurements, but they also assume that altitude can be obtained from an altimeter with only 12.5 m (1σ) error. Such high-end radar is programmatically costly. They use crater-based feature tracking and thus limit their "experiments to segments of the trajectory that span the higher latitudes where the craters are more clearly visible" [SL08].

In addition to planetary applications, work has also been done on autonomous navigation for landing on asteroids. The Near Earth Asteroid Rendezvous (NEAR) mission to the asteroid Eros was the first to use optical tracking of craters for navigation, but these craters were manually chosen [MC03]. The Hayabusa spacecraft, which conducted a sample return mission to the asteroid Itokawa, used manual tracking of landmarks for localization during descent [KHK+09]. Hayabusa also used a long range (50 m to 50 km) LIDAR and a short range laser range finder to determine its altitude [KHK+06]. Li presents a feature-based navigation routine for asteroid landing [Li08]. This method provides position and orientation state, although the state is defined relative to the landmarks, not with respect to a global map.

A proposed method for autonomous orbit determination for NEAR is described in [MC03]. The reliance of this approach on the detection of circular crater features is somewhat limiting. As noted in [SL08], craters are hard to detect when the sun angle is high, since there are no shadows to define the edges of the crater rim. In addition, craters appear different at different altitudes, making them unsuitable for use over the range of altitudes at which spacecraft localization is needed in planetary landing missions.

2. METHOD

2.1. Overview

TROD works by matching imagery obtained in orbit with orthorectified maps of the planetary surface to determine latitude and longitude over time. Kepler's laws are then applied to determine orbital parameters.

Figure 1: TROD matches images obtained in orbit to maps and then applies Kepler's laws to determine orbital parameters. Given time and estimated latitude and longitude at several locations, the orbit is fully determined.

2.2. Orbit Determination

Kepler's laws of orbital motion state that the area swept by the line between a body and its orbit center in a given amount of time is equal throughout an orbit [Sid01]. Keplerian orbits are elliptical in shape with a focal point at the barycenter of the system. Consequently, an orbit is uniquely determined by several measurements of latitude and longitude over time, as depicted in Fig. 1.

The elliptical orbit of a spacecraft about a planetary body is described by the six classical orbital elements:

• a: semi-major axis
• e: eccentricity
• i: inclination
• ω: argument of periapsis
• Ω: right ascension of the ascending node (RAAN)
• M: mean anomaly

The parameters a and e describe the size and shape of the ellipse. The parameters i, ω and Ω describe the orientation of the ellipse in space (see Fig. 2). M describes where the spacecraft is in the orbit, although the related parameter θ, the true anomaly, is often used.

Orbit determination is performed in two stages. First, inclination and RAAN are determined by fitting a plane to unit vectors that extend from the orbit center towards the spacecraft. Once the plane has been determined, the noisy unit vectors are projected onto the plane to determine spacecraft angular motion over time. Kepler's laws describing the relationship between angular motion and time are inverted using non-linear least squares to determine a and e. The argument of periapsis is then determined from the other five parameters and the measurement times. Spurious measurements are rejected using RANSAC [FB81].



Figure 2: Classical orbital elements: i, the inclination, is the angle between the normal of the orbital plane and the z-axis; Ω, the RAAN, is the angle between the x-axis and the line of nodes, or the intersection between the xy-plane and the orbital plane; ω, the argument of periapsis, is the angle between the line of nodes and the periapsis; θ, the true anomaly, is the angle between a vector to the periapsis and a vector to the spacecraft's current position.


2.2.1. Fitting the Orbital Plane

As the spacecraft orbits, it images the surface below to determine its latitude, φ, and longitude, λ. The latitude and longitude coordinates are transformed into unit vectors and rotated into the J2000 frame:

$$\mathbf{r}_f = [\cos\phi\cos\lambda,\ \cos\phi\sin\lambda,\ \sin\phi]^T \qquad (1)$$
$$\mathbf{r} = R(t)\,\mathbf{r}_f \qquad (2)$$

where rf is the unit vector in the moon-fixed frame, R(t) is the rotation between the moon-fixed latitude and longitude measurements and the J2000 reference frame at the time of the first measurement, and r is the unit vector represented in the J2000 frame.
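To make the transformation concrete, the following is a minimal sketch of Equations (1) and (2) in Python with NumPy. The paper does not publish code; the function names are illustrative, and the rotation matrix R(t) is assumed to come from an external ephemeris or attitude source.

```python
import numpy as np

def latlon_to_unit_vector(lat, lon):
    """Body-fixed unit vector from latitude/longitude in radians, Eq. (1)."""
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def to_j2000(r_fixed, R_t):
    """Rotate a moon-fixed unit vector into the J2000 frame, Eq. (2).
    R_t is the 3x3 moon-fixed-to-J2000 rotation at the measurement time."""
    return R_t @ r_fixed
```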

In the J2000 frame, the unit vectors serve as noisy measurements of the orbital plane. The eigenvector corresponding to the smallest eigenvalue of the covariance matrix of the unit vectors represents the normal of the plane. Singular value decomposition is used to find this eigenvector and the normal:

$$\Sigma = \sum_i \mathbf{r}_i \mathbf{r}_i^T \qquad (3)$$
$$[U, S, V^*] = \mathrm{svd}(\Sigma) \qquad (4)$$
$$\mathbf{n} = \mathbf{v}^*_{\min} \qquad (5)$$

The sign of n is arbitrary and noise in the points can cause the normal to point in either direction. The direction of n is chosen to agree with the average normalized cross product of adjacent unit vectors. RANSAC is used to reject spurious measurements.
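A minimal sketch of the plane fit (Equations (3)-(5)) with NumPy, assuming the measurements are stacked as rows of an (N, 3) array; the RANSAC outer loop is omitted here:

```python
import numpy as np

def fit_orbital_plane(unit_vectors):
    """Estimate the orbital-plane normal from noisy J2000 unit vectors, Eqs. (3)-(5)."""
    R = np.asarray(unit_vectors)          # shape (N, 3), one unit vector per row
    Sigma = R.T @ R                       # sum_i r_i r_i^T
    _, _, Vt = np.linalg.svd(Sigma)
    n = Vt[-1]                            # right singular vector of the smallest singular value
    # Resolve the sign ambiguity: agree with the average cross product of
    # adjacent measurements, i.e. the direction of orbital motion.
    motion = np.cross(R[:-1], R[1:]).sum(axis=0)
    if np.dot(n, motion) < 0:
        n = -n
    return n / np.linalg.norm(n)
```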

The measurement coordinate system is defined by setting the zm axis equal to n and aligning the ym axis with the cross product of n and r0. The xm axis completes the right-handed coordinate system.

$$\mathbf{z}_m = \mathbf{n} \qquad (6)$$
$$\mathbf{y}_m = \mathbf{n} \times \mathbf{r}_0 \qquad (7)$$
$$\mathbf{x}_m = (\mathbf{n} \times \mathbf{r}_0) \times \mathbf{z}_m \qquad (8)$$

The inclination of the orbit, i, is defined as the angle between the orbital and equatorial planes:

$$\mathbf{z}_e = [0, 0, 1]^T \qquad (9)$$
$$i = \arccos(\mathbf{z}_m^T \mathbf{z}_e) \qquad (10)$$

Ω is calculated by taking the cross product of ze and zm to get the ascending node line vector, l. The ascending node line vector is then projected onto the xm and ym axes to determine Ω.

$$\mathbf{l} = \mathbf{z}_e \times \mathbf{z}_m \qquad (11)$$
$$\Omega = \mathrm{atan2}(\mathbf{y}_m^T \mathbf{l},\ \mathbf{x}_m^T \mathbf{l}) \qquad (12)$$
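A sketch of the measurement frame construction and the extraction of i and Ω (Equations (6)-(12)), again with NumPy; n and r0 are the fitted plane normal and the first measurement unit vector:

```python
import numpy as np

def inclination_and_raan(n, r0):
    """Measurement frame, inclination and RAAN following Eqs. (6)-(12)."""
    z_m = n
    y_m = np.cross(n, r0)
    y_m /= np.linalg.norm(y_m)
    x_m = np.cross(y_m, z_m)                        # completes the right-handed frame, Eq. (8)
    z_e = np.array([0.0, 0.0, 1.0])
    inc = np.arccos(np.clip(z_m @ z_e, -1.0, 1.0))  # Eq. (10)
    l = np.cross(z_e, z_m)                          # ascending node line, Eq. (11)
    raan = np.arctan2(y_m @ l, x_m @ l)             # Eq. (12)
    return x_m, y_m, z_m, inc, raan
```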

2.2.2. Fitting the Keplerian Parameters

Kepler's laws state that the vector between a satellite and the body which it orbits sweeps equal areas in equal time. The period of an orbit is proportional to $a^{3/2}$, where a is the semi-major axis. Combining these laws with knowledge of the gravitational constant of the central body, μ, the angle traveled by the spacecraft in a given time interval can be determined. The equation relating time and angle for an elliptical orbit is [Sid01]:

$$t(a, e, \theta) = \frac{a^{3/2}}{\sqrt{\mu}}\left[2\arctan\!\left(\sqrt{\frac{1-e}{1+e}}\,\tan\frac{\theta}{2}\right) - \frac{e\sqrt{1-e^2}\,\sin\theta}{1+e\cos\theta}\right] \qquad (13)$$

where t(a, e, θ) is the time since periapsis and θ is the angle from the periapsis.
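Equation (13) translates directly into a short function; a sketch in Python, with μ in km³/s² (about 4902.8 for the Moon) and θ in radians in (−π, π):

```python
import numpy as np

def time_since_periapsis(a, e, theta, mu):
    """Signed time since periapsis for true anomaly theta on an elliptical orbit, Eq. (13)."""
    ecc_term = 2.0 * np.arctan(np.sqrt((1.0 - e) / (1.0 + e)) * np.tan(theta / 2.0))
    return (a ** 1.5 / np.sqrt(mu)) * (
        ecc_term - e * np.sqrt(1.0 - e ** 2) * np.sin(theta) / (1.0 + e * np.cos(theta)))
```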

The measurement coordinate system is not aligned with the periapsis, so the offset in time and angle between the periapsis and the first measurement must be recovered. Define αi as the measurement angle in the measurement coordinate frame and αp as the offset between the orbital coordinate system and the measurement coordinate system (Figure 1):

$$\alpha_i = \mathrm{atan2}(\mathbf{y}_m^T \mathbf{r}_i,\ \mathbf{x}_m^T \mathbf{r}_i) \qquad (14)$$
$$\theta_i = \alpha_i + \alpha_p \qquad (15)$$
$$t_p = t(a, e, \alpha_p) \qquad (16)$$


Equation 13 can now be used to define an objective (Equation 17) for optimization. The objective is optimized using non-linear least squares.

$$J(e, a, \alpha_p) = \bigl(t_i - t(e, a, \alpha_p) - t(e, a, \alpha_i + \alpha_p)\bigr)^2 \qquad (17)$$
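A sketch of this fit using scipy.optimize.least_squares (the paper does not name a specific solver). Here the residual is written as the measured elapsed time from the first image minus the elapsed time predicted by Equation (13); the exact sign convention for α_p in Equation (17) may differ from this assumption.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_shape_parameters(alphas, times, mu, x0=(2000.0, 0.01, 0.0)):
    """Fit a, e and the periapsis offset alpha_p by non-linear least squares.
    alphas: in-plane measurement angles (rad); times: measurement times (s)
    relative to the first image."""
    def residuals(x):
        a, e, ap = x
        predicted = (time_since_periapsis(a, e, alphas + ap, mu)
                     - time_since_periapsis(a, e, ap, mu))
        return times - predicted
    sol = least_squares(residuals, x0,
                        bounds=([1.0, 0.0, -np.pi], [np.inf, 0.99, np.pi]))
    return sol.x  # a, e, alpha_p
```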

The remaining parameters, ω and M, are computed directly as follows [Sid01]:

$$\omega = \mathrm{atan2}\!\left(\frac{r_{0z}}{\sin i},\ r_{0x}\cos\Omega + r_{0y}\sin\Omega\right) + \alpha_p$$
$$M_i = (t_i - t_p)\sqrt{\frac{\mu}{a^3}}$$
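The closed-form recovery of ω and M can then be written as follows; a sketch, with r0 the first measurement unit vector in J2000 and t_p from Equation (16):

```python
import numpy as np

def omega_and_mean_anomaly(r0, inc, raan, alpha_p, t_i, t_p, a, mu):
    """Argument of periapsis and mean anomaly at measurement i
    (sketch of the closed-form expressions above)."""
    omega = np.arctan2(r0[2] / np.sin(inc),
                       r0[0] * np.cos(raan) + r0[1] * np.sin(raan)) + alpha_p
    M_i = (t_i - t_p) * np.sqrt(mu / a ** 3)
    return omega, M_i
```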

2.3. Terrain Matching

To determine the latitude and longitude of the spacecraft, images are taken in flight and are then registered to prior terrain maps. Terrain matching occurs in three phases: image rectification, coarse correlation, and fine adjustment. Rectification scales and transforms the image to roughly match the scale and orientation of the map. Coarse correlation finds a low-fidelity correspondence between the terrain and the rectified image and accounts for gross errors in initial latitude, longitude, and altitude. Fine adjustment accounts for local variation in terrain within the map and refines the coarse fit.

Rectification is performed using a prior estimate of the spacecraft altitude and orientation. The corners of the image are projected to 2D points on the map using a flat ground assumption. These four point correspondences define a homography that roughly maps camera points to map points. Using the estimated homography, the image is warped to match the orthorectified map. The resulting warped image has the same scale and orientation as the map, and can now be correlated. This process is similar to that used by Trawny et al. [MTR+09].
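A sketch of the rectification step with OpenCV, assuming the map-pixel coordinates of the four image corners have already been computed from the prior altitude and attitude estimate under the flat-ground assumption (the paper does not specify an implementation):

```python
import cv2
import numpy as np

def rectify(image, corner_map_xy):
    """Warp the camera image to map scale and orientation using the homography
    defined by the four projected corner correspondences."""
    h, w = image.shape[:2]
    corners_img = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    corners_map = np.float32(corner_map_xy)              # same corner order, in map pixels
    H = cv2.getPerspectiveTransform(corners_img, corners_map)
    out_size = (int(np.ceil(corners_map[:, 0].max())),   # output width
                int(np.ceil(corners_map[:, 1].max())))   # output height
    return cv2.warpPerspective(image, H, out_size)
```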

The rectified image is downsampled and correlated with a region of interest (ROI) from the map. The ROI is centered on the estimated image center as determined during rectification and is sized to be three times larger than the rectified image in map coordinates. This allows for error in the prior location estimate. Due to the size of the maps, correlation is performed in the frequency domain. Fourier transforms of the maps are precomputed to limit on-line computation time, which is essential for running numerous iterations of the routine. The peak of the rough correlation is used to determine an ROI for a refined estimate of location (Fig. 3).
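A minimal sketch of the coarse, frequency-domain correlation using SciPy; map FFT precomputation and downsampling are omitted, and zero-mean patches stand in for a full normalized correlation:

```python
import numpy as np
from scipy.signal import fftconvolve

def coarse_correlate(rectified, roi):
    """Locate the rectified image within a map ROI by FFT-based cross-correlation."""
    img = rectified - rectified.mean()
    ref = roi - roi.mean()
    corr = fftconvolve(ref, img[::-1, ::-1], mode='same')   # cross-correlation via FFT
    peak_row, peak_col = np.unravel_index(np.argmax(corr), corr.shape)
    return (peak_row, peak_col), corr
```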

Given a coarse estimate of alignment between the image and the map, a fit is performed to correct for local terrain shape and attitude error between the camera and the map. The 50 strongest Harris corners [HS88] in the rectified image are projected into the map using the inverse camera matrix and the pose estimate from the coarse registration. A small template from the rectified image around each feature is then correlated with a region of interest in the map near the expected location of the feature.

Figure 3: Initial errors in latitude and longitude are corrected by correlating the orthorectified image (center) with the map (left). The resulting correlation peak (right) is used as a rough estimate of the location of the craft.

Figure 4: The initial location determined by correlation is refined by finding Harris corners in the rectified image, projecting these locations into the map, and correlating each patch with a constrained area of the map. Final location is determined using RANSAC.

This correlation is performed in the spatial domain, and the peak of the correlation determines the location of the feature within the map. The set of feature correspondences is then processed using RANSAC to filter spurious measurements and determine the registration between the camera and the map (Fig. 4).
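A sketch of the fine-adjustment step with OpenCV, assuming grayscale images already in rough alignment from the coarse stage; the template size, search radius and RANSAC threshold are illustrative, and a homography stands in for the paper's camera-to-map registration:

```python
import cv2
import numpy as np

def fine_adjust(rectified, map_img, n_corners=50, tpl=32, search=96):
    """Match Harris corners by local template correlation and reject outliers with RANSAC."""
    pts = cv2.goodFeaturesToTrack(rectified, n_corners, 0.01, 10, useHarrisDetector=True)
    src, dst = [], []
    for x, y in pts.reshape(-1, 2).astype(int):
        if (x < search // 2 or y < search // 2 or
                x + search // 2 > map_img.shape[1] or y + search // 2 > map_img.shape[0]):
            continue                                   # skip features too close to the border
        template = rectified[y - tpl // 2:y + tpl // 2, x - tpl // 2:x + tpl // 2]
        region = map_img[y - search // 2:y + search // 2, x - search // 2:x + search // 2]
        res = cv2.matchTemplate(region, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, loc = cv2.minMaxLoc(res)              # peak of the spatial correlation
        src.append([x, y])
        dst.append([x - search // 2 + loc[0] + tpl // 2,
                    y - search // 2 + loc[1] + tpl // 2])
    H, inliers = cv2.findHomography(np.float32(src), np.float32(dst), cv2.RANSAC, 3.0)
    return H, inliers
```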

3. RESULTS

TROD performance was evaluated in two stages, first with simulated orbit latitude/longitude measurements generated with AGI's Satellite Toolkit (STK) [Ana11] and then with latitude/longitude measurements generated from the output of a lunar rendering engine. With the simulated measurements, the sensitivity of the orbit fit was tested by varying each of the six parameters individually. Then, normally distributed noise was added to each of the simulated orbits to determine the effect of imperfect data. Lastly, the technique was validated by running the visual terrain-matching simulation and then using its output latitude/longitude data as the input for the orbit fit.

All results were analyzed using Circular Error Probability (CEP) as the error metric. The CEP is the radius of the sphere centered on the true position which contains a given percentage of the predicted values. The CEP95, for instance, is the spherical radius containing 95% of the predicted results. For all orbits, CEPs of max and mean error were computed by sampling positions along the true and measured orbits at ten second intervals. The distance between the predicted and actual locations was computed and the mean distances were recorded. The CEP was then calculated for the collection of orbits under consideration.


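For reference, the CEP metric described above reduces to a percentile of 3D position errors; a minimal sketch, assuming positions sampled at matching times along the true and fitted orbits:

```python
import numpy as np

def cep(true_xyz, fitted_xyz, pct=95):
    """Radius containing `pct` percent of the position errors (e.g. pct=95 gives CEP95)."""
    errors = np.linalg.norm(np.asarray(true_xyz) - np.asarray(fitted_xyz), axis=1)
    return np.percentile(errors, pct)
```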

3.1. Performance with Normally Distributed Noise

Orbit determination performance was characterized by generating a set of orbits with a wide range of orbital parameters. Each orbit was sampled regularly in time to determine latitude and longitude relative to the planetary surface. Latitude and longitude were perturbed by adding normally distributed noise, and the fitting algorithm was applied. Because the noise was normally distributed, RANSAC was disabled. Each noise level was evaluated 100 times per orbit. CEP was measured as a function of the six parameters.

All experiments except those with high eccentricity use the parameters in Table 1 as the basis for the orbit and vary one parameter. Experiments with high eccentricity use the basis orbit parameters, except a = 3550 km, to avoid impacting the surface of the moon at higher eccentricities.

Table 1: Basis Orbit Parameters

a         e      i    ω    Ω    M0   noise (σ)
2250 km   0.01   45   45   45   45   0.1

Figure 5 shows the variation in CEP as a is varied from 1800 km to 3600 km. Noise is held constant at 0.1 degree. At low altitudes, error in a is consistently less than three meters, with a CEP95 of 2.6 km when a equals 1800 km. CEPs increase with a and e. This is expected, since the fit relies on accurate angular measurements. As the distance from the planet increases, sensitivity to error correspondingly increases. Inclination and RAAN have essentially no effect on position error, as they are determined entirely by the plane fit (Figures 7 and 8). Because ω and M cause variation in the portion of the orbit that is observed by the fitter, error increases slightly and periodically as ω and M vary (Figures 9 and 10).

Terrain matching performance is expected to vary as a function of altitude, camera configuration, and off-nominal lighting effects (e.g., lens flare). To characterize the effect of different levels of noise on fit performance, normally distributed noise was varied from 0.0 to 0.5 degrees and CEP was measured. Figure 11 shows that increasing noise causes linearly increasing error in the fit.

3.2. Performance with Terrain Matching

Orbit determination performance with terrain matching was evaluated on 15 orbits. The orbits were generated by varying the inclination, periapsis, and mean anomaly of the basis orbit. This produces varied ground tracks over the planetary surface and accounts for variation in matching performance as a function of terrain appearance.

The terrain matching dataset differs from the orbits in section 3.1 in two important ways. First, half of each orbit is in shadow, and therefore only half of the measurements from each orbit can be used. Second, due to lengthy rendering times, the duration of each orbit is limited to a single full orbit. These two effects significantly reduce the number of measurements available during orbit determination. Figure 12 shows the effect of decreasing the number of measurements available to fit the orbit. Note that although error increases with fewer measurements, the error will decrease and approach the results in Figure 11 as more points are collected.

Despite these challenges, orbit determination with visual matching achieves error only twice as high as in the experiments from section 3.1. Typical mean position errors are below 6 km and almost all errors are below 10 km (Table 2).

Table 2: CEPs with Terrain Matching (km)

CEP50   CEP75   CEP80   CEP90
5.6     6.1     9.9     18.1

4. CONCLUSIONS AND FUTURE WORK

This paper has presented a method for determining orbits by correlating on-orbit imagery with a map. The technique fits the six orbital parameters required to define a Keplerian orbit. Because the technique uses only camera data, it is appropriate for autonomous spacecraft. Results from 500 km lunar orbits demonstrate that this technique is viable.

Better than 1 km accuracy can be achieved by improving the performance of terrain matching. Increasing map resolution and incorporating digital elevation data will significantly enhance matching capability and accomplish this goal.

Kepler's laws assume point masses, but real planets have lumpy mass distributions. Low orbits are affected by gravity anomalies and high orbits are perturbed by third bodies. An important next step is to incorporate more comprehensive gravity models into the orbit determination algorithm.

Finally, adaptation of TROD for use with Kalman Filtering techniques will enable real-time, low-latency orbit determination. This will close the autonomy loop and in turn enable spacecraft that can capture, orbit, and descend without human intervention.


Figure 5: CEPs as a function of semi-major axis, a, in km.

Figure 6: CEPs as a function of eccentricity, e.

Figure 7: CEPs as a function of inclination, i, in degrees.

Figure 8: CEPs as a function of RAAN, Ω, in degrees.

Figure 9: CEPs as a function of argument of periapsis, ω, in degrees.

Figure 10: CEPs as a function of initial mean anomaly, M, in degrees.

Figure 11: CEPs as a function of added noise.

Figure 12: CEPs as a function of added noise for short estimation periods as in the terrain matching experiments.


REFERENCES

[Ana11] Analytical Graphics Inc. STK 9.2.2. www.agi.com, 1989–2011.

[BRK+02] S. Bhaskaran, J. E. Riedel, B. Kennedy, T. C. Wang, and R. A. Werner. Navigation of the Deep Space 1 spacecraft at Borrelly. Technical report, NASA, 2002.

[CSB83] J. K. Campbell, S. P. Synnott, and G. J. Bierman. Voyager orbit determination at Jupiter. IEEE Transactions on Automatic Control, AC-28(3), 1983.

[FB81] M. A. Fischler and R. C. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Comm. of the ACM, 24:381–395, 1981.

[HS88] C. Harris and M. Stephens. A combined corner and edge detector. In Proceedings of The Fourth Alvey Vision Conference, pages 147–151, 1988.

[KHK+06] T. Kubota, T. Hashimoto, J. Kawaguchi, M. Uo, and K. Shirakawa. Guidance and navigation of Hayabusa spacecraft for asteroid exploration and sample return mission. In SICE-ICASE International Joint Conference, Bexco, Busan, Korea, 2006.

[KHK+09] T. Kubota, T. Hashimoto, J. Kawaguchi, K. Shirakawa, H. Morita, and M. Uo. Vision based navigation by landmark for robotic explorer. In Proceedings of the 2008 IEEE International Conference on Robotics and Biomimetics, Bangkok, Thailand, 2009.

[Li08] S. Li. Computer vision based autonomous navigation for pin-point landing robotic spacecraft on asteroids. In Intelligent Robotics and Applications, Lecture Notes in Computer Science, volume 5315, pages 1115–1126, 2008.


Figure 13: Three example orbit fits using terrain matching. Green circles show locations where measurements were taken. Red circles show the match location. Terrain matching performs well in some locations (top, middle) and poorly in others. Significant error is handled by RANSAC, enabling orbit determination despite spurious measurements.


[MC03] J. K. Miller and Y. Cheng. Autonomous landmark tracking orbit determination strategy. In Proceedings of AAS/AIAA Astrodynamics Specialist Conference, Big Sky, MT, 2003.

[MTR+09] A. Mourikis, N. Trawny, S. Roumeliotis, A. Johnson, A. Ansar, and L. Matthies. Vision-aided inertial navigation for spacecraft entry, descent, and landing. IEEE Transactions on Robotics, volume 25, April 2009.

[PBC+10] R. S. Park, S. Bhaskaran, Y. Cheng, A. Johnson, N. E. Lisano, and A. A. Wolf. Trajectory reconstruction of a sounding rocket using inertial measurement unit and landmark data. Journal of Spacecraft and Rockets, 47(6):1003–1009, 2010.

[RBS+97] J. E. Riedel, S. Bhaskaran, S. P. Synnott, S. D. Desai, W. E. Bollman, P. J. Dumont, C. A. Halsell, D. Han, B. M. Kennedy, G. W. Null, W. M. Owen Jr., R. A. Werner, and B. G. Williams. Navigation for the new millennium: Autonomous navigation for Deep Space 1. In Proc. Int. Symposium Space Flight Dynamics, 1997.

[Sid01] M. J. Sidi. Spacecraft Dynamics & Control: A Practical Engineering Approach. Cambridge University Press, 2001.

[SL08] L. Singh and S. Lim. On lunar on-orbit vision-based navigation: Terrain mapping, feature tracking driven EKF. In AIAA Guidance, Navigation and Control Conference, Honolulu, HI, 2008.

[TMR+06] N. Trawny, A. Mourikis, S. Roumeliotis, A. Johnson, and J. Montgomery. Vision-aided inertial navigation for pin-point landing using observations of mapped landmarks. Journal of Field Robotics, 2006.

[TMR+07] N. Trawny, A. Mourikis, S. Roumeliotis, A. Johnson, J. Montgomery, A. Ansar, and L. Matthies. Coupled vision and inertial navigation for pin-point landing. In NASA Science and Technology Conference, 2007.