

AIAA-2000-4439

    VISION-BASED RELATIVE NAVIGATION

    FOR FORMATION FLYING OF

    SPACECRAFT

    Roberto Alonso, John L. Crassidis and John L. Junkins

    Department of Aerospace Engineering

    Texas A&M University

    College Station, TX 77843-3141

    Abstract

The objective of this paper is to develop a robust and efficient approach for relative navigation and attitude estimation of spacecraft flying in formation. The approach developed here uses information from a new optical sensor that provides a line of sight vector from the master spacecraft to the secondary satellite. The overall system provides a novel, reliable, and autonomous relative navigation and attitude determination system that employs relatively simple electronic circuits with modest digital signal processing requirements and is fully independent of any external systems. State estimation is achieved through an optimal observer design, which is analyzed using a Lyapunov and contraction mapping approach. Simulation results indicate that the combined sensor/estimator approach provides accurate relative position and attitude estimates.

    Introduction

Spacecraft formation flying is an evolving technology with many possible applications, such as long baseline interferometry, stereographic imaging, synthetic apertures, and distinguishing spatial from temporal magnetospheric variations. A significant advantage of distributed spacecraft platforms over a single multi-functional spacecraft is that single point failures can be rectified through replacement of cheaper and smaller spacecraft to maintain mission capability, thus providing a more reliable and robust system. Many missions (in particular interferometry missions) rely on precise relative position and attitude knowledge in order to maintain mission requirements.

Graduate Student of Aerospace Engineering.
Assistant Professor of Aerospace Engineering. Senior Member AIAA.
George J. Eppright Chair Professor of Aerospace Engineering. Fellow AIAA.
Copyright © 2000 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.

Formation flying in a satellite constellation is a special configuration that involves the consideration of several bodies simultaneous in time and close in position. All of the spacecraft are typically predominantly attracted by a central field from a planet, such as the Earth. The relative motion among them is an area of active research. Among the many research issues, navigation is addressed in this paper. In general, the absolute position and velocity (called absolute because they refer to an inertial or quasi-inertial frame) of each spacecraft can be estimated by using a Global Positioning System (GPS) receiver and/or other conventional techniques, such as radar or antenna range, rate-range, angles, etc. Absolute navigation relates each spacecraft to a common fixed frame. Relative navigation seeks optimal estimates for the position and velocity of one satellite relative to another. To date, most research studies into determining both absolute and relative positions and attitudes between vehicles in a formation have involved using GPS,1 which restricts the spacecraft formation to near-Earth applications. An application of GPS-like technology to a deep space mission has been proposed,2 but this requires extensive hardware development and is subject to the generic GPS performance-limiting effects, including multipath, geometric dilution of precision, integer ambiguity resolution, and cycle slip. In order to reduce the operational cost, size and weight of spacecraft for formation missions, new technologies are required to fill the gap left by standard sensors.

The vision-based navigation (VISNAV) system described in this paper comprises an optical sensor of a new kind combined with specific light sources (beacons) in order to achieve a selective or "intelligent" vision. The sensor is made up of a Position Sensing Diode (PSD) placed in the focal plane of a wide angle lens. When the rectangular silicon area of the PSD is illuminated by energy from a beacon focused by the lens, it generates electrical currents in four

    1 of 11

American Institute of Aeronautics and Astronautics Paper 2000–4439

directions that can be processed with appropriate electronic equipment to estimate the energy centroid of the image. While the individual currents depend on the intensity of the light, their imbalances are weakly dependent on the intensity and are almost linearly proportional to the location of the centroid of the energy incident on the PSD. The idea behind the concept of intelligent vision is that the PSD can be used to see only specific light sources, accomplished by frequency domain modulation of the target lights and some relatively simple analog signal processing (demodulation). The light is produced by LEDs (beacons) modulated at an arbitrary known frequency, while the currents generated are driven through an active filter set on the same frequency. Calculating the current imbalances then yields two analog signals directly related to the coordinates locating the centroid of that beacon's energy distribution on the PSD, in a quasi-linear fashion, and therefore to the incident direction of this light on the wide-angle lens (which gives a line of sight vector). Benefits of this configuration include: 1) very small sensor size, 2) very wide sensor field of view, 3) no complex/time consuming CCD signal processing or pattern recognition required, 4) excellent rejection of ambient light interference under a wide variety of operating conditions, and 5) relatively simple electronic circuits with modest digital signal processing (DSP) micro-computer requirements. Finally, the precision of these sensors is comparable to that of a star camera in establishing the line of sight direction. These benefits clearly make the VISNAV system a viable sensor for relative navigation and attitude determination of spacecraft in formation. A more detailed description of the VISNAV system can be found in Ref. [3].
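The current-imbalance idea can be illustrated with a short sketch. The four-current model, the active-area half-size, and the example currents below are simplified assumptions for illustration, not the VISNAV flight electronics:

```python
# Hypothetical four-current PSD model: i_left/i_right and i_down/i_up
# are the currents drawn from opposite edges of the detector. Their
# normalized imbalances are nearly proportional to the spot centroid
# and insensitive to the overall light intensity. The half_size scale
# is an assumed calibration constant, not the VISNAV sensor spec.

def psd_centroid(i_left, i_right, i_down, i_up, half_size=5.0):
    """Estimate the energy centroid (x, y) on the PSD, in mm."""
    x = half_size * (i_right - i_left) / (i_right + i_left)
    y = half_size * (i_up - i_down) / (i_up + i_down)
    return x, y

# Doubling every current (twice the intensity) leaves the centroid
# estimate unchanged, which is the property exploited by the sensor:
x1, y1 = psd_centroid(1.0, 3.0, 2.0, 2.0)
x2, y2 = psd_centroid(2.0, 6.0, 4.0, 4.0)
```

The normalization by the current sums is what makes the estimate (nearly) intensity-independent, matching the weak intensity dependence described above.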

The dynamical equations of motion for relative position and attitude are highly nonlinear. The extended Kalman filter can be used to estimate the state variables, where the gains are calculated in some optimal sense to obtain the minimum variance estimate. For example, in absolute attitude estimation, implementations of the filter shown in Ref. [4] have been successfully used on several real spacecraft for many years now. The main disadvantage of the extended Kalman filter is the potential for divergence in cases where the distance between the local linearization and the true model is large, which is typically mitigated by redundant sensors on the spacecraft. A different approach is followed in this paper for relative navigation and attitude estimation using the VISNAV system, which is largely based on the work of Salcudean.5 This uses an optimal observer that ensures a form of global asymptotic stability. Therefore, the need for redundant systems in formations of spacecraft can be reduced.

The organization of this paper proceeds as follows. First, the basic equations for the VISNAV system are given. Then, the relative attitude equations are derived, followed by a derivation of the orbital equations of motion. Next, the dynamical equations are simplified by using some control design assumptions. Then, the observer is derived for relative attitude and position estimation. The stability of the observer is assessed through a Lyapunov and contraction mapping analysis. Finally, simulation results are presented.

    Basic Equations

In this section the mathematical models are presented in the context of the particular problem related to relative position and attitude estimation from line of sight observations. The notation6 used in the derivations is briefly revisited for the sake of clarification. The angular velocity of the b frame with respect to the a frame is represented by the physical vector $\omega_{ab}$ (physical denotes that the vector is independent of the frame, whereas mathematical denotes the physical vector components expressed in some frame). The vector $\omega^c_{ab}$ is the mathematical vector made up of the components of $\omega_{ab}$ taken in the c frame. The derivative with respect to time is indicated by the operator p, where $p_a R$ is the rate of change of the vector R relative to the a frame, and $p\,R^a$ is the time derivative of the vector expressed in the a frame.

    Measurement Equation

Figure 1 shows the focal plane measurement of the VISNAV system for a master and secondary satellite system using one light source from a beacon (see Ref. [3] for more details). Three frames are used to describe the orientation and position of the master and secondary satellites. The first one, denoted by (Xs, Ys, Zs), is fixed on the secondary satellite, with the LED beacons firmly attached to the body of the satellite and having known positions in the (Xs, Ys, Zs) frame. This frame is also the reference frame for the attitude problem. We assume that this frame is centered at the mass center of this spacecraft, and it is denoted using the superscript s on the mathematical vectors. The second reference system, denoted by (Xf, Yf, Zf), is fixed on the master satellite, where the focal plane of the VISNAV system is located. We assume that the Zf axis is along the boresight, passing through the input pin hole which is at a distance Zf = +f from the focal plane. The axes Xf and Yf are arbitrary, but fixed in the VISNAV sensor. This frame is denoted as the f frame. The third frame, denoted by (Xm, Ym, Zm), is fixed to the mass center of the master satellite. The position and orientation of this frame with respect to the focal frame is assumed to be known. The vectors for the master frame are identified with the superscript m.

    The point S is the origin of the frame s. The pointO is the location of each light beacon in the secondarysatellite; normally there are several beacons to assurecontinuous tracking of the satellite and for redundancy.


Fig. 1 Focal Plane Measurement from One Light Source

The point I is sometimes referred to as the image center since it is the intersection of each light beam from the beacon with the focal plane, where the position of I with respect to the focal reference system is used to form a line of sight observation. The point denoted as F in Figure 1 is the pinhole, which is coincident with the sensor principal point. Three vectors are now defined: $\vec{SO}$ (the vector from the center S of the s frame to the beacon location O), $\vec{SI}$ (the vector from the center S of the s frame to the image center I), and $\vec{OI}$ (the vector from the beacon location O to the image center I), with the constraint equation given by $\vec{OI} = \vec{SI} - \vec{SO}$.

The orientation between the secondary and master frames is denoted by the (unknown) rotation matrix $C^m_s$, which transforms a vector expressed in the secondary frame s to the primary frame m. The rotation matrix $C^m_f$ between the focal and the master frames is known by ground calibration. Expressing the vectors $\vec{SI}$, $\vec{OI}$ and $\vec{SO}$ in frame components gives the following relation7

$$ C^m_s \frac{\left(\vec{SI} - \vec{SO}\right)^s}{\left\|\vec{SI} - \vec{SO}\right\|} \equiv C^m_s\, v^s = v^m \equiv \frac{\left(\vec{OI}\right)^m}{\left\|\vec{OI}\right\|} \tag{1} $$

where

$$ v^s = \frac{1}{d}\begin{bmatrix} X_I - X_O \\ Y_I - Y_O \\ Z_I - Z_O \end{bmatrix} \tag{2a} $$

$$ d = \sqrt{(X_I - X_O)^2 + (Y_I - Y_O)^2 + (Z_I - Z_O)^2} \tag{2b} $$

and $(X_O, Y_O, Z_O)$ represents the known beacon location, and $(X_I, Y_I, Z_I)$ is the unknown position with respect to the secondary satellite. The measurements $x_I$ and $y_I$ in the focal frame can be expressed in unit vector form by

$$ v^f = \frac{1}{\sqrt{x_I^2 + y_I^2 + f^2}} \begin{bmatrix} x_I \\ y_I \\ f \end{bmatrix} \tag{3} $$

where f is the known focal distance. This unit vector in the master frame is expressed using the fixed rotation matrix between the sensor plane frame and the master satellite reference frame, with $v^m = C^m_f\, v^f$. A bias offset in the measurement is also accounted for in the model (denoted by A in Figure 1). The bias vector is a constant error vector induced by an unbalance of the horizontal and vertical gains in the focal plane detector relative to the particular coordinate system associated with the detector at calibration. This vector is denoted by $v_a$ and is normally referenced in the focal plane frame:

$$ v^m_a = C^m_f\, v^f_a = C^m_f \begin{bmatrix} x_a \\ y_a \\ 0 \end{bmatrix} \tag{4} $$

Finally, the measurement equation for each light source from a beacon, placed on the secondary satellite, is as follows:

$$ v^m_j = C^m_s\, v^s_j + v^m_a \quad \text{for} \quad j = 1, \ldots, N \tag{5} $$

where N is the number of LED beacons.

Small separations between light beams from multiple LEDs reduce the discrimination of each beacon, which ultimately produces a dilution of precision in the position and attitude solution. A larger distance between the satellites also produces a dilution of precision, since the beacons ultimately approach angular co-location. If the relative position between satellites is known, then only two non-colinear line of sight vectors are required to determine an attitude solution. In a similar fashion, for the position-only navigation problem, where the satellite is considered to be a mass point (in other words, without attitude), only two line of sight vectors are required. A covariance analysis shows that when the relative position and attitude are both unknown, two line of sight vectors provide only one axis of attitude and one axis of position information.8 Furthermore, an observability analysis using two line of sight observations indicates that the beacon that is closest to the target provides the most attitude information but the least position information, and the beacon that is farthest from the target provides the most position information but the least attitude information. In order to find a deterministic solution for the position and attitude, at least four vector observations are required.
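The measurement construction of Eqs. (3)-(5) can be sketched as follows; the alignment matrix, focal length, and bias values below are illustrative assumptions, not calibration data from the paper:

```python
import numpy as np

# Sketch of the line of sight construction of Eqs. (3)-(5). The
# alignment matrix C_mf, the focal length, and the bias vector are
# illustrative assumptions, not calibration data from the paper.

def los_focal(x_i, y_i, f):
    """Unit line of sight vector in the focal frame, Eq. (3)."""
    v = np.array([x_i, y_i, f])
    return v / np.linalg.norm(v)

def los_master(x_i, y_i, f, C_mf, v_a_f):
    """Line of sight rotated into the master frame with a constant
    focal-plane bias added, following Eqs. (4)-(5)."""
    return C_mf @ los_focal(x_i, y_i, f) + C_mf @ v_a_f

C_mf = np.eye(3)                      # assume sensor aligned with master
v_a_f = np.array([1e-4, -2e-4, 0.0])  # small focal-plane bias (assumed)
v_m = los_master(3.0, 4.0, 12.0, C_mf, v_a_f)
```

For a focal length of 12 and a reading at (3, 4), the unbiased unit vector is (3, 4, 12)/13; the bias then perturbs this direction slightly, which is why it must be carried in the measurement model of Eq. (5).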

    Relative Attitude Equations

In this section the governing equations for the relative attitude dynamics between two bodies are reviewed. The dynamical equations presented here are derived using non-inertial reference frames; however, only minor changes are required from the standard formulation.6 Starting from Eq. (5) and taking the derivative of each vector with respect to the same frame in which they are expressed gives the following expressions

$$ p\,v^m = C^m_s\, p\,v^s + p\,C^m_s\, v^s = C^m_s \left( p\,v^s + C^s_m\, p\,C^m_s\, v^s \right) \tag{6} $$

The bias in Eq. (5) is considered to be a constant, so its derivative is zero. The same expression as in Eq. (6) can be derived by the application of the transport theorem, which yields the following expressions

$$ p_m v = p_s v + \omega_{ms} \times v \tag{7a} $$
$$ p\,v^m = C^m_s \left( p\,v^s + [\omega^s_{ms} \times]\, v^s \right) \tag{7b} $$

where the matrix $[\omega \times]$ denotes the cross product matrix, given by

$$ [\omega \times] \equiv \begin{bmatrix} 0 & -\omega_3 & \omega_2 \\ \omega_3 & 0 & -\omega_1 \\ -\omega_2 & \omega_1 & 0 \end{bmatrix} \tag{8} $$

Both expressions, (6) and (7), must be equivalent. Setting these equations equal to each other yields the time rate of change of the attitude matrix, given by

$$ C^s_m\, p\,C^m_s = [\omega^s_{ms} \times] \tag{9a} $$
$$ p\,C^m_s = C^m_s\, [\omega^s_{ms} \times] = [\omega^m_{ms} \times]\, C^m_s \tag{9b} $$

The relative attitude dynamics are described by the expression in Eq. (9) in terms of the attitude matrix and the angular velocity between both frames.
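The attitude-matrix kinematics of Eq. (9) can be propagated numerically. The sketch below, with an assumed constant relative rate, step size, and a simple Euler integrator with re-orthogonalization, is illustrative only:

```python
import numpy as np

# Sketch of the attitude-matrix kinematics of Eq. (9b),
# p C_ms = C_ms [w x], integrated with an Euler step and projected
# back onto an orthogonal matrix each step. The relative rate and
# step size are illustrative assumptions.

def skew(w):
    """Cross product matrix [w x] of Eq. (8)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

C = np.eye(3)                       # C_ms, frames initially aligned
w_ms = np.array([0.0, 0.0, 0.01])   # relative rate in s frame, rad/s
dt = 0.1
for _ in range(1000):               # 100 s of propagation
    C = C + dt * (C @ skew(w_ms))
    u, _, vt = np.linalg.svd(C)     # re-orthogonalize the estimate
    C = u @ vt
```

After 100 s at 0.01 rad/s the accumulated rotation is about 1 rad around the Z axis; the SVD projection keeps the matrix orthogonal, which a raw Euler step does not.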

We now write the expression in Eq. (9) in terms of the corresponding quaternions.9 Toward this end, the quaternion is expressed as $q_{sm} = \begin{bmatrix} e^T \sin(\vartheta/2) & \cos(\vartheta/2) \end{bmatrix}^T$, where e is the eigenaxis between both frames and $\vartheta$ is the rotation angle measured from frame m to frame s. The quaternion is a vector and has the same components in both the m and s frames, and it can be expressed in any external frame as an arbitrary (i.e. general) vector. This has an advantage over the rotation matrix formulation, which is fixed to the reference systems s and m in this case. This property of the quaternion is used later.

An infinitesimal rotation is expressed in terms of the quaternion as $dq_{sm} = 1 + \frac{1}{2}\,\omega_{sm}\,dt$, where dt is the time differential. Multiplying by the quaternion $q_{sm}$ and taking the first-order infinitesimal part, the following differential equation is given

$$ p\,q_{sm} = \frac{1}{2}\, q_{sm} \otimes \begin{bmatrix} \omega^m_{sm} \\ 0 \end{bmatrix} = \frac{1}{2} \begin{bmatrix} q_0 I_{3\times3} + [\varrho \times] \\ -\varrho^T \end{bmatrix} \omega^m_{sm} = \frac{1}{2} \begin{bmatrix} q_0\,\omega^m_{sm} + \varrho \times \omega^m_{sm} \\ -\varrho^T\,\omega^m_{sm} \end{bmatrix} \tag{10} $$

where the quaternion $q_{sm}$ is decomposed into a scalar and a vector part as $q_{sm} = \begin{bmatrix} \varrho^T_{sm} & q_0 \end{bmatrix}^T$, and $[\varrho \times] \in \mathbb{R}^{3\times3}$ is the skew symmetric matrix which represents the cross product $\varrho \times \omega = [\varrho \times]\,\omega$. Both the attitude matrix and quaternion formulations will be used in the definition of the observer feedback error, but the quaternion formulation is used in the actual implementation of the observer.
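The quaternion kinematics of Eq. (10) can be sketched in the same scalar-last convention; the rate, step size, and simple Euler integrator with re-normalization are illustrative choices, not from the paper:

```python
import numpy as np

# Sketch of the quaternion kinematics of Eq. (10), scalar-last
# convention q = [rho; q0]. The rate, step size, and Euler
# integrator with re-normalization are illustrative assumptions.

def cross_matrix(w):
    """Cross product matrix [w x] of Eq. (8)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def quat_rate(q, w):
    """Component form of Eq. (10): dq/dt = 0.5 [q0*w + rho x w; -rho.w]."""
    rho, q0 = q[:3], q[3]
    drho = 0.5 * (q0 * w + cross_matrix(rho) @ w)
    dq0 = -0.5 * (rho @ w)
    return np.concatenate([drho, [dq0]])

q = np.array([0.0, 0.0, 0.0, 1.0])   # identity relative attitude
w_sm = np.array([0.0, 0.0, 0.01])    # relative rate, rad/s (assumed)
dt = 0.1
for _ in range(1000):                # 100 s of propagation
    q = q + dt * quat_rate(q, w_sm)
    q = q / np.linalg.norm(q)        # keep the quaternion unit-norm
```

The accumulated rotation angle, $2\arctan(\|\varrho\|/q_0)$, should come out close to 0.01 rad/s × 100 s = 1 rad, consistent with the attitude-matrix formulation of Eq. (9).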

    Relative Navigation Equations

From basic orbit theory,10 the equations of motion are written assuming that each satellite is referenced with respect to the same inertial frame. The vectors are described in Figures 2 and 3:

$$ p^2_i R_s = -\frac{G m_e \left( R_m + r \right)}{\left\| R_m + r \right\|^3} + a_s \tag{11a} $$
$$ p^2_i R_m = -\frac{G m_e\, R_m}{\left\| R_m \right\|^3} + a_m \tag{11b} $$

where G is the universal constant of gravitation and $m_e$ is the mass of the Earth. The relative orbit is described by the difference between both vectors, $r = R_s - R_m$. Taking derivatives with respect to the


Fig. 2 Formation Flying Geometry.

Fig. 3 Relative Navigation: the master and the secondary satellite orbits.

inertial frame, the following equation is obtained

$$ p^2_i\, r = -\mu_e \left[ \frac{R_m + r}{\left\| R_m + r \right\|^3} - \frac{R_m}{\left\| R_m \right\|^3} \right] + \Delta a \tag{12} $$

where $\mu_e \equiv G m_e$. The quantity $\Delta a = (a_s - a_m)$ is the differential disturbance between both satellites. It collects the higher order gravitational potential terms (J2, J3, etc.) and the nonconservative forces (such as drag, solar pressure, etc.) applied to the satellites. Only the differential terms are considered, generated by a difference in altitude, in mass, in exposed area, etc., which in general reduces the magnitude of the absolute perturbation. For example, similar satellites with large solar arrays orbiting near each other cannot have relative drag forces or torques between them, even for the case where each individual satellite is subject to a strong drag perturbation. In other words, both satellites have orbits that are decaying at the same rate.

The differential equation for the relative navigation problem, which is described by Eq. (12), can be simplified if an appropriate reference system, other than the inertial frame, is used to express the relative position and velocity vectors. If the master satellite position vector is written as $R^m_m = R_m\,[1,\ 0,\ 0]^T$, the expression can be simplified. The frame with this property is the Local Vertical Local Horizontal (LVLH) reference frame,11 which is widely used to reference Earth pointing satellites. The LVLH frame is centered at the mass center of the master satellite, and the axes have the following orientation: the X axis is along the position vector, the Z axis is normal to the motion plane (strictly speaking, the osculating plane), and the Y axis is defined to complete a right hand orthogonal reference frame.

In special cases, to describe the relative motion a non-orthogonal reference frame is chosen in order to place the velocity vector continuously along the Y axis and the position along the X axis (this condition is valid only for circular orbits using the LVLH frame). This simple fact gives a very efficient representation of the state vector at any time, but it requires an extra non-orthogonal transformation at every time step, which can be expensive from a computational point of view. This type of representation is convenient for the case of highly eccentric orbits, which may not be common in low altitude satellite constellations. In addition, the time derivative can be replaced by the derivative with respect to the true anomaly, which is a valid transformation because the true anomaly is an increasing function of time.12

By using the transport theorem, the left side of Eq. (12) is differentiated with respect to the LVLH frame, which is denoted from now on as m for simplicity. Both frames may or may not be coincident, but they must have a fixed transformation between them. Because the m frame has axes with free orientation, the LVLH represents just one of the possible frame choices. In this paper both the m and LVLH frames are the same, so that

$$ p^2_i\, r = p^2_m r + 2\,\omega_{im} \times p_m r + p_m\omega_{im} \times r + \omega_{im} \times \left( \omega_{im} \times r \right) \tag{13} $$

where $\omega_{im}$ is the angular velocity of the master frame with respect to the inertial frame and takes values only along the Z axis of the m frame, i.e., $\omega^m_{im} = [0,\ 0,\ \omega]^T$. The angular acceleration $p_m\omega_{im}$ is expressed in the m frame as $\left( p_m\omega_{im} \right)^m = [0,\ 0,\ \dot{\omega}]^T$. The vector equation (13) is decomposed in m frame components and takes the final expression given by

$$ \begin{bmatrix} \ddot{x} \\ \ddot{y} \\ \ddot{z} \end{bmatrix} = -\mu_e \begin{bmatrix} \dfrac{R_m + x}{\left\| R_m + r \right\|^3} - \dfrac{R_m}{R_m^3} \\[1ex] \dfrac{y}{\left\| R_m + r \right\|^3} \\[1ex] \dfrac{z}{\left\| R_m + r \right\|^3} \end{bmatrix} + \dot{\omega} \begin{bmatrix} y \\ -x \\ 0 \end{bmatrix} + \begin{bmatrix} 2\,\omega\,\dot{y} + \omega^2 x \\ -2\,\omega\,\dot{x} + \omega^2 y \\ 0 \end{bmatrix} + \begin{bmatrix} a_x \\ a_y \\ a_z \end{bmatrix} \tag{14} $$


The forcing part along the X axis component has the following structure: $f(R_m + x) - f(R_m)$, which is not robust from a numerical point of view because it subtracts two nearly equal numbers. This expression is maintained for compactness and will be used in the observer analysis, but for practical implementations it is convenient to re-write it to avoid the subtraction of two large numbers.10 Equation (14) expresses the dynamical model for relative navigation of the secondary satellite with respect to the master satellite.

We note that the number of master satellite orbit parameters computed on the ground and to be used in Eq. (14) is at most 3. For the general case, only the magnitude $R_m$, the angular velocity $\omega$, and the angular acceleration $\dot{\omega}$ are needed. For the special case involving circular orbits, only the position magnitude is necessary.
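A minimal propagation of Eq. (14) can be sketched for a circular master orbit, where the angular acceleration vanishes and $\omega = \sqrt{\mu_e / R_m^3}$; the orbit radius, initial offset, and hand-rolled RK4 integrator below are illustrative choices, not values from the paper:

```python
import numpy as np

# Sketch of the relative navigation dynamics of Eq. (14) for a
# circular master orbit (omega_dot = 0, omega = sqrt(mu_e/Rm^3)) and
# zero differential disturbance. The orbit radius, time step, and
# initial offset are illustrative values, not from the paper.

MU_E = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2

def relative_dynamics(state, Rm, omega):
    """Time derivative of [x, y, z, vx, vy, vz] per Eq. (14)."""
    x, y, z, vx, vy, vz = state
    d3 = ((Rm + x)**2 + y**2 + z**2) ** 1.5
    ax = -MU_E * ((Rm + x) / d3 - 1.0 / Rm**2) + 2.0*omega*vy + omega**2 * x
    ay = -MU_E * y / d3 - 2.0*omega*vx + omega**2 * y
    az = -MU_E * z / d3
    return np.array([vx, vy, vz, ax, ay, az])

def rk4_step(state, dt, Rm, omega):
    """One classical Runge-Kutta step."""
    k1 = relative_dynamics(state, Rm, omega)
    k2 = relative_dynamics(state + 0.5*dt*k1, Rm, omega)
    k3 = relative_dynamics(state + 0.5*dt*k2, Rm, omega)
    k4 = relative_dynamics(state + dt*k3, Rm, omega)
    return state + (dt / 6.0) * (k1 + 2.0*k2 + 2.0*k3 + k4)

Rm = 7.0e6                          # 7000 km circular reference orbit
omega = np.sqrt(MU_E / Rm**3)
state = np.array([0.0, 100.0, 0.0, 0.0, 0.0, 0.0])  # 100 m along-track
for _ in range(600):                # propagate 600 s at 1 s steps
    state = rk4_step(state, 1.0, Rm, omega)
```

A pure along-track offset on a circular orbit is, to first order, an equilibrium of these equations, so the 100 m offset should persist over the propagation; this matches the Clohessy-Wiltshire behavior obtained by linearizing Eq. (14).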

    Equation for the Relative Dynamics

In the attitude problem, Euler's equation or the measured gyro outputs are the starting point for the derivation of the rotational dynamics equation to obtain the angular velocity between the inertial frame and a body frame. For the relative attitude problem, Euler's equation must be applied in a differential mode, similar in fashion to the orbit case. However, we seek an expression that does not require an additional third frame (the inertial one included) beyond the m and the s frames, so that the system is independent of the choice of extra reference frames. In other words, the relative navigation and the relative attitude must be a function only of the definition of the master and secondary frames, and completely independent of the particular choice of the inertial frame or any other frame other than m and s. This simple fact is common in control theory, where the error or its derivative is defined only by the current and the desired state, independent of any other frame choice.

In the two body problem previously derived, the

equation for r is very accurate because it is supported by well known models for almost all of the forces involved, with any remaining small perturbation bounded. In the relative attitude dynamics, the presence of internal torques, which are normally unmodeled and have an unbounded time integral, plays an important role in the model equations. We assume that each satellite in the constellation has an attitude control subsystem able to maintain the desired satellite orientation inside of some allowable bound. This last hypothesis is a qualitative one. We also assume that the measurements are available frequently enough to use simpler propagation models (to be derived) as a function of the sampling interval.

The derivation begins with the computation of the angular velocity between the secondary and the inertial frame, denoted by the subscript i:

$$ \omega_{is} = \omega_{im} + \omega_{ms} \tag{15} $$

Taking derivatives with respect to the inertial frame of both sides of Eq. (15), and then applying Euler's equation on both sides, the following is obtained

$$ p_i\,\omega_{is} = p_i\,\omega_{im} + p_i\,\omega_{ms} \tag{16a} $$
$$ p_s\,\omega_{is} = p_m\,\omega_{im} + p_m\,\omega_{ms} + \omega_{im} \times \omega_{ms} \tag{16b} $$

$$ I_s^{-1} \left( T_{cs} + T_{ds} - \omega_{is} \times I_s\,\omega_{is} \right) = \omega_{im} \times \omega_{ms} + p_m\,\omega_{ms} + I_m^{-1} \left( T_{cm} + T_{dm} - \omega_{im} \times I_m\,\omega_{im} \right) \tag{17} $$

where $T_{cm}$ and $T_{cs}$ are the control torques applied to the master and the secondary satellite, respectively, $T_{dm}$ and $T_{ds}$ are the disturbance torques on each satellite, and $I_m$ and $I_s$ are the respective inertia tensors.

The controller of each satellite is designed to maintain the attitude error and the angular velocity error between the body frame and the desired frame, which is denoted by d, to within a small tolerance. Denoting the angular velocity between the secondary frame and the desired frame as $\omega_{ds}$, the following equations are directly derived

$$ \omega_{is} = \omega_{id} + \omega_{ds} \tag{18a} $$
$$ p_s\,\omega_{is} = p_s\,\omega_{id} + p_s\,\omega_{ds} \tag{18b} $$

In general $p_s\,\omega_{id} \approx 0$, because the angular velocity between the desired and inertial frames is constant or smooth enough to consider its derivative negligible. Therefore, the first order approximation of Eq. (18) is given by $p_s\,\omega_{is} = p_s\,\omega_{ds} + o\left( p_s\,\omega_{id} \right)$. To maintain the satellite on the desired manifold, where $p_s\,\omega_{ds} = 0$, the desired torque should be equal to the sum of the equivalent torque and an extra term to make $p_s\,\omega_{ds}$ converge to zero in finite time.

For the secondary satellite, the dynamics that drive the angular rate error to the desired manifold are governed by the following differential equation

$$ I_s\, p_s\,\omega_{ds} = T_{cs} + T_{ds} - \omega_{is} \times I_s\,\omega_{is} - \lambda_s\,\omega_{ds} \tag{19} $$

where the constant $\lambda_s$ in effect dictates a time constant, which is selected to fulfill the requirement of the control system. Similarly, for the master satellite we have

$$ I_m\, p_m\,\omega_{dm} = T_{cm} + T_{dm} - \omega_{im} \times I_m\,\omega_{im} - \lambda_m\,\omega_{dm} \tag{20} $$

    Substituting Eqs. (19) and (20) into Eq. (17) yields

$$ p_m\,\omega_{ms} = \begin{bmatrix} -I_x^{-1}\lambda_x & \omega_{dm_z} & -\omega_{dm_y} \\ -\omega_{dm_z} & -I_y^{-1}\lambda_y & \omega_{dm_x} \\ \omega_{dm_y} & -\omega_{dm_x} & -I_z^{-1}\lambda_z \end{bmatrix} \omega^m_{ms} + \Delta T_m + \epsilon \tag{21} $$

where $\lambda_m$ and $\lambda_s$ are chosen such that $I_m^{-1}\lambda_m \approx I_s^{-1}\lambda_s \approx I^{-1}\lambda$ for each axis. This approximation is

Fig. 4 Angular and Position Error Visualization

valid since the control system for each satellite will give the same response (i.e., each satellite is assumed to be controlled identically). The vector components of $\omega_{dm}$ are known in the master frame, but they are second order terms and can typically be ignored for short time propagation. If the above approximation is the identity relation, i.e., $I_m^{-1}\lambda_m = I_s^{-1}\lambda_s = I^{-1}\lambda$, then $\Delta T_m = 0$. A simple first order equation is used to model the effect of a mismatch between $I_m^{-1}\lambda_m$ and $I_s^{-1}\lambda_s$, with

$$ p\,\Delta T_m = -H\,\Delta T_m + \eta_T \tag{22} $$

where H is a positive definite matrix, which is typically diagonal. The term $\eta_T$ represents a Gaussian noise vector. Finally, with $\Lambda$ denoting the bracketed matrix in Eq. (21), the relative dynamics obey the following equations

$$ p_m\,\omega^m_{ms} = \Lambda\,\omega^m_{ms} + \Delta T_m + \epsilon \tag{23} $$
$$ p\,\Delta T_m = -H\,\Delta T_m + \eta_T \tag{24} $$

    These are the main angular rate equations used in theobserver design.
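A discrete sketch of Eqs. (23) and (24), with the noise term omitted, is shown below; the stand-in matrix for the bracketed term of Eq. (21), the gain H, and the initial conditions are illustrative assumptions, not gains from the paper:

```python
import numpy as np

# Discrete sketch of the relative angular-rate model, Eqs. (23)-(24),
# with the noise term omitted. Lam (a stand-in for the bracketed
# matrix of Eq. (21)), H, and the initial conditions are illustrative
# assumptions, not gains from the paper.

Lam = -0.5 * np.eye(3)    # stable stand-in for the Eq. (21) matrix
H = 0.2 * np.eye(3)       # positive definite, diagonal as in the text

w_ms = np.array([0.01, -0.02, 0.005])   # relative rate, rad/s
dT = np.array([1e-4, 0.0, -1e-4])       # torque mismatch term

dt = 0.01
for _ in range(2000):                   # 20 s of propagation
    w_ms = w_ms + dt * (Lam @ w_ms + dT)
    dT = dT + dt * (-H @ dT)
```

With a stable stand-in matrix and a positive definite H, both the relative rate and the mismatch term decay toward zero, which is the stable behavior the observer design relies on.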

    Observer Design

In this section an observer is designed to estimate the relative attitude and angular velocity, as well as the relative position and linear velocity. Because the measurements are based on the line of sight projection onto the focal plane, the angular orientation and linear position cannot be decoupled (the same is true for the angular and linear velocities). In Ref. [8] the information matrix of the attitude and position estimation errors is explicitly calculated for two line of sight observations. The information matrix is divided into four partitions, where the two main diagonal partitions correspond to the attitude and position information matrices, which have the identical structure they would have if each problem (i.e., attitude or position) were considered independently of the other. The off-diagonal partitions couple the attitude and the position errors. A diagonalization (i.e., a decoupling of the attitude and position) of the information matrix occurs only in very special cases. Therefore, the entire problem, which includes both attitude and position estimation, is considered in the observer design.
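As a sketch of how stacked line of sight residuals determine the small attitude error by pseudoinverse (the mechanism of Eqs. (27)-(30) below), consider the following; the two unit vectors and the error vector are hypothetical values:

```python
import numpy as np

def cross(v):
    # skew-symmetric cross-product matrix [v x]
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

dtheta = np.array([0.01, -0.02, 0.015])   # small attitude error, rad (assumed)

# two estimated line-of-sight directions (hypothetical geometry)
v_hats = [np.array([1.0, 0.2, 0.1]), np.array([0.3, 1.0, -0.2])]
v_hats = [v / np.linalg.norm(v) for v in v_hats]

# first-order residuals: dz_i = [dtheta x] v_hat_i = -[v_hat_i x] dtheta
dz = np.concatenate([cross(dtheta) @ v for v in v_hats])
V  = np.vstack([-cross(v) for v in v_hats])   # stacked cross-product matrices

dtheta_est = np.linalg.pinv(V) @ dz           # least-squares attitude error
print(dtheta_est)                             # recovers dtheta
```

Two non-parallel line of sight vectors make the stacked matrix rank three, so the pseudoinverse returns the error vector exactly in this noise-free sketch.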

    Attitude and Angular Velocity Observer

The observer design treats the attitude portion by representing the residual (measurement minus estimate) error through a quaternion vector formulation, and treats the position portion of the residual in a straightforward position vector formulation. The angular error between the measured ($v_m$) and the estimated ($\hat v_m$) vectors in the master frame can be visualized by a rotation axis normal to the plane that contains both vectors. This axis ($\delta\theta_{m/\hat m}$) can be interpreted as the vector part of the quaternion error, and the rotation angle between both vectors is the scalar part of the quaternion. The position error ($dz$) is the simple vector difference between the estimated and measured vectors. Figure 4 shows both approaches.

Before continuing with this concept, the following

matrix relation is first written

$$C^{m/s} = C^{m/\hat m}\,\hat C^{m/s} = \delta C\,\hat C^{m/s} \quad (25)$$

where estimated vectors, matrices, or frames are denoted with the superscript $(\hat{\ })$, and $C^{m/\hat m} \equiv \delta C$. The rotation error matrix between the estimated and measured quantities can be written in terms of the quaternion as $\delta C = I + 2\,\delta q_o\,[\delta\tilde q\times] + 2\,[\delta\tilde q\times]^2$. To simplify the notation this matrix is simply defined as $\delta C \equiv (I + [\delta\theta\times])_{m/\hat m}$. Equation (5) can now be re-written as

$$v_m = (I + [\delta\theta\times])_{m/\hat m}\,\hat C^{m/s}\,\hat v_s \quad (26)$$

where $\hat v_s$ is an estimated vector, which depends on the relative position. This clearly shows the coupling of the relative navigation and attitude problems (in the standard attitude problem this vector is the assumed known reference vector, e.g., the vector to the sun or a star in an inertial frame). Equation (26) can be re-written in residual form as

$$v_m - \hat v_m = [\delta\theta\times]_{m/\hat m}\,\hat v_m \quad (27)$$

Using the multiplicative property of the cross product matrix, the right hand side of Eq. (27) can be expressed in a more convenient form as

$$v_m - \hat v_m = -[\hat v_m\times]\,(\delta\theta_{m/\hat m}) \quad (28)$$

where the vector $\delta\theta_{m/\hat m}$ is expressed as the vector part of a quaternion in any frame. As stated previously, this is an advantage of using the quaternion parameterization over the rotation matrix in the observer. The left hand side of Eq. (28) is denoted by $dz \equiv v_m - \hat v_m$ for simplicity.

The number of measured line of sight vectors is generally greater than one, and the processing of this information can be done in the least squares sense. Each estimated vector cross product matrix is stacked as

$$V_m = \begin{bmatrix} -[\hat v_1\times]_m \\ \vdots \\ -[\hat v_N\times]_m \end{bmatrix} \quad (29)$$

In this case the pseudoinverse is computed using all available information. Therefore, the quaternion error


is computed by

$$V_m^+\,dz = \delta\theta_{m/\hat m} \rightarrow [\delta\tilde q,\ \delta q_o] \quad (30)$$

where $V_m^+$ is the pseudoinverse of $V_m$. The computation of the quaternion error is comparable to the algorithm presented in Ref. [4]. In the approach of [4] the scalar part of the quaternion ($\delta q_o$) is assumed to always be equal to +1. However, the scheme presented in this section maintains all four elements of the quaternion error because the sign of the scalar part is used in the design of the observer.

The nonlinear observer presented in Ref. [5] is used

for attitude estimation; however, two slight modifications are introduced. The first incorporates the angular velocity model in Eq. (23), and the second includes a model of a potential bias, represented by $b_m$, in the quaternion differential equation to account for any offset of the sensor, which may even be the computation of the focal distance. A first order model is chosen for this bias, given by

$$\dot b_m = -M\,b_m + \eta_b \quad (31)$$

where $M$ is a diagonal positive definite matrix which represents the time constant of the process, and $\eta_b$ is a Gaussian noise.

The proposed observer follows the same concept of

the original work of Luenberger,13 by introducing feedback terms to drive the error between the measured and estimated quantities to zero with some rate of convergence. The dynamics of the observer are given by

$$\dot{\hat q}_{m/s} = \frac{1}{2}\left[\hat q_o I_{3\times 3} + [\hat q_{m/s}\times]\right]\left[\hat\omega_{m/s} + \hat b + K_v\,\mathrm{sign}(\delta q_o)\,\delta\tilde q\right] \quad (32)$$

$$\dot{\hat q}_o = -\frac{1}{2}\,\hat q_{m/s}^{\,T}\left[\hat\omega_{m/s} + \hat b + K_v\,\mathrm{sign}(\delta q_o)\,\delta\tilde q\right] \quad (33)$$

$$\dot{\hat\omega}_{m/s} = -\Lambda\,\hat\omega_{m/s} + \Delta\hat T_m + K_p\,\mathrm{sign}(\delta q_o)\,\delta\tilde q \quad (34)$$

$$\Delta\dot{\hat T}_m = -H\,\Delta\hat T_m + K_T\,\mathrm{sign}(\delta q_o)\,\delta\tilde q \quad (35)$$

$$\dot{\hat b}_m = -M\,\hat b_m + K_b\,\mathrm{sign}(\delta q_o)\,\delta\tilde q \quad (36)$$

where $K_v$, $K_p$, $K_T$ and $K_b$ are positive definite matrices. The sign function ensures that the smallest possible angle is chosen between the two equivalent rotation angles described by $\theta$ and $2\pi - \theta$.

The quaternion error between the truth ($q$) and estimate ($\hat q$) is defined, following Ref. [4], as $\delta q = q \otimes \hat q^{-1}$, and the error dynamics are given by

$$\delta\dot q = \dot q \otimes \hat q^{-1} + q \otimes \dot{\hat q}^{-1} = \frac{1}{2}\left[\omega \otimes \delta q - \delta q \otimes \hat\omega\right] \quad (37)$$

$$\delta\dot q = \frac{1}{2}\,\delta q \otimes (\omega - \hat\omega) = \frac{1}{2}\,\delta q \otimes \delta\omega \quad (38)$$

where the superscript $(\tilde{\ })$ indicates the error between the truth and the estimate. Using Eq. (38), the error dynamics can be shown to be given by

$$\delta\dot{\tilde q} = \frac{1}{2}\left[\delta q_o I_{3\times 3} + [\delta\tilde q\times]\right]\left[\delta\omega_{m/s} + \delta b - K_v\,\mathrm{sign}(\delta q_o)\,\delta\tilde q\right] \quad (39)$$

$$\delta\dot q_o = -\frac{1}{2}\,\delta\tilde q^{\,T}\left[\delta\omega_{m/s} + \delta b - K_v\,\mathrm{sign}(\delta q_o)\,\delta\tilde q\right] \quad (40)$$

$$\delta\dot\omega_{m/s} = -\Lambda\,\delta\omega_{m/s} + \Delta\tilde T - K_p\,\mathrm{sign}(\delta q_o)\,\delta\tilde q \quad (41)$$

$$\Delta\dot{\tilde T} = -H\,\Delta\tilde T - K_T\,\mathrm{sign}(\delta q_o)\,\delta\tilde q \quad (42)$$

$$\delta\dot b = -M\,\delta b - K_b\,\mathrm{sign}(\delta q_o)\,\delta\tilde q \quad (43)$$

In the above expressions, the indication of frames has been deleted to simplify the notation, but all vectors are expressed in the master frame, and the quaternion error describes the rotation between $m$ and $\hat m$. The system clearly has two equilibrium points for the set of the angular vectors, $[\delta\tilde q,\ \delta q_o,\ \delta\omega,\ \Delta\tilde T,\ \delta b] = [0_{3\times 1},\ \pm 1,\ 0_{3\times 1},\ 0_{3\times 1},\ 0_{3\times 1}]$.

The extended Kalman filter can also be used to estimate the relative attitude and position from line of sight observations. However, a proof of global asymptotic stability using the extended Kalman filter, even for the attitude-only problem, is difficult to derive. Global asymptotic stability for the entire problem involving attitude and position for the observer derived in this section has not been shown to date; however, an analysis of the stability of the observer design can be performed by dividing the entire problem into independent attitude and position cases. Although this approach does not prove global stability, it does provide some measure of local stability for each problem. A Lyapunov stability approach and contraction theory are used in each case.

The next step is to analyze the stability of the error

equation. Because the system is nonlinear and non-autonomous, Lyapunov's approach is used. Based on the candidate Lyapunov function used in Ref. [5], the following function is chosen

$$V = \frac{1}{2}\left(\delta\omega^T K_p^{-1}\,\delta\omega + \delta b^T K_b^{-1}\,\delta b\right) + \begin{cases} (\delta q_o - 1)^2 + \delta\tilde q^{\,T}\delta\tilde q & \text{if } \delta q_o \geq 0 \\ (\delta q_o + 1)^2 + \delta\tilde q^{\,T}\delta\tilde q & \text{if } \delta q_o < 0 \end{cases} \quad (44)$$

From the definition of the quaternion, the following two relations are true: $\delta q_o^2 + \delta\tilde q^{\,T}\delta\tilde q = 1$ and $\delta q_o\,\delta\dot q_o + \delta\tilde q^{\,T}\delta\dot{\tilde q} = 0$. Taking the time derivative of the candidate Lyapunov function, the following expression is obtained

$$\dot V = \delta\omega^T K_p^{-1}\,\delta\dot\omega + \delta b^T K_b^{-1}\,\delta\dot b + \begin{cases} -2\,\delta\dot q_o & \text{if } \delta q_o \geq 0 \\ +2\,\delta\dot q_o & \text{if } \delta q_o < 0 \end{cases} \quad (45)$$

    Substituting Eqs. (40), (41) and (43) into Eq. (45)


yields

$$\dot V = -\delta\omega^T K_p^{-1}\Lambda\,\delta\omega - \delta b^T K_b^{-1} M\,\delta b - \delta\tilde q^{\,T} K_v\,\delta\tilde q \quad (46)$$

The function $\dot V$ is clearly always negative, so $V$ is indeed a Lyapunov function and stability has been proved. Therefore, Eq. (46) assures that $\delta\tilde q \rightarrow 0$, and since $\delta q_o^2 + \delta\tilde q^{\,T}\delta\tilde q = 1$ the magnitude of the scalar part converges exponentially with $|\delta q_o| \rightarrow 1$.

Navigation Observer

The observers for the relative position and relative linear velocity are given by

$$\dot{\hat r} = \hat v - K_p\,dz \quad (47)$$

$$\dot{\hat v} = f(\hat r, \hat v) - K_v\,dz \quad (48)$$

where $f(\cdot)$ is the right hand side of Eq. (14), $r$ is the relative position vector, and $v$ is the relative linear velocity vector. The minus signs in Eqs. (47) and (48) are due to the definition of $dz$. The constant gains $K_p$ and $K_v$ are positive definite matrices (usually diagonal).

The error equations are directly calculated as

$$\dot{\tilde r} = \tilde v + K_p\,dz \quad (49)$$

$$\dot{\tilde v} = f(r, v) - f(\hat r, \hat v) + K_v\,dz \quad (50)$$

To prove observer convergence, the right hand side

of Eq. (50) must be bounded. Both functions $f(r, v)$ and $f(\hat r, \hat v)$ are continuous, so that a derivative always exists. The distance between both functions is given by the mismatch between the true and estimated quantities. Lipschitz's condition is used to quantify this mismatch with $\|f(x) - f(\hat x)\| \leq K\,\|\tilde x\|$, where the vector $x = [r,\ v]^T$ represents the state vector. The value of $K$ is calculated using the mean value inequality, i.e., $K = \sup\|\nabla f(x)\|$, where the supremum is calculated over the range of possible variations of the state vector in a limited time span. For the case of $K \leq 1$, the observer can be defined by a contraction mapping.14 To complete the contraction mapping, two operators are defined: $T(x) = \int_{t_o}^{t_f} f[x(\tau)]\,d\tau$ and $\hat T(\hat x) = \int_{t_o}^{t_f} f[\hat x(\tau)]\,d\tau$. Using the Lipschitz condition, the sequential estimation process after $n$ time steps gives the following inequality

$$\|T(x) - \hat T(\hat x)\| \leq \frac{K^n\,t^n}{n!}\,\|\tilde x\| \quad (51)$$

where $t$ is the sampling interval. In the attitude observer the observation is modeled by a function of a rotation matrix or a quaternion error. In the position case the observation is modeled using the position vector $r$. The relation between the measurement residual and the position can be written as $dz = \Gamma\,\tilde r$, where $\Gamma$ is a positive definite matrix. This approach assumes that a linear relationship between the residual and position error exists, which is valid since the projection of

Table 1 Orbital Elements of the Master Satellite

Semimajor axis: a = 6,878 km
Eccentricity: e = 0
Inclination: i = 50 deg
Node right ascension: Ω = 5 deg
Argument of perigee: ω = 10 deg
Mean anomaly: M = 8 deg

line of sight on the focal plane is an affine function of the relative position. We note that this relation is used only in the analysis; the observer given in Eq. (48) is used in the actual implementation. The position and velocity errors can now be re-written as

$$\begin{bmatrix} \dot{\tilde r} \\ \dot{\tilde v} \end{bmatrix} = \begin{bmatrix} -K_p\Gamma + K & I_{3\times 3} \\ -K_v\Gamma + K & 0_{3\times 3} \end{bmatrix} \begin{bmatrix} \tilde r \\ \tilde v \end{bmatrix} \quad (52)$$

where the gains are selected so that $K < \|K_p\Gamma\|$ and $K < \|K_v\Gamma\|$. The linear system is then exponentially stable, where the proof can be found in Ref. [5].
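A quick numerical check of this stability condition can be made by examining the eigenvalues of the system matrix in Eq. (52); the gains, the sensitivity matrix $\Gamma$, and the Lipschitz bound $K$ below are illustrative assumptions:

```python
import numpy as np

# Illustrative (hypothetical) gains and sensitivity matrix
Kp  = 2.0 * np.eye(3)      # position gain
Kv  = 1.0 * np.eye(3)      # velocity gain
Gam = 1.0 * np.eye(3)      # Gamma: residual-to-position sensitivity
K   = 0.1 * np.eye(3)      # Lipschitz bound, small relative to Kp*Gam, Kv*Gam

# Block matrix of Eq. (52)
A = np.block([[-Kp @ Gam + K, np.eye(3)],
              [-Kv @ Gam + K, np.zeros((3, 3))]])

eigs = np.linalg.eigvals(A)
print(max(eigs.real))   # negative: error dynamics are exponentially stable
```

With these values every eigenvalue has a negative real part; making $K$ comparable to $\|K_p\Gamma\|$ or $\|K_v\Gamma\|$ pushes the eigenvalues toward the imaginary axis, which is the role of the gain condition above.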

    Simulation

The orbital elements used in the simulation of the master satellite are shown in Table 1. A small initial condition perturbation of these elements is used to simulate the motion of the secondary satellite. A plot of the relative motion between both satellites for an 80 second simulation is shown in Figure 5. The true inertia matrices of both satellites are given by

$$I_s = I_m = \mathrm{diag}[100,\ 120,\ 130]\ \text{N-m-s}^2 \quad (53)$$

In the observer the following inertia matrices are used:

$$\hat I_s = \hat I_m = \mathrm{diag}[110,\ 115,\ 140]\ \text{N-m-s}^2 \quad (54)$$

The true relative initial angular velocity is given by

$$\omega = [0.065,\ 0.048,\ 0.03]^T\ \text{deg/sec} \quad (55)$$

The relative angular velocity trajectory is computed by integrating the following equation

$$\dot\omega = -\lambda\,I_s^{-1}\,\omega \quad (56)$$

where $I_s$ is the true inertia and $\lambda = 0.02$. A noise of $f/3000$ is assumed for each measurement on the focal plane, where $f$ is the focal length. Beacons have been placed on the secondary satellite at a distance of 1 meter from the mass center along each coordinate axis, and a fourth beacon is placed at $[1,\ 1,\ 1]^T$ in the secondary frame.

The observer described in the last section is implemented for state estimation from the line-of-sight measurements. The initial condition angular error is a rotation of about 15 deg along each of the coordinate axes. The initial angular velocity has 50 percent errors from Eq. (55). The initial position condition is in error by 10 percent from the true value, and the initial linear velocity condition is in error by 30 percent from the true value. The


Fig. 5 True Relative Position Between Satellites

Fig. 6 Attitude Errors

sampling rate is 4 Hz. Figures 6-9 show the attitude and angular velocity errors, and the position and linear velocity errors, for the estimator. The relative distance along the X axis is almost three times the distance along the other two axes (around 94 meters versus 30 meters). This difference can be observed in the oscillation of the attitude error in roll, which is intuitively correct. The roll angle error is within 0.3 degrees, and the pitch and yaw angle errors are within 0.05 degrees. The position error in all three axes is within 1 cm. Also, the velocities are well estimated using the observer.
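As a sketch of the navigation observer of Eqs. (47) and (48) in a setting like this simulation: since Eq. (14) is not reproduced in this excerpt, Clohessy-Wiltshire relative dynamics for the circular orbit of Table 1 stand in for $f$, and the gains, residual model, and initial offsets are illustrative assumptions:

```python
import numpy as np

mu = 398600.4418          # Earth gravitational parameter, km^3/s^2
a  = 6878.0               # semimajor axis from Table 1, km
n  = np.sqrt(mu / a**3)   # mean motion of the circular reference orbit, 1/s

def f(r, v):
    # Clohessy-Wiltshire relative acceleration (stand-in for Eq. (14))
    return np.array([3*n**2*r[0] + 2*n*v[1],
                     -2*n*v[0],
                     -n**2*r[2]])

Kp = 0.5 * np.eye(3)      # illustrative observer gains
Kv = 0.1 * np.eye(3)
Gam = np.eye(3)           # assumed residual model: dz = -Gam (r - r_hat)

dt = 0.25                 # 4 Hz sampling
r, v = np.array([94.0, 30.0, 30.0]), np.zeros(3)     # truth, meters
rh = 1.10 * r                                        # ~10% position error
vh = np.array([0.03, -0.02, 0.01])                   # small velocity error

for _ in range(320):      # 80 s
    dz = -Gam @ (r - rh)                  # residual (assumed linear model)
    rh = rh + dt * (vh - Kp @ dz)         # Eq. (47)
    vh = vh + dt * (f(rh, vh) - Kv @ dz)  # Eq. (48)
    r, v = r + dt * v, v + dt * f(r, v)   # truth propagation

print(np.linalg.norm(r - rh))  # error shrinks far below the initial ~10 m offset
```

With these assumed gains the position error contracts well within the 80 second window, mirroring the centimeter-level errors reported above.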

    Conclusions

An observer based system has been presented as an alternative to the extended Kalman filter for formation flying navigation of spacecraft. The measurements were assumed to be given by line of sight observations using a novel sensing approach involving LED beacons and position sensing technology in the focal plane. The observer design utilizes a simple term in the attitude estimator to ensure that convergence is achieved in the fastest time possible, and stability has

Fig. 7 Angular Velocity Errors

Fig. 8 Position Errors

Fig. 9 Linear Velocity Errors


been proven using a local Lyapunov analysis. The relative position and linear velocity observer has been proven to meet quantitative convergence using a contraction mapping analysis. Simulation results have shown that accurate relative attitude and position estimation is possible.

    Acknowledgement

This work was supported under a NASA grant (NCC 5-448), under the supervision of Dr. F. Landis Markley at NASA-Goddard Space Flight Center, and by a Texas Higher Education Coordinating Board grant (000512-0004-1999) and an Air Force Office of Sponsored Research grant (32525-57200). The authors wish to thank Dr. Markley for many helpful suggestions and comments.

References

1. Corazzini, T., Robertson, A., Adams, J. C., Hassibi, A., and How, J. P., "GPS Sensing for Spacecraft Formation Flying," Proceedings of the 1997 ION-GPS, Kansas City, MO, Sept. 1997, pp. 735-744.
2. Purcell, G., Kuang, D., Lichten, S., Wu, S.-C., and Young, L., "Autonomous Formation Flyer (AFF) Sensor Technology Development," 21st Annual AAS Guidance and Control Conference, Breckenridge, CO, Feb. 1998, AAS Paper 98-062.
3. Junkins, J. L., Hughes, D. C., Wazni, K. P., and Pariyapong, V., "Vision-Based Navigation for Rendezvous, Docking and Proximity Operations," 22nd Annual AAS Guidance and Control Conference, Breckenridge, CO, Feb. 1999, AAS Paper 99-021.
4. Lefferts, E. J., Markley, F. L., and Shuster, M. D., "Kalman Filtering for Spacecraft Attitude Estimation," Journal of Guidance, Control, and Dynamics, Vol. 5, No. 5, Sept.-Oct. 1982, pp. 417-429.
5. Salcudean, S., "A Globally Convergent Angular Velocity Observer for Rigid Body Motion," IEEE Transactions on Automatic Control, Vol. 36, No. 12, Dec. 1991, pp. 1493-1497.
6. Wrigley, W., Hollister, W., and Denhard, W., Gyroscopic Theory, Design, and Instrumentation, MIT Press, Cambridge, MA, 1969.
7. Light, D. L., "Satellite Photogrammetry," Manual of Photogrammetry, edited by C. C. Slama, chap. 17, American Society of Photogrammetry, Falls Church, VA, 4th ed., 1980.
8. Crassidis, J. L., Alonso, R., and Junkins, J. L., "Optimal Attitude and Position Determination From Line of Sight Measurements," The Richard H. Battin Astrodynamics Conference, College Station, TX, March 2000, AAS Paper 00-268.
9. Shuster, M. D., "A Survey of Attitude Representations," Journal of the Astronautical Sciences, Vol. 41, No. 4, Oct.-Dec. 1993, pp. 439-517.
10. Battin, R. H., An Introduction to the Mathematics and Methods of Astrodynamics, American Institute of Aeronautics and Astronautics, New York, NY, 1987.
11. Bate, R. R., Mueller, D. D., and White, J. E., Fundamentals of Astrodynamics, Dover Publications, New York, NY, 1971.
12. Tschauner, J., "Elliptic Orbit Rendezvous," AIAA Journal, Vol. 5, No. 6, June 1967, pp. 1110-1113.
13. Luenberger, D. G., "Observers for Multivariable Systems," IEEE Transactions on Automatic Control, Vol. 11, No. 2, April 1966, pp. 190-197.
14. Luenberger, D. G., Optimization by Vector Space Methods, John Wiley & Sons, New York, NY, 1969.
