
Research Article
Space Object Detection in Video Satellite Images Using Motion Information

Xueyang Zhang,1 Junhua Xiang,1 and Yulin Zhang1,2

1 College of Aerospace Science and Engineering, National University of Defense Technology, Changsha 410073, China
2 School of Aerospace, Tsinghua University, Beijing 100084, China

Correspondence should be addressed to Xueyang Zhang; [email protected]

Received 1 May 2017; Revised 7 August 2017; Accepted 30 August 2017; Published 17 October 2017

Academic Editor: Linda L. Vahala

Copyright © 2017 Xueyang Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Compared to ground-based observation, space-based observation is an effective approach to catalog and monitor increasing space objects. In this paper, space object detection in a video satellite image with star image background is studied. A new detection algorithm using motion information is proposed, which includes not only the known satellite attitude motion information but also the unknown object motion information. The effect of satellite attitude motion on an image is analyzed quantitatively, which can be decomposed into translation and rotation. Considering the continuity of object motion and brightness change, variable thresholding based on local image properties and detection of the previous frame is used to segment a single-frame image. Then, the algorithm uses the correlation of object motion in multiframe and satellite attitude motion information to detect the object. Experimental results with a video image from the Tiantuo-2 satellite show that this algorithm provides a good way for space object detection.

1. Introduction

On September 8, 2014, Tiantuo-2 (TT-2), designed independently by the National University of Defense Technology, was successfully launched into orbit; it is the first Chinese interactive Earth observation microsatellite using a video imaging system. Its mass is 67 kg and its orbital altitude is 490 km. Experiments based on interactive control strategies with a human in the loop were carried out to realize continuous tracking and monitoring of moving objects.

Video satellites have now been widely utilized by domestic and foreign researchers, and several have been launched into orbit, such as SkySat-1 and SkySat-2 by Skybox Imaging, the TUBSAT series by the Technical University of Berlin [1], and the video satellite by Chang Guang Satellite Technology. They obtain video images with different on-orbit performances.

The increasing number of space objects has a growing impact on human space activities, and cataloging and monitoring them has become a hot issue in the field of space environment [2–5]. Compared to ground-based observation, space-based observation is not restricted by weather or geographical location and avoids atmospheric disturbance of the object's signal, which gives it a unique advantage [2, 3, 6]. Using a video satellite to observe space objects is therefore an effective approach.

Object detection and tracking in satellite video images is an important part of space-based observation. For the general problem of object detection and tracking in video, optical flow [7], block matching [8], template detection [9, 10], and other methods have been proposed. But most of these methods are based on gray level, and sufficient texture information is highly required [11]. In fact, small object detection in optical images with star background mainly has the following difficulties: (1) because the object occupies only one or a few pixels in the image, the shape of the object is not available; (2) due to background stars and the noise introduced by space environment detection equipment, the object is almost submerged in complex background bright spots, which greatly increases the difficulty of object detection.

Aiming at these difficulties, many scholars have proposed various algorithms, mainly including track before detection (TBD) and detect before tracking (DBT). Multistage

Hindawi, International Journal of Aerospace Engineering, Volume 2017, Article ID 1024529, 9 pages, https://doi.org/10.1155/2017/1024529


hypothesis testing (MHT) [12] and dynamic-programming-based algorithms [13–15] can be classified as TBD, which is effective when the image signal-to-noise ratio is very low, but high computational complexity and hard thresholds always accompany them [16]. In practice, DBT is usually adopted for object detection in star images [17]. Some DBT algorithms are discussed below.

An object trace acquisition algorithm for sequence star images with moving background was put forward to detect the discontinuous trace of a small space object by [18], which used the centroid of a group of brightest stars in sequence images to match images and filtered background stars. A space object detection algorithm based on the principle of star map recognition was put forward by [19], which used a triangle algorithm to accomplish image registration and detected the space object in the registered image series. Based on the analysis of a star image model, Han and Liu [20] proposed an image registration method via extracting feature points and then matching triangles. Zhang et al. [21] used a triangle algorithm to match background stars in image series, classified stars and potential targets, and then utilized the 3-frame nearest-neighbor correlation method to detect targets grossly, in which false targets were filtered with the multiframe back-and-forth searching method. An algorithm based on iterative optimization distance classification was proposed to detect small visible optical space objects by [22], which used the classification method to filter background stars and utilized the continuity of object motion to achieve trajectory association.

The core idea of the DBT algorithms above is to match the sequence images based on the relative invariance of the star positions and to filter background stars. Therefore, there is also a large amount of computation and poor real-time performance. A real-time detection algorithm based on FPGA and DSP for small space objects was presented by [23], but the instability and motion of the observation platform were not considered. In addition, the image sequences used to validate the algorithms in the literature above are from ground photoelectric telescopes in [15, 17, 18, 21–23] and generated by simulation in [19, 20]; the characteristics of image sequences from space-based observation have not been considered.

This paper studies space object detection in video satellite images with star image background, and a new detection algorithm based on motion information is proposed, which includes not only the known satellite attitude motion information but also the unknown object motion information. Experimental results on a video image from TT-2 demonstrate the effectiveness of the algorithm.

2. Characteristic Analysis of Satellite Video Images

A video satellite image of space contains the deep space background, stars, objects, and noise introduced by imaging devices and cosmic rays. The mathematical model is given by [24] as follows:

\[
f(x, y, t) = f_B(x, y, t) + f_S(x, y, t) + f_T(x, y, t) + n(x, y, t), \tag{1}
\]

where $f_B(x, y, t)$ is the gray level of the deep space background, $f_S(x, y, t)$ is the gray level of stars, $f_T(x, y, t)$ is the gray level of objects, and $n(x, y, t)$ is the gray level of noise. $(x, y)$ is the pixel coordinate in the image, and $t$ is the time of the video.

In the video image, stars and weak small objects occupy only one or a few pixels. It is difficult to distinguish the object from background stars by morphological characteristics or photometric features. Besides, attitude motion of the object leads to changing brightness, even losing the object for several frames. Therefore, it is almost impossible to detect the object in a single-frame image, and it is necessary to use the continuity of object motion in multiframe images.

Figure 1 is a local image of a video from TT-2, where Figure 1(a) is a star and Figure 1(b) is an object (debris). It is impossible to distinguish them by morphological characteristics or photometric features.

3. Object Detection Using Motion Information

When the attitude motion of a video satellite is stabilized, background stars move extremely slowly in the video and can be considered static over several frames. At the same time, noise is random (dead pixels appear in a fixed position) and


Figure 1: Images of a star and an object.



only objects are moving continuously. This is the most important distinction among the motion characteristics of their images. Because of platform jitter, objects cannot be detected by a simple frame difference method. A new object detection algorithm using motion information (NODAMI) is proposed. The motion information contains the known satellite attitude motion information. When the attitude changes, the video image varies accordingly. This case can be transformed into the case of attitude stabilization by compensating the attitude motion in the image. The complexity of the detection algorithm is greatly reduced because no image registration is needed.

The procedure of NODAMI is shown in Figure 2. Firstly, the effect of satellite attitude motion on the image is analyzed and the attitude motion compensation formula is derived. Then, each step of NODAMI is described in detail.

3.1. Compensation of Satellite Attitude Motion. The inertial frame is defined as $O_i - X_iY_iZ_i$, originating at the center of the Earth, with the $O_iZ_i$-axis aligned with the Earth's North Pole and the $O_iX_i$-axis aligned with the vernal equinox. The satellite body frame is defined as $O_b - X_bY_bZ_b$, originating at the mass center of the satellite, with the three axes aligned with the principal axes of the satellite. The pixel frame is defined as $I - xy$, originating at the upper left corner of the image plane and using the pixel as the coordinate unit; $x$ and $y$ represent the column and row numbers, respectively, of the pixel in the image. The image frame is defined as $O - X_pY_p$, originating at the intersection of the optical axis and the image plane, with the $OX_p$-axis and the $OY_p$-axis aligned with $Ix$ and $Iy$, respectively. The camera frame is defined as $O_c - X_cY_cZ_c$, originating at the optical center of the camera, with the $O_cZ_c$-axis aligned with the optical axis, the $O_cX_c$-axis aligned with $-OX_p$, and the $O_cY_c$-axis aligned with $-OY_p$. All the 3D frames defined above are right-handed.

In the inertial frame, let the coordinate of the object be $(x_i, y_i, z_i)$ and the coordinate of the satellite be $(x_{i0}, y_{i0}, z_{i0})$. Assuming that the 3-2-1 Euler angle of the satellite body frame with respect to the inertial frame is $(\psi, \theta, \varphi)$, the coordinate of the object in the satellite body frame is given as follows:

\[
\begin{bmatrix} x_b \\ y_b \\ z_b \end{bmatrix}
= R_{321}(\psi, \theta, \varphi)
\begin{bmatrix} x_i - x_{i0} \\ y_i - y_{i0} \\ z_i - z_{i0} \end{bmatrix}
= R_X(\varphi)\, R_Y(\theta)\, R_Z(\psi)
\begin{bmatrix} x_i - x_{i0} \\ y_i - y_{i0} \\ z_i - z_{i0} \end{bmatrix},
\tag{2}
\]

where $R_{321}$ is the attitude matrix of the satellite body frame with respect to the inertial frame, and $R_X(\theta)$, $R_Y(\theta)$, and $R_Z(\theta)$ are the coordinate transformation matrices for a rotation around the x-axis, y-axis, and z-axis, respectively, through the angle $\theta$:

\[
R_X(\theta) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & \sin\theta \\ 0 & -\sin\theta & \cos\theta \end{bmatrix},\quad
R_Y(\theta) = \begin{bmatrix} \cos\theta & 0 & -\sin\theta \\ 0 & 1 & 0 \\ \sin\theta & 0 & \cos\theta \end{bmatrix},\quad
R_Z(\theta) = \begin{bmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}.
\tag{3}
\]
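The rotation matrices of (3) and the composed 3-2-1 attitude matrix of (2) translate directly into code. The following is a minimal NumPy sketch (the function names are ours, not from the paper):

```python
import numpy as np

def Rx(t):
    """Rotation about the x-axis, as in Eq. (3)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, c, s],
                     [0.0, -s, c]])

def Ry(t):
    """Rotation about the y-axis, as in Eq. (3)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0.0, -s],
                     [0.0, 1.0, 0.0],
                     [s, 0.0, c]])

def Rz(t):
    """Rotation about the z-axis, as in Eq. (3)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, s, 0.0],
                     [-s, c, 0.0],
                     [0.0, 0.0, 1.0]])

def R321(psi, theta, phi):
    """3-2-1 (yaw-pitch-roll) attitude matrix of Eq. (2)."""
    return Rx(phi) @ Ry(theta) @ Rz(psi)
```

Since each factor is orthonormal, the composed matrix is orthonormal as well, which is a convenient sanity check.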

Assuming that the camera is fixed on the satellite and the 3-2-1 Euler angle of the camera frame with respect to the satellite body frame is $(\alpha_0, \beta_0, \gamma_0)$, which is constant and determined by the design of the satellite structure, the attitude matrix of the camera frame with respect to the satellite body frame, $R_0$, can be derived. Let the coordinate of $O_c$ in the satellite body frame be $S_0 = (x_0, y_0, z_0)^T$; then, the coordinate of the object in the camera frame is given as follows:

\[
\begin{bmatrix} x_c \\ y_c \\ z_c \end{bmatrix} = R_0 \begin{bmatrix} x_b - x_0 \\ y_b - y_0 \\ z_b - z_0 \end{bmatrix}. \tag{4}
\]

The design of the video satellite tries to make $(\alpha_0, \beta_0, \gamma_0)$ zero, and in practice they are small quantities; $(x_0, y_0, z_0)$ are much smaller than $(x_b, y_b, z_b)$. Without loss of generality, let $R_0 = I$ and $S_0 = 0$; that is, the camera frame and the satellite body frame coincide.

As shown in Figure 3, the coordinate of the object in the image frame is given as follows:

\[
\begin{bmatrix} x_p \\ y_p \end{bmatrix} = \frac{f}{z_c} \begin{bmatrix} x_c \\ y_c \end{bmatrix}, \tag{5}
\]

where $f$ is the focal length of the camera. If the field of view of the camera is FOV, then $|x_p|, |y_p| < f \tan(\mathrm{FOV}/2)$.

Figure 2: Procedure of NODAMI (video image denoising, single-frame binary segmentation, coordinate extraction, and trajectory association yielding the target trajectory; attitude motion compensation via the Kalman filter and a variable coefficient feed back into segmentation).



The coordinate of the object in the pixel frame is given as follows:

\[
\begin{bmatrix} m \\ n \end{bmatrix} = \begin{bmatrix} x_p / d_x \\ y_p / d_y \end{bmatrix} + \begin{bmatrix} m_0 \\ n_0 \end{bmatrix}, \tag{6}
\]

where $(d_x, d_y)$ is the size of each pixel and $(m_0, n_0)$ is the coordinate of the image center.

The video satellite can achieve tracking imaging of the object based on interactive attitude adjustment with a human in the loop. Attitude adjustment is applied to the attitude of the satellite body frame with respect to the inertial frame. Assuming that the 3-2-1 Euler angle of the attitude adjustment is $(\Delta\psi, \Delta\theta, \Delta\varphi)$, the coordinates of a point $P$ in the camera frame, the image frame, and the pixel frame are $(x_{c1}, y_{c1}, z_{c1})$, $(x_{p1}, y_{p1})$, and $(m_1, n_1)$, respectively, before the attitude adjustment, and $(x_{c2}, y_{c2}, z_{c2})$, $(x_{p2}, y_{p2})$, and $(m_2, n_2)$, respectively, after the attitude adjustment.

Ignoring the orbit motion of the satellite during the attitude adjustment, which is reasonable when the adjustment is instantaneous, gives

\[
\begin{bmatrix} x_{c2} \\ y_{c2} \\ z_{c2} \end{bmatrix}
= R_{321}(\Delta\psi, \Delta\theta, \Delta\varphi)
\begin{bmatrix} x_{c1} \\ y_{c1} \\ z_{c1} \end{bmatrix}
= R_X(\Delta\varphi)\, R_Y(\Delta\theta)\, R_Z(\Delta\psi)
\begin{bmatrix} x_{c1} \\ y_{c1} \\ z_{c1} \end{bmatrix}. \tag{7}
\]

Denote $(x_c, y_c, z_c)^T$, $(x_{c1}, y_{c1}, z_{c1})^T$, and $(x_{c2}, y_{c2}, z_{c2})^T$ as $\mathbf{x}$, $\mathbf{x}_1$, $\mathbf{x}_2$ and the transformation functions determined by $R_X(\Delta\varphi)$, $R_Y(\Delta\theta)$, $R_Z(\Delta\psi)$ as $T_{X,\Delta\varphi}$, $T_{Y,\Delta\theta}$, $T_{Z,\Delta\psi}$, that is,

\[
T_{X,\Delta\varphi}(\mathbf{x}) = R_X(\Delta\varphi)\,\mathbf{x},\quad
T_{Y,\Delta\theta}(\mathbf{x}) = R_Y(\Delta\theta)\,\mathbf{x},\quad
T_{Z,\Delta\psi}(\mathbf{x}) = R_Z(\Delta\psi)\,\mathbf{x}. \tag{8}
\]

Then,

\[
\mathbf{x}_2 = T_{X,\Delta\varphi}\bigl(T_{Y,\Delta\theta}\bigl(T_{Z,\Delta\psi}(\mathbf{x}_1)\bigr)\bigr). \tag{9}
\]

Denote $(x_p, y_p)^T$, $(x_{p1}, y_{p1})^T$, and $(x_{p2}, y_{p2})^T$ as $\mathbf{y}$, $\mathbf{y}_1$, $\mathbf{y}_2$ and the mapping from the camera frame to the image frame as $A$, that is,

\[
\mathbf{y} = A(\mathbf{x}),\quad \mathbf{y}_1 = A(\mathbf{x}_1),\quad \mathbf{y}_2 = A(\mathbf{x}_2). \tag{10}
\]

Denote the transformation functions of the image frame determined by $T_{X,\Delta\varphi}$, $T_{Y,\Delta\theta}$, $T_{Z,\Delta\psi}$ as $r_{X,\Delta\varphi}$, $r_{Y,\Delta\theta}$, $r_{Z,\Delta\psi}$, that is,

\[
A\bigl(T_{X,\Delta\varphi}(\mathbf{x})\bigr) = r_{X,\Delta\varphi}\bigl(A(\mathbf{x})\bigr) = r_{X,\Delta\varphi}(\mathbf{y}),\quad
A\bigl(T_{Y,\Delta\theta}(\mathbf{x})\bigr) = r_{Y,\Delta\theta}\bigl(A(\mathbf{x})\bigr) = r_{Y,\Delta\theta}(\mathbf{y}),\quad
A\bigl(T_{Z,\Delta\psi}(\mathbf{x})\bigr) = r_{Z,\Delta\psi}\bigl(A(\mathbf{x})\bigr) = r_{Z,\Delta\psi}(\mathbf{y}). \tag{11}
\]

Then,

\[
\mathbf{y}_2 = A(\mathbf{x}_2)
= A\bigl(T_{X,\Delta\varphi}\bigl(T_{Y,\Delta\theta}\bigl(T_{Z,\Delta\psi}(\mathbf{x}_1)\bigr)\bigr)\bigr)
= r_{X,\Delta\varphi}\bigl((A \circ T_{Y,\Delta\theta})\bigl(T_{Z,\Delta\psi}(\mathbf{x}_1)\bigr)\bigr)
= r_{X,\Delta\varphi}\bigl(r_{Y,\Delta\theta}\bigl((A \circ T_{Z,\Delta\psi})(\mathbf{x}_1)\bigr)\bigr)
= r_{X,\Delta\varphi}\bigl(r_{Y,\Delta\theta}\bigl(r_{Z,\Delta\psi}(\mathbf{y}_1)\bigr)\bigr). \tag{12}
\]

Equation (12) proves that the transformation of the image frame caused by attitude adjustment can be decomposed into a composition of the transformation functions determined by the rotations around the satellite body frame axes.

Obviously,

\[
r_{X,\Delta\varphi}(\mathbf{y}) = \begin{bmatrix} \dfrac{x_p}{-\sin\Delta\varphi \cdot y_p + \cos\Delta\varphi} \\[2mm] \dfrac{\cos\Delta\varphi \cdot y_p + \sin\Delta\varphi}{-\sin\Delta\varphi \cdot y_p + \cos\Delta\varphi} \end{bmatrix},\qquad
r_{Y,\Delta\theta}(\mathbf{y}) = \begin{bmatrix} \dfrac{\cos\Delta\theta \cdot x_p - \sin\Delta\theta}{\sin\Delta\theta \cdot x_p + \cos\Delta\theta} \\[2mm] \dfrac{y_p}{\sin\Delta\theta \cdot x_p + \cos\Delta\theta} \end{bmatrix},
\]
\[
r_{Z,\Delta\psi}(\mathbf{y}) = \begin{bmatrix} \cos\Delta\psi \cdot x_p + \sin\Delta\psi \cdot y_p \\ -\sin\Delta\psi \cdot x_p + \cos\Delta\psi \cdot y_p \end{bmatrix}
= \begin{bmatrix} \cos\Delta\psi & \sin\Delta\psi \\ -\sin\Delta\psi & \cos\Delta\psi \end{bmatrix} \begin{bmatrix} x_p \\ y_p \end{bmatrix}, \tag{13}
\]

where $x_p$ and $y_p$ are here normalized by the focal length $f$.

Figure 3: Diagram of frames: the camera frame $O_c - X_cY_cZ_c$, the image frame $O - X_pY_p$ on the image plane at focal length $f$, a point $P(x_c, y_c, z_c)$ projected to $(x_p, y_p)$, and the inertial frame $O_i - X_iY_iZ_i$.



When $\Delta\psi$, $\Delta\theta$, and $\Delta\varphi$ are small quantities, $r_{X,\Delta\varphi}$ and $r_{Y,\Delta\theta}$ can be approximated as follows:

\[
r_{X,\Delta\varphi}(\mathbf{y}) = \begin{bmatrix} x_p \\ y_p + \Delta\varphi \end{bmatrix},\qquad
r_{Y,\Delta\theta}(\mathbf{y}) = \begin{bmatrix} x_p - \Delta\theta \\ y_p \end{bmatrix}. \tag{14}
\]

Then, the conclusion can be drawn that the effect of satellite attitude motion on an image can be decomposed into translation and rotation of the image: small roll and pitch rotations lead to image translation, whereas yaw induces image rotation.

In particular, when $d_x = d_y = d$, the attitude motion compensation formula can be given as follows:

\[
\begin{bmatrix} m_2 \\ n_2 \end{bmatrix}
= \begin{bmatrix} \cos\Delta\psi & \sin\Delta\psi \\ -\sin\Delta\psi & \cos\Delta\psi \end{bmatrix}
\begin{bmatrix} m_1 - m_0 \\ n_1 - n_0 \end{bmatrix}
+ \begin{bmatrix} -\Delta\theta \\ \Delta\varphi \end{bmatrix} \frac{1}{d}
+ \begin{bmatrix} m_0 \\ n_0 \end{bmatrix}. \tag{15}
\]

In practice, onboard sensors, such as a fiber optic gyroscope, give the angular rate of attitude motion, $\boldsymbol{\omega} = (\omega_x, \omega_y, \omega_z)^T$. Supposing that the time interval per frame is $\Delta t$, when $\Delta t$ is a small quantity, the angular rate within $\Delta t$ can be regarded as constant. Let

\[
\vartheta = \lVert \boldsymbol{\omega}\Delta t \rVert,\qquad
\mathbf{e} = \frac{\boldsymbol{\omega}\Delta t}{\vartheta} = (e_1, e_2, e_3)^T. \tag{16}
\]

Then, the attitude adjustment matrix of the current frame moment with respect to the previous frame moment is given as follows:

\[
A = \begin{bmatrix}
c + (1-c)e_1^2 & (1-c)e_1 e_2 + s e_3 & (1-c)e_1 e_3 - s e_2 \\
(1-c)e_2 e_1 - s e_3 & c + (1-c)e_2^2 & (1-c)e_2 e_3 + s e_1 \\
(1-c)e_3 e_1 + s e_2 & (1-c)e_3 e_2 - s e_1 & c + (1-c)e_3^2
\end{bmatrix} = (a_{ij})_{3\times 3}, \tag{17}
\]

where

\[
c = \cos\vartheta,\qquad s = \sin\vartheta. \tag{18}
\]

The 3-2-1 Euler angle $(\Delta\psi, \Delta\theta, \Delta\varphi)$ can be solved as follows:

\[
\Delta\psi = \operatorname{atan2}(a_{12}, a_{11}),\qquad
\Delta\theta = \arcsin(-a_{13}),\qquad
\Delta\varphi = \operatorname{atan2}(a_{23}, a_{33}). \tag{19}
\]
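Equations (16)–(19) together turn a per-frame gyro-rate sample into Euler angles. A hedged NumPy sketch under the paper's conventions (the function names are ours):

```python
import numpy as np

def attitude_matrix(omega, dt):
    """Rodrigues-form attitude adjustment matrix of Eqs. (16)-(18):
    rotation by theta = |omega * dt| about the unit axis e = omega * dt / theta."""
    w = np.asarray(omega, dtype=float) * dt
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)  # no rotation within this frame interval
    e1, e2, e3 = w / th
    c, s = np.cos(th), np.sin(th)
    C = 1.0 - c
    return np.array([
        [c + C * e1 * e1, C * e1 * e2 + s * e3, C * e1 * e3 - s * e2],
        [C * e2 * e1 - s * e3, c + C * e2 * e2, C * e2 * e3 + s * e1],
        [C * e3 * e1 + s * e2, C * e3 * e2 - s * e1, c + C * e3 * e3],
    ])

def euler_321(A):
    """Eq. (19): extract (d_psi, d_theta, d_phi) from the matrix above."""
    return (np.arctan2(A[0, 1], A[0, 0]),
            np.arcsin(-A[0, 2]),
            np.arctan2(A[1, 2], A[2, 2]))
```

For small rates, the extracted pitch and roll reduce to $\omega_y\Delta t$ and $\omega_x\Delta t$, matching the approximation (20) below in the text.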

Generally, attitude adjustment does not rotate the image; that is, $\Delta\psi$ can be approximated as 0. And $\Delta\theta$, $\Delta\varphi$ can be approximated as follows:

\[
\Delta\theta = \omega_y \Delta t,\qquad \Delta\varphi = \omega_x \Delta t, \tag{20}
\]

for they are small quantities. Substituting $\Delta\psi = 0$ and (20) into (15) gives

\[
\begin{bmatrix} m_2 \\ n_2 \end{bmatrix} = \begin{bmatrix} m_1 \\ n_1 \end{bmatrix} + \begin{bmatrix} -\omega_y \Delta t \\ \omega_x \Delta t \end{bmatrix} \frac{1}{d}. \tag{21}
\]
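The compensation of (15), and its small-angle form (21), is a few lines of arithmetic. A minimal sketch, assuming equal pixel pitch $d$ and the notation above (the helper name is ours):

```python
import numpy as np

def compensate(m1, n1, d_psi, d_theta, d_phi, d, m0, n0):
    """Eq. (15): map pixel (m1, n1) of the previous frame into the current frame,
    given per-frame Euler angles (d_psi, d_theta, d_phi) and pixel size d."""
    c, s = np.cos(d_psi), np.sin(d_psi)
    rot = np.array([[c, s], [-s, c]])           # image rotation from yaw
    shift = np.array([-d_theta, d_phi]) / d     # translation from pitch and roll
    m2, n2 = rot @ np.array([m1 - m0, n1 - n0]) + shift + np.array([m0, n0])
    return m2, n2
```

With `d_psi = 0` this reduces exactly to (21).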

3.2. Image Denoising. Noise in the video image mainly includes space radiation noise, space background noise, and CCD dark current noise. A bilateral filter can be used for denoising; it is a simple, noniterative scheme for edge-preserving smoothing [25]. The weights of the filter have two components, the first of which is the same weighting used by the Gaussian filter. The second component takes into account the difference in intensity between the neighboring pixels and the evaluated one. The diameter of the filter is set to 5, and the weights are given as follows:

\[
w_{ij} = \frac{1}{K} \exp\!\left(-\frac{(x_i - x_c)^2 + (y_i - y_c)^2}{2\sigma_s^2}\right) \exp\!\left(-\frac{\bigl(f(x_i, y_i) - f(x_c, y_c)\bigr)^2}{2\sigma_r^2}\right), \tag{22}
\]

where $K$ is the normalization constant, $(x_c, y_c)$ is the center of the filter, $f(x, y)$ is the gray level at $(x, y)$, $\sigma_s$ is set to 10, and $\sigma_r$ is set to 75.
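As a concrete sketch, the weights of (22) can be implemented directly in NumPy with the stated parameters (diameter 5, $\sigma_s = 10$, $\sigma_r = 75$); in practice one would normally call an optimized library routine such as OpenCV's bilateralFilter instead:

```python
import numpy as np

def bilateral_filter(img, diameter=5, sigma_s=10.0, sigma_r=75.0):
    """Bilateral filter per Eq. (22): spatial Gaussian times range Gaussian."""
    r = diameter // 2
    h, w = img.shape
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    spatial = np.exp(-(xx**2 + yy**2) / (2.0 * sigma_s**2))  # domain weights
    padded = np.pad(img.astype(float), r, mode='edge')
    out = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            win = padded[i:i + diameter, j:j + diameter]
            # Range weights: penalize intensity difference to the center pixel.
            rng = np.exp(-(win - img[i, j])**2 / (2.0 * sigma_r**2))
            wgt = spatial * rng
            out[i, j] = np.sum(wgt * win) / np.sum(wgt)  # division by K
    return out
```

A flat region passes through unchanged, while isolated bright point-like objects are smoothed far less than a plain Gaussian would smooth them.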

3.3. Single-Frame Binary Segmentation. In order to distinguish between stars and objects, it is necessary to remove the background in each frame image, but stray light leads to an uneven gray level distribution of the background. Figure 4(a) is a single frame of the video image, and Figure 4(b) is its gray level histogram. The single-peak shape of the gray histogram shows that the classical global threshold method, which is traditionally used to segment star images [15, 18, 20–22, 24], cannot be used to segment the video image.

Whether it is a star or an object, its gray level is greater than that of the pixels in its neighborhood. Consider using variable thresholding based on local image properties to segment the image. Calculate the standard deviation $\sigma_{xy}$ and the mean value $m_{xy}$ for the neighborhood of every point $(x, y)$ in the image, which are descriptors of the local contrast and average gray level. Then, the variable threshold based on local contrast and average gray level is given as follows:

\[
T_{xy} = a\sigma_{xy} + b m_{xy}, \tag{23}
\]

where $a$ and $b$ are constants greater than 0. $b$ is the contribution of the local average gray level to the threshold and can be set to 1. $a$ is the contribution of the local contrast to the threshold and is the main parameter to set according to the object characteristics.

But on the other hand, for moving space objects, brightness sometimes changes with the changing attitude. If $a$ were set to a constant, the object would be lost in some frames. Considering the continuity of object motion, if $(x, y)$ is detected as a probable object coordinate in the current frame, the object detection probability is much greater in the 5 × 5 window of $(x, y)$ in the next frame. Integrating the continuity of object brightness change, if no probable object is detected in the 5 × 5 window of $(x, y)$ in the next frame, $a$ can be reduced by a factor, that is, $a\rho$ ($\rho < 1$). $a\rho$ will be stored as $a_{uv}$ if $(u, v)$ is detected as a probable object coordinate in the 5 × 5 window of $(x, y)$ in the next frame. So the variable threshold based on local image properties and the detection of the previous frame is given as follows:

\[
T_{xy} = a_{xy}\sigma_{xy} + b m_{xy}. \tag{24}
\]

The difference between (23) and (24) is the variable coefficient $a_{xy}$ based on the detection of the previous frame. $a_{xy}$ is initially set a little greater than 1 and reset to the initial value when the gray level at $(x, y)$ becomes large again (greater than 150).

The image binary segmentation algorithm is given as follows:

\[
g(x, y) = \begin{cases} 1, & f(x, y) > T_{xy}, \\ 0, & f(x, y) \le T_{xy}, \end{cases} \tag{25}
\]

where $f(x, y)$ is the gray level of the original image at $(x, y)$ and $g(x, y)$ is the gray level of the segmented image at $(x, y)$.
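The segmentation of (23)–(25) can be sketched as follows; `a_map` holds the per-pixel variable coefficient $a_{xy}$ of (24), and the 7 × 7 neighborhood matches the experiments in Section 4 (the names are ours):

```python
import numpy as np

def segment_frame(img, a_map, b=1.0, win=7):
    """Variable thresholding, Eqs. (23)-(25): T = a*sigma + b*mean per window."""
    r = win // 2
    padded = np.pad(img.astype(float), r, mode='edge')
    h, w = img.shape
    seg = np.zeros((h, w), dtype=np.uint8)
    for i in range(h):
        for j in range(w):
            nb = padded[i:i + win, j:j + win]           # local neighborhood
            T = a_map[i, j] * nb.std() + b * nb.mean()  # Eq. (24)
            seg[i, j] = 1 if img[i, j] > T else 0       # Eq. (25)
    return seg
```

The update rule for `a_map` (reduce by $\rho < 1$ when no probable object appears in the 5 × 5 window, reset when the gray level exceeds 150) would be applied between frames.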

3.4. Coordinate Extraction. In an ideal optical system, a point object in the CCD focal plane occupies one pixel, but under practical imaging conditions, circular aperture diffraction causes the object in the focal plane to diffuse over multiple pixels. So the coordinate of the object in the pixel frame is determined by the position of the center of the gray level. The simple gray-weighted centroid algorithm is used to calculate the coordinates, with a positioning accuracy of 0.1~0.3 pixels [26], as follows:

\[
(x_S, y_S) = \frac{\sum_{(x,y)\in S} f(x, y)\,(x, y)}{\sum_{(x,y)\in S} f(x, y)}, \tag{26}
\]

where $S$ is the object area after segmentation, $f(x, y)$ is the gray level of the original image at $(x, y)$, and $(x_S, y_S)$ is the coordinate of $S$.

The coordinates will be used in single-frame binarysegmentation.
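Eq. (26) is simply a gray-weighted average of the pixel coordinates in each segmented area. A minimal sketch (the helper name is ours):

```python
import numpy as np

def gray_weighted_centroid(img, region):
    """Eq. (26): region is a list of (x, y) pixel coordinates of one object area S."""
    coords = np.array(region, dtype=float)                      # shape (N, 2)
    weights = np.array([img[y, x] for x, y in region], float)   # gray levels f(x, y)
    return tuple(coords.T @ weights / weights.sum())            # (x_S, y_S)
```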

3.5. Trajectory Association. After processing a frame image, some probable object coordinates are obtained. Associate them with existing trajectories or generate new trajectories, using the nearest-neighborhood filter. The radius of the neighborhood is determined by the object characteristics; it needs to be greater than the moving distance of the object image in one frame and to tolerate losing the object for several frames.

The Kalman filter is an efficient recursive filter that estimates the internal state of a linear dynamic system from a series of noisy measurements. The Kalman filter is used to predict the probable object coordinate, which determines where to apply the nearest-neighborhood filter in the current frame.
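The association step can be sketched as a greedy nearest-neighbor match between predicted trajectory positions and the coordinates extracted from the current frame (a simplified sketch; the radius default and names are ours):

```python
import numpy as np

def associate(predictions, detections, radius=5.0):
    """Match each predicted trajectory position to the closest detection
    within `radius` pixels; unmatched detections can seed new trajectories."""
    matches = {}
    used = set()
    for ti, p in enumerate(predictions):
        if not detections:
            break
        dists = [np.hypot(p[0] - q[0], p[1] - q[1]) for q in detections]
        j = int(np.argmin(dists))
        if dists[j] <= radius and j not in used:
            matches[ti] = j
            used.add(j)
    return matches
```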

Assuming that the state vector of the object at the $k$th frame is $\mathbf{x}_k = (x_k, y_k, v_{xk}, v_{yk})^T$, that is, its coordinate and velocity in the pixel frame, the system equations are

\[
\mathbf{x}_k = F\mathbf{x}_{k-1} + B\mathbf{u}_k + \mathbf{w}
= \begin{bmatrix} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \mathbf{x}_{k-1}
+ \begin{bmatrix} 0 & -\dfrac{\Delta t}{d} \\ \dfrac{\Delta t}{d} & 0 \\ 0 & -\dfrac{1}{d} \\ \dfrac{1}{d} & 0 \end{bmatrix}
\begin{bmatrix} \omega_{xk} \\ \omega_{yk} \end{bmatrix} + \mathbf{w},
\]
\[
\mathbf{z}_k = H\mathbf{x}_k + \mathbf{v} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix} \mathbf{x}_k + \mathbf{v}, \tag{27}
\]

where $F$ is the state transition matrix, $B$ is the control-input matrix, $\mathbf{u}_k = (\omega_{xk}, \omega_{yk})^T$ is the control vector, $H$ is the measurement matrix,

Figure 4: Single-frame image (a) and its gray level histogram (b).



$\mathbf{z}_k$ is the measurement vector, that is, the coordinate associated with $(x_{k-1}, y_{k-1})$ obtained by the nearest-neighborhood filter at the $k$th frame, $\mathbf{w}$ is the process noise, assumed to be zero-mean Gaussian white noise with covariance $Q$, denoted as $\mathbf{w} \sim N(0, Q)$, and $\mathbf{v}$ is the measurement noise, assumed to be zero-mean Gaussian white noise with covariance $R$, denoted as $\mathbf{v} \sim N(0, R)$.

In fact, $B\mathbf{u}_k$ uses the result of (21) and realizes the compensation of attitude motion when the attitude is being adjusted. In the case of attitude stabilization, $\mathbf{u}_k = 0$. Equation (27) unifies these two situations through the control vector.

The details of how the Kalman filter predicts and updates are omitted here, as they are standard.
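For completeness, one predict/update cycle with the matrices of (27) can be sketched as follows, using $Q = 10^{-4}I_4$ and $R = 0.2I_2$ as in Section 4 (the function name is ours; the defaults for $\Delta t$ and $d$ reflect the 25 fps video and 8.33 μm pixel pitch reported later):

```python
import numpy as np

def kalman_step(x, P, z, u, dt=0.04, d=8.33e-6, Q=None, R=None):
    """One predict/update cycle for the constant-velocity model of Eq. (27).
    x: state (x, y, vx, vy); z: associated measurement; u: gyro rates (wx, wy)."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    B = np.array([[0, -dt / d],
                  [dt / d, 0],
                  [0, -1 / d],
                  [1 / d, 0]], dtype=float)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
    Q = 1e-4 * np.eye(4) if Q is None else Q
    R = 0.2 * np.eye(2) if R is None else R
    # Predict: propagate state and covariance; B @ u compensates attitude motion.
    x_pred = F @ x + B @ u
    P_pred = F @ P @ F.T + Q
    # Update: standard Kalman gain and correction with the associated measurement.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new
```

During attitude stabilization `u = 0` and the model is a plain constant-velocity filter, matching the unification described above.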

When a trajectory has 20 points, we need to judge whether the trajectory is an object. The velocity in the state vector is used: if the mean velocity of these points is greater than a given threshold, the trajectory is an object; otherwise, it is not. That is, we judge whether the point is moving. The threshold here mainly removes image motion caused by the instability of the satellite platform and other noise; it is usually set to 2.

When the satellite attitude is being adjusted, no new trajectories are generated, in order to reduce false objects. Moreover, in practice, adjusting the attitude means that something in the current frame is of interest and there is no need to detect new objects.

Thus, objects are detected based on the continuity of motion over multiple frames and the trajectories are updated. The computational complexity is greatly reduced because no image matching of stars between frames is needed. And the compensation of attitude motion allows objects to be detected well even during attitude adjustment.

4. Experimental Results

NODAMI is verified using a video image from TT-2. TT-2 has 4 space video sensors, and the video image used in this paper is from the high-resolution camera, whose focal length is 1000 mm, FOV is 2°30′, and $d_x = d_y = 8.33\,\mu\mathrm{m}$. The video is 25 frames per second, and the resolution is 960 × 576.

Firstly, 20 continuous frame images taken during attitude stabilization are processed. In (24), $a_{xy}$ is initially set to 1.1 and $b$ is set to 1. The reduction factor $\rho$ is set to 0.8. The neighborhood in image segmentation is set to 7 × 7. For example, 10 object areas are obtained after segmenting the 20th frame

(as shown in Figure 5). These areas include objects, stars, and noise, which cannot be distinguished in a single frame.

In (27), $Q$ is set to $10^{-4}I_4$ based on empirical data and $R$ is set to $0.2I_2$, where $I_n$ is the $n \times n$ identity matrix. The radius of the neighborhood in trajectory association is set to 5. NODAMI detects one object in the 20 frame images, as shown in Figure 6. Because the full 960 × 576 image is too large, Figure 6 shows the local region of interest, and a white box is used to identify the object. Figure 6 shows that the brightness of the object varies considerably. NODAMI detects the object well.

Besides, for these 20 frames, if $a_{xy}$ in (24) is set to a constant, the object is lost in 4 frames, whereas (24) with the variable coefficient detects the object in all frames.

A longer video of 40 s taken during attitude stabilization, containing 1000 frame images, is also processed. NODAMI detects one object in the 1000 frame images, as shown in Figure 7, derived by overlaying the 1000 frames. Because the full 960 × 576 image is too large, Figure 7(b) shows the local region of interest, and a white box is used to identify the object every 50 frames.

Figure 7 shows that the brightness of the object varies considerably, and the naked eye tends to miss the object in quite a few frames. In fact, if $a_{xy}$ in (24) is set to a constant, the object is detected in only 421 frames, whereas (24) with the variable coefficient detects it in 947 frames; the probability of detection improves from 42.1% to 94.7%.

Besides, even (24) with the variable coefficient could not detect the object in all frames. But NODAMI can associate the object with the existing trajectory correctly after losing the object for several frames.

Then, we process a 30 s video during which the satellite attitude was adjusted. Overlaying the 750 frame images gives Figure 8. The trajectory of the object is in the white box of Figure 8. The brightness of the object also varies considerably. The two trajectories on the right are those of stars and are caused by the satellite attitude adjustment. The object image was originally moving towards the lower left, but it appears to move towards the upper left due to the attitude adjustment; meanwhile, the star images moved upwards, producing the two right trajectories. NODAMI detected the space object trajectory

Figure 5: Result of segmenting the 20th frame image.

Figure 6: Detection results of 20 frame images.

International Journal of Aerospace Engineering


well and discarded the trajectories of the stars. As before, the number of points detected in the trajectory is less than 750, because the brightness of the object varies too much and the object is lost in several frames. But NODAMI correctly reassociates the object with the existing trajectory after losing it for several frames.

Using the known satellite attitude motion information and (21) to compensate for the attitude adjustment gives the trajectory of the object shown in Figure 9, from which it can be seen that the trajectory is recovered well and the effect of the attitude adjustment on the image is removed.
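Since the effect of attitude motion on the image decomposes into a translation and a rotation (as stated in the abstract), the compensation amounts to applying the inverse transform to each detected position. Equation (21) itself is not reproduced here, so the following is only a geometric sketch: `tx`, `ty`, and `theta` stand for the image-plane translation and rotation induced by the known attitude motion, and `(cx, cy)` is the assumed rotation center (the image center).

```python
import math

def compensate(u, v, tx, ty, theta, cx, cy):
    """Map an observed pixel (u, v) back to its pre-adjustment position by
    inverting an image-plane translation (tx, ty) plus a rotation by theta
    about the image center (cx, cy)."""
    # Undo the translation, shift to center-relative coordinates,
    # then rotate by -theta and shift back.
    x, y = u - tx - cx, v - ty - cy
    c, s = math.cos(-theta), math.sin(-theta)
    return (c * x - s * y + cx, s * x + c * y + cy)
```

Applying this to every point of the detected trajectory is what straightens the apparent upper-left motion back into the object's true lower-left track, as seen in Figure 9.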

Experimental results show that the algorithm has good performance for moving space point objects, even with changing brightness.

5. Conclusion

In this paper, a new space object detection algorithm for video satellite images using motion information is proposed. The effect of satellite attitude motion on the image is analyzed quantitatively and decomposed into a translation and a rotation, and an attitude motion compensation formula is derived. Considering the continuity of object motion and brightness change, variable thresholding based on local image properties and the detection of the previous frame is used to segment each single-frame image. The algorithm then uses the correlation of object motion across multiple frames and the satellite attitude motion information to detect the object. Computational complexity is greatly reduced because no image matching of stars across frames is required, and compensation of attitude motion allows objects to be detected well even during attitude adjustment. Applying the algorithm to the video image from Tiantuo-2 detects the object and gives its trajectory, which shows that the algorithm has good performance for moving space point objects, even with changing brightness.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors would like to express their thanks for the support from the Major Innovation Project of NUDT (no. 7-03), the Young Talents Training Project of NUDT, and the High Resolution Major Project (GFZX04010801).

References

[1] M. Steckling, U. Renner, and H. P. Röser, “DLR-TUBSAT, qualification of high precision attitude control in orbit,” Acta Astronautica, vol. 39, no. 9–12, pp. 951–960, 1999.


Figure 7: Detection results of 1000 frame images.

Figure 8: Overlay of 750 frame images.


Figure 9: Trajectory of the object.



[2] M. Gruntman, “Passive optical detection of submillimeter and millimeter size space debris in low Earth orbit,” Acta Astronautica, vol. 105, no. 1, pp. 156–170, 2014.

[3] U. Krutz, H. Jahn, E. Kuhrt, S. Mottola, and P. Spietz, “Radiometric considerations for the detection of space debris with an optical sensor in LEO as a secondary goal of the AsteroidFinder mission,” Acta Astronautica, vol. 69, no. 5, pp. 297–306, 2011.

[4] C. Phipps, “A laser-optical system to re-enter or lower low Earth orbit space debris,” Acta Astronautica, vol. 93, pp. 418–429, 2014.

[5] R. Sun, J. Zhan, C. Zhao, and X. Zhang, “Algorithms and applications for detecting faint space debris in GEO,” Acta Astronautica, vol. 110, pp. 9–17, 2015.

[6] Y. Wang, S. Mitrovic Minic, R. Leitch, and A. P. Punnen, “A GRASP for next generation sapphire image acquisition scheduling,” International Journal of Aerospace Engineering, vol. 2016, Article ID 3518537, 7 pages, 2016.

[7] B. K. P. Horn and B. G. Schunck, “Determining optical flow,” Artificial Intelligence, vol. 17, no. 1–3, pp. 185–203, 1981.

[8] X. Jing and L. P. Chau, “An efficient three-step search algorithm for block motion estimation,” IEEE Transactions on Multimedia, vol. 6, no. 3, pp. 435–438, 2004.

[9] J. Coughlan, A. Yuille, C. English, and D. Snow, “Efficient deformable template detection and localization without user initialization,” Computer Vision and Image Understanding, vol. 78, no. 3, pp. 303–319, 2000.

[10] Z. Lin, L. S. Davis, D. Doermann, and D. DeMenthon, “Hierarchical part-template matching for human detection and segmentation,” in ICCV 2007 IEEE 11th International Conference on Computer Vision, pp. 1–8, IEEE, Rio de Janeiro, Brazil, 2007.

[11] Y. Zhu, W. Hu, J. Zhou, F. Duan, J. Sun, and L. Jiang, “A new starry images matching method in dim and small space target detection,” in IEEE Computer Society Fifth International Conference on Image and Graphics, pp. 447–450, Xi'an, Shanxi, China, 2009.

[12] S. D. Blostein and T. S. Huang, “Detection of small moving objects in image sequences using multistage hypothesis testing,” in International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 1068–1071, New York, NY, USA, 1988.

[13] L. A. Johnston and V. Krishnamurthy, “Performance analysis of a dynamic programming track before detect algorithm,” IEEE Transactions on Aerospace and Electronic Systems, vol. 38, no. 1, pp. 228–242, 2002.

[14] S. M. Tonissen and R. J. Evans, “Performance of dynamic programming techniques for track-before-detect,” IEEE Transactions on Aerospace and Electronic Systems, vol. 32, no. 4, pp. 1440–1451, 1996.

[15] Y. Y. Zhang and C. X. Wang, “Space small targets detection based on improved DPA,” Acta Electronica Sinica, vol. 38, no. 3, pp. 556–560, 2010.

[16] K. Punithakumar, T. Kirubarajan, and A. Sinha, “A sequential Monte Carlo probability hypothesis density algorithm for multitarget track-before-detect,” in Signal and Data Processing of Small Targets 2005 (Proceedings of SPIE), vol. 5913, pp. 587–594, San Diego, CA, USA, 2005.

[17] T. Huang, Y. Xiong, Z. Li, Y. Zhou, and Y. Li, “Space target tracking by variance detection,” Journal of Computers, vol. 9, no. 9, 2014.

[18] C. H. Zhang, B. Chen, and X. D. Zhou, “Small target trace acquisition algorithm for sequence star images with moving background,” Optics and Precision Engineering, vol. 16, no. 3, pp. 524–530, 2008.

[19] J. Cheng, W. Zhang, M. Y. Cong, and H. B. Pan, “Research of detecting algorithm for space object based on star map recognition,” Optical Technique, vol. 36, no. 3, pp. 439–444, 2010.

[20] Y. Han and F. Liu, “Small targets detection algorithm based on triangle match space,” Infrared & Laser Engineering, vol. 9, pp. 3134–3140, 2014.

[21] J. Zhang, X. Xi, and X. Zhou, “Space target detection in star image based on motion information,” in ISPDI 2013 - Fifth International Symposium on Photoelectronic Detection and Imaging, p. 891307, Beijing, China, 2013.

[22] R. Yao, Y. N. Zhang, T. Yang, and F. Duan, “Detection of small space target based on iterative distance classification and trajectory association,” Optics and Precision Engineering, vol. 20, no. 1, pp. 179–189, 2012.

[23] Y. Xu and J. Zhang, “Real-time detection algorithm for small space targets based on max-median filter,” Journal of Information and Computational Science, vol. 11, no. 4, pp. 1047–1055, 2014.

[24] Z. Wang and Y. Zhang, “Algorithm for CCD star image rapid locating,” Chinese Journal of Space Science, vol. 26, no. 3, pp. 209–214, 2006.

[25] C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” in International Conference on Computer Vision, pp. 839–846, Bombay, India, 1998.

[26] C. C. Liebe, “Accuracy performance of star trackers - a tutorial,” IEEE Transactions on Aerospace and Electronic Systems, vol. 38, no. 2, pp. 587–599, 2002.
