


2966 IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, VOL. 69, NO. 6, JUNE 2020

Hand-Eye Calibration: 4-D Procrustes Analysis Approach

Jin Wu, Member, IEEE, Yuxiang Sun, Miaomiao Wang, Student Member, IEEE, and Ming Liu, Senior Member, IEEE

Abstract— We give a universal analytical solution to the hand-eye calibration problem AX = XB with known matrices A and B and unknown variable X, all in the special Euclidean group SE(3). The developed method relies on 4-D Procrustes analysis. A unit-octonion representation is proposed for the first time to solve such a Procrustes problem, through which an optimal closed-form eigendecomposition solution is derived. By virtue of this solution, the uncertainty description of X, previously a sophisticated problem, can be obtained in a simpler manner. The proposed approach is then verified using simulations and real-world experiments on an industrial robotic arm. The results indicate that it achieves better accuracy, better uncertainty description, and much lower computation time.

Index Terms— Hand-eye calibration, homogeneous transformation, least squares, octonions, quaternions.

I. INTRODUCTION

THE main hand-eye calibration problem studied in this paper aims to compute the unknown relative homogeneous transformation X between a robotic gripper and an attached camera, whose poses are denoted as A and B, respectively, such that AX = XB. Hand-eye calibration can be solved via general solutions to the AX = XB problem or by minimizing direct models established using reprojection errors [1]. However, the hand-eye problem AX = XB is not restricted to manipulator–camera calibration. Rather, it has been applied to multiple sensor-calibration problems, including magnetic/inertial ones [2], camera/magnetic ones [3], and other general models [4]. That is to say, the solution of AX = XB is more general and has broader applications than methods based on reprojection-error minimization. The early study of the hand-eye calibration problem dates back to the 1980s, when researchers first tried

Manuscript received April 24, 2019; revised June 18, 2019; accepted July 11, 2019. Date of publication August 6, 2019; date of current version May 12, 2020. This work was supported by the National Natural Science Foundation of China (Grant No. U1713211), the Research Grant Council of Hong Kong SAR Government, China, under Project No. 11210017 and No. 21202816, and the Shenzhen Science, Technology and Innovation Commission (SZSTI) under Grant JCYJ20160428154842603, awarded to Prof. Ming Liu. The Associate Editor coordinating the review process was Peter Liu. (Corresponding author: Ming Liu.)

J. Wu, Y. Sun, and M. Liu are with the Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong (e-mail: [email protected]; [email protected]; [email protected]).

M. Wang is with the Department of Electronic and Computer Engineering, Western University, London, ON N6A 3K7, Canada (e-mail: [email protected]).

Color versions of one or more of the figures in this article are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TIM.2019.2930710

to determine the gripper–camera transformation for accurate robotic perception and reconstruction. Over the past 30 years, a large variety of algorithms have been proposed for the hand-eye problem AX = XB. Generally speaking, they can be categorized into two groups. The first group consists of algorithms that calculate the rotation in a first step and then compute the translation in a second step, while algorithms in the second group compute the rotation and translation simultaneously. Quite a few methods belong to the first group, which we call separated ones, including the representative rotation-logarithm-based methods of Tsai and Lenz [5], Shiu and Ahmad [6], Park and Martin [7], and Horaud and Dornaika [8], and the quaternion-based one of Chou and Kamel [9]. The simultaneous ones appear in the second group, with representatives as follows.

1) Analytical Solutions: Quaternion-based method by Lu and Chou [10], dual-quaternion-based one by Daniilidis [11], Sylvester-equation-based one by Andreff et al. [12], and dual-tensor-based one by Condurache and Burlacu [13].

2) Numerical Solutions: Gradient/Newton methods by Gwak et al. [14], linear-matrix-inequality (LMI)-based one by Heller et al. [15], alternative-linear-programming-based one by Zhao [16], and pseudo-inverse-based ones by Zhang [3] and Zhang et al. [17].

Each kind of algorithm has its own pros and cons. The separated ones cannot produce good enough results in cases where the translation measurements are more accurate than the rotation measurements. The simultaneous ones can achieve better optimization performance but may consume a large amount of time when using numerical iterations. Some algorithms also suffer from their own ill-posed conditions in the presence of extreme data sets [18]. What is more, the uncertainty description of X in a hand-eye problem AX = XB, an important but difficult problem, troubled researchers until the first public general iterative solution by Nguyen and Pham [19]. An intuitive overview of these algorithms, in order of publication time, can be found in Table I.

Until now, hand-eye calibration has accelerated the development of the robotics community owing to its various uses in sensor calibration and motion sensing [20], [21]. Although it has been quite a long time since the first proposal of hand-eye calibration, the studies around it are still

0018-9456 © 2019 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See https://www.ieee.org/publications/rights/index.html for more information.


WU et al.: HAND-EYE CALIBRATION: 4-D PROCRUSTES ANALYSIS APPROACH 2967

TABLE I

COMPARISONS BETWEEN RELATED METHODS

very popular. There remains the problem that no algorithm can simultaneously estimate X in AX = XB while preserving highly accurate uncertainty descriptions and consuming extremely low computation time. These difficulties are rather practical, since in the hand-eye problem AX = XB, the rotation and translation parts are tightly coupled with high nonlinearity, which motivated Nguyen and Pham to derive the first-order approximation of the error-covariance propagation. It is also this nonlinearity that makes numerical iterations much slower.

To overcome the current algorithmic shortcomings, in this paper, we study a new 4-D Procrustes analysis tool for the representation of homogeneous transformations. Understanding manifolds has become a popular way of analyzing the interior structure of various data flows [22]. The geometric descriptions of these manifolds have always been vital and are usually addressed with Procrustes analysis, which extracts the rigid, affine, or nonrigid geometric mappings between data sets [23], [24]. Early studies on Procrustes analysis have been conducted since the 1930s [25]–[27], and later generalized solutions have been applied to spacecraft attitude determination [28], [29], image registration [30], [31], laser scan matching using iterative closest points (ICP) [32], [33], and so on. Motivated by these technological advances, this paper makes the following contributions.

1) We show some analytical results on 4-D Procrustes analysis in Section III and apply them to the solution of the hand-eye calibration problem detailed in Section II.

2) Since all variables are directly propagated into the final results, the solving process is quite simple and computationally efficient.

3) As the proposed solution is in the form of the spectral decomposition of a 4 × 4 matrix, closed-form probabilistic information is given precisely and flexibly for the first time using some recent results in automatic control.

Finally, via simulations and real-world robotic experiments in Section IV, the proposed method is shown to offer better accuracy, computational load, and uncertainty description. Detailed comparisons are also presented to reveal the sensitivity of the proposed method to input noise and to different parameter values.

II. PROBLEM FORMULATION

We start this section by defining some important notations in this paper, mostly inherited from [34]. The n-dimensional real Euclidean space is represented by R^n, which further generates the matrix space R^{m×n} containing all real matrices with m rows and n columns. All n-dimensional rotation matrices belong to the special orthogonal group SO(n) := {R ∈ R^{n×n} | R^T R = I, det(R) = 1}, where I denotes the identity matrix of proper size. The special Euclidean group is composed of a rotation matrix R and a translation vector t, such that

SE(n) := { T = [R, t; 0, 1] | R ∈ SO(n), t ∈ R^n }   (1)

with 0 denoting the zero matrix of adequate dimensions. The Euclidean norm of a given square matrix X is defined as ‖X‖ = (tr(X^T X))^{1/2}, where tr denotes the matrix trace. The vectorization of an arbitrary matrix X is denoted vec(X), and ⊗ represents the Kronecker product between two matrices. For a given arbitrary matrix X, X† is its Moore–Penrose generalized inverse. Any rotation R on SO(3) has a corresponding logarithm given by

log(R) = [φ / (2 sin φ)] (R − R^T)   (2)

in which 1 + 2 cos φ = tr(R). Given a 3-D vector x = (x1, x2, x3)^T, its associated skew-symmetric matrix is

[x]× = [0, −x3, x2; x3, 0, −x1; −x2, x1, 0]   (3)

satisfying x × y = [x]× y = −[y]× x, where y is also an arbitrary 3-D vector. The inverse map from the skew-symmetric matrix to the 3-D vector is denoted as ([x]×)^∧ = x.
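The maps (2) and (3) are easy to mirror in code; a minimal sketch (ours, not from the paper; the function names are hypothetical):

```python
import numpy as np

def skew(x):
    # [x]x of Eq. (3): skew(x) @ y equals np.cross(x, y).
    return np.array([[0.0, -x[2], x[1]],
                     [x[2], 0.0, -x[0]],
                     [-x[1], x[0], 0.0]])

def so3_log(R):
    # Rotation logarithm of Eq. (2), with 1 + 2*cos(phi) = tr(R);
    # valid away from phi = 0 and phi = pi.
    phi = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    return phi / (2.0 * np.sin(phi)) * (R - R.T)

# A rotation by 0.7 rad about the z-axis: its logarithm is skew([0, 0, 0.7]).
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
```

The clipping guards arccos against rounding just outside [−1, 1]; the singular cases φ = 0 and φ = π would need separate handling in production code.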

Now, let us describe the main problem in this paper. Given two measurement sets

A = {A_i | A_i ∈ SE(3), i = 1, 2, . . . , N}
B = {B_i | B_i ∈ SE(3), i = 1, 2, . . . , N}   (4)

consider the hand-eye calibration least squares

arg min_{X ∈ SE(3)} J = Σ_{i=1}^{N} ‖A_i X − X B_i‖²   (5)


Fig. 1. Relationship between various homogeneous transformations for gripper–camera hand-eye calibration.

where A_i and B_i are formed from the poses of two successive measurements (also see [11, Fig. 1])

A_i = T_{A_{i+1}} T_{A_i}^{−1}
B_i = T_{B_{i+1}}^{−1} T_{B_i}   (6)

with T_{A_i} being the ith camera pose with respect to the standard objects in a world frame, and

T_{B_i} = T_{B_i,3} T_{B_i,2} T_{B_i,1}   (7)

being the gripper poses with respect to the robotic base, in which T_{B_i,1}, T_{B_i,2}, and T_{B_i,3} are the transformations between the joints of the robotic arm. The relationship between these homogeneous transformations can be found in Fig. 1. The task in the remainder of this paper is to give a closed-form solution of X considering rotation and translation simultaneously and, moreover, to derive the uncertainty description of X.
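A small sketch (ours; the pose lists are hypothetical data) of how (6) turns absolute pose streams into the relative motions A_i and B_i:

```python
import numpy as np

def relative_motions(T_A, T_B):
    # Eq. (6): A_i = T_A[i+1] @ inv(T_A[i]),  B_i = inv(T_B[i+1]) @ T_B[i],
    # for lists of 4x4 homogeneous poses sampled at the same instants.
    A = [T_A[i + 1] @ np.linalg.inv(T_A[i]) for i in range(len(T_A) - 1)]
    B = [np.linalg.inv(T_B[i + 1]) @ T_B[i] for i in range(len(T_B) - 1)]
    return A, B

# Demo with two trivial poses: a pure translation between samples.
T0 = np.eye(4)
T1 = np.eye(4)
T1[:3, 3] = [1.0, 2.0, 3.0]
A_demo, B_demo = relative_motions([T0, T1], [T0, T1])
```

N absolute poses yield N − 1 relative motions, which is why the i and i + 1 indices appear in (6).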

Let us write A and B as

A = [R_A, t_A; 0, 1],  B = [R_B, t_B; 0, 1].   (8)

Then, one easily obtains

R_A R_X = R_X R_B
R_A t_X + t_A = R_X t_B + t_X.   (9)

The method of Park and Martin [7] first computes R_X from the first equation of (9) and then solves t_X by inserting R_X into the second sub-equation. Park's step for computing R_X is tantamount to the following optimization:

arg min_{R_X ∈ SO(3)} Σ_{i=1}^{N} ‖R_X a_i − b_i‖²   (10)

with

a_i = [log(R_{A_i})]^∧
b_i = [log(R_{B_i})]^∧.   (11)

Note that (10) is in fact a rigid 3-D registration problem, which can be solved instantly with singular value decomposition (SVD) or eigendecomposition [29], [32]. However, the solution of Park and Martin does not take the translation into account for R_X, while the accuracy of R_X is actually affected by t_X. Therefore, some other methods try to compute R_X and t_X simultaneously [11], [12]. While these methods fix the remaining problem of Park and Martin, they may not achieve the global minimum, as the optimization

arg min_{R_X ∈ SO(3), t_X ∈ R³} Σ_{i=1}^{N} ( ‖R_X a_i − b_i‖² + ‖R_X t_{B_i} + t_X − R_{A_i} t_X − t_{A_i}‖² )   (12)

is not always convex. Hence, iterative numerical methods have been proposed to achieve globally optimal estimates of R_X and t_X, including the solvers in [3], [14], [16], and [17]. In Section III, we show a new analytical perspective on the hand-eye calibration problem AX = XB using the proposed 4-D Procrustes analysis.

III. 4-D PROCRUSTES ANALYSIS

All the results in this section are proposed for the first time for solving specific 4-D Procrustes analysis problems. The developed approach is therefore named the 4DPA method for simplicity in the following.

A. Some New Analytical Results

Problem 1: Let {U} = {u_i ∈ R⁴} and {V} = {v_i ∈ R⁴}, where i = 1, 2, . . . , N, N ≥ 3, be two point sets in which the correspondences are well matched, such that u_i corresponds exactly to v_i. Find the 4-D rotation R and translation vector t, such that

arg min_{R ∈ SO(4), t ∈ R⁴} Σ_{i=1}^{N} ‖R u_i + t − v_i‖².   (13)


Solution: Problem 1 is actually a 4-D registration problem that can be solved via the SVD

R = U diag[1, 1, 1, det(UV^T)] V^T
t = v̄ − R ū   (14)

with

U S V^T = H
H = Σ_{i=1}^{N} (u_i − ū)(v_i − v̄)^T
ū = (1/N) Σ_{i=1}^{N} u_i,  v̄ = (1/N) Σ_{i=1}^{N} v_i   (15)

where S = diag(s1, s2, s3, s4) is the diagonal matrix containing all singular values of H. However, the SVD cannot reflect the interior geometry of SO(4), and such geometric information of special orthogonal groups is very helpful for further proofs [35], [36]. The 4-D rotation can be characterized by two unit quaternions q_L = (a, b, c, d)^T and q_R = (p, q, r, s)^T via [37]

R = R_L(q_L) R_R(q_R)   (16)

with

R_L(q_L) = [ a  −b  −c  −d
             b   a  −d   c
             c   d   a  −b
             d  −c   b   a ]

R_R(q_R) = [ p  −q  −r  −s
             q   p   s  −r
             r  −s   p   q
             s   r  −q   p ]   (17)

being the left and right matrices. Interestingly, R_L(q_L) and R_R(q_R) are actually the matrix expressions of quaternion products from the left- and right-hand sides, respectively. Rising from 3-D spaces, the 4-D rotation is much more sophisticated because the 4-D cross product is not as unique as that in the 3-D case [38]. Therefore, methods previously relying on 3-D skew-symmetric matrices are no longer extendable to 4-D registration. The parameterization of R ∈ SO(4) by (16) can also be unified with the unit octonion given by

σ = (1/√2) (q_L^T, q_R^T)^T ∈ R⁸.   (18)
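The algebra of (16)–(18) is easy to verify numerically; a minimal sketch (ours; function names hypothetical): the left/right matrices of (17) commute, their product lies on SO(4), and the stacked octonion of (18) has unit norm.

```python
import numpy as np

def R_L(qL):
    # Left quaternion-product matrix of Eq. (17).
    a, b, c, d = qL
    return np.array([[a, -b, -c, -d],
                     [b,  a, -d,  c],
                     [c,  d,  a, -b],
                     [d, -c,  b,  a]])

def R_R(qR):
    # Right quaternion-product matrix of Eq. (17).
    p, q, r, s = qR
    return np.array([[p, -q, -r, -s],
                     [q,  p,  s, -r],
                     [r, -s,  p,  q],
                     [s,  r, -q,  p]])

# Any pair of unit quaternions gives R = R_L(qL) @ R_R(qR) on SO(4) (Eq. (16));
# sigma of Eq. (18) is the associated unit octonion.
qL = np.array([1.0, 2.0, 3.0, 4.0]); qL /= np.linalg.norm(qL)
qR = np.array([0.5, -1.0, 0.25, 2.0]); qR /= np.linalg.norm(qR)
R4 = R_L(qL) @ R_R(qR)
sigma = np.concatenate([qL, qR]) / np.sqrt(2.0)
```

The commutation of the two factors reflects the fact that left and right quaternion multiplications commute as linear operators.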

Our task here is to derive the closed-form solution of such a σ and therefore compute R and t. Using the analytical form in (16), we can rewrite the rotation matrix R as R = (c1, c2, c3, c4), with c1, c2, c3, and c4 standing for the four columns of R, respectively. For each column, the algebraic factorization can be performed via

c1 = P1(σ)σ,  c2 = P2(σ)σ,  c3 = P3(σ)σ,  c4 = P4(σ)σ   (20)

where P1(σ), P2(σ), P3(σ), P4(σ) ∈ R^{4×8} are given in Appendix A. These matrices are subject to the following equalities:

P1(σ)P1^T(σ) = P2(σ)P2^T(σ) = P3(σ)P3^T(σ) = P4(σ)P4^T(σ)
= (1/2)(a² + b² + c² + d² + p² + q² + r² + s²) I = I.   (21)

Then, following the step of [39], one ideally obtains

P1^T(σ)H1^T + P2^T(σ)H2^T + P3^T(σ)H3^T + P4^T(σ)H4^T = σ   (22)

where H1, H2, H3, and H4 are the four rows of the matrix H, such that

H = (H1^T, H2^T, H3^T, H4^T)^T.   (23)

Evaluating the left part of (22), an eigenvalue problem is derived [39]

W σ = λ_{W,max} σ.   (24)

The optimal eigenvector σ is associated with the maximum eigenvalue λ_{W,max} of W, with W being the 8 × 8 matrix

W = [0, K; K^T, 0]   (25)

where K is shown in (19) at the bottom of this page. This indicates that λ_{W,max} satisfies

det(λ_{W,max} I − W) = det(λ_{W,max} I) det(λ_{W,max} I − (1/λ_{W,max}) K^T K) = det(λ²_{W,max} I − K^T K)   (26)

where the details are shown in Appendix A. In other words, λ²_{W,max} is an eigenvalue of the 4 × 4 matrix K^T K. As the symbolic solutions to generalized quartic equations have been detailed in [40], the computation of the eigenvalues of W is very simple. Once σ is computed, it gives R and thus produces t according to (14).

K = [ H11+H22+H33+H44    H12−H21−H34+H43    H13+H24−H31−H42    H14−H23+H32−H41
      H12−H21+H34−H43    H33−H22−H11+H44    H14−H23−H32+H41    −H13−H24−H31−H42
      H13−H24−H31+H42    −H14−H23−H32−H41   H22−H11−H33+H44    H12+H21−H34−H43
      H14+H23−H32−H41    H13−H24+H31−H42    −H12−H21−H34−H43   H22−H11+H33−H44 ]   (19)
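Both solution routes for Problem 1 can be sketched numerically (our code; function names hypothetical): the SVD of (14)–(15), and the eigendecomposition of (24)–(25). Here K is generated mechanically from the bilinear identity tr(R_L(e_a) R_R(e_b) H) = K_ab rather than transcribed from (19). Note that with H defined as in (15), the trace-maximizing rotation comes out as V diag(1, 1, 1, det(VU^T)) U^T; the U/V ordering in (14) depends on the convention adopted for H.

```python
import numpy as np

def R_L(qL):
    a, b, c, d = qL
    return np.array([[a, -b, -c, -d], [b, a, -d, c],
                     [c, d, a, -b], [d, -c, b, a]])

def R_R(qR):
    p, q, r, s = qR
    return np.array([[p, -q, -r, -s], [q, p, s, -r],
                     [r, -s, p, q], [s, r, -q, p]])

def procrustes_4d_svd(U_pts, V_pts):
    # Eqs. (14)-(15); U_pts, V_pts are (N, 4) arrays of matched points.
    u_bar, v_bar = U_pts.mean(axis=0), V_pts.mean(axis=0)
    H = (U_pts - u_bar).T @ (V_pts - v_bar)        # H of Eq. (15)
    Um, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, 1.0, np.linalg.det(Vt.T @ Um.T)])
    R = Vt.T @ D @ Um.T                            # maximizes tr(R H)
    return R, v_bar - R @ u_bar, H

def procrustes_4d_eig(H):
    # Eqs. (19), (24)-(25): K_ab = tr(R_L(e_a) R_R(e_b) H), sigma is the
    # eigenvector of W for its largest eigenvalue.
    E = np.eye(4)
    K = np.array([[np.trace(R_L(E[a]) @ R_R(E[b]) @ H) for b in range(4)]
                  for a in range(4)])
    W = np.block([[np.zeros((4, 4)), K], [K.T, np.zeros((4, 4))]])
    _, vecs = np.linalg.eigh(W)          # eigenvalues in ascending order
    sigma = vecs[:, -1]
    qL = sigma[:4] / np.linalg.norm(sigma[:4])
    qR = sigma[4:] / np.linalg.norm(sigma[4:])
    return R_L(qL) @ R_R(qR)

# Demo: both routes recover a known SO(4) rotation from exact matched points.
rng = np.random.RandomState(1)
ca, sa = np.cos(0.4), np.sin(0.4)
cb, sb = np.cos(1.1), np.sin(1.1)
R_true = np.array([[ca, -sa, 0, 0], [sa, ca, 0, 0],
                   [0, 0, cb, -sb], [0, 0, sb, cb]])
t_true = np.array([0.5, -1.0, 2.0, 0.25])
U_pts = rng.randn(10, 4)
V_pts = U_pts @ R_true.T + t_true
R_svd, t_svd, H_demo = procrustes_4d_svd(U_pts, V_pts)
R_eig = procrustes_4d_eig(H_demo)
```

Since R is bilinear in (q_L, q_R), flipping the sign of the recovered eigenvector leaves R unchanged, so no sign disambiguation is needed.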

Page 5: Hand-Eye Calibration: 4-D Procrustes Analysis Approach · tion, least squares, octonions, quaternions. I. INTRODUCTION T HE main hand-eye calibration problem studied in this paper

2970 IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, VOL. 69, NO. 6, JUNE 2020

Sub-Problem 1: Given an improper rotation matrix R that is not strictly on SO(4), find the optimal rotation on SO(4) that orthonormalizes R.

Solution: This is the orthonormalization problem and can be solved by replacing H with R in (15), as indicated in [41]–[43].
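Sub-Problem 1 is the classical nearest-rotation projection; a sketch (ours):

```python
import numpy as np

def project_to_so4(M):
    # Nearest matrix on SO(4) in the Frobenius sense: replace H with M
    # in Eqs. (14)-(15) (Sub-Problem 1).
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, 1.0, np.linalg.det(U @ Vt)])
    return U @ D @ Vt

# Demo: perturb a rotation slightly off SO(4) and re-orthonormalize it.
c, s = np.cos(0.6), np.sin(0.6)
R0 = np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
noisy = R0 + 1e-3 * np.arange(16).reshape(4, 4) / 16.0
R_fix = project_to_so4(noisy)
```

The det(·) term in D guards against the projection landing on a reflection (det = −1) instead of a proper rotation.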

Problem 2: Let {E} = {E_i ∈ SO(4)} and {Z} = {Z_i ∈ SO(4)}, where i = 1, 2, . . . , N, be two matrix sets in which E_i corresponds exactly to Z_i. Find the 4-D rotation R, such that

arg min_{R ∈ SO(4)} Σ_{i=1}^{N} ‖E_i R − R Z_i‖².   (27)

Solution: First, we provide some properties of this problem. Problem 2 is very different from (10) because, for a rotation on SO(4), the logarithm is a 4 × 4 skew-symmetric matrix with six degrees of freedom. Therefore, the previous 3-D registration method cannot be extended to the 4-D case. In the solution to Problem 1, we revealed some identities of the unit octonion for the representation of rotations on SO(4). Note that this is very similar to the previous quaternion decomposition from rotation (QDR) that has been used for solving AR = RB, where A, B, R ∈ SO(3) [44]. We now extend the QDR to the octonion decomposition from rotation (ODR) for the solution of Problem 2.

Like (20), R ∈ SO(4) can also be decomposed by rows, such that

R = (r1^T, r2^T, r3^T, r4^T)^T
r1 = σ^T Q1(σ),  r2 = σ^T Q2(σ),  r3 = σ^T Q3(σ),  r4 = σ^T Q4(σ)   (28)

where Q1(σ), Q2(σ), Q3(σ), and Q4(σ) ∈ R^{8×4} are shown in Appendix A. Invoking this ODR, we can transform E_i R − R Z_i into

E_i R − R Z_i = (M_{i,1}σ, M_{i,2}σ, M_{i,3}σ, M_{i,4}σ)   (29)

where i = 1, 2, . . . , N and

M_{i,1} = [σ^T G_{11,i}; σ^T G_{12,i}; σ^T G_{13,i}; σ^T G_{14,i}],  M_{i,2} = [σ^T G_{21,i}; σ^T G_{22,i}; σ^T G_{23,i}; σ^T G_{24,i}]
M_{i,3} = [σ^T G_{31,i}; σ^T G_{32,i}; σ^T G_{33,i}; σ^T G_{34,i}],  M_{i,4} = [σ^T G_{41,i}; σ^T G_{42,i}; σ^T G_{43,i}; σ^T G_{44,i}]   (30)

in which each G_{jk,i}, j, k = 1, 2, 3, 4, takes the form

G_{jk,i} = [0, J_{jk,i}; J_{jk,i}^T, 0]   (31)

with parameter matrices J_{jk,i} ∈ R^{4×4} given in Appendix A. Afterward, the optimal octonion can be sought by

arg min_{R ∈ SO(4)} Σ_{i=1}^{N} ‖E_i R − R Z_i‖²
= arg min_{σ^T σ = 1} Σ_{i=1}^{N} σ^T ( Σ_{j=1}^{4} Σ_{k=1}^{4} G²_{jk,i} ) σ
= arg min_{σ^T σ = 1} σ^T F σ   (32)

where

F = Σ_{i=1}^{N} Σ_{j=1}^{4} Σ_{k=1}^{4} G²_{jk,i} = Σ_{i=1}^{N} Σ_{j=1}^{4} Σ_{k=1}^{4} [J_{jk,i} J_{jk,i}^T, 0; 0, J_{jk,i}^T J_{jk,i}].   (33)

Equation (32) indicates that σ is the eigenvector belonging to the minimum eigenvalue of F, such that

F σ = λ_{F,min} σ.   (34)

Since J_{jk,i} J_{jk,i}^T and J_{jk,i}^T J_{jk,i} share the same spectral distribution, (33) also implies that F has two minimum eigenvalues, one from each diagonal block, with associated eigenvectors representing q_L and q_R, respectively. That is to say, q_L and q_R are the eigenvectors of F11 and F22,

F11 = Σ_{i=1}^{N} Σ_{j=1}^{4} Σ_{k=1}^{4} J_{jk,i} J_{jk,i}^T
F22 = Σ_{i=1}^{N} Σ_{j=1}^{4} Σ_{k=1}^{4} J_{jk,i}^T J_{jk,i}
F = [F11, 0; 0, F22]   (35)

associated with their respective minimum eigenvalues. Then, inserting the computed q_L and q_R into (16) gives the optimal R ∈ SO(4) for Problem 2.

Sub-Problem 2: Let {E} = {E_i ∈ SO(4)} and {Z} = {Z_i ∈ SO(4)}, where i = 1, 2, . . . , N, be two sequential matrix sets in which E_i does not exactly correspond to Z_i. Find the 4-D rotation R

arg min_{R ∈ SO(4)} Σ_{i=1}^{N} ‖E_i R − R Z_i‖²   (36)

provided that {E} and {Z} are sampled asynchronously.

Solution: In this problem, {E} and {Z} are asynchronously sampled measurements with different timestamps. First, we need to interpolate the rotations for smooth consensus. Suppose that we have two successive homogeneous transformations E_i and E_{i+1} with timestamps τ_{E,i} and τ_{E,i+1}, respectively, and that there exists a measurement Z_i with timestamp τ_{Z,i} ∈ [τ_{E,i}, τ_{E,i+1}]. Then, the linear interpolation E_{i,i+1} on SO(4) can be found by

arg min_{E_{i,i+1} ∈ SO(4)} tr{ w (E_i E_{i,i+1}^T − I)^T (E_i E_{i,i+1}^T − I) + (1 − w)(E_{i,i+1} E_{i+1}^T − I)^T (E_{i,i+1} E_{i+1}^T − I) }
⇒ arg max_{E_{i,i+1} ∈ SO(4)} tr{ E_{i,i+1} [w E_i + (1 − w) E_{i+1}]^T }   (37)


where w is the timestamp weight between E_i and E_{i,i+1}, such that

w = (τ_{E,i+1} − τ_{Z,i}) / (τ_{E,i+1} − τ_{E,i}).   (38)

Then, the interpolation can be obtained from the solution to Problem 1 by letting H = [w E_i + (1 − w) E_{i+1}]^T. After the interpolation, a new interpolated set {E} can be established that corresponds well to {Z}, and R can be solved via the solution to Problem 2.
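A compact sketch (ours; function names hypothetical) of (37)–(38): blend the two neighbors with the timestamp weight, then project the blend back onto SO(4) with the Problem 1 SVD.

```python
import numpy as np

def interpolate_so4(E_i, E_ip1, tau_i, tau_ip1, tau_z):
    # Eq. (38): timestamp weight; Eq. (37): the trace-maximizing rotation is
    # the nearest-SO(4) projection of the weighted blend (via SVD).
    w = (tau_ip1 - tau_z) / (tau_ip1 - tau_i)
    M = w * E_i + (1.0 - w) * E_ip1
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, 1.0, np.linalg.det(U @ Vt)])
    return U @ D @ Vt

# Demo: interpolating halfway between the identity and a planar rotation
# by theta gives the rotation by theta/2.
theta = 0.8
def planar(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
E_mid = interpolate_so4(planar(0.0), planar(theta), 0.0, 1.0, 0.5)
```

By (38), τ_Z = τ_{E,i} gives w = 1 (return E_i) and τ_Z = τ_{E,i+1} gives w = 0 (return E_{i+1}), so the endpoints are reproduced exactly.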

B. Uncertainty Descriptions

In this section, we use x̄ to represent the true value of a noise-disturbed vector x, and E(·) denotes the expectation [29], [45]. All errors in this section are assumed to be zero-mean, as in [29]. As all the solutions provided in Section III-A are in spectral-decomposition form, the errors of the derived quaternions can be given by perturbation theory [46]. In a recent error analysis of attitude determination from vector observations by Chang et al. [47], the first-order error of the estimated deterministic quaternion q is

δq = [q^T ⊗ (λ_max I − M)†] δm   (39)

provided that m = vec(M), with λ_max being the maximum eigenvalue of the real symmetric matrix M

M q = λ_max q.   (40)

The above quaternion error is presented under the assumption that δq is multiplicative, such that

q̄ = δq ∘ q   (41)

where ∘ denotes the quaternion product. The following contents discuss the covariance expressions for this type of quaternion error.

Using (39), we have the quaternion error covariance

Σ_δq = E(δq δq^T) = [q^T ⊗ (λ_max I − M)†] E(δm δm^T) [q^T ⊗ (λ_max I − M)†]^T = [q^T ⊗ (λ_max I − M)†] Σ_δm [q^T ⊗ (λ_max I − M)†]^T   (42)

in which

Σ_δm = (∂m/∂b) Σ_b (∂m/∂b)^T   (43)

where b denotes all input variables that contribute to the final form of M. Let us take the solution to Problem 2 as an example. For q_L, we have

F11 q_L = λ_{F,min} q_L = λ_{F11,min} q_L   (44)

which yields

M = −F11,  λ_max = −λ_{F11,min},  m = −vec(F11)
∂m/∂b = −(1/N) Σ_{i=1}^{N} Σ_{j=1}^{4} Σ_{k=1}^{4} ∂vec(J_{jk,i} J_{jk,i}^T)/∂b
b = [vec(E_i)^T, vec(Z_i)^T]^T   (45)

where we assume that b for every pair {E_i, Z_i} has the same probabilistic distribution. The computation of ∂vec(J_{jk,i} J_{jk,i}^T)/∂b can be conducted using the analytical forms of the matrices in Appendix A, and this part of the work is left to the reader. The covariance of q_R can therefore be computed by replacing F11 with F22 in (45). The cross-covariance between q_L and q_R is

Σ_{δq_L δq_R} = E(δq_L δq_R^T)
= [q_L^T ⊗ (F11 − λ_{F11,min} I)†] E(δm_L δm_R^T) [q_R^T ⊗ (F22 − λ_{F22,min} I)†]^T
= [q_L^T ⊗ (F11 − λ_{F11,min} I)†] Σ_{δm_L δm_R} [q_R^T ⊗ (F22 − λ_{F22,min} I)†]^T   (46)

in which m_L = vec(F11), m_R = vec(F22), and Σ_{δm_L δm_R} is given by

Σ_{δm_L δm_R} = (∂m_L/∂b) Σ_b (∂m_R/∂b)^T.   (47)

Eventually, the covariance of the octonion σ is

Σ_σ = [Σ_{δq_L}, Σ_{δq_L δq_R}; Σ_{δq_R δq_L}, Σ_{δq_R}].   (48)
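The first-order propagation (39), (42) can be sanity-checked numerically against an explicit eigendecomposition; a sketch (ours), assuming the column-major vec convention so that δM q = (q^T ⊗ I) vec(δM):

```python
import numpy as np

def dq_jacobian(M, q, lam):
    # J such that dq = J @ vec(dM), per Eq. (39); column-major vec assumed.
    return np.kron(q.reshape(1, -1), np.linalg.pinv(lam * np.eye(len(q)) - M))

def quaternion_covariance(M, q, lam, Sigma_dm):
    # Eq. (42): Sigma_dq = J Sigma_dm J^T.
    J = dq_jacobian(M, q, lam)
    return J @ Sigma_dm @ J.T

# Demo: compare the first-order prediction with the actual eigenvector shift.
M0 = np.diag([4.0, 2.0, 1.0, 0.0])
q0 = np.array([1.0, 0.0, 0.0, 0.0])           # max-eigenvalue eigenvector
dM = 1e-5 * np.array([[0.0, 1.0, -2.0, 0.5],
                      [1.0, 0.0, 3.0, -1.0],
                      [-2.0, 3.0, 0.0, 2.0],
                      [0.5, -1.0, 2.0, 0.0]])  # small symmetric perturbation
dq_pred = dq_jacobian(M0, q0, 4.0) @ dM.flatten(order='F')
vals, vecs = np.linalg.eigh(M0 + dM)
q_new = vecs[:, -1] * np.sign(vecs[0, -1])     # align sign with q0
```

The pseudo-inverse zeroes out the q-direction, so the predicted error (and the resulting covariance) lives entirely in the subspace orthogonal to q, as expected for a unit-norm eigenvector.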

C. Solving AX = XB From an SO(4) Perspective

The SO(4) parameterization of SE(3) was presented by Thomas [37] as the mapping F1 (with inverse F1^{-1})

T = [R, t; 0, 1]  ⟷  R_{T,SO(4)} = [R, εt; εt^T R, 1]   (49)

in which ε denotes the dual unit satisfying ε² = 0. The right part of (49) is on SO(4), and a practical method of approaching the corresponding homogeneous transformation is to choose a very tiny number ε = 1/d, where d ≫ 1 is a positive scaling factor, generating a real-number approximation of R_{T,SO(4)}

R_{T,SO(4)} ≈ [R, (1/d)t; (1/d)t^T R, 1].   (50)
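A sketch (ours) of the approximation (50); the embedded matrix is orthogonal only up to O(1/d), which is the price of the linearization:

```python
import numpy as np

def embed_se3_in_so4(R, t, d=1.0e6):
    # Eq. (50): real-number approximation of the SO(4) image of T = (R, t),
    # with eps = 1/d.
    eps = 1.0 / d
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = eps * t
    M[3, :3] = eps * (R.T @ t)    # eps * t^T R, written as a row
    return M

# Demo: a planar rotation plus a small translation.
c, s = np.cos(0.5), np.sin(0.5)
R_demo = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_demo = np.array([0.1, -0.2, 0.3])
M_demo = embed_se3_in_so4(R_demo, t_demo)
```

Larger d makes M closer to orthogonal but pushes the translation entries toward machine precision, so d trades off linearization error against numerical conditioning.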

It is also noted that the mapping in (49) is not unique. For instance, the following mapping F2 (with inverse F2^{-1}) also satisfies R_{T,SO(4)}^T R_{T,SO(4)} = R_{T,SO(4)} R_{T,SO(4)}^T = I when d ≫ 1:

T = [R, t; 0, 1]  ⟷  R_{T,SO(4)} = [R, εt; −εt^T R, 1].   (51)

The convenience of such a mapping from SE(3) to SO(4) is that some nonlinear equations on SE(3) can be turned into linear ones on SO(4). Choosing a scaling factor d makes an approximation of the homogeneous transformation on SO(4). Then, the conventional hand-eye calibration problem AX = XB can be shifted to

arg min_{X ∈ SE(3)} J = Σ_{i=1}^{N} ‖A_i X − X B_i‖²
⇒ arg min_{R_{X,SO(4)} ∈ SO(4)} J = Σ_{i=1}^{N} ‖R_{A_i,SO(4)} R_{X,SO(4)} − R_{X,SO(4)} R_{B_i,SO(4)}‖²   (52)

which can be instantly solved via the solution to Problem 2.With asynchronously sampled measurements, the problemcan be refined with solution to Sub-Problem 2. While theuncertainty descriptions of σ related to RX,SO(4) are shownin Section III-B, we would reveal what the covariances ofthe rotation and translation look like. Suppose that, now,we have obtained the covariance of σ in (48) for RX,SO(4).The covariances between the columns of the rotation matrixand the cross-covariances between the columns of rotation andtranslation are considered. Recalling (20), one finds out thatthe covariance between the i th column and j th column ofRX,SO(4) is calculated by

$$\begin{aligned}\Sigma_{\delta c_i\delta c_j}&=E\big\{[P_i(\delta\sigma)\sigma+P_i(\sigma)\delta\sigma][P_j(\delta\sigma)\sigma+P_j(\sigma)\delta\sigma]^{T}\big\}\\&=E\big\{P_i(\delta\sigma)\sigma\sigma^{T}P_j^{T}(\delta\sigma)+P_i(\sigma)\delta\sigma\delta\sigma^{T}P_j^{T}(\sigma)\\&\qquad+P_i(\delta\sigma)\sigma\delta\sigma^{T}P_j^{T}(\sigma)+P_i(\sigma)\delta\sigma\sigma^{T}P_j^{T}(\delta\sigma)\big\}\\&=E\big\{Y_i(\sigma)\delta\sigma\delta\sigma^{T}Y_j^{T}(\sigma)+P_i(\sigma)\delta\sigma\delta\sigma^{T}P_j^{T}(\sigma)\\&\qquad+Y_i(\sigma)\delta\sigma\delta\sigma^{T}P_j^{T}(\sigma)+P_i(\sigma)\delta\sigma\delta\sigma^{T}Y_j^{T}(\sigma)\big\}\\&=Y_i(\sigma)\Sigma_\sigma Y_j^{T}(\sigma)+P_i(\sigma)\Sigma_\sigma P_j^{T}(\sigma)\\&\qquad+Y_i(\sigma)\Sigma_\sigma P_j^{T}(\sigma)+P_i(\sigma)\Sigma_\sigma Y_j^{T}(\sigma)\end{aligned}\tag{53}$$

where $P_i(\delta\sigma)\sigma=Y_i(\sigma)\delta\sigma$ and $Y_i(\sigma)$ is a linear mapping of $\sigma$, which can be evaluated by symbolic computation. For the current 4-D Procrustes analysis, interestingly, we have

$$Y_i(\sigma)=P_i(\sigma).\tag{54}$$

Therefore, (53) can be interpreted as

$$\Sigma_{\delta c_i\delta c_j}=4\,P_i(\sigma)\Sigma_\sigma P_j^{T}(\sigma).\tag{55}$$

In particular, the rotation–translation cross-covariances are described by the first three rows and three columns of the covariance matrices $d\cdot\Sigma_{\delta c_1\delta c_4}$, $d\cdot\Sigma_{\delta c_2\delta c_4}$, and $d\cdot\Sigma_{\delta c_3\delta c_4}$, respectively. More specifically, if we need the covariance of $R_{X,SO(4)}$, one arrives at

$$\begin{aligned}\Sigma_{R_{X,SO(4)}}&=E\big\{\delta R_{X,SO(4)}\,\delta R_{X,SO(4)}^{T}\big\}\\&=\sum_{i=1}^{4}[Y_i(\sigma)+P_i(\sigma)]\Sigma_\sigma[Y_i(\sigma)+P_i(\sigma)]^{T}\\&=4\sum_{i=1}^{4}P_i(\sigma)\Sigma_\sigma P_i^{T}(\sigma)\end{aligned}\tag{56}$$

where

$$\begin{aligned}\delta R_{X,SO(4)}&=\big[\delta P_1(\sigma)\sigma+P_1(\sigma)\delta\sigma,\ \delta P_2(\sigma)\sigma+P_2(\sigma)\delta\sigma,\\&\qquad\delta P_3(\sigma)\sigma+P_3(\sigma)\delta\sigma,\ \delta P_4(\sigma)\sigma+P_4(\sigma)\delta\sigma\big]\\&=\big[[Y_1(\sigma)+P_1(\sigma)]\delta\sigma,\ [Y_2(\sigma)+P_2(\sigma)]\delta\sigma,\\&\qquad[Y_3(\sigma)+P_3(\sigma)]\delta\sigma,\ [Y_4(\sigma)+P_4(\sigma)]\delta\sigma\big].\end{aligned}\tag{57}$$

The covariance of $R_X$ then equals

$$\Sigma_{R_X}=\Sigma_{R_{X,SO(4)}}(1{:}3,1{:}3)\tag{58}$$

where $(1{:}3,1{:}3)$ denotes the block containing the first three rows and columns. Finally, the covariance of $t_X$ is given by

$$\Sigma_{t_X}=d^{2}\,\Sigma_{\delta c_4\delta c_4}(1{:}3,1{:}3).\tag{59}$$
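In code, recovering the SE(3) covariance blocks from the SO(4) quantities in (58) and (59) reduces to block extraction and rescaling by $d^2$; the toy covariance matrices below are illustrative values of our own, not data from the paper:

```python
import numpy as np

def extract_covariances(cov_so4, cov_c4, d):
    """Recover SE(3) covariance blocks per (58)-(59): the rotation
    covariance is the leading 3x3 block of the SO(4) covariance; the
    translation covariance is the leading 3x3 block of the covariance
    of the fourth column, rescaled by d^2."""
    cov_R = cov_so4[:3, :3]          # (58)
    cov_t = d ** 2 * cov_c4[:3, :3]  # (59)
    return cov_R, cov_t

rng = np.random.default_rng(0)
G = rng.standard_normal((4, 4))
cov_so4 = G @ G.T                    # toy symmetric PSD 4x4 covariance
H = rng.standard_normal((4, 4))
cov_c4 = 1e-10 * (H @ H.T)           # toy covariance of the 4th column
cov_R, cov_t = extract_covariances(cov_so4, cov_c4, d=1e5)
print(cov_R.shape, cov_t.shape)
```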

D. Discussion

The presented SO(4) algorithm for hand-eye calibration has the following advantages.

1) It simultaneously solves for the rotation and translation in X for the hand-eye calibration problem AX = XB and thus achieves accuracy and robustness comparable to previous representatives.

2) All the items from A and B are directly propagated to the final eigendecomposition without any pre-processing techniques, e.g., the quaternion conversion from the rotation or the rotation logarithm that remain in previous studies.

3) Owing to the direct propagation of the variables to the final result, the computation speed is extremely fast.

4) The uncertainty descriptions can be obtained easily from the given analytical results.

However, the proposed method also has a drawback: the accuracy of the final computed X is affected by the scaling factor d. Here, one can see that d is actually a factor that scales the translation part into a small vector. However, this does not mean that a larger d leads to better performance, since a very large d may exhaust the significant digits of a fixed word-length floating-point number. Therefore, d can be determined empirically according to the scale of the translation vector and the required floating-point accuracy. For instance, on a 32-bit computer, one single-precision floating-point number requires 4 bytes of storage; then $d=1\times10^{5}\sim1\times10^{6}\approx2^{16}\sim2^{20}$ will be sufficient, guaranteeing an accuracy of at least $2^{20-32}\,\mathrm{m}=2^{-12}\,\mathrm{m}=2.44\times10^{-4}\,\mathrm{m}$, which is enough for most optical systems with measurement ranges of 10 m. How to choose the most appropriate d dynamically and optimally will be a difficult but significant task in later works. The algorithmic procedures of the proposed method are described in Algorithm 1 for intuitive implementation. Engineers can also turn to the links in the Acknowledgment section for some MATLAB codes.
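The word-length argument above can be reproduced numerically: when t/d is added to matrix entries of order one in single precision, anything below one unit in the last place of 1.0 (about 1.19 × 10⁻⁷) is rounded away, so the recoverable translation degrades as d grows. A small sketch with a translation value of our own choosing:

```python
import numpy as np

t = 1.2345678                        # a translation component, metres
errors = {}
for d in (1e3, 1e5, 1e7):
    # Entry of the SO(4) matrix near 1 plus the scaled translation,
    # stored in single precision:
    entry = np.float32(1.0) + np.float32(t / d)
    recovered = (float(entry) - 1.0) * d      # unscale in double precision
    errors[d] = abs(recovered - t)
    print(d, errors[d])
```

The recovery error grows with d, matching the text's point that d must be large enough for the mapping yet small enough for the floating-point word length.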

IV. EXPERIMENTAL RESULTS

A. Experiments on a Robotic Arm

The first category of experiments is conducted for a gripper–camera hand-eye calibration shown in Fig. 1.


Algorithm 1 Proposed 4DPA Method for Hand-Eye Calibration

Parameter: Empirical value of d.

Require:

1) Get N measurements of A_i, B_i in {A} and {B}, respectively. If they are not synchronously measured, get the most appropriate interpolated sets using the solution to Sub-Problem 2.

2) Select a scaling factor d empirically for the SE(3)–SO(4) mapping.

Step 1: Convert the measurements in {A} and {B} to rotations on SO(4) via (51).

Step 2: Solve the hand-eye calibration problem AX = XB via the solution to Problem 2. Remap the calculated SO(4) solution to SE(3) using (51).

Step 3: Obtain the covariance of the octonion σ related to X. Compute the rotation–rotation and rotation–translation cross-covariances via (55).
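Algorithm 1 relies on the paper's octonion-based solution to Problem 2, derived earlier in the paper. As a self-contained stand-in, the sketch below solves the same SO(4)-lifted least-squares problem (52) by a generic Kronecker-product eigendecomposition; this is our own baseline for illustration, not the 4DPA octonion solver:

```python
import numpy as np

def random_so4(rng):
    Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] = -Q[:, 0]        # flip one column so that det = +1
    return Q

def so4_handeye_baseline(As, Bs):
    """Generic solver for min sum ||A_i X - X B_i||_F^2 over X in SO(4):
    vec(A X - X B) = (I (x) A - B^T (x) I) vec(X) with column-major vec,
    solved by the eigenvector of the smallest eigenvalue and projected
    back onto SO(4). NOT the paper's octonion eigendecomposition."""
    M = np.zeros((16, 16))
    I4 = np.eye(4)
    for A, B in zip(As, Bs):
        K = np.kron(I4, A) - np.kron(B.T, I4)
        M += K.T @ K
    _, V = np.linalg.eigh(M)                  # ascending eigenvalues
    X = V[:, 0].reshape(4, 4, order='F')      # undo column-major vec
    U, _, Vt = np.linalg.svd(X)               # project onto SO(4)
    D = np.diag([1.0, 1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ D @ Vt

# Noise-free sanity check: A_i = X0 B_i X0^T must be recovered
# (up to the global sign ambiguity of the eigenvector).
rng = np.random.default_rng(1)
X0 = random_so4(rng)
Bs = [random_so4(rng) for _ in range(6)]
As = [X0 @ B @ X0.T for B in Bs]
Xh = so4_handeye_baseline(As, Bs)
err = min(np.linalg.norm(Xh - X0), np.linalg.norm(Xh + X0))
print(err)
```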

Fig. 2. Gripper–camera hand-eye calibration experiment.

The data set is generated using a UR10 robotic arm and an Intel Realsense D435i camera attached firmly to the end-effector (gripper) of the robotic arm (see Fig. 2).

The UR10 robotic arm gives accurate outputs of the homogeneous transformations of its joints relative to its base. The D435i camera contains color, depth, and fisheye sub-cameras along with an inertial measurement unit (IMU). In this section, the transformation $T_{B_i}$ of the end-effector of the robotic arm is computed from the transformations of all joints via (7). We only pick the color images from the D435i to obtain the transformation of the camera with respect to the 12 × 9 chessboard. Note that in Fig. 1, the standard objects can be arbitrary ones with pre-known models, e.g., a point-cloud model or a computer-aided design (CAD) model. The D435i is factory-calibrated for its intrinsic parameters, and we construct the following projection model for the utilized camera:

$$l_{cam,j}=\begin{pmatrix}l_{cam,1,j}\\ l_{cam,2,j}\end{pmatrix},\qquad\begin{pmatrix}l_{cam,1,j}\\ l_{cam,2,j}\\ 1\end{pmatrix}=O\left(\frac{L_{cam,1,j}}{L_{cam,3,j}},\ \frac{L_{cam,2,j}}{L_{cam,3,j}},\ 1\right)^{T}\tag{60}$$

where $l_{cam,j}$ denotes the jth measured feature point (corner) of the chessboard in the camera imaging frame, O is the matrix accounting for the intrinsic parameters of the camera, and $L_{cam,j}=(L_{cam,1,j},L_{cam,2,j},L_{cam,3,j})^{T}$ is the projected jth feature point in the camera ego-motion frame. To obtain the ith pose between the camera and the chessboard, we relate the standard point coordinates of the chessboard $L_{chess,j}$, $j=1,2,\ldots$, in the world frame from a certain model to those in the camera frame by

$$L_{chess,j}=T_{A_i}L_{cam,j}.\tag{61}$$

By minimizing the projection errors from (61), $T_{A_i}$ is obtained with nonlinear optimization techniques, e.g., the Perspective-n-Point algorithm [48], [49]. In our experiment, the scale-invariant feature transform (SIFT) is invoked for the extraction of the corner points of the chessboard [50]. We use several data sets captured from our platform to produce comparisons with representatives, including the classical methods of Tsai and Lenz [5], Park and Martin [7], Chou and Kamel [9], Daniilidis [11], and Andreff et al. [12] and the recent ones of Heller et al. [15], Zhao [16], and Zhang et al. [17]. The error of the hand-eye calibration is defined as follows:

$$\mathrm{Error}=\frac{1}{N}\sqrt{\sum_{i=1}^{N}\|A_iX-XB_i\|^{2}}\tag{62}$$
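The residual metric (62) is straightforward to implement; the sketch below evaluates it on 4 × 4 homogeneous transforms, with a hypothetical test pose of our own to show that an exact solution drives the residual to zero:

```python
import numpy as np

def calib_error(As, Bs, X):
    """Residual of (62): (1/N) * sqrt(sum_i ||A_i X - X B_i||_F^2)."""
    s = sum(np.linalg.norm(A @ X - X @ B, 'fro') ** 2
            for A, B in zip(As, Bs))
    return np.sqrt(s) / len(As)

def transform(R, t):
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Sanity check: if A_i = X B_i X^{-1}, the residual vanishes.
c, s_ = np.cos(0.7), np.sin(0.7)
X = transform(np.array([[c, -s_, 0], [s_, c, 0], [0, 0, 1.0]]),
              [0.1, 0.2, 0.3])
Bs = [transform(np.eye(3), [1.0, 0, 0]),
      transform(np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1.0]]), [0, 2.0, 0])]
As = [X @ B @ np.linalg.inv(X) for B in Bs]
print(calib_error(As, Bs, X))
```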

where $A_i$ and $B_i$ are detailed in (6). All the timing statistics, computation, and visualization are carried out on a MacBook Pro 2017 with an i7 3.5-GHz CPU and MATLAB r2018a. All the algorithms are implemented with the least coding resources. We employ YALMIP to solve the LMI dqhec optimization in the method of Heller et al. [15]. For Zhao's method [16], we invoke the fmincon function in MATLAB for the numerical solution.

The robotic arm is rigidly installed on the testing table and is operated smoothly and periodically to capture images of the chessboard from various directions. Using the above-described mechanisms, we form the series {A}, {B}. We select $d=10^4$ as the scaling factor for evaluation in this section, as the translational components are all within [−2, 2] m, and in such a range, the camera has an empirical positioning accuracy of about 0.05 ∼ 0.2 m. We choose the F2 mapping in (51) for the conversion from SE(3) to SO(4) since, in real applications, it obtains much more accurate hand-eye calibration results than the F1 mapping (49) presented in [37]. The scalar thresholds for the other numerical methods are all set to $1\times10^{-15}$ to guarantee accuracy. We conduct eight groups of experiments on the experimental platform. The errors and computation timespans are processed 100 times for averaged performance evaluation, as provided in Tables II and III. The least errors are marked in green, and the best computation times are tagged in blue. The statistics of the proposed method are marked in bold in Tables II and III for emphasis. The digits after the case serial numbers indicate the sample counts for the studied case.

One can see that with growing sample counts, all the methods obtain more accurate estimates of X. While with


TABLE II

COMPARISONS WITH CLASSICAL METHODS FOR GRIPPER–CAMERA HAND-EYE CALIBRATION

TABLE III

COMPARISONS WITH RECENT METHODS FOR GRIPPER–CAMERA HAND-EYE CALIBRATION

larger quantities of measurements, the processing speeds of the algorithms become slower. However, among all methods, whether analytical or iterative, the proposed SO(4) method almost always gives the most accurate results within the least computation time. The reason is that the proposed 4DPA computes the rotation and translation in X simultaneously and well optimizes the loss function J of the hand-eye calibration problem. The proposed algorithm obtains better results than almost all other analytical and numerical ones, except for cases 1–3 using the method of Daniilidis. This indicates that with a few samples, the accuracy of the proposed 4DPA is lower than that of the method of Daniilidis but still close. However, a few samples imply relatively low confidence in the calibration accuracy, and for cases with larger quantities of measurements, the proposed 4DPA method is always the best. This shows that the designed 4-D Procrustes analysis for the mapping from SE(3) to SO(4) is more efficient than other tools, e.g., the mappings based on the dual quaternion [11] and the Kronecker product [12]. Furthermore, our method uses the eigendecomposition as the solution, which is regarded as a robust tool for engineering problems. Our method reduces the estimation error to about 3.06%–94.01% of the original statistics compared with the classical algorithms and 0.39%–86.11% compared with the recent numerical ones. The proposed method is also free of pre-processing techniques such as the quaternion conversion in other algorithms. All the matrix operations are simple and intuitive, which makes the computation very fast. Our method lowers the computation time to about 9.98%–70.68% of the original statistics compared with the classical analytical algorithms and 1.45%–28.58% compared with the recent numerical ones.
A synchronized sequence of the camera–chessboard poses and end-effector poses is made open-source (see the links in the Acknowledgment section). Every researcher can freely download this data set and evaluate the accuracy and computational efficiency. The advantages of the developed method in both precision and computation time will lead to very effective implementations of hand-eye calibration for industrial applications in the future.

B. Error Sensitivity to the Noises and Different Parameters of d

In this section, we study the sensitivity of the proposed method subject to input measurement noises. We define the noise-corrupted models of the rotations as

$$\begin{aligned}\tilde R_{A,i}&=R_{A,i}+\mathrm{Error}_{RX}R_1\\ \tilde R_{B,i}&=R_{B,i}+\mathrm{Error}_{RX}R_2\end{aligned}\tag{63}$$

where $R_1$ and $R_2$ are random matrices whose columns are subject to Gaussian distributions with covariance I, and $\mathrm{Error}_{RX}$ is a scalar accounting for the rotation error. Likewise, the noise models of the translations are given by

$$\begin{aligned}\tilde t_{A,i}&=t_{A,i}+\mathrm{Error}_{tX}T_1\\ \tilde t_{B,i}&=t_{B,i}+\mathrm{Error}_{tX}T_2\end{aligned}\tag{64}$$

with noise scale $\mathrm{Error}_{tX}$, and with $T_1$ and $T_2$ noise vectors subject to a normal distribution, also with covariance I. The perturbed rotations are orthonormalized after the addition of the noise. Here, the Gaussian distribution is selected following the tradition in [19], since this distributional assumption covers most cases encountered in real-world applications.
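The noise models (63)-(64) followed by re-orthonormalization can be sketched as below; the SVD projection is one common orthonormalization choice of our own, since the text does not fix the method:

```python
import numpy as np

def perturb_pose(R, t, err_R, err_t, rng):
    """Additive Gaussian noise on rotation and translation per (63)-(64),
    then projection of the rotation back onto SO(3) (SVD choice is ours)."""
    Rn = R + err_R * rng.standard_normal((3, 3))
    U, _, Vt = np.linalg.svd(Rn)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    Rn = U @ D @ Vt                  # nearest rotation matrix
    tn = t + err_t * rng.standard_normal(3)
    return Rn, tn

rng = np.random.default_rng(42)
Rn, tn = perturb_pose(np.eye(3), np.zeros(3), 0.01, 0.05, rng)
print(np.abs(Rn @ Rn.T - np.eye(3)).max())   # orthonormal after projection
```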

We take all the compared representatives from Section IV-A into this part, adding three more instances of the proposed method with different values $d=10^3$, $d=10^5$, and $d=10^6$. Then, we can both compare with the representatives and observe the influence of the positive scaling factor d. Several simulations are conducted in which we generate data sets of A, B with N = 1000, and the obtained results are averaged over ten runs. We independently evaluate the effects of $\mathrm{Error}_{RX}$ and $\mathrm{Error}_{tX}$ on Error. The relationship between $\mathrm{Error}_{RX}$ and Error is shown in Fig. 3, while that between $\mathrm{Error}_{tX}$ and Error is shown in Fig. 4. These relationships are demonstrated in log plots. We can see that with increasing errors in rotation and translation, the errors in the computed X all rise to a large extent. One can see in the magnified plot of Fig. 3 that


Fig. 3. Sensitivity of errors subject to input rotation noises.

the optimization methods achieve the best accuracy, but the proposed method obtains comparable estimates. It is shown that with various values of d, the performance of the proposed method differs quite a lot. Fig. 4 indicates that with $d=10^3$, the evaluated errors on translation are the worst among all compared methods. However, with a larger d, this situation is significantly improved; the magnified image in Fig. 4 shows that when $d=10^5$ and $d=10^6$, the errors of the proposed method are quite close to the least ones. As we have verified in Section IV-A that the proposed method has the fastest execution speed, the developed method can be regarded, for the studied cases, as striking a balance between accuracy and computation speed.

C. Simulations on Uncertainty Descriptions

The uncertainty description of the hand-eye calibration problem AX = XB was first studied, iteratively, by Nguyen and Pham in [19]. It works very well with

Fig. 4. Sensitivity of errors subject to input translation noises.

both synthetic and real-world data. However, it still has the following drawbacks.

1) The covariance of the rotation $R_X$ is estimated independently from $R_AR_X=R_XR_B$; but, in fact, the accuracy of $R_X$ is also affected by $t_A$ and $t_B$.

2) The covariances of $R_X$ and $t_X$ must be computed iteratively, while how many iterations suffice to provide accurate enough results remains an unsolved problem.

Hence, the covariance should also be decided by $t_A$ and $t_B$ and, if possible, without iterations. The proposed SO(4) method, however, estimates $R_X$ and $t_X$ simultaneously and can also generate the analytical probabilistic information within several deterministic steps, considering the tightly coupled relationship inside AX = XB. Let us define $\xi_{R_X,x}$, $\xi_{R_X,y}$, and $\xi_{R_X,z}$ as the errors in the rotation $R_X$ around the x-, y-, and z-axes, and $\xi_{t_X,x}$, $\xi_{t_X,y}$, and $\xi_{t_X,z}$ as the errors in the translation $t_X$ about the x-, y-, and

$$\begin{aligned}&\delta R_{X,SO(4)}\,\delta R_{X,SO(4)}^{T}\\&=\begin{bmatrix}\delta R_X & \delta t_X/d\\ -\delta t_X^{T}R_X/d-t_X^{T}\delta R_X/d & 0\end{bmatrix}\begin{bmatrix}\delta R_X^{T} & -R_X^{T}\delta t_X/d-\delta R_X^{T}t_X/d\\ \delta t_X^{T}/d & 0\end{bmatrix}\\&=\begin{bmatrix}\delta R_X\delta R_X^{T}+\dfrac{1}{d^2}\delta t_X\delta t_X^{T} & -\dfrac{1}{d}\big(\delta R_XR_X^{T}\delta t_X+\delta R_X\delta R_X^{T}t_X\big)\\ -\dfrac{1}{d}\big(\delta t_X^{T}R_X\delta R_X^{T}+t_X^{T}\delta R_X\delta R_X^{T}\big) & \dfrac{1}{d^2}\big(\delta t_X^{T}R_XR_X^{T}\delta t_X+\delta t_X^{T}R_X\delta R_X^{T}t_X+t_X^{T}\delta R_XR_X^{T}\delta t_X+t_X^{T}\delta R_X\delta R_X^{T}t_X\big)\end{bmatrix}\end{aligned}$$

$$\begin{aligned}\Sigma_{R_{X,SO(4)}}&=E\big\{\delta R_{X,SO(4)}\,\delta R_{X,SO(4)}^{T}\big\}\qquad\qquad(65)\\&=\begin{bmatrix}\Sigma_{R_X}+\dfrac{1}{d^2}\Sigma_{t_X} & -\dfrac{1}{d}\big(E\{\delta t_X\times\delta\theta_X\}+\Sigma_{R_X}t_X\big)\\ -\dfrac{1}{d}\big(E\{\delta t_X\times\delta\theta_X\}+\Sigma_{R_X}t_X\big)^{T} & \dfrac{1}{d^2}\big(t_X^{T}E\{\delta t_X\times\delta\theta_X\}+t_X^{T}\Sigma_{R_X}t_X\big)\end{bmatrix}\\&\approx\begin{bmatrix}\Sigma_{R_X}+\dfrac{1}{d^2}\Sigma_{t_X} & -\dfrac{1}{d}\Sigma_{R_X}t_X\\ -\dfrac{1}{d}t_X^{T}\Sigma_{R_X} & \dfrac{1}{d^2}t_X^{T}\Sigma_{R_X}t_X\end{bmatrix}\qquad\qquad(66)\end{aligned}$$


Fig. 5. 2-D covariance projections of the solutions to hand-eye calibration using the proposed method and that of Nguyen and Pham. The gray dashed lines indicate the mean bounds of the simulated statistics. The green dashed lines are from the solution of Nguyen and Pham, while the black solid ones are from our proposed algorithm. The discrete points in blue reflect the simulated samples.

z-axes, respectively. Given the covariances $\Sigma_{R_X}$ and $\Sigma_{t_X}$, the covariance of the equivalent SO(4) transformation can be computed by (66), where we have

$$\delta R_{X,SO(4)}=\begin{bmatrix}\delta R_X & \delta t_X/d\\ -\delta t_X^{T}R_X/d-t_X^{T}\delta R_X/d & 0\end{bmatrix}\tag{67}$$

and $\delta R_{X,SO(4)}\delta R_{X,SO(4)}^{T}$ is simplified from (65) to (66), as shown at the bottom of the previous page, according to [51]

$$\begin{aligned}\dot R_X&=-[\omega]_{\times}R_X\\ \delta R_X&=-[\delta\theta_X]_{\times}R_X\\ \delta R_XR_X^{T}&=-[\delta\theta_X]_{\times}\end{aligned}\tag{68}$$

in which $\omega$ is the angular velocity vector and $\delta\theta_X$ denotes the small-angle rotation of $R_X$. Therefore, with $\Sigma_{R_{A_i}}$, $\Sigma_{t_{A_i}}$, $\Sigma_{R_{B_i}}$, and $\Sigma_{t_{B_i}}$, we can compute $\Sigma_{R_{A_i},SO(4)}$ and $\Sigma_{R_{B_i},SO(4)}$. In this paper, we consider the system errors in each measurement step to be identical, and therefore

$$\Sigma_{R_{A_i}}=\Sigma_{R_A},\quad\Sigma_{t_{A_i}}=\Sigma_{t_A},\quad\Sigma_{R_{B_i}}=\Sigma_{R_B},\quad\Sigma_{t_{B_i}}=\Sigma_{t_B}.\tag{69}$$
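The first-order relation (68) can be checked numerically: for a small $\delta\theta_X$, the exact perturbed rotation $\exp(-[\delta\theta_X]_\times)R_X$ satisfies $\delta R_XR_X^T\approx-[\delta\theta_X]_\times$ up to second-order terms. A minimal sketch with test values of our own:

```python
import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rodrigues(v):
    """Exact rotation exp([v]x) via the Rodrigues formula."""
    th = np.linalg.norm(v)
    if th < 1e-12:
        return np.eye(3)
    K = skew(v / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

dtheta = np.array([1e-3, 2e-3, 3e-3])
c, s = np.cos(0.5), np.sin(0.5)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
dR = rodrigues(-dtheta) @ R - R           # exact perturbation
residual = np.linalg.norm(dR @ R.T + skew(dtheta))
print(residual)                           # second-order small
```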

Now, we conduct the same simulation as that provided in the Python open-source codes of Nguyen and Pham [19] (https://github.com/dinhhuy2109/python-cope/examples/test_axxb_covariance.py). The input covariances are

$$\begin{aligned}\Sigma_{R_A}&=10^{-10}I,\qquad\Sigma_{t_A}=10^{-10}I\\ \Sigma_{R_B}&=\begin{pmatrix}4.15625 & -2.88693 & -0.60653\\ -2.88693 & 32.0952 & -0.14482\\ -0.60653 & -0.14482 & 1.43937\end{pmatrix}\times10^{-5}\\ \Sigma_{t_B}&=\begin{pmatrix}19.52937 & 2.12627 & -1.06675\\ 2.12627 & 4.44314426 & 0.38679\\ -1.06675 & 0.38679 & 2.13070\end{pmatrix}\times10^{-5}.\end{aligned}\tag{70}$$

The simulation is carried out 10 000 times, generating the randomly perturbed {A}, {B}, and each set contains 60 measurements. The covariance is computed according to the simulated statistics for {A}, {B}. The statistical covariance bounds of the estimated $R_X$ and $t_X$ are then logged. Using the method of Nguyen and Pham and our proposed method, the 2-D covariance projections are shown in Fig. 5. One can see that both methods estimate the covariance correctly, while our method achieves very slightly smaller covariance bounds. This reflects that our proposed method has reached the accuracy of Nguyen and Pham for uncertainty descriptions. What needs to be pointed out is that the proposed method is fully analytical rather than iterative as in the method of Nguyen and Pham. Since analytical methods are almost always much faster than iterative ones, this simulation indirectly shows that the proposed method can both correctly estimate the transformation and determine the precise covariance information within a short computational period, which benefits applications with high demands on real-time systems with rigorous scheduling logic and timing.

D. Extension to the Extrinsic Calibration Between a 3-D Laser Scanner and a Fisheye Camera

In this section, the developed 4DPA method is employed to solve the extrinsic calibration problem between a 3-D laser scanner and a fisheye camera, mounted rigidly on the experimental platform shown in Fig. 6. This platform contains a high-end Hokuyo UST-10LX 2-D laser scanner spun by a powerful Dynamixel MX-28T servo controlled through serial ports by an onboard Nvidia TX1 computer with a graphics processing unit (GPU). It also contains an Intel Realsense T265 fisheye camera with a resolution of 848 × 800 and a frame rate of 30 frames/s, along with an onboard factory-calibrated IMU. The spin mechanism and the feedback of the servo's internal encoder guarantee the seamless stitching of successive laser scans, producing highly accurate 3-D scene reconstructions.


TABLE IV

TRAJECTORY ERRORS BEFORE AND AFTER THE EXTRINSIC CALIBRATION USING THE PROPOSED METHOD

Fig. 6. Experimental platform equipped with a 3-D laser scanner and a fisheye camera, along with other processing devices.

Fig. 7. Signal flowchart of the extrinsic calibration between the 3-D laser scanner and the fisheye camera using the proposed algorithm.

A single sensor or a combination of a laser scanner and a camera is of great importance for scene measurement and reconstruction [52]–[54]. However, due to inevitable installation misalignments, the extrinsic calibration between the laser scanner and the camera must be performed for reliable measurement accuracy. Several algorithms have recently been proposed to deal with the calibration of these sensors [55]–[57]. These methods in fact require standard objects, such as large chessboards, to obtain satisfactory results. Here, we extend our method to this extrinsic calibration without the need for any other standard reference systems. The sensor signal flowchart is shown in Fig. 7.

For the developed system, we can gather three sources of data: images from the fisheye camera, the inertial readings of angular rate and acceleration, and the 3-D laser scans. At the first stage, the fisheye camera and IMU measurements are processed via feature extraction [50] and navigation integral mechanisms [58], respectively. They are then integrated to yield the camera pose, denoted as $T_{A_i}$ with index i, using the method in [59]. The pose of the 3-D laser scanner, denoted as $T_{B_i}$ with index i, is computed via 3-D ICP [33], as shown in Fig. 8. As the camera and laser scanner poses have output frequencies of 200 and 1 Hz, respectively, the synchronization between them is conducted by the continuous linear quaternion interpolation that we developed recently [43]. Then, using the properly synchronized $T_{A_i}$ and $T_{B_i}$, we can form the proposed hand-eye calibration principle with the entry-point equation (6). With the procedures shown in Algorithm 1, where d is set to $d=10^5$ empirically, the extrinsic parameters, i.e., the rotation and translation between the laser scanner and the fisheye camera, are calculated.
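The synchronization step above can be sketched with a simple normalized linear interpolation (nlerp) between unit quaternions; this is a generic stand-in of ours, not the paper's continuous linear quaternion interpolation [43]:

```python
import numpy as np

def nlerp(q0, q1, alpha):
    """Normalized linear interpolation between unit quaternions at
    fraction alpha in [0, 1], taking the shorter arc."""
    if np.dot(q0, q1) < 0.0:
        q1 = -q1
    q = (1.0 - alpha) * q0 + alpha * q1
    return q / np.linalg.norm(q)

# Resample a slow (e.g., 1 Hz) orientation stream at an intermediate
# timestamp between two samples (toy quaternions about the x-axis):
q_a = np.array([1.0, 0.0, 0.0, 0.0])
q_b = np.array([np.cos(0.2), np.sin(0.2), 0.0, 0.0])
q_mid = nlerp(q_a, q_b, 0.5)
print(q_mid)
```

For coplanar quaternions as here, the nlerp midpoint coincides with the slerp midpoint; in general, nlerp only approximates constant angular velocity but is cheap and monotone, which is usually sufficient for resampling dense pose streams.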

These parameters are then applied to the developed platform for 3-D trajectory verification using the Visual-lidar Odometry and Mapping (V-LOAM) method [60]. We put the system into measurement mode and then move it from origin to origin. When computing the trajectories with the uncalibrated and calibrated data, we find that the trajectory after calibration has much smaller odometric errors (see Fig. 9). In later periods, a similar experiment is repeated twice. The detailed statistics of the trajectory errors are presented in Table IV, containing results for each of the x-, y-, and z-axes. One can see that the errors are significantly reduced after calibration, which indicates the effectiveness of the proposed calibration scheme in real-scene measurement applications. Also, the results of the proposed 4DPA method for hand-eye calibration are affected by the value of d, as described in the previous sections. Therefore, a study of this influence is conducted using the data of experiment 3 (see Table V).

We tune the value of d from $1\times10^3$ to $1\times10^7$. The errors indicate that the value $d=1\times10^5$ chosen in this section yields sufficiently accurate estimates. With larger values of d, the error bounds almost reach their limits, while for small values of d, the calibration cannot be handled accurately. The reason is that


Fig. 8. Reconstructed scenes using the presented 3-D laser scanner. They are later used for the pose estimation of the laser scanner frame with ICP.

TABLE V

VERIFIED 3-D TRAJECTORY ERRORS AFTER HAND-EYE CALIBRATION WITH DIFFERENT VALUES OF d (EXPERIMENT 3)

Fig. 9. Projected XY trajectories before and after the extrinsic calibration between the 3-D laser scanner and the fisheye camera.

the approximation in (51) requires a large d for more precise computation (but not too large; see Section III-D). The optimal dynamic selection of the parameter d will be our next task in the near future.

V. CONCLUSION

This paper studies the classical hand-eye calibration problem AX = XB by exploiting a new generalized method on SO(4). The investigated 4-D Procrustes analysis provides very useful closed-form results for hand-eye calibration. Within this framework, the uncertainty descriptions of the obtained transformations can be easily computed. It is verified that the proposed method achieves better accuracy and much less computation time than representatives on real-world data sets. The proposed uncertainty descriptions for the 4 × 4 matrices are also applicable to other similar problems, such as spacecraft attitude determination [29] and 3-D registration [32]. We also notice that the Procrustes analysis on SO(n) may help solve the generalized hand-eye problem AX = XB on SE(n); this will be discussed in our future works.

APPENDIX A
SOME CLOSED-FORM RESULTS

A. Analytical Forms of Some Fundamental Matrices

Taking $c_1=P_1(\sigma)\sigma$ as an example, one can explicitly write out

$$c_1=\begin{pmatrix}ap-bq-cr-ds\\ aq+bp+cs-dr\\ ar+cp-bs+dq\\ as+br-cq+dp\end{pmatrix}.$$

It is easy to verify that $c_1=P_1(\sigma)\sigma$. Similar factorizations can be established for $c_2$, $c_3$, and $c_4$ and for $r_1$, $r_2$, $r_3$, and $r_4$, respectively, generating the following results:

$$P_1(\sigma)=\frac{1}{\sqrt{2}}\begin{pmatrix}p & -q & -r & -s & a & -b & -c & -d\\ q & p & s & -r & b & a & -d & c\\ r & -s & p & q & c & d & a & -b\\ s & r & -q & p & d & -c & b & a\end{pmatrix}$$

$$P_2(\sigma)=\frac{1}{\sqrt{2}}\begin{pmatrix}-q & -p & s & -r & -b & -a & -d & c\\ p & -q & r & s & a & -b & c & d\\ -s & -r & -q & p & d & -c & -b & -a\\ r & -s & -p & -q & -c & -d & a & -b\end{pmatrix}$$

$$P_3(\sigma)=\frac{1}{\sqrt{2}}\begin{pmatrix}-r & -s & -p & q & -c & d & -a & -b\\ s & -r & -q & -p & -d & -c & -b & a\\ p & q & -r & s & a & b & -c & d\\ -q & p & -s & -r & b & -a & -d & -c\end{pmatrix}$$

$$P_4(\sigma)=\frac{1}{\sqrt{2}}\begin{pmatrix}-s & r & -q & -p & -d & -c & b & -a\\ -r & -s & p & -q & c & -d & -a & -b\\ q & -p & -s & -r & -b & a & -d & -c\\ p & q & r & -s & a & b & c & -d\end{pmatrix}$$

$$Q_1(\sigma)=\frac{1}{\sqrt{2}}\begin{pmatrix}p & -q & -r & -s & a & -b & -c & -d\\ -q & -p & s & -r & -b & -a & -d & c\\ -r & -s & -p & q & -c & d & -a & -b\\ -s & r & -q & -p & -d & -c & b & -a\end{pmatrix}$$

$$Q_2(\sigma)=\frac{1}{\sqrt{2}}\begin{pmatrix}q & p & s & -r & b & a & -d & c\\ p & -q & r & s & a & -b & c & d\\ s & -r & -q & -p & -d & -c & -b & a\\ -r & -s & p & -q & c & -d & -a & -b\end{pmatrix}$$

$$Q_3(\sigma)=\frac{1}{\sqrt{2}}\begin{pmatrix}r & -s & p & q & c & d & a & -b\\ -s & -r & -q & p & d & -c & -b & -a\\ p & q & -r & s & a & b & -c & d\\ q & -p & -s & -r & -b & a & -d & -c\end{pmatrix}$$

$$Q_4(\sigma)=\frac{1}{\sqrt{2}}\begin{pmatrix}s & r & -q & p & d & -c & b & a\\ r & -s & -p & -q & -c & -d & a & -b\\ -q & p & -s & -r & b & -a & -d & -c\\ p & q & r & -s & a & b & c & -d\end{pmatrix}.$$

The results of $J_{jk,i}$ can then be computed using symbolic computation tools, e.g., MATLAB and Mathematica:

$$J_{11,i}=\begin{pmatrix}e_{11}-z_{11} & e_{12}+z_{21} & e_{13}+z_{31} & e_{14}+z_{41}\\ e_{12}+z_{21} & z_{11}-e_{11} & e_{14}-z_{41} & z_{31}-e_{13}\\ e_{13}+z_{31} & z_{41}-e_{14} & z_{11}-e_{11} & e_{12}-z_{21}\\ e_{14}+z_{41} & e_{13}-z_{31} & z_{21}-e_{12} & z_{11}-e_{11}\end{pmatrix}$$

$$J_{12,i}=\begin{pmatrix}e_{21}-z_{21} & e_{22}-z_{11} & e_{23}+z_{41} & e_{24}-z_{31}\\ e_{22}-z_{11} & z_{21}-e_{21} & e_{24}+z_{31} & z_{41}-e_{23}\\ e_{23}-z_{41} & z_{31}-e_{24} & -e_{21}-z_{21} & e_{22}-z_{11}\\ e_{24}+z_{31} & e_{23}+z_{41} & z_{11}-e_{22} & -e_{21}-z_{21}\end{pmatrix}$$

$$J_{13,i}=\begin{pmatrix}e_{31}-z_{31} & e_{32}-z_{41} & e_{33}-z_{11} & e_{34}+z_{21}\\ e_{32}+z_{41} & -e_{31}-z_{31} & e_{34}+z_{21} & z_{11}-e_{33}\\ e_{33}-z_{11} & z_{21}-e_{34} & z_{31}-e_{31} & e_{32}+z_{41}\\ e_{34}-z_{21} & e_{33}-z_{11} & z_{41}-e_{32} & -e_{31}-z_{31}\end{pmatrix}$$

$$J_{14,i}=\begin{pmatrix}e_{41}-z_{41} & e_{42}+z_{31} & e_{43}-z_{21} & e_{44}-z_{11}\\ e_{42}-z_{31} & -e_{41}-z_{41} & e_{44}-z_{11} & z_{21}-e_{43}\\ e_{43}+z_{21} & z_{11}-e_{44} & -e_{41}-z_{41} & e_{42}+z_{31}\\ e_{44}-z_{11} & e_{43}+z_{21} & z_{31}-e_{42} & z_{41}-e_{41}\end{pmatrix}$$

$$J_{21,i}=\begin{pmatrix}e_{12}-z_{12} & z_{22}-e_{11} & e_{14}+z_{32} & z_{42}-e_{13}\\ z_{22}-e_{11} & z_{12}-e_{12} & -e_{13}-z_{42} & z_{32}-e_{14}\\ z_{32}-e_{14} & z_{42}-e_{13} & e_{12}+z_{12} & e_{11}-z_{22}\\ e_{13}+z_{42} & -e_{14}-z_{32} & z_{22}-e_{11} & e_{12}+z_{12}\end{pmatrix}$$

$$J_{22,i}=\begin{pmatrix}e_{22}-z_{22} & -e_{21}-z_{12} & e_{24}+z_{42} & -e_{23}-z_{32}\\ -e_{21}-z_{12} & z_{22}-e_{22} & z_{32}-e_{23} & z_{42}-e_{24}\\ -e_{24}-z_{42} & z_{32}-e_{23} & e_{22}-z_{22} & e_{21}-z_{12}\\ e_{23}+z_{32} & z_{42}-e_{24} & z_{12}-e_{21} & e_{22}-z_{22}\end{pmatrix}$$

$$J_{23,i}=\begin{pmatrix}e_{32}-z_{32} & -e_{31}-z_{42} & e_{34}-z_{12} & z_{22}-e_{33}\\ z_{42}-e_{31} & -e_{32}-z_{32} & z_{22}-e_{33} & z_{12}-e_{34}\\ -e_{34}-z_{12} & z_{22}-e_{33} & e_{32}+z_{32} & e_{31}+z_{42}\\ e_{33}-z_{22} & -e_{34}-z_{12} & z_{42}-e_{31} & e_{32}-z_{32}\end{pmatrix}$$

$$J_{24,i}=\begin{pmatrix}e_{42}-z_{42} & z_{32}-e_{41} & e_{44}-z_{22} & -e_{43}-z_{12}\\ -e_{41}-z_{32} & -e_{42}-z_{42} & -e_{43}-z_{12} & z_{22}-e_{44}\\ z_{22}-e_{44} & z_{12}-e_{43} & e_{42}-z_{42} & e_{41}+z_{32}\\ e_{43}-z_{12} & z_{22}-e_{44} & z_{32}-e_{41} & e_{42}+z_{42}\end{pmatrix}$$

$$J_{31,i}=\begin{pmatrix}e_{13}-z_{13} & z_{23}-e_{14} & z_{33}-e_{11} & e_{12}+z_{43}\\ e_{14}+z_{23} & e_{13}+z_{13} & -e_{12}-z_{43} & z_{33}-e_{11}\\ z_{33}-e_{11} & z_{43}-e_{12} & z_{13}-e_{13} & -e_{14}-z_{23}\\ z_{43}-e_{12} & e_{11}-z_{33} & z_{23}-e_{14} & e_{13}+z_{13}\end{pmatrix}$$

$$J_{32,i}=\begin{pmatrix}e_{23}-z_{23} & -e_{24}-z_{13} & z_{43}-e_{21} & e_{22}-z_{33}\\ e_{24}-z_{13} & e_{23}+z_{23} & z_{33}-e_{22} & z_{43}-e_{21}\\ -e_{21}-z_{43} & z_{33}-e_{22} & -e_{23}-z_{23} & -e_{24}-z_{13}\\ z_{33}-e_{22} & e_{21}+z_{43} & z_{13}-e_{24} & e_{23}-z_{23}\end{pmatrix}$$

$$J_{33,i}=\begin{pmatrix}e_{33}-z_{33} & -e_{34}-z_{43} & -e_{31}-z_{13} & e_{32}+z_{23}\\ e_{34}+z_{43} & e_{33}-z_{33} & z_{23}-e_{32} & z_{13}-e_{31}\\ -e_{31}-z_{13} & z_{23}-e_{32} & z_{33}-e_{33} & z_{43}-e_{34}\\ -e_{32}-z_{23} & e_{31}-z_{13} & z_{43}-e_{34} & e_{33}-z_{33}\end{pmatrix}$$

$$J_{34,i}=\begin{pmatrix}e_{43}-z_{43} & z_{33}-e_{44} & -e_{41}-z_{23} & e_{42}-z_{13}\\ e_{44}-z_{33} & e_{43}-z_{43} & -e_{42}-z_{13} & z_{23}-e_{41}\\ z_{23}-e_{41} & z_{13}-e_{42} & -e_{43}-z_{43} & z_{33}-e_{44}\\ -e_{42}-z_{13} & e_{41}+z_{23} & z_{33}-e_{44} & e_{43}+z_{43}\end{pmatrix}$$

$$J_{41,i}=\begin{pmatrix}e_{14}-z_{14} & e_{13}+z_{24} & z_{34}-e_{12} & z_{44}-e_{11}\\ z_{24}-e_{13} & e_{14}+z_{14} & e_{11}-z_{44} & z_{34}-e_{12}\\ e_{12}+z_{34} & z_{44}-e_{11} & e_{14}+z_{14} & -e_{13}-z_{24}\\ z_{44}-e_{11} & -e_{12}-z_{34} & z_{24}-e_{13} & z_{14}-e_{14}\end{pmatrix}$$

$$J_{42,i}=\begin{pmatrix}e_{24}-z_{24} & e_{23}-z_{14} & z_{44}-e_{22} & -e_{21}-z_{34}\\ -e_{23}-z_{14} & e_{24}+z_{24} & e_{21}+z_{34} & z_{44}-e_{22}\\ e_{22}-z_{44} & z_{34}-e_{21} & e_{24}-z_{24} & -e_{23}-z_{14}\\ z_{34}-e_{21} & z_{44}-e_{22} & z_{14}-e_{23} & -e_{24}-z_{24}\end{pmatrix}$$

$$J_{43,i}=\begin{pmatrix}e_{34}-z_{34} & e_{33}-z_{44} & -e_{32}-z_{14} & z_{24}-e_{31}\\ z_{44}-e_{33} & e_{34}-z_{34} & e_{31}+z_{24} & z_{14}-e_{32}\\ e_{32}-z_{14} & z_{24}-e_{31} & e_{34}+z_{34} & z_{44}-e_{33}\\ -e_{31}-z_{24} & -e_{32}-z_{14} & z_{44}-e_{33} & -e_{34}-z_{34}\end{pmatrix}$$

$$J_{44,i}=\begin{pmatrix}e_{44}-z_{44} & e_{43}+z_{34} & -e_{42}-z_{24} & -e_{41}-z_{14}\\ -e_{43}-z_{34} & e_{44}-z_{44} & e_{41}-z_{14} & z_{24}-e_{42}\\ e_{42}+z_{24} & z_{14}-e_{41} & e_{44}-z_{44} & z_{34}-e_{43}\\ -e_{41}-z_{14} & z_{24}-e_{42} & z_{34}-e_{43} & z_{44}-e_{44}\end{pmatrix}$$

where $e_{jk}$, $z_{jk}$, $j,k=1,2,3,4$, are the matrix entries of $E_i$ and $Z_i$, respectively. Note that these computation procedures can also be found at https://github.com/zarathustr/hand_eye_SO4.

B. Matrix Determinant Property

Given an arbitrary square matrix

$$M=\begin{bmatrix}A & B\\ C & D\end{bmatrix},$$

if D is invertible, then the determinant of M is

$$\det(M)=\det(D)\det(A-BD^{-1}C).$$
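This Schur-complement identity is easy to verify numerically; the random matrices below are our own test data:

```python
import numpy as np

# Numerical check of det(M) = det(D) det(A - B D^{-1} C)
# for the block matrix M = [[A, B], [C, D]] with invertible D.
rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3))
D = rng.standard_normal((3, 3))       # invertible with probability one
M = np.block([[A, B], [C, D]])
lhs = np.linalg.det(M)
rhs = np.linalg.det(D) * np.linalg.det(A - B @ np.linalg.inv(D) @ C)
print(abs(lhs - rhs))                 # zero up to floating-point error
```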


Inserting the above result into

$$\det(\lambda_{W,\max}I-W)=\det\left(\begin{bmatrix}\lambda_{W,\max}I & -K\\ -K^{T} & \lambda_{W,\max}I\end{bmatrix}\right)$$

gives (26).

ACKNOWLEDGMENT

The authors would like to thank Dr. Z. Zhang from the University of Leeds for his detailed explanation of the implemented codes in [17]. They would like to thank Dr. H. Nguyen from Nanyang Technological University, Singapore, for the discussion on his useful codes for the uncertainty description of hand-eye calibration [19]. They would also like to thank Dr. D. Modenini from the University of Bologna and Prof. D. Condurache from the Gheorghe Asachi Technical University of Iasi for constructive communications. The codes and data of this paper have been archived at https://github.com/zarathustr/hand_eye_SO4 and https://github.com/zarathustr/hand_eye_data.

REFERENCES

[1] K. Koide and E. Menegatti, "General hand–eye calibration based on reprojection error minimization," IEEE Robot. Autom. Lett., vol. 4, no. 2, pp. 1021–1028, Apr. 2019.

[2] Y.-T. Liu, Y.-A. Zhang, and M. Zeng, "Sensor to segment calibration for magnetic and inertial sensor based motion capture systems," Measurement, vol. 142, pp. 1–9, Aug. 2019. doi: 10.1016/j.measurement.2019.03.048.

[3] Z.-Q. Zhang, "Cameras and inertial/magnetic sensor units alignment calibration," IEEE Trans. Instrum. Meas., vol. 65, no. 6, pp. 1495–1502, Jun. 2016.

[4] D. Modenini, "Attitude determination from ellipsoid observations: A modified orthogonal Procrustes problem," AIAA J. Guid. Control Dyn., vol. 41, no. 10, pp. 2320–2325, Jun. 2018.

[5] R. Y. Tsai and R. K. Lenz, "A new technique for fully autonomous and efficient 3D robotics hand/eye calibration," IEEE Trans. Robot. Autom., vol. 5, no. 3, pp. 345–358, Jun. 1989.

[6] Y. C. Shiu and S. Ahmad, "Calibration of wrist-mounted robotic sensors by solving homogeneous transform equations of the form AX=XB," IEEE Trans. Robot. Autom., vol. 5, no. 1, pp. 16–29, Feb. 1989.

[7] F. C. Park and B. J. Martin, "Robot sensor calibration: Solving AX=XB on the Euclidean group," IEEE Trans. Robot. Autom., vol. 10, no. 5, pp. 717–721, Oct. 1994.

[8] R. Horaud and F. Dornaika, "Hand-eye calibration," Int. J. Robot. Res., vol. 14, no. 3, pp. 195–210, Jun. 1995.

[9] J. C. K. Chou and M. Kamel, "Finding the position and orientation of a sensor on a robot manipulator using quaternions," Int. J. Robot. Res., vol. 10, no. 3, pp. 240–254, Jun. 1991.

[10] Y.-C. Lu and J. Chou, "Eight-space quaternion approach for robotic hand-eye calibration," in Proc. IEEE ICSMC, vol. 4, Oct. 2002, pp. 3316–3321.

[11] K. Daniilidis, "Hand-eye calibration using dual quaternions," Int. J. Robot. Res., vol. 18, no. 3, pp. 286–298, Mar. 1999.

[12] N. Andreff, R. Horaud, and B. Espiau, "Robot hand-eye calibration using structure-from-motion," Int. J. Robot. Res., vol. 20, no. 3, pp. 228–248, 2001.

[13] D. Condurache and A. Burlacu, "Orthogonal dual tensor method for solving the AX=XB sensor calibration problem," Mech. Mach. Theory, vol. 104, pp. 382–404, Oct. 2016.

[14] S. Gwak, J. Kim, and F. C. Park, "Numerical optimization on the Euclidean group with applications to camera calibration," IEEE Trans. Robot. Autom., vol. 19, no. 1, pp. 65–74, Feb. 2003.

[15] J. Heller, D. Henrion, and T. Pajdla, "Hand-eye and robot-world calibration by global polynomial optimization," in Proc. IEEE ICRA, May 2014, pp. 3157–3164.

[16] Z. Zhao, "Simultaneous robot-world and hand-eye calibration by the alternative linear programming," Pattern Recogn. Lett., to be published. doi: 10.1016/j.patrec.2018.08.023.

[17] Z. Zhang, L. Zhang, and G.-Z. Yang, "A computationally efficient method for hand–eye calibration," Int. J. Comput. Assist. Radiol. Surg., vol. 12, no. 10, pp. 1775–1787, Oct. 2017.

[18] H. Song, Z. Du, W. Wang, and L. Sun, "Singularity analysis for the existing closed-form solutions of the hand-eye calibration," IEEE Access, vol. 6, pp. 75407–75421, 2018.

[19] H. Nguyen and Q.-C. Pham, "On the covariance of X in AX = XB," IEEE Trans. Robot., vol. 34, no. 6, pp. 1651–1658, Dec. 2018.

[20] Y. Tian, W. R. Hamel, and J. Tan, "Accurate human navigation using wearable monocular visual and inertial sensors," IEEE Trans. Instrum. Meas., vol. 63, no. 1, pp. 203–213, Jan. 2014.

[21] A. Alamri, M. Eid, R. Iglesias, S. Shirmohammadi, and A. E. Saddik, "Haptic virtual rehabilitation exercises for poststroke diagnosis," IEEE Trans. Instrum. Meas., vol. 57, no. 9, pp. 1876–1884, Sep. 2007.

[22] S. Liwicki, G. Tzimiropoulos, S. Zafeiriou, and M. Pantic, "Euler principal component analysis," Int. J. Comput. Vis., vol. 101, no. 3, pp. 498–518, 2013.

[23] A. Bartoli, D. Pizarro, and M. Loog, "Stratified generalized Procrustes analysis," Int. J. Comput. Vis., vol. 101, no. 2, pp. 227–253, Jan. 2013.

[24] L. Igual, X. Perez-Sala, S. Escalera, C. Angulo, and F. De La Torre, "Continuous generalized Procrustes analysis," Pattern Recognit., vol. 47, no. 2, pp. 659–671, Feb. 2014.

[25] C. I. Mosier, "Determining a simple structure when loadings for certain tests are known," Psychometrika, vol. 4, no. 2, pp. 149–162, Jun. 1939.

[26] B. F. Green, "The orthogonal approximation of an oblique structure in factor analysis," Psychometrika, vol. 17, no. 4, pp. 429–440, Dec. 1952.

[27] M. W. Browne, "On oblique Procrustes rotation," Psychometrika, vol. 32, no. 2, pp. 125–132, Jun. 1967.

[28] G. Wahba, "A least squares estimate of satellite attitude," SIAM Rev., vol. 7, no. 3, p. 409, 1965.

[29] M. D. Shuster and S. D. Oh, "Three-axis attitude determination from vector observations," J. Guid. Control Dyn., vol. 4, no. 1, pp. 70–77, 1981.

[30] R. Horaud, F. Forbes, M. Yguel, G. Dewaele, and J. Zhang, "Rigid and articulated point registration with expectation conditional maximization," IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 3, pp. 587–602, Mar. 2011.

[31] N. Duta, A. K. Jain, and M. P. Dubuisson-Jolly, "Automatic construction of 2D shape models," IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 5, pp. 433–446, May 2001.

[32] K. S. Arun, T. S. Huang, and S. D. Blostein, "Least-squares fitting of two 3-D point sets," IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-9, no. 5, pp. 698–700, Sep. 1987.

[33] P. J. Besl and N. D. McKay, "A method for registration of 3-D shapes," IEEE Trans. Pattern Anal. Mach. Intell., vol. 14, no. 2, pp. 239–256, Feb. 1992.

[34] M. Wang and A. Tayebi, "Hybrid pose and velocity-bias estimation on SE(3) using inertial and landmark measurements," IEEE Trans. Autom. Control, vol. 64, no. 8, pp. 3399–3406, Aug. 2019. doi: 10.1109/TAC.2018.2879766.

[35] J. Markdahl and X. Hu, "Exact solutions to a class of feedback systems on SO(n)," Automatica, vol. 63, pp. 138–147, Jan. 2016.

[36] J. D. Biggs and H. Henninger, "Motion planning on a class of 6-D Lie groups via a covering map," IEEE Trans. Autom. Control, to be published. doi: 10.1109/TAC.2018.2885241.

[37] F. Thomas, "Approaching dual quaternions from matrix algebra," IEEE Trans. Robot., vol. 30, no. 5, pp. 1037–1048, Oct. 2014.

[38] W. S. Massey, "Cross products of vectors in higher dimensional Euclidean spaces," Amer. Math. Monthly, vol. 90, no. 10, pp. 697–701, 1983.

[39] J. Wu, Z. Zhou, B. Gao, R. Li, Y. Cheng, and H. Fourati, "Fast linear quaternion attitude estimator using vector observations," IEEE Trans. Autom. Sci. Eng., vol. 15, no. 1, pp. 307–319, Jan. 2018.

[40] J. Wu, M. Liu, Z. Zhou, and R. Li, "Fast symbolic 3D registration solution," 2019, arXiv:1805.08703. [Online]. Available: https://arxiv.org/abs/1805.08703

[41] I. Y. Bar-Itzhack, "New method for extracting the quaternion from a rotation matrix," J. Guid. Control Dyn., vol. 23, no. 6, pp. 1085–1087, 2000.

[42] A. H. J. de Ruiter and J. R. Forbes, "On the solution of Wahba's problem on SO(n)," J. Astron. Sci., vol. 60, no. 1, pp. 1–31, Mar. 2013.

[43] J. Wu, "Optimal continuous unit quaternions from rotation matrices," AIAA J. Guid. Control Dyn., vol. 42, no. 4, pp. 919–922, 2018.

[44] J. Wu, M. Liu, and Y. Qi, "Computationally efficient robust algorithm for generalized sensor calibration problem AR=RB," IEEE Sensors J., to be published. doi: 10.1109/JSEN.2019.2924668.

[45] P. Lourenço, B. J. Guerreiro, P. Batista, P. Oliveira, and C. Silvestre, "Uncertainty characterization of the orthogonal Procrustes problem with arbitrary covariance matrices," Pattern Recognit., vol. 61, pp. 210–220, Jan. 2017.


[46] R. L. Dailey, "Eigenvector derivatives with repeated eigenvalues," AIAA J., vol. 27, no. 4, pp. 486–491, Apr. 1989.

[47] G. Chang, T. Xu, and Q. Wang, "Error analysis of Davenport's q-method," Automatica, vol. 75, pp. 217–220, Jan. 2017.

[48] X.-S. Gao, X.-R. Hou, J. Tang, and H.-F. Cheng, "Complete solution classification for the perspective-three-point problem," IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 8, pp. 930–943, Aug. 2003.

[49] T. Hamel and C. Samson, "Riccati observers for the nonstationary PnP problem," IEEE Trans. Autom. Control, vol. 63, no. 3, pp. 726–741, Mar. 2018.

[50] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," Int. J. Comput. Vis., vol. 60, no. 2, pp. 91–110, 2004.

[51] R. Mahony, T. Hamel, and J.-M. Pflimlin, "Nonlinear complementary filters on the special orthogonal group," IEEE Trans. Autom. Control, vol. 53, no. 5, pp. 1203–1218, Jun. 2008.

[52] P. Payeur and C. Chen, "Registration of range measurements with compact surface representation," IEEE Trans. Instrum. Meas., vol. 52, no. 5, pp. 1627–1634, Oct. 2003.

[53] L. Wei, C. Cappelle, and Y. Ruichek, "Camera/laser/GPS fusion method for vehicle positioning under extended NIS-based sensor validation," IEEE Trans. Instrum. Meas., vol. 62, no. 11, pp. 3110–3122, Nov. 2013.

[54] A. Wan, J. Xu, D. Miao, and K. Chen, "An accurate point-based rigid registration method for laser tracker relocation," IEEE Trans. Instrum. Meas., vol. 66, no. 2, pp. 254–262, Feb. 2017.

[55] Y. Zhuang, N. Jiang, H. Hu, and F. Yan, "3-D-laser-based scene measurement and place recognition for mobile robots in dynamic indoor environments," IEEE Trans. Instrum. Meas., vol. 62, no. 2, pp. 438–450, Feb. 2013.

[56] Z. Hu, Y. Li, N. Li, and B. Zhao, "Extrinsic calibration of 2-D laser rangefinder and camera from single shot based on minimal solution," IEEE Trans. Instrum. Meas., vol. 65, no. 4, pp. 915–929, Apr. 2016.

[57] S. Xie, D. Yang, K. Jiang, and Y. Zhong, "Pixels and 3-D points alignment method for the fusion of camera and LiDAR data," IEEE Trans. Instrum. Meas., to be published. doi: 10.1109/TIM.2018.2879705.

[58] Y. Wu, X. Hu, D. Hu, T. Li, and J. Lian, "Strapdown inertial navigation system algorithms based on dual quaternions," IEEE Trans. Aerosp. Electron. Syst., vol. 41, no. 1, pp. 110–132, Jan. 2005.

[59] N. Enayati, E. D. Momi, and G. Ferrigno, "A quaternion-based unscented Kalman filter for robust optical/inertial motion tracking in computer-assisted surgery," IEEE Trans. Instrum. Meas., vol. 64, no. 8, pp. 2291–2301, Aug. 2015.

[60] J. Zhang, M. Kaess, and S. Singh, "A real-time method for depth enhanced visual odometry," Auton. Robots, vol. 41, no. 1, pp. 31–43, Jan. 2017.

Jin Wu (S'15–M'17) was born in Zhenjiang, China, in May 1994. He received the B.S. degree from the University of Electronic Science and Technology of China, Chengdu, China.

He has been a Research Assistant with the Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, since 2018. He has coauthored over 30 technical papers in representative journals and conference proceedings of the IEEE, AIAA, and IET. His current research interests include robot navigation, multi-sensor fusion, automatic control, and mechatronics.

Mr. Wu received the Outstanding Reviewer Award of the Asian Journal of Control. One of his papers published in the IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING was selected as an ESI Highly Cited Paper by the ISI Web of Science from 2017 to 2018.

Yuxiang Sun received the bachelor's degree from the Hefei University of Technology, Hefei, China, in 2009, the master's degree from the University of Science and Technology of China, Hefei, in 2012, and the Ph.D. degree from The Chinese University of Hong Kong, Hong Kong, in 2017.

He is currently a Research Associate with the Department of Electronic and Computer Engineering, Robotics Institute, The Hong Kong University of Science and Technology, Hong Kong. His current research interests include mobile robots, autonomous vehicles, deep learning, simultaneous localization and mapping (SLAM) and navigation, and motion detection.

Dr. Sun was a recipient of the Best Student Paper Finalist Award at the IEEE International Conference on Robotics and Biomimetics (ROBIO) 2015.

Miaomiao Wang (S'15) received the B.Sc. degree in control science and engineering from the Huazhong University of Science and Technology, Wuhan, China, in 2013, and the M.Sc. degree in control engineering from Lakehead University, Thunder Bay, ON, Canada, in 2015. He is currently pursuing the Ph.D. degree with the Department of Electrical and Computer Engineering, Western University, London, ON, Canada.

He is also a Research Assistant with the Department of Electrical and Computer Engineering, Western University. He has authored technical papers in flagship journals and conference proceedings, including the IEEE TRANSACTIONS ON AUTOMATIC CONTROL and the IEEE Conference on Decision and Control (CDC). His current research interests include multiagent systems and geometric estimation and control.

Mr. Wang received the prestigious Ontario Graduate Scholarship (OGS) at Western University in 2018.

Ming Liu (S'12–M'13–SM'18) received the B.A. degree in automation from Tongji University, Shanghai, China, in 2005, and the Ph.D. degree from the Department of Mechanical and Process Engineering, ETH Zürich, Zürich, Switzerland, in 2013, under the supervision of Prof. R. Siegwart.

During his master's study at Tongji University, he stayed one year at Erlangen-Nürnberg University, Erlangen, Germany, and the Fraunhofer Institute IISB, Erlangen, as a Master Visiting Scholar. He is currently with the Electronic and Computer Engineering, Computer Science and Engineering Department, Robotics Institute, The Hong Kong University of Science and Technology, Hong Kong, as an Assistant Professor. He is also a Founding Member of the Shanghai Swing Automation Co., Ltd., Shanghai. He is coordinating and involved in NSF Projects and National 863-Hi-Tech-Plan Projects in China. He has published many popular papers in top robotics journals, including the IEEE TRANSACTIONS ON ROBOTICS, the International Journal of Robotics Research, and the IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING. His current research interests include dynamic environment modeling, deep learning for robotics, 3-D mapping, machine learning, and visual control.

Dr. Liu was a recipient of the European Micro Aerial Vehicle Competition (EMAV'09) (Second Place), two awards from the International Aerial Robot Competition (IARC'14) as a Team Member, the Best Student Paper Award as a first author at the IEEE International Conference on Multisensor Fusion and Information Integration (MFI 2012), the Best Paper Award in Information at the IEEE International Conference on Information and Automation (ICIA 2013) as a first author, the Best Paper Award Finalist as a coauthor, the Best RoboCup Paper Award of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2013), the Best Conference Paper Award of the IEEE International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER) 2015, the Best Student Paper Finalist of the IEEE International Conference on Real-time Computing and Robotics (RCAR 2015), the Best Student Paper Finalist of ROBIO 2015, the Best Student Paper Award of the IEEE-ICAR 2017, the Best Paper in Automation Award of the IEEE-ICIA 2017, twice the Innovation Contest Chunhui Cup Winning Award in 2012 and 2013, and the Wu Wenjun AI Award in 2016. He was the Program Chair of the IEEE RCAR 2016 and the Program Chair of the International Robotics Conference in Foshan 2017. He was the Conference Chair of the International Conference on Computer Vision Systems (ICVS) 2017. He is currently an Associate Editor of the IEEE ROBOTICS AND AUTOMATION LETTERS.