© 2016 Kuocheng Wang

MEDICAL IMAGE REGISTRATION ON TUMOR GROWTH WITH TIME SERIES

BY

KUOCHENG WANG

THESIS

Submitted in partial fulfillment of the requirements for the degree of Master of Science in Systems and Entrepreneurial Engineering

in the Graduate College of the University of Illinois at Urbana-Champaign, 2016

Urbana, Illinois

Advisers:

Professor Thenkurussi Kesavadas
Professor Steve LaValle

Abstract

Data registration is a common process in medical image analysis. The goal of data registration is to solve the transformation problem of aligning multiple images. Conventionally, periodic tumor diagnosis requires understanding the growth and spread of the tumor, which doctors perform by visually inspecting multiple MRI scans taken at different stages in a time series. Due to misalignment of the patient's posture, comparing these MRI scans is tedious. This problem is often addressed using non-rigid body image registration. In this approach, features are first extracted from the original data set. There are several features one can extract, such as chamfer, line, and region features. The feature we chose for the first method was the best fit plane; for the second method, it was the principal axes. These results are later compared with the Iterative Closest Point (ICP) method.

The two main motivations are (1) to compare different image data sets to correlate different measures of anatomical structures, and (2) to aid doctors in measuring the change in dynamic structural patterns of tumor growth, brain development, etc. In this thesis, we present rigid body data registration for tumor growth, which is usually explored with non-rigid body registration. Furthermore, we demonstrate different methods of rigid body image registration on a time series of tumors. The results were qualitatively compared based on template matching of the time series tumor data.

To my parents, for their love and support.

Acknowledgments

I would like to express my deepest appreciation to my adviser, Professor Thenkurussi Kesavadas, who guided me and encouraged me when I got stuck throughout my research. I would also like to thank my other adviser, Professor Steve LaValle, who always has a sense of humor. He gave me valuable suggestions on my research. Without them, it would have been impossible to finish my thesis.

My sincere thanks also go to my labmates, Yao Li, Xiao Li, Naveen Kumar Sankaran, Pramod Chembrammel, Shrey Pareek, Alexandra Nille, Nate Speidel and Chris Widdowson, who continuously supported me spiritually throughout my life.

Besides my labmates, I would like to thank my friends from the Engineering System Design Lab. In particular, I want to thank Anand P. Deshmukh, Tinghao Guo, Daniel R. Herber, Danny J. Lohan and Jason W. McDonald for their help and company in my life.

I thank my department, ISE, the OSF Jump Simulation Center and the University of Washington. ISE provided me the opportunity of coming to UIUC and taking courses from all engineering departments. The OSF Jump Simulation Center, specifically Dr. Matthew Bramlet, Lela G. Dimonte and Brent Cross, generously provided all the image data I used for simulation. Dr. Beth Ripley from the University of Washington did all the tumor segmentation. They built a bridge between the doctors and engineers so that I could understand the problem better.

Last but not least, I would like to thank my parents and my grandparents for their unconditional love during my graduate life. I would not have been able to accomplish this thesis without their continuous love and encouragement.

TABLE OF CONTENTS

List of Figures

CHAPTER 1  Introduction
  1.1  Image Registration
  1.2  Tumor Data Representation
    1.2.1  Data Collection
    1.2.2  Tumor Data in Time Series
  1.3  Programming Library
  1.4  Challenges and Motivation
  1.5  Guide to the Remaining Chapters

CHAPTER 2  Literature Review
  2.1  Rigid Body Registration
  2.2  Deformable Registration

CHAPTER 3  Methodology
  3.1  Best Fit Plane Method
    3.1.1  Best Fit Plane
    3.1.2  Closed Form Solution of Transformation Matrix
  3.2  Principal Axes Matching
    3.2.1  Image Moment
    3.2.2  Principal Axes
    3.2.3  Transformation Matrix Computation
  3.3  Iterative Closest Point
    3.3.1  Global Match
    3.3.2  Local Match

CHAPTER 4  Implementation
  4.1  Best Fit Plane
  4.2  Principal Axes Matching
  4.3  ICP

CHAPTER 5  Result
  5.1  Best Fit Plane
    5.1.1  Registration Result
    5.1.2  Summary
  5.2  Principal Axes Matching
    5.2.1  Principal Axis
    5.2.2  Tumor and Skeleton after Registration
    5.2.3  Summary
  5.3  Iterative Closest Point
    5.3.1  Global Matching
    5.3.2  Local Matching
    5.3.3  Summary

CHAPTER 6  Conclusion

REFERENCES

List of Figures

1.1  Time series 1
1.2  Time series 2
1.3  Time series 3
1.4  Time series 4
3.1  Principal Axis [1]
3.2  Different parts in a tumor
5.1  Blood vessel
5.2  Best Fitting Method on Skeleton
5.3  Best Fitting Method on template and time series 2
5.4  Best Fitting Method on time series 2 and 3
5.5  Best Fitting Method on template and time series 4
5.6  Template principal axes
5.7  Time 2 principal axes
5.8  Time 3 principal axes
5.9  Time 4 principal axes
5.10 Principal axes registration (Time 2)
5.11 Principal axes registration (Time 3)
5.12 Principal axes registration (Time 4)
5.13 Principal axes registration (skeleton time 2)
5.14 Principal axes registration (skeleton time 2, bottom view)
5.15 Principal axes registration (skeleton time 3)
5.16 Principal axes registration (skeleton time 4)
5.17 Principal axes registration (skeleton time 4, bottom view)
5.18 Time series 2 and template
5.19 Time series 3 and template
5.20 Time 2 and 3
5.21 Part 1
5.22 Part 2
5.23 Part 4
5.24 Part 5
5.25 Part 6

CHAPTER 1

Introduction

Advances in medical imaging technology provide doctors a powerful tool to understand the tumor growth process. However, due to the misalignment of the patient's posture, comparing multiple MRI scans becomes tedious. In this thesis, we tried the Best Fit Plane method, the Principal Axes Matching method and the Iterative Closest Point method to match the sensed image to the template image.

1.1 Image Registration

Image registration is a very common process in computer vision and image processing. It is the process of finding the transformation between two images. Image registration has broad applications in medical imaging, such as tumor growth comparison and functional brain mapping. It can assist doctors in surgery planning and surgery training. Image registration can be classified into different categories based on the image data and the registration method. For example, there are intra-modality and inter-modality methods, spatial domain and frequency domain methods, rigid body and non-rigid body registration methods, etc.

Intra-modality registration registers images that are taken in the same modality, for example, registering an MRI image with another MRI image. Inter-modality registration registers images from different modalities. The reason we need intra-modality registration is easy to see. The reason we need inter-modality registration is that each imaging modality has its own advantages and disadvantages. By comparing different scans of the same organ, the doctor can get a better understanding of the organ that needs treatment. Spatial domain methods tend to match the image intensity patterns, while frequency domain methods tend to match the images in the frequency domain. People use the term rigid body to describe something that is not deformable. In rigid body registration, this means that the shapes in the images are similar. To be more accurate, shape can be further defined as the normalized shape; in other words, after scaling, the shapes are very similar to each other. Non-rigid body registration is the opposite of rigid body registration. It starts from a simple object and deforms it into a more complex one. For example, the human brain changes a lot at a young age, and registering brains of different ages can be seen as a non-rigid body transformation problem. In our work, the final step is to use the same transformation to transform the data from the other time series to the template.

1.2 Tumor Data Representation

Lung cancer is a primary cause of cancer death in the US. Based on [2], the survival rate is only 15%. Early treatment can be very important to a patient. In our case, the lung tumor data were collected from a 50-year-old patient who reported a 40 pack-year history of smoking.

1.2.1 Data Collection

There are different modalities that can be used for collecting data of a tumor from a person, like computed tomography (CT), magnetic resonance spectroscopy (MRS) and positron emission tomography (PET). The data we got were stereolithography (STL) files for different time series, which belong to OSF Saint Francis Medical Center. The data were initially taken from MRI scans. The doctors then did a 3D reconstruction using medical imaging software called OsiriX. This gives a 3D block of data, which contained the lung, skeleton, bones, fat, heart, catheter and muscle. To extract a 3D organ or a tumor from the block, the doctor chose a threshold, which was used to separate the different parts from the data. The data were saved as separate STL files.

1.2.2 Tumor Data in Time Series

The purpose of registration is to transform all the tumors from the different time series to the first time series. In other words, the first time series is the target. Figures 1.1 to 1.4 show the original data. The different time series of the tumor suggest that the tumor is shrinking.

Figure 1.1: Time series 1.

Figure 1.2: Time series 2.

Figure 1.3: Time series 3.

Figure 1.4: Time series 4.

1.3 Programming Library

The primary programming language we chose was C++. There are two main reasons for this. The first is that there is a wide range of graphics libraries available for C++. The second is that C++ is faster than other languages such as Java and Python.

The graphics library we used was the Visualization Toolkit (VTK). VTK is an open source library and it supports a large variety of visualization algorithms. Many things become convenient when using VTK. For example, reading a stereolithography (STL) file is a built-in function, so one doesn't have to write one's own reader. VTK is used in commercial applications; software such as OsiriX, ParaView and 3D Slicer is developed using VTK. Another library we used was Eigen, which is a linear algebra library. It is fast, versatile and reliable. One can start without knowing anything about Eigen and be able to use it right away.
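As a small illustration (a sketch only; the file name below is a placeholder, not one of our data files), reading an STL surface with VTK takes just a few lines:

```cpp
#include <vtkSmartPointer.h>
#include <vtkSTLReader.h>
#include <vtkPolyData.h>
#include <iostream>

int main() {
  // Read a tumor surface saved as STL ("tumor_time1.stl" is a placeholder name).
  vtkSmartPointer<vtkSTLReader> reader = vtkSmartPointer<vtkSTLReader>::New();
  reader->SetFileName("tumor_time1.stl");
  reader->Update();

  vtkPolyData* mesh = reader->GetOutput();
  std::cout << "Loaded " << mesh->GetNumberOfPoints() << " vertices and "
            << mesh->GetNumberOfCells() << " triangles." << std::endl;
  return 0;
}
```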

1.4 Challenges and Motivation

The first challenge is the error in the 3D model. There are two main sources of error [3]. The first is MRI imaging error: during the MRI scanning process, there are many factors that can create noise. The other is model generation error: many techniques can be used for surface model generation, and each of them has its own advantages and disadvantages. Geiger proposed a method for analyzing model generation errors in [4]. Both of these errors affected the 3D model we obtained and hence affected the registration result to some extent. Another challenge is due to the change in the shape of the tumor across time periods. There has to be a good way to describe the position and orientation of the data so that, when matching them, the result makes sense to doctors.

There are two main motivations for tumor data registration. The first is to make it possible for doctors to compare different image data sets to better understand the status of the tumor. If the tumor grows, it might indicate that the treatment is not appropriate. The second motivation is that it provides a common frame of reference for all the tumors, so scientists can find ways to describe the change in the dynamic pattern of the tumor without having to worry about its spatial change.

1.5 Guide to the Remaining Chapters

Chapter 2: This chapter reviews the literature on data registration, separated into rigid body registration and non-rigid body registration.

Chapter 3: This chapter describes the theory behind the Best Fit Plane method, the Principal Axes Matching method and the ICP method.

Chapter 4: This chapter includes the algorithms of each method. They are listed in detail for anyone who wants to implement these methods.

Chapter 5: This chapter contains all the results from implementing the three methods.

Chapter 6: This chapter concludes the thesis by summarizing the advantages and disadvantages of the three methods and gives possible ways to improve them.

CHAPTER 2

Literature Review

2.1 Rigid Body Registration

Rigid body registration has been developed for a long time. It is a comprehensive process, in the sense that in order to do registration, one has to go through the steps of (1) preprocessing the data, (2) extracting the features from the image, (3) finding the correspondence of the features between the source image and the reference image, and (4) finding the correct transformation. Among these four main steps, the last three are generally considered more challenging.

Many methods have been developed for feature selection. There are point-based methods, curve- and surface-based methods, moment- and principal-axes-based methods, and correlation methods. In the traditional computer vision literature, point features are also known as corner points, control points and interest points. One of the most obvious features is a corner in the image, which is calculated from the inertia matrix [5]. Other than this feature, there are many other feature points in medical images. Based on how the control points are selected, they can also be separated into intrinsic, extrinsic or both [6]. The intrinsic points are usually points that are derived from the specific images [7]. Hill et al. list many possible features, such as “a point anatomical structure, the apical turn of the cochlea, the intersection of two linear structures, the blood vessel bifurcation or confluence, a particular topographic feature on a surface” [8], [7], [9]. Their technique to combine MRI and CT images uses landmarks. The extrinsic points are labeled by the users [10]. The extrinsic point method requires a marker, staples for example. Researchers have used glass beads [11], chrome alloy spheres [12], and polyvinyl alcohol gel markers [13].

Another method is the moment-based method. One can derive the principal axes from the moment of inertia. This idea comes from Hu [14] and has been popular since then. Faber and Stokely [15] used normalized principal axes for rigid body registration. Alpert developed a low computational cost technique using principal axes for registering brain volume images [16]. The problem with principal axes is that they are very sensitive to missing data. Arata and Dhawan [17] implemented an iterative technique to minimize this sensitivity.

The third method is the curve or surface method. In 2D, the feature is a curve, while in 3D, it is a surface. Gueziec [18] generated curves automatically from the intensity of the image. Pelizzari and Chen [19] extracted surface contours from the source image and fit them to the surface contours from the target image.

Feature correspondence is another main topic in image registration. There are two types of matching problems: one is complete matching and the other is partial matching. If the number of points in the template and source images is the same, it is called complete matching; otherwise, it is partial matching. Vinod and Ghose [20] implemented an artificial neural network to solve the point pattern matching problem after treating the original problem as an integer programming problem. Li et al. [21] proposed a k-d tree method to find the matching pattern. Morgera and Cheong [22] introduced a hybrid and iterative method to adapt to patterns of different dimensions. Due to the large number of points, finding the correspondences is always time consuming.

The final important step is finding the transformation between the two data sets. Horn [23] proposed a closed-form solution of the transformation using quaternions. Arun [24] came up with a closed-form least-squares solution for the transformation matrix given the corresponding points between two data sets. Horn, Hilden, and Negahdaripour [25] found a closed-form solution of orientation using orthonormal matrices. Walker, Shao, and Volz [26] developed a dual quaternion method to estimate 3D location parameters. When the data from the real world are noisy, all of these algorithms demonstrate similar performance and stability [27].

2.2 Deformable Registration

Deformable registration is sometimes known as nonlinear registration. It is commonly used in tumor growth registration. Deformable registration differs from rigid body registration in the sense that the tumor size changes with time; hence non-linear functions are usually used for registration. Commonly used nonlinear functions are polynomial functions: quadratic [28], cubic [29], and higher order polynomials [30]. Other functions such as exponential warping [31] and thin-plate splines [32] have also been explored. In the medical imaging world, a large amount of work has been done on finding a non-linear transformation to fit the source image to the target image [33], [34], [35], [36], [37], [38]. However, these applications are developed for general fitting, not specifically for tumor growth, and the lack of anatomical detail causes poor registration. Hogea et al. [39] developed a PDE-constrained and adjoint-based optimization formulation for tumor growth and used it to help tumor registration. Zacharaki et al. [40] proposed a deformable approach to register a brain atlas with images of brain tumor patients; it uses an optimization approach to estimate tumor-related parameters. Several other non-rigid registration methods have also been published [41], [42], [43].

The challenge of tumor registration is that the tumor is a deformable body. In our case, the doctors want to compare the tumor change between different time series without losing the original shape information. This eliminates the possibility of using a non-rigid transformation. We decided to use rigid body registration to reach this goal.

CHAPTER 3

Methodology

In this chapter, three methods are presented. The Best Fit Plane method uses a linear fit plane as a feature of the tumor data; the Principal Axes method uses the principal axes as a feature of the data. The features are then used for calculating the transformation matrix that transforms the sensed image to the template image. Both of these methods use the closed form solution of the transformation matrix, which is introduced in Section 3.1.2. The Iterative Closest Point (ICP) algorithm is one of the earliest developed algorithms for data registration. It is used as a comparison for the Best Fit Plane method and the Principal Axes method.

3.1 Best Fit Plane Method

The best fit plane belongs to a more general class of methods, curve fitting. Curve fitting is a process of finding a curve or a mathematical function to fit a set of data points [44]. Curve fitting includes both interpolation and extrapolation. Extrapolation means that the estimated curve doesn't cross all the training points, while interpolation means that the curve goes through all the points. Our data is a 3D point cloud and we project it onto a plane in 3D, which is why we call this method the Best Fit Plane method.

There are many function approximation techniques, such as polynomial response surfaces, radial basis functions (RBF), artificial neural networks (ANN), etc. For medical data, function approximation techniques can be used to correlate the x, y coordinates of a point with its z coordinate. The polynomial response surface is an extrapolation method, while RBF is an interpolation method. The artificial neural network is a method inspired by biological neural networks. There are advantages and disadvantages to using these methods. Polynomial fitting is easy to use, and most functions can be approximated by a polynomial; however, higher order polynomials can be oscillatory. RBF is guaranteed to pass through all the training points, but it can be computationally expensive. When the cost function and learning algorithm are chosen appropriately, ANN can be very robust, but ANN requires the user to choose many parameters, like the number of layers, the number of units in a layer and the architecture of the network. One has to decide what to use based on one's own application.

3.1.1 Best Fit Plane

Curve fitting is a usual technique engineers and scientists like to use when describing the behavior of data from either simulation or experiment. The idea we came up with was to use a best fit plane to fit the data, and then find the transformation matrix that transforms the plane to the template plane. The best fit plane is a polynomial fit; in our case, it is a first order linear fit. Consider the data as a set {P_i}, where i goes from 1, ..., n and n is the number of data points. P_i is a 3 × 1 vector represented in 3D space as [X_i Y_i Z_i]^T. The equation of the best fit plane is

Z_i = a X_i + b Y_i + c    (3.1)

The coefficients {a, b, c} of the equation can be computed from 3 non-collinear points. Since we have many more points, finding {a, b, c} essentially becomes an optimization problem; that is, finding the {a, b, c} that minimizes the error between the original points and the points projected onto the plane. Let {p_i} be the original point set, where p_i is one point in the set, represented as [x_i y_i z_i],

E^2 = \sum_{i=1}^{n} \| z_i - (aX_i + bY_i + c) \|^2    (3.2)

In order to solve this problem, let’s rewrite the equation as

\begin{bmatrix} X_i & Y_i & 1 \end{bmatrix} \begin{bmatrix} a \\ b \\ c \end{bmatrix} = z_i    (3.3)

The equation now has the matrix multiplication form Ax = z. The coefficients a, b, c are solved for by taking the pseudo-inverse of A:

x = (A^T A)^{-1} A^T z    (3.4)

As long as A^T A is full rank, a, b, c can be uniquely found.
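A minimal Eigen sketch of this least-squares fit (the function and variable names are ours, not from the thesis code):

```cpp
#include <Eigen/Dense>

// Least-squares fit of the plane z = a*x + b*y + c to an n x 3 matrix of
// points (one point [x y z] per row). Returns the coefficients [a, b, c]^T.
Eigen::Vector3d fitBestPlane(const Eigen::MatrixXd& points) {
  const int n = static_cast<int>(points.rows());
  Eigen::MatrixXd A(n, 3);
  A.col(0) = points.col(0);             // X_i
  A.col(1) = points.col(1);             // Y_i
  A.col(2) = Eigen::VectorXd::Ones(n);  // constant term
  Eigen::VectorXd z = points.col(2);    // Z_i

  // Normal equations x = (A^T A)^{-1} A^T z, as in equation (3.4).
  // (A QR-based solve such as A.colPivHouseholderQr().solve(z) is more
  //  robust numerically, but this form mirrors the text.)
  return (A.transpose() * A).ldlt().solve(A.transpose() * z);
}
```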

3.1.2 Closed Form Solution of Transformation Matrix

Denote two 3D data sets {p_i} and {p'_i}, i = 1, 2, ..., n, where p_i and p'_i are 3 × 1 vectors. Assume that the shapes in the two images are very similar and of the same size. Denote R the rotation matrix and T the translation vector that transform {p_i} to {p'_i}, where R is a 3 × 3 matrix and T is a 3 × 1 vector. The transformation of {p_i} to {p'_i} is formulated as

p'_i = R p_i + T    (3.5)

Usually there is noise, so it is not possible to find an R and T that satisfy the above equation exactly. The goal of image registration is to find the R and T that minimize

\Sigma^2 = \sum_{i=1}^{n} \| p'_i - (R p_i + T) \|^2    (3.6)

Arun and Huang [24] found the closed form solution for this optimization

problem. The following is a brief summary of how to find R and T . The

whole derivation is a little bit involved. Readers are advised to read the

original paper.

Define p and p' as the centroids of the two data sets,

p \triangleq \frac{1}{N} \sum_{i=1}^{N} p_i    (3.7)

p' \triangleq \frac{1}{N} \sum_{i=1}^{N} p'_i    (3.8)

let

q_i \triangleq p_i - p    (3.9)

q'_i \triangleq p'_i - p'    (3.10)

Let H be the covariance matrix,

H \triangleq \sum_{i=1}^{N} q_i {q'_i}^T    (3.11)

The SVD of H is

H = U \Lambda V^T    (3.12)

Here U and V are 3 × 3 orthonormal matrices, and Λ is a 3 × 3 positive semi-definite diagonal matrix. The rotation matrix is hence

R = V U^T    (3.13)

The translation vector is

T = p' - R p    (3.14)

Notice that if the determinant of R is +1, then the rotation matrix is the one we want. If it is -1, R is a reflection and the solution should not be accepted. However, this case doesn't usually happen.
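A compact Eigen sketch of this closed-form solution (our own helper, not the thesis code; it assumes the two point sets are already in one-to-one correspondence, column i with column i):

```cpp
#include <Eigen/Dense>
#include <utility>

// Closed-form rigid transform following Arun et al. [24]: given corresponding
// 3D points P and P' (3 x N matrices), find R, T minimizing
// sum_i || p'_i - (R p_i + T) ||^2.
std::pair<Eigen::Matrix3d, Eigen::Vector3d>
rigidTransform(const Eigen::Matrix3Xd& P, const Eigen::Matrix3Xd& Pp) {
  // Centroids p and p' (equations 3.7, 3.8).
  Eigen::Vector3d c  = P.rowwise().mean();
  Eigen::Vector3d cp = Pp.rowwise().mean();

  // Covariance H = sum_i q_i q'_i^T (equation 3.11).
  Eigen::Matrix3d H = (P.colwise() - c) * (Pp.colwise() - cp).transpose();

  // SVD of H (equation 3.12) and R = V U^T (equation 3.13).
  Eigen::JacobiSVD<Eigen::Matrix3d> svd(H, Eigen::ComputeFullU | Eigen::ComputeFullV);
  Eigen::Matrix3d R = svd.matrixV() * svd.matrixU().transpose();
  if (R.determinant() < 0) {
    // Reflection case: the thesis rejects the solution; a common repair
    // is to flip the last column of V instead.
    Eigen::Matrix3d V = svd.matrixV();
    V.col(2) *= -1.0;
    R = V * svd.matrixU().transpose();
  }

  // Translation T = p' - R p (equation 3.14).
  Eigen::Vector3d T = cp - R * c;
  return {R, T};
}
```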

3.2 Principal Axes Matching

The moment of inertia is commonly used in physics and other engineering fields, referring to a body's resistance to change in its rotational motion. In computer vision, an image moment is a weighted average of the pixels' intensities, or a function of them, as described in Section 3.2.1. The orientation of an image can be extracted from the image moments in the form of the principal axes. After obtaining the principal axes for both the sensed image and the template image, the transformation that matches the sensed image's principal axes to the template's principal axes can be used to transform the sensed image to the template image.

3.2.1 Image Moment

Based on [45], [46], the 3D ordinary moment m_{ijk} is defined as follows:

m_{ijk} = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x^i y^j z^k f(x, y, z) \, dx \, dy \, dz    (3.15)

where i, j, k are integers, i + j + k is the order of the moment, and f is a binary function. In most laser scans or medical image scans, the 3D object is approximated by a tessellation of basic shapes, such as triangles or tetrahedra. Hence the discretized formulation is more useful. Let A be the set that contains all the triangles, A_1, A_2, A_3, ..., A_N, where N is the number of elements. f(x, y, z) is defined as

f(x, y, z) = \begin{cases} 1, & (x, y, z) \in A \\ 0, & \text{elsewhere} \end{cases}    (3.16)

Considering only the tetrahedral case, the volume integral can be rewritten as a sum of surface integrals over the individual elements:

m_{ijk} = \sum_{t=1}^{N} (m_{ijk})_t = \sum_{t=1}^{N} \iint_{A_t} x^i y^j [z(x, y)]^k \, ds    (3.17)

(m_{ijk})_t is the ordinary moment of each individual element. The moments contain information about image properties. The following is the definition of the first and second order moments in standardized form [1]:

m_{100} = m_{010} = m_{001} = 0 (standardized position)

m_{110} = m_{101} = m_{011} = 0 (standardized orientation)

The moments we used for extracting rotation information are the second order moments, i.e., i + j + k = 2. They can be used to calculate the principal axes, as we will see later.
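Since we later compute these quantities with SolidWorks rather than in code, the following is only a hedged sketch: it approximates the second-order central moments by treating the mesh vertices as equally weighted point samples, which is our own simplification of the surface integral in equation (3.17); the struct and function names are ours.

```cpp
#include <Eigen/Dense>

// Second-order central moments m_200, m_020, m_002, m_110, m_101, m_011,
// approximated from an n x 3 matrix of vertices treated as unit point masses.
// This is a point-sample stand-in for the surface integral in equation (3.17).
struct SecondMoments { double m200, m020, m002, m110, m101, m011; };

SecondMoments secondOrderMoments(const Eigen::MatrixXd& V) {
  Eigen::RowVector3d centroid = V.colwise().mean();   // standardized position
  Eigen::MatrixXd C = V.rowwise() - centroid;         // centered coordinates
  SecondMoments m;
  m.m200 = C.col(0).squaredNorm();
  m.m020 = C.col(1).squaredNorm();
  m.m002 = C.col(2).squaredNorm();
  m.m110 = C.col(0).dot(C.col(1));
  m.m101 = C.col(0).dot(C.col(2));
  m.m011 = C.col(1).dot(C.col(2));
  return m;
}
```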

3.2.2 Principal Axes

The principal axes can be used to describe the orientation of an object. Fig. 3.1 is an example of the principal axes. The principal axes work well even when the shape of the object is not symmetric.

Figure 3.1: Principal Axis [1].

Denote {u_1, u_2, u_3} the principal axes. When the 3D object is centered at the origin, the principal axes are defined as the eigenvectors of the inertia matrix [46], [15]:

I = \begin{bmatrix} I_{xx} & -I_{xy} & -I_{xz} \\ -I_{yx} & I_{yy} & -I_{yz} \\ -I_{zx} & -I_{zy} & I_{zz} \end{bmatrix}

where

I_{xx} = m_{020} + m_{002}, \quad I_{yy} = m_{200} + m_{002}, \quad I_{zz} = m_{200} + m_{020}    (3.18)

I_{xy} = I_{yx} = m_{110}, \quad I_{xz} = I_{zx} = m_{101}, \quad I_{yz} = I_{zy} = m_{011}    (3.19)

Ambiguity Elimination

Based on the previous definition, {u_1, u_2, u_3} are the eigenvectors of the I matrix. Even if {u_1, u_2, u_3} are normalized eigenvectors, the sign of each u_i cannot be uniquely determined. To see why,

I u_i = \lambda u_i, \quad I(-u_i) = \lambda (-u_i)    (3.20)

There are 8 sign combinations of the eigenvectors in total, {±u_1, ±u_2, ±u_3}. This suggests that there are 8 orientations that can describe the 3D object.

Galvez and Canton [1] proposed a way of selecting unique axes and eliminating the ambiguity. The unique axes are {v_1, v_2, v_3}. First, the requirement of a right-handed system eliminates 4 of the combinations {±u_1, ±u_2, ±u_3}. Then a heuristic procedure was developed to get rid of 3 of the remaining 4 combinations.
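The following Eigen sketch is our own illustration of the idea, not the procedure of Galvez and Canton [1] in full: it builds the inertia matrix of equations (3.18)–(3.19) from the vertex-based moment approximation introduced earlier (repeated inline so the sketch stands alone), extracts the eigenvectors, and applies the simple furthest-point rule and right-handed-system requirement described in Section 4.2.

```cpp
#include <Eigen/Dense>

// Principal axes of a point cloud V (n x 3 vertices): sign of the first two
// axes fixed by the furthest-point rule, third chosen so that v1 x v2 = v3.
Eigen::Matrix3d principalAxes(const Eigen::MatrixXd& V) {
  // Center the data (standardized position).
  Eigen::RowVector3d centroid = V.colwise().mean();
  Eigen::MatrixXd C = V.rowwise() - centroid;

  // Second-order central moments, vertices as equally weighted samples.
  double m200 = C.col(0).squaredNorm(), m020 = C.col(1).squaredNorm(),
         m002 = C.col(2).squaredNorm(), m110 = C.col(0).dot(C.col(1)),
         m101 = C.col(0).dot(C.col(2)), m011 = C.col(1).dot(C.col(2));

  // Inertia matrix from equations (3.18) and (3.19).
  Eigen::Matrix3d I;
  I <<  m020 + m002, -m110,        -m101,
       -m110,         m200 + m002, -m011,
       -m101,        -m011,         m200 + m020;

  Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> es(I);
  Eigen::Vector3d u1 = es.eigenvectors().col(0);
  Eigen::Vector3d u2 = es.eigenvectors().col(1);

  // Point each axis toward the vertex with the largest projection along it.
  auto orient = [&C](const Eigen::Vector3d& u) -> Eigen::Vector3d {
    Eigen::VectorXd proj = C * u;
    Eigen::Index idx;
    proj.cwiseAbs().maxCoeff(&idx);
    return (proj(idx) >= 0) ? u : Eigen::Vector3d(-u);
  };
  Eigen::Vector3d v1 = orient(u1);
  Eigen::Vector3d v2 = orient(u2);
  Eigen::Vector3d v3 = v1.cross(v2);   // right-handed system

  Eigen::Matrix3d axes;
  axes << v1, v2, v3;                  // principal axes as columns
  return axes;
}
```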

3.2.3 Transformation Matrix Computation

After extracting the principal axes from both images, we need to find a way to transform the sensed image's axes to the template's axes. Given two sets of axes X_1 and X_2, supposing they share the same origin, the rotation matrix that transforms one set of axes to the other is just a linear transformation:

X_1 = R X_2    (3.21)

where R, X_1, X_2 are 3 × 3 matrices. Because X_1 and X_2 are composed of principal axes, they are full rank and the inverse of either matrix exists, so the R matrix can be found by:

X_1 X_2^{-1} = R    (3.22)
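In code this is a one-line Eigen operation (a sketch; the matrices are assumed to hold the three axes as columns):

```cpp
#include <Eigen/Dense>

// Rotation aligning the sensed axes X2 with the template axes X1 (eq. 3.22).
// Both matrices hold the three principal axes as columns and are full rank.
Eigen::Matrix3d axesRotation(const Eigen::Matrix3d& X1, const Eigen::Matrix3d& X2) {
  return X1 * X2.inverse();
}
```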

3.3 Iterative Closest Point

ICP was first introduced by [47]. The idea is to first find the corresponding points between the two data sets. This is achieved by using the Euclidean distance to measure the distance between each point in the source data and all the points in the template data, and by assigning the closest point from the source data to each point in the template file. Then the algorithm iteratively estimates the transformation and looks for new corresponding points until all the points are close enough. The way we calculate the transformation matrix is the same as in the best fit plane method.

3.3.1 Global Match

We call ICP a global matching method because it takes the whole source data set and matches it to the template. This should not be confused with global optimization: in an optimization context, ICP can only find a local minimum. Here, global matching refers to a local minimum of the error over the entire source and sensed images. We later implemented a local ICP, described in the next section, which finds a local minimum for each part of the tumor separately.

3.3.2 Local Match

In the previous method, ICP was used as a global matching method because it minimizes the distance error between each point in the source image and the target image. Another possible way of computing the transformation is to use a local matching method. We can separate the target image into different parts and find a corresponding transformation for each. Figure 3.2 shows the labeling of the template image; each label refers to a part of the tumor. This can be done because sometimes the doctor does not care about the relative position of the parts within the tumor, but only wants to compare the corresponding parts of the tumor between time series. This requires that the tumor can be separated into different parts clearly and that one can find the corresponding parts between time series.

Figure 3.2: Different parts in a tumor.

In Figure 3.2, there are 6 parts in the tumor template. Tumors in other

time series will be separated into different parts based on the similarities

between the template’s parts and the time series’ parts.

CHAPTER 4

Implementation

3D tumor data collected from patients are used for the purpose of this research. The methodology discussed in the previous chapter was used to analyze the data sets.

4.1 Best Fit Plane

We used 4 vertices to represent the plane, as shown in Fig 5.1, although 3 vertices are sufficient. In order to find one vertex of the plane, (X_1, Y_1, Z_1), first calculate the center of the tumor, (x_c, y_c, z_c), then add two arbitrary constants to x_c, y_c to get the values of X_1 and Y_1, and finally use equation 3.1 to find Z_1 (a code sketch of this construction follows the steps below). The other 3 points are calculated in the same way. The following are the steps for finding the transformation suggested by Arun [24], with some adjustments based on our implementation:

Step 1: Load the STL file of the organ.

Step 2: Calculate the centroids of the two data sets, p and p'.

Step 3: Find the best fit plane; more specifically, find the 4 vertices used to form the best fit plane for each of the two data sets.

Step 4: Find {q_i}, {q'_i} in equations 3.9 and 3.10 using the 4 vertices and the centroids.

Step 5: Calculate the H matrix based on equation 3.11.

Step 6: Find the SVD of H.

Step 7: Calculate det(R). If det(R) = +1, keep R. If det(R) = -1, abandon the solution and output an error.

Step 8: Calculate T (equation 3.14).

Step 9: Transform the source data to the target using the transformation found with the best fit plane.
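The following is a hedged sketch of the vertex construction in Steps 2 and 3 (the offset value and names are ours; any constant offset works):

```cpp
#include <Eigen/Dense>
#include <vector>

// Build 4 vertices on the fitted plane z = a*x + b*y + c around the tumor
// centroid (xc, yc, zc). The offset d is an arbitrary constant, as in Step 3.
std::vector<Eigen::Vector3d> planeVertices(const Eigen::Vector3d& centroid,
                                           const Eigen::Vector3d& abc,
                                           double d = 10.0) {
  auto zOnPlane = [&abc](double x, double y) {
    return abc(0) * x + abc(1) * y + abc(2);   // equation (3.1)
  };
  std::vector<Eigen::Vector3d> v;
  for (double sx : {+1.0, -1.0})
    for (double sy : {+1.0, -1.0}) {
      double x = centroid(0) + sx * d;
      double y = centroid(1) + sy * d;
      v.emplace_back(x, y, zOnPlane(x, y));
    }
  return v;
}
```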

The reason we thought this idea would work was that we assumed the tumor shape didn't change too much. Even though the tumor data is not inherently linear, the best fit plane can represent the features of the tumor to some extent. If the tumor doesn't change at all, the linear fit plane will not change either.

4.2 Principal Axes Matching

The following are the steps for eliminating the ambiguity of the principal axes:

Step 1. Find the inertia matrix of the tumor data, I.

Step 2. Find the eigenvectors of I.

Step 3. Find the furthest point P1, from the centroid to the object’s surface

along u1 direction. v1 is directed from the centroid to P1.

Step 4. Find the furthest point P2, from the centroid to the object’s surface

along u2 direction. v2 is directed from the centroid to P2.

Step 5. The third eigenvector, v3, is selected so that the final coordinate

system is right-handed. Mathematically speaking,

v_1 × v_2 = v_3 (cross product)    (4.1)

Notice that this algorithm does not work for a perfectly symmetric object because the furthest points along ±u_1 are the same. The same holds for ±u_2 and ±u_3.

The easiest way to find I is to use CAD software like SolidWorks.

However, the data files we got were STL files, which only give the vertex and surface information of the patient's tumor and organs. SolidWorks is not able to calculate the mass properties directly from the STL file, so we have to convert the STL file to a solid body. In SolidWorks, there is an option to import an STL file as a solid body. Another thing we need to pay attention to is that the CAD software is only able to read an STL file and convert it to a solid body when it has at most a few thousand points. Medical image data are usually too large for the CAD software, hence it is necessary to do sub-sampling before importing the data into the CAD software. The software we used for sub-sampling was Blender. After importing the STL file into Blender, choose "edit mode". Under the "Tools" bar, there is an option called "Remove"; choose "Remove Doubles". You can set the "Merge Distance", which controls how far apart vertices can be and still be merged together. We were very careful when choosing the distance: if the distance was too big, we would lose many points and the geometry could not describe the original shape accurately; if the distance was too small, the number of vertices left was still large and SolidWorks wouldn't be able to handle them.

In summary, these are the basic steps:

Step 1. Import the STL file into Blender.

Step 2. Do sub-sampling on the original data, making sure there are sufficient points left for calculating the inertia matrix. Save the result as a new STL file.

Step 3. Import the STL file into SolidWorks as a "Solid Body".

Step 4. Under "Tools", choose "Section Properties".

Step 5. Find the principal axes by using the values of the moment of inertia of the area at the centroid.

Step 6. Calculate the centroids of the sensed image and template image data sets.

Step 7. Translate both the tumor from the source file and the tumor from the template file to the origin.

Step 8. Calculate the rotation matrix R with equation 3.22.

Step 9. Rotate the sensed image with the rotation matrix R.

4.3 ICP

ICP is the easiest algorithm we tried. There is a built-in function in VTK which can be called directly (a usage sketch is given after the steps below). In case anyone is curious about the actual algorithm, we list it here:

Step 1. Find the closest point in the target data corresponding to each source point (there can be multiple target points corresponding to one source point).

Step 2. Estimate the rotation matrix and translation to transform the

source image to the target image.

Step 3. Use the transformation found from the previous step and transform

the source image to the target image.

Step 4. Go back to the first step if the error is larger than the tolerance.
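The following is a hedged sketch of how VTK's built-in ICP class can be driven; the file names are placeholders and the exact options used in our experiments may differ:

```cpp
#include <vtkSmartPointer.h>
#include <vtkSTLReader.h>
#include <vtkSTLWriter.h>
#include <vtkIterativeClosestPointTransform.h>
#include <vtkLandmarkTransform.h>
#include <vtkTransformPolyDataFilter.h>

int main() {
  // Source (sensed) and target (template) surfaces; names are placeholders.
  auto srcReader = vtkSmartPointer<vtkSTLReader>::New();
  srcReader->SetFileName("tumor_time2.stl");
  srcReader->Update();
  auto tgtReader = vtkSmartPointer<vtkSTLReader>::New();
  tgtReader->SetFileName("tumor_template.stl");
  tgtReader->Update();

  // ICP restricted to a rigid-body transform, roughly Steps 1-4 above.
  auto icp = vtkSmartPointer<vtkIterativeClosestPointTransform>::New();
  icp->SetSource(srcReader->GetOutput());
  icp->SetTarget(tgtReader->GetOutput());
  icp->GetLandmarkTransform()->SetModeToRigidBody();
  icp->SetMaximumNumberOfIterations(100);
  icp->StartByMatchingCentroidsOn();   // helps when the two objects start far apart
  icp->Update();

  // Apply the estimated transform to the source and save the result.
  auto filter = vtkSmartPointer<vtkTransformPolyDataFilter>::New();
  filter->SetInputData(srcReader->GetOutput());
  filter->SetTransform(icp);
  filter->Update();

  auto writer = vtkSmartPointer<vtkSTLWriter>::New();
  writer->SetFileName("tumor_time2_registered.stl");
  writer->SetInputConnection(filter->GetOutputPort());
  writer->Write();
  return 0;
}
```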

In the case of the Local Matching algorithm, the first step is to separate the tumor into different parts. This can be done with an STL manipulation tool. The tool we chose was Meshmixer, a very powerful mesh manipulation tool. We can select different clusters of meshes and save them as individual STL files. Meshmixer also has other functions such as automatic alignment of surfaces, plane cuts, hollowing, etc.

The disadvantage of this algorithm is that it takes a very long time to run and it is likely to fall into a local optimum. The runtime of ICP is O(n^2). We used an Intel(R) Core(TM) i7-5820K CPU with 16.0 GB of RAM and a 64-bit Windows operating system. With this system, it took more than 4 hours to run. After running the algorithm, we saved the data as STL files and visualized them in Blender.

CHAPTER 5

Result

The results from the implementation are presented in this chapter. The way we judge whether a registration is good is by looking at how well each part matches between the template and the organ image. Specifically, there are two components in image registration: one is translation and the other is rotation. Translation is mainly evaluated by checking whether the core part (the biggest part) of the tumor matches. The orientation is more difficult to evaluate because the whole tumor changes its shape with time. We compared it by looking at how well the center and the other parts matched, because that can be an indication of the orientation.

5.1 Best Fit Plane

Figure 5.1 is an example of what the clipping plane looks like for the blood vessel. The clipping plane is just the equation z(x, y). We set the template clipping plane and the source clipping plane to the same size so that during the transformation we don't have to worry about the scale of the plane.

Figure 5.1: Blood vessel.

5.1.1 Registration Result

Even though the main purpose of the thesis is not registering organs, a registration using the best fit plane for the skeleton was carried out.

Figure 5.2: Best Fitting Method on Skeleton.

In Figure 5.2, the dark blue skeleton is the template and the lighter one is

the second time series. The spine and ribs in the two images do not align perfectly.

Figure 5.3: Best Fitting Method on template and time series 2.

In Fig 5.3, the brownish tumor is the template and the other is time series 2. The angle of the alignment is fine, but the translation is not accurate. The template could be moved down to match better.

Figure 5.4: Best Fitting Method on time series 2 and 3.

In Figure 5.4, the green tumor is time series 3 and the other is time series 2. The translation is not accurate. Time series 3 should be moved down to match better.

Figure 5.5: Best Fitting Method on template and time series 4.

In Fig 5.5, the dark purple tumor is the template and the other is time series 4. The reason the template tumor looks different from the previous figures is that the image was taken at a different angle. The core of the tumor matches well, but the other little clusters don't match.

5.1.2 Summary

Best Fit Plane is a naive way of finding the features of the data. The underlying assumption is that the shape of the image does not change too much between time series. Ideally, if the segmentation from the MRI image is good enough that there is little noise and the shape is well preserved, the best fit plane should be a good representation of the spatial location of the data throughout the different time series. In the skeleton scan, there are some extra parts in the template, so the best fit plane does not match exactly between the two time series. The tumor registration results show that the translation does not work well. It is hard to judge rotation because the shape of the tumor changed. There are many parts in a tumor; matching the core part of the tumor is easy, but matching all the clusters is hard. As we can see, none of the above images match perfectly. Another important thing to notice is that using a linear plane to describe a non-linear data set is itself not recommended in most applications. The advantage of this method is that it only uses 3 points from each data set as the feature and avoids the need for iterative registration, which saves a lot of computation time.

5.2 Principal Axes Matching

The following sections demonstrate principal axes for each tumor and the

results of registration.

5.2.1 Principal Axis

Figures 5.6 to 5.9 show the principal axes for all the tumors.

Figure 5.6: Template principal axes.

Figure 5.7: Time 2 principal axes.

Figure 5.8: Time 3 principal axes.

Figure 5.9: Time 4 principal axes.

5.2.2 Tumor and skeleton after registration

The following are the results after registration. The white tumor represents the template tumor, and red represents the tumors in the other time series.

Figure 5.10: Principal axes registration (Time 2).

In Figure 5.10, the core part of the tumor matches well, but the other small parts do not match at all. This is because the principal axes of the two images don't correspond consistently: the green principal axis corresponds to the cluster which is pointed to by the red axis in time series 2.

Figure 5.11: Principal axes registration (Time 3).

Time series 3 (Figure 5.11) matches the template well position-wise, but not in orientation. It has the same problem as time series 2, namely the inconsistency between the template's principal axes and the sensed image's principal axes.

Figure 5.12: Principal axes registration (Time 4).

The template and time series 4 (Fig 5.12) also match well in position, but not in orientation. This is the same problem as in the previous registrations.

Figure 5.13: Principal axes registration(skeleton time 2).

In Fig 5.13, the front view of the skeleton matching looks good, though some misalignment in the spine can be seen.

Figure 5.14: Principal axes registration(skeleton time 2 bottom view).

Fig 5.14 is the bottom view. One can see obvious misalignment in ribs and

spine.

Figure 5.15: Principal axes registration (skeleton time 3).

In Fig 5.15, the principal axes do not correspond consistently between the template and time series 3, which caused a wrong orientation in the registration. A wrong registration looks more obvious for the skeleton than for the tumor because, first, the skeleton is a rigid body and, second, the special structure of the skeleton, which contains the spine and ribs, makes any misalignment more obvious.

Figure 5.16: Principal axes registration(skeleton time 4).

Figure 5.17: Principal axes registration(skeleton time 4 bottom view).

From both the front view and the bottom view of the skeleton time series 4 registration (Fig 5.16 and 5.17), the same problem can be seen: misalignment of the spine and ribs.

5.2.3 Summary

As we can see, the principal axes do not correspond consistently between the template and the source tumor. This may be because we sub-sampled the tumor in order to feed it into SolidWorks. Another, more likely reason is that the structure of the tumor changed.

The result of the skeleton registration is much better than the tumor registration because for at least two time series the axes have the same correspondence with the template's principal axes. However, this registration is still rough, as shown in Fig 5.14, 5.16 and 5.17: there are obvious misalignments of the ribs and spine between the template and the sensed image. When the axes don't correspond consistently between the template and the source image, the registration is wrong, as we can see in Fig 5.15. In this case, the registration is useless.

To summarize, the principal axes are very sensitive to the shape of the object. The tumor changes its shape a lot over time and the principal axes do not correspond well. We tried this method with the skeleton because we thought the skeleton does not change with time for an adult, hence the Principal Axes method should match nicely. The result shows that 2 out of 3 time series match the template. The reason there are still some misalignments is that the template image has some extra parts. A possible reason that time series 3 has distorted principal axes is the sub-sampling of the original data when calculating the moment of inertia.

5.3 Iterative Closest Point

The following sections present the results of the Global Matching and Local Matching methods.

5.3.1 Global Matching

Figure 5.18: Time series 2 and template.

As shown in Fig 5.18, the highlighted tumor is the template and the other is time series 2. Most of the clusters are close to each other.

Figure 5.19: Time series 3 and template.

In Fig 5.19, the highlighted tumor is time series 3 and the other one is the template. All clusters are reasonably close to each other.

Figure 5.20: Time 2 and 3.

In Fig 5.20, the highlighted tumor is time series 2 and the other is time series 3. As can be seen, each part in time series 2 is close to its counterpart in time series 3; hence the two tumors match well.

5.3.2 Local Matching

The following are the results of registering individual parts between the template and time series 2. The reason for choosing time series 2 is that this time series preserves most of the parts in the template, while in the other time series many parts disappeared.

Figure 5.21: Part 1.

Part 1 (Fig 5.21) in both time series is in a good position for the doctor to compare. It is hard to comment on the orientation because their shapes are totally different.

Figure 5.22: Part 2.

Part 2 (Fig 5.22) matches well; part of the reason is that the shapes of the two parts are similar.

Figure 5.23: Part 4.

Part 4 (Fig 5.23) matches well in terms of position. For the same reason as part 1, there is no way to comment on the rotation of these parts.

Figure 5.24: Part 5.

Part 5 (Fig 5.24) matches well. Because the shapes of the two parts are still similar, we can see that not only the position but also the orientation matches well.

Figure 5.25: Part 6.

Part 6 (Fig 5.25) has a good match in position, but it is hard to comment on the orientation due to the odd structure of this part.

5.3.3 Summary

In the global matching results, the highlighted parts are the parts that were transformed. As we can see, they are reasonably close to the source images. One thing to notice is that ICP can be very inaccurate if the parts are far away from each other. If that is the case, we can find the centroids of the two data sets and move them based on the locations of the centroids. After moving the two data sets so that their centroids overlap, ICP can be used for registration.

The results for local matching show that when we reduced the data size, each part in the source data could be matched to the template very well. It can never match perfectly because the tumor did not maintain its shape. We do not have a part 3 because it disappeared in the source data. The way we decided which parts match was by comparing the relative position of each part with respect to the main part, which is the biggest part of the tumor.

Overall, when using the Global Matching method, we can compare the orientation by utilizing the information from the core of the tumor and its surrounding parts. This is not possible when we use the Local Matching method because all parts are matched separately.

CHAPTER 6

Conclusion

Deformable body registration and rigid body registration usually use very different methods and their research interests are also very different. In this thesis, we applied methods that are usually used in rigid body registration to a non-rigid body registration problem. The reason we avoided non-rigid body registration is that the purpose is to let the doctor compare the tumor between different time series, so the shape needs to be preserved. Three methods were tried: the Best Fit Plane method, the Principal Axes Matching method and the ICP method.

Best Fit Plane is a naive approach. It works well when the tumor shape does not change too much between time series. The advantage, though, is that the plane can be used as the feature and the transformation between the planes can be applied to the tumor. This avoids using all the data points, so the transformation can be found very quickly.

The second method we tried was the Principal Axes Matching method. This method has something in common with the Best Fit Plane method: instead of using a plane as the feature, it uses the principal axes. In our case, SolidWorks was used to calculate the second moment of inertia. The final results showed that even though the principal axes described the tumor in a correct manner, none of the source's principal axes corresponded correctly to the template's. This leads to a wrong registration. We further tried Principal Axes Matching on the skeleton. Two out of three time series had the correct correspondence of principal axes. After doing registration on the time series that had the correct correspondence, there were still some misalignments. The reason could be that we used a simplified model in SolidWorks to calculate the moment of inertia. This could be improved by calculating the moment of inertia analytically.

The last method, ICP, is the easiest and slowest method of all three. Most of the time, when two objects are far from each other, ICP does not work. In our case, ICP did work for all the data sets we tried. The results for both Global Matching and Local Matching look fine. Local Matching worked better than Global Matching because it only matched each individual part of the tumor. If the doctor does not care too much about the relative positions within the tumor but only wants to compare the tumor's parts separately, the Local Matching method may be acceptable. Even so, coming up with an algorithm that can separate the tumor based on similarities between the template and the source data can be challenging. The Global Matching method took around 4 hours to run. One way to improve this method is to translate the data from the source image to the template so that their centroids overlap before starting ICP.

The way we validated our results is by checking translation and rotation. Translation was evaluated by how well the core of the source tumor matched the template. Rotation was evaluated by how far apart each part was between the template and the source data. This validation method is not very accurate because there is no quantitative measurement. Future work is still required to improve the validation.

REFERENCES

[1] J. Galvez and M. Canton, “Normalization and shape recognition of three-dimensional objects by 3D moments,” Pattern Recognition, vol. 26, no. 5, pp. 667–681, 1993.

[2] A. Jemal, R. Siegel, E. Ward, Y. Hao, J. Xu, T. Murray, and M. J. Thun, “Cancer statistics, 2008,” CA: a cancer journal for clinicians, vol. 58, no. 2, pp. 71–96, 2008.

[3] D. Simon, R. O'Toole, M. Blackwell, F. Morgan, A. DiGioia, and T. Kanade, “Accuracy validation in image-guided orthopaedic surgery,” in Proceedings of the Second International Symposium on Medical Robotics and Computer Assisted Surgery, vol. 6. Citeseer, 1995, pp. 185–192.

[4] B. Geiger, “Three-dimensional modeling of human organs and its application to diagnosis and surgical planning,” Ph.D. dissertation, INRIA, 1993.

[5] W. Forstner, “A feature based correspondence algorithm for image matching,” International Archives of Photogrammetry and Remote Sensing, vol. 26, no. 3, pp. 150–166, 1986.

[6] C. R. Maurer and J. M. Fitzpatrick, “A review of medical image registration,” Interactive image-guided neurosurgery, vol. 17, 1993.

[7] D. L. Hill, D. J. Hawkes, J. Crossman, M. Gleeson, T. Cox, E. Bracey, A. Strong, and P. Graves, “Registration of MR and CT images for skull base surgery using point-like anatomical features,” The British Journal of Radiology, vol. 64, no. 767, pp. 1030–1035, 1991.

[8] D. L. Hill, S. Green, J. Crossman, D. J. Hawkes, G. P. Robinson, C. Ruff, T. Cox, A. Strong, and M. J. Gleeson, “Visualization of multimodal images for the planning of skull base surgery,” in Visualization in Biomedical Computing. International Society for Optics and Photonics, 1992, pp. 564–573.

[9] D. Hill, D. J. Hawkes, M. J. Gleeson, T. Cox, A. J. Strong, W.-L. Wong, C. F. Ruff, N. D. Kitchen, D. Thomas, and A. Sofat, “Accurate frameless registration of MR and CT images of the head: applications in planning surgery and radiation therapy,” Radiology, vol. 191, no. 2, pp. 447–454, 1994.

[10] P. A. van den Elsen, M. A. Viergever, A. C. van Huffelen, W. van der Meij, and G. H. Wieneke, “Accurate matching of electromagnetic dipole data with CT and MR images,” Brain Topography, vol. 3, no. 4, pp. 425–432, 1991.

[11] E. M. Friets, J. W. Strohbehn, J. F. Hatch, and D. W. Robert, “A frameless stereotaxic operating microscope for neurosurgery,” Biomedical Engineering, IEEE Transactions on, vol. 36, no. 6, pp. 608–617, 1989.

[12] D. R. Haynor, A. Borning, B. Griffin, J. Jacky, I. Kalet, and W. Shuman, “Radiotherapy planning: direct tumor location on simulation and port films using CT. Part I. Principles,” Radiology, vol. 158, no. 2, pp. 537–540, 1986.

[13] T. Takizawa, “A neurosurgical navigation system employing MR images,” Toshiba Med Rev, vol. 34, pp. 10–18, 1990.

[14] M.-K. Hu, “Visual pattern recognition by moment invariants,” Information Theory, IRE Transactions on, vol. 8, no. 2, pp. 179–187, 1962.

[15] T. L. Faber and E. M. Stokely, “Orientation of 3-D structures in medical images,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 10, no. 5, pp. 626–633, 1988.

[16] N. M. Alpert et al., “The principal axes transformation—a method for image registration,” J Nucl Med, vol. 31, pp. 1717–1723, 1990.

[17] L. K. Arata and A. P. Dhawan, “Iterative principal axes registration: a new algorithm for retrospective correlation of MR-PET brain images,” in Engineering in Medicine and Biology Society, 1992 14th Annual International Conference of the IEEE, vol. 7. IEEE, 1992, pp. 2776–2778.

[18] A. Gueziec, X. Pennec, and N. Ayache, “Medical image registration using geometric hashing,” IEEE Computational Science and Engineering, vol. 4, no. 4, pp. 29–41, 1997.

[19] C. A. Pelizzari, G. T. Chen, D. R. Spelbring, R. R. Weichselbaum, and C.-T. Chen, “Accurate three-dimensional registration of CT, PET, and/or MR images of the brain,” Journal of Computer Assisted Tomography, vol. 13, no. 1, pp. 20–26, 1989.

[20] V. Vinod and S. Ghose, “Point matching using asymmetric neural networks,” Pattern Recognition, vol. 26, no. 8, pp. 1207–1214, 1993.

[21] B. Li, Q. Meng, and H. Holstein, “Similarity k-d tree method for sparse point pattern matching with underlying non-rigidity,” Pattern Recognition, vol. 38, no. 12, pp. 2391–2399, 2005.

[22] S. D. Morgera and P. L. C. Cheong, “Rigid body constrained noisy point pattern matching,” IEEE Transactions on Image Processing, vol. 4, no. 5, pp. 630–641, 1995.

[23] B. K. Horn, “Closed-form solution of absolute orientation using unit quaternions,” JOSA A, vol. 4, no. 4, pp. 629–642, 1987.

[24] K. S. Arun, T. S. Huang, and S. D. Blostein, “Least-squares fitting of two 3-D point sets,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, no. 5, pp. 698–700, 1987.

[25] B. K. Horn, H. M. Hilden, and S. Negahdaripour, “Closed-form solution of absolute orientation using orthonormal matrices,” JOSA A, vol. 5, no. 7, pp. 1127–1135, 1988.

[26] M. W. Walker, L. Shao, and R. A. Volz, “Estimating 3-D location parameters using dual number quaternions,” CVGIP: Image Understanding, vol. 54, no. 3, pp. 358–367, 1991.

[27] A. Lorusso, D. W. Eggert, and R. B. Fisher, A comparison of four algorithms for estimating 3-D rigid transformations. University of Edinburgh, Department of Artificial Intelligence, 1995.

[28] R. Y. Wong, “Sensor transformations,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 7, no. 12, pp. 836–841, 1977.

[29] A. Goshtasby, “Piecewise cubic mapping functions for image registration,” Pattern Recognition, vol. 20, no. 5, pp. 525–533, 1987.

[30] P. Van Wie and M. Stein, “A Landsat digital image rectification system,” IEEE Transactions on Geoscience Electronics, vol. 15, no. 3, pp. 130–137, 1977.

[31] M. Yanagisawa, S. Shigemitsu, and T. Akatsuka, “Registration of locally distorted images by multiwindow pattern matching and displacement interpolation: The proposal of an algorithm and its application to digital subtraction angiography,” in Proceedings of the Seventh International Conference on Pattern Recognition, vol. 2, 1984, pp. 1288–1291.

[32] F. Bookstein, “Thin-plate splines and the decomposition of deformation,” IEEE Trans. Patt. Anal. Mach. Intell., vol. 10, 1988.

[33] J. Ashburner, K. J. Friston et al., “Nonlinear spatial normalization using basis functions,” Human Brain Mapping, vol. 7, no. 4, pp. 254–266, 1999.

[34] M. Ferrant, S. K. Warfield, C. R. Guttmann, R. V. Mulkern, F. A. Jolesz, and R. Kikinis, “3D image matching using a finite element based elastic deformation model,” in Medical Image Computing and Computer-Assisted Intervention–MICCAI'99. Springer, 1999, pp. 202–209.

[35] J. P. Pluim, “Mutual information based registration of medical images,” University Medical Center Utrecht, Utrecht, 2000.

[36] S. Periaswamy and H. Farid, “Medical image registration with partial data,” Medical Image Analysis, vol. 10, no. 3, pp. 452–464, 2006.

[37] H. Chui and A. Rangarajan, “A new point matching algorithm for non-rigid registration,” Computer Vision and Image Understanding, vol. 89, no. 2, pp. 114–141, 2003.

[38] P. Thompson and A. W. Toga, “A surface-based technique for warping three-dimensional images of the brain,” Medical Imaging, IEEE Transactions on, vol. 15, no. 4, pp. 402–417, 1996.

[39] C. Hogea, C. Davatzikos, and G. Biros, “An image-driven parameter estimation problem for a reaction–diffusion glioma growth model with mass effects,” Journal of Mathematical Biology, vol. 56, no. 6, pp. 793–825, 2008.

[40] E. I. Zacharaki, D. Shen, S.-K. Lee, and C. Davatzikos, “ORBIT: A multiresolution framework for deformable registration of brain tumor images,” Medical Imaging, IEEE Transactions on, vol. 27, no. 8, pp. 1003–1017, 2008.

[41] Y. Liu, C. Yao, L. Zhou, and N. Chrisochoides, “A point based non-rigid registration for tumor resection using iMRI,” in Biomedical Imaging: From Nano to Macro, 2010 IEEE International Symposium on. IEEE, 2010, pp. 1217–1220.

[42] S. K. Kyriacou, C. Davatzikos, S. J. Zinreich, and R. N. Bryan, “Nonlinear elastic registration of brain images with tumor pathology using a biomechanical model [MRI],” Medical Imaging, IEEE Transactions on, vol. 18, no. 7, pp. 580–592, 1999.

[43] X. Li, B. M. Dawant, E. B. Welch, A. B. Chakravarthy, D. Freehardt, I. Mayer, M. Kelley, I. Meszoely, J. C. Gore, and T. E. Yankeelov, “A nonrigid registration algorithm for longitudinal breast MR images and the analysis of breast tumor response,” Magnetic Resonance Imaging, vol. 27, no. 9, pp. 1258–1270, 2009.

[44] S. S. Halli and K. V. Rao, Advanced techniques of population analysis. Springer Science & Business Media, 2013.

[45] F. A. Sadjadi and E. L. Hall, “Three-dimensional moment invariants,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, no. 2, pp. 127–136, 1980.

[46] A. Reeves and B. Wittner, “Shape analysis of three dimensional objects using the method of moments,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1983, pp. 20–26.

[47] P. J. Besl and N. D. McKay, “Method for registration of 3-D shapes,” in Robotics-DL tentative. International Society for Optics and Photonics, 1992, pp. 586–606.
