Image registration and data fusion in radiation therapy

M L KESSLER, PhD

The University of Michigan, Ann Arbor, MI 48103, USA

ABSTRACT. This paper provides an overview of image registration and data fusion techniques used in radiation therapy, and examples of their use. They are used at all stages of the patient management process: for initial diagnosis and staging, during treatment planning and delivery, and after therapy to help monitor the patients’ response to treatment. Most treatment planning systems now support some form of interactive or automated image registration and provide tools for mapping information, such as tissue outlines and computed dose, from one imaging study to another. To complement this, modern treatment delivery systems offer means for acquiring and registering 2D and 3D image data at the treatment unit to aid patient setup. Techniques for adapting and customizing treatments during the course of therapy using 3D and 4D anatomic and functional imaging data are currently being introduced into the clinic. These techniques require sophisticated image registration and data fusion technology to accumulate properly the delivered dose and to analyse possible physiological and anatomical changes during treatment. Finally, the correlation of radiological changes after therapy with delivered dose also requires the use of image registration and fusion techniques.

Received 7 February 2006. Revised 17 March 2006. Accepted 24 April 2006.

DOI: 10.1259/bjr/70617164

© 2006 The British Institute of Radiology

Data from multiple anatomical and functional imaging studies have become important components of patient management in radiation therapy. From initial diagnosis to treatment planning and from delivery to monitoring the patient post-therapy, these data drive the decisions about how the patient is treated and help assess the progress and efficacy of therapy. While X-ray CT remains the primary imaging modality for most aspects of treatment planning and delivery, the use of data from other modalities such as MRI and MR spectroscopy (MRS) and positron/single photon emission tomography (PET/SPECT) is becoming increasingly prevalent and valuable, especially when taking advantage of highly conformal treatment techniques such as intensity-modulated radiotherapy [1–3]. These additional imaging studies provide complementary information to help elucidate the condition of the patient before, during and after treatment. The use of time-series image data to assess physiological motion for initial planning, as well as anatomical and functional changes for possible treatment adaptation, is becoming more widespread as diagnostic imaging devices produce quality 4D image data and as X-ray imaging systems are incorporated into the treatment room.

In order to make use of the information from these multiple imaging studies in an integrated fashion, the data must be geometrically registered to a common coordinate system. This process is called image registration. Once different datasets are registered, information such as tissue boundaries, computed dose distributions and other image or image-derived information can be mapped between them and combined. This process is called data fusion. Figure 1a provides a simple example of these two processes.

Numerous techniques exist for both image registration and data fusion. The choice and advantage of one technique over another depends on the particular application and types of image data involved. While exhaustive and detailed reviews of image registration algorithms have appeared in the literature [4], this paper is meant to provide a broad overview as well as examples of image registration and data fusion techniques that are employed in radiation therapy.

Image registration

The basic task of image registration is to compute the geometric transformation that maps the coordinates of corresponding or homologous points between two imaging studies. While there are many different techniques used to carry this out, most approaches involve the same three basic components. The first and main component is the transformation model itself, which can range from a single global linear transformation for handling rotations and translations (six degrees of freedom; three rotations and three translations) to a completely free-form deformation model where the transformation is represented by independent displacement vectors for each voxel in the image data (degrees of freedom can reach three times the number of voxels). The second component is the metric used to measure how well the images are (or are not) registered, and the third component is the optimizer and optimization scheme used to bring the imaging data into alignment. It is also worth mentioning that these general components (the transformation model, which defines the degrees of freedom or parameters; the metric or cost function used to measure the worth of the registration; and the optimization engine used to reach a final solution) are completely analogous to the components required by inverse treatment planning systems.

The British Journal of Radiology, 79 (2006), S99–S108

The British Journal of Radiology, Special Issue 2006  S99


Although it is often desirable or necessary to register numerous imaging studies to each other, the process of registration is generally carried out by registering two datasets at a time. In radiation therapy, a common strategy is to register each of the imaging studies to the treatment planning CT, as it is used as the primary dataset for treatment planning and dose calculations. Transformations between studies that are not explicitly registered to each other can be easily derived by combining the appropriate transforms and inverse transforms between the different datasets and the planning CT. For the discussions that follow, the two datasets being registered are labelled Study A and Study B. Study A will be the base or reference dataset that is held fixed, and Study B will be the homologous or moving dataset that is manipulated to be brought into geometric alignment with Study A. Study B’ will refer to the transformed or registered version of Study B (Figure 1b).
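The bookkeeping of combining transforms and inverse transforms through the planning CT can be sketched with 4×4 homogeneous matrices. This is an illustrative sketch only; the Euler-angle helper and the specific angles and offsets are arbitrary assumptions, not values from the paper.

```python
import numpy as np

def rigid_transform(angles_deg, translation):
    """4x4 homogeneous transform from Z-Y-X Euler angles (degrees) and a
    translation vector; a hypothetical helper for illustration only."""
    ax, ay, az = np.radians(angles_deg)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(ax), -np.sin(ax)],
                   [0, np.sin(ax),  np.cos(ax)]])
    Ry = np.array([[ np.cos(ay), 0, np.sin(ay)],
                   [0, 1, 0],
                   [-np.sin(ay), 0, np.cos(ay)]])
    Rz = np.array([[np.cos(az), -np.sin(az), 0],
                   [np.sin(az),  np.cos(az), 0],
                   [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = translation
    return T

# Transforms that map each study's coordinates into the planning CT frame.
T_a_to_ct = rigid_transform([0.0, 0.0, 10.0], [5.0, 0.0, 0.0])
T_b_to_ct = rigid_transform([0.0, 0.0, -5.0], [0.0, 2.0, 0.0])

# Study A -> Study B without registering them to each other directly:
# go A -> CT, then apply the inverse of B -> CT.
T_a_to_b = np.linalg.inv(T_b_to_ct) @ T_a_to_ct
```

Mapping a point through `T_a_to_b` and then through `T_b_to_ct` lands at the same CT coordinate as mapping it directly with `T_a_to_ct`, which is exactly the consistency the composition strategy relies on.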

Transformation model 

The transformation model chosen to describe the mapping of coordinates between two studies depends on the clinical site, the imaging conditions and the particular application. In the ideal case, where the patient is positioned in an identical orientation during the different imaging studies and the scale and centre of the imaging coordinate systems coincide, the transformation is a simple identity transform I and xB = xA for all points in the two imaging studies. This situation most closely exists for the data produced by dual imaging modality devices such as PET-CT or SPECT-CT machines, especially if physiological motion is controlled or absent [6].

Naturally, it is common for the orientation of the patient to change between imaging studies, making more sophisticated transformations necessary. For situations involving the brain, where the position and orientation of the anatomy are defined by the rigid skull, a simple rotate-translate model can be accurately applied. In this case, a global linear transformation specified by three rotation angles (θx, θy, θz) and three translations (tx, ty, tz) can be used to map points from one image dataset to another. A more general linear transformation is an affine transform, which is a composition of rotations, translations, scaling (sx, sy, sz) and shearing (shx, shy, shz). A property of affine transformations is that they preserve collinearity ("parallel lines remain parallel"). Currently, the DICOM imaging standard uses affine transformations to specify the spatial relationship between two imaging studies [7]. Most commercial treatment planning systems only support image registration using affine transformations, although support for more sophisticated transformations should appear soon.
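An affine transform and its collinearity property can be demonstrated in a few lines. The composition order and all numeric parameters below are arbitrary choices for illustration, not a prescribed convention.

```python
import numpy as np

def affine_transform(scale, shear_xy, rot_z_deg, translation):
    """4x4 affine transform composed of per-axis scaling, an x-y shear,
    a Z rotation and a translation (one possible composition order)."""
    S = np.diag(list(scale) + [1.0])
    Sh = np.eye(4)
    Sh[0, 1] = shear_xy                      # shear x proportionally to y
    th = np.radians(rot_z_deg)
    R = np.eye(4)
    R[:2, :2] = [[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]]
    T = np.eye(4)
    T[:3, 3] = translation
    return T @ R @ Sh @ S

A = affine_transform((1.2, 0.9, 1.0), 0.3, 25.0, (4.0, -2.0, 1.5))

# Three collinear points (in homogeneous coordinates) ...
p0, p1 = np.array([0, 0, 0, 1.0]), np.array([2, 1, 3, 1.0])
pts = [p0 + t * (p1 - p0) for t in (0.0, 0.4, 1.0)]
# ... map to three points that still lie on a single (different) line.
q0, q1, q2 = [(A @ p)[:3] for p in pts]
```

Checking that the cross product of the two mapped difference vectors vanishes confirms numerically that collinearity survives scaling, shearing, rotation and translation.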

The assumption of global rigid movement of anatomy is often violated, especially for sites other than the head and large image volumes that extend to the body surface. Differences in patient setup (arms up versus arms down), organ filling and uncontrolled physiological motion confound the use of a single affine transform to register two imaging studies. In some cases where local rigid motion can be assumed, it may be possible to use a rigid or affine transformation to register sub-volumes of two imaging studies. For example, the prostate itself may be considered rigid, but it certainly moves relative to the pelvis, depending on the filling of the rectum and bladder. By considering only a limited field-of-view that includes just the region of the prostate, it is often possible to use an affine transformation to accurately register the prostate anatomy in two studies [8–10]. One or more sub-volumes can be defined by simple geometric cropping or masks derived from one or more anatomical structures (Figure 2).

Even with a limited field-of-view approach, there are many sites in which affine registration techniques are not sufficient to achieve acceptable alignment of anatomy. In these sites, an organ's size and shape may change as a result of normal organ behaviour or the motion of surrounding anatomy. For example, the lungs change in both size and shape during the breathing cycle, and the shape of the liver can be affected by the filling of the stomach. When registering datasets that exhibit these kinds of changes, a non-rigid or deformable model must be used to accurately represent the transformation between studies.

Deformable transformation models range in complexity from a simple extension of a global affine transformation using higher order polynomials with relatively few parameters, to a completely local or "free form" model


Figure 1. Schematic of the image registration and data fusion processes. (a) Anatomical information from a spin-echo MR is first registered and then fused with functional information from a 11C thymidine PET to create a synthetic MR-PET image volume. (b) General components of the registration process.


where each point or voxel in the image volume can move independently and the number of parameters may reach three times the number of voxels considered. Between these two extremes are transformation models designed to handle various degrees of semi-local deformations using a moderate number of parameters, such as splines [11].

Global polynomials have been used successfully to model and remove image distortions in MR and other image data as a pre-processing step for image registration [12], but are not typically used for modelling deformation of anatomy because of undesirable oscillations that occur as the degree of the polynomial increases. Spline-based transformations, such as B-splines [11, 13], avoid this problem by building up the overall transformation, or deformation function, using a set of weighted basis functions defined over (or which contribute only over) a limited region. Figure 3 illustrates this approach for a one-dimensional cubic B-spline. The displacement or deformation, Δx, at a given point is computed as the weighted sum of basis functions centred at a series of locations called knots. Changing the weight or contribution w of each basis function affects only a specific portion of the overall deformation. By increasing the density of knots, more complex and localized deformations can be modelled.
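The 1D construction of Figure 3a can be written down directly: the displacement at x is the weighted sum of cubic B-spline kernels centred at the knots. A minimal sketch, in which the 10 mm knot spacing and the weights are arbitrary assumptions:

```python
import numpy as np

def cubic_bspline(u):
    """Cubic B-spline kernel; non-zero only for |u| < 2 (local support)."""
    u = abs(u)
    if u < 1.0:
        return (4.0 - 6.0 * u**2 + 3.0 * u**3) / 6.0
    if u < 2.0:
        return (2.0 - u)**3 / 6.0
    return 0.0

def displacement(x, knots, weights):
    """Deformation Dx(x) as the weighted sum of basis functions centred
    at the knots, as in Figure 3a (uniform knot spacing assumed)."""
    h = knots[1] - knots[0]
    return sum(w * cubic_bspline((x - k) / h) for k, w in zip(knots, weights))

knots = np.arange(0.0, 101.0, 10.0)   # knots every 10 mm
weights = np.zeros(len(knots))
weights[5] = 2.0                      # adjust only w5, at x = 50
```

Because the kernel has local support, changing w5 perturbs the deformation only for x between 30 and 70; points farther away are untouched. This is the locality that the text contrasts with the global influence of thin-plate spline control points.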

Another spline-based transformation, called thin-plate splines, uses a set of corresponding control points defined on both image datasets and minimizes a bending energy term to determine the transformation parameters [14–16]. Unlike B-splines, the location of each control point does have some amount of global influence, meaning that changing the position of a control point in one area will affect the entire deformation in some capacity. Using more points reduces the influence of each point, but this comes at a higher computational cost than with B-splines.

Finally, free-form or non-parametric transformation models are represented using vector fields of the explicit displacements for a grid of points, usually at the voxel locations or an integer sub-sample of these (Figure 4). Algorithms for solving for the displacements with


Figure 2. Various strategies for cropping data for limited field-of-view image data. (a) Simple geometric cropping. (b) Piecewise cropping. (c) Anatomically-based cropping.


Figure 3. B-spline deformation model. (a) 1D example of the cubic B-spline deformation model. The displacement Δx as a function of x is determined by the weighted sum of basis functions. The double arrow shows the region of the overall deformation affected by the weight factor w7. 3D deformations are constructed using 1D deformations for each dimension. (b) Multiresolution registration of lung data using B-splines. Both knot density and image resolution are varied during registration. This can help avoid local minima and decrease overall registration time.


non-parametric models use some form of local driving force to register the image data. Common models include fluid flow [17, 18], optical flow (based on intensity gradients) [19, 20] and finite element methods [21].

Registration metric 

In most registration algorithms, the parameters of a transformation model which bring two datasets into geometric alignment are determined by maximizing or minimizing a registration metric which measures the similarity or dissimilarity of the two image datasets. Most registration metrics in use today can be classified as either geometry-based or intensity-based. Geometry-based metrics make use of features extracted from the image data, such as anatomic or artificial landmarks and organ boundaries, while intensity-based metrics use the image data directly.

Geometry-based metrics

The most common geometry-based registration metrics involve the use of points [22], lines [23, 24] or surfaces [22, 25, 26]. For point matching, the coordinates of pairs of corresponding points from Study A and Study B are used to define the registration metric. These points can be anatomic landmarks or implanted or externally-placed fiducial markers. The registration metric is defined as the sum of the squared distances between corresponding points. To compute the rotations and translations for a rigid transformation, a minimum of three pairs of points are required, and for affine transformations, a minimum of four pairs of non-coplanar points are required. Using more pairs of points reduces the bias that errors in the delineation of any one pair of points has on the estimated transformation parameters. However, accurately identifying more than the minimum number of corresponding points can be difficult, as different modalities often produce different tissue contrasts (a major reason why multiple modalities are used in the first place), and placing or implanting larger numbers of markers is not always possible or desirable.

Alternatively, line and surface matching techniques do not require a one-to-one correspondence of specific points, but rather try to maximize the overlap between corresponding lines and surfaces extracted from two image studies, such as the brain or skull surface or pelvic bones. These structures can be easily extracted using automated techniques and minor hand editing. As with defining pairs of points, it may be inherently difficult or time consuming to accurately delineate corresponding lines and surfaces in both imaging studies. Furthermore, since the extracted geometric features are surrogates for the entire image volume, any anatomic or machine-based distortions in the image data away from these features will not be taken into account during the registration process.
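For the point-matching metric described above, the rigid rotation and translation that minimize the sum of squared distances have a closed-form solution via the singular value decomposition (the standard Procrustes/Kabsch approach). A sketch, assuming error-free landmark pairs with hypothetical coordinates:

```python
import numpy as np

def rigid_point_match(pts_a, pts_b):
    """Least-squares rotation R and translation t such that R @ b + t best
    matches the corresponding points a (>= 3 non-collinear pairs needed)."""
    ca, cb = pts_a.mean(axis=0), pts_b.mean(axis=0)
    H = (pts_b - cb).T @ (pts_a - ca)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, ca - R @ cb

# Hypothetical fiducial coordinates (mm) on Study A, and the same points
# as seen on Study B after a known rotation and shift.
rng = np.random.default_rng(0)
pts_a = rng.uniform(-50, 50, size=(4, 3))
th = np.radians(12.0)
R_true = np.array([[np.cos(th), -np.sin(th), 0],
                   [np.sin(th),  np.cos(th), 0],
                   [0, 0, 1]])
t_true = np.array([3.0, -1.0, 7.0])
pts_b = (pts_a - t_true) @ R_true              # rows b_i = R_true.T (a_i - t)
R, t = rigid_point_match(pts_a, pts_b)
```

With noisy landmarks the same code returns the least-squares fit, and the residual distances at the points quantify the delineation errors the text mentions.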

Intensity-based metrics

To overcome some of the limitations of using explicit geometric features to register image data, another class of registration metric has been developed which uses the numerical greyscale information directly to measure how well two studies are registered. These metrics are also referred to as similarity measures, since they determine the similarity between the distributions of corresponding voxel values from Study A and a transformed version of Study B. Several mathematical formulations are used to measure this similarity. The more common similarity measures in clinical use include sum-of-squared differences and cross-correlation [27] for registration of data from X-ray CT studies, and mutual information for registration of data from both similar and different imaging modalities [15, 28, 29].

The mutual information metric provides a measure of the information that is common between two datasets [30]. It is assumed that when two datasets are properly aligned, the mutual information of the pair is a maximum, which makes it an appropriate registration metric. It can be used for a wide range of image registration situations, since there is no dependence on the absolute intensity values and it is very robust to missing or limited data. For example, a tumour might show up clearly on an MR study but be indistinct on a corresponding CT study. Over the tumour volume the mutual information is low, but no prohibitive penalties are incurred. In the surrounding healthy tissue the mutual information can be high, and this becomes the dominant factor in the registration.
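Mutual information is commonly estimated from the joint intensity histogram of the overlapping voxels. A minimal sketch (the 32-bin histogram and the synthetic test images are arbitrary assumptions):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Estimate the mutual information of two equally-sized images from
    their joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()                 # joint probabilities
    p_a = p_ab.sum(axis=1, keepdims=True)      # marginal of A
    p_b = p_ab.sum(axis=0, keepdims=True)      # marginal of B
    nz = p_ab > 0                              # avoid log(0)
    return float((p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])).sum())

rng = np.random.default_rng(1)
img = rng.normal(size=(64, 64))
mi_aligned = mutual_information(img, img)      # perfectly corresponding data
mi_scrambled = mutual_information(img, rng.permutation(img.ravel()).reshape(64, 64))
```

An image shares maximal information with itself, while scrambling one image destroys the voxel correspondence and drives the estimate toward zero; this is the behaviour that makes the metric peak at correct alignment even when the two modalities have unrelated absolute intensities.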


Figure 4. Visualization of (a) deformation computed between datasets registered using B-splines and (b) fluid flow model. The deformation or displacement is known for every voxel but only displayed for a subset of voxels for clarity ((b) image courtesy of Gustavo Olivera, University of Wisconsin).


Optimizer and registration scheme

Most image registration systems use optimization schemes such as gradient descent or problem-specific adaptations of these. Registration of datasets is usually carried out in a hierarchical fashion, starting with downsized versions of the data and iteratively registering successively finer versions. The degrees of freedom of the geometric transformation can also be varied to speed the registration process. An example scheme might begin with simple translations, and then allow rotations, then low spatial frequency deformations and finally the full deformation model [12]. A hierarchical approach saves computation time and also helps avoid local minima, which become more likely as the degrees of freedom of the deformation model increase.
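The coarse-to-fine idea can be illustrated with a toy translation-only registration driven by a sum-of-squared-differences metric. Everything here is a deliberately simplified stand-in: an exhaustive ±1-pixel search replaces a real optimizer, and only the resolution (not the transform's degrees of freedom) changes between levels.

```python
import numpy as np

def downsample(img, f):
    """Block-average downsampling of a 2D image by an integer factor f."""
    h, w = (img.shape[0] // f) * f, (img.shape[1] // f) * f
    return img[:h, :w].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def ssd(a, b):
    """Sum-of-squared-differences registration metric."""
    return float(((a - b) ** 2).sum())

def register_shift(fixed, moving, levels=(4, 2, 1)):
    """Hierarchical search for the integer 2D shift of `moving` that best
    matches `fixed`: register coarse copies first, then refine."""
    best = np.array([0, 0])
    prev = levels[0]
    for f in levels:
        best = best * (prev // f)              # rescale shift to this level
        prev = f
        fx, mv = downsample(fixed, f), downsample(moving, f)
        cands = [best + np.array([dy, dx])
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
        best = min(cands, key=lambda s: ssd(fx, np.roll(mv, tuple(s), axis=(0, 1))))
    return best                                # shift in full-resolution pixels
```

Each coarse level only has to search a few candidates, yet a ±1 step there corresponds to several pixels at full resolution, which is how the hierarchy both saves work and steps over local minima.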

For deformable image registration problems using a large number of degrees of freedom, some form of regularization may also be imposed to discourage "unreasonable" deformations such as warping of bones and folding of tissue. One approach to this problem is to filter the deformations between iterations of the optimization [31]. Another approach is to include a regularization term in the registration metric that penalizes non-physical deformations. The regularization term can even be made spatially variant using known or estimated tissue properties [32].

Data fusion

The motivation for registering imaging studies is to be able to map information derived from one study to another, or to directly combine or fuse the imaging data from the studies to create displays that contain relevant features from each modality. For example, a tumour volume may be more clearly visualized using a specific MR image sequence or coronal image plane rather than the axial treatment planning CT. If the geometric transformation between the MR study and the treatment planning CT study is known, the clinician is able to outline the tumour using images from the MR study and map these outlines to the images of the CT study. This process is called structure mapping (Figure 5).

Another approach to combining information from different imaging studies is to map directly the image intensity data from one study to another, so that at each voxel there are two (or more) intensity values rather than one. The goal is to create a version of Study B (Study B') with images that match the size, location and orientation of those in Study A. These corresponding images can then be combined or fused in various ways to help elucidate the relationship between the data from the two studies. Various relevant displays are possible using this multistudy data. For example, functional information from a PET imaging study can be merged with the anatomic information from an MRI study and displayed as a colourwash overlay (Figure 6). This type of image synthesis is referred to as image fusion.

A variety of techniques exist to present fused data, including the use of overlays, pseudo-colouring and modified greyscales. For example, the hard bone features of a CT imaging study can be combined with the soft tissue features of an MRI study by adding the bone extracted from the CT to the MR dataset. Another method is to display anatomic planes in a side-by-side fashion (Figure 6). Such a presentation allows structures to be defined using both images simultaneously.
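A colourwash overlay of the kind shown in Figure 6c can be produced by blending a normalized functional map into the greyscale anatomic image. This is a minimal sketch; the red tint, the 0.4 blend weight and the synthetic images are arbitrary assumptions.

```python
import numpy as np

def fuse_colourwash(anatomy, functional, alpha=0.4):
    """Fuse a functional map into a greyscale anatomic image as a red
    colourwash; both images must already be registered and reformatted."""
    g = (anatomy - anatomy.min()) / (np.ptp(anatomy) + 1e-9)
    f = (functional - functional.min()) / (np.ptp(functional) + 1e-9)
    rgb = np.stack([g, g, g], axis=-1)        # grey base image
    rgb[..., 0] = g + alpha * f * (1.0 - g)   # push red channel toward 1
    return rgb                                # values stay in [0, 1]

anatomy = np.linspace(0.0, 1.0, 16).reshape(4, 4)
uptake = np.zeros((4, 4))
uptake[2, 2] = 1.0                            # one "hot" voxel
fused = fuse_colourwash(anatomy, uptake)
```

Voxels with no functional signal keep their original grey value, so the anatomy remains legible everywhere while regions of high uptake read as a red tint on top of it.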

In addition to mapping and fusing image intensities, 3D dose distributions computed in the coordinate system of one imaging study can be mapped to another. For example, doses computed using the treatment planning CT can be displayed over an MR study acquired after the start of therapy. With these data, regions of post-treatment radiological abnormality can be readily compared with the planned doses for the regions. With the introduction of volumetric imaging on the treatment units, treatment delivery CT studies can now be acquired to determine more accurately the actual doses delivered. By acquiring these studies over the course of therapy and registering them to a common reference frame, doses for the representative treatments can be reformatted and accumulated to provide a more likely estimate of the

Figure 5. Structure mapping. A tumour volume is outlined by the clinician on an MR study and then mapped to the treatment planning CT using the computed transformation.


delivered dose. This type of data can be used as input into the adaptive radiotherapy decision process.

Validation

It is important to validate the results of a registration before making clinical decisions based on the results. To do this, most image registration systems provide some combination of numerical and visual verification tools. A common numerical evaluation technique is to define a set of landmarks for corresponding anatomic points on Study A and Study B and compute the distance between the actual location of the points defined on Study A and the resulting transformed locations of the points from Study B'. This calculation is similar to a "point matching" metric but, as discussed earlier, it may be difficult to accurately and sufficiently define the appropriate corresponding points, especially when registering multimodality data. Also, if deformations are involved, the evaluation is not valid for regions remote from the defined points.
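The landmark check described above reduces to a few lines: compare the Study A landmark positions against the same landmarks mapped from Study B through the computed registration, and report summary distances. The coordinates below are made-up values for illustration.

```python
import numpy as np

def landmark_errors(pts_a, pts_b_mapped):
    """Distances (e.g. in mm) between landmarks defined on Study A and the
    corresponding landmarks mapped from Study B through the registration."""
    d = np.linalg.norm(pts_a - pts_b_mapped, axis=1)
    return d.mean(), d.max()

# Hypothetical landmark sets after registration (mm).
pts_a = np.array([[0.0, 0.0, 0.0],
                  [10.0, 0.0, 0.0],
                  [0.0, 10.0, 0.0]])
pts_b_mapped = pts_a + np.array([[0.0, 0.0, 1.0],
                                 [0.0, 0.0, 0.0],
                                 [0.0, 2.0, 0.0]])
mean_err, max_err = landmark_errors(pts_a, pts_b_mapped)
```

As the text cautions, a small mean error at the landmarks says nothing about regions remote from them when deformations are involved, which is why visual checks remain essential.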

Regardless of the output of any numerical technique used, which may only be a single number, it is important for the clinician to appreciate how well in three dimensions the information they define on one study is mapped to another. There are many possible visualization techniques to help to evaluate qualitatively the results of a registration. Most of these are based on data mapping and fusion display techniques. For example, paging through the images of a split screen display and moving a horizontal or vertical divider across regions where edges of structures from both studies are visible can help uncover even small areas of misregistration (Figure 7). Another interesting visual technique involves dynamically switching back and forth between corresponding images from the different studies at about once per second and focusing on particular regions of the anatomy to observe how well they are aligned.

In addition to comparing how well the images from Study A and Study B' correspond at the periphery of anatomic tissues and organs, outlines from one study can be displayed over the images of the other. Figure 8 shows a brain surface which was automatically segmented


Figure 6. Different approaches to display data from multiple studies which have been registered and reformatted. (a) Side-by-side display with linked cursor. (b) Split screen display. (c) Colourwash overlay.


Figure 7. Image-image visual validation using split screen displays of native MR and reformatted CT study.


from the treatment planning CT study and mapped to the MR study. The agreement between the CT-based outlines at the different levels and planes of the MR study demonstrates the accuracy of the registration.

In practice, the accuracy of the registration process depends on a number of factors. For multimodality registration of PET/CT/MR data in the brain, registration accuracy on the order of the voxel size of the imaging studies can be achieved. Outside the head, many factors confound single-voxel-level accuracy, such as machine-induced geometric and intensity distortions as well as dramatic changes in anatomy and tissue loss or gain. Nevertheless, accuracy at the level of a few voxels is certainly possible in many situations.

Clinical applications

Image registration and data fusion are useful at each step of the patient management process in radiation therapy: for initial diagnosis and staging, during treatment planning and delivery, and after therapy to help monitor the patient's response. The overall purpose of these tools at each stage is the same: to help integrate the information from different imaging studies in a quantitative manner to create a more complete representation of the patient. Over the past several years, treatment planning and treatment delivery systems have evolved to provide direct support for image registration and data fusion. Typical examples of how these techniques are used for treatment planning, delivery and adaptation are described here.

Treatment planning

Most modern treatment planning systems permit the use of one or more datasets in addition to the treatment planning CT for structure delineation and visualization. These are sometimes referred to as ‘‘secondary’’ datasets. In order to transfer anatomic outlines and other geometric information from these datasets to the planning CT, the transformation between the secondary dataset and the planning CT is required. Furthermore, using the inverse of this transformation, it is also possible to transfer information computed using the planning CT, such as the planned dose, to the secondary dataset.
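For the rigid case, this forward-and-inverse mapping can be sketched with a 4x4 homogeneous transformation (the rotation, translation and outline coordinates below are hypothetical, not any planning system's API):

```python
import numpy as np

# Hypothetical rigid transformation from the secondary (e.g. MR) dataset to
# the planning CT: a 10 degree rotation about z plus a translation in mm.
theta = np.deg2rad(10.0)
T = np.array([
    [np.cos(theta), -np.sin(theta), 0.0,  5.0],
    [np.sin(theta),  np.cos(theta), 0.0, -3.0],
    [0.0,            0.0,           1.0, 12.0],
    [0.0,            0.0,           0.0,  1.0],
])

def map_points(points_xyz, transform):
    """Apply a 4x4 homogeneous transform to an (N, 3) array of points (mm)."""
    homo = np.c_[points_xyz, np.ones(len(points_xyz))]
    return (homo @ transform.T)[:, :3]

# Outline vertices drawn on the secondary dataset, mapped into the CT frame.
outline_mr = np.array([[10.0, 20.0, 30.0], [11.0, 21.0, 31.0]])
outline_ct = map_points(outline_mr, T)

# The inverse transform carries CT-frame information (e.g. dose sample
# locations) back to the secondary dataset.
back = map_points(outline_ct, np.linalg.inv(T))
```

The same pattern applies per control point for deformable transformations, with the caveat that a deformable inverse generally must be computed numerically rather than with a matrix inverse.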

Incorporation of secondary or complementary data from MRI and nuclear medicine imaging studies is becoming increasingly common. MR provides superior soft tissue contrast relative to CT, and the ability to image directly along arbitrary planes can aid in the visualization and delineation of certain anatomic structures, such as the optic nerves and chiasm. MR can also provide information on localized metabolite concentrations using spectroscopy [3, 33]. Incorporation of functional information from PET and SPECT can help remove ambiguities that might exist on the treatment planning CT between the tumour and other conditions such as atelectasis and necrosis [34]. These studies can also indicate nodal involvement and provide a map of local tissue function that can be used to construct objective functions for dose optimization [3, 35].

Figure 9 illustrates the use of MR as a secondary dataset for target and normal structure delineation. Axial and coronal MR studies were acquired and registered to the treatment planning CT using a geometric transformation which allowed rotations and translations, as the anatomy in this region moves in a rigid fashion. Since the image data were from different modalities, the mutual information registration metric was used. Split-screen visualization of the registered datasets was used to validate the computed transformation, which was judged to be accurate to within 1–2 mm over the image volume (Figure 7). The gross tumour volume (GTV) was defined as the region of enhancement in the post-Gd-DTPA contrast MR studies. The clinician outlined this volume on both the axial and coronal sets of MR images. The optic nerves and chiasm were outlined on the coronal MR study. The outlines were used to generate a 3D surface description for each tissue and these were mapped to the coordinate system of the planning CT


Figure 8. Image-geometry visual validation: overlay of CT-defined brain outlines on MR images.


using the computed transformation. The outlines of these mapped surfaces on each CT image were derived by intersecting the transformed surfaces with the planes defined by each image (Figure 5). Because of differences in partial volume averaging between the axial and coronal MR images, the outlines derived from the axial and coronal MR data are not identical. In these cases, the clinician has the choice to use one or other outline or to generate a composite outline using a Boolean OR operation.

For this example, the CT greyscale data did not contribute any information for the definition of the GTV, optic nerves, or optic chiasm. Had the physician outlined a CT-based GTV, it could have been incorporated directly into the composite GTV or compared with the MR-based outlines to reconcile potential ambiguities. At this point, the outlines of the tumour and normal structures were used for treatment planning as if they had been derived using only the planning CT. The final planning target volume (PTV) was created by uniformly expanding the composite GTV surface by 5 mm to account for setup uncertainty. A treatment plan and dose distribution were generated using the CT data, PTV and normal structures. The CT-based dose distribution was then mapped back to the MR study for further visualization of the dose relative to the underlying anatomy.
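The composite-outline and margin-expansion steps above can be sketched on binary voxel masks (the mask shapes and voxel spacing below are hypothetical; a planning system would typically operate on surface meshes): the composite GTV is a voxelwise Boolean OR, and a uniform 5 mm expansion is approximated by binary dilation with a 5 mm ball scaled to the voxel size.

```python
import numpy as np
from scipy import ndimage

# Hypothetical GTV masks from the axial and coronal MR outlines,
# resampled onto the planning-CT grid (slices, rows, columns).
shape = (40, 64, 64)
gtv_axial = np.zeros(shape, dtype=bool)
gtv_coronal = np.zeros(shape, dtype=bool)
gtv_axial[18:25, 28:36, 30:38] = True
gtv_coronal[17:24, 29:37, 29:37] = True

# Composite GTV: Boolean OR of the two outline sets.
gtv = gtv_axial | gtv_coronal

# PTV: uniform 5 mm expansion via dilation with an ellipsoidal structuring
# element that corresponds to a 5 mm sphere in physical units.
spacing = np.array([3.0, 1.0, 1.0])            # assumed voxel size in mm
r = np.ceil(5.0 / spacing).astype(int)         # radius in voxels per axis
z, y, x = np.ogrid[-r[0]:r[0] + 1, -r[1]:r[1] + 1, -r[2]:r[2] + 1]
ball = ((z * spacing[0])**2 + (y * spacing[1])**2 + (x * spacing[2])**2) <= 5.0**2
ptv = ndimage.binary_dilation(gtv, structure=ball)
```

On a voxel grid the dilation is only accurate to the voxel size, which is one reason margin expansion is often performed on the surface representation instead.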

Treatment delivery 

Once a treatment plan is created, it is transferred to the treatment unit for delivery. The location and orientation of the patient on the treatment machine must be adjusted so that the centre and orientation of the coordinate system of the treatment plan coincide with those of the treatment unit. Image registration is typically used to carry out this process using images acquired in the treatment room and the planning CT. The most common practice is to generate a pair of orthogonal digitally reconstructed radiographs (DRRs) from the planning CT and register these simulated radiographs with actual radiographs acquired by a flat-panel imager attached to the treatment unit. It is now also possible to acquire volumetric image data at the treatment unit using cone-beam reconstruction of a set of projection images acquired by rotating the treatment gantry around the patient (see papers by Kirby and Glendinning, Moore et al and Chen and Pouliot in this issue). These cone-beam data can be registered directly with the planning CT to determine how to shift (and possibly rotate) the treatment table to properly position the patient for treatment [36, 37].

Figure 10 shows an example of an interface for 3D image-based alignment at the treatment unit using a cone-beam CT dataset and the planning CT. Automated image registration using successively finer data resolution and a mutual information metric is used to determine the rotations and translations required to align the two datasets. These are then translated into machine parameters which can be downloaded automatically to the treatment unit for patient set-up. In this example, the accuracy of the registration is assessed using both image-image and structure-image overlay displays. These same tools and image data are also available off-line so that the clinician can track and analyse the progress of the treatments.
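A minimal sketch of this coarse-to-fine mutual information idea is given below (2D images, integer translations only, and a toy mutual information estimate from a joint histogram; a clinical system also optimizes rotations and refines to sub-voxel precision):

```python
import numpy as np
from scipy import ndimage

def mutual_information(a, b, bins=32):
    """Mutual information of two same-shape images, from their joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

def register_translation(fixed, moving, levels=3, search=2):
    """Coarse-to-fine exhaustive search for the integer shift that, applied
    to `moving`, maximizes mutual information with `fixed`."""
    best = np.zeros(2, dtype=int)              # estimate in full-resolution pixels
    for level in reversed(range(levels)):      # coarse first, then finer levels
        step = 2 ** level
        f = fixed[::step, ::step]
        m = moving[::step, ::step]
        centre = best // step
        scores = {}
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cand = centre + (dy, dx)
                shifted = ndimage.shift(m, cand, order=1, mode="nearest")
                scores[tuple(cand)] = mutual_information(f, shifted)
        best = np.array(max(scores, key=scores.get)) * step
    return best

# Demo: recover a known shift of a smoothed random image.
rng = np.random.default_rng(0)
fixed = ndimage.gaussian_filter(rng.random((64, 64)), 3)
moving = ndimage.shift(fixed, (4, -3), order=1, mode="nearest")
recovered = register_translation(fixed, moving, levels=2, search=2)
```

The multiresolution pyramid is what keeps the search tractable: each level only refines the estimate inherited from the coarser level instead of searching the full shift range at full resolution.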

Treatment adaptation and customization

On-line imaging has made it more convenient to acquire image data of the patient over the course of treatment. Using these data, it is possible to uncover changes in patient anatomy or treatment setup that are significant and dictate changes to the original treatment plan. Better estimates of individual treatment doses can be computed using these data and the actual machine parameters. By registering these data to the ‘‘base’’ treatment planning CT, it is possible to construct a more complete model of the accumulated dose to the patient. This information can then be used to assess if and how a treatment plan should be adapted or further customized [38–41].

Figure 11 shows an example of dose accumulation for two datasets of the patient at different points in the breathing cycle. The dose distribution displayed on the left image was computed directly using the image dataset shown [42]. The dose distribution displayed on

Figure 9. Incorporation of MR image data into the treatment planning process.


the middle image was computed from another dataset and mapped to the image data shown using the B-spline deformation field computed by registering the two datasets using a sum-of-squares difference metric. The dose distribution on the right is the weighted sum of the two distributions. This process can be continued throughout the course of therapy to provide up-to-date information on the delivered dose.
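The accumulation step itself can be sketched as follows (hypothetical dose grids and a deformation field expressed in voxel units; the hard work in practice is computing that field): the dose from the second breathing state is pulled back onto the reference grid through the deformation field and then combined as a weighted sum, with weights reflecting the fraction of the breathing cycle spent in each state.

```python
import numpy as np
from scipy import ndimage

def accumulate_dose(dose_ref, dose_other, dvf, w_ref=0.5, w_other=0.5):
    """Map dose_other onto the reference grid through the deformation field
    dvf (shape (3,) + grid shape, displacements in voxel units), then take
    the weighted sum with the dose computed on the reference anatomy."""
    grid = np.indices(dose_ref.shape, dtype=float)
    mapped = ndimage.map_coordinates(dose_other, grid + dvf,
                                     order=1, mode="nearest")
    return w_ref * dose_ref + w_other * mapped

# Toy example: two dose grids on a common lattice and, for simplicity,
# an identity deformation field.
rng = np.random.default_rng(0)
shape = (8, 16, 16)
dose_a = rng.random(shape) * 2.0     # Gy, reference breathing state
dose_b = rng.random(shape) * 2.0     # Gy, second breathing state
dvf = np.zeros((3,) + shape)         # identity deformation here
total = accumulate_dose(dose_a, dose_b, dvf, w_ref=0.6, w_other=0.4)
```

Repeating this over the course of therapy, with one deformation field per acquired dataset, accumulates the delivered dose on the base planning CT.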

Summary

Over the past several years there has been an explosion in the use of image data from a variety of modalities to aid in treatment planning, delivery and evaluation. In order to make quantitative use of these data it is necessary to determine the transformation that relates the coordinates of the individual datasets to one another. The process of finding this transformation is referred to as image registration. Once the geometric relationship between the datasets has been determined, it is possible to utilize the information they provide by mapping the image data, derived structures and computed dose distributions between datasets using a process called data fusion. Many different techniques have been and are being studied to improve the accuracy and utility of both image registration and data fusion. Both processes are now essential components of modern treatment planning and delivery. As the need for, availability and diversity of image data continue to increase, they will become even more important to each part of the patient management process. These tools, however, cannot replace clinical judgment. Different imaging modalities image the same tissues differently and, although these tools may help us to understand better and differentiate between tumour and non-tumour, they cannot yet make the ultimate decision of what to treat and what not to treat. These decisions still lie with the clinician, who now has more sophisticated tools to help make them.

Acknowledgments

Portions of the text and some of the figures presented here were published previously in Kessler ML, Roberson M. Image registration and data fusion for radiotherapy treatment planning. In: Schlegel W, Bortfeld T, Grosu A-L, editors. New technologies in radiation oncology. Springer, 2006.

Figure 10. Volumetric registration at the treatment unit. A cone-beam CT acquired at the time of treatment is registered to the treatment planning CT (larger dataset) to properly position the patient on the treatment table (courtesy of Peter Monroe, PhD, Varian Medical Systems).

Figure 11. Example of dose summation/accumulation using registered datasets (courtesy of Mihaela Rosu, The University of Michigan).


References

1. Webb S. The physical basis of IMRT and inverse planning. Br J Radiol 2003;76:678–89.
2. Eisbruch A. Intensity-modulated radiation therapy: a clinical perspective. Semin Radiat Oncol 2002;12:197–8.
3. Ling CC, Humm J, Larson S, Amols H, Fuks Z, Leibel S, et al. Towards multidimensional radiotherapy (MD-CRT): biological imaging and biological conformality. Int J Radiat Oncol Biol Phys 2000;47:551–60.
4. Maintz JB, Viergever MA. A survey of medical image registration. Med Image Anal 1998;2:1–36.
5. Hill DL, Batchelor PG, Holden M, Hawkes DJ. Medical image registration. Phys Med Biol 2001;46:R1–R45.
6. Townsend DW, Beyer T. A combined PET/CT scanner: the path to true image fusion. Br J Radiol 2002;75:24S–30S.
7. DICOM Part 3, PS3.3 – Service Class Specifications. Rosslyn, Virginia, USA: National Electrical Manufacturers Association, 2004.
8. McLaughlin PW, Narayana V, Meriowitz A, Troyer S, Roberson PL, Gonda R Jr, et al. Vessel sparing prostate radiotherapy: dose limitation to critical erectile vascular structures (internal pudendal artery and corpus cavernosum) defined by MRI. Int J Radiat Oncol Biol Phys 2005;61:20–31.
9. McLaughlin PW, Narayana V, Kessler M, McShan D, Troyer S, Marsh L, et al. The use of mutual information in registration of CT and MRI datasets post permanent implant. Brachytherapy 2004;3:61–70.
10. Roberson PL, McLaughlin PW, Narayana V, Troyer S, Hixson GV, Kessler ML. Use and uncertainties of mutual information for CT/MR registration post permanent implant of the prostate. Med Phys 2005;32:473–82.
11. Unser M, Aldroubi A, Eden M. B-spline signal processing: part I-theory. IEEE Trans Signal Processing 1993;41:821–33.
12. Kybic J, Unser M. Fast parametric elastic image registration. IEEE Trans Image Processing 2003;12:1427–42.
13. Maurer CR, Aboutanos GB, Dawant BM, et al. Effect of geometrical distortion correction in MR on image registration accuracy. J Comput Assist Tomogr 1996;20:666–79.
14. Bookstein F. Principal warps: thin-plate splines and the decomposition of deformations. IEEE Trans Pattern Analysis Machine Intelligence 1989;11:567–85.
15. Meyer CR, Boes JL, Kim B, Bland PH, Zasadny KR, Kison PV, et al. Demonstration of accuracy and clinical versatility of mutual information for automatic multimodality image fusion using affine and thin-plate spline warped geometric deformations. Med Image Anal 1997;1:195–206.
16. Coselmon MM, Balter JM, McShan DL, Kessler ML. Mutual information based CT registration of the lung at exhale and inhale breathing states using thin-plate splines. Med Phys 2004;31:2942–8.
17. Lu W, Chen M, Olivera GH, Ruchala KJ, Mackie T. Fast free-form deformable registration via calculus of variations. Phys Med Biol 2004;49:3067–87.
18. Christensen GE, Carlson B, Chao KS, Yin P, Grigsby PW, Nguyen K, et al. Image-based dose planning of intracavitary brachytherapy: registration of serial-imaging studies using deformable anatomic templates. Int J Radiat Oncol Biol Phys 2001;51:227–43.
19. Thirion JP. Image matching as a diffusion process: an analogy with Maxwell's demons. Med Image Anal 1998;2:243–60.
20. Wang H, Dong L, Lii MF, Lee AL, de Crevoisier R, Mohan R, et al. Implementation and validation of a three-dimensional deformable registration algorithm for targeted prostate cancer radiotherapy. Int J Radiat Oncol Biol Phys 2005;61:725–35.
21. Brock KK, Sharpe MB, Dawson LA, Kim SM, Jaffray DA. Accuracy of finite element model (FEM)-based multi-organ deformable image registration. Med Phys 2005;32:1647–59.
22. Kessler ML, Pitluck S, Petti PL, Castro JR. Integration of multimodality imaging data for radiotherapy treatment planning. Int J Radiat Oncol Biol Phys 1991;21:1653–67.
23. Balter JM, Pelizzari CA, Chen GT. Correlation of projection radiographs in radiation therapy using open curve segments and points. Med Phys 1992;19:329–34.
24. Langmack KA. Portal imaging. Br J Radiol 2001;74:789–804.
25. Pelizzari CA, Chen GT, Spelbring DR, Weichselbaum RR. Accurate three-dimensional registration of CT, PET, and/or MR images of the brain. J Comput Assist Tomogr 1989;13:20–6.
26. van Herk M, Kooy HM. Automatic three-dimensional correlation of CT-CT, CT-MRI, and CT-SPECT using chamfer matching. Med Phys 1994;21:1163–78.
27. Kim J, Fessler JA. Intensity-based image registration using robust correlation coefficients. IEEE Trans Med Imaging 2004;23:1430–44.
28. Viola P, Wells WM. Alignment by maximization of mutual information. Int J Computer Vision 1997;24:137–54.
29. Maes F, Collignon A, Vandermeulen D, Marchal G, Suetens P. Multimodality image registration by maximization of mutual information. IEEE Trans Med Imaging 1997;16:187–98.
30. Roman S. Introduction to coding and information theory. Undergraduate Texts in Mathematics. New York, NY: Springer-Verlag, 1997. ISBN 0-387-94704-3.
31. Staring M, Klein S, Pluim JP. Nonrigid registration with adaptive, content-based filtering of the deformation field. Proc SPIE Medical Imaging 2005: Image Processing. 2005:212–21.
32. Ruan R, Fessler JA, Roberson M, Balter J, Kessler M. Nonrigid registration using regularization that accommodates for local tissue rigidity. Proc SPIE Medical Imaging 2006: Image Processing. Vol 6144, 2006. (In press).
33. Graves EE, Pirzkall A, Nelson SJ, Larson D, Verhey L. Registration of magnetic resonance spectroscopic imaging to computed tomography for radiotherapy treatment planning. Med Phys 2001;28:2489–96.
34. Munley MT, Marks LB, Scarfone C, Sibley GS, Patz EF Jr, Turkington TG, et al. Multimodality nuclear medicine imaging in three-dimensional radiation treatment planning for lung cancer: challenges and prospects. Lung Cancer 1999;23:105–14.
35. Marks LB, Spencer DP, Bentel GC, et al. The utility of SPECT lung perfusion scans in minimizing and assessing the physiologic consequences of thoracic irradiation. Int J Radiat Oncol Biol Phys 1993;26:659–68.
36. Jaffray DA, Siewerdsen JH, Wong JW, Martinez AA. Flat-panel cone-beam computed tomography for image-guided radiation therapy. Int J Radiat Oncol Biol Phys 2002;53:1337–49.
37. Mackie TR, Kapatoes J, Ruchala K, Lu W, Wu C, Olivera G, et al. Image guidance for precise conformal radiotherapy. Int J Radiat Oncol Biol Phys 2003;56:89–105.
38. Smitsmans MH, Wolthaus JW, Artignan X, de Bois J, Jaffray DA, Lebesque JV, et al. Automatic localization of the prostate for on-line or off-line image-guided radiotherapy. Int J Radiat Oncol Biol Phys 2004;60:623–35.
39. Yan D, Wong J, Vicini F, Michalski J, Pan C, Frazier A, et al. Adaptive modification of treatment planning to minimize the deleterious effects of treatment setup errors. Int J Radiat Oncol Biol Phys 1997;38:197–206.
40. Yan D, Lockman D, Martinez A, Wong J, Brabbins D, Vicini F, et al. Computed tomography guided management of interfractional patient variation. Semin Radiat Oncol 2005;15:168–79.
41. Lam KL, Ten Haken RK, Litzenberg D, Balter JM, Pollock SM. An application of Bayesian statistical methods to adaptive radiotherapy. Phys Med Biol 2005;50:3849–58.
42. Rosu M, Chetty IJ, Balter JM, Kessler ML, McShan DL, Ten Haken RK. Dose reconstruction in deforming lung anatomy: dose grid size effects and clinical implications. Med Phys 2005;32:2487–95.
