
Non-invasive tumor localization by 3D registration of range sensor and computer tomography images.

Ruben Posada-Gomez¹, Giner Alor-Hernandez², Mario Alberto García-Martínez³, Albino Martínez-Sibaja⁴

Departamento de Postgrado e Investigación, Instituto Tecnológico de Orizaba, Av. Ote. 6 No. 852, Col. Emiliano Zapata, Orizaba, Veracruz, México

¹[email protected], ²[email protected], ³magarcia@itorizaba.edu.mx, ⁴[email protected]

Abstract

This paper describes a patient positioning method that allows non-invasive localization of tumors in the field of conformal radiotherapy for the treatment of cancers located in the head. The proposed methodology uses computer tomography and range sensor images of the patient's head: on the one hand, the tomography data contain the localization of the tumor relative to the patient's head; on the other hand, the range sensor images give the localization of the patient's face within the treatment room.

A 3D multimodality registration of the computer tomography and range sensor images, together with the calibration of the range sensor in the radiotherapy treatment room, avoids the use of a stereotactic frame for patient positioning. Tests performed with human data show that the registration algorithm is accurate (0.1 mm mean distance between homologous points) and robust even under facial expression changes.

1. Introduction

The objective of a radiotherapy treatment is to destroy tumors while preserving the surrounding healthy organs as much as possible. Radiotherapy machines consist of X-ray sources turning around one axis and emitting ionizing beams that destroy carcinogenic cells. In radiotherapy, patient positioning is a very important task, since the success of the treatment depends on positioning accuracy.

With the arrival of the multileaf collimator [1,2] and the development of 3D conformal radiotherapy (CRT) [3], patient positioning has become crucial. Several patient positioning techniques have been proposed, many of them invasive.

Treatment protocols depend on the organ to be irradiated. This paper focuses on intracranial tumor treatment. In cranial CRT, patient positioning is usually based on stereotactic frames [4], which consist of metallic frames screwed onto the patient's skull; a computer tomography (CT) is then performed. The stereotactic frame is also visible in the CT, and since both the tumor and the frame are viewed in the coordinate system of the CT-scanner, it is possible to localize the region to be treated in CT coordinates. The reference marks of the stereotactic frame serve to locate the patient in the radiotherapy room (Fig. 1), using a system of three laser beams to exactly superimpose the frame coordinate system onto the coordinate system of the radiotherapy machine $(0_m, \vec{x}_m, \vec{y}_m, \vec{z}_m)$, where the origin $0_m$ represents the machine isocentre, that is, the point where all ionizing beams converge. Knowing the exact position of the tumor in the frame coordinate system, it is possible to translate and rotate the table on which the patient is lying so that the tumor centre matches the machine isocentre $0_m$ exactly.

Other positioning methods include the use of implanted radiopaque fiducials [5], but even though they facilitate the identification of an internal marker, they are even more invasive. The recent development of 3D surface sensors, which digitize a scene in a short time period, has put forward their use for patient positioning in a radiotherapy room [6]. Another patient positioning method is based on infrared LEDs and spherical markers fixed onto a dental support held between the maxillary dentition of the patient [7]. This approach can be used for fractionated radiotherapy treatments, but patient-dedicated


devices are needed (a molded bite-plate system), and the treatment becomes more complex for radiologists.

Recently, an interesting head positioning method has been proposed that uses two 3D sensors, one fixed in the CT room and the other fixed in the treatment room [8]. The principle of the algorithm can be divided into three parts: the generation of a reference surface during CT simulation, "controlled" acquisitions of the patient's face in the therapy room, and data alignments providing the patient positioning parameters. In the CT room, the 3D sensor position is calibrated using a specially designed calibration plate. This calibration provides the geometrical link between the coordinate systems of the 3D sensor and of the CT-scanner. During the CT data acquisition, the 3D sensor is used to acquire the patient's face. Since the treatment room sensor is calibrated with the same method as the CT-simulation sensor, the face point positions are known in the irradiation machine coordinate system.

The method proposed here requires only the patient's internal landmarks and was applied to lesions located in the head. Moreover, the algorithm was conceived with the intent of modifying the different steps of the traditional treatment method as little as possible, so as to respect radiologists' habits.

2. Methodology

The proposed positioning procedure is divided into two stages (Fig. 2). The first one is done before the treatment and consists, on the one hand, of the calibration of the surface sensor in the treatment room and, on the other hand, of obtaining the 3D surface of the patient's head from the CT images. The second stage starts with a rough positioning of the patient and the acquisition of the patient's face with the surface sensor; a 3D registration of this information with the diagnostic image (3D CT reconstruction) then relates the information of the radiotherapy room to the CT data, so that every position in the room is known with respect to the coordinates of the 3D range sensor. The movements required to correctly position the patient in the irradiation room can then be determined.

In other words, the purpose of the positioning method is to find the relation between the two main reference systems that must be linked in order to know the position of the target volume in the treatment room: the CT coordinate system RCT $(0_{CT}, \vec{x}_{CT}, \vec{y}_{CT}, \vec{z}_{CT})$ and the treatment room coordinate system Rm $(0_m, \vec{x}_m, \vec{y}_m, \vec{z}_m)$. The coordinate system of the CT diagnostic images comes from the 3D reconstruction of a CT-scan; these images are used in 3D dosimetry, and the relation between the target volumes and the external reference points (generally given by the stereotactic frame) is known in this reference system. To position the patient in the treatment room, RCT and Rm are related through a third reference system given by the stereotactic frame or a similar system. The proposed patient positioning method replaces the stereotactic frame with a surface range sensor (Fig. 3).

Fig. 1. Patient's positioning using the stereotactic frame (radiotherapy machine axes xm, ym, zm; stereotactic frame with reference marks).

Fig. 2. Methodology for patient's positioning. Before the treatment: 3D reconstruction of the patient's head from CT images; localization of the isocentre in 3D sensor coordinates (calibration). During the treatment: initial positioning; 3D acquisition of the patient's face in the treatment room; multimodal 3D registration; determination of the necessary movements of the positioning table; correction of the patient's position.


The surface sensor data are given in a referential R3DS $(0_{3DS}, \vec{x}_{3DS}, \vec{y}_{3DS}, \vec{z}_{3DS})$. The relation between R3DS and Rm can be expressed by

$$
\begin{bmatrix} x_{3DS} \\ y_{3DS} \\ z_{3DS} \\ 1 \end{bmatrix}
=
\begin{bmatrix}
r_1 & r_2 & r_3 & t_1 \\
r_4 & r_5 & r_6 & t_2 \\
r_7 & r_8 & r_9 & t_3 \\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} x_m \\ y_m \\ z_m \\ 1 \end{bmatrix}
\qquad (1)
$$
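As an illustration only, the following Python sketch (NumPy assumed; the numerical values are placeholders, not calibration results) builds the 4x4 matrix of equation (1) from the rotation coefficients r1...r9 and the translation t1, t2, t3, and maps a point expressed in Rm into R3DS coordinates.

```python
import numpy as np

def homogeneous(R, t):
    """Build the 4x4 homogeneous matrix of Eq. (1) from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def to_sensor_frame(T_3ds_from_m, p_m):
    """Map a point given in the machine frame Rm into sensor coordinates R3DS."""
    p = np.append(p_m, 1.0)          # homogeneous coordinates [x, y, z, 1]
    return (T_3ds_from_m @ p)[:3]

# Placeholder values: identity rotation and an arbitrary offset (not real calibration data).
R = np.eye(3)
t = np.array([10.0, -5.0, 250.0])    # mm, hypothetical sensor offset
T_3ds_from_m = homogeneous(R, t)
print(to_sensor_frame(T_3ds_from_m, np.array([0.0, 0.0, 0.0])))  # machine isocentre in R3DS
```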

It is necessary to manufacture a calibration object (CO) that makes it possible to know precisely, by construction, the position of particular points in Rm. This CO carries its own referential Rpl and must be accurately aligned with Rm by the user (Fig. 4).

The constraints of a radiotherapy room (mainly the distance between the patient and the 3D surface sensor) do not allow the use of mechanical contact sensors; on the other hand, stereovision and passive triangulation are not suitable because of the lower accuracy of these systems (in the range of centimetres) and the illumination conditions of the radiotherapy room [9]. Finally, active triangulation sensors are innocuous and currently represent a good compromise between accuracy and acquisition time. Most common calibration methods are oriented towards transferring 3D world coordinates into image coordinates: relating the 3D coordinate points to their projection in the acquired images [10], extracting the physical parameters of the camera and even modeling the lens distortion [11].

A complete calibration process includes the calculation of intrinsic and extrinsic parameters. Generally, intrinsic calibration helps the manufacturer to adjust the internal components of the sensor, while extrinsic calibration gives the orientation and position of the sensor relative to some fiducial coordinate system. The latter is the one needed for patient positioning, since it relates the radiotherapy room to the 3D sensor.

As the proposed calibration method uses only 3D information, it does not depend on the 3D sensor measuring principle and could be applied even if a different 3D sensor were used. The calibration procedure is composed of three parts: the calibration object and its acquisition, the identification of the three-dimensional markers in the generated cloud of points, and the relation between the two coordinate systems. Theoretically, the calibration step must be done only once, after the installation of the 3D sensor in the treatment room; nevertheless, we propose to repeat this step periodically to ensure the correct relation between these references.
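The paper does not detail how the three-dimensional markers are identified in the cloud of points; one common option, sketched below under that assumption, is an algebraic least-squares sphere fit applied to the points segmented around each marker (NumPy assumed; fit_sphere_center is an illustrative name).

```python
import numpy as np

def fit_sphere_center(points):
    """Algebraic least-squares sphere fit: returns the centre and radius of the best-fit sphere.

    points: (N, 3) array of 3D points belonging to one marker sphere of the calibration object.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([2 * x, 2 * y, 2 * z, np.ones(len(points))])
    b = x**2 + y**2 + z**2
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)   # solve A [cx, cy, cz, d]^T = b
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)    # d = r^2 - |c|^2
    return center, radius
```

In practice the acquired cloud would first be segmented into the three marker regions (for instance by clustering around the roughly known marker positions) before fitting each sphere.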

The calibration object has its own coordinate system Rpl $(0_{pl}, \vec{x}_{pl}, \vec{y}_{pl}, \vec{z}_{pl})$. Knowing the localization of Rpl, it is possible to establish its relation with R3DS as:

$$
\begin{aligned}
\vec{x}_{pl} &= (i_3 - i_4)\,\vec{x}_{3DS} + (j_3 - j_4)\,\vec{y}_{3DS} + (k_3 - k_4)\,\vec{z}_{3DS} \\
\vec{y}_{pl} &= (i_1 - i_4)\,\vec{x}_{3DS} + (j_1 - j_4)\,\vec{y}_{3DS} + (k_1 - k_4)\,\vec{z}_{3DS} \\
\vec{z}_{pl} &= \vec{x}_{pl} \wedge \vec{y}_{pl}
\end{aligned}
\qquad (2)
$$

where $(i_1, j_1, k_1)$, $(i_3, j_3, k_3)$ and $(i_4, j_4, k_4)$ are the centres of three spheres located in the calibration object.
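A minimal sketch of equation (2), assuming NumPy and assuming that the resulting axes are normalized (the equation leaves normalization implicit): it builds the axes of Rpl from the three sphere centres expressed in R3DS.

```python
import numpy as np

def calibration_frame(c1, c3, c4):
    """Axes of the calibration-object frame Rpl from the three sphere centres (Eq. (2)).

    c1, c3, c4: centres (i1, j1, k1), (i3, j3, k3), (i4, j4, k4) expressed in R3DS.
    Normalization is added here; the equation in the paper leaves it implicit.
    """
    c1, c3, c4 = (np.asarray(c, dtype=float) for c in (c1, c3, c4))
    x_pl = c3 - c4
    y_pl = c1 - c4
    z_pl = np.cross(x_pl, y_pl)          # z_pl = x_pl ^ y_pl
    x_pl /= np.linalg.norm(x_pl)
    y_pl /= np.linalg.norm(y_pl)
    z_pl /= np.linalg.norm(z_pl)
    return x_pl, y_pl, z_pl
```

Together with an origin (for instance one of the sphere centres), these axes define the rigid relation between Rpl and R3DS; since the calibration object is aligned with Rm by the user, this also provides the transform of equation (1).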

To relate RCT and Rm, it is necessary to find the relation between the anatomical structures visible in these two reference systems.

Fig. 3. Different coordinate systems involved in a radiotherapy treatment (CT-scanner frame RCT, 3D sensor frame R3DS and machine frame Rm, linked by calibration and 3D registration).

Fig. 4. Calibration of the system (the calibration object, with its frame Rpl, lies on the positioning table and is observed by the 3D sensor).


As the surface of the patient's face is visible in both reference systems, a registration between the images generated by the 3D surface sensor and the 3D reconstruction of the CT-scan images is performed.

The principle of the data registration is mathematically formulated in (3), where Dm and Dt are two data sets containing common information (homologous structures). The data are registered when the set Dt, transformed by T, best matches the model data set Dm. ft and fm represent the algorithms extracting from Dt and Dm, respectively, the pertinent information used to compute the similarity measure S. The latter gives an estimation of the degree of similarity between the two data sets. T is typically a matrix whose parameters are modified by an optimization algorithm in order to maximize (or minimize) S. When S reaches the sought extremum, the parameter matrix $\tilde{T}$ for data superimposition is known.

$$
\tilde{T} = \arg \operatorname*{opt}_{T} \; S\!\left[\, f_m(D_m),\; T\bigl(f_t(D_t)\bigr) \,\right]
\qquad (3)
$$
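A minimal sketch of the optimization in (3), assuming SciPy and a rigid transform parameterized by three rotation angles and three translations; similarity stands for the measure S (for instance the Hausdorff distance introduced below), and f_m and f_t are taken as the identity here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def register(D_m, D_t, similarity):
    """Estimate the rigid transform T of Eq. (3) so that T(D_t) best matches D_m.

    D_m, D_t: (N, 3) and (M, 3) point sets; similarity(A, B) returns the measure S to minimize.
    Returns the six parameters (rx, ry, rz, tx, ty, tz), angles in radians.
    """
    def apply(params, pts):
        rx, ry, rz, tx, ty, tz = params
        R = Rotation.from_euler("xyz", [rx, ry, rz]).as_matrix()
        return pts @ R.T + np.array([tx, ty, tz])

    cost = lambda p: similarity(D_m, apply(p, D_t))
    res = minimize(cost, x0=np.zeros(6), method="Powell")   # derivative-free optimizer
    return res.x                                            # parameters of the estimated T~
```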

For the registration between 3D CT images and 3D surface images, two main problems occur: the number of points generated by both modalities, and the difference in image resolution. One solution to the resolution-difference problem is to employ a similarity measure based on a distance, such as the Hausdorff distance H(A,B). The Hausdorff distance (4) is based on the determination of the directed Hausdorff distance (5). The latter is given by the largest distance among all smallest distances from a point $a \in A$ to the points b of B.

$$
H(A,B) = \max\bigl[\, h(A,B),\; h(B,A) \,\bigr] \qquad (4)
$$

where

$$
h(A,B) = \max_{a \in A} \; \min_{b \in B} \; \lVert a - b \rVert \qquad (5)
$$

$$
h(B,A) = \max_{b \in B} \; \min_{a \in A} \; \lVert b - a \rVert \qquad (6)
$$
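A brute-force sketch of equations (4)-(6), assuming NumPy; for large clouds, scipy.spatial.distance.directed_hausdorff offers an equivalent but faster computation.

```python
import numpy as np

def directed_hausdorff(A, B):
    """h(A, B): largest of the smallest distances from each point of A to the set B (Eq. (5))."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)   # (N, M) pairwise distance matrix
    return d.min(axis=1).max()

def hausdorff(A, B):
    """H(A, B) = max[h(A, B), h(B, A)] (Eq. (4))."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))
```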

To reduce the registration time, a data down-sampling algorithm [12] was used, taking advantage of the fact that in the estimation of H(A,B) the distance depends only on one pair of points of A and B.
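The specific down-sampling algorithm of [12] is not reproduced here; the following sketch uses a generic voxel-grid reduction (NumPy assumed) only to illustrate where the reduction fits in the pipeline before the Hausdorff-based registration.

```python
import numpy as np

def voxel_downsample(points, voxel=2.0):
    """Keep one representative point per voxel of side `voxel` (mm).

    A generic reduction scheme, not the specific algorithm of [12].
    """
    keys = np.floor(points / voxel).astype(np.int64)          # integer voxel index of each point
    _, idx = np.unique(keys, axis=0, return_index=True)       # first point found in each voxel
    return points[np.sort(idx)]
```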

3. Experiments and results

In order to test the registration algorithm, several patients were acquired with the range sensor and a known transformation was then applied to the data; the difference between the applied transformation and the computed one gives the registration error. Table 1 summarizes the results for ten different patients, to each of whom eight different transformations were applied. It shows that the mean global translation error $m(\delta_t)$ is 8.42 E-06 mm and the mean global rotation error $m(\delta_r)$ is 9.91 E-02 degrees.
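A sketch of this evaluation protocol, under the same assumptions as the previous sketches: a known rigid transform is applied to a face point cloud, the registration re-estimates it, and the translation and rotation errors are computed (hypothetical helper names).

```python
import numpy as np
from scipy.spatial.transform import Rotation

def evaluation_error(face_points, true_params, register):
    """Error of one registration test: apply a known rigid transform, then re-estimate it.

    true_params = (rx, ry, rz, tx, ty, tz), angles in radians, translations in mm.
    `register(D_m, D_t)` is assumed to return the six parameters of the estimated transform
    in the same convention (the Eq. (3) sketch above fits this interface once a similarity
    measure is fixed, e.g. via functools.partial).
    """
    angles = np.asarray(true_params[:3], dtype=float)
    t = np.asarray(true_params[3:], dtype=float)
    R = Rotation.from_euler("xyz", angles).as_matrix()
    moved = face_points @ R.T + t                    # D_m: the face after the known transform
    est = np.asarray(register(moved, face_points))   # estimated parameters of T~
    t_err = np.linalg.norm(est[3:] - t)                      # contributes to m(delta_t), mm
    r_err = np.degrees(np.linalg.norm(est[:3] - angles))     # contributes to m(delta_r), degrees
    return t_err, r_err
```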

When the registration algorithm was developed, it was assumed that the patient adopts a neutral facial expression during the acquisition of the images. The main differences between two images of a patient (Fig. 5) can be located at the level of the eyes (open in one modality, closed in the other) or at the level of the mouth (smiling in one, not smiling in the other). Even though large differences between two acquisitions are not expected, this last situation has been simulated: in Fig. 5a, the patient adopts a relatively neutral expression (upper picture) and the same volunteer then smiles (lower picture). It can be noticed that the down-sampling of the images (Fig. 5b) strongly reduces the differences due to expression. This (merely visual) observation was confirmed by the fact that the registration of faces with and without smiling seemed visually precise.

To determine the influence of expression differences on the registration, volunteers were asked to simulate some extreme situations.

Table 1. Mean error for the translation and rotation parameters.

Image number     | m(δt) [mm] | σ(δt) [mm] | m(δr) [deg] | σ(δr) [deg]
1                | 1.20 E-15  | 1.49 E-15  | 9.90 E-2    | 2.80 E-1
2                | 4.16 E-6   | 1.17 E-5   | 9.90 E-2    | 2.80 E-1
3                | 1.56 E-15  | 1.53 E-15  | 9.90 E-2    | 2.80 E-1
4                | 7.17 E-5   | 3.17 E-5   | 9.90 E-2    | 2.80 E-1
5                | 4.16 E-6   | 1.17 E-5   | 9.90 E-2    | 2.80 E-1
6                | 6.08 E-16  | 7.35 E-16  | 9.90 E-2    | 2.80 E-1
7                | 4.16 E-6   | 1.17 E-5   | 9.90 E-2    | 2.80 E-1
8                | 1.11 E-15  | 9.70 E-16  | 9.90 E-2    | 2.80 E-1
9                | 1.48 E-15  | 1.33 E-15  | 9.90 E-2    | 2.80 E-1
10               | 9.55 E-16  | 7.98 E-16  | 9.90 E-2    | 2.80 E-1
gm (global mean) | 8.42 E-6   | 6.71 E-6   | 9.91 E-2    | 2.80 E-1
gσ (global std.) | 2.23 E-5   | 1.04 E-5   | 1.37 E-4    | 1.35 E-4


3D images (Fig. 6a) were generated for volunteers with the mouth closed (Dm data) and with the mouth wide open (Dt data). The tests were performed using the points of Dm and Dt located in a common volume covering the lower part of the forehead, the eyes, the nose and part of the cheeks. The registration seemed to lead to good results, since the data sets were visually well superimposed (Fig. 6b).

4. Conclusion

The results presented in this contribution show that the proposed positioning algorithm is precise. The proposed method is non-invasive and at least as accurate as the best invasive and non-invasive methods. Moreover, no dedicated piece of equipment must be built for each patient and, except obviously for the positioning step itself, the standard treatment protocols are not affected by the algorithm.

Theoretically, the sensor position in the therapy room is calibrated once and for all, since the device is in a fixed position. In practice, and for safety reasons, the sensor should be calibrated for each irradiation or at least periodically.

The results obtained show that small variations in facial expression do not meaningfully affect the registration results. It is also possible to systematically use the data situated in the region of the nose and the eyes, for example. To conclude, the patient's face seems to be a reference mark that can lead to a robust registration algorithm.

The next step of the positioning algorithm evaluation will consist of the following experiments: patients will be positioned with the classical invasive frame-based method, and the proposed algorithm will be used in parallel to obtain a second set of tumor coordinates.

Fig. 5. a) Original images with two different expressions; b) images sampled with the data down-sampling algorithm.

Fig. 6. a) Points used for the registration (region used in the registration algorithm); b) results of the image registration.


5. References

[1] M.A. Bazioglou, J. Kalef-Ezra & C. Kappas, "Comparison of dosimetric techniques for the assessment of basic dosimetric data of stereotactic fields", Physica Medica, 17(3):123-128, 2001.
[2] S. Papatheodorou, J.C. Rosenwald, M.E. Castellanos, S. Zetkili, L. Bonvalet & G. Gaboriaud, "Utilisation d'un collimateur multilames pour la production de faisceaux modulés en intensité" (Use of a multileaf collimator for the production of intensity-modulated beams), Cancer/Radiother., 2:392-403, Elsevier, Paris, 1998.
[3] S.A. Leibel, C.C. Ling, G.J. Kutcher, R. Mohan, C. Cordon-Cardo & Z. Fuks, "The biological basis for conformal three-dimensional radiation therapy", Int. J. Rad. Onc. Biol. Phys., 21(3):805-811, 1991.
[4] J.G. Schwade, P.V. Houdek, H.J. Landy, J.L. Bujnoski, A.A. Levin, A.A. Abitbol, C.F. Serago & V.J. Pisciotta, "Small field stereotactic external beam radiation therapy of intracranial lesions: fractionated treatment with a fixed halo immobilization device", Radiology, 176:563-565, 1990.
[5] K.P. Gall, L.J. Verhey & M. Wagner, "Computer-assisted positioning of radiotherapy patients using implanted radiopaque fiducials", Med. Phys., 20(4):1153-1159, 1993.
[6] L.S. Ploeger, M. Frenay, A. Betgen, J.A. de Bois, K.G.A. Gilhuijs & M. van Herk, "Application of video imaging for improvement of patient set-up", Radiother. Oncol., 68(3):277-284, 2003.
[7] S.L. Meeks, F.J. Bova, T.H. Wagner, J.M. Buatti, W.A. Friedman & K.D. Foote, "Image localization for frameless stereotactic radiotherapy", Int. J. Radiation Oncology Biol. Phys., 46(5):1291-1299, 2000.
[8] S. Li, D. Liu, G. Yin, P. Zhuang & J. Geng, "Real-time 3D-surface-guided head refixation useful for fractionated stereotactic radiotherapy", Medical Physics, 33(2):492-503, February 2006.
[9] N. Smith, I. Meir, G. Hale, R. Howe, L. Johnson, P. Edwards, D. Hawkes, M. Bidmead & D. Landau, "Real time 3D surface imaging for patient positioning in radiotherapy", Int. J. Rad. Onc. Biol. Phys. (supplement), 57(2):187, 2003.
[10] J. Gühring, "Dense 3-D surface acquisition by structured light using off-the-shelf components", Proc. of Photonics West, Videometrics VII, SPIE, 4309:220-231, San Jose, USA, 2001.
[11] O.D. Faugeras and G. Toscani, "The calibration problem for stereo", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Los Alamitos, CA, 1986, pp. 15-20.
[12] R. Posada, C. Daul, D. Wolf, R. Miranda & L. Leija, "Data down sampling for a fast registration of 3D-medical images in radiotherapy", in Proc. of the Int. Conf. on Electrical and Electronics Engineering (ICEEE), Acapulco, Guerrero, September 8-10, 2004.
