
Computer Vision and Image Understanding 114 (2010) 1318–1328

Time-of-Flight sensor calibration for accurate range sensing

Marvin Lindner a,*, Ingo Schiller b, Andreas Kolb a, Reinhard Koch b

a Institute for Vision and Graphics, University of Siegen, Germany
b Institute of Computer Science, Christian-Albrechts-University of Kiel, Germany

Article info

Article history:
Received 13 January 2009
Accepted 7 November 2009
Available online 21 August 2010

Keywords: Calibration, Time-of-Flight, Photonic Mixer Device

doi:10.1016/j.cviu.2009.11.002

* Corresponding author. E-mail addresses: [email protected] (M. Lindner), [email protected] (I. Schiller).

Abstract

Over the past years, Time-of-Flight (ToF) sensors have become a considerable alternative to conventional distance sensing techniques like laser scanners or image-based stereo vision. Due to their ability to provide full-range distance information at high frame rates, ToF sensors have a significant impact on current research areas like online object recognition, collision prevention, or scene and object reconstruction. Nevertheless, ToF-cameras like the Photonic Mixer Device (PMD) still exhibit a number of error sources that affect the accuracy of the measured distance information. For this reason, the major error sources of ToF-cameras are discussed, along with a new calibration approach that combines intrinsic, distance and reflectivity related error calibration in an overall, easy to use system and thus significantly reduces the number of necessary reference images. The main contribution, in this context, is a new intensity-based calibration model that requires less input data than other models and thus significantly contributes to the reduction of calibration data.


1. Introduction

In many automation areas like robotics or automotive engineering, the determination of distance information as well as the reconstruction of objects or complete environments is a fundamental computer vision task. Information obtained from digitized scenes represents important input data for e.g. position estimation, online object recognition or collision prevention.

During the last years, a compact and low-priced alternative to common 3D scanning devices like laser scanners, structured light scanners or stereo-vision setups has gained in popularity. Based on the Time-of-Flight (ToF) principle, ToF-cameras like the Photonic Mixer Device (PMD) are capable of estimating full-range distance information in real time by illuminating the scene with modulated infrared light and determining the phase shift between the reference signal and the reflected light (see Section 2). Due to several error sources, however, a proper calibration of such devices is mandatory in order to achieve reliable distance information and thus accurate range sensing. Unfortunately, most of the known calibration models cover only one error source at a time. Consequently, the calibration task becomes very complex and time consuming with respect to the reference data acquisition.

For this reason, we present a combined calibration approach estimating intrinsic parameters, correction data for the systematic error, as well as reflectivity related distance deviations in an overall, easy to use system. The main contribution in this context is a new calibration model for reflectivity related errors that requires less reference data than other existing models. Altogether, the amount of required reference data for the calibration is highly reduced, resulting in a decreased complexity of the calibration task itself.

The article is structured as follows: A short technical overview of ToF sensing is given in Section 2, followed by a discussion of known error sources in Section 3 and prior calibration models in Section 4. The combined calibration approach is presented in Section 5, along with a new reflectivity related error model, which is described in Section 5.3. Finally, a short summary is given in Section 6.

2. Technical background

The main component of common PMD/ToF-cameras is a CMOS-based sensor consisting of so-called smart pixels [1–3]. The principle of ToF distance measurement with PMD-cameras is illustrated in Fig. 1. The illumination units of the camera emit intensity modulated near infrared (NIR) light, which is triggered by an internal reference signal s. The emitted signal is reflected at the surface of objects and detected by the corresponding smart pixels of the ToF-camera. The smart pixel itself determines the correlation between the detected optical signal r and the reference signal s, which has been internally shifted by the phase offset τ:

c(\tau) = r \otimes s = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} r(t) \, s(t + \tau) \, dt. \quad (1)

Fig. 1. The principle of PMD/ToF-measurement.

Fig. 2. Examples of 2D/3D camera setups. Top left: SwissRanger3000 and high-resolution CMOS-camera, top right: ZESS monocular 2D/3D PMD-prototype [8], bottom: PMD[vision] 3k-S with CCD-camera.


Commonly, sinusoidal signals are assumed:

s(t) = \cos(\omega t), \quad r(t) = k + a \cos(\omega t - \phi), \quad (2)

where \omega = 2\pi f is the angular modulation frequency, a is the amplitude of the incident optical signal and \phi is the phase offset relating to the object distance, finally yielding c(\tau) = \frac{a}{2} \cos(\omega \tau + \phi).

By sampling the correlation function four times, i.e. taking four sequential images I_i = c(\tau_i), i = 0, \ldots, 3 with an internal phase delay \tau_i = i \cdot \pi / (2\omega), and using simple trigonometry, we can determine a pixel's phase shift \phi between 0 and 2\pi, the correlation amplitude a and the incident light intensity h by

\phi = \mathrm{atan2}(I_3 - I_1, \; I_0 - I_2) + \pi, \quad h = \frac{1}{4} \sum_{i=0}^{3} I_i, \quad a = \frac{1}{2} \sqrt{(I_3 - I_1)^2 + (I_0 - I_2)^2}. \quad (3)

The distance d to the corresponding object region is finally given by

d = \frac{c}{4 \pi f} \phi, \quad (4)

where c \approx 3 \times 10^8 m/s is the speed of light and f is the signal's modulation frequency (commonly 20 MHz, resulting in an unambiguous distance range of 7.5 m).
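For illustration, the four-phase demodulation of Eqs. (3) and (4) amounts to a few lines of per-pixel arithmetic. The following NumPy sketch assumes the four raw correlation images are available as arrays; all names are ours, not part of any camera SDK:

```python
import numpy as np

C_LIGHT = 3e8   # speed of light [m/s]
F_MOD = 20e6    # modulation frequency [Hz]; 20 MHz yields a 7.5 m unambiguous range

def demodulate(I0, I1, I2, I3):
    """Four-phase demodulation following Eqs. (3) and (4).

    I0..I3 are correlation images taken with internal phase delays
    of 0, pi/2, pi and 3*pi/2 (arrays of identical shape).
    """
    phi = np.arctan2(I3 - I1, I0 - I2) + np.pi    # phase shift in [0, 2*pi)
    h = 0.25 * (I0 + I1 + I2 + I3)                # incident light intensity
    a = 0.5 * np.hypot(I3 - I1, I0 - I2)          # correlation amplitude
    d = C_LIGHT / (4.0 * np.pi * F_MOD) * phi     # object distance [m], Eq. (4)
    return phi, h, a, d
```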

Today's devices provide resolutions from 64 × 48 up to 204 × 204 px at approx. 20 Hz. Due to an automatic suppression of background light, some PMD-cameras are also suitable for outdoor applications.

One approach to overcome the limited resolution of the PMD-camera is its combination with 2D sensors, either in special monocular setups [4] or via binocular software solutions [5–7]. Exemplary setups are shown in Fig. 2.

3. Sources of measuring errors

Like almost every sensing device, the PMD exhibits several error sources which influence the accuracy of the distance measurement and therefore have a negative impact on almost every downstream processing task relying on accurate distance information.

3.1. Noise

A general problem in the context of range sensing is noise, leading to unsteady point clouds. For CMOS sensors and thus for PMD-cameras, noise can generally be classified into three categories [9]: signal noise, time-variant noise and time-invariant noise.

Unfortunately, a concrete error model for PMD noise is unknown. For this reason, noise reduction is commonly done either by averaging distance information over time or by using spatial smoothing filters such as a simple Gaussian or an edge-preserving bilateral filter.

Like Rapp [10], we suggest averaging the raw images I_i rather than the estimated distance information, especially for low exposed pixels. This way, erroneous correlation samples have much less influence on the averaging process than they would have when simply averaging the falsified distance information. In addition, the computation effort is strongly reduced, as Eq. (3) has to be evaluated only once after averaging instead of for each frame.
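A minimal sketch of this strategy (our naming, reusing the demodulate() helper sketched in Section 2): the four raw channels are averaged over a burst of frames and the demodulation is evaluated once on the means.

```python
import numpy as np

def average_raw_then_demodulate(frames):
    """Temporal denoising on the raw samples: average each correlation
    channel I0..I3 over all captured frames, then demodulate once.

    frames: sequence of (I0, I1, I2, I3) tuples, one tuple per frame.
    """
    means = [np.mean([f[i] for f in frames], axis=0) for i in range(4)]
    return demodulate(*means)  # demodulate() as sketched in Section 2
```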

After temporal denoising, the resulting distance information can be further denoised by spatial averaging using common image filters. We suggest bilateral filters, which use the intensity image as an additional semantic criterion in order to preserve edges and discontinuities. Unlike temporal averaging, such smoothing filters should be applied to the distance information only. An application to the raw images is not recommended, as it would alter the sample relation inappropriately, leading to even larger distance errors.
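As a sketch of such an edge-preserving step, the following joint (cross) bilateral filter smooths the distance image while computing its range weights on the intensity image, so that depth edges coinciding with intensity discontinuities are preserved; parameter values are illustrative only:

```python
import numpy as np

def joint_bilateral_depth(depth, intensity, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Minimal joint/cross bilateral filter: spatial weights on the pixel
    grid, range weights on the intensity image (borders handled by
    wrap-around via np.roll, acceptable for a sketch)."""
    out = np.zeros_like(depth)
    norm = np.zeros_like(depth)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            w_s = np.exp(-(dx * dx + dy * dy) / (2.0 * sigma_s ** 2))
            i_shift = np.roll(intensity, (dy, dx), axis=(0, 1))
            d_shift = np.roll(depth, (dy, dx), axis=(0, 1))
            w_r = np.exp(-((intensity - i_shift) ** 2) / (2.0 * sigma_r ** 2))
            w = w_s * w_r
            out += w * d_shift
            norm += w
    return out / norm
```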

In the context of time-invariant noise, one can distinguish defect pixels (noticeable as static white or black pixels) and leakers (pixels that are significantly brighter than their neighborhood) as well as fixed pattern noise due to individual pixel offsets. These offsets, including defect pixels, can be determined by taking a black image, i.e. keeping the optics shut while averaging over a sufficient number of images (typically 100). In contrast, intensity dependent inhomogeneities caused by the camera's optics as well as gain differences have to be treated differently by incorporating gain linearization.
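The dark-frame procedure can be sketched as follows; the outlier rule used to flag defect pixels is our assumption, not a prescription from the text:

```python
import numpy as np

def fixed_pattern_offsets(black_frames, outlier_sigma=3.0):
    """Estimate per-pixel offsets (fixed pattern noise) from black images
    taken with the optics shut; typically around 100 frames are averaged.

    black_frames: array of shape (num_frames, H, W).
    Returns the offset map and a mask of suspected defect pixels.
    """
    offsets = np.mean(black_frames, axis=0)                   # per-pixel offset
    mu, sigma = offsets.mean(), offsets.std()
    defects = np.abs(offsets - mu) > outlier_sigma * sigma    # defect pixel mask
    return offsets, defects
```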

3.2. Systematic wiggling error

The most crucial error source, however, is the so-called systematic wiggling error, which shifts the measured distance significantly towards or away from the camera depending on the surface's true distance.

For the demodulation, distance information is commonly determined by taking four subsequent images, i.e. by sampling the correlation function as stated in Section 2. In practice, however, the underlying assumption of a sinusoidal signal shape is not met due to hardware and cost limitations. Analyzing the real reference signal of a PMD-camera shows that the optical signal shape is rather far from the theoretically assumed sinusoidal shape [10]. As a result, the four-sample reconstruction scheme leads to a systematic distance error, as shown in Fig. 3.

Fig. 4. Distance deviation due to varying object reflectivity, i.e. active light incident to the sensor; front and side view (left), after a combined distance and reflectivity related calibration (right).

In order to avoid wiggling errors, an enhanced representation of the correlation function incorporating higher Fourier modes is often discussed [2,10]. For example, modeling the correlation more precisely via a finite sum of superimposed cosine waves,

c(\tau) = \sum_{k=0}^{l} c_k \cos(k(\omega \tau + \phi) + \theta_k),

a least-squares optimization over N \geq 2l + 1 samples leads to the following phase demodulation schema:

k \phi + \theta_k = \arg\left( \sum_{n=0}^{N-1} I_n e^{-2 \pi i k n / N} \right),

where I_n = c\left( \frac{2\pi}{\omega} \cdot \frac{n}{N} \right). Here, the distance related phase shift \phi can be obtained by using a look-up table (LUT) for the fixed offsets \theta_k of the additional modes.
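For illustration, the sum above is exactly one bin of a discrete Fourier transform of the N correlation samples; a sketch (our naming, with θ_k assumed to come from the LUT mentioned above):

```python
import numpy as np

def phase_from_n_samples(samples, k=1, theta_k=0.0):
    """Extract k*phi + theta_k as the argument of the k-th DFT bin of the
    N correlation samples, then solve for phi.

    samples: array of shape (N, H, W), N >= 2l+1 phase-shifted images.
    Note: for k > 1 the result is only determined modulo 2*pi/k.
    """
    N = samples.shape[0]
    n = np.arange(N).reshape(N, 1, 1)
    dft_k = np.sum(samples * np.exp(-2j * np.pi * k * n / N), axis=0)
    kphi = np.angle(dft_k) - theta_k       # equals k * phi (mod 2*pi)
    return np.mod(kphi, 2.0 * np.pi) / k
```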

However, extending the demodulation scheme to higher Fourier modes is rather impracticable, as the number of required sample images I_i as well as the calculation effort for the demodulation increase significantly. Especially the higher number of samples leads to further interferences when acquiring dynamic scenes, making aliasing artifacts more noticeable.

An alternative approach reduces the wiggling error by assuming a box-shaped reference signal and combining this assumption with the standard sinusoidal one. By doing so, the error can already be reduced if only a few reference images are available [11].

In general, the simpler sinusoid-based demodulation scheme remains in common use, with the falsified distance information adjusted via phenomenological calibration models (see Section 4).

3.3. Reflectivity/integration time related deviations

In the context of systematic errors, distance information is also often altered by an integration time dependent offset [12] as well as a reflectivity related error, i.e. a non-linear distance shift related to the object distance and reflectivity (see Fig. 4).

Whereas the integration time related deviation can be avoided by calibrating the camera for a fixed integration time, the reason for the reflectivity related false measurement is still unknown and depends on the observed scene. So far, only a few phenomenological calibration approaches exist (see Section 4).

Fig. 3. Systematic wiggling error for distance measurements between 1.0 and 4.5 m. A fitted error function is shown in black.

3.4. Flying pixel

In addition to noise, hardware inequalities and the theoretical signal mismatch, false distance information can also occur due to the pixels' relatively large solid angles. Here, different distances inside a solid angle lead to superimposed reflected light, reducing its amplitude and introducing a false phase shift – commonly called a flying pixel. These pixels usually lie in-between fore- and background, but can also tend towards the camera depending on the surface's true distance. While most of the error sources above can be damped by calibration models, flying pixels can only be detected and marked as invalid for further processing.

4. Related work

Up to now, several calibration models have been introduced, covering the systematic wiggling error either alone or in combination with reflectivity related deviations. Due to their phenomenological nature, all models determine the deviation between measured distance information and corresponding reference data.

The first models, which were quite simple, describe the systematic error by linear or basic polynomial functions [13,14]. In consequence, these approaches require only a small number of input images, but generally limit the sensor's working range or yield unsatisfying deviation corrections.

Current models, in contrast, typically utilize look-up tables [15] or B-splines [16] to describe the distance deviations more precisely (see Fig. 3). The major drawback of these models, however, is the larger amount of required reference data.

Regarding reflectivity related deviations, current calibration models utilize the amplitude or intensity image, respectively, of the ToF-camera as an estimate for the object reflectivity. Accordingly, known models are straightforward bivariate extensions, addressing both the intensity and the distance as input parameters of a coupled correction function, and use either multidimensional LUTs [17] or a two-dimensional B-spline patch [12] to model the distance deviations. Since a dense sampling of the two-dimensional distance–intensity space is necessary, these approaches require an extremely large amount of reference data.

5. Combined calibration approach

Considering all major error sources, the acquisition of individual calibration data makes the calibration task rather time consuming and complex. For this reason, we introduce a new combined calibration approach that incorporates several (so far separate) models into one and realizes a reduction of complexity.


Error sources that have been integrated into the combined model cover:

- the estimation of intrinsic and extrinsic camera parameters for camera rigs,
- the adjustment of wiggling errors, as well as
- the correction of additional reflectivity related deviations.

In addition, we will introduce a new intensity-based calibration model which requires less input data and thus further reduces the amount of reference information.

Figs. 5 and 6 show the processing chain and the different modules of the calibration approach. The top row shows the calibration procedure in which the calibration parameters are estimated. From the checkerboard images on the left, the intrinsic and extrinsic parameters are estimated using the Analysis-by-Synthesis approach described in Section 5.1. The Analysis-by-Synthesis approach is also used to estimate the wiggling correction parameters based on the extrinsic camera parameters, as described in Section 5.2. Finally, the reflectivity related calibration parameters are estimated as described in Section 5.3. Below the line in Fig. 5 the correction procedure is depicted: the depth image is radially corrected, then the wiggling correction is applied and finally the reflectivity related calibration is performed.

5.1. Intrinsic and extrinsic camera calibration

Targeting computer vision tasks such as the correct reconstruction of observed scenes, the estimation of intrinsic camera parameters such as the focal length f, the projection center (c_x, c_y) and the lens distortion coefficients – commonly known as intrinsic calibration – is required. Given these parameters, each PMD pixel can be back-projected by its distance information to form a three-dimensional point cloud of surface points representing the scene for further processing. The relation between an image pixel x and its corresponding 3D-point X in space is defined by

x = K P_0 [R^T \mid -R^T C] X = K P_0 M X, \quad (5)

where P_0 = [I \mid 0] \in \mathbb{R}^{3 \times 4} represents the standard perspective projection,

K = \begin{pmatrix} f_x = f/s_x & 0 & c_x \\ 0 & f_y = f/s_y & c_y \\ 0 & 0 & 1 \end{pmatrix} \quad (6)

is the matrix of the intrinsic camera parameters, and M \in \mathbb{R}^{4 \times 4} stands for the extrinsic camera transformation consisting of the rotation matrix R and the camera center transformation C [18–20].

Fig. 5. System overview for the combined calibration approach.
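As a small usage sketch of these quantities: given K and a pixel's measured distance, the corresponding 3D surface point in the camera frame is obtained by scaling the normalized viewing ray (our helper; it assumes the ToF distance is measured along the ray, which is why the ray is normalized before scaling):

```python
import numpy as np

def backproject(x_pix, y_pix, distance, K):
    """Back-project a pixel with its measured radial distance to a
    3D point in camera coordinates."""
    ray = np.linalg.inv(K) @ np.array([x_pix, y_pix, 1.0])
    ray /= np.linalg.norm(ray)     # unit viewing ray through the pixel
    return distance * ray          # 3D surface point in the camera frame
```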

The lens distortion is commonly modeled by a polynomial of 4th degree consisting of two radial distortion parameters a_1 and a_2, i.e.

g(r) = 1 + a_1 r^2 + a_2 r^4, \quad (7)

where r^2 = \tilde{x}^2 + \tilde{y}^2 is the squared distance of the normalized image coordinate \tilde{x} = K^{-1} x to the projection center (c_x, c_y). Additional tangential parameters t_1 and t_2 can be added, covering decentering or imperfect centering of the lens components and other manufacturing defects in a compound lens system [21]:

x_d = \tilde{x} \cdot g(r) + 2 t_1 \tilde{x} \tilde{y} + t_2 (r^2 + 2 \tilde{x}^2),
y_d = \tilde{y} \cdot g(r) + 2 t_2 \tilde{x} \tilde{y} + t_1 (r^2 + 2 \tilde{y}^2). \quad (8)

Note that Eq. (8) is not analytically invertible. However, as we are looking for a new regular pixel grid, Eq. (8) can still be used to calculate the distorted position for each undistorted pixel, whose value is finally estimated by bilinear interpolation.
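A sketch of this remapping strategy, using the forward model of Eqs. (7) and (8) as reconstructed above (coefficient values would come from the calibration):

```python
def distort(xn, yn, a1, a2, t1, t2):
    """Forward radial/tangential distortion applied to normalized
    image coordinates (xn, yn), following Eqs. (7) and (8)."""
    r2 = xn * xn + yn * yn
    g = 1.0 + a1 * r2 + a2 * r2 * r2               # radial factor, Eq. (7)
    xd = xn * g + 2.0 * t1 * xn * yn + t2 * (r2 + 2.0 * xn * xn)
    yd = yn * g + 2.0 * t2 * xn * yn + t1 * (r2 + 2.0 * yn * yn)
    return xd, yd
```

To undistort an image, each pixel of the new regular grid is normalized with K^{-1}, pushed through distort(), mapped back with K, and the source image is sampled bilinearly at the resulting position.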

For the PMDTec PMD[vision] 19k model of the PMD-camera, intrinsic calibration has been applied using the calibration module included in Intel's OpenCV library [22], which is based on the work of Brown [21] and Zhang [23]. Tests have shown that the estimated intrinsic, extrinsic and radial parameters are reliable and already provide suitable rough pose estimates.

Nevertheless, due to the narrow opening angle [14,24], a high dependency between intrinsic parameters and camera position (especially between rotation and translation) occurs, so that a precise pose estimation is difficult. For this reason, a new approach incorporating depth information has been proposed [14,24]. By combining a ToF-camera with (multiple) CCD-cameras in a camera rig and applying image synthesis, the approach provides highly accurate relative pose estimates between the cameras of the rig, making real-time pose estimation for e.g. 2D/3D data fusion or robot navigation possible [5,25].

The calibration is initialized with standard computer vision techniques from OpenCV [22] to obtain a first guess of the camera rig's intrinsic and extrinsic parameters (based on a planar checkerboard pattern), which is then refined by applying a non-linear iterative optimization. The refinement is done in two steps: first the parameters are initialized based on the minimization of the reprojection error of the checkerboard corners, then the parameters are refined on the whole image in the area of the checkerboard pattern using image synthesis.

For the first step, which only refines the intrinsic and external camera parameters, all relevant parameters, such as rotation, translation, focal length as well as radial and tangential lens distortion, are collected in a parameter vector p. Note that if multiple cameras are used, the rigid coupling of the cameras is enforced by adding only relative external parameters between the cameras. The error measure for the minimization is the distance between the detected checkerboard corners x in the image and the projection of the corresponding 3D-points X on the real calibration pattern, applying




S^1_p(X) = K P_0 M X \overset{!}{=} x, \quad (9)

where p are the relevant parameters for that image, K is the intrinsic camera matrix as in Eq. (6) and M holds the external camera parameters for that image.

Fig. 6. The optimization on the reprojection of the corners (red = reprojection, green = detected corner). Left image: reprojection after the initial guess by OpenCV; right image: reprojection after the first optimization step.

For all parameters p_i the partial derivatives of the function S^1_p(X) are determined. Note that the parameters are included both in the intrinsic matrix K and in the extrinsic camera transformation M. The error is minimized using least-squares minimization with respect to the parameters p_i.
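A much simplified single-camera sketch of this first step, with SciPy's generic least-squares solver standing in for the paper's minimization (the parametrization and all names are ours; distortion terms and the rig coupling are omitted for brevity):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residuals(p, X_board, x_obs):
    """Residuals of Eq. (9): p = (f, cx, cy, rotvec[3], C[3]).
    X_board: Nx3 checkerboard corners, x_obs: Nx2 detected corners."""
    f, cx, cy = p[0], p[1], p[2]
    R = Rotation.from_rotvec(p[3:6]).as_matrix()
    C = p[6:9]
    K = np.array([[f, 0.0, cx], [0.0, f, cy], [0.0, 0.0, 1.0]])
    Xc = (X_board - C) @ R        # camera coordinates: R^T (X - C) per row
    xh = Xc @ K.T                 # homogeneous projection K Xc
    x_proj = xh[:, :2] / xh[:, 2:3]
    return (x_proj - x_obs).ravel()

# p0 = initial guess from OpenCV; refined parameters:
# p_ref = least_squares(reprojection_residuals, p0, args=(X_board, x_obs)).x
```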

In the second step the optimization of the parameters is done on the full images in the area where the calibration pattern is visible. This step uses a model-driven Analysis-by-Synthesis approach for optimization, in which a synthesized image of the model of the checkerboard is compared to the real image. The function S^2_p(x) renders the checkerboard pattern for the internal camera parameters K and the external camera transformation M:

S^2_p(x) = h_{ref}(X) = h_{ref}\left( C - C_z \frac{R^T K^{-1} x}{r_z^T K^{-1} x} \right) \overset{!}{=} h(x), \quad (10)

where C_z is the z-component of C and r_z is the last column of the rotation matrix R. Eq. (10) states that the intensity h of the synthesized checkerboard pattern should be equal to the intensity h_{ref} in the calibration image in the region of the checkerboard; for further details please see [24]. From the measured intensity difference the optimal parameters can be derived using Least-Squares methods, which minimizes the error on the whole images. Note that the synthesized and the real intensity images are smoothed with a Gaussian kernel to guide the optimization. From the parameter covariance and the partial derivatives of the synthesized images a parameter update is computed; this process is iterated until convergence.

Fig. 7 shows the successful refinement of the camera parameters of the PMD-camera. The synthesized checkerboard (in red) is overlaid on the amplitude image. With the initial parameters the synthesized checkerboard pattern is still distorted (left), while after optimization the synthesis matches the real image (right).

The approach is also capable of calibrating a ToF-camera without additional CCD-cameras. The low resolution and small field-of-view of the ToF-camera, however, increase the ambiguity between rotation and translation, and it is difficult to distinguish between focal length, depth deviation and camera pose, which leads to unstable pose estimation results when using the ToF-camera alone, as also shown in [14]. The following analysis shows that the approach without an additional CCD-camera leads to higher correlations in the internal and external camera parameters. For the analysis we compare the calibration of a single ToF-camera without additional CCD-cameras, denoted scheme [A], and the calibration of a ToF-camera combined with a high-resolution CCD-camera, denoted scheme [B]. We use a SwissRanger3000 from Mesa-Imaging, which features a resolution of 176 × 144 px and a field-of-view of approx. 40°; as CCD-camera an IEEE1394 Grasshopper from Point Grey with a resolution of 1600 × 1200 px and approx. the same field-of-view is used.

5.1.1. Error analysis of single ToF-camera [A]

In this section the accuracy and correlations of the parameters of a ToF-camera are investigated when calibrated without additional CCD-cameras from 42 ToF-images in the range of 1.2–6 m (scheme [A]). Table 1 shows the estimated values for the camera matrix. Compared to the scheme with an additional CCD-camera (see Table 5), the focal length is estimated too small, which is compensated by the external and wiggling error parameters.

The estimated accuracies for the internal parameters are shown in Table 2. It can be seen that these values have been estimated with good confidence, although the values are not correct, which is due to the ambiguity of focal length, external parameters and wiggling error.

Table 3 states the accuracies of the external camera parameters. These have also been estimated with high precision, although the deviations of the translations are in the range of a few millimeters.

Table 4 shows the correlations of the external camera parameters for the calibration of a single ToF-camera. It can be seen that the correlations between translation and rotation are quite high, although lower than the values in [14], which can be explained by the increased resolution and field-of-view of the ToF-camera.

5.1.2. Error analysis with additional CCD-camera [B]

In this second experiment, the calibration has been performed with the proposed combination of a high-resolution (1600 × 1200 px) CCD-camera and a ToF-camera (scheme [B]). The mean reprojection error after the initial guess by OpenCV is 1.24031 px for the CCD-camera and 0.167957 px for the ToF-camera.

Fig. 7. The synthesized checkerboard pattern (red) overlaid on the amplitude image of the PMD-camera. Left image: rendering with initial camera parameters from OpenCV; right image: rendering with refined camera parameters.

Table 1. Estimated focal lengths, principal point and radial distortion for scheme [A].

    f_x        f_y        c_x        c_y
    207.0551   205.1033   105.9832   85.6877

Table 2. Accuracy of the internal camera parameters for scheme [A].

    σ_fx      σ_fy      σ_cx      σ_cy
    0.05881   0.06827   0.03207   0.03610

Table 3. Accuracy of the external camera parameters of the ToF-camera for scheme [A].

    σ_X    σ_Y    σ_Z     σ_ω       σ_φ       σ_κ
    3.26   3.36   12.02   8.82e-4   8.94e-4   46.0e-4

Table 4. Computed correlations between the external ToF-camera parameters corresponding to the accuracy values shown in Table 3.

        X       Y       Z       ω       φ       κ
    X   1       0.017   0.086   0.005   0.636   0.002
    Y   0.017   1       0.213   0.482   0.003   0.031
    Z   0.086   0.213   1       0.020   0.080   0.028
    ω   0.005   0.482   0.020   1       0.035   0.157
    φ   0.636   0.003   0.080   0.035   1       0.024
    κ   0.002   0.031   0.028   0.157   0.024   1

Table 5. Estimated focal lengths, principal point and radial distortion for scheme [B].

    f_x        f_y        c_x        c_y
    210.9124   209.8890   100.6778   82.2850

Table 6. Accuracy of the internal camera parameters of the ToF-camera for scheme [B].

    σ_fx      σ_fy      σ_cx      σ_cy
    0.00241   0.00330   0.00095   0.00083

Table 7. Accuracy of the external camera parameters of the ToF-camera for scheme [B].

    σ_X     σ_Y     σ_Z     σ_ω       σ_φ       σ_κ
    0.019   0.017   0.047   1.51e-6   1.14e-6   3.92e-6


Note that for the initial guess every camera pose is estimated individually and no rigidity of the camera rig is enforced. After enforcing rigidity in the rig, the mean reprojection error of the CCD-camera is 1.24021 px and that of the ToF-camera 0.708043 px. After optimization on the checkerboard corners, the reprojection error is reduced to 1.16606 px for the CCD-camera and 0.440916 px for the ToF-camera.

Table 6 shows the accuracies of the estimated internal camera parameters of the ToF-camera, which are listed in Table 5. By combining the ToF-camera with the CCD-camera, we gain one order of magnitude for the focal length and radial distortion parameters and two orders of magnitude for the estimation of the principal point, compared to the values in Table 2.

Analogous to Table 3, Table 7 shows the accuracies of the external camera parameters for the combined approach. Again a significant increase in confidence, in the order of two magnitudes, is visible. This matches the results of the correlation analysis shown in Table 8: the correlations between rotation and translation are now greatly reduced.

5.2. Wiggling correction

As mentioned in Section 3.2, current ToF-/PMD-cameras suffer from a systematic wiggling error that affects the distance accuracy in a negative way. However, by modeling the distance deviation via a B-spline as in [16], the systematic wiggling error can be expressed quite accurately (see Fig. 3).

To this end, a depth correction B-spline function is defined:

d'(d(x)) = d(x) - \sum_{l=0}^{m} c_l B^3_l(d(x)), \quad (11)

in which the c_l are the control points of the B-spline function estimated during wiggling calibration, the B^3_l(d) are the cubic basis B-splines, and m is the number of control points. In this optimization framework the control points are evenly distributed over the available depth range of the calibration data. The wiggling corrected depth d'(d(x)) is obtained by applying the B-spline function.
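A sketch of Eq. (11) using SciPy's B-spline evaluation; the clamped, evenly spaced knot vector is our assumption about how the control points are laid out over the depth range:

```python
import numpy as np
from scipy.interpolate import BSpline

def make_wiggling_correction(ctrl_points, d_min, d_max):
    """Build the correction function of Eq. (11):
    d' = d - sum_l c_l * B^3_l(d)."""
    num = len(ctrl_points)
    # clamped cubic knot vector spread evenly over [d_min, d_max]
    inner = np.linspace(d_min, d_max, num - 2)
    knots = np.concatenate(([d_min] * 3, inner, [d_max] * 3))
    spline = BSpline(knots, np.asarray(ctrl_points, float), k=3)
    return lambda d: d - spline(d)
```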

The estimation of the B-spline parameters is integrated into the Analysis-by-Synthesis calibration introduced in Section 5.1 by synthesizing depth images analogous to Eq. (10):

Table 8. Correlations between the external ToF-camera parameters corresponding to the accuracy values shown in Table 7.

        X       Y       Z       ω       φ       κ
    X   1       0.006   0.000   0.034   0.101   0.002
    Y   0.006   1       0.025   0.013   0.027   0.013
    Z   0.000   0.025   1       0.013   0.045   0.007
    ω   0.034   0.013   0.013   1       0.014   0.006
    φ   0.101   0.027   0.045   0.014   1       0.093
    κ   0.002   0.013   0.007   0.006   0.093   1


S^3_p(x) = d_{ref}(x) = -\frac{C_z \sqrt{x^T K^{-T} K^{-1} x}}{r_z^T K^{-1} x} \overset{!}{=} d'(d(x)), \quad (12)

where r_z is the last column of the rotation matrix R, C_z is the z-component of the camera translation C and K is the intrinsic camera matrix. Eq. (12) states that the synthesized depth image values d_{ref}(x) should be equal to the wiggling corrected depth image values d'(d(x)) taken by the ToF-camera. The partial derivatives of the function S^3_p(x) are computed and the parameters of the depth deviation function are estimated in parallel to the intrinsic camera parameters using Least-Squares optimization.
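For illustration, the synthesized depth image of Eq. (12) can be evaluated per pixel as follows (a sketch under the conventions reconstructed above, with r_z taken as the last column of R):

```python
import numpy as np

def synthesize_plane_depth(K, R, C, width, height):
    """Reference depth image of the checkerboard plane z = 0, Eq. (12)."""
    Kinv = np.linalg.inv(K)
    xs, ys = np.meshgrid(np.arange(width), np.arange(height))
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).astype(float)
    rays = pix @ Kinv.T                      # K^-1 x for every pixel
    norm = np.linalg.norm(rays, axis=-1)     # sqrt(x^T K^-T K^-1 x)
    denom = rays @ R[:, 2]                   # r_z^T K^-1 x
    return -C[2] * norm / denom              # d_ref(x)
```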

For a proper distance calibration of a PMD/ToF-camera it is mandatory to know the exact distance of the camera to the reference object (commonly a planar wall). Unlike prior models that make use of special equipment such as track lines in order to obtain precise reference distances [15,16], our combined calibration approach utilizes the vision-based optimization of Section 5.1 to estimate the camera position with respect to the planar checkerboard, which in this context serves as our reference plane. In consequence, the combined approach is more flexible than other solutions, as e.g. track lines are not always available.

Note that, in order to obtain a proper wiggling calibration, the whole operating range of the ToF-camera has to be covered during input image acquisition.

5.2.1. Error analysis

An example of calibrated depth measurements for a SwissRanger3000, together with the mean errors and standard deviations, is shown in Fig. 9; it corresponds to the uncalibrated measurements shown in Fig. 8, in which the estimated B-spline function approximating the wiggling error is shown as well. The horizontal axis shows the distance of the camera center to the reference plane, acquired using the vision-based optimization. Note that for scheme [A], distances are estimated from the camera poses only, which have been calculated from the low-resolution ToF-images. This leads to a higher error and an expansion (or shrinkage) of the whole measurement range. The vertical axis shows the remaining error, which is reduced to below 50 mm throughout the whole operating range of the camera.

Fig. 8. Wiggling error before correction for scheme [A] (left) and [B] (right). The depth error of every ToF-image pixel is shown in gray, mean error and standard deviations are shown in black, and the estimated B-spline function is shown in light gray. Note that the error for scheme [A] is much higher due to the underestimated focal length and errors in pose estimation. Using an additional CCD-camera helps to minimize such errors.

Table 9 shows the correlations between the B-spline control points c_n of Eq. (11) and the estimated external parameters of the ToF-camera for schemes [A] and [B]. Note that for scheme [A] the mean correlation over the 42 external camera parameters is shown. The correlations between the parameters are already small for a single ToF-camera; they are, however, further reduced when calibrating with an additional CCD-camera.

With increasing distance to the camera, the estimation of the extrinsic parameters becomes unstable and the correlations are likely to increase. Fig. 10 shows the correlations between the last 15 images of the calibration sequence and the B-spline parameters for calibration scheme [A].

5.3. Reflectivity related error adjustment

In the previous section, we described how to reduce systematic distance deviations by incorporating a wiggling error estimation into the combined calibration model. Current PMD sensors, however, are usually affected by additional, reflectivity related deviations (cf. Section 3.3). These deviations are commonly addressed by bivariate models incorporating both distance and intensity information (see Section 4). However, fitting a two-dimensional B-spline patch or building up LUTs, respectively, requires a dense set of reference data that covers a high number of intensity–distance pairs. In practice, the acquisition of such information is rather time consuming and impractical. For this reason, we look for an alternative approach that separates both calibration parameters and allows an independent treatment requiring less calibration data.

The general reason for the bivariate treatment of reflectivity related errors is the fact that, due to the distance related intensity attenuation, every distance exhibits an individual range of intensity values, affecting the deviation approximation. A possible solution for a parameter separation thus seems to be the application of an intensity normalization that yields comparable intensity values for each distance. Indeed, such a normalization of the intensity values reveals a uniform behavior of the remaining distance deviation, as depicted in Fig. 11. As a result, we can treat the reflectivity related deviations independently from the distance information, requiring less input data: the full intensity range has to be acquired only once, whereas for the distance-dependent normalization simply the lowest and highest intensities are sufficient.



Fig. 9. Wiggling error after correction. The remaining depth error of every ToF-image pixel is shown in gray, mean error and standard deviations are shown in black.

Table 9. Mean correlation between the wiggling calibration parameters c_1–c_10 and the 42 extrinsic ToF-camera parameters for scheme [A] (top) and [B] (bottom).

    [A]
        c_1       c_2       c_3       c_4       c_5       c_6       c_7       c_8        c_9       c_10
    X   0.003961  0.005141  0.004248  0.008265  0.008560  0.007543  0.006268  0.006445   0.005499  0.008683
    Y   0.009283  0.014735  0.015381  0.011452  0.009135  0.009906  0.010467  0.011846   0.019702  0.011163
    Z   0.028139  0.044939  0.039660  0.048402  0.071567  0.072426  0.064882  0.085183   0.060678  0.057444
    ω   0.011123  0.021644  0.026257  0.025850  0.023605  0.017961  0.016778  0.027473   0.043463  0.031005
    φ   0.007161  0.011170  0.008878  0.016070  0.014817  0.025697  0.020512  0.027757   0.017060  0.011743
    κ   0.002067  0.002662  0.003347  0.004977  0.006531  0.004576  0.003711  0.003857   0.007055  0.007044

    [B]
        c_1       c_2       c_3       c_4       c_5       c_6       c_7       c_8        c_9       c_10
    X   0.000808  0.000899  0.000794  0.000664  0.000217  0.000107  7.879e-5  0.0001095  7.859e-5  0.000220
    Y   5.115e-5  0.001183  0.000850  7.472e-5  0.000105  5.284e-5  0.000141  0.000171   0.000137  1.242e-5
    Z   0.018747  0.021767  0.026397  0.007001  0.009077  0.006846  0.006749  0.005138   0.008107  0.004001
    ω   0.000220  8.442e-5  0.000262  0.000111  7.873e-5  5.090e-5  9.019e-5  8.253e-5   0.000103  3.948e-5
    φ   0.000791  0.001078  0.001220  0.000330  0.000432  0.000321  0.000276  0.000210   0.000344  0.000213
    κ   0.000118  0.000125  0.000181  5.918e-5  4.691e-5  2.558e-5  4.221e-5  3.542e-5   4.918e-5  2.305e-5


Fig. 10. Correlation between external ToF-camera parameters and wiggling error parameters c_n for scheme [A]. The 10 B-spline parameters are on the vertical axis between 90 and 100, and the external camera parameters of the 15 images with the biggest distance to the checkerboard are on the horizontal axis in blocks of 6, with increasing distance from left to right. Note how different B-spline parameters correlate with different external parameters. The high correlations next to the diagonal correspond to the bold values in Table 8.



Accordingly, the intensity-based calibration model can be formalized as follows:

1. Determination of the distance related intensity normalization parameters min(d') and max(d'), mapping any intensity h onto the interval [0, 1] by

\bar{h}(h, d') = \mathrm{norm}(h, \min(d'), \max(d')) = \frac{h - \min(d')}{\max(d') - \min(d')}, \quad (13)

where d' already represents the wiggling corrected distance value.

2. Fitting of the actual distance correction \Delta(\bar{h}) with respect to the normalized intensity.

The error adjustment is finally given by

d(d', h) = d' - \Delta(\bar{h}(h, d')). \quad (14)

In our case, all three functions \Delta(\bar{h}), \min(d') and \max(d') are modeled by polynomials of degree 3, i.e.

\min(d') = \sum_{k=0}^{3} a_k^{\min} d'^k, \quad \max(d') = \sum_{k=0}^{3} a_k^{\max} d'^k, \quad \Delta(\bar{h}) = \sum_{k=0}^{3} a_k^{\Delta} \bar{h}^k. \quad (15)
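Putting Eqs. (13)–(15) together, applying the correction is a few operations per pixel; the coefficient arrays below are placeholders to be obtained from the calibration described next, and the clipping of the normalized intensity is our addition:

```python
import numpy as np

def reflectivity_correction(d_prime, h, a_min, a_max, a_delta):
    """Normalize the intensity per (wiggling corrected) distance, Eq. (13),
    evaluate the deviation polynomial, Eq. (15), and adjust, Eq. (14).

    a_min, a_max, a_delta: degree-3 polynomial coefficients (a_0..a_3).
    """
    powers_d = np.stack([d_prime ** k for k in range(4)])
    h_min = np.tensordot(a_min, powers_d, axes=1)              # min(d')
    h_max = np.tensordot(a_max, powers_d, axes=1)              # max(d')
    h_bar = np.clip((h - h_min) / (h_max - h_min), 0.0, 1.0)   # Eq. (13)
    delta = sum(a_delta[k] * h_bar ** k for k in range(4))     # Eq. (15)
    return d_prime - delta                                     # Eq. (14)
```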

Fig. 12. The modified calibration checkerboard pattern used for reflectivity related calibration. Gray values are scaled from left to right in the center: 100% black, 80% black, 60% black, 40% black and 20% black.


Fig. 11. Remaining distance deviations with respect to unnormalized (top) and normalized intensity values (bottom). The plot is based on six different calibration targets that have been captured from four distances.


However, in practice the PMD image (and therefore the intensity image as well) is affected by a radial light attenuation, so that the normalization parameters as stated in Eq. (15) cannot be applied to distinct pixels even if they exhibit the same distance. Thus, we extend the normalization parameters by an additional radial attenuation:

\min(d', r) = \sum_{k=0}^{n} a_k^{\min} d'^k \sum_{l=0}^{m} b_l^{\min} r^l = \sum_{k=0}^{n} \sum_{l=0}^{m} c_{kl}^{\min} d'^k r^l,

\max(d', r) = \sum_{k=0}^{n} a_k^{\max} d'^k \sum_{l=0}^{m} b_l^{\max} r^l = \sum_{k=0}^{n} \sum_{l=0}^{m} c_{kl}^{\max} d'^k r^l,

\Delta(\bar{h}, r) = \sum_{k=0}^{3} a_k^{\Delta} \bar{h}^k \sum_{l=0}^{m} b_l^{\Delta} r^l = \sum_{k=0}^{3} \sum_{l=0}^{m} c_{kl}^{\Delta} \bar{h}^k r^l, \quad (16)

where r is the Euclidean distance of a pixel to the projection center (c_x, c_y) on the image plane (cf. Eq. (6)) and c_{kl}^{\min} = a_k^{\min} b_l^{\min}, c_{kl}^{\max} = a_k^{\max} b_l^{\max}.

Analogously, Eq. (14) extends to

\bar{h}(h, d', r) = \mathrm{norm}(h, \min(d', r), \max(d', r)), \quad d(d', h, r) = d' + \Delta(\bar{h}(h, d', r)). \quad (17)

Analogous to the wiggling compensation, the new approach can easily be integrated into the combined calibration process by using a modified calibration pattern as shown in Fig. 12. The pattern consists of a regular checkerboard in which some squares have been replaced to cover the required intensity range. On the left the inner squares are printed with 100% black, decreasing in four steps to 20% black on the right. In consequence, the synthesized checkerboard pattern used for non-linear parameter estimation as in Section 5.1 has to be altered as well.

The estimation of the reflectivity related calibration parameters is done in an additional step using the same input data as for the intrinsic/wiggling calibration. First, the normalization coefficients c_kl^min and c_kl^max are determined from the black and white squares of the checkerboard pattern. To this end, the depth range is divided into intervals of a given size (we normally use 100 mm) and for every interval the minimum and maximum intensity is determined. Note that correct data recording is crucial: the full intensity range has to be available for all depths. The radial attenuation is also determined in this step: on the white squares of the calibration pattern the mean of the intensity values is calculated for every radius r. For r only natural numbers are used, resulting in a discretization of the radius with stepsize 1. Finally, the coefficients of the actual deviation function Δ(h̄(h, d', r)) are estimated based on the normalized intensities as described above.
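A sketch of the binning step for the normalization bounds (radial terms omitted for brevity; the helper names and the use of NumPy's polynomial fit are our choices):

```python
import numpy as np

def fit_normalization_bounds(d_prime, h, bin_size=100.0, degree=3):
    """Bin the wiggling corrected depths into intervals (100 mm as in the
    text), take the per-bin intensity extrema, and fit the degree-3
    polynomials min(d') and max(d') of Eq. (15)."""
    bins = np.floor(d_prime / bin_size).astype(int)
    centers, lo, hi = [], [], []
    for b in np.unique(bins):
        mask = bins == b
        centers.append((b + 0.5) * bin_size)   # interval center
        lo.append(h[mask].min())
        hi.append(h[mask].max())
    a_min = np.polynomial.polynomial.polyfit(centers, lo, degree)
    a_max = np.polynomial.polynomial.polyfit(centers, hi, degree)
    return a_min, a_max   # coefficients a_0..a_3, usable in the sketch above
```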

The results of the optimization process and its application to the calibration pattern are shown in Figs. 13 and 14.

In Fig. 13, left, a depth input image is shown as a three-dimensional triangle mesh. In the front, the reflectivity related distance deviation of the darker regions is clearly noticeable.

Fig. 13, right, shows the calibration pattern after the intensity-based adjustment has been applied. Note that the deviation error has been widely reduced, although a perfectly homogeneous planar surface is still not achieved.

In Fig. 14 the impact of the intensity depth calibration is displayed as the difference between the depth corrected (Fig. 13, left) and the depth and intensity corrected (Fig. 13, right) data. The radial light attenuation influence and the pure intensity error influence are clearly noticeable in the data.

Fig. 13. Left: Model of the calibration pattern, depth corrected. Note the non-planarity in darker areas. Right: Model of the calibration pattern, depth and reflectivity corrected. The non-planarity has been reduced.


Fig. 14. Difference between the wiggling corrected and the wiggling & reflectivity corrected depth image (Fig. 13, left minus right). Radial attenuation effects account for the curved shape; the spikes arise from the reflectivity related errors.



6. Conclusions

A proper calibration of ToF-cameras is mandatory for every application using them. While noise and the systematic wiggling error have already been widely investigated in the literature, there is little prior work on reflectivity related depth errors. The main contribution of this paper is a new calibration approach for reflectivity related errors aiming at a decreased number of reference images compared to prior models. Additionally, a lightweight calibration framework is presented in which intrinsic calibration, wiggling error calibration and reflectivity related depth error calibration are integrated.

In this new framework, which is based on Analysis-by-Synthesis, a planar checkerboard pattern with different levels of reflectivity is used; it has the necessary features to suit all calibration steps, making special equipment like track lines obsolete.

Application of the presented calibration approach provides a significant improvement of the measurement data of ToF-cameras, making them more reliable and usable as a measurement device.

Future work will address the dependency of the reflectivity caused depth errors on the integration time of the ToF-camera. A known but little investigated error source are flying pixels; a systematic identification and correction of these pixels is desirable. Furthermore, ToF-camera measurements in corners show roundings due to multiple reflections; an identification and correction of these errors is also targeted in the future.

Acknowledgments

This work is partly supported by the German Research Foundation (DFG), Grants KO-2960/5 and KO-2044/3-2, and by the project 3D4YOU, Grant 215075 of the ICT (Information and Communication Technologies) Work Programme of the EU's 7th Framework Programme.

References

[1] H. Kraft, J. Frey, T. Moeller, M. Albrecht, M. Grothof, B. Schink, H. Hess, B. Buxbaum, 3D-camera of high 3D-frame rate, depth-resolution and background light elimination based on improved PMD (photonic mixer device)-technologies, in: OPTO, 2004.

[2] R. Lange, 3D Time-of-Flight Distance Measurement with Custom Solid-State Image Sensors in CMOS/CCD-Technology, Ph.D. thesis, University of Siegen, 2000.

[3] Z. Xu, R. Schwarte, H. Heinol, B. Buxbaum, T. Ringbeck, Smart pixel – photonic mixer device (PMD), in: Proc. Int. Conf. on Mechatronics & Machine Vision, 1998, pp. 259–264.

[4] T. Prasad, K. Hartmann, W. Weihs, S. Ghobadi, A. Sluiter, First steps in enhancing 3D vision technique using 2D/3D sensors, in: O. Chum, V. Franc (Eds.), 11th Computer Vision Winter Workshop 2006, 2006, pp. 82–86.

[5] M. Lindner, A. Kolb, Data-fusion of PMD-based distance-information and high-resolution RGB-images, in: International Symposium on Signals, Circuits and Systems (ISSCS), vol. 1, 2007, pp. 121–124.

[6] R. Reulke, Combination of distance data with high resolution images, in: ISPRS Commission V Symposium 'Image Engineering and Vision Metrology', 2006.

[7] Q. Yang, R. Yang, J. Davis, D. Nistér, Spatial-depth super resolution for range images, in: CVPR, IEEE Computer Society, 2007.

[8] O. Lottner, K. Hartmann, O. Loffeld, W. Weihs, Image registration and calibration aspects for a new 2D/3D camera, in: EOS Conf. on Frontiers in Electronic Imaging, 2007, pp. 80–81.

[9] S. Klein, Entwurf und Untersuchung von integrierten Ausleseschaltungen für hochauflösende 3D-Bildsensoren auf PMD-Basis zur Unterdrückung von Fixed-Pattern-Noise (FPN), Master's thesis, University of Siegen, 2008.

[10] H. Rapp, Experimental and Theoretical Investigation of Correlating TOF-Camera Systems, Master's thesis, University of Heidelberg, Germany, 2007.

[11] M. Lindner, A. Kolb, New insights into the calibration of ToF-sensors, in: IEEE Conf. on Computer Vision & Pattern Recognition, Workshop on ToF-Camera based Computer Vision, 2008. doi:10.1109/CVPRW.2008.4563172.

[12] M. Lindner, A. Kolb, Calibration of the intensity-related distance error of the PMD ToF-camera, in: Proc. SPIE, Intelligent Robots and Computer Vision, vol. 6764, 2007. doi:10.1117/12.752808.

[13] M. Stommel, K.-D. Kuhnert, Fusion of stereo-camera and PMD-camera data for real-time suited precise 3D environment reconstruction, in: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2006, pp. 4780–4785.

[14] C. Beder, R. Koch, Calibration of focal length and 3D pose based on the reflectance and depth image of a planar object, Int. J. Intell. Syst. Technol. Appl., Issue on Dynamic 3D Imaging 5 (3/4) (2008) 285–294. doi:10.1504/IJISTA.2008.021291.

[15] T. Kahlmann, F. Remondino, H. Ingensand, Calibration for increased accuracy of the range imaging camera SwissRanger, in: Image Engineering and Vision Metrology (IEVM), 2006.

[16] M. Lindner, A. Kolb, Lateral and depth calibration of PMD-distance sensors, in: International Symposium on Visual Computing (ISVC), LNCS, vol. 2, Springer, 2006, pp. 524–533.

[17] J. Radmer, P. Fuste, H. Schmidt, J. Kruger, Incident light related distance error study and calibration of the PMD-range imaging camera, in: Computer Vision and Pattern Recognition Workshops (CVPR Workshops), 2008, pp. 1–6.

[18] O. Faugeras, Three-Dimensional Computer Vision, The MIT Press, 1993.

[19] D. Forsyth, J. Ponce, Computer Vision – A Modern Approach, Prentice Hall, 2003.

[20] Y. Ma, S. Soatto, J. Kosecka, S. Sastry, An Invitation to 3D Vision, Springer, 2004.

[21] D. Brown, Decentering distortion of lenses, Photogramm. Eng. 32 (3) (1966) 444–462.

[22] OpenCV, <http://sourceforge.net/projects/opencvlibrary>, 2006.

[23] Z. Zhang, Flexible camera calibration by viewing a plane from unknown orientations, in: Proceedings of the International Conference on Computer Vision, Corfu, Greece, 1999, pp. 666–673.

[24] I. Schiller, C. Beder, R. Koch, Calibration of a PMD camera using a planar calibration object together with a multi-camera setup, in: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XXXVII, Part B3a, ISPRS Congress, Beijing, China, 2008, pp. 297–302.

[25] A. Prusak, O. Melnychuk, I. Schiller, H. Roth, R. Koch, Pose estimation and map building with a PMD-camera for robot navigation, Int. J. Intell. Syst. Technol. Appl., Issue on Dynamic 3D Imaging 5 (3/4) (2008) 355–364. doi:10.1504/IJISTA.2008.021298.