

An Autonomous Robotized System for a Thermographic Camera

Alberto Pretto 1,3, Emanuele Menegatti 1,3, Paolo Bison 2, Ermanno Grinzato 2, Gianluca Cadelano 2 and Enrico Pagello 1,3

1 University of Padova, Dep. of Information Engineering, Via Gradenigo 6/B - Padova (Italy)
2 CNR-ITC, Corso Stati Uniti 4 - Padova (Italy) http://www.itc.cn.it
3 IT+Robotics Srl, Contrà Valmerlara 21 - Vicenza (Italy) http://www.it-robotics.it

Abstract

This paper presents a new approach to automatically monitor an indoor environment on a thermodynamic basis. It uses temperature as the driving parameter and is especially suited for comfort analysis or moisture evaluation. The system measures all fundamental environmental parameters (e.g., air temperature, relative humidity and air speed) by imaging with a thermal camera (IR camera) a set of special targets arranged in a grid (the reference grid), which can be placed close to a wall or anywhere else in the room. The thermal camera is mounted on a pan-tilt unit so that the monitoring process can be carried out automatically. The system processes the thermal images in real time and autonomously controls the pan-tilt unit. A fast automatic learning procedure makes it possible to recognize the special targets on the grid even in challenging environments and under different environmental conditions, while a Particle Filter is used to update the state of the system (i.e., the position of the intersection point between the optical axis of the camera and the planar surface of the grid). The system is able to perform a reliable global localization of the position of the thermal camera. During the scanning of the wall surfaces, a set of positions is automatically and sequentially reached by the moving IR camera: for each position a thermal image is recorded. Images are then rectified in order to obtain a more accurate temperature sampling. We successfully tested our system in several challenging environments.

1 Introduction

Instruments commonly used to monitor environmental conditions can only perform pointwise measurements. Most of the time, they are not appropriate for creating a map of the distribution of the parameters of interest (e.g., temperature, light, humidity, air speed) within a building. It is totally impractical to move a measuring head comprising a temperature sensor, an anemometer and a hygrometer every 10 cm in a given room. Moreover, the variations of interest are sometimes below the range of common instrumentation. For instance, the lower limit of ordinary anemometers is about 0.5 m s−1, while the natural convection occurring in buildings can be even smaller than 0.1 m s−1. Another issue is non-destructive moisture detection, since moisture is the main cause of building damage. Currently, the moisture survey of buildings, performed using equipment such as moisture meters, is slow and precludes the monitoring of hard-to-reach locations. An infrared thermal imaging camera helps to monitor moisture by imaging the different temperatures of wet versus dry building materials, making it a useful tool for a building moisture survey. Non-destructive testing of water intrusion can be performed using the IR camera in modern buildings, but may fail when high relative humidity and low temperature slow down the evaporation process. More importantly, the relation between moisture and temperature is much more complex than expected, and a much deeper knowledge is needed [12].

Figure 1: The IR camera in front of the reference grid used to monitor the environmental conditions of the Masino castle (Italy) with the proposed approach.


The study of indoor thermal and humidity conditions is a problem addressed by many authors [4]. The distribution of the surface temperature is of course very important, but the correct approach is to measure the boundary air conditions at the same time. As a matter of fact, the main limiting factor is the difficulty of analyzing the environmental conditions at a suitable time and space scale. Indeed, a detailed study requires a density of measurements far higher than what is practically achievable with ordinary techniques. This paper illustrates a new approach to solve this problem using IR thermography (patent pending [7]). Basically, the key idea is to measure the surface and air thermodynamic status starting from temperature measurements on a special set of targets (Fig. 3(b)), distributed over the inspected volume (e.g., Fig. 1) so as to compose a so-called reference grid (Fig. 1 and 2). The scanning of the wall surfaces and targets by the IR camera makes it possible to obtain a high-resolution map of the distribution of the physical phenomena [1]. Finally, the local conditions are recovered by means of an automatic processing step, based on a robust mathematical model.

1.1 System overview

The proposed system scans the surface of interest automatically. It detects the targets in the right order, even if they are not arranged in a regular pattern, in order to follow the geometry of a complex building (Fig. 1). The system detects a set of putative targets inside every thermal image using an efficient blob-detection technique. Every candidate is then classified as inlier or outlier using a trained Adaptive Boosting (AdaBoost) machine learning algorithm [6]. The metric positions of the targets are then recovered in real time. We propose to tackle this task by exploiting the relationship between the IR camera image plane and the planar surface of the reference grid (Fig. 2). Our approach is to represent the state of the system (i.e., the position of the intersection point between the optical axis of the camera and the planar surface of the reference grid) in a probabilistic fashion through a posterior density function (PDF). The PDF can be estimated recursively over time from the incoming observations (thermal images) and actions (pan-tilt movements) using a Recursive Bayes Filter [13] that exploits the Markov assumption of the stochastic process. The action model represents the probability density of the state at time t given the state at time t − 1 and the last action (pan-tilt movement) performed. The observation model represents the probability density of the observation at time t given the state at the same time. To solve the Bayes Filter we use a Particle Filter, which provides an approximate solution of the Bayes Filter. Particle Filters are well suited to cope with multi-modal PDFs, as in our case, due to the strong spatial symmetry of the reference grid.

Before starting the scanning process, the system is able to perform a reliable global localization of the position of the thermal camera. During the scanning of the wall surfaces, a set of special positions is automatically and sequentially reached by the moving IR camera: for each position a thermal image is recorded. These special positions are the centroids of neighbouring targets. Images taken at such locations usually contain four or more targets, enabling the estimation of the homography that allows an accurate mapping between image points and world points. Given the homography, we rectify the image (i.e., remove the distortion due to perspective projection) in order to obtain a more accurate temperature sampling. Finally, a resampling process yields a very detailed spatial distribution of the air and surface thermo-hygrometric conditions. Of course, arranging the different thermograms into a mosaic of the entire wall surface relies on the hypothesis that no significant changes happened during the scanning time. Therefore, the faster the scanning, the better.

2 Hardware Setup

The prototype is made of four main hardware components: 1) the sensor device (the IR camera); 2) the pan-tilt camera mount; 3) the controlling PC and 4) the reference grid.

Figure 2: The hardware setup of the system: during the scanning process, the system automatically detects and tracks the positions of the targets that compose the reference grid. The red dashed line represents a possible scanning path.



Figure 3: (a) The FLIR A320 thermal camera mounted on the pan-tilt motor; (b) The special target used to compose the reference grid: it is patched with multiple square surfaces, of which the top-right one consists of a wet lint that is exploited in the target tracking process.

2.1 Thermal camera

The chosen thermal camera is a FLIR A320 (Fig. 3(a)), equipped with the standard lens (field of view: 25° × 19°; focal distance: 18 mm; F-number: 1.3; spatial resolution: 1.36 mrad). The detector is an uncooled microbolometer focal plane array of 320 × 240 pixels, working in the 7.5 − 13 µm spectral band, with 70 mK of thermal resolution at room temperature. The frame rate is 9 Hz. This device is mainly conceived for automatic industrial process control, thanks to its dedicated features and advanced interconnection capabilities.
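As a quick sanity check of the optics (the 3 m working distance below is an assumed example, not a figure from the text), the 1.36 mrad spatial resolution translates into the following pixel footprint on the inspected wall:

```python
import math

# Camera specs from the text
fov_h_deg = 25.0   # horizontal field of view (degrees)
pixels_h = 320     # horizontal detector resolution
ifov_mrad = 1.36   # quoted instantaneous field of view per pixel (mrad)

# IFOV derived from FOV/resolution agrees with the quoted 1.36 mrad
ifov_check = math.radians(fov_h_deg) / pixels_h * 1000.0
print(f"IFOV from FOV/resolution: {ifov_check:.2f} mrad")  # ~1.36 mrad

# At an assumed working distance of 3 m, one pixel covers about:
distance_m = 3.0
footprint_mm = distance_m * ifov_mrad  # small-angle approximation
print(f"Pixel footprint at {distance_m} m: {footprint_mm:.1f} mm")  # ~4.1 mm
```

This gives a feel for the spatial sampling density achievable during a scan: at a few metres, each pixel covers only a few millimetres of wall.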

2.2 Pan-tilt camera mount

The position of the thermal camera is controlled by a simple and cheap pan-tilt motor with a constant velocity of 6° per second for the horizontal movement (pan) and 3° per second for the vertical movement (tilt). The device is controlled by the PC via an RS232 serial connection and it does not provide any feedback information (e.g., odometry).

2.3 Controlling PC

The thermal image acquisition and the camera motion control are performed in real time using a 2 GHz Core 2 Linux-based laptop. The target positions are tracked and matched with a reference grid (Fig. 2, see below). The camera position is continually changed according to a given sampling sequence. For every cluster of targets, a set of parameters is sampled.

2.4 Reference grid

The innovative features of the method are mainly based on the extraction of useful radiative data by means of a set of special targets (Fig. 2 and 3(b), patent pending [7]). A light metallic frame holding the targets is placed in the field of view of the IR camera. This reference grid serves the following purposes:

• it simplifies the identification of the surface patch being scanned at any given time;

• it allows image registration and correct geometric scaling;

• it allows precise temperature compensation for the reflected thermal energy;

• on the targets, the IR camera is able to record the particular temperature values needed to build the air temperature map; the air speed parallel to the surface is also mapped at each target point.

Furthermore, a reference passive device is used to calibrate the temperature values measured by thermography and to measure the relative humidity of the air [11, 2]. A precise thermographic reading of an object that is not a black body first requires knowledge of the IR radiation reflected by the inspected surface and of the surface emissivity [10]. A practical way to obtain such data is to use a diffusive-reflective material placed close to the surface. Each target supported by the grid is patched with multiple square surfaces where these informative radiative fluxes can be detected by the thermal camera and used for the heat flux estimation. One of these surfaces consists of a wet lint, and it is exploited in the target tracking process.

3 Target Detection

In order to automatically detect the targets inside the environment, we focus on the wet lint that is embedded in each target (Fig. 3(b)). In fact, this patch usually appears well defined inside the thermal images (e.g., Fig. 4(a)). Our process starts by detecting a set of putative targets inside every thermal image using a blob-detection algorithm. Every candidate is then classified as inlier (real target) or outlier (false positive) using a trained Adaptive Boosting (AdaBoost) machine learning algorithm.

3.1 Selecting putative targets

Given the known distance of the IR camera from the grid, we can approximately fix the size of the targets in the thermal images, and hence the radius (say σ) of their square surfaces. The surface composed of the wet lint usually appears roughly as a uniform and well-defined square region. Therefore, we start the process by looking in the image for all the blob structures with radius approximately σ. As blob detector, we use the Difference of Gaussians (DoG) operator, a computationally efficient approximation of the Laplacian of Gaussian (LoG) filter, one of the most common blob detectors.


Figure 4: (a) The input gray-level thermal image: two targets are visible on the left side of the image; (b) The Difference of Gaussians (DoG) filter applied to the input image; (c) The adaptive thresholding operator applied to the DoG image (b): every white blob represents a putative target; (d) The putative targets detected in (c) are classified as inlier or outlier based on the surrounding pixels in (a) using the AdaBoost machine learning algorithm.

We define l(x, y, σ) as the convolution of the original thermal image f(x, y) with a bivariate Gaussian kernel g(x, y, σ) with zero mean and diagonal covariance matrix with all non-zero entries equal to σ²:

l(x, y, σ) = g(x, y, σ) ∗ f(x, y) (1)

The Difference of Gaussians is therefore defined as:

DoG(x, y; σ) = l(x, y; σ + ∆σ) − l(x, y; σ − ∆σ)    (2)

where ∆σ is chosen so as to closely approximate the LoG operator. The application of the DoG filter results in strong positive responses for dark blobs (e.g., the wet lints of the targets) of radius σ (Fig. 4(b)). Finally, we need to select a set of connected dark regions that represent our putative targets. We transform the DoG grayscale image into a binary image th(x, y) using an adaptive thresholding operator [9], according to the following equation:

th(x, y) = { 1 if DoG(x, y; σ) > T(x, y); 0 otherwise }    (3)

where T(x, y) is a threshold calculated individually for each pixel as a Gaussian-weighted sum over a b × b pixel neighborhood, plus a positive offset s. We typically use b = 25, while s depends on the contrast of the wet lints inside the thermal images. An example of the application of the adaptive thresholding operator is depicted in Fig. 4(c). For each detected blob (i.e., connected dark region), we then compute the centroid, the area in pixels and the aspect ratio (width/length) of its bounding box.
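The putative-target pipeline of Sec. 3.1 can be sketched as follows. This is a minimal NumPy sketch: the synthetic test image and the parameter values are illustrative assumptions, not the system's actual settings.

```python
import numpy as np

def gaussian_kernel(sigma):
    """1-D Gaussian kernel, normalized to sum to 1."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur l(x, y, sigma) with edge padding (Eq. 1)."""
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    pad = np.pad(img, r, mode='edge')
    tmp = np.apply_along_axis(lambda v: np.convolve(v, k, 'valid'), 1, pad)
    return np.apply_along_axis(lambda v: np.convolve(v, k, 'valid'), 0, tmp)

def detect_putative_targets(img, sigma, dsigma=1.0, s=2.0, b=25):
    """DoG filter (Eq. 2) + adaptive threshold (Eq. 3) + connected
    components. Returns centroid, area and bounding-box aspect ratio
    for every blob, as in Sec. 3.1."""
    dog = blur(img, sigma + dsigma) - blur(img, sigma - dsigma)
    # T(x, y): Gaussian-weighted neighbourhood mean plus the offset s
    local_mean = blur(dog, b / 6.0)
    binary = dog > local_mean + s
    # Simple 4-connectivity flood fill to extract connected regions
    H, W = binary.shape
    seen = np.zeros((H, W), bool)
    blobs = []
    for y0 in range(H):
        for x0 in range(W):
            if binary[y0, x0] and not seen[y0, x0]:
                stack, pts = [(y0, x0)], []
                seen[y0, x0] = True
                while stack:
                    y, x = stack.pop()
                    pts.append((y, x))
                    for ny, nx in ((y+1, x), (y-1, x), (y, x+1), (y, x-1)):
                        if 0 <= ny < H and 0 <= nx < W \
                                and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                pts = np.array(pts)
                w = np.ptp(pts[:, 1]) + 1
                h = np.ptp(pts[:, 0]) + 1
                blobs.append({'centroid': (pts[:, 1].mean(), pts[:, 0].mean()),
                              'area': len(pts), 'ratio': w / h})
    return blobs

# Synthetic thermal image: warm wall with one cool "wet lint" patch
img = np.full((80, 80), 200.0)
img[30:40, 50:60] = 50.0
targets = detect_putative_targets(img, sigma=5.0)
```

The dark patch yields a strong positive DoG response that survives the adaptive threshold, so the dominant blob's centroid lands near the patch centre.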

3.2 Classifying targets

As we have seen, every detected blob represents a putative target: we now turn to the problem of classifying those regions into real (inlier) and fake (outlier) targets. This is a 2-class categorical classification problem: we use a method similar to the Viola-Jones object detection technique [14], based on the AdaBoost (Adaptive Boosting, [6]) learning algorithm applied to a set of Haar-like features efficiently extracted from the input thermal images.

3.2.1 Feature extraction

Like most classification algorithms, AdaBoost needs a discrete representation (i.e., a feature vector) of the object to be classified.

Figure 5: The four Haar-like features used in the classification: the white regions are interpreted as “add that area”, while the black regions as “subtract that area”.

For each putative target, we consider in the input gray-level image (Fig. 4(a)) a square region of fixed size that should cover the whole surface of the viewed target (e.g., the black and white boxes depicted in Fig. 4(d)), and not only the wet lint surface (see Fig. 3(b)). Similarly to [14], we compute in that region a small number (four in our case) of Haar-like visual features (Fig. 5): the white regions are interpreted as “add that area”, while the black regions as “subtract that area”. The sums and differences of the pixel values over the square regions are calculated efficiently using integral images [5]. We also include in our feature vector the two values representing the area in pixels and the aspect ratio (width/length) computed during the selection of the putative targets (Sec. 3.1).
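The integral-image trick [5] that makes these features cheap can be sketched as follows. The concrete two-rectangle feature layout is an illustrative assumption (the paper's four features are shown in Fig. 5, not specified numerically):

```python
import numpy as np

def integral_image(img):
    """Summed-area table [5]: ii[y, x] = sum of img[0:y, 0:x]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, y, x, h, w):
    """Sum of pixels in the h x w box with top-left corner (y, x), in O(1)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_left_right(ii, y, x, h, w):
    """One two-rectangle Haar-like feature: the left ("white") half is
    added, the right ("black") half subtracted. Illustrative layout."""
    half = w // 2
    return box_sum(ii, y, x, h, half) - box_sum(ii, y, x + half, h, half)

# A patch whose left half is brighter gives a strong positive response
patch = np.zeros((8, 8))
patch[:, :4] = 1.0
ii = integral_image(patch)
feature = haar_left_right(ii, 0, 0, 8, 8)
```

Once the table is built, any rectangle sum costs four lookups, so evaluating all four features per candidate is essentially free.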

3.2.2 Classification using boosting

Boosting is a classification method that works by sequentially applying a simple classification algorithm (in most cases, a decision tree) to re-weighted versions of the training data, and then taking a weighted majority vote of the sequence of classifiers thus produced. Boosting is a supervised learning technique: it generates a function that maps inputs (the feature vector) to a class label (inlier or outlier in our case) given a set of training data. This dataset is composed of a number of pairs of input feature vectors and the desired class labels. AdaBoost (Adaptive Boosting) is an instance of the boosting machine learning algorithm that builds the sequence of simple classifiers by giving higher weight to training samples that are currently misclassified [6]. In our case the training stage is performed off-line, before the scanning process.

Figure 6: The graphical user interface (GUI) of the application developed for the training stage of the AdaBoost algorithm. The user selects each putative target and assigns it a label (green boxes are inliers, gray boxes are outliers).

We developed a specific graphical tool for this task (Fig. 6). The user chooses a set of representative thermal images: for each of them, the application automatically selects the putative targets (Sec. 3.1). Using a point-and-click strategy, the user decides the class label of each putative target: inlier (green boxes in Fig. 6) or outlier (gray boxes in Fig. 6). The collected set of input feature vectors and the corresponding class labels (inlier/outlier) is then used to train the AdaBoost algorithm: in other words, the algorithm learns the functional relationship F : y = F(x) between input feature vectors x and output labels y. This relationship is used during the actual classification stage to predict the class label of a putative target given its incoming feature vector.
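As an illustration of the boosting machinery (a generic discrete AdaBoost with decision stumps and toy feature vectors, not the paper's implementation), the re-weighting loop can be written as:

```python
import numpy as np

def train_stump(X, y, w):
    """Weak learner: pick the decision stump (feature, threshold,
    polarity) with the lowest weighted error. Labels are +1/-1."""
    best = (0, 0.0, 1, np.inf)
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, f] - thr) > 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (f, thr, pol, err)
    return best

def adaboost_train(X, y, rounds=10):
    """Discrete AdaBoost [6]: each round re-weights the training set,
    raising the weight of currently misclassified samples."""
    w = np.full(len(y), 1.0 / len(y))
    model = []
    for _ in range(rounds):
        f, thr, pol, err = train_stump(X, y, w)
        err = max(err, 1e-10)                     # avoid log(0)
        alpha = 0.5 * np.log((1.0 - err) / err)   # classifier weight
        pred = np.where(pol * (X[:, f] - thr) > 0, 1, -1)
        w *= np.exp(-alpha * y * pred)            # boost the mistakes
        w /= w.sum()
        model.append((f, thr, pol, alpha))
    return model

def adaboost_predict(model, X):
    """Weighted majority vote of the trained stumps."""
    score = np.zeros(len(X))
    for f, thr, pol, alpha in model:
        score += alpha * np.where(pol * (X[:, f] - thr) > 0, 1, -1)
    return np.where(score > 0, 1, -1)

# Toy feature vectors: inliers (+1) have a large first feature
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0],
              [7.0, 2.0], [8.0, 1.0], [9.0, 3.0]])
y = np.array([-1, -1, -1, 1, 1, 1])
model = adaboost_train(X, y, rounds=5)
```

The trained model plays the role of F : y = F(x) above: at run time, each candidate's feature vector is pushed through `adaboost_predict` to obtain its inlier/outlier label.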

4 Automatic Monitoring Process

The presented monitoring process involves an accurate computation, for each location, of some fundamental parameters such as the air temperature and speed, the relative humidity and the wall temperature. Those thermodynamic quantities are evaluated from the measures extracted from the thermal images recorded during the scanning process. On the other hand, the IR camera frames only a partial field of view (here called Partial Field-of-View, PFOV) of the desired area. In order to automatically perform the whole monitoring process, it is therefore essential to accurately track the location of every PFOV inside the area of interest during a complete scan. Given the 2D metric map of the reference grid (i.e., the location of every target) and a thermal image with the projection of at least four known targets (i.e., we know which targets are projected), it is possible to compute the homography H between image points and grid points by solving a linear system [8] (Fig. 7).

Figure 7: The IR camera looking at a plane with the four point correspondences (P0, P1, P2, P3) needed to determine H. During the scanning process, we track the position of the intersection point between the optical axis of the camera (zcam) and the planar surface of the reference grid [source: www.wikipedia.com].

Given the homography H, it is possible to map every image point pi to a world point Pi on the surface of the reference grid as:

Pi = Hpi (4)

As we have seen, the computation of the homographies requires knowing which target is actually inside the PFOV (i.e., which target is actually projected inside the thermal image). We therefore propose to estimate the position of the intersection point between the optical axis of the camera (zcam) and the planar surface of the reference grid: given this location (the state of the system), the four target correspondences, and hence the corresponding homography, can be easily recovered. It is important to note that no geometric calibration of the IR camera is needed in our system: it is only assumed that the optical axis of the camera zcam intersects the image plane at a point fairly close to the image center.
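The linear system mentioned above is the standard Direct Linear Transform [8]; a sketch with illustrative point values (the ground-truth homography below is invented for the demonstration):

```python
import numpy as np

def find_homography(src, dst):
    """Estimate the 3x3 homography H mapping src points to dst points
    from four (or more) correspondences by solving the DLT linear
    system [8] via SVD."""
    A = []
    for (x, y), (X, Y) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, X * x, X * y, X])
        A.append([0, 0, 0, -x, -y, -1, Y * x, Y * y, Y])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def map_point(H, p):
    """Eq. 4: P = H p, in homogeneous coordinates."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Four image-point / grid-point correspondences (illustrative values)
H_true = np.array([[1.2, 0.1, 5.0],
                   [0.05, 0.9, -3.0],
                   [1e-3, 2e-3, 1.0]])
src = [(0.0, 0.0), (100.0, 0.0), (100.0, 100.0), (0.0, 100.0)]
dst = [tuple(map_point(H_true, p)) for p in src]
H_est = find_homography(src, dst)
```

With exact correspondences the 8 × 9 system has a one-dimensional null space, so the SVD recovers H up to scale; the division by H[2, 2] fixes that scale.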

4.1 Target tracking using a Particle Filter

Our approach is to represent the state of the system in a probabilistic fashion through a posterior density function (PDF) P[xt|z0, . . . , zt; u0, . . . , ut], where xt is the state of the system at time t, z0, . . . , zt is the sequence of all the observations up to time t and u0, . . . , ut is the sequence of all the actions performed up to time t. In our case, a single observation is the set of detected targets with their positions inside the thermal image (see Sec. 3), while a single action is a command sent to the pan-tilt motor (for example, pan clockwise). The PDF can be estimated recursively over time from the incoming observations and actions using a Recursive Bayes Filter, which exploits the Markov assumption of the stochastic process [13]:

Bel(xt) = η P[zt|xt] ∫ P[xt|ut, xt−1] Bel(xt−1) dxt−1    (5)

Here we define Bel(xt) = P[xt|z0, . . . , zt; u0, . . . , ut] (also called the Belief), while P[xt|ut, xt−1] is the action model, P[zt|xt] is the observation model and η is a normalization factor. The action model represents the probability density of the state at time t given the state at time t − 1 and the last action ut (pan-tilt movement) performed. The observation model represents the probability density of the observation at time t given the state at the same time. To solve the Bayes Filter (Eq. 5) we use a Particle Filter, which provides an approximate solution of the Bayes Filter. Particle Filters are well suited to cope with multi-modal PDFs, as in our case, due to the strong spatial symmetry of the reference grid (Fig. 8). The key idea of the Particle Filter is to represent the posterior density function of the state by means of a set of N particles (x[1]t, x[2]t, . . . , x[N]t). Each particle represents a hypothesis of the state (i.e., in our case a 2D position inside the grid): the more densely a subregion of the state space is populated by particles, the more likely it is that the true state falls into this region [13]. In detail, at time t our tracking process works as follows:

1. We detect targets (i.e., the observation zt) within the incoming thermal image using the strategy presented in Sec. 3.

2. All particles are moved inside the grid according to the action model, i.e. we replace each particle x[i]t−1 with a new particle x[i]t ∼ P[xt|ut, x[i]t−1], i = 1, . . . , N. We use here an approximation of the motion: given the last command sent to the pan-tilt motor, the time elapsed since the command invocation and the distance of the IR camera from the grid, it is possible to recover the relative movement of the intersection point inside the grid. In order to sample a new particle x[i]t, this relative movement is added to the position of each x[i]t−1, plus zero-mean Gaussian noise that models the action's uncertainty.

3. All particles are weighted according to the observation model: for each particle x[i]t, we calculate the so-called importance factor w[i]t, with w[i]t = P[zt|x[i]t], i = 1, . . . , N. In our case, given a hypothetical position x[i]t, we estimate the association between the targets detected in the image and the real targets that lie in the neighborhood of that position. This is obtained by comparing the relative distances and angles between the image center and the detected targets with the relative distances and angles between the particle position and the neighbouring targets in the reference grid. With four or more target correspondences, it is possible to compute the homography H[i]. We then project the image center pc onto the grid plane using this homography: P[i]c = H[i] pc. If the position of the particle x[i]t is close to the real position, the projection of the image center P[i]c should fall near this particle. In order to compute the importance factor, we compute the distance d[i] between P[i]c and x[i]t. The importance factor w[i]t is then obtained by evaluating the probability of d[i] under a zero-mean Gaussian distribution that models the observation noise.

4. The particles are finally resampled based on their importance factors [13], i.e. we construct a new population of particles by selecting particles from the current population with probability proportional to the weight of each particle. Some initial particles may be forgotten and some may be duplicated.
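The four steps above can be sketched as a single filter iteration. This is a minimal 2-D sketch: the working-area size, the noise levels and the surrogate observation model (a single projected image-center point standing in for the full per-particle homography computation) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 500
grid_size = (200.0, 100.0)   # assumed working area, in cm

# Initialization: particles uniformly distributed over the working area
particles = rng.uniform([0.0, 0.0], grid_size, size=(N, 2))

def predict(particles, motion, action_noise=2.0):
    """Step 2 -- action model: shift every particle by the relative
    movement recovered from the pan-tilt command, plus Gaussian noise."""
    return particles + motion + rng.normal(0.0, action_noise, particles.shape)

def weight(particles, projected_center, obs_noise=5.0):
    """Step 3 -- observation model: importance factor from the distance
    d between each particle and the image-center projection P_c,
    evaluated under a zero-mean Gaussian."""
    d = np.linalg.norm(particles - projected_center, axis=1)
    w = np.exp(-0.5 * (d / obs_noise) ** 2)
    return w / w.sum()

def resample(particles, w):
    """Step 4 -- draw N particles with probability proportional to w."""
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

# One filter iteration: the camera pans, moving the optical-axis
# intersection by roughly (10, 0) cm, and the observation projects the
# image center near the assumed true position (120, 50).
true_pos = np.array([120.0, 50.0])
particles = predict(particles, motion=np.array([10.0, 0.0]))
w = weight(particles, projected_center=true_pos)
particles = resample(particles, w)
print("estimate:", particles.mean(axis=0))   # close to (120, 50)
```

Repeating this predict-weight-resample cycle as the camera moves is what collapses the initially uniform cloud onto the true optical-axis intersection, as in the localization sequence of Fig. 8.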

4.2 Initialization and scanning

At the beginning of the process, the particles are uniformly distributed over the whole working area (Fig. 8(a)). A sequence of fixed movements is performed in order to estimate the real position inside the grid: all particles that fall outside the working area obtain an importance factor equal to zero, so they are discarded during the resampling process. At the end of this initialization step, all particles are concentrated around the real position (Fig. 8(f)). During the scanning of the wall surfaces, a set of special positions is automatically and sequentially reached by the moving IR camera: for each position a thermal image is recorded. These special positions (magenta boxes in Fig. 8) are the centroids of neighbouring targets. Images taken at such locations usually contain four or more targets, enabling the estimation of the homography that allows an accurate mapping between image points and world points. Given the homography, we rectify the image (i.e., remove the distortion due to perspective projection) in order to obtain a more accurate temperature sampling.
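The rectification step can be sketched as an inverse warp (nearest-neighbour sampling and a tiny image are used here for brevity; these are illustrative choices, not the system's implementation):

```python
import numpy as np

def rectify(img, H, out_shape):
    """Inverse warping: H maps image points to grid points (Eq. 4), so
    output pixel (X, Y) samples the source image at H^-1 (X, Y, 1)."""
    Hinv = np.linalg.inv(H)
    out = np.zeros(out_shape, dtype=img.dtype)
    for Y in range(out_shape[0]):
        for X in range(out_shape[1]):
            p = Hinv @ np.array([X, Y, 1.0])
            x, y = p[0] / p[2], p[1] / p[2]
            xi, yi = int(round(x)), int(round(y))
            if 0 <= yi < img.shape[0] and 0 <= xi < img.shape[1]:
                out[Y, X] = img[yi, xi]
    return out

# With the identity homography the image is returned unchanged
img = np.arange(16.0).reshape(4, 4)
assert np.array_equal(rectify(img, np.eye(3), (4, 4)), img)
```

Iterating over output pixels (rather than source pixels) guarantees every rectified pixel gets a value, which is what makes the subsequent temperature sampling uniform over the wall surface.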



Figure 8: A typical localization sequence. The green boxes represent the positions of the targets inside the reference grid, while the magenta boxes represent the special positions that should be reached during the scanning process. (a) The particles are uniformly distributed over the whole working area; (b),(c),(d),(e) As the IR camera moves, the particles tend to converge toward the real position; (f) All the particles are concentrated around the real position.

5 Experiments

We implemented our monitoring system in C++ using the efficient OpenCV image processing library1 [3]: Fig. 9 depicts the graphical user interface of the application. The whole process runs in real time with a frame rate of 9 fps on a 2 GHz Core 2 Linux-based laptop.

Figure 9: The graphical user interface (GUI) of the developed application. On the left: the currently grabbed IR image; on the right: the reference grid and the particles representing the probability distribution of the position of the IR camera's optical axis inside the grid.

We successfully tested our system in several environments with different surface temperature distributions (e.g., Fig. 1 shows the measurement technique at work during the experiments inside the Masino castle in Italy). During the initialization step, the real position of the intersection point between the optical axis and the planar surface of the grid was correctly estimated in all the experiments (Fig. 8(f)).

Figure 10: The condensation risk studied with the new method inside the Masino castle (Italy).

After the initialization, the complete scanning procedure was completed correctly in all cases, following the expected scanning path and grabbing the thermal images at the correct positions. These results were obtained after a proper learning procedure that enables the system to reliably recognize the targets inside the grid. Figure 10 shows a few cross sections that have been measured according to the described method in the Masino castle in Italy. The maps represent the air temperature in the range 3.5 ÷ 5.5 °C, as seen in Fig. 11(a). Similar maps for the air speed distribution in the range 0 ÷ 0.5 m s−1 have been collected (see Fig. 11(b)). All results are geometrically corrected, with horizontal and vertical coordinates expressed in cm.

1 http://opencv.willowgarage.com


Figure 11: (a) Air temperature in the range 3.5 ÷ 5.5 °C, with corrected geometry and space scale in cm; (b) Air speed distribution in the range 0 ÷ 0.5 m s−1 (space scale in cm).

6 Conclusions

IR thermography is an effective non-destructive method for the evaluation of indoor thermo-hygrometric conditions and material decay. Using the proposed grid of targets, moisture damage can be detected in a much more reliable and non-destructive way. High thermal and spatial resolution are achieved, making it possible to follow the real physical process. The proposed robotized system allows an automatic tracking of the grid targets and a reliable and accurate mapping between image points and world points, which significantly improves the effectiveness of the method. Indeed, with a fully automatic scanning, the presence of humans inside the monitored environment is not necessary, avoiding perturbations of the natural status of the microclimate. Future work will deal with improving the target detection strategy in order to make it less sensitive to the quality of the learning procedure.

References

[1] P.G. Bison, C. Bressan, E. Grinzato, and S. Marinetti. Automatic air and surface temperature measure by IR thermography with perspective correction. SPIE, 1821:252–260, November 1992, Boston (USA).

[2] P.G. Bison, G.M. Cortelazzo, E. Grinzato, and G.A. Mian. Automatic thermal reference detection in thermographic images. XIIth Thermosense SPIE, 1313:269–277, April 16-20 1990, Orlando (USA).

[3] Gary Rost Bradski and Adrian Kaehler. Learning OpenCV. O'Reilly, 2008.

[4] D. Camuffo. Microclimate for cultural heritage. Elsevier, 1998.

[5] Franklin Crow. Summed-area tables for texture mapping. In SIGGRAPH '84: Proceedings of the 11th annual conference on Computer graphics and interactive techniques, pages 207–212, 1984.

[6] J. H. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: a statistical view of boosting. Technical Report, Dept. of Statistics, Stanford University, 1998.

[7] E. Grinzato. Thermal-hygrometric monitoring of wide surfaces by IR Thermography. Patent n. PD2006A000191.

[8] R. I. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, ISBN: 0521540518, second edition, 2004.

[9] A. K. Jain. Fundamentals of Digital Image Processing. Prentice Hall, 1986.

[10] X. Maldague. Theory and Practice of Infrared Technology for Non Destructive Testing. John Wiley & Sons, 2001.

[11] C. Ohman. Practical methods for improving thermal measurement. Development Department, AGA Infrared System AB, Box 3, S-182 Danderyd, Sweden.

[12] J. Snell. The thermal behavior and signatures of water in buildings. Thermosense XXIX SPIE, 6541:654109.1–654109.9, April 9-13 2007, Orlando (USA).

[13] Sebastian Thrun, Wolfram Burgard, and Dieter Fox. Probabilistic Robotics. The MIT Press, 2005.

[14] Paul Viola and Michael Jones. Rapid object detection using a boosted cascade of simple features. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), pages 511–518, 2001.