Precision Engineering 43 (2016) 514–524
http://dx.doi.org/10.1016/j.precisioneng.2015.09.014

Combining multiple Large Volume Metrology systems: Competitive versus cooperative data fusion

Fiorenzo Franceschini*, Maurizio Galetto, Domenico Maisano, Luca Mastrogiacomo
Politecnico di Torino, DIGEP (Department of Management and Production Engineering), Corso Duca degli Abruzzi 24, 10129 Torino, Italy

* Corresponding author. Tel.: +39 0110907225. E-mail addresses: [email protected] (F. Franceschini), [email protected] (M. Galetto), [email protected] (D. Maisano), [email protected] (L. Mastrogiacomo).

Article history: Received 15 May 2015; Received in revised form 19 August 2015; Accepted 16 September 2015; Available online 22 October 2015.

Keywords: Cooperative fusion; Competitive fusion; Large Volume Metrology; Large scale dimensional metrology; Hybrid localization.

Abstract

Large Volume Metrology (LVM) tasks can require the concurrent use of several measuring systems. These systems generally consist of a set of sensors measuring the distances and/or angles with respect to a point of interest, so as to determine its 3D position. When combining different measuring systems, characterized by sensors of different nature, competitive or cooperative methods can be adopted for fusing data. Competitive methods, which are by far the most widespread in LVM, basically perform a weighted mean of the 3D positions determined by the individual measuring systems. On the other hand, in cooperative methods, distance and/or angular measurements by sensors of different systems are combined together in order to determine a unique 3D position of the point of interest.

This paper proposes a novel cooperative approach which takes account of the measurement uncertainty in the distance and angular measurements of sensors of different nature. The proposed approach is compared with classical competitive approaches from the viewpoint of metrological performance. The main advantages of the cooperative approach, with respect to the competitive one, are: (i) it is the only option when the individual LVM systems are not able to provide autonomous position measurements (e.g., laser interferometers or single cameras); (ii) it is the only option when only some of the sensors of autonomous systems work correctly (for instance, a laser tracker in which only distance – not angular – measurements are performed); (iii) when using systems with redundant sensors (e.g., photogrammetric systems with a large number of distributed cameras), point localization tends to be better than with the competitive fusion approach.

© 2015 Elsevier Inc. All rights reserved.

1. Introduction and literature review

Sensor fusion can be defined as the combination of sensory data from different sources, so that the resulting information is somehow better than the information deriving from the single sources taken separately [1,2]. The term "better" can assume different meanings depending on the context of interest: it can mean more accurate, more complete, more reliable, etc. In their survey, Weckenmann et al. [3] define the concept of multisensor data fusion in dimensional metrology as "the process of combining data from several information sources (sensors) into a common representational format in order that the metrological evaluation can benefit from all available sensor information and data".
An individual dimensional metrology system is basically able to collect and process several input data, deriving from a set of sensors, so as to provide an output, generally the dimensional coordinates of the point of interest (see Fig. 1). It generally consists of:

• A set of sensors, i.e. devices able to measure primary geometric quantities, such as distances and angles.
• A Data Processing Unit (DPU), which processes sensory data in order to provide an estimate of the 3D position of the point of interest.

Typical industrial applications are the reconstruction of curves/surfaces for dimensional verification and/or the assembly of large-sized mechanical components.

According to the sensor layout, dimensional metrology systems can be classified as (i) centralized, if sensors are grouped into a unique stand-alone unit, or (ii) distributed, if sensors are spread around the measuring volume [4–6].



Large Volume Metrology (LVM) tasks often involve the concurrent use of multiple dimensional metrology systems (for instance, two or more laser trackers, or scanners, combined with a distributed photogrammetric system, etc.) [3,7–9]. This practice has several benefits, including but not limited to: (i) overcoming the limitations of the individual systems; (ii) improving measurement accuracy; (iii) taking advantage of the overall available instrumentation; (iv) reducing the risk of measurement errors.

Fig. 1. Schematic representation of a generic dimensional metrology system. The Data Processing Unit (DPU) gathers and processes sensory data to output the 3D coordinates of the point of interest, i.e. [X, Y, Z]^T.

The concurrent use of multiple systems requires the definition of suitable data-fusion strategies. Two possible approaches can be adopted [2,10]:

• Competitive fusion. Each system performs an independent measurement of the 3D coordinates of the point of interest, and these position measurements are fused into a single one [11]. This fusion approach is defined as competitive, since the systems "compete" for the definition of the fusion result. For example, this principle is implemented in SpatialAnalyzer®, probably the most widespread software for LVM applications [12]. The goal of competitive fusion is to improve the measurement accuracy while reducing the risk of measurement errors [11].

• Cooperative fusion. Data provided by two or more independent (non-homogeneous) sensors, even from different measuring systems, are processed in order to achieve information that could not otherwise be obtained from the individual sensors [11]. According to this logic, the different sensors share their local measurements and "cooperate" in determining a unique position measurement of the point of interest. For example, data from (i) two sensors of a system performing angular measurements and (ii) one sensor of another system performing distance measurements can be combined to determine the 3D coordinates of the point of interest.

Compared to the competitive data-fusion approach, the cooperative one is more difficult to implement, as it requires that the individual measurement systems be able to return "intermediate" data, such as the distance and angular measurements of the relevant sensors. However, it will be shown that a cooperative fusion approach can potentially make more efficient use of the available information, resulting in improved metrological performance. Also, it is the only option when dealing with sensors that, taken separately, are not able to perform independent localizations of the point of interest (for instance, a laser interferometer combined with a single photogrammetric camera) [13].

While the scientific literature encompasses several descriptions of competitive approaches [11], cooperative ones are almost totally ignored or confined to specific measurement applications [14–19].

This paper proposes a novel cooperative approach which takes account of the measurement uncertainty in the distance and angular measurements of sensors of different nature. The proposed approach will be compared with classical competitive approaches from the viewpoint of metrological performance.

The remainder of the paper is structured as follows. Section 2 introduces the problem of data fusion in dimensional metrology through an introductory example, while the problem is formalized in Section 3. Section 4 discusses the classical competitive approach and proposes a generalized cooperative approach. Section 5 reports a benchmark comparison between the proposed approach and a classical competitive approach, based on real and simulated experiments. The concluding section summarizes the original contributions of the paper, focusing on the benefits, limitations and possible future developments.

2. Introductory example

Before providing a comprehensive description of the competitive and cooperative approaches, we present an introductory example to illustrate the data-fusion problem. Let us consider a measurement volume covered by two LVM systems (see Fig. 2):

• A photogrammetric system (PS), which is able to determine the position of a target in contact with the object to be measured. Precisely, this system includes several cameras distributed around the measurement volume, each able to determine the 2D position of the target within the local image and the angles subtended by the target with respect to the camera's focal point [8]. The 3D coordinates of the target are then determined by combining the local measurements of the individual cameras [8,20].

• A laser tracker (LT), which is able to measure the 3D coordinates of a point by tracking a laser beam from the instrument to a retro-reflective target in contact with the object to be measured. The position of the retro-reflective target is calculated using the distance (d̂_LT) and angular (θ̂_LT and φ̂_LT) measurements subtended by the target.

In the layout proposed in Fig. 2, the PS includes a set of three cameras. Each ith camera performs an azimuth (θ̂_Ci) and an elevation (φ̂_Ci) angular measurement with respect to the point of interest. Next, the position of P can be calculated by combining the local measurements from the three cameras; in formal terms:

$$\hat{\mathbf{x}}_{P_{PS}} = [x_{P_{PS}}, y_{P_{PS}}, z_{P_{PS}}]^T = f_{PS}\left((\hat{\theta}_{C_1}, \hat{\varphi}_{C_1}), (\hat{\theta}_{C_2}, \hat{\varphi}_{C_2}), (\hat{\theta}_{C_3}, \hat{\varphi}_{C_3})\right), \quad (1)$$

where x̂_PPS is the estimate of the position of P; θ̂_Ci and φ̂_Ci are the local angular measurements performed by each ith camera; f_PS is a suitable function for turning local angular measurements into x̂_PPS.

The proposed layout also includes a LT, which is typically equipped with an absolute distance meter (ADM) and/or a laser interferometer (performing distance measurements d̂_LT) and two angular encoders (performing angular measurements, i.e. azimuth θ̂_LT and elevation φ̂_LT, with respect to P). As for the PS, the LT is able to estimate the position of P using the available local measurements:

$$\hat{\mathbf{x}}_{P_{LT}} = [x_{P_{LT}}, y_{P_{LT}}, z_{P_{LT}}]^T = f_{LT}(\hat{d}_{LT}, \hat{\theta}_{LT}, \hat{\varphi}_{LT}), \quad (2)$$

where x̂_PLT is the estimate of the position of P; d̂_LT, θ̂_LT and φ̂_LT are the local distance and angular measurements performed by the LT; f_LT is a suitable function for turning local measurements into x̂_PLT.

Although the two systems are able to operate independently, their combined use could lead to the aforementioned advantages. So, how can one take advantage of both systems? What is the most appropriate strategy?

The most obvious and immediate approach would be the averaging of the position estimates (x̂_PPS and x̂_PLT) provided by the two systems. However, this is certainly not the most efficient approach, since it neglects the metrological performance (i.e. the measurement uncertainty) of the individual systems and the fact that the estimate of one system can be significantly better than that of the other. In typical competitive approaches, this problem is overcome by aggregating x̂_PPS and x̂_PLT through a weighted average, where the weights take account of the measurement uncertainty of the single LVM systems; in formal terms:

$$\hat{\mathbf{x}}_{P_{COMP}} = [x_{P_{COMP}}, y_{P_{COMP}}, z_{P_{COMP}}]^T = f_{COMP}(\hat{\mathbf{x}}_{P_{LT}}, \hat{\mathbf{x}}_{P_{PS}}), \quad (3)$$

where x̂_PCOMP is the estimate of the position of P; x̂_PLT and x̂_PPS are the estimates of the position of P given by the LT and the PS respectively; f_COMP is a suitable function for combining x̂_PLT and x̂_PPS into x̂_PCOMP, considering the individual system uncertainties.

Fig. 2. Example of concurrent measurement by two LVM systems: a laser tracker (LT) and a photogrammetric system (PS).

On the other hand, the cooperative fusion approach aims at estimating the position of P as a function of the local angular and/or distance measurements of the individual sensors of the two systems, also considering their metrological characteristics; in formal terms:

$$\hat{\mathbf{x}}_{P_{COOP}} = [x_{P_{COOP}}, y_{P_{COOP}}, z_{P_{COOP}}]^T = f_{COOP}\left((\hat{\theta}_{C_1}, \hat{\varphi}_{C_1}), (\hat{\theta}_{C_2}, \hat{\varphi}_{C_2}), (\hat{\theta}_{C_3}, \hat{\varphi}_{C_3}), \hat{d}_{LT}, \hat{\theta}_{LT}, \hat{\varphi}_{LT}\right), \quad (4)$$

where x̂_PCOOP is the estimate of the position of P; θ̂_Ci and φ̂_Ci are the local angular measurements performed by each ith camera; d̂_LT, θ̂_LT and φ̂_LT are the local distance and angular measurements performed by the LT; f_COOP is a suitable function for determining the position of P from the local sensor measurements, taking account of sensor measurement uncertainty.

3. Problem description

The problem herein discussed is to determine the 3D position of a point, combining the measurements of a number of LVM systems. Precisely, there are N LVM systems (D1, ..., DN), distributed over the measurement volume, so that each system is able to "see" the point of interest (P ≡ [X, Y, Z]^T). Each ith system consists of Mi sensors. A centralized Data Processing Unit (DPU) receives and processes local measurement data from the totality of the sensors of the N LVM systems. A schematic representation of the problem architecture is shown in Fig. 3.

Each jth sensor of the ith LVM system has its own spatial position and orientation, and a local coordinate system o_ij-x_ij y_ij z_ij. A general transformation between a local and the global coordinate system (O-XYZ) is given by:

$$\mathbf{X} = R_{ij}\,\mathbf{x}_{ij} + \mathbf{X}_{0_{ij}} \;\Rightarrow\; \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} r_{11_{ij}} & r_{12_{ij}} & r_{13_{ij}} \\ r_{21_{ij}} & r_{22_{ij}} & r_{23_{ij}} \\ r_{31_{ij}} & r_{32_{ij}} & r_{33_{ij}} \end{bmatrix} \begin{bmatrix} x_{ij} \\ y_{ij} \\ z_{ij} \end{bmatrix} + \begin{bmatrix} X_{0_{ij}} \\ Y_{0_{ij}} \\ Z_{0_{ij}} \end{bmatrix}. \quad (5)$$

X_0ij = [X_0ij, Y_0ij, Z_0ij]^T are the coordinates of the origin of o_ij-x_ij y_ij z_ij – i.e. the position of the jth sensor of the ith LVM system – in the global coordinate system O-XYZ, and x_ij = [x_ij, y_ij, z_ij]^T are the coordinates of P in the local coordinate system o_ij-x_ij y_ij z_ij. R_ij is a rotation matrix, whose elements are functions of three rotation angles (i.e. ω_ij, φ_ij and κ_ij, see Fig. 4):

$$R_{ij} = \begin{bmatrix} \cos\phi_{ij}\cos\kappa_{ij} & -\cos\phi_{ij}\sin\kappa_{ij} & \sin\phi_{ij} \\ \cos\omega_{ij}\sin\kappa_{ij} + \sin\omega_{ij}\sin\phi_{ij}\cos\kappa_{ij} & \cos\omega_{ij}\cos\kappa_{ij} - \sin\omega_{ij}\sin\phi_{ij}\sin\kappa_{ij} & -\sin\omega_{ij}\cos\phi_{ij} \\ \sin\omega_{ij}\sin\kappa_{ij} - \cos\omega_{ij}\sin\phi_{ij}\cos\kappa_{ij} & \sin\omega_{ij}\cos\kappa_{ij} + \cos\omega_{ij}\sin\phi_{ij}\sin\kappa_{ij} & \cos\omega_{ij}\cos\phi_{ij} \end{bmatrix}, \quad (6)$$

where ω_ij represents a counterclockwise rotation around the x_ij axis; φ_ij represents a counterclockwise rotation around the new y_ij axis (i.e. y′_ij), which was rotated by ω_ij; κ_ij represents a counterclockwise rotation around the new z_ij axis (i.e. z″_ij), which was rotated by ω_ij and then φ_ij.

The (six) location/orientation parameters related to the jth sensor of the ith LVM system (i.e. X_0ij, Y_0ij, Z_0ij, ω_ij, φ_ij, κ_ij) are considered known, since they are measured in an initial calibration process.
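As a purely illustrative aid (this snippet is not part of the original paper, and the function names are assumptions), the transformation of Eqs. (5) and (6) can be sketched in Python/NumPy as follows:

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Rotation matrix of Eq. (6), from the counterclockwise rotation
    angles (radians): omega about x_ij, phi about the rotated y'_ij,
    kappa about the rotated z''_ij."""
    so, co = np.sin(omega), np.cos(omega)
    sp, cp = np.sin(phi), np.cos(phi)
    sk, ck = np.sin(kappa), np.cos(kappa)
    return np.array([
        [cp * ck,                -cp * sk,                 sp     ],
        [co * sk + so * sp * ck,  co * ck - so * sp * sk, -so * cp],
        [so * sk - co * sp * ck,  so * ck + co * sp * sk,  co * cp],
    ])

def local_to_global(x_local, R, X0):
    """Eq. (5): map the local coordinates x_ij of P into the global
    frame, given the sensor rotation R_ij and position X_0ij."""
    return R @ np.asarray(x_local, dtype=float) + np.asarray(X0, dtype=float)
```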

We remark that each ith LVM system (Di) is able to estimate the position of P autonomously. Let X̂_Pi = [X_Pi, Y_Pi, Z_Pi]^T be the measurement of point P by Di, expressed in the global coordinate system O-XYZ through Eq. (5).

The accuracy of Di in measuring the coordinates of P can be quantified by defining the covariance matrix Σ_X̂Pi.

Fig. 3. Schematic representation of the problem architecture. Two LVM systems (D1 and D2) are presented, respectively consisting of M1 = 2 and M2 = 3 sensors. P is the point to be measured. The Data Processing Unit (DPU) gathers and processes the data obtained from the distributed sensors.

Fig. 4. Rotation parameters regarding the transformation between a local coordinate system (o_ij-x_ij y_ij z_ij) and the global one (O-XYZ).

Let S_ij be the generic vector of local measurement(s) associated with the individual jth sensor of the ith measurement system. In general, S_ij can be of three types:

• S_ij = [d̂_ij] for sensors performing distance measurements only (for instance, ADMs, interferometers, ultrasound sensors, etc.), where d̂_ij is the local distance measurement. These sensors generally equip LVM systems based on multilateration [8];
• S_ij = [θ̂_ij, φ̂_ij] for angular sensors (for instance, optical/magnetic encoders, photogrammetric cameras, iGPS sensors, etc.), where θ̂_ij and φ̂_ij are respectively the local measurements of the azimuth and elevation angles. These sensors generally equip LVM systems based on triangulation [8,21];
• S_ij = [d̂_ij, θ̂_ij, φ̂_ij] for hybrid sensors, i.e. sensors able to measure one distance and two angles (for instance, laser trackers, 3D scanners, or combinations of other sensors). These sensors generally equip hybrid LVM systems [8].

Let also Σ_Sij be the covariance matrix related to each sensor measurement.

The competitive approach (schematized in Fig. 5a) estimates the position of P relying on the position estimates performed by the individual systems and the relevant uncertainties:

$$\hat{\mathbf{X}}_{P,COMP} = f_{COMP}(\hat{\mathbf{X}}_{P_i} \mid \Sigma_{\hat{X}_{P_i}},\; \forall i \in \{1, \ldots, N\}). \quad (7)$$

On the other hand, the cooperative approach (schematized in Fig. 5b) estimates the position of P relying on the local angular and distance measurements performed by the individual sensors and the relevant uncertainties:

$$\hat{\mathbf{X}}_{P,COOP} = f_{COOP}(S_{ij} \mid \Sigma_{S_{ij}},\; \forall (i, j) : i \in \{1, \ldots, N\},\; j \in \{1, \ldots, M_i\}). \quad (8)$$

4. Discussion of the two data-fusion approaches

The following two sections provide a detailed discussion of the competitive and cooperative fusion approaches, respectively.

4.1. Competitive data-fusion approach

The competitive approach is relatively simple to formalize using the notation introduced in Section 3.

Let A be a 3N × 3 matrix defined as A = [I_3,3 I_3,3 ... I_3,3]^T, in which I_3,3 is the 3 × 3 identity matrix. Defining b as b = [X̂^T_P1 X̂^T_P2 ... X̂^T_PN]^T, the estimate of the position of P (X̂_P,COMP) can be obtained by inverting the relationship:

$$A\,\mathbf{X}_{P,COMP} = \mathbf{b}. \quad (9)$$

Since this system is overdetermined (3N equations in 3 unknown parameters, i.e. the 3 coordinates of X_P,COMP), there are several possible solution approaches, ranging from those based on the iterative minimization of a suitable error function [8] to those based on the Least Squares method [22]. For this purpose, the most elegant and practical approach is probably that of the Generalized Least Squares (GLS) method [23], in which a weight matrix (W), which takes into account the uncertainty associated with the equations of the system, is defined. One of the most practical ways to define this matrix is the application of the Multivariate Law of Propagation of Uncertainty (MLPU) to the random variable in Eq. (9), i.e. b.

Fig. 5. Schematic representation of the two possible data-fusion approaches, i.e. (a) competitive and (b) cooperative approach.

By applying the GLS method to the system in Eq. (9), we obtain the position estimate of P as:

$$\hat{\mathbf{X}}_{P,COMP} = (A^T W A)^{-1} A^T W \mathbf{b}, \quad (10)$$

where W is the weighting matrix that – under the hypothesis of independence between the measurements provided by different systems – is defined as:

$$W = \begin{bmatrix} \Sigma_{\hat{X}_{P_1}} & 0 & \cdots & 0 \\ 0 & \Sigma_{\hat{X}_{P_2}} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \Sigma_{\hat{X}_{P_N}} \end{bmatrix}^{-1} \quad (11)$$

and 0 is the 3 × 3 null matrix.

Please notice that Eq. (10) is the most compact form to express the inversion of Eq. (9); however, it is not the most convenient way to calculate X̂_P,COMP. For instance, rather than considering the inverse of W, one could take the inverse of each Σ_X̂Pi, since W is a block diagonal matrix. Similar considerations hold for Eqs. (12) and (36).

4.2. Cooperative data-fusion approach

The problem of cooperative data fusion is more complicated. As for the competitive approach, an estimate of the position of P (X̂_P,COOP) can be obtained by inverting the linear system:

$$C\,\mathbf{X}_{P,COOP} = \mathbf{q}, \quad (12)$$

where C and q are respectively the design matrix and the observation vector, which may depend on the specific LVM systems and the relevant sensors in use. Precisely, the design matrix and the observation vector have the following form:

$$C = \begin{bmatrix} C_{11} \\ \vdots \\ C_{1M_1} \\ \vdots \\ C_{N1} \\ \vdots \\ C_{NM_N} \end{bmatrix}, \qquad \mathbf{q} = \begin{bmatrix} \mathbf{q}_{11} \\ \vdots \\ \mathbf{q}_{1M_1} \\ \vdots \\ \mathbf{q}_{N1} \\ \vdots \\ \mathbf{q}_{NM_N} \end{bmatrix}. \quad (13)$$

The following sections discuss the definition of the generic C_ij matrix and q_ij vector, depending on the sensor type, and show a practical solution to the problem in Eq. (12).

4.2.1. Distance sensors (multilateration)

When the jth sensor of the ith system is a distance sensor, C_ij and q_ij are defined as follows.

Each jth sensor of the ith system is able to estimate the distance from its position (X_0ij) to P (X_P). Consistently with the notation introduced in Section 3, and neglecting the sensors' local measurement errors, this distance estimate can be expressed as:

$$\hat{d}_{ij} = \|\mathbf{X}_P - \mathbf{X}_{0_{ij}}\| = \sqrt{(X - X_{0_{ij}})^2 + (Y - Y_{0_{ij}})^2 + (Z - Z_{0_{ij}})^2}. \quad (14)$$

Squaring both terms, we obtain:

$$(X - X_{0_{ij}})^2 + (Y - Y_{0_{ij}})^2 + (Z - Z_{0_{ij}})^2 - \hat{d}_{ij}^2 = 0. \quad (15)$$

The function in Eq. (15) can be linearized by means of its first-order Taylor series expansion around a generic point X_A = [X_A, Y_A, Z_A]^T. Therefore, a linear approximation of Eq. (15) is given by the sum of (i) Eq. (15) evaluated in X_A and (ii) the gradient of Eq. (15) with respect to X evaluated in X_A, multiplied by (X − X_A):

$$(X_A - X_{0_{ij}})^2 + (Y_A - Y_{0_{ij}})^2 + (Z_A - Z_{0_{ij}})^2 - \hat{d}_{ij}^2 + 2\begin{bmatrix} X_A - X_{0_{ij}} \\ Y_A - Y_{0_{ij}} \\ Z_A - Z_{0_{ij}} \end{bmatrix} \cdot \begin{bmatrix} X - X_A \\ Y - Y_A \\ Z - Z_A \end{bmatrix} \approx 0. \quad (16)$$

X_A acts as a first approximation of X_P. The approximation in Eq. (16) gets better and better the closer X_A is to X_P.

In matrix form, Eq. (16) becomes:

$$C_{ij}\,\mathbf{X} = q_{ij}, \quad (17)$$

where

$$C_{ij} = 2\,[X_A - X_{0_{ij}} \;\; Y_A - Y_{0_{ij}} \;\; Z_A - Z_{0_{ij}}], \qquad q_{ij} = \hat{d}_{ij}^2 + X_A^2 - X_{0_{ij}}^2 + Y_A^2 - Y_{0_{ij}}^2 + Z_A^2 - Z_{0_{ij}}^2. \quad (18)$$

Since Eq. (17) is a linearization of Eq. (15), only an approximate value of X_P can be obtained by inverting Eq. (17). An iterative solution of Eq. (17) can be obtained according to the following procedure:

1. Define a preliminary value of X_A (close to the true value of X_P)
2. Solve for X_P by inverting Eq. (17)
3. If X_P is reasonably close to X_A then
4.   Go to Step 9
5. Else
6.   Set X_A as the obtained approximation of X_P
7.   Go to Step 2
8. End if
9. End
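A minimal Python/NumPy sketch of this procedure (illustrative only; the function name and the convergence threshold are assumptions):

```python
import numpy as np

def multilaterate(sensor_positions, distances, x_start, tol=1e-9, max_iter=100):
    """Iterative linearized multilateration, Eqs. (17)-(18) and Steps 1-9.
    sensor_positions: (M, 3) array of X_0ij; distances: (M,) array of d_ij."""
    P0 = np.asarray(sensor_positions, dtype=float)
    d = np.asarray(distances, dtype=float)
    X_A = np.asarray(x_start, dtype=float)              # Step 1
    for _ in range(max_iter):
        C = 2.0 * (X_A - P0)                            # rows C_ij of Eq. (18)
        q = d**2 + np.sum(X_A**2) - np.sum(P0**2, axis=1)  # q_ij of Eq. (18)
        X_P = np.linalg.lstsq(C, q, rcond=None)[0]      # Step 2 (least squares)
        if np.linalg.norm(X_P - X_A) < tol:             # Step 3
            return X_P                                  # Step 9
        X_A = X_P                                       # Steps 6-7
    return X_A
```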

4.2.2. Angular sensors (triangulation)

If the jth sensor of the ith system is an angular sensor, C_ij and q_ij are defined as follows.

From the local perspective of each sensor, two angles – i.e. θ_ij (azimuth) and φ_ij (elevation) – are subtended by the line passing through P and a local observation point, which we assume as coincident with the origin o_ij ≡ [0, 0, 0]^T of the local coordinate system (see Fig. 6).

Fig. 6. For a generic network device (Di), two angles – i.e. θ_ij (azimuth) and φ_ij (elevation) – are subtended by a line joining the point P (to be localized) and the origin o_ij of the local coordinate system o_ij-x_ij y_ij z_ij.

Precisely, φ_ij describes the inclination of the segment o_ijP with respect to the plane x_ij-y_ij (with a positive sign when z_ij > 0), while θ_ij describes the counterclockwise rotation of the projection (o_ijP′) of o_ijP on the x_ij-y_ij plane, with respect to the x_ij axis. If we consider the angular measurements (θ̂_ij, φ̂_ij) and neglect measurement errors, the following relationships hold for each local coordinate system:

$$\begin{cases} \tan\hat{\theta}_{ij} = \dfrac{y_{ij}}{x_{ij}}, & \begin{cases} \text{if } x_{ij} \ge 0 \text{ then } -\frac{\pi}{2} \le \hat{\theta}_{ij} \le \frac{\pi}{2} \\ \text{if } x_{ij} < 0 \text{ then } \frac{\pi}{2} < \hat{\theta}_{ij} < \frac{3\pi}{2} \end{cases} \\[2ex] \sin\hat{\varphi}_{ij} = \dfrac{z_{ij}\cos\hat{\theta}_{ij}\cos\hat{\varphi}_{ij}}{x_{ij}}, & -\frac{\pi}{2} \le \hat{\varphi}_{ij} \le \frac{\pi}{2} \end{cases} \quad (19)$$

Eq. (19) can be reformulated as:

$$\begin{bmatrix} \sin\hat{\theta}_{ij} & -\cos\hat{\theta}_{ij} & 0 \\ \sin\hat{\varphi}_{ij} & 0 & -\cos\hat{\theta}_{ij}\cos\hat{\varphi}_{ij} \end{bmatrix} \mathbf{x}_{ij} = \mathbf{0}. \quad (20)$$

Inverting Eq. (5), we have:

$$R_{ij}^T\,\mathbf{X} = \mathbf{x}_{ij} + R_{ij}^T\,\mathbf{X}_{0_{ij}}. \quad (21)$$

Combining Eqs. (20) and (21), one obtains:

$$\begin{bmatrix} \sin\hat{\theta}_{ij} & -\cos\hat{\theta}_{ij} & 0 \\ \sin\hat{\varphi}_{ij} & 0 & -\cos\hat{\theta}_{ij}\cos\hat{\varphi}_{ij} \end{bmatrix} R_{ij}^T\,\mathbf{X} = \begin{bmatrix} \sin\hat{\theta}_{ij} & -\cos\hat{\theta}_{ij} & 0 \\ \sin\hat{\varphi}_{ij} & 0 & -\cos\hat{\theta}_{ij}\cos\hat{\varphi}_{ij} \end{bmatrix} R_{ij}^T\,\mathbf{X}_{0_{ij}}, \quad (22)$$

which can be written as:

$$C_{ij}\,\mathbf{X} = \mathbf{q}_{ij}, \quad (23)$$

where q_ij = C_ij · X_0ij and

$$C_{ij} = \begin{bmatrix} \sin\hat{\theta}_{ij} & -\cos\hat{\theta}_{ij} & 0 \\ \sin\hat{\varphi}_{ij} & 0 & -\cos\hat{\theta}_{ij}\cos\hat{\varphi}_{ij} \end{bmatrix} R_{ij}^T. \quad (24)$$
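For illustration (a sketch with assumed names, not from the paper), the two rows contributed by an angular sensor can be assembled directly from Eqs. (23) and (24):

```python
import numpy as np

def angular_sensor_rows(theta, phi, R, X0):
    """C_ij and q_ij for an angular sensor, Eqs. (23)-(24): theta and phi
    are the measured azimuth and elevation (radians), R the sensor's
    rotation matrix (Eq. (6)) and X0 its position in the global frame."""
    M = np.array([
        [np.sin(theta), -np.cos(theta), 0.0],
        [np.sin(phi), 0.0, -np.cos(theta) * np.cos(phi)],
    ])
    C = M @ R.T                          # Eq. (24)
    q = C @ np.asarray(X0, dtype=float)  # Eq. (23): q_ij = C_ij X_0ij
    return C, q
```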

4.2.3. Hybrid approach

When a generic sensor is able to perform both distance and angular measurements (i.e. providing d̂_ij, θ̂_ij and φ̂_ij), the problem is relatively simple. Neglecting measurement errors, the following relationships hold:

$$\begin{cases} x_{ij} = \hat{d}_{ij}\cos\hat{\varphi}_{ij}\cos\hat{\theta}_{ij} \\ y_{ij} = \hat{d}_{ij}\cos\hat{\varphi}_{ij}\sin\hat{\theta}_{ij} \\ z_{ij} = \hat{d}_{ij}\sin\hat{\varphi}_{ij} \end{cases} \quad (25)$$

Combining Eqs. (5) and (25), we obtain:

$$R_{ij}^T\,\mathbf{X} - R_{ij}^T\,\mathbf{X}_{0_{ij}} = [\hat{d}_{ij}\cos\hat{\varphi}_{ij}\cos\hat{\theta}_{ij} \;\; \hat{d}_{ij}\cos\hat{\varphi}_{ij}\sin\hat{\theta}_{ij} \;\; \hat{d}_{ij}\sin\hat{\varphi}_{ij}]^T. \quad (26)$$

Eq. (26) can be reformulated as:

$$C_{ij}\,\mathbf{X} = \mathbf{q}_{ij}, \quad (27)$$

where C_ij = R^T_ij, q_ij = t_ij + R^T_ij X_0ij and t_ij = [d̂_ij cos φ̂_ij cos θ̂_ij, d̂_ij cos φ̂_ij sin θ̂_ij, d̂_ij sin φ̂_ij]^T.
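Analogously, a hybrid sensor contributes three rows; a sketch under the same assumptions:

```python
import numpy as np

def hybrid_sensor_rows(d, theta, phi, R, X0):
    """C_ij and q_ij for a hybrid sensor, Eq. (27): t_ij is the local
    Cartesian position of P reconstructed from the spherical
    measurements via Eq. (25)."""
    t = d * np.array([np.cos(phi) * np.cos(theta),
                      np.cos(phi) * np.sin(theta),
                      np.sin(phi)])
    C = R.T
    q = t + R.T @ np.asarray(X0, dtype=float)
    return C, q
```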

4.2.4. Weighting and solution

In order to solve Eq. (12) for X̂_P,COOP, a GLS approach can be adopted. For this purpose, it is convenient to define a weight matrix that is inversely proportional to the variability of the error vector u:

$$\mathbf{u} = C\,\hat{\mathbf{X}}_{P,COOP} - \mathbf{q}. \quad (28)$$

Notice that u is not only a function of X̂_P,COOP, but also depends on the sensors' local measurements and intrinsic parameters (i.e. X_0ij, ω_ij, φ_ij and κ_ij).

For simplicity, we neglect here the uncertainty in the estimation of the intrinsic parameters, which is generally obtained by a suitable calibration process [24]. Defining the overall measurement vector as:

$$S = [S_{11}^T \; \cdots \; S_{1M_1}^T \; \cdots \; S_{N1}^T \; \cdots \; S_{NM_N}^T]^T, \quad (29)$$

the GLS weighting matrix (Ω) can be obtained by applying the MLPU to u [25]:

$$\Omega = \left(J_{\mathbf{u}(S)}\,\mathrm{cov}(S)\,J_{\mathbf{u}(S)}^T\right)^{-1}, \quad (30)$$

where J_u(S) is the Jacobian matrix of u with respect to S. Assuming the sensors to be independent of each other, Ω will be a block diagonal matrix:

$$\Omega = \mathrm{diag}\left(\Omega_{11}, \ldots, \Omega_{1M_1}, \ldots, \Omega_{N1}, \ldots, \Omega_{NM_N}\right), \quad (31)$$

where the generic sub-matrix Ω_ij depends on the type of sensor. Precisely, if S_ij = [d̂_ij], Ω_ij is given by:

$$\Omega_{ij} = \left(J_{ij}\,\mathrm{cov}(S_{ij})\,J_{ij}^T\right)^{-1}, \quad (32)$$

being

$$J_{ij} = (2d_{ij}), \qquad \mathrm{cov}(S_{ij}) = \sigma_{d_{ij}}^2. \quad (33)$$

If S_ij = [θ̂_ij, φ̂_ij], Ω_ij is again given by Eq. (32), being

$$J_{ij} = \begin{bmatrix} x_{ij}\cos\theta_{ij} + y_{ij}\sin\theta_{ij} & 0 \\ z_{ij}\sin\theta_{ij}\cos\varphi_{ij} & x_{ij}\cos\varphi_{ij} + z_{ij}\cos\theta_{ij}\sin\varphi_{ij} \end{bmatrix}, \qquad \mathrm{cov}(S_{ij}) = \begin{bmatrix} \sigma_{\theta_{ij}}^2 & 0 \\ 0 & \sigma_{\varphi_{ij}}^2 \end{bmatrix}. \quad (34)$$

If S_ij = [d̂_ij, θ̂_ij, φ̂_ij], Ω_ij is given by Eq. (32), being

$$J_{ij} = \begin{bmatrix} \cos\varphi_{ij}\cos\theta_{ij} & -d_{ij}\cos\varphi_{ij}\sin\theta_{ij} & -d_{ij}\sin\varphi_{ij}\cos\theta_{ij} \\ \cos\varphi_{ij}\sin\theta_{ij} & d_{ij}\cos\varphi_{ij}\cos\theta_{ij} & -d_{ij}\sin\varphi_{ij}\sin\theta_{ij} \\ \sin\varphi_{ij} & 0 & d_{ij}\cos\varphi_{ij} \end{bmatrix}, \qquad \mathrm{cov}(S_{ij}) = \begin{bmatrix} \sigma_{d_{ij}}^2 & 0 & 0 \\ 0 & \sigma_{\theta_{ij}}^2 & 0 \\ 0 & 0 & \sigma_{\varphi_{ij}}^2 \end{bmatrix}. \quad (35)$$

Having defined Ω, Eq. (12) can be inverted as:

$$\hat{\mathbf{X}}_{P,COOP} = (C^T \Omega\, C)^{-1} C^T \Omega\, \mathbf{q}. \quad (36)$$

Note that cov(S_ij) is diagonal, since the sensors are assumed to be independent of each other. This is a reasonable assumption, since the sensors operate independently during measuring operations. However, external factors – such as accidental vibrations, thermal gradients, etc. – may cause correlation between the measurements of separate sensors. In this case, the assumption can be relaxed simply by inserting the non-diagonal elements in the matrix, if they can be estimated.
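A sketch of the overall solution step (illustrative, with assumed names; it exploits the block-diagonal structure of Ω by accumulating per-sensor contributions to the normal equations rather than forming Ω explicitly):

```python
import numpy as np

def cooperative_fusion(C_blocks, q_blocks, Omega_blocks):
    """GLS solution of Eq. (36): X = (C' Omega C)^-1 C' Omega q, where the
    per-sensor blocks come from Eq. (18), (24) or (27) and the weights
    Omega_ij from Eq. (32)."""
    CtWC = np.zeros((3, 3))
    CtWq = np.zeros(3)
    for C, q, W in zip(C_blocks, q_blocks, Omega_blocks):
        C = np.atleast_2d(np.asarray(C, dtype=float))
        q = np.atleast_1d(np.asarray(q, dtype=float))
        W = np.atleast_2d(np.asarray(W, dtype=float))
        CtWC += C.T @ W @ C
        CtWq += C.T @ W @ q
    return np.linalg.solve(CtWC, CtWq)
```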

5. Test bench comparison

The purpose of this section is to compare, by simulation, the two fusion approaches on two practical case studies: (i) the combination of a set of LTs (i.e. systems equipped with hybrid sensors) and (ii) the combination of a measuring system based on triangulation (i.e. equipped with angular sensors) with one based on multilateration (i.e. equipped with distance sensors). Also, the results of some preliminary experiments are presented.

5.1. Combination of LTs

In this case study, we simulate an industrial-like environment where a set of LTs is used to measure the position of several points of interest. In practice, LTs are generally positioned at a sufficiently large distance from the points to be measured, in order to "cover" the largest possible measurement volume. Moreover, they are sturdily placed on isostatic tripods, which can be adjusted in height.

In the simulations, the LTs are distributed in an area of 20 m × 20 m, at a variable height between 1 and 2 m, and with their vertical axis orthogonal to the floor plan. See Fig. 7 for an example.

The number of LTs is varied from 2 to 10, and 30 different layouts are generated for each case; 9 × 30 = 270 total layouts are generated. For each of these layouts, we generate 1000 points uniformly distributed in the measurement volume.

For each point and each jth sensor, distance and angular measurements (d̂_ij, θ̂_ij, φ̂_ij) are simulated by adding a zero-mean Gaussian noise to the relevant "true" values (i.e. d_ij, θ_ij and φ_ij); these "true" values are exactly calculated knowing the relative positions of the sensors with respect to the point of interest (angles in degrees and distances in mm):

$$\begin{cases} \hat{d}_{ij} = d_{ij} + \varepsilon_d, & \varepsilon_d \sim N(0, \sigma_d = 2.5 \cdot d_{ij} \cdot 10^{-3}) \\ \hat{\theta}_{ij} = \theta_{ij} + \varepsilon_\theta, & \varepsilon_\theta \sim N(0, \sigma_\theta = 8 \cdot 10^{-4}) \\ \hat{\varphi}_{ij} = \varphi_{ij} + \varepsilon_\varphi, & \varepsilon_\varphi \sim N(0, \sigma_\varphi = 8 \cdot 10^{-4}) \end{cases} \quad (37)$$

The above measurements are assumed to be uncorrelated, and the standard deviations of the Gaussian noise are consistent with the typical uncertainties in distance and angular measurements by LTs, as reported in the scientific literature [7].

The position of the points of interest is estimated through the competitive and cooperative approaches.

In order to apply the cooperative approach, the measurement uncertainties of the sensors (in the form of σ_d, σ_θ and σ_φ) are assumed to be known. Regarding the competitive approach, the coordinate measurement uncertainty (Σ_X̂Pi) was estimated using a Monte Carlo simulation with 1000 replications for each point.
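As an illustration of the noise model of Eq. (37) (a sketch; the generator seed and function name are assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_lt_measurement(d, theta, phi):
    """Add zero-mean Gaussian noise per Eq. (37): sigma_d = 2.5e-3 * d
    (distances in mm) and sigma = 8e-4 on each angle (angles in degrees)."""
    d_hat = d + rng.normal(0.0, 2.5e-3 * d)
    theta_hat = theta + rng.normal(0.0, 8e-4)
    phi_hat = phi + rng.normal(0.0, 8e-4)
    return d_hat, theta_hat, phi_hat
```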

Fig. 7. Example of simulated layout with 10 LTs. Circles and dotted lines represent respectively the positions and axis orientations of the LTs.

The error in the localization is analyzed by comparing the simulated localization (X̂_P) with the nominal positions (X). For this purpose, the absolute localization error is defined as:

$$e = \|\mathbf{X} - \hat{\mathbf{X}}_P\|. \quad (38)$$

Results are summarized in Fig. 8 and Table 1. In detail, Fig. 8 shows multiple boxplots of the localization error (e) against the number of LTs in use. As expected, increasing the number of LTs results in a reduction of both the localization error and its uncertainty.

Fig. 8. Boxplot of the localization error against the number of LTs in use: (a) competitive and (b) cooperative fusion approach. On each box, the central mark is the median, the edges of the box are the 25th and 75th percentiles, the whiskers extend to the most extreme data points which are not considered as outliers; outliers are plotted individually.

Table 1. Mean value, 25th and 75th percentiles (Q1 and Q3) of the localization error distribution. Comparison between cooperative and competitive fusion approaches (values in mm).

No. of LTs | Cooperative: Mean, Q1, Q3 | Competitive: Mean, Q1, Q3
2  | 0.0438, 0.0219, 0.0771 | 0.0438, 0.0222, 0.0783
3  | 0.0173, 0.0082, 0.0352 | 0.0176, 0.0082, 0.0348
4  | 0.0098, 0.0051, 0.0204 | 0.0099, 0.0052, 0.0208
5  | 0.0087, 0.0045, 0.0162 | 0.0087, 0.0045, 0.0162
6  | 0.0063, 0.0035, 0.0121 | 0.0064, 0.0035, 0.0122
7  | 0.0063, 0.0033, 0.0120 | 0.0062, 0.0033, 0.0120
8  | 0.0053, 0.0030, 0.0093 | 0.0053, 0.0030, 0.0094
9  | 0.0052, 0.0029, 0.0102 | 0.0052, 0.0029, 0.0104
10 | 0.0044, 0.0025, 0.0087 | 0.0044, 0.0025, 0.0088

Table 1 provides a quantitative synthesis of the simulation results. The analysis of the mean value and of the 25th and 75th percentiles of the localization error shows non-significant differences between the two approaches. The great similarity between the results obtained through the two approaches is probably due to the fact that, for each LT, the position of P is determined uniquely, with no redundant information (i.e. three local measurements correspond to the three unknown coordinates of P).

5.2. Combination of triangulation and multilateration systems

The second case study analyses the combination of two distributed LVM systems, based on triangulation and multilateration respectively. For example, this can be the case of the combination of a photogrammetric system (based on triangulation) with a set of interferometers (based on multilateration).

In the simulations, sensors are uniformly distributed in an area of 20 m × 20 m, at a variable height between 2 m and 3 m (for the triangulation system) and between 1 m and 2 m (for the multilateration system) (see Fig. 9). Both triangulation and multilateration sensors are oriented with their vertical axis orthogonal to the floor plan.

The number of sensors of each system is varied from 4 to 10, generating 30 different layouts for each case. In total, 7 × 7 × 30 = 1470 different layouts are generated. For each layout, 1000 points uniformly distributed over the measurement volume are generated.
Fig. 9. Example of simulated layout with 10 triangulation (circles) and 10 multilateration (triangles) sensors. Dotted lines represent the sensors' orientation axes.

Distance measurements (d̂_ij) are simulated by adding a Gaussian noise to the true value (i.e. d_ij), exactly calculated according to the relative positions between sensors and points (distances in mm):

$$\hat{d}_{ij} = d_{ij} + \varepsilon_d, \quad \varepsilon_d \sim N(0, \sigma_d = 2.5 \cdot d_{ij} \cdot 10^{-3}). \quad (39)$$

Azimuth and elevation measurements (θ̂_ij, φ̂_ij) are simulated by adding a zero-mean Gaussian noise to the relevant true values (i.e. θ_ij and φ_ij), which are calculated knowing the position of the sensors with respect to the points of interest (angles in degrees):

$$\begin{cases} \hat{\theta}_{ij} = \theta_{ij} + \varepsilon_\theta, & \varepsilon_\theta \sim N(0, \sigma_\theta = 3 \cdot 10^{-3}) \\ \hat{\varphi}_{ij} = \varphi_{ij} + \varepsilon_\varphi, & \varepsilon_\varphi \sim N(0, \sigma_\varphi = 3 \cdot 10^{-3}) \end{cases} \quad (40)$$

The above measurements are assumed to be uncorrelated, and the standard deviations of the Gaussian noise are consistent with the typical uncertainties in angular measurements by photogrammetric cameras, as reported in the scientific literature [8].

The position of the points of interest is estimated through the competitive and cooperative approaches. In order to apply the cooperative approach, the measurement uncertainties of the sensors (in the form of σ_d, σ_θ and σ_φ) are assumed to be known. Regarding the competitive approach, the standard deviations relating to the position of P are estimated through a Monte Carlo simulation with 1000 replications for each point.

As for the simulations described in Section 5.1, results are analyzed by comparing the result of the simulated localization (X̂_P) with the "true" nominal positions (X), by means of the absolute localization error (see Eq. (38)). Results are summarized in Table 2, which shows the average value of the localization error (e) for the different layouts tested. Table A.1 (in Appendix) is the extended version of Table 2, including the 25th and 75th percentiles of the error distribution.

Table 2. Mean value of the localization error (mm) for the different layouts tested. The results obtained by competitive ("Comp.") and cooperative ("Coop.") approaches are reported. Table A.1 (in Appendix) is the extended version of this table. Columns refer to the number of distance sensors; each cell reports Comp./Coop.

No. of angular sensors | 4 | 5 | 6 | 7 | 8 | 9 | 10
4  | 0.0192/0.0189 | 0.0152/0.0151 | 0.0111/0.0110 | 0.0096/0.0095 | 0.0085/0.0085 | 0.0077/0.0077 | 0.0064/0.0064
5  | 0.0189/0.0188 | 0.0129/0.0129 | 0.0106/0.0106 | 0.0095/0.0094 | 0.0088/0.0088 | 0.0073/0.0072 | 0.0073/0.0073
6  | 0.0199/0.0197 | 0.0141/0.0140 | 0.0121/0.0120 | 0.0096/0.0096 | 0.0086/0.0085 | 0.0075/0.0075 | 0.0069/0.0069
7  | 0.0209/0.0205 | 0.0139/0.0137 | 0.0119/0.0118 | 0.0097/0.0096 | 0.0084/0.0084 | 0.0073/0.0073 | 0.0067/0.0066
8  | 0.0186/0.0183 | 0.0136/0.0136 | 0.0123/0.0123 | 0.0095/0.0095 | 0.0082/0.0081 | 0.0075/0.0075 | 0.0069/0.0069
9  | 0.0189/0.0186 | 0.0151/0.0150 | 0.0111/0.0110 | 0.0090/0.0090 | 0.0083/0.0083 | 0.0077/0.0076 | 0.0070/0.0070
10 | 0.0184/0.0181 | 0.0129/0.0127 | 0.0101/0.0100 | 0.0088/0.0087 | 0.0080/0.0080 | 0.0080/0.0079 | 0.0064/0.0064

The analysis of these results suggests the following considerations:

• The localization uncertainty tends to decrease when increasing the number of sensors. In particular, there seems to be an asymptotically decreasing trend. As an example, Fig. 10 shows the results of the cooperative fusion when the number of angular sensors is fixed at 7.

• The cooperative approach is systematically slightly better than the competitive one in terms of localization uncertainty. This aspect is also evident in Table A.1 (in Appendix), where the interquartile range (i.e. the gap between the 25th and 75th percentiles) is systematically lower in the case of the cooperative approach.

Fig. 10. Boxplot of the localization error against the number of distance sensors, when applying the cooperative approach. On each box, the central mark is the median, the edges of the box are the 25th and 75th percentiles, the whiskers extend to the most extreme data points not considered as outliers.

5.3. Preliminary experiments

The two approaches have been implemented considering a specific combination of two LVM systems: (i) a photogrammetric system (PS), an OptiTrack V120-Trio™ [26] equipped with 38.1 mm reflective spherical markers, and (ii) a laser tracker (LT), an API Radian™ [27]. The experiments have been conducted in the laboratories of Microservice S.r.l., which also provided the LT.
system (PS) OptiTrack V120-TRIOTM [26] equipped with 38.1 mmreflective spherical markers, and (ii) a laser tracker (LT) APIRadianTM [27]. The experiments have been conducted in the lab-oratories of Microservice S.r.l., which also provided the LT.
The PS consists of a set of 3 cameras fixed on a linear frame (see Fig. 11), each of which is able to provide an azimuth (θ̂_PS) and elevation (φ̂_PS) measurement of the target point. Using these data, the PS is able to estimate the position of each measured point P [26].

The LT is equipped with an absolute distance meter (ADM) or a laser interferometer (measuring distances d̂_LT) and angular encoders (measuring azimuth θ̂_LT and elevation φ̂_LT angles). Using these data, the LT is able to automatically estimate the position of measured points [27].

When implementing the cooperative approach, the 3D position of each measured point is obtained using the local distance and angular measurements performed by the sensors equipping the two systems (i.e. θ̂_PS and φ̂_PS for each camera of the PS, and d̂_LT, θ̂_LT and φ̂_LT for the LT).

To obtain an optimal alignment between the coordinate systems of the LT and the PS, 18 points, randomly distributed in the measuring volume, have been measured by both LVM systems [2,10].

The layout of the two systems is reported in Fig. 11, which also shows the scale-bar used for the experiment. Distance values among the reference markers on the scale-bar were calibrated on a Coordinate Measuring Machine (CMM, DEA Iota 0101).

Fig. 11. Measurement layout (A, B and C are the reference points of the scale-bar).

The position measurement of the three points on the scale-bar has been replicated 30 times to obtain the results in Table 3. The mean value of the distances measured on the scale-bar and the related standard deviations are reported for all the possible sensor configurations. Results are compared with those obtained using (i) the LT and the PS individually and (ii) the competitive approach implemented by the SpatialAnalyzer® software [12].

Table 3. Distances measured on the scale-bar, using different sensor configurations (values reported in mm). LT: laser tracker; PS: photogrammetric system; 1C: central camera of the photogrammetric system; 2C: two lateral cameras of the photogrammetric system.

Config. | dA–B, σdA–B | dB–C, σdB–C | dA–C, σdA–C
Calibrated | 728.294, 0.003 | 727.703, 0.003 | 1455.996, 0.003
Individual systems:
  LT | 728.281, 0.029 | 727.653, 0.032 | 1455.930, 0.030
  PS | 728.330, 0.089 | 727.65, 0.12 | 1455.98, 0.11
Competitive:
  LT + PS | 728.257, 0.019 | 727.649, 0.019 | 1455.906, 0.016
Cooperative:
  LT + PS | 728.260, 0.018 | 727.648, 0.018 | 1455.907, 0.015
  LT + 2C | 728.248, 0.019 | 727.676, 0.021 | 1455.924, 0.019
  LT + 1C | 728.288, 0.022 | 727.609, 0.023 | 1455.896, 0.019

Table 3 suggests a few considerations: (i) the cooperative fusion improves the system performance when compared to the results of the single LVM systems, and (ii) the LT + PS sensor configuration leads to the same result when implementing either the cooperative or the competitive data fusion; however, differently from the competitive approach, the cooperative one can also be profitably applied to the LT + 2C and LT + 1C configurations.

6. Conclusions

This paper presents a structured comparison between the classical competitive approach and a new cooperative approach for fusing data obtained from different LVM systems. The latter approach uses angular and/or distance data, measured by the sensors equipping each LVM system, to compute the 3D position of the points of interest. The input data of this model are (i) the local measurements performed by the individual sensors (and the relevant uncertainties), and (ii) the calibration parameters of the individual sensors (i.e. data concerning their position/orientation).

The main advantages of the proposed approach, with respect to the competitive one, are: (i) it is the only option when the individual LVM systems are not able to provide autonomous position measurements (e.g., laser interferometers or single cameras); (ii) it is the only option when only some of the sensors of autonomous systems work correctly (for instance, a laser tracker in which only distance – not angular – measurements are performed); (iii) when using systems with redundant sensors (e.g., photogrammetric systems with a large number of distributed cameras), point localization tends to be better than with the competitive fusion approach.

The same considerations do not seem to hold when fusing data from LTs. In this particular case, the difference in performance between cooperative and competitive fusion is not statistically significant.

A limitation of the proposed approach is that it can be applied only to sensors performing angular and distance measurements with respect to the point of interest.

Future research is aimed at enriching the proposed method so as to take account of the uncertainties related to the sensor localization/orientation parameters that derive from the relative calibration process.
Appendix

Table A.1. Mean value, 25th and 75th percentiles (Q1 and Q3) of the localization error distribution (mm) for the different layouts tested, when combining a LVM system based on triangulation with one based on multilateration. The results obtained by competitive ("Comp.") and cooperative ("Coop.") fusion are reported. Each cell reports Avg., Q1, Q3; columns refer to the number of distance sensors (4 to 10, left to right).

No. of angular sensors = 4
  Comp.: 0.0192, 0.0046, 0.0217 | 0.0152, 0.0041, 0.0178 | 0.0111, 0.0033, 0.0126 | 0.0096, 0.0030, 0.0116 | 0.0085, 0.0028, 0.0102 | 0.0077, 0.0026, 0.0091 | 0.0064, 0.0023, 0.0074
  Coop.: 0.0189, 0.0046, 0.0216 | 0.0151, 0.0041, 0.0178 | 0.0110, 0.0034, 0.0125 | 0.0095, 0.0030, 0.0115 | 0.0085, 0.0028, 0.0102 | 0.0077, 0.0026, 0.0090 | 0.0064, 0.0023, 0.0075

No. of angular sensors = 5
  Comp.: 0.0189, 0.0048, 0.0234 | 0.0129, 0.0038, 0.0147 | 0.0106, 0.0035, 0.0131 | 0.0095, 0.0030, 0.0109 | 0.0088, 0.0028, 0.0106 | 0.0073, 0.0025, 0.0085 | 0.0073, 0.0025, 0.0089
  Coop.: 0.0188, 0.0048, 0.0229 | 0.0129, 0.0038, 0.0146 | 0.0106, 0.0036, 0.0130 | 0.0094, 0.0030, 0.0108 | 0.0088, 0.0028, 0.0106 | 0.0072, 0.0025, 0.0085 | 0.0073, 0.0025, 0.0089

No. of angular sensors = 6
  Comp.: 0.0199, 0.0050, 0.0238 | 0.0141, 0.0040, 0.0167 | 0.0121, 0.0035, 0.0143 | 0.0096, 0.0030, 0.0114 | 0.0086, 0.0028, 0.0104 | 0.0075, 0.0026, 0.0088 | 0.0069, 0.0024, 0.0082
  Coop.: 0.0197, 0.0050, 0.0236 | 0.0140, 0.0039, 0.0167 | 0.0120, 0.0035, 0.0143 | 0.0096, 0.0030, 0.0115 | 0.0085, 0.0028, 0.0103 | 0.0075, 0.0026, 0.0090 | 0.0069, 0.0024, 0.0081

No. of angular sensors = 7
  Comp.: 0.0209, 0.0051, 0.0251 | 0.0139, 0.0040, 0.0167 | 0.0119, 0.0035, 0.0138 | 0.0097, 0.0031, 0.0118 | 0.0084, 0.0028, 0.0100 | 0.0073, 0.0025, 0.0086 | 0.0067, 0.0024, 0.0080
  Coop.: 0.0205, 0.0050, 0.0247 | 0.0137, 0.0040, 0.0166 | 0.0118, 0.0035, 0.0138 | 0.0096, 0.0031, 0.0118 | 0.0084, 0.0028, 0.0099 | 0.0073, 0.0025, 0.0085 | 0.0066, 0.0024, 0.0079

No. of angular sensors = 8
  Comp.: 0.0186, 0.0048, 0.0226 | 0.0136, 0.0040, 0.0166 | 0.0123, 0.0036, 0.0150 | 0.0095, 0.0031, 0.0115 | 0.0082, 0.0027, 0.0096 | 0.0075, 0.0026, 0.0090 | 0.0069, 0.0024, 0.0084
  Coop.: 0.0183, 0.0048, 0.0224 | 0.0136, 0.0040, 0.0168 | 0.0123, 0.0036, 0.0151 | 0.0095, 0.0031, 0.0114 | 0.0081, 0.0027, 0.0096 | 0.0075, 0.0026, 0.0090 | 0.0069, 0.0024, 0.0083

No. of angular sensors = 9
  Comp.: 0.0189, 0.0051, 0.0236 | 0.0151, 0.0043, 0.0187 | 0.0111, 0.0034, 0.0132 | 0.0090, 0.0029, 0.0105 | 0.0083, 0.0028, 0.0097 | 0.0077, 0.0026, 0.0091 | 0.0070, 0.0024, 0.0082
  Coop.: 0.0186, 0.0050, 0.0234 | 0.0150, 0.0043, 0.0188 | 0.0110, 0.0034, 0.0130 | 0.0090, 0.0029, 0.0104 | 0.0083, 0.0028, 0.0096 | 0.0076, 0.0026, 0.0091 | 0.0070, 0.0024, 0.0082

No. of angular sensors = 10
  Comp.: 0.0184, 0.0049, 0.0221 | 0.0129, 0.0038, 0.0150 | 0.0101, 0.0033, 0.0119 | 0.0088, 0.0029, 0.0103 | 0.0080, 0.0027, 0.0095 | 0.0080, 0.0026, 0.0093 | 0.0064, 0.0023, 0.0077
  Coop.: 0.0181, 0.0049, 0.0221 | 0.0127, 0.0038, 0.0150 | 0.0100, 0.0033, 0.0118 | 0.0087, 0.0029, 0.0102 | 0.0080, 0.0027, 0.0096 | 0.0079, 0.0026, 0.0092 | 0.0064, 0.0023, 0.0077

References

[1] Crowley JL, Demazeau Y. Principles and techniques for sensor data fusion. Signal Process 1993;32(1–2):5–27.
[2] Haghighat MBA, Aghagolzadeh A, Seyedarabi H. Multi-focus image fusion for visual sensor networks in DCT domain. Comput Electr Eng 2011;37(5):789–97.
[3] Weckenmann A, Jiang X, Sommer KD, Neuschaefer-Rube U, Seewig J, Shaw L, et al. Multisensor data fusion in dimensional metrology. CIRP Ann – Manuf Technol 2009;58(2):701–21.
[4] Xiong N, Svensson P. Multi-sensor management for information fusion: issues and approaches. Inf Fusion 2002;3(2):163–86.
[5] Maisano DA, Jamshidi J, Franceschini F, Maropoulos PG, Mastrogiacomo L, Mileham AR, et al. A comparison of two distributed large-volume measurement systems: the mobile spatial co-ordinate measuring system and the indoor global positioning system. Proc Inst Mech Eng Part B: J Eng Manuf 2009;223(5):511–21.
[6] Maisano D, Mastrogiacomo L. A new methodology to design multi-sensor networks for distributed Large-Volume Metrology systems based on triangulation. Precision Eng 2015, http://dx.doi.org/10.1016/j.precisioneng.2015.07.001 (in press).
[7] Calkins JM, Salerno RJ. A practical method for evaluating measurement system uncertainty. In: Boeing Large Scale Metrology Conference; 2000.
[8] Franceschini F, Galetto M, Maisano D, Mastrogiacomo L, Pralio B. Distributed large-scale dimensional metrology. London: Springer-Verlag; 2011.
[9] Galetto M, Mastrogiacomo L. Corrective algorithms for measurement improvement in MScMS-II (mobile spatial coordinate measurement system). Precision Eng 2013;37(1):228–34.
[10] Durrant-Whyte HF. Sensor models and multisensor integration. Int J Robot Res 1988;7(6):97–113.
[11] Boudjemaa R, Forbes AB. Parameter estimation methods for data fusion. NPL report CMSC 38/04; 2004.
[12] New River Kinematics. SpatialAnalyzer®; 2015. Retrieved 13.01.15 from http://www.kinematics.com/spatialanalyzer/.
[13] Galetto M, Mastrogiacomo L, Maisano D, Franceschini F. Cooperative fusion of distributed multi-sensor LVM (Large Volume Metrology) systems. CIRP Ann – Manuf Technol 2015;64(1):483–6.
[14] Beckerman M, Sweeney FJ. Segmentation and cooperative fusion of laser radar image data. Proc SPIE – Int Soc Opt Eng 1994;2233:88–98.
[15] Ho AYK, Pong TC. Cooperative fusion of stereo and motion. Pattern Recognit 1996;29(1):121–30.
[16] Labayrade R, Royere C, Gruyer D, Aubert D. Cooperative fusion for multi-obstacles detection with use of stereovision and laser scanner. Auton Robots 2005;19(2):117–40.
[17] Lamallem A, Valet L, Coquin D. Local versus global evaluation of a cooperative fusion system for 3D image interpretation. In: ISOT 2009 – International Symposium on Optomechatronic Technologies. Istanbul: IEEE; 2009.
[18] Wen CY, Hsiao YC, Chan FK. Cooperative anchor-free position estimation for hierarchical wireless sensor networks. Sensors 2010;10(2):1176–215.
[19] Budzan S. Fusion of visual and range images for object extraction. Lect Notes Comput Sci 2014;8671:108–15.
[20] Luhmann T, Robson S, Kyle S, Harley I, editors. Close range photogrammetry. New York, NY, USA: John Wiley & Sons; 2006.
[21] Caja J, Gómez E, Maresca P. Optical measuring equipments. Part I: Calibration model and uncertainty estimation. Precision Eng 2015;40:298–304.
[22] Wolberg J. Data analysis using the method of least squares: extracting the most information from experiments. Berlin/Heidelberg: Springer-Verlag; 2005.
[23] Kariya T, Kurata H. Generalized least squares. Chichester, UK: John Wiley & Sons; 2004.
[24] Mastrogiacomo L, Maisano D. Network localization procedures for experimental evaluation of mobile spatial coordinate measuring system (MScMS). Int J Adv Manuf Technol 2010;48(9–12):859–70.
[25] BIPM, IFCC, IUPAC, ISO. Evaluation of measurement data – guide for the expression of uncertainty in measurement. JCGM 100:2008; 2008.
[26] NaturalPoint. OptiTrack V120:Trio; 2015. https://www.naturalpoint.com/optitrack/products/v120-trio/.
[27] API Automated Precision Inc. Radian™; 2015. Retrieved 13.01.15 from http://www.apisensor.com/index.php/products-en/laser-trackers-and-accessories-en/radian-en.