
RESEARCH ARTICLE

Nicholas T. Ouellette · Haitao Xu · Eberhard Bodenschatz

A quantitative study of three-dimensional Lagrangian particle tracking algorithms

Received: 17 December 2004 / Revised: 11 October 2005 / Accepted: 11 October 2005 / Published online: 15 November 2005 / © Springer-Verlag 2005

Abstract A neural network particle finding algorithm and a new four-frame predictive tracking algorithm are proposed for three-dimensional Lagrangian particle tracking (LPT). A quantitative comparison of these and other algorithms commonly used in three-dimensional LPT is presented. Weighted averaging, one-dimensional and two-dimensional Gaussian fitting, and the neural network scheme are considered for determining particle centers in digital camera images. When the signal to noise ratio is high, the one-dimensional Gaussian estimation scheme is shown to achieve a good combination of accuracy and efficiency, while the neural network approach provides greater accuracy when the images are noisy. The effect of camera placement on both the yield and accuracy of three-dimensional particle positions is investigated, and it is shown that at least one camera must be positioned at a large angle with respect to the other cameras to minimize errors. Finally, the problem of tracking particles in time is studied. The nearest neighbor algorithm is compared with a three-frame predictive algorithm and two four-frame algorithms. These four algorithms are applied to particle tracks generated by direct numerical simulation both with and without a method to resolve tracking conflicts. The new four-frame predictive algorithm with no conflict resolution is shown to give the best performance. Finally, the best algorithms are verified to work in a real experimental environment.

    1 Introduction

Over the past decade, Lagrangian particle tracking (LPT) has become widely used in experimental fluid dynamics. In an LPT experiment, the flow of interest is seeded with tracer particles that are then imaged to reconstruct the fluid motion. While the related Eulerian measurement technique of particle image velocimetry (PIV) commonly uses a very high seeding density and calculates average velocity vectors for clusters of particles based on the assumption that nearby particles move similarly (Adrian 1991), LPT uses a lower seeding density but finds individual, longer particle tracks that may be used to calculate both Eulerian and Lagrangian quantities. While PIV systems generally measure two-dimensional velocities, multiple cameras are often used in LPT systems, enabling the stereoscopic reconstruction of particle tracks in three dimensions (3D) (Maas et al. 1993; Malik et al. 1993). Such particle tracks may be used to calculate flow properties, including the Lagrangian statistics of turbulent flows (La Porta et al. 2001; Voth et al. 2002). Lagrangian data is of fundamental importance for understanding turbulent mixing and scalar dispersion as well as for setting the parameters of stochastic models of turbulence (Yeung 2002).

The construction of tracks in three-dimensional LPT may be separated into three tasks, which we will analyze individually. Each task is computationally challenging, and no optimal algorithms have yet been found. We present here a quantitative study under a wide range of conditions of many of the commonly used algorithms for three-dimensional LPT systems. We compare these algorithms with newly developed methods.

The first task of an LPT system is to process the images from the cameras to yield the positions of the tracer particles in the image space of each camera. Several algorithms for finding particles in images are explored in Sects. 2 and 3, and we introduce a neural network scheme that is shown to outperform the other algorithms in situations where the signal to noise ratio is

N. T. Ouellette (✉) · H. Xu · E. Bodenschatz
Laboratory of Atomic and Solid State Physics, Clark Hall, Cornell University, Ithaca, NY, USA
E-mail: [email protected]

E. Bodenschatz
Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany

Experiments in Fluids (2006) 40: 301–313
DOI 10.1007/s00348-005-0068-7


poor. After finding the two-dimensional particle positions, the second task is to reconstruct the three-dimensional coordinates, since each camera images only a two-dimensional projection of the measurement volume. The effects of camera placement on the stereo-matching process are discussed in Sect. 4. We here consider only the case of four cameras, which is considered to be a good number for three-dimensional LPT (Maas 1996). Finally, to create tracks the particles must be followed in time through the sequence of images. The tracking problem is discussed in Sect. 5, including a comparison of several algorithms from both the fluid dynamics and machine vision communities. In addition, it is shown in Sect. 5 that our new modification of a commonly used tracking algorithm can significantly improve its performance. We note that the stereoscopic matching and tracking steps may be interchanged so that particles are first tracked in two dimensions and then the tracks are matched together to create three-dimensional tracks. We, however, prefer to track in three dimensions rather than in two dimensions, since the particle seeding density is decreased by a factor of the added dimension, simplifying the tracking problem.

Finally, in Sect. 6, we demonstrate the validity of our LPT algorithms by measuring the statistics of Lagrangian velocity and acceleration in a high Reynolds number experimental flow and comparing them with previous well-known results.

    2 Particle finding problem

The ideal algorithm for determining the centers of particles in camera images must meet several criteria:

1. Sub-pixel accuracy: the fidelity of the calculated tracks to the actual particle trajectories in the flow is due in large part to the accuracy of the particle finding algorithm.

2. Speed: because of the very high data rate from cameras used to image a high Reynolds number flow, some consideration must be paid to the speed and efficiency of the algorithm.

3. Overlap handling: even for moderate particle seeding densities, there will be instances of particle images overlapping one another in the field of view of a single camera.

4. Robustness to noise: the cameras used in an LPT system will record noisy images.

Many algorithms have been proposed, including weighted averaging (Maas et al. 1993; Maas 1996; Doh et al. 2000), function fitting (Cowen and Monismith 1997; Mann et al. 1999), and template matching (Guezennec et al. 1994), as well as the standard PIV technique of image cross correlation (Adrian 1991; Westerweel 1993). In this section, we compare weighted averaging and both one-dimensional and two-dimensional Gaussian fitting. We also introduce a new particle finding algorithm using neural networks.

Before one can design a particle finding algorithm, one must make assumptions as to what constitutes a particle in an image. For all results presented in this article, we assume that every local maximum in intensity above a threshold represents a particle.

    2.1 Weighted averaging

Weighted averaging, often referred to as a center-of-mass calculation, is both simple and commonly used in LPT systems. In a typical weighted averaging scheme, the digital image from a camera is first segmented into groups of pixels representing single particles. The center (x_c, y_c) of the particle is then determined by averaging the positions of its component pixels weighted by their intensity gray values. If we let I(x, y) represent the pixel gray value at position (x, y), the horizontal coordinate of the particle center is given by

x_c = \frac{\sum_p x_p I(x_p, y_p)}{\sum_p I(x_p, y_p)},   (1)

where the sums run over all pixels in a particular group. The vertical coordinate is defined similarly.
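The calculation of Eq. 1 can be sketched as follows (a minimal Python illustration, not the authors' implementation; the input arrays are assumed to hold the coordinates and gray values of one segmented pixel group):

```python
import numpy as np

def weighted_centroid(xs, ys, intensities):
    """Center-of-mass estimate of Eq. 1: the intensity-weighted mean
    of the pixel coordinates in one segmented particle group."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    w = np.asarray(intensities, dtype=float)
    total = w.sum()
    return (xs * w).sum() / total, (ys * w).sum() / total

# A symmetric three-pixel group with a bright middle pixel at x = 1:
xc, yc = weighted_centroid([0, 1, 2], [0, 0, 0], [50, 200, 50])
```

Because the weights are symmetric about x = 1, the estimate falls exactly on the middle pixel; asymmetric gray values pull the center toward the brighter side.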

In this analysis, we used a weighted averaging scheme given by Maas and co-workers (Maas et al. 1993; Maas 1996) that attempts to handle overlapping particles. As above, we assume that every local intensity maximum represents a particle. Groups of contiguous pixels containing N local intensity maxima are assumed to contain N particles. In a preprocessing step, such groups of pixels are split into N subgroups, each containing only a single maximum. The pixel groups are split according to the assumption that the grayscale intensity of a particle image should continuously drop as the distance from the center increases. Local intensity minima are arbitrarily assigned to the pixel subgroup containing the minimum's brightest neighboring pixel.

This weighted averaging method is efficient and simple to implement computationally, as well as having some overlap handling capabilities. As will be demonstrated below, however, both the accuracy of the method and its performance on noisy images are poor compared to the others tested.

    2.2 Gaussian fitting

If the functional form of the intensity profile of a particle image were known, fitting this function to each particle image would result in a very accurate determination of the particle centers. In general, however, this function is not known. A common approach is thus to approximate this intensity profile by a Gaussian (Mann et al. 1999):

I(x, y) = \frac{I_0}{2\pi \sigma_x \sigma_y} \exp\left\{ -\frac{1}{2} \left[ \left( \frac{x - x_c}{\sigma_x} \right)^2 + \left( \frac{y - y_c}{\sigma_y} \right)^2 \right] \right\}.   (2)


To determine the particle centers in a group of pixels with N local maxima, N Gaussians are fit to the group.

This method is highly accurate and can handle overlapping particles. It is, however, computationally very expensive, requiring roughly a factor of four more computer time than the other methods studied. In addition, this method requires large particle images. Inspection of Eq. 2 reveals that each of these Gaussians requires the fitting of five parameters: I_0, σ_x, σ_y, x_c, and y_c. There must at minimum, then, be five pixels in every particle group, and many more than this to get an accurate fit. This problem is compounded for the case of overlapping particles, since a group with N local maxima must contain at least 5N pixels. Small particle images must therefore either be ignored or fit with some less accurate method.
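As a sketch of the two-dimensional fit for a single, isolated particle, the five parameters of Eq. 2 can be estimated with a generic nonlinear least-squares routine; the example below uses SciPy's `curve_fit` on a noiseless synthetic pixel group, with illustrative window size and starting guesses rather than the authors' settings:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, i0, xc, yc, sx, sy):
    """Intensity model of Eq. 2 with its five free parameters."""
    x, y = coords
    return (i0 / (2 * np.pi * sx * sy)) * np.exp(
        -0.5 * (((x - xc) / sx) ** 2 + ((y - yc) / sy) ** 2))

# Synthetic 7x7 pixel group sampled from a known Gaussian (illustrative):
y, x = np.mgrid[0:7, 0:7].astype(float)
true_params = (400.0, 3.2, 2.8, 0.8, 0.8)
data = gauss2d((x.ravel(), y.ravel()), *true_params)

# Fit all five parameters from a rough initial guess.
popt, _ = curve_fit(gauss2d, (x.ravel(), y.ravel()), data,
                    p0=(300.0, 3.0, 3.0, 1.0, 1.0))
```

With noiseless data and a reasonable initial guess the recovered center (popt[1], popt[2]) matches the true center closely; the cost noted in the text comes from repeating this iterative fit for every particle group.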

These issues can be mitigated by using an approximation to the full two-dimensional Gaussian fitting scheme. Instead of fitting a Gaussian to the full particle pixel group, one can fit two one-dimensional Gaussians (Cowen and Monismith 1997). One Gaussian will determine the horizontal position of the particle and the second will determine the vertical position. This method uses the local maximum pixel as well as the four points directly adjacent horizontally and vertically. We label the coordinates of the horizontal points x_1, x_2, and x_3, labeling from left to right. Solving the system of equations

I_i = \frac{I_0}{\sqrt{2\pi}\,\sigma} \exp\left[ -\frac{1}{2} \left( \frac{x_i - x_c}{\sigma} \right)^2 \right]   (3)

    for i=1, 2, 3 gives a horizontal particle coordinate of

x_c = \frac{1}{2} \, \frac{(x_1^2 - x_2^2)\,\ln(I_2/I_3) - (x_2^2 - x_3^2)\,\ln(I_1/I_2)}{(x_1 - x_2)\,\ln(I_2/I_3) - (x_2 - x_3)\,\ln(I_1/I_2)},   (4)

with the vertical position of the particle defined analogously.

This 1D Gaussian Estimator retains much of the accuracy of the full two-dimensional fitting scheme while requiring fewer data points. In addition, it can be made significantly more computationally efficient by noting that the arguments of the required logarithms in Eq. 4 are all pixel intensities. In a digital image, there is a finite set of possible pixel intensities. The required logarithms may then be precomputed, cutting the computational work down to merely a few multiplications.
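A minimal sketch of the 1D Gaussian Estimator with the precomputed logarithm table might look as follows (illustrative Python, assuming 8-bit gray values and pixels above threshold, so zero intensities never enter Eq. 4):

```python
import numpy as np

# Precompute the logarithm of every possible 8-bit gray value once;
# Eq. 4 then needs only table lookups and a few multiplications.
LOG_TABLE = np.full(256, -np.inf)
LOG_TABLE[1:] = np.log(np.arange(1, 256, dtype=float))

def gaussian_estimator_1d(x1, x2, x3, i1, i2, i3, log_table=LOG_TABLE):
    """Sub-pixel particle center from Eq. 4, given three horizontally
    (or vertically) adjacent pixel positions and their gray values."""
    ln23 = log_table[i2] - log_table[i3]  # ln(I2/I3)
    ln12 = log_table[i1] - log_table[i2]  # ln(I1/I2)
    num = (x1**2 - x2**2) * ln23 - (x2**2 - x3**2) * ln12
    den = (x1 - x2) * ln23 - (x2 - x3) * ln12
    return 0.5 * num / den

# Gray values quantized from a Gaussian profile centered near x = 1.3:
xc = gaussian_estimator_1d(0, 1, 2, 53, 186, 136)
```

For symmetric intensities (e.g. 50, 200, 50) the estimate falls exactly on the central pixel; for the quantized example above it recovers the off-center position to within the quantization error.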

    2.3 Neural networks

In addition to the two algorithms described above, we have also investigated the use of a neural network approach to particle finding. Neural networks have been used before in LPT to solve the tracking problem (Grant and Pan 1997; Chen and Chwang 2003; Labonte 1999, 2001) and to perform stereoscopic matching (Grant et al. 1998). Carosone et al. (1995) applied the Kohonen neural network to the problem of distinguishing the images of isolated particles from those of overlapping particles. Here, we used a new neural network scheme that solves the particle finding problem fully.

The neural network was shown square segments of images centered on local intensity maxima and trained to find single particles. This approach was used rather than showing the network full images since neural networks must have a standardized number of inputs and outputs. If the network were shown entire images, the number of outputs would vary since the number of particles in each image is in general unknown. Additionally, by showing the network standardized sections of images, overlap was trivially handled by training the network to find single particles near the center of each window of pixels.

The network used was a fully connected feed-forward network with an input layer of 81 neurons, corresponding to a 9×9 pixel window, a single hidden layer of 60 neurons, and an output layer of 2 neurons, corresponding to the horizontal and vertical position of the particle center. The 9×9 window was chosen so that the network was sure to see full particle images; changes to the size of this window should be relatively unimportant. The neurons were implemented using standard logistic activation functions and the network was trained using the standard backpropagation training algorithm augmented with a momentum term (Mitchell 1997).

While the training of the network was slow, it is a one-time cost. The subsequent computation of particle centers is a fast operation. As mentioned above, the network easily handles the problem of overlap. Additionally, as will be shown below, the network was very robust to noisy images, its performance degrading slowly as the noise level was increased.
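The network just described can be sketched in a few dozen lines. The implementation below follows the stated architecture (81-60-2, logistic units, backpropagation with a momentum term), but all other details (initialization, learning rate, target values) are illustrative assumptions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

class ParticleNet:
    """Fully connected 81-60-2 feed-forward network with logistic units,
    trained by backpropagation with momentum (layer sizes per the text;
    everything else is an illustrative choice)."""

    def __init__(self, n_in=81, n_hid=60, n_out=2):
        self.w1 = rng.normal(0, 0.1, (n_hid, n_in))
        self.b1 = np.zeros(n_hid)
        self.w2 = rng.normal(0, 0.1, (n_out, n_hid))
        self.b2 = np.zeros(n_out)
        self.v = [np.zeros_like(p) for p in (self.w1, self.b1, self.w2, self.b2)]

    def forward(self, x):
        self.h = logistic(self.w1 @ x + self.b1)
        self.y = logistic(self.w2 @ self.h + self.b2)
        return self.y

    def train_step(self, x, target, lr=0.1, momentum=0.9):
        y = self.forward(x)
        # Output and hidden deltas for squared error with logistic units.
        d_out = (y - target) * y * (1 - y)
        d_hid = (self.w2.T @ d_out) * self.h * (1 - self.h)
        grads = [np.outer(d_hid, x), d_hid, np.outer(d_out, self.h), d_out]
        for p, g, v in zip((self.w1, self.b1, self.w2, self.b2), grads, self.v):
            v *= momentum          # momentum term carries previous updates
            v -= lr * g
            p += v
        return float(np.sum((y - target) ** 2))

# Illustrative: memorize the (normalized, hypothetical) sub-pixel offsets
# for one synthetic 9x9 window.
x = rng.normal(size=81)
target = np.array([0.3, 0.7])
net = ParticleNet()
losses = [net.train_step(x, target) for _ in range(300)]
```

The logistic output layer means the targets must be encoded in (0, 1); mapping pixel offsets into that range is one of the unstated details we have assumed here.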

    3 Particle finding results

    3.1 Methodology

The four algorithms described above were tested using computer generated synthetic images. Images were created assuming in turn two different intensity profiles for the light scattered by the particles. In each case, the underlying intensity profile was discretized by integrating it over each pixel.

The first intensity profile used was Gaussian. While this choice of profile certainly introduces a bias towards the two Gaussian particle finding algorithms tested, the intensity profile in a real experimental system is well-approximated by a Gaussian (Westerweel 1993; Cowen and Monismith 1997). The second intensity profile used was the Airy pattern generated by diffraction from a circular aperture. This profile is given by (J_1(x)/x)^2, where J_1(x) is the cylindrical Bessel function of order 1 and x is the distance from the particle center. The ratio of the width of the Gaussian and the radius of the first peak of the Airy pattern was fixed at 0.524 in order to


equalize the energy in the 2D Airy pattern and the 2D Gaussian. While choosing the Airy pattern and Gaussian in this way is not typical for studies of tracer particle imaging (Westerweel 1993), equalizing the energy in the two profiles in this way ensures that they are very different, and so provide a good test of the algorithms on different types of images. For each intensity profile, particle images were generated by imposing a pixel grid on the profile and integrating over each pixel.
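For a Gaussian profile, the pixel integration just described is separable and has a closed form in terms of error functions; the sketch below (illustrative, not the authors' image-generation code) renders one pixel-integrated particle:

```python
import numpy as np
from scipy.special import erf

def integrated_gaussian(nx, ny, xc, yc, sigma, amplitude=1.0):
    """Render one particle on an ny-by-nx pixel grid by integrating a 2D
    Gaussian over each pixel.  Separability makes each pixel's integral a
    product of one-dimensional erf differences."""
    def axis_integral(n, c):
        edges = np.arange(n + 1) - 0.5  # pixel i spans [i - 0.5, i + 0.5]
        cdf = 0.5 * (1.0 + erf((edges - c) / (np.sqrt(2.0) * sigma)))
        return np.diff(cdf)             # mass falling in each pixel
    return amplitude * np.outer(axis_integral(ny, yc), axis_integral(nx, xc))

# One particle well inside a 15x15 window (illustrative parameters):
img = integrated_gaussian(15, 15, 7.3, 6.8, 0.8)
```

Because the particle sits many widths from the window edge, essentially all of the profile's energy lands on the grid, and the intensity-weighted centroid of the rendered image reproduces the true sub-pixel center.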

Several sets of images were generated for each intensity profile, varying in turn the particle seeding density, particle image size, noise variance, and dynamic range of intensity levels. The images generated were 256×256 pixels in size with each pixel intensity stored as an 8-bit grayscale value, for a total of 256 gray levels. The particle intensity maxima were drawn from a Gaussian distribution centered at a gray value of 180. Portions of sample images generated with both Gaussian and Airy pattern intensity profiles are shown in Fig. 1.

The neural network used was trained using examples of images with only Gaussian intensity profiles, but with varying particle density, particle size, mean noise, and noise fluctuations.

To determine the relative performance of the different image processing algorithms, errors were calculated by taking the difference between the found particle center and the actual center for both the horizontal and vertical coordinates. The distributions of these errors were symmetric with mean zero in all cases. The error distributions were also equivalent for both the vertical and horizontal components, showing no bias. In this section, we quantify the performance of the different algorithms based on the parameter D, defined by

\int_{-D/2}^{D/2} P(x)\,dx = 0.8,   (5)

where P(x) is the probability density function of the error as a function of distance from the true particle center. D gives an estimate of the pixel error associated with each test.
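Since the error distributions are symmetric about zero, Eq. 5 implies that D/2 is the 80th percentile of the absolute error, so D can be estimated from error samples with a single quantile. A sketch, with a normal-distribution sanity check as an assumed test case:

```python
import numpy as np

def error_width_D(errors, mass=0.8):
    """Width D of the symmetric interval [-D/2, D/2] containing `mass`
    of the error distribution (Eq. 5).  For a distribution symmetric
    about zero this is twice the `mass`-quantile of |error|."""
    return 2.0 * np.quantile(np.abs(errors), mass)

# Sanity check: for N(0, sigma), the symmetric interval holding 80% of
# the mass has half-width ~1.2816 sigma (the 0.9 quantile of N(0, 1)).
rng = np.random.default_rng(1)
d = error_width_D(rng.normal(0.0, 0.05, 200_000))
```

With 2×10^5 samples the estimate agrees with the analytic value 2 × 1.2816 × 0.05 ≈ 0.128 to well within sampling error.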

    3.2 Particle seeding density

Increasing the particle seeding density introduces a greater likelihood of overlap into the image data. Overlapping particles are inherently more difficult to handle than isolated particles, and so one would expect the accuracy of an algorithm to decrease in general as the seeding density increases.

The performance of the four algorithms compared is shown in Fig. 2. For each of the seeding densities, the mean width of the particle image intensity profile was fixed at a half-width of 0.8 pixels. No noise was added to the images in these data sets.

As expected, the error of all the algorithms increased moderately with the seeding density. The accuracy of the Gaussian methods and the neural network also decreased when Airy pattern intensity profiles were used. The error increases slowly with seeding density for all methods, suggesting that a density of 600 particles per image is acceptable for LPT. The percentage of particles successfully identified, however, decreases markedly as the seeding density increases, dropping as low as 68% for the weighted averaging method, as shown in Fig. 3. For this reason, we consider seeding densities of only 300 particles per 256×256 pixel image for the rest of this analysis, since all methods tested maintain a yield of over 90% for a seeding density of 300 particles.

    3.3 Particle image size

As with increasing the particle seeding density, increasing the size of particle images makes overlap more common. Larger particle images, however, also provide more information to the particle finding algorithm. The change in accuracy as a function of particle image size should therefore not be monotonic.

Fig. 1 Particle images at identical locations generated with a Gaussian intensity profiles and b Airy pattern intensity profiles


The performance of the four algorithms as a function of particle image size is shown in Fig. 4. For each value of the image size, the seeding density was fixed at 300 particles per image, and no noise was added to any images.

For the data sets using Gaussian intensity profiles, the error showed a minimum at an image half-width of 0.7–0.9 pixels for both Gaussian methods and for the neural network. The behavior of the weighted averaging algorithm is more varied. The peak in error at an image half-width of 0.7 may perhaps be explained by the introduction of overlap while at the same time having very small clusters of pixels representing particle images.

For the images generated with Airy pattern intensity profiles, the error reached a peak at an image half-width of 0.9–1.0 pixels. The decrease in error as the particles became larger than this for the Gaussian methods and the neural network may be traced to the fact that larger Airy patterns appear more Gaussian around their peaks. The weighted averaging method was observed to perform better on the Airy pattern intensity profiles than the Gaussian profiles, most likely because the extremely bright central peak of the Airy pattern weights the central pixel very heavily and is less affected by the outlying pixels of neighboring overlapping particles.

    3.4 Noise level

After investigating the effects of particle image size and seeding density, these parameters were fixed at an intensity profile half-width of 0.8 pixels and roughly 300 particles per image, and noise was added in steps to the entire image. We used additive Gaussian noise in our tests. While we did not explicitly model each of the many types of noise that may occur in an LPT system (including errors from the camera sensor, thermal noise, the detection of scattered light, and many others), the central limit theorem says that as the various types of errors inherent in such a system are compounded, the noise distribution will approach a Gaussian. In addition, while the Poissonian shot noise associated with individual photon detection may dominate in low-light situations, other noise sources dominate when the light level is high and the overall noise may be well-approximated by a Gaussian (Westerweel 2000). High light

Fig. 2 The error of the four algorithms as a function of the number of particles per 256×256 pixel image. a Generated from images with Gaussian intensity profiles. b Generated from images with Airy pattern intensity profiles. Symbols: weighted averaging, 2D Gaussian fit, 1D Gaussian Estimator, neural network

Fig. 3 The yield of the four algorithms as a function of the number of particles per 256×256 pixel image, defined as the percentage of particles successfully identified. a Generated from images with Gaussian intensity profiles. b Generated from images with Airy pattern intensity profiles. Symbols: weighted averaging, 2D Gaussian fit, 1D Gaussian Estimator, neural network


levels are common when powerful lasers are used to image the tracer particles.

The mean of the Gaussian noise was fixed at zero, and the standard deviation was changed. The effects of changing the noise standard deviation on the performance of the four algorithms are shown in Fig. 5.
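This noise model can be sketched as follows (additive Gaussian noise on an 8-bit image; the rounding and clipping back to [0, 255] are our assumptions, since the text does not specify how out-of-range values are handled):

```python
import numpy as np

def add_gaussian_noise(image, mean=0.0, std=5.6, rng=None):
    """Add Gaussian noise to an 8-bit grayscale image, rounding and
    clipping back to the valid [0, 255] range (clipping is assumed)."""
    if rng is None:
        rng = np.random.default_rng()
    noisy = image.astype(float) + rng.normal(mean, std, image.shape)
    return np.clip(np.rint(noisy), 0, 255).astype(np.uint8)

# Illustrative: mid-gray image with the paper's 2.5-bit noise level.
noisy = add_gaussian_noise(np.full((64, 64), 128, np.uint8),
                           std=5.6, rng=np.random.default_rng(0))
```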

As was expected, when the noise standard deviation was increased, the accuracy of all the methods decreased. It is important to note, however, that the performance of the neural network degraded more slowly than that of the other algorithms, beating the 1D Gaussian Estimator for high levels of the noise when Gaussian intensity profiles were used and coming close to both Gaussian methods when Airy pattern intensity profiles were used.

    3.5 Mean noise

In addition to changing the standard deviation of the noise, the mean noise was also varied. The noise standard deviation was fixed at 5.6, corresponding to 2.5 bits of noise, a typical value for CMOS cameras.

As above, the intensity profile half-width was fixed at 0.8 pixels with roughly 300 particles per image. The effects of changing the mean noise are shown in Fig. 6.

The weighted averaging method and both Gaussian methods showed sensitivity to the mean noise level, albeit less sensitivity than they did to changes in the noise standard deviation. The neural network, however, showed almost no change in accuracy as the mean noise was increased in images using Gaussian intensity profiles and only a slow decrease in accuracy when Airy pattern intensity profiles were used. Coupled with its slower degradation as the noise standard deviation was increased, the neural network method appears to be much more robust to noise than the other three methods analyzed.

    3.6 Summary

From the analysis in this section, we may draw several conclusions. Figures 4, 5, and 6 make it clear that the

Fig. 4 The error of the four algorithms as a function of particle size. a Generated from images with Gaussian intensity profiles. b Generated from images with Airy pattern intensity profiles. Symbols: weighted averaging, 2D Gaussian fit, 1D Gaussian Estimator, neural network

Fig. 5 The error of the four methods as a function of the standard deviation of the added white Gaussian noise. a Generated from images using Gaussian intensity profiles. b Generated from images using Airy pattern intensity profiles. Symbols: weighted averaging, 2D Gaussian fit, 1D Gaussian Estimator, neural network


weighted averaging algorithm is significantly less accurate in all cases than the other three algorithms tested. Its main advantages are speed and simplicity of implementation. The 1D Gaussian Estimator, however, may be implemented so that it runs with comparable, if not superior, efficiency, and is therefore preferable to the weighted averaging method. In addition, since the accuracy of the two Gaussian methods used is comparable and the 1D Gaussian Estimator is significantly more efficient to implement, it is preferable to the two-dimensional Gaussian fit.

While the neural network did not perform as well as the 1D Gaussian Estimator in the absence of noise, it was far more robust to changes in both the mean level and fluctuations in the noise. It is also important to note that the accuracy of the neural network was roughly the same when applied to the Gaussian intensity profile images and the Airy pattern images even though it had only been trained on Gaussian images. Therefore, when trained on synthetic images, the network can be used to find particles in real experimental images.

In summary, the 1D Gaussian Estimator should be the preferred particle finding algorithm when image noise is negligible, while a neural network approach is superior when noise levels are appreciable.

    4 Stereoscopic reconstruction

In three-dimensional LPT, images of the measurement volume from different view angles are recorded simultaneously, usually in different cameras. The positions of the cameras in the world coordinate system are determined by calibration. After image processing, as described in Sects. 2 and 3, the particle centers on the image planes of the cameras together with the known camera positions can be used to reconstruct the three-dimensional coordinates of the particles. Due to the lack of distinguishing features of the tracer particle images, the only constraint that can be used in stereoscopic matching is the fact that the lines of sight from different cameras to a particle must intersect at the particle center. Obviously, ambiguities will arise when the number of particles in the measurement volume increases. Dracos (1996) showed that for reasonable particle seeding density and optical setup, at least three cameras are needed in order to resolve matching ambiguity. For this work, we consider a system with four pinhole cameras and we focus on the effect of the arrangement of the cameras on the stereo-matching

Fig. 6 The error of the four algorithms as a function of the mean noise level. While the weighted averaging and both Gaussian methods lose accuracy as the mean noise increases, the neural network is essentially unaffected. a Generated from images using Gaussian intensity profiles. b Generated from images using Airy pattern intensity profiles. Symbols: weighted averaging, 2D Gaussian fit, 1D Gaussian Estimator, neural network

Fig. 7 Arrangement of camera axes


process. The four cameras are arranged in two vertical planes with angle θ between them. The two cameras within a vertical plane are symmetric about the horizontal plane and are separated by angle φ, as sketched in Fig. 7.

Our stereo-matching algorithm is a combination of those in Dracos (1996) and Mann et al. (1999) and will only be described briefly here. From a particle image on camera A, we can construct a line of sight passing through the particle image center and the perspective center of camera A. This line of sight is then projected onto the image plane of camera B. Any particle image on camera B that is within a tolerance ε of the projection of the line of sight is then added to a list. In total, we have six such lists, one for each combination of two cameras. These lists are then checked for consistency. In this work, we keep only the results that are consistent among all four cameras. The particle coordinates in the three-dimensional world are then computed from the least squares solution of the line of sight equations (Mann et al. 1999).
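A highly simplified sketch of the pairwise epipolar test is given below; it assumes ideal 3×4 pinhole projection matrices and omits both the six-list bookkeeping and the four-camera consistency check described above:

```python
import numpy as np

def project(P, X):
    """Pinhole projection of a 3D point with a 3x4 camera matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def line_of_sight(P, pixel):
    """Back-project an image point: return the perspective center C and a
    ray direction d such that points C + t*d all map to `pixel`."""
    # Camera center: right null vector of P (P @ [C; 1] = 0).
    _, _, vt = np.linalg.svd(P)
    C = vt[-1, :3] / vt[-1, 3]
    # One other point on the ray via the pseudo-inverse back-projection.
    X = np.linalg.pinv(P) @ np.append(pixel, 1.0)
    X = X[:3] / X[3]
    return C, X - C

def epipolar_matches(P_a, pixel_a, P_b, pixels_b, tol):
    """Keep the camera-B particle images within `tol` of the projection of
    camera A's line of sight onto camera B's image plane (the pairwise
    test described in the text; real code would also bound the ray)."""
    C, d = line_of_sight(P_a, pixel_a)
    p0 = project(P_b, C + 1.0 * d)  # two ray points projected into B
    p1 = project(P_b, C + 2.0 * d)
    u = p1 - p0
    n = np.array([-u[1], u[0]]) / np.linalg.norm(u)  # line normal in B
    return [i for i, p in enumerate(pixels_b)
            if abs(n @ (np.asarray(p) - p0)) <= tol]
```

In a usage sketch with two synthetic cameras, the true match of a 3D point lies on the projected line of sight (distance ≈ 0), while a decoy displaced off the epipolar line is rejected for any reasonable tolerance.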

Motivated by the geometry of our experimental apparatus and the size of the cameras used, we generate for each camera 1,000 images of 300 particles in a 10×10×10 cm³ volume. The cameras are chosen with a magnification of 1:20 and the image resolution is 256×256; therefore, each pixel corresponds to roughly 0.4 mm in space. The particle positions in three dimensions are randomly chosen and are independent of each other. In the simulated images, the particle intensity profile half-width is fixed at 0.8 pixels and no noise was added to the images. These images are processed using the 1D Gaussian Estimator to find particle centers on the image plane, which are then fed to the stereo-matching algorithm to calculate the particle positions in

three dimensions. The matching tolerance ε is chosen to be 0.5 pixels for all image sets. By comparison with the known particle three-dimensional positions, we are able to compute the yield and accuracy of the algorithm and the rms errors of the calculated particle three-dimensional positions, σ_x, σ_y, and σ_z, as summarized in Tables 1 and 2. Here yield is defined as the number of particles in three dimensions found by the matching algorithm, including potentially incorrect matches, divided by the actual number of particles that can be seen by all four cameras, while accuracy is the ratio of the number of correctly matched particles to the total number of particles found by the algorithm. We define a particle center position in three dimensions output by the algorithm as correctly matched if a simulated particle can be found within a sphere of radius 0.2 mm about that position, i.e., if the distance between the calculated center position and any particle is less than roughly 0.5 pixels. Because our 1D Gaussian Estimator can find particle centers on the image plane with rms error less than 0.1 pixels, a tolerance of 0.5 pixels for particle position in three dimensions is adequate and is also consistent with the tolerance ε used in stereo-matching.

    From these results, we see that the yield is almost independent of camera arrangement. In fact, the yield of stereo-matching is almost entirely determined by the yield of particle center finding. For the sets of images that we simulated, the 1D Gaussian Estimator gives a yield of about 0.97, independent of camera setup. Assuming the performance of the center finding algorithm on images from the four cameras is uncorrelated, the expected yield of stereo-matching is 0.97^4 ≈ 0.89, which is very close to the measured value for the cases of four cameras widely separated (large φ and θ). When either φ or θ is reduced, the images from the different cameras become more and more similar; therefore, the yield increases slightly. However, this increase of yield comes at the price of accuracy, because of the rise of ambiguity with decreasing camera separation.

    Results in Table 1 indicate that as long as θ is kept at 90°, changing φ alone has only a small effect on stereo-matching. Both the yield and the accuracy vary slightly. The rms errors in the three-dimensional coordinates correspond to 0.05–0.07 pixels, which is at the same level as the error from particle center finding on the image plane. When φ is decreased, the increase of error in the x and y directions is partially compensated by the decrease of error in the z direction. However, Table 2 suggests that when the four cameras are close to collinear, the position error along the axis toward which the cameras converge can increase to more than 0.1 pixels, much higher than the error in two-dimensional particle center finding. Moreover, the accuracy decreases appreciably as θ is reduced while φ is fixed at 45°.

    In reality, the situation will be much more complicated: the images are noisy, the particle intensity profile is not Gaussian, there will be calibration errors in the camera model, and so forth. Our simulations have by no means considered all these effects. However, an important guideline for practice that we can learn from these simulations is that at least one camera should be kept at a large angle away from the rest to reduce error in three-dimensional position.

    Table 1 Effect of φ on stereo-matching; θ is fixed at 90°

    φ (°)   yield (%)   accuracy (%)   σx (mm)   σy (mm)   σz (mm)
    90      90.0        98.5           0.0241    0.0235    0.0290
    60      89.8        98.5           0.0266    0.0256    0.0238
    45      90.0        98.5           0.0280    0.0261    0.0220
    30      90.0        98.4           0.0289    0.0268    0.0211
    15      90.1        98.2           0.0289    0.0268    0.0205

    Table 2 Effect of θ on stereo-matching; φ is fixed at 45°

    θ (°)   yield (%)   accuracy (%)   σx (mm)   σy (mm)   σz (mm)
    90      90.0        98.5           0.0280    0.0261    0.0220
    60      90.0        98.3           0.0311    0.0254    0.0219
    30      90.2        97.7           0.0394    0.0221    0.0208
    15      90.4        97.0           0.0411    0.0200    0.0190


    5 Particle tracking

    Once particles have been found in the two-dimensional images and their three-dimensional positions have been constructed, they may be tracked in time. In general, tracking large numbers of particles over many frames is an example of a multidimensional assignment problem and is known to be NP-hard (Veenman et al. 2003). Instead of attempting to generate tracks using all data frames, therefore, tracking algorithms approximate the optimal solution by considering only a few frames at a time.

    In order to measure the performance of a tracking algorithm quantitatively, we define the tracking error to be

    E_track = T_imperfect / T_total,   (6)

    where T_imperfect is the number of tracks the algorithm failed to produce perfectly, when compared to the known tracks, and T_total is the total number of tracks in the data set (Chetverikov and Verestóy 1999). A perfectly determined track must contain no spurious points and must also begin at the same time as the actual data track, though it may not have the same length.

    Here, we report E_track as a function of the parameter

    ξ = Δr / Δ0,   (7)

    where Δr is the average distance each particle moves from one frame to the next and Δ0 is the average separation between a particle and its closest neighbor in a single frame. We note that ξ is exactly the inverse of the parameter p defined by Malik et al. (1993). When ξ ≪ 1, the particle seeding density is low and/or the particles move slowly from frame to frame; in this limit, tracking is not difficult. When ξ becomes of order unity, however, the particle density is high and/or the particles move quickly, making tracking much more difficult.
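    Given linked particle positions in two consecutive frames, ξ can be estimated directly. The sketch below is illustrative (names and the brute-force nearest-neighbor search are our own, adequate for the particle counts considered here):

```python
import math

def xi_parameter(frame_a, frame_b):
    """Estimate xi = <dr>/<d0>; frame_a[i] and frame_b[i] are the
    positions of the same particle in two consecutive frames."""
    # mean inter-frame displacement <dr>
    dr = sum(math.dist(p, q) for p, q in zip(frame_a, frame_b)) / len(frame_a)
    # mean nearest-neighbor separation <d0> within a single frame
    d0 = sum(min(math.dist(p, q) for q in frame_a if q is not p)
             for p in frame_a) / len(frame_a)
    return dr / d0
```

    For three particles a unit distance apart that each move 0.1 between frames, this returns ξ ≈ 0.1, i.e., an easy tracking problem.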

    The algorithms were tested using fully three-dimensional direct numerical simulation (DNS) data of fluid particles in turbulence provided by Lance Collins at Cornell University. The DNS was carried out at a Taylor microscale Reynolds number of R_λ = 52 in a 2π×2π×2π box with periodic boundary conditions; despite this low Reynolds number, however, the individual particle trajectories undergo chaotic motion and are therefore not trivial to track. Since Δr was fixed by the DNS, the parameter ξ was varied by changing Δ0. This was accomplished by first fitting the DNS tracks into progressively smaller periodic boxes. When a track left the box, it was cut and the remaining portion of the track was wrapped back into the box. All of these sub-tracks were then considered to start at time zero, increasing the particle density and therefore increasing ξ. This approach also simulates particles leaving the measurement volume; note, however, that since no new particles enter, ξ changes over time. Since E_track measures the number of tracks generated perfectly, however, the largest value of ξ defines the difficulty of the tracking problem. In addition to increasing the difficulty of the tracking problem by this method, we also included additive Gaussian noise of magnitude up to 0.5Δr to the particle positions, with no appreciable change in the relative performance of the tracking algorithms tested.
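    The box-folding procedure described above can be sketched as follows; this is an illustrative reconstruction, not the code used for the tests, and assumes a cubic box anchored at the origin.

```python
def fold_track(track, box):
    """Wrap a track (list of (x, y, z) points) into [0, box)^3, cutting
    it into sub-tracks wherever it crosses a periodic boundary."""
    pieces, current, prev_cell = [], [], None
    for point in track:
        cell = tuple(int(c // box) for c in point)   # which periodic image?
        wrapped = tuple(c % box for c in point)
        if prev_cell is not None and cell != prev_cell:
            pieces.append(current)                   # cut at the boundary
            current = []
        current.append(wrapped)
        prev_cell = cell
    if current:
        pieces.append(current)
    return pieces
```

    A track that drifts across one face of a unit box is thus split into two sub-tracks, with the second wrapped back near the opposite face.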

    We note that the tracking algorithms were simply fed lists of fluid particle positions, with no generation of synthetic images. The results of testing the tracking algorithms presented here are therefore independent of the center finding method chosen.

    Before describing the tested algorithms individually, we present some definitions and common features of all the algorithms. Let x_i^n denote the ith position in the nth frame. A tracking algorithm, then, tries to find an x_j^{n+1} for each x_i^n such that x_j is the position of the particle in frame n+1 that was at position x_i in frame n. In order to determine which of all the x_j^{n+1} to choose, a tracking algorithm defines a cost φ_ij^n (which we define below for several algorithms) for each pair of x_i^n and x_j^{n+1}. The optimal solution to the tracking problem would make links between x_i^n and x_j^{n+1} such that

    Φ = Σ_n Σ_i Σ_j φ_ij^n   (8)

    is minimized. As mentioned above, however, this is a multidimensional assignment problem and is intractable. The tracking algorithms considered here approximate this optimal solution. The most common approximation made is to restrict the number of frames over which Φ is optimized, sometimes referred to as a greedy matching approximation (Veenman et al. 2001). Greedy matching algorithms are preferable to iterative schemes (Sethi and Jain 1987; Ohmi and Li 2000) for the present application since such schemes are unsuitable for processing long tracks. Most tracking algorithms, including those presented here, also restrict the x_j^{n+1} investigated as possible matches for each x_i^n by imposing a limit on the distance a particle can travel from one frame to the next.
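    This displacement limit amounts to a candidate-restriction step, which can be sketched as below (illustrative brute-force search; r_max is an assumed maximum inter-frame displacement, not a value from the paper):

```python
import math

def candidates(frame_n, frame_n1, r_max):
    """For each particle in frame n, list the indices of the frame n+1
    positions lying within the search radius r_max."""
    return [[j for j, q in enumerate(frame_n1) if math.dist(p, q) <= r_max]
            for p in frame_n]
```

    Only the surviving candidates then have their costs φ_ij^n evaluated, which keeps the per-frame work manageable.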

    Within these approximations, a tracking algorithm is specified by two parameters: the heuristic used to calculate the cost φ_ij^n and the method used to break tracking conflicts. A conflict occurs when φ_ij^n = φ_kj^n for i ≠ k, so that two particles in frame n match equally well with the same particle in frame n+1. It is important to note that these conflicts do not, in general, arise from overlapping particle images, since we track here in three dimensions: overlapping images that were not properly identified will not pass through the stereo-matching step and will consequently not be fed to the tracking algorithm.

    In this study, we have compared four tracking heuristics and two methods of conflict breaking. Illustrations of the heuristics are shown in Fig. 8. The tracking heuristics used were



    1. Nearest Neighbor (NN): the tracking cost is given by the distance between the point in frame n and the point in frame n+1:

       φ_ij^n = ||x_j^{n+1} - x_i^n||.   (9)

    2. 3 Frame: Minimum Acceleration (3MA): the position of the particle in frame n-1 is used along with the position in frame n to estimate a velocity and therefore a position x̃_i^{n+1} for the particle in frame n+1, where

       x̃_i^{n+1} = x_i^n + ṽ_i^n Δt,   (10)

       where ṽ_i^n is the estimated velocity. The tracking cost is calculated for all the particles falling in a search volume surrounding the estimate, and is given by the particle acceleration (Malik et al. 1993; Dracos 1996):

       φ_ij^n = ||x_j^{n+1} - 2x_i^n + x_i^{n-1}|| / (2Δt²),   (11)

       where Δt is the time elapsed between frames. Since this method requires that a track consist of at least two points (one in frame n and one in frame n-1), we also specify that the first two frames in the track are joined using the nearest neighbor heuristic.

    3. 4 Frame: Minimum Change in Acceleration (4MA): the position of the particle in frame n+1 is estimated in the same way as in the 3MA algorithm. For each of the particles in the search volume in frame n+1, a position x̃_i^{n+2} in frame n+2 is estimated to be

       x̃_i^{n+2} = x_i^n + ṽ_i^n (2Δt) + ã_i^n (2Δt)²/2,   (12)

       and particles in a search volume around it are investigated. The particle chosen as the best match in frame n+1 is that which leads to the smallest change in acceleration from frame n+1 to frame n+2 (Malik et al. 1993; Dracos 1996):

       φ_ij^n = (1/(2Δt²)) ||(x_j^{n+2} - 2x_j^{n+1} + x_i^n) - (x_j^{n+1} - 2x_i^n + x_i^{n-1})||.   (13)

    Fig. 8 Diagrams of the four tracking heuristics. The black circles and line indicate positions already joined into a track. Dark gray circles indicate positions a single frame into the future, while light gray circles signify positions two frames into the future. Open circles indicate estimated positions, and the crossed circles indicate the positions used to generate the estimates. a Nearest Neighbor heuristic, b the 3 Frame: Minimum Acceleration, c the 4 Frame: Minimum Change in Acceleration, and d the 4 Frame: Best Estimate. In each case, the arrow points out which position will be chosen as the next point on the track.


    As with the three frame method, the first two points on a track are joined using a nearest neighbor heuristic.

    4. 4 Frame: Best Estimate (4BE): we have developed this algorithm as an extension of the four-frame method described above. We replace the φ_ij^n defined above by the distance between particles in frame n+2 and the second estimated position:

       φ_ij^n = ||x_j^{n+2} - x̃_i^{n+2}||,   (14)

       where x̃_i^{n+2} is the estimated position in frame n+2. This algorithm therefore makes no attempt to estimate the third time derivative along the track segment.

    Conflicts were handled in two ways. The simplest way to handle conflicts is to give up: when a particle in frame n+1 is the best match for multiple particles in frame n, the involved tracks are stopped at frame n and a new track is considered to have begun in frame n+1. One may also handle conflicts by choosing the set of links from frame n to frame n+1 so that the total cost

       Σ_ij φ_ij^n

    is minimized (Veenman et al. 2001). This task, now a two-dimensional assignment problem, may be solved efficiently using the Munkres Algorithm (Bourgeois and Lassalle 1971).
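    The four costs (Eqs. 9, 11, 13, 14) and the minimum-total-cost style of conflict breaking can be sketched as below. This is an illustrative Python reimplementation, not the authors' code; for clarity, the assignment step uses brute force over permutations, whereas the Munkres Algorithm solves the same two-dimensional assignment problem in polynomial time.

```python
import math
from itertools import permutations

def cost_nn(x_n, x_n1):
    """Eq. 9: nearest-neighbor cost, the plain displacement."""
    return math.dist(x_n1, x_n)

def cost_3ma(x_nm1, x_n, x_n1, dt):
    """Eq. 11: magnitude of the finite-difference acceleration."""
    acc = [c - 2 * b + a for a, b, c in zip(x_nm1, x_n, x_n1)]
    return math.hypot(*acc) / (2 * dt**2)

def cost_4ma(x_nm1, x_n, x_n1, x_n2, dt):
    """Eq. 13: magnitude of the change in acceleration."""
    acc_new = [c - 2 * b + a for a, b, c in zip(x_n, x_n1, x_n2)]
    acc_old = [c - 2 * b + a for a, b, c in zip(x_nm1, x_n, x_n1)]
    return math.dist(acc_new, acc_old) / (2 * dt**2)

def cost_4be(x_n2, x_n2_estimate):
    """Eq. 14: distance from the frame n+2 point to its estimate."""
    return math.dist(x_n2, x_n2_estimate)

def break_conflicts(cost):
    """Pick frame-n -> frame-(n+1) links minimizing the total cost.
    Brute force over all permutations; Munkres does this efficiently."""
    n = len(cost)
    return list(min(permutations(range(n)),
                    key=lambda p: sum(cost[i][p[i]] for i in range(n))))
```

    For a particle moving at constant velocity, e.g. one-dimensional positions 0, 1, 2, 3 with Δt = 1, both cost_3ma and cost_4ma vanish; and break_conflicts([[1, 2], [1, 5]]) resolves two particles competing for the same target by returning the links [1, 0].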

    The results of testing the different tracking algorithms are shown in Fig. 9. The 4BE algorithm with no conflict breaking clearly performed better than all the other tested algorithms, making no tracking mistakes for small ξ.


    search, Inc. These cameras have a frame rate of 27,000 pictures per second at a resolution of 256×256 pixels, giving us approximately 24 measurements per τη. The cameras are arranged in a single plane with an angular separation of roughly 45°.

    The measured particle trajectories are used to calculate Lagrangian velocities and accelerations. Time derivatives were calculated from the tracking data by convolution with a Gaussian smoothing and differentiating kernel as described in Mordant et al. (2004).
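    In the spirit of that method, differentiation by convolution with the derivative of a Gaussian can be sketched as below; the width and support values are illustrative assumptions, not the parameters of Mordant et al. (2004). The kernel is odd and normalized so that it returns exactly 1 when applied to x(t) = t.

```python
import math

def gaussian_diff_kernel(width, support, dt):
    """Discrete derivative-of-Gaussian kernel on samples spaced by dt."""
    js = range(-support, support + 1)
    raw = [-j * math.exp(-j * j / width**2) for j in js]
    # normalize so that convolving with x(t) = t yields exactly 1
    norm = sum(j * r for j, r in zip(js, raw)) * dt
    return [r / -norm for r in raw]

def differentiate(x, kernel, support):
    """Smoothed derivative at interior samples (valid region only)."""
    js = range(-support, support + 1)
    return [sum(k * x[i - j] for j, k in zip(js, kernel))
            for i in range(support, len(x) - support)]
```

    Applied to a noiseless linear track the result is the exact slope; on noisy positions, the Gaussian envelope suppresses the high-frequency noise that plain finite differences would amplify.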

    As shown above for simulated data, the 1D Gaussian Estimator and the 4BE tracking algorithm gave the best combination of accuracy and efficiency of the algorithms tested. We have therefore used them in our experiment. Figure 10 shows the measured probability density functions (PDFs) for the two radial velocity components. As expected, the PDFs are almost perfectly Gaussian. The axial velocity component (not shown) is less Gaussian, as noted in Voth et al. (2002) for this flow.

    Figure 11 shows the measured PDFs of the two radial acceleration components. The PDFs show the expected stretched exponential shape. We compare this measured data with the previous measurements of Mordant et al. (2004), who used very fast silicon strip detectors to image tracer particles in our flow. Mordant et al. (2004) took images at a rate of 70 kHz in a volume of 2×2×2 mm with effectively 512×512 pixels. We see some depression in the tails of our measured PDF, which we attribute to the poorer temporal and spatial resolution in our experiment, since Mordant et al. (2004) took nearly three times as many pictures per second.

    Given that the 1D Gaussian Estimator and the 4BEalgorithm produce accurate Lagrangian statistics in ourhigh Reynolds number flow, we conclude that they cansuccessfully be applied to real experiments.

    7 Conclusions

    The choice of algorithms used in a three-dimensional LPT system is important for the overall accuracy of such a system. We have proposed both a particle finding algorithm and a tracking algorithm. For particle center finding, any advantages the commonly used method of weighted averaging enjoys are outweighed by the significant gains in accuracy one can achieve with (1) for high signal-to-noise ratios the comparably efficient 1D Gaussian Estimator and (2) for low signal-to-noise ratios our new neural network scheme. For camera placement, at least one camera must be placed at a large angular separation from the others in order to increase accuracy by removing matching ambiguities. Finally, our new 4BE tracking algorithm outperforms the other algorithms tested by a large margin, and the addition of the Munkres algorithm (Bourgeois and Lassalle 1971) for breaking conflicts to a tracking method does not increase the tracking accuracy.

    Additionally, we have verified that the best algorithms tested, the 1D Gaussian Estimator and the 4BE tracking algorithm, produce particle tracks with the correct flow statistics when applied to a real high Reynolds number flow.

    Acknowledgements The authors would like to thank Lance Collins at Cornell University for contributing DNS data for testing the tracking algorithms presented here. This work was supported by the National Science Foundation under grants PHY-9988755 and PHY-0216406.

    References

    Adrian RJ (1991) Particle-imaging techniques for experimental fluid mechanics. Annu Rev Fluid Mech 23:261–304

    Fig. 11 PDFs of the radial acceleration components, plotted against a/⟨a²⟩^{1/2}. The symbols denote the two components of measured data. The solid line is data measured by Mordant et al. (2004). The tails of our measured distribution are somewhat depressed due to our low time resolution.

    Fig. 10 PDFs of the radial velocity components, plotted against u/⟨u²⟩^{1/2}. The symbols denote the two components of measured data, while the solid line is a Gaussian. As expected, the velocity data fall almost perfectly on the Gaussian.


    Bourgeois F, Lassalle J-C (1971) An extension of the Munkres algorithm for the assignment problem to rectangular matrices. Commun ACM 14:802–804

    Carosone F, Cenedese A, Querzoli G (1995) Recognition of partially overlapped particle images using the Kohonen neural network. Exp Fluids 19:225–232

    Chen Y, Chwang AT (2003) Particle image velocimetry system with self-organized feature map algorithm. J Eng Mech ASCE 129:1156–1163

    Chetverikov D, Verestóy J (1999) Feature point tracking for incomplete trajectories. Computing 62:321–338

    Cowen EA, Monismith SG (1997) A hybrid digital particle tracking velocimetry technique. Exp Fluids 22:199–211

    Doh D-H, Kim D-H, Choi S-H, Hong S-D, Saga T, Kobayashi T (2000) Single-frame (two-field image) 3-D PTV for high speed flows. Exp Fluids 29:S85–S98

    Dracos Th (1996) Particle tracking in three-dimensional space. In: Dracos Th (ed) Three-dimensional velocity and vorticity measuring and image analysis techniques. Kluwer, Dordrecht

    Grant I, Pan X (1997) The use of neural techniques in PIV and PTV. Meas Sci Technol 8:1399–1405

    Grant I, Pan X, Romano F, Wang X (1998) Neural-network method applied to the stereo image correspondence problem in three-component particle image velocimetry. Appl Opt 37:3656–3663

    Guezennec YG, Brodkey RS, Trigui N, Kent JC (1994) Algorithms for fully automated three-dimensional particle tracking velocimetry. Exp Fluids 17:209–219

    La Porta A, Voth GA, Crawford AM, Alexander J, Bodenschatz E (2001) Fluid particle accelerations in fully developed turbulence. Nature 409:1017–1019

    Labonté G (1999) A new neural network for particle-tracking velocimetry. Exp Fluids 26:340–346

    Labonté G (2001) Neural network reconstruction of fluid flows from tracer-particle displacements. Exp Fluids 30:399–409

    Maas H-G (1996) Contributions of digital photogrammetry to 3-D PTV. In: Dracos Th (ed) Three-dimensional velocity and vorticity measuring and image analysis techniques. Kluwer, Dordrecht

    Maas H-G, Gruen A, Papantoniou D (1993) Particle tracking velocimetry in three-dimensional flows – Part 1. Photogrammetric determination of particle coordinates. Exp Fluids 15:133–146

    Malik NA, Dracos Th, Papantoniou DA (1993) Particle tracking velocimetry in three-dimensional flows – Part 2. Particle tracking. Exp Fluids 15:279–294

    Mann J, Ott S, Andersen JS (1999) Experimental study of relative, turbulent diffusion. Risø National Laboratory Report Risø-R-1036(EN)

    Mitchell TM (1997) Machine learning. McGraw-Hill, Boston, pp 81–127

    Mordant N, Crawford AM, Bodenschatz E (2004) Experimental Lagrangian acceleration probability density function measurement. Physica D 193:245–251

    Ohmi K, Li H-Y (2000) Particle-tracking velocimetry with new algorithms. Meas Sci Technol 11:603–616

    Sethi IK, Jain R (1987) Finding trajectories of feature points in a monocular image sequence. IEEE Trans Pattern Anal Mach Intell 9:56–73

    Veenman CJ, Reinders MJT, Backer E (2001) Resolving motion correspondence for densely moving points. IEEE Trans Pattern Anal Mach Intell 23:54–72

    Veenman CJ, Reinders MJT, Backer E (2003) Establishing motion correspondence using extended temporal scope. Artif Intell 145:227–243

    Voth GA, La Porta A, Crawford AM, Alexander J, Bodenschatz E (2002) Measurement of particle accelerations in fully developed turbulence. J Fluid Mech 469:121–160

    Westerweel J (1993) Digital particle image velocimetry – theory and applications. PhD Dissertation, Delft University Press

    Westerweel J (2000) Theoretical analysis of the measurement precision in particle image velocimetry. Exp Fluids 29:S3–S12

    Yeung PK (2002) Lagrangian investigations of turbulence. Annu Rev Fluid Mech 34:115–142
