

284 IEEE TRANSACTIONS ON NANOBIOSCIENCE, VOL. 7, NO. 4, DECEMBER 2008

Estimating Locations of Quantum-Dot-Encoded Microparticles From Ultra-High Density 3-D Microarrays

Pinaki Sarder*, Student Member, IEEE, and Arye Nehorai, Fellow, IEEE

Abstract—We develop a maximum likelihood (ML)-based parametric image deconvolution technique to locate quantum-dot (q-dot)-encoded microparticles from three-dimensional (3-D) images of an ultra-high density 3-D microarray. A potential application of the proposed microarray imaging is assay analysis of gene, protein, antigen, and antibody targets. This imaging is performed using a wide-field fluorescence microscope. We first describe our problem of interest and the pertinent measurement model by assuming additive Gaussian noise. We use a 3-D Gaussian point-spread-function (PSF) model to represent the blurring of the widefield microscope system. We employ parametric spheres to represent the light intensity profiles of the q-dot-encoded microparticles. We then develop the estimation algorithm for the single-sphere-object image assuming that the microscope PSF is totally unknown. The algorithm is tested numerically and compared with the analytical Cramér-Rao bounds (CRB). To apply our analysis to real data, we first segment a section of the blurred 3-D image of the multiple microparticles using a k-means clustering algorithm, obtaining 3-D images of single-sphere-objects. Then, we process each of these images using our proposed estimation technique. In the numerical examples, our method outperforms the blind deconvolution (BD) algorithms in high signal-to-noise ratio (SNR) images. For the case of real data, our method and the BD-based methods perform similarly for the well-separated microparticle images.

Index Terms—Fluorescence microscope, maximum likelihood estimation, microparticle, q-dot, 3-D microarray.

I. INTRODUCTION

WE develop a parametric image deconvolution technique based on a maximum likelihood (ML) approach to locate quantum-dot (q-dot)-encoded microparticles from three-dimensional (3-D) images of ultra-high density 3-D microarrays. The estimated microparticle locations are useful as prior information for any next-stage imaging analysis with the microarray, such as target quantification.

In 3-D microarrays [1], a large number of reagent-coated and q-dot-embedded microparticles are randomly placed in each

Manuscript received March 10, 2008; revised June 05, 2008. Current version published February 06, 2009. This work was supported in part by the NSF under Grants CCR-0330342 and CCR-0630734, in part by the NIH under Grant 1T90 DA022871, and in part by the Mallinckrodt Institute of Radiology, Washington University. Asterisk indicates corresponding author.

*P. Sarder is with the Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, MO 63130 USA (e-mail: [email protected]).

A. Nehorai is with the Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA (e-mail: [email protected]).

Digital Object Identifier 10.1109/TNB.2008.2011861

array spot to capture multiple targets. The targets are antigens1 and proteins [1], [2]. The imaging is performed using a standard widefield fluorescence microscope.

The microscope imaging properties distort the 3-D object image [3]. We use a parametric 3-D Gaussian point-spread-function (PSF) model to represent the widefield microscope blurring. The microscope output signal is represented by the light from the spherically shaped objects distorted by the microscope aberration. The CCD detector2 attached to the microscope functions as a photon counter, introducing Poisson noise to the output signal. In addition, analog-to-digital (A/D) conversion noise is introduced into the measurement at the CCD detector output. We approximate the Poisson-noise-corrupted object signal blurred by the microscope PSF using a signal with additive Gaussian noise. This approximation is justified because the light signals from the q-dots are captured with a high signal-to-noise ratio (SNR) during the experimental data collection [4], [5].

The microscope- and measurement-noise problems have to be solved in order to make any qualitative or quantitative analysis with the distortion-free object images [6]. In the current paper, we develop an ML-based deconvolution technique to locate the positions of q-dot-encoded microparticles from 3-D microarray images. In microscopy imaging, ML-based algorithms are useful for estimating specific information of interest (e.g., microparticles' centers and their q-dot intensity levels) using parametric formulations when the object model is simple, such as a sphere in our case. In these simple cases, the objects can be easily modeled using simple parametric forms with the unknown information of interest as parameters. Then, a problem-specific deconvolution algorithm can be developed using popular ML or least-squares-based techniques instead of using commercial deconvolution software, which is costly and time-intensive to run.

We compare the performance of our proposed algorithm with those of the conventional blind-deconvolution (BD) algorithm as embedded in MATLAB [8] and the parametric blind-deconvolution (PBD) [7] method using numerical examples. We select these two reference algorithms as representatives of many existing microscopy image deconvolution methods. These two algorithms, like many other conventional techniques, are developed in general for arbitrary object shapes. Hence, the existing methods are often very ad hoc. In contrast, in our application,

1An antigen is a substance that stimulates an immune response, especially the production of antibodies.

2Charge-coupled device, a light-sensitive chip used for image gathering.

1536-1241/$25.00 © 2008 IEEE

Authorized licensed use limited to: WASHINGTON UNIVERSITY LIBRARIES. Downloaded on February 7, 2009 at 11:06 from IEEE Xplore. Restrictions apply.


SARDER* AND NEHORAI: ESTIMATING LOCATIONS OF QUANTUM-DOT-ENCODED MICROPARTICLES FROM ULTRA-HIGH DENSITY 3-D MICROARRAYS 285

a realistic prior knowledge of the object shape is available. We emphasize the fact that it is possible to employ such prior knowledge in the image deconvolution algorithm. As a consequence, the deconvolution analysis becomes a pure parameter estimation problem, and it no longer needs any kind of blind approach to solve. In this paper, we present a simple parametric ML-based estimation framework for deconvolution microscopy of simple-object-shape images. Our proposed analysis is analytically and numerically very tractable, unlike the PBD algorithm. See Section V for more details.

In our numerical examples, our method significantly outperforms the other two methods in high SNR; BD performs the worst. Our algorithm achieves the Cramér-Rao bound (CRB) for sufficiently large data, as should be expected for ML estimation methods; the other two do not. Our algorithm performs better since it contains prior information of the object shape (spherical), whereas the other methods (BD and PBD) do not have that flexibility. We compare the performance of our algorithm with the BD algorithm using real-data images. We observe that the BD-based method performs as well as our proposed method for well-separated microparticle images, whereas the performances differ for microparticles that are very close to each other. This result is the motivation for calibrating the distance between microparticles in our future research in order to improve the estimation accuracy.

II. PROBLEM DESCRIPTION

In this section we discuss the 3-D microarray with q-dot-embedded microparticles [4], [5], its image-acquisition procedure, our nomenclature convention, and the motivation for our current work.

A. Target-Capturing Process

In the 3-D ultra-high-density microarray as proposed in [4] and [5], microparticles (3–4 µm in diameter) coated with capture reagents are immobilized in an array (see Fig. 1(a)). Each microparticle is coated with antibodies3 or antibody fragments as receptors that bind specific agents. The identities of the antibodies on the microparticles are determined by interrogating the semiconductor q-dot bar code [9], [10], enabling identification of the microparticles among a host of others. Q-dots do not saturate or photo-disintegrate as organic fluorophores do under intense illumination. This quality greatly enhances the detection sensitivity and produces a high-SNR image at the detector output. The ensemble with different receptors is shown schematically in Fig. 1(a), with each color representing a specific code assigned to a given receptor [4], [5].

During the sensor operation, an input fluid stream with targets is passed tangentially through the sensor using an appropriate microfluidic technique. The targets encounter all the microparticles. The tortuous flow geometry leads to intimate contact between targets and receptors. Therefore, if an intended target is

3An antibody is a large Y-shaped protein used by the immune system to identify and neutralize foreign objects such as bacteria and viruses. Each antibody binds to a specific antigen unique to its target.

Fig. 1. (a) A microarray ensemble containing 25 spots; each spot contains a large number of microparticles coated with capture reagents. (b) A target molecule is sensed by being captured on an optically encoded microparticle; photoluminescent signals are generated from the microparticle and nanoparticle q-dots.

contained in the input fluid stream, it binds to the microparticle surface and, given sufficient exposure, accumulates to a detectable level [4], [5].

We present a schematic diagram of the target-capturing process in Fig. 1(b). A sector of a polystyrene microparticle contains an optical bar code of q-dots that luminesce at a distinct set of wavelengths (e.g., the red light wavelength in Fig. 1(b)). The microparticle surface contains receptor molecules. Potential target molecules bind to the receptor molecules at active sites on the surface. The binding of the target molecules to the microparticle surface is detected via additional receptors decorating the surface of polystyrene nanoparticles containing q-dots that luminesce at unique wavelengths (e.g., the blue light wavelength in Fig. 1(b)) that are shorter than the q-dot wavelengths of the microparticles. During the sensor operation, a cocktail of nanoparticles is periodically released into the input fluid stream. The nanoparticles then bind to the target molecules that were earlier bound to the microparticles. This process results in site-specific enhancement of the luminescence at a specific wavelength [4], [5] and hence greatly increases the detection sensitivity of the target molecules on the microparticles.

Example: We illustrate a simple example in Fig. 2. Two microparticles are coded with q-dots of emission wavelengths λ1 and λ2. The orange- and red-colored microparticles with q-dots having emission wavelengths λ1 and λ2 contain antibody A and antibody B, respectively, as receptors. The microparticles can be


identified and localized from the luminescence corresponding to their respective q-dot light signals at the respective wavelengths.

The target solution contains antigen A. The blue-colored nanoparticles contain q-dots with emission wavelength λ3, which is different from and smaller than λ1 and λ2. Each nanoparticle contains either antibody A or B as its receptors. Since the targets are present only on the microparticle containing antibody A, the nanoparticles bind only there. All the q-dots produce light upon excitation of the whole ensemble using UV light. The q-dot light at wavelength λ3 is observed wherever the microparticle with antibody A is present, confirming the presence of antigen A and the absence of antigen B in the target solution. This light signal also gives the amount of antigen A in the target solution.

In summary, signals at wavelengths λ1 and λ2 identify and localize the corresponding microparticles. The target molecules are quantified from the q-dot signal at wavelength λ3.

B. Imaging

The 3-D image of the specimen is acquired from a series of two-dimensional (2-D) images by focusing a computer-controlled epi-fluorescence microscope at different planes of the ensemble structure [6], [11]–[13]. A cooled CCD records images at multiple wavelengths using a liquid-crystal tunable filter. A cooled CCD introduces almost negligible dark detector noise in the acquired image [4]. In this paper, we assume that a significant amount of the distortion in the output image is characterized by (a) the microscope 3-D PSF, (b) Poisson noise introduced by the CCD detector, and (c) additive A/D conversion noise introduced at the CCD detector output.

C. Nomenclature

The 3-D image of the specimen is acquired along the optical direction from a series of 2-D focal-plane (also called radial-plane) images. The optical direction is defined along the z axis in the Cartesian coordinate system. Any direction parallel to the focal plane is denoted as a radial direction. We show these concepts in Fig. 3(a) and the meridional section of the acquired 3-D image in Fig. 3(b). The meridional section is defined as a plane along the optical axis (perpendicular to the focal plane) passing through the origin.

D. Motivation

In Fig. 4, we present the bar-code-q-dot-light-intensity image on the focal plane of reference at 0 µm of an assembly of microparticles (without targets) from a microarray spot. We focus on estimating the microparticle locations in this image. This experiment was performed with each microparticle bar code containing a single type of q-dots of similar amount.

In general, the 3-D microarray setup contains microparticle bar codes with multiple types of q-dots of varying amounts. Our proposed method can estimate the microparticle centers and their intensity levels in this general setup [4], [5]. However, in this paper, we consider only the case of the experimental setup where each microparticle contains q-dots of the same amount and type. The presence of varying amounts and multiple types of q-dots does not change the performance of our proposed method.

The estimated locations of the microparticles are useful as prior information for any next-stage imaging analysis with this 3-D microarray, such as target quantification. The targets are quantified from the q-dot light signals of the nanoparticles. In the quantification analysis, the q-dot signal profile of the nanoparticles on each microparticle forms a spherical-shell-like shape. Here, the inverse equation requires estimating the microparticles' locations as part of the estimation process. Prior knowledge of the shells' centers (which are the same as the corresponding microparticle center locations) greatly enhances the speed of the target quantification.

III. MEASUREMENT MODEL

In this section, we first discuss our proposed single-sphere object model for the q-dot light intensity profile of the microparticles. Then we discuss a 3-D Gaussian PSF model for a widefield fluorescence microscope. Finally, we present the measurement model for the estimation analysis.

A. Single-Sphere Object Model

We model the bar-code-q-dot-light-intensity profile of each microparticle at a specific wavelength using parametric spheres as follows:

s(x, y, z) = α  if (x − x0)² + (y − y0)² + (z − z0)² ≤ r², and 0 otherwise,   (1)

where s(x, y, z) denotes the bar-code-q-dot-light-intensity profile of a single microparticle at a specific wavelength; α is the unknown intensity level, which is proportional to the number of specific q-dots present inside the microparticle; and (x0, y0, z0) and r are the unknown center and known radius of the microparticle, respectively. Hence, the unknown parameter vector of the object is denoted by θ_obj = [α, x0, y0, z0]ᵀ.
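As a concrete illustration, the homogeneous sphere model above can be rasterized onto a voxel grid. This is a minimal sketch in assumed voxel units; the function name, grid shape, and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def sphere_object(shape, center, radius, alpha):
    """Rasterize a homogeneous sphere: intensity alpha inside the given
    radius of center, zero outside (all quantities in voxel units)."""
    zz, yy, xx = np.indices(shape)
    cz, cy, cx = center
    inside = (xx - cx) ** 2 + (yy - cy) ** 2 + (zz - cz) ** 2 <= radius ** 2
    return np.where(inside, alpha, 0.0)

# a 3-4 um microparticle sampled at sub-micrometer voxels spans tens of
# voxels; a small grid keeps this illustration cheap
obj = sphere_object((32, 64, 64), center=(16, 32, 32), radius=10, alpha=5.0)
```

The single intensity value alpha reflects the homogeneous-profile trade-off discussed below: one unknown per object instead of one per voxel.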

Here we use a homogeneous intensity profile for modeling the object. This reduces the number of unknowns in our parameter estimation algorithm (see Section IV) and increases the processing speed. The microparticles, and hence the objects, are many in number in the microarray (see Fig. 4), and analyzing all of them in a reasonable time requires fast processing. Hence, we estimate an average intensity value per voxel of every object, which is unlike estimating an inhomogeneous intensity profile. This is a trade-off between fast processing and analysis with a realistic object intensity distribution. Note that employing an inhomogeneous intensity profile in (1) makes the estimation analysis overly ill-posed, and additional prior information on the object intensity is required for a satisfactory performance.

To verify our model, we use real data and present a simple illustration to show that the bar-code-q-dot-light-intensity profile of a single microparticle resembles a sphere. We use the BD algorithm for this analysis. We use the deconvblind command of MATLAB for all the BD operations in this paper, with an initial PSF represented by ones in all the voxels. We assume that the blurred microparticle image as shown in Fig. 4 is analytically represented by the object convolved by the PSF with additive


Fig. 2. Example of the target-capturing process: target solution, the ensemble before and after target capturing, and the final ensemble with q-dot-encoded nanoparticle tags.

Fig. 3. A schematic view of (a) the focal (radial) plane, the optical direction, and the radial direction in a Cartesian coordinate system; (b) the meridional plane in a Cartesian coordinate system.

Gaussian noise [14] (see also Section III-C for more discussion). Following this assumption, we apply the BD algorithm to a randomly chosen microparticle image (see Fig. 5(a)). In Fig. 5(b), we show a meridional section of the resulting estimated object intensity profile, which resembles a sphere. This analysis justifies our proposed single-sphere object model.

It is worth mentioning that the BD-based method follows conventional blind deconvolution techniques (see Appendix I) [15], [16]. However, BD-estimated object intensity images do not give precise estimates of the microparticle centers. Hence, we

Fig. 4. Bar-code-q-dot-light-intensity image on the focal plane of reference at 0 µm of an assembly of microparticles from a spot of the 3-D microarray device.

use an analytical estimation method to fit our proposed spherical-shaped object model in (1) to the statistical measurement model for estimating the center parameters, as we discuss in Section III-C.

B. Three-Dimensional Gaussian PSF Model

The PSF is the 3-D impulse response4 of the fluorescence microscope system used to characterize the out-of-focus light. The 3-D impulse response is not an exact 3-D impulse [11] in the fluorescence microscope, because the finite lens aperture introduces diffraction ring patterns in radial planes. In addition, the measurement setup usually differs from the manufacturer's design specifications. Therefore, the microscope PSF becomes phase aberrated, with symmetric features in radial planes and with asymmetric features along the optical direction [6].

The classical widefield microscope 3-D PSF model of Gibson and Lanni [17], [18] is given by

h(x, y, z) = | C ∫₀¹ J₀( k · NA · ρ · √(x² + y²) / M ) · exp( j · k · W(ρ, z, p) ) · ρ dρ |²   (2)

where J₀ is the Bessel function of the first kind, k = 2π/λ is the wave number, λ is the wavelength of the fluorescent signal, NA is the numerical aperture of the microscope, and M is the magnification of the lens. The vector p contains the true and ideal measurement setup parameters: the refractive indices of the immersion oil, the specimen, and the cover slip; the thicknesses of the immersion oil and the cover slip; the distance between the back focal and the detector planes; and the depth at which the point source is located in the specimen. Here z is the distance from the in-focus plane to the point of evaluation, ρ is the normalized radius in the back focal plane, and W is the optical

4The 3-D impulse response is the output intensity profile of the microscope system when the input is a point light source in space, i.e., a 3-D impulse. A 3-D impulse represents the limiting case of a light pulse in space made very short in volume while maintaining a finite volume integral (thus giving an infinitely high intensity peak) [14].


Fig. 5. Meridional sections of (a) a microparticle’s blurred bar-code-q-dot-light-intensity image and (b) the resulting BD-estimated object intensity profile. Merid-ional sections of (c) the BD-estimated PSF intensity profile from the image as shown in Fig. 5(a) and (d) its least-squares fitted version following the model (3).

path difference (OPD) function between corresponding systems in design and nondesign conditions [17].
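The cost of evaluating an integral of this form numerically is easy to see in code. The sketch below evaluates a Gibson-Lanni-style integral with a simple defocus-only phase standing in for the full OPD term W (which in the real model depends on the whole setup vector p); the function name and all parameter values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.special import j0

def gl_like_psf(r, z, wavelength=0.5, na=1.3, n_quad=400):
    """Numerically evaluate |integral_0^1 J0(k*NA*r*rho) e^{j k W} rho drho|^2
    with a toy defocus-only stand-in for the OPD function W (units: um)."""
    k = 2.0 * np.pi / wavelength            # wave number
    rho = np.linspace(0.0, 1.0, n_quad)     # normalized back-focal-plane radius
    W = 0.5 * z * (na * rho) ** 2           # toy defocus phase, not the full OPD
    integrand = j0(k * na * r * rho) * np.exp(1j * k * W) * rho
    val = np.sum(integrand) * (rho[1] - rho[0])  # simple quadrature
    return float(np.abs(val) ** 2)

on_axis = gl_like_psf(0.0, 0.0)   # PSF peak at the focal point
off_axis = gl_like_psf(2.0, 0.0)  # 2 um off-axis, far dimmer
```

One quadrature of a few hundred points is needed per voxel, which is what makes model (2) expensive when many microparticle images must be processed.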

The Gibson and Lanni PSF model in (2) is computationally expensive because the integration in the formula requires intensive numerical evaluation. Also, it is worth noting that our approach requires that a large number of microparticle images be processed, as is evident from Fig. 4. Hence, we perform the estimation analysis using a 3-D Gaussian PSF model as proposed in [20]:

h(x, y, z) = A · exp( −(x² + y²) / (2σ_xy²) ) · exp( −z² / (2σ_z²) )   (3)

where the unknown parameter vector is θ_psf = [σ_xy, σ_z]ᵀ. The model (3) for representing the widefield fluorescence microscope PSF assumes that the Gaussian functions are centered at the origin of the PSF and that they are separable. The origin of the PSF is denoted by the coordinate (0, 0, 0) in (2) and (3). The advantage of using a centered separable 3-D Gaussian PSF model is that it preserves the symmetry of the Gibson and Lanni PSF model along the focal planes and the asymmetry of the Gibson and Lanni PSF model along the optical direction. We also assume in our proposed parameter estimation algorithm (as presented in Section IV) that the 3-D Gaussian PSF is normalized according to the ℓ∞ norm5, and hence A = 1. It is worth mentioning that in [20], the authors conclude that the PSF model (3) is a not-very-accurate approximation of the original PSF model in (2). However, we use this model in our analysis, since it needs a smaller number of unknown parameters to estimate an image with a large number of microparticles.

In general, a 3-D PSF can be obtained by three different techniques: experimental, analytical, and computational [6]. In the experimental methods, images of one or more point-like objects are collected and used to obtain the PSF. These methods have the advantage that the PSF closely matches the experimental setup. However, images obtained with such point-like objects

5If x = [x₁, x₂, ..., xₙ]ᵀ is a vector, then its ℓ∞ norm is max{|x₁|, |x₂|, ..., |xₙ|} [19].


have a very poor SNR unless the system is specially optimized. In the analytical methods, the PSF is calculated using the classical diffraction-based Gibson and Lanni model (2). In the computational methods, it is preferable to estimate the PSF and object functions simultaneously by using BD algorithms [6]. Such is the case when all the PSF parameters are not known or a PSF measurement is difficult to obtain.

To verify that the PSF model (3) is useful for our application of interest, we use real data and present an illustration to show that (3) is quite similar to the widefield microscope PSF. We estimate the widefield microscope PSF from the real data using the BD algorithm. Furthermore, we use a least-squares fit [19] on the BD-estimated PSF using model (3) to visualize the similarity between them. In Fig. 5(c), we show the meridional section of the BD-estimated PSF that is obtained by applying the BD algorithm to the image as shown in Fig. 5(a). We fit this BD-estimated PSF using model (3), and the meridional section of the resultant image is shown in Fig. 5(d). The least-squares error is computed as 4.42%. This confirms that we do not lose much estimation accuracy by using the PSF model (3).
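The least-squares fit of a separable Gaussian model to a measured PSF volume can be sketched as follows. This is a hedged illustration: a synthetic noisy Gaussian stands in for the real BD output, and the relative-norm error metric is our assumption about how a figure like the 4.42% above would be computed.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_gaussian_psf(measured):
    """Fit (sigma_xy, sigma_z) of a centered separable Gaussian to a
    peak-normalized PSF volume; returns the fitted parameters and the
    relative least-squares error in percent."""
    measured = measured / measured.max()
    nz, ny, nx = measured.shape
    zz, yy, xx = np.indices((nz, ny, nx))
    zz, yy, xx = zz - nz // 2, yy - ny // 2, xx - nx // 2

    def model(p):
        s_xy, s_z = p
        return np.exp(-(xx**2 + yy**2) / (2 * s_xy**2) - zz**2 / (2 * s_z**2))

    fit = least_squares(lambda p: (model(p) - measured).ravel(), x0=[1.0, 1.0])
    err = 100 * np.linalg.norm(model(fit.x) - measured) / np.linalg.norm(measured)
    return fit.x, err

# synthetic stand-in for a BD-estimated PSF: sigma_xy = 2, sigma_z = 4, noisy
rng = np.random.default_rng(0)
zz, yy, xx = np.indices((17, 17, 17))
zz, yy, xx = zz - 8, yy - 8, xx - 8
truth = np.exp(-(xx**2 + yy**2) / 8.0 - zz**2 / 32.0)
params, err = fit_gaussian_psf(truth + 0.005 * rng.normal(size=truth.shape))
```

The residual-norm error here plays the same role as the percentage reported in the text: small values indicate that the two-parameter Gaussian captures the measured PSF well.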

C. Statistical Measurement Model

For the single spherical-shaped object (1), the imaging process is expressed analytically as

μ(x, y, z) = s(x, y, z) ∗ h(x, y, z)   (4)

where μ(x, y, z) is the output of the convolution process between s(x, y, z) and h(x, y, z) at the voxel (x, y, z), and ∗ is the convolution operation6 [21]. The CCD detector attached to the fluorescence microscope behaves as a photon counter, introducing Poisson noise to μ(x, y, z), and that process is given by

N(x, y, z) ∼ Poisson(γ μ(x, y, z; θ))    (5)

where γ is the reciprocal of the photon-conversion factor [21], [22], N(x, y, z) is the number of photons measured in the CCD detector, and N(x, y, z) is a Poisson random variable with parameter γμ(x, y, z; θ). Since the q-dot light from the specimen produces a very high SNR at the CCD detector, and assuming that the parameter value is sufficiently large, we approximate (5) using an additive Gaussian noise model [23]. Independent of these, the A/D conversion error is introduced into the measurement at the CCD detector output. We model this error using an additive Gaussian random variable. Under these assumptions and considerations, the measurement is given by

g(x, y, z) = μ(x, y, z; θ) + n(x, y, z) + w(x, y, z)    (6)

6For two discrete functions f[n] and h[n], the convolution operation is given by (f ∗ h)[n] = Σ_m f[m] h[n − m] [14].

Fig. 6. Block diagram representation of the imaging process as discussed in Section III.

where n(x, y, z) is additive Gaussian noise, independent from voxel to voxel, with mean zero and variance μ(x, y, z; θ)/γ, which depends on μ(x, y, z; θ) at the voxel (x, y, z), and w(x, y, z) is additive Gaussian noise independently and identically distributed (i.i.d.)7 from voxel to voxel. Note that in (6), μ(x, y, z; θ) + n(x, y, z) represents the Gaussian approximation of the Poisson random variable in (5), and w(x, y, z) represents the A/D conversion process error of the CCD detector. In Fig. 6, we illustrate the whole imaging process using a block diagram.
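As a concrete illustration, the imaging chain of (4)-(6) can be simulated as follows; the toy object, Gaussian PSF, and the values of gamma and sigma_w are illustrative assumptions, not the paper's experimental settings.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)

def simulate_measurement(obj, psf, gamma=50.0, sigma_w=2.0):
    """Forward chain: blurring (4), CCD photon counting (5), A/D noise (6).
    gamma (reciprocal photon-conversion factor) and sigma_w (A/D noise std)
    are illustrative values."""
    mu = np.clip(fftconvolve(obj, psf, mode='same'), 0.0, None)  # eq. (4)
    photons = rng.poisson(gamma * mu)                            # eq. (5)
    return photons / gamma + rng.normal(0.0, sigma_w, mu.shape)  # eq. (6)

# Toy example: a small bright sphere blurred by a Gaussian PSF
z, y, x = np.indices((21, 21, 21))
r2 = (x - 10) ** 2 + (y - 10) ** 2 + (z - 10) ** 2
obj = 100.0 * (r2 <= 16)
psf = np.exp(-r2 / 8.0)
psf /= psf.sum()
g = simulate_measurement(obj, psf)
```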

IV. PARAMETER ESTIMATION

We develop a parameter estimation technique for estimating the unknown parameters θ from the acquired 3-D image segment of a single sphere. In this section, we first describe our estimation technique for an unknown 3-D Gaussian PSF. Then, we compute the Cramér-Rao bound (CRB), which is the lowest bound on the variance of error of any unbiased estimator of the parameters in θ under certain regularity conditions [23].

The deconvolution problem is ill-posed, since the unknown parameters in forward model (6) include: (i) the object parameter vector θ_o, (ii) the PSF parameter vector θ_h, and (iii) the noise parameters. We consider the following approximation of (6). The random variables n(x, y, z) and w(x, y, z) are independent at every voxel. Hence, n(x, y, z) + w(x, y, z) is a Gaussian random variable with zero mean and independent from voxel to voxel. We approximate the random variable (1/γ)N(x, y, z) with μ(x, y, z; θ) + n(x, y, z), where n(x, y, z) is a Gaussian random variable with mean zero and variance μ(x, y, z; θ)/γ [23]. We assume that w(x, y, z) follows N(0, σ_w²) and hence, g(x, y, z) follows N(μ(x, y, z; θ), μ(x, y, z; θ)/γ + σ_w²). The measurement g(x, y, z) is independent from voxel to voxel. The likelihood of the measurement for a voxel coordinate (x, y, z)

is given by,

p(g(x, y, z); θ, σ_w²) = [2π(μ(x, y, z; θ)/γ + σ_w²)]^(−1/2) exp{ −[g(x, y, z) − μ(x, y, z; θ)]² / [2(μ(x, y, z; θ)/γ + σ_w²)] }    (7)

We estimate σ_w² as follows,

7Statistically independent observations that are drawn from a statistically identical probability distribution function [23].


290 IEEE TRANSACTIONS ON NANOBIOSCIENCE, VOL. 7, NO. 4, DECEMBER 2008

(8)

where arg min stands for the argument of the minimum, i.e., the value of the given argument for which the given expression attains its minimum value. Differentiating the negative log-likelihood with respect to σ_w² and equating the resulting analytical expression to zero, we obtain,

(9)

Simplifying (9),

(10)

Note that (10) has a quadratic form in the unknown noise variance, with a positive leading coefficient. A unique solution to (10) therefore exists when two sign conditions on the remaining coefficients are satisfied. We verify that the first condition always holds; the second holds in general for images captured at high SNR. Solving (10),

(11)

We approximate (11) further for high SNR, and thus we have,

σ̂_w² ≈ (1/M) Σ_(x, y, z) [g(x, y, z) − μ(x, y, z; θ)]²    (12)

The above approximations in (11)–(12) are acceptable for the high SNR images produced by q-dot based imaging and for a large number of voxels M. We conclude from (12) that the estimation of θ is essentially equivalent to fitting μ(x, y, z; θ) to the available measurement g(x, y, z) for each voxel coordinate (x, y, z). Hence, for estimating θ we approximate (6) as follows,

g(x, y, z) = μ(x, y, z; θ) + ε(x, y, z)    (13)

where we assume ε(x, y, z) = n(x, y, z) + w(x, y, z) is the model fitting error. The random variable ε(x, y, z) is i.i.d. from voxel to voxel. Thus, the ML estimates of the parameters θ_o and θ_h in our analysis are equivalent to their corresponding least-squares estimates, which are in fact independent of the noise statistics. The ML estimates of the unknown parameters are equivalent to their least-squares estimates in a linear model when the noise is additive Gaussian and i.i.d. from one measurement to the other [25].
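A minimal sketch of this least-squares equivalence and of the residual-based noise-variance estimate of (12), using a hypothetical one-dimensional model and a coarse grid search in place of a full numerical optimizer:

```python
import numpy as np

rng = np.random.default_rng(1)

def ls_fit_and_noise_variance(g, model, theta_grid):
    """Least-squares fit of a parametric mean to the measurements (the ML
    estimate under i.i.d. Gaussian noise), followed by the residual-based
    noise-variance estimate of eq. (12)."""
    costs = [np.sum((g - model(t)) ** 2) for t in theta_grid]
    theta_hat = theta_grid[int(np.argmin(costs))]       # LS estimate == ML estimate
    sigma2_hat = np.mean((g - model(theta_hat)) ** 2)   # eq. (12)
    return theta_hat, sigma2_hat

# Toy check: recover the center of a 1-D Gaussian bump in Gaussian noise
x = np.arange(200.0)
model = lambda c: 10.0 * np.exp(-(x - c) ** 2 / 50.0)
g = model(120.0) + rng.normal(0.0, 0.5, x.size)
theta_hat, s2 = ls_fit_and_noise_variance(g, model, np.arange(100.0, 140.0, 0.5))
```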

In the following discussion, we adopt a compact notation for the model. We define

(14)

We rewrite (13):

(15)

We assume the available measurements are g(x_i, y_j, z_k), with i = 1 to I, j = 1 to J, and k = 1 to K. With these assumptions and notations, we group the measurements into a vector form:

y = μ(θ) + e    (16)

where y and μ(θ) are M-dimensional vectors (M = IJK) whose components are g(x_i, y_j, z_k) and μ(x_i, y_j, z_k; θ) at the voxel coordinate (x_i, y_j, z_k), respectively, and similarly for e. Since μ(θ) is linear in the object intensity level α, we write μ(θ) = A(θ)α, where the matrix A(θ) collects the normalized model components. The log-likelihood function for estimating θ using forward model (16) is given by

L(θ, α, σ²) = −(M/2) ln(2πσ²) − (1/(2σ²)) ‖y − A(θ)α‖²    (17)

where ‖·‖ denotes the Euclidean vector-norm operation8. The ML estimate of the parameters (see, e.g., [24]) is

θ̂ = arg max_θ ‖P_A(θ) y‖²    (18)

where arg max stands for the argument of the maximum, i.e., the value of the given argument θ for which the given expression attains its maximum value, P_A(θ) = A(θ)[A(θ)ᵀA(θ)]⁻¹A(θ)ᵀ is the projection matrix on the column space of A(θ) [25], and P⊥_A(θ) is the complementary projection matrix [25], given as

P⊥_A(θ) = I − A(θ)[A(θ)ᵀA(θ)]⁻¹A(θ)ᵀ    (19)
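The concentrated-likelihood estimate of (18)-(19) can be sketched as follows for a generic separable model; the one-column design matrix and the toy decaying-exponential example are illustrative assumptions, not the paper's sphere model.

```python
import numpy as np

rng = np.random.default_rng(2)

def concentrated_ls(y, design, theta_grid):
    """Concentrated (separable) least squares, cf. (18)-(19): for each
    candidate nonlinear parameter, project the data onto the column space
    of A(theta) and keep the candidate with the smallest residual energy;
    the linear amplitude then follows in closed form."""
    best = None
    for theta in theta_grid:
        A = design(theta)                      # columns of A(theta)
        proj = A @ np.linalg.pinv(A)           # projection matrix P_A
        resid = y - proj @ y                   # complementary projection, eq. (19)
        cost = float(resid @ resid)
        if best is None or cost < best[0]:
            best = (cost, theta, np.linalg.pinv(A) @ y)
    return best[1], best[2]

# Toy check: one decaying exponential with unknown rate and amplitude
t = np.linspace(0.0, 5.0, 100)
y = 3.0 * np.exp(-1.5 * t) + rng.normal(0.0, 0.01, t.size)
design = lambda r: np.exp(-r * t)[:, None]
rate, amp = concentrated_ls(y, design, np.arange(0.5, 3.0, 0.01))
```

Maximizing the projected energy in (18) is equivalent to minimizing the residual energy used here.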

CRB: The Cramér-Rao bound (CRB) analysis is helpful for analyzing the estimation accuracy using our approximated forward model (13). The CRB is the lowest bound on the variance of any unbiased estimator under certain regularity conditions. It has the important features of being universal, i.e., independent of the algorithm used for estimation among the unbiased ones, and asymptotically tight, meaning that for certain distributions,

8If ��� � �� � � � � � � � � � is a vector then its Euclidean norm is given by��� � � � � � � � � � [19].


there exist algorithms that achieve the bound as the number of samples becomes large. Note that the ML estimate achieves the CRB asymptotically. In brief, the CRB is a performance measure that can be used to determine the best accuracy we can expect for a certain model [25]. We compute the mean of the measurement vector as m(θ) = A(θ)α. We denote F(θ) as the Fisher information matrix with the (i, j)th entry as

F_ij(θ) = (1/σ²) [∂m(θ)/∂θ_i]ᵀ [∂m(θ)/∂θ_j]    (20)

where θ_i is the ith element of the parameter vector θ. The CRB for the unbiased estimates of the parameters in θ is derived from the diagonal elements of the matrix F⁻¹(θ) [25]. The partial derivative terms in (20) can easily be computed using the expression of m(θ); hence we do not include those details here.
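A numerical sketch of the CRB computation in (20), approximating the partial derivatives of the mean by central finite differences; the toy one-dimensional mean model is an illustrative stand-in for the blurred-sphere model.

```python
import numpy as np

def crb_gaussian(mean_fn, theta, sigma2, eps=1e-6):
    """CRB under i.i.d. Gaussian noise: F_ij = (1/sigma2) <dm/dtheta_i,
    dm/dtheta_j>, as in eq. (20), with the partial derivatives of the mean
    taken by central finite differences; returns the diagonal of inv(F)."""
    theta = np.asarray(theta, dtype=float)
    grads = []
    for i in range(theta.size):
        up, dn = theta.copy(), theta.copy()
        up[i] += eps
        dn[i] -= eps
        grads.append(((mean_fn(up) - mean_fn(dn)) / (2 * eps)).ravel())
    fisher = np.stack(grads) @ np.stack(grads).T / sigma2
    return np.diag(np.linalg.inv(fisher))               # CRB per parameter

# Toy check: center and width of a 1-D Gaussian bump; the CRB is linear in sigma2
x = np.linspace(-5.0, 5.0, 201)
bump = lambda th: np.exp(-(x - th[0]) ** 2 / (2 * th[1] ** 2))
crb = crb_gaussian(bump, [0.0, 1.0], sigma2=0.01)
```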

V. NUMERICAL EXAMPLES

We present two numerical examples comparing the performance of our proposed parameter estimation method with the PBD- and BD-based methods. We also compare the mean-square errors (MSE) of the estimated parameters in θ with their analytical bounds on the variance of error.

A. Examples: Data Generation

1) Example 1: In this example, we aim to demonstrate the robustness of our proposed algorithm by comparing its performance with the BD- and PBD-based methods. Here, we simulate the data following the true image distortion phenomenon. In detail, we produce data without following exactly the object model (1), the PSF model (3), and the forward model (15). Here we consider (i) the situation in which the q-dots are randomly placed, and each of their light intensity profiles follows a 3-D Gaussian function in space inside each microparticle; (ii) the Gibson and Lanni PSF model (2); (iii) the under-sampling phenomenon encountered while acquiring the data in real life; (iv) the photon-count process in the CCD detector; and (v) additive A/D conversion noise at the CCD detector output.

We generate image data for a single-sphere-object on a 551 × 551 × 551 sampling lattice. Each q-dot light intensity profile in space is produced using a symmetric 3-D Gaussian function. Here we assume that such a light intensity profile is deterministic and is distributed in space based on the variance parameter of the Gaussian function. The q-dot centers are placed in each voxel location inside the object with probability following a binomial random variable. We assume a large numerical value for the maximum light intensity of each q-dot at its corresponding center location.
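The object-generation step can be sketched as follows; the lattice size, sphere radius, placement probability, q-dot width, and peak intensity used here are illustrative stand-ins for the (larger-scale) values in the text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)

def qdot_sphere(shape=(61, 61, 61), radius=20.0, p=0.01, sigma=1.5, amp=1000.0):
    """Single-sphere object built from randomly placed q-dots: every voxel
    inside the sphere holds a q-dot center with probability p, and each
    q-dot light profile is an isotropic 3-D Gaussian of width sigma.
    All parameter values are illustrative."""
    zc, yc, xc = (np.array(shape) - 1) / 2.0
    z, y, x = np.indices(shape)
    inside = (x - xc) ** 2 + (y - yc) ** 2 + (z - zc) ** 2 <= radius ** 2
    centers = inside & (rng.random(shape) < p)     # Bernoulli placement per voxel
    img = np.zeros(shape)
    img[centers] = amp                             # peak intensity per q-dot center
    return gaussian_filter(img, sigma), centers    # Gaussian light profiles

obj, centers = qdot_sphere()
```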

We generate the synthetic PSF following a parametric PSF model proposed in [7]. In this model, the exponential phase term inside the Gibson and Lanni model (2) is replaced by a pupil-based term, where

(21)

is the pupil function of the objective lens, with its associated unknown parameter vector. The pupil amplitude function is given as

(22)

where one quantity is an unknown parameter, another is the unknown cut-off frequency of the objective lens [26], and these form the unknown parameter vector of the pupil amplitude function. The remaining term is given by

(23)

where the constituent terms are given as follows:

(24)

where this term involves one unknown parameter, and

(25)

where the last term has its own unknown parameter vector. Hence, the pupil function of the objective lens carries the combined unknown parameter vector. We generate synthetic PSF data using fixed numerical values for all of these parameters. We generate the microscope output by convolving the simulated object image with the PSF generated over the same grid. Image formation is a continuous-space process, whereas the deconvolution takes place on sampled images. To incorporate this difference in our simulation, we sample every fifth voxel intensity along every dimension to obtain a reduced image of 111 × 111 × 111 voxels. Using this image, we generate the photon counts in the CCD detector following (5). We generate the output data on the CCD detector by introducing random, additive, and i.i.d. Gaussian noise of mean zero (see (5)). The noise introduced is i.i.d. from voxel to voxel. We vary the noise variance over a range of levels. In Fig. 7, we show the MSEs of the estimated microparticle center parameter as a function of a quantity that is essentially a linear function of the A/D conversion noise variance at the CCD detector output.
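The undersampling and CCD steps of this example can be sketched as follows on a small stand-in volume; the gamma and noise values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def undersample_and_digitize(blurred, step=5, gamma=10.0, sigma_w=1.0):
    """Acquisition chain of Example 1: keep every step-th voxel along each
    axis (551^3 -> 111^3 in the text), then apply the photon-count and A/D
    noise steps; gamma and sigma_w are illustrative values."""
    reduced = blurred[::step, ::step, ::step]
    photons = rng.poisson(gamma * np.clip(reduced, 0.0, None))  # cf. eq. (5)
    return photons / gamma + rng.normal(0.0, sigma_w, reduced.shape)

# Small stand-in volume (a full 551^3 float array would need several GB)
blurred = np.full((55, 55, 55), 20.0)
g = undersample_and_digitize(blurred)
```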

2) Example 2: In this example, we aim to compare the estimation performance for the unknown parameters with their corresponding CRBs using our proposed algorithm and the BD- and PBD-based algorithms. We simulate the data here using the forward model as in (15).

We generate image data for a single-sphere-object on a 111 × 111 × 111 sampling lattice, and we generate the synthetic PSF following (3).


We generate the output of the microscope and the CCD detector by convolving the simulated object image with the microscope PSF generated over the same grid, with additive measurement noise following forward model (13). The data in this example are generated using forward model (13) with random, additive, and i.i.d. Gaussian noise of mean zero and variance σ². We plot the MSEs of the estimated microparticle center parameter as a function of SNR for these data in Fig. 8. We define the SNR in Example 2 as

SNR = 10 log₁₀( ‖μ(θ)‖² / (M σ²) )    (26)

B. Parameter Estimation

For both examples, we perform the estimation using the following three techniques and compare their performances.

1) Our proposed parametric method: We estimate the microparticle center parameters using (18). We assume that the sphere parameters and the PSF parameters are unknown.

2) Parametric blind-deconvolution (PBD)-based method (see Appendix I) [7]: We estimate the object function as follows:

o_(k+1)(x, y, z) = o_k(x, y, z) [ h(−x, −y, −z) ∗ ( g(x, y, z) / (h ∗ o_k)(x, y, z) ) ]    (27)

where o, h, and g denote the object, PSF, and output measurement, respectively, and o_k is the estimated object function at the kth iteration [7]. In this paper, we assume that the PSF is known a priori for estimating the object function using PBD. We define the relative change between consecutive object estimates as the error at the kth iteration. We continue iterating (27) until this error falls below a preset threshold.

3) Conventional blind-deconvolution (BD)-based method (see Appendix I) [8]: In this method, we simultaneously estimate the unknown PSF and the object functions using the deconvblind command of MATLAB.

Microparticle Location Estimation: Our proposed method (see Section IV) directly estimates the microparticle center parameters. However, the BD and PBD algorithms do not estimate these quantitatively. In those cases, we first set to zero the voxel intensities of the estimated object functions below a certain threshold level. Then we average the voxel coordinates with nonzero intensity values to estimate the microparticle centers. We present the estimation performances for the numerical examples in Figs. 7 and 8 (see also Section V.C for more discussion).
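The threshold-and-average center estimate used for the BD and PBD outputs can be sketched as follows; the threshold fraction is an illustrative choice.

```python
import numpy as np

def centroid_above_threshold(vol, frac=0.2):
    """Center estimate for a deconvolved object: zero out voxels below a
    threshold (here a fraction of the peak, an illustrative choice) and
    average the coordinates of the surviving voxels."""
    coords = np.argwhere(vol >= frac * vol.max())   # (n, 3) voxel coordinates
    return coords.mean(axis=0)

# Toy check: centroid of an off-center bright block
vol = np.zeros((40, 40, 40))
vol[10:15, 20:25, 5:10] = 1.0
center = centroid_above_threshold(vol)
```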

C. Results and Discussion

In Example 1 (see Fig. 7), our method does not perform as well as the BD- and PBD-based methods at very low SNR. This

Fig. 7. Mean-square errors of the estimated microparticle center parameter as a function of the varying A/D conversion noise variance at the CCD detector output. The simulated data are generated following the original physical phenomenon (see Section V). Our proposed method outperforms the BD- and PBD-based methods for low A/D conversion noise levels.

Fig. 8. Mean-square errors and Cramér-Rao bound of the estimated microparticle center parameter as a function of SNR. The data are generated using forward model (13) (see Section V). Our algorithm achieves the Cramér-Rao bound; the other two do not. Also, our proposed method significantly outperforms the other two methods in high signal-to-noise ratio regions.

example confirms that the Gaussian PSF model is reasonable for solving the deconvolution problem for high SNR imaging of simple object shapes. In Example 2 (see Fig. 8), at low SNR, PBD outperforms our proposed method, but the estimates using both PBD and BD fail to achieve the CRB. In the same example, our method performs the best among the three starting from 5 dB and also achieves the CRB. In summary, in both examples we observe that (i) our proposed method always performs better in the high SNR regions and (ii) BD performs worst among the three. On the other hand, the computational speed of BD is the highest among the three algorithms. It is worth mentioning that q-dot based imaging produces high SNR images in practical experiments. The main reason for the better performance of our algorithm is that it contains prior information about the object shape (spherical), whereas the other methods


Fig. 9. (a) Bar-code q-dot light-intensity image on the focal plane of reference at 0 μm of a section of seven microparticle images from a spot of the 3-D microarray device, (b) their image after a thresholding operation, (c) their binary image, where the red color signifies their foreground shape, and (d) their segmented versions, in which different colors signify separated single-sphere-objects.

(BD and PBD) do not carry that assumption. Note that we assume the PSF is known while evaluating the PBD algorithm, since an unknown-PSF-based PBD analysis requires an extensive computational load [7]. We conclude that our proposed estimation method performs significantly better than the two existing methods for high SNR data.

VI. ESTIMATION RESULTS USING REAL DATA

In this section, we present a quantitative estimation analysis of a real-data set using our method and compare the performance with the BD algorithm. We do not use the PBD algorithm for this performance comparison, although it performs better than BD in the numerical examples. We make this selection because the PSF is unknown for the real-data set; hence PBD requires an optimal model selection analysis for choosing and estimating the unknown parameters of the PSF in (27). This requirement makes PBD a time-intensive algorithm for processing an image with a large number of microparticles, as shown in Fig. 4. The BD algorithm is faster and does not perform much worse than the PBD algorithm (see Figs. 7 and 8).

Experiment Details: We apply our proposed algorithm and the BD-based estimation algorithm to a section of the 3-D sphere images acquired on a 1036 × 1360 × 51 sampling lattice with a sampling interval of 0.16 μm along the x, y, and z directions. We extract a section of seven microparticle images on a 241 × 340 × 51 sampling lattice from the original data to test the estimation performance. The radius of the spheres is known by experimental convention [4], [5]. The particles are observed using a standard widefield microscope, and the images are captured by attaching a cooled CCD camera to the microscope.

Image Segmentation: We segment single-sphere-shaped images of the seven microparticles using the k-means clustering algorithm9. The k-means algorithm is evaluated using the kmeans command of MATLAB [8]. We note that sometimes the microparticle images merge into groups of two or three (see Fig. 4) and hence are optically indistinguishable. This requires performing our proposed estimation analysis using a two-sphere-object or a three-sphere-object model for greater accuracy at the cost of more computation time.

9The k-means algorithm is a clustering algorithm that groups n objects, based on their attributes, into k partitions, with k < n.

TABLE I
COMPARISON OF ESTIMATION PERFORMANCES USING OUR PROPOSED AND BLIND DECONVOLUTION-BASED METHODS. WE USE A THREE-DIMENSIONAL IMAGE SEGMENT OF SEVEN MICROPARTICLES FROM THE REAL DATA AS SHOWN IN FIG. 9. THE ORIGIN IS DEFINED AT THE CENTER OF THIS SEGMENT. EUCLIDEAN DISTANCES ARE COMPUTED BETWEEN (I) THE ESTIMATED CENTERS AND (II) THE ESTIMATED INTENSITIES OBTAINED BY THE TWO METHODS

Evaluation of kmeans requires prior knowledge of the number of clusters. For the single-sphere-object-based models, the total number of clusters is equivalent to the number of microparticles in the image that we analyze. However, for the case of multiple-sphere-object-based models, the number of clusters in the image is hard to determine and requires an added model selection analysis, resulting in an increased computational load. Hence, we refrain from processing the image using multiple-sphere-object-based models.
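A sketch of the segmentation step using SciPy's kmeans2 in place of MATLAB's kmeans; the threshold fraction and toy volume are illustrative assumptions.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def segment_spheres(vol, n_spheres, frac=0.3):
    """Counterpart of the MATLAB kmeans step: threshold the volume, then
    cluster the foreground voxel coordinates into n_spheres groups; each
    cluster is treated as one single-sphere-object image. The number of
    clusters must be known a priori, as noted in the text."""
    coords = np.argwhere(vol >= frac * vol.max()).astype(float)
    _, labels = kmeans2(coords, n_spheres, minit='++', seed=5)
    return coords, labels

# Toy check: two well-separated blobs are split into two clusters
vol = np.zeros((50, 50, 10))
vol[5:12, 5:12, 3:7] = 1.0
vol[35:42, 35:42, 3:7] = 1.0
coords, labels = segment_spheres(vol, 2)
```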

We employ a clustering algorithm here for the purpose of presenting an automated imaging analysis framework for an image with many microparticles. In the first step of such automated analysis, the microparticles' image sections are clustered. Then each microparticle location is estimated from its corresponding image section using our proposed algorithm. In order to present this automated analysis, we confine ourselves to evaluating only a segment of seven microparticles' images. However, this framework is general and applicable to an image with any number of microparticles in the microarray. Note that we could also employ other clustering algorithms, such as a mixture-of-Gaussians-based method, for this purpose.

Microparticle Localization and Quantification: We localize and quantify each segmented single-sphere-object using our proposed algorithm and the BD algorithm from the block of seven microparticle images. The BD algorithm does not estimate the unknown parameters quantitatively. Hence, for localization using this algorithm, we adopt a method similar to the one described in Section V.B. For quantification, we first average the intensities, above a certain threshold level, of the estimated objects at the voxels with nonzero intensity values. Finally, we multiply those average values by the corresponding maximum PSF intensity values to estimate the intensity parameter inside the microparticles at a specific wavelength.

Results and Discussion: In Fig. 9, we present our analysis for the real-data set. Here we define the origin at the center of the seven-microparticle block. In Fig. 9(a), we show the bar-code q-dot light-intensity image on the focal plane of reference of the microparticle image section at 0 μm. We set the voxel intensities to zero below a certain threshold in this 3-D image, and the resulting image is shown in Fig. 9(b). The binary version of this image is shown in Fig. 9(c), where we indicate the nonzero intensity regions in red. Finally, in Fig. 9(d) we present their segmented version, where different colors specify separated single-sphere-objects.

The results of our proposed algorithm and the BD-based algorithm are presented in Table I. In this table, we include the Euclidean distances between the center location estimates and between the intensity level estimates obtained using the two methods. From the table we observe that evaluation using both methods yields similar results for Spheres 1 and 2. This is expected, since the q-dot light is captured at high SNR for the real data. Recall that in the numerical example of Section V.A.1 we showed that our proposed and BD methods perform similarly for measurements captured at high SNR.

However, the estimation results differ slightly between the two methods for Spheres 3, 5, 6, and 7, even though these spheres' images were captured at high SNR. The reason is that these spheres merge into groups of two in both cases. Hence, their intensities contribute to each other during the convolution operation. Consequently, fitting single-sphere-object models separately to each segmented single-sphere image of a merged group of two sphere images does not perform well computationally. Instead, the estimation analysis of these spheres can be improved using a two-sphere-object-based analysis.

Apart from this, Sphere 4 does not produce a consistent result using either method. In this case, it is not possible to know whether the image is produced by the q-dot light of any microparticle. It might be that q-dots from a damaged microparticle are illuminated there, since the estimated signal is much weaker.

VII. COMPARISON WITH [9]

We employ optically encoded microparticles randomly placed in a microarray for massively parallel and high-throughput analysis of biological molecules [9]. The microparticles contain multicolor q-dots and thus have built-in codes for rapid target identification. Such microparticles' surfaces are conjugated with bio-molecular probes for target detection, while identification codes are embedded in the microparticles' interior [9].

In addition, in our work we also employ secondary nanoparticles embedded with q-dots for on-off signaling and enhanced detection sensitivity. The q-dot light colors of the nanoparticles are chosen to be unique irrespective of the targets, and their illumination wavelength differs from the wavelengths of the microparticles' q-dots. Though our real-data experiment is confined to microparticles embedded with single-color q-dots, we can extend our work to target-captured microparticles embedded with multicolor q-dots. Note that the q-dot light from the nanoparticles of a target-captured microparticle ideally forms a shell-shaped object, and similarly the q-dot light from a microparticle ideally forms a sphere-shaped object.

In a microparticle, the q-dots are denser near the periphery than in the interior [9]. However, no experimental evidence exists in the literature describing such a distribution. Hence, in our work we assume a homogeneous intensity profile for the microparticles. This assumption can be extended to a more realistic modeling scenario for a microparticle with an increasing intensity level from the center location toward the periphery.

Each q-dot light intensity follows a 3-D Gaussian profile in space [9]. However, there exist no data to model the spatial spread of such a profile for a single q-dot, since q-dots are imaged in groups [9]. In our numerical examples (Section V), we assumed that such a spread is approximately in the nm range. This assumption is realistic since the q-dots that we employ for experimental data collection each have a diameter of 6 nm.

VIII. CONCLUSION

We addressed the problem of locating q-dot-encoded microparticles from 3-D images of an ultra-high density 3-D microarray. Our future aim is to quantify the multiple targets that this device will capture, using the microparticle location estimates as prior information. Such an analysis is a two-stage estimation process.

Our work is motivated by the fact that in fluorescence microscopy imaging, when the object shape is simple, it can be easily modeled using simple parametric forms with the unknown information of interest as parameters. Then, a problem-specific deconvolution algorithm can be developed using popular ML- or least-squares-based techniques. This avoids the need for commercial deconvolution software, which is often costly and time intensive to run. Simple and symmetric object models motivate our use of a Gaussian PSF model for the widefield fluorescence microscope. In experiments, q-dot based imaging produces high SNR images. This motivates modeling the statistical measurement using Gaussian noise statistics. Using these practical assumptions, we developed a parametric image deconvolution algorithm using the concentrated likelihood formulation [24] as presented in (18). We achieve robust estimation performance in the practical high SNR scenarios using our proposed method. The performance comparison of the parameter estimates with the corresponding CRBs, as presented in numerical Example 2 (see Section V), supports the use of the Gaussian PSF and Gaussian noise models for simple symmetric object images captured at high SNR using fluorescence microscopes.

In summary, in the numerical examples, our proposed simple parametric ML-based image deconvolution method performs better for simple object shapes at high SNR than the conventional methods. For the real-data high-SNR images, our proposed method performs similarly to the conventional BD method only for the well-separated microparticles. This motivates us to calibrate and re-design our proposed microarray device experimentally, which we plan to perform in our future research. Namely, we will compute the optimum distance between the microparticles using statistical hypothesis testing methods to find the desired separation that guarantees no cross-talk between the microparticles during imaging after the target-capturing process. This will enable us to place the microparticles in predefined holes separated by the optimum distance that we will compute. Such an implementation will expedite the first-step segmentation process during the imaging analysis. Namely, it will be sufficient to segment the microparticle images using a simple grid-based technique [28] instead of other clustering algorithms.

APPENDIX I

We discuss the PBD- and BD-based methods. We first review a BD method based on the Richardson-Lucy (RL) algorithm. The RL algorithm was originally developed from Bayes's theorem, which is given by

P(x|y) = P(y|x) P(x) / Σ_(x′) P(y|x′) P(x′)    (28)

where P(x|y) is the conditional probability of the event x given the event y, P(x) is the probability of the event x, and P(y|x) is the inverse conditional probability, i.e., the probability of event y given event x. The probability P(x) can be identified as the object distribution o(x); the conditional probability P(y|x) can be identified as the PSF centered at x, i.e., h(y; x); and the probability P(y) can be identified as the degraded image g(y). The inverse relation of the iterative algorithm is given by,

o_(k+1)(x) = o_k(x) Σ_y [ g(y) h(y; x) / Σ_(x′) h(y; x′) o_k(x′) ]    (29)

where k is the iteration number. Under an isoplanatic condition, we write (29) as,

o_(k+1)(x) = o_k(x) [ h(−x) ∗ ( g(x) / (h ∗ o_k)(x) ) ]    (30)

When the PSF h(x) is known, one finds the object o(x) by iterating (30) until convergence. An initial guess o_0(x) is required for the object to start the algorithm. An advantage of this algorithm is that it enforces a non-negativity constraint if the initial guess is non-negative. Also, energy is conserved in these iterations. The blind form of the RL algorithm is given by,

h_(k+1)(x) = h_k(x) [ o_k(−x) ∗ ( g(x) / (o_k ∗ h_k)(x) ) ]    (31)

o_(k+1)(x) = o_k(x) [ h_(k+1)(−x) ∗ ( g(x) / (h_(k+1) ∗ o_k)(x) ) ]    (32)

At the kth blind iteration of this algorithm, both (31) and (32) are performed, assuming the estimated object at the previous blind step is known in (31). For each of (31) and (32), a specific number of RL iterations is performed. The loop is repeated until convergence. Extension of the above algorithms to 3-D images is trivial. Initial guesses are required for both the object and the PSF. No positivity constraints are required because the above algorithms ensure positivity.
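The RL and blind-RL iterations (30)-(32) can be sketched in 2-D as follows; the clipping guard, iteration counts, and toy point-source example are illustrative choices, not the MATLAB implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def rl_step(est, kernel, g, eps=1e-12):
    """One Richardson-Lucy multiplicative update, eq. (30): multiply the
    current estimate by the flipped-kernel correlation of the ratio image.
    The final clip only guards against FFT rounding below zero."""
    conv = fftconvolve(est, kernel, mode='same')
    ratio = g / np.maximum(conv, eps)
    update = fftconvolve(ratio, kernel[::-1, ::-1], mode='same')
    return np.clip(est * update, 0.0, None)

def blind_rl(g, o0, h0, n_blind=10, n_inner=5):
    """Blind RL, eqs. (31)-(32): alternately update the PSF and the object,
    with n_inner RL iterations per quantity at each blind step (2-D here
    for brevity; iteration counts are illustrative)."""
    o, h = o0.copy(), h0.copy()
    for _ in range(n_blind):
        for _ in range(n_inner):
            h = rl_step(h, o, g)               # eq. (31)
        h /= h.sum()                           # keep the PSF normalized
        for _ in range(n_inner):
            o = rl_step(o, h, g)               # eq. (32)
    return o, h

# Known-PSF RL on a blurred point source: the estimate re-sharpens
y, x = np.indices((31, 31))
obj = np.zeros((31, 31)); obj[15, 15] = 100.0
psf = np.exp(-((x - 15) ** 2 + (y - 15) ** 2) / 8.0); psf /= psf.sum()
g = np.clip(fftconvolve(obj, psf, mode='same'), 0.0, None)
o = np.full_like(g, g.mean())
for _ in range(50):
    o = rl_step(o, psf, g)                     # eq. (30)
```

Note how the multiplicative form preserves non-negativity of the estimates, matching the property discussed above.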

The MATLAB-based BD algorithm uses an accelerated and damped version of the RL algorithm specified by (31) and (32) [8]. Note that a similar algorithm specified by (31) and (32) can be developed using an expectation-maximization (EM) framework, assuming that the measurement is a Poisson-noise-corrupted version of the convolution of the object and the PSF. Motivated by this, the PBD algorithm was derived in [7] assuming a measurement that follows the forward model (5). There, an EM formulation [25] is employed to maximize the likelihood. The authors also proposed a parametric PSF model, as we reviewed in Section V.A.1. In the PBD algorithm, the object is updated using an iterative form similar to (30) (see (27)). In our numerical example (see Section V.A.1), we assumed that the PSF is known in (27) for simplicity. However, in the original work the authors also proposed a numerical method to update the PSF parameters at each PBD step [7].

ACKNOWLEDGMENT

We are grateful to Professor David Kelso of Northwestern University for providing us with the real-data images for this research.

REFERENCES

[1] P. W. Stevens, C. H. J. Wang, and D. M. Kelso, “Immobilized particle arrays: Coalescence of planar and suspension-array technologies,” Anal. Chem., vol. 75, no. 5, pp. 1141–1146, 2003.

[2] P. W. Stevens and D. M. Kelso, “Imaging and analysis of immobilized particle arrays,” Anal. Chem., vol. 75, no. 5, pp. 1147–1154, 2003.

[3] G. M. P. van der Kempen, L. J. van Vliet, P. J. Verveer, and H. T. M. van der Voort, “A quantitative comparison of image restoration methods for confocal microscopy,” J. Microscopy, vol. 185, no. 3, pp. 354–365, 1997.

[4] A. Mathur and D. M. Kelso, “Segmentation of microspheres in ultra-high density microsphere-based assays,” in Proc. SPIE, 2006, vol. 6064, pp. 60640B1–60640B10.

[5] A. Mathur, “Image Analysis of Ultra-High Density, Multiplexed, Microsphere-Based Assays,” Ph.D. Thesis, Northwestern Univ., Evanston, IL, 2006.

[6] P. Sarder and A. Nehorai, “Deconvolution methods of 3D fluorescence microscopy images: An overview,” IEEE Signal Process. Mag., vol. 23, no. 3, pp. 32–45, 2006.

[7] J. Markham and J. Conchello, “Parametric blind deconvolution: A robust method for the simultaneous estimation of image and blur,” J. Opt. Soc. Amer. A, vol. 16, no. 10, pp. 2377–2391, 1999.

[8] MATLAB Software Package, The MathWorks [Online]. Available:http://www.mathworks.com/products/matlab/

[9] M. Han, X. Gao, J. Z. Su, and S. Nie, “Quantum-dot-tagged microbeadsfor multiplexed optical coding of biomolecules,” Nature Biotechnol.,vol. 19, pp. 631–635, 2001.

[10] Crystalplex Corp.. Pittsburgh, PA [Online]. Available: http://www.crystalplex.com

[11] J. G. McNally, T. Karpova, J. Cooper, and J. A. Conchello, “Three-dimensional imaging by deconvolution microscopy,” Methods, vol. 19,no. 3, pp. 373–385, 1999.

[12] P. J. Verveer, “Computational and Optical Methods for Improving Res-olution and Signal Quality in Fluorescence Microscopy,” Ph.D. Thesis,Delft Technical Univ., Delft, The Netherlands, 1998.

[13] H. T. M. van der Voort and K. C. Strasters, “Restoration of confocalimages for quantitative image analysis,” J. Microscopy, vol. 178, no. 2,pp. 165–181, 1995.

[14] A. V. Oppenheim, A. S. Willsky, S. Hamid, and S. H. Nawab, Signalsand Systems, 2nd ed. Englewood Cliffs, NJ: Prentice Hall, 1996.

[15] M. P. van Kempen, “Image Restoration in Fluorescence Microscopy,”Ph.D. Thesis, Delft Technical Univ., Delft, The Netherlands, 1999.

[16] D. Kundur and D. Hatzinakos, “Blind image deconvolution,” IEEESignal Process. Mag., vol. 13, no. 3, pp. 43–64, 1996.

[17] S. F. Gibson and F. Lanni, “Experimental test of an analytical model ofaberration in an oil-immersion objective lens used in three-dimensionallight microscopy,” J. Opt. Soc. Amer. A, vol. 8, no. 10, pp. 1601–1612,1991.

[18] T. J. Holmes, “Maximum-likelihood image restoration adapted fornon coherent optical imaging,” J. Opt. Soc. Amer. A, vol. 5, no. 5, pp.666–673, 1988.

[19] G. Strang, Introduction to Linear Algebra, 3rd ed. Wellesley, MA:Wellesley-Cambridge Press, 2003.

[20] B. Zhang, J. Zerubia, and J. O. Martin, “Gaussian approximations offluorescence microscope point spread function models,” Applied Opt.,vol. 46, no. 10, pp. 1819–1829, 2007.

[21] D. L. Snyder and M. I. Miller, Random Point Processes in Time andSpace, 2nd ed. New York: Springer-Verlag, 1991.

[22] T. M. Jovin and D. J. Arndt-Jovin, “Luminescence digital imaging mi-croscopy,” Annu. Rev. Biophys. Biophys. Chem., vol. 18, pp. 271–308,1989.

[23] A. Papoulis and S. U. Pillai, Probability, Random Variables and Sto-chastic Processes, 4th ed. New York: McGraw-Hill Science, 2001.

[24] B. Porat, Digital Processing of Random Signals: Theory andMethods. Englewood Cliffs, NJ: Prentice Hall, 1994.

[25] S. M. Kay, Fundamentals of Statistical Signal Processing, EstimationTheory. Englewood Cliffs, NJ: PTR Prentice Hall, 1993.

[26] J. A. Conchello and Q. Yu, C. J. Cogswell, G. Kino, and T. Willson,Eds., “Parametric blind deconvolution of fluorescence microscopyimages: Preliminary results,” in Three-Dimensional Microscopy:Image Acquisition and Processing III Proc. SPIE, 1996, vol. 2655, pp.164–174.

[27] Y. M. Cheung, “k-means—A Generalized k-means clustering algo-rithm with unknown cluster number,” in Proc. 3rd Int. Conf. Intell.Data Eng. Automat. Learn. (IDEAL’02), Manchester, U.K., Aug.12–14, 2002, pp. 307–317.

[28] Y. H. Yang, M. J. Buckley, S. Dudoit, and T. P. Speed, “Comparisonof methods for image analysis on cDNA microarray data,” J. Comput.Graph. Stat., vol. 11, pp. 108–136, 2002.

Pinaki Sarder (S’03) received the B.Tech. degree in electrical engineering from the Indian Institute of Technology, Kanpur, in 2003. He is currently a Ph.D. student and research assistant in the Department of Electrical and Systems Engineering at Washington University in St. Louis (WUSTL).

His research interest is in the area of statistical signal processing, with a focus on medical imaging and genomics.

Mr. Sarder is a recipient of the Imaging Sciences Pathway program fellowship for graduate students at WUSTL from January 2007 to December 2007.

Authorized licensed use limited to: WASHINGTON UNIVERSITY LIBRARIES. Downloaded on February 7, 2009 at 11:06 from IEEE Xplore. Restrictions apply.


Arye Nehorai (S’80-M’83-SM’90-F’94) received the B.Sc. and M.Sc. degrees in electrical engineering from the Technion, Israel, and the Ph.D. degree in electrical engineering from Stanford University, California.

From 1985 to 1995, he was a faculty member in the Department of Electrical Engineering at Yale University. In 1995, he joined the Department of Electrical Engineering and Computer Science at the University of Illinois at Chicago (UIC) as a Full Professor. From 2000 to 2001, he was Chair of the department’s Electrical and Computer Engineering (ECE) Division, which then became a new department. In 2006, he became Chairman of the Department of Electrical and Systems Engineering at Washington University in St. Louis. He is the inaugural holder of the Eugene and Martha Lohman Professorship and has been the Director of the Center for Sensor Signal and Information Processing (CSSIP) at WUSTL since 2006.

Dr. Nehorai was Editor-in-Chief of the IEEE TRANSACTIONS ON SIGNAL PROCESSING during 2000–2002. From 2003 to 2005, he was Vice President (Publications) of the IEEE Signal Processing Society, Chair of the Publications Board, a member of the Board of Governors, and a member of the Executive Committee of this Society. From 2003 to 2006, he was the founding editor of the special columns on Leadership Reflections in the IEEE Signal Processing Magazine. He was co-recipient of the IEEE SPS 1989 Senior Award for Best Paper with P. Stoica, coauthor of the 2003 Young Author Best Paper Award, and co-recipient of the 2004 Magazine Paper Award with A. Dogandzic. He was elected Distinguished Lecturer of the IEEE SPS for the term 2004 to 2005 and received the 2006 IEEE SPS Technical Achievement Award. He is the Principal Investigator of the new multidisciplinary university research initiative (MURI) project entitled Adaptive Waveform Diversity for Full Spectral Dominance. He has been a Fellow of the IEEE since 1994 and of the Royal Statistical Society since 1996. In 2001, he was named University Scholar of the University of Illinois.
