2005 IEEE 6th Workshop on Signal Processing Advances in Wireless Communications

ADAPTIVE NEIGHBORHOODS FOR MANIFOLD LEARNING-BASED SENSOR LOCALIZATION

Neal Patwari and Alfred O. Hero III

University of Michigan, Dept. of Electrical Engineering & Computer Science, 1301 Beal Avenue, Ann Arbor, MI, USA

E-mail: [npatwari, hero]@eecs.umich.edu

ABSTRACT

Connectivity measurements, i.e., whether or not two sensors can communicate, can be used to calculate location in networks of inexpensive wireless sensors. We show that a Laplacian Eigenmaps-based algorithm, combined with an adaptive neighbor weighting method, can provide an accurate, low complexity solution. Laplacian Eigenmaps is a manifold learning method which optimizes using eigen-decomposition, thus is non-iterative and finds the global optimum. Comparatively, the new localization method is less computationally complex than multi-dimensional scaling (MDS), and we show via simulation that it has lower variance.

1. INTRODUCTION

Emerging applications of wireless sensor networks will depend on automatic and accurate location of thousands of sensors. Device cost will be a key factor. By eliminating the need for additional hardware, such as for GPS, ultrasound, or high accuracy RF time-of-arrival (TOA), we can widen the sensor network application space. In this paper, we compare localization algorithms which use connectivity measurements. If a sensor can successfully demodulate the packets transmitted by another sensor, then the two are considered to be connected. When received signal strength (RSS) is too low, packets can't be demodulated, and sensors will not be connected.

This paper emphasizes that connectivity is a measurement subject to error due to the unpredictable RF channel. Noisy measurements lead to noisy coordinate estimates. Localization algorithms must be chosen to minimize the bias and variance of the coordinate estimates, and to keep computational complexity low, so that sensor localization will scale well with the size of the network. This paper introduces a Laplacian Eigenmaps-based localization method which has both lower computational complexity and lower variance than MDS-based methods.

1.1. Estimation Problem Statement: Formally stated, we consider a network of n unknown-location sensors and m reference (or anchor) sensors which have a priori knowledge of their coordinates (for a total of N = n + m sensors). The cooperative localization problem we consider in this paper is the estimation of the unknown-location sensor coordinates $\{z_i\}$ for $i = 1,\dots,n$, given: (1) reference coordinates $\{z_i\}_{i=n+1,\dots,N}$, and (2) pair-wise connectivity measurements $\{Q_{i,j}\}$. Connectivity measurements are random variables, subject to error, and their statistical model is described in Section 2. In this paper, we do not consider measurements of TOA, RSS, or AOA; furthermore, references are assumed to have exact coordinate knowledge. Extension of these results is future work.

This research was supported in part by National Science Foundation ITR Grant No. CCR-0325571.

1.2. Relevant Research: Connectivity measurement-based localization, also called range-free localization, has found considerable application in ad hoc networks and wireless sensor networks, e.g., in [1, 2, 3]. In particular, localization via MDS was introduced in [2], which demonstrated that localization can be achieved without resorting to iterative optimization algorithms that don't always converge to the global maximum. The MDS-MAP method in [2] effectively applies the manifold learning technique called Isomap [4] to the connectivity-based sensor localization problem. We compare our new method to the MDS-MAP method in Sections 3.1 and 5.

2. CONNECTIVITY MEASUREMENT MODEL

The key to developing reliable localization systems is to accurately represent the severely degrading effects of the RF channel. We do not consider two sensors to be connected solely based on the distance between them - two sensors are connected if the receiving sensor can successfully demodulate packets transmitted by the other sensor. The receiver fails to successfully demodulate packets when the received signal strength (RSS) is too low. Since RSS is a random variable due to the unpredictability of the fading channel, and connectivity is a function of RSS, connectivity is also a random variable.

Specifically, the connectivity measurement of sensors i and j, $Q_{i,j}$, is modeled as a binary quantization of RSS,

$$ Q_{i,j} = \begin{cases} 1, & P_{i,j} \ge P_1 \\ 0, & P_{i,j} < P_1 \end{cases} \qquad (1) $$




where $P_{i,j}$ is the received power (dBm) at sensor i transmitted by sensor j, and $P_1$ is the receiver threshold (dBm) under which packets cannot be demodulated. This step function is an approximation, but is an accurate model for most digital receivers.

Received power $P_{i,j}$ is strongly affected by shadowing and multipath fading. For a particular environment of deployment, the walls, furniture, buildings, trees and other obstructions in the channel between the two devices cause these deleterious channel effects. Since we can't predetermine the exact layout of every place of deployment, we have to consider these effects to be random. Both empirical and theoretical evidence shows that RSS is well-modeled as a log-normal random variable [5]. Since $P_{i,j}$ is expressed in dB, it is Gaussian distributed, with mean $\bar{P}(d_{i,j})$ and variance $\sigma_{dB}^2$. The mean received power $\bar{P}$ is an exponentially decreasing function of the actual transmitter-receiver separation distance $d_{i,j} = \|z_i - z_j\|$,

$$ \bar{P}(d_{i,j}) = P_0 - 10\, n_p \log_{10}\!\left(d_{i,j}/d_0\right) \qquad (2) $$

where $P_0$ is the received power (dBm) at a short reference distance $d_0$, and $n_p$ is the 'path-loss exponent', typically between 2 and 4. Also, values for $\sigma_{dB}$ are usually between 4 and 12 [5]. The precision possible from connectivity-based localization is proportional to the ratio $\sigma_{dB}/n_p$ [6].

Because of (2), we can define the distance R at which the mean received power is equal to the receiver threshold $P_1$,

$$ R = d_0\, 10^{(P_0 - P_1)/(10\, n_p)} \qquad (3) $$

We call R the 'mean communication range'. Two devices separated by R have a 50% chance of being connected.

In real networks, connectivity is not symmetric. If a pair of devices don't have the same transmit power, they will be connected more often when the device with higher transmit power is transmitting. However, an asymmetric connectivity graph provides more information than a symmetric graph: devices are 'in-range', 'out-of-range', or 'intermediate-range' (when devices are connected in only one direction). Essentially, asymmetric connectivity measurements are equivalent to 3-level quantized received signal strength (QRSS) [6]. Localization in this paper is limited to less informative, symmetric connectivity measurements.
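For concreteness, the following is a minimal simulation sketch of this measurement model. The parameter values ($P_0$, $d_0$, $n_p$, $\sigma_{dB}$, $P_1$) are illustrative and not taken from the paper, and drawing a single symmetric shadowing term per pair is simply one way to obtain the symmetric connectivity used here:

```python
import numpy as np

def simulate_connectivity(coords, p0=-40.0, d0=1.0, n_p=3.0,
                          sigma_db=8.0, p1=-70.0, rng=None):
    """Draw one symmetric connectivity matrix Q from the Section 2 model.

    coords : (N, 2) array of sensor coordinates.
    Q[i, j] = 1 if the log-normal RSS from sensor j at sensor i exceeds the
    demodulation threshold p1, following Eqs. (1)-(2).
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, d0)                         # avoid log(0); diagonal is unused
    p_mean = p0 - 10.0 * n_p * np.log10(d / d0)     # Eq. (2): mean RSS in dBm
    shadow = rng.normal(0.0, sigma_db, size=(n, n))
    shadow = (shadow + shadow.T) / np.sqrt(2.0)     # one symmetric shadowing term per pair
    q = (p_mean + shadow >= p1).astype(float)       # Eq. (1): binary quantization
    np.fill_diagonal(q, 0.0)
    return q
```

With these definitions, two sensors separated by exactly $R = d_0\, 10^{(P_0 - P_1)/(10 n_p)}$ from (3) are connected with probability 1/2.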

3. LAPLACIAN EIGENMAPS

The Laplacian Eigenmaps method considers the minimization of the cost $S_{LE}$ [7]:

$$ S_{LE} = \sum_{i,j} w_{i,j}\, \|z_i - z_j\|^2 \qquad (4) $$

subject to the translation and scaling constraints,

$$ \sum_{i=1}^{N} z_i = 0, \qquad \frac{1}{N}\sum_{i=1}^{N} \|z_i\|^2 = 1. \qquad (5) $$

The minimum of cost $S_{LE}$ without any constraints would occur when all the coordinates $z_i$ were equal. The constraints in (5) remove the translation ambiguity by setting the origin as the center, and counteract the tendency to put all points at the origin by mandating a unit norm average coordinate.

The benefit of the formulation in (4) and (5) is that the globally optimal solution can be found via eigen-decomposition. We define the N x N weight matrix $W = [w_{i,j}]_{i,j=1}^{N}$ and its column sums (or row sums, since W is symmetric) $u_i = \sum_{j=1}^{N} w_{i,j}$. Then the graph Laplacian L is defined as

$$ L = \mathrm{diag}[u_1, \dots, u_N] - W \qquad (6) $$

where $\mathrm{diag}[u_1, \dots, u_N]$ is the diagonal matrix with $\{u_i\}$ on its diagonal. The eigen-decomposition of L is the set of $(\lambda_k, v_k)$, for eigenvalues $\lambda_k$ and eigenvectors $v_k$, $k = 1,\dots,N$. Here, we assume w.l.o.g. that the eigenvectors are sorted in increasing order by magnitude of eigenvalue. As presented in detail by Belkin and Niyogi in [7], the $v_k$ for $k = 2,\dots,d+1$ provide the optimal lowest-cost, d-dimensional solution to (4). Specifically,

$$ z_i = [v_2(i), \dots, v_{d+1}(i)]^T \qquad (7) $$

where $v_k(i)$ is the ith element of the kth eigenvector.

Finding the smallest eigenvalues and eigenvectors of a sparse and symmetric matrix is a problem which has been studied for decades in physics and chemistry [8, 9], and can be solved using distributed algorithms for parallel processing. The computational complexity is $O(KN^2)$, where K is the average number of neighbors. Note that MDS requires decomposition of a full matrix, which is an $O(N^3)$ operation.
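As a sketch of this step, the coordinate estimates in (7) can be computed from a given weight matrix with an off-the-shelf sparse symmetric eigensolver. SciPy is used here purely for illustration; it stands in for the distributed solvers cited above:

```python
import numpy as np
from scipy.sparse import csr_matrix, diags
from scipy.sparse.linalg import eigsh

def laplacian_eigenmaps(w, d=2):
    """Relative sensor coordinates from a symmetric weight matrix W, Eqs. (4)-(7).

    Row i of the returned (N, d) array is z_i = [v_2(i), ..., v_{d+1}(i)],
    where v_k are eigenvectors of L = diag(u) - W sorted by eigenvalue.
    """
    w = csr_matrix(w)
    u = np.asarray(w.sum(axis=1)).ravel()       # column (= row) sums u_i
    lap = diags(u) - w                          # graph Laplacian, Eq. (6)
    # The d+1 smallest eigenpairs; the constant eigenvector v_1 is discarded.
    vals, vecs = eigsh(lap, k=d + 1, which="SM")
    order = np.argsort(vals)
    return vecs[:, order[1:d + 1]]              # Eq. (7)
```

The output lives in the eigenvector coordinate frame; mapping it onto the absolute frame of the m reference sensors (translation, rotation, and scale) is a separate alignment step that this sketch does not show.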

3.1. Multi-Dimensional Scaling: Classical MDS finds the coordinates $\{z_i\}$ which minimize the following cost function:

$$ S_{MDS} = \sum_{i \ne j} \left( \delta_{i,j}^2 - \|z_i - z_j\|^2 \right)^2 \qquad (8) $$

where $\delta_{i,j}$ is a measured distance between sensors i and j. In MDS-MAP [2], $\delta_{i,j}$ is set to the shortest-path number of hops between sensors i and j. Note that the difference in (8) is not taken between distances, but between squared distances, in order to linearize the optimization problem. Using squared distances tends to emphasize the pairs with high $\delta_{i,j}$ and magnify their errors.

More fundamentally, (8) is based on distance rather than connectivity. Equation (8) asserts that the distance between $z_i$ and $z_j$ should be equal to $\delta_{i,j}$. In comparison, the Laplacian Eigenmaps cost in (4) simply asserts that the distance between sensors i and j is low.
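For reference, a compact sketch of the MDS-MAP baseline: hop-count distances as in [2], followed by textbook classical MDS (double centering and a top-eigenvector embedding). The final scaling of hops to physical units and the alignment to reference sensors are omitted:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def mds_map(q, d=2):
    """Classical MDS on shortest-path hop counts (MDS-MAP [2]), for a connected graph."""
    delta = shortest_path(q, method="D", unweighted=True)   # hop-count distances
    n = delta.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n                      # centering matrix
    b = -0.5 * j @ (delta ** 2) @ j                          # double-centered squared distances
    vals, vecs = np.linalg.eigh(b)
    top = np.argsort(vals)[::-1][:d]                         # d largest eigenvalues
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))
```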

4. WEIGHT SELECTION

The selection of weights $w_{i,j}$ for neighboring sensors is critical to localization performance. In the original Laplacian Eigenmaps method [7], weights are selected by looking at the local geometric structure of neighboring high-dimensional data points. This paper presents multiple methods of weight selection, and then compares them via simulation in Section 5.

First, in the Equal Weights method, we set $w_{i,j} = Q_{i,j}$, i.e., $w_{i,j} = 1$ if i and j are connected and 0 if not. As will be shown in Section 5, this is a poor weight selection method, because sensors with the most neighbors will tend to have too much 'pull', and will bias their neighbors' coordinate estimates too close to their own.

To counteract this tendency, we offer two alternatives which both affect the column sums of W, i.e., $u_i = \sum_{j=1}^{N} w_{i,j}$. Note that $u_i$ is analogous to the 'pull' of sensor i. In both alternative methods, we first set $w_{i,j}$ using the equal weights method. Then,

Equal Sum-of-Weights: Adjust the weights such that the new column sums $\tilde{u}_i = \mu_u$ for all $i = 1,\dots,N$, where $\mu_u$ is the average of the original column sums, $\mu_u = \frac{1}{N}\sum_{i=1}^{N} u_i$.

Linear Sum-of-Weights: Adjust the weights such that the new column sums $\tilde{u}_i$ are linearly related to $u_i$. Specifically, let $\tilde{u}_i = \mu_u + \beta (u_i - \mu_u)/\sigma_u$, where $\sigma_u$ is the standard deviation of $\{u_i\}_{i=1}^{N}$. In this paper, slope $\beta = 0.1$ is used throughout.

Adjustment of weights to achieve desired column sums is described in Section 4.1. The W output by any neighbor weight selection method is then used to calculate coordinate estimates $\{\hat{z}_i\}$ via the Laplacian Eigenmaps algorithm in Section 3.

4.1. Symmetric Adjustment of Weights: Matrix W must remain symmetric after any weight adjustment, since it describes a symmetric graph. If we just scaled the weights in column i by $\tilde{u}_i/u_i$, column i would have the desired sum $\tilde{u}_i$, but W would not remain symmetric. In this weight-adjustment algorithm, we iteratively adjust $w_{i,j}$ (or equivalently $w_{j,i}$) until $u_i \approx \tilde{u}_i$. The inputs to the algorithm are: the original weights $\{w_{i,j}\}$; the desired sums of weights $\{\tilde{u}_i\}$ for $i = 1,\dots,N$; and a convergence threshold $\epsilon$ (here $\epsilon = 0.01$). The algorithm outputs the modified weight matrix. The steps are:

1. Calculate $u_i = \sum_{j=1}^{N} w_{i,j}$ for $i = 1,\dots,N$.

2. Define $\gamma_i = \sqrt{\tilde{u}_i / u_i}$ for $i = 1,\dots,N$.

3. Assign $w_{i,j} = w_{j,i} := \gamma_i\, w_{i,j}\, \gamma_j$ for all neighbors i, j.

4. If $1 - \epsilon < \gamma_i < 1 + \epsilon$ for all i, stop. Else go to step 1.

The algorithm requires $O(KN)$ multiplies, where K is the average number of neighbors. We do not address the convergence of this algorithm here, except to note that in simulations, it typically converges in 5-10 iterations.
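A sketch of this adjustment loop is below. The square-root form of $\gamma_i$ is assumed here because it makes the symmetric update of step 3 drive each column sum toward its target; the desired sums can come from either the equal or the linear sum-of-weights rule above.

```python
import numpy as np

def adjust_weights(w, u_target, eps=0.01, max_iter=50):
    """Symmetrically rescale W so its column sums approach u_target (Section 4.1).

    Each pass applies w_ij <- gamma_i * w_ij * gamma_j with
    gamma_i = sqrt(u_target_i / u_i), which preserves symmetry and only
    rescales existing (neighbor) entries.
    """
    w = np.array(w, dtype=float)
    for _ in range(max_iter):
        u = w.sum(axis=1)                                   # step 1: current column sums
        gamma = np.sqrt(u_target / np.maximum(u, 1e-12))    # step 2 (assumed form)
        w = gamma[:, None] * w * gamma[None, :]             # step 3: symmetric rescale
        if np.all(np.abs(gamma - 1.0) < eps):               # step 4: convergence check
            break
    return w
```

For the equal sum-of-weights method one would pass u_target = np.full(N, u.mean()); for the linear method, u_target = u.mean() + 0.1 * (u - u.mean()) / u.std().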

4.2. Two-Stage Weight Selection: In [10], it was shown that localization estimates can be greatly improved by using a two-stage neighbor selection method. We consider the following two-stage algorithm for weight selection:

1. Using the linear sum-of-weights method to set W, calculate the Laplacian Eigenmaps coordinate estimates $\{\hat{z}_i\}$.

2. For the second round, let the desired column sums $\{\tilde{u}_i\}$ be given by (9), where $k_i$ is the number of its neighbors j for which $\|\hat{z}_i - \hat{z}_j\| < R$, and R is the radius of coverage. Adjust W to meet $\{\tilde{u}_i\}$, and then calculate final coordinate estimates $\{\hat{z}_i\}$.

Intuitively, if few of the neighbors of sensor i are estimated to be within its communication range, then we can guess that sensor i's weights should be increased. The presented choices are by no means optimal, and other iterative algorithms or updates are certainly possible. We simply show that this ad hoc method does in fact dramatically improve localization performance.
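A structural sketch of the two-stage procedure follows, reusing adjust_weights and laplacian_eigenmaps from the sketches above. The mapping from the neighbor counts $k_i$ to the second-round column sums, i.e., Eq. (9), is not reproduced here and is left as a caller-supplied rule; note also that in practice the first-stage estimates would be aligned to the reference frame before their distances are compared with R.

```python
import numpy as np

def two_stage_weights(q, radius, column_sum_rule, beta=0.1):
    """Skeleton of the two-stage weight selection of Section 4.2.

    column_sum_rule(k) stands in for Eq. (9): it maps the per-sensor count
    k_i of connected neighbors estimated to lie within `radius` to the
    desired second-round column sums.
    """
    w = q.astype(float)

    # Stage 1: linear sum-of-weights targets, adjustment, LE estimates.
    u = w.sum(axis=1)
    u_lin = u.mean() + beta * (u - u.mean()) / (u.std() + 1e-12)
    z_hat = laplacian_eigenmaps(adjust_weights(w, u_lin))

    # Stage 2: count estimated-in-range neighbors, re-adjust, re-estimate.
    d_hat = np.linalg.norm(z_hat[:, None, :] - z_hat[None, :, :], axis=-1)
    k = ((d_hat < radius) & (q > 0)).sum(axis=1)
    return laplacian_eigenmaps(adjust_weights(w, column_sum_rule(k)))
```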

5. SIMULATION RESULTS

In this section we test the localization performance using different estimators in multiple sensor geometries. For each test, we run 200 independent simulation trials in order to determine the mean coordinate estimate $\bar{z}_i$ for $i = 1,\dots,n$, and the covariance matrix C. In each simulation, the statistical model in Section 2 is used to randomly generate connectivity measurements in the sensor network. Each plot in Fig. 1 shows the mean estimate $\bar{z}_i$ and the $1\sigma$ covariance ellipse (-) for each sensor. For comparison, we always plot in gray (or red in the electronic version) the actual sensor coordinate and the Cramér-Rao bound (CRB) for the $1\sigma$ covariance ellipse (- - -) [6]. For each test, we summarize the mean bias $\bar{b} = \frac{1}{n}\sum_{i=1}^{n} \|z_i - \bar{z}_i\|$ and the standard deviation, $\bar{\sigma}$, of the localization estimator. Note all distances are in terms of L, the chosen scale of the network.
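A sketch of this Monte Carlo evaluation loop is below; the reduction of the per-sensor covariances to the single number $\bar{\sigma}$ (here the root of the mean covariance trace) is an assumption of this sketch, not a definition taken from the paper:

```python
import numpy as np

def evaluate(true_coords, estimator, n_trials=200, rng=None):
    """Monte Carlo bias and covariance of a localization estimator (Section 5).

    estimator(rng) must return an (n, 2) array of coordinate estimates for the
    unknown-location sensors, computed from freshly drawn connectivity data.
    """
    rng = np.random.default_rng() if rng is None else rng
    est = np.stack([estimator(rng) for _ in range(n_trials)])    # (T, n, 2)
    z_bar = est.mean(axis=0)                                     # mean estimates
    bias = np.linalg.norm(z_bar - true_coords, axis=1).mean()    # mean bias b
    dev = est - z_bar
    cov = np.einsum("tni,tnj->nij", dev, dev) / (n_trials - 1)   # per-sensor 2x2 covariances
    sigma_bar = np.sqrt(np.trace(cov, axis1=1, axis2=2).mean())  # assumed aggregation
    return bias, sigma_bar, cov
```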

5.1. MDS-MAP

We first test MDS-MAP in a 7 by 7 grid network, in which the four corner sensors are reference sensors, and the other 45 are unknown-location sensors. For a communication radius R = 0.5, the MDS method has standard deviation of location error $\bar{\sigma} = 0.218$ and a bias of $\bar{b} = 0.087$, as shown in Fig. 1(a). At R = 0.5, almost all pairs of sensors are within 1 or 2 hops of each other. At lower radii R, MDS-MAP achieves very low bias, as shown in Table 1, but the standard deviation of error is largely constant, consistently about twice the lower bound of the CRB.

5.2. Laplacian Eigenmaps One-Stage

Equal Weights: Next, we test Laplacian Eigenmaps using the equal weights method (as described in Section 4). The simulation results show a heavily biased estimator. For R = 0.5, the results are shown in Fig. 1(b), in which the mean bias $\bar{b} = 0.186$ and the standard deviation of location error $\bar{\sigma} = 0.189$. At R = 0.3 and R = 0.4, the biases $\bar{b}$ listed in Table 1 are lower but still very high.



[Table 1: Mean bias $\bar{b}$ and standard deviation $\bar{\sigma}$ of location error (in units of L) at R = 0.3, 0.4, and 0.5 for each location estimator: (a) MDS-MAP, (b) LE Equal Weights, (c) LE Equal Sum-of-Weights, (d) LE Linear Sum-of-Weights, and (e)-(h) LE Two-Stage Linear Sum-of-Weights on the 7 by 7 Grid, Grid+Z/4, Grid+Z/2, and Uniform Random geometries.]

Fig. 1. Estimator mean and $1\sigma$ uncertainty ellipse (-) for each blindfolded sensor compared to the true location and the CRB on the $1\sigma$ uncertainty ellipse (- - -), when reference sensors are located at each x. All cases are R = 0.5 tests described in Section 5 and in Table 1.





Equal Sum-of-Weights: The performance of Laplacian Eigenmaps, when weights are determined by the equal sum-of-weights method, is dramatically better than the equal weights method, as shown in Table 1. For R = 0.5, the results shown in Fig. 1(c) show that the edge nodes seem to have weights too high compared to the interior nodes, the opposite bias pattern compared to Fig. 1(b).

Linear Sum-of-Weights: Laplacian Eigenmaps with the linearly adjusted sum-of-weights reduces the bias compared to equal sum-of-weights. As shown in Table 1 and in Fig. 1(d), the bias has been reduced, especially at R = 0.4, even though the values of $\bar{\sigma}$ are largely unchanged.

5.3. Laplacian Eigenmaps Two-Stage

Using the two-stage weight adjustment described in Section 4.2, bias is further reduced. Furthermore, as shown in Table 1, the variance for R = 0.3 and 0.4 is dramatically lower than the one-stage linear sum-of-weights method. These variances in the grid geometry are about 30-35% higher than the Cramér-Rao lower bound, so even an efficient estimator would not reduce $\bar{\sigma}$ dramatically further.

However, we certainly don't expect that sensors will be arranged in a perfect grid. The true test of sensor localization is performance when sensor placement is random, which is presented next. Each test shows the performance of the Laplacian Eigenmaps two-stage weight selection method.

Grid Plus Noise: First, we add a Gaussian random vector to each unknown grid coordinate, i.e., for $i = 1,\dots,n$, the coordinate $z_i$ is the original 7 by 7 grid coordinate plus $\tilde{z}_i/c$, where the $\{\tilde{z}_i\}$ are independent Gaussian-distributed with mean zero and covariance $(1/6)^2 I_2$, and c = 2 or 4. Essentially, the standard deviation of the random addition is either one-fourth or one-half of the distance between grid nodes. We generate two geometries from this model for c = 2 and c = 4, and show simulation results in Table 1 and in Fig. 1(f)-(g). For R = 0.4 and 0.5, the bias and standard deviation of location error increase only slowly. However, for R = 0.3, the bias and variance do increase considerably. We note that sensors actually located outside of the unit square $[0,1]^2$ have noticeably higher bias and variance.

Uniform Random: Next, for $i = 1,\dots,N$, the $z_i$ are independently chosen from a uniform distribution over the unit square area, $[0,1]^2$. The sensors closest to each corner are selected as the 4 references, so in this test, even the references are randomly deployed. The resulting $\bar{\sigma}$ are lower than in the 7 by 7 grid or the grid plus noise geometries. We note that the CRB for $\bar{\sigma}$ is also about 15% lower for this deployment compared to the 7 by 7 grid, so it is legitimate to expect lower $\bar{\sigma}$. Essentially, sensors very close together can provide increased information about their relative location. However, the biases $\bar{b}$ are higher than in the 7 by 7 grid, especially for R = 0.3.
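For completeness, a sketch of how these two random deployments can be generated; the 7 by 7 grid spacing of 1/6 over the unit square and keeping the corner references unperturbed are our reading of the setup above:

```python
import numpy as np

def grid_plus_noise(c=2, rng=None):
    """7 by 7 grid on [0,1]^2 with unknown sensors perturbed by N(0, ((1/6)/c)^2 I)."""
    rng = np.random.default_rng() if rng is None else rng
    g = np.linspace(0.0, 1.0, 7)
    grid = np.array([(x, y) for x in g for y in g])            # 49 sensors
    is_ref = np.all((grid == 0.0) | (grid == 1.0), axis=1)     # 4 corner references
    coords = grid + rng.normal(0.0, (1.0 / 6.0) / c, size=grid.shape)
    coords[is_ref] = grid[is_ref]                              # references keep exact coordinates
    return coords, np.flatnonzero(is_ref)

def uniform_random(n_total=49, rng=None):
    """Uniform deployment; the sensor nearest each corner becomes a reference."""
    rng = np.random.default_rng() if rng is None else rng
    coords = rng.uniform(0.0, 1.0, size=(n_total, 2))
    corners = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
    refs = [int(np.argmin(np.linalg.norm(coords - c, axis=1))) for c in corners]
    return coords, sorted(set(refs))
```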

For random deployments, a low communication radius like R = 0.3 may cause some sensors to have very few neighbors, and localization performance will suffer. System designers should plan for the tendency of sensors outside of the convex hull of the reference nodes to experience higher localization errors.

6. DISCUSSION AND CONCLUSION

Using a realistic statistical model for connectivity, simulations show the potential of the Laplacian Eigenmaps method to be a robust, low-bias and low-variance sensor location estimator. It does not suffer from local optima and it has low computational complexity compared to MDS-based estimators. The presented two-stage weight-selection method is used to achieve low bias and standard deviation within 35% of the lower bound. However, general analysis of weight selection methods has not been attempted. Finally, distributed algorithms have not yet been presented for the proposed methods. These issues remain open for future research.

7. REFERENCES

[1] Dragos Niculescu and Badri Nath, "Ad hoc positioning system," in IEEE Globecom 2001, April 2001, vol. 5, pp. 2926-2931.

[2] Yi Shang, Wheeler Ruml, Ying Zhang, and Markus P. J. Fromherz, "Localization from mere connectivity," in MobiHoc '03, June 2003, pp. 201-212.

[3] Niveditha Sundaram and Parameswaran Ramanathan, "Connectivity based location estimation scheme for wireless ad hoc networks," in IEEE Globecom 2002, Nov. 2002, vol. 1, pp. 143-147.

[4] Joshua B. Tenenbaum, Vin de Silva, and John C. Langford, "A global geometric framework for nonlinear dimensionality reduction," Science, vol. 290, pp. 2319-2323, Dec. 2000.

[5] Theodore S. Rappaport, Wireless Communications: Principles and Practice, Prentice-Hall Inc., New Jersey, 1996.

[6] Neal Patwari and Alfred O. Hero III, "Using proximity and quantized RSS for sensor localization in wireless networks," in IEEE/ACM 2nd Workshop on Wireless Sensor Nets. & Applications, Sept. 2003.

[7] Mikhail Belkin and Partha Niyogi, "Laplacian eigenmaps for dimensionality reduction and data representation," Neural Computation, vol. 15, no. 6, pp. 1373-1396, June 2003.

[8] Ernest R. Davidson, "The iterative calculation of a few of the lowest eigenvalues and corresponding eigenvectors of large real-symmetric matrices," J. Comput. Phys., vol. 17, no. 1, pp. 87-94, Jan. 1975.

[9] Luca Bergamaschi, Giorgio Pini, and Flavio Sartoretto, "Computational experience with sequential and parallel, preconditioned Jacobi-Davidson for large, sparse symmetric matrices," J. Computational Physics, vol. 188, no. 1, pp. 318-331, June 2003.

[10] Jose A. Costa, Neal Patwari, and Alfred O. Hero III, "Distributed multidimensional scaling with adaptive weighting for node localization in sensor networks," IEEE/ACM Trans. Sensor Networks, submitted May 2004 (revised Jan. 2005).

