International Conference on Indoor Positioning and Indoor Navigation
28-31 Oct 2013, France



Table of Contents

Position Estimation Using a Low-cost Inertial Measurement Unit with Help of Kalman Filtering and Fastening-Pattern Recognition, T. Chobtrong [et al.] .......... 1

Reference Navigation System Based on Wi-Fi Hotspots for Integration with Low-Cost Inertial Navigation System, M. Kamil [et al.] .......... 3

Survey of accuracy improvement approaches for tightly coupled ToA/IMU personal indoor navigation system, V. Maximov [et al.] .......... 7

Enhancement of the automatic 3D Calibration for a Multi-Sensor System, E. Koeppe [et al.] .......... 11

A gait recognition algorithm for pedestrian navigation system using inertial sensors, W. Liu [et al.] .. 14

An UWB based indoor compass for accurate heading estimation in buildings, A. Norrdine [et al.] .... 18

Accuracy of an indoor IR positioning system with least squares and maximum likelihood approaches, F. Domingo-Perez [et al.] .......... 22

An indoor navigation approach for low-cost devices, A. Masiero [et al.] ............................................. 24

ARIANNA: a two-stage autonomous localisation and tracking system, E. De Marinis [et al.] .......... 26

Source localisation by sensor array processing using a sparse signal representation, J. Lardies [et al.] ...30

OFDM Pulse Design with Low PAPR for Ultrasonic Location and Positioning Systems, D. Albuquerque [et al.] .......... 34

Dynamic Collection based Smoothed Radiomap Generation System, J. Kim [et al.] ........................... 36

Pedestrian Activity Classification to Improve Human Tracking and Localization, M. Bocksch [et al.] ...39

Accurate Smartphone Indoor Positioning Using Non-Invasive Audio, S. Lopes [et al.] .......................44

Locally Optimal Confidence Ball for a Gaussian Mixture Random Variable, P. Sendorek [et al.] ....... 48

Evaluating robustness and accuracy of the Ultra-wideband Technology-based Localization Platform under NLOS conditions, P. Karbownik [et al.] .......... 53

Robust Step Occurrence and Length Estimation Algorithm for Smartphone-Based Pedestrian Dead Reckoning, W. Kang [et al.] .......... 55

Context Aware Adaptive Indoor Localization using Particle Filter, Y. Zhao [et al.] ..............................60

Verification of ESPAR Antennas Performance in the Simple and Calibration Free Localization System, M. Rzymowski [et al.] .......... 64

Optimal RFID Beacons Configuration for Accurate Location Techniques within a Corridor Environment, E. Colin [et al.] .......... 68

A Cooperative NLoS Identification and Positioning Approach in Wireless Networks, Z. Xiong [et al.] .73

Visual Landmark Based Positioning, H. Chao [et al.] ........................................................................... 79

RFID System with Tags Positioning based on Phase Measurements, I. Shirokov................................. 84

Broadcasting Alert Messages Inside the Building: Challenges & Opportunities, F. Spies [et al.] ........ 91

For a Better Characterization of Wi-Fi-based Indoor Positioning Systems, F. Lassabe [et al.] .............95

Locating and classifying of objects with a compact ultrasonic 3D sensor, W. Christian [et al.] ........... 99



Location Estimation Algorithm for the High Accuracy LPS LOSNUS, M. Syafrudin [et al.] ............103

Infrastructure-less TDOF/AOA-based Indoor Positioning with Radio Waves, C. Aydogdu [et al.] ....105

Sound Based Indoor Localization - Practical Implementation Considerations, J. Moutinho [et al.] ...109

Proposed Methodology for Labeling Topological Maps to Represent Rich Semantic Information for Vision Impaired Navigation, A. Jayakody .......... 113

Improvements and Evaluation of the Indoor Laser Localization System GaLocate, J. Kokert [et al.] 115

Observability Properties of Mirror-Based IMU-Camera Calibration, G. Panahandeh [et al.] .............117

Processing speed test of Stereoscopic vSLAM in an Indoors environment, J. Delgado Vargas [et al.] .......... 119

Enhanced View-based Navigation for Human Navigation by Mobile Robots Using Front and Rear Vision Sensors, M. Tanaka [et al.] .......... 123

Generation of reference data for indoor navigation by INS and laser scanner, F. Keller [et al.] ......... 127

Implementation of OGC WFS floor plan data for enhancing accuracy and reliability of Wi-Fi fingerprinting positioning methods, D. Zinkiewicz [et al.] .......... 129

On-board navigation system for smartphones, M. Togneri [et al.] ...................................................... 133

A Gyroscope Based Accurate Pedometer Algorithm, S. Jayalath [et al.] .............................................138

Bluetooth Embedded Inertial Measurement Unit for Real-Time Data Collection, R. Chandrasiri [et al.] 142

WiFi localisation of non-cooperative devices, C. Beder [et al.] .......................................................... 146

Creation of Image Database with Synchronized IMU Data for the Purpose of Way Finding for Vision Impaired People, C. Rathnayake [et al.] .......... 150

Relevance and Interpretation of the Cramer-Rao Lower Bound for Indoor Localisation Algorithms, M. Kyas [et al.] .......... 152

Efficient and adaptive Generic object detection method for indoor navigation, N. Rajakaruna [et al.] ...156

Hidden Markov Based Hand Gesture Classification and Recognition Using an Adaptive Threshold Model, J. Mechanicus [et al.] .......... 160

Pedestrian Detection and Positioning System by a New Multi-Beam Passive Infrared Sensor, R. Canals [et al.] .......... 169

Study of rotary-laser transmitter shafting vibration for workspace measurement positioning system, Z. Liu [et al.] .......... 174

Efficient Architecture for Ultrasonic Array Processing based on Encoding Techniques, R. García [et al.]...............................................................................................................................................................178

Using Double-peak Gaussian Model to Generate Wi-Fi Fingerprinting Database for Indoor Positioning, L. Chen [et al.] .......... 182

Indoor Positioning using Ultrasonic Waves with CSS and FSK Modulation for Narrow Band Channel, A. Ens [et al.] .......... 188

Improving Heading Accuracy in Smartphone-based PDR Systems using Multi-Pedestrian Sensor Fusion, M. Jalal Abadi .......... 190

A New Indoor Robot Navigation System Using RFID Technology, M. Fujimoto [et al.] ...................194

Accurate positioning in underground tunnels using Software-Defined-Radio, F. Pereira [et al.] ....... 196

Positioning in GPS Challenged Locations - NextNav's Metropolitan Beacon System, S. Meiyappan [et al.] .......... 202

Indoor Positioning using Wi-Fi -- How Well Is the Problem Understood?, M. Kjærgaard [et al.] ......207

The workspace Measuring and Positioning System (wMPS) - an alternative to iGPS, B. Xue [et al.] .......... 211

Key Requirements for Successful Deployment of Positioning Applications in Industrial Automation, L. Thrybom [et al.] .......... 213

Texture-Based Algorithm to Separate UWB-Radar Echoes from People in Arbitrary Motion, T. Sakamoto [et al.] .......... 217

Experimental Evaluation of UWB Real Time Positioning for Obstructed and NLOS Scenarios, K. Al-Qahtani [et al.] .......... 221

Device-Free 3-Dimensional User Recognition utilizing passive RFID walls, B. Wagner [et al.] ....... 225

First Theoretical Aspects of a Cm-accuracy GNSS-based Indoor Positioning System, Y. Lu [et al.] . 229

2D-indoor localisation with GALILEO-like pseudolite signals, A. Monsaingeon [et al.] ...................234

Performance Comparison between Frequency-Division and Code-Division access methods in an ultrasonic LPS, F. Álvarez [et al.] .......... 239

Fusion methods for IMU using neural networks for precision positioning, L. Tejmlova [et al.] ........ 241

Stance Phase Detection using Hidden Markov Model in Various Motions, H. Ju [et al.] ................... 243

Standing still with inertial navigation, J. Nilsson [et al.] ..................................................................... 247

Single-channel versus multi-channel scanning in device-free indoor radio localization, P. Cassarà [et al.]...............................................................................................................................................................249

Indoor Positioning using Time of Flight Fingerprinting of Ultrasonic Signals, A. Dvir [et al.] ..........253

The Construction of an Indoor Floor Plan Using a Smartphone for Future Usage of Blind Indoor Navigation, A. Jayakody .......... 257

Study aimed at advanced use of the indoor positioning infrastructure IMES, Y. Yutaka ..................... 261

Health Monitoring of WLAN Localization Infrastructure using Smartphone Inertial Sensors, R. Haider [et al.] .......... 265

GPS Line-Of-Sight Fingerprinting for Enhancing Location Accuracy in Urban Areas, A. Uchiyama [et al.] .......... 269

Utilizing cyber-physical systems to rapidly access and guard patient records, T. Czauski [et al.] ......273

Evaluation of Indoor Pedestrian Navigation System on Smartphones using View-based Navigation, M. Nozawa [et al.] .......... 275



- chapter 1 -

Signal Processing & Analysis


2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31 October 2013

Position Estimation Using a Low-cost Inertial Measurement Unit with Help of Kalman Filtering and Fastening-Pattern Recognition

T. Chobtrong, M. Haid, M. Kamil and E. Günes
Competence Center of Applied Sensor Systems
Darmstadt University of Applied Sciences
Darmstadt, Germany
[email protected]

Abstract—To improve quality control in the automotive industry, an intelligent screwdriver is being developed to track the position of each bolt as it is fastened. In many situations, such as inside the car or in the engine compartment, it is not possible to track bolt positions with a vision-based tracking system. To solve this problem, the screwdriver is therefore integrated with an inertial measurement unit instead of relying on vision-based tracking. As with common inertial navigation systems, the challenge of this tracking system is the inaccuracy caused by sensor drift. This paper presents a position tracking algorithm for this intelligent screwdriver using a low-cost inertial measurement unit. The algorithm is based on a Kalman filter supported by a fastening-pattern recognition algorithm based on a Hidden Markov Model.

Keywords—IMU; Indoor Navigation; Inertial Navigation; Kalman Filter; Hidden Markov Model

I. INTRODUCTION

To prevent problems like bolts being missed or left unfastened by a worker, and to improve the quality of automotive manufacturing, a system that can track the position of a bolt as it is being fastened is required. Because the shape of a vehicle is complex and its main material is metal, vision-based and radio-based tracking systems are not suitable for tracking a tool-tip under these conditions. Vision-based tracking systems need a clear view of the tool-tip to track its position; in some cases, such as the car's interior or the engine compartment, parts of the car or the body itself obstruct the camera, so the vision-based tracking system loses the tool-tip position in that area. Radio-based tracking systems suffer from inaccuracy and loss-of-contact problems caused by magnetic-field distortion, because there are many metallic objects in and around an automotive manufacturing line [1]. Another disadvantage of vision-based and radio-based tracking systems is that they require dedicated equipment and supporting infrastructure to be installed. They therefore demand substantial investment and are difficult to adapt to new and changing conditions and processes, such as a new car model.

To develop a tool-tip tracking system for the INSCHRAV project, a low-cost inertial navigation system (INS) is applied to support this application because of its contactless and referenceless properties, as well as its low cost, low weight and compact design [2]. The intelligent screwdriver is integrated with a low-cost inertial measurement unit (IMU). The challenge of this tracking system, however, is the accumulated error arising from measurement signals corrupted by stochastic noise [3].

This paper presents an overview of the INSCHRAV algorithm, which has been developed to improve the accuracy of the INS-based tool-tip tracking system. The algorithm estimates the position of a tool-tip with a complementary Kalman filter (CKF), supported by fastening-pattern recognition based on a Hidden Markov Model (HMM). In brief, the optimal position estimated by the CKF is refined with the observed position determined by the fastening-pattern recognition.

II. OVERVIEW OF THE INSCHRAV ALGORITHM

The INSCHRAV algorithm estimates the position of a tracked tool-tip using the measurement signals from the IMU500, a low-cost IMU designed and developed by the Competence Center of Applied Sensor Systems (CCASS). The algorithm consists of four main steps: attitude estimation, gravity compensation, position estimation and fastening-position recognition.

Figure 1. Demonstration of the tool-tip tracking system using the inertial tracking system

Figure 2. IMU500, a low-cost inertial measurement unit

Figure 3. Absolute position error of the estimation (a) in the x-axis, (b) in the y-axis and (c) in the z-axis, with the INSCHRAV algorithm (solid line) and without the INSCHRAV algorithm (dashed line)

The attitude estimation is based on an extended Kalman filter (EKF), following the study by Madgwick et al. [4]. In brief, the current attitude of the screwdriver is estimated from the gyroscope signals and corrected with tilt and heading estimates derived from the accelerometer and magnetometer measurements, in that order. The measured acceleration signals, which contain the gravity vector, are then compensated using the attitude information to estimate the dynamic acceleration of the screwdriver.
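The gravity compensation step can be sketched as follows. This is an illustrative outline, not the authors' implementation: it assumes a z-up navigation frame and an accelerometer measuring specific force (so a level, stationary sensor reads roughly [0, 0, g]); the function names are hypothetical.

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix (body -> navigation frame) from a unit quaternion q = [w, x, y, z]."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def dynamic_acceleration(accel_body, q, g=9.81):
    """Remove gravity from a body-frame accelerometer reading using the estimated attitude."""
    a_nav = quat_to_rot(q) @ np.asarray(accel_body)  # rotate into the navigation frame
    return a_nav - np.array([0.0, 0.0, g])           # subtract the gravity vector

# A stationary, level sensor measures ~[0, 0, g]; the dynamic acceleration is ~zero.
print(dynamic_acceleration([0.0, 0.0, 9.81], [1.0, 0.0, 0.0, 0.0]))
```

Only the residual (dynamic) acceleration is then double-integrated in the position estimation step, which is why accurate attitude estimation matters so much here.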

Finally, the position estimation algorithm estimates the position of the tool-tip as in [5]. This estimate is additionally corrected by the fastening-pattern recognition. Because of strict manufacturing procedures, the pattern in which the bolts are fastened is stringently defined. The bolt position currently being fastened can therefore be recognized by a sequence recognition algorithm based on an HMM. This recognition module supports 11 hidden states (10 bolt positions and a reset point) with 10 observation states, which are determined by a movement classification module.
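The sequence recognition over a strictly defined fastening order can be illustrated with a standard log-domain Viterbi decoder. The transition and emission matrices below are toy values for just three bolt positions, not the project's actual 11-state model.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state sequence for observations `obs` (log-domain Viterbi)."""
    delta = np.log(pi) + np.log(B)[:, obs[0]]    # initial state scores
    backpointers = []
    for o in obs[1:]:
        trans = delta[:, None] + np.log(A)       # score of every (prev -> next) transition
        backpointers.append(trans.argmax(axis=0))
        delta = trans.max(axis=0) + np.log(B)[:, o]
    path = [int(delta.argmax())]
    for bp in reversed(backpointers):            # backtrack the best path
        path.append(int(bp[path[-1]]))
    return path[::-1]

# Toy model: bolts are fastened in the strict order 0 -> 1 -> 2.
pi = np.array([0.98, 0.01, 0.01])
A = np.array([[0.1, 0.8, 0.1],    # from position 0, position 1 is the most likely successor
              [0.1, 0.1, 0.8],
              [0.8, 0.1, 0.1]])
B = np.array([[0.8, 0.1, 0.1],    # each observation mostly identifies its own position
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
print(viterbi([0, 1, 2], pi, A, B))  # -> [0, 1, 2]
```

Because the transition matrix encodes the prescribed fastening order, even a noisy observation sequence tends to be decoded into the correct bolt sequence.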

III. OVERVIEW OF THE IMU500

The IMU500 is a low-cost inertial measurement unit (IMU) designed in-house especially for the INSCHRAV project. The IMU comprises two main sensors: a tri-axial accelerometer and magnetometer (STMicro LSM303DLHC) and a tri-axial gyroscope (STMicro L3GD20). The microcontroller of the IMU500 is an ARM Cortex-M4 from ST, which supports floating-point arithmetic.

IV. EXPERIMENTS

The INSCHRAV algorithm's performance was tested in laboratory simulations at CCASS. A model of the intelligent screwdriver with the IMU500 attached was moved to fasten 10 bolts 10 times (the pitch between bolts is 10 cm) on a model of the cylinder head of a 4-cylinder engine. The INSCHRAV algorithm was compiled and deployed to the IMU500. In these experiments, a reference signal provided by an infrared tracking system, the Lukotronic AS200, was used to evaluate the performance of the inertial tracking system.

V. RESULTS

As shown in Figure 3, the errors of the position estimation without the INSCHRAV algorithm (dashed line) increase continuously over time, whereas the errors with the INSCHRAV algorithm are controllable and stable after the first recognized position (at time 1490 ms). Significantly, the INSCHRAV position estimation based on the CKF re-calculates its process covariance matrix and Kalman gain to improve the estimate with the observation signals from the fastening-position recognition. The position errors of the INSCHRAV algorithm therefore decrease rapidly once the bolt position has been recognized.
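The correction triggered by a recognized bolt position corresponds to a standard Kalman measurement update. The sketch below uses hypothetical matrices rather than the paper's actual CKF, and only illustrates how a recognized position pulls a drifted estimate back.

```python
import numpy as np

def measurement_update(x, P, z, R, H):
    """Standard Kalman measurement update with a recognized bolt position z."""
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain, recomputed for this observation
    x = x + K @ (z - H @ x)              # corrected position estimate
    P = (np.eye(len(x)) - K @ H) @ P     # corrected covariance
    return x, P

x = np.array([0.06, 0.04, -0.05])        # drifted position estimate (m)
P = np.eye(3) * 1.0                      # large uncertainty accumulated by drift
z = np.zeros(3)                          # known position of the recognized bolt
R = np.eye(3) * 1e-4                     # recognition treated as a precise observation
x_new, P_new = measurement_update(x, P, z, R, np.eye(3))
```

With a precise observation and large prior uncertainty, the gain is close to identity, so the estimate snaps to the recognized bolt position and the covariance collapses accordingly.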

VI. CONCLUSION AND FURTHER DEVELOPMENT

This position estimation using a low-cost IMU with the help of Kalman filtering and fastening-pattern recognition (the INSCHRAV algorithm) is able to track the position of a bolt being fastened with an accuracy of ±50 mm on each axis. A further improvement of this project is to optimize the initial parameters of the INSCHRAV algorithm to increase the tracking system's accuracy. Moreover, this tracking system will be integrated with the intelligent screwdriver, and the overall performance of the system will be tested.

REFERENCES

[1] D. Vissiere, A. Martin, and N. Petit, “Using distributed magnetometers to increase IMU-based velocity estimation into perturbed area,” in Proc. 46th IEEE Conference on Decision and Control, 2007, pp. 4924-4931.

[2] M. Haid, “Improvement of the referenceless inertial object tracking for low-cost indoor navigation by Kalman filtering (Verbesserung der referenzlosen inertialen Objektverfolgung zur Low-Cost-Indoor-Navigation durch Anwendung der Kalman-Filterung),” Ph.D. dissertation, Universität Siegen, 2004.

[3] D. H. Titterton and J. L. Weston, Strapdown Inertial Navigation Technology, The Institution of Electrical Engineers and The American Institute of Aeronautics and Astronautics, 2004.

[4] S. Madgwick, A. Harrison, and R. Vaidyanathan, “Estimation of IMU and MARG orientation using a gradient descent algorithm,” in Proc. 2011 IEEE International Conference on Rehabilitation Robotics (ICORR), 2011.

[5] M. Haid, T. Chobtrong, E. Günes, M. Kamil, and M. Münter, “Improvement of inertial object tracking for low-cost indoor-navigation with advanced algorithms,” 16. GMA/ITG-Fachtagung Sensoren und Messsysteme 2012, Nuremberg, 2012.




Reference Navigation System Based on Wi-Fi Hotspots for Integration with Low-Cost Inertial Navigation System

Mustafa Kamil, Pierre Devaux, Markus Haid, Thitipun Chobtrong, Ersan Günes
Competence Center for Applied Sensor Systems
University of Applied Sciences Darmstadt
Darmstadt, Germany
[email protected]

Abstract—In recent years, low-cost inertial navigation has become a well-known solution for indoor object tracking in densely built-up areas or hybrid indoor-outdoor environments. As the hardware required for inertial navigation is tiny, lightweight and widely available on the market, MEMS motion sensors promise to enable various industrial, medical and entertainment applications at very low manufacturing cost. Nevertheless, the error characteristics these sensors have shown so far limit their applicability to simple tasks in smartphones and tablet PCs.

To overcome this problem, recent research at CCASS has successfully achieved a sensor fusion technique that supports an inertial navigation system (INS) with GPS as a source of reference navigation. With this concept, short reference-signal outages can be bridged reliably by the INS, and the INS errors are kept to minimum values. Nevertheless, longer stays in indoor environments (production halls, storage areas, indoor parking facilities, etc.) still result in unbounded growth of the navigation errors, as is commonly known for low-cost INS performance.

To enable fields of application that require both outdoor and indoor navigation, an alternative reference navigation system must be found. For the best possible market penetration, this system has to be low-cost, capable of penetrating walls, and must require no additional installation work and no changes to the building. As regular Wi-Fi hotspots meet these requirements well, this research aims to develop a localization method based on them.

The present short paper covers the reference system development, from the raw hotspot information to the complete reference localization system, and the overall system concept including the INS integration. The presented system does not require knowledge of the hotspots' positions, is technically based on Wi-Fi fingerprinting, and targets tracking and tracing applications in distribution logistics and industrial production.

Keywords: Inertial Navigation, Wi-Fi Fingerprinting, Sensor Fusion, Indoor Navigation, MEMS

I. INTRODUCTION

Extending position determination techniques for vehicles, objects and personnel towards applicability in roofed, inaccessible or densely built-up environments is a key technology for the optimization of processes across a variety of industries.

As an example, in the distribution logistics of automotive plants, the transport of finished vehicles to the various loading stations (e.g., train, ship, truck) or to the customer is of great importance for the manufacturer. The transportation of products has to be processed quickly and efficiently in order to avoid negative effects on production and to guarantee the fastest possible delivery to the customer. In the current state of the art, it is not possible for planners to get feedback on the work carried out in the distribution process. In addition, it is unknown whether any sudden disturbances have occurred or what the requirements for each of the available resources are.

The key to filling this information gap would be accurate ID-related location data provided in real time for every vehicle on the ground. In combination with data on tasks to be accomplished and resources available, tracking the vehicles can add control possibilities to the planning and provides a powerful tool for verifying the planner's success. For example, it can be determined at any time whether an object has been transported correctly to the defined target and whether this was accomplished in the designated time. Losses and interchanges of vehicles can be indicated immediately, so quick and precise reactions can be initiated.

Using location and job information in one integrated environment allows, for the first time, the creation of a synchronized information base for the planning and implementation of logistics, which can subsequently be used to optimize the delivery procedure and to react quickly and accurately to any unforeseen changes. The results are less lost time, more efficient and lock-free use of resources (e.g., personnel, facilities, transport routes) and consequently higher productivity (cf. [1]).

II. INERTIAL NAVIGATION

The principle of inertial navigation is based on measuring object movements with the help of mass inertia under acceleration. For this, an orthogonal constellation of three acceleration sensors and three angular rate sensors is needed. This assembly allows the determination of all accelerations and angular rates applied to any object moving in space, without requiring signals from the surrounding environment. The acceleration sensors capture translational movements in the three spatial directions; the angular rate sensors (gyroscopes) capture the rotational speeds about the three spatial axes. Using low-cost sensors allows a substantial reduction in system cost, but it also comes with a loss of accuracy that accumulates over time (cf. [2]). The main reason for this is a random bias error in the signals of those sensors. When translational acceleration or angular rate signals are integrated, this error is amplified, resulting in a time-dependent growth of inaccuracy. Inertial object tracking based purely on integrated sensor signals is therefore not currently feasible and requires additional techniques to reduce this unwanted side effect (cf. [3]).
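The time-dependent growth of inaccuracy can be made concrete in a few lines: a constant accelerometer bias, integrated twice, produces a position error that grows quadratically with time. The numbers below are purely illustrative.

```python
# Naive dead reckoning: a constant accelerometer bias integrated twice.
def position_error(bias, dt, steps):
    """Position error from a constant acceleration bias after `steps` integration steps."""
    v = p = 0.0
    for _ in range(steps):
        v += bias * dt          # first integration: velocity error grows linearly
        p += v * dt             # second integration: position error grows quadratically
    return p

# A tiny 0.01 m/s^2 bias already produces about 0.5 m of error after 10 s.
print(position_error(0.01, 0.01, 1000))
```

Doubling the elapsed time roughly quadruples the position error, which is why an unaided low-cost INS is only usable for short intervals.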

In practice, an INS is often coupled with other localization systems, for example with a Global Positioning System (GPS) receiver that periodically provides absolute position data while the INS is used to interpolate the intermediate values. Furthermore, advanced signal processing algorithms can help to reduce position and orientation errors and achieve better performance over time. For this it is common to use estimation filters such as the Kalman filter (cf. [4-5]) and to eliminate additional errors evoked by parasitic effects such as gravitational acceleration and the Coriolis effect. In addition, extra sensors on the inertial platform can provide helpful information, such as measurements of the earth's magnetic field.

III. SENSOR FUSION, GPS-INS-INTEGRATION

As known from the literature as well as from our own experiments, low-cost GPS navigation does not offer guaranteed accuracy better than about 12 meters of position deviation, and it also suffers additional errors caused by signal reflection. Moreover, it is not possible to acquire GPS signals in roofed areas or shadowed environments. An INS, as described above, has the problem of sensor drift and, as a result, a loss of accuracy over time. Hence, the authors have concentrated earlier work on the integration of an INS with low-cost GPS, aiming to exploit both systems' advantages while compensating for their limitations.

One of the most common algorithms for implementing sensor fusion is a Kalman filter with indirect formulation. Indirect formulation means that the estimates provided by the filter do not describe the system's motion values themselves, but rather the errors made by the INS and the inertial sensors. The algorithm processes the inertial sensor values in the so-called propagation step and the GPS information in the so-called measurement update step. In the earlier approach, both the GPS position and velocity measurements were used by the Kalman filter. After each filter iteration, the obtained error state vector was fed back into the INS mechanization block and then reset to a zero vector, making the filter algorithm work in feedback configuration. The feedback configuration allows the system states to be corrected immediately after the measurements have been processed by the filter, which keeps the error states small and the algorithm stable. The system overview is shown in Figure 1.
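The feedback step of the indirect formulation can be sketched in a few lines. This is a schematic illustration with hypothetical names, not the project's implementation.

```python
import numpy as np

def feedback_correction(ins_state, error_estimate):
    """Apply the filter's error estimate to the INS solution, then reset the error state.

    Correcting the INS immediately keeps the error state small, which is what
    makes the feedback configuration stable.
    """
    corrected_state = ins_state - error_estimate     # remove the estimated INS error
    reset_error = np.zeros_like(error_estimate)      # error state reset to a zero vector
    return corrected_state, reset_error

state, err = feedback_correction(np.array([12.3, -4.1]), np.array([0.2, -0.1]))
```

Because the error state is zeroed after every update, the filter always linearizes around a small error, in contrast to a feedforward configuration where the error estimate would keep growing.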

Figure 1: GPS-INS sensor fusion system model based on Kalman filtering in feedback configuration

The main purpose of the present project was to find and implement a low-cost positioning technique to support an INS during longer periods of indoor operation, using a technology that requires neither additional investment nor changes to the building infrastructure. Wireless hotspots are widely available inside most private and industrial buildings, as Internet service is often distributed via Wi-Fi. Hence, the first goal was to realize a simple localization method based on this technology, providing the basis for an uninterrupted low-cost navigation system for both indoor and outdoor operation through sensor fusion of INS, GPS and Wi-Fi positioning.

IV. WI-FI-INS-INTEGRATION

A. Main Concept

The overall concept for the indoor navigation part is composed of three consecutive steps of processing Wi-Fi signals, which are emitted by arbitrarily positioned hotspots inside a specific building and acquired by a mobile low-cost receiver attached to the multi-sensor system. The first step is the creation of a signal pattern database in the form of a reference file, which assigns selected positions inside the building to the specific pattern of signal levels received from a number of hotspots at exactly that position, together with a record of the MAC addresses of the selected hotspot devices (cf. Figures 2 and 3). The database file is created as a preparation step before the navigation actually starts. Navigation is then realized by comparing the continuously acquired signals to the previously created database. The position estimate is obtained using the smallest-Euclidean-distance method.
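The smallest-Euclidean-distance matching can be sketched as follows; the database values below are made-up dBm patterns for three hypothetical reference squares, not measurements from the paper.

```python
import math

# Reference database: position label -> signal-level pattern (dBm) of the observed hotspots.
database = {
    "square_1": [-45, -60, -72, -80, -90],
    "square_2": [-58, -47, -69, -85, -77],
    "square_3": [-70, -66, -50, -62, -88],
}

def match_position(scan):
    """Return the reference position whose stored pattern is closest in Euclidean distance."""
    return min(database, key=lambda pos: math.dist(database[pos], scan))

print(match_position([-46, -61, -70, -81, -91]))  # -> square_1
```

A live scan is simply compared against every stored pattern; the position with the smallest distance is forwarded as the fingerprinting fix.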

B. Detailed Concept

The indoor navigation comprises two successive operation modes:

Database creation: Definition of a number of reference positions and hotspots, acquisition of signal levels at each of those positions, and storage inside a tabular database file

Database matching: Continuous acquisition, signal level matching, estimation of the position, forwarding positive matches to the sensor fusion filter

978-1-4673-1954-6/12/$31.00 ©2012 IEEE

2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31st October 2013

Figures 2 and 3: Signal strength patterns (signal strength in dBm, −120 to 0, vs. hotspot No. 1–5) for positions No. 1 (top) and 2 (bottom) out of six defined reference positions; the MAC addresses were replaced due to data protection requirements

Firstly, the database creation requires the selection of reference positions and of the hotspots whose signal levels are to be observed. The reference positions shall be selected with a distinct relative displacement by means of the checkpoint concept described below. For the hotspot selection, a compromise has to be found between observing many signals, for achieving the lowest possible position ambiguity, and the fastest possible signal processing. Most buildings, however, have a limited number of hotspot devices, so that the first optimization criterion is often constrained. Therefore, the present approach utilized five hotspot devices as an average value between private and industrial Wi-Fi configurations. After these fundamental decisions have been made, the signal levels for the selected hotspots at each of the defined reference positions must be acquired, and the hardware identifiers as well as the acquired signal patterns must be stored to the database file. The latter is designed as a two-dimensional array inside a text file for maximizing reading and writing speed while minimizing file size and memory allocation (cf. [6]).
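A minimal sketch of how such a database file could be produced and reloaded; the hotspot MAC addresses and signal levels are invented placeholders, and an in-memory buffer stands in for the text file:

```python
import io
import numpy as np

# One row per reference position, one column per selected hotspot (dBm);
# all values below are invented placeholders.
mac_addresses = ["aa:bb:cc:00:00:0%d" % i for i in range(1, 6)]
patterns = np.array([
    [-45.0, -60.0, -75.0, -80.0, -90.0],   # reference position 1
    [-70.0, -50.0, -65.0, -85.0, -88.0],   # reference position 2
])

# Store as a plain-text two-dimensional array, MAC addresses in the header
buf = io.StringIO()
np.savetxt(buf, patterns, header=" ".join(mac_addresses))

# Reload during the navigation phase
buf.seek(0)
database = np.loadtxt(buf)
```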

The database matching mode implements the actual navigation part, which comprises the continuous Wi-Fi signal acquisition for the same hotspots selected before and the comparison of the continuously read signal levels to those recorded in the database file. A common algorithm for realizing this is the Nearest-Neighbor method (cf. [7]).

For the Nearest-Neighbor method, there are mainly two possible implementations: on one side the so-called “Manhattan distance” and on the other side the so-called “Euclidean distance”. In order to maximize the reliability of the navigational signal processing, the present approach applies the Euclidean distance variant of the Nearest-Neighbor method despite its somewhat more complex calculation (cf. [7]).

As a next step, for each continuous acquisition sample, the signal levels are compared with each row of the array inside the database file, as those rows correspond to the desired reference position coordinates. The comparison routine results in a one-dimensional array containing the Euclidean distances for all known positions: the first cell for the first reference position, the second cell for the second reference position, and so on. The final solution is reached by taking the minimum value inside the array, comparing this value to a tolerance range and returning the corresponding position coordinates. If the minimum value stays inside the bounds of the tolerance range for a minimum number of samples, the receiver device must be sufficiently close to the reported reference position. Otherwise, no reference position is detected, and the navigation is continued by the stand-alone INS mechanization.
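The matching step can be sketched as follows (illustrative database values; the additional requirement that a match persist for a minimum number of consecutive samples is omitted for brevity):

```python
import numpy as np

# Reference database: one row of signal levels (dBm) per reference position
database = np.array([
    [-45.0, -60.0, -75.0, -80.0, -90.0],   # reference position 1
    [-70.0, -50.0, -65.0, -85.0, -88.0],   # reference position 2
])

def match_position(sample, database, tolerance=10.0):
    """Return the index of the reference position with the smallest
    Euclidean distance to the sample, or None if even the smallest
    distance exceeds the tolerance range."""
    distances = np.linalg.norm(database - sample, axis=1)
    best = int(np.argmin(distances))
    return best if distances[best] <= tolerance else None

sample = np.array([-46.0, -59.0, -74.0, -81.0, -89.0])   # close to position 1
match = match_position(sample, database)
```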

C. Checkpoint Concept

The checkpoint concept referred to above is directly connected to the INS integration procedure. A low-cost INS can provide an accurate source of navigational information for limited periods of time, but it loses accuracy during long-term operation. Hence, when an INS is frequently provided with navigational reference information, it can be re-calibrated to reset the navigation errors to the accuracy of the reference system. Combining INS and Wi-Fi-based positioning exploits the good short-term stability of the INS while reducing the requirements for the density of reference positions used by the Wi-Fi system. Instead of dividing indoor environments into grids of uninterrupted reference positions, only “checkpoints” have to be defined, with a density corresponding to the performance of the inertial navigation. The checkpoint concept results in less effort required for the database creation, higher positioning reliability and better compensation of signal fluctuation effects.

V. RESULTS

As shown in figure 4 for a continuous round walkthrough inside a building corridor, signal pattern comparison can successfully lead to the recognition of previously defined reference positions. Furthermore, the application of the checkpoint concept proves to be useful, as the given displacement significantly simplifies differentiating between the single positions. The figure also shows that the signal levels can fluctuate during dynamic motion as well as during stationary periods, which represents one of the major challenges for stand-alone Wi-Fi-based indoor navigation.

VI. CONCLUSION

The presented development has shown the results of a proof-of-concept study for the extension of a previously developed GPS-INS system towards both outdoor and long-term indoor operation. It was possible to realize a low-cost Wi-Fi positioning prototype by using a previously installed hotspot infrastructure and simple signal processing, despite the hotspot positions being unknown. The present system concept is suitable for application in aided inertial navigation systems as a source of indoor reference positions. Just as GPS position measurements can be applied for the integration of INS and GPS in outdoor environments, both indoor and outdoor reference systems could now be made available; their implementation in a combined system is left to future work aiming at authentic commercial or industrial application.

VII. FURTHER DEVELOPMENT

Because Wi-Fi signals usually suffer from unforeseeable fluctuations induced by multipath propagation, environment dynamics or transmission power variations, further strategies must be found for reducing these effects on the navigational signal processing. Using an INS inside the system enables an effective reduction of fluctuation effects, as it can provide information about dynamic motion. A second strategy becomes available when the INS and the Wi-Fi reference system are fused by a Kalman filter algorithm, as the Euclidean distance information can provide a useful input to the filter and hence affect the level of trust given to the Wi-Fi reference. Finally, the application of motion monitoring algorithms can provide a powerful tool for filtering illogical results, such as too-fast or impossible motion, e.g. through a wall or outside the building.

ACKNOWLEDGMENT

We would like to express our very special thanks to Professor Dr. Markus Haid for supervising this work at the CCASS in Darmstadt. His efforts are much appreciated.

REFERENCES

[1] M. Haid, M. Kamil, T. Chobtrong, E. Günes, M. Münter, H. Tutsch, “IN-DIVER - Integrated Distribution Planning using an inertial-based tracking system,” International Conference on Flexible Automation and Intelligent Manufacturing, Tampere (Finland), 2012

[2] N. Yazdi, F. Ayazi, and K. Najafi, "Micromachined inertial sensors," Proceedings of the IEEE, vol. 86, no. 8, 1998.

[3] M. Haid, "Verbesserung der referenzlosen inertialen Objektverfolgung zur Low-cost Indoor-Navigation durch Anwendung der Kalman-Filterung, Dissertation," Universität Siegen, Siegen (Germany), 2005.

[4] M. Haid, J. Breitenbach, "Low cost inertial object tracking as a result of Kalman filter," Applied Mathematics and Computation, vol. 153, no. 2, Elsevier, 2004.

[5] O. Loffeld, Estimationstheorie Bd. I / II.: Oldenbourg Verlag, 1990.

[6] M. Paciga and H. Lutfiyya, "Herecast: An Open Infrastructure for Location-based services using Wi-Fi", Wireless and Mobile Computing, IEEE International Conference on Networking and Communications, Montreal, (Canada), 2005.

[7] I. Nikolaou, S. Denazis, "Positioning in Wi-Fi Networks", University of Patras, Patras (Greece).

Figure 4: Acquired signals for the selected hotspots with corresponding position markings


Survey of Accuracy Improvement Approaches for Tightly Coupled ToA/IMU Personal Indoor Navigation System

Vladimir Maximov
LLC RTLS, Moscow, Russia
Email: [email protected]

Oleg Tabarovsky
LLC RTLS, Moscow, Russia
Email: [email protected]

Abstract—In this work we present a personal indoor navigation system based on range measurements with the Time of Arrival (ToA) principle and an Inertial Measurement Unit (IMU). A survey of accuracy improvement approaches, including monocular camera Simultaneous Localization and Mapping (SLAM) and WiFi SLAM, is provided. The presented experimental results show that integrating data from navigation systems with different physical principles can increase the accuracy and robustness of the overall solution.

Index Terms—PDR; IMU; ToA; Monocular SLAM; Vector Field SLAM

I. INTRODUCTION

Indoor real-time locating systems (RTLS) are widespread nowadays and use various physical layers, from RF to acoustic and infrared. The RTLS system developed in our company employs RF ToA range measurements between mobile receivers (tags) and stationary base stations (anchors). It can provide a steady solution with 1-meter accuracy 80% of the time. But it is known that indoor RF range-measuring systems suffer from NLOS measurements; another limitation comes from the measurement update rate of about 1 Hz. In order to provide smoother and more robust updates, other sources of navigation information should be used. For this work we chose only those sources that are relatively autonomous (inertial, visual, field strengths), with a more autonomous and ubiquitous navigator in mind. This paper has the following structure: the first part describes the tightly coupled PDR navigator, the second part is devoted to inertially augmented monocular SLAM, and the third part gives some results on WiFi SLAM.

II. TIGHTLY COUPLED TOA/IMU NAVIGATOR

A. Pedestrian Dead Reckoning

There are many reports on pedestrian navigation systems that use inertial sensors, from foot-mounted, with a full 3D strapdown INS [1], to a 2D strapdown INS attached to the pedestrian body [2]. In our system we use a practical solution, where the pedestrian dead reckoning (PDR) navigator uses a velocity Vb that is determined by estimating the step frequency from accelerometer signals. The angle Ψ defines the rotation of the body frame with respect to the navigation frame. Fig. 1 shows the frames

Fig. 1. System frames

Fig. 2. Personal navigator functional diagram

used in the navigation system, where: X, Y - navigation frame (n-frame) axes; Xp, Yp - pedestrian frame (p-frame) axes; Xb, Yb - body frame (b-frame) axes; Ψ - heading angle; δΨ - heading angle misalignment. Fig. 2 shows the functional diagram of the tightly coupled ToA/IMU navigator, where: ab - acceleration vector in the b-frame; mb - magnetic field vector in the b-frame; ΨAHRS - AHRS heading angle; Vp - velocity in the p-frame; δVp - estimated p-frame velocity error; δΨ - estimated heading error; Rn - n-frame coordinates; δR - estimated n-frame coordinate error; R̂n - corrected coordinates in the n-frame; RCSS - range measurements of the RF chirp spread spectrum (CSS) ToA system. The velocity absolute value is calculated by multiplying the inverse of the counter value by an experimentally estimated scale factor:

Vp = Sv/Scnt (1)


where Scnt is the counter value and Sv the step scale factor. The heading angle is estimated by an AHRS (attitude and heading reference system) that fuses data from the inertial sensors and a vector magnetometer in a 15-state Kalman filter.

B. PDR Errors Correction

Pedestrian velocity and heading angle can be used to calculate coordinates in the n-frame, but drift will inevitably occur. Drift is caused by magnetic field disturbances, which are highly probable indoors; velocity derived from step frequency is also prone to various errors, ranging from false step detection to the person-varying scale factor in (1). The tightly coupled ToA/IMU navigator uses an indirect Extended Kalman Filter (EKF) that fuses PDR and ToA ranges in order to estimate and compensate the PDR error model with the following 4th-order state vector:

[δR  δΨ  δS_v]^T   (2)

where δR = (δR_x, δR_y) is the coordinate error and δS_v the pedometer scale-factor error.

1) System model: It can be easily shown that the linearized PDR error dynamics can be written as:

δṘ = [−V_y^n  V_x^n]^T δΨ + [cos Ψ  sin Ψ]^T V_p δS_v   (3)

where V_x^n, V_y^n are the velocity components in the n-frame. This gives the following system matrix F for the discrete EKF:

F = |     1       0   0        0       |
    | −V_y^n Δt   1   0   cos Ψ V_p Δt |   (4)
    |  V_x^n Δt   0   1   sin Ψ V_p Δt |
    |     0       0   0        1       |

where Δt is the sampling period.

2) Measurement model: The range measurement delivered by the CSS ToA system can be written as a function of the current position x, y and the known base station coordinates X_i, Y_i:

r_i = h(x) = √((X_i − x)² + (Y_i − y)²)   (5)

The measurement vector z is formed as the difference between predicted and measured ranges:

z = [r_1^PDR − r_1^CSS  . . .  r_n^PDR − r_n^CSS]^T   (6)
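One EKF cycle for this error model can be sketched in code; the matrix entries follow (4) and the range Jacobian follows from (5), while the state ordering, sign convention of the Jacobian, and all numeric values in the usage are assumptions for illustration:

```python
import numpy as np

def predict(x, P, Vxn, Vyn, psi, Vp, dt, Q):
    """Propagate the PDR error state [dPsi, dRx, dRy, dSv] with the
    system matrix F from (4)."""
    F = np.array([[1.0,       0.0, 0.0, 0.0],
                  [-Vyn * dt, 1.0, 0.0, np.cos(psi) * Vp * dt],
                  [ Vxn * dt, 0.0, 1.0, np.sin(psi) * Vp * dt],
                  [0.0,       0.0, 0.0, 1.0]])
    return F @ x, F @ P @ F.T + Q

def range_update(x, P, pos, anchors, r_meas, R):
    """EKF update with the range model (5) and residual (6); the Jacobian
    rows are the partials of the predicted ranges with respect to the
    coordinate errors dRx, dRy (an assumed sign convention, not taken
    verbatim from the paper)."""
    d = anchors - pos                       # vectors to the base stations
    r_pred = np.linalg.norm(d, axis=1)      # predicted ranges, eq. (5)
    z = r_pred - r_meas                     # residual vector, eq. (6)
    H = np.zeros((len(anchors), 4))
    H[:, 1] = -d[:, 0] / r_pred
    H[:, 2] = -d[:, 1] / r_pred
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```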

C. Filtering and experimental results

As the system and measurement models are defined, the standard set of discrete Joseph-form EKF equations was applied to get the estimated error values. The experimental setup included a CSS tag paired with a custom-built inertial module. Data was acquired in a typical office environment. Inertial sensor data was sampled at a 20 Hz rate together with 1 Hz CSS ToA ranges. It can be seen from Fig. 3 and 4 that the tightly coupled system can effectively cope with various misalignments and step scale factors, continuously adapting to the pedestrian. Variations of the misalignment angle plots in Fig. 3 are due to external magnetic distortions.

Fig. 3. Misalignment angle estimation (misalignment angle δΨ in rad vs. time in sec, for initial misalignments ΔΨ0 = 0, 1.2 and −2.5 rad)

Fig. 4. Step scale factor multiplier estimation (scale factor multiplier error δSv vs. time in sec, for Sv = 11, 7 and 17)

III. USING MONOCULAR SLAM FOR PDR ACCURACY IMPROVEMENT

The tightly integrated scheme described above can effectively fuse the data, compensate PDR errors and smooth CSS ToA range measurements. But it also has several drawbacks, and reliance on magnetic heading is one of them. While the AHRS combines the gyroscope and magnetometer data and can filter out short-time magnetic disturbances, long-time disturbances still pose a problem. It is known that a monocular camera SLAM algorithm can provide information on camera attitude, velocities and coordinates [3], [4]. Typically a 30 frames per second (fps) camera rate is used to make smooth tracking possible. In order to make the algorithm more suitable for mobile platforms, data from the gyroscopes was used in the prediction step and accelerometer data in the correction step, so it was possible to reduce the camera frame rate to 10 fps. Monocular SLAM augmented with inertial data is based on an EKF with a dynamically changing state vector. The state vector x includes the camera state x_c and a number of feature states x_f. Two different modes of monocular SLAM were tested: compass mode, where only the attitude of the camera is estimated, and full 6-D mode, where the attitude is estimated alongside the coordinates and velocities of the camera in the starter frame:

x = [x_c  x_f^1 . . . x_f^i]^T   (7)


The state vector for the 6-D mode:

x_c = [R_c  q  δω  υ]^T,   x_f = [R_f  θ  φ  ρ]^T   (8)

where q is the camera attitude quaternion, R_c the camera coordinate vector in the starter frame, υ the camera velocity vector in the starter frame, δω the gyroscope bias vector, and R_f, θ, φ, ρ the standard inverse-depth parametrization of a visual feature [4]. For the compass mode, x_c contains only the q and δω terms, and x_f only θ and φ.

A. System model

The system state model for the 6-D case can be written as:

[Ṙ_c  q̇  δω̇  υ̇]^T = f(x, ω) = [υ  ½Ωq  ν_ω  ν_υ]^T   (9)

where Ω is the following skew-symmetric matrix, written with the bias-corrected rates ω_x − δω_x, ω_y − δω_y, ω_z − δω_z:

Ω = |      0         −(ω_x − δω_x)   −(ω_y − δω_y)   −(ω_z − δω_z) |
    | ω_x − δω_x          0            ω_z − δω_z    −(ω_y − δω_y) |
    | ω_y − δω_y   −(ω_z − δω_z)           0           ω_x − δω_x  |
    | ω_z − δω_z     ω_y − δω_y     −(ω_x − δω_x)          0       |

Gyroscope biases and camera velocity are modeled as random-walk processes with the corresponding noises ν_ω and ν_υ. For the compass case only the q and δω components of the model are used [5].
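A minimal numerical sketch of the attitude part of (9), using the skew-symmetric matrix Ω defined above with bias-corrected rates; the simple Euler integration and the test motion are illustrative, not taken from the paper:

```python
import numpy as np

def omega_matrix(w):
    """Skew-symmetric quaternion-rate matrix for bias-corrected rates w."""
    wx, wy, wz = w
    return np.array([[0.0, -wx, -wy, -wz],
                     [ wx, 0.0,  wz, -wy],
                     [ wy, -wz, 0.0,  wx],
                     [ wz,  wy, -wx, 0.0]])

def propagate(q, w_meas, w_bias, dt):
    """One Euler step of qdot = 1/2 * Omega(w - bias) * q, re-normalized."""
    q = q + 0.5 * omega_matrix(w_meas - w_bias) @ q * dt
    return q / np.linalg.norm(q)

q = np.array([1.0, 0.0, 0.0, 0.0])           # identity attitude
for _ in range(100):                          # 10 s of 0.1 rad/s about z
    q = propagate(q, np.array([0.0, 0.0, 0.1]), np.zeros(3), dt=0.1)
yaw = 2.0 * np.arctan2(q[3], q[0])            # recovered rotation about z
```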

B. Measurement model

The monocular SLAM measurement model with a pin-hole camera was augmented with accelerometer data, which gives the system the ability to keep the local vertical. The normalized acceleration vector measured in the body frame is transformed to the navigation frame, where it is compared with the normalized local gravity vector:

‖a_n‖ = C_b^n(q) ‖a_b‖   (10)

where C_b^n is the direction cosine matrix, ‖a_b‖ the normalized acceleration vector in the b-frame, and ‖a_n‖ the estimated acceleration vector in the n-frame. The acceleration measurement is gated by the measured acceleration absolute value: only undisturbed acceleration vectors are used. Ranges from the CSS ToA system were also added to the measurement model for the 6-D case.
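The gating of acceleration measurements by magnitude can be sketched as a one-line test; the 2% threshold and the sample vectors are invented example values:

```python
import numpy as np

G = 9.81  # m/s^2, local gravity magnitude

def gravity_gate(a_body, tol=0.02):
    """Accept the accelerometer sample as an observation of the local
    vertical only if its magnitude is close to 1 g, i.e. the platform is
    quasi-static (illustrative threshold, not from the paper)."""
    return abs(np.linalg.norm(a_body) - G) < tol * G

quasi_static = gravity_gate(np.array([0.1, 0.2, 9.79]))    # accepted
disturbed = gravity_gate(np.array([3.0, 0.0, 9.81]))       # rejected
```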

C. Experimental results

The experimental setup included a BeagleBone board with a 320x240 web-camera, a custom-built AHRS module, and a CSS ToA tag. Data from the camera was sampled at a 10 fps rate, inertial data at a 100 Hz rate, and CSS ToA data at a 1 Hz rate.

1) Compass mode: The pedestrian heading angle can be evaluated with the range-only system while the pedestrian is moving, but it is ambiguous for the stationary case. Magnetic field disturbances, especially in industrial areas, will eventually distort the AHRS readings as well. To address this problem, a ceiling-looking camera was used as a source of heading angle information. The two plots of Fig. 5 show the heading angle behavior in gyro-only mode (magnetic correction was switched off) - Ψω

Fig. 5. Heading angles (Ψω and ΨSLAM in rad vs. time in sec)

Fig. 6. Attitude angles for the 6-D SLAM experiment (ΨSLAM, θSLAM, γSLAM in rad vs. time in sec)

Fig. 7. Pedestrian trajectory for the 6-D SLAM experiment (position in m; Begin and End marked)

and the heading angle ΨSLAM provided by monocular SLAM augmented with inertial data. It is clear that monocular SLAM helps to eliminate heading drift considerably.

2) 6-D mode: For this mode the pedestrian was equipped with a forward-looking hand-held camera, AHRS and CSS ToA tag. A standard office environment proved to be a very difficult place for pedestrian monocular SLAM, as the landmark set changes fast and the feature base can quickly grow beyond 100; nevertheless, for short periods of time, while the pedestrian is located in the same room, monocular SLAM can provide good support to the PDR navigator. Fig. 7 shows the estimated trajectory.


Fig. 8. Results of WiFi VFSLAM simulation

IV. WIFI SLAM

Received signal strength (RSSI) information from WiFi signals is another ubiquitous source of navigation information. An additional reason to take this information into account is that WiFi, like inertial sensors and a monocular camera, is at the core of any modern mobile platform such as a smartphone. RSSI has long been used for navigation purposes, with the fingerprinting approach as a basic one. One approach that enables collecting RSSI data without the need for fingerprinting is called Vector Field SLAM (VFSLAM) [6]. Being a standard EKF-SLAM with a dynamically changing state vector that consists of a vehicle part and a number of feature parts, it is original in its way of representing the RSSI surface: features represent the RSSI levels at the corners of a regular square grid, and the RSSI value inside the current grid cell is calculated by bilinear interpolation.
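The bilinear-interpolation representation can be sketched as follows; the grid values are invented and the helper only handles points strictly inside the grid:

```python
import numpy as np

def expected_rssi(x, y, grid, cell=1.0):
    """Bilinear interpolation of the RSSI surface from the four grid-corner
    features of the cell containing (x, y); interior points only."""
    j, i = int(x // cell), int(y // cell)    # indices of the cell
    u, v = x / cell - j, y / cell - i        # fractional position in cell
    return ((1 - u) * (1 - v) * grid[i, j] +
            u * (1 - v) * grid[i, j + 1] +
            (1 - u) * v * grid[i + 1, j] +
            u * v * grid[i + 1, j + 1])

# Invented 1 m grid with four corner features (dBm)
grid = np.array([[-60.0, -70.0],
                 [-80.0, -90.0]])
centre = expected_rssi(0.5, 0.5, grid)       # mean of the four corners
```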

A. WiFi SLAM simulation and test results

A Matlab simulation of VFSLAM was performed to evaluate its effectiveness in a well-controlled environment. Gaussian processes [7] were used to approximate random surfaces simulating signal strengths for three base stations. The grid size was chosen to be 1 m. Fig. 8 shows the result of the simulation. The green, blue and red surfaces represent the reference fields, while markers of the same color correspond to field values estimated by the VFSLAM algorithm. It can be seen that the estimated values are quite close to the reference values, so the next step is to evaluate the algorithm with real-life data.

To evaluate the performance of VFSLAM in a real-life situation, data from PDR and WiFi RSS collected in an office environment was fused by VFSLAM with the following state vector:

x = [x  y  Ψ  m_1 . . . m_n]^T   (11)

where m_1 . . . m_n are the RSSI levels at the regular grid corners. Pedestrian velocity and angular rate around the vertical axis (derived from the AHRS and corrected with the gyroscope bias estimate) are used to propagate the state forward, while WiFi

Fig. 9. Calculated pedestrian paths (distance in m): PDR trajectory vs. WiFi VFSLAM trajectory

RSS measurements, taken at a 0.5 Hz rate, served as the only external information. The grid size was chosen to be 5 meters. Fig. 9 shows the two calculated pedestrian paths, and it can easily be seen that even for a rough 5-meter grid, WiFi VFSLAM delivers much more robust results than PDR alone.

V. CONCLUSION

In this paper the authors propose a practical approach to indoor pedestrian navigation which combines several different sources of navigation information. A survey of accuracy improvement approaches is provided, with emphasis on the usefulness of the system on a mobile smartphone-like platform. Many open questions remain, from both practical and theoretical standpoints: how to optimally fuse the different frameworks (WiFi, visual and inertial), how to switch from one mode of navigation to another, and so on. In our ongoing research activity we use the BeagleBone as the prototype platform, and the next step will be the integration of all surveyed approaches onto this compact platform to get a fully functioning prototype.

REFERENCES

[1] E. Foxlin, "Pedestrian tracking with shoe-mounted inertial sensors," IEEE Computer Graphics and Applications, vol. 25, no. 6, pp. 38-46, Nov. 2005.

[2] M. M. Atia, A. Noureldin, J. Georgy, M. Korenberg, "Bayesian Filtering Based WiFi/INS Integrated Navigation Solution for GPS-Denied Environments," NAVIGATION, Journal of The Institute of Navigation, vol. 58, no. 2, Summer 2011, pp. 111-125.

[3] J. Rydell, E. Emilsson, "CHAMELEON: Visual-inertial indoor navigation," Position Location and Navigation Symposium (PLANS), 2012 IEEE/ION, pp. 541-546, 23-26 April 2012.

[4] J. Civera, A. J. Davison, J. Montiel, "Inverse Depth Parametrization for Monocular SLAM," IEEE Transactions on Robotics, vol. 24, no. 5, pp. 932-945, Oct. 2008.

[5] J. Montiel, A. J. Davison, "A visual compass based on SLAM," IEEE International Conference on Robotics and Automation, 2006.

[6] J.-S. Gutmann, E. Eade, P. Fong, M. E. Munich, "Vector Field SLAM - Localization by Learning the Spatial Variation of Continuous Signals," IEEE Transactions on Robotics, vol. 28, no. 3, pp. 650-667, June 2012.

[7] S. Reece, S. Roberts, "An introduction to Gaussian processes for the Kalman filter expert," Information Fusion (FUSION), 2010 13th Conference on, pp. 1-9, 26-29 July 2010.


Enhancement of the Automatic 3D Calibration for a Multi-Sensor System

The improved 3D calibration method of a radio-based Multi-Sensor System with 9 Degrees of Freedom (DoF) for Indoor Localisation and Motion Detection

Enrico Köppe
Division 8.1 Sensors, Measurement and Testing Methods
BAM, Federal Institute for Materials Research and Testing, Berlin, Germany
[email protected]

Daniel Augustin, Achim Liers, Jochen Schiller
Computer Systems & Telematics, FU-Berlin, Berlin, Germany
daniel.augustin, achim.liers, [email protected]

Abstract—The calibration of the integrated sensors in a multi-sensor system has gained interest over the last years. In this paper we introduce an enhanced calibration process, which is based on the preceding study described in [1]. The enhancement consists of the integration of a gyroscope. So far only the accelerometer and the magnetic field sensor were taken into account for the calibration process. Due to this improvement we reach a better approximation of the accelerometer and the magnetic field sensor. Additionally, we minimize the standard deviation of the single sensors and improve the accuracy of the positioning of a moving person.

Keywords—sensor calibration and validation; person tracking; inertial navigation system; inertial measurement unit; embedded systems; multi-sensor system

I. INTRODUCTION

The fast development of mobile sensor technologies, for instance GPS tracking or MEMS, and new hard- and software solutions for smart phones result in growing interest as well as innovative solutions for outdoor and indoor localization. For indoor localization based on inertial sensors it is necessary to calibrate the sensor system with an initial calibration method. The aim of this work is to enhance the accuracy of indoor positioning and tracking through body-motion sensing, by means of an improved calibration of the inertial sensors.

II. CALIBRATION PROCEDURE

For the processing of motion sequences for localization it is necessary to use sensors with high sensitivity and time-stable measurement behavior. This can be ensured by a continuous recalibration of the sensors. The procedure presented in this paper uses a recalibration independent of external equipment. The basis for the procedure is the natural movement of the person who is wearing the sensor. From the performed motion sequences of the person, the data necessary for the sensor correction is recorded and analyzed. At first the acceleration and the magnetic field sensors are calibrated: over a long time period their measurement values describe the surface of an ellipsoid, whose parameters are approximated. By using the corrected magnetic field and acceleration data the current sensor orientation is calculated and utilized as a comparison value for the calibration of the gyroscope.

Figure 1 shows the three steps of the calibration procedure, with a continuous recalibration of all three inertial sensors used.

Figure 1. Schematic diagram of the calibration procedure

A. First step: Data acquisition and normalization of the sensor data

With the data of the three used sensors (9 Degrees of Freedom (DoF)) measured in free movement, it is possible to generate two ellipsoids (one with the acceleration sensor and one with the magnetic field sensor, 3 + 3 Degrees of Freedom) and a straight line (gyroscope). In the next step we need to normalize the data. For that reason we calculate the rest position and the local earth normal, which depends on the gravitation field and the magnetic field of the earth. Then we filter the data using a standardized finite impulse response (FIR) filter with a Gauss window function (elimination of outliers caused by fast movements).

[Figure 1, referenced above, contains three branches: (A) the accelerometer and magnetic field sensor data are low-pass filtered, Kalman-filtered to suppress noise, and calibrated by the approximation of an ellipsoid; (B) the sensor orientation is approximated using the gravitation field and the magnetic field, and adjusted to determine the calibration of the gyroscope; (C) the gyroscope data are low-pass filtered, the resting point is determined using the dead-man (Totmann) circuit, and the gyroscope is calibrated by the approximation of a straight line.]

B. Second step: Calculation of the 6 ellipsoid parameters (acceleration, ACC, and magnetic field sensor, MAG)

For a good estimation of the two ellipsoids we use the least-median-of-squares approximation and the bisection method. There are still some errors, which can be eliminated by the optimization of the ellipsoid, especially by the optimization of the parameters x, y, z, rx, ry, rz. Detailed information on the calculation steps is given in [1]. Other perturbations are estimated with a validation of the measurement values in two stages of the realized FIR filter (critical frequency).
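For illustration, an axis-aligned ellipsoid (6 parameters: centre x, y, z and radii rx, ry, rz) can be estimated from raw sensor samples with an ordinary linear least-squares fit; the paper's least-median-of-squares approximation is more robust to outliers, so this is only a simplified stand-in, and the sample data is synthetic:

```python
import numpy as np

def fit_ellipsoid(pts):
    """Fit a1*x^2 + a2*y^2 + a3*z^2 + a4*x + a5*y + a6*z = 1 by linear
    least squares and recover centre and radii of an axis-aligned
    ellipsoid (a simplified stand-in for the robust LMedS fit)."""
    x, y, z = pts.T
    A = np.column_stack([x * x, y * y, z * z, x, y, z])
    a = np.linalg.lstsq(A, np.ones(len(pts)), rcond=None)[0]
    centre = -a[3:] / (2.0 * a[:3])
    s = 1.0 + np.sum(a[:3] * centre ** 2)
    radii = np.sqrt(s / a[:3])
    return centre, radii

# Synthetic magnetometer-like samples on an offset ellipsoid
rng = np.random.default_rng(1)
u = rng.uniform(0.0, 2.0 * np.pi, 500)
v = rng.uniform(0.0, np.pi, 500)
pts = np.column_stack([0.2 + 1.1 * np.sin(v) * np.cos(u),
                       -0.3 + 0.9 * np.sin(v) * np.sin(u),
                       0.1 + 1.0 * np.cos(v)])
centre, radii = fit_ellipsoid(pts)
```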

C. Third step (new): Data transfer from the gyroscope to the sensor data ACC and MAG

In this new step we combine the data of the three sensors. For that we transfer the data of the accelerometer and the magnetic field sensor into an angular velocity (°/s). This is done using the first derivative of the data of both sensors and calculating the simultaneous change in the angle between two consecutive calibrated data points. From the change of the orientation of the sensors we obtain the angular velocity.

In the next step it is important to perform a time synchronization of the gyroscope data, the accelerometer data, and the magnetic field sensor data; a time shift is caused by the filtering of the accelerometer and magnetic field sensor data. After that, the Gauss-Newton method is applied to determine the scaling and the resting point of the gyroscope based on the angular velocities calculated from the accelerometer and the magnetic field sensor.

The described calibration process is performed continuously. One result of this continuous calibration is the minimization or elimination of internal and external error sources such as temperature influence, drift behaviour and different offsets such as the hard-iron and soft-iron offsets.
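As a simplified single-axis illustration of this step: the paper applies the Gauss-Newton method, but for a linear scale-and-offset model ordinary least squares already suffices; the reference signal is synthetic, and the scale and offset values are chosen to mirror the gx entries in Table 1:

```python
import numpy as np

def fit_gyro_axis(raw, reference):
    """Solve reference ≈ scale * raw + offset in the least-squares sense,
    where `reference` is the angular velocity computed from consecutive
    calibrated accelerometer/magnetometer orientations."""
    A = np.column_stack([raw, np.ones_like(raw)])
    scale, offset = np.linalg.lstsq(A, reference, rcond=None)[0]
    return scale, offset

t = np.linspace(0.0, 10.0, 200)
omega_ref = 20.0 * np.sin(t)                 # °/s from ACC/MAG orientations
raw = (omega_ref - 1.67) / 0.98              # sensor with offset and scale error
scale, offset = fit_gyro_axis(raw, omega_ref)
```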

III. RESULTS

For the evaluation of the calibration procedure, 109 experiments were carried out. The resulting calibration values of the three different sensors are shown in Table 1.

TABLE I. CALIBRATION AND OFFSET VALUES FOR THE SPECIFIC SENSORS

Sensor                 axis   offset of the rest position   deformation
Accelerometer          ax     -15.95 mg                     0.969
                       ay     -33.90 mg                     1.042
                       az      41.34 mg                     0.961
Magnetic field sensor  mx     -81.15 mGauss                 0.969
                       my     116.12 mGauss                 1.042
                       mz      60.27 mGauss                 0.961
Gyroscope              gx       1.67 °/s                    0.98
                       gy       0.51 °/s                    1.01
                       gz       0.41 °/s                    0.97

The maximum standard deviation for each sensor is shown in Table 2. Furthermore, all standard deviations of the calibration values are below the noise level of each single sensor.

TABLE II. STANDARD DEVIATION OF THE THREE DIFFERENT SENSORS IN EACH AXIS

Sensor / axis           Standard deviation (calibration)   Standard deviation (noise)
Accelerometer
  ax (rest position)    2.3                                3.6
  ax (1g)               2.5
  ay (rest position)    2.4                                4.5
  ay (1g)               5.2
  az (rest position)    2.4                                4.8
  az (1g)               2.4
Magnetic field
  mx (rest position)    0.47                               4.3
  mx (local)            0.11
  my (rest position)    0.09                               4.0
  my (local)            0.23
  mz (rest position)    0.22                               4.1
  mz (local)            0.07
Gyroscope
  gx (rest position)    0.14                               1.07
  gx (digit in °/s)     0
  gy (rest position)    0.1                                0.2
  gy (digit in °/s)     0
  gz (rest position)    0.1                                0.1
  gz (digit in °/s)     0

Figure 2 shows good conformity between the calibration results of the gyroscope (abbr. G) and the calibrated and calculated data of the accelerometer and magnetic field sensor (abbr. MA). The minimal difference between the angular velocity ϕ(G) of each axis and ϕ(MA) of each axis is caused by the calibration.

Figure 2. Calculated and measured calibration data of the gyroscope

IV. CONCLUSION

The procedure of continuous recalibration shown in this paper can be used to improve indoor positioning and to ensure the long-term stability of commercial sensors. With this procedure a higher accuracy in position determination is possible. Additionally, the influences of internal sensor drift, temperature dependence and external disturbances, for example local magnetic fields, are reduced. Moreover, the promising results of the described


calibration procedure enable applications on the consumer market as well as in the mobile phone and smartphone sector.

REFERENCES

[1] E. Köppe, D. Augustin, A. Liers and J. Schiller, "Automatic 3D Calibration for a Multi-Sensor System," IPIN 2012.


2013 International Conference on Indoor Positioning and Indoor Navigation, 28th-31st October 2013

A gait recognition algorithm for pedestrian navigation system using inertial sensors

Wen Liu, Navigation College, Dalian Maritime University, Dalian, China, [email protected]
Yingjun Zhang, Navigation College, Dalian Maritime University, Dalian, China, [email protected]

Abstract—In this paper, we investigate a gait recognition algorithm that can be applied in a foot-mounted pedestrian navigation system, where the gait types are computed using the dynamic time warping algorithm. This algorithm copes with walking samples of different dimensions resulting from the randomness of walking motion. Further, in order to obtain the walking samples, we propose a step cycle detection algorithm based on the 3D gyro magnitude and the sliding window method. Subsequently, by combining pedestrian walking characteristics, a gait sample set is established using actually measured data, where the gait samples include a continuous horizontal walking sample, an intermittent horizontal walking sample, an upstairs walking sample and a downstairs walking sample, which are common in daily life. Taking advantage of the dynamic time warping algorithm, the gait can be recognized and the best route can also be computed. Finally, employing the artificial marking method, we evaluate the performance of the gait recognition algorithm using actually measured data. The test results show that the recognition accuracy is reliable, specifically, 95.86% for continuous horizontal walking, 90.73% for intermittent horizontal walking, 93.48% for upstairs walking, and 98.85% for downstairs walking.

Keywords: gait recognition; pedestrian navigation; inertial sensors; dynamic time warping.

I. INTRODUCTION

GPS is an important component in positioning systems and plays a key role in outdoor positioning. However, GPS continues to struggle indoors due to the failure of satellite signals to penetrate buildings [1]. Furthermore, recent developments in the field of smart mobile terminals have led to an increased interest in indoor positioning and navigation. In most recent studies, indoor positioning and navigation has been discussed in two different ways: one is the Local Positioning System (LPS), and the other is Pedestrian Dead Reckoning (PDR) [2]. Compared with LPS, the PDR approach has a number of attractive features: autonomy, cost-effectiveness, and no need to install markers or instrumentation in advance. Specifically, PDR is divided into stride and heading systems (SHS) and inertial navigation systems (INS) [3]. Pedestrian inertial navigation technology, which is based on MEMS inertial sensors, has gradually become an indoor navigation solution owing to its independence, portability, low cost and other characteristics.

Pedestrian inertial navigation systems widely adopt the framework proposed by Foxlin [4], characterized by extended Kalman filtering and MEMS inertial sensors strapped to the instep; the main problem of such systems is error accumulation caused by inertial sensor drift. To solve this problem, Kalman filtering is used to track the system error, and the error is corrected by the Zero Velocity Update (ZVUP) algorithm [4, 5]. In order to obtain higher navigation accuracy without introducing other sensors, we analyze the gait motion to find error correction information. The aim of this paper is to analyze the gait recognition algorithm.

Recent developments in the gait recognition field adopt the solution of strapping many Inertial Measurement Units (IMUs) to different positions of the body and achieve gait recognition by analyzing the patterns of acceleration and angular velocity change [6, 7]. To simplify this method, only one IMU is used in our solution. In addition, the dynamic time warping algorithm is used to cope with walking samples of different dimensions resulting from the randomness of walking motion. Employing the artificial marking method, we evaluate the performance of the gait recognition algorithm using actually measured data. This paper is divided into three parts: the first part deals with the walking cycle detection algorithm, the second part analyses the gait recognition algorithm, and the algorithm is evaluated in the last part.

II. WALKING CYCLE DETECTION ALGORITHM

A walking cycle is the process from the start to the end of a walking motion. Specifically, walking motion is divided into single-step and complex-step; because only one IMU is strapped to the instep, the walking cycle here refers to the complex-step. This part analyses the walking cycle detection algorithm, whose aim is to divide successive raw data into walking cycle sections, which are the objects of the gait recognition algorithm. A reliable walking cycle detection algorithm is a prerequisite of gait recognition. For the purpose of accurate detection, a method based on the 3-axis angular velocity and a sliding window is proposed. The idea of the algorithm is as follows: first, the raw data matrix is constructed; then a threshold on the norm of the 3-axis angular velocity is set, the static states are determined by the sliding window method, and the static state matrix is constructed; finally, the walking cycle matrix is constructed based on the static state matrix. The specific steps are as follows:


A. Construct the raw data matrix

The data measured by the MEMS inertial sensors are saved in a matrix of N×10. Each row represents a set of sensor data; specifically, the columns are, in order, data index, 3-axis acceleration, 3-axis angular velocity, and 3-axis magnetic field strength. Our algorithm requires the index and the 3-axis angular velocity, so a raw data matrix of N×4 is constructed, as shown in Table I.

TABLE I. RAW DATA MATRIX SAMPLE

         Angular velocity (rad/s)
Index    X axis      Y axis      Z axis
1        0.002411    0.982755    -0.000122
…        …           …           …
3689     1.537429    7.157153    -0.480536
3690     1.256393    7.626344    -0.999916
…        …           …           …
end      0.023122    0.786572    -0.0021342

B. Construct static state matrix

Set two identifying signs: sign Start identifies the start of each static state, and sign End identifies the end of each static state. The sliding window is formed by signs Start and End. The entire detection process runs from the first row to the end of the raw data matrix. To begin with, signs Start and End point to the first row, and the width of the sliding window is zero. The detection rule is that the norm of the 3-axis angular velocity at index i is less than 0.5 rad/s; set signs Start and End to index i if the detection rule is met, so that the width of the sliding window is zero.

Slide sign End to the index j at which the norm of the 3-axis angular velocity becomes greater than or equal to 0.5 rad/s. Then set sign End to index j-1, so the width of the sliding window is j-i. Note that the "cross zero" state also meets the detection rule. In order to prevent false detections, sliding windows are ignored if their width is less than 24; in other words, a sliding window is valid when its width is greater than 24. This threshold is determined by the sampling frequency and the static state duration.

The signs Start and End of each detection are saved in a matrix of N×3, which is called the static state matrix. Each row represents a static state: the static state index is saved in the first column, and Start and End are saved in the second and third columns, as shown in Table II.

Slide sign Start to End after each static state is detected, so that the width of the sliding window is zero again. Then slide signs Start and End to the next index if the detection rule is met.

Repeat the above steps until the end of the raw data matrix is reached. The resulting static state matrix is shown in Table II.

TABLE II. STATIC STATE MATRIX SAMPLE

Index    Start    End
1        3855     3935
…        …        …
10       5391     5438
…        …        …
end      5706     5751
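The detection steps above can be sketched as follows (a minimal re-implementation under our reading of the rules; function and variable names are ours):

```python
import numpy as np

MIN_WIDTH = 24    # minimum sliding window width (samples)
THRESHOLD = 0.5   # rad/s, threshold on the 3-axis angular velocity norm

def detect_static_states(gyro_xyz):
    """Return (start, end) index pairs of static states.

    gyro_xyz: (N, 3) array of 3-axis angular velocity in rad/s.
    A sample is static when the norm of the angular velocity is below
    THRESHOLD; windows narrower than MIN_WIDTH are ignored.
    """
    quiet = np.linalg.norm(gyro_xyz, axis=1) < THRESHOLD
    states, i, n = [], 0, len(quiet)
    while i < n:
        if quiet[i]:
            j = i
            while j + 1 < n and quiet[j + 1]:  # slide sign End forward
                j += 1
            if j - i >= MIN_WIDTH:             # ignore narrow windows
                states.append((i, j))
            i = j + 1
        else:
            i += 1
    return states
```

For example, 100 samples with motion only between indices 30 and 49 yield the two static states (0, 29) and (50, 99), while a 10-sample quiet stretch is rejected as too narrow.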

C. Construct walking cycle matrix

To construct the walking cycle matrix, which is the basis of gait recognition, we set the end of the static state in row i as the start of the walking motion in row i+1; the walking cycle matrix is then constructed as shown in Table III. Compared with the static state matrix, the walking cycle matrix has one more column, which represents the start of each walking cycle. Therefore, the raw data of each walking cycle can be obtained.

Using the above algorithms, the raw data are divided into walking cycle sections, which are saved in the rows of the walking cycle matrix.

TABLE III. WALKING CYCLE MATRIX SAMPLE

Index    Start(a)    Start(b)    End
1        3720        3855        3935
…        …           …           …
11       5438        5542        5595
…        …           …           …
end      5901        6021        6075

a. Start stands for the beginning of each walking cycle;
b. Start stands for the beginning of the static state in a walking cycle.
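Under the construction rule just described (the end of the static state in row i becomes the start of the walking cycle in row i+1), the walking cycle matrix can be derived from the static state matrix; a sketch with our own names:

```python
def walking_cycles(static_states):
    """Derive walking cycle rows (Start_a, Start_b, End) from the
    static state matrix, given as a list of (start, end) pairs.

    Start_a is the end of the previous static state (the beginning of
    the walking cycle); Start_b and End are the bounds of the static
    state that closes the cycle.
    """
    return [(static_states[i - 1][1],) + static_states[i]
            for i in range(1, len(static_states))]

# Matching the first rows of Tables II/III: a static state (3855, 3935)
# preceded by one ending at index 3720 (the 3600 start is invented for
# illustration) yields the cycle (3720, 3855, 3935).
cycles = walking_cycles([(3600, 3720), (3855, 3935)])
```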

III. GAIT RECOGNITION ALGORITHM

The gait recognition algorithm is the identification process of gait samples; specifically, the distance between a gait sample and the sample set is computed by the recognition algorithm. The sample set and the recognition algorithm are therefore the core factors.

A. Construct sample set

In this paper, we study three kinds of gait that are common in indoor environments: horizontal walking, upstairs walking and downstairs walking. From the measured data we found that another kind of horizontal walking exists, which occurs in the transition between different gaits. Therefore, we divide horizontal walking into two kinds: continuous horizontal walking and intermittent horizontal walking. In addition, since the data showed that the Y-axis angular velocity varies most significantly, we adopt the Y-axis angular velocity as the variable to describe the samples.

The four kinds of walking sample are obtained by trial and error; the aim is to find relatively standard samples that improve the recognition accuracy. Finally, the sample set is constructed, as shown in Fig. 1.

B. Dynamic time warping algorithm

A delay may occur when two arrays have the same variation tendency, due to variation of the walking speed. In addition, the dimensions of two walking arrays may differ. For this problem, the dynamic time warping algorithm is adopted.


[Four panels (a)-(d): angular rate (rad/s) plotted against sample index.]

Figure 1. Sample set (a. Continuous horizontal walking, 139 elements; b. Intermittent horizontal walking, 163 elements; c. Upstairs walking, 155 elements; d. Downstairs walking, 136 elements).

The algorithm takes advantage of the dynamic programming idea [8]: the time axis is warped non-uniformly to align samples of different dimensions. The detailed process is as follows. Suppose there are two vectors, t (sample vector) and r (vector to be identified), t = [1 2 10 3] and r = [1 1 10 2 3]; t has 4 dimensions and r has 5 dimensions.

1) Construct the Euclidean distance matrix d. The matrix d is 4×5; d_ij is the square of the distance between the i-th element of t and the j-th element of r:

d =
  0   0  81   1   4
  1   1  64   0   1
 81  81   0  64  49
  4   4  49   1   0

2) Construct the adjacency matrix D. The adjacency matrix D has the same dimensions as the Euclidean distance matrix d. D is initialized to zero, and D(1,1) is assigned the value d(1,1). Then the first row of the adjacency matrix is filled: for i > 1, D(1,i) is assigned the sum of d(1,i) and D(1,i-1). The first column is assigned in the same way:

D =
  0   0  81  82  86
  1   0   0   0   0
 82   0   0   0   0
 86   0   0   0   0

Then the all-zero 3×4 sub-block at the lower right corner is filled as follows: starting from its first row, from left to right, each element is formed from two parts, one being the element at the same position in the Euclidean distance matrix, the other the minimum of three neighbouring elements, specifically the left, the upper and the upper-left element:

D =
  0   0  81  82  86
  1   1  64  64  65
 82  82   1  65 113
 86  86  50   2   2

The distance between the two vectors appears in the lower right corner of the adjacency matrix D. Therefore, the distance between t and r is 2, and the distance between samples of different dimensions can be computed using this algorithm.
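The worked example can be reproduced with a short implementation of the algorithm (our own code, following the matrices above):

```python
import numpy as np

def dtw_distance(t, r):
    """Dynamic time warping distance with squared-difference cost.

    Builds the Euclidean distance matrix d and the adjacency
    (accumulated cost) matrix D exactly as in the worked example;
    the distance is the lower-right element of D.
    """
    t, r = np.asarray(t, float), np.asarray(r, float)
    d = (t[:, None] - r[None, :]) ** 2      # d[i, j] = (t_i - r_j)^2
    D = np.zeros_like(d)
    D[0, :] = np.cumsum(d[0, :])            # first row
    D[:, 0] = np.cumsum(d[:, 0])            # first column
    for i in range(1, len(t)):
        for j in range(1, len(r)):
            D[i, j] = d[i, j] + min(D[i - 1, j],      # up
                                    D[i, j - 1],      # left
                                    D[i - 1, j - 1])  # upper-left
    return D[-1, -1]

print(dtw_distance([1, 2, 10, 3], [1, 1, 10, 2, 3]))  # -> 2.0
```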

C. Gait Recognition Case

In order to illustrate the application of the dynamic time warping algorithm in gait recognition, we use a case study. First, a 173-element sample is selected; its relationship with the sample set is shown in Fig. 2. The distances computed using the dynamic time warping algorithm are shown in Table IV. The distance between the sample and continuous horizontal walking is minimal, so the sample belongs to continuous horizontal walking.

[Angular rate (rad/s) versus index for the test sample overlaid on the continuous horizontal, intermittent horizontal, upstairs and downstairs walking samples.]

Figure 2. Correlation between test sample and sample set

TABLE IV. DISTANCES COMPUTED USING THE DYNAMIC TIME WARPING ALGORITHM

Distance between the test sample and the sample set:
Continuous horizontal walking      13.2673
Intermittent horizontal walking   284.7651
Upstairs walking                  301.8935
Downstairs walking                 61.3711

IV. EXPERIMENTAL VERIFICATION

For the purpose of verifying the proposed gait recognition algorithm, an experimental verification is conducted. The MEMS inertial sensor MTx [9] (28A58G25) is used; the experimental data are raw data measured with the MTx at a sampling frequency of 120 Hz. The algorithm accuracy is computed using the gait recognition algorithm and the artificial marking method as follows:


A. Artificial Marking Method

The true gait information during a walking motion is necessary to compute the accuracy of the gait recognition algorithm. Therefore, we mark each walking cycle manually during the experiment, which is called the artificial marking method.

B. Experimental Analysis

We conducted 9 experiments in stadiums, shopping malls and laboratories; in total, 733 walking cycles were performed by the same person. As mentioned above, the recognition accuracy is computed with the help of the artificial marking method. The statistical results are as follows: 95.86% for continuous horizontal walking, 90.73% for intermittent horizontal walking, 93.48% for upstairs walking, and 98.85% for downstairs walking. The detailed results are shown in Table V.

TABLE V. EXPERIMENT RESULTS

       Artificial Marking     Gait Recognition      Recognition Accuracy (%)
NO.    C    I    U    D       C    I    U    D      C     I     U     D
1      69   2    0    0       69   2    1    0      98.5  100   ---   ---
2      187  2    0    0       185  3    1    0      98.9  100   ---   ---
3      132  4    12   0       131  7    10   0      99.2  100   83.3  ---
4      45   2    0    0       43   3    1    0      95.5  100   ---   ---
5      23   6    7    8       24   5    7    8      100   83.3  100   100
6      46   2    0    0       46   1    1    0      100   50    ---   ---
7      47   2    0    0       45   3    1    0      95.7  100   ---   ---
8      28   6    6    8       28   5    7    8      100   83.3  100   100
9      24   10   32   29      18   12   29   28     75    100   90.6  96.5

C stands for continuous horizontal walking; I stands for intermittent horizontal walking; U stands for upstairs walking; D stands for downstairs walking; --- stands for none.

V. CONCLUSION AND PROSPECT

The main problem of pedestrian navigation systems based on MEMS inertial sensors is error accumulation caused by the drift of the inertial sensors. In recent years, there has been increasing interest in correcting this error within the existing framework. For this purpose, in this paper, we analyzed the gait recognition algorithm and tried to take advantage of gait information to correct the error. Specifically, a walking cycle detection algorithm based on the 3-axis angular velocity and a sliding window is proposed to divide successive raw data into walking cycle sections, which are the objects of the gait recognition algorithm. In addition, a gait recognition algorithm based on dynamic time warping, which copes with walking samples of different dimensions resulting from the randomness of walking motion, is proposed as well. Finally, in order to evaluate the performance of the gait recognition algorithm, 9 experiments were conducted. The test results show that the recognition accuracy is reliable, specifically, 95.86% for continuous horizontal walking, 90.73% for intermittent horizontal walking, 93.48% for upstairs walking, and 98.85% for downstairs walking.

This research has raised many questions in need of further investigation. As a next step, we will focus on error correction algorithms based on gait information.

ACKNOWLEDGMENT

The research is supported by Projects 61073134 and 51179020 of the National Natural Science Foundation of China, the National 863 Project (No. 2011AA110201), and the Applied Fundamental Research Project of the Ministry of Transport of China (No. 2013329225290).

REFERENCES

[1] Dedes G., Dempster A. G. Indoor GPS positioning - challenges and opportunities [C]. IEEE 62nd Vehicular Technology Conference, Texas, USA, 2005: 412-415.
[2] Jiménez A. R., Seco F., Zampella F., et al. PDR with a foot-mounted IMU and ramp detection [J]. Sensors, 2011, 11(10): 9393-9410.
[3] Harle R. A survey of indoor inertial positioning systems for pedestrians [J]. IEEE Communications Surveys & Tutorials, 2013, PP(99): 1-13.
[4] Foxlin E. Pedestrian tracking with shoe-mounted inertial sensors [J]. IEEE Computer Graphics and Applications, 2005, 25(6): 38-46.
[5] Jimenez A. R., Seco F., Prieto J. C., et al. Indoor pedestrian navigation using an INS/EKF framework for yaw drift reduction and a foot-mounted IMU [C]. The 7th Workshop on Positioning, Navigation and Communication (WPNC), HTW Dresden, Germany, 2010: 135-143.
[6] M. J. V-N. Recognition of human motion related activities from sensors [D]. Malaga, Spain: University of Malaga, 2010.
[7] Frank K., Nadales M. J. V., Robertson P., et al. Reliable real-time recognition of motion related human activities using MEMS inertial sensors [C]. The 23rd International Technical Meeting of the Satellite Division of the Institute of Navigation, Portland, OR, United States, 2010: 2919-2932.
[8] Myers C., Rabiner L., Rosenberg A. E. Performance tradeoffs in dynamic time warping algorithms for isolated word recognition [J]. IEEE Transactions on Acoustics, Speech and Signal Processing, 1980, 28(6): 623-635.
[9] Xsens. http://www.xsens.com/en/general/mtx.


2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31st October 2013

An UWB Based Indoor Compass for Accurate Heading Estimation in Buildings

Abdelmoumen Norrdine (1), David Grimm (2), Joerg Blankenbach (1) and Andreas Wieser (3)
(1) RWTH Aachen University, Geodetic Institute, Aachen
(2) Leica Geosystems, Heerbrugg
(3) ETH Zurich, Institute of Geodesy and Photogrammetry, Zurich
norrdine; [email protected]; [email protected]; [email protected]

Abstract—The demand for positioning systems locating people and/or objects automatically inside buildings or in other GPS-denied environments has rapidly increased during recent years, and several systems have been developed. Most of them are only designated for position estimation. However, in addition to the position, the user's orientation is also useful or even mandatory for certain applications.

In this contribution the determination of the azimuth (heading) of mobile devices is presented using an indoor positioning system based on time-of-flight measurements with Ultra Wide Band (UWB) pulses. The system enables the determination of 3D positions with accuracies in the cm-range even when multipath propagation is present. The main focus of this contribution is the determination of the azimuth (heading) of mobile users without using antenna arrays, a magnetic compass, or inertial sensors (and therefore not requiring any prior knowledge about an initial orientation). The proposed method for azimuth determination is based on selective shadowing of UWB signals using a rotating attenuation shield. The time-varying attenuation of the received UWB waves allows estimating the direction of arrival of the respective signals at the receiving antenna by means of signal processing methods. If the emitting transceiver's position is known, the antenna orientation can be derived therefrom.

Using a prototype, first experiments have been carried out. The results prove the feasibility and indicate an accuracy of less than 1 degree under good conditions in an indoor environment.

Index Terms—Heading, azimuth, orientation, indoor localization, trilateration, Ultra Wide Band (UWB)

I. INTRODUCTION

Recently, the need for automated systems locating people and objects inside buildings (indoors) has rapidly increased. A main reason for this is the general availability of positioning and navigation outdoors and the demand for seamless extension of the related applications to indoor environments. Some examples are pedestrian navigation in public buildings (such as railway stations or airports), locating firefighters in emergency situations, tracking and finding assets, or automated robot control. Global Navigation Satellite Systems (GNSS) are only available outdoors except under very special conditions with very low accuracy requirements (e.g. accepting deviations of 100 m or more). The satellite signals are heavily attenuated by walls, ceilings and objects, and therefore cannot be used indoors. Worldwide intensive research in indoor positioning is a result.

In addition to the pure position information (typically 2D or 3D coordinates in a local reference system), the spatial orientation of the user or mobile device may also be useful or even mandatory for certain applications. Examples are augmented reality applications where a view of the real world is augmented by virtual objects. Both the 3D position and orientation of the user, i.e., all six degrees of freedom (6 DOF), have to be known with high accuracy in such cases. A further example is the use of the moving direction (heading or azimuth) of the user for dead reckoning in pedestrian navigation.

The most widespread method for heading estimation in pedestrian navigation systems is the utilization of a magnetic compass. However, magnetic anomalies occurring in indoor environments due to electrical wiring, metal furniture or building materials (reinforced concrete) may cause heading estimation errors exceeding 30° [1, 2], rendering the magnetic azimuth virtually useless. A method requiring no additional sensors, but applicable during motion only, is heading determination by means of the baseline between subsequent position estimates. Often, inertial measurement units (IMUs) are used to indicate the orientation [3, 4]. However, this approach suffers from accumulated errors because of the required multiple integration of the sensor output. Another approach for orientation estimation is the use of an antenna array, i.e., of multiple antennas rigidly attached to the mobile device. The accuracy is proportional to the distance between the antennas, so this approach is useful for GPS-based orientation estimation of outdoor platforms but hardly applicable to pedestrians or small devices in indoor environments because of signal obstructions and fading effects [5]. A method for attitude estimation in indoor applications based on a vision system is presented in [6]. However, it is limited to environments where straight lines can be detected (e.g. doors and corridor borders).

In the following, an alternative approach for azimuth determination based on a highly accurate Indoor Local Positioning System (ILPS) using Ultra Wide Band (UWB) signals is introduced.

This paper is organized as follows: first, the UWB-ILPS is presented. Next, the proposed system and method for


Figure 1: System architecture of UWB-ILPS.
Figure 2: Camera orientation by using two UWB antennas mounted on a rigid baseline.
Figure 3: Rotating attenuation shield.
Figure 4: UWB antenna mounted on the NORDIS hardware.

measuring the orientation are outlined. This is followed by the presentation and discussion of a real-world experiment, and by a conclusion.

II. UWB-INDOOR LOCAL POSITIONING SYSTEM

UWB systems have advantageous properties for positioning in indoor environments. They yield a high spatial resolution, are robust with respect to multipath, and their signals penetrate various materials. The UWB-ILPS used for this contribution was already developed in previous research and successfully implemented in a prototype [7-9]. The UWB-ILPS consists of several TimeDomain™ P210 UWB transceivers operating in the 3.2-6.3 GHz frequency range. It has a positioning update rate of more than 3 s and a radiated power of less than 50 µW. Therefore it can only be deployed for static operation and in indoor environments with temperate obstacles. The slope distances between all the transceivers can be derived even in non-line-of-sight scenarios by measuring the Time Of Flight (TOF) of the UWB pulses. In order to avoid the need for synchronization between the transceivers, TOF is implemented as two-way ranging [7]. Thus, ranging to multiple transceivers is accomplished successively. Using the distances between the Reference Stations (RSi), whose coordinates (Xi, Yi, Zi) in the building reference system are known, and each Mobile Station (MS) with unknown coordinates (XMS, YMS, ZMS) (see Fig. 1), positioning results with an accuracy up to 2 cm have been achieved [8].
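A sketch of how a position can be obtained from such ranges (this is a generic linearized least-squares trilateration, not necessarily the adjustment used in the actual system; all names are ours):

```python
import numpy as np

def trilaterate(rs, ranges):
    """Least-squares position from ranges to known reference stations.

    rs     : (n, 3) reference station coordinates (n >= 4 for 3D)
    ranges : (n,) measured slope distances to the mobile station
    Linearizes by differencing the squared range equations against
    the first station, a common alternative to iterative adjustment.
    """
    rs, d = np.asarray(rs, float), np.asarray(ranges, float)
    A = 2.0 * (rs[1:] - rs[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(rs[1:] ** 2, axis=1) - np.sum(rs[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Synthetic check: four stations, MS at (3, 4, 2).
rs = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10.0]])
true = np.array([3.0, 4.0, 2.0])
pos = trilaterate(rs, np.linalg.norm(rs - true, axis=1))
```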

The heading determination could be achieved using the baselines between subsequent positions of the mobile station, or using the simultaneously estimated positions of at least two antennas mounted rigidly at the MS. The latter method has been successfully implemented for yaw determination for a digital camera (Fig. 2) [7-9]. However, the first approach requires rather fast motion and is only applicable if a constant relation between the MS orientation and the direction of motion is maintained; the second approach is of limited applicability to small objects or pedestrians.

III. SINGLE ANTENNA AZIMUTH DETERMINATION

The main focus of this contribution is the determination of the azimuth of a leveled UWB antenna without using an antenna array or the (past) trajectory. If the application does not force the antenna to remain leveled, the missing rotation angles (roll and pitch) can be determined using an additional sensor, e.g. an inclinometer. This is not further investigated in this paper.

The proposed method originates from [3]. It is based on the idea of selective shadowing of the UWB signals received from reference stations RSi. For signal shadowing, an Attenuation Shield (AS) (15 cm × 7 cm × 4 mm, PVC material) is utilized, rotating around the receiver antenna (Fig. 3).

For the experimental evaluation of the concept a rotating device called NORDIS hardware was utilized, which was originally used for research in GPS orientation determination outdoors [3]. A BroadSpec™ omnidirectional UWB antenna was mounted on the NORDIS hardware. The AS is rotated about the antenna boresight with constant velocity (Fig. 4). The angles of arrival of the RS signals are indicated with regard to the zero direction of the NORDIS hardware, which is, however, not the required heading of the MS. The heading of the NORDIS hardware can, though, be derived from the azimuth from MS to RS - calculated using the coordinates of MS and RS - and the determined angle of arrival.

Figure 5: Measurement setup.
Figure 6: Received UWB signal.
Figure 7: Example of signal strength curve.
Figure 8: Signal strength with transceivers at RS1 and RS2 during one revolution of the attenuation shield.

For the evaluation of the proposed approach, test measurements have been accomplished in the geodetic measuring lab of ETH. To this end, the mobile station equipped with the NORDIS hardware as well as the reference stations were set up in a horizontal plane on fixed reference points (survey pillars) whose relative positions are exactly known (Fig. 5). The signal strength associated with these two reference stations has been continuously captured during four revolutions of the AS using the antenna equipped with the NORDIS hardware.

To get the azimuth values from the raw data, several signal processing steps are performed in post-processing:

A. Leading edge detection and signal strength calculation

During the shield rotation the signal strength was calculated continuously by integrating the received UWB signal. The rotation is slow with respect to the signal integration, such that the rotation angle of the shield can be assumed constant during integration. The respective integration interval starts at the leading edge of the pulse and is 2 ns wide. Fig. 6 shows an exemplary UWB signal and the estimated leading edge. Figure 7 shows a signal strength curve (in black) obtained by integrating several UWB signals during four revolutions of AS.
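The leading-edge detection and windowed integration described above can be sketched as follows. The fraction-of-peak edge criterion and all parameter values are our assumptions; the paper only specifies the 2 ns window.

```python
import numpy as np

def signal_strength(waveform, fs, threshold_ratio=0.2, window_ns=2.0):
    """Estimate the signal strength of one received UWB pulse.

    Detects the leading edge as the first sample whose envelope exceeds
    a fraction of the peak amplitude (an assumed criterion), then
    integrates the squared signal over a fixed window (2 ns in the
    paper) starting at that edge.

    waveform : 1-D array of sampled amplitudes
    fs       : sampling rate in Hz
    """
    env = np.abs(waveform)
    edge = np.argmax(env >= threshold_ratio * env.max())  # first crossing
    n = int(round(window_ns * 1e-9 * fs))                 # window length in samples
    segment = waveform[edge:edge + n]
    return np.sum(segment ** 2) / fs                      # approx. energy in window
```

Because the shield rotates slowly, one such value per received pulse yields the signal strength curve of Fig. 7 as a function of the rotation angle.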

B. Signal smoothing

To remove signal outliers and reduce signal fluctuations, the measured signal first has to be smoothed. Due to the quasi-sinusoidal nature of the signal, the Fast Fourier Transform (FFT) has been used in combination with the inverse FFT (IFFT) in order to resynthesize the signal based on its spectral analysis. In the spectral analysis stage, frequency components lower than a preset threshold are set to zero. Figure 7 shows a smoothed signal strength curve.
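A minimal FFT/IFFT smoothing pass in this spirit might look as follows; the thresholding rule (a fraction of the largest spectral magnitude) is our assumption, since the paper only states that weak components are zeroed.

```python
import numpy as np

def fft_smooth(signal, keep_ratio=0.1):
    """Resynthesize a quasi-sinusoidal signal from its dominant spectrum.

    Spectral components whose magnitude falls below a preset fraction of
    the largest component are set to zero before the inverse transform,
    removing outliers and high-frequency fluctuations.
    """
    spectrum = np.fft.rfft(signal)
    mask = np.abs(spectrum) >= keep_ratio * np.abs(spectrum).max()
    return np.fft.irfft(spectrum * mask, n=len(signal))
```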

C. Time delay estimation

The Time Delay (TD) is a measure of the angle between the two RS. The calculation is based on TD estimation between the measured waveforms associated with the two transceivers at RS1 and RS2, respectively (Fig. 5). These two waveforms x(RS1) and r(RS2) are depicted in Figure 8.

In this early stage of research two methods have been used for TD estimation:

Local maximum difference: The TD is estimated by detecting the maxima within the two waveforms x and r. The respective maximum occurs when MS, RS and the rotating shield are exactly lined up (Fig. 8). One maximum occurs when MS lies between AS and RS. In this case the received power consists of the line-of-sight signal and the signal reflected from AS. The second maximum occurs when AS lies between MS and RS. The occurrence of the second maximum results from the power gain caused by signal diffraction around AS.


Figure 9: Cross-correlation results.

The difference between the local maxima of the

waveforms x and r corresponds to the rotation angle.
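A sketch of the local maximum difference method, assuming the signal-strength curves are sampled uniformly over one shield revolution; the function name is ours, only the dominant maximum of each curve is used, and the wrap-around direction ambiguity is ignored.

```python
import numpy as np

def local_max_delay_deg(x, r, samples_per_rev=None):
    """Angle between two reference stations from the offset of the
    signal-strength maxima over one revolution of the attenuation shield.

    x, r            : signal-strength curves for RS1 and RS2
    samples_per_rev : number of samples in one full shield revolution
    """
    if samples_per_rev is None:
        samples_per_rev = len(x)
    lag = (np.argmax(r) - np.argmax(x)) % samples_per_rev  # wrap to one revolution
    return lag * 360.0 / samples_per_rev                   # convert samples to degrees
```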

Cross-correlation: The common method to determine TD is cross-correlating the two signals from the two reference transceivers [3]. In this work a similar method is used. TD is calculated independently by finding a specific template curve t within the signals x and r. To find the template curve location, the cross-correlation has been used. The template curve t is similar to a bell curve (cf. Fig. 7). The peak and the width of the template curve depend on the shape of the rotating shield and can be calculated beforehand by a suitable calibration. The cross-correlation value of

the measured signal x and the template curve t at delay δ is defined as

    R_xt(δ) = Σ_{i=1..N} (x[i] − m_x)(t[i−δ] − m_t) / √( Σ_{i=1..N} (x[i] − m_x)² · Σ_{i=1..N} (t[i−δ] − m_t)² )    (1)

where m_x and m_t are the means of x and t, respectively. TD is indicated by the delay of the maximal correlation value, as shown in equation

    δ_max = argmax_δ [ R_xt(δ) ]    (2)
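Equations (1) and (2) translate directly into code. This sketch slides the calibrated template over the measured signal and returns the delay maximizing the normalized cross-correlation; names are ours.

```python
import numpy as np

def template_delay(signal, template):
    """Delay of the best match of template t within signal x, using the
    normalized cross-correlation of eq. (1); the returned delay is the
    argmax of eq. (2)."""
    n = len(template)
    t0 = template - template.mean()
    best_delta, best_r = 0, -np.inf
    for delta in range(len(signal) - n + 1):
        x0 = signal[delta:delta + n] - signal[delta:delta + n].mean()
        denom = np.sqrt(np.sum(x0 ** 2) * np.sum(t0 ** 2))
        r = np.sum(x0 * t0) / denom if denom > 0 else 0.0
        if r > best_r:
            best_r, best_delta = r, delta
    return best_delta
```

Running this on both waveforms x and r and differencing the two delays gives the TD between the reference stations.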

By applying both methods to the signals from the previous example, the averaged TD between the two curves corresponds to 91.15° (Fig. 8) and 90.27° (Fig. 9) using the local maximum difference and the cross-correlation method, respectively. The true angle is a right angle (90°) (Fig. 5). Since the reference points in the measuring lab are geodetically determined with high accuracy, the realization of the true angle can be assumed to be error free.

IV. CONCLUSION

Besides position estimation for indoor applications, the orientation determination of objects and persons is also a challenging task. The proposed method of selective shadowing of radio signals may be used to determine the angle of arrival of signals and contribute to the determination of the spatial orientation of a mobile user in a building. The results of a first experiment using a modified UWB transceiver as mobile station are promising. Further research is required to quantify the attainable accuracy depending on geometric configuration, environmental conditions, signal propagation through materials (walls), and MS kinematics. This also includes the simulation of UWB wave propagation using ray-tracing methods and wave diffraction equations in order to determine the optimal attenuation shield dimension and position.

The accuracy of TD estimation is a function of the signal-to-noise ratio (SNR) and the angular sampling rate of the signal strength. By increasing the signal acquisition update rate and the UWB signal quality with the new-generation P410 UWB transceiver [10], the TD estimation error, and consequently the heading estimation error, might be reduced significantly. Due to their lower power consumption, higher update rate and reduced size, the P410 radios could also be used for pedestrian navigation. Moreover, the presented time delay estimation method and additional methods, such as signal zero crossing or adaptive signal processing based methods, have to be examined and compared to each other [11, 12].

REFERENCES

[1] Afzal, M. H.; Renaudin, V.; Lachapelle, G., "Assessment of Indoor Magnetic Field Anomalies using Multiple Magnetometers", Proceedings of ION GNSS, 2010.

[2] Skvortzov, V. Y.; Hyoung-Ki, L.; SeokWon, B.; YongBeom, L., "Application of Electronic Compass for Mobile Robot in an Indoor Environment", IEEE International Conference on Robotics and Automation, pp. 2963-2970, 2007.

[3] Grimm, D. E., "GNSS Antenna Orientation Based on Modification of Received Signal Strengths", Dissertation (Dr. sc. ETH), ETH Zurich, 2012.

[4] Hesch, J. A.; Roumeliotis, S. I., "An indoor localization aid for the visually impaired", IEEE International Conference on Robotics and Automation, pp. 3545-3551, 2007.

[5] Kuylen, L. V.; Nemry, P.; Boon, F.; Simsky, A.; Lorga, J. F. M., "Comparison of Attitude Performance for Multi-Antenna Receivers", European Journal of Navigation, Vol. 4, No. 2, pp. 1-9, 2006.

[6] Kessler, C.; Ascher, N.; Frietsch, M.; Weinmann, M.; Trommer, G., "Vision-based Attitude Estimation for Indoor Navigation using Vanishing Points and Lines", Proceedings of IEEE/ION PLANS, pp. 310-318, 2010.

[7] Norrdine, A., "Präzise Positionierung und Orientierung innerhalb von Gebäuden", Dissertation, Schriftenreihe Fachrichtung Geodäsie der TU Darmstadt, Heft 29, 2009.

[8] Blankenbach, J.; Norrdine, A., "Mobile Building Information Systems based on precise Indoor Positioning", Journal of Location Based Services, Vol. 5, Issue 1, pp. 22-37, Taylor & Francis, 2011.

[9] Pflug, C., "Ein Bildinformationssystem zur Unterstützung der Bauprozesssteuerung", Dissertation, Schriftenreihe des Instituts für Baubetrieb der TU Darmstadt, D50, 2009.

[10] Time Domain Corporation, http://www.timedomain.com/

[11] Zhou, C.; Qiao, C.; Zhao, S.; Dai, W.; Li, L., "A zero crossing algorithm for time delay estimation", IEEE 11th International Conference on Signal Processing (ICSP), pp. 65-69, 2012.

[12] Park, S.; Kim, Y. T., "Adaptive signal processing algorithms for time delay estimates and tracking", Proceedings of the 20th Southeastern Symposium on System Theory, pp. 433-437, 1988.


Accuracy of an indoor IR positioning system with

least squares and maximum likelihood approaches

F. Domingo-Perez, J. L. Lázaro-Galilea,

E. Martín-Gorostiza, D. Salido-Monzú

Department of Electronics

University of Alcalá

Alcalá de Henares, Spain

[email protected]

A. Wieser

Institute of Geodesy and Photogrammetry

ETH Zürich

Zürich, Switzerland

[email protected]

Abstract—This paper focuses on the predicted accuracy of indoor

positioning of mobile clients emitting modulated infrared signals

(IR). The related positioning system makes use of several anchor

nodes receiving the IR signal and measuring phase differences of

arrival, which are converted into range difference values. These

range difference values are used with hyperbolic trilateration to

estimate the position of the respective emitter. This work deals

with two issues of localization techniques: the selection of the best

sensor subset and the selection of the optimum algorithm in the

sense of localization accuracy and computational effort. We

compare nonlinear least squares (NLS) and maximum likelihood

estimation using the Cramer-Rao lower bound as a benchmark to

select the appropriate sensor subset and the optimum algorithm

in each case. Results show that accurate estimates can be obtained by neglecting the correlations and using NLS with the best sensor subset according to the sensor-target geometry.

Keywords-least squares approximation; maximum likelihood

estimation; phase difference of arrival; source location

I. INTRODUCTION

Sensor resource management (SRM) [1] is related to localization techniques in the sense that SRM deals with sensor placement for optimal localization (e.g. highest accuracy in a whole area) and the selection of the best sensor configuration among available options (sensor subset selection). This paper shows the effect of the latter in a Phase Difference of Arrival (PDOA) infrared (IR) localization system. The effect of selecting a sensor subset that provides good geometry conditions is analyzed by simulation for maximum likelihood estimation (MLE) and nonlinear least squares (NLS).

The paper is organized as follows. Section II gives an overview of PDOA localization. The IR system is briefly described in section III. Section IV shows the simulation scenario and results. Finally, section V provides the conclusions of the study.

II. POSITION ESTIMATION WITH PDOA

N anchor nodes measure phase of arrival of a sinusoidally modulated IR signal transmitted from a board that acts as the target. Pairing a reference node with the remaining N-1 nodes and differencing their phase measurements gives a set of N-1 PDOA values. These values can be converted into range

differences, resulting in a set of N-1 hyperbolae that intersect in a single point in the absence of noise, whereas in real conditions the point of intersection must be estimated.
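For illustration, converting a PDOA value into a range difference only needs the modulation wavelength. The 8 MHz modulation frequency below is taken from Section III; the function name is ours and cycle ambiguities are ignored in this sketch.

```python
import math

C = 3e8          # propagation speed in vacuum, as used in the paper (m/s)
F_MOD = 8e6      # modulation frequency from Section III (Hz)

def pdoa_to_range_difference(delta_phi_rad, f_mod=F_MOD, c=C):
    """Convert a phase difference of arrival (radians) into a range
    difference (metres): one full cycle of the modulated signal
    corresponds to one modulation wavelength c / f_mod."""
    wavelength = c / f_mod                       # 37.5 m at 8 MHz
    return delta_phi_rad / (2.0 * math.pi) * wavelength
```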

The N-1 range differences can be expressed as a function of the parameters to be estimated (θ = [x y z]^T):

    Δd = f(θ) + ε,    (1)

where Δd is the N-1 measurement vector, f(θ) is the noiseless range difference function vector in terms of θ, and ε represents the deviation. Solving (1) by NLS or MLE means minimizing a Euclidean distance (2) or a Mahalanobis distance (3):

    (Δd − f(θ))^T (Δd − f(θ)),    (2)

    (Δd − f(θ))^T Σ^-1 (Δd − f(θ)).    (3)

We apply the iterative Gauss-Newton algorithm to solve (2) and (3) due to the nonlinearity of f(θ):

    θ^(k+1) = θ^k + (J^T W J)^-1 J^T W (Δd − f(θ^k)),

where the superscripts k and T denote the iteration index and the transpose operator, respectively. J is the Jacobian matrix of f(θ) evaluated at θ^k and W is a weight matrix. In the case of MLE, W = Σ^-1, whereas W is the identity matrix in NLS.
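The Gauss-Newton update above can be sketched for a planar range-difference geometry as follows; function and variable names are ours, and passing W = Σ^-1 gives the MLE variant while the default identity weight gives NLS.

```python
import numpy as np

def gauss_newton_tdoa(anchors, ref, dd, x0, W=None, iters=50):
    """Gauss-Newton solver for hyperbolic trilateration.

    anchors : (N-1, dim) positions of the non-reference anchor nodes
    ref     : (dim,) position of the reference node
    dd      : (N-1,) measured range differences d_i - d_ref
    x0      : initial position guess
    W       : weight matrix (Sigma^-1 for MLE, identity for NLS)
    """
    x = np.asarray(x0, float)
    W = np.eye(len(dd)) if W is None else W
    for _ in range(iters):
        di = np.linalg.norm(anchors - x, axis=1)
        dref = np.linalg.norm(ref - x)
        f = di - dref                      # predicted range differences
        # Jacobian of f at the current estimate
        J = (x - anchors) / di[:, None] - (x - ref) / dref
        x = x + np.linalg.solve(J.T @ W @ J, J.T @ W @ (dd - f))
    return x
```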

III. SYSTEM DESCRIPTION

The system contextualizing this work achieves range difference estimation by measuring phase differences of a modulated IR signal continuously reaching different receivers. The IR emitter, mounted on board the robot to be positioned, generates an 8 MHz intensity modulated signal to drive a wide angle IR-LED at 940 nm. The receivers, placed in fixed and known positions in the ceiling of the area, are formed by a low level conditioning stage adapting the photocurrent generated by a wide angle silicon PIN photodiode. The outputs of the receivers are simultaneously digitized and range difference measurements are estimated from the resulting sequences [2].

978-1-4673-1954-6/12/$31.00 ©2013 IEEE

22/278

Page 28: International Conference on Indoor Positioning and Indoor Navigation

2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31st October 2013


Figure 1. Optimum sensor subset selection.

Figure 2. MLE vs. NLS, sensors I, IV and V.

The phase measurement noise is modeled as a zero mean normal distribution whose variance is inversely proportional to the output signal-to-noise ratio of the receiver. The phase error is converted into a distance error using the modulation frequency and the propagation speed in vacuum (c = 3·10^8 m/s).

Five sensors are deployed to cover a square area of 9 m²; they are placed in the corners and in the center according to [3] (the height of the sensors is 2.80 m). The height of the emitter is constant and known (0.65 m).

IV. RESULTS

Results have been obtained with 5000 Monte Carlo runs. Fig. 1 shows the positioning cell under test. 21 test points have been tested with three, four and five sensors, with MLE and NLS. The RMSE is compared with the square root of the trace of the Cramer-Rao Lower Bound (CRLB) and plotted in Figs. 2, 3 and 4; the figure captions show the sensors in use. When just three sensors are used, we neglect the measurements of the farther sensors (II and III, see Fig. 1), which are added in the subsets containing four and five sensors; sensor V is always the reference. Figs. 2, 3 and 4 show that we can neglect the correlations of the measurements using NLS and reach the CRLB by selecting the appropriate subset, which is depicted in Fig. 1 for each sensor configuration.
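The benchmark used here, the square root of the trace of the CRLB, can be computed from the Jacobian of the range-difference function and the measurement covariance. The following is a sketch with an illustrative correlated covariance (the off-diagonal terms arise from the shared reference node), not the authors' exact noise model.

```python
import numpy as np

def crlb_rmse_bound(anchors, ref, x, Sigma):
    """Square root of the trace of the CRLB for a range-difference
    position estimate at point x.

    Sigma : covariance of the N-1 range-difference measurements.
    """
    di = np.linalg.norm(anchors - x, axis=1)
    dref = np.linalg.norm(ref - x)
    # Jacobian of the range-difference function at x
    J = (x - anchors) / di[:, None] - (x - ref) / dref
    fim = J.T @ np.linalg.inv(Sigma) @ J      # Fisher information matrix
    return np.sqrt(np.trace(np.linalg.inv(fim)))
```

Evaluating this bound over the 21 test points for each candidate sensor subset is one way to pick the geometrically best subset before running any estimator.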

V. CONCLUSIONS

This paper has shown how important it is to take SRM into consideration in an IR positioning system. Selecting the best subset in relation to the geometry of the cell and the point of interest allows the application of lower-complexity algorithms and avoids the computation of the correlations. Further research will include a study of the computation time that can be saved and the derivation of an indicator to select the best subset.

ACKNOWLEDGMENT

This research was supported by the Spanish Research Program through the project ESPIRA (ref. DPI2009-10143). F. Domingo-Perez thanks the FPU program (Ministerio de Educación, Cultura y Deporte, Spanish Government, 2012).

REFERENCES

[1] C. Yang, L. Kaplan, and E. Blasch, “Performance measures of covariance and information matrices in resource management for target state estimation,” IEEE Trans. Aerosp. Electron. Syst., vol. 48, no. 3, pp. 2594–2613, Jul. 2012.

[2] E. M. Gorostiza, J. L. Lázaro Galilea, F. J. Meca Meca, D. Salido Monzú, F. Espinosa Zapata, and L. Pallarés Puerto, “Infrared sensor system for mobile-robot positioning in intelligent spaces,” Sensors, vol. 11, no. 5, pp. 5416-5438, May 2011.

[3] Y. Chen, J.-A. Francisco, W. Trappe, and R. P. Martin, “A practical approach to landmark deployment for indoor localization,” in 3rd annu. IEEE Commun. Soc. on Sens. and Ad Hoc Commun. and Netw. (SECON’06), Reston, VA, 2006, pp. 365-373.

Figure 3. MLE vs. NLS, sensors I, II, IV and V.

Figure 4. MLE vs. NLS, sensors I, II, III, IV and V.

23/278

Page 29: International Conference on Indoor Positioning and Indoor Navigation

2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31st October 2013

An indoor navigation approach for low-cost devices

Andrea Masiero, Alberto Guarnieri, Antonio Vettore and Francesco Pirotti

Interdepartmental Research Center of Geomatics (CIRGEO)

University of Padova, Padova, Italy

[email protected]

Abstract—The increasing diffusion of low-cost devices is motivating the development of an ever-growing number of mobile applications. On the other hand, indoor navigation has recently become a topic of wide interest, especially thanks to its possible use in some socially relevant applications (e.g. indoor localization during emergencies). Motivated by these considerations, this paper proposes a Bayesian probabilistic approach to the problem of indoor navigation with low-cost mobile devices (e.g. smartphones). The proposed approach deals with the unavailability of the GPS signal by integrating geometric information on the environment with measurements provided by the inertial navigation system and the radio signal strength of a standard wireless network. The proposed system takes advantage of the sensor measurements to build a statistical model of the characteristics of the environment. The estimated model of the environment is used to improve the localization ability of the navigation device.

Keywords- indoor navigation; sensor fusion; nonlinear filtering

I. INTRODUCTION

The capillary diffusion of smartphones and tablets and the unreliability of the GPS signal [9, 10] in certain operating conditions (e.g. indoor environments) are motivating an increasing interest in the development of alternative navigation systems for low-cost mobile devices.

Several approaches have been considered in the literature ([1, 2, 4, 7, 13]) to deal with the lack of the GPS signal, most of them based on the use of Inertial Navigation System (INS) measurements and on the Radio Signal Strength (RSS) of wireless networks. However, the results that can be obtained by using INS or RSS separately are often unsatisfactory. On the one hand, because of the low reliability of INS measurements, position estimation systems based on such updates quickly drift from the real track. On the other hand, WiFi signal instability and changes in the environment do not allow a sufficiently small position estimation error for systems based exclusively on the RSS. Hence, it is nowadays commonly accepted that indoor navigation systems have to integrate the use of different sensors (e.g. INS and RSS measurements) to deal with the unreliability (or lack) of the GPS signal. In this direction, several approaches have been recently proposed in the literature [5, 8, 12].

This paper considers a Bayesian approach to tackle the indoor pedestrian navigation problem, where information from INS and RSS measurements is integrated with a priori geometrical and physical information on the environment.

Information integration is formulated as a nonlinear optimization problem, and effective tracking is obtained by means of a multiple hypothesis approach. Furthermore, sensor measurements can also be used to improve the model of the environment and to detect regions with specific characteristics (e.g. landmarks [11]): such characteristics can be used to improve the subsequent performance of the navigation system.

The results obtained in our simulations in a university building suggest that the proposed approach allows good navigation accuracy to be obtained using low-cost devices (provided with a minimum number of navigation sensors).

II. SYSTEM DESCRIPTION

A. Characterization of the Navigation System

In this work it is assumed that the device used for navigation satisfies the following requirements: it is provided with a 3-axis accelerometer and a 3-axis magnetometer. Furthermore, since in terrestrial applications the height with respect to the floor is usually of minor interest, navigation is considered as a planar tracking problem. However, notice that certain exceptions to the last assumption are admitted, e.g. to deal with stairs and lifts. Since this work is mainly motivated by the interest in developing a navigation system for (standard) low-cost mobile devices, simulations have been performed using standard smartphones, thus without the use of any external sensor.

Let (ut, vt) be the position of the smartphone, expressed with respect to the North and East directions, at time t, and let (us, vs, ws) be the smartphone (local) coordinate system. Then, conventionally, the heading direction is assumed to approximately correspond to one of the axes of the coordinate system (the system is designed to estimate and correct heading direction discrepancies, with absolute value lower than 15 degrees, with respect to such direction).

The rationale is that of using a dead reckoning-like approach: a proper analysis of the accelerometer measurements allows the detection of human steps, while the magnetometer allows the estimation of the movement direction with respect to North. In order to make the estimation method more robust, such information is integrated with that provided by RSS measurements and the geometrical characteristics of the building. RSS measurements of the standard WiFi networks are provided by the corresponding sensor in the smartphone.
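A minimal step detector in this spirit flags local maxima of the acceleration magnitude above a threshold; all parameter values here are illustrative assumptions, not the paper's.

```python
import numpy as np

def detect_steps(acc, fs, thresh=11.0, min_gap_s=0.3):
    """Detect human steps as peaks of the acceleration magnitude.

    acc : (n, 3) accelerometer samples in m/s^2 (gravity included)
    fs  : sampling rate in Hz
    A sample is a step candidate if the magnitude exceeds `thresh` and
    is a local maximum; candidates closer than `min_gap_s` are merged.
    Returns the list of step sample indices.
    """
    mag = np.linalg.norm(acc, axis=1)
    min_gap = int(min_gap_s * fs)
    steps, last = [], -min_gap
    for i in range(1, len(mag) - 1):
        if mag[i] > thresh and mag[i] >= mag[i - 1] and mag[i] > mag[i + 1]:
            if i - last >= min_gap:
                steps.append(i)
                last = i
    return steps
```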

24/278

Page 30: International Conference on Indoor Positioning and Indoor Navigation

2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31st October 2013

B. Dynamic Model of the System

The measurements available to the navigation procedure are the lengths st of the steps [6], the angles αt (on the horizontal plane) with respect to the North direction, and the RSS measurements. Here the index t in st indicates the progressive step number; analogously, αt denotes the angle associated with the t-th step. Exploiting an RSS channel model [4], the RSS measurements are converted into distance measurements: i.e. dj,t is the distance measurement at the t-th step with respect to the j-th Access Point (AP) (dj,t may be empty), and dt is the vector formed by the set of distances dj,t, for all j.
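A common RSS channel model of the kind cited here ([4]) is the log-distance path-loss model; this inversion sketch uses placeholder calibration values and is only meant to illustrate the RSS-to-distance step.

```python
def rss_to_distance(rss_dbm, p0_dbm=-40.0, d0=1.0, n=2.5):
    """Invert a log-distance path-loss model to estimate distance:

        RSS(d) = p0 - 10 * n * log10(d / d0)

    p0_dbm : RSS at the reference distance d0 (calibration parameter)
    n      : path-loss exponent (environment dependent)
    All parameter values here are illustrative placeholders.
    """
    return d0 * 10.0 ** ((p0_dbm - rss_dbm) / (10.0 * n))
```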

Assuming that the starting position (u0, v0) is known, the information provided by {uτ, vτ}τ=1,…,t and {ατ, sτ}τ=1,…,t is equivalent:

    [u_{t+1}; v_{t+1}] = [u_t; v_t] + s_t [sin α_t; cos α_t]    (1)

The system dynamics and the measurements are modeled as follows:

    [α_{t+1}; s_{t+1}] = [α_t; s_t] + w_t    (2)

    y_t = C_t ([α_t; s_t; d_t] + [ε_{b,t}; 0; 0]) + ν_t    (3)

where w_t and ν_t are assumed to be independent zero mean Gaussian white noises, C_t is a matrix formed by ones and zeros that selects the measurements available at step t, and ε_{b,t} is a bias in the direction measurement. Measurements of α_t, s_t and d_t are assumed to be independent, and the measurement errors are assumed to be Gaussian (the measurement errors of s_t and d_t are assumed to be zero-mean, whereas, according to the bias assumption stated above, the error of α_t is assumed to have mean ε_{b,t}, which can be estimated from data).
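The dead reckoning recursion of equation (1) is a one-liner; this sketch follows the paper's convention, with the step angle measured on the horizontal plane with respect to North.

```python
import math

def dr_update(u, v, s, alpha):
    """One step of the dead-reckoning recursion of equation (1):
    displace the position by the step length s along the measured
    heading alpha (radians, relative to North)."""
    return u + s * math.sin(alpha), v + s * math.cos(alpha)
```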

III. LOCALIZATION

A. Multiple Hypothesis Tracking

Following a Bayesian approach, the tracking algorithm estimates the position of the device by integrating geometric information on the environment with that provided by the sensor measurements. Specifically, the spatial domain of interest (in our case the three floors of the building) is partitioned into a set of L disjoint regions. Then, the discrete variable λt is defined to be equal to i if the position of the device is inside region i at time t. Thus λt is a discrete state variable, and the vector Λt, formed by the values of λ collected from time 0 to t, represents a rough description of the temporal track of the smartphone from time 0 to t. Then, the problem of estimating the positions of the device can be formulated as in (4). This problem is solved by using interior point methods and properly setting the initial guess of the solution [3].

    (X̂_t, Λ̂_t) = argmax_{X_t, Λ_t} [ − Σ_τ (α_τ(u_τ, v_τ) − α_τ)² / σ_α²
                  − Σ_τ (s_τ(u_τ, v_τ) − s_τ)² / σ_s²
                  − Σ_{j,τ} (d_{j,τ}(u_τ, v_τ) − d_{j,τ})² / σ_d²
                  + log p(X_t | Λ_t, G) + log p(Λ_t | G) ]    (4)

B. Estimation of System Characteristics

Measurements provided by the sensors are used to estimate specific characteristics of the system and of the environment. In particular, the influence of sensor errors on the orientation measurements (provided by the magnetometer) is estimated and (partially) corrected online.

IV. RESULTS

A low-cost smartphone, a Huawei Sonic U8650, has been used to validate the proposed navigation system in a building of the University of Padova. Taking into consideration tracks of approximately 600 steps, the mean estimation error of the current position is 2.5 m, whereas with fixed time-delayed estimation the mean error is 2.3 m.

REFERENCES

[1] M. Barbarella et al., "Improvement of an MMS trajectory, in presence of GPS outage, using virtual positions", ION GNSS 2011.

[2] M. Barbarella, S. Gandolfi, A. Meffe, and A. Burchi, "A test field for Mobile Mapping System: design, set up and first test results", MMT 2011.

[3] B. M. Bell, J. V. Burke, G. Pillonetto, "An inequality constrained nonlinear Kalman-Bucy smoother by interior point likelihood maximization", Automatica, vol. 45 (1), pp. 25-33, January 2009.

[4] A. Cenedese, G. Ortolan, and M. Bertinato, "Low-density wireless sensor networks for localization and tracking in critical environments", IEEE Trans. on Vehicular Technology, vol. 59 (6), pp. 2951-2962, July 2010.

[5] N. El-Sheimy, K.-W. Chiang, and A. Noureldin, "The utilization of artificial neural networks for multisensor system integration in navigation and positioning instruments", IEEE Trans. on Instrumentation and Measurement, vol. 55 (5), pp. 1606-1615, October 2006.

[6] J. Jahn, U. Batzer, J. Seitz, L. Patino-Studencka, and J. Gutiérrez Boronat, "Comparison and evaluation of acceleration based step length estimators for handheld devices", IPIN 2010, pp. 1-6, 2010.

[7] A. R. Jimenez Ruiz, F. S. Granja, J. C. Prieto Honorato, and J. I. G. Rosas, "Accurate pedestrian indoor navigation by tightly coupling foot-mounted IMU and RFID measurements", IEEE Trans. on Instrumentation and Measurement, vol. 61 (1), pp. 178-189, Jan. 2012.

[8] C. Lukianto, H. Sternberg, "Stepping – Smartphone-based portable pedestrian indoor navigation", Archives of Photogrammetry, Cartography and Remote Sensing, vol. 22, pp. 311-323, 2011.

[9] M. Piras, A. Cina, "Indoor positioning using low cost GPS receivers: Tests and statistical analyses", IPIN 2010.

[10] M. Piras, G. Marucco, K. Charqane, "Statistical analysis of different low cost GPS receivers for indoor and outdoor positioning", PLANS 2010.

[11] H. Wang, S. Sen, A. Elgohary, M. Farid, M. Youssef, R. R. Choudhury, "No need to war-drive: Unsupervised indoor localization", MobiSys 2012.

[12] Widyawan, G. Pirkl, D. Munaretto, C. Fischer, C. Ane, et al., "Virtual lifeline: Multimodal sensor data fusion for robust navigation in unknown environments", Pervasive and Mobile Computing, vol. 8, pp. 388-401, 2012.

[13] M. Youssef and A. Agrawala, "The Horus WLAN location determination system", MobiSys '05, pp. 205-218, 2005.


ARIANNA: a Two-stage Autonomous Localisation

and Tracking System

Enrico de Marinis, Fabio Andreucci, Otello Gasparini, Michele Uliana, Fabrizio Pucci, Guido Rosi, Francesca

Fogliuzzi

R&D and Automation Dept.

DUNE s.r.l.

Rome, Italy

[email protected]

Abstract— ARIANNA is a small-size system, wearable by an

operator for his localisation and tracking. Its design stems from

the following assumptions: no need of infrastructure for

localisation; low cost, no need of warm-up time (e.g. training

phases); seamless switch between GPS-denied/available

conditions; computational requirements relaxed enough to be

hosted in a commercial smartphone. ARIANNA meets these

objectives by adopting a novel two-stage approach: the former

stage is a conventional tracking process based on Extended

Kalman Filter and step detection; the latter is a post-processing

in which the errors due to the sensor drifts are estimated and

compensated. The system has been extensively tested with

various sensors, different operators, in clear and polluted

magnetic environments, with good and poor/intermittent GPS,

with paths ranging from 300 m to 3 km, each walked with mixed

speeds. The results systematically show good and repeatable

performance.

Keywords- IMU, PDR, compass, GPS, tracking, calibration,

localisation, pedestrian, indoor positioning, multi-sensor

navigation, human motion models

I. INTRODUCTION

Substantial efforts and resources have been steered in the past decade toward INSs (Inertial Navigation Systems) for human tracking and localization based on IMUs (Inertial Measurement Units) built with MEMS (Micro-Electro-Mechanical Systems) technology [1], [2]. The major attraction is that these devices may provide low-cost, low-power, miniaturized, lightweight and infrastructure-less solutions for accurate navigation in GPS-denied scenarios. However, they suffer from significant bias, noise, scale factor errors, temperature drifts and limited dynamic range, resulting in position deviation and magnification of the angular Abbe error. These drawbacks de facto prevent the use of MEMS IMUs for long-range localisation. As a consequence, it is not surprising that most of the efforts in recent years address a widespread ensemble of techniques to improve the localization capabilities of MEMS-based INS for pedestrians. Most of the techniques rely on PDR (Pedestrian Dead Reckoning) [1], where the walking behavior is exploited to reset the INS errors by adopting an ECKF (Extended Complementary Kalman Filter). Other approaches achieve better performance by exploiting the presence of ancillary sensors, such as a compass [3], or by visual-inertial odometry [4]. Also independent, pre-existing sources of information are exploited, such as RFID tags [5] or "map matching" techniques [6]. The recent trend is to jointly exploit multiple sensor readings (e.g. compass, barometer, RFID tags) in a UKF (Unscented Kalman Filter) structure [7].
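As background, the PDR error reset mentioned above is typically driven by stance-phase (zero-velocity) detection on a foot-mounted IMU. The following magnitude-threshold detector is an illustrative sketch of that idea, not ARIANNA's actual method; all names and thresholds are ours.

```python
import numpy as np

def stance_phases(acc_mag, fs, g=9.81, tol=0.4, min_len_s=0.1):
    """Flag stance-phase samples suitable for zero-velocity updates.

    A sample is 'still' when the acceleration magnitude stays within
    `tol` of gravity; runs of still samples shorter than `min_len_s`
    are discarded. Returns a boolean array marking stance samples.
    """
    still = np.abs(np.asarray(acc_mag) - g) < tol
    min_len = int(min_len_s * fs)
    out = np.zeros_like(still)
    i, n = 0, len(still)
    while i < n:
        if still[i]:
            j = i
            while j < n and still[j]:
                j += 1                     # end of this still run
            if j - i >= min_len:
                out[i:j] = True            # long enough: stance phase
            i = j
        else:
            i += 1
    return out
```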

However, scrutinizing the current state of the art, it can be highlighted that a common factor shared by all the approaches is the adoption of a unique, powerful, sophisticated processing stage, fusing multiple input data coming from heterogeneous sensors, usually sampled at different rates and with different relative delays, trying to provide the best possible output. This pushes up the HW complexity and poses a constraint on the battery drain of a wearable system, as well as on its cost, weight and size. In addition, some sensors need a mandatory calibration phase before operations: gyroscope biases and scale factors drift with temperature, and magnetometers need Hard-Iron Calibration (HIC) and Soft-Iron Calibration (SIC). The lack of gyro calibration introduces an amplification of the Abbe error, and uncalibrated magnetometers can significantly magnify the position errors when they are exploited to reduce the inertial angular drifts. Despite the plethora of calibration methods for gyros and magnetometers [8], [9], some MEMS-based IMUs and compasses also suffer a long-term obsolescence of the calibration (e.g. a few months for gyros and even 1-2 weeks for magnetometer HIC). This would imply re-calibration on a regular basis: an unacceptable task from the end-user perspective.

In this paper we describe ARIANNA, a novel comprehensive system for the tracking of pedestrian operators. The key assumptions and requirements of ARIANNA stem from a long phase of analysis performed with the collaboration of end-users (e.g. firefighters, army, speleologists):

- Low cost, small size and light weight, smoothly wearable by an operator, with at least 4 hours of battery life without recharge.

- No reliance on any ancillary infrastructure for localisation, either pre-existing or to be deployed during the operations.

- Zero-touch interaction with the operator; no need for warm-up times, training phases or constraints on the initial path to be walked.

- No calibration tasks to be performed by the end-users.


- Performance independent of the number of operators.

- Computational requirements relaxed enough to be hosted (as an option) in a commercial smartphone.

These objectives are met by adopting a novel two-stage approach: the former is a conventional PDR based on ECKF; the latter is a post-processing in which sensor drifts are estimated and compensated. The data coming from the GPS (when available) and from the compass (when reliable) can be exploited in both stages.

This paper is organized as follows: the proposed ARIANNA system and its post-processing are illustrated in Section II, whereas its performance is assessed in Section III. Finally, in Section IV some conclusions are provided.

II. ARIANNA SYSTEM

ARIANNA is a light, smoothly wearable and highly customizable localization and tracking system for the remote tracking of pedestrians, seamlessly managing presence/absence of the GPS signal. Its basic components are:

miniaturized IMU+Compass shoe-fastened unit, small enough to be also sealed into the heel;

wearable computing and transmission unit, also equipped with GPS, where PDR processing is performed (it can range from a Smartphone to a dedicated pocket-size HW, depending on the end-user’s needs);

remote receiving and visualisation unit (e.g. a commercial, mid-level PC) where the ARIANNA proprietary post-processing is performed.

As illustrated in Fig. 1, the raw sensor data from a shoe-mounted unit can be linked to the processing unit by a wireless (e.g. BT) link or by a waterproof cable (e.g. when the operators walk in partially flooded environments). In the wireless version, the sensor unit comes with a battery ensuring 4 hours of continuous operation, and recharging can be done with a proprietary RF device (working at 150 kHz), avoiding the need for accessible plugs (e.g. when the sensor is sealed inside the heel). The position data are computed by the processing unit (power consumption 1.2 W); these data are transmitted to the remote command and control center (C2), where the ARIANNA post-processing for the drift compensation is performed and the tracking data are displayed in 3D. The bandwidth needed for each operator on the user-C2 link is so small (50 bps) that a commercial digital radio modem (260-485 MHz band, 38-57 kbps) can in principle accommodate hundreds of simultaneous transmissions. So far 3G/4G cellular links and commercial radio modems have been employed over virtually unlimited and 2-3 km ranges, respectively.

A schematic block diagram of the whole processing chain of ARIANNA is depicted in Fig. 2. A purely inertial tracking is computed in the wearable processing unit; this PDR is performed at the sensor sample rate (e.g. 400 Hz) and is expected to be affected by significant drifts, as no information coming from the ancillary sensors (compass, GPS) is exploited. The uncompensated tracking data (along with the raw compass and GPS data, if available), transmitted to the C2 at a much lower rate (e.g. 1-2 Hz), are subsequently employed in a joint scheme to estimate the HIC of the compass. The normalized GPS data (if and when reliable) and the compensated compass data are then employed to estimate the positioning drift parameters, so as to compensate them in the last processing step.

It should be highlighted that the compass data, even if corrupted by local polarization and interference, are always available, whereas GPS data can appear and disappear in an unpredictable way: the ARIANNA post-processing handles this automatically, avoiding the inclusion of any special logic and thus ensuring seamless indoor/outdoor operations (e.g. a continuous walk inside and outside buildings).

Figure 1. Basic elements of ARIANNA system.

Figure 2. Functional block diagram of ARIANNA processing chain.

[Fig. 2 block labels — high-rate local processing: gyro, accel., compass and GPS inputs, reliability check, step detection, PDR with ECKF, uncompensated position; low-rate remote post-processing: compass HIC estimation, drift factors estimation, compensation and position adjustment.]


Beyond the performance improvement expected from the joint exploitation of the independent information coming from the GPS and the compass, ARIANNA comes with some additional advantages at the system level. The processing performed at the higher data rate is a PDR based on ECKF in a minimal-complexity configuration, as no attempt at further correction/compensation is performed at this stage: this minimises the hosting HW complexity, cost and the associated battery drain. In addition, the uncompensated position data are transmitted at rates as low as 1-2 Hz (enough to ensure an effective post-processing), and this slow transmission rate further shrinks the requirements on the power needed for the data delivery and on the bandwidth to be allocated. On the post-processing side, the low data rate and the absence of complex algorithms are the key factors that let the proprietary estimation/compensation algorithms run on any commercial mid-level PC. From the operational point of view, gyro biases are usually estimated by requiring the operator to stand still for a few tens of seconds before moving, and the HIC and SIC parameters can be roughly estimated by requiring the operator to walk a circle or an 8-shaped path. ARIANNA has no such requirements: the operator's interaction with the system is basically zero-touch, letting him/her focus on the mission, also considering that constraints such as still periods and/or constrained paths are unacceptable to some classes of end-users (e.g. soldiers, firefighters). As a last consideration, looking at the ARIANNA system as a whole, the mitigated requirements on calibration, power, bandwidth and hardware leave significant room for customisation.

III. EXPERIMENTAL RESULTS

ARIANNA has been widely and extensively tested with various sensors and different operators, in magnetically clean and heavily polluted environments, with straight and random paths ranging from 300 m to 3 km, each walked at mixed speeds ranging from 0 km/h (long still periods) up to 8 km/h. Performance is usually measured by walking closed paths and adopting the metric PE = ||r0 - re||/L, i.e. the distance between the starting and final positions (r0 and re, respectively) as a percentage of the walked distance L. However, this metric could be somewhat misleading, as it does not account for the departure of the estimated path from the ground truth: e.g. two distinct angular errors might compensate each other, leading to a small PE score despite a poor similarity of the path to the ground truth. In the absence of a calibrated testbed enabling point-by-point differential measures, we also introduce the (subjective) Shaping Fidelity Index (SFI), roughly ranking the similarity between the estimated path and what we know to be the ground truth (0 = no similarity, 10 = excellent match).
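The loop-closure PE metric above is a one-liner; the sketch below (the function name and the numeric start/end points are ours, chosen to be consistent with the 2.53 km / 0.51% experiment reported later) shows how it is computed:

```python
import numpy as np

def position_error(r0, re, L):
    """Loop-closure Position Error metric: PE = ||r0 - re|| / L,
    i.e. the start/end mismatch as a fraction of the walked distance L."""
    return np.linalg.norm(np.asarray(r0) - np.asarray(re)) / L

# Illustrative values: a 2.53 km closed path whose estimated end point
# lands 12.9 m away from the start point.
pe = position_error([0.0, 0.0], [12.9, 0.0], 2530.0)
print(f"PE = {100 * pe:.2f}%")  # -> PE = 0.51%
```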

Table I summarises the mean and standard deviation (SD) of the PE and SFI metrics, estimated over 36 heterogeneous experiments. From the table, the significant improvement of ARIANNA w.r.t. PDR and PDR+MAG (i.e. PDR with magnetic drift reduction) is apparent, for both the PE metric and the SFI index. It should also be considered that PDR benefits from a calibrated compass and an initial still period (gyro bias estimation), whereas ARIANNA does not.

In the following, the results of three experiments are provided. In Fig. 3 no GPS is employed and ARIANNA relies solely on the uncalibrated compass to compensate drifts. The experiment is a 2.53 km path walked back and forth on a long straight road, then entering a large building and finally returning to the starting point. The PE metric in this case is 0.51%. In the vertical plane (not reported here) PDR is affected by a constant drift, leading to a final vertical position error of 45 m, whereas ARIANNA never exceeds 1.5 m of vertical position error along the whole experiment, with an error at the end point of 1 cm.

TABLE I. MEAN ± S.D. OF THE PERFORMANCE METRICS

             |        No GPS          |  GPS (urban/suburban)
             |  PE %       SFI (0-10) |  PE %       SFI (0-10)
  PDR        | 9.6 ± 12.4  3.8 ± 2.8  |    -           -
  PDR+MAG    | 7.0 ± 6.0   4.1 ± 3.0  |    -           -
  ARIANNA    | 1.75 ± 2.3  8.4 ± 1.4  | 0.8 ± 1.1   9.1 ± 0.5

Figure 3. Estimated paths by PDR (black) and ARIANNA exploiting only compass data (red); walked distance: 2.53 km.

Figure 4. GPS (green) and ARIANNA exploiting both GPS and compass data (black) and only compass (red); walked distance: 1.4 km.


The experiment in Fig. 4 has been performed in a typical dense urban environment. Both GPS and compass are employed in ARIANNA. The path length is 1.40 km, with a long section walked in an underground metro station, where the GPS, although still available, is definitely unreliable. The underground path estimated by ARIANNA is reported in red in Fig. 4 and a detail is provided in Fig. 5. The PE metric for this experiment is 0.71% (0.4% for GPS). In Fig. 4, large fluctuations can be noticed for GPS, mainly due to the typical multipath effects in urban environments; in contrast, ARIANNA preserves a better resemblance to the ground truth. Also in this case (not reported in the figures) the vertical drift of the PDR leads to a final vertical position error of 24 m, whereas the ARIANNA vertical error at the final point is 0.2 m (the corresponding GPS error is 5 m, but with fluctuations as large as 20 m along the whole experiment).

Figure 5. Detail of Fig. 4, relevant to the underground metro station.

Figure 6. PDR with compass aiding (black) and ARIANNA (red), both exploiting the same uncalibrated compass data; walked distance: 2.32 km.

The experiment in Fig. 6 consists of 6 rounds of a soccer pitch (plus some random walking at the beginning and at the 5th round), for a total walked length of 2.32 km. In this case the uncalibrated compass data have been employed to correct the PDR estimation, an operation that yields an effective improvement when the compass is properly calibrated; here, however, the lack of calibration results in a dramatic loss of performance for PDR+MAG. In contrast, ARIANNA, although exploiting the same uncalibrated compass, performs well, giving a final PE = 0.31%. In addition, the final vertical error is 3 m for PDR and only 1 cm for ARIANNA.

IV. CONCLUSIONS

In this paper, ARIANNA, a novel, customizable pedestrian positioning and tracking system specifically designed for low-cost MEMS-based IMUs, has been presented. It splits the path estimation and the drift compensation into two separate processing structures: the former, hosted on the wearable computing unit of the operator, operates at a higher rate; the latter, hosted on the remote receiver side, operates at a much slower rate. An extensive validation campaign, performed over a wide range of experimental conditions, has systematically demonstrated the superior performance of ARIANNA w.r.t. PDR and, more importantly, a good repeatability. Current work on further improvement is focused on configurations with IMUs mounted on both shoes and on the management of lifts and elevators. In conclusion, ARIANNA is a mature system in which electronics, logistics, recharging, processing and visualization have been designed not just for demonstration, but for use in real operations.

REFERENCES

[1] E. Foxlin, “Pedestrian tracking with shoe-mounted inertial sensors”, IEEE Comput. Graph. Appl., vol. 25, no. 6, pp. 38–46, Nov. 2005.

[2] H. Leppäkoski, J. Collin, J. Takala, “Pedestrian Navigation Based on Inertial Sensors, Indoor Map, and WLAN Signals” in Journal of Signal Processing Systems, 2013.

[3] A.R. Jimenez, F. Seco, J.C. Prieto and J. Guevara, “Indoor Pedestrian Navigation using an INS/EKF framework for Yaw Drift Reduction and a Foot-mounted IMU”, WPNC 2010: 7th Workshop on Positioning, Navigation and Communication, 2010.

[4] M. Li, A. I. Mourikis, “High-precision, consistent EKF-based visual–inertial odometry”, International Journal of Robotics Research, Volume 32, No 6, May 2013.

[5] A.R. Jiménez, F. Seco, J.C. Prieto, and J. Guevara Rosas, “Accurate Pedestrian Indoor Navigation by Tightly Coupling Foot-Mounted IMU and RFID Measurements”, IEEE Trans. Instrum. Meas., vol. 61, no. 1, pp. 178–189, Jan. 2012.

[6] S. Kaiser, M. Khidera, P. Robertson, “A pedestrian navigation system using a map-based angular motion model for indoor and outdoor environments”, Journal of Location Based Services, Special Issue: Indoor Positioning and Navigation. Part III: Navigation Systems, Volume 7, pp. 44-63, Issue 1, 2013.

[7] M. Romanovas, V. Goridko, L. Klingbeil, M. Bourouah, A. Al-Jawad, M. Traechtler, Y. Manoli, “Pedestrian Indoor Localization Using Foot Mounted Inertial Sensors in Combination with a Magnetometer, a Barometer and RFID”, in Progress in Location-Based Services, Lecture Notes in Geoinformation and Cartography, pp. 151-172, 2013.

[8] W. Ilewicz, A. Nawrat, “Direct Method of IMU Calibration”, Advanced Technologies for Intelligent Systems of National Border Security, Studies in Computational Intelligence Volume 440, pp 155-171, 2013

[9] Z. Wu, Y. Wu, X. Hu, M. Wu, “Calibration of Three-Axis Magnetometer Using Stretching Particle Swarm Optimization Algorithm”, IEEE Transactions on Instrumentation and Measurement, Volume: 62, Issue: 2, pp. 281-292, Feb. 2013.


Source Localization by Sensor Array Processing

using a Sparse Signal Representation

Joseph LARDIES

Institute FEMTO-ST

DMA

Besançon, France

[email protected]

Marc BERTHILLIER

Institute FEMTO-ST

DMA

Besançon, France

[email protected]

Abstract—The objective of the communication is the localization of emitting sources by an array of sensors. The sources are narrowband or wideband and can be correlated or uncorrelated. The method is based on a sparse representation of sensor measurements with an overcomplete basis composed of time samples from the sensor array manifold. A new adaptation approach of sparse signal representation, based on a compromise between the residual error and the sparsity of the representation, is proposed. An important part of our source localization technique is the choice of the regularization parameter, which balances the fit of the solution to the data versus the sparsity prior. An appropriate regularization parameter, handling a reasonable tradeoff between finding a sparse solution and restricting the recovery error, is obtained using three approaches. Simulations and experimental results demonstrate that the proposed method of selecting the regularization parameter can effectively suppress spurious peaks in the spatial spectrum, so that only the correct sources are localized.

Keywords: array processing; source localization; sparse representation; high-resolution

I. INTRODUCTION

Source localization using sensor array processing is an active research area, playing a fundamental role in many applications such as electromagnetic, acoustic and seismic sensing. The receiving sensors may be any transducers that convert the received energy to electrical signals, and an important goal for source localization methods is to be able to locate closely spaced sources in the presence of noise. There are many high-resolution algorithms for source localization, such as MUSIC and ESPRIT [1,2], which require the computation of the sensor output covariance matrix and knowledge of the signal subspace. We propose a different approach for source localization based on a sparse representation of sensor measurements with an overcomplete basis composed of samples from the array manifold. The method uses the l1-norm penalty for sparsity and the Frobenius-norm penalty for the noise or residual error. However, this method requires a regularization parameter [3-6] which handles a reasonable tradeoff between finding a sparse solution and restricting the recovery error. If the regularization parameter is too low there are many spurious sources; inversely, if it is too high some sources are dismissed. In [7] Zheng et al. presented a sparse spatial spectrum, but the regularization parameter was chosen arbitrarily. In this communication, the selection of the regularization parameter is analyzed using three methods: the L-curve method, the analysis of the Chi-square distribution of the recovery error, and the analysis of the Rayleigh distribution of the recovery error. Numerical and experimental results are presented.

II. SOURCE LOCALIZATION FRAMEWORK

A. Source Localization Problem

The goal of sensor array source localization is to find the locations of sources that impinge on an array of sensors. To simplify the exposition we only consider the far-field scenario and confine the array to a plane. The available information is the geometry of the array, the parameters of the medium where the sources propagate, and the measurements on the sensors. Consider an antenna array of N elements and assume P signals impinge on the array from unknown directions θ1, θ2, ..., θP. The array output can be described as [6-8]:

y(t) = A(θ) s(t) + b(t) = Σ_{i=1}^{P} a(θ_i) s_i(t) + b(t)    (1)

where y(t) is the array output, s(t) is the complex amplitude of the signal field, b(t) is the additive Gaussian noise, and A(θ) = [a(θ1), a(θ2), ..., a(θP)] is the (N×P) array manifold matrix. The goal is to find the unknown directions θ1, ..., θP of the sources from the observation y(t), when the number of samples is small (fewer than 50), using a sparse signal representation.

B. Source Localization by Sparse Representation

We formulate the source localization problem given in (1) from a sparse signal reconstruction perspective. For this formulation one defines an overcomplete matrix A containing all possible source locations. Let θ̃1, θ̃2, ..., θ̃L be a sampling grid of L possible source locations. In the far field this grid contains the directions of arrival, and in the near field it contains bearing and range information. We assume that {θ1, ..., θP} ⊂ {θ̃1, θ̃2, ..., θ̃L}, so the model (1) can be reformulated as:

y(t) = A(θ̃) x(t) + b(t)    (2)

30/278

Page 36: International Conference on Indoor Positioning and Indoor Navigation

2013 International Conference on Indoor Positioning and Indoor Navigation, 28th

-31th

October 2013

where x(t) is the (L×1) vector whose ith entry is equal to the jth entry of s(t) if θ̃_i = θ_j and zero otherwise. Therefore, the location information is converted into the positions of the non-zero entries in x(t). The important point is that A(θ̃) is known and does not depend on the unknown source locations θ_j as A(θ) did, and the problem becomes the estimation of the spatial spectrum of x(t), which has to exhibit sharp peaks at the correct source locations. With multiple snapshots the model (2) can be written as:

Y = A(θ̃) X + B    (3)

where Y = [y(1) y(2) ... y(T)] is the (N×T) matrix containing T time samples, and X (L×T) and B (N×T) are defined similarly.
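The overcomplete dictionary A(θ̃) can be sketched for a uniform linear array such as the half-wavelength, 6-sensor one used later in Section IV (a minimal illustration; the function name, grid step and phase-sign convention are our assumptions, not the paper's):

```python
import numpy as np

def ula_manifold(thetas_deg, N, d_over_lambda=0.5):
    """Steering matrix of an N-element uniform linear array with spacing
    d (in wavelengths): one column a(theta) per candidate grid angle."""
    theta = np.deg2rad(np.asarray(thetas_deg, dtype=float))
    n = np.arange(N)[:, None]  # sensor index, one row per sensor
    return np.exp(-2j * np.pi * d_over_lambda * n * np.sin(theta)[None, :])

# Overcomplete dictionary on a 1-degree grid of L = 181 candidate directions.
grid = np.arange(-90, 91, 1)
A = ula_manifold(grid, N=6)      # shape (N, L) = (6, 181)
print(A.shape)

# In the multi-snapshot model Y = A X + B, X is row-sparse: its non-zero
# rows sit at the grid indices matching the true source directions.
```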

Matrix X has a two-dimensional structure: a spatial structure with spatial index i = 1, ..., L and a temporal structure with time index j = 1, ..., T, but sparsity only has to be enforced in space. This can be done by computing the l2-norm of the corresponding rows of X, x_i^(l2) = ||x_i||_2, and penalizing the l1-norm of x^(l2) = [x_1^(l2), x_2^(l2), ..., x_L^(l2)]^T. The sparsity of the resulting (L×1) vector x^(l2) corresponds to the sparsity of the spatial spectrum. We can find the spatial spectrum by solving the joint sparse optimization problem [9]:

min ||x^(l2)||_1   subject to   ||Y − A(θ̃) X||_F^2 ≤ β^2    (4)

The method uses the l1-norm penalty for the sparsity of the representation and the Frobenius-norm penalty for the noise or residual error, forcing the residual to be small. For this problem, the task of properly choosing the regularization parameter β^2 is very important and is discussed in the next section.
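Problem (4) is typically solved with a second-order cone programming solver (the paper cites SeDuMi [9]). As an unofficial sketch only, the closely related penalized form min_X ||Y − AX||_F² + λ Σ_i ||x_i||_2 can be handled with a plain proximal-gradient (ISTA) loop and row-wise soft-thresholding; all names, the toy dictionary and the parameter values below are our assumptions:

```python
import numpy as np

def l21_ista(Y, A, lam, n_iter=2000):
    """Row-sparse recovery by proximal gradient (ISTA) on the penalized
    surrogate  min_X ||Y - A X||_F^2 + lam * sum_i ||x_i||_2."""
    L = 2 * np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    X = np.zeros((A.shape[1], Y.shape[1]), dtype=complex)
    for _ in range(n_iter):
        G = X - (2 / L) * A.conj().T @ (A @ X - Y)         # gradient step
        rn = np.linalg.norm(G, axis=1, keepdims=True)       # row norms
        X = np.maximum(1 - (lam / L) / np.maximum(rn, 1e-12), 0) * G  # row shrink
    return X

# Toy demo: random dictionary, 2 active rows out of 60, noiseless data.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 60)) / np.sqrt(20)
Xtrue = np.zeros((60, 5))
Xtrue[[10, 40]] = rng.standard_normal((2, 5))
Y = A @ Xtrue
Xhat = l21_ista(Y, A, lam=0.2)
rows = np.linalg.norm(Xhat, axis=1)
print(sorted(np.argsort(rows)[-2:]))  # dominant rows should match {10, 40}
```

The row-wise l2 norm followed by the l1-type shrink is exactly the "sparsity in space only" structure of (4): whole rows of X survive or vanish together.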

III. REGULARIZATION PARAMETER ESTIMATION

A. Regularization parameter estimation by the L-curve

The regularization parameter controls the tradeoff between the sparsity of the spectrum and the residual norm; it balances the fit of the solution to the data versus the sparsity prior. If the regularization parameter is too low there are many spurious sources in the spectrum, and if it is too high some sources disappear. A very popular method for choosing the regularization parameter is the L-curve method [10]. Having noted the important roles played by the norms of the solution and the residual, it is quite natural to plot these two quantities against each other. The L-curve plots the log of the l1-norm of the sparsity term against the Frobenius norm of the recovery error for a range of values of the regularization parameter:

log(||x^(l2)||_1) = f(||Y − A(θ̃) X||_F^2)    (5)

This curve typically has an L shape, and the regularization parameter value corresponding to the corner is the one that balances the tradeoff optimally. However, in several cases the corner is not clearly visible and the L-curve gives a bad regularization parameter. Another approach, based on the cumulative distribution function of the noise, is therefore presented.
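A common discrete implementation of the corner search (our sketch, not taken from [10]) picks the point of the log-log curve farthest from the chord joining its two endpoints; the function name and the synthetic data are ours:

```python
import numpy as np

def lcurve_corner(res_norms, sol_norms):
    """Index of the L-curve corner: the point of the
    (log residual-norm, log solution-norm) curve farthest from the
    chord joining its two endpoints (a simple corner heuristic)."""
    x, y = np.log(np.asarray(res_norms)), np.log(np.asarray(sol_norms))
    p0 = np.array([x[0], y[0]])
    d = np.array([x[-1] - x[0], y[-1] - y[0]])
    d = d / np.linalg.norm(d)
    # perpendicular distance of every point from the chord
    dist = np.abs((x - p0[0]) * d[1] - (y - p0[1]) * d[0])
    return int(np.argmax(dist))

# Synthetic L-shaped curve with its bend at index 4:
res = [1, 1, 1, 1, 1, 3, 10, 30, 100]   # residual norm per beta value
sol = [100, 30, 10, 3, 1, 1, 1, 1, 1]   # solution norm per beta value
print(lcurve_corner(res, sol))  # -> 4
```

When the curve has several bends, as in Figure 1(a) later, the maximum of this distance is ambiguous, which is exactly the failure mode the text describes.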

B. Regularization parameter estimation by the Chi-square distribution of noise

We present an approach to select the regularization parameter automatically for the case where some statistics of the noise can be estimated or are known. Let X(β) be the time-spatial matrix obtained using β as the regularization parameter. Malioutov et al. [6] and Zheng et al. [11] propose to select the parameter β so as to match the residuals of the solution to some statistics of the noise. If the distribution of the noise B is known or can be modeled, then the regularization parameter is obtained such that the Frobenius norm of the residual error approaches the Frobenius norm of the noise. Let b_mn be the (m,n) element of the B matrix. Malioutov et al. assume that the noise is Gaussian, independent and identically distributed with zero mean and variance σ^2. We have:

||B||_F^2 = Σ_{m=1}^{N} Σ_{n=1}^{T} b_mn^2    (6)

and, upon normalization by the noise variance, ||B||_F^2 has approximately a Chi-square distribution with NT degrees of freedom: ||B||_F^2 / σ^2 ~ χ^2_{NT}. The cumulative distribution function of the χ^2_{NT} distribution is

p = F(z, NT) = ∫_0^z [ t^{(NT-2)/2} e^{-t/2} / (2^{NT/2} Γ(NT/2)) ] dt    (7)

where Γ is the Gamma function, and the inverse of the Chi-square cumulative distribution function for a given probability p (or a given confidence interval) and NT degrees of freedom is

z = F^{-1}(p, NT) = { z : F(z, NT) = p }    (8)
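Equations (6)-(8) amount to reading off a Chi-square quantile; a short sketch with SciPy (the function name is ours, and σ² = 1 with N = 6, T = 50 are just the illustrative values of Section IV):

```python
from scipy.stats import chi2

def beta2_chi2(sigma2, N, T, p=0.999):
    """Regularization parameter from eq. (8): the beta^2 satisfying
    P(||B||_F^2 <= beta^2) = p, using ||B||_F^2 / sigma^2 ~ chi2(NT)."""
    return sigma2 * chi2.ppf(p, N * T)

# N = 6 sensors, T = 50 snapshots, unit noise variance:
for p in (0.5, 0.99, 0.999):
    print(p, round(beta2_chi2(1.0, 6, 50, p), 1))
```

Note how little the quantile grows between p = 0.99 and p = 0.999 for NT = 300 degrees of freedom; this small dynamic range is what the next paragraph blames for the surviving spurious peaks.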

From (4) we must choose β^2 high enough so that the probability that ||B||_F^2 ≥ β^2 is small, and we use the Chi-square distribution with a very high degree of confidence to ensure the suppression of spurious sources. Unfortunately, when the number of time samples is small (less than 50), even with a degree of confidence up to 0.999 we cannot obtain a regularization parameter that effectively suppresses such spurious sources (see the applications). To explain this phenomenon, consider the expansion of the residual in (4):

||Y − A(θ̃)X||_F^2 = ||A(θ)S − A(θ̃)X||_F^2 + trace[ B^H (A(θ)S − A(θ̃)X) ] + trace[ (A(θ)S − A(θ̃)X)^H B ] + ||B||_F^2    (9)

Under the l1-norm minimization, if we only exploit β^2 = ||B||_F^2 to obtain the regularization parameter, the spurious noise cannot be removed and inevitable spurious peaks appear in the spatial spectrum: the value of the regularization parameter is too small. We now present a method to obtain a regularization parameter with a large dynamic range.

C. Regularization parameter estimation by the Rayleigh distribution of noise

We assume that the noise is complex, Gaussian, independent and identically distributed. The (m,n) element of the B matrix is complex: b_mn = c_mn + j d_mn, with m = 1, ..., N and n = 1, ..., T:

||B||_F^2 = Σ_{m=1}^{N} Σ_{n=1}^{T} (c_mn^2 + d_mn^2)    (10)

If c_mn ~ N(0, σ^2/2) and d_mn ~ N(0, σ^2/2), then the absolute value of the complex number b_mn, |b_mn| = sqrt(c_mn^2 + d_mn^2), is Rayleigh-distributed. The cumulative distribution function of the Rayleigh distribution is

p_mn = G(|b_mn|, σ^2/2) = ∫_0^{|b_mn|} (2t/σ^2) e^{-t^2/σ^2} dt    (11)

and the inverse of the Rayleigh cumulative distribution function for a given probability p_mn (or a given confidence interval) and a scale parameter σ^2/2 is

|b_mn| = G^{-1}(p_mn, σ^2/2) = { |b_mn| : G(|b_mn|, σ^2/2) = p_mn }    (12)

Then, we have

||B||_F^2 = Σ_{m=1}^{N} Σ_{n=1}^{T} |b_mn|^2 = Σ_{m=1}^{N} Σ_{n=1}^{T} [G^{-1}(p_mn, σ^2/2)]^2 ≤ NT [G^{-1}(p_max, σ^2/2)]^2 = β^2_Rayl    (13)

where p_max = max p_mn. The regularization parameter is obtained from the Rayleigh inverse cumulative distribution function using (13).
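Equation (13) likewise reduces to a Rayleigh quantile. The sketch below (function name ours) also checks the closed form β²_Rayl = NT·σ²·(−ln(1 − p_max)) that follows from G in (11), and confirms the larger dynamic range with respect to the Chi-square rule:

```python
import numpy as np
from scipy.stats import chi2, rayleigh

def beta2_rayleigh(sigma2, N, T, p_max=0.99):
    """Regularization parameter from eq. (13):
    beta^2_Rayl = N*T*[G^{-1}(p_max)]^2, where each |b_mn| is Rayleigh
    with scale sigma/sqrt(2) (real and imaginary parts ~ N(0, sigma^2/2))."""
    scale = np.sqrt(sigma2 / 2.0)
    return N * T * rayleigh.ppf(p_max, scale=scale) ** 2

N, T, sigma2 = 6, 50, 1.0
b2_rayl = beta2_rayleigh(sigma2, N, T, 0.99)
b2_chi2 = sigma2 * chi2.ppf(0.99, N * T)
print(np.isclose(b2_rayl, N * T * sigma2 * -np.log(1 - 0.99)))  # True
print(b2_rayl > b2_chi2)  # True: the Rayleigh rule gives a much larger beta^2
```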

IV. NUMERICAL AND EXPERIMENTAL RESULTS

We consider a uniform linear array of N = 6 sensors separated by half a wavelength of the narrowband source signals. Two uncorrelated sources at 0° and 10° are present in the field. The number of snapshots is T = 50 and SNR = 20 dB. Figures 1(a) and 1(b) present the variations of the regularization parameter using the three methods. The L-curve presents several corners and it is very difficult to obtain the regularization parameter from this plot. Figure 1(b) presents the variations of the regularization parameter versus the degree of confidence for the Chi-square distribution and the Rayleigh distribution. A much larger dynamic range of the regularization parameter is obtained with the Rayleigh distribution.

Figure 1. (a) Regularization parameter by the L-curve; (b) regularization parameter versus the degree of confidence for the Chi-square and the Rayleigh distributions.

Figure 2(a) shows 20 spatial spectra for two uncorrelated sources with a regularization parameter obtained from the Chi-square distribution using a 0.99 degree of confidence. The spurious sources are too strong and it is impossible to localize the two true sources. Even with a 0.999 degree of confidence we cannot localize the sources (Figure 2(b)). If we use a 0.99 degree of confidence and the Rayleigh distribution, we can then localize the two true uncorrelated sources, as shown in Figure 2(c).

Figure 2. Spatial spectra for two uncorrelated sources by the sparse representation using the Chi-square distribution (a) with p = 0.99; (b) with p = 0.999; (c) using the Rayleigh distribution with p = 0.99.

Consider now the case of two correlated sources. The number of snapshots is T = 50 and SNR = 20 dB. Figure 3(a) shows the variations of the regularization parameter, and Figure 3(b) presents 20 spatial spectra with a regularization parameter obtained from the Rayleigh distribution with p_max = 0.99. The two correlated sources can be localized.

Figure 3. (a) Regularization parameter variations; (b) spatial spectra for two correlated sources using the Rayleigh distribution with p = 0.99.

Figure 4(a) shows the experimental setup in an anechoic chamber. The two sources to be localized are two loudspeakers generating sinusoidal waves. The distance between microphones is d = λ/2 and T = 50. Figure 4(b) shows the variations of the regularization parameter using the Chi-square and the Rayleigh distributions. The Rayleigh distribution has a large dynamic range and we use it to obtain the regularization parameter with p = 0.99. The spatial spectrum obtained by the sparse method is plotted in Figure 4(c); the two acoustic sources are easily localized.

Figure 4. (a) Experimental procedure; (b) regularization parameter variations; (c) experimental spatial spectra by the sparse method using the Rayleigh distribution and p = 0.99.

The method is now used to localize wideband sources. In Figure 5 we look at wideband signals consisting of one or two harmonics each. At θ1 = 60° there are two harmonics with frequencies 320 Hz and 480 Hz; at θ2 = 68° there is a single harmonic with frequency 320 Hz; at θ3 = 100° there are again two harmonics with frequencies 400 Hz and 480 Hz; and at θ4 = 108° there is a single harmonic with frequency 400 Hz. As shown in Figure 5, the sparse method resolves all sources and does not exhibit any distortion due to noise, contrary to the MUSIC method.

Figure 5. Wideband source localization: MUSIC (top) and the sparse method (bottom); angle of incidence (degrees) versus frequency (Hz).

In Figure 6 we present three chirps located at θ1 = 60°, θ2 = 78° and θ3 = 100°, with a frequency span from 250 Hz to 500 Hz (d/λ ∈ [0.25, 0.5]). Using the conventional beamforming method we cannot localize the three wideband sources: the spatial-frequency spectra of the chirps are merged and cannot be separated, as shown in Figure 6(a), especially in the lower frequency ranges. The methodology presented in this communication can be used for the localization of the three wideband sources, as shown in Figure 6(b).

Figure 6. (a) Conventional beamforming; (b) sparse method for the localization of three wideband sources.

V. CONCLUSION

The regularization parameter plays an important role in the source localization problem by sparse reconstruction. This parameter handles a reasonable tradeoff between finding a sparse solution and restricting the amplitude of the recovery error. Simulations and experimental results have shown the effectiveness of the method in the suppression of spurious sources when the number of time samples is small. The sparse signal reconstruction method presented can also be used for the localization of wideband sources.

REFERENCES

[1] S.U. Pillai, Array Signal Processing, Springer-Verlag, 1989.
[2] S. Marcos, Les méthodes à haute résolution, Éditions Hermès, Paris, 1998.
[3] J.J. Fuchs, "More on sparse representations in arbitrary bases", IEEE Trans. on Information Theory, vol. 50, pp. 1341-1344, 2004.
[4] D.L. Donoho and X. Huo, "Uncertainty principles and ideal atomic decomposition", IEEE Trans. on Information Theory, vol. 47, pp. 2845-2862, 2001.
[5] S. Bourguignon, H. Carfantan and T. Bohm, "SparSpec: a new method for fitting multiple sinusoids with irregularly sampled data", Astronomy & Astrophysics, vol. 462, pp. 379-387, 2007.
[6] D.M. Malioutov, M. Cetin and A.S. Willsky, "A sparse signal reconstruction perspective for source localization with sensor arrays", IEEE Trans. Signal Processing, vol. 53, pp. 3010-3022, 2005.
[7] J. Zheng, M. Kaveh and H. Tsuji, "Sparse spectral fitting for direction of arrival and power estimation", Proc. IEEE/SP 15th Workshop on Statistical Signal Processing, 2009.
[8] J. Lardiès, H. Ma and M. Berthillier, "Localisation de sources de bruit par représentation parcimonieuse des signaux issus d'une antenne acoustique", GRETSI 2011, Bordeaux.
[9] J.F. Sturm, "Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones", Optimization Methods and Software, vol. 11, pp. 625-653, 1999.
[10] P.C. Hansen, "The L-curve and its use in the numerical treatment of inverse problems", in Advances in Computational Bioengineering, P. Johnston (ed.), 2000.
[11] C. Zheng and G. Li, "Subspace weighted l2,1 minimization for sparse signal recovery", Journal on Advances in Signal Processing, 2012.

978-1-4673-1954-6/12/$31.00 ©2012 IEEE

[Figure 6 axes: angle of incidence (degrees), -100 to 100, vs. power (dB), -150 to 0]


2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31st October 2013

OFDM Pulse Design with Low PAPR for Ultrasonic Location and Positioning Systems

Daniel F. Albuquerque, Jose M. N. Vieira, Sergio I. Lopes, Carlos A. C. Bastos, Paulo J. S. G. Ferreira
Signal Processing Lab – IEETA/DETI – University of Aveiro
3810-193 Aveiro, Portugal
dfa, jnvieira, sil, cbastos, [email protected]

Abstract—In this paper we propose an iterative algorithm to design ultrasonic orthogonal frequency division multiplexing (OFDM) pulses with low peak-to-average power ratio (PAPR), increasing not only the probability of pulse detection but also the system power efficiency. The algorithm is based on the Papoulis-Gerchberg method, where in each iteration the PAPR of the resultant pulse is reduced while keeping the spectrum flat and band limited. On each iteration the amplitudes of the OFDM carriers are kept constant and only the phases of the carriers are optimized. The experimental results have shown that for ultrasonic OFDM pulses with a large number of carriers it is possible to design pulses with a PAPR of 1.666. The designed pulse is ideal for time of flight (TOF) measurement purposes.

Keywords—Ultrasounds, Ultrasonic Pulse, Pulse Design, Time of Flight, Pulse Detection, OFDM, PAPR, Papoulis-Gerchberg.

I. INTRODUCTION

OFDM is a method of data transmission that uses multiple carriers at a very low rate [1]. The main advantage of using OFDM is its robustness to some adverse indoor ultrasonic (US) channel conditions, such as strong multipath and different equalization along the frequency [1]. Due to these advantages, the authors have proposed an ultrasonic pulse that uses OFDM pulses to measure the TOF and transmit data simultaneously [2]. However, one of the major drawbacks of using OFDM pulses to measure the TOF is the high PAPR when compared to other types of pulses, such as chirps. The PAPR is defined as the ratio of the peak power to the mean power of the OFDM pulse. On the one hand, the probability of pulse detection increases with the signal energy [3]. On the other hand, if the transmission system uses a power amplifier it is important to increase the signal energy and reduce the signal amplitude peak in order to increase the power amplifier efficiency [1]. Therefore, the pulse used for TOF measurement should present a PAPR as low as possible [1]. The literature usually covers the PAPR problem for communication purposes [4], [5], but for TOF measurement, where the quest for the best pulse is the goal, the typical solutions address not the PAPR problem but a similar one, the peak-to-mean envelope power ratio (PMEPR) [3]. Instead of measuring the ratio between the peak power and the mean power of the real transmitted signal, the PMEPR computes the ratio using the signal envelope. For narrow-bandwidth signals1

the PMEPR provides a good approximation of the PAPR value (the typical radar case) [3], [5]. However, for typical US signals

1Narrow-bandwidth signals are signals whose carrier frequency is much greater than the signal bandwidth.

(up to 100 kHz) the narrow-bandwidth model is not well suited.
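As a concrete illustration of the PAPR definition above (peak power over mean power), the following sketch computes the PAPR of a real multicarrier pulse; the carrier count, signal length and random phases are illustrative assumptions, not values from the paper.

```python
import numpy as np

def papr(x):
    """Peak-to-average power ratio of a real signal."""
    p = x**2                       # instantaneous power
    return p.max() / p.mean()

# Real multicarrier pulse: Nc unit-amplitude carriers with random phases
rng = np.random.default_rng(0)
Nc, N = 64, 4096
k = np.arange(1, Nc + 1)               # carrier indices
theta = rng.uniform(0, 2 * np.pi, Nc)  # random carrier phases
n = np.arange(N)
x = np.cos(2 * np.pi * np.outer(n, k) / N + theta).sum(axis=1)

print(papr(x))   # random phases give a high PAPR
```

Random-phase multicarrier signals typically have a PAPR far above that of a well-designed pulse, which is exactly what the algorithm in the next section attacks.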

II. PROPOSED ALGORITHM

The algorithm to optimize the PAPR of OFDM pulses is presented in Fig. 1 and is adapted from the algorithm proposed in [6], which is based on the Papoulis-Gerchberg algorithm [7], [8]. The algorithm starts by computing the

[Fig. 1 flowchart: compute θ(k) from the Newman method → compute S(k) = e^jθ(k), with S(k) = 0 for k ≠ k0..kNc−1 → IFFT → compute the real part, x(n) = 2 × Re{s(n)} → clip the signal peaks → FFT → compute θ(k) from X(k), and iterate]

Fig. 1: Proposed iterative algorithm to decrease the PAPR.

carrier phases, θ(k) = π(k − 1)²/Nc, using the Newman method, where Nc is the number of carriers. Next, the frequency-domain carrier information, S(k) = e^jθ(k), is computed with unit amplitude, which results in an OFDM pulse with a PAPR of around 3.5. The resultant signal is then converted to the time domain and the double of its real part is computed. Note that the doubling is only important to keep the carrier amplitudes equal to one. Then, from the resultant signal, x(n), the peaks are removed by clipping the maximum and the minimum of the signal. Note that the clipping level must be between 75% and 95% of the maximum amplitude of the signal to ensure that the algorithm converges and that the PAPR is reduced as fast as possible [6]. After passing the clipped signal to the frequency domain, the new carrier phases are obtained and the first iteration is completed. For each iteration, the carrier phases from the last iteration must be kept.
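The iteration just described (unit-amplitude carriers, IFFT, peak clipping, FFT, phase-only update) can be sketched as follows. The pulse length, band placement, fixed clipping level and iteration count are illustrative assumptions, and the Newman starting phases follow the standard formula θ(k) = π(k−1)²/Nc; this is a sketch, not the authors' implementation.

```python
import numpy as np

def design_low_papr_pulse(Nc=256, N=8192, k0=1024, clip=0.8, iters=200):
    """Papoulis-Gerchberg-style PAPR reduction: unit-amplitude carriers
    in FFT bins k0..k0+Nc-1; only the phases theta are updated."""
    k = np.arange(Nc)
    theta = np.pi * k**2 / Nc                  # Newman starting phases
    for _ in range(iters):
        S = np.zeros(N, dtype=complex)
        S[k0:k0 + Nc] = np.exp(1j * theta)     # flat, band-limited spectrum
        x = 2 * np.real(np.fft.ifft(S))        # x2 keeps carrier amplitudes at 1
        limit = clip * np.abs(x).max()
        x = np.clip(x, -limit, limit)          # remove the highest peaks
        theta = np.angle(np.fft.fft(x)[k0:k0 + Nc])  # keep only the new phases
    S = np.zeros(N, dtype=complex)
    S[k0:k0 + Nc] = np.exp(1j * theta)
    return 2 * np.real(np.fft.ifft(S))

x = design_low_papr_pulse()
papr_value = (x**2).max() / (x**2).mean()
print(papr_value)
```

Because only the phases are fed back, the spectrum stays flat and band limited at every iteration, which is the defining property of the method.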

III. ALGORITHM RESULTS

This section presents the results of the proposed algorithm for two types of OFDM pulses: a short pulse, with 100 ms, and a long pulse, with 20 s. The performance of both pulses is compared with a chirp signal2 with the same characteristics.

2The term chirp is sometimes used interchangeably with sweep signal andlinear frequency modulation signal.



A. Short Pulse

For the short pulse, a 100 ms OFDM pulse with 1000 carriers from 20 kHz to 30 kHz was used. The algorithm was run one million times and the clipping process started at 0.8 of the maximum signal value. If the PAPR is not reduced during an iteration, the clipping value for the next iteration changes to 80% of the previous clipping value plus 0.2. For example, if a clipping of 0.8 does not reduce the PAPR, the clipping changes to 0.84 and after that to 0.872, and so on. The result for this test is presented in Fig. 2. As can be seen, the PAPR reduces to the value of 2 in just 3980 iterations. The problem is to reduce the PAPR below 2. After one million iterations the PAPR is only 1.945 and it is only reduced by 5.6 × 10⁻¹¹ in each iteration.
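The fallback rule for the clipping level described above (80% of the previous value plus 0.2) is a one-liner, and reproduces the 0.8 → 0.84 → 0.872 sequence from the text:

```python
def next_clip(c):
    """Clipping-level update used when an iteration fails to reduce
    the PAPR: 80% of the previous value plus 0.2."""
    return 0.8 * c + 0.2

c = 0.8
c = next_clip(c)   # 0.84
c = next_clip(c)   # 0.872
```

Note that the rule converges towards 1.0, i.e. clipping gradually vanishes once progress stalls.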

[Fig. 2 axes: iteration, 10⁰ to 10⁶ (log scale), vs. PAPR, 1.7 to 3.5]

Fig. 2: Algorithm results after 1 million iterations for a 100 ms OFDM pulse with 1000 carriers.

The resultant OFDM pulse will be compared with a chirp pulse with the same main characteristics: amplitude, duration and bandwidth. The probability of detection as a function of the signal's amplitude for the last pulse sample, using a matched filter and considering a threshold that produces a probability of false alarm of 10⁻⁶, is depicted in Fig. 3. One can observe

[Fig. 3 axes: (max. amplitude)/(noise std.), 0 to 0.2, vs. probability of detection, 0 to 1; curves for the OFDM pulse and the chirp pulse]

Fig. 3: Probability of detection for an OFDM and a chirp pulse as a function of the signal amplitude.

that the OFDM pulse detection is slightly better than the chirp pulse detection for the same amplitude.

B. Long Pulse

For the long pulse, a 20 s OFDM pulse with 2 million carriers from 0 Hz to 100 kHz was used. The algorithm was run 10 million times and the clipping value was manually tuned between 80% and 99.999%. Fig. 4 presents the instantaneous power for the resultant OFDM pulse and for a chirp with similar characteristics: energy, duration and bandwidth. The PAPR-reducing technique shows its value: the OFDM pulse presents a PAPR of 1.666 against 2 for the chirp. As a result, the OFDM pulse has considerably better efficiency.

[Fig. 4 panels: (a) OFDM instantaneous power distribution; (b) chirp instantaneous power distribution. Axes: instantaneous power (normalized to chirp peak power), 0 to 1, vs. occurrences (%), 0 to 30]

Fig. 4: Instantaneous power distribution for a 20 s OFDM pulse with 2 million carriers and a chirp pulse with a bandwidth of 100 kHz. The instantaneous power was normalized to the peak power of the chirp.

IV. CONCLUSION

Using the proposed algorithm it is possible to design OFDM pulses that present a low PAPR. The results show that only a few thousand iterations are needed to obtain an OFDM pulse with a PAPR of 2; however, to go below this value it is necessary to iterate the algorithm millions of times. Additionally, the results also show that it is easier to obtain an OFDM pulse with low PAPR for long pulses than for short pulses. It was possible to design an OFDM pulse with 20 s of duration that presents a flat spectrum between 0 and 100 kHz and a PAPR of 1.666. This result represents a 16.7% energy gain when compared with the chirp pulse having the same amplitude, length and bandwidth.

REFERENCES

[1] Henrik Schulze and Christian Luders, Theory and Applications of OFDM and CDMA, John Wiley & Sons, first edition, 2005.

[2] Daniel F. Albuquerque, Jose M. N. Vieira, Carlos A. C. Bastos, and Paulo J. S. G. Ferreira, “Ultrasonic OFDM Pulse Detection for Time of Flight Measurement Over White Gaussian Noise Channel,” in 1st International Conference on Pervasive and Embedded Computing and Communication Systems, Vilamoura, Portugal, 2011.

[3] Nadav Levanon and Eli Mozeson, Radar Signals, John Wiley & Sons, 2004.

[4] Seung Hee Han and Jae Hong Lee, “An overview of peak-to-average power ratio reduction techniques for multicarrier transmission,” IEEE Wireless Communications, vol. 12, pp. 56–65, 2005.

[5] Jiang Tao and Wu Yiyan, “An Overview: Peak-to-Average Power Ratio Reduction Techniques for OFDM Signals,” IEEE Transactions on Broadcasting, vol. 54, no. 2, pp. 257–268, 2008.

[6] Edwin Van der Ouderaa, Johan Schoukens, and Jean Renneboog, “Peak factor minimization using a time-frequency domain swapping algorithm,” IEEE Transactions on Instrumentation and Measurement, vol. 37, no. 1, pp. 145–147, 1988.

[7] Athanasios Papoulis, “A new algorithm in spectral analysis and band-limited extrapolation,” IEEE Transactions on Circuits and Systems, vol. 22, no. 9, 1975.

[8] R. W. Gerchberg, “Super-resolution through Error Energy Reduction,” Optica Acta: International Journal of Optics, vol. 21, no. 9, 1974.


Dynamic Collection Based Smoothed Radiomap Generation System

Jooyoung Kim, Myungin Ji, Youngsu Cho, Yangkoo Lee, Sangjoon Park

Positioning / Navigation Technology Research Team, Robot / Cognitive System Research Department

ETRI (Electronics and Telecommunications Research Institute)

Daejeon, Korea

[email protected], [email protected], [email protected], [email protected], [email protected]

Abstract— The fingerprinting method has been considered a promising technology for indoor localization. However, it is difficult to employ in applications targeting broad service areas due to the high cost of constructing the DB. The conventional DB generation system utilizes a static collection process, in which every collector gathers signal characteristics at every point of a grid dividing the service area. To achieve a sufficient data set for calculating the characteristics, a collector usually stands on each point for a while, up to a couple of minutes. Therefore, it takes too much time and cost to construct a fingerprint DB for whole serving areas. To deal with this problem, a dynamic collection based fingerprint DB generation system is proposed. In the proposed system, collectors walk along predesigned routes and gather signal measurements in motion using a smartphone, instead of staying at a point. Therefore, the proposed system remarkably reduces the cost of constructing the DB by improving the time-consuming collection process. However, collecting signals in motion decreases the reliability of the DB because of the signal variation, which is significant in indoor areas. To mitigate the variation, we apply a moving average filter to smooth the noisy measurements. As a result, the proposed system improves the efficiency of the fingerprint DB generation system with reasonable positioning performance. Experimental results prove the validity of the proposed dynamic collection based smoothed radiomap. The average positioning error using the proposed smoothed radiomap is about 7.41 m, and the standard deviation of the error is 6.17 m.

Keywords-fingerprinting, radiomap, dynamic collection, smart-phone

I. INTRODUCTION

Recently, location based services have been considered key applications, especially after the emergence of smart-phones. As smart devices, including smart-phones, permeate daily life, the demand for a location system that is ubiquitously available is increasing rapidly. Therefore, an accurate locating system, regardless of operation site, is identified as an important component of these applications. The Global Positioning System (GPS) meets the demand in outdoor areas, but no comparable solution has yet been proposed for indoor areas [1, 2].

To provide location information in indoor areas, several approaches have been proposed in the literature, and most methods are classified into four categories: Time Of Arrival (TOA), Time Difference Of Arrival (TDOA), Angle Of Arrival (AOA), and fingerprint methods. The TOA, TDOA and AOA methods have drawbacks when utilized on smart-phones because of additional requirements such as extra devices, aiding information, and timing synchronization. In addition, these measurements are vulnerable to the complex signal propagation environments of indoor areas. Thus, fingerprint based location is considered a promising technology for indoor localization.

The fingerprint based location system is generally divided into two steps. The first step is an off-line phase, or training phase, which constructs a radiomap through site-surveying to collect RSSI measurements of a service area [3]. The vector derived from statistics of the collected RSSI measurements is called a fingerprint, and the set of fingerprints is combined to build a radiomap which represents the characteristics of the signal pattern in a certain area. Then, in the second, on-line phase, or positioning phase, the positions of users are estimated by comparing the RSSI measured from users with the fingerprints.
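A minimal sketch of the on-line phase, assuming a k-nearest-neighbor match in RSSI space (the experiments later in the paper use K = 3); the toy radiomap values are invented for illustration:

```python
import numpy as np

def knn_locate(radiomap_rssi, radiomap_pos, observed, k=3):
    """Average the coordinates of the k fingerprints closest to the
    observed RSSI vector (Euclidean distance in signal space)."""
    d = np.linalg.norm(radiomap_rssi - observed, axis=1)
    nearest = np.argsort(d)[:k]
    return radiomap_pos[nearest].mean(axis=0)

# Toy radiomap: 4 cells, RSSI from 2 APs (dBm), known (x, y) in metres
rssi = np.array([[-40., -70.], [-45., -65.], [-70., -40.], [-65., -45.]])
pos = np.array([[0., 0.], [0., 5.], [10., 0.], [10., 5.]])
print(knn_locate(rssi, pos, np.array([-42., -68.])))
```

Averaging the nearest fingerprints' coordinates is the usual deterministic kNN variant; probabilistic matching is also common but not shown here.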

Despite the advantages mentioned above, the main obstacle to adopting a fingerprint based location system in a large-scale field is the laborious and time-consuming site surveying, or collecting process, for building radiomaps. Therefore, an automated collecting process, called dynamic collection, is adopted. Since the measurements are coarsely collected from the dynamic collection, a smoothed radiomap generation system is proposed to guarantee the reliability of radiomaps.

II. SYSTEM MODEL

A. Dynamic collection process using smart-phones

As explained above, the most significant drawback of the fingerprinting based location system is the cumbersome and time consuming process for building a radiomap. Conventionally, reference RSSI measurements for building a radiomap are collected statically: collectors stand at known positions, pin-point their location manually on a ready-made map, then wait for a while to gather enough tuples of measurements. Hence, it is prohibitively expensive to build radiomaps across broad areas based on the conventional static collecting process.

To solve the problem, a dynamic collection process is proposed. In the proposed process, collectors determine a path,


not a point, and walk along the path with a smart-phone which gathers the reference measurements and calculates the ground truth of the collectors automatically. In the conventional process, a significant part of the collection labor is consumed when a collector confirms his or her location for pin-pointing and waits to gather the measurements. However, this effort is remarkably decreased in the proposed system because it is automated by using a smart-phone.

For this purpose, a smart-phone application, called the collecting app, is developed. The collecting app has the ready-made map of a service area and provides accessible paths to collectors. Selecting one of the paths, the collectors start the collecting process. While the collectors follow the path, the application calculates their positions based on pedestrian dead-reckoning using the sensors of the smart-phone, and gathers Wi-Fi signals. Then, the application combines the Wi-Fi signal patterns with the calculated reference positions on which the patterns were gathered. As a result, a simple and cost-effective collecting process is achieved by using the application, making it feasible to build radiomaps of broad areas.

B. Smoothed radiomap generation

Though the proposed collecting process remarkably reduces the labor and cost of collecting measurements, the simplified process may make it difficult to build a reliable radiomap fingerprint. The RSSI measurement is inherently unstable and, unfortunately, this is much more severe in indoor environments. In the conventional process, therefore, collectors wait for a while to gather a sufficient number of measurements, not only one measurement, to avoid sudden variations of the signals. Then, the fingerprint is generated from the statistical characteristics derived from the set of measurements.

However, in the case of the dynamic collection process, it is hard to gather a sufficient number of measurements because collectors continuously move to follow the path, and the rate at which a smart-phone can gather Wi-Fi beacon signals is limited. To overcome this limitation, a smoothing algorithm which exploits neighbor measurements is utilized. Note that the main reason to adopt statistical characteristics of signals in conventional radiomap generation is to reduce the effect of sudden variance of the signals. Thus, mitigating the variation is also achieved with a smoothing process for the dynamic collection based radiomap.

The proposed smoothed radiomap generating system consists of two procedures. First, the collected area is divided into several cells and a fingerprint for each cell is calculated by averaging the measurements gathered within the cell. Then, the averaged fingerprints are smoothed with the fingerprints of neighbor cells. In the experiments, a moving average filter is adopted for smoothing. The positioning result using the smoothed radiomap is shown in the next section.
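The second procedure above (neighbor-cell smoothing) can be sketched for a single AP along a one-dimensional line of cells; this is a toy illustration, not the authors' implementation, and step 1 (per-cell averaging) is assumed already done:

```python
import numpy as np

def smooth_radiomap(cell_rssi, window=3):
    """Moving-average each cell's fingerprint with its neighbours
    (1-D line of cells; edge cells simply average fewer neighbours)."""
    half = window // 2
    out = np.empty_like(cell_rssi)
    for i in range(len(cell_rssi)):
        lo, hi = max(0, i - half), min(len(cell_rssi), i + half + 1)
        out[i] = cell_rssi[lo:hi].mean()
    return out

# Per-cell average RSSI for one AP along a corridor (dBm), one noisy cell
cells = np.array([-60., -61., -48., -62., -63.])
print(smooth_radiomap(cells))   # the -48 dBm spike is pulled toward its neighbours
```

In two dimensions the same idea applies with a 2-D neighborhood around each cell.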

III. EXPERIMENTAL RESULTS

To validate the proposed smoothed radiomap generating system based on dynamic collection, the positioning performance exploiting the smoothed radiomap is evaluated through experimental results.

The test-bed, CO-EX, Seoul, Korea, is shown in Fig. 1. The size of CO-EX is 36,364 m², and it takes about four hours to collect the RSSI measurements of one floor. The positioning error is calculated while moving along a designated path, and the true and estimated positions are also illustrated in Fig. 1 as blue squares and orange circles. In the experiment, the dynamic collection based smoothed radiomap is exploited and the positions are estimated by the k-nearest neighbor algorithm with K=3. The average positioning error is 7.42 m, and the standard deviation of the error is 6.17 m.

Figure 1. Shape of the test-bed (CO-EX), and true (blue squares) and estimated positions (orange circles)

Figure 2. CDF curve of the positioning errors

In Fig. 2, the cumulative distribution function (CDF) of the positioning error is shown. 90% of the errors are bounded within 8 m, and 70% of the errors are bounded within 3 m.

IV. CONCLUSIONS

In this paper, a smoothed radiomap generation system is proposed for dynamic collection. In dynamic collection, the reference measurements are automatically collected by a smart-phone application, so the efficiency of the collecting process is greatly improved. Since the collected data are not enough to obtain the statistical characteristics usually exploited in conventional fingerprint generating systems, the smoothed radiomap generation system is proposed. The experimental results show reasonable positioning accuracy, about 7.41 m on average, despite the coarsely gathered measurements yielded by the dynamic collection process.

ACKNOWLEDGMENT

This research was funded by the MSIP (Ministry of Science, ICT & Future Planning), Korea, in the ICT R&D Program 2013.

REFERENCES

[1] Y. S. Cho, M. Ji, Y. Lee, and S. Park, “WiFi AP position estimation using contribution from heterogeneous mobile devices,” Proc. IEEE Position Location and Navigation Symposium (PLANS), pp. 562-567, Apr. 2012.

[2] G. M. Djuknic, R. E. Richton, “Geolocation and Assisted GPS,” IEEE Computer, vol. 2, pp. 123-125, Feb. 2001.

[3] P. Bahl, V. Padmanabhan, “RADAR: An in-building RF-based user location and tracking system,” Proc. IEEE INFOCOM 2000, Tel-Aviv, Israel, vol. 2, pp. 775-784, Mar. 2000.


Accurate Smartphone Indoor Positioning Using Non-Invasive Audio

Sérgio I. Lopes, José M. N. Vieira, João Reis, Daniel Albuquerque and Nuno B. Carvalho
Department of Electronics, Telecommunications and Informatics, University of Aveiro, 3810 Aveiro, Portugal.
Email: sil, jnvieira, jreis, dfa, [email protected]

Abstract—In this paper we propose a reliable acoustic indoor positioning system fully compatible with the hardware of a conventional smartphone. The proposed system takes advantage of the smartphone audio I/O and processing capabilities to perform acoustic ranging in the audio band using non-invasive audio signals, and it has been developed with applications that require high accuracy in mind, such as augmented/virtual reality, gaming or audio guide applications. The system works in a distributed operation mode, i.e. each smartphone is able to obtain its own position information using a GPS-like topology. In order to support the positioning system, a wireless sensor network (WSN) of synchronized acoustic anchor motes was designed. To keep the infrastructure in sync we developed an Automatic Time Synchronization and Syntonization Protocol that resulted in a sync offset error below 5 µs. Using Time Difference of Arrival (TDoA) measurements we were able to obtain position estimates with an absolute mean error of 7.3 cm and a corresponding absolute standard deviation of 3.1 cm for a position refresh rate of 350 ms, which is acceptable for the type of application on which we are focused.

Keywords—LPS, IPS, acoustic positioning, smartphone localization, location-aware.

I. INTRODUCTION

The Global Positioning System (GPS) is the most widely used method for outdoor localization and provides global coordinates with an accuracy within 10 meters [1]. However, GPS signals are too weak to penetrate buildings, which makes them useless for indoor positioning. High accuracy indoor positioning systems normally use Radio-Frequency (RF) signals, e.g. Ultra-Wideband (UWB), or acoustic signals [2]. UWB positioning systems use narrow pulses with very short duration (subnanosecond), resulting in widely spread radio signals in the frequency domain [3] and in high accuracy ToA measurements when compared with other RF methods [4]. A major drawback of UWB systems is the synchronization task, which typically results in increased hardware complexity and cost due to the high precision needed in ToA estimation. On the other hand, by using acoustic signals, a resolution in time in the order of µs can easily be achieved using only off-the-shelf components.

II. SYSTEM ARCHITECTURE

The proposed system was developed with increased accuracy (in the decimeter order) applications in mind, e.g. augmented/virtual reality, gaming or audio guide applications. To achieve these requirements we focused on the following criteria when designing the system: indoor operation, sub-decimeter accuracy, smartphone compatibility, scalability and low-cost infrastructure.

Indoor operation limits the use of GPS systems, due to the attenuation, multi-path and interference that RF signals suffer when used indoors. To obtain increased accuracy, a range-based positioning system with an infrastructure of anchors at known positions was used in order to circumvent the lack of accuracy of mutual positioning systems. Smartphone compatibility restricts the selection of the sampling frequency of the acoustic signal due to smartphone hardware constraints. Commercially available smartphones allow a maximum sampling rate of 44.1 kHz, therefore limiting the useful band to 22.05 kHz. Figure 1 presents the overall architecture of the proposed positioning system. A modular infrastructure approach takes advantage of a low-cost Wireless Sensor Network (WSN), thus also ensuring the scalability criterion. This way, multiple rooms with unique IDs can be added depending on the needs.

[Fig. 1 diagram: rooms 0 and 1, each containing an access point (AP) and anchor motes (AM0–AM2) serving mobile devices (MD0–MD3) over the WSN; a gateway mote connects through a Wi-Fi router to the system backend (positioning server, DB, PC and remote configuration via WWW)]

Fig. 1: Overall system architecture.


III. POSITIONING APPROACH

The positioning process can be split into three main stages [5]: synchronization, measurement and position estimation.

A. Synchronization

Time division multiple access (TDMA) was used, based on a centralized architecture with all anchors in sync. To keep the WSN infrastructure in sync, a reliable automatic time synchronization and syntonization protocol is used. The proposed method is based on a simplified version of the IEEE 1588 standard, and allowed us to obtain a clock offset sync error of less than 5 µs [6]. This resulted in range measurements with an error standard deviation of less than 1 cm. Figure 2 presents the time slot structure used in the coordination process. Each anchor mote is assigned a specific time slot for signal transmission. Each time slot can be split into three distinct periods: a signal transmission period (sig k), a listening period (list k) and a guard time period (guard time). The signal transmission period is the time the transmitter needs to send the acoustic pulse. The listening period is the time slot used by the mobile device to estimate the range measurement, and the guard time period was added to reduce the impact of room reverberation1.

[Fig. 2 diagram: TDMA frame with slots 0..K; each slot k contains a signal transmission period (sig k), a listening period (list k) and a guard time, delimited by the instants tsig_k, tlist_k and tguard_k]

Fig. 2: TDMA structure.

B. Measurement

The measurement stage is based on ToA estimation by the mobile device. Anchor motes were programmed to periodically transmit acoustic chirp pulses. Compared to pure sine tones, the usage of chirp pulses overcomes most of their problems, such as poor resolution, low environmental noise immunity, short range and low robustness to the Doppler effect. The probability of detection of a transmitted chirp is directly related to the signal-to-noise ratio (SNR) rather than the exact waveform of the received signal [8]. The transducers used to equip the anchor motes were the same presented in [9], i.e. a piezo-tweeter speaker and a Panasonic WM61-A electret microphone.
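A minimal sketch of chirp transmission and matched-filter detection, using the 44.1 kHz smartphone sampling rate mentioned earlier and the 18–22 kHz band of the paper's pulses; the pulse duration, delay and noise level are invented for illustration:

```python
import numpy as np

fs = 44_100                        # smartphone sampling rate (Hz)
T, f0, f1 = 0.01, 18_000, 22_000   # 10 ms chirp sweeping 18-22 kHz
t = np.arange(int(T * fs)) / fs
chirp = np.cos(2 * np.pi * (f0 * t + (f1 - f0) / (2 * T) * t**2))

# Received signal: the chirp buried in noise at an unknown delay
rng = np.random.default_rng(1)
delay = 500                                    # samples
rx = 0.1 * rng.standard_normal(4096)
rx[delay:delay + len(chirp)] += chirp

# Matched filter = correlation with the time-reversed pulse
mf = np.convolve(rx, chirp[::-1], mode='valid')
toa_sample = int(np.argmax(np.abs(mf)))
print(toa_sample)   # ~= delay
```

The correlation peak compresses the whole pulse energy into a few samples, which is why detection depends on the SNR rather than the waveform shape.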

1) Signal design: Signals with time and frequency diversity, e.g. linear frequency modulated signals or chirps, are well known in RADAR and represent a case where time and frequency are both used to increase the probability of detection. In RADAR, chirps with a large time-bandwidth product (TBP) are used to obtain narrow compressed peaks with SNR maximization, resulting in signals with increased probability of detection, and also when Doppler tolerance is needed. Chirps can achieve up to ±B/10 Doppler tolerance, improving the detection probability for large Doppler shifts [10]. By increasing the TBP and using adequate weighting in the signal design it is possible to increase the SNR, the pulse compression (better time resolution) and the Doppler tolerance,

1The room reverberation time was measured using the ISO 3382 standard, which resulted in a T60 reverberation time of 25 ms [7].

[Fig. 3 panels: (a) rectangular window, (b) Hanning window, (c) combined window; each column shows the weighted pulse in time (ms), its frequency response (dB, 18–22 kHz) and its autocorrelation (dB) around the central peak]

Fig. 3: Chirp pulse design. Chirp with frequency content from 18 kHz to 22 kHz. The first line shows the weighted pulses in the time domain, the second line their frequency responses, and the third line the autocorrelation function in time around the central peak.

which highly improves the probability of detection in static and dynamic positioning scenarios [8]. To increase the transmitted power while keeping the chirp pulse non-invasive, i.e. inaudible to humans, a combined window that uses the right half of a rectangular window together with the left half of a Hanning window was used. Table I compiles the most important figures of merit for the rectangular, Hanning and combined windows.

TABLE I: Figures of merit of the chirp pulses presented in Figure 3. CR is the compression ratio, PSL represents the Peak Sidelobe Level and PL represents the Peak Level.

Chirp Pulse   Weighting Window   B (kHz)   TBP   CR (ms)   PSL (dB)   PL (dB)
a)            Rectangular        4         120   0.30      13.5       0.0
b)            Hanning            4         120   0.60      46.9       -8.5
c)            Combined           4         120   0.44      15.8       -3.3
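One literal reading of the combined window described above can be sketched as follows; the ordering (Hanning rise first, then a flat rectangular section) is an assumption, since the text does not state which half comes first:

```python
import numpy as np

def combined_window(n):
    """Assumed ordering: rising half of a Hanning window, followed by a
    flat (rectangular) section up to the end of the pulse."""
    half = n // 2
    w = np.ones(n)
    w[:half] = np.hanning(2 * half)[:half]   # smooth rise from 0 towards 1
    return w

w = combined_window(1000)
```

The smooth edge suppresses the audible click of an abrupt onset, while the rectangular section keeps the transmitted power high, matching the intermediate CR and PSL values in Table I.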

2) ToA Estimation: To measure the ToA, an approach based on selective time filtering using prior knowledge of the system TDMA settings was used; see Figure 4 for more details. This selective time search heavily reduces the probability of false peak detection (i.e. interferences) due to the implicit information present in the periodicity of the transmitted pulses.

After correlation, the L2-norm of the signal at the output of the correlator, xc, is computed, see equation (1):

xL2ne[m] = sqrt( Σ_{n=mD}^{(m+1)D−1} |xc[n]|² ),  with m = 0, …, Nc   (1)

where D is the size of the L2-norm estimator and Nc is the number of chunks of the correlated signal to process. This way, it is possible to obtain a decimated energy estimator, which considerably reduces the number of instructions needed to detect a peak. Moreover, an adaptive threshold method is used to increase the algorithm performance. The method

45/278

Page 51: International Conference on Indoor Positioning and Indoor Navigation

2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31st October 2013

uses a FIFO buffer, WFIFO, that contains Nb samples of the decimated energy estimator xL2ne. Due to the signal periodicity, we selected a value of Nb that allows the inclusion of all the data needed to compute a position estimate using the TDMA structure presented in Figure 2. This way, by looking at the maximum and minimum values of WFIFO, we are able to compute a time-variant Signal-to-Noise Ratio (SNR), see equation (2). The dynamic threshold value th was defined using the conservative rule of 50% of the SNR, i.e. th = SNR/2.

SNR = 20 log10(max(|WFIFO|)−min(|WFIFO|)) (2)
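As a sketch, the decimated energy estimator of equation (1) and the FIFO-based dynamic threshold might look as follows. Reading the threshold rule in the linear domain as (max - min)/2, per the "(max - min)/2" block of Fig. 4, is an assumption of this sketch, as is discarding trailing samples that do not fill a chunk:

```python
import numpy as np
from collections import deque

def l2_norm_energy(xc, D):
    """Decimated L2-norm energy estimator (eq. 1): for each chunk of D
    samples of the correlator output xc, take the square root of the
    summed squared magnitudes. Trailing samples that do not fill a
    chunk are discarded in this sketch."""
    Nc = len(xc) // D                      # number of complete chunks
    chunks = np.reshape(xc[:Nc * D], (Nc, D))
    return np.sqrt(np.sum(np.abs(chunks) ** 2, axis=1))

class AdaptiveThreshold:
    """FIFO of Nb decimated-energy samples; the threshold is half the
    max-min spread of the current window contents."""
    def __init__(self, Nb):
        self.fifo = deque(maxlen=Nb)

    def update(self, x_l2ne):
        self.fifo.append(abs(x_l2ne))
        w = np.asarray(self.fifo)
        return (w.max() - w.min()) / 2.0
```

A peak would then be declared whenever a new xL2ne sample exceeds the returned threshold.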

Furthermore, we included Non-Line-of-Sight (NLoS) mitigation in the ToA estimation in order to improve the peak detection performance in real situations where multipath and NLoS occur. The approach consists in searching for earlier peaks (i.e. peaks that appear in the left neighborhood of the main peak) with lower energy, but still above th.
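A minimal sketch of this earlier-peak search; the neighborhood width is an assumed parameter not given in the text:

```python
import numpy as np

def nlos_mitigate(x_l2ne, main_peak, th, neighborhood):
    """Search the left neighborhood of the main correlation peak for an
    earlier, weaker peak still above the dynamic threshold th; return
    its index, or the main peak index if none is found."""
    start = max(0, main_peak - neighborhood)
    seg = x_l2ne[start:main_peak]
    above = np.nonzero(seg > th)[0]        # earliest sample above th wins
    return start + above[0] if above.size else main_peak
```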


Fig. 4: ToA Estimation Routine (TER).

C. Position Estimation

Using three anchor nodes, it is possible to obtain two TDoA estimates, which gives the possibility to compute 2D position estimates. Post-validation of each group of ToA measurements is needed in order to generate a valid TDoA vector to use in the localization algorithm; see Figure 5 for more details. Since TDoA measurements are always noisy (e.g. thermal noise, external acoustic noise, sound velocity changes, etc.), the position estimation can be seen as an optimization problem. We opted to find the position that minimizes the squared error of the intersection point for all the hyperbolas defined by each intervening anchor node. A detailed description of the method can be found in [1].
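The optimization view of the position estimation can be sketched with a generic nonlinear least-squares fit of the TDoA residuals. This SciPy-based fit, the sound speed value and the anchor layout are assumptions of the sketch, not the specific solver of [1]:

```python
import numpy as np
from scipy.optimize import least_squares

def tdoa_position(anchors, tdoa, c=343.0, x0=None):
    """2D position from TDoA measurements via nonlinear least squares.
    anchors is an (N, 2) array; tdoa[i] is the arrival-time difference
    between anchor i+1 and anchor 0, so N anchors give N-1 range
    differences (hyperbolas)."""
    anchors = np.asarray(anchors, float)
    if x0 is None:
        x0 = anchors.mean(axis=0)          # start from the centroid
    def residuals(p):
        d = np.linalg.norm(anchors - p, axis=1)
        return (d[1:] - d[0]) - c * np.asarray(tdoa)
    return least_squares(residuals, x0).x
```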


Fig. 5: Position Estimation Routine (PER) with TDoA pre-validation.

IV. SYSTEM PROTOTYPE

The system prototype consists of two distinct devices: the acoustic mote and a smartphone acting as a mobile device. A WSN of acoustic motes is used to build an infrastructure of anchors at known positions. These motes can be used as building blocks that can easily be added to an existing infrastructure in order to meet the scalability criterion. The mobile station uses an iPhone app running in real time with Wi-Fi connectivity.


Fig. 6: System devices. a) Acoustic anchor motes and gateway. b) Mobile device running the positioning application.


V. EXPERIMENTAL VALIDATION

Two experiments were performed in order to evaluate the proposed system. The first experiment was conducted to obtain a quantitative evaluation of the overall system by measuring the estimated position error on a grid of fixed points in the room. A second experiment was carried out to obtain a qualitative evaluation of the positioning system when a person equipped with a mobile device follows a moving trajectory. A grid of 6 × 5 m with


Fig. 7: Experiment 1: Real positions - black cross; Estimated positions - red points; Anchor nodes - circles A1, A2 and A3.

1 m step was used to evaluate the positioning system. All the obtained results are plotted overlapped in the same xy plane, see Figure 7. Note that no outlier measurements are present. This can be justified by the fact that all measurements were taken in the laboratory under a controlled acoustic environment, i.e. acoustic noise below 40 dBSPL. A smartphone running the positioning app was placed at each position marked with a black cross (see Figure 7), at a constant height of 1.70 m, and one hundred position estimates were then obtained; see the results in Figure 8. An absolute mean error of 7.3 cm and an absolute standard deviation of 3.1 cm were obtained.


Fig. 8: Absolute positioning error and corresponding standard deviation for the X-axis (black) and Y-axis (red).

To obtain a qualitative evaluation of the positioning system, a second experiment was performed. In this case, a moving person with the receiver on top of the head was used to evaluate the positioning system along a moving trajectory, see Figure 9. In this experiment, only a qualitative evaluation can be performed, because the errors introduced by the human movement cannot be extracted due to the difficulty of ground-truth validation.

The audibility of the proposed signals was perceptible only to younger people, i.e. people below 25 years of age. Among the people able to detect the presence of these signals, all agreed that a classification as non-invasive audio was acceptable.


Fig. 9: Experiment 2: Real trajectory - solid black line; Estimated positions - red points; Anchor nodes - circles A1, A2 and A3.

VI. CONCLUSIONS

In this paper we proposed an effective indoor acoustic positioning system, compatible with conventional smartphones, that uses non-invasive audio signals and TDoA measurements for low-cost ranging, thus enabling effective indoor positioning with off-the-shelf devices. The system is supported by a WSN of acoustic anchors running with a sync offset error below 5 µs. Experimental tests were performed using an iPhone 4S in order to evaluate the proposed system. Results showed stable and accurate position estimates, with an absolute standard deviation of less than 3.1 cm for a position refresh rate of 350 ms, which is acceptable for the type of application we are focused on.

REFERENCES

[1] A. H. Sayed, A. Tarighat, and N. Khajehnouri, "Network-based wireless location," IEEE Signal Processing Magazine, vol. 22, no. 4, pp. 24–40, 2005.

[2] R. Mautz, Indoor Positioning Technologies. Institute of Geodesy andPhotogrammetry, Department of Civil, Environmental and GeomaticEngineering, ETH Zurich, 2012.

[3] N. Patwari, J. N. Ash, S. Kyperountas, A. O. Hero, R. L. Moses, and N. S. Correal, "Locating the nodes: cooperative localization in wireless sensor networks," IEEE Signal Processing Magazine, vol. 22, no. 4, pp. 54–69, 2005.

[4] S. Gezici, Z. Tian, G. B. Giannakis, H. Kobayashi, A. F. Molisch, H. V.Poor, and Z. Sahinoglu, “Localization via ultra-wideband radios,” IEEESignal Processing Magazine, vol. 22, no. 4, pp. 70–84, July 2005.

[5] I. Amundson and X. D. Koutsoukos, A Survey on Localization forMobile Wireless Sensor Networks. Springer-Verlag Berlin Heidelberg,2009.

[6] J. Reis and N. Carvalho, “Synchronization and syntonization of wirelesssensor networks,” in Wireless Sensors and Sensor Networks (WiSNet),2013 IEEE Topical Conference on, 2013, pp. 151–153.

[7] J. S. Bradley, “Using ISO 3382 measures, and their extensions, toevaluate acoustical conditions in concert halls,” Acoustical Science andTechnology, vol. 26, no. 2, pp. 170–178, 2005.

[8] N. Levanon and E. Mozeson, Radar Signals. Hoboken, New Jersey:John Wiley & Sons, Inc., 2004.

[9] S. I. Lopes, J. M. N. Vieira, and D. Albuquerque, "High accuracy 3D indoor positioning using broadband ultrasonic signals," in Trust, Security and Privacy in Computing and Communications (TrustCom), 2012 IEEE 11th International Conference on, June 2012, pp. 2008–2014.

[10] M. I. Skolnik, Ed., Radar Handbook, 2nd ed. McGraw-Hill, 1990.


Locally Optimal Confidence Hypersphere for a Gaussian Mixture Random Variable

Pierre Sendorek*, Maurice Charbit*, Karim Abed-Meraim**, Sébastien Legoll***

* Télécom ParisTech, Paris, France
[email protected]
** Polytech Orléans, Orléans, France
*** Thales Avionics, Valence, France

Abstract—We address the problem of finding an estimator such that its associated confidence ball is the smallest possible, in the case where the probability density function of the true position (or of the parameter to estimate) is a d-dimensional Gaussian mixture. As a solution, we propose a steepest descent algorithm which optimizes the position of the center of a ball such that its radius decreases at each step while still ensuring that the ball centered on the optimized position contains the given probability. After convergence, the obtained solution is thus locally optimal. However, our benchmarks suggest that the obtained solution is globally optimal.

Keywords—Confidence domain; Gaussian Mixture Model; Optimization; Monte-Carlo; Robust Estimation; Accuracy

I. INTRODUCTION

In navigation, it is often of practical interest to express the accuracy of a position estimator by the dimensions of its confidence domain [1]. One may ask which estimator achieves the optimal accuracy with respect to this criterion. In a Bayesian setting, when the probability density function (pdf) of the true position given the measurement is Gaussian, it is well known that the smallest ball containing the true position with a given probability is centered on the mean; in this case the best estimator is the mean. But the problem has been studied less when the probability density has fewer symmetries, although this situation naturally appears in navigation. When several sources are used to form the measurement vector, taking into account the probability of failure of each source results in a pdf of the position expressed as a Gaussian Mixture (GM) [2,9]. In this case it is of interest to have a position estimator such that its associated confidence ball is the smallest possible.

In this paper, we address the problem of finding the smallest confidence ball containing the position with a given probability. As a solution, we propose a "multiple" steepest descent algorithm which optimizes the position of the center of a ball such that its radius decreases at each step while still ensuring that the ball centered on the optimized position contains the given probability. After convergence of the steepest descent, the obtained solution is locally optimal. This steepest descent is therefore run as many times as there are Gaussians in the GM. Finally, the estimated position yielded by the algorithm is, among all the locally optimal solutions, the center of the ball with the smallest reached radius.

Our algorithm's solution is compared to the globally optimal solution (computed by an exhaustive search) in the 1-dimensional case. It is shown that the globally optimal solution empirically matches our algorithm's. In particular, when the probability density function is a single Gaussian, the obtained solution matches the optimal solution, which is the mean.

II. POSITION OF THE PROBLEM

A. Probability of being outside a ball

Suppose that the pdf of our d-dimensional parameter of interest, say X, is described by a GM. Let N_g be the number of Gaussians composing the mixture and, for each component j from 1 to N_g, let α_j be the weight of the Gaussian in the mixture, μ_j its mean and C_j its covariance. The pdf of X thus writes

p_X(x) = Σ_{j=1}^{N_g} α_j · f_j(x)    (1)

where f_j(x) = N(x; μ_j, C_j) is the evaluation at x of the pdf of a Gaussian with mean μ_j and covariance C_j. We call A the probability of X being outside a ball of center c and radius r. The definition of A is given by

A(c, r) = Pr(X ∉ B(c, r)) = ∫_{x ∉ B(c, r)} p_X(x) dx.    (2)

B. The Problem

The problem is to find a center c such that the radius r is the smallest under the constraint that X has to be in this ball with an expected probability of 1 − α. This problem is equivalent to the following: find c such that r is the smallest possible under the constraint A(c, r) ≤ α.

We will see in the appendix how the value of A(c, r) can be computed using some numerical functions related to the


(generalized) chi-square law. Also, as the reader may have noticed, we chose to deal with the complement of the probability of being inside a ball. This is because the user may want to use standard libraries like [6] which implement the chi-squared complementary cumulative function (also called the survival function), known to be more precise for the tails of the distribution; this is our case, since α is usually closer to 0 than to 1 in practical applications.

III. THE OPTIMIZATION ALGORITHM

A. Overview

This section details the principle of our algorithm, which is the original contribution of this work. The derivation of the equations will be explained in the following sections, in decreasing order of abstraction.

Our algorithm performs N_g optimizations, each one taking the form of a steepest descent which stops once it reaches a local optimum. Each of these steepest descents is initialized with c ← μ_j. The initialization step requires a value of r which satisfies A(c, r) = α. This value of r can be found either by interval halving, or by more sophisticated methods, namely the secant method or Newton's method, because r ↦ A(c, r) is a decreasing function (as a complementary cumulative function). When both of these values are found, the optimization step starts by finding which (small) variations (δc, δr) of the couple (c, r) do not change the probability of X being outside the ball. This gives a set of possible directions (δc, δr) with the following property:

A(c + δc, r + δr) = A(c, r).    (3)

Among all those possible directions for c, we choose the one which leads to the greatest improvement in terms of radius: hence, the chosen direction is the steepest. The center c is optimized by being replaced by c' = c + δc and, to finish the optimization step, instead of taking r + δr as the radius for the following step, the algorithm solves the equality A(c', r) = α in the variable r (e.g. by one of the already mentioned methods), which is preferred in order to avoid the accumulation of linearization errors during the successive steps of the optimization. Once this optimization step is finished, another one begins. The process is repeated as long as the step is not negligible, i.e. as long as there is an improvement of the radius. Finally, among the N_g obtained local optima, the one which is chosen is the one for which the radius is the smallest.
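As an illustration of the initialization step, the following sketch estimates A(c, r) by Monte-Carlo for a given mixture and finds the radius satisfying A(c, r) = α by interval halving. The sample count and the initial bracket [0, 50] are arbitrary choices of this sketch, not from the paper:

```python
import numpy as np

def A(c, r, weights, means, covs, n=100_000, seed=0):
    """Monte-Carlo estimate of A(c, r) = Pr(X outside B(c, r)) for a
    Gaussian mixture: draw per-component counts, then samples."""
    rng = np.random.default_rng(seed)
    counts = rng.multinomial(n, weights)
    outside = 0
    for j, nj in enumerate(counts):
        if nj:
            x = rng.multivariate_normal(means[j], covs[j], size=nj)
            outside += np.count_nonzero(np.linalg.norm(x - c, axis=1) > r)
    return outside / n

def radius_for_alpha(c, alpha, weights, means, covs, lo=0.0, hi=50.0, iters=40):
    """Interval halving for A(c, r) = alpha, valid because r -> A(c, r)
    is decreasing (a complementary cumulative function)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if A(c, mid, weights, means, covs) > alpha:
            lo = mid   # ball still too small: too much probability outside
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Fixing the seed makes A deterministic and monotone in r across bisection calls, so the search converges to an empirical quantile of the sampled mixture.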

B. Steepest descent direction

To find the steepest descent direction, we want to find which (small) variations (δc, δr) satisfy (3). This implies that we want

A(c + δc, r + δr) − A(c, r) = 0.    (4)

And because δr and δc are supposed to be small, we replace the left term by its first-order Taylor expansion,

A(c + δc, r + δr) − A(c, r) ≈ ∇_c A(c, r) · δc + ∂_r A(c, r) · δr,

where ∇_c A(c, r) is the gradient, i.e. the vector of partial derivatives with respect to the components of c, and ∂_r A(c, r) is the partial derivative of A with respect to r. Hence we get the equation

∇_c A(c, r) · δc + ∂_r A(c, r) · δr = 0    (5)

or equivalently, since ∂_r A(c, r) is nonzero,

δr = −∇_c A(c, r) · δc / ∂_r A(c, r).    (6)

Since several directions are possible, the problem is now to find the steepest descent direction for c. To do this, we take, among all the vectors δc which have the same (small) norm, say |δc| = ε, the one which minimizes δr. Mathematically, it is equivalent to say that we search for

argmin_{δc : |δc| = ε} [ −∇_c A(c, r) · δc / ∂_r A(c, r) ].

Finally, the Cauchy–Schwarz inequality ensures that

−|∇_c A(c, r)| · ε ≤ ∇_c A(c, r) · δc ≤ |∇_c A(c, r)| · ε.    (7)

As a consequence, since ∂_r A(c, r) is negative, we want the value of ∇_c A(c, r) · δc to be negative, so δr is minimized when δc = −ε · ∇_c A(c, r) / |∇_c A(c, r)|, which saturates the left inequality in (7).

The value ε is the size of the step, which was supposed to be small during the calculations. In practice, however, we take ε so as to target a change of half the current radius, and the algorithm works. Also, to avoid oscillations around local optima, when a variation of the center leads to an increase of the radius (whereas the linearization always "predicts" a decrease), the step is halved. Halving can be repeated at most N_h times (supplied by the user), after which the algorithm considers that the potential improvement is negligible. This translates into the constraint |δr| = r/Q, which results in choosing as a step size

ε = r · |∂_r A(c, r)| / (Q · |∇_c A(c, r)|)

with an initial value Q = 2. Finally, the algorithm to find the optimal (c, r) sums up to:


r_MC ← ∞
for j = 1 … N_g:
    Q ← 2
    c ← μ_j
    find r such that A(c, r) = α
    do:
        ε ← r · |∂_r A(c, r)| / (Q · |∇_c A(c, r)|)
        δc ← −ε · ∇_c A(c, r) / |∇_c A(c, r)|
        (c_old, r_old) ← (c, r)
        c ← c + δc
        find r such that A(c, r) = α
        if r > r_old:
            (c, r) ← (c_old, r_old)
            Q ← 2Q
    while Q ≤ 2^{N_h}
    if r < r_MC:
        (c_MC, r_MC) ← (c, r)

At the end of the algorithm, (c_MC, r_MC) describes the locally optimal ball containing X with probability 1 − α.

C. Computation of the probability to be in a ball

The computation of the value of A(c, r) implies the use of the generalized chi-square cumulative function, which can be efficiently computed e.g. using the algorithms in [3,4]. Indeed,

A(c, r) = Σ_{j=1}^{N_g} α_j ∫_{x ∉ B(c, r)} f_j(x) dx    (8)

where ∫_{x ∉ B(c, r)} f_j(x) dx = Pr(χ_j > r²) for a variable χ_j which follows a generalized non-central chi-square law [4,7] with appropriate parameters (see the appendix). In the following sections, this value will be computed from the numerical routines associated to this law.

D. Derivative according to the center

The expression of the gradient ∇_c A(c, r) has similarities with the expression of A(c, r), which makes its computation by the Monte-Carlo method comfortable. We derive the expression of the gradient by remarking that

A(c, r) = ∫_{x ∉ B(c, r)} p_X(x) dx = ∫_{x' ∉ B(0, r)} p_X(x' + c) dx'    (9)

which allows differentiating under the integral sign:

∇_c A(c, r) = ∫_{x' ∉ B(0, r)} ∇_c p_X(x' + c) dx' = −Σ_{j=1}^{N_g} α_j ∫_{x' ∉ B(c, r)} f_j(x') · C_j^{−1}(x' − μ_j) dx'.

E. Derivative according to the radius

The following derivations refer to a mapping of the Euclidean coordinates into generalized d-dimensional polar coordinates. However, we won't need to make the integrals explicit, since the expression in polar coordinates will only be used to obtain the formula of the derivative with respect to the radius. The obtained integral has a pleasant expression when mapped back into Euclidean coordinates, in which we will be able to evaluate it numerically. Using (9) as a starting point, the substitution θ = x/|x| and ρ = |x| leads to the generalized polar coordinates

A(c, r) = ∫_r^∞ ∫ p_X(ρθ + c) · ρ^{d−1} dσ(θ) dρ

where σ is the Lebesgue measure on the unit sphere [5], which we won't need to make more explicit for calculating the derivative:

∂_r A(c, r) = −r^{d−1} ∫ p_X(rθ + c) dσ(θ).

Mapped back into Euclidean coordinates, this expression can be rewritten as an integral over the exterior of the ball of the form Σ_{j=1}^{N_g} α_j ∫_{x ∉ B(c, r)} f_j(x) g_j(x) dx, which is the form evaluated numerically in the next section.

IV. MONTE CARLO COMPUTATIONS

Our choice is oriented towards numerical integration, since the analytical formulae of the derivatives are unknown to the authors in the general case. Among numerical methods, we chose Monte-Carlo, which is known to be insensitive to the increase of dimensionality and is an efficient way to sample the integration space at points where the integrand has significant values (far from zero). Indeed, the derivatives ∇_c A(c, r) and ∂_r A(c, r) can both be expressed, modulo the adequate choice of the functions g_j, as

Σ_{j=1}^{N_g} α_j ∫_{R^d} f_j(x) g_j(x) dx

where each j-th term of the sum can be computed thanks to the Monte-Carlo method with importance sampling. Thus the integral


∫_{R^d} f_j(x) g_j(x) dx    (10)

is numerically computed by sampling the iid variables (X_{j,t})_{t=1…N_d}, each one according to the law N(μ_j, m·C_j), where m is a value greater than 1. Such a choice of m favors realizations of X_{j,t} outside B(c, r), the ball being a set where the functions g_j are null. Thus the pdf of each random variable X_{j,t} is x ↦ N(x; μ_j, mC_j). A Monte-Carlo integration approximates (10) by

[ (1/N_d) Σ_{t=1}^{N_d} g_j(X_{j,t}) · f_j(X_{j,t}) / N(X_{j,t}; μ_j, mC_j) ] / [ (1/N_d) Σ_{t=1}^{N_d} f_j(X_{j,t}) / N(X_{j,t}; μ_j, mC_j) ]

which tends to the ratio of the expectations

[ ∫ g_j(x') · ( f_j(x') / N(x'; μ_j, mC_j) ) · N(x'; μ_j, mC_j) dx' ] / [ ∫ ( f_j(x') / N(x'; μ_j, mC_j) ) · N(x'; μ_j, mC_j) dx' ]

which is indeed the desired value (10) when N_d tends to infinity, since the denominator equals ∫ f_j(x') dx' = 1. The strength of importance sampling in this case is that only a small number N_d of drawings suffices to make the algorithm work, because the possible errors in the computation of δc and ε at each step are approximately corrected at the next step, thanks to the computation of the new values of δc and ε, which only take the current value of (c, r) into consideration as a starting point for the descent.
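The self-normalized importance-sampling estimator above can be sketched for a single Gaussian component f_j as follows. The concrete g (an indicator of the exterior of a ball), the inflation factor m and the sample count are illustrative assumptions:

```python
import numpy as np

def snis_estimate(g, mean, cov, m=4.0, n=50_000, seed=0):
    """Self-normalized importance sampling of the integral of f(x)g(x)
    for a Gaussian f = N(mean, cov), using the inflated proposal
    q = N(mean, m*cov) with m > 1 so that draws also land outside the
    ball, where g is non-null."""
    rng = np.random.default_rng(seed)
    d = len(mean)
    x = rng.multivariate_normal(mean, m * np.asarray(cov), size=n)
    diff = x - mean
    # Importance weight w = f(x)/q(x); for Gaussians sharing the same
    # mean it reduces to m^(d/2) * exp(-(1 - 1/m) * quad / 2), with
    # quad = (x - mean)^T cov^{-1} (x - mean).
    quad = np.einsum('ni,ij,nj->n', diff, np.linalg.inv(cov), diff)
    w = m ** (d / 2) * np.exp(-0.5 * (1.0 - 1.0 / m) * quad)
    # Ratio form of the text: numerator estimates the f.g integral,
    # denominator estimates the integral of f, which equals 1.
    return np.sum(w * g(x)) / np.sum(w)
```

For a standard 2-D Gaussian and g the indicator of the exterior of the ball of radius r, the exact value is exp(−r²/2), which the estimator recovers with a modest number of draws.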

V. BENCHMARKS

In a one-dimensional setting, we compare the solution obtained by our algorithm to the globally optimal solution obtained by a greedy search on the discretized space. Our algorithm is assessed on 100 Gaussian Mixtures with randomly drawn parameters: N_g is drawn as 2 + G, where G follows a geometric law of mean 4 (to avoid the trivial case N_g = 1); μ_j is drawn according to a centered Gaussian with a standard deviation equal to 10; and C_j = K_j/3 (these are scalars in the 1D case), where K_j is drawn according to a chi-square law with 3 degrees of freedom. The means are thus spaced about 10 times the order of magnitude of the standard deviations of the Gaussians, to avoid too much overlapping, in which case the global maximum could be unique and each local maximum could trivially be the global optimum. Finally, the weights α_j are drawn as α_j = U_j / Σ_{i=1}^{N_g} U_i, where the variables U_j are iid chi-squared variables with 1 degree of freedom. The random samples X_{j,t} are drawn only once for a given set of parameters describing a GM.

For each set of these parameters, we compare the obtained radius r_MC with the globally optimal radius r_G by computing their ratio. The evaluation is made on several sets of parameters (α, m, N_d) and, for each one, a box-and-whisker plot of the ratio r_MC/r_G is shown in Figure 1. Simulations show that the smaller the value of α, the greater the value of N_d must be to converge to the global optimum, at the expense of the computational load. Nevertheless, a good choice of the importance sampling parameter m can spare the choice of a too large value of N_d. For well-chosen values of m and N_d, the radius obtained by our algorithm is, in more than 75% of the cases, the global optimum or close to the global optimum (the ratio r_MC/r_G is less than or equal to 1.2). The use of importance sampling enables convergence to the right result with few (N_d = 10²) particles, even when α = 10^{−7}.
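The random drawing of benchmark mixtures described above might be sketched as follows. NumPy's geometric distribution counts trials with mean 1/p, so a mean of 4 corresponds to p = 1/4; this reading of the paper's "geometric law of mean 4" is an assumption:

```python
import numpy as np

def draw_random_gm(rng):
    """Draw a random 1-D Gaussian mixture following the benchmark's
    distributions: N_g = 2 + G with G geometric of mean 4, means from
    N(0, 10^2), variances C_j = K_j / 3 with K_j ~ chi-square(3), and
    weights alpha_j = U_j / sum(U_i) with U_i iid chi-square(1)."""
    n_g = 2 + rng.geometric(p=0.25)
    means = rng.normal(0.0, 10.0, size=n_g)
    variances = rng.chisquare(3, size=n_g) / 3.0
    u = rng.chisquare(1, size=n_g)
    weights = u / u.sum()
    return weights, means, variances
```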

VI. CONCLUSION

This paper has proposed a position estimator in the form of an algorithm which minimizes its associated confidence ball in the case where the position's probability density function is expressed as a Gaussian mixture in multiple dimensions. The algorithm has been assessed in one dimension, where a comparison against a greedy algorithm is possible. Numerical computations showed that the obtained confidence ball is most of the time the globally optimal one, or is close to the optimum.

Figure 1. Box-and-whisker plots of the ratios of the obtained radius to the globally optimal radius.


VII. APPENDIX

The integral of a multivariate Gaussian pdf over a ball has no closed analytical formula, but it can be efficiently computed thanks to the numerical routines associated to the non-central chi-square and the generalized non-central chi-square cumulative functions. Consider a random variable X ~ N(μ_j, C_j); then we have

Pr(X ∈ B(0, r)) = ∫_{B(0, r)} N(x; μ_j, C_j) dx
= ∫_{B(0, r)} (2π)^{−d/2} |C_j|^{−1/2} exp(−½ (x − μ_j)ᵀ C_j^{−1} (x − μ_j)) dx.

Because the covariance matrix can be diagonalized as C_j = Rᵀ Λ R, we have

= ∫_{B(0, r)} (2π)^{−d/2} |Λ|^{−1/2} exp(−½ (x − μ_j)ᵀ Rᵀ Λ^{−1} R (x − μ_j)) dx
= ∫_{B(0, r)} (2π)^{−d/2} |Λ|^{−1/2} exp(−½ (y − m)ᵀ Λ^{−1} (y − m)) dy,

where we use the substitution y = Rx and m = Rμ_j (the ball B(0, r) is invariant under the rotation R). Hence

Pr(‖Y‖² ≤ r²) = Pr(Σ_{i=1}^{d} Λ_ii Z_i² ≤ r²)    (11)

where Y = RX ~ N(m, Λ) and where the variables Z_i = Y_i/√Λ_ii ~ N(m_i/√Λ_ii, 1) are independent (because their decorrelation implies their independence in the Gaussian case). We recognize in equation (11) the cumulative function of the generalized non-central chi-square law [7]. Numerical routines to efficiently compute its value can be found in [3,4].

In the particular case where C_j = σ_j² I, we have Λ = σ_j² I and

Pr(X ∈ B(0, r)) = Pr(Σ_{i=1}^{d} Z_i² ≤ r²/σ_j²)

which can be evaluated thanks to the cumulative function of the non-central chi-square law [6,8], evaluated at r²/σ_j², with a non-centrality parameter ‖m‖²/σ_j² and d degrees of freedom. Its complement to one can be evaluated with the so-called survival function, known to be more precise in our case [6].
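For the isotropic special case, the mapping onto a standard library's non-central chi-square routines (e.g. SciPy, as suggested by [6]) might look like this sketch, checked here against a Monte-Carlo draw:

```python
import numpy as np
from scipy.stats import ncx2

def prob_in_ball_isotropic(mu, sigma2, r):
    """Pr(X in B(0, r)) for X ~ N(mu, sigma2 * I): the scaled squared
    norm ||X||^2 / sigma2 follows a non-central chi-square law with d
    degrees of freedom and non-centrality ||mu||^2 / sigma2, so the
    probability is its cdf evaluated at r^2 / sigma2."""
    mu = np.asarray(mu, float)
    d = mu.size
    return ncx2.cdf(r**2 / sigma2, df=d, nc=mu @ mu / sigma2)
```

For tail probabilities one would use `ncx2.sf` (the survival function) instead of one minus the cdf, per the precision remark above.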

REFERENCES

[1] RTCA Minimum Operational Performance Standards for Global Positioning System/Wide Area Augmentation System Airborne Equipment. 1828 L Street, NW Suite 805, Washington, D.C. 20036 USA.

[2] Pervan, Boris S., Pullen, Samuel P., Christie, Jock R., "A multiple hypothesis approach to satellite navigation integrity", Navigation, Vol. 45, No. 1, Spring 1998, pp. 61-84.

[3] Robert B. Davies, "Numerical inversion of a characteristic function," Biometrika, vol. 60, pp. 415–417, 1973.

[4] Robert B. Davies, "Algorithm AS 155: The distribution of a linear combination of χ² random variables," Journal of the Royal Statistical Society, Series C (Applied Statistics), January 1980.

[5] Wikipedia “Spherical Measure” http://en.wikipedia.org/wiki/Spherical_measure

[6] Scipy http://www.scipy.org/

[7] Wikipedia “Generalized Non Central Chi Square Law” http://en.wikipedia.org/wiki/Generalized_chi-squared_distribution

[8] Matlab http://www.mathworks.com

[9] Pesonen, Henri, "A Framework for Bayesian Receiver Autonomous Integrity Monitoring in Urban Navigation", NAVIGATION, Journal of The Institute of Navigation, Vol. 58, No. 3, Fall 2011, pp. 229-240.


Evaluating Robustness and Accuracy of the Ultra-wideband Technology-based Localization Platform under NLOS Conditions

Piotr Karbownik, Grzegorz Krukar, Andreas Eidloth, Norbert Franke, and Thomas von der Grun

Locating and Communication Systems Department
Fraunhofer Institute for Integrated Circuits
Nuremberg, Germany
Email: [email protected]

Abstract—In this paper, we present measurement results of the experimental validation of the UWB localization platform under Line-Of-Sight (LOS) and Non-Line-Of-Sight (NLOS) conditions. The platform is based on the Time Difference of Arrival (TDoA) technique and the energy detection receiver. TDoA-based localization systems require a minimum of four anchors to provide 3D position data. In order to deal with the NLOS scenario, our platform uses eight receiving anchors. Additionally, we have developed positioning algorithms customized to improve the robustness and accuracy of the system.

I. INTRODUCTION

Indoor environments pose challenges for localization systems, mainly because of multipath effects and the presence of stationary and moving objects shadowing the Line-Of-Sight (LOS) between anchors and a tracked item. Ultra-wideband (UWB) technology might be a possible solution for indoor environments due to its specific properties [1].

In this paper, we present the architecture of the enhanced UWB localization platform as well as the results of experimental validation under LOS and Non-Line-Of-Sight (NLOS) conditions. Compared to the work presented in our last paper from the 2012 edition of the IPIN conference [2], where a scenario with LOS conditions and four receiver anchors was considered, the current version of the platform is evaluated under both LOS and NLOS conditions in 2D and 3D. Through the addition of four anchors, we obtained an enhanced system with redundancy that can deal with static and moving obstacles. However, due to the low update rate of the platform, the focus was laid on static obstacles. Moreover, in order to increase the overall system robustness and accuracy, a new algorithm for position calculation was used: instead of a basic, algebraic solution (AS) [3], an algorithm based on Bayesian filtering techniques [4] was implemented. Due to the limited number of input channels of a LeCroy SDA 816Zi real-time oscilloscope, the UWB platform consists of two synchronized oscilloscopes. Together they provide eight inputs with an analog bandwidth of up to 16 GHz.

II. LOCALIZATION PLATFORM ARCHITECTURE

A Picosecond 3500D impulse generator, providing a pulse with a full width at half maximum of 65 ps and an amplitude of 8 V, was used as a transmitter. A UWB omnidirectional antenna operating in the 2-11 GHz band played the role of the transmit antenna. Compared to the previous IPIN 2012 conference paper [2], the receiver anchor architecture remains unchanged. In order to obtain an eight-channel receiver, two LeCroy SDA 816Zi and SDA 820Zi real-time oscilloscopes (16 GHz and 20 GHz analog bandwidth, respectively) were connected and synchronized. The LeCroy SDA 816Zi oscilloscope, operating as the master, allowed for real-time processing of all eight input signals. The graphical user interface, developed in Visual Basic and running on the master oscilloscope, executed the energy detection receiver algorithms with a sampling rate of 2 GS/s and the position calculation functions [6]. In order to enable localization without synchronization between a transmitter and receivers, an algorithm based on the Time Difference of Arrival method was implemented [3]. With time of arrival values obtained from multiple anchors and processed by the Extended Kalman Filter (EKF), positioning accuracy in the centimeter range was achieved. The anchors were placed around the measurement site at different heights. To assess the impact of the anchors' spatial distribution on the localization accuracy, a geometric dilution of precision (GDOP) analysis was performed [7]. Height and GDOP can be read from Fig. 1.

III. MEASUREMENTS AND RESULTS

As a measurement site, a test room at the Fraunhofer Institute IIS in Nuremberg was chosen. Measurements were taken under LOS and NLOS conditions within a 4 m x 9 m area. The true position of the localized UWB transmitter (defined as the phase center of the transmit antenna) was determined with the iGPS laser-based positioning and tracking system, which has a typical accuracy of 200 µm [5]. For each measurement position, 20 samples were taken. The integration window size and the number of acquisitions were equal to 0.5 ns and 20, respectively [6], [8]. Fig. 1 shows the results obtained at a height of 106 cm together with the calculated GDOP. For the considered anchor spatial distribution, GDOP influences the localization accuracy only to a small extent, with a degrading factor of up to 2.5 over all the measurement positions.


Fig. 1. The x-y position results with a GDOP visualization.

TABLE I
MEAN ABSOLUTE LOCALIZATION ERRORS FOR THE "LOS" SCENARIO.

   Position            AS                EKF
   X       Y       ε2D     ε3D      ε2D     ε3D
  [cm]    [cm]     [cm]    [cm]     [cm]    [cm]
 100.0   100.0     13.6    29.7     6.18    16.4
 105.0   203.0      9.7    17.9      2.7     6.8
 100.0   298.0      4.3    10.1      3.6     5.9
 198.0   101.0      5.6    11.6      2.2     7.1
 197.0   202.0     28.5    62.6     12.7    34.3
 200.0   302.0      3.9     6.6      4.9     8.3
 298.0   103.0      1.3     2.5      0.9     3.5
 300.0   195.0      7.5    19.5      3.0    12.0
 300.0   300.0      2.4     5.7      2.7     5.4
 396.0   104.0      1.3     1.5      1.8     2.3
 399.0   202.0      1.4     1.7      1.8     3.0
 402.0   300.0      1.8     4.8      3.4     6.6
 499.0   102.0      0.5     1.6      0.7     1.4
 495.0   203.0      0.7     1.0      1.6     2.6
 500.0   299.0      3.1     5.7      1.2     2.7
 601.0   103.0      0.7     1.9      0.4     2.2
 599.0   202.0      0.9     2.0      0.7     2.3
 602.0   303.0      3.6     8.3      1.9     5.4
 698.0   101.0     10.3    19.2      5.5    23.1
 699.0   201.0      1.0     2.4      0.4     2.2
 700.0   298.0      4.3     7.6      1.8     5.2
 797.0   102.0      0.6     6.7      1.5     1.5
 796.0   201.0     17.7    37.5      6.6     8.1
 800.0   303.0      9.1    12.0      6.1     7.2

 MEAN:              5.6    11.7      3.1     7.3

A. Localization in a LOS Environment

The 24 static measurement positions were distributed over the measurement site on an approximately 1 m grid at a height of 106 cm (see Fig. 1 and Table I). During the whole measurement procedure, LOS conditions between the transmit antenna and the receive antennas were maintained. Table I shows the accuracy of the localization platform given by the mean absolute error in 2D, ε2D, and 3D, ε3D, for the AS and the EKF. The average ε3D is lower than 12 cm for the AS and lower than 8 cm for the EKF.

B. Localization in a NLOS Environment

In this scenario, one measurement position in the center of the room at a height of 106 cm was chosen (see Table II). The

TABLE II
MEAN ABSOLUTE LOCALIZATION ERRORS FOR THE "NLOS" SCENARIO.

  Position    Shadowed    εX     εY     εZ     ε2D    ε3D
    [cm]       Anchor    [cm]   [cm]   [cm]   [cm]   [cm]
 X = 495.0       a        4.0    1.5    4.2    4.3    6.0
 Y = 203.0       b        2.2    0.9    9.2    2.4    9.5
                 c        0.2    0.2    0.7    0.3    0.7
                 d        2.0    3.8   24.3    4.3   24.7
                 e        1.1    0.1    3.2    1.1    3.3
                 f        0.4    2.7   11.3    2.7   11.6
                 g        2.4    0.1    6.2    2.4    6.6
                 h        0.2    0.9    2.5    1.0    2.7

NLOS conditions were obtained by placing a person between the transmit and receive antennas. For each shadowed receiver anchor, the position of the transmitter was captured. Table II shows the accuracy of the localization platform given by the mean absolute error for the EKF under NLOS conditions. As expected, ε2D for the NLOS scenario is higher, but still comparable to the error for the LOS scenario. However, ε3D for the shadowed receiver anchors b, d and f is more than three times higher than the ε3D obtained under LOS conditions. This phenomenon might be related to the deteriorated vertical dilution of precision under NLOS conditions towards those specific receiver anchors.

IV. CONCLUSIONS

In this paper, we presented the performance of the UWB localization platform. The obtained results show that it is possible to achieve a localization accuracy, averaged over all measurement positions, better than 8 cm and 25 cm in 3D under LOS and NLOS conditions, respectively. The usage of the EKF allows for an improvement of the positioning accuracy as well as for dealing with NLOS conditions between the transmitter and one of the receiver anchors. Further work includes the analysis of a NLOS scenario with a higher number of shadowed receiver anchors and a hardware implementation of the receiver.

REFERENCES

[1] Z. Irahhauten, H. Nikookar, and M. Klepper, "2D UWB Localization in Indoor Multipath Environment Using a Joint ToA/DoA Technique," Wireless Comm. and Networking Conf. WCNC 2012, pp. 2253-2257, April 2012.

[2] P. Karbownik, G. Krukar, M.M. Pietrzyk, N. Franke, and T. v.d. Gruen, "Experimental Validation of the Ultra-wideband Technology-based Localization Platform," Int. Conf. on Indoor Positioning and Indoor Navigation IPIN 2012, 3 pgs., Nov. 2012.

[3] R. Bucher and D. Misra, "A Synthesizable VHDL Model of the Exact Solution for Three-dimensional Hyperbolic Positioning System," VLSI Design, vol. 15 (2), pp. 507-520, 2002.

[4] J. Wendel, "Das Kalman-Filter" in Integrierte Navigationssysteme: Sensordatenfusion, GPS und Integrierte Navigation. Oldenbourg Wissenschaftsverlag GmbH, Munich, 2007, ch. 6, pp. 129-147.

[5] Nikon website, http://www.nikonmetrology.com, last accessed June 2013.

[6] M.M. Pietrzyk and T. v.d. Gruen, "Ultra-wideband Technology-based Ranging Platform with Real-time Signal Processing," Int. Conf. on Signal Processing and Comm. Systems ICSPCS 2010, 5 pgs., Dec. 2010.

[7] R.B. Langley, "Dilution of Precision," GPS World, vol. 10 (5), pp. 52-59, May 1999.

[8] P. Karbownik, G. Krukar, A. Eidloth, M.M. Pietrzyk, N. Franke, and T. v.d. Gruen, "Ultra-wideband Technology-based Localization Platform with Real-Time Signal Processing," Int. Conf. on Indoor Positioning and Indoor Navigation IPIN 2011, 2 pgs., Sept. 2011.


- chapter 2 -

Positioning Algorithm

Page 61: International Conference on Indoor Positioning and Indoor Navigation

Robust Step Occurrence and Length Estimation Algorithm for Smartphone-Based Pedestrian Dead Reckoning

Wonho Kang†, Seongho Nam‡, Youngnam Han†, and Sookjin Lee§

†Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea
‡Agency for Defense Development (ADD), Daejeon, Korea
§Electronics and Telecommunications Research Institute (ETRI), Daejeon, Korea
e-mail: †wonhoz, [email protected], ‡[email protected], §[email protected]

Abstract—Personal positioning systems are increasingly needed to build location-based services. Pedestrian dead reckoning, a pedestrian positioning technique that uses the accelerometer sensor to recognize step patterns, is an alternative method whose main advantage is that it is infrastructure-independent. However, the variation of walking patterns between individuals makes it difficult for such a system to estimate displacement. This motivated the authors to develop a sensor-based positioning system that applies generally to all individuals.

The experiments begin with a feasibility test of the accelerometer sensor. In this work, a smartphone with an average sampling rate of 20 Hz is used to record the acceleration. The acceleration data are then analyzed to detect step occurrences with peak step occurrence detection, and to estimate the step length using two kinds of dynamic step length estimation methods: a root-based and a log-based scheme. The experimental results show an average error of 2% in step occurrence detection, and standard deviations of 0.0498 m and 0.0320 m for the root-based and log-based step length estimation, respectively.

Keywords—Personal positioning systems, sensor-based positioning systems, pedestrian dead reckoning, smartphone

I. INTRODUCTION

Positioning is a technique used to determine an object's position in a frame of reference. Generally, positioning relies on some infrastructure aid, such as Global Positioning System (GPS) satellites or the Base Transceiver Stations (BTS) of a cell-phone service provider. However, indoor positioning systems still face limitations: the dependence on GPS satellite signals means that this technique cannot be used inside buildings. Positioning with BTS cells, in contrast, works indoors seamlessly, but the accuracy is very low, ranging from about 100 m up to 35 km [1]. These limitations make both approaches impractical for indoor positioning.

Indoor positioning becomes important when users need to know their position in a building, such as firefighters who need to know their position in a building during a rescue effort. An alternative for indoor positioning is pedestrian dead reckoning (PDR). The PDR technique determines the latest position of a pedestrian by adding the estimated displacement to a known starting position. The displacement is represented by a number of steps, and each step has its own step length.

Both the detection of step occurrences and the estimation of the step length can be done using the accelerometer sensor.

Recent smartphones, which come with an integrated accelerometer sensor, give new impetus to using PDR as a pedestrian indoor positioning system. Smartphones have a small physical form and are light in weight, making them easy to carry anywhere. Moreover, using the sensor integrated in a smartphone is less expensive than purchasing specialty hardware, and it is more convenient when deploying the solution to pedestrians. In this work, experimental data are collected with a Samsung Galaxy Note running a simple Android program to record the acceleration.

The structure of this paper is as follows: Section II describes the principle of pedestrian dead reckoning. This is followed by our experimental scenario in Section III and our experimental results in Section IV. Finally, we conclude our work in Section V.

II. PEDESTRIAN DEAD RECKONING

Pedestrian Dead Reckoning (PDR) is a pedestrian positioning solution that adds the distance traveled to a known starting position. The pedestrian distance traveled can be determined by using the accelerometer sensor to detect step occurrences and estimate the displacement. The accelerometer sensor must be attached to the body to record the acceleration. Related work in previous studies has used special sensor modules attached to a helmet [2] or to the foot [3], [4], or a low-cost sensor integrated in a smartphone placed in a trouser pocket [5]-[7].

Basically, the implementation of the PDR technique includes several operations: orientation projection, gravity and noise filtering, step occurrence detection, and step length estimation [5], [6]. However, this work is a subsystem of a complete PDR system and does not include the heading orientation estimation process.

A. Orientation Projection

The accelerometer sensor indicates the 3-axis acceleration relative to the smartphone body frame itself. Therefore, it can be projected from the x-y-z local coordinate system to the


personal coordinate system to obtain the acceleration values in front-side-up coordinates using the pitch, roll and yaw angles of the smartphone. This process is usually used to resolve the arbitrary placement of the smartphone. The rotation matrices for the pitch (θ), roll (ϕ) and yaw (ψ) angles are formed as

R_\theta = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -\cos\theta & \sin\theta \\ 0 & \sin\theta & \cos\theta \end{pmatrix},   (1)

R_\phi = \begin{pmatrix} \cos\phi & 0 & \sin\phi \\ 0 & 1 & 0 \\ -\sin\phi & 0 & \cos\phi \end{pmatrix},   (2)

and

R_\psi = \begin{pmatrix} \cos\psi & \sin\psi & 0 \\ -\sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{pmatrix},   (3)

respectively. We can obtain the rotation matrix that converts the local coordinate system to the personal coordinate system by multiplying the above three rotation matrices as

R = R_\psi R_\theta R_\phi = \begin{pmatrix} c\psi c\phi - s\psi s\theta s\phi & -s\psi c\theta & c\psi s\phi + s\psi s\theta c\phi \\ -s\psi c\phi - c\psi s\theta s\phi & -c\psi c\theta & -s\psi s\phi + c\psi s\theta c\phi \\ -c\theta s\phi & s\theta & c\theta c\phi \end{pmatrix}   (4)

where c and s stand for the cos and sin functions, respectively. The acceleration in the personal coordinate system can be obtained as

a^{person} = R a^{local}.   (5)
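The orientation projection above can be sketched directly; the code below reproduces the rotation matrices exactly as printed in equations (1)-(3) (including their sign conventions) and applies equation (5).

```python
import numpy as np

def rotation_matrix(theta, phi, psi):
    """Rotation from the smartphone body frame to the personal frame,
    composed as R = R_psi @ R_theta @ R_phi per equations (1)-(4)."""
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(phi), np.sin(phi)
    cy, sy = np.cos(psi), np.sin(psi)
    R_theta = np.array([[1, 0, 0], [0, -ct, st], [0, st, ct]])   # pitch, eq. (1)
    R_phi = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])     # roll, eq. (2)
    R_psi = np.array([[cy, sy, 0], [-sy, cy, 0], [0, 0, 1]])     # yaw, eq. (3)
    return R_psi @ R_theta @ R_phi

def project(a_local, theta, phi, psi):
    """Equation (5): a_person = R a_local."""
    return rotation_matrix(theta, phi, psi) @ np.asarray(a_local, dtype=float)
```

Multiplying the three matrices out reproduces the closed-form matrix of equation (4) term by term, which is a convenient sanity check for an implementation.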

B. Gravity and Noise Filtering

The acceleration signal must be filtered to obtain the desired output: a gravity-free and noise-free signal. Gravity is a low-frequency signal component that causes an offset shift on the vertical axis of about 9.8 m/s². To eliminate the influence of gravity, the signal is filtered with a high-pass filter similar to [6], implemented with equations (6) and (7):

g = \alpha g + (1 - \alpha) a_z^{person}   (6)

a^{step} = a_z^{person} - g   (7)

The low-frequency signal component, represented by the mean of the waveform, is subtracted to remove the gravity component. The output of the high-pass filter is then processed by a low-pass filter to smooth the signal and reduce random noise. The low-pass filter is realized as a moving average filter, as in equation (8):

a_{out}(u) = \frac{1}{W} \sum_{v=-\frac{W-1}{2}}^{\frac{W-1}{2}} a_{in}(u+v)   (8)

where a_out and a_in are the average-filtered output and the unfiltered input acceleration signals, and W is the moving window, i.e. the number of points used in the moving average. The result of this filtering process is a signal that is free from gravity and has minimal random noise, as shown in Fig. 1 for different window sizes. The unfiltered raw signal is shown as a green line, and the magnitude of the filtered signals decreases as the window size increases. In this paper, the value of W is taken as 5, obtained empirically through signal analysis. The output of the filtering process can then be processed further to obtain information about step occurrences.

Fig. 1. Low-pass filtered acceleration for various window sizes (W = 3, 5, 7, 9), together with the raw acceleration.
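The filtering chain of equations (6)-(8) can be sketched as follows. This is a minimal sketch: the smoothing constant alpha and the edge-padding strategy are assumptions, since the paper does not state them.

```python
import numpy as np

def remove_gravity(a_z, alpha=0.9):
    """Equations (6)-(7): track gravity with a recursive low-pass
    estimate g, then subtract it from the vertical acceleration.
    alpha is an assumed smoothing constant."""
    a_z = np.asarray(a_z, dtype=float)
    g = a_z[0]                      # assume the first sample is near gravity
    a_step = np.empty_like(a_z)
    for i, a in enumerate(a_z):
        g = alpha * g + (1 - alpha) * a   # (6)
        a_step[i] = a - g                 # (7)
    return a_step

def moving_average(a_in, W=5):
    """Equation (8): centered moving average over an odd window W,
    with edge padding so the output length matches the input."""
    h = (W - 1) // 2
    padded = np.pad(np.asarray(a_in, dtype=float), h, mode="edge")
    kernel = np.ones(W) / W
    return np.convolve(padded, kernel, mode="valid")
```

With W = 5, as used in the paper, each output sample averages the two preceding samples, the current sample, and the two following samples.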

C. Step Occurrence Detection

The pedestrian distance traveled is represented by the number of steps. Therefore, it is necessary to accurately detect step occurrences in order to obtain a better estimate. There are two common step occurrence detection methods that can be used to analyze the acceleration signal: peak step occurrence detection [4]-[6] and zero-crossing step occurrence detection [2], [7], [8].

The zero-crossing step occurrence detection method counts the signal crossings of the zero level to determine the occurrence of a step. Researchers usually use a time interval threshold to reject false step occurrence detections. This method is not appropriate for detecting user steps in a general approach, because it requires a certain time interval threshold to decide whether a zero-crossing represents a valid step occurrence or not. The problem arises when the time interval between footfalls varies between subjects, so it is quite difficult to detect step events accurately using the zero-crossing method without a calibration process.

The other method is to detect the peaks of the acceleration. According to [4], the peaks of the magnitude of acceleration correspond to step occurrences because the magnitude of acceleration remains the same whether the smartphone is tilted or not. In this paper, we also use the peak step occurrence detection method. However, we use the vertical acceleration instead of the magnitude of acceleration to resolve the tilting problem, because the vertical acceleration is generated by the vertical impact when the foot hits the ground. To detect a step occurrence, we employ a detection scheme with four kinds of thresholds. This scheme detects a step occurrence when the acceleration meets the peak, frontside and backside thresholds,


Fig. 2. Peak step occurrence detection on low-pass filtered gravity-free vertical acceleration, annotated with the positive and negative peaks, the peak, frontside and backside thresholds, and the increment and decrement trends.

and shows an increment and a decrement trend on the frontside and backside, respectively, within a certain interval. The frontside threshold is based on the difference between the current peak and the previous valley, and the backside threshold on the difference between the current peak and the next valley. In this paper, the thresholds are constant values determined experimentally for all test subjects.

Fig. 2 illustrates three valid steps taken from the walking pattern of a test subject. Red dots represent valid peaks, i.e. peak accelerations exceeding the peak threshold. The peak acceleration is shown as a blue dashed line, while the valley acceleration is shown as a red dashed line with the valley points marked as red dots. The difference between the peak and valley accelerations is used for the frontside and backside thresholds. A step occurrence is detected when a valid peak meets the above thresholds and shows an increment and a decrement trend on the frontside and backside, in sequence, within a certain interval.
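The four-threshold scheme described above can be sketched as follows. All threshold values and the valley search window in this sketch are illustrative assumptions, not the paper's experimentally tuned constants.

```python
import numpy as np

def detect_steps(a, peak_th=2.0, front_th=3.0, back_th=3.0, trend=2):
    """Sketch of four-threshold peak step detection on a filtered
    gravity-free vertical acceleration signal `a`. A sample i counts
    as a step if it is a local maximum above peak_th, rises by at
    least front_th from the previous valley, falls by at least
    back_th to the next valley, and shows `trend` strictly increasing
    samples before and strictly decreasing samples after the peak."""
    steps = []
    n = len(a)
    for i in range(trend, n - trend):
        if a[i] < peak_th or a[i] < a[i - 1] or a[i] <= a[i + 1]:
            continue                               # not a valid local peak
        prev_valley = min(a[max(0, i - 10):i])     # valley search window (assumed)
        next_valley = min(a[i + 1:min(n, i + 11)])
        if a[i] - prev_valley < front_th or a[i] - next_valley < back_th:
            continue                               # frontside/backside thresholds
        rising = all(a[i - k - 1] < a[i - k] for k in range(trend))
        falling = all(a[i + k] > a[i + k + 1] for k in range(trend))
        if rising and falling:                     # trend thresholds
            steps.append(i)
    return steps
```

On a clean signal each detected index marks one footfall, so the traveled distance is the sum of the per-step lengths estimated at those indices.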

D. Step Length Estimation

The total traveled distance can be calculated by estimating the step length for every valid detected step occurrence. Generally, there are two methods for estimating step length: a static method and a dynamic method. The static method assumes that every valid step has the same length, as given in equation (9):

l_k = l, \forall k   (9)

where the constant l is normally in the range of 0.6 to 0.85 m.

In contrast, the dynamic method assumes that every valid step has a different length, which can be estimated using the approach proposed in [9]. It assumes that the vertical bounce, which occurs as an impact of the walking activity, is proportional to the step length. The vertical bounce is calculated using the peak-to-peak difference at each step occurrence, as in equation

Fig. 3. Root-based step length estimation on 0.4 to 1 m predefined distance intervals (reference, mean and median of the estimated step length).

Fig. 4. Log-based step length estimation on 0.4 to 1 m predefined distance intervals (reference, mean and median of the estimated step length).

(10):

l_k = \beta \sqrt[4]{a^{step}_{max} - a^{step}_{min}}   (10)

However, this approach was derived for waist-mounted pedestrian dead reckoning. In this paper, the smartphone is held in the hand, so equation (10) cannot be applied directly. Taking the position of the smartphone into consideration, equation (10) should be modified into equation (11), where γ is an offset:

l_k = \beta \sqrt[4]{a^{step}_{max} - a^{step}_{min}} + \gamma   (11)

Moreover, the log-based step length estimation is a little more accurate than the root-based one, since the range of the log function is much wider than that of the root function. For this reason, equation (12) is used for step length estimation in this paper:

l_k = \beta \log\left[a^{step}_{max} - a^{step}_{min}\right] + \gamma   (12)
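Equations (11) and (12) reduce to two one-line estimators. In the sketch below, the values of beta and gamma are illustrative placeholders; the paper determines them empirically for the hand-held case.

```python
import numpy as np

def step_length_root(a_max, a_min, beta=0.5, gamma=0.1):
    """Equation (11): root-based step length from the per-step
    vertical bounce (peak-to-peak acceleration difference).
    beta and gamma are assumed, illustrative values."""
    return beta * (a_max - a_min) ** 0.25 + gamma

def step_length_log(a_max, a_min, beta=0.35, gamma=0.2):
    """Equation (12): log-based step length from the same bounce.
    beta and gamma are assumed, illustrative values."""
    return beta * np.log(a_max - a_min) + gamma
```

Both estimators grow monotonically with the vertical bounce; the log-based variant spreads a given bounce range over a wider output range, which is the intuition given above for its slightly better accuracy.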


Fig. 5. Flow charts of (a) the overall system (orientation projection, low-pass filtering for noise elimination, high-pass filtering for gravity elimination, step occurrence detection, and step length estimation) and (b) the step occurrence detection, where a candidate step must pass the peak point, frontside, backside and trend thresholds in sequence.

Fig. 3 depicts the root-based step length estimation and Fig. 4 the log-based step length estimation, where black dots represent the estimated step length on the y-axis with respect to the reference step length on the x-axis. These results are from an experiment in which the user walks predefined distance intervals of 0.4 to 1 m. The results of the two methods appear almost the same in the figures, but the log-based step length estimation is a little more accurate than the root-based one, as will be shown in Section IV.

III. EXPERIMENTAL SCENARIO

In order to evaluate the reliability of the system in detecting displacement under varying walking patterns without calibration, actual walking tests were carried out. The experiments were done in the 7th floor hallway of the Information Technology Convergence building at the Korea Advanced Institute of Science and Technology, Korea. We used the accelerometer sensor integrated in a Samsung Galaxy Note running the Android Ice Cream Sandwich operating system. The acceleration values were then processed in Matlab following the procedure shown in the flowchart in Fig. 5, where Fig. 5(a) depicts the flowchart of the overall system and Fig. 5(b) that of the step occurrence detection. In the experiments, the smartphone was held in the hand, as shown in Fig. 6, and it was assumed that there were no obstacles in front of the subject.

Fig. 6. Experimental scenario: the smartphone is held in the hand, as in normal phone use, while the user walks along a straight path.

IV. EXPERIMENTAL RESULT

A. Eligibility of Smartphone Sensor

The eligibility of a sensor is judged by its sampling frequency, since the sampling frequency shows how fast the sensor samples the data. From several tests, our accelerometer has a sampling rate of 20 Hz. This indicates that the sensor is sufficient to detect step occurrences, since the normal walking frequency of about 1.5 Hz is much lower than the sampling frequency.

B. Step Occurrence Detection

In order to detect valid step occurrences, we implement the four kinds of thresholds explained in Section II-C. This scheme is applied to all test subjects without an individual calibration process to fit their walking patterns. To compare the errors over different distances, we use the percentage error, calculated from the difference between the actual steps counted and the steps detected. The detected step occurrences are shown in Fig. 7, for instance, for a user asked to walk 16 steps. The step occurrence detection error over the experimental results of 20 test subjects was about 2%. This shows that the method is quite reliable in detecting steps without performing an individual calibration process.

C. Step Length Estimation

As explained in Section II-D, the step length can be calculated by the root-based and log-based step length estimation


Fig. 7. Peak step occurrence detection on low-pass filtered gravity-free acceleration.

Fig. 8. Comparison of root-based and log-based step length estimation on 0.4 to 1 m predefined distance intervals.

Fig. 9. Comparison of root-based and log-based step length estimation on 0.7 m predefined distance intervals.

processes for every valid detected step occurrence. A comparison of the estimated step lengths of the two methods is shown in Fig. 8 and Fig. 9. Fig. 8 depicts the experimental result for a user walking predefined distance intervals of 0.4 to 1 m, and Fig. 9 the case of a user asked to walk predefined distance intervals of 0.7 m. The log-based scheme estimates the step length better than the root-based one, as indicated by the smaller standard deviation: 0.0320 m for the log-based method against 0.0498 m for the root-based one.

V. CONCLUSION

This paper presents a positioning system that can be used generally, without an individual calibration process. The system focuses on displacement estimation by utilizing the accelerometer sensor integrated in a smartphone held in the hand.

Step occurrence detection over various walking patterns without a calibration process results in an average error of about 2%. This result shows that step occurrence detection using the four-threshold peak detection scheme is quite reliable for detecting steps in a general approach. When a step event is detected, the step length has to be determined to estimate the displacement. In this work, step length estimation is performed using the root-based and log-based dynamic methods. The log-based dynamic method gives a better estimate than the root-based one, as confirmed by the experiments showing a smaller standard deviation of the estimated step length.

REFERENCES

[1] N. Deblauwe, GSM-based positioning: techniques and applications. Asp/Vubpress/Upa, 2008.

[2] S. Beauregard and H. Haas, "Pedestrian dead reckoning: A basis for personal positioning," in Proceedings of the 3rd Workshop on Positioning, Navigation and Communication (WPNC'06), 2006, pp. 27-35.

[3] A. Jimenez, F. Seco, C. Prieto, and J. Guevara, "A comparison of pedestrian dead-reckoning algorithms using a low-cost MEMS IMU," in Intelligent Signal Processing, 2009. WISP 2009. IEEE International Symposium on. IEEE, 2009, pp. 37-42.

[4] J. W. Kim, H. J. Jang, D.-H. Hwang, and C. Park, "A step, stride and heading determination for the pedestrian navigation system," Journal of Global Positioning Systems, vol. 3, no. 1-2, pp. 273-279, 2004.

[5] Y. Jin, H.-S. Toh, W.-S. Soh, and W.-C. Wong, "A robust dead-reckoning pedestrian tracking system with low cost sensors," in Pervasive Computing and Communications (PerCom), 2011 IEEE International Conference on. IEEE, 2011, pp. 222-230.

[6] I. Bylemans, M. Weyn, and M. Klepal, "Mobile phone-based displacement estimation for opportunistic localisation systems," in Mobile Ubiquitous Computing, Systems, Services and Technologies, 2009. UBICOMM'09. Third International Conference on. IEEE, 2009, pp. 113-118.

[7] S. Ayub, X. Zhou, S. Honary, A. Bahraminasab, and B. Honary, "Indoor pedestrian displacement estimation using smart phone inertial sensors," International Journal of Innovative Computing and Applications, vol. 4, no. 1, pp. 35-42, 2012.

[8] S. Shin, C. Park, J. Kim, H. Hong, and J. Lee, "Adaptive step length estimation algorithm using low-cost MEMS inertial sensors," in Sensors Applications Symposium, 2007. SAS'07. IEEE. IEEE, 2007, pp. 1-5.

[9] H. Weinberg, "Using the ADXL202 in pedometer and personal navigation applications," Analog Devices AN-602 application note, 2002.


Context Aware Adaptive Indoor Localization using Particle Filter

Yubin Zhao, Yuan Yang, Marcel Kyas
Computer Systems and Telematics, Institute of Computer Science, Freie Universität Berlin
Email: [email protected], [email protected], [email protected]

Abstract—Range-based wireless positioning systems suffer from high noise in indoor environments, and positioning algorithms that use a building map and non-line-of-sight (NLOS) information to obtain the position are complicated. We propose a low-complexity, context-aware adaptive particle filtering scheme to improve the tracking performance of indoor positioning systems. It combines three methods: mobility behavior prediction, constraint sampling, and weight adaptation. (1) With mobility behavior prediction, we divide the building layout into several regions and predict which region the target belongs to in the next interval by applying a linear transition prediction function. (2) Constraint sampling: to obtain effective particle samples, we introduce a constraint sampling method. The constraint conditions are constructed from the measurement constraints and the layout region obtained in step (1). The measurement constraints are set up through the min-max algorithm, which is robust to ranging noise. The particles are then sampled uniformly within the constraint conditions. (3) Finally, to obtain an accurate estimate, a low-complexity weight adaptation method is designed to reduce the impact of measurement noise. Experimental results demonstrate that our context-aware adaptation scheme achieves accurate estimation performance at low computational complexity.

Index Terms—indoor localization, particle filter, weight adaptation, context aware.

I. INTRODUCTION

Recently, there has been growing interest in indoor localization techniques that rely on in-building communications infrastructure. Wireless systems determine the location of a mobile target from measurements taken on the transmitted signals by the nodes in the wireless network, e.g. received signal strength (RSS), time of arrival (TOA) or angle of arrival (AOA). A major challenge for indoor location algorithms is robustness to the highly dynamic and unpredictable in-building wireless environment. The particle filter is one effective solution that is feasible and adaptable for implementation in non-linear and non-Gaussian environments [1], [2]. It can achieve highly accurate estimates from unreliable measurements. Integrated with building information, such as map matching, it can avoid implausible motion tracking estimates such as walking through a wall or jumping out of the building [3], and also reduce the estimation error [4].

Using building information still has some drawbacks in real systems. First, for methods that use building information, such as map matching, a large database has to be built and the model is quite complicated. Secondly, some constraint methods require prior knowledge of the target's movement, which is not feasible in real scenarios. For instance, particle

elimination methods that know the target moves along the hallway will eliminate particles that are not in the hallway. However, in the real world, targets can move anywhere they want; if the prior information is wrong, the tracking path is confined to the constraint region. Finally, even if the prior information is correct, the measurement noise still influences the estimation significantly.

We propose a low-complexity particle filter scheme that integrates the target's motion context and building information. First, we predict the mobile target's motion behavior and divide the indoor building into several possible regions. The region to which the target belongs is predicted based on the linear prediction equation of the particle filter. From the detected region, joint constraint conditions are constructed based on the region information and the measurement information, and the particle samples are generated within these constraint conditions. Finally, the estimate is obtained using our weight adaptation method. Our method is robust to measurement noise and inaccurate layout constraint conditions, and it can achieve highly accurate estimates. Since less building information is required, the computational complexity is low.

II. SYSTEM MODEL

In range-based wireless positioning systems, the mobile device with unknown position, such as a mobile sensor node, smartphone or robot, is called the target. The wireless devices with known positions that measure the ranges (or distances) to the target are called anchors. In our system, the range measurement is based on time-of-flight (TOF) ranging [5]. The measurement for each anchor is formulated as:

z^j_t = \sqrt{(X_t - p^j_x)^2 + (Y_t - p^j_y)^2} + n^j_t   (1)

where z^j_t denotes the measurement for the jth anchor, x_t = [X_t, Y_t]^T are the target's coordinates, [p^j_x, p^j_y]^T denotes the anchor's position, and n^j_t ~ N(µ^j_t, R^j_t) is the measurement noise.

III. MOTION DETECTION USING BUILDING LAYOUT

A building consists of rooms and hallways, and the target shows different motion behaviors in rooms and hallways. We therefore divide the building layout into several regions according to the motion behavior in the building. We record the coordinates of each region as a constraint condition: if the target is predicted to move in a region, one of the constraint conditions is given by the coordinates of this region. No additional information


Fig. 1. Region partition. Type I: room or corridor without a crossing; Type II: corridor at a crossing.

is recorded in our system, such as non-line-of-sight (NLOS) conditions; thus the complexity is quite low.

It is easy to define a room as a single region. However, the movement of the target in a hallway can differ, so we divide hallways into two types of region, as shown in Fig. 1. The first is the region at a crossing. In this region, the target can turn right or left, and can also move forward or backward. The constraint condition is therefore less reliable and should not restrict the target estimate. The second type is a corridor without corners or crossings. Here the target can only move forward or backward, with no other options. In this case, the constraint conditions for this region are reliable, which helps us adapt the particle weights.

We use a linear prediction function to predict the target's movement and select the region according to the predicted position:

x_t = F_t x_{t-1} + q_t   (2)

where x_t = [X_t, Y_t]^T is the target's movement state; F_t is the linear transition matrix; x_{t-1} is the previous state; and q_t is the prediction noise, q_t \sim N(0, Q_t). The region is chosen based on:

X_min^k ≤ X_t ≤ X_max^k
Y_min^k ≤ Y_t ≤ Y_max^k   (3)

where [X_min^k, X_max^k, Y_min^k, Y_max^k]^T denotes the bounding coordinates of region k. The constraint conditions for particle sampling are also based on (3).
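As a sketch of (2)–(3), the prediction and region lookup can be implemented as follows. The region list, transition matrix, and coordinates below are hypothetical illustrations, not values from the paper's experiment:

```python
import numpy as np

def predict_state(x_prev, F):
    """Linear motion prediction, eq. (2) without the noise term."""
    return F @ x_prev

def select_region(x_pred, regions):
    """Return the index of the region whose bounding box contains the
    predicted position, eq. (3); None if no region matches."""
    X, Y = x_pred
    for k, (x_min, x_max, y_min, y_max) in enumerate(regions):
        if x_min <= X <= x_max and y_min <= Y <= y_max:
            return k
    return None

# Hypothetical layout: region 0 is a room, region 1 a corridor segment.
regions = [(0.0, 5.0, 0.0, 4.0), (5.0, 30.0, 0.0, 2.0)]
F = np.eye(2)  # identity transition, purely for illustration
x_pred = predict_state(np.array([12.0, 1.0]), F)
print(select_region(x_pred, regions))  # -> 1
```

The selected region's bounds then serve as the first constraint on where particles may be sampled.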

IV. CONSTRAINT SAMPLING

The region constraint is the first constraint condition for particle sampling. However, if the motion prediction is inaccurate, the region constraint will lead to a wrong estimate. We therefore propose a second constraint condition: the measurement constraint. The min-max algorithm is robust and simple: it draws a rectangular area according to the range measurements, like a box, as shown in Fig. 2, and its estimation error does not grow when the measurement error is high. We use it

Fig. 2. The constraint conditions drawn by the min-max algorithm.

to draw a second constraint region:

s_{X,t}^min = max_{j=1,...,N} (p_X^j − z_t^j)
s_{X,t}^max = min_{j=1,...,N} (p_X^j + z_t^j)
s_{Y,t}^min = max_{j=1,...,N} (p_Y^j − z_t^j)
s_{Y,t}^max = min_{j=1,...,N} (p_Y^j + z_t^j)   (4)

where (p_X^j, p_Y^j)^T denotes the j-th anchor's position and z_t^j is the range measurement for the j-th anchor. We then combine the two conditions into an integrated constraint, which is also based on the min-max algorithm:

max(X_min^k, s_{X,t}^min) ≤ X_t ≤ min(X_max^k, s_{X,t}^max)
max(Y_min^k, s_{Y,t}^min) ≤ Y_t ≤ min(Y_max^k, s_{Y,t}^max)   (5)

According to the maximum-entropy principle, the particles are uniformly sampled within (5).
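A minimal sketch of the constrained sampling in (4)–(5), assuming hypothetical anchor positions, range values, and region bounds:

```python
import numpy as np

def minmax_box(anchors, ranges):
    """Measurement-constraint box from the min-max algorithm, eq. (4).
    anchors: (N, 2) array of anchor positions; ranges: (N,) measurements."""
    lo = np.max(anchors - ranges[:, None], axis=0)  # s_min per axis
    hi = np.min(anchors + ranges[:, None], axis=0)  # s_max per axis
    return lo, hi

def sample_particles(region, anchors, ranges, n_particles, rng):
    """Uniformly sample particles inside the intersection of the region
    box and the min-max box, eq. (5)."""
    rx_min, rx_max, ry_min, ry_max = region
    s_lo, s_hi = minmax_box(anchors, ranges)
    lo = np.maximum([rx_min, ry_min], s_lo)
    hi = np.minimum([rx_max, ry_max], s_hi)
    return rng.uniform(lo, hi, size=(n_particles, 2))

# Hypothetical anchors and noisy ranges to a target near (10, 1).
anchors = np.array([[8.0, 0.0], [12.0, 0.0], [10.0, 2.0]])
ranges = np.array([2.4, 2.3, 1.2])
rng = np.random.default_rng(0)
particles = sample_particles((5.0, 30.0, 0.0, 2.0), anchors, ranges, 100, rng)
```

For this example, the intersected box is roughly [9.7, 10.4] x [0.8, 2.0], so even inaccurate region predictions cannot pull the particle cloud far from the measurement evidence.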

V. WEIGHT ADAPTATION

A. Predicted Measurement

To further reduce the measurement noise, we propose a weight adaptation method. First, we make a measurement prediction. Let \bar{x}_t denote the predicted value of x_t according to (2):

\bar{x}_t = F_t \hat{x}_{t-1}   (6)

where \hat{x}_{t-1} is the estimate at the previous time t−1. Accounting for the prediction noise q_t, we write x_t as:

x_t = \bar{x}_t + q_t   (7)

where q_t is assumed to be additive noise following a normal distribution, q_t \sim N(0, Q_t), with covariance Q_t at time t. We then obtain a predicted measurement for the sensors:

\bar{z}_t = h_t(x_t) = h_t(\bar{x}_t + q_t)   (8)


B. Belief Factor θ

The belief factor θ is the tuning parameter for the predicted measurement; it is used to adapt the measurement z_t towards the actual measurement h_t(x_t). The adaptive likelihood function is constructed as:

p_{AL}(z_t | x_t^i) = \pi_v(\theta \bar{z}_t + (1 − \theta) z_t − z_t^i)   (9)

where pAL() indicates the adaptive likelihood.

C. Optimal θ

We then formulate the adaptation method as a convex optimization, which minimizes the distance between our adapted measurement and the actual measurement. The objective function is constructed as follows:

\theta = \arg\min_\theta \|h_t(x_t) − [\theta \bar{z}_t + (1 − \theta) z_t]\|^2   (10)

which turns out to be a least-squares approximation problem. Since \bar{z}_t is a nonlinear function of the prediction noise q_t according to (8), it is difficult to obtain an analytical optimum. We therefore use a first-order Taylor series expansion at \bar{x}_t to linearize (8):

\bar{z}_t \approx h_t(\bar{x}_t) + \frac{\partial h_t(\bar{x}_t)}{\partial x_t} q_t   (11)

where \partial h_t(\bar{x}_t)/\partial x_t is the partial derivative of h_t with respect to x_t evaluated at \bar{x}_t. Substituting (11) and z_t = h_t(x_t) + v_t into (10), we obtain:

\|h_t(x_t) − [\theta \bar{z}_t + (1 − \theta) z_t]\|^2 \approx \|\theta \frac{\partial h_t(\bar{x}_t)}{\partial x_t} q_t + (1 − \theta) v_t\|^2   (12)

Therefore, the problem is converted into a linear optimization problem, which is solvable analytically by expressing the objective as a convex quadratic function:

F_t(\theta) = \theta \frac{\partial h_t(\bar{x}_t)}{\partial x_t} Q_t \left[\frac{\partial h_t(\bar{x}_t)}{\partial x_t}\right]^T \theta^T + (1 − \theta) R_t (1 − \theta)^T   (13)

where Q_t and R_t are the covariances of q_t and v_t. The optimal \theta is then obtained if and only if:

\frac{\partial F_t(\theta)}{\partial \theta} = 2\theta \frac{\partial h_t(\bar{x}_t)}{\partial x_t} Q_t \left[\frac{\partial h_t(\bar{x}_t)}{\partial x_t}\right]^T − 2 R_t + 2\theta R_t = 0   (14)

Then, the unique θ is derived:

\theta = \frac{R_t}{\frac{\partial h_t(\bar{x}_t)}{\partial x_t} Q_t \left[\frac{\partial h_t(\bar{x}_t)}{\partial x_t}\right]^T + R_t}   (15)
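Equation (15) can be evaluated directly. The sketch below assumes a single scalar range measurement, so R_t is a scalar and the Jacobian of the range function (1) is a 1x2 row vector; all numeric values are illustrative:

```python
import numpy as np

def range_jacobian(x_pred, anchor):
    """Row Jacobian of the range function (1) at the predicted state:
    the unit vector from the anchor to the predicted position."""
    d = x_pred - anchor
    return d / np.linalg.norm(d)

def optimal_belief_factor(H, Q, R):
    """Optimal belief factor from eq. (15): theta = R / (H Q H^T + R)."""
    return R / (H @ Q @ H.T + R)

x_pred = np.array([10.0, 1.0])   # predicted state \bar{x}_t
anchor = np.array([8.0, 0.0])    # hypothetical anchor position
H = range_jacobian(x_pred, anchor)
Q = 0.25 * np.eye(2)             # assumed prediction-noise covariance Q_t
R = 1.0                          # assumed measurement-noise variance R_t
theta = optimal_belief_factor(H, Q, R)
```

Note the limiting behavior: when R_t dominates (noisy ranging), θ approaches 1 and the adapted measurement leans on the prediction; when Q_t dominates (uncertain motion), θ approaches 0 and the actual measurement is trusted.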

VI. EXPERIMENT AND RESULTS

We employ a reference system for indoor localization test-beds to evaluate the proposed algorithms. In this system, we deployed 17 wireless sensor nodes either along the corridor or in the offices of our research building. A robot carrying a sensor node as the target moved along the corridor of the building at constant speed while recording its own positions [5]. The error of the recorded positions is less than 15 cm, so they can be treated as ground truth.

All sensors integrate a nanoPAN 5375 RF module (2.4 GHz transceiver, 1 Mbps data rate) for range measurement, an LPC 2387 microcontroller, and a CC1101 900

Fig. 3. Building layout for the indoor localization experiment and the robot trajectory. The triangles mark the positions of sensor nodes placed either in the offices or along the corridor.

MHz transceiver for communication. The data collected from the sensor nodes are range measurements based on TOA. At each measurement interval, the target carried by the robot is measured by the sensor nodes while the robot records its actual coordinates in the building. Fig. 3 depicts the map of our experimental building; the triangles, deployed at arbitrary positions, mark the sensor nodes. According to the statistics of the measurement errors, it is hard to fit the error to a standard distribution. In general, the expectation of the measurement error is 1 m and the standard deviation is about 5 m.

We implement three particle filter schemes in this experiment. The first is a generic particle filter without any constraints, named PF. The second is a particle filter with map matching, which considers the NLOS effect and the building layout, named M-PF. The last is our proposed method, named CA-PF. The trajectories are shown in Fig. 4: the solid lines indicate the ground-truth trajectories, the triangles mark the anchor positions as in Fig. 3, and the dashed curves depict the estimated trajectories. Accuracy comparisons are listed in Tables I and II. As these tables show, when the range measurements are unreliable, the particle filter with map matching cannot achieve an accurate estimate; its error is even higher than that of the generic particle filter. Our context-aware particle filter is highly robust and achieves a very accurate estimate.

TABLE I
TRAJECTORY I: PERFORMANCE COMPARISON

Algorithm   MAE (m)   RMSE (m)   min error (m)   max error (m)
PF          0.2061    2.1439     0.0466          5.8189
M-PF        0.3216    2.3176     0.0362          17.1701
CA-PF       0.2501    1.5653     0.0393          6.6470

We vary the number of particles for each particle filter scheme and check the estimation performance. The results are shown in Fig. 5, which indicates that without a constraint condition a particle filter cannot achieve an accurate estimate with few particles; e.g., the generic particle filter has a very high RMSE with 10 particles. Map matching can



Fig. 4. Estimated trajectories using the reference system: (a) Trajectory I, (b) Trajectory II. The solid lines are the actual trajectories, the dashed curves the CA-PF estimates, and the triangles the anchors.

TABLE II
TRAJECTORY II: PERFORMANCE COMPARISON

Algorithm   MAE (m)   RMSE (m)   min error (m)   max error (m)
PF          0.5438    2.2635     0.0404          7.3092
M-PF        0.4246    2.3973     0.0733          12.8521
CA-PF       0.4419    1.5467     0.0210          7.5943

provide an accurate estimate, but our method is even better. For fast processing, only 30 particles suffice to achieve a low RMSE.

Fig. 6 illustrates the average processing delay of the three particle filter schemes, all highly optimized and tested on a MATLAB platform. The processing delay clearly increases linearly with the number of particles. The gap between PF and M-PF results from the region detection method, while the gap between M-PF and our method is quite small. Our method has the highest delay, but it is still very short and does not affect the performance of the whole system.

VII. CONCLUSION

We propose a context-aware particle filter tracking algorithm that fuses layout information and measurement information to obtain the position of a mobile target. Our method


Fig. 5. Root mean square error (RMSE) comparison for the different algorithms with different numbers of particles.


Fig. 6. Processing delay with different numbers of particles.

is adaptable to dynamic environments and robust to high wireless noise. The experimental results demonstrate that our method achieves accurate estimation with low processing delay. Future work will focus on hybrid indoor and outdoor tracking with geographic information.

REFERENCES

[1] X. Hu, T. Schon, and L. Ljung, "A Basic Convergence Result for Particle Filtering," IEEE Transactions on Signal Processing, vol. 56, no. 4, pp. 1337–1348, 2008.

[2] J. Prieto, S. Mazuelas, A. Bahillo, P. Fernandez, R. M. Lorenzo, and E. J. Abril, "Adaptive Data Fusion for Wireless Localization in Harsh Environments," IEEE Transactions on Signal Processing, vol. 60, no. 4, pp. 1585–1596, 2012.

[3] G. Mao, B. Fidan, and B. Anderson, "Wireless Sensor Network Localization Techniques," Computer Networks, vol. 51, no. 10, pp. 2529–2553, 2007.

[4] Y. Qi, H. Kobayashi, and H. Suda, "Analysis of Wireless Geolocation in a Non-Line-of-Sight Environment," IEEE Transactions on Wireless Communications, vol. 5, no. 3, pp. 672–681, 2006.

[5] S. Schmitt, H. Will, B. Aschenbrenner, T. Hillebrandt, and M. Kyas, "A Reference System for Indoor Localization Testbeds," in International Conference on Indoor Positioning and Indoor Navigation (IPIN 2012), 2012, pp. 1–4.


2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31st October 2013

Verification of ESPAR Antennas Performance in the Simple and Calibration Free Localization System

Mateusz Rzymowski, Przemysław Woźnica, Łukasz Kulas
Department of Microwave and Antenna Engineering, Faculty of Electronics, Telecommunications and Informatics, Gdansk University of Technology, Gdańsk, Poland
[email protected], [email protected], [email protected]

Abstract—This paper presents the results of simulations and measurements of an indoor localization system that uses Electronically Steerable Parasitic Array Radiator (ESPAR) antennas with a switched directional beam. The proposed antennas are dedicated to low-cost 2.4 GHz ISM applications where determination of the incoming signal direction is required. The antennas' performance is analyzed and verified with respect to positioning methods based on the simplest direction-of-arrival (DoA) algorithm: the object position is estimated from the incoming signal direction indicated by a pair of antennas. Existing analyses of ESPAR antenna performance with regard to DoA estimation methods are usually based on experimental measurements in which the negative influence of the environment is limited, or only certain operational angles are discussed. In this paper, all measurements were done in an office and warehouse environment and compared with corresponding ray-tracing simulations.

Keywords: switched-beam, ESPAR antenna, WSN, positioning, localization

I. INTRODUCTION

Determining an object's position indoors using RF signal properties is an important subject that has proved useful in application areas such as healthcare, asset management and safety systems [1]. Among Indoor Positioning Systems (IPS) based on radio wave properties we can distinguish systems that rely on [2]: RSS (Received Signal Strength), based on the received signal power level [3]; ToA (Time of Arrival), based on the time of radio signal propagation [4]; TDoA (Time Difference of Arrival), based on differences in radio signal time of arrival [5]; and DoA (Direction of Arrival), which uses antennas with a reconfigurable radiation pattern [6]. Reconfigurable antennas are beneficial for wireless network functionality [6]. Variability of the radiation pattern in such antennas can improve link quality, increase system range or reduce energy consumption. It is also a key issue for low-cost systems where determination of the incoming signal direction is required. Examples of reconfigurable antennas for such applications are ESPAR arrays [6-11]. They have a simple construction, with one active monopole surrounded by a defined number of passive elements. The main beam direction can be changed in angular steps that depend on the number of passive elements. Beam steering in ESPAR arrays is performed by electronic switches that have to provide the required load for the parasitic elements, close to an open or short circuit. The switching circuits can be simplified by applying SPST (Single Pole Single Throw) switches (ON/OFF) instead of multiway switches. By an adequate RF switch configuration it is possible to obtain a directional beam.

There are several methods of estimating the direction of the incoming signal that can be implemented on ESPAR antennas. The simplest and most popular approach is main beam switching [10-11]: the beam is swept in discrete steps in order to detect the strongest signal, which indicates the incoming signal direction. A wide main beam or a high backward radiation level can negatively influence the estimation accuracy, so the main goal is to obtain as narrow a beam as possible. Results reported so far show that DoA localization based on an ESPAR antenna can be significantly improved using advanced algorithms like MUSIC or ESPRIT [14-15], but in most cases the localization verification is conducted in an anechoic chamber and the algorithms employ simplified theoretical models for 2D environments. Such algorithms are difficult to implement, and results obtained in reflection-free environments are hard to reproduce in real test-beds, especially when many obstacles are present in the propagation path, as in an office or warehouse.
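The main-beam-switching estimator described above reduces to an argmax over the discrete beam directions. A minimal sketch, where the twelve 30° beam directions and the RSSI values are hypothetical illustrations:

```python
def doa_beam_switching(rssi_per_beam, beam_angles_deg):
    """Estimate direction of arrival by main-beam switching: sweep the
    switched beam over its discrete directions and return the angle of
    the configuration with the strongest received signal."""
    best = max(range(len(rssi_per_beam)), key=lambda i: rssi_per_beam[i])
    return beam_angles_deg[best]

# A twelve-element ESPAR can offer twelve beam directions, 30 deg apart.
beam_angles = [i * 30 for i in range(12)]  # 0..330 degrees
rssi = [-71, -68, -62, -55, -58, -66, -72, -75, -77, -76, -74, -73]  # dBm
print(doa_beam_switching(rssi, beam_angles))  # -> 90
```

The angular resolution of this estimator is limited to the beam step, which is why a narrow main beam and low backward radiation matter.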

In this paper, a simple localization system using two ESPAR antennas simultaneously, proposed in [11], was simulated and verified. The system is based on the simplest DoA algorithm: finding the direction in which the maximum signal value was received. In comparison to [11], the measured characteristics of the manufactured ESPAR antenna were used within a detailed model of the real environment, to achieve more reliable simulation results. The simulations were also confronted with real-environment measurements. Section II describes the antenna construction and measurements. The next section presents the simulation results for the proposed system configuration. In Section IV the real-environment measurements are discussed.

II. ANTENNA DESIGN AND MEASUREMENTS

The proposed antenna is presented in Fig. 1. It is a twelve-element ESPAR array with one active monopole in the center of a ground plane realized as the top layer of the PCB. The monopole is fed by an SMA connector, while the parasitic

978-1-4673-1954-6/12/$31.00 ©2012 IEEE



elements can be shorted or opened to ground by the SPST switches connected to them on the bottom layer of the antenna. The opened elements act as directors, since the electromagnetic wave passes through them, while the shorted parasitic elements are referred to as reflectors because they

Fig. 1. Realized ESPAR antenna – top view.

reflect energy. The antenna was designed to operate in the 2.4 GHz frequency band and was realized on a 1.55 mm thick FR4 substrate with top-layer metallization. It is fed by a female SMA connector.

The parasitic elements are silver-plated wires 1.2 mm thick. They are shorted or opened to ground with NEC μPG20112TB switches placed on the bottom layer of the antenna, as illustrated in Fig. 2. This model was chosen because of its low insertion loss (about 0.3 dB) and good isolation (typically 25 dB). 56 pF DC-blocking capacitors were implemented at the input and outputs of the switching circuits. The RF switches are controlled and powered by an external driver based on an STM32 microcontroller. The driver provides a 3 V power supply and uses a dedicated communication protocol to control the switching process, so it can work autonomously or be steered from other devices (e.g., a PC or an RF module).

The radiation pattern of the described antenna was measured in an anechoic chamber and is presented in Fig. 3.

Fig. 2. Realized ESPAR antenna – bottom view.

III. NUMERICAL SIMULATIONS

A. Scene setup

As the environment for the simulation, part of a floor of the Department of Microwave and Antenna Engineering was chosen (see Fig. 4 and Fig. 5). The setup consists of five rooms and two parts of a corridor. The materials and their electrical properties used in the environment model were provided by the software producer [12], [13]. Optical ray tracing was chosen as the simulation engine, using empirical coefficients to model the interactions between radio signals and the environment.

Fig. 3. Measured 3D antenna radiation pattern.

The proposed setup is presented in Fig. 4, where line colors represent the materials used in the simulation and the highlighted area represents the simulation scene, consisting of three rooms and two parts of the corridor (see Fig. 5). Three antennas with the switched directional radiation pattern presented in Fig. 3 were used in the simulation. The antennas' positions are marked in Fig. 5 as blue dots, labeled consecutively a1, a2 and a3, while the simulated areas are labeled r1, r2, r3 (the three rooms) and c1, c2 (the two parts of the corridor). All antennas were placed at the same height of 2.8 m, while the height of the predicted signal area is 1.5 m.

Fig. 4. The overall view of the simulation setup (see text for explanations).



B. Ray-tracing results

For the ray-tracing simulation, the resolution was set to 10 cm and the following limits were established for each ray: at most four transmissions, four reflections and two diffractions. The simulation results for all three antennas and all possible main-beam configurations are presented in Fig. 6.

Fig. 5. Simulation scene together with antenna positions (see text for explanations).

Fig. 6. Received power distribution for all antennas and all main beam directions.

C. Localization accuracy

To determine the localized node's (LN) position, a DoA algorithm using only two reference nodes equipped with ESPAR antennas [11] was implemented. The algorithm determines the direction of the localized node's signal based on the highest received signal value associated with an ESPAR antenna configuration. If the determined directions are divergent, the convergent pair is estimated as the one with the smallest angular difference with respect to the original directions. Localization was performed using all simulated points as testing points. Two pairs of antennas were used: the first pair was antenna 1 and antenna 3, the second antenna 2 and antenna 3. The third pair was not taken into account because its placement is inadequate for the algorithm's performance. The results, in the form of the cumulative distribution function (CDF) of localization errors for the whole scene, are presented in Fig. 7. The functions were calculated as normalized cumulative histograms of localization estimation errors. The faster a function reaches the value one, the better the results (for example, the function argument at a CDF of 0.5 indicates that 50% of measurements have an error smaller than or equal to that argument value).
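The pairwise DoA position estimate amounts to intersecting the two bearings reported by a pair of reference nodes. A minimal sketch (node positions and bearing angles are hypothetical, and the paper's convergent-pair selection step is not reproduced here):

```python
import math

def intersect_bearings(p1, theta1_deg, p2, theta2_deg):
    """Estimate the localized node's position as the intersection of two
    DoA bearings, one per reference node. Returns None for (near-)parallel
    bearings, i.e., a divergent pair that needs another antenna pair."""
    t1, t2 = math.radians(theta1_deg), math.radians(theta2_deg)
    d1 = (math.cos(t1), math.sin(t1))
    d2 = (math.cos(t2), math.sin(t2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]  # 2D cross product of directions
    if abs(denom) < 1e-9:
        return None
    # Solve p1 + s*d1 = p2 + u*d2 for the scalar s along the first bearing.
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    s = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + s * d1[0], p1[1] + s * d1[1])

# Hypothetical reference nodes with estimated beam directions.
print(intersect_bearings((0.0, 0.0), 45.0, (10.0, 0.0), 135.0))  # approx (5.0, 5.0)
```

Because each bearing is quantized to the switched-beam step, the intersection error grows with distance from the reference nodes, which is consistent with the meter-level accuracies reported in Tables I and II.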

Fig. 7. CDF for the whole scene (see text for explanations).

The mean and median error values are presented in Table I.

TABLE I. LOCALIZATION ACCURACY (IN METERS) - SIMULATION

antennas   mean     median
a1 - a3    2.3454   2.0236
a2 - a3    2.2515   2.0070

IV. MEASUREMENTS

All measurements were done in the environment whose simplified model was simulated in the previous section. The environment can be described as a fusion of office and storehouse. The selected area was divided into a 4x8 grid covering 5.5 x 11.5 m. The locations of the measurement points are presented in Fig. 8 as red squares. The measurement setup consists of three ESPAR antennas placed in different rooms and connected to RF transceivers. A transceiver with an ESPAR antenna acted as a reference node and measured the incoming signal strength from the localized node. It has to be mentioned that EM wave propagation in the real environment is much more complicated than in the modeled simulation scene, because of the storage character of the rooms. Another issue is that the modules used provide weak output power, which affects the measured



data quality, especially when the distance between the reference and localized nodes is large.

Fig. 8. Measurement grid (see text for explanations).

In this case, the DoA algorithm was also used to estimate the position of the localized node, and the same antenna pairs as before were used for the calculations. The mean and median error values are presented in Table II.

TABLE II. LOCALIZATION ACCURACY (IN METERS) - MEASUREMENTS

antennas   mean   median
a1 - a3    3.65   2.68
a2 - a3    3.58   3.39

A significant difference between the simulated and measured results occurred: the measurement error is more than one meter larger than in the simulation. As mentioned, this is because the model was highly simplified and did not include a number of obstacles present in the analyzed testbed, such as metal cupboards, moving people, and wood and metal furniture. Another reason is the low output power of the modules, which did not allow the position of the LN to be distinguished when it was placed in a different room than the reference node. Even when the signal source and one of the reference nodes were in the same room and the estimated direction was right, the other reference node contributed a higher estimation error to the system because of the very low received signal level from all directions.

V. CONCLUSION

This paper presents the results of simulations and real-environment measurements of a localization system based on two ESPAR antennas with switched directional beams. The testbed was modeled and simulated, and then measurements in the real environment were performed. Localization errors were calculated for both cases described in the previous sections. The comparison of simulations and measurements confirms that the operation of localization systems is hard to simulate with simple models, as is done in many publications, because of the complex wave propagation in indoor environments. The results show that a simple algorithm using only a switched directional beam can be considered sufficient for robust localization; still, more sophisticated algorithms and additional localization methods are required to increase the accuracy. Creation of a more detailed model, especially with regard to metal obstacles, should be

considered. The measurements should be repeated on a denser grid, and the output power of the modules has to be increased within the limits of existing standards.

ACKNOWLEDGMENT

This work has been supported by the Polish National Centre for Research and Development under agreement LIDER/23/147/L-1/09/NCBiR/2010.

REFERENCES

[1] D. M. Taub, S. B. Leeb, E. C. Lupton, R. T. Hinman, J. Zeisel, and S. Blackler, "The Escort System: A Safety Monitor for People Living with Alzheimer's Disease," IEEE Pervasive Computing, vol. 10, no. 2, pp. 68-77, April-June 2011.

[2] A. Bensky, Wireless Positioning Technologies and Applications. GNSS Technology and Applications Series, Artech House, 2008.

[3] Chin-Tseng Huang, Cheng-Hsuan Wu, Yao-Nan Lee, and Jiunn-Tsair Chen, "A novel indoor RSS-based position location algorithm using factor graphs," IEEE Transactions on Wireless Communications, vol. 8, no. 6, pp. 3050-3058, June 2009.

[4] N. Patwari, A. O. Hero III, M. Perkins, N. S. Correal, and R. J. O'Dea, "Relative location estimation in wireless sensor networks," IEEE Transactions on Signal Processing, vol. 51, no. 8, pp. 2137-2148, Aug. 2003.

[5] Bin Xu, Ran Yu, Guodong Sun, and Zheng Yang, "Whistle: Synchronization-Free TDOA for Localization," in Distributed Computing Systems (ICDCS), 2011 31st International Conference on, pp. 760-769, 20-24 June 2011.

[6] L. Brás, N. Borges Carvalho, P. Pinho, L. Kulas, and K. Nyka, "A Review of Antennas for Indoor Positioning Systems," International Journal of Antennas and Propagation, vol. 2012, Article ID 953269, 14 pages, 2012. doi:10.1155/2012/953269

[7] R. Schlub, Junwei Lu, and T. Ohira, "Seven Element Ground Skirt Monopole ESPAR Antenna Design using a Genetic Algorithm and the Finite Element Method," IEEE Transactions on Antennas and Propagation, vol. 51, no. 11, pp. 3033-3039, Nov. 2003.

[8] R. Schlub and D. V. Thiel, "Switched Parasitic Antenna on a Finite Ground Plane With Conductive Sleeve," IEEE Transactions on Antennas and Propagation, May 2004.

[9] H. Kawakami and T. Ohira, "Electrically steerable passive array radiator (ESPAR) antennas," IEEE Antennas and Propagation Magazine, vol. 47, no. 2, pp. 43-49, 2005.

[10] E. Taillefer, A. Hirata, and T. Ohira, "Direction-of-arrival estimation using radiation power pattern with an ESPAR antenna," IEEE Transactions on Antennas and Propagation, vol. 53, no. 2, pp. 678-684, Feb. 2005.

[11] M. Sulkowska, K. Nyka, and L. Kulas, "Localization in Wireless Sensor Networks Using Switched Parasitic Antennas," in Proceedings of the 18th International Conference on Microwaves, Radar and Wireless Communications (MIKON '10), pp. 1-4, June 2010.

[12] (2012) AWE Communications website. [Online]. Available: http://www.awe-communications.com/Manuals/

[13] (2012) AWE Communications website. [Online]. Available: http://www.awe-communications.com/Download/DemoData/Databases_Material.zip

[14] C. Plapous, Jun Cheng, E. Taillefer, A. Hirata, and T. Ohira, "Reactance domain MUSIC algorithm for electronically steerable parasitic array radiator," IEEE Transactions on Antennas and Propagation, vol. 52, no. 12, pp. 3257-3264, Dec. 2004. doi: 10.1109/TAP.2004.83643

[15] An-Min Huang, Qun Wan, Xin-Xin Chen, and Wan-Lin Yang, "Enhanced Reactance-Domain ESPRIT Method for ESPAR Antenna," in TENCON 2006, 2006 IEEE Region 10 Conference, pp. 1-3, 14-17 Nov. 2006. doi: 10.1109/TENCON.2006.344036



Optimal RFID Beacons Configuration for Accurate Location Techniques within a Corridor Environment

Alain Moretto, Elizabeth Colin
Allianstic, ESIGETEL, Villejuif, France
Alain.moretto / Elizabeth.colin @esigetel.fr

Marc Hayoz
Telecommunication Dpt., EIA-FR, Fribourg, Switzerland
[email protected]

Abstract—When using fingerprinting or tri/multilateration techniques, emitters must be deployed in the environment. A critical issue is where the emitters should be placed, yet too few studies on this topic have been carried out. This paper focuses on the placement of the emitting sources in order to increase the accuracy of the position estimation. This work gives guidelines on the placement of emitting sources in the context of a trilateration-based location architecture within a hallway, in order to increase both precision and accuracy.

Keywords: tri/multilateration; beacon placement; active RFID; RSSI

I. INTRODUCTION

Tri- and multilateration techniques are now very commonly used for location purposes [1-4]. The performance degradation of such techniques in indoor environments has been widely pointed out and studied [5-6]. Multipath phenomena, time-varying fading and dead spots particularly affect distance estimation between emitters and receivers and, as a consequence, position estimation accuracy. Several solutions have been suggested to reduce the impact of distance estimation errors. The first is to find an accurate propagation model that takes into account the geometry of the room, indoor environment specificities and material dielectric permittivity [7]. Another is to merge different localization techniques, such as infrared or ultrasonic sensors, optical beacon recognition, odometry, gyroscopes or, more recently, light intensity measurements [8]; this is known as the multi-modal approach. A further (not exclusive) approach is to use statistical tests to identify and eliminate incorrect distance measurements, and to use Kalman or particle filters to increase the likelihood of finding an object at a given position given many other pieces of information.

When it comes to wireless sensor networks (WSN), many researchers have proposed solutions based on specific node placements, studying spatial distribution and node density for both static and dynamic architecture configurations [9-12].

Yet, when beacons are non-communicating RF objects, too few studies can be found.

In this work, two different corridors are meshed with 433 MHz RFID beacons. A two-antenna RFID reader acquires the Received Signal Strength Indication (RSSI) from all the beacons. A propagation model is deduced, and a 3D trilateration algorithm is subsequently implemented to determine the position of a robot carrying the reader.

We considered a typical indoor environment, a corridor, which is subject to a dominant multipath phenomenon. Measurements were made in two corridors of similar length. A grid of twenty-four 433 MHz active RFID beacons (tags) was deployed in both cases, and an RFID reader acquires the RSSI.

We focused on the mean and standard deviation of the localization errors as criteria of accuracy and precision. In order to optimize the accuracy of the position estimation of a robot along a corridor, we tested all possible quadruplets and found the five best quadruplets of beacons.

The paper is organized as follows. In Section 2, we describe our positioning system: the initial beacon placement within the two environments is described, and we elaborate on the choice of our propagation model. Section 3 describes the positioning technique we used as well as the measurement scenario itself. Results are presented and commented on in Section 4.

II. THE ENVIRONMENT AND ITS MODELING

A. Environment Features

Our robot positioning system is based on UHF RFID technology: a 433 MHz reader with two dipole antennas is embedded on the robot, and active RFID tags are used as beacons to obtain the robot's location.

The considered location environment consists of two corridors. The first, 22.8 m x 2 m x 2.5 m, is located at the ESIGETEL engineering school (corridor E); the second, 18.5 m x 1.5 m x 5 m, at the IBISC robotics laboratory (corridor I). Fig. 1 shows that the two corridor geometries are quite different: corridor E is wide with a classical ceiling height, whereas corridor I is narrow with a high ceiling.

In both corridors, tags are placed on the walls at heights of 1.30 m and 2.10 m, and the distance between two adjacent tags is 1.5 m. These heights correspond to a doorknob and a standard door height, respectively. The layout is shown in Fig. 2.


2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31st October 2013

Figure 1. Corridors I and E, respectively on the left-hand side and the right-hand side.

Figure 2. Layout of the tags on the corridor

Corridor E has 30 tags and corridor I has 24 tags.

Note that no tag has been placed on the ceiling because of the specific radiation pattern of the reader antenna: the pattern has a null along the dipole axis, that is, towards the ceiling and the floor.

Our tags can be detected from as far as 40 meters in an indoor environment.

B. Environment Model

As the reader acquires RSSI, we need to choose a propagation model of the environment in order to estimate the reader-to-tag distance from the received power.

The One-Slope model gives the average trend of the wave propagation behavior. Moreover, it is a simple and bijective relation between power and distance:

P_received = K / d^n    (1)

where K and n are two constants to be determined.

RSSI power measurements are made along each corridor to determine the slope features (K, n), so that each tag has its own channel model (see Fig. 3). In this figure, measurements are in blue and the fitted slope in red.
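For illustration, the (K, n) fit can be reproduced by linear regression in the log domain, since log10(P) = log10(K) − n·log10(d). The sketch below uses synthetic RSSI samples with a made-up ground truth (K = 2.0, n = 2.5), not the paper's measured data.

```python
import numpy as np

def fit_one_slope(distances, powers):
    """Fit P = K / d**n by least squares on log10(P) vs log10(d)."""
    logd = np.log10(distances)
    logp = np.log10(powers)
    slope, intercept = np.polyfit(logd, logp, 1)
    return 10.0 ** intercept, -slope  # (K, n)

# Synthetic samples: known model plus mild multiplicative noise.
rng = np.random.default_rng(0)
d = np.linspace(1.0, 20.0, 50)
p = 2.0 / d**2.5 * rng.lognormal(0.0, 0.05, d.size)
K, n = fit_one_slope(d, p)
print(K, n)  # close to 2.0 and 2.5
```

In practice this fit would be repeated per tag, giving each tag its own (K, n) pair as the text describes.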

Figure 3. Received power measurement and propagation model of tag 2.

Obviously, this average behavior does not capture the multipath effects responsible for the dispersion of the received power measurements. This dispersion can lead to wrong distance estimates.

In order to limit deep fades, we implement a classical antenna diversity technique: we compare the received power on each antenna and record the highest level.
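A minimal sketch of this diversity rule: at each robot position, keep the strongest of the two antenna readings. The dBm values below are illustrative, not measured.

```python
import numpy as np

# Two hypothetical RSSI traces (dBm), one per dipole antenna.
rssi_ant1 = np.array([-62.0, -75.0, -58.0, -80.0])
rssi_ant2 = np.array([-65.0, -70.0, -61.0, -72.0])

# Selection diversity: element-wise maximum over the two antennas.
rssi_kept = np.maximum(rssi_ant1, rssi_ant2)
print(rssi_kept.tolist())  # [-62.0, -70.0, -58.0, -72.0]
```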

Fig. 4 shows the distance estimation error modulus distribution for each tag model. For 45% of the estimates, the distance estimation error due to the propagation model is below 2 m. In this specific configuration, tag 17 leads to an estimation error below 1 m for 40% of all the distances estimated in the corridor.

Figure 4. Distance estimation error modulus distribution due to deviation from the One-Slope ideal model in corridor E.

III. POSITIONING

The One-Slope model is a bijective one. As a consequence, once the reader embedded on our robot receives the RSSI from each tag, the distance di to the i-th tag Bi can be estimated. The position of the robot can then be calculated using the trilateration method.
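Since the One-Slope model P = K / d^n is bijective in d, the reader-to-tag distance follows by inversion, d = (K/P)^(1/n). The constants below are illustrative placeholders, not the calibrated per-tag values.

```python
def estimate_distance(p_received, K, n):
    """Invert the One-Slope model P = K / d**n to get the distance."""
    return (K / p_received) ** (1.0 / n)

# Illustrative values: with K = 2.0 and n = 2.0, a received power of
# 0.02 corresponds to a reader-to-tag distance of 10 m.
d = estimate_distance(p_received=0.02, K=2.0, n=2.0)
print(d)  # 10.0
```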


A. Trilateration

Let us assume that electromagnetic power expands in an isotropic way (see Fig. 5). Under this condition, an iso-energy surface is a sphere. Let Bi(xi, yi, zi) be the exact location of the i-th beacon in Cartesian coordinates. The equation of each sphere is:

(x − xi)² + (y − yi)² + (z − zi)² = di²    (2)

Figure 5. Robot Positioning using Trilateration.

We need four quadratic equations, that’s to say four distance estimations, to find the position (x, y, z) of the robot:

x − x1

2 + y − y1 2 + z − z1

2 = d1

x − xi 2 + y − y2

2 + z − z2 2 = d2

x − xi 2 + y − y3

2 + z − z3 2 = d3

x − xi 2 + y − y4

2 + z − z4 2 = d4

(3)

Expanding and regrouping terms in (3), i.e. subtracting the first equation from each of the others, we obtain the linear system:

A [x y z]ᵀ = b    (4)

with

A = 2 ⎡ x2 − x1   y2 − y1   z2 − z1 ⎤
      ⎢ x3 − x1   y3 − y1   z3 − z1 ⎥    (5)
      ⎣ x4 − x1   y4 − y1   z4 − z1 ⎦

and

b = ⎡ d1² − d2² − (x1² − x2²) − (y1² − y2²) − (z1² − z2²) ⎤
    ⎢ d1² − d3² − (x1² − x3²) − (y1² − y3²) − (z1² − z3²) ⎥    (6)
    ⎣ d1² − d4² − (x1² − x4²) − (y1² − y4²) − (z1² − z4²) ⎦

Matrix A (Eq. 5) must be invertible to find a solution. This is clearly not the case if the four beacons are coplanar. Noisy distance estimates may also lead to a non-invertible system. Indeed, with imperfect information the spheres may not intersect at a single point; in fact, they may not intersect at all. That is why an estimate of the position is generally found by looking for the point that simultaneously minimizes the distance to all spheres, using mathematical techniques such as Least Squares Estimation (LSE). In this work, we deliberately wanted to explore the performance of trilateration (and not multilateration) before any additional signal processing.
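The linearized step (Eqs. 4-6) can be sketched with numpy: subtracting the first sphere equation from the other three yields a 3x3 system A [x y z]ᵀ = b. The beacon layout below is synthetic and chosen non-coplanar so that A is invertible; the ranges are noise-free for clarity.

```python
import numpy as np

def trilaterate(beacons, dists):
    """Solve the linearized trilateration system from 4 beacons."""
    b0 = beacons[0]
    A = 2.0 * (beacons[1:] - b0)  # rows: 2*(Bi - B1), i = 2..4
    rhs = (dists[0]**2 - dists[1:]**2
           + np.sum(beacons[1:]**2, axis=1) - np.sum(b0**2))
    return np.linalg.solve(A, rhs)

# Synthetic, non-coplanar beacon positions (meters).
beacons = np.array([[0., 0., 0.], [4., 0., 0.], [0., 4., 0.], [0., 0., 4.]])
true_pos = np.array([1.0, 2.0, 0.5])
dists = np.linalg.norm(beacons - true_pos, axis=1)  # exact ranges
print(trilaterate(beacons, dists))  # recovers [1.0, 2.0, 0.5]
```

With noisy ranges the system may become ill-conditioned, which is exactly why the text mentions falling back on least-squares estimation.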

B. Measurement campaigns

At this step, corridors are empty (no furniture and no people walking along) to avoid additional fading or extra scattering sources. Doors remain closed.

At 433 MHz, the wavelength is about 70 cm. The Shannon spatial sampling theorem states that a measurement should be taken at most every half wavelength. We thus have 61 acquisitions in corridor I and 75 acquisitions in corridor E.

The received power from all the RFID beacons is measured and recorded by the robot every 30 cm. The maximum received power over the two antennas is recorded, as explained previously.

C. Localization step

Every position is estimated through all C(n,4) possible quadruplets of tags (n = 30 for corridor E, i.e. 27405 quadruplets, and n = 24 for corridor I, i.e. 10626 quadruplets).

Positions estimated outside the corridor are not taken into account when computing the mean error and its standard deviation.
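The exhaustive search size quoted above is simply "n choose 4"; a quick check, with itertools standing in for the actual enumeration loop:

```python
from itertools import combinations
from math import comb

# Closed-form counts matching the text.
print(comb(30, 4))  # 27405 quadruplets in corridor E
print(comb(24, 4))  # 10626 quadruplets in corridor I

# Enumerating the quadruplets themselves (here only counting them).
n_quads = sum(1 for _ in combinations(range(24), 4))
print(n_quads)  # 10626
```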

IV. RESULTS

A. Results analysis

Fig. 6 shows that 60% of the estimated positions fall outside the corridor. We focus on the remaining, meaningful estimates. Thanks to the antenna diversity technique, and without any filtering, 18% of the overall mean errors are less than 2 m.

Figure 6. Mean error distribution in corridor E

We tried to find the "best" beacon placement according to three criteria, over the whole corridor. The first criterion is accuracy, given by the minimum value of the mean error (Table 1). The second criterion is precision, which is directly linked to the error standard deviation (Table 2). Finally,


we look for the quadruplets that offer good accuracy and good precision at the same time (Table 3).

If the accuracy of the position estimation is our main goal, Table 1 shows that we should not expect a mean error below 2.3 m. Yet, the choice of tags 1, 2, 3 and 23 leads to an accuracy equivalent to or better than that of commercial solutions (around 3 m).

TABLE I. WE LOOK FOR THE QUADRUPLETS THAT GIVE THE LOWEST MEAN ERROR

              Corridor I                                  Corridor E
Tag Id       Mean error (m)  Std deviation (m)   Tag Id        Mean error (m)  Std deviation (m)
1,2,3,23         2.29             2.26           6,24,26,27        3.15             2.15
7,17,18,19       2.39             1.95           8,11,23,24        3.27             2.45
9,10,16,19       2.57             2.04           8,23,24,25        3.28             2.85
7,9,18,19        2.65             2.19           8,9,23,24         3.29             2.67
9,10,16,17       2.72             2.51           14,16,17,18       3.29             2.02

If the precision of the position estimation is our main goal, Table 2 shows that reducing the dispersion of the position estimates in a corridor has a price: we should not expect an overall accuracy better than 3.7 m. This limit can be overcome with the help of positioning technique fusion and/or likelihood-maximizing algorithms.

TABLE II. WE LOOK FOR THE QUADRUPLETS THAT GIVE THE LOWEST STANDARD DEVIATION ERROR

              Corridor I                                  Corridor E
Tag Id       Mean error (m)  Std deviation (m)   Tag Id        Mean error (m)  Std deviation (m)
10,11,14,20      3.71             1.32           7,9,10,26         4.73             1.43
2,8,19,20        4.25             1.35           11,15,17,18       3.94             1.61
1,3,4,24         4.98             1.42           3,4,22,29         5.32             1.66
8,9,19,21        6.43             1.57           3,4,25,29         4.25             1.73
4,12,15,16       5.45             1.62           6,7,9,22          5.72             1.74

Finally, Table 3 gives the five best candidates when both precision and accuracy are required. A glance at the first three candidates allows us to choose a balance between the two criteria. For instance, quadruplet (9, 11, 15, 16) offers slightly less accurate positioning over corridor I than the first quadruplet (7, 17, 18, 19), but improves precision.

TABLE III. WE LOOK FOR THE QUADRUPLETS THAT GIVE THE LOWEST MEAN ERROR AND LOWEST STANDARD DEVIATION ERROR

              Corridor I                                  Corridor E
Tag Id       Mean error (m)  Std deviation (m)   Tag Id        Mean error (m)  Std deviation (m)
7,17,18,19       2.39             1.95           13,16,17,18       3.40             1.82
9,11,15,16       2.75             1.63           6,24,26,27        3.15             2.15
1,2,3,23         2.29             2.26           14,16,17,18       3.29             2.01
9,10,16,19       2.57             2.04           8,24,26,27        3.51             2.03
6,7,19,23        3.06             1.76           11,15,17,18       1.61             2.02

B. Beacon placement

To improve the visual representation of the results obtained previously, we plotted the first three quadruplets, according to the three chosen criteria, in corridor E (see Fig. 7) and in corridor I (see Fig. 8).

The first observation is that beacons should preferably be centered and grouped in order to increase accuracy.

Most of the polygons that meet our requirements contain three neighboring tags.

Slightly spacing neighboring tags can increase accuracy. Yet, spacing tags as much as possible in order to cover the whole corridor is definitely not a good idea.

Finally, polygons (quadruplets) that offer a compromise between precision and accuracy should be placed at a corridor end.

Figure 7. Three best quadruplet placements in corridor E according to the accuracy criterion (upper figure), precision criterion (middle figure) or accuracy+precision criterion.


Figure 8. Three best quadruplet placements in corridor I according to the accuracy criterion (upper figure), precision criterion (middle figure) or accuracy+precision criterion.

V. CONCLUSION

Beacon placement strongly affects the quality of 3D spatial localization in a trilateration context, especially in an indoor environment. Multipath phenomena make it difficult to estimate proper beacon-to-reader distances, and this strongly affects the performance of this type of positioning system.

In this work, we focused on finding a preconfigured beacon placement in order to improve the performance of our positioning system according to three criteria: accuracy (overall mean error), precision (error standard deviation), and the fusion of these two criteria.

A beacon placement design is suggested, and the expected performance is given without any signal post-processing or further performance improvement.

ACKNOWLEDGMENT

We deeply thank Maxime Jubert for endless hours of measurements and for his collaboration during the validation step of our project at the IBISC robotics lab.

REFERENCES

[1] Sumana Das, Thiago Teixeira and Syed Faraz Hasan, "Research Issues related to Trilateration and Fingerprinting Methods: An Experimental Overview of Wi-Fi Positioning Systems", International Journal of Research in Wireless Systems (IJRWS), Vol. 1, Issue 1, November 2012.

[2] Federico Thomas and Lluís Ros, "Revisiting Trilateration for Robot Localization", IEEE Transactions on Robotics, Vol. 21, No. 1, February 2005.

[3] Jun Wang, Paulo Urriza, Yuxing Han, and Danijela, "Weighted Centroid Localization Algorithm: Theoretical Analysis and Distributed Implementation", IEEE Transactions on Wireless Communications, Vol. 10, Issue 10, October 2011.

[4] Frédéric Lassabe, "Géolocalisation et prédiction dans les réseaux Wi-Fi en intérieur", M.Sc. thesis report, Université de Franche-Comté, April 2009.

[5] Mihail L. Sichitiu and Vaidyanathan Ramadurai, "Localization of Wireless Sensor Networks with a Mobile Beacon", IEEE Convention of the Electrical and Electronic Engineers in Israel (IEEEI), 2004.

[6] Shashank Tadakamadla, "Indoor Local Positioning System for ZigBee Based on RSSI", M.Sc. thesis report, Department of Information Technology and Media, Mid Sweden University, 2006.

[7] A. Moretto, E. Colin, "New Indoor Propagation Channel Model for Location Purposes", Progress In Electromagnetics Research Symposium Proceedings, Taipei, March 25-28, 2013.

[8] Youngsuk Kim, Junho Hwan, Jisoo Lee and Myungsik Yoo, "Position estimation algorithm based on tracking of received light intensity for indoor visible light communication systems", 2011 Third International Conference on Ubiquitous and Future Networks (ICUFN).

[9] Randolph L. Moses, Dushyanth Krishnamurthy, and Robert Patterson, "A Self-Localization Method for Wireless Sensor Networks", EURASIP Journal on Applied Signal Processing, Vol. 2003, Issue 4, pp. 348-358, March 2003.

[10] Nirupama Bulusu, John Heidemann, Deborah Estrin, "Density Adaptive Algorithms for Beacon Placement in Wireless Sensor Networks", in Proceedings of IEEE ICDCS'01.

[11] Javier O. Roa, Antonio Ramón Jiménez, Fernando Seco Granja, José Carlos Prieto, Joao L. Ealo, "Optimal Placement of Sensors for Trilateration: Regular Lattices vs Meta-heuristic Solutions", in Proceedings of Computer Aided Systems Theory (EUROCAST 2007), 11th International Conference on Computer Aided Systems Theory, 2007.

[12] Guangjie Han, Deokjai Choi and Wontaek Lim, "Reference node placement and selection algorithm based on trilateration for indoor sensor networks", Wireless Communications and Mobile Computing, 2009.


A Cooperative NLoS Identification and Positioning Approach in Wireless Networks

Zhoubing Xiong, Roberto Garello
Department of Electronics and Telecommunications
Politecnico di Torino, Turin, Italy
zhoubing.xiong, [email protected]

Francesco Sottile, Maurizio A. Spirito
Pervasive Technologies
Istituto Superiore Mario Boella, Turin, Italy
sottile, [email protected]

Abstract—Non-line-of-sight (NLoS) propagation of radio frequency (RF) signals has proven to be challenging for the localization of unknown nodes in wireless networks. In particular, NLoS range measurements can greatly affect the accuracy of the mobile node's position and may in turn cause the position estimation error to diverge. This paper analyzes the Cramer-Rao lower bound of cooperative localization in the presence of NLoS measurements and proposes a cooperative NLoS identification scheme as well as a cooperative positioning algorithm based on belief propagation. The proposed algorithm is fully distributed and does not require prior knowledge of the NLoS state of range measurements. Simulation results show that the proposed algorithm is able to detect the state of each range measurement (NLoS or LoS) and improve positioning accuracy in several NLoS conditions.

I. INTRODUCTION

Nowadays, localization-based applications, such as asset tracking, intruder detection, healthcare monitoring and so forth, are revolutionizing our life [1]. These applications often require very accurate position estimation even in challenging environments (e.g., indoor and industrial environments). One aspect that affects the accuracy of radio-based localization systems is non-line-of-sight (NLoS) propagation, which makes range observations positively biased. In the literature, many approaches have been proposed to mitigate the large errors caused by NLoS links [2]–[8]. In [2]–[4], algorithms have been adopted to identify whether a range measurement is in NLoS or LoS status based on channel statistics. In [5] and [6] the authors proposed NLoS mitigation algorithms in vehicular applications, but they did not take into account cooperation among unknown mobile nodes. In [7] and [8], cooperation among unknown nodes is exploited, but the exact status of NLoS links is required to be known, which might be unrealistic.

In cooperative positioning, apart from range measurements with respect to anchors (i.e., nodes whose positions are perfectly known), unknown nodes also perform range measurements among themselves and exchange aiding data, such as the estimated position and the estimated probability density function or the corresponding estimated uncertainty. The cooperation among mobile nodes is beneficial for network localization [1]: both positioning accuracy and availability are improved. One important aspect of cooperative localization is how to appropriately take into account the uncertainty of unknown nodes' positions. This task has already been investigated mostly in line-of-sight (LoS) conditions [1], where ranging errors are relatively small and the corresponding uncertainty can be well modeled. However, in NLoS conditions, ranging errors are much larger and more irregular, and cooperative localization processes may diverge if the NLoS state associated with range measurements is not identified.

This paper focuses on cooperative localization in NLoS scenarios and adopts a cooperative approach based on a modified version of the belief propagation (BP) algorithm [1], [7]. The proposed algorithm estimates mobile positions and the status of range measurements in parallel. Moreover, it analyzes the positioning bounds in NLoS environments and uses them to check the results of position estimation and NLoS identification.

The rest of this paper is organized as follows. Sec. II introduces the measurement models and derives the cooperative Cramer-Rao lower bound (CRLB) of the positioning error in NLoS scenarios. Sec. III describes the proposed cooperative NLoS detection and positioning algorithm based on the belief propagation (BP) approach [1], [7]. Finally, Sec. IV presents simulation results and Sec. V draws conclusions.

II. MEASUREMENTS MODELING AND CRLB

A. Measurements Models

Concerning range measurement models, in this work the models presented in [7] have been adopted, as they are extracted from experimental measurements using UWB modules [4].

1) LoS Model: Range measurements in LoS condition are assumed to be Gaussian distributed:

r = d + n_los,    (1)

where d is the exact distance between the two nodes involved in the measurement, and n_los is a Gaussian distributed noise, n_los ∼ N(0, σ²), with zero mean and standard deviation σ = 0.25 m.

2) NLoS Model: Range measurements in NLoS condition are modeled as:

r = d + n_nlos,    (2)


where n_nlos is the measurement noise, assumed to be exponentially distributed, p_nnlos(x) = λ exp(−λx) for x ≥ 0, with rate parameter λ = 0.38 m⁻¹.
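The two ranging models can be simulated directly; the sketch below draws synthetic LoS and NLoS ranges for an arbitrary true distance of 10 m and checks the expected NLoS positive bias of 1/λ ≈ 2.63 m.

```python
import numpy as np

rng = np.random.default_rng(1)
d_true = 10.0  # illustrative true node-to-node distance (m)

# LoS: Gaussian noise with sigma = 0.25 m (Eq. 1).
r_los = d_true + rng.normal(0.0, 0.25, 10000)

# NLoS: exponential noise with rate lambda = 0.38 1/m (Eq. 2),
# i.e. mean positive bias 1/0.38 ~ 2.63 m.
r_nlos = d_true + rng.exponential(1.0 / 0.38, 10000)

print(r_los.mean(), r_nlos.mean())  # ~10.0 and ~12.6
```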

B. State Definition

Let s_n→m be the state associated with the range measurement r_n→m from neighbor n to mobile m. The state s_n→m can assume the value 0 if the corresponding range measurement is performed in LoS condition, or 1 in NLoS condition. As a consequence, P(s_n→m = 0) + P(s_n→m = 1) = 1.

Based on the above definitions, and assuming that the states associated with range measurements are not known a priori, the likelihood function of the range measurement can simply be expressed as the weighted sum over the state:

p(r_n→m | x_m, x_n) = Σ_{i=0}^{1} P(s_n→m = i) p(r_n→m | x_m, x_n, s_n→m = i),    (3)

where x_m = [x_m, y_m] is the position of the mobile m and x_n = [x_n, y_n] the position of the neighbor n. Note that the likelihood function can be either a normal distribution or an exponential one, depending on the link condition:

p(r_n→m | x_m, x_n, s_n→m) =
    (1/√(2πσ²)) exp(−(r_n→m − ‖x_n − x_m‖)² / (2σ²)),   s_n→m = 0,
    λ exp(−λ (r_n→m − ‖x_n − x_m‖)),                     s_n→m = 1,    (4)

where ‖·‖ denotes the Euclidean distance.

Some NLoS identification techniques presented in the literature are based on the processing of the received signal [3], [4], but they are too complex and not feasible to implement on cheap devices. Since range measurements are correlated with the position of the mobile m, it is efficient to carry out mobile position estimation and NLoS identification for all the involved range measurements in parallel, as presented in Sec. III.
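A numerical sketch of the mixed likelihood (Eqs. 3-4), with σ and λ taken from the measurement models above; the range, distance and NLoS prior values are illustrative.

```python
import math

SIGMA = 0.25  # LoS noise std (m), from Eq. (1)
LAM = 0.38    # NLoS rate parameter (1/m), from Eq. (2)

def likelihood(r, dist, p_nlos):
    """Weighted sum of the Gaussian (LoS) and exponential (NLoS)
    likelihood components, as in Eq. (3)-(4)."""
    g = math.exp(-(r - dist)**2 / (2 * SIGMA**2)) / math.sqrt(2 * math.pi * SIGMA**2)
    e = LAM * math.exp(-LAM * (r - dist)) if r >= dist else 0.0
    return (1.0 - p_nlos) * g + p_nlos * e

# A 2 m positive ranging bias is essentially only explainable by the
# exponential (NLoS) component, since 2 m >> sigma.
lik = likelihood(12.0, 10.0, p_nlos=0.5)
print(lik)
```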

C. Cramer-Rao Lower Bound

The Cramer-Rao lower bound (CRLB) expresses a lower bound on the variance of any unbiased estimator. In localization, this information can be used to determine the maximum achievable positioning accuracy in a given scenario. It can also be used during the on-line estimation process to select the closest set of neighbors able to meet the required positioning accuracy while minimizing the energy spent on ranging [9]. In fact, following this approach, the transmission power is adaptively adjusted to reach the selected neighbors.

In cooperative localization [10], the available set of range measurements can be written as:

Z = { {r_a→m}_{a∈A_m}, {r_n→m}_{n∈M_m} }_{m∈M}.    (5)

Let A and M denote the full sets of anchors and mobiles, respectively, in the network. In (5), A_m ⊆ A and M_m ⊂ M are the sets of anchors and mobiles, respectively, connected to m. The corresponding log-likelihood function is given by:

log p(Z|X) = Σ_{m∈M} Σ_{a∈A_m} log p(r_a→m | x_m, x_a) + Σ_{m∈M} Σ_{n∈M_m} log p(r_n→m | x_m, x_n),    (6)

where X is the set of mobiles' positions, that is, X = [x_1, x_2, ..., x_M], with M the cardinality of M. The CRLB is obtained by inverting the Fisher information matrix (FIM), which is given by the negative expectation of the second-order derivatives of the log-likelihood function:

F = −E[ ∂²/∂X² log p(Z|X) ].    (7)

From (6) and (7), the global FIM can be decomposed as the sum of two matrices: the first takes into account links between mobiles and anchors, while the second considers links among mobiles (see [10] for more details):

F = F_anch + F_mob.    (8)

In particular, F_anch is a block diagonal matrix whose values depend on the anchor measurements (9). On the contrary, F_mob is not a block diagonal matrix, as it depends on the partial derivatives among mobiles (10):

F_anch = diag( F_anch_1, F_anch_2, ..., F_anch_M ),    (9)

F_mob = ⎡ F_mob_1   K_12   ...   K_1M ⎤
        ⎢ K_21    F_mob_2  ...   K_2M ⎥    (10)
        ⎢   ⋮        ⋮      ⋱      ⋮  ⎥
        ⎣ K_M1     K_M2    ...  F_mob_M ⎦

where K_mn is a zero matrix if there is no measurement between n and m.

Considering that a generic range measurement from mobile m to an anchor a can be performed either in LoS or NLoS condition, the set of anchors connected to the mobile m, A_m, can be subdivided into two subsets: the LoS subset, denoted A^los_m, and the NLoS subset, denoted A^nlos_m. Therefore, the matrix F_anch_m can be expressed as the sum of two matrices corresponding to these subsets:

F_anch_m = F_anch,los_m + F_anch,nlos_m,    (11)

where Fanch losm and Fanch los

m are given by:

Fanch losm =

∑a∈Alos

m

1

σ2d2am

[∆x2am ∆xam∆yam

∆yam∆xam ∆y2am

], (12)

Fanch nlosm =

∑a∈Anlos

m

λ

d3am

[−∆y2am ∆xam∆yam

∆yam∆xam −∆x2am

], (13)


where Δx_am and Δy_am are the differences of the x and y components, respectively, between nodes a and m, i.e. Δx_am = x_a − x_m and Δy_am = y_a − y_m, while d_am is the Euclidean distance:

d_am = ‖x_a − x_m‖ = √(Δx²_am + Δy²_am).    (14)

Note that (12) is obtained by second-order differentiation of the Gaussian distribution, where σ is the noise standard deviation, while (13) is the second-order derivative of the exponential distribution, where λ is the rate parameter. In (13) the diagonal elements carry a negative sign, which means that NLoS measurements decrease the Fisher information and have a negative effect on the positioning performance.

Following the same approach, the matrix F_mob_m that takes into account the connections among mobiles is given by:

F_mob_m = F_mob,los_m + F_mob,nlos_m,    (15)

F_mob,los_m = Σ_{n∈M^los_m} 1/(σ² d²_nm) ⎡  Δx²_nm        Δx_nm Δy_nm ⎤    (16)
                                         ⎣ Δy_nm Δx_nm    Δy²_nm     ⎦

F_mob,nlos_m = Σ_{n∈M^nlos_m} λ/d³_nm ⎡ −Δy²_nm        Δx_nm Δy_nm ⎤    (17)
                                      ⎣ Δy_nm Δx_nm   −Δx²_nm     ⎦

Concerning the correlation block K_mn, if there is a measurement between nodes n and m, it can be calculated as:

K_mn = −1/(σ² d²_nm) ⎡  Δx²_nm        Δx_nm Δy_nm ⎤,   s_n→m = 0,
                     ⎣ Δy_nm Δx_nm    Δy²_nm     ⎦

K_mn = −λ/d³_nm ⎡ −Δy²_nm        Δx_nm Δy_nm ⎤,   s_n→m = 1.
                ⎣ Δy_nm Δx_nm   −Δx²_nm     ⎦

Let J be the inverse matrix of the FIM and J_m the 2 × 2 block related to the mobile m; then the CRLB for mobile m can be calculated as:

Ω_m ≜ √( J_m(1,1) + J_m(2,2) ).    (18)

As can be observed from (13) and (17), the presence of NLoS measurements makes the Fisher information decrease; as a consequence, the variance of the position error increases. In fact, the more severe the NLoS condition, the larger the localization error. This effect will be shown in the simulation results.
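As a hedged illustration of Eqs. (12) and (18), the anchor-only, all-LoS special case of the CRLB can be computed by assembling the FIM from the per-anchor blocks and inverting it. The anchor geometry below mimics a square deployment but the exact numbers are illustrative, not from the paper.

```python
import numpy as np

SIGMA = 0.25  # LoS ranging std (m)

def crlb_los(anchors, mobile):
    """CRLB (Eq. 18) for one mobile with LoS anchor links only (Eq. 12)."""
    F = np.zeros((2, 2))
    for a in anchors:
        dx, dy = a[0] - mobile[0], a[1] - mobile[1]
        d2 = dx * dx + dy * dy
        F += np.array([[dx * dx, dx * dy],
                       [dy * dx, dy * dy]]) / (SIGMA**2 * d2)
    J = np.linalg.inv(F)  # inverse FIM
    return np.sqrt(J[0, 0] + J[1, 1])

# Illustrative geometry: 4 corner anchors plus a central one (20x20 m).
anchors = np.array([[0., 0.], [20., 0.], [20., 20.], [0., 20.], [10., 10.]])
crlb = crlb_los(anchors, np.array([5., 5.]))
print(round(crlb, 2))  # roughly 0.24 m for this geometry
```

Adding NLoS links would subtract information via the negative-diagonal blocks of Eq. (13), raising this bound.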

III. MESSAGE PASSING ALGORITHM

Since there is no prior information about the state of each range measurement, the basic idea would be to use range measurements to infer first the mobile's position, and then the state of the range measurements. Alternatively, in order to improve positioning accuracy, both mobile positions and link states can be estimated in parallel through some iterations of the BP algorithm. However, this approach has some drawbacks. One is the network traffic generated by the cooperation packets (note that the size of the messages depends on the number of particles used to approximate the distributions). Another drawback is the computational effort required to calculate the integral of the neighbor's belief. The proposed algorithm assumes that the belief of the mobile's position is Gaussian distributed, so the mobile just needs to send its neighbors the estimated position and the corresponding uncertainty. This approach, known as expectation propagation (EP) [11], is an approximation of the BP algorithm. Based on this assumption, we propose an NLoS identification and positioning algorithm, namely the cooperative NLoS identification and positioning algorithm (CIDP). In the following sections, the message passing for a generic mobile m is introduced.

A. Incoming Messages

The localization approach is realized by a factor graph, as shown in Fig. 1. In particular, the joint posterior distribution can be factorized into messages from anchor nodes and mobile neighbors.

Fig. 1. Factor graph for cooperative positioning.

1) Message from Anchor: The incoming message from an anchor a ∈ A_m is proportional to the integral of the product of the likelihood function and the belief of the anchor, which is a Dirac delta function centered on x_a, i.e. b(x_a) = δ(x − x_a):

μ_a→m ∝ ∫ p(r_a→m | x_m, x_a) b(x_a) dx_a = p(r_a→m | x_m, x_a).    (19)

When referring to more than one state, the likelihood function can be calculated using (3); thus p(r_a→m | x_m, x_a) becomes:

p(r_a→m | x_m, x_a) = Σ_{i=0}^{1} P(s_a→m = i) p(r_a→m | x_m, x_a, s_a→m = i).    (20)

2) Message from Mobile Neighbor: Similarly, the incoming message from a mobile neighbor can be expressed as:

μ_n→m ∝ ∫ p(r_n→m | x_m, x_n) b(x_n) dx_n.    (21)

Since the mobile neighbor's position x_n has a certain uncertainty, the belief b(x_n) is not a Dirac delta function. In principle, it can be represented by the distribution of the samples; thus, the calculation in (21) is too complex to be performed. In order to simplify that calculation, some approaches presented in [7] assume that b(x_n) is a Gaussian function. In this paper, to further reduce the complexity, the belief of the mobile neighbor n is approximated as a Dirac delta function


(i.e. as if it were an anchor, b(x_n) ≈ δ(x − x̂_n)). To compensate for this strong approximation, the position uncertainty associated with neighbor n is treated as additional noise on the range measurement r_n→m. More specifically, the variance associated with ranging (given by σ² for LoS measurements and 1/λ² for NLoS measurements) is increased by the position uncertainty of the mobile's neighbor. For simplicity, this uncertainty is computed as the trace of the estimated covariance matrix [12], i.e. trace(P_n). As a consequence, the new parameters σ_nm and λ_nm to be used in the likelihood function are given by (22) and (23), respectively.

σ_nm = √( σ² + trace(P_n) ),    (22)

λ_nm = λ / √( 1 + λ² trace(P_n) ).    (23)

In conclusion, by using the above approximation, the incoming message is given by:

μ_n→m ∝ p(r_n→m | x_m, x̂_n),    (24)

where p(r_n→m | x_m, x̂_n) is the likelihood function evaluated with the new modified parameters σ_nm and λ_nm, which take into account the uncertainty of the mobile neighbor n.

B. Position Estimate

When all the messages from the anchors and the mobile's neighbors are available, the mobile node can calculate its belief b(x_m), which is proportional to the product of all the incoming messages and the a priori pdf p(x_m):

b(x_m) ∝ p(x_m) Π_{a∈A_m} μ_a→m(x_m) × Π_{n∈M_m} μ_n→m(x_m),    (25)

where μ_a→m(x_m) and μ_n→m(x_m) are calculated using (19) and (21), respectively. After that, the estimated position is calculated as the mean of the belief distribution, while the estimated covariance matrix P_m is calculated using the set of particles, as reported in [12]. Therefore, the belief is approximated with a Gaussian distribution, and the related parameters, i.e. the mean value and the trace of P_m, are broadcast to the neighbors.

C. Outgoing Messages

The outgoing message is simply proportional to the belief divided by the incoming message from a specific factor node.

1) Messages to Anchor: The message from the mobile to an anchor node is:

μ_m→a(x_m) ∝ b(x_m) / μ_a→m(x_m).    (26)

The state probability is defined as the integral of the product of the likelihood and the message from the mobile:

P(s_a→m) = ∫ p(r_a→m | x_m, x_a, s_a→m) μ_m→a(x_m) dx_m.    (27)

By applying the assumption that b(x_m) is a delta function, the previous equation can be simplified as:

P(s_a→m) ≈ p(r_a→m | x_m, x_a, s_a→m) / μ_a→m(x_m).    (28)

Since the probabilities over the states of one range measurement must be normalized, the LoS or NLoS probability can be further simplified as:

P(s_a→m) = P(s_a→m) / Σ_{i=0}^{1} P(s_a→m = i)
         = p(r_a→m | x_m, x_a, s_a→m) / Σ_{i=0}^{1} p(r_a→m | x_m, x_a, s_a→m = i).    (29)

Based on the previous assumption, the message coming from the mobile is not necessary to decide the range measurement state. In fact, only the estimated position and the corresponding trace are needed to compute the NLoS probability.

2) Messages to Mobile: The outgoing message to a mobile, μ_m→n, is similar to the one to an anchor, but it cancels out when calculating the NLoS state. Therefore, it is not calculated in the implementation of the algorithm. Similarly, the LoS or NLoS probability is given by:

P(s_n→m) = p(r_n→m | x_m, x̂_n, s_n→m) / Σ_{i=0}^{1} p(r_n→m | x_m, x̂_n, s_n→m = i).    (30)

Finally, a hard decision is made when the algorithm converges: for a given range measurement, if P(s_n→m = 1) is larger than 0.5, the measurement is assumed to be in NLoS state; otherwise, it is in LoS state.
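The normalized NLoS probability of Eq. (30) and the hard decision can be sketched as follows; the Gaussian and exponential components reuse σ = 0.25 m and λ = 0.38 m⁻¹ from the measurement models, and the 2 m range bias is illustrative.

```python
import math

SIGMA = 0.25  # LoS noise std (m)
LAM = 0.38    # NLoS rate parameter (1/m)

def p_nlos(r, dist):
    """Normalized NLoS probability, Eq. (30): the exponential
    likelihood divided by the sum over both link states."""
    g = math.exp(-(r - dist)**2 / (2 * SIGMA**2)) / math.sqrt(2 * math.pi * SIGMA**2)
    e = LAM * math.exp(-LAM * (r - dist)) if r >= dist else 0.0
    return e / (g + e)

# A 2 m positive bias (>> sigma) is classified as NLoS by the
# hard-decision rule P(s=1) > 0.5.
p = p_nlos(r=12.0, dist=10.0)
print(p > 0.5)  # True
```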

The fact that the belief of the mobile's position is approximated with a Dirac delta function may result in inaccurate position estimates in NLoS conditions. However, the computational complexity and network traffic are greatly reduced, making the proposed algorithm suitable for distributed localization and feasible to implement on mobile devices with low computational capability.

IV. SIMULATION RESULTS

The performance of the proposed CIDP algorithm is tested by MC simulations. The simulated scenario is a typical office environment of size 20 × 20 meters, and the wireless network is a small-scale network composed of 15 nodes. Five of them are anchors deployed at the four corners and in the center of the environment in order to provide a good geometry for localization (see Fig. 2). The remaining ten nodes are static unknown nodes whose positions are randomly selected in each run of the simulation. The radio connectivity range is 20 meters. Since the NLoS condition is generated by obstacles, symmetric links are assumed between unknown nodes; e.g., if r_n→m is in NLoS state then r_m→n is also in NLoS state, but the two range measurements are not identical because of the measurement noise.

Three positioning algorithms have been tested and compared. The first is the sum-product algorithm over a wireless network (SPAWN) proposed in [1], a generic belief propagation algorithm for localization. It is supposed to


Fig. 2. Simulation environment. Blue squares are anchor nodes and are fixed. Red dots are unknown nodes and differ for each MC run.

have no knowledge of the NLoS states, and is denoted SPAWN-NLoS-U. The second one is the SPAWN variant proposed in [7], which is supposed to perfectly know the NLoS states, denoted SPAWN-NLoS-K. The last one is the proposed C-NDP algorithm. 1000 MC runs have been performed for each chosen NLoS probability, and the root mean square of the positioning errors (RMSE) has been calculated for performance comparison.

Fig. 3 shows the positioning performance of the above-mentioned algorithms and the corresponding CRLB. As can be observed, the presence of NLoS conditions greatly increases the positioning errors. If this is not properly accounted for, the standard belief propagation algorithm can diverge. The proposed C-NDP algorithm is about 0.5 meter worse than SPAWN-NLoS-K, but it does not require knowing whether a range measurement is in LoS or NLoS condition. Furthermore, the estimated CRLB, which uses the estimated positions and estimated NLoS status, bounds the positioning errors well. Hence this bound can provide some insight into the positioning accuracy and can be used in an energy-efficient positioning algorithm as in [9].

Fig. 3. Positioning performance.

The performance of NLoS identification is presented in Fig. 4 and Fig. 5. In particular, Fig. 4 shows the detection error rate for each range measurement and Fig. 5 shows the estimated NLoS probability of all the measurements. The detection performance for measurements from anchors and from mobile neighbors shows similar behavior. The error rate is highest when the NLoS probability is around 0.6, which indicates that the proposed algorithm has a high missed-detection rate when NLoS and LoS links are roughly equally distributed. At low NLoS probability, the detection performance for mobile measurements is slightly better than that for anchor measurements, due to the simulated symmetric links. At high NLoS probability, the detection performance for anchor measurements is better, because of the increased uncertainty of the neighbors' positions caused by the bad NLoS range measurements.

Fig. 4. State detection error rate.

Fig. 5. NLoS probability estimate.

As can be observed from Fig. 5, the estimated NLoS probability is close to the real probability. When the NLoS probability is smaller than 0.6, the proposed algorithm overestimates it; when the probability is larger than 0.6, the algorithm underestimates it. This is because the detection is based on position estimates: if there are enough LoS range measurements, the range measurements with large errors will be identified as NLoS; but if there are not enough LoS range measurements, the range measurements with small errors will be identified as LoS. When the NLoS probability goes up to 0.8, the detection errors become larger, which means the LoS range measurements are no longer sufficient to localize the nodes. In full NLoS conditions, the error in probability detection is around 0.17. The reason is that the range error may not be large even in NLoS condition; e.g., the probability of an NLoS ranging error of less than 0.5 meter is about 0.17, which coincides with the estimated probability error in the full NLoS condition.

V. CONCLUSIONS AND FUTURE WORK

This paper analyzed the CRLB of cooperative localization in the presence of NLoS range measurements and proposed a cooperative NLoS identification and positioning algorithm. The proposed algorithm is fully distributed, with low complexity and low network traffic, and does not require prior information on the NLoS state. Simulation results showed that the proposed algorithm is able to detect NLoS range measurements and to improve positioning accuracy in NLoS conditions. However, there is a large gap between the existing NLoS positioning algorithms and the CRLB; future work will address how to narrow this gap. Moreover, an energy-efficient positioning algorithm for NLoS environments can be developed based on the proposed CRLB formulae, as in [9].

REFERENCES

[1] H. Wymeersch, J. Lien, and M. Z. Win, "Cooperative Localization in Wireless Networks," Proceedings of the IEEE, vol. 97, no. 2, pp. 427–450, Feb. 2009.

[2] S. Gezici, H. Kobayashi, and H. V. Poor, "Nonparametric nonline-of-sight identification," in Vehicular Technology Conference, VTC 2003-Fall, vol. 4, Oct. 2003, pp. 2544–2548.

[3] I. Guvenc, C.-C. Chong, F. Watanabe, and H. Inamura, "NLOS Identification and Weighted Least-Squares Localization for UWB Systems Using Multipath Channel Statistics," EURASIP Journal on Advances in Signal Processing, no. 1, 2008.

[4] H. Wymeersch, S. Marano, W. M. Gifford, and M. Z. Win, "A Machine Learning Approach to Ranging Error Mitigation for UWB Localization," IEEE Transactions on Communications, vol. 60, no. 6, pp. 1719–1728, June 2012.

[5] K. Yu and Y. J. Guo, "Improved Positioning Algorithms for Nonline-of-Sight Environments," IEEE Transactions on Vehicular Technology, vol. 57, no. 4, pp. 2342–2353, July 2008.

[6] H. Liu, F. Chan, and H. C. So, "Non-Line-of-Sight Mobile Positioning Using Factor Graphs," IEEE Transactions on Vehicular Technology, vol. 58, no. 9, pp. 5279–5283, Nov. 2009.

[7] S. Van de Velde, H. Wymeersch, and H. Steendam, "Comparison of message passing algorithms for cooperative localization under NLOS conditions," in 9th Workshop on Positioning Navigation and Communication (WPNC), Mar. 2012, pp. 1–6.

[8] R. M. Vaghefi and R. M. Buehrer, "Cooperative sensor localization with NLOS mitigation using semidefinite programming," in 9th Workshop on Positioning Navigation and Communication (WPNC), Mar. 2012, pp. 13–18.

[9] M. Dai, F. Sottile, M. A. Spirito, and R. Garello, "An energy efficient tracking algorithm in UWB-based sensor networks," in IEEE 8th International Conference on Wireless and Mobile Computing, Networking and Communications (WiMob), Oct. 2012, pp. 173–178.

[10] F. Penna, M. A. Caceres, and H. Wymeersch, "Cramer-Rao Bound for Hybrid GNSS-Terrestrial Cooperative Positioning," IEEE Communications Letters, vol. 14, no. 11, pp. 1005–1007, Nov. 2010.

[11] T. Minka, "Expectation propagation for approximate Bayesian inference," in 17th Conference in Uncertainty in Artificial Intelligence, Aug. 2001, pp. 362–369.

[12] F. Sottile, H. Wymeersch, M. A. Caceres, and M. A. Spirito, "Hybrid GNSS-terrestrial cooperative positioning based on particle filter," in IEEE Global Telecommunications Conference (GLOBECOM), Dec. 2011, pp. 1–5.


Visual Landmark Based Positioning

Hui Chao, Saumitra Das, Eric Holm, Raghuraman Krishnamoorthi, Ayman Naguib

Qualcomm Research

CA, USA

(huichao, saumitra, eholm, raghuram, anaguib)@qti.qualcomm.com

Abstract—In this paper, we discuss a system and algorithms for using storefront logo images as landmark targets for indoor localization. The system searches for known storefront logo imagery as a user pans a smartphone camera or names visible storefronts. As one or more targets are recognized, the location of a user may be estimated by combining image matching results with the visibility information for each storefront on the map. We discuss algorithmic approaches to deal with some of the unique characteristics of storefront logo matching. We discuss algorithms that define visibility information for landmarks in an indoor environment. Finally, we present positioning experiment results in a real shopping mall with our end-to-end positioning system on a phone. Experiments with the system have demonstrated the viability of this approach for a good indoor positioning experience.

Keywords—image based positioning; computer vision; visual landmarks

I. INTRODUCTION

Positioning in a large indoor shopping mall still poses challenges for existing technologies to deliver precise, reliable, and cost-effective location information. In an indoor environment, a mobile device may be unable to reliably receive GPS signals for position estimation. Various techniques have been proposed to obtain a position fix using ultra-sonic, infrared, magnetic, or radio sensors [1]. However, all of these technologies may be limited in their utility, due to the lack of infrastructure that can provide consistent, reliable and robust signal transmitters in some of the venues. Given readily available optical sensors on mobile devices, image based positioning, which requires no change of indoor infrastructure, could be an alternative or complementary approach to these indoor positioning techniques.

Image based positioning has become a popular area of research in recent years due to the advent of smartphones with good connectivity, computing and imaging capabilities. In this approach, environmental components are analyzed and the results are matched against pre-captured and stored data. Image or vision based positioning methods may be categorized into two groups. In the first approach, a user's location may be recognized by simply taking a photo of the nearest street corner or storefront and finding the most similar image in a database with known locations [2-6]. This approach recognizes a location and assumes the camera or the user is located in close vicinity of the structure captured in the image. The second approach is similar to Simultaneous Localization and Mapping (SLAM) used in robotic vision. As a robot moves in an unknown environment, it builds up a map containing image features and their precise 3D locations [7-8]. This 3D feature map is then used to determine the location, with accurate pose estimation, by matching previously recorded image features in the database with those in the current view. Both approaches require a database of images, or image features obtained from previously captured images, with registered locations on a 2D or 3D map. The map is typically created by traversal of the environment.

Although previous approaches have proven to be quite effective, attempting to directly match the current scene with previously captured images in a shopping mall presents challenges of deployment and maintenance. This is due to the dynamic nature of shopping venues, where the decoration of the environment may change with seasons or events. Implementing such a solution would require frequent updates of the reference images to ensure data relevance. During our investigation, we observed that even in these dynamic and noisy environments, a storefront logo that represents the brand signature of a retailer exhibits some key properties that allow for easier deployment, and could provide a more robust solution for visual landmark based positioning. First, a logo is typically visually consistent across different venues, and is unique and stable over time. Second, publicly available database samples make the harvesting of reference images easier and more effective. Third, information about a store, such as its name, location and entrances, is often provided on the venue map. A storefront logo image is often placed at the entrance, in a direction that is parallel or perpendicular to the wall of the storefront, as shown in figure 1. Therefore, the location of a storefront image can be easily registered on a 2D map.

Although brand images are robust landmarks for positioning in a shopping mall, they also pose some challenges. First, without a detailed survey of the environment, the exact dimensions of a storefront logo image may not be known. Second, when the user pans a camera to capture a storefront scene, a logo may only occupy a small region of the image; i.e., relevant feature points may be concentrated in a relatively small area instead of being evenly distributed around the whole image. Furthermore, logo images often consist of high contrast edges without much texture variety or detail. Logo images often have repetitive patterns and large variations in illumination due to special lighting placed behind and around the logo. All of these make detection and pose estimation very difficult. To overcome some of these challenges, a method may be used that combines landmark information with trilateration for location estimation. On the map, regions from which a storefront is visible are derived from the topological 2D layout of the architectural structure. This region is called the visibility map. It provides the baseline information of the possible locations of the user on the map, given that a landmark is visible. As more landmarks are recognized, the possible locations of the user become refined, and the position estimation becomes more accurate.

Figure 1. Example of a typical indoor shopping mall where brand logo images are placed on the storefronts.

In the following sections, we first discuss possible algorithmic approaches to deal with some of the unique characteristics of storefront logo matching. We then describe an algorithm that computes the visibility map for a storefront on a 2D map. Finally, we present positioning experiment results in a real shopping mall with our end-to-end positioning system on a phone.

II. LANDMARK RECOGNITION

A storefront logo provides a unique and stable landmark for positioning in a shopping mall. It comes from a small set of brand images, which may be available from various sources including the website of the retailer. Stores of the same retailer in different geographic locations often have a similar appearance for the purpose of consistent branding. These factors make the harvesting of reference landmarks easier and more effective. A typical landmark database may consist of image features extracted from logos that represent 20 to 100 stores in a typical indoor shopping mall.

Brand image recognition has many useful applications. Various methods have been proposed using local and global feature matching [9-12]. However, it continues to be a challenging area; a combination of different approaches may be needed to develop a robust detector.

A. Image matching with SIFT-like features

In our study, an object detector suitable for wide baseline matching was used [13, 14]. This detector looks for scale invariant key points and performs a sliding window search to determine the presence and pose of a storefront logo from a database of reference logo images. Figure 2 shows two matching examples. In our experiment, with 24 reference logo images and 400 test images of varying quality, the average recognition rate was 84%. However, as the database size increases, the false positive rate increases. With 53 reference and 1000 test images, the recognition rate degraded to 62%. The experiment was performed on a PC with exhaustive search for matching. A real-time mobile solution for storefront logo recognition will require a faster matching algorithm and further improvements in detection rate, by creating more robust feature points from training data and selecting more discriminative feature points when computing and memory resources are limited.

Brand logo detection can be regarded as a special case of object detection. Unlike natural scene images, which are rich in texture details, brand images often lack texture variation, and therefore provide fewer key feature points for matching. In addition, in this use case, reference and test images may be acquired with different resolution, size, quality, and illumination conditions. These factors combine to make logo detection more challenging.

Figure 2. Image matching results using local invariant feature matching. Corresponding key points are connected with green lines. In the bottom example, matching key points were found even with occlusion.

B. OCR

Some brand logos contain the names of retailers that correspond to the store names on the map. Thus, OCR is a natural choice for detecting some of these landmarks, typically utilizing a relatively small vocabulary containing all of the store names in the venue. An initial experiment [15] suggested that OCR is potentially useful for detecting brand logos. In our preliminary study, although image matching outperformed OCR in most of the cases, OCR performed well for simple text logos such as “Aldo” that don’t exhibit much texture variety and are written in a common font style.

C. User explicit naming

There may be situations where landmark recognition with computer vision fails, in which case, an explicit naming of the visible storefronts by the user may be used to estimate a user’s location.


III. VISIBILITY MAP OF A LANDMARK

In some of the proposed methods [7, 8], pose estimation provided accurate positioning information. However, pose estimation is often not robust, or is sometimes not possible when the exact dimensions of a storefront logo image are not known, or when feature points are concentrated in a relatively small part of the overall image as shown in figure 2. These factors make pose estimation increasingly prone to errors and generally unreliable. Also, pose information is typically unknown when performing logo recognition with OCR or with a user's explicit naming. To overcome these challenges, trilateration using a landmark visibility map can be utilized for location estimation. We consider the case where a user is in an open or hallway area in a shopping mall (not inside a particular store).

To compute the visibility map, the hallway area is first identified on the venue map; then, for each storefront, its visibility is inferred based on map analysis.

Figure 3. (a) A shopping mall map. (b) Open and hallway areas are identified and highlighted in blue. (c) The regions from which the storefront of Store_A can be visible are highlighted in orange.

A. Identify the Hallway Region.

A 2D map with store information for a shopping mall was obtained from a commercially available map provider's web site, in combination with the venue's visitor map. The 2D venue map is first converted to a black and white binary image. Walls and doors are depicted as black pixels and open areas as white pixels, as in figure 3a. Then, the indoor boundary is identified as the largest enclosed area after morphological "close" and "fill" operations [16]. Within this indoor area, connected white components that represent the walkable area on the map are identified and ranked based on the sizes of their bounding boxes. The top connected component will typically be identified as the hallway region. An example is shown in figure 3b.
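The map-analysis steps above can be sketched with standard morphology tools. This is a sketch under stated assumptions: the use of `scipy.ndimage` and the structuring-element size are choices of this illustration, not the paper's implementation.

```python
import numpy as np
from scipy import ndimage

def hallway_mask(binary_map):
    """Identify the hallway region on a binarized venue map, following the
    steps described above (a sketch; the 3x3 structuring element is an
    assumption). binary_map: 2D bool array, True = open space, False = walls.
    """
    # Morphological "close" on the walls, then "fill" to obtain the enclosed
    # indoor area bounded by the building outline.
    walls = ndimage.binary_closing(~binary_map, structure=np.ones((3, 3)),
                                   border_value=1)
    indoor = ndimage.binary_fill_holes(walls)
    # Walkable area = open pixels inside the indoor boundary.
    walkable = indoor & binary_map
    # Connected components of walkable space, ranked by bounding-box size.
    labels, n = ndimage.label(walkable)
    boxes = ndimage.find_objects(labels)
    def box_area(sl):
        return (sl[0].stop - sl[0].start) * (sl[1].stop - sl[1].start)
    best = max(range(n), key=lambda k: box_area(boxes[k]))
    return labels == best + 1  # hallway = component with largest bounding box
```

On a toy map with an outer wall and one dividing wall, the function returns the larger of the two open regions as the hallway.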

B. Infer the Visible Region of a Storefront

The visibility map of a storefront refers to the area from which this particular storefront may be visible to the user. Similar to computing the field of view from the location of the storefront, this region is approximated on a 2D venue map using ray tracing. All hallway-region points that are in the line of sight of the storefront are identified as visible points. Figure 3c shows the visibility map of Store_A with two storefronts.

IV. LOCATION ESTIMATION

A user’s location may be estimated using visibility maps of the identified storefronts or landmarks. As one or more landmarks are detected at a user’s location, the location of the user or mobile device may be estimated as a function of the overlapping visibility regions. The estimation can be simply derived from the center of mass of the overlapping regions. This estimation may be improved by taking into account the likelihood of where the user might stand. This likelihood can be estimated based on (1) the pose estimation and associated confidence in the case of using computer vision, or (2) the best viewing angle and distance from which a landmark can be seen. An example is shown in figure 4. For the two identified storefronts in figure 2, their visibility regions are highlighted in figure 4a. The user’s location can be estimated from the overlapping area as shown in figure 4b.
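A minimal sketch of this estimation step, assuming the visibility maps are boolean rasters on a common grid (the union fallback for non-intersecting regions is an assumption of this sketch):

```python
import numpy as np

def estimate_location(visibility_maps, px_per_meter=1.0):
    """Estimate the user position as the center of mass of the overlap of
    the visibility maps of all recognized landmarks, as described above."""
    overlap = np.logical_and.reduce(visibility_maps)
    if not overlap.any():
        # fall back to the union if the regions do not intersect
        overlap = np.logical_or.reduce(visibility_maps)
    rows, cols = np.nonzero(overlap)
    # centroid of the overlapping region, converted to map units
    return rows.mean() / px_per_meter, cols.mean() / px_per_meter
```

Weighting the overlap by a likelihood of where the user might stand, as the text suggests, would replace the uniform centroid with a weighted mean.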

Figure 4. Location estimation based on two detected landmarks. (a) Nordstrom has two entrances and two visible regions, which are highlighted in blue. J.Jill store's visibility region is highlighted in green. (b) The user's location, highlighted as the red dot, is estimated based on the overlapping area of the visibility regions.

V. POSITIONING EXPERIMENT AND RESULTS

A mobile location system was developed that takes one or more identified landmarks and outputs the user's location. A database containing the names, locations and orientations of all the storefronts on a 2D map was created offline and stored in memory on the mobile device. The visibility region for each storefront was then computed in real time on the mobile device. As one or more landmarks were identified, the location of the user was estimated and refined.

978-1-4673-1954-6/12/$31.00 ©2012 IEEE

An experiment was performed with this system. Data was collected in a real shopping mall of 70,000 m2 in area, with about 100 shops on the floor. Twenty-six ground truth location points were marked on the venue map. These points were selected so that they could be easily located at the actual venue. A user was asked to visit each of the 26 pre-marked locations, and at each of these locations to name two to four of the most visible storefronts. Since identified landmarks were obtained with the user explicitly naming the visible stores, pose information was not available. The identified storefronts were collected and used to estimate the user's location. The following assumptions were made in the visibility map inference: (1) a storefront is only visible within +85° to -85° from the normal of the storefront on the map; (2) a storefront is visible within 80 meters of the storefront.

Figure 5. Location estimation results compared against ground truth. The length and color of each line indicate the amount of error: the longer the line and the darker the red, the larger the error.

The experimental results are shown in figure 5, where ground truth locations are plotted against the estimated locations. The color and length of each line indicate the amount of error. The overall performance can be seen in the cumulative distribution function (CDF) plotted in figure 6. The positioning error rates with 2 and 3 visual landmarks are compared with the one obtained from a commercially available indoor location application. At 50% of the ground truth locations, the error is about 12 meters given 3 identified landmarks. The consistency and accuracy of the experimental results demonstrated the viability of this approach for a good indoor positioning experience.

Figure 6. CDF of the experiment results with 2 or 3 identified storefronts as visual landmarks for location estimation, in comparison with location results obtained using a commercially available indoor positioning application. The horizontal axis (x) is the positioning error; the vertical axis (y) is the probability that the error is less than or equal to x.

REFERENCES

[1] R. Mautz, "Indoor Positioning Technologies," Habilitation Thesis, ETH Zurich, Feb. 2012.

[2] G. Schindler, M. Brown, and R. Szeliski, "City-scale location recognition," in IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2007.

[3] K. Ni, A. Kannan, A. Criminisi, and J. Winn, "Epitomic location recognition," in CVPR, 2008.

[4] H. Aoki, B. Schiele, and A. Pentland, "Realtime Personal Positioning System for Wearable Computers," in Proc. 3rd IEEE International Symposium on Wearable Computers, October 18-19, 1999.

[5] H. Kawaji, K. Hatada, T. Yamasaki, and K. Aizawa, "Image-Based Indoor Positioning System: Fast Image Matching Using Omnidirectional Panoramic Images," in 1st ACM International Workshop on Multimodal Pervasive Video Analysis, ACM, 2010.

[6] M. Werner, M. Kessel, and C. Marouane, "Indoor positioning using smartphone camera," in International Conference on Indoor Positioning and Indoor Navigation (IPIN), 2011.

[7] S. Se, D. Lowe, and J. Little, "Vision-based Mobile Robot Localization and Mapping using Scale-Invariant Features," in ICRA, 2001.

[8] X. Li, J. Wang, A. Olesk, N. Knight, and W. Ding, "Indoor Positioning within a Single Camera and 3D Maps," in Ubiquitous Positioning Indoor Navigation and Location Based Service (UPINLBS), 2010.

[9] C. Constantinopoulos, E. Meinhardt-Llopis, Y. Liu, and V. Caselles, "A robust pipeline for logo detection," in ICME, 2011.

[10] L. Ballan, M. Bertini, A. Del Bimbo, and A. Jain, "Automatic trademark detection and recognition in sport videos," in Proc. ICME, 2008.

[11] J. Schietse, J. P. Eakins, and R. C. Veltkamp, "Practice and challenges in trademark image retrieval," in Proc. Int. Conf. on Image and Video Retrieval, 2007.

[12] F. Pelisson, D. Hall, O. Riff, and J. L. Crowley, "Brand identification using gaussian derivative histograms," in Proc. of the Int. Conf. on Computer Vision Systems, 2003.

[13] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, 2004.

[14] https://developer.qualcomm.com/mobile-development/mobile-technologies/computer-vision-fastcv

[15] https://developer.vuforia.com/resources/sample-apps/text-recognition

[16] R. C. Gonzalez and R. E. Woods, "Digital Image Processing," 2nd Edition, 2002, ISBN-10: 0201180758.



- chapter 3 -

Fields, Waves & Electromagnetics


RFID System with Tags Positioning based on Phase Measurements

Igor Shirokov

Dept. of Radio Engineering

Sevastopol National Technical University

Sevastopol, Ukraine

[email protected]

Abstract—The problem of radio frequency identification is highly relevant in modern life. The state-of-the-art approach assumes not only identifying the tag(s) but positioning them as well. The author has recently been developing a homodyne method of useful-signal detection that allows carrying out phase measurements in a microwave band in a very simple manner. In turn, the phase progression at microwave propagation contains information about the link length, which opens a good opportunity for object positioning with high accuracy. An RFID system with tag identification and positioning is considered in the paper. The homodyne detection of a low-frequency signal provides the tag identification. The phase method of distance measurement forms the basis of tag positioning. Interrogators are placed at fixed points of a room and radiate low-power microwave signals. The transponders move and have to be identified and located. The transponders shift the frequencies of the microwave signals (each transponder by its own frequency shift, which identifies the transponder) and reradiate the frequency-transformed microwave signals back in the directions of the interrogators. Each interrogator selects the low-frequency difference signals and measures the phase differences between these signals and the reference one. Based on these measurements, the distances to the transponders are calculated. Some aspects of transponder and interrogator design are discussed in the paper. The use of a one-port resonant transistor amplifier in the transponder improves the technical features of the entire system. The use of separate antennas for transmitting and receiving improves the decoupling of these channels and also improves the sensitivity of the entire system. The algorithm of distance determination based on phase measurements is discussed in the paper. Serially changing the frequency of the microwave signal from 1292.5 MHz to 1302.5 MHz allows unambiguous determination of a distance of up to 30 m (60 m of two-path propagation) with high resolution.

Keywords—RFID; homodyne detection; microwave phase measurements; one-port transistor amplifier; microwave phase shifter; patch antennas

I. INTRODUCTION

Besides tag identification in RFID systems, microwave propagation offers a good opportunity for tag positioning. The use of the pulse radar method for measuring distances and angles is quite unsuitable for indoor applications: the resolution of this method is too low, and there is a minimal-distance requirement of the pulse radar measurement that is usually larger than the room size. The resolution of the phase method of distance measurement is determined by the microwave wavelength. Depending on the wavelength, one can reach an accuracy of 10 mm and better [1]. In this paper a new method of tag identification and positioning is presented. Position is calculated in terms of distance measurements from the beacons to the transponders (tags). Microwave phase progression measurements are used for these purposes. No doubt, the phase method causes an ambiguity, because phase measurements can only take values in the interval 0-2π. The way of bypassing this problem is discussed in this paper.
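The idea of bypassing the 0-2π ambiguity by measuring at two (or serially stepped) frequencies can be illustrated as follows. This is a sketch of the principle, not the author's exact algorithm: each phase is ambiguous modulo 2π, but the phase difference at two nearby frequencies is ambiguous only modulo the much larger synthetic wavelength c/(f2 − f1), so the unambiguous round-trip path here is about 30 m; smaller serial steps within the band would extend this further.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def distance_from_phases(phi1, phi2, f1=1292.5e6, f2=1302.5e6):
    """Two-frequency phase method (a sketch of the principle).
    phi1, phi2 : measured round-trip phase progressions (radians) at f1, f2,
                 each known only modulo 2*pi
    Returns the round-trip (two-path) length in meters, unambiguous up to
    the synthetic wavelength c / (f2 - f1).
    """
    dphi = (phi2 - phi1) % (2 * math.pi)
    return dphi * C / (2 * math.pi * (f2 - f1))

# example: a tag at 12.5 m, i.e. a 25 m two-path link
d = 25.0
phase_at = lambda f: (2 * math.pi * f * d / C) % (2 * math.pi)
est = distance_from_phases(phase_at(1292.5e6), phase_at(1302.5e6))
```

The coarse estimate from the frequency difference can then be refined by the single-frequency phase, whose fine resolution the text quotes as 10 mm and better.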

The task of simultaneously positioning several tags is often required. In this case the problem of differentiating the tags appears. Furthermore, the electromagnetic compatibility (EMC) of several simultaneously functioning radio units must be taken into account: their simultaneous operation must not deteriorate tag differentiation and positioning. The way of solving this problem is discussed in the paper. Further, tag tracking assumes the radiation of electromagnetic waves. Obviously, the radiated power of the system must be as small as possible, and the radiation of electromagnetic energy from the tags should preferably be excluded. This issue is discussed in the paper as well. Besides the technical and EMC aspects mentioned above, the system must be efficient from an economic point of view. All system units must have the simplest possible design, the hardware installation must not involve essential manpower, and the system power consumption must be as small as possible. In other words, the system must satisfy the demands of the state-of-the-art tendency of so-called "green communication". These aspects are discussed in the paper.

II. APPROACH TO A PROBLEM

A system implementation free from the problems mentioned above assumes the use of the homodyne method of microwave phase measurements, which is well developed in the author's previous works [2], [3]. This approach is developed further in this paper.

Realizing the homodyne method of microwave phase measurement and, consequently, distance determination within a room, we propose to place radio beacons (at least two) along an extended wall at a certain distance b from each other. This distance is the system base. The number of beacons can be higher than two, and they can be


2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31st October 2013

placed along different room walls. The positioning of the tags is characterized by the distances d_ij from the tags to each beacon. All of this eliminates ambiguity in distance determination and additionally ensures system operation over the entire room area, at arbitrary distances from beacon(s) to tag(s). These aspects are not discussed in the paper, however; ambiguity elimination is solved organizationally.

Certainly, the system operates within a single room only. Usually the wall material is either not transparent to microwaves at all (we do not consider wooden walls) or attenuates signals very strongly. In this case additional beacons must be installed in the neighboring room. We do not "lose" a tag when it "enters" the next room: it will be "visible" to the beacons of both rooms while it is in the door aperture.

The transponders (tags) are placed on the objects to be located. The number of objects can be arbitrary, within certain restrictions that will be discussed later. In this paper we discuss the simultaneous operation of two transponders, without changing the approach to the problem in general.

Taking into account the system base b and all of the distances d_ij, we can determine the tag positions in a Cartesian coordinate system with respect to the system base and beacons easily enough.
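The planar geometry just described reduces to standard two-circle intersection. A minimal sketch (not the authors' routine; beacon 1 placed at the origin and beacon 2 at (b, 0) along the wall) might look like:

```python
import math

def tag_position(b, d1, d2):
    """Planar tag position from the distances d1, d2 to two beacons.

    Beacon 1 sits at the origin and beacon 2 at (b, 0) along the wall
    (the system base). Returns (x, y) with y >= 0, i.e. inside the room.
    """
    x = (d1**2 - d2**2 + b**2) / (2.0 * b)
    y_sq = d1**2 - x**2
    if y_sq < 0:
        raise ValueError("distances inconsistent with the base b")
    return x, math.sqrt(y_sq)
```

With a base b = 3 m and measured distances of 4 m and 5 m, for example, the tag lies at (0, 4) in this frame.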

Certainly, object positioning is carried out in a plane only. The mounting heights of the beacon antennas and of the transponder antennas must all be equal; violating this rule introduces distance-determination errors.

However, this problem can be solved easily by placing an additional (third) beacon on the wall plane at a certain distance from the system base b. The mounting heights of the beacons and transponders can then be arbitrary; the calculation routine will solve this problem.

The block diagrams of a transponder and a beacon are shown in Fig. 1.

Each transponder, which is placed on an object, consists of a microwave antenna, a controlled transmission phase shifter (CTPS), a one-port microwave transistor amplifier (OPTA), and the low-frequency oscillator of the transponder (LFOT).

Each beacon consists of a microwave oscillator (MWO), a microwave directional coupler (MDC), a microwave transmitting antenna, a microwave receiving antenna, a microwave mixer (MMIX), a low-frequency mixer (LMIX), a low-frequency heterodyne (LHET), a selective amplifier-limiter (SALIM), the low-frequency oscillator of the beacon (LFOB), and a phase detector (PD).

The line "Microwave Frequencies" controls the microwave oscillator frequencies. Frequency changing is needed for unambiguous distance determination. The frequencies of different beacons must be different but closely spaced. The problem of frequency choice will be discussed later.

Figure 1. The block diagrams of a beacon and a transponder

The phase differences of the low-frequency signals are obtained on the line "Phase Differences". These phase differences carry the information representing the phase progression of the microwave signals.

The line "Transponder Selection" controls the frequency of the low-frequency heterodyne. The figure represents serial processing of the transponder signals; obviously, using parallel chains after the microwave mixer allows parallel signal processing. The processing time will be lower, but the hardware cost higher, in that case.

III. BASE EQUATIONS

Each i-th beacon radiates a microwave signal that can be described as

u_1i(t) = U_0i·sin(ω_0i·t + φ_0i),

where U_0i is the amplitude, ω_0i is the frequency, and φ_0i is the initial phase. These oscillations are radiated toward the inner part of the room where the j-th tag is placed. The microwave, propagated along the distance d_ij, undergoes the attenuation A_ij and the phase progression k_0i·d_ij:

u_2ij(t) = A_ij·U_0i·sin(ω_0i·t − k_0i·d_ij + φ_0i),

where k_0i = 2π/λ_0i is the propagation constant and λ_0i is the wavelength.



The j-th transponder receives this microwave signal with its microwave antenna. Then the controlled transmission phase shifter monotonically changes the microwave-signal phase by π over the period T_j of the low-frequency oscillations. The low-frequency oscillator generates these oscillations with a certain frequency stability; the required value of this stability will be discussed later.

The block diagram shown assumes that the microwave signal passes through the phase shifter twice. So the microwave-signal phase is changed by 2π over the period T_j of the low-frequency oscillations, as shown in Fig. 2a or Fig. 2b. Changing the microwave-signal phase by 2π over the period T_j is tantamount to shifting the frequency of the microwave signal [4] by Ω_j = 2π/T_j. Under certain assumptions, this technical solution is equivalent to a Doppler frequency shift.

Figure 2. The law of microwave signal phase changing

The amount of frequency shift is chosen to be small: F_j (F_j = Ω_j/2π) is equal to tens of kilohertz or so, and in any case it does not exceed a hundred kilohertz.

One more feature is observed in this case: the initial phase φ_jL of the controlling low-frequency oscillations is transferred into the microwave-signal phase directly, without any changes. This feature was established in the author's previous investigations [2], [3].
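This serrodyne trick, a linear 0-2π phase ramp acting as a frequency shift, can be verified numerically. The sketch below is illustrative only (arbitrary frequencies, not the system's actual values):

```python
import cmath
import math

def ramped_carrier(t, f0, F):
    """Carrier at f0 whose phase is ramped linearly by 2*pi per period 1/F
    (reset to zero each period), as the double-pass phase shifter does."""
    ramp = 2.0 * math.pi * ((F * t) % 1.0)        # sawtooth, 0..2*pi
    return cmath.exp(1j * (2.0 * math.pi * f0 * t + ramp))

def shifted_carrier(t, f0, F):
    """Plain carrier at the shifted frequency f0 + F."""
    return cmath.exp(1j * 2.0 * math.pi * (f0 + F) * t)
```

Because the sawtooth differs from a true linear ramp only by integer multiples of 2π, the two functions agree at every instant, which is exactly the frequency translation of [4].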

After the controlled phase shifter, the microwave signal is amplified by the one-port microwave transistor amplifier [2]. This amplifier has an extremely simple design, very low power consumption, and excellent noise characteristics. It operates in a narrow frequency band, but this is not a serious drawback in our case; moreover, perfect antenna matching can be implemented in a narrow frequency band as well. Thus we obtain substantial microwave-signal amplification with an excellent noise factor [7].

Further, the amplified microwave signal passes through the phase shifter again and obtains the frequency and phase shift. The frequency/phase-transformed microwave signal will be

u_3ij(t) = A′_ij·U_0i·sin((ω_0i + Ω_j)·t − k_0i·d_ij + φ_0i + φ_jL),

where A′_ij takes into account the transponder gain. The transponder gain determines only the operating distance of the system and does not affect the accuracy of object positioning, so we will assume the transponder gain equal to 1 (A′_ij = A_ij). The transponder reradiates this frequency/phase-transformed microwave signal back in the beacon direction. In the beacon the secondary received microwave signal will be

u_4ij(t) = A_ij²·U_0i·sin((ω_0i + Ω_j)·t − k_0i·d_ij − k′_0i·d_ij + φ_0i + φ_jL),

where k′_0i takes into account the frequency shift ω_0i + Ω_j.

The frequency shift Ω_j is much lower than the initial frequency ω_0i (e.g. f_0i = ω_0i/2π = 1.5 GHz and F_j = 10…100 kHz), so k′_0i ≈ k_0i. This secondary received signal is mixed with the original microwave signal, and at the mixer output the low-frequency difference signal is selected:

u_5ij(t) = A_ij²·U_0i·sin(Ω_j·t − 2·k_0i·d_ij + φ_jL).    (1)

As we can see from (1), both the initial frequency ω_0i and the initial phase φ_0i of the original microwave signal are subtracted in the mixer. Only the double phase progression 2·k_0i·d_ij of the microwave signal is of interest for the distance determination.

The low-frequency signals from each j-th transponder are obtained at the output of the mixer of each i-th beacon, but the phase shift is unique for each beacon-transponder pair and is determined by the corresponding distance d_ij. The frequency shift Ω_j of each transponder identifies it. As the frequencies Ω_j of the signals from different transponders are quite different, it is inconvenient to measure the phase differences between these signals and the reference one. To avoid this problem, heterodyning of the received signal is proposed: the heterodyne frequency Ω_i in the i-th beacon is chosen so that the difference Ω_i − Ω_j remains constant, equal to 10 kHz, for example. The signal at this frequency is amplified up to limitation and is described as


u_6ij(t) = U_0·sin(Ω_ij·t − 2·k_0i·d_ij + φ_jL + φ_iH),

where Ω_ij = Ω_i − Ω_j and φ_iH is the initial phase of the heterodyne signal. The phase of this signal is compared with the phase of a low-frequency reference signal of the same frequency Ω_ij. So the phase-detector output data Δφ_ij will be proportional to the value

Δφ_ij = 2·k_0i·d_ij + ΔΩ·t + Σφ,

where ΔΩ is the reduced mutual frequency instability of all the low-frequency oscillators and Σφ is the sum of the initial phases of all the low-frequency oscillators.

Thus, analyzing the data Δφ_ij, we can determine each of the distances d_ij.
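The transponder-selection scheme, picking a heterodyne frequency per tag so every beacon-tag pair lands on one common intermediate frequency, can be sketched as follows (the 10 kHz IF is the example value from the text; the helper itself is illustrative, not the authors' code):

```python
def heterodyne_plan(tag_id_freqs_hz, common_if_hz=10_000.0):
    """Map each tag ID frequency F_j to the heterodyne frequency F_i that
    mixes it down to the same intermediate frequency F_i - F_j."""
    return {f_j: f_j + common_if_hz for f_j in tag_id_freqs_hz}
```

Tags keyed at, say, 40, 55, and 70 kHz would be selected with heterodynes at 50, 65, and 80 kHz, all producing the same 10 kHz signal for the phase detector.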

IV. ERRORS AND PROCESSING ALGORITHM

The term ΔΩ·t is the dynamic error of the phase measurements; Σφ is the static one. What error values are we talking about? For a signal frequency of 10 kHz the absolute frequency instability of a crystal oscillator does not exceed 0.1 Hz. For a signal-processing time of 10 ms the dynamic error is then 0.36°, which corresponds to a distance-determination error of 0.2 mm (two-way value) at a microwave frequency of 1.5 GHz. Certainly, we can neglect the dynamic error ΔΩ·t.
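The dynamic-error arithmetic above can be reproduced in a few lines (the constants are those quoted in the text; the script is just a check, not part of the system):

```python
C = 3.0e8                 # speed of light, m/s

df = 0.1                  # crystal-oscillator frequency instability, Hz
t_proc = 10e-3            # signal-processing time, s
phase_err_deg = 360.0 * df * t_proc          # accumulated phase error, degrees

wavelength = C / 1.5e9                       # 0.2 m at 1.5 GHz
dist_err_m = (phase_err_deg / 360.0) * wavelength
```

This reproduces the 0.36° of phase and 0.2 mm of distance quoted in the text.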

The static error Σφ is constant over the whole measurement process (from the moment all the oscillators are started). We could exclude this error by a calibration procedure, but it is excluded automatically as a result of the processing algorithm. Thus, the only thing we must ensure is high frequency stability of each low-frequency oscillator: the phase mismatch between any two oscillators must not exceed the phase-measurement resolution during the whole measuring procedure. If the coordinate-determination algorithm is not time-consuming and the number of iterations is not high, ordinary crystal oscillators are the best choice for the technical implementation.

A somewhat different approach must be used for determining the admissible microwave-oscillator frequency instability. Here the measured distance plays an important role. Assuming a maximal operating distance d_ij of 50 m and a maximal distance-determination error Δd_ij of 1 mm (a phase-measurement error of 1.2°), for a frequency of 1.5 GHz the maximal frequency instability Δf_0/f_0 will be 3 ppm. Such frequency instability is realized by temperature stabilization of the reference crystal oscillator.

Generally, it is only possible to measure a phase difference between 0 and 2π. The phase progression k_0i·d_ij will be represented as 2πn + k_0·d, where n is an integer. To avoid this ambiguity, we serially change the operating frequency of the microwave oscillator of each beacon [5] and measure the phase differences between the reference low-frequency oscillator signal and the low-frequency mixer output signal. At first we fix the frequency f_1 and record the phase difference Δφ_1. After that we change the microwave-oscillator frequency to a certain value f_2, record the new phase difference Δφ_2, and calculate the distance as

d_i = c·(Δφ_1 − Δφ_2) / (2π·(f_1 − f_2)).

The frequency difference f_1 − f_2 was chosen as 5 MHz. Such a difference corresponds to unambiguous phase measurements over a 30 m range (taking two-way propagation into account). Increasing this difference increases the system accuracy but decreases the unambiguous operating range, and vice versa.

Certainly, these calculations yield only rough distance values, but they give us the number of phase cycles n and thus the possibility of determining the distance in terms of an integer number of wavelengths. The exact value of the distance d_ij is then obtained by measuring the phase difference k_0i·d_ij. With a phase-measurement accuracy of 1.4° (8 bits) and a wavelength of about 0.2 m, the resolution of the distance determination will be about 1 mm. Note that the measured distance is a conditional distance, which includes the antenna phase centers and all feeder lengths.
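The coarse/fine procedure can be sketched end-to-end: the two-frequency phase difference gives the rough (two-way) path via d = c·Δφ/(2π·Δf), which fixes the integer cycle count n; the single-frequency phase then refines the result. Illustrative code, not the authors' routine:

```python
import math

C = 3.0e8   # speed of light, m/s

def coarse_two_way_path(dphi1, dphi2, f1, f2):
    """Rough two-way path length from the phase differences measured at
    the two microwave frequencies f1 and f2."""
    return C * (dphi1 - dphi2) / (2.0 * math.pi * (f1 - f2))

def refined_distance(coarse_path, phi_fine, f0):
    """Use the coarse path to resolve the integer number n of wavelengths,
    then take the fractional part from the fine phase (2*k0*d mod 2*pi)."""
    lam = C / f0
    frac = phi_fine / (2.0 * math.pi)          # fractional wavelength count
    n = round(coarse_path / lam - frac)        # integer cycle count
    return (n + frac) * lam / 2.0              # two-way path -> one-way distance
```

A coarse error of a few centimetres still snaps to the correct cycle, so the millimetre-level fine phase dominates the final accuracy, as the text argues.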

V. SOME FEATURES OF TRANSPONDER DESIGN

A. One-Port Transistor Amplifier

Reflection amplifiers are well suited to the transponder design of Fig. 1. Reflection amplifiers for X-band operation were developed in the late 1970s using a circulator and a Gunn-diode oscillator, providing a gain of 10 dB with a noise figure of 15 dB. Another reflection amplifier, for the 20 GHz band, was developed using a FET with a package resonance as positive feedback [6]; experimental results for this amplifier at 23 GHz showed a noise figure of 6 dB with a gain of 8 dB.

A similar approach was used in [7], where a reflection-amplifier circuit using a transistor with positive series feedback was suggested. That amplifier showed higher stability and efficiency than a reflection amplifier with parallel feedback. A low-noise one-port transistor amplifier was developed in the 1.4-1.6 GHz band to study the capabilities of this kind of amplifier, resulting in lower cost and simplified research [7].

In the mentioned amplifier the chosen active element is a GaAs heterojunction FET NE33284AA with a gate length L_g = 0.3 μm and a gate width W_g = 280 μm. This device has a minimum noise factor N_f = 0.3 dB and an associated gain G = 19 dB at 1.5 GHz when biased at V_ds = 2 V and I_d = 10 mA [7].

The theoretical analysis and the operation of the described amplifier are based on the parasitic capacitances and inductances of the FET which, in combination with external constructive and internal parasitic reactive parts, form the series positive feedback providing microwave-signal amplification. Obviously, the parasitic reactive parts of the FET itself and of its installation have an impact only in the microwave band; at radio frequencies their influence is reduced and signal amplification becomes problematic. Besides, the adjustment of such an amplifier is a labor-intensive process, as it is practically impossible to account in advance for the influence of the parasitic reactive elements on the amplifier parameters.

Finally, the circuit contains, in its various variants, up to four constructive inductances and up to five discrete bypassing, dividing, and frequency-forming capacitances, which complicates the design and, in addition, the adjustment process.

At the same time, it is essential to have in the various system units a stable, simple, and reliable one-port amplifier capable of amplifying signals in the microwave band. Furthermore, such an amplifier should not demand special adjustment, which would allow its use in mass-produced RFID systems.

In [8] another approach to one-port transistor amplifier design is proposed. The schematic diagram of the one-port resonant transistor amplifier is shown in Fig. 3.

Figure 3. Schematic diagram of OPTA

The input signal u_2(t) from the beacon arrives at the first tap of the inductance coil (hereafter simply the coil), one end of which is connected to the common wire and the other end to the gate of the FET. Thus a signal in phase with the input signal and increased in amplitude is induced on the gate. This voltage causes an in-phase current through the FET channel, which flows through part of the coil due to the direct connection of the FET source with the second tap of the coil. The coil current is therefore in phase with the input signal; in other words, positive feedback is realized and amplification takes place. The maximum gain is reached at the resonance of the tank formed by the coil and the input capacitance of the FET.

B. Amplifier Simulation

The Advanced Design System environment was used for simulation, with the corresponding SPICE model of the active element loaded.

Strip-line segments were used as the tapped inductance coil. Standard FR-4 dielectric of 0.8 mm thickness was used as the substrate.

The result of the simulation is shown in Fig. 4.

Figure 4. Result of simulation of OPTA’s operation

Using the super-low-noise FET Avago ATF-38143, with a 0.4 dB noise figure, 16 dB associated gain, and 230 mmho transconductance, stable resonant amplification of the one-port transistor amplifier of up to 45 dB was obtained.

C. Experimental Investigations

The experimental investigations were carried out with a standard standing-wave and attenuation indicator. A scalar bridge was used to separate the incident and reflected waves. A photo of the measurement setup is shown in Fig. 5.

For gain measurements the following approach was used.

First, a certain level of incident wave was set. The bridge output was shorted, so the level of the reflected wave was the same, and this level was recorded.

Then the short was removed and a 30 dB attenuator was inserted. If the output of the attenuator were shorted, the reflected wave would be attenuated by 60 dB, taking into account the double pass through the attenuator.

The one-port transistor amplifier was then connected to the output of the attenuator (see Fig. 5a). In this case the indicated level of the reflected wave, relative to the initially recorded one, plus 60 dB, gives the real signal amplification.

We obtained an amplifier gain of 42 dB with a power consumption of about 18 mW (1.8 V supply and 10 mA quiescent current). Thus the simulation and measurement results agreed well.
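The bookkeeping behind this reflection-gain measurement is simple dB arithmetic: the attenuator is crossed twice, so twice its value is added back to the indicated level. A sketch (the example indication is illustrative; only the 30 dB attenuator comes from the text):

```python
def reflection_gain_db(indicated_rel_db, attenuator_db=30.0):
    """Gain of the one-port amplifier measured behind a scalar bridge.

    indicated_rel_db is the reflected-wave level relative to the
    shorted-bridge reference; the wave crosses the attenuator twice,
    so twice its value is added back.
    """
    return indicated_rel_db + 2.0 * attenuator_db
```

An indication 18 dB below the shorted reference, for instance, would correspond to the 42 dB gain reported above.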


Figure 5. OPTA measurements

The dynamic range of the network analyzer was poor, not exceeding 30 dB, so the signal amplification cannot be seen at full scale: the low-level indications were buried in the noise strip (see Fig. 5b). However, the maximal value of the amplifier gain was captured adequately (see the marker).

Although the amplifier operates in a quite narrow frequency band (the -3 dB bandwidth was near 5 MHz), this is not a great drawback in our case. Furthermore, perfect antenna matching can be implemented only in a narrow band.

VI. SOME FEATURES OF BEACON DESIGN

The operation of the discussed RFID system faces difficulties similar to those of a conventional radar system: the system energy budget is weak. The main problem in this case is the suppression of the transmitted signal in the receiving channel. A standard Y-circulator can suppress the unwanted signal by only 20-25 dB, which is not enough for system operation.

We suggest using two separate conventional patch antennas, one for the transmitting channel and one for the receiving channel. The antennas are shown in Fig. 6.

The patch dimensions were 55 × 55 mm, and the distance between the patch edges was 70 mm. Standard FR-4 dielectric of 1.5 mm thickness was used as the substrate.

The simulation of each antenna and of the mutual coupling between the antennas was carried out in the Microwave Office environment.

The results of the simulation are shown in Fig. 7.

Figure 6. Transmitting and receiving patch antennas

Figure 7. Antenna(s) simulations

As a single unit, the antenna is well suited to the beacon design. The antenna VSWR was 1.13 at the central frequency of 1.3 GHz (see Fig. 7a) and did not exceed 1.3 over the working frequency band. The mutual isolation between the antennas exceeded 50 dB at the central frequency; no Y-circulator can ensure such decoupling. This property of the antenna unit matches the system demands perfectly.

VII. RESTRICTIONS

Certainly, a complex indoor environment implies multipath microwave propagation. The first patch of the beacon illuminates the entire room space, and the scattered microwave signals are received by the other patch of the beacon. But these scattered signals do not interfere with the useful signal, because the latter has the frequency shift. Only a scattered signal received by the transponder obtains the same frequency shift, and this signal has a much lower amplitude than the direct one. Naturally, the presence of bulk metal in a room will disturb normal system operation, as it would for any other radio system.

The number of transponders operating simultaneously in a room can be large, as discussed above, but a certain restriction appears due to signal mixing: combinatorial components can interfere with the useful signal. Careful choice of the ID frequencies will eliminate this problem; in any case, it becomes relevant only for a large number of objects in a room.

VIII. CONCLUSION

Thus, the operation of equipment for precision object positioning has been discussed. The considered equipment possesses a very simple design and low cost, while its metrological characteristics are high. The calculation routines are quite realizable, and equipment installation does not demand extensive manpower.

The transponder placed on an object does not generate any radio signals of its own; it only receives and retransmits the microwave signal from the beacon(s). So the intensity of the electromagnetic field in the object's immediate environment is very low.

The theoretical investigations of the system accuracy give optimistic results, confirmed by the author's previous experimental investigations in this field. Final conclusions will be made after testing of the real equipment.

The system is at the proposal stage, and some system modules are now being improved. As the system uses a radar-like approach, its energy budget is very weak, so accurate adjustment of the transmitter, transponder, and receiver has to be implemented. The main goal of this adjustment is to ensure the declared system operating range. Increasing the transmitter output power is the worst way of solving this problem: the system must ensure so-called "green communication". At present the transmitter output microwave power does not exceed 15 dBm.

REFERENCES

[1] V. B. Pestrjakov, Phase Radio Engineering Systems, Moscow: Soviet Radio, 1968, 468 p. (in Russian).

[2] I. B. Shirokov, "The Multitag Microwave RFID System with Extended Operation Range," in Chipless and Conventional Radio Frequency Identification: Systems for Ubiquitous Tagging, IGI Global, 2012, pp. 197-217.

[3] I. B. Shirokov, "Precision Indoor Objects Positioning Based on Phase Measurements of Microwave Signals," in Evaluating AAL Systems Through Competitive Benchmarking: Indoor Localization and Tracking, S. Chessa and S. Knauth, Eds., Communications in Computer and Information Science, vol. 309, Springer-Verlag Berlin Heidelberg, 2012, pp. 80-91.

[4] J. S. Jaffe and R. C. Mackey, "Microwave frequency translator," IEEE Trans. Microwave Theory Tech., vol. 13, pp. 371-378, 1965.

[5] I. B. Shirokov, "The Method of Distance Measurement from Measuring Station to Transponder," Pat. Ukraine #93645, publ. in Bull. #4, Feb. 25, 2011, MPC G01S 13/32, 7 p. (in Ukrainian).

[6] H. Tohyama and H. Mizuno, "23-GHz band GaAs MESFET reflection-type amplifier," IEEE Trans. Microwave Theory Tech., vol. MTT-27, no. 5, pp. 408-411, May 1979.

[7] A. P. Venguer, J. L. Medina, R. A. Chávez, and A. Velázquez, "Low noise one-port microwave transistor amplifier," Microwave and Optical Technology Letters, vol. 33, no. 2, pp. 100-104, Apr. 2002.

[8] I. B. Shirokov, "Shirokov's one-port resonant transistor amplifier," Pat. Appl. Ukraine #a201114351, 05 December 2011, MPC H03F 21/00 (in press).


- chapter 4 -

Signal Strength or fingerprinting


Broadcasting Alert Messages Inside the Building: Challenges & Opportunities

Osama ABU OUN, Christelle BLOCH, François SPIES
FEMTO-ST Lab (CNRS), University of Franche-Comte
1 Cours Leprince-Ringuet, 25200 Montbéliard, France

Wahabou ABDOU
Laboratory of Electronics, Computer Science and Image
University of Burgundy, France

Email: [email protected]

Abstract—Emergency evacuation of buildings during catastrophic events needs to be quick, efficient, and distributed. Indoor positioning and wireless communications can be used to optimize this process: they allow the current locations of the people present in a building to be determined and location-based information to be transmitted. This makes it possible to give all of these people the best directions to help them find their way out of the building, and/or to help rescue teams find them via their approximate locations. But this involves many challenges, linked both to the indoor environment and to the urgency of the situation. The goal is to improve evacuation with regard to various antagonistic criteria, particularly simplicity of use, speed, and reliability. The various issues involved are described, namely the repetition of GNSS information by access points (APs) using beacons, and the way each mobile device calculates or receives its own exact location using the wireless network, from both RSSIs and coordinates. The paper then focuses more specifically on the scientific bottlenecks encountered in this context, and finally gives some experimental results gathered in a feasibility study to validate some of the basic concepts of this approach.

Keywords—indoor positioning, optimization, emergency evacuation, GNSS information.

I. INTRODUCTION

Even with successive developments in GNSS technologies and the possibility of determining the receiver location to within several meters, the service is not reliable indoors and most of the time is not available, especially in large buildings.

Using wireless networks to broadcast alert messages and customized evacuation directions inside a building organizes and speeds up the evacuation process. In addition, it helps distribute people to the various exits according to their current locations.

Broadcasting could be done either by (i) simple broadcast, in which the access points broadcast the data directly to the mobile phones, or (ii) broadcast trees, in which the mobile phones rebroadcast the data within their coverage areas in order to extend the broadcast range. The root of a broadcast tree could be a mobile phone connected to the Internet over a 3G/4G connection.

This paper discusses different scenarios that could be used to apply this solution, considering the parameters already mentioned.

II. CONTEXT

A mobile phone inside the building may be located in an area in which it can receive broadcast messages from several Wi-Fi access points. Selection criteria and an optimization process should be applied to choose the most appropriate directions according to the current position of each mobile phone.

The broadcasting method is an essential part of this solution. In large buildings, especially public ones, most people do not connect to the local Wi-Fi access points: either the access points are not public and access is limited to a certain group, or people simply do not need to use the network or do not want to drain their phone batteries. Using Wi-Fi beacons to broadcast alert and evacuation messages can handle such scenarios without the need to deploy new public Wi-Fi networks inside the buildings.

In some cases, people can be trapped inside the building by obstacles, injuries, or disabilities. Rescue teams then need their exact locations inside the building in order to evacuate them. Thus, using the wireless access points to broadcast GNSS coordinates inside the building could be one of the best and most economical solutions to this problem.


III. BROADCASTING MESSAGES INSIDE THE BUILDING

A. Sending GNSS in beacons from an access point

Work has been conducted on technologies that could enhance the delivery of GNSS information inside buildings; using Wi-Fi beacons is one of them [1]. In fact, it is possible for Wi-Fi access points to overload their beacons with certain data, e.g., pre-configured GNSS coordinates, to be delivered to mobile phones and other Wi-Fi clients without the need for an association with them.

Beacons have not been used in a wide range of applications, although this approach has many advantages over other GNSS-related solutions such as GNSS repeaters and IMES (Indoor Messaging System) [2], in which new hardware must be installed in the buildings. On the contrary, this solution is software-based and could be applied by adding an extension to the IEEE 802.11 standard, without installing new access points or using special Wi-Fi cards in the clients. In addition, it saves the resources of the network and of the phones by eliminating the need to establish full connections.
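As a rough illustration of how such coordinates might be packed into a beacon, the sketch below builds a vendor-specific information element carrying latitude, longitude, and altitude. Element ID 221 is the standard vendor-specific IE, but the OUI and payload layout here are entirely hypothetical, not part of any standard or of the paper:

```python
import struct

def gnss_vendor_ie(lat_deg, lon_deg, alt_m):
    """Pack AP coordinates into a vendor-specific 802.11 IE (hypothetical layout).

    Element ID 221 is the vendor-specific IE; the OUI below and the payload
    format (three big-endian doubles) are made up for illustration only.
    """
    oui = bytes([0x00, 0x11, 0x22])            # hypothetical vendor OUI
    payload = oui + struct.pack(">ddd", lat_deg, lon_deg, alt_m)
    return struct.pack("BB", 221, len(payload)) + payload

def parse_gnss_vendor_ie(ie):
    """Inverse of gnss_vendor_ie: recover (lat, lon, alt) from the element."""
    eid, length = struct.unpack_from("BB", ie)
    assert eid == 221 and length == len(ie) - 2
    return struct.unpack_from(">ddd", ie, 5)   # skip header (2) + OUI (3)
```

A client that passively scans beacons could decode such an element without ever associating, which is the resource saving the paragraph above describes.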

B. Using the received beacons to determine the most accurate position

In the proposed solution, every access point knows its own exact GNSS coordinates. The access points broadcast these coordinates in their beacons, so a mobile phone in a given place can receive the coordinates from all the access points covering that place. Calculating the distances from these access points then provides the mobile phone with an accurate position. In this model we propose using a hybrid algorithm combining a signal-strength cartography and a calibrated propagation model. This algorithm was developed by LIFC (Laboratoire d'Informatique de Franche-Comté) / FEMTO-ST to express distance in a heterogeneous environment [3].
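Ranging from the received beacons usually starts from a log-distance path-loss model. The sketch below is a generic illustration, not the calibrated LIFC/FEMTO-ST hybrid algorithm of [3]; the reference power and path-loss exponent are assumed values:

```python
def rssi_to_distance(rssi_dbm, ref_dbm_at_1m=-40.0, path_loss_exp=3.0):
    """Invert the log-distance path-loss model to estimate range in meters.

    ref_dbm_at_1m is the RSSI expected at 1 m from the AP and path_loss_exp
    the indoor path-loss exponent; both are illustrative, uncalibrated values.
    """
    return 10.0 ** ((ref_dbm_at_1m - rssi_dbm) / (10.0 * path_loss_exp))
```

With these assumed parameters, an RSSI of -40 dBm maps to 1 m and -70 dBm to 10 m; a real deployment would calibrate both parameters per environment, which is what the hybrid approach of [3] addresses.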

C. Using Wi-Fi to Broadcast Evacuation Directions

Broadcasting the evacuation directions over the Wi-Fi network can solve major problems of the traditional methods. Some of these problems concern the people in the building, some of whom may have disabilities that prevent them from receiving the directions; others concern the state of the building during the evacuation, for instance blocked exits or corridors made dangerous by fire.

There are three suggested levels for broadcasting evacuation directions in a building using Wi-Fi. In the first level, all Wi-Fi access points in the building broadcast all emergency exits, together with their exact positions, to all mobiles. In this case the evacuation management system (if any) has no information about the people in the building or their approximate locations, so each mobile decides which exit is closest according to the approximate distance between the mobile and each exit. In the second level, each Wi-Fi access point broadcasts only the emergency exits located within its own range. As in the first level, no information about the mobiles or the people in the building is available, and the mobile decides which exit is closest.

In the third level, customized directions are broadcast to each mobile according to its position and the building situation. The directions are generated by the building's evacuation management system. This solution involves three entities: the mobile phone, the access points, and the management server. The protocol relies on broadcast and Wi-Fi management frames to exchange data between the mobile phone and the access points without any association between them. This allows the mobile phone to stay connected to another network while using the positioning and evacuation services of the building's internal network. The access points relay the positions of the mobile phones to the server in order to keep an updated snapshot of the mobile phones inside the building. At evacuation time, the server sends the best evacuation plan to each mobile phone through the access points.
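In the first two levels, the exit choice made on the mobile reduces to a nearest-exit computation, which might look like this sketch (the exit names and coordinates are invented for the example):

```python
import math

def closest_exit(position, exits):
    """Pick the emergency exit with the smallest straight-line distance,
    as a mobile does in the first two broadcast levels. `position` is an
    (x, y) pair; `exits` maps exit names to (x, y) coordinates."""
    def dist(exit_pos):
        return math.hypot(exit_pos[0] - position[0], exit_pos[1] - position[1])
    return min(exits, key=lambda name: dist(exits[name]))

# Illustrative usage with two invented exits:
exits = {"north": (10.0, 20.0), "south": (10.0, 0.0)}
# closest_exit((9.0, 4.0), exits) -> "south"
```

As the experiments below show, this purely distance-based choice ignores exit load and building state, which is precisely the weakness the third level addresses.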

IV. Experiments and Evaluation

A. Experiment Design

In this study, multiple scenarios were simulated in order to measure the time needed to evacuate a building when its occupants follow evacuation directions sent over Wi-Fi broadcast.

The simulation is done using NS-2 equipped with the "Shadowing Patterns" model. Many variables are taken into account; they can be categorized into three main groups:

Building structure: the building dimensions, positions of emergency exits, capacity of emergency exits.

Network structure: Wi-Fi access points and their positions.

Population: number of persons in the building, their initial positions, the initial target coordinates and speed.

B. Experiment Policies

We now discuss the policies used during the simulation:

Person movement policy: a person moves from its initial position toward its target in a straight line until it receives the evacuation broadcast with the available evacuation plans; it then stops moving and evaluates all the plans according to the distance between



2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31st October 2013

its position and the emergency exit of each plan. It then starts moving in a straight line toward the closest exit, where it joins the waiting queue that regulates passage through the exit according to the exit's capacity.

Initial person position and initial person target: random functions that can cover the whole area of the building or only a certain part of it.

Person speed and evacuation speed: random functions give a different value to each person.

Exit positions: all emergency exits are located in the external walls of the building.

Access points: distributed so that they cover the whole area.

Exit performance: statistics include the arrival times of the first and last persons and the evacuation times of the first and last persons.

Evacuation performance: statistics include a summary of all exits and an evaluation of the whole evacuation process.

C. Experiment Scenarios

Using the aforementioned policies, we tested and analyzed three different scenarios; each scenario was run ten times according to the following criteria:

TABLE I. Experiment Criteria

Scenario   Area        APs   Exits   Persons   Distributions
1          20m x 20m   3     2       25        100%, 75%, 50%
2          40m x 40m   3     2       50        100%, 75%, 50%
3          60m x 60m   3     3       150       100%, 75%, 50%

D. Experiment Results

In all scenarios, when the persons were distributed over the whole building (100% distribution), all exits performed almost identically: monitoring the evacuation time of the last person gave similar times for each exit.

The results changed completely when we ran the same scenarios with the same parameters but a 75% distribution. In the three-exit scenarios, 50% of the persons were evacuated through one exit and the other two exits evacuated the rest, whereas in the two-exit scenarios, 75% of the persons were evacuated through a single exit. As a result, the total evacuation time increased by about 15% compared to the same scenarios with the 100% distribution.

The worst evacuation time was observed when the persons were distributed over only 50% of the area: the total evacuation time increased by 150% compared to the first test, and in the three-exit scenarios more than 90% of the persons were evacuated through a single exit.

Consider that the evacuation is triggered by an earthquake and only a few minutes are available to leave the building. If we take the time available to equal the evacuation time obtained with the ideal 100% distribution, then only about 33% of the persons in the building will be able to leave in time when they are concentrated in 50% of its area. Since we compare the same building across scenarios, the cause is poor load balancing: people choose their evacuation plan using only the distance between their current position and each exit, while ignoring the current situation of the building, even when they have computed their exact position and received valid evacuation plans.

Analyzing the results further, we found that about 25-30% of the persons who could not leave in time could have been evacuated had they used one of the other exits, especially those located near the center of the building, between all the exits.

V. Conclusion

These experiments show that broadcasting alert messages and multiple evacuation directions inside a building, with each person choosing the plan with the shortest distance to an exit, is useful only in the ideal situation where people are spread over the whole building with uniform density, which is rarely the case in practice. In most cases, people are concentrated in certain places inside the building. During an evacuation, they then line up in front of the closest exit waiting for their turn to leave, while the other exits stay empty, even though those farther exits would let them leave faster.

In these experiments we assumed that each mobile phone represents exactly one person. In real situations, a group of two, three, or more persons will often follow the evacuation directions of a single mobile phone.

Furthermore, some persons ignore the directions and follow the crowd, adding more people to the queues at the already crowded exits.

The solution is a protocol that monitors the situation of the building and of the persons inside it, so that suitable evacuation plans can be generated for each mobile phone as soon as the evacuation is triggered. Plans should be generated from the current situation of the building, and new plans should be issued whenever that situation changes. The system should rely on IEEE 802.11 management frames, so that people can communicate with it without connecting to a specific Wi-Fi network, and it should support two-way communication, server to clients and clients to server. The system defines three entities: the management server, the Wi-Fi access points, and the mobile phones. The access points relay all data to the server for processing and encapsulate the data coming from the server in the appropriate IEEE 802.11 management frames.




REFERENCES

[1] R. Chandra, J. Padhye, L. Ravindranath, and A. Wolman, "Beacon-Stuffing: Wi-Fi Without Associations," Microsoft Research.

[2] N. Kohtake, S. Morimoto, S. Kogure, and D. Manandhar, 2011 International Conference on Indoor Positioning and Indoor Navigation (IPIN), 21-23 September 2011, Guimarães, Portugal.

[3] M. Cypriani, F. Lassabe, P. Canalda, and F. Spies, "Wi-Fi-Based Indoor Positioning: Basic Techniques, Hybrid Algorithms and Open Software Platform," 2010 International Conference on Indoor Positioning and Indoor Navigation (IPIN), 15-17 September 2010, Zürich, Switzerland.




For a Better Characterization of Wi-Fi-based Indoor Positioning Systems

Frederic Lassabe
Research Institute on Transports, Energy and Society
Belfort-Montbeliard University of Technology
Rue Ernest Thierry-Mieg, Belfort, France
Email: [email protected]

Matteo Cypriani
Laboratoire de Recherche Telebec en Communication Souterraine
UQAT, 675, 1e avenue, Val d'Or, QC, Canada
Email: [email protected]

Philippe Canalda and Francois Spies
Departement d'Informatique et Systemes Complexes
FEMTO-ST, Universite de Franche-Comte
Montbeliard, France
Email: [email protected]
Email: [email protected]

Abstract—Many Wi-Fi indoor positioning systems exist, which have all been published through scientific articles including performance estimation, often in terms of accuracy. However, deployment of such systems is conditional not only on their accuracy but also on various criteria which might be unclear or implicit. In this article, we present a detailed taxonomy of Wi-Fi indoor positioning systems. This work aims at providing a set of criteria that we identified through tests, state-of-the-art study, and experience in developing these systems, and that can be extended to various types of indoor positioning systems. The criteria consider modelling of RSSI data as well as hardware and software architectures to meet the requirements of a Wi-Fi IPS.

I. INTRODUCTION

Considering the state of the art of indoor positioning systems, it has become hard to choose which hardware, physical measurements, architecture, and algorithm to use or to study when dealing with such systems. This article aims at providing an overall view of the criteria involved in indoor positioning system design and development. In the remainder of the document, we first present a taxonomy of various properties of Wi-Fi Indoor Positioning Systems (IPS).

Second, we apply this taxonomy to a set of related work we studied or developed. Therefore, it provides a guideline to the development and the deployment of a system of this kind, given its goals as well as hardware and/or software constraints. From the taxonomy and its application to various systems, we draw conclusions about design and model choices according to various systems related to the deployment context.

II. ARCHITECTURE

In this section, we cover the physical elements of Wi-Fi-based positioning systems and their architecture. Then, we present the impact of the infrastructure on the centralization of the positioning algorithms, and finally, we define implicit and explicit positioning as well as their impact on privacy.

A. Wi-Fi architectures

As Wi-Fi is a wireless communication medium, its first use is to transmit data between devices. However, its signals can be used to locate mobile devices within a Wi-Fi network's range. Most indoor Wi-Fi networks are designed among the following topologies:

• infrastructure mode,
• ad-hoc mode,
• mesh networks.

Infrastructure and mesh modes rely on access points, which are fixed wired-and-wireless bridges to which user devices, also called client stations, connect to obtain network access. In ad hoc mode, every mobile device in the topology acts as a client station as well as a router. In this subsection, we focus on Wi-Fi positioning systems based on infrastructure mode.

Physical measurements used to locate a mobile device can be gathered either by the mobile device itself or by the network infrastructure. We describe here the advantages and drawbacks of these choices in terms of measurements.

1) Measurements performed by the mobile device: mobile devices range from laptop computers with a Wi-Fi network interface card to smartphones whose SoC usually embeds Wi-Fi access. These devices also support many operating systems, from desktop OSes such as GNU/Linux, MS Windows, and Mac OS to OSes dedicated to light devices such as MS Windows CE and RT, Apple iOS, Google Android, and many others.

An advantage of using the mobile device to provide measurements lies in the fact that almost any combination of hardware and OS provides Wi-Fi measurement capabilities1.

However, three drawbacks are identified. First, the number of hardware and software configurations makes it very difficult to build an application running on every combination of device and OS. This problem is mitigated in some circumstances

1 A major exception being Apple iOS, which actually provides the feature but contractually refuses positioning applications based on this feature in its App Store.




where you have control over the mobile devices used by the system. On the contrary, when designing a system open to all customers with their own devices, such a choice becomes very problematic.

Second, while the OS API might provide access to Wi-Fi RSSI values through the kernel, there is no guarantee that this is actually implemented in the Wi-Fi chipset firmware, which could lead to inconsistent measurements and bad location estimation. Moreover, various chip models may provide various measurement qualities and accuracies, which impairs the resulting position estimate.

2) Measurements performed by the infrastructure: performing the measurements on the infrastructure devices is particularly interesting because, once solved for a network, virtually any mobile device's signal within range can be measured, therefore allowing the mobile device to be located.

This requires being able and authorized to add software to the infrastructure access points. Although most high-end access points (e.g. Cisco) won't provide these features, a solution relies on adding dedicated, low-cost devices that provide an API to add software. For instance, the OpenWrt Linux-based OS [1] for access points is supported by hundreds of models across various brands and provides packet sniffing as well as Wi-Fi signal measurements. Measuring Wi-Fi signals from the infrastructure is interesting for two main reasons: any Wi-Fi device will be measurable, and the measurements will remain consistent. However, it is not always possible to know the mobile device's characteristics, such as antenna gain and output power, which impact the measured RSSI.

Whatever solution is chosen, the infrastructure can still operate normally for data transfers while providing transparent Wi-Fi measurements to the positioning system.

B. Centralized or distributed

The position computation can be centralized or distributed. Centralizing the location computation is interesting when determination of the location is based on a lot of data that has to be maintained in a consistent state, as in fingerprinting systems. A centralized architecture performs well with infrastructure-side measurements because, usually, the access points are wired on the same network and can communicate with the server at wire speed without impacting the Wi-Fi bandwidth.

A distributed system can perform position computation on many unsynchronized and remote devices. Such an architecture is implemented either by having the mobile devices compute their own location (ad hoc positioning) or by providing many lightweight positioning servers across the network. The lightweight servers could be colocated with the access points or with other existing devices on the network.

Designing and maintaining a distributed positioning system is more complicated than a centralized one because all required data must be synchronized on all devices that compute locations. However, it scales very well with an increasing number of mobile devices.

C. Privacy, implicit and explicit positioning

Privacy is of great concern in today's life, where all individual deeds can be processed by computers in real time. It is a sensitive topic that has to be dealt with when providing a positioning system. Before dealing with positioning systems and privacy, we have to define implicit and explicit positioning. Explicit positioning requires that the user actively request his location. Implicit positioning can be performed without the user requiring it, and even without him knowing it. It can be based on any data provided by the user's device during its regular operation (network transmissions, etc.).

1) Privacy: On one hand, with mobile-centered systems, the device gathers and exploits its measurements to determine its location. It does not need to give any information to other devices, so it can only be located at the user's will. On the other hand, in an infrastructure-centered system, implicit positioning can be used to watch and monitor the mobiles in a centralized way. As an example, [2] describes the capability to identify the mobile terminals as a separate criterion, named recognition.

III. TRANSMISSION MEDIUM

In a positioning system, the transmission medium is the medium used either to transmit location information or to be measured to obtain information to locate a mobile device. Common transmission media used for positioning include:

• radio networks, based or not on a standard like Wi-Fi (IEEE 802.11), Bluetooth (IEEE 802.15.1), ZigBee (based on IEEE 802.15.4), etc.,

• infrared light,
• ultrasounds,
• mechanical devices like accelerometers, gyroscopes or in-floor sensors,
• optical devices (video cameras) [3],
• geodesy instruments, like laser telemeters and theodolites.

Some positioning systems rely on a combination of two or more transmission media (see Active Bat [4] for instance). Positioning algorithms, best expected accuracy, scalability, system architecture and energy consumption all depend to some extent on the medium used. For example, ultrasounds may not be used in a noisy environment. In the next subsections, we describe only radio networks, since Wi-Fi is a radio-based network2.

Radio networks have the ability to transmit through obstacles such as building walls, even if they are strongly impacted. There is no global rule when using radio networks to locate devices, since there are many standards with various properties.

A. Short range radio

In short range radio networks, e.g. Bluetooth, the devices' maximum range reaches only a few meters in a realistic environment. Therefore, many devices are required to locate a mobile device.

2 We ignore the IR implementation of IEEE 802.11.




RFID (Radio Frequency IDentification) is another short range radio medium, best suited for asset tracking in logistics or industry. It relies on passive tags that transmit a signal induced by an RFID reader. The location is estimated as the RFID reader's location, which can be recorded.

B. Medium range radio

Wi-Fi and ZigBee are medium range radio networks. Although they have a potential range of hundreds of meters outdoors, indoors their range usually reaches only several dozen meters. This allows using fewer devices than in short range networks, but getting an accurate location requires more complex algorithms to process measurements of the signal carrier wave. There exist many indoor positioning algorithms based on Wi-Fi, which will be approached later in this document.

C. Long range radio

These radio media are usually used for outdoor positioning. They include GSM, UMTS, and LTE networks. To some extent, we may also include GNSS in this category.

IV. PERFORMANCE METRICS

In this section, we propose several performance metrics used to evaluate positioning system performance; each will be addressed briefly in the following subsections.

A. Symbology

Symbology is the representation of the locations solved by the positioning system. Common cases are:

• cartesian coordinates in a local coordinate system,
• global spherical coordinates (latitude and longitude),
• discrete locations such as presence in rooms [5].

Coordinates can be mapped onto discrete locations, while the opposite may not be possible.
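The asymmetry between the two symbologies can be illustrated with a toy mapping from coordinates to rooms; the room rectangles below are invented for the example:

```python
# Hypothetical room boundaries: name -> (x_min, y_min, x_max, y_max),
# in a local cartesian coordinate system.
rooms = {
    "office": (0.0, 0.0, 4.0, 3.0),
    "lab":    (4.0, 0.0, 10.0, 3.0),
}

def coords_to_room(x: float, y: float):
    """Map a coordinate estimate onto a discrete room symbology.
    The reverse mapping is impossible: a room name alone does not
    determine a unique coordinate."""
    for name, (x0, y0, x1, y1) in rooms.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None
```

A coordinates-based system can thus always report room presence as well, whereas a room-presence system cannot report coordinates.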

B. Spatial Scale

This criterion defines the size of the positioning system's coverage. The spatial scale is often linked to the transmission medium used for positioning. In [6], only three scales are considered: building, campus, and city.

C. Calibration

The calibration of a positioning system is an offline step performed when setting up the system. Calibration is required by many systems in order to gather the data necessary for their operation.

Since positioning algorithms base their output on the data built during calibration, calibration is often a critical step for the positioning to work properly, especially for systems based on fingerprinting of the environment. On the opposite end, some systems, like those purely based on signal propagation models, do not require any calibration (besides a calibration of the mobile terminals themselves, which is sometimes required).

Some systems are able to calibrate on their own, in anautomated way. We call this process self-calibration.
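As an illustration of how calibration data is consumed by a fingerprinting system, the following sketch stores RSSI fingerprints at reference points and matches a new measurement to the nearest one in signal space. The database values and the simple nearest-neighbor rule are assumptions for the example, not a description of any system surveyed here:

```python
import math

# Hypothetical calibration database built during the offline step:
# reference point (x, y) -> {access point: mean RSSI in dBm}.
fingerprints = {
    (0.0, 0.0): {"ap1": -40.0, "ap2": -65.0},
    (5.0, 0.0): {"ap1": -55.0, "ap2": -50.0},
    (5.0, 5.0): {"ap1": -70.0, "ap2": -45.0},
}

def locate(measurement: dict) -> tuple:
    """Nearest-neighbor matching in signal space: return the calibration
    point whose stored RSSI vector is closest to the measurement.
    Missing APs are treated as a weak -100 dBm reading."""
    def signal_distance(ref: dict) -> float:
        return math.sqrt(sum((ref[ap] - measurement.get(ap, -100.0)) ** 2
                             for ap in ref))
    return min(fingerprints, key=lambda p: signal_distance(fingerprints[p]))
```

Real systems refine this with averaging over the k nearest fingerprints or probabilistic models, but the dependence on a consistent calibration database is the same.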

D. Stability

Stability is the ability of the system to perform accurately even under perturbation. Perturbations may range from device failures (hardware failure, power cut, empty batteries, etc.) to environment changes, such as building topology modification or furniture reorganization. The latter case is the most critical, since nothing prevents the system from operating, but the changes may strongly impact its overall performance. Stability is closely related to fault tolerance and fault detection. Some work is conducted with this particular property in mind [7].

E. Accuracy

Accuracy is the criterion that immediately comes to mind when evaluating positioning system performance. However, as shown in this article, it is far from being the only one, nor is it always the most important; furthermore, it can be evaluated in several ways, possibly related to the positioning symbology. In coordinate systems, accuracy can be defined as the positioning error (distance between the location estimate and the real location), while in room presence systems, it is evaluated through the percentage of correct room detections.

The best way to compare several positioning systems is to run them in the same testbed and compare the results obtained from the series of tests. However, even such a comparison may be biased, especially if a positioning system was developed inside the considered testbed.

F. Cost

The cost of a system includes:

• the initial hardware cost,
• the deployment cost,
• the maintenance cost,
• the energy consumption.

Note that the energy consumption of the equipment is also related to the autonomy, particularly of the mobile terminals (cf. subsection IV-J).

G. Positioning Rate

The positioning rate determines the frequency at which the position of the mobiles is computed. It can be expressed in Hertz (number of positions per second) or with a time unit (delay between two successive positions).

The importance of this criterion is proportional to the mobile speed and to the number of mobiles to locate when positioning is centralized on a server.

H. Positioning Delay

The positioning delay is the delay between a positioning request and its resolution. It is not necessarily related to the positioning rate.

For instance, a high delay may be caused by a positioning algorithm that needs to gather data for several seconds before being able to compute the device's location.

A high delay coupled with a low positioning rate may be a symptom of a positioning system whose computation is too slow to determine a position.




I. Scalability

In Wi-Fi positioning systems, scalability is bound to the evolution of the size of the serviced area and to the number of devices to locate.

J. Energy Consumption

Depending on the system’s application, the energy con-sumption of the mobile terminal must be taken into account.In Wi-Fi positioning, energy can be used for transmitting po-sitioning requests, scan surrounding signals and/or to computeone’s location. These are the key points to optimize.

K. Publication

When a system is published outside the scientific domain, it usually means it is mature enough to be made available to regular users. Commercial systems are not necessarily better than R&D systems, but they are expected to be more robust and to provide a user-friendly interface.

There are several publication systems, among which two are particularly common: patented proprietary systems are protected, developed, and maintained only by one (or more) commercial organization. Free (Libre) and Open Source Software (FLOSS), on the other hand, is published with its source code and specifications and is maintained by a community which may include several commercial organizations.

V. OVERVIEW OF STATE OF THE ART WI-FI POSITIONING SYSTEMS

System          II-C1   IV-A   IV-B   IV-D   IV-E
RADAR [8]       no      c      b      u      3 m
Ekahau [9]      yes     c      b      u      1-3 m
Horus [10]      ?       c      c      u      4 m
OwlPS [11]      no      c      b      u      4.5 m
Nibble [5]      ?       r      b      u      95%
Aeroscout [12]  yes     c      c      u      3-10 m
Point2map [13]  no      c      b      u      4-5 m

System          IV-F   IV-G   IV-H   IV-I   IV-J   IV-K
RADAR [8]       1      2      2      1      4      p
Ekahau [9]      1      2      4      1      3      p
Horus [10]      1      3      4      2      3      s
OwlPS [11]      3      2      4      3      4      s
Nibble [5]      2      3      4      1      ?      s
Aeroscout [12]  3      3      4      2      4      p
Point2map [13]  2      2      3      1      4      p

TABLE I. Taxonomy applied to state of the art systems. Criteria numbered by their entry in the document.

Table I shows the systems' properties. Privacy is denoted based on the ability to locate someone without his authorization. Symbology is denoted c for coordinates and r for room presence. Spatial scale is denoted b, c, and w for building, campus, and wide systems respectively. Publication is denoted p and s for patented and published in scientific papers. Stability is denoted u and s for unstable and stable. Accuracy concerns the Wi-Fi-based results (since Aeroscout also relies on other components); the average accuracy is given because it is the only metric provided by all the papers, and it is based on the claims of the systems' authors. The other criteria range from 1 (poor) to 4 (excellent).

Concerning architecture, centralization, and calibration, all systems are in infrastructure mode, centralized, and calibrated.

VI. CONCLUSION AND FUTURE TRENDS

In this article, we present a set of criteria to qualify positioning systems not only through their raw accuracy, but also through various useful properties concerning either software architecture and models or hardware architecture. Indeed, studying positioning systems and their algorithms only through accuracy does not enlighten the necessary trade-offs between system cost, available hardware, people for maintenance, and so on.

We compare some major state of the art systems through our criteria. This comparison shows that most of the published systems are centralized and require calibration to be operational. Mostly, they use fingerprinting algorithms.

Next steps in the field of positioning system comparison methods shall include new criteria, such as system interoperability and ease of use, as provided for instance by a RESTful protocol.

REFERENCES

[1] OpenWrt official website, http://openwrt.org/.

[2] J. Hightower and G. Borriello, "A survey and taxonomy of location systems for ubiquitous computing," tech. rep., IEEE Computer, 2001.

[3] F. Fleuret, J. Berclaz, R. Lengagne, and P. Fua, "Multi-camera people tracking with a probabilistic occupancy map," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007.

[4] A. Ward, A. Jones, and A. Harper, "A new location technique for the active office," IEEE Personal Communications, vol. 5, pp. 42-47, October 1997.

[5] P. Castro, P. Chiu, T. Kremenek, and R. R. Muntz, "A probabilistic room location service for wireless networked environments," in UbiComp '01: Proceedings of the 3rd International Conference on Ubiquitous Computing, London, UK, pp. 18-34, Springer-Verlag, 2001.

[6] M. Kjaergaard, "A taxonomy for radio location fingerprinting," in Location- and Context-Awareness (J. Hightower, B. Schiele, and T. Strang, eds.), vol. 4718 of Lecture Notes in Computer Science, pp. 139-156, Springer Berlin / Heidelberg, 2007.

[7] C. Laoudias, M. Michaelides, and C. Panayiotou, "Fault tolerant positioning using WLAN signal strength fingerprints," in Indoor Positioning and Indoor Navigation (IPIN), 2010 International Conference on, pp. 1-8, Sept. 2010.

[8] P. Bahl and V. N. Padmanabhan, "RADAR: An in-building RF-based user location and tracking system," in INFOCOM (2), pp. 775-784, 2000.

[9] R. Roos, P. Myllymaki, H. Tirri, P. Misikangas, and J. Sievanen, "A probabilistic approach to WLAN user location estimation," International Journal of Wireless Information Networks, vol. 9, pp. 155-164, July 2002.

[10] M. A. Youssef, A. Agrawala, A. U. Shankar, and S. H. Noh, "A probabilistic clustering-based indoor location determination system," Tech. Report CS-TR-4350, University of Maryland, Mar. 2002.

[11] OwlPS project's official web page, http://owlps.pu-pm.univ-fcomte.fr/.

[12] A. E. V. Solutions, "Aeroscout system: Bridging the gap between Wi-Fi, active RFID and GPS."

[13] J. Cardona, F. Lassabe, and A. Herrera, "System and method for determining location of a Wi-Fi device with the assistance of fixed receivers," US Patent App. 13/069,219, May 14, 2013.



- chapter 5 -

Time of Flight, TOF, TOA, TDOA



Locating and classifying of objects with a compact ultrasonic 3D sensor

Christian Walter
Institute of Electrodynamics, Microwave and Circuit Engineering
Vienna University of Technology
Vienna, Austria
[email protected]

Herbert Schweinzer
Institute of Electrodynamics, Microwave and Circuit Engineering
Vienna University of Technology
Vienna, Austria
[email protected]

Abstract— Various applications exist for scene analysis based on ultrasonic sensors, including robotics, automation, map building and obstacle avoidance. We present a compact sensor for 3D scene analysis with a wide field of view extending the mainlobe of the transducer, inherent 3D location awareness and low cost. The sensor employs a centered electrostatic transducer and four microphones in a small-size spatial configuration. Accurate time of flight measurements are performed using pulse compression techniques. Low cost is achieved by using binary correlation techniques allowing the use of single-bit A/D converters. Precise angle of arrival measurements are performed using multichannel cross-correlation between microphones. This information, together with multiple measurements at different positions, is used to obtain a cloud of 3D reflection points. These points are further processed, segmented into groups and identified with physical objects if possible. Measurements are then compared with a simulation of the scene, showing the suitability of our sensor for scene analysis.

Keywords-component; ultrasonic, 3D scene analysis,

localization, indoor, map building

I. INTRODUCTION

Scene analysis is an important area for different applications like robotics, automation, supervision and map building. Key objectives of such a system are determination of the position, orientation and type of objects in an a-priori unknown environment. Using ultrasound for this task is beneficial due to its low cost, its low propagation speed allowing accurate time of arrival (ToA) measurement, its insensitivity to dust or foggy atmospheres, and its inherent data reduction [1]. Data reduction is an important aspect for scene analysis, as one of the main difficulties is segmentation and model fitting of data [2]. Before data is segmented, model fitting is not possible; on the other hand, segmentation already requires some idea about geometric objects, leaving us with a chicken-and-egg dilemma. Compared to optical systems, ultrasound has an advantage here due to the specular reflection properties of objects, where inherent data reduction exists. On the other hand, systems using ultrasound require some form of motion, as the information obtained from a single position is limited.

Different approaches for sensor design have evolved over the last decades, which can be distinguished by their measurement principle, whether they work in 2D or 3D, their geometric configuration and the required number of transducers. Early systems used simple sensor sweeping techniques as in [3]. In systems employing more than one sensor, the most common configuration is the binaural one for 2D localization. This has soon been extended to 3D by various researchers [4, 5].

Our proposed sensor configuration can be used for 3D localization, is low in cost and size, and provides high accuracy. The sensor consists of four microphones with a centered transmitter, where time of flight (ToF) measurements can be used for localization of passive reflection points in the room. The main distance information is contained in the ToF, whereas the direction information is contained in time difference of arrival (TDoA) measurements. At a given sensor position, the type of object determines whether a reflection point exists. Two constraints have to be met: first, the object must have an acoustically hard boundary such that acoustic waves are reflected, and second, the law of reflection must be fulfilled. As only limited information is obtained at a single position, sensor motion is required to gather enough information for scene analysis. Scene reconstruction is performed using the reflection points measured within the sensor coordinate system and the position of the sensor in the room; therefore both measurements should have high accuracy to obtain satisfying results. In case of mobile robot applications, a combination of this sensor with an indoor positioning system (IPS) is possible. LOSNUS, an IPS developed by our group which enables highly accurate position measurements with uncertainties smaller than 1 cm [6, 7], can be used for locating the sensor.

II. SENSOR DESIGN

A. Sensor

A sensor design similar to [5] is used, shown in Fig. 1. It consists of a cross of four microphones M1-M4 with a central ultrasonic transducer T; opposite microphones are spaced at an equal distance 2d. As a general rule, sensor construction is a tradeoff between spatial resolution [8] and compactness, which reduces both sensor size and the object dependence of measurements. Assuming only convex or planar objects, the time differences between the channels are bounded by 0 ≤ |ToFi − ToFj| ≤ 2d/c for the pairs M1/M2 and M3/M4. Between other pairs the time difference is bounded by √2·d/c ≈ 1.41·d/c.
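These geometric bounds can serve as a sanity filter on measured channel delays. A minimal sketch, assuming microphones on a cross at (±d, 0, 0) and (0, ±d, 0) following Fig. 1; the numeric values of d and c are illustrative, not the sensor's actual parameters:

```python
import itertools
import math

# Sanity filter based on the geometric time-difference bounds.
d = 0.02   # half the spacing between opposite microphones [m] (assumed)
c = 343.0  # speed of sound [m/s]

MICS = {1: (d, 0, 0), 2: (-d, 0, 0), 3: (0, d, 0), 4: (0, -d, 0)}

def tdoa_bound(i, j):
    """Largest possible |ToF_i - ToF_j|: microphone distance divided by c."""
    return math.dist(MICS[i], MICS[j]) / c

def plausible(tof):
    """True if a 4-channel ToF set {channel: seconds} obeys all bounds."""
    return all(abs(tof[i] - tof[j]) <= tdoa_bound(i, j)
               for i, j in itertools.combinations(MICS, 2))
```

For opposite pairs the bound evaluates to 2d/c and for adjacent pairs to √2·d/c, matching the constraints stated above.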


Figure 1. Sensor configuration

B. Signal processing

High resolution ToF measurements are usually performed using pulse compression techniques or phase measurements. Our system uses pulse compression with a linear frequency modulated (LFM) chirp, where the obtainable resolution is a function of the time-bandwidth product. Pulse compression requires a known signal, called the template, for detecting its presence in the received signal. Difficulties arise when a single template signal is applied outside of the main lobe of a broadband ultrasonic transducer: echoes arriving from directions outside the main lobe are heavily changed in phase and amplitude, and the correlation can drop by more than 50% [8, 9]. Furthermore, the ToF estimates are no longer correct due to imperfect matching with the template.
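The single-bit correlation idea can be sketched as follows. The chirp parameters match those given in Section IV (750 µs, 28 kHz bandwidth, 51 kHz center); the sample rate and the inserted delay are assumptions for the demo, not the paper's implementation:

```python
import math

# 1-bit pulse compression sketch: sign-quantize an LFM chirp and locate it
# in a received trace by binary cross-correlation.
FS = 500_000                           # sample rate [Hz] (assumed)
F0, BW, DUR = 37_000, 28_000, 750e-6   # chirp sweeps 37-65 kHz in 750 us

def lfm_chirp():
    n = int(DUR * FS)
    k = BW / DUR                       # sweep rate [Hz/s]
    return [math.sin(2 * math.pi * (F0 * (i / FS) + 0.5 * k * (i / FS) ** 2))
            for i in range(n)]

def sign_bits(x):
    """Single-bit 'A/D converter': keep only the sign of each sample."""
    return [1 if v >= 0 else -1 for v in x]

def binary_xcorr_peak(rx, template):
    """Lag (in samples) where the 1-bit template best matches the signal."""
    t, r = sign_bits(template), sign_bits(rx)
    scores = [sum(a * b for a, b in zip(t, r[lag:lag + len(t)]))
              for lag in range(len(r) - len(t) + 1)]
    return max(range(len(scores)), key=scores.__getitem__)

template = lfm_chirp()
rx = [0.0] * 123 + template + [0.0] * 50   # echo delayed by 123 samples
```

Here binary_xcorr_peak(rx, template) recovers the 123-sample delay. In hardware the sign-bit products reduce to XOR operations, which is what makes single-bit converters attractive for a low-cost design.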

A possible solution is to use multiple learned template signals, which can also be used for direction estimation as in [10]. Other possibilities are spatial prediction of the sensor response following Huygens' principle, modeling the sensor as a set of small point sources, as well as numerical integration over the transducer surface, where each point contributes to the pressure field in the far field [11]. A comparison of the different methods, point synthesis (point), no prediction (none) and manually extracted templates (measured), is shown in Fig. 2, where the sensor was placed in front of a flat wall and rotated from 0° to 60° on its x-axis. As the movement is continuous, so should the time differences be. The peak sensitivity of the sensor is about 4 microseconds per degree [8], making clear that these errors cannot be neglected if accurate positions of reflection points are needed.


Figure 2. Errors in time difference estimations for different methods (point = Huygens model, measured = manually extracted templates, none = single template, tdoa = channel cross correlation)

The proposed solution is to use pulse compression with a small set of templates for echo detection and then apply cross-correlation between channels to identify the individual time delays. The plot in Fig. 2 shows exactly this algorithm (TDoA), and it can be seen that there are no outliers from −8° to +60°.

The complete algorithm can be described as follows. For an input signal ri, 1 ≤ i ≤ 4, where i is the microphone index, binary cross-correlation is performed with all templates tj, 1 ≤ j ≤ T, of length M.

(1)

For the T cross-correlation waveforms, peaks are identified in all four channels. To suppress side lobes and to avoid multiple interpretations of the same echo, only the best peaks within a given time window are selected. In the next step, peaks are combined into groups; a set of peaks belongs to the same group if it obeys the time difference constraints presented in the sensor section. Afterwards, more accurate time differences are obtained by performing channel cross-correlation between the microphone pairs. Final ToF results are calculated as follows: the time delays are fitted to the ToF data, giving a single time offset; the reported ToF results from the offset plus the time delays.
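The peak-grouping step above can be sketched as follows. A single flat bound of 120 µs and the peak lists are invented for illustration; a real implementation would use the per-pair geometric bounds from Section II.A:

```python
import itertools

# Group correlation peaks across the four channels: a 4-tuple of peak
# times is kept only if every pairwise time difference respects the bound.
MAX_DT = {frozenset(p): 120e-6 for p in itertools.combinations((1, 2, 3, 4), 2)}

def group_peaks(peaks):
    """peaks: {channel: [times in s]} -> list of consistent 4-tuples."""
    channels = sorted(peaks)
    groups = []
    for combo in itertools.product(*(peaks[ch] for ch in channels)):
        ok = all(abs(combo[i] - combo[j])
                 <= MAX_DT[frozenset((channels[i], channels[j]))]
                 for i, j in itertools.combinations(range(len(channels)), 2))
        if ok:
            groups.append(combo)
    return groups

# Invented peak times: one consistent echo plus a stray peak on channel 1.
peaks = {1: [4.10e-3, 7.50e-3], 2: [4.16e-3], 3: [4.13e-3], 4: [4.12e-3]}
```

The stray 7.50 ms peak cannot satisfy the constraints with any combination from the other channels and is dropped automatically.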

C. Calculation of reflection points

Based on the four ToF measurements, distance estimates can be found by multiplication with the speed of sound. Having four distance estimates, the position of a reflection point in our coordinate system can be calculated according to [6], where d is the microphone spacing as shown in Fig. 1 and the di are the distances.

(2)
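Equation (2) itself is given in [6] and is not reproduced above. As a hedged stand-in, the following closed form can be derived under the Fig. 1 geometry, assuming the transmitter T at the origin, microphones at (±d, 0, 0) and (0, ±d, 0), and each measured distance being the round trip Di = |P| + |P − Mi|:

```python
import math

# Expanding |P - M_i|^2 = (D_i - |P|)^2 and subtracting the equations for
# opposite microphone pairs cancels the quadratic terms, giving x and y
# directly; summing all four equations gives R = |P|.
def reflection_point(D, d):
    """D = (D1, D2, D3, D4) round-trip distances; d = half mic spacing."""
    D1, D2, D3, D4 = D
    R = (sum(Di * Di for Di in D) - 4 * d * d) / (2 * sum(D))  # R = |P|
    x = (2 * R * (D1 - D2) - (D1 * D1 - D2 * D2)) / (4 * d)
    y = (2 * R * (D3 - D4) - (D3 * D3 - D4 * D4)) / (4 * d)
    z = math.sqrt(max(R * R - x * x - y * y, 0.0))  # reflector in front assumed
    return x, y, z
```

The sign of z is fixed by assuming reflectors in front of the sensor plane, since the planar microphone arrangement cannot distinguish the two half-spaces.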

III. SCENE ANALYSIS

A. Tracing of Objects

For further analysis, and if multiple measurements are available, it is important that reflection points which belong to the same physical object are grouped together. The proposed algorithm is designed for a mobile robot, where Ri denotes the position of the robot at time instant i. Let d(Ri, Rj) be the Euclidean distance between the two robot positions Ri and Rj. A measured position Pim at time instant i, where m is the measurement index, belongs to a group G if there exists a Pjn ∈ G with

(3)

S is the snap size and B is the backtracking length. The first condition expresses that the distance between two points cannot be larger than the distance the robot has moved plus a snap length, which accounts for measurement uncertainty. The second condition ensures that positions on different objects are not put in the same group.
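The exact form of condition (3) is not reproduced above, so the sketch below implements the two conditions as described in words: a point joins a group if some group member measured at most B steps earlier lies within the distance the robot has moved plus the snap size S. The values of S and B and all data are invented:

```python
import math

S = 0.05   # snap size [m] (assumed)
B = 10     # backtracking length [time steps] (assumed)

def assign(groups, i, p, robot):
    """groups: list of lists of (step, point); robot: position per step."""
    for g in groups:
        if any(i - j <= B and
               math.dist(p, q) <= math.dist(robot[i], robot[j]) + S
               for j, q in g):
            g.append((i, p))
            return
    groups.append([(i, p)])   # no compatible group: start a new object

robot = [(0.001 * i, 0.0, 0.0) for i in range(30)]  # 1 mm steps, as in Sec. IV
groups = []
assign(groups, 0, (1.00, 0.50, 0.0), robot)
assign(groups, 1, (1.00, 0.51, 0.0), robot)   # close enough: same object
assign(groups, 2, (2.00, 0.00, 0.0), robot)   # too far: new object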

B. Geometrical acoustics

Geometrical acoustics, or ray acoustics, describes sound propagation in terms of rays. The well-known equation (4) is the vector formulation of the law of reflection as shown in Fig. 3(a). All vectors are assumed to be unit vectors.


    r = d − 2 (d · n) n        (4)

While transmission, reflection and absorption can easily be treated by geometrical acoustics, diffraction effects can be accounted for by an extension of the theory, e.g. the geometrical theory of diffraction [12]. In case of a homogeneous medium, a surface-diffracted ray around a cylinder can be found by imagining a string from a point P to Q pulled taut, as shown in Fig. 3(b).

Figure 3. (a) Law of reflection, (b) Surface diffraction
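Both ray-acoustic primitives can be sketched directly: the vector law of reflection of eq. (4) and the taut-string length of a ray creeping around a cylinder. The 2D construction mirrors Fig. 3(b) with the cylinder centered at the origin; all coordinates are invented:

```python
import math

def reflect(d, n):
    """Reflect unit direction d at a surface with unit normal n (eq. 4)."""
    dn = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dn * ni for di, ni in zip(d, n))

def creeping_path(P, Q, a):
    """Shortest P -> cylinder surface -> Q path length in the plane."""
    rP, rQ = math.hypot(*P), math.hypot(*Q)
    tP = math.sqrt(rP * rP - a * a)      # tangent segment from P
    tQ = math.sqrt(rQ * rQ - a * a)      # tangent segment from Q
    alpha = math.acos((P[0] * Q[0] + P[1] * Q[1]) / (rP * rQ))
    arc = alpha - math.acos(a / rP) - math.acos(a / rQ)  # wrapped angle
    if arc <= 0:                          # string never touches: line of sight
        return math.dist(P, Q)
    return tP + tQ + a * arc
```

When the taut string does not touch the cylinder, the path degenerates to the straight line between P and Q, which is the line-of-sight case.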

IV. EXPERIMENTAL SETUP

Measurements have been performed in a standard office room with a size of 500 × 300 × 330 cm, shown in Fig. 4(a). The sensor was mounted on a linear belt (Schunk PowerCube™ PLB 070) assumed to be parallel to the sensor x-axis. Movement was performed in steps of 1 mm from x = 0 mm to x = 300 mm, yielding 301 measurements. In addition to the objects already present in the room, we added a small cylinder with a diameter of 84 mm. The electrostatic transmitter used is a Senscomp series 600 environmental grade transducer. The driving signal was a linear frequency modulated chirp with a duration of 750 µs, a bandwidth of 28 kHz and a center frequency of 51 kHz. The microphones used are four SPM0404UD5 from Knowles.

V. RESULTS

A. Localization result

The post-processed position data for the X/Y plane is shown in Fig. 4(b). The Z/Y plane is not shown due to limited space. In total, 1283 reflection points have been obtained from the ToF data. Three different physical objects could be identified which are responsible for the echoes: the cylinder placed as an additional object in the room, the wall (front side of the cabinet) and the floor. The floor is not located at x = 0 cm, as the sensor was not perfectly parallel to the floor and the sensitivity for echoes at angles of 90° is lowest. One object next to the floor was not further identified. Additional detected groups exist which are due to multiple reflections. For example, the group labeled "wall-sensor-wall" consists of positions calculated from the acoustic wave propagating from the sensor to the wall, being reflected back to the sensor, then reflected back to the wall and finally reflected back to the sensor. The echo "wall-cylinder-wall" is even more complex: the wave is reflected from the wall to the backside of the cylinder, then back to the wall and then back to the sensor. Despite the simplicity of the scene, the capabilities of our sensor construction can be seen. A simple range-based system with an opening angle of 20° would track the cylinder most of the time and would only see the wall at the extents of the belt. Information about direction and the floor would be completely lost.

B. Cylinder

The individual ToF for the microphone pairs M1/M2 and M3/M4 are shown in Figure 5.


Figure 5. ToF distances for the cylinder, (a) M1/M2, (b) M3/M4

As the cylinder axis is parallel to the z-axis, there is nearly no time difference for the microphone pair M3/M4. Looking at Fig. 5(a) and recalling our coordinate system and microphone placement, we can see that at position x = 0 cm microphone M1 is closer. At position x = 9.1 cm the ToF reaches its smallest value for M1; this is the case when the cylinder is centered between the transducer and microphone M1. At x = 11 cm the transducer is directly in front of the cylinder, giving the same ToF for both microphones. At x = 13.3 cm the first case is repeated for microphone M2.


Figure 4. (a) Photo of scene, (b) post-processed traces for the X/Y plane, (c) cylinder in the X/Y plane at an enlarged scale of 5 cm in X and 7 mm in Y


C. ToF and energy for wall

As simple as the wall seems to be as an object for scene analysis, the situation becomes more complicated when objects are in front of the wall. This can be seen in Fig. 6(a), where the ToF distances for the microphone pair M1/M2 are shown. We can see that for a microphone in the vicinity of the cylinder the ToF becomes larger. As this happens at different positions for each microphone, echoes have a perceived wrong direction. This is the reason why in Fig. 4(b) positions for the wall are not drawn behind the cylinder; instead they are all directed either to the left or to the right. Fig. 6(b) shows the effect of diffraction on the energy of the echo.


Figure 6. (a) ToF distances for the wall with diffraction effects, (b) echo energies with break-ins in case of diffraction

D. Spurious echoes

Two spurious echoes have been identified. One group of positions comes from exactly the same direction as the echoes from the wall and has exactly twice their distance from the sensor. This was verified with the data from Fig. 4(b), where the distance to the wall is 130.6 cm; the distance to the group labeled wall-sensor-wall is approximately twice as large.

The other spurious object identified was an echo coming from a reflection on the wall, again reflected from the backside of the cylinder and then reflected back to the sensor via the wall. The distance of the cylinder at x = −11 cm is 71.68 cm, the distance of the wall is 130.6 cm, and the diameter of the cylinder is 84 mm. Therefore the total acoustic path taken by the echo is 130.6 cm + 2(130.6 cm − 71.68 cm − 8.4 cm) + 130.6 cm = 362.24 cm. As reported positions correspond to half the acoustic path, this closely matches the distance of approximately 181 cm shown in Fig. 4(b).
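The path arithmetic above can be checked mechanically, using the values from the text (in cm) and the fact that reported positions correspond to half the total acoustic path:

```python
# Wall-cylinder-wall echo: sensor -> wall -> cylinder backside -> wall -> sensor.
wall = 130.6        # sensor-to-wall distance [cm]
cylinder = 71.68    # sensor-to-cylinder distance [cm]
diameter = 8.4      # cylinder diameter [cm]

gap = wall - cylinder - diameter     # wall to cylinder backside
path = wall + 2 * gap + wall         # total acoustic path, ~362 cm
half = path / 2                      # distance at which positions appear, ~181 cm
```
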

VI. SIMULATION

To verify correct operation of the sensor in a diffraction situation, the scene has been simulated in MATLAB using a plane wall as a reflector and a cylinder. The wall and the cylinder have been assumed to extend infinitely along the z-axis.

Figure 7. Simulation of scene with dots showing reflection points

For each sensor position on the belt, all possible reflection paths have been calculated, including direct reflections from the cylinder, direct reflections from the wall and diffraction across the cylinder. The result of the simulation in the X/Y plane is shown in Fig. 7. Similar to Fig. 4(b), when the sensor is in front of the belt, reflection points are either on the left or right side, where they form groups. Fig. 8(a) shows the simulation of the cylinder. Diffraction effects have been modeled as well, and the simulation results for the wall are shown in Fig. 8(b). These results agree well with the measurements in Fig. 5(a) and Fig. 6(a).


Figure 8. (a) ToF distances for cylinder, (b) ToF distances for wall

VII. CONCLUSION

A compact sensor, together with algorithms for 3D scene analysis, has been presented. Practical measurements and comparison with a simulation of the example scene outline the benefits compared to simpler solutions like range-based sensors, which neglect important details of the environment. Different objects have been identified, and due to the high resolution of the proposed system, identification of the object type is possible. Furthermore, practical problems have been identified, including diffraction effects around obstacles and fake objects due to multiple reflections. Further work in our group is ongoing with a new robot system with enhanced movement capabilities. The main focus of our future research is automated object classification and identification of fake objects due to multiple reflections and/or diffraction effects.

REFERENCES

[1] L. Kleeman, “Fast and accurate sonar trackers using double pulse coding”, IEEE Intelligent Robots and Systems, vol. 2, 1999

[2] S. I. Kim, S. J. Ahn, “Extraction of Geometric Primitives from Point Cloud Data”, ICCAS, June 2005

[3] J. Borenstein, Y. Koren, “Obstacle avoidance with ultrasonic sensors”, IEEE Journal of Robotics and Automation, vol. 4, April 1988.

[4] H. Akbarally, L. Kleeman, “A sonar sensor for accurate 3D target localisation and classification”, IEEE ICRA, vol. 3, May 1995

[5] G. Kaniak, H. Schweinzer, “A 3D Airborne Ultrasound Sensor for High-Precision Location Data Estimation and Conjunction”, IMTC, May 2008

[6] C. Walter, M. Syafrudin, H. Schweinzer, “A Self-contained and Self-checking LPS with High Accuracy”, ISPRS IJGI, 2013 – Unpublished

[7] H. Schweinzer, M. Syafrudin, “LOSNUS: An ultrasonic system enabling high accuracy and secure TDoA locating of numerous devices”, IEEE IPIN, Sept 2010

[8] C. Walter, H. Schweinzer, "An accurate compact ultrasonic 3D sensor using broadband impulses requiring no initial calibration", IMTC, 2012

[9] H. Elmer, H. Schweinzer, "Dependency of correlative ultrasonic measurement upon transducer's orientation," IEEE Sensors, vol. 1, 2003.

[10] G. Kaniak, H. Schweinzer, “Advanced ultrasound object detection in air by intensive use of side lobes of transducer radiation pattern”, IEEE Sensors, Oct 2008

[11] M. Zollner, “Schallfeld der kreisförmigen Kolbenmembran,” in Elektroakustik, 3rd ed., Springer, pp. 96-102, ISBN 3540646655

[12] J. Keller, “Geometrical Theory of Diffraction”, Journal of Optical Society of America, Vol 52, 1962

[13] A. Pierce, “Diffraction of sound around corners and over wide barriers”, Journal of Acoustical Society of America, Vol 55, 1974


Location Estimation Algorithms for the High

Accuracy LPS LOSNUS

Mohammad Syafrudin

Institute of Electrodynamics, Microwave and Circuit

Engineering

Vienna University of Technology

Vienna, Austria

[email protected]

Christian Walter and Herbert Schweinzer

Institute of Electrodynamics, Microwave and Circuit

Engineering

Vienna University of Technology

Vienna, Austria

[email protected]

[email protected]

Abstract—Local positioning systems (LPSs) based on ultrasound are mostly aimed at tracking mobile devices or persons. The LPS LOSNUS, however, is mainly designed for locating numerous static devices with high accuracy, especially in a wireless sensor network (WSN). Applications in WSNs could be significantly improved, including network integration based on node locations, supervising locations with respect to accidental disarrangement, and detecting faking of node locations. This article presents a localization algorithm for the LPS LOSNUS in a six-transmitter configuration which can tolerate a single failure in a ToA measurement resulting from arbitrary failure modes. The localization algorithm uses hyperbolic multilateration in combination with proximity-based grouping and final determination of the position by averaging, selection by smallest GDOP, or applying a non-linear least squares algorithm to the correct ToAs. The article includes a short description of the system, the algorithms and a performance comparison to other localization algorithms based on real-world measurements.

Keywords—3D localization; ToA; TDoA; LPS; GDOP

I. INTRODUCTION

The localization algorithm's performance mainly depends on the accuracy of the distance estimation and the geometrical constellation. Inaccuracies in the distance estimates can be due to signal interference by multi-path propagation, or obstacles in the direction of wave propagation resulting in diffraction, damping or blocking. Bad positioning of the transmitters (Txs) can result in a large geometric dilution of precision (GDOP), depending on the receiver position. Different localization algorithms exist for estimating the position of static or mobile devices. The classical non-linear least squares (NLS) estimates the coordinates by minimizing the sum of squared residuals. It is highly sensitive to erroneous measured distances, and even a single erroneous measurement will affect the estimated position [1]. Different methods which do not require a-priori information about non-line-of-sight (NLOS) conditions have been proposed to mitigate the NLOS problem: robust multilateration (RMult) [1] estimates the position by minimizing the sum of the absolute values of the residuals, yielding better results than the classical NLS; least median of squares (LMedS) [2] estimates the position by selecting the solution with the smallest median of the squared residuals and also delivers better performance than the classical NLS.

In this article we present a localization algorithm for the LPS LOSNUS in a six-Tx configuration which can tolerate a single failure in a ToA measurement resulting from arbitrary failure modes, delivers high locating accuracy and reduces biasing of the final location result. The localization algorithm uses TDoA multilateration in combination with proximity-based grouping and final determination of the position by NLS, averaging, or selecting the position with the smallest GDOP [3]. Grouping in combination with averaging or GDOP selection yields fast deterministic algorithms with low computational complexity compared to iterative minimization algorithms.

II. BASIC PRINCIPLE OF LOSNUS

LOSNUS is a local positioning system (LPS) based on ultrasonic range measurements. It is mainly designed for locating numerous static devices with high accuracy [4], although moving objects can be located with reduced accuracy. The LOSNUS operating phase is based on Tx positions obtained by a calibration process described in [5]. The calibration remains valid as long as the Txs are not moved. Calibration uses ToF measurements and requires at least four receivers, six Txs and a known reference distance. The reference distance is used for scaling the output of the calibration algorithm and allows the calibration algorithm to work only with ToF ratios instead of absolute distances. In the operating phase of LOSNUS, ToA measurements are performed, enabling only TDoA algorithms for locating. The transmitters are fired sequentially with a given protocol (Fig. 1a). ToA measurements are obtained using binary cross-correlation with a known reference signal realized as a linear frequency modulated chirp (Fig. 1b). Txs are identified by a fixed-frequency coding.

Figure 1. (a) Sequence of signal transmission using well defined delays to ensure non-overlapping reception of frames. (b) Transmitted frames consisting of a constant linear frequency modulated chirp and a transmitter coding time slot.

III. LOCALIZATION ALGORITHM

The basic TDoA equation is given in (1), where ti and tj are the ToA measurements for transmitters Txi and Txj, Rx is the unknown receiver position and c is the speed of sound. The value of c is best estimated by using a known distance from a permanently installed fixed receiver used during calibration.

    c · (ti − tj) = ||Txi − Rx|| − ||Txj − Rx||        (1)

Three such equations can be used for an analytical solution of the receiver position. In case minimization algorithms are used, the time difference residuals are calculated as shown in (2), evaluated at the current position estimate Rx.

    rij = c · (ti − tj) − (||Txi − Rx|| − ||Txj − Rx||)        (2)

The classical NLS minimizes the sum of the squares of all residuals, RMult minimizes the sum of the absolute values of the residuals, and LMedS estimates the position by selecting the solution with the smallest median of the squared residuals.
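A residual check against the hyperbolic equations can be sketched as follows. The transmitter layout and receiver position are invented; the common clock offset in the ToAs illustrates why only time differences are usable in the operating phase:

```python
import itertools
import math

C = 343.0  # speed of sound [m/s]
TXS = [(0, 0, 2.5), (4, 0, 2.5), (4, 3, 2.5),
       (0, 3, 2.5), (2, 0, 2.5), (2, 3, 2.5)]  # invented Tx positions [m]

def toas(rx, offset=0.0):
    """Ideal ToA measurements with an unknown common clock offset."""
    return [math.dist(tx, rx) / C + offset for tx in TXS]

def residuals(rx, t):
    """r_ij = c(t_i - t_j) - (|Tx_i - Rx| - |Tx_j - Rx|) for all pairs."""
    return [C * (t[i] - t[j])
            - (math.dist(TXS[i], rx) - math.dist(TXS[j], rx))
            for i, j in itertools.combinations(range(len(TXS)), 2)]
```

At the true receiver position all residuals vanish regardless of the clock offset, since the offset cancels in every time difference; at a wrong position they do not.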

TABLE I. COMPARISON OF THE ALGORITHMS

                       Compu-     A-priori   Statistical errors    Standard deviation
                       tational   NLOS       from 230 measure-     for each position [mm]
 No  Algorithm         complexity info.      ments [mm]
                                             Max.      Avg.        Max.    Min.   Avg.
 1   Classical NLS     Yes        No         > 1 m     102.4       837.2   0.61   166.4
 2   RMult             Yes        No         > 1 m     58.85       911.7   0.76   93.20
 3   LMedS             Yes        No         30.65     10.23       9.33    2.0    4.86
 4   Group(NLS)        Yes        Yes        13.08     7.81        1.76    0.61   1.18
 5   Group(Mean)       No         Yes        14.01     8.26        2.81    0.63   1.49
 6   Group(DOP)        No         Yes        14.88     7.97        2.45    0.65   1.31

A. Grouping Algorithms

In our test case, grouping first computes all 15 = C(6,4) possible solutions; six is the number of Txs and four is the number of ToAs needed for the analytical solution. In case of no outliers, all 15 positions will be close, where the spread radius R is given by three times the expected GDOP. The a-priori information required is the ToA standard uncertainty. In case of a single error in the ToA measurements, only 5 = C(5,4) positions are close together while the others are spread. Simulations and practical verifications have shown that in the TDoA case the other positions do not form groups with a cardinality larger than the correct group. The algorithm can be described as follows. Let Loc1, …, LocM, M = C(6,4), be the set of all positions calculated by the TDoA algorithm. We define the respective groups as

(3)

From the set of groups, the largest group with a cardinality of at least 5 is selected. If such a group cannot be found, more than one ToA must have been incorrect. Group(Mean) calculates the position by taking the mean of the x/y/z coordinates of the positions within the best group. As each position in the group belongs to a specific set of ToAs which belong to specific Txs, the estimated GDOP can be calculated for each position; based on this, the element with the smallest GDOP can be selected, which we call Group(DOP). The last method uses the elements in the best group to identify the correct ToAs, with which the NLS can be executed. In case of outliers it significantly outperforms the classical NLS, as outliers are not used as input data for the algorithm.
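The grouping step can be sketched as follows. The TDoA solver producing the 15 candidate positions is omitted, and the grouping radius R is an invented stand-in for three times the expected GDOP:

```python
import itertools
import math

R = 0.05  # grouping radius [m] (stand-in for 3x expected GDOP)

SUBSETS = list(itertools.combinations(range(6), 4))  # the 15 Tx subsets

def group_locate(positions):
    """positions: candidate solutions, one per subset -> (group, mean)."""
    best = []
    for p in positions:
        group = [q for q in positions if math.dist(p, q) <= R]
        if len(group) > len(best):
            best = group
    if len(best) < 5:          # C(5,4) = 5: at most one faulty ToA tolerated
        raise ValueError("more than one ToA measurement was faulty")
    mean = tuple(sum(c) / len(best) for c in zip(*best))
    return best, mean
```

With one faulty ToA, the ten subsets containing it scatter while the five clean ones cluster, so the cluster test of cardinality ≥ 5 isolates the correct solutions before averaging.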

IV. RESULTS

A. Definition of algorithm error

Due to the calibration process, known reference positions are available which can be used for defining the error at each position. Let Pri, 1 ≤ i ≤ 23, be the reference positions along the reference belt. The error is calculated as

(4)

B. Comparison with other methods

Fig. 2 shows errors from repeated measurements at the positions of the belt. It can be seen that the algorithms LMedS and grouping are resilient to outliers. For some outliers RMult is resilient, but NLS is not; in case of outliers RMult performs better than NLS most of the time. Grouping performs best regarding both maximum and average errors. The average error is smallest for Group(NLS). If simpler algorithms like the mean or selection by GDOP are used, performance is still acceptable. Within the grouping variants, the standard deviation is highest for Group(Mean). Grouping always outperforms the NLS, RMult and LMedS algorithms. The comparison of the algorithms is summarized in Table I.

Figure 2. Error comparison of different methods for selected positions on the belt.

V. CONCLUSION

This article presented a localization algorithm for the LPS LOSNUS in a six-transmitter configuration which is designed for high accuracy. The localization algorithm uses hyperbolic multilateration in combination with proximity-based grouping and is able to tolerate a single failure in a ToA measurement resulting from different failure modes.

REFERENCES

[1] Nawaz and N. Trigoni, "Robust localization in cluttered environments with NLOS propagation," in IEEE 7th MASS'10, 2010, pp. 166-175.

[2] R. Casas et al., "Hidden issues in deploying an indoor location system," IEEE Pervasive Computing, 2007, pp. 62-69.

[3] J. D. Bard and F. M. Ham, "Time difference of arrival dilution of precision and applications," IEEE Trans. on Signal Processing, vol. 47, 1999.

[4] H. Schweinzer and M. Syafrudin, "LOSNUS: An ultrasonic system enabling high accuracy and secure TDoA locating of numerous devices," in IPIN, 2010, pp. 1-8.

[5] C. Walter et al., "A self-contained and self-checking LPS with high accuracy," ISPRS Int. J. Geo-Inf., 2013, vol. 2, pp. 908-934.


Infrastructure-less TDOF/AOA-based Indoor Positioning with Radio Waves

Canan Aydogdu
Electrical and Electronics Engineering Department
Izmir Institute of Technology
Izmir, Turkey
[email protected]

Kadir Atilla Toker
Software Engineering Department
Izmir University
Izmir, Turkey
[email protected]

Ilkay Kozak
Politeknik Ltd.
Izmir, Turkey
[email protected]

Abstract—Infrastructure-less indoor positioning is a necessity for mobile ad-hoc networks (MANETs), which are required to work in any indoor environment. A MANET formed by emergency first-responders in a damaged building where either no infrastructure exists or the existing infrastructure is useless, a group of soldiers in indoor enemy territory, a group of adventurers in a cave, robots underground, distant reconnaissance vehicles on other planets, cubesats in space, etc., are example scenarios where infrastructure-less indoor positioning is inevitable. Although ultra-wideband (UWB), ultrasound and infrared indoor positioning techniques have been proposed, the communication range becomes lower for higher-precision techniques.

In this study, we propose an infrastructure-less indoor positioning technique which is expected to work for an indoor range of about 150 m and is based on distance and direction angle measurement. An experimental study is carried out with a pair of wireless devices equipped with field programmable gate arrays (FPGAs) and transparent ISM band transceivers. The direction of the object to be located is determined by a rotational antenna. The technique developed is expected to achieve a positioning error of at most ±1.5 m for distance measurement, while determining the angle of direction within acceptable values and achieving a 150-meter indoor range. The developed technique provides localization with high enough precision for most of the above-mentioned application scenarios and can be extended to a larger number of users/devices in a MANET to build a localization map of the network.

Keywords—indoor localization; infrastructure-less; time difference of flight (TDOF); mobile ad-hoc networks (MANET); field programmable gate arrays (FPGA)

I. INTRODUCTION

Mobile ad-hoc networks (MANETs) are collections of mobile users/objects that form a wireless network in an ad-hoc manner, without the need for an infrastructure. Their decentralized operation and self-healing property, together with mobility support, position MANETs to play a significant role in future emergency/rescue operations, disaster-relief scenarios and military networks, where the position information of mobile users/objects is critical. Infrastructure-less positioning is necessary for MANETs in indoor environments, and in outdoor environments where the global positioning system (GPS) is unavailable or provides insufficient precision.

Owing to the life-critical missions undertaken in emergency, disaster-relief or military operations, the accuracy of the infrastructure-less positioning employed in MANETs is important. Moreover, since no infrastructure is used, the accuracy of the distances measured among the mobile users of a MANET becomes critical. Time-of-arrival (ToA) and time-difference-of-flight (TDOF) methods have proven to achieve better distance-measurement accuracy than received-signal-strength (RSS) measurements, which fluctuate inconsistently due to multipath effects. Hence, in this study we use TDOF techniques to measure the distances among users, which also eliminates the need for time synchronization among mobile users.

Moreover, emergency, disaster-relief and military operations generally take place in environments where obstacles between users are inherent. For example, firemen inside a building have to locate each other in a non-line-of-sight environment where walls and floors form various obstacles; soldiers inside a cave or rescue members inside a building damaged by an earthquake likewise have no line of sight. Hence, the signals of a suitable localization technique should penetrate obstacles as much as possible. Acoustic, infrared, optical and high-frequency radio signals are not appropriate for these applications; in this study, we use low-frequency radio waves.

We propose an infrastructure-less indoor positioning technique based on TDOF distance measurements with low-frequency radio waves and angle-of-arrival (AOA) measurements with a rotating directional antenna. An experimental study is carried out with a pair of wireless devices equipped with FPGAs and transparent ISM-band transceivers, targeting an indoor range of about 150 m. The advantage of asynchronous digital logic such as an FPGA is that it allows the control-signal computation to


Figure 1. Range versus accuracy of infrastructure based positioning systems used today [3]

propagate without holding intermediate memory. The elimination of the computational delay reduces the reference-to-output delay significantly. The FPGAs are used for time-difference-of-flight (TDOF) measurements in order to determine the distance between two units. The direction of the object to be located is determined by a rotating directional antenna.

The technique is expected to achieve a positioning error of at most ±1.5 m in distance measurement at the chosen operating frequency, while determining the direction angle within acceptable bounds and achieving a 150 m indoor range. It provides localization with sufficient precision for most of the above-mentioned application scenarios and can be extended to a larger number of users/devices in a MANET to build a localization map of the network.

II. LITERATURE REVIEW

The global positioning system (GPS) has penetrated into

many aspects of our daily lives and is widely used by applications for tracking vehicles, people and goods, as well as for navigational search. However, GPS is unusable when fewer than four GPS satellites are in line of sight, such as inside buildings, caves or underground; in outdoor environments such as urban canyons formed by high buildings; in underwater tunnels; or for cubesats in space. The various positioning systems proposed for indoor environments can be grouped into four categories:

1) Positioning techniques based on deployed infrastructure such as Wi-Fi, Bluetooth, UWB, infrared or GSM [1-3, 7-10].

2) Positioning techniques making use of sensors such as accelerometers and gyroscopes in addition to the infrastructure-based positioning systems above [11, 12].

3) Infrastructure-less positioning systems that make use of such sensors only [13].

4) Infrastructure-less radio frequency based positioning systems [14-18].

Positioning systems in the first and second groups depend on a specific infrastructure; a summary of various techniques and their range versus accuracy is shown in Figure 1. Infrastructure-based positioning systems are specific to the deployed place and are not applicable to many situations, such as emergency first responders in a damaged building where no infrastructure exists or the existing infrastructure is unusable. A group of soldiers in indoor enemy territory, a group of adventurers in a cave, robots underground, distant reconnaissance vehicles on other planets, cubesats in space, etc., are other example scenarios where infrastructure-less indoor positioning is inevitable.

Infrastructure-less positioning with sensors such as accelerometers and gyroscopes, the third group, has been experimentally shown to exhibit positioning errors that grow with the distance travelled; for example, a 2-4 m positioning error occurs over a 49 m travelled distance in [13].

The fourth category is infrastructure-less positioning by radio waves, which is the focus of this study [14-19]. Infrastructure-less indoor positioning, also referred to as ad-hoc mobile positioning, is illustrated in Figure 2. It is a necessity for mobile ad-hoc networks (MANETs), which are required to work in any indoor environment, as in the example scenarios listed in the introduction.

Research on infrastructure-less positioning with radio waves has focused either on theory or on simulations [14-17], dealing with cooperative localization or efficient position-calculation methods at the medium-access-control and higher layers. An ultra-wideband (UWB) based system developed by Decawave provides 10-15 cm positioning accuracy [18] for tracking goods in a factory or tracking people's possessions with high accuracy. The main problems with UWB tracking are the high cost of the systems and the low communication range. Providing a 1 GHz bandwidth requires UWB to operate at high carrier frequencies, which limits the communication range to about 20 m and makes UWB inefficient for many applications, including emergency situations.

An infrastructure-free positioning device is introduced by Lambda:4 [19] in [20]. A handheld device weighing 1.2 kg is capable of locating cigarette-pack-sized transmitters. This infrastructure-less positioning system has an accuracy of 1-5 m over a range of 2-5 km. The device was developed and patented for emergency first responders [21].

The main problem with this device is its use of the 2.4 GHz ISM band, where interference from Wi-Fi, Bluetooth and ZigBee may become a problem. Moreover, the range and penetration of 2.4 GHz radio waves through several walls and floors are low compared to lower-frequency radio waves.


Figure 2. Indoor positioning (inside a building, tunnel, cave or underground), outdoor positioning where GPS signals are jammed on purpose, and positioning in space require a positioning system other than GPS. Such a system may have two different structures: a) a deployed infrastructure-based positioning system; b) an infrastructure-free positioning system. The focus of this study is the development and experimentation of an infrastructure-free positioning system.

III. TARGETS FOR INFRASTRUCTURE-LESS POSITIONING

Despite the variety of positioning systems proposed so far, a positioning technology for mobile ad-hoc networks such as emergency first responders, military units, cubesats, etc. still does not exist. The major targets to be achieved by an infrastructure-less positioning system for emergency, disaster-relief and military applications are as follows:

1. Sufficient precision: a localization accuracy of a few meters or better is required in order to determine where a mobile user inside a building is.

2. High range: although range depends on the density of mobile users in a MANET, a range of at least a few hundred meters is required to maintain connectivity.

3. High penetration through walls, floors and concrete: Mobile users should be localized despite obstacles among themselves.

4. Low interference: A dedicated channel or interference mitigating techniques should be used in these life-critical applications.

The first target is achieved by TDOF distance measurements, whereas the second and third targets are achieved by using a low radio frequency, selected as 868 MHz for the current experiments. The fourth target requires further study in light of established regulations for emergency applications.

IV. METHOD

The scope of this study is to develop an infrastructure-free positioning technique, as in Figure 2.b, that achieves sufficient precision, range and penetration through obstacles for emergency, disaster-relief and military applications. Each mobile user i in the MANET determines the distance dij to each of its neighbors j ∈ S, for i ≠ j, where S is the set of users in the MANET. Each node determines the distance to its neighbors by TDOF measurements obtained by sending a broadcast packet and receiving answers from its neighbors. Figure 3 illustrates the time measurements at different time instants. Due to the mobility of nodes in a MANET, distance measurements are repeated at different time instants s, obtaining dij(s).

A. Initialization

All nodes of the MANET are switched on and exchange identification and address information. A pseudo-random timing sequence ∆tij is calculated at nodes i and j by a function, which is the same for all nodes and has the addresses of node i and node j as inputs. This pseudo-random timing sequence ∆tij has a finite size and repeats itself upon completion. It is used for adding a fixed delay to the TDOF measurements in order to mitigate the inconsistent delays introduced by transceiver hardware while switching between transmit and receive states.
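As a concrete illustration of the initialization step, both ends of a link can derive the same finite delay sequence deterministically from the pair of addresses, so no exchange of delay values is needed. The sketch below is hypothetical: the hash function, sequence length and delay range are our assumptions, not choices stated in the paper.

```python
import hashlib

def delay_sequence(addr_i, addr_j, length=64, max_delay_us=1000):
    """Derive a shared pseudo-random delay sequence Δt_ij from two
    node addresses (order-independent, same result at both nodes)."""
    # Seed from the unordered address pair so nodes i and j agree.
    seed = hashlib.sha256(
        f"{min(addr_i, addr_j)}:{max(addr_i, addr_j)}".encode()).digest()
    delays, counter = [], 0
    while len(delays) < length:
        # Expand the seed into as many 16-bit words as needed.
        block = hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        for k in range(0, len(block), 2):
            if len(delays) == length:
                break
            word = int.from_bytes(block[k:k + 2], "big")
            delays.append(word % max_delay_us)  # delay in microseconds
        counter += 1
    return delays
```

Because the function is symmetric in its address arguments, `delay_sequence(i, j)` and `delay_sequence(j, i)` yield the identical finite sequence, which then repeats upon completion as described above.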

B. Distance measurement

Each mobile user i sends a broadcast packet including a preamble, during which synchronization between the two mobile units is achieved. The transmitting node i starts a counter at the end of the last bit sent. The correlation between the expected bit sequence and the received bit sequence provides the exact timing of the end of the last bit at the receiving unit j. After delaying for ∆tij(m), at the m-th reception, node j replies to node i.

Node i records the time of the first bit received from node j and checks the identity and address of the received packet. If this packet is the one that was sent, the time difference ∆Tij(m) between sending the packet to node j and receiving the reply is obtained. The distance between nodes i and j at the m-th time instant, dij(m), is obtained by

dij(m) = c (∆Tij(m) − ∆tij(m)) / 2     (1)
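In code, eq. (1) is a one-liner. The helper name below is ours, and the timing values are assumed to have already been converted from FPGA counter ticks to seconds.

```python
C = 299_792_458.0  # speed of light in m/s

def tdof_distance(delta_T, delta_t):
    """Eq. (1): distance is half the round-trip flight time
    (total elapsed time minus the agreed pseudo-random delay),
    multiplied by the speed of light."""
    return C * (delta_T - delta_t) / 2.0
```

For example, a measured round trip of 1 ms + 2 µs with a 1 ms agreed delay leaves 2 µs of flight time, i.e. roughly 300 m of round trip and about 300 m of one-way distance.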

C. Angle measurement technique

A directional antenna is rotated at each time instant m, and dij(m) measurements are taken in each 45° angular interval. In this way, the angular position of the neighbors is obtained at each node. The angular positions, together with the distances to

978-1-4673-1954-6/12/$31.00 ©2013 IEEE


Figure 3. Method for distance measurement: mobile user i measures the distance of users j and k.

each neighbor provide a localization map for the MANET without using an infrastructure.
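A hypothetical sketch of how each node could turn its measured (distance, 45° sector) pairs into local (x, y) coordinates for the localization map described above. Placing a neighbor at the center angle of its sector is our assumption; the paper only resolves the angle to a 45° interval.

```python
import math

def neighbor_xy(d_ij, sector, sector_deg=45.0):
    """Map a measured distance and antenna sector index (0..7 for
    45-degree sectors) to local Cartesian coordinates, assuming the
    neighbor sits at the sector's center angle."""
    theta = math.radians((sector + 0.5) * sector_deg)
    return d_ij * math.cos(theta), d_ij * math.sin(theta)

def localization_map(measurements):
    """measurements: {neighbor_id: (distance_m, sector_index)} ->
    {neighbor_id: (x, y)} in the measuring node's local frame."""
    return {nid: neighbor_xy(d, s) for nid, (d, s) in measurements.items()}
```

The resulting positions are only as precise as the 45° sector width allows; finer antenna steps would tighten the angular error accordingly.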

V. FUTURE WORK

Experiments are being carried out to mitigate interference at the radio-frequency module in order to detect the exact timing of signal reception. The current experiments with a pair of nodes will be extended to a group of nodes in a MANET in future work.

REFERENCES

[1] H. Liu, H. Darabi, P. Banerjee, and J. Liu, "Survey of wireless indoor positioning techniques and systems," IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 37, no. 6, November 2007.

[2] Y. Gu, A. Lo, and I. G. Niemegeers, "A survey of indoor positioning systems for wireless personal networks," IEEE Communications Surveys and Tutorials, vol. 11, no. 1, first quarter 2009.

[3] R. Mautz, "Overview of current indoor positioning systems," Geodesy and Cartography, vol. 35, no. 1, pp. 18-22, 2009. DOI: 10.3846/1392-1541.2009.35.18-22.

[4] A. Magnani and K. K. Leung, "Self-organized, scalable GPS-free localization of wireless sensors," in Proceedings of WCNC 2007, pp. 3801-3806.

[5] N. Yu, J. M. Kohel, L. Romans, and L. Maleki, "Quantum gravity gradiometer sensor for Earth science applications," Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA.

[6] CXM543 Datasheet, Willow Technologies Ltd, http://www.willow.co.uk/CXM543_Datasheet.pdf

[7] S. Gezici, Z. Tian, G. B. Giannakis, H. Kobayashi, A. F. Molisch, H. V. Poor, and Z. Sahinoglu, "Localization via ultra-wideband radios: a look at positioning aspects for future sensor networks," IEEE Signal Processing Magazine, vol. 22, no. 4, pp. 70-84, 2005.

[8] S. Holm, "Hybrid ultrasound-RFID indoor positioning: combining the best of both worlds," in Proceedings of the IEEE International RFID Conference, 2009, pp. 155-162.

[9] D. Skournetou and E. Lohan, "Pulse shaping investigation for the applicability of future GNSS signals in indoor environments," in Proceedings of the 2010 International Conference on Indoor Positioning and Indoor Navigation (IPIN), September 2010.

[10] Ubisense, http://www.ubisense.net/en

[11] A. Rai, K. K. Chintalapudi, V. N. Padmanabhan, and R. Sen, "Zee: zero-effort crowdsourcing for indoor localization," in Proceedings of MobiCom'12, August 22-26, 2012, Istanbul, Turkey.

[12] H. Wang, S. Sen, A. Elgohary, M. Farid, M. Youssef, and R. Roy Choudhury, "Unsupervised indoor localization," in Proceedings of MobiSys'12, June 25-29, 2012, Low Wood Bay, Lake District, UK.

[13] G. Trehard, S. Lamy-Perbal, and M. Boukallel, "Indoor infrastructure-less solution based on sensor-augmented smartphone for pedestrian localisation," in Proceedings of the 2012 International Conference on Ubiquitous Positioning, Indoor Navigation and Location Based Service, 4-5 October 2012.

[14] D. D. Perkins, R. Tumati, H. Wu, and I. Ajbar, "Localization in wireless ad hoc networks," in Resource Management in Wireless Networking, Network Theory and Applications, vol. 16, 2005, pp. 507-542. ISBN: 978-0-387-23807-4.

[15] Z. Merhi, M. Elgamel, R. Ayoubi, and M. Bayoumi, "TALS: trigonometry-based ad-hoc localization system for wireless sensor networks," in Proceedings of the 7th International Wireless Communications and Mobile Computing Conference (IWCMC), pp. 59-64, 4-8 July 2011.

[16] T. Eren, "Cooperative localization in wireless ad hoc and sensor networks using hybrid distance and bearing (angle of arrival) measurements," EURASIP Journal on Wireless Communications and Networking, 2011:72, 2011.

[17] D. Dardari, C.-C. Chong, D. B. Jourdan, and L. Mucchi, "Cooperative localization in wireless ad hoc and sensor networks," EURASIP Journal on Advances in Signal Processing, 2008:353289, 2008.

[18] Decawave, http://www.decawave.com/

[19] Lambda:4, http://www.lambda4.com/EN/

[20] R. Reimann, "Locating and distance measurement by high frequency radio waves," in Proceedings of the 2011 Indoor Positioning and Indoor Navigation Conference (IPIN 2011), short papers, posters and demos, A. J. C. Moreira and F. M. L. Meneses (eds.), Guimarães, Portugal.

[21] R. Reimann, "Method to determine the location of a receiver," Patent WO 2012/155990, November 22, 2012.


Sound Based Indoor Localization – Practical Implementation Considerations

João Moutinho
INESC TEC (formerly INESC Porto), Faculty of Engineering, University of Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal

Diamantino Freitas
Faculty of Engineering, University of Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal

Rui Esteves Araújo
INESC TEC (formerly INESC Porto), Faculty of Engineering, University of Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal

Abstract—Among the several signal types used for state-of-the-art indoor personal localization, ultrasound, electromagnetic and light-based signals stand out as the most popular. However, when balancing their characteristics, audible-sound-based localization emerges as an interesting possibility, since it allows the use of off-the-shelf, inexpensive components. No solution will turn into a real application if it is too difficult or costly to implement. On the fixed-infrastructure side, one can reasonably assume that many indoor spaces already provide a public-address sound system. On the moving person's side, one may expect that a sound receiver with a wireless transmitter with sufficient indoor coverage can be carried, possibly implemented by means of a cell phone. Building a localization system on these two premises together is the objective of the current work. Not using dedicated proprietary tools raises inherent problems: adapting a public-address sound system to allow the simultaneous, separate excitation of loudspeakers so that ToF (time of flight), and therefore distance, may be estimated; simultaneous access by multiple users; hiding data in sound so that only a reasonably small disturbance of the acoustic environment occurs; and using a simple audio channel, such as that of a common cell phone, as a localizable acoustic receiver. This paper focuses on theoretical and practical aspects of a possible real implementation of an audible-sound-based indoor localization scheme using a standard audio-channel receiver. Experimental results using TDMA, FDMA and CDMA access schemes in a sound communication system show that it is possible to achieve an interesting localization accuracy of a few centimeters in non-ideal conditions (a reverberant room). Issues concerning the moving person's device (latency and limited directivity/frequency response) are addressed and possible solutions are proposed.

Keywords-TDMA; FDMA; CDMA; TDE; ToF; Sound-based; Indoor Localization

I. INTRODUCTION

One of the most popular research areas in ubiquitous or pervasive computing is the development of location-aware systems [1]. These are systems in which electronic devices provide users with some kind of information or service depending on their location. The fundamental component of a location-aware system is the location-sensing mechanism. In order to develop an inexpensive, easily deployable, widely compatible localization system, one must adapt the problem constraints to everyday present technologies and deal with the consequences of not having dedicated equipment to perform the measurements and thereby achieve indoor localization. It is therefore this paper's objective to discuss some of the problems at hand while providing some possible solutions. The presented results also demonstrate the importance of issues such as the choice of the (audible) excitation signal and its directivity, medium access for multiple users, the multiple-access technique, and the importance of the time-of-flight (ToF) measurement in position determination.

II. RELATED WORK

Many technologies using different types of signal have already been studied to provide reliable, precise and accurate localization of persons or devices. Existing approaches have explored almost every type of signal: infrared, radio frequency, artificial vision, inertial sensors, ultrasound and, finally, audible sound.

Even though every type has its own pros and cons, the audible-sound approach is one of the emerging approaches with still much to be studied. It has been somewhat left behind because of its initial premise: it is audible, and therefore it is assumed that it will disturb the acoustic environment in an undesirable way. But using off-the-shelf, inexpensive or pre-existing components is tempting, and therefore some audible-sound-based techniques can be found in the literature. Most of them use sound as a natural consequence of its operation, just as airplanes produce noise that can be used to track them [2]. A 3-D indoor positioning system (IPS) named Beep [3] was designed as a cheap positioning solution using audible-sound technology. Beep uses a standard 3-D multilateration algorithm based on TOA measured by the Beep system sensors as a PDA or another device emits sound signals. Other possibilities rely on microphone arrays [4] to track a sound source's position by angle-of-arrival (AOA) techniques. Another approach is a technique named "Acoustic Background Spectrum", where sound fingerprinting is employed to uniquely identify rooms or spaces in a passive way (no excitation signal), using just the noise "fingerprint" of each space [5]. Very recently, an acoustic indoor localization system employing CDMA [6] was developed. It uses off-the-shelf components and localizes a microphone within an indoor


space using sound cues provided by loudspeakers. In that work, time-of-arrival measurements of acoustic signals, which are binary-phase-shift-keying-modulated Gold code sequences using the direct-sequence (DS) spread-spectrum (SS) technique, are performed. Other approaches also use off-the-shelf devices, achieving sub-meter accuracy; they use tablets, smartphones and laptops to provide a wireless data connection and interface [7][8]. Using DS code-division multiple access (CDMA) with different coding techniques is common in several approaches and allows simultaneous, accurate distance measurements to be performed while providing some immunity to noise and interference.

III. DETERMINING THE POSITION

Localization is assured by measuring the distance vector between the anchors and the mobile device(s). One can assume that sounds are played from all loudspeakers starting at time t0, and that the sound from speaker i reaches the microphone at time ti. If c is the speed of sound, (x, y) the position of the mobile device in a two-dimensional version of the problem, and (xi, yi) the position of anchor i (loudspeaker i), the propagation delays, also called ToF, ti − t0 and the distances di between the anchors and the mobile device are described by

di = c (ti − t0) = √((x − xi)² + (y − yi)²)     (2)

The arrival times of the signals may be estimated using correlation methods, as explained in the following. The time instant t0 can be determined using a technique described ahead as "Circle Shrinking". The anchor positions (xi, yi) are considered known. Due to the presence of noise in the estimates, the desired and unknown mobile device position (x, y) cannot be obtained simply by solving the system of equations. The location needs to be determined by a source-localization algorithm that takes an error-minimization approach. Using nonlinear least-squares estimation methods such as Gauss-Newton, Newton-Raphson and steepest descent has provided sufficiently accurate results while maintaining low computational complexity. Their similar performance leads us to believe that each of these methods is suitable for this purpose, converging to the solution at almost the same iteration. However, a small advantage was found in the Gauss-Newton method due to its simplicity and faster processing.
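To make the error-minimization step concrete, here is a minimal Gauss-Newton sketch for the two-dimensional problem of eq. (2), assuming known anchor positions and range estimates; function and variable names are ours, not the paper's.

```python
import math

def gauss_newton_2d(anchors, distances, x0=(0.0, 0.0), iters=20):
    """Estimate (x, y) from noisy anchor ranges by Gauss-Newton.
    Residual r_i = ||p - a_i|| - d_i; update p <- p - (JtJ)^-1 Jt r."""
    x, y = x0
    for _ in range(iters):
        jtj = [[0.0, 0.0], [0.0, 0.0]]
        jtr = [0.0, 0.0]
        for (ax, ay), d in zip(anchors, distances):
            rng = math.hypot(x - ax, y - ay) or 1e-12
            jx, jy = (x - ax) / rng, (y - ay) / rng  # row of the Jacobian
            r = rng - d
            jtj[0][0] += jx * jx; jtj[0][1] += jx * jy
            jtj[1][0] += jy * jx; jtj[1][1] += jy * jy
            jtr[0] += jx * r;     jtr[1] += jy * r
        det = jtj[0][0] * jtj[1][1] - jtj[0][1] * jtj[1][0]
        if abs(det) < 1e-12:
            break  # degenerate geometry; keep current estimate
        dx = (jtj[1][1] * jtr[0] - jtj[0][1] * jtr[1]) / det
        dy = (-jtj[1][0] * jtr[0] + jtj[0][0] * jtr[1]) / det
        x, y = x - dx, y - dy
    return x, y
```

With exact ranges and a sensible initial guess (e.g. the room center), the iteration converges in a handful of steps; with noisy ranges it returns the least-squares position instead.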

IV. EXPERIMENTS AND RESULTS

The experiments were performed in a research-laboratory room environment of size 7 m x 9 m x 3 m. Of this total area, only 6 m x 7 m was used, as depicted in Figure 1.

The room is occupied by a set of furniture, computers and persons, and has reverberant plaster walls, two of which are outer walls with four large windows. The room was not adapted in any way for this experiment. Twenty-three "ground truth" points were marked on the floor as landmarks to allow error estimation. Four ordinary satellite computer loudspeakers, angularly distributed, were wall-mounted at ear level and used as anchors. The mobile device is represented by an omnidirectional, flat-frequency-response condenser measurement microphone, used in these experiments as an ideal receiver so that the other aspects could be validated independently.

Figure 1. Experimental setup. The red squares in the corners represent the anchors (speakers), while the 23 ground-truth points are the small yellow circles.

Sound emission and capture were performed at a 44.1 kHz sampling rate using an EASERA Gateway sound board from PreSonus, a low-latency, low-noise IEEE 1394 interface sound board with ASIO drivers. All processing was performed on a 1.6 GHz dual-core laptop PC with 2 GB of RAM running Windows 8 and Matlab 2012b.

In all experiments the air temperature was measured to correct the value of the speed of sound, which is needed to estimate distance, according to

c = 331.45 √(1 + T / 273.15)     (3)

where T is the air temperature in °C. The effects of humidity and wind on the speed of sound were considered negligible.
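Eq. (3) translates directly into a small helper (the function name is ours):

```python
import math

def speed_of_sound(temp_c):
    """Eq. (3): c = 331.45 * sqrt(1 + T/273.15) m/s, with T in deg C.
    Humidity and wind effects are neglected, as in the paper."""
    return 331.45 * math.sqrt(1.0 + temp_c / 273.15)
```

At 0 °C this gives the reference value of 331.45 m/s; at a typical room temperature of 20 °C it gives roughly 343.4 m/s, about a 3.6% correction that would otherwise bias every distance estimate.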

Three experiments (A, B and C) were conducted to assess the importance of certain choices of methods and algorithms, so that the obtainable accuracy, precision and general performance are maximized:

A) Latency analysis considering the Easera Gateway sound board with different API tools;

B) Comparison of three correlation techniques to perform Time Delay Estimation (TDE): cross-correlation, generalized cross-correlation with phase transform, and maximum likelihood;

C) Evaluation of the position estimation error and reliability with a sufficient SNR considering: TDMA (with unit pulses), FDMA (with chirps) and CDMA (with coded PN).

V. RESULTS AND DISCUSSION

The most compelling results of the conducted experiments are presented here to better illustrate some of the practical issues encountered in an implementation of an IPS.

A) Latency analysis was performed with the same sound board considering two scenarios:

- Using standard WDM drivers and Matlab’s DAQ;

- Using ASIO drivers and PortAudio mex files.


Figure 2. Latency analysis using two different sound-board interfaces. Top: WDM drivers and Matlab's DAQ. Bottom: ASIO drivers and a mex file using the PortAudio multichannel interface.

As can be seen in figure 2, there is no doubt that the bottom ASIO interface has a fixed latency. Its value is higher (around 51 ms) due to the use of an external .mex file in Matlab and also due to the sound-driver configuration, where latency can be selected as a function of the processor load. However, it is preferred over a variable but lower latency because its stable value may be subtracted, leaving no latency noise.

Having a fixed latency can be very useful because a fixed value may be subtracted from the measured delay to obtain the ToF more easily and precisely.

B) Time-delay estimation is one of the key operations for correctly estimating the distance from the ToF. The comparison between the sent signal and the received one allows the delay, and therefore the distance, to be estimated.

Among the many possibilities, three correlation methods were tested due to their computational simplicity [9]: cross-correlation (CC), generalized cross-correlation with phase transform (GCC-PHAT) and maximum likelihood (ML).

Figure 3. Comparison of correlation methods with a 12.83 dB SNR for a 1000-sample delay. Cross-correlation on top, generalized cross-correlation in the middle and maximum likelihood on the bottom.

As can be observed in figure 3, GCC-PHAT provided the best (sharpest) results even in a rather low-SNR scenario, thanks to its ability to avoid spreading of the peak of the correlation function. This was also verified with several levels of additive white noise, where it provided the best TDE results with no significant increase in computational complexity, especially compared with the CC results, which are easier to compute but worse at low SNR.
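A common FFT-based formulation of GCC-PHAT is sketched below (a generic implementation, not the authors' code): the cross-spectrum is whitened to unit magnitude so that only phase, i.e. delay information, remains, which is what sharpens the correlation peak.

```python
import numpy as np

def gcc_phat(sig, ref, fs=1.0):
    """Time-delay estimate (in 1/fs units) of `sig` relative to `ref`
    using the generalized cross-correlation with phase transform."""
    n = len(sig) + len(ref)
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-15          # PHAT weighting: keep phase only
    cc = np.fft.irfft(R, n=n)       # whitened cross-correlation
    max_shift = n // 2
    # Reorder so index 0 corresponds to lag -max_shift.
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs
```

For a clean signal delayed by an integer number of samples, the peak of the whitened correlation falls exactly at that lag; in the paper's setup, dividing by the 44.1 kHz sampling rate and multiplying by the speed of sound would turn the lag into a distance.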

C) This experiment evaluates the use of three different methods to convey the audible excitation sound signal to a receiver, so that TDE can be as accurate, precise and reliable as possible in real conditions (a noisy, reverberant space). Table I summarizes the results, comparing average error, reliability and minimum SNR for the three methods, considering a minimum SNR such that the reliability of the distance vectors does not fall below 50% at the worst measurement position.

Results demonstrate that the CDMA method performed slightly better than the other two. Achieving a 1.3 cm average error at the center points may be considered in the range of the best state-of-the-art results. The chirp (bird-like) FDMA approach had difficulties estimating some positions due to its relatively small bandwidth, forcing the experiment to be adapted so that the loudspeakers were redirected at some measurement points, especially close to walls and corners. In such situations the directivity factor is greater than one and the reverberation is interpreted as the direct signal. Significant overestimation of the distance vector was noticed when no direction adjustment was performed on the loudspeakers. The other wideband approaches, TDMA and CDMA, were not affected in the same way, but also showed significantly better results at the center points. The directivity of the speakers or of the mobile microphone must be considered together with the frequency response of all parts, as it affects the ability to perform TDE. Channel equalization may have to be considered to avoid TDE errors. The TDMA method has shown not to be robust in a noisy environment: even though its reliability, under the experiment's criteria, is among the highest, the required minimum SNR is considerably larger than that of the others. The pulse-detection technique used was based on maximum detection, and therefore a simple impulsive masking noise is enough to make the TDE, and consequently everything else, fail.

One of the most meaningful observations is related to the minimum SNR that each method requires. As previously mentioned, the TDMA pulse method is very demanding, only performing well (with a reliability criterion of 30 cm error in distance vectors) above 24.7 dB SNR. On the other hand, CDMA performed very well with its 7.2 dB minimum SNR, a value at which the sound used was found almost undetectable, complying with the objective of performing without being acoustically annoying.

TABLE I. COMPARISON BETWEEN TDMA, FDMA AND CDMA



2013 International Conference on Indoor Positioning and Indoor Navigation, 28th-31st October 2013

VI. PRACTICAL IMPLEMENTATION CONSIDERATIONS

The latency term in equation 2 becomes critical when one wants to measure ToF. Whatever the architecture, it will take some time to emit, receive and process the excitation signal. Since ToF is used to calculate the distance with TDE, latency will cause an overestimation if it is not subtracted from the total time. Using a low-latency sound board is not a sufficient condition to assure a reliable ToF measurement. Even though its latency may be low, a variable latency is far more harmful to a distance measurement, as it cannot be subtracted as a fixed, previously known amount. Some previous work, for instance [6], has used a dedicated microphone in a known position to calculate the delay at every iteration. It is a simple possible solution, but it requires additional hardware. The conducted experiment has shown that latency can be treated as fixed within the same run, which avoids the use of a calibration microphone. It is however prudent to take into account a technique we called Circle Shrinking that prevents the latency from affecting ToF measurements in TDE.

Considering latency to be constant in the small time window of a run (most of the time a viable assumption) and that latency overestimates distance, one can think of the TDE-calculated distances as circles, centered on the anchor positions, that need to be shrunk by the latency amount so as to minimize the intersection area between circles, as shown in figure 4. Latency can therefore be eliminated even if it varies between runs. However, it can be computationally demanding to calculate this intersection area and to minimize it. One must take into account the application requirements in precision and accuracy to evaluate what is reasonable. Sometimes a small estimation error in distance vectors may be acceptable, and the source localization algorithm may deal with it very well. For example, a one-sample error in TDE at 44.1 kHz represents less than a centimeter of error in a distance vector from an anchor, and less in the final position.
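To make the shrinking step concrete, the following is a minimal sketch (not the paper's implementation; all function names are illustrative). Instead of computing the circle intersection area directly, it scans a common shrink amount over a grid and keeps the value that makes the ranges most consistent with a linear least-squares trilateration fit — a cheaper proxy for minimizing the intersection region.

```python
import numpy as np

def trilateration_residual(anchors, dists):
    """Sum of squared differences between the measured distances and the
    distances implied by the linear least-squares position estimate."""
    x0, d0 = anchors[0], dists[0]
    # Standard linearization: subtract the first anchor's range equation.
    A = 2 * (anchors[1:] - x0)
    b = (d0**2 - dists[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(x0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.sum((np.linalg.norm(anchors - pos, axis=1) - dists)**2), pos

def shrink_circles(anchors, dists, max_shrink, steps=200):
    """Find the common shrink amount (latency times sound speed) that
    makes the range circles most consistent."""
    best = (np.inf, 0.0, None)
    for delta in np.linspace(0.0, max_shrink, steps):
        r, pos = trilateration_residual(anchors, dists - delta)
        if r < best[0]:
            best = (r, delta, pos)
    return best[1], best[2]   # shrink amount, position estimate
```

A grid scan is used here only for clarity; any 1-D minimizer over the shrink amount would serve equally well.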

Time delay estimation determines the delay term in equation 2 and is another concerning aspect of the distance vector determination. The correlation method used, and its performance in terms of delay detection and computational complexity, may determine the success of using TDE to estimate distance. A poor delay measurement will result in an even poorer distance estimation, depending on the sampling frequency and the other parameters.

Figure 4. 25% circle shrinking illustration. The overestimated distance vectors on each anchor are iteratively reduced to minimize the solution space.

A sharper peak provides a better TDE estimation. The ML technique provides a sharper peak by means of a weighting function that attenuates the signal in the spectral regions where the SNR is lowest. However, the GCC-PHAT method has proven to provide better delay detection in white-noise-like environments, confirming results in the literature [8][10].
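As an illustration of the PHAT weighting, here is a minimal GCC-PHAT sketch in Python/NumPy (an assumed, self-contained implementation, not the authors' code): the cross-spectrum is normalized to unit magnitude so only phase information remains, which keeps the correlation peak sharp under reverberation.

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """GCC-PHAT time delay estimate (seconds) of `sig` relative to `ref`."""
    n = len(sig) + len(ref)                  # zero-pad for linear correlation
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-15                   # PHAT: keep phase, drop magnitude
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    # Re-center so negative lags come first.
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs
```

With a sampling rate of 44.1 kHz, one sample of delay corresponds to roughly 23 µs, i.e. under a centimeter of range error, as noted above.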

VII. CONCLUSIONS AND FUTURE WORK

It has been shown that audible sound is a viable signal for estimating position indoors. The results of the performed experiments demonstrate better performance when using CDMA to achieve accurate and precise positioning at the lowest SNR. Using CDMA also fulfills the objective of minimizing any disturbance caused in the acoustic environment.

Among the three correlation techniques evaluated for TDE, GCC-PHAT has proven to be the most effective in real noise situations.

In the near future the work will focus on the moving person's device and its limitations in reception and transmission. The Doppler effect will also be evaluated when considering a moving device. Efforts will also be made to improve perceptual masking to minimize even further any sound disturbance in the acoustic environment.

ACKNOWLEDGMENT

This work was financed by FCT (Fundação para a Ciência e Tecnologia) through the associated PhD grant reference SFRH/BD/79048/2011, by FEDER through "Programa Operacional Factores de Competitividade – COMPETE", and by National Funding through FCT in project FCOMP-01-0124-FEDER-13852.

REFERENCES
[1] Ferraro, Richard, and Murat Aktihanoglu, "Location-Aware Applications", Co., 2011.
[2] Blumrich, Reinhard, and Altmann, Jurgen, "Medium-range localization of aircraft via triangulation", App. Acoustics, Vol. 61, Iss. 1, 2000, pp. 65-82.
[3] A. Mandal, C. V. Lopes, T. Givargis, A. Haghighat, R. Jurdak, and P. Baldi, "Beep: 3D Indoor Positioning Using Audible Sound", Proc. IEEE CCNC, Las Vegas, 2005.
[4] Atmoko H., Tan D. C., Tian G. Y., and Fazenda Bruno, "Accurate sound source localization in a reverberant environment using multiple acoustic sensors", Meas. Sci. Technol. 19 (2008) 024003 (10pp).
[5] Stephen P. Tarzia, Peter A. Dinda, Robert P. Dick, and Gokhan Memik, "Indoor localization without infrastructure using the acoustic background spectrum", MobiSys '11, ACM, NY, USA, 2011, pp. 155-168.
[6] Sertatıl, Cem, Mustafa A. Altınkaya, and Kosai Raoof, "A novel acoustic indoor localization system employing CDMA", Digital Signal Processing 22 (2012), pp. 506-517.
[7] C. V. Lopes, A. Haghighat, A. Mandal, T. Givargis, and P. Baldi, "Localization of Off-the-Shelf Mobile Devices Using Audible Sound: Architectures, Protocols and Performance Assessment", ACM SIGMOBILE Mobile Computing and Communication Review, vol. 10, no. 2, 2006.
[8] Rishabh, Ish, Don Kimber, and John Adcock, "Indoor localization using controlled ambient sounds", Indoor Positioning and Indoor Navigation (IPIN), 2012 International Conference on, IEEE, 2012.
[9] Zekavat, Reza, and R. Michael Buehrer, "Handbook of Position Location: Theory, Practice and Advances", Vol. 27, Wiley, 2011.
[10] Khaddour, Hasan, "A comparison of algorithms of sound source localization based on time delay estimation", Elektrorevue, vol. 2, no. 1, April 2011.



- chapter 6 -

Mapping, Simultaneous Location And Mapping (SLAM)



Proposed Methodology for Labeling Topological Maps to Represent Rich Semantic Information for Vision Impaired Navigation

J.A.D.C. Anuradha Jayakody
Department of Electrical and Computer Engineering, Curtin University, Perth, Western Australia
[email protected]

Iain Murray
Department of Electrical and Computer Engineering, Curtin University, Perth, Western Australia
[email protected]

Abstract—Navigation in indoor environments is highly challenging for the strictly vision impaired, particularly in unknown environments visited for the first time. Several solutions have been proposed to deal with this challenge. Although some of them have been shown to be useful in real scenarios, they involve a significant deployment effort or use objects that are not natural for vision impaired individuals. It is very helpful if the map contains semantic information about the location. This paper presents a methodology to add meaningful tags/labels to the typical indoor topological map. The authors pay special attention to the semantic labels of the different types of indoor places and propose a simple way to include the tags when building the topological map.

Keywords—Assistive Technology; vision impairment; Indoor Place Classification; Semantic Labeling; Indoor Map

I. INTRODUCTION

Blindness affects approximately 45 million people worldwide. Because of rapid population growth, this number is expected to double by the year 2020 [1]. As with the sighted population, blind people want to be informed about persons and objects in their environment, and object features may be of importance when navigating a path to a given destination. Blind and vision impaired people wish for exact information about appropriate paths, dangers, distances and critical situations.

Visitors to certain buildings, such as supermarkets and shopping complexes, usually navigate through the building using a floor plan obtained at the entrance, or by following the signs on walls. In other words, this is a rather primitive way of navigating. When a building gets more complex, this type of navigation tends to fail, because it is hard for the visitor to find his way. In the case of vision impaired people it is an almost impossible task.

II. RELATED WORK

Topological maps have been quite popular in the robotics field [2]. They are believed to be cognitively more adequate, since they can be stored more compactly than geometric maps and can also be communicated more easily to users of a mobile robot. Many researchers have considered the issues of building topological maps of the environment from the data gathered with a mobile robot [2, 4]. However, few techniques exist that permit semantic information to be added to these maps [3].

III. TOPOLOGICAL MAP WITH SEMANTIC LABELING

This section provides a simple classification to identify basic classes of indoor spaces that can be used to identify or discover the topology of the indoor environment. The two main classes are:

Places

Transitions

The "Places" are the nodes of the model and the "Transitions" correspond to the edges between nodes. The class "Places" includes subclasses such as corridor, Room 1, Room 2, …, Room n, office environment details, etc. The class "Transitions" includes subclasses such as Door, Stairs, Elevators, Escalators, etc. Both subclasses provide the model with augmented semantic information, but of particular interest is the type of transition. It is important to label these specific subclasses in a topological map to assist vision impaired individuals within an indoor environment.
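The Places/Transitions model above can be sketched as a small graph structure; this is a hypothetical illustration (the class and attribute names are our own, not from the paper):

```python
from dataclasses import dataclass, field

@dataclass
class Place:
    """Node of the topological map: corridor, Room 1..n, office area, ..."""
    name: str
    category: str
    labels: list = field(default_factory=list)  # semantic tags attached later

@dataclass
class Transition:
    """Edge between two places: door, stairs, elevator, escalator, ..."""
    kind: str
    src: Place
    dst: Place

class TopologicalMap:
    def __init__(self):
        self.places = []
        self.transitions = []

    def add_place(self, name, category):
        place = Place(name, category)
        self.places.append(place)
        return place

    def connect(self, src, dst, kind):
        # The transition type (door, stairs, ...) is kept explicitly,
        # since it is of particular interest for vision impaired users.
        edge = Transition(kind, src, dst)
        self.transitions.append(edge)
        return edge
```

For example, a corridor connected to Room 1 through a door is `m.connect(corridor, room1, "door")`.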

IV. PROPOSED SEMANTIC LABELING FRAMEWORK

The proposed labeling framework for the constructed semantic map is composed of five main modules, as shown in Fig. 1.

V. AUTOMATED LABELING ALGORITHM

An algorithm reads the sensor input coming through image processing and applies a set of predefined rules to identify the specific labels in the incoming image of the spatial environment. The newly proposed algorithm (Fig. 2) can work with two types of labels, namely specific transition and place labels, and generic transition and place labels. Generic transition and place labels are ordinary labels such as staircase, lift, washroom, office area, doors, walls, etc. In the novel architecture these generic transition and place labels are kept in the local database (DB) of the smart phone.





Specific transition and place labels are unique to the given spatial environment of the surroundings. They can be an organization's Manager's room label, Assistant Manager's room label and other official labels. After identifying the significant labels, the generated map is updated according to the results.
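A minimal sketch of the two-tier label lookup described above (hypothetical data and function names; the actual rule set and DB schema are not specified in the paper): generic labels live in the phone's local DB, while specific labels are unique to the building.

```python
# Generic transition/place labels kept in the smart phone's local DB.
GENERIC_DB = {"staircase", "lift", "washroom", "office area", "door", "wall"}

# Specific labels unique to the given spatial environment (illustrative).
SPECIFIC_DB = {
    "mgr_room": "Manager's room",
    "asst_mgr_room": "Assistant Manager's room",
}

def classify_label(detection: str):
    """Map a detection string from image processing to (label, kind)."""
    if detection in GENERIC_DB:
        return detection, "generic"
    if detection in SPECIFIC_DB:
        return SPECIFIC_DB[detection], "specific"
    return detection, "unknown"   # left for a rule set or operator to resolve
```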

Figure 1. Semantic Labeling Framework

VI. CONCLUSION

This work presents a novel approach to the automatic insertion of semantic labels into the constructed map of an indoor environment, which can assist vision impaired individuals by giving rich information. In future work, the authors will focus on the implementation of models using the proposed architecture and on testing them in real-world environments.

ACKNOWLEDGMENT

This work has been supported by Curtin University, Perth, Western Australia and the Sri Lanka Institute of Information Technology, Malabe, Sri Lanka.

Figure 2. Algorithm for Semantic Labeling

REFERENCES

[1] J.A.D.C.A. Jayakody, N. Abhayasinghe, and I. Murray, "AccessBIM Model for Environmental Characteristics for Vision Impaired Indoor Navigation and Way Finding," in International Conference on Indoor Positioning and Indoor Navigation, November 2012. [Online]. Available: http://www.surveying.unsw.edu.au/ipin2012/proceedings/submissions/98_Paper.pdf [Mar. 5, 2013].
[2] S. Thrun and A. Bucken, "Integrating grid-based and topological maps for mobile robot navigation," in Proc. of the National Conference on Artificial Intelligence, 1996, pp. 944–950.
[3] J. Santos-Victor, R. Vassallo, and H. Schneebeli, "Topological maps for visual navigation," in International Conference on Computer Vision Systems, 1999, pp. 21–36.
[4] A. Nüchter and J. Hertzberg, "Towards semantic maps for mobile robots," Robotics and Autonomous Systems 56 (11) (2008), pp. 915–926.
[5] A. Tapus and R. Siegwart, "Incremental robot mapping with fingerprints of places," Proc. of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2005), pp. 2429–2434.
[6] S. Vasudevan, S. Gächter, V. Nguyen, and R. Siegwart, "Cognitive maps for mobile robots - an object based approach," Robotics and Autonomous Systems 55 (5) (2007), pp. 359–371.

Figure 1 modules (text recovered from the framework diagram):
- Insert Image and Gait Analysis Data / SMART Phone Sensor Data: input through image processing. (Note: gait is the pattern of movement of humans.)
- Store, Information Detection & Extraction: the main component in the process of navigation is the map; this layer puts the accent on the digital form of the map information and on the principal producers and users of the map database.
- Raw & Spatial Data Acquisition: segments the incoming sensor data into two categories, namely indoor-environment-specific data and landmark data based on data semantics.
- Topological Map Building: the topological map divides the set of nodes in the navigational graph into different areas. An area consists of a set of interconnected "nodes" with the same place classification. The nodes represent recognizable indoor-specific locations and landmarks; the edges represent clear paths from one node to another, usually doors and corridors.
- Semantic Labeling: tracks places of interest that are important to integrate into the created map, e.g. doors, office names, elevators with their reachable doors, and staircases with the corresponding number of steps.



- chapter 7 -

Robotics & Control Systems



Improvements and Evaluation of the Indoor Laser Localization System GaLocate

Jan Kokert, Florian Wolling, Fabian Höflinger and Leonhard M. Reindl
Department of Microsystems Engineering - IMTEK, University of Freiburg, Germany
E-mail: {kokert, wollingf, hoeflinger, reindl}@imtek.uni-freiburg.de

Abstract—GaLocate (localization based on galvanometer laser scanning), previously reported at IPIN 2012, is a promising solution for intersection observation in production sites. Vehicles are equipped with a small retro-reflective tag which is detectable by a laser scanner mounted on the ceiling. The location is based on the two angles of laser beam deflection with respect to the scanner.

In this paper we present the latest hardware and software improvements. New experimental results, including the overall scanning performance and the repetition accuracy, are discussed.

Keywords—Internal logistics, multi-target localization, laser scanning, pattern recognition, embedded systems.

I. INTRODUCTION

Automated guided vehicles (AGVs) are a part of many modern production and assembly lines [1]. To allow autonomous navigation, a reliable localization of the vehicles is mandatory [2]. The traditional way to guide AGVs is to use inductive wires buried in the floor, but this solution is very inflexible. State-of-the-art transport robots are equipped with laser line scanners (LIDAR) due to safety issues. These scanners can also be used to navigate by means of SLAM algorithms (simultaneous localization and mapping). This approach may fail in highly dynamic areas like intersections, where staff or other transport vehicles may cross [3].

Intended to observe the traffic in these dynamic areas, our system GaLocate provides absolute position data. The system has an inherent line-of-sight condition, which can be solved by sensor fusion using data from odometry or gyroscopes.

Figure 1: Angle definition in our realized concept. The position of the mobile tag (green) is determined with respect to the position of the scanner (yellow).

II. WORKING PRINCIPLE

Our localization system GaLocate consists of a laser scanner mounted on the ceiling and several receivers (mobile tags) which are mounted on the AGVs, as shown in Fig. 1. The mobile tags are detectable by the laser scanner due to a retro-reflector. In the beginning the scanner performs a coarse scan pattern to search for mobile tags. If the scanner receives a reflection from a tag, a fine scan is done within this area.

The laser beam is deflected successively by two mirrors which are tilted by galvanometer actuators [4]. If all mobile tags are in the x-y plane, their position (x_m, y_m, 0) is determined by the two angles ϕ and θ with respect to the scanner (x_s, y_s, z_s) according to

x_m = x_s + z_s · tan ϕ · (1 / cos θ)    (1)

y_m = y_s + z_s · tan θ.    (2)
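Eqs. (1) and (2) translate directly into code; a small sketch (the scanner coordinates are an assumed example, matching the z_s = 880 mm used later in the paper):

```python
import math

def tag_position(phi, theta, scanner=(0.0, 0.0, 0.88)):
    """Tag position (xm, ym) in the x-y plane from the two deflection
    angles phi and theta (radians), per Eqs. (1) and (2)."""
    xs, ys, zs = scanner
    xm = xs + zs * math.tan(phi) / math.cos(theta)   # Eq. (1)
    ym = ys + zs * math.tan(theta)                   # Eq. (2)
    return xm, ym
```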

The data transfer of the investigated positions from the scanner to the mobile tags is done by omnidirectional infrared communication. The data assignment is realized by a scan-detecting photodiode in the center of the reflector [5].

III. IMPROVEMENTS OF GALOCATE

The scanner components are mounted on an aluminum plate. The optics faces down, as shown in Fig. 2. A new system control comprising an FPGA and a PC was realized. The FPGA controls the time-critical hardware components like the galvanometer movements. Hardware parameters are addressable via a UART command parser. The PC runs a control software performing scan-cycle and pattern-recognition algorithms and visualizes the measurement data in a GUI. The software is

Figure 2: The GaLocate laser scanner hardware (labeled components: PID control, galvanometer optics, FPGA board, power supplies, UART cable, IR-Tx).




programmed in C++ (Qt) and uses the QextSerialPort library [6]. The data transfer is organized in events (3 bytes) like "reflection begin/end detected" or "scanning new line now" and commands (4 bytes) like "start fine scan at x,y with a size of w×h now".

With the new software it is possible for the first time to perform tracked scanning: a coarse scan followed by successive fine scans. The center of each next fine scan is calculated from the data of the last one using two different algorithms: averaging and circle fitting. The averaging algorithm takes the arithmetic average of the x and y values separately, including all outliers. The circle fitting algorithm calculates a circle for every combination of three measured points and weights all circle center points afterwards [7].
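The circle-fitting step can be sketched as follows (our own unweighted illustration; the paper weights the candidate centers per [7]): the circumcenter of every 3-point combination is computed and the centers are combined.

```python
from itertools import combinations

def circumcenter(p1, p2, p3):
    """Center of the circle through three 2-D points (None if collinear)."""
    (ax, ay), (bx, by), (cx, cy) = p1, p2, p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy

def fit_center(points):
    """Combine the circumcenters of all 3-point combinations
    (unweighted average here; [7] weights them instead)."""
    centers = [c for c in (circumcenter(*t) for t in combinations(points, 3))
               if c is not None]
    n = len(centers)
    return (sum(c[0] for c in centers) / n,
            sum(c[1] for c in centers) / n)
```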

The IR-trigger signal from [5], intended to achieve a resolution in the sub-millimeter region, was abandoned. The delays of scanning in the x and y directions sequentially are too high, and the scanning resolution is still sufficient without it.

IV. EXPERIMENTAL RESULTS

In [5] we calculated the theoretical scanner performance. Besides the galvanometer scanning time t_scan, the model is now extended to cover communication and calculation delays (t_com and t_calc). The overall performance f_tot (successive fine scans per unit time) for m mobile tags can then be calculated by:

f_tot = 1 / (m · (t_scan + t_com + t_calc))    (3)

t_com = (4 byte · n_cmd + 3 byte · n_evt) / (f_baud / 10)    (4)

t_scan = (n_div + 1) / f_fpga · (n_lines + 2) · w    (5)

In (4), f_baud is the baud rate, and n_cmd and n_evt are the numbers of commands and events to be sent; the factor 10 accounts for 10 bits per byte (8 data bits, start, stop). In (5), f_fpga is 100 MHz and n_div is the clock divider value. The scan covers a square area with an edge length of w in digits and follows a meander pattern which consists of n_lines horizontal lines.

The edge length w is dynamically adjusted to be n_edge = 3 times the reflector diameter a_ref = 21 mm. With a position accuracy of ϕ_step = 15.2 µrad and a distance to the scanner of z_s = 880 mm, the edge length can be calculated by (6). The number of received events n_evt can be estimated by (7). In the experiments we chose m = 1, n_div = 7 and assume that n_cmd = 3 and t_calc = 5 ms.

w = n_edge · a_ref / (z_s · ϕ_step) = 4710    (6)

n_evt = n_lines (new row) + n_lines / n_edge · 2 (rise and fall)    (7)
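Plugging the stated parameter values into Eqs. (3)-(7) can be sketched as follows (a hypothetical helper, reproducing the w = 4710 digits of Eq. (6)):

```python
def scan_performance(m=1, n_div=7, n_lines=40, n_cmd=3,
                     f_baud=256e3, f_fpga=100e6, t_calc=5e-3,
                     n_edge=3, a_ref=0.021, z_s=0.88, phi_step=15.2e-6):
    """Overall fine-scan rate f_tot (Hz) and edge length w (digits)
    from Eqs. (3)-(7), with the experiment's parameter values."""
    w = round(n_edge * a_ref / (z_s * phi_step))          # Eq. (6)
    n_evt = n_lines + n_lines / n_edge * 2                # Eq. (7)
    t_com = (4 * n_cmd + 3 * n_evt) / (f_baud / 10)       # Eq. (4), 10 bits/byte
    t_scan = (n_div + 1) / f_fpga * (n_lines + 2) * w     # Eq. (5)
    f_tot = 1.0 / (m * (t_scan + t_com + t_calc))         # Eq. (3)
    return f_tot, w
```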

In the experiments we tracked a non-moving retro-reflector using the circle fitting algorithm. Figure 3a shows the performance and accuracy with respect to the communication speed f_baud and three different values n_lines = 20, 40, 60. The total speed f_tot increases significantly with increasing baud rate; however, the accuracy σ_x and σ_y seems to be independent of it.

Figure 3b shows the performance and accuracy with respect to the lines per fine scan n_lines. With increasing n_lines, the total speed f_tot and the standard deviations σ_x and σ_y decrease.

Figure 3: (a) Performance f_tot (in Hz) and accuracy with respect to the communication speed f_baud (19.2–256 kBd). (b) Performance and repetition accuracy σ (in mm) with respect to the number of lines n_lines in one fine scan at f_baud = 256 kBd.

V. CONCLUSION AND FURTHER RESEARCH

Since [5], all scanner components have been assembled into a working prototype. A new communication software was written to control the scanner and to visualize the data. Furthermore, 2D tracked scanning was realized with high performance and accuracy, as confirmed by experiments.

It was shown that the UART communication between PC and scanner is a dominant delay. To improve this, the software control should be embedded in the scanner. The calculation of successive fine scan positions can be improved by adding a kinematic model and filters (Kalman), and thus increase the maximum object velocity during tracking. To tolerate line-of-sight interruptions, an IMU (inertial measurement unit) can be integrated into the mobile tag to extrapolate the positions.

ACKNOWLEDGMENT

This work has been supported by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) within the Research Training Group 1103 (Embedded Microsystems).

REFERENCES

[1] T. Albrecht, "Cellular intralogistics: ATV swarms replace traditional conveyor technology in the internet of things," Fraunhofer IML - Annual report, pp. 51–52, 2010.
[2] R. Askin and J. Goldberg, Design and Analysis of Lean Production Systems. John Wiley & Sons, Inc., 2002.
[3] J. Levinson and S. Thrun, "Robust vehicle localization in urban environments using probabilistic maps," in Robotics and Automation (ICRA), 2010 IEEE International Conference on. IEEE, 2010, pp. 4372–4378.
[4] Cambridge Technology, 6215H Optical Scanner Mechanical and Electrical Specifications, March 2007.
[5] J. Kokert, F. Höflinger, and L. M. Reindl, "Indoor localization system based on galvanometer-laser-scanning for numerous mobile tags (GaLocate)," in Indoor Positioning and Indoor Navigation (IPIN), 2012 International Conference on. IEEE, 2012, pp. 1–7.
[6] B. Fosdick. (2013) A cross-platform serial port class. [Online]. Available: http://sourceforge.net/projects/qextserialport/
[7] L. Maisonobe, "Finding the circle that best fits a set of points," white paper, October 2007.



Observability Properties of Mirror-Based IMU-Camera Calibration

Ghazaleh Panahandeh, Peter Handel, and Magnus Jansson
KTH Royal Institute of Technology, ACCESS Linnaeus Center, Stockholm, Sweden
Email: {ghpa, ph, janssonm}@kth.se

Abstract—In this paper, we study the observability properties of the visual inertial calibration parameters for the system proposed in [1]. In this system, calibration is performed using the measurements collected from a visual inertial rig in front of a planar mirror. To construct the visual observations, a set of key features attached to the visual inertial rig is selected, where the 3D positions of the key features are unknown. During calibration, the system navigates in front of the planar mirror while the vision sensor observes the reflections of the key features in the mirror, and the inertial sensor measures the system's linear accelerations and rotational velocities over time. The observability properties of this time-varying nonlinear system are derived based on the Lie derivative rank condition test. We show that the calibration parameters and the 3D positions of the key features are observable for the proposed model. Hence, our proposed method can conveniently be used in low-cost consumer products such as visual inertial based applications in smartphones, e.g., localization, 3D reconstruction, and surveillance applications.

Index Terms—IMU-camera calibration, motion estimation, VINS, computer vision.

I. INTRODUCTION

Recently, there has been a growing interest in the development of visual inertial navigation systems. Of particular interest is the use of lightweight and cheap motion capture sensors, such as an inertial measurement unit (IMU), with an optical sensor, such as a monocular camera. However, accurate information fusion between the sensors requires sensor-to-sensor calibration, that is, estimating the 6-DoF transformation (the relative rotations and translations) between the visual and inertial coordinate frames; disregarding such a transformation will introduce un-modeled biases in the system that may grow over time. The current IMU-camera calibration techniques are typically implemented for in-lab purposes, since they either require a calibration target or are computationally very demanding (e.g., methods which are based on building a map of environments with unknown landmarks). Hence, these methods are not convenient to use in smartphones with limited power consumption and without access to a calibration target. In [1], we proposed a practical visual inertial calibration method, which is based on visual observations in front of a planar mirror. In particular, the visual inertial system navigates in front of the planar mirror, where the camera observes a set of features' reflections (known as key features) in the mirror. The key features are considered to be static with respect to the camera and such that their reflections can always be tracked over images. For this nonlinear system, we derive the state-space model, and estimate the calibration parameters

Fig. 1. IMU-camera rig and the corresponding coordinate frames. The relative IMU-camera rotation and translation are depicted as C(^C s_I) and ^I p_C, respectively. Feature f is rigidly attached to the IMU-camera where its reflection in the mirror is in the camera's field of view.

along with other system state variables using the unscented Kalman filter. In this paper, we show that for this time-varying nonlinear system the IMU-camera calibration parameters, as well as the 3D positions of the key features with respect to the camera, are observable.

II. SYSTEM DESCRIPTION

The hardware of our visual inertial system consists of a monocular camera (the vision sensor) rigidly mounted on an IMU (the inertial sensor). For estimating the 6-DoF rigid body transformation between the camera and the IMU, we propose an approach based on an IMU-camera ego-motion estimation method [1]. During calibration, we assume that the IMU-camera rig navigates in front of a planar mirror, which is horizontally or vertically aligned. We formulate the problem in a state-space model setting and use the unscented Kalman filter for state estimation. The IMU measurements (linear acceleration and rotational velocity), at a higher rate, are used for state propagation, and the camera measurements, at a lower rate, are used for state correction. The visual corrections are obtained from the positions of the key features in the 2D image plane, which are tracked between image frames. The key features are located arbitrarily (without any prior assumption on their 3D positions with respect to the camera) on the camera body such that their reflections in the mirror are in the camera's field of view (see Fig. 1). Hereafter, we briefly describe the system process and measurement models used for the observability analysis.



We consider the following system state variables: 1) the motion parameters of the sensors (rotation, velocity, position) in the global reference frame, 2) the IMU-camera calibration parameters, 3) the 3D positions of the key features with respect to the camera. The total system state vector is

x = [\,{}^{I}s_G^\top \;\; {}^{G}v_I^\top \;\; {}^{G}p_I^\top \;\; {}^{C}s_I^\top \;\; {}^{I}p_C^\top \;\; {}^{C}p_{f_1}^\top \;\cdots\; {}^{C}p_{f_M}^\top\,]^\top,    (1)

where ^I s_G represents the orientation of the global frame G in the IMU's frame of reference I (Cayley-Gibbs-Rodrigues parameterization); ^G v_I and ^G p_I denote the velocity and position of I in G, respectively; ^C s_I represents the rotation of the IMU in the camera frame, ^I p_C is the position of C in I, and ^C p_{f_k} for k ∈ {1, ..., M} is the position of the k-th key feature in the camera reference frame.

For the observability analysis, we write the system process model (eq. (4), [1]) in the input-linear form as

\begin{bmatrix} {}^{I}\dot{s}_G \\ {}^{G}\dot{v}_I \\ {}^{G}\dot{p}_I \\ {}^{C}\dot{s}_I \\ {}^{I}\dot{p}_C \\ {}^{C}\dot{p}_{f_1} \\ \vdots \\ {}^{C}\dot{p}_{f_M} \end{bmatrix}
=
\begin{bmatrix} 0_{3\times1} \\ g \\ {}^{G}v_I \\ 0_{3\times1} \\ 0_{3\times1} \\ 0_{3\times1} \\ \vdots \\ 0_{3\times1} \end{bmatrix}
+
\begin{bmatrix} \tfrac{1}{2}D \\ 0_{3\times3} \\ 0_{3\times3} \\ 0_{3\times3} \\ 0_{3\times3} \\ 0_{3\times3} \\ \vdots \\ 0_{3\times3} \end{bmatrix} \omega
+
\begin{bmatrix} 0_{3\times3} \\ C({}^{I}s_G)^\top \\ 0_{3\times3} \\ 0_{3\times3} \\ 0_{3\times3} \\ 0_{3\times3} \\ \vdots \\ 0_{3\times3} \end{bmatrix} a,    (2)

where \tfrac{1}{2}D \triangleq \frac{\partial\, {}^{I}s_G}{\partial\, {}^{I}\theta_G}, C(s) is the rotation matrix corresponding to s, and ω and a are the rotational velocities and linear accelerations, respectively, measured by the IMU.

Assuming a calibrated pinhole camera, the camera measurements from the virtual features (the reflections of the key features in the mirror) in normalized pixel coordinates can be expressed as

z_k = h_k = \begin{bmatrix} u_k \\ v_k \\ 1 \end{bmatrix} = \frac{1}{e_3^\top\, {}^{C}\bar{p}_{f_k}}\, {}^{C}\bar{p}_{f_k},    (3)

where ^C \bar{p}_{f_k} represents the 3D position of the k-th virtual feature with respect to the camera. The 3D position of the virtual key feature f_k in the camera coordinate frame, ^C \bar{p}_{f_k}, is a nonlinear function of the state variables:

{}^{C}\bar{p}_{f_k} = {}^{C}p_{f_k} - 2\, C({}^{C}s_I)\, C({}^{I}s_G)\, e_r e_r^\top \left( C({}^{I}s_G)^\top C({}^{C}s_I)^\top\, {}^{C}p_{f_k} + {}^{G}p_I + C({}^{I}s_G)^\top\, {}^{I}p_C \right),    (4)

where e_r is the normal of the mirror with respect to the global frame, depending on the alignment of the mirror.
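Equation (4) is a Householder-type reflection of the feature's global position across the mirror plane. A small numeric sketch (our own, with rotation matrices standing in for the C(·) terms; variable names are illustrative):

```python
import numpy as np

def virtual_feature(p_f_cam, R_CI, R_IG, p_I_glob, p_C_imu, e_r):
    """Position of the k-th virtual feature in the camera frame, Eq. (4).
    R_CI plays the role of C(C_s_I), R_IG of C(I_s_G); e_r is the
    mirror normal in the global frame."""
    # Global position of the (real) key feature.
    p_f_glob = R_IG.T @ (R_CI.T @ p_f_cam) + p_I_glob + R_IG.T @ p_C_imu
    # Reflect across the mirror plane and express back in the camera frame.
    return p_f_cam - 2.0 * R_CI @ R_IG @ np.outer(e_r, e_r) @ p_f_glob
```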

III. NONLINEAR OBSERVABILITY ANALYSIS

We study the observability properties of our nonlinear system by analyzing the rank condition of its observability matrix, which is constructed from the spans of the system's Lie derivatives [2]. The observability matrix O is defined as

O \triangleq \begin{bmatrix} \nabla L^{0} h \\ \nabla L^{1}_{f_i} h \\ \vdots \\ \nabla L^{n}_{f_i f_j \ldots f_d} h \\ \vdots \end{bmatrix}.    (5)

To prove that a system is observable, it is sufficient to show that O is of full column rank. For an unobservable system, the null vectors of O span the system's unobservable subspace. Hence, to find the unobservable subspace, we have to find the null space of the matrix O, where O may have infinitely many rows. This can be a very challenging and difficult task, especially for high-dimensional systems.
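Numerically, the rank and null-space test above is typically performed on a truncated O (finitely many Lie-derivative rows) via an SVD. A generic sketch, not tied to this particular system:

```python
import numpy as np

def unobservable_subspace(O, tol=1e-10):
    """Basis (columns) for the null space of an observability matrix O.
    An empty basis (full column rank) implies local observability;
    otherwise the columns span the unobservable subspace, analogous
    to the matrix N in (6)."""
    _, s, Vt = np.linalg.svd(O)
    rank = int(np.sum(s > tol * s.max()))
    return Vt[rank:].T
```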

We study the observability of our IMU-camera system based on the algebraic test, following the analysis given in [3]; details of the analyses and derivations can be found in [4]. We prove that the null space of the observability matrix $O$ in (5), using only two key features, is spanned by five directions corresponding to the columns of

$$ N = \begin{bmatrix}
0_{3\times2} & 0_{3\times2} & \frac{\partial\,{}^{I}s_{G}}{\partial\,{}^{I}\theta_{G}}\, C\, e_r \\
[\,e_j \;\; e_d\,] & 0_{3\times2} & 0_{3\times1} \\
0_{3\times2} & [\,e_j \;\; e_d\,] & 0_{3\times1} \\
0_{3\times2} & 0_{3\times2} & 0_{3\times1} \\
0_{3\times2} & 0_{3\times2} & 0_{3\times1} \\
0_{3\times2} & 0_{3\times2} & 0_{3\times1} \\
0_{3\times2} & 0_{3\times2} & 0_{3\times1}
\end{bmatrix}, \quad (6) $$

which implies that the IMU-camera calibration parameters and the 3D positions of the key features with respect to the camera are all observable. Moreover, the unobservable directions correspond to the system's planar translation and velocity orthogonal to $e_r$ (first and second block columns of $N$) and rotation around $e_r$ (third block column of $N$).

IV. CONCLUSION

We have studied the observability properties of the IMU-camera calibration parameters for the proposed system in [1]. We show that the calibration parameters and the 3D positions of the key features with respect to the camera are observable when only two key features are used. Hence, our proposed system can conveniently be used in smart-phones with limited power consumption and without having access to a calibration target. Finally, we have verified the findings of our analysis both with simulations and real experiments.

REFERENCES

[1] G. Panahandeh and M. Jansson, "IMU-camera self-calibration using planar mirror reflection," in Proc. IEEE Int. Conf. on Indoor Positioning and Indoor Navigation (IPIN), Guimarães, Portugal, pp. 1-7, Sep. 21-23, 2011.

[2] R. Hermann and A. Krener, "Nonlinear controllability and observability," IEEE Trans. on Automatic Control, vol. 22, no. 4, pp. 728-740, 1977.

[3] G. Panahandeh, C. X. Guo, M. Jansson, and S. I. Roumeliotis, "Observability analysis of a vision-aided inertial navigation system using planar features on the ground," in Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2013.

[4] G. Panahandeh, "Observability analysis of mirror-based IMU-camera self-calibration: Supplemental material," http://kth.diva-portal.org/smash/record.jsf?pid=diva2:656197, 2013.


2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31 October 2013

Processing Speed Test of Stereoscopic vSLAM in an Indoors Environment Using OpenCV GPU-SURF

Delgado, J.V.; Kurka, P.R.G.; Ferreira, L.O.S.
Faculdade de Engenharia Mecânica, Universidade Estadual de Campinas, Campinas, São Paulo, Brazil
[email protected]; [email protected]; [email protected]

Abstract— This paper presents a speed test of a vSLAM (visual simultaneous location and mapping) application. In that framework, we process a stereoscopic image in order to find invariant interest points (keypoints) using the SURF (Speeded-Up Robust Features) algorithm. Such an algorithm is computationally expensive due to the frequent and large amount of data processing required. The SURF algorithm is implemented on three graphics cards using CUDA through the OpenCV library, in order to evaluate the requirements of processing speed and efficiency. The vSLAM process begins with the calibration of a stereoscopic camera, followed by 3-D reconstruction of keypoint positions. The visual odometry is recovered by estimating the successive movements of the cameras with respect to the identified spatial locations of the keypoints. The vSLAM is applied to a real indoors navigation experiment.

Keywords— vSLAM, SURF, Stereoscopic Camera, GPU Processing, OpenCV.

I. INTRODUCTION

Accurate vSLAM autonomous navigation applications require a heavy computational effort. Works in the literature suggest the use of graphics processing units (GPUs) in order to achieve online and real-time processing speed. A stereoscopic omnidirectional vSLAM application using a GPU is found in the work of Lui [1]. A real-time visual mapping application is presented by Konolige [2]. A commercial depth sensor (Kinect) together with a GPU is used in a vSLAM algorithm by Newcombe [3]. The work of Clipp [4] presents a vSLAM implementation using CPUs and a GPU to process stereoscopic images, achieving a performance of 61 frames per second (fps).

Open-source software, such as the Open Computer Vision (OpenCV) library, is also a useful tool for the development of image-processing applications [6]. Nagendra [7] has developed a method to extract and classify vehicle data using OpenCV's filtering module. Katzourakis [8] uses OpenCV to process images from a web-cam, providing a roadmap on how to perform experiments with cheap sensors on real vehicles. The 2009 version of OpenCV includes some modules of the compute unified device architecture (CUDA [5]), to be used in real-time GPU-processing applications.

The present paper proposes a vSLAM application test using stereoscopic images. Such an algorithm is computationally expensive due to the frequent and large amount of data to process. The keypoint-identification algorithm is implemented on a graphics card. Algorithm compilations are tested on three different graphics units; mobile and desktop GPUs are tested in order to evaluate their performances. A discussion on how to build a real-time mobile vSLAM navigation device is presented in the conclusions.

II. SOFTWARE ARCHITECTURE

The SLAM problem is divided into three parts [10]. The first one, Scene Flow, identifies keypoints in the environment. The second part, Visual Odometry, calculates the motion between identified keypoints. The last one, Global-SLAM, builds the map and re-localizes the cameras attached to the manipulator. In this paper, the Scene Flow task is implemented on a graphics processing unit (GPU). Visual Odometry and Global-SLAM are implemented only on an ordinary computer processor. The vSLAM algorithm is shown in Fig. 1. The outlined block represents the processing under the GPU. Stereoscopic videos are recorded and later processed on mobile and desktop GPU units.

A. Scene Flow

The exploration begins with the acquisition of two stereoscopic images taken at successive path positions at times (t-1) and (t). The identification of keypoints is done by correlating three images using the OpenCV features2D module, designed to run on a GPU. Finally, the spatial positions of the points correlated in the three images are reconstructed through stereoscopic triangulation.

The Speeded-Up Robust Features (SURF) algorithm [9] is used in order to find interest points (keypoints) that are invariant to orientation, scale and illumination changes. Every point is associated with a descriptor vector, which contains image point coordinates and a neighborhood stability indicator, among



other parameters. The interest points can be correlated through descriptor matching. A correlation algorithm searches for the best fits between keypoint sets. The algorithm also performs a filtering of false-positive correlations.
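A minimal numpy sketch of the descriptor-correlation step; the Lowe-style ratio test is our assumed choice of false-positive filter, not necessarily the one used in the paper:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.7):
    """Nearest-neighbour matching between two descriptor sets (rows of
    desc_a and desc_b) with a ratio test: keep a match only if the best
    distance is clearly smaller than the second best, filtering out
    ambiguous (false-positive) correlations. Returns (i, j) index pairs."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # L2 distances to all of b
        j, k = np.argsort(dists)[:2]                # best and second-best
        if dists[j] < ratio * dists[k]:
            matches.append((i, j))
    return matches
```

The same pairing logic applies whether the descriptors come from SURF, as here, or any other keypoint detector.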

Final correlated points are reconstructed in Euclidean space using the stereoscopic camera calibration parameters, that is, rotation matrix, translation vector, focal length and image center.
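For a rectified, parallel rig such as the one described later, triangulation reduces to depth-from-disparity. An illustrative numpy sketch (function name and parameter values are ours, not the paper's calibration):

```python
import numpy as np

def triangulate_parallel(uL, vL, uR, f, cx, cy, baseline):
    """Euclidean reconstruction of a correlated point for a rectified,
    parallel stereo pair. (uL, vL) is the pixel in the left image, uR the
    column of its correlate in the right image; f is the focal length in
    pixels, (cx, cy) the image center, baseline the camera separation."""
    disparity = uL - uR
    Z = f * baseline / disparity    # depth from disparity
    X = (uL - cx) * Z / f           # back-project through the pinhole model
    Y = (vL - cy) * Z / f
    return np.array([X, Y, Z])
```

A point at the image center with a 5-pixel disparity, f = 500 px and a 0.1 m baseline reconstructs 10 m in front of the camera.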

B. Visual Odometry

The trajectory path is the connection of movements recovered from stereoscopic images taken at two successive times (t-1) and (t). The image is represented by point clouds in Euclidean coordinates. The movement is described in terms of estimated planar rotation matrices and translation vectors obtained via a least-squares method.
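The planar least-squares step can be sketched as a 2D closed-form alignment of corresponding point sets (our illustration; identifiers are ours):

```python
import numpy as np

def fit_planar_motion(P, Q):
    """Least-squares planar rotation R (2x2) and translation t mapping
    point set P (Nx2) onto Q (Nx2), i.e. Q ~= P @ R.T + t."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    X, Y = P - cP, Q - cQ  # center both clouds
    # Closed-form rotation angle from the cross/dot correlation sums
    theta = np.arctan2(np.sum(X[:, 0] * Y[:, 1] - X[:, 1] * Y[:, 0]),
                       np.sum(X[:, 0] * Y[:, 0] + X[:, 1] * Y[:, 1]))
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = cQ - R @ cP
    return R, t
```

Chaining the (R, t) pairs from successive frames yields the trajectory described above.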

C. Global –SLAM

The environment reconstruction is the transformation and storage of the point clouds recovered at each movement, referenced to a global coordinate system. The environment and 3D path visualization uses the OpenGL library, which also runs on a GPU.
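A minimal sketch of this accumulation step, composing each estimated motion into a global pose and moving the local cloud into global coordinates (names are ours):

```python
import numpy as np

def accumulate(T_global, R_step, t_step, cloud_local):
    """Compose the latest odometry step (R_step, t_step) into the global
    4x4 pose T_global and transform the local point cloud (Nx3) into the
    global coordinate system, as in the Global-SLAM stage."""
    T_step = np.eye(4)
    T_step[:3, :3], T_step[:3, 3] = R_step, t_step
    T_global = T_global @ T_step                       # chain the poses
    homog = np.hstack([cloud_local, np.ones((len(cloud_local), 1))])
    cloud_global = (T_global @ homog.T).T[:, :3]       # to global frame
    return T_global, cloud_global
```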

III. MATERIALS

The vSLAM algorithm was implemented in real and virtual environments. A pair of stereoscopic cameras was used to capture real-world images. A 3D modeling program was used to simulate a virtual environment and stereoscopic cameras. The processing test required the use of different CPUs and GPUs.

A. Stereoscopic Camera

Images from the real and virtual parallel stereoscopic systems are used as inputs to the vSLAM algorithms. The real stereoscopic system has two Guppy Pro (Allied Vision) cameras with 9 mm lenses and a baseline of 100 mm, shown in Fig. 2. Images of 1280x960 pixels are taken at 7.5 fps. On the other hand, the virtual camera was modeled with a baseline of 135 mm and an image size of 640x480.

Figure 2. Stereoscopic camera fixed on a helmet. The Allied cameras were synchronized by resetting the buffer after every capture.

B. Virtual environment

The algorithms were tested in a controlled environment before their implementation in the real world. This work uses Autodesk 3DS Max 3D modeling to test the vSLAM algorithm in an indoor environment. Virtual cameras capture rendered images of a virtual office environment. The object textures were created from photographs of real materials, in order to test the SURF algorithm, as shown in Fig. 3.

Figure 3. Virtual environment and stereoscopic camera fixed on a virtual robot.

C. Specifications of the GPUs used

The characteristics of the three graphics processing units are displayed in Table I. The Quadro 4000M and the GTX 680M are mobile devices, designed for low power consumption and assembly in mobile computers. The GTX 560Ti is a desktop graphics unit.

Figure 1. Architecture of vSLAM. The gray block represents a hybrid implementation between CPU and GPU (read stereo video and calibration parameters; load stereo image at time t; GPU-OpenCV keypoint identification and correlation; triangulation; path recovery).


TABLE I. GPU CHARACTERISTICS

Characteristics                       | GTX 560Ti | Quadro 4000M | GTX 680M
Processing power (GFLOPS)             | 1263.4    | 638.4        | 1935.4
NVIDIA CUDA parallel processor cores  | 384       | 336          | 1344
Graphics clock (MHz)                  | 822       | 475          | 720
Max power consumption (W)             | 170       | 100          | 100
Memory bandwidth (GB/sec)             | 128       | 128          | 115.2
Memory (GB)                           | 1         | 2            | 4

GPU characteristics read from the Nvidia page [12].

The performance of mobile GPUs changes when the power is supplied by batteries. In the next section, the visual odometry performance is evaluated by running the algorithm on different devices and energy supplies.

IV. RESULTS

The vSLAM algorithm was processed under different processing units and environments. Stereoscopic navigation videos in virtual and real environments were processed on different GPUs and CPUs. The recovered navigation path is presented in a video [13]. The processing speed for online visualization of the video was 30% of the average speeds computed in the tests.

A. Speed test

Two versions (release compilations) of the visual odometry algorithm were tested. The first one runs on GPU and CPU; it was tested on two mobile devices and on a desktop platform. The second program runs on pure CPUs. Speed results are presented in Figs. 4 and 5. Some variations in the processing time illustrate the dissimilarity of environment textures.

Speeds in Fig. 4 represent the processing behavior with a stereoscopic video (image size 1280x980). The fastest result was obtained with the GPU unit GTX 560Ti, at an average speed of 6 fps. The mobile devices Quadro 4000M and GTX 680M experience a 5-times speed drop when using the battery source. The processing speed test on a pure CPU shows small variations between different processors: the Intel Core i7 950 (3.07 GHz) and the Intel Core i7 3610QM (2.3 GHz) display a 0.045 fps speed difference. A speed increase of 20 times is observed between the fastest GPU and the CPUs, giving evidence of the superior performance of the GPU.

Figure 4. Velocities of the visual odometry algorithm on different devices. The video captured by the real stereoscopic camera (image size 1280x980) was tested on GPU and CPU devices. The mobile GPUs were tested with and without battery.

In the second test, rendered images of size 640x480 pixels are processed on the GPUs and a CPU (Fig. 5). The Quadro 4000M operating on battery achieves a processing speed similar to that of a CPU. Such a fact illustrates the limits of using the GPU on a battery-operated mobile device.

Figure 5. Velocities of the visual odometry algorithm in a virtual environment. The video rendered at 640x480 on the GPU 560Ti reaches real-time processing (30 fps).

B. vSLAM experiment

Navigation in a 37.5-meter indoors environment was recorded in a stereoscopic video and later processed with the odometry algorithm. A typical illustration of stereoscopic images taken at successive time instants is shown in Fig. 6.


Figure 6. Two pairs of stereoscopic images taken in an indoor environment. The upper two images (Left(t), Right(t)) are at the current position t, and the lower two (Left(t-1), Right(t-1)) are at the previous step t-1.

The path recovered in the indoors environment is presented in Fig. 7. Accuracy measurement of the recovered path is not addressed in the present work, but can be found in refs. [11, 14].

V. CONCLUSION

The visual odometry algorithm requires a parallelization technique due to the large amount of image information to process. The results show a performance increase of about 10 times between the use of GPU and CPU. Power supply is also seen as a restriction for the performance of mobile devices: in the virtual test, the GPU Quadro 4000M had almost the same performance as a CPU. The test in a real indoors environment suggests the use of GPUs for large images, and the possibility of reaching real-time processing speed (30 fps) with an image size of 640x480 pixels. Similar comparative works can be found in the literature: Konolige [2] achieves a processing speed of 15 fps with an image size of 512x384 pixels on a CPU, and Clipp [4] achieves 15 fps with an image size of 1224x1024 pixels using a GPU.

Figure 7. The path recovery is presented as the connection of local coordinate systems. The red points are the reconstructed features used to recover the pose at every time step.

ACKNOWLEDGMENT

The authors wish to thank the Brazilian research funding agencies CNPq, CAPES and Fapesp, sponsors of the present work, and the Department of Computer Engineering and Industrial Automation (DCA) at the School of Electrical and Computer Engineering (FEEC), University of Campinas (Unicamp).

REFERENCES

[1] Lui, Wen Lik Dennis, and Ray Jarvis. "A pure vision-based approach to topological SLAM." Intelligent Robots and Systems (IROS), 2010 IEEE/RSJ International Conference on. IEEE, 2010.

[2] Konolige, Kurt, and Motilal Agrawal. "FrameSLAM: From bundle adjustment to real-time visual mapping." Robotics, IEEE Transactions on 24.5 (2008): 1066-1077.

[3] Newcombe, Richard A. and Davison..," KinectFusion: Real-time dense surface mapping and tracking ", Proc. 10th IEEE Int Mixed and Augmented Reality (ISMAR) Symp, 127-136, 2011.

[4] Clipp, B. and Jongwoo Lim and Frahm, J.-M. and Pollefeys, M.," Parallel, real-time visual SLAM ", Proc. IEEE/RSJ Int Intelligent Robots and Systems (IROS) Conf, 3961-3968, 2010.

[5] Sanders, Jason, and Edward Kandrot. CUDA by example: an introduction to general-purpose GPU programming. Addison-Wesley Professional, 2010.

[6] Bradski, G. "OpenCV 2.4.2", Dr. Dobb's Journal of Software Tools, 2000

[7] Nagendra, P." Performance characterization of automotive computer vision systems using Graphics Processing Units (GPUs) ", Proc. Int Image Information Processing (ICIIP) Conf, 1-4,2011.

[8] Katzourakis, D. I. and Velenis, E. and Abbink, D. A. and Happee, R. and Holweg, E., " Race-Car Instrumentation for Driving Behavior Studies ", IEEE_J_IM, Vol 61, 462-474, 2012.

[9] Herbert Bay, Andreas Ess, Tinne Tuytelaars, Luc Van Gool, "SURF: Speeded Up Robust Features", Computer Vision and Image Understanding (CVIU), Vol. 110, No. 3, pp. 346--359, 2008

[10] Clipp, B. and Jongwoo Lim and Frahm, J.-M. and Pollefeys, M.," Parallel, real-time visual SLAM ", Proc. IEEE/RSJ Int Intelligent Robots and Systems (IROS) Conf, 3961-3968, 2010.

[11] Delgado, V. J., "Localization and navigation of an autonomous mobile robot through odometry and stereoscopic vision," Master Dissertation, UNICAMP, Brazil, February 2012.

[12] Nvidia Developer Zone. (2013,Oct10) [Online]. Available: https://developer.nvidia.com/cuda-gpus

[13] Delgado, V. J., (2013, Oct 20). "Processing speed test of Stereoscopic vSLAM in an Indoors environment GPU vs CPU"[YouTube Video file]. Retrieved from: http://www.youtube.com/watch?v=pUVL-17ub9M&feature=youtu.be

[14] Delgado, V. J., Paulo R. Kurka, and Eleri Cardozo. "Visual odometry in mobile robots." Robotics Symposium, 2011 IEEE IX Latin American and IEEE Colombian Conference on Automatic Control and Industry Applications (LARC). IEEE, 2011.



Enhanced View-based Navigation for Human Navigation by Mobile Robots Using Front and Rear Vision Sensors

Masaaki Tanaka, Yoshiaki Mizuchi, Akimasa Suzuki and Hiroki Imamura
Graduate School of Engineering, Soka University, 1-236 Tangi-machi, Hachioji, Tokyo, 192-8577, Japan
[email protected]

Yoshinobu Hagiwara
National Institute of Informatics, 2-1-2 Hitotsubashi, Chiyoda, Tokyo, 101-8430, Japan
[email protected]

Abstract—In this paper, we propose an enhanced view-based navigation which is robust against featureless scenes using front and rear vision sensors, and evaluate the proposed method. Position and rotation of a mobile robot can be estimated by image matching and ego-motion estimation using a suitable one or two vision sensors. In conventional view-based navigation, it is difficult to estimate the position and rotation of a mobile robot in case the mobile robot heads for a featureless scene (e.g. a wall surface). By using the proposed method, a mobile robot is expected to enable human navigation in an actual environment. To evaluate the proposed method, we conducted experiments at a corner and in a path heading for a lateral wall. From the experimental results, we confirmed the feasibility of the position and rotation estimation for human navigation by the mobile robot.

Keywords-component: view-based navigation; human navigation; front and rear cameras; robot; obstacle avoidance

I. INTRODUCTION

Recently, human navigation by mobile robots has attracted interest. In extensive facilities, navigation to places indicated by visitors using mobile robots is useful. To realize human navigation, it is necessary to estimate the position and rotation of the mobile robot. View-based navigation [1] has been proposed as one approach to position and rotation estimation. View-based navigation estimates the position and rotation of a mobile robot using image matching between a current image and recorded images. This estimation can be performed without accumulation of positional errors even on a long path. By applying this view-based navigation, we have investigated a robot navigation system [2] which has enabled avoidance of static obstacles using ego-motion. The ego-motion is calculated from corresponding SURF (Speeded Up Robust Features) [3] feature points in a current image and the most matched image among the recorded images. However, if a mobile robot heads for a featureless scene, which appears at a corner or in a path heading for a lateral wall during dynamic obstacle avoidance, view-based navigation with ego-motion has difficulty in estimating the position and rotation of the mobile robot. Therefore, we realize an enhanced view-based navigation

which is robust against featureless scenes by using front and rear vision sensors installed on a mobile robot. Installing vision sensors on the front and rear of a mobile robot is expected to have the advantages of navigating back and forth with a single recording and of utilizing the rear vision sensor for detecting a following human. The image matching and the ego-motion estimation are performed with a suitable one or two vision sensors. By using our proposed method, it is expected that view-based navigation is extended to featureless scenes and enables human navigation in an actual environment.

II. PROPOSED METHOD

Fig. 1 shows the overview of our proposed method. Images at the top of Fig. 1 show recorded images obtained along the recording path presented by the dotted horizontal line. Upper images are images from the front vision sensor, and lower images are images from the rear vision sensor. Images in the


Figure 1. Overview of our proposed method


middle of Fig. 1 show current images. The upper image is an image from the front vision sensor, and the lower image is an image from the rear vision sensor. In the image matching process of Fig. 1 (I), most similar front and rear images are determined by comparing current front and rear images to each of front and rear images recorded at respective points on the recording path using the SURF method. In the ego-motion estimation process of Fig. 1 (II), the ego-motion is calculated from 3D-positions of corresponding SURF feature points between the current images and the most similar front and rear recorded images. In the position and rotation estimation process of Fig. 1 (III), position and rotation are estimated from the position of the most similar recorded images and the estimated ego-motion. Fig. 2 shows the conceptual diagram of the ego-motion estimation with the front vision sensor. The ego-motion estimation with the rear vision sensor is performed in the same way. In the coordinate system of Fig. 2, the origin is the position of most similar recorded images. The z-axis is the line on the recording path. The x-axis is the line perpendicular to the z-axis. The y-axis is the line perpendicular to the xz plane. Rh stands for the height of vision sensors attached on the mobile robot. The point R (0, Rh, 0) represents the position of most similar recorded images. The point C (Cx, Rh, Cz) represents the current position of the mobile robot. Besides, circles show the positions of sampled feature points viewed from the point R. Triangles show the positions of corresponding feature points viewed from the point C. Squares show corresponding feature points rotated. First, the triangles are rotated to be matched rotationally with the circles using the singular value decomposition [4]. The estimated rotation from the triangles to the squares is equal to the estimated rotation θy of the mobile robot. Next, the squares are translated to be matched with the circles. 
The estimated translation from the squares to the circles is equivalent to the estimated translation of the mobile robot from the point R to the point C. The ego-motion is obtained as the estimated rotation θy and the estimated translation (Cx, 0, Cz) of the mobile robot. The ego-motion estimation is performed with both the front and rear vision sensors, and the positions of feature points viewed from the current position are rotated and translated using each ego-motion. Finally, the ego-motion with the larger number of matching feature points is selected as the estimated ego-motion. By estimating the ego-motion using each of the front and rear vision sensors, the mobile robot is expected to estimate its position and rotation more robustly against featureless scenes.
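The rotational alignment via singular value decomposition [4], followed by the translation step, can be sketched as the classic Arun-style point-set alignment (identifiers are ours, for illustration):

```python
import numpy as np

def estimate_ego_motion(P_rec, P_cur):
    """Arun-style SVD alignment: rotation R and translation t mapping the
    current-view feature points P_cur (Nx3) onto the recorded-view points
    P_rec (Nx3), i.e. P_rec ~= (R @ P_cur.T).T + t."""
    cR, cC = P_rec.mean(axis=0), P_cur.mean(axis=0)
    H = (P_cur - cC).T @ (P_rec - cR)   # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                  # guard against reflections
    t = cR - R @ cC
    return R, t
```

Running this once per sensor and keeping the solution with the larger inlier count matches the selection rule described above.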

III. EXPERIMENTS

A. Position and rotation estimation at a corner

To evaluate the proposed method for the robustness against a featureless scene at a corner, we conducted an experiment at a corner in a corridor of our laboratory. Fig. 3 shows a measurement system for experiments of the position and rotation estimation. The measurement system has two laser pointers to adjust its position and rotation, two Kinects as front

Figure 3. Measurement system for experiments

Figure 4. Experimental path

Figure 5. Captured images at the origin in Fig. 4

(a) Front image (b) Rear image

Figure 6. Estimated positions

Figure 2. Ego-motion estimation


and rear vision sensors, and two notebook PCs connected with each Kinect. The estimation accuracy of the proposed method is evaluated by comparing estimated position and rotation to precise position and rotation. Fig. 4 shows an experimental path. The experimental path ends at the point 91.5cm from a forward wall. 91.5cm is half the width of the corridor and a mobile robot is assumed to turn at this point. In Fig. 4, a white circle and a square in dotted outline represent a start position of the recording path and that of the experimental path, respectively. Black circles and squares in solid outline represent capture positions on the recording path and those on the experimental path, respectively. In the recording path of 400cm, recorded images were captured at 100cm intervals. In the experimental path of 400cm, images were captured at 20cm intervals. Fig. 5 (a)(b) show front and rear captured images at the origin in Fig. 4. Fig. 6 shows the experimental result by the conventional method [2] and the proposed method. In Fig. 6, squares are capture positions on the experimental path. Lozenges and circles represent estimated positions by the conventional method and those by the proposed method, respectively. From the experimental result, it is confirmed that the proposed method is able to estimate the position of the mobile robot robustly against a featureless scene of a wall surface at the corner. On the other hand, the conventional method had difficulty in estimating the position of the mobile robot at some positions. Fig. 7 and Fig. 8 show positional errors and rotational errors in the experiment. In Fig. 7, circles and lozenges represent positional errors by the proposed method and those by the conventional method, respectively. In

Fig. 8, circles and lozenges represent rotational errors by the proposed method and those by the conventional method, respectively. With the conventional method, positional errors over 10cm occurred at 4 positions and rotational errors over 1deg. occurred at 5 positions. The maximum errors of position and rotation were 38.1cm and 8.5deg. respectively. On the other hand, with the proposed method, positional errors and rotational errors were under 5cm and 1deg. at all positions. From these results, it is confirmed that the proposed method is able to estimate position and rotation of a mobile robot accurately at a corner which can bring featureless scenes.

B. Position and rotation estimation in dynamic obstacle

avoidance

To evaluate the proposed method for the robustness against a featureless scene in heading for a lateral wall for dynamic obstacle avoidance, we conducted an experiment in a corridor in our laboratory. On the assumption of a sudden obstacle in

Figure 9. Experimental paths

Figure 10. Captured images at the point C in Fig. 9

(a) Front image (b) Rear image

TABLE I. POSITIONAL AND ROTATIONAL ERRORS WITH THE CONVENTIONAL METHOD

Distance to the obstacle (cm) | Positional error (cm): Average / Std. dev. | Rotational error (deg.): Average / Std. dev.
220 | 5.2 / 11.0  | 2.4 / 4.2
160 | 4.6 / 5.2   | 2.3 / 2.5
100 | 28.2 / 33.5 | 7.4 / 11.2
80  | 95.3 / 97.7 | 7.8 / 17.0

TABLE II. POSITIONAL AND ROTATIONAL ERRORS WITH THE PROPOSED METHOD

Distance to the obstacle (cm) | Positional error (cm): Average / Std. dev. | Rotational error (deg.): Average / Std. dev.
220 | 5.0 / 6.2   | 1.1 / 1.2
160 | 4.6 / 5.7   | 1.8 / 1.8
100 | 3.7 / 4.2   | 2.6 / 2.6
80  | 48.7 / 76.1 | 3.2 / 3.3

Figure 7. Positional errors in the experiment at the corner

Figure 8. Rotational errors in the experiment at the corner


the middle of the corridor, the robot runs on avoidance paths from various distances to the obstacle. The same measurement system as in Fig. 3 is used. Fig. 9 shows the experimental paths. The obstacle is located at the origin of the coordinate system. The maximum distance from the recording path during avoidance is 60cm. Avoidance starts from 220cm, 160cm, 100cm and 80cm to the obstacle, and the angles between each avoidance path and the surface of the lateral wall are approximately 15.3deg., 20.6deg., 31.0deg. and 36.9deg., respectively. In Fig. 9, a white lozenge represents the start position of the recording path. Also, hexagons, squares, triangles and circles in dotted outlines represent the start positions of the avoidance paths from 220cm, 160cm, 100cm and 80cm to the obstacle, respectively. Black lozenges represent capture positions on the recording path. Also, hexagons, squares, triangles and circles in solid lines represent capture positions on the avoidance paths from 220cm, 160cm, 100cm and 80cm to the obstacle, respectively. On the recording path, recorded images were captured at 100cm intervals. On the avoidance paths, images were captured at 20cm intervals. Figs. 10(a) and (b) show the front and rear captured images at the point C, which is the end position of the avoidance path from 80cm to the obstacle. Experimental results are shown in TABLE I and TABLE II. TABLE I shows positional and rotational errors with the conventional method. In TABLE I, on the avoidance paths from 100cm and 80cm to the obstacle, it was difficult to estimate the position and rotation of the mobile robot during avoidance. TABLE II shows positional and rotational errors with the proposed method. In TABLE II, on the avoidance paths from 220cm, 160cm and 100cm to the obstacle, it was confirmed that the average positional errors and average rotational errors are under 10cm and 3deg., respectively.
The estimation accuracy is considered to be sufficient for controlling a mobile robot during human navigation in an actual environment. On the avoidance path from 80cm to the obstacle, it was difficult to estimate the position and rotation of the mobile robot during avoidance. This is due to having selected recorded images from the position 100cm before in the image matching process, because of the similarity in the intersection of the current and recorded

images. Far matched points in the current images seem to have had larger positional errors and to have led to wrong estimation. From the experimental results in TABLE II, we confirmed that the acceptable range of the distance to the obstacle was improved down to 100cm by the proposed method, which means a mobile robot can avoid a dynamic obstacle at up to approximately 31.0 deg. to the surface of a lateral wall. Therefore, it is confirmed that the proposed method is able to estimate the position and rotation of a mobile robot more robustly against the featureless scene of a lateral wall which appears in dynamic obstacle avoidance.

IV. CONCLUSIONS

In this paper, we proposed an enhanced view-based navigation method for human navigation by mobile robots, made robust against featureless scenes by the use of front and rear vision sensors. In experiments at a corner and on a path heading for a lateral wall, we evaluated the accuracy of the estimated position and rotation. From the experimental results, it was confirmed that the proposed method was able to estimate the position and rotation of a mobile robot more robustly against featureless scenes. Moreover, the average processing time of the proposed method was 630ms. Therefore, by using the proposed method, it is expected that a mobile robot can run over a wide range of an actual environment and enable human navigation.

REFERENCES

[1] Y. Matsumoto, M. Inaba, and H. Inoue, “View-Based Approach to Robot Navigation,” JRSJ, vol. 20, no. 5, 2002, pp. 506–514.

[2] Y. Hagiwara, T. Shoji, and H. Imamura, “Position and Rotation Estimation for Mobile Robots Straying from a Recording Path Using Ego-motion,” IEEJ-C, vol. 133, no. 2, 2013, pp. 356-364.

[3] H. Bay, T. Tuytelaars, and L. Van Gool, “SURF: Speeded Up Robust Features,” in ECCV, 2006, pp. 404-417.

[4] K. S. Arun, T. S. Huang, and S. D. Blostein, “Least-Squares Fitting of Two 3-D Point Sets,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 9, no. 5, 1987, pp. 698-700.


- chapter 8 -

Geoscience


Generation of reference data for indoor navigation by INS and laser scanner

Friedrich Keller, Thomas Willemsen and Harald Sternberg
Dept. Geomatics, HafenCity University, Hamburg
[email protected], [email protected], [email protected]

Abstract—Many indoor applications use maps or building models to improve position determination and navigation. Here, the question arises how such maps can be generated as efficiently as possible. This article shows how kinematic laser scanning may be used to provide a point cloud. It focuses especially on the determination of the trajectory and the applied filter technology, plus the external support provided by a total station, but not on the actual modeling of the data.

Keywords—indoor navigation, mobile mapping, total station, inertial measurement unit, laser scanning, Kalman filtering

I. INTRODUCTION

Navigation has never been as easy as it is today: outside of buildings, anyone can navigate with navigation systems or smartphones. In buildings, the use of map data improves position determination. At the same time, the use of maps helps to verify the present position. The prerequisite, logically, is that map data or whole building models are available. A brief look at the proceedings of IPIN 2012 shows that maps play a role in many areas. [5] deals with the issue of acquiring the map data as a basis: it is shown how this data can be obtained from a photo of an evacuation plan and how the data helps to improve the position. This article discusses the possibility of using an indoor mobile mapping system to measure a point cloud as the basis for a building plan or model. Other examples are [4], [1] and [2].

II. MEASURING SYSTEM

A quick way to capture point clouds for the creation of building data is kinematic laser scanning, for which mobile mapping systems (MMS) are used. Usually an MMS consists of different components. The main component is normally an IMU, which determines the trajectory in conjunction with GNSS; both the position and the orientation in space are determined. GNSS is an essential element: it provides an absolute position and limits the drift of the IMU. Depending on the configuration, one or several laser scanners are used.

Since no GNSS is available indoors, the drift of the system and the absolute position determination must be compensated by other systems. The HCU Hamburg developed a modular measurement platform, which allows an analysis of different measurement configurations and sensors. The main module consists of a high-end IMU and a laser scanner. For outdoor use the system can be expanded by a GNSS module and an odometer. For indoor use a larger number of modules exists, because various sensors to support the system are being tested. At this point the total station module is presented. It consists of a 360 degree prism and a Leica 1201+ total station. To determine the suitability of the total station, the configuration has been adjusted accordingly.

In order to merge the data and obtain the optimal trajectory, the approach of Kalman filtering [3] has been selected, in this case in its variant for nonlinear systems, also called the extended Kalman filter (EKF). This allows an optimal estimation of the trajectory. The Kalman filter is a set of equations that provides a method to determine the estimated state of a process. This series of equations consists of two steps: prediction (time update equations) and correction (measurement update equations). For further explanation, reference is made to the relevant literature [7].
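
The predict/correct cycle described here can be sketched as follows. This is a generic linear Kalman filter illustration in Python with made-up matrices, not the authors' actual system model; an EKF additionally linearizes the state and measurement functions at each step.

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Prediction step: propagate state estimate and covariance."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def kf_update(x, P, z, H, R):
    """Correction step: blend the prediction with measurement z."""
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Illustrative 1D constant-velocity model: state = [position, velocity]
dt = 0.1
F = np.array([[1, dt], [0, 1]])
H = np.array([[1.0, 0.0]])             # only position is measured
Q = 1e-4 * np.eye(2)
R = np.array([[0.01]])

x, P = np.array([0.0, 0.0]), np.eye(2)
for k in range(1, 51):
    x, P = kf_predict(x, P, F, Q)
    z = np.array([k * dt * 1.0])       # noiseless position of a 1 m/s target
    x, P = kf_update(x, P, z, H, R)

print(float(x[1]))                     # estimated velocity converges towards 1.0
```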

To improve the results of the filter, a Rauch-Tung-Striebel (RTS) smoother is applied. The RTS smoother is an efficient two-pass algorithm for fixed-interval smoothing. The first pass is a normal EKF run. Then the respective state vectors and covariances are retrieved backwards and smoothed.
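
The two-pass RTS scheme can be illustrated on a scalar random-walk model; the model and all noise values below are invented for the sketch and are not the paper's system.

```python
import numpy as np

# Forward Kalman pass on a scalar random walk, then RTS backward smoothing.
F, Q, R = 1.0, 0.01, 0.25
z = np.array([0.1, 0.2, 0.15, 0.3, 0.25])   # made-up measurements

n = len(z)
x_f = np.zeros(n); P_f = np.zeros(n)        # filtered states / covariances
x_p = np.zeros(n); P_p = np.zeros(n)        # predicted states / covariances
x, P = 0.0, 1.0
for k in range(n):
    x_p[k], P_p[k] = F * x, F * P * F + Q   # prediction
    K = P_p[k] / (P_p[k] + R)               # gain (measurement model H = 1)
    x = x_p[k] + K * (z[k] - x_p[k])        # correction
    P = (1 - K) * P_p[k]
    x_f[k], P_f[k] = x, P

x_s = x_f.copy(); P_s = P_f.copy()          # backward (smoothing) pass
for k in range(n - 2, -1, -1):
    C = P_f[k] * F / P_p[k + 1]             # smoother gain
    x_s[k] = x_f[k] + C * (x_s[k + 1] - x_p[k + 1])
    P_s[k] = P_f[k] + C * (P_s[k + 1] - P_p[k + 1]) * C

print(np.round(x_s, 3))
assert np.all(P_s <= P_f + 1e-12)           # smoothing never increases variance
```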

III. TIME SYNCHRONIZATION

For the use of the Kalman filter in kinematic measuring applications, it is essential to provide a uniform time base. Normally, the alignment of the time systems is provided by GNSS. For the indoor system of the HCU, a different solution had to be found. The IMU itself has a time system to reference the connected sensors. This is synchronized with GPS time by the built-in GNSS receiver; this has to be done, otherwise the time system will drift. But as the time stamps for the laser scanner and the odometer are generated by the IMU, this is not critical. The total station, however, has its own time system, which cannot be synchronized from the outside via GNSS or other inputs; explicit explanations of the total station time system can be found in [6]. To reference the time systems without much effort, such as an additional laptop or a specific measurement configuration, the following solution is analyzed. The deviation of the clocks can be described by the linear function

k_TPS = m · k_IMU + b        (1)


k_IMU and k_TPS denote the same point in time, m is a scale factor caused by the different time drift of the systems, and b is the offset between the systems. Initially, it is assumed that the systems only have an offset and that the scale is stable with m = 1; only the offset b is to be determined. From the total station times and measured coordinates, a second velocity profile can be calculated. The offset of the two time systems is then determined by cross-correlation.

IV. INDOOR TEST

In a building of the HCU, several loops were measured with the measuring platform while the platform was tracked by a total station. Once the time systems are synchronized, every second point of the total station measurement is marked as a control point and ignored in the Kalman filter. To draw a conclusion about the obtained accuracy, the estimated positions of the filter are compared with the control points. Table I shows an overview of the achieved accuracy. The control points are matched to the measurement system over time. Several types of supporting data were tested for the filter: sequentially, the odometer and the total station were added to the IMU. All other parameters, such as the initialization of the filter (measured at standstill by the total station), remained unchanged.

The deviations shown in Table I are to be investigated more closely for the second and third measurements. It is noticeable that the mean of the second measurement is still above the standard deviation, which suggests that systematic errors are involved in the measurement. A detailed analysis of the trajectory explains this with the drift of the system: identically driven circles become larger or smaller over the course of the measurement. In the third measurement, the mean is less than the standard deviation; it seems that the systematic error components disappear when the total station is added. It is assumed that the character of the residual error is random, but this is subject to further investigations.

V. OUTDOOR TEST

To obtain an independent control of the system, outdoor measurements were recorded. To determine the reference trajectory, measurements from DGPS, the IMU and the odometer were used and merged with commercial software (NovAtel Waypoint Inertial Explorer). In addition, the trajectory from total station, IMU and odometer was computed. Figure 1 shows the comparison of the position coordinates; illustrated is the difference between the reference (GPS) and the actual trajectory (total station). Striking here is the large variation at the beginning of the measurement at

TABLE I. QUADRATIC DEVIATION; MEASUREMENT 1: IMU; MEASUREMENT 2: IMU AND ODOMETRY; MEASUREMENT 3: IMU, ODOMETRY AND TOTAL STATION

         IMU only [m]   with Odometry [m]   with Total station [m]   vs. GPS [m]
Max.     1.8049         1.0395              0.1105                   0.1274
Median   0.5820         0.3783              0.0101                   0.0164
Mean     0.7218         0.4140              0.0159                   0.0202
Std.     0.4872         0.3268              0.0197                   0.0149
Min.     0.0760         0.001               9.5e-08                  0.0024

Fig. 1. Differences between total station and GPS with GPS covariance (95%)

standstill; this must result from the GPS measurement. As a consequence, the GPS control must be treated with care. If the variation at standstill is taken as the accuracy of the measurement (2 cm, 95%), only a few significant deviations were found. This implies that, in this configuration, the total station can achieve the same accuracy class as GPS. It shows that the total station is suitable to support kinematic measurements together with an IMU.

VI. CONCLUSION

In summary, it can be stated that rapid indoor registration with indoor mobile mapping systems is possible. The loss of GNSS can be compensated by a total station. Cross-correlation is a practical method to estimate the offset of the time systems. However, the full potential has not yet been exhausted: in future studies an independent control has to be found for quality assessment, and it should also be investigated whether better results can be reached with special commands via the GeoCom interface.

REFERENCES

[1] C. Ascher, C. Kessler, R. Weis and G. F. Trommer: Multi-Floor Map Matching in Indoor Environments for Mobile Platforms. In: Proceedings of the Indoor Positioning and Indoor Navigation (IPIN) 2012 International Conference, 13-15 Nov. 2012

[2] D. Gotlib, M. Gnat and J. Marciniak: The Research on Cartographical Indoor Presentation and Indoor Route Modeling for Navigation Applications. In: Proceedings of the Indoor Positioning and Indoor Navigation (IPIN) 2012 International Conference, 13-15 Nov. 2012

[3] R. E. Kalman: A New Approach to Linear Filtering and Prediction Problems. In: ASME Journal of Basic Engineering 82 (Series D), pp. 35-45, 1960

[4] S. Khalifa and M. Hassan: Evaluating Mismatch Probability of Activity-Based Map Matching in Indoor Positioning. In: Proceedings of the Indoor Positioning and Indoor Navigation (IPIN) 2012 International Conference, 13-15 Nov. 2012

[5] M. Peter, D. Fritsch, B. Schaefer, A. Kleusberg, L. Bitsch, A. J and K. Wehrle: Versatile Geo-referenced Maps for Indoor Navigation of Pedestrians. In: Proceedings of the Indoor Positioning and Indoor Navigation (IPIN) 2012 International Conference, 13-15 Nov. 2012

[6] W. Stempfhuber, K. Schnaedelbach and W. Maurer: Genaue Positionierung von bewegten Objekten mit zielverfolgenden Tachymetern [Precise positioning of moving objects with target-tracking total stations]. In: Proceedings of the Ingenieurvermessung 2000

[7] G. Welch and G. Bishop: An Introduction to the Kalman Filter. Technical Report TR 95-041, UNC-CH Computer Science, 1995


Implementation of OGC WFS floor plan data for enhancing accuracy and reliability of Wi-Fi fingerprinting positioning methods

Daniel Zinkiewicz, Wasat Sp. z o.o., Warsaw, Poland, [email protected]
Bartosz Buszke, Wasat Sp. z o.o., Warsaw, Poland, [email protected]

Abstract—The paper presents a method of enhancing the accuracy and reliability of Wi-Fi fingerprinting by implementing vector data provided by OGC WFS services. The main concept is based on dynamic zoning of the mapped indoor environment by dividing fingerprint model data. The working prototype is built with the Fraunhofer IIS awiloc® SDK for indoor positioning and an open source GeoServer platform for hosting WFS data. In the proposed method, WFS is adopted as a service for provisioning information on buildings in GeoJSON format, while WMS images provide only a background image map. WFS vector data are implemented with information on walls, paths, entrances, doors, open spaces, excluded areas and the overall geometry of buildings. Requested GML data are loaded in the form of layers with the described layer data type. WFS content is accessed asynchronously from the GeoServer and retrieved by a built-in Android-based mobile application.

The client-server structure of the presented solution introduces flexibility to a static presentation of indoor floor plans. The implementation of WFS information allows obtaining reliable Wi-Fi fingerprinting results by limiting areas where a position is not reachable. Also, bordering Wi-Fi fingerprint model data with vector data allows excluding position variation close to walls in an indoor environment. As a result, the position obtained by the presented solution does not jump between the two sides of a wall. The applied method of merging Wi-Fi fingerprinting model data with vector data from a WFS service enhances the accuracy of indoor positioning and eliminates the influence of big navigation and routing errors.

Keywords—Wi-Fi, fingerprinting, WFS, Web Feature Service, floor plans

I. INTRODUCTION

Large-scale deployment of indoor location data is difficult due to technical challenges. For Wi-Fi fingerprinting, fusion with additional information is normally required to achieve high accuracy and resolution. A number of researchers have been working on combining Wi-Fi fingerprinting with different technologies to enhance accuracy and reliability.

The article is organized as follows. In Section "System parts architecture", we describe the baseline components of the system prototype. The details of the proposed methodology of data integration are explained and discussed in Section "Implementation". The modeled results are tested and evaluated against the baseline method in Section "Tests & evaluation". An overview of the related work is provided in Section "Related work". The article concludes in Section "Conclusions and future work" with a summary of the primary contributions of this work and an overview of future work.

A. Problem

Location and indoor tracking technologies provide a position in the form of two-dimensional coordinates. Single sensor sources (e.g. Wi-Fi fingerprinting) are not precise and accurate enough, but the combination of various technologies and different levels of data processing allows more exact and reliable indoor positioning.

B. Motivation

The presented work is motivated by the need for spatial support for users who want more accurate methods of indoor positioning and more reliable systems providing continuous location information.

In this article, we propose a Wi-Fi fingerprinting-based indoor positioning system combined with vector data obtained from a Web Feature Service (WFS). In the proposed prototype, we define an efficient and robust model of utilization of floor plan data, in which the initial position distribution calculated by the positioning system is corrected before being presented to the user.

II. TECHNOLOGY AND SYSTEM PARTS ARCHITECTURE

The work presented in this paper is based on a decentralized system architecture. The prototype system consists of three different parts: the server, in the form of an open source GeoServer implementation; the mobile engine for determining a fingerprinting position, in the form of an Android application; and the engine for WFS retrieval and manipulation of vector data.

A. Use cases

Our approach, which combines WFS distributed data and the Wi-Fi fingerprinting position, has a high potential in large open spaces where no routing graphs are implemented. Vector data in the form of multilines or polygons define the boundaries of different areas, which in most cases cannot be crossed in a short time. Based on these objectives, we try to resolve the problem of enhancing positioning accuracy in the close neighborhood of walls and in open spaces in a building.

B. Geoserver

For the provision of floor plans in the form of WFS data, GeoServer (particularly GeoServer for Windows) is used. GeoServer is an open source project running on different platforms including Microsoft Windows, Linux and Mac OS. It supports a rich set of raster and vector data formats, geographic data sources and OGC standards (among them WMS, WFS and GML) as well as the open GeoJSON format. GeoServer for Windows runs with the Apache HTTP Server and Apache Tomcat. Shapefiles can be used directly as data sources for GeoServer.

C. Mobile client

A prototype client side of the system was developed on the Android platform. We decided to build a fat client with many features because reducing communication time was the most important factor for the effectiveness of the system's operation. The Android application was optimized for the Samsung Galaxy S2 device, which allowed us to run the algorithms and obtain a position in a short time. The decentralized architecture of the positioning system enables natural load balancing, high availability of floor plans and robustness of the system.

D. WFS data

WFS is the service of choice for accessing building data from GeoServer. In addition, WFS can use various data sources as a backend; available implementations usually support various data formats and databases.

WFS provides a simple protocol and an interface for accessing geographical features over HTTP. Its main operations used in our implementation were getCapabilities(), describeFeatureType() and getFeature().

To access complete layer data, a layer can be requested using the getFeature() operation. Thus, the download of layer data can be accomplished stepwise, starting with the layer containing the building outline and the layer representing the current user location (Fig. 1). In addition, layers providing information about the positioning technologies supported by a client device can be loaded.

Hence, WFS allows the exploration and download of the provided features, i.e. building model layers, ensuring extensibility of, and flexible access to, the provided data based on a standard format and protocol, in a dynamic and selective way.
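
A getFeature() request of the kind described above could be assembled as shown below; the server URL, layer name and bounding box are placeholders of our own, not the actual deployment values.

```python
from urllib.parse import urlencode

# Sketch of a WFS GetFeature request for loading one floor-plan layer
# as GeoJSON; endpoint and layer name are hypothetical.
base_url = "http://example.org/geoserver/wfs"
params = {
    "service": "WFS",
    "version": "1.1.0",
    "request": "GetFeature",
    "typeName": "building:walls_floor1",    # hypothetical layer name
    "outputFormat": "application/json",     # GeoJSON instead of GML
    # Restrict to a bounding box around the current position
    # (minx, miny, maxx, maxy, CRS):
    "bbox": "21.00,52.20,21.01,52.21,EPSG:4326",
}
url = base_url + "?" + urlencode(params)
print(url)
```

The same query-string shape applies to describeFeatureType(), with only the `request` parameter changed.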

Figure 1. Example of WFS vector data with a floor plan displayed in an Android canvas object.

E. Fraunhofer SDK

The Fraunhofer IIS awiloc® solution makes it possible for mobile devices to independently determine their position in indoor and urban environments based on signal strength measurements. Networks from the IEEE 802.11 family of wireless LAN standards have emerged as a prevalent technology. Hence, they are predominantly used as a basis for indoor positioning.

Indoor positions are determined with an accuracy of a few meters. Positioning based on Wi-Fi fingerprinting in communication networks perfectly complements other location data or different approaches developed with the distributed awiloc® SDKs.

III. IMPLEMENTATION

Most of the implementation work was done on the client side of the system, in a Java-based Android environment. The development was divided into parts including algorithms for obtaining the Wi-Fi fingerprinting position from awiloc® models, algorithms for downloading and parsing, methods for manipulating data structures, and methods for coordinate transformation. An important element of the client side were the algorithms for recalculating the position with the use of vector data. Each part of the application, and the most important parts of each algorithm, run as separate Android services or as asynchronous tasks (e.g. downloading WFS data from GeoServer).

A. GeoJSON

We decided to use GeoJSON for flexible and highly interoperable access to floor plan models on Android devices.


In combination with WFS as a service for access, it is possible to support various internal representations of building data, which can be transformed on the fly into the GeoJSON format.

In our approach we adopt WFS as a service for the provisioning of building information in GeoJSON format. Using the OGC WFS and GeoJSON standards, we ensure high interoperability with the Android SDK. In this way, building data providers are not restricted to a particular format but can choose any format, as long as it can be transformed into GeoJSON or GML.

Using WFS in the form of GeoJSON as a standard technology requires JSON processing, which is challenging for mobile clients. For this reason we implemented a set of solutions to combine smaller vector parts obtained through WFS, which allows our approach to keep the working system optimized.

B. GeoData structure

Downloaded GeoJSON raw data are processed to obtain less complicated objects which can be used in further calculations. For this purpose we had to implement our own GeoJSON parser compatible with Android-based structures. The parser looks only for lines, multilines and polygons, which can represent the walls of a building in a floor plan. In the next step it divides all objects into separate line sections and builds ArrayList objects, which allow the data to be manipulated in an easy form.
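
The flattening step described here can be sketched as follows; the sample feature collection and the function name are ours, not the paper's.

```python
import json

# Flatten GeoJSON LineString/MultiLineString/Polygon features into
# individual wall segments ((x1, y1), (x2, y2)). Sample data is made up.
sample = json.loads("""{
  "type": "FeatureCollection",
  "features": [
    {"type": "Feature",
     "properties": {"kind": "wall"},
     "geometry": {"type": "LineString",
                  "coordinates": [[0, 0], [4, 0], [4, 3]]}},
    {"type": "Feature",
     "properties": {"kind": "wall"},
     "geometry": {"type": "Polygon",
                  "coordinates": [[[0, 0], [2, 0], [2, 2], [0, 0]]]}}
  ]
}""")

def to_segments(feature_collection):
    """Return a flat list of ((x1, y1), (x2, y2)) segments."""
    segments = []
    for feat in feature_collection["features"]:
        geom = feat["geometry"]
        if geom["type"] == "LineString":
            rings = [geom["coordinates"]]
        elif geom["type"] in ("MultiLineString", "Polygon"):
            rings = geom["coordinates"]          # list of line strings / rings
        else:
            continue                             # points etc. are skipped
        for ring in rings:
            for a, b in zip(ring, ring[1:]):
                segments.append((tuple(a), tuple(b)))
    return segments

segs = to_segments(sample)
print(len(segs))    # 2 from the LineString + 3 from the Polygon ring = 5
```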

During position processing, we also store location data temporarily. For this we use HashMap objects, which capture each broadcast position with a timestamp. In later parts of the algorithms this allows us to compare the actual position with its past values.

Another type of geodata structure used in our algorithms is the representation of an extended point object. From WFS data we obtain coordinates in the WGS 84 system, which cannot be used directly in mathematical formulas. Each coordinate should be transformed before calculations and stored separately. For this reason we use a point object implementation where we store the coordinates of each point in different formats (longitude, latitude and their UTM representation).

C. Data filtering

Each floor plan in the form of vector data can contain a large amount of data and linear objects. To improve the performance of the algorithms, we process only GeoJSON data from a well-defined bounding box area in the close neighborhood of the actual position. This allows us to reduce the vector data.

As a part of data filtering, we use algorithms that read each parameter of the GeoJSON exactly once. At this stage we keep only nodes with defined parameter descriptions which point to objects like walls.

The next step in data filtering is sorting all structures and choosing only the objects closest to the actual position. This gives a small set of objects around the current position. For this purpose we used an optimized algorithm for calculating the shortest distance from a point to a line (1).

Equation (1) plays the role of a geofencing method in our approach. Using it, we compose the most effective set of objects, which are examined in the next steps of the running algorithm.
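
Since equation (1) is not given explicitly in the text, the following is one common closed form for the shortest distance from a point to a line segment that could play this geofencing role; the exact formula the authors use is an assumption on our part.

```python
def point_segment_distance(px, py, ax, ay, bx, by):
    """Shortest distance from point P to segment AB in plane coordinates."""
    abx, aby = bx - ax, by - ay
    apx, apy = px - ax, py - ay
    ab2 = abx * abx + aby * aby
    if ab2 == 0:                         # degenerate segment: A == B
        return (apx * apx + apy * apy) ** 0.5
    # Projection parameter clamped to [0, 1] keeps the foot on the segment.
    t = max(0.0, min(1.0, (apx * abx + apy * aby) / ab2))
    cx, cy = ax + t * abx, ay + t * aby  # closest point on the segment
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

print(point_segment_distance(0, 1, -1, 0, 1, 0))   # 1.0: point above the middle
print(point_segment_distance(3, 0, -1, 0, 1, 0))   # 2.0: beyond endpoint B
```

Keeping only segments whose distance falls below a chosen radius yields the reduced object set described in the text.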

D. Algorithms

First of all, in the algorithms used after parsing the GeoJSON raw data, we transform the coordinates of each line section and point into coordinate systems which simplify all mathematical and numerical calculations. The consistent adoption of WGS 84 creates some difficulties with the integration of the floor plan data, because a transformation to mathematical coordinates was necessary for further calculations.

For that we use extended point objects where we store geographical coordinates together with their corresponding UTM representation. We also implemented our own algorithms for coordinate transformation, which run only once to increase the performance of the running algorithm.

The main algorithmic part of our solution consists of decision methods. They allow us to determine whether two corresponding positions received within a minimum time interval are on the same side of a line or not. This represents the behavior of the user location when it is close to a wall. For this we implement a sign calculation as in the math library: we use the sign of the determinant of the vectors (AB, AM), where M(X, Y) is the query point:

P = sign((Bx - Ax) · (Y - Ay) - (By - Ay) · (X - Ax))        (2)

If P in (2) is 0, the point lies on the line; +1 indicates one side and -1 the other. These values are tested for the whole set of line sections filtered in the previous steps. A change in the value of P within a short interval shows that the position has jumped to the other side of the line. In a real situation this means that the user's location has quickly changed to the other side of a wall, which in real life is impossible in most cases.

If the test is negative, a new position has to be calculated. For that we implement an algorithm which calculates an average position from the last received position and a virtual value, which is the position nearest to the old one but on the correct side of the line. This approach allows us to determine the most accurate approximation of a position in these cases.
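
The side test and one possible correction in the spirit of this description can be sketched as follows; the function names and the mirror-and-average correction are our assumptions, not the authors' exact algorithm.

```python
# P = sign of the determinant of (AB, AM) tells which side of line AB the
# point M lies on; a sign flip between consecutive fixes flags an
# impossible jump through the wall.

def side_of_line(ax, ay, bx, by, x, y):
    d = (bx - ax) * (y - ay) - (by - ay) * (x - ax)
    return (d > 0) - (d < 0)   # +1, -1 or 0, like sign()

# Wall along the x-axis from A(0, 0) to B(4, 0).
prev_fix = (2.0, 0.8)          # previous Wi-Fi position, above the wall
new_fix = (2.1, -0.6)          # new position, suddenly below the wall

flipped = side_of_line(0, 0, 4, 0, *prev_fix) != side_of_line(0, 0, 4, 0, *new_fix)
print(flipped)                 # True: reject or correct this fix

if flipped:
    # One possible correction: mirror the new fix back across this wall
    # and average it with the last accepted fix (our simplification).
    mirrored = (new_fix[0], -new_fix[1])
    corrected = ((prev_fix[0] + mirrored[0]) / 2,
                 (prev_fix[1] + mirrored[1]) / 2)
    print(corrected)           # stays on the original side of the wall
```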

IV. TESTS & EVALUATION

The experiences and practical tests carried out with the implementation of the prototype system have demonstrated the feasibility of the major algorithmic solution. As a result of our previous experiments with Wi-Fi positioning, we found that the accuracy of a Wi-Fi fingerprinting-based approach alone is often too low for precise indoor applications. The conducted tests allowed us to verify our approach of joining fingerprinting and vector data.


For this reason we made several test sets in one real object (one building) but in different places. We chose locations in rooms close to a wall and in corners where two or more lines (walls) intersect. We also chose one location in open space where the nearest wall is more than 5 meters from the real location. For test purposes we implemented a module for data capture which stores the location in each epoch together with the corresponding vector data description, in order to make the evaluation. All results are presented in Table I.

TABLE I. RESULTS OF TESTS

Set no.   Set description          Right position from Wi-Fi [%]   Right position Wi-Fi + WFS [%]
1         Open space - 378 pos.    98%                             98%
2         Long wall - 548 pos.     82%                             86%
3         Short wall - 231 pos.    79%                             85%
4         Corner - 432 pos.        68%                             69%

All tests were made over long periods of time (more than 10 minutes) in static positions at each point. We compared the calculated position coming from the Wi-Fi fingerprint module with the position corrected by our algorithms. In each case the application of our methods gives positive results (except for open space, where no changes occur), but the changes are not significant. This may be a result of the chosen object, where the rooms are relatively small. In this case further tests and algorithm improvements should be made.

V. RELATED WORK

A. Crumbs

The indoor positioning system used for obtaining positioning data was also used in the frame of the EUREKA project "CRUMBS: Crumbs, Places and Augmented Reality in Social Network". For the purpose of location module development in the project, the core function of the fingerprinting model was implemented and tested in real environments. Floor plans were used to visualize the user position on the map. Using only WMS data, we observed the need to implement vector data to provide more reliable location information.

B. HortiGIS mobile

For the purposes of the European Space Agency project "HortiSat: Integrated Satellite Application for High-Value Horticultural Production", a mobile GIS application was developed that retrieved WMS and WFS data. In that case we used only GPS location data, and mostly WMS services were utilized to present and distribute different geospatial information for horticulture users.

VI. CONCLUSIONS AND FUTURE WORK

In this paper a novel approach is presented to integrate technologies for indoor location-based services with vector geodata obtained from GeoServer WFS streams. We introduced a decentralized system infrastructure providing explicitly modeled data about the building geometry and positioning. Floor plan data are offered via open standards, namely WFS and GeoJSON, to achieve high interoperability of the system with Android devices. On the client side, floor plan data are combined with map data for visualizing indoor locations in a highly precise, integrated manner. In addition, information for positioning is exploited at the client side for indoor positioning with different technologies.

The decision to follow the WFS and Wi-Fi fingerprinting positioning approach requires higher effort for creating fat clients with a full set of processing algorithms. In addition, the processing of floor plan data is performed on the client side. Using WFS in the GeoJSON format as a standard technology requires JSON processing, which is challenging for mobile clients. The advantage of the fat client approach is the higher flexibility of processing floor plans offered as vector data instead of raster images. In addition, positioning has to be performed on the client side anyway. With a fully implemented client, positioning and visualization can be implemented in a flexible way.

Approaches based on Wi-Fi need further improvements and should be combined with alternative approaches like 2D graph data, inertial positioning or other positioning methods. Crowd-sourcing approaches could also help in solving the formulated problem but need deeper exploration. The evaluation has shown that the major objectives are feasible and that WFS data can be adapted to the Wi-Fi fingerprinting model and combined into one precise localization system. In summary, the presented work is a first step towards the envisioned goal. Our future work will address the challenges of improving the approach.

ACKNOWLEDGMENT

This paper is based upon research made in the framework of the Celtic-Plus project “CRUMBS: Crumbs, Places and Augmented Reality in Social Network” supported by the Polish National Centre for Research and Development (Grant No. E! CP7-004/35/NCBiR/11).

REFERENCES

[1] G. Dedes and A. Dempster, "Indoor gps positioning - challenges and opportunities", in Vehicular Technology Conference, 2005. VTC-2005-Fall. 2005 IEEE 62nd, vol. 1, Sept., 2005, pp. 412 - 415.

[2] C. Nagel, T. Becker, R. Kaden, Ki-Joune Li, J. Lee, T. H. Kolbe "Requirements and Space-Event Modeling for Indoor Navigation", OGC 10-191r1, OpenGIS® Discussion Paper

[3] H. Lepprakoski, S. Tikkinen, A. Perttula, and J. Takala, "Comparison of indoor positioning algorithms using wlan fingerprints", in Proceedings of the European Navigation Conference Global Navigation Satellite Systems (ENC-GNSS 2009)

[4] M. Mabrouk, OpenGIS location services (openls): Core services," Open Geospatial Consortium Inc., Tech. Rep. OGC 07-074, version 1.2, 2008.

[5] Gallagher T, Li B, Dempster AG, Rizos C (2010) Database updating through user feedback in fingerprint-based Wi-Fi location systems. In: Proceedings of positioning indoor location based service. pp 1–8


- chapter 9 -

Computing & Processing (Hardware/Software)


2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31st October 2013

On-board navigation system for smartphones

Mauricio César Togneri

Institute of Services Science

University of Geneva

Switzerland

[email protected]

Michel Deriaz

Institute of Services Science

University of Geneva

Switzerland

[email protected]

Abstract—Several mobile solutions offer the possibility to download maps for offline use at any moment. However, most of the time a connection to an external server is still needed in order to calculate routes and navigate. This represents an issue when traveling abroad due to roaming costs. In this paper, we propose a solution to this problem through an engine that stores and manages OpenStreetMap’s data to consult points of interest, calculate routes and navigate without requiring any connection. The software manages indoor and outdoor information to provide a full navigation service that works in both environments. Therefore, the same system allows navigating on a highway by car and provides indoor navigation for museums, hospitals and airports, among others. The result is an on-board engine for smartphones that provides indoor and outdoor navigation services without requiring an Internet connection.

Keywords: on-board, navigation, indoor, outdoor, smartphones

I. INTRODUCTION

Nowadays we can find several web mapping services, among which we can highlight Google Maps1, Bing Maps2 and Nokia Here3. All these solutions also provide online navigation services and represent a very important tool in several situations. However, we may encounter situations with specific constraints where an Internet connection is not allowed or guaranteed. In this case we cannot rely on online services and we are forced to use a solution whose navigation services work offline.

Another important aspect of the web mapping services is their large coverage, mostly at the worldwide level. This characteristic allows us to take a look at almost every corner of the world and calculate routes between two points that are thousands of kilometers away from each other. Nonetheless, most of the biggest providers have a closed source of information that we cannot change or access freely. There are some exceptions, like MapShare4 from TomTom or MapMaker5 from Google, that allow users to modify certain parts of the map. However, users do not have rights over the edited maps and all contributions become property of the companies (map information will remain proprietary and not free).

1 Google Maps, http://www.google.com/maps

2 Bing Maps, http://www.bing.com/maps

3 Nokia Here, http://www.here.com

4 TomTom MapShare, www.tomtom.com/mapshare

5 Google MapMaker, http://www.google.com/mapmaker

One exception to this problem is OpenStreetMap6, a collaborative project to create a free editable map of the world that provides geographical data to anyone. Thanks to this project, users can freely access a world map, modify it and create their own maps.

Another aspect of navigation systems is the availability of indoor maps. This topic is relatively new and most solutions do not provide large coverage for indoor navigation. Although it is possible to find indoor services in important places (e.g., airports, big shopping malls, etc.), users still rely on providers to have access to indoor navigation.

Therefore, our goal is to create a navigation system that uses a source of information that is free, continuously growing and easy to modify; a system that works offline and allows navigating both indoors and outdoors. In this paper we present a solution that takes advantage of OpenStreetMap’s data and format to create a navigation system for smartphones. This approach solves the problem of service unavailability that users can face due to a lack of Internet connection, and allows indoor and outdoor navigation.

The result is a generic navigation system that can be integrated in different situations, for example:

For touristic purposes, in a guide application to explore a new city (offline feature).

As a customized navigation system for museums, hospitals, airports, etc. (indoor feature).

Customized car navigation for companies to control their fleet (personalization feature).

As a route planner for emergency situations (high availability feature).

The remaining sections are structured as follows. Section II provides an overview of the related work in the area of mobile navigation systems. Section III describes the main architecture and implementation of the module. In Section IV we present an example application that shows the services provided by the system. Finally we present our conclusions and future work in Section V.

6 OpenStreetMap, http://www.openstreetmap.org


Figure 1. System architecture

II. RELATED WORK

From a market perspective, most of the mapping solutions offer the possibility to download maps for offline use at any moment. The best known example is Google Maps and its mobile application7, which allows users to download maps to the phone. However, while this can help us to locate ourselves on the map, it cannot provide offline navigation services, because a request to the server still needs to be made.

There are some mobile solutions, such as Sygic: GPS Navigation & Maps8 or ROUTE 66 Maps + Navigation9, which allow downloading maps and calculating routes offline. Even so, those kinds of applications are closed, not free, and users have no control over the map they are using. They are allowed to report bugs or problems on the roads, but it is not guaranteed that the changes will be applied.

Some other mobile applications, such as Navfree: Free GPS Navigation10 and OsmAnd Maps & Navigation11, solve the previous problem. Both are good examples of free applications that use OpenStreetMap as a source of maps. However, these applications do not offer indoor navigation.

From a research perspective, Jiang, Fang, Yao and Wang [1] present a full infrastructure to deploy an indoor and outdoor navigation system. However, the model relies on a network architecture that uses servers to provide the navigation services. This solution also uses a specific handheld device, which makes it difficult to implement the system in a real situation.

Moreover, in the work of Li and Gong [2] we find another attempt to create a system that integrates indoor and outdoor navigation. Nevertheless, we encounter the same problem of server connection dependency. In this case the system uses the Google Maps API to acquire outdoor routes, and a local server to provide the product querying services and the indoor route calculation.

The novelty of our work is a software module called NaviMod (Navigation Module) that solves all the previous problems. In other words, it is a system that works offline, uses an open, free and customizable source of maps and allows both indoor and outdoor navigation.

III. ARCHITECTURE AND IMPLEMENTATION

The entire system consists of an Android library, which means that it can be integrated in a variety of devices, such as smartphones, tablets, smart watches and the future Google Glass among others.

A. Requirements

The system assumes that the final application has access to a position provider, because the navigation system requires the user’s location in order to perform certain navigation services.

7 Google Maps Mobile, http://www.google.com/mobile/maps

8 Sygic: GPS Navigation & Maps, http://www.sygic.com

9 ROUTE 66 Maps + Navigation, http://www.66.com

10 Navfree, http://www.navmii.com/gpsnavigation

11 OsmAnd Maps & Navigation, http://www.osmand.net

The final application acquires the user’s location from the position provider and sends it to the navigation module, as shown in Figure 1. This external module provides the device’s position using the World Geodetic System, revision WGS 84. Inside the navigation module, all points on the map, as well as the user’s position, are represented using latitude, longitude and altitude following the specified coordinate system. Therefore, every provider should be compatible with this system in order to be used as a valid position provider for the navigation module.

B. Map management

The system provides the necessary tools to convert the source maps from OpenStreetMap (i.e., files with the osm extension) into a database. Hence, users can customize their applications by adding personalized maps. There are two types of scenarios:

Outdoor maps: OpenStreetMap provides outdoor maps for the whole world, so users can download specific regions and add them to their applications as exchangeable maps. For example, a user can download the map of a specific city, a country, a continent, etc. The module is able to manage several maps at the same time. After downloading a specific map, users are allowed to perform local modifications that they do not want to upload to OpenStreetMap’s servers. For example, users can locally modify a map to adapt it to a certain kind of social event (e.g., marathons, conferences, expositions, etc.) in order to improve the navigation services. These kinds of modifications are not meant to be uploaded to the servers since they apply only during a short period of time.

Indoor maps: Since OpenStreetMap does not provide indoor maps, users have to create their own maps for specific buildings. Unlike the research made by Gotlib, Gnat, and Marciniak [3], we decided not to use a


Figure 2. An indoor map design using JOSM

<node lat='46.1747152' lon='6.1273034'>

<tag k='ele' v='430' />

<tag k='amenity' v='toilets' />

<tag k='male' v='yes' />

<tag k='wheelchair' v='yes' />

</node>

<node lat='46.1766711' lon='6.1397371'>

<tag k='ele' v='450' />

<tag k='machine' v='printer' />

<tag k='colour' v='yes' />

</node>

complex format but to adapt the proposal for indoor mapping for OpenStreetMap [4] for future compatibility. These maps can be shared between users in order to collaborate and improve the navigation service. Once the map is finished, the tool converts the source map into a database containing the indoor map and the information related to it (i.e., points of interest). Figure 2 shows an example of an indoor map design for the second floor of the University Computing Center at the University of Geneva. The network map was designed using the JOSM12 tool, which can be used to edit outdoor maps as well. The map contains all the corridors and their connections. Each point of interest (e.g., offices, classrooms, etc.) is represented by a node in the map.

The result of the conversion of the source map is a directed graph that contains all the points and their connections inside the map. This network is stored in a database, called the “map network”, which is used to reconstruct the graph and calculate routes.

The reason why the module does not use the osm source files directly is that the database approach offers better performance when accessing the data, since it eliminates all the unnecessary information that is not used by the navigation system.
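The conversion step can be sketched as follows. This is an illustrative reconstruction, not the authors' NaviMod code: the `node`/`way`/`nd` element names follow the real OSM XML format, while the SQLite table layout and the sample data are assumptions made for the example.

```python
# Sketch: convert an OSM-style XML fragment into a directed "map network"
# graph stored in SQLite (table layout and sample data are illustrative).
import sqlite3
import xml.etree.ElementTree as ET

OSM = """<osm>
  <node id='1' lat='46.1747' lon='6.1273'/>
  <node id='2' lat='46.1748' lon='6.1275'/>
  <node id='3' lat='46.1750' lon='6.1278'/>
  <way id='10'>
    <nd ref='1'/><nd ref='2'/><nd ref='3'/>
    <tag k='highway' v='footway'/>
  </way>
</osm>"""

def build_map_network(osm_xml):
    """Parse nodes and ways, keep only what routing needs, and store the
    resulting directed graph (both travel directions) in a database."""
    root = ET.fromstring(osm_xml)
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE nodes (id INTEGER PRIMARY KEY, lat REAL, lon REAL)")
    db.execute("CREATE TABLE links (src INTEGER, dst INTEGER)")
    for n in root.iter("node"):
        db.execute("INSERT INTO nodes VALUES (?,?,?)",
                   (int(n.get("id")), float(n.get("lat")), float(n.get("lon"))))
    for way in root.iter("way"):
        refs = [int(nd.get("ref")) for nd in way.iter("nd")]
        for a, b in zip(refs, refs[1:]):      # consecutive nodes are connected
            db.execute("INSERT INTO links VALUES (?,?)", (a, b))
            db.execute("INSERT INTO links VALUES (?,?)", (b, a))  # two-way by default
    db.commit()
    return db

db = build_map_network(OSM)
print(db.execute("SELECT COUNT(*) FROM nodes").fetchone()[0])  # -> 3
print(db.execute("SELECT COUNT(*) FROM links").fetchone()[0])  # -> 4
```

Dropping the unused tags at conversion time is what keeps the database small and fast to query at routing time.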

C. Indoor maps

As previously mentioned, the indoor maps are created using the official OpenStreetMap source format. Therefore, the geographical coordinates related to a node or a point of interest are represented using the same format as the outdoor maps, keeping a strong compatibility between both environments.

OpenStreetMap’s format offers a free tagging system that allows the map to contain unlimited data about its elements. A tag consists of a key and a value that are used to describe elements. The community has agreed on a set of standard tags to represent the most common points of interest in a map (e.g., offices, toilets, cafeterias, rooms, elevators, stairs, etc.). Hence, indoor maps can use the same set of tags to represent indoor

12 JOSM, http://josm.openstreetmap.de

elements as well. Figure 3 represents a point of interest using standard tags in OpenStreetMap.

This set of standard tags does not take into account several elements that can be found in specific indoor environments such as hospitals, museums or airports. However, thanks to the free tagging system, users can define their own tags and create any kind of point of interest needed. For example, figure 4 represents a printer as a point of interest. This is a common element that can often be found in offices and is not covered by the standard set of tags of OpenStreetMap. However, the rules to describe indoor maps are flexible enough to allow users to define and create their own elements for new scenarios.
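As an illustration of the free tagging system, a few lines suffice to read a node's position and tags. The `node`/`tag` element names are OpenStreetMap's real format (the custom printer node from Figure 4); the helper itself is a sketch and not part of the paper's module.

```python
# Sketch: read an OSM node's coordinates and its free-form key/value tags.
import xml.etree.ElementTree as ET

NODE = """<node lat='46.1766711' lon='6.1397371'>
  <tag k='ele' v='450'/>
  <tag k='machine' v='printer'/>
  <tag k='colour' v='yes'/>
</node>"""

def read_poi(xml_text):
    node = ET.fromstring(xml_text)
    tags = {t.get("k"): t.get("v") for t in node.iter("tag")}
    return (float(node.get("lat")), float(node.get("lon"))), tags

pos, tags = read_poi(NODE)
print(tags["machine"])  # -> printer (a custom, user-defined tag)
```

Because tags are just key/value pairs, a custom element like a printer needs no schema change: the application simply looks up the keys it understands.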

D. Navigation instructions

The navigation module is able to provide turn-by-turn navigation instructions in order to follow the calculated route. In order to accomplish this task, the module needs the current user’s position to perform a technique called map matching.

This technique is used to merge the data from the position provider and the map network to estimate the user’s location that best matches the calculated route. This technique is necessary because the location acquired from the position provider is subject to errors. It offers a smoother transition between the different positions acquired from the position provider and avoids unexpected variations in the position.

Once the user’s position is matched with the map, the navigation module is able to calculate the next turn that the user needs to perform, as well as the distance to it (e.g., turn left in 15 meters). The navigation module informs the main application about this event, and the application is in charge of displaying it on the screen.
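A minimal map-matching sketch, following the description above: snap a noisy position to the closest point on the calculated route's segments. Plain x/y coordinates and the sample route are assumptions for brevity; real code would work on WGS 84 positions.

```python
# Sketch: project a noisy position onto the nearest segment of the route.
def snap_to_route(p, route):
    """Return the point on the polyline `route` closest to position `p`."""
    best, best_d2 = None, float("inf")
    for (ax, ay), (bx, by) in zip(route, route[1:]):
        vx, vy = bx - ax, by - ay
        seg_len2 = vx * vx + vy * vy
        # parameter of the orthogonal projection, clamped to the segment
        t = 0.0 if seg_len2 == 0 else max(0.0, min(1.0,
            ((p[0] - ax) * vx + (p[1] - ay) * vy) / seg_len2))
        qx, qy = ax + t * vx, ay + t * vy
        d2 = (p[0] - qx) ** 2 + (p[1] - qy) ** 2
        if d2 < best_d2:
            best, best_d2 = (qx, qy), d2
    return best

route = [(0, 0), (10, 0), (10, 10)]      # an L-shaped route
print(snap_to_route((4.0, 1.5), route))  # -> (4.0, 0.0): snapped to the first leg
```

Clamping the projection parameter to [0, 1] keeps the matched point on the route even when the raw fix drifts past a corner, which is what produces the smooth transitions mentioned above.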

E. Route algorithm

The module is able to calculate routes between two points that can be separated by a few meters or hundreds of

Figure 3. A point of interest using standard tags in OpenStreetMap

Figure 4. A custom point of interest using OpenStreetMap’s format


kilometers. The route algorithm allows setting the mode of traveling (i.e., by car, by foot, by bike or by wheelchair).

The algorithm chosen by the module to calculate routes is A*, a generalization of the Dijkstra algorithm, as explained in [5], [6] and [7]. The main difference is that A* uses a heuristic function (also called h(x)) in order to speed the algorithm up.

The A* algorithm offers better performance than the Dijkstra algorithm thanks to the heuristic function, which “guides” the search toward the target node inside the network. The currently implemented version of the A* algorithm is able to use two heuristics:

Distance (Euclidean): for traveling by foot, by bike and by wheelchair to calculate the shortest route

Time: for traveling by car to calculate the fastest route
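An illustrative A* search over a small adjacency-list graph with a straight-line distance heuristic, as used for foot, bike and wheelchair routing. The x/y coordinates and the tiny graph are invented for the example; a real implementation would use great-circle distances on WGS 84 coordinates, and the car mode would use edge travel times with a speed-based heuristic instead.

```python
# Sketch: A* with an admissible straight-line distance heuristic.
import heapq
import math

def astar(graph, coords, start, goal):
    """graph: node -> [(neighbor, edge_cost)]; coords: node -> (x, y)."""
    def h(n):  # admissible heuristic: straight-line distance to the goal
        return math.dist(coords[n], coords[goal])
    open_set = [(h(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path, g
        for nbr, cost in graph.get(node, []):
            ng = g + cost
            if ng < best_g.get(nbr, float("inf")):
                best_g[nbr] = ng
                heapq.heappush(open_set, (ng + h(nbr), ng, nbr, path + [nbr]))
    return None, float("inf")

coords = {"A": (0, 0), "B": (1, 0), "C": (1, 1), "D": (2, 1)}
graph = {"A": [("B", 1.0), ("C", 2.0)], "B": [("D", 2.0)], "C": [("D", 2.0)]}
path, cost = astar(graph, coords, "A", "D")
print(path, cost)  # -> ['A', 'B', 'D'] 3.0
```

Because the straight-line distance never overestimates the remaining cost, A* still returns the optimal route while expanding fewer nodes than Dijkstra.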

Regarding performance, we encountered a few problems, not in the execution of the algorithm itself but in loading the information from the database. We faced some latency problems when managing outdoor maps, since the module needs to handle a database of around 500,000 records for a single city.

In Table I, we show different measurements related to the performance of the algorithm. Each row contains the average over different executions of the calculation of a route. We chose four scenarios that calculate a route between two points separated by 580 m, 790 m, 2 km and 5 km respectively. The results show that most of the time is spent loading the information from the database, and only a small part running the algorithm to process the loaded nodes and links.

We can also see that as we increase the distance between the start and end points, the ratio between processed and loaded nodes changes. Considering the last scenario, a total of 13300 nodes were processed out of the 18300 nodes that were loaded into memory. This means that 72.6% of the loaded nodes were actually used in the computation of the algorithm. Quantifying the number of nodes in surface area, 13300 nodes correspond to 10.9 km² and 18300 nodes to 15 km².

IV. RESULTS

Since the system consists only of an Android library, we created an example application in order to show all the services that the module can provide. Specifically, it consists of an Android application for smartphones and tablets.

As a position provider we used an internal module called GPM (Global Positioning Module) [8], a hybrid positioning framework for mobile devices which provides the user’s location, both indoor and outdoor, to the final application. However, if the final application is meant to be used only in outdoor environments, the position provider could use the GPS signal alone.

TABLE I. ALGORITHM PERFORMANCE

The example application shows the user’s current location and allows calculating routes between two points. The user can select as a start or end point: his current position, a point in a map (touching the screen) or a point of interest from the catalog. In this case we used the Google Maps viewer to show that the navigation module is independent of the map viewer of the final application. It means that the navigation module only provides the services and it is the responsibility of the final application to show these results (in a 2D or 3D map, using augmented reality, voice instructions, etc.).

A. Outdoor

In this case the application works as a standard navigation system (e.g., TomTom) that allows navigating in the city. The example application contains the network map of the city of Geneva. However, if the application is meant to be used in another city, the user just needs to generate the map of the correct region and add it to the application. Figure 5 shows:

A route by foot from the current user’s position to a point of interest in the city. In this case a static route (with the total distance and the estimated time) is displayed to the user, who can accept it and start the navigation or cancel it.

The user following another route by car. In green the path behind (already done) and in red the path ahead. In this case, the navigation module will monitor the user’s position, perform the map matching and provide the correct turn-by-turn directions in real time to guide the user to his destination.

B. Indoor

The example application also contains an indoor map of a building so the user is able to navigate within it. The map network of the building also contains information about the points of interest inside of it (e.g., offices, classrooms, cafeterias, toilets, etc.).

Therefore, a user who enters the building for the first time and needs to reach a specific room can use the application to find it in the points of interest catalog and navigate to it.

Nodes loaded / processed    Total time    Database time      A* time
640 / 110                   165 ms        110 ms (67%)       55 ms (33%)
900 / 140                   260 ms        190 ms (73%)       70 ms (27%)
4800 / 2300                 1240 ms       1070 ms (86%)      170 ms (14%)
18300 / 13300               5730 ms       5200 ms (91%)      530 ms (9%)


Figure 5. Example of outdoor navigation

Figure 6. Example of indoor navigation

The final application is able to show the correct floor plan at each moment using the user’s altitude and the map network. Figure 6 shows an indoor path between two rooms and the user navigating by foot through the same route. In this case the navigation system provides turn-by-turn directions specific to an indoor environment (e.g., no street names or maximum speed indications are shown).

V. CONCLUSIONS AND FUTURE WORK

Thanks to the module implementing all the navigation services, the final application remains small and is limited to showing the graphical map, receiving the input parameters from the user and displaying the results provided by the navigation system.

On the other hand, OpenStreetMap’s data does not have enough information to perform geocoding in an outdoor environment. Therefore, it is not possible to find the geographic coordinates (often expressed as latitude and longitude) associated with other geographic data, such as street addresses or postal codes. Due to this problem, for the moment the user is only able to select a starting point or destination by choosing a point of interest from the catalog, selecting a point on the map (touching the screen) or using his current position.

Another problem is that in the current version the user is only able to travel indoor-indoor or outdoor-outdoor. This means that it is not possible to travel from an outdoor position to an indoor place, or vice versa, because both environments are stored in two separate maps.

From the performance perspective, the route algorithm used needs to be improved in order to optimize the node processing and reduce the response time. Currently, solutions such as Navfree or OsmAnd implement algorithms that compute the route in half the time. This is an important point to take into account for future improvements of the module.

Additionally, we are looking into creating map connections to calculate routes between indoor and outdoor environments. Furthermore, we plan to work on the taxonomy of indoor maps to offer a better indoor navigation service.

ACKNOWLEDGMENT

This work is supported by the AAL Virgilius Project (aal-2011-4-046).

REFERENCES

[1] Yali Jiang, Yuan Fang, Chunlong Yao, and Zhisen Wang, “A design of indoor & outdoor navigation system”, Proceedings of ICCTA 2011.

[2] Hui Li, and Xiangyang Gong, “An approach to integrate outdoor and indoor maps for books navigation on the intelligent mobile device” IEEE 3rd International Conference on Communication Software and Networks (ICCSN), 2011.

[3] Dariusz Gotlib, Milosz Gnat, and Jacek Marciniak, “The Research on Cartographical Indoor Presentation and Indoor Route Modeling for Navigation Applications”, International Conference on Indoor Positioning and Indoor Navigation, IPIN 2012.

[4] Indoor mapping proposal for OpenStreetMap, as on October 2013, http://wiki.openstreetmap.org/wiki/Indoor_Mapping

[5] Ioannis Kaparias, and Michael G. H. Bell, “A Reliability-Based Dynamic Re-Routing Algorithm for In-Vehicle Navigation”, Annual Conference on Intelligent Transportation Systems 2010.

[6] Peter E. Hart, Nils J. Nilsson, and Bertram Raphael, “A Formal Basis for the Heuristic Determination of Minimum Cost Paths”, IEEE Transactions on Systems Science and Cybernetics, 1968.

[7] T.M. Rao, Sandeep Mitra, and James Zollweg, “Snow-Plow Route Planning using AI Search” 2011.

[8] Anja Bekkelien, and Michel Deriaz, “Hybrid Positioning Framework for Mobile Devices”, 2nd International Conference on Ubiquitous Positioning, Indoor Navigation, and Location Based Service (UPINLBS), 2012.


A Gyroscope Based Accurate Pedometer Algorithm

Sampath Jayalath
Department of Electrical and Computer Engineering
Sri Lanka Institute of Information Technology
Colombo, Sri Lanka
[email protected]

Nimsiri Abhayasinghe
Department of Electrical and Computer Engineering
Curtin University
Perth, Western Australia
[email protected]

Iain Murray
Department of Electrical and Computer Engineering
Curtin University
Perth, Western Australia
[email protected]

Abstract—Accurate step counting is important in pedometer-based indoor localization. Existing step detection techniques are not sufficiently accurate, especially at the low walking speeds that are commonly observed when navigating unfamiliar environments. This is more critical when vision impaired indoor navigation is considered, since vision impaired users have relatively low walking speeds. Almost all existing pedometer techniques use accelerometer data to identify steps, which is not very accurate at low walking speeds. This paper describes a gyroscope based pedometer algorithm implemented in a smartphone. The smartphone is placed in the trouser pocket, a usual carrying position of the mobile phone. The gyroscope sensor data is used for the identification of steps. The algorithm was designed to demand minimal computational resources so that it can be easily implemented on an embedded platform. Raw data from the sensor are filtered using a 6th order Butterworth filter for noise reduction. The result is then sent through a zero crossing detector which identifies the steps. A minimum delay between two consecutive zero crossings was used to avoid fluctuations being counted, and peak detection was used to validate steps. The algorithm has a calibration mode, in which the absolute minimum swing of the data is learnt to set the threshold. This approach demonstrated accuracies above 96% even at slow walking speeds on flat land, above 95% when walking up/down hills and above 90% when going up/down stairs. This supports the concept that the gyroscope can be used efficiently in step identification for indoor positioning and navigation systems.

Index Terms—pedometer algorithms; gyroscopic data; single-point sensors; step detection; localization and navigation; vision impaired navigation

I. INTRODUCTION

Accurate step counting is a critical parameter in pedometer-based indoor localization systems for improving their accuracy and reliability. Existing step detection techniques, both hardware and software, do not satisfactorily cater to the accuracies demanded by localization systems, especially at the low walking speeds observed in natural walking [1]-[3]. The situation may be worse when vision impaired indoor navigation is considered, especially in an unfamiliar environment. Most existing pedometers use accelerometer data to detect steps and are based on threshold detection [4], [5].

The pedometer algorithm discussed in this paper is based on the proposal by Abhayasinghe and Murray [6] of using gyroscopes for human gait identification in indoor localization. This research is part of an indoor navigation system for vision impaired people.

The performance of some existing pedometers is discussed in the “Background” section, whereas the novel gyroscope based pedometer algorithm and its performance are discussed in the “Step Detection Algorithm” and “Experimental Results” sections of this paper.

II. BACKGROUND

Jerome and Albright [1] have compared the performance of five commercially available talking pedometers with the involvement of 13 vision impaired adults and 10 senior adults, and observed that the step detection accuracy of all of them was poor (41−67%) while walking on flat land, and the situation was worse when ascending stairs (9−28%) or descending stairs (11−41%). Crouter et al. [2] have compared 10 commercially available electronic pedometers and confirmed that they underestimate steps in slow walking. Garcia et al. [3] have compared the performance of software pedometers and hardware pedometers and observed that both types are comparable at all walking speeds, and both have demonstrated poor accuracy at slow (58 to 98 steps·min−1) walking speeds: 20.5% ± 30% for the hardware pedometer and 10% ± 30% for the software pedometer.

Waqar et al. [4] have used an accelerometer based pedometer algorithm with a fixed threshold in their indoor positioning system. They have reported a mean accuracy of 86.67% over their 6 trials of 40 steps each, with a minimum accuracy of 82.5% and a maximum of 95%. The median accuracy was 85%.

A smartphone pedometer algorithm based on an accelerometer is discussed by Oner et al. [5]; their algorithm demonstrated sufficient accuracies at walking speeds higher than 90 beats per second (bps), but its performance degrades as speeds fall below 90 bps. Their algorithm over counted steps, and the error was approximately 20% at 80 bps, 60% at 70 bps and 90% at 60 bps.

Lim et al. [7] have proposed a foot mounted gyroscope based pedometer, but the authors have not mentioned the accuracy of their system. Further, they use force sensitive resistors (FSR) to detect the toe and heel contacts, and hence the accuracy of step detection should be higher, as they can easily detect the Initial Contact using the FSR.

Ayabe et al. [8] have examined the performance of some commercially available pedometers in stair climbing and bench stepping exercises and recorded that the pedometers could count steps with an error of ±5% at speeds of 80 to 120 steps·min−1. However, the accuracy was poor for low step sizes and lower stepping rates (> ±40% at 40 steps·min−1).

Most of the examples discussed here used accelerometer data to detect steps, and they perform poorly at slow walking speeds. The main reasons for this poor performance at low speeds are the static value (gravitational acceleration) present in the accelerometer reading, the slow response of the accelerometer, and the fact that most of these algorithms cannot adapt their threshold levels to suit the pace of walking. This raises the requirement for an accurate step detection technique at slow walking speeds.

III. STEP DETECTION ALGORITHM

A. Introduction

The work presented in this paper is based on the proposal made in [6] that gyroscopic data can be used exclusively for gait recognition in indoor navigation applications. The authors have proposed that the output of a single point gyroscope sensor located in the pants pocket gives sufficient information to track the movement of the thigh and hence detect the steps.

B. Relationship Between Gyroscopic Data and Movement of the Thigh

A stride cycle is measured from the Initial Contact of one heel to the next Initial Contact of the same heel [9]. At the Initial Contact, the deflection of the thigh in the forward direction is at a maximum. Fig. 1 shows the orientation of the thigh computed using gyroscopic data, together with the low-pass filtered (6th order Butterworth, cutoff frequency 5 Hz) gyroscopic X axis reading. Initial Contact points and the stride cycle identified based on the orientation are marked on the graph. The initial orientation, when the leg is at rest, was calculated by fusing accelerometer and compass data. For this computation, the static value of the gyroscopic data was removed by subtracting the average.
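The orientation trace in Fig. 1 can, in principle, be reproduced by integrating the bias-corrected gyro-X rate. The sketch below is illustrative only: the sample rate, initial angle and data are invented, and the paper obtains the initial orientation by fusing accelerometer and compass data rather than assuming it.

```python
# Sketch: integrate gyro-X rate (with the average removed as the static
# bias) to obtain the thigh orientation over time.
import math

def integrate_orientation(rates, fs, initial_deg):
    """rates: gyro-X angular rates in rad/s; returns angles in degrees."""
    bias = sum(rates) / len(rates)               # remove the static value
    angle = initial_deg
    angles = []
    for w in rates:
        angle += math.degrees((w - bias) / fs)   # rectangular integration
        angles.append(angle)
    return angles

# a constant-bias input should leave the orientation at the initial angle
angles = integrate_orientation([0.02] * 100, fs=100.0, initial_deg=10.0)
print(round(angles[-1], 6))  # -> 10.0
```

Subtracting the average is only adequate over short, mostly-stationary windows; over longer walks the residual bias would accumulate as drift.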

It can be clearly seen that the filtered gyroscopic data is close to zero at the Initial Contact point of the particular leg and has a negative gradient. Hence, the period from one negative-gradient zero-crossing point of the filtered gyroscope reading to the next is a stride cycle, as shown in the figure.

[Figure: thigh orientation (degrees) and filtered gyro-X reading (×7 rad/s) plotted against time (2−5 s), with the stride cycle and Initial Contact points marked; legend: Orientation of Thigh, Gyro-X Reading.]

Figure 1. Orientation of the thigh with the filtered gyroscope X axis reading when walking on flat land

It was also observed that the negative-gradient zero crossing corresponds to the Initial Contact of that leg when walking on stairs and on an inclined plane too. Therefore, it is clear that zero-crossing detection of filtered gyroscopic data may be used to detect the stride cycle, and hence the steps, even if the person is walking on stairs or on an inclined surface.

In line with these observations, the device is assumed to be in vertical placement, where forward and backward rotation of the thigh is read as the gyroscopic X reading. Hence, the real-time processing is limited to gyro-X only.

C. Pre Processing of Data

Before attempting to identify zero crossings, the gyroscopic X-axis data is filtered with a 6th-order discrete Butterworth low-pass filter with a cutoff frequency of 3 Hz. 3 Hz was selected as the cutoff frequency because the mean speed of fast gait is in the range of 2.5 steps per second [10]. The cutoff frequency was lowered as much as possible for better smoothness of the waveform, so that the unwanted oscillations around zero are minimal but the stride cycle is still visible in the waveform.
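The pre-processing filter can be sketched in pure Python as three cascaded second-order low-pass sections. The section Q values are the standard ones for a 6th-order Butterworth response, and the biquad coefficients follow the widely used bilinear-transform ("Audio EQ Cookbook") formulas; this is an illustrative stand-in, not the authors' Matlab/iPhone implementation, and the function names are our own.

```python
import math

def lowpass_biquad(fc, fs, q):
    """Second-order low-pass coefficients (bilinear transform)."""
    w0 = 2.0 * math.pi * fc / fs
    alpha = math.sin(w0) / (2.0 * q)
    cosw0 = math.cos(w0)
    b0 = (1.0 - cosw0) / 2.0
    b1 = 1.0 - cosw0
    b2 = b0
    a0 = 1.0 + alpha
    a1 = -2.0 * cosw0
    a2 = 1.0 - alpha
    return (b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0)

def filter_signal(x, fc=3.0, fs=100.0):
    """Apply a 6th-order Butterworth low-pass as three cascaded biquads."""
    butterworth_q = (0.5176, 0.7071, 1.9319)   # section Qs for 6th order
    y = list(x)
    for q in butterworth_q:
        b0, b1, b2, a1, a2 = lowpass_biquad(fc, fs, q)
        x1 = x2 = y1 = y2 = 0.0
        out = []
        for s in y:
            v = b0 * s + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
            x2, x1 = x1, s
            y2, y1 = y1, v
            out.append(v)
        y = out
    return y
```

At fs = 100 Hz and fc = 3 Hz the cascade passes DC with unit gain and strongly attenuates components above a few hertz, which is what smooths out the oscillations around zero.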

D. Zero-Crossing Detector

A simple 2-point zero-crossing detection was used to simplify the algorithm. Both positive and negative zero crossings were detected by alternating the polarity of the zero-crossing detector, because the positive zero crossing corresponds to the starting point of Pre Swing of the particular leg, or the Initial Contact of the other leg. Hence, the total count of zero crossings is the number of steps the person has walked.

E. Avoiding False Detections

As indicated by the circle in Fig. 1, the filtered gyroscopic signal may cross zero with a negative gradient more than once during the period from Initial Contact to Loading Response. However, because this period lies within 0–10% of the gait cycle [9], a timeout mechanism was used to prevent these unwanted zero crossings from being detected. Once a zero crossing is detected, the zero-crossing detector remains disabled for 100 ms to avoid detecting these multiple zero crossings. 100 ms was selected as 15% of the stride cycle, assuming a step frequency of 1.5 steps per second for slow gait [10]. This time delay is 30% of the stride cycle of an average fast gait of 3 steps per second, and hence it will not disturb the detection of the next zero crossing of fast gait.

F. Validating the Detected Zero Crossings

A threshold detection mechanism was used in the algorithm to validate each zero crossing detected. As shown in Fig. 1, the gyroscopic reading reaches the corresponding peak after the zero-crossing point. However, in the area marked by the circle, the relative maximum is well below the peak of the signal, and that relative maximum does not correspond to the middle of the swing of a leg; hence it needs to be eliminated. The algorithm includes a calibration mode in which the user has to walk at the slowest possible speed, so that the smallest deflection of the gyroscope signal is learnt by the algorithm. After detecting a zero crossing, the algorithm checks for the peak that follows the zero crossing and checks whether it is larger than the threshold. The counter is incremented only if the peak is larger than the threshold.
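The detection, timeout and validation steps of Sections III-D to III-F combine naturally into one counting loop. The sketch below operates on the pre-filtered gyro-X stream; the function and parameter names are assumptions of ours, and the threshold value would come from the calibration mode described above.

```python
def count_steps(filtered_gyro_x, fs=100.0, threshold=0.5, lockout_s=0.1):
    """Count steps via alternating-polarity 2-point zero crossings, with a
    100 ms lockout against multiple crossings and a peak-threshold check
    to validate each crossing."""
    steps = 0
    sign = -1                      # look for a negative-going crossing first
    lockout = int(lockout_s * fs)  # samples to ignore after a detection
    cooldown = 0
    i = 1
    n = len(filtered_gyro_x)
    while i < n:
        if cooldown > 0:           # detector disabled during the timeout
            cooldown -= 1
            i += 1
            continue
        prev, cur = filtered_gyro_x[i - 1], filtered_gyro_x[i]
        crossed = (prev > 0.0 >= cur) if sign < 0 else (prev < 0.0 <= cur)
        if crossed:
            # validate: follow the signal to the extremum after the crossing
            j = i
            while j + 1 < n and (filtered_gyro_x[j + 1] - filtered_gyro_x[j]) * sign > 0:
                j += 1
            if abs(filtered_gyro_x[j]) >= threshold:
                steps += 1
                sign = -sign       # next crossing has opposite polarity
                cooldown = lockout
        i += 1
    return steps
```

On a 1 Hz sinusoidal input sampled at 100 Hz (roughly one stride per second), the loop counts one step per half cycle, and raising the threshold above the signal's peak suppresses all counts, which mirrors the validation behaviour described above.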

G. The Step Detection Algorithm

A flow chart illustrating the step detection algorithm is depicted in Fig. 2. It should be noted that both positive and negative zero crossings are detected by the algorithm, and the polarity to be checked is toggled after each detection. However, the polarity toggling is not indicated in the figure, to reduce graphical complexity.

H. Implementation of the Algorithm

The algorithm was implemented in Matlab® for simulation purposes and, after confirming the outcomes of the algorithm using prerecorded data, it was implemented on an Apple iPhone 4S. During the implementation it was noticed that the algorithm could count the movements of the phone while in the hand, when placing the phone in the pocket before the trial and taking it out of the pocket after the trial. Because the Apple license does not allow use of some phone features [11], such as the ambient light sensor to detect placement in the pocket, a timeout mechanism and a manual correction were used at the beginning and at the end of the trial respectively. After pressing the start button, the application allows a timeout for the user to place the phone in the pocket. The algorithm starts detecting steps only after the timer has timed out. A manual decrement of the total count by one was done to compensate for the false count at the end, when the phone is taken out of the pocket.

IV. EXPERIMENTAL RESULTS

The simulations indicated that the accuracy of step counting of the algorithm on prerecorded data was 100%. The algorithm was tested in the real world for five different activities: walking on flat land, upstairs, downstairs, uphill and downhill, with the involvement of 5 male and 5 female volunteers. They were asked to place the phone vertically in the pants pocket and perform the relevant activity. The tests were conducted in two stages: first with normal walking speed and then with five different stepping rates (50, 75, 100, 125 and 150 steps·min−1). The actual number of steps that the subject traveled was counted for each trial by a note taker.

Figure 2. Flow Chart of the Step Detection Algorithm

Table I shows sample results of a single subject performing different activities at a normal stepping rate. In that set of trials, the algorithm showed above 95% accuracy in every activity.

Table II shows statistics of the actual number of steps, the number of steps counted by the algorithm and the accuracy in all trials. It can be seen that the algorithm has shown a minimum mean accuracy of 94.55% for going downstairs, and the minimum accuracy reported over all the trials is 90.91%, for stair climbing (both up and down). However, the minimum accuracy reported by the algorithm for walking on flat land is 96.00%, with a maximum of 100%. The algorithm has reported accuracies greater than 95% for walking on an inclined surface, with a mean accuracy of 97.17% for going down and 98.18% for going up.

The second set of experiments was conducted for walking on flat land and on stairs only, where the subjects were asked to walk at five stepping rates: 50, 75, 100, 125 and 150 steps·min−1. For walking on flat land, the minimum accuracy of 94.59% was reported at 75 steps·min−1, whereas the mean accuracy for that speed was 97.89%. The statistics are shown in Table III. However, the minimum accuracy reported at 50 steps·min−1 was 96%, and the accuracy was greater than 96% at all other stepping speeds.

The minimum accuracy reported in going up stairs and down stairs was 90.91%, where the total number of steps considered in each case was 11. Although this is the absolute minimum, the lowest mean accuracy reported when walking up stairs was 96.36%, at 75 and 125 steps·min−1. For walking down stairs, the lowest mean accuracy reported was 95.45%, for the stepping speeds of 50 and 125 steps·min−1.

V. DISCUSSION AND FUTURE WORK

Trials of walking on stairs had to be limited to 11 steps per trial due to the unavailability of long stairways. Due to this reason,

Table I
SAMPLE RESULTS OF ONE SUBJECT

Activity                      Actual No. of Steps   Steps Counted by Algorithm   Accuracy (%)
Walking slowly on flat land   27                    26                           96.30
Walking faster on flat land   49                    49                           100.00
Walking up stairs             11                    11                           100.00
Walking down stairs           11                    11                           100.00
Walking up hills              40                    40                           100.00
Walking down hills            43                    41                           95.35


Table II
STATISTICS OF THE PERFORMANCE OF THE ALGORITHM FOR DIFFERENT ACTIVITIES

                                                Actual No. of Steps   Steps Counted by Algorithm   Accuracy (%)
Activity                                        Mean    Var           Mean    Var                  Mean    Var     Min     Max
Walking slowly on flat land (<60 steps·min−1)   28.50   2.45          27.60   2.64                 96.82   1.16    96.00   100.00
Walking faster on flat land (>100 steps·min−1)  49.10   1.29          48.50   0.65                 98.80   1.73    96.08   100.00
Climbing up stairs                              11.00   0.00          10.70   0.21                 97.27   17.36   90.91   100.00
Climbing down stairs                            11.00   0.00          10.40   0.24                 94.55   19.83   90.91   100.00
Walking on inclined plane (up)                  43.30   2.01          42.50   1.45                 98.18   1.87    95.45   100.00
Walking on inclined plane (down)                42.20   1.36          41.00   1.20                 97.17   2.02    95.24   100.00

Table III
STATISTICS OF THE PERFORMANCE OF THE ALGORITHM FOR WALKING ON FLAT LAND WITH DIFFERENT STEPPING RATES

                   Actual No. of Steps   Steps Counted by Algorithm   Accuracy (%)
Stepping rate      Mean    Var           Mean    Var                  Mean    Var    Min     Max
50 steps·min−1     25.90   1.09          25.50   0.85                 98.49   3.43   96.00   100.00
75 steps·min−1     37.80   0.96          37.00   1.20                 97.89   2.58   94.59   100.00
100 steps·min−1    51.00   1.00          49.90   1.29                 97.85   1.89   96.00   100.00
125 steps·min−1    62.50   0.65          62.00   0.40                 99.21   0.63   98.39   100.00
150 steps·min−1    74.50   0.65          73.90   1.69                 98.92   0.66   97.26   100.00

the false count at the end of the trial is large as a percentage of the total number of steps. This is the main reason for the low accuracy. Although the number of steps will be small in a real application too, the phone will not be taken out of the pocket at the end of the staircase, and hence the aforementioned error count will not occur. In addition, the vendor restrictions have prevented us from using some facilities of the phone to detect whether the phone is in the pocket. This has also caused the accuracy of the algorithm for other activities to drop below 100%.

Implementing the algorithm on other platforms will be the next step, to see the real performance of the algorithm with all features. The algorithm discussed in this paper assumes a defined and fixed orientation of the phone in the pants pocket. Currently the authors are working on improving the algorithm so that it can be used with different orientations in the pocket. The focus is to include an orientation correction in the algorithm such that the correct gyroscopic axis or combination of axes is used. However, the placement is still limited to the pants pocket, as the authors have identified the pants pocket as the most suitable place for device placement for step detection [6].

VI. CONCLUSIONS

This paper presented a single-point gyroscope based pedometer implemented in a smartphone as a component in the development of an indoor wayfinding system for people with vision impairment. From the testing conducted for different activities and different stepping speeds, the algorithm gave promising results and high step detection accuracy even at low walking speeds. The gyroscope based step detection can easily be used as an accurate step counting technique for indoor localization and navigation systems, not only on level terrain but also on tilted terrains and on stairs.

REFERENCES

[1] G. J. Jerome and C. Albright. (2011, June). "Accuracy of five talking pedometers under controlled conditions," The Journal of Blindness Innovation and Research [Online], vol. 1(2). Available: www.nfb-jbir.org/index.php/JBIR/article/view/17/38 [Oct. 27, 2011].
[2] S. E. Crouter, P. L. Schneider, M. Karabulut and D. R. Bassett, "Validity of 10 electronic pedometers for measuring steps, distance, and energy cost," Medicine & Science in Sports & Exercise, vol. 35, no. 8, pp. 1455-1460, Aug. 2003.
[3] E. Garcia, Hang Ding, A. Sarela and M. Karunanithi, "Can a mobile phone be used as a pedometer in an outpatient cardiac rehabilitation program?," in IEEE/ICME International Conference on Complex Medical Engineering (CME) 2010, Gold Coast, QLD, 2010, pp. 250-253.
[4] W. Waqar, A. Vardy and Y. Chen, "Motion modelling using smartphones for indoor mobile phone positioning," in 20th Newfoundland Electrical and Computer Engineering Conference [Online], Newfoundland, Canada, 2011. Available: http://necec.engr.mun.ca/ocs2011/viewpaper.php?id=55&print=1
[5] M. Oner, J. A. Pulcifer-Stump, P. Seeling and T. Kaya, "Towards the run and walk activity classification through step detection - An Android application," in 34th Annual International Conference of the IEEE Engineering in Medicine and Biology, San Diego, CA, 2012, pp. 1980-1983.
[6] K. Abhayasinghe and I. Murray. (2012, Nov.). "A novel approach for indoor localization using human gait analysis with gyroscopic data," in Third International Conference on Indoor Positioning and Indoor Navigation (IPIN2012) [Online], Sydney, Australia, 2012. Available: http://www.surveying.unsw.edu.au/ipin2012/proceedings/submissions/22_Paper.pdf [Mar. 5, 2013].
[7] Y. P. Lim, I. T. Brown and J. C. T. Khoo, "An accurate and robust gyroscope-based pedometer," in 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS 2008), Vancouver, BC, 2008, pp. 4587-4590.
[8] M. Ayabe, J. Aoki, K. Ishii, K. Takayama and H. Tanaka, "Pedometer accuracy during stair climbing and bench stepping exercises," Journal of Sports Science and Medicine, vol. 7, pp. 249-254, June 2008.
[9] J. Perry, Gait Analysis: Normal Gait and Pathological Function. Thorofare, NJ: Slack, 1999, ch. 1-2.
[10] T. Oberg, A. Karsznia and K. Oberg, "Basic gait parameters: Reference data for normal subjects, 10–79 years of age," J. Rehabil. Res. Dev., vol. 30, no. 2, pp. 210-223, 1993.
[11] Apple Inc. (2010, Aug. 10). "Ambient Light Sensor" [Weblog entry]. Apple Developer Forums. Available: https://devforums.apple.com/message/274229 [July 8, 2013].


Bluetooth Embedded Inertial Measurement Unit for Real-Time Data Collection

Ravi Chandrasiri
Department of Electrical and Computer Engineering
Sri Lanka Institute of Information Technology
Colombo, Sri Lanka
[email protected]

Nimsiri Abhayasinghe
Department of Electrical and Computer Engineering
Curtin University
Perth, Western Australia
[email protected]

Iain Murray
Department of Electrical and Computer Engineering
Curtin University
Perth, Western Australia
[email protected]

Abstract—Inertial Measurement Units (IMUs) are often used to measure motion parameters of the human body in indoor/outdoor localization applications. Most commercially available low-cost IMUs have a limited number of sensors and are often connected to a computer by a wired connection (usually by USB). The disadvantage of using wired IMUs in human gait measurement is that the wires disturb the natural gait patterns. Existing IMUs with wireless connectivity solve that problem, but are relatively high cost. This paper describes the development and testing of a miniature IMU that can be connected to a Windows based computer or an Android based mobile device through Bluetooth. The IMU consists of a 3-axis accelerometer, 3-axis gyroscope, 3-axis magnetometer, a temperature sensor, a pressure sensor and an ambient light sensor. Sensors are sampled at a frequency configurable by the user, with a maximum set at 100 Hz. Raw sensor data are streamed through the integrated Bluetooth module to the host device for further processing. The IMU is also equipped with a microSD card slot that enables on-board data logging. The power usage of the Bluetooth transmitter is optimized because only the sampled sensor data are transmitted. The Windows application can be used to view sensor data, plot them and store them in a file for further processing. The Android application can be used to view data as well as to record data into a file. The small size of the device enables it to be attached to any part of the lower or upper human body for the purpose of gait analysis. Comparison of the performance of the device with a smartphone indicated that the output of the IMU is comparable to that of the smartphone.

Index Terms—indoor localization; IMU; 3-axis inertial sensors; human gait analysis

I. INTRODUCTION

Inertial Measurement Units (IMUs) are often used in indoor/outdoor localization applications and robotic applications to measure inertial parameters of the human body or the robot. Most commercially available low-cost IMUs are wired to a computer, usually using USB [1]. IMUs with wireless connectivity to a computer are costlier [1], [2]. Commercially available IMUs are usually equipped with a 3-axis accelerometer, a 3-axis gyroscope and a 3-axis magnetometer, but they do not include an ambient light sensor or a barometer [1], [2], which may also be important in indoor localization applications: the ambient light sensor may be used to detect different light levels in different areas, and the barometer to identify different floor levels. A temperature sensor may also be important in indoor localization applications to detect different temperature levels in different areas (e.g. higher temperature around a fireplace), and one is sometimes included in commercially available IMUs for the purpose of temperature compensation of the inertial sensors [1].

The IMU discussed in this paper consists of an accelerometer, a gyroscope, a magnetometer, a temperature sensor, an ambient light sensor and a barometer. This IMU was developed as a part of an indoor navigation system for vision impaired people.

Features and prices of two commercially available IMUs with wireless connectivity are compared in the "Related Work" section, and the development of the IMU, its hardware/software features and its performance are discussed in the "Construction of the IMU" and "Performance of the IMU" sections.

II. RELATED WORK

A series of IMUs with different features has been developed by YEI Technologies [1]. All these IMUs are equipped with a 14-bit 3-axis accelerometer, a 16-bit 3-axis gyroscope and a 12-bit 3-axis magnetometer. Key technical details of these are shown in Table I. They also include a temperature sensor that is used for temperature compensation of the inertial sensors. The cheapest of them (US$ 163), which comes as a standalone IMU, has USB and RS232 connectivity only, whereas the others that have wireless connectivity are costlier (US$ 304 for the Bluetooth version and US$ 247 for the Wireless 2.4 GHz DSSS version). The version with on-board data logging (US$ 202) has USB connectivity only. The processor in all these devices is a 32-bit RISC processor running at 60 MHz. Two data modes, IMU mode and Orientation mode, are available in all these versions. Kalman filtering, Alternating Kalman filtering, Complementary filtering or Quaternion Gradient Descent filtering can be selected as the orientation filter when not in IMU mode, where processed orientation is made available as the output. A maximum sampling rate of 800 Hz is available in IMU mode.

The IMU of x-io Technologies [2] is equipped with a 12-bit 3-axis accelerometer, a 16-bit 3-axis gyroscope and a 12-bit 3-axis magnetometer, of which the key technical details are given in Table I. This IMU, too, has a temperature sensor for temperature compensation of the inertial sensors. It has a maximum sampling rate of 512 Hz and has USB, Bluetooth and UART connectivity, as well as an SD card slot for on-board data logging. This device has an IMU algorithm and an Attitude and Heading Reference System (AHRS) algorithm running on-board for real-time orientation calculations. The price of this device is £ 309 (~US$ 460).

Table I
TECHNICAL DETAILS OF IMUS OF YEI TECHNOLOGIES AND X-IO TECHNOLOGIES

Sensor          Parameter    YEI Technologies   x-io Technologies
Accelerometer   Bit size     14 bits            12 bits
                Max. Range   ±8 g               ±8 g
Gyroscope       Bit size     16 bits            16 bits
                Max. Range   ±2000 °/sec        ±2000 °/sec
Magnetometer    Bit size     12 bits            12 bits
                Max. Range   ±8.1 G             ±8.1 G

III. CONSTRUCTION OF THE IMU

A. Architecture

The IMU discussed in this paper consists of a 3-axis accelerometer, a 3-axis gyroscope, a 3-axis magnetometer, a temperature sensor, a barometer and an ambient light sensor. It is equipped with a USB port and a Bluetooth module to communicate with a computer, and an SD card slot for on-board data logging. The processor used in the IMU is an 8-bit AVR microcontroller running at 8 MHz on a 3.3 V supply. This microcontroller is a cheap, low-power (~10 mW at 8 MHz on 3.3 V [5]) processor, yet powerful enough to cater to the requirements of the IMU. Observations of [3] and [4] indicate that normal gait frequency is approximately 2 steps·s−1 and fast gait frequency is approximately 2.5 steps·s−1; hence, a 100 Hz sampling rate is sufficient to extract features of human gait using inertial sensors, and the maximum sampling rate of the IMU was selected as 100 Hz. Fig. 1 shows the architecture of the IMU. Raw sensor data are first scaled appropriately (as different scales are available for most of the sensors) and then organized into frames as discussed in the "Data Acquisition and Transmission" section. These frames are streamed out through the USB 2.0 interface and the Bluetooth interface without any further processing. The scope was to present sensor data to the user without performing complex computations, so that the user has the flexibility to perform any computations/analysis with these data on an external processing device: either a personal computer or a smartphone. This avoids any "unknown" on-board data processing, which keeps full control with the user.

B. Sensors

All the sensors have digital data output with an I2C interface. Vendor details, bit size, resolution and maximum range of each sensor are shown in Table II. All these sensors have technical specifications comparable with the sensors used in [1] and [2]. Because all sensors have an output data width of 16 bits or less (except for the barometer, which has both 16 and 19-bit modes), all sensor data were converted to 16-bit format for uniformity, so that manipulation of the data is easier.

Figure 1. Architecture of the IMU

C. Data Acquisition and Transmission

Although the accelerometer, gyroscope and magnetometer are sampled at 100 Hz, the barometer, light sensor and temperature sensor need not be sampled at a high rate; hence they are sampled at 20 Hz. With these sampling rates, two types of data frames are created according to the availability of sensor data. The first type of data frame consists of all sensor data, while the second consists of accelerometer, gyroscope and magnetometer data only. Fig. 2 depicts the two frame patterns in the actual sequence of data. The baud rate was set to 115.2 kbit·s−1 to achieve reliable communication with the Bluetooth module. It should also be noted that there are data losses in practice and the time to recover/retransmit such packets has to be accommodated; hence a faster baud rate is selected. As only the data of the sampled sensors are transmitted during each sampling cycle, the Bluetooth transmitter can stay idle for a longer time when only the inertial sensors are sampled, allowing the module to consume less power.
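The exact frame layout is not fully specified here, so the byte counts below (16-bit samples plus an assumed 2-byte frame header) are illustrative assumptions only; the point of the sketch is that interleaving 20 Hz full frames with 80 Hz inertial-only frames needs only a fraction of the 115.2 kbit·s−1 link, leaving headroom for retransmissions and idle time.

```python
# Illustrative bandwidth check for the two frame types described above.
INERTIAL_VALUES = 9        # 3-axis accel + 3-axis gyro + 3-axis magnetometer
ENVIRONMENT_VALUES = 3     # barometer, temperature, ambient light
BYTES_PER_VALUE = 2        # all sensor data converted to 16-bit format
HEADER_BYTES = 2           # assumed frame marker/type bytes (not specified)

def frame_bytes(full):
    """Size of one frame: all sensors (full) or inertial sensors only."""
    values = INERTIAL_VALUES + (ENVIRONMENT_VALUES if full else 0)
    return HEADER_BYTES + values * BYTES_PER_VALUE

def required_bps(inertial_hz=100, env_hz=20, bits_per_byte=10):
    """UART bits/s needed (10 bits per byte: start + 8 data + stop)."""
    full = env_hz * frame_bytes(True)             # frames with all sensors
    short = (inertial_hz - env_hz) * frame_bytes(False)
    return (full + short) * bits_per_byte

print(required_bps())   # prints 21200, well under the 115 200 bit/s link
```

Under these assumptions the stream needs about 21 kbit/s, so the 115.2 kbit/s setting leaves the transmitter idle most of the time, consistent with the power argument above.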

Data acquired from the sensors are scaled and biased appropriately before being accumulated into frames. No other modification of the data is done in the controller, to keep the flexibility for the user to perform the required processing once the data are collected.

D. Windows Application

The Windows application is the main interface that allows the user to view sensor data transmitted from the IMU. Once the COM port and the baud rate are selected, the user can connect the IMU to the computer through the Bluetooth link. The main graphical user interface (GUI) of the application has two parts: one shows the raw data of each sensor, while the second part shows processed values. Accelerometer values (in G), gyroscopic values (in rad·s−1), compass heading (in degrees), pressure (in Pa), altitude (in meters w.r.t. sea level),


Table II
TECHNICAL DETAILS OF SENSORS [6]–[11]

Sensor         Part Number   Vendor              Bit resolution   Output bits   Resolution         Range
Accelerometer  ADXL345       Analog Devices      10/11/13 bits    16 bits       4 mg/LSb           ±2, ±4, ±8, ±16 g
Gyroscope      ITG3200       InvenSense          16 bits          16 bits       0.0696 °/sec/LSb   ±2000 °/sec
Magnetometer   HMC5883L      Honeywell           12 bits          12 bits       4.35 mG/LSb        ±8 G
Barometer      BMP085        Bosch Sensortec     16/19 bits       16/19 bits    0.01 hPa           300 – 1100 hPa
Temperature    TMP102        Texas Instruments   12 bits          12 bits       0.0625 °C          −40 – +125 °C
Ambient Light  TSL2561       TAOS                16 bits          16 bits       4 counts/lx        0.1 – 40,000 lx
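Using the per-LSB resolutions from Table II, the raw two's-complement sensor words can be converted to physical units along the following lines. This is a hedged sketch: register read-out, mode selection and calibration are omitted, and the helper names are our own, not the firmware's.

```python
def sign_extend(raw, bits):
    """Interpret an unsigned 'bits'-wide word as two's complement."""
    if raw & (1 << (bits - 1)):
        raw -= 1 << bits
    return raw

# Scale factors per LSB taken from Table II.
def accel_g(raw16):
    # 4 mg/LSB; the device's full-resolution data is assumed already
    # sign-extended into the 16-bit word.
    return sign_extend(raw16, 16) * 0.004

def gyro_dps(raw16):
    return sign_extend(raw16, 16) * 0.0696     # 0.0696 °/s per LSB

def mag_gauss(raw12):
    return sign_extend(raw12, 12) * 0.00435    # 4.35 mG/LSB

print(gyro_dps(0xFFFF))   # -1 LSB → prints -0.0696
```

The same sign-extension helper covers the 12-bit magnetometer words, which is why converting everything to a uniform 16-bit format (as described above) simplifies manipulation.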

Figure 2. Data Frame Patterns

Figure 3. The main GUI of the Windows Application

standard atmosphere, temperature (in °C), and light level (in lux) are shown in the second part of the GUI (Fig. 3).

The Windows application can also be used to view sensor data in graphical form and to log data. Viewing data in graphical form gives an understanding of the fluctuation of sensor values. Data logging is important so that the data can be used for further offline analysis. The Windows application can also be used to select among 1 Hz, 5 Hz, 25 Hz, 50 Hz and 100 Hz as the sensor sampling rate of the IMU, which then changes the sampling frequency of the IMU.

E. Android Application

The Android application was designed for mobile platforms, tablet or phone, with fewer features; it can be used to view sensor data and log them. However, one can extend the application to do more advanced processing if necessary, as raw data are streamed from the IMU.

F. The IMU Board

The final board of the IMU is a 50 mm × 37 mm double-sided printed circuit board (PCB) with all sensors and other resources on-board, as shown in Fig. 4. The target was to build it as small as possible so that it is highly portable and very convenient to carry. The accelerometer and the gyroscope are placed close to each other with their X-axes falling on the same line, so that the relative error in the readings is minimal. The magnetometer is also placed close to those, so that the three sensors form a 9-axis IMU.

The board is equipped with an on-board battery charging circuit to allow the battery to be charged using the USB port. The full charging time is about 100 minutes. The IMU consumes approximately 42 mW with data logging only and 138 mW with both data logging and Bluetooth streaming; hence, with a 3.7 V, 800 mAh Li-Po battery it can operate for about 85 hours with only data logging and about 35 hours with both data logging and Bluetooth streaming. The complete IMU with the battery and the enclosure has an approximate weight of 50 g and a size of 55 mm × 45 mm × 20 mm.

IV. PERFORMANCE OF THE IMU

The output of the IMU discussed in this paper was compared against the data recorded on a smartphone. A graph of the IMU accelerometer output with the smartphone data, for a walking trial while keeping the devices in the same pants pocket, is shown in Fig. 5. This comparison indicated that the output of the IMU closely follows the data of the smartphone, which indicates that the performance of the IMU is satisfactory. It should be noted that no additional computations are applied to the output of the IMU other than the conversion from raw values to the actual units.


Figure 4. The PCB of IMU

V. DISCUSSION AND FUTURE WORK

At the beginning of developing this IMU, the goal was set to achieve a sampling rate of 100 Hz, as it is sufficient to track natural walking paces of 2 – 3 steps·s−1 for fast gait [4]. However, with the data rates supported by Bluetooth version 2.0, higher sampling rates can also be achieved. The authors are currently working on improving the sampling rate of the IMU. However, with the limitations of Bluetooth bandwidth, the number of IMUs that can be connected to a single Bluetooth receiver will be limited as the sampling rate increases.

Another factor that limits the sampling rate is the I2C speed supported by the microcontroller and the sensors. Although the microcontroller supports I2C baud rates up to 400 kHz, some sensors do not support rates higher than 100 kHz. The authors are working on finding alternative sensors that support higher I2C baud rates. Further, the data transfer rates supported by the SD card also impose a bottleneck for achieving higher sampling rates.

The Windows application was written to receive data from a single IMU only. However, it is possible to receive data from multiple IMUs so that they can be used to track the motion of a part of, or the full, body. The authors are also looking at improving the Windows application to accommodate multiple IMU data streams in a time-synchronized manner.

The cost of the IMU with the battery and the enclosure in single units comes to below US$ 100. However, if it is manufactured on a mass scale, the cost will be lower. It should be noted that the IMU discussed in this paper consists of most of the sensors necessary for indoor navigation and localization, and it also has USB and Bluetooth connectivity and data logging features.

Figure 5. Accelerometer outputs of the IMU with smartphone data (vertical acceleration (y-axis) for a walking trial; x-axis: time (s), y-axis: Accel-X (m/s²))

Separate accelerometer, gyroscope and compass sensors were used in the IMU discussed in this paper to keep the cost as low as possible. However, this introduces a small error in the relative sensor readings due to the fact that they have slightly offset coordinate systems (the X-axis and/or Y-axis have offsets). As future work, the authors are looking at using a 9-axis motion sensor developed by InvenSense [12], which includes a 3-axis accelerometer, a 3-axis gyroscope and a 3-axis magnetometer in a single chip. This will minimize the error due to the offset of the sensor axes to a great extent.

VI. CONCLUSIONS

This paper presented the design and development of a low-cost IMU that consists of most of the sensors needed for indoor localization, with Bluetooth and USB data streaming and data logging to an SD card, for real-time data collection for human gait analysis. The IMU gave satisfactory output that is sufficient for real-time data capture for human gait analysis.

REFERENCES

[1] YEI Technology. "YEI 3-Space Sensor" [Online]. Available: http://www.yeitechnology.com/yei-3-space-sensor [July 17, 2013].
[2] x-io Technologies. "x-IMU" [Online]. Available: http://www.x-io.co.uk/products/x-imu/ [July 17, 2013].
[3] K. Abhayasinghe and I. Murray. (2012, Nov.). "A novel approach for indoor localization using human gait analysis with gyroscopic data," in Third International Conference on Indoor Positioning and Indoor Navigation [Online], Sydney, Australia, 2012. Available: http://www.surveying.unsw.edu.au/ipin2012/proceedings/submissions/22_Paper.pdf [Mar. 5, 2013].
[4] T. Oberg, A. Karsznia and K. Oberg, "Basic gait parameters: Reference data for normal subjects, 10–79 years of age," J. Rehabil. Res. Dev., vol. 30, no. 2, pp. 210-223, 1993.
[5] Atmel Corporation. (2009, Oct.). "8-bit AVR® Microcontroller with 4/8/16/32 K Byte In-System Programmable Flash" [Online]. Available: http://www.atmel.com/Images/doc8161.pdf [July 17, 2013].
[6] Analog Devices. (2013, Feb.). "3-Axis, ±2 g/±4 g/±8 g/±16 g Digital Accelerometer" [Online]. Available: http://www.analog.com/static/imported-files/data_sheets/ADXL345.pdf [July 17, 2013].
[7] InvenSense Inc. (2011, Feb. 08). "ITG-3200 Product Specification Revision 1.7" [Online]. Available: http://invensense.com/mems/gyro/documents/PS-ITG-3200A.pdf [July 17, 2013].
[8] Honeywell. (2013, Feb.). "3-Axis Digital Compass IC HMC5883L" [Online]. Available: http://www51.honeywell.com/aero/common/documents/myaerospacecatalog-documents/Defense_Brochures-documents/HMC5883L_3-Axis_Digital_Compass_IC.pdf [July 17, 2013].
[9] Bosch Sensortec. (2009, Oct. 15). "BMP085 Digital Pressure Sensor" [Online]. Available: https://www.sparkfun.com/datasheets/Components/General/BST-BMP085-DS000-05.pdf [July 17, 2013].
[10] Texas Instruments. (2012, Oct.). "Low Power Digital Temperature Sensor with SMBus™/Two-Wire Serial Interface in SOT563" [Online]. Available: http://www.ti.com/lit/ds/symlink/tmp102.pdf [July 17, 2013].
[11] Texas Advanced Optoelectronic Solutions. (2005, Dec.). "TSL2560, TSL2561 Light-to-Digital Converter" [Online]. Available: http://www.adafruit.com/datasheets/TSL2561.pdf [July 17, 2013].
[12] InvenSense. "Nine-Axis (Gyro + Accelerometer + Compass) MEMS MotionTracking™ Devices" [Online]. Available: http://www.invensense.com/mems/gyro/nineaxis.html [July 17, 2013].


2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31st October 2013

WiFi localisation of non-cooperative devices

Christian Beder and Martin Klepal
Nimbus Centre for Embedded Systems Research
Cork Institute of Technology
Cork, Ireland
Email: christian.beder,[email protected]

Abstract—Some WiFi enabled devices, such as certain smartphones and tablets, do not allow reading RSSI measurements directly on the device due to API restrictions and are therefore excluded from most currently available WiFi localisation systems. As these restrictions are arbitrary and not due to technological limitations, mainstream research has so far not focused on the issue outside the application space of intrusion detection, where very accurate localisation is usually not essential. However, other applications might require such localisation services for a wider variety of devices, too, especially if the service or application provider has limited control over the user's device preferences. We will present a WiFi localisation system able to handle such non-cooperative devices using dedicated sniffers, made from off-the-shelf components and distributed around the environment, to measure signal strengths on their behalf. This approach makes it possible to provide WiFi localisation based applications not only to Android devices but equally to devices running the iOS or Windows Phone operating systems. Assuming the signal strength measured on the sniffers instead of the devices is symmetric, normal RSSI based localisation algorithms can be applied. However, some challenges arise: monitoring all channels simultaneously can be impractical, so that only a subset of the channel spectrum is visible at any given time, and communication packets are therefore likely to be missed by some sniffers, in particular in the presence of very irregular communication patterns. We will show how to address these issues and compare the localisation performance for these non-cooperative devices with the performance achievable by classical approaches based on active scanning on the device itself.

Index Terms—WiFi localisation

I. INTRODUCTION

Fingerprinting based WiFi localisation has been around for quite some time [1] and is by now a well-established technique for indoor localisation [2]. One very common assumption is, though, that the localised devices themselves actively scan for visible access points in order to measure the mutual signal strengths. While from an algorithmic point of view this assumption does not seem to make any difference, popular smartphone operating systems like iOS and Windows Phone do not allow taking these measurements due to API restrictions, and such devices must therefore be excluded from most currently available WiFi localisation systems, which limits their commercial applicability in certain scenarios.

One possible way of providing indoor location based applications to such non-cooperative devices is to take the measurements not on the device itself, but to create a system architecture where the measurement is taken by a number of sniffer devices that are part of the infrastructure instead. This

Fig. 1. Two possible inexpensive off-the-shelf WiFi sniffer devices. Left: three OpenWRT access points bundled together in order to monitor 3 channels simultaneously in one location. Right: USB hub with WiFi dongles for monitoring 7 channels simultaneously.

approach has been proposed by [3]; however, the challenges arising from it have not been the focus of mainstream localisation research so far. For example, the well-known overview of the state of the art in indoor localisation presented in [2] explicitly distinguishes between the two categories of WiFi based systems and infrastructure systems.

Instead, determining the location of non-cooperative WiFi devices from the infrastructure has been looked at in the context of security applications [4], and there are commercial systems available today, for instance Cisco's Wireless Location Appliance [5] or AirTight Networks's Wireless Intrusion Prevention System [6], which is based on the patents [7] and [8], to name two. However, these systems usually depend on expensive dedicated hardware and focus on security applications, i.e. the detection and handling of intrusions, rather than on location accuracy.

In this paper we will present a system providing accurate, continuous, fingerprinting based WiFi localisation using very simple sniffer devices, like for instance the ones depicted in figure 1, comprising inexpensive off-the-shelf WiFi components. Several challenges need to be addressed when considering WiFi sniffing [9]; however, the most problematic restriction encountered in this infrastructure based approach, compared to actively scanning on the devices themselves, is the fact that a single WiFi chip can only monitor one of the thirteen licensed WiFi channels at a time. This issue has been identified by [3] and addressed there by trying to estimate the client's communication channel. However, in a large scale deployment all of the channels will be in use, so the system cannot be restricted to a single channel if many devices are to be tracked at once. In case the sniffer device does not


contain a WiFi chip for all thirteen channels, it therefore needs to cycle through the channels, which makes each individual sniffer device sensitive only to a subset of the tracked devices at any point in time. Furthermore, lacking control over the tracked device itself, and also assuming a lack of control over the communication infrastructure, means that the channels used are usually unknown to the localisation system as well.

The implication this architecture has for the localisation algorithm is that, in case the sniffer devices are not synchronised, i.e. do not always scan exactly the same channels at any point in time, it has to cope with partially missing RSSI measurements from sniffers not currently listening to the device's current communication channel. We addressed the issue of dealing with missing RSSI measurements in [10]; however, that work was based on the assumption that the probability of missing an RSSI reading is related to the signal strength itself, i.e. weak signals are more likely to be missed than strong signals. This assumption obviously does not apply here, as even a strong signal can be missed if the sniffer device is listening to different channels at that time.

In the following section we will therefore show how the likelihood observation function presented in [10] can be augmented in order to extend the localisation algorithm to accommodate the additional requirement arising from a system architecture based on monitoring different subsets of channels in each sniffer location. We will then show what effect sniffer devices with different numbers of simultaneously monitored channels have on the localisation accuracy, benchmarking this against the case where all thirteen channels are monitored, which is equivalent to a likelihood function designed for systems based on cooperative devices able to actively scan the whole channel spectrum themselves.
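The unsynchronised channel cycling described above can be sketched as follows; the sniffer count, the subset size per sniffer and the uniformly random selection policy are illustrative assumptions for this sketch, not details taken from the paper.

```python
import random

K, M = 8, 13               # number of sniffers and WiFi channels (illustrative)
CHANNELS_PER_SNIFFER = 3   # e.g. the bundled-AP sniffer of Fig. 1

def cycle_channels(rng):
    """Return the channel-monitoring indicator kappa as a K x M 0/1 matrix.

    Each sniffer independently (i.e. unsynchronised, no central entity)
    picks a random subset of channels to listen on for this scan interval."""
    kappa = [[0] * M for _ in range(K)]
    for i in range(K):
        for ch in rng.sample(range(M), CHANNELS_PER_SNIFFER):
            kappa[i][ch] = 1
    return kappa

rng = random.Random(0)
kappa = cycle_channels(rng)

# A transmission on channel mu is visible only to sniffers with kappa[i][mu] == 1;
# all other sniffers miss the packet regardless of signal strength.
mu = 5
visible = [i for i in range(K) if kappa[i][mu]]
```

Because each sniffer sees only `CHANNELS_PER_SNIFFER` of the `M` channels, any given transmission is typically picked up by only a fraction of the sniffers, which is exactly the missing-measurement situation the augmented likelihood function has to handle.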

II. FORMAL PROBLEM DEFINITION

We will now show how to design a likelihood observation function able to deal with missing RSSI measurements due to monitoring only a subset of channels on each sniffer at a time. The notation used here follows the rigorous Bayesian approach to modeling the likelihood function for fingerprinting based localisation presented in [10]. Let us assume we have K sniffers distributed across the environment picking up signal strength measurements

$s = (s_1 \cdots s_K)^T$  (1)

having (slightly abusing notation) the inverse covariance

$C_{ss}^{-1} = \sigma^{-2} \mathrm{diag}[\tau]$  (2)

together with a boolean pickup indicator function

$\tau : \{1, \ldots, K\} \to \{0, 1\}$  (3)

which tells us whether a particular sniffer picked up some signal or not. Note that the values $s_i$ are only defined if the corresponding $\tau_i = 1$ and have no impact on any of the following in case $\tau_i = 0$.

Unlike systems that actively scan the whole channel spectrum on the device itself, we assume that each sniffer is restricted to only the subset of channels it monitors. We will therefore augment the approach taken in [10] by introducing the channel monitoring indicator function

$\kappa : \{1, \ldots, K\} \times \{1, \ldots, M\} \to \{0, 1\}$  (4)

which tells us for each sniffer and each channel whether it is currently listening there or not. The major contribution of this paper is to demonstrate how this additional information can be used in the likelihood function, enabling standard localisation algorithms to cope with an only partially monitored channel spectrum on unsynchronised sniffer devices.

Similarly to the measurements, we will denote the previously recorded known fingerprint by a vector of location dependent signal strength functions for each sniffer

$F[x] = (F_1[x] \cdots F_K[x])^T$  (5)

together with a coverage indicator function

$\phi[x] : \{1, \ldots, K\} \to \{0, 1\}$  (6)

which tells us for each sniffer whether an area is covered by it or not.

The problem of localising a non-cooperative device can now be stated as finding the most likely position of a single signal, picked up by a subset of sniffers listening to a subset of channels, given the previously known fingerprint

$\hat{x} = \arg\max_x p(x \mid s, \tau, \kappa, F, \phi)$  (7)

As usual, applying Bayes' theorem, this posterior can be rewritten into a product of a likelihood factor and a prior-to-evidence ratio factor as follows

$p(x \mid s, \tau, \kappa, F, \phi) = p(s, \tau, \kappa \mid x, F, \phi) \, \frac{p(x \mid F, \phi)}{p(s, \tau, \kappa \mid F, \phi)}$  (8)

Although all parts on the right hand side of this equation can be considered by an appropriate localisation algorithm (and will be, to a certain extent, by the motion model of the particle filter used in the evaluation section), in this section we will focus on the likelihood factor only and show how it can be designed to model all aspects of the problem.

First we note that, by applying basic rules of probability, the likelihood can be decomposed into three factors as follows

$p(s, \tau, \kappa \mid x, F, \phi) = p(s \mid x, F, \phi) \, p(\kappa \mid s, x, F, \phi) \, p(\tau \mid s, \kappa, x, F, \phi)$  (9)

Each of these factors models an aspect of the problem, so we will discuss them in turn.

We start with the most commonly modeled first factor. Assuming a Gaussian distribution of the received signal strength around the fingerprint value in log-energy space, and compensating for the bias introduced by differing antenna attenuations as discussed in [10], it is given by

$p(s \mid x, F, \phi) = \frac{\exp\left[-\frac{1}{2} \sum_{i=1}^{K} \tau_i \phi_i[x] \, \omega_i^2 / \sigma_\omega^2\right]}{\sqrt{(2\pi\sigma_\omega^2)^{\sum_{i=1}^{K} \tau_i \phi_i[x]}}}$  (10)


using the bias compensated signal strength residual

$\omega_i = s_i - F_i[x] - \lambda$  (11)

having the variance

$\sigma_\omega^2 = \frac{1 + \sum_{i=1}^{K} \tau_i \phi_i[x]}{\sum_{i=1}^{K} \tau_i \phi_i[x]} \, \sigma^2$  (12)

and containing the estimated antenna attenuation bias

$\lambda = \frac{\sum_{i=1}^{K} \tau_i \phi_i[x] \, (s_i - F_i[x])}{\sum_{i=1}^{K} \tau_i \phi_i[x]}$  (13)
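A numerical sketch of equations (10)-(13) in plain Python follows; the toy fingerprint and measurement values are invented, and this is not the authors' implementation.

```python
import math

def log_likelihood(s, F_x, tau, phi_x, sigma2):
    """Log of the Gaussian likelihood of eq. (10) for the signal strengths s
    around the fingerprint F_x at a candidate position x, with the antenna
    attenuation bias lambda of eq. (13) estimated and removed, and the
    variance inflated accordingly as in eq. (12)."""
    # sniffers that both picked up the signal (tau) and cover x (phi)
    active = [i for i in range(len(s)) if tau[i] and phi_x[i]]
    n = len(active)
    if n == 0:
        return 0.0  # no active sniffer: factor carries no information
    lam = sum(s[i] - F_x[i] for i in active) / n           # eq. (13)
    sigma2_w = (1 + n) / n * sigma2                        # eq. (12)
    ssq = sum((s[i] - F_x[i] - lam) ** 2 for i in active)  # residuals, eq. (11)
    return -0.5 * ssq / sigma2_w - 0.5 * n * math.log(2 * math.pi * sigma2_w)

# toy example: three sniffers, all covering x and all picking up the signal
ll = log_likelihood(s=[-60.0, -70.0, -65.0],
                    F_x=[-58.0, -72.0, -64.0],
                    tau=[1, 1, 1], phi_x=[1, 1, 1], sigma2=16.0)
```

Because $\lambda$ absorbs any constant offset, this likelihood is invariant under a global shift of all picked-up signal strengths, which is the point of the antenna attenuation bias compensation.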

The second and third factors of the likelihood function are not commonly considered but rather assumed to be uniformly distributed. For the second factor, which is the probability of the sniffers being in a given configuration at a point in time, we also do not want to introduce any further assumptions; we therefore assume it to be independent of any measurements and each channel configuration to be equally likely, hence

$p(\kappa \mid s, x, F, \phi) = p(\kappa) = \frac{1}{2^{KM}}$  (14)

Finally, the third and in this context most interesting factor is the pickup probability, which allows introducing assumptions on the sniffers' ability to make a measurement at all. The particular contribution of this paper is to also take the known monitored channel spectrum $\kappa$ into account there. In order to do this, we first expand the equation by explicitly marginalising over the unknown transmission channel $\mu$ as follows

$p(\tau \mid s, \kappa, x, F, \phi) = \sum_{\mu=1}^{M} p(\tau \mid \mu, s, \kappa, x, F, \phi) \, p(\mu \mid s, \kappa, x, F, \phi)$  (15)

making the transmission channel explicitly available to the algorithm and thereby enabling us to take the channel spectrum monitored by the sniffers into account.

We will now show how to augment the approach presented in [10], where we proposed to use the Gibbs distribution

$g_i = \frac{\exp[-\beta c_i[\tau_i]]}{\exp[-\beta c_i[0]] + \exp[-\beta c_i[1]]}$  (16)

as pickup probabilities penalising the missed energy

$c_i[t] = \phi_i[x](1-t)\alpha F_i[x] + t(1-\phi_i[x])\alpha s_i$  (17)

with $\alpha$ denoting the energy unit and $\beta$ the inverse temperature parameter. In order to introduce the additional knowledge about the monitored channels $\kappa$, we now propose to apply this lost-energy Gibbs distribution $g_i$ only in case the transmission channel $\mu$ was actually monitored by the particular sniffer, i.e. $\kappa_{i\mu} = 1$. In the other case, when the sniffer was not monitoring the transmission channel, i.e. $\kappa_{i\mu} = 0$, we use a relaxed zero-one pickup probability to reflect the fact that in this case nothing should have been picked up at all. This means that, depending on an inverse temperature control parameter $\gamma$, the signal pickup probability is close to one in case the signal is not picked up, but close to zero in the impossible case that some transmission was picked up on the channel despite it not being monitored. Putting all this together yields the following augmented conditional pickup probability

$p(\tau \mid \mu, s, \kappa, x, F, \phi) = \prod_{i=1}^{K} \left( \kappa_{i\mu} g_i + (1-\kappa_{i\mu}) \frac{e^{-\gamma(1-\tau_i)}}{1 + e^{-\gamma}} \right)$  (18)

If we now also make the assumption of a uniform transmission channel distribution

$p(\mu \mid s, \kappa, x, F, \phi) = p(\mu) = \frac{1}{M}$  (19)

we finally end up with the new pickup probability function

$p(\tau \mid s, \kappa, x, F, \phi) = \frac{1}{M} \sum_{\mu=1}^{M} \prod_{i=1}^{K} \left( \kappa_{i\mu} g_i + (1-\kappa_{i\mu}) \frac{e^{-\gamma(1-\tau_i)}}{1 + e^{-\gamma}} \right)$  (20)

which is able to also take the monitored channel spectrum into account. Note that this is a direct generalisation of the approach presented in [10], being identical in case all channels are monitored, i.e. $\kappa_{i\mu} = 1$ for all possible transmission channels $\mu$. Also note that this is the case, too, if all the sniffers are synchronised, i.e. $\kappa_{i\mu} = \kappa_{j\mu}$, which means that in this case the presented extension does not yield any benefit over the classical approach.
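The marginalisation in equations (18)-(20) can be sketched as below; the Gibbs probabilities $g_i$ are taken as given per-sniffer inputs (in the paper they follow from eq. (16)), and all numeric values in the example are invented.

```python
import math

def pickup_probability(tau, kappa, g, gamma):
    """Eq. (20): pickup probability marginalised over the unknown
    transmission channel mu, assumed uniform as in eq. (19).

    tau:   length-K list of 0/1 pickup indicators
    kappa: K x M matrix of 0/1 channel-monitoring indicators
    g:     length-K Gibbs pickup probabilities (eq. 16)
    gamma: inverse temperature of the relaxed zero-one term"""
    K, M = len(tau), len(kappa[0])
    total = 0.0
    for mu in range(M):
        prod = 1.0
        for i in range(K):
            if kappa[i][mu]:
                # channel monitored: lost-energy Gibbs term applies
                prod *= g[i]
            else:
                # channel not monitored: relaxed zero-one term of eq. (18)
                prod *= math.exp(-gamma * (1 - tau[i])) / (1 + math.exp(-gamma))
        total += prod
    return total / M

# two sniffers, three channels; sniffer 0 monitors channels 0-1, sniffer 1 channel 2
p = pickup_probability(tau=[1, 0],
                       kappa=[[1, 1, 0], [0, 0, 1]],
                       g=[0.9, 0.2], gamma=6.0)
```

When every sniffer monitors every channel, the expression collapses to the plain product of the $g_i$, matching the stated reduction to the approach of [10].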

In the following we will show what influence the restriction to only a subset of the channel spectrum has on the achievable localisation performance.

III. EVALUATION

In order to evaluate the presented approach, we built a system able to monitor all thirteen WiFi channels simultaneously in each sniffer location all the time. By simply considering only a subset of these measurements, this allows us to easily study the effect that limiting the number of simultaneously monitored channels has on localisation performance, as it would occur with more realistic sniffer devices like for instance the ones depicted in figure 1, which rely on randomly cycling through subsets of channels instead. As mentioned in the previous section, we also assume that this channel selection is unsynchronised, meaning each sniffer chooses its channels independently, which makes the architecture much more flexible as it does not rely on a central synchronisation entity.

Figure 2 shows the sniffer placement on the floor plan as well as the test path we walked with a smartphone independently connected to our Cisco enterprise WiFi system, allowing for seamless handovers, while pinging a server from the device in order to generate regular network traffic. We chose this connected approach because it is more realistic than having the device continuously scanning for access points, as in that case it would cycle through all channels itself and be visible to every sniffer all the time, therefore not showcasing the properties of the algorithm we want to evaluate.


Fig. 2. Top: Test-building floor plan with the black dots indicating sniffer placement. Note that the coverage in the open plan area on the right hand side of the building is limited and therefore quite challenging for any WiFi localisation system. The green line indicates the test path walked from one end of the building to the other. Bottom: Area of the floor plan covered by the fingerprint for the experiment.


Fig. 3. Cumulative histogram of residual localisation errors while walking along the path depicted in figure 2, for sniffer configurations monitoring 1, 3, 7, 10 and 13 channels. As expected, localisation accuracy increases with an increasing number of simultaneously monitored channels.

The extended likelihood function proposed in the previous section was implemented in the particle filter based localisation system presented in [11] and [10], and the residual errors between reference locations from a controlled walk along the test path shown in figure 2 and the resulting position estimates based on the sniffer measurements were recorded for different sniffer configurations monitoring increasing numbers of channels simultaneously while cycling through them at random.

Figure 3 shows the cumulative histogram of the results. As expected, localisation accuracy increases with the number of channels simultaneously monitored in each location. It can be seen that relying on a single WiFi monitor per location and cycling through the channels there yields performance much inferior to sniffer devices able to monitor more than one channel at once. However, it can also be observed that monitoring all 13 channels simultaneously is not necessary to achieve reasonable results, and sniffer devices like the ones depicted in figure 1 can be sufficient. Nevertheless, any additional sniffer device will potentially help to improve the accuracy, and the presented approach does allow mixing multiple different types of sniffer devices and accounting for this in the likelihood observation function.
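The curves of figure 3 are empirical cumulative distributions of the per-timestep residual errors; a minimal sketch of the computation, with invented residuals, follows.

```python
def cumulative_error(errors, bin_edges):
    """Fraction of localisation residuals at or below each bin edge,
    i.e. the empirical CDF sampled at bin_edges (cf. Fig. 3)."""
    n = len(errors)
    srt = sorted(errors)
    cdf = []
    for edge in bin_edges:
        # count of residuals not exceeding this error level
        count = sum(1 for e in srt if e <= edge)
        cdf.append(count / n)
    return cdf

# illustrative residuals in metres for one sniffer configuration
residuals = [1.2, 3.4, 0.8, 5.0, 2.2, 7.5, 2.9, 4.1]
curve = cumulative_error(residuals, bin_edges=[1, 2, 4, 8])
```

Plotting one such curve per sniffer configuration (1, 3, 7, 10 and 13 monitored channels) against the error axis reproduces the comparison shown in the figure.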

IV. CONCLUSION

A system for accurate, continuous, fingerprinting based WiFi localisation has been presented that does not rely on the ability of the tracked devices to actively participate by scanning and reporting signal strengths themselves. The proposed approach therefore enables the provision of location based services to non-cooperative or API restricted devices, such as smartphones based on the iOS or Windows Phone operating systems, by taking the necessary RSSI measurements at the infrastructure level instead.

The major contribution of this paper has been the derivation of a rigorous likelihood observation function modeling restrictions in the monitored channel spectrum as they occur when taking such an infrastructure centric approach. It was shown that our approach allows localising non-cooperative devices by placing simple WiFi sniffers, built from inexpensive off-the-shelf components, into the environment, achieving accuracy comparable to systems designed for actively scanning cooperative devices.

ACKNOWLEDGMENT

This work has been supported by Enterprise Ireland through grant IR/2011/0003.

REFERENCES

[1] P. Bahl and V. Padmanabhan, "RADAR: An in-building RF-based user location and tracking system," in INFOCOM 2000. Nineteenth Annual Joint Conference of the IEEE Computer and Communications Societies, vol. 2, 2000, pp. 775–784.

[2] R. Mautz, "Indoor positioning technologies," habilitation thesis, ETH Zurich, 2012.

[3] S. Ganu, A. S. Krishnakumar, and P. Krishnan, "Infrastructure-based location estimation in WLAN," in Wireless Communications and Networking Conference, 2004. WCNC 2004 IEEE, vol. 1, 2004, pp. 465–470.

[4] A. Hatami and K. Pahlavan, "In-building intruder detection for WLAN access," in Position Location and Navigation Symposium, 2004. PLANS 2004, 2004, pp. 592–597.

[5] Cisco. Wireless Location Appliance. [Online]. Available: http://www.cisco.com/en/US/prod/collateral/wireless/ps5755/ps6301/ps6386/product_data_sheet0900aecd80293728.html

[6] AirTight Networks. Wireless Intrusion Prevention System. [Online]. Available: http://www.airtightnetworks.com/home/products/AirTight-WIPS.html

[7] M. Kumar and P. Bhagwat, "Method and system for location estimation in wireless networks," Patent US 7 406 320 B1, 2008.

[8] R. Rawat, "Method and system for location estimation in wireless networks," Patent US 7 856 209 B1, 2010.

[9] J. Yeo, M. Youssef, and A. Agrawala, "A framework for wireless LAN monitoring and its applications," in Proceedings of the 3rd ACM Workshop on Wireless Security (WiSe '04). New York, NY, USA: ACM, 2004, pp. 70–79. [Online]. Available: http://doi.acm.org/10.1145/1023646.1023660

[10] C. Beder and M. Klepal, "Fingerprinting based localisation revisited - a rigorous approach for comparing RSSI measurements coping with missed access points and differing antenna attenuations," in 2012 International Conference on Indoor Positioning and Indoor Navigation (IPIN), 2012.

[11] A. McGibney, C. Beder, and M. Klepal, "MapUme smartphone localisation as a service - a cloud based architecture for providing indoor localisation services," in 2012 International Conference on Indoor Positioning and Indoor Navigation (IPIN), 2012.


Creation of Image Database with Synchronized IMU Data for the Purpose of Way Finding for Vision Impaired People

Chamila Rathnayake

Department of Electrical and Computer Engineering Curtin University

Perth, Western Australia [email protected]

Iain Murray

Department of Electrical and Computer Engineering Curtin University

Perth, Western Australia [email protected]

Abstract—This paper describes an image database which includes synchronized inertial measurement unit (IMU) data along with the metadata of the captured images [1]. Images are taken under a range of conditions including low light, shadow conditions and controlled blurring. Physical locations are fixed and repeatable, and include accurate GPS positioning. The standardized images are synchronized over the exposure time with multiple sensor data (accelerometer, gyroscope and ambient light). This database will be used for a research project currently being undertaken at Curtin University which proposes a form of “crowd sourcing” to construct maps for use in mobility and navigation for people with vision impairments.

Keywords—GPS, image processing, way finding, vision impaired, IMU, metadata

INTRODUCTION

A “standardized” image database is an important tool in the comparative assessment of image processing techniques in the field of indoor and outdoor navigation. Databases of this type have been used in a wide variety of applications such as geographical information systems, computer-aided design and manufacturing systems, multimedia libraries, and medical image management systems.

The inertial measurement unit (IMU) data and other sensor data are captured synchronized with the time at which the image was taken, and the image metadata is recorded as additional information. The captured images and data are stored in a conventional database as described in section V.

In a typical digital image processing pipeline, a sequence of steps is carried out to obtain the final result [2]. The acquisition and preprocessing steps are among the most important in this process, and some of the IMU data and metadata can be used during these steps.

Sensor Data: gyroscopic data, barometer data, accelerometer data, proximity data, magnetometer data, ambient light data, orientation data, GPS data.

Meta Data: shutter speed, ISO, aperture, flash fired, exposure program, resolution, date and time, compression type.

A modern smartphone is used as the IMU for the initial experiments, and a subset of the above data types has been captured using an Android application while taking the images.
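Under the assumption that both the IMU stream and the image capture carry millisecond timestamps (the paper does not detail the mechanism used by the Android application), synchronisation over the exposure time can be sketched as selecting the sensor samples that fall inside the exposure window:

```python
def samples_in_exposure(samples, capture_start_ms, exposure_ms):
    """Select the IMU samples recorded during the exposure window
    [capture_start_ms, capture_start_ms + exposure_ms].

    samples: list of (timestamp_ms, reading) tuples, assumed sorted by time."""
    end = capture_start_ms + exposure_ms
    return [r for (t, r) in samples if capture_start_ms <= t <= end]

# illustrative 100 Hz gyroscope stream (x, y, z in rad/s) and a 30 ms exposure
gyro = [(t, (0.0, 0.01 * t, 0.0)) for t in range(0, 200, 10)]
synced = samples_in_exposure(gyro, capture_start_ms=50, exposure_ms=30)
```

The selected samples would then be stored against the image record, so that later processing (e.g. de-blurring from gyroscope data) can recover the device motion during exposure.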

RELATED WORK

Little research has been carried out in this area. Reference [4] describes a general image database model which mainly focuses on image attributes and query optimization for the image database; it does not address any synchronization with sensor data during the image capturing process.

IMPORTANCE OF IMAGE DATABASE WITH SYNCHRONIZED IMU DATA

A. Image Stabilization

IMU data can be used to identify the most stable position of the capturing device while it is moving or otherwise unstable. Real-time feedback from the

978-1-4673-1954-6/12/$31.00 ©2012 IEEE


IMU can be used to trigger the capturing device after analyzing the gyroscopic and accelerometer data to find the most stable position.

B. Image De-blurring

The most common problem in image capturing is blurring, which is caused by movement of the capturing device, movement of the object, or a long exposure time. If the blurring occurs due to movement of the capturing device, then de-blurring techniques can be applied to the image using gyroscopic and accelerometer data [3].

C. Algorithm training, testing and performance analysis

The IMU data and metadata can be used in any image processing research project to test algorithms and applications and to identify optimal results and solutions.

IMAGE TYPES

Each image is captured under different environmental conditions, in different formats and at different locations. Images are also saved in different modes such as RGB and grayscale. The following table lists all the types of images considered.

Environmental Conditions: Morning, Noon, Evening, Night, Cloudy
Locations: Side walks, Straight path, Curvy paths, Corridor, Stairs
Image Conditions: Normal, Shadowy, Blurred
Modes: RGB, Grayscale, Black & White

A sample set of captured images is shown in the accompanying figure, illustrating the different environmental conditions, locations, image conditions and modes.

To capture the images for experimental purposes, the capturing device is mounted on a tripod at chest height, which is the most suitable placement relative to human gait while the subject is performing different activities such as walking on sidewalks and stairways. This is to simulate the real walking patterns of vision impaired people and to record the stability of the capturing device and its position relative to ground level throughout the activity.

DATABASE MODEL

The captured images, IMU data and meta information are stored in a relational, normalized database model as shown in the figure below.

Images (IMG_ID PK, IMG_Name; FKs: ENC_ID, PHC_ID, IMD_ID, LCN_ID, FTR_ID)
EnvironmentalConditions (ENC_ID PK, ENC_Name, ENC_Description)
PhysicalConditions (PHC_ID PK, PHC_Name, PHC_Description)
ImageModes (IMD_ID PK, IMD_Name)
Locations (LCN_ID PK, LCN_Name)
Features (FTR_ID PK, FTR_Name)
MetaData (ATR_ID PK, IMG_ID FK, ATR_DateTime, ATR_Resolution, ATR_Longittude, ATR_Latitude, ATR_Altitude)
IMUData (SND_ID PK, IMG_ID FK, SND_DateTime, SND_AccellerationX/Y/Z, SND_GyroscopeX/Y/Z, SND_OrientationX/Y/Z, SND_MagneticFieldX/Y/Z)
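A minimal slice of this normalized model can be sketched with Python's built-in sqlite3; the table and column names follow the diagram, but the column set is abridged and the sample row values are invented.

```python
import sqlite3

# Three tables from the model: a lookup table, Images, and its IMUData child.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE EnvironmentalConditions (
    ENC_ID   INTEGER PRIMARY KEY,
    ENC_Name TEXT,
    ENC_Description TEXT
);
CREATE TABLE Images (
    IMG_ID   INTEGER PRIMARY KEY,
    IMG_Name TEXT,
    ENC_ID   INTEGER REFERENCES EnvironmentalConditions(ENC_ID)
);
CREATE TABLE IMUData (
    SND_ID INTEGER PRIMARY KEY,
    IMG_ID INTEGER REFERENCES Images(IMG_ID),
    SND_DateTime TEXT,
    SND_AccellerationX REAL, SND_AccellerationY REAL, SND_AccellerationZ REAL,
    SND_GyroscopeX REAL, SND_GyroscopeY REAL, SND_GyroscopeZ REAL
);
""")
conn.execute("INSERT INTO EnvironmentalConditions VALUES (1, 'Morning', 'morning light')")
conn.execute("INSERT INTO Images VALUES (1, 'corridor_001.jpg', 1)")
conn.execute("INSERT INTO IMUData VALUES (1, 1, '2013-07-17T09:00:00.000', "
             "0.1, 9.8, 0.0, 0.01, 0.0, 0.02)")

# Join an image with its capture conditions and synchronized IMU sample.
rows = conn.execute("""
    SELECT i.IMG_Name, e.ENC_Name, d.SND_GyroscopeX
    FROM Images i
    JOIN EnvironmentalConditions e ON e.ENC_ID = i.ENC_ID
    JOIN IMUData d ON d.IMG_ID = i.IMG_ID
""").fetchall()
```

The one-to-many relation from Images to IMUData lets several sensor samples taken during the exposure window be stored per image, which is what the synchronization described earlier requires.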

CONCLUSION

The main advantages of the proposed database can be summarized as follows:

Over 600 images can be used as training, testing and performance analysis sets for image processing techniques. The synchronized IMU data and metadata can be used to enhance the images in image processing and computer vision applications.

ACKNOWLEDGMENT

The authors would like to thank Mr. Nimsiri Amarasinghe and Mrs. Nimali Rajakaruna from the Department of Electrical and Computer Engineering, Curtin University, for providing assistance in capturing images and testing the image processing applications.

REFERENCES

[1] Douglas Hackney, "Digital Photography Meta Data Overview," 2008, unpublished.

[2] S. Annadurai and R. Shanmugalakshmi, "Fundamentals of Digital Image Processing," Pearson Education India, Chapter 1.

[3] Ondrej Sindelar and Filip Sroubek, "Image deblurring in smartphone devices using built-in inertial measurement sensors."

[4] Peter L. Stanchev, "General Image Database Model," Institute of Mathematics and Computer Science, Bulgarian Academy of Sciences.


Relevance and Interpretation of the Cramer-Rao Lower Bound for Localisation Algorithms

Marcel Kyas, Yubin Zhao, Heiko Will
Freie Universität Berlin
AG Computer Systems & Telematics
Takustr. 9, 14195 Berlin, Germany
e-mail: marcel.kyas,yubin.zhao,[email protected]


Abstract—We show that using the Cramer-Rao Lower Bound (CRLB) is inadequate for indoor localisation. The mathematical assumptions necessary to formulate the Fisher information of the indoor localisation problem and to calculate the CRLB do not generally hold. This is caused by non-Gaussian distributions of measurement errors. These distributions also give rise to involved calculations if a CRLB is to be computed. Finally, the CRLB gives a lower bound on the mean squared error (MSE) of any potential estimator for a problem, but makes no statement about the existence of such an optimal estimator or about whether a position estimation algorithm can be improved by using the CRLB.

The mathematical results justify the use of simulation, as done with our FU Berlin Parallel Lateration-Algorithm Simulation and Visualization Engine (LS2). Using simulation, we can analyse algorithms without relying on the CRLB, especially if its calculation proves infeasible or even impossible.

Index Terms—Indoor localisation, Cramer-Rao Lower Bound

I. INTRODUCTION

The Cramer-Rao Lower Bound (CRLB) is used as a measure for the quality of an estimation method in localisation algorithms [1], but its use has become controversial [2], [3].

Our contribution is a rigorous discussion of the calculation of the CRLB for indoor localisation assuming Gamma distributed errors, a careful interpretation of this bound and its application to the maximum likelihood estimator (MLE) using this error distribution, and a comparison of simulation results of real algorithms with this bound. As we show, the CRLB is of limited use, because most algorithms are actually much better than the published CRLB in scenarios of interest. We aim to explain this phenomenon.

II. RELATED WORK

The earliest use of the CRLB for localisation problems was by Torrieri [1]. Applications of the CRLB are to decide whether an algorithm is optimal [4], to place anchors optimally [5], or to select presumably optimal anchors and range measurements [6].

There have also been differing attempts at calculating the CRLB, depending on the method of measurement [7]–[9]. There is uncertainty about the correct formulation of the CRLB, as different solutions are proposed for the same measurement errors and measurement error distributions [10]–[12]. The geometric dilution of precision (GDOP) has been shown to be equivalent to the CRLB for normally distributed errors [2].

III. CRAMER-RAO LOWER BOUND

We summarise the important definitions and theorems. From now on, let ∇θ = (∂/∂θ0, …, ∂/∂θk−1)^T.

Fisher information (FI) and the CRLB can be used to establish the optimality of an unbiased estimator. In particular, the CRLB is attained by the MLE [13]. We use an MLE to establish a counterexample in Section VI.

A. Fisher Information

The CRLB is derived from the FI matrix.

Definition 1: Suppose X = (X0, …, Xn−1) form a random sample from a distribution for which the probability density function (p.d.f.) is f(x; θ), where the value of the parameter θ = (θ0, …, θk−1) must lie in an open subset of a k-dimensional real space. Let fn(x; θ) denote the joint p.d.f. of x. Assume that {x | fn(x; θ) > 0} is the same for all θ and that log fn(x; θ) is twice differentiable with respect to θ. The FI matrix In(θ) in the random sample x is defined as the k × k matrix with (i, j) element equal to

In,i,j(θ) = Cov[ (∂/∂θi) log fn(x; θ), (∂/∂θj) log fn(x; θ) ].

Recall that Covθ(X, Y) = E(XY; θ) − E(X; θ)E(Y; θ) and E(X; θ) = ∫ X(ω) f(dω; θ), with f the p.d.f. that admits X.

B. The Cramer-Rao Lower Bound

The CRLB expresses a lower bound on the variance of estimators of deterministic parameters [14], [15]. The next theorem is proved as Theorem 6.6 in Lehmann and Casella [13, p. 127].

Theorem 1 (Cramer-Rao Information Inequality): Suppose X = (X0, …, Xn−1) form a random sample from a distribution for which the p.d.f. is f(x; θ), where the value of the parameter θ = (θ0, …, θk−1) must lie in an open subset of a k-dimensional real space. Let T = δ(X) be a statistic with finite variance. Let m(θ) = Eθ(T). Assume that m(θ) is a differentiable function of θ. Then:

Varθ(T) ≥ ∇θm(θ)^T · In(θ)^−1 · ∇θm(θ)  (1)

IV. ERRORS IN DISTANCE MEASUREMENT

The position will be estimated from measurements of distances to anchors, e.g. by time of arrival (TOA). This roughly corresponds to an indoor localisation scenario with non-line-of-sight (NLOS) errors and abstracts from radio ranges, i.e., we assume the radio reaches every place in a building.

Indoors, the major contributors to measurement errors are NLOS effects and inaccurate clocks. Fading and multi-path effects affect the accuracy to a lesser degree, because the measurements are independent of the received signal strength and distances are usually short. Only whether a message was received is relevant; if it is lost, no value can be measured.

Our experiments [16] show that the error of a measurement can be approximated by a Gamma distribution. The p.d.f. of the Gamma distribution is

Γ(α, β)(x) = (β^α / Γ(α)) x^(α−1) e^(−βx) if x > 0, and 0 if x ≤ 0,  (2)

where Γ(α) = ∫0..∞ t^(α−1) e^(−t) dt. We call the parameter α the shape of the distribution and the parameter β its rate. The Gamma distribution is defined for non-negative α, β, and x.

Given shapes α1, …, αn and rates β1, …, βn, a range measurement Xi to an anchor position Ai at the position θ is modelled by Xi = d(Ai, θ) + εi − bi with εi ∼ Γ(αi, βi), where bi is an offset of the measurement error, εi is the i-th measurement error sampled from the p.d.f. of Γ(αi, βi), and d(Ai, θ) is the Euclidean distance between Ai and θ, i.e. d(a, b) = √((ax − bx)² + (ay − by)²).
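As a quick illustration of this measurement model (a toy script of our own, not part of LS2, with hypothetical parameter values), we can sample ranges Xi = d(Ai, θ) + εi − bi and check that the empirical mean error matches E[εi] − bi = αi/βi − bi:

```python
import numpy as np

# Toy simulation of the range-measurement model X_i = d(A_i, theta) + eps_i - b_i
# with eps_i ~ Gamma(alpha, beta). All parameter values are hypothetical.
rng = np.random.default_rng(0)
theta = np.array([120.0, 80.0])                       # true position
anchors = np.array([[0.0, 0.0], [200.0, 0.0], [0.0, 200.0]])
alpha, beta, b = 4.0, 0.08, 25.0                      # shape, rate, offset

d = np.linalg.norm(anchors - theta, axis=1)           # true distances
eps = rng.gamma(alpha, 1.0 / beta, size=(100000, 3))  # numpy uses (shape, scale)
X = d + eps - b                                       # measured ranges

# Mean measurement error should approach alpha/beta - b = 50 - 25 = 25.
print((X - d).mean(axis=0))
```

Note that numpy's `gamma` sampler takes a scale parameter, so the rate β enters as 1/β.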

Definition 2 (Measurement p.d.f.): The probability of measuring x at position θ from anchor A is:

f(x; θ) = Γ(α, β)(x − d(A, θ) + b) = (β^α / Γ(α)) (x − d(A, θ) + b)^(α−1) e^(−β(x − d(A, θ) + b)),

where α, β > 0, b ∈ R is an offset, and x > d(A, θ) − b.

Referring to Definition 1, we can establish whether the Fisher information is defined for such a distribution.

Proposition 2: {x | fn(x; θ) > 0} = ⋂(i=1..n) {x | d(Ai, θ) − bi < xi}.

From this proposition, we immediately conclude that bi > 0 for all 1 ≤ i ≤ n is a sufficient condition for a non-empty set {x | fn(x; θ) > 0}. Otherwise, this set might be empty and no CRLB can be derived.

Geometrically speaking, Proposition 2 describes the intersection of discs around the anchors. For using the MLE with this p.d.f., we must find a point inside this set; otherwise, the target function is already undefined at the start.
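The feasibility check implied by Proposition 2 is easy to sketch. The following helper (our own illustrative code with made-up numbers, not the paper's implementation) tests whether a candidate starting point lies inside the intersection of discs, i.e. inside the domain of the Gamma log-likelihood:

```python
import math

def dist(a, b):
    """Euclidean distance between 2D points a and b."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def in_domain(theta, anchors, measurements, offsets):
    """True iff d(A_i, theta) - b_i < x_i for every anchor (Proposition 2)."""
    return all(dist(a, theta) - b < x
               for a, x, b in zip(anchors, measurements, offsets))

# Hypothetical scenario: three anchors, measured ranges, and offsets b_i > 0.
anchors = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
measurements = [80.0, 90.0, 85.0]
offsets = [25.0, 25.0, 25.0]

print(in_domain((50.0, 50.0), anchors, measurements, offsets))    # inside
print(in_domain((300.0, 300.0), anchors, measurements, offsets))  # outside
```

A point for which `in_domain` returns False cannot be used to start the MLE, since the log-likelihood is undefined there.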

V. APPLICATION TO INDOOR LOCALISATION

We establish the non-existence of a CRLB for the indoor localisation problem using a Gamma-distributed distance measurement error.

A. Location Estimation Problem

We analyse the use of the Cramer-Rao Lower Bound in the theory of localisation. It is important to define all terms used rigorously. The problem of localisation is defined as follows:

Definition 3: Let N ∈ N be the number of anchors (participants with known locations) and {Ai ∈ R² | i ∈ N<N} the set of their locations. Let θ ∈ R² be the location of the target. The sample space of the localisation problem is X = (Z1, …, ZN+2) ∈ R^(N+2), where Zi for 1 ≤ i ≤ N represents the range measurements and ZN+1, ZN+2 represent the estimated position coordinates of θ. Find an estimator δ(Z1, …, ZN) for θ.

We assume that the measurement errors are independent of each other and of the target location. Naturally, the range measurements depend on the target location.

Following Definition 3, let n be the number of anchors, Z = (Z0, …, Zn−1) ∈ R^n>0 be the set of range measurements, θ ∈ R² the true location, and the p.d.f. pi(xi; θ) as given in Definition 2. Because the measurements are independent, the joint p.d.f. is fn(x; θ) = ∏(i=0..n−1) pi(xi; θ). Consequently,

log fn(x; θ) = Σ(i=0..n−1) log( (βi^αi / Γ(αi)) (xi − b̃i)^(αi−1) e^(−βi(xi − b̃i)) ),  (3)

where b̃i = d(Ai, θ) − bi.

The gradient of log fn(x; θ) is shown in Eq. (4). It is defined for all θ satisfying Proposition 2.

∇θ log fn(x; θ) = Σ(i=0..n−1) ( (θx − Ai,x)(β(d(θ, Ai) − (Xi + bi)) + α − 1) / (d(θ, Ai)(d(θ, Ai) − (Xi + bi))) , (θy − Ai,y)(β(d(θ, Ai) − (Xi + bi)) + α − 1) / (d(θ, Ai)(d(θ, Ai) − (Xi + bi))) )^T  (4)

For θ → Ai it is easy to show that Eq. (4) does not have a unique limit; thus the gradient is not defined for θ = Ai. For d(θ, Ai) = Xi + bi, i.e. at the border of the domain, and bi > 0, the summand diverges (except for α = 1, where the term converges to (β/d(θ, Ai)) (θx − Ai,x, θy − Ai,y)^T).

Next, the entries of the FI matrix are:

Cov[ ∂ log fn(x; θ)/∂θi , ∂ log fn(x; θ)/∂θj ] = E( (∂ log fn(x; θ)/∂θi)(∂ log fn(x; θ)/∂θj); θ ) − E( ∂ log fn(x; θ)/∂θi; θ ) E( ∂ log fn(x; θ)/∂θj; θ ).

Since ∂ log fn(x; θ)/∂θi is not integrable for α ≠ 1, the FI matrix is not defined. Thus, we cannot give the CRLB for this problem.

VI. EVALUATION OF THE CRLB

We calculated the CRLB for a variety of anchor set-ups and compared the results of the FU Berlin Parallel Lateration-Algorithm Simulation and Visualization Engine (LS2) to the CRLB. The CRLB was computed according to Eq. (2.157) and Eq. (2.158) in H. C. So [12], which give a CRLB for TOA-based range measurements with Gaussian range measurement


Fig. 1. The RMSE of the CRLB and three algorithms: (a) CRLB, (b) NLLS, (c) MLE-Gauss, (d) MLE-Gamma.

error. We use a standard deviation of 50 distance units. We simulated 5000 measurements for each discrete location on a grid of 500 × 500 distance units. Five anchors are placed in a rather hard configuration, in which four anchors are almost on a line. The algorithms we simulated are non-linear least squares (NLLS) [17], [18], an MLE assuming a normally distributed error with standard deviation 25, and an MLE assuming a Gamma-distributed error with shape 4, rate 0.08, and offset 25. The MLEs use the estimate of NLLS as the starting point and proceed by the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm to maximise the likelihood.
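The MLE-Gamma step can be sketched as follows. This is our own illustrative code, not the LS2 implementation: it minimises the negative Gamma log-likelihood of the ranges with BFGS, using fixed made-up error values and a hand-picked feasible starting point (the paper starts from the NLLS estimate instead):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gamma

# Hypothetical anchor layout; shape/rate/offset match the paper's set-up.
anchors = np.array([[0.0, 0.0], [500.0, 0.0], [0.0, 500.0],
                    [500.0, 500.0], [250.0, 0.0]])
shape, rate, offset = 4.0, 0.08, 25.0

true_pos = np.array([200.0, 300.0])
d_true = np.linalg.norm(anchors - true_pos, axis=1)
eps_true = np.array([30.0, 40.0, 35.0, 28.0, 45.0])   # fixed illustrative errors
ranges = d_true + eps_true - offset                    # X_i = d + eps_i - b

def neg_log_likelihood(theta):
    dist = np.linalg.norm(anchors - theta, axis=1)
    eps = ranges - dist + offset       # must be positive inside the domain
    if np.any(eps <= 0):
        return 1e12                    # outside the domain of log f_n
    return -np.sum(gamma.logpdf(eps, shape, scale=1.0 / rate))

# Start inside the feasible set of Proposition 2, then maximise the likelihood.
est = minimize(neg_log_likelihood, x0=np.array([205.0, 295.0]), method="BFGS")
print(est.x)
```

The large finite penalty outside the domain mirrors the failure mode discussed later: if the starting point lies outside the feasible set, the Gamma log-likelihood is undefined and the search cannot proceed.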

A. Normal Distribution

In Fig. 1, we display the distribution of the root mean squared error (RMSE) to compare the variance of the estimation error to the CRLB. We perform this comparison based on a normally distributed error with a standard deviation of 25 units. This setting is best understood.

We visualise the differences between the algorithms and the CRLB in Fig. 2. We display the difference of the CRLB and the RMSE of the simulated algorithms. Areas coloured green indicate that the RMSE is at least 2 distance units worse than the CRLB, areas coloured yellow indicate that the RMSE is within 2 distance units of the CRLB (we use this interval to account for numerical inaccuracies in the implementation), and areas coloured red display areas in which the simulated algorithm is at least 2 distance units better than the CRLB. We should not expect to see any red areas.

The red areas in Fig. 2 correspond to significant improvements on So's CRLB. Especially the fact that NLLS and MLE-Gauss manage to improve on the CRLB close to the anchors and along the diagonal axis indicates that the CRLB refers to a different

Fig. 2. Difference between the CRLB and a simulated algorithm: (a) NLLS, (b) MLE-Gauss, (c) MLE-Gamma.

metric and cannot be applied to bound the RMSE of the position estimate. A correction is not relevant to us, since we are interested in error distributions relevant to indoor scenarios.

B. Gamma distribution

A normally distributed measurement error seldom occurs in indoor localisation. The measurements are also affected by NLOS effects. Moreover, we can never measure a distance that is too short; a too-short measurement indicates a systematic error.

Figure 3 displays the RMSE of each algorithm for a Gamma-distributed error. In comparison to Fig. 1, we notice that the larger average error decreases the performance, especially inside the convex hull of the anchor nodes. The performance of MLE-Gamma is better than the performance of NLLS and MLE-Gauss. MLE-Gauss still performs slightly better than NLLS, especially in the red areas in Fig. 3.

Figure 4 shows the distribution of localisations for a node at the location marked in yellow. This position was chosen where the CRLB suggests a large variance. Darker shades indicate that a position was estimated more often; white positions are never estimated. The size of the area corresponds to the variance. The magenta circle marks the centre of all estimates. The green circle has a radius of 50 units.

For NLLS we see the largest shaded area, i.e. the highest variance. For MLE-Gamma, the area is more concentrated around the centre of estimates. However, this considers only successful estimates. MLE-Gamma fails when the search starts outside the domain of log fn (see Proposition 2). The success rate is shown in Fig. 4d: we observe failure rates between 10 % (lighter, white areas) and 45 % (darker, green areas), especially in areas where the CRLB suggests poor performance. Most other localisation algorithms can always estimate a position. The initial position is estimated using NLLS, so in case of failure, this estimate can be used in practice with good results.


Fig. 3. Comparing RMSE for Gamma-distributed errors: (a) NLLS, (b) MLE-Gauss, (c) MLE-Gamma, (d) comparing NLLS to MLE-Gauss.

Fig. 4. Distribution of locations: (a) NLLS, (b) MLE-Gauss, (c) MLE-Gamma, (d) failure distribution of MLE-Gamma.

VII. CONCLUSION

The lessons are: (1) Experiments seem to refute published CRLBs, especially in areas close to anchors. (2) The indoor localisation problem is hard to model and analyse using statistical methods, since the calculations are involved or not possible. Simulation with LS2 avoids these problems and provides results that are more useful in practice. (3) The results of a statistical analysis do not convey the information of interest. We are usually interested in minimising the expected absolute localisation error, but the CRLB provides a lower bound on the variance of the estimated position. Both values are related by the mean squared error (MSE), but the relation is seldom exploitable. (4) An MLE fitted to the measured error distribution gives excellent results in simulation.

Future work includes testing the MLE on real data and exploring improvements in fitting actual error distributions. In practice, finding and modelling the error distribution seems to be the main obstacle.

REFERENCES

[1] D. Torrieri, “Statistical theory of passive location systems,” IEEE Transactions on Aerospace and Electronic Systems, vol. 20, no. 2, pp. 183–198, Mar. 1984.

[2] J. Chaffee and J. Abel, “GDOP and the Cramer-Rao bound,” in PLANS. IEEE, 1994, pp. 663–668.

[3] R. M. Vaghefi and R. M. Buehrer, “Cooperative sensor localization with NLOS mitigation using semidefinite programming,” in WPNC. IEEE, 2012, pp. 13–18.

[4] I. Zisking and M. Wax, “Maximum likelihood localization of multiple sources by alternating projection,” IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 36, no. 10, pp. 1553–1560, Oct. 1988.

[5] S. O. Dulman, A. Baggio, P. J. Havinga, and K. G. Langendoen, “A geometrical perspective on localization,” in MELT’08. ACM Press, 2008, pp. 85–90.

[6] B. Yang and J. Scheuing, “Cramer-Rao bound and optimum sensor array for source localization from time differences of arrival,” in Proc. IEEE ICASSP ’05, vol. 4. IEEE, 2005, pp. iv/961–iv/964.

[7] R. Malaney, “A location enabled wireless security system,” in Proc. IEEE GLOBECOM ’04, vol. 4, 2004, pp. 2196–2200.

[8] H. Shi, X. Li, Y. Shang, and D. Ma, “Cramer-Rao bound analysis of quantized RSSI based localization in wireless sensor networks,” in Proc. 11th International Conference on Parallel and Distributed Systems. IEEE, 2005.

[9] T. Jia and R. M. Buehrer, “A new Cramer-Rao lower bound for TOA-based localization,” in Proc. IEEE MILCOM 2008, 2008, pp. 1–5.

[10] M. L. McGuire and K. N. Plataniotis, Accuracy Bounds for Wireless Localization Methods. Hershey, NY: Information Science Reference, 2009, ch. 15, pp. 380–405.

[11] L. Cheng, S. Ali-Loytty, R. Piche, and L. Wu, Mobile Tracking in Mixed Line-of-Sight/Non-Line-of-Sight Conditions: Algorithms and Theoretical Lower Bound. Hoboken, NJ, USA: John Wiley & Sons, 2012, ch. 21, pp. 685–708.

[12] H. C. So, Source Localization: Algorithms and Analysis. Hoboken, NJ, USA: John Wiley & Sons, 2012, ch. 2, pp. 25–66.

[13] E. L. Lehmann and G. Casella, Theory of Point Estimation, 2nd ed. New York: Springer, 1998.

[14] C. R. Rao, “Information and the accuracy attainable in the estimation of statistical parameters,” Bulletin of the Calcutta Mathematical Society, vol. 37, pp. 81–89, 1945.

[15] H. Cramer, Mathematical Methods of Statistics. Princeton, NJ, USA: Princeton University Press, 1946.

[16] T. Hillebrandt, H. Will, and M. Kyas, “The membership degree Min-Max localisation algorithm,” 2013, accepted for publication in Journal of Global Positioning Systems.

[17] S. Venkatesh and R. M. Buehrer, “A linear programming approach to NLOS error mitigation in sensor networks,” in IPSN, J. A. Stankovic, P. B. Gibbons, S. B. Wicker, and J. A. Paradiso, Eds. ACM Press, 2006, pp. 301–308.

[18] I. Guvenc, C.-C. Chong, and F. Watanabe, “Analysis of a linear least-squares localization technique in LOS and NLOS environments,” in VTC. IEEE, 2007, pp. 1886–1890.

[19] S. A. Zekavat and R. M. Buehrer, Eds., Handbook of Position Location. Hoboken, NJ, USA: John Wiley & Sons, 2012.


Efficient and Adaptive Generic Object Detection Method for Indoor Navigation

Nimali Rajakaruna
Department of Electrical and Computer Engineering
Curtin University
Perth, Western Australia
[email protected]

Iain Murray
Department of Electrical and Computer Engineering
Curtin University
Perth, Western Australia
[email protected]

Abstract—Real-time object detection and avoidance is an important part of indoor and outdoor wayfinding and navigation for people with vision impairment in unfamiliar environments. The objects and their arrangement in both indoor and outdoor settings occasionally change. Even stationary objects, such as furniture, may move occasionally. Additionally, providing detailed geometric models for all objects in a single room can be a very difficult and computationally intensive task. When an object is replaced by another of similar function, completely new models may have to be developed. Hence, there is a need for a highly efficient method of detecting generic objects, which will help in detecting objects in a changing environment. This paper presents an image-based object detection algorithm based on stable features like edges and corners instead of appearance features (color, texture, etc.). A Probabilistic Graphical Model (PGM) is used for feature extraction, and a generic geometric model is built to detect objects by combining edges and corners. Furthermore, additional geometric information is employed to distinguish doors from other objects with similar size and shape (e.g. bookshelf, cabinet, etc.). Current research shows that generic object recognition is one of the most difficult and least understood tasks in computer vision.

Keywords-Generic Objects; Hidden Markov Models; Probabilistic Graphical Models

I. INTRODUCTION

The problem of generic object recognition (object categorization) has traditionally been a difficult problem for computer vision systems. The main reason for this difficulty is the variability of shape within a class: different objects vary widely in appearance, and it is difficult to capture the essential shape features that characterize the members of one category and distinguish them from another. Early vision systems [1-3] could perform specific object recognition reasonably well but did not fare as well on identifying the natural class of an object. Recent research work [4] has led to systems that can learn a representation for different object classes and achieve good generic object class recognition. Most of the research work on object detection has been dominated by the use of appearance-based methods for object recognition. Among the most popular of these was the eigenface method [1], which forms the basis of numerous appearance-based object recognition schemes. Pentland et al. [5] approach the problem of face recognition under general viewing conditions with a view-based multiple-individual eigenspace technique. A maximum-likelihood estimation framework was introduced by Moghaddam et al. [5] with the idea of using probability densities to formulate visual search and target detection. Black and Jepson address general affine transformations [6] by defining a subspace constancy assumption for eigenspaces. They formulate a continuous optimization problem to obtain reconstructions having the same brightness as the corresponding image pixels. In addition, the authors proposed a multi-scale eigenspace representation and a coarse-to-fine matching strategy in order to account for large affine transformations between the eigenspace and the image. This work was later extended with a robust principal component analysis method that can be used to automatically learn linear models from data that may be contaminated by outliers. The current literature on object recognition has two main approaches: 1) feature-based methods, which use spatial arrangements of extracted features such as edge elements or junctions, and 2) brightness-based methods, which make direct use of pixel brightness. Early work on feature-based methods used Fourier descriptors. Huttenlocher et al. developed methods for shape matching based on edge detection and the Hausdorff distance. This paper focuses on a feature-based implementation, which uses a wavelet transformation for feature extraction. A standard approach for multi-class object detection uses one detector per object class being detected, where the statistics of both object appearance and “non-object” appearance are represented using a product of histograms. Each histogram represents the joint statistics of a subset of 2D wavelet coefficients and their position on the object. The detection is performed by exhaustive search in scale and position.


The proposed method employs a wavelet transformation for feature extraction in conjunction with a probabilistic (hidden Markov) model to estimate contour position, deformation, color and other hidden aspects. It generates a maximum a posteriori estimate given observations in the current frame and prior contour information from previous frames. The HMM provides globally optimal solutions for contour adjustment via the Viterbi algorithm.
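The Viterbi step mentioned above can be illustrated with a generic textbook implementation (our own sketch, not the authors' code): given HMM parameters (A, B, π) and an observation sequence, it recovers the globally most likely hidden state sequence.

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """pi: (N,) initial probs, A: (N,N) transitions,
    B: (N,M) discrete emission probs, obs: sequence of symbol indices."""
    N, T = len(pi), len(obs)
    delta = np.zeros((T, N))           # best path probability ending in state j
    psi = np.zeros((T, N), dtype=int)  # back-pointers
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        trans = delta[t - 1, :, None] * A        # (i, j): from state i to j
        psi[t] = trans.argmax(axis=0)
        delta[t] = trans.max(axis=0) * B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):                # backtrack along psi
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

# Toy two-state HMM with made-up parameters.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
print(viterbi(pi, A, B, [0, 0, 1, 1]))  # -> [0, 0, 1, 1]
```

The paper's model uses continuous (Gaussian) emissions; replacing the lookup `B[:, obs[t]]` with a Gaussian density evaluation gives that case.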

II. BACKGROUND

A. Wavelet Transformation

Using wavelets for feature-based extraction and representation of images provides an efficient solution to the given problem. A number of features are extracted from raw images based on the wavelet transform of [7]. Features of images, such as the edges of an object, can be projected onto the wavelet coefficients in low-pass and high-pass sub-bands [7]. Even though there are multiple approaches to object classification in images, this representation provides an efficient, globally optimal solution. Features and the spatial relationships among them play the more important role in characterizing image contents, because they convey more semantic meaning [8]. In this paper, a method based on wavelet coefficients in low-pass bands is proposed for image classification. This decision was taken due to the application of the system to indoor navigation with IMUs. After an image is decomposed by the wavelet transform, its features can be characterized by the distribution of histograms of wavelet coefficients. The coefficients are projected onto the x and y directions respectively. For different images, the distribution of histograms of wavelet coefficients in the low-pass bands is substantially different, whereas the distribution in the high-pass bands is not, which makes classification based on the latter unreliable. This paper therefore presents a method for image classification based on wavelet coefficients in the low-pass bands only. The nodes can then be represented by the distribution of histograms of these wavelet coefficients. Most applications represent images using low-level visual features, such as color, texture, shape and spatial layout, in a very high-dimensional feature space, either globally or locally. However, the most popular distance metrics, for example the Euclidean distance, cannot guarantee that the contents are similar even if their visual features are very close in the high-dimensional feature space.

B. Hidden Markov Model based Contour Detection

The proposed method is based on a Hidden Markov Model (HMM). This model has two advantages: it is no longer necessary to select training data, and it yields a new approach to generic object recognition.

Hidden Markov Models are a widespread approach to probabilistic sequence modeling: they can be viewed as stochastic generalizations of finite-state automata, where both transitions between states and generation of output symbols are governed by probability distributions [9]. Originally, these models were almost exclusively applied in the speech recognition context, and it is only in the last decade that they have been widely used for several other applications, such as handwritten character recognition, DNA and protein modeling, gesture recognition, and behavior analysis and synthesis. Even though HMMs have been widely applied to classifying planar objects, their use in generic object recognition has been poorly investigated, and only a few papers exploring this research direction have appeared in the literature. In this paper an HMM-based approach is proposed, which explicitly considers all the information contained in the object. The image is scanned in a raster fashion with a square window of fixed size, obtaining a sequence of overlapping sub-images. For each sub-image, wavelet coefficients are computed, discarding the less significant ones. The collected wavelet features connected with each sub-image are then modeled using an HMM. Here the observation and hidden layers are designed according to the difficulty of generic object identification. Weak classifiers are used to model the hidden layers. In the modeling, particular care is devoted to the initialization of the training procedure, which is a crucial factor because of the locality of the optimization procedure, and to the model selection issue, i.e. the problem of choosing the topology and the number of states of the HMM. A strategy similar to that proposed in this paper has recently been applied by the authors in the context of face recognition [10], showing promising results.
Assuming a priori equiprobable classes, an unknown sequence is classified into the class whose model shows the highest probability (likelihood) of having generated this sequence (the well-known maximum likelihood (ML) classification rule). A discrete-time Hidden Markov Model can be viewed as a Markov model whose states cannot be explicitly observed: each state has an associated probability distribution function, modeling the probability of emitting symbols from that state. More formally, an HMM is defined by the following entities [11]:

• S = {S1, S2, …, SN}, the finite set of possible hidden states;

• the transition matrix A = {aij}, 1 ≤ j ≤ N, representing the probability of going from state Si to state Sj:

aij = P(q(t+1) = Sj | qt = Si), 1 ≤ i, j ≤ N,  (1)

with aij ≥ 0 and Σ(j=1..N) aij = 1;

• the emission matrix B = {b(o | Sj)}, indicating the probability of emitting the symbol o when the system state is Sj; in this paper continuous HMMs were employed: b(o | Sj) is represented by a Gaussian distribution, i.e.

b(o | Sj) = N(o | μj, Σj),  (2)

where N(o | μ, Σ) denotes a Gaussian density with mean μ and covariance Σ, evaluated at o;

• π = {πi}, the initial state probability distribution, representing the probabilities of the initial states, i.e.

πi = P(q1 = Si), 1 ≤ i ≤ N,  (3)

with πi ≥ 0 and Σ(i=1..N) πi = 1.

For convenience, we denote an HMM as a triplet λ = (A, B, π). The training of the model, given a set of sequences [12], is usually performed using the standard Baum-Welch re-estimation, which determines the parameters (A, B, π) that maximize the probability P(O | λ). In this paper the training procedure is stopped after the convergence of the likelihood. The evaluation step, i.e. the computation of the probability P(O | λ) given a model λ and a sequence O to be evaluated, is performed using the forward-backward procedure. The value of the system is that it performs satisfactorily even if the number of views per object used for training is drastically reduced. This will help the intended application in an indoor navigation setup. HMMs have been widely applied in several computer vision and pattern recognition problems, whereas a systematic analysis of their behavior in this context is missing in the literature.
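The evaluation step P(O | λ) described above can be sketched with the generic forward procedure (our own code, shown for a discrete-emission HMM; the paper's Gaussian-emission case follows by replacing the lookup `B[:, o]` with a Gaussian density):

```python
import numpy as np

def forward(pi, A, B, obs):
    """Return P(O | lambda) for an observation sequence of symbol indices."""
    alpha = pi * B[:, obs[0]]            # initialization: alpha_1(i) = pi_i b_i(o_1)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # induction step
    return float(alpha.sum())            # termination

# Toy two-state HMM with made-up parameters.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
print(forward(pi, A, B, [0, 1, 0]))  # -> 0.10893
```

For long sequences a scaled or log-space variant is used in practice to avoid underflow.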

III. PROPOSED METHOD

The strategy used to obtain the data sequence from an object image consists of three steps. First, the image is converted from color to grey-level format. This is important to assess the capability of the proposed approach in capturing the geometry of the object, rather than its color. In the second step, a sequence of sub-images of fixed dimension is obtained by sliding a square window of fixed size, with a predefined overlap, over the object image in a raster-scan fashion. In this way we capture relevant information about the local geometry of the object to be encoded: the sequence of subsequent windows summarizes the local object structure. Finally, the third step consists in applying the wavelet transform to each gathered sub-image. The proposed algorithm calculates the coefficients representing the image with a normalized two-dimensional Haar basis, sorting these coefficients in order of decreasing magnitude. Subsequently, the first M coefficients (i.e., the coefficients with the highest magnitude) are retained, performing a lossy image (sub-image) compression. As in image compression, the retained coefficients represent the most significant information.
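The third step can be sketched as follows, with our own helper names rather than the paper's code: a full normalized 2D Haar transform of a sub-image, followed by retention of the M largest-magnitude coefficients as the local descriptor.

```python
import numpy as np

def haar_1d(v):
    """Full multi-level 1D Haar transform (orthonormal)."""
    v = v.astype(float).copy()
    n = len(v)
    while n > 1:
        a = (v[0:n:2] + v[1:n:2]) / np.sqrt(2.0)   # averages (low-pass)
        d = (v[0:n:2] - v[1:n:2]) / np.sqrt(2.0)   # details (high-pass)
        v[: n // 2], v[n // 2 : n] = a, d
        n //= 2
    return v

def haar_2d(block):
    """Standard 2D Haar decomposition: transform rows, then columns."""
    out = np.apply_along_axis(haar_1d, 1, block.astype(float))
    return np.apply_along_axis(haar_1d, 0, out)

def top_m_descriptor(block, M):
    """Indices and values of the M largest-magnitude Haar coefficients."""
    coeffs = haar_2d(block).ravel()
    idx = np.argsort(np.abs(coeffs))[::-1][:M]
    return idx, coeffs[idx]

block = np.arange(64, dtype=float).reshape(8, 8)   # toy 8x8 sub-image
idx, vals = top_m_descriptor(block, M=10)
print(idx.shape, vals.shape)                       # (10,) (10,)
```

Keeping only the top-M coefficients is exactly the lossy compression described: for a constant block, for instance, a single coefficient (the overall average) carries all the energy.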

Figure 1: Algorithmic components for the proposed architecture. Input: video stream. Image level: color conversion to gray format, sub-image acquisition, wavelet transform for each sub-image, feature extraction using the wavelet transformation. Object level: HMM for contour detection with weak classifiers.

Hence, we use them to recognize the objects. In particular, the number of retained coefficients determines the dimensionality of the observation vector (i.e. the local descriptor), while the length of the sequence is determined by the number of sub-images gathered. By applying this step to all the sub-images of the sequence, we finally obtain the actual observation sequence. Its dimensionality will be M · T, where M is the number of wavelet coefficients retained, and T is the number of sub-images gathered in the scanning operation. The remaining problem is to identify an object given an aspect. The basic idea is to perform a ‘‘decreasing’’ learning, starting each training session from an informative situation derived from the previous training phase. More specifically, the procedure consists in starting the model training with a large number of states, running the estimation algorithm, and, after convergence, evaluating the chosen model selection criterion for that model. In this case the BIC criterion was used. Then, the importance of each model state is determined using the stationary distribution of the Markov chain associated with the HMM. Finally, the ‘‘least probable’’ state is pruned, and this


configuration is taken as the initial situation from which to start the training procedure again. In this way, each training session is started from a ‘‘nearly good’’ estimate. The use of weak classifiers helps to differentiate generic objects from specific objects. The key component of the object representation is the weak classifiers. A weak classifier can be regarded as a conjunction of a set of single-feature classifiers, where a single-feature classifier is defined by an edge feature (a location and orientation) along with a tolerance threshold and its parity. A single-feature classifier returns true if the distance from the specified location to the closest edge with the specified orientation is within tolerance (i.e. it should be sufficiently small if the parity is positive and sufficiently large if the parity is negative). A weak classifier returns true if all its constituent single-feature classifiers return true. This permits obtaining better estimates for the model, increasing the efficacy of the proposed approach. Moreover, by starting from a good situation, the number of iterations required by the training algorithm to converge is reduced, resulting in a less computationally demanding procedure. Learning is finally performed using the standard Baum-Welch procedure, stopping after likelihood convergence.
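The weak classifier described above can be sketched as follows; the data structures and names are our own illustration, not the paper's implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class SingleFeature:
    location: tuple      # (x, y) where the edge is expected
    orientation: int     # quantized edge orientation label
    tolerance: float
    parity: int          # +1: distance must be small, -1: must be large

    def fires(self, edges):
        """edges: list of ((x, y), orientation) pairs detected in the image."""
        same = [e for e in edges if e[1] == self.orientation]
        if not same:
            return self.parity < 0   # no such edge anywhere counts as "far"
        d = min(math.dist(self.location, p) for p, _ in same)
        return d <= self.tolerance if self.parity > 0 else d > self.tolerance

def weak_classifier(features, edges):
    """Conjunction: true only if every single-feature classifier fires."""
    return all(f.fires(edges) for f in features)

# Toy edge map and a weak classifier of two single-feature tests.
edges = [((10.0, 10.0), 0), ((40.0, 12.0), 1)]
features = [SingleFeature((12.0, 9.0), 0, tolerance=5.0, parity=+1),
            SingleFeature((12.0, 9.0), 1, tolerance=20.0, parity=-1)]
print(weak_classifier(features, edges))
```

The conjunction makes each weak classifier a loose geometric template: it fires only when every expected edge is close enough (positive parity) and every forbidden edge is far enough away (negative parity).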

IV. CONCLUSION

The proposed method delivers a novel approach to generic object detection. The next step is to validate and test the algorithm using real images.

REFERENCES

1. Manjunath, B.S., R. Chellappa, and C. von der Malsburg. A feature based approach to face recognition. in Computer Vision and Pattern Recognition, 1992. Proceedings CVPR'92., 1992 IEEE Computer Society Conference on. 1992. IEEE.

2. Mel, B.W., SEEMORE: combining color, shape, and texture histogramming in a neurally inspired approach to visual object recognition. Neural computation, 1997. 9(4): p. 777-804.

3. Olvera-López, J.A., J.A. Carrasco-Ochoa, and J.F. Martínez-Trinidad, A new fast prototype selection method based on clustering. Pattern Analysis and Applications. 13(2): p. 131-141.

4. Fe-Fei, L., R. Fergus, and P. Perona. A Bayesian approach to unsupervised one-shot learning of object categories. in Computer Vision, 2003. Proceedings. Ninth IEEE International Conference on. 2003. IEEE.

5. Moghaddam, B., T. Jebara, and A. Pentland, Bayesian face recognition. Pattern Recognition, 2000. 33(11): p. 1771-1782.

6. Black, M.J. and A.D. Jepson, Apparatus and method for identifying and tracking objects with view-based representations, 2003, Google Patents.

7. Antonini, M., et al., Image coding using wavelet transform. Image Processing, IEEE Transactions on, 1992. 1(2): p. 205-220.

8. Li, H., B. Manjunath, and S.K. Mitra, Multisensor image fusion using the wavelet transform. Graphical models and image processing, 1995. 57(3): p. 235-245.

9. Juang, B.H. and L.R. Rabiner, Hidden Markov models for speech recognition. Technometrics, 1991. 33(3): p. 251-272.

10. Bicego, M., U. Castellani, and V. Murino. Using Hidden Markov Models and wavelets for face recognition. in Image Analysis and Processing, 2003. Proceedings. 12th International Conference on. 2003. IEEE.

11. Rabiner, L.R., A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 1989. 77(2): p. 257-286.

12. Arandiga, F., et al., Edge detection insensitive to changes of illumination in the image. Image and Vision Computing, 2010. 28(4): p. 553-562.



2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31st October 2013

Hidden Markov Based Hand Gesture Classification and Recognition Using an Adaptive Threshold Model

Jeroen Mechanicus∗, Vincent Spruyt∗†, Marc Ceulemans∗, Alessandro Ledda∗ and Wilfried Philips†

∗Faculty of Applied Engineering, Electronics-ICT, University of Antwerp

Paardenmarkt 92, 2000 Antwerpen, Belgium; web: http://www.cosys-lab.be/

†TELIN-IPI-iMinds, Ghent University

St. Pietersnieuwstraat 41, 9000 Gent, Belgium; web: http://telin.UGent.be/ipi/

Abstract—Traditional approaches to gesture recognition often experience an inherent time delay, as temporal gestures, such as a waving hand, are only recognized once the gesture has been completed. Furthermore, most systems use specialized hardware or depth cameras in order to detect and segment hands.

We propose a robust and complete, real-time gesture recognition system that can be used in unconstrained situations without any time delay, and that only uses a simple, monocular webcam. Gestures are recognized during their execution, allowing for real-time interaction. Furthermore, our system can cope with changing illumination and moving backgrounds, and is able to automatically recover from tracking errors. Gestures can vary in shape, duration and velocity, and are recognized with low computational cost.

In this paper, we show the robustness of our algorithm by comparing our results with traditional gesture recognition approaches, and illustrate its effectiveness in real-life situations by using it to control the volume of an Arduino based car radio.

I. INTRODUCTION

Gesture recognition and hand pose detection are amongst the most challenging tasks in current human-computer interaction (HCI) research. With the advent of low cost depth sensing devices, research has mostly shifted from monocular object detection to interpretation of depth maps. However, due to their dependency on infrared signals, these devices tend to fail in direct sunlight, and cannot be used in critical environments where other infrared hardware co-exists. Therefore, a robust and real-time gesture recognition system, able to recognize gestures using a simple monocular camera, would overcome these problems, and could still be combined with depth-sensing information when available.

Gestures can either be static or dynamic. Static gestures occur when the user assumes a certain pose or hand configuration. Recognizing the exact configuration reduces to a classification problem, and can be solved by means of spatial classification and traditional pattern recognition approaches. Dynamic gestures, on the other hand, represent a temporal motion pattern, such as a waving hand. While the problem of static hand pose recognition has received a lot of attention in the research community [1], dynamic gesture recognition is still considered a challenging task [2] due to the difficulties of

Research funded by a PhD grant of the Institute for the Promotion of Innovation through Science and Technology in Flanders (IWT-Vlaanderen).

isolating meaningful gestures from continuous hand motion. Gesture spotting is a difficult challenge, mainly due to the spatial variability of hand gestures, since the same gesture could have a very distinct appearance when executed by different users, or even when executed by the same user at different instances in time. Most gesture recognition techniques therefore employ a list of empirically defined constraints and rules to aid in the task.

Yang and Ahuja [3] proposed the well known motion template technique, in which a time delayed neural network is used for gesture classification. The main disadvantage of their approach, however, is the introduced time delay, which is undesirable in real-time solutions.

Yoon et al. [4] proposed the use of a Hidden Markov Model (HMM) to model the spatio-temporal variance that is inherent to human gestures. They employ simple color and motion detection to extract the hand locations, which are then clustered to obtain hand trajectories. The resulting trajectories are classified by an HMM. While their results illustrate the potential of an HMM based approach, the proposed method requires the user to intentionally stop moving his hands for several seconds, right before and after the gesture. Furthermore, due to their dependency on a simple color based blob detector, the proposed solution tends to fail in uncontrolled lighting situations.

In speech recognition, word spotting is usually accomplished by classifying the sequence of words by a separate HMM, called a garbage model, that is trained with acoustic non-keyword patterns [5]. If the likelihood of this garbage model is higher than the likelihood of the normally trained models, the keyword is discarded.

Lee and Kim [6] introduced a technique to automatically detect the beginning and end of a gesture in continuous hand motion sequences. Such temporal segmentation is often referred to as "gesture spotting", and is one of the most important tasks of a robust gesture recognition system. However, their proposal can be classified as a backward spotting technique, which inherently introduces a delay in detection. Backward spotting methods start classifying a gesture by means of the Viterbi algorithm [7] as soon as a gesture endpoint has been detected, while forward spotting algorithms continuously try to classify the current gesture, until an endpoint has been detected.

978-1-4673-1954-6/12/$31.00 ©2012



Since an almost infinite set of non-gesture patterns could be obtained, it is difficult to train a garbage model for gesture recognition purposes. Instead, Lee and Kim propose to rely on the internal segmentation property of Hidden Markov Models, which says that the states and transitions in a trained HMM represent sub-patterns of a larger gesture. Therefore, the garbage model is an HMM that is trained on all states copied from the individual gesture models. This garbage model then yields an increased likelihood for any kind of gesture and non-gesture that is a combination of any of the states in any of the trained Hidden Markov Models, in any order. In order to achieve this, the threshold model is defined as an ergodic HMM, in which all states are fully connected to each other.

However, the main disadvantage of this approach is the inherent time delay between the start of a gesture and the recognition of the gesture. Only after the gesture endpoint has been spotted can the Viterbi algorithm be used to find the model that best explains the sequence of observations between the detected start point and endpoint.

Elmezain and Al-Hamadi [8] proposed an HMM based gesture recognition system that is able to automatically determine the start points and endpoints of a temporal gesture. However, their method assumes that each gesture ends with a straight line that can be used as a zero-codeword. Furthermore, as many gestures contain straight lines, a constant velocity assumption is made, to avoid breaking up single gestures into their parts. Finally, they use a depth map that is obtained by a stereo camera setup, in order to aid hand detection and segmentation.

Recently, Elmezain et al. [9] proposed an improvement upon the threshold model idea, by introducing a forward gesture spotting method that is able to execute hand gesture segmentation and recognition simultaneously, therefore eliminating any time delay. Once a start point of a gesture is detected, the segmented part of the gesture, up till the current hand location, is recognized accumulatively. Once the endpoint of the gesture has been detected, the complete segment is classified again by the Viterbi algorithm. However, they use a depth sensor to aid in hand detection and segmentation. This greatly increases tracking stability and thus simplifies the gesture recognition problem.

Kurakin et al. [10] proposed an action graph based technique for gesture recognition. Action graphs share similar robust properties with the standard HMM, but require less training data. On the other hand, inference in action graphs can be slower than inference in their HMM counterparts. In their work, Kurakin et al. use a depth sensor to obtain a depth map, which is thresholded to obtain an accurate hand segmentation.

In this paper, we propose a complete, real-time gesture recognition system that is able to detect and track hands in unconstrained environments, using a simple monocular camera. Inspired by the work of Elmezain et al. [9], we propose several enhancements to current gesture spotting and classification methods, resulting in a robust and real-time gesture recognition system that can be used in unconstrained situations without any time delay. Furthermore, our system can cope with changing illumination and moving backgrounds, and is able to automatically recover from tracking errors. Gestures can vary in shape, duration and velocity, and are recognized with low computational cost.

In order to test our proposed solution in a real-life situation, we implemented a hardware module that allows a vehicle driver to control his radio using natural gestures. Zobl et al. [11] suggested that such a system could reduce the distractions of a driver, resulting in fewer vehicle crashes. Our radio module combines an Si4703 FM tuner module with an embedded Arduino platform.

The remainder of this paper is organized as follows: Section II describes the hand detection and tracking framework used to construct motion paths. Section III explains our HMM based approach to classify the motion trajectories into distinct gestures. In Section IV, the hardware setup and integration with an FM radio module is described. Finally, Section V discusses the evaluation and results of our approach.

II. HAND DETECTION AND TRACKING

A. Hand detection

Hand detection in monocular video poses a challenging problem because of the high number of degrees of freedom in a human hand [12]. Due to its articulated nature, a hand can take on almost any shape, preventing traditional object detection methods, such as Haar classifiers, from accurately learning a general hand shape.

In this paper, we build upon our earlier work as described in [13], where we proposed a random forest based hand detector, capable of detecting human hands in real-time video sequences. The detector is scale and rotation invariant, and can be used to generate hand hypotheses to be tracked by a particle filter. A scale invariant feature detector [14] is used to obtain a vector of image patches. These image patches represent areas of maximum entropy within the image, and therefore contain the most information.

Image patches are then divided into a 3 × 3 grid, and six feature descriptors are calculated for each cell in this grid. Three of these descriptors are color based, while the other three are texture descriptors. The color based descriptors represent histograms in a non-linear color space that is suited for skin detection [15], while the texture based descriptors consist of a Local Binary Pattern (LBP) descriptor [16], a normalized orientation histogram, and a simplified FREAK descriptor [17].

All feature descriptors are normalized by the scale of the image patch that resulted from the feature detector, and are rotated by the dominant gradient orientation within this patch. Similarly, the 3 × 3 grid is rotated by this dominant orientation, in order to obtain a scale and rotation invariant descriptor.

During training of the random forest classifier, fifteen decision trees are learned to classify these image patches. Each node within each decision tree splits the dataset based on a randomly selected descriptor. By accumulating the classification result of all trees, a low-bias, low-variance classifier is obtained.

For each image patch in the training set, the offset vector to the centroid of the hand is stored. During classification of a new image patch, all decision trees then cast a probabilistic vote on the hand's centroid location. The resulting Hough voting map, as illustrated in Figure 1, can then be used to obtain hand location hypotheses.




(a) Original image (b) Hough voting map

Fig. 1. Illustration of the Hough voting process [13]

B. Hand tracking

Each hand hypothesis, generated by the hand detector, is tracked using a particle filter framework. During tracking, a simple linear classifier continuously rejects false positive detections, based on temporal information such as the particle filter's variance and average velocity.

Hand tracking is based on our earlier work, as described in [18], where the Hough probability map is directly incorporated into the observation model of the particle filter.

A Bayesian skin classifier is used to generate a skin likelihood map, as described in [19]. This skin likelihood is combined with a simple motion detection by means of frame differencing. Furthermore, color distributions for skin and non-skin regions are updated online, to allow the particle filter to adapt to changing lighting conditions.

While the offline trained Bayesian skin classifier is calculated in RGB space, online color statistics are calculated in HSV space. Combining both color spaces increases robustness to non-uniform illumination [18].

Finally, optical flow [20] is incorporated into the motion model of the particle filter, to increase robustness in case of rapid, non-linear motion.

A partitioned sampling method is used to efficiently sample the search space, defined by the state partitions S1 = {x, y} and S2 = {width, height}. By solving two two-dimensional problems instead of a single four-dimensional problem, our tracking solution only needs about fifty particles to robustly track multiple hands.

III. GESTURE CLASSIFICATION

A. Hidden Markov Models

If a stochastic process has the property that the conditional probability distribution of its future state, conditioned on the current state, is independent of previous states, it can be modeled with a Markov Model, such as a discrete Markov chain.

In these traditional Markov Models, the previous and current states are readily observable, and the model is simply defined by its state transition probabilities. However, in many situations, such as speech recognition or gesture recognition, the states themselves are not readily observable. Instead, the output, depending on the state, is visible, while the state itself remains hidden. These situations can be modeled by a Hidden Markov Model (HMM), in which each state contains a conditional probability distribution over all output values.

Thus, a Hidden Markov Model tries to model the joint distribution of states and observations. Relying on Bayesian theory, this corresponds to modeling the prior distribution of the hidden states, and the conditional distribution of observations given states. The former distribution represents the transition probabilities, while the latter represents the emission probabilities.

Therefore, an HMM can be defined as follows [21]:

• A set of N states s1, . . . , sN;
• A set of T observations O = o1, . . . , oT;
• An N × N state transition matrix A = {aij}, where aij is the probability of a transition from state si to state sj;
• An observation probability matrix B = {bjk}, where bjk is the emission probability of generating symbol ok from state sj;
• Initial state probabilities Π = {πj}, j = 1, . . . , N.

Training the HMM corresponds to estimating the HMM parameters, namely the transition and emission probabilities, based on training data which consists of nothing but observation sequences. Two widely known methods to train an HMM are the Baum-Welch algorithm and the Viterbi training algorithm. The former is an implementation of a generalized Expectation-Maximization method that results in a maximum likelihood estimate of the parameters, while the latter updates the HMM parameters such that the probability of the best HMM state sequence for each training sample is maximized. While both methods have their merits, literature shows that the Baum-Welch algorithm often outperforms the Viterbi training method [22].

In order to classify a gesture, assuming we obtained a sequence of observations, the Forward-Backward algorithm [8] is used. The Forward-Backward algorithm is a dynamic programming based inference algorithm for HMMs, that calculates the posterior probabilities of each state, given a sequence of observations. Therefore, this method can be used to evaluate the probability that a particular sequence of symbols is produced by a particular model.
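As an illustration of this evaluation step, a minimal scaled forward pass (the half of Forward-Backward needed to score a sequence against a model) can be written as below; the function name and the per-step scaling scheme are ours, not from the paper:

```python
import numpy as np

def forward_log_likelihood(obs, A, B, pi):
    """Scaled forward algorithm: returns log P(obs | model).
    A: (N, N) transition matrix, B: (N, K) emission matrix,
    pi: (N,) initial state probabilities, obs: symbol indices in [0, K)."""
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha = alpha / alpha.sum()       # normalize to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate, then weight by emission
        s = alpha.sum()
        log_p += np.log(s)
        alpha /= s
    return log_p
```

Each gesture model would score the observed symbol sequence this way, and the model with the highest log-likelihood (above the threshold model's) wins.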

Finally, several HMM topologies exist, two of which are used in this paper. A fully connected structure, in which any state can be reached from any other state, is called an ergodic HMM. A second widely used topology is the left-right model, where each state can go to itself or any of the following states. The third topology is the left-right banded model, in which the current state can only reach itself, or the next state. In this paper, only the first and the third topology are of interest.

B. Feature extraction

Based on the tracked hand location, several features could be used to train the HMM for gesture classification. Three widely used features are the hand location itself, the motion's orientation, and the hand velocity. Earlier research showed that orientation yields the best results in terms of accuracy and performance [21], [4].




A hand trajectory can be described as a sequence of centroid locations C = c0, c1, . . . , ct, each defined as a position ct = (xt, yt) at time t. The orientation between consecutive centroid locations can then easily be calculated as

θt = arctan((yt − yt−1) / (xt − xt−1))  (1)

Due to the discrete nature of the HMM, the orientations are discretized into bins. A large number of bins would allow for very fine grained gestures, but increases the dimensionality of the problem, and thus the risk of overfitting. A small number of bins, on the other hand, would allow for better generalization, which decreases the risk of overfitting, but also decreases the discriminative power of the model.

In literature, the number of bins used varies from five [6] to eighteen [9]. Theoretically, the maximum number of bins allowed is only limited by the amount of training data available, as an infinite amount of training data would completely overcome the overfitting problem.

In our work, eight bins were empirically determined to be a robust choice, while small changes to this parameter do not seem to have any significant impact on the performance of the system.
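Computing the orientation of equation (1) and quantizing it into eight bins can be sketched as follows. The bin layout (bin 0 centred on "right", counted counter-clockwise so that 1 is right-up and 2 is up, matching the coding later used for Table I) and the function name are our assumptions:

```python
import math

def quantize_orientation(p_prev, p_cur, n_bins=8):
    """Map the motion direction between two centroids to a discrete symbol.
    Assumes y grows upward; for image coordinates (y grows downward),
    negate dy first."""
    dx = p_cur[0] - p_prev[0]
    dy = p_cur[1] - p_prev[1]
    theta = math.atan2(dy, dx) % (2 * math.pi)     # quadrant-aware angle
    width = 2 * math.pi / n_bins
    # shift by half a bin so each bin is centred on its direction
    return int(((theta + width / 2) % (2 * math.pi)) // width)
```

Using `math.atan2` rather than a plain `arctan` ratio avoids the sign ambiguity of equation (1) when xt − xt−1 is negative or zero.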

Due to noise in the image, changing illumination or cluttered backgrounds, the centroid location given by the hand tracker tends to fluctuate between frames. To increase robustness of the gesture recognition system, we apply a simple averaging filter to the tracking result before calculating the orientation feature. In our work, a window size of three was found to sufficiently smooth the resulting centroid coordinates.

Furthermore, in order to decrease computational complexity, and to ensure invariance against hand velocity and distance from the camera, a centroid location is only recorded if its offset from the previous centroid location is large enough. To decide when this offset is large enough, an adaptive threshold is calculated, based on the dimensions of the bounding box of the hand. When a hand is close to the camera, the amount of motion is significantly larger than when the hand is further from the camera, while making the same gesture. Therefore, by adapting the threshold, our gesture recognition method becomes invariant to z-translations.
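A minimal sketch of the smoothing and adaptive-threshold steps described above; the scaling constant `factor` is hypothetical, since the paper does not state how the threshold is derived from the bounding box dimensions:

```python
def smooth(points, window=3):
    """Moving-average filter over centroid coordinates (window of three,
    as in the paper); early points use a shorter window."""
    out = []
    for i in range(len(points)):
        w = points[max(0, i - window + 1): i + 1]
        out.append((sum(p[0] for p in w) / len(w),
                    sum(p[1] for p in w) / len(w)))
    return out

def should_record(prev, cur, bbox_w, bbox_h, factor=0.1):
    """Record a centroid only if it moved more than a threshold that scales
    with the hand's bounding box, giving invariance to z-translations."""
    thr = factor * max(bbox_w, bbox_h)
    dist = ((cur[0] - prev[0]) ** 2 + (cur[1] - prev[1]) ** 2) ** 0.5
    return dist > thr
```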

C. Hidden Markov Model training

To train the model, we use a multi-sequence variable length Baum-Welch algorithm. An important aspect during the training is parameter initialization. If the transition, emission and initial matrices are not correctly initialized, the Baum-Welch algorithm will get stuck in a local maximum instead of finding the global optimum, thereby resulting in incorrect parameters.

To get the best result, we initialize the self-transition values based on the left-right banded model. This model assumes that a state can only be reached from the previous state or from the current state itself, while it can never be reached from a future state. Therefore, the left-right banded model represents a sequential flow from the initial state towards the final state.

Given the self-transition probabilities aii, the expected state duration di, i.e. the time spent in a state given that the previous state equals the current state, can be defined as shown by equation (2):

di = 1 / (1 − aii)  (2)

where i is the index in the state transition matrix A.

Furthermore, we assume that each state is equally represented by the observation sequence, such that di can be calculated as shown by equation (3):

di = T / N  (3)

where T is the average length of the gesture paths (training sequences) and N is the number of states in the model.

Combining equations (2) and (3) then allows us to calculate the initial self-transition probabilities as

aii = 1 − 1 / (T / N) = 1 − N / T  (4)

Finally, since each row of the state transition matrix represents a probability mass function and should therefore sum to one, the transition probability to the next state, according to the left-right banded model, can be found as follows:

ai,i+1 = 1 − aii  (5)

The state transition matrix of a left-right banded model for training is then:

A =
| a11   1−a11   0       0     |
| 0     a22     1−a22   0     |
| 0     0       a33     1−a33 |
| 0     0       0       1     |   (6)

When we use an average training sequence length of T = 20 and an HMM with N = 4 states, the self-transition probabilities are aii = 0.8.
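The initialization of equations (2) through (6) can be written compactly; the sketch below reproduces the worked example above (T = 20, N = 4 gives aii = 0.8), with the function name being our own:

```python
import numpy as np

def init_banded_transitions(T, N):
    """Left-right banded initialization for Baum-Welch:
    a_ii = 1 - N/T (eq. 4), a_i,i+1 = 1 - a_ii (eq. 5),
    and an absorbing final state, as in matrix (6)."""
    a = 1.0 - N / T
    A = np.zeros((N, N))
    for i in range(N - 1):
        A[i, i] = a
        A[i, i + 1] = 1.0 - a
    A[N - 1, N - 1] = 1.0   # final state only transitions to itself
    return A
```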

D. Gesture spotting

A gesture consists of a sequence of hand locations or motion orientations, and can be classified by the HMM. However, before classification the start point of the gesture has to be detected. Temporally segmenting a meaningful gesture from a sequence of hand locations is called "gesture spotting" [9], [23]. Gesture spotting is a difficult challenge, mainly due to the spatial variability of hand gestures, since the same gesture could have a very distinct appearance when executed by different users, or even when executed by the same user at different instances in time. Most gesture recognition techniques therefore employ a list of empirically defined constraints and rules to aid in the task.

Yoon et al. [4] ask the user to intentionally stop moving his hands for several seconds, right before and after the gesture, in order to aid their gesture spotting algorithm.

In [8], Elmezain et al. assume that each gesture ends with a straight line that can be used as a zero-codeword. Furthermore, as many gestures contain straight lines, a constant velocity assumption is made, to avoid breaking up single gestures into their parts.




For gesture spotting without empirically defined rules or specific constraints, the likelihood of a gesture model for a sequence of hand locations should be distinct enough. While each HMM reports the likelihood that a given gesture explains the observed sequence, simply applying a fixed threshold to this likelihood often does not yield reliable results when trying to distinguish gestures from non-gestures.

Our own technique, used in this paper, is based on Lee and Kim's method [6] and similar to techniques that are used in speech recognition. In speech recognition, word spotting is usually accomplished by classifying the sequence of words by a separate HMM, called a garbage model, that is trained with acoustic non-keyword patterns [5]. If the likelihood of this garbage model is higher than the likelihood of the normally trained models, the keyword is discarded.

Similarly, we propose the use of a so-called threshold model, based on the garbage model concept. The threshold model is a separately trained HMM that yields a likelihood value to be used as an adaptive threshold. Therefore, a gesture is only recognized if the likelihood of the gesture model is higher than the likelihood of the threshold model.

The threshold model is an ergodic model, obtained by copying all states from all the gesture models and then fully connecting them. The observation probability matrix of the threshold model is simply obtained by adding together the observation matrices from all the HMM models, while the transition probability matrix of the model is built from the self-transition values aii of the individual HMM transition matrices. The remaining transition probabilities are then calculated as follows:

aij = (1 − aii) / (N − 1), with i ≠ j  (7)

However, this model only works well as long as the number of states is limited; otherwise the model becomes unreliable in real-time applications. To alleviate this problem, we use a simplified ergodic model, by defining two dummy states (a start and an end state). The threshold model transition matrix contains the self-transition values from the gesture models, while the other values are zero, except for the transitions from the dummy start state and the transitions to the dummy end state. The transition from start state S to state j is given by:

aSj = 1 / N  (8)

and the transition from state j to the end state E is given by:

ajE = 1 − ajj  (9)

It is important to note that the dummy states observe no symbol, so they are passed without time delay, which is an important factor in real-time applications. Because of our proposed smoothing system, the spotting becomes more reliable, resulting in fewer incorrect start points. Without smoothing, start points would be recognized even if the hand were not moving, because of noise and other influences.

Furthermore, our adaptive smoothing method allows for gesture spotting that is invariant to the motion velocity. Slowly moving gestures result in many observed symbols, close to each other, while fast gestures result in fewer symbols, spaced farther from each other. By applying adaptive smoothing, this difference is minimized, resulting in a better spotting system, invariant to different speeds of movement.

In Figure 2, the concept of the spotting system is visualized. If the probability of the gesture model becomes larger than the probability of the threshold model, a start point has been found. If the probability of the gesture model then becomes lower than the probability of the threshold model, an endpoint has been detected.

Fig. 2. Schematic of the spotting system

The spotting algorithm uses a sliding window, the size of which was empirically set to three in our system.
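One plausible reading of this sliding-window spotting scheme is sketched below. Averaging the per-frame log-likelihoods over the window is our assumption (the paper does not specify how the window is aggregated), as are the function name and event representation:

```python
def spot_gestures(ll_gesture, ll_threshold, window=3):
    """Start/endpoint spotting: a start point fires when the gesture model's
    windowed likelihood exceeds the threshold model's, an endpoint when it
    drops back below. Inputs are per-frame log-likelihood lists of equal
    length; returns a list of ('start'|'end', frame_index) events."""
    events = []
    active = False
    for t in range(window - 1, len(ll_gesture)):
        g = sum(ll_gesture[t - window + 1: t + 1]) / window
        h = sum(ll_threshold[t - window + 1: t + 1]) / window
        if not active and g > h:
            events.append(('start', t))
            active = True
        elif active and g <= h:
            events.append(('end', t))
            active = False
    return events
```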

E. Gesture recognition

After the start point has been detected, we accumulate the observations and evaluate the accumulated observation sequence at every step to find the endpoint. This process can be seen in Figure 3. After the endpoint is detected, the probability

Fig. 3. Gesture recognition: observations are accumulated and evaluated at each step

of the full sequence under the gesture model is compared to the probability of the full sequence under the threshold model. Some gestures can be split into several shorter gestures, e.g., the letter W gesture consists of two consecutive letter V gestures. In order to be able to detect the longer gesture, we check whether any other model has a higher probability than the threshold model after the short model has ended. If not, the short model is chosen; otherwise we discard the first endpoint and continue the endpoint detection process.

To better separate non-gestures from gestures, a prior is used after the endpoint detection. It is possible that a short gesture is wrongly classified as correct. To aid the classification, we base the prior on the gesture sequence length.




The prior is the current observation length divided by the average length of the training sequences. Finally, a minimum sequence length is defined in order to filter out short sequences that accidentally have a high probability.
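The length prior and minimum-length filter can be sketched as follows; `min_len` is a hypothetical cut-off value and the function name is ours, as the paper does not give either:

```python
def length_prior(observed_len, avg_train_len, min_len=4):
    """Length-based prior applied after endpoint detection: the ratio of
    the current observation length to the average training sequence
    length. Sequences shorter than min_len are rejected outright, so
    accidental high-probability short sequences are filtered out."""
    if observed_len < min_len:
        return 0.0
    return observed_len / avg_train_len
```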

IV. GESTURE CONTROLLED RADIO

The radio is built using an Si4703 FM tuner chip, which is capable of carrier detection and filtering, and of processing Radio Data Service (RDS) and Radio Broadcast Data Service (RBDS) information. It is also possible to retrieve data, such as the station's name and song name, so it can be displayed to the user. The device has a 100 mW stereo amplifier, so it can only be used with earphones if no additional amplifier is available.

Using this board we are able to pick up multiple radio stations, and it is easy to control using I2C (Inter-IC). To control the FM receiver we use an Arduino Mega 2560. The Arduino can communicate using I2C, but the voltage level of the Arduino is 5 V and that of the FM receiver chip is 3.3 V, so for communication between both we need a bidirectional voltage converter. The voltage converter circuit is built using an N-channel, enhancement-mode MOSFET for low-power switching applications, and three resistors. This circuit can be seen in Figure 4. The circuit works as follows: when the low

Fig. 4. Logic of the voltage level converter

side (3.3 V) transmits a logic one, the MOSFET is tied high (off) and the high side sees 5 V through the pull-up resistor. When the low side transmits a logic zero, the MOSFET source pin is grounded, the MOSFET is switched on, and the high side is pulled down to 0 V. When the high side (5 V) transmits a logic one, the MOSFET substrate diode conducts, pulling the low side down to 0.7 V, which turns the MOSFET on; the conducting MOSFET then pulls the low side further down. We need three level converting circuits: one for the Serial Data Line (SDA), one for the Serial Clock (SCL), and a third one for a reset pin on the FM receiver board.

After the FM receiver picks up a signal, we need to amplify it before sending it to the speaker. The connection cable between the amplifier and the FM receiver also serves as an antenna. The radio receives its commands through a Bluetooth connection with the computer on which the gesture recognition algorithm is running. The simplified schematic concept can be seen in Figure 5.

Fig. 5. Schematic of the radio

V. RESULTS

To evaluate the algorithms we use five gestures, visualized in Figure 6. The meanings of the gestures are, respectively: volume up, volume down, previous station, next station, and close. The videos used for testing can be

Fig. 6. The five gestures used for evaluation (left to right: volume up, volume down, previous station, next station, close)

obtained freely for research purposes by sending a request to [email protected]. For evaluation, we used 10 training samples per gesture and 86 test gestures in total.

In the following paragraphs, we compare our algorithm with an implementation of the state-of-the-art method proposed by Elmezain et al. [9]. This method is an HMM gesture recognition system based on the orientation feature, similar to our technique. Furthermore, they use a depth sensor to facilitate the hand detection and segmentation process.

A. Observation training sequence

As an example, Table I lists a few observation sequences generated by our training algorithm. The first column shows the observation sequence, written as a series of consecutive observations (quantized orientations). The number 0 refers to a movement to the right, the number 1 to a movement up and to the right, the number 2 to a movement up, and so on. The sequences have different lengths and contain few errors in the observation orientations, which improves the generality of the trained classifier while reducing overfitting.

The number of states used to represent the gesture is given in the second column. Finally, the last column gives the corresponding gesture action.
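The quantized orientations above can be reproduced with a simple 8-way quantizer. The following is our own illustrative sketch (the paper does not provide code), assuming a y-axis pointing up; with image coordinates, dy would be negated first:

```python
import math

def quantize_orientation(dx, dy):
    """Map a movement vector to one of 8 orientation codes:
    0 = right, 1 = right-up, 2 = up, ..., 7 = right-down.
    Assumes a y-axis pointing up (negate dy for image coordinates)."""
    angle = math.atan2(dy, dx) % (2.0 * math.pi)      # angle in [0, 2*pi)
    return int(round(angle / (math.pi / 4.0))) % 8    # nearest 45-degree bin
```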

TABLE I. OBSERVATION TRAINING SEQUENCES FROM OUR PROPOSED ALGORITHM

Observation sequence | # states | Gesture
5, 5, 5, 5, 5, 5, 7, 7, 7, 7, 7, 7, 7, 1 | 4 | Previous Station
4, 5, 5, 5, 5, 7, 7, 7, 7, 0 | 4 | Previous Station
0, 7, 7, 7, 7, 0, 7, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 | 4 | Volume Down
7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 1, 2, 1, 1, 1, 1, 1 | 4 | Volume Down
4, 1, 1, 1, 1, 7, 7, 7, 7, 7, 7, 7, 7, 7 | 4 | Volume Up
5, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 7, 7, 7, 7, 7, 7, 7 | 4 | Volume Up

Table II lists a few observation sequences that are generated by the training algorithm as proposed by Elmezain et al. All sequences have different lengths and contain many errors in the observation orientations, as can be seen in the first column. Therefore, this method is more prone to overfitting and thus


needs more training data to achieve similar recognition results.

TABLE II. OBSERVATION TRAINING SEQUENCES FROM THE IMPLEMENTATION FROM LITERATURE

Observation sequence | # states | Gesture
1, 4, 5, 2, 5, 5, 5, 4, 6, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 4, 6, 5, 7, 7, 7, 7, 6, 7, 6, 7, 6, 7, 7, 7, 7, 7, 7, 0, 0, 6, 7, 0, 4, 7 | 4 | Previous Station
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 6, 4, 5, 4, 5, 4, 5, 7, 7, 7, 6, 0, 6, 7, 0, 6, 7, 7, 7, 7, 7, 0, 0, 0, 4, 6, 6, 6, 7, 5, 6, 7, 1, 6, 0 | 4 | Previous Station
7, 4, 2, 3, 5, 2, 2, 1, 6, 3, 2, 2, 2, 2, 2, 2, 5, 1, 2, 3, 3, 4, 5, 5, 4, 4, 4, 4, 2, 4, 4, 4, 3, 4, 2, 4, 4, 4, 4, 3, 4, 4, 4, 2, 4, 5, 4 | 4 | Volume Down
1, 7, 5, 1, 6, 0, 1, 0, 7, 4, 7, 7, 7, 6, 7, 6, 7, 6, 1, 2, 7, 7, 2, 7, 6, 0, 1, 6, 1, 7, 7, 6, 7, 7, 0, 1, 6, 7, 0, 6, 0, 3, 0, 5, 1, 5, 0, 2, 6, 2, 2, 3, 0, 1, 1, 6, 1, 0, 2, 1, 0, 4, 1, 2, 0, 1, 4, 1, 0, 6, 2, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 2, 0, 1, 0, 2, 0, 1, 7, 1, 1, 1, 2, 1, 2, 1, 0 | 4 | Volume Down
3, 7, 4, 4, 4, 4, 3, 3, 5, 3, 4, 4, 4, 3, 0, 2, 5, 2, 4, 2, 0, 2, 4, 6, 0, 3, 4, 3, 5, 1, 5, 3, 5, 1, 2, 3, 0, 2, 3, 3, 2, 2, 3, 2, 2, 3, 1 | 4 | Volume Up
5, 1, 1, 3, 0, 0, 0, 1, 1, 1, 0, 2, 1, 1, 1, 2, 0, 1, 1, 1, 2, 2, 7, 2, 2, 3, 1, 0, 0, 1, 1, 2, 6, 1, 5, 0, 5, 2, 1, 7, 6, 0, 7, 7, 0, 7, 0, 7, 7, 7, 7, 7, 7, 6, 7, 7, 7, 7, 7, 7, 7, 7, 6, 7, 0, 7, 7 | 4 | Volume Up

B. Evaluation of sequences

To decide which algorithm to use for the evaluation of the observation sequences, we tested the recognition rate and the processing time of the Viterbi and the Forward algorithm. Both algorithms have the same recognition rate, but the Viterbi algorithm takes longer to process. The difference in processing time is caused by the Viterbi path calculation, which is used for the decoding part. This path calculation is not necessary for our purpose, so by stripping it from the Viterbi algorithm we effectively reduce the required processing time. Table III shows the processing times of the Forward, Viterbi and our proposed reduced Viterbi algorithms. The processing time is the average time needed to calculate a sequence of 40 observations for a four-state HMM. In most

TABLE III. PROCESSING TIME FOR OBSERVATIONS

Forward | Viterbi | Reduced Viterbi
0.044 ms | 0.142 ms | 0.042 ms

systems, Viterbi is used. The Viterbi algorithm only guarantees the maximum likelihood over all state sequences, instead of the sum over all possibilities as in the Forward algorithm, which results in an approximation. However, for most applications this is sufficient.

The number of multiplications needed for the Forward algorithm is N²(T − 1) + NT. The Viterbi algorithm also needs N²(T − 1) + NT multiplications, but by moving the computation to log space, the multiplications in the Viterbi algorithm can be replaced by N²T additions, whereas the Forward algorithm still needs scaling and therefore still needs multiplications. Taking this into account, we opted for the Viterbi algorithm.
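The two scores can be sketched as follows: the Forward algorithm sums over all state paths using multiplications, while the reduced Viterbi score works in log space with additions and maxima only and omits the backtracking path. This is our own minimal sketch of the two computations, not the authors' implementation:

```python
import numpy as np

def forward_score(A, B, pi, obs):
    """Forward algorithm: P(obs | model), summing over all state paths.
    A: N x N transition matrix, B: N x M emission matrix, pi: initial probs."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # ~N^2 multiplications per step
    return float(alpha.sum())

def reduced_viterbi_log(A, B, pi, obs):
    """Log-space Viterbi score without path backtracking: the log-probability
    of the single best state path, using only additions and maxima per step."""
    logA, logB = np.log(A), np.log(B)
    delta = np.log(pi) + logB[:, obs[0]]
    for o in obs[1:]:
        # delta[i] + logA[i, j], maximized over the previous state i
        delta = np.max(delta[:, None] + logA, axis=0) + logB[:, o]
    return float(np.max(delta))
```

Since the Forward score sums over all paths while Viterbi keeps only the best one, exp(reduced_viterbi_log(...)) is always a lower bound on forward_score(...).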

Furthermore, in the method of Elmezain et al., the Viterbi algorithm is used again to evaluate the sequence after the endpoint of the gesture is detected, whereas in our system we classify the detected endpoint directly.

C. Movement threshold

Figures 7 and 8 show the results for different sizes of the movement threshold, for the same hand size and distance from the camera. The graph ranges from no movement threshold to a movement threshold of five pixels, i.e., the next point needs to be further away than a Euclidean distance of five pixels from the current pixel in order to be taken into account. Figure 7 shows the sensitivity of the system for every movement threshold, i.e., the proportion of actual gestures that are correctly identified as such. A movement threshold of three pixels gives the best results.
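The movement threshold can be sketched as a simple filter on the tracked coordinates (an illustrative sketch of the rule described above; the function name is ours):

```python
import math

def apply_movement_threshold(points, threshold=3.0):
    """Keep a tracked hand coordinate only if it is farther than `threshold`
    pixels (Euclidean distance) from the last kept coordinate, suppressing
    jitter from slow or paused movements."""
    if not points:
        return []
    kept = [points[0]]
    for (x, y) in points[1:]:
        (px, py) = kept[-1]
        if math.hypot(x - px, y - py) > threshold:
            kept.append((x, y))
    return kept
```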

Fig. 7. Effect of the movement threshold on the recognition rate

Fig. 8. Effect of the movement threshold on the false positive detections

D. Confusion matrix

The results of the tested video sequences, using a movement threshold of three pixels, are listed in the confusion matrix in Table IV. For the No Gesture class we have noted an 'x' in the table, because our gesture test sequences contain random movements which are indeed recognized as No Gesture, but we cannot place a value on this. Roughly one third of the frames contain random movements. Our implementation has an overall recognition rate of 92%.

To evaluate the effect of gesture speed on the recognition rate, we decimated the sampling points by factors of two and three.


TABLE IV. CONFUSION MATRIX FOR THE FIVE SELECTED GESTURES

Actual class \ Predicted class | Previous Station | Volume Up | Volume Down | Next Station | Close | No Gesture
Previous Station | 21 | 0 | 0 | 0 | 0 | 1
Volume Up | 0 | 14 | 0 | 0 | 0 | 2
Volume Down | 0 | 0 | 14 | 0 | 0 | 2
Next Station | 0 | 0 | 0 | 22 | 0 | 0
Close | 0 | 0 | 0 | 0 | 9 | 1
No Gesture | 0 | 0 | 0 | 0 | 0 | x

This way we obtain gestures with simulated speed increases of factors two and three, respectively. The recognition rate did not change with the increase in speed.

The tests for the algorithm from literature are performed using our hand detection algorithm in combination with the hand gesture detection algorithm from Elmezain et al. We found that their implementation only works with their tracker. If used with a different tracker and different test sets, the recognition rate is 0%. The following paragraphs explain why.

First, Elmezain et al. use a depth sensor in order to obtain an accurate hand detection and segmentation. This greatly increases tracking stability, but consequently the gesture recognition system is not adapted to other trackers. We perform tracking and segmentation using only a 2D video frame, resulting in more noise in the obtained observation sequence. While our method is able to cope with inaccurate tracking results, the method described by Elmezain et al. is not.

Second, the more time the hand detection algorithm needs to detect a hand, the fewer data points can be processed in real time. This results in an indirect smoothing of the gesture, as can be seen in Figure 9. The dots are hand coordinates received from the hand detection stage and the line is the orientation between two succeeding points. The same effect can be obtained by decimating the number of points received from the hand tracker. Our hand detection system takes an average of 250 ms to detect a hand on an Intel i7 with 4 GB of RAM. In our proposed algorithm we use a movement threshold, which results in a stable system, independent of the processing time needed for hand detection, whereas the system proposed by Elmezain et al. is not invariant to the processing speed and the number of observed data points.

Fig. 9. Indirect smoothing occurs when less data points are used

Third, slow-moving gestures lead to consecutive hand coordinates with a small Euclidean distance between them.

If the hand detection is distorted in some way, the obtained coordinates, and consequently the orientation of the hand movement, will have a relatively large error for slow-moving gestures, as can be seen in Figure 10(a). For fast-moving gestures, an incorrect hand detection leads to a smaller error in orientation because of the larger distance between the data points, as can be seen in Figure 10(b).

(a) Slow gesture (b) Fast gesture

Fig. 10. The impact of gesture speed on orientation accuracy

In our proposed algorithm we use a movement threshold, which results in a stable system, independent of the movement speed of the gestures.
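The effect described above can be illustrated with a toy model (our own illustration, not from the paper): for a fixed one-pixel detection error, the induced orientation error shrinks as the step between data points grows.

```python
import math

def orientation_error_deg(step_px, noise_px=1.0):
    """Angular error caused by a perpendicular detection error of
    `noise_px` pixels on a straight step of `step_px` pixels."""
    return math.degrees(math.atan2(noise_px, step_px))

slow = orientation_error_deg(2.0)   # slow gesture: ~2 px between points
fast = orientation_error_deg(20.0)  # fast gesture: ~20 px between points
```

A 1-pixel error on a 2-pixel step perturbs the orientation by more than 20°, while the same error on a 20-pixel step perturbs it by only a few degrees.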

Fourth, in the system proposed by Elmezain et al., the assumption is made that the gesture moves continuously at a constant speed. Our test gestures have different speeds and sometimes contain a small pause during the gesture. Figure 11 shows the result for a gesture with a small pause during the movement. The dots in the figure are hand coordinates received from the hand detection stage; the lines are the orientations between two succeeding points. While our algorithm ignores similar locations, the method proposed by Elmezain et al. cannot cope with such changes in velocity.

Fig. 11. Continuity of the gesture

The algorithm from Elmezain et al. works well for fluent, continuous gestures in combination with a depth sensor for hand detection and tracking. In our proposed algorithm we use a movement threshold, which results in a stable system, independent of the movement speed of the gestures or pauses in the gestures.

VI. CONCLUSION

This paper proposes a complete, real-time gesture recognition system that is able to detect, track and recognize gestures in unconstrained environments. For the hand detection and tracking system we use a previously designed algorithm that can work with a cheap monocular camera. We suggest several enhancements to a Hidden Markov Model system to increase the robustness of the gesture recognition.

The newly developed system is able to recognize, in real-time and with low computational cost, gestures that vary in


shape and duration. Our proposed hand gesture recognition system has several enhancements, which make it more stable and better usable with different trackers. For spotting and recognition, an adaptive threshold model is used in combination with an accumulative sliding window. The system is able to recognize fast gestures, slow gestures, gestures of different sizes and gestures that are not continuous movements. During evaluation, 92% of the tested gestures were recognized.

As a proof of concept, we operate a radio using hand gestures. A PowerPoint presentation, or any other device or application, can also be manipulated using gestures. A demonstration of the working system can be seen at http://www.youtube.com/watch?v=ErwUSSdnc4E. The system can thus be used as a human-computer interface to control hardware devices or software applications.

REFERENCES

[1] A. Erol, G. Bebis, M. Nicolescu, R. D. Boyle, and X. Twombly, “Vision-based hand pose estimation: A review,” Computer Vision and Image Understanding, vol. 108, no. 1–2, pp. 52–73, 2007.

[2] S. Mitra and T. Acharya, “Gesture Recognition: A Survey,” Systems, Man, and Cybernetics, Part C: Applications and Reviews, IEEE Transactions on, vol. 37, no. 3, pp. 311–324, 2007.

[3] M.-H. Yang and N. Ahuja, “Recognizing hand gesture using motion trajectories,” in Computer Vision and Pattern Recognition, 1999. IEEE Computer Society Conference on, vol. 1, 1999.

[4] H.-S. Yoon, J. Soh, Y. J. Bae, and H. S. Yang, “Hand gesture recognition using combined features of location, angle and velocity,” Pattern Recognition, vol. 34, no. 7, pp. 1491–1501, 2001.

[5] L. Wilcox and M. Bush, “Training and search algorithms for an interactive wordspotting system,” in Acoustics, Speech, and Signal Processing, ICASSP-92, 1992 IEEE International Conference on, vol. 2, 1992, pp. 97–100.

[6] H.-K. Lee and J. Kim, “An HMM-based threshold model approach for gesture recognition,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 21, no. 10, pp. 961–973, 1999.

[7] L. Rabiner, “A tutorial on hidden Markov models and selected applications in speech recognition,” Proceedings of the IEEE, vol. 77, no. 2, pp. 257–286, 1989.

[8] M. Elmezain and A. Al-Hamadi, “A Hidden Markov Model-Based Isolated and Meaningful Hand Gesture Recognition,” in World Academy of Science, Engineering and Technology 41, vol. 31, 2008, pp. 394–401.

[9] M. Elmezain, A. Al-Hamadi, and B. Michaelis, “Hand trajectory-based gesture spotting and recognition using HMM,” in Image Processing (ICIP), 2009 16th IEEE International Conference on, 2009, pp. 3577–3580.

[10] A. Kurakin, Z. Zhang, and Z. Liu, “A real time system for dynamic hand gesture recognition with a depth sensor,” in Signal Processing Conference (EUSIPCO), 2012 Proceedings of the 20th European, 2012, pp. 1975–1979.

[11] M. Zobl, M. Geiger, K. Bengler, and M. Lang, “A usability study on hand gesture controlled operation of in-car devices,” in Abridged Proceedings, HCI. New Orleans, LA, USA: Lawrence Erlbaum Ass., pp. 166–168.

[12] G. ElKoura and K. Singh, “Handrix: animating the human hand,” in Proceedings of the 2003 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, ser. SCA ’03. Aire-la-Ville, Switzerland: Eurographics Association, 2003, pp. 110–119.

[13] V. Spruyt, A. Ledda, and W. Philips, “Real-time, long-term hand tracking with unsupervised initialization,” in Image Processing (ICIP), 2013 20th IEEE International Conference on, 2013, in press.

[14] T. Kadir and M. Brady, “Saliency, Scale and Image Description,” Int. J. Comput. Vision, vol. 45, no. 2, pp. 83–105, Nov. 2001.

[15] G. Gomez, “On selecting colour components for skin detection,” in Pattern Recognition, 2002. Proceedings. 16th International Conference on, vol. 2, 2002, pp. 961–964.

[16] M. Heikkila, M. Pietikainen, and C. Schmid, “Description of interest regions with local binary patterns,” Pattern Recognition, vol. 42, no. 3, pp. 425–436, Mar. 2009.

[17] R. Ortiz, “FREAK: Fast Retina Keypoint,” in Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), ser. CVPR ’12. Washington, DC, USA: IEEE Computer Society, 2012, pp. 510–517.

[18] V. Spruyt, A. Ledda, and W. Philips, “Real-time hand tracking by invariant hough forest detection,” in Image Processing (ICIP), 2012 19th IEEE International Conference on, 2012, pp. 149–152.

[19] V. Spruyt, A. Ledda, and S. Geerts, “Real-time multi-colourspace hand segmentation,” in Image Processing (ICIP), 2010 17th IEEE International Conference on, 2010, pp. 3117–3120.

[20] V. Spruyt, A. Ledda, and W. Philips, “Sparse optical flow regularization for real-time visual tracking,” in Multimedia and Expo (ICME), 2013 IEEE International Conference on, 2013, in press.

[21] N. Liu, B. Lovell, P. Kootsookos, and R. Davis, “Model structure selection & training algorithms for an HMM gesture recognition system,” in Frontiers in Handwriting Recognition, 2004. IWFHR-9 2004. Ninth International Workshop on, 2004, pp. 100–105.

[22] L. J. Rodriguez and I. Torres, “Comparative Study of the Baum-Welch and Viterbi Training Algorithms applied to Read and Spontaneous Speech Recognition,” in Pattern Recognition and Image Analysis, ser. Lecture Notes in Computer Science, vol. 2652. Springer Berlin Heidelberg, 2003, pp. 847–857.

[23] M. Elmezain, A. Al-Hamadi, and B. Michaelis, “Real-Time Capable System for Hand Gesture Recognition Using Hidden Markov Models in Stereo Color Image Sequences,” Journal of WSCG, vol. 16, pp. 65–72, 2008.


Pedestrian Detection and Localization System

by a New Multi-Beam Passive Infrared Sensor

Raphaël Canals, Peng Ying

PRISME Laboratory, University of Orleans

Orleans, France

[email protected]

Thierry Deschamps, Joseph Zisa

Technext Society

Cannes, France

[email protected]

Abstract—To reduce the considerable algorithmic power required by an image-processing solution in the framework of indoor pedestrian positioning, a new sensor, the SPIRIT (Smart Passive InfraRed Intruder sensor, Locator and Tracker), based on passive infrared sensor technology, was designed.

This technology, associated with specific low-power electronics and innovative optics, provides the angular position of a person relative to the sensor attitude. Her detection and angular positioning are thus optical and require only a tiny amount of embedded computing power. In this study, we propose a 3D geometric model of the sensor which allows, by projection, a 2D cartography of the beam boundaries to be obtained. By considering the successive numbers of the beams activated by the person during her displacement and their timing, the distances between beam boundaries, and assumed minimal and maximal walking speeds, it is possible to define the probable directions of origin and hence the plausible pathways as the person moves.

Keywords—pyroelectric infrared sensor (PIR); multi-beam; multi-boundary; angular positioning; positioning refinement.

I. INTRODUCTION

Human-tracking techniques aim to detect a person's presence and then determine her position in space as she moves. They must be able to manage complex interactions and dynamics in sequences, such as occlusions, relative movement of the person with respect to the sensor, and changes in lighting. The versatile range of tracking applications extends from human-machine interaction via video communication with compression, to computer vision, robotics, surveillance, industrial automation and other specific applications [1]-[6].

Most vision-based approaches to moving object detection and tracking require intensive real-time computations and expensive hardware [2] [4] [5] [7] [8].

RFID technology can be employed to support indoor and outdoor positioning, but its need for proximity and absolute positioning implies substantial infrastructure [9]-[11]. RF technology also permits localization by combining the received signal strength and link quality [16] [17]. Because of its imprecision when used alone, it is used in cooperation with another technology, such as a pyroelectric one [12].

Pyroelectric infrared (PIR) sensors permit the detection of human motion thanks to their sensitivity to changes in heat flux. They are commonly used because of their low cost, non-invasive aspect, low power consumption and low detectability. But their electronics and optics make them binary, with a large field of view (FOV) and low resolution [13]. This is why all research works using this technology implement a wireless network of PIR sensors in order to criss-cross the coverage area fairly accurately, associating a data supervision and localization processing algorithm with it [14], [15], [18], [19], [21], [22], or even a sensor fusion algorithm when combining it with another technology [12], [20].

In this context, a new PIR sensor has been developed to counteract this added complexity. Thus the SPIRIT sensor (Smart Passive InfraRed Intruder sensor, Locator and Tracker) is introduced: thanks to its coded Fresnel lens array, it constitutes a multi-boundary sensor in 3D+time that provides the angular coordinates of the person with a resolution of 4° and allows her temporal tracking in the coverage zone.

This article presents the first stage of the study, on the SPIRIT as an isolated sensor tracking a single human target. But this detector has been designed to be network-capable, so with a view to determining the exact position of the person, a solution would merely employ several networked SPIRITs installed in such a way that their beams intersect, furnishing many precise positioning points. As to the issue of detecting, locating and tracking multiple persons, a chronological data management algorithm should permit the determination of the presence of multiple humans and their separate, coherent pathways.

The remainder of this paper is organized as follows. Section II describes the SPIRIT sensor, its 3D modeling and its 2D projection on the ground. In Section III, we show our simulation and experimental positioning results, and discuss the strengths and weaknesses of our system. A conclusion with an outlook finalizes this article.

II. THE SPIRIT SENSOR

A. General presentation

The SPIRIT sensor uses the reliable and inexpensive technology of passive infrared (PIR) detectors, but its specific internal signal processing, coupled with innovative, patented optics [23], [24], identifies the beam that has seen the person in its FOV (Fig. 1). It is passive and therefore undetectable, and harmless to people, animals and the environment.

Each beam has its own identification number. This allows the system to


track the target's displacement and to position it continuously in the sensor FOV, even in total darkness. The FOV of the SPIRIT module, with an aperture of 60°, is segmented into 15 viewing beams, the solid angles within which any moving temperature source is detected.

Its beam detection is optical: the embedded computing power is therefore minimal, which allows a low-cost product with a total average current consumption of about 150 µA in its low-power version, guaranteeing a 5-year autonomy on a 3 V lithium battery. The SPIRIT is wired via RS485-RS232 but can be equipped with a Wavenis radio transmitter by Coronis to facilitate its integration.

Pyroelectric sensor signals are proportional to the change in temperature of the crystal rather than to the ambient temperature. To aid motion sensing, a specific Fresnel lens array has been designed so that the visible space is divided into zones. Detection is greatly improved by creating separate visibility regions. Several lenses of the Fresnel lens array contribute to the creation of a single cone of visibility on a pyroelectric sensor. Thus one SPIRIT beam consists of two sub-beams collecting information from several lenses of the lens array. The positive and negative sub-beams correspond to the two sensitive elements of an electronic dual-element detector, and the gap between them is due to the insensitive region of the sensor. All this implies that, in order to be detected, a person must pass through the two sub-beams, in one direction or the other, and her detection is established after a rise and a fall (or vice versa) in the detector response.
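One plausible reading of this rise-and-fall rule is sketched below (our own hypothetical illustration; the actual SPIRIT signal processing is not disclosed in this detail): a crossing is registered once the detector response exceeds a positive threshold and then a negative one, or vice versa.

```python
def detect_crossing(samples, thr):
    """Hypothetical sketch: register a beam crossing once the dual-element
    PIR response rises above +thr and then falls below -thr, or vice versa
    (one plausible reading of the rise-then-fall rule in the text)."""
    first = None
    for s in samples:
        if first is None:
            if s > thr:
                first = +1     # positive lobe seen first
            elif s < -thr:
                first = -1     # negative lobe seen first
        elif (first == +1 and s < -thr) or (first == -1 and s > thr):
            return True        # opposite lobe seen: both sub-beams crossed
    return False
```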

Fig. 2 shows the visibility pattern of the Fresnel lens array. The array is made of a light-weight, low-cost plastic material with good transmission characteristics in the 8 to 14 µm wavelength range. The lens array has a FOV of 60° (top view) and a lateral FOV of 1.8°. The width of one beam is 2° and the angular separation between beams is 2°, which places the axes of symmetry of adjacent beams 4° apart. So a person walking along the indicated path and crossing the SPIRIT FOV is detected and her successive angular positions are determined. The SPIRIT sensor is therefore a multi-boundary detector that should allow us to locate a person who crosses its beams.

B. SPIRIT Modelling

As a way of simplifying the calculus and explanations, we represent the horizontal and vertical FOV of the beams by their axes of symmetry, which we call SPIRIT boundaries (Fig. 3). We consider a single person to be located, whose height is denoted h. The horizontal planes have the equations z = 0 and z = h respectively. The SPIRIT sensor is positioned on the z-axis at a height H and its optical axis lies at an angle α0 with the z-axis. Each boundary F is marked with two subscripts, which are two angles: the first, αi, indicates the SPIRIT plane, and the second, π/4 + nφ0, gives its orientation in this same plane. Boundaries are symmetrical relative to the bisector of the total FOV. The points I are the intersections of the boundaries with a plane parallel to xOy. These points carry the same subscripts as the boundary to which they belong, plus a superscript corresponding to the height of the plane of intersection.

Figure 1. Sensor module called SPIRIT, its optics and beams.

Figure 2. Characterization of the FOV of the Fresnel lens array.

The basis is to determine the coordinates of the intersection points I of the boundaries with the horizontal plane z = h passing through the top of a person of height h moving on the plane xOy. The coordinates of I^h_{α0, π/4+nφ0} are:

( (√2/2)(H − h) tan α0 (1 − tan nφ0), (√2/2)(H − h) tan α0 (1 + tan nφ0), h )

where n is the signed number of the beam in relation to the bisector. So, using the example of n = 1, the distance between

the two points I^h_{α0, π/4} and I^h_{α0, π/4+φ0} is equal to:

I^h_{α0, π/4} I^h_{α0, π/4+φ0} = (H − h) tan α0 tan φ0

If the person is detected at the point I^h_{α0, π/4} at time t = t0 and moves along a straight line to the point I^h_{α0, π/4+φ0}, which she reaches at time t = t0 + Δt, her speed v will be:

v = (H − h) tan α0 tan φ0 / Δt

Considering the vertical plane that contains I^0_{α0, π/4+φ0}, O, H and P(I^h_{α0, π/4+φ0}), the projection of I^h_{α0, π/4+φ0} on the plane xOy, a person of height h, wherever she comes from, will be detected when she crosses the boundary F_{α0, π/4+φ0} at the point I^0_{α0, π/4+φ0} and will no longer be detected beyond the point P(I^h_{α0, π/4+φ0}) (Fig. 4). These two points are defined by the following two distances; the person is detected while she moves between them:

OP(I^h_{α0, π/4+φ0}) = (H − h) tan α0 / cos φ0


Figure 3. Simplified geometrical characterization of the SPIRIT.

Figure 4. SPIRIT boundaries.

OI^0_{α0, π/4+φ0} = H tan α0 / cos φ0

C. 2D SPIRIT boundaries projection

These previous equations are defined in relation to the 3D model of the SPIRIT. We must now perform a 2D projection of this model on the ground plane z = 0; the axes of symmetry of the beams, which were all separated by the same angle φ0 = 4°, are now separated by different angles βn, with the relation:

βn = tan⁻¹(tan(nφ0) / sin α0)

Based on Figs. 3 and 4, we obtain the projection of the SPIRIT boundaries on the plane z = 0 (Fig. 5). The path of the person is assumed to be piecewise straight and at steady speed, and crosses three successive boundaries.

Let us suppose the maximal person height h = hmax; the thick black segments represent the 2D projection of the portions of the beam boundaries which have detected the person. Knowing that the distances d1 = v·t1 and d2 = v·t2, and applying the rules of geometry, the distance OR is determined by:

OR = v t2 (t1 + t2) sin ∅n / [ t1² sin²(∅n + ∅n+1) + (t1 + t2)² sin²∅n − 2 t1 (t1 + t2) sin(∅n + ∅n+1) sin ∅n cos ∅n+1 ]^(1/2)

with t1 and t2 the time intervals between the successive boundary crossings.

Figure 5. 2D projection of the SPIRIT boundaries on the ground.

OR is the distance that separates the SPIRIT sensor from the person on the ground plane. As her angular position is known, the person can be located. But as we do not know her speed, we define a minimum walking speed vmin = 3.5 km/h and a maximum one vmax = 5.5 km/h [25], which allows us to delimit an area, between the beam projections and two parallel lines, in which the person is localized.
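Under the stated assumptions (piecewise-straight path at constant speed, three crossed boundaries), the range OR follows from the two crossing intervals t1 and t2 and the projected inter-boundary angles by elementary triangulation (law of sines). The following is our own numerical sketch of this localization principle, not the paper's code; the function name and the choice of the last crossing point as the recovered position are assumptions:

```python
import math

def range_from_crossings(t1, t2, phi_n, phi_n1, v):
    """Distance from the sensor's ground projection O to the last crossing
    point, for a straight path at constant speed v that crosses three
    projected boundaries: t1, t2 are the times between successive crossings,
    phi_n and phi_n1 the angles between the successive projected boundaries.
    Derived with the law of sines applied to the three sensor-to-crossing
    triangles (our reconstruction of the localization step in the text)."""
    s = phi_n + phi_n1
    num = v * t2 * (t1 + t2) * math.sin(phi_n)
    rad = (t1 * math.sin(s)) ** 2 + ((t1 + t2) * math.sin(phi_n)) ** 2 \
        - 2.0 * t1 * (t1 + t2) * math.sin(s) * math.sin(phi_n) * math.cos(phi_n1)
    return num / math.sqrt(rad)
```

For example, a person walking along the line y = 1 at unit speed crosses the rays from O through (1, 1), (2, 1) and (4, 1); the recovered range to the last crossing point is √17, matching the ground truth.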

III. RESULTS

The detection and positioning system was implemented in a 20 × 15 m hall. We also ran numerical simulations to investigate sensor deployment and positioning precision, and to evaluate the model.

A. Simulation results

The suitability of our SPIRIT boundary model and the accuracy of projecting this model on the ground plane were simulated. The sensor can be installed at any location in the room, but its installation height is usually just underneath the ceiling (2.5 m), with an angle α0 of about 70-80°. At this stage, the path of the person is straight and at constant speed, between vmin and vmax; its starting and final points are configurable. The person is represented simply by a cylinder and we suppose that the maximum height hmax is 1.8 m. Therefore the person crossing the SPIRIT beams can only be 2D-located within the green ground boundaries (room size: 20 × 15 × 2.5 m) (Fig. 6).

In Fig. 7, the person crosses all the SPIRIT beams. For easy viewing, the SPIRIT beams are not represented. The ground boundaries are all activated within the area determined using the speed limits. This area is nearly 3.85 m wide and its limits are parallel to the person's path: her direction is therefore known. And if the path were not straight and/or the speed not constant, she would still be located within this area.

When some beams have not detected the person, it is possible to reduce the width of the area and thus improve the positioning. In Fig. 8, the last beam has not been activated because its second sub-beam has not been crossed. The area is 3.757 m wide between the two limits, and within the projected beam the width is 3.887 m at departure and 4.693 m at arrival. We find that the maximum area limit falls outside the 2D boundaries: the blue line then becomes the maximum limit and the area width is reduced as the person moves (widths of 3.752 m at departure and 1.834 m at arrival within the projected beam). Moreover, if we consider the path to be straight and the speed constant, this area width can be further reduced:


Figure 6. Simulation of the 3D and 2D SPIRIT models (SPIRIT(10,0); α0 = 80°).

Figure 7. Simulation example with all beams crossed by the person (SPIRIT(10,0); α0 = 80°; departure (0,5.5); arrival (20,10.5); v = 4.0 km/h).

With the orange maximum limit, at an early stage of the detection, since the two limits would be parallel and the two sub-beams of the last boundary crossed by the person would be activated;

With the orange minimum limit, after the last detection, since the latter beam has not been activated. Concerning this point, the limit can cross part of the beam boundaries but must not cross the projection of the second sub-beam.

In this case, the area width would only be 0.153m.
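The interval reasoning used above (position bounds propagated from each detection through the walking-speed limits) can be sketched in a few lines. This is a simplified 1-D illustration, not code from the paper: boundary ground positions along the walking direction are assumed known from the model, and the motion is assumed monotonic.

```python
def position_bounds(t, detections, v_min, v_max):
    """Bound the person's 1-D position along the path at time t.

    detections : list of (t_i, x_i) pairs, where x_i is the known ground
                 position (m) of the beam boundary crossed at time t_i (s).
    v_min/v_max: walking-speed limits (m/s).
    Each detection constrains the position at t through the speed limits;
    intersecting all constraints yields the admissible interval.
    """
    lo, hi = float("-inf"), float("inf")
    for t_i, x_i in detections:
        dt = t - t_i
        if dt >= 0:  # boundary already crossed, person has moved on
            lo = max(lo, x_i + v_min * dt)
            hi = min(hi, x_i + v_max * dt)
        else:        # boundary not crossed yet, person is still behind it
            lo = max(lo, x_i - v_max * -dt)
            hi = min(hi, x_i - v_min * -dt)
    return lo, hi

# Crossings at x = 0 m (t = 0 s) and x = 1.1 m (t = 1 s), speed in [1, 1.2] m/s:
lo, hi = position_bounds(0.5, [(0.0, 0.0), (1.0, 1.1)], v_min=1.0, v_max=1.2)
```

Each additional crossing tightens the interval, which mirrors how activated and non-activated beams widen or narrow the localisation area in Figs. 7-9.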

A similar example is given in Fig. 9. The area is 4.441m wide between the two limits. Similarly, it is possible to reduce the area in which the person is located and thus to obtain a better localization: the width is 6.964m at departure and 4.673m at arrival initially, and 2.606m at departure and 4.673m at arrival after modification. If the path is straight and the speed constant, the area is bounded by the two orange limits (width=0.769m); if no condition is imposed, the person may be located anywhere in the area, but with a restriction at departure since the third boundary has not been activated. In addition, the person cannot be located near one limit at a given time and close to the other limit at the next: her speed would exceed the maximum allowed speed.

B. Experimental Results

The SPIRIT sensor was equipped with a camera in order to overlay, on the acquired images, the position of the person derived from the detector data. The SPIRIT sensor integrates a microcontroller performing acquisition, signal processing and communication tasks; pyroelectric data acquisition is performed every 10ms.

Many experiments were needed to determine the boundaries on the ground: a 1.82m-tall person walked at a short distance

Figure 8. Simulation example with some beams non-crossed by the person

(SPIRIT(10,0); θ=75°; departure(0,5.5); arrival(20,10.5); v=4.2km/h).

Figure 9. Simulation example with some beams non-crossed by the person

(SPIRIT(10,0); θ=80°; departure(5,14.5); arrival(20,4.5); v=4.2km/h).

and at a long distance from the SPIRIT, following a path orthogonal to the symmetry axis of the sensor, in one direction and the other, with a view to determining some points of all the boundaries. Similarly, the person crossed the sensor FOV diagonally to obtain the detection limits (Fig. 10).

Despite some obstacles and implementation problems, we tried to approximate the configuration of Fig. 7: a line was plotted on the ground and the person trained to walk at the same constant speed with a pedometer. Data obtained by simulation and by experiment are given in Table I. It may first be noted that the distance for the two last boundaries is not defined, because our method needs three detection times to calculate the distance between the SPIRIT and the person. Secondly, there are some differences between the simulated and experimental data. This can be explained by the difficulty of walking at the correct speed and keeping it constant, but also by the fact that the sensor attitude settings must be precise. Moreover, each measured time introduces an additional error into the distance formula. Finally, some SPIRIT manufacturing shortcomings may exist in the electronics, the optics and the geometry; we observed this during the experiments, since some ground boundaries were slightly offset from the others.

II. CONCLUSION

In this paper, a new PIR sensor has been presented for human detection and positioning. The digital infrared technology implemented is reliable and makes SPIRIT an inexpensive position-sensing element, working even in total darkness. We modelled the sensor characteristics to simulate its operation and to compare simulation data with real ones. The data obtained from the SPIRIT sensor allow us to extract position information as well as the direction of motion. Some small differences appear between simulations and experiments but


Figure 10. Image with projected-beam symmetry axis overlay.

TABLE I. SIMULATED AND EXPERIMENTAL DETECTION DISTANCES

Projected beam number | Simulation: min. distance between the person and the sensor (m) | Simulation: max. distance between the person and the sensor (m) | Experiment: min. computed distance (m) | Experiment: max. computed distance (m)
1 | 6.894 | 10.833 | 6.333 | 9.951
2 | 6.791 | 10.673 | 6.125 | 9.625
3 | 6.726 | 10.569 | 6.128 | 9.629
4 | 6.695 | 10.521 | 6.003 | 9.433
5 | 6.699 | 10.527 | 6.066 | 9.532
6 | 6.738 | 10.588 | 6.215 | 9.767
7 | 6.812 | 10.705 | 7.264 | 11.416
8 | 6.925 | 10.882 | 7.414 | 11.650
9 | 7.078 | 11.123 | 7.210 | 11.332
10 | 7.277 | 11.435 | 7.693 | 12.089
11 | 7.527 | 11.828 | 8.102 | 12.731
12 | 7.835 | 12.313 | 8.270 | 12.995
13 | 8.214 | 12.908 | 8.977 | 14.106
14 & 15 | Not defined | Not defined | Not defined | Not defined

do not have a large impact on the results. Our modelling might be completed by taking into account the minimum and maximum distances between two boundaries, in order to restrict the possible motion directions and so obtain a better positioning.

Refined positions can also be obtained either by using several networked SPIRIT sensors installed in such a way that their beams intersect, permitting precise locations, or by performing complex geometric processing with hypotheses; this second solution, however, appears to require internal analogue SPIRIT information other than the beam number to reach good positioning quality. Our future work includes multiple-person detection and positioning using a single SPIRIT and then networked ones.



- chapter 10 -

Components, Circuits, Devices & Systems


Study of rotary-laser transmitter shafting vibration for workspace measurement positioning system

Zhexu Liu, Jigui Zhu, Yongjie Ren, Jiarui Lin
State Key Laboratory of Precision Measuring Technology and Instruments
School of Precision Instrument and Opto-Electronics Engineering, Tianjin University
Tianjin, China
[email protected]

Abstract—The wMPS (workspace Measurement Positioning System) is a novel measurement system for indoor large-scale metrology, which is composed of a network of rotary-laser transmitters. The stability of the transmitter's rotating head is a key factor in the measurement accuracy. This article studies the shafting vibration of the transmitter by dividing it into three independent vibration forms: axial vibration, radial vibration and yaw vibration. The transfer functions between them and the measurement accuracy are also presented.

Keywords—indoor large-scale metrology; wMPS; shafting vibration; rotor dynamics

I. INTRODUCTION

As science and technology develop and the needs of large-scale manufacturing and assembly increase, coordinate measurement systems combining multiple angle measurements have been established to achieve large-scale precision measurement, such as theodolite networks, digital photogrammetry, iGPS and wMPS [1]. The wMPS (workspace Measurement Positioning System) [2] is a novel measurement system for indoor large-scale metrology and has been successfully applied in industry thanks to its high accuracy, automation and multi-tasking capability.

The wMPS consists of rotary-laser transmitters and optical receivers. Measurement is achieved with the receivers capturing the scanning angles of the rotary-laser planes emitted from the transmitters. In recent years, the performance and applications of the wMPS have been discussed in a fair amount of detail [3-4]. Moreover, its angular survey performance has also been discussed in [5]. Considering the working principle of the wMPS, it is clear that the stability of the transmitter's rotating head is a key factor in the measurement accuracy, which is primarily determined by the shafting vibration of the transmitter. However, very little work has been conducted on this subject.

In order to improve the accuracy, this article analyses the measurement error of the wMPS under the influence of the shafting vibration. Based on the shafting structure of the transmitter, this paper studies the shafting vibration and divides it into three independent vibration forms: axial vibration, radial vibration and yaw vibration. The character of each form is discussed to establish the relationship between the shafting vibration and the accuracy of the wMPS. Following the analysis of the three forms, the transfer function between the three vibration forms and the measurement error of the wMPS is constructed. The shafting structure of the transmitter can then be improved through the analysis of the transfer function for higher measurement precision.

II. WMPS TECHNOLOGY

The wMPS is a laser-based measurement device for large-scale metrology applications, which is currently under development by Tianjin University, China. As shown in Fig. 1, the configuration of the wMPS is composed of transmitters, receivers, signal processors and a terminal computer.

Figure 1. wMPS configuration

The transmitter consists of a rotating head and a stationary base. With two line-laser modules fixed on the rotating head and several pulsed lasers mounted around the stationary base, the transmitter generates three optical signals: two fan-shaped planar laser beams rotating with the head, and an omnidirectional laser strobe emitted synchronously by the pulsed lasers when the head rotates past a predefined position in every cycle. The receiver captures the three signals and converts them into electrical signals through a photoelectric sensor. The signal processor distinguishes between the electrical signals obtained from different transmitters and extracts the information of the laser planes from them. Subsequently, the information is sent to the terminal computer to calculate the coordinates of the receiver.

978-1-4673-1954-6/12/$31.00 ©2012 IEEE

During measurement, the transmitters are distributed around the working volume and the relative position relationship between them is pre-calibrated through bundle adjustment [6]. They rotate at different speeds so that their signals can be differentiated. When the laser planes emitted from at least two transmitters intersect at a receiver, the scanning angles of the laser planes are exactly known from the information captured by the receiver. The spatial angles of the receiver can then be obtained and the coordinates of the receiver calculated through the triangulation algorithm.

Figure 2. Schematic of the scanning angle measurement

As mentioned previously, the wMPS is based on the scanning angle measurement, whose schematic is illustrated in figure 2. As shown in figure 2(a), the initial position is defined as the position when the head of the transmitter rotates to a predefined position and the pulsed lasers emit the synchronous laser strobe. At the initial position, the receiver captures the synchronous laser strobe and records the initial time. Rotating with the head, the two laser planes scan the measurement space around the transmitter. As shown in figure 2(b), when a laser plane sweeps past the receiver, the time is also recorded. Assuming that the angular velocity of the rotating head is ω, the scanning angle of the laser plane from the initial position to the position where it passes through the receiver can be obtained from the two recorded times [7].
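As a numerical illustration of this measurement principle (not code from the paper), the scanning angle follows from the rotation rate and the two recorded times, and a simplified planar position fix can be obtained by intersecting the bearings from two transmitters; the transmitter coordinates and values below are made-up.

```python
import math

def scanning_angle(omega, t0, t):
    """Scanning angle (rad) swept between the synchronisation strobe at t0
    and the instant t at which the laser plane passes the receiver."""
    return omega * (t - t0)

def intersect_bearings(p1, theta1, p2, theta2):
    """Planar position fix from two bearings measured at transmitters p1, p2."""
    x1, y1 = p1
    x2, y2 = p2
    # Solve p1 + s*(cos theta1, sin theta1) = p2 + u*(cos theta2, sin theta2).
    den = math.cos(theta1) * math.sin(theta2) - math.sin(theta1) * math.cos(theta2)
    if abs(den) < 1e-12:
        raise ValueError("bearings are parallel: no unique intersection")
    s = ((x2 - x1) * math.sin(theta2) - (y2 - y1) * math.cos(theta2)) / den
    return (x1 + s * math.cos(theta1), y1 + s * math.sin(theta1))

# Hypothetical layout: transmitters at (0,0) and (10,0), receiver at (4,3).
th1 = math.atan2(3.0, 4.0)          # bearing seen from the first transmitter
th2 = math.atan2(3.0, 4.0 - 10.0)   # bearing seen from the second transmitter
x, y = intersect_bearings((0.0, 0.0), th1, (10.0, 0.0), th2)
```

The real wMPS solves the 3D problem with two laser planes per transmitter; this planar bearing intersection only conveys the triangulation idea.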

III. SHAFTING VIBRATION ANALYSIS

As described previously, the wMPS is essentially based on the scanning angle measurement. In practice, the accuracy of the scanning angle is affected by many factors, such as the stability of the rotating head, the uniformity of the transmitter's rotating speed and the timing accuracy of the signal processing circuit.

Among these factors, the stability of the transmitter's rotating head is a key factor in the measurement accuracy. But it is too complex to be analysed directly. Therefore, in order to simplify the analysis, we divide the shafting vibration of the transmitter into three independent vibration forms: axial vibration, radial vibration and yaw vibration. The detailed analyses are expounded as follows.

A. Axial vibration

Consider one of the rotating laser planes and assume that the tilt angle between the plane and the rotating shaft is φ; the influence of the axial vibration is shown in figure 3.

Figure 3. Influence of the axial vibration

As shown in figure 3, the coordinate frame of the transmitter is defined as O-XYZ. The rotation shaft of the two laser planes is defined as axis Z. The origin O is the intersection of the laser plane and axis Z. Axis X lies in the laser plane at the initial time (the time when the pulsed lasers emit the omnidirectional laser strobe) and is perpendicular to axis Z, and axis Y is determined according to the right-hand rule.

If there were no axial vibration, the laser plane would pass through the receiver R after sweeping through the angle θ. At this time, the intersection line of the laser plane and plane OXY is OP and the nominal scanning angle is θ = ∠XOP. When axial vibration occurs, the transmitter coordinate frame is shifted to O′-X′Y′Z′ and the laser plane passing through the receiver also changes. As shown in figure 3, the intersection line of this laser plane and plane O′X′Y′ is O′P′. Projecting O′P′ onto plane OXY, we obtain O′P₀; the laser plane O′RP′ can then be treated equivalently as the plane which intersects OXY along the line O′P₀. The actual scanning angle is θ′ = ∠X′O′P₀.

As shown in figure 3, points D and D′ are the projections of the receiver R on planes OXY and O′X′Y′ respectively. We define RD = h, RD′ = h′, ∠XOD = α, OD = O′D′ = d and OO′ = DD′ = Δz. According to the geometrical relationship, the error of the scanning angle is:

Δθ ≈ sin(Δθ) = sin(θ − θ′) = sin[(α − θ′) − (α − θ)]
= sin(α − θ′)·cos(α − θ) − cos(α − θ′)·sin(α − θ)
= (h′ tan φ / d)·√(1 − h² tan² φ / d²) − (h tan φ / d)·√(1 − h′² tan² φ / d²)   (1)


Considering that h − h′ = Δz, we have:

Δθ = ((h − Δz) tan φ / d)·√(1 − h² tan² φ / d²) − (h tan φ / d)·√(1 − (h − Δz)² tan² φ / d²)
= tan φ · [ (h − Δz)·√(d² − h² tan² φ) − h·√(d² − (h − Δz)² tan² φ) ] / d²   (2)

As we can see from equation 2, the error of the scanning angle is impacted by the amplitude of the axial vibration, the height of the receiver and the horizontal distance between the receiver and the transmitter.
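For intuition, equation 2 can be evaluated numerically. The closed form below follows the definitions of h, d, φ and Δz given above and is an assumed reading of equation 2, not a verbatim transcription; the qualitative behaviour matches the text: no error without vibration, and a growing error with the vibration amplitude.

```python
import math

def axial_error(h, d, phi, dz):
    """Scanning-angle error (rad) caused by an axial vibration of amplitude dz.

    h   : receiver height above the transmitter plane O-XY (m)
    d   : horizontal distance between receiver and transmitter (m)
    phi : tilt angle between the laser plane and the rotation shaft (rad)
    Assumed closed form (a reading of equation 2, not a verbatim copy):
      dtheta = tan(phi) * ((h-dz)*sqrt(d^2 - h^2*tan(phi)^2)
               - h*sqrt(d^2 - (h-dz)^2*tan(phi)^2)) / d^2
    """
    tp = math.tan(phi)
    return tp * ((h - dz) * math.sqrt(d * d - (h * tp) ** 2)
                 - h * math.sqrt(d * d - ((h - dz) * tp) ** 2)) / (d * d)

# No vibration gives no error; the error grows with the vibration amplitude.
e0 = axial_error(h=1.5, d=10.0, phi=math.radians(45.0), dz=0.0)
e1 = axial_error(h=1.5, d=10.0, phi=math.radians(45.0), dz=0.001)
e2 = axial_error(h=1.5, d=10.0, phi=math.radians(45.0), dz=0.002)
```

Consistent with the remark above, the same Δz produces a smaller |Δθ| as d grows, so the error depends on the receiver height and horizontal distance as well as on the amplitude.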

B. Radial vibration

As with the axial vibration, the influence of the radial vibration is shown in figure 4.

Figure 4. Influence of the radial vibration

As shown in figure 4, if there is no radial vibration, the laser plane would pass through the receiver R after sweeping through the angle θ. At this time, the intersection line of the laser plane and plane OXY is OP and the nominal scanning angle is θ = ∠XOP.

When radial vibration occurs, assume that its amplitude is r; the transmitter coordinate frame is then shifted to O′-X′Y′Z′. As shown in figure 4, when the laser plane sweeps past the receiver, its intersection line with plane O′X′Y′ is O′P′. We construct a plane passing through the point O which is parallel to the plane O′P′R; the intersection line of this plane and plane OXY is OP₀. It is clear that the actual scanning angle is θ′ = ∠XOP₀.

As shown in figure 4, point D is the projection of the receiver R on plane OXY. We define RD = h, OD = d, O′D = d′, ∠XOD = α, ∠XOO′ = β and OO′ = r. According to the geometrical relationship, the error of the scanning angle is:

Δθ ≈ sin(Δθ) = sin[(α − θ′) − (α − θ)]
= sin(α − θ′)·cos(α − θ) − cos(α − θ′)·sin(α − θ)
= (h tan φ / d′)·√(1 − h² tan² φ / d²) − (h tan φ / d)·√(1 − h² tan² φ / d′²)   (3)

From figure 4, it is easy to see that:

d′ = d − r·cos(α − β)   (4)

Then equation 3 can be rewritten as:

Δθ = (h tan φ / (d − r cos(α − β)))·√(1 − h² tan² φ / d²) − (h tan φ / d)·√(1 − h² tan² φ / (d − r cos(α − β))²)
= h tan φ · [ √(d² − h² tan² φ) − √( (d − r cos(α − β))² − h² tan² φ ) ] / ( d·(d − r cos(α − β)) )   (5)

As we can see from equation 5, the error of the scanning angle is impacted by the relative position of the receiver and the amplitude and direction of the radial vibration.

C. Yaw vibration

The influence of the yaw vibration is shown in figure 5.

Figure 5. Influence of the yaw vibration

As shown in figure 5, if there is no yaw vibration, the laser plane would pass through the receiver R after sweeping through the angle θ. At this time, the intersection line of the laser plane and plane OXY is OP and the nominal scanning angle is θ = ∠XOP.

When yaw vibration occurs, assume that the transmitter coordinate frame rotates around the line OL with amplitude γ. The transmitter coordinate frame is then changed to O′-X′Y′Z′. In this way, when the laser plane sweeps past the receiver, its intersection line with plane O′X′Y′ is O′P′, as shown in figure 5.

As shown in figure 5, points D and D′ are the projections of the receiver R on planes OXY and O′X′Y′ respectively. D₀ is the intersection of the line RD and plane O′X′Y′. ψ is the angle between D₀D′ and axis X. We define RD = h, RD′ = h′, OD = d, ∠XOD = α and ∠XOL = β.

In order to simplify the analysis, we approximately decompose the yaw vibration into a radial vibration and an axial vibration. Therefore, we have:

Δθ = Δθr + Δθa   (6)


For the radial vibration, its amplitude r is D₀D′ and its direction is ψ. According to the geometrical relationship, it is easy to see that:

r = D₀D′ = h′·tan γ   (7)

ψ = β + π/2   (8)

Substituting equation 7 and equation 8 into equation 5, we have:

Δθr = h tan φ · [ √(d² − h² tan² φ) − √( (d − h′ tan γ sin(α − β))² − h² tan² φ ) ] / ( d·(d − h′ tan γ sin(α − β)) )   (9)

For the axial vibration, its amplitude Δz is h − h′. As shown in figure 5, we extend D₀D′ until it intersects OL at the point L₀; D₀L₀ is perpendicular to OL, with foot point L₀. Then we have:

L₀D₀ = L₀D′ + D′D₀ = d·sin(α − β) + h′·tan γ   (10)

Denoting by D₀′ the position of D₀ before the rotation, we also have L₀D₀ = L₀D₀′ and ∠D₀L₀D₀′ = γ; therefore:

D₀D₀′ = 2·L₀D₀·sin(γ/2) = 2 sin(γ/2)·[ d sin(α − β) + h′ tan γ ]   (11)

Moreover, since ∠D′RD₀ = γ, we have:

RD₀ = RD′ / cos γ = h′ / cos γ   (12)

Therefore, we have:

Δz = h − h′ = RD₀ + D₀D₀′ − h′ = h′/cos γ + 2 sin(γ/2)·[ d sin(α − β) + h′ tan γ ] − h′   (13)

Substituting equation 13 into equation 2, we obtain:

Δθa = tan φ · [ (h − Δz)·√(d² − h² tan² φ) − h·√(d² − (h − Δz)² tan² φ) ] / d²   (14)

with Δz given by equation 13.

According to equation 6, we have:

Δθ = Δθr + Δθa
= h tan φ · [ √(d² − h² tan² φ) − √( (d − h′ tan γ sin(α − β))² − h² tan² φ ) ] / ( d·(d − h′ tan γ sin(α − β)) )
+ tan φ · [ (h − Δz)·√(d² − h² tan² φ) − h·√(d² − (h − Δz)² tan² φ) ] / d²   (15)

As we can see from equation 15, the error of the scanning angle is impacted by the amplitude and direction of the yaw vibration and the relative position of the receiver.

IV. CONCLUSIONS

This paper analyses the shafting vibration of the wMPS transmitter by dividing it into three independent vibration forms: axial vibration, radial vibration and yaw vibration. The transfer functions between them and the measurement accuracy are presented. The analysis reveals, in a simple way, the complex relationship between the shafting vibration and the measurement accuracy, and is helpful for further research on and improvement of the shafting structure in the future.

ACKNOWLEDGMENT

This research was supported by Key Projects in the National Science & Technology Pillar Program of China (2011BAF13B04) and the National High-technology Research & Development Program of China (863 Program, 2012AA041205). The authors would like to express their sincere thanks; comments from the reviewers and the editor are very much appreciated.

REFERENCES

[1] W. Cuypers, N. Van Gestel, A. Voet, J.-P. Kruth, J. Mingneau and P. Bleys, “Optical measurement techniques for mobile and large-scale dimensional metrology,” Opt. Laser Eng. vol. 47, pp.292-300, May 2008.

[2] Z. Xiong, J. G. Zhu, Z. Y. Zhao, X. Y. Yang and S. H. Ye, “Workspace measuring and positioning system based on rotating laser planes,” Mechanika. vol. 18(1), pp. 94-98, January 2012.

[3] L. H. Yang, X. Y. Yang, J. G. Zhu and S. H. Ye, "Error analysis of workspace measurement positioning system based on optical scanning," Journal of Optoelectronics.Laser, vol. 21, pp. 1829-1833, December 2010.

[4] Z. Xiong, L. H. Yang, X. H. Wang and K. Zhang, “Application of Workspace Measurement and Positioning System in Aircraft Manufacturing Assembly,” Aeronautical Manufacturing Technology. vol. 21, pp. 60-62, 2011.

[5] Z. Xiong, J. G. Zhu, L. Geng, X. Y. Yang and S. H. Ye, “Verification of horizontal angular survey performance for workspace measuring and positioning system,” Journal of Optoelectronics.Laser. vol. 23, pp. 291-296, February 2012.

[6] B. Triggs, P. McLauchlan, R. Hartley and A. Fitzgibbon, “Bundle adjustment—a modern synthesis,” Vision algorithms: theory and practice. Springer Berlin Heidelberg, pp. 298-372, 2000.

[7] L. H. Yang, X. Y. Yang, J. G. Zhu, Q. Duanmu and D. B. Lao, “Novel Method for Spatial Angle Measurement Based on Rotating Planar Laser Beams,” Chin. J. Mech. Eng.-En. vol. 23, pp. 758-764, October 2010.


Efficient Architecture for Ultrasonic Array Processing based on Encoding Techniques

Rodrigo García, M. Carmen Pérez, Álvaro Hernández, F. Manuel Sánchez, José M. Castilla, Cristina Diego
Electronics Department, University of Alcalá
Alcalá de Henares (Madrid), Spain
rodrigo.garcia, carmen, [email protected]

Abstract— Airborne ultrasonic systems based on phased arrays provide images of the explored area, at the expense of a low scanning speed. Encoding based on complementary sets of sequences allows the simultaneous emission of the beam in several directions, thus increasing the scanning speed as well as the corresponding computational load. This paper presents an efficient FPGA-based architecture for real-time processing of the signals coming from an airborne ultrasonic phased array.

Keywords— Ultrasonic Phased Array; B-scan; Complementary set of sequences; Field-Programmable Gate Array.

I. INTRODUCTION

Ultrasonic Phased Arrays (PA) consist of a set of elements that are activated with different time delays, thereby forming an ultrasonic beam that can be oriented in the desired direction [1]. This kind of system has several applications, from ultrasonic image generation in medicine [2] and non-destructive testing [3] to the construction of environment maps in mobile robotics [4].

To increase the imaging rate, some works have recently proposed encoding techniques applied to the signals emitted by the array, so that the information for each image line to be represented can be overlapped [5]. In these cases, the performance of the final system greatly depends on the correlation properties of the codes. Furthermore, the use of these techniques requires complex processing algorithms and long sequences. This implies a high computational load, which may exceed the limits imposed by the need to work in real time, or demand high-cost platforms.

One approach consists in the use of new encoding schemes based on sequences with zero correlation zones [6][7][8]. These codes, which are mostly derived from Complementary Sets of Sequences (CSS) [9], provide an Interference-Free Window (IFW) in their aperiodic correlation functions. Thus, it is possible to mitigate Inter-Symbol Interference (ISI) and Multiple Access Interference (MAI), as long as the relative delays among the different receptions are within the IFW. Furthermore, these sequences also allow efficient architectures for the detection stage, typically implemented in FPGA (Field-Programmable Gate Array) devices.

This paper presents the real-time implementation of an efficient FPGA-based architecture for ultrasonic signal processing in a phased array, based on encoding with sequences derived from CSS, in order to achieve simultaneous scanning in all directions. The manuscript is organized as follows: Section II briefly describes the proposed architecture and the encoding used in the ultrasonic emission; in Section III the hardware implementation is explained; some experimental results are shown in Section IV and, finally, conclusions are discussed in Section V.

II. SYSTEM OVERVIEW

The goal of the proposed system is to generate B-scan images of the explored area, based on the simultaneous emission, in several directions, of ultrasonic signals encoded for each scan sector (A-scan). The architecture can be observed in Fig. 1. The emitter performs the storage and modulation of the sequences to be sent. Moreover, it generates the delays for every array element in order to produce the desired beam deflection for each sector. The delayed codes are added and then transmitted by the transducer, with the purpose of achieving a simultaneous emission in several directions. The receiver block performs the demodulation and correlation of the received signals for each scanned sector (A-scan). It also performs post-processing and the B-scan image generation.
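The per-element delays mentioned above can be generated with the classic uniform-linear-array steering law, tau_i = i·p·sin(theta)/c. The sketch below is illustrative only; the element pitch and the speed of sound are assumed values not given in this excerpt.

```python
import math

def steering_delays(n_elements, pitch, angle_deg, c=343.0):
    """Per-element firing delays (s) steering a uniform linear array.

    Classic steering law: tau_i = i * pitch * sin(theta) / c, shifted so
    that the smallest delay is zero (all delays must be non-negative).
    pitch is the element spacing (m); c is the speed of sound in air (m/s).
    """
    theta = math.radians(angle_deg)
    raw = [i * pitch * math.sin(theta) / c for i in range(n_elements)]
    t0 = min(raw)
    return [t - t0 for t in raw]

# Hypothetical array: 8 elements, 2.5 mm pitch, beam deflected to +30 deg.
taus = steering_delays(8, 2.5e-3, 30.0)
```

For a multi-sector emission as described above, one delayed, encoded sequence per sector would be generated and the contributions summed per element before digital-to-analogue conversion.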

CDMA (Code Division Multiple Access) techniques have been applied in order to achieve simultaneous emissions [10]. They provide a different, uncorrelated sequence to each user in the same channel, thus allowing independent access. To improve the signal-to-noise ratio (SNR) in the image generation and to reduce MAI, codes with good auto- and cross-correlation properties in the scanned region are required. Among the different alternatives, those codes with an Interference-Free Window (IFW) around the origin of their correlation functions significantly reduce ISI and MAI. It is also possible to adjust the size of the IFW to the explored area, so that a better contrast in B-scan images can be obtained compared to that from other codes. Finally, most of these codes admit efficient correlation structures that reduce the computational load of the detection process and make real-time operation more feasible [6][7][8][11].
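A minimal sketch of the IFW idea, using a length-2 Golay complementary pair instead of the larger complementary sets used in the paper (a deliberate simplification): concatenating the pair with wo zeros in between yields a macro-sequence whose aperiodic autocorrelation sidelobes vanish for every shift up to wo, i.e. an IFW of 2·wo+1 samples.

```python
def acorr(x, k):
    """Aperiodic autocorrelation of x at non-negative shift k."""
    return sum(x[i] * x[i + k] for i in range(len(x) - k))

# Length-2 Golay complementary pair: their autocorrelation sidelobes cancel.
a = [1, 1]
b = [1, -1]
wo = 4                  # number of interpolated zeros (illustrative value)
ms = a + [0] * wo + b   # macro-sequence built from the pair

# Every sidelobe inside the interference-free window is exactly zero,
# because for shifts k <= wo the pair's sidelobes cancel and the two
# sub-sequences do not yet overlap each other.
sidelobes = [acorr(ms, k) for k in range(1, wo + 1)]
```

With the paper's parameters (N = L = 32 and a correspondingly larger wo), the same cancellation property holds for the full complementary sets.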

In this approach, the sequences in every CSS have been concatenated in natural order with wo zeros between them, thus obtaining a larger sequence called a macro-sequence [8]. Firstly, N uncorrelated CSS Sn, 0≤n≤N-1, each one with N sequences sn,m[l], 0≤n,m≤N-1, 0≤l≤L-1, of length L, are generated. Then, the sequences sn,m from each set Sn are linked with a separation of wo zeros between them. As a result, N macro-sequences Msn = [sn,0 wo sn,1 wo … sn,N-1], 0≤n≤N-1, are obtained, with an IFW in their correlation functions with


size 2·wo+1. Fig. 2 shows the generation process for the macro-sequences Msn.

Fig. 2. Generation process for macro-sequences Msn.

The main advantage of macro-sequences, compared to other codes with the same IFW property, such as LS [6] or GPC [7], is the reduction in the number of operations required in correlation. Nevertheless, their process gain is lower, due to the larger number of zeros in the sequence. Using a CSS with length L=N, the final length LMsn for a macro-sequence Msn is (1):

LMsn = N·L + (N-1)·wo    (1)

where N is the number of sequences sn,m in every set Sn, and wo is the number of interpolated zeros. The process gain GP, defined as the ratio between the auto-correlation main lobe and the sequence length LMsn, is (2):

GP = N·L / LMsn    (2)

Fig. 3 shows an auto- and cross-correlation for macro-sequences with N=L=4 and wo=32. Note the IFW around the main correlation lobe, whose size is 2·wo+1=65 samples. As mentioned before, it is desirable to suitably configure the IFW in order to ensure that all possible echoes are received inside it. Thus, the IFW should be adjusted to the dimensions of the scanned area. A reduced IFW size degrades the quality of the generated images, whereas a too large IFW implies long sequences, thus increasing the computational load.

Fig. 3. Auto- and cross-correlation functions for macro-sequences Msn, obtained from N=4 CSS with length L=4 and wo=32.

III. HARDWARE IMPLEMENTATION

The following considerations have been taken into account in the design of the phased array and its architecture [5]:

- Simultaneous scan of up to N=32 different sectors, from -64º to 64º, using 32 sequences with length LMsn=11998 bits.
- Exploration depth dmx=1.5m, from the beam conformation at dmn=0.30m. This implies a half IFW size wo=354 bits.
- BPSK (Binary Phase Shift Keying) modulation with a carrier frequency fc=80kHz, to transmit according to the frequency behavior of the EMFi-based array used [12].
- Emission through an ultrasonic array with E=8 elements.

Fig. 1. General scheme of the proposed architecture.

A. Emitter Block

The emitter block is divided into two stages: on one hand, a digital signal processing module, which concerns the sequence storage, the modulation, and the emission delay generation; on the other hand, an analog stage, which involves digital-analog conversion, as well as the voltage driver required to carry out the emission through the ultrasonic array.

Regarding the first processing module, and considering the memory requirements, a Genesys platform from Digilent Inc., based on a Xilinx Virtex5 LX50T FPGA, has been employed. The block implemented in the FPGA can be observed in Fig. 4. A sampling frequency fS high enough to make the delay generation feasible, with the accuracy required to correctly deflect the beam, is necessary. Therefore, a sampling frequency fS=1.6MHz has been fixed, with a temporal resolution tr=625ns. This frequency value is feasible in current FPGA devices.


Fig. 4. Emitter module implemented in a Xilinx Virtex5 LX50T FPGA.

The binary macro-sequences Msn are stored in the internal memory blocks BRAM (Cn). A control block (Emission Controller) is responsible for accessing the position where the sequence bits are stored, as well as for managing the emission frequency during every whole frame. At the memory output there is a BPSK modulator (BPSK Modulator), which drives another block (Delays8) in charge of inserting the delays into the modulated sequences mn[k], according to each transducer and scan angle. Finally, an adder (SUM32) carries out the sum of all the delayed transmissions mn,e[k], and the DAC controller sends the result se[k] to the digital-analog converter (DAC).
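The delay-insertion step can be pictured as a plain delay-and-sum beam-steering computation. A short numpy sketch follows; the element pitch and steering angle below are illustrative values, not figures taken from the paper:

```python
import numpy as np

C_AIR = 343.0   # speed of sound in air (m/s)
FS = 1.6e6      # emitter sampling frequency fS (Hz); tr = 1/FS bounds delay resolution

def steering_delays(E, pitch, theta_deg):
    """Per-element delays, in samples at FS, that deflect the beam by theta."""
    tau = np.arange(E) * pitch * np.sin(np.radians(theta_deg)) / C_AIR
    tau -= tau.min()                      # keep every delay non-negative
    return np.round(tau * FS).astype(int)

def delay_elements(m, E, pitch, theta_deg):
    """Return the E delayed copies m_{n,e}[k] of a modulated sequence m[k]."""
    d = steering_delays(E, pitch, theta_deg)
    out = np.zeros((E, len(m) + d.max()))
    for e in range(E):
        out[e, d[e]:d[e] + len(m)] = m    # shift the sequence by d[e] samples
    return out
```

In the real design these delayed copies are summed across sectors (SUM32) before the DAC; the temporal resolution tr=1/fS is what limits how finely the delays, and hence the deflection angle, can be set.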

Note that the diagram in Fig. 4 only represents the resources required for the emission of one single Msn through the array. The general scheme, for the simultaneous scan of N=32 sectors, implies replicating the modulators, memories and delay blocks N=32 times. The number of adders and DAC controllers equals the number of elements E=8 in the array. Table I shows the resource consumption of the emitter module in the Virtex5 LX50T FPGA. The main design constraints are determined by the sequence length LMsn, due to the high memory requirements, and by the temporal resolution tr configured in the delay generation, which also influences the delay block size as well as the maximum sampling frequency fS.

As explained in Section IV, an EMFi-based array prototype is used for the experimental tests, where the array elements require a peak-to-peak voltage level of 150V to transmit the sequences [5]. To reach that bipolar voltage level from the 3V unipolar voltage supplied by the DAC, a filtering and amplification circuit has been included.

B. Receiver Block

The reception process has been divided into three stages: acquisition and correlation; data sending; and post-processing. An FPGA-based Genesys platform has also been selected for the low-level signal processing. This processing includes BPSK demodulation and correlation with the emitted macro-sequences Msn. The correlation results tn[k] are sent to a computer for high-level processing (envelope detection and image composition). To manage the delivery of the correlation results tn[k], a Microblaze microprocessor embedded in the FPGA has been proposed. Fig. 5 depicts the architecture implemented in the FPGA for the low-level processing.

TABLE I. RESOURCE CONSUMPTION FOR THE EMITTER MODULE IN A XILINX VIRTEX5 LX50T FPGA.

Block                  Slices          BRAMs
Emission controller    62 (0.86%)      0 (0.00%)
Cn (x32)               0 (0.00%)       64 (53.33%)
BPSK modulator (x32)   576 (8.00%)     32 (26.67%)
Delays8 (x32)          1984 (27.56%)   0 (0.00%)
SUM32 (x8)             928 (12.89%)    0 (0.00%)
DAC Ctrl. (x8)         260 (3.61%)     0 (0.00%)
Total                  3810 (52.92%)   96 (80.00%)

Fig. 5. Block diagram of the low-level processing at the receiver.

The ultrasonic signal r[k] is captured by a microphone and acquired by an analog-to-digital converter (ADC). The acquired signal r[k] is asynchronously demodulated (BPSK Demod.) to obtain d[k]. Then, a correlation block (Correlator CSS) searches for any of the N=32 emitted macro-sequences Msn. It uses the efficient scheme in [7] to obtain the correlations tn[k] with the N=32 macro-sequences simultaneously.

The area of interest in the correlation is the IFW, which is stored for later processing. The memory necessary to store all the desired data is wo·D·Of·N=2Mbit, where wo is the size of the IFW, D is the output data width, Of is the oversampling factor, and N is the number of macro-sequences. This amounts to 104% of the FPGA internal memory, so the data are stored in external DDR2 memory. These data must be accessible by the Microblaze, so a multiport memory controller (MPMC) has been used, with the Xilinx DMA protocol based on an NPI (Native Peripheral Interface).

After all the correlation results tn[k] have been stored, a Microblaze interrupt is raised to indicate that the data are ready. To transmit the data to a PC, a TCP/IP communication link has been established. Finally, envelope detection is applied to every correlation to obtain an A-scan line; these A-scan lines provide a dB-intensity B-scan image. Table II shows the resource consumption for the low-level processing and the Microblaze.
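The high-level post-processing (envelope detection plus dB image composition) can be sketched as follows; the FFT analytic-signal envelope here is our stand-in for whatever envelope detector the authors actually implemented:

```python
import numpy as np

def envelope(t):
    """Envelope of a correlation output t[k] via the FFT analytic signal
    (equivalent to scipy.signal.hilbert followed by abs)."""
    n = len(t)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0        # double positive frequencies, zero negatives
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(np.fft.fft(t) * h))

def bscan_db(correlations, floor_db=-10.0):
    """Stack the per-sector A-scan envelopes into a dB-scaled B-scan image."""
    img = np.array([envelope(t) for t in correlations])
    img = 20.0 * np.log10(img / img.max() + 1e-12)   # normalize to 0 dB peak
    return np.clip(img, floor_db, 0.0)               # e.g. -10 dB contrast
```

Each row of the resulting array is one A-scan line; clipping at the contrast floor reproduces the -10 dB / -5 dB views discussed for Figs. 7 and 8.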

IV. EXPERIMENTAL TESTS

First experimental tests have been carried out with an EMFi-based array prototype [5] composed of E=8 elements with dimensions 0.26x4 cm2 (chosen to keep the pitch ratio in order to avoid grating lobes), an element gap of 1mm, and a maximum operating frequency of 47kHz. This allows eight angular sectors from -52º to 60º to be covered. Therefore, N=8 macro-sequences are emitted instead of the 32 originally designed in the system. Due to the geometrical configuration of the array, the carrier frequency fc has been reduced to 40kHz. The IFW has been kept from 0m to 1.5m.

TABLE II. RESOURCE CONSUMPTION FOR THE RECEIVER MODULE IN A XILINX VIRTEX5 LX50T FPGA.

Block               Slices           BRAMs           DSPs
ADC Ctrl.           15 (0.21%)       0 (0.00%)       0 (0.00%)
BPSK demodulator    68 (0.94%)       1 (0.83%)       1 (2.08%)
Delays              475 (6.60%)      62 (51.67%)     0 (0.00%)
CSS correlator      4945 (68.81%)    0 (0.00%)       0 (0.00%)
Ctrl. DMA (NPI)     102 (1.67%)      0 (0.00%)       0 (0.00%)
Microblaze          1295 (17.99%)    22 (18.33%)     3 (6.25%)
Total               6953 (96.57%)    85 (70.83%)     4 (8.33%)

A test has been conducted for the scenario shown in Fig. 6, with two metal poles of 6x6cm cross-section: one (object 1) placed at 30cm and 20° from the axial axis of the array, and another (object 2) at 45cm and -40°. Fig. 7 depicts the B-scan image with -10dB contrast. In the detection of object 1, secondary lobes appear after the main lobe, caused by the multipath effect and by the proximity of the reflector. In Fig. 8, the B-scan image with -5dB contrast is shown, where the secondary lobes caused by the multipath effect are no longer observable.

Fig. 6. Experimental set-up.

Fig. 7. B-scan image for the scenario in Fig. 6 with -10dB contrast.

V. CONCLUSIONS

An efficient processing architecture for an airborne ultrasonic phased array has been presented, allowing simultaneous scanning in several directions by emitting macro-sequences derived from CSS. The use of these codes increases the image generation rate. The proposed FPGA implementation can achieve real-time processing of the ultrasonic signals from a phased array. First experimental tests with an EMFi ultrasonic array have validated the design. Future work will deal with a further comparison with other existing approaches, as well as with encoding improvements.

ACKNOWLEDGEMENTS

This work has been supported by the University of Alcalá (SIMULTANEOUS project, ref. UAH2011/EXP-003) and the Spanish Ministry of Economy and Competitiveness (LORIS project, ref. TIN2012-38080-C04-01, and DISSECT-SOC project, ref. TEC2012-38058-C03-03).

Fig. 8. B-scan image for the scenario in Fig. 6 with -5dB contrast.

REFERENCES

[1] O. T. Von Ramm, S. W. Smith, “Beam Steering with Linear Arrays”, IEEE Trans. on Biomedical Engineering, vol. BME-30, no. 8, pp. 438-452, 1983.

[2] S. W. Smith, H. G. Pavy, O. Von Ramm, “High-speed ultrasound volumetric imaging system. Part I: Transducer design and beam steering”, IEEE Tr. on Ultrasonics, Ferroelectrics, and Frequency Control, vol. 38, no. 2, pp. 100-108, 1991.

[3] M. Parrilla, P. Nevado, A. Ibañez, J. Camacho, J. Brizuela, C. Fritsch, “Ultrasonic imaging of solid railway wheels”, Proc. of the IEEE Ultrasonics Symposium, China, pp. 414-417, 2008.

[4] D. T. Pham, Z. Ji, A. Soroka, “Ultrasonic distance scanning techniques for mobile robots”, Int. J. of Computer Aided Engineering and Technology, vol. 1, no. 2, pp. 209-224, 2009.

[5] C. Diego, A. Hernández, A. Jiménez, F. J. Álvarez, R. Sanz. “Ultrasonic array for obstacle detection based on CDMA with Kasami codes”, Sensors, vol. 11, pp. 11464-11475, 2011.

[6] C. Zhang, S. Yamada, M. Hatori, “General method to construct LS codes by complete complementary sequences”, IEICE Trans. on Communications, vol. E88-B, no. 8, pp. 3484-3487, 2005.

[7] H.-H. Chen, Y.-C. Yeh et al., “Generalized pairwise complementary codes with set-wise uniform interference-free windows,” IEEE Journal on Selected Areas in Communications, vol. 24, no.1, pp. 65-74, 2006.

[8] M. C. Pérez, R. Sanz, J. Ureña, A. Hernández, C. De Marziani, F. J. Álvarez, “Correlator implementation for Orthogonal CSS used in an ultrasonic LPS”, IEEE Sensors J., vol. 12, no. 9, pp. 2807-2816, 2012.

[9] C. C. Tseng, C. L. Liu, “Complementary sets of sequences”, IEEE Tr. on Information Theory, IT-18(5), pp. 644-652, 1972.

[10] H.-H. Chen, “Next Generation CDMA Technologies”, John Wiley & Sons, Ltd, West Sussex, England, 2007.

[11] M. C. Pérez, J. Ureña, A. Hernández, C. De Marziani, A. Jiménez, “Optimized Correlator for LS Codes-Based CDMA Systems”, IEEE Communications Letters, vol. 15, no. 2, pp. 223-225, 2011.

[12] M. Paajanen, J. Lekkala, K. Kirjavainen, “ElectroMechanical Film EMFi. A New Multipurpose Electret Material”, Sensors and Actuators A, vol. 84, pp. 95-102, 2000.


- chapter 11 -

Communication, Networking & Broadcasting


2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31st October 2013

Using Double-peak Gaussian Model to Generate Wi-Fi Fingerprinting Database for Indoor Positioning

Lina Chen
College of Information Science and Technology, ECNU, Shanghai, China
College of Mathematics, Physics and Information Engineering, Zhejiang Normal University, Jinhua, China
School of Surveying and Geospatial Engineering, UNSW, Sydney, Australia
[email protected]

Chunyu Miao
College of Xingzhi, Zhejiang Normal University, Jinhua, China
[email protected]

Binghao Li
School of Surveying and Geospatial Engineering, UNSW, Sydney, Australia
[email protected]

Zhengqi Zheng
College of Information Science and Technology, ECNU, Shanghai, China
[email protected]

Jianmin Zhao
College of Mathematics, Physics and Information Engineering, Zhejiang Normal University, Jinhua, China
[email protected]

ABSTRACT

Location fingerprinting using Wi-Fi signals has been very popular and is a well-accepted indoor positioning method. The key issue of the fingerprinting approach is generating the fingerprint radio map. Limited by the practical workload, only a few samples of the received signal strength are collected at each reference point. Unfortunately, so few samples cannot accurately represent the actual distribution of the signal strength from each access point. This study finds that most Wi-Fi signal distributions have two peaks. According to this new finding, a double-peak Gaussian arithmetic is proposed to generate the fingerprint radio map. This approach requires little time to receive the Wi-Fi signals, and it is easy to estimate the parameters of the double-peak Gaussian function. Compared to the Gaussian function and the histogram method for generating a fingerprint radio map, this method better approximates the occurrence distribution of the signal. This paper also compares the positioning accuracy of the three radio maps using K-nearest-neighbor theory; the test results show that the positioning distance error utilizing the double-peak Gaussian function is better than with the other two methods.

KEYWORDS: Indoor positioning; Double-peak Gaussian Arithmetic (DGA); Wi-Fi fingerprinting


1. INTRODUCTION

Location Based Services (LBS) are mobile applications that depend on mobile devices and the mobile network to calculate the actual geographical location of a mobile user and, furthermore, to provide the service information that users need, related to their current real-space position [1][2]. One of the key issues for LBS is positioning accuracy, particularly since the requirement for positioning accuracy indoors is usually higher than outdoors. In outdoor applications, using a Global Navigation Satellite System (GNSS) such as the Global Positioning System (GPS) is sufficient because it provides location accuracy within several meters. However, GPS is still not suitable for indoor positioning, as the GPS signal cannot penetrate the walls of buildings [3][4].

Indoor positioning technology has attracted huge interest from the research community. There are many techniques that can be used for indoor positioning, such as the angle and time difference of arrival of a signal. However, significant multipath effects and non-line-of-sight environments can lead to inaccurate angle and time estimations. The fingerprinting technique has been accepted as a simple and effective approach that can provide location-aware capability for devices equipped with WLAN (such as Wi-Fi) in indoor environments [5].

Wi-Fi is now generally adopted for indoor positioning due to widely deployed access points (APs). Mobile devices are equipped with a Wi-Fi chipset as standard, and Wi-Fi signals are typically available in most buildings. Additionally, Wi-Fi as an existing infrastructure can reduce the cost of implementing location-dependent services, and by only utilizing signal strengths (SS) it is easy to obtain the measurements required to determine a user's position. These advantages have made using Wi-Fi for indoor positioning very popular.

Because of non-line-of-sight (NLOS) propagation and multipath effects, it is very difficult to convert an SS measurement to a range measurement accurately; fingerprinting is usually utilized to overcome this problem [6]. The fingerprinting approach is considered a better method for ubiquitous indoor positioning, as it exploits NLOS propagation and multipath by mapping location to the received signal strength indicator (RSSI) [5],[7].

Although the RSSI can be chosen as the characteristic value to refer to indoor location in fingerprinting positioning systems, the actual distribution of RSSI for IEEE 802.11a/b/g itself is rarely known. The location fingerprints can be as simple as patterns of averaged RSSI or distributions of RSSI from a number of APs. In the literature, systems that maintain or estimate distributions of RSSI for each location usually have better positioning performance [8]. A lognormal distribution was assumed to model the RSSI [9]. Shape filtering on the empirical distribution was utilized to estimate the RSSI distribution [10]. Kamol et al. compared measured data to a Gaussian model to see how well a Gaussian model can fit the data [11]. Another solution used the Weibull function to approximate the Bluetooth signal strength distribution in the data training phase of location fingerprinting [12]. All this shows that improved understanding of RSSI and approximations of the actual RSSI distributions are key issues for improving the performance of WLAN indoor positioning.

The average SS of each Wi-Fi AP measured at each reference point (RP) is used to generate the radio map. Since the variation of the SS measured at each point is large, the RSSI distribution is usually not close to a Gaussian or Weibull distribution. The distribution typically varies at different locations, and at the same location when the orientation of the antenna changes [13][14]. In our recent research, some new characteristics of Wi-Fi signals were found, as listed below and shown in Fig. 1.

1) The vast majority of the distributions of received signal strength (RSS) from APs consist of two peaks and a long tail, as the red line in Fig. 1 shows. The double peaks are especially obvious. This has not been mentioned in previous literature.

2) A Gaussian function does not approximate the distribution of the RSS well, as the black line in Fig. 1 shows. The Gaussian function is fit using the same data as the occurrence shown by the red line. Unfortunately, the shapes of the two lines are not very similar.

3) The part with the poorest approximation lies in the double-peak region of the data. About 90 percent of the signals are in the double-peak region, and nearly 50 percent are in peak 1, as illustrated in Fig. 2. The large difference between the two lines may lead to larger errors in location fingerprinting for indoor positioning.

Fig. 1. A new distribution characteristic of signals, with two peaks (peak 1, peak 2) and a long tail, together with the non-matching Gaussian fit

Fig. 2. Distribution proportion of the signals in Fig. 1 (peak 1: 51%, peak 2: 42%, tail: 7%)


Generating the radio map is an essential prerequisite for a location fingerprint. The more measurements obtained at each RP, the better the positioning performance. However, more measurements mean more time is required to complete the more intensive computational task. In practice, only a few samples of the RSSI are typically collected at each RP, and these limited samples cannot represent the actual signal distribution well. This paper presents a new approach using the double-peak Gaussian arithmetic (DGA) to approximate the Wi-Fi SS distribution according to the observed characteristics. The experimental results show that location-fingerprint indoor positioning using DGA is better than the Gaussian approach. The study also improves the efficiency of generating a fingerprinting database.

2. EXPERIMENT DESIGN

2.1 Location Fingerprinting

The last several decades have seen revolutionary development of Global Navigation Satellite Systems (GNSS). Positioning and navigation are almost perfect outdoors. However, GNSS cannot receive enough good-quality satellite signals inside buildings or underground mines, which leads to failure indoors.

Indoor positioning technologies can be based on random signals. Random signals, such as Wi-Fi signals, are not intended for positioning and navigation. These signals are designed for other purposes, and given the harsh reality of signal propagation in the indoor environment, achieving a high degree of accuracy is a very difficult, if not impossible, task [15][16]. Fingerprinting is widely used where direct line-of-sight signal propagation is not typical. The low cost and wide coverage of such methods are their main advantages. Many positioning technologies require the deployment of infrastructure, such as positioning systems using infrared, ultrasound [17][18] and ultra-wideband [19]. Deploying new infrastructure is not only costly, but its coverage is also usually very limited, as with hot-spot modes. Such technologies typically have to be utilized if a reliable and accurate positioning result is required. An obvious advantage of using Wi-Fi signals for indoor positioning is that no pre-deployed infrastructure is needed, which makes such a system cost-effective, even though only signal strengths (SS) are available.

As a standard networking technology, Wi-Fi access points (APs) are widely deployed. Modern mobile devices are equipped with Wi-Fi chips, and Wi-Fi signals are easily available in almost every building, which makes using Wi-Fi for indoor positioning a very practical means.

Positioning based on Wi-Fi location fingerprinting consists of two phases: the off-line data training phase and the on-line location phase, as shown in Fig. 3. The off-line phase builds a radio map for the targeted area based on RSSI, and the on-line phase calculates the user's location based on the fingerprints stored in the radio map. For the off-line training phase, the targeted area is divided into cells which are considered as reference points, and the coordinates of the reference points are determined in advance. Then the RSSI at each reference point from all access points are collected, processed and stored as fingerprints in the radio map. During the on-line location phase, the unknown position of a mobile user is estimated by comparing the current RSSI measurements with the data in the radio map [20][21].

Fig. 3. Location fingerprinting arithmetic (data training phase and positioning phase)

2.2 Experiment Condition and Device

This study conducted an experiment in a six-storey office building. There are a total of 46 access points (APs) placed to serve most areas. All access points inside this building support IEEE 802.11a/b/g wireless local area network (WLAN) cards. Measurements were made in one 45-square-meter room on the fourth floor. Four reference points (RPs) and five test points (Ti, i=1, 2, 3, 4, 5) are arranged as illustrated in Fig. 4.

Fig. 4. Condition of the experiment (five test points Ti (i=1, 2, 3, 4, 5) and four RPs)

A standard laptop computer was used to collect the Wi-Fi signals at all reference points and test points for the whole experiment. Table 1 lists the device and chipset information and the standard of the wireless local area network. Note that the results of this paper only relate to the wireless device used in the experiment. They may be the same with other computer hardware, but verifying that requires additional study beyond the scope of the current work.

Table 1. Experimental equipment and communication standards used


3. DISCUSSION OF EXPERIMENTAL RESULTS

3.1 The Double-peak Gaussian Arithmetic (DGA) and Gaussian Function

As is commonly known, Gaussian fitting is a traditional method. Its probability density function can be expressed as:

F(x) = (1 / (sqrt(2π)·σ)) · exp(−(x − u)² / (2σ²)),  σ > 0    (1)

where x is the variable of the function, u is the mean of x, and σ is the standard deviation of x. In this paper, the RSSI of each AP was divided into two parts according to the minimum value between the two peaks. Each part was then regarded as a Gaussian function, and finally the two functions were added. Judging from the distribution proportion of the signals of the two peaks shown in Fig. 2, the weight given to each Gaussian was 1/2. The DGA probability density function is defined as function (2), where u1 and σ1 are the mean and the standard deviation of the signals of part 1, and u2 and σ2 belong to part 2.

F(x) = (1/2) · (1 / (sqrt(2π)·σ1)) · exp(−(x − u1)² / (2σ1²)) + (1/2) · (1 / (sqrt(2π)·σ2)) · exp(−(x − u2)² / (2σ2²)),  σ1, σ2 > 0    (2)
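A minimal sketch of fitting function (2) in this way; the function names are ours, and the split value is assumed to come from the valley of the occurrence histogram, with both weights fixed at 1/2 as in the paper:

```python
import numpy as np

def fit_dga(rssi, split):
    """Split the RSSI samples at the valley between the two peaks and fit
    one Gaussian (mean, std) to each part, as in function (2)."""
    rssi = np.asarray(rssi, dtype=float)
    part1 = rssi[rssi < split]
    part2 = rssi[rssi >= split]
    return (part1.mean(), part1.std()), (part2.mean(), part2.std())

def dga_pdf(x, part1, part2):
    """Function (2): equal-weight (1/2) sum of the two fitted Gaussians."""
    g = lambda x, u, s: np.exp(-(x - u) ** 2 / (2 * s ** 2)) / (s * np.sqrt(2 * np.pi))
    (u1, s1), (u2, s2) = part1, part2
    return 0.5 * g(x, u1, s1) + 0.5 * g(x, u2, s2)
```

The radio map then needs to store only the four parameters (u1, σ1, u2, σ2) per AP per reference point instead of the raw samples, which is what keeps the training phase cheap.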

3.2 The Comparison of DGA and Gaussian Function

Using the Gaussian function to generate radio maps for location-fingerprinting indoor positioning is not a new method [20]. Since the variation of the measured SS is large at each point, the probabilistic approach based on the Gaussian distribution was developed to achieve more accurate results. Unfortunately, the distribution of the SS is non-Gaussian, as observed in Section 1. We attempt to characterize the properties of the indoor received signal strength, and the results given in Fig. 5 provide preliminary guidelines to better understand the nature of RSSI from an indoor positioning perspective. In Fig. 5, the red dashed line is the occurrence distribution of the RSSI, the blue solid line is the probability distribution derived from the double-peak Gaussian fit to the occurrence RSSI, and the green solid line shows the probability distribution derived from the Gaussian-function fit to the occurrence RSSI. It can be seen from Fig. 5 that the probability distributions estimated with the double-peak Gaussian solution are significantly better than those obtained from the conventional Gaussian function approach.

Fig. 5. Comparison of the Gaussian function and the double-peak Gaussian with the occurrence distribution of RSSI

3.3 The Location Fingerprinting Using DGA

The data collected in this experiment was further utilized to produce location fingerprints. Three groups of radio maps were generated, using the Gaussian function, the traditional histogram, and the double-peak Gaussian arithmetic proposed in this study.

To compare the radio maps generated here, the K-weighted-nearest-neighbor (KWNN, K=4) algorithm has been selected. This test uses the inverse of the signal distance as the weight. Fig. 6 shows the location error using the three radio maps for the five test points. The blue, green and red solid lines respectively stand for the Gaussian, histogram and double-peak Gaussian arithmetic. Clearly, the positioning accuracy using the double-peak Gaussian method is better than the other two approaches at all test points.
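The KWNN step with inverse signal-distance weights can be sketched as follows; the data layout (fingerprints as coordinate/RSSI-vector pairs) is a hypothetical choice of ours, and only the K=4 inverse-distance weighting rule is taken from the text:

```python
import numpy as np

def kwnn_locate(radio_map, observed, k=4, eps=1e-6):
    """Estimate (x, y) from the k fingerprints nearest in signal space,
    each weighted by the inverse of its signal distance to the observation."""
    coords = np.array([c for c, _ in radio_map], dtype=float)
    sigs = np.array([s for _, s in radio_map], dtype=float)
    # Euclidean distance in RSSI space between the observation and each RP.
    dist = np.linalg.norm(sigs - np.asarray(observed, dtype=float), axis=1)
    nearest = np.argsort(dist)[:k]
    w = 1.0 / (dist[nearest] + eps)              # inverse signal distance
    xy = (coords[nearest] * w[:, None]).sum(axis=0) / w.sum()
    return tuple(xy)
```

An observation identical to one stored fingerprint gets a near-infinite weight for that RP, so the estimate collapses onto its coordinates, which is the intended limiting behavior of inverse-distance weighting.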

Fig. 6. Distance errors at each test point using the three radio maps

The average distance error for the whole, horizontal and vertical directions is listed in Table 2. The test results show that the positioning accuracy using the double-peak Gaussian approach

Vendor      Intel Corporation
Model       Advanced-N 6205
Chipset     Intel
Interface   PCI-E
Standards   IEEE 802.11a/b/g


is improved in every direction. The largest improvement in mean distance error is about 28%; in the horizontal direction it is 56%, and in the vertical direction 38%. Even the smallest improvement across all directions was 26% relative to the other methods.

Table 2. List of distance errors (unit: m)

                 Gaussian   Histogram   Double-peak Gaussian
Mean (D1)        1.50       1.45        1.08
Horizontal (D2)  0.89       0.96        0.42
Vertical (D3)    1.00       0.92        0.62

In order to better assess the double-peak Gaussian arithmetic, the maximum and minimum distance errors for each approach and every direction were compared. Table 3 lists the results, from which two conclusions can be drawn: both the maximum and the minimum errors are improved using the double-peak Gaussian technique, and the biggest absolute distance error decreased by more than 0.8 meters.

Table 3. List of maximum and minimum distance errors (unit: m)

         Gaussian   Histogram   Double-peak Gaussian
Max D1   2.18       1.94        1.38
Max D2   1.70       1.68        1.07
Max D3   1.98       1.71        1.35
Min D1   0.32       0.43        0.24
Min D2   0.23       0.15        0.12
Min D3   0.23       0.22        0.04

4. CONCLUSION

Because of the newly observed characteristic (two peaks), this paper presents a new solution using the double-peak Gaussian arithmetic to approximate the Wi-Fi signal strength distribution in the off-line training phase of location fingerprinting. This approach makes it easy to estimate the parameters of the double-peak Gaussian function. Compared to the Gaussian function, the double-peak Gaussian utilizes two peaks to describe the distribution over the entire RSSI domain. This research indicates that the reliability and accuracy of the fingerprint radio map improve with the double-peak Gaussian function. Position estimation based on K-nearest-neighbor theory was applied in the positioning phase to the histogram, Gaussian and double-peak Gaussian radio maps. The test results show that positioning using the double-peak Gaussian arithmetic performs better than the other two fingerprint methods.

Although this test shows better results, there are still additional concerns. First, this experimental test bed covers only a small office; the distance between the reference points and test points is relatively small, which makes the distance error less obvious. Second, using the double-peak Gaussian function to build a fingerprinting database has not been applied in other WLANs or in open areas to verify its functionality. Furthermore, the SS measured at each point has a large variation; it even varies at the same point at different times. Addressing these issues is the subject of future work.

ACKNOWLEDGEMENTS

This work was partially supported by the pre-research project on key technologies for container intelligent logistics based on the BeiDou satellite system, funded by the Science and Technology Commission of Shanghai Municipality (12511501102); by the project Research on an Authentication Platform for Cloud Computing based on the Internet of Things, funded by the National Natural Science Foundation of China (61272468); and by the project on a high-gain, low-cost, miniaturized, multimode substrate-integrated satellite navigation antenna, funded by the Shanghai Municipal Commission of Economy and Informatization.

2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31st October 2013

Indoor Positioning using Ultrasonic Waves with CSS and FSK Modulation for Narrow Band Channel

Alexander Ens, Fabian Hoeflinger and Leonhard Reindl
Laboratory for Electrical Instrumentation (EMP)
Department of Microsystems Engineering (IMTEK)
University of Freiburg, Germany
Email: [email protected]

Johannes Wendeberg and Christian Schindelhauer
Chair for Computer Networks and Telematics (CONE)
Department of Computer Science (IIF)
University of Freiburg, Germany
Email: [email protected]

Abstract—We propose a transmission scheme for localization based on the exchange of data between the transmitter and receiver. The ultrasonic signal is used twice: first for indoor localization by synchronization of the transmitter with the receiver, and second to transmit additional information to improve the localization.

Our approach codes the information with a combination of chirp spread spectrum (CSS) signals and frequency shift keying (FSK). The method avoids fast phase changes and shifts of the ultrasonic wave, which results in a narrow band characteristic.

Index Terms—FSK; CSS; Ultrasonic; Communication; Localization

I. INTRODUCTION

In our everyday life it is important to know the actual position of things. Interest in localization services is growing, and there is a huge number of possible applications, e.g. the navigation of shopping carts in supermarkets. Localization systems based on ultrasound are very cheap, have a low complexity compared to radio frequency systems, and achieve good position accuracy with simple hardware. Because the speed of sound is about 10^6 times slower than the speed of light, the position can be determined by time difference of arrival (TDOA) methods with low sampling rates of the received signal and without an additional intermediate-frequency mixer.

The disadvantage of ultrasound is absorption, and therefore attenuation of the transmitted signal, by the air. This attenuation furthermore depends on the temperature and humidity of the air, and sound noise from industry and traffic disturbs the ultrasonic signal. Another point to keep in mind is the strong reflection at walls and plane surfaces, which causes additional echoes that disturb the signal and reduce the signal-to-noise ratio (SNR) at the receiver.

To mitigate the absorption of the air, a low carrier frequency can be used for the transmission [1]. To avoid distortion of the signal by echoes, a guard interval provides a silent pause before the next signal is transmitted.

A simple localization system has one transmitter and at least three receivers to determine the 2D position of the transmitter by TDOA. To distinguish between multiple transmitters, the transmitted signals need additional information on the signal origin, i.e. the identification of the transmitter. The receiver can then determine the origin of the signal and map the time of arrival to the transmitter. The calculation of the position is thus augmented from a pure TDOA problem to data transmission plus TDOA.

A possible solution is to give each transmitter a different frequency band. Yet this is very expensive, because it needs a broadband receiver and the free frequency bands are limited. Another modulation scheme is chirp spread spectrum (CSS) [2]. Chirp modulation avoids destructive interference of the echoes at the receiver by linear frequency modulation, so the signal cannot vanish at the receiver. Further advantages of CSS are its robustness against Doppler shift and the good detection of the center of the chirp sequence by correlation. However, CSS modulation needs fast phase changes and therefore a higher bandwidth. Gaussian Minimum Shift Keying (GMSK) overcomes the problem of fast phase switching by rounding the phase transitions [3].

II. SYSTEM DESCRIPTION

In our measurement setup we place the receivers at the top of the room, and the transmitters are mobile robots. The positions of the receivers are known, so the position of the signal origin can be calculated by established TDOA algorithms.

The narrow band transmitter device used has its resonance frequency at about 39 kHz, and the receiver at about 41 kHz. Therefore, to get the maximum out of the transmission devices, a band of 2 kHz is used.

The symbol set consists of two continuous sinusoids with constant frequencies f0 and f1, an "up" chirp and a "down" chirp. The symbol length is the inverse of the used frequency bandwidth: T = 1/Δf = 0.5 ms.

The first symbol in the frame serves for precise synchronization and is therefore only used at the beginning of the frame. The synchronization symbol is an "up" and "down" chirp within the duration of one symbol. The next symbol is an "up" chirp for logic 1 or a constant sinusoid with frequency f0 for logic 0. Each following symbol depends on the previous symbol; Table I shows the mapping of the symbols depending on the previous bit. So the data is coded in the frequency by


a constant sinusoid or by chirps. The modulation of the frequency over time for the bit sequence 0011000 is shown in Figure 1.

Instead of using two frequency sources for the FSK, we use only one sinusoid source and change the phase slope smoothly.

Table I. Symbol mapping for combined FSK and CSS modulation

Previous Data Bit   Current Data Bit   Symbol
0                   0                  Constant sinus with frequency f0
0                   1                  "Up" chirp from frequency f0 to f1
1                   0                  "Down" chirp from frequency f1 to f0
1                   1                  Constant sinus with frequency f1
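As a sketch, the mapping of Table I can be expressed as a tiny differential encoder. This is an illustrative reconstruction: the dictionary layout and function name are ours, while the implicit previous bit "0" for the first data bit follows the text above.

```python
# Symbol mapping from Table I: the data is coded differentially in
# frequency, so each symbol depends on the previous and current bit.
SYMBOLS = {
    ("0", "0"): "const f0",
    ("0", "1"): "up-chirp f0->f1",
    ("1", "0"): "down-chirp f1->f0",
    ("1", "1"): "const f1",
}

def encode(bits, first="0"):
    """Map a bit string to the frame's symbol sequence. The frame starts
    with the sync symbol; the previous bit is taken as '0' before the
    first data bit, matching the description in the text."""
    frame = ["sync (up+down chirp)"]
    prev = first
    for b in bits:
        frame.append(SYMBOLS[(prev, b)])
        prev = b
    return frame

print(encode("0011000"))
```

Running this on the bit sequence 0011000 reproduces the symbol sequence shown in Figure 1.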

[Figure 1 plots the frequency of the transmitted signal (39 to 41 kHz) over time (0 to 4 ms): the sync symbol followed by the seven data symbols for the bits 0011000.]

Figure 1. Combined CSS and FSK modulation for bit sequence 0011000

III. SIMULATION RESULTS

The needed bandwidth of the transmission scheme is simulated with 10,000 frames of 20 symbols each, for the combined FSK and CSS modulation and for pure CSS modulation; the result is shown in Figure 2. The black line is the ultrasonic channel (including transmitter and receiver) measured with a vector analyzer. The power spectrum of the combined modulation (blue dashed line) fits the channel characteristic (black line) very well.

[Figure 2 plots the power spectrum in dB, 10·log10(|s|^2 / E[|s|^2]), from -20 to 20 dB over the frequency range 30 to 50 kHz, for the measured ultrasonic channel, the CSS modulation, and the combined FSK and CSS modulation.]

Figure 2. Bandwidth comparison of CSS modulation and proposed modulation with FSK and CSS

The spectrum of the received signal is the product of the channel spectrum and the modulated signal spectrum. The result is a Gaussian-like shape of the spectrum, because the spectrum of the combined modulation, unlike the CSS spectrum, does not have the periodic minima of the sinc pulse. One important property of a Gaussian shape in the frequency domain is its minimal time-bandwidth product [4].

A further simulation, shown in Figure 3, gives the bit error rate (BER) over the energy per bit divided by the noise power. The BER of the combined modulation lies exactly between bipolar modulation, like binary orthogonal keying (BOK) CSS [5], and unipolar modulation, like on-off keying (OOK) [4]. The gain over unipolar modulation is about 1.5 dB. The reason is that the chirp signal is not orthogonal to the constant sinusoid signal.

[Figure 3 plots the bit error rate (BER), from 10^0 down to 10^-8, over Eb/N0 from 0 to 20 dB, for the combined FSK and CSS modulation, binary orthogonal keying (BOK) FSK modulation, and unipolar modulation.]

Figure 3. Bit Error Rate over Bit-Energy per Noise power

IV. CONCLUSION AND DISCUSSION

In this paper, we presented a combined FSK and CSS transmission scheme that uses the available bandwidth without frequency shifts and with smooth phase changes.

The bit error rate can be further decreased by applying the Viterbi algorithm to the estimated data. The correlation coefficient between the signal and the symbol set can then be used as a metric for the path calculation in the trellis diagram.

Furthermore, the synchronization can be extended to synchronize over all sweeps in the signal. This can further improve the synchronization and the localization accuracy.

ACKNOWLEDGEMENT

We gratefully acknowledge financial support from the "Spitzencluster MicroTec Suedwest" and the BMBF.

REFERENCES

[1] ISO 9613-1:1993, Acoustics – Attenuation of sound during propagation outdoors – Part 1.

[2] A. J. Berni and W. Gregg, “On the utility of chirp modulation for digital signaling,” IEEE Transactions on Communications, vol. 21, no. 6, pp. 748–751, 1973.

[3] K. Murota and K. Hirade, “GMSK modulation for digital mobile radio telephony,” IEEE Transactions on Communications, vol. 29, no. 7, pp. 1044–1050, 1981.

[4] J.-R. Ohm, Signalübertragung: Grundlagen der digitalen und analogen Nachrichtenübertragungssysteme. Berlin: Springer, 2005.

[5] A. Springer, W. Gugler, M. Huemer, L. Reindl, C. Ruppel, and R. Weigel, “Spread spectrum communications using chirp signals,” in EUROCOMM 2000, Information Systems for Enhanced Public Safety and Security, IEEE/AFCEA, 2000, pp. 166–170.


Improving Heading Accuracy in Smartphone-based PDR Systems using Multi-Pedestrian Sensor Fusion

Marzieh Jalal Abadi∗†, Yexuan Gu∗, Xinlong Guan∗, Yang Wang∗, Mahbub Hassan∗† and Chun Tung Chou∗
∗School of Computer Science and Engineering, University of New South Wales, Sydney, NSW 2052, Australia
Email: abadim, ygux197, xgua341, ywan195, mahbub, [email protected]
†National ICT Australia, Locked Bag 9013, Alexandria, NSW 1435, Australia
Email: Marzieh.Abadi, [email protected]

Abstract—Accurately estimating the heading of each step is critical for pedestrian dead reckoning (PDR) systems, which use step length and step heading to continuously update the current location based on a previously known location. While the magnetometer is a key source of heading information, the poor accuracy of consumer-grade hardware coupled with the frequent presence of man-made magnetic disturbances makes accurate heading estimation a challenging problem in smartphone-based PDR systems. In this paper we propose the concept of multi-pedestrian sensor fusion, where sensor data from multiple pedestrians walking in the same direction are fused to improve the heading accuracy. We have conducted experiments with 3 subjects walking together in the corridors of 4 different buildings. Based on the magnetometer data collected from these subjects, we find that multi-pedestrian fusion has the potential to reduce the magnetometer-based heading error by 42% compared to the case when no fusion is used. We further show that a very basic fusion algorithm that simply takes the average of the 3 individual heading estimations can yield a 27.77% error reduction.

Index Terms—Heading Estimation, Pedestrian Dead Reckoning, Multi-Sensor Data Fusion, Indoor Localization.

I. INTRODUCTION

PDR, which uses step length and heading estimation to compute the current location relative to a previously known location, is a viable positioning alternative to GPS in indoor environments. While the magnetometer is considered a key source of heading information for PDR, it is known to exhibit large errors when used indoors due to the presence of significant magnetic disturbances caused by metallic infrastructure. Because these perturbations are likely to be highly localised, in this paper we propose the concept of multi-pedestrian sensor fusion, where sensor data from multiple pedestrians walking in the same direction are fused to improve the heading accuracy. The key hypothesis is that pedestrians experiencing high perturbation will benefit from those experiencing no or minor perturbation if their devices can share their sensor data in real time. Emerging device-to-device communication standards, such as WiFi-Direct1, are opening up exactly such data-sharing possibilities.

To test this hypothesis, we collected magnetometer readings from three pedestrians walking in the same direction in the corridors of 4 different buildings. Our study reveals the following interesting results:

1http://www.wi-fi.org/discover-and-learn/wi-fi-direct

• When pedestrians use their individual heading estimations, i.e., when no fusion is used, the average heading error from the true heading is 12.45 degrees.

• A simple averaging of all three individual estimations, called Naïve fusion in this paper, reduces the error to 8.99 degrees, an improvement of 27.77%.

• If, however, we were able to filter out the highly perturbed data, a scheme called Oracle fusion in this paper, we could potentially reduce the error to 7.21 degrees, or achieve up to 42% error reduction.

The rest of the paper is organized as follows. In the next section, we describe the data collection methodology, followed by the multi-pedestrian fusion analysis in Section III. Related work is reviewed in Section IV before the paper concludes in Section V.

II. DATA COLLECTION

We performed multiple experiments to collect the data for our study. In order to ensure diversity in environmental conditions (especially magnetic perturbation), the experiments were conducted in 4 different buildings on our university (UNSW) campus. In each building, we chose different corridors to provide different heading directions.

Each experiment consisted of three subjects carrying an Android smartphone. The subjects held the smartphone horizontally in their hand and walked along the corridor of the building. They ensured that they walked parallel to the corridor, and thus shared the same heading, by following the line between the floor tiles. The smartphones recorded the magnetometer readings at 16 Hz. Table I shows the building name and the true heading used in each experiment. The true headings are estimated by assuming that the corridor is parallel to the face of the building.

The three subjects walked in a line parallel to the corridor, one after another, with a gap of 5 meters between them. This means that, at a given time, the three subjects were always at different locations. The motivation for this arrangement is to test whether the magnetic perturbations at different places are independent. We identify the three subjects as “Back”, “Middle” and “Front”.

After obtaining the magnetometer readings, we use the two horizontal components mx and my to compute the estimated


TABLE I. INDOOR LOCATIONS FOR DATA COLLECTION, UNSW, SYDNEY

Building                                      True headings (degrees)
Library, 3rd Floor                            188.99, 9.35
Electrical Engineering Building, 2nd Floor    278.99, 98.9
Robert Webster Building, LG Floor             99.27, 279.18
Old Main Building, Ground Floor               279.24, 99.46

heading with respect to the magnetic North from

h = tan^(-1)(mx / my)                         (1)
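Eq. (1) can be computed as in the following sketch; we use atan2 instead of the plain arctangent so that the quadrant is resolved and the heading stays in [0, 360) degrees. The function name is ours.

```python
import math

def heading_deg(mx, my):
    """Heading with respect to magnetic North from the two horizontal
    magnetometer components, following Eq. (1). atan2 resolves the
    quadrant; the modulo keeps the result in [0, 360) degrees."""
    return math.degrees(math.atan2(mx, my)) % 360.0

print(heading_deg(1.0, 1.0))   # 45.0
print(heading_deg(-1.0, 0.0))  # 270.0
```

A plain arctangent of mx/my would be ambiguous between opposite directions; atan2 removes that ambiguity.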

We will use these heading estimations for multi-pedestrian data fusion in the next section.

III. MULTI-PEDESTRIAN DATA FUSION

In order to motivate multi-pedestrian data fusion, we plot the estimated headings of the three subjects in Figure 1 for an experiment conducted in the Library building. The figure also shows the true heading, which is 188.99 degrees. The estimated headings deviate from the true heading due to man-made magnetic perturbation. Note also that, at a given time, each magnetometer experienced a different amount of perturbation. Consider the time interval between 11.44 and 13.64 seconds, bounded by the two vertical bars in Figure 1. In this interval, the Front subject experienced a large perturbation in its heading estimation while the Middle and Back subjects did not. If there were a method to tell that the Front heading estimation was erroneous, we could discard it and replace it by the average of the other two heading estimates to obtain a better heading estimation. This is the key idea behind Oracle fusion. In this section, we compare the performance of two different fusion strategies, which we describe first.

A. Fusion methods

We define two different fusion methods, Naïve and Oracle. We assume that all subjects exchange their estimated headings using wireless communication such as WiFi, and that there are n subjects. At a given sampling time, subject i calculates its heading estimate hi. After the exchange of heading estimates, each subject has the data h1, h2, ..., hn. The method is applied at each sampling time.

For Naïve fusion, each subject computes the simple average (1/n) ∑_{i=1}^{n} h_i of all estimated headings. Note that Naïve fusion works well if the estimated headings are perturbed by random zero-mean noise, but its performance under other types of perturbation can be poor.

The Oracle method is used here to quantify the best possible improvement provided by data fusion. The method assumes that each subject knows the true heading hT and uses a given

Fig. 1. Library, 3rd Floor, True heading=188.99

TABLE II. LIBRARY, 3RD FLOOR, TRUE HEADING=188.99, NAÏVE

Participant   Average error (No Fusion)   Naïve Fusion error
Front         23.34                       8.91
Middle        9.35                        8.91
Back          10.48                       8.91
Average       14.42                       8.91

threshold γ. It also assumes that each subject has the estimated headings of all subjects: H = {h1, ..., hn}. Each subject eliminates all estimated headings in H whose error from the true heading hT exceeds the threshold γ; in other words, each subject determines the set H' = {hi ∈ H : hi ∈ [hT − γ, hT + γ]}. If the set H' is non-empty, the Oracle method returns the simple average of the heading estimates in H'. Otherwise, if H' is empty, the Oracle method uses the subject's own heading estimate, i.e. subject i uses hi.
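The two fusion rules can be sketched as follows. This is illustrative only: the heading values are made up, and the absolute difference here ignores the 0/360-degree wrap for simplicity.

```python
def naive_fusion(headings):
    """Naive fusion: each subject simply averages all exchanged
    heading estimates (in degrees)."""
    return sum(headings) / len(headings)

def oracle_fusion(headings, own, h_true, gamma):
    """Oracle fusion: keep only the estimates within gamma degrees of
    the true heading and average them; fall back to the subject's own
    estimate when no estimate survives the threshold."""
    kept = [h for h in headings if abs(h - h_true) <= gamma]
    return sum(kept) / len(kept) if kept else own

# One perturbed subject (200.5) and two good ones; true heading 188.99
h = [200.5, 190.0, 187.5]
print(naive_fusion(h))                       # pulled up by the outlier
print(oracle_fusion(h, h[0], 188.99, 10.0))  # averages only the two good ones
```

With γ = 10, the Oracle rule discards the perturbed estimate and returns the average of the remaining two, which is the behaviour Tables III and IV quantify.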

B. Results and discussions

For each building, we collected multiple data sets at different times of the day, where each data set contains approximately 900 magnetometer samples per subject. For a given data set, we applied the two fusion methods to each sample to obtain the fused headings. The heading error is calculated as the absolute difference between the true and the estimated headings. For each data set, we obtain one heading error value by averaging the 900 error values computed for the 900 samples.

Table II shows the results of applying Naïve fusion to one of the experiments conducted on the third floor of the Library building. It compares Naïve fusion against the average heading error of each subject when no data fusion is used. The last row of the table shows the result of averaging over all subjects. Note that the result of Naïve fusion is independent of the subject. It can be seen that Naïve fusion reduces the average error from 14.42 degrees to 8.91 degrees.

Table III shows the results of applying Oracle fusion to the same data set. The different γ values used are shown in the first column. In columns 2–4, we show the average heading


TABLE III. LIBRARY, 3RD FLOOR, TRUE HEADING=188.99, ORACLE (PERFECT FUSION)

γ     Avg. error   Avg. error   Avg. error   1 above γ   2 above γ   3 above γ
      (Back)       (Middle)     (Front)
1     9.55         8.71         19.97        4           133         843
10    5.67         5.69         5.17         499         385         91
12    5.39         5.42         5.07         625         287         53
15    4.80         4.78         4.74         699         189         7
20    4.56         4.56         4.56         675         82          0
25    4.72         4.72         4.72         562         42          0
30    5.75         5.75         5.75         408         4           0
35    6.78         6.78         6.78         221         0           0
40    8.24         8.24         8.24         62          0           0
45    8.70         8.70         8.70         19          0           0
50    8.85         8.85         8.85         5           0           0
60    8.90         8.90         8.90         0           0           0
70    8.90         8.90         8.90         0           0           0
80    8.90         8.90         8.90         0           0           0
90    8.90         8.90         8.90         0           0           0
100   8.90         8.90         8.90         0           0           0
120   8.90         8.90         8.90         0           0           0
140   8.90         8.90         8.90         0           0           0
150   8.90         8.90         8.90         0           0           0

error for each subject for different values of γ. Note that each subject can have a different average error because, if all heading estimates at a given time exceed the threshold γ, each subject uses its own heading estimate as the output of the Oracle method. In column 5, we show, for each value of γ, the number of sampling times at which exactly 1 of the estimated headings is above the threshold γ, or equivalently, the number of sampling times at which the retained set has exactly 2 elements. Columns 6 and 7 are defined similarly. For γ = 1, we find that, at many sampling times, all three heading estimates have an error greater than γ; this is due to the low value of the error threshold. As the threshold γ increases, the number of sampling times at which all three heading estimates are above the threshold becomes lower.

An interesting observation from columns 2–4 of Table III is that, as γ increases, the average heading error for each subject first decreases and then increases again. This means there is an optimal threshold γ that gives the minimum estimation error. The same observation is found in the data from the other experiments. In Figure 2, we plot the average heading error of each subject against γ for an experiment conducted in the Robert Webster Building.

In Table IV, we compare the fusion methods over all the10 data sets from the four buildings. Four different methodsare used: no fusion, Naıve fusion, Oracle fusion with a fixedthreshold of 10 and Oracle fusion with the optimum thresholdthat gives the minimum heading error. Percentage improve-ments, compared to the case when no data fusion is used, are

Fig. 2. Heading error for Oracle fusion in Robert Webster Building, LG Floor, True heading=99.27.

shown in brackets. The last row shows the average error and the percentage improvement over all 10 experiments.

It can be seen from Table IV that Naïve fusion is useful and delivers an improvement of 27.77% on average. For Oracle fusion with a fixed threshold γ, the improvement is −24.77%; a fixed γ therefore does not deliver good results. Finally, Oracle fusion with the optimum threshold delivers the best improvement of 42.04%.

IV. RELATED WORK

Several approaches are currently in use to improve heading estimation, such as sensor fusion by Kalman filter [1]–[3], magnetometer fingerprinting [4]–[7], and magnetometer filtering [8]–[10]. The Kalman filter is a sophisticated filter that uses the magnetometer, accelerometer and gyroscope to estimate the pedestrian's heading. In magnetometer fingerprinting, different algorithms are used to match the observed magnetometer reading with a pre-surveyed database. In magnetometer filtering, the perturbed data are filtered to improve their accuracy. Our proposed fusion algorithms rely only on the smartphone's magnetometer, without using any infrastructure.

V. CONCLUSION

While the magnetometer is considered a key source of heading information for PDR, it is known to exhibit large errors when used indoors due to the presence of significant magnetic disturbances caused by metallic infrastructure. Since these perturbations are highly localised, it is possible that not all pedestrians are affected (equally) at the same time, opening up the possibility of reducing the error by fusing sensing data among multiple pedestrians walking in the same direction. In this paper, we have experimentally quantified the error reduction potential of such multi-pedestrian sensor fusion. Our study reveals that there is opportunity for significant error reduction (42.04%), but only 27.77% is achievable with Naïve averaging. This calls for research into more advanced fusion models to achieve the full potential of multi-pedestrian sensor fusion.


TABLE IV. COMPARISON OF THE FUSION ALGORITHMS OVER 10 DATA SETS FROM FOUR BUILDINGS

Building, Day, (True Heading)                       No-fusion   Naïve fusion (%)   Oracle fusion, γ=10 (%)   Oracle fusion, optimum γ (%)
Library, Day 1, (188.99)                            14.42       8.91 (38.21)       5.51 (61.77)              4.64 (67.82)
Library, Day 2, (188.99)                            21.20       12.01 (43.37)      14.56 (31.35)             10 (52.84)
Library, Day 1, (9.34)                              17.94       15.92 (11.28)      55.85 (−211)              14.56 (18.86)
Library, Day 2, (9.34)                              12.18       4.92 (59.60)       39.53 (−224)              5.97 (50.98)
Electrical Engineering Building, Day 1, (278.99)    11.61       11.08 (4.52)       8.80 (24.16)              8.32 (28.31)
Electrical Engineering Building, Day 2, (98.9)      9.75        5.77 (40.82)       5.26 (46.01)              5 (48.71)
Robert Webster Building, Day 1, (99.27)             9.61        8.72 (9.25)        7.20 (25.07)              6.7 (30.31)
Robert Webster Building, Day 2, (279.18)            11.75       10.70 (8.96)       6.35 (45.96)              6.24 (46.88)
Old Main Building, Day 1, (279.24)                  6.60        5.69 (13.75)       4.54 (31.29)              4.53 (31.39)
Old Main Building, Day 2, (99.26)                   9.36        6.18 (34.24)       7.68 (18.24)              6.18 (34.24)
Average                                             12.45       8.99 (27.77)       15.53 (−24.77)            7.21 (42.04)

REFERENCES

[1] W. Li and J. Wang, “Effective adaptive Kalman filter for MEMS-IMU/magnetometers integrated attitude and heading reference systems,” Journal of Navigation, vol. 1, no. 1, pp. 1–15, 2013.

[2] K. Abdulrahim, C. Hide, T. Moore, and C. Hill, “Integrating low cost IMU with building heading in indoor pedestrian navigation,” Journal of Global Positioning Systems, vol. 10, no. 1, pp. 30–38, 2011.

[3] S. Kwanmuang, L. Ojeda, and J. Borenstein, “Magnetometer-enhanced personal locator for tunnels and GPS-denied outdoor environments,” in SPIE Defense, Security, and Sensing. International Society for Optics and Photonics, 2011.

[4] F. Li, C. Zhao, G. Ding, J. Gong, C. Liu, and F. Zhao, “A reliable and accurate indoor localization method using phone inertial sensors,” in Proceedings of the 2012 ACM Conference on Ubiquitous Computing. ACM, 2012, pp. 421–430.

[5] Y. Kim, Y. Chon, and H. Cha, “Smartphone-based collaborative and autonomous radio fingerprinting,” IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 42, no. 1, pp. 112–122, 2012.

[6] C. Sapumohotti, M. Y. Alias, and S. W. Tan, “WiLocSim: Simulation testbed for WLAN location fingerprinting systems,” Progress In Electromagnetics Research B, vol. 46, pp. 1–22, 2013.

[7] C. Laoudias, C. G. Panayiotou, and P. Kemppi, “On the RBF-based positioning using WLAN signal strength fingerprints,” in Positioning Navigation and Communication (WPNC), 2010 7th Workshop on. IEEE, 2010, pp. 93–98.

[8] J. Bird and D. Arden, “Indoor navigation with foot-mounted strapdown inertial navigation and magnetic sensors,” IEEE Wireless Communications, vol. 18, no. 2, pp. 28–35, 2011.

[9] W. T. Faulkner, R. Alwood, D. W. Taylor, and J. Bohlin, “GPS-denied pedestrian tracking in indoor environments using an IMU and magnetic compass,” in Proceedings of the 2010 International Technical Meeting of the Institute of Navigation (ITM 2010), 2010, pp. 198–204.

[10] M. H. Afzal, V. Renaudin, and G. Lachapelle, “Assessment of indoor magnetic field anomalies using multiple magnetometers,” in Proceedings of ION GNSS10, 2010, pp. 1–9.

193/278

Page 209: International Conference on Indoor Positioning and Indoor Navigation

2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31 October 2013

A New Indoor Robot Navigation System Using RFID Technology

Manato Fujimoto, Emi Nakamori, Daiki Tsukuda, Tomotaka Wada, Hiromi Okada, Yukio Iida
The Faculty of Engineering Science, Kansai University, Suita, Japan
manato, nakamori, tsukuda2011, wada, [email protected], [email protected]

Kouichi Mutsuura
The Faculty of Economics, Shinshu University, Matsumoto, Japan
[email protected]

Abstract— In this paper, we propose a new indoor robot navigation system using only passive RFID technology. RFID has a simple composition and is inexpensive. The proposed system does not need many kinds of devices, because it controls the movement of a mobile robot using only the information stored in RFID tags. To show the validity and effectiveness of the proposed system, we evaluate by computer simulations whether a mobile robot can reach the final destination correctly and smoothly. The results show that the proposed system can control the mobile robot's movement correctly and guide the robot to the final destination smoothly using only the information in the RFID tags.

Keywords - indoor robot navigation system; passive RFID technology; moving control; mobile robot; short term destination

I. INTRODUCTION

Recently, research and development of support technologies for assisting elderly and physically handicapped people has been growing rapidly, driven by the worldwide increase of the aged population. In particular, indoor robot navigation systems have been enthusiastically researched around the world. In the development of these systems, the most important issue is to control the harmonized movement of a mobile robot such as an electric wheelchair. Existing systems need several kinds of sensors (e.g., infrared sensors, ultrasound sensors, etc.) and wireless communication devices to control a mobile robot's movement smoothly [1]-[4], so they are very complex and expensive. For this reason, an indoor robot navigation system with low cost and a simple composition is strongly required.

In this paper, we propose a new indoor robot navigation system using passive RFID technology, featuring a simple composition and low cost, to solve the above problems. The proposed system lets a mobile robot move smoothly to the final destination while the robot communicates with RFID tags attached to a wall of a passage at regular intervals. The system controls the mobile robot's movement using only the information stored in the RFID tags, and therefore needs no sensors or wireless communication devices other than RFID.

To show the effectiveness of the proposed system, we carry out performance evaluations by computer simulations. This paper is organized as follows. Section II outlines the indoor robot navigation system. Section III proposes a new indoor robot navigation system using RFID technology. Section IV presents the performance evaluations by computer simulations. Finally, Section V concludes this paper.

II. OUTLINE OF INDOOR ROBOT NAVIGATION SYSTEM

The indoor robot navigation system assists a mobile robot in reaching its destination by providing a pathway between the robot's current position and the destination. To realize this system, the control of the mobile robot's movement is very important. The moving control consists of three essential functions: 1) position estimation, 2) routing, and 3) moving correction and tuning.

The main purposes of the moving control are to select a safe pathway to the destination and to move the robot smoothly without colliding with walls or obstacles. Researchers have proposed many methods for each of these functions to achieve such purposes [1]-[4]. Each function is very effective when combined according to the purpose or environment. However, because the existing indoor robot navigation systems control a mobile robot's movement by combining many kinds of sensors and devices, these systems become very complex and expensive. Hence, an indoor robot navigation system with low cost and a simple composition is strongly required.

III. PROPOSED SYSTEM

To solve the above problems, we propose a new indoor robot navigation system using only passive RFID technology. RFID is popular and inexpensive, has a simple composition, and makes it very easy to store information. The proposed system controls a mobile robot's movement using only the information stored in RFID tags, and thus does not need many sensors and devices. In the proposed system, the mobile robot moves to the final destination while communicating with the RFID tags attached to a wall of a passage at regular intervals. The mobile robot holds the ID of the RFID tag attached at the final destination as the final destination information. The mobile robot can obtain the short-term destination for approaching the final destination by collating the information read from the RFID tag and the final


destination information. The mobile robot can move toward the final destination accurately and smoothly by following all the short-term destinations needed to reach it. Hence, the proposed system does not need a global map.

Here, we explain the mobile robot's moving control in the proposed system. First, when the mobile robot detects an RFID tag, the system obtains the robot's position and the short-term destination by estimating the RFID tag's position using the previously proposed CM-CRR method [2] and reading the information stored in the tag. Second, the system calculates the pathway connecting the mobile robot's position and the short-term destination. Finally, the mobile robot moves toward the short-term destination by tracking this pathway while controlling its moving speed and direction. By repeating this operation for each short-term destination, the mobile robot reaches the final destination.
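The tag-by-tag control loop described above can be sketched as follows. This is an illustrative reconstruction only: the tag table, coordinates, and stepping logic are hypothetical stand-ins for the CM-CRR position estimation and the pathway tracking of the actual system.

```python
import math

# Hypothetical tag database: tag ID -> (tag position, next short-term destination).
# In the proposed system the robot reads this information directly from each RFID tag.
TAGS = {
    1: ((0.0, 0.0), (5.0, 0.0)),
    2: ((5.0, 0.0), (5.0, 5.0)),
    3: ((5.0, 5.0), None),        # None marks the final destination tag
}

def navigate(start, first_tag, speed=0.25, tol=0.1):
    """Follow the short-term destinations read from successive tags."""
    pos = list(start)
    tag = first_tag
    while True:
        tag_pos, short_term = TAGS[tag]
        if short_term is None:        # reached the final destination
            return tuple(pos)
        # Track the straight pathway from the current position to the
        # short-term destination, stepping at the robot's moving speed.
        while math.dist(pos, short_term) > tol:
            dx, dy = short_term[0] - pos[0], short_term[1] - pos[1]
            d = math.hypot(dx, dy)
            step = min(speed, d)
            pos[0] += step * dx / d
            pos[1] += step * dy / d
        tag += 1                      # next tag detected along the wall
```

Calling `navigate((0.0, 0.0), 1)` walks the robot through both short-term destinations and returns a point near the final destination tag at (5.0, 5.0).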

IV. PERFORMANCE EVALUATIONS

To show the effectiveness of the proposed system, we carry out performance evaluations by computer simulations. The purpose of these simulations is to evaluate the pathway error characteristics of the proposed system in an environment modeled on the fourth experiment building of Kansai University.

Fig. 1 shows the simulation environment. Table 1 shows the simulation parameters. We set the mobile robot's starting points to S1 (34.5, -13.7) and S2 (11.2, 3.0), and the final destination tags to Tag 1 (-14.0, -14.5; ID: 124) and Tag 2 (-10.0, -4.5; ID: 53). We define the pathway connecting S1 and Tag 1 as pathway 1 and the pathway connecting S2 and Tag 2 as pathway 2. The mobile robot starts moving from the starting point in the direction of the pathway while reading the RFID tags attached to the wall on its left side. The assumed pathway runs at a distance of 80 cm from the left-side wall.

Fig. 2(a) shows the pathway error characteristic for pathway 1. On the straight pathway, the mobile robot can move to the final destination while maintaining a very small error. In addition, the mobile robot does not collide with a wall, since the maximum pathway error is 10.58 cm. Fig. 2(b) shows the pathway error characteristic for pathway 2. The pathway error increases when the mobile robot moves along the curved pathway; however, the error decreases again on the straight pathway. From these results, we find that the proposed system can control the mobile robot's movement without collisions with walls and guide the robot to the final destination smoothly using only the information stored in RFID tags.

V. CONCLUSION

We have proposed a new indoor robot navigation system using only passive RFID technology. This system controls a mobile robot's movement using only the information stored in RFID tags. Hence, it needs neither several kinds of sensors and devices nor a global map. To show the effectiveness of the proposed system, we evaluated its pathway error characteristics by computer simulations. The results show that the proposed system is able to control the mobile robot's movement correctly and smoothly using only the information stored in RFID tags.

ACKNOWLEDGMENTS

This research was partially supported by the Grants-in-Aid for Scientific Research (C) (No. 23500103) and the Kansai University Research Grants: Grant-in-Aid for Promotion of Advanced Research in Graduate Course, 2013.

REFERENCES

[1] X. Xiong, et al., “Positioning estimation algorithm based on natural landmark and fish-eye lens for indoor robot,” IEEE 3rd International Conference on Communication Software and Networks (ICCSN 2011), pp. 596-600, Xi’an, China, May 2011.

[2] E. Nakamori, et al., “A New Indoor Position Estimation Method of RFID Tags for Continuous Moving Navigation Systems,” The 3rd International Conference on Indoor Positioning and Indoor Navigation (IPIN 2012), pp.1-6, Sydney, Australia, Nov. 2012.

[3] B. Hartmann, et al., “Indoor 3D position estimation using low-cost inertial sensors and marker-based video-tracking,” IEEE/ION Position Location and Navigation Symposium (PLANS 2010), pp.319-326, CA, USA, May 2010.

[4] M. Suruz Miah, et al., “Keeping track of position and orientation of moving indoor systems by correlation of range-finder scans,” IEEE/RSJ/GI International Conference on Intelligent Robots and Systems (IROS 1994), vol. 1, pp. 595-601, Munich, Germany, Sept. 1994.

Fig. 1 Simulation environments. [Floor plan showing Room and Passage areas (passage width 2 m; overall dimensions of roughly 52 m x 45 m, with segments of 32 m, 25 m, 16 m and 10 m), the origin O, starting points S1 (34.5, -13.7) and S2 (11.2, 3.0), final destination tags Tag 1 (-14.0, -14.5; ID: 124) and Tag 2 (-10.0, -4.5; ID: 53), and pathways 1 and 2.]

Table 1 Simulation parameters.

Parameters                              Values
Passage width                           2 m
Attachment interval of RFID tags        1 m
Interval of control points              1.5 m
Attachment number of RFID tags          225
Size of the mobile robot                60 cm x 120 cm x 86 cm
Moving speed of the mobile robot        25 cm/s
Long range: major axis                  121.28 cm
Long range: minor axis                  48.16 cm
Short range: major axis                 93.44 cm
Short range: minor axis                 35.36 cm

(a) Pathway 1  (b) Pathway 2

Fig. 2 Pathway error characteristics in pathways 1 and 2. [Both panels plot Pathway Error (cm), 0-60, against Elapsed Moving Time (sec), 0-160.]

978-1-4673-1954-6/12/$31.00 ©2013 IEEE


Accurate positioning in underground tunnels using Software-Defined-Radio

Fernando Pereira, Christian Theis
Radiation Protection, European Organization for Nuclear Research, Geneva, Switzerland
[email protected]

Sergio Reis Cunha
University of Porto, Porto, Portugal
[email protected]

Manuel Ricardo
UTM, INESC Porto, University of Porto, Porto, Portugal

Abstract— Localization in tunnels and other underground environments has been regarded as of extreme importance, notably for personal safety reasons. Nevertheless, due to the inherent difficulties these scenarios present, achieving high accuracy with low-cost generic solutions has proven quite challenging. In the specific case of long but narrow tunnels, like those of the Large Hadron Collider [1] at CERN, localization based on fingerprinting techniques with Received Signal Strength (RSS) performs well but yields accuracy in the order of 20 m, which is still not sufficient for the most demanding applications envisaged for the system. In this context, a new technology based on Time-of-Flight (ToF) is being developed and prototyped using programmable Software Defined Radio (SDR) devices. By measuring the carrier phase delay, the system aims at achieving meter-level accuracy. This paper describes the localization technique under research, whose design takes the specificities of SDR into account, in contrast to dedicated hardware.

Keywords- underground tunnel, phase-delay, SDR, leaky-feeder

I. INTRODUCTION

In recent years, much attention has been paid to localization in tunnels and other challenging environments, mostly due to its extreme importance with respect to personal safety. Among the many existing techniques for indoor localization, not all are interesting or applicable to these special cases, in which one of the dimensions is generally very large while the others are often of little interest. Furthermore, due to the adverse conditions (rough surfaces, humidity, magnetic fields, radiation, etc.), special precautions must commonly be taken regarding the installation of hardware devices, and in some cases installation might not even be possible.

Localization based on RSS fingerprinting has therefore been regarded as a very attractive solution for these cases, as it requires neither the installation of dedicated infrastructure hardware nor the allocation of extra radio frequency (RF) spectrum [2]. It is thus potentially very cost-effective as well.

For the purpose of localization in the CERN accelerator tunnels, techniques based on RSS fingerprinting have been explored previously, taking advantage of the dense network coverage available via a set of leaky-feeder cables. Besides the benefit of increased personal safety, a good level of accuracy, in the order of one to two meters, would enable much faster processes carried out by various technical departments at CERN, including radiation surveys with automatic position tagging. Even though these methods have proven effective in estimating the location based on the RSS of both the GSM and WLAN networks, their accuracy was limited to 20 m at a confidence level of 88% [3], which is not sufficient to provide an accurate position tag for some applications.

In order to increase the accuracy to the envisaged levels, techniques based on Time-of-Flight (RF wave propagation delay) that can meet the tunnels' restrictions and specificities are currently being investigated. By using frequencies in the VHF band (2 m wavelength), the technology aims at achieving meter-level accuracy and, by propagating the signal over the leaky-feeder cable, full tunnel coverage is expected to be achieved with a small number of units. In order to allow fast prototyping and custom deployment of the methods at a relatively low cost, programmable Software Defined Radio devices are considered for the implementation.

The following chapters provide an overview of the methods being investigated, and the first results of their performance using SDRs, as well as a discussion on the limitations of these devices for such specialized applications.


II. BACKGROUND

A. Indoor Positioning

Localization techniques build on three main distance measurement principles: angle-of-arrival (AOA), received-signal-strength (RSS), and propagation-time based measurements, which can further be divided into time-of-arrival (TOA), roundtrip-time-of-flight (RTOF) and time-difference-of-arrival (TDOA) methods [4]. By combining several distance measurements, possibly of different types, it is possible to calculate a position in a 2D or 3D coordinate system.

Angle-of-Arrival techniques calculate a position by determining and intersecting the directions a signal comes from, making use of directional antennas. Due to their relative simplicity and wide coverage, they have mostly been used in broadcast networks. However, the position accuracy is always subject to growing uncertainty as the distance between the devices increases.

RSS systems are based on the principles of path loss of electromagnetic waves, and are therefore virtually applicable to any existing wireless network. Nevertheless, although simple models exist for free-space propagation, multipath fading and shadowing effects have a dominant impact in indoor environments, which strongly limits the measurement accuracy. One of the most widely used approaches is RSS fingerprinting, in which a given RSS sample is compared against a map of RSS values previously collected and filtered. RSS fingerprinting can be quite effective, but has the drawback of requiring a calibration phase to build the map and subsequent effort to keep it updated. Methods based on RSS fingerprinting were studied in previous stages of the current investigation; for more in-depth information on their application to the current scenario, refer to [3].

Propagation-time based localization (ToA) is arguably the technique delivering the highest accuracy levels. Although the principle of measuring the propagation time of a wave is relatively simple, due to the very high propagation speed and multi-path effects, these techniques require careful design of efficient algorithms and their implementation in hardware. For instance, in a radar-like setup (RTOF) with RF waves, a positioning accuracy in the range of 10 cm requires a clock frequency in the order of 1.5 GHz. In a direct configuration (without round-trip) precise clock synchronization is also required among the receiver and transmitter units.
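As a sanity check on the figure above, the clock rate required for a given RTOF resolution is simple arithmetic (not part of the paper's implementation):

```python
# Round-trip time-of-flight: a target at distance d delays the echo by 2d/c,
# so resolving 10 cm corresponds to ~0.67 ns, i.e. a clock of ~1.5 GHz.
C = 299_792_458.0  # speed of light in vacuum, m/s

def rtof_clock_hz(resolution_m):
    """Clock frequency needed to resolve `resolution_m` in a radar-like setup."""
    return C / (2.0 * resolution_m)

# rtof_clock_hz(0.10) is about 1.5e9, matching the 1.5 GHz quoted in the text.
```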

Methods using Ultra-Wideband (UWB) pulses typically reach resolutions better than 30 cm and are typically found to handle multipath effects best, as long as the pulses are short enough. UWB methods frequently employ pseudo-random-noise (PRN) codes, so that a receiver can apply autocorrelation techniques to the received signal, yielding accuracy proportional to the bandwidth. Among the best-known cases is GPS C/A [5] based positioning, which employs codes of 1023 chips at 1.023 Mchip/s, whose receivers are currently able to detect shifts in the order of 1% of the chip time.
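The quoted 1% of chip time maps onto a range resolution as follows (plain arithmetic, not taken from the paper):

```python
# One C/A chip at 1.023 Mchip/s spans c / 1.023e6, roughly 293 m of
# propagation; detecting shifts of 1% of a chip therefore gives a range
# resolution of roughly 3 m.
C = 299_792_458.0
CHIP_RATE = 1.023e6                  # C/A code chipping rate, chips/s

chip_length_m = C / CHIP_RATE        # about 293 m per chip
resolution_m = 0.01 * chip_length_m  # about 2.9 m at 1% of a chip
```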

In systems using narrower frequency bands, although multiple propagation paths can be more difficult to distinguish, the carrier phase can be recovered at the receiver, which enables cm-level resolution. In these methods, the accuracy is proportional to the wavelength (typically in the order of 5%) and bandwidth is mostly required for target disambiguation.

Due to the limited bandwidth Software-Defined-Radios can handle, the approach being investigated falls in the narrowband category, and uses pairs of signals for position disambiguation.
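How a pair of nearby carriers disambiguates a high-resolution phase measurement can be illustrated numerically. The carrier frequencies and the cable velocity factor below are illustrative assumptions, not the system's actual parameters:

```python
import math

VF = 0.67                      # assumed cable velocity factor
C = 299_792_458.0              # speed of light in vacuum, m/s

def pair_position(phi1, phi2, f1=150e6, f2=151e6):
    """Recover distance from the phases (rad) of two close carriers.

    The phase difference of the pair varies with the beat wavelength
    (~200 m here), giving an unambiguous coarse range; the phase of f1
    alone then refines it to a fraction of its ~1.33 m wavelength.
    """
    lam1 = VF * C / f1
    lam_beat = VF * C / (f2 - f1)
    # Coarse range from the beat phase (unambiguous over lam_beat).
    coarse = (phi2 - phi1) % (2 * math.pi) / (2 * math.pi) * lam_beat
    # Fine range from carrier 1, placed in the cycle nearest `coarse`.
    fine = phi1 % (2 * math.pi) / (2 * math.pi) * lam1
    n = round((coarse - fine) / lam1)
    return n * lam1 + fine

# A target at d metres produces a phase of 2*pi*d/lambda on each carrier.
d = 57.3
phi1 = 2 * math.pi * d / (VF * C / 150e6)
phi2 = 2 * math.pi * d / (VF * C / 151e6)
```

Feeding these two phases back into `pair_position` recovers d, illustrating why bandwidth in such a system is needed mostly for disambiguation rather than resolution.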

B. Software Defined Radio platforms

Radio systems have traditionally consisted of transceiver chains with several stages, where the signal is converted to an intermediate frequency, filtered, then converted down to baseband and finally demodulated. With the advent of fast and cheap digital signal processors (DSPs), radio systems now employ digital transceivers composed of a radio Front-End followed by an analogue to digital converter (ADC) and finally by a Back-End responsible for the further signal processing, like filtering and demodulation.

The need for fast-paced development and prototyping has motivated research into ways of changing the behaviour of some digital blocks with minimum time and cost, i.e., making them software-programmable. This class of transceivers is known as Software-Defined Radios and uses either Field-Programmable Gate Arrays (FPGA) or even General Purpose Processors (GPP) to perform digital operations equivalent to a traditional analogue transceiver [6].

Despite the increased degree of flexibility achieved in such a configuration, FPGAs and, more critically, GPPs are intrinsically slower than Application-Specific Integrated Circuits (ASIC). Therefore, the computational requirements of the application must be carefully assessed to make sure they can be implemented in SDR. When using GPPs, an intermediate FPGA is commonly used to perform the most demanding operations and downsample the signal to lower rates before sending it to the GPP. This configuration is the one evaluated in the current study.

III. CASE STUDY AND METHODOLOGY

The LHC tunnel at CERN is located 100 m below the surface; it is divided into 8 sections and measures nearly 27 km in circumference. GSM network coverage is available all along the tunnel’s length via a set of leaky-feeder cables installed at nearly 2 m from the ground – see Figure 1. They propagate electromagnetic waves of up to 1950 MHz and exhibit a longitudinal loss of 3.16 dB/100 m at 900 MHz [7].
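The coverage implications of the quoted cable loss can be quantified with a rough estimate (an illustration only; a real link budget also involves coupling and radiated losses):

```python
# Longitudinal loss of the leaky feeder: 3.16 dB per 100 m at 900 MHz.
def feeder_loss_db(distance_m, loss_db_per_100m=3.16):
    """Cumulative longitudinal cable loss over `distance_m` metres."""
    return distance_m / 100.0 * loss_db_per_100m

# Over 1 km of cable the signal loses about 31.6 dB, which bounds the
# region a single fixed unit coupled to the feeder can realistically serve.
```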

Although the RSS fingerprinting methods could not provide the desired accuracy levels, they succeeded in clearly distinguishing among the tunnel's regions. This fact motivated the study of a complementary localization technology with higher resolution on a small scale, one that could cover long ranges without needing to disambiguate between the tunnel's regions. Under these circumstances, a


Figure 1. The LHC tunnel. On the top right corner one can see the leaky-feeder cable (black)

narrowband phase-delay system was considered. Furthermore, given that it could potentially be implemented in SDR, optimal conditions for experimental development were met while creating a solution that, to the best of the authors’ knowledge, has not been investigated to date.

Tests and development have been carried out with USRP B100 SDR units from Ettus Research [8]. The units feature a Xilinx® Spartan® 3A 1400 FPGA, a 64 MS/s dual ADC, a 128 MS/s dual DAC and USB 2.0 connectivity to provide data to host processors, and are equipped with the WBX daughterboard, which provides 40 MHz of bandwidth within the 50 MHz to 2.2 GHz spectrum range. For an overview of the architecture, refer to Figure 2. Despite the high sample rate at which the unit operates internally, it is limited to streaming up to 8 MS/s over the USB link; therefore, processing at higher rates must be implemented in the FPGA.

After passing through the unit's FPGA, the signal flows to the host computer, where signal processing can be performed purely in software. For this purpose, the GNU Radio framework [9][10] was adopted, given the support from Ettus Research and the large active community. In GNU Radio, the several

Figure 2. USRP B100 architecture

DSP blocks are available as modules which can be programmatically linked to implement the desired functionality. Although this task can be fully carried out in Python, the GNU Radio Companion (GRC) GUI extension allows the full specification of the system by graphically creating a flow graph of DSP blocks.

The localization techniques presented in the next chapters were implemented using the mentioned GRC software and tested under regular office conditions. The host machine features a 3-year-old dual-core CPU, which could process up to 2 MS/s. During the implementation, several new DSP blocks had to be created for the GNU Radio library using its C++ API; they will be identified as they are mentioned in the text.

IV. PHASE-DELAY POSITIONING WITH SDR

A. General architecture

While the current scenario presents many challenges, the presence of a leaky feeder comes as an opportunity to propagate the signals much further and thereby avoid the installation of additional receiver units. The envisaged system, as shown in Figure 3, has the following components:

Rover Unit (Rover) – a moveable SDR unit whose location is to be determined; it shall be simple, so as to allow for future implementation as a portable device.

Fixed Receiver Units (Fixed) – units directly coupled to the leaky feeder, which receive the signals from the Rover Unit and, possibly, from the Reference Unit. In principle, only one should exist per region delimited by the signal's reach.

Reference Unit (Reference) – units that might be required, depending on the design, to act as online calibration points of the system.

B. Design considerations

SDR technology allows for fast-paced development by “turning hardware problems into software problems” [10]. Despite being an incredible advantage for research, this facility comes at a cost. In general, bandwidth and CPU power are the main constraints in SDR platforms when implementing a communication system. Nevertheless, since SDR platforms have not been specifically designed for

Figure 3. Localization system overview


localization purposes, other parameters, like delay, jitter and clock stability, might restrict the feasibility of such a system. Since these platforms are more complex than pure hardware, there are also more stages where unexpected effects might occur.

In a configuration using the USRP B100 (see Figure 2), a signal, after being received and down-converted in the front-end, is sampled by the ADC and then filtered. Subsequently it is decimated in the FPGA and finally streamed to a host computer via USB. During this initial phase, the internal clocks for down-conversion impose the most serious challenges, as the front-end PLL must tune precisely to the signal center frequency, while the DAC/ADC must operate at a multiple of the base sampling frequency. In both cases they rely on a clock generator (TCXO) with a 2.5 ppm rated accuracy. Although this might seem sufficiently accurate (indeed it is for most applications), it also means that at 100 MHz there might be a frequency shift of 250 Hz, which is enormous for phase-sensitive applications.
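The 250 Hz figure follows directly from the oscillator rating:

```python
# A 2.5 ppm reference error scales with the tuned frequency: at a 100 MHz
# center frequency the offset can reach 250 Hz, so the carrier phase can
# slip a full cycle every 4 ms, far too fast for phase-sensitive work.
PPM_RATING = 2.5
center_hz = 100e6

offset_hz = center_hz * PPM_RATING * 1e-6   # 250.0 Hz
slip_period_s = 1.0 / offset_hz             # 0.004 s per full cycle
```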

In a next stage, after being transferred over a USB link, the signal has to go through several communication layers, implemented in kernel as well as user space, of a computer system. When the samples finally arrive at the user DSP program, they have been delayed by a significant, and not necessarily constant, amount of time or have even been dropped. These side effects – jitter and frame dropping – heavily affect precise systems and might be very hard to compensate.

Having these considerations in mind, two approaches have been explored. On the one hand, if clocks are stable enough, an emitter-receiver approach with a synchronization signal should perform well. On the other hand, if jitter and frame-dropping are within acceptable limits, a radar-like approach could be effective while relaxing the need for clock synchronization.

C. Direct phase detection with reference unit

The original design reflects very closely the architecture presented in Figure 3. In this scenario, both the Rover and the Reference unit only emit reference waves. In turn, the Fixed receiver recovers each signal's carrier, performs the phase measurements taking the reference signal into account, and communicates the results to the Rover unit using any existing data network. The following communication parameters were established:

Wave 1: 150 MHz (1.33 m wavelength in cable)

Wave 2: 151 MHz (carrier 2 - carrier 1 = 1 MHz -> 200 m beat wavelength in cable)

Channel separation Δf (between rovers and reference station): 10 kHz

For testing, this scenario was simplified to a single Rover, and a combined Fixed Receiver-Reference unit. Both units were configured from the same GNU Radio program, where all the DSP was being performed as well.

In order to correct clock drifts between the Rover and the Fixed receiver, the Reference unit will generate a wave that compensates for the frequency and phase offset.

As illustrated in Figure 4, six waves are transmitted in total around a carrier frequency (fc): four transmitted by the Rover (f1, f1-Δf, f2 and f2+Δf) and two corrective ones transmitted by the Reference unit (f1-2Δf, f2+2Δf). The frequencies close to f1 belong to wave 1, which is directly employed in the calculation of the fine-grained position within a short range (one wavelength). In turn, the frequencies close to f2 belong to wave 2, which, demodulated by f1, creates a low-frequency wave used to localize within a long range, in the order of 200 m.

Let f1-Δf be f12 and the corrective wave (f1-2Δf) be f1r. The Reference unit calculates f1r so that its frequency and phase offset with respect to f12 is the same as that between f1 and f12. The calculation that yields the correction factor is illustrated, in simplified form, by the block diagram of Figure 5.

In an initial step, the phase difference between the two original waves (f1 and f12) is compared with the phase difference between f12 and f1r. The result is itself a wave with the corrective frequency and phase to be applied. Ideally, it would be sufficient to use this wave to correct a reference wave but, since the clock drift is expected to change more quickly than the response time of the system, the correction is done in a two-step process. First, the peak of the FFT determines the frequency shift to be applied, which, accumulated over time, converges to the real frequency correction. Once the frequency has converged, the phase information from the comparator blocks is also used in a complex VCO to produce the final correcting wave. At this point, the wave comparison result (after the last multiply-conjugate block) should have zero frequency and zero phase.
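The two-step estimator can be mimicked in a few lines of plain Python (a plain DFT stands in for the GNU Radio FFT block here; the real system runs continuously on streamed samples):

```python
import cmath

def tone_offset(samples, sample_rate):
    """Estimate frequency (Hz) and phase (rad) of a near-pure complex tone.

    Step 1: coarse frequency from the peak bin of a discrete Fourier
    transform. Step 2: phase from that same bin; in the real system this
    phase drives a complex VCO that synthesizes the correcting wave.
    """
    n = len(samples)
    bins = [sum(s * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, s in enumerate(samples))
            for k in range(n)]
    k_peak = max(range(n), key=lambda k: abs(bins[k]))
    phase = cmath.phase(bins[k_peak])
    if k_peak > n // 2:            # map upper bins to negative frequencies
        k_peak -= n
    return k_peak * sample_rate / n, phase

# A tone 6 Hz above nominal with a 0.5 rad phase offset, sampled at 64 S/s:
fs, f0, phi0 = 64, 6, 0.5
tone = [cmath.exp(1j * (2 * cmath.pi * f0 * i / fs + phi0)) for i in range(fs)]
```

Running `tone_offset(tone, fs)` returns the 6 Hz offset and the 0.5 rad phase, the two quantities the correction loop accumulates and feeds to the VCO.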

Figure 5. Conceptual implementation of the direct phase detection system with reference unit

Figure 4. Frequency plan of the direct phase detection system with reference unit


For the implementation of the current system, the blocks marked with an asterisk were implemented as add-on blocks for GNU Radio.

With a stable system at the Reference unit, the Fixed receiver unit will only see phase shifts that depend on the position of the Rover. In this design the Reference unit is a critical part, since frequency and phase shifts are expected to occur continuously, as various factors, like the temperature, affect each unit's clock independently. Nevertheless, these shifts must be smooth enough to allow the system to follow the change. In the case of abrupt changes in the clocks, the system must wait until they stabilize again.

D. Round-trip phase detection

An approach which tries to minimize the need for full clock synchronization was also considered. In this setup, since the signal is emitted and received by both units, the goal is to evaluate whether the phenomena affecting the signal in each transmit/receive chain (e.g., frequency shift due to different clock rates) are counter-compensated when the signal goes through the reverse chain of the same device. As a matter of fact, when a device receives a signal it emitted itself, despite all the SDR complexity, the signal appears in optimal conditions, i.e., without frequency shifts.

Even though this architecture might be conceptually more complicated due to the round-trip, the principle can easily be verified with two units. On the one hand, the Fixed unit emits a simple wave and listens for “reflections” in N channels separated by a given frequency Δf, where N is the potential number of Rover units. On the other hand, the Rover unit listens for the original wave, shifts it to its own frequency channel and retransmits it. In this stage, signal processing should be kept to a minimum to avoid introducing jitter. A simplified view of the implementation with a single channel in GNU Radio is presented in Figure 6.

For this test, USRP box1 (acting as Fixed unit) simply transmits a sine wave of frequency f1. Then, as seen in the flow graph in the middle of Figure 6, USRP box2 recovers the transmitted wave, filters it and retransmits the wave shifted in frequency by –Δf and Δf. In the last step, again in USRP box1, these two waves are received, then individually shifted by their nominal frequency (to 0 Hz), filtered, and their phases averaged. The reason for having two reflected waves (at Δf and –Δf) is that any shift in frequency will incur a continuous phase delay in one direction. Doing so for two symmetric frequencies and averaging their phases at reception will cancel out this delay.
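The cancellation argument can be sketched numerically: a residual frequency error makes the phases of the +Δf and –Δf reflections drift in opposite directions, so their average retains only the geometric phase. All parameter values below (channel offset, clock error, geometric phase) are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

fs = 1_000_000          # sample rate (Hz), hypothetical
t = np.arange(0, 0.01, 1 / fs)
delta_f = 10_000.0      # nominal channel offset (Hz), hypothetical
eps = 50e-6             # assumed rover clock error (50 ppm)
phi_geom = 1.2          # phase delay caused by the round-trip distance (rad)

# Reflections at +/- delta_f: the rover's clock error scales its frequency
# shift, leaving a residual offset of +/- delta_f * eps on each wave.
rx_pos = np.exp(1j * (2 * np.pi * delta_f * (1 + eps) * t + phi_geom))
rx_neg = np.exp(1j * (-2 * np.pi * delta_f * (1 + eps) * t + phi_geom))

# Shift each wave back by its nominal frequency (to 0 Hz)...
base_pos = rx_pos * np.exp(-1j * 2 * np.pi * delta_f * t)
base_neg = rx_neg * np.exp(+1j * 2 * np.pi * delta_f * t)

# ...and average the two phases: the opposite-sign drifts cancel,
# leaving only the drift-free geometric phase.
avg_phase = np.angle(base_pos * base_neg) / 2
print(np.allclose(avg_phase, phi_geom))  # → True
```

The same cancellation fails if only a single reflected channel is used, since its phase then drifts continuously at the residual offset rate.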

V. RESULTS

To assess the performance of these methods, in a first phase the signal phase stability, which is the most crucial parameter in the system, was analyzed graphically. Using the plot tools of GRC, the phase of the signal could be checked for variation over time.

In the direct phase detection method, which uses a reference unit, the system would ideally converge to 0-frequency, 0-phase within very few seconds, which would be acceptable if changes in frequency were progressive and rarely abrupt. Unfortunately, neither of these conditions was met. Fast and significant changes in frequency, like those observed in Figure 7 (usually by 30 Hz or more), happened quite frequently, around every 1 to 5 seconds. Given that the system needed a few seconds to stabilize, this method proved to be of little help for the current scenario.

Figure 7. Method 1 phase stability

Figure 6. Conceptual implementation of the round-trip phase detection system


From the plot, one can also notice that, despite these frequency changes, phase continuity was kept, which indicates that frame dropping was not the issue. In order to isolate the problem, instead of an SDR unit emitting the original wave, a dedicated RF wave generator was used. Differences in the signal stability were immediate and very noticeable, eliminating this frequency hopping phenomenon almost completely. This fact is a strong indicator that, at some point of the transmit chain, the signal suffers side effects, possibly from components highly sensitive to clock changes or from de-synchronism among processing layers, introducing small frequency hops in the resulting wave.

The second method, based on the round-trip principle to minimize clock synchronization issues, was evaluated in a very similar way. After implementing the model, step-by-step tuning of frequencies and filter parameters was performed and quite interesting results could be observed.

In the current setup, after being “reflected”, the signal could be recovered in the original unit and, after being shifted to 0-frequency, it would behave well, without presenting any kind of frequency hops. Although sensitive to interference from nearby bodies, when basic smoothing was performed using a moving average filter, the phase would remain impressively constant while perfectly responsive to distance changes.

In Figure 8 it is possible to observe the evolution of the signal phase over a period of 16 s. After second 5, one unit was moved by nearly 40 cm, kept there for 2 s and then rapidly moved back to the original position. Indeed, in a round-trip configuration at 150 MHz over air (2 m wavelength), a complete period (2π) occurs for every 1 m of displacement. A 40 cm change should therefore incur a phase delay of about 2.5 rad, which is approximately the observed value.
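The expected phase delay follows from the round-trip relation Δφ = 2π · (2Δd)/λ, which can be checked directly:

```python
from math import pi

c = 3e8                 # speed of light (m/s)
f = 150e6               # carrier frequency (Hz)
wavelength = c / f      # 2 m at 150 MHz

d_change = 0.40         # one-way displacement of the unit (m)
# Round trip: the propagation path changes by twice the displacement.
phase_delay = 2 * pi * (2 * d_change) / wavelength
print(round(phase_delay, 2))  # → 2.51 rad, matching the observed value
```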

The results obtained in this test provide a strong argument that, although the signal looks slightly fluctuating while transmitting, those effects are not independent between the transmit/receive chains of the SDR. Indeed, it is quite remarkable that, at sample rates higher than 1 Msample/s, there would be no frame dropping and the receive and transmit chains would remain perfectly aligned so that those fluctuation effects in the signal could cancel out.

Figure 8. Method 2 phase stability and reaction to position change

VI. CONCLUSIONS AND FUTURE WORK

This paper presents a study evaluating localization techniques implemented in Software-Defined-Radio (SDR) platforms, intended to enable accurate positioning over the total length of the CERN tunnels. Two narrowband approaches, based on the principle of Time-of-Flight, were investigated. On the one hand, one approach uses direct-phase detection with a synchronization signal, requiring very simple Rover units. On the other hand, a radar-like approach relaxes the need for clock synchronization but requires jitter and frame-dropping to be within acceptable limits.

Tests using the USRP B100 and GNU Radio showed that the first approach did not perform well, since the signal was recurrently affected by fast frequency hops which could not be compensated within its stability time frame. In turn, the second approach proved to perform quite well, as the effects introduced in the signal along the transmit chain of the SDR were cancelled out when it also passed through the receive chain of the same device. Furthermore, relative movement between the units could be perfectly observed, closely matching the displacement.

Next steps foresee the development of the second method for long term phase stability and comprehensive tests in the tunnels taking advantage of the leaky feeder infrastructure.

ACKNOWLEDGMENTS

The authors would like to express their gratitude for the support of F. Chapron, A. Pascal and A. Molero from the IT/CS group at CERN, without whom this study wouldn’t have been possible.



Positioning in GPS Challenged Locations - NextNav's Metropolitan Beacon System

Subbu Meiyappan, Arun Raghupathy, Ganesh Pattabiraman

NextNav, LLC

Sunnyvale, CA 94085, USA

[email protected], [email protected], [email protected]

Abstract—In this paper, we explore the limits of GNSS-based positioning solutions, with specific emphasis on the challenges presented to GNSS systems in indoor locations and urban canyons. In this context we introduce NextNav's Metropolitan Beacon System as a reliable, ubiquitous, low-power, fast (on the order of 6 s cold-start TTFF) positioning service. NextNav's technology enables consistent, wide-area indoor and outdoor location accuracy using a network deployed with metropolitan-area coverage, in contrast with other positioning systems that are limited to specific venues. The NextNav system provides high horizontal accuracy (currently ~20 m) and precise vertical accuracy (~1-3 m), with yields of 98% where deployed. NextNav's technology has been operational in the San Francisco area for well over three years and has been subjected to numerous third-party trials to verify system performance. Most recently, the FCC-sanctioned CSRIC Working Group 3, tasked with exploring indoor location accuracy standards for wireless emergency E911 location, conducted a side-by-side trial of various location technologies at its national test bed. This test program examined horizontal and vertical indoor location performance across rural, suburban, urban and dense urban morphologies. NextNav technology had 28 m (67th percentile) and 44 m (90th percentile) 2-D error in rural morphologies, 28 m/52 m in suburban, 62 m/141 m in urban, and 57 m/102 m in dense urban morphologies, significantly better accuracy than competing technologies. The results from the CSRIC trial will be presented and discussed in this paper.

Keywords—indoor positioning, terrestrial signals, precise altitude, E911, fast TTFF

I. INTRODUCTION

GNSS based location systems provide very reliable and accurate location information in urban and sub-urban outdoor environments. While A-GNSS helps in providing some level of indoor location solutions, it seldom works in deep indoor and in dense-urban environments. With the growing usage of smartphones, a reliable, accurate yet scalable solution is needed to provide similar levels of performance as GNSS, in GNSS challenged environments, for both public safety applications (E911) and commercial Location Based Services (LBS). There are several Signals of Opportunity (SoP) that are currently being used to solve indoor location problems in localized environments for commercial applications, such as WiFi, RFID, Bluetooth Low Energy (BLE), etc. However, to provide reliable, accurate, scalable location and timing on a wide area basis, a dedicated network designed to provide location signals (like GPS) with terrestrial transmissions is essential. NextNav is building such a network, called the Metropolitan Beacon System (MBS), currently deployed in some markets in the United States.

II. NEXTNAV MBS NETWORK

Unlike location systems designed to offer a building-specific indoor location capability, NextNav's service is built as a wide-area network with similar coverage scale to a metropolitan cellular network. Consistent, accurate indoor location performance is designed to be available across an entire market area, and is not limited to a specific venue or set of venues.

Figure 1 illustrates the basic architecture of the NextNav network. Where the satellite signals are blocked, for example, in an urban canyon or deep indoors, NextNav beacons provide terrestrial ranging signals to enable receivers to compute their location.


Figure 1: NextNav Network Architecture

The location computed on the device is then utilized by applications on the mobile, or measurements are relayed over the network to a server, where the location is computed.

A. NextNav MBS Network Characteristics

Some characteristics of the MBS network are:

- The network consists of high power (30W peak ERP) broadcast transmitters (beacons)

- The network is designed and deployed for both coverage and geometry such that at every location the Geometric Dilution of Precision (GDOP) is ≤ 1.5

- The beacons in a network are synchronized autonomously to GPS time and to each other within a few nanoseconds.

- The transmit antenna location, GPS antenna location, cable lengths, etc. are precisely surveyed/measured and normalized across the network for delay computation.

- Several beacons in a given network have weather stations installed to help with altitude determination at the receiver.

- The beacons occupy a very small footprint and are co-located on cell-towers or roof-tops with, typically, an omni-directional antenna.

- The network functions as an “overlay” network to the existing cellular infrastructure.

- The ideal location for the beacons is the highest available point on existing broadcast, paging or cellular tower facilities.

- The beacons do not need a backhaul – some telemetry services are used for remote monitoring and control from the Network Operations Center (NOC).

- Since the broadcast signals from the beacons are used to compute position, there are no limits on capacity.
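The GDOP criterion in the list above can be illustrated with the standard GNSS-style geometry matrix, restricted here to the horizontal plane since terrestrial beacons contribute little vertical geometry (altitude comes from the barometric system). The beacon layout and function below are a sketch under assumed coordinates, not NextNav's planning tool:

```python
import numpy as np

def gdop_2d(rx, beacons):
    """Horizontal GDOP for a receiver and terrestrial beacon positions
    (x, y), with a clock-bias column as in GNSS geometry matrices."""
    rx = np.asarray(rx, float)
    los = np.asarray(beacons, float) - rx          # line-of-sight vectors
    u = los / np.linalg.norm(los, axis=1, keepdims=True)
    G = np.hstack([-u, np.ones((len(u), 1))])      # [-ux, -uy, 1] per beacon
    return float(np.sqrt(np.trace(np.linalg.inv(G.T @ G))))

# Hypothetical layout: four beacons on the axes, 1 km from the receiver.
beacons = [(1000, 0), (0, 1000), (-1000, 0), (0, -1000)]
print(round(gdop_2d((0, 0), beacons), 3))  # → 1.118, within the <= 1.5 target
```

With beacons spread evenly around the receiver the GDOP stays close to 1; it grows rapidly when all beacons lie to one side, which is why the network is planned for geometry as well as coverage.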

B. NextNav Elevation System

The NextNav elevation system is based on the principle that atmospheric pressure decreases with increasing elevation. The challenge comes from the fact that even normal weather phenomena cause changes in pressure that are an order of magnitude larger than the pressure change resulting from moving from one floor to another in a building. In the San Francisco Bay Area, for example, NextNav has observed ambient pressure changes equivalent to ascending or descending more than 200 feet within several hours.

The solution to this challenge is to simultaneously measure the weather-induced changes in pressure at multiple fixed locations and to use that information as a real-time reference against the pressure readings at a mobile device. By offsetting the weather-induced changes (including temperature differentials and other factors), the remaining changes in pressure correspond to elevation changes.
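The reference-correction idea can be sketched with the international barometric formula under standard-atmosphere constants; the actual NextNav system applies further corrections (temperature differentials, etc.), and the pressure readings below are invented for illustration:

```python
def altitude(p, p0):
    """International barometric formula (m); pressures in hPa,
    standard-atmosphere constants assumed."""
    return 44330.0 * (1.0 - (p / p0) ** 0.1903)

def sea_level_pressure(p_ref, h_ref):
    """Invert the formula using a reference station at known altitude,
    yielding a weather-corrected sea-level baseline."""
    return p_ref / (1.0 - h_ref / 44330.0) ** (1.0 / 0.1903)

# Hypothetical readings: reference station at 20 m, mobile one floor up.
p0 = sea_level_pressure(1012.6, 20.0)   # real-time baseline from reference
h_mobile = altitude(1012.2, p0)         # altitude from mobile reading
print(round(h_mobile - 20.0, 1))        # → 3.3 m above the reference,
                                        #   roughly one floor
```

Because both readings share the same weather-driven baseline, a storm front shifts p_ref and the mobile reading together and cancels out of the elevation difference.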

NextNav has determined that, in order to deliver a high-performance altitude measurement system, the device must be capable of measuring barometric pressure with a high degree of accuracy. For such a device to be practical, it must be small enough, low enough cost, and low enough power to be embedded in a portable consumer product.

Due to demand for pressure measurements in a variety of consumer electronics devices, most recently on mobile phones (e.g., the Samsung Galaxy series of handsets) and various tablet computers, there are a number of MEMS pressure sensors on the market that meet the requirements for size, power and cost.

Note that the reference pressures readily available from airports, the National Oceanic and Atmospheric Administration (NOAA), etc. do not have the precision to provide floor-level accuracy.

III. NEXTNAV SIGNAL STRUCTURE

MBS signals are transmitted between 919.75 and 927.25 MHz in the United States. MBS signals are designed such that an existing GNSS baseband can be reused in its entirety to process the MBS signal.

The current signal structure for the MBS network has characteristics very similar to GPS signals, in that the chipping rate is 1.023 Mcps and the family of Gold codes defined in the GNSS specifications is used.
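Since the signal reuses the GNSS-specified Gold code family, the standard GPS C/A generator (two 10-stage LFSRs, G1 with taps 3 and 10, G2 with taps 2, 3, 6, 8, 9, 10) illustrates how such 1023-chip PRN sequences are produced. The tap pair below is the GPS ICD selector for PRN 1, used only as an example; NextNav's actual PRN assignments are in the MBS ICD:

```python
def gold_code(prn_taps):
    """One period (1023 chips) of a GPS-style C/A Gold code.
    prn_taps: the two G2 output taps selecting the PRN (1-indexed)."""
    g1 = [1] * 10
    g2 = [1] * 10
    out = []
    for _ in range(1023):
        g2_out = g2[prn_taps[0] - 1] ^ g2[prn_taps[1] - 1]
        out.append(g1[9] ^ g2_out)
        # G1 feedback taps: 3, 10; G2 feedback taps: 2, 3, 6, 8, 9, 10
        g1 = [g1[2] ^ g1[9]] + g1[:9]
        g2 = [g2[1] ^ g2[2] ^ g2[5] ^ g2[7] ^ g2[8] ^ g2[9]] + g2[:9]
    return out

chips = gold_code((2, 6))  # taps (2, 6) select PRN 1 in the GPS ICD
print(chips[:10])          # → [1, 1, 0, 0, 1, 0, 0, 0, 0, 0]
```

At 1.023 Mcps, one code period lasts exactly 1 ms, which is what lets the payload data modulate the PRN sequence at 1 bit/ms as described below.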

The spectral characteristics at baseband are shown in Figure 2.


Figure 2: NextNav Signal's Spectral Characteristics

The signal from each transmitter is transmitted in a time slot with a specific PRN and frequency offset as determined during network planning. The payload data (transmitter latitude, longitude, timing correction, pressure, temperature, etc.) modulates the PRN sequence at the rate of 1 bit/ms.

Further details about the NextNav signal structure are available in the MBS ICD [1] and can be obtained by making an official request to NextNav.

MBS is a terrestrial GNSS “constellation” of beacons in which the ranging measurements are similar to a GNSS system and the MBS receiver can utilize the same call flows as for A-GNSS.

IV. CSRIC TESTING

A. CSRIC COMMITTEE and TEST OBJECTIVES

CSRIC III Working Group 3 (WG3) was tasked by the FCC to investigate wireless location in the context of indoor wireless E911. In June 2012, WG3 submitted its initial report to the FCC regarding Indoor Location Accuracy for E9-1-1. As one of its primary findings, the report identified the lack of objective and validated information regarding the performance of available location technologies in various representative indoor environments. The Working Group identified obtaining this critical information as its highest priority and established a set of cooperative actions, including the creation of an independent test bed, to accomplish this task.

WG3 created the framework for the test bed, whose objectives have been to:

- Enable an “apples to apples” comparison of various location technologies in real-world conditions

- Provide unbiased, objective data on the performance of various location technologies in indoor environments to the FCC. This will establish the framework for the FCC's longer-term objectives for E911.

- Establish a benchmark upon which emerging technologies can be compared to determine their relative promise in improving the capabilities that are currently available (both in terms of accuracy and/or consistency).

An independent test house, Technocom, was selected to perform the test in various morphologies using all available location technologies in the San Francisco Bay Area during Nov-Dec 2012. The detailed test report submitted by Technocom is available at the FCC website [1] or on the NextNav website [4]. A summary of the results is discussed below.

The morphologies (or wireless use environments) are those that were defined in ATIS-0500011, namely, dense urban, urban, suburban and rural. These morphologies have subsequently been adopted in ATIS-0500013 [2], defining the recommended indoor location test methodology.

B. TEST CONFIGURATION

Various technologies participated in the test bed: NextNav's MBS, Polaris Wireless RF Fingerprinting, and Qualcomm's AGPS/AFLT. All receivers (2 per technology) were assembled in a cart and tested simultaneously, as shown in Figure 3. The NextNav receiver is shown on the left.

The end-to-end test configuration for NextNav technology used in this test is illustrated in Figure 4.

Figure 3: Receiver Assembly for Test


[Figure 4 shows the end-to-end test setup: NextNav location beacons are ranged by the NN receiver unit, which connects via USB OTG to an Android phone running the NN app. Position fixes are initiated and logged on the device, the logs are emailed via the public cloud over the WWAN, and a voice call is placed through an AT&T BTS to a conference call bridge.]

C. TEST EXECUTION

Indoor ground truths were carefully surveyed to a maximum of +/- 2 cm in both vertical and horizontal accuracy. Each technology was tested with at least two handsets at any given location. Over 13,400 test calls were placed from the devices of each of the 3 technologies at 74 valid indoor test points, averaging over 180 calls per test point. The test point distribution is shown in Table 1.

Table 1: Test Point Distribution Summary

Morphology          Number of Test Points
Dense Urban (DU)                       29
Urban (U)                              23
Suburban (SU)                          19
Rural (R)                               4
Total                                  75

Buildings were selected from the polygon shown in Figure 5 for testing in the dense urban morphology. Similar polygons were set up and agreed upon for the other morphologies. The test points were selected by Technocom to meet the general requirements of the test plan with adequate diversity in their RF environment (including adequate cellular signal coverage), placement of the point in the building, and non-intrusive test performance. Several of the test points were deep indoor locations (5-6 walls inside). An example picture of one such building, the Hearst Office Bldg (699 Market St.), San Francisco, is shown in Figure 6. There were typically 4 indoor test locations at each building.

Figure 5: Dense Urban Morphology Polygon

Figure 6: A Building in Downtown SF – example test point

The tests were conducted over a period of 21 days and the results were processed and analyzed for the following performance criteria:

- Location Accuracy

- Latency (TTFF)

- Yield

- Reported Uncertainty

- Location Scatter

Figure 4: Test Setup


Note that the Time to First Fix (TTFF), the time to obtain the first computed caller location, is reported for each technology at each test point and is calculated by establishing the precise time of call initiation (or an equivalent initiation event if the vendor's test configuration did not support the placement of an emergency-like call, e.g., to 922). A 30-second timeout was set for latency as the maximum time allowed to get a location fix while the call was in progress. Further, each of the fixes is performed and measured under cold-start conditions. Hence, the notion of TTFF is different in this test compared to conventional GNSS concepts.

V. RESULTS

Summary results for accuracy per morphology are shown in Table 2 and Table 3 for the different technologies. Figure 7 shows a pictorial view of the results, with the 67th percentile at the bottom and the 90th percentile at the top of each bar for the different morphologies. The 67th and 90th percentile numbers can be compared to the FCC E911 mandated Phase 2 requirements of 50 m 67% of the time and 150 m 90% of the time (shown in solid grid lines). In Table 3, only NextNav's technology is shown for vertical results because it is the only technology that tested a vertical system. Note that the typical floor height in a multi-story building is around 3 m. In Table 2 the numbers have been rounded to the nearest integer, where appropriate, for typographical space. A copy of the full report with the CDF curves for each morphology and technology can be found in [3].

From the results, it is evident that using a terrestrial constellation designed for delivering position location signals performs better than any available wide area positioning system. Further, the elevation information with a floor level accuracy is a unique capability that has never been achieved before in a wide area context.

Table 2: Summary 2D results of key parameters from CSRIC testing

Technology and Morphology   67th (m)   90th (m)   Avg. TTFF (s)   Yield (%)   Conf. (m)
NextNav DU                        57        102           27.36        93.9          93
Qualcomm DU                      156        268           28.24        85.8          93
Polaris DU                       116        400           24.37        99.4          69
NextNav U                         62        141           27.40        95.4          87
Qualcomm U                       227        450           27.83        90.8          79
Polaris U                        198        447           24.11        99.9          61
NextNav SU                        28         52           27.39       100.0          97
Qualcomm SU                       75        205           23.53        91.4          85
Polaris SU                       232        420           24.68        99.8          71
NextNav R                         28         44           27.56        97.3          95
Qualcomm R                        48        210           24.88        99.3          82
Polaris R                        575       3005           23.38        96.9          49

Figure 7: Comparative Performance

Table 3: Summary vertical error results from CSRIC testing

Technology and Morphology   67th (m)   90th (m)
NextNav DU                       2.9        4.0
NextNav U                        1.9        2.8
NextNav SU                       4.6        5.5
NextNav R                        0.7        1.1

ACKNOWLEDGMENT

NextNav would like to thank Technocom for providing the results of the CSRIC III testing.

REFERENCES

[1] CSRIC WG3 Indoor Location Test Report, Mar 2013. [Online]. Available: http://transition.fcc.gov/bureaus/pshs/advisory/csric3/WG3_Indoor_Test_Report_Bay_Area_Stage_1_Test_Bed_Jan_31_2013.pdf

[2] https://partner.nextnav.com/

[3] “Approaches to Wireless E9-1-1 Indoor Location Performance Testing,” ATIS-0500013, Feb 2010. [Online]. Available: https://www.atis.org/docstore/product.aspx?id=25009

[4] “Indoor Location Test Bed Report,” CSRIC III, Mar 2013. [Online]. Available: http://www.nextnav.com/sites/default/files/CSRIC_III_WG3_Final_Test_Bed_Rpt_3_14_2013.pdf


Indoor Positioning using Wi-Fi – How Well Is the Problem Understood?

Mikkel Baun Kjærgaard, Mads Vering Krarup, Allan Stisen, Thor Siiger Prentow, Henrik Blunck, Kaj Grønbæk, Christian S. Jensen

Department of Computer Science, Aarhus University, Denmark
Email: mikkelbk,mvk,allans,prentow,blunck,kgronbak,[email protected]

Abstract—The past decade has witnessed substantial research on methods for indoor Wi-Fi positioning. While much effort has gone into achieving high positioning accuracy and easing fingerprint collection, it is our contention that the general problem is not sufficiently well understood, thus preventing deployments and their usage by applications from becoming more widespread. Based on our own and published experiences on indoor Wi-Fi positioning deployments, we hypothesize the following: Current indoor Wi-Fi positioning systems and their utilization in applications are hampered by the lack of understanding of the requirements present in real-world deployments. In this paper, we report findings from qualitatively studying organisational requirements for indoor Wi-Fi positioning. The studied cases and deployments cover both company and public-sector settings and the deployment and evaluation of several types of indoor Wi-Fi positioning systems over durations of up to several years. The findings suggest, among others, a need for supporting all case-specific user groups, providing software platform independence and low maintenance, and allowing positioning of all user devices, regardless of platform and form factor. Furthermore, the findings also vary significantly across organisations, for instance in terms of need for coverage, which motivates the design of orthogonal solutions.

I. INTRODUCTION

Motivated by the challenge of indoor positioning, a substantial amount of research has focused on methods for indoor Wi-Fi positioning. For instance, a search for Wi-Fi and positioning on Google Scholar [1] returns over ten thousand papers. Already in 2007, a survey covered over fifty papers presenting different methods for Wi-Fi positioning [2]. Since then, research on the topic has increased its output and is by now accompanied by articles that study the links between Wi-Fi positioning and other positioning technologies.

Research articles on Wi-Fi indoor positioning are foremost method oriented, e.g., most of them propose a new technique to address one general goal, e.g., positioning accuracy as evaluated on collected datasets. General arguments are given to promote addressing the specific topic of the presented contribution. However, these claims are often not backed up with statements grounded in insights from positioning system stakeholders (e.g., future owners or users) or real-world use experiences with deployed systems. Therefore, it is largely unknown whether or not research is addressing the most pressing issues; e.g., it is unclear whether further accuracy gains are more pressing than, e.g., the improvement of methods that allow for positioning of devices of a broader variety of operating systems and form factors. Therefore, an understanding of the organisational requirements for indoor Wi-Fi positioning as deployed in real-world use, e.g., at companies or public-sector institutions such as hospitals, is needed to justify both further research on known issues as well as on yet mostly unaddressed issues. To the best of our knowledge, so far no studies have been published which focus on reporting the organisational requirements for indoor Wi-Fi positioning.

Research on Wi-Fi positioning has inspired commercial businesses to provide, on common smartphones and within urban areas, positioning systems that, e.g., allow one to pinpoint the building you are in or enable a points-of-interest application to show you that it is two kilometres to the nearest shopping mall. For such positioning systems, earlier studies claim accuracy levels of 30-70 meters, depending on calibration level and algorithm used [3]. Furthermore, quite a number of papers discuss applications of urban positioning, e.g., location-based games, life logs from place visits or location-based reminders [4]. At the indoor level, the research has also inspired businesses to provide site-specific indoor Wi-Fi positioning systems to individual organisations, targeting an accuracy below 3 meters [5], [6]; however, such technology is not massively deployed yet. Recently, new players are entering the scene, providing site-independent indoor Wi-Fi positioning for public spaces together with indoor maps, e.g., Google Maps 6.0 [7], targeting an accuracy of 5-10 meters [8]. However, given the lack of knowledge of organisational requirements, it is hard to judge the application potential for these systems.

Given the substantial amount of research into methods for indoor Wi-Fi positioning in the last decade, one would expect that there would by now exist a multitude of papers reporting on the deployment experience of indoor location-based applications which utilize indoor Wi-Fi positioning. It is therefore a paradox that, when the authors surveyed the literature, only seven articles on the topic could be identified.

In the light of the published as well as of our own experiences on indoor Wi-Fi positioning deployments, we hypothesize the following: Current indoor Wi-Fi positioning systems and their utilization in applications are hampered by the lack of understanding of the organisational requirements present in real-world deployments. This paper thus addresses the lack of knowledge of organisational requirements for indoor Wi-Fi positioning. Our case studies and deployments cover both company and public-sector settings and the deployment and evaluation of several types of indoor Wi-Fi positioning systems deployed for several years. The paper's contributions are as follows: We present findings of important requirements in different organisations based on case studies of deployed indoor Wi-Fi positioning systems in both company and public-sector settings. The findings suggest among others a need


for supporting all user groups, providing software platform independence, low maintenance, and enabling of positioning for user devices regardless of form factor. Additionally, there is a need to establish application requirements for not only accuracy but also latency. Furthermore, the findings also vary significantly across organisations, for instance in terms of need for coverage, thereby motivating the design of orthogonal solutions.

II. RECAP OF INDOOR WI-FI POSITIONING

For use in the remainder of the paper, we will establish some terminology for Wi-Fi positioning systems, building on the most extensive attempt yet to structure this field [2]. Indoor Wi-Fi positioning has been studied for more than a decade, and research has proposed a variety of methods and algorithms building on the notion of location fingerprinting [9]. At the core of any location fingerprinting system is a radio map, which is a model of network characteristics in a deployment area. A positioning method uses this radio map to compute a likely position given an observation of the current network characteristics. Additionally, positioning methods might fuse Wi-Fi-derived positions with other sensor observations [10]. Wi-Fi positioning systems are classified according to the division of roles as device-based if both taking measurements and positioning are performed by the device to be positioned, device-assisted if measurements are taken by the device and positioning is performed remotely, and network-based if the network carries out both the measuring and the positioning remotely.
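The radio-map idea can be illustrated with the simplest fingerprinting scheme, nearest-neighbour matching in signal space; the radio map below is a toy with hypothetical access points and RSS values, not one of the surveyed deployments:

```python
import math

# Toy radio map: survey position (m) -> mean RSS (dBm) per access point.
radio_map = {
    (0.0, 0.0): {"AP1": -40, "AP2": -70, "AP3": -60},
    (5.0, 0.0): {"AP1": -55, "AP2": -50, "AP3": -65},
    (0.0, 5.0): {"AP1": -60, "AP2": -72, "AP3": -45},
}

def locate(observation):
    """Return the surveyed position whose fingerprint is closest to the
    observed RSS vector (Euclidean distance in signal space)."""
    def dist(fingerprint):
        shared = set(fingerprint) & set(observation)
        return math.sqrt(sum((fingerprint[ap] - observation[ap]) ** 2
                             for ap in shared))
    return min(radio_map, key=lambda pos: dist(radio_map[pos]))

print(locate({"AP1": -54, "AP2": -52, "AP3": -66}))  # → (5.0, 0.0)
```

In a device-based system this lookup runs on the phone itself; in a device-assisted or network-based system the observation (or raw measurements) is shipped to a server that holds the radio map.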

Radio maps can be constructed by methods which can be classified as either empirical or model-based. Empirical methods use collected fingerprints to construct radio maps. Model-based methods instead use a model parameterised for the covered area to construct radio maps [9]. Furthermore, given recent trends, we will in this paper subdivide the empirical methods into administrative, participatory and opportunistic fingerprinting. Administrative fingerprinting is carried out by the administrator of the system or by an expert hired on behalf of the administrator [9]. Participatory fingerprinting refers to users of the positioning system collecting fingerprints when and where they want to [11]. Opportunistic fingerprinting refers to the collection of fingerprints during normal system use without any user intervention or explicit ground truth provision, e.g., with the assistance of inertial sensors [12] or using unsupervised techniques to recover the mapping to the physical space [13].

III. ORGANISATIONAL REQUIREMENTS

In this section we address the knowledge gap regarding organisational requirements for indoor Wi-Fi positioning. Here, organisation denotes an organized entity involving several people with a particular purpose, such as a business or a public-sector institution or department. To gather knowledge about the organisational requirements, we chose the analysis of case studies as our guiding research method. This enabled us to study different cases of organisations, distill the case-specific requirements from the collected information, and analyse how these differ among organisations.

The cases we consider have been chosen in order to cover the following two dimensions: firstly, the type of organisation, e.g., public institution versus private company; secondly, the size of the organisation in terms of the number of potential users and the total coverage area of its buildings. We selected cases among organisations which we knew to either already have experience with positioning or have an interest in trying out positioning. Consequently, we chose to study cases at a Small Private Company (SPC), a Medium-sized University Department (MUD), a Large Shopping Mall (LSM) and a Large Public Hospital (LPH). In the following, we will use these abbreviations to denote either the respective deployment scenario or the respective stakeholder parties. Table I lists the studied organisations together with the number of potential system users, the size of the total coverage area and the type of Wi-Fi positioning that was deployed. Within the organisations we contacted persons with an interest in or with experience of using indoor Wi-Fi positioning. The contacted persons varied in their knowledge specifically about positioning and also in their level of technical knowledge in general.

Our procedure for gathering information for the case studies is primarily based on semi-structured interviews with stakeholders about their organisations' requirements for indoor Wi-Fi positioning. Additionally, for two of the organisations we deployed a positioning system at their site: for SPC, since the organisation did not have prior experience with positioning, and for LPH, to enable them to experiment with a different type of indoor Wi-Fi positioning. For these two cases we also did follow-up interviews after the deployments.

To guide the case study we reviewed the existing literature. However, we found no research studying in depth the organisational requirements for Wi-Fi positioning. Instead, as stated in the introduction, research in the field is motivated by and focuses on general goals regarding the improvement of accuracy or the reduction of deployment cost, and, accordingly, general arguments for these goals are given to promote the specific topic of the presented contribution. In regard to organisational requirements, the literature has so far focused foremost on capturing the technical differences among positioning systems by defining evaluation criteria for different technical aspects of these systems, e.g., resolution, accuracy, coverage and infrastructure requirements, among others. Furthermore, claims regarding a system's performance are often purely technical and not backed up with statements grounded in insights from users and stakeholders. So far only two aspects of Wi-Fi positioning deployments have been linked with and discussed in regard to organisational requirements: firstly privacy, e.g., in an academic setting [14], and secondly, only in the form of general comments, social barriers for fingerprint collection [11].

In the following we present the findings from the four case studies, structured according to nine requirement types we identified as important. These findings are also summarized in Table II, which reveals that the identified requirements differ among organisations, suggesting that there may not be a "one size fits all" indoor positioning system.

2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31st October 2013

TABLE I. DETAILS FOR CASE ORGANISATIONS

|     | Description                        | Potential Users              | Total Coverage                                    | Wi-Fi Positioning Deployment                              | Interviewed                |
| SPC | Small private company              | 70                           | A three-story building                            | Empirical participatory fingerprinting                    | Two software engineers     |
| MUD | Medium-sized university department | 140 + students               | Five buildings with three to five stories each    | Model-based and empirical administrative fingerprinting   | Network administrator      |
| LSM | Large shopping mall                | 100 + customers              | Large building complex with 85 shops              | Empirical administrative fingerprinting                   | Site manager               |
| LPH | Large public hospital              | 5000 + patients and visitors | Large building complex with 6000 individual rooms | Empirical administrative and participatory fingerprinting | Two network administrators |

TABLE II. SUMMARY OF FINDINGS FOR ORGANISATIONAL REQUIREMENTS

|     | O1       | O2          | O3      | O4 | O5       | O6  | O7 | O8  | O9  |
| SPC | Single   | Incremental | Few     | +  | External | +/- | +  | -   | +   |
| MUD | Multiple | Complete    | Several | +  | External | +/- | +  | -   | +/- |
| LSM | Single   | Complete    | Few     | +  | External | +/- | +  | -   | +/- |
| LPH | Multiple | Complete    | Many    | +  | Local    | +   | +  | +/- | +   |

User Groups (O1) Several organisations, most clearly MUD and LPH, stated that they had several user groups that were candidates for using a positioning system. These were for LPH: clinical staff, service staff, patients and guests; and for MUD: academic staff, technical staff, administrative staff, students and guests. From the studies we noticed that these groups differ in all of: i) how they utilize the space, ii) their mobility patterns, and iii) how long they are within the premises of the organisation. For instance, within LPH the staff come and leave at regular (work-shift) times and, depending on their function, are either largely stationary at one department or move around the whole hospital. In contrast, guests who visit a hospitalized patient often go directly to a specific department, occasionally with a detour to some of the common facilities. Patients, on the other hand, either stay mainly in a department if they are hospitalized, or otherwise walk from an entrance to the department where they receive outpatient treatment and exit the building afterwards. The two organisations LSM and SPC had foremost a single specific group in mind to provide positioning to: for LSM customers, and for SPC staff.

Coverage (O2) The organisations differ in how they view the requirements for coverage. When introduced to Wi-Fi positioning, SPC favored the concept of slowly growing the coverage to incrementally include places according to how much they were frequented, as this would ease the initial deployment. After trying this approach, they provided positive statements regarding growing the coverage. However, they also did not have a lot of experience with potential applications, which could potentially make them reconsider these statements. MUD, LSM and LPH stated that they would like complete coverage of their premises, so that the provided applications would work without outages and "dark spots" within the targeted areas, for the intended user groups and for tracked assets.

Form Factors (O3) Most of the organisations stated that they would like the positioning to work for several device form factors. SPC and LSM were focused on the positioning of smartphones, MUD on smartphones and tags, and LPH considered all of laptops, smartphones, tablets, Wi-Fi-enabled badges and watches, where smartphones would foremost be used for people tracking, and tags for asset tracking. Furthermore, given the rapid evolution of form factors, a wish was stated to be able to adopt new device types and form factors as these enter the market.

Software Platform (O4) Some of the organisations pointed out that positioning should work regardless of the operating system of the devices to be supported. In particular, LPH stated they would like their applications to work regardless of the users' device operating systems. When visiting the organisations we also noticed how they used and supported laptops, phones and tablets with different operating systems. LSM would like all visitors to have access to position-based services within the premises. Such platform-independent positioning is not trivial and may induce restrictions in other regards: for instance, in the SPC and LPH deployments potential system users were limited by the fact that the tested device prototype was implemented on the Android platform, as device-side positioning is not possible to implement on current iOS devices due to the restrictions of the currently available APIs.

Infrastructure (O5) For LPH it was paramount that the positioning system came with a high level of reliability and availability as soon as their work processes integrated positioning. To achieve this, they viewed it as a crucial measure that the positioning service was hosted within their own infrastructure. SPC and LSM did not have such concerns and were willing to accept a cloud-hosted service, such as the remotely hosted system that was eventually provided to them in the deployment.

Data Privacy (O6) For LPH it was important to protect location traces whenever these originate from a device carried by an identifiable person; the reason being that they considered such traces as personal and privacy-sensitive data since, e.g., in the case of patients, medical conditions and their severity may be deduced from in-hospital position traces. LSM, on the other hand, did not view privacy as an organisational issue, because the positioning was used by their customers without LSM having knowledge of the resulting position data.

Maintenance (O7) All organisations explicitly required that their positioning solution should have only a low degree of maintenance. LPH had already tried an empirical administrative fingerprinting-based solution and gave up on fingerprinting their premises exhaustively once they had concluded that this task would take more than three months for a single person. Furthermore, because they invested in the positioning system as an add-on when replacing their wireless infrastructure, no major resources were assigned to running or configuring this add-on. A second maintenance issue that LPH encountered was that there was no updating procedure in place to inform the installed system that an access point had been replaced, e.g., by an access point with a different identifier but of similar type and location, suggesting that old fingerprints could be reused. After the deployment, LPH saw some potential in empirical participatory fingerprinting as a low-cost solution to improve accuracy in specific areas of the hospital. This fingerprinting approach was attractive also to SPC. Initially, they were concerned whether such a solution was really cheaper, given that highly paid staff may end up spending work time on this task. However, after deployment SPC even suggested that the system should propose new places where users should go to take a fingerprint. A problem encountered with this system was that people often selected the wrong floor, since the floors' layouts were very similar. The issue was partly solved by increasing users' awareness of which floor they selected when fingerprinting, by presenting each floor in a different color. MUD had run a model-based fingerprinting solution for three years with extremely low maintenance: e.g., when all access points were replaced after two years with newer models, only the MAC addresses, residing in a single file, had to be reconfigured to get the system up and running again. They also tested an empirical administrative system in some parts of their buildings to improve accuracy, but had given up on fingerprinting it after all access points had been replaced. LSM entered a partnership with an external party, so maintenance was limited to providing information about their Wi-Fi infrastructure.

Fingerprinting Limitations due to Social Barriers (O8) When confronted with the social barriers that might affect the decision whether an administrator or expert could or should collect fingerprints, SPC did not view this as a major problem. They compared it to the duties of cleaning personnel or of a person watering the flowers, which also require access to and temporary presence in most of the premises. However, for participatory fingerprinting there may be restrictions: e.g., given the case that the system suggests that a participating user should fingerprint his boss's office, entering that office unnoticed may not be considered an acceptable action within the organisation. MUD had earlier allowed such collection and therefore also did not view this as a major problem. LPH and LSM did not raise issues with the topic either. However, in a hospital setting there are a number of locations which are difficult to get access to, e.g., doctors' offices, resting rooms and operating rooms, as these are either seldom unoccupied or considered private areas.

Accuracy and Latency (O9) When the stakeholders were asked upfront, LPH and SPC generally wished for room-level accuracy with high confidence and low latency. Wi-Fi positioning systems generally struggle to provide room-level accuracy (assuming rooms < 20 square meters) with high confidence, at least if not assisted by other sensor modalities or employing massive deployments of short-range Wi-Fi access points. Therefore, the technology may not be able to fulfill these wishes fully. LPH linked these wishes to a number of clinical applications, but also recognized that for other applications more relaxed requirements were sufficient, e.g., sub-department level for providing an overview of assets. LSM stated that for way-finding via in-app indoor maps they had received positive reactions from customers, even though the current positioning accuracy was coarser than room-level. MUD stated that for asset tracking they had received positive reactions when providing an (empirically determined) 6-meter median accuracy. In general, the organisations' stakeholders were unsure about the link between positive application experiences and specific requirements for accuracy and latency.

IV. CONCLUSIONS

In this paper we have hypothesized about the reasons why, after more than ten years of research on indoor Wi-Fi positioning, it has not yet achieved a widespread breakthrough in terms of real-world deployments. We argue that this is due to an overly narrow research focus on algorithmic optimization of positioning accuracy in insufficiently realistic settings, at the expense of a broader understanding of the organisational side of indoor Wi-Fi positioning. The findings suggest, among other things, a need to consider how to support all user groups, provide software platform independence and low maintenance, and allow positioning of all user devices regardless of platform and form factor. We hope that the research community will address such challenges in future work.

ACKNOWLEDGMENT

The authors acknowledge the support granted by the Danish Advanced Technology Foundation under J.nr. 076-2011-3.

REFERENCES

[1] "scholar.google.com," Dec. 2012.
[2] M. B. Kjærgaard, "A Taxonomy for Radio Location Fingerprinting," in LoCA, 2007, pp. 139-156.
[3] I. Constandache, R. R. Choudhury, and I. Rhee, "Towards mobile phone localization without war-driving," in INFOCOM, 2010, pp. 2321-2329.
[4] T. Sohn, K. A. Li, G. Lee, I. E. Smith, J. Scott, and W. G. Griswold, "Place-its: A study of location-based reminders on mobile phones," in UbiComp, 2005, pp. 232-250.
[5] "www.ekahau.com," Dec. 2012.
[6] "www.aeroscout.com," Dec. 2012.
[7] "maps.google.com," Dec. 2012.
[8] "www.theage.com.au/digital-life/smartphone-apps/indoor-gps-every-step-you-take-every-move-you-make-googles-got-maps-for-you-20121115-29e1b.html," Dec. 2012.
[9] P. Bahl and V. N. Padmanabhan, "RADAR: An In-Building RF-Based User Location and Tracking System," in INFOCOM, 2000, pp. 775-784.
[10] R. Nandakumar, K. K. Chintalapudi, and V. N. Padmanabhan, "Centaur: locating devices in an office environment," in MobiCom, 2012, pp. 281-292.
[11] J. geun Park, B. Charrow, D. Curtis, J. Battat, E. Minkov, J. Hicks, S. J. Teller, and J. Ledlie, "Growing an organic indoor location system," in MobiSys, 2010, pp. 271-284.
[12] H. Wang, S. Sen, A. Elgohary, M. Farid, M. Youssef, and R. R. Choudhury, "No need to war-drive: unsupervised indoor localization," in MobiSys, 2012, pp. 197-210.
[13] K. Chintalapudi, A. P. Iyer, and V. N. Padmanabhan, "Indoor localization without the pain," in MobiCom, 2010, pp. 173-184.
[14] W. Griswold, P. Shanahan, S. Brown, R. Boyer, M. Ratto, R. Shapiro, and T. Truong, "ActiveCampus: experiments in community-oriented ubiquitous computing," Computer, vol. 37, no. 10, pp. 73-81, 2004.


- chapter 12 -

Aerospace


The workspace Measuring and Positioning System (wMPS) — an alternative to iGPS

Bin Xue, Jigui Zhu*, Yongjie Ren, Jiarui Lin

State Key Laboratory of Precision Measuring Technology and Instruments

School of Precision Instrument and Opto-Electronics Engineering, Tianjin University

Tianjin, China

[email protected]

I. INTRODUCTION

The workspace Measuring and Positioning System (wMPS) is a modular, large-volume tracking system enabling factory-wide localization of multiple objects with metrological accuracy, applicable in manufacturing and assembly [1,2]. Like iGPS [3], the wMPS consists of a network of transmitters, a control center and a number of receivers. Moreover, the wMPS possesses all the distributed characteristics that iGPS features, including sharing the measurement task across a network of transmitters, tracking an unlimited number of targets simultaneously, and so on. The localization principle of the wMPS is the multi-plane constraint: the position of a receiver can be determined by several laser planes intersecting there. iGPS, in contrast, is based on triangulation: each transmitter presents two measurement values to each receiver, the horizontal (azimuth) and the vertical (elevation) angles, and receivers can calculate their position whenever they are within the line of sight of two or more transmitters [4]. Apart from the localization principle, the wMPS and the iGPS share the same characteristics and advantages in accuracy and functionality.

II. LOCATING PRINCIPLE

The components of the wMPS are illustrated in Fig.1.

Figure 1. The components and the locating principle of the wMPS

Fig. 1 presents the typical components of the wMPS. Each transmitter has a rotating head with two lasers mounted on it, which emit plane-shaped beams. To distinguish one transmitter from another, different rotation velocities are assigned to the transmitters in the workspace. The moment a laser beam sweeps over a receiver, the pre-processor attached to the receiver provides the time elapsed since the start of the rotation cycle by accumulating pulses. Several planes passing through a receiver then locate its position, see Fig. 1.
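The multi-plane constraint can be sketched numerically: each swept laser plane contributes one linear equation n_i · x = n_i · p_i, and the receiver position is their least-squares intersection. The plane normals and points below are invented for illustration; in the real system they would be derived from the measured sweep times (the rotation angle at the sweep moment being the rotation velocity times the measured time) and the calibrated transmitter parameters.

```python
import numpy as np

def locate_receiver(normals, points):
    """Least-squares intersection of laser planes (multi-plane constraint).

    normals: (k, 3) unit normals of the swept laser planes
    points:  (k, 3) a known point on each plane (e.g. the transmitter origin)
    Needs at least three planes with linearly independent normals.
    """
    N = np.asarray(normals, dtype=float)
    P = np.asarray(points, dtype=float)
    b = np.einsum('ij,ij->i', N, P)           # right-hand side n_i . p_i
    x, *_ = np.linalg.lstsq(N, b, rcond=None)
    return x

# Three orthogonal planes that all pass through the point (1, 2, 3):
normals = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
points = [[1, 0, 0], [0, 2, 0], [0, 0, 3]]
print(locate_receiver(normals, points))  # → [1. 2. 3.]
```

With more than three planes the system is overdetermined and the least-squares solution averages out measurement noise, which is one benefit of a dense transmitter network.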

III. THE APPLICATION AREA OF THE WMPS

The workspace Measuring and Positioning System (wMPS) can be applied in many measuring and manufacturing areas in industry. For instance, the wMPS has been successfully applied in an airplane level-measurement project. Other projects, such as providing high absolute accuracy to an industrial robot, guiding an AGV (Automatic Guided Vehicle) through a workshop, and assembly work in shipbuilding, are also in progress.

IV. THE ACCURACY EVALUATION OF THE WMPS

To evaluate the accuracy of the wMPS, we set up the following experiment. First, we set up the wMPS with the scale bar. Second, we sampled at least three correspondences in both the wMPS and the laser tracker frame in order to determine the transformation between the two. Third, we used both the wMPS and the laser tracker to measure the test points. Because the receiver was designed with the same diameter as the SMR of the laser tracker, measuring the same point in both frames is feasible. Six correspondences were sampled, and the results are listed in Table 1.

Table 1. Data comparison between the laser tracker and the wMPS

| Point |         | X (mm)   | Y (mm)   | Z (mm) | Error (mm) |
| 1     | Nominal | 5880.18  | -3213.32 | 127.60 | 0.11       |
|       | Actual  | 5880.22  | -3213.29 | 127.70 |            |
| 2     | Nominal | 7631.58  | -5720.73 | 152.48 | 0.15       |
|       | Actual  | 7631.50  | -5720.86 | 152.47 |            |
| 3     | Nominal | 7840.26  | -5626.45 | 559.75 | 0.22       |
|       | Actual  | 7840.41  | -5626.32 | 559.67 |            |
| 4     | Nominal | 6006.25  | -3468.72 | 534.27 | 0.11       |
|       | Actual  | 6006.17  | -3468.65 | 534.24 |            |
| 5     | Nominal | 10483.85 | -939.81  | 535.94 | 0.20       |
|       | Actual  | 10483.66 | -939.76  | 535.91 |            |
| 6     | Nominal | 11701.76 | -3006.05 | 558.46 | 0.14       |
|       | Actual  | 11701.85 | -3005.95 | 558.48 |            |


The data under the heading Error are the distances between the nominal and the actual coordinates. The distance between the nominal value provided by the laser tracker and the actual value provided by the wMPS is used to measure the accuracy of the wMPS: the smaller the distance, the better the accuracy.

We see from the data under the heading Error that the maximum error is 0.22 mm and the minimum is 0.11 mm. That is to say, the accuracy of the wMPS lies around 0.2 mm. We want to point out that these results were obtained in a reasonably well-controlled environment. When the circumstances in the workshop are complex enough to constrain the placement of the transmitters, the accuracy may deteriorate.
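The error column is the Euclidean distance between each nominal and actual point. As a quick check on the first two rows of Table 1 (a sketch using only the values printed above):

```python
import numpy as np

# First two point pairs from Table 1 (laser tracker = nominal, wMPS = actual).
nominal = np.array([[5880.18, -3213.32, 127.60],
                    [7631.58, -5720.73, 152.48]])
actual = np.array([[5880.22, -3213.29, 127.70],
                   [7631.50, -5720.86, 152.47]])

errors = np.linalg.norm(actual - nominal, axis=1)  # per-point 3D distance
print(errors.round(2))  # → [0.11 0.15]
```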

V. THE DETERIORATION OF THE ACCURACY TOWARDS THE BOUNDARIES OF THE OPTIMUM MEASURING AREA OF THE WMPS

The contradiction between accuracy and measuring area is a constant topic in large-scale measurement [5]. Although the wMPS has the potential to resolve this contradiction through its distributed characteristics, there are still unresolved problems. For example, adding more transmitters to the network can enlarge the measuring area, but how should their placement be arranged to provide the optimum accuracy for specific measurands? How should the scale bar be placed during the calibration procedure to achieve the best calibration accuracy? When converting the current coordinate frame to a workpiece frame, how should the correspondence points be placed to obtain the optimum conversion accuracy? These urgent problems need to be solved both theoretically and practically.

In this section, we demonstrate an experimental result which briefly reflects the effects underlying the above problems. In the workshop, we placed three transmitters according to the environment, and a laser tracker to provide the reference. We set up the wMPS with the scale bar, and then obtained several correspondences in both the wMPS and the laser tracker frame to determine the transformation between the two. Obtaining the correspondences benefits from the receiver and the SMR having the same diameter; this also allows coordinate comparisons that reflect the relationship between accuracy and measuring area. The results are illustrated in Fig. 2.
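The frame relationship obtained from the correspondences is a rigid transformation (rotation plus translation). One standard way to estimate it from three or more point pairs is the SVD-based (Kabsch) least-squares fit sketched below; this is an illustrative method choice, as the paper does not state which algorithm was used.

```python
import numpy as np

def fit_rigid_transform(src, dst):
    """Least-squares rigid transform so that dst ≈ R @ src + t (Kabsch method).

    src, dst: (k, 3) corresponding points, k >= 3 and not collinear.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

Given correspondences measured in the wMPS frame (src) and the laser tracker frame (dst), applying R and t to further wMPS measurements expresses them in the tracker frame for comparison.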

In Fig. 2, the area where we located the scale bar in order to calibrate the wMPS is marked by both the yellow crosses and the scale bar icons. The pink crosses represent the correspondences used to determine the transformation between the wMPS and the laser tracker frame. The laser tracker itself is not marked in the figure. The blue crosses, together with the numbers, represent the points used to test the accuracy there. The area where the laser planes of the transmitters intersect best gives relatively better accuracy; it is also the area where the scale bar was located during the calibration procedure. As shown in Fig. 2, the accuracy deteriorates significantly outside the boundaries of this area. The deterioration may be caused by several factors, such as poor intersection geometry, extrinsic parameters that are accurate in one area becoming inaccurate in another, and the correspondence points not being dispersed widely enough. These factors should be studied one by one, theoretically and practically. The objective is to achieve the best accuracy the wMPS can provide in the desired area, to meet the requirements of industrial measurement.

Figure 2. The deterioration of the accuracy towards the boundaries of the optimum measuring area of the wMPS

VI. CONCLUSION

In this paper, we first introduced the wMPS in a nutshell, and then offered an accuracy assessment by comparing it with a laser tracker. The results show that the wMPS is able to achieve an accuracy of around 0.2 mm in a reasonably well-controlled environment. Finally, we pointed out that, as a distributed large-scale measurement system, the accuracy of the wMPS differs in different areas. This characteristic is caused by several factors; effectively controlling these factors in order to achieve the required accuracy in the area of interest remains a problem to be solved.

REFERENCES

[1] Z. Xiong, J. Zhu, Z. Zhao, X. Yang, and S. Ye, "Workspace measuring and positioning system based on rotating laser planes," Mechanika, vol. 18, pp. 94-98, 2012.

[2] B. Xue, J. Zhu, Z. Zhao, J. Wu, Z. Liu, and Q. Wang, "Validation and mathematical model of workspace Measuring and Positioning System as an integrated metrology system for improving industrial robot positioning," Proc. IMechE Part B: Journal of Engineering Manufacture, in press.

[3] F. Franceschini, M. Galetto, D. Maisano, L. Mastrogiacomo, and B. Pralio, "Distributed large-scale dimensional metrology: new insights," London: Springer, 2011.

[4] S. Kang and D. Tesar, "A novel 6-DOF measurement tool with indoor GPS for metrology and calibration of modular reconfigurable robots," IEEE ICM International Conference on Mechatronics, Istanbul, Turkey, 2004.

[5] M. Younis and K. Akkaya, "Strategies and techniques for node placement in wireless sensor networks: A survey," Ad Hoc Networks, vol. 6, pp. 621-655, 2008.


- chapter 13 -

User Requirements


Key Requirements for Successful Deployment of Positioning Applications in Industrial Automation

Linus Thrybom, Mikael Gidlund, Jonas Neander, Krister Landernäs

Corporate Research

ABB AB

Västerås, Sweden

Abstract—Positioning and navigation applications have so far mainly been targeting the consumer market but are now beginning to penetrate the industrial automation domain. The requirements in this environment are, however, quite different from those of the consumer market, and need to be met before such systems can be used in the automation domain. A failure in the positioning system may cause substantial production losses and could even be fatal, which is the reason for the generally stricter requirements.

This paper describes the industrial usage needs and defines the industrial requirements, such as environmental, availability, safety, technical, usability and cyber security requirements, that industrial positioning solutions need to support. Furthermore, the requirements are compared to the state of the art in order to identify current gaps that need to be researched and solved. The paper concludes that there are large gaps in several areas, and that these gaps need to be managed before large-scale industrial adoption of positioning systems can be achieved. It furthermore concludes that the research community plays an important role in supporting future industrial automation.

Keywords: industrial; automation; requirements; positioning

I. INTRODUCTION

Industrial use of positioning systems is still at a low level compared to the consumer market, which has adopted the use of position and navigation data in many different application areas. However, positioning technologies are attracting increased interest from the industrial automation perspective as well.

The automation level achieved in a process depends on what data is available, and here positioning data will play an important role in further automating industrial processes.

The largest interest in industrial positioning systems relates to autonomous systems and processes, including vehicles and mobile devices. Autonomous systems have been in place for some time, but only in very dedicated applications like automatic warehouse trucks.

The automated factory is usually operated from a control room, and the process is autonomous from the input of raw material to the output of the final product. However, the transport of the raw material into the automated factory, for example, is today often not automated or integrated with the main process. One example is the mining industry, where the actual ore extraction is performed remotely from the ore processing. If the quality and amount of raw material can be measured and controlled, they can be used to improve the final product quality of the plant. This extended automation process would also provide a tool for better usage of the machines and other assets involved in the process.

Additional future scenarios of industrial automation include factories in environments which are remote or unfriendly to humans. In these future factories, autonomous operation will require a new set of positioning systems for industrial applications.

II. INDUSTRIAL EXAMPLES

Positioning is today used in, e.g., mining, harbor and port systems as well as in the oil and gas industry. The ABB 800xA system plays an important role in process plants by collecting detailed status information from all connected smart instruments, including wireless sensors. Integration and analysis of such data improves operation and process quality.

A. Mining

The mining industry is probably the industry which has progressed furthest in using positioning systems. The risk to human health in underground mines has pushed for systems which are able to locate people in case of an emergency evacuation due to hazardous gases or fire; the knowledge that someone is still in the hazardous zone is critical information for the rescue team. These systems are often limited in accuracy, but provide enough detail to know in which area a person is located. Active and passive RFID are the most common solutions used today. Still, the degree of automation and data integration is less utilized in underground mining applications, but this is foreseen to change. The reason is that the ore requires more and more effort to be extracted; one necessary way to continue increasing productivity is to increase automation and data integration [1]. It is in this change that further automated mining systems, as well as more accurate positioning systems, play an important role, both for human safety and for higher productivity.

A second use case for positioning in the mining industry is open pit mining and fleet management systems. Open pit mining requires exact positioning data for its operation, e.g., for drilling, shoveling and surveying. In fact, the drilling process can be improved significantly when the drill position is known down to centimeter level. As open pit mines stretch over larger and larger areas, with longer and longer distances between the ore and the process plant, ore logistics becomes an important factor in the process. It becomes an interesting and valuable optimization problem when taking ore quality, vehicle condition, position and speed into account. To some extent this transition has started in underground mining, where for many years the ore has been extracted by remotely controlled machines. These machines are operated far away from the actual mine face, in a modern industrial control room fashion.

B. Shipping Port

Another application that benefits from positioning systems in conjunction with autonomous operation is the handling of shipping containers in a port. One example is [2], which provides good insight into the industrial requirements, e.g. regarding reliability and integrity, and which resulted in a positioning system consisting of two independent loops and four sensors based on four different physical principles. The safety requirement is seen in [2] as one of the most important issues and is addressed by basing the safety system on both "defense in depth" and independence principles. The mixed use of low-frequency absolute positioning and high-frequency rate sensors was proposed in [3] and is one way to match the industrial requirements.

C. Oil & Gas

Remotely operated vehicles used for subsea platform inspection use GPS together with depth as their primary source of position. During the drilling process, the position of the wellbore is also monitored. Access control as well as personnel health and safety applications on offshore platforms are also important positioning functions. Future challenges include automating pipe inspections and valve handling on the platform itself, but it should be noted that this environment is very harsh.

III. INDUSTRIAL REQUIREMENTS

A. Industrial Usage Needs & Requirements

The industrial requirements reflect the high cost of production losses and the high demands on productivity and efficiency.

The exact requirements depend to a large extent on the application, and span areas like environment, availability, safety, cyber security and usability. The environmental requirements include e.g. dust, high and low temperatures, humidity, corrosive gases in the air, mechanical vibrations and electrical EMC disturbances. Functional safety requirements are of increasing importance, and apply to a growing set of applications. Usability requirements include e.g. that users wear gloves and helmets and may work in a noisy and dirty environment, which impacts the user interface and user interaction with a system. Cyber security is an extremely important area, since malicious positioning data could lead to collisions, broken equipment and production stops. Authentication and integrity are here the key cyber security requirements for the industry.

Other typical industrial requirement areas are listed below:

Availability – An industrial system often has high availability requirements, typically 99.999%, which can partly be achieved using redundancy solutions and strict verification processes.

Scalable coverage – It is important that the system is able to scale when more sensors are added or changes are made to the infrastructure.

Standardized equipment – Customers often require that the equipment used is standardized. This allows them to use different vendors.

Robustness – The devices will operate in harsh environments, often with extreme heat and humidity. In some applications the devices are required to be ATEX and Safety Integrity Level (SIL) 3 ready.

System latency – The system application typically contains several control loops, which require data to be available in time.

Simplicity – The equipment should be easy to deploy and maintain.

Retrofit – Software components, e.g. maps, should be easy to integrate with existing control systems.

Life cycle – Product life cycles are generally long, from 15-20 years up to 40 years of lifetime in some applications.

Cost – The cost per device, per system and over the whole lifetime is in the end a major decision factor for investing in a positioning system.

The industrial requirements may be easy to achieve one by one, but in a majority of applications most of these requirements are mandatory simultaneously. Some applications additionally require third-party certification.

Bringing all these requirements into the industrial positioning application means e.g. that the positioning device should run safely in a harsh environment for 15 years and be unavailable less than 5 minutes per year with a guaranteed accuracy.
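As a quick check of the figures above, the downtime budget follows directly from the availability target; a minimal sketch (the helper name is ours):

```python
# Downtime budget implied by an availability requirement (sanity check).
MIN_PER_YEAR = 365.25 * 24 * 60  # minutes in an average year

def downtime_minutes_per_year(availability: float) -> float:
    """Allowed downtime in minutes per year for a given availability fraction."""
    return (1.0 - availability) * MIN_PER_YEAR

print(round(downtime_minutes_per_year(0.99999), 2))  # 5.26
```

A "five nines" (99.999%) system is thus allowed roughly 5.26 minutes of downtime per year, matching the "< 5 minutes / year" figure quoted here.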

Most available positioning systems are radio based, and the harsh industrial radio environment then becomes another challenge. This is further discussed in the next section.


B. Radio Communication in Harsh Industrial Environments

Using wireless systems in industrial automation is becoming increasingly popular, since it brings several benefits such as easier maintenance, lower installation cost and flexibility. However, the radio environment in industrial automation is usually rougher than the consumer environments for which most existing wireless systems are designed.

Electromagnetic interference in industrial facilities comes from industrial equipment and coexisting wireless networks. EMC problems originate from sources such as breakers, electric motors, welding and industrial processes [3], [4]. On some occasions these disturbances can produce delays in the communication due to retransmissions and re-synchronization, which cause a blockage in the production system and in some cases safety hazards for personnel. Figure 1 shows that the majority of the impulsive interference occurs in the low frequency regions (typically below 1.5 GHz).

Figure 1: Electromagnetic interferences at low frequencies in a paper mill.

Industrial environments abound with highly reflective metal surfaces and moving or static physical obstacles. The high reflectivity of the surrounding environment produces a large number of signal copies, whose superposition at the receiver can be constructive or destructive. In addition, the sheer size of indoor industrial facilities results in large root mean square delay spreads (RMSDS), which cause another destructive phenomenon in wireless propagation called intersymbol interference (ISI) [5].
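For reference, the RMS delay spread is the square root of the power-weighted variance of the excess delays in a power delay profile; a minimal sketch (the function and sample values are illustrative):

```python
import numpy as np

def rms_delay_spread(delays_s, powers_lin):
    """RMS delay spread of a power delay profile.

    delays_s: path delays in seconds; powers_lin: linear (not dB) path powers."""
    t = np.asarray(delays_s, dtype=float)
    p = np.asarray(powers_lin, dtype=float)
    mean_delay = np.sum(p * t) / np.sum(p)
    return np.sqrt(np.sum(p * (t - mean_delay) ** 2) / np.sum(p))

# Two equal-power paths 200 ns apart give a ~100 ns RMS delay spread
print(rms_delay_spread([0.0, 200e-9], [1.0, 1.0]))
```

Larger facilities spread the multipath over longer delays, driving this number up and, once it approaches the symbol duration, causing ISI.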

Figure 2 shows the normalized received power over time for a tumbling mill and a paper production facility. It can easily be seen that the signal fluctuates strongly in the tumbling mill application, while it is rather static in the paper production area. However, in the paper production area another problem occurs quite often: huge trucks parked in front of the wireless equipment create shadow fading. In these cases the received signal drops by 30-40 dB. It is common in industrial environments that both small-scale fading and large-scale fading occur, which means that wireless systems need to be able to guarantee high reliability.

Figure 2: Normalized received power (dB) over time in the 2.4 GHz frequency band at the Boliden (Garpenberg) and Iggesund facilities.

In a multipath environment the first impinging component may be weaker than the strongest component, which causes problems for position determination with TOA- or TDOA-based solutions. Figure 3 shows the PDP for both LOS and NLOS scenarios in a real iron mine in Sweden [6]. The PDP of an RF signal is used to highlight the characteristics of a signal received in a multipath environment. By studying the PDP in Figure 3 it becomes apparent that in the LOS case the first impinging component is also the largest, and its arrival time is resolvable. However, for the non-LOS (NLOS) case there is no apparent relationship between the first impinging component and the largest amplitude. Also, the NLOS signal is much more dispersed over time, which results in a higher RMSDS. The results in Figure 3 hint that positioning in underground environments using wireless systems is a major challenge.

Figure 3: Normalized PDP for a 500 MHz pulse with 2450 MHz center frequency in LOS and NLOS at 18 m distance in the mine tunnel [6].
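The TOA difficulty described above can be illustrated with a simple leading-edge detector on a discretized PDP; this is a generic sketch, not the measurement processing used in [6], and the threshold is an assumed value:

```python
import numpy as np

def leading_edge_toa(pdp, dt, threshold_db=-10.0):
    """Time of the first PDP sample within threshold_db of the peak power.

    A weak NLOS first path that stays below the threshold is missed,
    so the estimated TOA is biased late."""
    pdp = np.asarray(pdp, dtype=float)
    thr = pdp.max() * 10.0 ** (threshold_db / 10.0)
    first = int(np.argmax(pdp >= thr))  # index of the first crossing
    return first * dt

# LOS-like profile: the strong first path at bin 3 is detected
toa = leading_edge_toa([0, 0, 0, 1.0, 0.5, 0.2], dt=1e-9)  # first crossing at bin 3 (3 ns)
```

In an NLOS profile where the true first path lies below the threshold relative to a later, stronger reflection, the same detector locks onto the reflection and overestimates the range.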


In conclusion, the dynamic radio environment in industrial plants is a challenge for any wireless equipment including positioning systems. Guaranteeing consistent performance with the highest reliability under these circumstances is not an easy task.

IV. GAPS

When comparing the state of the art of positioning with the industrial requirements and usage needs, a number of areas turn out to be underrepresented; these are explained and discussed below.

A. RSS

A large share of articles address the different methods and principles around RSS as well as the related fingerprinting methods. Techniques like RSS may work fine in a shopping mall, but not in an industrial environment. An industrial environment often contains large obstacles like machines, material and containers. Some of these objects are moved around during operation as part of the process. Additionally, there are various types of vehicles used to transport material and products in the process. This creates a highly dynamic RF environment with rapidly changing signal strengths as well as alternating LOS/NLOS conditions in the area. As a result, these objects make the RSS fingerprint very dynamic in an industrial environment, and in fact unusable. The same view, i.e. that there is no good attenuation model for this type of environment, is also concluded in [7]. In conclusion, other non-RSS methods must be investigated to a higher degree, and new methods better suited to industrial environments need to be developed.

B. Security

A second area where there is a research gap is the security aspect of positioning techniques, and the related personnel privacy. Personnel privacy is a soft aspect of positioning that we already face; the challenge is to use the information in the right way. Locating personnel is necessary in an emergency situation, but continuous and detailed localization of all personnel may not be allowed in other cases.

Cyber security threats apply to all communication systems, including positioning systems. The actual communication path can be protected using existing methods and principles, but for e.g. TOA other techniques may be required to secure authentication and integrity. The positioning system would need to authenticate that the position signal originates from the correct source, that there is no replay, and that no one has modified the data. Research on these techniques is today largely missing.
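As an illustration of the authentication and anti-replay needs discussed above (not a scheme proposed in the paper), a keyed MAC plus a monotonically increasing sequence number covers tampering and replay at the message level; a minimal sketch with Python's standard hmac module, with a hypothetical message format and key:

```python
import hmac, hashlib, json

KEY = b"shared-secret-demo-key"  # hypothetical pre-shared key

def pack(position, seq):
    """Serialize a position report with a sequence number and append an HMAC tag."""
    body = json.dumps({"pos": position, "seq": seq}).encode()
    tag = hmac.new(KEY, body, hashlib.sha256).digest()  # 32-byte tag
    return body + tag

def verify(msg, last_seq):
    """Return the report if the tag checks out and seq is fresh, else None."""
    body, tag = msg[:-32], msg[-32:]
    if not hmac.compare_digest(tag, hmac.new(KEY, body, hashlib.sha256).digest()):
        return None  # modified in transit
    rec = json.loads(body)
    if rec["seq"] <= last_seq:
        return None  # replayed
    return rec

msg = pack([12.3, 4.5, 0.0], seq=7)
print(verify(msg, last_seq=6))  # accepted: {'pos': [12.3, 4.5, 0.0], 'seq': 7}
print(verify(msg, last_seq=7))  # None: replay detected
```

Note that this protects only the data channel; it cannot authenticate the physical TOA measurement itself, which is exactly the open problem the paragraph above points at.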

C. Functional Safety

A third area that requires more attention is functional safety. This implies that the protocols and methods used are robust, have high availability and are easy to verify. Since positioning systems are often used together with mobile vehicles and devices, it is obvious that a failure in the positioning system can result in collisions, with the risk of both high material costs and human losses. The simple solution of just stopping a device or vehicle can only be a rare and temporary measure, since a process stop would impact the productivity and may cause high economic losses.

D. High Availability

A fourth area that needs more attention is the high availability required of a positioning system. One common way to achieve high availability of up to 99.999% is to use redundancy. Solutions like redundant processing, redundant I/O and redundant communication are common in many industries today, mainly to fulfill the availability requirements. For positioning systems there are few, if any, research projects targeting high availability or redundant solutions. Technologies which complement each other in cooperation, but also act as backups in case a partner fails, would be highly appreciated topics in the research community. The important function is that the user can trust that even if one antenna or cable breaks, the system will still be able to operate. This is of additional importance when the application includes mobile vehicles or systems which move and thus more easily break e.g. cables.

V. CONCLUSION

Although many positioning systems exist, very few actually fulfill the requirements of industrial automation. In this paper we point out some important key requirements and gaps that need to be addressed in future research so that positioning can be used to a larger extent than today. The most important requirements are functional safety, security, high availability, and high accuracy. There is a huge business opportunity for positioning in industrial automation if several of the aforementioned gaps can be closed.

REFERENCES

[1] S. L. Sjöstrom, K. G. Carlsten, K. Landernäs, and J. Neander, "Mine of information," ABB Review 2013-2, www.abb.com/abbreview.

[2] H. Durrant-Whyte, D. Pagac, B. Rogers, M. Stevens, and G. Nelmes, "An autonomous straddle carrier for movement of shipping containers: from research to operational autonomous systems," IEEE Robotics & Automation Magazine, vol. 14, no. 3, pp. 14-23, Sept. 2007.

[3] J. Ferrer-Coll, J. Chilo, and P. Stenumgaard, "Outdoor APD measurements in industrial environments," in Proc. AMTA 2009, Salt Lake City, USA, Nov. 2009.

[4] P. Ängskog, C. Karlsson, J. Ferrer-Coll, J. Chilo, and P. Stenumgaard, "Sources of disturbances on wireless communication in industrial and factory environments," in Proc. Asia-Pacific Symposium on Electromagnetic Compatibility and Technical Exhibition on EMC RF/Microwave Measurement and Instrumentation, Beijing, China, April 2010.

[5] D. Tse and P. Viswanath, Fundamentals of Wireless Communication, Cambridge University Press, 2005.

[6] J. Ferrer-Coll, P. Ängskog, J. Chilo, and P. Stenumgaard, "Characterization of electromagnetic properties in iron-mine production tunnels," Electronics Letters, vol. 48, no. 2, pp. 62-63, 2011.

[7] M. Cypriani, F. Lassabe, P. Canalda, and F. Spies, "Open Wireless Positioning System: a Wi-Fi-based indoor positioning system," in Proc. IEEE 70th Vehicular Technology Conference (VTC 2009-Fall), pp. 1-5, Sept. 2009.


- chapter 14 -

Security and Privacy


- chapter 15 -

Ultra Wide Band


Texture-Based Algorithm to Separate UWB-Radar Echoes from People in Arbitrary Motion

Takuya Sakamoto, Toru Sato
Graduate School of Informatics, Kyoto University, Kyoto, Japan
Email: [email protected]

Pascal J. Aubry and Alexander G. Yarovoy
Microwave Sensing, Signals and Systems, Delft University of Technology, Delft, the Netherlands

Abstract—This study proposes a novel algorithm for separating multiple echoes using texture information of radar images. The algorithm is applied to measurement data and shown to be effective even in scenarios with motion-varying targets. The performance of the algorithm is investigated through its application to ultra-wide-band radar measurement data for two walking persons.

I. INTRODUCTION

An ultra wide-band (UWB) radar system is a promising sensing tool for indoor navigation because it provides high resolution range and Doppler information. The range information enables the tracking of people, whereas the micro-Doppler information has proven efficient in estimating the action of each person [1]-[9]. However, these conventional studies all assume a single person in the image data; an effective algorithm is needed for separating multiple targets in the scene.

One such technology is multiple hypothesis tracking (MHT) [10], which employs a Kalman filter and a multiple hypothesis technique redesigned for human tracking. Although this technique can estimate multiple trajectories of people, each trajectory is represented as a curve that does not define the actual region corresponding to the target in the radar image. Thus this method does not actually separate the received signals into multiple components so that single-target algorithms can be applied.

In this paper, we propose a new algorithm for separating echoes from multiple persons. This method analyzes the texture of the radar image in the slow time-range domain. The proposed algorithm uses a texture angle that corresponds to a target's line-of-sight speed. Next, we calculate a pixel-connection map in which each pixel is connected to another pixel that has the closest texture angle. Finally, randomly distributed complex values are numerically propagated to the adjacent connected pixels. This algorithm works autonomously even for motion-varying targets. Specifically, we demonstrate that our algorithm can successfully separate echoes from two people walking at different and time-changing speeds.

II. PROPOSED SEPARATION ALGORITHM OF ECHOES

The proposed method consists of three steps. First, we calculate the texture angle of the signal. Second, we obtain a pixel-connection map between pixels of the texture angle image. Third, we apply the connection propagation algorithm to the pixel-connection map to separate multiple echoes.

A. Texture Angle for Radar Echoes

We propose the texture angle of radar images for estimating the approximate line-of-sight velocities of targets. Unlike a spectrogram, the texture angle can estimate the Doppler velocity for each pixel of the image. In general, the echoes of different targets have different texture angles, unless the multiple targets are in exactly the same motion.

We define the texture angle of a slow time-range radar image as

θ(t, r) = tan⁻¹( v0 (∂s(t, r)/∂r) / (∂s(t, r)/∂t) ),  (1)

where s(t, r) is the signal received at a slow time t from a range r. Note that v0 is introduced to make the argument of tan⁻¹ dimensionless.
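Eq. (1) maps directly onto finite differences; the sketch below is our illustration (the grid and image are synthetic, and v0 = 1.84 m/s is the value the authors use in Section IV):

```python
import numpy as np

def texture_angle(s, dt, dr, v0=1.84):
    """Finite-difference version of Eq. (1) for an image s[t, r].

    v0 (m/s) makes the arctangent argument dimensionless."""
    ds_dt, ds_dr = np.gradient(s, dt, dr)  # partials along slow time, range
    return np.arctan2(v0 * ds_dr, ds_dt)

# On an image with constant gradients the texture angle is uniform:
T, R = np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 6, 32), indexing="ij")
img = 2.0 * T + 3.0 * R                       # ds/dt = 2, ds/dr = 3 everywhere
theta = texture_angle(img, T[1, 0] - T[0, 0], R[0, 1] - R[0, 0])
# theta ≈ arctan2(1.84 * 3, 2) at every pixel
```

For a real echo, a target moving at a constant line-of-sight speed traces a ridge of roughly constant texture angle, which is the property exploited in the next step.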

B. Pixel Connection Map based on Texture Angle

Next, we explain the procedure to obtain the pixel-connection map, which corresponds to the second step of our proposed algorithm. In this pixel-connection map, each pixel is connected to another pixel that has the closest texture angle. For this calculation, we use the texture angle of each pixel. Note that the texture angle is defined only if the intensity of the pixel is above a threshold. The following procedure applies only to pixels whose texture angle is defined. For the i-th pixel, the right-connected pixel is chosen as

Ri = arg min_j |θj − θi|,  (2)

subject to

ti + Ts > tj > ti  (3)

and

|tan⁻¹( (rj − ri) / (v0(tj − ti)) ) − θi| < δ.  (4)

Here, Ts is the window size for the search, and δ is a small angle. These conditions imply that the pixel connected to the i-th pixel is located on the right-hand side of the i-th pixel, and that the inclination of the line connecting the pair of pixels does not contradict the texture angle. Under these conditions, we choose the pixel that has a texture angle closest to that of the pixel of interest.

Similarly, we calculate the left-connected pixel Li, which is located on the left-hand side of the pixel of interest, using the same process as Eq. (2) but with a different time condition, ti − Ts < tj < ti, instead of Eq. (3).
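A brute-force reading of Eqs. (2)-(4) can be sketched as follows; the array-based representation, parameter defaults, and -1 sentinel are our assumptions:

```python
import numpy as np

def right_connections(t, r, theta, v0=1.84, Ts=1.0, delta=0.1):
    """Right-connected pixel R_i per Eqs. (2)-(4): among pixels j with
    t_i < t_j < t_i + Ts whose connecting-line inclination agrees with
    theta_i within delta, pick the one with the closest texture angle.
    Returns -1 where no candidate exists."""
    t, r, theta = (np.asarray(a, dtype=float) for a in (t, r, theta))
    R = np.full(len(t), -1, dtype=int)
    for i in range(len(t)):
        best, best_diff = -1, np.inf
        for j in range(len(t)):
            if not (t[i] < t[j] < t[i] + Ts):          # Eq. (3)
                continue
            incl = np.arctan((r[j] - r[i]) / (v0 * (t[j] - t[i])))
            if abs(incl - theta[i]) >= delta:          # Eq. (4)
                continue
            if abs(theta[j] - theta[i]) < best_diff:   # Eq. (2)
                best, best_diff = j, abs(theta[j] - theta[i])
        R[i] = best
    return R

# Three pixels on the line r = v0 * t, all with texture angle pi/4:
t = np.array([0.0, 0.5, 1.0])
print(right_connections(t, 1.84 * t, np.full(3, np.pi / 4)))  # [ 1  2 -1]
```

The left-connection map Li follows by swapping the time window to t_i − Ts < t_j < t_i, as described above.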


C. Complex Number Propagation Algorithm

Next, we introduce a method that can automatically separate multiple echoes using the pixel connection maps Ri and Li that were calculated in the second step. The pixel connection maps are not entirely accurate; pixels belonging to different targets can be erroneously connected. The algorithm proposed below benefits from statistical averaging effects to suppress such erroneous connections. This algorithm forms a new image by repetitively updating a few pixels at a time. We hereafter call this image the "connection propagation image", denoted In, where n = 0, 1, · · · is the iteration number.

First, we initialize the connection propagation image I0. A uniformly distributed random variable 0 ≤ ψ < 2π is chosen independently for each pixel to generate a unit complex number e^{jψ}; if the corresponding amplitude of the pixel is less than the threshold, a zero value is assigned to that pixel of the connection propagation image.

In each iteration, we randomly pick a pixel index i ∈ {1, 2, · · ·, Mp} from the connection propagation image, where Mp is the number of pixels in the connection propagation image. The pixels are updated if ti ≤ (1 + α)Tmax/2 as

In(ti, ri) = (In−1(ti, ri) + In−1(tRi, rRi))/2,  (5)
In(tLi, rLi) = (In−1(ti, ri) + In−1(tLi, rLi))/2,  (6)

and updated if ti > (1 − α)Tmax/2 as

In(ti, ri) = (In−1(ti, ri) + In−1(tLi, rLi))/2,  (7)
In(tRi, rRi) = (In−1(ti, ri) + In−1(tRi, rRi))/2,  (8)

where Tmax is the maximum slow time of the image.

Eqs. (5) and (6) mean that the complex numbers propagate to the left if the chosen pixel is on the left half of the connection propagation image. In contrast, the complex numbers propagate to the right with Eqs. (7) and (8) for pixels on the right half. For i satisfying (1 − α)Tmax/2 < ti ≤ (1 + α)Tmax/2, all operations Eqs. (5)-(8) are applied, which means that complex numbers propagate in both directions.

In this way, the initialized pixels around the center of the connection propagation image propagate to both sides along the connections established in the previous subsection. Echoes corresponding to different targets have relatively few connections, if any. This prevents the complex numbers from being mixed up across adjacent pixels that belong to different targets. After n = Nmax iterations, we obtain the final connection propagation image. We use the phase of the connection propagation image INmax(ti, ri) to separate the echoes.
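The update rules above can be sketched as follows; this is a simplified illustration (in-place updates, no amplitude threshold), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate(t, R, L, Tmax, alpha=0.1, iters=5000):
    """Complex-number propagation along a pixel-connection map, in the spirit
    of Eqs. (5)-(8). t holds slow time per pixel; R/L hold right/left-connected
    indices, with -1 meaning no connection."""
    I = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=len(t)))  # random unit phasors
    for _ in range(iters):
        i = rng.integers(len(t))
        if t[i] <= (1 + alpha) * Tmax / 2:          # left half: Eqs. (5)-(6)
            if R[i] >= 0:
                I[i] = (I[i] + I[R[i]]) / 2
            if L[i] >= 0:
                I[L[i]] = (I[i] + I[L[i]]) / 2
        if t[i] > (1 - alpha) * Tmax / 2:           # right half: Eqs. (7)-(8)
            if L[i] >= 0:
                I[i] = (I[i] + I[L[i]]) / 2
            if R[i] >= 0:
                I[R[i]] = (I[i] + I[R[i]]) / 2
    return np.angle(I)

# Two disconnected 3-pixel chains: phases homogenize within each chain,
# so a phase histogram separates the two "targets".
t = np.array([0.0, 1.0, 2.0, 0.0, 1.0, 2.0])
R = np.array([1, 2, -1, 4, 5, -1])
L = np.array([-1, 0, 1, -1, 3, 4])
phases = propagate(t, R, L, Tmax=2.0)
```

Because connections rarely cross between targets, the averaging drives each connected component toward a common phase while distinct components keep independent random phases.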

III. RADAR MEASUREMENT SETUP AND DATA

We measured two persons walking using a PulsOn 400 radar system manufactured by Time Domain Corporation. The frequency band is from 3.1 to 5.3 GHz, and the signal is modulated by an m-sequence. The received data are compressed with the same sequence. The transmitted power is −14.5 dBm. The transmitting and receiving antennas are dual-polarized horn antennas (model DP240 manufactured by Flann Microwave Ltd.) with 2 to 18 GHz bandwidth. The antennas are separated by 50.0 cm.

Fig. 1. Schematic of measurement scenario with antennas (Tx, Rx) and two people (Targets A and B) walking along the 0-5 m range axis.

Fig. 2. Photo of measurement scenario.

The diagram of the scenario is shown in the lower part of Fig. 1. In this measurement, two persons walked back and forth along the same line. Target A walks from a point 1.0 m away from the antennas to a point 5.0 m away, then back to the original point. Target B walks from a point 3.0 m away from the antennas to a point 1.0 m away, then back to a point 5.0 m away, and walks toward the antennas again. The range measurement repetition frequency is 200 Hz, and the sampling frequency is 16.39 GHz. The received signals are stored and processed afterwards. A photo of the measurement scenario is shown in Fig. 2.

IV. APPLICATION OF THE PROPOSED METHOD TO MEASUREMENT DATA

In this section, we apply the set of proposed algorithms to the measurement data: the texture angle, the pixel connection map, and the complex number propagation algorithm. For calculating the texture angle, v0 is set to 1.84 m/s. A 5 × 5 median filter is applied to the texture angle to eliminate artifacts before calculating a pixel connection map. For the pixel connection map, we set Ts = 1 s and δ = 0.1 rad. For the complex number propagation algorithm, we set Th = 0.03 max |s(t, r)|, α = 0.1, and Tθ = π/20.

A slow time-range radar image |s(t, r)| is shown in Fig. 3. The echoes intersect at two points, corresponding to 3 s and 10 s. Next, we calculate the texture angle of the slow time-range image (Fig. 4). Each of the two echoes has a smooth gradation in the texture angle, which means that the speeds of the targets change gradually. This characteristic is exploited by the proposed method to separate the two echoes.

The proposed pixel-connection map and complex-number propagation algorithm are applied to the texture angle image.


Fig. 3. Slow time-range radar image |s(t, r)| measured for two people walking at time-changing speeds.

Fig. 4. Texture angle θ(t, r) calculated for two people walking with time-changing speeds.

The images in Fig. 5 show the iterative steps of the proposed method, in which the angle of the complex value associated with each pixel is displayed. In the first image, each pixel has a value independent of the others. As the iteration progresses, the dominant colors in the middle of the image propagate toward both sides along the echo trajectories. Even at the intersection points, pixels located near each other are not necessarily connected in this algorithm. This is why the colors propagate only to the correctly associated pixels in the image. Finally, most of the pixels in the images are correctly segregated into two dominant colors, as seen in the final connection propagation image.

The final connection propagation image after Nmax = 30000 iterations is shown in Fig. 6. This image indicates that the two targets are clearly separated by our algorithm. A histogram of this image can be used to determine the threshold to separate the two targets. Fig. 7 shows the histogram of the image. We see two significant peaks that correspond to the two targets. In this way, we do not have to know the number of targets in advance to use the proposed method. Multiple echoes are autonomously separated into different colors in this image.

In the same way, even if there are more than two targets, the image can be separated into more than two segments by setting multiple threshold values. Developing a method to find the optimal threshold values for this purpose will be an important aspect of future work. With the proposed algorithm, the signals in the image of Fig. 3 are for the most part clearly separated, as shown in Fig. 8, although some undesired components are seen in the lower image.
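The histogram-based separation described above can be sketched as follows; the peak-and-valley search is our simplification for the two-target case, not the authors' exact procedure:

```python
import numpy as np

def two_peak_threshold(phases, n_bins=64):
    """Threshold at the emptiest bin between the two strongest, well-separated
    peaks of a phase histogram (assumes exactly two targets)."""
    h, edges = np.histogram(phases, bins=n_bins, range=(-np.pi, np.pi))
    centers = (edges[:-1] + edges[1:]) / 2
    i1 = int(np.argmax(h))
    far = np.abs(centers - centers[i1]) > np.pi / 2   # exclude the first peak's cluster
    i2 = int(np.flatnonzero(far)[np.argmax(h[far])])
    lo, hi = sorted((i1, i2))
    valley = lo + int(np.argmin(h[lo:hi + 1]))        # emptiest bin between the peaks
    return centers[valley]

# Two synthetic phase clusters around -1.5 rad and +1.5 rad:
rng = np.random.default_rng(42)
phases = np.concatenate([rng.normal(-1.5, 0.1, 500), rng.normal(1.5, 0.1, 500)])
thr = two_peak_threshold(phases)   # falls between the two clusters
```

For more than two targets, the same idea generalizes by locating all significant peaks and placing one threshold in each valley.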

Fig. 5. Iterations in segregating the radar image using the proposed method. The image at the top left is the initialized image. The image at the top right is after 2000 iterations. The other images are plotted after 4000, 6000, · · · iterations (every 2000 iterations).

Fig. 6. Connection propagation image after applying the proposed method, after 30000 iterations (in rad).

V. CONCLUSION

This paper proposes a new algorithm for separating multiple targets using a UWB radar. The proposed method calculates a texture angle to estimate an approximate line-of-sight speed of the target at each pixel of a signal image. Targets with different speeds have different textures in the slow time-range image. The texture angle was combined with the other proposed techniques, the pixel-connection map and the complex number propagation algorithm. The pixel-connection map represents pixels connected by similar texture angles. A pair of pixels is chosen such that their relative position is in accord with the corresponding pixel of the texture angle image. Finally, randomly distributed complex values are numerically propagated to adjacent connected pixels. This algorithm does


Fig. 7. Histogram of the connection propagation image in Fig. 6 (phase [rad] vs. frequency [pixels]).

Fig. 8. Separated echoes s1(t, r) and s2(t, r) using the proposed complex number propagation algorithm.

not require prior knowledge of the number of targets. The randomly assigned complex numbers automatically propagate and merge into multiple segments. We have demonstrated that the proposed algorithm can successfully separate two motion-varying targets from echoes in a measurement with two walking persons.

REFERENCES

[1] Y. Kim and H. Ling, "Human activity classification based on micro-Doppler signatures using a support vector machine," IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 5, pp. 1328–1337, May 2009.

[2] A. Sona, R. Ricci, and G. Giorgi, "A measurement approach based on micro-Doppler maps for human motion analysis and detection," Proc. IEEE International Instrumentation and Measurement Technology Conference, pp. 354–359, May 2012.

[3] D. Tahmoush and J. Silvious, "Simplified model of dismount micro-Doppler and RCS," Proc. IEEE Radar Conference, pp. 31–34, May 2010.

[4] P. Molchanov, J. Astola, and A. Totsky, "Frequency and phase coupling phenomenon in micro-Doppler radar signature of walking human," Proc. 19th International Radar Symposium, pp. 49–53, May 2012.

[5] J. Li, Z. Zeng, J. Sun, and F. Liu, "Through-wall detection of human being's movement by UWB radar," IEEE Geoscience and Remote Sensing Letters, vol. 9, no. 6, pp. 1079–1083, Nov. 2012.

[6] C.-P. Lai, R. M. Narayanan, Q. Ruan, and A. Davydov, "Hilbert-Huang transform analysis of human activities using through-wall noise and noise-like radar," IET Radar Sonar Navig., vol. 2, no. 4, pp. 244-255, 2008.

[7] A. G. Yarovoy, L. P. Ligthart, J. Matuzas, and B. Levitas, "UWB radar for human being detection," IEEE A&E Systems Magazine, pp. 36–40, May 2008.

[8] K. Saho, T. Sakamoto, T. Sato, K. Inoue, and T. Fukuda, "Pedestrian classification based on radial velocity features of UWB Doppler radar images," Proc. 2012 International Symposium on Antennas and Propagation, pp. 90–93, 2012.

[9] Y. Wang and A. E. Fathy, "Micro-Doppler signatures for intelligent human gait recognition using a UWB impulse radar," Proc. pp. 2103–2106, 2011.

[10] S.-H. Chang, R. Sharan, M. Wolf, N. Mitsumoto, and J. W. Burdick, "An MHT algorithm for UWB radar-based multiple human target tracking," Proc. IEEE International Conference on Ultra-Wideband, pp. 459–463, Sep. 2009.


Experimental Evaluation of UWB Real Time Positioning for Obstructed and NLOS Scenarios

K. M. Al-Qahtani, A. H. Muqaibel, U. M. Johar, M. A. Landolsi, A. S. Al-Ahmari
Department of Electrical Engineering, King Fahd University of Petroleum & Minerals, Dhahran, Saudi Arabia
{alqahtani, muqaibel, umjohar, andalusi, asahmari}@kfupm.edu.sa

Abstract—High resolution wireless positioning is attracting increased interest because of its numerous foreseen applications. Ultra Wideband (UWB) technology promises high positioning resolution (sub-centimeter). The positioning accuracy can, however, be hindered by obstacles and Non-Line-of-Sight (NLOS) propagation. A hardware testbed was used to perform an NLOS and obstructed measurement campaign. A large number of measurements was collected and analyzed. The effects of large-scale and small-scale fading on the system performance are studied. The impact of blocking sensors with wood or aluminum plates is examined. Also, the effects of covering tags with different materials, like wood, glass and a steel bowl, are considered. The results are useful for applications like channel modeling, link budget analysis, and through-wall imaging.

Keywords—Wireless positioning; Ultra Wideband (UWB); Real Time Positioning; Non-Line-of-Sight (NLOS) Propagation

I. INTRODUCTION

Ultra wideband (UWB) systems are generally defined as systems which exhibit a transient impulse response. A UWB transmitter is defined as an intentional radiator that, at any point in time, has a fractional bandwidth greater than or equal to 0.2 or occupies a bandwidth greater than 500 MHz regardless of the fractional bandwidth [1-2]. Since these signals have very large bandwidths compared to conventional narrowband/wideband signals, they have narrow time-domain pulses, which offer very high positioning accuracy.
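That two-pronged definition (fractional bandwidth ≥ 0.2, or more than 500 MHz of absolute bandwidth) is easy to state as a check; a small sketch (the function name and example bands are ours):

```python
def is_uwb(f_low_hz: float, f_high_hz: float) -> bool:
    """UWB test per the definition above: fractional bandwidth >= 0.2
    or absolute bandwidth greater than 500 MHz."""
    bw = f_high_hz - f_low_hz
    fractional = 2.0 * bw / (f_high_hz + f_low_hz)
    return fractional >= 0.2 or bw > 500e6

print(is_uwb(3.1e9, 5.3e9))    # True  (2.2 GHz bandwidth, fractional BW ~0.52)
print(is_uwb(2.4e9, 2.48e9))   # False (80 MHz, fractional BW ~0.03)
```

Note that the 500 MHz clause admits high-center-frequency signals whose fractional bandwidth is small but whose absolute bandwidth is still large.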

UWB systems are excellent candidates for high-resolution positioning and short-distance, high-data-rate wireless applications. They have a number of features such as (i) low complexity and cost; (ii) a noise-like signal spectrum; (iii) resistance to severe multipath and jamming; and (iv) very good time-domain resolution allowing for location and tracking applications. These features are attractive for consumer communication systems.

UWB technology supports the integration of communication and radar applications such as imaging and positioning [3-5]. UWB radars have been attracting increased interest after the proposal by Scholtz [6] to use impulse UWB radio for personal wireless communication applications. Short-range wireless sensor networks (WSNs) are an emerging application of UWB technology [7-8]. The applications of UWB WSNs include inventory control, tracking of sports players, medical applications, military applications, and search-and-rescue applications.

A major challenge for UWB positioning in indoor applications is performance under obstructed and Non-Line-of-Sight (NLOS) scenarios. To evaluate the degree of degradation, a hardware testbed is used. A large number of measurements was collected and analyzed. The effects of large-scale and small-scale fading on the system performance are studied. The impact of blocking sensors with wood or aluminum plates is examined. Also, the effects of covering tags with different materials such as wood, glass and a steel bowl are considered.

The rest of the paper is organized as follows. The details of the research testbed are presented in Section II. After that, the experiment procedure and the results are presented in Section III. The paper concludes with some remarks.

II. SYSTEM MODEL

The installed positioning system is based on Ubisense®. This system delivers 15 cm 3D positional accuracy in real time. The Ubisense® real-time location system (RTLS) can be divided into two main parts: the sensor network hardware and the location engine software platform.

1) The RTLS Hardware Part

The hardware part of the system consists of tags and sensors. Tags are the portable devices which transmit UWB signals that will be detected by sensors. They are the moving parts whose positions are to be estimated by the system. Sensors receive the signals transmitted from the tags. They are organized in cooperating sets called location engine cells; each one has a single master sensor and a number of slave sensors. In our research, we used four sensors; three are assigned as slave sensors while the fourth one is the master sensor. The master sensor is the reference in time difference of arrival (TDOA) calculations. Sensors are connected by standard 100BASE-TX Ethernet cables to the master and to the platform server. Using a switch that supports Power over Ethernet (PoE), the sensors get power along with the networking cable. Figure 1 shows the connections for a four-sensor location engine cell and the platform server. Note that the sensors require a Dynamic Host Configuration Protocol (DHCP) server to assign their network configuration.
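The TDOA principle described above can be sketched numerically. The snippet below is not the Ubisense® location engine (whose algorithm is proprietary); it is a generic Gauss-Newton fit of a tag's (x, y) to range differences, with hypothetical corner-mounted sensor coordinates and the tag height assumed known:

```python
import numpy as np

SENSORS = np.array([                       # hypothetical sensor positions (x, y, z in m)
    [0.0, 0.0, 2.5],                       # master (TDOA reference)
    [10.8, 0.0, 2.5], [10.8, 8.7, 2.5], [0.0, 8.7, 2.5],
])
TAG_Z = 1.43                               # tag height, assumed known

def ranges(xy):
    """Distances from a tag at (x, y, TAG_Z) to every sensor."""
    p = np.array([xy[0], xy[1], TAG_Z])
    return np.linalg.norm(SENSORS - p, axis=1)

def locate(dd, xy0, iters=30):
    """Gauss-Newton fit of the tag (x, y) to TDOA range differences dd."""
    xy = np.asarray(xy0, float)
    for _ in range(iters):
        d = ranges(xy)
        p = np.array([xy[0], xy[1], TAG_Z])
        # Jacobian of each range difference (d_i - d_0) w.r.t. (x, y), i = 1..3
        J = ((p - SENSORS[1:]) / d[1:, None] - (p - SENSORS[0]) / d[0])[:, :2]
        r = (d[1:] - d[0]) - dd
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        xy = xy + step
    return xy

true_xy = np.array([4.0, 3.0])
dd = ranges(true_xy)[1:] - ranges(true_xy)[0]     # ideal range differences (no noise)
print(np.round(locate(dd, xy0=[5.0, 4.0]), 3))    # → [4. 3.]
```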

978-1-4673-1954-6/12/$31.00 ©2012 IEEE


Figure 1. Connections for a four-sensor location engine cell and the platform server

2) The RTLS Location Engine Software Platform

The Location Engine Software Platform (LESP) is the software used to collect real-time data from the sensors. The LESP has its own built-in algorithms that estimate the location of the tag.

Calibration of the RTLS involves calibration of the background noise, the orientation of the sensors, and the cable offsets. The noise level heavily depends on the environment. Calibrating the sensors requires fixing their three angles: elevation, azimuth and tilt (rotation). Prior to calibration, it is important to choose the origin and the coordinate system of the environment. The origin is defined to be approximately at the center of the lab. The right-hand rule is followed in determining the x, y and z axes.

Figure 2 shows part of the lab environment. After successfully installing the system, the location of the tag can be determined using the software.

Figure 2. Lab Environment

III. RESULTS AND DISCUSSION

Positioning experiments were performed covering the whole laboratory room (Building 59, Room 0020, King Fahd University of Petroleum & Minerals (KFUPM)) where the system was installed. The room has a length of 10.8 m and a width of about 8.7 m. There are five metallic tables and a column in the middle of the room. The dimensions are shown in Figure 3. In the same figure, the sensors are shown; the y-axis is labeled with numbers while the x-axis is labeled with letters for ease of referencing. Measurements were conducted to examine both large-scale and small-scale effects. Data collection and measurement execution go through different steps. First, the cell where the system is installed is identified. This step can be considered the planning phase, and it includes different stages, starting with identifying the origin of the cell to be used as the reference for all test points. The origin should be selected to have LOS to all sensors. Then, from the origin of the cell, the positive directions of the x-, y- and z-axes are determined following the right-hand rule. Finally, the system is calibrated so that it can measure tag locations accurately.

A. Large Scale Effects

The objective of the large-scale analysis is to study the impact of large movements on the positioning accuracy. In general, measurements should be averaged to remove the impact of multipath and other small-scale effects. UWB signals are largely immune to multipath, as will be illustrated when studying the small-scale effects.

Based on the calibrated reference point, a grid was sketched as shown in Figure 3. Data was collected for all possible points, separated by a distance of 1.2 m in both the x and y directions. This should result in 56 points around the whole cell. However, due to blocking objects, only 49 points were taken. Missing points were considered NLOS scenarios.

Figure 3. Points distribution in the cell (top view)

The error distribution in the x-component is shown in Figure 4, where the x-axis corresponds to the numbers 1-7 and the letters a-h correspond to the y-axis, as depicted in Figure 3. Areas in blue indicate the least error, whereas areas in red indicate the highest error. Errors are given in cm, and the color bar to the right of Figure 4 indicates the range of the errors and their corresponding colors. Most of the points were positioned with an error of less than 15 cm. Five points have errors exceeding 30 cm. The maximum positioning error was found to be 50 cm, which could be due to an obstructed or NLOS scenario; it occurs at the border of the cell.
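Error maps like Figure 4 can be reproduced from logged true and estimated positions; the 7 × 8 grid shape follows the point labeling above, while the noise values below are synthetic stand-ins for the measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
# 7 x 8 grid of test points (rows: numbers 1-7, cols: letters a-h), 1.2 m pitch
true_xy = np.array([[i * 1.2, j * 1.2] for i in range(7) for j in range(8)], float)
est_xy = true_xy + rng.normal(scale=0.08, size=true_xy.shape)   # synthetic 8 cm noise

err_x_cm = np.abs(est_xy[:, 0] - true_xy[:, 0]).reshape(7, 8) * 100
print(f"{(err_x_cm < 15).sum()} of {err_x_cm.size} points below 15 cm")
print(f"max x-error: {err_x_cm.max():.1f} cm")
```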

Figure 5 shows the error distribution of the large-scale data in the y-component. Most of the values are less than 15 cm. The error in the y-component is smaller than in the x-component, as can be seen by comparing Figure 4 and Figure 5, where the maximum reached about 30 cm.

Figure 6 shows the error distribution in the radial component, where the radial component is calculated as r = √(x² + y² + z²). The error in the radial component is composed of the x, y, and z components and is greater than any of them. The areas which suffer more error are those at the border of the cell or those close to the metallic tables. This is expected since, at the border, the tag is not seen properly by all the sensors, while the metallic tables act as reflectors.
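A minimal helper for the radial error just defined (the example coordinates are made up for illustration):

```python
import numpy as np

def radial_error(true_pos, est_pos):
    """Radial (3D) positioning error r = sqrt(ex^2 + ey^2 + ez^2)."""
    e = np.asarray(est_pos, float) - np.asarray(true_pos, float)
    return np.linalg.norm(e, axis=-1)

# example: 10 cm x-error, 20 cm y-error, 5 cm z-error
print(round(float(radial_error([0, 0, 1.43], [0.10, 0.20, 1.48])), 3))  # → 0.229
```

As the paper notes, this value always dominates each individual axis error.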

Figure 4. Error distribution in x-axis

Figure 5. Error distribution in y-axis

Figure 6. Error distribution in r-component

B. Small Scale Effects and Multipath Immunity

The small-scale effect was tested on an area of about 0.607 m² near the origin, as shown in Figure 3. This area was divided into 25 different points separated by a distance of 0.152 m in both the x and y directions. To study the small-scale effect, the choice of the spacing should be related to the wavelength of the carrier frequency; in the case of UWB, the center frequency may be used instead. The height (z-direction) was kept fixed at 1.43 m, which is close to the height of mobile phones during calls.
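As a quick sanity check of the spacing-versus-wavelength argument: the paper does not state the center frequency, so 7 GHz (roughly the middle of the 6-8 GHz band used by such UWB systems) is assumed here:

```python
C = 299_792_458.0          # speed of light, m/s
f_center = 7e9             # assumed UWB center frequency, Hz
wavelength = C / f_center
print(f"wavelength: {wavelength * 100:.2f} cm")                 # → 4.28 cm
print(f"grid spacing / wavelength: {0.152 / wavelength:.1f}")   # → 3.5
```

The 0.152 m pitch thus spans several wavelengths at the assumed center frequency, which is adequate for sampling small-scale variation.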

Figure 7 shows the small-scale points and their estimated locations. Every two points with the same color and shape correspond to an exact point and its estimate; the darker point represents the exact position. Some of the points are estimated with errors of less than 10 cm. A few points are estimated with a larger error because they are not seen by sensor 1 due to the concrete pillar in the middle of the room; the signal is reflected, which results in a larger estimation error. Most of the points suffer from a fixed bias, which could possibly reflect a calibration error. We have observed that UWB positioning is immune to multipath and small-scale effects; the points that demonstrated different errors are due to pillar obstruction.


Figure 7. Exact and estimated location of the Tag (same shape & color are for one point)

In the following sections, the system performance is tested for different NLOS and obstructed scenarios, which are the major problems in high-resolution positioning.

C. Obstructed Sensors

The system performance was tested when one sensor is blocked by a wooden plate or an aluminum plate. The tag was placed in a location where a LOS reading with no blocking had already been taken, so that the blocking effects could be investigated. This point, taken as (0, 0, 1.43) m, was kept fixed while each of the four sensors was blocked in turn; only one sensor is blocked at a time. Blocking any sensor with the wooden plate has less effect compared with aluminum. The impact of blocking sensor 2 with wood is depicted in Figure 8. UWB signals can penetrate wood with reasonable fidelity. Aluminum resulted in larger errors.


Figure 8. The effects of blocking sensor 2 with wood plate

D. NLOS and Covered Tags

The system performance was tested when the tag is covered by different types of materials: a wooden box and bowls made of steel or glass (see Figure 9). The wooden box has dimensions of 16 cm in length and width and 20 cm in depth, with a wall thickness of 2 cm. The steel bowl has a top diameter of 16 cm and a depth of 7 cm, with a thickness of 2 mm. The glass bowl has a top diameter of 12.5 cm and a depth of 5 cm, and its thickness is about 3 mm.

Figure 9. Different materials used to cover the tag

In the following set of figures, the effects of covering the tag located at the center of the room are presented. The point (d4) is chosen at coordinate (0, 0, 1.43) m, as was shown in Figure 3; this point has LOS to all sensors. Covering the tag with the wooden box shifts the mean of the error by almost 20 cm. When the tag is covered by the glass bowl, the mean of the error is shifted to a higher value by almost 40 cm. Covering the tag with the steel bowl affects the error range more: the mean of the error is shifted by almost 60 cm, as shown in Figure 10.

When the tag is covered by the steel and glass bowls together at the same time, the error is shifted up by about 80 cm (see Figure 11). This is due to error in all the components, mainly the x and y components.
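The PDFs in Figures 10 and 11 compare range errors with and without covering; the mean shift can be estimated from two sets of logged range errors. The samples below are synthetic stand-ins for the measurements:

```python
import numpy as np

rng = np.random.default_rng(1)
normal = rng.normal(0.10, 0.03, 500)   # range errors, uncovered LOS tag (m) - synthetic
steel = rng.normal(0.70, 0.08, 500)    # range errors, steel-bowl-covered tag (m) - synthetic

shift = steel.mean() - normal.mean()
print(f"mean error shift: {shift * 100:.0f} cm")   # shift of the mean range error
```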

Figure 10. The effect of covering the tag with the steel bowl (PDF of the range error in meters)

Figure 11. The effect of covering the tag with the steel and glass bowls (PDF of the range error in meters)

IV. CONCLUSION

The performance of a UWB real-time location system was evaluated under many NLOS and obstructed circumstances. The error distribution throughout the room was examined. Data was collected and analyzed for large-scale and small-scale effects and for different scenarios such as NLOS, sensor blocking and tag covering. The error increases as one or more sensors are blocked. The wood plate attenuated the signal, but its effect is not as severe as that of the aluminum plate, which almost eliminates the contribution of the blocked sensor. The steel bowl had the greatest effect, followed by the glass bowl; the wooden box had the least effect. The examined scenarios demonstrated immunity to multipath propagation.

ACKNOWLEDGMENT

The author(s) acknowledge the support provided by King Abdulaziz City for Science and Technology (KACST) through the National Science & Technology Unit at King Fahd University of Petroleum & Minerals (KFUPM) for funding this work under project # 08-ELE44-4-1 as part of the National Science, Technology and Innovation Plan.

REFERENCES

[1] Federal Communications Commission, “First Report and Order, Revision of Part 15 of the Commission’s Rules Regarding Ultra Wideband Transmission Systems,” FCC 02-48, April 2002.

[2] M. G. D. Benedetto, T. Kaiser, A. F. Molisch, I. Oppermann, C. Politano, and D. Porcino, “UWB Communication Systems: A Comprehensive Overview,” EURASIP Book Series on Signal Processing and Communications, vol. 5, 2006.

[3] R. J. Fontana and S. J. Gunderson, “Ultra-Wideband Precision Asset Location System,” Proc. of IEEE Conference on Ultra Wideband Systems and Technologies, 21-23 May 2002, pp. 147-150.

[4] N. S. Correal, S. Kyperountas, Q. Shi, and M. Welborn, “An UWB Relative Location System,” Proc. of IEEE Conference on Ultra Wideband Systems and Technologies, 16-19 Nov. 2003, pp. 394-397.

[5] W. Chung and D. Ha, “An Accurate Ultra Wideband (UWB) Ranging for Precision Asset Location,” Proc. of IEEE Conference on Ultra Wideband Systems and Technologies, 16-19 Nov. 2003, pp. 389-393.

[6] R. A. Scholtz, “Multiple Access with Time-Hopping Impulse Modulation,” MILCOM ’93, vol. 2, pp. 447-450, 1993.

[7] Z. Sahinoglu, S. Gezici, and I. Guvenc, Ultra-Wideband Positioning Systems: Theoretical Limits, Ranging Algorithms, and Protocols. Cambridge University Press, 2008.

[8] S. Gezici, Z. Tian, G. B. Giannakis, H. Kobayashi, A. F. Molisch, H. V. Poor, and Z. Sahinoglu, “Localization via ultra-wideband radios: A look at positioning aspects for future sensor networks,” IEEE Signal Processing Mag., vol. 22, no. 4, pp. 70-84, July 2005.



- chapter 16 -

RFID


Device-Free 3-Dimensional User Recognition utilizing passive RFID walls

Benjamin Wagner, Dirk Timmermann
Institute of Applied Microelectronics and Computer Engineering
University of Rostock, Rostock, Germany
[email protected]

Abstract—User localization information is an important data source for ubiquitous assistance in smart environments and other location-aware systems. It is a major data source for superimposed intention recognition systems. In typical smart environment scenarios, such as ambient assisted living, there is a need for non-invasive, wireless, privacy-preserving technologies. Device-free localization (DFL) approaches provide these advantages with no need for user-attached hardware.

A common problem of DFL technologies is the distinction and identification of users, which is important for multi-user localization and tracking. By expanding the existing approaches with a 3rd dimension, it becomes possible to estimate user heights and body shapes, depending on the system's resolution. For that purpose we place pRFID transponders on the room's walls, giving us the possibility to generate a 3-dimensional wireless communication grid within the localization area. A person moving within this area typically affects the RFID communication, allowing us to use RSS-based algorithms.

In this work we show the basic approaches and define system- and model-related adaptations. We conduct first experiments in an indoor room DFL scenario for proof of concept and validation. We show that it is possible to recognize the height of a user with reasonable precision for future estimation approaches.

Keywords- DFL, RFID, Indoor Navigation, Smart Environments, Pervasive Computing, Wireless

I. INTRODUCTION

Recognizing a user within a smart environment is a big challenge in today's ubiquitous smart technology research. Estimating the position, user recognition and intention recognition are the main steps for generating intelligent assistance. The sensors which gather the information need to be invisible and privacy preserving. For that purpose, much work has been done in the field of device-free localization (DFL), utilizing wireless radio devices which are installed in the room, leaving the user without any attached hardware.

In our recent work we introduced an approach for radio-based DFL by replacing most of the active radio beacons used in similarly situated approaches with completely passive Radio Frequency Identification (RFID) transponders. That combines the advantages of energy efficiency, because the transponders do not need batteries, and very easy deployment. RFID transponders can very easily be placed, e.g., under the carpet, on furniture or behind the wallpaper.

Another big advantage is the cost: RFID transponders can be purchased very cheaply, as low as 0.20 € per item. Based on that, multiple localization algorithms were proposed in the past, providing positioning results with an error as low as 0.3 m in 2D scenarios [1–3].

The available approaches only calculate 2D results. But for superimposed intention recognition systems it is also important to know a user's vertical position, i.e., whether the user is lying on the ground, sitting on a chair or standing on the ground. Furthermore, the height of a user could give information about his identity or could help to separate users in multi-user scenarios.

In this paper we propose the use of 3-dimensional measurement setups (RFID walls) and adaptations of existing algorithms. We introduce the related work in Section II and explain our methods in Section III. The setup and the results of a first experimental validation are shown in Section IV, followed by our conclusions.

II. RELATED WORK

A. Passive RFID Positioning

Dealing with the problem of energy efficiency and deployment complexity, we invented an approach utilizing ground-mounted passive Radio Frequency Identification (RFID) tags for device-free radio-based recognition [1], [2], [4]. This work has shown that it is possible to calculate 2D user positions with remarkable accuracy [4] and low computational complexity.

Using typical RFID hardware provides fewer signal processing possibilities than typically used wireless sensor nodes. For this reason our measurements regard the Received Signal Strength Indicator (RSSI), which can be regarded as a linear transformation of the original signal strength value.

As shown in [5], the presence of the human body strongly affects the communication between the RFID reader hardware and the passive transponders. This can be modeled as [5]:

ΔP = A · cos((2π/λ) · Δd + φ) + B    (1)

with ΔP as the estimated RSSI change, wavelength λ, LOS/NLOS path difference Δd and phase shift φ. The parameters A, B are subject to the multipath fading properties of the experimental environment [6].
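Model (1) can be explored numerically; the sinusoidal form below is an assumption consistent with the description (an RSSI change oscillating with the path difference over the wavelength), and the parameter values are arbitrary placeholders that would have to be re-fitted per environment:

```python
import numpy as np

def delta_p(path_diff, wavelength, A=1.5, B=-0.5, phi=0.0):
    """Assumed sinusoidal form of model (1): estimated RSSI change (dB) over the
    LOS/NLOS path difference; A, B, phi are placeholders fitted per setup."""
    return A * np.cos(2 * np.pi * path_diff / wavelength + phi) + B

lam = 299_792_458.0 / 868e6       # ~0.345 m wavelength at 868 MHz
d = np.linspace(0.0, 3.0, 7)      # excess path lengths (m), cf. the range of Fig. 1
print(np.round(delta_p(d, lam), 2))
```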


Therefore the model needs to be re-adjusted for every new setup.

The path difference between the Line-of-Sight (LOS) and the Non-Line-of-Sight (NLOS) path determines the relative position of a scattering user with respect to a specific communication link. The influence is shown in Fig. 1.

Figure 1. Theoretical model regression and experimental data points from a multiple-transponder scenario

Based on this model, different methods for the localization of users were investigated in the past:

- Database-based localization: minimizing a log-likelihood function of the difference between an expected change of signal strength and the measurement. The results provide a maximum RMSE of 0.75 m [2].

- Geometric localization based on Linear Least Squares and intersection points applied to the measured signal strength differences. The results provide lower accuracy, at approximately 1.61 m, while having a lower computational complexity [2].

- Training-based approaches, e.g. Multi-Layer Perceptrons (MLP) [5], [6]: a three-layered MLP takes the RSSI differences into its input layer and provides a 2D user position at its output layer. Evaluating different training functions and layer transfer functions, it is possible to achieve accuracies as low as 0.01 m MSE in a ground-mounted pRFID scenario.

B. Passive RFID Tomography

In our recent work [4], wireless sensor network based radio tomographic imaging [7], [8] and RFID DFL were combined. The setup consists of waist-high mounted passive transponders placed around the discretized measurement area. The RFID antennas are placed directly behind the transponder lines to guarantee a maximum power transfer.

The imaging result is calculated using the model of Wilson et al. [8]:

y = W x + n    (2)

with y as the matrix of RSS differences in dB, W as the pre-calculated weighting matrix for every pixel-link combination, n as a zero-mean Gaussian noise vector and x as the matrix of pixel attenuations, generating a tomographic picture of the measurement area.

The algorithm can locate a human with a mean location error as low as 0.3 m. In [3] we propose multiple improvements for performance and online operation.

III. METHODS

For adding the height as a 3rd result dimension, both the measurement setup and the model need to be adapted.

A. Measurements

For the measurements we built "RFID walls" with a wall-mounted RFID transponder grid. Discretizing the height coordinate, we can define 2-dimensional layers within the squared measurement area. As mentioned in [4], these systems have a sender-receiver relation of

N_senders ≫ N_receivers    (3)

because an RFID field typically contains a high number of transponders (regarded as senders) and a relatively low number of receivers (RFID antennas).

Figure 3. Principle link structure in sectional view (legend: transponder, voxel, communication link, RFID antenna)

Because simply integrating more antennas would increase the system's costs and reduce the advantage of cost efficiency, we use just one receiver layer with 4 antennas situated at every mid-wall.

The 3D measurement area is discretized into voxels, generating a measurement picture for every height layer.

In contrast to the 2D approach of [4], the transponder layers and the antenna layer are spatially separated. This has a great effect on the model, especially on the voxel-communication-link allocation and the weight matrix calculation. Figure 3 shows the principle problem: the link density above and under the antenna layer declines, which affects the weighting matrix.

B. Adapted Model

The experimental area is defined by an image vector consisting of N pixels. When a person affects specific links in that network (see Fig. 1), the attenuation is regarded as the sum of the attenuations that each pixel contributes [4].

The attenuation is measured as the received signal strength for every transponder-antenna combination. Due to the RFID protocol [9], it is difficult to set a stable power value for every transponder. Therefore a 2-phase measurement was conducted: a calibration phase with no user present, and a measurement step with a scatterer in the field. The measurement vector is built by

Δy = y_measurement − y_calibration    (4)

with signal strength y and RSSI difference vector Δy.
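The two-phase measurement behind (4) reduces to a per-link subtraction of mean RSSI values; a minimal sketch with synthetic reads (the per-link drops are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n_links = 5                                          # toy number of transponder-antenna links
true_drop = np.array([0.0, -4.2, 1.5, 0.0, -2.8])    # synthetic per-link RSSI change (dB)

calib = rng.normal(-65.0, 1.0, (80, n_links))               # phase 1: empty room, 80 reads/link
meas = rng.normal(-65.0 + true_drop, 1.0, (80, n_links))    # phase 2: scatterer in the field

delta_y = meas.mean(axis=0) - calib.mean(axis=0)     # model (4), one value per link
print(np.round(delta_y, 1))
```

Averaging many reads per link is what makes the subtraction meaningful despite the reader's unstable per-read power.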

The most important part of the RTI method is the image reconstruction, since the problem is ill-posed. The authors handle this by using regularization techniques. The resulting image estimation formula can be written as [1]:

x̂ = (Wᵀ W + C_x⁻¹ σ_N²)⁻¹ Wᵀ Δy    (5)

In this formula, C_x denotes a covariance matrix providing information about the dependence of neighboring pixels due to a zero-mean Gaussian random field [10]:

C_x[i, j] = σ_x² e^(−d/δ_c)    (6)

with the voxel-voxel distance d and a correlation term δ_c determining the impact of the dependence of neighboring pixels.
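Estimator (5) with covariance (6) is a regularized linear inversion; the toy sketch below uses a random stand-in for the geometric weighting matrix W and a 1D pixel line, so it illustrates the algebra rather than the real setup:

```python
import numpy as np

rng = np.random.default_rng(42)
n_links, n_pix = 60, 25

W = rng.normal(size=(n_links, n_pix))    # toy weighting matrix (the real W comes from link geometry)
x_true = np.zeros(n_pix)
x_true[12] = 1.0                         # a single attenuating pixel
y = W @ x_true + rng.normal(0.0, 0.05, n_links)   # forward model (2): y = Wx + n

# covariance model (6): C_x[i, j] = sigma_x^2 * exp(-d_ij / delta_c)
d = np.abs(np.subtract.outer(np.arange(n_pix), np.arange(n_pix)))
C_x = np.exp(-d / 3.0)

sigma_n2 = 0.05 ** 2
x_hat = np.linalg.solve(W.T @ W + sigma_n2 * np.linalg.inv(C_x), W.T @ y)  # estimator (5)
print(int(np.argmax(x_hat)))             # index of the strongest reconstructed pixel
```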

We use a weighting model regarding only the backward link between transponder and antenna, because due to the experimental scenario a user can only affect this path; the forward link is regarded only as the sending power supply. Following the model of [11], the weight of voxel v for link l can be described as

w_l(v) = (1/√(d_l)) · { 1, if d_T(v) + d_R(v) < d_l + λ_e ; 0, otherwise }

for the backward link, where d_l is the Euclidean distance along link l between transmitting reader antenna, receiving reader antenna and transponder, and d_T(v), d_R(v) are the distances from voxel v to the link endpoints. The ellipse width surrounding each link is variable through λ_e.

Dealing with the problem of inter-layer variation mentioned in III.A, we define the main imaging scale by

S = [min_l(x̂_l), max_l(x̂_l)]    (7)

over all layers, with constant step size s.

C. Error Model

Most authors dealing with user recognition in the 2D area assume a cylindrical human model with a fixed radius [5], [11]. This is not suitable for the 3D case, because the human body has a different shape with different reflection properties at every height layer. Typically, the body center has the greatest effect on a horizontal communication link, while the influence of the user's head or legs is smaller.

Therefore we define an extended ellipsoid of rotation with a height-dependent radius r(z) as a 3D human model. The reference image can be described as:

x_ref(v) = { 1, if ‖v − c‖ ≤ r(z_v) ; 0, otherwise }    (8)

with c the center of the reference object and v every voxel. Assuming this model, we can define the picture-dependent mean squared error for comparison purposes as [11]

MSE = (1/N_v) Σ_v |x_ref(v) − x̂(v)|²    (9)

with the calculated image x̂ and the number of all voxels N_v.
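Reference image (8) and error measure (9) can be sketched as follows; the grid dimensions and the height-dependent radius r(z) are illustrative assumptions, not values from the paper:

```python
import numpy as np

# voxel grid: 10 x 10 x 8 voxels over a 2.7 m x 2.7 m x 2.2 m volume (assumed)
xs = np.linspace(0.0, 2.7, 10)
zs = np.linspace(0.0, 2.2, 8)
X, Y, Z = np.meshgrid(xs, xs, zs, indexing="ij")

def r_of_z(z, height=1.8):
    """Assumed height-dependent radius: widest at the torso, zero above the head."""
    return np.where(z < height, 0.25 * np.sin(np.pi * z / height) + 0.05, 0.0)

cx, cy = 1.35, 1.35                     # user standing at the area center
x_ref = (np.hypot(X - cx, Y - cy) <= r_of_z(Z)).astype(float)   # reference image (8)

x_hat = x_ref + np.random.default_rng(0).normal(0, 0.1, x_ref.shape)  # fake reconstruction
mse = np.mean((x_ref - x_hat) ** 2)     # error measure (9)
print(round(float(mse), 3))             # ≈ 0.01 for 0.1-sigma noise
```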

IV. EXPERIMENTAL VALIDATION

A. Experimental Setup

For the experimental validation of our approach we used a passive bistatic UHF RFID system from Alien Technology working in the ISM 868 MHz frequency band. We connected four linearly polarized antennas (G = 6 dB) to the ALR-8800 reader. We did not use circularly polarized antennas, because they have a higher attenuation and all transponders are placed in the same orientation; hence all tags are readable with the same quality.

We placed 40 transponders at each of the 4 walls at the layer heights of

h = [1.0, 1.3, 1.6, 1.9] meters,

resulting in a total of 160 transponders. Each wall has a length of 2.7 meters, and in the center of the measurement area we define 13 possible user positions. Fig. 4 shows profile and top view of the setup.

The antennas are situated 1.0 m behind the wall to guarantee the best possible energy transmission. This is done due to the specific antenna lobes.

Figure 4. Top and sectional view of the measurement setup (walls of 2.70 m length; antennas 1.00 m behind the walls; transponder layers at 1.00 m, 1.30 m, 1.60 m and 1.90 m height; marked user positions)

The RFID reader hardware is connected to a workstation where a Java program fetches the data. The calculation of the described approach is done in a post-processing step with the help of Matlab®.

B. Procedure

Due to the high number of transponders we have to limit the data measurements. Therefore we define major operating antenna sequences (AS) as follows:

[ ] [ ] [ ] [ ]

with the annotation [Powering Antenna; Receiving Antenna].

We did our measurements for every transponder-AS combination with a minimum of 80 data samples to get a reliable mean signal strength value.

For the experimental validation of 3D user recognition, we placed a test person on each of the 13 defined test positions in three ways:

1. User is sitting on a chair
2. User is standing on the ground
3. User is standing on a chair

C. Results

Fig. 5 depicts a sample result for the test person located in the middle of the room.

Figure 5. Sample results by layer - center

It can be stated that the height of the user within the testbed could be recognized with reasonable accuracy. Fig. 6 depicts a sample result for the test person located in the upper left edge of the room.

It has to be said that the technique has some problems with the positioning precision at the field edges, because the density of communication links there is even lower, but the height information is still recognizable very clearly.

V. CONCLUSION & FUTURE WORK

In this work we presented a proof of concept for 3D user recognition with passive RFID walls. To achieve this goal we described adaptations of the mathematical model and the measurement system. Within the model, the adaptive bistatic weighting matrix and the covariance matrix needed to be adapted for a 3D scenario. Furthermore, we defined a 3D error model, whose full application would go beyond the scope of this work.

In future work, estimation algorithms should be developed and applied to the results. With them, the height of a user within a room could be estimated, which could be a valuable data source for user distinction in multi-user scenarios.

REFERENCES

[1] B. Wagner and D. Timmermann, “Device-Free User Localization Utilizing Artificial Neural Networks and Passive RFID,” IEEE International Conference on Ubiquitous Positioning, Indoor Navigation and Location Based Services (UPINLBS), 2012.

[2] D. Lieckfeldt, J. You, and D. Timmermann, “Exploiting RF-Scatter: Human Localization with Bistatic Passive UHF RFID-Systems,” IEEE International Conference on Wireless and Mobile Computing, Networking and Communications, 2009.

[3] B. Wagner, B. Striebing, and D. Timmermann, “A System for Live Localization in Smart Environments,” IEEE International Conference on Networking, Sensing and Control, 2013.

[4] B. Wagner, N. Patwari, and D. Timmermann, “Passive RFID Tomographic Imaging for Device-Free User Localization,” 9th Workshop on Positioning, Navigation and Communication (WPNC), 2012.

[5] D. Lieckfeldt, J. You, and D. Timmermann, “Characterizing the Influence of Human Presence on Bistatic Passive RFID-Systems,” IEEE International Conference on Wireless and Mobile Computing, Networking and Communications, Oct. 2009.

[6] B. Wagner and D. Timmermann, “Adaptive Clustering for Device-Free User Positioning Utilizing Passive RFID,” 4th Workshop on Context Systems, Design and Evaluation (CoSDEO), 2013.

[7] J. Wilson and N. Patwari, “Through-Wall Motion Tracking Using Variance-Based Radio Tomography Networks,” arXiv.org, 2009.

[8] J. Wilson and N. Patwari, “Radio Tomographic Imaging with Wireless Networks,” IEEE Transactions on Mobile Computing, 2010.

[9] Y. Kawakita, “Anti-collision Performance of Gen2 Air Protocol in Random Error Communication Link,” International Symposium on Applications and the Internet Workshops (SAINTW’06), 2005.

[10] R. K. Martin, C. Anderson, R. W. Thomas, and A. S. King, “Modelling and Analysis of Radio Tomography,” 4th IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), Dec. 2011.

[11] J. Wilson and N. Patwari, “Radio Tomographic Imaging with Wireless Networks,” IEEE Transactions on Mobile Computing, 2010.

Figure 6. Sample results by layer – edge: layers at 1.0 m, 1.3 m, 1.6 m and 1.9 m for the postures "sitting on chair", "standing on ground" and "standing on chair".


- chapter 17 -

Wireless Sensor Network


- chapter 18 -

Indoor GNSS or Pseudolite


2013 International Conference on Indoor Positioning and Indoor Navigation, 28th-31st October 2013

First Theoretical Aspects of a Cm-accuracy

GNSS-based Indoor Positioning System

Ye Lu, Alexandre Vervisch-Picois, Nel Samama

Department of Electronics and Physics

Institut Mines-Telecom, Telecom SudParis, UMR 5157 SAMOVAR

Evry, France

[email protected]

Abstract—Many techniques developed for indoor positioning aim at providing a mass-market terminal with continuity of the location service, from outdoors, where GNSS are almost unbeatable, to indoors. The joint constraints of real-life environments, the mass market and indoor positioning have so far proved too complicated to deal with, and no widely accepted system has been deployed yet. Our current purpose is slightly different: the question we would like to answer is "what is the limit of the achievable accuracy of a GNSS-like transmitter-based system for indoor positioning?" The target is now the professional world, where a few constraints can be accepted, especially on cost and deployment rules. Given a suitable infrastructure, centimeter-level accuracy must be reached for the system to be of interest. The goal of this first paper on the subject is to lay out the basics of the problem and to review the relevant theoretical aspects. Differential techniques are detailed, together with the requirements that make this level of accuracy attainable. The specificity of the indoor environment is highlighted, mainly through increased noise levels on the various measurements and the immobility of the transmitters. Simulations are carried out, and it is shown that the centimeter level is extremely difficult to reach as soon as the measurement noise increases even slightly. As a conclusion, a few directions for future work are suggested in order to overcome the increased noise level that is almost inevitable indoors.

Keywords—indoor positioning; high accuracy; GNSS; pseudolite; repealite.

I. INTRODUCTION

While human beings tirelessly explore nature, they are also deeply concerned with knowing their own circumstances: being informed of their precise positions, velocities, trajectories, etc. with respect to the local environment. The location service is so important to the navigation of pedestrians and vehicles that it is indispensable in our daily life. Outdoors, it can be provided by GNSS (Global Navigation Satellite Systems), thanks to the pseudorange and carrier phase observations on different frequencies. Generally speaking, pseudorange measurements provide meter-level accuracy, while carrier phases can theoretically improve it to the millimeter range [1]. New techniques and methods emerge continuously: after the traditional technique based on one constellation, i.e. GPS, we are already familiar with RTK (Real Time Kinematic) receivers acquiring signals from both GPS and GLONASS. Furthermore, with Galileo, BeiDou (Compass) and several regional satellite-based augmentation systems under deployment, more choices of independent or collaborative positioning will be available, and with the next generation of GPS, new signals on existing or new frequencies (L5) will certainly lead to more efficient positioning algorithms.

However, the situation is quite different indoors: no technique dominates indoor positioning, and no standard exists; a trade-off has to be made between the cost of the system and its accuracy, availability and robustness. A synthetic comparison shows that GNSS-based indoor positioning techniques are more precise than those built on received signal strength indicators, cover a larger area than UWB (Ultra-Wide Band) or RFID (Radio-Frequency IDentification), and are also more robust than ultrasound- or light-based systems with respect to interference and obstacles [2, 3].

II. CARRIER PHASE DOUBLE DIFFERENCE POSITIONING

Following the notation in the course notes of S. Botton [4], the double difference formation is explained in this section. Let the superscripts $j$, $l$ characterize two transmitters, and the subscripts $i$, $k$ two receivers. The pseudorange and carrier phase observables can be expressed as follows:

$$P_i^j = \rho_i^j + T_i^j + c\,(dt_i - dt^j) + \varepsilon_P \qquad (1)$$

$$\Phi_i^j = \frac{f}{c}\,\rho_i^j + \frac{f}{c}\,T_i^j + f\,(dt_i - dt^j) + N_i^j + \varepsilon_\Phi \qquad (2)$$

where $c$ is the speed of light; $f$ is the carrier frequency; $t^j$ is the moment of signal emission by transmitter $j$, in the system time scale; $t_i$ is the moment of signal reception by receiver $i$, in the system time scale (the raw observable being $P_i^j = c\,(t_i - t^j)$); $\rho_i^j$ is the distance between transmitter $j$ and receiver $i$; $T_i^j$ is the propagation delay due to the atmosphere (inexistent indoors) or to the environment of the receiver antenna; $dt_i$ is the clock bias of receiver $i$; $dt^j$ is the clock bias of transmitter $j$; $N_i^j$ is the integer ambiguity, remaining constant until a cycle slip occurs; $\varepsilon$ is the measurement noise,


including the influence of multipath. The carrier phase is in cycles, while all the other quantities are in SI units.
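As a quick numerical illustration of the observation model (1)-(2), the sketch below generates one pair of pseudorange and carrier phase observables. It assumes an L1-like carrier at 1575.42 MHz and illustrative noise levels; the function name and defaults are ours, not from the paper.

```python
import numpy as np

C = 299_792_458.0      # speed of light (m/s)
F = 1.57542e9          # assumed L1-like carrier frequency (Hz)

def observables(rho, tau, dt_rx, dt_tx, n_amb,
                sigma_p=0.5, sigma_phi_m=0.003, rng=None):
    """Simulate eqs (1)-(2): pseudorange in metres, carrier phase in cycles.

    rho         geometric transmitter-receiver distance (m)
    tau         propagation/environment delay (m)
    dt_rx       receiver clock bias (s); dt_tx transmitter clock bias (s)
    n_amb       integer ambiguity (cycles)
    sigma_p     pseudorange noise, 1-sigma (m)
    sigma_phi_m phase noise, 1-sigma (m); ~3 mm is typical per the paper
    """
    rng = rng or np.random.default_rng(0)
    p = rho + tau + C * (dt_rx - dt_tx) + rng.normal(0.0, sigma_p)
    phi = (F / C) * (rho + tau) + F * (dt_rx - dt_tx) + n_amb \
        + rng.normal(0.0, sigma_phi_m) * F / C   # noise in metres -> cycles
    return p, phi
```

With all biases and noise set to zero, the pseudorange reduces to the geometric range and the phase to the range in cycles plus the ambiguity.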

Then the carrier phase single and double differences can be built:

$$\lambda\,\Delta\Phi_{ik}^{j} = (\rho_i^j - \rho_k^j) + (T_i^j - T_k^j) + c\,(dt_i - dt_k) + \lambda\,(N_i^j - N_k^j) + \varepsilon_\Delta \qquad (3)$$

$$\lambda\,\nabla\Delta\Phi_{ik}^{jl} = (\rho_i^j - \rho_k^j - \rho_i^l + \rho_k^l) + (T_i^j - T_k^j - T_i^l + T_k^l) + \lambda\,(N_i^j - N_k^j - N_i^l + N_k^l) + \varepsilon_{\nabla\Delta} \triangleq \nabla\Delta\rho_{ik}^{jl} + \nabla\Delta T_{ik}^{jl} + \lambda\,\nabla\Delta N_{ik}^{jl} + \varepsilon_{\nabla\Delta} \qquad (4)$$

Suppose the coordinates of receiver $k$, $(x_k, y_k, z_k)$, are known; receiver $i$ is to be positioned; transmitter $l$ is chosen as the reference, and $1, 2, \ldots, n$ are the other transmitters. Let

$$X = \left( x_i \;\; y_i \;\; z_i \;\; \nabla\Delta N_{ik}^{1l} \;\cdots\; \nabla\Delta N_{ik}^{nl} \right)^{\mathsf T} \qquad (5)$$
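A double difference is just a difference of between-receiver single differences, under which both the receiver and transmitter clock terms cancel. A minimal sketch (the helper name is ours) makes this cancellation explicit:

```python
import numpy as np

def double_difference(obs, i, k, j, l):
    """∇Δ of an observable table obs[receiver, transmitter]:
    (obs_i^j - obs_k^j) - (obs_i^l - obs_k^l), as in eq. (4)."""
    return (obs[i, j] - obs[k, j]) - (obs[i, l] - obs[k, l])

# Clock biases cancel: adding any per-receiver and per-transmitter
# offsets to a table of ranges leaves the double difference unchanged.
rho = np.array([[10.0, 12.0],
                [11.5, 14.0]])            # rho[receiver, transmitter]
dt_rx = np.array([[0.3], [-0.7]])         # per-receiver bias (broadcast)
dt_tx = np.array([[2.0, -1.0]])           # per-transmitter bias (broadcast)
biased = rho + dt_rx + dt_tx
assert abs(double_difference(biased, 0, 1, 0, 1)
           - double_difference(rho, 0, 1, 0, 1)) < 1e-9
```

This is precisely why the unknowns left in (4) are only the geometry, the delay terms and the ambiguities.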

At epochs $1, 2, \ldots, n_{ep}$, by linearizing (4), we have:

$$A\,X = B \qquad (6)$$

where

$$A = \left( A_1^{\mathsf T} \; A_2^{\mathsf T} \;\cdots\; A_{n_{ep}}^{\mathsf T} \right)^{\mathsf T} \qquad (7)$$

$$B = \left( B_1^{\mathsf T} \; B_2^{\mathsf T} \;\cdots\; B_{n_{ep}}^{\mathsf T} \right)^{\mathsf T} \qquad (8)$$

$$A_e = \left[ A_{e,pos} \;\; A_{e,amb} \right] \qquad (9)$$

$$A_{e,pos} = \begin{pmatrix} \partial\nabla\Delta\rho_{ik}^{1l}/\partial x_i & \partial\nabla\Delta\rho_{ik}^{1l}/\partial y_i & \partial\nabla\Delta\rho_{ik}^{1l}/\partial z_i \\ \vdots & \vdots & \vdots \\ \partial\nabla\Delta\rho_{ik}^{nl}/\partial x_i & \partial\nabla\Delta\rho_{ik}^{nl}/\partial y_i & \partial\nabla\Delta\rho_{ik}^{nl}/\partial z_i \end{pmatrix} \qquad (10)$$

$$A_{e,amb} = \lambda\,I_n = \lambda \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix} \qquad (11)$$

$$B_e = \begin{pmatrix} \lambda\,\nabla\Delta\Phi_{ik}^{1l} - \nabla\Delta T_{ik}^{1l} - \nabla\Delta\rho_{ik,0}^{1l} - \lambda\,\nabla\Delta N_{ik,0}^{1l} \\ \lambda\,\nabla\Delta\Phi_{ik}^{2l} - \nabla\Delta T_{ik}^{2l} - \nabla\Delta\rho_{ik,0}^{2l} - \lambda\,\nabla\Delta N_{ik,0}^{2l} \\ \vdots \\ \lambda\,\nabla\Delta\Phi_{ik}^{nl} - \nabla\Delta T_{ik}^{nl} - \nabla\Delta\rho_{ik,0}^{nl} - \lambda\,\nabla\Delta N_{ik,0}^{nl} \end{pmatrix} \qquad (12)$$

where the subscript $0$ denotes quantities computed at the current linearization point. The least squares solution of (6) is given by:

$$\hat{X} = (A^{\mathsf T} A)^{-1} A^{\mathsf T} B \qquad (13)$$

With the new estimate from (13), (4) can be linearized at another point closer to the real coordinates of receiver . This iteration continues until the solution converges.
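The iteration of eqs (6)-(13) can be sketched in a few lines of Python. The following is a simplified simulation with numerical Jacobians and our own function and variable names; the delay terms T and the measurement noise are omitted, and the geometry is illustrative only:

```python
import numpy as np

C_LIGHT = 299_792_458.0
LAM = C_LIGHT / 1.57542e9        # assumed carrier wavelength (~0.19 m)

def dd_range(rx, base, tx, tx_ref):
    """Double-difference geometric range for one transmitter pair."""
    d = np.linalg.norm
    return (d(rx - tx) - d(base - tx)) - (d(rx - tx_ref) - d(base - tx_ref))

def solve_float(dd_phase, txs_by_epoch, ref_by_epoch, base, x0, n_iter=20):
    """Gauss-Newton solution of A·X = B with X = [x, y, z, N_1..N_n];
    dd_phase[e][m] is the DD carrier phase (cycles) at epoch e, pair m."""
    x = np.array(x0, dtype=float)
    n = len(dd_phase[0])
    N = np.zeros(n)
    for _ in range(n_iter):
        rows, resid = [], []
        for e, txs in enumerate(txs_by_epoch):
            ref = ref_by_epoch[e]
            for m in range(n):
                resid.append(LAM * dd_phase[e][m]
                             - dd_range(x, base, txs[m], ref) - LAM * N[m])
                row = np.zeros(3 + n)
                for a in range(3):        # numerical partials, as in eq (10)
                    h = np.zeros(3); h[a] = 1e-6
                    row[a] = (dd_range(x + h, base, txs[m], ref)
                              - dd_range(x - h, base, txs[m], ref)) / 2e-6
                row[3 + m] = LAM          # ambiguity column, as in eq (11)
                rows.append(row)
        dX, *_ = np.linalg.lstsq(np.array(rows), np.array(resid), rcond=None)
        x += dX[:3]
        N += dX[3:]
        if np.linalg.norm(dX) < 1e-10:
            break
    return x, N
```

With noiseless synthetic data, transmitters that move from epoch to epoch, and an initial receiver position within about a metre of the truth, this sketch converges to the true position and floating ambiguities, mirroring the behaviour discussed in the next section.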

The most important precondition of this sort of epoch-accumulating algorithm is that the rows of matrix A remain linearly independent as the data of new epochs are taken in; otherwise (6) becomes an underdetermined system of linear equations. Thus, a geometrical change is required between the signal transmitters and the receivers. In outdoor positioning, the GPS satellites move several kilometers per second, allowing a static initialization of the user receiver so as to obtain a rather precise floating solution of the integer ambiguities and of the receiver coordinates. However, this is not true indoors, because the pseudolites are fixed: A_e does not change from epoch to epoch if the receiver is static. As a result, either the pseudolites or the user receiver should move while the observations are collected. The former case is quite hypothetical, since a "moving" pseudolite indoors has never been envisaged; in the latter case, the position of the receiver is no longer a constant, so this algorithm is no longer applicable as is. Equations (5) and (9) should then be replaced by:

$$X = \left( \nabla\Delta N_{ik}^{1l} \;\cdots\; \nabla\Delta N_{ik}^{nl} \right)^{\mathsf T} \qquad (14)$$

$$A_e = A_{e,amb} \qquad (15)$$

As the observations of new epochs arrive, instead of stacking $A_e$ and $B_e$ as in (7) and (8), they can be summed:

$$A = \sum_{e=1}^{n_{ep}} A_e \qquad (16)$$

$$B = \sum_{e=1}^{n_{ep}} B_e \qquad (17)$$

Another solution is to involve the unambiguous pseudorange observables: similarly to (4), but without the term $\lambda\,\nabla\Delta N_{ik}^{jl}$, double differences of $P$ should be built. Equations (9) and (12) are then replaced by:

$$A_e = \begin{bmatrix} A_{e,pos} & A_{e,amb} \\ A_{e,pos} & 0 \end{bmatrix} \qquad (18)$$

$$B_e = \begin{pmatrix} \lambda\,\nabla\Delta\Phi_{ik}^{1l} - \nabla\Delta T_{ik}^{1l} - \nabla\Delta\rho_{ik,0}^{1l} - \lambda\,\nabla\Delta N_{ik,0}^{1l} \\ \vdots \\ \lambda\,\nabla\Delta\Phi_{ik}^{nl} - \nabla\Delta T_{ik}^{nl} - \nabla\Delta\rho_{ik,0}^{nl} - \lambda\,\nabla\Delta N_{ik,0}^{nl} \\ \nabla\Delta P_{ik}^{1l} - \nabla\Delta T_{ik}'^{1l} - \nabla\Delta\rho_{ik,0}^{1l} \\ \vdots \\ \nabla\Delta P_{ik}^{nl} - \nabla\Delta T_{ik}'^{nl} - \nabla\Delta\rho_{ik,0}^{nl} \end{pmatrix} \qquad (19)$$

where $T'$ denotes the propagation delay affecting the pseudorange.

Besides, (16) and (17) are substituted for (7) and (8).


III. SIMULATION AND ANALYSIS

A. Configuration

The testing environment simulated is a room with a floor area of 10 by 15 meters and a height of 3.5 meters. As shown in Fig. 1, the system contains 4 to 6 pseudolites, a base station and a user receiver. The base station is placed in the middle of the room, either near the ceiling or on the floor. It is worth mentioning that the VDOP (Vertical Dilution Of Precision) can be greatly reduced if at least one transmitter is located below the receiver, which leads to a more accurate positioning result [5]. The topology of the pseudolites is nevertheless not optimized in that way in the following simulations.
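The DOP argument can be checked numerically. The sketch below (our own helper, not from the paper) builds the geometry matrix of unit line-of-sight vectors plus a clock column, and compares the VDOP of an all-transmitters-above layout against one where a single transmitter sits below the receiver; the transmitter positions are illustrative:

```python
import numpy as np

def vdop(rx, txs):
    """Vertical DOP from unit line-of-sight vectors plus a clock column."""
    rows = []
    for tx in txs:
        diff = np.asarray(tx, float) - rx
        u = diff / np.linalg.norm(diff)
        rows.append([u[0], u[1], u[2], 1.0])
    G = np.array(rows)
    Q = np.linalg.inv(G.T @ G)       # cofactor matrix of the LS solution
    return float(np.sqrt(Q[2, 2]))   # vertical component

rx = np.array([5.0, 7.0, 1.2])
above = [(1, 2, 3.4), (9, 1, 3.4), (8, 14, 3.4), (2, 13, 3.4)]
mixed = [(1, 2, 3.4), (9, 1, 3.4), (8, 14, 3.4), (2, 13, 0.2)]  # one below rx
assert vdop(rx, mixed) < vdop(rx, above)
```

With all transmitters above the receiver, the vertical line-of-sight components nearly mirror the clock column, so the vertical solution is poorly observable; moving one transmitter below the receiver breaks this near-collinearity and reduces the VDOP, in line with [5].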

Figure 1. Configuration of the positioning system.

B. Positioning with “Moving” Pseudolites

“Moving” pseudolite is an odd concept, since almost all currently existing pseudolites for positioning and navigation are fixed, with precise coordinates known in advance. As described in the previous section, if the user receiver is static and only carrier phase measurements are used, the pseudolite ambiguities cannot be solved independently, i.e. without the help of outdoor GNSS or other external means [6]. Thus, an intuitive question is: what happens if the pseudolites move? In fact, the concept of “mobile pseudolite” has already been proposed in another context, as part of the so-called inverted positioning system, where the mobile pseudolite is attached to the user (to be positioned) [7, 8]. Differently, the hypothesis of a “moving” pseudolite here simply refers to a pseudolite, or a part of a pseudolite, e.g. its antenna, that can move. The movement is supposed to be linear, with a constant “velocity” v_PL.

The simulations have shown that the feasibility of the algorithm and the accuracy of the positioning result depend on the following factors:

- The number of pseudolites n_PL
- The initial position of the user receiver (x_0, y_0, z_0)
- The "velocity" of the pseudolites v_PL
- The level of noise σ
- The number of epochs n_ep over which observation data are taken
- The positions of the pseudolites and the base station
- A few other minor parameters

Theoretically, the minimal number of pseudolites is 2, so that 1 double difference of carrier phase observables can be formed, but obviously the positioning result would be highly biased due to the very poor DOP and the lack of observation sources for error compensation. In previous site experiments with the repealite system, a typical configuration included 4 repealites, and it proved to be efficient. Thus, in the following discussion n_PL = 4. The number of pseudolites (or satellites, in outdoor environments) also has an important impact on the convergence rate of the algorithm. 4 or more GPS satellites are visible at any location outdoors; sometimes more than 10 satellites can be observed. If the GLONASS observables are used at the same time by the receiver, the number of visible satellites is doubled. This lays the foundation for RTK, allowing a cm-accuracy positioning result to be generated within 10 seconds, given an observation refresh rate of 1 Hz. Here, since n_PL = 4, the result converges more slowly (if the other parameters are kept the same).

The initial position of the user receiver should be estimated in advance with some other method, e.g. classical pseudorange stand-alone positioning. In outdoor situations, there is no problem even if the distance error of (x_0, y_0, z_0) reaches several kilometers. In contrast, the simulations show that the calculation may diverge if the distance between (x_0, y_0, z_0) and the receiver's real position is greater than 2.5 m, while it always converges if this distance is around 1 m.

When the carrier phase measurements during 10 epochs (i.e. 10 seconds) are used, the relationship between the "velocity" of the pseudolites and the noise level, viz. 3σ of the random variable representing the noise, is given in Fig. 2. In this figure, each line corresponds to an accuracy level of the positioning result. The typical magnitude of noise on carrier phase measurements is 3 mm. However, it can be greatly increased indoors because of multipath effects. The figure indicates that a very large v_PL is required when the noise level increases: to keep the 1 cm positioning accuracy, if 3σ = 3 mm, v_PL ≈ 0.30 m/s; if 3σ = 20 mm, v_PL ≈ 3.50 m/s (not shown in the figure). If a comparatively larger positioning error is tolerated, v_PL is less sensitive to the growth of noise, as the slope of the green or black line is much smaller than that of the blue line.

As defined in (5), the estimate of all the double difference integer ambiguities is updated at each iteration. Finally ∇ΔN converges to a floating value, and if the maximum error of ∇ΔN with respect to its true value is smaller than 0.5, the true values of all the integer ambiguities can be obtained by simply rounding the floating values. Keeping n_ep = 10, the relationship between the maximum error of ∇ΔN and the noise level (i.e. 3σ) is plotted in Fig. 3.
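The rounding criterion can be stated in a few lines: in simulation, rounding the float solution recovers the true integers exactly when the maximum float error with respect to the truth stays below half a cycle. A trivial sketch (the function name is ours):

```python
import numpy as np

def rounding_recovers_truth(n_float, n_true):
    """Return (max float error, True if rounding yields the true integers)."""
    n_float = np.asarray(n_float, float)
    n_true = np.asarray(n_true, int)
    max_err = float(np.max(np.abs(n_float - n_true)))
    ok = bool(np.array_equal(np.round(n_float).astype(int), n_true))
    return max_err, ok

# All float errors below 0.5 cycle: rounding succeeds.
err, ok = rounding_recovers_truth([3.2, -2.4, 7.1], [3, -2, 7])
assert ok and err < 0.5
# One error of 0.6 cycle: rounding picks the wrong integer.
err, ok = rounding_recovers_truth([3.6, -2.4, 7.1], [3, -2, 7])
assert not ok and err >= 0.5
```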

[Figure 1 plot: 3D view (x up to 15 m, y up to 10 m, z up to 4 m) showing the pseudolites, additional pseudolites, base station and user receiver.]


Figure 2. “Velocity” of pseudolites required with respect to various noise levels and upper bounds of positioning accuracy.

Figure 3. Maximum estimation error of integer ambiguities with respect to noise levels and v_PL.

Each line in this figure corresponds to a certain value of v_PL. It is shown that a larger v_PL can significantly reduce the estimation error of the integer ambiguities. When v_PL is greater than 0.20 m/s, the error of ∇ΔN is not very sensitive to the growth of noise, and if v_PL is greater than 0.30 m/s, the curve of the maximum ambiguity error always stays below 0.5.

C. Positioning with Moving Receiver

Now the pseudolites are fixed and the user receiver moves. In the simulation, the receiver follows a 2-meter-radius circular motion at a constant angular velocity ω = π/10 rad/s, i.e. a tangential speed v ≈ 0.63 m/s. As mentioned in the previous section, the position of the receiver at each epoch is considered as known here, and the purpose is to obtain a precise solution of the floating ambiguities ∇ΔN. In practice, the receiver's position should be determined in real time through another method, such as stand-alone positioning, whose accuracy is highly affected by the quality of the pseudorange measurements, indoor multipath, etc. According to the testing results with repealites, the bias of the receiver's position varies from tens of centimeters to a few meters. Unlike the measurement noise, which has a zero-mean Gaussian distribution, the receiver's position is always quite biased at every epoch. Besides, the accuracy of the calculated ambiguities also depends on the number of pseudolites n_PL, the noise level (3σ) of the carrier phase measurements, the number of epochs n_ep, the receiver speed and trajectory, etc.

Keeping n_PL = 4 and n_ep = 10, the relationship between the maximum error of the double difference integer ambiguities and the error of the receiver's position is shown in Fig. 4, at four given noise levels of the carrier phase observations:

Figure 4. Maximum estimation error of integer ambiguities with respect to

the error of receiver position and the noise level of carrier phase measurements.

This figure reveals the difficulty of obtaining a precise ∇ΔN (whose error is less than 0.5), as it is very hard to maintain a 35-centimeter accuracy in a kinematic positioning of the user receiver indoors, even if the phase noise is restricted to 3 mm. If the error of the receiver position is at the meter level, the solution of ∇ΔN deviates completely from its true values.

Figure 5. Maximum estimation error of integer ambiguities with respect to n_ep and the receiver position error, given a phase noise level of 3 mm (upper) or 10 mm (lower) respectively.

[Figure 2 plot: v_PL (m/s) versus noise level (mm), "moving" pseudolites and static user receiver; curves for 1 cm, 5 cm, 10 cm and 20 cm accuracy.]

[Figure 3 plot: max. error of N versus noise level (mm), "moving" pseudolites and static user receiver; curves for v_PL = 0.10, 0.20, 0.30 and 0.50 m/s.]

[Figure 4 plot: max. error of N versus error of receiver position (m), fixed pseudolites and moving user receiver; curves for phase noise 3, 5, 10 and 20 mm.]

[Figure 5 plots: max. error of N versus n_ep, fixed pseudolites and moving user receiver, phase noise 3 mm (upper) and 10 mm (lower); curves for receiver position errors of 30 cm, 50 cm, 1 m and 2 m.]


In favour of error compensation, a larger quantity of observations should be taken, i.e. the number of epochs n_ep should be increased. This is effective in suppressing the measurement noise, which is supposed to have a zero mean; it should also average out the multipath error to a certain extent, since the geometry of the system changes continuously due to the moving receiver. Without considering multipath in the simulation, the relationship between the error of ∇ΔN and n_ep is presented in Fig. 5. It indicates that a more accurate floating solution of ∇ΔN can be achieved at the cost of a longer observation period.

D. Positioning with Additional Pseudorange Observables

When the pseudolites are fixed and the user receiver remains static, some additional pseudorange observables have to be used, as they do not bring any unknown ambiguity into the calculation. However, pseudorange measurements are much more biased than carrier phase measurements, especially indoors, which conflicts with the requirement of high accuracy positioning. Even the pseudorange measured with the P code (outdoors) has a noise magnitude 100 times larger than the carrier phase. Accordingly, the number of epochs n_ep is augmented to 2000 in the simulation.

Keeping n_PL = 4, the impact of the pseudorange and carrier phase measurement noise on the accuracy of the calculated receiver coordinates and floating ambiguities is illustrated in Figs. 6 and 7.

Figure 6. Positioning accuracy with respect to the noise level of pseudorange

and carrier phase observations.

Figure 7. Maximum estimation error of integer ambiguities with respect to the noise level of pseudorange and carrier phase observations.

The above figures reveal that the influence of the carrier phase noise is almost negligible compared to the noise on the pseudorange. It is extremely hard to reach cm-accuracy even if the noise of the pseudorange is restricted to within a few tens of centimeters. However, the error of ∇ΔN grows rather slowly with the noise, and it remains precise enough (smaller than 0.5) throughout the whole simulation. This property implies the possibility of obtaining the correct integer values of the double difference ambiguities in a tough indoor environment. Note that the positioning accuracy presented in Fig. 6 corresponds only to the floating solution, i.e. the coordinates of the receiver calculated at the same time as the floating ambiguities. Once the integer values of ∇ΔN are found, the biased pseudorange observations are no longer useful, and the accuracy of the fixed solution is bound to improve.

IV. CONCLUSION

This paper has presented the first theoretical aspects of a cm-accuracy GNSS-based indoor positioning system. The carrier phase double difference algorithm has been summarized, emphasizing the impact of indoor characteristics. Three approaches have been proposed and simulated: (i) positioning with "moving" pseudolites and a static receiver; (ii) positioning with fixed pseudolites and a moving receiver; (iii) positioning with fixed pseudolites, a static receiver, and additional noisy observables. The multipath effect is not yet modeled but is considered as part of the noise in the simulations. The simulations confirm that the measurement biases play a very important role in the final positioning accuracy, and the possible solutions are either to decrease the noise and other error sources, or to use a robust algorithm allowing a better compensation of errors. Double differencing and redundant observations are efficient ways of diminishing the noise. The third approach (iii) proves promising for integer ambiguity estimation, even with highly biased observations. As a subsequent step, it still needs to be improved and applied to the repealite system. Furthermore, the errors with non-zero mean, such as multipath, should be modeled so as to be specifically treated.

REFERENCES

[1] B. Hofmann-Wellenhof, H. Lichtenegger, and E. Wasle, GNSS – global navigation satellite systems, GPS, GLONASS, Galileo, and more, Springer-Verlag Wien, 2008, pp. 11-12.

[2] A. Patarot, M. Boukallel, S. Lamy-Perbal, A. Vervisch-Picois, and N. Samama, “Pedestrian indoor positioning techniques: a survey”, JETSAN, May 2013.

[3] H. Liu, H. Darabi, P. Banerjee, and J. Liu, “Survey of wireless indoor positioning techniques and systems”, IEEE transactions on systems, man, and cybernetics – part C: applications and reviews, vol. 37, No. 6, pp. 1067-1080, November 2007.

[4] S. Botton, “Calcul GPS sur la phase”, April 2013.

[5] A. Vervisch-Picois and N. Samama, “Analysis of 3D repeater based indoor positioning system – specific case of indoor DOP”, European navigation conference-GNSS, May 2006.

[6] T. Ford, J. Neumann, N. Toso, W. Petersen, C. Anderson, et al., "HAPPI – a high accuracy pseudolite/GPS positioning integration", [Online]. Available: http://www.novatel.com/assets/Documents/Papers/File45.pdf

[7] J. Wang, “Pseudolite applications in positioning and navigation: progress and problems”, Journal of global positioning systems, vol. 1, No. 1, pp. 48-56, July 2002.

[8] L. Dai, J. Wang, T. Tsujii, and C. Rizos, “Inverted pseudolite positioning and some applications”, Survey review, vol. 36, No. 286, pp. 602-611, October 2002.

[Figure 6 plot: positioning error (m) versus noise level of pseudorange observations (m), fixed pseudolites and static user receiver; curves for phase noise 3, 5, 10 and 20 mm.]

[Figure 7 plot: max. error of N versus noise level of pseudorange observations (m), fixed pseudolites and static user receiver; curves for phase noise 20, 30, 50 and 100 mm.]


Indoor positioning with GALILEO-like pseudolite signals

2D positioning test results

Augustin Monsaingeon, Alexis Vallernaud, Jean-Pascal Julien, Marc Boyer

INSITEO

Toulouse, France

Abstract — Indoor location applications require sub-metre accuracy, which cannot be achieved using in-orbit satellite signals due to signal power losses and multipath. This paper presents our pseudolite solution and the accuracy tests performed in 2D positioning. In order to use a standard GNSS receiver architecture, the system uses continuous Galileo-like BPSK(5) signals. To mitigate cross-correlation due to the near-far effect, we define specific PRN time offsets between the antennas. The use of Galileo BPSK(5) reduces the impact of long multipath compared to the BPSK(1) of GPS C/A. Double-delta correlators, carrier smoothing and vector tracking mitigate the impact of short multipath.

The test setup uses pseudolites and a dedicated receiver (RF front-end, FPGA, ARM processor). The receiver architecture is similar to standard GNSS chipset architectures. The FPGA performs the real-time correlation tasks and the ARM processor is in charge of the discriminators, tracking loops and navigation algorithms. The real-time positions are displayed on a map and recorded. Detailed results presented in this paper show the sub-metre accuracy achieved.

Keywords - pseudolite, indoor, navigation, Galileo, accuracy

I. INTRODUCTION

With handheld devices and smartphones now available to a majority of users, the use of geo-location has become natural, underlying many applications and services. However, accurate location solutions are essentially only available outdoors and therefore fail to meet the requirements of indoor usage.

Indoor location use cases range from pedestrian navigation in large public buildings to lone worker security or interactive games in theme parks. Although they originate from very different contexts, these services have some common requirements. They need location technologies that fulfil three key success factors: (1) Scalability and low deployment costs, (2) compatibility with highly integrated GNSS receivers, (3) Accuracy better than 1 meter.

The path followed by Insiteo is therefore to develop a solution which uses GNSS signals, and keeps the receiver architecture and antennas similar to integrated outdoor GPS

chips. This paper presents the solution and the results achieved during actual test sessions.

Section II presents how the near-far limitation has been lifted, followed by the advantages of using E6-B Galileo-like signals. Section III describes the Insiteo system and compares it to a GPS chipset. The 2D position and navigation results are presented in section IV. Section V concludes the paper.

II. TECHNOLOGY OVERVIEW

A. State of the art

When considering indoor location, several technologies are available, but none of them fully reaches the target of being altogether (1) scalable and cost-effective, (2) accurate to better than 1 m, and (3) compatible with a majority of smartphone chipsets.

The landscape of existing technologies is described in the paragraphs below.

1) Ultra-Wideband (UWB): UWB is evaluated in [3] where ranging accuracy is better

than one meter in an obstructed environment. This technology uses a Time Of Arrival (TOA) method to compute the distance between infrastructure sensors and the devices to locate. In this solution, the infrastructure density not only depends on the desired accuracy but also on the number of simultaneous users to serve.

2) WLAN/BLE RSSI: This technology is the most commonly used on smartphones and tablets, thus enabling indoor location-based applications in shopping centres, airports and other public facilities. Received Signal Strength Indicator (RSSI) measurements of WLAN and/or Bluetooth Low Energy (BLE) signals made by the devices' chipsets are used. Minimum Mean Square Error algorithms estimate the devices' positions by comparing real-time data with known information from a site survey. By using existing Wi-Fi or BLE beacons, this solution is very cost-effective. However, due to the nature of the RF signals that are used (telecom signals not designed for accurate travel-time measurements), these solutions do not have the potential of reaching sub-metre accuracies.
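The fingerprinting step described above can be sketched in a few lines. This nearest-fingerprint MMSE matcher is our own simplified illustration, not Insiteo's algorithm: it picks the surveyed position whose recorded RSSI vector minimizes the mean squared error against the live measurement.

```python
import numpy as np

def mmse_position(rssi, fingerprints, positions):
    """Return the surveyed position minimizing the mean squared RSSI error."""
    errs = np.mean((np.asarray(fingerprints, float)
                    - np.asarray(rssi, float)) ** 2, axis=1)
    return np.asarray(positions)[int(np.argmin(errs))]

# Two surveyed points with RSSI from two beacons (dBm); the live reading
# is closest to the first fingerprint, so its position is returned.
fp = [[-40.0, -60.0], [-70.0, -50.0]]
pos = [[0.0, 0.0], [5.0, 5.0]]
print(mmse_position([-41.0, -59.0], fp, pos))
```

Production systems interpolate between fingerprints and model RSSI variability, but the comparison-against-survey principle is the same.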

3) Ultrasound-based solutions: Due to low propagation speeds, the ranging accuracy based

on sound waves can be extremely good. The principle is to use direct sequence codes modulating an ultrasonic carrier wave.

In [1], a 3D accuracy better than 5 cm was reached in 95% of cases for their synchronous system and better than 25 cm for an asynchronous system. Due to the range of the ultrasound signals, the narrow transducer beam width, and the attenuation of the ultrasound signal by obstacles, the required transmitter density is high. In [2], 47 transmitters were used in order to cover a 6.5 m x 5.7 m x 2.5 m room. This high transmitter density requirement and the fact that they need to be powered are not adapted to ubiquitous mass market deployments.

4) Ground-based transmitters of GPS-like signals: A solution provided by Locata [4] broadcasts pulsed GNSS-like signals at a 10 MHz chip rate in the ISM band. Associated with the Timetenna antenna, the positioning accuracy is approximately 20 cm in kinematic tests. With the pulsed technology, which eliminates the near-far effect, and a transmit power of up to 1 W, the range of the system is up to 10 km. This solution needs custom receivers and transmitters. The size and complexity of the Timetenna, which reduces the impact of multipath, are not compatible with hand-held devices and smartphones.

The solution proposed by Insiteo has the advantage of keeping the receiver architecture similar to a GPS chipset. Therefore the pseudolite broadcasts a continuous, GNSS-like signal. For regulatory reasons, the Industrial, Scientific and Medical (ISM) band (2.4 GHz-2.483 GHz) was used.

B. Near-far effect inherent to CDMA systems

The near-far effect appears when the power difference between pseudolite signals at the receiver's position is too high. The GPS C/A codes broadcast by each antenna have a cross-correlation margin of 24 dB. If the received power difference between pseudolites is greater than this margin, the receiver can acquire and track a cross-correlation peak. When this occurs, the pseudorange estimation on the low-power signal can be severely biased.

By increasing the cross-correlation margin between signals the impact of the near-far effect can be minimised. Insiteo uses a patented technique in order to do so: transmission time offsets between PRNs are properly chosen in order to reach optimal cross-correlation characteristics.

For instance, when they are not offset from each other, the maximum cross-correlation between PRN33, PRN34 and PRN35 is 24 dB within ±3 chips of the main peak. When specific delays are applied to each PRN, the cross-correlation margin within ±3 chips reaches 60 dB.

Thanks to this 60 dB margin, near-far effect is mitigated in standard use cases where the end-user never approaches the transmission antennas by less than a few meters.
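The measurement behind such margins can be illustrated with a toy experiment. The sketch below uses random ±1 stand-in codes (not the actual GPS Gold codes, and not Insiteo's patented offsets) and evaluates the worst cross-correlation within ±3 chips of the peak for a given chip offset:

```python
import numpy as np

def worst_xcorr_db(a, b, offset, window=3):
    """Worst normalized circular cross-correlation (dB relative to the
    main autocorrelation peak) between code a and code b delayed by
    `offset` chips, over lags within ±window chips of zero."""
    n = len(a)
    b_off = np.roll(b, offset)
    worst = max(abs(np.dot(a, np.roll(b_off, lag)))
                for lag in range(-window, window + 1))
    return 20 * np.log10(worst / n)   # 0 dB equals the main peak

rng = np.random.default_rng(1)
prn_a = rng.choice([-1, 1], size=1023)   # stand-in ±1 codes, C/A length
prn_b = rng.choice([-1, 1], size=1023)
print(worst_xcorr_db(prn_a, prn_a, 0))   # 0 dB: the code against itself
print(worst_xcorr_db(prn_a, prn_b, 200)) # well below the main peak
```

A deployment would sweep the offsets of the real PRN sequences with such a metric and keep the combination maximizing the margin around the prompt correlator.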

C. Pseudolite continuous signal definition

The system uses the Galileo E6-B modulation. As the Galileo E6-B codes were not yet defined, C/A Gold codes were used. Using higher chip rates, such as the BPSK(5) of Galileo E6-B signals, has a beneficial impact on both pseudorange estimation and fading mitigation: with a wider signal bandwidth, correlator resolution is improved, and wideband signals are less affected by fading.

III. INSITEO’S PSEUDOLITE SYSTEM

The Insiteo system is made up of a transmitting part and a receiving part:

- the transmitting part consists of one or more pseudolite signal generators, amplifiers, and antennas.

- the receiving part is composed of one antenna, low noise amplifiers, and one receiver.

A. Transmitter

The pseudolite transmitter is a 19-inch standard rack which provides synchronous Radio Frequency (RF) channel outputs and is manufactured by Silicom. Each channel is independent and able to modulate its own binary stream with BPSK modulation. All RF channels have the same carrier frequency which can be selected in the entire ISM band, from 2400 MHz to 2485 MHz.

The chip rate is 5.115 Mchip/s, applied to GPS C/A Gold codes. For the experiment, three of the RF channels have been used, with PRN33, PRN34 and PRN35 from the GPS C/A ICD [12]. All channels are synchronised together and their relative delay is configurable. As explained in the following chapter, the absolute delays between each RF output are calibrated to compensate for propagation delays in the RF cables.

Figure 1. Pseudolite transmitter

B. Amplifiers and Antennas

In order to compensate for cable losses between the pseudolite transmitter and the antenna, mast head power amplifiers are used. The antennas are Right-Hand Circularly Polarised (RHCP) patch antennas. Their gain is 5 dBi and the half power beamwidths are approximately 70° in vertical and horizontal planes.

C. Receiver

The receiver system is composed of an RHCP omnidirectional antenna, a 20 dB gain low-noise amplifier (LNA) and a GNSS receiver. The GNSS receiver is built around a USRP E100 and the full duplex 2.4 GHz transceiver card RFX2400 from Ettus Research.


2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31 October 2013

Figure 2. Pseudolite receiver

The FPGA and the ARM firmware were customised specifically to implement the indoor GNSS receiver functionalities. The FPGA handles the data for the acquisition process, computes the pseudoranges, and performs the demodulation for the tracking process. The ARM drives the FPGA and computes the acquisition, the tracking control loops of the signals, and the navigation filter.

The positions computed by the navigation algorithms are then logged and forwarded to a PC for real-time display on a map background.

Figure 3. Pseudolite receiver architecture

Fig. 3 above shows the block diagram of the receiver: the architecture is similar to that of a typical GPS receiver [7]. The only difference from a basic GPS receiver is the number of correlators per channel.

D. Multipath mitigation

In order to reduce the impact of multipath on pseudorange estimation, the system under test used a 5.115 Mchip/s signal. Compared to the BPSK(1) of the GPS L1 C/A, the higher chip rate of the BPSK(5) limits the maximum ranging error to about 15 meters for a standard early-late discriminator [8]. This is not robust enough and the ranging biases are too high for a sub-metric system.

The double delta discriminator has been identified as the best candidate to ensure the system’s performance under multipath conditions. As reported in [9], with a correlator spacing of 0.1 chip, only multipath lengths below 7 meters or greater than 50 meters degrade the ranging. This results in maximum discriminator biases of less than 1.2 meters. This discriminator is used in the receiver.

Another phenomenon due to multipath is deep fading. By strongly attenuating the signal, it can cause loss of lock in the tracking loops. A good solution to mitigate this effect is the vector tracking loop (VTL); an implementation is described in [10]. During our tests, the effectiveness of this solution was observed in the dynamic tests.

In summary, the following algorithms were used in the solution under test:

- 40 ms integration time on the code loop;
- a double delta discriminator;
- carrier smoothing and a vector-based tracking loop.

After initial acquisition, an early-late correlator is used.

When the code frequency and the carrier have converged, the early-late correlator switches to a double delta discriminator and the correlations are integrated over 40 ms. When these algorithms have converged, carrier smoothing is activated, followed by the vector tracking loop [3].
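The difference between the two discriminators can be illustrated on an ideal triangular code-correlation function distorted by a single specular multipath ray (a simplified, noise-free sketch; the spacings and multipath parameters are illustrative, evaluated at zero tracking error so a nonzero output indicates a bias):

```python
import numpy as np

def corr(tau, mp_delay, mp_amp):
    """Ideal triangular code correlation plus one specular multipath ray."""
    tri = lambda x: np.maximum(0.0, 1.0 - np.abs(x))
    return tri(tau) + mp_amp * tri(tau - mp_delay)

def early_late(R, d=0.1):
    # classic early-minus-late, correlators d/2 chips either side of prompt
    return R(-d / 2) - R(d / 2)

def double_delta(R, d=0.1):
    # narrow pair minus half the wide pair: cancels linear multipath slope
    return (R(-d / 2) - R(d / 2)) - 0.5 * (R(-d) - R(d))

# Long-delay multipath (0.5 chip, ~29 m at 5.115 Mchip/s): the multipath
# correlation is linear across all correlators, so double delta cancels it.
R_long = lambda t: corr(t, 0.5, 0.5)
# Very short delay (0.05 chip): the multipath peak falls between the
# correlators and even the double delta remains biased.
R_short = lambda t: corr(t, 0.05, 0.5)

print(early_late(R_long), double_delta(R_long))    # nonzero vs ~0
print(early_late(R_short), double_delta(R_short))  # both nonzero
```

This reproduces the qualitative behaviour reported above: short-delay multipath degrades even the double delta discriminator, while intermediate delays are effectively cancelled.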

The pseudoranges and position measurements studied later in this paper were performed after the activation of the vector tracking in each of the test cases.

IV. INDOOR TESTS

The indoor tests were carried out in an exhibition hall with the goal of testing the system accuracy in both static and dynamic situations. This chapter describes the setup configuration and provides test results.

A. System Setup

Three antennas were deployed, each transmitting a pseudolite signal radiated towards the central area. The antennas were laid out as an equilateral triangle with 40 m sides.

In the test area, the floor was marked with a one-meter grid, which allowed accuracy tests to be performed. The receiver antenna height above the ground was 1.5 m and the transmitting antennas were at 3 m.

The pseudolite transmitter was located in the same building and signals were carried over to the antennas through RF cables.

Figure 4. Test area description

The antenna at the lower-left corner serves as the origin of the location coordinates in all the tests. Coordinates are in meters, the X-axis being horizontal and the Y-axis vertical as indicated in Fig. 4.

The receiver was placed on a trolley and moved around during tests. During each test session, the real-time location


provided by the receiver was logged for accuracy estimation purposes.

B. System Calibration

In order to compensate for cable length and transmitter group delays, the system has been calibrated before the tests.

In order to do so:

- The receiver is directly connected to the three RF cables

- Delays between each channel due to cable length and transmitter group delay are measured

- Delays are compensated for at the pseudolite level.
- The system is calibrated and test sessions can be carried out.

C. Test procedure

For each test, the receiver was placed at the static position or at the start of the trajectory. The receiver was started and we waited for the algorithms to converge (approximately 20 seconds).

For static tests, positions were then recorded for one minute. For dynamic tests, the trajectory was followed at walking speed. In each case, the receiver provided a new position estimate every 0.5 s.

D. Static test results

For static sessions, six specific locations were tested. The coordinates of the test points are given in Table 1. For each test point, the average position, the mean position bias and the standard deviation (STD) are computed. In these static test sessions, as vector tracking is not required, it was not enabled.

TABLE I. STATIC TEST REFERENCE POINTS

Test point

Reference position in meters (X,Y)

1  (19.9; 11.5)
2  (12.9; 15.5)
3  (15.9; 9.5)
4  (17.9; 7.5)
5  (23.9; 15.5)
6  (19.9; 6.5)

It can be seen that for all the evaluated test points, the standard deviation is always below 13 cm and the position accuracy (mean bias) is below 43 cm.

The cumulative distribution function of all positioning errors shows that 50% of the positions have an error below 22 cm and 90% of the positions have an error below 50 cm.

All errors are below 101 cm.

TABLE II. STATIC TEST RESULTS

Test point

Average estimated position (X,Y) (m)

Mean bias (m)

STD (m)

1  (20.03; 11.52)  0.22  0.12
2  (12.86; 15.42)  0.14  0.08
3  (16.07; 9.69)   0.37  0.12
4  (17.97; 7.5)    0.13  0.06
5  (23.86; 15.48)  0.16  0.11
6  (19.90; 6.17)   0.43  0.13
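The per-point statistics and the error CDF above can be reproduced from a log of position fixes along the following lines (a sketch: the paper does not spell out whether "mean bias" is the mean per-fix distance error or the error of the mean position, so the definitions below are our assumption, and the data are synthetic):

```python
import numpy as np

def static_stats(fixes, ref):
    """Per-point accuracy metrics from logged 2-D position fixes.

    Assumed definitions: mean bias = mean of per-fix distance errors,
    STD = standard deviation of those distance errors.
    """
    fixes = np.asarray(fixes, dtype=float)
    err = np.linalg.norm(fixes - np.asarray(ref, dtype=float), axis=1)
    return fixes.mean(axis=0), err.mean(), err.std()

def error_cdf(all_errors, level):
    """Fraction of position errors at or below `level` (meters)."""
    return float(np.mean(np.asarray(all_errors, dtype=float) <= level))

# Synthetic example: 120 fixes (one minute at 2 Hz) scattered around point 1.
rng = np.random.default_rng(0)
fixes = np.array([19.9, 11.5]) + 0.1 + 0.12 * rng.standard_normal((120, 2))
avg, bias, std = static_stats(fixes, (19.9, 11.5))
```

Pooling the per-fix errors of all six points and evaluating `error_cdf` at 0.22 m and 0.50 m gives the 50% and 90% figures quoted in the text.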

Figure 5. CDF of static test biases

E. Dynamic test results

1) Dynamic test description
Two dynamic test sessions were performed. In the first test, the receiver followed a snake-shaped trajectory. It was left static only at the first and last points of the trajectory.

The second dynamic test session followed a circular trajectory, centred on the middle of the test area with a 4.5 m radius. After convergence of the navigation algorithms, the receiver was moved around the circle anticlockwise until it reached its initial position. As the speed of the receiver was not perfectly constant during the test, it was not possible to compare the true and estimated positions point by point. In order to quantify the navigation accuracy, the actual paths and the reference trajectories are displayed on the same 2D plots.

On each side of the reference trajectory, a dotted line represents a 50 cm error which allows for a visual interpretation of the accuracy.

2) Dynamic test results

Due to practical constraints on the floor when the dynamic test sessions were performed, the receiver was moved manually along the reference trajectories, which can result in inaccuracies of about 10 cm. In both dynamic test sessions, we can conclude that the solution accuracy once the algorithms have converged is below one meter.

Fig. 6 displays the result of the first dynamic test. 67% of the estimated positions are within the 50 cm error area.

Fig. 7 shows the results of the second dynamic test. All the estimated positions are within the 50 cm error area.

Fig. 8 at the end of this paper shows all the static test positions on a map background.


Figure 6. Dynamic Test 1

Figure 7. Dynamic Test 2

Figure 8. Static test sessions map view

F. Conclusion

This paper describes a pseudolite-based indoor positioning system which uses continuous signals. The pseudolite broadcasts a signal similar to the Galileo E6-B in the 2.4 GHz ISM band. One main target of this solution is to keep the receiver and its antenna system very close to a standard integrated chip receiver.

The system principles and architecture are presented along with the techniques implemented to mitigate multipath. The system has been deployed in an exhibition hall and real tests were performed. Static and dynamic test sessions were logged and show the following results:

- The accuracy of the static tests is better than 50 cm in 90% of the location estimates.

- The accuracy of the dynamic tests is better than 1 m.

These first results demonstrate the performance capabilities of a continuous signal pseudolite-based solution, delivered with a simple receiving segment and antenna system. These very good results validate the approach taken by Insiteo in developing this technology.

More extended testing will be carried out in the coming months to optimise the system settings and the algorithm tuning. The areas which are identified include:

- using more PRNs to implement Receiver Autonomous Integrity Monitoring (RAIM) algorithms,

- using Microelectromechanical Systems (MEMS) inertial sensors which are available in smartphones,

- performing the tests in public places.

ACKNOWLEDGMENTS

This work has been carried out in the framework of the European GNSS Supervisory Authority Framework Program 7 (GSA-FP7) project “I-Going”. The authors are grateful to Miguel Martins from itrust consulting for his support. They would also like to thank “GL Events” in Toulouse for providing access to their buildings.

REFERENCES
[1] M. Hazas, A. Hopper, “Broadband Ultrasonic Location Systems for Improved Indoor Positioning”, IEEE Transactions on Mobile Computing, Vol. 5, No. 5, May 2006

[2] C. Hsiao, P. Huang, “Two Practical Considerations of Beacon Deployment for Ultrasound-Based Indoor Localization Systems”, IEEE International Conference on Sensor Networks, Ubiquitous, and Trustworthy Computing, 2008

[3] Tingcong Ye, Michael Walsh, Peter High, John Barton, Alan Mathewson, Brendan O’Flynn, “Experimental Impulse Radio IEEE 802.15.4a UWB Based Wireless Sensor Localization Technology: Characterization, Reliability and Ranging”, ISSC 2011, Trinity College Dublin, June 23-24

[4] C. Rizos, G. Roberts, J. Barnes, N. Gambale, “Locata: A new high accuracy indoor positioning system”, International Conference on Indoor Positioning and Indoor Navigation (IPIN), 15-17 September 2012, Zürich, Switzerland

[5] Hyoungmin So, Taikjin Lee, Sanghoon Jeon, Chongwon Kim, Changdon Kee, Taehee Kim and Sanguk Lee, “Implementation of a Vector-based Tracking Loop Receiver in a Pseudolite Navigation System”, ISSN 1424-8220

[6] E. Kaplan and C. J. Hegarty, “Understanding GPS: Principles and Applications”, Artech House, Norwood, Mass, USA, 2nd edition, 2006.

[7] GP2021, “GPS 12-Channel Correlator” datasheet, Zarlink Semiconductor

[8] M. Irsigler, G. W. Hein, B. Eissfeller, “Multipath Performance Analysis for Future GNSS Signals”, Institute of Geodesy and Navigation, University FAF Munich, Germany

[9] M. Irsigler and B. Eissfeller (2003): “Comparison of Multipath Mitigation Techniques with Consideration of Future Signal Structures”, Proceedings of the 16th International Technical Meeting of the Satellite Division of the Institute of Navigation, ION GPS/GNSS 2003, September 9-12, 2003, Portland, Oregon

[10] Hyoungmin So, Taikjin Lee, Sanghoon Jeon, Chongwon Kim, Changdon Kee, Taehee Kim, Sanguk Lee, “Implementation of a Vector-based Tracking Loop Receiver in a Pseudolite Navigation System” Sensors 2010, 10, 6324-6346

[11] Recommendation ITU-R P.1238-7 (02/2012), “Propagation data and prediction methods for the planning of indoor radiocommunication systems and radio local area networks in the frequency range 900 MHz to 100 GHz”

[12] IS-GPS-200G (09/2012) “Global positioning systems directorate systems engineering & integration interface specification” Navstar GPS Space Segment/Navigation User Interfaces.


- chapter 19 -

RADAR System


- chapter 20 -

Optical System


- chapter 21 -

Ultra Sound System


Performance Comparison between Frequency-Division and Code-Division Access Methods in an Ultrasonic LPS

Fernando J. Alvarez, Teodoro Aguilera, Jose A. Paredes
Sensory Systems Research Group
University of Extremadura, 06006 Badajoz, Spain
[email protected]

J. Augusto-Gonzalez, J. A. Rodríguez-Negro, C. Rodríguez-Alemparte
Galician Research and Development Center in Advanced Telecommunications - GRADIANT
36310 Vigo, Spain
[email protected]

Abstract—This work investigates the tolerance against Doppler shift of a Frequency-Division access method for an ultrasonic LPS, and compares this tolerance with that of a well-developed Code-Division access method based on the emission of 63-bit Kasami sequences. This study has been carried out with the help of a software simulator recently developed by the authors to analyze the performance of a broadband ultrasonic LPS, obtaining good agreement with real results.

Keywords. LPS; FDMA; CDMA; software simulator

I. INTRODUCTION

Broadband ultrasonic local positioning systems (ULPS) appeared about a decade ago with the aim of increasing the precision and robustness to noise of pioneering narrowband systems [1] [2]. These improvements were achieved through signal coding and matched-filtering detection techniques inherited from radar systems, among others. However, a new problem arose due to the relatively low propagation speed of ultrasonic signals in air: the Doppler shift caused by the receiver’s movement can make the encoded emissions completely unrecognizable to the receiver [3]. This problem has encouraged researchers in the field to look for different solutions, such as new Doppler-resilient encoding schemes based on polyphase codes or the search for new channel access methods.

In this work we investigate the tolerance against the Doppler shift caused by the receiver’s movement of the Frequency-Division access method employed by the ULPS recently proposed by the Galician Research and Development Center in Advanced Telecommunications [4]. This tolerance, quantified in terms of Time-Of-Flight (TOF) measurement precision, is compared with that of the Code-Division access method used by the ULPS already developed by the Sensory Systems Research Group at the University of Extremadura [5]. This study has been carried out with the help of a software simulator recently developed by the authors to analyze the performance of a broadband ULPS, obtaining good agreement with real results [6]. The high versatility of this simulator allows us to investigate separately the effect that various phenomena may have on the above-mentioned characteristics.
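The severity of the airborne-ultrasound Doppler problem follows from simple arithmetic: at these carrier frequencies, even walking speeds shift the signal by tens to hundreds of Hz (the speed of sound and carrier value below are illustrative, using the first-order approximation for a receiver moving directly towards a static beacon):

```python
# First-order Doppler shift: df = f * v / c for radial speed v.
C_AIR = 343.0          # speed of sound in air at ~20 C, m/s (assumed)

def doppler_shift(f_hz, v_ms, c=C_AIR):
    """Frequency shift seen by a receiver closing on the source at v_ms."""
    return f_hz * v_ms / c

for v in (1.0, 4.0):
    print(v, doppler_shift(24_000, v))   # ~70 Hz and ~280 Hz at 24 kHz
```

Shifts of this order are comparable to the signal bandwidths used by narrowband access methods, which is why encoded emissions can become unrecognizable to a matched receiver.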

II. ULPS DESCRIPTION

The simulated ULPS architecture is composed of four beacons placed at the upper corners of a 4 × 4 × 3 m³ anechoic room. For simplicity, all beacons have been assumed to be ideal, i.e., with a flat frequency response in the range of interest and an omnidirectional acoustic emission pattern. Only the attenuation caused by geometric divergence and atmospheric absorption has been considered to model the propagation channel. The system is assumed to be synchronous, with the receiver knowing the precise moment when the signals are emitted.

A. FDMA system features

In this system, the beacons emit pulses of constant frequency and 4.1 ms duration. The frequencies have been chosen as 21, 23, 25 and 27 kHz for beacons 1 to 4, respectively, and the pulses have been amplitude modulated to achieve a centralized bandwidth of about 320 Hz. With these values, the duration × total-used-bandwidth product is slightly above 30 for this system.

In the receiver, the signal is acquired at 96 kHz and then filtered with a bank of 4 band-pass IIR filters centered at the frequencies of interest. These filters, designed to separate the emissions from different beacons, have a bandwidth of 1 kHz, sharp cutoff, and high stopband attenuation (≈60 dB), introducing a group delay of about 0.9 ms that must be compensated later. Once the different frequencies have been separated, a five-stage process is conducted to obtain the TOF of the corresponding signal: (1) computation of the normalized average power by windowing the filtered signal; (2) smoothing the power signal by averaging; (3) removal of all pulses below an empirically determined threshold; (4) determination of the time of occurrence of the first local maximum; and finally (5) subtraction from this time of half the pulse duration as well as the filters' group delay.
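The five-stage TOF estimation can be sketched as follows (a simplified numpy-only rendition: the IIR filter bank is replaced by complex down-mixing plus a symmetric moving average, so no group delay needs to be compensated, and the global maximum above the threshold stands in for the first local maximum; all parameters are illustrative):

```python
import numpy as np

FS = 96_000            # receiver sampling rate, Hz
F0 = 25_000            # one beacon frequency, Hz
DUR = 0.0041           # pulse duration, s

def make_pulse(tof, total=0.05):
    """Amplitude-modulated (Hann) constant-frequency pulse delayed by `tof`."""
    t = np.arange(int(total * FS)) / FS
    n = int(DUR * FS)
    env = np.zeros_like(t)
    i0 = int(tof * FS)
    env[i0:i0 + n] = np.hanning(n)
    return env * np.sin(2 * np.pi * F0 * t)

def estimate_tof(x):
    t = np.arange(len(x)) / FS
    # (1)-(2) power envelope: down-mix, then smooth with a symmetric window
    base = x * np.exp(-2j * np.pi * F0 * t)
    w = int(0.001 * FS)
    power = np.abs(np.convolve(base, np.ones(w) / w, mode="same"))
    # (3)-(4) threshold, then strongest sample above it
    thr = 0.5 * power.max()
    idx = np.flatnonzero(power >= thr)
    peak = idx[np.argmax(power[idx])]
    # (5) the Hann envelope peaks mid-pulse: subtract half the duration
    return peak / FS - DUR / 2

rng = np.random.default_rng(1)
x = make_pulse(0.010) + 0.05 * rng.standard_normal(int(0.05 * FS))
print(estimate_tof(x))    # close to the true 10 ms time of flight
```

With a symmetric smoothing window the envelope peak sits at the pulse centre, which is why only half the pulse duration has to be subtracted here.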

978-1-4673-1954-6/12/$31.00 © 2012 IEEE


B. CDMA system features

In this system, the beacons emit four uncorrelated 63-bit Kasami sequences that have been BPSK modulated with two cycles of a 24 kHz carrier to obtain a centralized bandwidth of about 6.3 kHz. Taking into account that the duration of these emissions is 5.3 ms, the duration × total-used-bandwidth product is 33.4 in this case, which allows a fair comparison with the system described in the previous section.

In the receiver, the signal is again acquired at 96 kHz and then correlated with a bank of 4 filters matched to the corresponding emission patterns. After the correlation peaks have been obtained, their time of occurrence is calculated and the total duration of the emission is subtracted from it to determine the TOF.
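The matched-filter branch can be sketched in the same fashion (a 63-chip maximal-length sequence stands in for the actual Kasami codes, and the peak is timestamped at the start of the reception, which is equivalent to subtracting the emission duration from an end-of-reception timestamp):

```python
import numpy as np

FS = 96_000
FC = 24_000
SPC = FS // (FC // 2)   # 2 carrier cycles per chip -> 8 samples per chip

def mseq63():
    """63-chip maximal-length sequence (period-63 LFSR) as +/-1 chips."""
    reg = [1] * 6
    out = []
    for _ in range(63):
        out.append(reg[-1])
        reg = [reg[0] ^ reg[-1]] + reg[:-1]
    return 1.0 - 2.0 * np.array(out)

def bpsk_template():
    """BPSK-modulate the chips onto two cycles of the 24 kHz carrier each."""
    chips = np.repeat(mseq63(), SPC)
    t = np.arange(len(chips)) / FS
    return chips * np.sin(2 * np.pi * FC * t)   # ~5.25 ms emission

def estimate_tof(x, template):
    # Matched filter: the correlation peak index is the delay in samples.
    c = np.correlate(x, template, mode="valid")
    return np.argmax(np.abs(c)) / FS

tpl = bpsk_template()
rng = np.random.default_rng(2)
tof = 0.012
rx = np.zeros(int(0.05 * FS))
i0 = int(tof * FS)
rx[i0:i0 + len(tpl)] += tpl
rx += 0.2 * rng.standard_normal(rx.size)
print(estimate_tof(rx, tpl))   # close to the true 12 ms
```

The 63 chips at 8 samples per chip give a 504-sample emission, matching the 5.3 ms duration quoted in the text.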

III. RESULTS AND CONCLUSIONS

The tolerance against Doppler shift has been statistically analyzed by modeling the signals that would reach an ideal receiver placed at 49 different locations, uniformly distributed in a horizontal 3×3 m² grid at 1 m height. For each of these locations, 8 different velocity directions have been considered in a horizontal plane, with angular increments of 2π/8 rad between them. The detection process has been simulated a hundred times for every possible combination of receiver position and velocity vector, considering a signal-to-noise ratio of 3 dB in all cases. Figs. 1 and 2 show the cumulative error in TOF measurement for all beacons and both systems, for velocity magnitudes of 1 m/s and 4 m/s, respectively.

Fig. 1. Cumulative error in TOF determination for a moving receiver with |v| = 1 m/s, in the FDMA system (solid lines) and the CDMA system (asterisks).

As can be seen in the first figure, the performance of the CDMA system is better at low velocity magnitudes: 98% of the TOF errors are below 0.025 ms with this system when the receiver's velocity magnitude is 1 m/s, whereas this error increases up to 0.45 ms to achieve the same percentage with the FDMA system. However, as soon as the velocity magnitude increases above 2 m/s, the FDMA system

Fig. 2. Cumulative error in TOF determination for a moving receiver with |v| = 4 m/s, in the FDMA system (solid lines) and the CDMA system (asterisks).

clearly outperforms the CDMA system, as represented in Fig. 2, where we can see that the performance of the FDMA system is similar to that obtained in the previous case, whereas the performance of the CDMA system deteriorates significantly, with less than 60% of the TOF errors below 1 ms.

It is also important to note that only a signal-to-noise ratio of 3 dB has been considered in these simulations. The performance of the FDMA system is far more sensitive to this parameter than that of the CDMA system, which can provide results similar to those presented in this work with signal-to-noise ratios as low as −9 dB. The main conclusion that can be drawn from this work is that Frequency-Division seems to be the best channel access method when dealing with high velocities and high SNR, whereas Code-Division would be the best option when working with limited velocity magnitudes. Further studies are necessary in order to support this conclusion in more complex environments.

ACKNOWLEDGMENT

This work was supported by the Spanish Ministry of Economy and Competitiveness (LORIS project, TIN2012-38080-C04-02), and the Regional Government of Extremadura (GR10097).

REFERENCES

[1] M. Hazas and A. Ward, “A novel broadband ultrasonic location system,” in Proc. of the Fourth International Conference on Ubiquitous Computing (UbiComp 2002), Goteborg, Sweden, September 2002, pp. 264–280.
[2] J. Urena, A. Hernandez, J. M. Villadangos, M. Mazo, J. C. García, J. J. García, F. J. Alvarez, C. de Marziani, M. C. Perez, J. A. Jimenez, A. Jimenez, and F. Seco, “Advanced sensorial system for an acoustic LPS,” Microprocessors and Microsystems, vol. 31, pp. 393–401, 2007.
[3] S. Holm, O. B. Hovind, S. Rostad, and R. Holm, “Indoors data communications using airborne ultrasound,” in Proc. IEEE Int. Conf. Acoustics, Speech, Sign. Proc., Philadelphia, PA, USA, March 2005.
[4] http://www.gradiant.org/.
[5] http://www.unex.es/investigacion/grupos/giss.
[6] F. J. Alvarez, T. Aguilera, J. A. Fernandez, J. A. Moreno, and A. Gordillo, “Analysis of the performance of an ultrasonic local positioning system based on the emission of Kasami codes,” in Proc. of the IPIN 2010, Zurich, Switzerland, September 2010.


- chapter 22 -

IMU or Foot-Mounted Navigation


Fusion methods for INS using neural networks for precision navigation

Lenka Tejmlova, Jiri Sebesta
Department of Radio Electronics, FEEC
Brno University of Technology
Brno, Czech Republic

[email protected]

Abstract— The system is based on an inertial measurement unit (IMU) consisting of a three-axis gyroscope, a three-axis accelerometer and a three-axis magnetometer. These sensors capture the current state of motion, from which the IMU's movements can be determined. A first improvement is achieved when the static errors of the sensors are subtracted. Further improvement involves the removal (or strong suppression) of the dynamic error of the system. This task is difficult because there are no models that define the dynamic, ever-changing error of a system precisely; artificial neural networks (ANN) offer one possibility. The ANN inputs are IMU data after static-error subtraction. The ANN outputs are the dynamic positioning errors: the shifts in x, y and z from the position calculated by the IMU. After successful training, such a network gives correct outputs and improves the accuracy of the inertial positioning system.

Keywords- inertial measurement unit; inertial navigation system; data processing by neural network; precision navigation

I. INTRODUCTION
At present, a common research theme is high-precision positioning. Systems that provide high positioning accuracy with errors of centimeters or millimeters are requested in automotive, rail transportation and aviation and, among others, in indoor industrial applications or simply for human needs.

At first, it is necessary to use a good algorithm to transfer all sensor frames to one inertial frame with one origin and corresponding axes. Then it is possible to subtract the gravitational acceleration g and to determine the direction of movement and the movement vector.

With knowledge of these data, the trajectory and motion can be reconstructed. That gives a first approximation of the position and state at the appropriate time; however, such approximations are not very accurate [1]. To improve the results of the calculations and to estimate the position and state in time more precisely, the implementation of artificial neural networks (ANN) was chosen.

II. SYSTEM

A. Initial position
The initial state and position must be defined before data processing. Otherwise, the initial position is zero and every motion is defined from that point. In that case, it is necessary to start measuring when the sensor axes correspond to the Cartesian axes of the space where the measurement is performed.

B. Sensors
All sensors are implemented as chips on one board with corresponding axis directions. The gyro sensor has a 16-bit reading per axis and measures in a range of ±2000 °/s. The accelerometer has a 12-bit reading per axis and its range is ±4 g. The magnetometer's range is ±4 gauss, with the same resolution as the accelerometer [2].

C. Transfer of coordinate system
To reconstruct the trajectory of a device that is continuously changing position and also rotating, it is necessary to transform the old frame into the new, rotated one. The most intuitive way to describe an attitude is with Euler angles. The entire rotation is split into three consecutive rotations [3]. In practice, it is a transformation of a vector x = (x, y, z) from one set of axes, β, to a different set of axes, α. In our system, we need to make the transformation from the body frame (B) of the device to the inertial frame (I), the frame of the space. The rotation matrix is shown in (1).

R_B^I(Ψ, Θ, Φ) = R1(Φ, v2) · R2(Θ, v1) · R3(Ψ, B),        (1)

where v1 and v2 are auxiliary frames after individual rotations by angle –Ψ (yaw, around Z axis) and -Θ (pitch, around Y axis).
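Such a body-to-inertial transformation can be sketched as three elementary rotations (the ZYX order and the sign conventions below are an assumption; angle names follow the text):

```python
import numpy as np

def rot_x(phi):    # roll, around the X axis
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(theta):  # pitch, around the Y axis
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(psi):    # yaw, around the Z axis
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def body_to_inertial(psi, theta, phi):
    # three consecutive elementary rotations, as in (1)
    return rot_z(psi) @ rot_y(theta) @ rot_x(phi)

x_inertial = body_to_inertial(np.pi / 2, 0.0, 0.0) @ np.array([1.0, 0.0, 0.0])
# a 90-degree yaw maps the body X axis onto the inertial Y axis
```

The composed matrix is orthonormal, so the inverse transformation (inertial to body) is simply its transpose.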

D. Artificial neural network
The network is trained with training pairs so that it knows how to react to given inputs. Thanks to this training process, it can independently decide what the best results are. It also reacts to unknown inputs; how the structure will look depends on the complexity of the problem [4].

III. PRINCIPLE OF THE SYSTEM
The ANN should repair the deviations in positioning. The outputs from the ANN (corrections) are compared to the error output of the Kalman filter. When the sensor error is lower than the correction vectors, calibration of the measurement is needed. A second, not yet used ANN is in the training process during positioning. When calibration of the unit does not work, the old ANN's weights and biases are overwritten with the latest values from the additionally trained ANN.


A. Data collection
The accelerometer provides data at a rate of 100 Hz, the magnetometer at 220 Hz and the gyroscope at 95 Hz. Data from the sensors are sent in groups of 9 values, as shown in (3). The scanning speed is adjustable in registers, as is the resolution of the sensors. Built-in filters are not used in this application.

B. Trajectory reconstruction
The state of the device is defined. During the first seconds, the tilt of the accelerometer is calculated: when the device does not move, the value of 1 g is subtracted from the acceleration vector, and the relative part in each axis corresponds to the tilt angles; see (2) [4].

cos(angle_a(n)) = a_x(n) / |A(n)|,
|A(n)| = sqrt(a_x(n)² + a_y(n)² + a_z(n)²),        (2)
angle_x(n) = 0.5 · (rot_rate_x(n−1) + rot_rate_x(n)) · t

Once the angle is known, subsequent data can be processed with the rotations and translations. The next step of data processing is to pass the data through Kalman filters, which removes residual sensor oscillations while leaving the sum of the acceleration vectors unchanged.
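Under our reading of (2), the tilt-from-gravity computation and the trapezoidal integration of the gyro rate can be sketched as follows (variable names are ours; `t` is the sample period):

```python
import numpy as np

def tilt_angle(a):
    """Tilt of the x axis away from the measured gravity vector (rad)."""
    ax, ay, az = a
    return np.arccos(ax / np.sqrt(ax**2 + ay**2 + az**2))

def integrate_rate(rates, t):
    """Trapezoidal integration of gyro rates: 0.5*(w[n-1] + w[n])*t per step."""
    rates = np.asarray(rates, dtype=float)
    return float(np.sum(0.5 * (rates[:-1] + rates[1:]) * t))

print(np.degrees(tilt_angle([0.0, 0.0, 1.0])))  # ~90: x axis is horizontal
print(integrate_rate([0, 10, 10, 0], 0.01))     # ~0.2 (deg, if rates in deg/s)
```

The trapezoidal rule matches the 0.5·(rate(n−1) + rate(n))·t term in (2) and is what ties successive gyro samples into an angle increment.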

The variances between the adjusted data and outputs from sensors (after subtracting the static error – the sensor offset) are then sent to the ANN.

C. Neural network processing
The ANN takes the dynamic errors of the sensors and its output gives the variance between the real and calculated position. The input therefore has 9 variables, as shown in (3). The network has 2 hidden layers and a 9-7-5-3 structure.

X_act = [g_x g_y g_z a_x a_y a_z m_x m_y m_z].        (3)
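The feed-forward structure described above (read here as 9 inputs, hidden layers of 7 and 5 units, 3 outputs) can be sketched as follows (untrained random weights and a tanh/linear activation choice are ours, for illustration only; in the paper the training targets are the position-error vectors):

```python
import numpy as np

LAYERS = [9, 7, 5, 3]   # inputs, two hidden layers, (dx, dy, dz) output

def init_net(rng):
    """Random small weights and zero biases for each layer."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(LAYERS[:-1], LAYERS[1:])]

def forward(net, x):
    """Forward pass: tanh hidden layers, linear output layer."""
    for i, (W, b) in enumerate(net):
        x = x @ W + b
        if i < len(net) - 1:
            x = np.tanh(x)
    return x

rng = np.random.default_rng(0)
net = init_net(rng)
x = rng.standard_normal(9)        # one [gyro(3), accel(3), mag(3)] sample
correction = forward(net, x)      # predicted (dx, dy, dz) position error
```

Training such a network amounts to fitting `forward` to the logged position-error vectors, after which the three outputs are subtracted from the IMU-computed position.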

IV. MEASUREMENTS
Measurements were carried out in the building of Brno University of Technology. Variances between the filtered and unfiltered data were sent to the ANN.

A. Experimental measurements
The first part of the measurements proved that the algorithm we used is correct. Tests such as rotation around one of the axes without acceleration, and movements without rotation, confirmed this through the characteristics of the results.

B. Verification of the method
The neural network was first trained to achieve the state in which the position does not change. After performing the body-to-inertial rotation, elimination of 1 g and Kalman filtering, we obtained the input data and the neural network could be trained. We wanted to achieve a maximal mean square error under 1e-3 for this model; the result is shown in Figure 1a.

The second situation was a combination of the constant-location (CL), constant-velocity (CV) and constant-acceleration (CA) models. The device was placed in a lift and went through 6 floors, first down, then up and then back down. The comparison of the reconstructed trajectories is clear from Figure 1b.

Figure 1. Trajectory of the static (a) and moving (b) IMU (axes x, y, z in meters; traces: sensors only, ideal, and with ANN).

C. Discussion
Both figures show that the ANN reduces the dispersion of the computed device positions. In the first case, the position given by the sensors varied over a range of 20 mm in 42 s; the ANN outputs corrected the position to a range of 8 mm. In Figure 1b the ANN had the same effect. The position based on the sensors drifted most in the x and z axes after 140 s. After the ANN processing, the position was determined with errors of up to 0.22 m in the x axis, 0.6 m in the y axis and 3 m in the z axis. The magnetometer data in the second measurement were distorted by magnetic materials in the data-acquisition area.

V. CONCLUSIONS

From the results we can deduce that the neural network improves the determination of the position. Nevertheless, when the IMU does not remain motionless in place, the ANN training is time consuming. The accuracy of our sensor measurements is very poor due to biases and noise, and even after Kalman filtering the computed velocity does not correspond to the real velocity during the measurement.

ACKNOWLEDGMENT

This work was supported by the Czech Science Foundation project No. 13-38735S, "Research into wireless channels for intra-vehicle communication and positioning". The support of the internal BUT grant project FEKT-S-11-12 (MOBYS) and of the project SIX, CZ.1.05/2.1.00/03.0072, is also gratefully acknowledged.

REFERENCES

[1] M. S. Grewal, A. P. Andrews, and C. G. Bartone, Global Navigation Satellite Systems, Inertial Navigation, and Integration, 3rd ed. Hoboken: John Wiley, 2013. ISBN 978-1-118-44700-0.
[2] J. Borenstein, H. R. Everett, L. Feng, and D. Wehe, "Mobile robot positioning: sensors and techniques," J. Robot. Syst., vol. 14, no. 4, pp. 231–249, 1997.
[3] CH Robotics LLC, "Using Accelerometers to Estimate Position and Velocity," October 2012 [cited 2013-07-15], http://www.chrobotics.com/library/accel-position-velocity
[4] B. H. Kaygisiz, A. M. Erkmen, and I. Erkmen, "GPS/INS enhancement using neural networks for autonomous ground vehicle applications," in Proc. 2003 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, vol. 3, pp. 3763–3768, 27-31 Oct. 2003.


2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31 October 2013

Stance Phase Detection using Hidden Markov Model in Various Motions

Ho Jin Ju, Min Su Lee, Chan Gook Park
Navigation & Electronic System Laboratory, Seoul National University, Seoul, South Korea
[email protected], [email protected], [email protected]

Abstract—The first step of a Pedestrian Dead-Reckoning (PDR) system with a foot-mounted IMU is stance phase detection, which finds the portion of the gait in which the shoe is attached to the ground. One of the most popular methods for step length estimation is the Zero velocity UPdaTe (ZUPT), which offers high accuracy and high robustness across users. Since ZUPT requires the stance phase information, accurate detection is the basis for exact position estimation in PDR. In general, the stance phase is detected from the accelerometer or gyroscope signals of a foot-mounted Inertial Measurement Unit (IMU), but the detection results have low reliability across various motions. In this paper, a new stance phase detection algorithm is proposed for exact estimation of a pedestrian's position even in various motions. To detect the stance phase, the following three steps are performed. First, we detect the stance phase from the variance of the accelerometer and gyroscope signals. In complex gait motion, using only the accelerometer and gyro variances causes false detections during short constant-velocity periods. Secondly, false detections are removed by a glitch-removal algorithm, but some missed detections remain. To solve this problem, a Hidden Markov Model (HMM) is applied to the missed detections as a last step. The Markov model is constructed from sensor signals, which makes the stance phase detection more dependable. The experimental results show that this algorithm detects the stance phase even in various motions. Therefore, the algorithm can be used for indoor navigation systems.

Keywords—pedestrian dead-reckoning, stance phase, zero velocity update, hidden Markov model

I. INTRODUCTION

Pedestrian Dead Reckoning (PDR), which uses only its own sensors, without the support of outside infrastructure, to locate an object's position, is a very useful system in many situations. PDR generally supposes that a pedestrian changes position through the movement of their steps. Based on this idea, PDR can estimate the position of a pedestrian by observing the movement of steps from the initial position along the heading direction. PDR algorithms using an Inertial Measurement Unit (IMU) are proposed for normal walking in [1-3]. A more advanced PDR system that estimates position in various walking motions is required for pedestrians, fire fighters, soldiers and so on.

PDR is composed of a step detection algorithm, step length estimation, and azimuth estimation. In summary, PDR calculates the travelled distance by detecting each step of the pedestrian and estimating the stride between steps.

Accurate step detection is essential for exact position estimates in PDR. Even if the step length is estimated accurately, the error of the estimated position gradually increases because of inaccurate step detection. Existing step detection techniques are represented by the peak detection method, the zero crossing detection method and the stance phase detection method; they use the outputs of accelerometers and gyros.

In this paper, we focus on stance phase detection with a Hidden Markov Model. When developing a PDR algorithm, the accuracy of stance phase detection is an important subject. During the stance phase, the speed of the shoe is zero because it is stuck to the ground. Using this walking characteristic, ZUPT compensates the accumulating errors of the IMU sensors, so it is necessary to use the ZUPT method in step length estimation. Accurate stance phase detection is required for exact estimates of the pedestrian's position when the pedestrian walks in various motions. Therefore, we propose an adaptive stance phase detection algorithm that considers pedestrian motion.

II. SYSTEM DESCRIPTION

The IMU is attached to the shoe with consideration of the pedestrian's various movements, as can be seen in Fig. 1. We tested the stance phase detection algorithm with the Xsens Xbus kit.

978-1-4673-1954-6/12/$31.00 © 2012 IEEE



Figure 1. Foot-mounted IMU
Figure 2. Body frame of the foot-mounted IMU
Figure 3. Step phase classification (stance phase vs. swing phase)

The kit is composed of 3-axis MEMS accelerometers, gyros, magnetometer sensors and a Bluetooth module. The body frame of the IMU is set as shown in Fig. 2; the dominant rotation axis is then the y-axis when the pedestrian walks or runs in various motions.

III. STANCE PHASE DETECTION ALGORITHM

Gait motion can be roughly divided into a stance phase and a swing phase, as shown in Fig. 3. A stance phase detection algorithm detects the section in which the shoe is attached to the ground, finding the stance and swing phases during walking using signal features. Most algorithms in the literature determine the stance phase based on gyroscope and accelerometer signals. In [4], the stance phase is determined using norms of the gyroscope outputs. In [5], the stance phase is detected by the local variance of the acceleration. In [6], the stance phase detection algorithm uses variances of simply modified signal features. In [4,5,6], some threshold values must be assigned. If the threshold values are too small, the stance phase cannot be detected when a pedestrian is running or walking quickly. On the other hand, if the threshold values are too big, the stance phase cannot be detected when a pedestrian is walking slowly, crawling, or descending or ascending stairs.

[7] is another method for stance phase detection. In [7], only one gyroscope value is used, based on an HMM; it works well for walking and running. In [8], accelerometer signals are used based on the Single Vector Magnitude (SVM); it works well for walking and for descending and ascending stairs. But [7,8] also fail to detect the stance phase in various motions.

In order to detect the stance phase in various motions, we propose an adaptive algorithm that combines the strengths and makes up for the weaknesses of [4,5,6,7,8]. First of all, for detecting slow walking, crawling and random walking, we use simply modified signal features and glitch removal. If we use only this method, we cannot detect the stance phase when the pedestrian is running or walking quickly, because the values exceed the thresholds. Therefore, in addition, we use another method to detect the stance phase during running and quick walking; for this we use an HMM similar to [7].

Figure 4. Block diagram of the stance phase detection algorithm

A. Stance phase detection algorithm using glitch removal

Fig. 4 represents a block diagram of step A. In this paper, the stance phase is detected using simply modified signal features and glitch removal.

We use the x- and z-axis accelerometer outputs, which change significantly when the pedestrian walks. First of all, to detect the stance phase, we use three modified accelerometer signals: Energy, Product and Sum. As given in Eq. (1), Energy is the root square sum of the x- and z-axis accelerometer outputs, Product is their multiplication and Sum is their summation. Then the local variances of Energy, Product and Sum are computed. If each local variance is under a threshold, the corresponding condition is 1, as given in Eq. (2). In [6], the stance phase is detected by these three conditions. But since the three conditions only use the x- and z-axis accelerometer outputs, the stance phase cannot be detected during side walking, crawling, descending stairs, ascending stairs, etc.

Energy = sqrt(a_x^2 + a_z^2)
Product = a_x · a_z
Sum = a_x + a_z        (1)

Condition_E = 1 if var(E_{k−14} : E_k) < th_E, 0 otherwise
Condition_P = 1 if var(P_{k−14} : P_k) < th_P, 0 otherwise
Condition_S = 1 if var(S_{k−14} : S_k) < th_S, 0 otherwise        (2)

In order to improve the performance of stance phase detection, we use gyroscope outputs in addition to the accelerometer outputs. As shown in Eqs. (3) and (4), the conditions are determined by the root square sum of the x- and y-axis gyroscope outputs. If w_k is under a threshold, the condition is 1, as given in Eq. (3). If the local variance of w_k is under a threshold, the condition is 1, as given in Eq. (4).

Figure 5. Stance phase compensated in the swing phase

w_k = sqrt(w_kx^2 + w_ky^2)
Condition_w = 1 if w_k < th_w, 0 otherwise        (3)

Condition_vw = 1 if var(w_{k−14} : w_k) < th_vw, 0 otherwise        (4)

If all conditions are 1 at the same time, that point can be considered part of the stance phase. But this is not enough to detect the stance phase accurately, because of the limitations of using thresholds. Even if the swing phase is detected incorrectly during the stance phase, it is only a slight problem, because the actual moving distance is nearly zero. But if the stance phase is detected incorrectly during the swing phase, it causes a big problem, because the actual velocity is not nearly zero. Therefore, in order to eliminate false detections during the swing phase, we use glitch removal. According to several experiments, the stance phase time of one step is almost always over 150 ms when a pedestrian is walking. If the stance phase time of a step is under 150 ms, the stance phase of that step is considered a false detection and glitch removal removes it. After this process, we can get an accurate stance phase during slow walking, crawling, random walking, and descending and ascending stairs.

The results for crawling are shown in Fig. 5. The green line is the y-axis gyroscope output; the black line is the step phase (1 is stance phase, 0 is swing phase). The figure on the left shows the result without glitch removal, and the figure on the right shows the result with glitch removal.
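The glitch-removal rule (discard stance segments shorter than 150 ms) can be sketched as follows; the sampling rate `fs` is an assumed value, since the paper does not state it:

```python
def remove_glitches(stance, fs=100, min_stance_ms=150):
    """Zero out stance-phase runs shorter than min_stance_ms.

    stance: sequence of per-sample 0/1 stance flags.
    fs: sampling rate in Hz (fs=100 is an assumption).
    Returns a new list with too-short stance runs removed.
    """
    min_len = int(min_stance_ms * fs / 1000)   # 150 ms -> 15 samples at 100 Hz
    out = list(stance)
    i, n = 0, len(out)
    while i < n:
        if out[i] == 1:
            j = i
            while j < n and out[j] == 1:       # find the end of this stance run
                j += 1
            if j - i < min_len:                # too short: a glitch, not stance
                for t in range(i, j):
                    out[t] = 0
            i = j
        else:
            i += 1
    return out
```

A 3-sample glitch is removed while a 20-sample (200 ms) stance run survives, matching the behaviour illustrated in Fig. 5.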

B. Stance phase detection algorithm using HMM

First of all, we divide the y-axis gyroscope value into three regions based on the thresholds proposed by [7]. As shown in Fig. 6, if 3 successive samples of the y-axis gyroscope value are located in the same region, the output Y is set to the number of that region. The blue line is the y-axis gyroscope value. Generally, when a pedestrian walks one step, Y is 1→2→3→2→1. When a pedestrian goes down stairs, Y is 1→2→3→2→3→1. In addition, Y changes in various ways with pedestrian motion.

Figure 6. The region classification
Figure 7. Example of the state transitions during walking and descending stairs
Figure 8. State transitions

After determining Y, we assume a state transition model which represents the walking cycle. The state transitions are related to the foot movement as shown in Fig. 7; the red box marks one step. When the pedestrian is walking, the output Y is almost always 1→2→3→2→1, and we assume that the state transition is 1→2→3→4→1. When the pedestrian is descending stairs, the output Y is almost always 1→2→3→2→3→1, and we assume that the state transition X is 1→2→3→4→5→1. The walking states are modeled by considering various motions of the pedestrian, as shown in Fig. 8. The most basic forms of the state transition X are 1→2→3→4→1 and 1→2→3→4→5→1. But sometimes a state is missed or added, because X also changes in various ways with pedestrian motion. Generally, the number of states in a transition X is 4 or 5, so we always check 7 states after a stance phase when detecting the next stance phase.

Figure 9. Example of stance phase detection during running

For using the HMM, we must set the state transition probability distribution A, the observation symbol probability distribution B, and the initial state distribution π. When a pedestrian is walking, some states can be missed or added. Considering this, we determined A, B and π by using the method proposed by [7], as shown in Eq. (5).

A = [ 0.09 0.09 0.10 0.25 0.40
      0.90 0.01 0    0.25 0.40
      0.01 0.90 0.01 0    0
      0    0    0.70 0.25 0.10
      0    0    0.01 0.25 0.10 ]

B = [ 1 0 0 0 0
      0 1 0 1 0
      0 0 1 0 1 ]

π = [1 0 0 0 0]        (5)

Once A, B and π are defined, we can estimate the state transition X from the segmented output Y by using the Viterbi algorithm [9]. After this process, we can estimate the state transition X. When the state changes 4→2 or 5→2, state 1 is always missed between the two numbers; therefore, we insert a stance phase between them. During the stance phase, the gyroscope outputs are extremely small, so the point with the smallest norm of the gyroscope outputs is always located in the stance phase. We therefore assume that the 3 samples centered at that point are the stance phase. Fig. 9 shows an example of stance phase detection during running; the red box marks the stance phase produced by the HMM.

IV. RESULTS OF STANCE PHASE DETECTION

To compare the performance of each method, 5 testers walked 20 steps in each motion. Table 1 shows the experimental results, which are the sums over all five testers' 20 steps, so there are 100 steps in each motion. When the testers are walking, all methods detect the stance phase with high accuracy. For the other motions, the proposed algorithm shows good results compared to the other methods.

V. CONCLUSIONS

In this paper, we proposed an adaptive stance phase detection algorithm which can be used in various situations. The algorithm is composed of two main steps. In the first step, we use simply modified signal features and glitch removal for detecting slow walking, crawling and random walking. In the second step, we use an HMM to detect the stance phase during running and quick walking. The experimental results show that this algorithm can detect the stance phase in various motions. It can be widely used for pedestrians, fire fighters, soldiers, etc.

Table 1. The results of stance phase detection (experimental results over 100 steps of each motion)

ACKNOWLEDGMENT

This work was supported by the IT R&D program of MSIP/KEIT [10044844, Development of ODM-interactive Software Technology supporting Live-Virtual Soldier Exercises].

REFERENCES

[1] S. Godha and G. Lachapelle, "Foot mounted inertial system for pedestrian navigation," Meas. Sci. Technol., vol. 19, pp. 1–9, 2008.
[2] O. Bebek, M. A. Suster, S. Rajgopal, M. J. Fu, X. Huang, M. C. Cavusoglu, D. J. Young, M. Mehregany, A. J. van den Bogert, and C. H. Mastrangelo, "Personal navigation via high-resolution gait-corrected inertial measurement units," IEEE Trans. Instrum. Meas., 2010.
[3] A. M. Sabatini, C. Martelloni, S. Scapellato, and F. Cavallo, "Assessment of walking features from foot inertial sensing," IEEE Trans. Biomed. Eng., vol. 52, no. 3, pp. 486–494, 2005.
[4] L. Ojeda and J. Borenstein, "Non-GPS navigation for security personnel and first responders," J. Navigation, 2007, pp. 391–407.
[5] A. R. Jimenez, F. Seco, J. C. Prieto, and J. Guevara, "Indoor pedestrian navigation using an INS/EKF framework for yaw drift reduction and a foot-mounted IMU," WPNC'10, Dresden, Germany, March 11–12, 2010.
[6] M. S. Lee, C. G. Park, and C. W. Shim, "A movement-classification algorithm for pedestrians using a foot-mounted IMU," in Proc. 2012 International Technical Meeting of The Institute of Navigation, Newport Beach, CA, 2012, pp. 922–927.
[7] S. K. Park and Y. S. Suh, "A zero velocity detection algorithm using inertial sensors for pedestrian navigation systems," Sensors, vol. 10, pp. 9163–9178, 2010.
[8] J.-S. Wang et al., "Walking pattern classification and walking distance estimation algorithms using gait phase information," IEEE Trans. Biomed. Eng., vol. 59, no. 10, pp. 2884–2892, 2012.
[9] G. D. Forney Jr., "The Viterbi algorithm," Proceedings of the IEEE, vol. 61, no. 3, pp. 268–278, 1973.


Standing still with inertial navigation

John-Olof Nilsson and Peter Handel

Department of Signal Processing, ACCESS Linnaeus Centre, KTH Royal Institute of Technology, Stockholm, Sweden

Abstract—The possibility to detect complete standstill sets foot-mounted inertial navigation aside from other heuristic pedestrian dead reckoning systems. However, traditional zero-velocity updates (ZUPTs) do not ensure that the system is actually static; a drift will occur. To eliminate the drift, we suggest using a modified mechanization which locks certain states under certain conditions. Experimental data are used to demonstrate the effectiveness of the approach.

I. INTRODUCTION

Just like we spend most of our time indoors, we spend most of our time being still. Consequently, an important attribute of any pedestrian dead reckoning system is that it gives good performance during standstill. Due to its mounting point and implicit dynamic assumptions, ZUPT-aided foot-mounted inertial navigation works well in such situations, providing an attribute that sets it aside from other heuristic (step counting) pedestrian dead reckoning systems. However, the zero-velocity updates, as they are typically implemented, will not make an inertial navigation system stand still; a drift will occur. Naively, this may be avoided by stopping the integration in the inertial navigation simultaneously with the ZUPTs. However, we argue that due to modelling errors this is not advisable. Instead, in this short paper we suggest an alternative mechanization which locks certain states during certain (standstill) conditions. Experimental data are used to demonstrate that this stand-still mechanization improves performance in situations where a user is standing still, while not influencing the behaviour of the system during normal gait.

II. FOOT-MOUNTED INERTIAL NAVIGATION

Foot-mounted ZUPT-aided inertial navigation consists of foot-mounted inertial sensors, inertial mechanization equations, a zero-velocity detector, and a complementary Kalman filter based on a deviation (error) model to fuse the information. In the simplest form, the mechanization equations are

[p_k; v_k; q_k] = [p_{k−1} + v_{k−1} dt;  v_{k−1} + (q_{k−1} f_k q*_{k−1} − g) dt;  Ω(ω_k dt) q_{k−1}]   (1)

where k is a time index, dt is the time difference between measurement instances, p_k is the position, v_k is the velocity, f_k is the specific force, g = [0 0 g]^T is the gravity, [·]^T denotes the transpose operation, and ω_k is the angular rate (all in R³). Further, q_k is the quaternion describing the orientation of the system, the triple product q_{k−1} f_k q*_{k−1} denotes the rotation of f_k by q_{k−1}, and Ω(·) is the quaternion update matrix. For analytical convenience we will interchangeably represent the orientation q_k with the equivalent Euler angles θ_k = [φ_k θ_k ψ_k]^T (roll, pitch, yaw) or the rotation matrix R_k.

The mechanization equations, together with the measurements of the specific force f_k and the angular rates ω_k provided by the inertial sensors, are used to propagate the position p_k, velocity v_k and orientation q_k state estimates. Unfortunately, due to its integrative nature, small errors in f_k and ω_k accumulate, giving rapidly growing state estimation errors. Fortunately, these errors can be modeled and estimated with ZUPTs. A first-order deviation (error) model of (1) is given by

[δp_k; δv_k; δθ_k] = [I, I dt, 0;  0, I, [q_{k−1} f_k q*_{k−1}]× dt;  0, 0, I] [δp_{k−1}; δv_{k−1}; δθ_{k−1}]   (2)

where δ(·)_k are the error states, I and 0 are 3×3 identity and zero matrices, and [·]× is the cross-product matrix. Together with statistical models for the errors in f_k and ω_k, (2) is used to propagate the statistics (covariances) of the error states. To estimate the error states, stationary time instances are detected based on the condition Z({f_κ, ω_κ}_{κ∈W_k}) < γ_Z, where Z(·) is some zero-velocity test statistic, {f_κ, ω_κ}_{κ∈W_k} are the inertial measurements over some time window W_k, and γ_Z is a zero-velocity detection threshold. See [1] for further details. The implied zero velocities are used as pseudo-measurements

y_k = v_k   ∀k : Z({f_κ, ω_κ}_{κ∈W_k}) < γ_Z   (3)

which are modeled in terms of the error states as

y_k = H [δp_k δv_k δθ_k]^T + n_k   (4)

where H = [0 I 0] is the measurement matrix and n_k is a measurement noise, i.e. y_k = δv_k + n_k. Given the error model (2) and the measurement model (4), the measurements (3) can be used to estimate the error states with a complementary Kalman type of filter. See [2,3] for further details.
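As an illustration, one propagation step of the mechanization in Eq. (1) might be sketched as follows. The quaternion convention (body-to-navigation Hamilton quaternion, incremental rotation applied by right-multiplication, Ω(·) realised as a small-angle quaternion product) is an assumption; it is not taken from the paper:

```python
import numpy as np

def quat_mult(q, r):
    """Hamilton product of two quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = q; w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def rotate(q, v):
    """Rotate vector v by quaternion q: the triple product q v q* of Eq. (1)."""
    qv = np.concatenate(([0.0], v))
    qc = q * np.array([1.0, -1.0, -1.0, -1.0])      # conjugate q*
    return quat_mult(quat_mult(q, qv), qc)[1:]

def omega_update(q, w, dt):
    """Stand-in for Omega(w dt) q: rotate q by the small angle w dt."""
    ang = np.linalg.norm(w) * dt
    if ang < 1e-12:
        return q
    axis = w / np.linalg.norm(w)
    dq = np.concatenate(([np.cos(ang / 2)], np.sin(ang / 2) * axis))
    return quat_mult(q, dq)

def mechanize(p, v, q, f, w, dt, g=np.array([0.0, 0.0, 9.81])):
    """One step of Eq. (1): propagate position, velocity and orientation."""
    p_new = p + v * dt
    v_new = v + (rotate(q, f) - g) * dt
    q_new = omega_update(q, w, dt)
    return p_new, v_new, q_new
```

For a level, stationary sensor the specific force cancels gravity, so only the position integrates forward; this is the integrative nature that makes small errors in f_k and ω_k accumulate.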

III. INERTIAL NAVIGATION DURING STANDSTILL

During standstill, obviously ZUPTs are in effect and ve-locity and roll and pitch are observable [4]. However, thisdoes not imply that the system is standing still, neither in aphysical sense nor in a state estimation sense. The constraintthat the ZUPTs add is that the system has zero-mean velocitywith a certain distribution, i.e. vk− δvk = nk. Unfortunately,measurement noise will enter the system in both (1) and (2)(since fk and ωk are replaced with their measured values)causing the system to drift even if consecutive ZUPTs areapplied. Consequently, we would like to make the system standstill to avoid this. However, in practice it has been shown thatthe best threshold γZ is far above the statistic’s noise floor [5]and the system will not necessarily be perfectly stationarywhen ZUPTs are applied [6,7].


To remedy the drift during standstill, the system has to be locked somehow. However, as seen in the discussion above, this has to be done with care. First of all, as seen in [6], a different condition than Z({f_κ, ω_κ}_{κ∈W_k}) < γ_Z has to be used to lock the system. The same detection framework may be used, but with a lower threshold and explicit modelling of the gyro bias. Consequently, we will assume that a corresponding stationarity detector is employed with a threshold γ_S set just above the statistic's noise floor. If the condition holds, the inertial navigation is to be locked to mitigate drift. However, to avoid numerical problems and to avoid introducing unnecessary modelling errors, we only wish to introduce a minimum of locking. We are concerned about the drift in the position states and in the heading; the remaining states are observable. Therefore, only the position and the heading should be locked. Locking the position is trivial, while locking only the heading is not. The problem is that we would typically not use a zero-order-hold assumption for the rotation, and there is no clear-cut heading change separable from Ω(ω_k dt) q_{k−1} in (1). However, since the angular rates can be assumed small when we want to apply a heading lock, this can be achieved by subtracting the angular-rate component orthogonal to the horizontal plane, which primarily affects the heading. The angular rates in the navigation coordinate system are R_{k−1} ω_k, and consequently the angular rates with this component subtracted are R_{k−1} ω_k − diag([0 0 1]) R_{k−1} ω_k, where diag([0 0 1]) is the diagonal matrix with [0 0 1] on the diagonal. Transforming back to the body coordinate system (multiplying with (R_{k−1})^T from the left) gives the desired quantities

ω_k − (R_{k−1})^T diag([0 0 1]) R_{k−1} ω_k.

Consequently, a standstill mechanization with locks on the position and heading is given by

[p_k; v_k; q_k] = [p_{k−1};  v_{k−1} + (q_{k−1} f_k q*_{k−1} − g) dt;  Ω((ω_k − (R_{k−1})^T diag([0 0 1]) R_{k−1} ω_k) dt) q_{k−1}]   (5)

and the corresponding deviation model is

[δp_k; δv_k; δθ_k] = [I, 0, 0;  0, I, [q_{k−1} f_k q*_{k−1}]× dt;  0, 0, I] [δp_{k−1}; δv_{k−1}; δθ_{k−1}].   (6)

During complete standstill, i.e. when the second detector is true ((3) applies then as well), (5) and (6) should be used instead of (1) and (2). Thereby the desired locking effect on position and heading is attained.
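The heading-lock correction derived above can be sketched in a few lines; only the formula ω_k − (R_{k−1})^T diag([0 0 1]) R_{k−1} ω_k is taken from the paper, the function name is an illustration:

```python
import numpy as np

def heading_locked_rates(w, R):
    """Remove the angular-rate component that changes the heading.

    w: angular rate in the body frame (rad/s).
    R: body-to-navigation rotation matrix R_{k-1}.
    Implements w - R^T diag([0, 0, 1]) R w from the standstill mechanization.
    """
    D = np.diag([0.0, 0.0, 1.0])   # picks the vertical (heading) component
    return w - R.T @ D @ R @ w
```

With the sensor level (R the identity), the z-axis rate is removed and roll/pitch rates pass through unchanged, which is exactly the minimal locking the paper argues for.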

IV. EXPERIMENT

The standstill mechanization (5)-(6) has been implemented in the OpenShoe platform [8]. The implementation can be found at www.openshoe.org. To demonstrate the effectiveness of the mechanization, we show the dead reckoning behaviour for a dataset where the system is truly standing still and for a dataset of normal gait. The results are seen in Fig. 1. For the static scenario (upper left and lower plots), the

Fig. 1. State estimates with the stand-still mechanization when applicable (dashed red) and without it (solid blue), for an 11-minute stationary period (upper left: position estimates for the stationary dataset, in mm; bottom: heading estimates for the stationary dataset over time, in minutes) and for a walk in a figure eight (upper right: position estimates for normal gait, in m).

locking mechanization keeps the estimates stable, while they drift without it. For the normal gait (dynamic) scenario (upper right plot), the estimated trajectories are seen to overlap, and the alternating stand-still mechanization does not affect the inertial navigation (as expected).

V. CONCLUSIONS

Zero-velocity updates do not ensure that an inertial navigation system is actually static; a drift will occur. This can be remedied by applying the suggested alternative mechanization, which locks the position and heading under certain standstill conditions. The pedestrian dead reckoning performance during standstill has been shown to improve with the suggested mechanization, while the behaviour for normal gait is not affected.

REFERENCES

[1] I. Skog, P. Handel, J.-O. Nilsson, and J. Rantakokko, "Zero-velocity detection: an algorithm evaluation," IEEE Transactions on Biomedical Engineering, vol. 57, no. 11, pp. 2657–2666, Nov. 2010.
[2] E. Foxlin, "Pedestrian tracking with shoe-mounted inertial sensors," IEEE Computer Graphics and Applications, vol. 25, pp. 38–46, 2005.
[3] J. A. Farrell, Aided Navigation. McGraw-Hill, 2008.
[4] J.-O. Nilsson, D. Zachariah, I. Skog, and P. Handel, "Cooperative localization by dual foot-mounted inertial sensors and inter-agent ranging," CoRR, vol. abs/1304.3663, 2013.
[5] I. Skog, J.-O. Nilsson, and P. Handel, "Evaluation of zero-velocity detectors for foot-mounted inertial navigation systems," in Proc. IPIN, Zurich, Switzerland, 15-17 Sept. 2010.
[6] J.-O. Nilsson, I. Skog, and P. Handel, "A note on the limitations of ZUPTs and the implications on sensor error modeling," in Proc. IPIN, Sydney, Australia, 13-15 Nov. 2012.
[7] A. Peruzzi, U. Della Croce, and A. Cereatti, "Estimation of stride length in level walking using an inertial measurement unit attached to the foot: a validation of the zero velocity assumption during stance," Journal of Biomechanics, vol. 44, no. 10, pp. 1991–1994, 2011.
[8] J.-O. Nilsson, I. Skog, P. Handel, and K. V. S. Hari, "Foot-mounted inertial navigation for everybody – an open-source embedded implementation," in Proc. IEEE/ION PLANS, Myrtle Beach, SC, USA, 23-26 Apr. 2012.


- chapter 23 -

Industrial Metrology & Geodetic Systems, iGPS


Single-channel versus multi-channel scanning in device-free indoor radio localization

Pietro Cassara*, Francesco Potortì*, Paolo Barsocchi*, Paolo Nepa§
*ISTI institute of CNR, via Moruzzi 1, 56124 Pisa, Italy
Email: cassara,potorti,[email protected]
§Department of Information Engineering, University of Pisa, Via Caruso 16, 56122 Pisa, Italy
Email: [email protected]

Abstract—In this paper we concentrate on device-free RSS-based indoor localization methods. These methods, which have generated much research interest in the last few years, are now starting to hit the market. Specifically, the purpose of this paper is to assess the performance improvement of a variance-based radio tomographic imaging technique when scanning various radio channels with respect to using only one, the latter being the "minimum introduced interference" option. In our setup, the data used for target localization are captured by wireless sensors deployed in the localization area, which are in line of sight of one another. The localization error metrics include the mean square error and percentiles of the error distribution.

I. INTRODUCTION

Reliable, accurate and real-time indoor positioning services and protocols are required in the future generation of communication networks [1]. A positioning system makes a mobile device usable for positioning-based services such as tracking, navigation or monitoring. Localization and tracking of persons can be achieved by means of a large number of different technologies; however, only a few of them are suitable for Ambient Assisted Living (AAL) applications, since they should be non-invasive for the users, they must be suited to deployment in the users' houses at a reasonable cost, and they should be accepted by the users themselves [2]. Considering these constraints, a promising technology is based on Wireless Sensor Networks (WSNs), due to their low cost and easy deployment. Within WSNs, it is possible to estimate the location of a user by exploiting the Received Signal Strength (RSS), the Time of Arrival (ToA), the Time Difference of Arrival (TDoA) or the Angle of Arrival (AoA). Of these, the most promising for low-cost applications is the RSS, which is a measure of the power of a received radio signal that can be obtained from almost all wireless communication devices we know of. The RSS measured among fixed devices (whose positions are known) and mobile devices (carried by the user) is leveraged by algorithms that estimate the coordinates of the user's positions. In a smart environment, where the ambience is instrumented with sensors and wireless communication devices, the marginal cost of implementing an RSS-based localization system can be very low, as it can leverage the existing installed hardware.

This work was supported by Regione Toscana under the POR CRO FSE 2007-2013 research program.

In this paper, we consider a device-free RSS-based indoor localization method, namely Variance-based Radio Tomographic Imaging (VRTI) [3]. Here, "device-free" means that a person does not need to carry or wear any wireless sensor or device. These systems are based on a set of small wireless devices spread over the area of interest in order to create a dense mesh; they exploit the RSS observed by each device on the radio links connecting it to the other devices. A user moving within the area modifies the RSS pattern in a way that depends on his location; radio imaging therefore exploits the RSS measurements observed along the inter-device links to obtain a reconstruction of the user's trajectory. Two working modes can be identified for these devices: either they dedicate some power and channel occupancy to sending ad hoc localization probing packets, or else they exploit data packets sent by other applications and measure their RSS for localization purposes.

In [4] the authors found that sending probing packets on multiple channels gives an advantage in terms of localization accuracy with respect to using a single channel for shadowing-based RTI. This means that, at least in some device-free systems, there is a trade-off between minimum disturbance and maximum accuracy when choosing between single- and multiple-channel localization. Here we apply the same criterion to VRTI [5], in order to measure whether similar performance improvements are observed.

II. SCENARIO

In this section we introduce the software, the hardware devices and the scenario used in our analysis. The RSS values are collected through a WSN composed of N nodes, the anchors.

A. Software Tool and Hardware

In our setup, the anchors broadcast packets in turn, so that all other anchors can perform an RSS measurement on each received packet. The payload of each packet is the set of RSS values of the packets received from the other anchors during the previous cycle. The base station listens for the packets and forwards the payload to a PC for processing. Each cycle is transmitted on a different channel. We used IRIS Motes produced by Crossbow [6], which are based on the IEEE 802.15.4 RF transceiver AT86RF230, operating in the 2.4 GHz band.


The hardware was programmed using the TinyOS operating system.

B. Experimental Setup

The RSS values were collected in the presence of a human target in an area of 6.8 by 5.6 meters, where 20 anchors were placed. The measurements were performed on sets of 1, 2 and 4 channels.

Each link was sampled with a frequency between 5 and 8 Hz, depending on the parameters described below. The target moved at a constant speed of 0.2 m/s as shown in figure 1. During each experiment, RSS data worth more than 5600 cycles were collected, for a total of 112000 RSS measurements. No one except the target was present in the area during the experiment.

The localization area was marked as shown in figure 1, where the black squares are the WSN nodes. The localization error, defined as the distance between the estimated position and the real position of the target, was then computed. The relevant metrics are the root mean square error (RMSE) and the 75th and 90th percentiles of the localization error.


Fig. 1. Environment setup: N=20 anchors positioned near the room walls, at about 70 cm from the floor, and the path followed by the target.

III. ALGORITHM

The algorithm is an implementation of the Variance-based Radio Tomographic Imaging (RTI) method [7]: the input data are the RSS levels for each pair of wireless devices.

Consider a set of anchors with known positions, whose pairs identify links, and a lattice of pixels covering the localization area. The matrix W of the variance weights links the vector of per-pixel RSS variances x with the vector of per-link RSS variances s, given a noise vector v ([7]): s = Wx + v. Solving for x is an ill-posed problem, so Tikhonov's least squares regularization technique is adopted [8].

Errors on s are assumed to be independent with zero mean and standard deviation σv. The solution of the regularization problem with a priori information C is ([8], [3]):

x = (WᵀW + σv²C⁻¹)⁻¹ Wᵀs,    C(pr, pq) = σ² e^(−d(pr, pq)/δ)    (1)

The attenuation correlations over the pixel set are assumed to follow an exponential spatial decay law, where d(pr, pq) is the distance between pixels pr and pq, σ² is the per-pixel RSS


Fig. 2. Main comparison: single-channel performance for channels 14, 17, 21, 24 compared with bi-channel (17, 25) and quadri-channel (12, 16, 20, 24). 3 s and 5 s variance windows.

variance and δ is used to tune the image smoothness. We set σ² = 0.3 and δ = 3. x is a map of the position likelihood.
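The regularized solution (1) amounts to a few lines of dense linear algebra. The sketch below is illustrative only: W and s are random placeholders standing in for the measured weight matrix and per-link variances, while σ² = 0.3 and δ = 3 follow the values given above.

```python
import numpy as np

def exp_correlation(pixels, sigma2=0.3, delta=3.0):
    """Prior covariance C of Eq. (1): C(pr, pq) = sigma^2 exp(-d(pr, pq)/delta)."""
    d = np.linalg.norm(pixels[:, None, :] - pixels[None, :, :], axis=-1)
    return sigma2 * np.exp(-d / delta)

def vrti_image(W, s, C, sigma_v=1.0):
    """Regularized least squares: x = (W^T W + sigma_v^2 C^-1)^-1 W^T s."""
    A = W.T @ W + sigma_v**2 * np.linalg.inv(C)
    return np.linalg.solve(A, W.T @ s)

# Toy example: 4 pixels on a 2x2 grid, 3 links with random weights.
rng = np.random.default_rng(0)
pixels = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
C = exp_correlation(pixels)
W = rng.random((3, 4))   # per-link, per-pixel variance weights (placeholder)
s = rng.random(3)        # per-link RSS variance vector (placeholder)
x = vrti_image(W, s, C)  # position-likelihood map over the 4 pixels
print(x.shape)           # (4,)
```

The exponential kernel makes C positive definite, so the regularized normal matrix is always invertible even when W is rank-deficient.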

IV. RESULTS

We used several criteria to compare the performance of single- versus multi-channel operation, as detailed in the following four sub-sections.

A. Simple and bare comparison

Here we simply compare the performance of measurements done in single-channel mode on four different channels with measurements done in bi-channel and quadri-channel mode, for a total of six cases.

The packet generation rate is around 55 pkt/s, which means one complete round of the 19 transmitting nodes takes about 345 ms, or about three complete rounds per second. The window over which the RSS variance is measured is set at 3 s and 5 s, which means that the same measured data are used twice and two sets of results are obtained. Note that the speed of the target is around 0.2 m/s, meaning that, space-wise, the variance window is respectively 0.6 m and 1 m long for the two cases. In general, we expect a localization error not significantly smaller than the window size.
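The timing figures above follow from simple arithmetic on the stated parameters; a quick check:

```python
pkt_rate = 55.0   # packets per second (approximate, from the text)
tx_nodes = 19     # transmitting anchors per round

round_ms = tx_nodes / pkt_rate * 1000
print(round(round_ms))               # ~345 ms per complete round

speed = 0.2                          # target speed, m/s
for window in (3.0, 5.0):            # variance windows, seconds
    print(round(window * speed, 2))  # spatial window length: 0.6 m, 1.0 m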

As shown in figure 2, we have found little difference between single-channel, bi-channel and quadri-channel measurements. The performance metrics we used are the RMSE (root mean square error) and two percentiles (75th and 90th) of the localization error distribution. As an example, figure 3 shows some typical error distributions from our experiments.

Note that the comparison in figure 2 is not rigorous, because it is based on six different measurements. This means that interference from radio sources may be different in the six measurements. Additionally, the actor's movement may be slightly different in the six measurements, and people moving in nearby rooms may also have influenced the measurements differently. These issues are tackled in the next subsection.


Fig. 3. Error distribution (localization error in meters vs. probability in %) for some different experiments: (a) channel 14; (b) channel 21; (c) bi-channel; (d) quadri-channel.

Measured:
ABABABABABABABABABABABABABABABABABABAB

Filtered:
Bi 1: A__BA__BA__BA__BA__BA__BA__BA__BA__BA__B
Bi 2: BA__BA__BA__BA__BA__BA__BA__BA__BA__BA__
Si17: A_A_A_A_A_A_A_A_A_A_A_A_A_A_A_A_A_A_A_A_
Si25: _B_B_B_B_B_B_B_B_B_B_B_B_B_B_B_B_B_B_B_B

Fig. 4. The original bi-channel measurement on channels 17 (A) and 25 (B) and three different ways of filtering it.

Measured:
ABCDABCDABCDABCDABCDABCDABCDABCDABCDABCD

Filtered:
Qu 1: A____B____C____DA____B____C____DA____B__
Qu 2: B____C____DA____B____C____DA____B__A____
Qu 3: C____DA____B____C____DA____B__A____B____
Qu 4: DA____B____C____DA____B__A____B____C____
Si12: A___A___A___A___A___A___A___A___A___A___
Si16: _B___B___B___B___B___B___B___B___B___B__
Si20: __C___C___C___C___C___C___C___C___C___C_
Si24: ___D___D___D___D___D___D___D___D___D___D

Fig. 5. The original quadri-channel measurement on channels 12 (A), 16 (B), 20 (C) and 24 (D) and five different ways of filtering it.

B. Filtering comparison

In order to remove the effect of possible differences between distinct measurements when comparing single- to multi-channel performance, we make a comparison that uses a single bi-channel measurement (the same as the one depicted in figure 2) and filter it in three different ways, as shown in figure 4.

The purpose is to extract from a single measurement set a bi-channel trace and two single-channel traces, all of them with the same number of samples per second, so that a comparison among them is significant.

We used the same procedure starting from a quadri-channel measurement, which we filtered in five different ways (see figure 5).
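The filtering described above can be thought of as applying periodic keep/drop masks to the interleaved sample stream. The masks below are simplified illustrations of the patterns in figure 4 (the trace itself is a fake stand-in), but they show how sub-traces with equal sample rates are obtained from one measurement:

```python
# Fake interleaved bi-channel trace: channels A and B alternate.
trace = [("A", i) if i % 2 == 0 else ("B", i) for i in range(40)]

def keep(trace, mask):
    """Keep sample i when mask[i % len(mask)] is True."""
    return [s for i, s in enumerate(trace) if mask[i % len(mask)]]

bi_1 = keep(trace, [True, False, False, True])  # "A__B" pattern, both channels
si_A = keep(trace, [True, False])               # every A sample only
si_B = keep(trace, [False, True])               # every B sample only

# All filtered traces retain the same fraction (half) of the samples,
# so they share the same number of samples per second.
print(len(bi_1), len(si_A), len(si_B))  # 20 20 20
```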

We consider this filtering procedure the most rigorous of the tests we made. Its main drawback is that the number of samples per second is reduced by a factor of two in the bi-channel case and by a factor of four in the quadri-channel case, which stretches the RTI algorithm's ability to its limits. Figures 6 and 7 show the resulting comparison.

Again, we do not perceive any significant difference when comparing single- and multi-channel performance.


Fig. 6. Rigorous comparison: filtering bi-channel and single-channel from a bi-channel measurement. 3 s and 5 s variance windows.


Fig. 7. Rigorous comparison: filtering quadri-channel and single-channel from a quadri-channel measurement. 10 s variance window.

C. Removing multi-channel information

As one more criterion to check the importance of measurements over different channels, we took a quadri-channel measurement and removed the channel information. In practice, we made the measurements on four different channels and ran the algorithm on the complete data, including channel information, as well as on the data where the channel information had been removed (so that all the samples appear to have been logged on the same channel).

Before making the comparison, we took care to remove the mean value individually from each channel's data, in order to avoid spurious variance introduced by mixing different channels' data. The results are depicted in figure 8 and, again, do not indicate any significantly worse performance when the channel information is discarded.
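The per-channel mean removal can be sketched as follows. The RSS streams here are synthetic Gaussian data with invented per-channel offsets, used only to show why pooling channels without centering inflates the variance:

```python
import numpy as np

# Synthetic RSS streams: four channels with different mean levels (dBm-like
# offsets are invented) but identical within-channel fluctuation.
rng = np.random.default_rng(1)
channels = {ch: rng.normal(loc=base, scale=1.0, size=500)
            for ch, base in [(12, -60.0), (16, -55.0), (20, -63.0), (24, -58.0)]}

# Remove each channel's own mean before pooling into one pseudo
# single-channel stream...
centered = np.concatenate([rss - rss.mean() for rss in channels.values()])
# ...versus pooling naively, which adds the between-channel offsets
# as spurious variance.
pooled = np.concatenate(list(channels.values()))

print(centered.var() < pooled.var())  # True
```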


Fig. 9. Adding "invented" channel information: single-channel measurements are enriched with nonexistent multi-channel info. RMSE, 75th and 90th percentiles of the localization error (meters), 3 s and 5 s variance windows, for panels (a) channel 14, (b) channel 18, (c) channel 22 and (d) channel 26.

Fig. 8. Discarding channel information: treating multi-channel data as if they were single-channel does not significantly worsen the performance. Left: bi-channel vs. bi-channel with channel info discarded; right: quadri-channel vs. quadri-channel with channel info discarded (RMSE, 75th and 90th percentiles, 3 s and 5 s windows).

D. Treating single-channel as multi-channel

As a last test, we "invented" multi-channel information and added it to single-channel measurements. This test is a sanity check that we performed to verify that our implementation of the VRTI algorithm did not introduce any artifacts that advantage either the single- or the multi-channel measurements.

In figure 9 we observe a small improvement when "inventing" channel information. We do not know the exact source of this improvement, but in practice we judge it to be too small to be significant.

V. CONCLUSION

Some preliminary measurement results relevant to an RTI-based localization technique have been presented and discussed. The main goal was to show whether using multiple radio channels for collecting RSSI samples is advantageous with respect to using only one frequency channel, as far as variance-based RTI localization is concerned.

In general, using multiple channels may be more complex, especially if the packets are not explicitly generated for the purpose of measuring, but just for communication purposes. In the latter case, the channel choice is constrained by communication protocols because of interference criteria or, more generally, by spectrum sharing criteria. On the other hand, it may be useful to exploit multiple channels if they can bring a benefit.

We used several criteria to compare the performance of the single- versus the multi-channel approach. Our preliminary conclusion is that we have no clear answer, that more experimentation in different conditions is needed, and that apparently there is not much difference in performance between single- and multi-channel operation when using variance-based RTI localization.

REFERENCES

[1] C. Di Flora, M. Ficco, S. Russo, and V. Vecchio, "Indoor and outdoor location based services for portable wireless devices," in 25th IEEE International Conference on Distributed Computing Systems Workshops, 2005, pp. 244–250.

[2] J. A. Alvarez-Garcia, P. Barsocchi, S. Chessa, and D. Salvi, "Evaluation of localization and activity recognition systems for ambient assisted living: The experience of the 2012 EvAAL competition," Journal of Ambient Intelligence and Smart Environments, vol. 5, no. 1, pp. 119–132, Jan. 2013. [Online]. Available: http://dx.doi.org/10.3233/AIS-120192

[3] J. Wilson and N. Patwari, "Through-Wall Motion Tracking Using Variance-Based Radio Tomography Networks," IEEE Transactions on Mobile Computing, vol. 10, no. 5, pp. 612–621, May 2011.

[4] N. Patwari, M. Bocca, and O. Kaltiokallio, "Enhancing the accuracy of radio tomographic imaging using channel diversity," in 2012 IEEE 9th International Conference on Mobile Ad-Hoc and Sensor Systems (MASS 2012), 2012, pp. 254–262.

[5] J. Wilson and N. Patwari, "Radio tomographic imaging with wireless networks," IEEE Transactions on Mobile Computing, vol. 9, no. 5, pp. 621–632, May 2010.

[6] Crossbow Technology, "IRIS Datasheet," http://bullseye.xbow.com:81, 2013.

[7] J. Wilson and N. Patwari, "Radio Tomographic Imaging with Wireless Networks," IEEE Transactions on Mobile Computing, vol. 9, no. 5, pp. 621–632, May 2010.

[8] A. N. Tikhonov and V. Y. Arsenin, Solution of Ill-posed Problems. Washington: Winston & Sons, 1977.


2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31st October 2013

Indoor Positioning using Time of Flight Fingerprinting of Ultrasonic Signals

Boaz Ben-Moshe, Harel Levi, Ploit Shamil
Department of Computer Science
Ariel University Center
Ariel 40700, Israel

Amit Dvir
Department of Computer Science
The College of Management Academic Studies
Rishon Lezion, Israel

Abstract—This paper presents a positioning technique for indoor environments which utilizes the time-difference-of-arrival (TDOA) of ultrasonic (US) signals. US positioning systems for indoor environments are potentially more accurate than RF-based methods, because the signals are slower, and therefore time-of-flight (TOF) computations are feasible. Yet, the accuracy of US indoor positioning solutions degrades in no-line-of-sight (NLOS) conditions. Moreover, US signal emitters are directional, and therefore distance computations based on TOF are only valid when the receiver is facing the emitter. In the current paper, we demonstrate that fingerprinting can help improve US positioning by leveraging the relatively constant acoustics of indoor environments. We show that: (1) multiple samples of US TDOA vectors taken at a calibration location distribute densely around the average vector at that location; (2) the Euclidean distances (in the time domain) between the average US TDOA vectors of adjacent locations (as close as 20 cm) are relatively large. Taken together, these findings demonstrate the viability of the approach to generate reliable high-resolution fingerprinting maps.

Keywords—Ultrasound, Indoor Positioning, TOF, TDOA, Fingerprinting

I. INTRODUCTION

Indoor positioning1 methods relying on ultrasonic (US) signals' time-of-flight (TOF) or time-difference-of-arrival (TDOA) are potentially more accurate than alternative methods relying on Wi-Fi, Bluetooth or GSM. This is mainly because US signals travel at the speed of sound, rather than the speed of light at which radio-frequency (RF) signals travel. As a consequence, for a US signal it is feasible to compute its TOF, or to compute the TDOA of several signals, without requiring extreme clock synchronization among the system's emitters and receivers. Yet, in many US localization systems, distance computations that are based on TOF or TDOA2 are not sufficiently accurate [6], [11]. This is mainly due to reflections (echo effect) in non-line-of-sight (NLOS) conditions, which are very common in indoor environments. Factors such as the echo effect, therefore, degrade the accuracy of US positioning methods relying on TOF or TDOA measurements and trilateration methodologies [10]. Nonetheless, a US signal emitter that is programmed to broadcast constant signals at constant intervals tends to generate relatively stable signals. Primarily, this can be attributed to the typically constant acoustics

1Throughout this paper, we use the terms indoor positioning and indoor localization in similar contexts.

2For readability, we denote our employed measurement technique as TDOA, rather than "TOF or TDOA". Refer to the discussion in Section III-A.

of indoor environments. The outcome is that a mobile client remaining still in a certain spot, or repeatedly returning to the same spot, computes highly-similar TDOA measurements from all in-range emitters. This holds irrespective of whether trilateration computations based on these measurements are accurate or not. We leverage this stability trait to design a US positioning system that does not rely on trilateration methodologies and makes no assumptions concerning the US signals' propagation characteristics. Instead, a training (offline) phase is conducted, through which US TDOA fingerprints are captured and recorded. This is somewhat equivalent to, but materially differs from, Wi-Fi fingerprinting, in which recorded fingerprints reflect signal power in terms of decibels (dB).

As a consequence of US signals' stability, the training phase in our implementation yields a TDOA map which is more reliable than the traditional radio map used in Wi-Fi fingerprinting. Within this map, each TDOA fingerprint is associated with its true location. Thereafter, throughout the online phase, a mobile client moving within the system's coverage area is able to accurately estimate its position by querying the training phase's database for TDOA fingerprints that resemble its online sampling. In real-life implementations, the methods presented in this paper are best suited for exact positioning inside a room. Depending on the sensors available in the targeted mobile platform, a hybrid positioning solution can utilize technologies such as Wi-Fi or Bluetooth for rough positioning, then use US TDOA fingerprinting for very accurate in-room positioning.

II. RELATED WORK

Weak signal reception and missing line-of-sight (LOS) between the user and navigation satellites cause Global Navigation Satellite Systems (GNSS) to be inadequate for positioning in indoor environments [4]. As a consequence, various positioning systems and techniques have been proposed for such settings. Chronologically, indoor positioning systems relying on RF technologies (mainly Wi-Fi and Bluetooth) are more recent than early initiatives, which utilized infrared (IR) or US technologies (e.g., AT&T's Active Bat [14]) for similar purposes. In accordance, developers of systems such as the Wi-Fi-based RADAR [1] consider their approaches evolutionary and more realistic to deploy, in comparison to the formerly suggested IR and US solutions. The case in favor of Wi-Fi for indoor positioning is summarized in [6] as follows. On the one hand, the authors note that US and IR positioning systems can exploit measurements such as angle of arrival (AOA), time of arrival (TOA) and TDOA. The authors, however, argue that the


reliability of such measurements suffers from complex signal propagation environments [6]. This claim is also affirmed by the developers of the acoustic-ranging system Centaur [9], who report that the US Beepbeep system [11] suffers from limitations such as poor performance in NLOS conditions. On the other hand, the increased deployment and popularity of Wi-Fi networks is argued to be a new opportunity for location-aware services [6]. A prominent advantage of such solutions is the simple system setup, using standard inexpensive hardware. The authors admit that Wi-Fi was not designed with localization in mind, but claim it is compatible with the task, because the RSSI and the signal-to-noise ratio (SNR) can be exploited for location estimation.

There are two main approaches to Wi-Fi localization [13]. As noted, one common approach is based on RSSI measurements of the Wi-Fi signals from the surrounding access points (AP). Such information is available because each AP, in accordance with the IEEE 802.11 standard [3], broadcasts beacons multiple times a second. An estimate of the location of the device is then obtained using these measurements in conjunction with a signal propagation model inside the building, which is obtained using simulations or prior calibration measurements at certain locations. In the second approach, captured RSSI values (online phase) are compared with the RSSI values of calibrated points stored in a database (offline phase). This technique is also referred to as fingerprinting. By using fingerprinting, the characteristics of the signal propagation are embedded in the samples, and the complex modeling of the signal propagation is avoided. This, in turn, greatly simplifies the computations, but requires a tedious preparation phase, which is hard to automate [6].

Wi-Fi and Bluetooth indoor localization solutions are simple to deploy as well as cost-effective. There are, however, several drawbacks to this approach. Most notably, the shift from earlier US localization systems, such as Active Bat [14], to Wi-Fi localization inherently leads to inferior best-case accuracy. For example, according to [5] the theoretical accuracy of the Wi-Fi-based RADAR system is 3-4.3 meters, while the accuracy of the US Active Bat is as good as 9 centimeters. As noted, this theoretical advantage of US solutions is mostly a consequence of the time-based measurements used in these systems. These measurements directly translate to distances using an approximated speed of sound, as opposed to RSSI measurements in RF solutions, which can merely provide implicit indications of distances. Another considerable limitation of Wi-Fi-based localization is exposed in a recent paper [8]. The researchers found that different Wi-Fi-enabled devices performed significantly differently with respect to the mean reported signal strength, and that multiple samples from the same device do not perform identically. Furthermore, the researchers also found that some devices behaved in a way that made them poor candidates for use in fingerprinting. Certain other devices were entirely unsuitable for positioning, as they reported signal strength values uncorrelated with distance from the transmitter.

III. ULTRASONIC TDOA FINGERPRINTING

A. Synchronization of Ultrasonic Signals

Throughout this paper, we denote our employed measurement technique as TDOA. However, our proposed positioning solution may rely on either TOF or TDOA for fingerprinting measurements. The first is more straightforward for trilateration computations, but requires the receiver's clock to be highly synchronized with the clocks of the emitters. Considering this, we have based our implementation on TDOA, which only requires synchronization among the system's emitters. Figure 1 encapsulates two consecutive fingerprints at the same location using this TDOA approach. The highly-similar time intervals between peaks in both recordings illustrate the US signals' stability, which is at the essence of the proposed solution.

Fig. 1. Fingerprinting at the same location: the y-axis shows Discrete Fourier transform (DFT) values; the colored series represent the different frequencies that the DFT is instructed to distinguish; fingerprint vectors are constructed by computing the time gaps between maximum DFT values.

B. Training (Offline) Phase Formalization

As in [6], the construction of the TDOA map in our fingerprinting method begins by dividing the area of interest into cells. TDOA values of the US signals transmitted by the system's emitters are then collected at calibration points inside the cells and stored in a TDOA map. A notable advantage of the training phase in our approach over the training phase in Wi-Fi fingerprinting is that only one snapshot, containing the TDOA of all emitters, needs to be taken at each calibration point. This is due to the US signals' stability demonstrated in Section IV. However, for the sake of compliance in formalization with existing fingerprinting methods, we formalize the general case in which multiple recordings are taken at the same spot. Therefore, in accordance with [7], we denote the location fingerprint as a vector R of the average US TDOA values from multiple emitters at a particular location L. A typical vector R = (r1, r2, ..., rn) consists of n TDOA values from n emitters. The US map contains all such vectors for a grid of locations in the indoor area.


C. Online Phase - Approximating the Position

A mobile device in the suggested framework approximates its position by applying a simplified version of the deterministic framework algorithm outlined in [6]. In formal terms, in accordance with [7], positioning a mobile device during the online phase is performed by obtaining a sample TDOA vector P = (p1, p2, ..., pn). The Euclidean distance between P and R for each R in the database is then computed. The resulting estimated location is the L in the database for which the Euclidean distance between its R vector and the current point's vector P is the smallest.
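The online phase thus reduces to a nearest-neighbor search in TDOA space. A minimal sketch, using two of the R vectors from Table I (rounded to one decimal) as an illustrative map:

```python
import math

# Illustrative TDOA map: location -> average TDOA vector R, in milliseconds
# (values rounded from Table I rows L1 and L2).
tdoa_map = {
    "L1": [605.6, 52.3, 181.1, 839.0],
    "L2": [486.5, 66.3, 244.0, 796.8],
}

def locate(p, tdoa_map):
    """Return the location whose R vector is nearest (Euclidean) to sample P."""
    return min(tdoa_map, key=lambda loc: math.dist(p, tdoa_map[loc]))

print(locate([606.0, 52.0, 181.0, 839.0], tdoa_map))  # L1
```

With more calibration points the same `min` over `math.dist` still applies; only the size of the map grows.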

IV. EXPERIMENTAL RESULTS

Our objective in the current paper is to demonstrate that US TDOA fingerprints recorded during the training phase are significantly more stable in comparison to fingerprints of RF signals. For that reason, we focus on statistical validation of this claim. We do not include demonstrations of the online phase, in which P vectors are associated with R vectors using the minimal-Euclidean-distance methodology discussed in Section III-C. This is partially because exact positions during the online phase are hard to track, as clients constantly move. In future work, we intend to assess the online-phase accuracy of our suggested approach, and the accuracy improvement that can be achieved by analyzing Doppler shifts [12].

A. Experimental Setup and Design

Fig. 2 is a schematic drawing of our laboratory open space in which the experiment took place. The following hardware and software were used to conduct the experiments:

• HTC smartphone running Android OS: to capture and record US signals.

• Windows 7 PC with SoundForge 10 [2] software studio: to produce and analyze the US signals.

• 4 standard speakers connected to a quad-channel (5.1) sound card.

Using SoundForge we programmed a repetitive loop configured as follows: on each loop start (once a second), SoundForge instructs one of the speakers (S1-S4) to generate a short US signal at 17 kHz. This signals the smartphone to prepare for a series of 4 consecutive US signals. Then each speaker, in its turn, is instructed to generate its predefined signal. We used short US signals (10 milliseconds), each with a narrow frequency in the range of 18-21 kHz. We used a recording application that utilizes the smartphone's microphone to record the US signals. We then used Matlab and Discrete Fourier transform (DFT) signal processing to determine, for each frequency, the point in time at which the signal power was maximal. Graphically, the DFT enables identifying the time intervals between peaks in Fig. 1, and thus constructing a TDOA vector with 4 components.
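The peak-time extraction can be sketched with a sliding single-bin DFT. The sample rate, tone onsets and window hop below are invented for the demo (the paper used Matlab on real recordings); only the 10 ms burst length and 18-21 kHz tones follow the text:

```python
import numpy as np

fs = 48000
sig = np.zeros(fs)                     # one second of synthetic audio
tones = {18000: 0.10, 19000: 0.25, 20000: 0.40, 21000: 0.55}  # freq -> onset (s)
for f, onset in tones.items():
    i, n = int(onset * fs), int(0.010 * fs)   # 10 ms bursts, as in the experiment
    sig[i:i + n] += np.sin(2 * np.pi * f * np.arange(n) / fs)

def peak_time(sig, f, fs, win=480):    # 480 samples = 10 ms window
    """Time at which the single-bin DFT power at frequency f is maximal."""
    k = np.exp(-2j * np.pi * f * np.arange(win) / fs)
    power = [abs(np.dot(sig[s:s + win], k))
             for s in range(0, len(sig) - win, win)]
    return np.argmax(power) * win / fs

peaks = [peak_time(sig, f, fs) for f in tones]
tdoa = np.diff(peaks) * 1000           # gaps between peaks, in milliseconds
print(np.round(tdoa))                  # [150. 150. 150.]
```

Because the four tones fall on exact DFT bins of the 10 ms window, the per-frequency powers are mutually orthogonal and each peak is located unambiguously.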

B. Statistics of Training Phase Fingerprints

To assess the quality of our US fingerprinting approach, we conducted a training-phase experiment for which we statistically analyze the stability of fingerprints at selected locations.

Fig. 2. A model of the experiment's indoor environment. The 4 speakers (S1-S4) are connected by wires to a central PC equipped with SoundForge Studio software [2].

The analysis of statistical results makes use of the following definitions:

• L - one of the locations L1-L11 drawn in Figures 2-3.

• Num. - the number of measurements recorded at the specified location.

• R - the average vector of TDOA intervals (in milliseconds) between peaks of consecutive signals (e.g., 18 kHz and 19 kHz).

• Avg(Dist(r, R)) - the average Euclidean distance (in milliseconds) of recorded samples r at the location from the average vector R.

• Stdev(Dist(r, R)) - the standard deviation (in milliseconds) of the Euclidean distances of samples r from the average vector R.

• Nearest-L - the location L nearest to the current location.

• Dist(R, R in Nearest-L) - the Euclidean distance (in milliseconds) between R in the current location and R in Nearest-L.

In each location L we sampled 100 TDOA vectors at 1-second intervals, then computed R, an average vector of 4 items corresponding to the TDOA of the 4 frequencies. This vector corresponds to the training-phase vector R referred to in Section III.
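The per-location statistics reported in Table I can be reproduced on synthetic data as follows; the sample noise level is invented, and only the R vector for L1 is taken from the table:

```python
import numpy as np

# 100 synthetic TDOA samples around L1's average vector from Table I;
# the 0.5 ms noise scale is an assumption for illustration.
rng = np.random.default_rng(2)
R_true = np.array([605.61, 52.29, 181.05, 838.95])
samples = R_true + rng.normal(scale=0.5, size=(100, 4))

R = samples.mean(axis=0)                       # training-phase fingerprint R
dists = np.linalg.norm(samples - R, axis=1)    # Dist(r, R) per sample
print(round(dists.mean(), 2), round(dists.std(), 2))  # Avg and Stdev, both small
```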

Table I summarizes the statistical results of the experiment in terms of milliseconds. Referring to the results, it can be seen that Avg(Dist(r, R)) and Stdev(Dist(r, R)) are very small. In other terms, the Euclidean distance of samples from the average TDOA fingerprint R at each location is a random variable that distributes densely around a small mean. Note also that in the locations L1-L6, the values of Dist(R, R in Nearest-L) are very large in comparison to Avg(Dist(r, R)). This shows that the Euclidean distance between R vectors


TABLE I. TDOA STATISTICS AT SELECTED LOCATIONS (IN MILLISECONDS). NOTE: BECAUSE THE LOCATIONS L7-L11 IN FIG. 3 ARE ALL VERY CLOSE, WE DO NOT DETERMINE NEAREST-L, BUT RATHER MEASURE EUCLIDEAN DISTANCES BETWEEN ALL POSSIBLE PAIRS OF LOCATIONS.

L    Num.  R                                    Avg(Dist(r,R))  Stdev(Dist(r,R))  Nearest-L    Dist(R, R in Nearest-L)
L1   100   [605.61, 52.29, 181.05, 838.95]      1.07            0.67              L2, 60 cm    141.86
L2   100   [486.50, 66.26, 244.00, 796.76]      0.75            0.37              L1, 60 cm    141.86
L3   100   [1289.20, 716.50, 250.50, 322.20]    0.44            0.22              L4, 80 cm    222.87
L4   100   [1273.00, 789.00, 64.00, 419.00]     0.32            0.16              L5, 20 cm    60.74
L5   100   [1265.40, 799.40, 14.40, 451.60]     0.43            0.26              L4, 20 cm    60.74
L6   100   [1255.20, 273.20, 485.10, 496.30]    2.25            1.09              L3, 120 cm   294.12
L7   100   [228.94, 0.29, 2.01, 226.78]         0.49            0.18              L7-L11       >= 58.17
L8   100   [264.09, 45.42, 2.45, 216.22]        1.07            0.59              L7-L11       >= 58.17
L9   100   [273.10, 229.30, 1.80, 42.01]        0.50            0.29              L7-L11       >= 58.17
L10  100   [339.01, 319.15, 2.45, 17.41]        1.30            1.01              L7-L11       >= 58.17
L11  100   [411.31, 327.52, 2.87, 80.91]        1.02            0.50              L7-L11       >= 58.17

from adjacent locations (see column Nearest-L for distances) is much larger in comparison to the distances of TDOA samples in a location from the average TDOA vector in that location. These findings constitute a strong indication of: (1) the stability of fingerprints at the same location; (2) the method's ability to generate a high-resolution TDOA map during the training phase.
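
The training-phase statistics in the table can be reproduced directly from raw samples. A minimal sketch (the function name and the sample values are illustrative, not taken from the experiment):

```python
import math

def train_fingerprint(samples):
    """Average a list of 4-element TDOA sample vectors into a fingerprint R,
    and report how tightly the samples cluster around it (all in ms)."""
    n = len(samples)
    R = [sum(s[i] for s in samples) / n for i in range(4)]
    dists = [math.dist(s, R) for s in samples]               # Dist(r, R) per sample
    avg = sum(dists) / n                                     # Avg(Dist(r, R))
    std = math.sqrt(sum((d - avg) ** 2 for d in dists) / n)  # Stdev(Dist(r, R))
    return R, avg, std

# Illustrative samples: tightly clustered vectors give a small avg and std.
samples = [[605.0, 52.0, 181.0, 839.0], [605.4, 52.2, 181.1, 838.8]]
R, avg, std = train_fingerprint(samples)
```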

Fig. 3. TDOA fingerprinting on top of a 1.4 m diameter table in partial NLOS conditions.

In the second part of the experiment, illustrated in Fig. 3, we have changed the positions of the speakers S1-S4 and the position of the 1.4 m diameter table. Note that at least some of the locations L7-L11 are now in NLOS with respect to one or more of the speakers. Once again, the values of Avg(Dist(r, R)) and Stdev(Dist(r, R)) are very small, demonstrating the stability of fingerprints. Because the locations L7-L11 are very close to one another we don't determine Nearest-L. It is apparent, however, that the R vectors are distant from one another in comparison to the dense distribution of samples in each location: the minimal Euclidean distance between a pair of R vectors in locations L7-L11 is 58.17 ms, and the average distance between the (10 possible) pairs is 248.52 ms. These results suggest that high-resolution TDOA fingerprinting can later enable accurate positioning of a device on top of a table.
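
During the online phase, a freshly sampled TDOA vector r would then be matched to the nearest stored fingerprint. A hedged sketch of that lookup, reusing two R vectors from Table I (the function name is an assumption, not the authors' code):

```python
import math

def locate(r, fingerprints):
    """Return the location label whose stored TDOA fingerprint R is
    nearest (Euclidean distance, in ms) to the sampled vector r."""
    return min(fingerprints, key=lambda loc: math.dist(r, fingerprints[loc]))

# Two fingerprints taken from Table I (locations only 20 cm apart).
fingerprints = {
    "L4": [1273.00, 789.00, 64.00, 419.00],
    "L5": [1265.40, 799.40, 14.40, 451.60],
}
# A sample drawn near L4's fingerprint resolves to L4.
print(locate([1272.5, 789.3, 63.6, 419.2], fingerprints))  # -> L4
```

Because the per-location sample spread (about 0.3-2.3 ms) is far smaller than the distance between neighbouring fingerprints (about 60 ms and up), this nearest-neighbour rule separates even closely spaced locations.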

V. CONCLUSION

We have introduced the main principles of an indoor positioning framework based on US signals' TDOA and fingerprinting methodologies. The proposed system demonstrates a path to utilize the advantages, and better deal with the shortfalls, of current indoor positioning techniques. By basing the solution on relatively slow-traveling US signals, mobile receivers in the system are able to compute highly stable TDOA measurements with respect to the system's emitters. Our experiments in indoor environments validate that such measurements tend to be: (1) insufficiently accurate for the purpose of trilateration; (2) highly stable. Rather than conceding to the inaccuracy characteristic, we leverage the US signals' stability characteristic via a training phase, during which we record reliable TDOA fingerprints that can be matched against during the online phase.




- chapter 24 -

Navigation Algorithm



978-1-4673-1954-6/12/$31.00 ©2012 IEEE

The Construction of an Indoor Floor Plan Using a Smartphone for Future Usage of Blind Indoor Navigation

J.A.D.C. Anuradha Jayakody
Department of Electrical and Computer Engineering
Curtin University
Perth, Western Australia
[email protected]

Iain Murray
Department of Electrical and Computer Engineering
Curtin University
Perth, Western Australia
[email protected]

Abstract—Most blind people require assistance to navigate within unfamiliar environments, as there is often insufficient information about the buildings available to them. To address aspects of this problem, this paper describes the "AccessBIM" model: an approach to facility management in which a digital representation of the indoor features of a building is used to facilitate the exchange and interoperability of real-time information in digital format, which can assist blind people to independently access unfamiliar indoor environments. The model-driven architecture can be implemented for wayfinding and data synchronization, generating an AccessBIM as a real-time map to assist vision-impaired individuals in navigating known and unknown indoor environments.

Keywords—Assistive Technology; vision impairment; indoor navigation; SmartSLAM; FastSLAM; IndoorOSM; crowdsourced database; AccessBIM

I. INTRODUCTION

The number of blind and low-vision people is increasing by 1 to 2 million per year. This estimate indicates that the total number of blind people will reach 100 million by 2020, not including low-vision people [1]. The problems become more severe when it comes to moving from one place to another independently. Society has come to an age of supermarkets, shopping malls, office complexes, huge warehouses, etc. In such complicated surroundings, people with vision impairments may feel overwhelmed. While navigation systems for outdoor environments are readily available, navigation within buildings still poses a challenge. The Global Positioning System (GPS), the dominant system offering location information outdoors, suffers from poor indoor performance due to low signal availability, as GPS signals are not designed to penetrate most construction materials. Many indoor positioning techniques have been developed, most of which rely on fixed references to determine the location of tagged devices [2]. Traditionally, buildings are documented using maps and plans stored in electronic form with tools such as computer-aided design (CAD) applications. Storing maps electronically is already pervasive, but CAD drawings are not adequate for the requirements of effective building models aimed at indoor navigation systems. To reach a higher level of data integrity and utilization potential of building information models, the International Alliance for Interoperability (IAI) [2] has introduced the Industry Foundation Classes (IFC) [2]. OpenBIM [4] describes the database that contains information about IFC-compatible applications for the whole range of design and construction disciplines, including architectural, structural, building services, facility management and model viewing. To manage and centralize construction based on the open IFC standard, this research proposes to build an Accessible Building Information Model (AccessBIM) to centralise and manage information about the built environment.

II. NAVIGATION MODEL

Navigation in an unfamiliar indoor environment is a challenge for vision-impaired individuals. This section explains the methodology for identifying indoor environments using spatial information, using "AccessBIM" to assist blind and vision-impaired individuals in navigating inside an unknown building environment. To address the issues involved in the application of collaborative indoor mapping, this paper discusses four processing stages.

A. Processing Stage 1

In the data pre-processing stage, gait analysis and the modified FootPath algorithm are to be utilized, as shown in Fig. 1. Pedestrian dead reckoning (PDR) navigation approaches track a user's path with the help of given sensor data [12]. Dead reckoning is a relative navigation technique based on observing the ego-motion of an object. By recording the direction and distance of movement, one can estimate the current position by adding these motion vectors to a given start position. Human gait analysis and image processing can use different sensor data, such as an accelerometer and an electronic compass sensor integrated into a smartphone. By tracking a user's path, PDR approaches estimate the location of a user from a known start position, the stride length, and the number of walked steps with their corresponding headings. One of these approaches is FootPath, which will be integrated into the proposed mapping approach. FootPath is a smartphone-based, infrastructure-less indoor navigation approach developed by Paul Anthony Smith [4, 5]. FootPath uses the electronic compass sensor of a smartphone to detect walking directions and the accelerometer sensor to detect steps.

Figure 1. Block diagram of the data pre-processing stage

The developers estimate the step length of a user by using the correlation between step length "l" and body height "h". The corresponding formula is l = h * 0.415 [4, 5]. Considering that the calculated step length is only a rough estimate of the travelled distance, the author proposes to develop an approach that fixes errors occurring during navigation, such as inaccurate compass readings, wrongly estimated distances and wrong step detection. The proposed modification replaces this estimate with a more accurately calibrated gait-analysis method that yields direction and distance, both parameters being relative to the previous point.
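
The dead-reckoning update described above can be sketched as follows. The step-length formula l = h * 0.415 comes from the text; the function names, axis convention and example height are assumptions:

```python
import math

def step_length(height_m):
    """FootPath's height-based estimate: l = h * 0.415 (both in metres)."""
    return height_m * 0.415

def pdr_update(x, y, heading_deg, stride_m):
    """Advance the estimated position by one detected step along the
    compass heading (convention here: 0 deg = north/+y, 90 deg = east/+x)."""
    rad = math.radians(heading_deg)
    return x + stride_m * math.sin(rad), y + stride_m * math.cos(rad)

x, y = 0.0, 0.0             # known start position
stride = step_length(1.80)  # ~0.747 m for a 1.80 m tall user
for heading in (0, 0, 90):  # two detected steps north, then one east
    x, y = pdr_update(x, y, heading, stride)
```

Each step only shifts the estimate relative to the previous point, which is why compass and step-detection errors accumulate and must be corrected afterwards.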

B. Processing Stage 2

At the second stage, two sub-processes are executed: smoothing, vertex constriction and sphere contraction as one sub-process, and a modification of the relaxation algorithm for error correction of the data as the other (Fig. 2). This process deals with inaccuracies in the map creation. Inaccurate compass readings, miscalculated steps and an estimated stride length sometimes lead to errors resulting in inaccurately created maps. To deal with these errors, the relaxation algorithm of Duckett et al. is proposed [6]. As a first step, the relaxation algorithm calculates the estimated position and its variation; it is proposed to fix inaccurate step lengths during map creation and, during post-editing, to put every vertex at its expected position.
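
The Duckett et al. relaxation algorithm is not reproduced in this paper. Purely as an illustration of the idea, a common simplification distributes the accumulated end-point error linearly along the recorded path so that the final vertex lands on a known reference point:

```python
def relax_path(vertices, anchor_end):
    """Shift each (x, y) vertex by a linearly growing fraction of the
    end-point error, pinning the last vertex to anchor_end while
    leaving the start vertex untouched."""
    ex = anchor_end[0] - vertices[-1][0]
    ey = anchor_end[1] - vertices[-1][1]
    n = len(vertices) - 1
    return [(x + ex * i / n, y + ey * i / n)
            for i, (x, y) in enumerate(vertices)]

# A drifting 3-vertex path whose end point should coincide with (2.0, 0.0):
path = [(0.0, 0.0), (1.0, 0.1), (2.0, 0.2)]
corrected = relax_path(path, (2.0, 0.0))
```

The real algorithm iterates over local constraints rather than using this one-shot linear spread, but the effect is the same: every vertex is pulled toward its expected position.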

Figure 2. Block diagram of the second stage

C. Processing Stage 3

In this stage a spatial database is available to store the collected data used to generate indoor maps. Besides a classic navigation graph, additional vector data of probable indoor movement traces must be saved, which allows the generation of navigation directions as well as optimal cartographical presentation of calculated routes. Objects are classified into three main categories: "unmovable obstructions", fixed objects within the environment; "movable obstructions", objects that can be moved around within the indoor environment; and "pathways", the paths along which users walk. The collected data should be stored in a database inside the mobile phone as well as in a cloud environment, to share the generated map and assist blind people in an indoor environment. In this stage it is proposed to investigate the possibility of using modified simultaneous localization and mapping techniques to generate maps.

The simultaneous localization and mapping (SLAM) problem is an active research field in robotics. SLAM describes how a mobile robot can determine its own position without having localization information, like a map, about its environment [1, 5]. For that reason, the robot explores the surrounding area and starts to create a map of it; while creating maps of a building with an existing Wi-Fi infrastructure, the robot tries to determine its position by using the created map. The SLAM problem appears in different domains; e.g., one can differentiate between outdoor SLAM and indoor SLAM. In addition, there are diverse strategies for solving the SLAM problem. For instance, one strategy is to take advantage of locations the robot visits twice. If the robot reaches an unknown position, it stores the landmarks of this location. If it visits such a position again, it will be able to determine its location, because it knows the location already [7, 8]. SmartSLAM provides the automatic creation and updating of indoor plans. The main idea of SmartSLAM is to create an indoor map which consists of Wi-Fi fingerprints. For that, SmartSLAM uses the Wi-Fi access points installed within buildings as landmarks. Additionally, it tracks a user's path by using an accelerometer sensor to detect steps and an electronic compass sensor to detect directions. SmartSLAM implements the typical components of SLAM. In addition, its observation model consists of recording signal strengths with their corresponding access points. Finally, the created map consists of the recorded Wi-Fi access points, which Shin et al. [9] proposed to use as landmarks.
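
SmartSLAM's observation model (signal strengths keyed by their access points) can be pictured as below. The BSSIDs, RSS values and the simple shared-AP distance metric are illustrative assumptions, not the published algorithm:

```python
def fingerprint_distance(obs, landmark):
    """Mean absolute RSS difference (dBm) over the access points the two
    Wi-Fi fingerprints share; infinite when they share no APs."""
    common = obs.keys() & landmark.keys()
    if not common:
        return float("inf")
    return sum(abs(obs[ap] - landmark[ap]) for ap in common) / len(common)

# Stored landmark fingerprints: place label -> {AP BSSID: RSS in dBm}.
landmarks = {
    "corridor": {"aa:bb:cc:01": -48, "aa:bb:cc:02": -71},
    "lobby":    {"aa:bb:cc:02": -52, "aa:bb:cc:03": -60},
}
obs = {"aa:bb:cc:01": -50, "aa:bb:cc:02": -69}  # a fresh scan
best = min(landmarks, key=lambda name: fingerprint_distance(obs, landmarks[name]))
```

Revisiting a place produces a scan close to its stored fingerprint, which is exactly the "visited twice" loop-closure cue the SLAM strategy above relies on.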

In the sub-processes, "OpenStreetMap" may be used to create maps. The pre-mapping phase is important to collect the parameters required for map creation: the gait-analysis data necessary to derive the estimated step length of the user during the mapping process, and the map's name. Additionally, in this stage data from image processing will be incorporated, e.g., information about a door; another necessary parameter is the start position for map creation. For every created map and every floor of the map a bounding-box file is created. This is a configuration file that contains all the important parameters for a created map. It should contain general information about the floor level of the bounding file and whether a plan of the floor is available, and it holds the world coordinates of the boundaries of the screen. Furthermore, it should additionally contain the storage location of the available map, the corners of the adjusted image file in world coordinates and the transformation matrix that can be applied to the map. In the newly proposed architecture, sensor data will be collected from the external environment for the purposes of map generation. Two sources can be identified as the external sensor inputs, namely: sensor data through image processing and sensor data through gait analysis. Sensor data obtained from both sources will be directed to the specially designed Java API for processing. After the completion of the error correction process, a decision-making process is initiated.
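
The bounding-box configuration file described above might carry fields like the following. The paper does not give a schema, so every key name and value here is a hypothetical example:

```python
# Hypothetical contents of a per-floor bounding-box configuration file.
bounding_box = {
    "floor_level": 2,                      # which floor this file describes
    "floor_plan_available": True,          # whether a plan image exists
    "screen_bounds_wgs84": {               # world coordinates of the view
        "min_lat": -31.9580, "min_lon": 115.8600,
        "max_lat": -31.9575, "max_lon": 115.8607,
    },
    "map_storage_path": "maps/building_a/floor_2.osm",
    "image_corners_wgs84": [               # corners of the adjusted plan image
        (-31.9580, 115.8600), (-31.9580, 115.8607),
        (-31.9575, 115.8607), (-31.9575, 115.8600),
    ],
    "transformation_matrix": [[1.0, 0.0, 0.0],   # affine map applied to the image
                              [0.0, 1.0, 0.0]],
}
```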

D. Processing Stage 4

The final stage of this methodology provides rapid convergence of objects within the AccessBIM to assist real-time processing with multiple indoor users. The Java OpenStreetMap Editor (JOSM) [10] is a Java-based editor for creating and editing "OpenStreetMaps". This application will be extended by creating or modifying different plugins. First, JOSM offers the functionality to directly download, as well as upload, created maps to the OpenStreetMap servers. Furthermore, JOSM allows exporting and importing OpenStreetMap maps as XML-formatted files. For editing created maps, JOSM offers already-implemented functionalities, such as removing or adding nodes. Overall, JOSM gives the possibility to import created maps, post-edit them using the integrated functionalities and upload the edited maps to the OpenStreetMap server. However, JOSM has only limited indoor support. Creating maps and adding all necessary indoor attributes, like floor levels, is time-intensive. For that reason, JOSM is not user-friendly for creating indoor maps. PicLayer [11] is a JOSM plugin that allows loading an image, positioning it at a certain location, rescaling and resizing it, and storing a configuration file for the adjusted image. This configuration file allows the image to be loaded again later.

Figure 3. Block diagram of the fourth stage

With the combination of the above-mentioned four stages, this model for mapping indoor environments, built from the given data, can be accessible to vision-impaired mobile users navigating in unknown environments. The model will provide more detailed information about buildings and their inner structures through a shared cloud in a real-time, multi-user environment.

III. NAVIGATION ALGORITHM

Generally, the described navigation algorithm consists of two main parts, as follows:

• Data acquisition and smoothing algorithm for AccessBIM.

• Map generation and error correction algorithm for AccessBIM.

In the newly proposed architecture (Fig. 4), sensor data will be collected from the external environment for the purposes of map generation. Two sources can be identified as the external sensor inputs, namely: sensor data through image processing and sensor data through gait analysis. Sensor data obtained from both sources will be directed to the specially designed Java API for processing. Inside the Java API a special function named dataSegment() will be invoked. Based on a set of predefined rules, the dataSegment() function will attempt to segment the incoming sensor data into two categories, namely: temporal data and landmark-based data semantics. The function will be called in an iterative manner until all the sensor data is segmented and stored in the corresponding tables within the database (DB). During the next stage of the proposed algorithm, correctly segmented data will be retrieved from the appropriate tables for the purpose of decision making. The algorithm has to deal with inaccuracies in the map creation, such as inaccurate compass readings, wrongly counted steps and an estimated stride length; all these factors lead to errors resulting in wrongly created maps. To deal with these errors, the authors propose to use the relaxation algorithm during the error correction step. The tested algorithm was developed by Duckett et al. [6]. With this algorithm, the authors plan to correct inaccurate step lengths during map creation and also plan to use it during post-editing to put every vertex at its expected position. Considering that it also needs to deal with inaccurate compass readings, it is proposed to modify the algorithm. After the completion of the error correction process, the decision-making process comes into play. SLAM will be used to make decisions for the purpose of indoor map drawing.
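
The paper names a dataSegment() function inside the Java API but does not define it. A rough Python equivalent of the described rule-based split into temporal and landmark-based records might look like this (the segmentation rule itself is an assumption):

```python
def data_segment(records):
    """Split incoming sensor records into the two categories the text
    describes: time-series motion data vs. landmark-based semantics."""
    tables = {"temporal": [], "landmark": []}
    for rec in records:
        # Assumed rule: records carrying a detected landmark (e.g. a door
        # found by image processing) are landmark semantics; the rest are
        # timestamped gait/compass readings.
        key = "landmark" if "landmark" in rec else "temporal"
        tables[key].append(rec)
    return tables

records = [
    {"t": 0.02, "accel": 9.9, "heading": 87.0},
    {"t": 0.30, "landmark": "door", "source": "image_processing"},
]
tables = data_segment(records)
```

In the proposed architecture this would be called iteratively on each incoming batch until every record is stored in its corresponding DB table.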

Localization is one of the major areas that need to be considered in indoor navigation. The SLAM technique used in robotics is one possible approach to discovering the environment and constructing a map from the gathered data, as part of identifying the location and building the map. At the end of the decision-making process, the indoor map will be generated using "IndoorOpenStreetMaps". When examining the literature, the authors found multiple techniques for storing the coordinates connected with maps. Since "IndoorOpenStreetMaps" provides a better approach to mapping, it will be used during the map generation step.

Figure 4. Map generation and error correction algorithm for AccessBIM.

IV. CONCLUSION

This paper presented an approach to creating an indoor map that can support vision-impaired people navigating with their smartphones. The proposed algorithm shows how to capture data to generate maps, along with a further enhancement of the algorithm for error correction. The key idea is to build an open, participatory and collaborative system through which users can contribute environment information using their mobile devices (crowd-sourced). To address the issues involved in the application of collaborative indoor mapping, this paper discussed four processing stages: data pre-processing, data smoothing and error correction, data storage, and "AccessBIM".

ACKNOWLEDGMENT

This work has been supported by Curtin University, Perth, Western Australia and the Sri Lanka Institute of Information Technology, Malabe, Sri Lanka.

REFERENCES

[1]. IndoorOSM. Available online: http://wiki.openstreetmap.org/wiki/IndoorOSM (accessed on 10 April 2013).

[2]. Goetz, M.; Zipf, A. Extending OpenStreetMap to Indoor Environments: Bringing Volunteered Geographic Information to the Next Level. In Proceedings of Urban and Regional Data Management: UDMS Annual 2011, Delft, The Netherlands, 28–30 September 2011; pp. 47–58.

[3] K. Kaemarungsi and P. Krishnamurthy, "Modeling of Indoor Positioning Systems Based on Location Fingerprinting," in Proc. of IEEE InfoCom 2004, Hong Kong, China, March 2004.

[4] Bitsch Link, J. A., Smith, P., and Wehrle, K. FootPath: Accurate Map-based Indoor Navigation Using Smartphones. In International Conference on Indoor Positioning and Indoor Navigation (IPIN), 2011.

[5] Smith, P. A. Tracking Progress on Path for Indoor Navigation. Bachelor thesis, RWTH Aachen, April 2011.

[6] Fabritius, G. F. BurrowView - Path Reconstruction and Localization in a Subterranean Animal Habitat Observation System. Diploma thesis, RWTH Aachen, September 2009.

[7] Bailey, T., and Durrant-Whyte, H. Simultaneous Localization and Mapping (SLAM): Part II. Robotics & Automation Magazine, IEEE 13, 3 (2006), 108-117.

[8] Durrant-Whyte, H., and Bailey, T. Simultaneous Localization and Mapping: Part I. Robotics & Automation Magazine, IEEE 13, 2 (2006), 99-110.

[9] Shin, H., Chon, Y., and Cha, H. SmartSLAM: Constructing an Indoor Floor Plan using Smartphone. Tech. Rep. MOBED-TR-2010-2, Yonsei University, 2010.

[10] Scholz, I., and OpenStreetMap community. Java OpenStreetMap Editor, January 2012. http://josm.openstreetmap.de/, accessed 2013-05-23.

[11] OpenStreetMap community. PicLayer Plugin for Java OpenStreetMap Editor, February 2012. http://wiki.openstreetmap.org/wiki/JOSM/Plugins/PicLayer, accessed 2013-05-12.

[12] Pomp, A. Indoor Navigation - Comparing Different Indoor Location Determination Approaches. Seminar, July 2011.

[13] J.A.D.C.A. Jayakody, N. Abhayasinghe, I. Murray, “AccessBIM Model for Environmental Characteristics for Vision Impaired Indoor Navigation and Way Finding,” in International Conference on Indoor Positioning and Indoor Navigation, November 2012, [Online]. Available: http://www.surveying.unsw.edu.au/ipin2012/proceedings/submissions/98_Paper.pdf [Mar. 5, 2013].



- chapter 25 -

Hybrid Positioning, multi-positioning



Study aimed at advanced use of the indoor positioning infrastructure IMES

Yutaka Yamada
Graduate School of Marine Science and Technology
Tokyo University of Marine Science and Technology
Tokyo, Japan
[email protected]

Nobuaki Kubo
Graduate School of Marine Science and Technology
Tokyo University of Marine Science and Technology
Tokyo, Japan
[email protected]

Abstract—IMES (Indoor Messaging System), devised by the Japan Aerospace Exploration Agency (JAXA), is an indoor positioning system formally approved by the U.S. government as a ground complement to the Quasi-Zenith Satellite System (QZSS). It is a low-power radio station that transmits a message, including the transmitter's position, with a signal compatible with GPS. In a standard installation environment, the service area of each transmitter is as narrow as about 10 m in radius. Although IMES lacks ranging and time synchronization, it can be used for seamless positioning in combination with GNSS. However, problems remain to be solved. One of them is the complexity of indoor radio-wave propagation caused by the installation environment of the transmitter. In the original IMES specification, the transmission rate is 50 bps, following that of the GPS navigation message; thus the message cannot be correctly received by a moving receiver even if the signal is being correctly tracked. To overcome this problem, we tried accelerating the message transmission rate to 250 bps and 500 bps. The experimental results show that a transmission rate of 250 bps worked well inside a service area of 20 m in radius, owing to the reduced message reception time. In the case of 500 bps, the receiving range of the signal was reduced. As a result, the 250 bps transmission rate was additionally written into the specification. We are preparing experiments to confirm which data rates are optimal in certain service areas.

Keywords—IMES, indoor positioning, navigation message, QZSS, transmission rate

I. INTRODUCTION

It has been six years since the specification of IMES was published. In the meantime, we have conducted repeated trial runs, and each experimental result revealed a variety of problems. The main concept of IMES is "to transmit position and/or ID of the transmitter with the same RF signal as GPS and the Quasi-Zenith Satellite System (QZSS)" [1].

The situation in which the message is not transmitted correctly is a problem that should be conquered immediately. Therefore, we tried to speed up the message transmission rate of IMES. Based on our experimental results, the high-speed transmission rate of 250 bps is now clearly reported in IS-QZSS (Interface Specification for QZSS).

Although the system is still at the stage of proof-of-concept experiments, many applications exist, and some interesting applications using the IMES message are being developed. However, it must not be forgotten that IMES is an infrastructure for seamless positioning. We are next conducting research to raise the position-estimate accuracy, aiming at advanced location-based services using IMES.

A. Positioning of IMES

The important concept of IMES is offering position information from a stand-alone transmitter, without ranging or time synchronization. Although we can identify the nearest transmitter, the position information is the preset self-position of that transmitter.

On the other hand, IMES is a ground complement to the QZSS and an infrastructure for seamless positioning. Although IMES is an easy-to-use system, it must be a high-integrity system for advanced use.

B. Position Estimate using IMES

The position information of IMES is the preset self-position of the transmitter, so position-estimate accuracy depends on the installation interval of the transmitters. Since only the PRN code is transmitted at any one time, we can determine which transmitter's service area a receiver is in. However, in this state, the self-position of a receiver can be estimated only at zone level. It can be said that the position-estimate accuracy as an indoor positioning infrastructure is low.
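
Zone-level positioning as described reduces to reporting the preset position of whichever transmitter is currently received, so accuracy is bounded by the transmitter installation interval. A minimal sketch (the PRN IDs and coordinates are invented for illustration):

```python
# Preset self-positions broadcast by each IMES transmitter: (lat, lon, floor).
transmitters = {
    "PRN173": (35.6655, 139.7920, 1),
    "PRN174": (35.6657, 139.7923, 1),
}

def zone_estimate(received_prn):
    """The receiver's estimate is simply the transmitter's preset position;
    the error is therefore up to the transmitter installation spacing."""
    return transmitters[received_prn]
```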

II. INDOOR POSITIONING EXPERIMENT

A. Read Error of Navigation Message

Owing to multipath fading, the signal strength varies wildly, with no dependency on the range from each transmitter. The receiver sometimes failed to acquire the message, even a short ID.

Table 1 shows numerical values representing the integrity of IMES reception. We accurately evaluate the rate of successful signal reception, which we define as 'Continuity'. 'Integrity' means the rate of correct reception without bit error among the successfully received data. 'Integrity' is good but 'Continuity' is bad in the observations at fixed points; on the contrary, 'Integrity' is not good but 'Continuity' is good in the strolling observation.
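
The two rates defined above can be computed directly; a sketch with invented counts (the actual Table 1 values are not restated in the text):

```python
def reception_rates(expected, received, error_free):
    """Continuity = successfully received / expected messages;
    Integrity = bit-error-free messages / successfully received."""
    continuity = received / expected
    integrity = error_free / received
    return continuity, integrity

# e.g. 500 expected messages over 5 minutes, 400 received, 392 without bit errors
cont, integ = reception_rates(500, 400, 392)
```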





Table 1. Rates of 'Integrity' and 'Continuity' of data reception. The values are averages over 5 minutes (500 data points).

To eliminate such signal errors, it is necessary to examine problems such as the installation environment of the transmitter, the reception algorithm, and so on.

This is not a problem that can be dismissed simply as "bad communication quality", the way a signal transmission error is regarded in wireless communications. It is also a serious problem for an indoor positioning infrastructure whose elementary proposition is "correctness of information".

The signal-processing algorithms that minimize signal errors in wireless LANs could be applied. We believe that the read errors of the navigation message in the experiment originate in the signal processing of the receiver. Although there are several factors, such as bit omission or read errors caused by signal delay, the problems cannot be solved by the receiver alone. The transmitter also needs improvement: a faster message transmission rate and the development of a coding algorithm that minimizes signal errors.

B. Speeding up Data Transmission Rate

From the results of the indoor experiment, the direct IMES wave seems to be disturbed by waves reflected from the surrounding walls, ceiling, floor and moving bodies. The IMES transmitter sends a navigation message, including its own location and/or ID, to be used for indoor guidance. However, the IMES receiver often fails to read the messages due to the complex radio propagation in the indoor environment, as the experimental results implied. It takes several seconds to acquire a message at the present bit rate of 50 bps, which was designed to synchronize with the GPS data transmission rate. The slow transmission speed causes acquisition errors for a moving receiver. Therefore, we conducted a reception experiment with a message bit rate of 250 bps.
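
The benefit of a higher bit rate can be put in numbers: the time to clock one complete message through the channel scales inversely with the rate. Assuming a hypothetical message length (the actual IMES word length is not restated here):

```python
def reception_time(message_bits, bit_rate_bps):
    """Seconds needed to clock one complete message through the channel."""
    return message_bits / bit_rate_bps

msg = 300  # assumed message length in bits, for illustration only
for rate in (50, 250, 500):
    print(f"{rate:>3} bps -> {reception_time(msg, rate):.1f} s")
```

A moving receiver must stay inside the service area (and keep the signal tracked) for this whole interval, which is why shortening it from seconds to fractions of a second matters more than raw throughput.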

Fig. 1. IMES transmitter (left) and receivers (right). The right one is a u-blox receiver and the left one is Sirius.

We added a function for decoding the 250 bps navigation message to the Sirius receiver, which was developed at our laboratory, and evaluated the bit errors in that case. In the output check carried out with the modified Sirius receiver, the navigation message was acquired normally.

In the original IMES specification, the transmission rate is 50 bps, following that of the GPS navigation message. At that rate the message cannot be correctly received by a moving receiver even if the signal is being correctly tracked. To overcome this problem, we raised the message transmission rate to 250 bps and 500 bps. The experimental results show that the 250 bps rate worked well inside the service area of 20 m radius, owing to the shorter reception time of the message. At 500 bps, the receiving range of the signal was reduced. As a result, the 250 bps transmission rate was added to the specification.
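The effect of the higher bit rate on acquisition time can be illustrated with a quick calculation. This sketch assumes a GPS-like 300-bit subframe as the message unit; the actual IMES message length may differ.

```python
# Back-of-envelope message acquisition time. The 300-bit subframe
# length is an assumption borrowed from the GPS navigation message.
SUBFRAME_BITS = 300

def acquisition_time(bit_rate_bps: float) -> float:
    """Seconds needed to receive one complete subframe."""
    return SUBFRAME_BITS / bit_rate_bps

for rate in (50, 250, 500):
    print(f"{rate:3d} bps -> {acquisition_time(rate):.1f} s per subframe")
```

Under this assumption the 50 bps rate needs 6 s per subframe, consistent with the "several seconds" observed, while 250 bps cuts this to about 1.2 s.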

The IS-QZSS annex (A 1.3.1.1.4, Navigation message) now states: "The bit rate is defined as the 'High-Speed Bit Rate' (250 bps) and the 'GPS Compatible Bit Rate' (50 bps)." [2]

III. HYBRID POSITIONING EXPERIMENTS

A. Outline of the experiment

IMES transmits the self-position of a transmitter in a navigation message. Since the PRN code is transmitted at the same time, we can determine which transmitter's service area a receiver is in. In this state, however, the self-position of a receiver can be estimated only at the zone level, so the position estimation accuracy as an indoor positioning infrastructure is low. We therefore sought to improve the position estimation accuracy of IMES by combining it with other indoor positioning systems.

Fig. 2 The layout of Techno Plaza

B. Equipment used

The IMES transmitter and receiver and an indoor space information measuring device were used.

IMES transmitter: Hitachi Industrial Equipment Systems indoor GPS transmitter

IMES receivers: u-blox EVK-5H receiver; iP-Solutions GPS/Galileo/QZSS L1 RF recorder


Fig. 3 IMES transmitter & receiver

Indoor space information measuring device: NIKON-Trimble TIMMS (Trimble Indoor Mobile Mapping Solution)

Fig. 4 TIMMS (Trimble Indoor Mobile Mapping Solution); right: IMES receiver on the TIMMS rack

C. Experiment method

Mobile measurement: The IMES receiver was mounted on the TIMMS rack, and simultaneous mobile measurement in the room was carried out.

Fig. 5 IMES receiver on TIMMS rack

Calibration: Beforehand, a calibration point was installed at the entrance of the test site, and its coordinates were measured by VRS survey. The measurement data were recorded against these highly precise world coordinates, which served as the reference point for TIMMS.

Fig. 6 Calibration with a control point

Installation position of the IMES transmitter: Since multipath fading affected positioning under the ceiling-mounted installation used in the former experiment, owing to the building construction, we installed the transmitter on a stepladder in the central part of the indoor space.

D. Experiment results

Fig. 7 The trajectory data of TIMMS.

Since all the positioning infrastructures were measured simultaneously, the measurement data could be unified.

As shown in Fig. 7, the move trajectory of TIMMS was digitized, so the received signal strength of IMES could be analyzed against a high-precision distance between transmitter and receiver.

Fig. 8 Tracking Value of Frontend.


IV. CONCLUSION

We tried to improve both the positioning accuracy and the message-reception accuracy of IMES for its advanced use, without sacrificing the ease of use that characterizes the system. Since IMES has the same signal structure as GPS, a slight modification of the firmware of a common GPS receiver enables seamless positioning. We conducted evaluation experiments on how the radio wave propagates indoors and how it is blocked by the surrounding structures. The usable data transfer rate is strongly affected by the propagation conditions. Raising the data transfer rate to 250 bps reduced the message-reception errors of the receiver within the comparatively small IMES service area. Receiver development has been carried out up to now; we next plan to turn to the development of advanced applications.

ACKNOWLEDGMENT

The authors appreciate the support of many people. This work was supported by colleagues at the Japan Aerospace Exploration Agency (JAXA), Shibaura Institute of Technology, the National Defense Academy of Japan, Hitachi Industrial Equipment Systems, and Nikon-Trimble.

REFERENCES

[1] D. Manandhar and H. Torimoto, "GPS World Special Web Exclusive," pp. 38–46, May 2011.
[2] Japan Aerospace Exploration Agency, "IS-QZSS Ver. 1.5 ANNEX," March 27, 2013.


Health Monitoring of WLAN Localization

Infrastructure using Smartphone Inertial Sensors

Raja Umair Haider†, Christos Laoudias∗, Christos G. Panayiotou∗

†Department of Electrical Engineering, Linkopings Universitet, Linkoping, Sweden

Email: [email protected]
∗KIOS Research Center for Intelligent Systems and Networks, University of Cyprus, Nicosia, Cyprus

Email: [email protected], [email protected]

Abstract—Recently, the use of the ubiquitous WLAN infrastructure, i.e. Access Points (AP), has become a popular solution for determining the unknown user location inside buildings. In general, WLAN-based localization techniques work well under the assumption that there are no changes in the underlying network infrastructure during localization. However, the original AP deployment is vulnerable to various changes due to hardware failures or malicious attacks. For instance, an AP may become unavailable during positioning, due to an unpredictable failure or if an intruder simply unplugs it. Under these conditions, faults are injected into the localization system that may severely degrade accuracy. The main idea of this work is to develop an automated solution for monitoring the health status of the WLAN infrastructure. In particular, we exploit the high computing capabilities of modern smartphones and leverage their onboard inertial sensors for detecting possible AP faults. The experimental results indicate that the proposed method exhibits a high correct detection rate, while false alarms are kept low.

Keywords—indoor localization; fault detection; inertial sensors

I. INTRODUCTION

Wireless Local Area Network (WLAN) is a commonly used technology for the provision of reliable location information in GPS-deprived indoor environments. Location Fingerprinting (LF) is a popular positioning method for WLAN, in which the measured signal is associated with the corresponding physical location. In particular, Received Signal Strength (RSS) measurements from the surrounding Access Points (AP) can be easily recorded at several known locations with a mobile device and associated with the location coordinates. These data are stored in a database, known as the radiomap, and can be used later to find a user's unknown location inside the area of interest [1]. However, LF depends heavily on the underlying WLAN infrastructure and is very sensitive to changes in the AP deployment, thus any fault in these APs can severely degrade accuracy [2]. For instance, an AP may become unavailable during positioning, while it should be observed under normal conditions, due to equipment failure or in case an attacker unplugs it. This scenario constitutes an AP failure that needs to be detected and mitigated.

[Figure: block diagram showing RSS Values and the Radiomap feeding the Location Fingerprinting block, followed by a Kalman Filter; its output together with a Pedestrian Dead Reckoning track feeds the Fault Detection block, which outputs a Decision.]
Fig. 1. Block diagram of the proposed health monitoring system.

Authors in [3] try to detect such faults by comparing the distance between successive LF estimates with a threshold, as the user is not expected to travel a long distance between two LF estimates. However, this scheme does not take into account the fact that successive LF estimates may occasionally be far from each other due to noise in the RSS values, thus increasing the number of false detections. To address this issue, in our work we develop an automated solution for monitoring the health status of the WLAN infrastructure. While localizing users carrying smartphones with LF, at the same time we also leverage the on-board inertial sensors (i.e., accelerometer, gyroscope and digital compass) to track them by means of Pedestrian Dead Reckoning (PDR). The main idea is to exploit these two parallel location streams, i.e. one coming from LF that is also processed with a Kalman Filter (KF) and the other from PDR, in order to detect AP failures.

The rest of the paper is organized as follows. Section II presents the proposed health monitoring system in detail. The experimental evaluation of our system is presented in Section III. Finally, we provide concluding remarks and discuss our future steps in Section IV.

II. HEALTH MONITORING SYSTEM

The block diagram of the proposed health monitoring system is illustrated in Fig. 1. The LF component uses the RSS values observed while walking to estimate the unknown user location. Subsequently, the KF component uses an appropriate mobility model to smooth the current LF location estimate and filter out some inaccurate location estimates that do not reflect the travelled path. The PDR component implements an infrastructure-free approach that exploits sensory data for tracking the user. Finally, the KF and PDR location streams are provided as inputs to the Fault Detection component that signifies the presence of an AP fault or not. In the following, we present these components in detail.


A. Location Fingerprinting (LF) Module

LF systems use a set of predefined reference locations L: ℓi = (xi, yi), i = 1, . . . , M to collect RSS values from N APs deployed in the target area, where M is the total number of reference locations inside the area. A reference fingerprint ri = [ri1, . . . , riN]T associated with location ℓi is a vector of RSS samples, and rij denotes the RSS value related to the j-th AP. Table I shows a sample radiomap with real data, where NaN values indicate that an AP is not detected at the corresponding locations.

During positioning the user resides at an unknown location ℓ and observes another fingerprint, e.g. s = [−70, −45, NaN, −30]. The objective of LF is to estimate the user location given s, and we have used the probabilistic Minimum Mean Square Error (MMSE) method [4]. The NaN values are replaced with a low RSS value in the MMSE computations, e.g. −95 dBm in our setup, to penalize the missing APs in both the radiomap and the positioning fingerprints. If an AP is not available during positioning due to a fault, then its value in fingerprint s will be replaced by this low RSS value, thus introducing errors in the estimated location.
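The MMSE estimate with the −95 dBm substitution can be sketched as follows. The paper only cites the method of [4], so the independent Gaussian RSS likelihood and its SIGMA = 5 dB noise level are our assumptions for illustration.

```python
import math

NAN_FILL = -95.0  # dBm value substituted for undetected APs (from the paper)
SIGMA = 5.0       # assumed RSS noise std dev; not specified in the paper

def mmse_estimate(radiomap, observed):
    """radiomap: list of ((x, y), [rss per AP]); observed: [rss or None].

    Returns the MMSE location as the likelihood-weighted mean of the
    reference locations (a sketch of the probabilistic method in [4]).
    """
    def fill(v):
        return NAN_FILL if v is None else v

    s = [fill(v) for v in observed]
    weights, total = [], 0.0
    for loc, fp in radiomap:
        r = [fill(v) for v in fp]
        # log-likelihood under independent Gaussians, one term per AP
        ll = sum(-((a - b) ** 2) / (2 * SIGMA ** 2) for a, b in zip(s, r))
        w = math.exp(ll)  # may underflow to 0 for very distant fingerprints
        weights.append((loc, w))
        total += w
    x = sum(loc[0] * w for loc, w in weights) / total
    y = sum(loc[1] * w for loc, w in weights) / total
    return x, y
```

A missing AP (None) is filled with −95 dBm on both sides, which is how a faulty AP drags the estimate towards regions where that AP's signal is weak.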

B. Kalman Filter (KF) Module

The LF location estimates may sometimes deviate from the actual user location due to noise in the RSS measurements. In order to smooth the LF estimates and remove outlier locations we employ a KF module that uses as input the current LF estimate ℓLF = (xLF, yLF). The KF projects the previous filter estimate ahead in time, according to the underlying mobility model, and then uses the LF location to update the prediction. In this work, we assume that the user is moving at a constant speed and the steps of the iterative KF algorithm are given by

āk = Φ ak−1 (1)
Pk = Φ Pk−1 ΦT + Γ Q ΓT (2)
Kk = Pk MT (M Pk MT + R)−1 (3)
ak = āk + Kk (bk − M āk) (4)
Pk = (I − Kk M) Pk, (5)

where ak = [x(tk) y(tk) ux(tk) uy(tk)]T is the combined vector of the user location [x(tk) y(tk)]T and velocity [ux(tk) uy(tk)]T at time tk, while āk denotes the one-step-ahead prediction of the filter. Moreover, Pk is the error covariance matrix, Kk is the KF gain, bk is the LF estimate ℓLF at time tk, and I denotes the identity matrix. The measurement and process noise covariance matrices are represented by R = σR² I

TABLE I
SAMPLE RADIOMAP (RSS VALUES IN dBm)

Location  AP1  AP2  AP3  AP4
ℓ1        -30  -70  NaN  -58
ℓ2        -49  NaN  -65  -65
ℓ3        -70  -30  NaN  -80
ℓ4        -80  NaN  NaN  -70
ℓ5        -65  NaN  -49  -49

and Q = σQ² I, respectively. The parameter σR² represents the uncertainty in the LF estimates, i.e. the expected localization error between the LF estimate and the actual user location, and we have used σR² = 3 m. The parameter σQ² represents the uncertainty in our mobility model, i.e. the random user acceleration, and we assume σQ² = 0.1 m/s². The output of the KF module at time tk, ℓKF = (xKF, yKF), is the filtered user location in vector ak, which is smoother compared with the raw LF estimates.

C. Pedestrian Dead Reckoning (PDR) Module

PDR exploits the kinematics of human movement and is defined as the process of estimating the present position by projecting travelled distance and heading from a known starting point [5]. The applied PDR approach can be divided into two phases, namely step detection and step heading determination. The PDR module detects and counts the number of steps taken and then projects each step with the corresponding heading according to

xk = xk−1 + S sin(θk) (6)

yk = yk−1 + S cos(θk), (7)

where the sin and cos functions are used for the projection of the heading angle θk and we assume a constant step length S = 0.75 m. The output of the PDR module ℓPDR = (xPDR, yPDR) at every time step tk is equal to (xk, yk) given by (6) and (7).

In this work, we have used the integrated accelerometer and magnetometer sensors on an HTC Desire smartphone to derive position and heading information, while a custom application was developed in the Java programming language to collect data from the sensors. Previous studies suggest that a sampling frequency in the range 16–20 Hz is sufficient to deduce the human walking pattern from the acceleration signal [6]. We sampled the accelerometer sensor at approximately 40 Hz, which is well above the range for reliable step detection, while the orientation sensor was sampled at about 10 Hz to deduce the heading information.

D. Fault Detection (FD) Module

The FD module takes as inputs the location streams from the KF and PDR modules, and decides whether there is a faulty AP or not. The detection mechanism is based on the mean positioning residual ǫ over a section of the travelled path, which is formally given by

ǫ = (1/K) Σ(k=1..K) ǫk,   ǫk = ‖ℓKF(tk) − ℓPDR(tk)‖, (8)

where we have used a series of successive location estimates calculated at time tk, k = 1, . . . , K.

We compare ǫ to an appropriately selected threshold γ and if ǫ ≥ γ, the presence of an AP fault in the system is signified.
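The residual test of (8) together with the threshold rule can be sketched as follows; γ = 5 m is the value the authors select later in the evaluation, and the function names are ours.

```python
import math

def mean_residual(kf_path, pdr_path):
    """Mean positioning residual over a path section (eq. 8)."""
    assert len(kf_path) == len(pdr_path)
    eps = [math.hypot(kx - px, ky - py)
           for (kx, ky), (px, py) in zip(kf_path, pdr_path)]
    return sum(eps) / len(eps)

def ap_fault_detected(kf_path, pdr_path, gamma=5.0):
    """Signify an AP fault when the mean residual reaches the threshold."""
    return mean_residual(kf_path, pdr_path) >= gamma
```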

The intuition in this detection scheme is that in case one or more APs have failed during positioning, then high errors will be introduced in the LF locations and consequently there will be a significant residual between the KF and PDR trajectories. We demonstrate this idea in Fig. 2 with a simple scenario using real data, as the user walks on a straight line for some time. The PDR module estimates the travelled path accurately, while the KF trajectory includes some erroneous locations, even in the fault-free case, due to the noise in the RSS measurements that affects the underlying LF estimates (Fig. 2a). However, if AP1 is faulty during positioning, then the KF trajectory is severely affected and seems to be repelled by the faulty AP (marked with red color), as shown in Fig. 2b. This behaviour is due to the fact that the RSS values of the missing AP1 are replaced by a small value, thus the LF estimates are shifted towards the part of the localization area where the signal from AP1 is weaker.

The proposed detection scheme is based on this observation and employs threshold γ to distinguish between the fault-free and the faulty AP scenario [1]. In the following, we discuss the threshold selection process in more detail.

Threshold Selection: Selecting an appropriate threshold value is crucial and there is a trade-off in order to guarantee a high detection rate while reducing false detections. For instance, setting γ at a very low value essentially means that we consider the KF estimated trajectory to be very accurate, and this could lead to false alarms in the fault-free case. On the other hand, if γ is set to a high value, then a faulty AP might go undetected.

Ideally, the threshold can be selected by physically turning off one by one the APs of the WLAN infrastructure and then measuring a sample path to calculate ǫ. However, considering some large-scale WLAN setups in indoor environments, such as shopping malls and university campuses, it is not practical to manually turn off APs in order to select the threshold. We address this by introducing AP faults artificially. Specifically, we collect some positioning data from the surrounding APs under normal conditions by walking along a predefined path. These data provide the mean positioning residual in the fault-free case, denoted ǫff. Subsequently, we manipulate the fault-free data so that the RSS values corresponding to a faulty AP are replaced by NaN values to indicate a missing AP. The missing RSS values introduce errors in the MMSE algorithm computations, leading to erroneous LF estimates when the original RSS radiomap is used. In this way, a single AP fault is injected artificially, without unplugging or removing it. Computing the mean positioning residual, denoted ǫfaulty, in the case of an artificial fault provides a guideline for selecting the appropriate threshold value according to

ǫff < γ < ǫfaulty. (9)

The steps for selecting γ in order to accommodate all single AP faults are summarized as follows:

1) Collect LF and PDR positioning data under fault-free conditions along a route in the target area.
2) Calculate ǫ between the KF and PDR position estimates over the whole route using (8).
3) Use the fault-free positioning data to create different datasets by artificially injecting single AP faults.

[Figure: two plots of the LF estimates, KF trajectory and PDR trajectory in the X–Y plane (metres), with APs AP1–AP4 marked. (a) Fault-free case. (b) Faulty case where AP1 has failed.]

Fig. 2. Effects of single AP faults during positioning.

4) Repeat step 2 for each faulty AP and calculate ǫfaulty using the corresponding faulty AP dataset.
5) A suitable threshold γ is selected, such that it satisfies (9), where ǫfaulty corresponds to the minimum residual value across all faulty datasets.
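The artificial fault-injection and threshold-selection steps above can be sketched as follows. Using None for NaN and picking γ as the midpoint of the interval in (9) are our illustrative choices; the paper only requires γ to lie inside that interval.

```python
def inject_ap_fault(fingerprints, ap_index):
    """Return a copy of the positioning fingerprints with one AP's RSS
    removed, emulating an unplugged AP (None plays the NaN role)."""
    return [[None if j == ap_index else rss for j, rss in enumerate(fp)]
            for fp in fingerprints]

def select_threshold(eps_ff, eps_faulty_per_ap):
    """Pick gamma between the fault-free residual and the smallest
    faulty-AP residual, per eq. (9); the midpoint is one reasonable choice."""
    eps_faulty = min(eps_faulty_per_ap)
    assert eps_ff < eps_faulty, "faults are not distinguishable on this route"
    return (eps_ff + eps_faulty) / 2.0
```

With the residual ranges reported in the evaluation (ǫff up to 4.2 m, ǫfaulty from 5.5 m), this midpoint rule would land close to the γ = 5 m actually used.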

As a guideline, a route should be selected such that most APs in the target area have strong RSS values along it. This is because if an AP is far away from the route, then its RSS values will be very weak in the fault-free case as well. Thus, if it fails it will not affect the LF estimates significantly and it will be hard to capture with our detection scheme.

III. EXPERIMENTAL EVALUATION

Our experiments were carried out at the premises of the KIOS Research Center, University of Cyprus. A radiomap for the LF system was created using an HP iPAQ device and we use the MMSE method [4] for localization. The WLAN infrastructure inside our experimentation area consisted of four APs, as shown in Fig. 2.

A. Comparison with Related Work

Authors in [3] compare successive LF estimates and if the distance between the current and the previous LF estimate exceeds a threshold value, then a fault is detected. However, the LF estimates can sometimes be very inaccurate between neighbouring user locations due to the noise in the RSS values, rather than actual faults in the APs. Thus, this approach is expected to exhibit a high false detection rate.

Another important difference is that the detection scheme in our health monitoring system detects an AP fault over the whole route, because it relies on ǫ over a short path to set the detection threshold. In contrast, the approach in [3] detects faults on a per-sample basis. In order to compare with our approach, we modified this per-sample detection policy by applying a majority voting rule to achieve per-route detection. Our sample route consists of 28 positions, thus if the algorithm in [3] detects a fault for more than half of the LF estimated location pairs, then a faulty AP is detected. Furthermore, they sample the user's location every 1.5 s and their detection threshold was set to 2.13 m. In our experimental setup, the sampling frequency is 1 Hz, thus we adapted the threshold in the modified approach to γ′ = 1.42 m.
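The modified per-route rule can be sketched as follows; the pairing of successive estimates and the strict-majority vote follow the description above, while the function and variable names are ours.

```python
import math

def per_route_detection(lf_estimates, threshold):
    """Per-sample rule of [3] (distance between successive LF estimates
    exceeding a threshold), lifted to a per-route decision by majority vote."""
    votes, pairs = 0, 0
    for (x0, y0), (x1, y1) in zip(lf_estimates, lf_estimates[1:]):
        pairs += 1
        if math.hypot(x1 - x0, y1 - y0) > threshold:
            votes += 1  # this successive pair looks faulty
    return votes > pairs / 2
```

For the 28-position route above, a fault is thus declared when more than half of the 27 successive pairs exceed γ′ = 1.42 m.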

First, we compare the two detection schemes in terms of false detection rate in case all APs are available during positioning. In order to select the threshold, we studied the mean positioning residual in meters with artificial fault injection for different single AP faults and we found that ǫ ∈ [5.5, 7.9]. Thus, the threshold value was set to γ = 5 m in our detection scheme.

B. False Detection Rate

A fault detection algorithm is desirable to have a low probability of false alarms, i.e., ideally no or only rare detections should occur in the fault-free case. We collected positioning data along the sample route during five test walks with no AP faults and found that ǫ ∈ [3.1, 4.2]. In this case, the modified detection scheme of [3] exhibits 60% false detections, mainly due to some highly erroneous LF estimates between successive user locations caused by noise and outliers in the RSS values. On the other hand, we observe that the threshold value γ = 5 m was not exceeded for any of the test walks. Thus, the proposed detection scheme had no false detections, owing to the KF module that effectively filters out these erroneous LF estimates.

C. Correct Detection Rate

Next, we evaluate the detection performance with real-life AP faults. Specifically, AP1, AP2 and AP3 were manually turned off one by one and then another five test walks were sampled in the presence of a real single AP fault. In this case, ǫ ∈ [4.7, 9.1] for all faulty AP scenarios. According to our findings, the modified detection scheme of [3] attains an 80% correct detection rate when AP1 is faulty, however this drops to 20% and 40% when AP2 or AP3 are faulty, respectively. In contrast, the results indicate that the proposed detection scheme achieves 80% correct detections when either AP1 or AP3 is faulty, i.e. the faulty AP was captured in 4 out of 5 test walks, while the correct detection rate is 100% in case AP2 is faulty. Thus, the proposed fault detection scheme performs much better when faults to AP2 or AP3 are considered.

Our evaluation shows that faults to AP1 and AP3 have a 20% probability of being missed when γ = 5 m. This implies that a lower threshold value, e.g. γ = 4 m, could lead to a higher fault detection rate. Indeed this is the case, however it comes at the expense of 40% false detections in the fault-free case, compared with no false detections when γ = 5 m is used. Thus, the threshold value should be selected in light of this trade-off; if we can tolerate some false detections, then we can be less conservative with the threshold value.

IV. CONCLUSIONS

The proposed health monitoring system relies on location information from two different sources, namely the KF and the PDR modules, produced while the user is being localized. We detect AP faults by comparing the mean positioning residual between these two location streams with a pre-selected threshold value. Our experimental results in a real office environment suggest that our health monitoring system is a promising solution for detecting AP faults in the WLAN localization infrastructure.

As future work, we plan to develop a stand-alone Android application for monitoring the health of the WLAN infrastructure. As another direction, we will enhance our system with fault identification techniques in order to indicate exactly which AP is faulty.

ACKNOWLEDGEMENT

This work falls under the Cyprus Research Promotion

Foundation’s Framework Programme for Research, Techno-

logical Development and Innovation 2009 (DESMI 2009-

2010), co-funded by the Republic of Cyprus and the Eu-

ropean Regional Development Fund, and specifically under

Grant TΠE/OPIZO/0609(BE)/06. The first author would

like to thank Eleftherios Karipidis and Antonios Pitarokoilis at

Linkopings Universitet for their guidance during the master’s

degree thesis leading to this paper.

REFERENCES

[1] R. U. Haider, "Fault detection in WLAN location fingerprinting systems using smartphone inertial sensors," Master's thesis, Linkoping University, Communication Systems, The Institute of Technology, 2012. [Online]. Available: http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-81940
[2] C. Laoudias, M. Michaelides, and C. Panayiotou, "Fault detection and mitigation in WLAN RSS fingerprint-based positioning," Journal of Location Based Services, vol. 6, no. 2, pp. 101–110, Jun. 2012.
[3] S. Yerubandi, B. Kalgikar, M. Gunturu, D. Akopian, and P. Chen, "Integrity monitoring in WLAN positioning systems," in Proceedings of the International Society for Optical Engineering, vol. 735, 2009, pp. 109–121.
[4] T. Roos, P. Myllymaki, H. Tirri, P. Misikangas, and J. Sievanen, "A probabilistic approach to WLAN user location estimation," International Journal of Wireless Information Networks, vol. 9, pp. 155–164, 2002.
[5] D. Gusenbauer, C. Isert, and J. Krosche, "Self-contained indoor positioning on off-the-shelf mobile devices," in Proceedings of Indoor Positioning and Indoor Navigation (IPIN), 2010, pp. 1–9.
[6] L. Bao and S. Intille, "Activity recognition from user-annotated acceleration data," in Pervasive Computing, ser. Lecture Notes in Computer Science. Springer Berlin/Heidelberg, 2004, pp. 1–17.


GPS Line-Of-Sight Fingerprinting for Enhancing Location Accuracy in Urban Areas

Akira Uchiyama, Etsuko Katsuda, Yuki Uejima, Hirozumi Yamaguchi and Teruo Higashino
Graduate School of Information Science and Technology, Osaka University
1-5 Yamadaoka, Suita, Osaka 565-0871, Japan
Email: utiyama, e-katsuda, y-uejima, h-yamagu, [email protected]

Abstract—In recent years, many mobile phones are equipped with GPS, which is essential for our daily life. It is well known that GPS error increases in urban areas due to multipath and shadowing caused by buildings. In this research, we propose a method to enhance location accuracy using GPS Line-Of-Sight (LOS) detection. We exploit the fact that GPS satellite visibility changes depending on the positions of GPS satellites and buildings around a GPS receiver. The proposed method constructs fingerprints of satellite visibility for each position based on a layout and 3D models of buildings. The SNR of a satellite is used to estimate its visibility based on the visibility model, which is pre-defined from a real data set. The proposed method estimates a region which includes the position of the receiver by matching the estimated satellite visibility with fingerprints. Location accuracy enhancement is performed by taking the common region of a location estimated by another localization method and the region estimated by the proposed method. The performance evaluation in a real environment around Osaka station shows that the proposed method can shrink GPS estimated regions to 17% with a correct ratio of 81% on average.

Keywords—Global Positioning System, Fingerprinting, LocationAccuracy Enhancement, Line-Of-Sight Detection, Urban Areas

978-1-4673-1954-6/12/$31.00 © 2012 IEEE

I. INTRODUCTION

A number of mobile location services [1], [2] have been provided with the spread of mobile terminals such as smartphones. In such services, user locations are obtained by several techniques such as the Global Positioning System (GPS), WiFi and Dead Reckoning (DR). Among those, GPS is widely used outdoors due to its wide availability. However, the accuracy drops in urban areas where there are many buildings, due to shadowing and multipath effects [3]. To solve the problem, various approaches have been proposed, such as launching additional satellites and WiFi fingerprints [4]. However, there still remains an essential problem of accuracy, because these methods depend on environments such as user-satellite geometry and the deployment of base stations. To achieve high localization accuracy in urban areas, the combination of multiple methods is essential since the accuracy of localization changes depending on the environment.

In this paper, we propose a method to enhance the accuracy of locations obtained from another localization method, such as GPS, in urban areas by using visibility conditions of satellites and 3D models of buildings. The proposed method exploits invisible satellites in addition to visible satellites. A visible satellite is a satellite whose direct signal is received at a receiver, and an invisible satellite is a satellite whose direct signal cannot be received at a receiver. We focus on the visibility of satellites because an SNR difference between visible and invisible satellites may be clearly observed due to the strong effect of shadowing by buildings, although GPS signals are weak (e.g. weaker than environmental noise [5]) and prone to noise. Hereafter, we denote visible satellites and invisible satellites as LOS (Line-Of-Sight) satellites and NLOS (Non-Line-Of-Sight) satellites, respectively.

We can determine the visibility of a satellite (i.e. whether a GPS satellite is visible or not) at a receiver position if the open-sky region is known. The computation of the open-sky region at the receiver position is possible by using the layout and 3D models of buildings. We call the visibility of satellites in the sky at each receiver position a fingerprint. Although satellite positions always change, we can compute the fingerprints for all possible satellite positions beforehand. The visibility of a satellite at a receiver is estimated from the Signal to Noise Ratio (SNR) of the satellite, which is available on most mobile devices. The proposed method estimates a region where a receiver exists by matching the estimated visibility of the receiver with fingerprints. Finally, a location obtained from another localization method is shrunk by taking the common region of the location and the region estimated by the proposed method, to enhance the accuracy.

The novelty of the proposed method is to utilize GPS signals in terms of their visibility, which is completely different from their current use for pseudorange estimation. The proposed method is especially effective in urban areas because the visibility of GPS satellites differs largely depending on the shapes and layout of the buildings around a receiver. Another advantage of the proposed method is that it can easily be implemented on off-the-shelf mobile devices, since it relies only on SNRs and 3D models of buildings, which can be stored on a server. As far as we know, a similar approach called shadow matching[6] has been proposed by Wang et al. Shadow matching and the proposed method are the same in terms of making use of fingerprints of satellite visibility. Unlike shadow matching, we introduce the allowable Hamming distance as a parameter to mitigate the effect of incorrect estimation of satellite visibility, and we evaluate its performance through a real experiment using smartphones.

To evaluate the proposed method, we have conducted a real experiment using a Nexus S, based on fingerprints around Osaka station obtained from its 3D model. In the experiment, we selected GPS localization as the localization method whose accuracy is enhanced by the proposed method. The results show that the proposed method can shrink GPS estimated regions to 17% of their original size with a correct ratio of 81%.


2013 International Conference on Indoor Positioning and Indoor Navigation, 28-31st October 2013

Fig. 1. Overview of location accuracy enhancement

II. LOCATION ACCURACY ENHANCEMENT USING SATELLITE VISIBILITY

A. Overview

In the proposed method, a user observes signals from $n$ GPS satellites in the sky and records their SNRs $\vec{S} = (s_1, s_2, \ldots, s_n)$. The user sends $\vec{S}$ and a location obtained by another localization method to a server and receives an enhanced location from the server. The server executes the following process to enhance location accuracy.

The visibility $LOS_i$ of satellite $i$ is defined as below.

$$LOS_i = \begin{cases} 1 & (\text{if } i \text{ is LOS}) \\ 0 & (\text{if } i \text{ is NLOS}) \end{cases}$$

The satellite visibility is estimated based on $s_i \in \vec{S}$. We denote the satellite visibility corresponding to $\vec{S}$ as $\vec{v}$. A method to estimate the satellite visibility from SNRs is described in Section II-B.

Figure 1 shows the overview of the proposed method. The position of a satellite is defined by a pair of an azimuth and an elevation. We denote the fingerprint at a position $l$ as $\vec{v}_l$, which represents the visibility of the satellites in the sky. When $n$ satellites are in the sky,

$$\vec{v}_l = (LOS_{l,1}, LOS_{l,2}, \ldots, LOS_{l,n})$$

where $LOS_{l,i}$ represents the visibility of satellite $i$ at position $l$ in the fingerprints. $\vec{v}_l$ is computed beforehand at the server based on the 3D models of buildings. The position of a receiver is estimated as $l$ if the fingerprint $\vec{v}_l$ is the nearest among all fingerprints to the visibility $\vec{v}$ observed at the receiver. The nearest fingerprint is usually not unique, since fingerprints depend on the layout of buildings. Therefore, the set of estimated positions (i.e. a region) that share the nearest fingerprint is used to enhance the accuracy of a location obtained from another localization method such as GPS or WiFi. The enhancement is performed by taking the common region of the location obtained by the other localization method and the region estimated by the proposed method.

B. Visibility Estimation Using SNR

We construct a visibility model to distinguish LOS and NLOS satellites based on SNRs. The visibility model defines the probability of LOS and NLOS for each SNR value. To construct the visibility model, we collected real SNR data for LOS and NLOS satellites at three sampling points in Osaka University. We fixed a Nexus S with a mobile battery at each point and measured the SNR of each satellite at a sampling rate of 1 Hz for 26 hours. We determined the LOS and NLOS regions in the sky at each sampling point based on hemispherical photos and on azimuths and elevations measured by a laser distance meter. Since measurement errors in this process of determining LOS/NLOS regions are unavoidable, we employ two filtering steps. The first is to filter out samples in regions within ±2 degrees in azimuth and elevation of the borders of the LOS/NLOS regions. The second is to filter out satellites at low elevations, since they are often affected by moving objects such as pedestrians and vehicles. We empirically set the low-elevation threshold to 20 degrees.

Fig. 2. Visibility Model

After applying these two filters, the numbers of LOS and NLOS samples were approximately 422,000 and 853,000, respectively. We normalize the LOS and NLOS samples separately and use the normalized values for the visibility model.

From the collected data set, we build the visibility model, which defines the visibility $v(s_i)$ and its likelihood $p(s_i)$ from the SNR $s_i$ of a satellite $i$. Let $L_s$ and $NL_s$ be the normalized numbers of LOS and NLOS samples with SNR $s$, respectively. Then, $v(s)$ and $p(s)$ are defined as below.

$$v(s) = \begin{cases} 1 & (\text{if } L_s \ge NL_s) \\ 0 & (\text{if } L_s < NL_s) \end{cases} \qquad (1)$$

$$p(s) = \max\left(\frac{L_s}{L_s + NL_s}, \frac{NL_s}{L_s + NL_s}\right) \qquad (2)$$

If no sample is collected for an SNR $s$, $p(s)$ is not defined. We do not use a satellite with such an SNR $s$, since it is hardly ever observed due to hardware characteristics.

The built visibility model is shown in Fig. 2. From the model, we can see that $p(s)$ ranges from 0.5 to 1.0 depending on $s$. If we use a visibility estimated with low likelihood, the result may be incorrect. Hence, we exclude from the visibility estimation any visibility estimated with likelihood $p_{TH}$ or lower.
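The visibility model of Eqs. (1)-(2) and the likelihood filter above can be sketched as follows. This is a minimal illustration, not the authors' code: the per-SNR sample counts are hypothetical stand-ins for the 26-hour measurement campaign, and the function names are our own.

```python
# Minimal sketch of the visibility model of Eqs. (1)-(2). The sample
# counts below are hypothetical; in the paper they come from 26-hour SNR
# recordings of satellites with known LOS/NLOS status.

def build_visibility_model(los_counts, nlos_counts):
    """Return {snr: (v, p)} from raw per-SNR sample counts."""
    # Normalize the LOS and NLOS histograms separately, as in the paper.
    los_total = sum(los_counts.values())
    nlos_total = sum(nlos_counts.values())
    model = {}
    for s in set(los_counts) | set(nlos_counts):
        L = los_counts.get(s, 0) / los_total      # normalized L_s
        NL = nlos_counts.get(s, 0) / nlos_total   # normalized NL_s
        v = 1 if L >= NL else 0                   # Eq. (1)
        p = max(L, NL) / (L + NL)                 # Eq. (2): likelihood in [0.5, 1]
        model[s] = (v, p)
    return model

def estimate_visibility(snrs, model, p_th=0.7):
    """Map satellite SNRs to visibility, dropping unreliable estimates."""
    out = {}
    for sat, s in snrs.items():
        if s not in model:        # p(s) undefined: no training sample for this SNR
            continue
        v, p = model[s]
        if p > p_th:              # keep only estimates above the threshold p_TH
            out[sat] = v
    return out

# Hypothetical histogram: high SNRs mostly LOS, low SNRs mostly NLOS.
los = {20: 50, 30: 400, 40: 800}
nlos = {10: 600, 20: 300, 30: 100}
model = build_visibility_model(los, nlos)
vis = estimate_visibility({"g1": 40, "g2": 10, "g3": 20}, model)
```

Note that $p(s)$ can never fall below 0.5 here, matching the range observed in Fig. 2, because it is the larger of two fractions that sum to one.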

C. Construction of Fingerprints

The fingerprints are constructed by computing the visibility of each satellite at each receiver position in a given 3D layout and set of building models. The computation


of the visibility is equivalent to computing the lit and shadowed regions when the satellite is regarded as a light source. Therefore, algorithms from image processing such as ray tracing[7] can be used to construct the fingerprints. The fingerprints for all possible satellite positions can be computed and prepared beforehand at the server.

Fig. 3. Examples of fingerprints around Osaka station: (a) fingerprint of satellite g1 ($LOS_{g_1} = 0$ or $1$); (b) fingerprint of satellite g2 ($LOS_{g_2} = 0$ or $1$); (c) overlap of g1 and g2, labeled by $(LOS_{g_1}, LOS_{g_2}) \in \{(0,0), (0,1), (1,0), (1,1)\}$.

In this paper, we use the 3D models of buildings around Osaka station available at the Trimble 3D Warehouse[8]. For example, Figs. 3(a) and 3(b) show the LOS and NLOS regions for satellites g1 and g2, respectively. The target area is divided into four regions by overlapping these regions, as shown in Fig. 3(c). In other words, the visibility $\vec{v} = (LOS_{g_1}, LOS_{g_2})$ in the area is one of $(0, 0)$, $(0, 1)$, $(1, 0)$, $(1, 1)$. Likewise, the target area is divided into at most $2^n$ regions when $n$ satellites are in the sky.

To avoid incorrect estimation for satellites at low elevations, we filter out satellites at elevations of $\phi_{TH}$ or lower. We empirically set $\phi_{TH} = 20$ degrees.
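The partitioning of the target area into at most $2^n$ regions can be sketched as below. This is an illustration under assumed inputs: the shadow masks stand in for the ray-traced LOS/NLOS maps that the paper computes from the 3D building models, and the grid, mask shapes, and function names are hypothetical.

```python
# Sketch of fingerprint construction: each grid cell's fingerprint is the
# tuple of satellite visibilities there, so n satellites split the area
# into at most 2**n regions. The shadow masks below are hypothetical
# stand-ins for ray-traced NLOS maps from the 3D building models.
from collections import defaultdict

def build_fingerprints(shadow_masks, width, height):
    """shadow_masks: {sat: set of (x, y) cells in shadow (NLOS)}."""
    fingerprints = {}
    for x in range(width):
        for y in range(height):
            fingerprints[(x, y)] = tuple(
                0 if (x, y) in shadow else 1   # 0 = NLOS, 1 = LOS
                for shadow in shadow_masks.values()
            )
    return fingerprints

def regions(fingerprints):
    """Group cells that share the same fingerprint into regions."""
    by_fp = defaultdict(list)
    for cell, fp in fingerprints.items():
        by_fp[fp].append(cell)
    return by_fp

# Two satellites over a 4x4 grid: g1 shadows the left half, g2 the bottom row.
masks = {
    "g1": {(x, y) for x in range(2) for y in range(4)},
    "g2": {(x, 0) for x in range(4)},
}
fps = build_fingerprints(masks, 4, 4)
parts = regions(fps)   # at most 2**2 = 4 regions, all four occur here
```

With these toy masks the 16 cells fall into exactly the four regions $(0,0)$, $(0,1)$, $(1,0)$, $(1,1)$ of Fig. 3(c).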

D. Matching Estimated Satellite Visibility to Fingerprints

Let $n$ be the number of satellites whose visibility likelihood is greater than $p_{TH}$. A receiver position is estimated by matching its satellite visibility $\vec{v}$ to the fingerprints. The distance between a fingerprint $\vec{v}_l$ and an estimated visibility $\vec{v}$ is defined by the Hamming distance $H(\vec{v}, \vec{v}_l)$, which is

$$H(\vec{v}, \vec{v}_l) = \sum_{i=1}^{n} LOS_i \oplus LOS_{l,i}.$$

In the proposed method, a receiver position is estimated as the position $l$ that has the shortest Hamming distance, as below.

$$\arg\min_{l \in FP} H(\vec{v}, \vec{v}_l) \qquad (3)$$

Since some of the estimated visibility may be incorrect, we define the correctness of the estimation based on the allowable Hamming distance $M_h$. We call an estimation correct if the Hamming distance $H(\vec{v}, \vec{v}_l)$ between the estimated visibility $\vec{v}$ and the visibility of the real position $l$ is $M_h$ or less. There are generally multiple positions that have the shortest Hamming distance. The aim of the proposed method is to estimate a correct set of points (i.e. a region) that is as small as possible in terms of the number of points (i.e. the area size).
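The matching step of Eq. (3) and the $M_h$ correctness criterion can be sketched as follows; the fingerprint values and position labels are hypothetical, and the function names are our own.

```python
# Sketch of the fingerprint matching of Eq. (3): the estimated region is
# the set of positions whose fingerprints attain the minimum Hamming
# distance to the observed visibility vector. Fingerprints are hypothetical.

def hamming(v, vl):
    # H(v, vl) = sum_i of LOS_i XOR LOS_{l,i}
    return sum(a ^ b for a, b in zip(v, vl))

def match(v, fingerprints):
    """Return every position whose fingerprint attains the minimum distance."""
    best = min(hamming(v, fp) for fp in fingerprints.values())
    return {pos for pos, fp in fingerprints.items() if hamming(v, fp) == best}

def is_correct(v, true_fp, m_h=1):
    """Estimation is 'correct' if the distance to the true fingerprint <= M_h."""
    return hamming(v, true_fp) <= m_h

fingerprints = {
    "A": (1, 1, 0, 0),
    "B": (1, 0, 1, 0),
    "C": (0, 1, 1, 1),
}
region = match((1, 1, 1, 0), fingerprints)   # ties are kept: a region, not a point
```

Here positions A and B both sit at distance 1 from the observation, so the estimated region is {A, B}, illustrating why the result is a set of points rather than a single position.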

Fig. 4. Measurement points in real experiment

E. Location Accuracy Enhancement

Finally, the proposed method enhances the accuracy of a location estimated by another localization method such as GPS or WiFi. For this purpose, we simply take the common region of the location obtained from the other localization method and the region estimated by the proposed method. In the evaluation, we select GPS as the other localization method and take the common region of a GPS fixed position with an error range and the region estimated by the proposed method.

III. PERFORMANCE EVALUATION

A. Evaluation Settings

To assess the performance of the proposed method, we have conducted a real experiment around Osaka station using a Nexus S. The layout and 3D models shown in Fig. 3 are used to prepare the fingerprints. The visibility model shown in Fig. 2 is used in the evaluation.

In the real experiment, we recorded the output in NMEA format from the GPS module embedded in the Nexus S at a sampling rate of 1 Hz. The output contains the SNR and position of each satellite, and a GPS fixed position with HDOP (Horizontal Dilution Of Precision). We recorded samples at the four points shown in Fig. 4 for 15 minutes each in order to obtain stable GPS fixed positions. We note that the proposed method works immediately after a GPS receiver is turned on, as long as the satellite positions are known via a reference server. In the performance evaluation, we used 25 samples from each 15-minute record, taken after the reception of all satellite positions in the sky at each point. The total number of samples was 100. The average distance between a GPS fixed position and its real position was 51.2 m.


Fig. 5. Number of used satellites. Fig. 6. Correct ratio. Fig. 7. Shrunk ratio.

The proposed method is used in combination with another localization method; in the evaluation, we selected GPS. For a set $C$ of positions estimated by the proposed method and a GPS fixed position $p$ with an error range $r$, we obtained the subset of $C$ whose points are included in the GPS error region. The GPS error region is defined as the circle with center $p$ and radius $r$. $r$ is defined as 2dRMS, which is $2 \cdot UERE \cdot HDOP$ [9]. Here, UERE is the User Equivalent Range Error and was set to 15 m empirically so that the GPS error regions contain their real positions as often as possible. In this setting, 90 out of 100 samples contained the real positions.
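The error-region construction above can be sketched as follows. The candidate coordinates are hypothetical planar points in meters; only the formula $r = 2 \cdot UERE \cdot HDOP$ and the 15 m UERE value come from the paper.

```python
# Sketch of the GPS error region used in the evaluation: r = 2dRMS =
# 2 * UERE * HDOP [9], with UERE set to 15 m as in the paper. Candidate
# positions (hypothetical planar points in meters) outside the circle of
# radius r around the GPS fix are discarded.
import math

UERE = 15.0  # user equivalent range error in meters (empirical in the paper)

def gps_error_radius(hdop):
    return 2.0 * UERE * hdop  # 2dRMS

def within_error_region(candidates, fix, hdop):
    r = gps_error_radius(hdop)
    return [c for c in candidates
            if math.hypot(c[0] - fix[0], c[1] - fix[1]) <= r]

candidates = [(0.0, 0.0), (30.0, 0.0), (0.0, 70.0)]
kept = within_error_region(candidates, fix=(0.0, 0.0), hdop=1.2)
```

With HDOP 1.2 the radius is 36 m, so the point 70 m away is dropped while the other two remain in the combined estimate.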

The performance of the proposed method should be evaluated with respect to the tradeoff between the correctness and the size of the estimated region, because the probability that an estimation result is correct increases as the size of the estimation result grows. We introduce a correct ratio and a shrunk ratio, defined as below.

$$\text{Correct ratio} = \frac{\#\text{ of correct estimations}}{\#\text{ of estimations}} \qquad (4)$$

$$\text{Shrunk ratio} = 1 - \frac{\text{Size of estimated region}}{\text{Size of GPS error region}} \qquad (5)$$

The shrunk ratio represents the extent of location enhancement by the proposed method; a larger shrunk ratio indicates that the combined localization result is more effectively enhanced by the proposed method.
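A tiny worked example of Eqs. (4)-(5), with region sizes counted in candidate points; the numbers are illustrative, chosen to echo the 17%/83% figures reported later.

```python
# Sketch of the evaluation metrics of Eqs. (4)-(5). Region sizes are
# measured as numbers of candidate points; the values are hypothetical.

def correct_ratio(num_correct, num_estimations):
    return num_correct / num_estimations            # Eq. (4)

def shrunk_ratio(estimated_size, gps_region_size):
    return 1.0 - estimated_size / gps_region_size   # Eq. (5)

# E.g. an estimated region of 17 points inside a 100-point GPS error
# region corresponds to a shrunk ratio of 0.83.
cr = correct_ratio(81, 100)
sr = shrunk_ratio(17, 100)
```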

B. Results

To see the effect of the visibility likelihood threshold $p_{TH}$ and the allowable Hamming distance $M_h$, we evaluated the number of used satellites, the correct ratio, and the shrunk ratio for different $p_{TH}$ and $M_h$. The results are shown in Figs. 5, 6 and 7. For the number of used satellites and the shrunk ratio, the results show the minimum, the maximum, and the average. The correct ratio is calculated over the 90 samples whose GPS error region is correct as-is. Similarly, the shrunk ratio is calculated over the correct results.

Figure 5 shows the number of used satellites for different $p_{TH}$. The result shows that the number of used satellites decreases as $p_{TH}$ increases. This is because the range of filtered SNRs is wider when $p_{TH}$ is larger. In the following evaluation, we focus only on the cases of $p_{TH} = 0.5$ to $0.8$, since the number of used satellites is extremely small when $p_{TH} = 0.9$ and $1.0$.

Figure 6 shows the correct ratio for different $p_{TH}$. From the result, we can see that the correct ratio increases with $p_{TH}$. The reason is that reliable visibility estimation results are obtained by filtering out low-likelihood satellite visibility estimates when $p_{TH}$ is high. On the other hand, the result in Fig. 7 indicates that the shrunk ratio decreases as $p_{TH}$ increases. This is because the number of used satellites decreases with increasing $p_{TH}$, and the granularity of the fingerprints becomes coarser as the number of used satellites decreases. We can also see that the shrunk ratios range widely from 0 to 1, which indicates that the shrunk ratios depend largely on the layout of buildings.

Moreover, Figs. 6 and 7 show the results for $M_h = 0$, $1$, and $2$. When $M_h$ increases, the shrunk ratios decrease, since the sizes of the estimated regions increase. Meanwhile, a large $M_h$ tolerates incorrect visibility estimation, which leads to an increase in the correct ratios. From the above results, we have confirmed that there is a tradeoff between the correct ratios and the shrunk ratios. For example, if we choose $p_{TH} = 0.7$ and $M_h = 1$ as parameter values, the average correct ratio of the estimated region is 81% with an average shrunk ratio of 83%, i.e. the estimated regions are 17% of the GPS error regions.

IV. CONCLUSION

In this paper, we proposed a low-cost method using GPS LOS (i.e. visibility) detection to enhance the accuracy of locations obtained by another localization method in urban areas. From the real experiment using a Nexus S around Osaka station, we confirmed that the proposed method could narrow down GPS estimated regions to 17% of their original size with a correct ratio of 81%.

REFERENCES

[1] Google.com. Google Maps. [Online]. Available: http://maps.google.com/

[2] Ubusuna.inc, Eponet, Casareal, and TechMatrix. Omote Navi. [Online]. Available: http://omotenavi.jp/index.html

[3] M. Modsching, R. Kramer, and K. ten Hagen, "Field trial on GPS accuracy in a medium size city: The influence of built-up," in Proc. of Workshop on Positioning, Navigation and Communication, 2006, pp. 209–218.

[4] J. Rekimoto, T. Miyaki, and T. Ishizawa, "LifeTag: WiFi-based continuous location logging for life pattern analysis," in Proc. of Int. Symp. on Location- and Context-Awareness (LOCA2007), 2007, pp. 35–49.

[5] P. Misra and P. Enge, Global Positioning System, 2nd ed. Ganga-Jamuna Press, 2011.

[6] L. Wang, P. Groves, and M. Ziebart, "GNSS shadow matching: Improving urban positioning accuracy using a 3D city model with optimized visibility prediction scoring," in Proc. of ION GNSS, 2012, pp. 423–437.

[7] A. S. Glassner, Ed., An Introduction to Ray Tracing, 1st ed. Academic Press, 1989.

[8] Trimble. Trimble 3D Warehouse. Trimble Navigation Limited. [Online]. Available: http://sketchup.google.com/3dwarehouse/

[9] B. W. Parkinson and J. J. Spilker, Global Positioning System: Theory and Applications, 1st ed. Amer. Inst. of Aeronautics, 1996.


Utilizing cyber-physical systems to rapidly access and guard patient records

Thaddeus Czauski, Paul Miranda, Jules White
Department of Electrical and Computer Engineering
Virginia Tech, Blacksburg, Virginia, USA
Email: czauski, paulnam, [email protected]

Abstract—Utilizing mobile devices for patient treatment is ideal, allowing medical professionals seamless access to essential treatment tools and information sources such as patient electronic medical records (EMRs). Yet, mobile devices fall short in protecting confidential patient data: EMRs and other sensitive records easily travel outside the hospital unnoticed when medical professionals leave the hospital with their mobile devices. This paper introduces SPIRIT: a cyber-physical location based access control (LBAC) system that utilizes indoor localization methods to rapidly authenticate requests for EMRs and automatically protect EMRs on mobile devices. This paper provides a brief introduction to challenges faced in the healthcare industry and discusses the SPIRIT system.

Index Terms—Automatic control, communication system security, cyber-physical systems, health informatics systems, indoor localization, location based authentication, location based access control

I. INTRODUCTION

Mobile access to hospital data is rapidly increasing in popularity. Tablets, smart phones, and laptops are increasingly preferred by medical professionals for accessing enterprise and medical data while performing their duties in hospitals. A study by Teves et al. [1] at the Children's Mercy Hospitals and Clinics reported that nearly sixty percent of the medical professionals surveyed now use an iPad to access electronic medical records (EMRs) for patients.

In order to provide mobile access to EMRs and patient data, mobile devices need to locally store patients' EMRs and associated data. Stored EMR data leaves the hospital and is vulnerable to attack when medical professionals leave at the end of the day and take their laptop, tablet, or smart phone with them. Once outside of the hospital, it becomes significantly easier to steal mobile devices and any patient EMRs on them.

The mobility of the data puts confidential patient records at higher risk when mobile devices are used. The risk increases further when mobile devices travel outside the hospital, as medical professionals take their devices with them wherever they go. To make matters worse, the mobile devices used by medical professionals can accumulate hundreds of records in a relatively short time span. When a single mobile device containing patient EMRs is stolen, thousands of individuals can be affected by the theft. Consider that one laptop stolen from a medical monitoring service employee's locked car in 2012 contained confidential patient EMRs and other data for nearly 116,000 patients [2].

This material is based upon work supported by the Bradley Foundation Graduate Fellowship.

II. SOLUTION APPROACH

To prevent the loss or theft of EMRs on mobile devices, the simplest approach would be to follow existing physical security wisdom: medical records should never leave the hospital unless there is a compelling need to access the records away from the hospital. Mobile devices used as a cyber-physical system present one solution for providing equivalent physical security for records stored on mobile devices: use information about the mobile device's surroundings to provide location based access control (LBAC) for EMRs, and tether the EMR to a physical location [3, 4] to ensure the EMR does not leave an area such as a hospital ward.

LBAC provides a mechanism that meets the needs of multiple stakeholders while providing a security model based on physical security guidelines within the hospital environment [5]. LBAC satisfies medical professionals' need for on-demand mobile access to patient EMRs from anywhere within the hospital and the hospital's legal obligations to safeguard patient information, and it provides traceability for record-keeping purposes: which patient was visited, when and where the visit occurred, and what information was accessed by a medical professional. When employing LBAC, accessing data requires physical access to a room, so physical security defenses such as doors, locks, and visual surveillance can be applied to protect the EMRs residing on mobile devices while simultaneously ensuring medical professionals have the EMRs they need, when and where they are needed.

To achieve LBAC that meets the needs of medical professionals, this paper introduces the Secure Proximity Information Retrieval and Indoor Tethering (SPIRIT) system. Medical professionals' mobile devices authenticate the medical professional's location as an additional factor in a multi-factor authentication system to provide assurances that the EMR is being appropriately accessed [6]. The addition of location to the authentication process ensures EMRs cannot be accessed in locations where the data is at greater risk of being lost or stolen.

978-1-4673-1954-6/12/$31.00


Figure 1 depicts the SPIRIT system's operation; a description of SPIRIT's operation is given in Section II-A. SPIRIT utilizes a hybrid indoor localization approach, relying on both absolute and relative positioning mechanisms. Initial indoor localization and authentication of a mobile device's location is achieved through contactless RF, which provides the absolute location of the medical professional; continuous authentication [7] of medical professionals is achieved using relative positioning, by tethering devices to fixed anchor points via personal area RF networks.

A. SPIRIT system overview

SPIRIT prevents EMRs from leaving the hospital while providing ubiquitous access to EMRs on mobile devices by anchoring EMRs to a physical location. When a medical professional wants to use a mobile device to access an EMR, the medical professional must authenticate their location and credentials. Once authenticated, the mobile device attaches a tether to an anchor, which ensures that the EMR can only be accessed near the anchor as the medical professional moves about the hospital. When the tether between the anchor and the mobile device is broken, the mobile device protects all records that were accessed from the anchor by removing any cached or temporary files related to the EMR.

To provide a rapid mechanism for identifying the correct EMR to view when visiting a patient, patients wear a contactless RFID tag (such as a wristband) with an identifier unique to the patient, which allows a medical professional's mobile device to identify the patient's records. The use of the SPIRIT system to access and protect EMRs is illustrated in Figure 1.

(1) Medical professionals use contactless RFID (the figure depicts an RFID patient wristband) to identify the patient and the records to access; (2) medical professionals then find an anchor point to authenticate their location and establish a tether; (3) once tethered, the medical professional's identity is verified by the EMR server; (4) upon verification of the medical professional's location and identity, a session key is issued and relayed via the anchor to the medical professional's mobile device; (5) using the session key while tethered, the medical professional may access EMRs from the server; (6) when the medical professional moves out of tether range of the anchor, the mobile device automatically detects the tether has been broken and protects EMRs by removing any cached EMRs from the mobile device.

Fig. 1. Overview of the SPIRIT system. A detailed description of all the steps can be found in Section II-A.
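The tether semantics of steps (4)-(6) can be sketched as follows. This is an illustrative model only, not the authors' implementation: the class, its method names, the distance-based range check, and the sample record are all assumptions introduced for the sketch.

```python
# Illustrative sketch of SPIRIT-style tethering (not the authors' code):
# while the device stays within tether range of its anchor it may hold a
# session key and cached EMRs; when the tether breaks, both are purged.
# All names and the distance-based range check are assumptions.

class TetheredEmrCache:
    def __init__(self, anchor_id, tether_range_m):
        self.anchor_id = anchor_id
        self.tether_range_m = tether_range_m
        self.session_key = None
        self.cache = {}

    def authenticate(self, session_key):
        # Step 4: session key issued after location + identity verification.
        self.session_key = session_key

    def access(self, record_id, emr_server):
        # Step 5: EMRs may be fetched only while a session key is held.
        if self.session_key is None:
            raise PermissionError("not tethered/authenticated")
        self.cache[record_id] = emr_server[record_id]
        return self.cache[record_id]

    def update_distance(self, distance_m):
        # Step 6: leaving tether range breaks the tether and purges the cache.
        if distance_m > self.tether_range_m:
            self.session_key = None
            self.cache.clear()

emr_server = {"patient-42": {"name": "REDACTED", "allergies": ["gluten"]}}
dev = TetheredEmrCache(anchor_id="ward-3-anchor", tether_range_m=10.0)
dev.authenticate(session_key="k1")
rec = dev.access("patient-42", emr_server)
dev.update_distance(25.0)   # walked out of range: tether broken, cache purged
```

The key design point the sketch illustrates is that protection is automatic: no user action is needed for cached EMRs to be removed once the tether is lost.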

III. CONCLUSION

This paper has introduced the SPIRIT system. SPIRIT aims to provide medical professionals ubiquitous mobile access to EMRs and to transparently protect EMRs from theft and loss. Future work will provide additional details on conventional EMR use in healthcare environments, an analysis of EMR theft and loss within the healthcare industry, a description of the hybrid indoor localization employed by SPIRIT, and an evaluation of a research implementation of SPIRIT.

REFERENCES

[1] J. P. Teves, B. S. Chaparro, B. Kennedy, Y. R. Chan, R. Riss, J. K. Simmons, and N. Copic, "Survey on iPad usage in a pediatric inpatient hospital," in 2013 Int. Symp. on Human Factors and Ergonomics in Health Care, Mar. 2013. [Online]. Available: http://www.hfes.org/web/HFESMeetings/HCSPresentations/hcs2013teves.pdf

[2] E. McCann, "10 largest HIPAA breaches of 2012," Jan. 2013. [Online]. Available: http://www.healthcareitnews.com/news/10-largest-hipaa-breaches-2012

[3] M. Beaumont-Gay, K. Eustice, and P. Reiher, "Information protection via environmental data tethers," in Proc. of the 2007 Workshop on New Security Paradigms (NSPW '07). New York, NY, USA: ACM, 2008, pp. 67–73. [Online]. Available: http://doi.acm.org/10.1145/1600176.1600188

[4] W. Jansen and V. Korolev, "A location-based mechanism for mobile device security," in 2009 WRI World Congress on Computer Science and Information Engineering, vol. 1, 2009, pp. 99–104.

[5] A. van Cleeff, W. Pieters, and R. Wieringa, "Benefits of location-based access control: A literature study," in Proc. of the 2010 IEEE/ACM Int'l Conference on Green Computing and Communications & Int'l Conference on Cyber, Physical and Social Computing (GREENCOM-CPSCOM '10). Washington, DC, USA: IEEE Computer Society, 2010, pp. 739–746. [Online]. Available: http://dx.doi.org/10.1109/GreenCom-CPSCom.2010.148

[6] S. Choi and D. Zage, "Addressing insider threat using where you are as fourth factor authentication," in 2012 IEEE International Carnahan Conference on Security Technology (ICCST), 2012, pp. 147–153.

[7] S. Kurkovsky and E. Syta, "Approaches and issues in location-aware continuous authentication," in 2010 IEEE 13th International Conference on Computational Science and Engineering (CSE), 2010, pp. 279–283.


- chapter 26 -

Location based Services & Middleware/Library


Evaluation of Indoor Pedestrian Navigation System on Smartphones using View-based Navigation

Mitsuaki Nozawa, Akimasa Suzuki, and Yongwoon Choi
Graduate School of Engineering, Soka University
1-236 Tangi-machi, Hachiouji-shi, Tokyo, 192-8577, Japan
Email: [email protected]

Yoshinobu Hagiwara
National Institute of Informatics
2-1-2 Hitotsubashi, Chiyoda-ku, Tokyo, 101-8430, Japan
Email: [email protected]

Abstract—The purpose of this study is to perform practical tests of an indoor pedestrian navigation system on smartphones using view-based navigation. View-based navigation, as a localization method based on image matching, is characterized by fast processing speed and robustness. We have already verified that view-based navigation is appropriate for indoor pedestrian navigation on smartphones from the viewpoint of processing time. To demonstrate the practical ability of our approach, this study evaluates our proposed system under illumination change, under occlusions, and in use by actual users, through the following three experiments. The first evaluates the robustness against illumination changes caused by sunlight and fluorescent lights in an environment. The second evaluates the robustness against occlusions by passersby. The third confirms whether the proposed system can guide pedestrians unfamiliar with a given environment. In conclusion, the practical ability of our proposed system was confirmed through experimental results in a real environment such as corridors.

Keywords—Pedestrian navigation; Smartphone; View-based navigation; Localization; Image matching

I. Introduction

Recently, studies of location-based navigation services on mobile devices[1][2] have attracted attention. Outdoor navigation services on smartphones using GPS or orientation sensors have become pervasive. In contrast, indoor navigation services on smartphones have not spread, due to the difficulty of using GPS in indoor environments and the high accuracy required. Conventional indoor pedestrian navigation systems on smartphones include a system using wireless beacons[4] and a system using ultrasonic waves[5]. These systems require the cost of installing positioning sensors in buildings in exchange for high positional accuracy. Sakuragi et al. [6] have proposed an inexpensive system using Wi-Fi infrastructure. However, the positional accuracy of that system depends on radio wave occlusions and on the positional relationship between Wi-Fi access points. As an approach using sensors integrated in smartphones, pedestrian dead reckoning using the accelerometer and gyroscope[7][8] has been proposed. Dead reckoning requires no installation cost but suffers from accumulated errors. To solve the problem of accumulated errors, pedestrian localization using smartphone cameras[9][10] has been proposed. These methods achieve high accuracy by using an image database, but they have not realized pedestrian navigation due to problems in computational cost.

We have studied view-based navigation[11], which realizes robot navigation by image matching using a monocular camera. View-based navigation has a fast, simple algorithm and is robust to environmental changes such as occlusion and illumination. In addition, the usefulness of the navigation has already been confirmed through experiments applied to actual robot navigation. These features have been adapted to smartphones and put to practical use in indoor pedestrian navigation [12]. However, the improved method applied to smartphones was tested in only one simple situation. In fact, in environments like corridors, it should be assumed that fluorescent lights are often switched, that sunlight changes as time advances, and that passersby appear unexpectedly in images. In addition, we should consider not only environmental factors but also human factors in the proposed system. As a human factor, use by actual users who do not know the experimental environment is important for the proposed system. In this study, we perform practical tests of our proposed system under illumination change, under occlusions, and in use by actual users. The practical tests demonstrate the feasibility of the proposed system in real situations in corridors. The paper is structured as follows. Section II gives a short description of the localization and the indicating screen in the proposed system, the two components by which the system is realized. In Section III we present the practical tests of our proposed system and, based on them, discuss its practical ability.

II. Pedestrian navigation system using view-based navigation

A. Pedestrian localization by View-based navigation

View-based navigation performs localization using image matching. Fig. 1 shows an overview of view-based navigation. View-based navigation identifies the current position as the imaging position of the image determined by retrieving the current image from among memorized images. This method is suitable for pedestrian navigation using smartphones from the perspective of computational cost, because the image matching alone realizes the localization. In addition, the image matching algorithm increases the aptitude of the method for pedestrian navigation using smartphones. Fig. 2 shows the image matching algorithm of view-based navigation. The image matching algorithm consists of SURF (Speeded Up Robust Features)[13] and LSH (Locality


Sensitive Hashing)[14]. SURF is a fast image feature detector that is robust to illumination change. The feature points in Fig. 2 are extracted by SURF. The image matching is executed by pairing feature points between the current image and a memorized image. The pairing is accelerated by LSH, a high-speed search method. LSH indexes the feature points of memorized images by numbering them in advance. A pair of feature points is determined by detecting the same numbers derived from the feature points. We have already confirmed that the processing time of the image matching is about 0.33 s on an iPod touch 4 [12]. View-based navigation realizes the pedestrian localization on smartphones in the proposed system.

Figure 1. Overview of the view-based navigation.
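The idea of LSH-accelerated descriptor pairing can be sketched as follows. This is a generic random-hyperplane LSH illustration, not the authors' implementation: the descriptor dimension, hash length, and function names are assumptions, and real SURF descriptors are 64-dimensional.

```python
# Minimal sketch of LSH-accelerated descriptor matching in the spirit of
# the paper's SURF+LSH pipeline (not the authors' code): each SURF-like
# float descriptor is hashed to a bucket number via random hyperplane
# signs, and candidate pairs are only sought inside the same bucket.
import random

random.seed(0)
DIM = 8                      # toy descriptor dimension (real SURF uses 64)
N_PLANES = 4                 # hash length: bucket ids in [0, 2**N_PLANES)
PLANES = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(N_PLANES)]

def lsh_key(desc):
    """'Advance numbering': descriptor -> bucket number via hyperplane signs."""
    key = 0
    for plane in PLANES:
        dot = sum(p * d for p, d in zip(plane, desc))
        key = (key << 1) | (1 if dot >= 0 else 0)
    return key

def index_descriptors(descs):
    """Index memorized-image descriptors by their bucket number."""
    buckets = {}
    for i, d in enumerate(descs):
        buckets.setdefault(lsh_key(d), []).append(i)
    return buckets

def match(query, descs, buckets):
    """Pair the query with the nearest memorized descriptor in its bucket."""
    cands = buckets.get(lsh_key(query), [])
    if not cands:
        return None
    return min(cands, key=lambda i: sum((q - x) ** 2
                                        for q, x in zip(query, descs[i])))

memorized = [[0.0] * DIM, [1.0] * DIM, [5.0] * DIM]
buckets = index_descriptors(memorized)
best = match([1.1] * DIM, memorized, buckets)
```

The speedup comes from the same mechanism the paper describes: hashing restricts the exact distance comparison to the (usually small) set of candidates that share the query's precomputed number.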

B. Indicating screen for the pedestrian navigation

We have already developed an indicating screen for pedestrian navigation using the view-based navigation [12]. Fig. 3 shows the indicating screen, which displays a place name, a direction, and the appropriate viewpoint. The place name and direction are illustrated by a balloon and an arrow, respectively. The balloon has an extra function: touching the place name opens a linked site. The appropriate viewpoint, represented by the target at the center of the screen, indicates the camera orientation at which the image matching is most stable. The target adjusts the user's viewpoint by leading the user to keep the center point inside the moving circle. The indicating screen thus realizes pedestrian navigation using the view-based navigation.

C. Operation test of the proposed system

The proposed system was already tested under an ideal condition [12], that is, free of occlusions and illumination change. Fig. 4 shows the experimental environment for the test, and Fig. 5 shows a localization result in the

Figure 2. Image matching algorithm.

Figure 3. An indicating screen for the pedestrian navigation.

test. From Fig. 5, the pedestrian's arrival at the destination is confirmed. Fig. 6 shows the information displays recorded in the test; they confirm the success of the navigation. From these results, the operation test was successful.

III. Evaluation experiments

To demonstrate the practical ability of the proposed system under various situations, this study performs evaluation experiments. Three experiments evaluate our proposed system under real conditions, which include illumination change, occlusions, and use by actual users. The first evaluates robustness to illumination changed by sunlight and fluorescent lighting; it uses changes in the number of lit fluorescent lights and the difference in sunlight between daytime and nighttime. The second evaluates robustness to occlusions, since passersby are naturally present in a corridor where pedestrian navigation is performed. The third confirms whether the proposed system can navigate pedestrians unfamiliar with a given environment, determining the influence of users' ignorance of the environment. In these experiments, the transition of the estimated position over the elapsed time is used for evaluation, because logging the true positions of walking subjects is difficult.

A. Evaluation of robustness to illumination change

This experiment demonstrates the robustness of the proposed system to illumination change. In the experiment, the number of lit fluorescent lights and the amount of sunlight streaming through the windows are changed. Fig. 7 shows the illumination conditions: Fig. 7(a) illustrates the condition during recording, which consists of nighttime and full lighting, and Fig. 7(b) the condition during navigation, which consists of daytime and semi-lighting. In this experiment, a pedestrian using the proposed system walks the linear 25.5 [m] corridor in the daytime. The pedestrian arrived at the end of the route in about 32 [s] at approximately constant velocity. Memorized

Figure 4. Experimental environment for the operation test.


Figure 5. Localization result in the operation test.

images are captured at 18 sequential points at intervals of 1.5 [m]. The localization result of the experiment is evaluated for robustness to the illumination condition. Fig. 8 shows the result; the horizontal axis is the elapsed time during walking and the vertical axis is the estimated position. Pearson's product-moment correlation coefficient between the elapsed time and the estimated position is 0.994, which is consistent with the fact that the pedestrian walked at approximately constant velocity. In addition, the average speed obtained from Fig. 8 by straight-line approximation is 0.790 [m/s], from which the experimental time is derived as 32.3 [s]. This agrees with the fact that the pedestrian finished walking the corridor in about 32 [s]. Therefore, this experimental result indicates that the proposed system performs the localization appropriately under illumination change.
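The evaluation above can be reproduced from logged (time, estimated-position) samples: the Pearson coefficient checks linearity, and the least-squares slope gives the average speed. A minimal sketch with made-up sample data (the actual experiment logs are not available; positions are quantized to the 1.5 m grid of memorized images, mimicking the localization output):

```python
import math

def pearson(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def fit_speed(ts, ps):
    """Least-squares slope of position vs. time, i.e. average speed [m/s]."""
    n = len(ts)
    mt, mp = sum(ts) / n, sum(ps) / n
    num = sum((t - mt) * (p - mp) for t, p in zip(ts, ps))
    den = sum((t - mt) ** 2 for t in ts)
    return num / den

# Hypothetical log sampled every 2 s while walking at roughly 0.79 m/s;
# estimated positions snap to the 1.5 m spacing of the memorized images.
ts = list(range(0, 34, 2))
ps = [round(0.79 * t / 1.5) * 1.5 for t in ts]

r = pearson(ts, ps)
speed = fit_speed(ts, ps)
print(f"r = {r:.3f}, speed = {speed:.3f} m/s, time for 25.5 m = {25.5 / speed:.1f} s")
```

With near-constant walking speed the correlation approaches 1 and the fitted slope recovers the average speed, which is how the 0.790 [m/s] and 32.3 [s] figures relate in the text.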

B. Evaluation of robustness to occlusions

This experiment demonstrates the robustness of the proposed system to occlusions. In the experiment, occlusions by two passersby, one walking toward the subject and one walking in the same direction, are used. Fig. 9 shows the occlusion conditions in the experiment. Fig. 9(a) illustrates the occlusion condition when

(a) Start (b) Fire Escape

(c) AED (d) Goal

Figure 6. Information displays in the operation test.

(a) Recording (b) Localization

Figure 7. Illumination conditions in the experiment.

recording with no occlusions, and Fig. 9(b) illustrates the occlusion condition during navigation, with occlusions by passersby. In this experiment, the pedestrian using the proposed system walks the linear 25.5 [m] corridor with occlusions by the two passersby. The pedestrian arrived at the end of the route in about 31 [s]

with approximately constant velocity. Memorized images are captured at 18 sequential points at intervals of 1.5 [m]. The localization result of the experiment is evaluated for robustness to occlusion. Fig. 10 shows the result; the horizontal axis is the elapsed time during walking and the vertical axis is the estimated position. Pearson's product-moment correlation coefficient between the elapsed time and the estimated position is 0.997, which is consistent with the fact that the pedestrian walked at approximately constant velocity. In addition, the average speed obtained from Fig. 10 by straight-line approximation is 0.810 [m/s], from which the experimental time is derived as 31.5 [s]. This agrees with the fact that the pedestrian finished walking the corridor in about 31 [s]. Therefore, this experimental result indicates that the proposed system performs the localization appropriately under occlusions.

C. Evaluation of practical ability with multiple subjects

This experiment demonstrates the practical ability of the proposed system when used by multiple subjects. In the experiment, the proposed system performs pedestrian navigation for four subjects, who walk a long corridor that includes turning points. Fig. 4 shows the navigation route, which goes from the elevator to the library. Memorized images are captured at 45 points along the route. The four subjects arrived at the end of the route at approximately

Figure 8. Localization result under illumination change.


(a) Recording (b) Localization

Figure 9. Occlusion conditions in the experiment.

constant velocity. The localization result of the experiment is evaluated for practical ability with actual users. Fig. 11 shows the result; the horizontal axis is the elapsed time during walking and the vertical axis is the estimated position number. Pearson's product-moment correlation coefficients between the elapsed time and the estimated position are shown in TABLE I. The values are consistent with the fact that the pedestrians walked at approximately constant velocity. Therefore, this experimental result indicates that the proposed system performs pedestrian navigation appropriately when used by multiple subjects.

IV. Conclusion

The purpose of this study was to demonstrate the practical ability of the proposed system under various situations. To achieve this purpose, this paper has presented an evaluation of an indoor pedestrian navigation system under illumination change, occlusions, and use by actual users. The practical tests demonstrated the practical ability of our proposed system in real corridor situations. We expect the proposed system to realize pedestrian navigation services in real buildings such as museums and schools; implementing such a service in museums and schools is our future work.

Figure 10. Localization result under occlusions.

TABLE I Correlation factors collected from multiple subjects.

User  Correlation factor
A     0.992
B     0.997
C     0.998
D     0.996

(a) User A (b) User B

(c) User C (d) User D

Figure 11. Localization results for each subject.

References

[1] H. Fujita and M. Arakawa, "Complementary Data Development of Photographs and Annotations Based on Spatial Relationships," Information Processing Society of Japan, Vol. 47, No. 1, pp. 63-76, 2006.

[2] K. Iwasaki, T. Suzuki, M. Kanbara, K. Tamazawa, and N. Yokoya, "Sightseeing Spot Video Browsing System Using Geographical Database Based on Shooting Position and Direction," IEICE Technical Report, DE2007-6, PRMU2007-32, pp. 31-36, 2007.

[3] Google Map, http://maps.google.co.jp/

[4] T. Ikeda, M. Kawamoto, A. Sashima, K. Suzuki, and K. Kurumatani, "A Robust Indoor Autonomous Positioning System Using Particle Filter Based on ISM Band Wireless Communications," IEEJ Trans. EIS, Vol. 131, Issue 1, pp. 227-236, 2011.

[5] N. B. Priyantha, A. Chakraborty, and H. Balakrishnan, "The Cricket Location-Support System," In Proc. of the Sixth Annual ACM International Conference on Mobile Computing and Networking, pp. 32-43, 2000.

[6] S. Sakuragi, H. Mineno, J. Kanda, Y. Ishiwatari, and T. Mizuno, "A Study of Position and Direction Estimation to Indoor Navigation for Wireless LAN Environments," IPSJ SIG Technical Report, 2008-MLB-44, pp. 41-47, 2008.

[7] J. A. Bitsch Link, P. Smith, and K. Wehrle, "Footpath: Accurate Map-Based Indoor Navigation Using Smartphones," In Proc. Indoor Positioning and Indoor Navigation, pp. 21-23, 2011.

[8] P. Pombinho de Matos, A. P. Afonso, and M. B. Carmo, "Point of Interest Awareness Using Indoor Positioning with a Mobile Phone," In Proc. International Conference on Pervasive and Embedded Computing and Communication Systems, pp. 5-14, 2011.

[9] M. Werner, M. Kessel, and C. Marouane, "Indoor Positioning Using Smartphone Camera," In Proc. Indoor Positioning and Indoor Navigation, 2011.

[10] B. Ruf, E. Kokiopoulou, and M. Detyniecki, "Mobile Museum Guide Based on Fast SIFT Recognition," In Proc. Adaptive Multimedia Retrieval, pp. 170-183, 2008.

[11] Y. Hagiwara, H. Imamura, Y. Choi, and K. Watanabe, "An Improvement of Positional Accuracy for View-Based Navigation Using SURF," IEEJ Trans. EIS, Vol. 130, No. 8, pp. 1395-1403, 2010.

[12] M. Nozawa, Y. Hagiwara, and Y. Choi, "Indoor Human Navigation System on Smartphones Using View-Based Navigation," In Proc. International Conference on Control, Automation and Systems, pp. 1916-1919, 2012.

[13] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: Speeded Up Robust Features," European Conference on Computer Vision, pp. 404-417, 2006.

[14] M. Datar, N. Immorlica, P. Indyk, and V. Mirrokni, "Locality-Sensitive Hashing Scheme Based on p-Stable Distributions," In Proc. ACM Symp. on Computational Geometry, pp. 253-262, 2004.


- Chapter 27 -

Performance Evaluation: System, Technique, Component, Benchmarking


Authors Index

Abdou, Wahabou..................................................................................................................................... 91

Abed-Meraim, Karim..............................................................................................................................48

Abhayasinghe, Nimsiri.................................................................................................................. 138, 142

Abu Oun, Osama.....................................................................................................................................91

Aguilera, Teodoro..................................................................................................................................239

Al-Ahmari, A.........................................................................................................................................221

Al-Qahtani, Khaled............................................................................................................................... 221

Albuquerque, Daniel..........................................................................................................................34, 44

Andreucci, Fabio..................................................................................................................................... 26

Araujo, Rui ............................................................................................................................................109

Aubry, Pascal.........................................................................................................................................217

Augustin, Daniel......................................................................................................................................11

Augusto, Javier......................................................................................................................................239

Aydogdu, Canan.................................................................................................................................... 105

Barsocchi, Paolo....................................................................................................................................249

Bastos, Carlos..........................................................................................................................................34

Beder, Christian.....................................................................................................................................146

Benmoshe, Boaz....................................................................................................................................253

Berthillier, Marc...................................................................................................................................... 30

Blankenbach, Joerg................................................................................................................................. 18

Bloch, Christelle......................................................................................................................................91

Blunck, Henrik...................................................................................................................................... 207

Bocksch, Marcus..................................................................................................................................... 39

Boyer, Marc...........................................................................................................................................234

Buszke, Bartosz.....................................................................................................................................129

Canalda, Philippe.................................................................................................................................... 95

Canals, Raphael.....................................................................................................................................169

Carvalho, Nuno Borges........................................................................................................................... 44

Cassarà, Pietro.......................................................................................................................................249

Castilla, José M..................................................................................................................................... 178

Ceulemans, Marc...................................................................................................................................160

Chandrasiri, Ravi...................................................................................................................................142

Chao, Hui ................................................................................................................................................ 79

Charbit, Maurice......................................................................................................................................48

Chen, Lina............................................................................................................................................. 182

Cho, Youngsu.......................................................................................................................................... 36

Chobtrong, Thitipun.............................................................................................................................. 1, 3

Choi, Yongwoon....................................................................................................................................275


Christian, Walter......................................................................................................................................99

Colin, Elizabeth.......................................................................................................................................68

Cypriani, Matteo......................................................................................................................................95

Czauski, Thaddeus ................................................................................................................................ 273

Das, Saumitra.......................................................................................................................................... 79

De Marinis, Enrico.................................................................................................................................. 26

Delgado Vargas, Jaime.......................................................................................................................... 119

Deriaz, Michel.......................................................................................................................................133

Deschamps De Paillette, Thierry...........................................................................................................169

Devaux, Pierre.......................................................................................................................................... 3

Diego, Cristina...................................................................................................................................... 178

Domingo-Perez, Francisco ......................................................................................................................22

Dvir, Amit..............................................................................................................................................253

Eidloth, Andreas ......................................................................................................................................53

Ens, Alexander ...................................................................................................................................... 188

Ferreira, Paulo......................................................................................................................................... 34

Fogluzzi, Francesca.................................................................................................................................26

Franke, Norbert....................................................................................................................................... 53

Freitas, Diamantino............................................................................................................................... 109

Fujimoto, Manato ..................................................................................................................................194

García, Rodrigo..................................................................................................................................... 178

Gardel Kurka, Paulo..............................................................................................................................119

Garello, Roberto ......................................................................................................................................73

Gasparini, Otello..................................................................................................................................... 26

Gidlund, Mikael .................................................................................................................................... 213

Grimm, David..........................................................................................................................................18

Grønbæk, Kaj........................................................................................................................................ 207

Guarnieri, Alberto....................................................................................................................................24

Günes, Ersan..........................................................................................................................................1, 3

Hagiwara, Yoshinobu.....................................................................................................................123, 275

Haid, Markus.........................................................................................................................................1, 3

Haider, Raja Umair................................................................................................................................265

Han, Youngnam.......................................................................................................................................55

Hayoz, Marc ............................................................................................................................................68

Hernández, Álvaro...................................................................................................................................178

Higashino, Teruo................................................................................................................................... 269

Holm, Eric............................................................................................................................................... 79

Händel, Peter..................................................................................................................................117, 247


Höflinger, Fabian........................................................................................................................... 115, 188

Iida, Yukio............................................................................................................................................. 194

Imamura, Hiroki ....................................................................................................................................123

Jahn, Jasper..............................................................................................................................................39

Jalal Abadi, Marzieh..............................................................................................................................190

Jansson, Magnus....................................................................................................................................117

Jayakody, Anuradha .......................................................................................................................113, 257

Jayalath, Sampath..................................................................................................................................138

Jensen, Christian....................................................................................................................................207

Ji, Myungin ............................................................................................................................................. 36

Johar, Umar........................................................................................................................................... 221

Ju, Ho Jin...............................................................................................................................................243

Julien, Jean-Pascal.................................................................................................................................234

Kamil, Mustafa......................................................................................................................................1, 3

Kang, Wonho...........................................................................................................................................55

Karbownik, Piotr..................................................................................................................................... 53

Katsuda, Etsuko.....................................................................................................................................269

Keller, Friedrich.................................................................................................................................... 127

Kim, Jooyoung........................................................................................................................................ 36

Kjærgaard, Mikkel Baun....................................................................................................................... 207

Klepal, Martin....................................................................................................................................... 146

Koeppe, Enrico........................................................................................................................................11

Kokert, Jan............................................................................................................................................ 115

Kozak, Ilkay.......................................................................................................................................... 105

Krishnamoorthi, Raghuraman................................................................................................................. 79

Krukar, Grzegorz.....................................................................................................................................53

Kulas, Łukasz...........................................................................................................................................64

Kyas, Marcel....................................................................................................................................60, 152

Landernäs, Krister................................................................................................................................. 213

Landolsi, M........................................................................................................................................... 221

Laoudias, Christos.................................................................................................................................265

Lardies, Joseph ........................................................................................................................................30

Lassabe, Frédéric.....................................................................................................................................95

Ledda, Alessandro................................................................................................................................. 160

Lee, Min Su ...........................................................................................................................................243

Lee, Sookjin............................................................................................................................................ 55

Lee, Yangkoo...........................................................................................................................................36

Legoll, Sébastien..................................................................................................................................... 48


Levi, Harel.............................................................................................................................................253

Li, Binghao............................................................................................................................................182

Lin, Jiarui .......................................................................................................................................174, 211

Liu, Wen.................................................................................................................................................. 14

Liu, Zhexu............................................................................................................................................. 174

Lopes, Sérgio...........................................................................................................................................34

Lopes, Sérgio Ivan...................................................................................................................................44

Lu, Ye.................................................................................................................................................... 229

Lázaro-Galilea, José Luis........................................................................................................................22

Martín-Gorostiza, Ernesto.......................................................................................................................22

Masiero, Andrea...................................................................................................................................... 24

Maximov, Vladimir.....................................................................................................................................7

Mechanicus, Jeroen................................................................................................................................160

Meiyappan, Subramanian......................................................................................................................202

Miao, Chunyu........................................................................................................................................182

Miranda, Paul........................................................................................................................................ 273

Mizuchi, Yoshiaki..................................................................................................................................123

Monsaingeon, Augustin.........................................................................................................................234

Moretto, Alain......................................................................................................................................... 68

Moutinho, João......................................................................................................................................109

Muqaibel, Ali.........................................................................................................................................221

Murray, Iain.....................................................................................................................138, 142, 150, 156

Mutsuura, Kouichi.................................................................................................................................194

Naguib, Ayman........................................................................................................................................79

Nakamori, Emi...................................................................................................................................... 194

Nam, Seongho......................................................................................................................................... 55

Neander, Jonas.......................................................................................................................................213

Nepa, Paolo........................................................................................................................................... 249

Nilsson, John-Olof ................................................................................................................................ 247

Norrdine, Abdelmoumen.........................................................................................................................18

Nozawa, Mitsuaki..................................................................................................................................275

Okada, Hiromi.......................................................................................................................................194

Panahandeh, Ghazaleh.......................................................................................................................... 117

Panayiotou, Christos ............................................................................................................................. 265

Paredes, José Antonio............................................................................................................................239

Park, Chan Gook................................................................................................................................... 243

Park, Sangjoon........................................................................................................................................ 36

Pattabiraman, Ganesh............................................................................................................................202

Pereira, Fernando.................................................................................................................................. 196

Philips, Wilfried.................................................................................................................................... 160

Pirotti, Francesco.....................................................................................................................................24

Ploit, Shamil ..........................................................................................................................................253

Potortì, Francesco..................................................................................................................................249

Prentow, Thor Siiger..............................................................................................................................207

Pucci, Fabrizio.........................................................................................................................................26

Pérez, M. Carmen..................................................................................................................................178

Raghupathy, Arun..................................................................................................................................202

Rajakaruna, Nimali................................................................................................................................156

Rathnayake, Chamila ............................................................................................................................ 150

Reindl, Leonhard........................................................................................................................... 115, 188

Reis Cunha, Sérgio................................................................................................................................196

Reis, João................................................................................................................................................ 44

Ren, Yongjie...................................................................................................................................174, 211

Ricardo, Manuel ....................................................................................................................................196

Rodríguez, Carlos..................................................................................................................................239

Rodríguez, José Antonio....................................................................................................................... 239

Rosi, Guido ............................................................................................................................................. 26

Rzymowski, Mateusz.............................................................................................................................. 64

Sakamoto, Takuya................................................................................................................................. 217

Salido-Monzú, David.............................................................................................................................. 22

Samama, Nel......................................................................................................................................... 229

Saraiva Ferreira, Luiz............................................................................................................................119

Sato, Toru.............................................................................................................................................. 217

Schiller, Jochen ....................................................................................................................................... 11

Schindelhauer, Christian ....................................................................................................................... 188

Schweinzer, Herbert.........................................................................................................................99, 103

Sebesta, Jiri............................................................................................................................................241

Seitz, Jochen............................................................................................................................................39

Sendorek, Pierre...................................................................................................................................... 48

Shirokov, Igor..........................................................................................................................................84

Sottile, Francesco.................................................................................................................................... 73

Spies, François ................................................................................................................................. 91, 95

Spirito, Maurizio..................................................................................................................................... 73

Spruyt, Vincent......................................................................................................................................160

Sternberg, Harald .................................................................................................................................. 127

Stisen, Allan.......................................................................................................................................... 207

Suzuki, Akimasa............................................................................................................................ 123, 275

Syafrudin, Mohammad..........................................................................................................................103

Sánchez, F. Manuel ............................................................................................................................... 178

Tabarovsky, Oleg...................................................................................................................................... 7

Tanaka, Masaaki....................................................................................................................................123

Tejmlova, Lenka....................................................................................................................................241

Theis, Christian..................................................................................................................................... 196

Thrybom, Linus.....................................................................................................................................213

Timmermann, Dirk................................................................................................................................225

Togneri, Mauricio..................................................................................................................................133

Toker, Kadir...........................................................................................................................................105

Tsukuda, Daiki ...................................................................................................................................... 194

Uchiyama, Akira....................................................................................................................................269

Uejima, Yuki..........................................................................................................................................269

Uliana, Michele....................................................................................................................................... 26

Vallernaud, Alexis................................................................................................................................. 234

Vervisch-Picois, Alexandre................................................................................................................... 229

Vettore, Antonio...................................................................................................................................... 24

Vieira, José.........................................................................................................................................34, 44

Von Der Grün, Thomas ........................................................................................................................... 53

Wada, Tomotaka ....................................................................................................................................194

Wagner, Benjamin................................................................................................................................. 225

Walter, Christian....................................................................................................................................103

Wendeberg, Johannes............................................................................................................................ 188

White, Jules........................................................................................................................................... 273

Wieser, Andreas................................................................................................................................. 18, 22

Will, Heiko............................................................................................................................................ 152

Willemsen, Thomas...............................................................................................................................127

Wolling, Florian .................................................................................................................................... 115

Woźnica, Przemysław..............................................................................................................................64

Xiong, Zhoubing..................................................................................................................................... 73

Xue, Bin................................................................................................................................................ 211

Yamaguchi, Hirozumi ........................................................................................................................... 269

Yang, Yuan .............................................................................................................................................. 60

Yarovoy, Alexander............................................................................................................................... 217

Yin, Peng............................................................................................................................................... 169

Yutaka, Yamada .....................................................................................................................................261

Zhang, Yingjun........................................................................................................................................14

Zhao, Jianmin ........................................................................................................................................182

Zhao, Yubin......................................................................................................................................60, 152

Zheng, Zhengqi..................................................................................................................................... 182

Zhu, Jigui....................................................................................................................................... 174, 211

Zinkiewicz, Daniel................................................................................................................................ 129

Zisa, Joseph........................................................................................................................................... 169

Álvarez, Fernando.................................................................................................................................. 239

Edited by CCSd (Centre pour la Communication Scientifique Directe) on Thu, 10 Jul 2014