Wireless Personal Communications: Emerging Technologies for Enhanced Communications



WIRELESS PERSONAL COMMUNICATIONS
Emerging Technologies for Enhanced Communications


THE KLUWER INTERNATIONAL SERIES IN ENGINEERING AND COMPUTER SCIENCE


WIRELESS PERSONAL COMMUNICATIONS
Emerging Technologies for Enhanced Communications

edited by

William H. Tranter
Theodore S. Rappaport
Brian D. Woerner
Jeffrey H. Reed

Virginia Polytechnic Institute & State University

KLUWER ACADEMIC PUBLISHERS
New York, Boston, Dordrecht, London, Moscow


eBook ISBN: 0-306-47046-2
Print ISBN: 0-7923-8359-1

©2002 Kluwer Academic Publishers
New York, Boston, Dordrecht, London, Moscow

All rights reserved

No part of this eBook may be reproduced or transmitted in any form or by any means, electronic, mechanical, recording, or otherwise, without written consent from the Publisher.

Created in the United States of America

Visit Kluwer Online at: http://www.kluweronline.com
and Kluwer's eBookstore at: http://www.ebooks.kluweronline.com


TABLE OF CONTENTS

PREFACE ix

I SMART ANTENNAS AND DIVERSITY

1. Effects of Directional Antennas with Realizable Beam Patterns on the Spaced-Time Correlation 1
T. B. Welch, M. J. Walker and R. E. Ziemer

2. Frequency Reuse Reduction for IS-136 Using a Four Element Adaptive Array 11
J. Tsai, R. M. Buehrer

3. Pseudo-Blind Algorithm for SDMA Application 23
J. Laurila and E. Bonek

4. Integrated Broadband Mobile System (IBMS) Featuring Smart Antennas 35
M. Bronzel, J. Jelitto, M. Stege, N. Lohse, D. Hunold and G. Fettweis

5. CDMA Smart Antenna Performance 49
M. Feuerstein, J. T. Elson, M. A. Zhao and S. Gordon

II PROPAGATION

6. Wireless RF Distribution in Buildings Using Heating and Ventilation Ducts 61
C. P. Diehl, B. E. Henty, N. Kanodia and D. D. Stancil

7. Predicting Propagation Loss from Leaky Coaxial Cable Terminated With an Indoor Antenna 71
K. Carter

8. Building Penetration and Shadowing Characteristics of 1865 MHz Radio Waves 83
M. Panjwani and G. Hawkins

9. Maximizing Carrier-to-Interference Performance by Optimizing Site Location 91
J. Shi and Y. Mintz


10. Azimuth, Elevation, and Delay of Signals at Mobile Station Site 99

A. Kuchar, E. A. Aparicio, J. Rossi and E. Bonek

III INTERFERENCE CANCELLATION

11. A New Hybrid CDMA/TDMA Multiuser Receiver System 111
U. Baroudi and A. Elhakeem

12. Multiuser Multistage Detector for Mode 1 of FRAMES Standard 123
A. Boarin and R. E. Ziemer

13. Self-Organizing Feature Maps for Dynamic Control of Radio Resources in CDMA PCS Networks 129
W. S. Hortos

IV EQUALIZATION

14. Complex Scaled Tangent Rotations (CSTAR) for Fast Space-Time Adaptive Equalization of Wireless TDMA 143
M. Martone

15. An Effective LMS Equalizer for the GSM Chipset 155
J. Gu, J. Pan, R. Watson and S. Hall

16. Self-Adaptive Sequence Detection via the M-algorithm 167
A. R. Shah and B. Paris

17. Soft-Decision MLSE Data Receiver for GSM System 179
M. Lee and Z. Zvonar

V MODULATION, CODING AND NETWORKING

18. Turbo Code Implementation Issues for Low Latency, Low Power Applications 191
D. E. Cress and W. J. Ebel

19. Evaluation of the Ad-Hoc Connectivity with the Zone Routing Protocols 201
Z. J. Haas and M. R. Pearlman

VI INVITED POSTERS PRESENTED AT THE 1998 SYMPOSIUM

20. CDMA Systems Modelling Using OPNET Software Tool 213
P. Gajewski and J. Krygier


21. Signal Monitoring System for Fault Management in Wireless Local Area Networks 223
J. F. Vucetic and P. A. Kline

22. Computer-Aided Designing of Land Mobile Radio Communication Systems, Taking Into Consideration Interfering Stations 235
M. Amanowicz, P. Gajewski, W. Kolosowski and M. Wnuk

23. Adaptive Interference Cancellation with Neural Networks 247
A. Zooghby, C. Christodoulou and M. Georgiopoulos

24. Calibration of a Smart Antenna for Carrying Out Vector Channel Sounding at 1.9 GHz 259
J. Larocque, J. Litva and J. Reilly

25. Implementing New Technologies for Wireless Networks: Photographic Simulations and Geographic Information Systems 269
H. P. Boggess, II and A. F. Wagner, II

26. Envelope PDF in Multipath Fading Channels with Random Number of Paths and Nonuniform Phase Distributions 275
A. Abdi and M. Kaveh

27. Radio Port Spacing in Low Tier Wireless Systems 283
H. Yeh and A. Hills

28. A Peek Into Pandora’s Box: Direct Sequence vs. Frequency Hopped Spread Spectrum 305
R. K. Morrow, Jr.

29. On the Capacity of CDMA/PRMA Systems 315
R. P. Hoefel and C. de Almeida

INDEX 327


PREFACE

The papers appearing in this book were originally presented at the 8th Virginia Tech/MPRG Symposium on Wireless Personal Communications. This symposium, which is an annual event for Virginia Tech and MPRG, was held June 10-12, 1998 on the Virginia Tech campus in Blacksburg, Virginia. The symposium brings together leaders from industry and academia to discuss the exciting future of wireless and current research trends. The symposium has been an important part of MPRG's activities since the inception of the group in 1990.

As can be seen from the Table of Contents, the papers included in this book are divided into six sections. The first five of these correspond to symposium sessions and are devoted to the following topics: Smart Antennas and Diversity, Propagation, Interference Cancellation, Equalization, and Modulation, Coding and Networking. These session titles reflect current research thrusts as the wireless community strives to enhance the capabilities of wireless communications. This year an added feature of the symposium was the inclusion of externally contributed poster papers. Ten of these poster papers are included in this book as the sixth section.

The first group of contributions, consisting of five papers, relates to smart antennas and diversity. The first paper, Effects of Directional Antennas with Realizable Beam Patterns on the Spaced-Time Correlation, by T. B. Welch, M. J. Walker and R. E. Ziemer, considers the performance achieved with directional antennas at a base station. The authors consider the relationship between the bit error probability and the space-time correlation coefficient and illustrate the degradation in system performance that results when this correlation drops below one. The next paper, Frequency Reuse Reduction for IS-136 Using a Four Element Adaptive Array, is co-authored by J. Tsai and R. M. Buehrer. They present simulation results for two-element and four-element adaptive arrays and various frequency reuse factors. The third paper in this group, Pseudo-Blind Algorithm for SDMA Application by J. Laurila and E. Bonek, presents a novel pseudo-blind space-time equalization algorithm for application to spatial division multiple access systems. Simulation results are presented which show performance as a function of various parameters, including the number of antenna elements. The next paper, Integrated Broadband Mobile System (IBMS) Featuring Smart Antennas by M. Bronzel, J. Jelitto, M. Stege, N. Lohse, D. Hunold and G. Fettweis, explores the use of smart antennas to adaptively enable a trade-off between mobility and data rate. The authors present experimental data illustrating the relationship between beamwidth and delay spread. They point out that reduced delay spread allows the use of higher-order spectrally efficient modulation techniques. The final paper in this group, CDMA Smart Antenna Performance, is co-authored by M. Feuerstein, J. T. Elson, M. A. Zhao and S. Gordon. The strategy adopted is not to generate an optimum antenna pattern for each channel but rather to operate on a per-sector basis.


Each sector can be assigned patterns with the objectives of balancing traffic, managing handoff and controlling interference.

Propagation issues constitute the theme of the next group of five papers. The first paper in this group is titled Wireless RF Distribution in Buildings Using Heating and Ventilation Ducts, and was authored by C. P. Diehl, B. E. Henty, N. Kanodia and D. D. Stancil. They present a novel method for distributing RF signals in buildings using heating and ventilation ducts as waveguides. The use of existing infrastructure is attractive, and losses are low compared to direct propagation or a leaky coax. The following paper, Predicting Propagation Loss from Leaky Coaxial Cable Terminated With an Indoor Antenna by K. Carter, focuses on the development of a model for a leaky coaxial cable for use in the design of indoor microcell systems. The model presented in this paper exhibits a mean error of 2 dB with a standard deviation of 3 dB. The paper Building Penetration and Shadowing Characteristics of 1865 MHz Radio Waves, by M. Panjwani and G. Hawkins, presents experimental data for penetration loss and shadowing loss for seven buildings in urban environments in the Netherlands. They observed a high correlation between penetration loss and shadowing loss. This implies that penetration loss can be estimated with reasonable accuracy from shadowing loss, which is easier to measure. The fourth paper, Maximizing Carrier-to-Interference Performance by Optimizing Site Location by J. Shi and Y. Mintz, examines performance enhancement of a cellular system through the maximization of the carrier-to-interference ratio. This is accomplished by optimizing base station location according to traffic. The final paper in the propagation section, titled Azimuth, Elevation, and Delay of Signals at Mobile Station Site and co-authored by A. Kuchar, E. A. Aparicio, E. Bonek and J. Rossi, presents an analysis of channel sounder measurements made at 890 MHz in a dense urban environment in Paris, France. Their objective is a thorough study of propagation mechanisms in their target area. Extensive data is presented, and the results of their study will be incorporated into future propagation models developed by the authors.

The third group of papers relates to interference cancellation as a technique for enhancing system performance. There are three contributions in this group. The first paper, A New Hybrid CDMA/TDMA Multiuser Receiver System by U. Baroudi and A. Elhakeem, considers a novel traffic control scheme which allows use of a dual-mode receiver. A decorrelating multi-user receiver is used in bursty slots and a single-user receiver is used in non-bursty slots. In the following paper, Multiuser Multistage Detector for Mode 1 of FRAMES Standard by A. Boarin and R. E. Ziemer, the authors consider a multistage detector that combats both multiple access interference and intersymbol interference in code/time division multiple access systems. They show that the complexity of the detector is proportional to the number of users. The third and final paper in this group, Self-Organizing Feature Maps for Dynamic Control of Radio Resources in CDMA PCS Networks by W. S. Hortos, considers the application of self-organizing feature maps to channel assignment in CDMA systems, in which radio resources are regulated to minimize interference.


Army Research Office (ARO), AT&T Corporation, BellSouth Cellular Corporation, Ericsson, Inc., Grayson Wireless, Hewlett-Packard Company, Honeywell, Inc., Hughes Electronics Corporation, ITT Industries, Lucent Technologies, Inc., Motorola, Inc., National Semiconductor, Nokia, Inc., Nortel, Qualcomm, Inc., Radix Technologies, Inc., Raytheon Systems Corporation, Salient 3 Communications, Southwestern Bell, Tektronix, Inc., TRW, Inc., and Watkins-Johnson Company.

MPRG’s primary mission is to serve the educational and research needs of the wireless community. Our affiliates make this possible, and we thank them for their interest and for their support.

Blacksburg, Virginia

William H. Tranter
Theodore S. Rappaport
Brian D. Woerner
Jeffrey H. Reed


I

Effects of Directional Antennas with Realizable Beam Patterns on the Spaced-Time Correlation Function

Dr. Thaddeus B. Welch, U.S. Naval Academy, 410.293.6160, [email protected]
Dr. Michael J. Walker, U.S.A.F. Academy, 719.333.4213, [email protected]
Dr. Rodger E. Ziemer, University of Colorado at Colorado Springs, 719.262.3350, [email protected] *

Abstract

In this paper we analyze the effects of a realizable directional antenna at a base station on the bit error probability (BEP) associated with a radio frequency (RF) link to a mobile receiver. The base station is assumed to have the peak of its radiation pattern pointed directly towards the mobile receiver, which is surrounded by a uniform distribution of isotropic scatterers. The shape of the radiation pattern shifts the angle-of-arrival (aoa) probability density function (pdf) to greater values at angles close to the line of sight and lowers the power spectral density (PSD) of the higher frequency components of Doppler spreading. Since the inverse Fourier transform of the Doppler power spectrum is the spaced-time correlation function, reducing the high frequency power of the PSD results in a flatter correlation function. For a system employing differentially coherent detection (e.g. DPSK), a flatter correlation function results in a correlation coefficient closer to one (1.0). For a DPSK system operating in a Doppler spread channel, this improvement results in a reduction of the system's BEP and error floor.

1 Development of the Doppler Spectra Using a Realizable Antenna Pattern at the Base Station

In [1], Petrus, Reed, and Rappaport derive the Doppler power spectrum associated with an ideal (flat top) antenna located at the base station. A uniform distribution of scattering centers located within a 1 km radius around the mobile receiver is completely illuminated by the base station transmitter located 3 km away when the ideal antenna's beamwidth is greater than 38.94°. In this configuration, the probability density function (pdf) of the angle-of-arrival (aoa) of scattered energy will be a constant-valued function equal to 1/(2π). The resulting Doppler spectrum is the familiar U-shaped "bathtub" function. However, when the ideal base station antenna has a beam pattern that does not fully illuminate all of the scattering centers (BW < 38.94°), the pdf of the aoa of scattered energy becomes skewed towards those line-of-sight (LOS) angles within the beam. The resulting power spectral density (PSD) of Doppler spreading at the mobile receiver also becomes skewed into a W-shaped function, with a decrease in the energy content of high frequency Doppler components and a corresponding increase in the low frequency components. The narrower the beam, the greater the change in the PSD. The Doppler power spectra associated with this system are plotted in Fig. 1.
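The fully illuminated case can be checked numerically. The short Python sketch below evaluates the classical U-shaped Doppler PSD that results from a uniform aoa pdf of 1/(2π); the function name and default maximum Doppler frequency are illustrative choices, not values taken from the paper.

```python
import math

def bathtub_psd(f, fd=100.0):
    """Classical U-shaped ("bathtub") Doppler PSD for uniformly
    distributed scatterers around the mobile:
    S(f) = 1 / (pi * fd * sqrt(1 - (f/fd)^2)) for |f| < fd,
    and zero outside the Doppler band."""
    if abs(f) >= fd:
        return 0.0
    return 1.0 / (math.pi * fd * math.sqrt(1.0 - (f / fd) ** 2))

# The spectrum is flat near f = 0 and rises sharply toward +/- fd,
# which is the "bathtub" shape referred to in the text.
print(bathtub_psd(0.0), bathtub_psd(99.0), bathtub_psd(150.0))
```

Narrowing the beam suppresses exactly the components near ±fd, which is why the spectrum deforms from this U shape toward the W shape described above.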


Unfortunately, the use of a directional antenna with a "flat-top" pattern is unrealizable. These antennas do not exist because it would take an infinitely long current source to produce a beam pattern that is constant over a small angular region and zero everywhere else. We will investigate the change in the Doppler spectra caused by a more realistic antenna pattern: a linear array of vertical dipoles. The total field pattern of an antenna array is equal to the product of the antenna pattern of a single antenna element and the array factor. The antenna pattern of a single vertical dipole is uniform in azimuth, thus it will be ignored for the remainder of this effort. The array factor is a function of the number of antenna elements (M), the electrical spacing between elements (d), the phase relationships between each element, and any external "weighting" of elements by amplification or attenuation. For a linear array of identical elements with uniform "weights" and no phase shift between successive elements, the array pattern is the function

AF(ψ) = sin(Mψ/2) / (M sin(ψ/2)),

where ψ = kd sin(θ) and k = 2π/λ. The geometry used for the development of the problem is very similar to that used by Petrus [1] and is shown in Fig. 2.
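As a concrete check of the standard uniform-linear-array factor, the Python sketch below evaluates the normalized pattern; the function name and the half-wavelength default spacing are illustrative assumptions, not values from the paper.

```python
import math

def array_factor(theta, M=2, d_over_lambda=0.5):
    """Normalized array factor of a uniform linear array with uniform
    weights and no progressive phase shift:
      AF(theta) = sin(M*psi/2) / (M*sin(psi/2)),
      psi = 2*pi*(d/lambda)*sin(theta),
    where theta is the azimuth angle in radians measured from broadside."""
    psi = 2.0 * math.pi * d_over_lambda * math.sin(theta)
    den = M * math.sin(psi / 2.0)
    if abs(den) < 1e-12:          # psi -> 0 limit: AF -> 1 (main-lobe peak)
        return 1.0
    return math.sin(M * psi / 2.0) / den

# Broadside peak and, for M = 2 with half-wavelength spacing, an endfire null:
print(array_factor(0.0), array_factor(math.pi / 2))
```

Narrowing the first-null beamwidth corresponds to increasing M and/or the element spacing, which is the knob varied in the comparisons that follow.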

A uniform distribution of isotropic scatterers is located within a circular region of radius R around the mobile. The base station is separated by a distance D and uses a directional antenna with a field pattern G(θ). To calculate the pdf, the power received (Pr) from any narrow angular region between θ and θ + Δθ must be calculated. This is done by integrating the differential areas in the region, weighted by the power received at their particular angles within the antenna pattern; the field pattern is squared to generate the power pattern. Because we will be normalizing the pdf, the correct proportionality is sufficient and the absolute magnitude is not necessary. To evaluate the integral, the angle seen at the base station must be represented in terms of the geometry (R, D) and the angle at the mobile.


The pdf is derived by letting Δθ → 0 and normalizing the power received from each direction by the total power received. The integrals in the numerator and denominator of the resulting expression are evaluated numerically. The denominator needs to be calculated only once and can then be used for all following calculations. The numerator must be evaluated for each angle of interest.

Assuming the antenna at the mobile receiver is omnidirectional, it can be shown [2] that the Doppler power spectrum follows from the aoa pdf through the mapping f = fd cos(θ), where fd is the maximum Doppler frequency.

Figures 3 and 4 are comparisons of the pdfs of our directional dipole antenna array with two of Petrus' ideal "flat-top" antennas, one with a beam wide enough to fully illuminate all the scatterers and one with a more directional beam. Figure 3 is a comparison when the directional antennas have their First Null Beam Width (FNBW) set to 10° and Fig. 4 is for a FNBW = 2°. In both cases, a two element dipole array is used (M = 2), with the element spacing chosen to produce each beamwidth. The beamwidths in all figures are listed in degrees.

Figures 5 and 6 are comparisons of the PSDs of the same antenna configurations. The mobile is assumed to be traveling at an angle of 90° with respect to the LOS and the maximum Doppler frequency is assumed to be 100 Hz. For all four plots, the distance between the base station and mobile receiver (D) is set to 3 km and the scatterers are located within a radius (R) of 1 km around the mobile receiver.


It is apparent that the dipole antenna array has an effect similar to that of the ideal "flat-top" antenna: a decrease of power in the high frequency Doppler components. For the moderate beamwidths of Figs. 3 and 5, the effect of the tapered shape of the dipole array's pattern within the common FNBW region seems to dominate the comparison and makes the dipole array a more attractive configuration from the point of view of minimizing Doppler spreading. At the very narrow beamwidth of Figs. 4 and 6, the array doesn't reduce the high frequency components quite as far as the "flat-top". It's possible that when the beam is this narrow, the sidelobes of the dipole array's power pattern become the dominant effect, illuminating those scatterers further from the LOS compared to the "flat-top" (which doesn't illuminate them at all).

2 Calculating the Spaced-Time Correlation Function

The information in the Doppler power spectra plots is useful, but primarily in an intuitive sense. We want to reduce the amount of Doppler spreading in our channel, and we want to start by reducing the highest frequency components first. A more direct way of viewing the effect of the Doppler power spectrum is to transform the information into the spaced-time correlation function. As previously noted, the spaced-time correlation function and Doppler power spectrum are a Fourier transform pair. This can be approximated using an FFT/IFFT process, and with proper sampling of the Doppler power spectrum sufficient accuracy can be obtained. The resolution associated with a discrete-time sampling process is

Δf = fs/N = 1/(N Ts)

where Δf is the resolution in hertz, fs is the sampling frequency in hertz, N is the number of samples, and Ts is the sample period in seconds [3]. A resolution of Δf = 0.03125 hertz with Ts = 0.0001 s was selected; this corresponds to N = 320000 samples. This combination requires a tremendous amount of zero padding of the Doppler power spectrum. Figure 7 illustrates the results of this process applied to the "full illumination" configuration with fd = 100 hertz. For the actual spaced-time correlation function of the full-illumination case we are using the known closed form J0(2π fd Δt), where J0 is the zeroth-order Bessel function of the first kind. In Fig. 7 the calculated spaced-time correlation function is a magnitude-only plot. This accounts for the 180° phase error (sign error) associated with the first sidelobe. This sign error can be corrected by looking at the correlation function phase plot, which is 180° for the first sidelobes. This effect should never be of concern, since DPSK should only be considered for use in a channel through which the correlation coefficient is very close to one.
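The resolution bookkeeping above can be verified directly. This small Python fragment just reproduces the Δf = fs/N = 1/(N·Ts) arithmetic with the Ts and N values quoted in the text.

```python
# DFT frequency resolution: delta_f = fs / N = 1 / (N * Ts)
Ts = 0.0001        # sample period in seconds (value from the text)
N = 320000         # number of samples (value from the text)

fs = 1.0 / Ts      # sampling frequency in hertz
delta_f = fs / N   # frequency resolution in hertz

print(fs, delta_f)  # approximately 10000 Hz and 0.03125 Hz
```

The fine 0.03125 Hz grid over a 10 kHz span is exactly why the Doppler spectrum, which occupies only ±100 Hz, must be heavily zero padded before the IFFT.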

With a sufficiently accurate numeric process in place, we can now approximate the inverse Fourier transform of the Doppler power spectra shown in Figs. 5 and 6. This inverse Fourier transform process results in Fig. 8. The moderate beamwidth PSDs of Fig. 5 result in a significant "flattening" of the correlation coefficient, with the dipole array noticeably outperforming the "flat-top", as expected from its lower high frequency Doppler components. The narrow beamwidths of Fig. 6 produce an even greater "flattening" effect and their correlation coefficients almost lie on top of each other. It is interesting to note that although the "flat-top" does just barely outperform the array at the earliest times, the array eventually climbs back on top after about 5-6 ms.


Zooming in on the peak of the main lobe of Fig. 8 results in Fig. 9. From Figs. 8 and 9 it can be seen that directional antennas, in general, increase the magnitude of the spaced-time correlation function. The resulting increase in the correlation coefficient will decrease the bit error floor. A flatter correlation coefficient, i.e. one that remains closer to a value of 1.0 for the longest time, will be most likely to identify the correct data values. This improvement in the correlation coefficient will reduce the system's bit error probability (BEP) as well as the error floor. It was demonstrated in [4] that for a multicarrier DPSK system employing an equal gain combiner (EGC), the system's BEP improves as the correlation coefficient approaches one. This is shown graphically in Fig. 10.

3 Conclusions

The numerically computed Doppler power spectrum associated with the moderate beamwidth (10 degree) realizable antenna was noticeably different from the Doppler spectrum derived previously using an ideal flat-top antenna. For this case, the ideal antenna is deficient in its ability to accurately model the effect of a directional beam at the base station. This difference decreases significantly for the results associated with very narrow beamwidth (2 degree) realizable and ideal flat-top antennas. At this extreme the ideal model is a good approximation and greatly reduces the calculation effort required for analysis. The numerical approximation of the inverse Fourier transform by an extremely oversampled IFFT of the numerically computed Doppler power spectrum is sufficiently accurate to closely approximate the spaced-time correlation function.


Both the ideal flat-top antenna and the realizable antenna result in a significant flattening of the spaced-time correlation function in the region of interest. This flattening of the spaced-time correlation function results in a correlation coefficient closer to unity. For a differentially coherent system operating in a Doppler spread channel, this increase in the magnitude of the correlation coefficient can result in a significant improvement in the system's bit error probability.

References

[1] Petrus, P., J. H. Reed and T. S. Rappaport, “Effects of Directional Antennas at the Base Station on the Doppler Spectrum,” IEEE Communications Letters, Vol. 1, No. 2, March 1997.

[2] Liberti, J. C. and T. S. Rappaport, “A Geometrically Based Model for Line-of-Sight Multipath Radio Channels,” Proc. IEEE Vehicular Technology Conference, April 1996.

[3] Porat, B., A Course in Digital Signal Processing, John Wiley & Sons, Inc., 1997.

[4] Welch, T. B., “Analysis of Reduced Complexity Direct-Sequence Code-Division Multiple-Access Systems in Doubly Spread Channels,” Technical Report EAS-ECE-1997-12, University of Colorado at Colorado Springs.

* Work of R. E. Ziemer supported in part by the Office of Naval Research under contract N00014-020-J01761/P00004.


II

Frequency Reuse Reduction for IS-136 Using a Four Element Adaptive Array

Jiann-An Tsai and R. Michael Buehrer
Bell Laboratories - Lucent Technologies
67 Whippany Rd. Room 1A-243
Whippany, NJ 07981
[email protected]

Abstract  In this paper we present uplink simulation results for two and four element adaptive arrays assuming a TDMA system with an IS-136 slot structure. The results consist of the simulated bit error-rate (BER) performance assuming the use of two and four element adaptive arrays with three different frequency reuse patterns. Simulations are also run for a baseline two branch diversity system using standard combining. The results show that by using a four element array in each of three sectors, a frequency reuse of four can be supported on the uplink with up to an additional 2dB gain with respect to the baseline system using a reuse of seven. The four element array is also shown to support a reuse of three, albeit with less reliability. This paper addresses the uplink only; similar downlink gains must be achieved in order to obtain these capacity improvements. Element correlation is addressed through the use of a multi-element channel model. The majority of the results assume 10 wavelength inter-element spacing, which results in low correlation between elements. Relaxing this condition showed severe degradation in array performance. Environments with low angular spread will require either large array apertures (30 wavelengths total spacing for four elements) or the use of dual polarization antennas.

1 Introduction

In the effort to increase the capacity of cellular wireless systems, intelligent antenna systems have received significant attention. Both adaptive antenna arrays and fixed sector antennas have been investigated for performance improvements in TDMA and CDMA systems [1, 2, 3, 4]. On the TDMA side, previous work has shown the potential of two and four element adaptive arrays in IS-54/136 [1, 4] to suppress a single high power interferer. In this paper we present results for multi-cell simulations which provide insight into obtainable reuse reductions.

The present work follows work in [1], which presented performance results for a four element array using Direct Matrix Inversion (DMI) with Diagonal Loading (DL) to determine the complex antenna weights. The work in [1] demonstrated that at low speeds a four element array with a single equal power interferer could achieve a 3-4dB gain at a bit error-rate (BER) of 10^-2 over a two element array without interference. At high speeds an implementation loss is suffered, but at a BER of 10^-2, a Signal-to-Interference ratio as low as 3dB could be accommodated in the presence of a single interferer.

However, [1] did not discuss multi-cell performance, where multiple interferers may be present. For a frequency reuse pattern below seven, the base station array will see more than a single significant interferer a high percentage of the time. Thus, the present work investigates the performance in multi-cell environments where the array sees as many as three significant interferers. Conclusions on reuse reduction are based on a required uncoded BER of 10^-2 (i.e. voice systems). We show that a reuse of four is obtainable with a gain in SNR performance as compared to the baseline (two branch, three sector) system operating with a reuse of seven. A reuse of three is also possible, although with a loss in reliability and SNR performance in high Doppler spread environments.


It should be noted that this work considers the uplink only. The gains described here must be matched on the downlink in order to be converted to frequency reuse reduction.

Section 2 describes the algorithms used in this study as well as the system modeling assumptions. Performance results are presented in Section 3 for two and four element arrays assuming (near) independence between array elements. Section 4 discusses the effect of correlation on the performance of the four element array as well as the consequences on antenna configuration. Conclusions are given in Section 5.

2 System Model and Algorithms

We assume that the baseband equivalent of the received input signal from M antenna elements is sampled once per symbol and represented in complex vector notation as

r[k] = s0[k] x0[k] + Σ_{i=1}^{K} si[k] xi[k] + n[k]    (1)

where s0[k] is the differentially encoded signal and x0[k] is the channel vector of the signal of interest at sampling epoch k, si[k] and xi[k], i = 1, 2, ..., K, are the interfering signals and their channel vectors, and n[k] is a vector of noise samples taken from a temporally and spatially white process. We assume that the channel vectors vary slowly with respect to the differentially encoded symbols and the time delay across the array is small with respect to the symbol duration. Boldface type is used to represent vector quantities.

The baseline system uses maximal ratio combining of differentially detected DBPSK.1 Thus the symbol (or equivalently bit) estimate is created as

b^[k] = sgn( Re{ r^H[k-1] r[k] } )    (2)

where b^[k] is the estimate of the kth symbol b[k], sgn(x) is the signum of x, (·)^H is the conjugate transpose, and Re{x} is the real part of x. The baseline system thus ignores the effect of interference and detects the desired signal's phase differentially.

Detection of the data is slightly different with the adaptive array as combining is done prior todifferential decoding. The array output z[k] is the weighted sum of the element outputs

where w is the vector of complex array weights. We assume the use of differential coding (althoughnot necessarily differential detection) thus, the decoded symbol is

where we have performed differential decoding after combining. Thus, z[k] can be viewed as an estimate of the differentially encoded symbol s[k] = b[k]s[k - 1].
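The combine-then-decode order described here can be sketched as follows (weights and channel values are illustrative; the combining equation z = w^H r follows the text):

```python
import numpy as np

def array_detect(w, r_prev, r_curr):
    """Adaptive-array bit estimate: combine first (z = w^H r), then decode
    differentially, b_hat[k] = sgn(Re{ z[k] * conj(z[k-1]) })."""
    z_prev, z_curr = np.vdot(w, r_prev), np.vdot(w, r_curr)
    return np.sign(np.real(z_curr * np.conj(z_prev)))

x = np.array([1 + 1j, 0.5 - 2j, -1 + 0.5j, 2 + 0j])  # hypothetical channel
w = x                                  # MRC-like weights, for illustration only
print(array_detect(w, x * 1.0, x * -1.0))   # differential bit -1 -> -1.0
```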

The channel vector for each user, xi, is created by modeling the received signal as a sum of sinusoids with random phases and angles-of-arrival

1 Note that IS-136 uses DQPSK rather than DBPSK. The results here also apply to the case of DQPSK.


Page 28: Wireless Personal Communications: Emerging Technologies for Enhanced Communications

where the angle of departure from the mobile is a random variable assumed uniformly distributed, fd is the maximum Doppler frequency, a vector of distances from each element to some reference point determines the array response, n is the number of multipaths, and the carrier wavelength sets the phase scaling. Note that this is a multi-element extension of the well known "Jakes Model" [5]. We assume that the angles-of-arrival are uniformly distributed about the central angle-of-arrival of the ith user's signal, with a given angle spread [6]. The random phases provide the classic Rayleigh fading envelope at each individual element, while the angle spread determines the spatial correlation.
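The display equation for this model is lost in this copy, so the sketch below is an assumption consistent with the description (random phases, Doppler driven by a random angle of departure, and an array phase term driven by the angle-of-arrival); the parameter names and values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def channel_vector(t, d, lam, fd, theta0, spread_deg, n_paths=20):
    """Sum-of-sinusoids channel vector (multi-element Jakes-style model).
    d: element distances to a reference point; theta0: central angle-of-arrival;
    angles-of-arrival uniform in theta0 +/- spread."""
    spread = np.deg2rad(spread_deg)
    aoa = theta0 + rng.uniform(-spread, spread, n_paths)    # angles-of-arrival
    dep = rng.uniform(0.0, 2 * np.pi, n_paths)              # angles of departure
    phi = rng.uniform(0.0, 2 * np.pi, n_paths)              # random phases
    doppler = 2 * np.pi * fd * np.cos(dep) * t + phi        # per-path phase
    steering = np.exp(1j * 2 * np.pi * np.outer(d, np.cos(aoa)) / lam)
    return (steering * np.exp(1j * doppler)).sum(axis=1) / np.sqrt(n_paths)

d = np.arange(4) * 2.5               # element distances (in units of lam)
x = channel_vector(t=0.0, d=d, lam=1.0, fd=100.0, theta0=np.pi / 3, spread_deg=10)
print(x.shape)                       # (4,)
```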

Interference is added to the simulation based on the frequency reuse pattern. We assume either three, four or seven cell reuse. The mean C/I assumed per interferer is determined by

where the path loss exponent [7] is assumed to be 3.8, the reuse distance is expressed in terms of the cell radius R for a reuse of r, and Mf is a fade margin3. The fade margin is included to account for the effect of shadowing and is assumed to be 10dB. The first term of (6) provides a worst case estimate of the C/I in the absence of shadowing, since it assumes that the desired user is a distance R from the base station (i.e. the farthest possible distance) while the interferer is approximately at the closest distance it could be in a three sector system. The fade margin provides an overhead to account for the shadowing of both the desired signal and the interferer. Using a 10dB fade margin, the C/I per interferer for a frequency reuse of seven is 15dB. When the frequency reuse is reduced to four or three, the C/I per interferer is 10dB or 8dB respectively. Short term fading is also included for each signal via the channel model as shown in equation (5).
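Equation (6) itself is garbled in this copy; the reconstruction worked out in the appendix (C/I per interferer of 10*n*log10(Dr) - Mf in dB, with Dr = sqrt(3r) the standard hexagonal reuse ratio, an assumption here) reproduces the 15dB/10dB/8dB values quoted above:

```python
import math

def ci_per_interferer_db(r, n=3.8, fade_margin_db=10.0):
    """Mean worst-case C/I per interferer: 10*n*log10(D_r) - Mf,
    with D_r = sqrt(3*r) (hexagonal-geometry assumption)."""
    d_r = math.sqrt(3 * r)
    return 10 * n * math.log10(d_r) - fade_margin_db

for r in (7, 4, 3):
    print(r, round(ci_per_interferer_db(r), 1))
# reuse 7 -> 15.1 dB, reuse 4 -> 10.5 dB, reuse 3 -> 8.1 dB
```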

The array weights can be chosen to minimize the expected error with respect to a reference symbol (MMSE weighting), or the weights can be selected to maximize the Signal-to-Interference-plus-Noise Ratio (SINR). Since the two solutions differ only by a scale factor [8] and we have a training sequence in IS-136, we choose to use the MMSE weights. The weights which minimize the mean square error can be shown to be [8]

where R is the covariance matrix of the received signal and the cross-correlation between the received signal and the reference can be shown to be equivalent to the desired signal's channel vector x0.

These weights can be obtained adaptively (e.g. using Least Mean Squares) or via Direct Matrix Inversion (DMI). For large arrays, adaptive weight computation can be more efficient, but it can be slow to converge. In a fast varying channel this can limit performance. In this work we use DMI, and the covariance matrix is estimated as

where N is the number of samples used in the estimate. Ideally, N should be related to the coherence time of the channel. However, since the coherence time of the channel is not known a priori, a value of N must be chosen which gives the best performance over a range of channel coherence times. The channel vector is estimated using the sample mean of the received signal

2 This reflects the fact that we assume an omni-directional antenna at the mobile.
3 See Appendix A.



vector multiplied by the reference signal. To save memory, the fixed window can also be implemented as a single pole filter:

where α determines the time constant of the update. For a slowly varying channel, and consequently a long coherence time, a large α can be used to reduce the effect of thermal noise. However, as the coherence time approaches the symbol duration, α must be decreased to track the channel variation. In our simulations, a single value of α provided the best performance over the range of Doppler frequencies studied (20Hz - 180Hz). Since the noise reduction of the filter determined by α is equivalent to that of a block window of corresponding length, this result agrees with the result in [1], which recommends a block window with N = 14.

An important detail in the algorithm is the requirement of a reference symbol for each weight update. This is accomplished for IS-136 by using the 14 symbol SYNCH word for training [1]. After initial training using the SYNCH sequence, weight adaptation is accomplished by using previous symbol decisions as a reference during the remainder of the slot (i.e. decision directed updating)4.
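The DMI estimates and the single-pole alternative described above might be sketched as follows; the α value and noise level are illustrative only (the paper's α is not recoverable from this copy), and the 14-symbol training length follows the SYNCH word mentioned above:

```python
import numpy as np

def dmi_weights(snapshots, ref_symbols):
    """Direct Matrix Inversion: R_hat = (1/N) sum r r^H, x0_hat = sample
    mean of r times the conjugated reference, w = R_hat^{-1} x0_hat."""
    N = len(ref_symbols)
    R_hat = sum(np.outer(r, r.conj()) for r in snapshots) / N
    x0_hat = sum(r * np.conj(s) for r, s in zip(snapshots, ref_symbols)) / N
    return np.linalg.solve(R_hat, x0_hat)

def update_covariance(R_prev, r, alpha=0.9):
    """Single-pole filter form of the covariance estimate (saves memory):
    R[k] = alpha * R[k-1] + (1 - alpha) * r[k] r[k]^H."""
    return alpha * R_prev + (1 - alpha) * np.outer(r, r.conj())

rng = np.random.default_rng(2)
M, N = 4, 14                          # 14-symbol SYNCH training, as in IS-136
x0 = rng.standard_normal(M) + 1j * rng.standard_normal(M)
s = rng.choice([-1.0, 1.0], N)
snaps = [x0 * sk + 0.05 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
         for sk in s]
w = dmi_weights(snaps, s)
R_rec = update_covariance(np.zeros((M, M), dtype=complex), snaps[0])
print(w.shape)                        # (4,)
```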

The MMSE algorithm equally weights noise and interference. However, since we do not know the interference perfectly, and since our estimates of the interference can be less reliable than our noise variance estimates for small sample sizes, we may want to put less emphasis on the interference than MMSE would dictate. To accomplish this we use what is termed Diagonal Loading [1]. In this technique we construct the element weights as

where the diagonal loading factor is added to the diagonal of the covariance estimate. By increasing the loading we increase the relative importance of the noise in the weight calculation and decrease the ratio of the larger eigenvalues to the smaller eigenvalues. This provides better performance when the estimate of the covariance matrix is poor, since it pushes the weights toward the Maximal Ratio Combining (MRC) solution.

The appropriate value of the loading factor is directly related to the interference-to-noise ratio. As this ratio decreases, the relative weighting moves toward unity, which de-emphasizes the interference in the weight calculation. The loading value was chosen as discussed in [1].
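A minimal sketch of diagonal loading, assuming the loaded-weight form w = (R + load*I)^{-1} x0 implied by the text (the display equation is missing from this copy); the usage line shows that heavy loading collapses the weights toward the MRC solution w proportional to x0:

```python
import numpy as np

def loaded_weights(R_hat, x0_hat, load):
    """Diagonal loading: w = (R_hat + load * I)^{-1} x0_hat.
    A large load de-emphasizes the estimated interference."""
    return np.linalg.solve(R_hat + load * np.eye(R_hat.shape[0]), x0_hat)

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
R_hat = A @ A.conj().T                # a hypothetical covariance estimate
x0_hat = rng.standard_normal(4) + 1j * rng.standard_normal(4)
w = loaded_weights(R_hat, x0_hat, load=1e6)
print(np.allclose(w * 1e6, x0_hat, rtol=1e-3))   # heavy loading -> MRC-like
```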

3 Algorithm Performance

The noise limited case provides information concerning range extension. Figure 1 presents the performance of the baseline two element system along with two and four element adaptive arrays using the algorithm described above in the absence of interference. The figure also includes three ideal maximal ratio combining (MRC) performance plots. The rightmost MRC curve is the theoretical performance of a two element system using MRC with differential detection of DBPSK. The middle MRC curve represents the theoretical performance of a two branch diversity system using coherent detection of DBPSK. The leftmost MRC curve gives the theoretical performance of a four branch diversity system with coherent detection of DBPSK. The two element adaptive array provides a 1-2dB improvement versus the baseline system at a BER of 1%. This agrees with previously published results [4]. The reason for the improvement is that in the absence of interference, the adaptive array weights will provide coherent channel estimates. The adaptive array then is essentially performing coherent detection and differential decoding, as opposed to the baseline system

4 The existence of a known CDVCC sequence in the slot structure of IS-136 also allows for additional training [1].



which is performing both differential detection and differential decoding. Thus, a 1-2dB gain is possible. The four element array provides approximately 3-4dB of improvement compared to the baseline system, or a 2dB improvement with respect to the two element adaptive array. This increase is due to both coherent detection and increased diversity.5 Both adaptive arrays show degradation at high Doppler frequencies. As the channel coherence time decreases, the array algorithm fails to track the channel sufficiently, resulting in a performance degradation. This degradation is more pronounced at high SNR, since tracking errors begin to dominate performance.

Capacity improvements can be realized in TDMA systems if the frequency reuse pattern can be reduced. This requires adaptive arrays to increase the effective SINR seen at the array output for smaller frequency reuse patterns when compared to traditional two branch diversity systems.

We investigate reuse cluster sizes of seven, four, and three with three sectors per cell. Each sector is assumed to have an antenna pattern with a 105° half-power beamwidth.

The C/I of a desired mobile is determined by the position of the interferers, the path loss exponent, and the position of the desired mobile, as well as the relative shadowing. We assume a worst case

position for each interferer and the desired mobile and use a 10dB fade margin to account for relative shadowing6. Using equation (6) with the assumed path loss exponent and a reuse size of seven, we obtain a C/I of 15dB per interferer. Thus, for a reuse of 7 we assume that each interferer will have an average received energy 15dB below the desired user.

Figure 2 shows the simulated BER performance of the baseline, two element adaptive and four element adaptive systems assuming a reuse of seven. The adaptive arrays are represented by open symbols on the plot while the baseline system is represented by filled symbols. The results represent four separate cases: (1) a single interferer with both desired and interfering signals

5 Note that the aperture gain is not included since performance is plotted versus SNR per bit. A four element array will naturally have a 3dB SNR advantage due to aperture gain, but we wish to isolate the diversity and interference cancellation advantages.

6 See Appendix A.



experiencing a maximum Doppler frequency (fd) of 20Hz; (2) a single interferer with fd = 180Hz; (3) two interferers with fd = 20Hz; and (4) two interferers with fd = 180Hz. For a target uncoded error rate of 1e-2, the two element adaptive array achieves approximately a 1-2dB improvement over the baseline system for all four cases. The four element array, however, provides 3-5dB of gain due both to diversity and interference suppression. Note that the performance gain of the two element adaptive array relative to the baseline system decreases with Doppler spread due to tracking errors.

Since the two element array has only two degrees of freedom in weight selection, and must maintain diversity as well as minimize interference, the array performance can degrade quickly with two interferers. The four element array, however, can handle two interferers as well as maintain sufficient diversity. As a result, we see that the second interferer causes a 2dB degradation in the two element case, while the four element array suffers virtually no degradation.

The value of C/I per interferer was chosen as an approximate worst case situation.7 We would like to determine how often we can expect to see one or more interferers with sufficient power to cause a C/I of 15dB in a reuse pattern of seven. That is, we wish to know how often the worst case occurs for one or more interferers. Simulations were run with interferers given random positions within their sector and random shadowing coefficients. The shadowing was assumed to be log-normally distributed with a standard deviation of 8dB. This allowed statistics to be gathered

concerning the received C/I. Table 1 shows the percentage of time 0, 1, 2, or 3 interferers were received with sufficient energy to cause an I/C of at least -15dB (corresponding to a C/I per interferer of 15dB or less). From the table we see that 63.6% of the time there were no interferers strong enough to cause a

C/I below 15dB. Additionally, a single interferer broke this threshold with a 22.3% probability and two interferers broke this threshold with an 8.3% probability. From Figure 2, since the four element array can achieve the target BER of 1% in the presence of two interferers, we can claim a reliability of at least 94.2% (i.e. 63.6 + 22.3 + 8.3) at approximately 9.5dB SNR.
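The spirit of this Table-1 experiment can be sketched with a small Monte Carlo run. The sketch below only draws log-normal shadowing about a fixed worst-case mean C/I (all values illustrative) and omits the random interferer positions, so its fractions will not match Table 1 exactly:

```python
import numpy as np

rng = np.random.default_rng(4)

def strong_interferer_counts(n_trials=20000, n_interferers=3,
                             mean_ci_db=25.0, shadow_std_db=8.0,
                             threshold_db=15.0):
    """Per trial, draw each interferer's C/I with log-normal (dB-Gaussian)
    shadowing and count how many fall below the threshold; return how many
    trials had 0, 1, 2, or 3 strong interferers."""
    ci = mean_ci_db + shadow_std_db * rng.standard_normal((n_trials, n_interferers))
    return np.bincount((ci < threshold_db).sum(axis=1), minlength=n_interferers + 1)

counts = strong_interferer_counts()
print(counts / counts.sum())   # fractions of trials with 0, 1, 2, 3 strong interferers
```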

7 It is approximate since we include a fade margin rather than a worst case fade.



Since the four element array could provide significant performance improvement compared to the baseline system in the case of r=7, simulations were run with reduced reuse patterns. Figure 3 shows the simulated BER performance of the two and four element adaptive systems for the same four cases considered above, but with a reuse pattern of four. The C/I value assumed per interferer is obtained from (6) as 10dB. From Figure 3, the two element array can only achieve the required BER at low Doppler with a single interferer, albeit with a 2dB loss compared to the baseline system with r=7 (approximately 13dB from Figure 2). However, the four element array can achieve the target BER with a 2-4dB improvement over the baseline performance at r=7. Additionally, from Table 1 we see that the reliability (i.e. the percentage of time there are three interferers or fewer at the threshold C/I) is over 95%, which is nearly identical to that for r=7. Thus, the four element adaptive array can reduce the reuse pattern (i.e. increase capacity) and still provide about a 3dB SNR improvement.



A reduction in reuse to r=3 is investigated in Figure 4. The two element array is not considered. The results show that in the presence of a single significant interferer (a C/I of 8dB per interferer from (6)), the four element array can still maintain a 2-3dB improvement over the baseline system with r=7. Additionally, for a low Doppler spread (20Hz) the array can also maintain that advantage in the presence of two or three interferers. However, at high Doppler spreads, the array suffers a degradation for more than one interferer. With two interferers present and a Doppler spread of 180Hz, the array suffers a 2dB degradation in SNR per bit at the target BER compared to the baseline system with r=7, and cannot achieve the required BER of 1e-2 when three interferers are present. If we require the system to maintain an SNR per bit of 15dB (i.e. the maximum required SNR for the baseline case), the four element array can achieve adequate performance only when a single interferer is present. This results in a reliability of 82.2%, as compared to a reliability of over 95% for the baseline case. Thus, a reuse of four is probably the minimum reuse which can be obtained8 without additional measures.

4 Effect of Correlation

The previous results assume that each antenna receives a nearly independent version of the transmitted signal. To assure this condition, antenna elements must typically be separated by several wavelengths [6], which results in a large total aperture length for a four element array. This is likely an unacceptable array size for cellular applications. Thus, in this section we examine smaller array sizes, which result in non-independent elements, to determine the effect of correlation on performance. To do this we reduce the array size to a total spacing of 10 wavelengths, i.e. a 10/3 wavelength spacing between each of four vertically polarized elements. To vary the correlation between elements we allow the angular

8 This assumes high speed mobility (i.e. 180Hz Doppler). Additionally, we have ignored the 3dB aperture gain of the four element array, which lowers the required received SNR. Thus, a lower reuse may be possible, particularly at pedestrian mobility rates.



spread of the received signal to vary. We also assume that the angles-of-arrival of signal components are uniformly distributed about the central angle-of-arrival, as explained in Section 2. Based on the assumed channel model, the correlation between elements is a function of both the angle spread and the central angle-of-arrival [6]. The correlation increases with decreasing angle spread and is highest near the edges of the sector.
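This dependence can be checked with a short numerical sketch: averaging the phase difference between two elements over angles-of-arrival drawn uniformly about the central angle (spacing, angles and sample count below are illustrative):

```python
import numpy as np

def element_correlation(spacing_wavelengths, theta0, spread_deg, n=100000):
    """Magnitude of the mean inter-element phase factor
    exp(j*2*pi*d*cos(aoa)/lambda), averaged over angles-of-arrival
    uniform about the central angle theta0."""
    rng = np.random.default_rng(5)
    spread = np.deg2rad(spread_deg)
    aoa = theta0 + rng.uniform(-spread, spread, n)
    phase = np.exp(1j * 2 * np.pi * spacing_wavelengths * np.cos(aoa))
    return abs(phase.mean())

# correlation rises as the angle spread shrinks (same spacing, broadside)
print(element_correlation(0.5, np.pi / 2, 5) > element_correlation(0.5, np.pi / 2, 15))
# True
```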

Figure 5 plots BER results of a four element array with the 10 wavelength total spacing for the case of a reuse pattern of 4 and two interferers. Two different Doppler spreads are used (20Hz and 180Hz) and the angle spread is varied from 5° to 15°. When the angle spread is 15°, the array suffers only approximately 1-2dB degradation at 1% BER compared to the independent case. However, the degradation for 10° and 5° of angle spread is 2dB and 7dB respectively. Additionally, the error floor seen at high Doppler frequencies (180Hz) is increased beyond the required BER of 1e-2. Thus, the smaller array spacing can severely degrade the array performance in low angle spread environments.

An alternative to closer element spacing is to use dual polarization antennas. Due to environment scattering, the horizontal and vertical components of the received signal are independent [5]. By using dual slant antennas we can achieve some diversity without physically separating the antennas.9 The dual slant antennas obtain identical mean energy levels at the expense of increased correlation between branches. It can be shown that the envelope correlation between the branches is [9]

where the parameter is the ratio of vertical to horizontal mean energy. Measurements show that a particular value of this ratio is common [9], and this fixes the resulting envelope correlation. We simulated the

9 We discuss dual slant antennas rather than vertically and horizontally (V-H) polarized antennas because, while mathematically the two configurations are identical, measurements show that dual slant antennas outperform V-H antennas.



performance of an array comprised of two pairs of dual slant antennas by approximating the dual slant antennas with two closely spaced co-polarized antennas with an angle spread of 15°. The pairs were then separated sufficiently to achieve a low degree of correlation between the pairs.

Results for the dual slant simulation are given in Figure 6 for a frequency reuse of four and two interferers, with fd = 20Hz and fd = 180Hz. The results with independent elements are also given. From the plots we see that the dual slant configuration suffers approximately 1dB of degradation at 20Hz and slightly more than 2dB at 180Hz. Thus, compared to the linear array with the same total spacing given in Figure 5, the dual slant configuration provides an excellent trade-off between array size and performance.

5 Conclusions

In this document we show that a two element array can provide performance improvements (1-2dB in SNR) when compared to the baseline case, but cannot provide frequency reuse reduction. However, the four branch adaptive array is capable of reducing the frequency reuse pattern from seven to four with improved SNR performance when compared to a baseline two branch diversity system with a reuse of seven. These results are based on simulations of an uplink array; a reduction in re-use factor will require equivalent gains in the downlink. Although not discussed in this paper, there are a number of approaches available for the downlink. A reduction to a reuse of three is also possible, although with a loss in reliability when compared to the baseline system. Reuse reduction results in increased system capacity, but performance was found to be sensitive to element correlation, with severe degradation when a four element array is confined to a current array size (10 wavelengths total spacing) and the angular spread of the received signal is low. A dual



slant configuration was shown to give an excellent trade-off between performance and array size. Results show that the dual-slant configuration can achieve a performance which is only 1-2dB worse than the case of independent elements, with an array size consistent with current antennas (10 wavelengths total spacing). This suggests that frequency reuse reduction is possible with adaptive arrays, provided similar gains can be obtained on the system downlink.

Acknowledgements

The authors are grateful to the members of the Wireless Technology Laboratory who contributedto this study, with special thanks to Dirk Uptegrove.

Appendix A: Formula for C/I

In a 3 sector system with a reuse pattern of seven, there are two interferers within the main lobe of a sector antenna, taking into account only the first tier of interferers. In the worst case, the desired mobile is at the edge of the cell, or a distance R from the base station, where R is the cell radius. Additionally, in the worst case, the interfering mobiles are a distance D from the base station of interest, where D is the reuse distance. Thus, the worst case carrier-to-interference ratio can be approximated as

where Dr is the reuse distance in terms of R, n is the path loss exponent, and r is the reuse cluster size. For a path loss exponent of n = 3.8 we obtain an estimate of approximately 22dB (or 25dB per interferer).

If we reduce the cluster size to r = 4, we can estimate the carrier-to-interference ratio as

In creating this estimate we have assumed five interferers, with two at a distance D and three at a distance 2D. Again using n = 3.8, we can estimate the carrier-to-interference ratio as approximately 17dB. If we ignore the three second tier interferers, we obtain an estimate of 17.5dB, or 20.5dB per interferer.

In a three sector system with a reuse pattern of r = 3 there are eight significant interferers. The worst case distances of these interferers are D (three interferers), sqrt(3)·D (two interferers), and 2D (three interferers). Using these worst case distances we can estimate the carrier-to-interference ratio as

For n = 3.8, the carrier-to-interference ratio is estimated as 12.7dB. Again, if we ignore the second and third tier interferers,

we can estimate the carrier-to-interference ratio as approximately 13.4dB, or 18dB per interferer. From the above examples, we see that we can approximate the carrier-to-interference ratio using

only the first tier of interferers as



where K is the number of significant first tier interferers. Alternatively, we can estimate the C/I per interferer as Dr raised to the power n. This approximation is for a worst-case average C/I. It is worst case in that the positions of the desired mobile and the interfering mobiles are worst case. However, the received signal energy from each mobile will vary about this average value due to both short term and long term fading. Short term fading is due to multipath, while long term fading is due to the shadowing effects of buildings and terrain. We shall ignore short term fading in this calculation since multipath fading is included in the channel model. To accommodate long term fading of both the desired mobile and the interferers, we include a fade margin of Mf = 10dB. That is, we allow for a 10dB reduction in C/I due to the relative shadowing of the desired user and any given interferer. The justification of this value is as follows. Measurements show that large scale fading has a log-normal distribution [7]. For a standard deviation of 8dB, the large scale fading would not exceed 10dB 90% of the time. Thus, 10dB provides a margin which will provide approximately 90% reliability for the worst case mobile positions, and we can approximate the mean carrier-to-interference ratio per interferer in dB as 10·n·log10(Dr) - Mf. This results in mean values of 15dB, 10dB and 8dB for reuse sizes of r = 7, r = 4, and r = 3 respectively.
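The 90% figure for the 10dB margin follows directly from the Gaussian (in dB) model of log-normal shadowing, and can be verified with the standard normal CDF:

```python
import math

def shadowing_reliability(margin_db, std_db):
    """P(shadowing in dB < margin) for zero-mean Gaussian shadowing in dB
    (log-normal fading) with the given dB standard deviation."""
    return 0.5 * (1 + math.erf(margin_db / (std_db * math.sqrt(2))))

# a 10 dB margin against 8 dB-sigma shadowing covers roughly 90% of cases
print(round(shadowing_reliability(10.0, 8.0), 3))   # 0.894
```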

References

[1] R.L. Cupo, G.D. Golden, K.L. Sherman, P.W. Wolniansky, C.C. Martin, N.R. Sollenberger, and J.H. Winters. A four element adaptive antenna array for IS-136 base stations. In Proceedings of the Vehicular Technology Conference, pages 1577–1581, Phoenix, AZ, May 1997.

[2] J.C. Liberti and T.S. Rappaport. Analytical results for reverse channel performance improvements in CDMA cellular communications systems employing adaptive antennas. Transactions on Vehicular Technology, 43(3), August 1994.

[3] A.F. Naguib. Capacity improvement with base-station antenna arrays in cellular CDMA. Transactions on Vehicular Technology, 43(3):691–698, August 1994.

[4] J.H. Winters. Signal acquisition and tracking with adaptive arrays in the digital mobile radio system IS-54 with flat fading. Transactions on Vehicular Technology, 42(4):377–384, November 1993.

[5] W.C. Jakes. Microwave Mobile Communications. Wiley-Interscience, 1974.

[6] J. Salz and J.H. Winters. Effect of fading correlation on adaptive arrays in digital communications. In Proceedings of the International Conference on Communications, pages 1768–1774, May 1993.

[7] T.S. Rappaport. Wireless Communications: Principles and Practice. Prentice Hall, Upper Saddle River, NJ, first edition, 1996.

[8] R.A. Monzingo and T.W. Miller. Introduction to Adaptive Arrays. John Wiley and Sons, New York, 1980.

[9] R.G. Vaughan. Polarization diversity in mobile communications. Transactions on Vehicular Technology, 39(3):177–186, August 1990.



III

Pseudo-Blind Algorithm for SDMA Application

Juha Laurila and Ernst BonekTechnische Universität Wien, Vienna, Austria

Institut für Nachrichtentechnik und Hochfrequenztechnikfax. +43-1-5870 583

[email protected]

Abstract

We propose a novel pseudo-blind estimation method. First we estimate the basis of the desired subspace, and after that we project the basis vectors obtained to the finite alphabet (FA) constellation. We perform these projections using the D(W)ILSF (Decoupled (Weighted) Iterative Least Squares with Subspace Fitting) algorithm introduced in this paper. For the initialisation of the iterations we use the training sequences included in the slot structure of the GSM system. Our simulations use a realistic channel model and show promising bit error rate performance even when incoming signals are not separable in angle. We also discuss complexity aspects and general advantages of the blind estimation methods.

Introduction

The increasing demand for mobile communications requires ever more system capacity. Due to the limited spectrum resources this means that new advanced techniques must be employed. The use of adaptive antennas has been proposed to open up a new spatial dimension to increase capacity. With adaptive antenna systems two techniques can be distinguished: Spatial Filtering for Interference Reduction (SFIR) and Spatial Division Multiple Access (SDMA). The latter technique allows multiple users to be served in the same traffic channel.

Traditionally, adaptive antenna systems are based on direction-of-arrival (DOA) estimation at the base station using algorithms like MUSIC [1], ESPRIT [2] or its more recent extensions [3]. These algorithms estimate the directions of the incoming signal wavefronts and use beamforming techniques to collect all discrete components of the desired signal. Correspondingly, the nulls of the antenna pattern are pointed towards the interfering wavefronts. Some kind of user identification must pair DOAs and corresponding users between these two steps. Finally, some classical receiver structure detects these spatially filtered signals.

The basis of these algorithms is DOA estimation utilising the known array properties. This sets strict requirements for the array calibration. On the other hand, these techniques assume that discrete DOAs with limited angular spread (AS) exist. However, in practical environments the incoming wavefronts spread due to reflections in the vicinity of the mobile, and this impairs performance. Additionally, to ensure adequate co-channel interference (CCI) improvement, the users must be separable in angle.

Another classical approach is the use of so-called temporal reference (TR) algorithms based on the well known training sequence assisted least-squares adaptation. Proper synchronisation of the incoming signals before weight adaptation is difficult with these algorithms [4].

More recently the signal processing community has focused on blind source separation and signal detection without directional information, i.e. the channel response matrix mapping the simultaneously transmitted signals to the received array data samples must be blindly identified. Blind estimation exploits several structural signal properties.

The 1) fixed symbol rate of the digital signals together with space and time domain oversampling leads to cyclostationarity. This allows the identification of the channels from second order statistics [5][6]. Thus, the received data has a factorisation into channel response and signal matrices, and both of them have a special structure. Utilising the 2) finite alphabet property, i.e. the limited number of modulation symbols, together with the first property enables solving of the FIR-MIMO1 problem. Other useful features of commonly used communication signals enabling their blind estimation are e.g. 3) constant modulus [7], 4) spectral self-coherence [8] or 5) higher order statistical properties, e.g. [9].

Blind algorithms have several advantages compared to traditional techniques that motivate their use in adaptive antenna applications: 1) Neither discrete DOAs with limited angular spread nor angular separability are required. We separate the signals based only on their different channel responses, without estimating any directional information. 2) The blind methods are robust against receiver imperfections and they do not set strict requirements for the calibration of the array. With blind methods we only collect several samples of the signal in the space and/or time domain, without assuming anything about the array geometry. This means that the calibration requirements are less stringent compared to the traditional methods. 3) The whole detection process is performed simultaneously using the same blind estimation principle. As an output of the blind estimator we directly get the detected symbol vectors belonging to the different users. The user identification process before signal detection, one of the bottlenecks of the DOA based methods, can be omitted. 4) The incoming signals can be unsynchronised, which is a benefit compared to the traditional TR-algorithms utilising the training sequences for the weight adaptation. 5) We can decrease the number of antenna elements and subsequent receiver trains. With the blind algorithms we are not beamforming in the traditional sense but only collecting samples in the space and/or time domains, allowing the separation of the signals. Thus, we do not need precisely steered beam patterns requiring a larger number of antenna elements. This naturally leads to reduced hardware costs. 6) Totally blind methods also offer an additional capacity increase, assuming that their use is included in forthcoming standards: we need less of the overhead information that is required for channel estimation with conventional techniques.

We have previously analysed the performance of a totally blind estimation method [10]. In the present paper we develop a novel pseudo-blind algorithm based on the same estimation principle, but now using the training sequences to initialise the estimation. The reason to focus on pseudo-blind methods instead of totally blind estimation is twofold. First, in the SDMA application, when more than one user is served simultaneously in the same traffic channel, some kind of known information must be included in the transmitted signals. This overhead information is needed for user identification purposes, because it is necessary to pair the estimated symbol sequences with the users. Thus, it is reasonable to utilise this information already during the estimation process, not only afterwards when the symbol vectors are available. Second, the frame structures of all current TDMA-based mobile systems contain training sequences, because traditional equalisers use this information for channel estimation purposes.

Contribution of This Paper

In this paper we present a novel pseudo-blind estimation principle, which performs joint space-time equalisation, separation and detection of multiple co-channel digital signals. The estimation process consists of two parts. First we estimate the basis of the desired subspace, and after that we project these basis vectors to the finite alphabet constellation. During the estimation process we utilise the fixed symbol rate (FSR) of the incoming signals, allowing linear combinations, and the finite alphabet (FA) structure of the symbols, enabling signal separation. We also introduce a novel approach to make these finite alphabet projections using the Decoupled (Weighted) Iterative Least Squares with Subspace Fitting, D(W)ILSF, algorithm. We increase the robustness of the estimation and decrease the computational complexity by initialising the FA projections using training sequences. This estimation method allows signal separation in the SDMA application, and to enable this we have to oversample the received signals in space and/or time domains. Fig. 1 demonstrates the principle of the estimation method.

1 FIR-MIMO = Finite Impulse Response, Multiple Input, Multiple Output

We refer to the GSM radio interface [11] and use a linear approximation of the GMSK modulation [12]. The same principles are, however, applicable to all standards fulfilling the signal properties listed above. We mainly discuss signal processing matters, but we also try to consider the problem from the application point of view. This means that the restrictions caused by the physical operation environment and by the GSM standard are kept in mind throughout the whole paper.

The paper is organised in the following way: In Section 2 we discuss the joint space-time equalisation, estimating the basis of the desired subspace. Section 3 briefly describes the totally blind estimation method, and Section 4 presents the new pseudo-blind estimation based on the D(W)ILSF algorithm. The channel model is described in Section 5, and simulation results are shown in Section 6. We discuss complexity aspects in Section 7, and the conclusions are drawn in Section 8.

We use the following notation throughout the paper. For a matrix A, A* and A# are its Hermitian transpose and Moore-Penrose pseudo-inverse, respectively. ||A||_F is the Frobenius norm of the matrix. The notation row(A) denotes the row span of A.

Joint Space-Time Equalisation

Data Model

The joint space-time equalisation based on subspace estimation is the common part of both estimation methods discussed in this paper. The details can be found in [13] and references therein; however, we improve the readability of the present paper by repeating some basic principles here. The data model described in Eq. 1 is the basis of the whole space-time equalisation process. An array of M antennas, with outputs xi(t), receives d digital signals sj(t) transmitted over the independent channels hij(t). The impulse responses include the different propagation delays of the unsynchronised signals. The data model can be written

xi(t) = sum_{j=1..d} hij(t) * sj(t),  i = 1, ..., M,    (1)

where * denotes convolution.

The independent channels hij(t) are assumed to be FIR filters of a maximal length L. Each antenna element is sampled with an oversampling rate P. The data matrix X is constructed by collecting data over N symbol periods. The n-th column of this matrix contains the M·P samples received during the transmission of the n-th symbol. Our blind estimation problem is to find the factorisation of the received data matrix

X = H S.    (2)


The matrix H represents the unknown space-time channel, the matrix S contains the transmitted symbols, and L is the maximum length of the channel impulse responses. We have to perform the factorisation so that the signal matrix has a block-Toeplitz shape and the symbols in it satisfy the finite alphabet (FA) property. The estimation principle is based on the equality of row(X) and row(S), and thus the channel response matrix H must be full rank. Because this may cause undue requirements on the number of antenna elements, M, or the oversampling rate, P, we create an extended data matrix to ensure this full-rank property. This extended data matrix is constructed by left-shifting and stacking the data blocks m times,

where the stacking parameter m can be regarded as the length of the time-domain equaliser. The structure of the extended data matrix also allows effective estimation of the number of incoming signals, as discussed later in this chapter. The extended data matrix has a factorisation of the form of Eq. 2, in which the corresponding block matrices are determined during the subsequent steps of the algorithm.

The elements of the signal matrix are complex GMSK symbols. Using the so-called derotation technique [14] we can convert them back to the original real data symbols. Furthermore, we can modify the data matrix to a real form, which enables us to perform the subsequent calculations using only real-valued operations. Multiplication with the derotation matrix, D, and conversion to the real form is shown in Eq. 4.

Later steps of the adaptation process are carried out for this matrix.
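The stacking and derotation steps described above can be sketched in a few lines of numpy. Function names, shapes and the BPSK-like toy input are illustrative assumptions, not the paper's notation:

```python
import numpy as np

def extend(X, m):
    """Left-shift and stack the data matrix m times (extended data matrix).

    X : (M*P, N) array of symbol-spaced snapshots; returns (m*M*P, N-m+1)."""
    cols = X.shape[1] - m + 1
    return np.vstack([X[:, k:k + cols] for k in range(m)])

def derotate(X):
    """Undo the GMSK pi/2-per-symbol rotation by multiplying the n-th
    symbol-spaced column with j**(-n); the result is (close to) real-valued."""
    return X * (1j ** (-np.arange(X.shape[1])))[None, :]

# toy check: a j**n-rotated real BPSK stream becomes real again
s = np.array([1.0, -1.0, 1.0, 1.0, -1.0])
x = (s * (1j ** np.arange(5)))[None, :]
assert np.allclose(derotate(x).imag, 0) and extend(x, 2).shape == (2, 4)
```

The stacking helper is reused conceptually in the rank-estimation step: each additional shift adds one block row to the extended matrix.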

Subspace Intersections

Two alternatives exist for the factorisation: to find the block-Toeplitz matrix with the known row span or the block-Hankel matrix with the known column span. In this paper we prefer the direct estimation of the signal matrix, forcing it to the Toeplitz form using row span intersections. The first step is to estimate the orthonormal basis of the row span of the extended data matrix, assuming that the full-rank property of the channel matrix is fulfilled. To achieve this we perform a singular value decomposition (SVD) of the data matrix [15]. The first rows of V form the orthonormal basis of the row span. In well-conditioned channels the rank of the matrix fulfils rank = d(L+m-1). The rank estimation procedure is discussed in the following chapter.

The basis of the desired subspace is defined by intersections of the shifted versions of this estimated basis. Computationally the most attractive way to calculate it is to use the SVD of the stacked basis matrix. The estimated basis for the intersection, Y, is obtained by taking the right singular vectors corresponding to the largest singular values of this matrix. In the case of full intersections, n = L+m-1, we can thus reduce the dimensionality of the problem to be equal to the number of incoming signals, d. However, as discussed in detail in [13], with ill-defined noisy channels a reduced number of intersections gives better performance.
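The row-span intersection via the SVD of the stacked basis matrix can be illustrated with a small numpy sketch; the toy subspaces and sizes are assumptions for demonstration only:

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 2, 50

# six orthonormal 50-dim row vectors to build test subspaces from
V = np.linalg.svd(rng.standard_normal((6, N)), full_matrices=False)[2]

def intersect(bases, d):
    """Estimate the common row span of several subspace bases.

    bases : matrices with orthonormal rows. The intersection basis consists
    of the d right singular vectors of the stacked basis matrix that belong
    to the largest singular values."""
    _, _, Vt = np.linalg.svd(np.vstack(bases), full_matrices=False)
    return Vt[:d]

# two 4-dim subspaces sharing a 2-dim intersection
common = V[:2]
B1 = np.vstack([common, V[2:4]])
B2 = np.vstack([common, V[4:6]])
Y = intersect([B1, B2], d)

# projecting the common part onto span(Y) must lose nothing
assert np.allclose(common @ Y.conj().T @ Y, common, atol=1e-8)
```

Directions shared by both bases accumulate singular value sqrt(2) in the stacked matrix, while directions unique to one basis keep singular value 1, which is why the largest singular vectors span the intersection.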

2 Later in this paper we use the same notation even though the calculations are performed for real-valued matrices.


Estimation of Signal Number and Channel Length

The rank of the data matrix corresponds to the number of singular values above the noise floor. The number of incoming signals, d, can be estimated by increasing the stacking parameter m and observing the increase in rank, which equals d for each increment of m. In practice this estimation approach is robust against noise, because we do not have to observe the actual channel lengths. In the case of equal, well-defined channel lengths of all signals, the response length can also be estimated as Le = ranke/de - m + 1, where the subindex e denotes estimated values. Fig. 2 demonstrates the idea of the estimation mechanism and shows that the shift between the singular value strings is observable even with a relatively high noise level and a fading multipath channel, as described in Section 5.
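The rank-based estimation of d can be checked with a small numpy experiment; all sizes here are illustrative assumptions:

```python
import numpy as np

# Toy check of the rank relation rank = d*(L+m-1): increasing the stacking
# parameter m by one increases the rank by d, so the number of signals can
# be read off without knowing the channel length L.
rng = np.random.default_rng(1)
d, L, MP, N = 2, 3, 8, 200

S = rng.choice([-1.0, 1.0], size=(d, N + L - 1))   # finite-alphabet symbols
H = rng.standard_normal((MP, d * L))               # random space-time channel

def shift_stack(X, m):
    # left-shift and stack m copies (block-Toeplitz / extended-matrix helper)
    cols = X.shape[1] - m + 1
    return np.vstack([X[:, k:k + cols] for k in range(m)])

X = H @ shift_stack(S, L)                          # noiseless data matrix

def extended_rank(X, m, tol=1e-8):
    sv = np.linalg.svd(shift_stack(X, m), compute_uv=False)
    return int(np.sum(sv > tol * sv[0]))

assert extended_rank(X, 2) == d * (L + 2 - 1)
assert extended_rank(X, 3) - extended_rank(X, 2) == d   # rank step reveals d
```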

Totally Blind Estimation

After the joint space-time equalisation we have obtained the basis of the subspace which contains our desired signal vectors. The next step is to determine which linear combination of these basis vectors gives a finite alphabet structure. In practice this separation of the signals is carried out using the ILSP (Iterative Least Squares with Projections) algorithm [16]. The principle of this estimation method is to make alternating projections onto the finite alphabet symbol constellation and onto the obtained row span, as shown in Table 1.

Table 1: ILSP-Algorithm [16]

where A is the matrix projecting the FA symbol vectors onto the desired subspace and the projection operator denotes an element-wise projection onto the alphabet. The simultaneous iterations of all symbol vectors are repeated until convergence is achieved.
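The alternating-projection idea of ILSP can be sketched as follows. This is a noiseless toy with a BPSK alphabet {-1, +1}; the sizes and the slightly perturbed initialisation are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
d, N = 2, 60

S_true = rng.choice([-1.0, 1.0], size=(d, N))   # finite-alphabet symbols
A_true = rng.standard_normal((d, d))
Y = A_true @ S_true                              # noiseless data, row(Y) = row(S_true)

def ilsp(Y, S_init, n_iter=10):
    """Iterative Least Squares with Projections [16] (sketch).

    Alternates a least-squares fit of the mixing matrix A with an
    element-wise projection of the symbol matrix onto the alphabet."""
    S = S_init.copy()
    for _ in range(n_iter):
        A = Y @ np.linalg.pinv(S)                # LS estimate of the mixing matrix
        S = np.sign(np.linalg.pinv(A) @ Y)       # project soft symbols onto the FA
    return A, S

# start from an initialisation with a few symbol errors; ILSP corrects them
S_init = S_true.copy()
for i, j in [(0, 5), (1, 17), (0, 33)]:
    S_init[i, j] *= -1.0
A_hat, S_hat = ilsp(Y, S_init)
assert np.array_equal(S_hat, S_true)
```

The example also illustrates the point made below: with a poor (e.g. random) initialisation the iterations can converge to a wrong fixed point, which is why an additional initialisation step is needed in practice.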

The performance of the ILSP algorithm is, however, limited by the accuracy of the initialisation. Especially when nonlinear modulation schemes are used, the initialisation must be improved using an additional algorithm step before the projections. In practice this is done by exploiting the constant modulus property of the signals using the analytical constant modulus algorithm (ACMA) [7]. This algorithm gives an exact non-iterative solution to the constant modulus problem in noiseless conditions and is also robust against additive noise. Fig. 3 shows the parts of the entire blind estimation process.

We have previously analysed the performance of this blind estimation chain; the simulations, showing promising BER performance with a realistic channel model, are described in [10].

Pseudo-Blind Estimation

Using training sequences to initialise the FA projections allows them to be performed directly after the subspace intersections. Thus we can omit the ACMA part even though nonlinear modulation schemes are used. In addition to the reduced computational burden, the robustness of the estimation is higher.

D(W)ILSF Algorithm

We also propose a new projection algorithm, called the D(W)ILSF (Decoupled (Weighted) Iterative Least Squares with Subspace Fitting) algorithm, combining the ideas of the DWILSP (Decoupled Weighted Iterative Least Squares with Projections) [17] and the ILSF (Iterative Least Squares with Subspace Fitting) [13] algorithms. Instead of simultaneous iterations of all symbol vectors, we make the projections between the FA constellation and the desired subspace separately for each user. This approach has several desirable properties. Generally, when only one signal is treated as the desired contribution, all the other present signals are included in the interference term. This means that no assumptions about the color of the noise or the synchronisation of the incoming signals have to be made [17]. The decoupled principle also allows estimating only the signals of interest, leaving the rest in the interference term.

We summarise the D(W)ILSF algorithm in Table 2. The goal of the alternating least squares projections is to minimise the MMSE (minimum mean square error) criterion. The approach is similar to the DWILSP, but the pseudo-inverse of the k-th iterate is avoided and replaced by the pseudo-inverse of Y, which remains constant over all iterations. Because in our case the input matrix Y is an orthonormal basis, this pseudo-inverse is equal to the complex conjugate transpose, Y*. The training sequences are used for the initialisation of the fitting vectors, and thus we can ensure convergence to the global minimum. The iterations are continued until convergence is reached, after which we begin the estimation of the next desired signal. Note that the algorithm provides two signal estimates: the least-squares solution before the FA projection can be considered a soft estimate, whereas the outcome of the projection forms its hard counterpart.

Table 2: D(W)ILSF-algorithm


Table 2 above presents the D(W)ILSF algorithm for our special case, where the input matrix Y is an orthonormal basis. In the general case, if decoupled projections are used without a preceding subspace estimation, a prewhitening step can be included in the algorithm, following the idea presented in [17]. The purpose of the prewhitening is to convert the colored noise to white by multiplying the input data by the inverse of the interference covariance matrix. Reference [17] shows that the array covariance matrix asymptotically approaches this prewhitening matrix. In the case of an orthonormal input matrix we can naturally omit this prewhitening step, because its only remaining effect would be a scaling of the input data.
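A minimal noiseless sketch of the decoupled projection loop is given below. It assumes that the training symbols occupy the first positions of the burst and that the fitting vector is initialised by a least-squares fit to the training part of Y; names and sizes are illustrative, not the paper's Table 2:

```python
import numpy as np

rng = np.random.default_rng(3)
d, N, Ntr = 2, 80, 16                     # users, snapshots, training length (assumed)

S_true = rng.choice([-1.0, 1.0], size=(d, N))
A = rng.standard_normal((d + 1, d))       # arbitrary full-rank mixing
# Orthonormal basis Y of the desired row span, as delivered by the SVD step;
# because Y has orthonormal rows, pinv(Y) equals Y^H.
Y = np.linalg.svd(A @ S_true, full_matrices=False)[2][:d]

def dwilsf_user(Y, s_train, n_iter=5):
    """Decoupled iterative LS with subspace fitting for ONE user (sketch)."""
    Ntr = len(s_train)
    t = s_train @ np.linalg.pinv(Y[:, :Ntr])   # initialise from training part
    for _ in range(n_iter):
        s_soft = t @ Y                         # soft symbol estimate
        s_hard = np.sign(s_soft)               # hard FA projection
        s_hard[:Ntr] = s_train                 # keep the known training symbols
        t = s_hard @ Y.conj().T                # subspace fit; pinv(Y) = Y^H
    return s_hard

for k in range(d):                             # each user is estimated separately
    s_k = dwilsf_user(Y, S_true[k, :Ntr])
    assert np.array_equal(s_k, S_true[k])
```

Note how the constant matrix Y^H replaces the per-iteration pseudo-inverse: the only per-iteration work is two matrix-vector products, which is the source of the low complexity discussed in Section 7.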

Channel Model

We carried out the simulations using a directional channel model based on scattering areas [18]. The users are surrounded by local scattering areas, in which the scattering points are randomly placed. Environments causing larger delay and angular spreads are modeled by adding more scattering areas at random positions inside the cell area. Physically, these far scattering areas correspond to signal echoes, e.g. from high-rise buildings in an urban environment. The models with and without contributions from far scattering areas are called high- and low-rank models, respectively. Propagation modeling in urban environments requires the use of high-rank models, whereas typically only local scatterers are present in rural areas. The basic principle of our channel model is shown in Fig. 4.

In the model we take the distances between the scattering points and the center of the scattering disc from a one-sided Gaussian distribution. This distribution results in an azimuthal power spectrum (APS) of Laplacian shape [19], which was also observed in recent channel measurements [20][21]. Appropriate parameter selection allows the model to simulate different propagation environments. In this paper we have used a parameter set corresponding to an urban macrocell environment with base station antennas above the rooftop level (Table 3). In practice this corresponds to the use of adaptive antennas in urban umbrella cells above a microcell structure. This parameter selection gives an averaged angular spread (AS) of about 3 degrees for each nominal direction-of-arrival (DOA).

Our main interest in this paper is the achieved BER performance of the algorithms described above. Thus we average the results over different mobile and scattering area positions, which we assume to be uniformly distributed over the cell area. To guarantee the statistical reliability of our results, we randomly selected new mobile, scattering area and scattering point positions for each timeslot.
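The scatterer placement described above can be sketched as follows. All numeric values (radius spread, base-station distance, number of scatterers) are illustrative assumptions, not the Table 3 parameters:

```python
import numpy as np

# Scattering-disc placement: scatterer distances from the disc centre drawn
# from a one-sided Gaussian, which yields an approximately Laplacian
# azimuthal power spectrum at the base station [19].
rng = np.random.default_rng(4)
n_scat, sigma_r = 200, 30.0                        # assumed values

r = np.abs(rng.normal(0.0, sigma_r, n_scat))       # one-sided Gaussian radii
phi = rng.uniform(0.0, 2 * np.pi, n_scat)          # uniform angles in the disc
scat_xy = np.column_stack([r * np.cos(phi), r * np.sin(phi)])

# azimuth of each scatterer as seen from a base station 500 m away (assumed)
bs = np.array([-500.0, 0.0])
az = np.arctan2(scat_xy[:, 1] - bs[1], scat_xy[:, 0] - bs[0])
angular_spread = np.degrees(az.std())
assert angular_spread < 10.0   # a few degrees, as in the macrocell setting
```

With these toy numbers the simulated angular spread comes out at a few degrees, of the same order as the roughly 3-degree AS quoted for the urban macrocell parameter set.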


Simulations

Parameter Selection:

In the simulations we studied the performance of the described algorithms using the GSM radio interface. The only modification compared to the transmission structure of the GSM system was related to the training sequences. First, to allow separation of several SDMA users, we must use a different training sequence for each of them. On the other hand, the midambles of the GSM system are meant only for channel estimation at the receiver, which means that their autocorrelation properties have been the main interest. When midambles are used for the separation of different SDMA users, the crosscorrelation properties also gain importance. Our simulations showed that the proper selection of the midambles has a significant effect on the final detection performance. We used the improved training sequences described in [22] for the initialisation of the projection iterations and the identification of the detected symbol sequences.

The results of our simulations concern bit error rates (BER) as a function of the input SNR. We calculated the BER over the whole slot structure, including both information and training bits. In all cases we assumed two SDMA users and averaged the BER over both of them. The signal-to-noise ratio values were defined by the received mean power values averaged over a large number of random channel situations. Table 3 shows the parameter selection used in the simulations.

Some issues related to the parameter selection are worth further discussion. In principle the number of independent samples, i.e. the M·P product, defines the identifiability restrictions of the algorithm. Thus, at first glance, it may seem reasonable to increase the oversampling rate and thereby decrease the number of antenna elements and receiver chains. However, with bandlimited signals the role of oversampling is limited, because sampling faster than the Nyquist rate does not provide further information. Thus, for wireless systems an oversampling rate of P=2 is a reasonable selection. These resolution limits are discussed in detail in [23].

The transmitter filtering of the GMSK modulation with the normalised bandwidth BT=0.3 causes inter-symbol interference (ISI) of three symbols. The additional ISI caused by the channel is typically not more than two symbol periods in urban environments. Thus we chose an equaliser length (stacking parameter) of m=5 in our simulations. As discussed in detail in [13], it is not reasonable to make m larger than necessary, because this makes the subspace intersection procedure more difficult.

The number of intersected subspaces, n = L+m-2, is given in Table 3. Compared to the full-intersection case we thus performed one intersection less. In this way the dimensionality of the obtained basis was above the number of incoming signals. However, with practical channels it is reasonable to include only the well-defined intersections and leave the remaining equalisation to the subsequent projection algorithm [13][14].


BER Performance with D(W)ILSF Projections:

In the simulations we studied the performance of the above-described algorithm, consisting of the space-time equalisation and the novel, computationally reduced D(W)ILSF approach performing the FA projections. Fig. 5 demonstrates the BER performance when the number of antenna elements in the uniform linear array (ULA) is varied. Fig. 6 shows the effect of the number of collected snapshots, with the number of antenna elements fixed (M=6).

The performance shown in the figures above is very promising. The simulations also show that four antenna elements with two-times oversampling already give sufficient performance. However, increasing the number of independent samples by using an array with more elements improved the performance further. On the other hand, our results demonstrate that a relatively small number of collected snapshots is adequate for the signal estimation. Note that the values used here are clearly above the fundamental identifiability limits.

Angular separability:

Traditional DOA-estimation algorithms are based on the Vandermonde structure [15] of the multichannel matrix, which is used for the estimation of the propagation angles of the incoming wavefronts. This means that the detection of the different signals is possible only if they are angularly separable. With our algorithm the approach is different: we estimate the unknown channel response matrix which maps the simultaneously transmitted signals to the received array samples. Thus even angularly overlapping signals arriving via the same reflectors are separable, provided that their channel response matrices are different.

To demonstrate this property we created the following modified channel scenario. Instead of random positioning of the mobiles, we fixed the MS-BS distance to the same value for both users and the DOA difference (seen from the BS) to 2 degrees. Otherwise the channel parameters of Table 3 were used. We used the same scattering points for both user signals, and only local scatterers were present in the simulation. Thus, in this worst-case scenario (Fig. 7), all signal components transmitted by the two different users propagated via the same scattering points before arriving at the array at the base station. Clearly, no DOA estimation algorithm can cope with this scenario, but the different distances between the scattering points and the mobiles ensured adequate divergence of the channel response matrices for our algorithm. The BER performance of this worst-case scenario is shown as the broken line in Fig. 8. The solid line shows the situation where we added independent far scattering areas for both users, while the local contributions still propagated via the common scattering points. The figure shows that the addition of these independent multipath components increased the separability of the channel response matrices, and the BER curve approaches the corresponding simulation with the normal channel model (Fig. 5).

Computational Complexity

Joint Space-Time Equalisation

Most of the complexity of our estimation method is related to the space-time equalisation, i.e. most computations are used for the singular value decompositions of relatively large matrices; the operation counts are given in [13]. The complexity can be decreased by replacing the SVD with computationally more efficient subspace tracking methods. In practice this means that we update the subspace estimate iteratively using the column vectors of the data matrix instead of considering it as a whole. The use of these methods especially reduces the computational burden when the propagation channel varies relatively slowly, because previous subspace estimates can then be used for the initialisation of the next slots and the number of required tracking iterations can be reduced. This is the case in typical propagation environments when the system does not utilise frequency hopping. We have already tested the tracking algorithm presented in [24] and its more recent extension [25], but because of space limitations the results are not presented in this paper. Reference [26] also presents an interesting SVD-updating algorithm based on orthonormal µ-rotations, allowing an efficient systolic array implementation.

To obtain the desired intersection of the subspaces we also have to perform the SVD of the stacked basis matrix; the complexity of this step in the full-intersection case is given in [13]. We have two possibilities to reduce this number of operations. On one hand, instead of a joint intersection of all subspaces, we can intersect fewer subspaces separately and add more intersection stages in cascade; ultimately this means that we perform L+m-1 pairwise intersections. On the other hand, we can consider a similar updating implementation as in the first stage for this matrix as well. The resulting reduced operation counts are given in [13].

To obtain an optimal combination of complexity and performance, the parameter selection of the algorithm must be considered. Fig. 6 demonstrates the effect of the number of collected snapshots on the performance of the algorithm and shows that relatively short data collection periods are adequate for sufficient performance. Consequently, we do not have to perform the full estimation process over the whole GSM slot; instead, we can collect a smaller number of bits in the neighborhood of the midamble to be used with the algorithm steps described above. We can then create the channel response matrix utilising the detected symbol vectors and use this information for the estimation of the remaining parts of the slot. This can be written as

He = XS SS#,   SF = He# XF,

where X, He and S are the data, estimated channel response and signal matrices corresponding to Eq. 2. Subindex S corresponds to the short data collection period and subindex F to the matrices in which the remaining part of the slot is included. The derotation was performed on the data matrices XS and XF shown in this equation.

D(W)ILSF Algorithm

Considering the steps of the D(W)ILSF algorithm in Table 2, the computational efficiency of the algorithm is easy to see. In the first step, when the soft signal estimate is calculated, it is sufficient to compute the product of the fitting vector ti with the input matrix Y, and a multiplication of the same kind updates the estimate of the fitting vector. This gives a total computational cost proportional to d·i such multiplications, where d is the number of estimated signals and i the number of iterations required for one signal. With an appropriate training sequence initialisation, this value was typically 2-3 in our simulations.

Conclusions

In this paper we have proposed a novel pseudo-blind estimation principle. First we performed joint space-time equalisation for all incoming signals and estimated the basis of the desired subspace. After that we separated the signals by projecting onto the finite alphabet (FA) constellation. We also proposed a novel method for these projections, the D(W)ILSF (Decoupled (Weighted) Iterative Least Squares with Subspace Fitting) algorithm, which is based on iterative projections between the desired subspace and the finite alphabet constellation. For the initialisation of these iterations we used the training sequences included in the slot structure of the GSM system. We evaluated the performance of the algorithm by simulations with a realistic directional channel model and obtained promising bit error rate performance even in a scenario where the signals were not separable in angle. Complexity aspects and the general advantages of blind estimation methods were also discussed.

Acknowledgements:

This work was supported by the Austrian Fonds zur Förderung der wissenschaftlichen Forschung under Project P12147-MAT.

References

[1] R.O. Schmidt, Multiple Emitter Location and Signal Parameter Estimation, IEEE Trans. on Antennas and Propagation, vol. 34, pp. 276-280, March 1986

[2] R. Roy, A. Paulraj, T. Kailath, ESPRIT - A Subspace Rotation Approach to Estimation of Parameters of Cisoids in Noise, IEEE Trans. on Acoustics, Speech and Signal Processing, vol. 32, pp. 1340-1342, May 1986

[3] M. Haardt, J.A. Nossek, Unitary ESPRIT: How to Obtain an Increased Estimation Accuracy with a Reduced Computational Burden, IEEE Trans. on Signal Processing, vol. 43, pp. 1232-1242, May 1995

[4] J. Fuhl, D.J. Cichon, E. Bonek, Optimum Antenna Topologies and Adaptation Strategies for SDMA, IEEE Global Communications Conference, London, UK, November 18-22, pp. 575-580

[5] L. Tong, G. Xu, T. Kailath, Blind Identification and Equalization Based on Second-Order Statistics: A Time Domain Approach, IEEE Trans. on Information Theory, vol. 40, pp. 340-349, March 1994

[6] E. Moulines et al., Subspace Methods for the Blind Identification of Multichannel FIR Filters, Proc. IEEE ICASSP, 1994, pp. IV/573-576

[7] A.J. van der Veen, A. Paulraj, An Analytical Constant Modulus Algorithm, IEEE Trans. on Signal Processing, vol. 44, pp. 1136-1155, May 1996

[8] B.G. Agee et al., Spectral Self-Coherence Restoral: A New Approach to Blind Adaptive Signal Extraction Using Antenna Arrays, Proc. IEEE, vol. 78, pp. 753-767, April 1990

[9] J.F. Cardoso, Super-Symmetric Decomposition of the Fourth-Order Cumulant Tensor. Blind Identification of More Sources than Sensors, Proc. IEEE ICASSP, Toronto, Canada, 1991, vol. 5, pp. 3109-3112

[10] J. Laurila, E. Bonek, SDMA Using Blind Adaptation, ACTS Mobile Communication Summit, Aalborg, Denmark, October 7-10, 1997, pp. 314-319

[11] M. Mouly, M.-B. Pautet, The GSM System for Mobile Communications, 1992, 701 p.

[12] P.A. Laurent, Exact and Approximate Construction of Digital Phase Modulations by Superposition of Amplitude Modulated Pulses (AMP), IEEE Trans. on Communications, vol. 34, pp. 150-160, Feb. 1986

[13] A.J. van der Veen, S. Talwar, A. Paulraj, A Subspace Approach to Blind Space-Time Signal Processing for Wireless Communication Systems, IEEE Trans. on Signal Processing, vol. 45, pp. 173-190, Jan. 1997

[14] A.J. van der Veen, A. Paulraj, Singular Value Analysis of Time-Space Equalization in the GSM Mobile System, Proc. IEEE ICASSP '96, Atlanta, GA, May 1996, pp. 1073-1076

[15] G. Golub, C.F. Van Loan, Matrix Computations, 2nd ed., 1989, 642 p.

[16] S. Talwar, M. Viberg, A. Paulraj, Blind Separation of Synchronous Co-Channel Digital Signals Using an Antenna Array - Part I: Algorithms, IEEE Trans. on Signal Processing, vol. 44, pp. 1184-1197, May 1996

[17] P. Pelin, Spatial Diversity Receivers for Base Station Antenna Arrays, Licentiate Thesis, Chalmers University of Technology, Göteborg, Sweden, May 1997, 72 p.

[18] J. Fuhl, A.F. Molisch, E. Bonek, A Unified Channel Model for Mobile Radio Systems with Smart Antennas, IEE Proc. - Radar, Sonar and Navigation: Special Issue on Antenna Array Processing Techniques, vol. 145, Feb. 1998, pp. 32-41

[19] J. Laurila, A.F. Molisch, Influence of the Scatterer Distribution on Power Delay Profiles and Azimuthal Power Spectra of Mobile Radio Channels, to be published in International Symposium on Spread Spectrum Techniques and Applications - ISSSTA '98, Sun City, South Africa, Sep. 2-4, 1998

[20] K.I. Pedersen et al., Analysis of Time, Azimuth and Doppler Dispersion in Outdoor Radio Channels, ACTS Mobile Communication Summit '97, Aalborg, Denmark, October 7-10, 1997, pp. 308-313

[21] U. Martin, Spatio-Temporal Radio Channel Characteristics in Urban Macro-Cells, IEE Proc. - Radar, Sonar and Navigation: Special Issue on Antenna Array Processing Techniques, vol. 145, Feb. 1998, pp. 42-50

[22] J. Fuhl, Smart Antennas for Second and Third Generation Mobile Communications Systems, Doctoral Dissertation, INTHFT, TU Wien, Vienna, Austria, 1997, 305 p.

[23] A.J. van der Veen, Resolution Limits of Blind Multi-User Multi-Channel Identification Schemes - The Bandlimited Case, Proc. IEEE ICASSP '96, Atlanta, GA, May 1996, pp. 2722-2725

[24] B. Yang, Projection Approximation Subspace Tracking, IEEE Trans. on Signal Processing, vol. 43, pp. 95-107, Jan. 1995

[25] B. Yang, An Extension of the PASTd Algorithm to Both Rank and Subspace Tracking, IEEE Signal Processing Letters, vol. 2, pp. 179-182, Sep. 1995

[26] J. Götze, P. Rieder, SVD-Updating Using Orthonormal µ-Rotations, Journal of VLSI Signal Processing, pp. 7-17, 1996


4. Integrated Broadband Mobile System (IBMS) Featuring Smart Antennas

M. Bronzel, J. Jelitto, M. Stege, N. Lohse, D. Hunold, and G. Fettweis

Dresden University of Technology, Germany

bronzel@ifn.et.tu-dresden.de

Abstract: IBMS is a concept for future mobile communication systems to provide a large range of data rates with different degrees of mobility. The integration of heterogeneous services and communication systems requires a common Network Access and Connectivity CHannel (NACCH) for basic signaling to provide permanent network access. Smart antennas are utilized to adaptively enable a trade-off between mobility and data rate.

I. Introduction

Currently, wireless communication systems are designed for specific data rates and mobility support. While high mobility is provided by personal communication systems at low data rates with limited potential services, higher data rates up to 155 Mb/s will be supported by wireless ATM and/or IP systems for users with limited mobility. However, future demands for mobile communication systems will be dominated by the heterogeneity of broadband and narrowband services, which have to be supplied in in-house and outdoor environments simultaneously, with varying degrees of mobility. The objective of the Integrated Broadband Mobile System (IBMS) [1,2] is to provide a unified way of supporting a variety of communication classes, ranging from high mobility at low data rates to portability at high data rates, as an integral feature of a wireless communication system. This requires the development of a uniform network structure, which also enables the integration of different communication systems working at different frequencies. Figure 1 relates IBMS to other European research projects.


II. System Concept

The key features of the IBMS concept are discussed in more detail in the following sections.

Hierarchical structure

An integrated communication system for heterogeneous services and terminals with different air interfaces and variable bit rates requires a new network infrastructure. A possible approach could be a hierarchical structure of Network Service Classes (NSC), which has been introduced [1] to support different Quality of Service (QoS) parameters with respect to mobility. Each NSC comprises a functional signaling set (depicted as circles in Figure 2) which is used for the configuration of the particular NSC and the selection of the appropriate Traffic CHannel (TCH).

Starting from NSC 0, which serves as the entry point and therefore contains only the basic functionalities

of network access and location management, a wide range of possible datarates with different degrees of

mobility is provided by additional network functions in higher-layered NSCs (A-C). A particular NSC

passes all functionalities to higher ordered NSCs. In addition, other network structures (e.g. in-house

overlay networks) can attach to NSC-C as another sub-hierarchical network of NSCs, which also enables

the integration of in-house and outdoor environments. The hierarchical network structure in IBMS also supports Rate Fallback and Rate Upgrade. If a required QoS cannot be maintained, the system falls back to lower NSCs without losing the connection and automatically switches back to the original NSC if the

necessary channel characteristics are met. Since higher NSCs exploit the frequency/space capacity of the

network more efficiently (see Network Capacity Considerations), connections of lower NSCs shall be

carried along in higher NSCs when channel conditions are appropriate. The maximum bitrates of the

different NSCs are defined as follows:
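The Rate Fallback and NSC Upgrade behaviour described above can be pictured as a small state machine. The sketch below is illustrative only: the NSC ordering follows the text, but the QoS test and the upgrade policy are our assumptions, not part of the IBMS specification.

```python
# Illustrative sketch of IBMS Rate Fallback / NSC Upgrade (hypothetical policy).
NSC_ORDER = ["NSC-0", "NSC-A", "NSC-B", "NSC-C"]  # entry point -> highest rate

class Connection:
    def __init__(self):
        self.level = 0   # start at NSC-0 (network access and location mgmt only)
        self.target = 0  # NSC the connection would like to operate in

    def request(self, nsc: int):
        """Ask for a higher NSC; granted step by step as conditions allow."""
        self.target = nsc

    def update(self, qos_ok: bool) -> str:
        """Called periodically with the current channel-quality verdict."""
        if not qos_ok and self.level > 0:
            self.level -= 1    # Rate Fallback: drop one NSC, keep the connection
        elif qos_ok and self.level < self.target:
            self.level += 1    # NSC Upgrade: climb back towards the original NSC
        return NSC_ORDER[self.level]

c = Connection()
c.request(3)           # wants NSC-C
print(c.update(True))  # channel good -> climbs to NSC-A
print(c.update(False)) # QoS violated -> falls back without dropping the call
```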


Furthermore, IBMS will allow switching to different communication systems (Figure 2), thus enabling

the integration of proprietary system concepts. The transition is either controlled by IBMS within NSC-0

and permits re-entry or a complete inter-system handoff.

Network Access and Connectivity Channel

A Network Access and Connectivity Channel (NACCH) is an integral part of the IBMS approach. This

basic signaling channel covers all signaling tasks that guarantee constant network access and maintain

established connections. Furthermore, the NACCH comprises additional functionalities in order to

manage the hierarchical structure of IBMS, which includes switching between different NSCs for

supporting Rate Fallback and NSC Upgrade. The NACCH comprises the following main functionalities:

• registration and authentication

• call setup and release

• network access configuration

• mobility management (location management, handoff, paging)

• power saving management

• management of Rate Fallback and NSC Upgrade.

While the physical NACCH implementation depends on the respective environment, the functional specification is independent of the platform used. This enables the integration of heterogeneous and proprietary systems, which is essential for software radios.
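The platform-independent functional specification can be thought of as an abstract message set that each physical NACCH realization must map onto its own air interface. The message names below simply restate the bullet list above; the dispatch function is a hypothetical sketch, not part of the IBMS design.

```python
from enum import Enum, auto

class NACCHMessage(Enum):
    """Abstract NACCH signaling set (names follow the functional list above)."""
    REGISTER = auto()
    AUTHENTICATE = auto()
    CALL_SETUP = auto()
    CALL_RELEASE = auto()
    ACCESS_CONFIG = auto()
    LOCATION_UPDATE = auto()
    HANDOFF = auto()
    PAGING = auto()
    POWER_SAVE = auto()
    RATE_FALLBACK = auto()
    NSC_UPGRADE = auto()

def handle(msg: NACCHMessage) -> str:
    # A concrete NACCH implementation would map each abstract message onto
    # the signaling primitives of its particular platform / air interface.
    return f"NACCH handling {msg.name}"

print(handle(NACCHMessage.RATE_FALLBACK))  # NACCH handling RATE_FALLBACK
```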

Trade-Off Between Data Rate and Mobility

Since bitrate and mobility cannot simultaneously be maximized, a trade-off between datarate and

mobility is introduced by means of different NSCs as a key functionality of IBMS (Figure 3). Smart

Antennas shall be deployed to support the different requirements, starting with omnidirectional antennas in NSC A. NSC B will additionally utilize Smart Antennas at the base station, and NSC C at the mobile stations as well.


As part of the IBMS concept, devices of all NSCs shall transmit at similar power levels. Since the energy per bit of the transmitted signal decreases with increasing data rate, this loss in bit energy shall be compensated for by the antenna gain of the Smart Antennas. As apparent from Figure 4, the ratio between the data rates of two adjacent NSCs is about 2⁴ to 2⁵, which requires antenna gains of the same order of magnitude, i.e. about 12 to 15 dB. After consideration of possible interference

situations, we are currently investigating a scenario where users of each NSC span the whole available

bandwidth of approximately 20 - 25 MHz.
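The required antenna gain follows directly from the rate ratio: at constant transmit power the energy per bit scales as Eb = P/R, so a rate increase by a factor of 2⁴ to 2⁵ must be offset by roughly 12 to 15 dB of array gain. A quick check of this arithmetic:

```python
import math

def gain_to_offset_rate_increase(rate_ratio: float) -> float:
    """Antenna gain (dB) needed so that Eb stays constant when the data
    rate grows by rate_ratio at fixed transmit power (Eb = P / R)."""
    return 10 * math.log10(rate_ratio)

for ratio in (2**4, 2**5):
    print(f"rate x{ratio}: need {gain_to_offset_rate_increase(ratio):.1f} dB")
# rate x16: need 12.0 dB
# rate x32: need 15.1 dB
```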

Modem aspects

IBMS uses a combination of multiple access methods for the different NSCs. Utilizing Smart Antennas

enables Space Division Multiple Access (SDMA) in NSC B and C. Additionally, another multiple access

method is necessary. Since SDMA cannot be used for users that are not spatially separable and there are

no Smart Antennas in NSC A, Code Division Multiple Access (CDMA) will be used as second multiple

access method for NSCs A and B. The spreading property will then effectively result in a similar

bandwidth for all NSCs. Additional use of single carrier modulation promises an easier modem design

for all NSCs. Since it is not possible to apply CDMA in NSC C, which already occupies the complete bandwidth of one channel due to its high data rate, Time Division Multiple Access (TDMA) will be used if two NSC C users cannot be spatially separated. This will constrain the access time for each user. In order to

make efficient use of the system resources, IBMS requires intelligent resource management strategies.

The following Table 2 summarizes the intended multiple access and modulation methods used in IBMS:

Coherent modulation is applied in both uplink and downlink. Interference-canceling detection methods are currently being investigated for use at the base station; these might also be implemented at the mobile station for NSC C type terminals.
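The combination of access methods just described can be summarized in a small lookup. The table below restates the text (SDMA in NSCs B and C, CDMA in NSCs A and B, TDMA as the NSC C fallback); it is a sketch of the selection logic, not a reproduction of the paper's Table 2.

```python
# Multiple-access selection per NSC, restated from the text (not Table 2 itself).
ACCESS = {
    "NSC-A": ["CDMA"],          # no Smart Antennas -> code separation only
    "NSC-B": ["SDMA", "CDMA"],  # Smart Antennas at the base station
    "NSC-C": ["SDMA"],          # full channel bandwidth, no spreading possible
}

def select_access(nsc: str, spatially_separable: bool) -> str:
    methods = ACCESS[nsc]
    if spatially_separable and "SDMA" in methods:
        return "SDMA"
    if "CDMA" in methods:
        return "CDMA"
    return "TDMA"  # NSC C users that cannot be separated share the channel in time

print(select_access("NSC-B", spatially_separable=False))  # CDMA
print(select_access("NSC-C", spatially_separable=False))  # TDMA
```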

III. Network Capacity Considerations

Bitrate Gain

The performance increase of systems utilizing Smart Antennas can be categorized as follows:

• antenna gain due to directivity

• Spatial Filtering for Interference Reduction (SFIR)

• Space Division Multiple Access (SDMA).


The spatial filtering influences not only the number of interferers, but also the number of multipath

components, and thereby parameters of the mobile channel, like coherence bandwidth and coherence

time. Therefore we introduce another category

• Spatial Filtering for Channel Improvement (SFCI).

Higher bitrates can be achieved if a significantly reduced delay spread permits the implementation of

spectrally more efficient modulation techniques. This is indicated in Figure 5 where an increased bitrate

(compared to omnidirectional antennas) can be observed with reduced beamwidth of the adaptive array.

The maximum bit rate has thereby been derived from the coherence bandwidth and the requirement of a non-frequency-selective channel.
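The reasoning behind Figure 5 can be illustrated with the common rule of thumb Bc ≈ 1/(5·τ_rms): a beamwidth reduction that shrinks the rms delay spread widens the coherence bandwidth, and with it the symbol rate a non-frequency-selective (flat) channel can carry. The 1/5 constant and the example delay spreads below are illustrative assumptions, not values from the paper.

```python
def coherence_bandwidth(rms_delay_spread_s: float) -> float:
    """Rule-of-thumb coherence bandwidth Bc ~ 1 / (5 * rms delay spread)."""
    return 1.0 / (5.0 * rms_delay_spread_s)

# Hypothetical delay spreads: wide omnidirectional beam vs. narrow adaptive beam.
for label, tau in (("omni, tau = 1 us", 1e-6), ("narrow beam, tau = 100 ns", 100e-9)):
    bc = coherence_bandwidth(tau)
    print(f"{label}: Bc ~ {bc/1e6:.1f} MHz -> flat-fading symbol rate below {bc/1e6:.1f} Msym/s")
```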

Network Capacity Aspects

In a first capacity analysis, the mean Signal-to-Interference Ratio (SIR) for the uplink has been

determined according to a method presented in [3], based on a 7 cell cluster network. Other assumptions

were e.g. ideal antenna array pattern, ideal power control, and a simple path loss channel model. The

number of users was equal in each cell of the cluster considered. As an example, the resulting formula of

the SIR is given for the case of NSC C. In the other NSCs similar results were achieved [4].


Figure 6 shows the user capacity of the system for the different NSCs. It can be seen that for NSCs A and

B, the same number of users can be served in the uplink despite the higher data rate in NSC B,

which is a result of using Smart Antennas at the basestation in NSC B. In NSC C, the number of users is

lower due to stronger interference through the antenna gain of the MS antenna array (g_MS).

However, referring to Figure 7, which shows the network capacity, it can be concluded that NSC C

exploits the network capacity most efficiently because of its high data rate. It now becomes obvious why

it is desirable to carry a low-rate user along in a higher NSC, which is contained in the IBMS concept as

a NSC Upgrade. Since this will reduce the interference in the system, more users can be served while

exploiting the network capacity more efficiently.

IV. Smart Antennas

Spatial Channel Modeling and Verification

Performance analysis of adaptive antenna array receivers in mobile communications requires detailed

knowledge of the time-varying impulse responses between the transmitter and each of the antenna array outputs, which taken together are denoted as the vector channel impulse response. Due to the beamforming effect of

Smart Antennas, the conventional channel models for omnidirectional antennas cannot be used.

Measurements have been made in order to find parameters for typical impulse responses for a

communication channel when utilizing adaptive antenna arrays. Our measurements have been carried out

using the RUSK ATM vector channel sounder (Figure 8) which has been jointly developed by MEDAV

and Ilmenau University of Technology [5, 10]. The channel sounder can be used to analyse the time-variant vector impulse response in the 5-6 GHz frequency band based on a uniform linear antenna array with 8 elements. In order to gain superresolution in the delay-angular domain, a 2D Unitary ESPRIT procedure [11] is used for joint delay-DOA estimation.


The following figures show some analysis results of measurements carried out in a suburban

environment. The transmitter was moved along a street with the mean distance to the receive array being

approximately 60 m. Figure 9 shows a sequence of impulse responses of one array channel measured at 2

ms intervals. The resulting delay-Doppler spectrum is shown in Figure 10. It clearly indicates the ability of the applied measurement principle to track the time-variant behaviour of the impulse responses, resolving a Doppler shift of about 90 Hz, which corresponds to a radial speed of 18.5 km/h.


Let us introduce a spatial channel model in order to estimate the channel improvement when utilizing

Smart Antennas. Our model is based on a single bounce approach, which has also been used by Liberti

[6]. The time delay between transmitter and receiver can be described using ellipses. Furthermore, a

radial exponential distribution of the scatterers around the mobile is assumed, which takes into account

the finite dimension of scattering objects. The geometrical superposition of these yields the joint PDF of delay and beamwidth. Assuming a simple path-loss model, the delay spread as a function of the beamwidth can be derived. Changing the distance between transmitter and receiver over time results in


moving ellipses and thereby a time-variant joint PDF of delay and beamwidth, from which the dependency of the Doppler spread on the respective beamwidth of the adaptive array can be derived. Figure 11 (solid line) shows the effect of the antenna beamwidth on the delay spread, which corresponds to our measurements (depicted as stars). The respective Doppler spread is shown in Figure 12.
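The trend in Figure 11 can be illustrated with the standard power-weighted rms delay spread: narrowing the receive beam discards off-boresight multipath components, which shrinks the spread. The multipath set below (delays, powers, angles of arrival) is hypothetical, and the idealized brick-wall beam is our simplification of the model above.

```python
import math

def rms_delay_spread(paths):
    """Power-weighted rms delay spread of [(delay_s, power_lin, aoa_deg), ...]."""
    p_tot = sum(p for _, p, _ in paths)
    mean = sum(t * p for t, p, _ in paths) / p_tot
    return math.sqrt(sum(p * (t - mean) ** 2 for t, p, _ in paths) / p_tot)

def beam_filter(paths, beamwidth_deg, pointing_deg=0.0):
    """Keep only multipath components inside an idealized receive beam."""
    return [(t, p, a) for t, p, a in paths
            if abs(a - pointing_deg) <= beamwidth_deg / 2.0]

# Hypothetical multipath set: (delay, linear power, angle of arrival).
paths = [
    (0.0e-9, 1.00,   0.0),   # direct path on boresight
    (150e-9, 0.40,  25.0),   # nearby reflector
    (600e-9, 0.20, -70.0),   # distant reflector, far off boresight
    (900e-9, 0.10, 110.0),   # far scatterer well outside the beam
]

print(f"omni: {rms_delay_spread(paths)*1e9:.0f} ns")                       # 265 ns
print(f"60 deg beam: {rms_delay_spread(beam_filter(paths, 60.0))*1e9:.0f} ns")  # 68 ns
```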


Simulations

To investigate the performance of beamforming algorithms in realistic scenarios, array imperfections

such as mutual coupling between elements and non-ideal elements have to be taken into consideration.

Figure 13 shows a typical beamforming pattern of an 8-element antenna array with the desired signal located at 30° and an interferer at -30°. The dotted line shows the pattern for “ideal” (omnidirectional) antenna elements, while the solid line includes both element imperfections and mutual coupling effects.

Given these antenna characteristics, a simulation of a simple communication situation with one user (30°)

and one interferer (-30°) transmitting on the same carrier frequency was performed. The channel was


modeled using the GBSB-approach with two paths [6]. The SINR plot in Figure 14 starts with an

omnidirectional antenna pattern for both the transmitter and the receiver. This results in a small Signal-

to-Interference-and-Noise-Ratio (SINR), since all incoming signals are received equally. This situation

can be viewed as a typical scenario for NSC A. Switching to NSC B would then require adaptive beamforming at the base station. This is shown in the second part of the SINR plot, when the adaptive

array is turned on after 40 iterations. The antenna array immediately adapts its directivity towards the

direct path from the transmitter and a dramatically improved SINR (about 20 dB) can be observed. This

will enable the implementation of higher order bandwidth efficient modulation schemes, as required for

the data rates in NSC B.
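The SINR jump when the array is switched on can be reproduced with a conventional beamformer on an 8-element, half-wavelength ULA: steering at 30° happens to place a pattern null at -30°, so the interferer is suppressed while the user keeps full array gain. This is a simplified sketch under ideal assumptions (no channel model, no element imperfections, no mutual coupling), not the adaptive algorithm used in the paper.

```python
import cmath, math

def steering(theta_deg, n=8, spacing=0.5):
    """Array response vector of an n-element ULA (spacing in wavelengths)."""
    s = math.sin(math.radians(theta_deg))
    return [cmath.exp(2j * math.pi * spacing * k * s) for k in range(n)]

def gain_db(weights, theta_deg):
    """Power gain of a weight vector towards theta, relative to one element."""
    a = steering(theta_deg)
    y = sum(w.conjugate() * x for w, x in zip(weights, a))
    return 10 * math.log10(max(abs(y) ** 2, 1e-12))  # floor avoids log(0)

n = 8
omni = [1.0] + [0.0] * (n - 1)           # single active element ~ omnidirectional
steer = [w / n for w in steering(30.0)]  # conventional beamformer aimed at 30 deg

for name, w in (("omni", omni), ("beamformer", steer)):
    print(f"{name}: user(+30 deg) {gain_db(w, 30.0):.1f} dB, "
          f"interferer(-30 deg) {gain_db(w, -30.0):.1f} dB")
# The beamformer keeps ~0 dB towards the user while nulling the interferer.
```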

Testbed Design

In order to demonstrate the aforementioned potential benefits (enabling higher-order modulation techniques

with increased spectral efficiency) and tracking capabilities of adaptive arrays, we have developed a

smart antenna testbed based on the following design considerations:

• scaleable DSP array with arbitrary topology and considerable signal processing power

• advanced bus system for high speed data transmission (> 64Mbytes/s throughput)

• simple array geometry with sufficient number of antenna elements

• digital down conversion to baseband in order to ensure I/Q-orthogonality

• A/D conversion at a suitable IF.


The antenna array (Figure 15) has been designed as a uniform linear array of 8 printed circuit

elements with vertical polarization and spacing to match the required gains. The center frequency of

5.2 GHz falls into the European Hiperlan-II band. The antennas are connected to an 8-channel RF front end with a maximum signal bandwidth of 2 MHz, which can be handled using off-the-shelf DSPs. In a single-stage down-conversion, the signals from each antenna are converted to an IF at 10.7 MHz.
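For reference, ULA element spacing is usually chosen near half a wavelength to avoid grating lobes; at the 5.2 GHz center frequency that is about 2.9 cm. The λ/2 choice here is our assumption for illustration — the paper only states that the spacing matches the required gains.

```python
C = 299_792_458.0  # speed of light, m/s

def half_wavelength_spacing(f_hz: float) -> float:
    """Grating-lobe-free element spacing (m) for a ULA at carrier f_hz."""
    return C / f_hz / 2.0

d = half_wavelength_spacing(5.2e9)
print(f"lambda/2 at 5.2 GHz: {d*100:.2f} cm")  # 2.88 cm
```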

The IF signals are fed to a signal processing unit (Figure 16), which has been developed by VSYSTEMS,

Munich, in accordance with our design specifications. Analog-to-Digital converters (Analog Devices

AD9042) with a maximum sampling rate of 41 MHz at 12 bit resolution sample the IF signals, which are

then transformed into baseband (I/Q demodulated) using Digital Down Converters (Graychip GC1012)

and fed to an array of 18 SHARC DSPs (Analog Devices ADSP-21060) with a total of 96 Mbyte DRAM.

Theoretically, the DSP array can achieve more than 2 GFLOPS. In order not to limit the performance by

the inter-processor-communication bottleneck, the DSPs are connected over a RACEway crossbar

network with a maximum throughput of 160 Mbyte/s per link.


The general structure of our testbed is shown in Figure 17.

VI. Summary

A new system concept for future mobile communications systems has been presented, which utilizes

Smart Antennas in order to maximize the system capacity and at the same time enables the implementation of

adaptive (smart) modems. It has been shown that Smart Antennas are capable of reducing the delay

spread, which enables the implementation of higher order spectrally efficient modulation techniques.

First results have been presented based on spatial channel models which have been experimentally

verified.

VII. Acknowledgments

This work is supported by the German Federal Ministry for Education, Science, Research, and

Technology (BMBF) as part of the ATMmobil project. The channel measurements have been jointly

obtained with Prof. Reiner Thomä and his researchers Gerd Sommerkorn, Dirk Hampicke, Uwe Trautwein, and Andreas Richter from the Technical University of Ilmenau. In particular, the authors would like to thank Prof. Jeffrey Reed from the Mobile and Portable Radio Research Group at Virginia Tech for many helpful discussions when designing our Smart Antenna testbed.

VIII. References

[1] G. Fettweis, K. Iversen, M. Bronzel, H. Schubert, V. Aue, D. Mämpel, J. Voigt, A. Wolisz, G. Wolf, and J.-P. Ebert, "A Closed Solution for an Integrated Broadband Mobile System (IBMS)", ICUPC 1996, Boston, USA.


[2] M. Bronzel and K. Iversen, "Integriertes Breitbandiges Mobilkommunikations-System (IBMS) auf ATM-Basis", Workshop der ITG-Fachtagung Optische Teilnehmerzugangsnetze im Heinrich-Hertz-Institut, Berlin, November 1996; published in: telekom praxis, vol. 74, pp. 12-19, June 1997.

[3] Joseph C. Liberti, Jr., and Theodore S. Rappaport, "Analytical Results for Capacity

Improvements in CDMA", IEEE Transactions on Vehicular Technology, Vol. 43, No. 3, pp. 680-690, August 1994.

[4] M. Bronzel, D. Hunold, J. Jelitto, N. Lohse, G. Fettweis: "Unterstützung variabler Datenraten

durch den Kapazitätsgewinn adaptiver Antennen", ITG-Diskussionssitzung “Intelligente

Antennen”, Kaiserslautern, December 5, 1997.

[5] M. Bronzel, J. Jelitto, N. Lohse, G. Fettweis, R. Thomä, G. Sommerkorn, D. Hampicke, U.

Trautwein, and A. Richter, "Experimental Verification of Vector Channel Models for Simulation

and Design of adaptive Antenna Array Receivers", ACTS Mobile Communication Summit 1998,

(accepted).

[6] J. C. Liberti and T. S. Rappaport, "A Geometrically Based Model for Line-Of-Sight Multipath Radio Channels", IEEE Conf. Proc. VTC'96, pp. 844-848, May 1996.

[7] U. Martin, "A Directional Radio Channel Model for Densely Built-Up Urban Areas", ITG-Fachbericht 145, The 2nd European Personal Mobile Communications Conference (EPMCC '97) together with 3. ITG-Fachtagung Mobile Kommunikation, Bonn, Germany, pp. 237-244, Sep. 1997.

[8] J. Jelitto and M. Bronzel, "Development of a Smart Antenna Testbed", Joint Workshop on

Wireless Broadband In-House Digital Networks (IHDN’98), January 22-23, 1998, Ulm,

Germany.

[9] M. Bronzel, D. Hunold, G. Fettweis, T. Konschak, T. Dölle, V. Brankovic, H. Alikhani, J.-P.

Ebert, A. Festag, F. Fitzek, and A. Wolisz, “Integrated Broadband Mobile System (IBMS)

featuring Wireless ATM”, ACTS Mobile Communication Summit ‘97, October 7-10, 1997,

Aalborg, Denmark, pp.641-646.

[10] U. Trautwein, K. Blau, D. Brückner, F. Herrmann, A. Richter, G. Sommerkorn, and R. Thomä, "Radio Channel Measurement for Realistic Simulation of Adaptive Antenna Arrays", The 2nd European Personal Mobile Communications Conference (EPMCC'97), September 1997, Bonn, Germany, pp. 491-498.

[11] M. Haardt, M.D. Zoltowski, C.P. Mathews, and J.A. Nossek, "2D Unitary ESPRIT for Efficient

2D Parameter Estimation", Proc. ICASSP’95, Detroit, MI, USA, May, 1995, pp. 2096-2099.


CDMA Smart Antenna Performance

Martin J. Feuerstein, J. Todd Elson, Michael A. Zhao, Scot Gordon

Metawave Communications Corp.

8700 148th Avenue N.E., Redmond, WA 98052

425-702-5865, [email protected]

Abstract

Applications of smart antenna technology to CDMA networks include both high-mobility cellular/PCS and fixed-terminal, low-mobility wireless local loop deployments. Smart antennas are

emerging as an integral element of the new wideband CDMA standards for third generation mobile

telephone systems across North America, Europe and Asia. This paper proposes a realizable architecture that applies smart antennas to current TIA/IS-95 CDMA networks; the approach employs a non-invasive appliqué technique designed to address many of the fundamental performance limitations that exist with current CDMA networks. The design integrates naturally with existing smart antenna architectures for analog AMPS service. Computer simulations are used to examine improvements in capacity

and performance employing the fixed-beam technology. The non-traditional roles phased-array antenna

deployments can play in traffic load balancing, handoff management and network-wide interference

control are explored.

1. Introduction

For the first time, smart antenna systems are being deployed in a large-scale fashion throughout ma-

jor metropolitan cellular markets in the U.S. and abroad. In particular, multibeam technologies, also referred to as fixed- or switched-beam methods, have been shown, through extensive analysis,

simulation and experimentation, to provide substantial performance improvements in FDMA, TDMA,

and CDMA networks [1-5]. One apparent advantage of multibeam architectures for FDMA and TDMA

systems is the straight-forward ability of the smart antenna to be implemented as a non-invasive add-on

or applique to an existing cell site, without major modifications or special interfaces. As appliques,

smart antennas are deployed into the enormous and ever-growing existing base of standard cell sites

employing both analog and digital air interfaces.

When the question of CDMA smart antennas arises, it is clear from the literature that multibeam

techniques lead to significant capacity improvements when the phased-array processing is tightly interfaced with, or embedded within, the cell site’s baseband receiver processing [4,5]. However, it has yet to

be demonstrated that an appliqué architecture can provide substantive benefits for CDMA networks; this paper addresses precisely this question. At first glance, designing a practical CDMA smart

antenna as a non-invasive add-on may seem like a rather difficult problem, given that despreading the

individual traffic channels requires either blind signal processing methods or special interfaces to the cell

site—clearly not within the constraints of an economical appliqué device. As a solution to this problem,

the architecture proposed here takes a completely different tack: the design does not attempt beam

switching for each traffic channel, but instead synthesizes sector patterns on-the-fly based upon local

traffic and interference conditions, with the objective of reducing peak traffic load, managing handoff

overhead and controlling interference. The following sections introduce the fundamental limitations of

current CDMA networks, define the proposed applique architecture, and present simulated as well as

measured performance results.

2. CDMA Network Performance

Over the past several years, cellular service providers have discovered that deployment, optimization

and maintenance of CDMA networks are radically different from their now-familiar AMPS experiences.

With unity frequency reuse, the troublesome task of AMPS frequency channel planning goes away,

replaced by the CDMA equivalent of per-sector PN offset reuse planning. Another facet of universal

frequency reuse is the fact that every sector of every cell is either a potential handoff candidate or a

possible interferer. In CDMA, there is no frequency reuse distance to separate co-channel interferers

from one another; due to local propagation conditions, it’s not uncommon for a sector to overshoot the

desired coverage area by several tiers of cells. With CDMA technology comes soft handoff, which

provides a high-quality make-before-break transition; on the downside, excessive handoff extracts

forward link penalties in terms of higher transmit power requirements, increased interference, reduced

capacity and potentially dropped calls.

The golden rules necessary to elicit maximum performance from a CDMA network all involve inter-

ference control in one aspect or another. Successful optimization of the network, particularly the for-

ward link, is an iterative process of making tough interference tradeoffs. For reliable call originations, dominant servers must be present, because calls originate on the access channel in a one-way connection;

the same dominant server requirement is true for reliable handoffs, since an excessive number of poten-

tial servers can cause interference leading to dropped calls. In hard handoff regions (CDMA f1-to-f2 or

CDMA-to-AMPS), managing interference is once again the key to reliable performance.


For over a decade now, the wireless industry has hotly debated capacity questions about CDMA

technology. In reality, the capacity of a CDMA network is an ever-changing quantity that varies based

on local terrain topography and based on geographical traffic distributions over time. Network capacity

is a strong function of the interference, as measured in terms of frequency reuse efficiency (ratio of in-

sector interference to total interference), which is determined largely by local path loss characteristics.

Network capacity is affected by spatial traffic density distributions; often these distributions are highly

nonuniform and time-varying on differing scales (hourly, daily, seasonally, event driven).
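The frequency reuse efficiency mentioned above is simply the in-sector fraction of the total received interference. A quick illustration with hypothetical interference powers:

```python
def reuse_efficiency(in_sector_w: float, other_sector_w: float) -> float:
    """Frequency reuse efficiency F = I_in / (I_in + I_other)."""
    return in_sector_w / (in_sector_w + other_sector_w)

# Hypothetical: 10 mW of in-sector interference vs. 6 mW leaking in from
# surrounding sectors -> F = 0.625; heavier leakage drags capacity down.
print(f"F = {reuse_efficiency(10e-3, 6e-3):.3f}")  # F = 0.625
```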

The smart antenna applique architecture introduced in the next section is designed to provide CDMA

cellular service providers with flexible tuning options for controlling interference, creating dominant

servers, managing handoff activity, and handling nonuniform and time-varying traffic distributions.

3. Appliqué Architecture

This paper describes the CDMA portions of a dual-mode smart antenna platform designed to work

with CDMA and AMPS/NAMPS cellular telephone networks. The fundamental element of the archi-

tecture is a phased-array antenna that nominally creates 12 narrow beams (although other options are

possible); these narrow beams, either individually or in combination, are intelligently exploited by the

system to reduce interference in both analog and digital networks. The system supports real-time, adap-

tive control of the interface between the phased-array antenna and the cell site's transmit and receive

radios. To efficiently drive the adaptive control algorithms, the system periodically monitors the radio

environment, measuring power levels on the narrow beams. The phased-array antenna system connects

to CDMA and AMPS/NAMPS base stations as a non-invasive add-on; from an operational perspective,

the functions of the smart antenna are transparent to the cell site and the rest of the network.

A block diagram of the system used to both measure the radio environment and perform optimum

selection/combining among the narrow beams is shown in Figure 1. The control algorithms and phased-

array interface functions reside in the Spectrum Management Units (SMUs), which perform the core

processing in the system. Each SMU is an intelligent unit designed to make real-time decisions about

optimum mappings between the phased-array antenna and the cell site radios for AMPS/NAMPS and

CDMA. Autonomous mapping decisions are made based upon measurements performed by a bank of

high-speed, frequency-agile scanning receivers. Each routing decision is implemented within a multi-

dimensional RF switch matrix with variable amplitude and phase control.

The primary SMU interface to the cell site is through RF connections to the transmit and receive

terminals on the cell site radios. On the cell site transmit path, the SMUs drive a Transmit Combiner/Driver (TxCD) assembly, which supports flexible combinations of CDMA and AMPS/NAMPS

radios. Outputs from the TxCD feed a load-sharing matrix of Linear Power Amplifiers (LPAs) con-

nected through duplexers to the phased-array antennas. On the cell site receive path, the SMUs take

signals from a bank of Low Noise Amplifiers (LNAs) connected through duplexers to the phased-array

antennas. The entire smart antenna system can be administered and monitored through a computer

interface that allows static or dynamic parameter control, as well as monitoring of real-time system

performance.

In the CDMA operating mode, the smart antenna makes use of the phased-array to create on-the-fly

custom sector antenna patterns through a process known as sector synthesis. Under software control,

independent management of sector azimuth pointing angles, beamwidths and pattern sculpting contours

provides flexibility to fine-tune network performance. The architecture supports completely different

sector mappings for CDMA and AMPS/NAMPS, thus allowing independent network optimization while

sharing the same physical antennas. The system monitors traffic loading and interference levels on the

CDMA reverse link, combined with traffic loading information from the forward link. The control

algorithms then respond to the traffic loading and interference levels by creating sector antenna patterns

designed to equalize traffic loads and reduce interference.

4. Unlocking Antenna Configurations for Analog and Digital Services

With rare exceptions, cellular 850 MHz band networks use a single set of shared antennas for both

AMPS/NAMPS and CDMA services. Traditionally two services that share the same physical antenna

structure are locked into the exact same sector configurations (azimuth pointing angles, beamwidths,

radiation patterns). With smart antennas, a single physical array antenna can be used to synthesize

completely different sector configurations for the digital and analog services. As the following sections

will illustrate, there are strong theoretical and practical reasons that optimum CDMA sector settings are

much different from optimum AMPS configurations.

In cellular systems where antennas are shared between AMPS and CDMA, service providers are

forced into fixed grid patterns due to the underlying frequency reuse assignments of the AMPS network.

Without a smart antenna system, azimuth pointing angles of the sectors are locked into a rigid hexagonal

grid pattern which forces all alpha, beta and gamma sectors—both AMPS and CDMA—to be aligned

across the network. However, since CDMA is based on unity frequency reuse, there is no need to main-

tain a rigid grid pointing pattern across the entire CDMA network.

Traditional AMPS networks make use of relatively wide beamwidth sector antennas (90° to 105° is

typical for cellular AMPS/CDMA networks) to support hysteresis associated with hard handoffs between

sectors. In contrast, for CDMA networks operators have found that much narrower antenna beamwidths

(60° to 90° is typical for PCS CDMA-only networks) are necessary to limit soft handoff activity. In

addition, the rolloff of the antenna’s main lobe characteristic is a critical factor in determining the

amount of handoff overhead present in the CDMA network.

With smart antennas, the sector configurations for AMPS and CDMA networks can be unlocked

even though the same physical antenna structure is shared. Operators are free to select different sector

beamwidths, azimuth pointing angles and rolloff characteristics to independently optimize both AMPS

and CDMA, thereby maximizing the capacity and performance of both services.

5. Traffic Load Balancing

Statistics derived from commercial cellular networks consistently indicate that traffic loads are un-

evenly distributed across cells and sectors. In other words, it’s quite common for a cell to have a single

sector near the blocking point, while the cell’s other two sectors are lightly loaded. A similar scenario is

true across multiple cells in a cluster that covers a particular geographical region with high traffic. An

example of the extremely non-uniform distribution of traffic is shown in Figure 2. The three curves

show the highest, middle and lowest loaded sectors on each cell; the cells are rank-ordered based on

loading of the highest sector. One can quickly observe that the highest loaded sector is much more

heavily loaded than other sectors on the cell; almost never are all sectors of a given cell at the same high

traffic loading level.

After analyzing traffic data from a number of cellular and PCS markets, we find that average busy hour traffic statistics are quite consistent from one major market to another. On average, the highest loaded sector carries roughly 140% of the traffic it would carry if all sectors were evenly loaded; by contrast, the middle and lowest loaded sectors carry 98% and 65% of the traffic relative to a uniformly

loaded case. Even though some sectors in a network may be blocking, significant under-utilized capacity

exists in other sectors. The objective of traffic load balancing is to shift excessive traffic load from

heavily loaded sectors to under-utilized sectors. The result is a significant reduction in peak loading

levels and, hence, an increase in carried traffic or network capacity. At a coarse level, static sectorization

parameters can be adjusted for load balancing based on average busy hour traffic distributions. For

optimum control of peak loading levels in time-varying traffic conditions, dynamic control of sector

parameters can be used based on real-time measurements of traffic and interference. Under dynamic

control, network parameters (neighbor lists, search windows, etc.) must be adjusted to support the range

of dynamic sectorization control. Along these same lines, other researchers have noted the ability of

dynamic antenna downtilt control to load balance traffic in hot spots, with a resulting improvement in

overall capacity [7].
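The capacity benefit of load balancing can be sketched with simple arithmetic. The example below uses the average busy-hour figures quoted above (140%/98%/65% of uniform loading) and an illustrative simplification not taken from the paper: the cell is assumed to block as soon as its busiest sector reaches capacity, so equalizing the load raises the traffic the whole cell can carry before blocking.

```python
def blocking_limited_capacity(sector_loads, sector_capacity=1.0):
    """Relative traffic a cell can carry before its busiest sector blocks.

    sector_loads are relative offered loads per sector; the whole cell's
    traffic is scaled up until max(load) reaches sector_capacity.
    """
    scale = sector_capacity / max(sector_loads)
    return scale * sum(sector_loads)

unbalanced = [1.40, 0.98, 0.65]   # average busy-hour figures from the text
balanced = [1.01, 1.01, 1.01]     # same total traffic, evenly distributed

gain = blocking_limited_capacity(balanced) / blocking_limited_capacity(unbalanced)
print(f"Carried-traffic gain from perfect balancing: {gain:.2f}x")
```

Under these assumptions, perfect balancing of the average traffic profile raises blocking-limited carried traffic by roughly 40%; real gains depend on how fully the dynamic sectorization can equalize the instantaneous loads.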

The network simulation results in Figures 3 and 4 illustrate how traffic load balancing can improve

network capacity usage. Figure 3 depicts a traffic hot spot that overloads a sector without the application

of smart antennas. Within the hot spot, cellular subscribers that are successfully served are labeled with

small squares, triangles, and circles; subscribers that cannot obtain satisfactory service quality are la-

beled with numbers. Before traffic load balancing with smart antennas, only 54% of the subscribers in

the hot spot receive acceptable service.

With smart antennas, it is possible to rotate the azimuth pointing angle, or change beamwidths of the

CDMA sectors to shift handoff boundaries. Figure 4 shows the same traffic hot spot, only this time the

sector azimuths have been rotated by 60° to redistribute the traffic more evenly across sectors. The result is that significantly more subscribers obtain acceptable service; in Figure 4, 92% of the subscribers obtain good service versus 54% in the case without smart antennas in Figure 3.

6. Handoff Management

Cellular service providers often have an extremely difficult time controlling handoff activity. In

CDMA networks, some level of handoff is desirable due to gains associated with the soft handoff feature

(soft handoff allows the subscriber units to be simultaneously connected to multiple sectors). However,

too much handoff can exact a significant performance penalty from the network. For example, each

soft or softer handoff connection requires valuable transmit power resources from all the sectors in-

volved in the multi-way handoff. The impact of this handoff overhead is an increase in the total average

transmit power per subscriber, which wastes valuable linear power amplifier (LPA) resources at the cell

site. In addition, more transmit power per subscriber increases forward link interference levels and

decreases forward link capacity accordingly. Finally, excessive handoff activity can result in dropped

calls due to handoff failures in areas where an excessive number of potential handoff candidates exist.

For optimum forward link capacity, CDMA network operators strive to tightly manage the amount of

handoff activity. Typical networks may run at handoff overhead levels between 65% and 100% (i.e.,

1.65 to 2.0 average handoff links per subscriber). Forward link capacity is inversely proportional to the

handoff overhead factor due to the finite transmit power available at each sector. One key to controlling

excessive handoff is selection of half-power beamwidth for the sector antenna; for example, simulation

results in Figure 5 show the handoff overhead factor, including soft and softer handoff, versus sector

antenna beamwidth for various commercial off-the-shelf sector antennas. At the widest beamwidths,

handoff overhead is extremely high due to large softer handoff regions between sectors. Handoff overhead is minimized with antenna beamwidths near 70°. Area coverage probabilities from the simulations

show that the 97% area coverage probability target is maintained over the range of 70° to 90° beam-

width, so that sector coverage is conserved. Finally, handoff overhead increases as beamwidths are

reduced below 70° due to the reduction in regions with one-way handoff, as well as antenna sidelobe

impacts. Clearly, optimum sectorization is an important step to maximizing CDMA network capacity, as

other researchers have noted [6].
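Because forward link capacity scales inversely with the handoff overhead factor, the quoted overhead range (1.65 to 2.0 average links per subscriber) maps directly into a capacity penalty. The sketch below is illustrative arithmetic based on that inverse-proportionality statement, not a result from the paper's simulations:

```python
def relative_fl_capacity(avg_links_per_subscriber: float) -> float:
    """Relative forward-link capacity: with a fixed sector transmit power
    budget, capacity is inversely proportional to the average number of
    traffic-channel transmissions (handoff links) per subscriber."""
    return 1.0 / avg_links_per_subscriber

# 65% vs. 100% handoff overhead (1.65 vs. 2.0 average links per subscriber):
tight = relative_fl_capacity(1.65)
loose = relative_fl_capacity(2.0)
print(f"Capacity advantage of 65% vs 100% overhead: {tight / loose:.2f}x")
```

Under this simple model, trimming overhead from 100% down to 65% buys roughly a 21% forward link capacity improvement, which is why operators manage handoff activity so tightly.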

Another key to controlling handoff overhead is the sector antenna pattern itself. In addition to

beamwidth, the rolloff characteristics of the radiation pattern play a critical role in determining the

amount of soft/softer handoff in the network. Figure 6 illustrates the radiation pattern from a phased-

array smart antenna versus an off-the-shelf commercial sector antenna. With phased-array smart anten-

nas, it is possible to synthesize radiation patterns with sharp rolloff in order to reduce handoff overhead,

while still maintaining coverage. Sector patterns synthesized from the phased array antenna can be much

closer to an ideal sector pie-slice or conical pattern. The sharp rolloff outside the half-power beamwidth

is due to the increased aperture size, illuminated by the multiple dipole columns available with the phased-array antenna. In contrast, typical sector antennas contain only a single collinear column of dipoles (or

log periodic dipole arrays). Simulations indicate that sharper pattern rolloff can reduce network handoff

levels while still maintaining coverage.

7. Interference Control

As mentioned previously, the most fundamental aspect associated with tuning CDMA networks is

managing interference levels. On both the forward and reverse links, varying interference levels across

the network mean that coverage, quality and capacity change based on local geography and time-of-day.

The best way to illustrate the sensitivity of reverse link capacity to antenna characteristics is through the

reverse link frequency reuse efficiency (in-sector to total interference); reverse link capacity is directly

proportional to the frequency reuse efficiency. Figure 7 depicts the frequency reuse efficiency versus

sector antenna beamwidth, using the same off-the-shelf commercial antennas originally analyzed in

Figure 5. At the widest antenna beamwidths, reuse efficiency is low due to the large sector aperture

resulting in the capture of significant interference from subscribers in other sectors and cells. At the

narrowest beamwidths, reuse efficiency is low due to reduction in main beam coverage area combined

with the impact of antenna sidelobes. For this simulation case, beamwidths of roughly 70° to 90° result

in the best reverse link capacities. Area coverage probabilities from the simulations show that the 97%

area coverage probability target is maintained over this range of antenna beamwidth as well (in other

words, coverage area is not compromised).

During initial network installation and subsequent network maintenance, service providers spend a

significant amount of time and effort to fine-tune interference levels. Operators may adjust transmit

powers, downtilt antennas, change antenna patterns, or tweak network parameters to eliminate interfer-

ence from problem areas. Smart antennas provide an unprecedented degree of flexibility in tuning the

RF coverage footprint of each sector. Figure 8 illustrates several of the sector antenna patterns that can

be created by sculpting the coverage with per-beam gain control. In the figure, three radiation patterns

are shown: the reference case is the unadjusted sector pattern as shown in Figure 6; the other two patterns

show +4 dB and –4 dB adjustments in particular azimuthal directions. Transmit power can be turned up

in specific directions to enhance coverage in traffic hot spots and inside buildings, or to create dominant

servers in multiple pilot regions. In other directions, transmit power can be reduced to minimize inter-

ference, control handoff activity or tame severe cases of coverage overshoot. Using synthesized phased

array patterns to control sector footprints is significantly more flexible than the alternatives of employing

antenna downtilts or adjusting sector transmit powers—adjustments that impact the entire coverage area

of the sector, rather than confining changes to the specific problem spot.
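The pattern-sculpting idea can be sketched numerically: a composite sector pattern is formed by power-summing the individual narrow beams, each weighted by its own gain setting. The Gaussian main-lobe shape, beamwidth, and gain values below are illustrative assumptions, not the actual beam patterns of the system described here:

```python
import math

def sector_pattern_db(azimuths_deg, beam_centers_deg, beam_gains_db,
                      beamwidth_deg=30.0):
    """Approximate composite sector pattern (dB) from power-summing idealized
    Gaussian narrow beams, each with an individual gain adjustment in dB."""
    pattern = []
    for az in azimuths_deg:
        total = 0.0
        for center, g_db in zip(beam_centers_deg, beam_gains_db):
            # Wrap the angle difference into [-180, 180)
            d = (az - center + 180.0) % 360.0 - 180.0
            # Gaussian main-lobe approximation: -3 dB at +/- beamwidth/2
            beam_db = g_db - 3.0 * (2.0 * d / beamwidth_deg) ** 2
            total += 10.0 ** (beam_db / 10.0)   # sum in linear power
        pattern.append(10.0 * math.log10(total))
    return pattern

# Three 30-degree beams forming a ~90-degree sector; the center beam is
# boosted +4 dB toward a hypothetical hot spot, one edge beam cut -4 dB.
beams = [-30.0, 0.0, 30.0]
gains = [-4.0, 4.0, 0.0]
print(sector_pattern_db([-30.0, 0.0, 30.0], beams, gains))
```

Because each beam's weight is set independently, coverage can be raised or lowered in a specific azimuthal slice without disturbing the rest of the sector footprint, which is the key advantage over downtilt or whole-sector power adjustments.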

8. Conclusion

A CDMA smart antenna architecture for implementation as a non-invasive cell site add-on has been

presented. The design is non-traditional in the sense that it does not attempt to create an optimum antenna pattern for each traffic channel, but rather synthesizes optimum patterns on a per-sector basis. The approach integrates easily

with existing smart antenna architectures for analog FM FDMA services, such as AMPS. The new

proposal provides flexibility to simultaneously synthesize independent sector configurations for multiple

services sharing the same antenna structure, such as the typical combination of AMPS and CDMA. For

the CDMA network, the smart antenna periodically monitors traffic and interference levels to determine

an optimum sectorization. Each CDMA sector can then be assigned radiation patterns exhibiting differ-

ent beamwidths, azimuth pointing angles, and customized sculpting characteristics with the objectives of

balancing traffic load, managing handoff activity and controlling interference. Simulation results show

improved network performance and reduced peak loading levels with the smart antenna system.

References

[1] S. C. Swales, M. A. Beach, D. J. Edwards and J. P. McGeehan, “The Performance Enhancement of Multibeam Adaptive Base Station Antennas for Cellular Land Mobile Radio Systems”, IEEE Trans. Veh. Tech., Vol. 39(1), Feb. 1990, pp. 56-67.

[2] Y. Li, M. J. Feuerstein and D. O. Reudink, “Performance Evaluation of a Cellular Base Station Multibeam Antenna”, IEEE Trans. Veh. Tech., Vol. 46(1), Feb. 1997, pp. 1-9.

[3] M. J. Ho, G. L. Stuber and M. D. Austin, “Performance of Switched-Beam Smart Antennas for Cellular Radio Systems”, IEEE Trans. Veh. Tech., Vol. 47(1), Feb. 1998, pp. 10-19.

[4] J. C. Liberti, “Analysis of CDMA Cellular Radio Systems Employing Adaptive Antennas”, Ph.D. Dissertation, Virginia Tech, Sept. 1995.

[5] J. H. Winters, “Smart Antennas for Wireless Systems”, IEEE Personal Communications, Vol. 5(1), Feb. 1998.

[6] T. W. Wong and V. K. Prabhu, “Optimum Sectorization for CDMA 1900 Base Stations”, Proc. IEEE VTC’97, May 4-7, 1997, Phoenix, AZ, pp. 1177-1181.

[7] J. S. Wu, J. K. Chung and C. C. Wen, “Hot-Spot Traffic Relief with a Tilted Antenna in CDMA Cellular Networks”, IEEE Trans. Veh. Tech., Vol. 47(1), Feb. 1998, pp. 1-9.

VI

Wireless RF Distribution in Buildings using Heating and Ventilation Ducts

Christopher P. Diehl, Benjamin E. Henty, Nikhil Kanodia, and Daniel D. Stancil
Department of Electrical and Computer Engineering
Carnegie Mellon University, Pittsburgh, PA 15213

Abstract

An alternative method of distributing RF in buildings is proposed in which the heating

and ventilation ducts are used as waveguides. Because of the relatively low waveguide loss, this

method may lead to more efficient RF distribution than possible with radiation through walls or

the use of leaky coax. Further, the use of existing infrastructure could lead to a lower-cost

system. Initial experimental results are presented that demonstrate duct-assisted propagation

between nearby offices in a university building. An example method is described for obtaining

efficient coupling between coax and 8”x12” rectangular duct over the 902-928 MHz ISM band.

1. Introduction

One of the challenges related to the installation of wireless networks in buildings is the need to

predict RF propagation and coverage in the presence of complex combinations of shapes and

materials in a building environment [1]. In general, the attenuation in buildings is higher than in

free space, requiring more cells and higher power to obtain adequate coverage.

Over the past several years, an extensive wireless data network has been installed at Carnegie

Mellon that provides coverage to about one half of the campus with raw speeds of two megabits

per second [2-4]. With more than 100 access points, it is believed to be the largest wireless local area network (LAN) installation anywhere. The effort required to provide a detailed description of the

geometry and material composition of buildings on such a scale often causes system designers to

resort to trial-and-error layouts.

An alternative to relying on direct propagation throughout a building is to install leaky coaxial

cable [5]. Although this method lends itself to a systematic design procedure, the cost of the coax

and its installation may be prohibitive.


An alternative method of distributing RF in buildings is suggested by the recognition that every

building is equipped with an RF waveguide distribution system—the HVAC ducts. The use of

the HVAC ducts is also amenable to a systematic design procedure but should be significantly

less expensive than other approaches since existing infrastructure is used and the RF is

distributed more efficiently.

2. Description of Proposed System

We envision a distribution system in which RF is coupled into the ducts at a central location

(perhaps near the air handling equipment) using inserted probes. These probes would be

constructed much like existing coax-to-waveguide converters. In most installations, the ducts are

largest near the central air handling equipment, and become smaller as they branch out to the

various rooms. The branches and splits in the ducts would function as waveguide power splitters.

Eventually, the RF would be radiated into rooms and offices through specially-designed louvers.

Coverage in corridors or spaces shielded from louvers could be realized by placing passive

reradiators in the sides of the ducts. As an alternative to special louvers to scatter the RF into

local rooms, waveguide-to-coax coupling could be used to deliver the signal to wireless modems

in the room.

The key idea underlying this distribution method is that low-loss electromagnetic waves can

propagate in hollow metal pipes if the dimensions are sufficiently large compared to a

wavelength. Since HVAC ducts are typically constructed of sheet metal, they are excellent

waveguide candidates. The lowest frequency that can propagate in a given duct depends on the

size and shape of the cross-section. For rectangular ducts, the cutoff frequency fco for the lowest

propagating mode is given by [6]

fco = c / (2a)

where c = 3×10^8 m/s is the velocity of light in free space, and a is the largest dimension of the duct. For circular ducts, the lowest cutoff frequency is given by [6]

fco = 1.8412 c / (2πR)

where R is the radius of the duct. Minimum duct dimensions for several wireless bands are given

in Table 1.
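As a sanity check on these cutoff expressions (the standard dominant-mode TE10 and TE11 formulas from waveguide theory), the short Python sketch below evaluates both, using a 12-inch rectangular duct and the 14.8 cm circular duct measured later in this paper:

```python
import math

C = 3.0e8  # speed of light in free space, m/s

def rect_cutoff_hz(a_m: float) -> float:
    """Cutoff of the dominant TE10 mode in a rectangular duct.
    a_m is the largest cross-section dimension in meters: fco = c / (2a)."""
    return C / (2.0 * a_m)

def circ_cutoff_hz(r_m: float) -> float:
    """Cutoff of the dominant TE11 mode in a circular duct of radius r_m:
    fco = 1.8412 c / (2 pi R), where 1.8412 is the first root of J1'."""
    return 1.8412 * C / (2.0 * math.pi * r_m)

# 12-inch (0.3048 m) wide rectangular duct: cutoff ~492 MHz,
# so it passes the 900 MHz ISM band
print(f"12-inch rectangular duct cutoff: {rect_cutoff_hz(0.3048) / 1e6:.0f} MHz")

# 14.8 cm diameter circular duct (as in the Section 3 experiment):
# cutoff ~1.19 GHz, so it passes the 2.4 GHz ISM band
print(f"14.8 cm circular duct cutoff: {circ_cutoff_hz(0.148 / 2) / 1e9:.2f} GHz")
```

The circular-duct result matches the 1.19 GHz cutoff reported for the concept-demonstration measurement in Section 3.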

The principal elements of an HVAC RF distribution system are shown schematically in Figure 1.

One way to couple RF into the duct is by way of a probe antenna similar to those used in

ordinary coax-to-waveguide adapters. An example coupler of this type is described in more

detail in Section 4. A wire screen placed on one side of the probe could be used to accomplish

impedance matching as well as unidirectional radiation. Such a screen would allow air to pass

while reflecting RF energy.

Obstructions to RF such as cooling coils or fans may be occasionally encountered in the duct. In

these cases probe-to-coax couplers could again be used to receive signals on one side of the

obstruction and reradiate them on the other side. The couplers could be connected simply by

low-loss coax or by bi-directional amplifiers if a boost in signal strength is needed. Coverage in

corridors or spaces shielded from louvers could be realized by placing passive reradiators in the

sides of the ducts. For example, a probe coupler could be connected to a small external monopole

as shown in Fig. 1, or dielectric-filled slots could be cut in the side of the duct.

The various branches and splits in the ducts would function as waveguide power splitters.

Depending on the geometry, it may be necessary to insert irises made of wire screens to ensure

the desired power division at the branches. Dampers made of metal sheets to control the airflow

would need to be replaced with insulating sheets (such as plastic) to minimize reflection of the

RF. In addition, right-angle turns in the ducts may require the use of plastic rather than metal fins

to guide both the air and the RF around the bend.

Eventually, the RF would be radiated into rooms and offices through the air vents. This would

also require specially designed vents that disperse both the air and RF (commonly used vents

made with metal louvers would block the RF). Alternatively, probe couplers could be used just

inside the vent along with short segments of coax to deliver the signal locally to wireless

modems.

3. Concept Demonstration

An initial experiment was performed to test the concept. Swept RF transmissions were made

between two offices in one of the academic buildings on the CMU campus (Hamerschlag Hall).

The measurement points were separated by about 6 m and there were two intervening walls. It

was verified that the ducts into the two offices branch from a common trunk, thereby providing a

waveguide path. The frequency was swept between 500 MHz and 3 GHz under two experimental

conditions. In the first, simple dipole antennas tuned to 2.4 GHz were placed in the offices about

a meter away from the vents, but with the metal louvers in place. Figure 2 shows that the

transmission loss between the two rooms was between 60 and 70 dB over this frequency range.

In the second case, the dipoles were held against the front of the ducts with the metal louvers

removed. The smallest duct diameter was 14.8 cm (5.8 in) leading to a lowest-order cutoff

frequency of 1.19 GHz. As shown in Figure 3, the cutoff frequency is observed to be about 1.15

GHz, in good agreement with theory. Above this cutoff, the signals are stronger by about 20 dB

than in the direct propagation case. Note that no attempt was made to impedance-match or

otherwise optimize the coupling into the duct.

4. Example Coax-to-Duct Adapter for 915 MHz ISM Band

Since the experiment described in Section 3 did not attempt to optimize the coupling into the

duct, the design of an optimized probe coupler in an 8” x 12” rectangular duct was explored. The

coupler was designed using a capped end-section of the duct, but it should be possible to use a

wire grid as discussed earlier instead to allow airflow past the coupler. The design and

dimensions of the coupler are shown in Figure 4. Using this design, a return loss in excess of 20

dB was obtained over the entire 915 MHz ISM band (Figure 5). This means that more than 99%

of the incident RF power is radiated into the duct, and demonstrates that efficient broadband

coupling can be achieved.
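The 99% figure follows directly from the definition of return loss: the reflected power fraction is 10^(−RL/10). A minimal sketch of that conversion:

```python
def coupled_fraction(return_loss_db: float) -> float:
    """Fraction of incident power delivered into the duct (not reflected back
    toward the source), given the coupler's return loss in dB."""
    reflected = 10.0 ** (-return_loss_db / 10.0)
    return 1.0 - reflected

# A 20 dB return loss means only 1% of the incident power is reflected:
print(f"{coupled_fraction(20.0):.0%} of incident power radiated into the duct")
```

Any return loss above 20 dB across the band therefore guarantees that over 99% of the incident RF power enters the duct.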

5. Research Issues

Although the preliminary experiments described in this paper support the feasibility of the

HVAC RF distribution system, detailed research in a number of areas is needed to develop

systematic design procedures. In the following we briefly comment on several of these.

Characterization of the RF channel

Unlike conventional waveguide circuits, in most cases multiple waveguide modes will be above

cutoff in the ducts. This multimode environment will lead to delay spread much like multipath in

open propagation environments. Other sources of delay spread will be reflections from bends,

junctions, and end-plates. It may be possible, for example, to minimize reflections from end-

plates with the use of foam absorbing material. In any event, delay spread and coherence

bandwidth of such channels needs to be explored both theoretically and experimentally.

Coupling into multimode ducts

The existence of multiple propagating modes is a complication usually avoided in conventional

waveguide circuits. Designs and design rules are needed for realizing efficient couplers in the

various sizes and shapes of ducts that are commonly used, for each frequency band of interest.

Mode conversion and cross-polarization in multimode ducts

In the presence of multiple propagating modes, it is likely that the preferred strategy is to

optimize coupling into the lowest-order, or dominant, waveguide mode. However, since HVAC

ducts are not constructed with the same precision as actual waveguide circuits, mode conversion

is likely at joints, seams, protrusions, and other imperfections. In addition to creating delay

spread as discussed above, this mode conversion could lead to signal loss owing to excitation of

orthogonally-polarized modes, as well.

Power division at branches and tees

To obtain satisfactory power distribution throughout a large building, it will be necessary to be

able to determine and control the power division at branches and tees. This power division is also

complicated by the existence of multiple propagating modes. The use of irises made using wire

screens and grids should allow independent control of power division and airflow.

Alternate construction of dampers and corner fins using dielectrics

In existing HVAC systems, airflow is often controlled using adjustable metal dampers. Similarly,

metal fins are often used to guide the air around sharp bends in the ducts. Both of these

constructions potentially represent blockages for the RF. Designs should be explored using

dielectric materials that allow the desired control of the air but which have minimal RF

reflections.

Coupling around obstructions

As mentioned in Section 2, techniques are needed to couple around unavoidable obstructions in

the ducts. Designs for both active and passive coupling need to be explored. The simplicity of

passive probe couplers on either side of the obstruction connected by low-loss coax is attractive,

but bi-directional amplifiers may be needed in some instances, as well. Such techniques could

also be used to couple two otherwise unconnected duct systems.

Design of louvers for dispersing both air and RF

Common designs for louvers contain closely-spaced metal fins that would effectively block RF.

Designs are needed that will disperse both the air and the RF into the room. For example, louvers

made of dielectric materials could be made to minimize RF reflections while using conventional

shapes to disperse the air. Going a step further, metal components could be embedded in the

dielectric that are designed to scatter the RF uniformly into the space.

6. Summary and Conclusions

We have proposed an alternative technique for distributing RF communications signals in

buildings using the HVAC ducts. Because existing infrastructure is used and the ducts exhibit

losses that are low compared with direct propagation and leaky coax, such a system has the

potential to be lower in cost and more efficient than either conventional method. Experimental

results have been presented demonstrating duct-assisted propagation between offices in a

university building, and efficient coax-to-duct coupling. Key research issues associated with

developing practical systems have been briefly discussed.

Acknowledgements

We would like to acknowledge helpful discussions with B. Bennington and A. Hills regarding

the design of large wireless LANs and the potential advantages of HVAC RF distribution

systems. Appreciation is also expressed to D. Klein of McCarls Co. for information about the

construction of HVAC systems and for providing duct samples for use in our experiments.

References

1. H. Hashemi, “The Indoor Radio Propagation Channel,” Proceedings of the IEEE, Vol. 81,

No. 7, pp. 943-968 (1993).

2. Alex Hills and David B. Johnson, “A Wireless Data Network Infrastructure at Carnegie

Mellon University,” IEEE Personal Communications, Vol. 3, No. 1, pp. 56-63 (1996).

3. B. J. Bennington and C. R. Bartel, “Wireless Andrew: Experience Building a High-Speed

Campus-wide Wireless Data Network,” Proceedings of MobiCom’97, Budapest, Sept. 26,

1997.

4. Alex Hills, “Terrestrial Wireless Networks,” Scientific American, pp. 86-91, April 1998.

5. Dennis J. Burt, “In-Building Tricks: How to Design an In-Building Radio System,”

Communications, Vol. 31, No. 6, pp. 42, 44-47 (1994).

6. Robert E. Collin, Foundations for Microwave Engineering, 2nd Edition, Chapter 3,

McGraw-Hill, 1992.

VII

Predicting Propagation Loss from Leaky Coaxial Cable Terminated with an Indoor Antenna

Kirk Carter
AT&T Wireless Services, 1920 Corporate Drive, Boynton Beach, FL 33426

(561) 375-6501 or (561) 379-9639; [email protected]

Abstract

This paper addresses the propagation characteristics of an indoor antenna system consisting of one or more lengths of leaky coaxial cable, each terminated with a 0 dB-gain antenna.

A straight length of coaxial cable with regularly-spaced holes cut in the shielding (aka leaky coax) is treated as a Uniform Line Source antenna. First, a deterministic relation for the path loss through the air from the nearest point of the leaky cable to a half-wave dipole antenna (the coupling loss) is derived. Manufacturer’s data are used to add empirical corrections and “calibrate” a near field model. A far field model for the loss from a short, vertically mounted section of leaky coax is also derived. Loss from the terminating antenna is modeled using a log-slope equation. A simple way to estimate losses from various wall configurations is suggested. A design methodology for indoor distributed antenna systems is outlined.

Measurements from an indoor antenna system designed using this new model are compared with predictions. After accounting for fading and environmental losses, the mean error was less than 2 dB. The standard deviation of errors was less than 3 dB. Results indicate the model is accurate enough for indoor microcell design work.

1. Background

Designers of indoor antenna systems for cellular phone service must choose between two main types of distributed antenna system (DAS): discrete antennas fed via coax, or lengths of leaky coax acting as long, low-gain antennas. Leaky coax distributes RF energy much more evenly than discrete antennas. This is important because some cellular phones can overload at signals above -40 dBm. To prevent phone overload under discrete antennas, the power to each antenna must be limited, which severely limits its coverage area. Thus, a larger number of antennas is needed to achieve the design goals.

The most cost-efficient type of DAS for average-sized office buildings consists of one or more lengths of leaky coax, each terminated in an indoor antenna. A significant amount of RF energy remains on the center conductor at the end of a run of leaky coax. A terminating antenna makes good use of this remaining energy, extending the coverage well beyond the end of the leaky coax run.

2. Objective

The objective of this project was to derive a model for the mean propagation loss from a single leaky coax/antenna DAS, sufficiently accurate for the purposes of design engineers. That is, the model should allow designers to spot check signal strength at various points on a floor plan for sufficient average power, accurate to within 5 dB.

Page 87: Wireless Personal Communications: Emerging Technologies for Enhanced Communications

Fading characteristics would not be modeled. Rayleigh fading is compensated for in the customary fade margin applied to the link budget for cellular communications. Losses due to walls, furniture or crowds would be modeled separately as constant correction factors.

Only a straight length of leaky coax would be modeled. Changes in propagation characteristics due to bends in the cable were considered small effects which would probably be overshadowed by the reflections and diffractions from the clutter.

Leaky coax is available commercially in various sizes and with varying radiative characteristics. For the purposes of this work, however, only one type is considered: Andrew type RXP4-2 [1]. Of the many antennas available, only the EMS 360-45-00NA [2] was used.

The receive antenna was assumed to be a half-wave dipole, a good description of the antenna found on handheld phones operating in the 850 MHz cellular band.

3. Near Field Model for Leaky Coax

3.1 Deterministic Model

Leaky coax is very much like ordinary coax, but with holes cut in the shielding at regular intervals. Andrew’s documentation [3] states that the holes don’t radiate as antenna apertures. They cause currents to be excited along the outer conductor. Therefore a straight run of leaky coax can be modeled as a Uniform Line Source (ULS) antenna.

The far field boundary of such a long line source is much further away than the portable phone at its typical operating distance, so a near field treatment is needed. However, the leaky coax is always in the far field of the phone’s antenna.

The Friis transmission formula assumes that the transmit and receive antennas are in each other’s far fields, and that the power from the transmitter is spread over a spherical surface [4]. But in the near field of a ULS of length L, the power is spread over the surface of a cylinder of radius r:

where DT is the directivity of the transmitting antenna. In the near field, no simple gain pattern (upon which directivity depends) has formed. We model this as an isotropic source, for which DT is 1.

Phones are most commonly used while held at an angle, which orients the receive antenna’s main beam more or less toward some part of the distributed antenna system (DAS). To keep the model simple, we assume the phone antenna is always optimally oriented, or at least that environmental reflections will make up for any deviations from the ideal. Thus, the directivity of the receive antenna can be written


Substituting the directivity of a half-wave dipole, 1.64, and (2) into (3), we have a relation for the loss between a long ULS and a half-wave dipole in the near field of the ULS:
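The equation referenced here appeared as an image in the original and did not survive extraction. Under the cylindrical-spreading assumption just described, the relation presumably takes a form like the following hedged reconstruction (not the author's exact equation):

```latex
% Power density on a cylinder of radius r around a line source of length L:
S(r) = \frac{P_T D_T}{2\pi r L}
% Received power is the density times the dipole's effective aperture
% A_e = D_R \lambda^2 / 4\pi; with D_T = 1 and D_R = 1.64:
\frac{P_R}{P_T} = \frac{D_T D_R \lambda^2}{8\pi^2 r L}
               = \frac{1.64\,\lambda^2}{8\pi^2 r L}
```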

3.2 Empirical Corrections

3.2.1 Coupling Loss

Leaky cable makers provide tables of mean coupling loss for each type of cable at various frequencies, tested at 20 radial feet from a 65 foot length of cable [5]. This is the loss from the center conductor of the cable to the receiving antenna, after the variation due to internal loss is subtracted out.

Andrew’s coupling loss vs. frequency data for RXP4-2 at 20 feet [6] was fitted to a polynomial curve. Variation in coupling loss across the cellular band was about 0.3 dB, so a constant adjustment for the center frequency was made. For RXP4-2, the empirical constant CE was calculated as 8.77.

3.2.2 Internal Loss

Like all coaxial cable, leaky coax exhibits internal loss that varies with frequency. Unlike ordinary coax, its internal loss also varies with proximity to concrete, steel or other RF-opaque building materials. Internal loss is usually specified for leaky cable installed with a 2” standoff from the mounting surface.

The curve-fitting approach was again used to obtain the internal loss per foot of RXP4-2. Across the cellular band, changes in internal loss were small enough to ignore. The empirical constant for internal loss was set at 0.069.

3.2.3 End effects

Little is known about how the radiation pattern changes near and beyond the end of the leaky coax. As a practical matter it’s less important when the cable is terminated in an antenna than when an absorptive terminator is used. The power from the antenna dominates any end effect. For simplicity in the model, a step function is applied at positions beyond the end of the leaky coax run to increase the propagation loss by 40 dB.

3.3 Final Form of Near Field Model

For convenience, the distance units are converted to feet, the wavelength is expressed as frequency in MHz, and the final result is average received power in dBm. Thus,

is the predicted power at the phone’s position, where

is the loss from the driven end of the center conductor to the phone [(4) in dB form], and where


is the radial distance from the phone to the nearest point of the horizontal segment of leaky coax, and where the argument definitions are as shown in Table 1.
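The final-form equations (5)-(7) appeared as images and are missing from this extraction. Purely to illustrate how the stated pieces combine — internal loss of 0.069 dB/ft along the cable, the cylindrical-spreading loss, the empirical coupling correction CE = 8.77 dB, and the radial distance to the nearest point of the run — a sketch might look like the following. The function name and the exact way the terms are assembled are assumptions, not the paper's equations:

```python
import math

def near_field_rx_dbm(p_in_dbm, x_ft, y_ft, h_ft, run_len_ft,
                      f_mhz=881.5, alpha=0.069, ce_db=8.77):
    """Hypothetical assembly of the near-field model (not the paper's
    exact equations): power at a phone x_ft along the run, offset y_ft
    horizontally and h_ft vertically from the leaky coax."""
    lam_ft = 983.6 / f_mhz                 # wavelength in feet (c ~ 983.6e6 ft/s)
    r_ft = math.hypot(y_ft, h_ft)          # radial distance to nearest cable point
    # cylindrical spreading between a line source of length L and a half-wave dipole
    spread_db = -10 * math.log10(1.64 * lam_ft**2 /
                                 (8 * math.pi**2 * r_ft * run_len_ft))
    internal_db = alpha * x_ft             # cable loss up to the phone's position
    return p_in_dbm - internal_db - spread_db - ce_db
```

As expected of any such model, the predicted level falls with both the radial distance and the distance along the run.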

4. Far Field Model for Leaky Coax

Rather than use a lossy right-angle connector for attaching the antenna to the end of the leaky coax run, often the cable is curved to point straight down, and a straight connector installed. The currents excited on the short vertical section do not add to those of the longer section to form a longer line source. If the vertical section of the cable is a wavelength or longer, it has sufficient radiation efficiency on its own to act as an antenna with a reasonable gain.

4.1 Deterministic Model

4.1.1 Isotropic Loss

In the far field, transmitted power is spread over a spherical surface. Directivity of the short ULS is taken as 1 for the purposes of deriving the loss equation. Substituting this into the Friis transmission equation gives:

4.1.2 Directive Gain

Directive gain is defined as the ratio of radiation intensity in a given direction to the average radiation intensity – the gain relative to an isotropic point source. In general, directive gain is:

where Ω_A is the beam solid angle and P is the power pattern [7]. In general, the beam solid angle and power pattern are functions of the phase shift per unit length. The velocity factor of the leaky coax cable is translated into phase shift using
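The conversion equation itself is missing from this extraction; in standard notation, with velocity factor v_f defined so that the phase velocity on the cable is v_f c, it would read (a reconstruction, not the paper's exact expression):

```latex
% Phase shift per unit length along the cable, for free-space
% wavelength \lambda_0 and velocity factor v_f:
\beta = \frac{2\pi}{v_f \lambda_0} = \frac{2\pi f}{v_f c}
```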


Beam solid angle is then

The power pattern function (12) is based on the pattern factor for a ULS [8], which assumes a uniform current over the length of the line source. The resistive losses in the outer conductor of most leaky coax cable are very small over the short lengths under consideration here.

4.2 Empirical Corrections

Leaky coax manufacturers do not supply coupling loss data for the far field, so no CE was added to the far field model. Internal losses over a short length of leaky coax are small enough to ignore.

The gain of any antenna depends partly on its efficiency. In this case, no significant power is reflected back to the transmitter from the bend in the cable, which improves efficiency. However, we add an efficiency factor, η, to the directive gain as an empirical correction, knowing that any antenna shorter than several wavelengths will be less than 100% efficient.

4.3 Final Form of Far Field Model

For convenience, the units are converted to feet and degrees, and the wavelength is expressed as frequency in MHz. The power pattern is rotated to make 0° the broadside direction. The final result is received power in dBm.

where the first two terms find the power on the center conductor at the top of the short vertical segment, and where


is the gain in dBi, where Ω_A is the beam solid angle (11), and where the power pattern P is:

and where the declination angle between the center of the short vertical segment and the phone is:

and where the loss function is [(8) in dB form]:

with radial distance

and where the arguments for the far field model are defined in Table 2.

5. Antenna Propagation Loss Model

The antenna loss model is based on the Keenan-Motley model for indoor propagation loss [9]. Because wall and floor attenuation must be applied after the contributions from all three elements of the DAS are power summed, only the first term of the Keenan-Motley equation is used here.


5.1 Final Form of Antenna Propagation Model

In order to calculate the radial distance from the antenna to the phone, the horizontal section of leaky coax is assumed to be arranged in a straight line to the antenna location. This is adequate for most purposes, because normally the leaky coax will not curve back upon itself such that the signal from the antenna dominates anywhere but near the end of the run.

The power received from the antenna alone, in dBm, is

where the gain function was found by curve-fitting the vertical antenna pattern (20).

The gain pattern of the EMS 360-45-00NA antenna is omnidirectional in the horizontal plane. The declination angle in degrees is found using

The arguments are as described in Table 3.

6. Estimating Wall Loss

The Keenan-Motley approach to estimating losses due to walls is to count the number of walls between the phone and the antenna, then multiply by the attenuation of one wall. When the DAS consists of more than one antenna, or a run of leaky coax, a more involved method is required. Some fraction of the DAS may be blocked from a particular phone location by one wall, another portion by two walls, and so on.


Wall loss to a given phone position in a building may be quantified to a useful degree of accuracy using:

where n is the number of walls blocking the phone’s “view” of the DAS, b is the percent of the DAS blocked by the n walls, and w is the attenuation of each wall. b may be estimated by looking at the floor plan on which the DAS is drawn. Care should be taken to find all the possible paths from the DAS to the phone position, and to apply (22) to the most dominant path.
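Equation (22) itself was a figure and is missing here. One plausible reading, in which each blocked fraction of the DAS is attenuated by its wall count and the fractions are then power-summed, is sketched below; the power-sum form is an assumption, not the paper's stated equation:

```python
import math

def wall_loss_db(blocked_fractions, w_db=4.0):
    """blocked_fractions: list of (b, n) pairs, where fraction b of the
    DAS is blocked by n walls of w_db attenuation each. Returns an
    effective wall loss in dB (hypothetical power-sum reading of (22))."""
    total = sum(b * 10 ** (-n * w_db / 10.0) for b, n in blocked_fractions)
    return -10 * math.log10(total)
```

For example, half the DAS behind one drywall partition and half behind two gives roughly 5.6 dB, between the one-wall and two-wall extremes.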

If the leaky coax run crosses many perpendicular walls, and the points of interest are blocked only by these walls, (22) is very tedious to apply. The wall loss in this situation is approximately that of one wall blocking the entire DAS.

Measurements have shown that 4 dB is a good estimate of wall attenuation if the wall is constructed of wood or metal studs and gypsum board. Concrete firewalls and floors can attenuate RF much more, with 12 dB a reasonable estimate.

7. Indoor DAS Design Methodology

7.1 Required Information

Designing an indoor DAS with leaky coax and terminating antennas requires a scaled floor plan showing walls and wall types, cubicles and areas where large crowds may gather. Information on the radio equipment (range of RF power settings, uplink gain and limits, location on the floor plan) and desired areas of coverage are also required. The height of the false ceiling, if any, and the height of the beams or ducts in the ceiling are needed for deciding where the DAS might be installed, and for finding the losses to each room. The locations of lighting fixtures, sprinklers, etc. are generally not important.

7.2 Approach

The design process proceeds in repeating cycles of proposing a DAS layout and checking the resulting predicted signal strength against the design limits. Phones directly under the antenna should see no more than -40 dBm. The weakest signal level should be at least 17 dB above the strongest noise or interfering signal, and at least 5 dB above the phone’s sensitivity. The first proposed layout should be the least expensive in materials and installation costs, the second slightly more elaborate, and so on. Only a few critical points on the floor plan need to be checked in each iteration. These will usually include rooms or stairwells with concrete walls, rooms in the center of a large block of small rooms, and rooms especially distant from the nearest section of the DAS.
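The three numeric limits quoted above can be captured in a small spot-check helper; the function itself and the sensitivity figure used in the example are illustrative, not from the paper:

```python
def meets_design_limits(spot_levels_dbm, interference_dbm, sensitivity_dbm):
    """Check predicted spot levels against the design rules: no phone
    sees more than -40 dBm, and the weakest spot is at least 17 dB above
    the strongest noise/interferer and 5 dB above phone sensitivity."""
    strongest = max(spot_levels_dbm)
    weakest = min(spot_levels_dbm)
    return (strongest <= -40.0 and
            weakest >= interference_dbm + 17.0 and
            weakest >= sensitivity_dbm + 5.0)
```

A layout failing any one check would be revised and re-checked in the next design cycle.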

7.3 Output

The result of the design process is a floor plan with the DAS included and the exact height and location where the antennas and leaky coax should be installed. A special note must be included about the proximity of the leaky coax to concrete or steel beams, if the leaky coax runs along them. The point where the bend should begin, and the exact length and height of the vertical section of leaky coax, should be specified.


8. Test Results

8.1 Installation Description

A DAS designed using these propagation models was installed and extensively tested in a single-story office building in Palm Beach County, Florida (Figure 1). All the interior walls were of drywall construction.

Two lengths of RXP4-2 were mounted 5.5 inches below a steel beam in the center of the building, running east and west from a splitter fed by the radio equipment. Table 4 lists the measured parameters of the DAS. Vertical heights are from the test antenna, 5.8 ft. from the floor. All elements of the DAS were mounted above an ordinary false ceiling.

8.2 Data Collection and Processing

A series of 178 data collection positions was established, nominally 3 feet apart and arranged in straight lines as shown in Figure 1. At each point, an LCC PenCat [10] recorded ten successive measurements of signal strength received at an antenna mounted at the end of a 5.8 foot pole held vertically, at least 3 feet away from bodies or furniture. The ten readings were averaged to reduce the effect of fading in time. The effects of fading in space were reduced by passing the signal strength and horizontal distances through Mathcad’s [11] supsmooth function, a symmetric k-nearest-neighbor linear least-squares fitting procedure.
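The supsmooth step can be approximated by a fixed-window symmetric k-nearest-neighbour local linear fit. The sketch below shows the idea only; supsmooth itself selects the window adaptively, so this is an approximation of the procedure, not Mathcad's implementation:

```python
def knn_linear_smooth(x, y, k=3):
    """At each point, fit a least-squares line to the up-to-2k+1 nearest
    neighbours (window clipped at the ends) and evaluate it there."""
    out = []
    n = len(x)
    for i in range(n):
        lo, hi = max(0, i - k), min(n, i + k + 1)
        xs, ys = x[lo:hi], y[lo:hi]
        m = len(xs)
        sx, sy = sum(xs), sum(ys)
        sxx = sum(v * v for v in xs)
        sxy = sum(u * v for u, v in zip(xs, ys))
        denom = m * sxx - sx * sx
        if denom == 0:                  # degenerate window: fall back to mean
            out.append(sy / m)
        else:
            slope = (m * sxy - sx * sy) / denom
            intercept = (sy - slope * sx) / m
            out.append(slope * x[i] + intercept)
    return out
```

On fading data, this has the desired effect of smoothing fast spatial variation while tracking the local mean.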


The power level at each splitter port and at the far end of each run of leaky coax was measured with a Marconi RF Power Meter 9670 [12]. Actual internal loss was calculated from these readings.

8.3 Empirical Parameter Settings

The log-slope factor, α, was set to 21 for areas within line of sight (LOS) or near line of sight of the EMS antenna. For areas beyond many walls, cubicles and furniture, α was set to 25. This approximated the additional loss due to environmental factors for the signal from the antenna alone.

The efficiency of the vertical section of leaky coax, η, was set to 0.95 for the east run, and to 0.90 for the west run. These were educated guesses only, based on Lv relative to a wavelength.

Environmental losses were added to the propagation loss predicted by the models, using the method described in section 6.

8.4 Parallel Data Series

Three data series were taken parallel to the leaky coax runs (Figure 1). The time-averaged data points, the spatially-smoothed data line, and the signal strength predicted by the propagation model and wall loss estimation method are shown for these parallel data series in Figures 2-4. Table 5 lists the mean and standard deviation of errors from the prediction to the spatially-smoothed data line.

Lv for data series 1 was 1.875 ft., because the vertical section was installed at an angle, and the test antenna only “saw” 1.875 feet of the total 2.4 foot length.


8.5 Perpendicular Data Series

Five series of positions running perpendicular to the DAS were measured (Figure 1). The data was treated as described in section 8.4. Data and prediction traces are shown in Figures 5-9. Table 6 lists the error statistics for these series.


9. Summary

The effects of fading are difficult to remove from the data. However, testing in an actual application, rather than on an antenna range, has great value.

Assuming that time averaging and spatial smoothing neutralize the fading component in the data, the error statistics show that the model predicts average signal strength accurately enough for design purposes. In the worst case, two standard deviations of error is 4.66 dB. About 97% of the errors should fall within 5 dB of the mean error in the worst case, and close to 100% in the nominal case.
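The "about 97%" figure is consistent with a Gaussian error assumption, as a quick check shows: with two standard deviations equal to 4.66 dB, the probability of a zero-mean Gaussian error falling within 5 dB of the mean is

```python
import math

sigma = 4.66 / 2.0                                # worst-case std. dev. in dB
frac_within_5db = math.erf((5.0 / sigma) / math.sqrt(2.0))
# close to 0.97 for a Gaussian error of this spread
```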

Wall losses are the most difficult aspect of the prediction to calculate, accounting for a large proportion of the errors. Designers may want to widen the signal strength design margin beyond 5 dB if the wall arrangement is especially complex, or if the DAS bends around the phone position.

10. Conclusion

Indoor DAS designers can use the models and methods outlined above with confidence that adjustments to the original design will be needed after installation in only the rarest instances. Though the methods range from deterministic equations to rules of thumb, the tests show they are fully adequate for the purpose.

11. References

1. Catalog 37, 1997, Andrew Corporation, http://www.andrew.com, (708) 349-3300, p. 713

2. EMS Wireless, http://www.emswireless.com, (770) 362-9200

3. Andrew, Catalog 37, p. 731

4. W.L. Stutzman and G.A. Thiele, Antenna Theory and Design, John Wiley & Sons, 1981, p. 24

5. Andrew, Catalog 37, p. 730

6. Andrew, Catalog 37, p. 713

7. Stutzman and Thiele, p. 29

8. Stutzman and Thiele, p. 176

9. A. Motley and J. Keenan, “Radio Coverage in Buildings”, British Telecom Tech. J., vol. 8, no. 1, Jan. 1990, pp. 19-24.

10. LCC International, Inc., http://www.lcc.com, (800) 522-9670

11. Mathcad 7 User’s Guide, Mathsoft Inc., http://www.mathsoft.com, 1997, p. 309

12. Marconi Instruments Ltd., http://www.marconi-instruments.com, (817) 224-9200


VIII. Building Penetration and Shadowing Characteristics of 1865 MHz Radio Waves

Manish Panjwani and Gary Hawkins, CEng. MIEE
LCC International, Inc., 7925 Jones Branch Dr., McLean, VA 22102, USA; 703-873-2000
[email protected], [email protected]

Abstract

An in-building measurement campaign was conducted to characterize building properties

pertaining to radio frequency propagation and losses for personal communication system

(PCS) applications. Measurements were made in and around seven buildings in urban

environments in three cities in The Netherlands. The mean building shadowing loss for

all buildings, measured on the ground floor, was found to be 12.4 dB with a standard

deviation of 4.6 dB. The corresponding building penetration loss values had a mean of

19.6 dB with a standard deviation of 5.4 dB. It was found that, on average, building

penetration loss exceeded shadowing loss by a factor of 1.64 and the correlation

coefficient between these parameters was 0.87. It therefore seems that an estimate of a

building's penetration loss can be derived purely from its shadowing loss, which is

typically much easier to measure. The establishment of such a relationship should prove

very useful in the specification of building penetration values required in link and power

budgets for planning cellular systems, as well as for microcellular modeling and design

applications.

1. Introduction

As part of designing successful cellular and PCS systems, a good estimate of

building penetration loss is required for power and link budget calculations. This is

necessary to ensure that adequate in-building signal levels are provided without over-

designing the system. These estimates are usually assumed based on prior experience or

determined from comprehensive measurements in areas of the wireless market being

designed. If field measurements are required, extensive planning, particularly with


respect to interior building access and logistics, is necessary. On the other hand,

measurement of building shadowing is typically much easier since access is only required

around the building property and not within the building itself. The work presented in this

paper suggests that building shadowing loss can easily and accurately be used to estimate

penetration loss. In addition to system design, this relationship might also prove useful in

microcellular modeling applications for built-up areas.

This paper is organized as follows. The equipment used, selection of transmitter and test

building candidates, and the data collection process are described in Section 2. The data

analyses procedures are highlighted in Section 3, with the results being presented in

Section 4. Finally, Section 5 provides the conclusions of this study and suggestions for

future work.

2. Test Procedure and Measurement Technique

A typical measurement survey was planned to characterize building penetration and

shadowing losses. A total of seven buildings were measured in three Dutch cities

(Utrecht, Amsterdam, and Rotterdam). These included five office buildings, one

residential tower, and one high-school building. The average signal strength, penetration

loss, and building shadowing loss were determined from the data gathered at each site.

A. Equipment Used

LCC’s TX-1500™ transmitter was used to transmit a continuous wave signal at a

frequency of 1865.8 MHz (corresponding to channel 815 in the DCS-1800 band). A 6

dBi omnidirectional antenna (Scala model 731621 with an 11° half-power vertical

beamwidth) was used to obtain an EIRP of 48.6 dBm (72 W). LCC’s DCS-1800 in-

building measurement tool, MSAT-2000™ (Micro System Analysis Tool), was used as

the receiver for this project. The receiver instantaneously measures the received power

over a range of –47 to –115 dBm within a 22 kHz bandwidth. The

receive antenna was a 3/4-wavelength monopole whip with a gain of 0 dBi.
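The two EIRP figures quoted above are mutually consistent, as a quick dBm-to-watts conversion confirms:

```python
eirp_dbm = 48.6
eirp_watts = 10 ** (eirp_dbm / 10.0) / 1000.0   # dBm -> mW -> W
# roughly 72 W, matching the value quoted in the text
```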


The MSAT-2000 is ideally suited for the collection and display of the received signal

strength (amongst a host of other parameters) in pedestrian environments. This tool is

easily integrated with LCC’s PenCAT™ (Pen based Collection and Analysis Tool)

software which automatically collects data for real time display and post-processing

analysis. PenCAT runs on a hand-holdable, pen-based portable computer. The

MSAT/PenCAT combination is designed to be carried by a person who walks about and

makes measurements in pedestrian environments. The receive antenna sits vertically on

the shoulder of the person (which, in this case, was at a height of 1.5 m).

B. Transmitter Site and Test Building Selection

Five different transmitter locations ranging from 11 to 30 m in height were chosen in

representative pedestrian environments in three cities. All measurements used for the

analyses were made in and around modern buildings with a regular architectural style

such as offices and residential buildings that are expected to house concentrations of

potential customers. The distance from the buildings tested to their respective transmitter

sites ranged between 200 and 1100 m. It is known that signal loss also depends on the

building orientation with respect to the transmitter [1]. Buildings were selected such that

their front facades were approximately perpendicular to the direction of the transmitter so

that orientation effects could be ignored. A list of the buildings measured in each city,

their usage, line-of-sight (LOS) or no line-of-sight (NLOS) conditions to the transmitter,

and transmitter height used is given in Table 1.


C. Data Collection

Once building floor plans had been prepared, the test sites were surveyed and the

positions of the measurement runs were determined and marked on the floor plans.

Measurements inside the buildings were made in as many areas as possible in the central

portions since signal strength is typically lowest near the center of a building.

Measurements were first made in these interior areas to guarantee that signal strength

levels were sufficiently above the receiver noise floor for meaningful analyses. Signal

strength measurements were then made around the perimeter of all buildings on the

ground floor level. In order to ensure more accurate results for the local loss, the inside

measurement runs were chosen to be as parallel as possible to the outside runs. The local

mean of the received signal can be assumed to be homogeneous within a distance of

20-40 λ (where λ is the wavelength of the transmitted signal) [2]. At 1865 MHz, this is

approximately 3-7 m which is comparable to the dimensions of a typical room or corridor

in many residential and office buildings. Measurement runs of this order, therefore, were

used as the averaging window during data collection and analyses.
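The 3-7 m figure follows directly from the 20-40 λ rule at the test frequency:

```python
c = 2.998e8                       # speed of light, m/s
f = 1865.8e6                      # test frequency, Hz
lam = c / f                       # ~0.16 m
window_m = (20 * lam, 40 * lam)   # ~(3.2 m, 6.4 m), i.e. roughly 3-7 m
```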

The receiver setup was carried by a person along the predetermined paths and each path

was traversed once. The orientation of the receive antenna was maintained in the general

direction of the transmitter to minimize any effects of body shadowing. Markers were

placed on the floor plan drawings and were used to correlate data files collected for each

measurement run. Other pertinent data, such as a description of the building and the area

and any other special information were also noted.

3. Data Processing

The data file corresponding to each measurement run was first averaged using PenCAT

to obtain a mean signal strength and standard deviation for that run. The difference

between the mean signal strength measured near the center of the building at the ground

floor level and the mean signal strength measurement outside the building on the ground

floor was computed for corresponding inside and outside runs. All of the run differences

were then averaged to produce a local building penetration loss value. The local building


shadowing loss was computed in the same manner by comparing corresponding outside

runs in front of the building with those behind it. This interpretation of building

penetration and shadowing is similar to that often quoted in the literature [1-4]. The

measurement runs performed along the sides of the buildings were not used in this study.

Also, the maximum variation in transmitter antenna gain over the footprint of the

buildings tested was calculated for each building based on its relative 3-d geometry with

its transmitter. This gain variation was estimated to be less than 1 dB on the ground level

and was hence ignored in the analyses.

4. Results

The measured shadowing and penetration loss values for each building are shown in

Table 2. The mean shadowing loss for all buildings was found to be 12.4 dB with a

standard deviation of 4.6 dB. The corresponding penetration loss mean was 19.6 dB with

a standard deviation of 5.4 dB. The shadowing loss measured is somewhat higher than

that reported in [5]. However, the penetration losses reported are much higher than those

reported in [3-6] and those referenced in [6] at these frequencies. This might be

attributable to the heavier brick construction, smaller windows, or the footprint of the

Dutch buildings tested as compared to buildings tested by other researchers. Additional

measurements, as mentioned in Section 5 below, will be required to explain these

differences.

It is interesting to note the degree of association between building shadowing loss (BSL)

and building penetration loss (BPL). The correlation coefficient between these

parameters was found to be 0.87, implying that the two variables are highly correlated.

An approximate relationship can therefore be defined as:

BPL (dB) = M × BSL (dB)

where the coefficient M is a constant. The observed values of Mi, where i is the building

number, are also shown in Table 2 for each building. For the data set reported in this

paper, the mean value of M for all buildings was found to be 1.64 with a standard

deviation of 0.32.
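The ratio-based estimate can be sketched as follows; the building data in the example are hypothetical, standing in for the Table 2 values:

```python
def fit_scale_and_predict(bsl_db, bpl_db):
    """Estimate M as the mean of per-building ratios Mi = BPL/BSL and
    return (M, predicted BPL values M * BSL)."""
    ratios = [p / s for p, s in zip(bpl_db, bsl_db)]
    m = sum(ratios) / len(ratios)
    return m, [m * s for s in bsl_db]

# Hypothetical example (not the paper's Table 2 data):
m, pred = fit_scale_and_predict([10.0, 12.0, 15.0], [16.0, 20.0, 24.0])
```

Comparing `pred` against the measured penetration losses gives per-building prediction errors of the kind reported in the next paragraph.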


The error in predicting penetration loss for a given building can be obtained by replacing

M by Mi in the above equation. A comparison of the measured and predicted penetration

losses yielded an absolute error mean of 15.1 %. The maximum error obtained was 6.1

dB (22.9 %) for Building 3. Figure 1 shows the correlation between building penetration

and shadowing loss and the prediction error.

This analysis was also performed on subsets of the above data for office buildings,

buildings with NLOS to the transmitter, and for different cities. No statistically valid

differences were observed for these classifications because of the small size of the sample

set. However, the mean of the absolute percentage error in each of the above cases was

noted to be less than that of the mean for all buildings.

5. Conclusions and Future Work

This paper presented the results of a measurement campaign to measure building

penetration and shadowing losses in seven buildings in three Dutch cities. The data

collection and reduction methods were similar to those used by other researchers for

building penetration and shadowing measurements.


Figure 1: Building penetration and shadowing loss and prediction error for various buildings.

This study highlighted a very close correlation between a building’s penetration and

shadowing loss from which a simple relationship was derived. While preliminary, these

results are very interesting and would allow building penetration loss to be accurately

estimated from shadowing loss measurements which are relatively easy to make.

Additional measurements to verify this relation, and to attempt to relate these parameters

in terms of the relative transmitter-receiver geometry, transmitted frequency, type of

building, its size, and the clutter environment surrounding the building are being planned.

Acknowledgements

We would like to thank Rashid Iqbal and Pedro Montez for help with the data collection

process and Telfort B.V. for providing financial and technical support.


References

[1] J. D. Parsons, The Mobile Radio Propagation Channel, New York, Halsted Press,

1992.

[2] W. C. Y. Lee, Mobile Cellular Telecommunications, 2nd ed., McGraw-Hill, Inc., 1995.

[3] W. J. Tanis and G. J. Pilato, "Building penetration characteristics of 880 MHz and 1922 MHz radio waves," 43rd IEEE Veh. Technol. Conf. Proc., pp. 206-209, May 1993.

[4] L. P. Rice, “Radio transmission into buildings at 35 and 150 mc,” Bell Syst. Tech. J.,

Vol. 38, No. 1, pp. 197-210, 1959.

[5] A. F. Toledo and A. M. D. Turkmani, "Propagation into and within buildings at 900, 1800 and 2300 MHz," IEEE Veh. Technol. Conf. Proc., pp. 633-636, May 1992.

[6] A. Davidson and C. Hill, "Measurement of building penetration into medium buildings at 900 and 1500 MHz," IEEE Trans. Veh. Technol., Vol. 46, No. 1, pp. 161-168, Feb. 1997.


IX
Maximizing Carrier-to-Interference Performance
by Optimizing Site Location

James Shi and Yaron Mintz
Ericsson Inc.
740 E. Campbell Rd., MP-9, Richardson, TX
[email protected], [email protected]

Abstract

In a capacity-limited environment, tight frequency reuse patterns and overlapping coverage result in an interference-limited cellular system. Power control alone may not be sufficient to control interference. Conventional site location based on a hexagonal grid assumes that traffic is uniformly distributed in a cell. Non-uniform traffic creates undesired variation in the carrier-to-interference (C/I) performance for a grid-based design. In this paper, the C/I performance is maximized by optimizing site location according to traffic. It is shown that by moving the cell site closer to the center of traffic, both the base station and the mobile station may transmit at a lower power and thus generate less interference to other cells. A two-slope propagation model is used for the analysis. Two methods for the determination of the center of traffic are considered. Results are presented for different levels of traffic concentration and site displacement from the original grid.

1 Introduction

In capacity-limited cellular systems, tight frequency reuse patterns are used to achieve high capacity, and overlapping coverage from multiple cell sites is used to enhance coverage reliability. However, tight reuse patterns and overlapping coverage result in an interference-limited system. Power control has been used extensively to reduce interference. However, power control alone may not be very efficient, because a mobile station close to the cell border usually transmits at full power, generating interference to co-channel sites. In an urban environment where the cell radius is small, a mobile station may often be in line-of-sight to a co-channel site. Therefore, it is very important to control radio energy so as to maximize the carrier-to-interference (C/I) performance.

The percentage of static and slow-moving traffic from hand-held mobiles has increased

because more traffic is generated from inside buildings such as offices, convention centers and


shopping malls. Conventional site location based on a hexagonal grid works well when traffic is

uniformly distributed. Non-uniform traffic creates undesired variation in the C/I performance for

a grid-based design. In this paper, we consider maximizing the C/I performance by optimizing

site location according to traffic.

Previous work [1] has considered the effect of a non-ideal cellular grid (irregular base station locations) and a non-uniform distribution of traffic. Based on simulation results, it was concluded that non-ideal positioning of the base stations has negligible impact on C/I performance under a uniform distribution of traffic. It was also suggested that a non-uniform distribution of traffic has little impact on the average C/I performance. However, the authors cautioned that for non-uniform traffic the distribution of C/I problems would no longer be uniform; that is, some areas would be more prone to interference problems. The issue of deliberately positioning base stations irregularly to match non-uniform traffic has not been studied. This paper analyzes the impact on C/I performance, both in the average sense and in the worst case, of irregularly positioning base stations according to the distribution of traffic.

2 Analysis

The C/I performance of a cellular system is directly related to its speech quality. A criterion for the design of the cellular system is to maintain an average C/I at the cell border. If the traffic is uniformly distributed in the cell coverage area, an ordinary grid pattern can be used to position the cell site. The size of the cell in a capacity-limited environment is determined by the traffic density and the number of channels at the cell site. In this case, moving the cell site away from the grid may degrade the C/I performance in certain areas. When the traffic is not uniformly distributed, we show later by analysis and numerical results that it is advantageous to locate the cell site at the center of traffic. By moving the cell site closer to the traffic, both the base station and the mobile station may transmit at a lower power and thus generate less interference to other cells.

Several assumptions are necessary for the analysis. To calculate the signal and interference levels, a propagation model is needed. The two-slope log-linear function is commonly used [2, 3]. The two-slope model divides the coverage area into two regions, separated by a breakpoint. The model is defined by the breakpoint and the two slope indices before and after it. We use a slope index of 2 before the breakpoint to characterize line-of-sight (LOS) propagation near the cell


site and a slope index of 4 after the breakpoint for non-LOS propagation. The two-slope model is more realistic than the single-slope model. The breakpoint is calculated in [2] as

    R_b = 4 h_B h_M / λ,                                      (1)

where h_B is the base station antenna height, h_M is the mobile station antenna height, and λ is the wavelength.
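The two-slope model just described lends itself to a short sketch. In the snippet below, the function names are ours and the breakpoint expression 4·hB·hM/λ is the commonly used two-ray form; the slope indices 2 and 4 follow the text.

```python
import math

def breakpoint_distance(h_base_m, h_mobile_m, freq_hz):
    """Two-ray breakpoint distance, 4*hB*hM/lambda (commonly used form)."""
    wavelength = 299_792_458.0 / freq_hz
    return 4.0 * h_base_m * h_mobile_m / wavelength

def two_slope_pathloss_db(d_m, d_break_m, loss_at_break_db, n1=2.0, n2=4.0):
    """Two-slope log-linear pathloss: slope index n1 before the breakpoint
    (LOS) and n2 after it (non-LOS); continuous at the breakpoint."""
    if d_m <= 0.0:
        raise ValueError("distance must be positive")
    n = n1 if d_m <= d_break_m else n2
    return loss_at_break_db + 10.0 * n * math.log10(d_m / d_break_m)
```

With a 500 m breakpoint and 96 dB of loss there (the values used later in the numerical section), the loss at 1 km comes out as 96 + 40 log10(2) ≈ 108 dB.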

The regular grid is defined as the hexagonal grid in which frequencies are assigned and reused. As a result of frequency reuse, each site has six nearest co-channel sites in the first layer, all at an equal (reuse) distance. In this paper, we only consider interference from the six co-channel sites in the first layer. We consider the worst case in which the system is fully loaded, that is, the same channel is being used in all six co-channel sites. For simplicity, we consider cell sites with omni-directional antennas, but the analysis can easily be extended to sectored cell sites. In this paper, the communication link from the base station to the mobile station is referred to as the forward link. The link from the mobile station to the base station is referred to as the reverse link. Fig. 1 shows an example of interference on the forward link and the reverse link. Only one pair of co-channel sites is shown.


The average forward link interference to MS1 is the signal from BS2 averaged over all possible locations of MS1. The average reverse link interference to BS1 is the signal from MS2 averaged over all possible locations of MS2. Because the difference in propagation loss between the forward link and the reverse link is negligible, the forward link interference in cell BS1 is equivalent to the reverse link interference in BS2. Therefore, we only analyze reverse link interference; the result also applies to forward link interference.

With power control, the transmit power is determined by the desired received signal level and the pathloss between the transmitter and receiver. The received signal level is constant with perfect power control. In practice, the range of power control is limited. Given transmit power P, propagation loss L, traffic distribution f, and cell area A, the average received carrier level at the current serving base station can be expressed as

    C = ∫_A (P / L) f dA.                                     (2)

Similarly, the interference level can be calculated as

    I = ∫_B (P / L') f dB.                                    (3)

To calculate the interference from a mobile in a co-channel cell to the current base station, the transmit power (P) is adjusted according to the received signal level and the pathloss in the interfering cell. However, the pathloss (L') in (3) is the loss between the interfering mobile and the current base station. In addition, the traffic distribution (f) and cell area (B) of the interfering cell should be considered. It is obvious that, to minimize the interference generated in the current cell, it is necessary to minimize the transmit power in all interfering cells.
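The averages in (2) and (3) can be approximated by Monte Carlo sampling of the traffic distribution. The sketch below is illustrative, not the authors' code: it assumes perfect power control clipped to a finite range, re-uses the two-slope model with the 500 m / 96 dB example parameters, and all function and variable names are ours.

```python
import math
import random

def two_slope_loss_db(d_m, d_break=500.0, loss_break=96.0, n1=2.0, n2=4.0):
    # Illustrative two-slope pathloss (parameters from the numerical section).
    n = n1 if d_m <= d_break else n2
    return loss_break + 10.0 * n * math.log10(max(d_m, 1.0) / d_break)

def avg_levels_dbm(sample_mobile, own_bs, victim_bs, target_dbm=-80.0,
                   p_min=-4.0, p_max=28.0, trials=20000, seed=1):
    """Average received carrier level at the serving base station, and the
    average interference this cell generates at a co-channel base station,
    with mobile power control limited to [p_min, p_max] dBm."""
    rng = random.Random(seed)
    c_mw = i_mw = 0.0
    for _ in range(trials):
        x, y = sample_mobile(rng)              # draw from traffic density f
        l_own = two_slope_loss_db(math.hypot(x - own_bs[0], y - own_bs[1]))
        l_vic = two_slope_loss_db(math.hypot(x - victim_bs[0], y - victim_bs[1]))
        p_tx = min(max(target_dbm + l_own, p_min), p_max)  # clipped power control
        c_mw += 10.0 ** ((p_tx - l_own) / 10.0)            # carrier at own BS
        i_mw += 10.0 ** ((p_tx - l_vic) / 10.0)            # interference at victim
    return 10.0 * math.log10(c_mw / trials), 10.0 * math.log10(i_mw / trials)
```

Feeding `sample_mobile` with, for instance, 50% uniform traffic plus a hotspot halfway to the border, and shifting `own_bs` toward the hotspot, reproduces the kind of comparison reported later in Table 2.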

So far, the calculation is based on the assumption that all base stations are located on the grid. When traffic is not uniformly distributed in the cell, it may be advantageous to locate the base station according to the traffic distribution. The concept of the center of traffic is introduced. We consider two methods for determining the center of traffic. The first method simply locates the center at the place with the maximum traffic density. The second method calculates the center of traffic in analogy to the center of gravity.


The first method moves the cell site to the area with the heaviest traffic load, while the second method takes the whole traffic distribution into account and attempts to maximize the average network performance. Depending on the traffic distribution, the center of traffic found by the first method may be far from the original grid and therefore hardly usable. In general, the second method achieves more robust performance.
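The two definitions can be sketched for a traffic density sampled on a set of points (a minimal illustration; the function name and sampling are ours):

```python
def center_of_traffic(points, weights, method="centroid"):
    """Place the site from a sampled traffic density.
    'peak'     -- the sample point with maximum traffic density (method 1);
    'centroid' -- the traffic-weighted center of gravity (method 2)."""
    if method == "peak":
        i = max(range(len(points)), key=lambda k: weights[k])
        return points[i]
    total = sum(weights)
    cx = sum(w * p[0] for p, w in zip(points, weights)) / total
    cy = sum(w * p[1] for p, w in zip(points, weights)) / total
    return (cx, cy)
```

For a hotspot holding most of the traffic, 'peak' jumps to the hotspot while 'centroid' lands between the hotspot and the remaining traffic, which is why the second method tends to be more robust.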

3 Numerical Results and Discussion

Given the traffic distribution, propagation model, cell radius, frequency reuse pattern, desired receive signal level, and range of power control, one can use (2) and (3) to calculate the average receive signal level in any cell and the average interference level from that cell to any co-channel cell. In this section, we present numerical results and compare the performance for uniform and non-uniform traffic, with the base station on grid and off grid, and for frequency reuse of four (N=4) and seven (N=7) cells.

The non-uniform traffic distribution has 50% of the traffic uniformly distributed in the cell and the remaining 50% concentrated in a small area halfway between the original cell site and the cell border (also referred to as 50% off grid). To consider the worst case for interference, the small area is located on the line connecting the pair of co-channel cells. The center of traffic is determined by the first method. The propagation model has a breakpoint at 500 m and a pathloss of 96 dB at the breakpoint. The two slope indices before and after the breakpoint are 2 and 4, respectively. The cell radius is 1 km and the desired receive signal level is -80 dBm. As an example, the range of power control for the mobile station is assumed to be from 28 dBm to -4 dBm. This is the range for mobile station power control specified in the IS-136 based standards. A summary of the parameter settings for the calculation is given in Table 1 and the results are shown in Table 2.

Four cases are considered. In the first case, uniform traffic is assumed and the base station is

on grid. In the second case, uniform traffic is also assumed but the base station is 50% off grid

towards the co-channel cell. In the third case, the non-uniform traffic described above is used and

the base station is on grid. In the fourth case, the same non-uniform traffic is used but the base

station is moved to the center of traffic (50% off grid towards the co-channel cell). For all cases,

two frequency reuse patterns, N=4 and N=7, are compared.


By comparing cases 1 and 2 in Table 2, it is seen that for uniformly distributed traffic, moving the base station off grid causes a slight degradation of 0.4 dB in carrier level. The interference generated from the cell is decreased by 0.4 dB for N=4 but increased by 0.5 dB for N=7. This seemingly contradictory result is examined in the following. The cell can be divided into two areas: area I is closer to the co-channel cell and area II is farther away from it. When the base station is moved towards the co-channel cell, the interference generated from area I is decreased and that from area II is increased, due to power control. The change in overall interference depends on the relative pathloss and the percentage of mobiles in the two areas. Obviously, area II is larger than area I, but area I has less pathloss to the co-channel cell. When a tight reuse pattern such as N=4 is used, the pair of co-channel cells is closer and the contribution of interference from area I is much more severe than that from area II. Hence, for N=4, moving the base station reduces the overall interference. When a sparser reuse pattern such as N=7 is used, the contributions of interference from areas I and II are comparable, but the larger population of area II


outweighs that of area I. Therefore, for N=7, moving the base station towards the co-channel cell aggravates the overall interference.

Results for the non-uniform traffic distribution are shown in cases 3 and 4. It is seen that for N=4, by moving the cell site to the center of the small area, the average received signal strength is improved by 5.5 dB while the interference generated from this cell is reduced by 1.4 dB. This improvement is significant considering that it is the average performance. It can be shown by a simple analysis that, for the worst case in which the mobile is at the cell border closest to the co-channel cell, its contribution of interference is reduced by 10.8 dB. Similar results are obtained for N=7, but with a less impressive reduction of interference. The marginal improvement of 0.5 dB can be explained by the offset introduced by moving the base station, as discussed earlier for the uniform traffic. Therefore, it is concluded that the amount of improvement obtained from optimizing site location according to traffic depends on the frequency reuse: the tighter the reuse, the greater the improvement. Note that this result is based on a limited range of power control. The interference can be further reduced by allowing an unlimited range of power control.

4 Conclusions

In this paper, we consider maximizing the carrier-to-interference performance of a cellular system with non-uniformly distributed traffic by optimizing base station locations according to traffic. A method for the calculation of carrier and interference levels in a radio environment with frequency reuse is presented. A two-slope propagation model is used to characterize both the line-of-sight (LOS) and non-LOS regions in the cell coverage area. The concept of the center of traffic in a cell is introduced and two methods for obtaining the center of traffic are described. The numerical results show that it is advantageous to move the base station to the center of traffic. However, the amount of improvement depends on the reuse pattern: the tighter the reuse, the greater the improvement. This analysis applies to systems with power control on both the forward link and the reverse link. It is useful for determining the optimum site location when performing cell splits or adding new cells in an evolving cellular system. Future investigation should include the determination of realistic traffic distributions in a cell and the application of the proposed method to such distributions.


5 References

[1] V. M. Jovanovic and J. Gazzola, “Capacity of present narrowband cellular systems: interference-limited or blocking-limited?” IEEE Personal Commun., Dec. 1997.

[2] D. Har and H. Bertoni, "Effect of the local propagation model on LOS microcellular system design," in Proc. Infocom '96, San Francisco, CA, March 1996.

[3] Y. Kishi, T. Mizuike and F. Watanabe, "Geometrical computation of cell coverage areas in planning of outdoor urban microcellular systems," in Proc. ISAP '96, Japan.

[4] Greenstein, N. Amitay, T. Chu and L. Cimini, Jr., "Microcells in personal communications systems," IEEE Commun. Mag., Dec. 1992.


Azimuth, Elevation, and Delay of Signals at Mobile Station Site

Alexander Kuchar1, Estrella Aguilera Aparicio1,2, Jean-Pierre Rossi3, and Ernst Bonek1

1 Institut für Nachrichtentechnik und Hochfrequenztechnik, Technische Universität Wien, Gusshausstrasse 25/389, A-1040 Wien, Austria, email: [email protected]

2 ERASMUS student, Dep. de Teoria de la Señal y Comunicaciones, Centro Politecnico Superior de Zaragoza, Spain

3 CNET, Belfort, France, email: [email protected]

Abstract

We analyzed channel sounder measurements at 890 MHz in a dense urban environment where the receiver was located at the mobile station site. The joint information of delay, azimuth and elevation allows a thorough study of the main propagation mechanisms. There is a clear indication that street canyons dominate propagation in a dense urban environment, but scatterers all around the mobile station contribute to the power delay profile. For the macrocells investigated, there is considerable over-the-roof propagation. Our results corroborate the hypothesis of multiple reflections/diffractions in urban macrocells.

1 Introduction

Interest in the directional nature of the urban mobile radio channel has increased rapidly over the last years. Future mobile communication systems like the Universal Mobile Telecommunications System (UMTS) will exploit the directional nature of the channel by employing antenna arrays and sophisticated array signal processing [1]. Adaptive antenna technology realizing, e.g., space division multiple access (SDMA) [2] will enhance the spectral efficiency [3, 4] of a mobile radio network. For the development of adaptive antenna systems, channel models that also include directional information are required.

In the search for directional channel models for urban mobile radio, we require a deeper understanding of the basic underlying propagation mechanisms. This includes not only the study of azimuthal delay power spectra (ADPS), but also the study of elevation delay power spectra (EDPS). Many deterministic channel models assume, for the sake of simplicity, only single reflections. We will investigate the question whether multiply reflected signals represent significant contributions to the received signal, and show that this is the case. Recently developed ray tracing tools used for site-specific field prediction are also implementing full 3-D propagation models [5, 6, 7] by including diffraction at the horizontal edges of buildings. In this work we will discuss whether waves diffracted at building roofs contribute considerably to the propagation. To answer this question it is essential to have measurement results at the mobile station site, i.e. where the receiver is located at low height.

We evaluated an extensive amount of data from a synthetic array channel sounding campaign conducted by CNET in a dense urban environment in Paris. We analyze the joint azimuthal and elevation spectra, resulting in a classification of the various measurement locations.

The paper is structured as follows. Section 2 explains the measurement setup and the experimental conditions, and Section 3 describes the data evaluation procedure. In Section 4 we present the results of a basic propagation scenario to validate the approach. Section 5 presents the measurement results, classified according to the various measurement locations. A discussion


of the results follows in Section 6. Section 7 concludes the work.

2 Measurement setup and environment

The measurements were conducted in downtown Paris (Fig. 1). The environment consisted mainly of old buildings with heights of about 25-35 m. The transmitter was placed on top of a building block along Rue des Archives, about 47 m above ground level. The transmit antenna was a 10 dBi gain omnidirectional antenna. As receive antenna we used a quarter-wavelength monopole mounted on a conducting shield. It was placed on top of a van at a height of 2 m. The antenna was consecutively moved to each node point of a rectangular lattice with dimensions 1 m x 2 m lying in a horizontal plane (Fig. 2). The complex envelope of the impulse response was measured at each node with a channel sounder [8] at a carrier frequency of f0 = 890 MHz. The complex impulse responses were obtained over a time window of about 35 μs, with fine delay resolution and a dynamic range of about 30 dB.

The node points were arranged with uniform spacing in the x- and y-directions, giving a "synthetic aperture" of Mx x My = 21 x 41 spatial samples. The Nyquist criterion imposes, in our case, that two measurement points be separated by no more than half a wavelength, which is about 16 cm. The array was always aligned with the x-axis parallel to the street.
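The half-wavelength bound follows directly from the 890 MHz carrier; a quick illustrative check:

```python
# Spatial Nyquist check for the synthetic array at f0 = 890 MHz:
# adjacent samples must be no more than half a wavelength apart.
c = 299_792_458.0                 # speed of light, m/s
f0 = 890e6                        # carrier frequency, Hz
half_wavelength = c / f0 / 2.0    # ~0.168 m, the "about 16 cm" bound
print(round(half_wavelength * 100, 1))  # prints 16.8 (cm)
```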

Problems of mutual coupling are intrinsically avoided since only one antenna element was present. Moreover, the array manifold was precisely defined, and the characteristic of each element was exactly the same.

For each antenna element/location, we get a vector of complex samples of the


down-converted impulse response. These Mx x My vectors form the basis for our further investigations.

3 Data evaluation: array processing

We employ a recently developed technique [9] to evaluate the measurement data. It consists of the following steps:

• Calculate the power delay profile (PDP) by averaging the measured impulse response over all array elements,

• search for the peaks of the PDP and store, for each of the Npeak peaks, the corresponding Mx x My complex array samples in a spatial signal matrix,

• from each spatial signal matrix so found, estimate the number of incident wave fronts L, and

• estimate the L directions of arrival; here we apply the high-resolution DOA estimation algorithm 2-D Unitary ESPRIT [10] and 2-D spatial smoothing [11]; the azimuth and elevation angles found in this way are used to

• reconstruct the single waves and calculate their power (beamforming); thus we obtain an estimate of the amplitude of the directionally resolved impulse response.

Here, Ln is the estimated number of waves that arrive with a given delay at the receiver, and the corresponding angle estimates are the azimuth and elevation of each wave with that delay.

Note that spatial smoothing is necessary if we want to correctly distinguish coherent signals that are incident at the array. Furthermore, we improved this approach by applying a reliability criterion [10] that ensures that only correct estimates of azimuth and elevation show up in the final results.
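The steps above can be sketched as follows. This is a simplified stand-in, not the authors' implementation: a coarse conventional beamforming scan replaces 2-D Unitary ESPRIT with spatial smoothing, and all names are ours.

```python
import cmath
import math

def power_delay_profile(h):
    """h[m][n]: complex impulse response of array element m at delay bin n;
    the PDP is |h|^2 averaged over all array elements."""
    M = len(h)
    return [sum(abs(h[m][n]) ** 2 for m in range(M)) / M
            for n in range(len(h[0]))]

def find_peaks(pdp, threshold_db=-45.0):
    """Local maxima of the PDP above a threshold relative to the strongest bin."""
    floor = max(pdp) * 10.0 ** (threshold_db / 10.0)
    return [n for n in range(1, len(pdp) - 1)
            if pdp[n] >= floor and pdp[n] >= pdp[n - 1] and pdp[n] > pdp[n + 1]]

def doa_scan(snapshot, positions, wavelength, step_deg=2.0):
    """Coarse conventional-beamforming azimuth/elevation scan for one delay
    bin -- a simple stand-in for 2-D Unitary ESPRIT with spatial smoothing.
    snapshot[m] is the complex sample of element m; positions[m] = (x, y) in
    meters on the horizontal synthetic-aperture plane."""
    k = 2.0 * math.pi / wavelength
    best_az = best_el = 0.0
    best_p = -1.0
    az = 0.0
    while az < 360.0:
        el = 0.0
        while el <= 90.0:
            a, e = math.radians(az), math.radians(el)
            ux, uy = math.cos(e) * math.cos(a), math.cos(e) * math.sin(a)
            s = sum(snapshot[m] *
                    cmath.exp(-1j * k * (positions[m][0] * ux +
                                         positions[m][1] * uy))
                    for m in range(len(snapshot)))
            p = abs(s) ** 2
            if p > best_p:
                best_az, best_el, best_p = az, el, p
            el += step_deg
        az += step_deg
    return best_az, best_el
```

For real data, the scan would be replaced by the high-resolution estimator, and `find_peaks` would operate with the -45 dB threshold used in the validation section.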


4 Validation

To validate the array processing algorithms and to verify the data extraction procedure, we first present results from a Line Of Sight (LOS) situation (Fig. 3). Here the receiver was located in a 9 m wide street at a distance of 480 m from the transmitter. In the power delay profile (Fig. 4) we can identify one dominant peak that corresponds to the LOS path. Additional weak waves (relative power < -35 dB) with larger delay are also present. We applied a threshold of -45 dB, i.e. we processed the spatial signal matrix, at each delay, for the incoming signals having a power larger than -45 dB.

Azimuthal delay power spectrum and elevation delay power spectrum

Figure 5 presents the distribution of the power versus azimuth and delay, i.e. the azimuthal delay power spectrum (ADPS). The figure is read as follows: for each wave a peak is plotted on the delay-azimuth plane, where delay (azimuth) corresponds to the radial (azimuthal) coordinate in the plane. A delay of zero corresponds to the delay of the first wave incident at the receiver. The arrow in the left corner indicates the direction of the transmitter; as at all other measurement locations, this was also the direction of the street. For the dominant wave we determined the azimuth and elevation of arrival. The theoretical elevation angle, which we obtained from the RX-TX distance and the TX antenna height, agreed with the measurement, and the azimuth angle was nearly 0° in our notation, i.e. the direction of the street.
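As a quick check on this geometry, the theoretical elevation for the validation scenario follows from the 480 m RX-TX distance and the 47 m / 2 m antenna heights (flat-ground assumption; the script is illustrative):

```python
import math

d_rx_tx = 480.0          # m, receiver-transmitter distance
h_tx, h_rx = 47.0, 2.0   # m, antenna heights above ground
elevation_deg = math.degrees(math.atan2(h_tx - h_rx, d_rx_tx))
print(round(elevation_deg, 1))   # prints 5.4 (degrees) for the dominant LOS wave
```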

The ADPS reveals that there are longer-delayed waves coming from the front (in the direction of the transmitter) and some shorter-delayed components from behind (the direction away from the transmitter). Although the relative power is very small (< -40 dB), the sensitive array processing is able to detect those weak signals. From the elevation delay power spectrum (EDPS) in Fig. 5(b) we find that the shorter-delayed components have larger elevation, which means that there are waves reflected/diffracted at buildings.

This simple scenario verifies the applied array processing, and also demonstrates that the propagation mechanisms can be made clear only from the joint study of the PDP, ADPS, and EDPS.

In the following section we will present measurement results for various locations that had no LOS to the transmitter.

5 Results

For channel modeling it is essential to find a limited number of representative scenarios that are typical of channels occurring in an urban environment. In this section we present an environment classification that resulted from the evaluation of a great number of measurement locations (many more than shown) in downtown Paris. In the following, we have selected typical examples that represent the different environments.

5.1 Classical street

At location RX27 the receiver was located in Rue de Rivoli, a 27 m wide street, about 700 m away from the transmitter. The PDP (Fig. 6) exhibits no significant components with large delay; 90% of the energy arrives within a short time window, and the delay spread S is correspondingly small. The PDP has the well-known exponential decay, a typical feature of scenarios where local scatterers dominate the propagation.
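Quantities like the delay spread S and the 90%-energy window quoted here can be computed from a sampled PDP; the sketch below is illustrative (the function name and binning are ours):

```python
import math

def delay_stats(pdp, dt):
    """RMS delay spread and the shortest contiguous window containing 90%
    of the energy, from a sampled power delay profile with bin spacing dt."""
    total = sum(pdp)
    mean = sum(i * dt * p for i, p in enumerate(pdp)) / total
    spread = math.sqrt(sum(((i * dt - mean) ** 2) * p
                           for i, p in enumerate(pdp)) / total)
    # shortest contiguous window holding >= 90% of the energy
    best = len(pdp) * dt
    for start in range(len(pdp)):
        acc = 0.0
        for end in range(start, len(pdp)):
            acc += pdp[end]
            if acc >= 0.9 * total:
                best = min(best, (end - start + 1) * dt)
                break
    return spread, best
```

An exponentially decaying PDP gives a small spread and a short window; isolated far echoes (as in Section 5.3) inflate both.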


From the ADPS we find that all waves are confined to narrow angular ranges in the direction of the street (Fig. 8). Because the dominant directions deviate slightly from the nominal angles, we conjecture that our reference direction is not exactly the direction of the street, i.e. that the van was not parked parallel to the street. The buildings along the street form a waveguide in which the waves travel, sometimes also called a 'canyon'. Waves traveling along the street may have significant delays. Although the scatterers near the mobile station are often assumed to lie within a circular disc, this result indicates that, in street canyons, the scatterer distribution has a rectangular shape.

Figure 9 shows the distribution of the elevation. The elevation decreases with increasing delay. For small delays the elevation angles are very large (up to about 60°). This means that the waves travel over the roofs of nearby buildings (note that the average building height in the environment is about 25-30 m). To corroborate this conclusion we calculate a theoretical EDPS (Fig. 7) with the following assumptions:


• We assume a street canyon with constant building height and orientation to the transmitter as in Scenario 1.

• Waves travel only over the roofs.

• We consider only single diffraction.

5.2 Street crossing

In the next scenario the receiver was located near a street crossing on Boulevard Voltaire, a 30 m wide street (Fig. 10). The distance between TX and RX was approximately 1200 m. The PDP (Fig. 11) shows a behavior similar to the previous case; again, 90% of the energy arrives within a short time window.

The ADPS shows that very close scatterers are homogeneously spread around the receiver. Dominant signal contributions with larger delay are incident from two distinct azimuths; those angles indicate the directions where Boulevard Voltaire meets two streets (Rue Lacharriere and Rue St. Ambroise). From the azimuthal direction of the transmitter, only short-delay components with large elevation are present, i.e. waves that are traveling over the roofs.

The EDPS again shows that signal components with small delay have large elevation angles (up to 50°). The elevation decreases with increasing delay.

5.3 Isolated far echoes

A significant long-delayed signal component is typically present at locations where a (quasi-)LOS from the receiver and from the transmitter to a large building exists.

In this scenario the receiver was again located in a narrow street (9 m wide, 1800 m distance to the TX). But now the PDP (Fig. 14) shows isolated far echoes. The delay spread is thus more than twice as large as in the previous scenarios, and the time window that includes 90% of the total signal energy is even larger.

We find long-delayed waves having widely spread directions of arrival around the nominal directions of the street (Fig. 15). From the front, two tiers of far echoes at distinct relative delays are present; from the back, significant waves arrive within a smaller delay range. In part, there exist large buildings in the corresponding directions at a distance that fits the delay. However, the large angular spread around the nominal direction and the two tiers of far echoes indicate that there exists no direct LOS to the far scatterers.

The EDPS presents additional information (Fig. 16). The elevation of the far echoes is up to 20° (10°) for waves arriving from the front (back). Since these signals cannot come from a non-existent 1000 m (500 m) high building, the conclusion is that some signal components undergo multiple reflections. The multiply reflected/diffracted waves travel the final distance from the roof of a building near the receiver.

5.4 Open areas

The last scenario is different from all other locations. The receiver was located on the bridge Pont Royal over the river Seine (Fig. 17). Here no street dominates the propagation; instead, there is a park in the near surroundings. The distance to the transmitter is about 1850 m.

The PDP (Fig. 18) shows a stronger spread of the signal power over time compared to RX27 and RX10. However, the delay spread is smaller than in the far-echo case (RX30).


The ADPS reveals that most of the signals are confined to the directions of the river, i.e. most waves travel along the river. Additionally, we find small signal components probably coming from the river banks and the edge of the bridge.

The elevation is smaller than in the other examples. However, elevation angles up to 40° also exist.

6 Discussion of results

From the distribution of the incident power versus azimuth we find that street canyons force the waves to come from the directions of the streets only. In all NLOS situations local scatterers contribute to the PDP. Such scenarios are characterized by a delay power profile with exponential decay. The signals with very small delay are often homogeneously spread over the azimuth. This means that the nearest scatterers lie within a circular disc. The signal components with medium delay, up to a few microseconds, often come only from the directions of the street. From the contour plot of the ADPS (azimuth-delay plane) we see that the scatterers are distributed on a rectangle (e.g. in Fig. 8).

The distribution of the elevation angles showed that, in all scenarios except the LOS case, waves impinge with large elevation angles. We therefore conclude that propagation over the roofs contributes significantly to the overall signal. Note that waves with very large elevation (> 60°) are attenuated by the pattern of the monopole antenna, and therefore no strong signals with larger elevation could be detected.

A general trend is that the elevation decreases with increasing delay. Especially in streets with homogeneous building height the fast decay of elevation over delay is evident. However, we also identified waves that are incident with large elevations and large delays. Evidently those echoes can only be explained by assuming multiple reflections/diffractions. Typical situations where multiple reflections occur are:

• A wave is reflected by a far scatterer that has no LOS to the receiver. From there the wave reaches a roof of a building near the receiver, where it is diffracted and propagates to the receiver. In that case the signal's delay and elevation can reach high values.

• A wave travels through a street and is multiply reflected on the walls of the street canyon.

Another indicator of multiple reflections are waves arriving with the same azimuth and the same elevation, but with different delays. In that case the waves pass the same scatterer before reaching the receiver.

The delay spread was small throughout, except for the environment where far scatterers play a significant role.

7 Conclusions

The evaluation of wideband array measurements turned out to be an important tool for studying the mobile radio channel, because it allows estimating a spatial impulse response with high resolution in time, azimuth and elevation.

Specifically, multiple reflections can only be detected if delay, azimuth and elevation information is jointly available. The high resolution of the processing method in delay and angle (in the order of 1°) facilitates the separation of individual partial waves reaching the mobile station. It is not sufficient to have an azimuthal power spectrum and an independent elevation power spectrum available.

From the distribution of the elevation we conclude that in macro-cell environments in NLOS situations propagation over the roof dominates. Multiple reflections give significant contributions to the overall signal. The results found will be implemented in our channel model [12] in later work.

References

[1] Josef J. Blanz, Martin Haardt, Apostolos Papathanassiou, Ignasi Furio, and Peter Jung, "Combined Direction of Arrival and Channel Estimation for Time-Slotted CDMA," in Proc. Intern. Conf. on Telecommunications, Melbourne, Australia, 2-5 April 1997, pp. 395-400.

[2] M. Tangemann, C. Hoek, and R. Rheinschmitt, "Introducing Adaptive Array Antenna Concepts in Mobile Communication Systems," in Proc. RACE Mobile Communications Workshop, Amsterdam, 1994, pp. 714-727.

[3] Josef Fuhl, Alexander Kuchar, and Ernst Bonek, "Capacity Increase in Cellular PCS by Smart Antennas," in Proc. 47th IEEE Veh. Technol. Conf., VTC'97, Phoenix, AZ, May 4-7, 1997, pp. 1962-1966.

[4] Alexander Kuchar, Josef Fuhl, and Ernst Bonek, "Spectral Efficiency Enhancement and Power Control of Smart Antenna Systems," in Proc. European Personal Mobile Communications Conf., EPMCC'97, Bonn, October 1997, pp. 475-481.

[5] Th. Kürner, D. J. Cichon, and W. Wiesbeck, "Concepts and results for 3D digital terrain based wave propagation models - an overview," IEEE Journal on Selected Areas in Communications, vol. JSAC-11, no. 7, July 1993, pp. 1002-1012.

[6] Th. Kürner, D. J. Cichon, and W. Wiesbeck, "Evaluation and verification of the VHF/UHF propagation channel based on a 3D-wave propagation model," IEEE Trans. on Antennas and Propagation, vol. AP-44, no. 3, March 1996, pp. 393-404.

[7] K. Rizk, R. Valenzuela, S. Fortune, D. Chizhik, and F. Gardiol, "Lateral, Full-3D and Vertical Plane Propagation in Microcells and Small Cells," COST 259 TD(98) 47, Bern, Switzerland, Feb. 1998.

[8] A. J. Levy, J.-P. Rossi, J.-P. Bardot, and J. Martin, "An improved channel sounding technique applied to wideband mobile 900 MHz propagation measurements," in Proc. 40th IEEE Veh. Technol. Conf., VTC'90, Orlando, FL, May 7-10, 1990, pp. 513-519.

[9] Josef Fuhl, Jean-Pierre Rossi, and Ernst Bonek, "High-Resolution 3-D Direction-of-Arrival Determination for Urban Mobile Radio," IEEE Trans. on Antennas and Propagation, vol. AP-45, no. 4, April 1997, pp. 672-682.

[10] Michael D. Zoltowski, Martin Haardt, and Cherian P. Mathews, "Closed-Form 2-D Angle Estimation with Rectangular Arrays in Element Space or Beamspace via Unitary ESPRIT," IEEE Trans. on Signal Processing, vol. 44, no. 2, Feb. 1996, pp. 316-328.

[11] Tie-Jun Shan, Mati Wax, and Thomas Kailath, "On Spatial Smoothing for Direction-of-Arrival Estimation of Coherent Signals," IEEE Trans. on Acoustics, Speech, and Signal Processing, vol. ASSP-33, no. 4, August 1985, pp. 806-811.

[12] Josef Fuhl, Andreas F. Molisch, and Ernst Bonek, "A Unified Channel Model for Mobile Radio Systems with Smart Antennas," IEE Proc.-Radar, Sonar and Navigation, vol. 145, no. 1, February 1998, pp. 32-41.


XI

A New Hybrid CDMA/TDMA Multiuser Receiver System

Uthman Baroudi and Ahmed Elhakeem
ECE, Concordia University

1455 de Maisonneuve W., Montreal, QC, H3G 1M8
Tel: (514) 848-3087, Fax: (514) 848-2802

[email protected], [email protected]

Abstract

In this paper, the performance of a new hybrid CDMA/TDMA multiuser receiver system is investigated, whereby the average bursty traffic of all active users is transmitted in one time slot, and the excess user traffic, as well as the interactive and datagram-oriented traffic, is transmitted in a second time slot. Further, the decorrelator multiuser receiver is used to demodulate packets transmitted on the bursty slot, and conventional (single-user) detection is used on the second slot. The results show the superiority of the proposed approach compared with the conventional system.

1. Introduction

The work done [1-3] on multiuser detection has added another dimension to expected future communication systems. The theoretical performance of multiuser detection shows much improvement compared to conventional (single-user) systems. Yet, the optimum multiuser detector has a prohibitive complexity which makes it undesirable for wireless communication. This has motivated researchers to look for suboptimum multiuser detectors [2] that have good performance and, at the same time, affordable complexity. Nevertheless, in multiuser detection it is generally assumed that the number of simultaneous packets, as well as their signature sequences and delays, are perfectly known. Deviating from such assumptions will yield a very high probability of error. For connection-oriented services, the signalling period may yield useful information about the number of users. Unfortunately, even with such knowledge, user burstiness will preclude the accurate estimation of the instantaneous number of packets on the channel.

In this paper, we try to circumvent the above by adopting a new hybrid CDMA/TDMA approach whereby the average bursty traffic of all active users is transmitted in one slot, and the excess user traffic, as well as the interactive and datagram-oriented traffic, is transmitted in a second time slot. Further, the decorrelator multiuser receiver is used to demodulate packets transmitted on the bursty slot, and conventional (single-user) detection is used on the second slot.


2. Proposed Multiple Access Protocol

In this paper, we adopt a hybrid CDMA/TDMA system as the air interface. In this scheme, the time domain is divided into frames, and each frame is composed of two time slots (i.e., Ts1 and Ts2). Further, we impose the following simple and effective traffic control. The flow of traffic is categorized into two categories: bursty traffic and interactive traffic. Each category has classes of traffic which share common traffic characteristics. Fig. 1 shows these classifications. The justification for this classification relies on the fact that each category has its own required QoS as well as its own traffic and transmission characteristics. Hence, each category of traffic should be treated differently.

Basically, Ts1 is dedicated to the bursty traffic users, while Ts2 is dedicated to the interactive traffic users. During the signalling period (when the user tries to communicate with the base station to get admission to the system), the bursty traffic user negotiates with the base station about (among other things) the average transmission bit rate Ravr(ji) that the user shall stick to while using Ts1, though the actual bit rate may be less in some cases. Of course, this is translated into a fixed number of received cells (i.e., Navr(ji)) at the receiver side. Now, if the user needs to send information using a bit rate higher than the average bit rate (i.e., Rji > Ravr(ji)), then the excess cells Nexcess (the variable-bit-rate components) should be directed to the other time slot (i.e., Ts2). Therefore, the assignment of slots to a part or all of a user's traffic is done a priori through an agreement between the base station and the user. The motivation for this traffic control is to make it practical to implement the multiuser detection strategy at the receiver for the bursty traffic population.


(ji): class (j) belongs to category (i), i.e., the bursty and interactive categories;

the average number of cells emitted from all bursty users to Ts1;

the average number of cells emitted from class (i) bursty users to Ts1;

the ratio of a user's average bit rate to the basic bit rate;

the ratio of a user's bit rate to the basic bit rate;

the instantaneous number of users; and

Nclass(i): the number of classes of users in category (i).

The motivation behind choosing a CDMA platform is the attractive features of the CDMA technique over the others in the wireless environment. However, CDMA suffers from two main problems. First, it is well known that the performance of CDMA techniques (in particular DS-CDMA) is very sensitive to the processing gain. Hence, the user's transmission rate is not expected to be high enough to support multimedia applications, nor a wireless ATM system where the objective is on-demand availability of bandwidth at peak rates as high as 10 Mb/s [4]. The second problem is that CDMA is an interference-limited technique.

Multi-Code CDMA:

Considering this heterogeneous traffic, and for analysis convenience, we propose that the various traffic classes represent their bit rates as an integer multiple n of a basic rate Rb, or as a 1/n multiple of Rb. For users with the fractional rate Rb/n, the generated cells are buffered until their gross rate matches Rb. Motivated by the desire to remedy the first problem, for the high-rate classes (say class i), where each user needs to transmit at a rate greater than Rb, the high-bit-rate stream is converted into n low-bit-rate basic streams. Each new low-bit-rate basic stream has a bit rate equal to Rb = 1/Tb, where Tb is the basic bit duration and Tb/n is the bit duration of the high-bit-rate traffic coming from class i. Consequently, the low-bit-rate stream traffic is packetized in cells as per ATM standards. This strategy introduces the concept of Multicode CDMA (MC-CDMA) [5], where each low-bit-rate stream is spread with a different code.
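The serial-to-parallel conversion behind MC-CDMA can be sketched as follows; this is a toy illustration with our own helper names and 2-chip codes, whereas a real system would use long Gold codes as in Section 5.

```python
# Illustrative MC-CDMA split: a high-rate bit stream of rate n*Rb is
# demultiplexed into n basic-rate substreams, each spread with its own code.

def split_stream(bits, n):
    """Round-robin a high-rate stream into n basic-rate substreams."""
    return [bits[i::n] for i in range(n)]

def spread(substream, code):
    """Spread one basic-rate substream: each bit (+/-1) times the chip code."""
    return [b * chip for b in substream for chip in code]

bits = [+1, -1, -1, +1, +1, -1]         # high-rate stream (rate 3*Rb)
subs = split_stream(bits, 3)            # three basic-rate substreams
print(subs)                             # [[1, 1], [-1, 1], [-1, -1]]
codes = [[+1, +1], [+1, -1], [-1, +1]]  # toy 2-chip codes, one per substream
tx = [spread(s, c) for s, c in zip(subs, codes)]
```

Each substream now runs at the basic rate Rb, so the full processing gain is preserved even though the user's aggregate rate is 3Rb.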

3. Statistical Traffic Model

The structure of the voice and video traffic is fairly complex due to the high correlation among arrivals [6].


Furthermore, the activities of such sources play a key role in modelling the generated traffic. The correlation between voice cells generated during a call can be modelled as an Interrupted Poisson Process (IPP), as shown in Fig. 2. This is the simplest model and is widely used to model voice traffic. The transition from talkspurt (ON state) to silence (OFF state) during a time period equal to the cell transmission time on the channel occurs with probability γ, and the transition from OFF to ON occurs with probability σ. Considering this discrete-event system, the ON and OFF state periods are geometrically distributed with means 1/γ and 1/σ, respectively. The total number of cells generated from all active calls (during the ON period) follows a binomial distribution.

For traffic class i, the probability Pi(n) that n out of Ni voice sources are in the ON state, where Ni is the number of independent active voice sources, is

Pi(n) = C(Ni, n) p^n (1 - p)^(Ni - n),   n = 0, 1, ..., Ni,

where p = σ/(γ + σ) is the average activity probability (γ and σ being the ON-to-OFF and OFF-to-ON transition probabilities). When I different independent classes of users are multiplexed, the aggregate cell arrivals are governed by the number of voice sources in the ON state from all types of users. Having assumed that the cell counts of all classes of traffic users are independent identically distributed (iid) random variables, the compound traffic distribution of m cells generated by all active calls of all I classes is given by

P(m) = Sum over (n1 + n2 + ... + nI = m) of [Pi(n1) Pi(n2) ... Pi(nI)],    (4)

i.e., the convolution of the per-class distributions.
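Under the stated independence assumption, the per-class and compound distributions can be sketched numerically; the class sizes and activity values below are illustrative, not the paper's.

```python
# Sketch of the traffic model: per-class binomial ON counts, compound pmf by
# convolution over independent classes.
from math import comb

def p_on_count(N, p):
    """P(n sources ON) for N iid ON/OFF sources, n = 0..N (binomial pmf)."""
    return [comb(N, n) * p**n * (1 - p)**(N - n) for n in range(N + 1)]

def convolve(a, b):
    """Pmf of the sum of two independent counts."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

# Two illustrative classes: 3 sources at activity 0.4, 2 sources at 0.5.
compound = convolve(p_on_count(3, 0.4), p_on_count(2, 0.5))
print(len(compound) - 1)         # up to 5 simultaneous cells are possible
print(round(sum(compound), 10))  # the compound pmf still sums to 1.0
```

This is exactly the structure used later in the averaging of eq. (10): the bit-error expressions are weighted by such a compound cell-count distribution.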

4. The Decorrelator Receiver

The decorrelator receiver uses a linear transformation to obtain an estimate of the transmitted symbols. It is basically composed of two stages: a matched filter bank followed by the linear transformation process. It is easy to represent the matched filter outputs [7] in vector notation, as shown in eq. (5):

y = R A b + n,    (5)

where y is the vector of matched filter outputs, R is the cross-correlation matrix of the signature sequences, A is the diagonal matrix of received amplitudes, b is the vector of transmitted bits, and n is the noise vector.


From (7), we observe that the solution (i.e., the bit vector b) can be obtained by inverting the correlation matrix R, which is invertible in most cases of interest. Hence, the linear transformation process is done by applying R^-1 to the matched filter outputs. This results in the decision

b̂ = sgn(R^-1 y).

This linear detector has many attractive features. First of all, its computational complexity is much reduced compared with the optimum receiver: the computational complexity of the decorrelator detector increases linearly with the number of users (i.e., O(K)). Further, the linear decorrelator receiver exhibits the same degree of near-far resistance as the optimum multiuser detector [3]. In addition, when the users' energies are unknown, the decorrelator receiver is the optimal approach.
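The near-far advantage of the decorrelator can be illustrated with a small noise-free sketch; the signatures and amplitudes below are our own toy values, not the Gold codes used later.

```python
# Toy synchronous decorrelator: the matched-filter bank output is
# y = R A b (+ noise); the decorrelator applies R^{-1} and slices,
# removing MAI regardless of the users' relative energies.
import numpy as np

S = np.array([[+1, +1, +1, +1],        # rows: 4-chip signatures of 3 users
              [+1, +1, -1, +1],        # (deliberately non-orthogonal)
              [+1, -1, -1, +1]], dtype=float) / 2.0
R = S @ S.T                            # normalized cross-correlation matrix
b = np.array([+1.0, -1.0, +1.0])       # transmitted bits
A = np.diag([1.0, 0.1, 1.0])           # user 2 is 20 dB weaker (near-far)
y = R @ A @ b                          # matched-filter outputs (noise-free)

conventional = np.sign(y)                      # per-branch single-user slicing
decorrelated = np.sign(np.linalg.solve(R, y))  # decorrelator decisions
print(conventional)   # MAI swamps the weak user: its bit comes out wrong
print(decorrelated)   # all three bits recovered
```

Note that computing R^-1 y via `solve` already hints at the cost discussed next: the matrix depends on the active-user set, so it must be refactored whenever that set changes.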

However, a significant limitation of this technique is the computational cost of inverting the correlation matrix [8], whose entries depend on the number of active users, the signature sequences, and the delays of the users. Further, any change in one of these parameters changes the correlation matrix and consequently requires updating the multiuser detection process. Moreover, uncertainty in the actual number of active users is another serious problem that can degrade the system performance very severely [9]. Fortunately, by adopting our traffic control policy, the last problem is resolved.

5. System Performance Analysis

In the following, we analyze the performance of the proposed CDMA/TDMA system employing the above traffic control. We consider four classes of bursty traffic users, each with transmission rate Rj (j = 1, 2, 3, 4). These rates are multiples of Rb. On the other hand, we assume that the interactive traffic category has two classes of users. In this work, we study the effects of (MC-CDMA) intracell interference on the reverse link (i.e., mobile to base). Each low-bit-rate stream is spread with a different Gold code. The information bits are modulated using binary phase shift keying (BPSK).

Here, we consider just the additive Gaussian interference. We can present the received signals from both categories of traffic as follows. The received signal in Ts1 (i.e., r1(t)) is composed of just a part of the bursty traffic, as explained before. On the other hand, r2(t), received in Ts2, is composed of two parts: 1) the interactive users' signals and 2) the excess traffic from the bursty users. n(t) is an additive white Gaussian noise with two-sided spectral density N0/2; kj is the number of active users belonging to class j in the interactive traffic category, and it is a random variable. Then, the outputs of the correlator filters are processed as explained before: r1(t) is processed by the decorrelator multiuser receiver, while r2(t) is processed by the conventional (single-user) detection strategy.

In the following, we shall evaluate the performance of the proposed approach using the ATM-cell error-rate criterion, assuming synchronous transmission. According to the above traffic control, the bit error probability of the bursty traffic is composed of two portions: 1) the bit error performance for the cells using Ts1, detected by the multiuser receiver, and 2) the bit error performance for the excess traffic cells using Ts2, detected along with the interactive cells by the single-user receiver. These performances are then averaged over the whole bursty traffic population. Following eq. (4), it is easy to figure out the probability distributions for both bursty traffic components.

Considering first the bursty traffic category, we get the probability distribution of eq. (9), where k is the number of cells and const is the value obtained when eq. (4) is evaluated for this component. On the other hand, the probability distribution of the interactive traffic is straightforward from eq. (4). From the above discussion, we can easily get the average bit error rate for the bursty traffic, as derived in eq. (10). Here, the average bit probability of error for the whole category of bursty traffic is expressed in terms of the probability of error, given [3] by eq. (11), when there are k cells (i.e., average components of the bursty traffic) in Ts1. P1(k) is the probability distribution of the bursty traffic as defined in eq. (9). Sig is the maximum


expected number of interactive-traffic low-bit-rate cells. Navr1 is the average number of low-bit-rate cells of the bursty traffic. Nexcess is the number of excess bursty-traffic low-bit-rate cells. The bit probability of error, given [10] by eq. (12), applies when there are m (i.e., m = l + j) cells in Ts2, where the Q-function is Q(x) = (1/sqrt(2π)) ∫ from x to ∞ of exp(-t²/2) dt, and where PG is the number of chips spanning one bit period (i.e., PG = Tb/Tc).

Following the same argument as above, we obtain a similar equation for the performance of the interactive traffic class.

6. Results

We compare our proposal to the conventional system, where both categories of traffic use the whole available bandwidth and the received signals are processed by the conventional single-user receiver. Further, both systems (proposed and conventional) are compared (when applicable) to the single-user system (where there is just one user), which is considered the lower bound for any multiple access system.

The common criterion used to evaluate the performance of a multiple access technique is the packet error rate. Considering the ATM structure for the transmitted packets, where each packet (called a cell) is composed of 53 bytes, the packet error rate is

PER = 1 - Pc^n,

where Pc is the probability of correct bit-decision and n is the number of bits in the ATM cell. The results

presented here are for the MC-BPSK-CDMA/TDMA hybrid system. Fig. 3 illustrates how the packet error rate is evaluated. First, for every set of bursty traffic parameters and for every fixed number of active users, we obtain all possible combinations of cells that each class of users might send; each user splits its stream of bits into a certain number of low-rate streams. Furthermore, each low-rate stream is spread using a set of pseudo-random codes (Gold codes), where the ratio Tb/Tc = L = 1023.
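The cell-error-rate relation above can be checked numerically; the Q-function and the 424-bit (53-byte) cell length are standard, while the operating point chosen below is arbitrary.

```python
# Numeric check of PER = 1 - Pc^n for an ATM cell, with the Gaussian tail
# Q(x) computed via the complementary error function.
from math import erfc, sqrt

def q_func(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2))

def cell_error_rate(ber, n_bits=53 * 8):
    """ATM cell error rate for independent bit errors: 1 - (1 - BER)^n."""
    return 1.0 - (1.0 - ber) ** n_bits

ber = q_func(4.0)   # an illustrative operating point, Q(4) ~ 3.2e-5
print(f"BER = {ber:.2e}, PER = {cell_error_rate(ber):.2e}")
```

Because n = 424, even a modest bit-error rate is amplified by roughly two orders of magnitude at the cell level, which is why the averaging over traffic combinations matters so much in Figs. 4 and 5.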

However, in the case of the two-slot CDMA/TDMA, the bursty rate should be higher than 1/Tb. Here, we assume that the bursty rate in each time slot is 2/Tb (i.e., 20 kbps). Hence, to maintain the same bandwidth, the processing gain should be half the one used in the wideband CDMA (i.e., the conventional system in our case), that is, 512. Therefore, the bit energy Eb is the same for both systems (i.e., conventional and proposed). Then, the bit error rates calculated using the classical multiuser detection and single-user detection are averaged over all the possible combinations, assuming that each combination is equiprobable. This is very important, because each class of traffic has different traffic characteristics. Hence, to obtain a fair and clear picture of the system performance, the results should be averaged over all possible combinations.

Table I summarizes the traffic characteristics used in evaluating the proposed multiple access protocol. The basic rate is assumed the same for all classes of both traffic categories, that is, Rb = 10 kbps. Considering the bursty traffic classes, we let each class have a range of transmission rates so that the proposed system is tested under more realistic parameters. Therefore, each bursty traffic class has a minimum bit rate, a maximum bit rate, and an average bit rate. For the purpose of investigating the capacity of the proposed protocol, we study the new system on what we call combinations of rates. In other words, the minimum-bit-rate combination is where the system is tested assuming that all classes operate at their lowest bit rates; the maximum-bit-rate combination is where all classes operate at their maximum bit rates, and the same applies for the average rates. Further, each class of traffic has its own transmission activity (i.e., pi,j, where (i,j) means class j belongs to traffic category i). The system studied here has a maximum of 60 bursty users and 13 interactive users.

It is worth emphasizing that the number of active users admitted into the system is not necessarily equal to the number of actual cells processed by the system, because the number of cells depends on other parameters and not just the number of active users. The instantaneous number of cells is given by eq. (15).

Fig. 4 shows the packet error rate for the proposed approach as well as for the conventional and single-user systems. It is clear that the packet error rate performance is very similar to the conventional one, which exhibits an irreducible error floor. This is due to the fact that, in the original proposed approach, the user should stick to a certain average bit rate, and as such, at high-rate transmission the cells received by the multiuser detector are just a small portion of the bursty traffic. Hence, when we average the performance of both receivers over the whole traffic population, the cells received by the conventional receiver have more statistical weight (binomial distribution). To overcome this problem, we modify the approach such that more cells are received by the decorrelator multiuser receiver, which can be accomplished as follows. If the system performance is observed to worsen under the original version of the proposed approach, then the user and the base station negotiate a new average bit rate, which should be higher than the previous average bit rate, and so on. Fig. 4 shows an improvement in the system performance when this modification is applied. Further, the system capacity (i.e., the number of users the system can handle at an acceptable packet error rate) has also been improved, as shown in Fig. 5.

7. Conclusions

A new hybrid CDMA/TDMA system has been presented. We examined the new system under a wide range of expected traffic characteristics (bit rates, transmission activities, etc.) for future wireless networks. The results show the superiority of the proposed system compared with the conventional one. Further, this improvement in performance is attributed to the novel traffic control scheme, which balances the traffic load between the two detection algorithms and makes the implementation of the decorrelator receiver more practical.

8. References

[1] S. Verdu, "Minimum Probability of Error for Asynchronous Gaussian Multiple-Access Channels," IEEE Trans. on Inform. Theory, vol. IT-32, no. 1, Jan. 1986, pp. 85-96.

[2] A. Duel-Hallen, J. Holtzman, and Z. Zvonar, "Multiuser Detection for CDMA Systems," IEEE Personal Communications, vol. 2, no. 2, April 1995, pp. 46-57.

[3] R. Lupas and S. Verdu, "Near-Far Resistance of Multiuser Detectors in Asynchronous Channels," IEEE Trans. Commun., vol. COM-38, no. 4, April 1990, pp. 496-508.

[4] "Special Issue on Wireless ATM," IEEE Personal Commun., August 1996.

[5] Chih-Lin I and R. D. Gitlin, "Multi-Code CDMA Wireless Personal Communications Networks," Proc. of ICC '95, pp. 1060-1064.

[6] J. J. Bae and T. Suda, "Survey of Traffic Control Schemes and Protocols in ATM Networks," Proc. IEEE, vol. 79, no. 2, Feb. 1991, pp. 170-189.

[7] J. G. Proakis, Digital Communications, 3rd ed., New York: McGraw-Hill, 1995.

[8] M. J. Juntti and J. O. Lilleberg, "Implementation Aspects of Linear Multiuser Detectors in Asynchronous CDMA Systems," Proc. of ISSSTA '96, pp. 842-846.

[9] E. S. Esteves and R. A. Scholtz, "Bit Error Probability of Linear Detectors in the Presence of Unknown Multiple Access Interference," Proc. of IEEE Globecom '97, pp. 599-603.

[10] M. B. Pursley, "Performance Evaluation for Phase-Coded Spread-Spectrum Multiple-Access Communication - Part I: System Analysis," IEEE Trans. Commun., vol. COM-25, no. 8, Aug. 1977, pp. 795-799.


Multiuser Multistage Detector for Mode 1 of FRAMES Standard

Adrian Boariu and Rodger E. Ziemer1

Electrical and Computer Engineering Department
University of Colorado at Colorado Springs

1420 Austin Bluffs Pkwy
Colorado Springs, CO 80907, USA

[email protected], [email protected]

Abstract

In code/time division multiple access (C/TDMA) systems, both multiple access interference (MAI) and intersymbol interference (ISI) arise. MAI is present due to the CDMA format, while the ISI is due to channel multipath. In this paper we present a type of multistage detector that overcomes these problems and, in addition, has complexity proportional to the number of CDMA users, is computationally efficient, and is suitable for a pipeline implementation that allows fast data processing. The simulation results are compared to the single-user bound for the average bit error probability (BEP).

1. Introduction

The FRAMES (Future Radio Wideband Multiple Access System) standard defines third-generation multiple access schemes for personal communication systems (PCS). Mode 1 [1] is wideband TDMA. The frame is 4.615 ms in duration and has eight slots. One option of Mode 1 is the CDMA feature: each slot can bear up to eight CDMA users. The slot format includes two data bursts, separated by a midamble and ending with a guard interval. The spreading factor is 16 chips/symbol, and the modulation employed is QPSK. The TDMA feature guarantees that all K users arrive synchronously at the receiver. Subsequently, we will analyze only the uplink (mobile to base station), because this is the link that usually limits the capacity of the system.

1 This work was supported by the BMDO and managed by the Office of Naval Research under Contract N00014-920-J01761/P0004

Page 139: Wireless Personal Communications: Emerging Technologies for Enhanced Communications

2. The detector description

In a mobile radio environment the signal is subject to fading and multipath propagation effects. As a result, the receiver must deal with intersymbol interference (ISI) due to the channel multipath, in addition to the multiple access interference (MAI) due to the CDMA feature. Perfect estimation of the channel via the midamble is assumed. COST-207 defines the power-delay profiles and Doppler power spectra for the mobile channel [5].

Previous solutions to this problem can be found in [2]. The main approach is to estimate the entire data burst for all users at once, using well-known detection methods: decorrelation, minimum mean-square error, block feedback, etc. The main drawback is that such detection

methods must deal with a large matrix that must be inverted or Cholesky-factorized.

Our solution is different [4]. For the coherence of the paper we briefly describe the method. Figure 1 shows the ISI introduced by a bad urban (BU) radio channel, where the equivalent impulse response is the convolution of hc and c, the impulse response of the channel and the spreading code, respectively. Since there is a guard interval at the end of each time slot, there is no ISI over the [0, Ts) interval. Hence, we can detect the first symbols of all users by integrating the received signal over one symbol interval. This is the first stage. In the next stage we integrate the signal over two symbol intervals and use the estimates from the previous stage to cancel the ISI. The same procedure is used for the third stage. Thus, succeeding stages add more energy to the detection process due to the extended integration interval.
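The stage-wise idea (detect early symbols, then cancel their ISI before detecting later ones) can be caricatured for a single user and a two-tap channel; this is our own simplification, not the multiuser stage detector itself.

```python
# Single-user toy of the stage principle: the first symbol is ISI-free thanks
# to the guard interval; each later symbol is detected after cancelling the
# ISI predicted from the previous stage's estimate.
import numpy as np

h = [1.0, 0.4]                          # two-tap equivalent impulse response
b = np.array([+1, -1, -1, +1, -1])      # transmitted symbols
r = np.array([h[0] * b[k] + (h[1] * b[k - 1] if k else 0.0)
              for k in range(len(b))])  # received samples (noise-free)

b_hat = []
for k, rk in enumerate(r):
    isi = h[1] * b_hat[k - 1] if k else 0.0   # ISI from the prior estimate
    b_hat.append(int(np.sign(rk - isi)))      # cancel, then slice
print(b_hat)                                  # recovers the transmitted symbols
```

In the actual detector the same cancellation is done jointly for all users via cross-correlation matrices, which is what keeps the matrix dimension at the number of users rather than the burst length.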


Define the outputs of the matched filters at the moment kTs, with the received signal r(t) integrated over a length of iTs, N being the data package length, and the cross-correlation matrices for lag kTs with the received signal integrated over a length of iTs. For the other lags the cross-correlation matrices are zero. The equations describing the stage detector for the BU case follow from these definitions.

For each of these stages, a feedback detector based on Cholesky factorization is employed, with xk detected successively (due to the triangular form of the factors). A final stage, based on the method described in [3], is added to improve the performance. The proposed detector has four stages for the BU situation and three stages for the typical urban (TU) case.

The single-user lower bound can be computed from the eigenvalues of the (1/2) Rc diag(P) matrix, where W is the number of channel paths, the P vector contains the power delay profile samples, and Rc is a matrix formed from the autocorrelation function of the specific user's spreading code [6].

3. Simulation results

Figure 2 presents some simulation results for the BU and TU channel cases. With each additional stage employed, the performance improves. For reference, simulation curves using the block feedback detector (based on Cholesky factorization) [2] are also depicted. We see that the detector presented in this paper gives an improvement of around 0.75 dB over the block feedback detector.


4. Conclusions

A multiuser multistage detector has been presented that estimates data on a symbol-by-symbol

basis, rather than detecting the entire data packet at once as block feedback detectors do. The BEP

performance improves with the number of stages employed. The lower bound for the BEP lies

within 0.5 dB of the simulation results for the eight-user case. The method uses matrices

having dimension equal to the number of users and is independent of the data burst length. It is

suitable for pipeline implementation because the stages can run independently.

References

[1] T. Ojanpera et al., "Comparison of multiple access schemes for UMTS," Proc. IEEE Veh. Technol. Conf., pp. 490-494, May 1997.

[2] P. Jung and J. Blanz, "Joint detection with coherent receiver antenna diversity in CDMA mobile radio systems," IEEE Trans. Veh. Technol., vol. 44, pp. 76-88, Feb. 1995.

[3] M. K. Varanasi and B. Aazhang, "Multistage detection in asynchronous CDMA communications," IEEE Trans. Commun., vol. 38, pp. 509-519, April 1990.

[4] A. Boariu and R. E. Ziemer, "Stage detector for C/TDMA systems," accepted for publication at the Development & Application Systems 1998 Conference, Suceava, Romania.

[5] "COST 207: Digital land mobile radio communications," Final Report, Luxembourg: Office for Official Publications of the European Communities, 1989.

[6] J. Omura and T. Kailath, "Some useful probability distributions," Technical Report no. 7050-6, Stanford University, Sept. 1965.


Self-organizing Feature Maps for Dynamic Control of Radio Resources in

CDMA PCS Networks

William S. Hortos

Florida Institute of Technology, Orlando Graduate Center, 3165 McCrory Place, Suite 161,

Orlando, FL 32803

ABSTRACT

The application of artificial neural networks to the channel assignment problem for cellular code-division multiple access (CDMA) networks has previously been investigated. CDMA takes advantage of voice activity and spatial isolation because its capacity is only interference-limited, unlike time-division multiple access (TDMA) and frequency-division multiple access (FDMA), where capacities are bandwidth-limited. Any reduction in interference in CDMA translates linearly into increased capacity. To satisfy the demands for new services and improved connectivity for mobile communications, small-cell systems are being introduced. For these systems, there is a need for robust and efficient management procedures for the allocation of power and spectrum to maximize radio capacity. Topology-conserving mappings play an important role in biological processing of sensory inputs. The same principles underlie Kohonen's self-organizing feature maps (SOFMs), which are applied here to the adaptive control of radio resources to minimize interference and, hence, maximize capacity in direct-sequence (DS) CDMA networks. The SOFM-based approach is applied to published examples of DS/CDMA networks, and its results are informally compared to the performance of Hopfield-Tank algorithms and genetic algorithms for the channel assignment problem.

1. INTRODUCTION

During network system planning, radio test data are used to estimate radio propagation profiles and

user demands for mobile cellular service. Based on these stationary estimates, a so-called compatibility

or interference matrix and traffic-demand vector for the planned network are derived. The channel

assignment problem (CAP) is the allocation of the required number of channels to each cell to meet

traffic demands subject to interference conditions. This network representation has been used to develop

energy functions for Hopfield-Tank neural networks (HNNs) as well as genetic algorithms (GAs) for the

CAP in cellular radio networks 1,2. Unfortunately, in systems of smaller or non-uniform cells, used to

provide personal communication services (PCS), the rapid dynamics of cell-to-cell handoff, changes in

service grade, and transmit power control render algorithms based on stationary estimates inadequate.

Channel assignment to the network cells can be either static (SCA) or dynamic (DCA) 3. In SCA a

set of channels is permanently assigned to each cell, that is, the coverage area of a base station. In DCA the

channels are assigned to cells on a call-by-call basis to improve network capacity and spectral utilization.

The DCA approach is suited to non-uniform and time-varying traffic demands. To adapt to changing

network conditions, the past observed dynamics of the network can be used to update radio resource

allocation (RRA) procedures. The past history of direct-sequence (DS)/code-division multiple access


(CDMA) networks in the service areas of the base stations allows iterative updates of interference

statistics, limiting new measurements only to recent network events.

The RRA for a new service request, i.e., the assignment of power, base-station antenna, and call

activity monitoring to provide a channel for the call, depends globally on the position of the mobiles as

well as on previous assignments that may interfere with the contemplated RRA. Extensions of

Kohonen’s self-organizing feature map (SOFM) are constructed to adjust to the spatial and temporal

configurations of mobiles and base stations, called the “state” of the network. The approach is based on

a composite mapping, from radio resources into the set of interference levels in the service areas of the

base stations, then from the interference set into the set of CDMA channels.

In 1991 Kunz proposed the first Hopfield neural network (HNN) model for solving SCA in cellular

FDMA networks 1, 4. Smith and Palaniswami 5 were the first to apply a method based on Kohonen’s SOFM

for the SCA problem in FDMA and TDMA networks. HNN convergence has been improved recently

using auxiliary conditions on the energy function’s coefficients to ensure hill-climbing (HC) on the

hypercube of feasible solutions 2, 5. Further convergence improvement has been realized for hill-

climbing HNNs (HC-HNNs) via Abe’s non-uniform integration step size 2, 6. The effect of sectored cells

on the convergence of HC-HNN and genetic algorithms (GAs) for the CAP has been examined in DCA 2.

2. COVERAGE, CAPACITY AND INTERFERENCE IN DS/CDMA PCS NETWORKS

The primary goal of DS/CDMA PCS network design is management of the three-way tradeoff among

call quality, coverage and capacity in order to maximize overall system performance economically. The

advantages of DS/CDMA derive from soft/softer handoff, power control, activity monitoring, and reuse

of the same spectrum in all cells and sectors. The cell coverage for the reverse link in a DS/CDMA PCS

network is determined by radio design and operating environmental factors, as indicated by the following

simplified link budget:

where Pm is the maximum mobile station transmit power in dBm, is the receiver sensitivity in dBm, Gb

(Gm) is the base station (mobile) antenna gain in dBi, Lp is the vehicle penetration loss in dB, Lc is the

cable loss in dB, GHO is the handoff gain in dB, is the lognormal fade margin in dB, and is the

channel loading. The transmission parameters Pm, Gm, Lp, and Lc are established at network installation

and are not adaptive during network operation. Gb can be modified through antenna selection. Actual mobile

transmit power Pm(t) is controlled to limit interference. The latter two can be considered radio

resources of the network. The remaining terms in (1) depend on propagation and traffic conditions. The


following equation simplifies the relationship between CDMA capacity and radio parameters on the

reverse link:

where m is the number of simultaneous users per cell or area, B/Rb is the processing gain (PG), v is the

voice activity factor, r is the frequency reuse efficiency, and g is the sectorization gain. Capacity

improvement of a CDMA system over TDMA and FDMA comes from the universal frequency reuse

capability, soft handoff, power control, and voice activity monitoring. For CDMA, the frequency reuse

efficiency r is represented by the ratio of interference from the service area of the given base station to

the total interference from that service area and all neighboring service areas. Thus, r is sensitive to

propagation conditions. Due to power control, the signal as well as intra-cell interference remains the

same on the reverse link, so that r is only inversely proportional to inter-cell interference. Ambient noise

establishes the required received signal power at the cell site, and thus establishes the user's power or

cell radius for a given transmit power level. In a reuse system, capacity is alternately defined by Lee7

as

where M is the total number of channels, s is the number of sectors and reuse factor r can be expressed as

where d is the co-channel cell separation, that is, the minimum distance, and R is the cell radius. In a

DS/CDMA system, the broadband frequency channel can be reused in every adjacent cell, so that r is

close to 1. In a practical sense, d=2R, i.e., all available frequencies can be used in each cell, so from

(5), r = 1.33. Radio capacity is also based on (3) and (4). However, in DS/CDMA, r is fixed but M, the

total number of available channels is variable, and depends on the level of interference, i.e., M(I).
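The quoted value r = 1.33 is consistent with Lee's usual form of the reuse factor; assuming (5) reads as follows (an assumption here, since the displayed equation is not shown above):

```latex
r = \frac{1}{3}\left(\frac{d}{R}\right)^{2},
\qquad d = 2R \;\Rightarrow\; r = \frac{1}{3}\,(2)^{2} = \frac{4}{3} \approx 1.33 ,
```

so setting d = 2R indeed gives r = 1.33 as stated.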

Interference, in turn, is the result of the radio resources assigned to each call. As shown in Figure 1,

interference at the mobile arises both from the home cell and the adjacent cells, all operating on the same

frequency f1. The received carrier-to-interference ratio (C/I) can be written as

where Eb is the energy per bit, I0 is the interference power per hertz (which includes N0, the thermal noise per

hertz), Rb is the information rate, B is the bandwidth per channel, and B/Rb = PG. Processing gain is

used to overcome I, and, hence, determine the number of channels that can be created. Given two sets

of values for Eb/Io and Rb/B, corresponding C/I values are found from (6). From Figure 1, C/I at the

mobile location A can be used for a worst-case scenario. If

For an IS-95 system, let B = 1.228 MHz and Rb = 9.6 kbps, then PGdB = 21 dB. Voice quality at a

frame error rate (FER) of 10^-2 typically corresponds to Eb/I0 = 7 dB. Given the values of PG and Eb/I0,

C/IdB = 7 - 21 = -14 dB or C/I = 0.03981. From this value of C/I, the number of traffic channels mi in

each coverage area of Figure 1 can be obtained by

where mi and are the number of channels and power level at cell i, respectively. Solving (7), one has

In the case of a single radio cell, that is,

channels/cell. In the case
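As a numerical check of the IS-95 figures above: the closed forms of (7) and (8) are not reproduced here, so a simple equal-received-power model, C/I = 1/(m - 1) for a single isolated cell, is assumed in this sketch.

```python
import math

# IS-95 numbers quoted in the text (B as stated there, 1.228 MHz).
B = 1.228e6        # channel bandwidth, Hz
Rb = 9.6e3         # information rate, bps
pg_db = 10 * math.log10(B / Rb)     # processing gain, about 21 dB
ebio_db = 7.0                       # Eb/I0 for voice at 10^-2 FER
ci_db = ebio_db - pg_db             # C/I in dB, about -14 dB
ci = 10 ** (ci_db / 10)             # linear C/I, about 0.0398

# Assumed single-cell model: C/I = 1/(m - 1), hence m = 1 + 1/(C/I).
m = 1 + 1 / ci                      # roughly 26 channels in one isolated cell
print(round(pg_db, 1), round(ci_db, 1), round(ci, 4), int(m))
```

Under this model a single radio cell supports on the order of 26 traffic channels; with inter-cell interference included, the per-cell figure drops accordingly.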

3. INTERFERENCE, TRAFFIC DEMAND, AND RADIO RESOURCE ALLOCATION

The following conditions summarize the interference constraints in cellular network operation.

1. The co-channel constraint (CCC) is that the same channel cannot be assigned simultaneously to

certain pairs of radio cells. The CCC is determined by the co-channel interference (CCI), which, in

turn, depends on the interference control methods applied in coverage areas of the base stations.


2. The adjacent channel constraint (ACC) is that channels adjacent in their domain's distance metric

(frequency, time slot or code) cannot be assigned to adjacent radio cells simultaneously. The ACC is

related to the channel reuse factor r.

3. The co-site channel constraint (CSC) is that any pair of channels assigned to a radio cell must be at a

minimum distance in their domain. In DS/CDMA, this distance depends on the interference level

produced by the radio resource assignment used in the service area of each base station.

For a network of N base stations, the constraints are commonly described by an NxN symmetric

matrix, called the interference matrix C. Each off-diagonal element cij in C represents the minimum

separation distance between a channel assigned to cell (or sector) i and a channel assigned to cell (or

sector) j. The CCC is represented by cij =1, while the ACC is represented by cij = 2. Setting cij = 0

indicates that base stations i and j are allowed to assign the same channel to users in their service areas.

In DS/CDMA, cij = 0 under interference control. Each diagonal element cii represents the minimum separation

distance between any two channels assigned to cell (or sector) i. This is the CSC and is always

satisfied, provided that, in sectored networks, sectors are equivalent to cells.
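The three constraints can be captured concretely in a small interference matrix; the entries below are illustrative values for a hypothetical four-cell network, not taken from the paper.

```python
import numpy as np

# Toy interference matrix C for N = 4 base stations (values hypothetical).
N = 4
C = np.zeros((N, N), dtype=int)
np.fill_diagonal(C, 5)      # CSC: min separation between channels in one cell
C[0, 1] = C[1, 0] = 2       # ACC: adjacent channels forbidden between cells 0, 1
C[1, 2] = C[2, 1] = 1       # CCC: same channel forbidden between cells 1, 2
# C[0, 3] = 0 means cells 0 and 3 may reuse the same channel freely,
# which in DS/CDMA is the target condition under interference control.
assert (C == C.T).all()     # the constraint matrix is symmetric
print(C)
```

A channel assignment is then feasible when every pair of assigned channels respects the separation that the corresponding entry of C demands.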

Since DS/CDMA capacity can only be increased by reducing other-user interference, a departure can

be taken from the model of a two-dimensional interference matrix for assigning N base stations a fixed M

channels. In a DS/CDMA network, each base station i establishes a radio coverage area that can carry

any of mi(I) channels, where the sum of the mi(I) gives the total number of channels in the network, and mi(I)

represents the local capacity of the coverage area i. As discussed by Gilhousen et al., using the

elements of directional antenna sectorization, voice activity monitoring, and reverse-link power control,

the system can adaptively regulate intra-cell and inter-cell interference, I, and, thereby, manage channel

capacity 8. Considering these elements as network radio resources suggests representing their effect on

network capacity by a composite mapping, on a 3N-dimensional lattice, into the set M of

vectors of channel capacities in the coverage areas of the base stations, where

S represents the set of cell sectorization values in any coverage area; V is the set of states of voice

activity monitoring in each area; P is the set of discrete power control levels; I is the real bounded

interval of all possible interference values; and M is the subset of the set of N-dimensional vectors of

non-negative integers, whose ith component is the call capacity at base station i. This mapping relates an

RRA to network capacity, through the interference level that the assignment produces.

Traffic demand for channels, in a network of N base stations, is represented by an N-vector called the

traffic demand vector T. The component ti of T represents the number of active channels (new calls and


handoffs) to be assigned to the coverage area of base station i. Let qik denote the kth active user

assigned to station i. Then, the constraints described by the interference matrix are given by

Dynamic network conditions are modeled by time-varying entries in C and T, and lead to dynamic

radio resource allocation (DRRA) problems with transient traffic demands and interference thresholds.

4. THE SELF-ORGANIZING FEATURE MAP APPROACH

Learning in a self-organizing system is due to the processes of competition and inhibition. Presented

with an external input, neurons compete with each other to claim the input. Synaptic weights are adapted

to reflect the result of the competition. The idea of the radio resources in a DS/CDMA network

“competing” to be assigned calls suggests an approach based on Kohonen’s SOFM 9. The Kohonen

map is modified to solve discrete-space optimization problems among the lattice of radio resources. The

approach is first developed for static RRA (SRRA) problems, then extended to DRRA problems.

All feasible solutions to the RRA problem, as formulated by the HNN approach, lie at the vertices of

a 3N-dimensional hypercube, where N is the number of base stations and R

is the total number of available radio resource combinations. The image of these vertices under also

intersects the constraint hyperplane defined by the interference matrix, traffic demand vector and

interference constraints (9) resulting from the RRAs. Since ti is an integer for all i, the -image of the

radio resource constraint set is an integral polytope. Variables on this hypercube, are defined as

which, for convenience, are denoted by Xj, r. Let X denote the NxSxVxP-

dimensional array of these variables. Since the range of values for each radio resource and local

interference level is assumed to be bounded, each range can be normalized to the interval [0,1]. Without

loss of generality, the set of radio resources and its image under in the interference set are both

contained in unit hypercubes. Suppose one approaches a vertex continuously from within the unit

hypercube, starting from a point on the constraint hyperplane and inside the hypercube, which represents

a feasible, non-integer solution to the RRA problem. One continues to move along the constraint

hyperplane, gradually approaching a feasible vertex. The continuous variable in the interior of the

hypercube is denoted by wr, j. Thus, for a metric Q and W defined as the array (wr, j), Q(W) = Q(X) at

the vertices. The value wr, j denotes the probability that the variable in position (r, j) of array X is

activated. Self-organization is applied to the array of probabilities or synaptic weights, W.


5. THE SOFM ALGORITHM FOR PCS NETWORKS

The structure of the discrete-space form of the SOFM is shown in Figure 2, where W is expressed as a

RxN matrix, with the vector index r = (s, v, p) replaced by a scalar index into the lattice of allowable RRA

vectors. It consists of an input layer of N nodes and an array of R output nodes. The output nodes

correspond to the solution array of capacities resulting from each RRA, while the input layer denotes the

N coverage areas in the PCS network. The weight connecting input node j to node (corresponding to r)

of the output array is given by wr, j. A cell in which an RRA is required is presented to the

network through the input layer at node j. Physically, a new call or handoff is presented to the PCS

network in the service area of base station j. The nodes of the output layer compete with each other to

determine which column of the solution matrix (channel produced by the RRA) accommodates the input

vector with the least impact on quality or cost. The weights are then adapted to indicate the RRA

decision using the neighborhood topology.

Consider the case where an RRA is required in coverage area j*. An input vector x is presented with

a “1” in position j* and “0” elsewhere. For each node r of the output layer, Vr,j*, the cost to the objective

function of assigning r to coverage area j*, is computed. The cost potential Vr,j* of node r for a given

input vector is defined as

where the degree of interference caused by the RRA is represented by the weight Pj, i, d+1, called the

cost or proximity indicator, and by the distance in the channel capacity domain between

the images of RRAs r and s. When the two channels coincide, the interference cost should be at a

maximum, with cost decreasing until the two channels are sufficiently separated so that interference is

below a threshold. Elements of the proximity indicator array P are recursively defined from C as

The dominant node, m0, of the output layer is the network node with minimum cost potential for a

particular input vector, that is, Vm0,j* ≤ Vr,j* for all nodes r and fixed j*. The neighborhood of the

dominant node m0 is the set of nodes ordered according to the ranking of the cost

potentials corresponding to each node, that is, where is

the size of the neighborhood in the SOFM network for the coverage area of base station j*.

Unlike Kohonen’s original formulation 9, where ordering corresponds to the physical structure of the

network, the neighborhood of the dominant node is not defined spatially here, but depends on the

ranking of the computed cost criterion of each output node for a given input vector. Thus, dominant

nodes and their neighborhoods are determined by competition according to the cost criterion; the weights

are modified according to Kohonen’s rules within the dominant neighborhood. The size of the dominant

neighborhood depends on which base station is receiving input.

At the completion of Kohonen weight updating, the weight array W may be moved off the constraint

hyperplane, resulting in an infeasible solution. In the next stage of the SOFM, the weights of the nodes

outside the dominant neighborhood are organized around the modified weights, so that W remains a

feasible solution. This stage can be performed by a HC-HNN. Expressing the array W as a vector w, w

is considered the vector of states of a continuous HNN. The HNN performs random and asynchronous

updates on w, excluding the weights in the dominant neighborhood, to minimize the energy function:

where is the projection onto the constraint hyperplane given by

and is the identity operator. The energy function (14) is expressed in terms of a

solution vector x, constructed from the solution array X by reordering the Xi,r, where r = (s, v, p), in an

ordering of four-integer indices. In terms of x, the demand constraints can be expressed as Dx = T,

where T is the demand vector and array D consists of N rows of 1’s and 0’s. Some argue that the HNN

need not be the HC-HNN discussed in 10, since the weights need only intersect the constraint hyperplane

and, as such, there is no optimal point5. However, reducing the time to minimize the energy function


(14) speeds the update of non-dominant weights and, consequently, the convergence rate of the global

weight adaptation. It may even improve the initial configuration of weights when the next service

requirement is input to the SOFM network. The next randomly selected requirement is input to the

SOFM network to begin a new update stage of the algorithm, where the dominant node and its neighbors

are determined and their weights modified. This procedure is repeated until the SOFM weights stabilize

to a feasible 0-1 solution which is a local minimum of the RRA problem.
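The feasibility-restoring stage, returning the weights to the demand hyperplane Dw = T, can be sketched as an affine projection. The dimensions, D, and T below are toy values, and the HC-HNN dynamics are replaced by a direct projection for illustration only.

```python
import numpy as np

# Hedged sketch of restoring feasibility after a Kohonen update: project
# the weight vector w back onto the affine set {w : Dw = T}.
rng = np.random.default_rng(0)
N, R = 3, 12                               # 3 base stations, 12 RRA combinations
# Each row of D sums the weights belonging to one base station's subarray.
D = np.kron(np.eye(N), np.ones(R // N))    # shape (3, 12), rows of 1's and 0's
T = np.array([4.0, 2.0, 6.0])              # demand vector
w = rng.random(R)                          # weights after the Kohonen update

# Affine projection: w' = w - D^T (D D^T)^{-1} (D w - T)
w_proj = w - D.T @ np.linalg.solve(D @ D.T, D @ w - T)
assert np.allclose(D @ w_proj, T)          # demand constraints restored
```

The HC-HNN described in the text reaches the same hyperplane by asynchronous energy-minimizing updates rather than a one-shot projection; the projection above only illustrates the target condition Dw = T.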

During convergence, the magnitude of the weight modifications and the size of the neighborhoods are

decreased. Initially, the size of the neighborhood for each subarray of W is

large, but is decreased linearly until it reaches the caller demand for base station j. Since the SOFM weight

modifications depend on the order in which the resource requirements are input, the approach is

inherently stochastic. In this form, the network must be run repeatedly to arrive at different local

minima and may be subject to stability problems that plagued earlier HNN algorithms for the CAP 1,6,10.

The following SOFM algorithm can be applied to the SRRA problem in DS/CDMA networks.

1. Initialize the network weights using the maximum network capacity possible

under any RRA. This yields an initial feasible, although non-integer, solution.

2. Randomly select a new radio resource requirement for a base station. Represent this requirement as

the input vector x. Find the position j* (base station coverage area) which is active, i.e., xj* = 1.

3. Compute the potential Vr,j* for each index r in the output layer array according to (10).

4. Determine the dominant node, m0, by competition such that its cost potential is minimum, and identify

its neighboring nodes according to the size of the neighborhood for input at j*.

5. Update the weights in the neighborhood of the dominant node according to

a modification of Kohonen's slow updating rule, in which α and σ are monotonically decreasing, positive

functions of sampled time, and γ is a normalized weighting vector used in tie-breaking at a node. All

weights outside the updated neighborhood are left unchanged in this step.

6. The weights may no longer lie on the constraint hyperplane, so an HC-HNN is applied to return to a

feasible solution. The vector w is modified around SOFM weight adaptations so that Dw = T.

7. Repeat Step 2 until radio resource requests at all base stations have been selected as input vectors to

the SOFM network. This forms one period of the algorithm. Repeat this procedure for K periods.

The parameters α and σ in Step 5 are decreased according to some monotonically decreasing function.


8. Repeat Step 2 until the weights change negligibly; this is considered stable convergence of the weights

for a given neighborhood size. Decrease the neighborhood sizes linearly for all j.

9. Repeat Step 8 until the neighborhood reaches its final size for each base station j, j = 1, · · ·, N.
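The competitive core of Steps 2 through 5 can be sketched compactly. The cost potential (10) and the exact Step 5 rule are not reproduced in the text, so a generic linear cost and a Gaussian neighborhood decay are assumed here, with hypothetical array sizes.

```python
import numpy as np

# Sketch of one competitive update (Steps 2-5) on the weight array W.
rng = np.random.default_rng(1)
N, R = 5, 8                      # base stations (inputs), candidate RRAs (outputs)
W = rng.random((R, N))           # synaptic weight / probability array
P = rng.random((R, R))           # hypothetical proximity-indicator array

def sofm_step(W, P, j_star, alpha=0.3, sigma=1.5, hood=3):
    V = P @ W[:, j_star]                      # stand-in cost potential per node
    order = np.argsort(V)                     # competition: rank nodes by cost
    m0 = order[0]                             # dominant node: minimum potential
    for rank, r in enumerate(order[:hood]):   # dominant neighborhood only
        h = np.exp(-rank**2 / (2 * sigma**2)) # decay with cost rank, not position
        W[r, j_star] += alpha * h * (1.0 - W[r, j_star])
    return m0                                 # other weights stay untouched here

m0 = sofm_step(W, P, j_star=2)                # one service request at cell j* = 2
assert 0 <= m0 < R and np.all((W >= 0.0) & (W <= 1.0))
```

Note the neighborhood is ordered by the ranking of cost potentials, as in the text, rather than by spatial position as in Kohonen's original map; Step 6 would then restore feasibility before the next input is drawn.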

The weighting vector γ is a heuristic used to damp oscillations in SRRA algorithm updates 11:

Each component in the vector γ is normalized. The SRRA SOFM parameters are selected heuristically5:

The SOFM approach is suited to the dynamic rearrangement of radio resources. The DRRA problem

begins at the point in the SRRA solution where a new call cannot be assigned without a rearrangement of

existing RRAs. A time-varying demand vector T(n) and interference constraints are satisfied when

D(n)x(n) = T(n) at sample time n. Each sample period represents the arrival of a single call or multiple

calls to the network and, during that period, input vectors, corresponding to the areas in which calls are

placed, are presented to the SOFM network at a rate determined by the demand distribution at that time.

Since feasibility is always restored during the second stage (Step 6) of the SOFM, any rearrangement of

the existing calls to enable a new call is automatic. If no rearrangement is possible, either the SOFM

may not converge to a feasible set of RRAs, or a feasible rearrangement may be found by allowing the

interference to increase above pre-determined levels. Higher interference levels could, in turn, diminish

channel capacity. In this case, the call can be blocked and the previous state of the system reinstated.

To improve SOFM convergence, Steps 5 and 6 for updates of non-dominant weights can be made

more robust by adopting Abe's approach 10 to ensure that an HC-HNN only leads to feasible RRAs that

are stable points of the system of update weights, i.e., satisfy D(n)x(n) = T(n) at sample time n. A

piecewise linear saturation function can replace the exponential in Step 5. In Step 7, faster updating can

be obtained based on Abe’s convergence acceleration for HC-HNNs to optimize integration step sizes,

now applied in only K = 1 period 6. With little time to wait for stable convergence of the weights, once

the SRRA is completed, the DRRA algorithm omits Step 8. The vector of neighborhood sizes can

also be initially set to the demand vector T(n) at sample time n, at the start of the current update.


6. SIMULATION OF THE SOFM FOR RRA PROBLEMS

The performance of the discrete-space SOFM for SRRA and DRRA is examined through simulations

of network examples considered by Kunz 1, Lai 12, and Funabiki et al. 13. Interference matrices and

traffic demand vectors for these examples, shown in Figure 3, represent cellular networks with non-

homogeneous traffic. Homogeneous traffic is modeled by demand vectors with equal components.

Time-varying traffic is approximated by the cyclic rotation of T or periodic replacement with a new

vector during the simulations.

In the simulations, the radio resources at each base station form a triple from the selection of cell

sectorization: 1 (omni), 3 (120°), 6 (60°) sectors in a coverage area; voice activity monitoring: 0 for

“off” or 1 for “on”; and power control level: 0 for no power control, 3 levels, 6 levels, 10 levels. Each

triple of resource values r = (s, v, p) is mapped to an interference estimate Ir based on statistics derived

from measurement studies of microcellular networks such as 14, 15. The interference establishes the

DS/CDMA reuse factors, and, hence, determines the number of available channels. The uniform cost

vector for each r is c = (1, 1, 1). Individual RRA cost terms r·c are summed over the number of active

base stations in the network. This sum is added to the quality or cost criterion.

Simulations were performed on a personal computer using adaptive learning models in MATLAB’s

Neural Network Toolbox, which were extensively modified to implement the SOFM outlined in Steps 1

through 9. Algorithm performance is measured alternately on the basis of the average probability of call

blocking (ABP) or the total active calls in the network (transient capacity), along with the average

number of iterations (ANIs) required for asymptotic convergence, based on a prescribed error value.

An omnicell network of N= 21 cells is considered with the interference matrix in Figure 3(c) with M

= 221, the minimum number of channels. Entries in T are set initially to ti = 4 for each cell i and the

SRRA SOFM is run with K = 100 different initial states. Parameter ti is incremented to 6, 8, and 16. Each

case is repeated for K = 100 different initial states and the results averaged. The ABPs for the cases are

0.010, 0.024, 0.089, and 0.117, respectively. The corresponding ANIs are 38.4, 49.2, 102.5 and 213.9.

Resource allocations for each initial state at each base station area are set to minimum values, (1, 0, 0),

and then allowed to adapt to the demand vector. This may account for large ANIs in the test cases. Cell

sectorization shows the greatest sensitivity to a uniform traffic increase, and is at six 60° zones at

simulation termination with ti = 16. Power control and voice activity monitoring are activated in a

greater number of coverage areas as homogeneous demand increases from 4 to 16. The optimal RRAs

effectively reduce the cii from 5 to 2 in the coverage areas of the corresponding base stations i.


Direct comparisons are tenuous when using similar, but not identical, examples. As an indirect

comparison, a modified HC-HNN is run to solve the CAP for the same 21-omnicell network, with a

TDMA scheme and homogeneous traffic successively set to ti = 4, 6, 8, and 16 for each cell i. ABPs are

0.021, 0.044, 0.138, and 0.26, respectively; corresponding ANIs are 17.4, 24.7, 38.2 and 47.5. Cells

sectored into three 120° zones reduce cii from 5 to 4 and improve ABP to about 0.153 with ANI of 33.8

when ti = 16. A GA with forced channel reassignment for the CAP of the 21-cell network, where each

cell has three channels available and tk = ti = 1, attains an ABP of about 0.017, averaged over 100 runs 2.

A non-homogeneous network example, introduced by Kunz, is based on an area measuring 24 km by

21 km around Helsinki, Finland 1. There are 25 base stations, distributed non-uniformly over the area,

and 73 channels to satisfy the interference conditions. The interference matrix and traffic demand vector

in Figure 3(a) and 3(b) were generated from this data. The SOFM of the SRRA problem for this network

is simulated, with K = 100 initial states and all RRAs initialized to (1, 0, 0) in each coverage area. The

ABP is 0.078, with ANI equal to 421.6. While this is far better than convergence in 2450 iterations

reported by Kunz for his version of the HNN 1, the SOFM is much slower than either the HC-HNN or GA

approaches to the CAP for the same example2 . This may be due to the SOFM “learning” the correct

RRAs iteratively over the search space. The SOFM does attain a lower ABP due to its ability to increase


capacity to meet demand. The algorithm slowly increments the sectorization values to 6, sets the voice

activity monitoring “on”, while power control assignments vary over the 25 base stations.

To evaluate performance of the SOFM modified for DRRA problems, the components of T2 in the

Kunz model are cyclically shifted 5 positions down every periods, with and 100, to

represent dynamic local traffic. Algorithm sensitivity to initial RRAs, in response to demand shifts, is

examined by initially using the following three patterns in the coverage areas: (1) all RRAs are (1, 0, 0);

(2) all RRAs are (3,1,1); and (3) the RRAs are set to the final values after K = 100 periods of the SRRA.

In response to these cyclic demand shifts, the ABPs for the DRRA with initial RRA pattern (1) increase

from 0.068 to 0.381, as the number of periods for the algorithm to learn the new network demands,

decreases from 100 to 10, respectively. For initial RRA pattern (2), the ABPs increase from 0.032 to

0.294, as decreases from 100 to 10, respectively. Lastly, using the final resource patterns from the

SRRA problem for the Kunz network to initiate the RRAs in the dynamic resource control of the same

network causes the ABPs to range from 0.002 to 0.169, as decreases from 100 to 10, respectively.

These results can be compared informally to a GA simulation for the CAP of the Kunz model, with

cross-over mutation and bias weights A = 1.0 and B = 1.1. The components of T2

are cyclically shifted 5 positions down every 100 generations to test convergence sensitivity to demand

shifts. The ABP for this GA varies from 0.01 to 0.343 depending on the size of the demand shift2. All

simulation runs of this GA converge within 200 generations.

7. CONCLUSIONS

The Kohonen self-organizing map has been extended to the problem of RRA for incoming calls in a

DS/CDMA PCS network. The problem statement generalizes the determination of the minimum number

of channels required to obtain an interference-free assignment. The SOFM application better reflects

practical systems, in which radio resources are regulated to minimize network interference. The

Kohonen SOFM is extended to perform discrete-space optimization of SRRA problems, then further

modified for more robust performance in DRRA problems. Both RRA algorithms have been simulated

for known network examples, with results informally compared to published simulations of HC-HNN

and GA methods applied to the CAP for similar networks. Future investigations must refine the

approach based on more accurate models of PCS network behavior. Duque-Antón et al. observe that

the interference produced by the simultaneous use of a channel in two cells is not fully known in the PCS

environment, since traffic dynamics are more difficult to model with increasingly smaller cell size.

Further research is required to learn the interference constraints in such an adaptive environment by

statistically correlating carrier and interference power levels with call activities14, 15.


REFERENCES

1. D. Kunz, “Channel assignment for cellular radio using neural networks,” IEEE Trans. Veh. Technol.,

vol. VT-40, pp. 188-193, 1991.

2. W. Hortos, “Comparison of neural network applications for channel assignment in cellular TDMA

networks and dynamically sectored PCS networks,” Appl. and Sci. of Artificial Neur. Net. III, Proc.

of SPIE, vol. 3077, pp. 508-524, Orlando, FL, 1997.

3. P. T. H. Chan, M. Palaniswami, and D. Everitt, “Neural network-based dynamic channel assignment

for cellular mobile communication systems,” IEEE Trans. Veh. Technol., vol.VT-43, no. 2, pp. 279-

288, 1994.

4. J. J. Hopfield and D. W. Tank, “Neural computation of decisions in optimization problems,”

Biological Cybern., vol. 52, pp. 141-152, 1985.

5. K. Smith and M. Palaniswami, “Static and dynamic channel assignment using neural networks,”

IEEE J. Selected Areas Comm., vol. 15, no. 2, pp. 238-249, 1997.

6. S. Abe, “Convergence acceleration of the Hopfield neural network by optimizing integration step

sizes,” IEEE Trans. Syst. Man and Cybern.-Pt.B, vol. 26, no. 1, pp. 194-201, 1996.

7. W. C. Y. Lee, “Applying the intelligent cell concept to PCS,” IEEE Trans. Veh. Technol., vol.VT-43,

pp. 672-679, 1994.

8. K. Gilhousen, I. Jacobs, R. Padovani, A. Viterbi, L. Weaver, and C. Wheatley, “On the capacity of a

cellular CDMA system,” IEEE Trans. Veh. Technol., vol. 40, no. 2, pp. 303-312, 1991.

9. T. Kohonen, “Self-organized formation of topologically correct feature maps,” Bio. Cybern., vol. 43,

no. 1, pp. 59-69, 1982.

10. S. Abe, “Global convergence and suppression of spurious states of the Hopfield neural nets,” IEEE

Trans. Circuits Syst., vol. CAS-40, no. 4, pp. 246-257, 1993.

11. K. N. Sivarajan, R. J. McEliece, and J. W. Ketchum, “Channel assignment in mobile radio,” in Proc.

39th IEEE Veh. Technol. Soc. Conf., pp. 846-850, San Francisco, CA, 1989.

12. W. K. Lai and G. G. Coghill, “Channel assignment through evolutionary optimization,” IEEE Trans. Veh. Technol., vol. 45, no. 1, pp. 91-96, 1996.

13. N. Funabiki and Y. Takefuji, “A neural network parallel algorithm for channel assignment problems

in cellular radio networks,” IEEE Trans. Veh. Technol., vol. VT-41, no. 4, pp. 430-437, 1992.

14. M. Duque-Antón, D. Kunz, B. Rüber, and M. Ullrich, “An adaptive method to learn the compatibility

matrix for microcellular systems,” in Proc. IEEE 44th Veh. Technol. Conf. VTC-94, pp. 848-852,

Stockholm, Sweden, 1994.

15. M. Duque-Antón, D. Kunz, and B. Rüber, “Learning the compatibility matrix for adaptive resource

management in cellular radio networks,” Eur. Trans. Telecommun., vol. 6, no. 6, pp. 657-664, 1995.


3

Complex Scaled Tangent Rotations (CSTAR) for Fast Space-Time Adaptive Equalization of Wireless TDMA

Massimiliano (Max) Martone, Member, IEEE
WATKINS-JOHNSON COMPANY
Telecommunications Group
700 Quince Orchard Rd.
Gaithersburg, MD 20878-1794, USA
E-mail: [email protected]

Abstract

A new update algorithm for space-time equalization of wireless TDMA signals is presented.

The method is based on a modified QR factorization that reduces the computational complexity

of the traditional QR-Decomposition-based Recursive Least Squares method while maintaining numerical stability and tracking capability. Square-root operations are avoided through the use of an approximately orthogonal transformation, termed the Complex Scaled Tangent Rotation.

I. INTRODUCTION

The space-time equalization concept was first proposed in [1] and subsequently applied to

wireless TDMA in [2], where the implicit optimality of the scheme was conjectured. The use of

different feedforward filters at the antennas and one single feedback filter was demonstrated to be

effective because it was able to simultaneously combat signal fading, intersymbol and cochannel

interference. The implementation of the joint update requires special attention because low signal-to-noise ratios and fast frequency-selective fading channels generally result in ill-conditioned adaptation. Recursive Least Squares based on QR Decomposition [3] is a well-known and numerically well-behaved method to perform the filter update. However, the high computational complexity of the method has always been considered a significant drawback. In this work we propose a new algorithm, based on an approximate QR factorization, which improves on existing schemes in terms of computational efficiency, mainly because square-root operations are avoided. The approach uses a generalization of the scaled tangent rotation of [5] to update the

Cholesky factor of the information matrix without needing to form it. The performance of the

method is compared to more traditional algorithms by means of computer simulations in fixed

point arithmetic for a realistic scenario as specified in the standard IS-136 [6], [7] for cellular

communications in the US.


II. SPACE-TIME QR-BASED MMSE EQUALIZATION

Consider a K-antenna receiver. At the k-th antenna delayed and attenuated replicas of

the signal are received (k = 1,2,...,K). The impulse response of the two multipath diversity channels can be expressed as h_k(t) = \sum_m \alpha_{m,k} e^{j\theta_{m,k}} \delta(t - \tau_{m,k}), where \tau_{m,k}, \alpha_{m,k}, \theta_{m,k} are the delay, amplitude, and phase of the m-th path as received at the k-th antenna.

The complex baseband modulated signal is

are the complex symbols defining the signal constellation used for the particular digital

modulation scheme1. The filter is a square root raised cosine shaping filter with roll-off

factor equal to 0.35, T is the signaling interval. The baseband signal received at the k-th antenna

is where is the carrier

frequency, and is additive Gaussian noise. is sampled at rate (R is an integer

usually in the range and square root raised cosine filtered to obtain the complex I/Q

fractionally spaced samples The optimum combining/equalization scheme

has K feedforward filtering sections (one per antenna) and one feedback filtering section (see Fig.

1 for K = 2). Since the algorithm jointly optimizes the taps of these filters so as to minimize the

Mean Squared Error using samples of the received signal taken at different points in time and in

space, this architecture is called the Space-Time Minimum Mean Square Error (MMSE) equalizer.

We can express the output of the MMSE Space-Time equalizer in vector notation as

with

for k = 1,2,...,K,

in the US standard for cellular communications [6].

are fractionally spaced taps, while are symbol spaced taps. Moreover

c(n) is updated once per symbol.


for k = 1,2,...,K, and

The adaptive algorithm minimizes the Mean Squared Error defined as

and the goal of the adaption process is to adjust

1)) to converge toward the solution . The sequence is

generated using the known training symbols (during training) and using past decisions (during

data demodulation, in decision directed mode).

The QR approach: The equalization problem can be recast as solving, at each step n + 1,

the problem

where is the forgetting factor is the data matrix and

The normal equations define

the desired minimizer. The use of orthogonal transformations to solve least squares problems is well established, as is the inadvisability of using the normal equations [3]. Suppose that a matrix is known from the previous step such that

with Q orthogonal and upper triangular matrix, then the problem

stated in (2) is equivalent to

, because Euclidean distance is preserved by orthogonal transforma-

tions. The traditional QR-RLS approach [3] obtains c(n + 1) from the solution of the triangular

linear system where is obtained by sweeping the row vector

in (3) through orthogonal transformations represented by is obtained

applying the same orthogonal transformations to

III. CSTAR TRANSFORMATION AND THE NEW METHOD

The novelty of the method we present is in the following two points:


1. the algorithm tracks the variation in c(n) from step n to step n + 1 rather than c(n) itself.

This saves some computations and simplifies part of the algorithm.

2. the orthogonal transformations are approximated by Scaled Tangent Rotations rather than

the traditional Givens rotations.

Define From the previous Section

with upper triangular. Since

is the solution of3

where Hence can be found by solving the triangular system

where satisfies

It is then evident that all we need to solve the system (6) can be obtained by forming

and sweeping the bottom part of this matrix using plane rotations4. The orthogonal ma-

trix can be found as a product of Givens rotation matrices [3]:

3 Just substitute.
4 It should be clear now that by tracking we have simplified the rotation step, because the first L elements of the last column of are equal to zero. We save 2L complex multiplications with respect to the QR-RLS of [3] and, more importantly, the dynamic range required to represent the elements of the last column of the augmented matrix is reduced.


A single Givens rotation annihilates the (L + l, l) element using

where

and The considerable drawback of the method is in the computation

of the angles for the Givens rotations. Square-root computations are not easily implemented in

DSP processors and are even more problematic in VLSI circuits. Usually they involve iterative

procedures whose convergence is not always guaranteed. Scaled Tangent Rotations (STAR) were

proposed in the context of RLS adaptive filtering for real time-series in [5]. We generalize the

rotation to the complex domain but there is a difference that is important to note. In STAR [5]

scaling is necessary to prevent instability caused by the fact that the tangent function may become

infinite. Our modification for complex signals still contains a scaling operation but the factors are

not normalized to unity as in [5], because this would again involve a square-root operation. The

CSTAR (Complex Scaled Tangent Rotation) transformation

in analogy to the Givens rotation is defined in terms of each as in Table 1, where the

complex sign function is and the sweep is

applied to the augmented matrix (see Table 2). Observe that the Givens elementary complex rotation has two remarkable properties. First, it selectively zeroes one predetermined element of any complex matrix, which is what is needed to triangularize the data matrix. Second, it is an orthogonal transformation; that is, it is an orthogonal matrix that preserves the original least squares problem. The CSTAR elementary transformation maintains the first property but is not an orthogonal transformation. Note in the flow diagram of the

transformation (Table 1) that, whenever scaling is required, we obtain a non-orthogonal

elementary matrix (and of course a non-orthogonal T(n)). So the CSTAR solution deviates from

the optimum least squares solution5. However, it is possible to show, along the same lines as [5], that the deviation from orthogonality is limited to the first few adaptation steps because, as

5 In fact at any given step

where and are the triangular matrices obtained from the sweep performed applying Givens

Rotations and CSTAR Rotations, respectively.


new samples are processed, the scaling operation becomes increasingly unnecessary. In other words, the algorithm has the property that

where I is the identity matrix and we have used the notation

for the Frobenius norm of a complex matrix M whose generic i,j element is

Our experimental results show that the effect of this initial bias is negligible. The algorithm

can be summarized as in Table 2. In Table 4 the computational complexity of the CSTAR

method is compared to the QR approach using Givens Rotations (denoted QRG) and the EWRLS

(exponentially windowed RLS of [4]) in terms of real multiplications, reciprocals, square roots

and additions.

IV. CSTAR-RLS WITHOUT THE BACKSUBSTITUTION STEP

The backsubstitution step is implicitly a serial process: the unknowns are obtained one by one

with a complexity. In addition, L reciprocals are needed. A simple derivation [8] shows that a very elegant solution exists that avoids divisions and makes the complexity

It is possible to prove the two following facts using the matrix factorization lemma.

Fact 1: The inverse Hermitian of can be recursively computed

using

where

is the (asymptotically) orthogonal matrix that sweeps the bottom part of (8) using Plane Rotations.

Fact 2: The solution of the triangular system (6) is obtained as:

where is obtained from (10) and is obtained from (7).

The updating algorithm can be summarized as in Table 3. If the matrix is initialized

with the matrix where is any real number, then must be initialized to


V. SIMULATIONS

A dual-diversity (K = 2) TDMA system for cellular communications has been simulated

according to [6], [7]. We assume a two-ray Rayleigh fading diversity channel

[7]. The delay interval is the difference in time of arrival between the two rays at each antenna.

The speed of the transmitter mobile defines the time-varying characteristics of the channel. The

frame consists of 162 symbols, 14 of which are dedicated to the training sequence. The delay interval for both diversity channels is equal to T (one symbol period), to describe an environment severely affected by intersymbol interference. The length of the feedforward

sections is 3, the length of the feedback section is 2. Sampling rate is 2/T (R = 2). The

described algorithm, the EWRLS algorithm (traditional RLS [4]) and the QR-RLS algorithm

(traditional QR-based RLS, [3]) have been implemented using 24 bits of resolution in the fixed

point arithmetic representation. Fig. 2 shows performance of the CSTAR algorithm compared to

EWRLS and to QR-RLS in terms of Mean Squared Error estimated and averaged over 100 runs.

The value of is 0.855 for Fig. 3 shows Bit Error Rate (BER)

results. The EWRLS algorithm reveals numerical problems directly impacting BER performance.

The CSTAR algorithm achieves performance similar to the traditional Givens-based QR-RLS.

VI. CONCLUSIONS

We have presented a new method to update the digital filters of a MMSE K-antenna space-

time decision-feedback receiver. The algorithm is particularly suited for fixed-point arithmetic

implementations because it preserves the numerical stability and performance of a QR-based

approach but it is less computationally demanding due to the absence of square root operations.

Experimental results were presented for the IS-136 North-American [6] standard for cellular

TDMA communications to validate the method and to confirm the effectiveness of the approach.

REFERENCES

[1] P. Monsen, “MMSE equalization of interference on fading diversity channels”. IEEE Trans. Comm., vol. 34,

pp. 5-12, Jan. 1984.

[2] C. Despins, D. Falconer, S. Mahmoud, “Compound strategies of coding, equalization, and space diversity for

wide-band TDMA indoor wireless channels”. IEEE Trans. Vehicular Tech., vol. 41, pp. 369-379, Nov. 1992.

[3] S. Haykin, Adaptive Filter Theory. Englewood Cliffs, NJ: Prentice-Hall, 1986.

6 In general, low speeds require larger values of to get the best performance out of the tracking scheme. However, there is marginal BER degradation if is kept fixed at 0.855.


[4] J. G. Proakis. "Digital communications". McGraw-Hill, 1989.

[5] K. J. Raghunath and K. K. Parhi “Pipelined RLS Adaptive Filtering Using Scaled Tangent Rotations”. IEEE

Trans. Signal Proc., vol. 44, No. 10, pp. 2591-2604, Oct. 1996.

[6] TIA/EIA/IS-136.1-A, “TDMA Cellular/PCS - Radio Interface - Mobile Station - Base Station Compatibility -

Digital Control Channel” and TIA/EIA/IS-136.2-A, “TDMA Cellular/PCS - Radio Interface - Mobile Station

- Base Station Compatibility - Traffic Channels and FSK Control Channel”, October 1996.

[7] TIA/EIA/IS-138, “800 MHz TDMA Cellular - Radio Interface - Minimum Performance Standards for Base

Stations”, December 1994.

[8] B. Yang and J. F. Böhme, “Rotation-based RLS algorithms: unified derivations, numerical properties, and parallel implementation”, IEEE Trans. Sig. Proc., vol. 40, pp. 1151-1167, May 1992.


Table 2: The CSTAR algorithm.

Initialization

• 0) Inputs

• 1) Compute the prediction error

• 2) Form matrix

• 3) Sweep using L CSTAR Rotations

• 4) Solve by backsubstitution the triangular system

• 5) Obtain

Table 3: The CSTAR algorithm without Backsubstitution.

Initialization

• 0) Inputs

• 1) Compute the prediction error

2) Sweep using L CSTAR Rotations

• 3) Obtain


4

An Effective LMS Equalizer for the GSM Chipset

Jian Gu, Jianping Pan, Renee Y. Watson, and Steven D. Hall
CommQuest Technologies, Inc.
527 Encinitas Boulevard, Encinitas, CA 92024
(760) 634-6181, [email protected]

Abstract

An effective equalization approach to implementing receivers for Global System for Mobile

communications (GSM) is presented. The approach utilizes a linear transversal filter with a Least Mean Square (LMS) algorithm for tap coefficient training. Effective soft decision variables

are generated from the output of the LMS equalizer. It is well known that an LMS equalizer has

low complexity and can be easily implemented in an Application Specific Integrated Circuit

(ASIC). Performance comparisons with 16-state Viterbi equalization are included. This

approach is incorporated into CommQuest’s GSM chipset design for GSM receivers using its

Communication Application Specific Processor technology.

I. Introduction

Many communication systems have to deal with multipath propagation channels, especially for

those used for cellular/PCS mobile communications. The Global System for Mobile

communications (GSM) is one such system and has been adopted by

many countries worldwide. Gaussian-filtered Minimum Shift Keying (GMSK) modulation is

used for GSM. The GSM standard [9] requires that any receiver demodulating the GMSK-modulated signal be able to handle multipath propagation channels with delay spreads up to. In contrast, the bit duration (equivalently, the symbol duration) of the GMSK signal is about 3.69 µs. Therefore, a delayed, probably attenuated, and phase-rotated signal traveling through the longest path of the propagation channel may arrive at the receiver up to about five symbol durations after the signal traveling through the shortest path. Without utilizing any advanced equalization

techniques, it is impossible to meet the Bit Error Rate (BER) requirements specified in the

standard [9].

There are many equalization methods that can be used in receivers for GSM systems. A well-

known and widely used approach is called Viterbi equalization -- an approach using the


Maximum Likelihood Sequence Estimation (MLSE) [2]-[7]. The Viterbi equalizer needs a propagation channel estimator to generate all possible signal sequences that would result from transmission through the estimated propagation channel. Given the channel memory length,

the number of such signal sequences is limited. These generated signal sequences are compared with the received signal sequence, and the one most like the received signal is selected. The data sequence associated with the selected signal sequence is the

recovered data sequence. Propagation channel estimation is performed by detecting a known data

sequence called the midamble, which is embedded in the middle of the burst. When the

propagation channel becomes very noisy and/or strong interfering signals appear, both of which

are common in the mobile communication environment, performance of the Viterbi equalizer

could be significantly compromised because a reliable estimate of the propagation channel is not available.

Another typically used method is equalization with a linear transversal equalizer [1], [2]. It has been said that the Least Mean Square (LMS) algorithm does not converge fast enough, and that fast-converging algorithms such as Recursive Least Squares (RLS) with decision feedback could therefore be used for GSM receivers [1]. However, crashes of the decision-directed equalizer due to error propagation have been observed [10], and techniques that restart the equalizer may have to be used.

In this paper, we present an approach using a transversal equalizer with the LMS algorithm for GSM receivers. Considering error propagation and quantization effects (using 8-bit arithmetic), no decision feedback is introduced, thus avoiding possible complications. With proper

initialization and training, the LMS equalizer converges fast enough to provide the same or better

performance in comparison with Viterbi equalization in very noisy and/or strong interference

environments, especially near the reference sensitivity level and reference interference level

specified in the standard ETSI/GSM 05.05 [9]. The transversal equalizer is also less complicated than the Viterbi equalizer and is therefore very easy to implement.

Section II describes the demodulator structure with the transversal equalizer. Section III

presents simulation results for both the transversal equalizer with the LMS algorithm and the

Viterbi equalizer. Some conclusions are drawn in Section IV.

II. Structure of the Demodulator


Fig. 1 shows the block diagram of the demodulator. The converter in Fig. 1 includes all functions needed to convert the received radio signal into a pair of in-phase and quadrature digital samples. The sampling duration is a multiple of the bit interval, which is about 3.69 µs for GSM. GSM utilizes Time Division Multiple Access (TDMA) and GMSK with BT = 0.3, where B is the 3-dB bandwidth of the Gaussian filter and T is the bit duration. Since GMSK is a binary modulation scheme, its symbol interval is equal to the bit interval. The dash-lined block in Fig. 1 includes functions

for the baseband I-Q signal processing. The GMSK modulated signal is transmitted in burst

format. Fig. 2 depicts the structure of the bits in a burst, which is to be transmitted in a time slot. There are 116 data bits, separated by a 26-bit synchronizing sequence called the midamble, with the burst preceded by 3 tail bits and followed by 3 tail bits. After being

differentially encoded, a burst of modulating bits is formed and fed into the GMSK modulator.

The demodulator recovers the 116 data bits of a burst from the received signal.

The baseband demodulator consists of a sample buffer, a midamble correlator, a processing

unit, a decimator, a transversal equalizer, a power measurement unit, a Signal-to-Noise-Ratio

(SNR) estimator, a soft decision variable generator, and a timing recovery unit.

The sample buffer stores the baseband I-Q samples collected at a rate which is a multiple of the bit rate. The timing recovery unit controls when to start collecting samples of the

received burst signal.

Fig. 3 shows a diagram of the midamble correlator which is a Finite Impulse Response (FIR)

filter modeled by a tapped delay line but with L non-zero tap coefficients, where

The L tap coefficients are the conjugates of the I-Q samples corresponding to the

midamble portion of the received baseband signal at sampling intervals of At each cycle of

it correlates L received I-Q samples spaced at in the sample buffer. Under multipath

propagation conditions, the relative delay between the first ray and the last ray of the radio signal

can be over If is the maximum relative delay between the first ray and the last ray, we

use a midamble search window of since we want to align the nominal timing with the

center of the search window. Therefore, the time span of the samples in correlation is

and the total number of output values is 2MN. Nominal timing is provided by the timing

recovery unit.


The processing unit executes a number of functions including calculation of the Multipath

Intensity Profile (MIP), peak detection, estimation for the propagation conditions of the dominant

ray, and tap reduction for the transversal equalizer based on the MIP.

The MIP is obtained by calculating the square of the magnitude of the I-Q samples at the

output of the correlator over the midamble search window. Assume the auto-correlation function of the midamble signal is ideal. If there is only one ray of the received signal, then the MIP has only one peak, and the timing of the peak corresponds to the arrival timing of the received single-ray signal. However, if there are several rays with different arrival timings, then there will be several peaks, whose values represent the strengths of the rays and whose timings relate to the arrival timings of the rays. The maximum magnitude value corresponds to

the strength of the dominant ray of the propagation path. The output of the midamble correlator is

the propagation channel response, if there is only one ray, or an approximation of the propagation channel response, if there are several rays. For example, if there is only one ray and the propagation channel response is C, then if we scale the received signal by the conjugate of C, the resulting signal can be perfectly demodulated without any equalization.

The decimator has a rate of M:1. The epoch of the decimator is aligned with the timing of the

dominant ray from the peak detection. Decimation on results in a subsequence

where K is related to the epoch of the decimator

and is an integer between 0 and M-1.

The transversal equalizer shown in Fig. 4 is a tapped delay line equalizer in training

mode with 2N+1 taps. After being trained, it is simply an FIR filter. The center tap is initialized based on C, where C is the

propagation condition estimate of the dominant ray of the received signal. The remaining taps

are set to null before training. A tap-reduction algorithm is used to determine the number of taps

needed to ensure good performance based on the estimated dispersion of the propagation

channel. After the tap reduction algorithm, some of the taps are excluded from the training

process, i.e. these taps will not be updated from their initial null values during the training

process. The well-known LMS algorithm [8] is used in the training process. Referring to Fig. 4

and letting and be the k-th symbol of the midamble and the estimated received k-th


symbol of the midamble respectively, then the updated tap coefficients are

j may assume values in a smaller range as a result of tap reduction. is the step size and is the tap coefficient after the (k – 1)-th training step. After training, all tap coefficients remain fixed

during demodulation of the current received burst.

Based on the MIP, the 2MN samples of the squared magnitude are divided into 2N+1 subsets.

Fill an array mip[.] by summing up M samples of magnitude and assigning the sum to an element

of the array. Note that mip[N] is the sum of M samples in the middle of the 2MN samples and

that mip[0] and mip[2N] are a sum of M/2 samples. Each element of mip[.] corresponds to a tap

of the transversal equalizer. Define start_tap as the index of the first non-zero tap and end_tap as

the last non-zero tap. The following is the C program of the tap reduction algorithm:

where p is a scale factor that establishes a threshold so that the tap reduction algorithm still operates when the propagation channel is less dispersive. The two aggressiveness factors are both less than 1; the larger the aggressiveness factors, the more aggressive the tap reduction. Also, we can treat non-causal and causal taps differently by making the two factors different.
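The C listing referenced above did not survive extraction. The following is a hedged sketch of the pruning it describes, with the threshold anchored on the dominant tap via the scale factor p and separate aggressiveness factors for the non-causal and causal sides; all names and the exact pruning rule are assumptions, not the original program:

```c
#include <assert.h>

/* mip[c] is the power captured by equalizer tap c; main_tap indexes the
 * dominant ray (c = N in the text).  Edge taps whose power falls below
 * alpha * p * mip[main_tap] are dropped from the outer ends inward,
 * with alpha_nc / alpha_c < 1 the non-causal / causal aggressiveness
 * factors: larger alpha prunes more taps. */
static void tap_reduce(const double mip[], int ntaps, int main_tap,
                       double p, double alpha_nc, double alpha_c,
                       int *start_tap, int *end_tap)
{
    /* threshold tied to the dominant tap; p keeps the algorithm active
       even when the delay profile is nearly flat (low dispersion) */
    double thresh = p * mip[main_tap];

    int s = 0;                       /* first retained, non-causal side */
    while (s < main_tap && mip[s] < alpha_nc * thresh)
        s++;
    int e = ntaps - 1;               /* last retained, causal side */
    while (e > main_tap && mip[e] < alpha_c * thresh)
        e--;

    *start_tap = s;
    *end_tap = e;
}
```

Taps outside [start_tap, end_tap] would then be excluded from LMS training, as the text describes.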

The power measurement unit calculates the average signal power of the midamble portion, which is an average over the related I-Q samples. Note that this average power includes

noise power and/or interference power; hence, it can be written as S+N, where S is the wanted

signal power and N is the power of undesired signals including noise and/or interference. The


SNR estimator calculates S/((S+N) - S) = S/N, where S is the wanted signal power, obtained at the output of the midamble correlator: the correlator is in fact a matched filter for the midamble and rejects most of the noise and interference.
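Numerically the estimate reduces to S/N, with S from the correlator and S+N from the power measurement unit. A minimal sketch (the function name and the clamp for a near-zero noise estimate are assumptions):

```c
#include <assert.h>

/* s_corr: wanted-signal power S from the midamble correlator output.
 * total_power: S+N from the power measurement unit.
 * Returns S / ((S+N) - S) = S/N, clamped when the noise estimate is
 * zero or negative (which can happen with noisy power estimates). */
static double estimate_snr(double s_corr, double total_power)
{
    double noise = total_power - s_corr;   /* N = (S+N) - S */
    return (noise > 0.0) ? s_corr / noise : 1e9;
}
```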

The soft decision variable generator scales the outputs of the equalizer by the estimated SNR

and transforms them to samples with a given bit-width. The I-Q combiner, shown in Fig. 5,

converts the complex representation of the signal to a one-dimensional sample sequence. The

sign bit of each sample in the sequence is the hard-decision data bit. For a simpler

implementation, the combiner can be placed before the soft decision variable generator.

The timing recovery unit collects the timings of the dominant rays of several bursts from the

processing unit and calculates the average timing. Only the timings associated with “good” bursts

are used in the averaging process, where “good” means that the estimated SNR of a burst is large.

It then compares the new average timing with the previous average timing and adjusts the

clock/counter as necessary. The adjusted timing is called new nominal timing and is applied to

the midamble correlator and other units for processing of the next burst.

III. Simulation Results

The LMS equalizer has been incorporated into our fixed-point system simulation of the receiver, including the Surface Acoustic Wave (SAW) filters, Intermediate Frequency (IF) amplifiers with Automatic Gain Control (AGC), and an IF bandpass sampling Analog-to-Digital Converter (ADC). Simulation results show that the LMS-equalizer-based receiver fully satisfies, with substantial margins, the sensitivity and interference performance requirements specified in the GSM

standard. Assuming the overall RF/IF system noise figure is 8 dB, the receiver has a BER

performance margin of about 5 dB at the reference sensitivity level for the TU50 channel

conditions. The receiver’s co-channel interference performance has about 1.8 dB margin at the

reference interference level for TU50. These results and other simulation results for all other

propagation conditions are consistent with measured results from actual hardware.

In the following, we make some comparisons with the performance of a receiver using 16-state

Viterbi equalization. The same IF stages and a similar tap reduction algorithm are used for the

16-state Viterbi equalization cases. All simulation runs last 20,000 bursts.

Note that the noise power in the SNR calculation is the white noise power over a frequency

band of 200 kHz and that if the overall system noise figure is 8 dB then SNR = 11 dB

corresponds to a signal level of -102 dBm.


Fig. 6 illustrates the raw BER curves for both the transversal equalizer (labeled LMS) and the Viterbi equalizer (labeled VE). Fig. 7 illustrates the Residual Bit Error Rate (RBER) of Class 1b and the Frame Erasure Rate (FER) for the two cases, where RBER is defined as the ratio of the errors detected over the “good” frames to the number of bits in the “good” frames, a “good” frame being one that is not erased. The propagation channel condition is TU50, the Typical Urban case with a vehicle speed of 50 km/h. Fig. 8 and Fig. 9 show co-channel interference performance over TU50.

Note in Fig. 6-Fig. 9 that the two equalization approaches have similar performance over these SNR and Carrier-to-Interference Ratio (CIR) ranges. Moreover, for the coded BER (RBER of Class 1b and FER) the LMS equalization is generally better than Viterbi equalization over most of the range.

IV. Conclusions

We have presented an effective demodulator with an LMS equalizer for GSM handheld units.

The demodulator uses a number of techniques to ensure performance and efficiency. Proper tap

coefficient initialization and burst timing setting allow the LMS equalizer to converge at a faster

rate. The tap reduction algorithm retains only those taps that are necessary for handling the

propagation condition of a given burst. The tap reduction algorithm also treats non-causal and

causal taps differently in order to retain significant taps. The retained taps are trained while the other taps are set to zero. Using fewer taps results in lower residual error when the channel dispersion is low.

Compared with the commonly used Viterbi equalizer, the transversal equalizer has the same or

better performance for uncoded BER when SNR is low (signal levels near the reference

sensitivity level of -102 dBm) or when CIR is near the reference interference level which is 9 dB

for co-channel interference. This is mainly due to the fact that the channel response estimator for

the Viterbi equalizer does not provide a good channel response estimate at low SNRs or in

severe interference conditions. In addition, the LMS equalizer has a better coded BER

performance than that of the Viterbi equalizer when SNR and/or CIR is low.

The LMS-equalizer-based receiver has been implemented in a digital Application Specific Integrated Circuit (ASIC) called the Communication Application Specific Processor and incorporated into CommQuest's chip set for GSM handheld units. Reception tests of the units

have verified the simulation results. Testing shows the receiver has about 6 dB margin in


sensitivity performance for the TU50 channel conditions and 7 dB margin for static channel

conditions. The receiver has about 2 dB margin for co-channel interference performance for

TU50. The units also satisfy all other BER requirements specified in the GSM standard [9] with

substantial margins.

References:

[1] Giovanna D’Aria, Roberto Piermarini, and Valerio Zingarelli, “Fast Adaptive Equalizers for

Narrow-Band TDMA Mobile Radio,” IEEE Trans. on Vehicular Technology, vol. VT-40, pp.

392-404, May 1991.

[2] John G. Proakis, “Adaptive Equalization for TDMA Digital Mobile Radio,” IEEE Trans. on

Vehicular Technology, vol. VT-40, pp. 333-341, May 1991.

[3] Renato D’Avella, Luigi Moreno, and Marcello Sant’Agostino, “An Adaptive MLSE Receiver

for TDMA Digital Mobile Radio,” IEEE J. Select. Areas Commun., vol. SAC-7, pp. 238-247,

Jan. 1989.

[4] G. Benelli, A. Garzelli, and F. Salvi, “Simplified Viterbi Processors for the GSM Pan-

European Cellular Communication System,” IEEE Trans. on Vehicular Technology, vol. VT-43, no. 4, pp. 870-878, Nov. 1994.

[5] G. Benelli, A. Fioravanti, A. Garzelli, P. Matterini, “Some Digital Receivers for the GSM

Pan-European Cellular Communication System,” IEE Proc.-Commun., vol. 141, no. 3, pp. 168-176, June 1994.

[6] J. C. S. Cheung and R. Steele, “Modified Viterbi Equalizer for Mobile Radio Channels

Having Large Multi-path Delays,” Electronics Letters, vol. 25, no. 19, pp. 1309-1311, Sept.

1989.

[7] E. Del Re, G. Benelli, G.Castellini, R. Fantacci, L. Pierucci, and L. Pogliani, “Design of a

Digital MLSE Receiver for Mobile Radio Communications,” 1991 GLOBECOM, pp. 1469-1473.

[8] John G. Proakis, Digital Communications. New York: McGraw-Hill, 1983.

[9] European Telecommunications Standard Institute (ETSI), ETSI/GSM 05.05, 1996.

[10] E. Eleftheriou and D. D. Falconer, “Adaptive equalization techniques for HF channels,” IEEE J. Select. Areas Commun., vol. SAC-5, pp. 238-247, Feb. 1987.


5

Self-Adaptive Sequence Detection via the M-algorithm

Ali R. Shah
Ericsson Inc.
740 East Campbell Rd., Richardson, TX 75081

Bernd-Peter Paris
Department of Electrical and Computer Engineering
Center of Excellence in C3I, George Mason University
Fairfax, VA 22030

April 29, 1998

Abstract

The problem of implementing self-adaptive equalization algorithms in real time is addressed. Self-adaptive equalization determines the transmitted sequence without using a training sequence. The advantage over current adaptive equalization techniques is discussed. Tree search procedures have been shown to be more effective than dynamic programming. Simulation results for tree search procedures based on the M-algorithm are presented. The focus is on the effects of channel order, sequence length and modulation format on the BER. The performance of the M-algorithm is compared with traditional approaches.

1 Introduction

Many problems in digital communications can be modeled by means of a discrete-time finite-state Markov process representing the signal, which is observed in independent identically distributed noise. We are considering the case when the process parameters are unknown. We are investigating methods to exploit the structure and finiteness of the state space of the signal to determine the most likely state sequence without resorting to a known training sequence. We will refer to this approach as self-adaptive sequence detection (SASD). The problem is also referred to as self-adaptive or blind equalization in the communication literature.

Self-adaptive sequence detection (SASD) has several advantages over techniques where the channel coefficients are estimated via a training sequence. In digital mobile radio, data


is transmitted over channels whose impulse response changes over time. In traditional approaches, a training sequence is sent with each data packet to re-estimate the channel. An overhead of 10-20% is introduced, which could be eliminated through the use of self-adaptive techniques. The intersymbol interference (ISI) can be represented by a finite impulse response filter whose coefficients are unknown. A sequence of equally likely symbols, drawn from a discrete and finite alphabet A, is input to the channel.

For slowly varying channels, the coefficients can be assumed to be unknown but constant for each data packet. The channel model is shown in Figure 1.

The goal is to determine the most likely input sequence out of the M^N candidate sequences, where M is the alphabet size and N is the sequence length.

The paper is organised as follows. We begin with the mathematical preliminaries and describe the metric that needs to be computed. Current techniques to find the transmitted sequence are then presented. Next we explain the M-algorithm and two methods to recursively update the metric. This is followed by simulation results and the performance of the M-algorithm.

2 Mathematical Preliminaries

We propose approaches similar to Feder and Catipovic [2] and Ghosh and Weber [3]. We briefly describe the approach to channel equalization when the channel coefficients are known. This is referred to as Maximum Likelihood Sequence Detection (MLSD). The most likely sequence is the one that maximizes the joint probability density function (jpdf) of the observation v given the transmitted sequence and channel coefficients:

Assuming that the noise is Gaussian, we can explicitly write the jpdf in (1) as:

The sequence that maximizes (2) also maximizes the log of the density function. Taking the log yields the likelihood expression, which allows simplification of the exponential term.


Maximizing this expression over all sequences s yields the maximum likelihood sequence estimate; s and v are vectors comprising the input symbols and observations, respectively. The most likely sequence can be estimated as:

where S is a matrix whose columns are shifted versions of s. The Viterbi algorithm [4] can be used to find the most likely sequence. When the channel coefficients are unknown, they can be estimated.

We can reformulate our criterion by substituting the estimate from (4) into (3).

After some manipulation, we can formulate our criterion as:

where the matrix in (6) is a projection matrix. Among all possible input sequences, we are looking for the one which maximizes the metric in (6). The optimal sequence is the one that spans the signal subspace containing the largest portion of the received signal.
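In standard least-squares notation (an assumption here, since the symbols were lost from the scanned equations; cf. the blind-MLSE formulation of Ghosh and Weber [3]), the estimate in (4) and the criterion in (6) take the form:

```latex
\hat{\mathbf{h}} = (\mathbf{S}^{T}\mathbf{S})^{-1}\mathbf{S}^{T}\mathbf{v}, \qquad
\hat{\mathbf{s}} = \arg\max_{\mathbf{s}} \; \mathbf{v}^{T}\mathbf{P}(\mathbf{s})\,\mathbf{v}, \qquad
\mathbf{P}(\mathbf{s}) = \mathbf{S}\,(\mathbf{S}^{T}\mathbf{S})^{-1}\mathbf{S}^{T},
```

so that P(s) projects v onto the signal subspace spanned by the columns of S, consistent with the subspace interpretation just given.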

For developing sequential algorithms, we do not use variants of the Viterbi algorithm like the Generalized Viterbi Algorithm (GVA) [5] and Per-Survivor Processing (PSP) [6]. The basis for our adaptation is tree search algorithms, originally developed for the decoding of convolutional codes. In this paper we consider the M-algorithm [7].

3 The M algorithm

The M-algorithm retains the sequences that are best in terms of the criterion in Equation (6). It extends each retained sequence by M branches, where M is the alphabet size. The metric is computed for each sequence. The paths are then sorted in descending order and the best paths are retained while the rest are deleted. The M-algorithm can also be viewed as an extend-all-nodes variant of the stack algorithm. A flow diagram of the M-algorithm is in Figure 2.
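The retain/extend/sort loop just described can be sketched as follows. This is a hedged illustration: the fixed small sizes, the whole-sequence metric callback, and the toy matching metric are assumptions for demonstration, not the paper's implementation.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define MAXB 64    /* max retained paths (stack size) */
#define MAXN 64    /* max sequence length */

/* whole-sequence metric, larger is better (cannot be made additive
 * in the SASD setting, hence the tree search) */
typedef double (*seq_metric_fn)(const int *seq, int len, void *ctx);

typedef struct { int sym[MAXN]; double score; } path_t;

static int cmp_desc(const void *a, const void *b)
{
    double d = ((const path_t *)b)->score - ((const path_t *)a)->score;
    return (d > 0) - (d < 0);
}

/* Best sequence of length n over alphabet 0..m-1 (m <= 8 assumed),
 * keeping bwidth survivors per stage; result written to out[]. */
static double m_algorithm(int n, int m, int bwidth,
                          seq_metric_fn metric, void *ctx, int *out)
{
    path_t cur[MAXB], next[MAXB * 8];
    memset(cur, 0, sizeof(cur));
    int ncur = 1;                           /* single empty root path */

    for (int depth = 0; depth < n; depth++) {
        int nnext = 0;
        for (int i = 0; i < ncur; i++)      /* extend every survivor */
            for (int s = 0; s < m; s++) {
                next[nnext] = cur[i];
                next[nnext].sym[depth] = s;
                next[nnext].score = metric(next[nnext].sym, depth + 1, ctx);
                nnext++;
            }
        /* sort in descending order, keep the best bwidth paths */
        qsort(next, nnext, sizeof(path_t), cmp_desc);
        ncur = nnext < bwidth ? nnext : bwidth;
        memcpy(cur, next, ncur * sizeof(path_t));
    }
    memcpy(out, cur[0].sym, n * sizeof(int));
    return cur[0].score;
}

/* toy metric for the usage example: symbols matching a target prefix */
static double match_metric(const int *seq, int len, void *ctx)
{
    const int *target = (const int *)ctx;
    double s = 0.0;
    for (int i = 0; i < len; i++) s += (seq[i] == target[i]);
    return s;
}
```

With a sufficiently large bwidth the search becomes exhaustive; the per-stage sort is the extra cost relative to dynamic programming noted below.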

As compared to dynamic programming, there is the additional burden of sorting the sequences. Dynamic programming is infeasible for SASD, as the metric includes a quantity that cannot be written in an additive form. Therefore, the metric for self-adaptive equalization depends on the whole sequence and not


just the last L symbols, which in turn implies that the concept of a “state variable” is no longer useful. The implication is that Bellman’s principle of optimality [8] does not apply to SASD. In this framework, eliminating good surviving sequences that end in the same “state” is not justified by the principle of optimality.

In this paper, the results show that the M-algorithm does much better than PSP. In the simulation results we consider the effect of filter order (L), sequence length (N), SNR, stack size and modulation format (BPSK, QPSK) on the bit error rate (BER). A comparison is made with the clairvoyant detector (which knows the channel coefficients and is nothing but the Viterbi algorithm).

3.1 Recursive Computation of the Metric

The objective function in (6) can be maximized in an iterative manner. The covariance matrix is decomposed using either a UDU factorization or the RLS (recursive least squares) algorithm.

3.2 The RLS algorithm

The RLS factorization can be used to compute the metric in the following manner:

where the inverse autocorrelation matrix appears explicitly. The quantity updated via the RLS algorithm is:

where the argument is the vector of the last L + 1 detected symbols. The number of computations per recursion step for the RLS algorithm is shown in Table 1. We note the resulting order of computations when using the RLS algorithm.

3.3 The UDU factorization:

The UDU factorization [9] can be used to compute the metric in the following manner:

The recursion of this algorithm is based on updating the U and D matrices. Table 2 illustrates the number of computations per recursion step.


4 Simulation Results

In the simulation results we study the effects of different factors on the BER of the algorithm.

Influence of Filter Order. The simulation results indicate a degradation in performance with an increase in the filter order. This is intuitive, as a large L implies a greater extent of intersymbol interference and therefore a greater loss in performance. The effect of L on the BER is studied in Figures 3, 4 and 5 for L = 2, 3 and 4, respectively. The performance degrades if the number of paths remains constant for an increasing value of L.

Influence of Number of Stored Paths. The stored sequences for the clairvoyant detector, which uses the Viterbi algorithm, are referred to as the survivors. They are the best sequences ending at each of the states. As there is no concept of a state for tree-search algorithms, the number of stored sequences in this case can be greater or less than the number of states. The bit error rate (BER) is plotted for different numbers of stored paths in Figures 3, 4 and 5. The number of sequences needed by the M-algorithm to achieve the performance of the clairvoyant detector is greater than or equal to the number of survivors. A large number of stored paths also implies an increase in the computational complexity of the M-algorithm, as each path is processed in parallel. By choosing a larger stack, we come closer to a global search and therefore the likelihood of finding the best path increases. That is shown in the simulation results.

Per-Survivor Processing. A dynamic programming approach referred to as per-survivor processing (PSP) is also evaluated in Figures 3, 4 and 5. In this approach, the state is used to obtain the surviving sequences. In dynamic programming, the metrics do not have to be sorted at each stage, as the survivors are decided at each state. We observe that PSP yields poorer performance compared to the M-algorithm; the reason is that the estimate of the channel coefficients is unreliable, and discarding sequences based on these estimates results in the loss of good candidate sequences.

Influence of Frame Length. The effect of frame length is studied in Figure 6. The SNR


is 8 dB, L = 2 and the frame length is varied on a log scale from 10 to 200. The various plots are for different values of the stack size. Shorter frame lengths imply that the algorithm has fewer observations from which to obtain the distance metric. The performance of the self-adaptive algorithms is not good for small frame lengths, but as the frame length increases, the BER approaches the performance of the clairvoyant detector. The plot suggests that using the M-algorithm on frames of length 250 or more should yield the same performance as the clairvoyant detector.

Influence of Modulation Scheme. The previous results were with respect to BPSK. Using QPSK improves bandwidth utilization. The results for QPSK are shown in Figure 7. The symbol error rate (SER) is plotted versus the SNR. The M-algorithm works equally well, for a larger stack size, with QPSK. The performance of the M-algorithm approaches the performance of the clairvoyant detector at high SNR.

Effect of an Initial Global Search. An initial global search up to k = 10 should improve the performance of the M-algorithm at the cost of an increase in complexity. For BPSK, such a global search implies searching over 2^10 possible paths. In Figure 8 a global search is performed initially, followed by the tree search. The results are compared with the M-algorithm without the global search. The simulation results indicate an improvement in performance for an initial global search compared with the M-algorithm without one, especially when the stack size is small.

5 Conclusions

This paper presents an approach to blind equalization which is referred to as self-adaptive sequence detection. Substantial savings in bandwidth are possible over current techniques using a training sequence. The main contribution of this paper is the implementation of a tree-search algorithm for SASD. The tree search approach used


is the M-algorithm, where the best candidate sequences in a stack are retained and the rest are deleted at each stage. Simulations quantifying the effects of storage size, modulation format, sequence length and channel order are performed. Two different update algorithms are also presented, and the processing requirements of each are given. The performance of the self-adaptive sequence detection algorithm is compared with the clairvoyant detector (which knows the channel coefficients). If sufficient storage is available, the SASD algorithm works as well as the clairvoyant detector for different lengths of the channel filter. It works for various modulation formats, including BPSK and QPSK. Blind equalization (or SASD) algorithms usually require a large number of observations for convergence. This approach shows that sequences as short as 200 symbols can be used while the same performance is achieved.

Utilizing self-adaptive sequence detection eliminates considerable overhead, which frees up bandwidth. The current results suggest that, if sufficient storage is available, implementing such algorithms in real time is viable.

References

[1] M. R. L. Hodges, “The GSM radio interface,” Br. Telecom Technol. J., vol. 8, pp. 31-43, January 1990.

[2] M. Feder and J. Catipovic, “Algorithms for joint channel estimation and data recovery: application to equalization in underwater communications,” IEEE Journ. Oceanic Engineering, vol. 16, pp. 42-55, Jan. 1991.

[3] M. Ghosh and C. Weber, “Maximum-likelihood blind equalization,” Optical Engineering, vol. 31, pp. 1224-1228, June 1992.

[4] G. D. Forney, “Maximum-likelihood sequence estimation of digital sequences in the presence of intersymbol interference,” IEEE Trans. Information Theory, vol. IT-18, pp. 363-378, May 1972.

[5] N. Seshadri, “Joint data and channel estimation using trellis search techniques,” IEEE Transactions on Communications, vol. 42, February/March/April 1994.

[6] R. Raheli, A. Polydoros, and C.-K. Tzou, “Per-survivor processing: A general approach to MLSE in uncertain environments,” IEEE Trans. Communications, vol. COM-43, February/March/April 1995.

[7] J. B. Anderson, “Limited search trellis decoding of convolutional codes,” IEEE Transactions on Information Theory, vol. 35, September 1989.

[8] R. E. Bellman and S. E. Dreyfus, Applied Dynamic Programming. Princeton, NJ: Princeton University Press, 1962.

[9] G. J. Bierman, Factorization Methods for Discrete Sequential Estimation. New York, NY: Academic Press, Inc., 1977.


6

Soft-Decision MLSE Data Receiver for GSM System

Martin Lee
ALOGOREX Inc.
33 Wood Avenue South, Iselin, NJ 08830
[email protected]

and

Zoran Zvonar
Analog Devices Inc., Communications Division
804 Woburn Street, Wilmington, MA 02118
[email protected]

Abstract:

The great success of GSM as a second-generation digital cellular standard is largely due to the advances in integrated solutions for GSM terminals, in particular handsets. With the user population constantly growing and the price of handsets plummeting, the focus of the design effort has moved to efficient implementations of the GSM data receiver. In this paper we give a brief overview of the operating modes of a GSM handset, present the framework for the development of the data receiver, and propose a new soft-decision based MLSE receiver which allows efficient hardware/software partitioning for the implementation.

1. Introduction

Since its introduction, the GSM cellular standard has become a worldwide success and has been adopted in many countries either as a cellular (GSM900) or PCS (GSM1800, GSM1900) standard. One of the major contributions to GSM’s acceptance has been its good performance in terms of quality of service, and this has in turn allowed manufacturers to take advantage of economies of scale and reduce the cost of equipment. Nevertheless, there is still considerable drive to lower cost even further, as well as to improve user-desired attributes such as talk and standby times. In order to achieve these goals, optimisation of the building blocks for a GSM mobile station (MS) is critical [1].

One of the major signal processing blocks of a GSM MS is the data receiver (DR). Its importance is highlighted by the fact that the type approval procedure is based heavily on this part of the MS. It is the aim of this paper to present and discuss some techniques for the realisation of a GSM mobile station data receiver which is capable of meeting all the functions and requirements specified in the GSM recommendations. It will be shown that a GSM receiver for demodulating the basic GSM traffic channel can be achieved with an MLSE structure implemented as a combination of hardware and software modules which, combined, offer an attractive alternative to the standard methods.


2. System Overview

The GSM data receiver function can be broken down into three main functional requirements, namely acquisition, synchronization and demodulation, as shown in Fig. 1. As can be deduced, some functions of the DR are only activated at certain instants, while others are active practically all the time the mobile station is powered on. This places different requirements on the DR’s sub-functions.

In acquisition mode, the DR must be able to continuously process the expected received signal to allow the MS to lock on to the infrastructure. This is by nature a process-intensive task and thus must be optimized such that the MS can perform other tasks while it is in acquisition mode.

In synchronization mode, the DR must be able to lock on to the infrastructure and adjust all the MS’s internal settings such that it can communicate in synchronism. The accuracy of the estimated settings largely dictates how quickly the mobile can communicate effectively with the infrastructure.

In normal demodulation mode, the DR’s main task is to provide reliable received symbols (bits) to the rest of the system. In addition, it must provide reliable estimates of the MS’s settings so that any drifts remaining after the acquisition and synchronization modes can be corrected. Other tasks include providing reliable measures of signal strength and quality, which are necessary for optimal control in a cellular system.

In this paper we focus on the normal demodulation mode of the receiver, assuming that the first two functions, acquisition and synchronization, have been achieved.


3. GSM Data Receiver Requirements

A GSM mobile station data receiver must be capable of meeting all the functions and requirements specified in the GSM recommendations. The relevant GSM specification for the performance of the data receiver is the Rec. 05.05 series [2]. In Fig. 2 the complete data path in the GSM system is depicted, indicating the functions of the handset that have been completely specified, and the functions that are specified only by required performance, with realization left to the manufacturer. The desired performance of a GSM receiver is specified for both coded and uncoded data bits.

In order to satisfy the performance requirements and provide a cost-effective solution, known realizations of the GSM receiver have been implemented either as a DSP software solution or, alternatively, as a custom logic solution. While a DSP implementation is flexible, it may not be preferable in terms of power consumption and cost. Consequently, the goal is to achieve a solution which provides satisfactory performance while reducing the complexity of the implementation and preserving a certain degree of flexibility.

The complexity problem may be addressed on two different levels. On the algorithmic level, complexity can be reduced by using suboptimal approaches, e.g. reduced-state sequence detection or decision-feedback equalization. On the architectural level, complexity and performance constraints are usually addressed by the design of co-processors or accelerators for specific functions, such as the Viterbi algorithm for equalization and decoding.


4. System Model and Receiver Structure

The GSM system employs Gaussian MSK (GMSK) modulation with BT = 0.3, providing a net rate of 270.8 kbit/s at the air interface. Intersymbol interference (ISI) is introduced deliberately to improve the spectral efficiency of the system. In addition, time dispersion of the propagation channel introduces additional ISI. The diagram of the system model under consideration is shown in Fig. 3. The GMSK signal can be interpreted as a linearly modulated signal where the input bits are precoded to form new symbols [3]. The new symbols alternate between the real and imaginary branches; therefore the transmitted data symbols are independently received in the quadrature branches, spaced by twice the original symbol period. The received signal, subject to ISI lasting L symbol periods, can be approximated as

where


The symbols from the two quadrature branches are received offset in time by one bit period. Thus, in the absence of any sampling errors or other degradation, one can transform the data onto one branch only by performing a constant phase rotation of the signal, i.e. multiplying the received samples by a fixed per-symbol rotation factor, thus providing a real-only stream of samples for subsequent processing. This is often referred to as the serial receiver realization, and the consequent equalization can be performed on the real signal only.
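The rotation factor itself was lost in extraction; for MSK-type signals the standard choice is j^(-k), a 90-degree rotation per symbol (an assumption here, as is the real-valued sample representation). A sketch of the derotation step:

```c
#include <assert.h>

/* MSK-type signals place successive symbols alternately on the I and Q
 * axes; multiplying sample k by j^(-k) folds the data onto the real
 * axis, enabling the serial (real-valued) receiver described above. */
static void derotate(const double *re_in, const double *im_in,
                     double *re_out, double *im_out, int n)
{
    /* j^(-k) cycles through 1, -j, -1, j */
    static const double rot_re[4] = { 1.0,  0.0, -1.0,  0.0 };
    static const double rot_im[4] = { 0.0, -1.0,  0.0,  1.0 };
    for (int k = 0; k < n; k++) {
        int p = k & 3;
        re_out[k] = re_in[k] * rot_re[p] - im_in[k] * rot_im[p];
        im_out[k] = re_in[k] * rot_im[p] + im_in[k] * rot_re[p];
    }
}
```

After this step the imaginary part carries no data in the ideal case, and equalization proceeds on the real stream only.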

Given knowledge of the combined channel response of the system, one can derive the original transmitted sequence. The original design of GSM took this aspect into account, and in order to aid the estimation of the channel a midamble is inserted into every slot. A normal traffic slot structure is shown in Fig. 4. The training sequence is designed to exhibit an auto-correlation property with a distinctive peak and minimal side-lobe content. The overall channel impulse response (CIR) can be estimated by correlating the received and expected training sequences. These can then be used to form the MF coefficients, which by definition maximize the SNR at the output of the filter. The impulse response of the MF is the time-reversed complex conjugate of h(m). In order to compute the values of h(m), one simply cross-correlates the received signal y(i) and the known signal c(i), which by design is the training sequence.

The above describes a system which employs symbol-rate sampling. Fractional rates can also be used, which theoretically give higher performance at the expense of added complexity.

In order to estimate the CIR one requires some knowledge of where the midamble is situated in the TDMA burst. This “global” timing is derived from a higher-layer synchronisation procedure. It is sufficient to assume that during normal burst demodulation a good estimate of the start of the TDMA slot is known in the receiver. Thus, in order to estimate the CIR, one does

183

Page 199: Wireless Personal Communications: Emerging Technologies for Enhanced Communications

not have to carry out the cross-correlation over an exhaustive range, but only for a small windowover which the midamble burst is expected. The procedure can proceed as soon as enoughsamples have been collected, i.e. up to end of the midamble. Then a portion of the expectedsequence, usually 16 bits long, will be cross-correlated with the incoming sequence. The numberof taps required for data receiver operation is usually restricted to L=5. Thus those L successivecoefficients of the estimate h(m) which have maximum energy, will be chosen as the bestestimate of the CIR, and thus provide MF coefficients.

The above procedure for computing the MF coefficients, and hence the timing, is based on the assumption that the signal quality is reasonably good, so that the cross-correlation procedure results in good estimates. When the signal is poor, or when it is fluctuating rapidly, the described method will in general give a poor CIR. Under such conditions the search range and the known signal length (maximally 26) can play a significant role in reducing the estimation error of the CIR, and hence of the timing.
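The correlate-and-window procedure can be sketched as follows. This is an illustrative toy, not the receiver implementation: it uses real-valued samples (a practical GSM receiver correlates complex baseband samples) and hypothetical names.

```python
def estimate_cir(rx, train, span, L=5):
    """Estimate the CIR by cross-correlating the received samples rx with the
    known training sequence over `span` candidate delays, then keeping the L
    successive taps with maximum energy as the MF coefficients."""
    n = len(train)
    # raw estimates h(m) over the small search window
    h = [sum(rx[m + i] * train[i] for i in range(n)) / n for m in range(span)]
    # choose the L-tap window with maximum energy
    best = max(range(span - L + 1),
               key=lambda k: sum(h[k + j] ** 2 for j in range(L)))
    return h[best:best + L], best
```

For a ±1 training sequence with a sharp autocorrelation peak, the tap at the true channel delay dominates the selected window.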

4.1. Equalization Strategies for GSM

Following the MF, some form of equalization can be applied in order to minimize the effects of the channel. The optimal approach is Maximum Likelihood Sequence Estimation (MLSE), which is computed using the Viterbi Algorithm (VA) [4]. A GSM data receiver based on MLSE has been reported in [5], and represents the common solution in present-day realizations. Subsequent efforts to reduce the complexity at the algorithmic level include suboptimal sequence estimation approaches [6], decision-feedback equalization [7] and the application of block detection techniques [8].

The generic MLSE-VA was extended by Hagenauer to include soft decision (SD) outputs, giving the Soft-Output Viterbi Algorithm (SOVA) [9]. However, this algorithm and the subsequent simplification presented in [10] require memory for storing reliability information. One efficient way of using a co-processor for MLSE is to provide soft information to the decoder without using this memory, as suggested in [11]. We propose a new method for determining soft outputs, targeting reduced complexity and an efficient software-hardware partitioning of the algorithm [12]. The classical VA consists of three basic operations:
• Calculate the branch metric contribution (BMC).
• Combine the BMC and the accumulated path metric (APM) and decide which branch(es) to keep and which branch(es) to discard.
• Update the APM so that it can be used in the next epoch.
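One epoch of the three operations above can be sketched as follows (a schematic illustration, not the GSM data receiver itself; each state is assumed to have two incoming branches):

```python
def acs(bmc_pairs, apm):
    """One Viterbi add-compare-select epoch. For each state, add the branch
    metric contributions (BMC) of its two incoming branches to the
    accumulated path metrics (APM), keep the survivor, and record the
    metric difference D used later as soft-decision information."""
    new_apm, survivors, deltas = [], [], []
    for (pred0, bmc0), (pred1, bmc1) in bmc_pairs:
        cand0 = apm[pred0] + bmc0
        cand1 = apm[pred1] + bmc1
        if cand0 <= cand1:                  # smaller metric survives
            new_apm.append(cand0); survivors.append(pred0)
        else:
            new_apm.append(cand1); survivors.append(pred1)
        deltas.append(abs(cand0 - cand1))   # reliability of this decision
    return new_apm, survivors, deltas
```

The returned `deltas` are exactly the survivor-versus-discarded metric differences exploited in the soft-output discussion below.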

Soft information can be obtained by making use of the parameter D, which is the difference between the “survivor APM” and the “discarded APM”, as presented in [9]. The larger the metric difference, the more reliable is the “hard decision”. A block diagram of the receiver is given in Fig.5, where z(n) denotes the MF output.

As there is an interleaver after the demodulator, it is also necessary to relay information concerning the slot’s average reliability. This can take the form of an SNR estimate of the individual symbols. However, this is difficult to estimate in general, whereas a slot-duration-based SNR estimate is easier. In order to compute the noise one requires a reference, which is conveniently provided by the midamble bits. Thus, in order to compute the SNR, we can simply determine the difference between the expected midamble (which is in the form of +1/-1) and the received midamble (which is in general in the form of real values) to give an estimate of the noise during the instant of the midamble.
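A minimal sketch of such a midamble-based slot SNR estimate, assuming the received samples have already been scaled so that the signal amplitude `amp` is known (names are hypothetical):

```python
def slot_snr(rx_mid, ref_mid, amp=1.0):
    """Slot-based SNR estimate: the expected midamble (+1/-1, scaled by an
    assumed signal amplitude `amp`) serves as the noise-free reference;
    the residual after subtraction is taken as the noise."""
    noise = [r - amp * b for r, b in zip(rx_mid, ref_mid)]
    noise_power = sum(v * v for v in noise) / len(noise)
    return (amp * amp) / noise_power
```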


The parameter D and the slot SNR can be combined into a single parameter, SD, which is referred to as the “soft decision” information:

The unsigned soft decision information can be combined with the hard decision bits to give the soft output symbols, where the sign information indicates the bit decision and the magnitude gives an indication of the confidence in the bit decision. If the values are to be routed along to different parts of the overall receiver system, then it would be desirable to quantise their range. This can be done by gathering the expected dynamic range of SD and setting up the appropriate ADC range to cover the region.
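The sign/magnitude combination amounts to the following trivial sketch (0/1 hard bits assumed):

```python
def soft_symbol(hard_bit, sd):
    """Signed soft output: the sign carries the hard bit decision (0/1 mapped
    to -1/+1), the magnitude carries the unsigned reliability SD."""
    return (1 if hard_bit else -1) * abs(sd)
```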


4.2. Simplified Soft Information Generation Method

The method of computing the soft information SD is costly to implement in software or hardware, as it requires either many instructions or hardware storage for the array of SD values. One method to overcome this is to split up the task of generating the hard and soft information. This decoupled SOVA is shown in Fig.6. The hard information block (HIB) is the same as the standard VA; however, the soft information block (SIB) is activated after the completion of the HIB. The processing of the SIB is described below.

Referring to Fig.7, the objective is to determine the SD value at epoch node N. Assuming that the HIB has managed to perform reasonably well in detecting the correct symbols, the following observations can be made:
• The paths that have led up to epoch node N originated from the same past state node and will end up at a known future node.
• The distance from the originator node to the epoch node N is comparable to the CIR length L.

If the above holds then, remembering that the ISI can be completely determined given knowledge of the past and future symbols and the CIR, we can estimate the difference between the desired survivor and the discarded node. In the limit where there are no errors in the detected symbols, the estimated SD should be exactly the same as the true SD.

Therefore, to determine the SD value at N, we simply sum up the APM contributions from the known start node to the known end node.


Let

The implementation of the above is, however, not trivial, as it requires the computation of many terms. Fortunately, many terms between the correct path APM and the discarded APM are common and can be ignored. By expanding the above and eliminating the common terms, the following is obtained:

The term is in fact not necessary, as we are only interested in the magnitude of SD(N). As is evident, the computation required to calculate the soft information is very simple, as it requires only a summation of the CIR coefficients. From the equation above it is clear that the SD is simply a measure of the ISI from the previous and future symbols, which is what one expects intuitively.

5. Simulation Results

In order to assess the applicability of the proposed algorithm, extensive simulations were conducted using the commercially available package COSSAP. The reference performance curves are based on the COSSAP library MLSE implementation of a GSM receiver [13], which is compared to the GSM receiver detailed in this paper.

Propagation conditions described in [2] were tested, including static, typical urban at 3 and 50 km/h (TU3, TU50), rural area at 250 km/h (RA250), hilly terrain at 100 km/h (HT100) and equaliser test at 50 km/h (EQ50) channels. In addition to these fading channels at the so-called reference receiver sensitivity input levels, adjacent- and co-channel tests at reference interference input levels were also conducted.

The COSSAP GSM reference receiver model employs a CIR estimation block as described in Section 3. However, it does not employ matched filtering, and its Viterbi algorithm is based on a full Euclidean distance measure, utilising both the real and imaginary parts of the received signal. A full soft decision measure is employed, together with an optional internal PLL operating on a sample-by-sample basis. Further details can be found in [13].

A few examples from the extensive simulation set are presented to illustrate the major trends. A comparison of different receiver structures is presented in Fig.8 for the TU50 propagation condition. The comparison is between the parallel MLSE receiver, the serial receiver using standard SOVA (SOVA1) and the serial receiver with the simplified soft decision calculation presented in this paper (SOVA2). The difference in performance is within the simulation error margin.

For the optimization of the hardware block, the performance of the receiver was evaluated for different wordlengths used to quantise the soft decision information. An example is presented in Fig.9. It was concluded that 4 bits of quantisation are sufficient to provide resolution in the co-processor that can be used both for equalization and decoding.

The main points from the simulation results can be summarised as follows:
• Overall performance of serial (real) MLSE is comparable to that of parallel (complex) MLSE.
• The BER performance of SOVA1, SOVA2 and COSSAP is essentially the same.
• For soft-decision-dependent measures (FER, RBER) the COSSAP (complex) MLSE performs slightly better (0.5 to 1 dB) for FER, but the difference is small for RBER.
• The simpler SOVA2 is equivalent in performance to the full SOVA1 implementation.
• The soft decision word length can be limited to 4 bits while maintaining comparable performance.
• The reference sensitivity conditions according to Rec.05.05 are satisfied.
• The reference interference performance is met using SOVA2.


6. Conclusions

In this paper a concept for a simpler MLSE receiver was described. It was shown that the conventional MLSE can be broken down into a two-stage process to reduce computation while maintaining performance. The advantage of the new partitioning is that it enables a mix of hardware and software to implement the MLSE.

The performance of the simplified SOVA2 receiver was assessed according to the GSM recommendations. The required Eb/No figure to meet the reference sensitivity level is comparable to that of the optimal solution, and should give the RF front end enough margin to allow a cost-effective design.

References

[1] Z. Zvonar and R. Baines, “Integrated Solutions for GSM Terminals”, International Journal of Wireless Information Networks, Vol. 3, No. 3, 1996, pp. 147-162.

[2] GSM Technical Specifications, Rec. 05.05, ETSI.

[3] P. Laurent, “Exact and approximate construction of digital phase modulations by superposition of amplitude modulated pulses”, IEEE Trans. Commun., Vol. 34, 1986, pp. 150-160.

[4] G. Ungerboeck, “Adaptive Maximum-Likelihood Receiver for Carrier-Modulated Data-Transmission Schemes”, IEEE Trans. Commun., Vol. 22, No. 5, 1974, pp. 624-636.

[5] R. D’Avella, L. Moreno and M. Sant’Agostino, “An Adaptive MLSE Receiver for TDMA Digital Mobile Radio”, IEEE Journal Selec. Areas Commun., Vol. 7, No. 1, 1989, pp. 122-129.

[6] G. Benelli, A. Fioravanti, A. Garzelli and P. Matteneini, “Some Digital Receivers for the GSM Pan-European Cellular Communication System,” IEE Proc. Commun., June 1994, pp. 168-176.

[7] P. Bune, “A Low-Effort DSP Equalization Algorithm for Wideband Digital TDMA Mobile Radio Receivers,” in Proc. of ICC’91, pp. 25.1.1-25.1.5.

[8] B. Bjerke, J. Proakis, M. Lee and Z. Zvonar, “A Comparison of Decision Feedback Equalization and Data Directed Estimation Techniques for the GSM System,” in Proc. ICUPC’97, San Diego, CA, 1997.

[9] J. Hagenauer and P. Hoeher, “A Viterbi Algorithm with Soft-Decision Outputs and its Applications”, in Proc. IEEE Globecom’89, Dallas, TX, Nov. 1989, pp. 47.1.1-47.1.7.

[10] B. Rislow, T. Masen and O. Trandem, “Soft Information in Concatenated Codes,” IEEE Trans. Commun., Vol. 44, No. 3, 1996, pp. 284-286.

[11] S. Ono, H. Hayashi, T. Tanaka and N. Kondoh, “A MLSE Receiver for the GSM Digital Cellular System,” in Proc. VTC’94, Stockholm, 1994, pp. 230-233.

[12] M. Lee, Trellis Decoder with Soft Decision Output, patent pending.

[13] COSSAP Model Libraries, Vol. 3, September 1996, Synopsys Inc.


7

Turbo Code Implementation Issues for Low Latency, Low Power Applications

D. Eric [email protected]

and

William J. Ebel, Member [email protected]

Mississippi State University
Electrical and Computer Engineering

Box 9571
Mississippi State, MS 39762, USA

Abstract: In this paper, four important and interrelated issues are discussed which relate to the performance of Turbo codes for low latency and low power applications: (1) interleaving, (2) trellis termination, (3) estimation of the channel noise variance, and (4) fixed point arithmetic effects on decoder performance. We give a method for terminating both constituent convolutional encoders in a known (all zero) state by assigning specific binary values to information-sequence bits that are dependent upon the full set of user input information bits. This method causes a slight restriction on the set of allowable interleavers that can be chosen for the scheme but does not compromise performance. Also, we give a robust method for estimating the conditional channel variance given pre-thresholded random variable samples measured directly from the channel. Finally, performance results are shown for fixed-point number representations.

A. Introduction

An exciting development in recent years in the field of error correcting codes was the introduction of Turbo codes [1]. Empirical results indicate that these codes approach the Shannon limit for reliability improvement on an AWGN channel. In this paper, four important and interrelated issues are discussed which relate to the performance of Turbo codes for low latency and low power applications: (1) interleaving, (2) trellis termination, (3) estimation of the channel noise variance, and (4) fixed point arithmetic effects.

A typical Turbo encoder is shown in Figure 1. The binary data sequence x is input to a rate 1/2 recursive convolutional encoder (RCC) and, at the same time, to an interleaver which generates the output sequence y, which is subsequently input to a second RCC. The output consists of the original data along with the parity resulting from the two constituent convolutional encoders.

For the purposes of this discussion, the modulator is taken to be binary and is implemented by the mapping z = 2x − 1, where z represents a transmitted symbol and x represents a binary number. In this way, a binary 1 maps to the real number 1 and a binary 0 maps to the real number −1. Therefore the transmitter power is 1 per transmitted bit.

The channel is taken to be AWGN and is implemented by adding a Gaussian random variable (RV) to each transmitted symbol. If σ² is the variance of the added noise RV, then the signal-to-noise ratio (SNR) of the system is 1/σ². The received sequences are the transmitted data and parity sequences plus independent Gaussian noise sequences with variance σ². We will use primes to denote variables that have an added noise component.
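The mapping and channel model can be sketched directly (illustrative helper names):

```python
import random

def bpsk(x):
    """Antipodal mapping z = 2x - 1: binary 1 -> +1, binary 0 -> -1."""
    return 2 * x - 1

def awgn(symbols, sigma):
    """Add independent zero-mean Gaussian noise of variance sigma^2 to each
    transmitted symbol (the AWGN channel described in the text)."""
    return [s + random.gauss(0.0, sigma) for s in symbols]
```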

The parallel configuration of the encoder, along with the systematic implementation of the convolutional encoders, allows the decoder to operate in an iterative fashion. Figure 2 illustrates, in concept, a simplified Turbo decoder. First the received input sequence and the received parity sequence are paired and input to the first Maximum A-Posteriori (MAP) decoder [2][3]. In concept, the output is an estimate of the original input data in the form of a probability measure that each bit in the sequence is a binary “1”. The rest of the decoder operates as shown.

Turbo codes have traditionally been shown to yield remarkable performance for long blocklengths (large interleavers), usually on the order of many tens of thousands of bits [4][5]. The large interleaver is used to reduce the multiplicities of low weight codewords, known as spectral thinning [6]; however, it also plays an important secondary role in eliminating the effect of poor decoded-bit estimates caused by unknown or unreliable constituent convolutional encoder state terminations. To be more specific, the constituent convolutional encoders of a Turbo code are generally configured to be terminated using tail bits that are transmitted in addition to the data sequence and parity sequences. These tail bits do not benefit from the diversity effect of the interleaver in the iterative decoder and result in poor decoded bit estimates. To illustrate this, Figure 3 shows the number of errors that occurred per information bit position in the data sequence for a rate 1/3 Turbo code using 8-state recursive convolutional codes, an information sequence length of 30 bits, and an SNR of 1.5 dB. The algorithm using tail bits for trellis termination resulted in 8,262 bit errors out of 330,000 total bits. The algorithm using full interleaving with trellis termination, described in Section B below, resulted in 1,449 bit errors out of 300,000 total bits. The bulk of the additional errors resulted from the poor tail bit estimates and their residual effect on other bits. When large interleavers are used, these poor estimates have a negligible effect on the overall performance of the code.

In wireless applications, however, the blocklengths are necessarily much smaller, on the order of a few hundred bits or less. The main issue in any practical solution is code performance per unit complexity of the hardware realization, especially in a commercial application involving handheld, battery-powered electronic devices. In Section B, we describe a method for terminating the constituent convolutional encoders to improve the performance in short blocklength applications.

In this paper, we also give a robust method for estimating the channel noise variance directly from the received data. This method is empirically shown to yield decoded error probability performance that is nearly identical to that which results when using the true value.

Finally, we show fixed-point arithmetic results for Turbo decoders. If the hardware implementation of a Turbo decoder is not configured carefully, unstable behavior can be observed. We show that for some simple codes, as few as 4 bits of precision provide reasonable coding gain, suggesting an interesting trade-off between complexity and performance.

B. Interleaving and Trellis Termination

In this section, an algorithm first suggested by Barbulescu [7] is described, which terminates the trellis of the two constituent encoders by properly selecting the first 2m bits of the information sequence, where m is the number of delays in each constituent encoder. We call these first 2m bits precursor bits. A Turbo encoder using precursor bits for trellis termination is shown in Figure 4. The precursor bits are denoted by the length-2m sequence s. The encoder requires two passes. In the first pass, the precursor bits are set to zero and the data is sequenced into the two constituent RCC’s without regard for the final state. After the data is input, the final state of each encoder is sent to a ROM, from which the precursor bits are read out and positioned at the beginning of the input sequence. These precursor bits guarantee that the two constituent RCC’s end up in the all-zero state at the end of the second encoding pass. The ROM is built off-line by iterating through all possible precursor sequences s and storing the final encoder state for each RCC. Since the encoder is linear, the second pass will result in final RCC states which are the sum of the final states due to the data and due to the precursor bits. Since these final states are identical, and the sum of two identical GF(2) numbers is always zero, this results in final RCC states that are zero.
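The two-pass idea can be demonstrated on a single toy RSC encoder. The feedback polynomial 1 + D + D² is an assumed example, and the interleaver and second encoder of the real scheme are omitted; only the GF(2) linearity of the state update matters here.

```python
import itertools
import random

M = 2  # encoder memory (number of delays); precursor length for one encoder

def rsc_final_state(bits):
    """Final state of a toy recursive systematic encoder (assumed feedback
    1 + D + D^2), driven from the all-zero state. The state update is
    linear over GF(2), which is what makes the cancellation work."""
    s1 = s2 = 0
    for u in bits:
        f = u ^ s1 ^ s2      # feedback bit
        s1, s2 = f, s1       # shift-register update
    return (s1, s2)

random.seed(7)
data = [random.randint(0, 1) for _ in range(30)]

# Pass 1: precursor bits set to zero; record the final state due to the data.
s_data = rsc_final_state([0] * M + data)

# "ROM": final state produced by each precursor pattern with all-zero data.
rom = {rsc_final_state(list(p) + [0] * len(data)): list(p)
       for p in itertools.product((0, 1), repeat=M)}

# Pass 2: the precursor reproducing s_data cancels it by GF(2) linearity.
precursor = rom[s_data]
assert rsc_final_state(precursor + data) == (0, 0)
```

The dictionary comprehension also makes the "proper interleaver" condition concrete: the construction only works if every encoder state appears as a key, i.e. the precursor-to-state map is one-to-one.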

There is only one issue to contend with here. In order for this to work, the off-line procedure for building the ROM must result in a one-to-one correspondence between each precursor binary number and each of the possible states in the second RCC. It is possible that the random interleaver will position the precursor bits in y in such a way that iterating through all possible precursor binary numbers does not result in an exhaustive set of final states for the second RCC. We say that such an interleaver is not proper. In our simulation, the random interleaver is reselected until a proper one is found: each interleaver is checked to see if the set of possible precursor bits results in a duplicate encoder state for the second RCC. In our simulation, the interleaver is generally reselected from 0 to 8 times before a proper one is found.

In this configuration, the entire input sequence, including the precursor bits, is interleaved. This eliminates the bias in the error probability introduced by the non-interleaved tail bits used for trellis termination. Also note that the code rate for this configuration is approximately 1/3 for large N.

The Turbo decoder corresponding to this Turbo encoder is shown in Figure 5. The permutations performed by the interleaver and deinterleaver are identical to those used in the encoder, except that the entries are real numbers rather than binary numbers.

C. Channel Noise Variance Estimation

In this section, a method for estimating both the conditional mean and conditional variance of the received data which enters the Turbo decoder is described. The problem can be stated as follows. Let Z be a random variable (RV) with pdf given by

f_Z(z) = (1/2) g(z; m, σ²) + (1/2) g(z; −m, σ²),

where g(z; m, σ²) is the Gaussian pdf with mean m, variance σ², and independent variable z. The goal is to estimate the conditional statistics given only measured statistics of the RV Z.

An illustration of f_Z(z) is shown in Figure 6. If σ² is small, then an efficient method for computing m and σ² is to find the mean and variance of the magnitudes of the samples; the error caused by the tail overlap will be negligible. However, a more practical situation occurs when the conditional variance is large, corresponding to a low channel SNR. In this case, the parameters can be estimated by computing the 2nd and 4th moments of Z.

Since the mean of Z is zero, the second moment is the average of the second moments of the two Gaussian components, each of which evaluates to the same expression. Therefore, the second moment of Z is

E[Z²] = m² + σ².    (1)

Since E[Z²] is directly measurable from a set of samples of Z, this gives one equation in terms of the unknowns m and σ². Similarly, the 4th moment of Z can be shown to be

E[Z⁴] = m⁴ + 6m²σ² + 3σ⁴.    (2)


Solving (1) and (2) simultaneously gives

m = ((3 E[Z²]² − E[Z⁴]) / 2)^(1/4)

and

σ² = E[Z²] − m².

Of course, since the measured moments are themselves RV’s, there is always a chance that these calculations will fail. Surely we must have 3 E[Z²]² ≥ E[Z⁴] and E[Z²] ≥ m². If either of these conditions fails, which is most probable if σ² is small, then the first method should be used to estimate m and σ².
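A sketch of the moment-based estimator, using the mixture model and moment relations stated above (function and variable names are ours):

```python
import random
import statistics

def estimate_stats(z):
    """Moment-based estimate of (m, sigma^2) for Z drawn from an equal
    mixture of N(+m, sigma^2) and N(-m, sigma^2), per (1) and (2)."""
    n = len(z)
    m2 = sum(v * v for v in z) / n    # measures E[Z^2] = m^2 + sigma^2
    m4 = sum(v ** 4 for v in z) / n   # measures E[Z^4] = m^4 + 6 m^2 s^2 + 3 s^4
    disc = (3 * m2 * m2 - m4) / 2     # equals m^4 under the model
    if disc > 0 and disc ** 0.5 <= m2:     # sanity conditions from the text
        m = disc ** 0.25
        return m, m2 - m * m
    # fall back to the small-sigma method: statistics of the magnitudes
    mags = [abs(v) for v in z]
    return statistics.mean(mags), statistics.pvariance(mags)

random.seed(1)
samples = [random.choice((-1.0, 1.0)) + random.gauss(0.0, 0.8)
           for _ in range(200000)]
m_hat, var_hat = estimate_stats(samples)
```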

In Table I below, the true channel noise variance is compared with the estimate obtained using the method outlined in this section. The code chosen has a blocklength of 100 bits, and 10 blocks were combined to form the variance estimate.

Table I. Comparison of true and estimated channel noise variance

As the SNR increases, the estimate smoothly converges to the true value. In any case, the estimate is close to the true SNR value above 0 dB, and there was no appreciable difference in the decoded error probability performance when the estimated variance was used in place of the true variance.

D. Fixed-Point Arithmetic Results

In Table II below, some experimental results for a hardware-realizable Turbo coding system are shown. The system uses a log-likelihood ratio decoder to minimize the hardware complexity by reducing costly fixed-point divides and multiplies. The code used is an optimal convolutional code of memory order 2. The interleaver is a blocklength-100 pseudo-random interleaver. Data is presented for varying degrees of fixed-point precision. The coding gain that is presented is for a channel modeled with AWGN and a signal-to-noise ratio (Eb/No) of 1.76 dB. Note that the performance results for a dynamic range of less than -7 to 8 were very poor.

These results show that the performance loss due to the fixed-point precision is tolerable down to 4-bit numbers for the specific Turbo code implemented. These results also show that the block error probability increased by a factor of 2 as the fixed-point number size was reduced from 8 bits to 4 bits. This suggests that with 4-bit numbers there were twice as many block decoding failures, but not many additional bit errors within the additional block errors.
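A saturating uniform quantizer in this spirit might look as follows; the [-8, 7] default range is an assumption echoing the quoted "-7 to 8" dynamic range, not the exact hardware scaling:

```python
def quantize(x, bits, lo=-8.0, hi=7.0):
    """Map a real decoder metric onto a uniform `bits`-bit grid over
    [lo, hi], saturating values that fall outside the range."""
    levels = 2 ** bits
    step = (hi - lo) / (levels - 1)
    q = round((min(max(x, lo), hi) - lo) / step)
    return lo + q * step
```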

E. Conclusions

In this paper, four important and interrelated issues were discussed which relate to the performance of Turbo codes for low latency and low power applications: (1) interleaving, (2) trellis termination, (3) estimation of the channel noise variance, and (4) fixed point arithmetic effects on decoder performance. We described a method for terminating both constituent convolutional encoders in a known (all zero) state, and also gave a robust method for estimating the conditional channel variance given pre-thresholded random variable samples measured directly from the channel. Finally, performance results were shown for fixed-point number representations and arithmetic.

Bibliography

[1] Berrou, C., Glavieux, A., and Thitimajshima, P., “Near Shannon Limit Error-Correcting Coding and Decoding: Turbo-Codes (1)”, International Communications Conference, Geneva, Switzerland, 1993, pp. 1064-1070.

[2] Bahl, L.R., Cocke, J., Jelinek, F., and Raviv, J., “Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate”, IEEE Transactions on Information Theory, March 1974, pp. 284-287.

[3] Forney, G.D. Jr., “The Forward-Backward Algorithm”, in Proc. 34th Allerton Conf. Commun., Contr., Computing, Allerton, IL, Oct. 1996.

[4] Divsalar, D., and Pollara, F., “On the Design of Turbo Codes”, JPL TDA Progress Report 42-123, JPL, November 15, 1995, pp. 99-121.

[5] Divsalar, D., and Pollara, F., “Turbo Codes for Deep-Space Communications”, JPL TDA Progress Report 42-120, JPL, February 15, 1995, pp. 29-39.

[6] Perez, L.C., Seghers, J., and Costello, D.J., “A Distance Spectrum Interpretation of Turbo Codes”, IEEE Transactions on Information Theory, Vol. 42, No. 6, November 1996, pp. 1698-1709.

[7] Barbulescu, A.S., and Pietrobon, S.S., “Interleaver design for turbo codes”, Electronics Letters, Vol. 30, No. 25, December 8, 1994, pp. 2107-2108.


8

Evaluation of the Ad-Hoc Connectivity with the Zone Routing Protocols

Zygmunt J. Haas and Marc R. Pearlman
School of Electrical Engineering, Cornell University,
323 Rhodes Hall, Ithaca, NY, 14853
Tel: (607) 255-3454, fax: (607) 255-9072, e-mail: [email protected]

URL: http://www.ee.cornell.edu/~haas/wnl.html

Abstract
In this paper, we evaluate a novel routing protocol for a special class of ad-hoc networks, termed by us Reconfigurable Wireless Networks (RWNs). The main features of the RWNs are: the increased mobility of the network nodes, the large number of nodes, and the large network span. We argue that current routing protocols do not provide a satisfactory solution for routing in this type of environment. We propose a scheme, coined the Zone Routing Protocol (ZRP), which dynamically adjusts itself to the operational conditions by sizing a single network parameter - the Zone Radius. More specifically, the ZRP reduces the cost of frequent updates of the constantly changing network topology by limiting the scope of the updates to the immediate neighborhood of the change - the Zone Radius. We study the performance of the scheme, evaluating the average number of control messages required to discover a route within the network. Furthermore, we compare the scheme’s performance, on one hand, with reactive flooding-based schemes, and, on the other hand, with proactive distance-vector schemes.

1. Introduction

A Reconfigurable Wireless Network (RWN) is an ad-hoc network architecture that can be rapidly deployed without relying on a preexisting fixed network infrastructure. The nodes in a RWN can dynamically join and leave the network, frequently, often without warning, and without disruption to other nodes’ communication. Finally, the nodes in the network can be highly mobile, rapidly changing the node constellation and the presence or absence of links. Examples of the use of RWNs are:

• tactical operation - for fast establishment of military communication during the deployment of forces in unknown and hostile terrain;
• rescue missions - for communication in areas without adequate wireless coverage;
• national security - for communication in times of national crisis, where the existing communication infrastructure is non-operational due to a natural disaster or a global war;
• law enforcement - for fast establishment of communication infrastructure during law enforcement operations;
• commercial use - for setting up communication in exhibitions, conferences, or sale presentations;
• education - for operation of wall-free (virtual) classrooms; and
• sensor networks - for communication between intelligent sensors (e.g., mounted on MEMS mobile platforms).

Nodes in the RWN exhibit nomadic behavior by freely migrating within some area, dynamically creating and tearing down associations with other nodes. Groups of nodes that have a common goal can create formations (clusters) and migrate together, similarly to military units on missions or

This work is supported by the US Air Force/Rome Labs, under contract number C-7-2544, and by a grant from Motorola Corporation, the Applied Research Laboratory.
2 Micro-Electro-Mechanical-Systems


similarly to guided tours on excursions. Nodes can communicate with each other at any time and without restrictions, except for connectivity limitations and subject to security provisions. Examples of network nodes are pedestrians, soldiers, or unmanned robots. Examples of mobile platforms on which the network nodes might reside are cars, trucks, buses, tanks, trains, planes, helicopters, ships, UAVs, or UFOs.

In this paper, we concentrate on the issue of designing a routing protocol for the RWN. In particular, we address routing in flat ad-hoc networks, as opposed to the hierarchical ad-hoc networks that have been investigated in the past (e.g., [Lauer86, Westcott84]). The proposed protocol, the Zone Routing Protocol, allows efficient and fast route discovery in the RWN communication environment (i.e., large geographical network size, large number of nodes, fast nodal movement, and frequent topological changes). In what follows, we explain the elements of the proposed scheme.

2. Previous and Related Work

In this work, we address routing in flat ad-hoc networks, as opposed to the hierarchical ad-hoc networks that have been investigated in the past (e.g., [Lauer86, Westcott84]). Although routing in hierarchical ad-hoc networks involves simpler procedures, some salient features of the flat architectures, as mentioned above, make them that much more attractive for communication in the RWN environment. A comparison of the two architectures is outside the scope of this paper; the reader is referred to [Haas98] for further discussion of this topic.

The wired Internet uses routing protocols based on topological broadcast, such as OSPF [Moy97]. These protocols are not suitable for the RWN due to the relatively large bandwidth required for update messages.

Routing in multi-hop packet radio networks was based in the past on shortest-path routing algorithms [Leiner87], such as the Distributed Bellman-Ford (DBF) algorithm [Bertsekas92]. These algorithms suffer from very slow convergence (the “counting to infinity” problem). Moreover, DBF-like algorithms incur large update message penalties. Protocols that attempted to cure some of the shortcomings of DBF, such as Destination-Sequenced Distance-Vector Routing (DSDV) [Perkins94], were proposed. However, synchronization and extra processing overhead are common in these protocols. Other protocols, which rely on information from the predecessor on the shortest path, solve the slow convergence problem of DBF (e.g., [Cheng89] and [Garcia-Luna-Aceves93]). However, the processing requirements of these protocols may be quite high, because of the way they process update messages.

Routing protocols that are based on a source-initiated query-reply process have also been introduced. Such techniques typically rely on the flooding of queries to discover a destination. In [Corson97], the route replies generated are also flooded, in a controlled manner, to distribute routing information in the form of directed acyclic graphs (DAGs) rooted at each destination. In contrast, other schemes unicast the route reply back to the querying source, typically by means of reversed routing information gathered during the query phase. In the case of [Perkins97], this routing information is in the form of next-hop routes to the querying node, while in [Johnson96], a route accumulation procedure is employed during the route query, allowing the route reply to be returned via source routing. The on-demand discovery of routes can result in much less traffic than standard distance vector or link state schemes, especially when innovative route maintenance schemes are employed. However, the reliance on flooding may still lead to considerable control traffic in the highly versatile RWN environment.

[Murthy95] and [Murthy] present a new distance-vector routing protocol for packet radio networks (WRP). Upon a change in the network topology, WRP relies on communicating the change to the node's neighbors, so that it effectively propagates throughout the whole network. The salient advantage of WRP is the considerable reduction in the probability of loops in the calculated routes, as compared with

3 Unmanned Aerial Vehicles


other known routing algorithms, such as, for example, DBF. Compared with our routing protocol, the main disadvantage of WRP is the fact that full routing information is constantly maintained in each network node, obtained at a relatively high cost in wireless resources. Our protocol, in contrast, rapidly finds routes only when transmission is necessary. Moreover, multiple routes are maintained, so that when some of these routes become obsolete, other routes can be immediately utilized. This is especially important when the network contains a large number of very fast moving nodes, as is the case in the RWN architecture.

3. The Notion of a Routing Zone and Intrazone Routing

A routing zone is defined for each node and includes the nodes whose minimum distance in hops from the node in question is at most some predefined number, which is referred to here as the zone radius. An example of a routing zone (for node S) of radius 2 is shown in Figure 1.

Note that in this example nodes A through K are within the routing zone of S. Node L is outside S’s routing zone. Peripheral nodes are nodes whose minimum distance to the node in question is exactly equal to the zone radius. Thus, in Figure 1, nodes G-K are peripheral nodes. Zones of different nodes overlap heavily.
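Concretely, computing a routing zone and its peripheral nodes amounts to a depth-bounded breadth-first search over the connectivity graph. The sketch below is a centralised illustration under our own conventions (the adjacency-dictionary topology and function names are not from the protocol specification):

```python
from collections import deque

def routing_zone(adj, source, radius):
    """Hop distances to every node within `radius` hops of `source` (bounded BFS)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if dist[node] < radius:          # do not expand beyond the zone radius
            for nbr in adj[node]:
                if nbr not in dist:
                    dist[nbr] = dist[node] + 1
                    queue.append(nbr)
    return dist

def peripheral_nodes(adj, source, radius):
    """Peripheral nodes lie exactly `radius` hops away from `source`."""
    return {n for n, d in routing_zone(adj, source, radius).items() if d == radius}
```

In a real node, the same information would be maintained incrementally by the proactive intrazone protocol rather than recomputed per query.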

Related to the definition of a zone is the coverage of a node’s transmitter, which is the set of nodes that are in direct communication with the node in question. These nodes are referred to as neighbors. The transmitter’s coverage depends on the propagation conditions, on the transmitter power, and on the receiver sensitivity. In our simulation, we conceptually define a radius, dxmit, which is the maximal distance at which a node’s transmission will be received without errors. Of course, it is important that each node be connected to at least one other node. However, more is not necessarily better. As the transmitter’s coverage includes all the nodes at a distance of 1 hop from the node in question, the larger dxmit is, the larger the content of the node's routing zone. A large routing zone requires a large amount of update traffic.

For the purpose of simplification, we will depict zones as circles around the node in question. However, one should keep in mind that the zone is not a description of distance, but rather of nodal connectivity (measured in hops).

Each node is assumed to maintain the routing information to all nodes that are within its routing zone, and to those nodes only. Consequently, in spite of the fact that a network can be quite large, the updates are only locally propagated. We assume that the protocol through which a node learns its zone is some sort of proactive scheme, which we refer to here as the IntrAzone Routing Protocol (IARP). In this paper, we use a modification of the Distance Vector algorithm. However, any other proactive scheme would do. Of course, in principle, the performance of the ZRP depends on the choice of IARP. However, our experience suggests that the tradeoffs are not strongly affected by the particular choice of the proactive scheme used.

3.1 Interzone Routing and the Zone Routing Protocol

IARP finds routes within a zone. The IntErzone Routing Protocol (IERP), on the other hand, is responsible for finding routes between nodes located at distances larger than the zone radius. IERP relies on what we call bordercasting. Bordercasting is a process by which a node sends a packet to all its peripheral nodes. A node knows the identity of its peripheral nodes by virtue of the IARP. Bordercasting can (and should) be implemented by multicasting, if multicasting is supported within the subnet.4 Alternatively, unicasting the packet to all the peripheral nodes achieves the same goal, albeit at a much higher cost in resources.

4 It is not clear whether multicasting is, indeed, feasible in a highly dynamic network topology. Examination of the applicability of multicasting in ad-hoc networks is outside the scope of this paper. Here, we assume that bordercasting is performed using unicasting.


The IERP operates as follows: The source node first checks whether the destination is within its zone.5 If so, the path to the destination is known and no further route discovery processing is required. If the destination is not within the source's routing zone, the source bordercasts a route request (which we call simply a request) to all its peripheral nodes.6 Now, in turn, all the peripheral nodes execute the same algorithm: check whether the destination is within their zone. If so, a route reply (which we call simply a reply) is sent back to the source, indicating the route to the destination (more about this in a moment). If not, the peripheral node forwards the query to its peripheral nodes, which, in turn, execute the same procedure.

An example of this Route Discovery procedure is demonstrated in Figure 2. The source node S sends a packet to the destination D. To find a route within the network, S first checks whether D is within its routing zone. If so, S knows the route to node D. Otherwise, S sends a query to all the nodes on the periphery of its zone; that is, to nodes C, G, and H. Now, in turn, each one of these nodes, after verifying that D is not in its routing zone, forwards the query to its “peripheral” nodes. In particular, H sends the query to B, which recognizes D as being in its routing zone and responds to the query, indicating the forwarding path: S-H-B-D.
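The query procedure can be sketched as a centralised simulation of the distributed process: a frontier of bordercasting nodes is expanded zone by zone, accumulating the route as the query propagates. This is an illustrative reconstruction under our own conventions (adjacency-dictionary topology, function names), not the paper's implementation:

```python
from collections import deque

def zone(adj, node, radius):
    """Hop distances to all nodes within `radius` hops of `node` (bounded BFS)."""
    dist = {node: 0}
    q = deque([node])
    while q:
        u = q.popleft()
        if dist[u] < radius:
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
    return dist

def ierp_query(adj, source, dest, radius):
    """Centralised sketch of IERP route discovery with route accumulation.

    Returns the accumulated route (bordercasting nodes plus `dest`) or None."""
    queried = {source}
    frontier = [(source, [source])]           # (bordercasting node, accumulated route)
    while frontier:
        nxt = []
        for node, route in frontier:
            z = zone(adj, node, radius)
            if dest in z:
                return route + [dest]         # a reply traverses the reversed route
            for p, d in z.items():
                if d == radius and p not in queried:   # bordercast to peripheral nodes
                    queried.add(p)
                    nxt.append((p, route + [p]))
        frontier = nxt
    return None
```

Note that the returned route lists only the bordercasting nodes plus the destination; the hop-by-hop path between consecutive entries is supplied by each node's intrazone (IARP) routes.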

A nice feature of this distributed route discovery process is that a single route query can return multiple route replies. The quality of these returned routes can be determined based on the hop count (or any other path metric7) accumulated during the propagation of the query. The best route can be selected based on the relative quality of the route (e.g., choose the route with the smallest hop count, or the shortest accumulated delay).

Two main issues need to be addressed: First, when sending a reply to the source node, how does the “last peripheral node” know the whole path to be included in the reply to the source? (A related question is, how does the responding node know how to send the reply to the source?) The second question is, how does the IERP process terminate?

Let us start with the first question. The process by which the node receiving a query knows the path back to the source of the query is the Route Accumulation procedure. In the Route Accumulation procedure, each node that forwards a query writes its identification into the query packet. The sequence of these identifications represents a path from the source node to the current node and, by reversing the order, a path from the current node to the source node. Thus, the routes within the network are specified as a sequence of nodes, separated by approximately the zone radius. A node which identifies that the destination is in its zone simply adds its own identification to the query and returns the accumulated route to the source.

The second issue, that of the termination of the IERP process, is a more difficult one. Of course, similarly to the standard flooding algorithm, a node that previously received the query will discard it. This, however, does not solve the whole problem: since the zones heavily overlap, the query will be forwarded to many network nodes. In fact, it is very possible that the query will be forwarded to all the network nodes, effectively flooding the network. But a more disappointing result is that, due to the fact that bordercasting involves sending the query over a path of length equal to the zone radius, the IERP

5 Remember that a node knows the identity of, distance to, and a route to all the nodes in its zone.
6 Again, the identities of its zone's peripheral nodes are known to the node in question.
7 Typical path metrics include hop count, delay, capacity, etc.


will result in much more traffic than the flooding itself! What is needed is a more efficient termination criterion.

Let us look at this problem more closely. The idea behind IERP is that the search for a node advances in “quanta” of the zone radius, instead of flooding the network by forwarding the query among neighbors. The gain that we expect is due to the fact that only some network nodes will be involved in such a “flood.” The challenge is to “steer” the search in the direction outward from the original routing zone (see Figure 3), rather than going back into the areas that were already covered by other threads of the search. There are a number of ways that such a redirection of the search could be accomplished. We discuss here two possibilities. The first improvement, termed Backwards Search Prevention (BSP), makes sure that peripheral nodes of the current node that lie within the routing zone of the previous node are not included in the next bordercast. To prevent the backward propagation of queries, a bordercasting node must send its queried peripheral nodes a list of its routing zone nodes (perhaps appended to the IERP query packet). Thus, in the example in Figure 4, after S bordercasts to the nodes F and C, the nodes A, C, and S will not be included in the consecutive bordercast by node F, as the nodes A, C, and S are all within the routing zone of the previous bordercasting node, S. While BSP may be impractical for large routing zones (due to the long list of routing zone nodes), it could be quite effective when used by nodes which maintain smaller routing zones.

The second improvement, which we call Loopback Search Prevention (LSP), involves pruning any search that goes into areas previously searched. This is accomplished by terminating the bordercast at nodes that either have received the query before or have overheard the query transmitted by their


neighbors.8 Note that this includes all threads of the query. An additional modification includes the termination of a thread at nodes whose routing zone includes any of the nodes in the currently accumulated path. An example is shown in Figure 5. In this example, S bordercasts to A, which bordercasts to B, which in turn bordercasts to C. C will terminate the search of this thread (i.e., will stop bordercasting), as S, which is in the thread's accumulated route, is within C's routing zone.

Both of these improvements reduce the amount of control traffic of the IERP protocol. From our simulation runs, we have learned that the contributions of the two schemes in reducing the control traffic are approximately equal. Note that the two techniques do not overlap: Backwards Search Prevention avoids sending a route request to nodes that should not forward it, while Loopback Search Prevention would send the route request to such a node, but will subsequently terminate the search thread at this node.
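The two pruning rules can be expressed as simple set operations on routing zones. The helpers below are a hedged, centralised sketch with illustrative names: `bsp_targets` drops peripheral nodes already covered by the previous bordercaster's zone, and `lsp_terminate` prunes a thread whose current node sees any accumulated-route node in its own zone:

```python
from collections import deque

def zone(adj, node, radius):
    """Hop distances to all nodes within `radius` hops (bounded BFS)."""
    dist = {node: 0}
    q = deque([node])
    while q:
        u = q.popleft()
        if dist[u] < radius:
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
    return dist

def bsp_targets(adj, prev, current, radius):
    """BSP: peripheral nodes of `current`, excluding any node that already
    lies inside the routing zone of the previous bordercaster `prev`."""
    prev_zone = set(zone(adj, prev, radius))
    return {n for n, d in zone(adj, current, radius).items()
            if d == radius and n not in prev_zone}

def lsp_terminate(adj, current, accumulated_route, radius):
    """LSP thread pruning: stop bordercasting if any node of the thread's
    accumulated route falls within the current node's routing zone."""
    z = zone(adj, current, radius)
    return any(n in z for n in accumulated_route)
```

In the distributed protocol, BSP needs the previous node's zone list carried in the query packet, while LSP needs only locally held zone information plus the accumulated route.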

The main advantage of the Zone Routing Protocol is the fact that the number of “flood” messages needed to discover a route is significantly smaller, as compared with other reactive-type protocols. This decrease is due to the directed propagation of queries to specified peripheral nodes. Since for a radius greater than one the routing zones heavily overlap, the routing tends to be extremely robust.

Zone Routing, as described earlier, discovers multiple routes to a destination. However, the Route Discovery process can be made much more efficient in resources, at the expense of longer latency. This could be done by sequentially, rather than simultaneously, querying the peripheral zone nodes, either one-by-one or in groups. Thus, there is a tradeoff between the cost and the latency of the Route Discovery procedure.

We omit here the correctness proof of the ZRP. The interested reader is referred to [Haas98-2].

3.2 The Route Maintenance Procedure

In the ZRP, each node proactively and continuously learns the topology within its zone radius and, reactively, on demand, discovers routes by hopping in steps of the routing zone radius. Because the number of nodes within a zone is much smaller than the number of network nodes, the penalty for the dissemination of routing information within a zone is limited. So is the cost of route discovery, when the zone radius is sufficiently large. For a small radius (zone radius = 1), the ZRP behaves as a reactive scheme (flooding). At the other extreme, for a radius large enough that the zone spans the entire network, the scheme exhibits proactive behavior. In general, the size of the zone radius determines the ratio between the proactive and reactive behavior of the protocol. The Route Maintenance Procedure adaptively adjusts the zones' radii, so as to reduce the “cost” of the Route Discovery Procedure.

The adjustment may be performed based on the value of the Call-to-Mobility Ratio (CMR), measured independently at each node. The CMR is the ratio of the rate at which queries are initiated to the rate at which connections with neighbors are broken. A large CMR indicates that the network mobiles are very active in connection initiation; thus, a larger zone radius would decrease the frequent route discovery costs. A small CMR suggests that mobiles rarely place outgoing connections, and, to reduce the overall cost of learning the routing within the nodes' routing zones, a smaller zone radius is preferable. Similarly, for fast moving mobiles (small CMR), the local zone routing information becomes obsolete quickly. Thus, a smaller zone radius carries a smaller penalty. The routing zone radii may be configured prior to network deployment, based on a priori knowledge of network call activity and mobility patterns. Alternatively, and more typically, the routing zones may be resized dynamically, allowing the ZRP to adapt to local changes in call activity or node mobility.9
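As an illustration only, a node-local radius controller driven by the CMR might look as follows; the thresholds and radius bounds are hypothetical tuning values we chose, not figures from the paper:

```python
def adjust_zone_radius(radius, cmr, low=0.5, high=2.0, r_min=1, r_max=6):
    """One step of a CMR-driven zone-radius adjustment.

    cmr: rate of initiated queries / rate of broken neighbor connections.
    The thresholds (low, high) and the radius bounds (r_min, r_max) are
    illustrative choices, not parameters specified by the ZRP."""
    if cmr > high and radius < r_max:
        return radius + 1   # query-dominated node: grow the zone (more proactive)
    if cmr < low and radius > r_min:
        return radius - 1   # mobility-dominated node: shrink the zone (more reactive)
    return radius
```

Each node could apply such a rule periodically on its own CMR estimate, so that different nodes converge to different radii depending on their local activity.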

The Route Maintenance Procedure also significantly reduces the routing costs by employing the Route Discovery procedure only when there is a substantial change in the network topology. More specifically, active routes are cached by nodes: the communicating end nodes and the intermediate nodes.

8 This is done by having each node eavesdrop on all its neighbors' communications, and it requires that the IERP communicate with the MAC layer.
9 Dynamic adjustment of the routing zone radius requires minor modifications to the basic Zone Routing Protocol. Discussion of these enhancements is outside the scope of this paper.


Inactive paths are purged from the caches after some timeout period.10 Upon a change in the network topology such that a link within an active path is broken, a local path repair procedure is initiated. The path repair procedure substitutes a broken link with a mini-path between the ends of the broken link. A path update is then generated and sent to the end points of the path. Path repair procedures tend to reduce path optimality (e.g., increase the length for shortest path routing). Thus, after some number of repairs, the path end points will initiate a new Route Discovery procedure to replace the path with a new optimal one.
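The repair step — splicing a mini-path in place of the broken link while leaving the rest of the path untouched — can be sketched as below. The plain BFS stand-in for the repair search and all names are our assumptions, not the paper's procedure:

```python
from collections import deque

def shortest_path(adj, src, dst):
    """Plain BFS path; a stand-in for whatever bounded local search the
    repair procedure actually uses."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None

def repair_path(adj, path, broken):
    """Splice a mini-path over the broken link (u, v) of an active path.
    `adj` is the topology after the link failure."""
    u, v = broken
    i = path.index(u)
    mini = shortest_path(adj, u, v)
    if mini is None:
        return None              # no local repair possible; fall back to Route Discovery
    return path[:i] + mini + path[i + 2:]
```

Each successful repair lengthens the path, which is why the end points eventually trigger a fresh Route Discovery rather than repairing indefinitely.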

4.0 Evaluation of the ZRP

We use the OPNET™ Network Simulator from MIL3, an event-driven simulation package, to evaluate the performance of the ZRP over a range of routing zone radii, from reactive routing to proactive routing. Performance is gauged by measuring the control traffic generated by the ZRP and its effects on the average session delay. Our results can be used to determine the optimum ZRP routing zone radius for a given nodal velocity and for a given route query rate.

The ZRP control traffic consists of the intrazone (IARP) route update packets and the interzone (IERP) route request/reply/failure packets. While the neighbor discovery beacons could be considered control overhead, this additional traffic is independent of both mobile velocity and routing zone radius. Furthermore, the neighbor discovery process is not an exclusive component of the ZRP; various MAC protocols are also based on neighbor discovery. As such, the beacons do not contribute to the relative performance of the ZRP and are not accounted for in our analysis. Because the IERP packets are of variable length (due to the route accumulation procedure), we measure control traffic in terms of node ID fields, rather than packets.

A meaningful measure of ZRP delay is the average route query response time, which is defined as the average duration from the time a route is initially requested by the Network layer until the route is discovered.11 If the destination appears in the routing tables (which will occur with probability (1–Prob[route discovery])), the query is immediately answered and the route query response time is assumed to be zero.12 Otherwise, a route discovery is required (which will occur with probability Prob[route discovery]) and the route query response time is measured as the time elapsed between the generation of the route request and the reception of the first route reply.

For a fixed network size and fixed nodal density, the probability of a route discovery for an initial query depends only on the routing zone radius. The behavior of the route reply time is far more complicated. Not only is it dependent on the arrival rate of control packets, it is also affected by such factors as the network traffic load and the average length of IERP control packets. Our study provides some insight into the effect of these factors on the ZRP delay.

Our simulated RWN consists of 52 mobile nodes, whose initial positions are chosen from a uniform random distribution over an area of 600 [m] by 600 [m]. Each node moves at a constant

10 The determination of what constitutes an “active” or “inactive” path depends on the CMR of the network nodes. A cache management algorithm that determines path activity is outside the scope of this paper.
11 This delay metric does not reflect the delays associated with subsequent route repairs. We assume here that routes can be adequately repaired through the local route repair procedure described earlier. These limited-depth queries produce much less control traffic and much lower delays compared with the initial full-depth query.
12 We assume that the local processing time (e.g., table lookup) is negligible, compared with transmission delays.


speed, v, and is independently assigned an initial direction,13 which is uniformly distributed between 0 and 2π. When a node reaches the edge of the simulation region, it is reflected back into the coverage area.
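The mobility model described above (constant speed, random initial direction, reflection at the region boundary) can be reproduced in a few lines; the per-time-step interface below is our own framing of it:

```python
import math

def move(x, y, theta, v, dt, size=600.0):
    """One time step of the mobility model: constant speed v, straight-line
    motion, specular reflection off the edges of the square region.
    Assumes a single step never overshoots the region by more than `size`."""
    x += v * dt * math.cos(theta)
    y += v * dt * math.sin(theta)
    if x < 0.0 or x > size:
        x = -x if x < 0.0 else 2.0 * size - x
        theta = math.pi - theta      # reflect off a vertical edge
    if y < 0.0 or y > size:
        y = -y if y < 0.0 else 2.0 * size - y
        theta = -theta               # reflect off a horizontal edge
    return x, y, theta % (2.0 * math.pi)
```

Reflection (rather than wrap-around) keeps the nodal density roughly uniform without teleporting nodes across the region.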

Each simulation runs for a duration of 125 seconds. No data is collected for the first 5 seconds of the simulation, to avoid measurements during the transient period and to ensure that the initial intrazone route discovery process stabilizes.

In order to measure the delay resulting only from the ZRP overhead, the network load is assumed to be low. Route failures are detected and acted upon. The route queries are generated according to a Poisson arrival process, with the arrival intensity being a simulation parameter. The route queries represent both the initial query performed at the beginning of a session and subsequent queries due to reported route failures. Each route query is for a destination selected from a uniform random distribution over all other nodes in the network. Since the average time between a node’s queries for the same destination is longer than the expected interzone route lifetime, discovered interzone routes are effectively used only once and then discarded.

For the purposes of our simulation, we have made a number of simplifying assumptions regarding the behavior of the lower network layers and the channel. This simplified model helps to improve the understanding of our routing protocol's behavior by providing our performance measures with some immunity from lower layer effects.

From the media access control (MAC) perspective, we assume that there is no channel contention. This assumption is necessary to separate the delays associated with a particular MAC scheme (e.g., collision avoidance algorithms) from the delays related to the routing protocol. These MAC-independent results could be used as a benchmark for future analysis of the interaction between the Routing and the MAC layers.

Our assumption of a collision-free media access protocol means that the average SNR of a received packet is limited by the ambient background noise and the receiver noise. For fixed transmitter and noise powers, we assume that the BER is reasonably low within a distance, which we call dxmit. Beyond dxmit, the BER increases rapidly. This behavior results from a rapid decrease in received power as the separation distance is increased. We approximate this rapid increase in BER by the following simplified reception model:

Prob[packet received at distance d] = 1, for d ≤ dxmit; 0, for d > dxmit.

We interpret this behavior as follows: any packet can be received, error-free, within a radius of dxmit from the transmitter, but is lost beyond dxmit. Since packet delivery is guaranteed to any destination in range of the source, we are able to further reduce the complexity of our model by eliminating packet retransmission at the data link level.
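The resulting reception rule reduces to a single distance test, which is what lets the simulation drop link-layer retransmissions entirely; the function name is illustrative:

```python
def packet_received(distance, d_xmit):
    """Disc reception model of the simulation: a packet is received
    error-free within d_xmit of the transmitter and lost beyond it."""
    return distance <= d_xmit
```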

13 Direction is measured as an angle relative to the positive x-axis.


5.0 Performance Results

Results of our simulation are presented in the following figures. Figure 11 shows the dependence of the intrazone control traffic on the routing zone radius, for various rates of network reconfiguration. All else being equal, the rate at which the network reconfigures increases linearly with the speed of the nodes. For unbounded networks with a uniform distribution of nodes, we expect the intrazone control traffic to grow polynomially with the routing zone radius. However, because our network is of finite size and the nodes are distributed randomly, we find that the increase is actually somewhat slower. It should be noted that there is no intrazone control overhead for a zone radius of one: all nodes within a routing zone of radius one are, by definition, neighbors. Consequently, the Neighbor Discovery Protocol provides all of the information needed to maintain connectivity within the routing zone.

The performance of the reactive portion of the ZRP is exhibited in Figure 12. As we increase the routing zone radius, we find that the rate of interzone control traffic decreases. This decrease can be attributed to three factors. First, as the size of the routing zone increases, more destinations can be found within a routing zone, requiring fewer IERP route requests. Second, as routing zones become larger, the redundant route query traffic is reduced through the increasingly directed propagation of queries to peripheral nodes. Lastly, the average number of peripheral nodes between a querying source and a destination is inversely proportional to the routing zone radius. Thus, as the routing zone radius increases, the IERP accumulated routes are specified, on average, by fewer node IDs.

For a zone radius of one, all nodes are peripheral nodes and bordercasting is equivalent to flooding. For larger radii, we observe a significant reduction in the interzone control traffic, indicating a potential benefit of a hybrid routing scheme compared to purely reactive routing.

The total control traffic (i.e., the sum of the control packets from the intrazone and interzone protocols), depicted in Figure 13, gives an indication of the performance of our hybrid routing scheme. For low route query rates, we find that relatively reactive routing (i.e., small routing zone radii) produces the least amount of control traffic. As the route query rate increases, the control overhead can be minimized by increasing the routing zone radius. For the network configurations and operational conditions that we assume in our simulation, configuring the ZRP with an intermediate routing zone radius is shown to reduce the rate of ZRP control traffic by approximately 45% compared with the purely reactive schemes. For larger radii, the overhead required to maintain the larger routing zones outweighs the benefits gained from bordercasting.

Figure 14 shows the performance of the ZRP as measured by the average route query response time. The delay characteristics appear to be heavily influenced by the behavior of the interzone route discovery protocol.

Under the conditions that the average amount of control traffic is small to moderate (small routing zones, small query rates, and moderate nodal velocities), most of the instantaneous network load is due to a single route discovery. When the routing zones become relatively large and the network topography more volatile, the overall ZRP control traffic becomes large and begins to have a noticeable impact on the instantaneous network load. This behavior is exhibited in Figure 14c. We note that for loads of 0.5 and 1.0 [queries/second] (representing short route lifetimes), a minimum in the average route query response time appears at an intermediate zone radius. Although we have simulated the ZRP for medium-sized networks with small routing zones, it is reasonable to assume that for wider networks and larger routing zones, a minimum will also be present for relatively low velocities and low route query rates. Neglecting the effects of additional data traffic, we find that the ZRP can provide as much as a 50% reduction in the average route query response time compared with purely reactive routing. This improvement is somewhat smaller for networks with highly dynamic topologies, but even our most volatile networks exhibit a significant improvement of 38% compared to purely reactive routing.

6.0 Summary and Concluding Remarks

The Zone Routing Protocol (ZRP) provides a flexible solution to the challenge of discovering and maintaining routes in the Reconfigurable Wireless Network communication environment. The ZRP combines two radically different methods of routing into one protocol. Interzone route discovery is based on a reactive route request/route reply scheme. By contrast, intrazone routing uses a proactive protocol to maintain up-to-date routing information to all nodes within a node's routing zone.


The amount of intrazone control traffic required to maintain a routing zone increases with the size of the routing zone. However, through a mechanism which we refer to as bordercasting, we are able to exploit the knowledge of the routing zone topography to significantly reduce the amount of interzone control traffic. For networks characterized by highly mobile nodes and very unstable routes, the hybrid proactive-reactive routing scheme produces less average total ZRP control traffic than purely reactive routing. Purely reactive schemes appear to be more suitable for networks with greater route stability. Furthermore, for highly active networks (frequent route requests), more proactive configurations produce less overhead (i.e., larger routing zones are preferred).

We note that for networks with low activity, the instantaneous network load is generally dominated by the control traffic from a single route discovery. Consequently, the ZRP exhibits its minimum delay for relatively large routing zone radii, even for cases where relatively reactive routing minimizes the average ZRP control traffic. For highly volatile networks, the ZRP has been shown to provide 38% less delay than reactive routing. For slower, more stable networks, the optimal-delay ZRP configuration produced a nearly 50% reduction in delay compared to reactive routing. Based on the performance of the ZRP under heavy control traffic, we expect that additional data traffic will further reduce the optimal size of the routing zone.

7.0 References

[Bertsekas92] D. Bertsekas and R. Gallager, Data Networks, Second Edition, Prentice Hall, Inc., 1992.
[Cheng89] C. Cheng, R. Riley, S.P.R. Kumar, and J.J. Garcia-Luna-Aceves, “A Loop-Free Extended Bellman-Ford Routing Protocol without Bouncing Effect,” ACM Computer Communications Review, vol.19, no.4, 1989, pp.224-236.
[Corson97] M.S. Corson and V. Park, “Temporally-Ordered Routing Algorithm (TORA) Version 1 Functional Specification,” IETF MANET Internet Draft, Dec. 1997.
[Ephremides87] A. Ephremides, J.E. Wieselthier, and D.J. Baker, “A design concept for reliable mobile radio networks with frequency hopping signaling,” Proceedings of the IEEE, vol.75, pp.56-73, January 1987.
[Garcia-Luna-Aceves93] J.J. Garcia-Luna-Aceves, “Loop-Free Routing Using Diffusing Computations,” IEEE/ACM Transactions on Networking, vol.1, no.1, February 1993, pp.130-141.
[Gerla95] M. Gerla and J.T-C. Tsai, “Multicluster, Mobile, Multimedia Radio Network,” ACM/Baltzer Wireless Networks Journal, vol.1, no.3, pp.255-265, 1995.
[Haas97] Z.J. Haas, “A Routing Protocol for the Reconfigurable Wireless Networks,” IEEE ICUPC’97, San Diego, CA, October 12-16, 1997.
[Haas98] Z.J. Haas and S. Tabrizi, “On Some Challenges and Design Choices in Ad-Hoc Communications,” submitted for publication.
[Haas98-2] Z.J. Haas and M.R. Pearlman, “Providing Ad-Hoc Connectivity with the Reconfigurable Wireless Networks,” submitted for journal publication.
[Johnson96] D.B. Johnson and D.A. Maltz, “Dynamic Source Routing in Ad-Hoc Wireless Networking,” in Mobile Computing, T. Imielinski and H. Korth, editors, Kluwer Academic Publishers, 1996.
[Lauer88] G. Lauer, “Address Servers in Hierarchical Networks,” IEEE International Conference on Communications ’88, Philadelphia, PA, 12-15 June 1988.
[Leiner87] B.M. Leiner, D.L. Nielson, and F.A. Tobagi, “Issues in Packet Radio Network Design,” Proceedings of the IEEE, vol.75, pp.6-20, January 1987.
[Moy97] J. Moy, “OSPF Version 2,” RFC 2178, March 1997.
[Murthy] S. Murthy and J.J. Garcia-Luna-Aceves, “An Efficient Routing Protocol for Wireless Networks,” MONET, vol.1, no.2, pp.183-197, October 1996.
[Murthy95] S. Murthy and J.J. Garcia-Luna-Aceves, “A Routing Protocol for Packet Radio Networks,” Proc. of ACM Mobile Computing and Networking Conference, MOBICOM’95, Nov. 14-15, 1995.
[Perkins94] C.E. Perkins and P. Bhagwat, “Highly Dynamic Destination-Sequenced Distance-Vector Routing (DSDV) for Mobile Computers,” ACM SIGCOMM, vol.24, no.4, Oct. 1994, pp.234-244.
[Perkins97] C.E. Perkins, “Ad Hoc On-Demand Distance Vector (AODV) Routing,” IETF MANET Internet Draft, Dec. 1997.
[Westcott84] J. Westcott and G. Lauer, “Hierarchical routing for large networks,” IEEE MILCOM’84, Los Angeles, CA, October 21-24, 1984.


Page 228: Wireless Personal Communications: Emerging Technologies for Enhanced Communications

9

CDMA SYSTEMS MODELLING USING OPNET SOFTWARE TOOL

Piotr Gajewski, Jaroslaw Krygier

Military University of Technology, Electronics Faculty

Kaliskiego 2 Str., 01-489 Warsaw, Poland

phone: (+48 22) 685-9517 fax: (+48 22) 685-9038

E-mail: [email protected]

ABSTRACT

The paper presents the results of CDMA network modelling using OPNET software, together with some results of our model investigation. Simulation results can be obtained using purpose-built computer programs or commercial software tools; OPNET (Optimized Network Engineering Tool) is an example of such commercial software, developed and delivered by MIL 3.

The elaborated model of a CDMA system consists of mobile terminals, base stations, base station controllers and a managing centre. This four-level project contains several models, including user mobility models (Gaussian, triangular), traffic models (Poisson), channel assignment models (a hybrid model with channel relocation), handover models, and a (regular) microcellular network model.

The user-defined parameters of these models can be introduced and changed. This makes it possible to investigate the call blocking probability versus the number of channels, the channel assignment method, mobile station mobility, priorities, traffic intensity, and the probability of mobile station inaccessibility. Some results are presented in this paper, including the blocking probability versus the total number of channels, the number of fixed and dynamic channels, and the size of the switching area in the proposed method.

1. INTRODUCTION

Recent years have seen a stunning development of cellular networks and great interest in them among users all over the world. Demand for data transfer is growing steadily, yet second-generation systems provide such services only below the expected standard and at low transmission speeds. The demand for so-called personal communication services (PCS) with worldwide reach makes working out a uniform standard essential. At present there is no settled opinion concerning PCS standard solutions. The application of CDMA code access in the radio interface, with the direct-sequence (DS) spread-spectrum technique, is being considered as offering the best possibilities for supporting intensive radio communication.

The Communication Systems Institute of MUT undertook the development of a program application to investigate the characteristics of a CDMA system using OPNET software. Computer simulation is generally used to confirm theoretical considerations or to plan the operation of an actual system. The first stage of investigating a system is building a model. The model can be a formal representation of a theory, a formal description of empirical observation, or, most often, a combination of both.

This paper presents the resulting model of a cellular network using DS-CDMA. Section 2 describes the mobility model used in the system. Section 3 describes the soft handover model, characteristic of systems whose radio channels are coded by pseudo-random sequences. Section 4 contains the description of the proposed channel allocation algorithm, and Sections 5 and 6 present the implementation of the model in OPNET and the results of the simulation experiment.

2. MOBILITY MODEL IN A CDMA SYSTEM

OPNET makes it easy to model the mobility of mobile stations by drafting the motion trajectory of particular nodes with the Network Editor. Modelling a large number of moving mobile terminals this way, however, would be laborious and would lack the randomness such movement should exhibit. Therefore a model that generates motion trajectories randomly was built, based on the following assumptions:

- calls are generated uniformly inside every cell, and the call fluxes from separate callers are independent;

- a mobile station can move equiprobably in 4 mutually perpendicular directions;

- V is a random variable denoting the velocity of a mobile station, with normal distribution of mean Vmean and a given standard deviation; Vi is the constant velocity of the mobile station between the (i-1)-th and i-th velocity variation;

- T is a continuous random variable denoting the time between two successive velocity variations, or between a velocity variation and the end of a call; it has exponential distribution with mean Tmean. Ti denotes the duration between the (i-1)-th and i-th velocity variation of the mobile station, with T0 = 0;

- the two random variables V and T are statistically independent;

- the trajectory of a mobile station is tracked only during a call;

- the full duration of a call is a continuous random variable with exponential distribution.

Fig. 1 shows the movement of a mobile station modelled with the random variables mentioned above. The shaded range marks the handover area, in which a mobile station is connected to 3 base stations at the same time.


Fig. 2 shows the details of trajectory generation. The random velocity vi of each successive segment, the times between velocity variations, and the coordinates (xi, yi) of the velocity variations can be determined. Assuming a constant measurement interval (tmeasure), the path segments di between these measurement points will shorten with increasing velocity. The total path of a mobile station also depends on the randomised total call time. The position of the mobile station is updated at the coordinates (xi, yi).
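The trajectory model described above can be sketched in code. The following Python fragment is a minimal illustration, not the authors' implementation: only `Vmean` and `Tmean` come from the text; the function name, the non-negativity clamp on the sampled speed, and the remaining parameter names are assumptions.

```python
import math
import random

def generate_trajectory(x0, y0, v_mean, v_std, t_mean, call_time, t_measure=1.0):
    """Generate (x, y) position samples for one call, following the model:
    piecewise-constant speed V ~ Normal(v_mean, v_std), direction chosen
    equiprobably among 4 perpendicular headings, and time between velocity
    variations T ~ Exponential(mean=t_mean)."""
    headings = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]  # 4 perpendicular directions
    x, y, t = x0, y0, 0.0
    positions = [(x, y)]
    while t < call_time:
        v = max(0.0, random.gauss(v_mean, v_std))   # constant speed for this segment (clamped; assumption)
        theta = random.choice(headings)             # equiprobable direction
        seg = min(random.expovariate(1.0 / t_mean), # time to the next velocity variation,
                  call_time - t)                    # truncated at the end of the call
        # sample the position every t_measure seconds along the segment
        steps = int(seg // t_measure)
        for _ in range(steps):
            x += v * t_measure * math.cos(theta)
            y += v * t_measure * math.sin(theta)
            positions.append((x, y))
        # advance to the end of the segment
        rem = seg - steps * t_measure
        x += v * rem * math.cos(theta)
        y += v * rem * math.sin(theta)
        t += seg
    positions.append((x, y))
    return positions
```

With a fixed `t_measure`, higher segment speeds produce more widely spaced position samples, matching the behaviour of di described above.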


3. SOFT HANDOVER MODEL

A very important problem in cellular networks is handing over a call in progress between two base stations at the moment the mobile station crosses the cell boundary. Because a CDMA system can use the same frequency band over the whole network region, the soft handover method is commonly used: two or more cells supervise the quality of the radio connection while the mobile station moves inside the so-called soft handover range, and the most convenient base station is chosen to carry the useful information exchange.

Fig. 3 shows the soft handover range as a shaded field whose rims can change position. Physically this is possible through signal power modification or variation of the transmission criteria; in our case, the effect of the switching range on the quality of service offered by the system is itself a subject of investigation. Fig. 1 also shows the switching zone, limited by a dashed circle; note that a mobile station can be within the range of three base stations. The decision to connect the mobile terminal to neighbouring base stations is made at the first measurement instant after the soft handover range is crossed (Fig. 3). Leaving the switching range is detected similarly.
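The membership decision for the soft handover zone might be sketched as follows. The paper defines the zone graphically (Figs. 1 and 3), so the circular-zone criterion, the margin parameter and all names below are illustrative assumptions.

```python
import math

def active_set(mobile_xy, base_stations, cell_radius, handover_margin):
    """Return the base stations a mobile is connected to.

    A station joins the active set when the mobile lies within
    cell_radius + handover_margin of it; inside the shaded overlap
    region this yields up to three simultaneous links, as in Fig. 1.
    The circular-zone test is an illustrative assumption."""
    mx, my = mobile_xy
    active = []
    for bs_id, (bx, by) in base_stations.items():
        if math.hypot(mx - bx, my - by) <= cell_radius + handover_margin:
            active.append(bs_id)
    return active
```

Evaluating this at each measurement instant reproduces the behaviour described above: the active set grows when the mobile enters the switching zone and shrinks when it leaves.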

4. CHANNEL ALLOCATION ALGORITHM

Cellular network systems differ from wired communication systems in their limited number of channels, a consequence of the limited allocated frequency bandwidth. They therefore have to conduct a very complex channel allocation policy for calling mobile stations. Because we model a code-division system, we deal with one bandwidth allocated to the whole system and code-divided into channels.

All such channels can then be used in all cells of the system, or be divided into disjoint sets and allocated as in narrowband frequency-divided systems.

In the modelled system a hybrid policy of radio channel allocation is proposed. It combines two ways of allocating channels: a fixed set, typical for CDMA, and a dynamic set allocated using the minimal interference spacing theory for co-channel cells. The method of allocating channels to cells is presented in Fig. 4.

The dynamic channel allocation principle is to select a channel in a given cell so as to minimise the number of cells, belonging to that cell's interfering environment, in which the used channel would be blocked; a so-called cost function is used for this. When a call is finished, the channel allocation is no longer optimal, and reallocation is performed on a similar basis.
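A minimal sketch of such a hybrid allocation with a cost function might look as follows. The data structures and the exact cost definition are assumptions, since the paper does not give them explicitly; the cost here counts the neighbouring cells that would lose the channel's availability.

```python
def allocate_channel(cell, fixed_free, dynamic_pool, in_use, neighbours):
    """Hybrid allocation: serve the call from the cell's fixed channel set
    first (fixed channels are reserved per cell, so no interference check is
    needed). Otherwise, pick from the shared dynamic pool: a dynamic channel
    is usable in a cell only if no cell in its interfering neighbourhood uses
    it, and among usable channels the one whose use would remove availability
    from the fewest neighbouring cells is chosen (the cost function).
    Illustrative sketch; the real cost function is an assumption."""
    if fixed_free[cell]:
        return fixed_free[cell].pop(), "fixed"

    def usable(ch, c):
        return ch not in in_use[c] and all(ch not in in_use[n] for n in neighbours[c])

    candidates = [ch for ch in dynamic_pool if usable(ch, cell)]
    if not candidates:
        return None, None  # call blocked
    best = min(candidates, key=lambda ch: sum(usable(ch, n) for n in neighbours[cell]))
    in_use[cell].add(best)
    return best, "dynamic"
```

Reallocation after a call release can reuse the same cost function: release the channel, then check whether moving an ongoing call to a cheaper channel lowers the total cost.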

5. CDMA SYSTEM MODEL IMPLEMENTATION ON THE OPNET PLATFORM

The worked-out simulation model of the CDMA system takes advantage of the graphic modelling capabilities of OPNET. The Network Editor made it possible to model the network, taking into account the exact position coordinates of every base station. The network elements include base stations, base station controllers and a management node. Network nodes in the form of a generator represent the mobile stations and the subscribers of the wired communication system. The model implementation in the Network Editor is shown in Fig. 5.

One of the targets is to test the influence of traffic intensity on each system element. To make that variable easy to regulate, we assume that mobile stations are not modelled separately as single objects in the Network Editor; instead, all mobile stations are represented by a movement generator. Its tasks are as follows:

- randomly generating the positions of a fixed number of mobile stations at the moments information is exchanged;

- randomly generating the velocity and direction of mobile station movement;

- simulating the user's behaviour;

- making decisions concerning the quality of the received signal;

- generating signalling information.

Modelling of the user's behaviour is based on generating the start and finish times of a call or information transmission, as well as the intensity of calls. The call or data transmission time is assumed to have exponential distribution, whose mean value is set at the beginning of the simulation. The intensity of calls is defined through the time between successive calls, which is also assumed to have exponential distribution, resulting in a Poisson flux of calls. A user calls another mobile station subscriber or a wired-system subscriber with equal probability. If the call is rejected, the user does not call again.
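The user-behaviour model above can be sketched as a simple event generator. The exponential distributions and the equiprobable choice of callee are the ones assumed in the text; the function and variable names are illustrative.

```python
import random

def generate_calls(n_calls, mean_interarrival, mean_holding):
    """Generate a Poisson flux of calls: exponential times between successive
    call attempts and exponential call durations, as assumed in the model.
    Returns a list of (start_time, holding_time, callee_type) events."""
    events, t = [], 0.0
    for _ in range(n_calls):
        t += random.expovariate(1.0 / mean_interarrival)  # next call attempt
        holding = random.expovariate(1.0 / mean_holding)  # call duration
        # the callee is a mobile or a wired-system subscriber with equal probability
        callee = random.choice(["mobile", "pstn"])
        events.append((t, holding, callee))
    return events
```

Because the inter-arrival times are exponential, the number of call attempts in any fixed interval is Poisson-distributed, which is exactly the "Poisson flux of calls" the model requires.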


Other users of the cellular network or of the wired communication system can also call a mobile station; at a given moment the station can therefore be engaged or disconnected, or out of reach of the system. The velocity and trajectory of a mobile station are modelled according to the arrangement described above.

The general structure of the simulation model is displayed in Fig. 6. The model consists of five basic levels:

- initial data setting;

- network (configuration, cell sizes, dimensions of the switching regions);

- user behaviour (generation of calls, mobility, soft handover effect);

- call service (allocation and reallocation of channels, transfers, supervision of connections);

- collection of simulation results.

The following parameters can be specified in the input data set:

- mean number of mobile station users per cell,

- total number of duplex voice channels,

- number of fixed and dynamic channels per cell,

- number of channels reserved for handed-over calls,

- mean call duration,

- mean interval between successive calls from any free subscriber,

- probability of an outside subscriber's inaccessibility,

- mean value and standard deviation of mobile station velocity,

- cellular network modulus and switching region radius.

The algorithms for call generation, for simulating subscriber mobility, and for the behaviour of mobile stations in the switching range have been written in the internal language of OPNET and placed in the generator node. The algorithm generating the call flux from subscribers of the wired communication system resides in the PSTN node, while the more complicated channel allocation algorithms are found in the management node. The nodes bs00, bs01, ... and bsc0, bsc1, ... are auxiliary: they transfer signalling information between the generator and management nodes, and gather and transfer output simulation data.

The OPNET editor enables particular processes to be represented with the aid of a transition graph, in which the individual processes are described in the internal language. Fig. 7 presents the implementation of the mobile station generator model.

6. SIMULATION RESULTS

At the beginning, the set of input parameters described above was introduced. A series of output data was obtained as a result of the simulation experiment. Figs. 8, 9 and 10 show the probability of blocking a call directed at the group of channels in a cell, as a function of the mean traffic intensity, for different channel distributions in the cell, for different sizes of the switching region, and for different mean mobile station velocities.

On the basis of the presented results it can be noticed that:

- the hybrid method of channel allocation significantly decreases the call blocking probability in comparison with the fixed method;

- decreasing the mean velocity of the traffic increases the call blocking probability;

- increasing the soft handover region decreases the call blocking probability.


7. CONCLUSIONS

The authors have proposed and built a model of a CDMA system and implemented it with the OPNET simulation tool. The graphic capabilities of the tool were used to model the physical network structure, while the dynamics of the system were modelled by programming in the internal OPNET language. A hybrid method of channel allocation has been proposed. A simulation experiment was conducted and a series of results obtained, part of which is presented in Section 6. The results confirm the usefulness of the dynamic channel allocation policy and the proper behaviour of the system under modification of the soft handover region and of the mobile station velocity. The model can be used to plan CDMA cellular networks and to probe system solutions. It is an open model and can be extended with other parts of the system.



10

SIGNAL MONITORING SYSTEM FOR FAULT MANAGEMENT IN WIRELESS LOCAL AREA NETWORKS

Jelena F. Vucetic, Paul A. Kline
Dynamic Telecommunications, Inc.
12850 Middlebrook Road, Suite 302
Germantown, MD 20874, USA
Phone: (301) 515-0403 (x102)
Fax: (301) 515-0037
Email: [email protected]

Abstract: In the last several years, various types of wireless networks have become available and cost-effective for commercial applications. In order to become a real alternative to traditional wireline telecommunications, wireless networks should provide competitive quality, reliability and availability of service. These requirements imply the need for a wireless fault management system with features similar to those incorporated in its wireline counterpart. With the expansion of advanced high-speed services (e.g. multimedia, ATM, etc.) into wireless networks, fault management becomes imperative to ensure reliability and quality of service.

Despite the significant need for fault management in wireless networks, there have been almost no such proposals or deployments yet. Existing wireless fault management systems provide management of Mobile Switching Centers (MSC), and only rarely of base stations. Traditionally, base stations measure signal quality and send alarms to a Network Management Center (NMC) if some of the signal parameters are out of the allowed range. This solution has proven insufficient, since it detects signal quality only at a base station's site, not at a user's site.

The proposed Signal Monitoring System collects various signal parameters at the user's site (the Access Point of a wireless local area network (WLAN)), determines whether these parameters are within the allowed range of values, and generates an alarm if they are not. The NMC receives the alarms and handles them as in wireline networks (using an automated trouble-ticketing mechanism, and network reconfiguration if necessary). The system does not interfere with regular network operations, nor does it require any extra voice channels (i.e. there is no reduction of network throughput).

This paper describes the elements of the Signal Monitoring System (the Signal Monitoring Units (SMU) and a Performance Manager) and how it detects and locates base station failures, signal degradation, and co-channel and adjacent-channel interference in a network. The mechanism of alarm generation and reporting is also described, as well as the interface between the Signal Monitoring System and an existing Network Management Center.


1. INTRODUCTION

This paper describes a fault management solution for wireless local area networks (WLAN) based on an overlay system that continuously evaluates signal quality in the network coverage area, and generates and reports alarms to an existing network management system.

The proposed fault management system provides features equivalent to the OSI standard [3,5] traditionally applied in wireline networks. It also includes management of the operations and maintenance personnel engaged in WLAN operations [7].

The proposed fault management system enables:

• Continuous measurement of various signal parameters in the network coverage area
• Detection and location of signal degradation
• Detection and location of co-channel and adjacent-channel interference
• Dynamic control of base stations' transmit power to adjust the overall network coverage
• Detection and location of base station failures
• Efficient trouble-shooting
• Management of operations and maintenance personnel
• Various types of statistics and reports
• A friendly graphical user interface (GUI)

2. SYSTEM OVERVIEW

2.1 SYSTEM ARCHITECTURE

Figure 1 presents the proposed general network architecture, which consists of a WLAN, its Network Management Center (NMC), an overlay network of scanning receivers, and the Performance Manager.

Traditionally, in a wireless network the base stations measure signal quality and send alarms to the NMC if some of the signal parameters are out of the allowed range [6]. This solution has been shown to be insufficient, since it detects signal quality only at the base station's site, not at the subscriber's site.

To improve alarm generation in a WLAN, a Signal Monitoring System can be used, as shown in Figure 1. This system does not interfere with regular network operations. It only "listens" to the signals on the utilized channels, measures their parameters, and sends alarms to the NMC if some of the parameters are out of the allowed range.

The Signal Monitoring System consists of Signal Monitoring Units (SMU) and a Performance Manager. An overlay network of SMUs is used for various types of measurements and for statistical analysis of signal parameters. The SMUs can be placed at multiple locations within the WLAN to execute the desired measurements. The SMUs are connected to the Performance Manager, to which they report their measurements according to a predefined reporting schedule. The Performance Manager collects the measured data from the SMUs, analyzes them, and determines whether there is poor or insufficient signal quality in a certain area of the WLAN. Such an event may indicate that the corresponding base station has a problem or a failure. If this is the case, the Performance Manager generates an alarm containing a description of the problem, the location (base station identifier) where the problem was detected, and a timestamp. The Performance Manager then sends the alarm to the NMC, which decides what further actions should be taken (e.g. send trouble tickets, ignore or clear the alarm).

There are many ways to implement this system. This paper considers the following high-level implementation versions:

1. The SMUs are connected to a centralized Performance Manager (e.g. via an Ethernet network, as shown in Figure 2). This version is especially suitable for indoor systems (e.g. hospitals, convention centers, factories, etc.).

2. Each SMU is collocated with a distributed Performance Manager (Figure 3), which processes measurements locally and sends alarms related to the base stations it covers to the NMC. The connection between the Performance Managers and the NMC can be TCP/IP, dial-up or any other on-demand connection.


2.2 SYSTEM DESIGN CONSIDERATIONS

The location and number of SMUs in a WLAN are critical for reliable fault management. However, in the design of the Signal Monitoring System a tradeoff needs to be made between reliability and cost. To minimize the cost of the system, the number of SMUs should be minimal; they should be placed at locations where signal degradation and interference from other base stations are beyond a tolerance threshold.

2.3 NETWORK ELEMENTS

The managed network elements of the proposed Signal Monitoring System are Base Station Transceivers (BST) and Base Station Controllers (BSC).

For the purpose of fault management, BSTs are characterized by the following attributes:

• BST identifier
• Associated BSC identifier
• Type of antenna (omni-directional or sectorized, non-diversity or diversity)
• Number of voice channels
• List of voice channels
• Number of control channels
• List of control channels
• Current status of channels (available, busy, faulty)
• BST status (operational, faulty, not-configured)
• List of backup BSTs
• Collected alarms
• BST location
• BST area
• BST region
• Responsible Supervisor
• Responsible Operator
• Responsible Technician(s)

For the purpose of fault management, BSCs are characterized by the following attributes:

• BSC identifier
• Associated BST identifiers
• Number of voice channels
• List of voice channels
• Number of control channels
• List of control channels
• Current status of channels (available, busy, faulty)
• BSC status (operational, faulty, not-configured)
• List of backup BSCs
• Collected alarms
• BSC location
• BSC area
• BSC region
• Responsible Supervisor
• Responsible Operator
• Responsible Technician(s)

3. FAULT MANAGEMENT SYSTEM

The proposed fault management system for WLANs includes signal measurements performed by the SMUs, alarm generation and reporting performed by the Performance Manager, and alarm handling performed by the NMC.

3.1 THE PERFORMANCE MANAGER

Depending on the organization of the overall fault management system, each base station in the monitored network can have its own Performance Manager (as shown in Figure 3), or a single Performance Manager can monitor several base stations (as shown in Figure 2). The former solution is recommended if the associated SMU is within the coverage area of a single base station. The latter solution is more suitable if the SMU is located within an area with overlapping coverage of more than one base station (distributed Performance Manager, as in Figure 3), or in the case of a centralized Performance Manager (Figure 2).

In either case, the Performance Manager consists of the following functional blocks, as shown in Figure 4:

• SMU Interface
• Measurement Database
• Alarms Generator
• NMC Interface

227

Page 243: Wireless Personal Communications: Emerging Technologies for Enhanced Communications

3.1.1 The SMU Interface

In transmit mode, the SMU Interface downloads lists of scanning channel identifiers and the types of desired measurements to the associated SMU(s). The list of channel identifiers is created by a network operator using the corresponding base station's frequency plan. The operator can enter the list either remotely from the NMC or locally from a database residing in the Performance Manager. For each base station monitored by the Performance Manager, the list of channels downloaded into the corresponding SMU(s) includes the channels allocated to the base station, as well as the list of their adjacent (higher and lower) channels.

In receive mode, the SMU Interface collects measurement data from the SMUs according to the previously downloaded list of channel identifiers and measurement types. The received data are then stored in the Measurement Database, whose organization (for a single base station) is shown in Figure 4.

3.1.2 The Measurement Database


The Measurement Database is a relational database residing in a Performance Manager. It consists of tables that define: which BSTs are monitored by the Performance Manager (BST_Table), which SMUs report measurements to the Performance Manager (SMU_Table), which SMUs measure signal characteristics for each base station (SMUList), and the measured data for each channel allocated to each BST (Meas_Table).

The Measurement Database organization is shown in Figure 4. In this example, the Performance Manager monitors N BSTs using M SMUs. Each BST is associated with the set of SMUs that measure its signal characteristics; this association is defined in the SMUList tables.

For each controlled SMU and each BST monitored by the Performance Manager, there is a Meas_Data data block. It consists of a Channels Table and a Measurement Table (RSSI, CoChInt, AdjChHi and AdjChLo columns). The detailed organization of a single Meas_Table (for a single BST and a single SMU) is presented in Figure 5.

The number of entries in each of these columns is equal to the number of channels allocated to the corresponding BST. The RSSI column contains the average RSSI measured for each channel in the Channels Table. In the same fashion, the CoChInt column contains the co-channel interference measurements, while the AdjChHi and AdjChLo columns contain the adjacent-channel interference measurements for the higher and lower adjacent channel, respectively.
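As a rough illustration, the Meas_Table organization might be expressed as a relational schema like the following (SQLite used for concreteness). The table and column names BST_Table, SMU_Table, Meas_Table, RSSI, CoChInt, AdjChHi and AdjChLo come from the text; the key choices, data types and sample values are assumptions.

```python
import sqlite3

# Minimal relational sketch of the Measurement Database (Figs. 4 and 5).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE BST_Table (bst_id TEXT PRIMARY KEY);
CREATE TABLE SMU_Table (smu_id TEXT PRIMARY KEY);
CREATE TABLE Meas_Table (
    bst_id   TEXT REFERENCES BST_Table,
    smu_id   TEXT REFERENCES SMU_Table,
    channel  INTEGER,          -- one row per channel allocated to the BST
    RSSI     REAL,             -- average received signal strength
    CoChInt  REAL,             -- co-channel interference
    AdjChHi  REAL,             -- higher adjacent-channel interference
    AdjChLo  REAL,             -- lower adjacent-channel interference
    PRIMARY KEY (bst_id, smu_id, channel)
);
""")
conn.execute("INSERT INTO BST_Table VALUES ('bst00')")
conn.execute("INSERT INTO SMU_Table VALUES ('smu00')")
conn.execute("INSERT INTO Meas_Table VALUES ('bst00', 'smu00', 5, -82.5, -110.0, -105.0, -107.5)")
row = conn.execute("SELECT RSSI FROM Meas_Table WHERE channel = 5").fetchone()
```

The composite primary key (bst_id, smu_id, channel) mirrors the structure of Figure 5: one Meas_Table block per (BST, SMU) pair, with one row per allocated channel.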

3.1.3 The Alarms Generator

The Alarms Generator periodically reads the measurements from the Measurement Database for each controlled SMU and each monitored base station. It evaluates whether any of the interference measurements exceeds its predefined threshold, or whether any RSSI measurement is below a specified threshold. The Alarms Generator then correlates the measurements obtained from the SMUs measuring signals of the same BST, as well as of adjacent BSTs. If the correlated results indicate poor coverage in a certain area, the Alarms Generator generates an alarm and sends it to the NMC via the NMC Interface. These alarms contain the following information:

• BSC Identifier
• BST Identifier
• Faulty Channel Identifier
• Alarm Code
• Alarm Description
• Out-of-Range Measured Data
• SMU Identifier
• SMU Location
• Timestamp
• Alarm Severity
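The per-channel threshold check and alarm assembly might be sketched as follows. The thresholds, the severity rule, and everything beyond the field list above are illustrative assumptions, and the cross-SMU correlation step described in the text is omitted for brevity.

```python
def generate_alarms(measurements, rssi_min, int_max):
    """Scan per-channel measurement records and emit an alarm whenever RSSI
    falls below rssi_min or any interference figure exceeds int_max.
    The severity rule and record layout are illustrative assumptions."""
    alarms = []
    for m in measurements:
        problems = []
        if m["RSSI"] < rssi_min:
            problems.append("low RSSI")
        for key in ("CoChInt", "AdjChHi", "AdjChLo"):
            if m[key] > int_max:
                problems.append(f"{key} above threshold")
        if problems:
            alarms.append({
                "bsc_id": m["bsc_id"],
                "bst_id": m["bst_id"],
                "channel": m["channel"],
                "smu_id": m["smu_id"],
                "description": "; ".join(problems),
                "data": {k: m[k] for k in ("RSSI", "CoChInt", "AdjChHi", "AdjChLo")},
                "timestamp": m["timestamp"],
                "severity": "major" if "low RSSI" in problems else "minor",
            })
    return alarms
```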

3.1.4 The NMC Interface

The NMC Interface enables communication between the Performance Manager and the NMC using any standard protocol (e.g. as shown in Figures 2 and 3).

In transmit mode, the NMC Interface sends alarms from the Alarms Generator to the NMC. In receive mode, it receives the lists of channel identifiers for each base station monitored by the Performance Manager, and forwards each list (along with the list of adjacent channels) via the SMU Interface to the corresponding SMU. At the same time, the NMC Interface stores these lists in the Measurement Database.

3.2 THE NETWORK MANAGEMENT CENTER

To utilize the fault management capabilities of the described Signal Monitoring System, the existing Network Management Center (NMC) includes the following functional blocks:

• Performance Manager Interface
• Management Information Base (MIB)
• Fault Management Applications
• Graphical User Interface (GUI)

If any of these blocks already exists in the NMC, it should be expanded to support fault management.

3.2.1 The Performance Manager Interface

The Performance Manager Interface exchanges information between the NMC and the Performance Manager.

In transmit mode, the Performance Manager Interface downloads into the Performance Manager the list of channels allocated to each of the BSTs monitored by that Performance Manager.

In receive mode, the Performance Manager Interface receives alarms from the Performance Manager and stores them in the MIB.

3.2.2 The Management Information Base (MIB)The Management Information Base (MIB) is a relational database which stores all information onthe managed network elements (NE), operations and maintenance personnel, that are relevant tothe fault management applications.

3.2.2.1 The Fault Management Information StructureThe fault management information [4] can be divided into three categories:

• Information on regions of operations

• Information on network elements

• Information on operations and maintenance personnel


Each information category is hierarchically organized and connected by relations. Different categories of information are also interconnected through relations, as shown in Figure 6.

In terms of regions of operations, a network may be divided into regions. Each region consists of several areas.

In terms of managed network elements, a network includes BSCs and BSTs. Each region contains one or more BSCs and the BSTs connected to them. Each area contains either several BSCs or several BSTs (in Figure 4, denoted by shaded blocks).

A network is operated and maintained by personnel that include a Superuser, Supervisors, Operators and Technicians. Each Operator is assigned to an area to control the corresponding set of BSCs or BSTs. A team of Technicians is assigned to each area to install, maintain, repair and upgrade the corresponding BSCs or BSTs. Each Technician is responsible for a specified type and subset of NEs within a certain geographic area. Each region is managed by a Supervisor. Each Supervisor has a group of Operators and Technicians reporting to him/her. All Supervisors report to the Superuser, who is responsible for the whole network.

3.2.2.2 The MIB Organization

The fault management information is implemented in a MIB as a relational database [2,4,5,7]. The general MIB organization is shown in Figure 7.

The MIB consists of tables interconnected by relations. Each table consists of attributes. For simplicity, Figure 7 highlights only the attributes that illustrate how the tables are related in terms of fault management.

The NE table contains all relevant network element (BSC or BST) information (the NE identifier, the affiliated area identifier, the responsible technician's identifier, the responsible operator's identifier, the NE configuration, location, list and status of generated alarms, number of channels, etc.).

The Area table contains all relevant information on an operation area (the area identifier, the affiliated region identifier, the responsible operator identifier, the area's office address, and telephone and facsimile numbers).


The Region table contains all relevant information on an operation region (the region identifier, the responsible supervisor identifier, the region monitoring center address, and telephone and facsimile numbers).

The Supervisor table contains all relevant information on a supervisor who is responsible for a certain network region (the supervisor identifier, the affiliated region identifier, the name, address, telephone number, facsimile number, pager, working hours).

The Operator table contains all relevant information on an operator who is responsible for a certain network area (the operator identifier, the affiliated area identifier, the supervisor's identifier, the name, address, telephone number, facsimile number, pager, working hours).

The Technician table contains all relevant information on a technician who maintains a certain network element (the technician identifier, the affiliated area identifier, the name, address, telephone number, facsimile number, pager, working hours, network element identifier).

Figure 8 illustrates a top-down MIB structure. The Superuser table contains pointers to the lists of three types of MIB entities: managed network elements, personnel, and managed areas.

Assuming that the MIB and its tables have been created, the following operations can be executed on MIB entities:

• Add new entity

• Modify entity information

• Read entity information

• Delete entity

These operations also include creating, modifying or deleting the relations of the relevant entities.
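The tables, relations and entity operations described above could be realized, for example, as a small relational database. The schema, table names and identifiers below are illustrative assumptions, not the chapter's actual implementation:

```python
import sqlite3

# Illustrative MIB schema following the NE -> Area -> Region relations;
# all table/column names are assumptions for the sketch.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE Region (region_id TEXT PRIMARY KEY, supervisor_id TEXT);
CREATE TABLE Area   (area_id TEXT PRIMARY KEY,
                     region_id TEXT REFERENCES Region, operator_id TEXT);
CREATE TABLE NE     (ne_id TEXT PRIMARY KEY,
                     area_id TEXT REFERENCES Area,
                     ne_type TEXT CHECK (ne_type IN ('BSC','BST')),
                     technician_id TEXT, alarm_status TEXT);
""")

# Add new entity
db.execute("INSERT INTO Region VALUES ('R1', 'SUP1')")
db.execute("INSERT INTO Area   VALUES ('A1', 'R1', 'OP1')")
db.execute("INSERT INTO NE     VALUES ('BST-17', 'A1', 'BST', 'TECH3', 'clear')")

# Modify entity information
db.execute("UPDATE NE SET alarm_status = 'major' WHERE ne_id = 'BST-17'")

# Read entity information by following the relations NE -> Area -> Region
row = db.execute("""SELECT ne.ne_id, r.supervisor_id
                    FROM NE ne JOIN Area a ON ne.area_id = a.area_id
                               JOIN Region r ON a.region_id = r.region_id""").fetchone()
print(row)   # ('BST-17', 'SUP1')

# Delete entity
db.execute("DELETE FROM NE WHERE ne_id = 'BST-17'")
```

Deleting an entity would, in a full implementation, also cascade to its relations, matching the note above.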

3.2.3 The Fault Management Applications

Fault management applications include:


The NMC filters alarms received from all Performance Managers based on various criteria, such as geographic area, available operations personnel, and alarm severity.

Based on the nature of a received alarm, the NMC can issue a trouble ticket directing a technician to resolve the problem, clear the alarm, or escalate its priority to expedite its resolution.

In some cases, the NMC can reconfigure the network in order to provide the necessary coverage in the presence of a base station failure. This can be done by increasing the transmit power of a certain BST to provide service to an area affected by the failure of another BST. If interference-based alarms are persistent, the NMC can modify the entire frequency plan of the managed network and/or the transmit power of its base stations to improve the overall coverage.
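The filtering and ticketing behaviour described above can be sketched as follows; the alarm fields and the severity scale are illustrative assumptions:

```python
# Sketch of NMC alarm filtering and trouble-ticket dispatch.
# The severity ordering and alarm record fields are assumptions.
SEVERITY_ORDER = ["warning", "minor", "major", "critical"]

def filter_alarms(alarms, area=None, min_severity="warning"):
    """Keep alarms matching a geographic area and at/above a severity floor."""
    floor = SEVERITY_ORDER.index(min_severity)
    return [a for a in alarms
            if (area is None or a["area"] == area)
            and SEVERITY_ORDER.index(a["severity"]) >= floor]

def issue_ticket(alarm, technician_id):
    """Trouble ticket directing a field technician to resolve the problem."""
    return {"technician": technician_id, "description": alarm["description"],
            "ne_id": alarm["ne_id"], "location": alarm["location"]}

alarms = [
    {"ne_id": "BST-17", "area": "A1", "severity": "major",
     "description": "persistent interference", "location": "site 17"},
    {"ne_id": "BST-09", "area": "A2", "severity": "warning",
     "description": "low signal level", "location": "site 9"},
]
urgent = filter_alarms(alarms, area="A1", min_severity="major")
ticket = issue_ticket(urgent[0], technician_id="TECH3")
print(ticket["ne_id"])   # BST-17
```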

3.2.4 The Graphical User Interface (GUI)

We assume that the NMC already includes a GUI containing a geographic map of the network coverage area, with an overlay graphical presentation of the managed network elements and their interconnections.


The existing GUI can be expanded with the following features to support fault management:

• Changing the color of a network element symbol based on the severity of a relevant alarm

• Blinking the network element symbol in emergency situations

• Clicking on the network element symbol to obtain real-time information on the element's configuration and pending alarms.

• Clicking on a selected alarm to automatically generate and send a trouble ticket to a field technician. The ticket includes the technician identifier, the alarm description, and the faulty network element's identifier and location. The NMC can send the ticket to a technician by email, facsimile, pager, short messaging, etc.

• Automatically clearing an alarm and modifying the status/color of the corresponding network element when the problem/failure is resolved.

4. CONCLUSION

This paper describes an overlay Signal Monitoring System which provides fault management of base stations in a WLAN. The system continuously measures various signal parameters using a set of SMUs deployed within the network coverage area. Based on these measurements, the system detects signal degradation or interference and generates alarms that indicate either a failure in a certain network element or inadequate coverage.

The proposed Signal Monitoring System improves the reliability and quality of service of a WLAN through efficient failure detection and location. If it is applied to dynamic adjustment of the network frequency plan and to transmit power control of the base stations, it can also improve the overall network coverage and the availability of network resources.

REFERENCES

[1] S. M. Dauber: "Finding Fault", BYTE Magazine, McGraw-Hill, Inc., New York, NY, March 1991.

[2] O. Wolfson, S. Sengupta, Y. Yemini: "Managing Communication Networks by Monitoring Databases", IEEE Transactions on Software Engineering, Vol. 17, No. 9, September 1991.

[3] L. Feldkhun: "Integrated Network Management Systems", Proceedings of the First International Symposium on Integrated Network Management, 1989.

[4] H. Yamaguchi, S. Isobe, T. Yamaki, Y. Yamanaka: "Network Information Modeling for Network Management", IEEE Network Operations and Management Symposium, 1992.

[5] S. Bapat: "OSI Management Information Base Implementation", Proceedings of the Second International Symposium on Integrated Network Management, 1991.

[6] J. Vucetic, P. Kline, J. Plaschke: "Implementation and Performance Analysis of Multi-Algorithm Dynamic Channel Allocation in a Wideband Cellular Network", Proceedings of the IEEE ICC '96 Conference, 1996.

[7] J. Vucetic, P. Kline: "Integrated Network Management for Rural Networks with Fixed Wireless Access", Proceedings of Wireless '97, the 9th International Conference on Wireless Communications, 1997.


11

COMPUTER-AIDED DESIGNING OF LAND MOBILE RADIO COMMUNICATION SYSTEMS, TAKING INTO CONSIDERATION INTERFERING STATIONS

Marek Amanowicz, Piotr Gajewski, Marian Wnuk

Military University of Technology, Electronics Faculty

Kaliskiego 2 str., 01-489 Warsaw, Poland

phone: (+48 22) 685-9228 fax: (+48 22) 685-9038

E-mail: [email protected].

ABSTRACT

The electromagnetic environment is made up of signals emitted by mobile and stationary radio stations in radio communications networks, transmitting stations in radio-link systems, as well as radar, navigation, jamming and other systems to be found in a given area. Formulas for coverage and minimum distance analysis, taking into account mutual or intentional interference, are presented.

1. INTRODUCTION

The evaluation of the power of the received signal requires knowledge of the models of the elements in the system, i.e. the transmitter, the transmitting antenna, the propagation environment of electromagnetic waves, the receiving antenna and the receiver.

The power of the signal (useful or interfering) at the receiver input is calculated with the use of probability theory methods. The average value, the standard deviation, and the probability of exceeding the admissible threshold signal value (for example, the minimal detectable signal) are defined.

When this average value is known, it is possible to define the average value of propagation losses, which hinges on the distance R between the devices. In the coverage analysis this is an operational range, and in the interference analysis an admissible distance. It is possible to calculate the range between devices with an eye to the terrain profile, which brings us close to the real propagation model.

2. SIGNAL POWER AT RECEIVER INPUT

The average value of signal power at the demodulator input, expressed in decibels, is:

The minimal detectable signal is the value of the minimal signal power at the receiver (demodulator) input which meets the signal power requirements at the receiver output:


When a radiocommunication network comprising n transmitters is analysed, the minimal detectable signal is expressed by the following dependence:

where:

I - the interference energy sum at the receiver (demodulator) input

Ps - the noise power of the receiving system at the IF amplifier output [W]

3. DISTANCE BETWEEN DEVICES FOR A GIVEN PROBABILITY OF SIGNAL DETECTION

The distance between devices for a given probability of signal detection is calculated to define either the operational range or the admissible (minimal) distance between devices above which EMC conditions are preserved. In both cases, appropriate threshold values are assumed for the signal power and for the probability of the threshold being exceeded by the useful signal as well as by the disturbing signal.

When the operational range is calculated, the probability for the useful signal, and, in the case of the admissible distance, the probability of the interfering signal exceeding the assumed threshold, is within the range <0.05, 0.95>, depending on the useful signal requirements.

The threshold value of the signal is expressed by formula (2), which applies both to useful signals S in the analysis of operational range and to interference I when the admissible distance is calculated. Appropriate values of the signal-to-noise ratio (S/N), interference-to-noise ratio (I/N) and signal-to-interference ratio (S/I) at the receiver (demodulator) input are calculated according to the ITU-R and ITU-T recommendations of the International Telecommunication Union (ITU) [5].

Though the admissible distance between devices is usually defined for out-of-band transmission of the interfering signal, cases of in-band transmission should also be considered.


When the threshold value is known, the average value of propagation loss is calculated according to the following formula:

Corresponding to this value is a certain distance between the considered radio stations at which the probability that the received signal power exceeds the threshold equals 0.5.

To fulfil condition (5), this value should be selected in relation to the threshold value so as to meet the following condition:

where:

X - the standardised normal-distribution variable of signal power corresponding to the assumed probability

σ - the signal power standard deviation

It is possible to fulfil condition (5) by correcting the propagation loss value calculated according to formula (6). In effect, the appropriate average value of propagation loss is expressed as follows:
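Numerically, the correction described above amounts to applying a margin of X·σ to the median path loss, where X is the standardised normal variable for the assumed probability. A minimal sketch, assuming log-normal signal-power variation and that the margin is subtracted from the median loss (the function name and sign convention are assumptions, since the chapter's formulas are not reproduced here):

```python
from statistics import NormalDist

def corrected_path_loss(mean_loss_db: float, sigma_db: float, p_exceed: float) -> float:
    """Path loss (dB) at which the received power still exceeds the
    threshold with probability p_exceed, given log-normal shadowing
    with standard deviation sigma_db around the median loss."""
    x = NormalDist().inv_cdf(p_exceed)   # standardised variable X for the assumed probability
    return mean_loss_db - x * sigma_db   # margin applied to the median loss

# Median loss 140 dB, sigma 8 dB: requiring 95% coverage tightens the loss budget.
print(corrected_path_loss(140.0, 8.0, 0.95))
```

At p = 0.5 the standardised variable X is zero and no correction is applied, which matches the statement above that the uncorrected loss corresponds to a probability of 0.5.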

The distance R between the devices which corresponds to the average value of propagation loss is calculated using a modified EPM-73 method that takes the terrain profile into consideration, for the following parameters:

• the loss value calculated according to formula (10),

• the antenna's height above the surrounding terrain,

• f - the frequency band,

• H or V antenna polarisation,

• the average terrain height,

• the height of terrain below or above the average level (medians) for a transmitting and a receiving antenna.

The height of the antenna above the surrounding terrain (fig. 1) is:


Considering the effective height of the antenna for vertical and horizontal polarisation, the height of the antenna is:

Direct transmission:

- suspension height of the transmitting antenna

- suspension height of the receiving antenna

- average height of terrain surrounding the transmitting antenna within a 3-15 km distance

- average height of terrain surrounding the receiving antenna within a 3-15 km distance

- terrain elevation near the transmitting antenna

- terrain elevation near the receiving antenna

Fig. 1. Terrain profile on a route between stationary radio station B and mobile station M, as well as effective heights of antennas (the average terrain cross-section for the given azimuth is shown).

The values calculated from dependencies (13, 14) are taken into account while calculating propagation losses. It has to be remembered, however, that certain limits are imposed on the values determined by dependence (11), which affect the value of propagation losses.


4. COMPUTER CALCULATIONS

This software makes a visual presentation of the results of the calculations possible. The necessary parameters are defined by the user.

The screen is divided into three sections (Fig. 2):

• top menu bar

• bottom status bar

• digital map with contour lines and coverage distribution

The status bar provides information about the map and the position of the mouse cursor: geographical and topographical coordinates, vegetation growth, and terrain height (fig. 2).

The menu with dialogue boxes gives access to such commands as:

• change the parameters of the map (scale, centre the screen, enlarge a section of the map),

• present the terrain profile (display the profile, open a file with a previously shown profile),

• fix a network (create a new network, open an existing network file, save the current network and close),

• fix data concerning the facilities (display a facility, create a new one, copy, delete one or all facilities),

• select an algorithm (calculations for a non-directional or a directional antenna).


The simplest operations involving the map are changing its scale, enlarging a section of it, or refocusing the screen. Another useful function is a readout of the terrain profile along a given route (fig. 4).

The situation on the map can either be entered by the user, or a file with a saved network can be opened. The network entered can be modified, and the changes introduced can be saved.


Various kinds of broadcasting stations can be featured on the map. Each has a name and a set of parameters providing information about the transmitter power, the working frequency, the height at which the antenna is suspended, and many other data important for the algorithm (fig. 5). Some data have default values, so the calculations can be made even if not all values are given, but it has to be remembered that they cannot be left out, as this would affect the final result. Since stations may have the same parameters, it is possible to copy a station with its list of parameters but without its name.


When the station's parameters are known, the algorithm of calculations can be selected, for a non-directional or a directional antenna. The next step is to select the type of calculation, which may concern:

• the range,

• signal strength,

• probability,

• power density,

• field intensity.

There is also an option either to make the calculations considering the terrain profile or to select an "average" profile (fig. 6). The parameters option makes it possible to edit the parameters belonging to the algorithm. In this way we can decide which probability levels or power levels the calculations will be made for.

When the calculations have been completed, isolines are displayed on the screen, their alignment depending on the levels for which the calculations have been made. Figs. 7, 8, 9 and 10 show the results of a series of calculations for various algorithms.


This software makes it possible to determine the quality of the useful signal against the background of interference. One of the quality measures in radio and telephone systems is the articulation (AS) or the articulation index (AI) for voice transmission. The threshold signal is the ratio between the useful signal and the interference at which correct reception is possible. In the case of digital systems it is the bit error rate (BER).

To receive an undisturbed signal, the following inequality must be fulfilled:

Knowing the technical parameters of the facility and the interference level, we can calculate the propagation losses and the distance at which the received signal is jammed. If the interference comes from n sources, the resultant value of interference is:

where:

- disturbance from the i-th source [W]

n - the number of interference sources
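If, as the text suggests, the resultant interference is the power sum of the n per-source disturbances expressed in watts, it can be sketched as follows (the function name and dBm bookkeeping are illustrative assumptions):

```python
import math

def total_interference_dbm(levels_dbm):
    """Resultant interference from n sources: powers (not dB values)
    add, so convert each level to watts, sum, and convert back."""
    total_w = sum(10 ** (level / 10) / 1000 for level in levels_dbm)  # dBm -> W
    return 10 * math.log10(total_w * 1000)                            # W -> dBm

# Two equal -100 dBm interferers combine into a resultant about 3 dB stronger.
print(total_interference_dbm([-100.0, -100.0]))
```

The point of the conversion is that decibel levels cannot be added directly; only the underlying powers in watts can.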


Fig. 11 shows a situation where a receiving station with an omnidirectional pattern is within the range of two broadcasting stations, both with directional antennas, one emitting a useful and the other an interfering signal. When the useful signal and the interference reach a given level, the receiving station will be jammed.

5. CONCLUSIONS

Of the many measures of signal quality in analogue and digital radio and other communication systems, the most useful are:

• the articulation score or articulation index - for telephone radiocommunication systems,

• the bit error rate or code-word distortion degree - for digital transmission,

• resolution - for TV and facsimile,

• the probability of detection and of false alarm - for radar systems,

• location error - for navigation systems.

The software presented here makes it possible to define a probable power distribution around the broadcasting station considered, or the probability distribution for a given power. It is also possible to determine the strength of the electric field, the power density and the propagation losses. The useful range and the admissible distance from the source of interference can be calculated as well. In all these calculations, the signal quality measures listed above are taken into consideration.


6. REFERENCES

[1] Duff W. G., White D. R. J.: EMI Prediction and Analysis Techniques. A Handbook Series on Electromagnetic Interference and Compatibility, vol. 5, Don White Consultants, Germantown, Maryland, 1971-1974.

[2] Lustgarten M. N., Madison J. A.: An Empirical Propagation Model (EPM-73), IEEE Trans. on EMC, vol. EMC-19, No. 3, August 1977.

[3] M. Amanowicz, P. Gajewski, W. Kołosowski, M. Wnuk: "Land mobile communication systems engineering", 4th Conference AFRICON, 1996.

[4] Freeman R. L.: Telecommunication Transmission Handbook, John Wiley & Sons, New York, 1981.

[5] International Telecommunication Union: Radio Regulations, edition 1990, Geneva.

[6] M. Amanowicz, P. Gajewski, W. Kołosowski, M. Wnuk: "Analiza komputerowa i doświadczalna stref pokrycia sieci radiokomunikacji lądowej" ["Computer and experimental analysis of coverage zones of land radiocommunication networks"], KST '97, Bydgoszcz, September 1997.


12

ADAPTIVE INTERFERENCE CANCELLATION WITH NEURAL NETWORKS

A. H. El Zooghby, C. G. Christodoulou, and M. Georgiopoulos

Electrical and Computer Engineering Department

University of Central Florida

4000 Central Florida Blvd, Orlando, Florida 32816

[email protected]

Abstract

In modern cellular and satellite mobile communications systems, and in GPS systems, a fast tracking system is needed to constantly locate the users and then adapt the radiation pattern of the antenna to direct multiple narrow beams to desired users and nulls to sources of interference. In this paper, the computation of the optimum weight vector of the array is approached as a mapping problem which can be modeled using a suitable artificial neural network trained with input-output pairs. A three-layer radial basis function neural network (RBFNN) is used in the design of one- and two-dimensional array antennas to perform beamforming and nulling. RBFNNs are used due to their ability to interpolate data in higher dimensions. Simulation results obtained under different scenarios of angular separation and SNR are in excellent agreement with the Wiener solution. It was found that networks implementing these functions are successful in tracking mobile users as they move across the antenna's field of view.

1. Introduction

Multiple access techniques are often used to maximize the number of users a wireless communications system can accommodate. Further improvements in system capacity can be achieved with frequency reuse, where the same frequency is used in two different cells separated far enough that users in one cell do not interfere with users in the other cell. Moreover, adaptive arrays implemented in base stations allow for closer proximity of cofrequency cells or beams, providing additional frequency reuse by rejecting or minimizing cochannel and adjacent channel interference [1]. Motivated by the inherent advantages of neural networks [2]-[7], this paper presents the development of a radial basis function neural network-based algorithm to compute the weights of an adaptive array antenna [8]. In this new approach, the adaptive array can detect and locate mobile users, track these mobiles as they move within or between cells, and allocate narrow beams in the directions of the desired users while simultaneously nulling unwanted sources of interference. This paper is organized as follows: in Sections 2 and 3, a brief derivation of the optimum array weights in 1-D and 2-D adaptive beamforming is presented. In Section 4, the RBFNN approach for the computation of the adaptive array weights is introduced. Finally, Section 5 presents the simulation results and Section 6 offers some concluding remarks.

2. Adaptive beamforming using 1-dimensional linear arrays

Consider a linear array composed of M elements. Let K (K < M) be the number of narrowband plane waves, centered at frequency f_0, impinging on the array from directions \theta_1, ..., \theta_K. Using complex signal representation, the received signal at the m-th element can be written as

x_m(t) = \sum_{i=1}^{K} s_i(t) e^{-j(m-1)\mu_i} + n_m(t),   m = 1, 2, ..., M    (1)

where s_i(t) is the signal of the i-th wave, n_m(t) is the noise signal received at the m-th sensor, and

\mu_i = (2 \pi f_0 d / c) \sin\theta_i    (2)

where d is the spacing between the elements of the array, and c is the speed of light in free space. Using vector notation, we can write the array output in matrix form:

X(t) = A S(t) + N(t)    (3)

where X(t) and N(t) are M-dimensional vectors and S(t) is a K-dimensional vector. Also in (3), A is the M x K steering matrix of the array towards the directions of the incoming signals, defined as

A = [a(\theta_1)  a(\theta_2)  ...  a(\theta_K)]    (4)

where the steering vector a(\theta_i) is defined as

a(\theta_i) = [1, e^{-j\mu_i}, ..., e^{-j(M-1)\mu_i}]^T    (5)


Classical Approach

Assuming that the noise signals received at the different sensors are statistically independent white noise signals of zero mean and variance \sigma^2, and also independent of S(t), the spatial correlation matrix R of the received noisy signals can be expressed as

R = E[X(t) X^H(t)] = A P A^H + \sigma^2 I    (6)

In the above equation, P = E[S(t) S^H(t)] designates the signal covariance matrix and I is the identity matrix. Also, in the above equation, "H" denotes the conjugate transpose. Finally, \lambda_i and e_i stand for the eigenvalues and eigenvectors of the matrix R, respectively.

With the weights of the array element outputs represented as an M-dimensional vector W, the array output becomes

y(t) = W^H X(t)    (7)

The mean output power is thus given by

P_{out} = E[y(t) y^*(t)] = W^H R W    (8)

where * denotes the conjugate. To derive the optimal weight vector, the array output power is minimized so that the desired signals are received with a specific gain while the contributions due to noise and interference are minimized. In other words:

minimize W^H R W   subject to   S_d^H W = r    (9)

In the above equation, r is the V x 1 constraint vector, where V is the number of desired signals, and S_d holds the steering vectors associated with the look directions, as defined in (5). The method of Lagrange multipliers is used to solve the constrained minimization problem in (9). It can be shown that the optimum weight vector is given by the following equation:

W_{opt} = R^{-1} S_d (S_d^H R^{-1} S_d)^{-1} r    (10)

Since the above equation is not practical for real-time implementation, an adaptive algorithm must be used to adapt the weights of the array in order to track the desired signal and to place nulls in the directions of the interfering signals. In this work, neural networks are utilized to achieve this adaptive response in real time; the details appear in Section 4.
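The constrained optimum-weight computation just described can be checked numerically. A sketch assuming the standard Lagrange-multiplier solution W = R^{-1} S_d (S_d^H R^{-1} S_d)^{-1} r, a half-wavelength-spaced 8-element array, and unit-power sources (all sizes and angles are illustrative):

```python
import numpy as np

def steering_vector(theta_deg, M, d_over_lambda=0.5):
    """Steering vector a(theta) of an M-element uniform linear array."""
    mu = 2 * np.pi * d_over_lambda * np.sin(np.radians(theta_deg))
    return np.exp(-1j * mu * np.arange(M))

def lcmv_weights(R, Sd, r):
    """Optimum weights W = R^-1 Sd (Sd^H R^-1 Sd)^-1 r."""
    Ri_Sd = np.linalg.solve(R, Sd)
    return Ri_Sd @ np.linalg.solve(Sd.conj().T @ Ri_Sd, r)

M = 8
a_des = steering_vector(10.0, M)        # desired user at 10 degrees
a_int = steering_vector(40.0, M)        # interferer at 40 degrees
# Correlation matrix R = A P A^H + sigma^2 I for unit-power sources.
A = np.column_stack([a_des, a_int])
R = A @ A.conj().T + 0.01 * np.eye(M)
W = lcmv_weights(R, a_des[:, None], np.array([1.0]))
# Unit gain toward the desired user; the interferer is strongly attenuated.
print(abs(np.vdot(W, a_des)), abs(np.vdot(W, a_int)))
```

The constraint S_d^H W = r is satisfied exactly by construction, while minimizing the output power drives the response toward the interferer close to zero.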


3. Adaptive beamforming using 2-D rectangular arrays

Consider a general M x N rectangular array receiving K signals, and let the received signal data matrix be given by [9]:

where d_u and d_v are the spacings between the elements along the column and row directions, respectively, (\theta_i, \phi_i) are the elevation and azimuth angles of the i-th source, and m = 1, 2, ..., M; n = 1, 2, ..., N; i = 1, 2, ..., K. The received signal data can be arranged in a 1 x MN vector given by

The signal direction vector is defined in terms of the Kronecker product of the column and row steering vectors, which are given by

It can be shown that in this case the vector of optimum weights is given by

W_{opt} = R^{-1} S_d (S_d^H R^{-1} S_d)^{-1} r    (16)

where R is defined as in (6), and S_d is the steering vector associated with the desired signals, defined in terms of vectors whose elements are given as in (12) and (13).
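The Kronecker-product construction of the 2-D direction vector can be sketched as follows. The exact phase convention of the chapter's (12)-(13) is not reproduced here, so the u/v phase terms below are a standard assumption for a half-wavelength-spaced rectangular array:

```python
import numpy as np

def planar_steering(theta_deg, phi_deg, M, N, d_over_lambda=0.5):
    """2-D direction vector as the Kronecker product of the column and
    row steering vectors of an M x N rectangular array (a sketch)."""
    theta, phi = np.radians(theta_deg), np.radians(phi_deg)
    mu_u = 2 * np.pi * d_over_lambda * np.sin(theta) * np.cos(phi)  # column direction
    mu_v = 2 * np.pi * d_over_lambda * np.sin(theta) * np.sin(phi)  # row direction
    a_u = np.exp(-1j * mu_u * np.arange(M))
    a_v = np.exp(-1j * mu_v * np.arange(N))
    return np.kron(a_u, a_v)   # MN-element direction vector

a = planar_steering(30.0, 60.0, 4, 4)
print(a.shape)   # (16,)
```

The Kronecker product turns the two per-axis length-M and length-N vectors into the single MN-element vector that matches the stacked 1 x MN data arrangement above.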


4. Neural network-based interference cancellation

This section describes a new implementation of beamforming using neural networks. Since the optimum weight vector is a nonlinear function of the correlation matrix and the constraint matrix (see equations (10) and (16)), it can be approximated using a suitable neural net architecture such as the radial basis function neural network [11]. A radial basis function neural network can approximate an arbitrary function from an input space of arbitrary dimensionality to an output space of arbitrary dimensionality [10]. The block diagram of the RBFNN-based array is shown in Figure 1. As can be seen from Figure 2, the RBFNN consists of three layers of nodes: the input layer, the hidden layer and the output layer. In our application the input to the network is the correlation matrix R, while the output layer consists of 2M nodes (1-D case) or 2MN nodes (2-D case) to accommodate the complex output weight vector.

The RBFNN is designed to perform an input-output mapping trained with examples. Many learning strategies for training an RBFNN have appeared in the literature. The one used in this paper was introduced in [10], [11], where an unsupervised learning algorithm (such as K-means [12]) is initially used to identify the centers of the Gaussian functions used in the hidden layer. Then an ad hoc procedure is used to determine the widths (standard deviations) of these Gaussian functions. According to this procedure, the standard deviation of a Gaussian function with a certain mean is the average distance to the first few nearest neighbors among the means of the other Gaussian functions. This unsupervised learning procedure identifies the weights (means and standard deviations of the Gaussian functions) from the input layer to the hidden layer. The weights from the hidden layer to the output layer are identified by a supervised learning procedure applied to a single-layer network (the network from the hidden to the output layer). This supervised rule is referred to as the delta rule; it is essentially a gradient descent procedure applied to an appropriately defined optimization problem.

Once training of the RBFNN is accomplished, the training phase is complete, and the trained neural network can operate in the performance mode (phase). In the performance phase, the neural network is supposed to generalize, that is, respond to inputs that it has never seen before but that are drawn from the same distribution as the inputs used in the training set. One way of explaining the generalization exhibited by the network during the performance phase is by remembering that after the training phase is complete, the RBFNN has established an approximation of the desired input/output mapping. Hence, during the performance phase, the RBFNN produces outputs to previously unseen inputs by interpolating between the inputs used (seen) in the training phase.
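The training recipe above (K-means for the Gaussian centers, nearest-neighbor distances for the widths, then a supervised output layer) can be sketched on a toy scalar mapping. A least-squares fit stands in for the iterative delta rule, and all sizes, seeds and the target function are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=50):
    """Plain K-means to pick the hidden-layer Gaussian centers."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def rbf_design(X, centers, widths):
    """Hidden-layer activations: one Gaussian per center."""
    d2 = ((X[:, None] - centers) ** 2).sum(-1)
    return np.exp(-d2 / (2 * widths ** 2))

# Toy mapping to learn (a stand-in for the R -> W_opt mapping): y = sin(3x).
X = rng.uniform(-1, 1, (200, 1))
y = np.sin(3 * X[:, 0])

centers = kmeans(X, 12)
# Width of each Gaussian: average distance to its two nearest neighbor centers.
dists = np.sort(((centers[:, None] - centers) ** 2).sum(-1) ** 0.5, axis=1)
widths = np.maximum(dists[:, 1:3].mean(axis=1), 1e-3)

H = rbf_design(X, centers, widths)
w_out, *_ = np.linalg.lstsq(H, y, rcond=None)   # output layer: least squares in
                                                # place of the iterative delta rule
y_hat = rbf_design(X, centers, widths) @ w_out
print(np.abs(y - y_hat).max())                  # training error of the fitted RBFNN
```

Because the hidden-layer Gaussians overlap, the fitted network interpolates smoothly between training inputs, which is exactly the generalization behaviour described above.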


Input Preprocessing

First, the array output vectors are generated and then transformed into appropriate input vectors to be presented to the network. The estimation phase consists of transforming the sensor output vector into an input vector and producing the DOA estimate. The correlation matrix R can be rearranged into a new input vector b, given as

It was found that by taking only the upper triangular part of R as the input, the number of input units is significantly reduced without affecting the interpolation capability of the network. Note that we still need twice as many input nodes for the neural network to accommodate the complex inputs.
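The rearrangement of R into the input vector b can be sketched as follows; the text specifies only the upper-triangular part and the real/imaginary split, so the ordering of entries below is an assumption:

```python
import numpy as np

def correlation_input(R):
    """Rearrange the correlation matrix R into the network input vector b:
    keep only the upper-triangular part (R is Hermitian, so the lower half
    is redundant) and split real/imaginary parts, which is why the input
    layer needs twice as many nodes as there are retained entries."""
    iu = np.triu_indices(R.shape[0])
    upper = R[iu]
    return np.concatenate([upper.real, upper.imag])

M = 8
X = (np.random.randn(M, 100) + 1j * np.random.randn(M, 100)) / np.sqrt(2)
R = X @ X.conj().T / 100          # sample correlation matrix
b = correlation_input(R)
print(b.size)                     # 2 * M(M+1)/2 = 72 for M = 8
```

For M = 10 this gives 110 input nodes, matching the figure quoted in the simulations below.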

5. Simulations

The pattern of an array of 8 elements receiving 1 desired signal and 3 interfering signals is shown in Figure 2. The SNR of the sources is 10 dB. The input to the network consisted of all the elements of the correlation matrix R; hence the dimension of the input layer is 128 nodes. In Figure 3, an array of 10 elements is simulated under the same conditions with 110 input nodes, where only the upper triangular part of R was used as the input. The results show that it is possible to reduce the dimension of the input layer significantly without affecting the interpolation capabilities of the network. In Figure 4, an array of 20 elements is shown tracking 7 signals, 4 of which are interference. The SNR of the desired signals was set to 10 dB, while that of the interfering signals was set to 20 dB. Finally, Figure 5 shows a 4 x 4 planar array trained to track 7 signals, 3 of which are desired.

6. Conclusion

A new approach to the problem of adaptive beamforming was developed. The weights were computed using an RBFNN that approximates the Wiener solution. The network was successful in tracking multiple desired users while simultaneously nulling interference caused by cochannel users or other sources of interference. Both linear and planar arrays were simulated, and the results have been very good in every case. Comparison of the adapted pattern obtained by the RBFNN with the optimum solution demonstrated the high degree of accuracy of this approach.

7. Acknowledgement

This work was partly funded by the Florida Space Grant Consortium and by NeuralWare, Inc.

8. References

[1] T. Gebauer and H. G. Gockler, "Channel-individual adaptive beamforming for mobile satellite communications", IEEE Journal on Selected Areas in Communications, vol. 13, no. 2, pp. 439-448, February 1995.

[2] H. L. Southall, J. A. Simmers, and T. H. O'Donnell, "Direction finding in phased arrays with a neural network beamformer", IEEE Transactions on Antennas and Propagation, vol. 43, no. 12, p. 1369, December 1995.

[3] A. H. El Zooghby, C. G. Christodoulou, and M. Georgiopoulos, "Performance of radial basis function networks for direction of arrival estimation with antenna arrays", IEEE Transactions on Antennas and Propagation, vol. 45, no. 11, pp. 1611-1617, November 1997.

[4] A. F. Naguib, A. Paulraj, and T. Kailath, "Capacity improvement with base-station antenna arrays in cellular CDMA", IEEE Transactions on Vehicular Technology, vol. 43, no. 3, p. 691, August 1994.

[5] Luo Long and Li Yan Da, "Real-time computation of the noise subspace for the MUSIC algorithm", Proc. ICASSP 1993, vol. 1, pp. 485-488, April 1993.

[6] D. Goryn and M. Kaveh, "Neural networks for narrowband and wideband direction finding", Proc. ICASSP, pp. 2164-2167, April 1988.

[7] P. R. Chang, W. H. Yang, and K. K. Chan, "A neural network approach to MVDR beamforming problem", IEEE Transactions on Antennas and Propagation, vol. 40, no. 3, pp. 313-322, March 1992.

[8] R. A. Monzingo and T. W. Miller, Introduction to Adaptive Arrays, John Wiley, 1980.


[9] S. J. Yu and J. H. Lee, "Design of two-dimensional rectangular array beamformers with partial adaptivity", IEEE Transactions on Antennas and Propagation, vol. 45, no. 1, pp. 157-167, January 1997.

[10] S. Haykin, Neural Networks: A Comprehensive Foundation, Macmillan College Publishing, 1994.

[11] J. Moody and C. J. Darken, "Fast learning in networks of locally-tuned processing units", Neural Computation, vol. 1, p. 281, 1989.

[12] J. T. Tou and R. C. Gonzalez, Pattern Recognition Principles, Addison-Wesley, Reading, MA, 1976.


13

Calibration of a Smart Antenna for Carrying Out Vector Channel Sounding at 1.9 GHz

Jean-Rene Larocque, John Litva, Jim Reilly

Communication Research Laboratory, McMaster University

Hamilton, Ontario, Canada, L8S
[email protected]

Abstract

In this paper, we present the results of calibration tests carried out with a circular antenna array. We present results showing how the calibration changes with time and temperature, a method to estimate the mutual coupling between elements, and we investigate a method for easily keeping track of the calibration with time.

Keywords: smart antenna, array calibration, vector channel sounding

1 Stability of the calibration

To start, we present some results regarding the effects of time and temperature on the calibration of an antenna array.

1.1 Calibration in anechoic chamber

The antenna array has a central element surrounded by 8 evenly spaced elements. This central element is only there for calibration purposes and is not used during the actual measurements.


An initial calibration is based on measurements with the antenna array inside an anechoic chamber, illuminated from 64 different angles. Using the minimax criterion, an optimization procedure [1] is then applied to the measurements to get 16 sets of beamforming weights. The weights, when applied to the array output, form overlapping antenna patterns with low sidelobes. Applying those weights, it is possible to evaluate the AOA distribution. This extensive calibration has been performed only occasionally, i.e. once every one or two months.

A second step consists of measuring the user code when a signal is fed through the central element of the array. Assuming that the physical position of each element with respect to the center is accurate, we can estimate the variations in gain and phase between elements and later account for them in the in-field calibration.

1.2 Time and temperature effects on the calibration

In order to evaluate the angle of arrival, the antenna array must be calibrated. It therefore becomes relevant to know the lifetime of a particular set of weights. Since the array is used under various weather conditions, it is also important to study the effect of temperature on the radiation pattern.

1.2.1 Temperature effects

We measured the relative variation in phase and gain of the elements versus temperature. The maximum averaged drift in relative gain and phase was recorded to be 0.5 dB and 5 degrees, respectively. Those variations were applied to a theoretical array to study their effect on the beam pattern. As shown in figure 3, no significant increases in the sidelobe levels were observed over the temperature interval studied, i.e. 26°C to 36°C.
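As an illustration of applying such measured drifts to a theoretical array, the sketch below perturbs the weights of a hypothetical 8-element half-wavelength uniform linear array (an assumption on our part; the paper's array is circular and its weights are not reproduced here) by up to 0.5 dB in gain and 5 degrees in phase, and compares peak sidelobe levels.

```python
import numpy as np

# Illustrative sketch: apply 0.5 dB gain and 5 degree phase perturbations
# to a hypothetical 8-element half-wavelength linear array and compare the
# peak sidelobe level before and after the drift.

N = 8
n = np.arange(N)
w = np.ones(N) / N                                   # nominal uniform weights

rng = np.random.default_rng(1)
gain = 10 ** (rng.uniform(-0.5, 0.5, N) / 20)        # within +/- 0.5 dB
phase = np.deg2rad(rng.uniform(-5.0, 5.0, N))        # within +/- 5 degrees
w_drift = w * gain * np.exp(1j * phase)

theta = np.linspace(-np.pi / 2, np.pi / 2, 2001)
A = np.exp(1j * np.pi * np.outer(n, np.sin(theta)))  # half-wavelength steering

def peak_sidelobe_db(weights):
    p = np.abs(weights.conj() @ A) ** 2
    p = p / p.max()
    mainlobe = np.abs(theta) < 0.25                  # crude mainlobe mask
    return 10 * np.log10(p[~mainlobe].max())

print(peak_sidelobe_db(w), peak_sidelobe_db(w_drift))
```

Consistent with the paper's observation, drifts of this size barely move the peak sidelobe of this idealized array.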

1.2.2 Time effects

By injecting a signal into the central element, we measured the variations in the relative gain and phase of the elements over time. As we can see in figure 4, the 8 elements do not change together, thereby degrading the beam pattern. Time is therefore a critical issue for our system. From the results of this study, we have come to the conclusion that the array has to be re-calibrated roughly once every month. Unfortunately, a full calibration is a time-consuming operation and we would like to avoid it. In the following, we investigate a practical method for carrying out a calibration with a few simple measurements.

2 Investigation of an in-field calibration method

Using the central element as a transmitter, with a mini-anechoic chamber covering the array, it is possible to track and compensate, in the field, the relative variations of phase and amplitude of each of the eight receiving elements. The data can be corrected accordingly and processed optimally, assuming that the mutual coupling characteristics do not change.

2.1 Analytical solution for the error matrix

Once the antenna array is freshly calibrated and the differences in phase and gain between the elements have been estimated, it is possible to estimate the error matrix from a theoretical antenna array. From [2], what is measured at the output of a practical array is a modified version of what would be measured in theory: the matrix of user codes measured at 64 different angles equals the error matrix applied to the matrix of theoretical user codes for the same angles (computed with no mutual coupling and no differences of phase or gain between elements), plus an additive white Gaussian noise matrix whose variance represents the error in the measurements. The error matrix comprises the mutual coupling between elements and the gain and phase of each element.

Assuming that the physical structure of the array is perfect, this error matrix E can be modelled as E = MΣ, where M is, in theory, a circulant matrix of the mutual coupling when there is no gain or phase difference between elements and all the elements show the same impedance, and Σ is a diagonal matrix holding the relative gain and phase of each element. The data matrix is normalized with respect to Σ, allowing us to estimate the matrix M directly.
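A sketch of this error model, with invented coupling and gain/phase values (the measured matrices are not reproduced in the text): M is built as a circulant matrix from an assumed first column, Σ as a diagonal matrix of per-element complex gains, and E as their product.

```python
import numpy as np

# Sketch of the error model E = M @ Sigma: M is a circulant mutual-coupling
# matrix, Sigma a diagonal matrix of per-element gain and phase. All numeric
# values below are invented for illustration only.

N = 8
# First column of the circulant coupling matrix: strong self term, weaker
# coupling to the two nearest neighbours on the ring.
c = np.zeros(N, dtype=complex)
c[0] = 1.0
c[1] = c[-1] = 0.1 * np.exp(1j * 0.3)

# Build the circulant matrix M: column k is the first column rolled by k.
M = np.column_stack([np.roll(c, k) for k in range(N)])

# Diagonal matrix of relative element gains and phases (also illustrative).
rng = np.random.default_rng(2)
Sigma = np.diag(rng.uniform(0.9, 1.1, N) * np.exp(1j * rng.uniform(-0.1, 0.1, N)))

E = M @ Sigma

# Circulant check: each column of M is a cyclic shift of the previous one.
assert np.allclose(M[:, 1], np.roll(M[:, 0], 1))
```

Separating E into a circulant part and a diagonal part is what makes the Fourier-based estimation of M in the next step possible.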

To estimate the matrix M, it has been shown in [2] that it must minimize the Frobenius norm of the model error, which can be done by setting the derivative of that norm to zero and solving for M using Kronecker algebra. This formulation is general in the sense that it does not take into account the circulant property of the matrix M, and it yields a closed-form solution for M.


Assuming that M is circulant, we write M = WDW^{-1}, where the matrix D is diagonal and the columns of the matrix W are the harmonically-related rotating exponentials. This assumption leads to the set of equations (8), in which .* denotes the Matlab notation for element-by-element multiplication and N is the number of elements in the array (8 in our case); the direct relation with the Fourier transforms of the matrices is apparent. Since D is diagonal, equation (8) is actually a set of 8 equations. The expression inside the brackets on its left-hand side is the magnitude of the Fourier transform of the columns of the theoretical matrix.
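The Fourier relation invoked here can be checked numerically: a circulant matrix is diagonalized by the DFT, so its eigenvalues are the FFT of its first column. The coupling values below are illustrative only.

```python
import numpy as np

# Numerical check of the circulant/DFT relation: for a circulant matrix M
# with first column c, M = (1/N) * W^H @ diag(fft(c)) @ W, where W is the
# DFT matrix (the harmonically-related rotating exponentials).

N = 8
c = np.array([1.0, 0.1, 0.02, 0.0, 0.0, 0.0, 0.02, 0.1])   # illustrative coupling
M = np.column_stack([np.roll(c, k) for k in range(N)])     # circulant matrix

W = np.fft.fft(np.eye(N))          # DFT matrix: W @ x == fft(x)
D = np.diag(np.fft.fft(c))         # eigenvalues = FFT of the first column

M_rebuilt = W.conj().T @ D @ W / N
print(np.allclose(M, M_rebuilt))   # True
```

Because D is diagonal, estimating M under the circulant assumption reduces to N scalar equations in the Fourier domain, exactly as the text states.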

2.2 Updating the weights

A quick way to ensure that the array remains calibrated is to track the matrix Σ, assuming that the mutual coupling characteristics remain constant, since we are using the system under fairly similar conditions and its physical characteristics are not modified. When processing the data, we would then only need to modify the set of weights according to the matrix Σ, through the update equation (9), in which the new set of weights is obtained from the old one.
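The update equation itself is not reproduced in the text, so the sketch below is our own illustrative reading of the idea: if each element's response drifts by a tracked complex factor, dividing the corresponding weight by the conjugate of that factor restores the originally calibrated beamformer output.

```python
import numpy as np

# Hedged sketch (our reading, not the paper's equation (9)): per-element
# complex drift factors sigma_k are tracked via the central element, and the
# weights are rescaled so that w_new^H x_drift == w_old^H x_ref.

rng = np.random.default_rng(3)
N = 8
w_old = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # calibrated weights
x_ref = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # reference snapshot

# Per-element drift in gain and phase since the last full calibration.
sigma = rng.uniform(0.9, 1.1, N) * np.exp(1j * rng.uniform(-0.2, 0.2, N))
x_drift = sigma * x_ref                                        # drifted snapshot

w_new = w_old / sigma.conj()                                   # corrected weights

# The corrected beamformer reproduces the originally calibrated output.
assert np.allclose(w_new.conj() @ x_drift, w_old.conj() @ x_ref)
```

As the text notes, a correction of this kind is only exact while the mutual coupling matrix M stays unchanged; it cannot compensate for coupling drift.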

2.3 Results

The following matrices were obtained when (6) was applied to data from November 1997 and March 1998, without imposing any structure on M:

We can see that the matrices are diagonally dominant, although they do not look circulant: the elements of the diagonals are different. Most importantly, we see that the mutual coupling characteristics do not change significantly over a period of 5 months; many of the coefficients agree to the third digit. It should be noted that during this period the system remained indoors and was not exposed to severe perturbations.

Now assuming that M is circulant and applying (8) to the same data, weget:

Once again, the matrices are diagonally dominant and the coefficients of the subdiagonals are the same. However, the matrices are not symmetric. The assumption most likely to be violated is the one about the physical position of the elements: it is very likely that the elements are not exactly where they are expected to be, making the array not truly circulant, and the impedances of the elements are most likely not all the same.

In figure 5, we see the application of (9) to the data: we updated the weights computed in September 1997 and applied them to the data of March 1998.

As shown by the dashed-dotted curve in figure 5, it was clearly time for a new calibration. The peak sidelobe level is high, at around -8 dB, compared to the results after calibration, when the peak sidelobe level is down at -16 dB (dashed curve). The solid curve is the beam pattern updated with (9). As we can see, the average sidelobe level is lower by 5 dB, but the peak sidelobe level was not improved significantly, i.e. not any more than anticipated.


3 Conclusion

As shown in the first part of this paper, we conclude that temperature is not a critical issue in the use of our system, but drift over time is catastrophic for the beam pattern.

We presented some results for the estimation of the mutual coupling matrix and investigated a quick method to keep track of the calibration.

Tracking the relative phase and gain of the elements and applying corrections to the weights based on the matrix Σ gives a very practical method to estimate and keep track of the calibration. Of course, this technique is only valid as long as the mutual coupling matrix M remains unchanged. The preliminary results show that applying a correction factor to the weights corresponding to the Σ's does not improve the quality of the beam pattern as much as a new full calibration would. Some sidelobes are lowered, but this approach is not good enough to lower the worst of the sidelobes. We must therefore conclude that changes in the mutual coupling characteristics can have a significant impact on the performance of an antenna array.


4 Acknowledgement

The authors are grateful to Ian Timmins and Shiping He for their help in carrying out the measurements, and to Dr. Tim Davidson and Mr. Vytas Kezys for their guidance.

References

[1] John Litva, Vytas Kezys, Jean-Rene Larocque et al., "Smart Antenna Array Calibration", Technical Report No. 357, McMaster University, December 1997. Accepted for presentation at the next AP-S conference.

[2] B. Chong Ng and C. M. Samson See, "Sensor-Array Calibration Using a Maximum-Likelihood Approach", IEEE Transactions on Antennas and Propagation, vol. 44, no. 6, June 1996.


14

Implementing New Technologies for Wireless Networks: Photographic Simulations and Geographic Information Systems

Howard P. Boggess and A. Franklin Wagner, II
Anderson & Associates, Inc.

100 Ardmore Street, Blacksburg, VA [email protected] | [email protected]

Abstract

Anderson & Associates submits this abstract, in the hopes of introducing two separate, but easily

integrated technologies to the realm of wireless networks. Digital Photographic Simulations for site

acquisition add to the agent's arsenal of tools for persuasion, and Geographic Information Systems,

maintaining information on a network of sites, assists the companies in knowing the operation and limits

of their system.

Site acquisition can be critical in locating a site within any network. Working with towns, cities, and

state organizations can be a particularly long and drawn-out process. Given the limited experience of newer jurisdictions, explaining how a site will impact an area or region still relies on the imagination of the people you are trying to influence.

Digital Photographic Simulations allow the agent to override the imagination and display an

accurate image of the site and its impact, clearly showing everyone exactly what is proposed. A

simulation can help a cellular company sell a proposed tower to property owners, or the board of

supervisors. A simulation can also help a municipality further understand the project by showing what

an antenna on a water tower would look like.

These can be created from a simple photograph of the proposed site, which can be scanned to a

digital image. The image of antennas or a tower is then accurately drawn in to represent the site.

Antennas can be displayed on, but not limited to, buildings, water towers, and power lines. Once all

parties understand and see the impact on a site, they are more reasonable to work with, and agreements

are more easily reached.

Once a network of sites has been established, the use of Geographic Information Systems, or GIS,

to organize and maintain that network can be critical. GIS has many applications that will facilitate

this. Cellular companies can be more efficient with the use of GIS, making customers more satisfied with

their product, as it can allow for ease of location and organization of current and proposed sites,

enabling them to geographically record activity from each site. A&A can present a way for a cellular company to use these geographic records to reveal information such as “holes” or “hot spots” within their network. This information could then be interpreted to show the need for more sites or for increased capacity at existing ones.


Introduction

Anderson & Associates has utilized Geographic Information Systems (GIS) and Digital Photographic

Simulations for many years in the field of engineering and economic development. GIS is an invaluable

tool for analyzing and evaluating large geographic areas, and for its ability to combine attribute data with

physical elements such as sewer lines, utility lines and poles, and industrial buildings. GIS is a flexible

tool that may be customized for specific needs, and is easily modified as needs change. Photographic

Simulations are important tools in many areas because proposed construction can be quickly illustrated to

communicate a designer’s or developer’s intent before construction begins. A&A has developed several

methods for utilizing these tools in the field of wireless communications.

Digital Photographic Simulations

It is hardly news that the development and expansion of the wireless communications networks across

the nation and around the world are heavily dependent upon the continued improvements in new analog

and digital broadcast technologies. The siting and construction of towers required to carry the growing

wireless traffic is a logistical issue that is sweeping across metropolitan areas and primary travel corridors

at a rapid pace. These technical, engineering and real estate issues are being tackled with great

enthusiasm and effectiveness. However, the political and aesthetic considerations that are a part of the

equation have traditionally been far less of a consideration. This is rapidly changing as broad opposition

grows to the placement of towers and antennas in highly visible locations.

It takes only a cursory examination of our surroundings to recognize the extant patterns of tower

placement. Transmission antennas function best when located atop the highest point in the landscape.

Over the past half-century, these transmission facilities have tended to cluster in locations that provide the

most effective range and coverage. Once five towers are located in close proximity, a sixth makes little

difference in terms of aesthetic impact. Therefore, in the initial stages of the wireless communications

explosion, there was little objection to the placement of new towers in locations already heavily populated

with television and radio antennas.

As the industry has expanded, however, new antenna towers are sprouting in areas unused to 100’ or

200’ structures rising from the highest, and most visible, peaks. The public backlash was slow to

develop, as the encroachment was initially slow. In the past several years though, the growth has

accelerated to a pace that has coalesced opposition groups that fight the placement of new towers. At the

forefront of the campaign against new towers are neighborhood groups who fight the placement of a

tower within their community, and environmental groups that oppose the siting of towers in forest land,

or within the viewshed of wilderness areas, National Forests, or other unique areas such as the

Appalachian Trail corridor.


Today, the wireless communications industry faces growing public, and increasingly legal, opposition

to the growth of the infrastructure that is critical to the widespread coverage necessary to keep up with

customer demands. Many municipalities and counties have enacted legislation that restricts or limits the

placement of new towers and antennas. Even the largely successful technique of co-location of antennas

on existing structures is often subjected to intense scrutiny.

The use of photographic simulations of proposed tower or antenna installations can play an important

role in assuaging the concerns of surrounding residents, and of zoning and planning administrators. For

example, consider the following scenario:

Mr. Engineer explains the details of a new lattice tower installation on a local hilltop to a

group of concerned citizens. He accurately describes the final project as a 120’ tower that is

surrounded by a dense forest of 80’ high trees that will be undisturbed. Only 40’ of tower will be

visible above the trees, and it is less than 3’ wide at 80’, so it will be fairly unobtrusive.

Mrs. Homeowner considers the description and recognizes the hilltop as the focal point of the

view from her family's patio. The Homeowners spend many happy evenings on the patio, and

she is certain that the tower will forever ruin that pleasure.

Mr. Hotelowner envisions the Eiffel Tower protruding 100’ above the treetops, just a few

hundred yards from his establishment. Like many people, he has a difficult time estimating

measurements. Also, Mr. Engineer explained to Mr. Hotelowner that the topography of his

property would preclude the tower from being seen from the hotel. Looking at the contour lines

on the map, however, did nothing to convince Mr. Hotelowner.

Ms. Naturalist, an avid hiker, recognizes that the site is within view of her favorite trail in the

National Forest. The hilltop is over a mile away, but is quite visible from several prominent

locations on the trail. She has documented that currently, there are no man-made structures

visible for the length of the trail. Mr. Engineer's description of the near invisibility of the forty

visible feet of the tower at a distance of more than one mile carries little weight with Ms.

Naturalist, who is heartily opposed to the introduction of the first such structure.

Maps and descriptions are ultimately of marginal value in communicating the visual impacts of this

construction. None of the players is at fault; it is simply that few people are trained to understand maps and construction drawings, and even fewer are able to communicate their significance verbally. A picture of the proposed installation, however, is universally understood. If Mr.

Engineer provides a photograph of exactly what the tower will look like from Mrs. Homeowner’s patio,

Mr. Hotelowner’s establishment, and from Ms. Naturalist's favorite overlook, he stands a much better

chance of making his case that the tower will have very little visual impact.

Computer-generated photographic simulations are capable of providing just the sort of visual proof

the engineer needs to make his case. The process is straightforward: photographs are taken from the vantage points of concern and the prints are scanned into digital format. The digital images are then

manipulated in one of several ways. If the engineer has photographs or scaled drawings of the proposed

tower, they too can be scanned and then merged onto the photos of the proposed site. Alternatively, the

tower can be “drawn” onto the site photograph using the digital equivalent of artist's tools such as pencils,

paintbrushes and airbrushes. The simulation can then be printed for display and distribution.

The accuracy of such simulations is dependent upon several factors. Most importantly, the engineer

must provide the dimension of some object near the tower installation to insure accurate scaling of the

tower illustration. In many cases, the most convenient objects are the surrounding trees. From a distant

perspective, an estimate of the average height of the trees is sufficient. For views closer than a few

hundred yards, an accurately dimensioned object, such as a survey rod, will ensure an accurate

simulation. Color, too, can have a significant impact on the visibility of a tower, and must be accurately

portrayed in the simulation. The finish color of the tower should be represented accurately. Even

seasonal and atmospheric variations will play some role in the accuracy of the simulation. In general, photographs tend to have a flatter and bluer cast in winter, and a more vibrant appearance in summer. An

overcast day may have appropriate contrast, but reduce the color saturation, while a cloudless summer

day may result in a colorful image, but one that lacks depth or has too much contrast. The color of the

tower finish must be adjusted to account for these variables. Ultimately, even the distortion of the camera

lens may play a factor, particularly when a wide-angle lens is used for a close-up view.
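The scaling step described above reduces to simple arithmetic, shown below with invented numbers drawn from the scenario earlier in the chapter (80' trees, 120' tower): a reference object of known height fixes the pixels-per-foot ratio at the tower location, which then sets the drawn height of the tower in the image.

```python
# Sketch of the scaling arithmetic, with illustrative numbers only: a
# reference object of known height calibrates the image scale, which then
# determines how tall the tower must be drawn.

tree_height_ft = 80.0     # estimated average height of the surrounding trees
tree_height_px = 240.0    # measured height of those trees in the scanned photo

px_per_ft = tree_height_px / tree_height_ft      # 3.0 pixels per foot

tower_height_ft = 120.0
tower_height_px = tower_height_ft * px_per_ft    # drawn height of the tower

# Only the portion above the 80 ft tree line is actually visible.
visible_ft = tower_height_ft - tree_height_ft    # 40 ft
visible_px = visible_ft * px_per_ft              # pixels above the treetops
print(tower_height_px, visible_px)
```

An error in the reference height propagates directly into the drawn tower, which is why an accurately dimensioned object such as a survey rod is preferred for close-up views.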

To present the most accurate representation of the proposed installation, the following procedures

should be considered the minimum effort. The project manager should determine all of the critical views

of the proposed tower, and then select those most important to display as a part of the public relations and

approval effort. Several photographs should be taken from each of the identified positions, preferably

over a period of several days to capture various weather conditions. Representative photographs should

be selected to illustrate not only the view positions, but also different climatic and daylight conditions.

The photographs are next scanned, and color balanced to best represent the actual conditions. The

proposed structure is inserted into each image, and scaled to the appropriate height. At this stage, the

preliminary image should be reviewed carefully by the project engineer, and the client. While it is

usually a simple matter to change tower colors and height, it is most efficient to verify accuracy early,

rather than to alter the image during final composition. Following review, the tower should be blended

into the base photo to complete the simulation, and the final printed document composed.

Tower simulations are generally straightforward and typically require little time to create. As a result,

they are cost effective, and the returns can be substantial where they speed the review and approval

process. Additionally, it is an efficient means to compare alternative tower types and placements.


Geographic Information Systems

According to Environmental Systems Research Institute, the company that produces ARC/INFO and

ArcView, widely considered the premiere GIS packages:

GIS is helping [many types of] utilities worldwide reduce costs and increase productivity. Surveys among utilities indicate that 80 percent of their work requires knowledge of the physical location of customers and equipment. A GIS is designed to manage this kind of information and to make it accessible to people throughout an organization. GIS technology gives...utilities the tools they need to become more customer-focused, more competitive, and better prepared to compete in an ever-evolving marketplace.

GIS technology is used extensively for customer analysis in the retail and service industries. As in these industries, [communications providers] can use GIS to determine the most suitable location for [tower and antenna placement] based on parameters such as demographics, transportation networks, zoning and real estate costs.

Optimal planning of a large-scale wireless development project requires integration of the geographic information governing the engineering, environmental and socioeconomic costs. Use of GIS technology for data management and analysis provides the most effective method for defining these complex geographical relationships and for computing optimal solutions to these multi-variate problems.

Once a network of antenna sites has been established, the use of GIS to organize and maintain that

network can be beneficial. GIS has several applications that will facilitate this organization. Cellular

companies can be more efficient with the use of GIS by maintaining a working database of antennas and

equipment. Among the benefits, the equipment database allows the wireless firm to eliminate downtimes

due to worn out equipment and to plan for the maintenance and expansion of the network. A GIS can

provide a multitude of information, such as available service space. Any useable space available on the

tower could be advertised on the Internet for other cellular or local users to find.

A GIS starts off with a base map of a county, state, or region. This map could show graphically the

location of every tower. Each tower could be selected to show its location, size and owner. Other

significant structures could be included on this map such as water tanks and prominent buildings. GIS

users could zoom into an area to see the exact location of any tower. A link might also be attached to

each tower to display a picture or a site plan of the structure. The map could be overlaid with the

locations of airports and flight patterns in close proximity to towers. Other information could include the

type of tower at the site, the type of antennas and equipment and the date of installation.

Once a company has established their network of sites, they may be interested in leasing space on

their poles or towers to other companies. Although available space can be marketed using traditional

methods such as trade or technical journals, the growing ability of GIS software to utilize the Internet

provides immediate, worldwide distribution of this information. Potential buyers could search for

available antenna space in a given area and contact those companies with available mounting space.


Conclusion

GIS and Simulations will likely become an integral part of the expansion of existing wireless

networks and the development of new ones. These tools will assist firms in avoiding controversies facing

both the communications companies and the municipalities in obtaining new antenna installations, and in

the day-to-day management of the systems. Photographic Simulations are an inexpensive and fast

method for increasing the approval rate for new installations, and for mitigating the objections of

individuals and groups who find proposed sites objectionable on aesthetic grounds. GIS will assist

virtually every corporate department, from accounting to engineering to the maintenance staff, in

accessing and manipulating specific information about the network. As the widespread use of GIS

continues its rapid expansion within various supporting and regulatory organizations, such as electric

companies and state, county and local government bodies, it will become faster and simpler to exchange

information.


15

Envelope PDF in Multipath Fading Channels with Random Number of Paths

and Nonuniform Phase Distributions

ALI ABDI AND MOSTAFA KAVEH

DEPT. OF ELEC. AND COMP. ENG., UNIVERSITY OF MINNESOTA

4-174 EE/CSCI BLDG., 200 UNION ST SE

MINNEAPOLIS, MN 55455, USA

FAX: (612) 625 4583

Email: [email protected] [email protected]

Abstract

In a multipath fading channel the transmitted signal travels through several different paths to the receiver. In each path, the amplitude and phase of the signal vary in a random manner. It is common to consider the number of paths as a large constant and to model the random fluctuations of the phase by the uniform probability density function (PDF). However, these assumptions are not realistic in many cases. In this paper, a general multipath fading channel with a random number of paths (with a negative binomial distribution) and nonuniform phase distributions (with von Mises PDFs) is considered, and it is shown that the envelope fluctuates according to a gamma PDF. It is also shown that the parameters of this gamma PDF are directly related to the physical parameters of the channel. Due to the realistic assumptions made in the derivation, the gamma PDF is a promising candidate for accurate modeling of envelope statistics in multipath fading channels.


1-Introduction

When a signal propagates through a multipath environment, it breaks into several multipath components (or briefly, components), and at the receiver the superposition of these components is observed. In general, the phase of each component, with respect to an arbitrary but fixed reference, depends on its path length. It changes by 2π as its path length changes by a wavelength.

If the dimensions of a cluster of scatterers are much larger than the signal wavelength, there will be large variations among the path lengths, and hence the phases, of the components which are scattered from that cluster. These phases, when reduced modulo 2π, can be reasonably modeled as random variables with uniform distributions on [0, 2π). Based on this assumption, and several other simplifying assumptions, the well-known Rayleigh probability density function (PDF) can be used for the envelope of the superimposed components at the receiver [1]. There are several other Rayleigh-based PDFs for the envelope of multipath signals, like the Suzuki distribution [1], the K distribution [2], and the distribution which arises due to the presence of a limited number of strong components [3]-[6]. In deriving the above PDFs, the assumption of uniformity for the phase distributions plays an important role.

On the other hand, when the signal wavelength is comparable with the dimensions of a cluster of scatterers, the uniform distribution is not suitable for modeling the phases of the components. Thus the derivation of the envelope PDF assuming nonuniform phase distributions is of interest.

To the knowledge of the authors, the effect of nonuniform phase distributions on the envelope PDF has been discussed only in [7] and [8]. Assuming that the number of components (or equivalently, the number of paths) is a large constant, such that the central limit theorem holds for the in-phase and quadrature components of the received signal, formula (4.6-28) in [7] is derived for the envelope PDF. This complicated formula simplifies to formula (4.6-29) in [7] when the phase variable of each wave is distributed symmetrically about its mean value and is uncorrelated with its associated amplitude variable. Unfortunately, even formula (4.6-29) is too involved to be used in practice.

In this paper, a general and mathematically tractable model for multipath fading channels is considered. This model explicitly incorporates the effect of nonuniform phase distributions of the components through the von Mises PDF [9], which is a well-known PDF in communications [10], [11]. By taking into account the randomness of the number of paths via the negative binomial


distribution, the gamma PDF is derived for the envelope. So the main contribution of this paper is the introduction of the gamma PDF for the envelope in multipath fading channels having a random number of paths and nonuniform phase distributions.

2-A general model for multipath fading channels

Consider Q clusters of scatterers, which are distributed arbitrarily in space. In the jth cluster there are Nj scatterers, and it is assumed that Nj is not too small. Each scatterer in the jth cluster reflects the incident wave with an attenuation factor l_js and a phase shift θ_js, 1 ≤ s ≤ Nj. All the θ_js are independent, and for a fixed j, all the θ_js are distributed identically. The PDF of an individual θ_js, when reduced modulo 2π, is assumed to be of von Mises type with mean μj and concentration parameter Kj [9]:

p(θ) = exp[Kj cos(θ − μj)] / [2π I0(Kj)],   0 ≤ θ < 2π        (1)

where I0(.) is the modified Bessel function of order zero. It should be mentioned that Q and all the Nj, l_js, μj and Kj can be either deterministic or random variables.

Among the available phase distributions [9], the von Mises PDF has several attractive features. It plays a prominent role in statistical inference on the circle, where its importance is almost the same as that of the Gaussian distribution on the line. This PDF usually results in mathematically tractable formulas. It can approximate other important phase PDFs quite well, and it also contains two important PDFs as special cases: the uniform PDF on [0, 2π) for Kj = 0, and an impulse at μj for Kj → ∞. Note that no restriction is imposed on the PDFs of Q, l_js, μj and Kj, except that Nj should not take small values (as becomes clear later).
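These two limiting special cases are easy to verify numerically. The sketch below (Python, using only NumPy; the values of μ and K are illustrative, not from the paper) evaluates the von Mises density of (1) and checks that it integrates to one over a full period and reduces to the uniform PDF 1/(2π) when K = 0.

```python
import numpy as np

def von_mises_pdf(theta, mu, kappa):
    # p(theta) = exp(kappa*cos(theta - mu)) / (2*pi*I0(kappa));
    # np.i0 is the modified Bessel function of the first kind, order zero.
    return np.exp(kappa * np.cos(theta - mu)) / (2.0 * np.pi * np.i0(kappa))

theta = np.linspace(0.0, 2.0 * np.pi, 20001)

for kappa in (0.0, 1.0, 8.0):
    p = von_mises_pdf(theta, np.pi, kappa)
    # trapezoidal rule: a proper density must integrate to 1 over [0, 2*pi)
    area = np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(theta))
    print(kappa, round(float(area), 6))

# K = 0 gives the uniform PDF; large K concentrates the mass near mu
uniform = von_mises_pdf(theta, np.pi, 0.0)
print(bool(np.allclose(uniform, 1.0 / (2.0 * np.pi))))
```

As K grows, the density collapses toward an impulse at μ, which is the coherent-phase limit used later in the interpretation of the gamma parameters.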

3-Envelope PDF conditioned on Q, N and L

If the line-of-sight wave has unit amplitude and zero phase (a unit-length, zero-angle vector), then the reflected wave from the sth scatterer in the jth cluster can be considered as a vector with length l_js and angle θ_js. By a simple extension of formula (4.6.5) in [9], the envelope PDF for the underlying channel model, conditioned on Q, N and L, can be expressed by an exact formula, an integral involving the Bessel function J0(.).


Here R is the univariate random variable of the envelope and J0(.) is the Bessel function of order zero. Moreover, N and L are Q-element vectors, with Nj and lj as their jth elements.

4-The gamma envelope PDF

In order to obtain a simple and closed form PDF for the envelope, suppose Q is deterministic and let Q = 1. Moreover, let the attenuation factors take the constant value l. By a simple scaling, formula (4.5.6) in [9] then gives the envelope PDF conditioned on N, which we refer to as (4). Without loss of generality, suppose l = 1. Then, if N is large enough [9], the mean and variance of R/N are given by c and σ²/N respectively, where c = I1(K)/I0(K), I1(.) is the modified Bessel function of order one, and σ² = 1 − c² − c/K. As K goes from 0 to ∞, σ² decreases from 0.5 to 0. Now we approximate R/N by a Gaussian random variable with the same mean and variance. A look at Fig. 1 reveals that this Gaussian approximation is reasonable when N is large, and its accuracy improves as N increases. For a larger K, a smaller N is required to get the same quality of approximation. If we normalize by N̄, the mean of N, then based on that Gaussian approximation the characteristic function of R/N̄ conditioned on N can be written as:

Φ(ω | N) = exp[ jcMω − (σ²M/(2N̄)) ω² ]        (5)

In the above formula, the new random variable M is defined by M = N/N̄.

Among the known discrete distributions, the Poisson distribution is the most popular one with attractive analytic properties. It has already been used in modeling scattering phenomena where the number of scatterers is random [12]-[14]. However, the negative binomial distribution has


become increasingly popular as a more flexible alternative to the Poisson distribution, especially when it is doubtful whether the strict requirements, particularly independence, for a Poisson distribution are satisfied [15]. In fact, the Poisson distribution is a limiting form of the negative binomial distribution [15]. From another point of view, the negative binomial distribution may be used to model a variable-mean Poisson distribution [16]. Based on this evidence, we assume that N has a negative binomial distribution (it should be mentioned that the widely used K PDF is also obtained by assuming a negative binomial distribution for the number of scatterers [17]-[19]):

Pr{N = n} = [Γ(n + α) / (n! Γ(α))] [N̄/(N̄ + α)]^n [α/(N̄ + α)]^α,   n = 0, 1, 2, ...        (6)

where Γ(.) is the gamma function and the parameter α is a constant such that α > 0. For large N̄, the random variable M will have a gamma PDF [19]:

p_M(m) = [α^α / Γ(α)] m^(α−1) exp(−αm),   m ≥ 0        (7)

By taking the expectation of (5) with respect to M, using the PDF in (7), and according to [20], we get:

Φ(ω) = [1 − (jcω − σ²ω²/(2N̄))/α]^(−α)        (8)

Now consider the limiting case in which N̄ → ∞ while α remains constant (a similar limiting process has been used in [19] to derive the K PDF). Therefore (8) simplifies to:

Φ(ω) = (1 − jω/β)^(−α)        (9)

where β is a constant such that β = α/c. Formula (9) is the characteristic function of a gamma random variable [21]. Hence the PDF of the measured signal envelope can be expressed as:

p(r) = [β^α / Γ(α)] r^(α−1) exp(−βr),   r ≥ 0        (10)

5-Discussion and conclusion

Due to way we have built the channel model, the parameters and of the proposed

envelope PDF in (10) have special physical meanings:


Interpretation of α: According to (7) we have var(M) = 1/α. So a large α indicates that the variation of the number of paths around N̄ is small.

Interpretation of β: We have already observed that β = α/c. Since c increases as K increases, we conclude that β is a decreasing function of K. So for a fixed α, a small β represents a large K, which means that the multipath components at the receiver are approximately coherent.
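The negative binomial/gamma limit behind these interpretations can be probed by simulation. The sketch below (Python with NumPy; the values α = 4 and N̄ = 200 are illustrative, not from the paper) draws N with mean N̄ and shape α, and checks that M = N/N̄ has mean 1 and variance close to the 1/α required by (7). NumPy's negative_binomial(n, p) has mean n(1 − p)/p, so choosing p = α/(α + N̄) gives mean N̄ and variance N̄ + N̄²/α.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, nbar = 4.0, 200.0       # illustrative shape and mean number of paths
p = alpha / (alpha + nbar)     # negative_binomial(alpha, p) then has mean nbar

n = rng.negative_binomial(alpha, p, size=200_000)
m = n / nbar                   # the normalized variable M = N / Nbar

# Exact moments: E[M] = 1 and var(M) = 1/alpha + 1/nbar -> 1/alpha as nbar grows
print(round(float(m.mean()), 3), round(float(m.var()), 3), 1.0 / alpha + 1.0 / nbar)
```

For N̄ an order of magnitude larger, the sample variance settles at 1/α, the value used in the limiting argument and in the interpretation of α above.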

Although not shown here explicitly, nonuniformity of the phase distributions transforms the in-phase and quadrature components of the received signal into two correlated Gaussian random variables with different means and variances when N takes large values. (Note that for the simple case of the Rayleigh envelope PDF, the in-phase and quadrature components are two independent Gaussian random variables with zero means and the same variances when N takes large values.) As mentioned earlier, the exact PDF of the signal envelope for this case is reported in (4.6-28) of [7], in the form of an infinite series whose terms contain products of Bessel functions (of course, for large N, (4) must tend to (4.6-28)). In this contribution we have accurately approximated (4.6-28) by a Gaussian PDF. However, it seems that (4.6-28) can be approximated even more accurately by a PDF which is not as simple as the Gaussian PDF but whose complexity is still tolerable (this is under study).
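The large-N behavior used throughout the derivation can also be checked by direct simulation of the phasor sum. The sketch below (Python with NumPy; κ, the number of paths and the trial count are illustrative) draws N unit-amplitude components with von Mises phases, forms the envelope R, and compares the sample mean of R/N with c = I1(κ)/I0(κ), here obtained by numerical integration so that no special-function library is needed.

```python
import numpy as np

rng = np.random.default_rng(1)
kappa, n_paths, trials = 2.0, 400, 4000   # illustrative values

# envelope of the sum of n_paths unit phasors with von Mises phases
theta = rng.vonmises(0.0, kappa, size=(trials, n_paths))
r = np.abs(np.exp(1j * theta).sum(axis=1))

def trap(y, x):
    # plain trapezoidal rule, to stay independent of the NumPy version
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# c = I1(kappa)/I0(kappa) from the defining integrals of the Bessel functions
t = np.linspace(-np.pi, np.pi, 20001)
w = np.exp(kappa * np.cos(t))
c = trap(np.cos(t) * w, t) / trap(w, t)

print(round(float(r.mean()) / n_paths, 4), round(c, 4))
```

With these settings the sample mean of R/N agrees with c to within the Monte Carlo error, consistent with the Gaussian approximation used to obtain (5).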

References

[1] H. Hashemi, “The indoor radio propagation channel,” Proc. IEEE, vol. 81, pp. 943-968,

1993.

[2] K. D. Ward, “Application of the K distribution to radar clutter - A review,” in Proc. IEICE Int. Symp. Noise and Clutter Rejection in Radars and Imaging Sensors, Kyoto, Japan, 1989, pp. 15-20.

[3] A. Abdi and S. Nader-Esfahani, “Non-Rayleigh envelope PDF in multipath fading channels - General remarks and polynomial expansion,” in preparation.

[4] A. Abdi, H. Hashemi, and S. Nader-Esfahani, “On the PDF of the sum of random vectors,” submitted to IEEE Trans. Commun., June 1997.

[5] A. Abdi and S. Nader-Esfahani, “A general PDF for the signal envelope in multipath

fading channels using Laguerre polynomials,” in Proc. IEEE Vehic. Technol. Conf.,

Atlanta, GA, 1996, pp. 1428-1432.

[6] A. Abdi and S. Nader-Esfahani, “An optimum Laguerre expansion for the envelope

PDF of two sine waves in Gaussian noise,” in Proc. IEEE Southeastcon Conf., Tampa,

FL, 1996, pp. 160-163.


[7] P. Beckmann, Probability in Communication Engineering. New York: Harcourt, Brace

& World, 1967.

[8] P. Beckmann and A. Spizzichino, The Scattering of Electromagnetic Waves From

Rough Surfaces, 2nd ed., Boston, MA: Artech House, 1987.

[9] K. V. Mardia, Statistics of Directional Data. London: Academic, 1972.

[10] A. J. Viterbi, “Optimum detection and signal selection for partially coherent binary

communication,” IEEE Trans. Inform. Theory, vol. 11, pp. 239-246, 1965.

[11] H. Leib and S. Pasupathy, “The phase of a vector perturbed by Gaussian noise and

differentially coherent receivers”, IEEE Trans. Inform. Theory, vol. 34, pp. 1491-1501,

1988.

[12] A. A. Giordano and F. Haber, “Modeling of atmospheric noise,” Radio Sci., vol. 7, pp.

1011-1023, 1972.

[13] P. N. Pusey, D. W. Schaefer, and D. E. Koppel, “Single-interval statistics of light

scattered by identical independent scatterers,” J. Phys. A: Math., Nucl. Gen., vol. 7, pp.

530-540, 1974.

[14] D. Middleton, “Canonical non-Gaussian noise models: Their implications for

measurements and for prediction of receiver performance,” IEEE Trans. Electromagn.

Compat., vol. 21, pp. 209-220, 1979.

[15] N. L. Johnson, S. Kotz, and A. W. Kemp, Univariate Discrete Distributions, 2nd ed.,

New York: Wiley, 1992.

[16] T. Azzarelli, “General class of non-Gaussian coherent clutter models,” IEE Proc.

Radar, Sonar, Navig., vol. 142, pp. 61-70, 1995.

[17] E. Jakeman and P. N. Pusey, “Significance of K distribution in scattering experiments,”

Phys. Rev. Lett., vol. 40, pp. 546-550, 1978.

[18] E. Jakeman, “On the statistics of K-distributed noise,” J. Phys. A: Math. Gen., vol. 13,

pp. 31-48, 1980.

[19] S. H. Yueh, J. A. Kong, J. K. Jao, R. T. Shin, H. A. Zebker, and T. Le Toan, “K-distribution and multi-frequency polarimetric terrain radar clutter,” J. Electro. Waves Applic., vol. 5, pp. 1-15, 1991.

[20] I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series, and Products, 5th ed., A. Jeffrey, Ed., San Diego, CA: Academic, 1994.

[21] A. Papoulis, Probability, Random Variables, and Stochastic Processes, 3rd ed.,

Singapore: McGraw-Hill, 1991.


16

Radio Port Spacing in Low Tier Wireless Systems

Hung-Yao Yeh
Department of Engineering and Public Policy
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]

Alex Hills
Vice Provost and Chief Information Officer
Carnegie Mellon University
Pittsburgh, Pennsylvania 15213
[email protected]

Abstract

The capacity of a low tier wireless system is dependent on the distance between adjacent radio ports (RPs). Maximum port spacing, determined by an RP’s maximum coverage area, is important where population density is low. Minimum port spacing is important where population density is high, as is the case in congested urban areas. Minimum port spacing imposes a significant capacity constraint and has a direct impact on the amount of spectrum needed in a low tier wireless system.

This paper develops estimates for maximum and minimum port spacing. These estimates

can be used to lay out such systems and to estimate the amount of spectrum that will be

required to serve a given population density. We consider both high rise and low rise

environments and the use of both directional and omnidirectional antennas.

We assume that the carrier to interference ratio (C/I) must be held above certain levels in

fixed and mobile service. A propagation model is used to estimate the C/I ratio as a

function of RP spacing and other variables. This allows us to estimate which RP spacings

will provide acceptable performance and which will not. The results of this approach are

estimates of maximum and minimum RP spacing in a variety of environments.


1. Introduction

The layout of a low tier wireless local loop (WLL) system is critically dependent on the distance between adjacent radio ports (RPs). Maximum port spacing, determined by an RP’s maximum coverage area, is important where population density is low. Minimum port spacing, on the other hand, is important where population density is high, as in congested urban areas. High population densities can be served by reducing port spacing and/or by increasing the number of channels in each RP. The former approach is likely to increase the number of RPs and thus may increase the required investment. The latter approach, on the other hand, requires a higher spectrum allocation.

In an individual low tier design, local signal strength measurements can help to determine minimum port spacing. Previous experimental results also help us develop estimates that can be used in this study. In the layout of a wireless system, the carrier to interference ratio (C/I) must be held above a certain level. To maintain this minimum C/I, a system must be laid out such that subscriber sets always receive signals above a certain C/I threshold, so that co-channel interference is prevented. We introduce a propagation model and develop equations to estimate C/I in each of four network layout options. Each of the options comprises an environment (high rise or low rise) and an RP antenna type (omni-directional or directional). Thus, the network layout options are: high rise/omni-directional antennas, high rise/directional antennas, low rise/omni-directional antennas, and low rise/directional antennas. We discuss C/I ratios appropriate for use in low tier systems and draw conclusions about maximum and minimum port spacing based on network layout and other assumptions. The layouts used throughout this paper are those shown in the Appendix.

2. Radio Propagation Model

A line-of-sight (LOS) path loss model has been reported by Milstein et al. (1992) and Erceg et al. (1992). This two-segment model estimates the path loss between a base antenna and a mobile antenna:

L = Lb + 22.5 log10(d/Rb),   d ≤ Rb
L = Lb + 40 log10(d/Rb),    d > Rb        (1)

where L = path loss in dB
d = distance from the base antenna to the mobile antenna
Rb is the break point and is given by:

Rb = 4 hb hs / λ        (2)

where hb = base antenna height
hs = subscriber set antenna height
λ = wavelength

Lb is the path loss at the break point and, taking it to equal the free space loss at Rb, is given by:

Lb = 20 log10(4π Rb / λ)        (3)

The coefficient 22.5 in Equation 1 corresponds to an attenuation exponent of 2.25, or near

free space attenuation. The coefficient 40 corresponds to an attenuation exponent of 4.

These values are typical of those reported elsewhere.
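A minimal sketch of the two-segment model as reconstructed above (Python; the break-point loss is assumed to equal the free-space value 20 log10(4πRb/λ), and the antenna heights below are hypothetical rather than the PACS values of Table 1):

```python
import math

def break_point_m(hb, hs, lam):
    """Break point Rb = 4*hb*hs/lambda (Equation 2), all lengths in meters."""
    return 4.0 * hb * hs / lam

def path_loss_db(d, hb, hs, lam):
    """Two-segment LOS path loss of Equation (1):
    22.5 dB/decade below the break point, 40 dB/decade beyond it."""
    rb = break_point_m(hb, hs, lam)
    lb = 20.0 * math.log10(4.0 * math.pi * rb / lam)  # assumed break-point loss
    slope = 22.5 if d <= rb else 40.0
    return lb + slope * math.log10(d / rb)

lam = 3.0e8 / 1.9e9            # 1.9 GHz PCS band
hb, hs = 12.0, 5.0             # hypothetical RP and SU antenna heights
rb = break_point_m(hb, hs, lam)
print(round(rb, 1), round(path_loss_db(rb / 2.0, hb, hs, lam), 1))
```

The two segments meet continuously at d = Rb, and the slopes reproduce the 22.5 dB/decade and 40 dB/decade rates just quoted.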

To determine C/I, it is necessary to estimate the average received signal power at a subscriber unit (SU) from the desired RP and from the co-channel interfering RPs. The average received signal power depends on path loss, SU antenna gain, RP antenna gain, and transceiver transmitting power. The average received signal power, S, in dBm or dBW, is given by:

S = Pt + Gb + Gs − L        (4)

where Pt = transmitter power in dBm or dBW
Gb = base antenna gain in dBi
Gs = subscriber antenna gain in dBi


Note that fast fading is not considered in the calculation of the average received signal power or of the co-channel interference power. Therefore, Equation 4 is a first order calculation for a line of sight signal. It does not consider path loss variation, multi-path, or scattering caused by various terrain features, buildings, etc.

The calculation of both the average received signal power and the co-channel interference power assumes that the mobile unit is at the corner of a cell, as shown in Figure 1. Thus, the distance d will be the largest distance between the desired RP and the SU, and it will produce the smallest C/I. This calculation considers the distance between each co-channel interfering RP in the first tier and the SU.


The low tier standard for the United States is Personal Access Communications System

(PACS). Table 1 shows some technical specifications of PACS which we use to calculate

radio link budgets. The antenna gain and RP antenna height are obtained from publications

of PACS equipment vendors about their products. The SU antenna height is assumed to be

no greater than 5 meters in low rise areas and higher than 10 meters in high rise areas. The

actual antenna heights will vary with building heights and subscriber location. Although the

actual antenna height for fixed units in high rise areas may be much higher than 10 meters,

this assumption will not impact the design rules this study attempts to establish. Estimates of C/I are obtained by dividing the power of the radio signal received at an SU from the desired RP by the power of the signals coming from the co-channel interferers, as shown in Equation (5):

C/I = 10 log10( Sc / Σi Ii )        (5)

where Sc is the received carrier power and Ii is the power received from the ith co-channel interferer, both in linear units (e.g. mW).

To estimate the signal strength and path loss, we must first estimate the distance between the SU and the desired and interfering RPs. We allow d, the distance between the SU and the desired RP, to vary from 0 to Rb. In this study, the signal power received at the SU is

1 Information from PACS brochure by Hughes Network Systems.
2 A 24"x6.3"x2" 90° directional antenna suitable for the 1.9 GHz PCS spectrum has an antenna gain of 14.1 dBi. Specification from Allen Telecom Group.


calculated with respect to the break point distance Rb in Equation (1). Therefore, we measure the distance between the RP and the SU by the ratio d/Rb. Since SUs may have different antenna heights, one may ask if that will produce different results for the C/I ratio. Fortunately, as long as d/Rb remains the same, the calculated C/I is independent of the antenna height of the SUs. This is because a change in antenna height shifts the break point, and hence the path loss of the desired and interfering links, by the same amount. As long as d/Rb is the same, the increases in the signal power received from the desired RP and from the interfering RPs due to an increase of antenna height will offset each other. Therefore, as long as the ratio d/Rb is maintained, the SU antenna height will not affect the calculated C/I. In other words, the design rules developed in this study will be adequate both for fixed service with higher SU antennas and for mobile service networks with lower SU antenna heights.
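This height-independence argument can be confirmed numerically. In the sketch below (Python; the relative geometry is the low rise/omni-directional case worked out in Section 5, and the break-point loss is the assumed free-space value of the reconstruction above), C/I computed from relative distances d/Rb is identical for two very different hypothetical SU heights, because the break-point loss term is common to every link and cancels in the ratio.

```python
import math

def ci_db(x_carrier, x_interferers, hb, hs, lam):
    """C/I (dB) from distances expressed relative to the break point Rb."""
    rb = 4.0 * hb * hs / lam
    lb = 20.0 * math.log10(4.0 * math.pi * rb / lam)  # common to every link
    def loss(x):  # two-segment model in relative distance x = d/Rb
        return lb + (22.5 if x <= 1.0 else 40.0) * math.log10(x)
    carrier = 10.0 ** (-loss(x_carrier) / 10.0)
    interf = sum(10.0 ** (-loss(x) / 10.0) for x in x_interferers)
    return 10.0 * math.log10(carrier / interf)

geom = (0.5, [2.5, 2.5, 3.2, 3.2])   # d = Rb/2 and the four first-tier RPs
low = ci_db(*geom, hb=12.0, hs=2.0, lam=0.158)
high = ci_db(*geom, hb=12.0, hs=10.0, lam=0.158)
print(round(low, 1), round(low - high, 6))   # identical C/I for both heights
```

The common value also agrees with the 18.3 dB worked example of Section 5, since the equal antenna gains on the desired and interfering links cancel there as well.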

3. Network Layout

High Rise/Omni-Directional Antennas

The propagation model described in the previous section is based on line-of-sight (LOS) propagation. On the other hand, when an SU is “around the corner,” i.e., on a non-LOS street, the SU experiences a dramatic drop in received signal strength. Thus, closely spaced RP placement is possible in a high rise urban environment. However, it is true that real city skylines are uneven, and some lots are empty or have low structures.

The assumptions made will affect the frequency assignment plan for RPs. For instance, if an empty block is present, designers might replace the 4 RPs around this lot with a single RP which uses a block of spectrum assigned for such situations. For high rise areas with a few scattered low buildings and empty lots, twice the spectrum needed in a pure high rise environment may be adequate.3 The same technique can be used when other serious interference occurs due to uneven terrain and surrounding structures. On the other hand, if too many low structures or empty lots appear in a region, the designer should follow the rules for low rise areas.

3 This may also be useful when certain physical structures cause serious interference due to multiple reflections of the radio signal or when electromagnetic waves are emitted by other sources. An additional set of spectrum allows covering those empty lots and surrounding buildings. Reuse of this spectrum is also possible when low buildings or empty lots are separated by high rise buildings.


For an RP located mid-block on a north-south street, the two main sources of interference are the two co-channel RPs in the north and south directions. Because of the 20 dB drop per corner in non-LOS propagation,4 other co-channel RPs have minimal impact on the C/I ratio. To produce numerical results, interference coming from two line of sight and two close-by non-LOS co-channel RPs is considered.

The distance between an SU and the desired RP, relative to Rb, is d/Rb; the relative distances of the two line of sight and the two non-LOS co-channel RPs follow from the layout shown in the Appendix. Note that signals from the non-LOS RPs have to turn two corners before they reach the SU.

High Rise/Directional Antennas

When directional antennas are aimed along the length of streets, it is clear that an SU will receive significant interference only from the co-channel RPs located behind the desired RP. Scenarios are examined, again, based on the relative distance d/Rb. Ten RPs are assumed to contribute to the co-channel interference. Figure 2 shows one of the possible scenarios, in which 11 co-channel RPs are lined up within the break point distance Rb.

Again, a worst case scenario is assumed, with an SU located at the corner of an RP coverage area. We assume that all equipment specifications such as transmitter power, antenna gains, and antenna heights are unchanged, and, therefore, that the break point distance and the path loss at the break point are the same as those of omni-directional antennas. The relative distances between the interfering co-channel RPs and the SU follow from the layout shown in the Appendix.

4 The average 20 dB drop of the radio signal per corner is based on experimental observations. See Andersen et al. (1995).


Low Rise/Omni-Directional Antennas

Since low rise areas do not provide building blockage as high rise areas do, a 4x4 RP

placement pattern is used to ensure proper separation between co-channel RPs. Usingomni-directional antennas, the estimated C/I of an SU located at the corner of a cell is equal

to the received signal power from the desired RP divided by the sum of received signalpower from 4 surrounding RPs in the first tier as shown in Figure 3.

The relative distance from the RP to the SU is d/Rb. Likewise, the relative distance from the

antennas of 4 surrounding RPs to the SU antenna are

respectively.


Low Rise/Directional Antennas

Because of the use of directional antennas, an SU will receive significant interference from

only three co-channel RPs. Note that the directional antennas may have a slight drop in

antenna gain at the edge of their primary coverage areas. We assume here, however, thatthis drop is negligible. Therefore, in our model, antenna gains remain constant within each

coverage area.

Recall that the worst case situation occurs when an SU is located at the corner of its RP’s

coverage area. We assume that all equipment specifications such as transmitter power andantenna heights are unchanged, and therefore, that the break point distance and the path

loss at the break point are the same as that of omni-directional antennas. The onlydifference is that directional antennas have a gain of about 14 dBi, and the energy isconfined to a certain sector. Further, the gain of directional antennas is assumed to be thesame for all directions within each 90° coverage area.

With the SU antenna at a relative distance from the desired RP, the 3 co-channel RPs

are at relative distances and 5 from the SU.

4. Carrier-to-interference Ratio

The estimated C/I for both omni-directional and directional antennas is now examined. In the layout of a wireless system, C/I must be held above a certain level to prevent objectionable co-channel interference.

The appropriate C/I for WLL varies from system to system. 18 dB is considered acceptable for an AMPS (Advanced Mobile Phone Service) system, which uses FM modulation and 30 kHz channels (Lee, 1989). Other measures, Eb/N0 (bit energy-to-noise density ratio) (Rappaport, 1996) and Eb/I0 (bit energy-to-interference ratio) (Garg and Wilkes, 1996), have been used as design constraints for digital wireless communications. The required Eb/N0 or Eb/I0 depends on the modulation and coding schemes. For typical digital voice transmission, the Eb/N0 required for a given bit error rate can be as high as 63 (18 dB) if no coding is used and as low as 5 (7 dB) for a system using a powerful coding scheme (Garg and Wilkes, 1996). The design C/I for a mobile radio connection over a 30 kHz channel has been


reported as 18 dB, whereas the design C/I for fixed-to-fixed communication over a 30 kHz channel is assumed to be 14 dB, because the fixed service is expected to experience less fading than mobile service (Garg and Sneed, 1996).

In an individual low tier design, signal strength measurements can help to determine minimum port spacing. Here, however, we examine minimum port spacing based on a first order estimate of C/I. We assume that C/I ratios of 18 dB and 14 dB are necessary for adequate voice and moderate rate data transmission for mobile and fixed service, respectively.

5. Illustrations and Results

To illustrate how the calculations are carried out using the propagation model developed, we show an example for a low rise environment using omni-directional antennas in which the distance from the base antenna to the SU antenna, d, is Rb/2. When 1/2 is substituted for d/Rb, the relative distances from the base antennas of the 4 surrounding RPs to the SU antenna become 2.5, 2.5, 3.2, and 3.2, respectively.

Using Equation (1), the distance of (1/2)Rb produces a path loss L of 95.3 dB. Similarly, the path losses of the four surrounding RPs are 118.0, 118.0, 122.3, and 122.2 dB, respectively. Using Equation (4), the average received signal power S from each RP at the SU antenna can be estimated as 10 log(800) + 6 + 1 − L. These received signal powers are converted from dBm back into mW. The signal strength of the carrier is 1.1658E-06 mW, and the signal strengths from the interfering RPs are 6.2741E-09, 6.2741E-09, 2.3327E-09, and 2.3327E-09 mW, respectively.

Using Equation (5), C/I is estimated to be 10 log{1165.8/(6.2741 + 6.2741 + 2.3327 + 2.3327)} = 18.3 dB.
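The arithmetic of this example can be reproduced directly from the path losses quoted above (Python; the 800 mW transmit power and the 6 dBi and 1 dBi antenna gains are those used in the example):

```python
import math

L_CARRIER = 95.3                         # path loss at d = Rb/2, dB
L_INTERF = [118.0, 118.0, 122.3, 122.2]  # first-tier co-channel path losses, dB
PT = 10.0 * math.log10(800.0)            # 800 mW transmitter power in dBm
GB, GS = 6.0, 1.0                        # base and subscriber antenna gains, dBi

def received_mw(loss_db):
    """Equation (4), converted from dBm back into mW."""
    return 10.0 ** ((PT + GB + GS - loss_db) / 10.0)

ci = 10.0 * math.log10(received_mw(L_CARRIER) /
                       sum(received_mw(l) for l in L_INTERF))
print(round(ci, 1))   # -> 18.3
```

Note that the gains GB and GS are common to the carrier and interference terms, so they cancel: the estimate depends only on the path loss differences.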

All the C/I ratios are calculated using the same methodology, allowing the ratio d/Rb to vary. A previous study plotted the signal to interference and noise ratio vs. cell radius using a similar set of equations (Hills et al., 1994). Since we are concerned here only with microcells, we have not considered noise. The general properties of the C/I curves (neglecting noise) are illustrated in Figure 4.


There are two special conditions which cause C/I to remain constant and flatten out as in Figure 4. The first is when the distances between the SU and all RPs (desired and first tier co-channel interferers) are smaller than Rb. (We say that p is the maximum value of d for which this is true.) The second is when the distances between the SU and all RPs (desired and first tier co-channel interferers) are greater than Rb. (The second flat region begins at the minimum value of d for which this is true.)

These two cases both cause the numerator and denominator of C/I to have the same path loss exponent (2.25 or 4). When this happens, C/I will remain constant and can be approximated by Equation (6). In this study, we are only interested in cases where d is no larger than Rb. This is because we consider the area within the break point distance to be the maximum area over which an RP is capable of providing reliable service. This is explained in Section 6.


C/I = 10 log10[ d^(−n) / Σi di^(−n) ]        (6)

where d is the farthest distance between the RP and the SU (from the RP to the farthest corner), di is the distance from the ith interferer to the SU, and n is the common attenuation exponent.

Between the point d = p and the point at which all of the interfering RPs lie beyond the break point, there are cases in which some of the interfering RPs are within Rb and some are not. We are interested in the specific distances d for which the C/I ratio can be kept above certain levels (14 dB for fixed service, and 18 dB for mobile service).
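Equation (6), as reconstructed here, gives the two plateau levels of Figure 4 directly. The short sketch below (Python) evaluates it for the low rise/omni-directional geometry; the interferer distances of 5d and 6.4d are an assumption of this sketch, consistent with the relative distances 2.5 and 3.2 used in the worked example at d = Rb/2.

```python
import math

def ci_flat_db(rel_interferer_dist, n):
    """Equation (6): C/I when the desired and all interfering links share the
    same attenuation exponent n. Distances are in units of d, so d itself
    cancels and C/I is constant."""
    return 10.0 * math.log10(1.0 / sum(a ** (-n) for a in rel_interferer_dist))

# all links beyond the break point: exponent 4
far_plateau = ci_flat_db([5.0, 5.0, 6.4, 6.4], 4.0)
# all links inside the break point: exponent 2.25
near_plateau = ci_flat_db([5.0, 5.0, 6.4, 6.4], 2.25)
print(round(far_plateau, 1), round(near_plateau, 1))
```

Both plateau values are independent of d, which is why the C/I curve flattens at both ends; the low plateau at small d is what creates the minimum port spacing constraint in low rise areas.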

Figure 5 shows the results of calculating many sample points using the methodology described above, and gives the C/I ratio as a function of d/Rb. Two horizontal lines are drawn to represent C/I ratios of 18 and 14 dB. The results are as expected. Note that in


the case of high rise/omni-directional antennas, the value of p is less than 0.05, the smallest value shown. Even for very small distances, however, C/I remains greater than 18 dB.

From Figure 5 it is clear that high rise areas always produce a C/I better than 18 dB. This

is because of the shielding provided by high rise buildings and because the use of

directional antennas limits the number of co-channel RPs causing significant interference.

Thus, there is no minimum port spacing constraint for high rise environments using the

configurations proposed here.

On the other hand, there are minimum acceptable values of d in low rise areas. When omni-directional antennas are used, the corresponding distances give C/I ratios of 14.4 dB and 18.3 dB, respectively. For low rise areas using directional antennas, the horizontal lines at 14 dB and 18 dB intersect the curve at two normalized distances, the larger being 0.95. Minimum acceptable values of d and the corresponding port spacing distances are summarized in Table 2.


Table 2. Minimum values of d and corresponding minimum port spacings in low tier wireless systems in low rise areas. The dashed lines represent propagation loss and break point distance while the dashed squares show the coverage of an RP. (Note that the port spacing is twice d for omni-directional antennas, and equal to d for directional antennas.)

6. Maximum and Minimum Port Spacing

As discussed in Section 1, the line of sight (LOS) path loss increases at a rate of 22.5 dB/decade for points close to the RP and at a rate of 40 dB/decade for points farther from the RP. The former applies at distances smaller than the break point, Rb, and the latter at distances greater than Rb. The value of the break point, Rb, can be approximated by a simple function of wavelength and antenna heights as shown in Equation (2).

Others have shown that break point measurements can be used to predict the coverage area of a cell (Xia et al, 1992; Andersen et al, 1995). This is done by assuring that areas beyond an RP’s break point are covered by other RPs. We propose using the break point


distance, Rb, as the maximum cell radius of an RP. This design rule will generally ensure that the strength of the signal received from the desired RP is adequate. If an omni-directional antenna is used, twice the break point distance can be treated as the maximum port spacing to maintain adequate coverage. Similarly, the break point distance can be used as the maximum port spacing when directional antennas are used, assuming that the directional antennas face the same direction. Using the assumptions in Table 1, with hb = 8 m and hs = 5 m, and operating at 1.9 GHz, Equation (3) gives the calculated path loss at the break point.

From Equation (2), the break point Rb = 1013.3 m. Therefore, the maximum port spacing

will be 2027 meters (twice Rb) for low rise areas. This value is consistent with numbers

reported elsewhere for possible RP covered range for highways and residential areas of a

low tier system (Cox, 1996). The maximum port spacing will be even larger in high rise

areas because of higher SU antenna height.

A similar set of calculations for high rise areas (where we assume a subscriber antenna

serving an entire building at a height of 20 meters) gives a maximum port spacing of 4053

meters (Rb) for directional antennas and 8107 meters (twice Rb) for omni-directional

antennas.
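These numbers can be reproduced with the standard two-ray break-point approximation Rb = 4·hb·hs/λ. Equation (2) itself is not reproduced in this excerpt, so the formula below is an assumption, but it matches the paper's Rb = 1013.3 m for hb = 8 m, hs = 5 m at 1.9 GHz:

```python
C = 3.0e8  # speed of light, m/s

def break_point_m(hb_m, hs_m, freq_hz):
    """Two-ray break-point distance Rb = 4*hb*hs/lambda (assumed form
    of Equation (2)); hb, hs are base and subscriber antenna heights."""
    wavelength_m = C / freq_hz
    return 4 * hb_m * hs_m / wavelength_m

rb_low = break_point_m(8, 5, 1.9e9)     # low rise:  ~1013.3 m
rb_high = break_point_m(8, 20, 1.9e9)   # high rise: ~4053 m

# Maximum port spacing: Rb for co-directed directional antennas,
# twice Rb for omni-directional antennas, per the rules above.
max_spacing = {
    "low rise, omni": 2 * rb_low,       # ~2027 m
    "high rise, directional": rb_high,  # ~4053 m
    "high rise, omni": 2 * rb_high,     # ~8107 m
}
```

The same function reproduces the high-rise figures of 4053 m and 8107 m simply by raising the subscriber antenna height to 20 m.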

Two co-channel RPs spaced too closely may cause a lower than acceptable C/I. Using

Equation (2), the break point distance is 1013.3 m for SU antennas 5 meters high in low rise

areas. Table 3 presents the minimum port spacing (distance between adjacent ports)

required based on this break point distance and the rules given in Table 2.


Clearly, there are limits to one's ability to reduce the port spacing in order to achieve extra capacity from the network. This limitation on the minimum port spacing imposes a significant design constraint and has a direct and immediate impact on the spectrum requirement for low tier WLL service.

Table 4 combines all of our results for maximum and minimum port spacing in low tier WLL systems for both high rise and low rise environments. We have used these results to investigate the economics and the spectrum requirements of low tier WLL systems (Yeh, 1997).


References

(Andersen et al, 1995) Andersen, J. B., T. S. Rappaport, and S. Yoshida, “Propagation Measurement and Models for Wireless Communications Channels,” IEEE Communications Magazine, vol. 33, no. 1, pp. 42-49, January 1995

(Cox, 1996) Cox, D. C., “Wireless Loops: What Are They?” International Journal of Wireless Information Networks, vol. 3, no. 3, pp. 125-138, July 1996

(Erceg et al, 1992) Erceg, V., S. Ghassemzadeh, M. Taylor, D. Li, and D. Schilling, “Urban/Suburban Out-of-Sight Propagation Modeling,” IEEE Communications Magazine, vol. 30, no. 6, pp. 56-61, June 1992

(Garg and Wilkes, 1996) Garg, V. K., and J. E. Wilkes, Wireless and Personal Communications Systems, IEEE Press, 1996

(Garg and Sneed, 1996) Garg, V. K., and E. L. Sneed, “Digital Wireless Local Loop System,” IEEE Communications Magazine, vol. 34, no. 10, pp. 112-115, October 1996

(Hills et al, 1994) Hills, A., T. J. Breeden, and F. W. Russell, Jr., “Predicting Performance in 2 GHz Wireless Data Links,” Proceedings of the Sixth International Conference on Wireless Communications, Calgary, Alberta, pp. 663-687, July 1994

(Lee, 1989) Lee, W. C. Y., Mobile Cellular Telecommunications Systems, McGraw-Hill, 1989

(Milstein et al, 1992) Milstein, L. B., D. L. Schilling, R. L. Pickholtz, V. Erceg, M. Kullback, E. G. Kanterakis, D. S. Fishman, W. H. Biederman, and D. C. Salerno, “On the Feasibility of a CDMA Overlay for Personal Communications Networks,” IEEE Journal on Selected Areas in Communications, vol. 10, no. 4, pp. 655-668, May 1992

(Rappaport, 1996) Rappaport, T. S., Wireless Communications: Principles & Practice, IEEE Press, 1996

(Xia et al, 1992) Xia, H. H., H. L. Bertoni, L. R. Maciel, A. Lindsay-Stewart, R. Rowe, and L. Grindstaff, “Radio Propagation Measurements and Modeling for Line-of-Sight Microcellular Systems,” 42nd Vehicular Technology Society Conference, pp. 349-353, 1992

(Yeh, 1997) Yeh, H. Y., “Designing Wireless Local Loops Using Low Tier Technology: An Approach to Providing Basic Telecommunication Service in Less Developed Countries,” Ph.D. Dissertation, Carnegie Mellon University, December 1997.

299

Page 315: Wireless Personal Communications: Emerging Technologies for Enhanced Communications

17

Appendix

The RP layouts for each of the four scenarios are shown here. In order, they are:

Figure A-1: High rise - Omni-directional antennas

Figure A-2: High rise - Directional antennas

Figure A-3: Low rise - Omni-directional antennas

Figure A-4: Low rise - Directional antennas


East-west directional antennas RP1 through RP8 cover east-west streets; north-south directional antennas RP9 through RP16 cover north-south streets.

Figure A-2.a. Directional antennas aimed directly at buildings; A-2.b. Directional antennas aimed at buildings; A-2.c. Directional antennas aimed along streets.


18

A Peek Into Pandora’s Box:

Direct Sequence vs. Frequency Hopped Spread Spectrum

Robert K. Morrow, Jr.
Morrow Technical Services
6976 Kempton Rd., Centerville IN 47330
765-855-5109, [email protected]

Abstract

Spread spectrum signaling has properties that make it useful for hiding and encrypting

signals, reducing interference to (and from) other radio transmissions, accurate ranging,

multiple access, and multipath mitigation. Direct sequence (DS) and frequency hopped (FH)

systems each have their strengths and weaknesses with respect to the above criteria, and

these are often manifested in different ways. There is no universal agreement on whether

DS or FH systems have inherently the simplest, and hence least expensive, design. DS and

FH implementations in satellite systems, CDMA cellular systems, and low power unlicensed

applications are discussed. FH has advantages in certain situations, but DS systems appear

to be superior in most realizations.

1 Introduction

Spread spectrum signaling is rapidly gaining popularity in many wireless applications

for several reasons. With the advent of inexpensive monolithic IC realizations of some of

the more complex implementation issues, both direct sequence spread spectrum (DSSS) and

frequency hopped spread spectrum (FHSS) systems can be easily implemented. Selecting

which of the two methods is better for any particular communications task can be challenging,

however, since each has its proponents and battles between the two are constantly being

waged at conferences and in the professional literature.

The choice between DS and FH can occur within several practical design contexts, in-

cluding satellite systems, cellular code division multiple access (CDMA) and other personal

communications systems (PCS), and unlicensed low-power transmitters and receivers oper-

ating in the industrial, scientific, and medical (ISM) bands. In this paper we examine the

relative advantages and disadvantages of DS and FH spread spectrum systems, both theoreti-

cally and from an implementation point of view. We begin with some general characteristics


of DS and FH in the remainder of this section. Satellite systems are briefly examined in

Section 2, CDMA cellular systems and PCS are analyzed in Section 3, and low power appli-

cations associated with FCC Part 15 regulations, along with wireless LANs using the IEEE

802.11 standard, are presented in Section 4. Finally, Section 5 contains some conclusions.

1-1 General DSSS vs FHSS Issues

Using spread spectrum instead of ordinary digital modulation techniques for data commu-

nication can provide most or all of the following advantages, depending upon implementation

details [1], [5]:

• Code division multiple access (CDMA)
• High-resolution ranging
• Low probability of intercept (LPI)
• Interference rejection
• Inherent message encryption
• Anti-multipath

DS and FH techniques each demonstrate strengths and weaknesses with respect to portions

of the above list, often due to limitations on implementation technology rather than from a

theoretical weakness. Table 1-1 compares the two methods [1], [2], [5], [9], but several of the

entries are still open to debate and will no doubt change as spread spectrum implementation

technology progresses.

For example, [1] claims that synchronization is more difficult for DSSS than for FHSS

signals, but [2] claims the reverse. In fact, a DSSS receiver using a matched filter synchro-

nizer will easily synchronize on short incoming spreading sequences with good correlation

properties, but longer spreading sequences such as those required for data security need a

sliding correlator for synchronization; thus the synchronization process becomes more time-

consuming and uncertain. Synchronizing on FHSS signals, on the other hand, can be exceed-

ingly difficult if more than a few frequency slots are used and the energy in each slot is low.

Both DSSS and FHSS can employ special transmitted preambles to aid in synchronization

at the expense of reduced signal security.

Although most FHSS implementations today use non-coherent demodulation with its (typically) 3 dB SNR penalty compared to the DSSS coherent demodulation process, improvements in direct digital synthesis (DDS) design have led to the possibility of practical, low-cost FHSS systems that maintain RF phase coherence from hop to hop.

It is also possible that FHSS systems will improve their anti-multipath rejection as faster

synthesizer lock times enable multipath component resolution.


2 Satellite Mobile Applications

Satellites can provide essentially line-of-sight service to large geographic areas, and may

in fact be the only economic method for providing communications into many areas that

would not otherwise be served with terrestrial based stations.

The Global Positioning System (GPS) is one of the most ubiquitous satellite systems in

existence, and DS/CDMA was chosen for its inherent accurate ranging, data transmission,

multipath mitigation, multiple access, interference rejection, and security [5]. The Qualcomm

OmniTRACS vehicle position reporting system uses a combination of DS/ and FH/CDMA

for interference avoidance.

Satellite PCS networks with worldwide coverage are under development by several com-

panies (Table 2). Although a bit dated (1994), the table demonstrates the preponderance

of DS/CDMA spread spectrum signaling in satellite PCS. Many of these signals possess

insufficient bandwidth for proper multipath mitigation [5], but spread spectrum will allow

these systems to coexist with other users with little prior coordination. This is especially

important due to the relatively large footprint of satellite signals beamed to earth.


3 CDMA Cellular Systems

The North American IS-95 CDMA cellular standard has been gaining in popularity in

recent years. The system is a sophisticated DS/CDMA process with RAKE receivers for

multipath combining, careful power control on all uplink (handset) signals to combat the

near-far effect, and the use of soft handoffs to reduce degradation as additional users are

accommodated. Although the IS-95 standard appears to be fairly well entrenched, with

subscriber equipment available in many places, proponents of FHSS (specifically, slow FHSS)

in cellular communications have not been silent.

In [7] and [8], the authors point out that FH/CDMA cellular can have the following

advantages over DS/CDMA:

• Less total interference. Since it is easier to choose sets of hopping sequences that are

orthogonal simply by ensuring that at most one user occupies any frequency slot, intracell interference can be reduced significantly compared to that experienced by DS/CDMA.

• No reverse link near-far problem. Since the orthogonality of frequency slot occupation

is a characteristic of this FH implementation, reverse link power control is necessary only for

reducing intercell interference. Consequently, the extensive power control process required

by IS-95 is not needed.

• Reduced susceptibility to external jamming. A narrowband jamming signal that

is close to a DS/CDMA cell site can be disastrous due to near-far, but this same signal's

effect on a FH system causes problems in only one frequency slot, which presumably can


be mitigated through forward error control (FEC). An additional advantage of FH/CDMA

is attained by replacing data in a jammed frequency slot with erasures, improving FEC

performance. Furthermore, “smart” algorithms can be used to remove poor performing

frequency slots from the hopping pattern, a much simpler method of avoiding interference

than the cancellation algorithms being proposed for DS/CDMA.

• Frequency agility. DS/CDMA by its nature requires a wide and continuous frequency

band, but FH/CDMA can be structured to avoid hopping to known sources of interference.

Furthermore, FH/CDMA, due to its instantaneous narrowband signaling, can restrict its

out-of-band interference level compared to DS/CDMA.

• Easy to implement time division duplexing (TDD). TDD can be implemented in

FH/CDMA by simply offsetting transmit and receive slots in time. By eliminating the need

to simultaneously transmit and receive, costs can be reduced significantly. Furthermore, a

few hundred microseconds are available between active slots that can be used to switch the

synthesizer to the upcoming frequency, so synthesizer costs are reduced.

An experimental FH/CDMA cellular system was built by Motorola personnel and dis-

cussed in [8]. The system realized a capacity of 27 continuous channels per cell per MHz,

competitive with DS/CDMA. However, it is unlikely that FH/CDMA will unseat the IS-95

DS/CDMA standard as a viable contender for replacing analog cellular.

4 Low Power Unlicensed Implementations

In order to allow unlicensed operation in the industrial, scientific, and medical (ISM) fre-

quency bands without undue interference, the U.S. Federal Communications Commission

(FCC) developed Part 15 regulations and the European Telecommunications Standards In-

stitute (ETSI) has created prETS 300 328 rules which are summarized in Table 4 with regard

to spread spectrum applications [1], [16].

It is interesting to examine interference of DSSS and FHSS systems conforming to FCC

Part 15 rules to other narrowband receivers located in the same ISM band [1]. With a

10 Mcps spreading code rate and 1.0 W EIRP, both BPSK and MSK DSSS signals have

a power density between 0.1 and 0.2. As a result, a narrowband receiver with

30 kHz IF bandwidth would be subject to only 0.3 mW of the initial 1.0 W of DSSS power.

FHSS systems concentrate all of their transmit power into each hopping channel; therefore,

power densities for a 1 W EIRP transmitter can be as high as 208 mW/Hz for a minimum


bandwidth/maximum dwell time signal, to as low as 1 µW/Hz for a maximum bandwidth (1.0 MHz per slot) signal. Therefore, even the maximum bandwidth FHSS signal will interfere with a narrowband receiver at 10 times the level of DSSS, equivalent to a transmitter at a correspondingly closer distance. Only if we compare the widest allowed FHSS signal (500 kHz instantaneous bandwidth) with the narrowest allowed DSSS signal (also 500 kHz instantaneous

bandwidth) will the interference to narrowband systems be equal. Note, however, that the

nature of the two interferences is different: FHSS interference comes in bursts each time a frequency slot is within the passband of the narrowband receiver, but DSSS interference is constant.
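The bandwidth argument above can be sketched with an idealized flat-spectrum model. This is our simplification (real DSSS spectra are not flat, and it does not reproduce the paper's exact milliwatt figures), but it does show the 10:1 interference ratio between a 1 MHz FHSS slot and a 10 MHz DSSS signal:

```python
def psd_w_per_hz(eirp_w, bandwidth_hz):
    """Average power spectral density of a transmitter spreading
    eirp_w uniformly over bandwidth_hz (idealized flat spectrum)."""
    return eirp_w / bandwidth_hz

def inband_power_w(eirp_w, tx_bw_hz, rx_if_bw_hz):
    """Power falling inside a narrowband receiver's IF filter while
    the transmit spectrum overlaps it (worst case, co-channel)."""
    return psd_w_per_hz(eirp_w, tx_bw_hz) * min(rx_if_bw_hz, tx_bw_hz)

# 1 W EIRP into a 30 kHz IF receiver:
dsss_w = inband_power_w(1.0, 10e6, 30e3)  # 10 Mcps DSSS, always present
fhss_w = inband_power_w(1.0, 1e6, 30e3)   # 1 MHz FHSS slot, while in-band
ratio = fhss_w / dsss_w                   # 10x, as argued in the text
```

The model also captures the qualitative difference noted above: the DSSS contribution is continuous, while the FHSS contribution arrives only during the dwells whose slot overlaps the victim receiver.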

4-1 IEEE 802.11 Local Area Networks

In an attempt to standardize wireless local area network (WLAN) designs, the IEEE

developed the 802.11 standard for implementation of DSSS, FHSS, and infrared (IR) WLAN

applications. The 802.11 standards committee thought that DSSS would offer advantages in

reliability, lower noise susceptibility, and higher data rates, while FHSS would be more useful

in low-cost systems [12]. The standard prescribes the use of Gaussian FSK in FHSS, but


coherent BPSK or QPSK may be employed in DSSS implementations. Consequently, for a

given data rate FHSS experiences a significant SNR penalty compared to DSSS. This penalty

is roughly 15 dB at a 2 Mb/s data rate [11], and translates into a significant decrease in

usable range for FH compared to DS when conforming to the FCC EIRP limit of +36 dBm.

As a result, a QPSK-based DSSS 802.11 network running at 2 Mb/s offers a lower BER than

a FHSS network running at 1 Mb/s. It must be noted, though, that the spreading sequence

specified in the 802.11 standard is a simple 11-bit Barker code which virtually eliminates the

possibility of using CDMA [10]. However, since interference can be tolerated up to –3 dB

relative to the desired signal, DSSS microcells using the same carrier frequency can be packed

fairly densely. FHSS can tolerate interfering signals at –24 dB (at 1 Mb/s) or –29 dB (at

2 Mb/s), but despite this lower tolerance for interference, up to 15 FHSS networks can be

co-located in a 79 channel scheme, although groups of FHSS networks must be placed further

apart than their DSSS counterparts [10].

FH is further penalized by the 802.11-recommended packet size of 400 B for FHSS com-

pared to 1500 or 2400 B for DSSS. Small packet size corresponds to increased overhead for

synchronization and headers; as a result, DSSS can have a 4x to 10x performance improvement over that of FHSS [12].

Although DSSS is more complex to implement than FHSS due to additional digital signal

processing (DSP) requirements and the need for linear power amplification in the transmitter,

actual production costs are nearly equivalent [11], [12].

Under the constraints of the IEEE 802.11 WLAN standard, then, DSSS has a significant

advantage over FHSS. This is manifested by the information given in Table 4-1, where

DSSS has nearly a 3:1 advantage over FHSS in WLAN and short-range spread spectrum

communication implementations.

5 Conclusions

To summarize the debate over whether DSSS or FHSS is superior in commercial applications, we turn to Chapter 11 of [1], where Dixon explains why FHSS predominates in

military systems, yet designers of commercial systems prefer DSSS.

• Synchronization times. Military systems require complex spreading sequences and en-

crypted data, both of which make FHSS the easier signal upon which to synchronize. Com-

mercial applications can instead use simpler spreading sequences, making matched filter


DSSS capture possible, and lack of encryption means that the same code can be used for

each new acquisition, further simplifying the synchronization process.

• Near/far problem. Military systems cannot be susceptible to a close-in jammer wreaking

havoc with communications, which would be the case if DSSS were being used without

sophisticated interference cancellation algorithms in place. FHSS is inherently less prone to

near/far interference.

• Increased FHSS interference to narrowband systems. As we noted in the last sec-

tion, FHSS can interfere with narrowband receivers to a greater degree than DSSS typically

does, which is a much more important issue with commercial implementations than it is with

military communications.

• Lack of FHSS channel availability. If channels are spaced 25 kHz apart, then 1040 of

them will fit into the 26 MHz available in the 902 MHz ISM band. Twenty simultaneous users


results in a bit error rate that is too high even for voice communications.

Furthermore, a star LAN with 20 users would require the base station to have 40 frequency

synthesizers for full duplex operation, compared to only two for DSSS.

• Self-interference. Even at 25 kHz channel spacing, adjacent channel interference would

be prevalent at 900 MHz due to the lack of adequate filter technology; this would also be

manifested in crossmodulation and intermodulation distortion. Furthermore, since FHSS

must operate at a positive SNR, cell perimeters would be subject to interference from adja-

cent cells to a higher degree than would be experienced in DSSS. With several adjacent cells

providing interference, FHSS collisions would occur at an unacceptably high rate.
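Dixon's channel-count arithmetic above checks out; the collision estimate below adds an illustrative uniform-random-hopping assumption of our own (real hop sets are coordinated):

```python
band_hz = 26e6       # 902 MHz ISM band width
spacing_hz = 25e3    # channel spacing from the text
channels = int(band_hz / spacing_hz)   # 1040, matching Dixon's figure

# Chance that a given user's current slot is hit by at least one of the
# other users, if all hop independently and uniformly (our assumption):
users = 20
p_hit = 1 - (1 - 1 / channels) ** (users - 1)   # roughly 1.8% per hop
```

Even a per-hop collision probability of a couple of percent is significant once adjacent-cell interferers are added, which is the point of the self-interference argument above.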

It appears, then, that DSSS is the winner over FHSS in the consumer marketplace. Now

let’s close the lid on Pandora’s box and get back to work.

References

[1] R. C. Dixon, Spread Spectrum Systems with Commercial Applications, Wiley, 1994.

[2] B. Schweber, “Choices and confusion spread wider as spread spectrum goes main-

stream,” EDN, pp. 79-87, October 10, 1996.

[3] M. Kavehrad, “Spread spectrum for indoor digital radio,” IEEE Commun. Mag., June

1987, pp. 32-40.

[4] D. Schilling, et al, “Spread spectrum for commercial applications,” IEEE Commun.

Mag., April 1991, pp. 66-79.

[5] D. Magill, et al, “Spread-spectrum technology for commercial applications,” Proc. IEEE,

April 1994, pp. 572-584.

[6] D. Whipple, “North American cellular CDMA,” Hewlett-Packard Journal, December

1993.

[7] R. Kohno, et al, “Spread spectrum access methods for wireless communications,” IEEE

Commun. Mag., January 1995, pp. 58-67.

[8] P. Rasky, et al, “Slow frequency-hop TDMA/CDMA for macrocellular personal com-

munications,” IEEE Pers. Commun. Mag., Second Qtr. 1994, pp. 26-35.

[9] A. Duel-Hallen, et al, “Multiuser detection for CDMA systems,” IEEE Pers. Commun.

Mag., April 1995, pp. 46-58.

[10] C. Andren, “WLAN designers choose between DSSS and FHSS,” Wireless Systems

Design, March 1997, pp. 43-46.


[11] A. Kamerman, “Spread-spectrum techniques drive WLAN performance,” Microwaves & RF, September 1996, pp. 109-114.

[12] V. Vermeer, “Designing a high-speed direct-sequence spread-spectrum radio,” Wireless

Systems Design, September 1997, pp. 22-31.

[13] R. Roth, “Spread spectrum technology for low power PCS applications,” Microwave

Journal, October 1995, pp. 94-105.

[14] S. G. Glisic and P. A. Leppanen [eds], Code Division Multiple Access Communications,

Kluwer Academic Publishers, 1995, pp. 203-223.

[15] M. K. Simon, et al, Spread Spectrum Communications Handbook, McGraw-Hill, 1994.

[16] T. Cokenias, “Spread-spectrum operators face compliance issues,” Wireless Systems

Design, April 1997, pp. 47-50.


19

On the Capacity of CDMA/PRMA Systems

Roger Pierre Fabris Hoefel and Celso de Almeida
Departamento de Comunicações
Faculdade de Engenharia Elétrica e de Computação
Universidade Estadual de Campinas
C.P. 6.101 - Campinas - 13.081-970 - S.P. - Brasil
E-mail: roger@decom.fee.unicamp.br, celso@decom.fee.unicamp.br

Abstract

In this article we obtain the performance of the CDMA/PRMA protocol in circumstances not analyzed in [1]. First, we analyze the capacity loss due to imperfections in the power control loop, considering single and multiple-cell environments. Second, we analyze the effects of Rayleigh fading on the system capacity. We also analyze the performance of CDMA/PRMA with three-state vocoders; a subtle modification is suggested for the three-state vocoder access scheme in order to obtain a capacity improvement.

1. Introduction

Pioneering works of Goodman [2], [3] have explored the characteristics of Packet-Reservation Multiple-Access/Time-Division Multiple-Access (PRMA/TDMA) aimed at voice, data and video packet transmission in the reverse link of mobile communication systems. Brand and Aghvami [1] have presented a joint Code-Division Multiple-Access/Packet-Reservation Multiple-Access (CDMA/PRMA) protocol intended to be applied to the reverse link of third generation mobile communication systems. This approach has shown a capacity gain relative to the random access (RA) DS-CDMA (Direct Sequence CDMA) protocol [4]. Section 2 presents a brief description of the CDMA/PRMA protocol, which is also known as the Joint Control Protocol (JCP). Section 3 presents the uplink modeling. Section 4 presents the performance of CDMA/PRMA systems in distinct operational conditions, such as inaccuracies in the power control loop, Rayleigh fading, single and multiple cell environments, and distinct multimedia traffic profiles. Finally, Section 5 presents the conclusions.

2. CDMA/PRMA Protocol Description

Speech packets are classified as periodic: once channel access is obtained, it is maintained until the end of the current talkspurt. Data packets can have periodic or random access. In the first case, they are treated as speech, while in the second they always work in the contention mode. Video packets are classified as continuous, that is, they are in the permanent reservation mode.


Time is partitioned into frames, and frames into slots. The frame period is equal to one speech segment

so it is enough for a speech user to access just one slot per frame. The mobile stations (MS) access is

governed by the access permission probability that is sent by the base station (BS). The BS can calculate

the access permission probability of the i-th slot of the next frame based on the number of periodic users

of the i-th slot of the current frame, by observing the header of the decoded packets. For each slot a new

access probability is calculated, and these probabilities are broadcast to the MSs.

The figure of merit used to measure the performance of this protocol in voice-only traffic condition is

the number of speech users supported by the system, for a given packet loss rate Ploss. The packet loss rate

is equal to the sum of the dropped packet rate by excessive delay, and the corrupted packet rate by

excessive interference. Speech packets that do not have successful access in any slot in a pre-determined

time interval (20 ms) are dropped. Video and data packets work in the Automatic Repeat Request mode.

For mixed data/voice traffic condition the analysis is based on the number of supported users that satisfy

the maximum permissible delay. In this case, the speech users have to satisfy a given restriction in the

voice packet loss rate.

This paper uses the same design parameters as Brand [1], as described in Tab. 1, precisely to allow a comparison with the results obtained there. The processing gain used is equal to 7. A BCH error-correcting code with parameters (511,229,38) is used, that is, length equal to 511 bits, 229 information bits, and an error-correcting capability of 38 bits. The speech is modeled as a 2-state Markov machine, composed of talkspurts and gaps. The talkspurts and gaps are assumed to have exponentially distributed durations with means of 1.00 s and 1.35 s, respectively. This yields a voice activity factor of approximately 0.426.
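The activity factor implied by these means follows directly from the two-state model (mean talkspurt over mean cycle length):

```python
mean_talkspurt_s = 1.00  # mean talkspurt duration (exponential)
mean_gap_s = 1.35        # mean gap duration (exponential)

# Long-run fraction of time a speech source is in a talkspurt:
voice_activity_factor = mean_talkspurt_s / (mean_talkspurt_s + mean_gap_s)
```

This evaluates to roughly 0.43, the fraction of slots a speech terminal actually occupies.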

Random data terminals transmit at the rate of 3400 bps, which is equal to the net rate of two-state

vocoders. Video terminals transmit at rate of 144 kbps. The maximum rate in the continuous mode is of

160 kbps. Therefore, it is possible to retransmit the corrupted video packets.

Data and video terminals use first-in first-out buffers that can store up to 200 packets. The packets are

discarded after the acknowledgment of the successful transmission. The system is said to be stable, if none

of the terminal buffers overflow.

For comparison purposes we also implemented the Random Access protocol. In this protocol, the terminals access a slot as soon as they have packets to transmit.


3. Uplink Channel Modeling

This paper deals with medium access on the reverse link of a mobile communication system, considering packets as the basic unit of transmission. Therefore, we are interested only in aspects of the link layer of the Open System Interconnection Reference Model. Medium access control is implemented by the specific protocol, that is, the JCP or the RA protocol. Error control is implemented by the block code; the packet decoding success probability is

Pc = sum_{i=0}^{t} C(N,i) p^i (1-p)^(N-i),

where N is the block code length, t is the error-correcting capability, and p is the bit error probability, which depends on the model used for the radio channel.
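Under the usual independent-bit-error assumption, this success probability is a binomial tail sum. A short sketch for the (511,229,38) BCH code used here:

```python
from math import comb

def packet_success_prob(p, N=511, t=38):
    """Probability that a received N-bit block contains at most t bit
    errors, i.e. that a t-error-correcting decoder such as the
    (511,229,38) BCH code recovers the packet, assuming independent
    bit errors of probability p."""
    return sum(comb(N, i) * p**i * (1 - p)**(N - i) for i in range(t + 1))

for p in (0.01, 0.05, 0.10):
    print(p, packet_success_prob(p))
```

Note how tolerant the code is: even at a raw BER of a few percent the packet is almost always recovered, which is why the protocol can operate at relatively high bit error rates.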

3.1 Imperfect Power Control Loop

Brand and Aghvami used the standard Gaussian approximation (SGA) to determine the bit error rate (BER) [1]. Assuming that the multiple access interference (MAI) is Gaussian, the mean BER for Binary Phase-Shift Keying (BPSK) modulation can be expressed through the Q-function of the mean signal-to-noise ratio of the i-th user [5]. In a cellular system with K distinct users in the target cell and neighbor cells each with their own active users, the mean SNR of the i-th user is determined by the power received at the target BS due to each user in the target cell, the processing gain, and the power arriving at the same BS due to each MS of each neighbor cell. Thermal noise is considered negligible.
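As an illustrative sketch of the SGA computation (the exact expression of the paper is not reproduced above): following Pursley [5], each interferer of power S_k contributes S_k/(3 G S_i) to the inverse squared SNR of user i, and the mean BER is the Q-function of the resulting SNR. The processing gain G = 127 below is illustrative only:

```python
from math import erfc, sqrt

def qfunc(x):
    """Gaussian Q-function, Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2.0))

def ber_sga(signal, interferers, G, noise_term=0.0):
    """Mean BER of a BPSK DS-CDMA user under the standard Gaussian
    approximation of [5]: each interferer of power s contributes
    s / (3 * G * signal) to the inverse squared SNR; noise_term is
    N0/(2*Eb), zero when thermal noise is negligible as in the text."""
    inv_snr_sq = sum(s / (3.0 * G * signal) for s in interferers) + noise_term
    return qfunc(1.0 / sqrt(inv_snr_sq))

# Equal-power sanity check: K = 10 users and illustrative G = 127
# reduce to Q(sqrt(3*G/(K-1))).
print(ber_sga(1.0, [1.0] * 9, 127))
```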

The Brand and Aghvami paper considered perfect power control in an AWGN channel. In this article, the power received at the BS due to any user is modeled as a log-normal random variable [11]. Inaccuracies in the power control loop are modeled by a standard deviation between 1 and 2 dB. We consider that the mean power received at the BS remains constant during each slot, which is justified by the fact that shadowing does not change significantly over distances of a few wavelengths [7].
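A minimal sketch of this power-control-error model: received powers are drawn as Gaussian deviations in dB around the nominal level, i.e. log-normal in linear units (the seed and user count are illustrative):

```python
import random
from math import log10

def received_powers(n_users, sigma_db, p_nominal=1.0, seed=7):
    """Model imperfect power control: the power received at the BS from
    each user is log-normal, i.e. Gaussian in dB around the nominal
    level with standard deviation sigma_db (1-2 dB in this paper)."""
    rng = random.Random(seed)
    return [p_nominal * 10 ** (rng.gauss(0.0, sigma_db) / 10.0)
            for _ in range(n_users)]

powers = received_powers(50, sigma_db=2.0)
print(round(10 * log10(max(powers) / min(powers)), 1), "dB spread")
```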


For voice-only traffic, the ratio of the other-cell interference to the nominal power received at the BS is modeled in terms of the voice activity factor, the number of voice terminals in the system, the number O of slots in a frame, and the mean other-cell interference fraction f. We have considered f = 0.37 for the assumed path-loss exponent.

3.2 Flat Fading

Here, we assume that the effects of path loss and slow lognormal shadowing are compensated, so that the average received power is controlled [8]. However, the dynamic power control is too slow to compensate for the effects of Rayleigh fading. Assuming that the fading is flat and slow and that the same average power is received from all users, the average signal-to-interference ratio for a single-cell system is determined by the mean squared value of the Rayleigh fading, the energy per bit, the unilateral spectral density of the AWGN, and the number K of active terminals in each slot.

Finally, the average bit error probability for BPSK over this channel is given by [9]

Pb = (1/2) [1 - sqrt(g / (1 + g))],

where g denotes the average received SNR per bit.
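A short numerical sketch of the average BPSK error probability over slow flat Rayleigh fading, using the standard closed form from Proakis [9] (the SNR values below are illustrative):

```python
from math import sqrt

def ber_bpsk_rayleigh(gamma_bar):
    """Average BPSK bit error probability over slow flat Rayleigh
    fading with mean SNR gamma_bar (Proakis [9]):
        Pb = 0.5 * (1 - sqrt(gamma_bar / (1 + gamma_bar)))."""
    return 0.5 * (1.0 - sqrt(gamma_bar / (1.0 + gamma_bar)))

for snr_db in (0, 10, 20):
    g = 10 ** (snr_db / 10)
    print(snr_db, "dB ->", f"{ber_bpsk_rayleigh(g):.3e}")
```

At high SNR the expression behaves as roughly 1/(4g), i.e. the BER falls only inversely with SNR, which is what makes fading so costly without diversity.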

3.3 Frequency Selective Fading

Assuming that the number of paths L is the same for every user and that the power of all paths of the K users is also the same, the average SNR per path at all taps of a simple correlating RAKE receiver is given in [8].

For Rayleigh frequency-selective fading it can be shown that the BER of BPSK with maximal ratio combining of diversity order L is given by [9]

Pb = [(1 - mu)/2]^L sum_{k=0}^{L-1} C(L-1+k, k) [(1 + mu)/2]^k,

where, by definition, mu = sqrt(gc / (1 + gc)) and gc is the average SNR per path.
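This MRC expression can be evaluated directly. The sketch below compares diversity orders at a total SNR of 20 dB; splitting the power evenly across the L paths is an illustrative assumption:

```python
from math import comb, sqrt

def ber_bpsk_mrc(gamma_c, L):
    """Average BPSK BER with L-branch maximal ratio combining over
    i.i.d. Rayleigh-faded paths of mean SNR gamma_c per branch
    (Proakis [9]):
        mu = sqrt(gamma_c / (1 + gamma_c))
        Pb = ((1-mu)/2)**L * sum_{k=0}^{L-1} C(L-1+k, k) * ((1+mu)/2)**k
    """
    mu = sqrt(gamma_c / (1.0 + gamma_c))
    return ((1 - mu) / 2) ** L * sum(
        comb(L - 1 + k, k) * ((1 + mu) / 2) ** k for k in range(L))

g_total = 10 ** (20 / 10)  # 20 dB, split evenly over the L paths
for L in (1, 2, 3):
    print(L, f"{ber_bpsk_mrc(g_total / L, L):.3e}")
```

For L = 1 the expression reduces to the flat-fading result, and the BER drops sharply as the diversity order grows, even at fixed total SNR.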

4. Simulation

In this section we obtain the performance of the CDMA/PRMA protocol, expressed as the mean packet loss rate over all active users. Packets are considered corrupted when the number of erroneous bits is greater than the error-correcting capability of the BCH code. Unless otherwise stated in the text, the simulation time is 1000 s. We use the method proposed in [1] to determine the channel access function in a given slot, that is, the access probabilities as a function of the number of users in the reservation mode.

First, we determine the number of MSs supported, employing the a-posteriori optimization. The access probability in this case is given by an expression (11) involving Kres, the number of users in reservation; Kcont, the number of users in contention; and Kopt, the optimum number of users per slot.
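Since the access-probability equation itself is not reproduced above, the sketch below assumes one common form of such an a-posteriori rule: admit each contending user with the probability that brings the expected number of transmissions in the slot to Kopt, clipped to [0, 1]. This is a hypothetical reconstruction, not necessarily the exact rule of [1]:

```python
def access_prob(k_res, k_cont, k_opt):
    """Hypothetical a-posteriori access probability: with k_res users
    already holding reservations and k_cont users contending, admit
    each contender with probability (k_opt - k_res) / k_cont so that
    the expected slot occupancy approaches k_opt, clipped to [0, 1]."""
    if k_cont == 0:
        return 1.0
    return min(1.0, max(0.0, (k_opt - k_res) / k_cont))

print(access_prob(k_res=2, k_cont=4, k_opt=4))  # → 0.5
print(access_prob(k_res=5, k_cont=4, k_opt=4))  # → 0.0
```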

Second, the channel access function is determined heuristically, with the objective that the capacity be similar to that obtained using (11). The shape of the channel access function proposed in [1] is two line segments defined by the following parameters: the initial probability ps, the slope of the first straight-line segment, the first breakpoint, the slope of the second straight-line segment, and the second breakpoint. The parameters employed in this article were optimized for a packet loss rate of 1%.
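The two-segment shape can be sketched as a piecewise-linear function of the number of users in reservation mode. The slopes and breakpoints below are purely illustrative placeholders, not the optimized values of [1]:

```python
def heuristic_access(k, p_s, slope1, brk1, slope2, brk2):
    """Two-segment channel access function in the spirit of [1]:
    starts at probability p_s, follows a first line segment up to
    breakpoint brk1, a second segment up to brk2, and is zero beyond
    it. k is the number of users in reservation mode."""
    if k <= brk1:
        p = p_s + slope1 * k
    elif k <= brk2:
        p = p_s + slope1 * brk1 + slope2 * (k - brk1)
    else:
        p = 0.0
    return min(1.0, max(0.0, p))

# Illustrative parameters only.
for k in (0, 5, 10, 20):
    print(k, heuristic_access(k, p_s=0.6, slope1=-0.01, brk1=8,
                              slope2=-0.05, brk2=16))
```

The second, steeper segment throttles new access as the slot approaches its optimum load, and beyond the second breakpoint contention is shut off entirely.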


4.1 Imperfect Power Control

Figs. 1 and 2 show the JCP performance for single and multiple cells, respectively. Both figures are parameterized by the imperfection level of the power control loop. Tabs. 1 and 2 show the capacity loss corresponding to Figs. 1 and 2, respectively; for comparison, the tables also show the simulation results obtained by Newson [10] for single-cell systems and the analytical ones by Prasad [6] for multiple-cell systems. The results obtained are in agreement with the ones presented in [1]. The channel access function parameters were set separately for the single-cell and cellular cases. Figs. 3 and 4 are analogous to Figs. 1 and 2, respectively, except that the RA protocol was used.

Fig. 5 shows results similar to Fig. 1, except that the permission access function was optimized for each level of inaccuracy in the power control loop. Tabs. 3 and 4 compare the performance of the JCP and RA protocols; here too, the access function for the JCP is optimized for each level of inaccuracy in the power control loop. The capacity loss for the JCP is similar to that obtained for the RA protocol. The discrepancies observed are due to the heuristic determination of the access function, which does not attain the capacity obtained by the a-posteriori criterion. We observe a smaller capacity gain of the JCP relative to the RA protocol in the presence of other-cell interference.


For ideal power control, a more accurate and computationally efficient way to calculate the BER was developed by Holtzman [12]; it is called the Improved Gaussian Approximation (IGA) and accounts for AWGN that is not negligible.

Fig. 6 shows the number of supported voice users as a function of the SNR per bit for the JCP and RA protocols in a single-cell system. It was shown (see Fig. 1) that the JCP supports 359 users for ideal power control and negligible thermal noise. Therefore, it is possible to conclude that the IGA method presents results similar to those obtained using the SGA. This is not surprising: despite the small number of users per slot, the block code employed permits operation at a relatively high BER, in which case the results of the SGA and IGA are similar [7].

Video terminals produce packets in almost all slots. Therefore, the permission access function for voice users in the presence of video users is given by f[k+n], where k denotes the number of voice users in the reservation mode, n is the number of active video users, and f[.] is the access function optimized for voice-only traffic.

Fig. 7 shows the performance of CDMA/PRMA for one video user. For perfect power control, the results are in full agreement with the ones shown in [1]. Tab. 6 summarizes Fig. 7 for Ploss = 1%. Tab. 7 shows results similar to Tab. 6, except that the access function was optimized at each level of inaccuracy in the power control loop; Tab. 7 also shows the RA protocol performance.


The video users generate packets in almost all slots. Therefore, the other-cell interference can be modeled as a function of the number n of active video terminals.

Fig. 8 shows, for the JCP, the number of voice users supported as a function of the number of video terminals for a cellular system with f = 0.37. The capacity loss for one video user is 26.46%, where the video user is treated as an equivalent number of voice users. For voice-only traffic, a similar capacity loss was observed. For mixed voice/data traffic conditions, the permission access probability of data users is obtained by scaling by a constant factor set to 0.2 [1].

Fig. 9 shows, for the JCP, the number of simultaneous conversations as a function of the number of data terminals, parameterized by the mean access time of data packets, for 100 s of simulation time. The traffic load and profile supported by the system must satisfy the restrictions of stability and of a voice packet loss rate smaller than 2%. For some mean access times, the number of voice users supported is limited by the voice packet loss rate under high data traffic and by the stability constraint under low data traffic; for other values, the main restriction is the voice packet loss rate. In this analysis, the number of voice users is determined for a given number of data users. The results for the single-cell case are in agreement with the ones shown in [1]. We modeled the average other-cell interference for mixed voice/data traffic as a function of the number of active data users. Observe that the average number of data terminals per slot is a function of the number of active data users and of the data rate.


4.2 Rayleigh Fading

Fig. 10 shows the RA and JCP performance when the radio channel is impaired by slow flat Rayleigh fading. The objective of this analysis is to verify the effects of the fading on the multiple access, so we set the SNR per bit to 20 dB, that is, the system is interference limited. The optimum number of users per slot is 3 for a voice packet loss rate of 1%. Fig. 10 shows the voice packet loss rate determined both by the a-posteriori criterion and by the heuristic permission access function, as well as the JCP performance when the access function was heuristically optimized for ideal power control over an AWGN channel. The same figure shows the RA protocol performance.

Fig. 11 shows the JCP and RA protocol performance for voice-only traffic, considering slow frequency-selective Rayleigh fading and a maximal-ratio-combining receiver. First, we present the JCP performance with the access function optimized for different diversity degrees. Then, we show the JCP performance with L = 3 and with the access function heuristically optimized for an AWGN channel. Finally, the same figure presents the RA protocol performance for L = 3. Again, we verified the superior performance of the JCP relative to the RA protocol.

4.3 Three-State Vocoders

We now present condensed results on the performance of the joint CDMA/PRMA protocol using three-state vocoders. In this case, the Markov process comprises talkspurt, minigap, and gap states. The gaps again have a mean duration of 1.350 s, while the talkspurts last 0.275 s and the minigaps 0.050 s, which yields a voice activity factor of 0.375 [2]. Simply replacing the two-state by a three-state vocoder, without changing the access parameters, is not efficient: an increase in the number of packets dropped due to excessive delay was observed. This increase is due to the loss of reservations during minigaps and, as a consequence, the increased contention in the slots. Therefore, the protocol must be modified so that users do not lose their reservations during the minigaps. As a consequence, we obtain an improvement in the capacity of CDMA/PRMA systems using three-state vocoders.

Fig. 12 presents the CDMA/PRMA performance for three-state vocoders for a single cell, together with the multiple-cell performance. The other-cell interference is modeled as a Gaussian variable [11] whose mean and variance depend on M, the total number of active voice users in the system, disregarding sectorization. The channel access function parameters were again set separately for the single-cell and multiple-cell cases. Gilhousen et al. [11] obtained a capacity loss of 38.98% due to the other cells for an outage probability of 1%, considering a perfect power control loop for circuit-switched architectures. The capacity loss obtained by the JCP is 42.89%. Fig. 12 also shows the RA protocol performance for the single-cell case. Again, a subtle modification of the RA protocol is required for three-state vocoders: it consists in keeping the same slot until the occurrence of a gap, which avoids the concentration of users in the first slot due to the short duration of the minigaps. Comparing Fig. 12 and Fig. 1, we verify a capacity gain of about 6% by using three-state vocoders with the JCP. The JCP provides a capacity gain of 76% relative to the RA protocol.

Similar results were obtained for two-state vocoders. Simulation analysis also indicates a capacity gain under mixed traffic conditions for three-state vocoders. For distinct traffic profiles, the capacity loss due to the imperfect power control loop using three-state vocoders is similar to that verified for two-state vocoders. Fig. 13 shows the number of active users on the channel over a 1 s time interval for the single-cell case and 380 active users.

5. Conclusions

We have investigated the capacity loss of the joint CDMA/PRMA protocol and of the RA protocol for distinct traffic profiles, due to the inaccuracy of the power control loop for single- and multiple-cell systems, and due to Rayleigh fading for a single cell. We have considered two- and three-state vocoders.

We can conclude that the JCP presents a significant capacity gain relative to the RA protocol for distinct operational and traffic profile conditions. It is also possible to conclude that a correct choice of the JCP channel access function parameters minimizes the capacity loss due to imperfect power control and fading. This has motivated further work on the dynamic optimization of the CDMA/PRMA parameters based on the status of the system and on the quality of service requested by distinct users. We have verified that subtle modifications of the CDMA/PRMA access protocol permit a greater capacity for multimedia traffic. The three-state vocoder capacity loss due to the imperfect power control loop and fading is similar to that verified for two-state vocoders.

References

[1] A. E. Brand and A. H. Aghvami, "Performance of a joint CDMA/PRMA protocol for mixed voice/data transmission for third generation mobile communication," IEEE J. Select. Areas Commun., vol. 14, no. 9, pp. 1698-1707, Dec. 1996.

[2] D. J. Goodman and S. X. Wei, "Efficiency of packet reservation multiple access," IEEE Trans. on Vehicular Technology, vol. 40, no. 1, pp. 170-176, Feb. 1991.

[3] D. J. Goodman and W. C. Wong, "A packet reservation multiple access protocol for integrated speech and data transmission," IEE Proceedings-I, vol. 139, no. 6, pp. 607-612, Dec. 1992.

[4] R. Ganesh, K. Joseph, N. D. Wilson and D. Raychaudhuri, "Performance of cellular packet CDMA in an integrated voice/data network," International Journal on Wireless Information Networks, vol. 1, no. 3, pp. 199-221, 1994.

[5] M. B. Pursley, "Performance evaluation for phase-coded spread spectrum multiple-access communication," IEEE Trans. Commun., vol. 25, no. 8, pp. 795-799, Aug. 1977.

[6] R. Prasad, M. G. Jansen and A. Kegel, "Capacity analysis of a cellular direct sequence code division multiple access system with imperfect power control," IEICE Trans. Commun., vol. E76-B, no. 8, pp. 894-904, Aug. 1993.

[7] T. S. Rappaport, Wireless Communications: Principles and Practice, Prentice Hall, Upper Saddle River, New Jersey, 1996.

[8] K. Pahlavan and A. H. Levesque, Wireless Information Networks, Wiley, New York, 1995.

[9] J. G. Proakis, Digital Communications, McGraw-Hill, New York, 1995.

[10] M. R. Heath and P. Newson, "On the capacity of spread spectrum CDMA for mobile," Conf. Rec. of the IEEE VTC'92 (Denver), pp. 732-735, 1992.

[11] K. S. Gilhousen, I. M. Jacobs, R. Padovani, A. J. Viterbi, L. A. Weaver and C. E. Wheatley, "On the capacity of a cellular CDMA system," IEEE Trans. on Vehicular Technology, vol. 40, no. 2, pp. 303-311, May 1991.

[12] J. M. Holtzman, "A simple, accurate method to calculate spread-spectrum multiple-access error probabilities," IEEE Trans. on Communications, vol. 40, no. 3, pp. 461-464, March 1992.


INDEX

A
adaptive antennas 23, 24, 99, 247-258
adaptive signal processing 143-154
airflow 64, 67, 69
angular separability 31, 32
antenna efficiency 75
applique 49-52
array calibration 259
array factor 2
array measurements 99, 100
array processing 101, 102-104, 106-108

B
beam solid angle 75
bi-directional amplifiers 64, 70
bit error probability 7
blind estimation 23-28
broadside 75
building penetration loss 83-89
building shadowing loss 83-89

C
canyon effect 99, 102, 104
capacity-limited 91
Carnegie Mellon University 61, 70
carrier-to-interference 91
CDMA 123, 124, 129-134, 137, 139, 141, 142, 315-326
CDMA cellular 308, 309
cellular communication systems 247-258
cellular grid 92
center of traffic 94
channel measurements 99, 100
channel modeling 275-282
channel sounder 100
chipset for GSM 155-166
coherence bandwidth 68
coupling 61, 62, 65-67, 69, 70
coupling loss 73
cutoff frequency 62, 63, 65, 66

D
D(W)ILSF Algorithm 28, 29
dampers 69
delay spread 68, 69
demodulator 155-166
digital signal processing 143-154
dipole antenna 65
direct sequence 129-131, 133, 134, 137, 139, 141
directional antenna pattern 2
directional channel model 29, 99
directive gain 74
distributed antenna system (DAS) 71
diversity systems 143-154
Doppler power spectrum 1
DSSS characteristics 307
ducts - See Heating and Ventilation Ducts
dynamic control 52, 54

E
electromagnetic waves 62
elevation 99, 109
elevation delay power spectrum 99, 102-104, 106-108
envelope PDF 275-282
equalization 143-154
equalizer 155-166
ESRI 273

F
fading 315-326
fading channels 275-282
far field 72
FCC Part 15 309, 310
FHSS characteristics 307
Finite Alphabet (FA) Projections 27-29
FRAMES 123
frequency reuse efficiency 50, 51, 53, 58
Friis transmission formula 74

G
GIS 273

H
handoff overhead 50, 53, 55-57
hard handoff 50, 53
Heating and Ventilation Ducts 61-64, 66, 68-70
HVAC - See Heating and Ventilation Ducts
hybrid multiple access system 111-121

I
IEEE 802.11 310-312
in-building measurements 83, 84, 86
indoor antenna 71
indoor propagation 71
internal loss 73
intersymbol interference 123, 124
ISM band 61, 67
iterative least squares 27-29

K
Keenan-Motley 76

L
land mobile radio 315-326
land mobile radio cellular systems 143-154
layout 283-304
leaky coax 61, 62, 70, 71
load balancing 49, 53, 54
local area network 310-312
louvers 62, 64-66, 70
low tier 283-304

M
mobile communications 155-166
multibeam 49
multimode 68, 69
multipath 68
multipath propagation 275-282
multiple access interference 123, 124
multistage detection 123, 127
multiuser detection 111-121, 123, 127

N
near field 72
network optimization 52
neural network 129, 130, 136-138, 140-142, 247-258
nonuniform phases 275-282
number of paths 275-282

O
optimization 52
over-the-roof 99, 109

P
packet 315-326
passive reradiators 62, 64
Personal Communication Services (PCS) 129, 130, 135, 141, 142
power control 91, 315-326
power pattern 74, 76
power spectral density 1
power splitters 62, 64
PRMA 315-326
probe 64, 67, 70
pseudo-blind estimation 23, 24, 28, 29

R
Radiax 72
radio resource allocation 129, 130, 132-135, 137-141
radioport 283-304
Rayleigh fading 72

S
satellite mobile 307, 308
sculpted pattern 52, 59, 60
sector synthesis 52
self-organizing feature map 129, 130, 134-141
sheet metal 62
simulation, benefits of 271
simulation, procedures 272
smart antenna 259
soft handoff 50, 53, 55-58
space-time equalisation 23-25, 27
spaced-time correlation function 6
Spatial Division Multiple Access (SDMA) 23, 24
spread spectrum advantages 306
static control 52, 54

T
telephone 283-304
time division multiaccess 143-154
traffic control 111-121
traffic density distribution 51
traffic hot spot 54-56, 59
2-D Unitary ESPRIT 101
2-slope propagation model 92

U
uniform line source (ULS) 71
unlicensed low power 309, 310
urban environment 99, 100, 102, 109
urban propagation 99, 100, 102, 109

V
vector channel sounding 259
velocity factor 74

W
wall loss 77
waveguide 61, 62, 64, 65, 68, 69
wire screen 64, 69
wireless 283-304
wireless modems 62, 65