

Throughput Enhancement by Cross-layer Header Compression in WLANs

Hussain Syed Kazmi and Haroon Raja

2010 16th Asia-Pacific Conference on Communications (APCC)
978-1-4244-8129-3/10/$26.00 ©2010 IEEE

Abstract— A major limiting factor in increasing the

throughput of wireless networks has been the bottleneck of

prohibitive signaling overhead. A number of header

compression schemes have been proposed to solve this

particular problem. These, however, come with their set of

limitations and suffer from a loss of practicality in certain

cases. Our contribution in this paper is two-fold; firstly, we

propose a practical framework for cross-layer header

compression which improves vastly on existing data rates (by

more than 25% of raw throughput at high SNRs). This is

achieved by applying independent compression algorithms to

both MAC and PHY data units, thus ensuring better

throughput. Secondly, we analyze the performance of the proposed

scheme in slow fading as well as fast fading environments to

demonstrate robustness.

Index Terms— 802.11, source coding, header compression,

throughput enhancement.

I. INTRODUCTION

The most valuable resource in any wireless communication

system is the bandwidth available. Driven by users’

demands for ever-increasing data rates, many sophisticated

algorithms and optimization models have emerged over the

last few years for wireless links, especially WLANs. These

methods include popular approaches such as link adaptation,

beamforming and switching between spatial diversity and

multiplexing in MIMO systems. Most of these techniques

aim at maximizing the utilization of the physical link in

some way. This approach, while intuitively sound, suffers

from a number of technical shortcomings.

Firstly, it has been established in [1] that using the current

MAC/PHY header specifications places an upper bound of less than 100 Mbps on throughput, regardless of any optimization of the PHY layer Transmitter (Tx) characteristics.

Secondly, with the ever-increasing complexity of

optimization algorithms, the overhead necessary for

reasonable receiver functionality increases proportionally. A

very pertinent example of this second limitation is seen in

modern WLANs. Despite employing OFDM as the primary

multiplexing technique, the 802.11 family of standards [7]

(and especially the 802.11n version [6]) employs link adaptation on a per-packet basis instead of on a

per-subcarrier basis. This is the perfect limiting case where

“too much overhead” proves prohibitive for the optimization

algorithm.

Hussain Syed Kazmi is a researcher with the Image Processing Centre,

National University of Sciences and Technology, Pakistan.

Haroon Raja is a postgraduate student and researcher at the Core Communications and Networks Laboratory, School of Electrical Engineering and Computer Science, National University of Sciences and Technology, Pakistan.

To solve these two problems, a number of schemes have

been proposed both for the case of WLANs as well as other

wireless networks. These include, but are not limited to,

frame aggregation [1] and header compression algorithms

such as Robust Header Compression (ROHC) [2]. Whereas

aggregation is specialized for WLANs, ROHC is employed

for TCP/IP/UDP header compression. Furthermore,

aggregation based techniques do not cater for the PHY layer

header, and robustness is generally not guaranteed beyond certain aggregated packet lengths, so while raw throughput may be increasing, goodput may not follow a similar trend.

The approach we adopt in this work is two-pronged; it

provides an algorithm that caters for robust compression of

both MAC and PHY header information, thus enhancing the

overall goodput of WLANs. A Doppler shift dependent

entropy coded scheme is presented for efficient compression

of the PHY header while a simple stateful approach based

on mitigating redundant information suffices for robust compression of the MAC header. The simultaneous use of these

two compression techniques results in significant

performance gains.

The structure of the paper is as follows. Section 2 presents a

formal overview of the system model under consideration;

the numerous variables involved and their dependencies are

also highlighted. A brief review of existing techniques is

presented in Section 3 before the proposed algorithm is elaborated in Section 4. Section 5 analyzes and compares results of both the proposed and existing schemes prior to the conclusion in Section 6.

II. SYSTEM MODEL

The overall system consists of two distinct parts: the communication system used for transmission and the compressor proposed in this paper.

A. Communication Model

For our performance analysis, we assume a rate-adaptive 802.11n WLAN [6] using the MIMO-OFDM framework and employing diversity for better reliability (a 2x2 MIMO configuration in diversity mode is used). The rate

adaptation is carried out by means of a simple MCS

(modulation and coding scheme) vs. SNR lookup table, which was developed by plotting SNR vs. BER curves in MATLAB, as is done in previous works such as [3].
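As an illustration of such table-driven rate adaptation, the sketch below selects an MCS from an average-SNR lookup. The thresholds and MCS set are assumed for the example only and are not the values used in the paper's simulations.

```python
# Illustrative sketch of table-driven rate adaptation (the SNR thresholds
# below are hypothetical, not the values obtained from the BER curves).
MCS_TABLE = [
    # (minimum average SNR in dB, MCS index, modulation, code rate)
    (5.0,  0, "BPSK",  "1/2"),
    (8.0,  1, "QPSK",  "1/2"),
    (11.0, 2, "QPSK",  "3/4"),
    (15.0, 3, "16QAM", "1/2"),
    (19.0, 4, "16QAM", "3/4"),
    (23.0, 5, "64QAM", "2/3"),
    (26.0, 6, "64QAM", "3/4"),
    (28.0, 7, "64QAM", "5/6"),
]

def select_mcs(avg_snr_db: float) -> int:
    """Return the highest MCS whose SNR threshold the link satisfies."""
    chosen = 0
    for threshold, mcs, _mod, _rate in MCS_TABLE:
        if avg_snr_db >= threshold:
            chosen = mcs
    return chosen

print(select_mcs(21.3))  # -> 4 with the illustrative thresholds above
```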

The channel is assumed to be a slow fading Rayleigh

model. This is based on the typical workplace scenario

where users set up their workstations and work for a couple

of hours instead of wandering about randomly. Based on

this starting point, we define a few independent and



dependent variables in the system. The Doppler Spread (or

alternatively the coherence time of the channel) is one such

independent variable; the average SNR of the

communication link is another. The SNR at the receiving

node, packet length used by the transmitter, the channel

correlation between successive packets, the mode of the

transmitted packet and the type of compression scheme

being employed are all dependent variables on the other

hand. Doppler spread is strongly negatively correlated with the channel correlation between successive packets and, consequently, with the received SNR [4].

Figure: System Structure

B. Compressor Design

There are two distinct stages in the compressor:

1. MAC layer header compressor

The MAC layer compressor is essentially a context-based

stateful compression algorithm that is designed to remove

the redundancy in header information.

2. PHY layer header compressor

The PHY compression algorithm aims at exploiting the similarity of the PHY header between consecutive packets, based on a correlation measure derived from the Doppler spread and the channel condition. Broadly speaking, an entropy coding scheme is applied to the difference between corresponding PHY header fields of successive packets. Lempel-Ziv coding [9] and a number of its variants perform inefficiently here due to the relatively small length of the PHY header [8]. By employing adaptive Huffman coding with a periodically updated probability function of the repeated bits to model the correlation, we can arrive at a reliable probability estimate and the resulting code tree.

It has been shown in [5, 8] that for Huffman coding the difference between the average bit length per codeword obtained from estimated probabilities ($\bar{\hat{L}}$) and the optimal average bit length ($\bar{L}$) is approximately

$\Delta L = \bar{\hat{L}} - \bar{L} = -\frac{1}{\ln 2}\left(\sum_i e_i - \frac{1}{2}\sum_i \frac{e_i^2}{p_i}\right)$   (1)

where $e_i$ is the error in the $i$-th estimated probability $\hat{p}_i$ with reference to the true probability $p_i$ and is given by $e_i = \hat{p}_i - p_i$. $L_i = -\log_2 p_i$ and $\hat{L}_i = -\log_2 \hat{p}_i$ are the optimal and estimated codeword lengths, whereas $\bar{L}$ and $\bar{\hat{L}}$ are the average number of bits per codeword for the cases of true and estimated probabilities, given by

$\bar{L} = \sum_i p_i L_i, \qquad \bar{\hat{L}} = \sum_i p_i \hat{L}_i.$

The first term in (1) reduces to approximately zero for practical values of $e_i$ (the estimated probabilities must also sum to one), implying that the initial estimate of the probabilities is reasonable. In this case, the equation reduces to

$\Delta L \approx \frac{1}{2\ln 2}\sum_i \frac{e_i^2}{p_i}.$

By extending this result further using Lagrange multipliers and taking the mean squared error in the probability estimates as the variance σ², ΔL for the worst case can be shown to be the product of two variance terms (that of the estimation error and that of the corresponding source statistics), implying there is a loss of optimality caused by imperfect estimates [8]. For the best case, ΔL can also equal 0. In light of these conclusions, we have designed the compressor to employ adaptive Huffman coding. This mitigates the worst-case scenario by starting from an initial estimated standard entropy chart, which the algorithm updates after 10 packets at both communicating nodes to minimize the effect of false probability estimates.
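To make the penalty term concrete, the short sketch below evaluates the reduced expression for ΔL for an assumed true distribution and a slightly perturbed estimate; the numbers are purely illustrative.

```python
import math

# True symbol probabilities (illustrative) and a slightly wrong estimate.
p_true = [0.50, 0.25, 0.15, 0.10]
p_est  = [0.45, 0.28, 0.17, 0.10]

# Error in each estimated probability.
e = [pe - pt for pe, pt in zip(p_est, p_true)]

# Quadratic penalty term: extra average bits per codeword caused by
# designing the code for p_est instead of p_true.
delta_L = sum(ei**2 / pt for ei, pt in zip(e, p_true)) / (2 * math.log(2))
print(f"approx. penalty: {delta_L:.4f} bits per codeword")
```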

The possibility of a fast fading channel is catered for by estimating the Doppler spread at both ends from a measure of the channel coherence time. If the

Doppler Spread exceeds a certain value, PHY header

compression is not employed. For the time being, this

threshold has been set at 100 Hz based on heuristic

evidence. Further optimization is possible by making this

threshold adaptive as well.
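A minimal sketch of this gating rule, assuming the coherence time has already been estimated (the estimator itself is not shown), could look as follows.

```python
DOPPLER_THRESHOLD_HZ = 100.0  # static cut-off used in this work

def phy_compression_enabled(coherence_time_s: float) -> bool:
    """Disable PHY header compression when the estimated Doppler spread
    (approximated as 1 / coherence time) exceeds the threshold."""
    doppler_spread_hz = 1.0 / coherence_time_s
    return doppler_spread_hz <= DOPPLER_THRESHOLD_HZ

print(phy_compression_enabled(0.020))  # 50 Hz  -> True (compress)
print(phy_compression_enabled(0.004))  # 250 Hz -> False (skip PHY compression)
```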

III. A REVIEW OF EXISTING TECHNIQUE – ROHC

Roughly speaking, the ROHC [2] mechanism works as follows: on one side, the compressor processes the packets to remove redundancy using a context built from the information observed in past packet headers. On the other side, the decompressor builds its own context from the received packets and uses it to reconstruct the header of each incoming packet. ROHC acts on the upper layers, i.e., the network and transport layer headers.

The compressor and decompressor contexts have to remain synchronized; otherwise, decompression will fail and the decompressor will drop packets until it can rebuild its context, meaning that packets not corrupted during transmission are nevertheless dropped because their headers cannot be reconstructed. The decompressor rebuilds its context through a repair mechanism; if the context cannot be repaired, it notifies the compressor through feedback messages.
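The sketch below is not ROHC's actual packet format or state machine; it is only a generic illustration, under assumed data structures, of the context-synchronization and feedback idea described above.

```python
# Generic illustration of context-based decompression with feedback
# (hypothetical structures, not ROHC's real profiles or packet formats).
class Decompressor:
    def __init__(self):
        self.context = None   # header fields learned from past packets
        self.feedback = []    # negative acknowledgements for the compressor

    def receive(self, packet):
        if "full_header" in packet:          # uncompressed packet refreshes context
            self.context = dict(packet["full_header"])
            return self.context
        if self.context is None:             # compressed packet but no context yet
            self.feedback.append("NACK: context missing, send full header")
            return None                      # packet must be dropped
        header = dict(self.context)          # reconstruct from context ...
        header.update(packet["delta"])       # ... plus the fields that changed
        self.context = header
        return header

d = Decompressor()
d.receive({"delta": {"seq": 7}})                                 # dropped, NACK queued
d.receive({"full_header": {"src": "A", "dst": "B", "seq": 6}})   # context established
print(d.receive({"delta": {"seq": 7}}))                          # reconstructed header
print(d.feedback)
```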

IV. PROPOSED ALGORITHM FOR HEADER COMPRESSION

Header compression can be defined as a technique that

optimizes bit allocation by exploiting the redundancy in the

header information both within the same packet header and

also, more importantly, between consecutive packets

belonging to the same packet stream. This technique is

intended to deal with the bandwidth constraints by reducing

the header size. Our proposed compression scheme is two-fold, attempting to remove the redundancy in both the PHY and the MAC headers. These two techniques are elaborated in the following sections:



A. MAC layer header compression

The 802.11n standard defines the MAC frame format as:

Figure: MAC Protocol Data Unit (MPDU) frame format [6]

1) For Infrastructure based networks

For the case of infrastructure-based WLANs, the

MAC header compression becomes a trivial problem since a

major portion of the redundancy (MAC headers) can be

removed from the packet by using a scheme similar to the

Network Address Translation (NAT) at the Access Point

(AP) or the router. The AP assigns a short identifier instead

of the 4 MAC addresses when the initial connection is

established between the station (STA) and the AP. The QoS

(Quality of Service) control field can also be similarly

compressed after the initial handshake since the QoS

requirements of a link remain the same over the length of a

session. So for this purpose, after the initial handshake the

AP assigns the user 4 network identifiers which must be

compliant with:

(Translated identifier)_x = 2n;   n ∈ N,   x ∈ {1, 2, 3, 4}

Where n belongs to N, which is the set of all natural numbers smaller than 24; this ensures that the translated

identifiers combine to form some multiple of the basic unit,

the byte, while also guaranteeing that the translating

identifier does not exceed the original MAC address

allocations (the probability of this happening in a practical

WLAN approaches zero). The index x takes values 1 through 4, which refer to the source, the destination, the AP at the source end and the AP at the destination end, respectively.

2) For ad-hoc networks

It is possible for the STAs in an ad-hoc network to

establish the short identifier for themselves at the initial

handshake based on a somewhat similar formula i.e.

(Translated identifier)_x = 4n;   n ∈ ℕ, n < 12,   x ∈ {1, 2}

In this case, the index x takes values 1 and 2, which refer to the source and destination only, since there is no infrastructure in place and hence no need for AP addresses. The change of the multiplier from 2 to 4 keeps the identifiers compliant with the byte specification. The QoS specifications can be dealt with in a

similar way.
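For illustration, a sketch of the NAT-like mapping at association time is given below; the data structures and the sequentially assigned identifiers are assumptions for the example and do not follow the exact identifier formula above.

```python
# Sketch of assigning short identifiers in place of 48-bit MAC addresses
# at association time (infrastructure case); illustrative only.
class ApAddressTranslator:
    def __init__(self):
        self.next_id = 0
        self.mac_to_id = {}

    def register(self, mac_address: str) -> int:
        """Assign a short identifier to a station when it associates."""
        if mac_address not in self.mac_to_id:
            self.mac_to_id[mac_address] = self.next_id
            self.next_id += 1
        return self.mac_to_id[mac_address]

    def compress_header(self, src_mac: str, dst_mac: str) -> tuple:
        """Replace the MAC addresses in a frame header with short identifiers."""
        return (self.register(src_mac), self.register(dst_mac))

ap = ApAddressTranslator()
print(ap.compress_header("aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"))  # (0, 1)
```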

B. PHY layer header compression

The PHY compression algorithm is the same, and equally robust, for both ad-hoc and infrastructure-based wireless communication links. The technique consists of the following general steps:

1. Transmit the initial 10 packets without any compression of the PHY header. Assuming that the link is bidirectional and that the receiver transmits an ACK based on the CRC check of the received packet, both ends estimate the channel's SNR and

the rate of its variation. From the variations of the

channel, the Doppler spread can be estimated.

a. The maximum Doppler spread can be

calculated as a simple reciprocal function of

the coherence time of the channel, which is estimated from the received signals. Details on the calculation of the coherence time are given in [4]. A simplified form of this expression can be given as:

Doppler spread, D_s = 1/T_c

where T_c is defined as the channel coherence time.

b. After the transmission and reception of every PHY header, the difference vector D is computed at the transmitter and the receiver, respectively, and is defined as:

D = (PHY_Hdr(1:32))_n ⊖ (PHY_Hdr(1:32))_{n-1}

where (PHY_Hdr)_n is defined as the PHY layer header of the n-th packet, (PHY_Hdr)_{n-1} is that of the previous packet, and ⊖ is the bitwise subtraction operator. The difference vector D is thus the bitwise difference of the first 32 bits of two adjacent PHY headers. The D vector therefore has dimensions of 1x32 (excluding the last 16 bits of the header, which are the CRC and tail bits of each physical header). If the Doppler spread does not limit the

functionality of the communication link, then the information obtained in step 1 can be used to determine the transmit parameters (such as MCS, spatial streams, etc.). If the Doppler spread is indeed above a certain threshold and is limiting the robustness of the algorithm, then the compression algorithm is not applied. The D vector is, however, computed continually to update the statistics. D is assumed to be always positive, since the receiver can verify the CRC of the PHY header to determine the original bit sequence in ambiguous cases. The D vector is divided into 4 separate fields:

a. The first 7 bits, which are the MCS field.

b. The next bit is the bandwidth indicator bit;

this bit will not be compressed.

c. The next 16 bits are the complete packet

length.

d. The next 10 bits include support for

information such as switching between

MIMO modes, aggregation, sounding and

coding techniques etc.

2. The fields of the D vector are used to update the adaptive entropy coding function E(D_x) (where x ∈ {1, 2, 3} indexes the compressed fields of the difference vector) at both ends, such that for every occurrence of a particular value of a field D_x, weight is added to the corresponding branch of the entropy coding tree for that field. Once the initial 10 packets have been transmitted, the updated tree is used to code the three fields of the D vector individually for subsequent transmissions and receptions, such that

Transmitted PHY_Hdr = E(D_x),   x ∈ {1, 2, 3}.

The function E(D_x) continues to update its branch weights using the fields of the D vector.
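A minimal sketch of this update loop is given below (Python, for illustration). The bit positions used to split D are assumptions for the example, frequency counts stand in for the adaptive Huffman branch weights, and XOR stands in for the bitwise subtraction operator.

```python
from collections import Counter

def difference_vector(hdr_now: int, hdr_prev: int) -> int:
    # Bitwise difference of the first 32 header bits of two consecutive
    # packets (XOR is used purely for illustration; the paper's bitwise
    # subtraction resolves ambiguities via the header CRC).
    return (hdr_now ^ hdr_prev) & 0xFFFFFFFF

def split_fields(d: int) -> dict:
    # Split D into fields following the description above (7-bit MCS,
    # 1 bandwidth bit left uncompressed, 16-bit length, remaining bits);
    # the exact bit positions here are assumptions for the example.
    return {
        "mcs":    (d >> 25) & 0x7F,
        "bw":     (d >> 24) & 0x01,
        "length": (d >> 8) & 0xFFFF,
        "misc":   d & 0xFF,
    }

# Frequency counts stand in for the branch weights of the adaptive
# entropy coding tree E(D_x) maintained for each compressed field.
weights = {"mcs": Counter(), "length": Counter(), "misc": Counter()}

def update_weights(d: int) -> None:
    fields = split_fields(d)
    for name in weights:
        weights[name][fields[name]] += 1

# Example: two consecutive headers that differ only in the length field.
update_weights(difference_vector(0x12003400, 0x12003300))
print(weights["length"].most_common(1))  # [(7, 1)] with these inputs
```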



V. PERFORMANCE ANALYSIS AND COMPARISON

A. Logical Intuition behind gain by PHY header

compression

The standard states that MCS (modulation and coding scheme) indices 0 through 15 are mandatory in 20 MHz with an 800 ns guard interval at an access point (AP), while MCS 0 through 7 are mandatory in 20 MHz with an 800 ns guard interval at all STAs. All other MCSs and modes are optional, specifically including transmit (Tx) and receive (Rx) support of the 400 ns guard interval, operation in 40 MHz, and support of MCSs with indices 16 through 76. Most systems employ only the mandatory MCS 0 through 7, which can be coded using 3 bits; the standard, however, allocates 7 bits for the MCS field. This tightening up of wasted bits might seem trivial in the grander scheme of things, but with optimal allocation of bits using the proposed entropy coding scheme, significant performance gains have been observed. It stands to reason that for very high data rate WLANs, each and every bit counts.
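As a brief illustration (the probabilities below are assumed for the sake of example, not measured), suppose the MCS-related field takes one of the eight mandatory values with probabilities p = (0.5, 0.2, 0.1, 0.05, 0.05, 0.04, 0.03, 0.03). The entropy then bounds the average coded length at

$H(p) = -\sum_i p_i \log_2 p_i \approx 2.2 \text{ bits},$

compared with the 7 bits the standard reserves for the MCS field, which is exactly the kind of saving the proposed entropy coder targets.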

1) Theoretical Illustration of gain by compressing MAC

header

a) Best case scenario: Infrastructure based WLAN

with almost static channel

1. Original header bytes = approx. 42 (36 MAC + 6

PHY), compressed header bytes = 18 (13 MAC + 3

PHY), header size reduction = approx. 60%

2. Payload size in bytes = 300, total packet size = 342 bytes, after compression total size = 318 bytes, so overall sustained improvement = at least 7%.

b) Worst case scenario: ad-hoc WLAN with fast

fading channel

1. Original header bytes = approx. 30 (24 MAC + 6

PHY), compressed header bytes = 18, header size

reduction = 40%

2. Payload size in bytes = 1000, total packet size =

1030 bytes, after compression total size = 1018

bytes, so overall sustained improvement = at least

1%.

B. Simulation Results

The results in Figure 1 show that a significant throughput gain is achieved by employing the proposed coding techniques when compared with a state-of-the-art packet-based LA algorithm without compression. Indeed, throughput gains in the region of 25% of the original raw throughput are possible at high SNRs (for packet lengths of 300 bytes) under the conditions of a slow fading channel with a Doppler spread less than our prescribed threshold (100 Hz). The algorithm, however, performs admirably even in high Doppler spread environments by still providing considerable gains. This is due to the fact that the transmitter

refrains from compressing the PHY header when the channel is fast fading, i.e., when the channel correlation between

successive packets is weak. The MAC layer coding is still

performed since it only removes redundant information that

depends on the session and not the channel. This results in

an overall increase in the data rates (and intuitively also a somewhat reduced packet error rate, since the total number of OFDM symbols transmitted has decreased).

An important fact to glean from Figure 1 is that the gains only start appearing when the link adaptation algorithm shifts to higher modulation schemes such as 16QAM and 64QAM.

Figure 1: Throughput comparison (normalized throughput vs. SNR in dB) with compression at Doppler spreads of 50 Hz and 250 Hz, and without compression (packet-based LA)

This is due to the fact that the compressor is essentially

reducing the number of OFDM symbols the Tx has to

transmit over the wireless medium. As seen from Figure 2, the lower modulation schemes use so many OFDM symbols for the transmission of data that a reduction by a small fraction causes no gain in throughput. However, at the higher end of the modulation spectrum, the same reduction results in very significant gains.

Figure 2: Comparison of No. of OFDM symbols required to transmit 2.5Mb

of user data

For our analysis purposes, the last figure we present is a

comparison of the fraction of overhead incurred in the three different cases of a) no compression applied, b) compression applied in a slow fading channel, and c) compression applied in a fast fading channel. It is evident that there is a very significant decrease in the signaling overhead incurred in the system without sacrificing data rates, which is what we had originally set out to do. In addition, the abscissa, being a measure of the data rates, illustrates another important relationship. The data rates are plotted according to the results obtained from the earlier calculations over a range of 0 to 30 dB and tend to cluster rather tightly at lower data rates. This is due to the fact that data rates start increasing dramatically only after a certain SNR threshold is reached.



Figure 3: Comparison of fraction of header overhead

VI. CONCLUSION AND FUTURE EXTENSION

To conclude, the proposed algorithm has been shown to significantly improve throughput by compressing signaling overhead, while remaining highly robust through a static cut-off threshold for PHY layer compression. Future extensions of this work include generalizing the proposed framework to multiuser systems as well as incorporating a dynamic threshold for PHY compression by employing a history-based learning algorithm.

REFERENCES

[1] Youngsoo Kim, Sunghyun Choi, Kyunghun Jang, Hyosun Hwang, “Throughput Enhancement of IEEE 802.11 WLAN via Frame Aggregation”, in Proc. of IEEE Vehicular Technology Conference (VTC)-Fall, pp. 3030-3034, Sept. 2004.

[2] C. Bormann, Ed. 2001. RObust Header Compression (ROHC):

Framework and four profiles: RTP, UDP, ESP, and

uncompressed. RFC 3095. IETF Network Working Group.

[3] Yaser Pourmohammadi Fallah, Panos Nasiopoulos and Hussein

Alnuweiri, “Efficient Transmission of H.264 Video over

Multirate IEEE 802.11e WLANs”, EURASIP Journal on Wireless Communications and Networking, Volume 2008.

[4] A. Goldsmith, Wireless Communications. Cambridge, UK:

Cambridge University Press, 2005.

[5] D. A. Huffman, “A Method for the Construction of Minimum-Redundancy Codes”, Proceedings of the IRE, Vol. 40, pp. 1098-1101, Sept. 1952.
[6] Local and metropolitan area networks requirements part 11: Wireless LAN medium access control (MAC) and physical layer (PHY), IEEE Standards Association, Feb. 2007.
[7] IEEE, Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications, IEEE Std 802.11-1999, Aug. 1999.
[8] Richard B. Wells, Applied Coding and Information Theory for Engineers, Pearson Prentice Hall.
[9] J. Ziv and A. Lempel, “Compression of individual sequences via variable-rate coding”, IEEE Transactions on Information Theory, 24:530-536, Sept. 1978.
