
TRAFFIC CONTROL IN TCP/IP NETWORKS

By

SHUSHAN WEN

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2006


Copyright 2006

by

Shushan Wen


To my family.


ACKNOWLEDGMENTS

First and foremost, I would like to express my sincere gratitude to my advisor, Prof.

Yuguang Fang, for his invaluable advice and encouragement over the past several years

while I have been with the Wireless Networks Laboratory (WINET). My work would not have

been completed if I had not had his guidance and support.

I would like to acknowledge my other committee members, Prof. Shigang Chen,

Prof. Tan Wong, and Prof. Janise McNair, for serving on my supervisory committee and for

their helpful suggestions and constructive criticism. My thanks also go to Prof. John Shea,

Prof. Dapeng Wu and Prof. Jianbo Gao, for their expert advice.

I would like to extend my thanks to my colleagues in WINET for creating a friendly

environment and for offering great help during my research. They are Dr. Younggoo Kwon,

Dr. Wenjing Lou, Dr. Wenchao Ma, Dr. Wei Liu, Dr. Xiang Chen, Dr. Byung-Seo Kim,

Dr. Xuejun Tian, Dr. Hongqiang Zhai, Dr. Yanchao Zhang, Dr. Jianfeng Wang, Yu Zheng,

Xiaoxia Huang, Yun Zhou, Chi Zhang, Frank Goergen, Pan Li, Rongsheng Huang, and

many others who have offered help with this work.

I owe a special debt of gratitude to my family. Without their selfless care, constant

support and unwavering trust, I could never have achieved what I have.

I would also like to thank the U.S. Office of Naval Research and the U.S. National

Science Foundation for providing the grants.


TABLE OF CONTENTS

ACKNOWLEDGMENTS

LIST OF TABLES

LIST OF FIGURES

ABSTRACT

CHAPTER

1 INTRODUCTION

2 DIFFERENTIATED BANDWIDTH ALLOCATION AND TCP PROTECTION IN CORE NETWORKS
   2.1 Introduction
      2.1.1 TCP Protection
      2.1.2 Bandwidth Differentiation
   2.2 The CHOKeW Algorithm
   2.3 Model
      2.3.1 Some Useful Probabilities
      2.3.2 Steady-State Features of CHOKeW
      2.3.3 Fairness
      2.3.4 Bandwidth Differentiation
   2.4 Performance Evaluation
      2.4.1 Two Priority Levels with the Same Number of Flows
      2.4.2 Two Priority Levels with Different Number of Flows
      2.4.3 Three or More Priority Levels
      2.4.4 TCP Protection
      2.4.5 Fairness
      2.4.6 CHOKeW versus CHOKeW-RED
      2.4.7 CHOKeW versus CHOKeW-avg
      2.4.8 TCP Reno in CHOKeW
   2.5 Implementation Considerations
      2.5.1 Buffer for Flow IDs
      2.5.2 Parallelizing the Drawing Process
   2.6 Conclusion

3 CONTAX: AN ADMISSION CONTROL AND PRICING SCHEME FOR CHOKEW
   3.1 Introduction
   3.2 The ConTax Scheme
      3.2.1 The ConTax-CHOKeW Framework
      3.2.2 The Pricing Model of ConTax
      3.2.3 The Demand Model of Users
   3.3 Simulations
      3.3.1 Two Priority Classes
      3.3.2 Three Priority Classes
      3.3.3 Higher Arriving Rate
   3.4 Conclusion

4 A GROUP-BASED PRICING AND ADMISSION CONTROL STRATEGY FOR WIRELESS MESH NETWORKS
   4.1 Introduction
   4.2 Groups in WMNs
   4.3 The Pricing Model for APRIL
   4.4 The APRIL Algorithm
      4.4.1 Competition Type for APRIL
      4.4.2 Maximum Benefit Principle for Users
      4.4.3 Nonnegative Benefit Principle for Providers
      4.4.4 Algorithm Operations
   4.5 Performance Evaluation
   4.6 Conclusion

5 FUTURE WORK

REFERENCES

BIOGRAPHICAL SKETCH


LIST OF TABLES

2–1 The State of CHOKeW vs. the Range of L


LIST OF FIGURES

1–1 Buffer management and scheduling modules in a router
2–1 CHOKeW algorithm
2–2 Algorithm of updating p0
2–3 Network topology
2–4 RCF of RIO and CHOKeW under a scenario of two priority levels
2–5 Aggregate TCP goodput vs. the number of TCP flows under a scenario of two priority levels
2–6 Average per-flow TCP goodput vs. w(2)/w(1) when 25 flows are assigned w(1) = 1 and 75 flows w(2)
2–7 Aggregate goodput vs. the number of TCP flows under a scenario of three priority levels
2–8 Aggregate goodput vs. the number of TCP flows under a scenario of four priority levels
2–9 Aggregate goodput vs. the number of UDP flows under a scenario to investigate TCP protection
2–10 Basic drawing factor p0 vs. the number of UDP flows under a scenario to investigate TCP protection
2–11 Fairness index vs. the number of flows for CHOKeW, RED and BLUE
2–12 Link utilization of CHOKeW and CHOKeW-RED
2–13 Average queue length of CHOKeW and CHOKeW-RED
2–14 Aggregate TCP goodput of CHOKeW and CHOKeW-RED
2–15 Average queue length of CHOKeW and CHOKeW-avg
2–16 Aggregate TCP goodput of CHOKeW and CHOKeW-avg
2–17 Link utilization, aggregate goodput (in Mb/s), and the ratio of minimum goodput to average goodput of TCP Reno
2–18 Extended matched drop algorithm with ID buffer
3–1 ConTax-CHOKeW framework. ConTax is in edge routers, while CHOKeW is in core routers.
3–2 ConTax algorithm
3–3 Supply-demand relationship when ConTax is used. The left graph is price-supply curves, and the right graph price-demand curves for each class.
3–4 Dynamics of network load (i.e., ∑_{i=1}^{I} i·n_i) in the case of two priority classes
3–5 Number of users that are admitted into the network in the case of two priority classes
3–6 Demand of users in the case of two priority classes
3–7 Aggregate price in the case of two priority classes
3–8 Dynamics of network load in the case of three priority classes
3–9 Number of users that are admitted into the network in the case of three priority classes
3–10 Demand of users in the case of three priority classes
3–11 Aggregate price in the case of three priority classes
3–12 Dynamics of network load when arriving rate λ = 6 users/min
3–13 Number of users that are admitted into the network when arriving rate λ = 6 users/min
3–14 Demand of users when arriving rate λ = 6 users/min
3–15 Aggregate price when arriving rate λ = 6 users/min
4–1 Tree topology formed by groups in a WMN
4–2 Prices vs. bandwidth of BellSouth DSL service plans
4–3 Utility u_i and price p_i vs. resources x_i
4–4 Three possible shapes of ∆b_i^(k)(x_{i+1}^(k))
4–5 Determining X_{i+1}^(k) when x_i − ∑_{j=1}^{k−1} x_{i+1}^(j) − x_i > 0
4–6 Simulation network for APRIL
4–7 Available bandwidth, bandwidth allocation, and benefit


Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

TRAFFIC CONTROL IN TCP/IP NETWORKS

By

Shushan Wen

December 2006

Chair: Yuguang “Michael” Fang
Major Department: Electrical and Computer Engineering

TCP/IP networks have been widely used for wired and wireless communications and

will continue to be used in the foreseeable future. In our research, we study the traffic control

schemes for wired and wireless networks using TCP/IP protocol suite. Traffic control is

tightly related to Quality of Service (QoS), for which Differentiated Services (DiffServ)

architecture is employed in our research.

For core networks, we present a stateless Active Queue Management (AQM) scheme

called CHOKeW. With respect to the number of flows being supported, both the memory-

requirement complexity and the per-packet-processing complexity for CHOKeW are O(1),

which is a significant improvement compared with conventional per-flow schemes. CHOKeW

is able to conduct bandwidth differentiation and TCP protection.

We combine pricing, admission control, buffer management and scheduling in Diff-

Serv networks through the ConTax-CHOKeW framework. ConTax is a distributed admission

controller that works in edge networks. ConTax uses the sum of priority values for all ad-

mitted users to measure the network load. By charging a higher price when the network

load is heavier, ConTax is capable of controlling the network load efficiently. In addition,


network providers can gain more profit, and users have greater flexibility, which in turn meets

the specific performance requirements of their applications.

In Wireless Mesh Networks (WMNs), in order to minimize the number of parties that

are involved in the admission control, we categorize WMN devices into groups. In our ad-

mission control strategy, Admission control with PRIcing Leverage (APRIL), a group can

be a resource user and a resource provider simultaneously. The maximum benefit principle

and the nonnegative benefit principle are applied to users and providers, respectively. The

resource sharing is transparent to other groups, which gives our scheme good scalability.

APRIL also increases the benefit of each involved group as well as the total benefit for

the whole system when more groups are admitted into the network, which becomes an

incentive for expanding WMNs.


CHAPTER 1INTRODUCTION

Transmission Control Protocol/Internet Protocol (TCP/IP) networks have been widely

used for wired and wireless communications, owing to their simplicity, scalability, and

robustness. Moreover, in terms of protecting existing investment, it is also

reasonable to expect that the TCP/IP protocol suite will continue to be used in the foresee-

able future. Instead of designing an alternative network infrastructure for new applications,

merging those applications into current TCP/IP networks is more likely to be a practical

strategy of research and development for both academia and industry. In this work, we

study the traffic control schemes for wired and wireless networks based on TCP/IP infras-

tructure.

Traffic control is tightly related to Quality of Service (QoS). The effectiveness of a

traffic control scheme needs to be investigated with a certain QoS architecture. In recent

years, many QoS models, such as Service Marking [8], Label Switching [10, 14], Inte-

grated Services/RSVP [27, 28], and Differentiated Services (DiffServ) [25, 109] have been

proposed. Among them, DiffServ is able to provide a variety of services for IP packets

based on their per-hop behaviors (PHBs) [25]. Its ability to balance the workload between

the devices inside the network and those on the edge gives DiffServ great

scalability. We select DiffServ as our QoS architecture model.

In general, routers in DiffServ networks are divided into two categories: edge (bound-

ary) routers and core (interior) routers [128]. Operations that need to maintain per-flow states,

such as packet classification and priority marking, are implemented at edge routers. In the

core networks, packet forwarding speed has to match packet arrival rate in the long run;

otherwise the service quality would deteriorate due to packet drops. Therefore, the design


of core routers must trade the ability to perform per-flow control for the low complexity that ensures

the forwarding speed.

Since routers buffer those packets that cannot be forwarded immediately, and bottle-

neck routers are the devices that are prone to network congestion, buffer management is

one of the crucial technologies for traffic control.

Buffer management schemes usually use packet dropping or packet marking to control

the traffic. For the best compatibility with TCP, we only discuss packet-dropping based

buffer management in this work. Buffer management mainly focuses on when, where, and

how to drop packets from the buffer. Traditional buffer management drops packets only

when the buffer is full (i.e., tail drop), which causes problems such as low link utilization

and global synchronization; i.e., all TCP flows decrease and increase the sending rates at

the same time. If packet drops happen before the buffer is full, the buffer management is

also called Active Queue Management (AQM).

Random Early Detection (RED) [60] was one of the pioneering works in AQM. It pre-

defines two queue length thresholds, minth and maxth. When the current queue length is

smaller than minth, the network is not regarded as congested, and no packet drop happens.

When the current queue length is larger than minth but smaller than maxth, the network

is regarded as congested, and arriving packets are dropped according to a dropping prob-

ability between 0 and pmax, where pmax (pmax < 1) is a predefined parameter used to

adjust the dropping probability. The larger the queue length is, the higher the dropping

probability will be. If the current queue length is larger than maxth, all arriving pack-

ets are dropped due to the heavy network congestion.1 With fairly low computation costs,

RED can avoid global synchronization and maintain better TCP goodput than traditional

tail drop schemes.

1 Some modifications may set the threshold for dropping all arriving packets to other values, such as 2maxth [64], but the basic dropping strategy is still the same.
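The dropping rule described above can be summarized in a short sketch. This is only an illustrative simplification of RED (it uses the instantaneous queue length rather than the EWMA average queue length, and it omits the count-based probability adjustment of the original paper); the structure and parameter names are ours.

    #include <stdlib.h>

    /* Simplified RED dropping decision (illustrative only): no EWMA queue
     * averaging and no count-based adjustment, just the linear ramp between
     * the two thresholds described above. */
    typedef struct {
        double min_th;   /* below this queue length, no drops                  */
        double max_th;   /* above this queue length, every arrival is dropped  */
        double p_max;    /* dropping probability as the queue length -> max_th */
    } red_params;

    /* Return 1 if the arriving packet should be dropped, 0 otherwise. */
    int red_should_drop(const red_params *rp, double queue_len)
    {
        if (queue_len < rp->min_th)
            return 0;                    /* not congested: accept the packet  */
        if (queue_len > rp->max_th)
            return 1;                    /* heavy congestion: drop the packet */

        /* Linear ramp from 0 at min_th to p_max at max_th. */
        double p = rp->p_max * (queue_len - rp->min_th) / (rp->max_th - rp->min_th);
        return ((double)rand() / RAND_MAX) < p;
    }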


Figure 1–1: Buffer management and scheduling modules in a router

Many AQM algorithms were proposed afterwards. Generally, they were aimed at

improving implementation efficiency (e.g., Random Exponential Marking (REM) [12]),

increasing network stability (e.g., BLUE [59] and REM), protecting standard TCP flows

(e.g., RED with Preferential Dropping (RED-PD) [100] and Flow-Valve [42]), or support-

ing multiple priority classes (e.g., RED with In/Out bit (RIO) [47]).

Here we need to clarify the functional difference between buffer management and

scheduling. The relationship of buffer management and scheduling is illustrated in Fig.

1–1. In a router, the total buffer capacity is considered a buffer pool. Logically, the buffer

pool can be shared by multiple queues. When a packet arrives, buffer management is

the module to decide whether to let the arriving packet enter a queue, and which queue

should be used to hold this packet if multiple queues are available. Buffer management

also controls the queue lengths as well as the buffer occupancy by discarding packets from

the buffer. Thus, a buffer management scheme determines when and from where a packet

should be dropped. On the other hand, scheduling is the module to decide when to forward

a packet and which packet should be forwarded if there is more than one packet in the

buffer. In other words, a scheduler controls the packet forwarding order. Usually, when a

buffer pool consists of multiple logical queues, the scheduling algorithm selects a packet

at one of the heads of the queues to forward.


The simplest scheduling scheme is First Come First Served (FCFS). If all arriving

packets enter the same queue from the tail and packets are sent out one by one from the

head of the queue, FCFS is the scheduler. Generalized Processor Sharing (GPS) [113,114]

was considered an ideal scheduling discipline which is designed to let the flows share the

bandwidth in proportion to the weights. However, the fluid model of GPS is not amenable

to a practical implementation. One of the popular classes of implementation is schedul-

ing schemes with round robin features, including Round robin [106], Deficit Round Robin

(DRR) [126], Stratified Round Robin [120], Class-Based Queueing (CBQ) [61], etc. They

are able to achieve per-packet-processing complexity O(1), but they tend to produce bursty output and have

memory-requirement complexity O(N ). Another popular class is timestamp based sched-

ulers, such as Weighted Fair Queueing (WFQ) [52], WF2Q [23] and Self-Clocked Fair

Queueing [71]. They have performance that is closer to the ideal GPS model, but the per-

packet-processing complexity is usually larger than O(log N ) and memory-requirement

complexity is O(N ).
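As an illustration of the O(1) per-packet cost of the round-robin class, the following is a minimal sketch of one Deficit Round Robin service turn; the structures and names are our own simplification and are not taken from the DRR paper [126].

    #include <stddef.h>

    /* One DRR service turn (simplified sketch): the queue is granted its
     * quantum of credit, then head-of-line packets are forwarded while the
     * deficit counter covers their sizes. */
    struct packet { size_t len; struct packet *next; };

    struct drr_queue {
        struct packet *head;   /* FIFO of backlogged packets              */
        size_t quantum;        /* credit (bytes) granted once per round   */
        size_t deficit;        /* unused credit carried to the next round */
    };

    void drr_serve_turn(struct drr_queue *q, void (*forward)(struct packet *))
    {
        if (q->head == NULL)
            return;                          /* empty queues are simply skipped      */

        q->deficit += q->quantum;            /* constant work at the start of a turn */
        while (q->head != NULL && q->head->len <= q->deficit) {
            struct packet *p = q->head;      /* forward the head-of-line packet      */
            q->head = p->next;
            q->deficit -= p->len;
            forward(p);                      /* constant work per forwarded packet   */
        }
        if (q->head == NULL)
            q->deficit = 0;                  /* a queue that empties loses its leftover credit */
    }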

Recent research on buffer management and scheduling has also been extended to the

following areas. In order to control the bandwidth and CPU resource consumption of in-

network processing, Shin, Chong and Rhee proposed Dual-Resource Queue (DRQ) for

approximating proportional fairness [125]. With respect to optical networks and photonic

packet switches, Harai and Murata proposed a scheme as an expansion of a simple sequen-

tial scheduling [75]. Their scheme uses optical fiber delay lines to construct optical buffers

and the supported data rate is improved due to a parallel and pipeline processing architec-

ture. In the wireless networking area, Alcaraz, Cerdán and García-Haro presented an AQM

algorithm for Radio Link Control (RLC) in order to improve the performance of TCP con-

nections over the Third Generation Partnership Project (3GPP) radio access network [6];

Chen and Yang developed a buffer management scheme for congestion avoidance in sen-

sor networks [41]; Chou and Shin used Last Come First Drop (LCFD) buffer management

policy and post-handoff acknowledgement suppression to enhance performance of smooth


handoff in wireless mobile networks [43]. By contrast, our design of buffer management

focuses on the traffic control in DiffServ core networks. In addition, when we evaluate an

AQM scheme, the performance has to be investigated with TCP, taking into account the fact

that the dynamics of TCP have significant interactions with the dropping scheme. There-

fore, bandwidth differentiation and TCP protection are two goals that we want to achieve.

When we evaluate the performance of buffer management, we also need to consider

the combination of the buffer management and a scheduler. Conventionally, RED, BLUE,

RIO, etc., work with FCFS in order to keep the simplicity of implementation and the low

complexity of operations. On the other hand, some AQM schemes, such as Longest Queue

Dropping (LQD) [131], work with WFQ, so that they are able to obtain good isolation

among flows.

As TCP uses packet drops as a network congestion signal, some research [53, 129, 132,

143] considered the loss ratio the measure of resource differentiation to support priority

service. These schemes simply assign a lower dropping rate to arriving packets from a

flow at a higher priority level. When they are used in core routers, these schemes are faced with

the dilemma of choosing between per-flow control and class-based control (i.e., all flows

in the same priority class are treated as a single super-flow). If per-flow control is selected,

the memory-requirement complexity will become O(N ), which is unacceptable for a router

working in high-speed networks with a myriad of flows. If a class-based strategy is used, all

flows in the same class—no matter whether they are TCP flows or non-TCP-friendly flows—will have

the same loss ratio, and hence this type of scheme cannot protect TCP flows.

During the evolution of the Internet, the variety of applications has brought

greatly heterogeneous requirements to the network. The goal of the network design is not

to provide perfect QoS to all users, but rather to give the different categories of applications

a level of service commensurate with the particular needs [122]. In order to let the perfor-

mance of our buffer management scheme meet the speed requirement of core routers, we

combine the buffer management with an FCFS scheduler.


With regard to incorporating the priority services of DiffServ into TCP, two problems

must be solved: TCP protection and bandwidth differentiation. We design a buffer manage-

ment scheme, CHOKeW, to solve these two problems together. In previous work, schemes

either focus only on TCP protection [42, 112] or bandwidth differentiation [23, 47, 52]. To

the best of our knowledge, no other scheme prior to CHOKeW has reached both goals. We

discuss CHOKeW in detail in Chapter 2.

In addition to using buffer management and scheduling techniques to conduct resource

allocation in core networks, a practical DiffServ solution also needs to include pricing and

admission control. Pricing is an effective way to assign priority, especially in a gigantic

open environment such as the Internet. As everybody wants to acquire the highest priority

if the costs are the same, it is hard to imagine that a practically prioritized network has no

pricing policies. When pricing is applied, users who are willing to pay a higher price are

able to get better service.

It is known that in a classical economic system, consumers select the amount of re-

source consumption that results in the maximum benefit for themselves [50, 117]. When

users have different utility functions (which correspond to different network applications

that are being used by the users), the optimal resource consumption also differs from user

to user. In other words, a good pricing scheme can let the users adjust

their network resource consumption based on their own utility functions. In this way, the

limited network resources can be shared among those users based on their own choices.
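As a minimal illustration (the utility form and the numbers are ours, chosen only for this example, not the models used later in this work): suppose a user has utility u(x) = a ln(1 + x) for consuming x units of bandwidth and pays a price p per unit, so the total payment is p·x. The benefit b(x) = u(x) − p·x is maximized where u′(x) = p, i.e., a/(1 + x) = p, giving x* = a/p − 1 (or 0 if a ≤ p). A user with a = 4 facing unit price p = 1 consumes x* = 3, while a user with a = 2 consumes only x* = 1; raising the price to p = 2 reduces their consumption to 1 and 0, respectively. This is precisely the mechanism by which pricing lets each user choose a consumption level that reflects his or her own utility function.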

On the other hand, we believe that admission control is also essential to maintaining

good network service. Generally speaking, an admission control scheme is designed for

maintaining the delivered QoS to different users (or connections, sessions, calls, etc.) at

the target level by limiting the amount of ongoing traffic in the system [110]. The lack of

admission control would strongly devalue the benefit that DiffServ can produce, and the

deterioration of service quality resulting from network congestion cannot be solved only

by devices working in core networks.


Admission control can be centralized or distributed. Early research mainly discussed

centralized admission control [9, 38, 134]. Centralized admission control, like any other

centralized technique, has a single point of failure, and the admission requests may overload the

admission control center when it is used in large networks. Distributed admission control

can be further classified into two categories: collaborative schemes or local schemes. The

design of collaborative schemes, similar to that of centralized schemes, needs to take ac-

count of network communication overhead. It is possible that the network congestion will

further deteriorate due to the control packets carrying the information for collaboration

when the network is already congested. By contrast, for local schemes, information collec-

tion and decision making are done locally. The challenge of designing a local scheme is to

find the appropriate measurement of the network congestion status.

In the research area of admission control in wireless networks, traditionally, studies

mainly focused on the trade-off between the new call blocking probability and the change

of handoff blocking probability due to the admission of new users [37, 39, 80, 81, 88].

For systems with hard capacity, that is, Time-Division Multiple Access (TDMA) and

Frequency-Division Multiple Access (FDMA) systems, this type of admission control

scheme works very well. However, for systems with soft capacity, such as Code-Division

Multiple Access (CDMA), Orthogonal Frequency-Division Multiplexing (OFDM), or

systems with a contention-based Medium Access Control (MAC) layer, the relationship

between the number of users and the available capacity is much more complicated. A

scheme that only focuses on blocking probability is not enough, and this type of schemes

cannot alleviate the network congestion efficiently. Hou, Yang and Papavassiliou proposed

a pricing based admission control algorithm [82]. Their study attempted to find the optimal

point between utility and revenue in terms of the new call arrival rate, which was affected

by the price that was adjusted dynamically according to the network condition.


Admission control can also be conducted by other techniques. The scheme proposed

by Xia et al. [138] aimed at reducing the response delay of admission decisions for mul-

timedia service. It was experience-based and belonged to aggressive admission control,

where each agent of the admission control system admitted more requests than its allocated

bandwidth could accommodate. Jeon and Jeong [84] combined admission control with packet scheduling. The

scheduler assigned the higher priority to real-time packets over best-effort traffic when

the real-time packets were going to meet the deadline. Thus the admission control scheme

acted as a congestion controller. Cross-layer design was also used in this scheme. Qian, Hu

and Chen [119] focused on admission control for Voice over IP (VoIP) application in Wire-

less LANs. The interactions of WLAN voice manager, Medium Access Control (MAC)

layer protocols, soft-switches, routers and other network devices were discussed. Ganesh

et al. [69] developed an admission control scheme that was conducted in end users (i.e.,

endpoint admission control) by probing the network and collecting congestion notification.

This scheme requires close cooperation of end users, which is questionable in open networks

such as the Internet.

One of the critical design considerations of admission control is how to let the ad-

mission controller know the network congestion status. Some approaches used a metric

to evaluate a congestion index at each network element to admit new sessions (e.g., Mon-

teiro, Quadros and Boavida [105]). Some others employed packet probing [30, 55] and

aggregation of RSVP messages in the admission controller [16, 24].

The ideal location to conduct pricing and admission control is edge networks. Edge

routers do not have to support a great many flows, which enables edge routers to keep

per-flow states without losing much performance. By monitoring the dynamics of the

flows, edge routers can charge a higher price when network congestion occurs, and lower

the price when congestion is alleviated. The higher price reduces the demand of users

to use the network, and new users are unlikely to request the admission if the network is

congested, which in turn gives better service quality to the flows that enter the network.


Based on this strategy, in Chapter 3 we propose a pricing and admission control

scheme that works with CHOKeW. When the network congestion is heavier, our pricing

scheme will increase the price by a value that is proportional to the congestion measure-

ment, which is equivalent to charging a tax due to network congestion—thus we name our

pricing scheme ConTax (Congestion Tax).

The ConTax-CHOKeW framework is a cost-effective DiffServ network solution that in-

cludes pricing and admission control (provided by ConTax) plus bandwidth differentiation

and TCP protection (supported by CHOKeW). By using the sum of priority classes of all

admitted users as the network load measurement in ConTax, edge routers can work inde-

pendently. This saves network resources as well as the management cost of periodically

sending control messages that update the network congestion status from core

routers to edge routers. ConTax adjusts the prices for all priority classes when

the network load for an edge router is greater than a threshold. The heavier the load is, the

higher the prices will be. The extra price above the basic price, i.e., the congestion

tax, proves able to effectively control the number of users that are admitted into

the network. By using simulations, we also show that when more priority classes are sup-

ported, the network provider can earn more profit due to a higher aggregate price. On the

other hand, a network with a variety of priority services provides users greater flexibility to

meet the specific needs for their applications.
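The pricing behavior described above can be sketched as follows; the linear form, the parameter names, and the threshold are our own reading of this description, while the actual ConTax pricing model is specified in Chapter 3.

    /* Illustrative congestion-tax sketch: the price of a priority class is its
     * basic price plus a surcharge proportional to how far the measured load
     * (the sum of priority values of admitted users) exceeds a threshold. */
    double contax_price(double base_price,   /* basic price of the priority class        */
                        double load,         /* current network load at the edge router  */
                        double load_thresh,  /* load threshold that triggers the tax     */
                        double alpha)        /* tax charged per unit of excess load      */
    {
        double excess = load - load_thresh;
        if (excess <= 0.0)
            return base_price;               /* light load: no congestion tax            */
        return base_price + alpha * excess;  /* heavier load, higher price               */
    }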

In addition to buffer management in core networks and pricing and admission control

in edge networks, our work with respect to traffic control also includes pricing and admis-

sion control in Wireless Mesh Networks (WMNs), which have some specific features that

do not exist in wired networks.

One of the main purposes of using WMNs is to swiftly extend the Internet coverage

in a cost-effective manner. The configurations of WMNs, determined by the user locations

and the application features, however, are greatly dynamic and flexible. As a result, it

is highly possible for a flow to go through a wireless path consisting of multiple parties


before it reaches the hot spot that is connected to the wired Internet. This feature results

in significant differences between admission control for WMNs and for traditional networks. It

is inefficient and infeasible to ask for the confirmation from each hop along the route in

WMNs. A group-based one-hop admission control is more realistic than the traditional

end-to-end admission control.

In our research that is discussed in Chapter 4, we propose a group-based pricing and

admission control scheme, in which only two parties are involved in the operations upon

each admission request, which minimizes the number of involved parties and simplifies

the operations. In this scheme, the determination criteria for network admission are the

available resources and the requested resources, which correspond to supply and demand

in an economic system, respectively. The involved parties use the knowledge of utility,

cost, and benefit to calculate the available and requested resources. Therefore, our scheme

is named APRIL (Admission control with PRIcing Leverage). Since the operations are

conducted distributedly, there is no need for a single control center. By using APRIL, the

admission of new groups leads to benefit increases of both involved groups, and the total

benefit of the whole system also increases. This characteristic can be used as an incentive

to expand Internet coverage by WMNs. Finally, in Chapter 5, we discuss some future

research issues.


CHAPTER 2DIFFERENTIATED BANDWIDTH ALLOCATION AND TCP PROTECTION IN

CORE NETWORKS

2.1 Introduction

Problems associated with Quality of Service (QoS) in the Internet have been investi-

gated for years but have not been solved completely. One of the technological challenges

is to introduce a reliable as well as a cost-effective method to support multiple services at

different priority levels within core networks that can support thousands of flows.

In recent years, many QoS models, such as Service Marking [8], Label Switching

[10, 14], Integrated Services/RSVP [27, 28], and Differentiated Services (DiffServ) [25]

have been proposed. Each of these models has its own unique features and flaws.

In Service Marking, a method called “precedence marking” is used to record the pri-

ority value within a packet header. However, the service request is only associated with

each individual packet, and does not consider the aggregate forwarding behavior of a flow.

The flow behavior is nevertheless critical to implement QoS. The second model, Label

Switching, including Multi-Protocol Label Switching (MPLS) [121], is designed in a way

that supports packet delivery. In this model, finer granularity resource allocation is avail-

able, but scalability becomes a problem in large networks. In the worst scenario, it scales

in proportion with the square of the number of edge routers. In addition, the basic infras-

tructure of Label Switching is built by Asynchronous Transfer Mode (ATM) and Frame

Relay technology, and hence it is not straightforward to upgrade current IP routers to Label

Switching routers. Integrated Services/RSVP relies upon traditional datagram networks,

but it also has a scalability problem due to the necessity to establish packet classification

and to maintain the forwarding state of the concurrent reservations on each router. Diff-

Serv is a refinement to Service Marking, and it provides a variety of services for IP packets


based on their per-hop behaviors (PHBs) [25]. Because of its simplicity and scalability,

DiffServ has attracted the most attention nowadays.

In general, routers in the DiffServ architecture, similar to those proposed in Core-

Stateless Fair Queueing (CSFQ) [128], are divided into two categories: edge (boundary)

routers and core (interior) routers. Sophisticated operations, such as per-flow classification

and marking, are implemented at edge routers. In other words, core routers do not neces-

sarily maintain per-flow states; instead, they only need to forward the packets according to

the indexed PHB values that are predefined. These values are marked in the Differentiated

Services fields (DS fields) in the packet headers [25,109]. For example, Assured Forward-

ing [78] defined a PHB group and each packet is assigned a level of drop precedence. Thus

packets with primary importance based on their PHB values encounter relatively low drop-

ping probability. The implementation of an Active Queue Management (AQM) scheme to conduct

the dropping, however, is not specified in the framework of Assured Forwarding.

When we design an AQM scheme, the performance has to be investigated along with

TCP, taking account of the fact that almost all error-sensitive data in the Internet are trans-

mitted by TCP and the dynamics of TCP have unavoidable interactions with the dropping

scheme.

In order to incorporate the priority services1 of DiffServ into TCP, the following tech-

nical problems must be solved: (1) TCP protection and (2) bandwidth differentiation. We

discuss them in the following two subsections.

2.1.1 TCP Protection

The importance of TCP protection has been discussed by Floyd and Fall [63]. They

predicted that the Internet would collapse if there was no mechanism to protect TCP flows.

1 As Marbach [102] proposed, a set of priority services can be applied to modelingand analyzing DiffServ, by mapping the PHBs that receive better services into the higherpriority levels. In the rest of this chapter, we use “priority levels” to represent PHBs forgeneral purposes.


In the worst case, the routers would be consumed with forwarding packets even though

no packet is useful for receivers. In the meantime, the bandwidth would be completely

occupied by unresponsive senders that do not reduce the sending rates even after their

packets are dropped by the congested routers [63].

Conventional Active Queue Management (AQM) algorithms such as Random Early

Detection (RED) [60] and BLUE [59] cannot protect TCP flows. It is strongly suggested

that novel AQM schemes be designed for TCP protection in routers [29,63]. Cho [42] pro-

posed a mechanism which uses a “flow-valve” filter for RED to punish non-TCP-friendly

flows. However, this approach has to reserve three parameters for each flow, which signif-

icantly increases the memory requirement. Mahajan and Floyd [100] described a simpler

scheme, known as RED with Preferential Dropping (RED-PD), in which the drop history

of RED is used to help identify non-TCP-friendly flows, based on the observation that

flows at higher speeds usually have more packet drops in RED. RED-PD is also a per-flow

scheme and at least one parameter needs to be reserved for each flow to record the number

of drops.

When compared with previous methods including conventional per-flow schemes, the

implementation design of CHOKe, proposed by Pan et al. [112], is simple and it does not

require per-flow state maintenance. CHOKe serves as an enhancement filter for RED in

which a buffered packet is drawn at random and compared with an arriving packet. If both

packets come from the same flow, they are dropped as a pair (hence, we call this “matched

drops”); otherwise, the arriving packet is delivered to RED. Note that a packet that has

passed CHOKe may still be dropped by RED. The validity of CHOKe has been explained

using an analytical model by Tang et al. [133].
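The matched-drop test of CHOKe on a packet arrival can be sketched as follows; the queue representation and the helper names are ours, not code from [112].

    #include <stdlib.h>

    /* Illustrative CHOKe arrival test: draw one buffered packet at random and
     * compare flow IDs; a match drops both packets, otherwise the arrival is
     * handed to RED for the usual dropping decision. */
    struct pkt { int flow_id; /* other header fields omitted */ };

    enum choke_verdict { DROP_BOTH, PASS_TO_RED };

    enum choke_verdict choke_on_arrival(const struct pkt *arrival,
                                        struct pkt *const *queue, int qlen,
                                        int *victim_index)
    {
        if (qlen == 0)
            return PASS_TO_RED;              /* nothing buffered to compare with       */

        int i = rand() % qlen;               /* draw one buffered packet at random     */
        if (queue[i]->flow_id == arrival->flow_id) {
            *victim_index = i;               /* caller also removes queue[i]           */
            return DROP_BOTH;                /* matched drop: both packets are dropped */
        }
        return PASS_TO_RED;                  /* no match: RED decides the fate         */
    }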

CHOKe is simple and effective for TCP protection, but it only supports best-effort

traffic. In DiffServ networks where flows have different priorities, TCP protection is still an

imperative task. In this chapter, we use the concept of matched drops to design another


scheme called CHOKeW. The letter W represents a function that supports multiple weights

for bandwidth differentiation.

The TCP protection in DiffServ networks has three scenarios: first, protecting TCP

flows in higher priority from high-speed unresponsive flows in lower priority; second, pro-

tecting TCP flows from high-speed unresponsive flows in the same priority; and third,

protecting TCP flows in lower priority from high-speed unresponsive flows in higher prior-

ity. Since CHOKeW is designed for allocating a greater bandwidth share to higher priority

flows, if TCP protection is effective in the third scenario, it should also be effective in the

first and second scenarios. Here we report the results of the third scenario in Subsection

2.4.4 to demonstrate the effectiveness of TCP protection of CHOKeW.

2.1.2 Bandwidth Differentiation

TCP is the most popular transport protocol in the Internet. It is used to transmit error-

sensitive data from applications such as Simple Mail Transfer Protocol (SMTP), HyperText

Transfer Protocol (HTTP), File Transfer Protocol (FTP), or Telnet. For TCP traffic, good-

put is a well-known criterion to measure the performance. Here we use the same definition

for “goodput” as described by the work of Floyd [63] (p. 459), i.e., “the bandwidth deliv-

ered to the receiver, excluding duplicate packets.” There is good evidence that the more

congested the traffic in a TCP flow becomes, the higher the dropping rate of packets will

be and the more duplicate packets are produced [7, 127]. Since TCP decreases the sending rate

when packet drops are detected, it is reasonable to believe that a TCP flow with a larger

bandwidth share also has higher goodput.

Some research has investigated the relationship between the priority of flows and the

magnitude of bandwidth differentiation. RED with In/Out bit (RIO), presented by Clark

and Fang [47], uses two sets of RED parameters to differentiate high priority traffic (marked

as “in”) from low priority traffic (marked as “out”). The parameter set for “in” traffic

usually includes higher queue thresholds, which result in a smaller dropping probability.

In RIO an “out” flow may be starved because there is no mechanism to guarantee the


bandwidth share for low-priority traffic [26], which is a disadvantage of RIO. Our scheme

uses matched drops to control the bandwidth share. When a low-priority TCP flow only

has a small bandwidth share, the responsiveness of TCP can lead to a small backlog for this

flow in the buffer. The packets from this flow are unlikely to be dropped, so this flow will

not be starved. Our model explains this feature in Subsection 2.3.1 (Equation (2.10)).

In fact, some scheduling schemes, such as Weighted Fair Queueing (WFQ) [52] and

other packet approximation of the Generalized Processor Sharing (GPS) model [113],

may also support differentiated bandwidth allocation. However, the main disadvantage

of these schemes is that they require constant per-flow state maintenance, which is not

cost-effective in core networks as it causes memory-requirement complexity O(N ) and per-

packet-processing complexity usually larger than O(1).2 Our scheme is a stateless scheme,

and the packet processing time is independent of N . Both the memory-requirement com-

plexity and the per-packet-processing complexity of CHOKeW is O(1).

Moreover, CHOKeW uses First-Come-First-Served (FCFS) scheduling, which short-

ens the tail of the delay distribution [46], and let packets arriving in a small burst be trans-

mitted in a burst. Many applications in the Internet, such as TELNET, benefit from this

feature. Schedulers similar to WFQ or DRR, however, interweave the packets from differ-

ent queues in the forwarding process, which diminishes this feature.

In this chapter, we focus on differentiated bandwidth allocation as well as TCP pro-

tection in the core networks. Our goal is to use a cost-effective method to provide a flow at

a higher priority level with a larger bandwidth share, and to guarantee that no low-priority

TCP flow is starved even if some high-speed unresponsive flows exist. In addition, by using

2 For example, according to Ramabhadran and Pasquale [120], the per-packet-processing complexity is O(N) for WFQ, O(log N) for WF2Q [23], and O(log log N) for Leap Forward Virtual Clock [130]. Deficit Round Robin (DRR) [126] reduces the per-packet-processing complexity to O(1), but its memory-requirement complexity is still O(N) when the number of logical queues is comparable to the number of active flows, in order to obtain the desired performance.


CHOKeW, we expect better fairness among the flows with the same priority. To the best of

our knowledge, no other stateless scheme has achieved this goal.

The rest of the chapter is organized as follows. Section 2.2 describes the CHOKeW

algorithm. Section 2.3 derives the equations for the steady state, and explains

the features and effectiveness of CHOKeW, such as fairness and bandwidth differentiation.

Section 2.4 presents and discusses the simulation results, including the effect of supporting

two priority levels and multiple priority levels, TCP protection, the performance of TCP

Reno in CHOKeW, a comparison with CHOKeW-RED (CHOKeW with RED module),

and a comparison with CHOKeW-avg (CHOKeW with a module to calculate the aver-

age queue length by EWMA). Section 2.5 discusses implementation considerations,

and suggests an extended matched drop algorithm for

CHOKeW designed for some special scenarios. We conclude this chapter in Section 2.6.

2.2 The CHOKeW Algorithm

CHOKeW uses the strategy of matched drops presented by CHOKe [112] to protect

TCP flows. Like CHOKe, CHOKeW is a stateless algorithm that is capable of working in

core networks where a myriad of flows are served.

More importantly, CHOKeW supports differentiated bandwidth allocation for traffic

with different priority weights. Each priority weight corresponds to one of the priority

levels; a heavier priority weight represents a higher priority level.

Although CHOKeW borrows the idea of matched drops from CHOKe for TCP pro-

tection, there are significant differences between these two algorithms. First of all, the goal

of CHOKe is to block high-speed unresponsive flows with the help of RED to inform TCP

flows of network congestion, whereas CHOKeW is designed for supporting differentiated

bandwidth allocation with the assistance of matched drops that are also able to protect TCP

flows.

While Pan et al. [112] suggested drawing more than one packet if there are multiple

unresponsive flows, they did not provide further solutions. In CHOKeW, the adjustable


number of draws is not only used for restricting the bandwidth share of high-speed unre-

sponsive flows, but also used as signals to inform TCP of the congestion status. In order

to avoid functional redundancy, CHOKeW is not combined with RED since RED is also

designed to inform TCP of congestion. Thus we say that CHOKeW is an independent

AQM scheme, instead of an enhancement filter for RED. To demonstrate that RED is not

an essential component for the effectiveness of CHOKeW, the comparison between the per-

formance of CHOKeW and that of CHOKeW-RED (i.e., CHOKeW with RED) is shown

in Subsection 2.4.6.

In order to determine when to draw a packet (or packets) and how many packets are

possibly drawn from the buffer, we introduce a variable, called the drawing factor, to con-

trol the maximum number of draws. For a flow at priority level i (i = 1, 2, · · · , M , where

M is the number of priority levels supported by the router), the drawing factor is denoted

by pi (pi ≥ 0). The value of pi may change over time due to the change of congestion

status, but at a particular moment, all flows at priority level i are handled by a CHOKeW

router using the same pi. This is how CHOKeW provides better fairness among flows with

the same priority than other conventional stateless schemes such as RED and BLUE. A

CHOKeW router keeps the values of pi instead of per-flow states. Thus CHOKeW pre-

cludes the memory requirement from rocketing up when more flows go through the router.

Roughly speaking, we may interpret pi as the maximum number of random draws

from the buffer upon an arrival from a flow at priority level i. The precise meaning is

discussed below.

Assume that the number of active flows served by a CHOKeW router is N , and the

number of priority levels supported by the router is M . Let wi (wi ≥ 1) be the priority

weight of flow i (i = 1, 2, · · · , N ), and w(k) (k = 1, 2, · · · , M ) be the weight of priority

level k. If flow i is at priority level k, then wi = w(k). All flows at the same priority

level have the same priority weight. If w(k) > w(l), we say that flows at priority level k


have higher priority than flows at priority level l, or simply, priority level k is higher than

priority level l.

Let p0 denote the basic drawing factor. The drawing factor used for flow i is calculated

as follows:

pi = p0/wi. (2.1)

Since wi ≥ 1, we have pi ≤ p0, which means that p0 also represents the upper bound of the

drawing factors.

If wi > wj , pi < pj , i.e., a flow with higher priority has a smaller drawing factor,

and hence has a lower possibility of becoming the victim of matched drops. This is the

basic mechanism for supporting bandwidth differentiation in CHOKeW (further explained

in Subsection 2.3.4).

The precise meaning of drawing factor pi depends upon its value. It can be categorized

into two cases:

Case 1. When 0 ≤ pi < 1, pi represents the probability of drawing one packet from

the buffer at random for comparison.

Case 2. When pi ≥ 1, pi consists of two parts, and we may rewrite pi as

pi = mi + fi. (2.2)

where mi ∈ Z∗ (the set of nonnegative integers) represents the integral part with the value

of ⌊pi⌋ (the largest integer ≤ pi), and fi the fractional part of pi. In this case, at most mi

or mi + 1 packets in the buffer may be drawn for comparison. Let di denote the maximum

number of random draws. We have

Prob[di = mi + 1] = fi,

Prob[di = mi] = 1 − fi.
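The two cases can be combined into one small sketch of how the number of random draws for an arrival is obtained from p0 and the priority weight; the function name is ours.

    #include <math.h>
    #include <stdlib.h>

    /* Illustrative sketch: maximum number of random draws d_i for an arrival
     * from a flow with priority weight w_i, following Eq. (2.1) and the two
     * cases above. */
    int max_draws(double p0, double w_i)
    {
        double p_i = p0 / w_i;               /* Eq. (2.1): higher weight, smaller drawing factor */
        double m_i = floor(p_i);             /* integral part   */
        double f_i = p_i - m_i;              /* fractional part */

        /* Draw m_i + 1 packets with probability f_i, otherwise m_i packets.
         * For 0 <= p_i < 1 this reduces to one draw with probability p_i.   */
        double v = (double)rand() / RAND_MAX;
        return (int)m_i + (v < f_i ? 1 : 0);
    }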


Initialization:
    p0 ← 0

For each packet pkt arrival:
    (1) L ← L + la
    (2) Update p0 (see Fig. 2–2)
    (3) IF pkt is at priority level k
          THEN p ← p0/w(k), m ← ⌊p⌋, f ← p − m
    (4) Generate a random number v ∈ [0, 1)
        IF v < f
          THEN m ← m + 1
    (5) IF L > Lth
          THEN
            WHILE m > 0
                m ← m − 1
                Draw a packet (pkt′) from the buffer at random
                IF ξa = ξb
                  THEN L ← L − la − lb, drop pkt′ and pkt
                       RETURN  /* wait for the next arrival */
                  ELSE keep pkt′ intact
    (6) IF L > Llim  /* buffer is full */
          THEN L ← L − la, drop pkt
          ELSE let pkt enter the buffer

Parameters:
    ξa:   Flow ID of the arriving packet
    ξb:   Flow ID of the packet drawn from the buffer
    la:   Size of the arriving packet
    lb:   Size of the packet drawn from the buffer
    L:    Queue length
    Llim: Buffer limit
    Lth:  Queue length threshold of activating matched drops

Figure 2–1: CHOKeW algorithm


IF L < L−
    THEN p0 ← p0 − p−
         IF p0 < 0 THEN p0 ← 0
IF L > L+
    THEN p0 ← p0 + p+

Parameters:
L: queue length
L+: queue length threshold for increasing p0
L−: queue length threshold for decreasing p0 (Lth < L− < L+)
p0: basic drawing factor
p+: step length for increasing p0
p−: step length for decreasing p0

Figure 2–2: Algorithm of updating p0

[Plot: source nodes S1, S2, ..., SN attach to router R1 over links of bandwidth Bi; R1 connects to router R2 over the bottleneck link of bandwidth B0; R2 attaches to destination nodes D1, D2, ..., DN]

Figure 2–3: Network topology


The algorithm for drawing packets is described in Fig. 2–1. Because CHOKeW does not

require per-flow states, in this figure m represents the value of mi (before Step (4)) and of di

(after Step (4)); likewise, p and f represent pi and fi, respectively.

The congestion status of a router may become either heavier or lighter after a period of

time, since circumstances (e.g., the number of users, the application types, and the traffic

priority) constantly change. In order to cooperate with TCP and to improve the system

performance, an AQM scheme such as RED [60] needs to inform TCP senders to lower

their sending rates by dropping more packets when the network congestion becomes worse.

Unlike CHOKe [112], CHOKeW does not have to work with RED in order to function

properly. Instead, CHOKeW can adaptively update p0 based on the congestion status. The

updating process is shown in Fig. 2–2, which details Step (2) of Fig. 2–1. The combination

of Fig. 2–1 and Fig. 2–2 provides a complete description of the CHOKeW algorithm.
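To make the combined procedure easier to follow, here is a compact Python sketch of the per-arrival processing described by Fig. 2–1 and Fig. 2–2. It is only an illustrative rendering under simplifying assumptions (queue length counted in packets rather than bytes, a Python list as the FIFO buffer, and a priority-weight table w); it is not the authors' router implementation.

    import math
    import random

    class CHOKeWQueue:
        """Sketch of CHOKeW per-arrival processing (Fig. 2-1 with the p0 update of Fig. 2-2)."""

        def __init__(self, w, L_th=100, L_minus=125, L_plus=175, L_lim=500,
                     p_plus=0.002, p_minus=0.001):
            self.w = w                      # priority level -> weight w_(k)
            self.L_th, self.L_minus, self.L_plus, self.L_lim = L_th, L_minus, L_plus, L_lim
            self.p_plus, self.p_minus = p_plus, p_minus
            self.p0 = 0.0                   # basic drawing factor
            self.buf = []                   # FIFO buffer of (flow_id, level) tuples

        def _update_p0(self):
            # Fig. 2-2: raise p0 under heavy congestion, lower it as congestion eases.
            L = len(self.buf)
            if L < self.L_minus:
                self.p0 = max(0.0, self.p0 - self.p_minus)
            if L > self.L_plus:
                self.p0 += self.p_plus

        def on_arrival(self, flow_id, level):
            """Return True if the arriving packet is enqueued, False if it is dropped."""
            self._update_p0()
            p = self.p0 / self.w[level]     # drawing factor for this priority level
            m = math.floor(p)
            if random.random() < p - m:     # randomized rounding of the fractional part
                m += 1
            if len(self.buf) > self.L_th:   # matched drops are active
                while m > 0:
                    m -= 1
                    j = random.randrange(len(self.buf))
                    if self.buf[j][0] == flow_id:
                        del self.buf[j]     # matched drop: discard both packets
                        return False
            if len(self.buf) >= self.L_lim: # buffer full: drop the arrival
                return False
            self.buf.append((flow_id, level))
            return True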

CHOKeW updates p0 upon each packet arrival, but activates matched drops only when

the queue length L is longer than the threshold Lth (Step (5) in Fig. 2–1). Three queue

length thresholds are applied to CHOKeW: Lth is the threshold for activating matched drops,

L+ for increasing p0, and L− for decreasing p0. As the buffer is used to absorb bursty traffic [29],

we set Lth > 0, so that short bursty traffic can enter the buffer without suffering any

packet drops when the queue length L is less than Lth (although p0 may be larger than 0

for historical reasons). When L ∈ [L−, L+], the network congestion status is considered

to have been stable and p0 maintains the same value as before (i.e., the algorithm shown in

Fig. 2–2 does not adjust the value of p0). Only when L > L+ is the congestion considered

heavy, and p0 is then increased by p+ each time. The alleviation of network congestion

is indicated by L < L−, and, as an adaptation, p0 is reduced by p− each time. We keep

Lth < L− so that the matched drops are still active when p0 starts becoming smaller,

which prevents matched drops from being completely turned off suddenly and endows the

algorithm with higher stability.


Table 2–1: The State of CHOKeW vs. the Range of L

    Range of L       [0, Lth]               (Lth, L−)              [L−, L+]     (L+, Llim]
    Matched drops    Inactive               Active                 Active       Active
    p0 update        p0 ← max{0, p0 − p−}   p0 ← max{0, p0 − p−}   unchanged    p0 ← p0 + p+

The state of CHOKeW can be described by the activation of matched drops and the

process of updating p0, which is further determined by the range the current queue length

L falls into, as shown in Table 2–1. At any time, CHOKeW works in one of the following states:

1. inactive matched drops and decreasing p0 (unless p0 = 0 ), when 0 ≤ L ≤ Lth;

2. active matched drops and decreasing p0 (unless p0 = 0 ), when Lth < L < L− ;

3. active matched drops and constant p0, when L− ≤ L ≤ L+ ;

4. active matched drops and increasing p0, when L+ < L ≤ Llim .

According to the above explanation, 0 < Lth < L− < L+ < Llim, where Llim is the

buffer limit. In the simulations (Section 2.4), we set Lth = 100 packets, L− = 125 packets,

L+ = 175 packets, and Llim = 500 packets. The guideline here is similar to RED when

gentle = true [64]; i.e., let the dropping probability increase smoothly, so the queue can

have some time to absorb small bursty traffic.

One advantage of using CHOKeW is that it is easily able to prioritize each packet

based on the value of the DS field, without the aid of the flow ID.3 Therefore, when

CHOKeW is applied in core routers, priority becomes a packet feature. In terms of ser-

vice quality in the core network, packets from different flows are served equally if

they have the same priority; on the other hand, packets from the same flow may be treated

differently if their priority is different (e.g., some packets are remarked by edge routers).

3 In CHOKeW, the flow ID is only used to check whether two packets are from the same flow. This operation (XOR) can be executed efficiently in hardware.


Now we discuss the complexity of CHOKeW. On the basis of the above description,

we know that CHOKeW needs to remember only w(k) for each predefined priority level

k (k = 1, 2, · · · , M ), instead of some variables for each flow i ( i = 1, 2, · · · , N ). The

complexity of CHOKeW is only affected by M . In DiffServ networks, it is reasonable to

expect that M will never be a large value in the foreseeable future, i.e., M ≪ N. Thus with

respect to N , the memory-requirement complexity as well as the per-packet-processing

complexity of CHOKeW is O(1), while for conventional per-flow schemes, the memory-

requirement complexity is O(N ) and the per-packet-processing complexity is usually larger

than O(1) [120].

2.3 Model

In previous work, Tang et al. [133] proposed a model to explain the effectiveness of

CHOKe. Using matched drops, CHOKe produces a “leaky buffer” where packets may be

dropped when they move forward in the queue, which may result in a flow that maintains

many packets in the queue but can obtain only a small portion of bandwidth share. In this

way TCP protection takes effect on high-speed unresponsive flows [133].

For CHOKeW, we need a model to explain not only how to protect TCP flows (as

shown by Tang et al. [133]), but also how to differentiate the bandwidth share.

The network topology shown in Fig. 2–3 is used for our model. In this figure, two

routers, R1 and R2, are connected to N source nodes (Si, i = 1, 2, · · · , N ) and N destina-

tion nodes (Di), respectively. The R1-R2 link, with bandwidth B0 and propagation delay

τ0, allows all flows to go through. Bi and τi denote the bandwidth and the propagation

delay of each link connected to Si or Di, respectively. As we are interested in the network

performance under a heavy load, we always let B0 < Bi, so that the link between the two

routers becomes a bottleneck.

2.3.1 Some Useful Probabilities

In the CHOKeW router, for flow i, let ri be the probability that a matched drop occurs

at one draw (the matching probability, for short), which depends on the current queue length


L and the number of packets from flow i in the queue (i.e., the packet backlog from flow i,

denoted by Li). The following equation [133] can also be used for CHOKeW:

ri = Li/L. (2.3)

Now we focus on the features of matched drops. Assuming that the buffer has an

unlimited size and thus packet dropping is due to matched drops instead of overflow, we

can calculate the probability that a packet from flow i is allowed to enter the queue upon

its arrival, denoted by ηi:

ηi = (1 − ri)^mi (1 − fi ri), (2.4)

where

mi = ⌊pi⌋,  fi = pi − mi. (2.5)

The difference between (2.2) and (2.5) is that (2.2) uses m and f rather than mi and

fi. When CHOKeW is implemented, two variables m and f are adequate for all flows

because they can be reused for each arrival. In (2.4), (1 − ri)^mi is the probability of

no matched drops in the first mi draws. After the completion of the first mi draws, the

value of fi stochastically determines whether one more packet is drawn. The probability

of no further draw is (1 − fi), and the probability that one more packet is drawn but no

matched drops occur is fi(1 − ri). Therefore the probability that no matched drops occur

is (1− fi) + fi(1− ri) = 1− firi.

We rewrite (1 − ri)^fi as its Maclaurin series:

(1 − ri)^fi = 1 − fi ri + o(ri²).


Assuming the core network serves a vast number of flows, it is reasonable to say ri ≪ 1 for each responsive flow i.4 We have (1 − ri)^fi ≈ 1 − fi ri, and (2.4) can be rewritten as ηi = (1 − ri)^(mi+fi), or

ηi = (1 − ri)^pi. (2.6)

For a packet from flow i, let si denote the probability that it survives the queue, and qi

the dropping probability (either before or after the packet enters the queue).5 We have

qi = 1− si. (2.7)

For each packet arrival from flow i, the probability that it is dropped before entering

the queue is 1 − ηi. According to the rule of matched drops, a packet from the same flow,

which is already in the buffer, should also be dropped if the arriving packet is dropped.

Thus in a steady state we obtain qi = 2(1 − ηi), 0.5 ≤ ηi ≤ 1. When qi=1, ηi = 0.5.

In other words, when flow i is starved, the router still needs to let half of the arriving

packets from flow i enter the queue, and the packets in the queue will be used to match the

new arrivals in the future. On the other hand, if ηi < 0.5 temporarily, the number of packets

entering the queue cannot compensate the backlog for the packet losses from the queue, which

causes ri to decrease until ri = 1 − 2^(−1/pi) and accordingly ηi = 0.5.

Using (2.6) in qi = 2(1 − ηi), we get

qi = 2 − 2(1 − ri)^pi, (2.8)

and from (2.7),

si = 2(1 − ri)^pi − 1. (2.9)

4 We call a flow responsive if it avoids sending data at a high rate when the network is congested. A TCP flow is responsive, while a UDP flow is not.

5 Note that it is possible for a packet to become a matched-drop victim even after it has entered the queue.


After a packet enters the queue, one and only one of the following possibilities will

happen: 1) it will be dropped from the queue due to matched drops, or 2) it will pass the

queue successfully. The passing probability ρi satisfies si = ηi ρi. Using (2.6) and (2.9) in

it, we get

ρi = 2 − 1/(1 − ri)^pi.
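For quick reference, the per-flow probabilities in (2.6)–(2.9), together with ρi, can be computed as follows. This is an illustrative helper of ours, written under the approximation ηi ≈ (1 − ri)^pi used above.

    def flow_probabilities(r_i: float, p_i: float):
        """Probabilities for flow i, given matching probability r_i and drawing factor p_i."""
        eta_i = (1.0 - r_i) ** p_i      # packet admitted to the queue           (2.6)
        q_i = 2.0 - 2.0 * eta_i         # packet dropped, before or after entry  (2.8)
        s_i = 2.0 * eta_i - 1.0         # packet survives the queue              (2.9)
        rho_i = 2.0 - 1.0 / eta_i       # packet passes, given that it entered
        return eta_i, q_i, s_i, rho_i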

To provide TCP protection, CHOKeW requires 0 ≤ qi < 1 if flow i uses TCP. Equa-

tion (2.8) shows that as long as pi does not exceed a certain range, CHOKeW can guarantee

that flow i will not be starved, even if it may only have low priority. This feature offers

CHOKeW advantages over RIO [26], which neither protects TCP flows, nor prevents the

starvation of low-priority flows.

The algorithm for updating p0 illustrated in Fig. 2–2 ensures p0 ≥ 0 after the update.

From Step (3) in Fig. 2–1, pi ≥ 0 . Using it in (2.8), we get qi ≥ 0 , which means in

CHOKeW the lower bound of qi is satisfied automatically.

Now we discuss the range of pi to satisfy the upper bound of qi (i.e., qi < 1). From

(2.8), we have

pi < −1 / log2(1 − ri). (2.10)

From (2.3), ri can also be interpreted as the normalized backlog from flow i. Equation

(2.10) gives the upper bound of pi, which is a fairly large value if ri is small. For instance,

when ri equals 0.01 (imagine 100 flows share the queue length evenly), the upper bound

of pi is 68.9; in other words, the algorithm may draw up to 69 packets before a flow is

starved, but such a large pi is rarely observed due to the control of unresponsive flows.
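The bound in (2.10) is easy to evaluate numerically; the one-line helper below is ours and simply reproduces the figure quoted above.

    import math

    def p_i_upper_bound(r_i: float) -> float:
        """Largest drawing factor that keeps flow i from starving, Eq. (2.10)."""
        return -1.0 / math.log2(1.0 - r_i)

    print(p_i_upper_bound(0.01))   # ~68.96, i.e., up to about 69 draws, as noted in the text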

Formula (2.10) also explains why a flow in CHOKe (where pi ≡ 1) that is not starved

must have a backlog shorter than half of the queue length. This result is consistent with the

conclusion of Tang et al. [133]. In CHOKeW, for flow i with a certain priority weight wi

and a corresponding drawing factor pi (see Equation (2.1)), the higher the arrival rate, the

larger the backlog, and hence the higher the dropping probability. When the backlog of a


high-rate unresponsive flow reaches the upper bound determined by (2.10), this flow will

be completely blocked by CHOKeW.

2.3.2 Steady-State Features of CHOKeW

In this subsection, assume that there are N independent flows going through the

CHOKeW router, and the packet arrivals for each flow are Poisson.6

The packets arriving at the router can be categorized into two groups: 1) those that will

be dropped and 2) those that will pass the queue. Let λ denote the average aggregate arrival

rate for all flows, and λ′ the average aggregate arrival rate for the packets that will pass the

queue.7 Similarly, L denotes the queue length as mentioned above, and L′ the queue length

only counting the survivable packets. Compared to the time that it takes to transmit (serve)

a packet, the delay to drop a packet is negligible. Little’s Theorem [48] shows

D = L′/λ′. (2.11)

where D is the average waiting time for packets in the queue.

For each flow i (i = 1, 2, · · · , N ), let λi be the average arrival rate, and λ′i be the

average arrival rate only counting the packets that will survive the queue. Then

λ′i = λi(1− qi). (2.12)

As mentioned above, Li denotes the backlog from flow i. Let L′i be the backlog for the

survivable packets from flow i. Then these per-flow measurements have the following

relationship with their aggregate counterparts: λ = Σ_{i=1}^{N} λi, λ′ = Σ_{i=1}^{N} λ′i, L = Σ_{i=1}^{N} Li, and L′ = Σ_{i=1}^{N} L′i.

6 Strictly speaking, the Poisson distribution is not a perfect representation of Internet traffic; nevertheless, it can provide some insights into the features of our algorithm.

7 The average arrival rate for the packets that will be dropped is equal to λ− λ′.


Based on the PASTA (Poisson Arrivals See Time Averages) property of Poisson ar-

rivals, packets from all flows have the same average waiting time in the queue (i.e., Di = D,

i = 1, 2, · · · , N ). Using Little’s Theorem again, for flow i, we get

D = L′i/λ′i. (2.13)

Using (2.11) in (2.13),

L′i / L′ = λ′i / λ′. (2.14)

The average number of packet drops from flow i during period D is Dλiqi. As packets

from a flow are dropped in pairs (one before entering the queue and one after entering the

queue), flow i has Dλiqi/2 packets dropped after entering the queue on average. Thus we

have

Li = L′i + D λi qi / 2,
L = L′ + (D/2) Σ_{j=1}^{N} λj qj.

Using (2.8), (2.12), (2.13), and (2.14) in it, we obtain

Li = D λi (1 − ri)^pi,
L = D Σ_{j=1}^{N} λj (1 − rj)^pj. (2.15)

For flow i, let µi denote the average arrival rate only counting the packets entering the

queue. Then µi is determined by λi and ηi, i.e., µi = λiηi. Considering (2.6), we rewrite

(2.15) as

Li = D µi,
L = D Σ_{j=1}^{N} µj, (2.16)

and rewrite (2.3) as ri = µi / Σ_{j=1}^{N} µj.

Equations (2.16) can be interpreted as Little’s Theorem used for a leaky buffer where

packets may be dropped before reaching the front of the queue, which is not a classical queue-

ing system. From (2.16) we get an interesting result: even in a leaky buffer, the average

waiting time is determined by the average queue length for flow i (or the average aggregate


queue length) and the average arrival rate from flow i (or the average aggregate arrival rate)

that only counts the packets entering the queue. The average waiting time is meaningful to

the packets surviving the queue exclusively, whereas packets that are dropped after entering

the queue still contribute to the queue length.

Below is the group of formulas that describe the steady-state features of CHOKeW:

ri = µi / Σ_{j=1}^{N} µj, (2.17a)
qi = 2 − 2(1 − ri)^pi, (2.17b)
µi = λi (1 − ri)^pi, (2.17c)
λ′i = λi (1 − qi). (2.17d)
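Equations (2.17a)–(2.17d) couple the per-flow quantities through the total backlog. One hypothetical way to explore them numerically is a fixed-point iteration, sketched below; the arrival rates λi are held constant here (the TCP feedback through R(qi) is not modeled), and convergence is assumed rather than proven.

    def steady_state(lambdas, ps, iters=1000):
        """Iterate (2.17a) and (2.17c) to a fixed point, then evaluate (2.17b) and (2.17d)."""
        n = len(lambdas)
        r = [1.0 / n] * n                                              # initial guess
        for _ in range(iters):
            mu = [lam * (1.0 - ri) ** pi for lam, ri, pi in zip(lambdas, r, ps)]   # (2.17c)
            total = sum(mu)
            r = [m / total for m in mu]                                # (2.17a)
        q = [2.0 - 2.0 * (1.0 - ri) ** pi for ri, pi in zip(r, ps)]    # (2.17b)
        throughput = [lam * (1.0 - qi) for lam, qi in zip(lambdas, q)] # (2.17d)
        return r, q, throughput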

2.3.3 Fairness

To demonstrate the fairness of our scheme, we study two TCP flows, i and j (i ≠ j,

and i, j ∈ {1, 2, · · · , N}). In this subsection, we analyze the fairness under circumstances

where flows have the same priority and hence the same drawing factor, denoted by p. The

discussion of multiple priority levels is left to the next subsection.

From (2.17a), ri/rj = µi/µj . Using (2.17c) in it, we get

ri / rj = λi (1 − ri)^p / (λj (1 − rj)^p). (2.18)

Previous research (for example, Floyd and Fall [63] and Padhye et al. [111]) has shown

that the approximate sending rate of TCP is affected by the dropping probability, packet

size, Round Trip Time (RTT), and other parameters such as the TCP version and the speed

of users’ computers. In this chapter, we describe TCP sending rate as

λi = αiR(qi). (2.19)


where qi is the dropping probability, and αi (αi > 0) denotes the combination of other fac-

tors.8 Because the sending rate of TCP decreases as network congestion worsens (indicated

by a higher dropping probability), we have

∂R/∂qi < 0. (2.20)

When flow i and flow j have the same priority, our discussion covers two cases, dis-

tinguished by the equivalence of αi and αj .

Case 1. αi = αj

When αi = αj , an intuitive solution to (2.18) is λi = λj and ri = rj . From (2.17b)

and (2.17d) we have λ′i = λ′j , i.e., flow i and flow j get the same amount of bandwidth

share. We will show that this is the only solution.

Let

G = λi (1 − ri)^pi / (λj (1 − rj)^pj) − ri / rj.

Then any solution to (2.18) is also a solution to G = 0.

In the core network, a router usually has to support a great number of flows and the

downstream link bandwidth is shared among them. It is reasonable to assume that fluctu-

ations in the backlog of a TCP flow do not significantly affect the backlog of other flows

(i.e., ∂rj/∂ri ≈ 0, i 6= j). Then we have

∂G/∂ri = [1 / (λj (1 − rj)^pj)] · [(∂λi/∂ri)(1 − ri)^pi − λi pi (1 − ri)^(pi−1)] − 1/rj.

Because ∂λi/∂ri = (∂λi/∂qi) · (∂qi/∂ri) = 2 (∂λi/∂qi)(1 − ri)^(pi−1), and ∂λi/∂qi < 0 (derived from (2.19) and (2.20)), for any value of ri (0 < ri < 1), ∂G/∂ri < 0. In other words, G is a monotone

8 The construction of Equation (2.19) results from previous work. In the work of Floyd and Fall [63], for instance, when a TCP session works in the non-delay mode, the sender's rate can be estimated by λi = (Pi/Ti) √(3/(2qi)), where Pi and Ti denote the packet size and RTT of this flow, respectively.


decreasing function with respect to ri. As a result, if there is a value of ri satisfying G = 0,

it must be the only solution to G = 0. Thus the only steady state is maintained by λi = λj

and ri = rj , when TCP flows i and j have the same priority and the same factor α. This

indicates that CHOKeW is capable of providing good fairness to flow i and flow j.

Case 2. αi ≠ αj

Let (λ′i/λ′j)_C and (λ′i/λ′j)_R denote the ratio of the average throughput of flow i to that of flow j for CHOKeW and for conventional stateless AQM schemes, respectively. By comparing (λ′i/λ′j)_C to (λ′i/λ′j)_R, we will show that CHOKeW is able to provide better fairness when αi ≠ αj.

Among conventional stateless AQM schemes, RED determines the dropping proba-

bility according to the average queue length, and BLUE calculates the dropping probability

from packet loss and link idle events. In a steady state, for AQM schemes such as RED and

BLUE, every flow has a similar dropping probability. Let q denote the dropping proba-

bility. For all flows, qi = q (i = 1, 2, · · · , N ). Therefore, flow i has an average throughput

of

λ′i = λi(1− q) = (1− q)αiR(q).

Similarly, flow j has an average throughput of

λ′j = (1− q)αjR(q).

Thus for RED and BLUE,

(λ′i/λ′j)_R = αi/αj. (2.21)

If αi > αj, then (λ′i/λ′j)_R > 1. Given an AQM scheme, the closer λ′i/λ′j is to 1, the better the fairness. For CHOKeW, if (λ′i/λ′j)_C is closer to 1 than (λ′i/λ′j)_R, i.e., (λ′i/λ′j)_R > (λ′i/λ′j)_C > 1, we say CHOKeW is capable of providing better fairness.

From (2.17b), R(qi) in (2.19) can be rewritten as

R(qi) = R(2 − 2(1 − ri)^pi).


When flow i and flow j have the same priority, p, we define

Λ(rk) ≜ R(2 − 2(1 − rk)^p), for k ∈ {i, j}.

Then (2.19) can be rewritten as

λk = αkΛ(rk), for k ∈ {i, j}.

From (2.17b) and (2.17d),

λ′i / λ′j = αi Λ(ri) [2(1 − ri)^p − 1] / (αj Λ(rj) [2(1 − rj)^p − 1]). (2.22)

Our goal is to show that the right-hand side of (2.22) is less than αi/αj if αi > αj. From

(2.17a) and (2.17c),

ri = αi Λ(ri)(1 − ri)^p / (αi Λ(ri)(1 − ri)^p + Σ_{k=1, k≠i}^{N} µk),

and hence

∂ri/∂αi = β Λ(ri)(1 − ri)^p · [1 − β αi (∂Λ/∂ri)(1 − ri)^p + β αi Λ(ri) p (1 − ri)^(p−1)]^(−1),

where

β = Σ_{k=1, k≠i}^{N} µk / (Σ_{k=1}^{N} µk)².

From ∂Λ/∂ri = (∂R/∂qi) · (∂qi/∂ri) = (∂R/∂qi) [2p (1 − ri)^(p−1)] < 0,

we see ∂ri/∂αi > 0, which means when αi > αj , we have ri > rj and Λ(ri) < Λ(rj).

Using these results in (2.22), for CHOKeW,

(λ′i/λ′j)_C < αi/αj. (2.23)


A comparison between (2.21) and (2.23) proves that CHOKeW provides better fair-

ness than RED and BLUE.

2.3.4 Bandwidth Differentiation

For any two TCP flows i and j (i ≠ j), if αi = αj , and wi < wj (from (2.1), pi > pj),

CHOKeW allocates a smaller bandwidth share to flow i than to flow j, i.e., λ′i < λ′j . This

seems to be an intuitive strategy, but we also noticed that the interaction among pi, ri and

qi may cause some confusion. The dropping probability of flow i in CHOKeW, qi, is not

only determined by pi, but also by ri. Furthermore, the effects of ri and pi are opposite: a

larger value of pi results in a larger qi, but at the same time it leads to a smaller ri, which

may produce a smaller qi. To clear up the confusion, we only need to show that a larger

value of pi leads to a smaller value of bandwidth share, λ′i, which is equivalent to showing

∂λ′i/∂pi < 0. From (2.17d), (2.19), and (2.20), we get ∂λ′i/∂qi < 0. From the Chain Rule

∂λ′i/∂pi = (∂λ′i/∂qi) · (∂qi/∂pi),

we only need to show ∂qi/∂pi > 0.

Proof

According to the Chain Rule, we know

∂qi/∂pi = (∂qi/∂ri) · (∂ri/∂pi) + (∂qi/∂u) · (∂u/∂pi), (2.24)

where u = pi. We introduce the symbol u to distinguish ∂qi/∂u from ∂qi/∂pi: ri is treated as a constant in ∂qi/∂u but not in ∂qi/∂pi. From (2.17b),

∂qi/∂u = −2(1 − ri)^pi ln(1 − ri) (2.25)

and

∂qi/∂ri = 2 pi (1 − ri)^(pi−1). (2.26)


According to (2.17a) and (2.17c), we have

∂ri/∂pi = γ1 / [Σ_{k=1}^{N} µk + λi pi (1 − ri)^(pi−1)], (2.27)

where

γ1 = (1 − ri)^pi [(∂λi/∂qi) · (∂qi/∂pi) + λi ln(1 − ri)].

Using (2.25), (2.26) and (2.27) in (2.24), we get

∂qi/∂pi = −2 (1 − ri)^pi ln(1 − ri) Σ_{k=1}^{N} µk / γ2 > 0,

where

γ2 = Σ_{k=1}^{N} µk + λi pi (1 − ri)^(pi−1) − 2 (∂λi/∂qi)(1 − ri)^(2pi−1) pi.

∎

2.4 Performance Evaluation

To evaluate CHOKeW in various scenarios and to compare it with some other schemes,

we implemented CHOKeW in the ns-2 network simulator [103].

The network topology is shown in Fig. 2–3, where B0 = 1 Mb/s and Bi = 10 Mb/s

(i = 1, 2, · · · , N ). Unless specified otherwise, the link propagation delay τ0 = τi = 1 ms.

The buffer limit is 500 packets, and the mean packet size is 1000 bytes. TCP flows are

driven by FTP applications, and UDP flows are driven by CBR traffic. All TCP flows use TCP

SACK, except in Subsection 2.4.8, where the performance of TCP Reno flows going through

a CHOKeW router is investigated. Each simulation runs for 500 seconds.

Parameters of CHOKeW are set as follows: Lth = 100 packets, L− = 125 packets,

L+ = 175 packets, p+ = 0.002 , and p− = 0.001 .

Parameters of RED are set as follows: minth = 100 packets, maxth = 200 packets,

gentle = true, the EWMA weight is set to 0.002, and pmax = 0.02 (except in Subsection

2.4.6 where different values of pmax are tested to be compared with CHOKeW).


Parameters of RIO include those for “out” traffic and those for “in” traffic. For “out”

traffic, minth out = 100 packets, maxth out = 200 packets, pmax out = 0.02. For “in”

traffic, minth in = 110 packets, maxth in = 210 packets, pmax in = 0.01 (except in

Subsection 2.4.1 where different parameters are tested). Both gentle out and gentle in

are set to true.

For parameters of BLUE, we set δ1 = 0.0025 (the step length for increasing the drop-

ping probability), δ2 = 0.00025 (the step length for decreasing the dropping probability),

and freeze time = 100 ms.
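For readability, the parameter settings listed above can be collected in one place; the following dictionary simply restates them (the key names are ours).

    SIM_PARAMS = {
        "topology": {"B0_Mbps": 1, "Bi_Mbps": 10, "tau_ms": 1,
                     "buffer_limit_pkts": 500, "mean_pkt_size_bytes": 1000},
        "CHOKeW": {"L_th": 100, "L_minus": 125, "L_plus": 175,
                   "p_plus": 0.002, "p_minus": 0.001},
        "RED": {"min_th": 100, "max_th": 200, "gentle": True,
                "ewma_weight": 0.002, "p_max": 0.02},
        "RIO": {"out": {"min_th": 100, "max_th": 200, "p_max": 0.02},
                "in": {"min_th": 110, "max_th": 210, "p_max": 0.01},
                "gentle": True},
        "BLUE": {"delta_1": 0.0025, "delta_2": 0.00025, "freeze_time_ms": 100},
    }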

2.4.1 Two Priority Levels with the Same Number of Flows

One of the main tasks of CHOKeW is supporting bandwidth differentiation for multi-

ple priority levels while working in a stateless manner. We validate the effect of supporting

two priority levels with the same number of flows in this subsection, two priority levels

with different numbers of flows in the next subsection, and three or more priority levels in

Subsection 2.4.3.

As mentioned in Subsection 2.1.2, flow starvation often happens in RIO but is avoid-

able in CHOKeW. In order to quantify and compare the severity of flow starvation among

different schemes, we record the Relative Cumulative Frequency (RCF) of goodput for

flows at each priority level. For a scheme, the RCF of goodput g for flows at a specific

priority level represents the number of flows that have goodput lower than or equal to g

divided by the total number of flows in this priority.
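The RCF is straightforward to compute from per-flow goodput measurements; the helper below is an illustrative sketch of ours, not part of the simulation code.

    def rcf(goodputs, g):
        """Relative Cumulative Frequency: fraction of flows whose goodput is <= g."""
        return sum(1 for x in goodputs if x <= g) / len(goodputs)

    # For example, rcf(out_goodputs, 0.0) gives the fraction of starved "out" flows.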

We simulate 200 TCP flows. When CHOKeW is used, w(1) = 1 and w(2) = 2 are

assigned to equal numbers of flows. When RIO is used, the number of “out” flows is also

equal to the number of “in” flows. Fig. 2–4 illustrates the RCF of goodput for flows at each

priority level of CHOKeW and RIO. Here we show three sets of results from RIO, denoted

by RIO 1, RIO 2 and RIO 3, respectively. For RIO 1, we set minth in = 150 packets

and maxth in = 250 packets; for RIO 2, minth in = 130 packets and maxth in = 230

packets; for RIO 3, minth in = 110 packets and maxth in = 210 packets.


[Plot: RCF of goodput for flows (y-axis) vs. goodput in Mb/s (x-axis), with curves for RIO_1, RIO_2, and RIO_3 ("out" and "in" traffic) and for CHOKeW at w_(1) and w_(2)]

Figure 2–4: RCF of RIO and CHOKeW under a scenario of two priority levels

From Fig. 2–4, we see that the RCF at a goodput of zero for “out” traffic of RIO 1 is 0.1.

In other words, 10 of the 100 “out” flows are starved. Similarly, for RIO 2 and RIO 3, 15

and 6 flows are starved respectively. Moreover, it is observed that some “in” flows of RIO

may also have very low goodput (e.g., the lowest goodput of “in” flows of RIO 2 is only

0.00015 Mb/s) due to a lack of TCP protection. Flow starvation is very common in RIO,

but it rarely happens in CHOKeW.

Now we investigate the relationship between the number of TCP flows and the ag-

gregate TCP goodput for each priority level. The results are shown in Fig. 2–5, where

the curves of w(1) = 1 and w(2) = 2 correspond to the two priority levels. Half of the

flows are assigned w(1) and the other half assigned w(2). As more flows are going through

the CHOKeW router, the goodput difference between the higher-priority flows and the

lower-priority flows changes owing to the network dynamics, but high-priority flows can

get higher goodput no matter how many flows exist.


[Plot: aggregate TCP goodput in Mb/s (y-axis) vs. the number of TCP flows (x-axis), with curves for w_(1) = 1.0 and w_(2) = 2.0]

Figure 2–5: Aggregate TCP goodput vs. the number of TCP flows under a scenario of two priority levels

[Plot: average per-flow TCP goodput in Mb/s (y-axis) vs. w_(2)/w_(1) (x-axis), with curves for CHOKeW and WFQ at both priority levels]

Figure 2–6: Average per-flow TCP goodput vs. w(2)/w(1) when 25 flows are assigned w(1) = 1 and 75 flows w(2)


2.4.2 Two Priority Levels with Different Number of Flows

When the number of flows in each priority level is different, CHOKeW is still capable

of differentiating bandwidth allocation on a per-flow basis. In the following experiment, among

the total 100 TCP flows, 25 flows are assigned fixed priority weight w(1) = 1.0 , and 75

flows are assigned w(2). As w(2) varies from 1.5 to 4.0, the average per-flow goodput is

collected in each priority level and shown in Fig. 2–6. The results are compared with

those of WFQ working in an aggregate flow mode, i.e., in order to circumvent the per-

flow complexity, flows at the same priority level are merged into an aggregate flow before

entering WFQ, and WFQ buffers packets in the same queue if they have the same priority,

instead of using strict per-flow queueing. In WFQ, the buffer pool of 500 packets is split

into two queues: the queue for w(1) has a capacity of 125 packets and the queue for w(2)

has a capacity of 375 packets.

In Fig. 2–6, it is easy to see that the goodput of flows assigned w(2) increases along

with the increase of the value of w(2) for both CHOKeW and WFQ, and accordingly the

goodput of flows assigned w(1) decreases. However, when w(2)/w(1) < 3, the average

per-flow goodput with w(2) is even lower than that with w(1) for WFQ. In other words, WFQ

does not guarantee higher per-flow goodput to a higher priority level if that level is taken

by more flows, when aggregate flows are used. For CHOKeW, bandwidth differentiation

works effectively in the whole range of w(2), even though all packets are mixed in one

single queue in a stateless way.

This feature stems from the fact that CHOKeW does not require multiple

queues to isolate flows; by contrast, conventional packet approximations of GPS, such as

WFQ, cannot avoid the complexity caused by their per-flow nature while giving satisfactory band-

width differentiation on a per-flow basis at the same time.

2.4.3 Three or More Priority Levels

In situations where multiple priority levels are used, the results are similar to those

of two priority levels, i.e., the flows with higher priority achieve higher goodput. Since


[Plot: aggregate TCP goodput in Mb/s (y-axis) vs. the number of TCP flows (x-axis), with curves for w_(1) = 1.0, w_(2) = 1.5, and w_(3) = 2.0]

Figure 2–7: Aggregate goodput vs. the number of TCP flows under a scenario of three priority levels

RIO only supports two priority levels, the results are not compared with those of RIO

in this subsection. Fig. 2–7 and Fig. 2–8 demonstrate the aggregate TCP goodput for

each priority level versus the number of TCP flows for three priority levels and for four

priority levels respectively. At each level, the number of TCP flows ranges from 25 to

100. In Fig. 2–7, three priority levels are configured using w(1) = 1.0 , w(2) = 1.5 , and

w(3) = 2.0 ; w(4) = 2.5 is added to the simulations corresponding to Fig. 2–8 for the fourth

priority levels. Even though the goodput fluctuates when the number of TCP flows changes,

the flows in higher priority are still able to obtain higher goodput. Furthermore, no flow

starvation is observed.

2.4.4 TCP Protection

TCP protection is another task of CHOKeW. We use UDP flows at the sending rate

of 10 Mb/s to simulate misbehaving flows. A total of 100 TCP flows are generated in the

simulations. Priority weights w(1) = 1 and w(2) = 2 are assigned to equal numbers of

flows. In order to evaluate the performance of TCP protection, the UDP flows are assigned

the high priority weight w(2) = 2. As discussed before, if TCP protection works well in

situations where misbehaving flows are at the priority level with w(2), it should also work well

when misbehaving flows only have priority lower than w(2). Hence the effectiveness of


[Plot: aggregate TCP goodput in Mb/s (y-axis) vs. the number of TCP flows (x-axis), with curves for w_(1) = 1.0, w_(2) = 1.5, w_(3) = 2.0, and w_(4) = 2.5]

Figure 2–8: Aggregate goodput vs. the number of TCP flows under a scenario of four priority levels

TCP protection is validated provided that the high-priority misbehaving flows are blocked

successfully.

The goodput versus the number of UDP flows is shown in Fig. 2–9, where CHOKeW

is compared with RIO. Since no retransmission is provided by UDP flows, goodput is

equal to throughput for UDP. For CHOKeW, even if the number of UDP flows increases

from 1 to 10, the TCP goodput in each priority level (and hence the aggregate goodput of

all TCP flows) is quite stable. In other words, the link bandwidth is shared by these TCP

flows, and the high-speed UDP flows are completely blocked by CHOKeW. By contrast,

the bandwidth share for TCP flows in a RIO router is nearly zero, as high-speed UDP flows

occupy almost all the bandwidth.

Fig. 2–10 illustrates the relationship between p0 and the number of UDP flows recorded

in the simulations of CHOKeW. As more UDP flows start, p0 increases, but p0 rarely

reaches a value high enough to start blocking TCP flows before the high-speed UDP flows are

blocked. In this experiment, we also find that few packets of TCP flows are dropped due to

buffer overflow. In fact, when edge routers cooperate with core routers, the high-speed

misbehaving flows will be marked with lower priority at the edge routers. Therefore,


[Plot: aggregate goodput in Mb/s (y-axis) vs. the number of UDP flows (x-axis), with curves for CHOKeW TCP w_(1) = 1.0, CHOKeW TCP w_(2) = 2.0, CHOKeW all TCP, CHOKeW all UDP, RIO all TCP, and RIO all UDP]

Figure 2–9: Aggregate goodput vs. the number of UDP flows under a scenario to investigate TCP protection

[Plot: the basic drawing factor p0 (y-axis) vs. the number of UDP flows (x-axis)]

Figure 2–10: Basic drawing factor p0 vs. the number of UDP flows under a scenario to investigate TCP protection


CHOKeW should be able to block even more misbehaving flows than shown in Fig. 2–

9, and p0 should also be smaller than shown in Fig. 2–10.

2.4.5 Fairness

In Subsection 2.3.3, we use the analytical model to explain how CHOKeW can pro-

vide better fairness among flows at the same priority level than conventional stateless AQM

schemes such as RED and BLUE. We validate this attribute through simulations in this

subsection. Since RED and BLUE do not support multiple priority levels, and are only

used in best-effort networks, we let CHOKeW work in one priority state (i.e., w(1) = 1 for

all flows) in this subsection.

In the simulation network illustrated in Fig. 2–3, the end-to-end propagation delay of

a flow is set to one of 6, 60, 100, or 150 ms. Each of the four values is assigned to 25% of

the total number of flows.9

When there are only a few (e.g., no more than three) flows under consideration, the

fairness can be evaluated by directly observing the closeness of the goodput or throughput

of different flows. In situations where many flows are active, however, it is hard to measure

the fairness by direct observation; in this case, we introduce the fairness index:

F = (Σ_{i=1}^{N} gi)² / (N Σ_{i=1}^{N} gi²), (2.28)

where N is the number of active flows during the observation period, and gi (i = 1, 2, · · · , N )

represents the goodput of flow i. From (2.28), we know F ∈ (0, 1]. The closer the value

of F is to 1, the better the fairness is. In this chapter, we use gi as goodput instead of

throughput so that the TCP performance evaluation can reflect the successful delivery rate

9 For flow i, the end-to-end propagation delay is 4τi + 2τ0. Since τ0 is constant for all flows in Fig. 2–3, the propagation delay can be assigned a desired value given an appropriate τi.


[Plot: fairness index F (y-axis) vs. the number of TCP flows (x-axis), with curves for CHOKeW, RED, and BLUE]

Figure 2–11: Fairness index vs. the number of flows for CHOKeW, RED and BLUE

[Plot: link utilization (y-axis) vs. the number of TCP flows (x-axis), with curves for CHOKeW and CHOKeW-RED at pmax = 0.02, 0.05, and 0.1]

Figure 2–12: Link utilization of CHOKeW and CHOKeW-RED

more accurately. Fig. 2–11 shows the fairness index of CHOKeW, RED, and BLUE versus

the number of TCP flows ranging from 160 to 280. Even though the fairness decreases as

the number of flows increases for all schemes, CHOKeW still provides better fairness than

both RED and BLUE.
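The fairness index in (2.28), which has the familiar Jain-index form, can be computed directly from the measured per-flow goodput; a small helper of ours:

    def fairness_index(goodputs):
        """Fairness index F of Eq. (2.28) over per-flow goodput values g_i."""
        n = len(goodputs)
        return sum(goodputs) ** 2 / (n * sum(g * g for g in goodputs))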

2.4.6 CHOKeW versus CHOKeW-RED

An adaptive drawing algorithm has been incorporated into the design of CHOKeW,

where TCP flows can get network congestion notifications from matched drops. Simul-

taneously, the bandwidth share of high-speed unresponsive flows is also brought under


control by matched drops. As a result, the RED module is no longer required in CHOKeW.

In this subsection, we compare the average queue length, link utilization, and TCP good-

put of CHOKeW with those of CHOKeW-RED (i.e., CHOKeW working with the RED

module).

In RED, pmax is the marginal dropping probability under healthy circumstances and

should not be set to a value greater than 0.1 [62]. For these simulations, we investigate the

performance of CHOKeW-RED with pmax ranging from 0.02 to 0.1.

The relationship between the number of TCP flows and the values of link utilization,

the average queue length, and the aggregate TCP goodput is shown in Fig. 2–12, Fig. 2–13,

and Fig. 2–14 respectively. In each figure, the performance results of CHOKeW-RED are

indicated by three curves, each corresponding to one of the three values for pmax (0.02,

0.05, and 0.1).

Fig. 2–12 shows that all schemes maintain a link utilization of approximately 96%

(shown by the curves overlapping each other), which is considered sufficient for the In-

ternet. From Fig. 2–13, we can see that the average queue length for CHOKeW-RED

increases as the number of TCP flows increases. In contrast, the average queue length can

be maintained at a steady value within the normal range between L− (125 packets) and L+

(175 packets) for CHOKeW. In situations where the number of TCP flows is larger than

100, CHOKeW has the shortest queue length. Since FCFS (First-Come-First-Served) is

used,10 the shorter the average queue length, the shorter the average waiting time. Among the

above schemes, CHOKeW also provides the shortest average waiting time for packets in

the queue in most cases. In CHOKeW-RED, if L ≤ L+ is maintained by random drops

10 Logically, FCFS is a scheduling strategy and it can be combined with any buffer management scheme, such as TD (Tail Drop), RED, BLUE, or CHOKeW. FCFS is the simplest scheduling algorithm. The original RED [60], for instance, works with FCFS; it may also work with FQ, but the performance is uncertain. CHOKeW uses FCFS to minimize the complexity.


[Plot: average queue length in packets (y-axis) vs. the number of TCP flows (x-axis), with curves for CHOKeW and CHOKeW-RED at pmax = 0.02, 0.05, and 0.1]

Figure 2–13: Average queue length of CHOKeW and CHOKeW-RED

from RED (for example, this may happen when all flows use TCP), p0 does not have

an opportunity to increase its value (p0 is initialized to 0, and p0 ← p0 + p+ only when

L > L+), which causes a longer queue in CHOKeW-RED.

Besides the link utilization and the average queue length, the aggregate TCP goodput

is always of interest when evaluating TCP performance. The comparison of TCP goodput

between CHOKeW and CHOKeW-RED is shown in Fig. 2–14. In this figure, all of the

schemes have similar results. In addition, when the number of TCP flows is larger than

100, CHOKeW rivals the best of CHOKeW-RED (i.e., pmax = 0.1).

In a special environment, if the network has not experienced heavy congestion and the

queue length L < L+ has been maintained by random drops of RED since the beginning,

CHOKeW-RED cannot achieve the goal of bandwidth differentiation as p0 = 0 and thus

pi = pj = 0 even if wi ≠ wj . In other words, CHOKeW independent of RED works best.

2.4.7 CHOKeW versus CHOKeW-avg

CHOKeW employs an adaptive mechanism to adjust the basic drawing factor p0. The

speed of increase and decrease for p0 is controlled by the step lengths p+ and p−, respec-

tively. Based on the process illustrated in Fig. 2–2, if p+ and p− are set to appropriate val-

ues, p0 neither responds to network congestion too slowly nor oscillates too dramatically


[Plot: aggregate TCP goodput in Mb/s (y-axis) vs. the number of TCP flows (x-axis), with curves for CHOKeW and CHOKeW-RED at pmax = 0.02, 0.05, and 0.1]

Figure 2–14: Aggregate TCP goodput of CHOKeW and CHOKeW-RED

[Plot: average queue length in packets (y-axis) vs. the number of TCP flows (x-axis), with curves for CHOKeW and CHOKeW-avg]

Figure 2–15: Average queue length of CHOKeW and CHOKeW-avg

while queue length fluctuates due to transient bursty traffic. For the purpose of smoothing

the traffic measurement, the combination of p+ and p− is equivalent to the EWMA average

queue length avg in RED.

In this subsection, we compare the average queue length and aggregate TCP goodput

of CHOKeW with those of CHOKeW-avg (i.e., CHOKeW working with avg). The results

are shown in Fig. 2–15 and Fig. 2–16 respectively.

CHOKeW has an average queue length ranging from 147.7 to 150.7 packets and an

aggregate TCP goodput from 0.923 to 0.942 Mb/s; CHOKeW-avg has an average queue


[Plot: TCP goodput in Mb/s (y-axis) vs. the number of TCP flows (x-axis), with curves for CHOKeW and CHOKeW-avg]

Figure 2–16: Aggregate TCP goodput of CHOKeW and CHOKeW-avg

length ranging from 148.5 to 152.2 packets and an aggregate TCP goodput from 0.919 to

0.944 Mb/s. CHOKeW and CHOKeW-avg have similar results. Since avg does not

improve the performance, it is not used as an essential parameter for CHOKeW.

2.4.8 TCP Reno in CHOKeW

It is known that if two or more packets are dropped in one TCP window, the sending

rate of TCP Reno recovers more slowly than other TCP versions such as New Reno, Tahoe,

or SACK. In CHOKeW, matched drops always occur in pairs within a flow, resulting in two

packet drops per TCP window.

In this subsection, we show that although a network may have TCP Reno flows,

CHOKeW can still yield good average performance over time. When a TCP-

Reno flow reduces its sending rate after experiencing matched drops, the bandwidth share

deducted from this flow is automatically reallocated to other TCP flows. Thus CHOKeW

can still maintain good link utilization.

On the other hand, when flow i (i = 1, 2, · · · , N ) has only a small backlog in the

buffer, both the matching probability ri and the dropping probability qi are low (see Eq.(2.3)

and (2.17b)). A TCP-Reno flow that has recently suffered matched drops is unlikely to


[Plot: performance of TCP Reno (y-axis) vs. the number of TCP flows (x-axis), with curves for link utilization, average goodput in Mb/s, and minimum goodput/average goodput]

Figure 2–17: Link utilization, aggregate goodput (in Mb/s), and the ratio of minimum goodput to average goodput of TCP Reno

encounter more matched drops in the near future; the sending rate of this flow may increase

for a longer period of time than other flows.

For this simulation, all TCP flows use TCP Reno. We study the link utilization, the

aggregate TCP goodput, and the ratio of the minimum per-flow TCP goodput to the average

per-flow TCP goodput (the goodput ratio, for short). Since all the values of the link utilization,

the aggregate goodput (in Mb/s), and the goodput ratio are in the same range of [0, 1], they

are illustrated in a single diagram, i.e., Fig. 2–17.

Comparing Fig. 2–12 and Fig. 2–17, we notice that the link utilization of TCP Reno

is comparable to TCP SACK. The aggregate TCP goodput in Fig. 2–17 is larger than

0.9 Mb/s (the full link bandwidth is 1 Mb/s), which is comparable to the goodput of TCP

SACK in Fig. 2–14. The goodput ratio decreases when more TCP flows share the link, as

the possibility that one or two flows get small bandwidth is higher when more flows exist.

Nonetheless, positive goodput is always maintained and no flows are starved.

2.5 Implementation Considerations

2.5.1 Buffer for Flow IDs

One of the implementation considerations is the buffer size. As discussed in Braden

et al. [29], the objective of using buffers in the Internet is to absorb data bursts and transmit


IF L > Lth
    THEN Generate a random number v′ ∈ [0, 1)
         IF v′ < LID/(LID + L)
             THEN m ← 2 × m
                  WHILE m > 0
                      m ← m − 1
                      Draw ξb from the ID buffer at random
                      IF ξa = ξb
                          THEN L ← L − la, drop pkt
                               RETURN /* wait for the next arrival */
GOTO Step (6) in Fig. 2–1

Parameters:
ξa: flow ID of the arriving packet
ξb: flow ID drawn from the ID buffer at random
la: size of the arriving packet
LID: queue length of the ID buffer

Figure 2–18: Extended matched drop algorithm with ID buffer

them during subsequent silence. Maintaining normally-small queues does not necessarily

generate poor throughput if appropriate queue management is used; instead, it may help

result in good throughput as well as lower end-to-end delay.

When used in CHOKeW, this strategy, however, may cause a problem in which no two

packets in the buffer are from the same flow, although this is an extreme case and unlikely

to happen often, due to the bursty nature of flows. In this case, no matter how large pi is,

packets drawn from the buffer will never match an arriving packet from flow i. In order

to improve the effectiveness of matched drops, we consider a method that uses a FIFO

buffer for storing the flow IDs of recently forwarded packets.11 When the packets are

forwarded to the downstream link, their flow IDs are also copied into the ID buffer. If the

ID buffer is full, the oldest ID is deleted and its space is reallocated to a new ID. Since the

size of flow IDs is constant and much smaller than packet size, the implementation does not

11 IPv6 has defined a flow-ID field; for an IPv4 packet, the combination of source and destination addresses can be used as the flow ID.


require additional processing time or large memory space. We generalize matched drops

by drawing flow IDs from a “unified buffer”, which includes the ID buffer and the packet

buffer. This modification is illustrated in Fig. 2–18, interpreted as a step inserted between

Step (4) and Step (5) in Fig. 2–1.

Let LID denote the number of IDs in the buffer when a new packet arrives. Draws

can happen either in the regular packet buffer or in the ID buffer. The probabilities that the

draws happen in the ID buffer and the packet buffer are LID/(LID + L) and L/(LID + L), respectively.

If the draws are from the ID buffer, only one packet (i.e., the new arrival) is dropped

each time, and hence the maximum number of draws is set to 2 × pi, implemented by

m← 2×m in Fig. 2–18.
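A sketch of the extended draw step of Fig. 2–18 is given below. It is illustrative only: the ID buffer is modeled as a bounded deque, the choice between the ID buffer and the packet buffer (made with probability LID/(LID + L)) is assumed to have already been taken, and the draw budget is doubled because only the arriving packet is dropped on a match.

    import random
    from collections import deque

    class FlowIDBuffer:
        """FIFO buffer of flow IDs of recently forwarded packets."""

        def __init__(self, capacity: int):
            self.ids = deque(maxlen=capacity)   # oldest ID is evicted when full

        def record(self, flow_id):
            self.ids.append(flow_id)            # called when a packet is forwarded downstream

        def matched_drop(self, flow_id, m: int) -> bool:
            """Up to 2*m random draws; True means the arriving packet should be dropped."""
            for _ in range(2 * m):
                if self.ids and random.choice(self.ids) == flow_id:
                    return True
            return False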

2.5.2 Parallelizing the Drawing Process

Another implementation consideration is how to shorten the time of the drawing pro-

cess. When p0 > 1, CHOKeW may draw more than one packet for comparison upon each

arrival. In Section 2.2, we use a serial drawing process for the description (i.e., packets are

drawn one at a time), so that the algorithm is easy to understand. If this process does not

meet the time requirement of the packet forwarding in the router, a parallel method can be

introduced.

Let ξa be the flow ID of the arriving packet, ξib (i = 1, 2, · · · ,m) the flow IDs of the

packets drawn from the buffer. The logical operation of matched drops can be represented

by bitwise XOR (⊕) and bitwise AND (∧) as follows: if

⋀_{i=1}^{m} (ξa ⊕ ξib) = 0 (false),

then conduct matched drops. Note that the above equation is satisfied if any term of ξa ⊕ ξib

is false, i.e., any ξib drawn from the buffer can provoke matched drops if it equals ξa.

When the drawing process is applied to the packet buffer, matched drops happen in

pairs. Besides the arriving packet, we can simply drop any one of the buffered packets with

flow ID ξib that makes ξa ⊕ ξib = 0.
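Written out in software form (sequentially, whereas hardware would evaluate the m XOR comparisons concurrently and combine them), the test reduces to checking whether any drawn flow ID XORs with ξa to zero; an illustrative helper of ours:

    def matched(xi_a: int, drawn_ids) -> bool:
        """True if some drawn flow ID equals xi_a, i.e., some term xi_a XOR xi_b is zero."""
        return any((xi_a ^ xi_b) == 0 for xi_b in drawn_ids)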


2.6 Conclusion

In this chapter, we proposed a stateless, cost-effective AQM scheme called CHOKeW

that provides bandwidth differentiation among flows at multiple priority levels. Both the

analytical model and the simulations showed that CHOKeW is capable of providing higher

bandwidth shares to flows with higher priority, maintaining good fairness among flows at the

same priority level, and protecting TCP against high-speed unresponsive flows when network

congestion occurs. The simulations also demonstrated that CHOKeW is able to achieve

efficient link utilization with a shorter queue length than conventional AQM schemes.

Our analytical model was designed to provide insights into the behavior of CHOKeW

and gave a qualitative explanation of its effectiveness. Further understanding of network

dynamics affected by CHOKeW will require more comprehensive models in the future.

Parameter tuning is another area of exploration for future work on CHOKeW. As

indicated in Fig. 2–6, when the priority-weight ratio w(2)/w(1) is higher, the bandwidth

share being allocated to the higher-priority flows will be greater. In the meantime, con-

sidering that the total available bandwidth does not change, the bandwidth share allocated

to the lower-priority flows will be smaller. The value of w(2)/w(1) should be tailored to

the needs of the applications, the network environments, and the users’ demands. This

research can also be incorporated with price-based DiffServ networks to provide differen-

tiated bandwidth allocation as well as TCP protection.


CHAPTER 3
CONTAX: AN ADMISSION CONTROL AND PRICING SCHEME FOR CHOKEW

3.1 Introduction

In Differentiated Services (DiffServ) networks, flows are assigned a Per-Hop Behavior

(PHB) value, packets from a flow carry the value in the header, and routers along the

path handle the packets according to the value [25, 78, 109]. In order to model DiffServ

networks, a PHB that corresponds to better service can be mapped into a higher priority

class [102]. Then the PHB value is considered the class ID, which determines the service

quality that routers provide to packets of this class.

In DiffServ networks, similar to the architecture proposed in Core-Stateless Fair Queue-

ing (CSFQ) [128], routers are divided into two categories: edge (boundary) routers and core

(interior) routers. The number of flows going through an edge router is much smaller than

that going through a core router. Sophisticated operations, such as per-flow classification

and marking, are implemented in edge routers. By contrast, core routers do not require

per-flow-state maintenance so that they can serve packets as fast as possible.

Our previous work, CHOKeW [135], is an Active Queue Management (AQM) scheme

designed for core routers. It is a stateless algorithm but able to provide bandwidth differ-

entiation and TCP protection. It is also able to maintain good fairness among flows of the

same priority class.

For readers’ convenience, we give a brief introduction of CHOKeW before starting the

discussion of our pricing scheme, which is the main focus of this chapter. CHOKeW

reads the priority value of an arriving packet from the packet header. Assuming the arriving

packet has priority i, CHOKeW uses a priority weight wi to calculate the drawing factor pi

from

pi = p0/wi, (3.1)


where p0 is the basic drawing factor that is adjusted according to the severity of network

congestion. The heavier the network congestion is, the larger the value of p0 will be. On

the other hand, a higher priority class corresponds to a larger wi and a smaller pi. Thus pi

carries the information of network congestion status as well as the priority of the flow.

When a packet arrives at a core router, CHOKeW will draw some packets from the

buffer of the core router at random, and compare them with the arriving packet. If a packet

drawn from the buffer is from the same flow as the new arrival, both of them will be

dropped. In CHOKeW, the maximum number of packets that will be drawn from the buffer

upon each arrival is pi, which is mentioned above. The strategy of “matched drops”, i.e.,

dropping packets from the same flow in pairs, was designed for CHOKe by Pan et al. [112],

to protect TCP flows. CHOKe works in traditional best effort networks, while CHOKeW

was designed for DiffServ networks that are able to support multiple priority classes. Other differences between CHOKe and CHOKeW can be found in our previous work [135].
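For illustration only, the following minimal Python sketch shows how a core router could apply the drawing factor of Eq. (3.1) to each arrival. The data structures and helper names are our own assumptions, and the fractional drawing factor and multiple matched drops of the real scheme are simplified away.

```python
import random

# Illustrative sketch of CHOKeW-style matched drops (not the exact implementation).
# On each arrival, up to p_i = p_0 / w_i packets are drawn at random from the buffer;
# a packet from the same flow as the arrival is dropped together with the arrival.

w = {1: 1.0, 2: 2.0}      # assumed priority weights, so class 2 sees fewer draws
buffer = []               # FIFO buffer of (flow_id, priority) tuples

def on_arrival(flow_id, priority, p0):
    """Return True if the arriving packet enters the buffer, False if it is dropped."""
    draws = int(p0 / w[priority])          # p_i from Eq. (3.1), truncated for simplicity
    for _ in range(draws):
        if not buffer:
            break
        victim = random.choice(buffer)     # draw a buffered packet at random
        if victim[0] == flow_id:           # matched drop: same flow as the arrival
            buffer.remove(victim)
            return False                   # drop both (we stop at the first match here)
    buffer.append((flow_id, priority))     # no match: the arrival is enqueued
    return True
```

A faster flow occupies more buffer slots and therefore matches more often, which is how this kind of scheme penalizes unresponsive high-rate flows.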

In DiffServ networks, besides a buffer management scheme for core routers, an ad-

mission control strategy for edge routers is also necessary. Otherwise, whenever users

arrive, they can start to send packets into the network, even if the network has been heavily

congested. The lack of admission control strongly devalues the benefit that DiffServ can

produce, and the deterioration of service quality resulting from network congestion cannot

be solved only by CHOKeW.

Pricing is a straightforward and efficient strategy to assign priority classes to different

flows, and to alleviate network congestion by raising the price when the network load

becomes heavier.

When a network is modeled as an economic system, a user who is willing to pay a

higher price will go to a higher priority class and thus will be able to enjoy higher-quality

service. Moreover, by charging a higher price, a network provider can control the number

of users who are willing to pay the price to use the network, which, in return, becomes a

method to protect the service quality of existing users.


Figure 3–1: ConTax-CHOKeW framework. ConTax is in edge routers, while CHOKeW is in core routers.

We present a pricing scheme for CHOKeW in this chapter. Our pricing scheme works

in edge networks, which assign higher priority to users who are willing to pay more. When

the network congestion is heavier, our pricing scheme will increase the price by a value

that is proportional to the congestion measurement, which is equivalent to charging a tax

due to network congestion—thus we name our pricing scheme ConTax (Congestion Tax).

The chapter is organized as follows. Our scheme is introduced in Section 3.2, includ-

ing the ConTax-CHOKeW framework in Subsection 3.2.1, the pricing model in Subsection

3.2.2, and the user demand model in Subsection 3.2.3. We use simulations to evaluate the

performance of our scheme in Section 3.3, which covers the experiments for investigating

the control of the number of users that are admitted, the regulation of the network load,

and the gain of aggregate profit for the network service provider. Finally, the chapter is

concluded in Section 3.4.

3.2 The ConTax Scheme

3.2.1 The ConTax-CHOKeW Framework

ConTax is a combination of a pricing scheme and an admission control scheme. It

can be implemented in edge routers, gateways, AAA (authentication, authorization and

accounting) servers, or any devices that are able to control the network access. Without

loss of generality, in this chapter we assume that edge routers are the devices that have a

ConTax module.


The ConTax-CHOKeW framework is illustrated in Fig. 3–1. In this figure, hexagons

represent core routers, circles are edge routers, and diamonds denote users. When users try

to obtain network access, they connect to the neighboring edge routers and look up the price

for a desired priority class, which is provided by ConTax. If the price is under the budget,

they pay the price and get the network access; otherwise, they do not request the network

access.

After user U obtains the network access from edge router E by paying credits ρ(i)

that corresponds to priority i, U can send packets into the network via E. Each packet

from U is marked with priority i by E before it enters the core network. When a packet

from U arrives at core router C, C uses CHOKeW to decide whether to drop this packet

and another packet belonging to the same flow from the buffer, i.e., to conduct matched

drops. If the arriving packet is not dropped, it will enter the buffer. However, this packet

may still be dropped by CHOKeW before it is forwarded to the next hop, if the sending rate

of this flow is much faster than other flows, since a faster sending rate causes more arrivals

during the same period of time. Thus CHOKeW is able to provide better fairness among

the flows in the same priority class than conventional AQM schemes such as RED [60] and

BLUE [59].

In addition to marking arriving packets with priority values, edge router E is respon-

sible for adjusting prices according to the network congestion. A higher price has a higher

potential to exceed the budget of more users. If the price rises when congestion happens,

fewer users are willing to pay the price to use the network, and consequently, the network

congestion is alleviated. By using the pricing scheme, edge routers can effectively restrict

the traffic that enters the core networks to a reasonable volume so that it will not cause

significant congestion. On the other hand, when the network is less congested, the price

should be reduced appropriately to avoid low link utilization. The pricing function will be

discussed in the following section.


3.2.2 The Pricing Model of ConTax

In ConTax, each priority class has a different price (in credits/unit time, e.g., dol-

lars/minute). For a priority class, the price is composed of two parts, a basic price for each

priority class, and an additional price that reflects the severity of network congestion. We

first look at the basic price,

ρ0(i) = βi + c, (3.2)

where i denotes the priority of the service. Bear in mind that a flow in a higher priority class

can get more resources (in CHOKeW, we mainly focus on bandwidth). Eq.(3.2) shows the

relationship between the quantity of resources and the price. The price increasing rate is

determined by the slope, β (β > 0), and the initial price is controlled by the y-intercept, c

(c > 0).

In particular, if a flow in priority i gets i times the bandwidth of a flow in priority 1,

formula (3.2) is equivalent to the following continuous function

ρ0(x) = βx + c, (3.3)

where x (x > 0) represents the resource quantity allocated to this flow.

Eq.(3.3) matches many current pricing strategies of ISPs (for example, the DSL Inter-

net services provided by BellSouth [22]).

One of the features of our pricing model is that the more resources a user obtains, the

less the unit-resource price will be. The unit-resource price is denoted by r = ρ0/x. From

(3.3), we have ∂r/∂x = −c/x². Since c > 0, ∂r/∂x < 0.

Another feature of our pricing model is that the unit-resource price is a convex function of resources, as ∂²r/∂x² > 0. In other words, when a user obtains more resources

from the provider, the unit-resource price will decrease, but the decreasing speed will slow

down, which guarantees that the unit price will never reach zero.
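For completeness, both properties follow directly from (3.3): writing the unit-resource price explicitly gives

r(x) = ρ_0(x)/x = β + c/x,   ∂r/∂x = −c/x²,   ∂²r/∂x² = 2c/x³,

so the first derivative is negative and the second is positive for x > 0 and c > 0.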


When we have a congestion measurement, we can use it to build a “congestion tax”,

represented by t. Then the final price ρ is determined by

ρ(i) = ρ0(i)(1 + γt), (3.4)

where γ (γ > 0) is a constant that reflects the sensitivity of price to network congestion.

Now one may question how to measure the congestion of core routers appropriately

in an edge router. One solution is to let core router C send a control message to E if C

is congested. The control message informs E of the current congestion status, which may be

determined by queue length in C, such as the congestion measurement used in RED [60].

However, we argue that this is not a cost-effective solution, since C needs to track the

sources of the packets that cause the congestion before sending the message to the corre-

sponding edge routers. The link capacity in core networks is usually larger than that on the

edges, as illustrated in Fig. 3–1, where a thicker line represents a link with higher band-

width. A core router becomes a bottleneck only because many flows go through it. There-

fore, the congestion in core networks results from the traffic generated by many senders. If

this strategy is used, the core router being congested has to send messages (or many copies

of the same message) to different edge routers, and the control messages could worsen the

congestion, since more bandwidth is required to transmit the messages.

We notice that an edge router is able to record the number of users in each priority

class, as these users receive the network admission from the edge router. For an edge

router, let ni denote the number of users in priority class i.1 As a user in higher priority

tends to consume more network resources in the core network and thus likely contributes

more to the network congestion that is happening now or may happen in the future, a

1 Here n_i only counts the users sending packets to a core network. Traffic that does not enter the core network will not cause any congestion in core routers, and hence is neglected when n_i is calculated.


simple method to measure the network congestion, from the viewpoint of the edge router, is to use ∑_{i=1}^{I} i n_i, where the positive integer I is the highest priority class supported by the network. In the rest of this chapter, we also call ∑_{i=1}^{I} i n_i the network load for the edge router, and it will be used to charge the congestion tax. However, congestion tax should not be charged when ∑_{i=1}^{I} i n_i is very small. We introduce a threshold to indicate the beginning of congestion, denoted by M (M > 0), and the congestion measurement becomes

t = max(0, ∑_{i=1}^{I} i n_i − M).   (3.5)

Substituting (3.2) and (3.5) into (3.4), we get

ρ(i) = (βi + c) (1 + γ max(0, ∑_{i=1}^{I} i n_i − M)).   (3.6)

When ∑_{i=1}^{I} i n_i ≤ M, no congestion tax is added to the final price, and ρ(i) = ρ_0(i).

Based on the above discussions, the ConTax algorithm is described in Fig. 3–2. In

this scheme, when user U keeps using the network, the price ρU that U needs to pay does

not change, which is determined at the moment when U is admitted into the network. The

philosophy here is to consider ρU a commitment made by the network provider. Some

other pricing based admission control protocols, such as that proposed by Li, Iraqi and

Boutaba [93], use class promotion for existing users to maintain the service quality without

charging a higher price. In our scheme, if the network becomes more congested, new users

will be charged higher prices, which prevents further deterioration of the congestion, and

thus maintains the service quality for existing users to some extent. From Fig. 3–2, an edge

router updates the price ρ(j) for every priority class j (j = 1, 2, ..., I) only when a new user is

admitted or an existing user completes the communication.
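A minimal Python sketch of this behavior, combining the price of Eq. (3.6) with the two events of Fig. 3–2, is given below. The class and method names are ours for illustration only; prices are recomputed on demand rather than stored, which is equivalent to the explicit update steps.

```python
class ConTaxEdge:
    """Illustrative sketch of a ConTax edge router (Fig. 3-2, Eq. (3.6))."""

    def __init__(self, beta, c, gamma, M, num_classes):
        self.beta, self.c, self.gamma, self.M = beta, c, gamma, M
        self.n = {i: 0 for i in range(1, num_classes + 1)}   # admitted users per class

    def load(self):
        # Network load seen by this edge router: sum_i i * n_i
        return sum(i * ni for i, ni in self.n.items())

    def price(self, i):
        # Eq. (3.6): basic price rho_0(i) plus a congestion tax above threshold M
        tax = max(0, self.load() - self.M)
        return (self.beta * i + self.c) * (1 + self.gamma * tax)

    def user_arrives(self, i, budget):
        """Event (1) of Fig. 3-2: admit the user if the current price fits its budget."""
        rho = self.price(i)
        if rho < budget:
            self.n[i] += 1        # admitting the user changes the load, hence all prices
            return rho            # the committed price rho_U
        return None               # not admitted

    def user_leaves(self, i):
        """Event (2) of Fig. 3-2: the user departs; prices fall back with the load."""
        self.n[i] -= 1


# Example with the parameter values used later in Section 3.3 (beta, c in dollars/min):
edge = ConTaxEdge(beta=3.0e-4, c=7.0e-4, gamma=5.0e-3, M=20, num_classes=2)
print(edge.price(2))              # basic price while the load is below M
```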

3.2.3 The Demand Model of Users

When the price changes, the user’s demand for network access also changes.

A popular demand model is to regard the demand function as a probability of network

access which is determined by the price difference between the current price and the basic


(1) When user U arrives at edge router E
      U selects a priority class i
      IF ρ(i) is less than the budget of U
      THEN
            E charges U price ρ_U = ρ(i)
            U starts to use the network
            E updates ρ(j) for all j = 1, 2, ..., I
      ELSE U is not admitted into the network

(2) When user U stops using the network
      E stops charging U price ρ_U
      E updates ρ(j) for all j = 1, 2, ..., I

Figure 3–2: ConTax algorithm

price [82, 93]. In ConTax, for priority class i, the user demand d(i) can be modeled as

d(i) = exp(−σ (ρ(i)/ρ_0(i) − 1)).   (3.7)

In (3.7), σ is the parameter to determine the sensitivity of demand to the price change.

From (3.6) and (3.7), we illustrate the method to determine the value of demand based on the network load ∑_{i=1}^{I} i n_i in an edge router in Fig. 3–3. In the market, the network load can be interpreted as supply, i.e., by charging price ρ(i) for each priority class i, the network provider is willing to provide service that is equal to ∑_{i=1}^{I} i n_i. The heavier the load is, the higher the price will be.

We do not draw the supply-demand relationship in one single graph, because in ConTax, the supply is the sum of the load of all priority classes, while the demand takes effect on each class individually. By using two graphs in Fig. 3–3, given network load ∑_{i=1}^{I} i n_i, we can find the price for a priority class according to the supply curve corresponding to the class in the left graph. Then, in the right graph, the same price maps onto a demand value that is determined by the corresponding demand curve. The above process is illustrated in Fig. 3–3 by dashed arrows.


Figure 3–3: Supply-demand relationship when ConTax is used. The left graph shows the price-supply curves, and the right graph the price-demand curves for each class.

3.3 Simulations

Our simulations are based on the ns-2 simulator (version 2.29) [103], and focus on the performance of ConTax in edge router E upon random arrivals of users. In the network shown in Fig. 3–1, we assume that user arrivals follow a Poisson process. Even though the validity of the Poisson model for traffic in Wide Area Networks (WANs) has been questioned, investigations have shown that user-initiated TCP sessions can still be well modeled as Poisson arrivals [116]. In the simulations, we let the average arrival rate be λ = 3 users/min unless specified otherwise. An arriving user is admitted into the network with a probability equal to the demand in Eq. (3.7). If the new arrival is admitted, the data transmission

will last for a period of time, which is simulated by a random variable, τ . The Cumulative

Distribution Function (CDF) of τ follows a Pareto distribution, i.e.,

F(τ) = 1 − (τ_0/τ)^k,   (3.8)

where k (k > 0) is the shape parameter of the Pareto distribution and τ_0 is the minimum possible value of τ. When 1 < k ≤ 2, which is the most frequently used range, the Pareto distribution has a finite mean value τ_0 k/(k − 1) but an infinite variance. Previous studies have shown that WWW traffic complies with a Pareto distribution with k ∈ (1.16, 1.5) [51]. We set k = 1.4 and τ_0 = 5.714 min, which corresponds to E(τ) = 20 min. The simulations are stopped when the number of user

arrivals reaches 1000. The simulation results are compared with those of pricing without congestion tax. To determine the price given a traffic load (i.e., ∑_{i=1}^{I} i n_i), we set the


sensitivity parameter γ = 5.0 × 10^−3 and the threshold of charging a congestion tax M = 20. The demand of users is simulated with parameter σ = 0.5, which is based on the assumption that the willingness to use the network is close to half when the price is doubled (i.e., ρ(i) = 2ρ_0(i)). Two parameters for calculating the basic price, β and c, are assigned values 3.0 × 10^−4 dollars/min and 7.0 × 10^−4 dollars/min, respectively.
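The workload just described can be sketched in a few lines of Python: Poisson arrivals, admission with probability equal to the demand of Eq. (3.7), and Pareto session lengths per Eq. (3.8). The script below is only a simplified stand-in for the ns-2 experiments; the variable names are ours and the bookkeeping of active users is deliberately naive.

```python
import math
import random

lam, sigma = 3.0, 0.5          # arrival rate (users/min) and demand sensitivity
k, tau0 = 1.4, 5.714           # Pareto shape and scale: E[tau] = tau0*k/(k-1) = 20 min
beta, c = 3.0e-4, 7.0e-4       # basic-price parameters (dollars/min)
gamma, M, I = 5.0e-3, 20, 2    # tax sensitivity, congestion threshold, priority classes

def pareto_duration():
    # Inverse-CDF sampling of Eq. (3.8): F(tau) = 1 - (tau0/tau)^k
    return tau0 / (1.0 - random.random()) ** (1.0 / k)

def demand(rho, rho0):
    # Eq. (3.7): probability that an arriving user accepts the quoted price
    return math.exp(-sigma * (rho / rho0 - 1.0))

t, sessions = 0.0, []          # simulation clock (min) and list of (end_time, class)
for _ in range(1000):          # stop after 1000 user arrivals, as in the text
    t += random.expovariate(lam)                       # Poisson arrivals
    sessions = [s for s in sessions if s[0] > t]       # remove finished sessions
    load = sum(cls for _, cls in sessions)             # sum_i i*n_i over active users
    cls = random.randint(1, I)                         # the new user picks a class
    rho0 = beta * cls + c
    rho = rho0 * (1 + gamma * max(0, load - M))        # Eq. (3.6)
    if random.random() < demand(rho, rho0):            # admitted with probability d(i)
        sessions.append((t + pareto_duration(), cls))

print(len(sessions), "users still active at the end of the run")
```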

3.3.1 Two Priority Classes

When two priority classes are supported, each user randomly selects one of the classes. The network load for edge router E, the number of users admitted, the demand of users, and the aggregate price changing with time are shown in Figs. 3–4, 3–5, 3–6, and 3–7, respectively.

When congestion tax is not charged, i.e., ρ(i) = ρ0(i) all the time, an arriving user is

always admitted since the demand is constantly 1. So the number of admitted users equals the number of arrivals that have not yet left: it rises whenever a user arrives and decreases when a user completes the communication and leaves. By contrast, when ConTax is applied, after the

time reaches 306 sec, the demand becomes less than 1 due to more users admitted and a

load exceeding M (Fig. 3–6). Some users choose not to obtain the admission when they

arrive. In each class, the number of users admitted into the network is smaller than the

number of arrivals, illustrated by a sub-figure in Fig. 3–5. Accordingly, in Fig. 3–4, the

network load of using ConTax is also lower than that without congestion tax.

The network provider is concerned with the aggregate price not only for alleviating network congestion, but also for earning profit. From Fig. 3–7, we see that ConTax can

bring a higher aggregate price and therefore more profit to the network provider. On the

other hand, by paying a price that is slightly higher than the basic price, a user can enjoy

the network service with better quality due to less congestion.

3.3.2 Three Priority Classes

The results for three priority classes are similar to those for two priority classes: the network load for the edge router (Fig. 3–8), the number of users


Figure 3–4: Dynamics of network load (i.e., ∑_{i=1}^{I} i n_i) in the case of two priority classes

admitted for each priority class (Fig. 3–9), and the demand (Fig. 3–10) are all lower when ConTax is employed than without congestion tax, while the aggregate price is higher (Fig. 3–11).

By comparing Fig. 3–8 with Fig. 3–4, we see that the network load is heavier in the

case of three priority classes than that of two classes, since the users in the third priority

class consume more resources.

The demand curve in Fig. 3–10 is lower than the curve in Fig. 3–6, resulting from the

higher load in the network that supports more priorities.

The curve of aggregate price in Fig. 3–11 is higher than the curve in Fig. 3–7. In other

words, by supporting more priority classes, the network providers can make more profit,

and users are better served by having more options for their applications.2

2 Generally, a higher quality of service can produce higher utility for users, even though they pay a higher price. When multiple priority classes are available, users are more likely to find the optimal point that leads to the highest benefit. At this point the difference between the utility and the price is maximized [50].


Figure 3–5: Number of users that are admitted into the network in the case of two priority classes. (a) Priority class 1; (b) Priority class 2


Figure 3–6: Demand of users in the case of two priority classes

Figure 3–7: Aggregate price in the case of two priority classes


Figure 3–8: Dynamics of network load in the case of three priority classes

3.3.3 Higher Arriving Rate

In Subsections 3.3.1 and 3.3.2, the average arrival rate of users is 3 users/min. Since the traffic volume in the network varies with time and location, we are also interested in the performance of ConTax when the arrival rate is different. In this

subsection, we let λ = 6 users/min and repeat the simulations in Subsection 3.3.1. The

results are shown from Fig. 3–12 to Fig. 3–15.

First of all, we can see that the difference between the number of admitted users and the number of arrivals is more significant in Fig. 3–13 than in Fig. 3–5, by comparing

the results from the same priority class. The reason is that the network load tends to be

heavier when more users arrive during the same period of time, which leads to a smaller

demand that is demonstrated by the comparison between Fig. 3–14 and Fig. 3–6. Corre-

spondingly, the advantage of using a congestion tax regarding the network load, as shown

by the difference of the two curves in Fig. 3–12, is more significant than that in Fig. 3–4.

The aggregate price curve (Fig. 3–15) has a similar shape to the curve in Fig. 3–7, but it rises faster when λ = 6 users/min, with a growth rate that is also roughly doubled.


Figure 3–9: Number of users that are admitted into the network in the case of three priority classes. (a) Priority class 1; (b) Priority class 2; (c) Priority class 3


Figure 3–10: Demand of users in the case of three priority classes

Figure 3–11: Aggregate price in the case of three priority classes


Figure 3–12: Dynamics of network load when arriving rate λ = 6 users/min

3.4 Conclusion

The ConTax-CHOKeW framework is a cost-effective DiffServ network solution that in-

cludes pricing and admission control (provided by ConTax) plus bandwidth differentiation

and TCP protection (supported by CHOKeW). By using the sum of the priority classes of all admitted users as the network load measurement, ConTax allows edge routers to work independently. This saves network resources as well as the management cost of periodically sending control messages from core routers to edge routers to update the network congestion status.

ConTax adjusts the prices for all priority classes when the network load for an edge

router is greater than a threshold. The heavier the load is, the higher the prices will be.

The extra price above the basic price, i.e., the congestion tax, is shown to effectively control the number of users that are admitted into the network.

By using simulations, we also show that when more priority classes are supported, the

network provider can earn more profit due to a higher aggregate price. On the other hand,

a network with a variety of priority services provides users with greater flexibility, which in turn meets the specific needs of their applications.


Figure 3–13: Number of users that are admitted into the network when arriving rate λ = 6 users/min. (a) Priority class 1; (b) Priority class 2


Figure 3–14: Demand of users when arriving rate λ = 6 users/min

Figure 3–15: Aggregate price when arriving rate λ = 6 users/min


When the arriving rate of users rises, the network load also increases, and the de-

mand decreases accordingly. This may result in a more noticeable performance difference

between ConTax and a pricing scheme that does not charge congestion tax.


CHAPTER 4
A GROUP-BASED PRICING AND ADMISSION CONTROL STRATEGY FOR WIRELESS MESH NETWORKS

4.1 Introduction

Wireless Mesh Networks (WMNs) have attracted a great deal of attention from both academia and industry. One of the main purposes of using WMNs is to swiftly extend Internet coverage in a cost-effective manner. The configurations of WMNs, determined by user locations and application features, however, are highly dynamic and flexible.

As a result, it is highly possible for a flow to go through a wireless path consisting of

multiple parties before it reaches the hot spot that is connected to the wired Internet. This

feature results in significant differences between admission control for WMNs and for traditional

networks.

Admission control is closely related to Quality of Service (QoS) profiles and pricing

schemes. In traditional networks, if only best-effort traffic exists, flat-rate pricing is nor-

mally the most straightforward and practical choice. Under this scenario, users are charged

a constant monthly fee or a constant number of credits for each time period determined by

a similar billing plan, based on a contract agreed by the users and the service provider. We

also notice that only two parties are involved in the admission control procedure in the con-

ventional networks, i.e, an ISP who provides the network access and a user who submits

the admission request.

Even though flat-rate pricing is easy to use for traditional best-effort traffic, if the

network is designed to support multiple priority levels, such as Label Switching [10, 14],

Integrated Services/Resource ReSerVation Protocol (IntServ/RSVP) [27, 28], or Differentiated Services (DiffServ) [25], it is necessary to differentiate the priority in the pricing policy.

Accordingly, admission control needs to consider the available resources along the path that


a flow will follow. For example, in ATM networks, an admission decision (for example,

Courcoubetis et al. [49]) is typically made after the connection request completes a round

trip and shows resources are available in each hop.1

Because multiple parties may be in the path, the design of an admission control

scheme for WMNs is different from traditional admission control schemes. It is ineffi-

cient and infeasible to ask for the confirmation from each hop along the route in WMNs.

As the network structures of WMNs are highly dynamic, a group-based one-hop admission

control is more realistic than the traditional end-to-end admission control.

On the other hand, compared with wireless ad-hoc networks, the mobility of mesh

routers is usually minimized, which results in the possibility of inexpensive maintenance,

reliable service coverage, and nonstop power supply without energy consumption con-

straints [5]. In this chapter, we assume the physical locations of the groups do not change

dramatically during the observation period.

Some previous research has touched on admission control in WMNs [54, 92, 145]. Lee et al. focused on path selection to satisfy the rate and delay requirements upon an admission request [92]. Zhao et al. incorporated load balancing into admission control to select a mesh path [145].

with each other. However, we believe that it is impractical to let devices of one party

be deeply involved in the operations of another party. An admission control scheme

that minimizes the involved parties is necessary for WMNs. In addition, when admission

control happens between two parties, economically, the benefit gain becomes a main reason

to share resources, and thus a pricing mechanism should also be considered.

Efstathiou et al. proposed a public-private key based admission control scheme [54],

where a reciprocity algorithm is employed to identify a contributor, i.e., a user who also

1 A minor exception is for Unspecified Bit Rate (UBR) traffic that does not require a QoS guarantee, and therefore works in a best-effort manner.


provides network access to other users, and only contributors can obtain network access from other contributors when they are mobile. We notice that their scheme works as a barter market from an economic viewpoint, since all users trade their network resources for other users’ network resources. As a monetary system is more efficient and more flexible, we expect to design an admission control scheme combined with pricing, where users can make payments in any form of credits that they are used to using in their daily lives.

In industry, an admission control scheme combined with pricing, for IEEE 802.11

based networks, was devised by Fon [65]. The scheme needs all parties to be registered in

a control center before they share network resources with each other, and the price, which

still uses flat rate and is unrelated to the service quality, is also determined by the control

center. This causes inflexibility for parties who prefer to make their own admis-

sion decisions according to the current available resources and requested service quality.

Therefore, a distributed admission control scheme is more appropriate for WMNs.

In order to meet the requirements, we propose a group-based admission control scheme,

where only two parties are involved in the operations upon each admission request. The de-

termination criteria are the available resources and requested resources, which correspond

to supply and demand in an economic system. The involved parties use the economic notions of utility, cost, and benefit to calculate the available and requested resources. Therefore, our scheme is named APRIL (Admission control with PRIcing Leverage). Since the operations of our scheme are conducted in a distributed manner, there is no need for a single con-

trol center.

The chapter is organized as follows. We introduce the idea of groups in WMNs in

Section 4.2. The pricing model is discussed in Section 4.3. Based on the pricing model,

Section 4.4 provides the procedure of our admission control scheme, followed by perfor-

mance evaluation in Section 4.5. The chapter is concluded in Section 4.6.


4.2 Groups in WMNs

In order to provide ubiquitous Internet access, a mesh network can be an open system

that allows devices of a party to obtain network access from another party that already has

the network access, and thereafter to further share resources with other parties that need

the network access.

WMNs are composed of hot spots, mesh routers, mesh users, and hybrid devices [5].

A hot spot is the Internet access point of the WMN.2 A mesh router is a device that forwards

packets for other devices. By contrast, a mesh user is a device that sends/receives packets

for itself. A hybrid device functions as a mesh router and a mesh user at the same time.

For a general design, we model a set of devices as a “group”, within which admission

control is not considered. A group includes at least one device; but in most cases, it is

a set of multiple devices. The reason to use “groups” instead of physical devices for the

operation and maintenance of WMNs is as follows.

When some devices of the same party are close to each other in physical positions,

they can form a group, and each device becomes a group member. Within the group,

the resources are shared by all group members using some mechanisms or protocols de-

termined by the group administrator or controlled under an agreement of group members,

using either centralized management or distributed management. It is reasonable to assume

that the devices within a group cooperate with each other and work as a single system, and

admission control among these devices is not as important as admission control among

devices belonging to different groups. By splitting devices into groups, we only focus on

the admission control among different groups.

Traffic in WMNs usually goes from mesh users to the Internet through mesh routers

and hot spots, which creates a “parking-lot” scenario [68]. In most cases, based on the

2 It is possible that a WMN has multiple hot spots. In this chapter, we only discuss the case with one hot spot for simplicity.


Figure 4–1: Tree topology formed by groups in a WMN

traffic routes, the topology of a WMN can be illustrated by a tree or multiple trees, each

having a hot spot as the root, mesh users as the leaves, and mesh routers or hybrid devices

as branches. If we substitute groups for devices, the root, the branches, and the leaves

are all formed by groups. In the tree topology illustrated in Fig. 4–1, let G_0 represent the group that includes the hot spot. Several other groups are connected to G_0 directly to get the network access. They are represented by G_1^(1), G_1^(2), ..., respectively.

For general purposes, we denote a group in the tree topology by G_i, and denote by G_{i−1} the group that provides network access to G_i. G_i also provides network access to groups G_{i+1}^(1), G_{i+1}^(2), ..., G_{i+1}^(k). After G_i obtains the permit of network access from G_{i−1} by paying some credits, it can further share the resources with G_{i+1}^(j), j = 1, 2, ..., by accepting payments from them.3 The details of resource sharing between G_i and G_{i+1}^(j) are transparent to G_{i−1}. By using this method, only two groups, resource provider G_i and resource user G_{i+1}^(j), are involved in the admission control operations when G_{i+1}^(j) sends the request for network access.

3 The payment transactions need the protection of reliable network security. The design of WMNs with secure payment transactions is beyond the scope of this chapter. Interested readers may refer to other literature such as Zhang and Fang [144] for more information.


In special cases, if the admission control needs to be conducted between different de-

vices within a group—this may happen although these devices are from the same party—

we can always further categorize the devices of the same group into subgroups. The group-

ing process goes forward until all admission control operations occur between groups. In

this manner, we ensure that only two subgroups are involved in the admission control upon

each request, and the number of parties involved in the admission control is minimized.

4.3 The Pricing Model for APRIL

APRIL treats WMNs as economic systems. For ease of explanation, when a group

requests the network access, it is called a user; when a group provides the network access,

it is called a provider. Due to the characteristics of WMNs, a user can also be a provider

when it gives network access to other users.

Let p denote the price that a user needs to pay the provider, in order to get resources

x. We assume

p = βx + c,   s.t. 0 < x ≤ X,   (4.1)

where β, c (β > 0, c ≥ 0) are constants. The price increasing rate is determined by the

slope, β, the initial price is controlled by the y-intercept, c, and X (X ≥ 0) is the available

resources. If X = 0, we say that no resources are available; accordingly, no value of x can

satisfy (4.1), which means that the user cannot get any resources from the provider.

This assumption matches many current pricing strategies of ISPs. For example, the

DSL Internet services provided by BellSouth have a different monthly price for each plan

[22]. By collecting the values of the uplink and downlink bandwidth, we illustrate the

prices and the corresponding bandwidth in Fig. 4–2. A linear approximation compliant

with (4.1) is also added in each subfigure. We see that the downlink and uplink prices have

approximation p = x/300 + 27 and p = x/16 + 15.5, respectively. In reality, the values of

uplink bandwidth and downlink bandwidth are included in a service plan, so we only have

to focus on one link direction. In our model, we use the general term “resources”, which


Figure 4–2: Prices vs. bandwidth of BellSouth DSL service plans. (a) Downlink, with linear approximation p = x/300 + 27; (b) Uplink, with linear approximation p = x/16 + 15.5

can denote the combination of uplink and downlink bandwidth, or other types of resources,

depending on the specific requirements.

One of the features of our pricing model is that the more resources a user gets, the less

the unit-resource price will be. The unit-resource price is denoted by r = p/x. From (4.1),

we have ∂r/∂x = −c/x². When x > 0, ∂r/∂x < 0.

Another feature of our pricing model is that the unit-resource price is a convex function of resources, since ∂²r/∂x² > 0. In other words, when a user obtains more resources

from the provider, the unit-resource price will decrease, but the decreasing speed will slow

down, so that the unit price will never reach zero.

4.4 The APRIL Algorithm

4.4.1 Competition Type for APRIL

The APRIL algorithm is based on the knowledge of economic systems of WMNs. We

need to find the competition type of the market corresponding to these systems. Traditionally, a

market can be modeled by one of the three competition types: 1) a monopoly, if a single

provider controls the amount of goods (i.e., resources in the network access market of

WMNs herein) and determines the market price, 2) a competitive market, if there are many


providers and users but none of them can dictate the price, and 3) an oligopoly, where only

a few providers are available [50].4

APRIL is designed for letting groups that already have network access share the access

(measured by available resources) with new groups swiftly and conveniently. If a group

has available resources for share, it will become a provider. Therefore, it is reasonable to

assume that many providers exist, and thus the economic systems are competitive markets,

instead of monopolies or oligopolies. The competition type may change in the long run,

due to possible lower prices resulting from the growth of one or a few providers that bring

production economies of scale into effect [50,117]. However, since APRIL is mainly used

for short-term network access where no direct connection to a hot spot of an ISP is available

and for the fast extension of the Internet coverage, the evolution of market competition is

out of the scope of this chapter.

In a competitive market, none of the participants (including the providers and the

users) is capable of controlling the prices, but they can adjust the supply and the demand

in order to obtain maximum benefit.5 A competitive market usually has the supply-demand

constraint,

∑_{i=1}^{m} x_i ≤ ∑_{j=1}^{n} X_j,

where ∑_{i=1}^{m} x_i represents the aggregate demand of all users, ∑_{j=1}^{n} X_j the aggregate supply of all providers, m the number of users, and n the number of providers [50]. The same

constraint still holds in a resource market of a WMN, where a user adjusts the demand to

maximize the benefit under this constraint (see Subsection 4.4.2 for details). On the other

4 A duopoly, where only two providers exist in the market, is considered to be a special case of an oligopoly.

5 In some of the literature, benefit of providers is also called profit. We use the word “benefit” for providers as well as users in this chapter, since a user may also be a provider at the same time in a WMN.


hand, we assume that each user only obtains the network access through one provider at a

time, so that the complexity of network configurations (such as routing algorithms, address

allocations, load balance, traffic aggregation, etc.) will not be too high. This circumstance

is different from the conventional competitive markets. We apply the nonnegative benefit

principle to providers, which will be discussed in detail in Subsection 4.4.3.

4.4.2 Maximum Benefit Principle for Users

For a user, the benefit is the difference between the utility generated by using a certain

amount of resources and the price charged by a provider who supplies the resources to

the user. The maximum benefit principle requires the user to seek the amount of resource

demand that maximizes the benefit. The demand is subject to the supply-demand constraint

mentioned above.

Assume group Gi can generate utility ui by using resources xi provided by group

Gi−1. On the other hand, Gi has to pay Gi−1 price pi in order to use the resources xi. The

values of pi, ui and xi satisfy

p_i = βx_i + c,   (4.2a)

u_i = max(0, α_i log(x_i − x̄_i + 1)),   s.t. 0 < x_i ≤ X_i.   (4.2b)

The utility function ui(xi) is increasing and concave, which matches the features of

elastic traffic [124]. Compared with some other utility models such as Kelly et al. [85], our

utility model (4.2b) has a term x̄_i. This term represents the minimum resource requirement for the applications. Many network applications generate utility only when x_i > x̄_i; oth-

erwise, the communication quality would be too poor to be useful. Parameter αi denotes

the “willingness” of the applications to obtain higher resources. A user having applications

with a higher value of αi will be willing to pay more for higher resources. The pricing

model (4.2a) is from (4.1).

Parameters α_i and x̄_i are application-dependent, while β and c, determined by the

pricing scheme, are application-independent. In other words, network resources can be


used by any applications of the user’s choice, and the provider who announces the pricing plan does not have to know which applications use the resources.

This also reflects the fact that the IP based network architecture with the same resources

has been and will be used/shared by a variety of applications.

In APRIL, we use a uniform pricing scheme for all groups, i.e., β and c do not have

subscript i. This results from the competition of service providers in the market, and thus

the nonnegative benefit principle is applicable to providers (see Subsection 4.4.3). Under

the competitive circumstances, all pricing schemes tend to be similar.

By paying pi to get ui, group Gi generates benefit

bi = ui − pi. (4.3)

Note that bi, ui and pi are all functions of xi. Some previous work (for example, Courcou-

betis and Weber [50]) calls the difference of the utility and price “net benefit”.

Group Gi requests resources xi based on the maximum benefit principle:

max_{0 < x_i ≤ X_i} b_i.   (4.4)

In order to derive the value of x_i that satisfies (4.4), let b̃_i represent α_i log(x_i − x̄_i + 1) − (βx_i + c). Then

∂b̃_i/∂x_i = α_i/(x_i − x̄_i + 1) − β.   (4.5)

Let x_i^* denote the solution of ∂b̃_i/∂x_i = 0, i.e.,

x_i^* = α_i/β + x̄_i − 1.   (4.6)

When x_i > x_i^*, ∂b̃_i/∂x_i < 0 and b̃_i is a decreasing function. On the other hand, when x̄_i − 1 < x_i < x_i^*, ∂b̃_i/∂x_i > 0, i.e., b̃_i is increasing. Thus x_i^* = arg max_{x_i > x̄_i − 1} b̃_i.

Bearing in mind that b_i = b̃_i only when x̄_i ≤ x_i ≤ X_i, we discuss the calculation of x_i in five cases, which are illustrated in Fig. 4–3. In Subfigure 4–3(a), x_i^* > X_i and b̃_i(X_i) > 0. Since 0 < x_i ≤ X_i, the largest benefit is reached when x_i = X_i. In Subfigure


Figure 4–3: Utility u_i and price p_i vs. resources x_i. (a) x_i^* > X_i and b̃_i(X_i) > 0; (b) x_i^* > X_i and b̃_i(X_i) ≤ 0; (c) 0 < x_i^* ≤ X_i and b̃_i(x_i^*) > 0; (d) x_i^* ≤ 0 and b̃_i(x_i^*) > 0; (e) x_i^* ≤ X_i and b̃_i(x_i^*) ≤ 0

4–3(b), when x_i^* > X_i and b̃_i(X_i) ≤ 0, x_i = 0, i.e., G_i will not request any resources since no positive benefit is available. If 0 < x_i^* ≤ X_i and b̃_i(x_i^*) > 0, as shown in Subfigure 4–3(c), x_i = x_i^* is the best choice. When x_i^* ≤ 0 and b̃_i(x_i^*) > 0 (Subfigure 4–3(d)), x_i = 0 and no positive benefit exists in the available resource range. In Subfigure 4–3(e), x_i^* ≤ X_i and b̃_i(x_i^*) ≤ 0, so x_i = 0 again. By merging all cases that result in x_i = 0, we get

x_i =  X_i,    for x_i^* > X_i and b̃_i(X_i) > 0
       x_i^*,  for 0 < x_i^* ≤ X_i and b̃_i(x_i^*) > 0
       0,      otherwise.                            (4.7)
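A short numeric sketch of the case analysis in (4.7) is given below; the function and parameter names (including x_min for the minimum requirement x̄_i) are illustrative, natural logarithms are assumed, and the example values are made up.

```python
import math

def requested_resources(alpha, beta, c, x_min, X):
    """Resources x_i requested under the maximum benefit principle, Eq. (4.7)."""
    if X <= x_min - 1.0:
        return 0.0                               # offer too small to yield any utility
    def b(x):                                    # unconstrained benefit b~_i(x) = u_i - p_i
        return alpha * math.log(x - x_min + 1.0) - (beta * x + c)
    x_star = alpha / beta + x_min - 1.0          # Eq. (4.6): unconstrained maximizer
    if x_star > X and b(X) > 0:                  # Fig. 4-3(a): take everything offered
        return X
    if 0 < x_star <= X and b(x_star) > 0:        # Fig. 4-3(c): interior maximum
        return x_star
    return 0.0                                   # remaining cases: no positive benefit

# Example: alpha = 4, beta = 0.5, c = 1, minimum requirement 2, offer X = 10
print(requested_resources(4.0, 0.5, 1.0, 2.0, 10.0))   # prints 9.0, i.e., x_i = x_i*
```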

4.4.3 Nonnegative Benefit Principle for Providers

For a provider, the benefit is equal to the difference between the aggregate price paid

by all users who obtain resources from the provider and the cost of maintaining those

resources. As a provider may also be a user at the same time in a WMN, the benefit of a

group is the sum of the benefit of being a user and that of being a provider.


Since each user obtains the network access only through one provider at a time, unlike

the case in a conventional competitive market, it may not be a good strategy for a provider

to simply supply the amount of resources that maximizes the benefit, when it has more

available resources.

As described in the previous subsection, a user always seeks the maximum benefit.

When this user has more than one choice for its provider—this is exactly the case that

happens in a competitive market—it will select the provider who supplies more resources,

if the supply from other providers is less than the demand that maximizes the user’s benefit.

Because the user can only choose one provider, when the nonnegative benefit principle is

considered, the provider selected by this user will obtain a nonnegative benefit gain, while

all other providers have zero benefit gain. This circumstance forces all providers to adopt

the nonnegative benefit principle. This principle has also been used in some previous work

in economics [91].

When group Gi obtains access from group Gi−1, it can decide whether to further share

the resources with groups G_{i+1}^(j), j = 1, 2, .... The further resource sharing will change the

utility function of Gi. In this subsection, we will first discuss the new utility function of Gi.

Then we will give more details of the nonnegative benefit principle based on the analytical

results.

After G_i shares the resources with G_{i+1}^(1) and receives the payment from G_{i+1}^(1), the resources used by G_{i+1}^(1) will be deducted from the utility, but the payment will be added to the utility. The utility function will become

u_i^(1) = max(0, α_i log(x_i − x_{i+1}^(1) − x̄_i + 1)) + β x_{i+1}^(1) + c,   s.t. 0 < x_{i+1}^(1) ≤ x_i.   (4.8)


Here x_i becomes the total resources that G_i and G_{i+1}^(1) receive, which is determined by (4.7). The resources used only by G_i become x_i − x_{i+1}^(1), since x_i is shared by G_i and G_{i+1}^(1). On the other hand, the price function p_i is still the same as (4.2a), which represents the price that G_i needs to pay G_{i−1} for the use of x_i.

G_i allows the admission of G_{i+1}^(1) only when it gains benefit from the resource sharing. Let b_i^(1) be the benefit for G_i after the resource sharing with G_{i+1}^(1). The benefit gain is

Δb_i^(1) = b_i^(1) − b_i = u_i^(1) − u_i
         = α_i log [ max(1, x_i − x_{i+1}^(1) − x̄_i + 1) / max(1, x_i − x̄_i + 1) ] + β x_{i+1}^(1) + c,
           s.t. 0 < x_{i+1}^(1) ≤ x_i.

From x_i > x̄_i,

Δb_i^(1) = α_i log [ max(1, x_i − x_{i+1}^(1) − x̄_i + 1) / (x_i − x̄_i + 1) ] + β x_{i+1}^(1) + c.

If x_{i+1}^(1) > x_i − x̄_i,

Δb_i^(1) = −α_i log(x_i − x̄_i + 1) + β x_{i+1}^(1) + c.

Since b_i(x_i) > 0, i.e., α_i log(x_i − x̄_i + 1) − βx_i − c > 0, and considering x_{i+1}^(1) ≤ x_i, we know that Δb_i^(1) < 0 when x_{i+1}^(1) > x_i − x̄_i. G_i has available resources for G_{i+1}^(1) only when Δb_i^(1) > 0, which corresponds to

Δb_i^(1) = α_i log [ (x_i − x_{i+1}^(1) − x̄_i + 1) / (x_i − x̄_i + 1) ] + β x_{i+1}^(1) + c.


Similarly, after G_i shares the resources with G_{i+1}^(2), the utility function becomes

u_i^(2) = max(0, α_i log(x_i − ∑_{j=1}^{2} x_{i+1}^(j) − x̄_i + 1)) + β ∑_{j=1}^{2} x_{i+1}^(j) + 2c,
   s.t. x_{i+1}^(j) > 0 and 0 < ∑_j x_{i+1}^(j) ≤ x_i, j = 1, 2.

The benefit gain for G_i after it further shares the resources with G_{i+1}^(2) is

Δb_i^(2) = α_i log [ max(1, x_i − ∑_{j=1}^{2} x_{i+1}^(j) − x̄_i + 1) / (x_i − x_{i+1}^(1) − x̄_i + 1) ] + β x_{i+1}^(2) + c,
   s.t. x_{i+1}^(j) > 0 and 0 < ∑_j x_{i+1}^(j) ≤ x_i, j = 1, 2.

In general, the benefit gain after G_i further shares the resources with G_{i+1}^(k) (k = 1, 2, ...) is

Δb_i^(k) = α_i log [ max(1, x_i − ∑_{j=1}^{k} x_{i+1}^(j) − x̄_i + 1) / max(1, x_i − ∑_{j=1}^{k−1} x_{i+1}^(j) − x̄_i + 1) ] + β x_{i+1}^(k) + c,
   s.t. x_{i+1}^(j) > 0 and 0 < ∑_j x_{i+1}^(j) ≤ x_i, j = 1, 2, ..., k.   (4.9)

Based on (4.9), the curve corresponding to Δb_i^(k)(x_{i+1}^(k)) has three possible shapes, as shown in Fig. 4–4. As it is required that x_{i+1}^(k) > 0 and 0 < ∑_{j=1}^{k} x_{i+1}^(j) ≤ x_i, we only need to consider the range of x_{i+1}^(k) from 0 to x_i − ∑_{j=1}^{k−1} x_{i+1}^(j).

If x_i − ∑_{j=1}^{k−1} x_{i+1}^(j) − x̄_i > 0, we first discuss the shape in the range of 0 < x_{i+1}^(k) < x_i − ∑_{j=1}^{k−1} x_{i+1}^(j) − x̄_i. In this range,

∂(Δb_i^(k))/∂x_{i+1}^(k) = −α_i/(x_i − ∑_{j=1}^{k} x_{i+1}^(j) − x̄_i + 1) + β.   (4.10)


Figure 4–4: Three possible shapes of Δb_i^(k)(x_{i+1}^(k)). (a) x_i − ∑_{j=1}^{k−1} x_{i+1}^(j) − x̄_i > 0 and 1 < α_i/β < x_i − ∑_{j=1}^{k−1} x_{i+1}^(j) − x̄_i + 1; (b) x_i − ∑_{j=1}^{k−1} x_{i+1}^(j) − x̄_i > 0 and α_i/β ≥ x_i − ∑_{j=1}^{k−1} x_{i+1}^(j) − x̄_i + 1; (c) x_i − ∑_{j=1}^{k−1} x_{i+1}^(j) − x̄_i ≤ 0

If 1 < α_i/β < x_i − ∑_{j=1}^{k−1} x_{i+1}^(j) − x̄_i + 1, Δb_i^(k) is increasing when 0 < x_{i+1}^(k) < x_i − ∑_{j=1}^{k−1} x_{i+1}^(j) − x̄_i − α_i/β + 1, and decreasing when x_i − ∑_{j=1}^{k−1} x_{i+1}^(j) − x̄_i − α_i/β + 1 < x_{i+1}^(k) < x_i − ∑_{j=1}^{k−1} x_{i+1}^(j) − x̄_i, as illustrated in Fig. 4–4(a). If α_i/β ≥ x_i − ∑_{j=1}^{k−1} x_{i+1}^(j) − x̄_i + 1, Δb_i^(k) is decreasing in the range of 0 < x_{i+1}^(k) < x_i − ∑_{j=1}^{k−1} x_{i+1}^(j) − x̄_i, which is illustrated in Fig. 4–4(b). In both subfigures, when x_i − ∑_{j=1}^{k−1} x_{i+1}^(j) − x̄_i < x_{i+1}^(k) < x_i − ∑_{j=1}^{k−1} x_{i+1}^(j), Δb_i^(k) is always linearly increasing because ∂(Δb_i^(k))/∂x_{i+1}^(k) = β > 0. If 0 < α_i/β ≤ 1 (not shown in Fig. 4–4), since G_i is only interested in x_i > x̄_i when it sends the admission request to G_{i−1}, from (4.2a), (4.2b) and (4.3), ∂b_i/∂x_i = α_i/(x_i − x̄_i + 1) − β < 0. We also know that b_i(x_i)|_{x_i = x̄_i} = −βx̄_i − c < 0. Therefore, b_i(x_i) < 0 when x_i ≥ x̄_i. In other words, no resources are able to provide positive benefit for G_i, and hence 0 < α_i/β ≤ 1 will never happen when G_i further tries to share resources with any G_{i+1}^(j).

Another possibility is that x_i − ∑_{j=1}^{k−1} x_{i+1}^(j) − x̄_i ≤ 0. In this case, the curve of Δb_i^(k)(x_{i+1}^(k)) is increasing from 0 to x_i − ∑_{j=1}^{k−1} x_{i+1}^(j), as shown in Fig. 4–4(c).

Let X_{i+1}^(k) denote the maximum available resources that G_i can provide to G_{i+1}^(k). The nonnegative benefit principle requires that 1) X_{i+1}^(k) is as large as possible, and 2) Δb_i^(k)(x_{i+1}^(k)) ≥ 0 for all x_{i+1}^(k) ≤ X_{i+1}^(k), i.e., G_i does not lose benefit by further sharing resources with G_{i+1}^(k) as long as the quantity of resources used by G_{i+1}^(k) is no more than X_{i+1}^(k).


(a) ∆bki

(xi −

∑k−1j=1 x

(j)i+1 − xi

)< 0 (b) ∆bk

i

(xi −

∑k−1j=1 x

(j)i+1 − xi

)≥ 0

Figure 4–5: Determining X(k)i+1 when xi −

∑k−1j=1 x

(j)i+1 − xi > 0

Now we discuss how to determine $X^{(k)}_{i+1}$. When $x_i - \sum_{j=1}^{k-1} x^{(j)}_{i+1} - \underline{x}_i \le 0$, $X^{(k)}_{i+1} = x_i - \sum_{j=1}^{k-1} x^{(j)}_{i+1}$ based on Fig. 4–4(c). When $x_i - \sum_{j=1}^{k-1} x^{(j)}_{i+1} - \underline{x}_i > 0$, the discussion falls into two cases, categorized by the sign of $\Delta b^{(k)}_i\big(x_i - \sum_{j=1}^{k-1} x^{(j)}_{i+1} - \underline{x}_i\big)$. Without loss of generality, we use the shape of Fig. 4–4(a) to illustrate these two cases in Fig. 4–5.

Case 1. When $\Delta b^{(k)}_i\big(x_i - \sum_{j=1}^{k-1} x^{(j)}_{i+1} - \underline{x}_i\big) < 0$, as shown in Fig. 4–5(a), $X^{(k)}_{i+1}$ is the $x$-coordinate of the intersection of the curve $\Delta b^{(k)}_i\big(x^{(k)}_{i+1}\big)$ with the $x$-axis in the range $0 < x^{(k)}_{i+1} < x_i - \sum_{j=1}^{k-1} x^{(j)}_{i+1} - \underline{x}_i$.

Case 2. When $\Delta b^{(k)}_i\big(x_i - \sum_{j=1}^{k-1} x^{(j)}_{i+1} - \underline{x}_i\big) \ge 0$, as illustrated in Fig. 4–5(b), we have $\Delta b^{(k)}_i\big(x^{(k)}_{i+1}\big) \ge 0$ in the whole range $0 \le x^{(k)}_{i+1} \le x_i - \sum_{j=1}^{k-1} x^{(j)}_{i+1}$, since $x^{(k)}_{i+1} = x_i - \sum_{j=1}^{k-1} x^{(j)}_{i+1} - \underline{x}_i$ gives the smallest value of $\Delta b^{(k)}_i$. Based on the nonnegative benefit principle described above, $X^{(k)}_{i+1} = x_i - \sum_{j=1}^{k-1} x^{(j)}_{i+1}$.

Therefore, we have
\[
X^{(k)}_{i+1} =
\begin{cases}
x_i - \displaystyle\sum_{j=1}^{k-1} x^{(j)}_{i+1}, &
\text{if } \Big\{ x_i - \sum_{j=1}^{k-1} x^{(j)}_{i+1} > \underline{x}_i \text{ and } \Delta b^{(k)}_i\Big(x_i - \sum_{j=1}^{k-1} x^{(j)}_{i+1} - \underline{x}_i\Big) \ge 0 \Big\}
\text{ or } 0 < x_i - \sum_{j=1}^{k-1} x^{(j)}_{i+1} \le \underline{x}_i, \\[10pt]
\Big\{ x^{(k)}_{i+1} \;\Big|\; \Delta b^{(k)}_i\big(x^{(k)}_{i+1}\big) = 0 \text{ and } 0 < x^{(k)}_{i+1} < x_i - \sum_{j=1}^{k-1} x^{(j)}_{i+1} - \underline{x}_i \Big\}, &
\text{if } x_i - \sum_{j=1}^{k-1} x^{(j)}_{i+1} > \underline{x}_i \text{ and } \Delta b^{(k)}_i\Big(x_i - \sum_{j=1}^{k-1} x^{(j)}_{i+1} - \underline{x}_i\Big) < 0, \\[10pt]
0, & \text{otherwise.}
\end{cases}
\qquad (4.11)
\]
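For illustration only, the following Python sketch evaluates (4.9) and applies the nonnegative benefit principle behind (4.11) by a simple upward scan (at a 1 kb/s granularity) instead of solving $\Delta b^{(k)}_i = 0$ in closed form; the function names are ours and a natural logarithm is assumed.

from math import log

def delta_b(x_new, x_i, x_min_i, alpha_i, beta, c, already_shared=0.0):
    """Benefit gain (4.9) for provider G_i when it shares x_new kb/s with its
    k-th customer; already_shared is the total already given to customers 1..k-1."""
    num = max(1.0, x_i - already_shared - x_new - x_min_i + 1)
    den = max(1.0, x_i - already_shared - x_min_i + 1)
    return alpha_i * log(num / den) + beta * x_new + c

def available_resources(x_i, x_min_i, alpha_i, beta, c, already_shared=0.0, step=1.0):
    """Largest X such that delta_b stays nonnegative for every request up to X,
    i.e. the nonnegative benefit principle that (4.11) states in closed form."""
    upper = x_i - already_shared          # a customer can never receive more than this
    X, x = 0.0, step
    while x <= upper:
        if delta_b(x, x_i, x_min_i, alpha_i, beta, c, already_shared) < 0:
            break                         # first point where G_i would start losing benefit
        X = x
        x += step
    return X

A finer step (or a bisection on $\Delta b^{(k)}_i$) recovers the boundary or root values that (4.11) singles out.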


Here we do not use the value of $x^{(k)}_{i+1}$ that maximizes $\Delta b^{(k)}_i$, because of the competition among providers. If $G_i$ offers fewer resources than another provider, $G^{(k)}_{i+1}$ may choose that provider for network access as long as it obtains a greater benefit (see Subsection 4.4.2). On the other hand, because of the competition, the price will be adjusted by the market to a marginal value $p_i$, which is determined only by $x_i$. No service provider can set a price significantly higher than $p_i$ and win customers at the same time. By using the nonnegative benefit principle, $G_i$ publicizes the available resources $X^{(k)}_{i+1}$ to group $G^{(k)}_{i+1}$ for admission control.

4.4.4 Algorithm Operations

After the discussion in the above two subsections, we can describe the APRIL algorithm as follows (a minimal sketch of these operations is given after the list).

1. When group $G_i$ has available resources, i.e., $X^{(k)}_{i+1} > 0$, it will publicize the value of $X^{(k)}_{i+1}$, as well as $\beta$ and $c$. As discussed before, $\beta$ and $c$ tend to be the same for all groups.

2. When group $G_j$ requests network access, it scans the neighboring groups that have available resources. After treating the available resources $X^{(k)}_{i+1} > 0$ publicized by $G_i$ as $X_j$, group $G_j$ uses (4.7) to calculate $x_j$. If $x_j > 0$, $G_j$ uses $x_j$ as the requested resources and sends the admission request to $G_i$. If several neighboring groups are available, $G_j$ only sends the admission request to the group that gives the largest $x_j$.

3. After receiving the request from $G_j$, $G_i$ shares the resources with $G_j$ using a resource allocation scheme, which can be a buffer management scheme, a scheduling scheme, or any effective scheme that is capable of allocating resources to different groups.
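The sketch below is a hypothetical rendering of these three steps, not code from the dissertation; providers are represented as dicts with keys "X", "beta", "c" and an "allocate" callable (our own representation), and compute_request stands in for (4.7), which is defined earlier in the chapter and treated here as a black box.

def april_admission(requester, providers, compute_request):
    """Sketch of APRIL steps 1-3 from the requesting group's side."""
    best = None
    for p in providers:
        if p["X"] <= 0:          # step 1: only groups with available resources publicize X, beta, c
            continue
        x_j = compute_request(requester, p["X"], p["beta"], p["c"])  # step 2: evaluate (4.7)
        if x_j > 0 and (best is None or x_j > best[1]):
            best = (p, x_j)      # keep the neighbor that offers the largest x_j
    if best is None:
        return None              # no neighbor can offer a positive-benefit share
    provider, x_j = best
    provider["allocate"](requester, x_j)   # step 3: provider runs its resource allocation scheme
    return provider, x_j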

4.5 Performance Evaluation

In this section, we use the network topology shown in Fig. 4–6 to simulate a wireless mesh network. For simplicity, link capacity is the only resource we consider in the simulations; it is possible to include more types of resources in the future.


Figure 4–6: Simulation network for APRIL

At the beginning, only $G_0$ is connected to the Internet. The bandwidth of the wired backhaul is 6000 kb/s.$^{6}$ Four other groups $G_1$, $G_2$, $G_3$, $G_4$ start to request network access at times 10 min, 20 min, 30 min and 50 min, respectively, as shown beside the groups in Fig. 4–6. $G_1$ and $G_2$ are neighboring groups of $G_0$. $G_3$ is a neighboring group of $G_1$. $G_4$ is a neighboring group of both $G_2$ and $G_3$.

Assume the same price as described in Section 4.3 is used, i.e., a monthly fee of $p = x/300 + 27$ dollars, which is equivalent to a per-minute fee of $p = 7.716 \times 10^{-8}\, x + 6.944 \times 10^{-4}$ dollars. In other words, $\beta = 7.716 \times 10^{-8}$ dollars/kb/s and $c = 6.944 \times 10^{-4}$ dollars.

At the beginning, $G_0$ uses the maximum benefit principle to decide the amount of bandwidth, $x^{*}_0 = 6000$ kb/s. From (4.6), $\alpha_0 = 4.167 \times 10^{-4}$ dollars. We also set $\underline{x}_0 = 10$ kb/s, an empirical bandwidth requirement for surfing text-based web pages.

$G_1$ and $G_2$ have the same value of $\alpha_i = 2.308 \times 10^{-4}$ dollars, which corresponds to $x^{*}_i = 3000$ kb/s. As for $G_0$, $\underline{x}_i = 10$ kb/s. $G_3$ and $G_4$ are more sensitive to bandwidth; for them, $\alpha_i = 6.0 \times 10^{-4}$ dollars and $\underline{x}_i = 20$ kb/s.

$^{6}$ This corresponds to the download speed of the fastest DSL service plan of BellSouth [22]. When DSL is used as the Internet connection, the upload and download speeds are often different, but since they co-exist in a service plan, we only focus on the download speed in this chapter for ease of discussion.

Figure 4–7: Available bandwidth, bandwidth allocation, and benefit. (a) Available bandwidth (kb/s) versus time (minutes); (b) bandwidth allocation (kb/s) versus time (minutes); (c) benefit (cents/minute) versus time (minutes). Each panel shows curves for $G_0$ through $G_4$; panel (c) also shows the total benefit.


We simulate the available bandwidth, the bandwidth allocation, and the benefit, all of which change over time as new groups are admitted. The results are illustrated in Fig. 4–7. The bandwidth and benefit are measured each time a new group obtains network access, and each measurement is shown as a mark in the figures. The marks corresponding to the same group are connected by a solid line for ease of observation; the value associated with a mark does not change until the curve reaches the next mark.

At time 0, $G_0$ publicizes the available bandwidth $X^{(1)}_{i+1} = 5558$ kb/s based on (4.11). At the same time, it uses all the bandwidth by itself while waiting for an admission request, so the bandwidth is not wasted even before other groups can share it.

At time 10, the admission of $G_1$ changes the bandwidth allocation. The bandwidth shares of $G_0$ and $G_1$ are both 3000 kb/s, which corresponds to $x^{*}_1$ and thus gives $G_1$ its maximum benefit, while the total benefit approximately doubles. After $G_1$ is admitted, the available bandwidth of $G_0$ changes to $X^{(2)}_{i+1} = 2644$ kb/s. Group $G_1$ also publicizes its available bandwidth, 2935 kb/s.
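As a quick numerical check (our own, reusing the available_resources sketch given after (4.11), the rounded parameter values above, and a natural logarithm), the two figures quoted here are approximately reproduced:

# Reuses delta_b / available_resources from the sketch after (4.11).
print(available_resources(x_i=6000, x_min_i=10, alpha_i=4.167e-4,
                          beta=7.716e-8, c=6.944e-4, already_shared=3000))  # ~2644 kb/s for G0
print(available_resources(x_i=3000, x_min_i=10, alpha_i=2.308e-4,
                          beta=7.716e-8, c=6.944e-4))                       # ~2935 kb/s for G1

The later values in this section (e.g., 2580 kb/s for $G_2$ and 2227 kb/s for $G_3$) come out of the same calculation with the corresponding parameters.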

At time 20, group $G_2$ sends the admission request to its neighboring group $G_0$, asking for 2644 kb/s of bandwidth, calculated from (4.7). After that, the available bandwidth of $G_0$ changes to $X^{(3)}_{i+1} = 356$ kb/s, and the available bandwidth of $G_2$ is 2580 kb/s. The total benefit increases to 0.4944 cents/minute. At time 30, with the admission of $G_3$ via $G_1$, the available bandwidth and the bandwidth allocations of $G_1$ and $G_3$ change, but those of $G_0$ are not affected, even though the traffic of $G_3$ must go through $G_0$ to reach the Internet. In APRIL, $G_0$ treats the traffic from $G_3$ as part of the traffic from $G_1$. Thus the admission control scheme has good scalability.

When $G_4$ sends the admission request at time 50, two neighboring groups are considered. If $G_4$ connects to $G_3$, the available bandwidth provided by $G_3$ (2227 kb/s) would generate less benefit than that generated by $G_2$'s available bandwidth (2580 kb/s). Therefore, $G_4$ selects $G_2$ as its service provider. After its admission, the total benefit reaches five times the benefit of the network with only the single group $G_0$.


We notice that each time a new group is admitted, the direct service provider (i.e., the neighboring group that shares its resources with the new group) has fewer resources available afterwards. As a result, a new group is more likely to select a neighboring group with fewer hops to the hot spot, unless such groups do not have enough resources available. This self-organizing behavior is economical with respect to the use of network resources and thus helps improve network performance for a given investment.

4.6 Conclusion

We proposed an admission control scheme for WMNs, APRIL, by combining the admission decision with considerations of the provider's nonnegative benefit and the user's maximum benefit. In APRIL, only two groups, i.e., the resource provider and the resource user, are involved in the admission control process. Therefore, our scheme scales well as WMNs expand.

By using APRIL, the admission of new groups increases the benefit of both involved groups, and the total benefit also increases. This characteristic can be used as an incentive to expand Internet coverage through WMNs.

Future work includes studying the combination of APRIL with a resource allocation scheme, tuning the parameters of APRIL to represent different applications more accurately, and investigating the performance of APRIL in IEEE 802.x network scenarios.


CHAPTER 5
FUTURE WORK

As further study closely related to our schemes, the following aspects will be considered.

A deeper understanding of the network dynamics affected by CHOKeW requires more comprehensive models, whose results can guide parameter tuning. For a different network scenario, based on the features of the user applications, the optimization requirements may differ; how to adjust the parameters of CHOKeW to meet these requirements will be a research topic in the future.

In the CHOKeW-ConTax framework, the interaction of CHOKeW and ConTax under

different parameter settings will be investigated.

For WMNs, the combination of APRIL and a resource allocation scheme, and the effects of APRIL in different IEEE 802.x network environments, will be studied. How to model the utility function for various applications more accurately is also an open research topic.

When pricing is implemented, security issues need to be addressed. Applications in WMNs may be initiated by a party that has no long-term agreement with the neighboring party that provides network access, so some mechanism is required to secure the transaction between the two parties. The design of this security mechanism must incorporate authentication operations that prove the user's legitimacy to the service provider, while also preventing the provider from obtaining confidential information from the user. Incontestable billing of mesh users can be combined with the APRIL protocol to evaluate its performance.


Although TCP/IP will remain the common infrastructure for the foreseeable future, it is evident that the future networking environment will be strongly characterized by the heterogeneity of networks. The concept of traffic control is not limited to packet transfer; it also imposes additional requirements on mobility, QoS and media control signaling.

The development of end-to-end QoS faces challenges such as designing a QoS signaling protocol that can be interpreted in each domain without performance conflicts or compatibility problems. QoS requirements must be translated not only into IP-based (network layer) service-level semantics, but also into those of various link layers, given the popularity of wireless access. We believe this problem cannot be completely solved without the effort of standardization organizations.


REFERENCES

[1] A. A. Abouzeid, S. Roy, and M. Azizoglu, "Comprehensive performance analysis of a TCP session over a wireless fading link with queueing," IEEE Transactions on Wireless Communications, vol. 2, no. 2, pp. 344–356, March 2003.

[2] D. Aguayo, J. Bicket, S. Biswas, G. Judd, and R. Morris, "Link-level measurements from an 802.11b mesh network," presented at ACM SIGCOMM'04, Portland, OR, August/September 2004.

[3] A. Akella, S. Seshan, R. Karp, and S. Shenker, "Selfish behavior and stability of the Internet: A game-theoretic analysis of TCP," presented at ACM SIGCOMM'02, Pittsburgh, PA, July/August 2002.

[4] I. F. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci, "A survey on sensor networks," IEEE Communications Magazine, vol. 40, no. 8, pp. 102–114, August 2002.

[5] I. F. Akyildiz, X. Wang, and W. Wang, "Wireless mesh networks: A survey," Computer Networks, vol. 47, no. 4, pp. 445–487, March 2005.

[6] J. J. Alcaraz, F. Cerdan, and J. García-Haro, "Optimizing TCP and RLC interaction in the UMTS radio access network," IEEE Network, vol. 20, no. 2, pp. 56–64, March/April 2006.

[7] M. Allman, V. Paxson, and W. Stevens, "TCP congestion control," IETF RFC 2581, April 1999.

[8] P. Almquist, "Type of service in the Internet protocol suite," IETF RFC 1349, July 1992.

[9] D. Anderson, Y. Osawa, and R. Govindan, "A file system for continuous media," ACM Transactions on Computer Systems, vol. 10, no. 4, pp. 311–377, November 1992.

[10] ANSI, "DSSI core aspects of frame relay," ANSI T1S1, March 1990.

[11] G. Appenzeller, I. Keslassy, and N. McKeown, "Sizing router buffers," presented at ACM SIGCOMM'04, Portland, OR, August/September 2004.

[12] S. Athuraliya, V. H. Li, S. H. Low, and Q. Yin, "REM: Active queue management," IEEE Network, vol. 15, no. 3, pp. 48–53, May/June 2001.


[13] B. Atkin and K. P. Birman, "Sizing router buffers," presented at IEEE INFOCOM'03, San Francisco, CA, March/April 2003.

[14] ATM Forum, "ATM traffic management specification version 4.0," AF-TM-0056.000, June 1996.

[15] A. C. Auge, J. L. Magnet, and J. P. Aspas, "Window prediction mechanism for improving TCP in wireless asymmetric links," presented at IEEE Globecom'98, Sydney, Australia, November 1998.

[16] F. Baker, C. Iturralde, F. L. Faucheur, and B. Davie, "Aggregation of RSVP for IPv4 and IPv6 reservations," IETF RFC 3175, September 2001.

[17] A. Bakre and B. R. Badrinath, "I-TCP: indirect TCP for mobile hosts," presented at ICDCS'95, Sydney, Australia, May 1995.

[18] A. V. Bakre and B. R. Badrinath, "Implementation and performance evaluation of indirect TCP," IEEE Transactions on Computers, vol. 46, no. 3, pp. 260–278, March 1997.

[19] B. S. Bakshi, P. Krishna, N. H. Vaidya, and D. K. Pradhan, "Improving performance of TCP over wireless networks," presented at ICDCS'97, Baltimore, MD, May 1997.

[20] H. Balakrishnan and V. N. Padmanabhan, "How network asymmetry affects TCP," IEEE Communications Magazine, vol. 39, no. 4, pp. 60–67, April 2001.

[21] H. Balakrishnan, V. N. Padmanabhan, S. Seshan, and R. H. Katz, "A comparison of mechanisms for improving TCP performance over wireless links," IEEE/ACM Transactions on Networking, vol. 5, no. 6, pp. 756–769, December 1997.

[22] BellSouth, "BellSouth DSL internet service," last accessed: September 2006. [Online]. Available: http://www.bellsouth.com/consumer/inetsrvcs/index.html

[23] J. Bennet and H. Zhang, "WF2Q: Worst case fair weighted fair queuing," presented at IEEE INFOCOM'96, San Francisco, CA, March 1996.

[24] Y. Bernet, "Format of the RSVP DCLASS object," IETF RFC 2996, November 2000.

[25] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An architecture for differentiated services," IETF RFC 2475, December 1998.

[26] U. Bodin, O. Schelen, and S. Pink, "Load-tolerant differentiation with active queue management," ACM SIGCOMM Computer Communication Review, vol. 30, no. 3, pp. 4–16, July 2000.

[27] R. Braden, D. Clark, and S. Shenker, "Integrated services in the Internet architecture: An overview," IETF RFC 1633, July 1994.


[28] R. Braden, L. Zhang, S. Berson, S. Herzog, and S. Jamin, "Resource reservation protocol (RSVP): Version 1 functional specification," IETF RFC 2205, September 1997.

[29] R. Braden, D. Clark, J. Crowcroft, B. Davie, S. Deering, and D. Estrin, "Recommendations on queue management and congestion avoidance in the Internet," IETF RFC 2309, April 1998.

[30] L. Breslau, E. W. Knightly, S. Shenker, I. Stoica, and H. Zhang, "Endpoint admission control: Architectural issues and performance," presented at ACM SIGCOMM'00, Stockholm, Sweden, August 2000.

[31] B. Bruno, M. Conti, and E. Gregori, "Mesh networks: Commodity multihop ad hoc networks," IEEE Communications Magazine, vol. 43, no. 3, pp. 123–131, March 2005.

[32] L. Bui, A. Eryilmaz, R. Srikant, and X. Wu, "Joint asynchronous congestion control and distributed scheduling for multi-hop wireless networks," presented at IEEE INFOCOM'06, Barcelona, Spain, May 2006.

[33] G. M. Butler and K. H. Grace, "Adapting wireless links to support standard network protocols," presented at ICCCN'98, Lafayette, LA, October 1998.

[34] J. Byers, J. Considine, M. Mitzenmacher, and S. Rost, "Informed content delivery across adaptive overlay networks," presented at ACM SIGCOMM'02, Pittsburgh, PA, October 2002.

[35] K. L. Calvert, J. Griffioen, and S. Wen, "Lightweight network support for scalable end-to-end services," presented at ACM SIGCOMM'02, Pittsburgh, PA, October 2002.

[36] K. Chandran, S. Raghunathan, S. Venkatesan, and R. Prakash, "A feedback-based scheme for improving TCP performance in ad hoc wireless networks," IEEE Personal Communications, vol. 8, no. 1, pp. 34–39, February 2001.

[37] C. J. Chang, T. T. Su, and Y. Y. Chiang, "Analysis of a cutoff priority cellular radio system with finite queueing and reneging/dropping," IEEE/ACM Transactions on Networking, vol. 2, no. 2, pp. 166–175, April 1994.

[38] E. Chang and A. Zakhor, "Cost analyses for VBR video servers," IEEE MultiMedia, vol. 3, no. 4, pp. 56–71, December 1996.

[39] K. N. Chang, J. T. Kim, and S. Kim, "An efficient borrowing channel assignment scheme for cellular mobile systems," IEEE Transactions on Vehicular Technology, vol. 47, no. 2, pp. 602–608, May 1998.

[40] H. Chaskar, T. V. Lakshman, and U. Madhow, "On the design of interfaces for TCP/IP over wireless," presented at Milcom'96, Jordan, MA, October 1996.


[41] S. Chen and N. Yang, "Congestion avoidance based on lightweight buffer management in sensor networks," IEEE Transactions on Parallel and Distributed Systems, vol. 17, no. 9, pp. 934–946, September 2006.

[42] K. Cho, "Flow-valve: Embedding a safety-valve in RED," presented at IEEE GLOBECOM'99, Rio de Janeiro, Brazil, December 1999.

[43] C.-T. Chou and K. G. Shin, "Smooth handoff with enhanced packet buffering-and-forwarding in wireless/mobile networks," presented at IEEE QShine'05, Orlando, FL, August 2005.

[44] C.-T. Chou, S. N. Shankar, and K. G. Shin, "Achieving per-stream QoS with distributed airtime allocation and admission control in IEEE 802.11e wireless LANs," presented at IEEE INFOCOM'05, Miami, FL, March 2005.

[45] J. Chung and M. Claypool, "Aggregate rate control for TCP traffic management," presented at ACM SIGCOMM'04, Portland, OR, August/September 2004.

[46] D. D. Clark, S. Shenker, and L. Zhang, "Supporting real-time applications in an integrated services packet network: Architecture and mechanism," presented at ACM SIGCOMM'02, Pittsburgh, PA, August/September 2002.

[47] D. D. Clark and W. Fang, "Explicit allocation of best effort packet delivery service," IEEE/ACM Transactions on Networking, vol. 6, no. 4, pp. 362–373, August 1998.

[48] R. B. Cooper, Introduction to Queueing Theory, 2nd ed. Holland: Elsevier North, 1981.

[49] C. Courcoubetis, G. Kesidis, A. Ridder, and J. Walrand, "Admission control and routing in ATM networks using inferences from measured buffer occupancy," IEEE Transactions on Communications, vol. 43, no. 234, pp. 1778–1784, 1995.

[50] C. Courcoubetis and R. Weber, Pricing Communication Networks: Economics, Technology and Modelling. New York: Wiley, 2003.

[51] M. Crovella and A. Bestavros, "Self-similarity in World Wide Web traffic: Evidence and possible causes," presented at SIGMETRICS'96, Philadelphia, PA, May 1996.

[52] A. Demers, S. Keshav, and S. Shenker, "Analysis and simulations of a fair queueing algorithm," presented at ACM SIGCOMM'89, Austin, TX, September 1989.

[53] C. Dovrolis and P. Ramanathan, "Proportional differentiated services, part II: Loss rate differentiation and packet dropping," presented at IWQoS'00, Pittsburgh, PA, June 2000.

[54] E. C. Efstathiou, P. A. Frangoudis, and G. C. Polyzos, "Stimulating participation in wireless community networks," presented at IEEE INFOCOM'06, Barcelona, Spain, April 2006.


[55] V. Elek, G. Karlsson, and R. Ronngren, "Admission control based on end-to-end measurements," presented at IEEE INFOCOM'00, Tel Aviv, Israel, March 2000.

[56] K. Fall and S. Floyd, "Simulation-based comparisons of Tahoe, Reno, and SACK TCP," Computer Communication Review, vol. 26, no. 3, pp. 5–21, July 1996.

[57] F. L. Faucheur, "Protocol extensions for support of diffserv-aware MPLS traffic engineering," IETF RFC 4124, June 2005.

[58] W. Feng, "Network traffic characterization of TCP," presented at MILCOM'00, Los Angeles, CA, October 2000.

[59] W. Feng, K. Shin, D. Kandlur, and D. Saha, "The BLUE active queue management algorithm," IEEE/ACM Transactions on Networking, vol. 10, no. 4, pp. 513–528, August 2002.

[60] S. Floyd and V. Jacobson, "Random early detection gateways for congestion avoidance," IEEE/ACM Transactions on Networking, vol. 1, no. 4, pp. 397–413, August 1993.

[61] ——, "Link-sharing and resource management models for packet networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, pp. 365–386, August 1995.

[62] S. Floyd, "RED: Discussions of setting parameters," last accessed: September 2006. [Online]. Available: http://www.icir.org/floyd/REDparameters.txt

[63] S. Floyd and K. Fall, "Promoting the use of end-to-end congestion control in the Internet," IEEE/ACM Transactions on Networking, vol. 7, no. 4, pp. 458–472, August 1999.

[64] S. Floyd, "Recommendation on using the "gentle" variant of RED," last accessed: September 2006. [Online]. Available: http://www.icir.org/floyd/red/gentle.html

[65] Fon, "Fon: WiFi everywhere," last accessed: September 2006. [Online]. Available: http://en.fon.com/

[66] Z. Fu, H. Luo, P. Zerfos, S. Lu, L. Zhang, and M. Gerla, "The impact of multihop wireless channel on TCP performance," IEEE/ACM Transactions on Mobile Computing, vol. 4, no. 2, pp. 209–221, March/April 2005.

[67] A. E. Gamal, J. Mammen, B. Prabhakar, and D. Shah, "Throughput-delay trade-off in wireless networks," presented at IEEE INFOCOM'04, Hong Kong, March 2004.

[68] V. Gambiroza, B. Sadeghi, and E. Knightly, "End-to-end performance and fairness in multihop wireless backhaul networks," presented at ACM MobiCom'04, New York, NY, September 2004.


[69] A. J. Ganesh, P. B. Key, D. Polis, and R. Srikant, "Congestion notification and probing mechanisms for endpoint admission control," IEEE Transactions on Networking, vol. 14, no. 3, pp. 568–578, June 2006.

[70] R. J. Gibbens and F. P. Kelly, "On packet marking at priority queues," IEEE Transactions on Automatic Control, vol. 47, no. 6, pp. 1016–1020, June 2002.

[71] S. Golestani, "A self-clocked fair queueing scheme for broadband applications," presented at IEEE INFOCOM'94, Toronto, Canada, June 1994.

[72] J. Gronkvist, "Assignment methods for spatial reuse," presented at ACM MobiHoc'00, Boston, MA, August 2000.

[73] P. Gupta and P. R. Kumar, "The capacity of wireless networks," IEEE/ACM Transactions on Information Theory, vol. 46, no. 2, pp. 209–221, March 2000.

[74] H. Han, C. V. Hollot, Y. Chait, and V. Misra, "TCP networks stabilized by buffer-based AQMs," presented at IEEE INFOCOM'04, Hong Kong, March 2004.

[75] H. Harai and M. Murata, "High-speed buffer management for 40 Gb/s-based photonic packet switches," IEEE/ACM Transactions on Networking, vol. 14, no. 1, pp. 191–204, February 2006.

[76] ——, "Optical fiber-delay-line buffer management in output-buffered photonic packet switch to support service differentiation," IEEE Journal on Selected Areas in Communications, vol. 24, no. 8, pp. 108–116, August 2006.

[77] Y. Hayel, D. Ros, and B. Tuffin, "Less-than-best-effort services: Pricing and scheduling," presented at IEEE INFOCOM'04, Hong Kong, March 2004.

[78] J. Heinanen, F. Baker, W. Weiss, and J. Wroclawski, "Assured forwarding PHB group," IETF RFC 2597, June 1999.

[79] C. V. Hollot, Y. Liu, V. Misra, and D. Towsley, "Unresponsive flows and AQM performance," presented at IEEE INFOCOM'03, San Francisco, CA, March/April 2003.

[80] D. Hong and S. S. Rappaport, "Traffic model and performance analysis for cellular mobile radio telephone systems with prioritized and nonprioritized handoff procedures," IEEE Transactions on Vehicular Technology, vol. 35, no. 3, pp. 77–92, August 1986.

[81] J. Hou and Y. Yang, "Mobility-based channel reservation scheme for wireless mobile networks," presented at IEEE WCNC'00, Chicago, IL, September 2000.

[82] J. Hou, J. Yang, and S. Papavassiliou, "Integration of pricing with call admission control to meet QoS requirements in cellular networks," IEEE Transactions on Parallel and Distributed Systems, vol. 13, no. 9, pp. 898–910, September 2002.


[83] P. R. Jelenkovic, P. Momcilovic, and M. S. Squillante, "Buffer scalability of wireless networks," presented at IEEE INFOCOM'06, Barcelona, Spain, May 2006.

[84] W. S. Jeon and D. G. Jeong, "Rate control for communication networks: Shadow prices, proportional fairness and stability," IEEE Transactions on Vehicular Technology, vol. 55, no. 5, pp. 1582–1593, September 2006.

[85] F. Kelly, A. Maulloo, and D. Tan, "Rate control for communication networks: Shadow prices, proportional fairness and stability," Journal of the Operational Research Society, vol. 49, pp. 237–252, March 1998.

[86] F. Kelly, "Fairness and stability of end-to-end congestion control," European Journal of Control, vol. 9, pp. 159–176, March 2003.

[87] H. Kim and J. Hou, "Network calculus based simulation for TCP congestion control: Theorems, implementation and evaluation," presented at IEEE INFOCOM'04, Hong Kong, March 2004.

[88] M. D. Kulavaratharasah and A. H. Aghvami, "Teletraffic performance evaluation of microcellular personal communication networks (PCN's) with prioritized handoff procedures," IEEE Transactions on Vehicular Technology, vol. 48, no. 1, pp. 137–152, January 1999.

[89] L. Lamport, "Password authentication with insecure communication," Communications of the ACM, vol. 24, no. 11, pp. 770–772, November 1981.

[90] L. Le, J. Aikat, K. Jeffay, and F. D. Smith, "Network calculus based simulation for TCP congestion control: Theorems, implementation and evaluation," presented at ACM SIGCOMM'03, Karlsruhe, Germany, August 2003.

[91] Y. Lee and D. Brown, "Competition, consumer welfare, and the social cost of monopoly," in Cowles Foundation Discussion Paper No. 1528, last accessed: September 2006. [Online]. Available: http://cowles.econ.yale.edu/P/cd/d15a/d1528.pdf

[92] S. Lee, G. Narlikar, M. Pal, G. Wilfong, and L. Zhang, "Admission control for multihop wireless backhaul networks with QoS support," presented at IEEE WCNC'06, Las Vegas, NV, April 2006.

[93] T. Li, Y. Iraqi, and R. Boutaba, "Pricing and admission control for QoS-enabled Internet," Computer Networks, vol. 46, no. 1, pp. 87–110, September 2004.

[94] R. R.-F. Liao and A. T. Campbell, "Dynamic core provisioning for quantitative differentiated services," IEEE Transactions on Networking, vol. 12, no. 3, pp. 429–442, June 2004.

[95] Y. Liu and E. Knightly, "Opportunistic fair scheduling over multiple wireless channels," presented at IEEE INFOCOM'03, San Francisco, CA, March/April 2003.


[96] S. H. Low, L. L. Peterson, and L. Wang, "Understanding Vegas: a duality model," Journal of the ACM, vol. 49, no. 2, pp. 207–235, March 2002.

[97] S. H. Low, F. Paganini, J. Wang, S. Adlakha, and J. C. Doyle, "Dynamics of TCP/RED and a scalable control," presented at IEEE INFOCOM'02, New York, NY, June 2002.

[98] S. H. Low, F. Paganini, and J. C. Doyle, "Internet congestion control," IEEE Control Systems Magazine, vol. 22, no. 1, pp. 28–43, February 2002.

[99] S. H. Low, "A duality model of TCP and queue management algorithm," IEEE/ACM Transactions on Networking, vol. 11, no. 4, pp. 525–536, August 2003.

[100] R. Mahajan and S. Floyd, "Controlling high-bandwidth flows at the congested router," ICSI Tech Report TR-01-001, last accessed: September 2006. [Online]. Available: http://www.icir.org/red-pd/

[101] S. I. Maniatis, E. G. Nikolouzou, and I. S. Venieris, "End-to-end QoS specification issues in the converged all-IP wired and wireless environment," IEEE Communications Magazine, vol. 42, no. 6, pp. 80–86, June 2004.

[102] P. Marbach, "Pricing differentiated services networks: Bursty traffic," presented at IEEE INFOCOM'01, Anchorage, AK, April 2001.

[103] S. McCanne and S. Floyd, "ns-2 (network simulator version 2)," last accessed: September 2006. [Online]. Available: http://www.isi.edu/nsnam/ns/

[104] P. McKenney, "Stochastic fairness queueing," presented at IEEE INFOCOM'90, San Francisco, CA, June 1990.

[105] E. Monteiro, G. Quadros, and F. Boavida, "A scheme for the quantification of congestion in communication services and systems," presented at IEEE SDNE'96, Whistler, BC, Canada, June 1996.

[106] J. Nagle, "On packet switches with infinite storage," IEEE/ACM Transactions on Communications, vol. 35, no. 4, pp. 435–438, April 1987.

[107] M. J. Neely, E. Modiano, and C.-P. Li, "Fairness and optimal stochastic control for heterogeneous networks," presented at IEEE INFOCOM'05, Miami, FL, March 2005.

[108] S. Nelson and L. Kleinrock, "Spatial TDMA: A collision-free multihop channel access protocol," IEEE Transactions on Communications, vol. 33, no. 9, pp. 934–944, September 1985.

[109] N. Nichols, S. Blake, F. Baker, and D. Black, "Definition of the differentiated services field (DS field) in the IPv4 and IPv6 headers," IETF RFC 2474, December 1998.


[110] D. Niyato and E. Hossain, "Call admission control for QoS provisioning in 4G wireless networks: Issues and approaches," IEEE Network, vol. 19, no. 5, pp. 5–11, September/October 2005.

[111] J. Padhye, V. Firoiu, D. Towsley, and J. Kurose, "Modeling TCP throughput: A simple model and its empirical validation," presented at ACM SIGCOMM'98, Vancouver, Canada, August/September 1998.

[112] R. Pan, B. Prabhakar, and K. Psounis, "CHOKe: A stateless active queue management scheme for approximating fair bandwidth allocation," presented at IEEE INFOCOM'01, Anchorage, AK, April 2001.

[113] A. Parekh and R. Gallager, "A generalized processor sharing approach to flow control in integrated services networks: the single-node case," IEEE/ACM Transactions on Networking, vol. 1, no. 3, pp. 344–357, June 1993.

[114] ——, "A generalized processor sharing approach to flow control in integrated services networks: the multiple-node case," IEEE/ACM Transactions on Networking, vol. 2, no. 2, pp. 137–150, April 1994.

[115] E.-C. Park and C.-H. Choi, "Proportional bandwidth allocation in diffserv networks," presented at IEEE INFOCOM'04, Hong Kong, March 2004.

[116] V. Paxson and S. Floyd, "Wide area traffic: the failure of Poisson modeling," IEEE/ACM Transactions on Networking, vol. 3, no. 3, pp. 226–244, June 1995.

[117] R. Pindyck and D. Rubinfeld, Microeconomics, 6th ed. Upper Saddle River, NJ: Prentice Hall, 2004.

[118] J. Postel, "Internet protocol," IETF RFC 791, September 1981.

[119] Y. Qian, R. Q.-Y. Hu, and H.-H. Chen, "A call admission control framework for voice over WLANs," IEEE Wireless Communications, vol. 13, no. 1, pp. 44–50, February 2006.

[120] S. Ramabhadran and J. Pasquale, "Stratified round robin: A low complexity packet scheduler with bandwidth fairness and bounded delay," presented at ACM SIGCOMM'03, Karlsruhe, Germany, August 2003.

[121] E. Rosen, A. Viswanathan, and R. Callon, "Multiprotocol label switching architecture," IETF RFC 3031, January 2001.

[122] A. A. Saleh and J. M. Simmons, "Evolution toward the next-generation core optical network," IEEE Journal of Lightwave Technology, vol. 24, no. 9, pp. 3303–3321, September 2006.

[123] N. B. Salem and J.-P. Hubaux, "A fair scheduling for wireless mesh networks," presented at WiMesh'05, Santa Clara, CA, September 2005.


[124] S. Shenker, "Fundamental design issues for the future Internet," IEEE Journal on Selected Areas in Communications, vol. 13, no. 7, pp. 1176–1188, September 1995.

[125] M. Shin, S. Chong, and I. Rhee, "Dual-resource TCP/AQM for processing-constrained networks," presented at IEEE INFOCOM'06, Barcelona, Spain, May 2006.

[126] M. Shreedhar and G. Varghese, "Efficient fair queuing using deficit round robin," IEEE/ACM Transactions on Networking, vol. 4, no. 3, pp. 375–385, June 1996.

[127] W. Stevens, TCP/IP Illustrated, Volume 1: The Protocols. Boston: Addison-Wesley, 1994.

[128] I. Stoica, S. Shenker, and H. Zhang, "Core-stateless fair queueing: Achieving approximately fair bandwidth allocation in high speed networks," presented at ACM SIGCOMM'98, Vancouver, Canada, August/September 1998.

[129] A. Striegel and G. Manimaran, "Packet scheduling with delay and loss differentiation," Computer Communications, vol. 25, no. 1, pp. 21–31, January 2002.

[130] S. Suri, G. Varghese, and G. Chandramenon, "Leap forward virtual clock: A new fair queueing scheme with guaranteed delay and throughput fairness," presented at IEEE INFOCOM'97, Kobe, Japan, April 1997.

[131] B. Suter, T. Lakshman, D. Stiliadis, and A. Choudhury, "Buffer management schemes for supporting TCP in gigabit routers with per-flow queueing," IEEE Journal on Selected Areas in Communications, vol. 17, no. 6, pp. 1159–1169, June 1999.

[132] C. Tan, M. Gurusamy, and J. Lui, "Achieving proportional loss optical burst switching WDM networks," presented at IEEE GLOBECOM'04, Dallas, TX, November 2004.

[133] A. Tang, J. Wang, and S. Low, "Understanding CHOKe," presented at IEEE INFOCOM'03, San Francisco, CA, March/April 2003.

[134] H. Vin, P. Goyal, and A. Goyal, "A statistical admission control algorithm for multimedia servers," presented at ACM MultiMedia'94, San Francisco, CA, October 1994.

[135] S. Wen, Y. Fang, and H. Sun, "CHOKeW: Bandwidth differentiation and TCP protection in core networks," presented at MILCOM'05, Atlantic City, NJ, October 2005.

[136] S. Wen and Y. Fang, "CHOKeW patch on ns," last accessed: September 2006. [Online]. Available: http://www.ecel.ufl.edu/~wen/chokew.zip

[137] ——, "ConTax: A pricing scheme for CHOKeW," presented at MILCOM'06, Washington, DC, October 2006.


[138] Z. Xia, W. Hao, I.-L. Yen, and P. Li, "A distributed admission control model for QoS assurance in large-scale media delivery systems," IEEE Transactions on Parallel and Distributed Systems, vol. 16, no. 12, pp. 1143–1153, December 2005.

[139] J. Xu and R. J. Lipton, "On fundamental tradeoffs between delay bounds and computational complexity in packet scheduling algorithms," presented at ACM SIGCOMM'02, Pittsburgh, PA, August 2002.

[140] G. Xylomenos, G. C. Polyzos, P. Mahonen, and M. Saaranen, "TCP performance issues over wireless links," IEEE Communications Magazine, vol. 39, no. 4, pp. 52–58, April 2001.

[141] L. Yang, W. Seah, and Q. Yin, "Improving fairness among TCP flows crossing wireless ad hoc and wired networks," presented at ACM MobiCom'03, San Diego, CA, September 2003.

[142] I. Yeom and A. L. N. Reddy, "Modeling TCP behavior in a differentiated services network," IEEE/ACM Transactions on Networking, vol. 9, no. 1, pp. 31–46, February 2001.

[143] J. Zeng, L. Zakrevski, and N. Ansari, "Computing the loss differentiation parameters of the proportional differentiation service model," IEE Proceedings Communications, vol. 153, no. 2, pp. 177–182, April 2006.

[144] Y. Zhang and Y. Fang, "A secure authentication and billing architecture for wireless mesh networks," to appear in ACM Wireless Networks.

[145] D. Zhao, J. Zou, and T. D. Todd, "Admission control with load balancing in IEEE 802.11-based ESS mesh networks," presented at QSHINE'05, Orlando, FL, August 2005.


BIOGRAPHICAL SKETCH

Shushan Wen received the B.E. degree from the Department of Electronic Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu, China, in 1996, and the Ph.D. degree from the Institute of Communication and Information Engineering at the same university in 2002. Currently he is pursuing the Ph.D. degree in the Department of Electrical and Computer Engineering, University of Florida. His research interests include TCP/IP performance analysis, congestion control, admission control, pricing, fairness and quality of service. He is a student member of IEEE.
