
Computer Networks and ISDN Systems 29 (1997) 555-567

Quality of services based on both call admission and cell scheduling

Lily Cheng ¹, Department of Computer Science, Michigan State University, East Lansing, MI 48824-1027, USA

¹ The author is currently with Lucent Technologies. Email: lilycheng@lucent.com

Abstract

A scheme is proposed to integrate call admission control and cell scheduling in order to provide cell-level QoS for different classes of ATM traffic. This mechanism allocates bandwidth at both the call level and the cell level. The call admission algorithm admits a call if the current network bandwidth can satisfy the traffic descriptors and QoS parameters provided by the call request. The required bandwidth is then calculated based on this (traffic descriptors, QoS parameters) tuple. A time-out value is computed based on the bandwidth allocated by the call admission algorithm. This time-out value serves as an essential parameter for integrating the call admission and scheduling algorithms, which is necessary to enforce different cell-level QoS for the various classes of ATM traffic. Analysis shows that the mechanism provides acceptable bounds on delay time-slots for the various classes of traffic. Simulation is used to conduct experiments studying the impact of several parameters (i.e., call inter-arrival intervals and different traffic mixes) on system performance. Our results show that by dynamically measuring network loads at call setup time, the call blocking probability is significantly reduced and link utilization is increased. © 1997 Published by Elsevier Science B.V.

Keywords: ATM network; Call admission control; Cell scheduling; Traffic descriptors; QoS parameters

1. Introduction

Applications communicating via B-ISDN require that the network guarantee a certain level of quality of service (QoS). Parameters associated with QoS include cell loss rate, cell delay, and cell delay variation. Cell delay occurs because of propagation delay, switching delay, and queuing delay, whereas cell loss is caused by transmission errors, lack of buffer space, and cell delay violations. These parameters determine whether the ATM network fulfills its service contract for a given connection.

Ideally, the network should provide the QoS requirements for communication sessions while simultaneously taking advantage of the benefits gained from statistical multiplexing. Such a goal poses difficult challenges covering several QoS issues (e.g., traffic shaping, call admission, congestion control, buffer management, cell scheduling) which need to be implemented at network nodes. Since the problem of providing different classes of QoS requirements cannot be solved by a single technique, a practical strategy is to decompose this problem into several subproblems and use a combination of approaches to solve it.



Our previous work related to QoS issues focused on experimental studies of traffic characteristics [1-3], call admission algorithms [4,5] and scheduling algorithms [6].

Traffic descriptors and QoS parameters must be provided at connection request time. The ATM traffic descriptor is the generic list of traffic parameters that can be used to capture the traffic characteristics of an ATM connection [7]. QoS parameters are the set of conditions that a network must satisfy when connections are made. The call admission problem involves evaluating the network performance (taking into account the new call and the existing calls) and then determining whether the network can guarantee the QoS of the new call without affecting the QoS promised to existing calls. If the QoS requirements of both the new call and the existing calls can be satisfied, the new request is accepted; otherwise, it is rejected. The objective of admission control is to accept or reject arriving calls so as to maximize the output link utilization.

A scheduling policy is needed to maintain QoS for different classes of traffic when queuing occurs [8-12]. The cell-level QoS may be trivially guaranteed by any scheduling mechanism if a conservative admission control policy is used to limit utilization levels. Guaranteeing cell-level QoS to admitted calls is not sufficient, however, if it comes at the cost of an unreasonably high call blocking probability and very low network resource utilization. Conversely, if too many calls are admitted to the network, no scheduling algorithm is able to satisfy the cell-level QoS for all traffic classes; in this case, call preemption [13] may be needed.

2. Background

There are several call admission control policies [14,15] based on cell-level and call-level QoS requirements. In [16,17], a modular approach is pursued, involving cooperation between the processors responsible for scheduling and call admission control. It is shown that the schedulable region can be used at the admission control level to choose the optimal admission policy which guarantees QoS at both the call level and the cell level.

Ramamurthy [18-21] proposed a usage parameter control (UPC) based call admission control framework with three levels of control: call level, burst level, and cell level. The call admission algorithm needs an input tuple of (traffic class, QoS parameters). Central to this framework is a traffic classification scheme that maps applications to specific traffic classes based on their QoS requirements and statistical traffic characteristics.

Ferrari and Verma [22] presented a joint scheduling and admission control algorithm for a system with two classes of traffic. They use a version of earliest due date (EDD) scheduling along with a priority mechanism. This algorithm guarantees QoS for a "deterministic" class of traffic which cannot tolerate packet loss. It also reserves enough bandwidth for each admitted call to transmit continuously at its peak rate, without taking advantage of statistical multiplexing.

In this paper, we present a scheduling algorithm and a call admission control mechanism, and use a time-out mechanism to allocate bandwidth at both the call level and the cell level. We also focus on the relationship between the call admission control and the scheduling algorithm. The major difference between our work and Hyman's work [16,17] is twofold. First, at the call level our scheme uses a dynamic measurement approach to measure network link bandwidth utilization, whereas Hyman's approach uses a linear programming formulation. Second, while both schemes have to integrate scheduling and admission control, this paper uses a time-out mechanism, whereas Hyman proposed a data abstraction filter to maintain the schedulable region. The work also differs from Ramamurthy's [18-21] in that no burst level control is addressed; from an empirical point of view, burst level control is very difficult to achieve without excessive hardware support.

3. Call admission control (CAC) for various QoS classes

In order to achieve different QoS requirements for diverse traffic classes and to minimize the parameters used for negotiating a connection, we propose to integrate a connection admission control algorithm and a scheduling algorithm by using a time-out mechanism. The call admission algorithm allocates sufficient bandwidth to the different classes of traffic.


It admits a call if the current network bandwidth can satisfy the QoS requirements of the call request. The required bandwidth is then calculated based on this tuple. A time-out value for the scheduling algorithm is computed based on the bandwidth allocated by the call admission, which in turn is used by the scheduling algorithm. Such time-out values serve as essential parameters in combining the call admission and scheduling algorithms, which is necessary to enforce different cell-level QoS for the various classes of ATM traffic. A more detailed discussion of the time-out values is given in the next section.

3.1. CAC criteria


When a new call requests admission to the network, it provides the network with a tuple of (traffic descriptors, QoS parameters). The parameters associated with our CAC algorithm are traffic descriptors (i.e., peak rate, mean rate) and QoS classes. Fig. 1(d) shows a graphical view of our CAC algorithm, which maximizes the link bandwidth utilization. The proposed algorithm provides QoS for the following classes: constant bit rate, variable bit rate, available bit rate, and best-effort. Services provided for constant bit rate and variable bit rate are guaranteed services; this means that the network bandwidth can support an amount of bandwidth equal to a ratio of the peak rate for all connections, even if such connections generate their peak rates at the same time (refer to Fig. 1(b)). The available network bandwidth is greater than or equal to this ratio of the peak rate for a given traffic stream during the life of the connection. The ratios are different for constant bit rate and variable bit rate traffic; they are application dependent and vary with different multiplexing gains.

Fig. 1. Connection admission control based on QoS (CBR, VBR, ABR, best-effort) and available bandwidth. (a) An instance of network bandwidth usage. (b) Network bandwidth fully utilized by guaranteed services. (c) Network bandwidth fully utilized by guaranteed services and available bit rate services. (d) Full bandwidth usage by different QoS classes.



ABR and best-effort services can only use the bandwidth remaining from the guaranteed services. They are intended to provide different QoS to data traffic and to make the best use of the network bandwidth. To support ABR traffic, the network link bandwidth must be greater than the mean rate of such traffic (refer to Fig. 1(c)). Best-effort services can fully utilize the rest of the network bandwidth; connections for such traffic are always accepted, hence no minimum available network bandwidth is required. Fig. 1 depicts the above ideas. We illustrate how our algorithm works in the rest of this section.

Our algorithm maintains a list of traffic descriptors associated with each connection. It then computes the required bandwidth (RB) for the connection from the traffic descriptors, based on the services requested. The required bandwidth for a connection represents the maximum bandwidth allocated to this connection; hence the remaining capacity of each physical link can be calculated. The CAC algorithm accepts a connection based on the remaining capacity and the required bandwidth requested by the connection. Usually, the required bandwidth of a connection falls between the peak rate and the mean rate. The required bandwidth of a CBR connection is equal to θ_CBR P_α, where 0 ≤ θ_CBR ≤ 1 and P_α is its peak rate. This is also true for a VBR connection, where the required bandwidth is θ_VBR P_α. The actual values of θ_CBR and θ_VBR are application dependent and are adjustable in our algorithm. A detailed discussion of θ_CBR and θ_VBR, which are calculated from statistical multiplexing, can be found in [23]. The smaller the value of θ (i.e., θ_CBR and θ_VBR), the higher the network bandwidth efficiency. In the case where θ = 1, an amount of bandwidth equal to the peak rate is allocated. For ABR traffic, the required bandwidth is equal to the mean rate, whereas the required bandwidth is zero for best-effort traffic. We now illustrate how our algorithm works.

Consider a new traffic flow α (P_α, M_α, Q_α) requesting a connection to a network where n connections have been established. The set of parameters (P_α, M_α, Q_α) represents the QoS requirements for class Q_α with a peak rate P_α and mean rate M_α, where Q_α ∈ {CBR, VBR, ABR, BE}. If the incoming traffic requests service in a guaranteed class (i.e., CBR or VBR), all the peak rate conditions for this class have to be satisfied. The worst case occurs when all traffic flows in the guaranteed class generate their peak rates at the same time (refer to Fig. 1(b)). During connection setup, the call admission control algorithm performs the following test:

1. Determines the required bandwidth, RB, for flow α:

   RB_\alpha = F(P_\alpha, M_\alpha, Q_\alpha),   (1)

   F(P_\alpha, M_\alpha, Q_\alpha) =
   \begin{cases}
   \theta_{CBR} P_\alpha & \text{if } Q_\alpha \in \text{CBR}, \\
   \theta_{VBR} P_\alpha & \text{if } Q_\alpha \in \text{VBR}, \\
   M_\alpha & \text{if } Q_\alpha \in \text{ABR}, \\
   0 & \text{if } Q_\alpha \in \text{BE},
   \end{cases}   (2)

   where F is the required bandwidth evaluator.

2. Determines whether there is enough link capacity, L, after adding the new flow α, assuming that all flows i ∈ VBR will generate their peak rates at the same time:

   L >
   \begin{cases}
   \sum_{i \in CBR} RB_i + \sum_{i \in VBR} RB_i & \text{if } Q_\alpha \in \text{CBR}, \\
   \sum_{i \in CBR} RB_i + \sum_{i \in VBR} RB_i & \text{if } Q_\alpha \in \text{VBR}, \\
   \min\big(\hat{R}, \sum_{i \in CBR, VBR} RB_i\big) + M_\alpha & \text{if } Q_\alpha \in \text{ABR}, \\
   0 & \text{if } Q_\alpha \in \text{BE},
   \end{cases}   (3)

   where the sums are taken over all connections, including the new flow α.

3. Accepts the connection if the condition is true; otherwise, rejects it.
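As a hedged illustration of how the test in Eqs. (1)-(3) might be applied at connection setup, the Python sketch below mirrors the three steps; the θ values, the class labels, and the r_hat argument (standing for the measured utilization R̂ discussed below) are assumptions made for the sketch, not part of the paper's implementation.

```python
from dataclasses import dataclass

# Hypothetical multiplexing factors (0 < theta <= 1); the paper treats them
# as application-dependent and adjustable.
THETA = {"CBR": 0.9, "VBR": 0.6}


@dataclass
class Flow:
    peak: float       # peak rate P (e.g., in Mbps)
    mean: float       # mean rate M
    qos_class: str    # "CBR", "VBR", "ABR", or "BE"


def required_bandwidth(f: Flow) -> float:
    """Eq. (2): required bandwidth evaluator F(P, M, Q)."""
    if f.qos_class == "CBR":
        return THETA["CBR"] * f.peak
    if f.qos_class == "VBR":
        return THETA["VBR"] * f.peak
    if f.qos_class == "ABR":
        return f.mean
    return 0.0  # best-effort reserves no bandwidth


def admit(new: Flow, admitted: list, link: float, r_hat: float) -> bool:
    """Eq. (3): accept the new flow if the link capacity L suffices.

    r_hat stands for the measured link usage discussed in the text.
    """
    flows = admitted + [new]
    guaranteed = sum(required_bandwidth(f) for f in flows
                     if f.qos_class in ("CBR", "VBR"))
    if new.qos_class in ("CBR", "VBR"):
        return link > guaranteed
    if new.qos_class == "ABR":
        return link > min(r_hat, guaranteed) + new.mean
    return True  # best-effort is admitted whenever the link is up
```

Under these assumed θ values, for instance, a VBR request with a 10 Mbps peak rate contributes 0.6 × 10 = 6 Mbps to the guaranteed sum.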

From data obtained via experiments, the measured link bandwidth usage (R̂) is always less than (i.e., when the traffic level equals its mean rate) or equal to (i.e., when the traffic level equals its peak rate) the bandwidth assigned for CBR and VBR traffic (refer to Fig. 1(a)), if the network only consists of CBR and VBR traffic. In order to fully utilize the network bandwidth, call connection requests for ABR traffic are accepted based on the current bandwidth utilization (R̂); hence R̂ can be greater than Σ_{i∈CBR,VBR} RB_i.


We measured R̂ by using the Simple Network Management Protocol (SNMP), as referenced in [4].
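As a sketch of how R̂ could be derived from periodically sampled interface byte counters (the polling mechanism itself, e.g. SNMP, is outside the scope of the sketch, and the function name is illustrative):

```python
def measured_utilization(octets_prev: int, octets_now: int,
                         interval_s: float, link_bps: float) -> float:
    """Estimate the measured usage R-hat (in bit/s) from two byte-counter
    samples taken interval_s seconds apart, clamped to the link capacity."""
    rate_bps = 8 * (octets_now - octets_prev) / interval_s
    return max(0.0, min(rate_bps, link_bps))
```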

The value of θ affects the acceptance of calls requesting ABR traffic, which also depends on the time instant at which the measurement is taken. Consider a new flow α requesting a connection to the network at time t in Fig. 1(b), where R̂ has a large value. This reduces the probability that the flow α can be accepted, even though the objective is to admit as many calls as possible. For example, an ABR call will be rejected if it requests a connection at time t. The same ABR request will be accepted if it is made at time t_1 (refer to Fig. 1(b), where R̂(t) > R̂(t_1)). Note that the network has the same number of connections at times t and t_1.

Finally, if the incoming traffic requests best-effort service, a connection is established provided the current network bandwidth is greater than 0, that is, the network is up. The best-effort class is intended to make use of the remaining bandwidth; there is no bandwidth allocated for this service. In the case where guaranteed services require more bandwidth and there is not enough bandwidth for best-effort traffic, the cells are simply dropped. Fig. 1(b) illustrates the case where the network bandwidth is allocated to guaranteed services and a peak rate allocation scheme is not able to accept any more calls. Our algorithm, however, allows more calls by accepting ABR traffic (shown in Fig. 1(c)) and best-effort traffic (shown in Fig. 1(d)).

The above mechanism is a resource over-allocation scheme (i.e., θ < 1) used to achieve high bandwidth utilization. To better utilize the bandwidth, ABR and best-effort traffic are admitted if there is bandwidth available, regardless of the fact that the total bandwidth might already have been allocated to guaranteed services.


In the case where guaranteed traffic needs to compete with bandwidth already allocated to ABR or best-effort traffic, guaranteed services have higher priority.

4. Class-based scheduling algorithm

In order to maximize the link bandwidth utilization while at the same time guaranteeing cell-level QoS for different classes of traffic, we propose a mechanism which integrates the call admission control and the scheduling policy. The tasks of the scheduling algorithm and its relation to the call admission are described in this section.

Fig. 2 shows how the scheduling algorithm provides bandwidth requirements for different QoS classes. In this scheduling model, an ATM switching node consisting of four separate buffers provides different QoS requirements to four input classes (i.e., CBR, VBR, ABR, BE) of traffic from the call admission control. Cells are placed at the end of the selected input buffer based on their associated classes, and the priority level is defined according to the bandwidth allocated by the call admission.

If a cell of a given class arrives at an ATM node and its corresponding queue is full, the cell is placed in the best-effort queue. If a cell arrives at a full best-effort queue, the cell is dropped. The CBR and VBR queues have the highest priority, the ABR queue has the next highest priority, and the best-effort queue has the lowest priority. Each virtual channel with the same connection type (i.e., CBR, VBR, ABR, or best-effort) is put in the same queue.


Fig. 2. Scheduling model for different QoS classes.


For example, all the CBR connections are put in one CBR queue, and so on. Cells in the higher priority queues (i.e., CBR and VBR) are processed first, whereas those in the lower priority queues (i.e., ABR and BE) get service only when the higher priority queues are empty. Note that Fig. 2 depicts a scheduling model whose input queues accept only cells arriving at a rate less than or equal to the corresponding required bandwidth.
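A minimal sketch of this cell-placement rule, assuming bounded FIFO queues per class; the queue capacities are hypothetical values chosen only to make the sketch concrete.

```python
from collections import deque

# Hypothetical per-class queue capacities in cells; the paper does not fix them.
CAPACITY = {"CBR": 64, "VBR": 64, "ABR": 128, "BE": 256}
queues = {cls: deque() for cls in CAPACITY}


def enqueue(cell, cls: str) -> bool:
    """Place an arriving cell in its class queue; a cell that finds its queue
    full goes to the best-effort queue, and is dropped if that is also full."""
    if len(queues[cls]) < CAPACITY[cls]:
        queues[cls].append(cell)
        return True
    if len(queues["BE"]) < CAPACITY["BE"]:
        queues["BE"].append(cell)   # overflow into the best-effort queue
        return True
    return False                    # best-effort queue full: cell is dropped
```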

The scheduler uses a time-out mechanism to enforce the bandwidth allocated to the different service classes by the call admission algorithm. A time-out value, T_i, is associated with each queue. Each queue, q_i, can receive service in a round-robin fashion for a maximum service time of up to T_i. In other words, a queue q_i can receive service until the queue is empty or the timer T_i expires, whichever occurs first. In the case where the server serves each queue until the timer expires, this is also called weighted round robin (WRR). Our scheme differs from HRR [10] in that time-outs for the guaranteed services will not occur while their queues are non-empty; in the HRR scheme, every queue can have unserviced cells when the timer expires. If a queue is empty before the timer expires, the actual service time is T̂_i. The remaining extra time (T_i − T̂_i) from the guaranteed classes (i.e., CBR and VBR) is used to service the other classes of applications (i.e., ABR and BE).

The CAC algorithm, as described above, determines the admission or rejection of a new call, α, and computes the required bandwidth, RB_α, for the call based on Eqs. (1) and (2). The value of RB_α can be used to calculate the time-out value T_i for the queue. Assume T_L is the cycle time, in units of one cell time, determined by the link speed L; then

T_1 = \left(\frac{\sum_{i \in CBR} RB_i}{L}\right) T_L,   (4)

T_2 = \left(\frac{\sum_{i \in VBR} RB_i}{L}\right) T_L,   (5)

T_3 = \min\left\{\left(T_L - \hat{T}_1 - \hat{T}_2\right), \left(\frac{\sum_{i \in ABR} RB_i}{L}\right) T_L\right\},   (6)

T_4 = T_L - \hat{T}_1 - \hat{T}_2 - \hat{T}_3.   (7)

The value of the time-out, T_i, where i = 1 to 4, is computed according to the above equations. T̂_i is the actual time used to process the queue q_i, and max(T̂_i) = T_i, so T_i is an indicator of the amount of bandwidth allocated to the particular queue. The values of T_1 and T_2 have to be recomputed when there is a new connection for CBR or VBR traffic, respectively, whereas T_3 and T_4 are recalculated in each service cycle. Fig. 3 shows a pictorial view of the relations between T_i and T̂_i.

Fig. 3. Pictorial view of T_i and T̂_i. Note that T_L = T_1 + T_2 + T_3 + T_4 = T̂_1 + T̂_2 + T̂_3 + T̂_4.

From Eq. (6), if the scheduler serves q_1 and q_2 until the timer expires (i.e., T̂_1 = T_1 and T̂_2 = T_2), then

T_3 = \min\left\{\left(T_L - T_1 - T_2\right), \left(\frac{\sum_{i \in ABR} RB_i}{L}\right) T_L\right\}.   (8)

In the above equation, if T_3 = (T_L − T_1 − T_2), then (T_1 + T_2 + T_3) = T_L. Therefore T_4 = 0, which is the case when CBR, VBR and ABR traffic consume all of the bandwidth, leaving none for the best-effort traffic. On the other hand, if

T_3 = \left(\frac{\sum_{i \in ABR} RB_i}{L}\right) T_L < (T_L - T_1 - T_2),

then

T_4 = (T_L - T_1 - T_2 - T_3) = (T_L - T_1 - T_2) - \left(\frac{\sum_{i \in ABR} RB_i}{L}\right) T_L > 0.

This occurs when the best-effort traffic has some available bandwidth.
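A minimal sketch of Eqs. (4)-(7), assuming that the per-class required-bandwidth sums are supplied by the CAC algorithm and that t1_used, t2_used and t3_used hold the actual service times T̂_1, T̂_2, T̂_3; all names are illustrative.

```python
def timeouts(rb_cbr: float, rb_vbr: float, rb_abr: float,
             link: float, t_cycle: float,
             t1_used: float, t2_used: float, t3_used: float):
    """Eqs. (4)-(7): per-cycle time-out budgets for the four class queues.

    rb_* are the summed required bandwidths per class, link is the link
    speed L, t_cycle is the cycle time T_L, and t*_used are the actual
    service times from the current cycle.
    """
    t1 = (rb_cbr / link) * t_cycle                       # Eq. (4)
    t2 = (rb_vbr / link) * t_cycle                       # Eq. (5)
    t3 = min(t_cycle - t1_used - t2_used,
             (rb_abr / link) * t_cycle)                  # Eq. (6)
    t4 = t_cycle - t1_used - t2_used - t3_used           # Eq. (7)
    return t1, t2, t3, t4
```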

The service discipline for the aforementioned scheduling scheme is First-Come-First-Served within a given class. For the guaranteed classes of connections, the time-outs (i.e., T_1 and T_2) are calculated based on the required bandwidth, which is the peak rate allocated by the CAC algorithm.


repeat {
    while no time-out occurs from T_1 and NotEmpty(q_1)
        process cells in q_1;
    T̂_1 is the actual time for processing q_1;
    while no time-out occurs from T_2 and NotEmpty(q_2)
        process cells in q_2;
    T̂_2 is the actual time for processing q_2;
    T_3 = min{(T_L − T̂_1 − T̂_2), (Σ_{i∈ABR} RB_i / L) T_L};
    while no time-out occurs from T_3 and NotEmpty(q_3)
        process cells in q_3;
    T̂_3 is the actual time for processing q_3;
    T_4 = T_L − T̂_1 − T̂_2 − T̂_3;
    while no time-out occurs from T_4 and NotEmpty(q_4)
        process cells in q_4;
}

Fig. 4. Scheduling algorithm for the call admission.

Therefore, T_3 and T_4 can use the remaining bandwidth, by setting them equal to the remainder of the available time slots. The actual times (i.e., T̂_1 and T̂_2) for processing queues q_1 and q_2 will be less than or equal to the time-out values. Ideally, the network bandwidth is fully utilized by guaranteed traffic (refer to Fig. 3 where T_1 + T_2 = T_L). When this happens, T_3 = T_4 = 0, and cells entering the ABR and best-effort queues are buffered until the next service cycle, when these two queues can usually receive service.

Fig. 4 shows how the scheduling algorithm works. The scheduling algorithm is similar to weighted round-robin (WRR), in that the server serves each queue in a round-robin fashion with different amounts of service time (i.e., the time-out values). It differs from WRR in that the time-out value for each queue is dynamically computed in each service cycle according to the bandwidth allocated to each class. The implementation of this scheme can be done in hardware; as an example, the queuing mechanism can be implemented as a circular buffer.
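A compact Python rendering of the loop in Fig. 4 could look as follows, under the simplifying assumptions that time is counted in cell slots, that each transmitted cell consumes one slot, and that transmit is a placeholder for the switch's send operation.

```python
def service_cycle(queues, t1, t2, rb_abr, link, t_cycle, transmit):
    """One cycle of the time-out round-robin scheduler sketched in Fig. 4.

    queues: dict of collections.deque keyed by "CBR", "VBR", "ABR", "BE".
    t1, t2: time-outs from Eqs. (4) and (5); t3 and t4 are recomputed here.
    transmit: callback that sends one cell and is assumed to take one slot.
    """
    def serve(cls, budget):
        used = 0
        # Serve until the queue empties or the time-out budget is exhausted.
        while queues[cls] and used < budget:
            transmit(queues[cls].popleft())
            used += 1
        return used

    used1 = serve("CBR", t1)
    used2 = serve("VBR", t2)
    t3 = min(t_cycle - used1 - used2, (rb_abr / link) * t_cycle)  # Eq. (6)
    used3 = serve("ABR", t3)
    t4 = t_cycle - used1 - used2 - used3                          # Eq. (7)
    serve("BE", t4)
```

Because T_3 and T_4 are recomputed from the actual service times of the higher-priority queues, any slack left by the CBR and VBR queues automatically flows to ABR and best-effort traffic, as described above.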

The advantage of this approach is that it is easy to implement and flexible to tune. The algorithm employs a simple queue management scheme while at the same time guaranteeing services to different classes by means of a time-out mechanism. The aforementioned scheduling algorithm is expected to cooperate with our CAC algorithm in order to give different priorities to different classes of traffic. Because of the high-speed switching associated with an ATM network, the scheduling algorithm must be simple. The proposed algorithm is a priority-scheduling mechanism that gives priority to delay-sensitive traffic and is feasible at the high speeds required for B-ISDN networks.

5. Analysis

For the scheduling algorithm, the maximum delay time slot is bounded for a chosen time-out value. In order to analyze the maximum queuing delay for a cell c joining a particular queue, we always consider the worst case. The delay time slot, D_i, is defined as the interval from the time the cell enters the queue, q_i, to the time its transmission begins. Let T_L be the cycle time for processing all queues according to the link speed L; then Σ_{all i} T_i = T_L. Here T_i and T_L are in time units for processing a single cell.

The maximum queuing delay, D_1, for the first queue occurs when a cell c joins the end of the first queue and the other queues are full. Let B_1 be the size of the first queue, and let the time duration that c has to wait in the queue before it is served be B_1 when no time-out from the other queues occurs. During the time B_1, the number of time-outs that occur from the other queues is (⌊B_1/T_1⌋ + 1); hence, the time needed to process the other queues is (⌊B_1/T_1⌋ + 1)(Σ_{i≠1} T_i). Finally, the server needs B_1 amount of time to serve the rest of the first queue. From the above description, it follows that the delay for the first queue is

D_1 \le \left(\left\lfloor \frac{B_1}{T_1} \right\rfloor + 1\right) \sum_{i \ne 1} T_i + B_1,   (9)

where T_i is the time-out value of the ith queue (refer to Fig. 5(a)).


Similarly, in the general case the maximum queuing delay, D_i, for the ith queue occurs when a cell c joins the end of the ith queue and the other queues are full. Let B_i be the size of the ith queue. The time needed to process the other queues due to time-outs is ⌊B_i/T_i⌋ Σ_{j≠i} T_j, and the cell also needs to wait Σ_{j≠i} T_j before the server can serve the rest of its queue. Therefore, the delay for the ith queue is

D_i \le \left(\left\lfloor \frac{B_i}{T_i} \right\rfloor + 1\right) \sum_{j \ne i} T_j + B_i.   (10)

From Eq. (9), if T_i = 1 for all i (i.e., cells in different queues are processed in a round-robin fashion where only a single cell gets service from each queue), then ⌊B_1/T_1⌋ = B_1, and the delay for the first queue is

D_1 = (B_1 + 1) \sum_{i \ne 1} T_i + B_1 \le (B_1 + 1) n,   (11)

where n is the number of different classes of queues. Similarly, for Eq. (10), if T_i = 1 for all i, then D_i ≤ (B_i + 1)n (refer to Fig. 5(b)).

For the case T_1 > B_1, cells in the first queue are served before a time-out occurs, so ⌊B_1/T_1⌋ = 0. Hence, Eq. (9) becomes

D_1 = \sum_{i \ne 1} T_i + B_1 < \sum_{i \ne 1} T_i + T_1 = T_L.   (12)

Similarly, for Eq. (10), if T_i > B_i for all i, then D_i < T_L. More precisely, since T_i > B_i, the maximum time needed to serve the ith queue is T̂_i, which is also equal to B_i, so Eq. (12) becomes

D_1 = \sum_{i \ne 1} B_i + B_1 = \sum_{\text{all } i} B_i,   (13)

which occurs when the scheduler processes each queue sequentially. Hence, D_i = Σ_{all i} B_i. Each queue is thus served until it is empty before the scheduler goes on to the other queues (refer to Fig. 5(c)).

If all T_i are equal, the scheme is similar to Time Division Multiplexing (TDM). Assuming T_i = T for all i, Eq. (9) becomes D_1 = (⌊B_1/T⌋ + 1) Σ_{i≠1} T_i + B_1 ≤ (B_1 + 1)n. Applying T_i = T for all i to Eq. (10) gives D_i ≤ (B_i + 1)n (refer to Fig. 5(d)).

It can be observed from the above equations that the maximum queuing delay for a single cell in the first queue is bounded by the queue size, and that the maximum queuing delay in general is bounded by Eq. (10). Note that T_L is a constant value determined by the network link speed. For a given (T_i, B_i) pair, D_i is a bounded value. An advantage of the RR/T scheme is that the bound of D_i for a particular queue, q_i, depends on the local time-out value, T_i, which makes it easy to adjust the time-out value for real-time traffic in order to satisfy its stringent delay requirements. The bounds computed above represent the worst case for a single cell. Fig. 5 depicts examples of the worst-case queuing delay for CBR cells; the queuing delays for VBR cells are similar. Table 1 summarizes the queuing delay under different conditions.
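As a worked instance of these bounds with hypothetical values, take B_1 = 50 cells and (T_1, T_2, T_3, T_4) = (64, 32, 16, 16) cell times, so that T_L = 128. Eq. (9) gives

D_1 \le \left(\left\lfloor \frac{50}{64} \right\rfloor + 1\right)(32 + 16 + 16) + 50 = 64 + 50 = 114 \text{ cell times},

and, since T_1 > B_1 in this setting, Eq. (12) yields the same figure directly: D_1 = Σ_{i≠1} T_i + B_1 = 114 < T_L = 128.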

Fig. 5. Examples of the worst case for CBR cells (panels (a)-(d); e.g., (a) general case with ⌊B_1/T_1⌋ = 3, and (d) T_i = T for all i with ⌊B_1/T⌋ = 3).


Fig. 6. Performance results of the proposed call admission control algorithm (λ = 5 calls/sec); x-axis: number of total connections.


Fig. 7. Call-blocking probability as a function of the number of arriving calls; x-axis: call arrival intervals (in units of 20 msec); curves: peak rate allocation vs. call admission control.


Table 1
Summaries of the queuing delay

Condition               Queuing delay for the queues
General case            D_i ≤ (⌊B_i/T_i⌋ + 1) Σ_{j≠i} T_j + B_i
T_i = 1 for all i       D_i ≤ (B_i + 1)n
T_i > B_i               D_i = Σ_{all i} B_i < T_L
T_i = T for all i       D_i ≤ (B_i + 1)n


6. Performance results

Various simulations were performed to study the performance of the proposed system. We use a traffic model derived from empirical traffic measurements [1,6] to generate data, voice and video traffic. The traffic streams consist of a uniform distribution of various applications (i.e., FTP, Audio, Video, Telnet/Rlogin), where a link is modeled as 155 Mbps. The performance of the proposed call admission algorithm is compared with that of the peak rate allocation scheme.

Fig. 6 illustrates that the proposed call admission algorithm performs better than the peak rate allocation scheme. Both schemes have call-blocking probabilities associated with the same arrival rate (λ = 5 calls/sec), regardless of the total number of connections. With a fixed arrival rate, the system is in a steady state, so the call-blocking probability remains stable.

Since the total number of connections does not affect the call-blocking probability, the other simulation results were obtained using 1000 connections in the simulation. Note that in the default configuration of a FORE ATM network, a host can only have approximately 250 virtual connections open simultaneously. Fig. 7 depicts the call-blocking probability as a function of call inter-arrival intervals, given in time units of 20 milliseconds. Note that the inter-arrival time is chosen to overload the system and results in very high call-blocking probabilities. Again, the proposed algorithm has a lower call-blocking probability. The difference is more significant when the traffic load is high (i.e., for small call arrival intervals). The peak rate allocation scheme causes higher call-blocking probabilities with smaller call inter-arrival intervals; reserving peak rates for all existing connections that do not fully utilize the reserved bandwidth results in a high call rejection rate. This shows that by dynamically measuring network loads at call setup time, the call-blocking probability is significantly reduced and link utilization is increased.

In order to understand the effects of guaranteed services, experiments were also conducted to study different traffic mixes. Fig. 8(a) plots call-blocking probabilities for different ratios of guaranteed traffic.

Fig. 8. Call-blocking probability as a function of the ratio of guaranteed traffic. (a) Arrival rate is 50 calls/sec. (b) Comparison of different arrival rates.


In these simulation runs, Audio and Video require guaranteed services, while FTP and Telnet/Rlogin require ABR service. The peak rate allocation scheme, which allocates peak rates to all admitted connections, is not affected by the ratio of guaranteed traffic, because such a scheme does not distinguish between different traffic classes.

Our algorithm results in a very low call-blocking probability when the ratio of guaranteed traffic is low. Increasing the number of connections for guaranteed services reserves more network bandwidth and thus increases the call-blocking probability. Note that when the network consists entirely of guaranteed traffic (refer to Fig. 8 where the ratio is 1), the algorithm has the same call-blocking probability as the peak rate allocation scheme: if all traffic streams require guaranteed services, the mechanism allocates peak rates to them, making it essentially a peak rate allocation scheme. For other traffic mixes, the algorithm performs better.

Fig. 8(b) shows that similar results hold under different call arrival rates (5, 10, and 50 calls/sec). For clarity, only heavy mixes of guaranteed traffic are plotted. The performance of the peak rate allocation scheme is not affected by the ratio of guaranteed traffic; however, smaller call arrival rates give lower call-blocking probabilities. The performance of the call admission control is better for lower call arrival rates or lower ratios of guaranteed traffic.

A trace of bandwidth utilization was also recorded. Fig. 9 shows the bandwidth reserved (B_PRA and B_CAC) as well as the bandwidth consumed (U_PRA and U_CAC) by the two algorithms, where the simulation time is shown from time 1000 to 2500 (1000ΔT to 2500ΔT, with ΔT = 20 msec). As shown in Fig. 9, 50% of the arriving calls request guaranteed services. Generally speaking, the peak rate allocation scheme reserves far more bandwidth than does the call admission algorithm, whereas the call admission control algorithm provides more bandwidth usage. In other words, our algorithm has a higher efficiency of bandwidth utilization.

Fig. 9. A trace of bandwidth usage in a simulation run; x-axis: call arrival intervals (in units of 20 msec). B_PRA: bandwidth reserved by peak rate allocation. B_CAC: bandwidth reserved by the call admission control. U_PRA: bandwidth utilization for peak rate allocation. U_CAC: bandwidth utilization for the call admission control.


The proposed algorithm reserves approximately the same amount of bandwidth as is consumed when t < 1000ΔT. As more traffic enters the network when t > 1000ΔT, the proposed algorithm allows more bandwidth usage. For some time instances (t = 20ΔT), the peak rate allocation scheme reserves more bandwidth than the proposed algorithm (B_PRA > B_CAC). This is caused by previously accepting a large call, which then results in rejecting small calls later. Even so, the actual bandwidth usage of the algorithm is still better than that of the peak rate allocation scheme (refer to Fig. 9 where U_CAC > U_PRA).

7. Summary

In this paper, we presented a mechanism which provides cell-level QoS for various classes of traffic based on a call admission control algorithm and a scheduling scheme. The call admission control algorithm can provide both guaranteed services and general services. It is based on dynamically measuring the network load, with the goal of making the most of the network link bandwidth and reducing the call blocking probability.

The proposed scheduling algorithm is similar to weighted round-robin (WRR) in that the server serves each queue in a round-robin fashion with different amounts of service time (i.e., the time-out values). It differs from WRR in that the time-out value for each queue is dynamically computed in each service cycle according to the bandwidth allocated to each class. This time-out based scheduling scheme is an approach for integrating call level control and cell level control to satisfy QoS requirements. Analysis shows that our mechanism provides a worst-case delay bound based on the adjustable queue size and time-out value; an upper bound on the queuing delay for a single cell is obtained.

A traffic model is used to conduct simulation studies of the proposed scheme. The call admission control scheme, by dynamically measuring network loads at connection setup time, results in a low call blocking probability and a high link bandwidth utilization. We have shown that the peak rate allocation scheme is a special case of our algorithm when all network traffic requires guaranteed services (i.e., the worst case); otherwise, our algorithm performs better. Note that the schemes given in this paper are intended to be on the conservative side. As additional information on switch-architecture-specific issues and actual ATM applications becomes available, the loading level can be adjusted appropriately by changing the values of θ_CBR and θ_VBR.

References

[1] L. Cheng and H.D. Hughes, An ATM traffic model based on empirical traffic measurements, in: Proc. IEEE Global Telecommunications Conf., Singapore, 1995.

[2] L. Cheng and H.D. Hughes, On the multiplexing effects of the ATM traffic, in: Proc. IEEE Symp. on Computers and Communications, Alexandria, Egypt, 1995.

[3] L. Cheng and H.D. Hughes, Testing bursty traffic in an ATM testbed, in: Proc. 7th IEEE Workshop on Local and Metropolitan Area Networks, Marathon, FL, 1995.

[4] L. Cheng, S.-M. Chang and H. Hughes, A connection admission control algorithm based on empirical traffic measurements, in: Proc. IEEE Internat. Conf. on Communications '95, Seattle, WA, 1995.

[5] L. Cheng and H. Hughes, Call admission control issues in a wireless ATM network, in: Proc. 3rd Internat. Conf. on Computer Communications and Networks, Las Vegas, NV, 1995.

[6] L. Cheng, H. Hughes and P. Yegani, Priority scheduling schemes in an ATM switching node, in: Proc. 3rd Internat. Conf. on Computer Communications and Networks, San Francisco, CA (1994) 72-76.

[7] ATM Forum, ATM User-Network Interface Specification Version 3.0 (Prentice Hall, Englewood Cliffs, NJ, 1994).

[8] B.J. Vickers, J.B. Kim, T. Suda and D.P. Hong, Congestion control and resource management in diverse ATM environments, IEICE Trans. Commun. (November 1993).

[9] J.J. Bae and T. Suda, Survey of traffic control schemes and protocols in ATM networks, Proc. IEEE 79 (1991) 170-189.

[10] C.R. Kalmanek, H. Kanakia and S. Keshav, Rate controlled servers for very high-speed networks, in: Proc. IEEE Global Telecommunications Conf. (1990) 12-20.

[11] P. Pancha and M. Karol, Guaranteeing bandwidth and minimizing delay in packet-switched (ATM) networks, in: Proc. IEEE Global Telecommunications Conf. (1995) 1064-1070.

[12] A.K. Parekh and R.G. Gallager, A generalized processor sharing approach to flow control in integrated services networks - The single node case, in: Proc. IEEE INFOCOM '92 (1992) 915-924.

[13] J.A. Garay and I.S. Gopal, Call preemption in communication networks, in: Proc. IEEE INFOCOM '92 (1992) 1043-1050.


[14] Kraimeche, G. Pacifici, D.E. Pendarakis, G. Ramamurthy, B. Sengupta, P.A. Skelly, F. Vakil and Z. Zhang, Call admission control for TeraNet, Tech. Rept. CTR #319-92-29, Center for Telecommunications Research, Columbia University, 1993.

[15] D. Ferrari, A new call admission control method for real-time communication in an internetwork, Tech. Rept., University of California at Berkeley and International Computer Science Institute, 1992.

[16] J.M. Hyman, A.A. Lazar and G. Pacifici, Joint scheduling and admission control for ATS-based switching nodes, in: Proc. ACM SIGCOMM, 1992.

[17] J.M. Hyman, A.A. Lazar and G. Pacifici, Real-time scheduling with quality of service constraints, IEEE J. Selected Areas Commun. 9 (1992) 1052-1063.

[18] G. Ramamurthy, R.S. Dighe and D. Raychaudhuri, A UPC-based traffic control framework for ATM networks, in: Proc. IEEE Global Telecommunications Conf. '94 (1994) 600-605.

[19] G. Ramamurthy and R.S. Dighe, Performance analysis of multilevel congestion controls in BISDN, in: Proc. IEEE INFOCOM '94, Vol. 1 (1994) 1c.1.1-1c.1.9.

[20] G. Ramamurthy and R.S. Dighe, Distributed source control: A new access control for integrated broadband networks, IEEE J. Selected Areas Commun. 9 (1991).

[21] G. Ramamurthy and R.S. Dighe, A multidimensional framework for congestion control in B-ISDN, IEEE J. Selected Areas Commun. 9 (1991).

[22] D. Ferrari and D. Verma, A scheme for real-time channel establishment in wide-area networks, IEEE J. Selected Areas Commun. 8 (1990) 368-379.

[23] L. Cheng, A call admission control algorithm based on empirical traffic measurements, Ph.D. Thesis, Michigan State University, 1996.