
Fair Resource Sharing for Stateless-Core Packet-Switched Networks with Prioritization

Michael Menth and Nikolas Zeitler
University of Tuebingen, Chair of Communication Networks, Germany

Abstract—We propose activity-based congestion management (ABC) to enforce fair bandwidth sharing among users in packet-based communication networks without requiring per-user information in forwarding nodes. Activity relates the sent data rate of a user to its contracted reference rate. Activity meters monitor user traffic at edge nodes and add activity information to packets. Forwarding nodes utilize this information to preferentially drop packets with high activity values in case of congestion.

In this paper we significantly simplify the behavior of ABC's control devices: the activity meter in edge nodes and the activity AQM in forwarding nodes. Moreover, we introduce scheduling priorities for ABC, provide means to avoid starvation of non-prioritized traffic through prioritized traffic, and means to disincentivize prioritized transmission of non-realtime traffic.

We study the fairness achieved with and without ABC using packet-based simulation, mostly under challenging conditions where a heavy user wants to monopolize a bottleneck's bandwidth. We investigate bandwidth sharing for non-responsive traffic, for responsive traffic, and for a mixture of both. The effect of traffic prioritization is studied. We explore the impact of configuration parameters and recommend meaningful system parameters. We illustrate their impact on system dynamics and show that ABC expedites sporadic uploads, which improves the quality of experience for interactive applications. Finally, we compare the performance of ABC with that of CSFQ, whose objective is the same as that of ABC but which differs significantly with regard to the signalled information and the algorithmic approach.

Summarizing, ABC creates an ecosystem where users can maximize their throughput if they do not exceed their fair share on the bottleneck link. This incentivizes the usage of congestion-controlled transport protocols. Moreover, ABC protects traffic from users applying congestion control against traffic overload from users not leveraging congestion control, even if the latter send prioritized traffic.

Index Terms—Packet-switched networks, congestion management, fairness, TCP

I. INTRODUCTION

Future mobile networks will consist of many small cells, each of them capable of providing large bandwidths to users [1], [2]. This makes appropriate overprovisioning of meshed backhaul networks difficult and costly, so that means are needed for congestion management in case of overload. Similar problems arise in multi-tenant datacenter networks with lots of applications generating unknown traffic patterns and overprovisioned core switches, or in Internet service providers' residential access networks.

The authors acknowledge the funding by the Deutsche Forschungsgemeinschaft (DFG) under grant ME2727/2-1. The authors alone are responsible for the content of the paper.

In such scenarios, scalable and quality-of-service-aware bandwidth sharing is desired in case of congestion. That means, network nodes should remain simple, not require per-user contexts, and provide means for service differentiation.

We consider congestion management that detects congestion in a network and takes actions to resolve it. Heavy users may transmit unresponsive flows or lots of congestion-controlled flows so that they achieve significantly more throughput than light users in the absence of fairness-aware congestion management. The latter requires that appropriate packets are dropped to protect the traffic of light users against overload created by heavy users. Active queue management (AQM) is a simple congestion management approach. However, without additional information, AQM reduces queue length and queueing time, but it cannot prevent heavy users from monopolizing a bottleneck link's bandwidth. This significantly compromises the quality of experience of light users. More sophisticated congestion management variants guarantee some fairness among users. Some ISPs apply back pressure on certain users, traffic, or applications [3]–[5]. Thereby, better service can be delivered to most customers and reinvestments in transmission capacity can be delayed, resulting in cost savings. Scheduling mechanisms like Weighted Fair Queuing (WFQ) and its variants are standard methods to enforce fairness in packet-based networks, but they need to associate packets with users, which requires per-user states and signalling in core nodes and introduces complexity.

In the IETF, several efforts have been made in recent years to better manage congestion in the Internet. Congestion control for real-time traffic sent over RTP (Real-time Transport Protocol) has been proposed in the RMCAT (RTP Media Congestion Avoidance Techniques) working group [6]. LEDBAT (Low Extra Delay Background Transport) [7] has been standardized for the transport of background traffic to give way to normal traffic in case of congestion. Recently, novel AQM mechanisms have been described to cope with variable-capacity links [8], [9] as they occur in wireless networks. Most notably, congestion exposure (ConEx) was introduced to expose the congestion caused by a flow in its IP header to provide more information for congestion management and to improve fairness among users [10]. One of its goals was to incentivize the application of congestion-sensitive protocols like LEDBAT for background traffic. Mobile access networks [11], residential access networks [12], and datacenters [13] were considered as use cases.


In [14] we proposed activity-based congestion management (ABC). Activity meters at the network edge measure the users' activity and tag it onto their packets. Forwarding nodes use that information to preferentially drop traffic from more active users in case of congestion. With ABC control, users can maximize their throughput by sending traffic at their fair share with regard to the capacity of the bottleneck link. Thus, ABC creates an ecosystem that gives users incentives to apply congestion control. ABC was essentially developed because ConEx-based congestion management turned out to be of limited effectiveness in simulation studies. Therefore, ABC may be applied in the same use cases that were envisaged for ConEx.

In this work, we significantly simplify ABC, deepen its understanding with a comprehensive performance evaluation, and add support for traffic prioritization. The latter is designed such that the transmission of prioritized traffic reduces a user's fair share to disincentivize prioritization of non-realtime or bulk traffic. Moreover, a simple extension of a static priority scheduler is proposed to avoid starvation of best effort traffic through prioritized traffic. CSFQ is a control method similar to ABC, but it differs significantly in the signalled information and the algorithmic approach. Therefore, we compare its performance to that of ABC and analyze why ABC achieves better fairness under challenging conditions.

The remainder of the paper is structured as follows. Section II gives an overview of congestion management approaches. In Section III we propose the new ABC design including traffic prioritization. In Section IV we use packet-based simulation to evaluate the performance of ABC, in particular fairness aspects, with emphasis on traffic prioritization. We compare the performance of CSFQ with that of ABC in Section V. Section VI draws conclusions and gives an outlook on future work.

II. RELATED WORK

An excellent and extensive overview of congestion management is compiled in [4]. Congestion management techniques comprise packet classification, admission control and resource reservation, caching, rate control and traffic shaping, routing and traffic engineering, packet dropping and scheduling, and many more technology-specific approaches. Moreover, multiple examples of congestion management practices applied by ISPs are described.

The whitepaper in [5] proposes that congestion management should be applied only in the presence of real congestion and discusses several congestion detection methods that are based on time of day, concurrent user thresholds, bandwidth thresholds, and subscriber quality of experience. Application priorities and policy enforcement, e.g., prioritization, scheduling mechanisms, rate limiting, etc., are discussed as means to manage traffic when congestion is detected.

Congestion management techniques may be applied per user or per application. The latter requires deep packet inspection and expedites or suppresses the traffic of certain applications. This is technically demanding, not always possible, e.g., with IPsec traffic, and widely undesired as it obviously violates network neutrality.

Rate limiting is simple and usually implemented by token-bucket-based algorithms. It reduces a user's traffic rate to an upper bound regardless of the network state. Rate limiting may be applied generally or only to users that have recently been identified as heavy users, e.g., by having exceeded certain data volumes, and only for a limited time. Monthly quotas are common for many subscriber contracts. If a user exceeds his quota, his rate is throttled to an upper bound which is rather low. This is a drastic and ineffective means. As long as a heavy user has quota available, he may significantly contribute to congestion, and if his quota is consumed, he is not even able to transmit traffic in the absence of congestion.
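
For illustration, such a token-bucket rate limiter can be sketched in a few lines of Python; the class and parameter names below are ours and do not refer to any particular product:

    import time

    class TokenBucket:
        """Minimal token-bucket rate limiter: rate in bytes/s, burst in bytes."""

        def __init__(self, rate, burst):
            self.rate = rate            # token refill rate (bytes per second)
            self.burst = burst          # bucket capacity (bytes)
            self.tokens = burst         # start with a full bucket
            self.last = time.monotonic()

        def allow(self, packet_len):
            now = time.monotonic()
            # refill tokens proportionally to the elapsed time, capped at the burst size
            self.tokens = min(self.burst, self.tokens + self.rate * (now - self.last))
            self.last = now
            if self.tokens >= packet_len:
                self.tokens -= packet_len
                return True             # packet conforms to the rate limit
            return False                # packet exceeds the rate limit, drop or mark it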

Comcast’s congestion management system [15] identifies heavy users who contribute too much traffic within a 15-minute measurement interval. It further monitors upstream and downstream ports of cable modem termination systems (CMTSs) to detect near-congestion states. Under such conditions, the congestion management system reduces the priority of heavy user traffic to “best effort” (BE) while other traffic is served as “priority best effort” (PBE). The latter is scheduled before BE so that only heavy users suffer from lower throughput and longer delays.

Scheduling mechanisms such as WFQ, possibly approximated by Deficit Round Robin (DRR) [16], reduce a heavy user’s throughput just to his fair share and only when needed. A variant of DRR allows queues to save a limited deficit so that they can send faster after a short idle time [17]. However, such scheduling algorithms lead to rather complex networking solutions as they require per-user queues on all potential bottleneck links, as well as traffic classification, which raises scalability concerns. Moreover, signalling may be needed to configure scheduling algorithms per user and to classify traffic on potential bottlenecks.

Seawall [18] is a network bandwidth allocation scheme for data centers that divides network capacity based on an administrator-specified policy. It is based on congestion-controlled tunnels and provides strong performance isolation.

AQM mechanisms in routers and switches occasionally drop packets before tail drops occur. The survey in [19] gives a comprehensive overview of the large set of existing mechanisms. RED [20] was one of the first AQMs and is implemented on many routers. CoDel [21] and PIE [22] were recently discussed in the IETF AQM working group to allow temporary bursts but to avoid a standing queue and bufferbloat, in particular for links with time-varying capacity. Another AQM [23] resembles ABC in that it protects TCP traffic against non-responsive traffic, but it does not address per-user fairness. With explicit congestion notification (ECN) [24], ECN-enabled TCP senders mark packets appropriately and AQM mechanisms re-mark these packets as “congestion experienced” (CE) instead of dropping them. Thereby, ECN makes congestion visible in the network between the congested element and the receiver. Upon receipt of a CE signal, the TCP receiver signals an ECN echo (ECE) to the sender, which then reduces its transmission rate as it would after detecting packet loss.


Briscoe argued that per-flow fairness is not the right objective in the Internet [25] and proposed ReFeedback and congestion policing (CP) to implement per-user fairness [26]–[28]. The congestion exposure (ConEx) protocol is currently being standardized by the IETF and implements the core idea of ReFeedback. ConEx leverages ECN to learn about congestion on a flow’s path. A TCP sender sets, for any received ECE signal, a ConEx mark in the IP header of subsequent packets so that any node on a flow’s path can observe the flow’s contribution to congestion. This requires modifications to TCP senders and receivers [29]. As the network cannot trust that users insert sufficient ConEx marks into packet streams, a per-flow audit near the receiver should compare CE and ConEx signals to detect cheating flows and sanction them [10].

We briefly explain ConEx-based CP, which is the driving application for ConEx. A congestion policer meters only the ConEx-marked packets of a user, and if they exceed the user’s configured congestion allowance rate and burst tolerance, the policer drops some of the user’s traffic. Thereby, the policer penalizes heavy users causing a lot of congestion and spares light users whose ConEx-marked traffic does not exceed their congestion allowances. The objective of this differentiated treatment is fairer bandwidth sharing among users in case of congestion. In [3] ConEx is qualitatively compared with existing congestion management approaches, and use cases are discussed which are further elaborated in [11]–[13]. There is only little insight into the performance of ConEx-based CP. Wagner [30] applied CP to a single queue. Traffic of different users is classified and separately policed before entering the queue. The policers leverage congestion information of the local queue rather than ConEx signals in data packets. This approach may be used for fair bandwidth allocation on a local queue. However, it requires per-user state so that the major advantage over scheduling is lost. ConEx Lite was developed for mobile networks in [31]. Instead of requiring users to apply ConEx, traffic is tunneled and CP is performed using congestion feedback from the tunnel. Moreover, a novel AQM presented in [32] is based on the CP principle.

ABC was inspired by ConEx-based CP but takes a different approach due to lessons learned. We have simulated ConEx-based CP and gained two insights. First, in case of congestion, policers may penalize only heavy users, but light users are also throttled by the receipt of ECE signals. Second, appropriate congestion allowances are difficult to configure. Low congestion allowances cause low utilization in case of only a few users. Large congestion allowances cannot enforce fair capacity sharing. Therefore, the CP variant we considered benefits from leveraging information about bottleneck bandwidths and the number of current users for the configuration of policers. However, this requires dynamic configuration, which makes a system complex. These shortcomings are avoided in ABC.

With pre-congestion notification (PCN), meters and markers within a single domain re-mark high-priority packets if a near-congestion state is reached. This information is used at the edge of a DiffServ domain for admission control and flow termination [33]. While PCN meters traffic inside the network and drops packets from non-admitted or torn-down high-priority flows at the network edge, ABC meters traffic at the edge and drops packets inside the network. Thus, although PCN and ABC look similar at first sight, they are significantly different regarding operation and objectives.

Core-Stateless Fair Queueing (CSFQ) was the first approach to enforce fair resource sharing per flow without per-flow states in the network core [34], [35]. It is designed for a domain context. Edge nodes measure the time-dependent rates of flows and record these values as labels in the packet headers. Edge and core nodes estimate the time-dependent rates of arrived and accepted traffic, A and F, and estimate on that basis, together with the known link bandwidth, the fair per-flow traffic rate α. They drop packets with a probability of

max(0, 1 − α / p.label)

where p.label is the label of a packet p. As a consequence, packets from flows with a rate larger than the estimated fair rate α are dropped so that their rate is reduced to approximately α.
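
For illustration, this drop decision can be sketched as follows in Python; the sketch assumes the fair-rate estimate α and the packet label are already available and omits CSFQ's estimation of α from the arrival and acceptance rates A and F:

    import random

    def csfq_accept(packet_label, alpha):
        """Probabilistic CSFQ drop decision: drop with probability max(0, 1 - alpha/label).

        packet_label: rate label carried in the packet header (same unit as alpha)
        alpha:        current estimate of the fair per-flow rate
        """
        drop_prob = max(0.0, 1.0 - alpha / packet_label)
        return random.random() >= drop_prob   # True: accept the packet, False: drop it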

Many improvements and variants of CSFQ have been proposed, e.g., the work in [36], [37]. The core-stateless scheduling method in [38] guarantees throughput for reserved flows. It may be applied for prioritized realtime flows in an Integrated Services (IntServ) context [39]. It is based on similar ideas as CSFQ. ABC is unaware of reservations and rather fits in a Differentiated Services context [40]. The Enhanced CSFQ with Multiple Queue Priority Scheduler provides increased rates for realtime flows by enqueuing their packets into multiple queues that are served round-robin [41]. It may create packet reordering for unequal packet sizes. The weighted variant of CSFQ [34], [35] already supports unequal flow rates by the application of weights. ABC as presented in this paper provides a delay advantage with possibly adapted throughput for realtime flows, which is a different form of priority.

Rainbow Fair Queueing (RFQ) is a major variant of CSFQ [42], [43]. It assigns packets of a flow to different layers (colors) such that the “highest” color of a flow indicates its approximate rate. A simplified example: a packet rate of up to 100 kb/s is marked 0, a packet rate of up to 900 kb/s is marked 1, a packet rate of up to 9 Mb/s is marked 2, a packet rate of up to 90 Mb/s is marked 3, etc. Then the largest color of a 1 Mb/s flow is 1 and the largest color of a 10 Mb/s flow is 2. Edge and core nodes drop packets if the buffer overflows or if packets have a color larger than a certain color threshold. This color threshold is continuously updated, i.e., it is decreased if the queue size exceeds another fixed threshold, and it is increased when the link has been underutilized for a while. Simulation results show that RFQ improves the performance of CSFQ with regard to fairness. Due to the integral number of layers, maintaining a high resource utilization is a problem as edge and core nodes drop all packets with a color larger than the threshold.
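
Reading the rates of this example as cumulative layer boundaries (which is consistent with the largest colors stated for the 1 Mb/s and 10 Mb/s flows), the color assignment can be sketched as follows; the boundary values are only the illustrative ones from the example above:

    # illustrative cumulative layer boundaries from the example above (b/s -> color)
    BOUNDARIES_BPS = [100e3, 1e6, 10e6, 100e6]

    def highest_color(flow_rate_bps):
        """Largest color that packets of a flow with the given rate receive."""
        for color, bound in enumerate(BOUNDARIES_BPS):
            if flow_rate_bps <= bound:
                return color
        return len(BOUNDARIES_BPS)          # rates beyond the last boundary

    assert highest_color(1e6) == 1          # 1 Mb/s flow, as stated in the text
    assert highest_color(10e6) == 2         # 10 Mb/s flow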

“Per Packet Value” (PPV) [44] can be considered an improvement of RFQ. Packet markers at edge nodes add “per-packet values” to packets similar to RFQ.

3

Page 4: Fair Resource Sharing for Stateless-Core Packet-Switched ...menth/... · than light users in the absence offairness-awarecongestion management. The latter requires that appropriate

However, with RFQ, priority decreases with increasing color, while with PPV, priority increases with increasing per-packet value. AQMs in edge and core nodes forward only the packets with the highest per-packet values in case of congestion. That means, an incoming packet replaces the packet with the lowest per-packet value in the queue if the buffer is full. Sophisticated ideas on packet marking are presented that address the utility of various flows, users, or applications. Moreover, an extension towards different delay classes is sketched. The paper [44] provides only a few performance results, and none for the different delay classes.

As the simple AQM in [44] is a challenge for implementation, the authors of [45] substitute it with PVPIE, which is an extension of the PIE AQM [22]. They replicate the simulation results of [44] to show that PPV also works well with PVPIE. PIE differs from many other AQMs in that it copes with varying bandwidths. Simulation results show that PPV with PVPIE also copes with varying bandwidths. The paper points out that setting appropriate parameters for PVPIE is crucial.

ABC was initially proposed in [14], but it is significantly simplified and extended in Section III of this paper. It differs from the above-mentioned approaches by carrying in the packet header an activity value, which is the logarithm of the measured rate relative to a contracted reference rate. The novel activity AQM of ABC essentially drops a packet in case of congestion if the current queue length reaches a threshold that depends both on the packet's activity value and on the average observed activity. The former ensures that low-activity traffic enjoys a lower drop precedence than high-activity traffic, giving priority to low-activity flows without complex queue management. The latter ensures that traffic is dropped only if the queue is sufficiently long, no matter whether the traffic consists of lots of low-activity flows or only a few high-activity flows. We propose two different activity meters: the normal activity meter and the fair activity meter. The normal activity meter is similar to the way traffic is marked with CSFQ, as the overall rate of a flow determines the packet values. The fair activity meter is similar to the way traffic is marked with RFQ or PPV, as small traffic-rate subsets of high-bitrate flows are marked with low colors or low activity values, respectively. Furthermore, we propose support for scheduling (delay) priority for ABC and provide simulation results.

III. ABC DESIGN

We first give an overview of ABC. Then, we explain the operation of the activity meters located in edge nodes and the operation of the activity-based AQMs located in edge and core nodes, pointing out the differences to the original ABC design [14]. We suggest how priority scheduling may be added. Finally, we discuss deployment aspects.

A. Overview

ABC enforces fair resource sharing within an ABC domain. The concept is illustrated in Figure 1. An ABC domain is confined by edge nodes with activity meters. They meter the activity of traffic aggregates (aka users) and mark it in every packet header. Both edge nodes and core nodes are forwarding nodes. They run activity AQMs per interface that drop traffic in case of congestion. Activity AQMs leverage the current queue length and the packets' activity to preferentially drop traffic from more active users. Therefore, ABC does not require per-flow or per-user state or signalling in the core network.

Fig. 1. Activity metering and marking is performed only at the network edge while activity AQMs may be applied on all interfaces within an ABC domain.

An activity meter is configured with a reference rate R_r. The activity meter measures the current traffic rate R_m of the monitored traffic aggregate with the arrival of each packet, computes the activity A = log_2(R_m / R_r), and adds it to the packet header.

An activity AQM accounts for the average activity A_avg of accepted packets. When the activity AQM receives a packet, it reads the activity A from the packet header and decides whether to accept or drop the packet based on A, A_avg, and the current queue length Q.

In the following, we explain the activity meter and the activity AQM in more detail.

B. Activity Meter

The activity meter is configured with a reference rate R_r and measures a user's current traffic rate R_m to compute activity values. In this work, we use the TDRM-UTEMA method [46] for packet-based computation of R_m. The measurement starts at time t_0 with a weighted sum S = 0 of packet sizes and a last update instant t_last = t_0. At the transmission of a new packet at time t, the weighted sum S is updated with the length L of the packet. A weighted measurement time T is derived, and the measured rate R_m is calculated as the quotient of the weighted sum S and the weighted time T. Thus, the following computations are performed for each packet arrival at time t:

S = e^{−(t − t_last)/M_AM} · S + L    (1)

T = M_AM · (1 − e^{−(t − t_0)/M_AM})    (2)

R_m = S / T    (3)

t_last = t    (4)

When the rates of several flows are measured, separate variables S, T, and t_last are needed for every flow.

4

Page 5: Fair Resource Sharing for Stateless-Core Packet-Switched ...menth/... · than light users in the absence offairness-awarecongestion management. The latter requires that appropriate

The advantage of this measurement approach is that it depends only on a single parameter, the activity meter memory M_AM. The activity meters of all users should be configured with the same memory M_AM.

There are approximations for that method to avoid the per-packet computation of the exponential function [46]. Finally, the activity of the packet is

A = log_2(R_m / R_r)    (5)

and this value is recorded in the packet header.

While this activity meter can produce negative activity values if the current measured rate R_m is lower than the user's reference rate R_r, the token-bucket-based activity meter presented in [14] returns only activity values of at most zero. The new approach allows scaling the reference rates of all users by the same factor without changing ABC's behavior (see Section IV-E2). This makes the new ABC variant easier to configure.
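
As an illustration of Equations (1)–(5), a per-packet activity meter can be sketched as follows in Python; the names are ours, the per-packet exponential could be replaced by the approximations from [46], and the optional accounting factor anticipates the PHB-specific weighting of Section III-D1:

    import math

    class ActivityMeter:
        """Sketch of a per-user activity meter: TDRM-UTEMA rate measurement, Eq. (1)-(5)."""

        def __init__(self, reference_rate, memory, t0):
            self.R_r = reference_rate    # contracted reference rate R_r (bit/s)
            self.M_AM = memory           # activity meter memory M_AM (s)
            self.t0 = t0                 # start of the measurement
            self.S = 0.0                 # weighted sum of packet sizes
            self.t_last = t0             # last update instant

        def activity(self, t, packet_len_bits, accounting_factor=1.0):
            """Return the activity A to be written into the packet header (requires t > t0)."""
            # Eq. (1): age the weighted sum and add the (PHB-weighted) packet length
            self.S = math.exp(-(t - self.t_last) / self.M_AM) * self.S \
                     + accounting_factor * packet_len_bits
            # Eq. (2): weighted measurement time
            T = self.M_AM * (1.0 - math.exp(-(t - self.t0) / self.M_AM))
            # Eq. (3) and (4): measured rate and update instant
            R_m = self.S / T
            self.t_last = t
            # Eq. (5): activity
            return math.log2(R_m / self.R_r)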

C. Activity AQM

ABC forwarding nodes compute a time-dependent average activity A_avg from the activities A of accepted packets. The activity AQM determines, based on this average A_avg, the activity A of a received packet, and the current queue length Q, whether the received packet is accepted or dropped. In the following we explain the activity averager that calculates A_avg, the acceptance algorithm, and the differences to the original ABC design in [14].

1) Activity Averager: The activity averager uses the UTEMA method [46] to compute the average activity A_avg from the activities A of accepted packets. The algorithm starts at time t_0 with a weighted sum S = 0, a weighted number N = 0, and a last update instant t_last = t_0. If a new packet is received and accepted at time t, the activity averager updates these values by

S = e^{−(t − t_last)/M_AA} · S + A    (6)

N = e^{−(t − t_last)/M_AA} · N + 1    (7)

A_avg = S / N    (8)

t_last = t    (9)

An advantage of that method is its dependence on the single memory parameter M_AA. The activity averagers of all forwarding nodes should be configured with the same M_AA, but that value may differ from the activity meter memory M_AM. Moreover, M_AA should be smaller than M_AM to ensure that A_avg reflects current activities.
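
A matching sketch of the activity averager in Equations (6)–(9), again as an illustration rather than a reference implementation:

    import math

    class ActivityAverager:
        """Sketch of the UTEMA average A_avg over accepted packets, Eq. (6)-(9)."""

        def __init__(self, memory, t0):
            self.M_AA = memory     # activity averager memory M_AA (s)
            self.S = 0.0           # weighted sum of activities
            self.N = 0.0           # weighted number of accepted packets
            self.t_last = t0

        def update(self, t, activity):
            """Update with the activity of an accepted packet; returns the new A_avg."""
            decay = math.exp(-(t - self.t_last) / self.M_AA)
            self.S = decay * self.S + activity     # Eq. (6)
            self.N = decay * self.N + 1.0          # Eq. (7)
            self.t_last = t                        # Eq. (9)
            return self.S / self.N                 # Eq. (8)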

2) Acceptance Algorithm: The objective of the activity AQM is to preferentially drop packets with high activity values A in case of congestion. Here, "high" is relative to the average activity A_avg. To that end, we suggest a drop threshold on the queue length Q which is a function of a packet's activity A:

T_drop(A) = max(Q_min, Q_base − γ · (A − A_avg))    (10)

If a packet with activity A arrives in the presence of a queue length Q, the packet is dropped if the queue length is not smaller than the drop threshold T_drop(A). The drop threshold essentially shortens the queue size for packets with an activity that is relatively large compared to most other packets observed on the link. The differentiation factor γ controls this size reduction, and the parameter Q_min avoids packet loss in case of short queue lengths. The activity AQM may be adapted to time-varying links by substituting the queue-length-based quantities and thresholds, i.e., the queue size as well as Q_min, Q_base, and γ, with expected queueing delay equivalents.
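
The resulting per-packet decision is compact; the following sketch uses queue lengths in packets and the parameter values recommended later in the evaluation (Q_min = 12 pkts, Q_base = 20 pkts, γ = 16 pkts):

    def accept(activity, avg_activity, queue_len_pkts,
               q_min=12, q_base=20, gamma=16):
        """Activity AQM acceptance test of Eq. (10): accept the packet unless the
        current queue length has reached the activity-dependent drop threshold."""
        t_drop = max(q_min, q_base - gamma * (activity - avg_activity))
        return queue_len_pkts < t_drop      # drop if Q >= T_drop(A)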

3) Comparison to Previous Work: The original activity AQM in [14] was significantly different: (1) A simple exponential moving average (EMA) [46] was used as activity averager. It was configured without the notion of a memory, which makes it difficult to configure. (2) The average activity A_avg was computed based on the activities of all received packets instead of only accepted packets. (3) The acceptance algorithm modified the output of a probability-based AQM using the activity difference (A − A_avg) and the current queue length Q. The original activity AQM was rather difficult to understand, had more parameters, and essentially approximated the new method.

With the old ABC design, an excessive rate of high-activity traffic which cannot be accommodated by the link could cause the average activity A_avg to be close to that high activity. As a result, the high-activity traffic was dropped with too little probability, which is a particular problem when high-activity traffic is prioritized. The same effect cannot occur with the new activity averager and acceptance algorithm.

D. Extension to Traffic Classes

We extend ABC to support traffic classes with different priorities. In this work, we consider only two per-hop behaviours (PHBs, aka traffic classes): Best Effort (BE) and Expedited Forwarding (EF) [47]. However, the presented principle can be extended to more than two PHBs. It includes an adaptation of the activity meter in edge nodes and an adaptation of the buffer management and packet scheduling in forwarding nodes.

1) Adaptation of Activity Meters: The activity meter in edge nodes may be extended to traffic classes by multiplying the packet length L by a PHB-specific accounting factor a_PHB in Equation (1). As an example, a_BE = 1 and a_EF = 3 may be used for BE and EF. This is a means to disincentivize the use of EF for bandwidth-intensive services that do not require low delay. This feature is important as EF traffic should constitute only a minor fraction of the overall traffic to ensure short queueing delay for prioritized traffic and to avoid starvation of low-priority traffic.

2) Adaptation of Buffer Management and Packet Scheduling: We suggest that traffic waiting in a forwarding node is stored in PHB-specific queues. This requires an adaptation of the acceptance algorithm to multiple queues and the introduction of a special priority scheduling mechanism.

We integrate multiple queues into ABC by using the sum of all queue lengths as input to the activity AQM in Section III-C2.

5

Page 6: Fair Resource Sharing for Stateless-Core Packet-Switched ...menth/... · than light users in the absence offairness-awarecongestion management. The latter requires that appropriate

In our special case, this is Q = Q_BE + Q_EF, where Q_BE and Q_EF are the lengths of the queues for BE and EF traffic.

We first consider simple priority scheduling for serving the queues. It leverages two first-in-first-out queues, one for high-priority traffic and one for low-priority traffic, and serves the high-priority queue strictly before the low-priority queue. In such a system, BE traffic may starve in the presence of a large rate of EF traffic, which continuously keeps the EF queue filled. To avoid that problem, we apply MEDF scheduling [48]. That means, a packet p is time-stamped with p.timestamp before being enqueued in its PHB-specific queue. On dequeue, the packet with the lowest p.timestamp + ∆_PHB is chosen, whereby ∆_PHB is a PHB-specific time offset. In our study, we use ∆_EF = 0 and ∆_BE = δ_BE · (Q_max · MTU) / C_b with δ_BE = 2.
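
As an illustration, the MEDF dequeue rule with two FIFO queues can be sketched as follows in Python; the offset values correspond to ∆_EF = 0 and ∆_BE = δ_BE · (Q_max · MTU) / C_b with δ_BE = 2, Q_max = 24 pkts, and C_b = 10 Mb/s as used later in the evaluation:

    from collections import deque

    MTU_BITS = 1500 * 8
    Q_MAX = 24                                    # bottleneck buffer size (pkts)
    C_B = 10e6                                    # bottleneck capacity (bit/s)
    OFFSET = {"EF": 0.0,                          # Delta_EF
              "BE": 2 * Q_MAX * MTU_BITS / C_B}   # Delta_BE with delta_BE = 2

    queues = {"EF": deque(), "BE": deque()}       # one FIFO queue per PHB

    def enqueue(phb, packet, now):
        packet["timestamp"] = now                 # time-stamp on enqueue
        queues[phb].append(packet)

    def dequeue():
        """Serve the head-of-line packet with the smallest timestamp + PHB offset."""
        candidates = [(q[0]["timestamp"] + OFFSET[phb], phb)
                      for phb, q in queues.items() if q]
        if not candidates:
            return None
        _, phb = min(candidates)
        return queues[phb].popleft()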

E. Fair Activity Meter

The normal activity meter defined in Section III-B determines the activity of an entire traffic aggregate based on its rate. As a result, the traffic of an aggregate with a larger rate has a larger activity than that of a traffic aggregate with a lower rate, and therefore, all its traffic receives worse treatment by an activity AQM. To avoid that, we propose a fair activity meter as an alternative to the normal activity meter. It multiplies the measured rate R_m with a fractional random number x ∈ (0,1) before evaluating Equation (5). This leads to heterogeneous activity values within an aggregate and ensures that equal-rate subsets of traffic aggregates of different sizes exhibit similar activity values. As a consequence, equal-rate subsets of these traffic aggregates receive equal treatment by activity AQMs. The fair activity meter resembles the marking in RFQ [42], [43] and PPV [44].
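
A minimal sketch of this fair activity metering step, building on the normal meter above:

    import math, random

    def fair_activity(R_m, R_r):
        """Fair activity meter: scale the measured rate by a random x in (0,1)
        before applying Eq. (5), so that equal-rate subsets of differently sized
        aggregates obtain similar activity values."""
        x = 1.0 - random.random()          # uniform in (0, 1], avoids log(0)
        return math.log2(x * R_m / R_r)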

We show in Section IV-H that the fair activity meter leads to fairer sharing for non-responsive traffic than the normal activity meter. However, it also degrades the fairness for responsive traffic and limits the protection of responsive traffic against non-responsive traffic. Therefore, we consider it less attractive and use the normal activity meter as the default in our study.

F. Deployment Aspects

ABC requires the following network-wide parameters: the memory M_AM for activity meters, the memory M_AA for activity averagers, and Q_min, Q_base, and γ for the configuration of activity AQMs. The priority scheduler is configured with PHB-specific offsets ∆_PHB. Moreover, every user needs to be configured with a reference rate R_r. These values may be user-specific for service differentiation.

The activity information may be encoded, e.g., into IPv6 extension headers or in additional headers below IP. The latter seems doable in software-defined networks, e.g., on the basis of P4 [49]. As fraudulent users may set too low activity values, we propose the use of ABC only for trusted networks where the operator has control over the access devices performing activity metering and marking and over the transport devices.

We discuss the placement of activity meters within an ABC domain. Upstream traffic sent by a user should be activity-metered by a single entity close to the user. In contrast, the user's downstream traffic needs to be activity-metered when entering the ABC domain. If the traffic enters the ABC domain only through a single ingress node, all the traffic of the user can be activity-metered by a single entity. If the traffic enters through different ingress nodes, all the traffic of the user may be forwarded to the user's activity meter before traversing any bottleneck links. Thus, the activity meter may be considered a network service function. As an alternative, the traffic may be activity-metered by distributed activity meters. These activity meters should be configured with a partition of the user's reference rate R_r. A similar problem is distributed policing, which has been demonstrated in [50]. Further details need to be discussed in the context of specific use cases, which may be similar to those of ConEx-based CP [12], [13], [51].

ABC is designed to enforce per-user fairness in a network without per-user states. However, it may also be leveraged for enforcing rate fairness among multiple users served by a single interface. WFQ seems the natural solution to that problem. However, heavy users sending more traffic obtain a larger capacity share than light users sending less traffic because with WFQ a user collects transmission opportunities only in the presence of waiting traffic. In case of many users and a small queue size, only a few users can have waiting packets at a time. Under such conditions, light users are heavily disadvantaged if they maintain only a single TCP connection [17]. With ABC, activity metering may be applied to traffic on a per-user basis before it enters the queue, and acceptance into the queue may be determined by an activity AQM. Thereby, per-user fairness similar to that reported in Section IV is expected for TCP users.

IV. PERFORMANCE EVALUATION OF ABC

In this section, we investigate bandwidth sharing with ABC using stochastic discrete-event simulation. We first explain the evaluation methodology and performance metrics. Then, we evaluate ABC for increasingly unequal user load, for different traffic types, and without and with prioritization. We study the impact of system parameters to provide recommendations for configuration: the reference rate R_r, the activity AQM parameters Q_min, Q_base, and γ, and the memories M_AM and M_AA for the activity meters in edge nodes and the activity averagers in forwarding nodes. We illustrate that ABC can expedite sporadic uploads in the presence of background traffic. Finally, we compare the fairness obtained with the fair activity meter and the normal activity meter.

A. Simulation Methodology

We present the simulation framework and the traffic sources used, explain the network and experiment design, introduce performance metrics, and report on simulation accuracy.

1) Simulator and Traffic Sources: We performed packet-based simulation with ns3 in version 3.26. It is a “discrete-event network simulator for Internet systems” [52].

6

Page 7: Fair Resource Sharing for Stateless-Core Packet-Switched ...menth/... · than light users in the absence offairness-awarecongestion management. The latter requires that appropriate

A maximum transfer unit (MTU) of 1500 bytes is supported. Packets have a 2 bytes PPP header and a 20 bytes IP header. UDP packets carry an 8 bytes UDP header and TCP packets a 32 bytes TCP header, which includes 12 bytes of options. Thus, the maximum segment size (MSS) for UDP is set to 1470 bytes and the MSS for TCP is set to 1446 bytes. We use the TCP New Reno implementation provided by ns3 for simulation of TCP traffic. UDP traffic sources send Poisson traffic with maximum-size UDP packets and exponentially distributed inter-arrival times T_IAT. Thus, the transmission rate of a UDP source is

R_t = (MSS + 30 bytes) / E[T_IAT]

with an average inter-arrival time E[T_IAT].

In previous work [14], we considered constant bit rate (CBR) traffic instead of Poisson traffic for UDP sources. However, this leads to simulation artifacts in some scenarios without ABC due to the non-realistic exactness in simulations. The rate of Poisson traffic slightly varies over time. Its coefficient of variation c^I_var depends on the measurement interval I and can be computed by

c^I_var(R^i_t) = ((R^i_t / L) · I)^{−1/2}.

For I = 1 s and R^i_t = 1 Mb/s (R^i_t = 10 Mb/s) we obtain c^I_var(R^i_t) = 0.11 (c^I_var(R^i_t) = 0.03). Thus, the resulting rate variations are rather small.
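
For instance, the first value follows from I = 1 s and maximum-size packets of length L = 1500 bytes = 12000 bits:

(R^i_t / L) · I = (10^6 b/s / 12000 b) · 1 s ≈ 83.3,   c^I_var(R^i_t) = 83.3^{−1/2} ≈ 0.11.

For R^i_t = 10 Mb/s the same computation gives 833.3^{−1/2} ≈ 0.03.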

Fig. 2. Network and experiment design.

2) Network Design: We simulate the network depicted in Figure 2. Multiple users communicate with a server via an edge node, an access link, and a bottleneck link, yielding a one-sided dumbbell topology. All access links have the same propagation delay D_a and bandwidth C_a. The bottleneck link is shared by all users and has propagation delay D_b and capacity C_b. Thus, a lower bound for the round trip time (RTT) is 2·(D_a + D_b).

On the fast access links, we apply drop-tail queues with a buffer size of 50 packets, which are hardly ever filled. The bottleneck link is implemented with a packet queue, i.e., every packet requires one slot in the queue independently of its size. Therefore, queue lengths and thresholds are given in packets (pkts). It has a transmission buffer of up to Q_max = 24 pkts. In case of ABC, the bottleneck link features an activity AQM with parameters Q_min = 12 pkts, Q_base = 20 pkts, γ = 16 pkts, and a memory of M_AA = 0.3 s for its activity averager.

3) Experiment Design: Figure 2 shows that users are divided into a user group 0 (UG0) and a user group 1 (UG1) in our experiments. If the experiments require a distinction between heavy and light users, the heavy users (HUs) are grouped in UG0 and the light users (LUs) in UG1. The user groups UG_i have u_i users whose traffic is monitored by separate activity meters which are all configured with the same reference rate R^i_r. All users of a user group communicate in the same way with the server. That means, in case of TCP communication, each user in UG_i has f_i TCP flows. In case of non-responsive traffic, a user has only a single UDP flow sending Poisson traffic at a transmission rate of R^i_t.

ABC's intention is to share bandwidth in case of congestion proportionally to the users' reference rates. We study to which degree this objective can be achieved with different traffic types. To consider challenging conditions, we model heavy users competing with light users.

To study non-responsive traffic, we simulate a single user per user group UG_i with a single flow sending Poisson traffic at rate R^i_t. If two non-responsive users compete for the bandwidth of a link without ABC, the bandwidth is shared proportionally to the transmission rates. This yields a configured unfairness of R^0_t / R^1_t.

To study responsive traffic, we simulate multiple users with various numbers of saturated TCP flows, i.e., flows that always have data to send. More specifically, we simulate one heavy and one or multiple light TCP users with f_0 and f_1 TCP flows (f_0 ≥ f_1), respectively. TCP flows share the capacity of a bottleneck link about equally. Thus, if heavy and light users compete for the bandwidth of a link without ABC, the bandwidth is shared proportionally to their numbers of flows. This yields a configured unfairness of f_0 / f_1.

We also consider the coexistence of heavy users sending non-responsive traffic and light users sending responsive traffic.

When we investigate delay prioritization, we utilize an accounting factor of a_BE = 1 for BE traffic and a_EF = 3 for EF traffic, and use δ_BE = 2 to calculate the BE-specific time offset ∆_BE for the scheduler (see Section III-D2).

We perform multiple experiments that differ only in a few parameters. Table I compiles default values that are applied in all experiments unless stated otherwise.

TABLE I
DEFAULT PARAMETERS.

link bandwidths C_a, C_b                           100 Mb/s, 10 Mb/s
link delays D_a, D_b                               0.1 ms, 5 and 50 ms
buffer size Q_max on bottleneck link               24 pkts
activity AQM parameters Q_min, Q_base, γ           12, 20, 16 pkts
activity meter and averager memory M_AM, M_AA      3 s, 0.3 s
users u_0/u_1 in UG0/1                             1/10
flows f_0/f_1 per user in UG0/1                    10/1
reference rates R^i_r                              10 kb/s
accounting factors a_BE, a_EF                      1, 3
δ_BE for configuration of ∆_BE                     1.5

4) Performance Metrics: The widely used fairness index of Jain [53] is at most 1 and does not reveal the advantaged user group. Therefore, we characterize the fairness of the bandwidth sharing result by the throughput ratio TR = T_0 / T_1, where T_i is the average throughput per user in UG_i. The bandwidth is shared fairly among users in both groups if TR = 1 holds. In case of TR > 1, users in UG0 receive more throughput than users in UG1, and in case of TR < 1, users in UG1 are advantaged. Thus, maximizing per-user fairness means bringing TR close to 1.

Fig. 3. User 0 and user 1 send Poisson traffic; their activity meters are configured with equal reference rates R^i_r = 10 kb/s. (a) Throughput T_0 of user 0, no prioritization. (b) Throughput T_0 of user 0, traffic of user 0 prioritized, transmission rate of user 1 R^1_t = 7.5 Mb/s. (c) Average queueing delay for BE and EF traffic, traffic of user 0 prioritized, R^1_t = 7.5 Mb/s.

For some experiments, we also report the average queue length or the average queueing delay, and the long-term utilization of the bottleneck link.

5) Simulation Accuracy: Unless stated otherwise, flows start uniformly distributed within the first 15 s of a simulation run. The first 100 s of a simulation run are discarded and results are logged over the subsequent 100 s. We report mean values over 10 runs. We omit confidence intervals for the sake of better readability.

B. Performance with Non-Responsive Traffic

In a first experiment series we show how two competing users sending Poisson traffic share the bandwidth of a bottleneck link without and with ABC. We set the transmission rate of user 1 to R^1_t ∈ {2.5, 7.5, 10} Mb/s and vary user 0's transmission rate R^0_t to study his resulting throughput T_0. We perform experiments without and with prioritization. Results are shown for a one-way delay of D_b = 5 ms only because that parameter has no impact without reactive sources.

1) Without Prioritization: We first consider the case without prioritization. Figure 3(a) shows the throughput T_0 of user 0 without and with ABC. Without ABC, the throughput T_0 of user 0 increases with increasing transmission rate R^0_t. User 0 even benefits from overloading the link. As soon as the network is overloaded, the queue is completely filled and the utilization is 100% (without figures).

For ABC we set the reference rates of both users to R^i_r = 10 kb/s. As long as the link is not overloaded, the throughput values for both users are the same without and with ABC. Therefore, the throughput of user 0 increases with increasing transmission rate R^0_t up to a value of T_0 = 7.5 Mb/s in case of ABC and R^1_t = 2.5 Mb/s. It remains constant for larger transmission rates R^0_t so that user 1 gets a throughput of T_1 = 2.5 Mb/s.

We explain the observed effect. The activities of the traffic sent by both users depend on their transmission rates; the traffic of the user with the higher transmission rate obtains the higher activity. In case of congestion, a queue builds up. The activity AQM drops traffic with higher activity values at a lower queue length while it still accepts the traffic with lower activity values. Thus, the activity AQM preferentially drops traffic from the more active user, i.e., user 0, in case of congestion.

For R^1_t = 7.5 Mb/s, the throughput T_0 of user 0 increases up to a transmission rate of R^0_t = 7.5 Mb/s and reaches a value shortly below T_0 = 7.5 Mb/s although the bottleneck link is already saturated at R^0_t = 2.5 Mb/s. For a transmission rate of R^0_t < 7.5 Mb/s, packets of user 0 have lower activity than those of user 1. Therefore, packets of user 1 are preferentially dropped so that the throughput T_0 of user 0 increases. For R^0_t > 7.5 Mb/s the throughput of user 0 falls to T_0 = 2.5 Mb/s so that user 1 gets a throughput of T_1 = 7.5 Mb/s. This is because the packets of user 0 then have the larger activity and are preferentially dropped.

For R^1_t = 10 Mb/s we see the same phenomenon: the throughput T_0 of user 0 increases up to nearly the full link capacity as R^0_t approaches 10 Mb/s, and user 1 obtains the full link capacity once user 0 increases his transmission rate beyond 10 Mb/s.

With ABC, packets are already dropped in case of congestion although they could still be accommodated in the buffer. The activity AQM mostly enforces queue lengths lower than Q_base (without figures) so that the average queue length is significantly lower than without ABC. However, 100% bottleneck utilization is still achieved.

Thus, in case of equal reference rates, traffic of users with lower transmission rates enjoys priority over traffic of users with larger transmission rates. To maximize throughput in case of congestion, users should reduce their activity. Hence, ABC provides an environment that incentivizes users to apply congestion control.

2) With Prioritization: We now perform the same experiment but prioritize the traffic of user 0 over the traffic of user 1. That means, traffic of user 0 is sent as EF, accommodated in a separate queue, and served with the priority scheme proposed in Section III-D2.


Fig. 4. Both user 0 and user 1 send TCP traffic without prioritization. The heavy user has a varying number f_0 of flows and the light user has f_1 = 1 flow. (a) Throughput ratio. (b) Average queue length. (c) Utilization of the bottleneck link.

Fig. 5. Both user 0 and user 1 send TCP traffic; traffic of user 0 is prioritized. The heavy user has a varying number f_0 of flows and the light user has f_1 = 1 flow. (a) Throughput ratio. (b) Average queueing delay for BE traffic. (c) Utilization of the bottleneck link.

We apply two different accounting factors a_EF ∈ {1, 3} while keeping a_BE = 1, and set the transmission rate of user 1 to R^1_t = 7.5 Mb/s. Figure 3(b) shows that the throughput T_0 of user 0 is almost identical to the case without prioritization for a_EF = 1. For a_EF = 3, T_0 is significantly lower than for a_EF = 1 because higher accounting factors lead to higher activity values for user 0's traffic. Thus, when a large accounting factor a_EF is applied, a user can send less EF traffic than BE traffic in case of congestion. This disincentivizes users from sending traffic without realtime requirements as EF.

Figure 3(c) compiles the average queueing delays for the traffic of both users. EF traffic of user 0 is forwarded faster on the bottleneck link than BE traffic. Therefore, the average queueing delay of EF traffic is about zero under most conditions. Only for a large EF traffic rate of R^0_t = 7 Mb/s and an accounting factor of a_EF = 1 is the average queueing delay for EF traffic larger, but it does not exceed 5 ms. This is due to the large amount of accepted EF traffic under these conditions. With an accounting factor of a_EF = 3, less EF traffic is accepted, which then enjoys shorter queueing delay. BE traffic of user 1 waits significantly longer. The queueing delay for BE is even larger for accounting factor a_EF = 1 than for a_EF = 3 because more EF traffic is transmitted with a_EF = 1. The average queueing delay can become as large as 50 ms for a_EF = 1 although the buffer size of Q_max = 24 packets corresponds to only 31.6 ms of delay. For a_EF = 3, the average queueing delay for BE traffic is limited to about 20 ms.

C. Performance with Responsive Traffic

We now consider the transmission of TCP traffic. User 0 is a heavy user and opens a large number f_0 of TCP connections while user 1 is a light user and holds only f_1 = 1 TCP connection. Again, we investigate this scenario without and with ABC, and without and with prioritization of the heavy user.

1) Without Prioritization: Figure 4(a) shows the throughput ratio of the heavy user compared to the light user. Without ABC, the throughput ratio TR = T_0 / T_1 can be approximated by f_0 / f_1. This matches our simulation results well for both the short and the long one-way delay D_b ∈ {5, 50} ms. In contrast, with ABC the throughput ratio of heavy and light users remains between 1 and 1.1. It is slightly lower for the short one-way delay than for the long one-way delay.

Figure 4(b) reports the average queue length. It is shorter for the short one-way delay of D_b = 5 ms than for the long one-way delay of D_b = 50 ms and increases with the increasing number of flows f_0. ABC leads to a slightly shorter average queue length than without ABC because the activity AQM drops packets already before the queue is completely filled.

Figure 4(c) demonstrates that the utilization is 100% for the short one-way delay of D_b = 5 ms. For the long one-way delay of D_b = 50 ms, it is lower and increases with an increasing number of flows from 92% to almost 100%. Traffic sharing with ABC leads to similar utilization as without ABC. Although ABC causes shorter average queue lengths than without ABC, it allows for almost identical link utilization except for f_0 = 1 and D_b = 50 ms.

9

Page 10: Fair Resource Sharing for Stateless-Core Packet-Switched ...menth/... · than light users in the absence offairness-awarecongestion management. The latter requires that appropriate

Fig. 6. User 0 sends Poisson traffic with transmission rate R^0_t and user 1 has f_1 = 1 TCP flow. No prioritization. (a) Throughput ratio. (b) Average queue length. (c) Utilization of the bottleneck link.

Fig. 7. User 0 sends Poisson traffic with transmission rate R^0_t and user 1 has f_1 = 1 TCP flow. ABC is applied and traffic of user 0 is prioritized. (a) Throughput ratio. (b) Average queueing delay for BE traffic. (c) Utilization of the bottleneck link.

2) With Prioritization: We perform the same experiments as above while prioritizing the traffic of the heavy user.

For prioritization without ABC, we report results without figures. For a one-way delay of Db = 50 ms, the throughput ratio increases in a very similar way as without prioritization. This is because the round-trip times for both users are in the same order of magnitude: they are dominated by the one-way delay since queuing delays for Db = 50 ms are rather low. However, for Db = 5 ms, observed throughput ratios start from 200. This is because the TCP connections of the heavy user experience very short round-trip times due to the short one-way delay and negligible queueing delay. In contrast, the TCP connection of the light user experiences a short one-way delay but long queueing delay, i.e., long round-trip times, which limits its throughput to low values.

Figure 5(a) shows the throughput ratio for ABC and prioritization of heavy user traffic. We consider one-way delays of Db ∈ {5,50} ms and accounting factors aEF ∈ {1,3}. With accounting factor aEF = 1, the throughput ratio remains around 1.2 for Db = 5 ms and increases with an increasing number of flows f0 of the heavy user from 1.0 to 1.7 for Db = 50 ms. With aEF = 3, the figure reports clearly lower throughput ratios because EF traffic is tagged with larger activity values at lower transmission rates. The throughput ratio remains around 0.5 for Db = 5 ms and increases from 0.6 to 1.1 for Db = 50 ms. Thus, prioritizing a user's traffic increases his throughput ratio, but applying an appropriate accounting factor aEF counteracts that phenomenon.

Under all conditions, EF traffic experiences an average queueing delay close to zero. Figure 5(b) reports the average queuing delay for BE traffic. With aEF = 1, it increases from 24 ms to 31 ms for Db = 5 ms, and from 14 ms to 35 ms for Db = 50 ms. With aEF = 3, less EF traffic is sent, which reduces the queueing delay for BE traffic to values between 19 ms and 21 ms and between 9 ms and 21 ms, respectively.

Figure 5(c) reports the utilization of the bottleneck link for ABC with traffic prioritization, again for Db ∈ {5,50} ms and for aEF ∈ {1,3}. For Db = 5 ms, 100% utilization is achieved for aEF = 1 and slightly less for aEF = 3. For Db = 50 ms, utilization values increase from 94% to 100% with an increasing number of flows and are about equally high as without prioritization (see Figure 4(c)). We observe the same effect without ABC and with prioritization (without figure). A larger accounting factor of aEF = 3 reduces the utilization compared to aEF = 1.

D. Performance with Non-Responsive and Responsive Traffic

We investigate the coexistence of a non-responsive user and a responsive user. The non-responsive heavy user sends Poisson traffic with a transmission rate of R0t and the light responsive user transmits a single TCP flow. We study the throughput ratio TR without and with ABC and without and with prioritization of the heavy user.

1) Without Prioritization: Figure 6(a) shows the throughput ratio of both users depending on the transmission rate of the heavy user. Without ABC, the heavy user obtains a major capacity share of the bottleneck link for transmission rates R0t larger than 5 Mb/s. Figure 6(b) illustrates that as soon as a transmission rate of R0t = 10 Mb/s is reached, the queue is mostly fully occupied. According to Figure 6(c), the link utilization is close to 100% for a one-way delay of 5 ms for any transmission rate R0t. In contrast, for a long one-way delay of Db = 50 ms, the utilization is much lower and increases from 75% to 100% with increasing transmission rate R0t.

With ABC, the throughput ratio between the heavy and light user in Figure 6(a) rises up to a value of 1.2 and remains at this value for a broad range of transmission rates R0t before it falls to almost 0. This can be observed for one-way delays of both Db = 5 ms and Db = 50 ms, to a different extent. The heavy user's traffic is preferentially dropped in case of congestion because of its larger activity values. The rate of the light TCP user cannot increase significantly beyond 10 Mb/s. Thus, when the heavy user's rate R0t is larger than 10 Mb/s, the light user's traffic is always prioritized over the heavy user's traffic. However, the heavy user is then not completely starved because some of its traffic can be transmitted when the single flow of the light user cannot keep the queue length so large that all traffic of the heavy user is dropped. This critical queue length increases with increasing transmission rate R0t.

In early experiments, we used plain priority scheduling for the differentiation of EF and BE traffic. Then BE traffic was starved if the EF traffic rate R0t exceeded 10 Mb/s. This happened because the EF queue was mostly filled so that BE traffic was rarely scheduled. The modified priority scheduling approach presented in Section III-D2 fixed that problem.

With ABC, the average queue length in Figure 6(b) stays significantly shorter for R0t > 10 Mb/s than without ABC because traffic from user 0 is already dropped in the presence of a short queue due to its high activity values. Figure 6(c) shows that the utilization with ABC is lower than the utilization without ABC, but for large R0t it is larger than without ABC. Overall, utilization is similar.

Most notable in this experiment is that with ABC, non-responsive users cannot dominate responsive users, while they do without ABC.

2) With Prioritization: We perform the same experiments as above with prioritization of the heavy user's traffic. We first consider the experiment without ABC. In this case, the throughput ratio increases with prioritization (no figure) even faster than without prioritization (see Figure 6(a)), i.e., the TCP traffic of the light user is suppressed even more by the heavy user's traffic as it suffers from increased round-trip times due to increased queuing delay. We now consider transmission with ABC and prioritization in Figure 7(a). With an accounting factor of aEF = 1, we obtain similar results as with ABC but without prioritization in Figure 6(a). With an accounting factor of aEF = 3, we observe significantly lower throughput ratios because the larger accounting factor increases the activity values of the heavy user's traffic.

As EF traffic is prioritized, the heavy user's traffic hardly faces any queueing delay. Figure 7(b) reports that the average queuing delay for BE traffic rises up to a value of around 30 ms for accounting factor aEF = 1. It falls again for larger transmission rates because then only little EF traffic is admitted to the system due to its large activity values. For accounting factor aEF = 3, the activity AQM accepts less EF traffic. Therefore, BE traffic faces lower average queueing time.

ABC with prioritization and accounting factor aEF = 1 does not decrease the link utilization compared to capacity sharing without ABC and prioritization (without figure). Figure 7(c) shows that aEF = 3 leads to increased utilization in case of small transmission rates R0t. Moreover, utilization with ABC is almost equal without and with prioritization (cf. Figure 6(c)).

E. Impact of Reference Rate Rr

Although ABC’s control depends on relative reference rates,we prove that scaling them for all users with the same valueyields identical behavior of the activity AQM, which also leadsto identical sharing results. Furthermore, we show that unequaluser-specific reference rates may be used to scale a user’sthroughput relative to the throughput of other users.

1) Scalability of Reference Rates: Activities are computed as A = log2(Rm/Rr) with measured rates Rm. We scale the reference rates of all users by a scalar s to R∗r = s · Rr, which yields modified activities

A∗ = log2(Rm/(s · Rr)) = log2(Rm/Rr) − log2(s) = A − log2(s). (11)

The activity AQM calculates a moving average Aavg over the activities of previously accepted packets. Due to Equation (11), the moving average over the scaled activities is

A∗avg = Aavg − log2(s). (12)

The drop decision of the activity AQM for a packet depends on the difference of the packet's activity A and the averaged activity Aavg, i.e., on A − Aavg. We obtain for that difference in the scaled system

A∗ − A∗avg = (A − log2(s)) − (Aavg − log2(s)) = A − Aavg. (13)

Therefore, activity AQMs perform the same drop decisions when the reference rates of all users are scaled with the same value. Simulations with different scaling factors confirmed this finding. Minor deviations occurred due to numerical inaccuracies.
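
The invariance can also be checked numerically with a few lines of Python; the drop rule and the moving average below are simplified stand-ins (the real rule of Equation (10) compares the queue length against a threshold driven by A − Aavg), so the sketch only illustrates that the decisions depend on the difference A − Aavg.

import math
import random

def activity(measured_rate, reference_rate):
    # Activity as defined above: A = log2(Rm / Rr).
    return math.log2(measured_rate / reference_rate)

def drop(a, a_avg, threshold=0.5):
    # Simplified stand-in for the activity AQM drop decision; the fixed
    # activity threshold is for illustration only.
    return (a - a_avg) > threshold

random.seed(1)
rates = [random.uniform(1e6, 20e6) for _ in range(1000)]
ref, s = 5e6, 8          # common reference rate and an arbitrary scaling factor

a_avg = activity(rates[0], ref)
a_avg_s = activity(rates[0], s * ref)   # scaled system starts log2(s) lower
for rm in rates[1:]:
    a, a_s = activity(rm, ref), activity(rm, s * ref)
    assert drop(a, a_avg) == drop(a_s, a_avg_s)
    # moving average over all packets for brevity; averaging only accepted
    # packets preserves the argument because the decisions are identical
    a_avg = 0.9 * a_avg + 0.1 * a
    a_avg_s = 0.9 * a_avg_s + 0.1 * a_s
print("drop decisions are identical under uniformly scaled reference rates")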

Fig. 8. Impact of up-scaled reference rates on throughput ratio.



Fig. 9. A single heavy user sends f0 = 10 TCP flows and u1 = 10 light users send f1 = 1 TCP flow. Impact of activity AQM parameters γ and Qmin for different one-way delays Db. (a) Throughput ratio. (b) Average queue length. (c) Utilization of the bottleneck link.

Fig. 10. A single heavy user sends f0 = 10 TCP flows and u1 = 10 light users send f1 = 1 TCP flow. Impact of activity AQM parameters γ and Qbase for different one-way delays Db. (a) Throughput ratio. (b) Average queue length. (c) Percentage of tail drops.

2) Scalability of User Shares: We study the extent to which a user's capacity share can be scaled by adapting his reference rate. We consider two users sending TCP traffic. We scale the reference rate of user 0 with a factor s, i.e., R0r = s · R1r, and investigate its impact on the throughput ratio.

Figure 8 shows the throughput ratio for one and two flows per user. For a short one-way delay of Db = 5 ms, user 0 is able to take full advantage of his upscaled reference rate R0r as the throughput ratio TR increases linearly with the scaling factor s. We observe this also for two flows per user. In contrast, the throughput ratio is only TR = 12 for a long one-way delay of Db = 50 ms, a scaling factor of s = 16, and one flow per user. This is because a single TCP flow cannot recover fast enough from packet loss in the presence of a long round-trip time. However, for two flows per user, a throughput ratio of TR = 15 is achieved, which approximates the scaling factor s = 16. Thus, scaling of user shares works, but with TCP traffic a sufficiently large number of flows may be a prerequisite.

F. Impact of Activity AQM Parameters

The activity AQM preferentially drops traffic from heavy users in case of congestion. We evaluate the impact of its parameters γ, Qmin, and Qbase on throughput ratio, queue length, link utilization, and tail drops.

1) Impact of γ and Qmin: We consider a single heavy user with f0 = 10 TCP flows and u1 = 10 light users with f1 = 1 TCP flow each. Figure 9(a) shows that a small differentiation factor of γ = 1 packet already provides good fairness in terms of a throughput ratio of about 2. However, larger values of γ improve the fairness towards throughput ratios between 1.0 and 1.1. Increasing γ even further slightly increases the throughput ratio again. A short one-way delay of Db = 5 ms leads to smaller throughput ratios than a large one-way delay of Db = 50 ms because a shorter one-way delay allows TCP to adapt its transmission rate faster. The minimum threshold Qmin has only a minor impact on the throughput ratio and its effect is only visible for small Qmin and very large γ.

The differentiation factor γ essentially influences the queue limit at which packets with relatively large activity values are still accepted. Therefore, it also impacts the average queue length. Figure 9(b) shows that the average queue length is large for small differentiation factors γ while it is small for large differentiation factors γ. A short one-way delay of Db = 5 ms leads to larger average queue sizes than a long one-way delay of Db = 50 ms. Again, the effect of the minimum threshold Qmin is only visible for small Qmin and very large γ.

The resulting utilization on the bottleneck link is reported in Figure 9(c). A short one-way delay of Db = 5 ms leads to 100% utilization for most parameter settings while a long one-way delay of Db = 50 ms leads to utilization values around 99.6%, which are still very high; without ABC, we observed 99.7% utilization for this setting. Very large differentiation factors γ may slightly reduce utilization, but this effect is mitigated by larger minimum thresholds Qmin.

Based on this investigation, we choose γ = 16 packets and Qmin = 12 packets because these values provide good fairness and avoid reduced utilization.



2) Impact of Qbase: We now investigate the impact of Qbase. Figure 10(a) shows that it has hardly any impact on the relative throughput. Only for a very large value of Qbase = 24 pkts is the relative throughput increased. Figure 10(b) illustrates that increasing Qbase clearly increases the average queue size as it controls the threshold above which packets are dropped. Figure 10(c) compiles the percentage of packet drops due to buffer overflow. It is zero for almost all values of Qbase and γ, but for Qbase = 24 pkts it is significant. As tail drops can affect packets from both heavy and light users, they reduce ABC's drop differentiation between them, which leads to increased throughput ratios. It is remarkable that ABC avoids tail drops in the presence of TCP traffic even for large values of Qbase as long as there is still some headroom to the queue size.

G. Impact of Memories

ABC requires a memory MAM for activity meters in edge nodes and a memory MAA for activity averagers in forwarding nodes. We study the impact of these memories mathematically and by two series of experiments.

1) Activity Advantage for Starting Users: When users start sending, their initial measured rate starts with Rm = 0 and slowly ramps up with the amount of transmitted traffic. Therefore, a starting user enjoys lower activity values than users transmitting at a similar rate, which leads to an initial activity advantage. In the following, we point out the causes of this effect.

We consider a user who continuously transmits at a transmission speed of Rt = Cb/2 with Cb = 10 Mb/s. According to Equation (3), the weighted sum S of his activity meter must be S = Rm · T = Rt · MAM if the measured rate Rm well approximates the transmission rate Rt and if the measurement process runs long enough so that the weighted measurement time T has converged to MAM. For an activity meter memory of MAM = 3 s, the weighted sum is 1.875 MByte.

We now consider a starting user. His weighted sum is initially S = 0 and increases only with metered traffic. In particular, at least 1.875 MByte must be transmitted before the starting user can obtain similar activity values as the continuously transmitting user. The starting user can even send arbitrarily fast for that purpose without risking high activity values, which constitutes an initial activity advantage. That advantage scales with MAM. The advantage can be compared to the tolerance of a token bucket based rate limiter, which allows the transmission of an initial burst whose size is controlled by the bucket size.
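
The quoted figure follows directly from S = Rt · MAM; a minimal computation (our own arithmetic):

# Worked numbers for the example above: a continuous sender at Rt = Cb/2
# accumulates a weighted sum S = Rt * MAM in its activity meter.
Cb_bps = 10e6          # bottleneck bandwidth
Rt_bps = Cb_bps / 2    # continuous transmission rate (5 Mb/s)
MAM_s = 3.0            # activity meter memory

S_bytes = Rt_bps * MAM_s / 8
print(S_bytes / 1e6, "MByte")   # 1.875 MByte

# A freshly starting user must transmit roughly this volume before its activity
# values reach those of the continuous sender; the allowance grows linearly
# with MAM, much like the bucket size of a token bucket rate limiter.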

2) Reduction of Upload Time for Starting Users: We study the practical consequences of the previously observed initial activity advantage in the following experiment. We investigate the impact of ABC on the upload speed of sporadic uploads as they occur with interactive applications. Users send small amounts of data and are then inactive until the next upload. Acceleration of such uploads can improve the user's quality of experience.

We consider an upload user who starts transmission of a large file after a long idle time and in the presence of background traffic. The background traffic is generated by u1 = 10 other users with f1 ∈ {1,2} saturated TCP flows each. User 0 starts uploading only 100 s after all background users have started their TCP connections to ensure that their activity meters already yield typical activities. We measure the amount of user 0's traffic that is correctly received by TCP over time.

Fig. 11. Traffic received from an upload user depending on time. Background traffic is generated by u1 = 10 users with f1 ∈ {1,2} TCP connections. All users are configured with equal reference rates. (a) One-way delay Db = 5 ms. (b) One-way delay Db = 50 ms.

Figures 11(a) and 11(b) compile the amount of data received from the upload user over time for various background traffic (f1 ∈ {1,2}), without and with ABC, and for different one-way delays Db ∈ {5,50} ms. Note the logarithmic scale of the x-axis and the y-axis in the figures. We first discuss the results in Figure 11(a) for Db = 5 ms one-way delay. With ABC, significantly more traffic is uploaded after 1 s. The amount of traffic received after 1 s increases with the activity meter memory MAM. The difference in received traffic is due to the initial activity advantage of the upload user. The relative difference in received traffic among different activity meter memories MAM decreases over time. Also the relative difference of received traffic without and with ABC vanishes if every background user has only a single (f1 = 1) TCP flow. If background users have f1 = 2 TCP flows, upload with ABC remains faster than without ABC because ABC enforces fair bandwidth sharing among all users, and this advantage remains over time. Thus, ABC enforces fair capacity sharing, which is an advantage for light users against heavy users, and it favors users that have been silent for some time as they enjoy an activity advantage. Large uploads mainly benefit from the first phenomenon; small uploads also benefit from the latter.

Figure 11(b) shows the upload time for a one-way delay of Db = 50 ms. The difference in upload time without and with ABC is smaller than for a one-way delay of Db = 5 ms. However, in the presence of background users with f1 = 2 TCP connections, uploads with ABC are still faster than without ABC because ABC enforces fair capacity sharing.

Fig. 12. A single heavy user sends f0 = 10 TCP flows and u1 = 10 light users send f1 = 1 TCP flow. Impact of activity meter memory MAM and activity averager memory MAA for different one-way delays Db. (a) Throughput ratio. (b) Average queue length. (c) Utilization of the bottleneck link.

3) Impact on Fairness: We now study the impact of the activity meter memory MAM and the activity averager memory MAA on fairness. To that end, we consider one heavy user with f0 = 10 TCP flows and u1 = 10 light users with f1 = 1 flow each and test one-way delays of Db ∈ {5,50} ms. Their throughput ratio is presented in Figure 12(a). The fairness between the heavy user and the light users improves with increasing activity meter memory MAM and good values are obtained for MAM = 2 s and larger. Values of MAM = 0.25 s and smaller can significantly degrade the fairness. Figure 12(b) shows that average queue lengths are larger for Db = 5 ms than for Db = 50 ms and that they increase with increasing activity meter memory MAM. If the activity meter memory MAM is large, activity values increase later compared to a short MAM, so that activity AQMs also start dropping packets later, which results in increased average queue lengths. Figure 12(c) reveals that the activity meter memory MAM has no detrimental influence on bottleneck utilization if it is larger than 0.5 s. In this study, we apply MAM = 3 s to keep throughput ratios low and to ensure that activity values adapt sufficiently fast in the presence of traffic rate changes. The activity averager memory MAA has hardly any impact on the performance metrics in Figures 12(a)–12(c).

4) Impact of Activity Averager Memory MAA: We performed additional experiments with varying activity averager memory MAA, which is used by the activity AQM in forwarding nodes. It turned out that this parameter has only minor impact on system performance: minor impact on upload performance, minor impact on fairness, and minor impact on bandwidth sharing. However, we recommend setting it to a value smaller than the activity meter memory MAM to minimize the difference between the measured activities A at the edge nodes and the averaged activity Aavg in forwarding nodes. Given a packet transmission time of 1500 · 8 bit / 10^7 bit/s = 1.2 ms, an activity averager memory on the order of MAA = 300 ms seems sufficient so that the average activity is computed over the activities of sufficiently many packets.
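
A quick back-of-the-envelope check of this recommendation (the 250-packet figure is our own arithmetic and not stated explicitly above):

# With 1500-byte packets on a 10 Mb/s link, one packet transmission takes 1.2 ms,
# so MAA = 300 ms spans about 250 back-to-back packets, which is plenty for a
# stable average activity.
packet_bits = 1500 * 8
link_bps = 10e6
tx_time_s = packet_bits / link_bps      # 0.0012 s = 1.2 ms
MAA_s = 0.3
print(MAA_s / tx_time_s)                # 250 packets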

Fig. 13. Transmission of a single TCP flow. (a) Impact of one-way delay Db without and with ABC. (b) Impact of activity averager memory MAA on ABC's drop threshold; the one-way delay is Db = 5 ms.

Fig. 14. Impact of traffic type and configured unfairness on fairness with normal and fair activity meter (see Section III-E). (a) User 0 and user 1 send Poisson traffic with transmission rates R0t and R1t; the throughputs T0 of user 0 are given. (b) User 0 and user 1 send TCP traffic over f0 and f1 = 1 connections; the throughput ratios TR of both users are given. (c) User 0 sends Poisson traffic with rate R0t and user 1 sends traffic over f1 = 1 TCP connection; the throughput ratios TR of both users are given.

5) Impact on the Initial Rate Increase of a Single TCP Flow: The activity AQM limits the queue length depending on various parameters. Therefore, a single TCP flow may not be able to fully utilize the available buffer size. This may be a problem for TCP flows in the presence of large bandwidth-delay products (BDP) and an underdimensioned buffer, as its rate may take long to fully utilize the bottleneck bandwidth.

Figure 13(a) visualizes the time-dependent transmission rate of a single TCP flow computed with TDRM-UTEMA [46] and a memory of 0.5 s. The dark curves are without ABC, the light curves are with ABC. One-way delays of Db ∈ {5,25,50} ms are considered. These values correspond to BDPs of at least 8.5, 41.8, and 83.5 packets. In contrast, the buffer is only 24 packets large and ABC traffic may not be able to fully utilize it. We observe that for Db = 5 ms, TCP can utilize the full bottleneck bandwidth rather quickly, for Db = 25 ms it takes a few seconds, and for Db = 50 ms it takes more than 10 s. Larger one-way delays lead to slower TCP rate increases. For Db = 25 ms, TCP's rate increase is faster with ABC than without ABC, while for Db = 50 ms it is vice versa. Thus, we see TCP's rate increase difficulty both with and without ABC. The performance depends on the specific setting. ABC's low drop threshold may be advantageous as it keeps the round-trip time short (in the case of Db = 25 ms), but it may also be disadvantageous as it may provoke more severe packet loss (in the case of Db = 50 ms).
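
The quoted BDP values can be approximated from Cb, the packet size, and the round-trip time 2 · Db (our own arithmetic; the slightly larger figures in the text presumably also account for transmission and minimal queueing delays):

# BDP in packets = Cb * RTT / packet size, with 1500-byte packets and RTT = 2*Db.
Cb_bps = 10e6
for Db_ms in (5, 25, 50):
    rtt_s = 2 * Db_ms / 1000
    bdp_packets = Cb_bps * rtt_s / (1500 * 8)
    print(Db_ms, "ms ->", round(bdp_packets, 1), "packets")
# Prints roughly 8.3, 41.7, and 83.3 packets, i.e., already well above the
# 24-packet buffer for the two longer delays.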

We visualize in Figure 13(b) the evolution of ABC's drop threshold over time in individual simulation runs for MAA ∈ {0.1,0.3,1} s and Db = 5 ms. The drop thresholds are initially at Qmin = 12 and then converge to Qbase = 20 pkts. The convergence speed depends on MAA. We explain the phenomenon as follows. The average activity Aavg at the activity AQM is initially zero. A starting TCP connection first exhibits a rather low measured rate for two reasons: the rate is indeed low at first, and activities are lower for users that have just started sending (see Section IV-G1). Therefore, measured activities are initially small and increase over time; the same holds for their average value Aavg at the activity AQM. As the activity values increase, the average Aavg lags behind. This yields a positive difference A − Aavg and, together with Qbase < Qmax, leads to a low drop threshold for ABC (see Equation (10)). However, the drop threshold does not fall below the minimum threshold Qmin. The parameter study in Figure 13(b) shows that the activity averager memory MAA influences the behavior of ABC's drop threshold: shorter values lead to faster adaptation of the drop threshold. However, the adaptation speed has mostly no impact on performance when MAA is sufficiently small. We studied the time-dependent drop thresholds also for Db ∈ {5,25,50} ms, but they lead to similar curves (without figures).

H. Impact of Fair Activity Meter

In Section III-E we have presented the fair activity meter as an alternative to the normal activity meter. It marks traffic with activity values such that equal-rate, least-activity subsets of two different aggregates have similar activities. As a result, these subsets should obtain similar treatment by the activity AQM. We evaluate the impact of the activity meter variant on fairness.

In the first experiment, user 0 and user 1 send Poisson traffic with different rates. We first consider the fair activity meter. Figure 14(a) shows that the throughput T0 of user 0 converges for R1t ∈ {5,7.5} Mb/s to slightly more than 5 Mb/s with increasing transmission rate R0t. Thus, user 0 obtains only slightly more than his fair share even if his transmission rate exceeds his fair share to a larger degree than user 1's. In case of R1t = 2.5 Mb/s, more capacity is available so that the throughput T0 of user 0 converges to 7.5 Mb/s. With the normal activity meter, the traffic of the user with the lower transmission rate is prioritized over the traffic of the other user. As a result, user 0 obtains a lower throughput T0 for large transmission rates R0t with the normal activity meter. On the one hand, the fair activity meter improves fairness among users as it drops traffic such that users with different transmission rates achieve similar throughput, while the normal activity meter leads to prioritization of light user traffic. On the other hand, this also reduces incentives for users to keep their transmission rates close to their fair shares.

In a second experiment, user 0 holds a variable number f0 of TCP connections while user 1 holds only f1 = 1 TCP connection. Figure 14(b) shows that ABC with the fair activity meter is able to protect the throughput of the light user against traffic of the heavy user, but only to a lower degree than ABC with the normal activity meter. Throughput ratios between 1.5 and 3 instead of between 1.0 and 1.5 are achieved. Thus, the fair activity meter reduces the fairness between heavy and light TCP users compared to the normal activity meter.

Finally, user 0 sends Poisson traffic at different transmission rates R0t and user 1 sends traffic over f1 = 1 TCP connection. Figure 14(c) shows that throughput ratios are between 2 and 3 with ABC and the fair activity meter instead of between 0 and 1.1 with the normal activity meter. Thus, the fair activity meter degrades the fairness for the responsive user.

In this work we concentrated on the normal activity meter as it enforces excellent fairness among responsive users at the expense of extremely impeding non-responsive users whose transmission rates exceed their fair shares. If that feature is not acceptable, the fair activity meter may be used instead as it provides a similar ABC design without this feature, but at the expense of reduced fairness.

Fig. 15. Impact of traffic type and configured unfairness on throughput or throughput ratio with CSFQ. The corresponding figures for ABC are Figure 3(a), Figure 4(a), and Figure 6(a). (a) User 0 and user 1 send Poisson traffic with transmission rates R0t and R1t; the throughput T0 of user 0 is given. (b) User 0 has multiple (f0) TCP connections and user 1 has only f1 = 1 TCP connection; the throughput ratios TR of both users are given. (c) User 0 sends Poisson traffic with rate R0t and user 1 sends traffic over f1 = 1 TCP connection; the throughput ratios TR of both users are given.

Related control designs such as RFQ and PPV (see Section II) leverage marking schemes similar to the fair activity meter. Therefore, they suffer from the same drawback: they protect the throughput of light users against traffic of heavy users to a lower degree than ABC.

V. COMPARISON OF ABC AND CSFQ

CSFQ was the first mechanism to enforce fair resource sharing in a core-stateless network and is the most well-known one. Therefore, we compare ABC with CSFQ. We explain our implementation and configuration of CSFQ and compare the fairness achieved with CSFQ to that of ABC. Finally, we analyze why ABC leads to better results and compare the configuration requirements of ABC and CSFQ.

A. Configuration of CSFQ

The original source code for CSFQ is available at [54] for the ns2 simulation framework, including the two minor amendments described in [34], [35]. We ported that code to ns3 in order to perform the same experiments as for ABC.

We used the following parameters for simulation, which are recommended at [54]: K = 100 ms for measuring per-flow rates at edge nodes, Kα = 100 ms for measuring the aggregate rates A and F at edge and core nodes, and Kc = 100 ms at core and edge nodes as the estimation interval for the estimated fair rate α. We utilize a bottleneck bandwidth of Cb = 10 Mb/s and a queue size of Qmax = 24 packets. The latter deviates from the original CSFQ evaluation and equals our evaluation of ABC. We deliberately chose a shorter queue size as overly large buffers contribute to bufferbloat. Moreover, typical switches exhibit rather small queue sizes.
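
For convenience, the complete parameter set of our CSFQ port collected in one place, written as a Python mapping purely for compactness (the values are exactly those listed above):

# CSFQ and scenario parameters used in the ns-3 port.
CSFQ_CONFIG = {
    "K": 0.100,        # s, per-flow rate measurement interval at edge nodes
    "K_alpha": 0.100,  # s, measurement interval for aggregate rates A and F
    "K_c": 0.100,      # s, estimation interval for the fair rate alpha
    "C_b": 10e6,       # bit/s, bottleneck bandwidth
    "Q_max": 24,       # packets, queue size (smaller than in the original CSFQ evaluation)
}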

B. Non-Responsive Traffic

We first consider non-responsive traffic. Figure 15(a) shows that with CSFQ, the heavy and the light user share the bandwidth very fairly. In contrast, with ABC heavy users achieve a significantly lower throughput with the normal activity meter (see Figure 3(a)) and a slightly too large throughput with the fair activity meter (see Figure 14(a)).

C. Responsive Traffic

We now study a heavy and a light TCP user. The heavy user has a varying number f0 of TCP connections while the light user has only a single one. Figure 15(b) illustrates that CSFQ can enforce a throughput ratio between 1.4 and 1.6 for different numbers of flows f0 and a one-way delay of Db = 5 ms, while ABC achieves values close to 1.0 (see Figure 4(a)). However, for a larger one-way delay of Db = 50 ms, the throughput ratio with CSFQ increases from 1.0 to 2.8 for an increasing number of flows f0 of the heavy user. In contrast, ABC keeps the throughput ratio below 1.5 under these conditions.

D. Coexistence of Non-Responsive and Responsive Traffic

Finally, we investigate the coexistence of a heavy user sending Poisson traffic at rate R0t and a light user with a single TCP flow. According to Figure 15(c), the throughput ratio with CSFQ increases from 0 to 1 when the transmission rate R0t of the heavy user linearly increases from 0 to 5 Mb/s. We observe the same phenomenon with ABC in Figure 6(a). For larger transmission rates R0t, CSFQ can enforce a relative throughput of between 1.2 and 1.8 for a small one-way delay of Db = 5 ms. However, for a larger one-way delay of Db = 50 ms, the throughput ratio significantly increases with increasing transmission rate R0t. Thus, CSFQ cannot protect light users against heavy users in case of longer one-way delays.

E. Discussion and Analysis

We compare CSFQ and ABC with regard to performance and configuration requirements.

1) Performance: We observed that CSFQ leads to larger throughput ratios than ABC in the presence of TCP traffic, in particular in the case of long one-way delays. To explain the effect, we visualize the distribution of the queue length in the presence of packet drops for ABC and CSFQ in Figures 16(a) and 16(b).

Fig. 16. Queue length distribution at packet drop instants. One heavy user sends f0 = 10 TCP flows and u1 = 10 light users send f1 = 1 TCP flow each. The one-way delay is Db = 50 ms. (a) ABC. (b) CSFQ.

Figure 16(a) shows that with ABC, packets are dropped only when the queue length is at least Qmin. This is due to the definition of the activity AQM's drop threshold in Equation (10). Moreover, packet drops occur only up to a queue length of 23 packets, i.e., all drops are due to activity AQM drops and none are due to buffer overflow. This is in line with our observations in Figure 10(c).

Figure 16(b) reveals that CSFQ has a significantly different drop behavior. Some packets are dropped when the queue is empty or when its length is rather small. This happens in spite of the implemented amendment in [35], which should prevent CSFQ from dropping traffic when the queue length is small. This feature, added in Fig. 4 of [35], checks in the congested mode at the end of a measurement interval whether the queue is still sufficiently long; if it is not, CSFQ leaves the congested mode to prevent additional packet drops. However, that may be too late as the measurement interval takes Kc = 100 ms. As a result, CSFQ sometimes drops further packets although the queue has already decreased sufficiently. The solution is obvious: the algorithm in Fig. 3 of [35] must not drop packets when the queue is shorter than Qmin. We implemented this improvement and observed that it avoids packet drops in the presence of short queues as desired, but it hardly improves throughput ratios.

In addition, we observe that CSFQ drops almost 50% of its packets when the queue is 24 packets long, i.e., these drops are due to buffer overflow. Such tail drops hit the traffic of both heavy and light users. This reduces CSFQ's drop differentiation between heavy and light users, which reduces CSFQ's ability to protect light users from heavy users under challenging conditions. As a result, CSFQ suffers from increased throughput ratios when TCP traffic is transmitted and the one-way delay is long (Db = 50 ms). ABC leads to fairer resource sharing as the activity AQM is able to avoid tail drops with appropriate parameters Qmin, Qbase, and γ (see Section IV-F2).
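
A minimal sketch of the guard we added, assuming a per-packet drop probability has already been computed as in CSFQ; all names are ours and do not reflect the original CSFQ code structure:

import random

Q_MIN = 12  # packets; packets are never probabilistically dropped below this queue length

def csfq_enqueue(packet, queue, drop_probability, q_max=24):
    # Sketch of the amended drop logic: tail drop only at a full buffer,
    # probabilistic CSFQ drops only once the queue holds at least Q_MIN packets.
    if len(queue) >= q_max:
        return False                      # unavoidable tail drop
    if len(queue) >= Q_MIN and random.random() < drop_probability:
        return False                      # probabilistic CSFQ drop, only above Q_MIN
    queue.append(packet)
    return True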

2) Configuration Requirements: ABC is significantly simpler with respect to configuration and more adaptive than CSFQ. Like CSFQ, it requires per-flow rate measurement at edge nodes. However, at core nodes, CSFQ also requires measurement of the arrived and accepted traffic rates, while ABC's activity AQM just computes a moving average of previously received activity values. Moreover, CSFQ needs to be configured with the bottleneck bandwidth C to control the estimated fair rate α by α = α · C/F during congestion phases [35, Fig. 4]. Therefore, CSFQ requires a constant and known bottleneck rate C. This is different for ABC, which can therefore also be applied to transmission links with variable bandwidth.
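
The difference in configuration requirements can be summarized in two one-line update rules; this is a sketch with our own names, not the actual implementations:

# CSFQ rescales its fair rate estimate during congestion as alpha <- alpha * C / F
# [35, Fig. 4], which requires a known, constant link rate C.
def csfq_update_fair_rate(alpha, C, F):
    return alpha * C / F

# ABC's activity AQM only maintains a moving average of the activity values
# carried by accepted packets; no link rate is needed.
def abc_update_avg_activity(a_avg, packet_activity, weight=0.05):
    return (1 - weight) * a_avg + weight * packet_activity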

VI. CONCLUSION

In this work we simplified activity-based congestion management (ABC) and extended it for traffic prioritization. We modified the activity meter at the edge nodes, which makes ABC easier to understand and configure. Likewise, we proposed a new activity AQM for forwarding nodes, which is more straightforward and easier to reason about. We added support for traffic prioritization and provided means to disincentivize high-priority transport if not needed and to avoid starvation of non-prioritized traffic.

We investigated fairness and queueing behavior with and without ABC under challenging conditions using packet-based simulation. We showed that ABC preferentially drops traffic from users with higher activity so that users can maximize their throughput by sending at their fair rate. With ABC, users applying congestion control share the capacity of a bottleneck equally irrespective of their numbers of flows, which is different without ABC. Moreover, ABC protects the throughput of users sending congestion-controlled traffic against overload from users sending non-responsive traffic. This holds without and with prioritization of the non-responsive traffic.

ABC’s user-specific activity meters are configured witha reference rate. We proved that the reference rates of allusers can be scaled by the same factor without changingthe system behavior. Moreover, we showed that individualusers can be granted larger capacity shares by scaling uptheir reference rates. We investigated the impact of ABC’sactivity AQM parameters and recommended values that leadboth to good fairness and high link utilization. Both activity

17

Page 18: Fair Resource Sharing for Stateless-Core Packet-Switched ...menth/... · than light users in the absence offairness-awarecongestion management. The latter requires that appropriate

meters in edge nodes and activity averagers in forwardingnodes require memories. We illustrated their impact and gaverecommendations. Moreover, we showed that ABC reduces thetime needed for small uploads, which improves the quality ofexperience for interactive applications.

Activity meters may be designed differently. We proposed a normal activity meter and a fair activity meter. While the fair activity meter improves the fairness for scenarios with only non-responsive traffic, the normal activity meter provides better fairness for responsive sources. The fair activity meter resembles the way in which the similar control approaches RFQ and PPV perform traffic marking. We also showed that CSFQ, another similar approach, cannot provide the same fairness as ABC because it cannot efficiently avoid tail drops.

We recently implemented a prototype of ABC leveraging a programmable data plane and P4 [49] to demonstrate its technical viability. Future work should study use cases; we see particular potential in data centers and in the wireline part of mobile networks.

ACKNOWLEDGEMENTS

The authors thank Bob Briscoe, David Wagner, and Habib Mostafaei for their helpful comments.

REFERENCES

[1] M. Jaber, M. A. Imran, R. Tafazolli, and A. Tukmanov, "5G Backhaul Challenges and Emerging Research Directions: A Survey," IEEE Access, vol. 4, pp. 1743 – 1766, Apr. 2016.

[2] X. Ge, H. Cheng, M. Guizani, and T. Han, "5G Wireless Backhaul Networks: Challenges and Research Advances," IEEE Network Magazine, vol. 28, no. 6, pp. 6 – 11, Nov. 2014.

[3] B. Briscoe, Ed., R. Woundy, Ed., and A. Cooper, Ed., "RFC6789: Congestion Exposure (ConEx) Concepts and Use Cases," Dec. 2012.

[4] Broadband Internet Technical Advisory Group (BITAG), "Real-Time Network Management of Internet Congestion," Broadband Internet Technical Advisory Group (BITAG), Tech. Rep., Oct. 2013.

[5] Sandvine Incorporated ULC, "Network Congestion Management: Considerations and Techniques – An Industry Whitepaper," Sandvine Incorporated ULC, Tech. Rep., Oct. 2015.

[6] IETF Working Group on RTP Media Congestion Avoidance Techniques (RMCAT), "Description of the Working Group," http://tools.ietf.org/wg/rmcat/charters, 2012.

[7] S. Shalunov, G. Hazel, J. Iyengar, and M. Kuehlewind, "RFC6817: Low Extra Delay Background Transport (LEDBAT)," Dec. 2012.

[8] R. Pan, P. Natarajan, F. Baker, and G. White, "RFC8033: Proportional Integral Controller Enhanced (PIE): A Lightweight Control Scheme to Address the Bufferbloat Problem," https://tools.ietf.org/html/rfc8033, Feb. 2017.

[9] K. Nichols, V. Jacobson, A. McGregor, Ed., and J. Iyengar, Ed., "RFC8289: Controlled Delay Active Queue Management," https://tools.ietf.org/html/rfc8289, Jan. 2018.

[10] M. Mathis and B. Briscoe, "RFC7713: Congestion Exposure (ConEx) Concepts, Abstract Mechanism, and Requirements," Dec. 2015.

[11] D. Kutscher, F. Mir, R. Winter, S. Krishnan, Y. Zhang, and C. J. Bernados, "RFC7778: Mobile Communication Congestion Exposure Scenario," Mar. 2016.

[12] B. Briscoe, "Initial Congestion Exposure (ConEx) Deployment Examples," http://tools.ietf.org/html/draft-briscoe-conex-initial-deploy, Jan. 2012.

[13] B. Briscoe and M. Sridharan, "Network Performance Isolation in Data Centres using Congestion Policing," http://tools.ietf.org/html/draft-briscoe-conex-data-centre, Feb. 2014.

[14] M. Menth and N. Zeitler, "Activity-Based Congestion Management for Fair Bandwidth Sharing in Trusted Packet Networks," in IEEE/IFIP Network Operations and Management Symposium (NOMS), Istanbul, Turkey, 2016.

[15] C. Bastian, T. Klieber, J. Livingood, J. Mills, and R. Woundy, "RFC6057: Comcast's Protocol-Agnostic Congestion Management System," Dec. 2010.

[16] A. Demers, S. Keshav, and S. Shenker, "Analysis and Simulation of a Fair Queuing Algorithm," in ACM SIGCOMM, 1989.

[17] M. Menth, M. Mehl, and S. Veith, "Fairness Enforcement for Multiple TCP Users through Deficit Round Robin with Limited Deficit Savings (DRR-LDS)," in GI/ITG Conference on Measuring, Modelling and Evaluation of Computer and Communication Systems (MMB), Erlangen, Germany, Feb. 2018.

[18] A. Shieh, S. Kandula, A. Greenberg, C. Kim, and B. Saha, "Sharing the Data Center Network," in USENIX Symposium on Networked Systems Design & Implementation (NSDI), 2011.

[19] R. Adams, "Active Queue Management: A Survey," IEEE Communications Surveys & Tutorials, vol. 15, no. 3, pp. 1425–1476, 2013.

[20] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking, vol. 1, no. 4, pp. 397–413, Aug. 1993.

[21] K. Nichols and V. Jacobson, "Controlling Queue Delay," ACM Queue, vol. 10, no. 5, May 2012.

[22] R. Pan, P. Natarajan, C. Piglione, M. S. Prabhu, V. Subramanian, F. Baker, and B. VerSteeg, "PIE: A Lightweight Control Scheme to Address the Bufferbloat Problem," in IEEE Workshop on High Performance Switching and Routing (HPSR), 2013.

[23] R. Pan, B. Prabhakar, and K. Psounis, "CHOKe: a Stateless Active Queue Management Scheme for Approximating Fair Bandwidth Allocation," in IEEE Infocom, Tel Aviv, Israel, 2000.

[24] K. Ramakrishnan, S. Floyd, and D. Black, "RFC3168: The Addition of Explicit Congestion Notification (ECN) to IP," Sep. 2001.

[25] B. Briscoe, "Flow Rate Fairness: Dismantling a Religion," ACM SIGCOMM Computer Communication Review, vol. 37, no. 2, Apr. 2007.

[26] B. Briscoe, A. Jacquet, C. di Cairano-Gilfedder, A. Salvatori, A. Soppera, and M. Koyabe, "Policing Congestion Response in an Internetwork using Re-feedback," in ACM SIGCOMM, Portland, OR, Aug. 2005.

[27] A. Jacquet, B. Briscoe, and T. Moncaster, "Policing Freedom to Use the Internet Resource Pool," in Re-Architecting the Internet (ReArch), Madrid, Spain, Dec. 2008.

[28] B. Briscoe, "Re-feedback: Freedom with Accountability for Causing Congestion in a Connectionless Internetwork," PhD thesis, Department of Computer Science, University College London, 2009.

[29] M. Kuehlewind, Ed., and R. Scheffenegger, "RFC7786: TCP Modifications for Congestion Exposure (ConEx)," May 2016.

[30] D. Wagner, "Congestion Policing Queues – A New Approach to Managing Bandwidth Sharing at Bottlenecks," in International Conference on Network and Services Management (CNSM), 2014.

[31] S. Baillargeon and I. Johansson, "ConEx Lite for Mobile Networks," in Capacity Sharing Workshop (CSWS), 2014.

[32] M. Menth and S. Veith, "Active Queue Management Based on Congestion Policing (CP-AQM)," in GI/ITG Conference on Measuring, Modelling and Evaluation of Computer and Communication Systems (MMB), Erlangen, Germany, Feb. 2018.

[33] M. Menth, B. Briscoe, and T. Tsou, "Pre-Congestion Notification (PCN) – New QoS Support for Differentiated Services IP Networks," IEEE Communications Magazine, vol. 50, no. 3, Mar. 2012.

[34] I. Stoica, S. Shenker, and H. Zhang, "Core-Stateless Fair Queueing: Achieving Approximately Fair Bandwidth Allocations in High Speed Networks," in ACM SIGCOMM, Vancouver, B.C., 1998.

[35] ——, "Core-Stateless Fair Queueing: A Scalable Architecture to Approximate Fair Bandwidth Allocations in High-Speed Networks," IEEE/ACM Transactions on Networking, vol. 11, no. 1, Feb. 2003.

[36] H.-T. Ngin and C.-K. Tham, "A Control-Theoretical Approach to Achieving Fair Bandwidth Allocations in Core-Stateless Networks," in IEEE International Workshop on Quality of Service (IWQoS), Karlsruhe, Germany, 2001.

[37] C. Pelsser and S. De Cnodder, "Improvements to Core Stateless Fair Queueing," in IFIP/IEEE International Workshop on Protocols for High-Speed Networks (PfHSN), Berlin, Germany, 2002.

[38] J. Kaur and H. M. Vin, "Core-stateless Guaranteed Throughput Networks," in IEEE Infocom, 2003.

[39] S. Shenker, C. Partridge, and R. Guerin, "RFC2212: Specification of Guaranteed Quality of Service," Sep. 1997.

[40] S. Blake, D. L. Black, M. A. Carlson, E. Davies, Z. Wang, and W. Weiss, "RFC2475: An Architecture for Differentiated Services," Dec. 1998.



[41] N. Sivasubramaniam and P. Senniappan, "Enhanced Core Stateless Fair Queuing with Multiple Queue Priority Scheduler," The International Arab Journal of Information Technology, vol. 11, no. 2, pp. 159 – 167, Mar. 2014.

[42] Z. Cao, Z. Wang, and E. Zegura, "Rainbow Fair Queueing: Fair Bandwidth Sharing Without Per-Flow State," in IEEE Infocom, 2000.

[43] Z. Cao, E. Zegura, and Z. Wang, "Rainbow Fair Queueing: Theory and Applications," Computer Networks, vol. 47, pp. 367–392, 2005.

[44] S. Nadas, Z. R. Turanyi, and S. Racz, "Per Packet Value: A Practical Concept for Network Resource Sharing," in IEEE Globecom, 2016.

[45] S. Laki, G. Gombos, S. Nadas, and Z. Turanyi, "Take Your Own Share of the PIE," in ACM, IRTF & ISOC Applied Networking Research Workshop (ANRW), 2017.

[46] M. Menth and F. Hauser, "On Moving Averages, Histograms and Time-Dependent Rates for Online Measurement," in ACM/SPEC International Conference on Performance Engineering (ICPE), L'Aquila, Italy, Apr. 2017.

[47] B. D. et al., "RFC3246: An Expedited Forwarding PHB (Per-Hop-Behavior)," Mar. 2002.

[48] M. Menth, M. Schmid, H. Heiß, and T. Reim, "MEDF - A Simple Scheduling Algorithm for Two Real-Time Transport Service Classes with Application in the UTRAN," in IEEE Infocom, San Francisco, USA, Mar. 2003.

[49] P. Bosshart, D. Daly, G. Gibb, M. Izzard, N. McKeown, J. Rexford, C. Schlesinger, D. Talayco, A. Vahdat, G. Varghese, and D. Walker, "P4: Programming Protocol-Independent Packet Processors," ACM SIGCOMM Computer Communication Review, vol. 44, no. 3, 2014.

[50] B. Raghavan, K. Vishwanath, S. Ramabhadran, K. Yocum, and A. C. Snoeren, "Cloud Control with Distributed Rate Limiting," in ACM SIGCOMM, Kyoto, Japan, Aug. 2007.

[51] D. Kutscher, F. Mir, R. Winter, S. Krishnan, Y. Zhang, and C. J. Bernados, "Mobile Communication Congestion Exposure Scenario," http://tools.ietf.org/html/draft-ietf-conex-mobile, Oct. 2015.

[52] ns-3 Consortium, "ns-3," http://www.nsnam.org/, Nov. 2014.

[53] R. Jain, D. Chiu, and W. Hawe, "A Quantitative Measure of Fairness and Discrimination for Resource Allocation in Shared Computer Systems," DEC, Tech. Rep. Research Report TR-301, Sep. 1984.

[54] I. Stoica, "CSFQ: Core-Stateless Fair Queueing," https://people.eecs.berkeley.edu/~istoica/csfq/, 1999.
