QoS in the Internet: Scheduling Algorithms and Active Queue Management
CSIT560 by M. Hamdi
Post on 19-Dec-2015
Principles for QoS Guarantees
• Consider a phone application at 1 Mbps and an FTP application sharing a 1.5 Mbps link.
– Bursts of FTP traffic can congest the router and cause audio packets to be dropped.
– We want to give priority to audio over FTP.
• PRINCIPLE 1: Packet marking is needed for a router to distinguish between different classes, and a new router policy is needed to treat packets accordingly.
Principles for QoS Guarantees (more)
• Applications misbehave (e.g., the audio application sends packets at a rate higher than the 1 Mbps assumed above).
• PRINCIPLE 2: Provide protection (isolation) for one class from other classes (fairness).
QoS Metrics: What Are We Trying to Control?
[Figure: the path from A to B as perceived by a packet, drawn as alternating pipe segments of varying width (bandwidth) and length (delay)]
• Four metrics are used to describe a packet's transmission through a network: bandwidth, delay, jitter, and loss.
• Using a pipe analogy, for each packet:
– Bandwidth is the perceived width of the pipe.
– Delay is the perceived length of the pipe.
– Jitter is the perceived variation in the length of the pipe.
– Loss is the perceived leakiness of the pipe.
Internet QoS Overview
• Integrated Services
• Differentiated Services
• MPLS
• Traffic Engineering
QoS: State Information
• No state vs. soft state vs. hard state
[Figure: a spectrum from no state to hard state. Packet switched: IP and Diffserv keep no state inside the network (Diffserv keeps flow information at the edges); Intserv/RSVP keeps soft state. Circuit switched: ATM and dedicated circuits keep hard state.]
QoS Router
[Figure: QoS router datapath, replicated per output port: a classifier directs arriving packets through policers and shapers into per-flow queues; queue management decides which packets to drop, and a scheduler picks which queue to serve next]
Queuing Disciplines
[Figure: two designs. First come first served: a single FIFO. Class-based scheduling: a classifier sorts flows 1…n into class queues (Class 1 to Class 4); buffer management decides which packets to drop, and a scheduler picks the next packet to transmit.]
DiffServ
[Figure: a DiffServ domain with four classes (Premium, Gold, Silver, Bronze). Classification and conditioning are performed at the edge; core routers apply per-hop behaviors (PHB), e.g., LLQ/WRED.]
Differentiated Services (DS) Field
[Figure: IPv4 header (bit positions 0, 4, 8, 16, 19, 31): Version | HLen | TOS | Length; Identification | Flags | Fragment offset; TTL | Protocol | Header checksum; Source address; Destination address; Data]
• The DS field reuses the first 6 bits of the former Type of Service (TOS) byte to determine the PHB (the DS field occupies bits 0 to 5; bits 6 and 7 are unused).
Integrated Services: RSVP and Traffic Flow Example
[Figure: sender A reaches receiver B through routers R1, R2, R3, R4. PATH messages travel from A toward B, recording the previous hop at each router (Phop = A, R1, R2, …); RESV messages return along the exact reverse path, reserving buffer and bandwidth at each hop. Data then flows from A to B through the reserved queues.]
• The PATH message leaves the IP address of the previous-hop node in each router. It contains the Sender TSpec, Sender Template, and AdSpec.
• A RESV message containing a flowspec and a filterspec must be sent along the exact reverse path. The flowspec (TSpec/RSpec) defines the QoS and the traffic characteristics being requested.
• Admission/policy control determines whether the node has sufficient available resources to handle the request. If the request is granted, bandwidth and buffer space are allocated.
• RSVP maintains soft-state information (DstAddr, Protocol, DstPort) in the routers. Routers enforce MF (multi-field) classification and put packets in the appropriate queue; the scheduler then serves these queues.
Round Robin (RR)
• RR avoids starvation.
• All sessions have the same weight and the same packet length.
[Figure: queues A, B, and C served one packet each per round (Round #1, Round #2, …)]
RR with Variable Packet Length
[Figure: queues A, B, and C with different packet sizes; the queue with longer packets gets more service per round (Round #1, Round #2, …)]
• But the weights are equal!
WRR: Non-Integer Weights
• WA = 1.4, WB = 0.2, WC = 0.8. Normalize to integer weights: WA = 7, WB = 1, WC = 4, giving a round length of 12 packets.
Weighted Round Robin
• Serve a packet from each non-empty queue in turn.
– Can provide protection against starvation.
– Easy to implement in hardware.
• Unfair if packets are of different lengths or weights are not equal.
• What is the solution?
• Different weights, fixed packet size: serve more than one packet per visit, after normalizing to obtain integer weights.
Problems with Weighted Round Robin
• Different weights, variable-size packets: normalize the weights by the mean packet size.
– E.g., weights {0.5, 0.75, 1.0}, mean packet sizes {50, 500, 1500}.
– Normalize the weights: {0.5/50, 0.75/500, 1.0/1500} = {0.01, 0.0015, 0.000666}; normalize again to integers: {60, 9, 4}.
• With variable-size packets, the mean packet size must be known in advance.
• Fairness is only provided at time scales larger than one round of the schedule.
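The two-step normalization above can be sketched in a few lines. This is an illustrative helper, not something from the slides; it uses exact fractions so that the final integer scaling comes out cleanly.

```python
from fractions import Fraction
from math import lcm

def normalize_wrr(weights, mean_pkt_sizes):
    """Divide each weight by its flow's mean packet size, then scale
    the ratios to the smallest integers (packets served per round)."""
    ratios = [Fraction(w).limit_denominator(10**6) / s
              for w, s in zip(weights, mean_pkt_sizes)]
    scale = lcm(*(r.denominator for r in ratios))
    return [int(r * scale) for r in ratios]

# The slide's example: weights {0.5, 0.75, 1.0}, mean sizes {50, 500, 1500}
print(normalize_wrr([0.5, 0.75, 1.0], [50, 500, 1500]))  # → [60, 9, 4]
```

Note that the resulting round serves 60 + 9 + 4 = 73 packets, which is exactly why fairness only emerges at time scales longer than one round.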
Max-Min Fairness
• An allocation is fair if it satisfies max-min fairness:
– each connection gets no more than what it wants;
– the excess, if any, is equally shared.
[Figure: two examples in which half of a satisfied flow's excess capacity is transferred to flows with unsatisfied demand]
Max-Min Fairness: A Common Way to Allocate Flows
N flows share a link of rate C. Flow f wishes to send at rate W(f) and is allocated rate R(f).
1. Pick the flow, f, with the smallest requested rate.
2. If W(f) ≤ C/N, then set R(f) = W(f).
3. If W(f) > C/N, then set R(f) = C/N.
4. Set N = N − 1 and C = C − R(f).
5. If N > 0, go to step 1.
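The five steps above translate directly into code. This is an illustrative sketch (the flow names are hypothetical):

```python
def max_min_allocation(demands, capacity):
    """Allocate a link of rate `capacity` among flows following the
    algorithm above: repeatedly pick the smallest demand W(f), grant
    min(W(f), C/N), then remove the flow from the pool."""
    remaining = dict(demands)                    # flow -> requested rate W(f)
    alloc, c, n = {}, capacity, len(demands)
    while n > 0:
        f = min(remaining, key=remaining.get)    # smallest requested rate
        alloc[f] = min(remaining.pop(f), c / n)  # W(f) or fair share C/N
        c -= alloc[f]
        n -= 1
    return alloc

# The slides' example: W = {0.1, 0.5, 10, 5} sharing C = 1 yields
# R(f1) = 0.1 and ≈0.3 for each of the other three flows.
print(max_min_allocation({"f1": 0.1, "f2": 0.5, "f3": 10, "f4": 5}, 1.0))
```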
Max-Min Fairness: An Example
[Figure: four flows with demands W(f1) = 0.1, W(f2) = 0.5, W(f3) = 10, and W(f4) = 5 share a link of rate C = 1 at router R1]
Round 1: set R(f1) = 0.1
Round 2: set R(f2) = 0.9/3 = 0.3
Round 3: set R(f4) = 0.6/2 = 0.3
Round 4: set R(f3) = 0.3/1 = 0.3
Fair Queueing
1. Packets belonging to a flow are placed in a FIFO. This is called "per-flow queueing".
2. FIFOs are scheduled one bit at a time, in a round-robin fashion.
3. This is called bit-by-bit fair queueing.
[Figure: arriving packets are classified into flows 1…N and scheduled by bit-by-bit round robin]
Weighted Bit-by-Bit Fair Queueing
• Likewise, flows can be allocated different rates by servicing a different number of bits for each flow during each round.
[Figure: router R1 with capacity C serving rates R(f1) = 0.1, R(f2) = 0.3, R(f3) = 0.3, R(f4) = 0.3]
• Order of service for the four queues: … f1, f2, f2, f2, f3, f3, f3, f4, f4, f4, f1, …
• Also called "Generalized Processor Sharing (GPS)".
Understanding Bit-by-Bit WFQ
• Four queues share 4 bits/sec of bandwidth with weights 3 : 2 : 2 : 1.
• Arriving packets (sizes in bits): A1 = 4 and A2 = 2 in the weight-3 queue; B1 = 3 in the first weight-2 queue; C1 = 1, C2 = 1, C3 = 2 in the second weight-2 queue; D1 = 1 and D2 = 2 in the weight-1 queue.
[Figure: animation of the bit-by-bit service over rounds. In round 1 the scheduler serves 3 bits of A1, 2 bits of B1, bits C1 and C2, and bit D1, so D1, C2, and C1 depart at R = 1. In round 2 the remaining bits are served: B1, A2, and A1 depart at R = 2, followed by D2 and C3.]
• Departure order for packet-by-packet WFQ: sort packets by their bit-by-bit finish times.
Packetized Weighted Fair Queueing (WFQ)
Problem: We need to serve a whole packet at a time.
Solution:
1. Determine the time at which a packet, p, would complete if we served the flows bit by bit. Call this the packet's finishing time, Fp.
2. Serve packets in order of increasing finishing time.
Also called "Packetized Generalized Processor Sharing (PGPS)".
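A simplified sketch of the finish-time computation, for illustration only: assuming all packets are present at time 0 and every flow stays backlogged, flow i's k-th packet gets the finish number F_k = F_{k-1} + L_k / w, and PGPS serves packets in increasing finish order. (The full GPS virtual-time machinery handles late arrivals and emptying queues, so this approximation can order ties slightly differently.)

```python
def pgps_order(flows):
    """flows: {flow_id: (weight, [packet lengths in arrival order])}.
    Returns packet labels in PGPS service order, using the simplified
    per-flow finish number F_k = F_{k-1} + L_k / w."""
    finish = []
    for fid, (w, pkts) in flows.items():
        f = 0.0
        for i, length in enumerate(pkts):
            f += length / w                  # bit-by-bit finishing time
            finish.append((f, fid, i))
    finish.sort()                            # serve in increasing F_p
    return [f"{fid}{i + 1}" for _, fid, i in finish]

# Weights 3:2:2:1 and the packet sizes from the worked example above
print(pgps_order({"A": (3, [4, 2]), "B": (2, [3]),
                  "C": (2, [1, 1, 2]), "D": (1, [1, 2])}))
```

The small C packets come out first and the second D packet last, matching the intuition from the bit-by-bit picture.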
WFQ Is Complex
• There may be hundreds to millions of flows; the linecard needs to maintain a FIFO queue per flow.
• The finishing time must be calculated for each arriving packet.
• Packets must be sorted by their departure time.
• Most of the effort in QoS scheduling algorithms goes into practical algorithms that approximate WFQ.
[Figure: egress linecard: arriving packets are placed into per-flow queues 1…N; the scheduler calculates Fp for each packet and the packet with the smallest Fp departs]
When Can We Guarantee Delays?
• Theorem: If flows are leaky-bucket constrained and all nodes employ GPS (WFQ), then the network can guarantee worst-case delay bounds to sessions.
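The leaky-bucket (token-bucket) constraint the theorem relies on can be checked in a few lines. This is a hedged sketch; the rate and burst values below are illustrative, not taken from the slides.

```python
def token_bucket_conformant(arrivals, rate, burst):
    """Token bucket (r = rate tokens/sec, b = burst): tokens accumulate
    at `rate` up to `burst`; a packet of `size` units is conformant if
    enough tokens are available when it arrives.
    arrivals: list of (time_sec, size) pairs in time order."""
    tokens, last_t, verdicts = burst, 0.0, []
    for t, size in arrivals:
        tokens = min(burst, tokens + (t - last_t) * rate)
        last_t = t
        if size <= tokens:
            tokens -= size
            verdicts.append(True)       # conformant: consume tokens
        else:
            verdicts.append(False)      # non-conformant (shape or drop)
    return verdicts

# Illustrative: 1000 B/s rate, 1500 B burst, two back-to-back 1000 B packets
print(token_bucket_conformant([(0.0, 1000), (0.0, 1000), (2.0, 1000)],
                              rate=1000, burst=1500))  # → [True, False, True]
```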
Queuing Disciplines
• Each router must implement some queuing discipline.
• Queuing allocates both bandwidth and buffer space:
– Bandwidth: which packet to serve (transmit) next; this is scheduling.
– Buffer space: which packet to drop next (when required); this is buffer management.
• Queuing affects the delay of a packet (QoS).
Queuing Disciplines
[Figure: traffic sources feed traffic classes A, B, and C; buffer management decides which packets to drop, and scheduling decides which to transmit]
Active Queue Management
[Figure: animation: TCP sources send over an inbound link through a router queue to an outbound link and sinks, with ACKs returning. With a plain drop-tail queue, packets are dropped only when the queue overflows. With AQM, the router detects congestion early and sends congestion notification (by dropping or marking packets) before the queue fills.]
Advantages:
• Reduce packet losses (due to queue overflow)
• Reduce queuing delay
[Slide repeated: the QoS router datapath shown earlier; the focus now is the queue-management block]
Packet Drop Dimensions
• Aggregation: from per-connection state to a single class (with class-based queuing in between)
• Drop position: head, tail, or a random location
• When to drop: early drop vs. overflow drop
Typical Internet Queuing: FIFO + Drop-Tail
• Simplest choice; used widely in the Internet.
• FIFO (first-in-first-out): implies a single class of traffic.
• Drop-tail: arriving packets are dropped when the queue is full, regardless of flow or importance.
• Important distinction:
– FIFO: scheduling discipline
– Drop-tail: drop policy (buffer management)
FIFO + Drop-Tail Problems
• FIFO issues (irrespective of the aggregation level):
– No isolation between flows: the full burden falls on end-to-end control (e.g., TCP).
– No policing: send more packets, get more service.
• Drop-tail issues:
– Routers are forced to have large queues to maintain high utilization.
– Larger buffers mean larger steady-state queues and delays.
– Synchronization: end hosts react to the same events because packets tend to be lost in bursts.
– Lock-out: a side effect of burstiness and synchronization is that a few flows can monopolize queue space.
Synchronization Problem
• Caused by congestion avoidance in TCP.
[Figure: cwnd vs. time. Slow start doubles cwnd every RTT (1, 2, 4, …) up to W*; congestion avoidance then grows cwnd linearly (W, W + 1, …) until loss, when cwnd is halved to W*/2.]
Synchronization Problem
[Figure: total queue size vs. time, oscillating]
• All TCP connections reduce their transmission rate when the queue crosses the maximum queue size.
• The TCP connections then increase their transmission rates again using slow start and congestion avoidance.
• The TCP connections reduce their rates again, and so on; this makes the network traffic fluctuate.
Global Synchronization Problem
• Can result in very low throughput during periods of congestion.
[Figure: the queue repeatedly filling to the max queue length and draining]
Global Synchronization Problem
• TCP congestion-control synchronization leads to bandwidth under-utilization.
• Persistently full queues lead to large queueing delays.
• Cannot provide (weighted) fairness to traffic flows; inherently intended for responsive flows.
[Figure: the rates of flow 1 and flow 2 oscillate in phase, so the aggregate load swings around the bottleneck rate]
Lock-Out Problem
• Lock-out: in some situations tail drop allows a single connection or a few (misbehaving, e.g., UDP) flows to monopolize queue space, preventing other connections from getting room in the queue. This "lock-out" phenomenon is often the result of synchronization.
[Figure: the queue filled to the max queue length by a single flow's packets]
Bias Against Bursty Traffic
• During dropping, bursty traffic is dropped in bunches, which is not fair to bursty connections.
[Figure: a burst arriving at a queue that is already near the max queue length]
Active Queue Management: Goals
• Solve the lock-out and full-queue problems:
– No lock-out behavior
– No global synchronization
– No bias against bursty flows
• Provide better QoS at a router:
– Low steady-state delay
– Lower packet dropping
RED (Random Early Detection)
• FIFO scheduling
• Buffer management:
– Probabilistically discard packets.
– The probability is computed as a function of the average queue length.
[Figure: discard probability vs. average queue length: 0 below min_th, rising between min_th and max_th, and 1 from max_th up to the queue length limit]
RED Operation
[Figure: the drop-probability curve: P(drop) is 0 until the average queue length reaches the min threshold, rises linearly to MaxP at the max threshold, and jumps to 1.0 beyond it]
RED (Random Early Detection): Two Threshold Values
• FIFO scheduling; decisions make use of the average queue length.
• Case 1: average queue length < min threshold value: admit the new packet.
RED (Cont'd)
• Case 2: average queue length between the min and max threshold values: drop the new packet with the computed probability p, or admit it with probability 1 − p.
Random Early Detection Algorithm
• avg = (1 − wq) · avg + wq · q
• p = max_p · (avg − min_th) / (max_th − min_th)
for each packet arrival:
    calculate the average queue size avg
    if avg < min_th:
        do nothing (enqueue)
    else if min_th ≤ avg < max_th:
        calculate the drop probability p
        drop the arriving packet with probability p
    else (avg ≥ max_th):
        drop the arriving packet
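The pseudocode above can be turned into a compact sketch. The parameter values below are illustrative defaults, not mandated by the slides.

```python
import random

class RED:
    """RED drop decision following the slide's pseudocode."""
    def __init__(self, min_th=100, max_th=200, wq=0.002, max_p=0.1):
        self.min_th, self.max_th = min_th, max_th
        self.wq, self.max_p = wq, max_p
        self.avg = 0.0

    def drop(self, queue_len):
        """Update avg on arrival; return True if the packet is dropped."""
        # avg = (1 - wq) * avg + wq * q
        self.avg = (1 - self.wq) * self.avg + self.wq * queue_len
        if self.avg < self.min_th:
            return False                              # no drop
        if self.avg >= self.max_th:
            return True                               # forced drop
        p = self.max_p * (self.avg - self.min_th) \
            / (self.max_th - self.min_th)             # linear ramp
        return random.random() < p
```

Because wq is small, a short burst barely moves avg, so bursts are admitted; only a persistently long queue pushes avg into the drop region.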
Random Early Detection (RED) Packet Drop
[Figure: average queue length vs. time, plotted against the min threshold, max threshold, and max queue length: no drop below the min threshold, probabilistic early drop between the thresholds, forced drop above the max threshold; the drop probability is shown alongside]
Active Queue Management: Random Early Detection (RED)
• The weighted average accommodates bursty traffic.
[Figure: the same drop regions over time: no drops below the min threshold, probabilistic drops between the thresholds, forced drops above the max threshold, bounded by the max queue size]
• Probabilistic drops:
– avoid consecutive drops;
– drops are proportional to bandwidth utilization (the drop rate is equal for all flows).
RED Vulnerable to Misbehaving Flows
[Figure: TCP throughput (KBytes/sec) vs. time (seconds) under FIFO and RED; throughput collapses during a UDP blast in both cases]
Effectiveness of RED: Lock-Out and Global Synchronization
• Packets are randomly dropped.
• Each flow has the same probability of being discarded.
Effectiveness of RED: Full Queues and Bias Against Bursty Traffic
• Drop packets probabilistically in anticipation of congestion, not when the queue is full.
• Use the average queue length qavg to decide the packet-dropping probability: this allows instantaneous bursts.
What QoS Does RED Provide?
• Lower buffer delay: good interactive service (qavg is controlled to be small).
• Given responsive flows, packet dropping is reduced: early congestion indication allows traffic to throttle back before congestion.
• RED provides small delay, small packet loss, and high throughput (when flows are responsive).
Weighted RED (WRED)
• WRED provides separate thresholds and weights for different IP precedences, allowing us to provide different quality of service to different traffic
• Lower priority class traffic may be dropped more frequently than higher priority traffic during periods of congestion
WRED (Cont'd)
[Figure: random dropping applied separately to high-, medium-, and low-priority traffic. Probability of packet discard vs. average queue depth, with a standard minimum threshold, a premium minimum threshold, and a shared maximum threshold. Two classes are shown; any number of classes can be defined.]
• Congestion avoidance with WRED adds per-class queue thresholds for differential treatment.
Vulnerability to Misbehaving Flows
• TCP performance on a 10 Mbps link under RED in the face of a UDP blast.
Vulnerability to Misbehaving Flows
• Consider the following example network:
[Figure: dumbbell topology: TCP sources S(1)…S(m) and UDP sources S(m+1)…S(m+n) connect to router R1 over 100 Mbps links; R1 connects to R2 over a 10 Mbps bottleneck; matching TCP and UDP sinks sit beyond R2]
Vulnerability to Misbehaving Flows: Throughput Analysis
[Figure: throughput (Mbps) per flow for flows 0 to 20, RED vs. the ideal fair share; the misbehaving flow takes far more than its share]
Vulnerability to Misbehaving Flows
• Queue size versus time.
[Figure: queue size (number of packets) vs. time (seconds), showing the average and current queue size under RED and under CHOKe; with CHOKe, delay is bounded and global synchronization is solved]
Unfairness of RED
[Figure: throughput (Kbps) per flow for 32 TCP flows and 1 UDP flow under RED vs. the ideal fair share; the unresponsive (UDP) flow occupies over 95% of the bandwidth]
Scheduling & Queue Management
• What do routers want to do?
– Isolate unresponsive flows (e.g., UDP).
– Provide quality of service to all users.
• Two ways to do it:
– Scheduling algorithms, e.g., WFQ, WRR.
– Queue-management algorithms, e.g., RED, FRED, SRED.
The Setup and Problems
• A congested network with many users, whose QoS requirements differ.
• Problem: allocate bandwidth fairly.
Approach 1: Network-Centric
• Network node: Weighted Fair Queueing (WFQ)
• User traffic: any type
• Problem: complex implementation (lots of work per flow)
Approach 2: User-Centric
• Network node: simple FIFO buffer; active queue management (AQM): RED
• User traffic: congestion-aware (e.g., TCP)
• Problem: requires user cooperation
Current Trend
• Network node: simple FIFO buffer, with AQM schemes enhanced to provide fairness by preferentially dropping packets
• User traffic: any type
Packet Dropping Schemes
• Size-based schemes: the drop decision is based on the size of the FIFO queue (e.g., RED).
• Content-based schemes: the drop decision is based on the current content of the FIFO queue (e.g., CHOKe).
• History-based schemes: keep a history of packet arrivals/drops to guide the drop decision (e.g., SRED, RED with penalty box, AFD).
Random Sampling from the Queue
• A randomly chosen packet is more likely to belong to an unresponsive flow.
• Unresponsive flows cannot fool the system.
Comparison of Flow IDs
• Compare the sampled packet's flow ID with that of the incoming packet.
– More accurate.
– Reduces the chance of dropping packets from TCP-friendly flows.
Dropping Mechanism
• Drop packets (both the incoming packet and the matching sample).
– More arrivals, more drops.
– Gives users a disincentive to send more.
CHOKe (Cont'd)
• Case 1: average queue length < min threshold value: admit the new packet.
CHOKe (Cont'd)
• Case 2: average queue length between the min and max threshold values:
– a packet is randomly chosen from the queue and compared with the newly arriving packet;
– if they are from different flows, the same logic as in RED applies;
– if they are from the same flow, both packets are dropped.
CHOKe (Cont'd)
• Case 3: average queue length > max threshold value:
– a random packet is chosen for comparison;
– if they are from different flows, the new packet is dropped;
– if they are from the same flow, both packets are dropped.
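The three cases can be combined into one arrival routine. This is an illustrative sketch only: the queue is modeled as a list of flow IDs, and the usual RED probabilistic decision is passed in as a function.

```python
import random

def choke_arrival(queue, new_flow, avg_len, min_th, max_th, red_drop):
    """One CHOKe arrival decision. Returns True iff the new packet is
    admitted; `queue` (a list of flow IDs) is updated in place."""
    if avg_len < min_th:                      # Case 1: admit
        queue.append(new_flow)
        return True
    if queue:                                 # draw a random victim
        i = random.randrange(len(queue))
        if queue[i] == new_flow:              # same flow: drop both
            queue.pop(i)
            return False
    if avg_len >= max_th:                     # Case 3: drop the arrival
        return False
    if red_drop(avg_len):                     # Case 2: RED logic applies
        return False
    queue.append(new_flow)
    return True
```

With a queue dominated by one unresponsive flow, a same-flow match is likely, so that flow loses two packets at a time: exactly the disincentive described above.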
Network Setup Parameters
• 32 TCP flows, 1 UDP flow
• Maximum window size of every TCP = 300
• All links have a propagation delay of 1 ms
• FIFO buffer size = 300 packets
• All packet sizes = 1 KByte
• RED: (minth, maxth) = (100, 200) packets
How Many Samples to Take?
• Use a different number of samples for different values of the average queue length Qlenavg:
– fewer samples when Qlenavg is close to minth;
– more samples when Qlenavg is close to maxth.
Two Problems with CHOKe
• Problem I: unfairness among UDP flows of different rates.
• Problem II: difficulty in automatically choosing how many packets to drop.
SAC (Self-Adjustable CHOKe)
• Tries to solve the two problems mentioned above.
SAC
• Problem 1: unfairness among UDP flows of different rates. For example, when k = 1, UDP flow 31 (6 Mbps) has 1/3 the throughput of UDP flow 32 (1 Mbps), and when k = 10, the throughput of UDP flow 31 is almost 0.
[Figure: throughput (Kbps) per flow for 30 TCP flows and 2 misbehaving UDP flows, under CHOKe with k = 1, CHOKe with k = 10, and the ideal fair share]
SAC
• Problem 2: difficulty in automatically choosing how many packets to drop. When k = 4, the UDP flows occupy most of the bandwidth; when k = 10, sharing is relatively fair; and when k = 20, the TCP flows get most of the bandwidth.
[Figure: throughput (Kbps) per flow for 30 TCP flows and 4 misbehaving UDP flows, under CHOKe with k = 4, k = 10, and k = 20]
SAC
• Solutions:
1. Search from the tail of the queue for a packet with the same flow ID and drop that packet instead of dropping at random, because the higher a flow's rate, the more likely its packets are to gather at the rear of the queue. The queue occupancy is then more evenly distributed among the flows.
2. Automate the choice of k according to the traffic status (the number of active flows and the number of UDP flows).
SAC
• When an incoming UDP packet is compared with a randomly selected packet, if they are of the same flow, P is updated as:
P ← (1 − wp) P + wp
• If they are of different flows, P is updated as:
P ← (1 − wp) P
• If P is small, there are many competing flows, and we should increase the value of k.
• For each incoming packet, if it is a UDP packet, R is updated as:
R ← (1 − wr) R + wr
• If it is a TCP packet, R is updated as:
R ← (1 − wr) R
• If R is large, there is a large amount of UDP traffic, and we should increase k to drop more UDP packets.
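The two exponentially weighted estimators can be sketched as follows; the weights wp and wr are illustrative values, since the slides do not specify them.

```python
def update_P(P, same_flow, wp=0.01):
    """Match-rate estimator: moves toward 1 on same-flow matches,
    decays toward 0 otherwise."""
    return (1 - wp) * P + (wp if same_flow else 0.0)

def update_R(R, is_udp, wr=0.01):
    """UDP-fraction estimator: moves toward 1 under heavy UDP traffic."""
    return (1 - wr) * R + (wr if is_udp else 0.0)

# A long run of UDP arrivals drives R toward 1, signaling that k
# should be increased to drop more UDP packets.
R = 0.0
for _ in range(1000):
    R = update_R(R, is_udp=True)
print(R > 0.99)  # → True
```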
SAC Simulation
• Throughput per flow (30 TCP flows and 2 UDP flows of different rates).
SAC Simulation
• Throughput per flow (30 TCP flows and 4 UDP flows of the same rate).
SAC Simulation
• Throughput per flow (20 TCP flows and 4 UDP flows of different rates).
Congestion Management and Avoidance: Goal
• Provide fair bandwidth allocation similar to WFQ.
• Be simple to implement, like RED.
[Figure: a fairness-vs-simplicity plane: WFQ is fair but complex, RED is simple but unfair; the ideal scheme combines both]
AQM Based on Capture-Recapture
• Objective: achieve fairness close to that of max-min fairness:
1. If W(f) < C/N, then set R(f) = W(f).
2. If W(f) > C/N, then set R(f) = C/N.
• Formulation:
– Ri: the sending rate of flow i
– Di: the drop probability of flow i
– Ideally, we want Ri (1 − Di) = Rfair (the equal share), i.e., Di = (1 − Rfair/Ri)+ (that is, drop the excess).
AQM Based on Capture-Recapture
[Figure: incoming packets enter an AQM block with three components (estimation of the sending rate, estimation of the fair share, and the adjustment mechanism), producing a fair allocation of bandwidth]
• The key question is: how do we estimate the sending rate (Ri) and the fair share (Rfair)?
Capture-Recapture Models
• CR models were originally developed for estimating demographic parameters of animal populations (e.g., population size, number of species).
– An extremely useful method where inspecting the whole state space is infeasible or very costly.
– Numerous models have been developed for various situations.
• CR models are used in many diverse fields, ranging from software inspection to epidemiology.
• The key idea: animals are captured randomly, marked, released, and then recaptured randomly from the population.
Time is then allowed for the marked individuals to mix with the unmarked individuals.
Capture-Recapture Model
• Unknown number of fish in a lake.
• Catch a sample and mark them; let them loose.
• Recapture a sample and look for marks; estimate the population size.
• n1 = number in the first sample = 15; n2 = number in the second sample = 10; n12 = number in both samples = 5; N = total population size.
• Assume n1/N = n12/n2, so 15/N = 5/10 and N = (10 × 15) / 5 = 30.
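The arithmetic above is the classic Lincoln-Petersen estimate, one line of code:

```python
def lincoln_petersen(n1, n2, n12):
    """n1 marked, n2 recaptured, n12 marked among the recaptured:
    n1/N = n12/n2  =>  N = n1 * n2 / n12."""
    return n1 * n2 / n12

print(lincoln_petersen(15, 10, 5))  # → 30.0
```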
Capture-Recapture Models
• Simple model: estimate a homogeneous population of N animals:
– n1 animals are captured (and marked),
– n2 animals are later recaptured, and
– m2 of these turn out to be marked.
• Under this simple capture-recapture model (M0): m2/n2 = n1/N, i.e., N = n1 n2 / m2.
Capture-Recapture Models
• The capture probability refers to the chance that an individual animal gets caught.
• M0 implies that the capture probability is the same for all animals.
– '0' refers to a constant capture probability.
• Under the Mh model, capture probabilities vary by animal, for reasons such as differences in species, sex, or age.
– 'h' refers to heterogeneity.
Capture-Recapture Models
• Estimation of N under the Mh model is based on the capture-frequency data f1, f2, …, ft (t captures):
– f1 is the number of animals caught exactly once,
– f2 is the number of animals caught exactly twice, etc.
• The jackknife estimator of N is computed as a linear combination of these capture frequencies:
N = a1 f1 + a2 f2 + … + at ft
where the coefficients ai are a function of t.
AQM Based on Capture-Recapture
• The key question is: how do we estimate the sending rate (Ri) and the fair share (Rfair)?
• We use an arrival buffer to store recently arrived packet headers (we can control how large the buffer is, and it represents the nature of the flows better than the sending buffer):
1. We estimate Ri using the M0 capture-recapture model.
2. We estimate Rfair using the Mh capture-recapture model (by estimating the number of active flows).
AQM Based on Capture-Recapture
• Ri is estimated for every arriving packet (accuracy can be increased by taking multiple captures, or decreased by capturing packets only periodically).
• If the arrival buffer is of size B and the number of captured packets of flow i is Ci, then Ri = R Ci/B, where R is the aggregate arrival rate.
• Rfair may not change every time slot, so the capturing and the calculation of the number of active flows can be done independently of each packet arrival: Rfair = R / (number of active flows).
• The capture-recapture model gives a lot of flexibility in trading accuracy against complexity, and the same captures can be used for calculating both Ri and Rfair.
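A toy version of the resulting drop decision. This is an illustrative sketch: active flows are counted as distinct IDs in the sample (standing in for the Mh jackknife estimator), and the arrival buffer is assumed to be a fair sample of the traffic mix.

```python
from collections import Counter

def cap_drop_prob(arrival_buffer, flow_id, agg_rate):
    """Estimate Ri = R*Ci/B and Rfair = R/(active flows) from the
    recent-arrivals buffer, then return Di = max(0, 1 - Rfair/Ri)."""
    B = len(arrival_buffer)
    counts = Counter(arrival_buffer)         # flow ID -> captures Ci
    Ci = counts[flow_id]
    if Ci == 0:
        return 0.0                           # flow not captured: no drop
    Ri = agg_rate * Ci / B                   # estimated sending rate
    Rfair = agg_rate / len(counts)           # estimated fair share
    return max(0.0, 1.0 - Rfair / Ri)

# One flow filling 8 of 10 buffer slots among 3 active flows is
# dropped aggressively; the small flows are never dropped.
buf = ["udp"] * 8 + ["tcp1", "tcp2"]
print(cap_drop_prob(buf, "udp", agg_rate=10.0))   # ≈ 0.583
print(cap_drop_prob(buf, "tcp1", agg_rate=10.0))  # 0.0
```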
AQM Based on Capture-Recapture
[Figure: incoming packets enter the AQM (capture-recapture) block: Ri is estimated by the M0 model, Rfair by the Mh CR model, and the drop probability is Di = (1 − Rfair/Ri)+, producing a fair allocation of bandwidth]
Performance Evaluation
• This is a classical setup that researchers use to evaluate AQM schemes (many parameters can be varied: responsive vs. non-responsive connections, the nature of responsiveness, link delays, etc.)
[Figure: dumbbell topology: TCP sources S(1)…S(m) and UDP sources S(m+1)…S(m+n) connect to R1 over 100 Mbps links; a 10 Mbps bottleneck connects R1 to R2; matching sinks sit beyond R2]
Performance Evaluation
• Estimation of the number of flows.
[Figure: estimated number of flows vs. time for a varying number of flows, comparing SRED, CAP, and the ideal]
Performance Evaluation
• Bandwidth allocation: comparison between CAP and RED.
[Figure: throughput (Mbits/s) per flow for flows 0 to 25: Ideal, RED, and CAP]
Performance Evaluation
• Bandwidth allocation: comparison between CAP and SRED.
[Figure: throughput (Mbits/s) per flow for flows 0 to 25: Ideal, SRED, and CAP]
Performance Evaluation
• Bandwidth allocation: comparison between CAP and RED-PD.
[Figure: throughput (Mbits/s) per flow for flows 0 to 25: Ideal, RED, RED-PD, and CAP]
Performance Evaluation
• Bandwidth allocation: comparison between CAP and SFB.
[Figure: throughput (Mbits/s) per flow for flows 0 to 25: Ideal, SFB, and CAP]
Normalized Measure of Performance
• A single comparison of fairness using a normalized value ||BW||, where bi is the ideal fair share and bj is the bandwidth received by each flow.
• Thus, ||BW|| = 0 for ideal fair sharing.
Normalized Measure of Performance
[Figure: ||BW|| vs. the number of flows (25 to 70) for Ideal, RED, SRED, SFB, RED-PD, and CAP]