



Scheduling: Contention, Fairness and Throughput

University of California at Berkeley

Motivations for a Multi-Channel Wireless MAC

[Figure: power density vs. frequency at t = 0, 1, 2 for Senders 1-4, contrasting today's single-channel operation with simultaneous sending on Channels 1-3.]

Today: each wireless network uses only 1 channel, and spectrum is wasted.

Why not: simultaneous sending on different channels?

Simple Rendezvous Scheme

1. Node i generates a pseudo-random sequence X(Si, ti), i.i.d. ~ uniform({1, 2, ..., C}). Si is the seed; ti is a local version of the real time.
2. Each packet sent by i contains i, Si and ti.
3. Idle nodes occasionally broadcast empty packets.
4. Eventually each node hears every neighbor once.
5. Node i listens on the default channel X(Si, ti) at time t.
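A minimal sketch of the hop function, assuming a hash-based pseudo-random mapping (the function name, SHA-256 choice and channel count are illustrative, not from the poster):

```python
import hashlib

NUM_CHANNELS = 8  # C, the number of channels; value assumed for illustration

def default_channel(seed: int, t: int, num_channels: int = NUM_CHANNELS) -> int:
    """Pseudo-random hop X(S_i, t): the channel node i listens on in slot t.

    Any deterministic pseudo-random function of (seed, t) with roughly uniform
    output works; SHA-256 is used here only so that any node knowing (seed, t)
    derives the same channel.
    """
    digest = hashlib.sha256(f"{seed}:{t}".encode()).digest()
    return 1 + int.from_bytes(digest[:4], "big") % num_channels

# A node that has overheard (i, S_i, t_i) from a neighbor can predict that
# neighbor's default channel in any future slot and rendezvous with it there.
print([default_channel(seed=42, t=t) for t in range(1, 10)])
```

Because the hop is a deterministic function of the seed and the slot, overhearing a neighbor's seed and clock once is enough to track its schedule afterwards.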

[Figure: default hopping schedules of node1, node2 and node3 over channels Ch1 and Ch2 across time slots t = 1, ..., 9.]

Rendezvous: Sending Illustrated

[Figure: the same two-channel hopping schedules annotated with the sending sequence:]

1. Waiting to send.
2. RTS/CTS/Data.
3. Hopping stopped during data transfer.
4. Hopping returns to the original hopping schedule.

Some simple improvements:

1. A sender checks if any receivers will be on its default channel during the coming time slot.
2. If not, the sender chooses a channel c' with a receiver in it uniformly at random.
3. The sender transmits an RTS on channel c' with probability 1/N(c', t), where N(c', t) is the estimated number of nodes on channel c' at time t.
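A rough sketch of this sender-side rule, reusing default_channel() and NUM_CHANNELS from the sketch above; the neighbor table and the use of known neighbors as the estimate N(c', t) are assumptions for illustration:

```python
import random

def pick_transmit_channel(my_seed, t, neighbor_hops, num_channels=NUM_CHANNELS):
    """Sender-side channel choice (improvements 1-3).

    `neighbor_hops` maps neighbor id -> (seed, clock offset) learned from
    overheard packets. Returns (channel, probability of sending an RTS there).
    """
    # Predict which channel each known neighbor will listen on in slot t.
    occupancy = {}
    for nid, (seed, offset) in neighbor_hops.items():
        ch = default_channel(seed, t + offset, num_channels)
        occupancy.setdefault(ch, []).append(nid)

    my_channel = default_channel(my_seed, t, num_channels)
    if my_channel in occupancy or not occupancy:
        return my_channel, 1.0                      # 1. stay on the default channel

    c_prime = random.choice(sorted(occupancy))      # 2. occupied channel, uniform
    n_estimate = len(occupancy[c_prime])            # crude estimate of N(c', t)
    return c_prime, 1.0 / n_estimate                # 3. RTS with probability 1/N(c', t)
```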

Limitations of Local Scheduling

Optimal schedulers (Maximum Weight Matching) require global communication – not practical for MAC.

What is the throughput of local scheduling?
- No explicit exchange of scheduling information.
- No scheduling state other than the backlog.

Idealization: iterated Longest Queue First (iLQF): nodes with longer backlog transmit first.

e.g., a MAC with backlog-dependent backoff time.

Results:
- Partial Pooling: a graph condition for optimality of iLQF.
- Instability in the case that Partial Pooling (P.P.) is not satisfied.

Contention model

Incidence matrix A = (A_jk). Set of matches S[K] = {m ∈ {0,1}^K : Am ≤ 1}. Maximal matches M[K] ⊆ S[K]. Optimal throughput: λ ∈ Co(M[K]).

[Figure: classes/activities k ∈ K with arrival rates λ1, λ2, λ3 contending for resources j ∈ J, and the corresponding conflict graph on classes 1, 2, 3.]

Captures dynamic contention for shared resources, e.g., wireless network, packet switch, distributed computation.
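These definitions can be made concrete with a small brute-force sketch (illustrative only; the example incidence matrix is an assumption encoding a 3-class path in which class 2 conflicts with each of the other two):

```python
from itertools import product

def matches(A):
    """S[K] = {m in {0,1}^K : A m <= 1}, for an incidence matrix A (list of rows)."""
    K = len(A[0])
    return [m for m in product((0, 1), repeat=K)
            if all(sum(a * x for a, x in zip(row, m)) <= 1 for row in A)]

def maximal_matches(A):
    """M[K]: matches that cannot be extended by activating one more class."""
    S = set(matches(A))
    M = []
    for m in S:
        extendable = any(
            m[k] == 0 and tuple(1 if j == k else m[j] for j in range(len(m))) in S
            for k in range(len(m))
        )
        if not extendable:
            M.append(m)
    return M

# Three classes sharing two resources (classes 1-2 conflict, classes 2-3 conflict):
A = [[1, 1, 0],
     [0, 1, 1]]
print(maximal_matches(A))   # (1, 0, 1) and (0, 1, 0)
```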

Optimality under Partial Pooling

Def: P.P. holds for A ⊆ K if ∃ nonempty B ⊆ A s.t. ∀ m ∈ M[A], Σ_{k ∈ B} m_k = Const(A, B).

Def: P.P. holds if P.P. holds for all A ⊆ K.

P.P. class strictly includes tree conflict graphs.

If the system conflict graph satisfies P.P., then iLQF is throughput optimal. Approach: fluid limits; the longest queue size is a Lyapunov function.
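A brute-force sketch of the P.P. check (exponential in the number of classes, purely illustrative), reusing matches()/maximal_matches() and the example matrix A from the sketch above:

```python
from itertools import combinations

def nonempty_subsets(items):
    items = list(items)
    for r in range(1, len(items) + 1):
        yield from combinations(items, r)

def partial_pooling_holds(A_full):
    """P.P.: for every nonempty subset A of classes there is a nonempty B ⊆ A
    such that sum_{k in B} m_k takes the same value for every maximal match m
    of the subsystem restricted to A."""
    K = len(A_full[0])
    for A_sub in nonempty_subsets(range(K)):
        # Restrict the incidence matrix to the classes in A_sub.
        rows = [[row[k] for k in A_sub] for row in A_full]
        rows = [r for r in rows if any(r)]            # drop resources A_sub never uses
        M_A = maximal_matches(rows) if rows else [(1,) * len(A_sub)]
        pooled = any(
            len({sum(m[i] for i in B) for m in M_A}) == 1
            for B in nonempty_subsets(range(len(A_sub)))
        )
        if not pooled:
            return False
    return True

print(partial_pooling_holds(A))   # True for the 3-class path example (a tree)
```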

Instability when P.P. fails

Assume load close to capacity (1/2).
- "Efficient" matches: {1,3,5,7}, {2,4,6,8}.
- "Inefficient" matches: {1,4,6}, ..., all size-3 matches.

[Figure: the 8-cycle conflict graph on classes 1-8.]

"Meta-stability" in the 8-cycle: most states activate efficient matches, but the inefficient states are attracting.
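For concreteness, the maximal_matches() sketch above applied to an 8-cycle conflict graph (the incidence matrix is an assumption: one shared resource per edge of the cycle; classes are indexed 0-7 here rather than the poster's 1-8):

```python
# 8-cycle conflict graph: classes 0..7, one resource per edge (e, e+1 mod 8).
A8 = [[1 if k in (e, (e + 1) % 8) else 0 for k in range(8)] for e in range(8)]

by_size = {}
for m in maximal_matches(A8):
    by_size.setdefault(sum(m), []).append(m)

print({size: len(ms) for size, ms in by_size.items()})
# Two size-4 "efficient" matches (the poster's {1,3,5,7} and {2,4,6,8})
# versus the size-3 "inefficient" maximal matches.
```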

Poster sections: Multi-Channel Wireless MAC, Impatient Backoff Algorithm, Limitations of Local Scheduling.

Antonis Dimakis [email protected]

Rajarshi Gupta [email protected]

Wilson So [email protected]

Time Slot Utilization Breakdown and Per Node Queue Length vs. Time

[Figures: for Exp. 1 (high 80% load, long 5-slot packets) and Exp. 2 (high 80% load, short 1-slot packets): (a) time-slot utilization breakdown over the categories 2+: guaranteed success; 1: contention success; Idle: no suitable receivers on same channel; Quiet: everyone backs off; Receiver Absent: receiver stuck on another channel; Collision: more than one node sends; and (b) average per-node queue length [mini-packets, x10^4] vs. time [slots] (10,000 slots = 5 sec).]

SmartNets Research Group

EECS, UC Berkeley, Fall 2004

Key Idea

In exponential backoff (e.g., 802.11), upon collision, nodes back off and become less aggressive.

Problem in networks spanning multiple interference domains:
- Nodes in the middle face more collisions.
- They back off more and get a lesser share of bandwidth.
- This is unfair towards nodes in the middle.

Idea: give higher priority to nodes facing more contention.
Key: upon collision, nodes decrease their backoff delay.

Characteristics:
- Achieves a stable system.
- Maintains throughput in random networks.
- Significant improvement in fairness.


Mechanism

Backoff Contention Phase:
- Each node has a mean backoff b.
- It picks a backoff delay B from an exponential distribution with mean b.
- It sends out a Slot Capture Message after B mini-slots.
- If a node carrier-senses another message sooner, it keeps quiet.

Packet Transmission Phase:
- Starts after completion of the Backoff Contention Phase.
- Nodes with successful Slot Capture Messages transmit.
- Constant packet length; confirmation using an ack.

Update rule (m > 1):
- If collision or quiet, b := b/m.
- If successful transmission, b := b*m.

Markov analysis indicates stability and fairness. A per-slot sketch of this update rule follows the figure below.

[Figure: a 5-node interference chain (nodes 1-5); timing diagram of the Backoff Contention Phase (backoff, Slot Capture Message) followed by the Packet Transmission Phase, showing node 1's and node 5's packet transmissions and their acks.]
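A minimal per-slot sketch of the mechanism, reduced to its update rule. Class and function names are illustrative; carrier sensing is modeled as "the earliest Slot Capture Message wins", and treating non-colliding losers as "quiet" is an assumption of the sketch, not spelled out in the poster:

```python
import random

M = 1.2  # the MIMD factor m > 1 (value from the poster's simulation parameters)

class ImpatientNode:
    def __init__(self, mean_backoff=1.0):
        self.b = mean_backoff                    # mean backoff b

    def draw_backoff(self):
        """Backoff Contention Phase: delay B ~ Exponential(mean b), in mini-slots."""
        return random.expovariate(1.0 / self.b)

    def update(self, outcome):
        """MIMD rule: collisions and quiet slots make a node MORE aggressive."""
        if outcome in ("collision", "quiet"):
            self.b /= M                          # b := b / m
        elif outcome == "success":
            self.b *= M                          # b := b * m

def contend(nodes):
    """One slot among mutually interfering nodes: the earliest Slot Capture
    Message wins; two messages in the same mini-slot collide."""
    if not nodes:
        return None
    delays = [(int(n.draw_backoff()), n) for n in nodes]
    first = min(d for d, _ in delays)
    winners = [n for d, n in delays if d == first]
    if len(winners) == 1:
        for n in nodes:
            n.update("success" if n is winners[0] else "quiet")
        return winners[0]
    for n in nodes:                              # simultaneous capture messages
        n.update("collision" if n in winners else "quiet")
    return None
```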


Resetting Rates

Problem of the MIMD scheme:
- When many congested neighbors all decrease their backoffs, the resulting backoffs are small and there are many collisions.

Solution: reset backoff delays when they drop below a limit.
- If any mean backoff goes below reset_limit, multiply all mean backoffs by the constant reset_factor.

Simulation parameters: m = 1.2, reset_limit = 16/5 = 3.2, reset_factor = 10.

Reset propagation/loss:
- Resets move hop-by-hop across the network and may get lost.
- Effect of a lost reset: the node keeps a low backoff, so it wins the next few slots and then increases its mean backoff.

Simulation results:
- Reset propagation.
- Loss of up to 10% of resets.
- Random-walk movement.
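A sketch of the reset rule layered on the MIMD update above, reusing ImpatientNode; reset_limit and reset_factor come from the simulation parameters, while reducing the hop-by-hop propagation to a boolean that the caller would forward to neighbors is an assumption:

```python
RESET_LIMIT = 16 / 5     # 3.2, from the simulation parameters
RESET_FACTOR = 10.0

def maybe_reset(node: ImpatientNode) -> bool:
    """If this node's mean backoff has fallen below reset_limit, scale it back
    up by reset_factor and signal that a reset should propagate to neighbors."""
    if node.b < RESET_LIMIT:
        node.b *= RESET_FACTOR
        return True          # forward a reset message hop-by-hop (it may get lost)
    return False

def on_reset_received(node: ImpatientNode) -> None:
    """A node that hears a (possibly relayed) reset also scales up its backoff."""
    node.b *= RESET_FACTOR
```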


Simulations on Random Topology

Exponential Backoff: Jain's Fairness Index = 0.58, Mean Throughput = 0.101.
Impatient Backoff Algorithm: Jain's Fairness Index = 0.68, Mean Throughput = 0.102.

[Figure legend: circle = node; center = location, area = throughput.]
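For reference, Jain's fairness index over per-node throughputs x_1, ..., x_n is (Σ x_i)^2 / (n · Σ x_i^2); a small helper (the function name is illustrative):

```python
def jains_fairness_index(throughputs):
    """Jain's fairness index: 1.0 for perfectly equal shares, 1/n at worst."""
    n = len(throughputs)
    total = sum(throughputs)
    if n == 0 or total == 0:
        return 0.0
    return total * total / (n * sum(x * x for x in throughputs))

# e.g. jains_fairness_index([0.1, 0.1, 0.1, 0.1]) == 1.0
```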