Network Sharing Issues. Lecture 15. Aditya Akella.


Upload: katherine-simpson

Post on 26-Dec-2015


TRANSCRIPT

  • Slide 1
  • Network Sharing Issues Lecture 15 Aditya Akella
  • Slide 2
  • Is this the biggest problem in cloud resource allocation? Why? Why not? How does the problem differ w.r.t. allocating other resources? Paper: FairCloud: Sharing the Network in Cloud Computing, Sigcomm 2012. What are the assumptions? Drawbacks?
  • Slide 3
  • Motivation Network?
  • Slide 4
  • Context: Networks are more difficult to share than other resources.
  • Slide 5
  • Context: Several proposals share the network differently, e.g., proportional to # source VMs (Seawall [NSDI11]) or via statically reserved bandwidth (Oktopus [Sigcomm11]). These provide specific types of sharing policies. Can we characterize the solution space and relate the policies to each other?
  • Slide 6
  • FairCloud Paper: 1. A framework for understanding network sharing in cloud computing (goals, tradeoffs, properties). 2. Solutions for sharing the network: existing policies placed in this framework, and new policies representing different points in the design space.
  • Slide 7
  • Goals: 1. Minimum Bandwidth Guarantees. Provides predictable performance. Example: a transfer of Size bytes between two VMs at guaranteed bandwidth B_min finishes within Time_max = Size / B_min.
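The bound above can be checked numerically; a small sketch with illustrative numbers (not from the lecture):

```python
def max_transfer_time(size_bits, b_min_bps):
    """Worst-case completion time of a transfer when the sender is
    guaranteed at least b_min_bps: Time_max = Size / B_min."""
    return size_bits / b_min_bps

# e.g., an 8 Gbit (1 GB) file at a guaranteed 100 Mbps:
print(max_transfer_time(8e9, 100e6))  # 80.0 seconds
```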
  • Slide 8
  • Goals: 2. High Utilization. Do not leave useful resources unutilized; this requires both work conservation and proper incentives. (Figure: both tenants active, non-work-conserving vs. work-conserving allocation.)
  • Slide 9
  • Goals: 3. Network Proportionality. As with other services, the network should be shared in proportion to payment. Currently tenants pay a flat rate per VM, so network share should be proportional to #VMs (assuming identical VMs).
  • Slide 10
  • Goals example: A has 2 VMs and B has 3 VMs, so Bw_A / Bw_B = 2/3. When exact sharing is not possible, use max-min.
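The max-min fallback can be sketched as weighted water-filling, with each tenant weighted by its #VMs. This is a sketch under assumed elastic/finite demands; the function name and structure are illustrative, not from the paper:

```python
def weighted_max_min(capacity, weights, demands):
    """Weighted max-min fair allocation (water-filling): tenants whose
    demand is below their weighted share are satisfied first, and the
    leftover capacity is redistributed among the rest."""
    alloc = {t: 0.0 for t in weights}
    active = set(weights)
    remaining = capacity
    while active and remaining > 1e-12:
        total_w = sum(weights[t] for t in active)
        fair = {t: remaining * weights[t] / total_w for t in active}
        done = {t for t in active if demands[t] - alloc[t] <= fair[t]}
        if not done:
            # every remaining tenant can absorb its full weighted share
            for t in active:
                alloc[t] += fair[t]
            break
        for t in done:
            remaining -= demands[t] - alloc[t]  # return unused share
            alloc[t] = demands[t]
        active -= done
    return alloc

# A (2 VMs) and B (3 VMs) with elastic demands split the link 2:3
inf = float("inf")
print(weighted_max_min(1.0, {"A": 2, "B": 3}, {"A": inf, "B": inf}))  # {'A': 0.4, 'B': 0.6}
```

If A's demand is small (say 0.1), A is capped at its demand and B absorbs the rest, keeping the link work-conserving.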
  • Slide 11
  • Goals 1.Minimum Bandwidth Guarantees 2.High Utilization 3.Network Proportionality Not all goals are achievable simultaneously!
  • Slide 12
  • Tradeoffs Min Guarantee High Utilization Network Proportionality Not all goals are achievable simultaneously!
  • Slide 13
  • Tradeoffs Min Guarantee Network Proportionality
  • Slide 14
  • Tradeoffs: Min Guarantee vs. Network Proportionality. A and B share access link L of capacity C. With a minimum guarantee, Bw_A = Bw_B = C/2. With network proportionality, shares track the total #VMs in the network (in the figure, Bw_A = 2/13 C and Bw_B = 11/13 C once B adds 10 VMs), so A's share decays toward C/N_T as the #VMs in the network grows.
  • Slide 15
  • Tradeoffs High Utilization Network Proportionality
  • Slide 16
  • Tradeoffs: High Utilization vs. Network Proportionality. (Figure: link L shared by tenant A's VMs A1-A4 and tenant B's VMs B1-B4.)
  • Slide 17
  • Tradeoffs: under network proportionality, Bw_A = Bw_B = C/2 on link L.
  • Slide 18
  • Tradeoffs: an uncongested path P becomes available in addition to the congested link L.
  • Slide 19
  • Tradeoffs: network proportionality counts A's traffic on the uncongested path P, so Bw_A^L + Bw_A^P = Bw_B^L and hence Bw_A^L < Bw_B^L on the congested link. Tenants can thus be disincentivized to use free resources, e.g., if A values A1-A2 or A3-A4 more than A1-A3.
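The disincentive can be worked out directly: with equal tenant weights, network proportionality forces A's combined allocation (congested link L plus free path P) to equal B's allocation on L. A toy calculation for the assumed two-tenant setup (function name illustrative):

```python
def net_prop_shares_on_L(capacity_L, a_free_path_usage):
    """Network proportionality with equal weights: Bw_A^L + Bw_A^P = Bw_B^L,
    and the congested link is fully used, Bw_A^L + Bw_B^L = C. Solving
    gives A a smaller share of L whenever it also uses the free path P."""
    bw_a_l = (capacity_L - a_free_path_usage) / 2.0
    bw_b_l = (capacity_L + a_free_path_usage) / 2.0
    return bw_a_l, bw_b_l

# Using 0.25 C on the free path leaves A only 0.375 C on the congested link
print(net_prop_shares_on_L(1.0, 0.25))  # (0.375, 0.625)
```

So every unit of "free" traffic A sends costs it half a unit on the congested link, which is exactly why A may prefer to leave P idle.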
  • Slide 20
  • Congestion Proportionality: network proportionality applied only to flows traversing congested links shared by multiple tenants.
  • Slide 21
  • Under congestion proportionality, Bw_A^L = Bw_B^L: A's traffic on the uncongested path P no longer counts against its share on L.
  • Slide 22
  • Congestion proportionality still conflicts with high utilization.
  • Slide 23
  • Tradeoffs: High Utilization vs. Congestion Proportionality. (Figure: two congested links L1 and L2, C1 = C2 = C, each shared by A and B VM pairs.)
  • Slide 24
  • Congestion proportionality: Bw_A = Bw_B on L1 and on L2 (C1 = C2 = C).
  • Slide 25
  • Tradeoffs: the demand of one of A's flows drops.
  • Slide 26
  • Tradeoffs: under congestion proportionality, tenants are incentivized to not fully utilize resources.
  • Slide 27
  • Tradeoffs: tenants hold back traffic, and part of the network becomes uncongested.
  • Slide 28
  • Tradeoffs: L2 is left uncongested at C/2, sacrificing high utilization.
  • Slide 29
  • Link Proportionality: proportionality applied to each link independently.
  • Slide 30
  • Link proportionality provides full incentives for high utilization.
  • Slide 31
  • Goals and Tradeoffs Min Guarantee Link Proportionality High Utilization Congestion Proportionality Network Proportionality
  • Slide 32
  • Guiding Properties Min Guarantee Link Proportionality High Utilization Congestion Proportionality Network Proportionality Break down goals into lower-level necessary properties
  • Slide 33
  • Properties Min Guarantee Link Proportionality High Utilization Congestion Proportionality Network Proportionality
  • Slide 34
  • Work Conservation Bottleneck links are fully utilized Static reservations do not have this property
  • Slide 35
  • Properties Min Guarantee Link Proportionality High Utilization Congestion Proportionality Network Proportionality Work Conservation
  • Slide 36
  • Utilization Incentives Tenants are not incentivized to lie about demand to leave links underutilized Network and congestion proportionality do not have this property Allocating links independently provides this property
  • Slide 37
  • Properties Min Guarantee Link Proportionality High Utilization Congestion Proportionality Network Proportionality Work Conservation Utilization Incentives
  • Slide 38
  • Communication-pattern Independence: allocation does not depend on the communication pattern. Per-flow allocation does not have this property (per flow = give equal shares to each flow). (Figure: different patterns, same Bw.)
  • Slide 39
  • Properties Min Guarantee Link Proportionality High Utilization Congestion Proportionality Network Proportionality Work Conservation Utilization Incentives Comm-Pattern Independence
  • Slide 40
  • Symmetry: swapping demand directions preserves the allocation. Per-source allocation lacks this property (per source = give equal shares to each source). (Figure: reversed demands, same Bw.)
  • Slide 41
  • Goals, Tradeoffs, Properties Min Guarantee Link Proportionality High Utilization Congestion Proportionality Network Proportionality Work Conservation Utilization Incentives Comm-Pattern Independence Symmetry
  • Slide 42
  • Outline 1.Framework for understanding network sharing in cloud computing Goals, tradeoffs, properties 2.Solutions for sharing the network Existing policies in this framework New policies representing different points in the design space
  • Slide 43
  • Per Flow (e.g. today) Min Guarantee Link Proportionality High Utilization Congestion Proportionality Network Proportionality Work Conservation Utilization Incentives Comm-Pattern Independence Symmetry
  • Slide 44
  • Per Source (e.g., Seawall [NSDI11]) Min Guarantee Link Proportionality High Utilization Congestion Proportionality Network Proportionality Work Conservation Utilization Incentives Comm-Pattern Independence Symmetry
  • Slide 45
  • Static Reservation (e.g., Oktopus [Sigcomm11]) Min Guarantee Link Proportionality High Utilization Congestion Proportionality Network Proportionality Work Conservation Utilization Incentives Comm-Pattern Independence Symmetry
  • Slide 46
  • New Allocation Policies 3 new allocation policies that take different stands on tradeoffs
  • Slide 47
  • Proportional Sharing at Link-level (PS-L) Min Guarantee Link Proportionality High Utilization Congestion Proportionality Network Proportionality Work Conservation Utilization Incentives Comm-Pattern Independence Symmetry
  • Slide 48
  • Proportional Sharing at Link-level (PS-L): per-tenant WFQ where a tenant's weight on link L = # of that tenant's VMs on L, so Bw_A / Bw_B = (#VMs_A on L) / (#VMs_B on L). Can easily be extended to heterogeneous VMs (by using VM weights).
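On a single congested link with backlogged tenants, the PS-L shares reduce to a one-liner; a sketch assuming elastic demands (names illustrative):

```python
def ps_l_shares(capacity, vms_on_link):
    """PS-L: per-tenant WFQ on a link with weight = the tenant's number
    of VMs with traffic on that link; with all tenants backlogged,
    bandwidth splits in proportion to those weights."""
    total = sum(vms_on_link.values())
    return {t: capacity * n / total for t, n in vms_on_link.items()}

print(ps_l_shares(1.0, {"A": 2, "B": 3}))  # {'A': 0.4, 'B': 0.6}
```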
  • Slide 49
  • Proportional Sharing at Network-level (PS-N) Min Guarantee Link Proportionality High Utilization Congestion Proportionality Network Proportionality Work Conservation Utilization Incentives Comm-Pattern Independence Symmetry
  • Slide 50
  • Proportional Sharing at Network-level (PS-N) Congestion proportionality in severely restricted context Per source-destination WFQ, total tenant weight = # VMs
  • Slide 51
  • Proportional Sharing at Network-level (PS-N): per source-destination WFQ with W(A1, A2) = 1/N_A1 + 1/N_A2, where N_X is the number of VMs that X communicates with; a tenant's weights sum to #VMs_A. Congestion proportionality in a severely restricted context.
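The per-pair weight and the "total weight = #VMs" invariant can be sketched as follows, where `fanout[v]` is the number of VMs that VM `v` communicates with (names and the many-to-one example are illustrative):

```python
def ps_n_pair_weight(fanout, src, dst):
    """PS-N weight of a source-destination VM pair:
    W(src, dst) = 1/N_src + 1/N_dst."""
    return 1.0 / fanout[src] + 1.0 / fanout[dst]

# Many-to-one: 2 sources send to 1 destination (3 VMs total).
fanout = {"A1": 1, "A2": 1, "D": 2}
pairs = [("A1", "D"), ("A2", "D")]
total = sum(ps_n_pair_weight(fanout, s, d) for s, d in pairs)
print(total)  # 3.0, i.e. the tenant's total weight equals its #VMs
```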
  • Slide 52
  • Proportional Sharing at Network-level (PS-N): in aggregate, W_A / W_B = #VMs_A / #VMs_B.
  • Slide 53
  • Proportional Sharing on Proximate Links (PS-P) Min Guarantee Link Proportionality High Utilization Congestion Proportionality Network Proportionality Work Conservation Utilization Incentives Comm-Pattern Independence Symmetry
  • Slide 54
  • Assumes a tree-based topology: traditional, fat-tree, VL2 (currently working on removing this assumption) Proportional Sharing on Proximate Links (PS-P)
  • Slide 55
  • Proportional Sharing on Proximate Links (PS-P). Assumes a tree-based topology: traditional, fat-tree, VL2 (currently working on removing this assumption). Min guarantees via the hose model plus admission control. (Figure: VMs A1..An with hose guarantees Bw_A1..Bw_An.)
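The min-guarantee step above combines the hose model with admission control; a minimal sketch of the admission check on one access link (the helper name and per-link bookkeeping are assumptions, not from the paper):

```python
def can_admit(link_capacity, reserved_guarantees, new_guarantee):
    """Hose-model admission control: admit a new VM's minimum bandwidth
    guarantee only if all guarantees on the access link still fit
    within the link's capacity."""
    return sum(reserved_guarantees) + new_guarantee <= link_capacity

# 1000 Mbps link, 700 Mbps already promised:
print(can_admit(1000, [300, 400], 200))  # True
print(can_admit(1000, [300, 400], 400))  # False
```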
  • Slide 56
  • Assumes a tree-based topology: traditional, fat-tree, VL2 (currently working on removing this assumption) Min guarantees Hose model Admission control High Utilization Per source fair sharing towards tree root Proportional Sharing on Proximate Links (PS-P)
  • Slide 57
  • Assumes a tree-based topology: traditional, fat-tree, VL2 (currently working on removing this assumption) Min guarantees Hose model Admission control High Utilization Per source fair sharing towards tree root Per destination fair sharing from tree root Proportional Sharing on Proximate Links (PS-P)
  • Slide 58
  • Deploying PS-L, PS-N, and PS-P. Full switch support: all allocations can use hardware queues (per tenant, per VM, or per source-destination). Partial switch support: PS-N and PS-P can be deployed using CSFQ [Sigcomm98]. No switch support: PS-N can be deployed using only hypervisors; PS-P could be deployed using only hypervisors as well (work in progress).
  • Slide 59
  • Evaluation Small Testbed + Click Modular Router 15 servers, 1Gbps links Simulation + Real Traces 3200 nodes, flow level simulator, Facebook MapReduce traces
  • Slide 60
  • Many-to-one (one link, testbed): N senders to one receiver; PS-P offers guarantees on Bw_A as N grows.
  • Slide 61
  • MapReduce (one link, testbed): M mappers and R reducers with M + R = 10; PS-L offers link proportionality (Bw in Mbps).
  • Slide 62
  • MapReduce Network, simulation, Facebook trace
  • Slide 63
  • MapReduce Network, simulation, Facebook trace PS-N is close to network proportionality
  • Slide 64
  • MapReduce Network, simulation, Facebook trace PS-N and PS-P reduce shuffle time of small jobs by 10-15X
  • Slide 65
  • Summary: Sharing cloud networks is not trivial. This is a first step towards a framework for analyzing network sharing in cloud computing: key goals (min guarantees, high utilization, and proportionality), their tradeoffs, and lower-level properties. New allocation policies whose properties are a superset of past work: PS-L (link proportionality + high utilization), PS-N (restricted network proportionality), PS-P (min guarantees + high utilization). What are the assumptions and drawbacks?