High Performance Networking with
Little or No Buffers
Yashar Ganjali, on behalf of Prof. Nick McKeown
High Performance Networking Group, Stanford University
[email protected]
http://www.stanford.edu/~yganjali
Joint work with:Guido Appenzeller, Ashish Goel, Tim Roughgarden
February 17, 2005
Stanford Networking Research Center Final Project Review Workshop
February 17, 2005 SNRC – Final Project Review 2
Motivation: Networks with Little or No Buffers

Problem
- Internet traffic doubles every year
- Disparity between traffic growth and router growth (space, power, cost)

Possible solution
- All-optical networking

Consequences
- Large capacity, which can carry the growing traffic
- Little or no buffers
Which would you choose?

DSL Router 1 ($50): 4 x 10/100 Ethernet, 1.5Mb/s DSL connection, 1Mbit of packet buffer
DSL Router 2 ($55): 4 x 10/100 Ethernet, 1.5Mb/s DSL connection, 4Mbit of packet buffer

Bigger buffers are better
What we learn in school
- Packet switching is good
- Long-haul links are expensive
- Statistical multiplexing allows efficient sharing of long-haul links
- Packet switching requires buffers
- Packet loss is bad, so use big buffers
- Luckily, big buffers are cheap
Statistical Multiplexing
Observations
1. The bigger the buffer, the lower the packet loss.
2. If the buffer never goes empty, the outgoing line is busy 100% of the time.
What we learn in school
Queueing theory (M/M/1 queue with load ρ):

P[X ≥ k] = ρ^k
E[X] = ρ / (1 − ρ)

[Figure: loss rate vs. buffer size; the loss rate decays geometrically as the buffer grows]

Observations
1. Can pick the buffer size for a given loss rate.
2. Loss rate falls fast with increasing buffer size.
3. Bigger is better.
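The M/M/1 formulas above can be sanity-checked numerically; a minimal sketch (the function names are mine, not from the talk):

```python
import math

def mm1_mean_queue(rho):
    """Mean M/M/1 queue length: E[X] = rho / (1 - rho)."""
    return rho / (1 - rho)

def mm1_buffer_for_loss(rho, loss_target):
    """Smallest buffer B whose overflow probability
    P[X >= B] = rho**B is at or below loss_target."""
    return math.ceil(math.log(loss_target) / math.log(rho))
```

At 50% load, a loss target of 10^-3 needs only about 10 packets of buffering, which is exactly observation 2: loss falls geometrically with buffer size.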
What we learn in school
Moore's Law: memory is plentiful and halves in price every 18 months. A 1Gbit memory holds 500k packets and costs $25.

Conclusion: make buffers big. Choose the $55 DSL router.
Why bigger isn’t better
- Network users don't like buffers
- Network operators don't like buffers
- Router architects don't like buffers
- We don't need big buffers
- We'd often be better off with smaller ones
Backbone Router Buffers
Universally applied rule-of-thumb: a router needs a buffer of size

B = 2T × C

• 2T is the two-way propagation delay (the round-trip time)
• C is the capacity of the bottleneck link

Context
- Mandated in backbone and edge routers; appears in RFPs and IETF architectural guidelines.
- Usually referenced to Villamizar and Song: "High Performance TCP in ANSNET", CCR, 1994.
- Already known to the inventors of TCP [Van Jacobson, 1988].
- Has major consequences for router design.

[Figure: source, router with buffer B and capacity C, destination; round-trip delay 2T]
Review: TCP Congestion Control
Only W packets may be outstanding.

Rule for adjusting W:
- If an ACK is received: W ← W + 1/W
- If a packet is lost: W ← W/2

[Figure: source/destination exchange; window size over time is a sawtooth oscillating between W_max/2 and W_max]
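The AIMD rule can be written as a one-line update; a sketch (names are mine) that reproduces the sawtooth when losses occur whenever the window reaches a ceiling:

```python
def aimd_update(w, packet_lost):
    """TCP congestion-avoidance window update from the slide:
    additive increase of 1/W per ACK (about +1 per RTT),
    multiplicative decrease of one half on loss."""
    return w / 2 if packet_lost else w + 1 / w

# Trace one sawtooth: grow from W=5, halve whenever W reaches 10.
w, trace = 5.0, []
for _ in range(100):
    w = aimd_update(w, packet_lost=(w >= 10))
    trace.append(w)
```

The trace stays between W_max/2 = 5 and W_max = 10, the oscillation sketched in the slide's figure.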
Buffer Size in the Core
[Figure: probability distribution of buffer occupancy, between 0 and B, driven by the aggregate window W]
Backbone router buffers
It turns out that the rule of thumb is wrong for core routers today.

The required buffer is 2T × C / √n instead of 2T × C, where n is the number of long-lived flows.

[Figure: simulation results match the required buffer size 2T × C / √n]
Impact on Router Design
10Gb/s linecard with 200,000 x 56kb/s flows
- Rule-of-thumb: buffer = 2.5Gbits (requires external, slow DRAM)
- Becomes: buffer = 6Mbits (can use on-chip, fast SRAM; completion time halved for short flows)

40Gb/s linecard with 40,000 x 1Mb/s flows
- Rule-of-thumb: buffer = 10Gbits
- Becomes: buffer = 50Mbits

Many thanks to Guido Appenzeller at Stanford. For more details, see the Sigcomm 2004 paper, available at: http://www.stanford.edu/~nickm/papers
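The linecard numbers follow directly from the two formulas; a sketch assuming the examples use a 250ms round-trip time (the RTT is not stated on the slide):

```python
import math

def rule_of_thumb_bits(rtt_s, capacity_bps):
    """B = 2T * C, with 2T the two-way (round-trip) delay."""
    return rtt_s * capacity_bps

def revised_bits(rtt_s, capacity_bps, n_flows):
    """B = 2T * C / sqrt(n), with n long-lived flows."""
    return rtt_s * capacity_bps / math.sqrt(n_flows)
```

With 2T = 0.25s, a 10Gb/s linecard needs 2.5Gbits by the rule of thumb but only about 6Mbits with n = 200,000 flows, a reduction of roughly √n ≈ 450x.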
How small can buffers be?
Imagine you want to build an all-optical router for a backbone network…
…and you can only buffer a few dozen packets in delay lines.
Conventional wisdom: It’s a routing problem (hence deflection routing, burst-switching, etc.)
Our belief: First, think about congestion control.
The chasm between theory and practice
Theory (benign conditions): M/M/1 queue with load ρ

P[X ≥ k] = ρ^k
E[X] = ρ / (1 − ρ)

- ρ = 50%: E[X] = 1 packet, P[X > 10] < 10^-3
- ρ = 75%: E[X] = 3 packets, P[X > 10] < 0.06

Practice: a typical OC192 router linecard buffers over 2,000,000 packets.

Can we make the traffic arriving at the routers Poisson "enough" to get most of the benefit?
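The four numbers in the theory column check out against the geometric tail; a quick verification (assuming, as on the earlier queueing slide, P[X ≥ k] = ρ^k):

```python
# Overflow probabilities P[X > 10] = rho**11 for the two loads
tail_50 = 0.5 ** 11    # about 4.9e-4
tail_75 = 0.75 ** 11   # about 0.042
assert tail_50 < 1e-3 and tail_75 < 0.06

# Mean queue lengths E[X] = rho / (1 - rho)
assert 0.5 / (1 - 0.5) == 1.0 and 0.75 / (1 - 0.75) == 3.0
```

Even at 75% load, a Poisson-fed queue of a dozen packets overflows only a few percent of the time, which is the gap the slide is pointing at: theory asks for tens of packets, practice provisions millions.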
Observations
1. The data-rate of most flows is a small fraction of the line-rate.
   - Access links throttle flows to a low rate (1-2Mb/s)
   - Core:Access > 1000:1 (today, TCP's window size is limited)
2. If packet arrivals were Poisson, a buffer of only 5-10 packets gets us 80-90% of the capacity of the link.
3. In an all-optical network, capacity is plentiful (presumably).
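Observation 2 can be illustrated with a tiny slotted-time simulation (everything below is my sketch, not the talk's experiment): Poisson arrivals at 80% load feed a 10-packet buffer that serves one packet per slot.

```python
import math, random

def poisson_sample(rng, lam):
    """Knuth's multiplication method for a Poisson(lam) draw."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def utilization(load=0.8, buf=10, slots=200_000, seed=1):
    """Fraction of slots the output link is busy."""
    rng, q, served = random.Random(seed), 0, 0
    for _ in range(slots):
        q = min(buf, q + poisson_sample(rng, load))  # drop any overflow
        if q:
            q -= 1
            served += 1
    return served / slots
```

With these parameters the link carries close to the full 80% offered load despite only 10 packets of buffering, consistent with the 80-90% figure on the slide.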
What we know so far about very small buffers

Theory
- Arbitrary injection process: any rate > 0 needs unbounded buffers.
- Complete centralized control: constant-fraction throughput with constant buffers [Leighton].
- Poisson process with load < 1: need a buffer size of approx. O(log D + log W), i.e. 20-30 pkts, where D = # of hops and W = window size [Goel 2004].

Experiment
- TCP pacing: results as good as, or better than, for Poisson arrivals.
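The O(log D + log W) bound can be turned into a rough number; in this sketch the constant factor c and the example values (D = 20 hops, W = 64 packets) are my illustrative choices, not figures from [Goel 2004]:

```python
import math

def small_buffer_estimate(hops, window, c=2.0):
    """Rough buffer size ~ c * (log2 D + log2 W).
    The constant c is illustrative, not from the paper."""
    return math.ceil(c * (math.log2(hops) + math.log2(window)))
```

With D = 20 and W = 64 this lands in the 20-30 packet range quoted on the slide; the point is that the dependence on both parameters is only logarithmic.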
Early results

Congested core router with 10 packet buffers.
Average offered load = 80%; RTT = 100ms; each flow limited to 2.5Mb/s.

[Figure: sources on >10Gb/s access links feed a 10Gb/s core router connected to a server]
Slow access links, lots of flows, 10 packet buffers

Congested core router with 10 packet buffers.
RTT = 100ms; each flow limited to 2.5Mb/s.

[Figure: sources on 5Mb/s access links feed a 10Gb/s core router connected to a server]
Conclusion
We can reduce 1,000,000 packet buffers to 10,000 today.

We can probably reduce to 10-20 packet buffers:
- With many small flows, no change is needed.
- With some large flows, we need pacing in the access routers or at the edge devices.
- More experiments are needed.
Questions?