
TCP Congestion Control Evolution

Steven Low, Caltech

netlab.caltech.edu

Summary

First 25 years of the Internet
• Networks were several orders of magnitude smaller in size, speed, and heterogeneity
• Applications were much simpler

Last 14 years of the Internet
• Networks are much bigger, faster, and more heterogeneous
• Applications are much more diverse and demanding

TCP congestion control gradually becomes unsuitable for more and more applications in more and more networks

Summary

• Both networks and applications have evolved much faster in the last 14 years than in the previous 25
• Our control mechanisms have shown remarkable robustness, but have also been essentially frozen for the last 20 years
• TCP congestion control is gradually being stressed more and more
• Time to change is now
  … though there may not be an Internet-wide train wreck
  But why wait for a train wreck to change?

Outline

• Where we were
  – Network growth
  – Application growth
  – Control mechanism growth
• Where we are
• Need to change TCP congestion control

Internet milestones (1969 – 2006)

• 1969: ARPANet (backbone speed: 50-56 kbps)
• 1974: TCP
• 1983: TCP/IP
• 1988: Tahoe; T1 NSFNet backbone
• 1991: HTTP; T3 NSFNet backbone
• 1993: Mosaic
• 1996: OC12 MCI backbone
• 1999: Napster; OC48 vBNS backbone
• 2003: OC192 Abilene backbone

Network is exploding

Internet at birth (1969)
[Map of the original four-node ARPANet and its routers; nodes came online September and October 1969 (SDS), November 1969 (IBM), and December 1969 (DEC)]
Source: http://www.computerhistory.org/exhibits/internet_history/full_size_images/1969_4-node_map.gif

Internet at 31
[Map of the Internet at age 31]

Internet growth

In comparison: cars
If car technology had been advancing as fast:
• US$ 5 per car [instead of US$ 25,000 per car]
• 100 km per liter [instead of 8 km per liter]

The Wireless Internet

DoCoMo's i-mode
• E-mail
• Internet
• Doubles every half year
• More than 20 million subscribers by early 2000s
[Chart: subscribers in millions]

(Courtesy: D. Rutledge)

Wireless Mesh Network

• Rooftop routers with 0.5-W transmitters
• 2 Mb/s channels and up to 1-mile range

(Courtesy: D. Rutledge)

Application milestones (1969 – 2005)

• 1969: ARPANet
• 1971: Network Mail
• 1972: Telnet
• 1973: File Transfer
  (simple applications)
• 1988: Tahoe
• 1993: Internet Talk Radio; Whitehouse online
• 1995: Internet Phone
• 1999: Napster music
• 2004: AT&T VoIP
• 2005: iTunes video; YouTube
  (diverse & demanding applications)

Network Mail (1971)
First Internet (ARPANet) application

The first network email was sent by Ray Tomlinson between two computers at BBN connected by the ARPANet.

First ARPANet applications

• First Network Mail: SNDMSG, Ray Tomlinson, 1971
  – Replaced by SMTP: RFC 821, J. Postel, Aug 1982
• Telnet: RFC 318, Jon Postel, April 1972
  – Started with RFC 137, T. C. O'Sullivan, April 1971
• FTP: RFC 454, A. McKenzie, Feb 1973
  – Started in July 1972 with RFC 354, Abhay Bhushan

Internet applications today

• Telephony
• Music
• TV & home theatre
• Software as a Service
• Finding your way
• Mail
• Games
• Library at your fingertips
• Network-centric warfare

Control milestones (1969 – 2006)

• 1969: ARPANet
• 1974: TCP
  – Flow control: prevent overwhelming the receiver
• 1983: TCP/IP
• 1988: Tahoe
  – + Congestion control: prevent overwhelming the network
• Going forward: provide/facilitate new application requirements

TCP (1974)

• Cerf and Kahn, IEEE Trans. on Communications (1974)
• RFC 793, TCP (1981)
• ARPANet cutover to TCP/IP (Jan 1, 1983)

• Window control: ack-clocking, timeout, retransmission
  – For correct delivery of the packet sequence over unreliable networks
  – To prevent overwhelming the receiver (through the Advertised Window in the TCP header)

"We envision [HOST retransmission capability] will occasionally be invoked to allow HOST accommodation to infrequent overdemands for limited buffer resources, and otherwise not used much."

TCP (1974)

Key control mechanism (see the sketch below)
• Receiver specifies the maximum amount of data it can receive (Advertised Window)
• Sender keeps no more than that amount of outstanding packets
• Ack-clocking adapts to (mild) congestion
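
As a rough illustration of this mechanism (a minimal sketch, not actual TCP code; the class and variable names are made up for this example), here is a sender that never keeps more unacknowledged data in flight than the receiver's advertised window:

```python
# Minimal sketch of receiver-driven flow control (illustrative, not real TCP):
# the sender limits outstanding (unacknowledged) bytes to the advertised window,
# and acknowledgements "clock" out the next segments.

class FlowControlledSender:
    def __init__(self, data: bytes, mss: int = 1460):
        self.data = data
        self.mss = mss          # maximum segment size, in bytes
        self.next_seq = 0       # first byte not yet sent
        self.unacked = 0        # first byte not yet acknowledged

    def next_segment(self, advertised_window: int) -> bytes:
        """Return the next segment the advertised window allows, or nothing."""
        in_flight = self.next_seq - self.unacked
        room = advertised_window - in_flight
        if room <= 0 or self.next_seq >= len(self.data):
            return b""          # window full (or done): wait for more acks
        length = min(self.mss, room, len(self.data) - self.next_seq)
        segment = self.data[self.next_seq:self.next_seq + length]
        self.next_seq += length
        return segment

    def on_ack(self, ack_seq: int) -> None:
        """An ack slides the window forward, allowing new data to be sent."""
        self.unacked = max(self.unacked, ack_seq)
```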

Congestion collapse

• October 1986: the Internet had its first congestion collapse
• Link from LBL to UC Berkeley
  – 400 yards, 3 hops, 32 Kbps
  – throughput dropped to 40 bps
  – a factor of ~1000 drop!
• 1988: Van Jacobson proposed TCP congestion control

[Figure: throughput vs. load]

Networks in late 1986

• 5,089 hosts on the Internet (Nov 1986)
• Backbone speed: 50 – 56 kbps
• Control mechanism focused only on receiver congestion, not network congestion

• Large number of hosts sharing a slow (and small) network
• The network became the bottleneck, as opposed to the receivers
• But TCP flow control only prevents overwhelming receivers

Jacobson introduced a mechanism to deal with network congestion in 1988

Tahoe and its variants (1988)

• Jacobson, Sigcomm 1988
• + Avoid overwhelming the network
• + Window control mechanisms
  – Dynamically adjust the sender window based on congestion (as well as the receiver window)
  – Loss-based AIMD
  – Fast Retransmit / Fast Recovery
  – New timeout calculation that incorporates the variance in RTT samples
  (AIMD and the timeout calculation are sketched below)

"… important considering that TCP spans a range from 800 Mbps Cray channels to 1200 bps packet radio links"
-- Jacobson, 1988
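
Below is a simplified sketch of the two mechanisms referenced above: loss-based AIMD window adjustment and an RTT-variance-based retransmission timeout. It is an illustration using the textbook constants (gains of 1/8 and 1/4, RTO = SRTT + 4·RTTVAR), not Jacobson's actual implementation; the CongestionState class is made up for this example.

```python
# Illustrative sketch (not the real stack) of two Tahoe-era ideas:
# (1) loss-based AIMD adjustment of the congestion window, and
# (2) a retransmission timeout built from smoothed RTT and RTT variance.

class CongestionState:
    def __init__(self, mss: int = 1460):
        self.mss = mss
        self.cwnd = 1.0 * mss        # congestion window: start with ~1 segment
        self.ssthresh = 64 * 1024    # slow-start threshold, in bytes
        self.srtt = None             # smoothed RTT estimate (seconds)
        self.rttvar = 0.0            # mean deviation of RTT samples

    def on_ack(self) -> None:
        """Grow the window: slow start below ssthresh, additive increase above."""
        if self.cwnd < self.ssthresh:
            self.cwnd += self.mss                         # ~doubles per RTT
        else:
            self.cwnd += self.mss * self.mss / self.cwnd  # ~1 segment per RTT

    def on_loss(self) -> None:
        """Multiplicative decrease; Tahoe restarts from one segment."""
        self.ssthresh = max(self.cwnd / 2, 2 * self.mss)
        self.cwnd = 1.0 * self.mss

    def on_rtt_sample(self, rtt: float) -> None:
        """Track the mean and the mean deviation of RTT samples."""
        if self.srtt is None:
            self.srtt, self.rttvar = rtt, rtt / 2
        else:
            err = rtt - self.srtt
            self.srtt += 0.125 * err                        # gain 1/8
            self.rttvar += 0.25 * (abs(err) - self.rttvar)  # gain 1/4

    def rto(self) -> float:
        """Retransmission timeout: smoothed RTT plus 4x the deviation."""
        return self.srtt + 4 * self.rttvar
```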

Outline

• Where we were
• Where we are
• Need to change TCP congestion control

Summary

First 25 years of the Internet
• Network was several orders of magnitude smaller in size, speed, and heterogeneity
• Applications were simple

Last 14 years of the Internet
• Networks are much bigger, faster, and more heterogeneous
• Applications are much more diverse and demanding
• Both networks and applications are evolving much faster in the last 14 years than the previous 25

What is inadequate

• New applications bring new requirements that stretch TCP
  – FTP or Telnet vs. video streaming
• New networks stretch TCP
  – New wireless challenges
  – Solutions to some wireless issues need more than transport-layer mechanisms

What is inadequate

• Wireless networks: 5 demos
• Video delivery: 4
• Large BDP: 2
• Short messages: 1
• Content upload: 1

TCP congestion control becomes unsuitable for more and more applications in more and more networks

Real-world evidence

50x delays are common over DSL-speed links

[Chart: FTP throughput (kbps) vs. file size (0.1 – 60 MB), Reno vs. an alternative TCP, with speedups ranging from 1.8x to 32.5x]

Throughput, LA – Tokyo and San Francisco – MIT
• Reno average: 35 Mbps
• Alternative TCP average: 233 Mbps

Another example

San Francisco – New York, June 3, 2007
Heavy packet loss in the Sprint network: the alternative TCP increased throughput by 120x!

• TCP throughput: 1 Mbps
• Alternative throughput: 120 Mbps

Why evolving TCP is important

• Pockets of industry are experiencing increasing pain
• They will not wait for the research or standards communities
  – MS and Linux are already deploying new TCPs
• The pain will only intensify …
  … because of powerful industry trends

Trend 1

• Exploding need to deliver large content
  – Video, software, games, business info

Trend 1: Online content is exploding

• CAGR 2005 – 2011: 36%
• Google worldwide: 46 PB/month
• Library of Congress: 0.136 PB

Trend 1: Video is exploding
[Chart, Jan 2007]

Trend 1: Multi-year growth
• Only ~10% of digital content is currently online
[Chart, Jan 2007]

Trend 2

• Exploding need to deliver large content
  – Video, software, games, business info
• Infrastructure growth to support the delivery need

Source: Deutsche Bank, Jan 2007

Trend 2: Broadband penetration

Global broadband penetration is accelerating
• 11 million new subscribers/month globally
• 83% of US home Internet access is broadband as of June 2007

Trend 3

• Exploding need to deliver large content
  – Video, software, games, business info
• Infrastructure growth to support the delivery need
• Centralize IT to reduce costs of management, space, power, and cooling
  – Exacerbated by virtualization of infrastructure and personalization of content

Trend 3: Centralization

Power & cooling cost is escalating, fueling centralization
• CAGR (2005-10): power & cooling 11.2%, new server spend 2.7%
• In 2005, 1,000 servers cost $3.8M to power & cool over 4 years; a 2% increase in electricity cost raises that by $200K
• Worldwide expense (US: $4.5B in 2006; EPA)

Source: IDC, Sept 2006

Trend 3: Centralization

Content volume & global transfer are increasing
• Exploding need to deliver large content
  – Video, software, games, business info
• Infrastructure growth to support the delivery need
• Centralize IT to reduce costs of management, space, power, and cooling
  – Exacerbated by virtualization of infrastructure and personalization of content

Implication: more large content over longer distances, served from centralized data centers

http://sramanamitra.com/2007/10/21/speeding-up-the-internet-algorithms-guru-and-akamai-founder-tom-leighton-part-5/

"You could only get that sustained rate if you are delivering within 100 miles, due to the way current Internet protocols work."
-- Tom Leighton, MIT/Akamai, Oct 2007

Solution Approaches

• Protocol problems degrade the performance of long-distance transfers (see the back-of-the-envelope sketch below)
• Current approach: reduce distance
• Alternative approach: fix the protocol problems
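
A standard back-of-the-envelope estimate (the Mathis et al. loss-throughput relation, not taken from these slides, with assumed RTT and loss numbers) illustrates why distance hurts loss-based TCP: steady-state throughput is roughly (MSS/RTT) · 1.22/√p, so the same loss rate costs a long-RTT path far more throughput than a short one.

```python
# Back-of-the-envelope sketch (illustrative, assumed numbers):
# Mathis et al. approximation for loss-based (Reno-style) TCP throughput,
#   rate ≈ (MSS / RTT) * C / sqrt(p),  with C = sqrt(3/2) ≈ 1.22
from math import sqrt

def reno_throughput_bps(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Approximate steady-state throughput of a single Reno-style flow."""
    return (mss_bytes * 8 / rtt_s) * sqrt(1.5) / sqrt(loss_rate)

# Same 0.1% loss rate, short vs. long path (RTTs chosen for illustration):
for label, rtt in [("metro path, 5 ms RTT", 0.005),
                   ("coast-to-coast path, 80 ms RTT", 0.080)]:
    rate = reno_throughput_bps(1460, rtt, 0.001)
    print(f"{label}: ~{rate / 1e6:.0f} Mbps")
# The long path gets roughly 1/16 of the short path's throughput.
```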

Content Delivery Networks (CDNs)

• CDNs circumvent the protocol problems by placing servers all around the world
• Protocol limitations inherent in the Internet gave rise to CDNs

Alternative Solution

Distance is no longer a constraint in IT infrastructure design
