TCP Performance over a 2.5 Gbit/s Transatlantic Circuit


Page 1: TCP Performance over a  2.5 Gbit/s Transatlantic Circuit

TCP Performance over a 2.5 Gbit/s Transatlantic Circuit

HNF-Europe Workshop 2002, ECMWF, UK, 11 October 2002

J.P. Martin-Flatin, CERN

S. Ravot, Caltech

Page 2: TCP Performance over a  2.5 Gbit/s Transatlantic Circuit

The DataTAG Project

http://www.datatag.org/

Page 3: TCP Performance over a  2.5 Gbit/s Transatlantic Circuit


Project Partners

EU-funded partners: CERN (CH), INFN (IT), INRIA (FR), PPARC (UK) and University of Amsterdam (NL)

U.S.-funded partners: Caltech, UIC, UMich, Northwestern University, StarLight

Collaborations: GEANT, Internet2, Abilene, Canarie, SLAC, ANL, FNAL, LBNL, etc.

Project coordinator: CERN

Page 4: TCP Performance over a  2.5 Gbit/s Transatlantic Circuit


About DataTAG

Budget: EUR 3.98M

Funded manpower: 15 FTE/year (21 people recruited)

26 people externally funded

Start date: January 1, 2002

Duration: 2 years

Page 5: TCP Performance over a  2.5 Gbit/s Transatlantic Circuit


Three Objectives

• Build a testbed to experiment with massive file transfers across the Atlantic

• High-performance protocols for gigabit networks underlying data-intensive Grids

• Interoperability between several major Grid projects in Europe and the USA

Page 6: TCP Performance over a  2.5 Gbit/s Transatlantic Circuit


Grids

[Diagram: interoperability between the DataTAG and iVDGL Grid testbeds — GIIS servers (edt004.cnaf.infn.it, Mds-vo-name='Datatag'; giis.ivdgl.org, mds-vo-name=ivdgl-glue and mds-vo-name=glue), a Resource Broker, and gatekeepers/computing elements with PBS, Fork, Condor and LSF job managers at INFN-CNAF (grid006f.cnaf.infn.it, edt001/edt002/edt004.cnaf.infn.it), ISI (dc-user.isi.edu), ANL (rod.mcs.anl.gov), UChicago (hamachi.cs.uchicago.edu), and the US-CMS, US-ATLAS and Padova sites.]

Page 7: TCP Performance over a  2.5 Gbit/s Transatlantic Circuit

Testbed

Page 8: TCP Performance over a  2.5 Gbit/s Transatlantic Circuit


Objectives

Provisioning of a 2.5 Gbit/s transatlantic circuit between CERN (Geneva) and StarLight (Chicago)

Dedicated to research (no production traffic)

Multi-vendor testbed with layer-2 and layer-3 capabilities:

Cisco

Juniper

Alcatel

Extreme Networks

Testbed open to other Grid projects

Page 9: TCP Performance over a  2.5 Gbit/s Transatlantic Circuit


2.5 Gbit/s Transatlantic Circuit

Operational since 20 August 2002 (T-Systems)

Circuit initially connected to Cisco 7606 routers (layer 3)

High-end PC servers at CERN and StarLight:

6x SuperMicro, 2x 2.4 GHz

SysKonnect SK-9843 GbE cards (2 per PC)

can saturate the circuit with TCP traffic

ready for upgrade to 10 Gbit/s

Deployment of layer-2 equipment under way

Testbed fully deployed by 31 October 2002

Page 10: TCP Performance over a  2.5 Gbit/s Transatlantic Circuit

R&D Connectivity Between Europe & USA

[Map: R&D connectivity between Europe and the USA — European networks (SURFnet (NL), CERN (CH), SuperJANET4 (UK), GARR-B (IT), INRIA and VTHD/ATRIUM (FR)) interconnected via GEANT, and linked by 3x 2.5 Gbit/s transatlantic circuits through New York and StarLight to Abilene, ESNET, MREN and Canarie.]

Page 11: TCP Performance over a  2.5 Gbit/s Transatlantic Circuit

Network Research

Page 12: TCP Performance over a  2.5 Gbit/s Transatlantic Circuit


Network Research Activities

Enhance TCP performance for massive file transfers

Monitoring

QoS: LBE (Scavenger)

Bandwidth reservation: AAA-based bandwidth on demand

lightpath managed as a Grid resource

Page 13: TCP Performance over a  2.5 Gbit/s Transatlantic Circuit


TCP Performance Issues: The Big Picture

TCP's current congestion control (AIMD) algorithms are not suited to gigabit networks

Line errors are interpreted as congestion

Delayed ACKs + large cwnd + large RTT = problem

Page 14: TCP Performance over a  2.5 Gbit/s Transatlantic Circuit


AIMD Algorithms

Van Jacobson, SIGCOMM 1988

Additive Increase:

a TCP connection slowly increases its bandwidth use in the absence of loss

forever, unless we run out of send/receive buffers

TCP is greedy: no attempt to reach a stationary state

Slow start: increase after each ACK

Congestion avoidance: increase once per RTT

Multiplicative Decrease: a TCP connection reduces its bandwidth use by half after a loss is detected
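
As a rough illustration of the AIMD behaviour just described, here is a minimal sketch (illustrative Python, not the actual kernel logic; the constants are assumptions) of the per-event window updates:

```python
# Minimal AIMD sketch (illustrative only): slow start, congestion avoidance
# and multiplicative decrease, expressed as per-event updates on cwnd.
MSS = 1460  # segment size in bytes (assumed, matches the slides' MSS)

def on_ack(cwnd: float, ssthresh: float) -> float:
    """Additive increase: grow cwnd on each ACK received."""
    if cwnd < ssthresh:
        return cwnd + MSS              # slow start: +1 MSS per ACK (doubles per RTT)
    return cwnd + MSS * MSS / cwnd     # congestion avoidance: roughly +1 MSS per RTT

def on_loss(cwnd: float) -> tuple[float, float]:
    """Multiplicative decrease: halve the window when a loss is detected."""
    ssthresh = cwnd / 2
    return ssthresh, ssthresh          # (new cwnd, new ssthresh)
```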

Page 15: TCP Performance over a  2.5 Gbit/s Transatlantic Circuit

Congestion Window (cwnd)

[tcptrace plot of the congestion window (cwnd): Slow Start and Congestion Avoidance phases, SSTHRESH, the average cwnd over the last 10 samples, and the average cwnd over the entire lifetime of the connection.]

Page 16: TCP Performance over a  2.5 Gbit/s Transatlantic Circuit


Disastrous Effect of Packet Loss on TCP in WANs (1/2)

[Plot: TCP throughput (Mbit/s) as a function of time (s), MSS = 1460.]

Page 17: TCP Performance over a  2.5 Gbit/s Transatlantic Circuit


Disastrous Effect of Packet Loss on TCP in WANs (2/2)

Long time to recover from a single loss:

TCP should react to congestion rather than packet loss (line errors, transient faults in equipment)

TCP should recover more quickly from a loss

TCP is much more sensitive to packet loss in WANs than in LANs

Page 18: TCP Performance over a  2.5 Gbit/s Transatlantic Circuit


Measurements with Different MTUs (1/2)

Experimental environment:

Linux 2.4.19

Traffic generated by iperf; average throughput over the last 5 seconds

Single TCP stream

RTT = 119 ms

Duration of each test: 2 hours

Transfers from Chicago to Geneva

MTUs:

set on the NIC of the PC (ifconfig)

POS MTU set to 9180

Max MTU with Linux 2.4.19: 9000

Page 19: TCP Performance over a  2.5 Gbit/s Transatlantic Circuit


Measurements with Different MTUs (2/2)

[Plot: TCP throughput (Mbit/s) as a function of time (s) for MTU = 1500, 4000 and 9000.]

TCP max: 990 Mbit/s (MTU=9000)

UDP max: 957 Mbit/s (MTU=1500)

Page 20: TCP Performance over a  2.5 Gbit/s Transatlantic Circuit


Tools Used to Perform Measurements

We used several tools to investigate TCP performance issues:

Generation of TCP flows: iperf and gensink

Capture of packet flows: tcpdump

tcpdump output => tcptrace => xplot

Some tests performed with SmartBits 2000

Currently shopping for a traffic analyzer

Suggestions?

Page 21: TCP Performance over a  2.5 Gbit/s Transatlantic Circuit


Responsiveness

The responsiveness measures how quickly we go back to using the network link at full capacity after experiencing a loss.

Responsiveness = C * RTT^2 / (2 * inc)

[Plot: TCP responsiveness — recovery time (s) as a function of RTT (ms), for C = 622 Mbit/s, 2.5 Gbit/s and 10 Gbit/s.]

Page 22: TCP Performance over a  2.5 Gbit/s Transatlantic Circuit

Characterization of the Problem

Capacity                RTT          # inc      Responsiveness
9.6 kbit/s (WAN 1988)   max: 40 ms   1          0.6 ms
10 Mbit/s (LAN 1988)    max: 20 ms   8          ~150 ms
100 Mbit/s (LAN 2002)   max: 5 ms    20         ~100 ms
622 Mbit/s              120 ms       ~2,900     ~6 min
2.5 Gbit/s              120 ms       ~11,600    ~23 min
10 Gbit/s               120 ms       ~46,200    ~1h 30min

inc size = MSS = 1,460
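
For a rough cross-check of the table, here is a small sketch (my own, not from the slides) that recomputes the last three rows from the responsiveness formula above; it lands in the same ballpark as the quoted figures (within roughly 15%, the gap presumably due to rounding or effective payload rates):

```python
# Rough recomputation of the responsiveness table from
# responsiveness ~= C * RTT^2 / (2 * inc), with inc = MSS = 1460 bytes per RTT.
MSS_BITS = 1460 * 8  # increment per RTT, in bits

def responsiveness(capacity_bps: float, rtt_s: float) -> float:
    """Seconds to grow cwnd back from C*RTT/2 to C*RTT at one MSS per RTT."""
    increments = capacity_bps * rtt_s / (2 * MSS_BITS)  # the "# inc" column
    return increments * rtt_s

for label, cap, rtt in [("622 Mbit/s", 622e6, 0.120),
                        ("2.5 Gbit/s", 2.5e9, 0.120),
                        ("10 Gbit/s",  10e9,  0.120)]:
    print(f"{label}: ~{responsiveness(cap, rtt) / 60:.0f} min "
          f"({cap * rtt / (2 * MSS_BITS):.0f} increments)")
```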

Page 23: TCP Performance over a  2.5 Gbit/s Transatlantic Circuit


What Can We Do?

To achieve high throughput over a high-latency/high-bandwidth network, we need to:

Set the initial slow start threshold (ssthresh) to an appropriate value for the delay and bandwidth of the link

Avoid loss by limiting the max size of cwnd (see the sketch after this list)

Recover fast in case of loss:

larger cwnd increment => better responsiveness

larger packet size (Jumbo frames)

less aggressive decrease algorithm
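
One practical knob behind the first two items above is sizing the socket buffers to the bandwidth*delay product, which bounds cwnd. A minimal sketch, assuming illustrative capacity and RTT values rather than measured ones (the kernel may clamp the requested sizes to its configured maxima):

```python
import socket

# Assumed path parameters (illustrative, not from the slides):
# ~1 Gbit/s per GbE NIC, ~120 ms round-trip time.
CAPACITY_BPS = 1_000_000_000   # link capacity in bit/s
RTT_S = 0.120                  # round-trip time in seconds

# Bandwidth*delay product in bytes: the window needed to keep the pipe full (~15 MB here).
bdp_bytes = int(CAPACITY_BPS * RTT_S / 8)

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Capping send/receive buffers near the BDP bounds cwnd: large enough to fill
# the pipe, small enough to limit the size of loss bursts.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bdp_bytes)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bdp_bytes)
print(f"BDP = {bdp_bytes / 2**20:.1f} MiB")
```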

Page 24: TCP Performance over a  2.5 Gbit/s Transatlantic Circuit


Delayed ACKs

RFC 2581:

Assumption: one ACK per packet

Delayed ACKs: one ACK every second packet

Responsiveness multiplied by two

Disastrous effect when RTT large and cwnd large

cwnd(i+1) = cwnd(i) + SMSS * SMSS / cwnd(i)   (increase per ACK in congestion avoidance)
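
A minimal sketch (an illustration, not code from the project) of why delayed ACKs matter here: since the per-ACK increase is SMSS*SMSS/cwnd, receiving one ACK per two segments roughly halves the growth per RTT and thus doubles the recovery time:

```python
# Idealized congestion-avoidance growth over one RTT, with and without delayed ACKs.
SMSS = 1460  # bytes

def cwnd_growth_per_rtt(cwnd_bytes: float, acks_per_rtt: int) -> float:
    """Apply the RFC 2581 increase cwnd += SMSS*SMSS/cwnd once per ACK."""
    for _ in range(acks_per_rtt):
        cwnd_bytes += SMSS * SMSS / cwnd_bytes
    return cwnd_bytes

cwnd = 100 * SMSS                       # some current window, in bytes
segments_in_flight = int(cwnd / SMSS)

per_packet = cwnd_growth_per_rtt(cwnd, segments_in_flight) - cwnd
delayed    = cwnd_growth_per_rtt(cwnd, segments_in_flight // 2) - cwnd
print(f"growth per RTT, one ACK per packet:    {per_packet:.0f} bytes (~1 MSS)")
print(f"growth per RTT, one ACK per 2 packets: {delayed:.0f} bytes (~MSS/2)")
```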

Page 25: TCP Performance over a  2.5 Gbit/s Transatlantic Circuit


Research Directions

New fairness principle

Change multiplicative decrease:

do not divide by two

Change additive increase:

successive dichotomies

local and global stability

More stringent definition of congestion

Estimation of the available capacity and bandwidth*delay product:

on the fly

cached

Page 26: TCP Performance over a  2.5 Gbit/s Transatlantic Circuit


Related Work

Sally Floyd, ICIR: Internet-Draft "High Speed TCP for Large Congestion Windows"

Steven Low, Caltech: Fast TCP

Tom Kelly, U Cambridge: Scalable TCP

Web100 and Net100

PFLDnet 2003 workshop:

http://www.datatag.org/pfldnet2003/