ESnet Update
ESnet/Internet2 Joint Techs, Madison, Wisconsin, July 17, 2007
Joe Burrescia, ESnet General Manager, Lawrence Berkeley National Laboratory


TRANSCRIPT

Page 1: ESnet Update ESnet/Internet2 Joint Techs   Madison, Wisconsin  July 17, 2007


ESnet Update

ESnet/Internet2 Joint Techs Madison, Wisconsin

July 17, 2007

Joe Burrescia

ESnet General Manager, Lawrence Berkeley National Laboratory

Page 2:

Outline
• ESnet's Role in DOE's Office of Science
• ESnet's Continuing Evolutionary Dimensions: Capacity, Reach, Reliability, Guaranteed Services

Page 3:

DOE Office of Science and ESnet

• “The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, … providing more than 40 percent of total funding … for the Nation’s research programs in high-energy physics, nuclear physics, and fusion energy sciences.” (http://www.science.doe.gov)

• The large-scale science that is the mission of the Office of Science is dependent on networks for

o Sharing of massive amounts of data
o Supporting thousands of collaborators world-wide
o Distributed data processing
o Distributed simulation, visualization, and computational steering
o Distributed data management

• ESnet’s mission is to enable those aspects of science that depend on networking and on certain types of large-scale collaboration

Page 4:

The Office of Science U.S. Community

[Map: institutions supported by SC, major user facilities, and SC program sites, distinguishing DOE multiprogram, program-dedicated, and specific-mission laboratories. Labs shown: Pacific Northwest National Laboratory, Ames Laboratory, Argonne National Laboratory, Brookhaven National Laboratory, Oak Ridge National Laboratory, Los Alamos National Laboratory, Lawrence Livermore and Sandia National Laboratories, Lawrence Berkeley National Laboratory, Fermi National Accelerator Laboratory, Princeton Plasma Physics Laboratory, Thomas Jefferson National Accelerator Facility, National Renewable Energy Laboratory, Stanford Linear Accelerator Center, Idaho National Laboratory, General Atomics, and Sandia National Laboratory.]

Page 5:

Footprint of SC Collaborators - Top 100 Traffic Generators

Universities and research institutes that are the top 100 ESnet users
• The top 100 data flows generate 30% of all ESnet traffic (ESnet handles about 3×10^9 flows/mo.)
• 91 of the top 100 flows are from the Labs to other institutions (shown) (CY2005 data)

Page 6:

Changing Science Environment → New Demands on the Network

• Increased capacity
o Needed to accommodate a large and steadily increasing amount of data that must traverse the network
• High-speed, highly reliable connectivity between Labs and US and international R&E institutions
o To support the inherently collaborative, global nature of large-scale science
• High network reliability
o For interconnecting components of distributed large-scale science
• New network services to provide bandwidth guarantees
o For data transfer deadlines for remote data analysis, real-time interaction with instruments, coupled computational simulations, etc.

Page 7:

Network Utilization

[Chart: ESnet accepted traffic in TBytes/month, January 1990 to June 2006.]

ESnet is currently transporting over 1.2 petabytes/month, and this volume is increasing exponentially.
• 1.04 petabytes/mo in April 2006
• 1.20 petabytes/mo in June 2006
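A 1.2 PB/month volume is easier to picture as a sustained rate. A quick back-of-the-envelope check (assuming decimal petabytes and a 30-day month; these unit conventions are assumptions, not stated on the slide):

```python
# Average throughput implied by ESnet's ~1.2 PB/month volume (June 2006).
petabytes_per_month = 1.2
bytes_total = petabytes_per_month * 1e15       # decimal petabytes assumed
seconds_per_month = 30 * 24 * 3600             # ~30-day month assumed
avg_gbps = bytes_total * 8 / seconds_per_month / 1e9
print(f"{avg_gbps:.1f} Gb/s sustained average")  # ~3.7 Gb/s
```

So the monthly figure corresponds to roughly 3.7 Gb/s flowing around the clock, before accounting for peaks.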

Page 8:

Log Plot of ESnet Monthly Accepted Traffic, January 1990 – June 2006

[Log-scale chart, 0.1 to 10,000 TBytes/month, January 1990 to January 2006; exponential fit with R² = 0.9898.]

ESnet traffic has increased by 10X every 47 months, on average, since 1990:
• Aug. 1990: 100 MBy/mo
• Oct. 1993: 1 TBy/mo (38 months later)
• Jul. 1998: 10 TBy/mo (57 months later)
• Nov. 2001: 100 TBy/mo (40 months later)
• Apr. 2006: 1 PBy/mo (53 months later)
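The "10X every 47 months" fit implies an equivalent doubling time and annual growth factor, which can be derived directly from the exponent:

```python
import math

# Derived from the slide's log-plot fit: traffic grows 10x per 47 months.
months_per_10x = 47
doubling_months = months_per_10x * math.log10(2)   # time to grow 2x
annual_factor = 10 ** (12 / months_per_10x)        # growth over 12 months
print(f"doubling every {doubling_months:.1f} months")  # ~14.1 months
print(f"~{annual_factor:.2f}x growth per year")        # ~1.80x
```

In other words, the trend line amounts to traffic doubling roughly every 14 months, or growing about 80% per year.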

Page 9:

High Volume Science Traffic Continues to Grow

• Top 100 flows are increasing as a percentage of total traffic volume

• 99% to 100% of top 100 flows are science data (100% starting mid-2005)

• A small number of large-scale science users account for a significant and growing fraction of total traffic volume

[Four charts of top-100 flow volumes (scale: 2 TB/month each) for 1/05, 6/05, 1/06, and 7/06.]

Page 10:

Who Generates ESnet Traffic? ESnet Inter-Sector Traffic Summary for June 2006

[Diagram: traffic flows between ESnet (DOE sites), commercial peering points, R&E networks (mostly universities), and international peers (almost entirely R&E sites); green = traffic coming into ESnet, blue = traffic leaving ESnet, percentages = share of total ingress or egress traffic. DOE collaborator traffic, including data, dominates.]

Traffic notes:
• More than 90% of all traffic is Office of Science
• Less than 10% is inter-Lab

DOE is a net supplier of data because DOE facilities are used by universities and commercial entities, as well as by DOE researchers.

Page 11:

ESnet's Domestic, Commercial, and International Connectivity

[Map: ESnet core hubs (IP and SDN) at SEA, SNV, CHI, NYC, DC, ATL, ALB, and elsewhere; high-speed international connections; commercial and R&E peering points (MAE-E, MAE-West, PAIX-PA, Equinix, NGIX-E, NGIX-W, PNWGPoP/PacificWave, MAXGPoP, SoXGPoP, Starlight); high-speed peerings with Abilene; links to CERN (USLHCnet, CERN+DOE funded), GÉANT (France, Germany, Italy, UK, etc.), CA*net4 (Canada), GLORIAD (Russia, China), Korea (Kreonet2), Japan (SINet), Russia (BINP), Australia (AARNet), Taiwan (TANet2), SingAREN, MREN, Netherlands, StarTap, UltraLight, USN, and AMPATH (S. America).]

ESnet provides:
• High-speed peerings with Abilene, CERN, and the international R&E networks
• Management of the full complement of global Internet routes (about 180,000 unique IPv4 routes) in order to provide DOE scientists rich connectivity to all Internet sites

Page 12:

ESnet's Physical Connectivity (Summer 2006)

[Map: ESnet IP core (packet-over-SONET optical ring and hubs, ELP/ATL/DC/CHI/NYC/SNV/SEA/ALB) with MAN rings, NLR-supplied circuits, and 42 end user sites: Office of Science sponsored (22), NNSA sponsored (12), joint sponsored (4), other sponsored (NSF LIGO, NOAA), and laboratory sponsored (6). Link legend: international (high speed), lab supplied, 10 Gb/s SDN core, 10 Gb/s IP core, 2.5 Gb/s IP core, MAN rings (≥ 10 Gb/s), OC12/GigEthernet, OC3 (155 Mb/s), 45 Mb/s and less. Sites include LBNL, SLAC, NERSC, JGI, LLNL, SNLL, LVK, LANL, SNLA, PANTEX, ORNL, ANL, FNAL, BNL, PPPL, MIT, JLAB, PNNL, LIGO, INEEL, AMES, GA, NREL, IARC, NASA Ames, Yucca Mt., Bechtel-NV, KCP, NOAA, OSTI, ORAU, SRS, OSC GTN NNSA, LLNL/LANL DC offices, LBNL DC, ORAU DC, and DOE-ALB, plus the commercial, R&E, and international peering points shown on the previous slide.]

Page 13:

ESnet LIMAN with SDN Connections

Page 14:

LIMAN and BNL
• ATLAS (A Toroidal LHC ApparatuS) is one of four detectors located at the Large Hadron Collider (LHC) at CERN
• BNL is the largest of the ATLAS Tier 1 centers and the only one in the U.S., and so is responsible for archiving and processing approximately 20 percent of the ATLAS raw data
• During a recent multi-week exercise, BNL was able to sustain an average transfer rate from CERN to its disk arrays of 191 MB/s (~1.5 Gb/s), compared to a target rate of 200 MB/s
o This was in addition to "normal" BNL site traffic
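The slide's unit conversion and how close BNL came to its target can be checked in a couple of lines (assuming decimal megabytes, i.e. 1 MB = 10^6 bytes):

```python
# BNL's sustained CERN-to-disk transfer rate vs. the exercise target.
sustained_MBps = 191   # measured average, MB/s
target_MBps = 200      # target rate, MB/s
gbps = sustained_MBps * 8 / 1000            # decimal MB -> Gb/s
pct_of_target = 100 * sustained_MBps / target_MBps
print(f"{gbps:.2f} Gb/s ({pct_of_target:.1f}% of target)")  # 1.53 Gb/s (95.5% of target)
```

This confirms the slide's ~1.5 Gb/s figure and shows the exercise reached about 95% of its goal.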

Page 15:

Chicago Area MAN with SDN Connections

Page 16:

CHIMAN: FNAL and ANL

• Fermi National Accelerator Laboratory is the only US Tier 1 center for the Compact Muon Solenoid (CMS) experiment at the LHC
• Argonne National Laboratory will house a 5-teraflop IBM BlueGene computer as part of the National Leadership Computing Facility
• Together with ESnet, FNAL and ANL will build the Chicago MAN (CHIMAN) to accommodate the vast amounts of data these facilities will generate and receive
o Five 10GE circuits will go into FNAL
o Three 10GE circuits will go into ANL
o Ring connectivity to StarLight and to the Chicago ESnet POP

Page 17:

Jefferson Laboratory Connectivity

[Diagram: the JLAB site switch connects through the Bute St CO to Eastern LITE (E-LITE), alongside Old Dominion University, W&M, JTASC, VMASC, and NASA; ESnet T320 routers at the DC MAX GigaPOP link JLAB to the ESnet core, with MATP connectivity to NYC, Atlanta, Lovitt, and Virginia Tech. Link legend: 10GE, OC192, OC48.]

Page 18:

ESnet Target Architecture: IP Core + Science Data Network Core + Metro Area Rings

[Diagram: two national 10–50 Gb/s cores (production IP core and Science Data Network core) with hubs at Seattle, Sunnyvale, LA, San Diego, Albuquerque, Denver, Chicago, Atlanta, New York, and Washington, DC; metropolitan area rings connect primary DOE Labs as loops off the backbone; international connections to Canada (CANARIE), Europe (GÉANT), CERN, Asia-Pacific, Australia, and South America (AMPATH). Legend: IP core hubs, SDN hubs, possible hubs.]

Page 19:

Reliability

[Chart: ESnet site availability, grouped as "5 nines", "4 nines", and "3 nines"; dually connected sites are indicated.]
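The "nines" labels map directly to a downtime budget. A short calculation of the allowed outage minutes per year for each availability class:

```python
# Allowed downtime per year implied by "N nines" of availability.
minutes_per_year = 365 * 24 * 60  # 525,600
for nines in (3, 4, 5):
    availability = 1 - 10 ** -nines          # e.g. 0.99999 for 5 nines
    downtime = minutes_per_year * (1 - availability)
    print(f'"{nines} nines" ({availability:.5f}) -> {downtime:.1f} min/yr')
# 3 nines allow ~525.6 min/yr (~8.8 hours); 5 nines allow only ~5.3 min/yr
```

So a "5 nines" site can be unreachable for only about five minutes in a whole year, which is why dual connectivity matters.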

Page 20:

Guaranteed Services Using Virtual Circuits
• Traffic isolation and traffic engineering
– Provides for high-performance, non-standard transport mechanisms that cannot co-exist with commodity TCP-based transport
– Enables the engineering of explicit paths to meet specific requirements
• e.g. bypass congested links, using lower bandwidth, lower latency paths
• Guaranteed bandwidth [Quality of Service (QoS)]
– Addresses deadline scheduling
• Where fixed amounts of data have to reach sites on a fixed schedule, so that the processing does not fall far enough behind that it could never catch up – very important for experiment data analysis
• Reduces the cost of handling high bandwidth data flows
– Highly capable routers are not necessary when every packet goes to the same place
– Lower cost (by a factor of 5x) switches can route the packets
• End-to-end connections are required between Labs and collaborator institutions
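Deadline scheduling reduces to a simple sizing question: given a fixed dataset and a fixed deadline, what sustained rate must the circuit guarantee? A minimal sketch (the 10 TB / 8 hour example is illustrative, not from the slide):

```python
def required_gbps(data_terabytes: float, deadline_hours: float) -> float:
    """Minimum sustained rate to move a fixed dataset by a deadline."""
    bits = data_terabytes * 1e12 * 8          # decimal TB -> bits
    return bits / (deadline_hours * 3600) / 1e9

# e.g. 10 TB of experiment data due at a remote analysis site in 8 hours
print(f"{required_gbps(10, 8):.2f} Gb/s")  # 2.78 Gb/s
```

A bandwidth-guaranteed virtual circuit lets the site reserve that rate in advance instead of hoping best-effort IP delivers it.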

Page 21:

OSCARS: Guaranteed Bandwidth VC Service For SC Science

• ESnet On-demand Secured Circuits and Advanced Reservation System (OSCARS)
• To ensure compatibility, the design and implementation is done in collaboration with the other major science R&E networks and end sites:
o Internet2: Bandwidth Reservation for User Work (BRUW) – development of a common code base
o GÉANT: Bandwidth on Demand (GN2-JRA3), Performance and Allocated Capacity for End-users (SA3-PACE), and Advance Multi-domain Provisioning System (AMPS) – extends to NRENs
o BNL: TeraPaths – a QoS-enabled collaborative data sharing infrastructure for peta-scale computing research
o GA: Network Quality of Service for Magnetic Fusion Research
o SLAC: Internet End-to-end Performance Monitoring (IEPM)
o USN: Experimental Ultra-Scale Network Testbed for Large-Scale Science
• In its current phase this effort is being funded as a research project by the Office of Science, Mathematical, Information, and Computational Sciences (MICS) Network R&D Program
• A prototype service has been deployed as a proof of concept
o To date more than 20 accounts have been created for beta users, collaborators, and developers
o More than 100 reservation requests have been processed
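To make the idea of a reservation request concrete, here is a purely hypothetical sketch of the information an advance bandwidth reservation carries: endpoints, a guaranteed rate, and a time window. The field names and endpoint labels are invented for illustration; the real OSCARS request format is defined by the project, not shown on this slide.

```python
from dataclasses import dataclass

@dataclass
class CircuitReservation:
    # Hypothetical fields, for illustration only.
    source: str          # ingress endpoint, e.g. a site's ESnet router
    sink: str            # egress endpoint, possibly in another domain
    bandwidth_mbps: int  # guaranteed rate for the virtual circuit
    start: str           # reservation window start (ISO 8601)
    end: str             # reservation window end (ISO 8601)

# A made-up example: reserve 1 Gb/s for a week-long transfer campaign.
req = CircuitReservation("site-a-router", "site-b-router", 1000,
                         "2006-06-01T00:00Z", "2006-06-08T00:00Z")
print(req.bandwidth_mbps)  # 1000
```

The scheduling component's job is then to admit only the set of requests whose rates fit within link capacity during each overlapping window.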

Page 22:

OSCARS – BRUW Interdomain Interoperability Demonstration

[Diagram: a label-switched path (LSP) from a source at Sunnyvale, CA through Chicago, IL on the ESnet/OSCARS side, handed off to the Abilene/BRUW side through Chicago, IL to a sink at Indianapolis, IN, with numbered setup steps 1–4.]

• The first interdomain, automatically configured, virtual circuit between ESnet and Abilene was created on April 6, 2005

Page 23:

A Few URLs
• ESnet Home Page
o http://www.es.net
• National Labs and User Facilities
o http://www.sc.doe.gov/sub/organization/organization.htm
• ESnet Availability Reports
o http://calendar.es.net/
• OSCARS Documentation
o http://www.es.net/oscars/index.html