
Page 1: ESnet: DOE’s Science Network, GNEW March 2004

ESnet: DOE’s Science Network

GNEW, March 2004

William E. Johnston, ESnet Manager and Senior Scientist

Michael S. Collins, Stan Kluz, Joseph Burrescia, and James V. Gagliardi, ESnet Leads

and the ESnet Team

Lawrence Berkeley National Laboratory

Page 2: ESnet: DOE’s Science Network, GNEW March 2004

ESnet Provides

• High-bandwidth backbone and connections for Office of Science Labs and programs

• High-bandwidth peering with the US, European, and Japanese research and education networks

• SecureNet (DOE classified R&D) as an overlay network

• Science services – Grid and collaboration services

• User support: ESnet “owns” all network trouble tickets (even from end users) until they are resolved
o one-stop shopping for user network problems
o 7x24 coverage
o both network and science services problems

Page 3: ESnet: DOE’s Science Network, GNEW March 2004

ESnet Connects DOE Facilities and Collaborators

[Network map: the ESnet backbone optical ring and hubs (SEA, SNV, ELP, ALB, CHI, NYC, DC, ATL) serving 42 end user sites – Office of Science sponsored (22), NNSA sponsored (12), joint sponsored (3), laboratory sponsored (6), and other sponsored (NSF LIGO, NOAA). Sites include LBNL, SLAC, LLNL, SNLL, JGI, NERSC, PNNL, INEEL, LANL, SNLA, Pantex, KCP, AlliedSignal, ORNL, JLab, SRS, PPPL, BNL, FNAL, Ames, ANL, MIT, NREL, GA, NOAA, OSTI, ORAU, ARM, LIGO, Yucca Mt, Bechtel, the DC-area offices (ANL-DC, INEEL-DC, ORAU-DC, LLNL/LANL-DC, 4xLAB-DC), DOE-ALB, and GTN (NNSA). Peering points: MAE-E, MAE-W, PAIX-E, PAIX-W, Fix-W, Chicago NAP, NY-NAP, Equinix, Starlight, and PNWG, plus multiple Abilene cross-connects. International peers: GEANT (Germany, France, Italy, UK, etc.), SINET and KDDI (Japan), Japan–Russia (BINP), CA*net4, CERN, MREN, Netherlands, Russia, StarTap, Taiwan (ASCC, TANet2), Australia, SingAREN, France, and Switzerland. Link legend: international (high speed), OC192 (10 Gb/s optical), OC48 (2.5 Gb/s optical), Gigabit Ethernet (1 Gb/s), OC12 ATM (622 Mb/s), OC12, OC3 (155 Mb/s), T3 (45 Mb/s), T1-T3, T1 (1.5 Mb/s); the backbone ring is carried over Qwest ATM.]

Page 4: ESnet: DOE’s Science Network, GNEW March 2004

ESnet is Driven by the Needs of DOE Science

• August 13-15, 2002 workshop, organized by the Office of Science
o Mary Anne Scott (chair), Dave Bader, Steve Eckstrand, Marvin Frazier, Dale Koelling, Vicky White
o Workshop panel chairs: Ray Bair and Deb Agarwal; Bill Johnston and Mike Wilde; Rick Stevens; Ian Foster and Dennis Gannon; Linda Winkler and Brian Tierney; Sandy Merola and Charlie Catlett

• Focused on the science requirements that drive
o advanced network infrastructure
o middleware research
o network research
o network governance model

• Available at www.es.net/#research

Page 5: ESnet: DOE’s Science Network, GNEW March 2004

Eight Major DOE Science Areas Analyzed at the August ’02 Workshop

For each discipline the workshop captured the vision for the future process of science, the characteristics that motivate high-speed nets, and the resulting networking and middleware requirements. The climate rows of the table:

Climate (near term)
• Vision: analysis of model data by selected communities that have high-speed networking (e.g., NCAR and NERSC)
• Characteristics: a few data repositories, many distributed computing sites; NCAR – 20 TBy; NERSC – 40 TBy; ORNL – 40 TBy
• Requirements (networking and middleware): authenticated data streams for easier site access through firewalls; server-side data processing (computing and cache embedded in the net); information servers for global data catalogues

Climate (5 yr)
• Vision: enable the analysis of model data by all of the collaborating community
• Characteristics: add many simulation elements/components as understanding increases; 100 TBy per 100 yr of generated simulation data, 1-5 PBy/yr (just at NCAR); distribute large chunks of data to major users for post-simulation analysis
• Requirements (networking and middleware): robust access to large quantities of data; reliable data/file transfer (across system/network failures)

Climate (5+ yr)
• Vision: integrated climate simulation that includes all high-impact factors
• Characteristics: 5-10 PBy/yr (at NCAR); add many diverse simulation elements/components, including from other disciplines – this must be done with distributed, multidisciplinary simulation; virtualized data to reduce storage load
• Requirements (networking and middleware): robust networks supporting distributed simulation – adequate bandwidth and latency for remote analysis and visualization of massive datasets; quality-of-service guarantees for distributed simulations; virtual data catalogues and work planners for reconstituting the data on demand

Page 6: ESnet: DOE’s Science Network, GNEW March 2004

Evolving Qualitative Requirements for Network Infrastructure

• In the near term (1-3 yrs), applications need high bandwidth

• In 2-4 yrs, the requirement is for high bandwidth and QoS

• In 3-5 yrs, for high bandwidth, QoS, and network-resident cache and compute elements

• In 4-7 yrs, for high bandwidth, QoS, network-resident cache and compute elements, and robust bandwidth (multiple paths)

[Diagram: instrument (I), compute (C), storage (S), and cache-and-compute (C&C) elements interconnected end-to-end in each time frame (1-3, 2-4, 3-5, and 4-7 yrs), with end-to-end bandwidth growing from 1-40 Gb/s to 100-200 Gb/s over guaranteed-bandwidth paths.]

Page 7: ESnet: DOE’s Science Network, GNEW March 2004

Evolving Quantitative Science Requirements for Networks

For each science area: end-to-end throughput today / in 5 years / in 5-10 years, with remarks.

• High Energy Physics: 0.5 Gb/s today; 100 Gb/s in 5 yrs; 1000 Gb/s in 5-10 yrs – high bulk throughput

• Climate (data and computation): 0.5 Gb/s today; 160-200 Gb/s in 5 yrs; N x 1000 Gb/s in 5-10 yrs – high bulk throughput

• SNS NanoScience: not yet started; 1 Gb/s in 5 yrs; 1000 Gb/s plus QoS for the control channel in 5-10 yrs – remote control and time-critical throughput

• Fusion Energy: 0.066 Gb/s today (500 MB/s burst); 0.198 Gb/s in 5 yrs (500 MB per 20 sec. burst); N x 1000 Gb/s in 5-10 yrs – time-critical throughput

• Astrophysics: 0.013 Gb/s today (1 TBy/week); N*N multicast in 5 yrs; 1000 Gb/s in 5-10 yrs – computational steering and collaborations

• Genomics (data and computation): 0.091 Gb/s today (1 TBy/day); 100s of users in 5 yrs; 1000 Gb/s plus QoS for the control channel in 5-10 yrs – high throughput and steering
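The “today” figures follow directly from the quoted data volumes. As a sanity check, a minimal Python sketch (the helper name is ours; the volumes are from the table, and 1 TBy is assumed to mean 10^12 bytes) converts a sustained volume into an average rate:

```python
def avg_rate_gbps(tbytes: float, seconds: float) -> float:
    """Average throughput in Gb/s for a volume in terabytes
    moved over a given number of seconds (1 TBy = 1e12 bytes)."""
    return tbytes * 1e12 * 8 / seconds / 1e9

WEEK = 7 * 24 * 3600
DAY = 24 * 3600

print(avg_rate_gbps(1, WEEK))  # ~0.013 Gb/s -- Astrophysics, 1 TBy/week
print(avg_rate_gbps(1, DAY))   # ~0.093 Gb/s -- Genomics, close to the table's 0.091
```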

Page 8: ESnet: DOE’s Science Network, GNEW March 2004

New Strategic Directions to Address Needs of DOE Science

• June 3-5, 2003 workshop, organized by the ESSC
o Workshop chair: Roy Whitney, JLab
o Report editors: Roy Whitney, JLab; Larry Price, ANL
o Workshop panel chairs: Wu-chun Feng, LANL; William Johnston, LBNL; Nagi Rao, ORNL; David Schissel, GA; Vicky White, FNAL; Dean Williams, LLNL

• Focused on what was needed to achieve the science-driven network requirements of the previous workshop

• Both workshop reports are available at www.es.net/#research

Page 9: ESnet: DOE’s Science Network, GNEW March 2004

ESnet Strategic Directions

• Developing a 5 yr strategic plan for how to provide the required capabilities identified by the workshops
o Between DOE Labs and their major collaborators in the university community we must address
- scalable bandwidth
- reliability
- quality of service
o Must also address an appropriate set of middleware services supporting Grids and human collaboration

Page 10: ESnet: DOE’s Science Network, GNEW March 2004

ESnet Connects DOE Facilities and Collaborators

[Repeat of the network map from Page 3: the ESnet backbone optical ring and hubs, 42 end user sites, peering points, and international peers.]

Page 11: ESnet: DOE’s Science Network, GNEW March 2004

While ESnet Has One Backbone Provider, There Are Many Local Loop Providers to Get to the Sites

[Network map: the same sites and hubs as on Page 3, with local loops color-coded by provider – Qwest owned, Qwest contracted, Touch America (bankrupt), MCI contracted/owned, site contracted/owned, FTS2000 contracted/owned, SBC (PacBell) contracted/owned, Sprint contracted/owned, Level3, SDSC/CENIC, and LBNL/CalREN2.]

Page 12: ESnet: DOE’s Science Network, GNEW March 2004

ESnet Logical Infrastructure Connects the DOE Community With its Collaborators

ESnet provides complete access to the Internet by managing the full complement of global Internet routes (about 150,000) at ten general/commercial peering points, plus high-speed peerings with Abilene and the international research and education networks.

Page 13: ESnet: DOE’s Science Network, GNEW March 2004

ESnet Traffic

Annual traffic growth over the past five years has increased from 1.7x per year to just over 2.0x per year.
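To see what sustained 2x annual growth implies, here is a small illustrative Python sketch; the starting volume is a made-up placeholder, not an ESnet measurement:

```python
def project_traffic(start_tby_per_month: float, growth: float, years: int):
    """Project monthly traffic volume under a fixed annual growth factor."""
    return [start_tby_per_month * growth ** y for y in range(years + 1)]

# Hypothetical 100 TBy/month today, doubling yearly for five years:
for year, vol in enumerate(project_traffic(100.0, 2.0, 5)):
    print(f"year {year}: {vol:,.0f} TBy/month")
# Doubling annually compounds to a 32x increase in five years.
```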

Page 14: ESnet: DOE’s Science Network, GNEW March 2004

ESnet Appropriate Use Policy (AUP)

All ESnet traffic must originate and/or terminate at an ESnet site (no transit traffic is allowed). For example, a commercial site cannot exchange traffic with an international site across ESnet. This is enforced via routing restrictions.
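The AUP is a simple predicate on the two endpoints of a flow. A tiny Python sketch (the sector names are ours, chosen to match the sectors in the figure below) captures the intent of the routing policy:

```python
ESNET_SECTORS = {"doe-site"}  # endpoints inside ESnet
EXTERNAL_SECTORS = {"commercial", "r&e", "international"}

def aup_allows(src: str, dst: str) -> bool:
    """Allowed only if at least one endpoint is an ESnet site:
    ESnet carries no transit traffic between external sectors."""
    return src in ESNET_SECTORS or dst in ESNET_SECTORS

assert aup_allows("doe-site", "international")        # lab <-> foreign collaborator: OK
assert not aup_allows("commercial", "international")  # pure transit: blocked
```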

Who Generates Traffic, and Where Does it Go?

[Diagram: ESnet inter-sector traffic summary, Jan 2003, showing ingress (green) and egress (blue) flows between the DOE sites, the commercial sector, the R&E sector, and international peers via the peering points, each labeled as a percentage of total ingress or egress traffic; DOE collaborator traffic, including data, is labeled 72% and 53%, and roughly 25% of traffic stays within ESnet. DOE is a net supplier of data because DOE facilities are used by university and commercial researchers as well as by DOE researchers.]

Page 15: ESnet: DOE’s Science Network, GNEW March 2004

ESnet Site Architecture

[Diagram: the backbone (optical fiber ring) connects the hubs – New York (AOA), Chicago (CHI), Sunnyvale (SNV), Atlanta (ATL), Washington, DC (DC), and El Paso (ELP) – which house the backbone routers and local loop connection points; the hubs have many connections (42 in all). A local loop runs from a hub to each site, terminating on an ESnet border router and DMZ, which connect to the site gateway router and site LAN. ESnet responsibility ends at the border router and DMZ; the gateway router and LAN are the site’s responsibility.]

Page 16: ESnet: DOE’s Science Network, GNEW March 2004

SecureNet

• SecureNet connects 10 NNSA (Defense Programs) Labs

• Essentially a VPN with special encrypters
o The NNSA sites exchange encrypted ATM traffic
o The data is unclassified when ESnet gets it, because it is encrypted with an NSA-certified encrypter before it leaves the NNSA sites

• Runs over the ESnet core backbone as a layer 2 overlay – that is, the SecureNet encrypted ATM is transported over ESnet’s Packet-over-SONET infrastructure by encapsulating the ATM in MPLS using Juniper CCC (the layering is sketched below)
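A minimal conceptual Python sketch of that layering; the class and field names are ours, purely to show which layer wraps which (it is not router code):

```python
from dataclasses import dataclass

@dataclass
class AtmCell:
    """Encrypted ATM leaving an NNSA site; the payload is already
    NSA-encrypted and therefore opaque (unclassified) to ESnet."""
    vpi_vci: int
    payload: bytes

@dataclass
class MplsFrame:
    """CCC wraps the ATM traffic in an MPLS label-switched path."""
    label: int
    inner: AtmCell

@dataclass
class PosFrame:
    """The MPLS frame rides ESnet's Packet-over-SONET backbone."""
    inner: MplsFrame
```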

Page 17: ESnet: DOE’s Science Network, GNEW March 2004

SecureNet – Mid 2003

[Diagram: primary and backup SecureNet paths over the ESnet backbone hubs (SNV, ELP, CHI, AOA, DC, ATL) linking the NNSA sites – SNLL, LLNL, LANL, SNLA, DOE-AL, Pantex, KCP, ORNL, SRS, and GTN.]

SecureNet encapsulates payload-encrypted ATM in MPLS using the Juniper router Circuit Cross Connect (CCC) feature.

Page 18: ESnet: DOE’s Science Network, GNEW March 2004

IPv6-ESnet Backbone

• IPv6 is the next generation Internet protocol, and ESnet is working on addressing deployment issues
- one big improvement: while IPv4 has 32-bit addresses (about 4 x 10^9, which we are running short of), IPv6 has 128-bit addresses (about 3.4 x 10^38, which we are never likely to run short of) – the arithmetic is checked in the snippet below
- another big improvement is native support for encryption of data

[Diagram: the IPv6-ESnet backbone over the hubs at Sunnyvale, Chicago, New York, DC, Atlanta, Albuquerque, and El Paso, with 7206 routers at the edges; dual-stack IPv4/IPv6 sites (e.g., SLAC, LBNL, BNL, TWC) alongside IPv4-only sites (e.g., ANL, FNAL); IPv6 peerings with Abilene, the 6BONE, and the distributed 6TAP at PAIX and StarLight (18, 9, and 7 peers at the major exchange points).]
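The scale difference is easy to verify; a one-liner Python sketch computes both address-space sizes:

```python
# IPv4: 32-bit addresses; IPv6: 128-bit addresses.
ipv4 = 2 ** 32
ipv6 = 2 ** 128
print(f"IPv4: {ipv4:.2e} addresses")        # ~4.29e+09
print(f"IPv6: {ipv6:.2e} addresses")        # ~3.40e+38
print(f"IPv6/IPv4 ratio: {ipv6 // ipv4:.2e}")  # ~7.92e+28
```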

Page 19: ESnet: DOE’s Science Network, GNEW March 2004

Operating Science Mission Critical Infrastructure

• ESnet is a visible and critical piece of DOE science infrastructure
o if ESnet fails, tens of thousands of DOE and university users know it within minutes, if not seconds

• This requires high reliability and high operational security in the ESnet operational services – the systems that are integral to the operation and management of the network

o Secure and redundant mail and web systems are central to the operation and security of ESnet
- trouble tickets are handled by email
- engineering communication is by email
- the engineering database interface is via the web

o Secure network access to hub equipment
o Backup secure telephony access to hub equipment
o 24x7 help desk (joint with NERSC)
o 24x7 on-call network engineer

Page 20: ESnet: DOE’s Science Network, GNEW March 2004

Disaster Recovery and Stability

• The network operational services must remain available even if, for example, the West Coast is disabled by a massive earthquake
o ESnet engineers in four locations across the country
o Full and partial engineering databases and network operational service replicas in three locations
o Telephone modem backup access to all hub equipment

• All core network hubs are located in commercial telecommunication facilities with high physical security and backup power

Page 21: ESnet: DOE’s Science Network, GNEW March 2004

Disaster Recovery and Stability

• The ESnet backbone has operated without interruption through
- the Northern California power blackout of 2000
- the 9/11 attacks
- the September 2003 Northeast States power blackout

[Diagram: engineers and infrastructure distributed across the hubs and sites (SEA, SNV, ALB, ELP, CHI, NYC, DC, ATL, SDSC, TWC, LBNL, AMES, PPPL, BNL). The primary location has engineers, a 24x7 NOC, and generator-backed power running Spectrum (the network management system), DNS (name to IP address translation), the engineering, load, and configuration databases, public and private web, e-mail (server and archive), the PKI certificate repository and revocation lists, and the collaboratory authorization service. Remote engineers maintain partial duplicate infrastructure (engineering, load, and configuration servers plus DNS), and full replication of the NOC databases and servers and the Science Services databases is currently being deployed.]

Page 22: ESnet: DOE’s Science Network, GNEW March 2004

Maintaining Science Mission Critical Infrastructure in the Face of Cyberattack

• A phased security architecture is being implemented to protect the network and the sites

• The phased response ranges from blocking certain site traffic to complete isolation of the network, which allows the sites to continue communicating among themselves in the face of the most virulent attacks. The architecture (see the sketch after this list):
o separates ESnet core routing functionality from external Internet connections by means of a “peering” router that can have a policy different from the core routers
o provides a rate-limited path to the external Internet that will ensure site-to-site communication during an external denial-of-service attack
o provides “lifeline” connectivity for downloading patches, exchanging e-mail, and viewing web pages (i.e., e-mail, DNS, HTTP, HTTPS, SSH, etc.) with the external Internet prior to full isolation of the network
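A minimal Python sketch of the escalation logic described above and illustrated on the next slide; the phase names and the exact lifeline service list are illustrative assumptions, not ESnet’s actual implementation:

```python
from enum import IntEnum

class Phase(IntEnum):
    NORMAL = 0
    SITE_FILTER = 1      # Lab filters incoming traffic at its gateway router
    PEERING_FILTER = 2   # ESnet filters traffic from outside ESnet
    LIFELINE_ONLY = 3    # main peering path shut; rate-limited lifeline path only

LIFELINE_SERVICES = {"smtp", "dns", "http", "https", "ssh"}

def allowed(phase: Phase, service: str, external: bool) -> bool:
    """Internal (site-to-site) traffic always flows; external traffic
    is progressively restricted as the response phase escalates."""
    if not external:
        return True
    if phase == Phase.LIFELINE_ONLY:
        return service in LIFELINE_SERVICES
    return True  # phases 1-2 filter specific attack traffic, not whole services

assert allowed(Phase.LIFELINE_ONLY, "https", external=True)
assert not allowed(Phase.LIFELINE_ONLY, "ftp", external=True)
```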

Page 23: ESnet: DOE’s Science Network, GNEW March 2004

Phased Response to Cyberattack

[Diagram: attack traffic arrives from the Internet through ESnet’s peering router toward a Lab’s border and gateway routers.
• Lab first response – filter incoming traffic at their ESnet gateway router.
• ESnet first response – filters to assist a site.
• ESnet second response – filter traffic from outside of ESnet at the peering router.
• ESnet third response – shut down the main peering path and provide only a limited-bandwidth path for specific “lifeline” services.]

The Sapphire/Slammer worm infection created almost a Gb/s traffic spike on the ESnet backbone until filters were put in place (both into and out of sites) to damp it out.

Page 24: ESnet: DOE’s Science Network, GNEW March 2004

Future Directions – the 5 yr Network Strategy

• Elements
o University connectivity
o Scalable and reliable site connectivity
o Provisioned circuits for high-impact science bandwidth
o Close collaboration with the network R&D community
o Services supporting science (Grid middleware, collaboration services, etc.)

Page 25: ESnet: DOE’s Science Network, GNEW March 2004

5 yr Strategy – Near Term Goal 1

• Connectivity between any DOE Lab and any major university should be as good as ESnet connectivity between DOE Labs and Abilene connectivity between universities
o Partnership with I2/Abilene
o Multiple high-speed peering points
o Routing tailored to take advantage of this
o Latency and bandwidth from DOE Lab to university should be comparable to intra-ESnet or intra-Abilene performance
o Continuous monitoring infrastructure to verify this

Page 26: ESnet: DOE’s Science Network, GNEW March 2004

5 yr Strategy – Near Term Goal 2

• Connectivity between ESnet and R&D networks – a critical issue from the Roadmap workshop
o UltraScienceNet and NLR for starters
o Reliable, high-bandwidth cross-connects:
1) IWire ring between the Qwest/ESnet Chicago hub and Starlight
- This is also critical for DOE lab connectivity to the DOE-funded LHCNet 10 Gb/s link to CERN
- Both LHC tier 1 sites in the US – ATLAS and CMS – are at DOE Labs
2) ESnet ring between the Qwest/ESnet Sunnyvale hub and the Level 3 Sunnyvale hub that houses the West Coast POP for NLR and UltraScienceNet

Page 27: ESnet: DOE’s Science Network, GNEW March 2004

5 yr Strategy – Near-Medium Term Goal

• Scalable and reliable site connectivity
o Fiber/lambda ring based Metropolitan Area Networks
o Preliminary engineering study completed for the San Francisco Bay Area and the Chicago area
- Proposal submitted
- At least one of these is very likely to be funded this year

• High-impact science bandwidth – provisioned circuits

Page 28: ESnet: DOE’s Science Network, GNEW March 2004

ESnet Future Architecture

• Migrate site local loops to ring-structured Metropolitan Area Networks and regional nets in some areas
o Goal is local rings that, like the backbone, provide multiple paths

• Dynamic provisioning of private “circuits” in the MAN and through the backbone to provide “high impact science” connections
o This should allow high-bandwidth circuits to go around site firewalls to connect specific systems. The circuits are secure and end-to-end, so if the sites trust each other and have compatible security policies, they should allow direct connections, e.g. HPSS <-> HPSS

• Partnership with DOE UltraNet, Internet2 HOPI, and National Lambda Rail

Page 29: ESnet: DOE’s Science Network, GNEW March 2004

ESnet Future Architecture

[Diagram: Metropolitan Area Network rings of sites attach to the core ring, carrying both production IP service and provisioned circuits. In the MAN, one optical fiber pair with DWDM provides point-to-point, unprotected circuits, managed at three levels: optical channel (lambda) management equipment, layer 2 management equipment (e.g., a 10 Gigabit Ethernet switch), and layer 3 (IP) management equipment (a router). Provisioned circuits are carried over lambdas in the MAN and as tunnels through the ESnet IP backbone – initially via MPLS paths, eventually via lambda paths.]

Page 30: ESnet: DOE’s Science Network, GNEW March 2004

ESnet MAN Architecture – Example

[Diagram: a Chicago-area MAN ring connecting ANL and FNAL site equipment to the Qwest hub and a vendor-neutral facility, with a T320 router into the ESnet core and connections to StarLight, CERN (via the DOE-funded link), and other international peerings. Each site LAN sits behind site equipment on the ring; monitors at the sites are part of ESnet management and monitoring, partly to compensate for having no site router. The ring carries the ESnet production IP service plus ESnet-managed lambda/circuit services tunneled through the IP backbone via MPLS. Current DMZs are back-hauled to the core router, implemented via two VLANs, one in each direction around the ring; an Ethernet switch handles the DMZ VLANs and the management of provisioned circuits.]

Page 31: ESnet: DOE’s Science Network, GNEW March 2004

Future ESnet Architecture

[Diagram: two sites, each with a site gateway router, site LAN, and DMZ on a MAN optical fiber ring, attach through ESnet border and circuit cross-connect equipment to the ESnet backbone (hubs at New York (AOA), Atlanta (ATL), Washington, and El Paso (ELP)). A private “circuit” runs from a specific host or instrument at one Lab to a specific host or instrument at another, end-to-end under a common security policy.]

Page 32: ESnet: DOE’s Science Network, GNEW March 2004

Long-Term ESnet Connectivity Goal

• MANs for scalable bandwidth and redundant site access to the backbone
• Connecting the MANs with two backbones to ensure against hub failure (for example, NLR is shown as the second backbone below)

[Map: major DOE Office of Science sites connected by MANs and local loops to both the Qwest and NLR backbones, with high-speed cross-connects to Internet2/Abilene and links to Japan and CERN/Europe.]
Page 33: ESnet: DOE’s Science Network, GNEW March 2004

Long-Term ESnet Bandwidth Goal

• Harvey Newman: “And what about increasing the bandwidth in the backbone?”

• Answer: technology progress
o By 2008 (the next generation ESnet backbone) DWDM technology will be 40 Gb/s per lambda
o And the backbone will be multiple lambdas

• Issues
o End-to-end, end-to-end, and end-to-end

Page 34: ESnet: DOE’s Science Network, GNEW March 2004

Science Services Strategy

• ESnet is in a natural position to be the provider of choice for a number of middleware services that support collaboration, collaboratories, Grids, etc.

• The characteristics of ESnet that make it a natural middleware provider are that ESnet
o is the only computing-related organization that serves all of the Office of Science
o is trusted and well respected in the OSC community
o has the 7x24 infrastructure required to support critical services, and is a long-term, stable organization

• The characteristics of the services for which ESnet is the natural provider are those that
o require long-term persistence of the service or the data associated with it
o require high availability and a high degree of integrity on the part of the provider
o are situated at the root of a hierarchy, so that the service scales in the number of people it serves by adding nodes that are managed by local organizations (so that ESnet does not have a large and constantly growing direct user base)

Page 35: ESnet: DOE’s Science Network, GNEW March 2004

Science Services Strategy

• The DOE Grids CA, which provides X.509 identity certificates to support Grid authentication, is an example of this model
o the service requires a highly trusted provider and a high degree of availability
o it provides a centralized agent for negotiating trust relationships with, e.g., European CAs
o it scales by adding site-based or Virtual Organization based Registration Agents that interact directly with the users

Page 36: ESnet: DOE’s Science Network, GNEW March 2004

Science Services: Public Key Infrastructure

• Public Key Infrastructure supports cross-site, cross-organization, and international trust relationships that permit sharing computing and data resources and other Grid services

• Digital identity certificates for people, hosts, and services are an essential core service for Grid middleware
o they provide formal and verified trust management – an essential service for widely distributed heterogeneous collaboration, e.g. in the international High Energy Physics community
o DOE Grids CA (a toy issuance sketch follows below)

• Have recently added a second CA with a policy that permits bulk issuing of certificates with central private key management
o Important for secondary issuers
- NERSC will auto-issue certs when accounts are set up – this constitutes an acceptable identity verification
- May also be needed for security domain gateways such as Kerberos – X.509, e.g. KX509
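For readers unfamiliar with what a CA actually produces, here is a minimal sketch using the modern Python cryptography library that self-signs a toy root certificate; the names are hypothetical and this is illustrative only, not the DOE Grids CA’s actual tooling:

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

# Toy CA key pair and self-signed root certificate.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Grid CA")])
now = datetime.datetime.now(datetime.timezone.utc)

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(key, hashes.SHA256())
)
print(cert.subject.rfc4514_string())  # CN=Example Grid CA
```

In a real PKI, this root key never signs end-entity certs directly from user requests; Registration Agents first validate the requester’s identity against the Certificate Policy, as the next slide describes.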

Page 37: ESnet: DOE’s Science Network, GNEW March 2004

Science Services: Public Key Infrastructure

• The Policy Management Authority negotiates and manages the formal trust instrument (the Certificate Policy, or CP)
o Sets and interprets procedures that are carried out by ESnet
o Currently facing an important oversight situation involving potential compromise of user X.509 certificate private keys
- a Boys-from-Brazil style exploit placed a keyboard sniffer on several systems that housed Grid certs
- Is there sufficient forensic information to say that the private keys were not compromised?
- Is any amount of forensic information sufficient to guarantee this, or should the certs be revoked?
- Policy is refined by experience

• Registration Agents (RAs) validate users against the CP and authorize the CA to issue digital identity certs

• This service was the basis of the first routine sharing of HEP computing resources between the US and Europe

Page 38: ESnet: DOE’s Science Network, GNEW March 2004

Science Services: Public Key Infrastructure

• The rapidly expanding customer base of this service will soon make it ESnet’s largest collaboration service by customer count

Page 39: ESnet: DOE’s Science Network, GNEW March 2004

Voice, Video, and Data Collaboration Service

• The other highly successful ESnet science service is the audio, video, and data teleconferencing service that supports human collaboration

• Seamless voice, video, and data teleconferencing is important for geographically dispersed collaborators

o ESnet currently provides voice conferencing, videoconferencing (H.320/ISDN scheduled, H.323/IP ad hoc), and data collaboration services to more than a thousand DOE researchers worldwide

Page 40: ESnet: DOE’s Science Network, GNEW March 2004

Voice, Video, and Data Collaboration Service

o These are heavily used services, averaging around (see the rough estimate below)
- 4600 port hours per month for H.320 videoconferences
- 2000 port hours per month for audio conferences
- 1100 port hours per month for H.323 videoconferences
- approximately 200 port hours per month for data conferencing

• Web-based registration and scheduling for all of these services
o authorizes users efficiently
o lets them schedule meetings

Such an automated approach is essential for a scalable service – ESnet staff could never handle all of the reservations manually.
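For a rough feel for scale, a port hour is one port in use for one hour, so a quick Python estimate (assuming ~720 hours in a month, an approximation of ours) converts the monthly totals into average concurrent ports:

```python
HOURS_PER_MONTH = 30 * 24  # ~720, a rough assumption

usage = {"H.320 video": 4600, "audio": 2000, "H.323 video": 1100, "data": 200}
for service, port_hours in usage.items():
    print(f"{service}: ~{port_hours / HOURS_PER_MONTH:.1f} ports busy on average")
# e.g., H.320 video: ~6.4 ports busy on average, around the clock
```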

Page 41: ESnet: DOE’s Science Network, GNEW March 2004

Science Services Strategy

• The Roadmap workshop identified twelve high-priority middleware services, several of which fit the criteria for ESnet support. These include, for example
o long-term PKI key and proxy credential management (e.g., an adaptation of the NSF’s MyProxy service)
o directory services that virtual organizations (VOs) can use to manage organization membership, member attributes, and privileges
o perhaps some form of authorization service
o in the future, some knowledge management services that have the characteristics of an ESnet service are also likely to be important

• ESnet is seeking the additional funding necessary to develop, deploy, and support these types of middleware services.

Page 42: ESnet: DOE’s Science Network, GNEW March 2004

Conclusions

• ESnet is an infrastructure that is critical to DOE’s science mission and that serves all of DOE

• It is focused on the Office of Science Labs

• ESnet is evolving its architecture and services strategy to meet the stated requirements for bandwidth, reliability, QoS, and Grid and collaboration supporting services