TRANSCRIPT

Internet2 Update
June 29th 2010, LHCOPN
Jason Zurawski – Internet2
• Internet2 is an advanced networking consortium led by members of the Research and Education (R&E) community.
• We promote the missions of our members, in part through the development and support of networking activities and related initiatives.
• We are committed to supporting scientific use of the network, including the LHC.
– Enabling large scale data transfers over a high capacity nationwide network
– Dynamic circuit capability through the ION service
– Performance Monitoring through perfSONAR
– Support for the debugging of Network Performance, end to end.
2 – 04/22/23, © 2010 Internet2
Introduction
• Internet2 Network and Advanced Services Update
• ARRA and Stimulus Update
• A Blast from the Past
• LHC Traffic Observations
Outline
• Backbone router upgrades
– Houston, Kansas City, Salt Lake City, Los Angeles, Seattle routers upgraded to Juniper MX960s in early 2010
– Chicago router will remain a Juniper T1600 in 2010 (getting crowded!)
• Backbone augments
– All backbone circuits re-framed from OC-192 to 10GigE LAN PHY
– Additional backbone links lit between all adjacent routers; adjacent nodes now connected with 20G of bandwidth
• Optical capacity added between Denver and Salt Lake City (10 additional waves)
• 10G capacity between Internet2 and TransitRail added in Los Angeles, Seattle, Chicago, Washington DC
Network Architecture
IP NETWORK VISUALIZED
• Note: New York to Chicago is now 20G, more later…
• Less Than Best Effort (LBE) service available on Internet2 IP network
– Researchers can signal LBE service using TOS overhead bytes
– Internet2 will evolve the service and documentation over the coming months
• Backbone routers configured to handle MPLS transport of ION services
• Route Statistics
– R&E IPv4: 13,094 routes
– R&E IPv6: 812 routes
– CPS* IPv4: 154,272 routes
– CPS* IPv6: 2,850 routes
*CPS = Commercial Peering Service (http://noc.net.internet2.edu/i2network/commercial-peering-service.html)
IP Service
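As an illustrative sketch of the TOS-based signaling mentioned above (not Internet2's documented procedure, which was still evolving at the time), an application can mark its traffic by setting the IP TOS byte on its socket. The specific TOS value below (DSCP CS1, the common "scavenger" marking) is an assumption for illustration:

```python
import socket

# Hypothetical LBE marking: DSCP CS1 (decimal 8) shifted into the upper
# six bits of the TOS byte. The exact value Internet2 honored for its
# LBE service is an assumption here, for illustration only.
LBE_TOS = 8 << 2  # DSCP CS1 -> TOS byte 0x20

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, LBE_TOS)

# Verify the option took effect before starting a bulk transfer.
assert sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS) == LBE_TOS
sock.close()
```

Routers along the path that recognize the marking can then deprioritize these flows relative to best-effort traffic.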
• ION = Interface to Dynamic Circuits (similar to AutoBAHN/SDN)
• Transition from Ciena CoreDirectors to Juniper MX960s in 1H2010
– Move from a SONET-based network on the Cienas to an MPLS-based service operating on the current IP network
• MPLS transport makes more efficient use of resources
– Bandwidth reserved for circuit instantiation is available for use by other users when the circuit owner is not utilizing the circuit for a transfer
– Opportunity to provide circuits that can burst above their requested commit rate, if sufficient headroom is available
• ION will be a production service managed by the Internet2 NOC
• ION circuits provisioned using a simple and secure web-based interface or IDC signalling
ION Service
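The sharing policy described above — reserved bandwidth becomes usable headroom when a circuit owner is idle, and active circuits may burst into it — can be modeled with a simple allocation loop. This is a toy sketch of the policy's behavior, not Internet2's implementation; all names and numbers are illustrative:

```python
# Toy model of the ION-style bandwidth-sharing policy: a link has fixed
# capacity; each circuit has a committed rate, but unused committed
# bandwidth becomes headroom that active circuits may burst into.
LINK_CAPACITY_GBPS = 20.0

def allocate(circuits):
    """circuits: list of (committed_gbps, demand_gbps) tuples.
    Returns the rate each circuit actually receives."""
    # Step 1: every circuit gets min(demand, commit) -- its guarantee.
    rates = [min(d, c) for c, d in circuits]
    # Step 2: leftover capacity (idle commitments + unreserved) is headroom.
    headroom = LINK_CAPACITY_GBPS - sum(rates)
    # Step 3: split headroom among circuits that still want more.
    wanting = [i for i, (c, d) in enumerate(circuits) if d > rates[i]]
    while headroom > 1e-9 and wanting:
        share = headroom / len(wanting)
        headroom = 0.0
        still = []
        for i in wanting:
            c, d = circuits[i]
            extra = min(share, d - rates[i])
            rates[i] += extra
            headroom += share - extra  # return any unused share
            if d > rates[i] + 1e-9:
                still.append(i)
        wanting = still
    return rates

# Circuit A committed 10G but idle; circuit B committed 5G wants 12G.
print(allocate([(10, 0), (5, 12)]))  # B bursts well above its 5G commit
```

The key property is the one the slide highlights: circuit B exceeds its committed rate only because circuit A's reservation is idle, so no guarantee is violated.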
INTERNET2 HISTORICAL OFFERED LOAD
• Internet2 Network and Advanced Services Update
• ARRA and Stimulus Update
• A Blast from the Past
• LHC Traffic Observations
Outline
• Many Internet2 connectors are looking to expand through the NTIA BTOP program.
• Internet2 is exploring ways to upgrade/expand capabilities to match the expected growth of regionals and ensure fees remain the same or are potentially reduced.
• Internet2 has submitted a Round 2 Proposal to the ARRA-funded Broadband Technology Opportunities Program (BTOP), administered by the NTIA
– Seeks to acquire nationwide dark fiber, optical equipment to light the fiber at 100G speeds, and an upgraded IP network delivering 100GigE to the Internet2 Community
Internet2 and ARRA Stimulus
New Network Builds in Proposal
Combined US UCAN System Capability
Upgraded IP Backbone
• Internet2 Network and Advanced Services Update
• ARRA and Stimulus Update
• A Blast from the Past
• LHC Traffic Observations
Outline
• The following slides were given at Summer Joint Techs 2007 (Fermilab)
– Rick Summerhill and Eric Boyd
– http://www.internet2.edu/presentations/jt2007jul/20070716-boyd-summerhill.ppt
• Background – Internet2 used to sponsor workshops as a service to our members and connectors to prepare for the LHC
– Data and network requirements
– Common stumbling blocks to success (e.g. network performance and design)
• These have since evolved into a more general ‘Network Performance’ workshop
A Blast from the Past
Are you ready for LHC?
[Diagram: the LHC data distribution hierarchy. CERN (Tier 0, raw data) feeds the Tier 1 centers (12 orgs, including FNAL and BNL for shared data storage and reduction) over the LHCOPN. Tier 1 feeds the US Tier 2 centers (15 orgs: 7 CMS, 6-7 Atlas) via GEANT-ESnet-Internet2. Tier 2 provides data to the US Tier 3 sites (68 orgs) via Internet2 and its connectors, where scientists request data; US Tier 4 (1500 US scientists) analyze the data over local infrastructure.]
[Diagram: peak flow network requirements along the same hierarchy. CERN (Tier 0) to Tier 1 requires 10-40 Gbps over the LHCOPN; Tier 1 to Tier 2 requires 10-20 Gbps over GEANT-ESnet-Internet2; Tier 1 or Tier 2 to Tier 3, over Internet2 and its connectors into local campus infrastructure, is estimated to require 1.6 Gbps per transfer (2 TB in 3 hours).]

Peak Flow Network Requirements
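The 1.6 Gbps figure quoted for a Tier 2 to Tier 3 transfer follows directly from the stated workload. A quick sanity check (assuming the slide's "2 TB" is the binary interpretation, 2 × 2^40 bytes):

```python
# Sanity-check the slide's Tier2 -> Tier3 estimate: 2 TB moved in 3 hours.
bytes_to_move = 2 * 2**40          # 2 TB, binary interpretation
seconds = 3 * 3600                 # 3 hours
rate_gbps = bytes_to_move * 8 / seconds / 1e9
print(round(rate_gbps, 2))         # ~1.63, matching the ~1.6 Gbps estimate
```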
What are the Implications for Normal Network Operations from T2 to T3?

Example: 13 people (3 Professors and 10 Graduate Students) require ten 3-hour timeslots a month to receive 8 Gigabit data flows.
[Graph: projected campus traffic load, with 4 Gig and 10 Gig levels marked.]
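A rough, back-of-the-envelope reading of that example (assumptions: each timeslot is a single 8 Gbps flow, 30-day month):

```python
# How much of a month would the example group's flows occupy, and how much
# data would they move? Assumptions: each of the ten monthly timeslots is
# one 8 Gbps flow lasting 3 hours; 30-day month.
slots_per_month = 10
hours_per_slot = 3
flow_gbps = 8

busy_hours = slots_per_month * hours_per_slot       # hours of 8G flows/month
data_tb = flow_gbps * busy_hours * 3600 / 8 / 1e3   # gigabits -> terabytes
duty_cycle = busy_hours / (30 * 24)                 # fraction of the month

print(busy_hours, round(data_tb, 1), round(duty_cycle, 3))
# -> 30 hours of 8G flows, ~108 TB/month, active ~4% of the time
```

The point of the slide follows from the duty cycle: the flows are rare but, while active, approach or exceed a typical campus uplink.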
CMS T2 Traffic at UNL
Internet2 Connectors
[Map of Internet2 connectors and peers, including: MAGPI, 3ROX, CalREN-2 South, Great Plains Network, Indiana GigaPoP, MREN, Merit, LONI, Internet2, ESnet, NoX, NYSERNet, OARnet, OmniPoP, SoX, Oregon GigaPoP, Pacific Northwest GigaPoP]
Cyberinfrastructure Requirements
• Data storage
• Robust campus infrastructure
• Security and Authorization
• IT support for local and remote resources
• Network Performance monitoring tools
Cyberinfrastructure Components
[Diagram: the network cyberinfrastructure stack. Applications (bulk transport, 2-way interactive video, real-time communications, …) call on network cyberinfrastructure: middleware, performance infrastructure / tools (e.g. Phoebus), and the control plane, layered over the network itself.]
• Was the message heard at the Tier2 level?
– Absolutely – most (if not all) US Tier2s are extremely well connected (diverse and capable network paths) and can (and do) flood the network at will (see examples later)
– Cyberinfrastructure components are well deployed and useful
• perfSONAR-PS available at all USATLAS Tier1/Tier2s. Gaining at USCMS as well. Striving for Tier3s to have a deployment available
• Lambda Station/Terapaths/Phoebus are successful data movement tools that utilize the Dynamic Circuit networks
• What more needs to be done?
– Tier3s – what is the worst-case scenario?
– Bridging the gap – Campus IT vs. the Science Disciplines
– The workshops were valuable; why can’t they continue?
• Internet2 wants to be involved, but needs the support and help of the scientific communities (beyond the LHC as well) and network partners
Blast from the Past Summary
Internet2 LHC Project Connectivity (2009)
• Based on conversation by John and others yesterday, some clarifications
– perfSONAR-MDM: Managed service, e.g. support available for the installation, configuration, and management of open source software based on the perfSONAR protocols
– perfSONAR-PS: Non-managed service (e.g. pure open source support model) for the use of open source software based on the perfSONAR protocols
• Is there a difference between the two?
– Only in the management and development; the software is interoperable on a protocol basis
• Key stakeholders (for both)
– Networks (R&E and Commercial)
– Campuses
– Federal Labs
– VOs
• Open development opportunities
– Yes! There are APIs and the data is available
– Traditional (Python, Perl, Java); REST gaining strength
And a note on perfSONAR-PS…
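As the open-development bullets note, the measurement data is reachable programmatically. Below is a minimal sketch of a client pulling throughput results and flagging an underperforming path, assuming a hypothetical REST endpoint and JSON layout (the perfSONAR-PS services of this era actually spoke an XML-based protocol; nothing here is the real API):

```python
import json
import urllib.request

# Hypothetical endpoint and response shape, for illustration only.
BASE_URL = "http://ps.example.edu/services/throughput"

def fetch_throughput(src, dst):
    """Fetch recent throughput samples (Gbps) between two test hosts."""
    url = f"{BASE_URL}?src={src}&dst={dst}"
    with urllib.request.urlopen(url) as resp:
        samples = json.load(resp)   # e.g. [{"ts": ..., "gbps": ...}, ...]
    return [s["gbps"] for s in samples]

def below_expectation(rates, floor_gbps=1.0):
    """Flag a path whose median throughput falls below a floor."""
    ordered = sorted(rates)
    median = ordered[len(ordered) // 2]
    return median < floor_gbps
```

The same pattern — poll regularly, compare against an expected floor, alert on degradation — is what the perfSONAR deployments at the Tier1/Tier2 sites enable.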
Outside Development Gaining Traction
• Internet2 Network and Advanced Services Update
• ARRA and Stimulus Update
• A Blast from the Past
• LHC Traffic Observations
Outline
• Aggregate traffic from Fermilab/BNL on the Internet2 Network
• Dates: 3/30 to 4/2 (First collision through data dissemination)
• Note the ‘peaks’ of around 3-5G. They didn’t last long, consistent with the size of the data set.
• Graph courtesy of Chris Robb.
So … Where is All the Data?
Possible Transfers?
• Despite these facts on data size and where it came from, did we see the data on Internet2?
– “Some”, but not all
– A little later than first availability (more with Tier2 and Tier3 transfers)
• Who saw the data?
– Purpose-built R&E nets (UltraLight, USLHCNet)
– ESnet (into and out of Fermilab/BNL)
– Internet2/NLR
• From Tier1s
• Between Tier2s*
• To Tier3s

*Expected, and becoming common
So … Where is All the Data?
• Minor experiment by me to see how Tier-2s route to each other, and to the Tier-1 for USATLAS.
• pS Performance Toolkit (http://psps.perfsonar.net/toolkit/) available at the Tier-1 and almost all Tier-2s.
– Co-located near the rest of the processing/storage
– Using the available performance tools, analyze the routes
– Determine the paths the data is flowing over
– Check the times/data stores to find evidence of the transfers
– ‘Reverse Traceroute’ Tool – Developed by SLAC
Connectivity
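The route analysis sketched in these steps can be approximated with an ordinary traceroute whose hop hostnames are matched against backbone domain names. The domain patterns below are illustrative guesses, not the actual tooling used for the experiment:

```python
import subprocess

# Illustrative hostname substrings for classifying which backbone a
# traceroute path crosses. Real R&E router names vary; these patterns
# are assumptions for the sketch.
BACKBONES = {
    "internet2.edu": "Internet2",
    "es.net": "ESnet",
    "nlr.net": "NLR",
    "ultralight.org": "UltraLight",
}

def match_backbones(traceroute_output):
    """Return the backbones whose domains appear among the hop hostnames."""
    text = traceroute_output.lower()
    return sorted({name for pat, name in BACKBONES.items() if pat in text})

def classify_path(dest):
    """Run traceroute toward a destination and classify the path taken."""
    result = subprocess.run(
        ["traceroute", "-m", "30", dest],
        capture_output=True, text=True, timeout=120,
    )
    return match_backbones(result.stdout)
```

Running this from each Tier-2 toward its peers would reproduce tables like the ones on the following slides: each (source, destination) pair labeled with the backbone(s) its path traverses.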
• Tier 1 for USATLAS
• Connectivity to other sites (Tier-2s, Tier-3s)
– MSU/UMich – UltraLight
– Indiana – ESnet
– U of Chicago – Private R&E Network/Peering
– Boston Univ. – Private R&E Network/Peering
– Oklahoma – ESnet
– U of Texas at Arlington – ESnet/NLR
– SMU – ESnet/Internet2
– U of Wisconsin – ESnet
– LBNL/NERSC – ESnet
• As expected for a Tier1, there is not much touching Internet2
Connectivity – BNL (Tier 1)
• Tier 2 (Northeast Tier2 [NET2] w/ Harvard)
• Connectivity to other sites
– BNL (Tier-1) – Private Network/Peering
– MSU/UMich (Tier-2) – Internet2
– Indiana (Tier-2) – Internet2
– U of Chicago (Tier-2) – Internet2
– Oklahoma (Tier-2) – Internet2
– U of Texas at Arlington (Tier-2) – Internet2
– SMU (Tier-3) – Internet2
– U of Wisconsin (Tier-3) – Internet2
– LBNL/NERSC (Tier-3) – ESnet
• Private connectivity to the Tier1 (shared with NET2 partner Harvard), but T2-T2 transfers almost exclusively R&E
Connectivity – Boston Univ. (Tier 2)
• Tier 2 (Southwest Tier2 [SWT2] w/ U of Texas at Arlington)
• Connectivity to other sites
– BNL (Tier-1) – ESnet
– MSU/UMich (Tier-2) – NLR
– Indiana (Tier-2) – NLR
– U of Chicago (Tier-2) – Internet2
– Boston Univ. (Tier-2) – Internet2
– U of Texas at Arlington (Tier-2) – NLR
– SMU (Tier-3) – NLR
– U of Wisconsin (Tier-3) – NLR
– LBNL/NERSC (Tier-3) – ESnet
• Well connected site with a mix of different R&E peerings. Diversity of path is a good thing.
Connectivity – Oklahoma (Tier 2)
• Tier-2 to Tier-2 Transfers
– We are always monitoring and looking for pinch points
– Some activity is more visible than others…
– Fully expect this type of activity to occur (and increase!) as the project matures
• Tier-2 Transfers, sometimes International (CMS)
– Expecting this based on the CMS model
– Directly lead to capacity changes on the network
• New 10G between New York and Chicago – Early May 2010
– Ready and willing to add capacity as needed, where it is needed
• RE: David’s slides yesterday regarding ‘protection’
• Will be speaking with heavy network users as traffic increases to talk about solutions (e.g. extra capacity at the regional/campus network, use of ION, etc.)
• Working with regional partners to increase capacity into heavy-use campuses (e.g. Vanderbilt University [New Heavy ION Tier2] -> SOX)
What We are Expecting
• Inbound to Internet2 from GPN (UNL – A CMS Tier-2)
Example of T2 – T2: 4/26 7 EDT
• Outbound to CalREN (Caltech – A CMS Tier-2)
Example of T2 – T2: 4/26 7 EDT
• Backed up by CMS PhEDEx data
Example of T2 – T2: 4/26 7 EDT
• Backed up by CMS PhEDEx data
Example of T2 – T2: 4/26 7 EDT
• Backbone traffic heating up (NEWY-CHIC)
Other Example of a T2: 4/15 to date
• Tracked to University of Wisconsin (USCMS/USATLAS Tier-2)
Other Example of a T2: 4/15 to date
• PhEDEx confirms (into UofWisc)
Other Example of a T2: 4/15 to date
• Some (not all) coming out of a Tier-1 in Germany (KIT/GridKA)
Other Example of a T2: 4/15 to date
• Backbones are ready for the challenges
– Underestimates can be met with action to increase capacity
• Regional Networks should be prepared as well
– Working to upgrade heavy users to increase the science capability
• Campus preparedness will vary
– Large campus – more than likely aware of the demands of big science
– Small campus – prepared? Time is running out to find out…
• Internet2’s Role
– Support the missions of our members, no matter the project
– Deliver networking
– Support key cyberinfrastructure, either through software development, instruction, or advanced services
Internet2 Update
June 29th 2010, LHCOPN
Jason Zurawski – Internet2
For more information, visit www.internet2.edu