A follow-up on network projects
HEPiX Fall 2013, 10/29/2013
Co-authors: IT/Communication Systems, CERN

DESCRIPTION

Agenda:
- Latest network evolution
- Network connectivity at Wigner
- Business Continuity
- Status of IPv6 deployment
- TETRA deployment

TRANSCRIPT

A follow-up on network projects (HEPiX Fall 2013, 10/29/2013)
Co-authors: IT/Communication Systems

Agenda
- Latest network evolution
- Network connectivity at Wigner
- Business Continuity
- Status of IPv6 deployment
- TETRA deployment

Latest network development

Upgrade of the Geneva data center
- Migration to Brocade routers completed: a two-year project with no service interruption
- Benefits: 100 Gbps links, simplified topology (from 22 to 13 routers), lower power consumption per port, margin for scalability, enhanced features (MPLS, virtual routing)

CERN Data Center today (diagram)
- Backbone built on 100 Gbps links: 140 Gbps core, 60 Gbps LCG, 20 Gbps GPN, 12 Gbps Internet via the external network and firewall
- Switching fabrics: 1.36 Tbps and 5.28 Tbps in Geneva, 1 Tbps at Wigner
- Wigner connected at 200 Gbps over MPLS

Scaling the Data Center
- Distribution and backbone routers: hundreds of 10 Gbps links, tens of 100 Gbps links
- Plan: try to skip the 40 Gbps interface generation

Scaling the Top of the Rack
- Service capacity depends on the service purpose; blocking factor: 2 for CPU, 5 for storage
- CPU servers attach at 1 Gbps, with m x 10 Gbps uplinks to the distribution router
- Storage servers attach at n x 10 Gbps, with m x 100 Gbps uplinks
- Open questions: 10GBase-T? 40 Gbps?
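The blocking factors above (2 for CPU, 5 for storage) turn into uplink counts by simple arithmetic: aggregate server bandwidth divided by the blocking factor, rounded up to whole uplinks. A minimal sketch of that sizing; the helper name and the rack sizes are illustrative assumptions, not figures from the slides:

```python
import math

def uplinks_needed(n_servers, nic_gbps, blocking_factor, uplink_gbps):
    """Uplinks required for one top-of-rack switch.

    With blocking factor B, the uplinks must cover the aggregate
    server bandwidth divided by B (B:1 oversubscription).
    """
    aggregate = n_servers * nic_gbps        # total edge bandwidth, Gbps
    required = aggregate / blocking_factor  # bandwidth the uplinks must carry
    return math.ceil(required / uplink_gbps)

# Illustrative rack sizes (assumptions, not from the slides):
# CPU rack: 40 servers x 1 Gbps, factor 2, 10 Gbps uplinks -> 20 Gbps needed
print(uplinks_needed(40, 1, 2, 10))    # 2
# Storage rack: 20 servers x 10 Gbps, factor 5, 100 Gbps uplinks -> 40 Gbps needed
print(uplinks_needed(20, 10, 5, 100))  # 1
```

The same arithmetic explains the slide's port mix: a storage rack saturates 10 Gbps uplinks quickly at a factor of 5, which is why its uplinks jump to 100 Gbps.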
Extending the Tier0 to Wigner
- CORE network with 240 Gbps Internet-facing capacity; racks, distribution and backbone layers at both sites
- Geneva (Switzerland) and Budapest (Hungary) joined over the MPLS backbone into a single WLCG Tier0

Business Continuity
- The extended CORE over MPLS lets Wigner serve business continuity
- Virtual routers make Wigner a second network hub at CERN: the LCG and GPN domains, external-network firewalls and CORE are replicated, removing the dependency on a single building
- Same external capacities as the Geneva hub: 140 Gbps backbone, 60 Gbps LCG, 20 Gbps GPN, 12 Gbps Internet, 200 Gbps to Wigner

IPv6 deployment at CERN
- Done: network database schema and data are IPv6-ready; the configuration manager supports IPv6 routing; the admin web interface has IPv6 integrated
- 2013: the data center is dual-stack and gradual deployment on the routing infrastructure starts; NTPv6 and DNSv6 available; the DHCPv6 infrastructure is dual-stack; IPv6 firewall configuration is automated; user web and SOAP interfaces integrate IPv6 (today)
- Automatic DNS AAAA configuration; IPv4 and IPv6 share the same portfolio
- Identical performance, common tools and services
- Dual stack, dual routing: OSPFv2/OSPFv3, BGP with IPv4 and IPv6 peers
- Service managers decide when they are ready for IPv6
- Devices must be registered; IPv6 autoconfiguration (SLAAC) is disabled
- Router Advertisements announce the default gateway and IPv6 prefixes with the no-autoconfig flag
- DHCPv6 uses MAC addresses as DUIDs: painful without RFC 6939; ISC has helped a lot (code implementing classes for IPv6)
- DHCPv6 clients might not work out of the box

Lots of VMs: two options
A) VMs with only public IPv6 addresses
  + Unlimited number of VMs
  - Several applications don't run over IPv6 today (PXE, AFS, ...)
  - Very few remote sites have IPv6 enabled (limited remote connectivity)
  + Will push IPv6 adoption in the WLCG community
B) VMs with private IPv4 and public IPv6
  + Works flawlessly inside the CERN domain
  - No connectivity with remote IPv4-only hosts (NAT solutions are not supported or recommended)
The current VM adoption plan will cause IPv4 depletion.

TETRA deployment

What is TETRA?
- A digital professional radio technology, an E.T.S.I. standard (VHF band)
- Makes use of walkie-talkies
- Voice services, message services, data and other services
- Designed for safety and security daily operation

The project
- Update the radio system used by the CERN Fire Brigade
- Fully operational since early ... (2 years' work)
- A fully secured radio network (unlike GSM)
- Complete surface and underground coverage
- Cooperation with French and Swiss authorities
- Enhanced features and services

Which services?
- Interconnection with other networks
- Distinct or overlapping user communities: security, transport, experiments, maintenance teams
- Outdoor and indoor geolocation
- Lone-worker protection

Conclusion
- The network is ready to cope with ever-increasing needs
- Wigner is fully integrated; business continuity is under development
- Before end-2013, IPv6 will be fully deployed and available to the CERN community
- The TETRA system provides CERN with an advanced, fully secured radio network

Thank you!
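On the DHCPv6 point raised in the IPv6 section: deriving a DUID from a MAC is mechanical on the client side; a DUID-LL (RFC 3315, type 3) is just a two-byte type code, a two-byte hardware type and the link-layer address. The pain the slides mention is on the server side, which cannot recover the MAC from an arbitrary client DUID without the RFC 6939 relay option. A sketch of the client-side encoding (hypothetical helper, assuming a colon-separated Ethernet MAC):

```python
def duid_ll(mac: str) -> bytes:
    """Encode a DHCPv6 DUID-LL (RFC 3315, type 3) from an Ethernet MAC."""
    DUID_TYPE_LL = 3      # DUID based on link-layer address
    HW_TYPE_ETHERNET = 1  # IANA hardware type for Ethernet
    ll_addr = bytes(int(octet, 16) for octet in mac.split(":"))
    return (DUID_TYPE_LL.to_bytes(2, "big")
            + HW_TYPE_ETHERNET.to_bytes(2, "big")
            + ll_addr)

print(duid_ll("00:16:3e:12:34:56").hex())  # 0003000100163e123456
```

Note that nothing forces a client to send a DUID-LL; many send a DUID-LLT or DUID-EN, which is exactly why MAC-keyed host registration databases struggle with DHCPv6.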
Questions?

Some links
- A short introduction to the Worldwide LCG, Maarten Litmaath: https://espace.cern.ch/cern-guides/Documents/WLCG-intro.pdf
- Physics computing at CERN, Helge Meinhard: https://openlab-mu-internal.web.cern.ch/openlab-mu-internal/03_Documents/4_Presentations/Slides/2011-list/H.Meinhard-PhysicsComputing.pdf
- WLCG Beyond the LHCOPN, Ian Bird: http://www.glif.is/meetings/2010/plenary/bird-lhcopn.pdf
- LHCONE LHC use case: http://lhcone.web.cern.ch/node/23
- LHC Open Network Environment, Bos-Fisk paper: http://lhcone.web.cern.ch/node/19
- Introduction to CERN Data Center, Frederic Hemmer: https://indico.cern.ch/getFile.py/access?contribId=87&resId=1&materialId=slides&confId=243569
- The invisible Web: http://cds.cern.ch/journal/CERNBulletin/2010/49/News%20Articles/...?ln=en
- CERN LHC technical infrastructure monitoring: http://cds.cern.ch/record/435829/files/st....pdf
- Computing and network infrastructure for Controls: http://epaper.kek.jp/ica05/proceedings/pdf/O2_009.pdf

Extra slides

CERN in numbers
- Area: ~600,000 m2; buildings: 646
- Staff and users: 14,592; devices registered: 170,475

Data centers (Geneva / Wigner 2013 / Wigner 2014)
- Power: 3,500 kW / ~900 kW / ~1,200 kW
- Servers: 10,173 / ~1,200 / ~1,800
- Routers: 22 / 6 / 8 (+ firewall)
- 100 Gbps ports: 60 / 18 / ...
- ToR switching, 1 Gbps ports: 22,776 / 3,... ; 10 Gbps ports: 4,...

Storage
- Disks: 76,697; raw disk capacity: 116,948 TiB
- Tape drives: 160; data on tape: 100 PiB

L2 switching
- Switches: ...; 1 Gbps ports: ...; 10 Gbps ports: 5,656

L3 switching
- Routers: 161; 1 Gbps ports: ...; 10 Gbps ports: ...; 100 Gbps ports: 78

Wi-Fi
- Access points: 1,514; devices seen per day: ~7,000

LHCONE (LHC Open Network Environment)
- Enables high-volume data transport between T1s, T2s and T3s
- Separates large LHC flows from the general-purpose routed infrastructures of R&E networks
- Provides access locations that are entry points into a network private to the LHC T1/2/3 sites
- Complements the LHCOPN
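A footnote to the dual-stack discussion in the IPv6 slides: dual-stack hosts normally try IPv6 destinations first when both families resolve, which is why option-B VMs work transparently inside CERN yet still need IPv4 to reach IPv4-only remote sites. A much-simplified sketch of that preference; real resolvers follow the full RFC 6724 selection rules, and the helper name is illustrative:

```python
import socket

def prefer_ipv6(addrinfos):
    """Order (family, sockaddr) candidates so IPv6 comes first.

    A much-simplified stand-in for RFC 6724 destination address
    selection; real implementations also weigh scope and precedence.
    """
    return sorted(addrinfos, key=lambda ai: ai[0] != socket.AF_INET6)

# Documentation-range addresses (RFC 5737 / RFC 3849), purely illustrative:
candidates = [
    (socket.AF_INET, ("192.0.2.10", 80)),
    (socket.AF_INET6, ("2001:db8::10", 80)),
]
for family, addr in prefer_ipv6(candidates):
    print(family.name, addr[0])  # IPv6 candidate is listed first
```

With no IPv6 candidate in the list, the sort degenerates to the IPv4 order, matching the fallback behaviour a dual-stack client shows against an IPv4-only peer.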