Worldwide LHC Computing Grid - Ian Bird - HNSciCloud Prototype Phase Kickoff Meeting


Dr. Ian Bird (Ian.Bird@cern.ch), CERN, LHC Computing Project Leader, 3rd April 2017

Worldwide Distributed Computing for the LHC


The Large Hadron Collider (LHC)

A new frontier in Energy & Data volumes: LHC experiments generate 50 PB/year


[Figure: per-experiment detector-to-computing data rates: ~700 MB/s, ~10 GB/s, >1 GB/s, >1 GB/s]
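Averaged over a year, 50 PB works out to well under 2 GB/s sustained; the instantaneous rates in the figure are much higher because data-taking is bursty. A quick back-of-the-envelope sketch (decimal petabytes assumed; not from the slides):

    # Back-of-the-envelope: average rate implied by 50 PB/year
    PB = 1e15                                # bytes (decimal petabyte)
    seconds_per_year = 365 * 24 * 3600
    avg = 50 * PB / seconds_per_year         # bytes/s
    print(f"{avg / 1e9:.2f} GB/s")           # ~1.59 GB/s sustained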


The Worldwide LHC Computing Grid

WLCG: an international collaboration to distribute and analyse LHC data

Integrates computer centres worldwide that provide computing and storage resources into a single infrastructure accessible by all LHC physicists

Tier-0 (CERN and Hungary): data recording, reconstruction and distribution
Tier-1: permanent storage, re-processing, analysis
Tier-2: simulation, end-user analysis

- >2 million jobs/day
- ~750k CPU cores
- 600 PB of storage
- ~170 sites, 42 countries
- 10-100 Gb links
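For scale, an illustrative calculation (not from the slides; it assumes single-core jobs and full occupancy of the cores):

    # Illustrative only: average job length implied if ~750k cores
    # run >2 million single-core jobs/day at full occupancy
    cores = 750_000
    jobs_per_day = 2_000_000
    print(f"~{cores * 24 / jobs_per_day:.0f} core-hours per job")  # ~9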



WLCG MoU signatures, 2017: 63 MoUs; 167 sites; 42 countries


Networks

Optical Private Network: supports T0 – T1 transfers and T1 – T1 traffic; managed by the LHC Tier-0 and Tier-1 sites

Up to 340 Gbps transatlantic
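To put 340 Gbps in perspective, a hypothetical best-case estimate (it assumes the full capacity is available to one transfer, which in practice it is not, since the links are shared):

    # Sketch: best-case time to move 1 PB over a 340 Gbps path
    bits = 1e15 * 8                            # 1 PB in bits (decimal)
    rate = 340e9                               # bits/s
    print(f"{bits / rate / 3600:.1f} hours")   # ~6.5 hours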


[Figure: world map of LHCONE connectivity across Asia, North America, South America and Europe]

LHCONE: an overlay network that allows NRENs to manage HEP traffic on the general-purpose network; managed by the NREN collaboration


Data in 2016


2016: 49.4 PB LHC data / 58 PB all experiments / 73 PB total

ALICE: 7.6 PB; ATLAS: 17.4 PB; CMS: 16.0 PB; LHCb: 8.5 PB

Peak: 11 PB in July

180 PB on tape; 800 M files
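The per-experiment figures are consistent with the quoted LHC total up to rounding; a one-line check:

    # Consistency check of the 2016 per-experiment volumes (PB)
    volumes = {"ALICE": 7.6, "ATLAS": 17.4, "CMS": 16.0, "LHCb": 8.5}
    print(sum(volumes.values()))   # 49.5, matching the quoted 49.4 PB up to rounding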

Data distribution


Global transfer rates increased to 30-40 GB/s (more than 2x Run 1)

Increased performance everywhere:
- Data acquisition: >10 PB/month
- Data transfer rates: >35 GB/s globally

Several Tier-1s increased OPN network bandwidth to CERN to manage the new data rates; GEANT has deployed additional capacity for the LHC

[Figure: monthly traffic growth on LHCONE]

Regular transfers of >80 PB/month, with ~100 PB/month during July-October (many billions of files)
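The sustained-rate and monthly-volume figures hang together; a rough cross-check assuming a 30-day month:

    # Sketch: 35 GB/s sustained vs. monthly transfer volume
    rate = 35e9                                    # bytes/s
    month = 30 * 24 * 3600                         # seconds in a 30-day month
    print(f"{rate * month / 1e15:.0f} PB/month")   # ~91, consistent with 80-100 PB/month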

Worldwide computing


Peak delivery: 18M core-days/month (equivalent to ~580k cores running permanently)
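The parenthetical follows directly from the peak figure (assuming a 31-day month):

    # 18M core-days/month divided over a 31-day month
    print(f"~{18e6 / 31:,.0f} cores")   # ~580,645 cores running continuously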


CERN Facilities today


2017:
- 225k cores → 325k
- 150 PB raw → 250 PB

2017-18/19:
- Upgrade internal networking capacity
- Refresh tape infrastructure


Provisioning services

Moving towards an elastic hybrid IaaS model:
- In-house resources at full occupation
- Elastic use of commercial & public clouds
- Assume "spot-market" style pricing

[Figure: provisioning architecture - end users, IT & experiment services, CI/CD and experiment pilot factories reach OpenStack resource provisioning (>1 physical data centre) and HTCondor through APIs, CLIs and GUIs; resources delivered as VMs, containers, bare metal and HPC (LSF), public cloud, and volunteer computing]

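A minimal sketch of the bursting logic behind such a hybrid model (entirely hypothetical numbers and function names; the slide names HTCondor and OpenStack as the actual provisioning layers):

    # Hypothetical sketch: fill in-house capacity first, burst the overflow
    # to a commercial cloud only while spot-style pricing is acceptable
    def plan_capacity(demand, local_cores, spot_price, max_price):
        """Return (local, cloud) core counts for one scheduling interval."""
        local = min(demand, local_cores)            # keep in-house resources full
        cloud = (demand - local) if spot_price <= max_price else 0
        return local, cloud

    print(plan_capacity(demand=300_000, local_cores=225_000,
                        spot_price=0.012, max_price=0.02))   # (225000, 75000)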

CERN cloud procurements

Since ~2014, a series of short CERN procurement projects of increasing scale and complexity



Commercial Clouds



Future Challenges

[Figure: LHC timeline, 2010 to ~2030: First run, LS1, Second run, LS2, Third run, LS3, HL-LHC... FCC?]

CPU: x60 from 2016

Data:
- Raw: 50 PB (2016) → 600 PB (2027)
- Derived (1 copy): 80 PB (2016) → 900 PB (2027)


The raw data volume for the LHC increases exponentially, and with it the processing and analysis load

Technology improving at ~20%/year will bring a factor of 6-10 in 10-11 years

Estimates of resource needs at HL-LHC are a factor of 10 above what is realistic to expect from technology at reasonably constant cost
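The gap is easy to reproduce (a sketch, taking the ~20%/year figure at face value):

    # ~20%/year compounds to roughly x6-7.5 over 10-11 years...
    for years in (10, 11):
        print(years, round(1.20 ** years, 1))        # 6.2, 7.4
    # ...while raw data growing from 50 PB (2016) to 600 PB (2027)
    # implies roughly 25%/year
    print(f"{(600 / 50) ** (1 / 11) - 1:.0%}/year")  # 25%/year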


HL-LHC computing cost parameters


- Parameters (business of the experiments): amount of raw data, thresholds; detector design has long-term computing cost implications
- Core algorithms (business of the experiments): reconstruction and simulation algorithms
- Software performance: performance/architectures/memory etc.; supporting tools for automated build/validation; collaboration with externals via HSF
- Infrastructure: new grid/cloud models; optimize CPU/disk/network; economies of scale via clouds, joint procurements etc.

Possible model for future HEP computing infrastructure

[Figure: HEP data cloud (storage and compute); cloud users: analysis; simulation resources]


Conclusions

- WLCG has been very successful in providing the global computing environment for physics at the LHC
- Engagement and contributions of the worldwide community have been essential for that
- LHC upgrades over the coming decade will bring new challenges and opportunities
- Technology will change our computing models
