High Energy Physics (HEP) Computing
HyangKyu Park
Kyungpook National University
Daegu, Korea
2008 Supercomputing & KREONET Workshop
Ramada Hotel, Jeju, Oct. 16~18, 2008
High Energy Physics (HEP)
High Energy Physics (HEP) is the study of the basic elements of matter
and the forces acting among them.
People have long asked, "What is the world made of?"
"What holds it together?"
Major HEP Laboratories in the World
FNAL (US)
BNL (US)
CERN (Europe)
DESY (Germany)
KEK (Japan)
SLAC (US)
Major HEP Experiments
Experiment            Collaborators   Countries   Data Volume (comments)
Belle (KEK, Japan)    ~300            13          1 Peta-Byte (ends in ~2010)
CDF (FNAL, USA)       ~800            12          1.5 Peta-Byte (ends in 2010)
D0 (FNAL, USA)        ~800            19          1.5 Peta-Byte (ends in 2010)
CMS (CERN, Europe)    ~2000           36          ~10 Peta-Byte/yr (starts in 2008)
HEP collaborations are increasingly international.
CMS Computing
The Large Hadron Collider (LHC) at CERN, where the web was born, hosts the experiments ALICE, ATLAS, CMS, LHCb (B-physics), and TOTEM.
6000+ Physicists, 250+ Institutes, 60+ Countries
Challenges: analyze petabytes of complex data cooperatively; harness global computing, data & network resources
The LHC has just started!
"The CMS detector is essentially a 100-megapixel digital camera that will take 40 M pictures/s of particle interactions." (Dan Green)
The High Level Trigger farm writes RAW events of 1.5 MB at a rate of 150 Hz:
1.5 MB x 150/s x 10^7 s ≈ 2.3 Peta-Byte/yr
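The same back-of-the-envelope estimate can be written out as a short script; the ~10^7 s is the approximate effective running time per year assumed above.

```python
# Back-of-the-envelope estimate of the annual CMS RAW data volume,
# using the figures quoted above: 1.5 MB/event, 150 Hz to storage,
# and ~1e7 seconds of effective running time per year.
event_size_mb = 1.5           # RAW event size in MB
trigger_rate_hz = 150         # High Level Trigger output rate in events/s
live_seconds_per_year = 1e7   # approximate effective running time per year

volume_pb = event_size_mb * trigger_rate_hz * live_seconds_per_year / 1e9
print(f"~{volume_pb:.2f} PB/yr of RAW data")  # ~2.25 PB/yr, i.e. ≈ 2.3 PB/yr
```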
LEP & LHC in Numbers
                             LEP (1989/2000)    CMS (2008)          Factor
No. of electronic channels   100 000            10 000 000          x 10^2
Raw data rate                100 GB/s           1 000 TB/s          x 10^4
Data rate on tape            1 MB/s             100 MB/s            x 10^2
Event size                   100 KB             1 MB                x 10
Bunch separation             22 µs              25 ns               x 10^3
Bunch crossing rate          45 kHz             40 MHz              x 10^3
Rate on tape                 10 Hz              100 Hz              x 10
Analysis rate                0.1 Hz (Z0, W)     10^-6 Hz (Higgs)    x 10^5
The LHC Data Grid Hierarchy
[Diagram: the tiered LHC data grid hierarchy, with KNU among the sites. ~2000 physicists, 40 countries; ~10s of Petabytes/yr by 2010, ~1000 Petabytes in < 10 yrs?]
Service and Data Hierarchy
Tier-0 at CERN
– Data acquisition & reconstruction of raw data
– Data archiving (tape & disk storage)
– Distribution of raw & reconstructed data -> Tier-1 centers
Tier-1
– Regional & global services
  • ASCC (Taiwan), CCIN2P3 (Lyon), FNAL (Chicago), GridKA (Karlsruhe), INFN-CNAF (Bologna), PIC (Barcelona), RAL (Oxford)
– Data archiving (tape & disk storage)
– Reconstruction
– Data-heavy analysis
Tier-2
– ~40 sites (including Kyungpook National Univ.)
– MC production
– End-user analysis (local community use)
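For orientation only, a minimal sketch of how the annual data volume might map onto this hierarchy, assuming (purely for illustration) an even split across the seven Tier-1 sites listed above and a reconstructed-data volume of about half the RAW size; actual CMS data placement is not uniform.

```python
# Illustrative sketch only: per-Tier-1 share of the annual CMS data volume,
# ASSUMING an even split across the seven Tier-1 sites named above and a
# reconstructed-data volume of ~50% of RAW. Real CMS placement is not uniform.
raw_pb_per_year = 2.3      # RAW volume from the earlier estimate
reco_fraction = 0.5        # assumed relative size of reconstructed data
tier1_sites = ["ASCC", "CCIN2P3", "FNAL", "GridKA", "INFN-CNAF", "PIC", "RAL"]

share_pb = raw_pb_per_year * (1 + reco_fraction) / len(tier1_sites)
for site in tier1_sites:
    print(f"{site:10s} ~{share_pb:.2f} PB/yr")
```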
LCG_KNU
LHC Computing Grid (LCG) Farms
Current Tier-1 Computing Resources
Requirements by 2008:
• CPU: 2,500 kSI2k
• Disk: 1.2 PB
• Tape: 2.8 PB
• WAN: at least 10 Gbps
Current Tier-2 Computing Resources
Requirements by 2008:
• CPU: 900 kSI2k
• Disk: 200 TB
• WAN: at least 1 Gbps; 10 Gbps recommended
CMS Computing in KNU
                        KNU
CPU (kSI2k)             400
Disk storage (TB)       117 -> 150 (12 disk servers)
Tape (TB)               46
WAN (Gbps)              12 -> 20
Grid system             LCG
Support                 High Energy Physics
CMS computing role      Tier-2
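Set against the nominal 2008 Tier-2 requirements quoted above, KNU's resources compare as follows (values copied from the slides; the percentages are simple ratios):

```python
# Compare KNU's Tier-2 resources (table above, post-upgrade disk figure) with
# the nominal 2008 Tier-2 requirements quoted earlier. Plain ratios only.
required = {"CPU (kSI2k)": 900, "Disk (TB)": 200, "WAN (Gbps)": 1}
knu      = {"CPU (kSI2k)": 400, "Disk (TB)": 150, "WAN (Gbps)": 20}

for item, need in required.items():
    have = knu[item]
    print(f"{item:12s}: {have:>5} of {need:<5} required ({have / need:.0%})")
```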
[Network map (as of Oct. 2007): KREONET/GLORIAD (KR-CN), KOREN/APII (KR-JP), and APII/TEIN2/GLORIAD links connecting KR, JP, CN, HK, and SG with other Asia-Pacific sites (AU, PH, VN, TH, ID, MY) over TEIN2 North/ORIENT and TEIN2 South, and onward to North America (via TransPAC2 and GLORIAD) and the EU; link capacities range from 45 Mbps to 10 Gbps.]
Courtesy of Prof. D. Son and Dr. B.K. Kim
CMS Computing Activities in KNU
Running a Tier-2
Participating in LCG Service Challenges and CSAs every year as a Tier-2
– SC04 (Service Challenge): Jun.~Sep. 2006
– CSA06 (Computing, Software & Analysis): Sep.~Nov. 2006
– Load Test 07: Feb.~Jun. 2007
– CSA07: Sep.~Oct. 2007
– Pre-CSA08: Feb. 2008
– CSA08: May~June 2008
Testing, demonstrating, and bandwidth challenging
– SC05, SC06, SC07
Preparing physics analyses
– RS graviton search
– Drell-Yan process study
Configured a Tier-3 and supporting Tier-3s (Konkuk U.)
CSA07 (Computing, Software & Analysis)
A "50% of 2008" data challenge of CMS data handling
– Schedule: July~Aug. (preparation), Sep. (CSA07 start)
CSA08 (Computing, Software & Analysis)
Summary of CSA07

Transferred Data Volume from Tier-1 to KNU during CSA08
[Chart: data volume transferred to KNU, compared with MIT and DESY]

Job Submission Activity during CSA08
Activity    # of Sub. Jobs   Success   Success Rate (%)
Analysis    7,969            1,174     71.1
CCRCPG      1,235            4,827     99.2
Total       9,204            6,001     74.9

Transferred Data Volume from Tier-1 to KNU
[Chart: data volume transferred to KNU; downtime from a system upgrade is annotated]

Job Submission Activity from Apr. to Oct.
[Chart: job submissions at KNU, compared with MIT and DESY]
Activity    # of Sub. Jobs   Success   Success Rate (%)
CCRCPG      1,235            1,174     99.2
JobRobot    65,140           49,989    94.9
Analysis    13,320           6,688     60.8
Production  739              696       100
Total       80,434           58,547    89.3
Configuring the Tier-3
with Konkuk University
Elements of the Data Grid System
Data Grid service (or supported) nodes:
– glite-UI (User Interface)
– glite-BDII (Berkeley Database Information Index)
– glite-LFC_mysql (LCG File Catalog)
– glite-MON (Monitor)
– glite-PX (Proxy server)
– glite-SE_dcache (Storage Element)
– glite-RB (Resource Broker, job management)
– glite-CE_torque (Computing Element)
Worker nodes: data processing and computation
Storage Element (file server): stores a large amount of data
8 Nodes
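As a sketch of how an end user would touch these services, the snippet below writes a minimal gLite-style JDL job description; from the glite-UI node it would then be submitted with the standard gLite job-submission command, brokered by the glite-RB to a glite-CE and its worker nodes. The file name and executable here are illustrative assumptions, not taken from the slides.

```python
# Hypothetical sketch: prepare a minimal gLite JDL job description on the
# glite-UI node. The executable and file names are illustrative assumptions.
jdl = """\
Executable    = "/bin/hostname";
StdOutput     = "std.out";
StdError      = "std.err";
OutputSandbox = {"std.out", "std.err"};
"""

with open("hello_grid.jdl", "w") as fout:
    fout.write(jdl)

# From the glite-UI the job would then be handed to the Resource Broker via
# the gLite job-submission CLI, run on a Computing Element / worker node, and
# its OutputSandbox retrieved back to the UI afterwards.
print("Wrote hello_grid.jdl - submit it from the glite-UI node")
```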
Tier-3 Federation
[Diagram: Tier-3 federation over the KOREN DWDM/ADM backbone, with 10G, 20G and 40G links connecting Seoul, Suwon, Daejeon, Daegu, Gwangju, and Busan. CMS institutions: Kyungpook National Univ., Konkuk Univ., Sungkyunkwan Univ., and Chonbuk National Univ.; other sites on the map include Korea Univ., Univ. of Seoul, Chonnam National Univ., Dongshin Univ., Gyeongsang National Univ., Kangwon National Univ., Chungbuk National Univ., and Seonam Univ.]
Resources: 40 CPUs & 10 TB
Summary
HEP has pushed against the limits of networking and computing technologies for decades.
High-speed networks are vital for HEP research.
The LHC experiment has just started and will soon produce ~10 PB/yr of data.
We may expect 1 Tbps networks in less than a decade.
HEP groups in the US, EU, Japan, China and Korea are collaborating on advanced network projects and Grid computing.