Main title GRID @ HEP in Greece Group info (if required) Your name ….

Upload: mildred-benson

Post on 03-Jan-2016


TRANSCRIPT

Page 1: Main title GRID @ HEP in Greece Group info (if required) Your name …

Main title

GRID @ HEP in Greece

Group info (if required)

Your name ….

Page 2:

LHC HEP groups in Greece

• AUTH - ATLAS – ?????

• Demokritos - CMS – 6 Senior Physicists, 2 post-docs, 3 Ph.D. Students

• NTUA - ATLAS – ?????

• UoA - ATLAS, CMS, ALICE – ???????

• UoI - CMS – ???????

Page 3:

Current HEP Grid Resources

AUTH – 2 Grid clusters (GR-01-AUTH, HG-03-AUTH): 300 cores, 70 TB storage; more than 30% allocated to the LHC/HEP VOs. Contact: Prof. Chariclia PETRIDOU. The HG-03-AUTH cluster is part of the HellasGrid infrastructure, owned by GRNET.

Demokritos – 1 Grid cluster (GR-05-Demokritos): 120 cores, 25 TB storage; dedicated to HEP VOs. Contact: Dr. Christos MARKOU.

NTUA – 1 Grid cluster (GR-03-HEPNTUA): 74 cores, 4.5 TB storage; dedicated to HEP VOs. Contact: Prof. Evagelos GAZIS.

UoA - IASA – 2 Grid clusters (GR-06-IASA, HG-02-IASA): 140 cores (an additional 160 cores in the coming days), 10 TB storage (an additional 10 TB in the coming days); more than 30% allocated to the LHC/HEP VOs. Contact: Prof. Paris SPHICAS. The HG-02-IASA cluster is part of the HellasGrid infrastructure, owned by GRNET.

U. of Ioannina – 1 Grid cluster (GR-07-UOI-HEPLAB): 112 cores, 200 TB storage; dedicated to the CMS experiment. Contact: Prof. Kostas FOUDAS.
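For a quick overview, the per-site figures above can be tallied; a minimal sketch in which the individual numbers are taken from the slide and only the totals are derived:

```python
# Consolidated view of the per-institute Grid resources listed on this page.
# Per-site numbers come from the slide; the totals are computed here.
resources = {
    "AUTH (GR-01-AUTH, HG-03-AUTH)":     {"cores": 300, "storage_tb": 70.0},
    "Demokritos (GR-05-Demokritos)":     {"cores": 120, "storage_tb": 25.0},
    "NTUA (GR-03-HEPNTUA)":              {"cores": 74,  "storage_tb": 4.5},
    "UoA-IASA (GR-06-IASA, HG-02-IASA)": {"cores": 140, "storage_tb": 10.0},
    "UoI (GR-07-UOI-HEPLAB)":            {"cores": 112, "storage_tb": 200.0},
}

total_cores = sum(r["cores"] for r in resources.values())
total_storage_tb = sum(r["storage_tb"] for r in resources.values())

# Totals before the announced IASA expansion (+160 cores, +10 TB)
print(f"Consortium total: {total_cores} cores, {total_storage_tb} TB")
```

This gives 746 cores and 309.5 TB across the consortium, before the pending IASA expansion.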

Page 4:

Expertise in Grid technology (1)

• At least two teams of the consortium – IASA, AUTH and Demokritos:

– Have strong cooperation with GRNET
– Are major stakeholders of the Greek National Grid Initiative (NGI_GRNET/HellasGrid)
– Have operated production-level Grid sites since 2003
– Have deep knowledge of the day-to-day operation of the distributed core Grid services (including Virtual Organization Management Systems, Information Systems, Workload Management services, Logical File Catalogs, etc.)
– Are experienced in testing and certification of grid services and middleware
– Datacenter/Grid monitoring
– Clustering, data management, network monitoring

Page 5:

Expertise in Grid technology (2)

• Participation in the pan-European projects:

– CROSSGRID, GRIDCC, EGEE-I, EGEE-II, EGEE-III, EUChinaGrid, SEEGRID and, since 6/2010, EGI

• Additionally, the teams are responsible for running services at a national and/or international level:
– HellasGrid Certification Authority [https://access.hellasgrid.gr/]
– European Grid Application Database [http://appdb.egi.eu]
– Unified Middleware Deployment global repository [http://repository.egi.eu]

Page 6:

Expertise in Grid management

• On behalf of GRNET, personnel of our teams have carried out regional- and/or national-level responsibilities/roles:

– EGI task leader for Reliable Infrastructure Provision
– EGI task leader for User Community Technical Services
– Deputy Regional Coordinator for operations in the South Eastern Europe region (4/2009 – 4/2010, EGEE-III)
– Country representative/coordinator for HellasGrid (5/2009 – 4/2010, EGEE-III)
– Manager of the Policy, International Cooperation & Standardization Status Report activity of EGEE-III
– Coordinator of the Direct User Support group in EGEE-III

Page 7:

HEP data transfers (1)

• CMS: PhEDEx-commissioned links for data transfers
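PhEDEx exposes the state of its transfer links through a REST data service; the sketch below shows how such a commissioned-links query might be assembled. The base URL, endpoint name and parameter names are assumptions for illustration, not details taken from the slide:

```python
from urllib.parse import urlencode

# Assumed base URL of the PhEDEx data service (JSON output, production instance)
BASE = "https://cmsweb.cern.ch/phedex/datasvc/json/prod"

def links_query(from_node: str, to_node: str, status: str = "ok") -> str:
    """Build a hypothetical query URL asking which transfer links
    between two (wildcarded) node names are in the given status."""
    params = urlencode({"from": from_node, "to": to_node, "status": status})
    return f"{BASE}/links?{params}"

# e.g. all commissioned links from any Tier-1 into the Greek Tier-2s
url = links_query("T1_*", "T2_GR_*")
print(url)
```

In a real script, the returned JSON would then be fetched and filtered; only the URL construction is shown here.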

Page 8:

HEP data transfers (2)

• ATLAS

– DQ2 – Don Quijote (second release)
– DQ2 is built on top of Grid data transfer tools

Page 9:

Installed HEP SW

• CRAB, the CMS Remote Analysis Builder, is installed on the UI hosted at IASA

• The GANGA job-management frontend is installed on the UI of AUTH

• Almost all sites of the consortium have the most up-to-date software installed
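For reference, a CRAB (v2-era) analysis task is driven by a `crab.cfg` file on the UI. The fragment below is a hedged sketch of the usual layout; the dataset path, parameter-set file and storage-element name are placeholders, not values from the slide:

```ini
[CRAB]
jobtype   = cmssw
scheduler = glite          ; submit through the gLite WMS

[CMSSW]
datasetpath            = /HypotheticalDataset/Run2010A/RECO  ; placeholder
pset                   = analysis_cfg.py                     ; placeholder CMSSW config
total_number_of_events = -1
events_per_job         = 10000

[USER]
return_data     = 0
copy_data       = 1
storage_element = T2_XX_Placeholder   ; placeholder site name
```

The task would then be created and submitted from the UI with the usual `crab -create` / `crab -submit` workflow.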

Page 10:

Some indicative statistics

• More than 580k jobs since 1/2005

• More than 1.5M normalized CPU hours since 1/2005
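"Normalized" CPU hours scale the raw CPU time of a job by the benchmark power of the worker node relative to a reference machine. A minimal sketch of that accounting arithmetic; the reference value (1 kSI2k per core) and the example figures are illustrative assumptions, not numbers from the slide:

```python
# Normalized CPU hours = raw CPU hours x (per-core benchmark / reference benchmark).
REFERENCE_POWER = 1000.0  # assumed reference: 1 kSI2k per core

def normalized_cpu_hours(raw_cpu_hours: float, core_power: float) -> float:
    """Scale raw CPU hours by how the executing core compares to the reference."""
    return raw_cpu_hours * (core_power / REFERENCE_POWER)

# A job using 100 raw CPU hours on a core twice as fast as the reference
print(normalized_cpu_hours(100.0, 2000.0))  # 200.0
```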

Page 11:

Needs …..

• What our HEP consortium needs in terms of physics, and why …
– [ChristosM]

Page 12:

Specs of a T2 site

• Based on CMS and ATLAS specifications, the minimum requirements for a T2 site are:

Computing: 5–6k HEP-SPEC06, approx. 500–600 cores (~70 nodes with dual quad-core CPUs)

Storage: > 250 TB of available disk space (approx. 300 TB of raw disk, with 1/6 redundancy – RAID)
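The two sets of numbers on this slide are mutually consistent, which a quick check makes explicit. The ~10 HEP-SPEC06-per-core figure below is an assumption typical of hardware of that period, not a value from the slide:

```python
# Sanity check of the T2 sizing figures quoted above.
HS06_PER_CORE = 10.0    # assumed per-core benchmark score (~2010 hardware)
hs06_target   = 5500.0  # midpoint of the 5-6k HEP-SPEC06 requirement

cores_needed = hs06_target / HS06_PER_CORE
print(cores_needed)            # 550.0 -> within the quoted 500-600 cores

raw_disk_tb = 300.0            # raw disk from the slide
redundancy  = 1.0 / 6.0        # RAID overhead from the slide
usable_tb   = raw_disk_tb * (1.0 - redundancy)
print(round(usable_tb))        # 250 -> matches the > 250 TB requirement
```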

Page 13:

Proposed distributed T2 site

• Distributed over two locations in order to take advantage of existing infrastructure and support teams

• Scenario 1: computing and storage infrastructure in both locations
– One Tier-2 with two sub-clusters
– Higher availability
• Redundancy in case of site failure
• Allows for flexible maintenance windows

• Scenario 2: computing and storage infrastructure decoupled
– One Tier-2 with one sub-cluster
– Splits the requirement for technical expertise
• Each support team focuses on specific technical aspects (storage or computing)
• Reduced requirement for overlapping manpower and homogeneity effort
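The higher-availability claim of Scenario 1 can be quantified: with two independent sub-clusters, the Tier-2 is unavailable only when both are down at once. A sketch of that arithmetic; the 95% single-site availability is an illustrative assumption:

```python
# With n independent sites, each up with probability a,
# at least one site is up with probability 1 - (1 - a)^n.
def combined_availability(a: float, n_sites: int = 2) -> float:
    return 1.0 - (1.0 - a) ** n_sites

single = 0.95  # assumed per-site availability
print(round(combined_availability(single), 4))  # 0.9975
```

So two 95%-available sub-clusters yield roughly 99.75% availability for the service as a whole.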

Page 14:

Things to be discussed …

(this slide is only for internal discussion/decisions during the EVO meeting on 18/6)

• Offering a segment (e.g. 20%) of the resources to the SEE VO
– to benefit the academic/scientific communities of the SEE VO

• Pledged resources will be provided/guaranteed by us (MoU maybe ???). For example:
– Guaranteed 50k/year from the consortium, divided as:

• 50% for computing-resource upgrades and/or expansion
• 40% for additional storage
• 10% for maintenance of the infrastructure (A/C, UPS, electrical infrastructure, …)
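The proposed split of the annual pledge works out as follows; the 50k figure and the percentages are the slide's, while the currency unit is unspecified on the slide and left unspecified here:

```python
# Worked example of the proposed yearly budget split (figures from the slide).
pledge = 50_000

split = {
    "computing upgrade and/or expansion": 0.50,
    "additional storage":                 0.40,
    "infrastructure maintenance":         0.10,  # A/C, UPS, electrical infra
}

amounts = {item: pledge * frac for item, frac in split.items()}
for item, amount in amounts.items():
    print(f"{item}: {amount:,.0f} per year")
```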