FZU Computing Centre
Jan Švec
Institute of Physics of the AS CR, v.v.i.
29.8.2011
The computing centre is an independent part of the Department of Networking and Computing Techniques.
Members of the team, led by J. Chudoba from the Department of Experimental Particle Physics:
T. Kouba, J. Švec, J. Uhlířová, M. Eliáš, J. Kundrát.
Operation is strongly supported by members of the Department of Detector Development and Data Processing, led by M. Lokajíček:
J. Horký, L. Fiala.
HEP experiments: D0 (Fermilab, USA); the LHC experiments ATLAS and ALICE (CERN); STAR (BNL, USA)
Solid state physics
Astroparticle physics: Pierre Auger Observatory (Argentina)
History: 2002
34x HP LP1000r, 2x 1.3 GHz Pentium III, 1 GB RAM
First 1 TB of disks
A “terminal room”
History: 2004
A real data centre: 200 kVA UPS, 380 kVA diesel, 2x 56 kW CRACs
67x HP DL140 with 3.06 GHz Prescotts
10 TB disk storage
History: 2006 - 2007
2006: 36x HP BL35p, 6x HP BL20p
2007: 18x HP BL460c, 12x HP BL465c
History: 2006 – 2007 (cont.)
3x HP DL360 for Xen
FC infrastructure
HP EVA: 70+ TB usable capacity, FATA drives; disk images for Xen machines, the rest used as DPM
Warranty ended in Dec 2010; a new box was cheaper than a warranty extension
History: 2008
84x IBM iDataPlex dx340 nodes, 2x Xeon E5440 => 8 cores
20x Altix XE 310 twins (40 hosts), 2x Xeon E5420 => 8 cores
3x Overland Ultamus 4800 (48 TB raw each) on SAN
Tape library NEO 8000
2x VIA-based NFS storage
First decommissioning of computing nodes: 2002's HP LP1000r
History: 2009
65x IBM iDataPlex dx360M2 nodes, 2x Xeon E5520 with HT => 16 cores
9x Altix XE 340 twins (18 hosts), 2x Xeon E5520 without HT => 8 cores
All water cooled
3x Nexsan SataBeast (84 TB raw each) on SAN
3x Atom-based storage nodes: a wrong idea, the WD15EADS-00P8B0 drives behave strangely
History: 2009
SGI Altix ICE 8200 for solid state physics: 512 cores (128x E5420 2.5 GHz), 1 TB RAM, Infiniband, 6 TB disk array (Infiniband), Torque/Maui, OpenMPI
Cooling 2009
2009: new water-cooling infrastructure, STULZ CLO 781A, 2x 88 kW
Additional ceiling for hot/cold air separation
History: 2010
26x IBM iDataPlex dx360M3 nodes, 2x Xeon X5650 with HT => 24 cores
Bull Optima 1500: the HP EVA replacement
8x Supermicro SC847E16 + 847E16-RJBOD: DPM pool nodes, 1.3 PB
Second decommissioning of computing nodes: 2004's HP DL140
8x IBM dx340 swapped for dx360M3
Network facilities
External connectivity delivered by CESNET, a GÉANT stakeholder:
10 Gb to the public Internet
1 Gb to FZK (Karlsruhe)
10 Gb connection demuxed into several dedicated 1 Gb lines: FNAL, BNL, ASGC, Czech Tier-3s, ...
10 Gb internal network; a few machines still on 1 Gb switches → LACP
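As a sketch of the LACP approach mentioned above: on a Linux host, two 1 Gb NICs can be aggregated into an 802.3ad bond so the machine keeps up with the 10 Gb core. Interface names, the address, and the use of iproute2 here are illustrative assumptions, not the centre's actual configuration.

```shell
# Aggregate two 1G NICs into an LACP (802.3ad) bond -- a sketch;
# eth0/eth1 and the address are assumptions.
modprobe bonding
ip link add bond0 type bond mode 802.3ad miimon 100
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0   # example address only
```

The corresponding switch ports must be configured as an LACP port channel for the aggregate to come up.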
2011: Current Procurements
Estimates:
– Worker nodes, approx. 5500 HEPSPEC06
– 800TB disk storage
– Water cooled rack + water cooled back doors
– Another NFS fileserver
Overall performance growth
[Chart: overall HEPSPEC performance per year, 2002–2010; y-axis 0–30 000]
Current numbers
275 WNs for HEP, 76 WNs for solid state physics
3000 cores for HEP, 560 cores for solid state physics
Torque & Maui
2 PB on disk servers (DPM or NFSv3)
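For reference, a minimal batch job as the Torque & Maui setup above would accept (queue name, resource values, and the payload executable are illustrative assumptions):

```shell
#!/bin/bash
#PBS -N example-job           # job name (illustrative)
#PBS -q hep                   # queue name is an assumption
#PBS -l nodes=1:ppn=8         # one node, 8 cores
#PBS -l walltime=01:00:00     # one hour wall-clock limit
cd "$PBS_O_WORKDIR"           # Torque starts jobs in $HOME by default
./run_analysis                # hypothetical payload executable
```

Submitted with `qsub job.sh`; Maui then decides when and where the job runs based on fairshare and priority policies.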
Q & A
Questions?