Portsmouth ICG – HPC: The “Sciama” Environment
G Burton - Nov 10 - Version 1.1
SCIAMA (pronounced shama)
SEPNet Computing Infrastructure for Astrophysical Modeling and Analysis
What we need from SEPNet partners:-
• Named “superuser”
  – Required for initial testing
  – Required for initial user training
  – Will require local IP range for firewall access
• Required software packages to be installed
• Estimate of the likely number of users
Sciama Building Blocks
In the “good ol’ days” things were simple …
… then more sockets were added
• Two main players are Intel and AMD
• Single operating system controlling both sockets
… then more cores were added to each socket – the basic building block of the Sciama cluster:
Intel Xeon X5650, 2.66 GHz, six cores (Westmere)
Total ICG Compute Pool > 1000 Cores
Sciama Basic Concept
Basic Concept of Cluster
A bit about Storage …
NB. The storage is transient - IT WILL NOT BE BACKED UP
Lustre Storage – Very Large Files – High Performance
Networking – Three Independent LANs
Some users are at remote locations …
Use of Remote Login Client
ICG-HPC Stack
Installed S/W
• Licensed software:
  – Intel Cluster Toolkit (compiler edition for Linux)
  – Intel Thread Checker
  – Intel VTune Performance Analyser
  – IDL
    • use ICG license pool?
    • restrict access?
  – Matlab
    • use UoP floating licenses?
    • restrict access?
Installed S/W – will install similar to COSMOS / Universe:
• OpenMPI, OpenMP, MPICH
• Open-source C, C++ and Fortran compiler suites
• Maths libs – ATLAS, BLAS, (Sca)LAPACK, FFTW
Running Applications on Sciama
12 Cores per Node
• Multiple cores allow for multi-threaded applications
• OpenMP is an enabler
Inter-node memory sharing is not (usually) possible
• Gives rise to the “distributed memory” model
• Need the likes of OpenMPI (an implementation of MPI, the Message Passing Interface)
Largest (sensible) job is 24 GBytes in this distributed-memory model
MPI allows parallel programming in distributed memory model
• MPI enables parallel computation
• Message buffers are used to pass data between processes
• The standard TCP/IP network is used
Hybrid OpenMP and MPI programming is possible
Comparing Sciama with the Cambridge COSMOS / Universe Environments
Shared Memory Model
• Sciama is a distributed-memory system.
• The COSMOS / Universe environments are SGI Altix shared-memory systems.
Shared Memory Models Can Support Very Large Processes
Shared Memory Model Supports OpenMP and MPI (and Hybrid)
• Altix systems have an MPI offload engine for speeding up MPI comms.
Binary Compatibility
• COSMOS and Universe are not binary compatible with each other (Itanium vs. Xeon processors).
• Universe is binary compatible with Sciama, but some libraries may be SGI-specific (e.g. the MPI offload engine).