
ODU Turing Community Cluster Documentation (revision 2.0)

Table of Contents 

Who can use Turing? 

How to apply? 

What are the conditions of use? 

Support Resources 

Turing Resources 

Network Resources 

Storage Resources 

Mass Storage 

Lustre Storage 

Login and Usage 

Turing Hostname 

Accessing from a Linux, Unix, or OS X Terminal Environment 

Accessing from a Microsoft Windows Environment 

Application Environment 

Applications Available on Turing 

Setting the environment for jobs 

Using Modules to define the job environment 

Using .tcshrc(.bashrc) files 

Submitting Jobs using Sun Grid Engine (SGE) 

The following table shows the association of PEs for both Batch and Interactive Jobs 

Job Script: 

Appendix A: Research Computing Acceptable Use Statement 

Appendix B: Modules User Guide 

Appendix C: Installed Application Documentation 

 


Who can use Turing?

● Any valid Old Dominion University student, faculty, or staff member conducting research
● Research collaborators sponsored by ODU faculty/staff

How to apply?

● Prerequisites:
○ A valid MIDAS username and password (http://midas.odu.edu)
○ A workstation with internet access
● Send an email to [email protected] requesting that the “HPC service” be enabled on the user’s MIDAS account

What are the conditions of use?

● Users are expected to review and adhere to the Research Computing Acceptable Use Statement (Appendix A).

Support Resources

● Your requests for assistance from the Research Computing group are documented and tracked using FootPrints: https://fp.odu.edu.

● Email:
○ HPC Shared email: [email protected]
○ Adrian Jones: [email protected]
○ John Pratt: [email protected]
○ Rizwan Bhutta: [email protected]
○ Terry Stilwell: [email protected]
○ Je’aime Powell: [email protected]

● Phone:

○ Adrian Jones: 683-3678
○ John Pratt: 683-3088
○ Rizwan Bhutta: 683-3586
○ Terry Stilwell: 683-7145
○ Je’aime Powell: 683-7149

● Location:
○ Engineering & Computational Sciences Bldg, Suite 4300, 4700 Elkhorn Avenue, Norfolk, VA 23529

● High Performance Computing Website

○ http://www.odu.edu/facultystaff/research/resources/computing/high-performance-computing

 


Turing Resources

Type | # of Nodes | Model | Processor | Cores per CPU | GPU | Memory per Node | Total Cores | Total Memory
Head/Login | 3 | Dell r720 | 2 x Intel Xeon E5-2660 v2 2.2GHz | 10 | n/a | 128GB | 60 | 384GB
Login | 1 | CRAY R230LH2HKC | 2 x Intel Xeon E5-2670 v2 2.5GHz | 10 | n/a | 128GB | 20 | 128GB
Compute | 8 | Dell c8220 | 2 x Intel Xeon E5-2660 2.2GHz | 8 | n/a | 128GB | 128 | 1024GB
Compute | 24 | Dell c8220 | 2 x Intel Xeon E5-2660 v2 2.2GHz | 10 | n/a | 128GB | 480 | 3072GB
Compute | 20 | Dell c6220 | 2 x Intel Xeon E5-2660 2.2GHz | 8 | n/a | 128GB | 320 | 2560GB
NVIDIA GPU | 17 | Appro | 2 x Intel Xeon X5650 2.67GHz | 6 | 4 | 96GB | 204 | 1632GB
Compute | 76 | Cray GB512X | 2 x Intel Xeon E5-2670 v2 2.50GHz | 10 | n/a | 128GB | 1520 | 9728GB
Intel Phi | 10 | Cray GB512X | 2 x Intel Xeon E5-2670 v2 2.50GHz | 10 | 60 | 128GB | 200 | 1280GB
High Memory | 4 | CRAY R230LH2HKC | 4 x Intel Xeon E5-4610 v2 2.30GHz | 8 | n/a | 768GB | 128 | 3072GB
Total | | | | | | | 3060 | 22880GB

 


Network Resources

The Turing cluster has several data networks.

● The internal InfiniBand network is used for high-speed message passing between compute nodes. The majority of nodes are connected via a non-blocking FDR InfiniBand network. The Lustre parallel file storage system is also connected via a 40Gb/s InfiniBand connection.

● Metadata and node imaging traffic is carried over an internal cluster ethernet network. NFS scratch space is also accessed over the internal ethernet network.

● Access to the login node is provided over a 10Gb ethernet network, which provides fast transfers of data to the login nodes and to mass storage.

● An internal IPMI network is used for cluster maintenance, including building compute nodes and managing devices.

● An out-of-band ethernet network is also used to manage both blade chassis and power distribution units.

Storage Resources

The Turing cluster has several mounted storage resources available. They consist of the user home directories (/home), user scratch space (/scratch), parallel user scratch space (/lustre), and long-term research storage (/RC).

● /home - (i.e., primary storage) provided by an EMC Isilon array, mounted on all nodes
○ User home directories are backed up approximately once per day
● /scratch - provided by an EMC Isilon array, mounted on all nodes
○ This storage area is considered volatile (i.e., it is not backed up in any way)
● /RC - (i.e., Mass Storage and/or archival storage) provided by an EMC Isilon array, mounted on the login node
● /lustre - (i.e., parallel scratch) provided by Dell Terascala, mounted on all nodes
○ This storage area is considered volatile (i.e., it is not backed up in any way)

Mass Storage

Mass storage (/RC) is provided by the Isilon disk array and is mounted on the login node via NFS. Individuals may request either individual or group directory space on the mass storage system by sending an email to [email protected]. Direct access to the mass storage system can be provided to users' workstations through a CIFS (Windows) file share (R:). More information on accessing mass storage can be found on the research computing website at: https://www.odu.edu/content/dam/odu/offices/occs/docs/ResearchMassStorageDocumentation.pdf

● Total Available Shared Storage: (based on need)

Lustre Storage

The Terascala Lustre file system (/lustre), provided by Dell, is a distributed file system designed for large-scale computation. Four (4) 40Gb/s InfiniBand network connections provide high data throughput to each computational node. This storage area is highly recommended when high-volume aggregate file read/write I/O is required by massively distributed or highly parallelized jobs. Further information on the Lustre file system can be found at: https://wiki.hpdd.intel.com/display/PUB/HPDD+Wiki+Front+Page

● Total Available Shared Storage: 36TB

 


Login and Usage

Turing Hostname

turing.hpc.odu.edu 

Accessing from a Linux, Unix, or OS X Terminal Environment

# ssh -X <username>@turing.hpc.odu.edu

Note: The “-X” switch during the initial login (and every subsequent login) is important; it forwards all X-related streams. An alternative is to set the DISPLAY variable manually:

setenv DISPLAY <my-ip-address>:0    (for tcsh)
export DISPLAY=<my-ip-address>:0    (for bash)
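Once logged in, a quick way to confirm that X forwarding is active is to check the DISPLAY variable or start a small X client. This is only a sketch; "xclock" is an example client and may not be installed on the login nodes:

# echo $DISPLAY
# xclock &

If DISPLAY is empty, log out and reconnect with the “-X” (or the trusted “-Y”) switch.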

Accessing from a Microsoft Windows Environment

You can use an SSH client such as PuTTY (www.putty.org) or X-Win32 (http://www.odu.edu/ts/software-services/xwin32). After a user's initial connection, the Remote Desktop Connection built into Windows can also be used.

Application Environment

Applications Available on Turing

● Default application installation location: /cm/shared/apps

Name | Description | Default Location
Abaqus | The Abaqus Unified FEA product suite offers powerful and complete solutions for both routine and sophisticated engineering problems covering a vast spectrum of industrial applications. | /cm/shared/apps/abaqus/6.13
CLC bio | CLC bio is the world's leading bioinformatics analysis platform, providing seamlessly integrated desktop and server software optimized for best performance. | /cm/shared/apps/CLCGenomicsWorkbench/
COMSOL | Finite element analysis, solver, and simulation software. | /cm/shared/apps/comsol/4.3b, /cm/shared/apps/comsol/4.4
CHARMM | CHARMM (Chemistry at HARvard Macromolecular Mechanics) is a versatile and widely used molecular simulation program with broad application to many-particle systems. | /cm/shared/apps/charmm/c34b1
CMake | CMake is a cross-platform, open-source build system: a family of tools designed to build, test, and package software. | /cm/shared/apps/cmake/2.8.12.2
CUDA Toolkit (Appro nodes only) | Comprehensive development environment for C and C++ developers building GPU-accelerated applications. | /cm/shared/apps/cuda50/toolkit/5.0.35
DBENCH | File system benchmark tool for testing performance. | /cm/shared/apps/dbench/4.0
ddscat | Discrete Dipole Scattering (DDSCAT) is a Fortran code for calculating scattering and absorption of light by irregular particles and periodic arrangements of irregular particles. | /cm/shared/apps/ddscat/7.3.0
FFTW | Routines to compute the discrete Fourier transform. | /cm/shared/apps/fftw/openmpi/gcc/64/3.3.3, /cm/shared/apps/fftw3/3.3.4
Gaussian | Gaussian function and structure calculation software. | /cm/shared/apps/gaussian/09/g09revB.01, /cm/shared/apps/gaussian/09/g09revD.01
GCC | GNU Compiler Collection. | /cm/shared/apps/gcc/4.8.1/, /cm/shared/apps/gcc/4.9.0/
Git | Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency. | /cm/shared/apps/git/2.1.0
Global Arrays | Library for parallel computing with arrays. | /cm/shared/apps/globalarrays/openmpi/gcc/64/5.1.1
GNU Scientific Library (GSL) | The GNU Scientific Library (GSL) is a collection of routines for numerical computing, written from scratch in C and presenting a modern API for C programmers, allowing wrappers to be written for very high-level languages. The source code is distributed under the GNU General Public License. | /cm/shared/apps/gsl/1.0, /cm/shared/apps/gsl/1.9
Grace | Grace is a WYSIWYG 2D plotting tool for the X Window System and M*tif. Grace runs on practically any Unix-like OS and has been ported to VMS, OS/2, and Win9*/NT/2000/XP (some minor functionality may be missing). | /cm/shared/apps/grace/5.1.23
GROMACS | Molecular dynamics package for chemical simulations. | /cm/shared/apps/gromacs/4.6.3, /cm/shared/apps/gromacs/4.6.6, /cm/shared/apps/gromacs/5.0
HDF5 | Data model, library, and file format for storing and managing data. | /cm/shared/apps/hdf5/1.6.10
HDF5_18 | HDF5 general-purpose library and file format for storing scientific data. | /cm/shared/apps/hdf5_18/1.8.11
HPL | Solves a random, dense linear system in double-precision arithmetic on distributed-memory computers. | /cm/shared/apps/hpl/2.1
hwloc | Portable abstraction, across OSes, versions, and architectures, of the hierarchical topology of modern architectures. | /cm/shared/apps/hwloc/1.7
Hypre | Hypre is a library for solving large, sparse linear systems of equations on massively parallel computers. | /cm/shared/apps/hypre/2.9.0b
Intel Cluster Checker | Verifies that cluster components continue working together. | /cm/shared/apps/intel-cluster-checker/2.0
Intel Cluster Runtime | Runtime libraries. | /cm/shared/apps/intel-cluster-runtime/3.5
Intel Cluster Studio | Powerful threading and correctness tools. | /cm/shared/apps/ics/2013.1.039
Intel MPI Benchmarks | MPI performance measurements for point-to-point and global communication operations. | /cm/shared/apps/imb/3.2.4
Intel Thread Building Blocks | Easily write parallel C++ programs. | /cm/shared/apps/intel-tbb-oss/intel64/41_20130314oss
IOzone | IOzone is a filesystem benchmark tool; it generates and measures a variety of file operations. | /cm/shared/apps/iozone/3_414
LAPACK | Routines for solving systems of simultaneous linear equations. | /cm/shared/apps/lapack/gcc/64/3.4.2
MATLAB R2012b | High-level language and interactive environment for numerical computation, visualization, and programming. | /cm/shared/apps/matlab/R2012b
Memtester | Userspace utility for testing the memory subsystem for faults. | /cm/shared/apps/memtester/4.3.0
METIS | METIS is a set of serial programs for partitioning graphs, partitioning finite element meshes, and producing fill-reducing orderings for sparse matrices. | /cm/shared/apps/metis/5.1.0
Mothur | This project seeks to develop a single piece of open-source, expandable software to fill the bioinformatics needs of the microbial ecology community. | /cm/shared/apps/mothur/1.25.0
MPICH | High-performance and widely portable implementation of the MPI standard. | /cm/shared/apps/mpich/ge/gcc/64/3.0.4
MPICH2 | High-performance and widely portable implementation of the MPI standard. | /cm/shared/apps/mpich2/1.5
Mpiexec | Initializes a parallel job from within a PBS batch or interactive environment. | /cm/shared/apps/mpiexec/0.84_432
MVAPICH | Open source implementation of MPI. | /cm/shared/apps/mvapich/gcc/64/1.2rc1
MVAPICH2 | Open source implementation of MPI. | /cm/shared/apps/mvapich2/gcc/64/1.9
NAMD | A parallel, object-oriented molecular dynamics code designed for high-performance simulation of large biomolecular systems. | /cm/shared/apps/namd/2.9
NetCDF | Libraries and self-describing, machine-independent data formats for array-oriented scientific data. | /cm/shared/apps/netcdf/gcc/64/4.3.0
Netperf | Provides network bandwidth testing between two hosts on a network. | /cm/shared/apps/netperf/2.6.0
Open MPI | Open source MPI-2 implementation. | /cm/shared/apps/openmpi/gcc/64/1.7.2
Open64 | Suite of optimizing compiler development tools. | /cm/shared/apps/open64/4.5.2.1
OpenBLAS | Open source optimized BLAS library. | /cm/shared/apps/openblas/dynamic/0.2.6
OpenSim | OpenSim is a freely available software system that allows you to build, exchange, and analyze musculoskeletal models and dynamic simulations of movement. | /cm/shared/apps/opensim/3.2
pplacer | pplacer places query sequences on a fixed reference phylogenetic tree to maximize phylogenetic likelihood or posterior probability according to a reference alignment. It is designed to be fast, to give useful information about uncertainty, and to offer advanced visualization and downstream analysis. | /cm/shared/apps/pplacer/1.1
Python | Python is a widely used general-purpose, high-level programming language. | /cm/shared/apps/python/2.7.6
Quantum ESPRESSO | Quantum ESPRESSO (QE) is an integrated suite of open-source computer codes for electronic-structure calculations and materials modeling at the nanoscale, based on density-functional theory, plane waves, and pseudopotentials. | /cm/shared/apps/qe/5.1
R | Software environment for statistical computing and graphics. | /cm/shared/apps/R/3.0.2
SAS | SAS (Statistical Analysis System; not to be confused with SAP) is a software suite developed by SAS Institute for advanced analytics, business intelligence, data management, and predictive analytics. | /cm/shared/apps/sas/9.3/
ScaLAPACK | Library of high-performance linear algebra routines for distributed-memory machines. | /cm/shared/apps/scalapack/gcc/1.8.0
SGE | Computing cluster workload management software. | /cm/shared/apps/sge/2011.11p1
SimBody | A SimTK toolset providing general multibody dynamics capability, that is, the ability to solve Newton's 2nd law F = ma in any set of generalized coordinates subject to arbitrary constraints. | /cm/shared/apps/simbody/3.3.1
SimTK | Provides a platform on which the biomechanics community can build a library of simulations that can be exchanged, tested, analyzed, and improved through multi-institutional collaboration. The underlying software is written in ANSI C++, and the graphical user interface (GUI) is written in Java. | /cm/shared/apps/simtk/2.1
StressCPU | CPU stress tester. | /cm/shared/apps/stresscpu/2.0
UCHIME | UCHIME is an algorithm for detecting chimeric sequences. | /cm/shared/apps/uchime/4.2.40
USEARCH | USEARCH is a unique sequence analysis tool with thousands of users world-wide. It offers search and clustering algorithms that are often orders of magnitude faster than BLAST. | /cm/shared/apps/usearch/5.2.236
VMD | Visualization and analysis of biological systems. | /cm/shared/apps/vmd/1.9.0

 


Setting the environment for jobs

Using Modules to define the job environment

Modules are packages that enable easy, dynamic modification of a user's environment variables via modulefiles.

Each modulefile contains the information needed to configure the shell for an application. Once the modules package is initialized, the environment can be modified on a per-module basis using the module command which interprets modulefiles. Typically modulefiles instruct the module command to alter or set shell environment variables such as PATH, MANPATH, etc. Modulefiles may be shared by many users on a system. Users may have their own Modules loaded and unloaded dynamically and atomically, in a clean fashion.

All popular shells are supported, including bash, ksh, zsh, sh, csh, and tcsh, as well as some scripting languages such as perl. Each user may also maintain a personal collection of modulefiles to supplement or replace the shared modulefiles.

>> Example 1: To load the default environment:

# module load /cm/shared/modulefiles/default-environment

Now you can readily access GCC compiler binaries and libraries from any directory without worrying about providing path information.

>> Example 2: In order to compile your GCC-based MPI programs, you have to set up your environment variables to point to the right MPI library for your choice of compiler:

This can be achieved using modules with the following steps:

Step 1) Check the available MPI modulefiles: “module whatis”
Step 2) Load the respective module: “module load module-name”

# module whatis
cluster-tools/6.1      : Adds cluster-tools to your environment
cmd                    : Adds the CMDaemon binaries to your path.
dot                    : adds `.' to your PATH environment variable
freeipmi/1.2.6         : adds FREEIPMI to your environment variables
mvapich2/gcc/64/1.9    : adds MVAPICH2-gcc to your environment variables
mvapich2/open64/64/1.9 : adds MVAPICH2-open64 to your environment variables
# module load mvapich2/gcc/64/1.9

 


Once the module is loaded, you can compile your MPI programs with the MPI compiler of your choice without needing to know its install location. Instead of running these commands at each login, you can have the system load a module automatically by using “module initadd module-name”, which adds the module to your .tcshrc file without manual editing. To have the module above loaded at each login, you would type:

# module initadd mvapich2/gcc/64/1.9
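As a quick, hedged check that the MVAPICH2 module loaded above is active, you can locate the MPI compiler wrapper and compile a small test program. The source file "hello.c" is a placeholder for your own code; "mpicc" is the C compiler wrapper shipped with MVAPICH2:

# which mpicc
# mpicc -o hello hello.c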

For more information on using Modules, please refer to Appendix B. Compare this method with manually editing the .tcshrc file, as described in the following section.

Using .tcshrc (.bashrc) files

To make application executables readily available from the command prompt, users must add the complete path to the respective application binaries to their .tcshrc file. The respective LD_LIBRARY_PATH must also be updated for the application to function correctly at run time. All of this requires a user to edit his/her .tcshrc file frequently.

For example: to load the GCC-based MPI compiler into a user's environment settings, the user has to add the following lines to his/her .tcshrc file.

setenv MPI_HOME /cm/shared/apps/mvapich2/gcc
set path = ( $MPI_HOME/bin $path )
set path = ( $MPI_HOME/sbin $path )
setenv LD_LIBRARY_PATH $MPI_HOME/lib:$LD_LIBRARY_PATH
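For users whose login shell is bash, a roughly equivalent set of lines for the ~/.bashrc file is sketched below; the MPI_HOME path is the same one used in the tcsh example and should be adjusted to the MPI build you actually use:

# Set the MPI installation root and put its binaries and libraries on the search paths
export MPI_HOME=/cm/shared/apps/mvapich2/gcc
export PATH=$MPI_HOME/bin:$MPI_HOME/sbin:$PATH
export LD_LIBRARY_PATH=$MPI_HOME/lib:$LD_LIBRARY_PATH

After editing, run "source ~/.bashrc" to apply the changes to the current session.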

 


With the above lines added, the user must then source his/her .tcshrc file.

# source ~/.tcshrc

Once the file has been sourced, you can compile your MPI programs with the MPI compiler of your choice. Compare this method with using modules, as described in the previous section.

Submitting Jobs using Sun Grid Engine (SGE)

Sun Grid Engine is the workload management system used on our clusters. SGE allows users to share important system resources.

Because all users are required to use SGE (see policies), managing jobs under SGE is an important skill. The minimum required tasks for managing a job under SGE are staging the job, submitting the job to SGE, and managing the job's results and output (i.e., clean-up). Additional, optional tasks are covered at the end of this section.

The staging of a job and the management of its results and output require only basic knowledge of operating system (OS) commands. In particular, expect to use the OS commands “ls”, “cp”, and “mv” at the command line, or their various equivalents in different environments (e.g., a GUI). Efficient practice of these tasks benefits from a good understanding of the system architecture as discussed elsewhere. The last step of managing your job’s results and output (or clean-up) is needed so that you can comply with usage policies, for example, with the use of scratch spaces.
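As an illustration of staging and clean-up (not a required procedure), the sketch below copies an input file to a job directory under /scratch before submission and moves the results back to /home afterwards; the directory and file names are placeholders, and only the /scratch and /home mount points come from the storage section above:

# mkdir -p /scratch/$USER/myjob
# cp ~/inputs/input.dat /scratch/$USER/myjob/
(submit and run the job)
# mv /scratch/$USER/myjob/results.dat ~/results/
# rm -rf /scratch/$USER/myjob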

Therefore, job submission is the key task that you need to understand. The SGE job submission commands are “qsub” and “qrsh”. The “qsub” command is used for batch submissions, while the “qrsh” command is for interactive use. When you are logged in, use the manpages for these commands frequently as a reference.

The minimum required argument for the SGE “qsub” command is the job script. The syntax for this basic usage is shown below (the “$ > ” sign is your OS command prompt).

$ > qsub job.script
Your job 170 ("job.script") has been submitted

In every case, you will receive a response from SGE (similar to the one shown) after you press the <Enter> key. The response shows the job ID (“170”, in this case), and it echoes the filename of your job script (assuming you did not provide other options that change the job name).
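After submission, the job can be followed with other standard SGE commands. The sketch below uses "qstat" to list your own pending and running jobs and "qdel" to remove a job by its ID; the job ID 170 is taken from the example response above:

$ > qstat -u $USER
$ > qdel 170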

Sun Grid Engine allows us to define parallel programming and runtime environments, called Parallel Environments (PEs), which allow the execution of shared-memory or distributed-memory parallelized applications. OCCS has defined a set of PEs for the Turing cluster to fit the needs of commonly used applications and their run-time environments. An individual can identify the parallel environment based on the application he/she wants to run and request the Parallel Environment (PE) in his/her job script using the switch “-pe pe_name number_of_slots”. A detailed example of a job script showing how to request a PE is discussed in the section “Job Script”.

If an individual does not need a PE, or is running a sequential job, then no PE should be requested; simply omit the switch “-pe pe_name number_of_slots” from the command line or the job script.
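For interactive work, the same PE request can be made with "qrsh". The sketch below asks for four slots in the "openmpi" parallel environment on the "main" queue; the PE and queue names come from the table in the next section, and the slot count is only an example:

$ > qrsh -q main -pe openmpi 4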

 


The following table shows the association of PEs for both Batch and Interactive Jobs.

Queue Name | Description | Parallel Environments
main | CPU computation | comsol3, g09-o, make, matlab, mpich, mpich2, openmpi, openmpi_ib, matlabMDCS, qe, SAS, grace, abaqus, blast, CLCGridWorkers
gpu | GPU computation on Appro nodes (NVIDIA Tesla M2090) | gpu, make
himem | Computational nodes with 768GB of RAM | comsol-himem, molpro-himem
phi | Co-processor computation using Intel Xeon Phi coprocessors | molpro

 


If you would like assistance with job submission, send us a brief email describing the kind of experiment you are running and we will guide you through the process.

Job Script:

A job script is an ASCII file – this implies that you can read its contents using the OS “cat” command. The file “job.script” must not be a binary executable file. Usually, a “#” at the beginning of a line indicates a comment. The special “#!” on the first line indicates the shell (please refer to the shell documentation for details). SGE uses “#$” to indicate “qsub” arguments stored within the job script file, which saves you time and the need to remember the correct “tried-and-true” options for your job.

The following is a sample job script for a sequential job:

#!/bin/tcsh
#
# Replace “a.out” with your application's binary executable filename.
# If your program accepts or requires arguments, they are listed after the name
# of the program – please refer to your application's documentation for details.
#
# For example:
# a.out output.file
#
a.out

 


To use the job script template shown, change the “a.out” file as necessary.

You can add many typical OS shell (“tcsh” in this case) commands to this script to automate tasks, including some of the staging and clean-up work. For example, for staging, you can compile the programs you need; and after “a.out” completes, you can delete files that you know will not be needed later, as sketched below.
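The following sketch expands the sequential template with a staging step (compiling a program with GCC) and a clean-up step (removing a temporary file). The names "myprog.c", "myprog", and "tmp_output.dat" are placeholders:

#!/bin/tcsh
#
# Staging: compile the program before running it
module load gcc/4.8.1
gcc -O2 -o myprog myprog.c
#
# Run the program
./myprog
#
# Clean-up: remove files that will not be needed later
rm -f tmp_output.dat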

The following is a sample job script for an Open MPI parallel job:

#!/bin/tcsh
#
# Check out the SGE options listed here
#
#$ -cwd
#$ -o MyOutPutFile.txt -j y
#$ -S /bin/tcsh
#$ -q main
#
# Modify the 6 at the end of this option to request a different number of
# parallel slots
#
#$ -pe openmpi 6
#
# Tell SGE the version of Open MPI to use
#
#$ -v openmpi="openmpi/gcc/64/1.6.4"
#
# Load the respective module file
#
module load openmpi/gcc/64/1.6.4
#
# Run the binary executable file hello.exe
# Refer to Open MPI application development on how to get hello.exe
#
mpiexec -machinefile $TMPDIR/machines -n $NSLOTS ./hello.exe

 


Note the use of “#$” to specify various SGE “qsub” options within the job script.

For other types of job scripts, we ask that you consult with us and with the specific application documentation.

There are other SGE commands that are useful, though not absolutely necessary, for using SGE; “qstat” and “qdel”, shown above, are good examples. Try them out, or check the manpage for each command.

 


Appendix A: Research Computing Acceptable Use Statement

The purpose of the following guidelines is to promote awareness of computer security issues and to ensure that ODU's computing systems are used in an efficient, ethical, and lawful manner. Users must adhere to the university computing Acceptable Usage Policy (available at http://midas.odu.edu under the security settings link). In addition, the following guidelines are established for use of the research facilities.

1. While no hard limits are set for a maximum time limit or the number of nodes, jobs requiring extraordinary resources should be discussed with the research support group prior to execution.

2. ODU cluster accounts are to be used only for university research activity. Use is not allowed for non-research or commercial activities. Unauthorized use may constitute grounds for account termination and/or legal action.

3. The research computing systems are non-classified systems. Classified information may not be processed, entered, or stored.

4. Users are responsible for protecting and archiving their own programs/data/results.

5. Users are required to report any computer security incident to the research computing group. Users shall not download, install, or run security programs or utilities to identify weaknesses in the security of a system. For example, ODU users shall not run password cracking programs.

6. Users shall not attempt to access any data or programs contained on systems for which they do not have authorization or explicit consent of the owner of the data.

7. Users must have proper authorization and licenses to install or use copyrighted programs or data. Unauthorized use can lead to account suspension, account termination and may be subject to legal action.

8. You must use your ODU login name when using research facilities and may not access another's account. Users shall not provide information to others for unauthorized access.

9. Users shall not intentionally engage in activities to: harass other users; degrade the performance of systems; deprive an authorized user access to a resource; obtain extra resources beyond those allocated; circumvent computer security measures or gain access to a system for which proper authorization has not been given; misuse batch queues or other resources in ways not authorized or intended.

 


Appendix B: Modules User Guide

a) To list the help menu for modules, type:

# module -H

Modules Release 3.2.6 2007-02-14 (Copyright GNU GPL v2 1991):

Usage: module [ switches ] [ subcommand ] [subcommand-args ]

Switches:
  -H|--help           this usage info
  -V|--version        modules version & configuration options
  -f|--force          force active dependency resolution
  -t|--terse          terse format avail and list format
  -l|--long           long format avail and list format
  -h|--human          readable format avail and list format
  -v|--verbose        enable verbose messages
  -s|--silent         disable verbose messages
  -c|--create         create caches for avail and apropos
  -i|--icase          case insensitive
  -u|--userlvl <lvl>  set user level to (nov[ice],exp[ert],adv[anced])

Available SubCommands and Args:
  + add|load          modulefile [modulefile ...]
  + rm|unload         modulefile [modulefile ...]
  + switch|swap       [modulefile1] modulefile2
  + display|show      modulefile [modulefile ...]
  + avail             [modulefile [modulefile ...]]
  + use [-a|--append] dir [dir ...]
  + unuse             dir [dir ...]
  + update
  + refresh
  + purge
  + list
  + clear
  + help              [modulefile [modulefile ...]]
  + whatis            [modulefile [modulefile ...]]
  + apropos|keyword   string
  + initadd           modulefile [modulefile ...]
  + initprepend       modulefile [modulefile ...]
  + initrm            modulefile [modulefile ...]
  + initswitch        modulefile1 modulefile2
  + initlist
  + initclear

 


b) To list Currently Loaded modules:

# module list

c) Currently available modules to load:

# module avail
# module whatis
3.2.6                      : Changes the MODULE_VERSION environment variable
dot                        : adds `.' to your PATH environment variable
mvapich2-v1.0.3/gcc/gen2   : Adds GCC Compiled MVAPICH2 v1.0.3 specific environment variables to the current environment.
mvapich2-v1.0.3/intel/gen2 : Adds Intel Compiled MVAPICH2 v1.0.3 specific environment variables to the current environment.
null                       : does absolutely nothing

 


d) How to load a module:

>> First check what your current PATH contains:
# echo $PATH
/opt/gridengine/bin/lx26-amd64:/usr/java/jdk1.5.0_10/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin

>> Now load the module:
# module load mvapich2-v1.0.3/gcc/gen2

>> Now that the module is loaded, check the path:
# echo $PATH
/opt/mvapich2/1.0.3/gcc-4.1.2/gen2/bin:/opt/gridengine/bin/lx26-amd64:/usr/java/jdk1.5.0_10/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin

Note that the environment settings for MVAPICH2 are added ahead of all other path entries, so they will be found first in the path search.

>> !!!! Important !!!!
To load another version of MVAPICH2 you first have to unload the previously loaded version of MVAPICH2. The following example shows what happens when you try to load a module that conflicts with another module:

# module load mvapich2-v1.0.3/intel/gen2
mvapich2-v1.0.3/intel/gen2(24):ERROR:150: Module 'mvapich2-v1.0.3/intel/gen2' conflicts with the currently loaded module(s) 'mvapich2-v1.0.3/gcc/gen2'
mvapich2-v1.0.3/intel/gen2(24):ERROR:102: Tcl command execution failed: conflict mvapich2-v1.0.3

>> To avoid the above error, unload the previously loaded conflicting module and then load the new module:
# module unload mvapich2-v1.0.3/gcc/gen2
# module load mvapich2-v1.0.3/intel/gen2

 


e) Help on a particular modulefile

# module help mvapich2-v1.0.3/gcc/gen2

----------- Module Specific Help for 'mvapich2-v1.0.3/gcc/gen2' -------------

mvapich2-v1.0.3/gcc/gen2 - Load this module ...
1) To add MVAPICH2 v1.0.3 specific PATH & LD_LIBRARY_PATH variables to the current environment
2) To compile your programs written using MPI.
3) In your job submit script to run MPI jobs in parallel.

f) To show the environment changes made by a specific module:

# module show mvapich2-v1.0.3/intel/gen2
-------------------------------------------------------------------
/usr/local/Modules/3.2.6/modulefiles/mvapich2-v1.0.3/intel/gen2:

module-whatis    Adds Intel Compiled MVAPICH2 v1.0.3 specific environment variables to the current environment.
conflict         mvapich2-v1.0.3
setenv           MPI_HOME /opt/mvapich2/1.0.3/intel-10.1/gen2
prepend-path     PATH /opt/mvapich2/1.0.3/intel-10.1/gen2/bin
prepend-path     MANPATH /opt/mvapich2/1.0.3/intel-10.1/gen2/man
prepend-path     LD_LIBRARY_PATH /opt/mvapich2/1.0.3/intel-10.1/gen2/lib

 


Appendix C: Installed Application Documentation

ACML 5.3.1

Application Path: /cm/shared/apps/acml/5.3.1

Documentation: http://developer.amd.com/tools-and-sdks/cpu-development/amd-core-math-library-acml/

How to Use:

Run ‘module avail’ and choose the module corresponding to your compiler. Then, run `module load acml/COMPILER/64/5.3.1` (where COMPILER is gcc or open64) to load the environment.

BLACS 1.1

Application Path: /cm/shared/apps/blacs/openmpi/gcc/1.1patch03/

Documentation: http://www.netlib.org/blacs/

How to Use:

Run ‘module avail’ and choose the module corresponding to your compiler. Then, run `module load blacs/COMPILER/64/1.1patch03` (where COMPILER is gcc or open64) to load the environment.

BLAS 1.0

Application Path: /cm/shared/apps/blas/gcc/1

Documentation: http://www.netlib.org/blas/

How to Use:

Run ‘module avail’ and choose the module corresponding to your compiler. Then, run `module load blas/COMPILER/64/1` (where COMPILER is gcc or open64) to load the environment.

Bonnie++ 1.97.1

Application Path: /cm/shared/apps/bonnie++/1.97.1

Documentation: http://www.coker.com.au/bonnie++/readme.html

How to Use:

Run `module load bonnie++/1.97.1` to load the environment.

COMSOL 4.3b

Application Path: /cm/shared/apps/comsol/4.3b

Documentation: http://www.comsol.com/support/download/4.3b/

How to Use:

Run comsol-43b or /usr/local/bin/comsol-43b if your PATH does not include /usr/local/bin.

CUDA Toolkit 5.0.35

Application Path: /cm/shared/apps/cuda50/toolkit/5.0.35

Documentation: http://docs.nvidia.com/cuda/

How to Use:

Run `module load cuda50/toolkit/5.0.35` to load the environment.

DBENCH 4.0

Application Path: /cm/shared/apps/dbench/4.0

Documentation: https://dbench.samba.org/doc/dbench.1.html

 


How to Use:

Run ‘/cm/shared/apps/dbench/4.0/dbench’.

FFTW 3.3.3

Application Path: /cm/shared/apps/fftw/openmpi/gcc/64/3.3.3

Documentation: http://www.fftw.org/doc/

How to Use:

Run `module load fftw3/openmpi/gcc/64/3.3.3` to load the environment. The FFTW3 libraries will then be available to use by other applications.
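As a hedged example of building against the loaded FFTW3 environment, a program that calls the FFTW3 API can be compiled and linked as sketched below. The file "my_fft.c" is a placeholder, and -lfftw3/-lm are the standard FFTW3 link flags; depending on how the module sets compiler search paths, you may also need to point -I and -L at the FFTW installation directory listed above:

# module load fftw3/openmpi/gcc/64/3.3.3
# gcc my_fft.c -lfftw3 -lm -o my_fft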

Gaussian g09

Application Path: /cm/shared/apps/gaussian/09/g09revB.01

Documentation: http://www.gaussian.com/g_prod/g09.htm

How to Use:

Run one of the g09 scripts located at /usr/local/bin/.

GCC 4.8.1

Application Path: /cm/shared/apps/gcc/4.8.1/

Documentation: http://gcc.gnu.org/onlinedocs/

How to Use:

Run `module load gcc/4.8.1` to load the environment.

Global Arrays 5.1.1

Application Path: /cm/shared/apps/globalarrays/openmpi/gcc/64/5.1.1

Documentation: http://hpc.pnl.gov/globalarrays/

How to Use:

Run ‘module avail’ and choose the module corresponding to your compiler. Then, run `module load globalarrays/openmpi/COMPILER/64/5.1.1` (where COMPILER is gcc or open64) to load the environment.

GROMACS 4.6.3

Application Path: /cm/shared/apps/gromacs/4.6.3

Documentation: http://www.gromacs.org/Documentation

How to Use:

Run `module load gromacs/4.6.3` to load the environment. Use `grompp` or `mdrun` binaries to run single process jobs. Use `grompp_mpi` or `mdrun_mpi` binaries to run MPI jobs.
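A hedged sketch of a typical GROMACS 4.6 workflow follows; the input files (md.mdp, conf.gro, topol.top) are placeholders, and the MPI launch line assumes an SGE job in which $TMPDIR/machines and $NSLOTS are set as in the Open MPI job script example earlier in this document:

# module load gromacs/4.6.3
# grompp -f md.mdp -c conf.gro -p topol.top -o topol.tpr
# mpiexec -machinefile $TMPDIR/machines -n $NSLOTS mdrun_mpi -s topol.tpr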

HDF5 1.6.10

Application Path: /cm/shared/apps/hdf5/1.6.10

Documentation: http://www.hdfgroup.org/HDF5/doc/

How to Use:

Run `module load hdf5/1.6.10` to load the environment.

HDF5_18 1.8.11

Application Path: /cm/shared/apps/hdf5_18/1.8.11

Documentation: http://www.hdfgroup.org/HDF5/doc/

 


How to Use:

Run `module load hdf5_18/1.8.11` to load the environment.

HPL 2.1

Application Path: /cm/shared/apps/hpl/2.1

Documentation: http://www.netlib.org/benchmark/hpl/documentation.html

How to Use:

Run `module load hpl/2.1` to load the environment.

Hwloc 1.7

Application Path: /cm/shared/apps/hwloc/1.7

Documentation: http://www.open-mpi.org/projects/hwloc/

How to Use:

Run `module load hwloc/1.7` to load the environment.

Intel Cluster Studio 2013

Application Path: /cm/shared/apps/ics/2013.1.039

Documentation: http://software.intel.com/en-us/intel-cluster-studio-xe

How to Use:

Run `module load intel/ics/base` to load the environment.

Intel MPI Benchmarks 3.2.4

Application Path: /cm/shared/apps/imb/3.2.4

Documentation: http://software.intel.com/en-us/articles/intel-mpi-benchmarks

How to Use:

This directory includes IMB.c and IMB.h files. To add it to your environment, run setup.sh. This will create a folder called BenchMarks in your home directory.

Intel Cluster Checker 2.0

Application Path: /cm/shared/apps/intel-cluster-checker/2.0

Documentation: http://software.intel.com/en-us/cluster-ready

How to Use:

Run `module load intel-cluster-checker/2.0` to load the environment.

Intel Cluster Runtime 3.5

Application Path: /cm/shared/apps/intel-cluster-runtime/3.5

Documentation: http://software.intel.com/en-us/intel-cluster-studio-xe

How to Use:

Run `module load intel-cluster-runtime/intel64/3.5` to load the environment.

Intel Thread Building Blocks 2013

Application Path: /cm/shared/apps/intel-tbb-oss/intel64/41_20130314oss

Documentation: http://software.intel.com/en-us/intel-tbb

 


How to Use:

Run `module load intel-tbb-oss/intel64/41_20130314oss` to load the environment.

IOzone 3.414

Application Path: /cm/shared/apps/iozone/3_414

Documentation: http://www.iozone.org/docs/IOzone_msword_98.pdf

How to Use:

Run `module load iozone/3_414` to load the environment.

LAPACK 3.4.2

Application Path: /cm/shared/apps/lapack/gcc/64/3.4.2

Documentation: http://www.netlib.org/lapack/#_documentation

How to Use:

Run ‘module avail’ and choose the module corresponding to your compiler. Then, run `module load lapack/COMPILER/64/3.4.2` (where COMPILER is gcc or open64) to load the environment.

MATLAB R2012b

Application Path: /cm/shared/apps/matlab/R2012b

Documentation: http://www.mathworks.com/products/new_products/release2012b.html

How to Use: Run `/usr/local/bin/matlabx`.
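For non-interactive runs (for example inside an SGE job script), MATLAB's standard batch switches can be used. This is a sketch: "myscript" is a placeholder for a MATLAB script on your MATLAB path, and the binary location assumes the usual MathWorks install layout under the application path above:

# /cm/shared/apps/matlab/R2012b/bin/matlab -nodisplay -nosplash -r "myscript; exit"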

Memtester 4.3.0

Application Path: /cm/shared/apps/memtester/4.3.0

Documentation: http://pyropus.ca/software/memtester/

How to Use:

Run `/cm/shared/apps/memtester/4.3.0/memtester`.

MPICH 3.0.4

Application Path: /cm/shared/apps/mpich/ge/gcc/64/3.0.4

Documentation: http://www.mpich.org/documentation/guides/

How to Use:

Run `module load mpich/ge/COMPILER/64/3.0.4` (where COMPILER is open64 or gcc) to load the environment.

Mpiexec 0.84

Application Path: /cm/shared/apps/mpiexec/0.84_432

Documentation: https://www.osc.edu/~djohnson/mpiexec/#Description

How to Use:

Run `module load mpiexec/0.84_432` to load the environment.

MVAPICH 1.2

Application Path: /cm/shared/apps/mvapich/gcc/64/1.2rc1

Documentation: http://mvapich.cse.ohio-state.edu/overview/mvapich/

 


How to Use:

Run `module load mvapich/COMPILER/64/1.2rc1` (where COMPILER is open64 or gcc) to load the environment.

MVAPICH2 1.9

Application Path: /cm/shared/apps/mvapich2/gcc/64/1.9

Documentation: http://mvapich.cse.ohio-state.edu/overview/mvapich2/

How to Use:

Run `module load mvapich2/COMPILER/64/1.9` (where COMPILER is open64 or gcc) to load the environment.

NetCDF 4.3.0

Application Path: /cm/shared/apps/netcdf/gcc/64/4.3.0

Documentation: http://www.unidata.ucar.edu/software/netcdf/docs/

How to Use:

Run `module load netcdf/COMPILER/64/4.3.0` (where COMPILER is open64 or gcc) to load the environment.

Netperf 2.6.0

Application Path: /cm/shared/apps/netperf/2.6.0

Documentation: http://www.netperf.org/netperf/NetperfPage.html

How to Use:

Run `module load netperf/2.6.0` to load the environment.

Open64 4.5.2.1

Application Path: /cm/shared/apps/open64/4.5.2.1

Documentation: http://www.open64.net/documentation/installing-open64-424/introduction.html

How to Use:

Run `module load open64/4.5.2.1` to load the environment.

OpenBLAS 0.2.6

Application Path: /cm/shared/apps/openblas/dynamic/0.2.6

Documentation: http://www.openblas.net/

How to Use:

Run `module load openblas/dynamic/0.2.6` to load the environment.

Open MPI 1.7.2

Application Path: /cm/shared/apps/openmpi/gcc/64/1.7.2

Documentation: http://www.open-mpi.org/doc/

How to Use:

Run `module load openmpi/gcc/64/1.7.2-lmnx-ofed` to load the environment.

R 3.0.2

Application Path: /cm/shared/apps/R/3.0.2

Documentation: http://www.r-project.org/

 


How to Use:

Run `/cm/shared/apps/R/3.0.2/bin/R` or create an SGE script that references this location.
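For batch execution inside a job script, R's standard non-interactive front ends can be used. This is a sketch; "analysis.R" is a placeholder script name:

# /cm/shared/apps/R/3.0.2/bin/R CMD BATCH analysis.R
# /cm/shared/apps/R/3.0.2/bin/Rscript analysis.R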

ScaLAPACK 1.8.0

Application Path: /cm/shared/apps/scalapack/gcc/1.8.0

Documentation: http://www.netlib.org/scalapack/#_documentation

How to Use:

Run `module load scalapack/COMPILER/64/1.8.0` (where COMPILER is open64 or gcc) to load the environment.

SGE 2011.11p1

Application Path: /cm/shared/apps/sge/2011.11p1

Documentation: http://gridscheduler.sourceforge.net/

How to Use:

Run `module load sge/2011.11p1` to load the environment.

StressCPU 2.0

Application Path: /cm/shared/apps/stresscpu/2.0

Documentation: http://www.gromacs.org/Downloads/User_contributions/Other_software

How to Use:

Run `/cm/shared/apps/stresscpu/2.0/stresscpu2`.

VMD 1.9.0

Application Path: /cm/shared/apps/vmd/1.9.0

Documentation: http://www.ks.uiuc.edu/Research/vmd/current/docs.html

How to Use:

Run `module load vmd/1.9.0` to load the environment. Run the `vmd` command with X11 forwarding to start graphical mode. Run `vmd -dispdev text` to run in console mode.

 
