Author: jose-maria-silveira-neto

Posted on 16-Apr-2017


High-Performance Computing and OpenSolaris

Silveira Neto

Sun Campus Ambassador

Federal University of Ceará

ParGO - Parallelism, Graphs, and Combinatorial Optimization Research Group

Agenda

Why should programs run faster?

How can programs run faster?

High Performance Computing

Motivation

Computer Models

Approaches

OpenSolaris

What it is, advantages, and tools.

Application Area Share

Stats from Top500.org: application area share for November 2007.

Computational fluid dynamics

Finite Element Analysis

Serial Computation

Single computer, single CPU.

Problem broken into a discrete series of instructions.

One instruction executed at a time.

(diagram: a problem decomposed into a stream of instructions fed one by one into a single CPU)

Parallel Computing

(diagram: a problem split into parts, with each part's instructions executed simultaneously on a different CPU)

Parallel Computing

Simultaneous use of multiple compute resources to solve a computational problem.

Compute resources can include

Single computer with multiple processors

Multiple computers connected by a network

or both

Flynn's Taxonomy

Instruction or Data

Single or Multiple

SISD

Single Instruction, Single Data

SIMD

Single Instruction, Multiple Data

MISD

Multiple Instruction, Single Data

MIMD

Multiple Instruction, Multiple Data

SISD

Single Instruction, Single Data

CPU: LOAD A, LOAD B, C = A+B, STORE C

SIMD

Single Instruction, Multiple Data

CPU 1: LOAD A[0], LOAD B[0], C[0] = A[0]+B[0], STORE C[0]

CPU 2: LOAD A[1], LOAD B[1], C[1] = A[1]+B[1], STORE C[1]

CPU n: LOAD A[n], LOAD B[n], C[n] = A[n]+B[n], STORE C[n]

MISD

Multiple Instruction, Single Data

CPU 1: LOAD A[0], C[0] = A[0] * 1, STORE C[0]

CPU 2: LOAD A[1], C[1] = A[1] * 2, STORE C[1]

CPU n: LOAD A[n], C[n] = A[n] * n, STORE C[n]

MIMD

Multiple Instruction, Multiple Data

CPU 1: LOAD A[0], C[0] = A[0] * 1, STORE C[0]

CPU 2: X = sqrt(2), C[1] = A[1] * X, method(C[1]);

CPU n: something(), W = C[n]**X, C[n] = 1/W

Parallel Programming Models

Shared Memory

Threads

Message Passing

Data Parallel

Hybrid

Threads Model

(diagram: multiple CPUs/threads all sharing a single memory)

Message Passing Model

(diagram: each CPU has its own private memory; a value such as x=10 is exchanged between processes as a message)

Hybrid Model

(diagram: shared-memory nodes, each with several CPUs, exchanging messages such as x=10 across the network)

Amdahl's Law

Potential speedup = 1 / (1 - P), where P is the fraction of code that can be parallelized.

Counting the processors: speedup = 1 / (P/N + S), where P is the parallel fraction, N the number of processors, and S = 1 - P the serial fraction.

Integration

(figure: the area under a curve between points a and b, approximated by summing small slices; labels a, b, c from the original diagram)

Pi calculation

Madhava of Sangamagrama (1350-1425): pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...

MPI

Message Passing Interface

A specification for message-passing libraries.

Implementations allow many computers to communicate with one another; widely used on computer clusters.

MPI Simplest Example

$ mpicc hello.c -o hello
$ mpirun -np 5 hello

MPI_Send

MPI_Recv

OpenMP

Open Multi-Processing

An API for multi-platform shared-memory multiprocessing programming in C, C++, and Fortran, on many architectures.

Available in GCC since version 4.2.

C/C++/Fortran

Creating an OpenMP Thread

$ gcc -fopenmp hello.c -o hello
$ ./hello

Creating multiple OpenMP Threads

Pthreads

POSIX Threads

POSIX standard for threads

Defines an API for creating and manipulating threads

Pthreads

$ g++ -o threads threads.cpp -lpthread
$ ./threads

Java Threads

java.lang.Thread

java.util.concurrent

What is OpenSolaris?

Open development effort based on the source code for the Solaris Operating System.

Collection of source bases (consolidations) and projects.

OpenSolaris

ZFS

DTrace

Containers

HPC Tools

opensolaris.org/os/community/hpcdev/

DTrace for Open MPI

Sun HPC ClusterTools

Comprehensive set of capabilities for parallel computing.

Integrated toolkit that allows developers to create and tune MPI applications that run on high performance clusters and SMPs.

sun.com/clustertools

Academia, industry, and the market

References

Introduction to Parallel Computing, Blaise Barney, Livermore Computing, 2007. https://computing.llnl.gov/tutorials/parallel_comp

Flynn, M., "Some Computer Organizations and Their Effectiveness," IEEE Trans. Comput., Vol. C-21, pp. 948-960, 1972.

Top500 Supercomputer sites, Application Area share for 11/2007, http://www.top500.org/charts/list/30/apparea

Wikipedia's article on Finite Element Analysis, http://en.wikipedia.org/wiki/Finite_element_analysis

Wikipedia's article on Computational Fluid Dynamics, http://en.wikipedia.org/wiki/Computational_fluid_dynamics

Slides: An Introduction to OpenSolaris, Peter Karlson.

Wikipedia's article on OpenMP, http://en.wikipedia.org/wiki/Openmp

References

Wikipedia's article on MPI, http://en.wikipedia.org/wiki/Message_Passing_Interface

Wikipedia's article on PThreads, http://en.wikipedia.org/wiki/Pthreads

Wikipedia's article on Madhava of Sangamagrama, http://en.wikipedia.org/wiki/Madhava_of_Sangamagrama

Distributed Systems Programming, Heriot-Watt University http://www.macs.hw.ac.uk/~hamish/DSP/topic29.html

Thank you!

José Maria Silveira Neto
Sun Campus Ambassador
[email protected]
http://silveiraneto.net

open artwork and icons by chandan:
http://blogs.sun.com/chandan
