CS 240A: Models of parallel programming: Machines, languages, and complexity measures


Page 1: CS 240A: Models  of parallel programming: Machines, languages, and complexity measures

CS 240A:

Models of parallel programming:

Machines, languages, and complexity measures

Page 2: CS 240A: Models  of parallel programming: Machines, languages, and complexity measures

Generic Parallel Machine Architecture

• Key architecture question: Where and how fast are the interconnects?

• Key algorithm question: Where is the data?

[Figure: the storage hierarchy of a generic parallel machine. Each processor has its own cache, L2 cache, L3 cache, and memory; potential interconnects can join the processors at any of these levels.]

Page 3: CS 240A: Models  of parallel programming: Machines, languages, and complexity measures

Parallel programming languages

• Many have been invented; there is much less consensus on which languages are best than in the sequential world.

• We could fill a whole course with them; we'll look at just a few.

Languages you'll use in homework:

• C with MPI (very widely used, very old-fashioned)
• Cilk++ (a newer upstart)

• Use any language you like for the final project!

Page 4: CS 240A: Models  of parallel programming: Machines, languages, and complexity measures

Some models of parallel computation

Computational model                          Languages
• Shared memory                              • Cilk, OpenMP, Pthreads, …
• SPMD / Message passing                     • MPI
• SIMD / Data parallel                       • Cuda, Matlab, OpenCL, …
• Partitioned global address space (PGAS)    • UPC, CAF, Titanium
• Hybrids …                                  • ???

Page 5: CS 240A: Models  of parallel programming: Machines, languages, and complexity measures

Simple example: Sum f(A[i]) from i=1 to i=n

• Parallel decomposition:
  • Each evaluation of f and each partial sum is a task
  • Assign n/p numbers to each of p processes (see the sketch after this list)
    • each computes independent "private" results and a partial sum
    • one (or all) collects the p partial sums and computes the global sum

Classes of Data:
• (Logically) Shared
  • the original n numbers, the global sum
• (Logically) Private
  • the individual function values
  • what about the individual partial sums?
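A minimal C sketch of this decomposition, assuming a simple block partition in which process number rank (of p processes) owns one contiguous chunk of A; the function and variable names here are illustrative, not from the slides:

  /* Each of p processes owns a contiguous block of A and computes a
     private partial sum.  How the p partial sums are combined depends
     on the programming model (shared memory, message passing, ...)
     shown on the following slides. */
  double partial_sum(const double *A, int n, int rank, int p,
                     double (*f)(double)) {
      int lo = (int)((long long)n * rank / p);        /* start of my block */
      int hi = (int)((long long)n * (rank + 1) / p);  /* one past my block */
      double local = 0.0;                             /* logically private */
      for (int i = lo; i < hi; i++)
          local += f(A[i]);
      return local;        /* the p partial sums still have to be combined */
  }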

Page 6: CS 240A: Models  of parallel programming: Machines, languages, and complexity measures

Programming Model 1: Shared Memory

• Program is a collection of threads of control.
  • Can be created dynamically, mid-execution, in some languages
• Each thread has a set of private variables, e.g., local stack variables
• Also a set of shared variables, e.g., static variables, shared common blocks, or the global heap.
• Threads communicate implicitly by writing and reading shared variables.
• Threads coordinate by synchronizing on shared variables

[Figure: threads P0, P1, ..., Pn all read and write a shared memory holding s (e.g., s = ... and y = ..s...), while each thread keeps private variables such as i (here 2, 5, and 8) in its own private memory.]

Page 7: CS 240A: Models  of parallel programming: Machines, languages, and complexity measures

Shared Memory Code for Computing a Sum

static int s = 0;

Thread 1                         Thread 2
for i = 0, n/2-1                 for i = n/2, n-1
    s = s + f(A[i])                  s = s + f(A[i])

• Problem: a race condition on variable s in the program
• A race condition or data race occurs when:
  - two processors (or two threads) access the same variable, and at least one does a write
  - the accesses are concurrent (not synchronized), so they could happen simultaneously

Page 8: CS 240A: Models  of parallel programming: Machines, languages, and complexity measures

Shared Memory Code for Computing a Sum

static int s = 0;

Thread 1                                Thread 2
…                                       …
compute f(A[i]) and put in reg0         compute f(A[i]) and put in reg0
reg1 = s                                reg1 = s
reg1 = reg1 + reg0                      reg1 = reg1 + reg0
s = reg1                                s = reg1
…                                       …

• Suppose s=27, f(A[i])=7 on Thread 1 and f(A[i])=9 on Thread 2
• For this program to work, s should be 43 at the end
  • but it may be 43, 34, or 36
• The atomic operations are reads and writes

[Diagram: starting from s=27 with contributions 7 and 9, different interleavings of the register-level operations leave s = 34 or 36 instead of 43.]

(A Pthreads rendering of this racy code follows below.)
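A minimal Pthreads sketch of the racy pattern above. The slides' pseudocode is language-neutral, so the use of Pthreads, the array size, and the choice of f here are illustrative:

  #include <pthread.h>

  #define N 1000000
  static double A[N];
  static double s = 0.0;                 /* shared, updated without synchronization */

  static double f(double x) { return x * x; }   /* stand-in for f */

  /* Each thread sums half of A directly into the shared s.  The update
     s = s + f(A[i]) is a load, an add, and a store, so the two threads
     can interleave and lose updates: a data race. */
  static void *worker(void *arg) {
      long t = (long)arg;                /* thread id: 0 or 1 */
      long lo = t * (N / 2), hi = lo + N / 2;
      for (long i = lo; i < hi; i++)
          s = s + f(A[i]);               /* RACE: unsynchronized read-modify-write */
      return NULL;
  }

  int main(void) {
      pthread_t t0, t1;
      pthread_create(&t0, NULL, worker, (void *)0);
      pthread_create(&t1, NULL, worker, (void *)1);
      pthread_join(t0, NULL);
      pthread_join(t1, NULL);
      return 0;                          /* s may be wrong: some updates can be lost */
  }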

Page 9: CS 240A: Models  of parallel programming: Machines, languages, and complexity measures

Improved Code for Computing a Sum

static int s = 0;

Thread 1                              Thread 2
local_s1 = 0                          local_s2 = 0
for i = 0, n/2-1                      for i = n/2, n-1
    local_s1 = local_s1 + f(A[i])         local_s2 = local_s2 + f(A[i])
s = s + local_s1                      s = s + local_s2

• Since addition is associative, it's OK to rearrange the order
• Most computation is on private variables
  - Sharing frequency is also reduced, which might improve speed
  - But there is still a race condition on the update of shared s
  - The race condition can be fixed by adding locks
  - Only one thread can hold a lock at a time; others wait for it

With locks, each thread brackets its update of the shared s (a runnable Pthreads version follows below):

static lock lk;

lock(lk);                             lock(lk);
s = s + local_s1                      s = s + local_s2
unlock(lk);                           unlock(lk);
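A Pthreads sketch of this improved code, assuming a pthread_mutex_t plays the role of the lock lk (the array size and f are again illustrative):

  #include <pthread.h>

  #define N 1000000
  static double A[N];
  static double s = 0.0;                                  /* shared global sum */
  static pthread_mutex_t lk = PTHREAD_MUTEX_INITIALIZER;  /* the lock "lk"     */

  static double f(double x) { return x * x; }             /* stand-in for f */

  /* Each thread accumulates into a private local sum and holds the lock
     only for the single update of the shared s, so most of the work is
     on private data and the race on s is gone. */
  static void *worker(void *arg) {
      long t = (long)arg;
      long lo = t * (N / 2), hi = lo + N / 2;
      double local = 0.0;                                 /* private partial sum */
      for (long i = lo; i < hi; i++)
          local += f(A[i]);
      pthread_mutex_lock(&lk);
      s = s + local;                                      /* critical section */
      pthread_mutex_unlock(&lk);
      return NULL;
  }

  int main(void) {
      pthread_t t0, t1;
      pthread_create(&t0, NULL, worker, (void *)0);
      pthread_create(&t1, NULL, worker, (void *)1);
      pthread_join(t0, NULL);
      pthread_join(t1, NULL);
      return 0;                                           /* s now holds the full sum */
  }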

Page 10: CS 240A: Models  of parallel programming: Machines, languages, and complexity measures

Shared memory programming model

• Mostly used for machines with small numbers of processors.

• We won’t use this model in homework

• OpenMP (a relatively new standard)

• Tutorial at http://www.llnl.gov/computing/tutorials/openMP/
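For reference, the same sum in OpenMP can be written with a reduction clause, which gives each thread a private copy of s and combines the copies at the end of the loop. A minimal sketch; the array size and f are illustrative:

  #include <omp.h>

  #define N 1000000
  static double A[N];

  static double f(double x) { return x * x; }   /* stand-in for f */

  int main(void) {
      double s = 0.0;
      /* reduction(+:s): each thread accumulates into a private copy of s;
         OpenMP sums the copies when the parallel loop finishes. */
      #pragma omp parallel for reduction(+:s)
      for (int i = 0; i < N; i++)
          s += f(A[i]);
      return 0;
  }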

Page 11: CS 240A: Models  of parallel programming: Machines, languages, and complexity measures

Machine Model 1: Shared Memory

[Figure: processors P1, P2, ..., Pn, each with its own cache ($), connected by a network/bus to a single shared memory.]

• Processors all connected to a large shared memory.
  • Typically called Symmetric Multiprocessors (SMPs)
  • Sun, HP, Intel, IBM SMPs (nodes of DataStar)
  • Multicore chips
• "Local" memory is not (usually) part of the hardware abstraction.
• Difficulty scaling to large numbers of processors
  • < 32 processors typical
• Advantage: uniform memory access (UMA)
• Cost: much cheaper to access data in cache than main memory.


Page 12: CS 240A: Models  of parallel programming: Machines, languages, and complexity measures

Programming Model 2: Message Passing

• Program consists of a collection of named processes.
  • Usually fixed at program startup time
  • Thread of control plus local address space -- NO shared data.
  • Logically shared data is partitioned over local processes.
• Processes communicate by explicit send/receive pairs
  • Coordination is implicit in every communication event.

[Figure: processes P0, P1, ..., Pn, each with its own private memory holding its own copies of s and i (e.g., s: 12, 14, 11); they communicate over a network with explicit operations such as send P1,s and receive Pn,s, and compute with statements like y = ..s...]

Page 13: CS 240A: Models  of parallel programming: Machines, languages, and complexity measures

Computing s = A[1]+A[2] on each processor

° First possible solution – what could go wrong?

Processor 1                       Processor 2
xlocal = A[1]                     xlocal = A[2]
send xlocal, proc2                receive xremote, proc1
receive xremote, proc2            send xlocal, proc1
s = xlocal + xremote              s = xlocal + xremote

° Second possible solution

Processor 1                       Processor 2
xlocal = A[1]                     xlocal = A[2]
send xlocal, proc2                send xlocal, proc1
receive xremote, proc2            receive xremote, proc1
s = xlocal + xremote              s = xlocal + xremote

° If send/receive acts like the telephone system? The post office?
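In MPI, one safe way to write this exchange is MPI_Sendrecv, which pairs the send and the receive so the code works whether or not the system buffers messages. A minimal sketch, meant to run with exactly two processes; the tag and array contents are illustrative:

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv) {
      double A[3] = {0.0, 1.0, 2.0};      /* A[1] and A[2] as on the slide */
      double xlocal, xremote, s;
      int rank;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      int partner = 1 - rank;             /* rank 0 plays Processor 1, rank 1 plays Processor 2 */
      xlocal = (rank == 0) ? A[1] : A[2];

      /* Combined send+receive: no deadlock even if a blocking send must
         wait for the matching receive ("telephone" rather than "post
         office" semantics). */
      MPI_Sendrecv(&xlocal, 1, MPI_DOUBLE, partner, 0,
                   &xremote, 1, MPI_DOUBLE, partner, 0,
                   MPI_COMM_WORLD, MPI_STATUS_IGNORE);

      s = xlocal + xremote;               /* both ranks now hold A[1]+A[2] */
      printf("rank %d: s = %g\n", rank, s);

      MPI_Finalize();
      return 0;
  }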

Page 14: CS 240A: Models  of parallel programming: Machines, languages, and complexity measures

Message-passing programming model

• One of the two main models you will program in for class

• Our version: MPI (has become the de facto standard)

• A least common denominator based on mid-80s technology

• Links to documentation on course home page
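As a first taste of MPI, here is a hedged sketch of the running sum example: each process sums its own block of the data and MPI_Reduce combines the partial sums on rank 0. The block size, data, and f are illustrative:

  #include <mpi.h>
  #include <stdio.h>

  #define NLOCAL 1000                   /* each process's share of the n numbers */

  static double f(double x) { return x * x; }   /* stand-in for f */

  int main(int argc, char **argv) {
      double A_local[NLOCAL] = {0};     /* this process's block of A */
      double local = 0.0, global = 0.0;
      int rank;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      for (int i = 0; i < NLOCAL; i++)  /* private partial sum */
          local += f(A_local[i]);

      /* Combine the p partial sums; rank 0 collects the global sum. */
      MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

      if (rank == 0)
          printf("sum = %g\n", global);

      MPI_Finalize();
      return 0;
  }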

Page 15: CS 240A: Models  of parallel programming: Machines, languages, and complexity measures

Machine Model 2: Distributed Memory Cluster

• Cray T3E, IBM SP2
• IBM SP-4 (DataStar), Cluster2, and Earth Simulator are distributed memory machines, but the nodes are SMPs.
• Each processor has its own memory and cache but cannot directly access another processor's memory.
• Each "node" has a network interface (NI) for all communication and synchronization.

[Figure: processors P0, P1, ..., Pn, each with its own memory and network interface (NI), connected by an interconnect.]

Page 16: CS 240A: Models  of parallel programming: Machines, languages, and complexity measures

Programming Model 4: Data Parallel

• Single thread of control consisting of parallel operations.
• Parallel operations applied to all (or a defined subset) of a data structure, usually an array
  • Communication is implicit in parallel operators
  • Elegant and easy to understand and reason about
  • Matlab and APL are sequential data-parallel languages
  • Matlab*P / Star-P: data-parallel version of Matlab
• Drawbacks:
  • Not all problems fit this model
  • Difficult to map onto coarse-grained machines

A = array of all data
fA = f(A)
s = sum(fA)

[Figure: the array A is mapped elementwise through f to fA, which is then reduced by sum to the scalar s.]

Page 17: CS 240A: Models  of parallel programming: Machines, languages, and complexity measures

Machine Model 4a: SIMD System

• A large number of (usually) small processors.
  • A single "control processor" issues each instruction.
  • Each processor executes the same instruction.
  • Some processors may be turned off on some instructions.
• Machines not popular (CM2, Maspar), but the programming model is
  • implemented by mapping n-fold parallelism to p processors
  • mostly done in the compilers (HPF = High Performance Fortran), but it's hard

[Figure: a control processor broadcasts each instruction over an interconnect to many processors, each with its own memory and network interface (NI).]

Page 18: CS 240A: Models  of parallel programming: Machines, languages, and complexity measures

Machine Model 4b: Vector Machine

• Vector architectures are based on a single processor
  • Multiple functional units
  • All performing the same operation
  • Highly pipelined
• Historically important
  • Overtaken by MPPs in the 90s
• Re-emerging in recent years
  • At a large scale in the Earth Simulator (NEC SX6) and Cray X1
  • At a small scale in SIMD media extensions to microprocessors (see the sketch after this list)
    • SSE, SSE2 (Intel: Pentium/IA64)
    • Altivec (IBM/Motorola/Apple: PowerPC)
    • VIS (Sun: Sparc)
• Key idea: the compiler does some of the difficult work of finding parallelism, so the hardware doesn't have to
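A small C sketch of those SIMD media extensions, using SSE intrinsics to add four single-precision floats per instruction (the intrinsics are standard SSE; the loop and names are illustrative):

  #include <xmmintrin.h>    /* SSE intrinsics: __m128, _mm_add_ps, ... */

  #define N 1024            /* assume N is a multiple of 4 for brevity */

  /* c[i] = a[i] + b[i], four elements per vector instruction. */
  void vec_add(const float *a, const float *b, float *c) {
      for (int i = 0; i < N; i += 4) {
          __m128 va = _mm_loadu_ps(&a[i]);  /* load 4 floats */
          __m128 vb = _mm_loadu_ps(&b[i]);
          __m128 vc = _mm_add_ps(va, vb);   /* 4 adds in one instruction */
          _mm_storeu_ps(&c[i], vc);         /* store 4 results */
      }
  }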

Page 19: CS 240A: Models  of parallel programming: Machines, languages, and complexity measures

Vector Processors

• Vector instructions operate on a vector of elements
  • These are specified as operations on vector registers
• A supercomputer vector register holds ~32-64 elements
• The number of elements is larger than the amount of parallel hardware (the vector pipes or lanes, say 2-4)
• The hardware performs a full vector operation in
  • #elements-per-vector-register / #pipes cycles (e.g., a 64-element register over 4 pipes takes 16 cycles)

[Figure: a scalar add r3 = r1 + r2 next to a vector add vr3 = vr1 + vr2, which logically performs #elements adds in parallel but actually performs #pipes adds at a time.]

Page 20: CS 240A: Models  of parallel programming: Machines, languages, and complexity measures

Programming Model 3: Partitioned Global Address Space (PGAS)

• One of the two main models you will program in for class
• Program consists of a collection of named threads.
  • Usually fixed at program startup time
  • Local and shared data, as in the shared memory model
  • But shared data is partitioned over local processes
  • Cost model says remote data is expensive
• Examples: UPC, Co-Array Fortran, Titanium
• In between message passing and shared memory

[Figure: threads P0, P1, ..., Pn each own one partition of the shared array s (s[0]: 27, s[1]: 27, ..., s[n]: 27) and keep private variables such as i; a thread writes its own partition with s[myThread] = ... and may read any element with y = ..s[i]...]

Page 21: CS 240A: Models  of parallel programming: Machines, languages, and complexity measures

Machine Model 3: Globally Addressed Memory

• Cray T3E, X1; HP Alphaserver; SGI Altix
• Network interface supports "Remote Direct Memory Access"
  • NI can directly access memory without interrupting the CPU
  • One processor can read/write memory with one-sided operations (put/get)
  • Not just a load/store as on a shared memory machine
  • Remote data is typically not cached locally

[Figure: processors P0, P1, ..., Pn, each with memory and a network interface (NI), connected by an interconnect.]

Global address space may be supported in varying degrees.

Page 22: CS 240A: Models  of parallel programming: Machines, languages, and complexity measures

Machine Model 5: Hybrids (Catchall Category)

• Most modern high-performance machines are hybrids of several of these categories

• DataStar: Cluster of shared-memory processors
• Cluster2: Cluster of shared-memory processors
• Cray X1: More complicated hybrid of vector, shared memory, and cluster

• What’s the right programming model for these ???

Page 23: CS 240A: Models  of parallel programming: Machines, languages, and complexity measures

4-core Intel Nehalem chip (2 per Triton node):

Page 24: CS 240A: Models  of parallel programming: Machines, languages, and complexity measures

Triton memory hierarchy

[Figure: a Triton node contains two chips; each chip has four processors, each with its own cache and L2 cache, sharing one L3 cache per chip; both chips share the node memory, and a Myrinet interconnect links the node to other nodes.]
