TRANSCRIPT
Lecture 5 – Parallel Programming Patterns - Map
Parallel Programming Patterns Overview and Map Pattern
Parallel Computing CIS 410/510
Department of Computer and Information Science
Outline q Parallel programming models q Dependencies q Structured programming patterns overview
❍ Serial / parallel control flow patterns ❍ Serial / parallel data management patterns
q Map pattern ❍ Optimizations
◆ sequences of Maps ◆ code Fusion ◆ cache Fusion
❍ Related Patterns ❍ Example: Scaled Vector Addition (SAXPY)
Parallel Models 101 q Sequential models ❍ von Neumann (RAM) model
q Parallel model ❍ A parallel computer is simply a collection of processors interconnected in some manner to coordinate activities and exchange data
❍ Models that can be used as general frameworks for describing and analyzing parallel algorithms ◆ Simplicity: description, analysis, architecture independence ◆ Implementability: able to be realized, reflect performance
q Three common parallel models ❍ Directed acyclic graphs, shared-memory, network
[Figure: von Neumann (RAM) model, a single processor connected to memory]
Directed Acyclic Graphs (DAG) q Captures data flow parallelism q Nodes represent operations to be performed ❍ Inputs are nodes with no incoming arcs ❍ Outputs are nodes with no outgoing arcs ❍ Think of nodes as tasks
q Arcs are paths for flow of data results q DAG represents the operations of the algorithm and implies precedence constraints on their order
Example: for (i=1; i<100; i++) a[i] = a[i-1] + 100;
[Figure: dependence chain a[0] → a[1] → … → a[99]]
Shared Memory Model q Parallel extension of RAM model (PRAM)
❍ Memory size is infinite ❍ Number of processors is unbounded ❍ Processors communicate via the memory ❍ Every processor accesses any memory
location in 1 cycle ❍ Synchronous
◆ All processors execute same algorithm synchronously – READ phase – COMPUTE phase – WRITE phase
◆ Some subset of the processors can stay idle ❍ Asynchronous
[Figure: PRAM model, processors P1, P2, P3, …, PN all connected to a single shared memory]
Network Model q G = (N,E)
❍ N are processing nodes ❍ E are bidirectional communication links
q Each processor has its own memory q No shared memory is available q Network operation may be synchronous or
asynchronous q Requires communication primitives
❍ Send (X, i) ❍ Receive (Y, j)
q Captures message passing model for algorithm design
[Figure: network model, processing nodes P11 … PNN, each with its own memory, connected by bidirectional communication links]
Parallelism q Ability to execute different parts of a computation
concurrently on different machines q Why do you want parallelism? ❍ Shorter running time or handling more work
q What is being parallelized? ❍ Task: instruction, statement, procedure, … ❍ Data: data flow, size, replication ❍ Parallelism granularity
◆ Coarse-grain versus fine-grained
q Thinking about parallelism q Evaluation
Why is parallel programming important? q Parallel programming has matured
❍ Standard programming models ❍ Common machine architectures ❍ Programmer can focus on computation and use suitable
programming model for implementation q Increase portability between models and architectures q Reasonable hope of portability across platforms q Problem
❍ Performance optimization is still platform-dependent ❍ Performance portability is a problem ❍ Parallel programming methods are still evolving
Parallel Algorithm q Recipe to solve a problem “in parallel” on multiple
processing elements q Standard steps for constructing a parallel algorithm ❍ Identify work that can be performed concurrently ❍ Partition the concurrent work on separate processors ❍ Properly manage input, output, and intermediate data ❍ Coordinate data accesses and work to satisfy
dependencies q Which are hard to do?
Parallelism Views q Where can we find parallelism? q Program (task) view ❍ Statement level
◆ Between program statements ◆ Which statements can be executed at the same time?
❍ Block level / Loop level / Routine level / Process level ◆ Larger-grained program statements
q Data view ❍ How is data operated on? ❍ Where does data reside?
q Resource view
Parallelism, Correctness, and Dependence q Parallel execution, from any point of view, will be
constrained by the sequence of operations that must be performed for a correct result
q Parallel execution must address control, data, and system dependences
q A dependency arises when one operation depends on an earlier operation to complete and produce a result before this later operation can be performed
q We extend this notion of dependency to resources since some operations may depend on certain resources ❍ For example, due to where data is located
Executing Two Statements in Parallel q Want to execute two statements in parallel q On one processor:
Statement 1; Statement 2;
q On two processors: Processor 1: Processor 2: Statement 1; Statement 2;
q Fundamental (concurrent) execution assumption ❍ Processors execute independently of each other ❍ No assumptions made about speed of processor execution
Sequential Consistency in Parallel Execution q Case 1:
Processor 1: Processor 2: statement 1;
statement 2;
q Case 2: Processor 1: Processor 2: statement 2; statement 1;
q Sequential consistency ❍ The statements’ executions do not interfere with each other ❍ Computation results are the same (independent of order)
Independent versus Dependent q In other words the execution of
statement1; statement2; must be equivalent to statement2; statement1;
q Their order of execution must not matter! q If true, the statements are independent of each other q Two statements are dependent when the order of their
execution affects the computation outcome
Examples
q Example 1: S1: a=1; S2: b=1; (statements are independent)
q Example 2: S1: a=1; S2: b=a; (dependent: true (flow) dependence; the second statement depends on the first. Can you remove the dependency?)
q Example 3: S1: a=f(x); S2: a=b; (dependent: output dependence; the second statement depends on the first. Can you remove the dependency? How?)
q Example 4: S1: a=b; S2: b=1; (dependent: anti-dependence; the first statement depends on the second. Can you remove the dependency? How?)
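A sketch of how the two “name” dependences above can be removed by renaming (my own illustration, answering the slide’s questions; the temporaries a1 and t are hypothetical):

    Example 3, output dependence removed by renaming the first write:
      S1: a1 = f(x);   // was a = f(x)
      S2: a  = b;      // S1 and S2 no longer write the same variable

    Example 4, anti-dependence removed by copying b before it is overwritten:
      S0: t = b;       // new temporary
      S1: a = t;       // was a = b
      S2: b = 1;       // S1 and S2 may now execute in either order

The flow dependence in Example 2 cannot be removed this way: S2 genuinely needs the value that S1 produces.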
True Dependence and Anti-Dependence q Given statements S1 and S2,
S1; S2;
q S2 has a true (flow) dependence on S1 if and only if S2 reads a value written by S1
q S2 has an anti-dependence on S1 if and only if S2 writes a value read by S1
Flow dependence (δ):   S1: X = …    S2: … = X
Anti-dependence (δ⁻¹): S1: … = X    S2: X = …
Output Dependence q Given statements S1 and S2,
S1; S2;
q S2 has an output dependence on S1 if and only if S2 writes a variable written by S1
q Anti- and output dependences are “name” dependencies ❍ Are they “true” dependences?
q How can you get rid of output dependences? ❍ Are there cases where you can not?
Output dependence (δ⁰): S1: X = …    S2: X = …
Statement Dependency Graphs q Can use graphs to show dependence relationships q Example
S1: a=1; S2: b=a; S3: a=b+1; S4: c=a;
q S2 δ S3 : S3 is flow-dependent on S2 q S1 δ⁰ S3 : S3 is output-dependent on S1 q S2 δ⁻¹ S3 : S3 is anti-dependent on S2
[Figure: dependence graph over S1–S4 with flow, anti, and output edges]
When can two statements execute in parallel? q Statements S1 and S2 can execute in parallel if and
only if there are no dependences between S1 and S2 ❍ True dependences ❍ Anti-dependences ❍ Output dependences
q Some dependences can be removed by modifying the program ❍ Rearranging statements ❍ Eliminating statements
How do you compute dependence? q Data dependence relations can be found by
comparing the IN and OUT sets of each node q The IN and OUT sets of a statement S are defined
as: ❍ IN(S) : set of memory locations (variables) that may be
used in S ❍ OUT(S) : set of memory locations (variables) that may
be modified by S q Note that these sets include all memory locations
that may be fetched or modified q As such, the sets can be conservatively large
IN / OUT Sets and Computing Dependence q Assuming that there is a path from S1 to S2 , the
following shows how to intersect the IN and OUT sets to test for data dependence
out(S1) ∩ in(S2) ≠ ∅  ⟹  S1 δ S2    (flow dependence)
in(S1) ∩ out(S2) ≠ ∅  ⟹  S1 δ⁻¹ S2  (anti-dependence)
out(S1) ∩ out(S2) ≠ ∅  ⟹  S1 δ⁰ S2  (output dependence)
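A worked example (my own, using the definitions above): for S1: a = b + c; and S2: d = a * 2; we have IN(S1) = {b, c}, OUT(S1) = {a}, IN(S2) = {a}, OUT(S2) = {d}. Then OUT(S1) ∩ IN(S2) = {a} ≠ ∅, so S1 δ S2 (flow dependence), while IN(S1) ∩ OUT(S2) = ∅ and OUT(S1) ∩ OUT(S2) = ∅, so there is no anti- or output dependence. The only constraint is that S1 must complete before S2.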
Loop-Level Parallelism q Significant parallelism can be identified within loops
for (i=0; i<100; i++) S1: a[i] = i;
q Dependencies? What about i, the loop index? q DOALL loop (a.k.a. foreach loop)
❍ All iterations are independent of each other ❍ All statements can be executed in parallel at the same time
◆ Is this really true?
for (i=0; i<100; i++) { S1: a[i] = i; S2: b[i] = 2*i; }
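A DOALL loop like this maps directly onto an OpenMP parallel for. A minimal sketch (my own, not from the slides; it assumes a and b are int arrays of at least 100 elements declared elsewhere):

    // Both statements depend only on the loop index i, so every
    // iteration is independent and can run on a different thread.
    #pragma omp parallel for
    for (int i = 0; i < 100; i++) {
        a[i] = i;
        b[i] = 2 * i;
    }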
Iteration Space q Unroll loop into separate statements / iterations q Show dependences between iterations
for (i=0; i<100; i++) S1: a[i] = i;
[Iteration space: instances S1_0, S1_1, …, S1_99 with no dependences between them]
for (i=0; i<100; i++) { S1: a[i] = i; S2: b[i] = 2*i; }
[Iteration space: instances (S1_0, S2_0), (S1_1, S2_1), …, (S1_99, S2_99), all independent of one another]
Multi-Loop Parallelism q Significant parallelism can be identified between loops
for (i=0; i<100; i++) a[i] = i; for (i=0; i<100; i++) b[i] = i;
q Dependencies? q How much parallelism is available? q Given 4 processors, how much parallelism is possible? q What parallelism is achievable with 50 processors?
a[0] a[1] a[99] …
b[0] b[1] b[99] …
Loops with Dependencies Case 1: for (i=1; i<100; i++) a[i] = a[i-1] + 100;
q Dependencies? ❍ What type?
q Is the Case 1 loop parallelizable? q Is the Case 2 loop parallelizable?
Case 2: for (i=5; i<100; i++) a[i-5] = a[i] + 100;
[Case 1: a single dependence chain a[0] → a[1] → … → a[99]]
[Case 2: five independent chains, e.g. a[0], a[5], a[10], …; a[1], a[6], a[11], …; a[2], a[7], a[12], …; a[3], a[8], a[13], …; a[4], a[9], a[14], …]
Another Loop Example for (i=1; i<100; i++) a[i] = f(a[i-1]);
q Dependencies? ❍ What type?
q Loop iterations are not parallelizable ❍ Why not?
Loop Dependencies q A loop-carried dependence is a dependence that is
present only if the statements are part of the execution of a loop (i.e., between two statement instances in two different iterations of a loop)
q Otherwise, it is loop-independent, including between two statement instances in the same loop iteration
q Loop-carried dependences can prevent loop iteration parallelization
q The dependence is lexically forward if the source comes before the target or lexically backward otherwise ❍ Unroll the loop to see
Loop Dependence Example for (i=0; i<100; i++) a[i+10] = f(a[i]);
q Dependencies? ❍ Between a[10], a[20], … ❍ Between a[11], a[21], …
q Some parallel execution is possible ❍ How much?
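One way to see the available parallelism (a sketch of my own, not from the slides; it assumes a has at least 110 elements and f is a pure function): the dependence distance is 10, so the 100 iterations split into 10 chains, one per residue class i mod 10, and the chains are independent of each other.

    // Each chain c = 0..9 touches only elements with index congruent to c (mod 10),
    // so the ten chains can be distributed across up to ten workers.
    #pragma omp parallel for
    for (int c = 0; c < 10; c++) {
        for (int i = c; i < 100; i += 10) {
            a[i + 10] = f(a[i]);   // serial within a chain, parallel across chains
        }
    }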
Dependences Between Iterations for (i=1; i<100; i++) { S1: a[i] = …; S2: … = a[i-1];
} q Dependencies? ❍ Between a[i] and a[i-1]
q Is parallelism possible? ❍ Statements can be executed in “pipeline” manner
[Figure: pipelined execution across iterations i = 1, 2, 3, …; one processor executes S1 for successive iterations while a second executes S2 one iteration behind]
Another Loop Dependence Example for (i=0; i<100; i++) for (j=1; j<100; j++) a[i][j] = f(a[i][j-1]);
q Dependencies? ❍ Loop-independent dependence on i ❍ Loop-carried dependence on j
q Which loop can be parallelized? ❍ Outer loop parallelizable ❍ Inner loop cannot be parallelized
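A sketch of parallelizing the outer loop with OpenMP (my own illustration; it assumes a is a 100×100 array and f is a pure function). The inner loop stays serial because of the loop-carried dependence on j:

    #pragma omp parallel for
    for (int i = 0; i < 100; i++) {        // rows are independent of each other
        for (int j = 1; j < 100; j++) {
            a[i][j] = f(a[i][j-1]);        // serial chain within a row
        }
    }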
Still Another Loop Dependence Example for (j=1; j<100; j++) for (i=0; i<100; i++) a[i][j] = f(a[i][j-1]);
q Dependencies?
❍ Loop-independent dependence on i ❍ Loop-carried dependence on j
q Which loop can be parallelized? ❍ Inner loop parallelizable ❍ Outer loop cannot be parallelized ❍ Less desirable (why?)
Key Ideas for Dependency Analysis q To execute in parallel: ❍ Statement order must not matter ❍ Statements must not have dependences
q Some dependences can be removed q Some dependences may not be obvious
Dependencies and Synchronization q How is parallelism achieved when there are dependencies?
❍ Think about concurrency ❍ Some parts of the execution are independent ❍ Some parts of the execution are dependent
q Must control ordering of events on different processors (cores) ❍ Dependencies pose constraints on parallel event ordering ❍ Partial ordering of execution action
q Use synchronization mechanisms ❍ Need for concurrent execution too ❍ Maintains partial order
Parallel Patterns q Parallel Patterns: A recurring combination of task
distribution and data access that solves a specific problem in parallel algorithm design.
q Patterns provide us with a “vocabulary” for algorithm design
q It can be useful to compare parallel patterns with serial patterns
q Patterns are universal – they can be used in any parallel programming system
Parallel Patterns q Nesting Pattern q Serial / Parallel Control Patterns q Serial / Parallel Data Management Patterns q Other Patterns q Programming Model Support for Patterns
Nesting Pattern q Nesting is the ability to hierarchically compose
patterns q This pattern appears in both serial and parallel
algorithms q “Pattern diagrams” are used to visually show the
pattern idea where each “task block” is a location of general code in an algorithm
q Each “task block” can in turn be another pattern in the nesting pattern
Nesting Pattern
Nesting Pattern: A compositional pattern. Nesting allows other patterns to be composed in a hierarchy so that any task block in the above diagram can be replaced with a pattern with the same input/output and dependencies.
Serial Control Patterns q Structured serial programming is based on these
patterns: sequence, selection, iteration, and recursion
q The nesting pattern can also be used to hierarchically compose these four patterns
q Though you should be familiar with these, it’s extra important to understand these patterns when parallelizing serial algorithms based on these patterns
Serial Control Patterns: Sequence q Sequence: Ordered list of tasks that are executed
in a specific order q Assumption – program text ordering will be
followed (obvious, but this will be important when parallelized)
Serial Control Patterns: Selection q Selection: condition c is first evaluated. Either task
a or b is executed depending on the true or false result of c.
q Assumptions – a and b are never executed before c, and only a or b is executed - never both
Serial Control Patterns: Iteration q Iteration: a condition c is evaluated. If true, a is
evaluated, and then c is evaluated again. This repeats until c is false.
q Complication when parallelizing: potential for dependencies to exist between previous iterations
Serial Control Patterns: Recursion q Recursion: dynamic form of nesting allowing
functions to call themselves q Tail recursion is a special recursion that can be
converted into iteration – important for functional languages
Parallel Control Patterns q Parallel control patterns extend serial control
patterns q Each parallel control pattern is related to at least
one serial control pattern, but relaxes assumptions of serial control patterns
q Parallel control patterns: fork-join, map, stencil, reduction, scan, recurrence
Parallel Control Patterns: Fork-Join q Fork-join: allows control flow to fork into
multiple parallel flows, then rejoin later q Cilk Plus implements this with spawn and sync ❍ The call tree is a parallel call tree and functions are
spawned instead of called ❍ Functions that spawn another function call will
continue to execute ❍ Caller syncs with the spawned function to join the two
q A “join” is different from a “barrier” ❍ Sync – only one thread continues ❍ Barrier – all threads continue
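A minimal Cilk Plus sketch of fork-join with spawn and sync (a standard illustration, not taken from the slides):

    #include <cilk/cilk.h>

    int fib(int n) {
        if (n < 2) return n;
        int x = cilk_spawn fib(n - 1);  // fork: the child may run in parallel with the caller
        int y = fib(n - 2);             // the caller keeps executing after the spawn
        cilk_sync;                      // join: wait for the spawned child before using x
        return x + y;
    }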
Parallel Control Patterns: Map q Map: performs a function over every element of a collection q Map replicates a serial iteration pattern where each iteration is
independent of the others, the number of iterations is known in advance, and computation only depends on the iteration count and data from the input collection
q The replicated function is referred to as an “elemental function”
[Figure: map, an elemental function applied to each element of an input collection produces the corresponding element of the output collection]
Parallel Control Patterns: Stencil q Stencil: Elemental function accesses a set of
“neighbors”, stencil is a generalization of map q Often combined with iteration – used with iterative
solvers or to evolve a system through time
q Boundary conditions must be handled carefully in the stencil pattern
q See stencil lecture…
Parallel Control Patterns: Reduction q Reduction: Combines every element in a
collection using an associative “combiner function”
q Because of the associativity of the combiner function, different orderings of the reduction are possible
q Examples of combiner functions: addition, multiplication, maximum, minimum, and Boolean AND, OR, and XOR
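A sketch of a reduction in OpenMP (my own illustration; addition is the combiner function, and x and n are assumed to be declared elsewhere):

    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++) {
        sum += x[i];   // each thread accumulates a private partial sum;
                       // the partial sums are combined at the end of the loop
    }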
Parallel Control Patterns: Reduction
[Figure: serial reduction versus tree-structured parallel reduction]
Parallel Control Patterns: Scan q Scan: computes all partial reductions of a collection q For every output in a collection, a reduction of the
input up to that point is computed q If the function being used is associative, the scan
can be parallelized q Parallelizing a scan is not obvious at first, because
of dependencies to previous iterations in the serial loop
q A parallel scan will require more operations than a serial version
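For reference, a serial inclusive scan looks like the following (my own sketch; in and out are assumed arrays of length n). The loop-carried dependence on out[i-1] is what makes the parallel version non-obvious, and work-efficient parallel scans perform roughly twice the operations of this loop:

    out[0] = in[0];
    for (int i = 1; i < n; i++) {
        out[i] = out[i-1] + in[i];   // partial reduction of in[0..i]
    }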
Parallel Control Patterns: Scan
[Figure: serial scan versus parallel scan]
Parallel Control Patterns: Recurrence q Recurrence: More complex version of map, where
the loop iterations can depend on one another q Similar to map, but elements can use outputs of
adjacent elements as inputs q For a recurrence to be computable, there must be a
serial ordering of the recurrence elements so that elements can be computed using previously computed outputs
Serial Data Management Patterns q Serial programs can manage data in many ways q Data management deals with how data is allocated,
shared, read, written, and copied q Serial Data Management Patterns: random read
and write, stack allocation, heap allocation, objects
Serial Data Management Patterns: random read and write
q Memory locations indexed with addresses q Pointers are typically used to refer to memory
addresses q Aliasing (uncertainty about whether two pointers refer to the same object) can cause problems when serial code is parallelized
Serial Data Management Patterns: Stack Allocation
q Stack allocation is useful for dynamically allocating data in LIFO manner
q Efficient – arbitrary amount of data can be allocated in constant time
q Stack allocation also preserves locality q When parallelized, typically each thread will get its
own stack so thread locality is preserved
Serial Data Management Patterns: Heap Allocation
q Heap allocation is useful when data cannot be allocated in a LIFO fashion
q But, heap allocation is slower and more complex than stack allocation
q A parallelized heap allocator should be used when dynamically allocating memory in parallel ❍ This type of allocator will keep separate pools for each
parallel worker
Serial Data Management Patterns: Objects q Objects are language constructs to associate data
with code to manipulate and manage that data q Objects can have member functions, and they also
are considered members of a class of objects q Parallel programming models will generalize
objects in various ways
Parallel Data Management Patterns q To avoid things like race conditions, it is critically
important to know when data is, and isn’t, potentially shared by multiple parallel workers
q Some parallel data management patterns help us with data locality
q Parallel data management patterns: pack, pipeline, geometric decomposition, gather, and scatter
Parallel Data Management Patterns: Pack q Pack is used to eliminate unused space in a collection q Elements marked false are discarded, the
remaining elements are placed in a contiguous sequence in the same order
q Useful when used with map q Unpack is the inverse and is used to place elements back in their original locations
Parallel Data Management Patterns: Pipeline q Pipeline connects tasks in a producer-
consumer manner q A linear pipeline is the basic pattern
idea, but a pipeline in a DAG is also possible
q Pipelines are most useful when used with other patterns as they can multiply available parallelism
Parallel Data Management Patterns: Geometric Decomposition q Geometric Decomposition – arranges data into
subcollections q Overlapping and non-overlapping decompositions
are possible q This pattern doesn’t necessarily move data, it just
gives us another view of it
Parallel Data Management Patterns: Gather q Gather reads a collection of data given a
collection of indices q Think of a combination of map and random serial
reads q The output collection shares the same type as the
input collection, but it share the same shape as the indices collection
Parallel Data Management Patterns: Scatter q Scatter is the inverse of gather q A set of input and indices is required, but each
element of the input is written to the output at the given index instead of read from the input at the given index
q Race conditions can occur when we have two writes to the same location!
Other Parallel Patterns q Superscalar Sequences: write a sequence of tasks,
ordered only by dependencies q Futures: similar to fork-join, but tasks do not need
to be nested hierarchically q Speculative Selection: general version of serial
selection where the condition and both outcomes can all run in parallel
q Workpile: general map pattern where each instance of elemental function can generate more instances, adding to the “pile” of work
Other Parallel Patterns q Search: finds some data in a collection that meets
some criteria q Segmentation: operations on subdivided, non-
overlapping, non-uniformly sized partitions of 1D collections
q Expand: a combination of pack and map q Category Reduction: Given a collection of
elements each with a label, find all elements with the same label and reduce them
Programming Model Support for Patterns
Map Pattern - Overview q Map q Optimizations ❍ Sequences of Maps ❍ Code Fusion ❍ Cache Fusion
q Related Patterns q Example Implementation: Scaled Vector Addition
(SAXPY) ❍ Problem Description ❍ Various Implementations
Mapping q “Do the same thing many times”
foreach i in foo: do something
q Well-known higher-order function in languages like ML, Haskell, and Scala: map applies a function to each element in a list and returns a list of results
∀ a b. (a → b) → List a → List b
Example Maps Add 1 to every item in an array Double every item in an array
Add 1:  input 0 4 5 3 1 0  →  output 1 5 6 4 2 1
Double: input 3 7 0 1 4 0  →  output 6 14 0 2 8 0
Key Point: An operation is a map if it can be applied to each element without knowledge of neighbors.
Key Idea q Map is a “foreach loop” where each iteration is
independent
Embarrassingly Parallel
Independence is a big win. We can run map completely in parallel. Significant speedups! More precisely: T(∞) is O(1) plus implementation overhead that is O(log n), so T(∞) ∈ O(log n).
Sequential Map
for (int n = 0; n < array.length; ++n) { process(array[n]); }
[Figure: the calls to process() execute one after another over time]
Parallel Map
parallel_for_each(x in array) { process(x); }
[Figure: the calls to process() execute at the same time on different workers]
Comparing Maps
[Figure: serial map timeline versus parallel map timeline]
Comparing Maps
[Figure: serial map timeline versus parallel map timeline]
Speedup: the gap between the two timelines is the speedup; with the parallel map, our program has finished execution while the serial map is still running.
Independence q The key to (embarrassing) parallelism is independence
q Modifying shared state breaks perfect independence q Results of accidentally violating independence:
❍ non-determinism ❍ data-races ❍ undefined behavior ❍ segfaults
Map function should be “pure” (or “pure-ish”) and should not modify shared state
Warning: No shared state!
Implementation and API q OpenMP and CilkPlus contain a parallel for
language construct q Map is a mode of use of parallel for q TBB uses higher order functions with lambda
expressions/“functors” q Some languages (CilkPlus, Matlab, Fortran) provide
array notation which makes some maps more concise
A[:] = A[:]*5; is CilkPlus array notation for “multiply every element in A by 5”
Array Notation
Unary Maps
So far we have only dealt with mapping over a single collection…
Map with 1 Input, 1 Output
index:   0  1  2  3  4  5  6  7  8  9 10 11
x:       3  7  0  1  4  0  0  4  5  3  1  0
result:  6 14  0  2  8  0  0  8 10  6  2  0
int oneToOne ( int x ) { return x*2; }   // elemental function, applied independently to each of the 12 elements
N-ary Maps
But, sometimes it makes sense to map over multiple collections at once…
Map with 2 Inputs, 1 Output
index:   0  1  2  3  4  5  6  7  8  9 10 11
x:       3  7  0  1  4  0  0  4  5  3  1  0
y:       2  4  2  1  8  3  9  5  5  1  2  1
result:  5 11  2  2 12  3  9  9 10  4  3  1
int twoToOne ( int x, int y ) { return x+y; }   // elemental function, applied to each pair of elements
Optimization – Sequences of Maps q Often several map operations
occur in sequence ❍ Vector math consists of many
small operations such as additions and multiplications applied as maps
q A naïve implementation may write each intermediate result to memory, wasting memory BW and likely overwhelming the cache
Optimization – Code Fusion q Can sometimes
“fuse” together the operations to perform them at once
q Increases arithmetic intensity, reduces memory/cache usage
q Ideally, operations can be performed using registers alone
[Figure 4.2 (McCool et al.): Code fusion optimization. Convert a sequence of maps into a map of sequences, avoiding the need to write intermediate results to memory. This can be done automatically by ArBB and explicitly in other programming models.]
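A sketch of what code fusion means in plain C (my own illustration, assuming float arrays a, b, tmp of length n and scale factors s1, s2):

    // Unfused: a sequence of two maps; tmp is written to and read back from memory.
    for (int i = 0; i < n; i++) tmp[i] = a[i] * s1;
    for (int i = 0; i < n; i++) b[i]   = tmp[i] + s2;

    // Fused: a single map whose body is the sequence of operations;
    // the intermediate value stays in a register and tmp is not needed at all.
    for (int i = 0; i < n; i++) {
        float t = a[i] * s1;
        b[i]    = t + s2;
    }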
Optimization – Cache Fusion q Sometimes
impractical to fuse together the map operations
q Can instead break the work into blocks, giving each CPU one block at a time
q Hopefully, operations use cache alone
[Figure 4.3 (McCool et al.): Cache fusion optimization. Process sequences of maps in small tiles sequentially; when code fusion is not possible, a sequence of maps can be broken into small tiles and each tile processed sequentially. This avoids the need for synchronization between each individual map and, if the tiles are small enough, intermediate data can be held in cache.]
In Cilk Plus, TBB, OpenMP, and OpenCL the reorganization needed for either kind of fusion must generally be done by the programmer, with a notable exception: in OpenMP, cache fusion occurs when all of the following are true:
❍ A single parallel region executes all of the maps to be fused
❍ The loop for each map has the same bounds and chunk size
❍ Each loop uses the static scheduling mode, either as implied by the environment or explicitly specified
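Under the conditions listed above, a sketch of what that looks like in OpenMP (my own illustration; a, b, tmp, n, s1, and s2 are assumed to be declared elsewhere). With static scheduling, each thread gets the same index range in both loops, so the elements of tmp it wrote in the first map are likely still in its cache for the second:

    #pragma omp parallel
    {
        #pragma omp for schedule(static)
        for (int i = 0; i < n; i++) tmp[i] = a[i] * s1;   // map 1

        #pragma omp for schedule(static)
        for (int i = 0; i < n; i++) b[i] = tmp[i] + s2;   // map 2, same bounds and chunk size
    }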
Related Patterns Three patterns related to map are discussed here: ❍ Stencil ❍ Workpile ❍ Divide-and-Conquer
More detail presented in a later lecture
Stencil q Each instance of the map function accesses
neighbors of its input, offset from its usual input q Common in imaging and PDE solvers
[Figure 7.1 (McCool et al.): Stencil pattern. The stencil pattern combines a local, structured gather with a function to combine the results into a single output for each input neighborhood.]
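A minimal one-dimensional three-point stencil sketch in C (my own illustration; in and out are assumed arrays of length n, and the boundary elements are simply skipped, which is one straightforward way to handle boundary conditions):

    // Each output element combines a small neighborhood of the input.
    for (int i = 1; i < n - 1; i++) {
        out[i] = (in[i-1] + in[i] + in[i+1]) / 3.0f;
    }

Because every output depends only on the read-only input array, the loop over i is itself a map and can be parallelized like any other map.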
Workpile q Work items can be added to the map while it is in
progress, from inside map function instances q Work grows and is consumed by the map q Workpile pattern terminates when no more work is
available
Divide-and-Conquer q Applies if a problem can
be divided into smaller subproblems recursively until a base case is reached that can be solved serially
[Figure 6.17 (McCool et al.): Partitioning in 2D. The partition pattern can be extended to multiple dimensions.]
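A sketch of divide-and-conquer in Cilk Plus style (my own illustration, not from the slides): recursively split an array sum until a base case is small enough to solve serially. The threshold of 1000 is an arbitrary choice for the illustration.

    #include <cilk/cilk.h>

    long sum(const int *a, int n) {
        if (n < 1000) {                            // base case: solve serially
            long s = 0;
            for (int i = 0; i < n; i++) s += a[i];
            return s;
        }
        int half = n / 2;
        long left  = cilk_spawn sum(a, half);      // conquer the two halves in parallel
        long right = sum(a + half, n - half);
        cilk_sync;
        return left + right;
    }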
Example: Scaled Vector Addition (SAXPY) q y ← ax + y ❍ Scales vector x by a and adds it to vector y ❍ Result is stored in input vector y
q Comes from the BLAS (Basic Linear Algebra Subprograms) library
q Every element in vector x and vector y are independent
What does y ← ax + y look like?
y ← ax + y
index:        0  1  2  3  4  5  6  7  8  9 10 11
a:            4  4  4  4  4  4  4  4  4  4  4  4
x:            2  4  2  1  8  3  9  5  5  1  2  1
y (input):    3  7  0  1  4  0  0  4  5  3  1  0
y (output):  11 23  8  5 36 12 36 49 50  7  9  4
Visual:
[Same y ← ax + y arrays as on the previous slide]
Twelve processors used: one for each element in the vector
Visual:
[Same y ← ax + y arrays as on the previous slide]
Six processors used: one for every two elements in the vector
Visual:
[Same y ← ax + y arrays as on the previous slide]
Two processors used: one for every six elements in the vector
Serial SAXPY Implementation
void saxpy_serial(
    size_t n,        // the number of elements in the vectors
    float a,         // scale factor
    const float x[], // the first input vector
    float y[]        // the output vector and second input vector
) {
    for (size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}
Listing 4.1 (McCool et al.): Serial implementation of SAXPY in C.
TBB SAXPY Implementation
void saxpy_tbb(
    int n,     // the number of elements in the vectors
    float a,   // scale factor
    float x[], // the first input vector
    float y[]  // the output vector and second input vector
) {
    tbb::parallel_for(
        tbb::blocked_range<int>(0, n),
        [&](tbb::blocked_range<int> r) {
            for (size_t i = r.begin(); i != r.end(); ++i)
                y[i] = a * x[i] + y[i];
        }
    );
}
Listing 4.2 (McCool et al.): Tiled implementation of SAXPY in TBB. Tiling not only leads to better spatial locality but also exposes opportunities for vectorization by the host compiler.
The TBB code exploits tiling: parallel_for breaks the half-open range [0,n) into subranges and processes each subrange r with a separate task, so each subrange acts as a tile processed by the serial for loop in the lambda. TBB uses thread parallelism but does not, by itself, vectorize the code; it relies on the underlying C++ compiler to do that.
Cilk Plus SAXPY Implementation
A basic Cilk Plus implementation of SAXPY uses the cilk_for construct, whose syntax is close to a regular for loop; an ordinary for loop can often be converted to cilk_for if all iterations of the loop body are independent, that is, if it is a map. As with TBB, cilk_for is not explicitly vectorized, but the compiler may attempt to auto-vectorize.
void saxpy_cilk(
    int n,     // the number of elements in the vectors
    float a,   // scale factor
    float x[], // the first input vector
    float y[]  // the output vector and second input vector
) {
    cilk_for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}
Listing 4.3 (McCool et al.): SAXPY in Cilk Plus using cilk_for.
It is also possible in Cilk Plus to specify vector operations explicitly using array notation. Here x[0:n] and y[0:n] refer to n consecutive elements of each array, starting with x[0] and y[0]; a variant syntax x[start:length:stride] allows a stride between elements, and sections of the same length can be combined with operators. Note that there is no cilk_for in this version.
void saxpy_array_notation(
    int n,     // the number of elements in the vectors
    float a,   // scale factor
    float x[], // the input vector
    float y[]  // the output vector
) {
    y[0:n] = a * x[0:n] + y[0:n];
}
Listing 4.4 (McCool et al.): SAXPY in Cilk Plus using array notation for explicit vectorization.
OpenMP SAXPY Implementation
Like TBB and Cilk Plus, the map pattern is expressed in OpenMP using a “parallel for” construct, by adding a pragma just before the loop to be parallelized. OpenMP uses a “team” of threads, and the work of the loop is distributed over the team when such a pragma is used; how exactly the distribution of work is done is given by the current scheduling option.
The advantage of the OpenMP syntax is that the code inside the loop does not change, and the annotations can usually be safely ignored so that a correct serial program results. However, as with the equivalent Cilk Plus construct, the form of the for loop is more restricted than in the serial case. Also, as with Cilk Plus and TBB, implementations of OpenMP generally do not check for incorrect parallelizations that can arise from dependencies between loop iterations, which can lead to race conditions. If these exist and are not correctly accounted for in the pragma, an incorrect parallelization will result.
void saxpy_openmp(
    int n,     // the number of elements in the vectors
    float a,   // scale factor
    float x[], // the first input vector
    float y[]  // the output vector and second input vector
) {
    #pragma omp parallel for
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}
Listing 4.5 (McCool et al.): SAXPY in OpenMP.
OpenMP SAXPY Performance
Vector size = 500,000,000