
Lecture 5: Parallel Matrix Algorithms (part 3)


A Simple Parallel Dense Matrix-Matrix Multiplication

Let 𝐴 = [𝑎𝑖𝑗]𝑛×𝑛 and 𝐵 = [𝑏𝑖𝑗]𝑛×𝑛 be n × n matrices. Compute 𝐶 = 𝐴𝐵.

• Computational complexity of the sequential algorithm: 𝑂(𝑛³)

• Partition 𝐴 and 𝐵 into 𝑝 square blocks 𝐴𝑖,𝑗 and 𝐵𝑖,𝑗 (0 ≤ 𝑖, 𝑗 < √𝑝) of size (𝑛/√𝑝) × (𝑛/√𝑝) each.

• Use a Cartesian topology to set up the process grid (see the sketch below). Process 𝑃𝑖,𝑗 initially stores 𝐴𝑖,𝑗 and 𝐵𝑖,𝑗 and computes block 𝐶𝑖,𝑗 of the result matrix.

• Remark: Computing submatrix 𝐶𝑖,𝑗 requires all submatrices 𝐴𝑖,𝑘 and 𝐵𝑘,𝑗 for 0 ≤ 𝑘 < √𝑝.
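As an illustration (not from the lecture), a minimal MPI sketch of the grid setup: it assumes 𝑝 is a perfect square and builds a √𝑝 × √𝑝 periodic Cartesian grid plus row and column sub-communicators. All variable names are illustrative.

#include <mpi.h>
#include <math.h>

int main(int argc, char *argv[])
{
    int p, myrank, dims[2], periods[2] = {1, 1}, coords[2];
    int keep_row[2] = {0, 1}, keep_col[2] = {1, 0};
    MPI_Comm grid_comm, row_comm, col_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &p);
    dims[0] = dims[1] = (int)(sqrt((double)p) + 0.5);   /* assumes p is a perfect square */

    /* 2-D periodic process grid; reorder = 1 lets MPI renumber ranks for the topology. */
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &grid_comm);
    MPI_Comm_rank(grid_comm, &myrank);
    MPI_Cart_coords(grid_comm, myrank, 2, coords);      /* coords[0] = i, coords[1] = j */

    /* One communicator per grid row (j varies) and per grid column (i varies). */
    MPI_Cart_sub(grid_comm, keep_row, &row_comm);
    MPI_Cart_sub(grid_comm, keep_col, &col_comm);

    /* ... distribute the blocks A[i][j], B[i][j] and run the algorithm here ... */

    MPI_Finalize();
    return 0;
}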


• Algorithm:

– Perform an all-to-all broadcast of the blocks of A within each row of processes.

– Perform an all-to-all broadcast of the blocks of B within each column of processes.

– Each process 𝑃𝑖,𝑗 then computes 𝐶𝑖,𝑗 = Σ𝑘 𝐴𝑖,𝑘 𝐵𝑘,𝑗, summing over 𝑘 = 0, 1, …, √𝑝 − 1 (sketched below).
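Continuing the grid-setup sketch above (it reuses dims, row_comm and col_comm, and additionally needs <stdlib.h>), the two all-to-all broadcasts map naturally onto MPI_Allgather. Alocal, Blocal and Clocal are assumed to be this process's contiguous, row-major (𝑛/√𝑝) × (𝑛/√𝑝) blocks; all names are illustrative.

/* All-to-all broadcast of A-blocks within this process's row and of
 * B-blocks within its column, followed by the local block products. */
int q  = dims[0];                          /* q = sqrt(p)      */
int nb = n / q;                            /* block dimension  */
double *Ablk = malloc((size_t)q * nb * nb * sizeof *Ablk);
double *Bblk = malloc((size_t)q * nb * nb * sizeof *Bblk);

/* Afterwards Ablk holds A[i][0..q-1] and Bblk holds B[0..q-1][j]. */
MPI_Allgather(Alocal, nb * nb, MPI_DOUBLE, Ablk, nb * nb, MPI_DOUBLE, row_comm);
MPI_Allgather(Blocal, nb * nb, MPI_DOUBLE, Bblk, nb * nb, MPI_DOUBLE, col_comm);

/* C[i][j] += A[i][k] * B[k][j] for k = 0 .. q-1 (naive block multiply). */
for (int k = 0; k < q; k++)
    for (int r = 0; r < nb; r++)
        for (int t = 0; t < nb; t++)
            for (int c = 0; c < nb; c++)
                Clocal[r * nb + c] += Ablk[(k * nb + r) * nb + t] * Bblk[(k * nb + t) * nb + c];

free(Ablk);
free(Bblk);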


Performance Analysis

• √𝑝 rows of all-to-all broadcasts, each among a group of √𝑝 processes with message size 𝑛²/𝑝. Communication time: 𝑡𝑠 log √𝑝 + 𝑡𝑤 (𝑛²/𝑝)(√𝑝 − 1)

• √𝑝 columns of all-to-all broadcasts. Communication time: 𝑡𝑠 log √𝑝 + 𝑡𝑤 (𝑛²/𝑝)(√𝑝 − 1)

• Computation time: √𝑝 × (𝑛/√𝑝)³ = 𝑛³/𝑝

• Parallel time: 𝑇𝑝 = 𝑛³/𝑝 + 2(𝑡𝑠 log √𝑝 + 𝑡𝑤 (𝑛²/𝑝)(√𝑝 − 1))
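For a rough sense of scale (illustrative numbers, not from the lecture): with 𝑛 = 1024, 𝑝 = 16 (so √𝑝 = 4), 𝑡𝑠 = 100 and 𝑡𝑤 = 1 in units of one scalar multiply-add, the computation term is 𝑛³/𝑝 = 67,108,864, while the communication term is 2(100·2 + (1,048,576/16)·3) = 2(200 + 196,608) = 393,616, so at this size the run is dominated by computation.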


Memory Efficiency of the Simple Parallel Algorithm

• Not memory efficient:

– Each process 𝑃𝑖,𝑗 holds 2√𝑝 blocks (the 𝐴𝑖,𝑘 and 𝐵𝑘,𝑗).

– Each process needs Θ(𝑛²/√𝑝) memory.

– Total memory over all the processes is Θ(𝑛² × √𝑝), i.e., √𝑝 times the memory of the sequential algorithm.


Cannon’s Algorithm for Matrix-Matrix Multiplication


Goal: to improve the memory efficiency.

Let 𝐴 = [𝑎𝑖𝑗]𝑛×𝑛 and 𝐵 = [𝑏𝑖𝑗]𝑛×𝑛 be n × n matrices. Compute 𝐶 = 𝐴𝐵.

• Partition 𝐴 and 𝐵 into 𝑝 square blocks 𝐴𝑖,𝑗 and 𝐵𝑖,𝑗 (0 ≤ 𝑖, 𝑗 < √𝑝) of size (𝑛/√𝑝) × (𝑛/√𝑝) each.

• Use a Cartesian topology to set up the process grid. Process 𝑃𝑖,𝑗 initially stores 𝐴𝑖,𝑗 and 𝐵𝑖,𝑗 and computes block 𝐶𝑖,𝑗 of the result matrix.

• Remark: Computing submatrix 𝐶𝑖,𝑗 requires all submatrices 𝐴𝑖,𝑘 and 𝐵𝑘,𝑗 for 0 ≤ 𝑘 < √𝑝.

• The contention-free formula: 𝐶𝑖,𝑗 = Σ𝑘 𝐴𝑖,(𝑖+𝑗+𝑘) mod √𝑝 𝐵(𝑖+𝑗+𝑘) mod √𝑝, 𝑗, summing over 𝑘 = 0, 1, …, √𝑝 − 1.
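For example, with √𝑝 = 3, in the three steps 𝑘 = 0, 1, 2 process 𝑃1,2 uses 𝐴1,0, 𝐴1,1, 𝐴1,2 (since (1+2+𝑘) mod 3 = 0, 1, 2), while its row neighbors 𝑃1,1 and 𝑃1,0 use 𝐴1,2, 𝐴1,0, 𝐴1,1 and 𝐴1,1, 𝐴1,2, 𝐴1,0 respectively. At every step the three processes in row 1 need three different blocks of 𝐴, so no block is requested by two processes at once.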


Cannon’s Algorithm

// make the initial alignment
for 𝑖, 𝑗 := 0 to √𝑝 − 1 do
    Send block 𝐴𝑖,𝑗 to process (𝑖, (𝑗 − 𝑖 + √𝑝) mod √𝑝) and block 𝐵𝑖,𝑗 to process ((𝑖 − 𝑗 + √𝑝) mod √𝑝, 𝑗);
endfor;
Process 𝑃𝑖,𝑗 multiplies the received submatrices and adds the result to 𝐶𝑖,𝑗;

// compute-and-shift: a sequence of one-step shifts pairs up 𝐴𝑖,𝑘 and 𝐵𝑘,𝑗
// on process 𝑃𝑖,𝑗, which accumulates 𝐶𝑖,𝑗 = 𝐶𝑖,𝑗 + 𝐴𝑖,𝑘 𝐵𝑘,𝑗
for step := 1 to √𝑝 − 1 do
    Shift 𝐴𝑖,𝑗 one step left (with wraparound) and 𝐵𝑖,𝑗 one step up (with wraparound);
    Process 𝑃𝑖,𝑗 multiplies the received submatrices and adds the result to 𝐶𝑖,𝑗;
endfor;

Remark: In the initial alignment, the send operation shifts 𝐴𝑖,𝑗 to the left (with wraparound) by 𝑖 steps and shifts 𝐵𝑖,𝑗 up (with wraparound) by 𝑗 steps.
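As a rough MPI sketch of this pseudocode (illustrative, not the lecture's code): it reuses grid_comm, coords and dims from the earlier grid-setup sketch, assumes contiguous nb × nb blocks Alocal, Blocal, Clocal on each process, and uses MPI_Sendrecv_replace (described on a later slide). block_multiply_add is a hypothetical local helper that accumulates Clocal += Alocal · Blocal.

/* Cannon's algorithm on the sqrt(p) x sqrt(p) periodic grid (sketch). */
int q = dims[0], nb = n / q;
int left, right, up, down, src, dst;
MPI_Status st;

/* Neighbours for the single-step circular shifts:
 * displacement -1 in dim 1 -> dest is the left neighbour, source the right;
 * displacement -1 in dim 0 -> dest is the upper neighbour, source the lower. */
MPI_Cart_shift(grid_comm, 1, -1, &right, &left);
MPI_Cart_shift(grid_comm, 0, -1, &down,  &up);

/* Initial alignment: shift A left by i = coords[0] steps, B up by j = coords[1] steps. */
MPI_Cart_shift(grid_comm, 1, -coords[0], &src, &dst);
MPI_Sendrecv_replace(Alocal, nb * nb, MPI_DOUBLE, dst, 0, src, 0, grid_comm, &st);
MPI_Cart_shift(grid_comm, 0, -coords[1], &src, &dst);
MPI_Sendrecv_replace(Blocal, nb * nb, MPI_DOUBLE, dst, 0, src, 0, grid_comm, &st);

/* Compute-and-shift: sqrt(p) block multiplications, sqrt(p) - 1 shift steps. */
block_multiply_add(Clocal, Alocal, Blocal, nb);           /* hypothetical helper */
for (int step = 1; step < q; step++) {
    MPI_Sendrecv_replace(Alocal, nb * nb, MPI_DOUBLE, left, 0, right, 0, grid_comm, &st);
    MPI_Sendrecv_replace(Blocal, nb * nb, MPI_DOUBLE, up,   0, down,  0, grid_comm, &st);
    block_multiply_add(Clocal, Alocal, Blocal, nb);
}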


Cannon’s Algorithm for 3 × 3 Matrices


(Figure: the blocks of A and B on the process grid — initially, after the initial alignment, after shift step 1, and after shift step 2.)


Performance Analysis

• In the initial alignment step, the maximum distance over which a block shifts is √𝑝 − 1.

– The circular shift operations in the row and column directions take time 𝑡𝑐𝑜𝑚𝑚 = 2(𝑡𝑠 + 𝑡𝑤 𝑛²/𝑝).

• Each of the √𝑝 single-step shifts in the compute-and-shift phase takes time 𝑡𝑠 + 𝑡𝑤 𝑛²/𝑝.

• Multiplying √𝑝 submatrices of size (𝑛/√𝑝) × (𝑛/√𝑝) takes time 𝑛³/𝑝.

• Parallel time: 𝑇𝑝 = 𝑛³/𝑝 + 2√𝑝 (𝑡𝑠 + 𝑡𝑤 𝑛²/𝑝) + 2(𝑡𝑠 + 𝑡𝑤 𝑛²/𝑝)


int MPI_Sendrecv_replace( void *buf, int count, MPI_Datatype datatype, int dest, int sendtag, int source, int recvtag, MPI_Comm comm, MPI_Status *status );

• Execute a blocking send and receive. The same buffer is used both for the send and for the receive, so that the message sent is replaced by the message received.

• buf[in/out]: initial address of send and receive buffer


#include "mpi.h" #include <stdio.h> int main(int argc, char *argv[]) { int myid, numprocs, left, right; int buffer[10]; MPI_Request request; MPI_Status status; MPI_Init(&argc,&argv); MPI_Comm_size(MPI_COMM_WORLD, &numprocs); MPI_Comm_rank(MPI_COMM_WORLD, &myid); right = (myid + 1) % numprocs; left = myid - 1; if (left < 0) left = numprocs - 1; MPI_Sendrecv_replace(buffer, 10, MPI_INT, left, 123, right, 123, MPI_COMM_WORLD, &status); MPI_Finalize(); return 0; }


DNS Algorithm

• The algorithm is named after Dekel, Nassimi, and Sahni.

• It is based on partitioning the intermediate data.

• It performs matrix multiplication in time 𝑂(log 𝑛) by using 𝑂(𝑛³/log 𝑛) processes.

The sequential algorithm for 𝐶 = 𝐴 × 𝐵:

𝐶𝑖𝑗 = 0 for all 𝑖, 𝑗
for (𝑖 = 0; 𝑖 < 𝑛; 𝑖++)
    for (𝑗 = 0; 𝑗 < 𝑛; 𝑗++)
        for (𝑘 = 0; 𝑘 < 𝑛; 𝑘++)
            𝐶𝑖𝑗 = 𝐶𝑖𝑗 + 𝐴𝑖𝑘 × 𝐵𝑘𝑗

Remark: The algorithm performs 𝑛³ scalar multiplications.


• Assume that 𝑛³ processes are available for multiplying two 𝑛 × 𝑛 matrices.

• Each of the 𝑛³ processes is then assigned a single scalar multiplication.

• The additions for all 𝐶𝑖𝑗 can be carried out simultaneously, in log 𝑛 steps each.

• Arrange the 𝑛³ processes in a three-dimensional 𝑛 × 𝑛 × 𝑛 logical array.

– The processes are labeled according to their location in the array, and the multiplication 𝐴𝑖𝑘𝐵𝑘𝑗 is assigned to process P[i,j,k] (0 ≤ 𝑖, 𝑗, 𝑘 < 𝑛).

– After each process performs a single multiplication, the contents of P[i,j,0], P[i,j,1], …, P[i,j,n-1] are added to obtain 𝐶𝑖𝑗.


(Figures: the three-dimensional 𝑛 × 𝑛 × 𝑛 logical process array, with the 𝑖 and 𝑗 axes labeled.)

• The vertical column of processes P[i,j,*] computes the dot product of row 𝐴𝑖∗ and column 𝐵∗𝑗


• The DNS algorithm has three main communication steps:

1. moving the rows of A and the columns of B to their respective places,

2. performing a one-to-all broadcast along the j axis for A and along the i axis for B,

3. performing an all-to-one reduction along the k axis.
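A rough MPI sketch of these three steps with one matrix element per process (so 𝑛³ processes). It assumes the standard placement in which 𝑎𝑖𝑗 starts on P[i,j,0] and moves to P[i,j,j] while 𝑏𝑖𝑗 moves to P[i,j,i]; all names are illustrative and error handling and data setup are omitted.

/* DNS communication on an n x n x n process grid; a and b hold a_ij and
 * b_ij on the plane k = 0, and c_ij ends up on P[i,j,0]. */
int dims3[3] = {n, n, n}, periods3[3] = {0, 0, 0}, c[3], rank;
int keep_i[3] = {1, 0, 0}, keep_j[3] = {0, 1, 0}, keep_k[3] = {0, 0, 1};
MPI_Comm cube, icomm, jcomm, kcomm;
double a = 0.0, b = 0.0, prod, cij;     /* set a, b on the k = 0 plane beforehand */

MPI_Cart_create(MPI_COMM_WORLD, 3, dims3, periods3, 1, &cube);
MPI_Comm_rank(cube, &rank);
MPI_Cart_coords(cube, rank, 3, c);      /* c[0] = i, c[1] = j, c[2] = k */
MPI_Cart_sub(cube, keep_i, &icomm);     /* varies i; rank inside equals i */
MPI_Cart_sub(cube, keep_j, &jcomm);     /* varies j; rank inside equals j */
MPI_Cart_sub(cube, keep_k, &kcomm);     /* varies k; rank inside equals k */

/* Step 1: move a_ij from P[i,j,0] to P[i,j,j] and b_ij to P[i,j,i] along the k axis. */
if (c[2] == 0 && c[1] != 0) MPI_Send(&a, 1, MPI_DOUBLE, c[1], 0, kcomm);
if (c[2] == c[1] && c[2] != 0) MPI_Recv(&a, 1, MPI_DOUBLE, 0, 0, kcomm, MPI_STATUS_IGNORE);
if (c[2] == 0 && c[0] != 0) MPI_Send(&b, 1, MPI_DOUBLE, c[0], 1, kcomm);
if (c[2] == c[0] && c[2] != 0) MPI_Recv(&b, 1, MPI_DOUBLE, 0, 1, kcomm, MPI_STATUS_IGNORE);

/* Step 2: one-to-all broadcast of a_{i,k} along the j axis (root j = k)
 * and of b_{k,j} along the i axis (root i = k). */
MPI_Bcast(&a, 1, MPI_DOUBLE, c[2], jcomm);
MPI_Bcast(&b, 1, MPI_DOUBLE, c[2], icomm);

/* Each process does one scalar multiplication.
 * Step 3: all-to-one reduction along the k axis sums the products into P[i,j,0]. */
prod = a * b;
MPI_Reduce(&prod, &cij, 1, MPI_DOUBLE, MPI_SUM, 0, kcomm);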
