Sparse Matrix-Matrix Multiplication on the GPU (GTC 2012)
Julien Demouth, NVIDIA


Page 1:

Sparse Matrix-Matrix Multiplication on the GPU
Julien Demouth, NVIDIA

Page 2:

Introduction: Problem

Two sparse matrices A and B, compute:

C = AB

Sparse matrix: Many zeroes

Only non-zero elements are stored in memory

[Figure: example matrix with non-zero and zero entries marked]

Page 3:

Introduction: Approach

foreach rowA : A.rows (in parallel):
    hashtbl = {}
    foreach (colA, valA) : rowA.nonZeroes:
        rowB = B.rows[colA]
        foreach (colB, valB) : rowB.nonZeroes:
            hashtbl[colB] += valA * valB
    store hashtbl

Two main steps:

Find the # of non-zeroes per row of C

Compute the values of the non-zeroes of C
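As a hedged illustration of the pseudo-code, here is the same per-row algorithm as a compilable C++ function. The Csr container and the spgemmRow name are mine, not the talk's; the CSR format itself is detailed on a later slide:

#include <unordered_map>
#include <vector>

struct Csr {
    std::vector<int>    rows; // row offsets, nRows + 1 entries
    std::vector<int>    cols; // column index of each non-zero
    std::vector<double> vals; // value of each non-zero
};

// Build one row of C = A*B with a per-row hash table.
std::unordered_map<int, double> spgemmRow( const Csr &A, const Csr &B, int rowA )
{
    std::unordered_map<int, double> hashtbl;
    for( int a = A.rows[rowA]; a < A.rows[rowA + 1]; ++a )
    {
        int    colA = A.cols[a];
        double valA = A.vals[a];
        for( int b = B.rows[colA]; b < B.rows[colA + 1]; ++b )
            hashtbl[B.cols[b]] += valA * B.vals[b]; // accumulate partial products
    }
    return hashtbl;
}

The size of the returned table gives the row's non-zero count (step 1), and its contents give the row's column indices and values (step 2).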

[Figure: sparse matrices A, B, and C = AB]

Page 4:

Introduction: Implementation

CPU implementation: each CPU thread (thread 0, 1, …, k) works on its own rows of A; a thread's hash table lives in host memory (and cache)

GPU implementation: each warp (warp 0, 1, …, n) works on its own rows of A; a warp's hash table lives in shared memory, with overflow in global memory

[Figure: CPU threads vs. GPU warps, each owning a private hash table]

Page 5:

IMPLEMENTATION

Page 6:

Sparse Matrix Representation: CSR

Do not store 0s

Store column indices and values. Compress row indices

[Figure: a sparse matrix laid out as CSR arrays: values, column indices, and compressed row offsets in place of per-element row indices]
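To make the compression concrete, here is an illustrative example; the matrix and its numbers are mine, not the slide's:

// CSR layout of the 3x3 matrix
//     [ 10  0 20 ]
//     [  0 30  0 ]
//     [ 40 50 60 ]
#include <vector>

const std::vector<double> vals = { 10, 20, 30, 40, 50, 60 }; // non-zeroes, row by row
const std::vector<int>    cols = {  0,  2,  1,  0,  1,  2 }; // column index of each value
const std::vector<int>    rows = {  0,  2,  3,  6 };         // row i spans [rows[i], rows[i+1])

// The uncompressed row indices would be { 0, 0, 1, 2, 2, 2 }; CSR replaces
// them with the nRows + 1 offsets above.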

Page 7:

GPU Algorithm

Count the number of non-zeroes in each row of C (1 kernel)

— Conceptually, a warp computes the result for one row; when the # of rows exceeds the # of warps, a warp processes several rows

— Store the counts in C.rows

Exclusive scan on C.rows to compute offsets (1 Thrust call, sketched after this list)

Compute column indices and values (1 kernel)

— Conceptually, a warp computes the (column, value) pairs for one row; when the # of rows exceeds the # of warps, a warp processes several rows

— Store the column indices and values in C.cols and C.vals
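The scan step maps directly onto Thrust. A minimal sketch, assuming C.rows is a device buffer of per-row counts with one extra slot so the total lands at the end; the computeRowOffsets wrapper is illustrative, not from the talk:

#include <thrust/device_vector.h>
#include <thrust/scan.h>

// Turn per-row non-zero counts into CSR row offsets, in place.
void computeRowOffsets( thrust::device_vector<int> &C_rows )
{
    // e.g. counts [2, 1, 3, 0] become offsets [0, 2, 3, 6]: each row's
    // write position into C.cols and C.vals, with the total at the end.
    thrust::exclusive_scan( C_rows.begin(), C_rows.end(), C_rows.begin() );
}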

Page 8:

Pseudo-code: Load (colA, valA) Pairs

One warp per row of A (coalesced loads):

Each thread keeps its pair (colA, valA) in registers

[Figure: the warp's lanes read consecutive (colA, valA) pairs from A.cols and A.vals]

warpId = threadIdx.x / WARP_SIZE;
laneId = threadIdx.x % WARP_SIZE; // or: asm( "mov.u32 %0, %%laneid;" : "=r"( laneId ) );

aColIt  = A.rows[warpId    ] + laneId;
aColEnd = A.rows[warpId + 1];

// Is the (colA, valA) pair valid? Invalid pairs are marked with (-1, 0.0).
colA = aColIt < aColEnd ? A.cols[aColIt] : -1;
valA = aColIt < aColEnd ? A.vals[aColIt] : 0.0;

Page 9:

Pseudo-code: Vote to Load Rows of B

Each thread, in turn, will ask the warp to load its row of B:

Note: Warp synchronization is a HW concept. Use it with care.

__shared__ volatile int    sColA[nWarps];
__shared__ volatile double sValA[nWarps];

// end = the number of valid (colA, valA) pairs in the warp
for( k = 0, end = __popc( __ballot( aColIt < aColEnd ) ) ; k < end ; ++k )
{
    // the k-th thread pushes its (colA, valA) pair
    if( laneId == k ) { sColA[warpId] = colA; sValA[warpId] = valA; }

    bColIt  = B.rows[sColA[warpId]    ]; // sColA is volatile and the warp's threads
    bColEnd = B.rows[sColA[warpId] + 1]; // are implicitly synchronized

    // loop as long as there are unread (colB, valB) pairs in row colA of B
    for( bColIt += laneId ; __any( bColIt < bColEnd ) ; bColIt += 32 ) { }
}

Page 10:

Pseudo-code: Load (colB, valB) Pairs

Inside the loop, each thread loads its (colB, valB) pair and inserts it into the hash table

Insertions are performed in parallel

Important note: all threads of the warp hold different colB values, since each lane reads a distinct element of the row of B

colB = bColIt < bColEnd ? B.cols[bColIt] : -1;
valB = bColIt < bColEnd ? B.vals[bColIt] : 0.0;

hashtbl.insert( colB, sValA[warpId] * valB );

Page 11:

Hash Table

The table is stored partly in shared memory and partly in global memory

Insertion tries four different hash functions in order, first in shared memory, then in global memory

If insertion still fails, we re-run the kernel using more global memory

__shared__ volatile unsigned sKeys[nWarps][sMemSize]; // in our implementation, sMemSize == 256
__shared__ volatile double   sVals[nWarps][sMemSize];

foreach hashFunction:
    if( __all( inserted ) )
        break;
    tryToInsertInSharedMemory( colB, value );

foreach hashFunction:
    if( __all( inserted ) )
        break;
    tryToInsertInGlobalMemory( colB, value );

Page 12:

Hash Table Insertion

Each thread computes its hash value

If the slot contains the same key, update the value:

sVals[warpId][hash] += value;

If the slot is empty, try to insert. Two lanes (e.g. colB=5 and colB=8) may hash to the same slot; the write conflict has only one winner:

sKeys[warpId][hash] = colB;
if( sKeys[warpId][hash] == colB ) // Winner?
    sVals[warpId][hash] = value;

If the slot is full, retry with the next hash function (next iteration)
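The slides name tryToInsertInSharedMemory without showing its body, so the following is only a sketch under stated assumptions: an EMPTY sentinel marks free slots (matching the colB == -1 marker of invalid lanes), the table size is a power of two, and a multiplicative hash stands in for one of the four hash functions:

static const unsigned EMPTY = 0xffffffffu; // assumed sentinel for free slots

__device__ bool tryToInsertInSharedMemory( volatile unsigned *keys,
                                           volatile double   *vals,
                                           unsigned size, unsigned colB, double value )
{
    if( colB == EMPTY )              // invalid lane: nothing to insert
        return true;
    unsigned hash = ( colB * 2654435761u ) & ( size - 1 ); // one possible hash function
    if( keys[hash] == colB )         // key already present: accumulate
    {
        vals[hash] += value;
        return true;
    }
    if( keys[hash] == EMPTY )        // empty slot: race to claim it
    {
        keys[hash] = colB;           // several lanes may write this slot...
        if( keys[hash] == colB )     // ...re-read to see whether this lane won
        {
            vals[hash] = value;
            return true;
        }
    }
    return false;                    // slot holds another key: next hash function
}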

Page 13:

Hash Table Size

Hash tables in global memory contain 2^k slots

— We usually start with 2048 slots

We use inline PTX to compute hash % kGlobalSize

— Honestly, it’s not critical for performance here, but it’s fun

// Given kGlobalSize == 2^k, bfind finds k (the position of the most significant bit).
int nBits;
asm( "bfind.u32 %0, %1;" : "=r"( nBits ) : "r"( kGlobalSize ) );

// Compute hash % (1 << nBits) == hash & ((1 << nBits) - 1).
unsigned dst;
asm( "bfe.u32 %0, %1, 0, %2;" : "=r"( dst ) : "r"( hash ), "r"( nBits ) );

Page 14:

Speed Comparison with CUSP

[Figure: speedup over CUSP, from 0x to 14x, for C = AA on a Tesla M2090 (fp64); bars for the default setup, tuned GMEM size, and tuned load width]

Page 15:

IMPROVEMENTS

Page 16:

Memory Consumption

Global memory is allocated for each warp (to store hash tables)

C = AA, fp64 code on a Tesla M2090, A: hood.mtx

[Figure: left, time in ms (400 to 700) vs. number of warps (256 to 8192); right, memory in MB (0 to 250) vs. number of warps]

Page 17:

Load Balancing

Objective: Performance with lower memory requirements

Initial approach: static scheduling

for( ; __syncthreads_or( aRowId < A_nRows ) ; aRowId += nWarps )

Simple load-balancing:

for( ; __syncthreads_or( aRowId < A_nRows ) ; aRowId += getWork( workQueue ) )

getWork is implemented using a simple atomic operation:

__shared__ unsigned work;
if( threadIdx.x == 0 )
    work = atomicAdd( workQueue, nWarpsPerBlock );
__syncthreads( );
return work + warpId;
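A hedged reconstruction of getWork as a complete device function; the slide shows the body but not the signature, so the parameter type and the warpId / nWarpsPerBlock derivations below are assumptions:

__device__ unsigned getWork( unsigned *workQueue )
{
    __shared__ unsigned work;
    const unsigned warpId         = threadIdx.x / 32;
    const unsigned nWarpsPerBlock = blockDim.x  / 32;
    if( threadIdx.x == 0 )
        work = atomicAdd( workQueue, nWarpsPerBlock ); // grab one batch of rows per block
    __syncthreads( );                                  // broadcast work to every warp
    return work + warpId;                              // each warp gets a distinct row id
}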

Page 18:

Load Balancing: Results

Lower memory usage

[Figure: time in ms (0 to 600) vs. number of warps (256 to 8192) for static vs. dynamic scheduling; C = AA, fp64 code on a Tesla M2090, A: hood.mtx]

Page 19:

Load Several Rows of B

Modify the algorithm to load several rows of B

Hash tables have to use atomics

Fp64 atomicAdd is implemented using Compare-and-Swap

— See the CUDA programming guide (and the sketch after the code below)

sKeys[warpId][hash] = colB;
if( sKeys[warpId][hash] == colB ) // Winner?
    atomicAdd( &sVals[warpId][hash], value );

Page 20:

Load Several Rows of B: Results

[Figure: time in ms vs. the number of rows of B loaded at once (16, 8, 4, 2, 1, and 1 without atomics); left: atmosmodd, 50 to 230 ms; right: mc2depi, 30 to 90 ms]

Page 21:

Summary

We have implemented a CPU-like algorithm on the GPU

— It gives good results compared to CUSP

Ideas

— Use hash tables stored in shared and global memories

— Map a CPU thread to a warp of GPU threads

— Use simple load-balancing to reduce memory consumption

Future work

— Try better load-balancing schemes

— Adapt the algorithm to multiple GPUs, then MPI nodes