CUDA Threads
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009, ECE498AL, University of Illinois, Urbana-Champaign


Page 1: CUDA Threads

Page 2: CUDA Thread Block

• All threads in a block execute the same kernel program (SPMD)
• Programmer declares the block:
– Block size: 1 to 512 concurrent threads
– Block shape: 1D, 2D, or 3D
– Block dimensions in threads
• Threads have thread ID numbers within the block
– The thread program uses the thread ID to select work and address shared data
• Threads in the same block share data and synchronize while doing their share of the work
• Threads in different blocks cannot cooperate
– Each block can execute in any order relative to other blocks!

[Figure: a thread block with thread IDs 0, 1, 2, 3, …, m, each running the same thread program. Courtesy: John Nickolls, NVIDIA]

Page 3: CUDA Thread Organization Implementation

• blockIdx & threadIdx: unique coordinates that distinguish threads and identify, for each thread, the appropriate portion of the data to process
– Assigned to threads by the CUDA runtime system
– Appear as built-in variables that are initialized by the runtime system and accessed within kernel functions
– References to the blockIdx and threadIdx variables return the values that form the coordinates of the thread
• gridDim & blockDim: built-in variables that provide the dimensions of the grid and the block


Page 4: Example Thread Organization

• M = 8 threads/block (1D organization)
– threadIdx.x = 0, 1, 2, …, 7
• N thread blocks (1D organization)
– blockIdx.x = 0, 1, …, N-1
• 8*N threads/grid: blockDim.x = 8; gridDim.x = N
• In the code of the figure:
– threadID = blockIdx.x*blockDim.x + threadIdx.x
– For Thread 3 of Block 5, threadID = 5*8 + 3 = 43

[Figure: Thread Block 0, Thread Block 1, …, Thread Block N-1, each holding threads 0-7 (threadID shown per block), all executing the same body:
float x = input[threadID]; float y = func(x); output[threadID] = y;]
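As a concrete illustration of the pattern in the figure, here is a minimal, self-contained kernel sketch; the names func, input, output, and ApplyFunc are hypothetical stand-ins, and the bounds check is added so the grid need not divide the data size exactly.

__device__ float func(float x) { return 2.0f * x; }   // stand-in computation

__global__ void ApplyFunc(float* input, float* output, int n)
{
    int threadID = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (threadID < n) {                                    // guard for partial blocks
        float x = input[threadID];
        float y = func(x);
        output[threadID] = y;
    }
}

// Launched with M = 8 threads per block and N blocks, as in the example:
// ApplyFunc<<<N, 8>>>(d_input, d_output, 8 * N);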


Page 5: General Thread Organization

• Grid: 2D array of blocks; Block: 3D array of threads
• The execution configuration determines the exact organization
• In the example of the previous slide, if N = 128 and M = 32, the following execution configuration is used:

dim3 dimGrid(128, 1, 1);   // dimGrid.x, dimGrid.y take values 1 to 65,535
dim3 dimBlock(32, 1, 1);   // size of block limited to 512 threads
KernelFunction<<<dimGrid, dimBlock>>>(...);

• blockIdx.x ranges between 0 and dimGrid.x-1
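When the data size is not an exact multiple of the block size, a common idiom (not on the slide; added here as a sketch with a hypothetical problem size n) is to round the grid size up with ceiling division and guard inside the kernel:

int n = 4100;                       // hypothetical problem size
int threadsPerBlock = 32;
int blocksPerGrid = (n + threadsPerBlock - 1) / threadsPerBlock;  // ceiling division
dim3 dimGrid(blocksPerGrid, 1, 1);
dim3 dimBlock(threadsPerBlock, 1, 1);
// KernelFunction<<<dimGrid, dimBlock>>>(...);  // kernel must check its global index against n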


Page 6: Another Example

[Figure 3.2: An Example of CUDA Thread Organization — the host launches Kernel 1 on Grid 1 (2×2 blocks: Block (0,0), (1,0), (0,1), (1,1)) and Kernel 2 on Grid 2 on the device; Block (1,1) is expanded to show its 4×2×2 = 16 threads, Thread (0,0,0) through Thread (3,1,1). Courtesy: NVIDIA]

dim3 dimGrid(2, 2, 1);
dim3 dimBlock(4, 2, 2);
KernelFunction<<<dimGrid, dimBlock>>>(...);

Block (1,0) has blockIdx.x = 1 and blockIdx.y = 0.
Thread (2,1,0) has threadIdx.x = 2, threadIdx.y = 1, threadIdx.z = 0.
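To make these coordinates concrete, here is a sketch (the kernel name ShowCoords and the output array are hypothetical) that flattens the 3D thread coordinates of this configuration into one global index:

__global__ void ShowCoords(int* globalId)
{
    // Threads per block for dimBlock(4, 2, 2)
    int threadsPerBlock = blockDim.x * blockDim.y * blockDim.z;   // 4*2*2 = 16
    // Linear thread index within the block (x fastest, then y, then z)
    int t = threadIdx.z * blockDim.y * blockDim.x
          + threadIdx.y * blockDim.x
          + threadIdx.x;
    // Linear block index within the grid for dimGrid(2, 2, 1)
    int b = blockIdx.y * gridDim.x + blockIdx.x;
    globalId[b * threadsPerBlock + t] = b * threadsPerBlock + t;
}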

Page 7: Matrix Multiplication Using Multiple Blocks

[Figure: matrices Md, Nd, and Pd, each WIDTH × WIDTH; Pd is partitioned into TILE_WIDTH × TILE_WIDTH tiles, with one tile Pdsub highlighted. bx = blockIdx.x and by = blockIdx.y select the tile; tx = threadIdx.x and ty = threadIdx.y (each 0 … TILE_WIDTH-1) select the thread within the tile.]

• Break up Pd into tiles
• Each thread block calculates one tile; the block size equals the tile size
• Each thread calculates one Pd element, identified using:
– blockIdx.x and blockIdx.y to identify the tile
– threadIdx.x and threadIdx.y to identify the thread within the tile

Page 8: Revised Matrix Multiplication Using Multiple Blocks (cont.)

[Figure: the same tiling diagram as on the previous slide, with bx, by, tx, ty labeling the block and thread coordinates.]

• The y index of the Pd element computed by a thread is y = by*TILE_WIDTH + ty
• The x index of the Pd element computed by a thread is x = bx*TILE_WIDTH + tx
• Thread (tx,ty) in block (bx,by) uses row y of Md and column x of Nd to update element Pd[y*Width+x]

Page 9: A Small Example: P(4,4)

[Figure: a 4×4 matrix P with TILE_WIDTH = 2, partitioned into four 2×2 tiles — Block (0,0) covers P0,0 P1,0 P0,1 P1,1; Block (1,0) covers P2,0 P3,0 P2,1 P3,1; Block (0,1) covers P0,2 P1,2 P0,3 P1,3; Block (1,1) covers P2,2 P3,2 P2,3 P3,3.]

• Block (0,0) has blockIdx.x = 0 and blockIdx.y = 0
• Block (1,0) has blockIdx.x = 1 and blockIdx.y = 0
• Block (0,1) has blockIdx.x = 0 and blockIdx.y = 1
• Block (1,1) has blockIdx.x = 1 and blockIdx.y = 1

Page 10: A Small Example: Multiplication

[Figure: the small multiplication example — elements Md0,0 … Md3,1 of Md and Nd0,0 … Nd1,3 of Nd combined into the 4×4 result Pd0,0 … Pd3,3.]

• Thread (0,0) of Block (0,0) calculates Pd0,0
• Thread (0,0) of Block (0,1) calculates Pd0,2
• Thread (1,1) of Block (0,0) calculates Pd1,1
• Thread (1,1) of Block (0,1) calculates Pd1,3
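These mappings follow directly from x = bx*TILE_WIDTH + tx and y = by*TILE_WIDTH + ty; a small host-side check, added here as a sketch, reproduces the list above:

#include <stdio.h>
#define TILE_WIDTH 2

int main(void)
{
    // (bx, by, tx, ty) for the four example threads above
    int cases[4][4] = { {0,0,0,0}, {0,1,0,0}, {0,0,1,1}, {0,1,1,1} };
    for (int i = 0; i < 4; ++i) {
        int bx = cases[i][0], by = cases[i][1];
        int tx = cases[i][2], ty = cases[i][3];
        int x = bx * TILE_WIDTH + tx;   // column of Pd
        int y = by * TILE_WIDTH + ty;   // row of Pd
        printf("block(%d,%d) thread(%d,%d) -> Pd%d,%d\n", bx, by, tx, ty, x, y);
    }
    return 0;   // prints Pd0,0  Pd0,2  Pd1,1  Pd1,3
}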

Page 11: Revised Host Code for Launching the Revised Kernel

// Setup the execution configuration
dim3 dimGrid(Width/TILE_WIDTH, Width/TILE_WIDTH, 1);
dim3 dimBlock(TILE_WIDTH, TILE_WIDTH, 1);

// Launch the device computation threads
MatrixMulKernel<<<dimGrid, dimBlock>>>(Md, Nd, Pd, Width);

• The kernel can handle arrays of dimensions up to 1,048,560 × 1,048,560 (16 × 65,535 = 1,048,560)
• using 65,535 × 65,535 = 4,294,836,225 blocks, each with 256 threads (TILE_WIDTH = 16)
• Total number of parallel threads = 1,099,478,073,600 > 1 Tera (10^12) threads


Page 12: Revised Matrix Multiplication Kernel Using Multiple Blocks

__global__ void MatrixMulKernel(float* Md, float* Nd, float* Pd, int Width)
{
    // Calculate the row index of the Pd element and of Md
    int Row = blockIdx.y * TILE_WIDTH + threadIdx.y;
    // Calculate the column index of the Pd element and of Nd
    int Col = blockIdx.x * TILE_WIDTH + threadIdx.x;

    float Pvalue = 0;
    // Each thread computes one element of the block sub-matrix
    for (int k = 0; k < Width; ++k)
        Pvalue += Md[Row*Width + k] * Nd[k*Width + Col];
    Pd[Row*Width + Col] = Pvalue;
}
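For completeness, a minimal host-side driver is sketched below, assuming square Width × Width matrices, a TILE_WIDTH that divides Width, and host arrays M, N, P already allocated and initialized; it follows the standard cudaMalloc/cudaMemcpy idiom rather than anything specific to these slides.

void MatrixMulOnDevice(float* M, float* N, float* P, int Width)
{
    int size = Width * Width * sizeof(float);
    float *Md, *Nd, *Pd;

    // Allocate device memory and copy the inputs over
    cudaMalloc((void**)&Md, size);
    cudaMemcpy(Md, M, size, cudaMemcpyHostToDevice);
    cudaMalloc((void**)&Nd, size);
    cudaMemcpy(Nd, N, size, cudaMemcpyHostToDevice);
    cudaMalloc((void**)&Pd, size);

    // Setup the execution configuration and launch
    dim3 dimGrid(Width/TILE_WIDTH, Width/TILE_WIDTH, 1);
    dim3 dimBlock(TILE_WIDTH, TILE_WIDTH, 1);
    MatrixMulKernel<<<dimGrid, dimBlock>>>(Md, Nd, Pd, Width);

    // Copy the result back and free device memory
    cudaMemcpy(P, Pd, size, cudaMemcpyDeviceToHost);
    cudaFree(Md); cudaFree(Nd); cudaFree(Pd);
}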

Page 13: Transparent Scalability

• The CUDA runtime system can execute blocks in any order
– A kernel scales across any number of execution resources
• Transparent scalability: the ability to execute the same application on hardware with different numbers of execution resources

[Figure: the same 8-block kernel grid (Block 0 … Block 7) executed two ways — on a device with few resources, two blocks at a time over four steps; on a device with more resources, four blocks at a time over two steps. Each block can execute in any order relative to other blocks.]

Page 14: Thread Block Assignment to Streaming Multiprocessors

• Upon kernel launch, threads are assigned to Streaming Multiprocessors (SMs) at block granularity
– Up to 8 blocks assigned to each SM, as resources allow
– An SM in GT200 can take up to 1024 threads
• Could be 256 threads/block * 4 blocks
• Or 128 threads/block * 8 blocks, etc.

[Figure: two SMs (SM 0, SM 1), each with an MT issue unit, SPs, and shared memory, with thread blocks (threads t0 t1 t2 … tm) assigned to each.]

Page 15: Thread Block Assignment to Streaming Multiprocessors (cont.)

[Figure: the same two-SM diagram as on the previous slide.]

• GT200 has 30 SMs
• Up to 240 (30*8) blocks execute simultaneously
• Up to 30,720 (30*1024) concurrent threads can be resident in SMs for execution
• The SM maintains thread/block IDs
• The SM manages/schedules thread execution

Page 16: GT200 Example: Thread Scheduling

• Each block is divided into 32-thread warps for scheduling purposes
– The size of a warp is implementation dependent
– Warps are not part of the CUDA programming model
– The warp is the unit of thread scheduling in an SM
• If 3 blocks are assigned to an SM and each block has 256 threads, how many warps are there in the SM?
– Each block is divided into 256/32 = 8 warps
– There are 8 * 3 = 24 warps in the SM

[Figure: a Streaming Multiprocessor (instruction L1 cache, instruction fetch/dispatch, SPs, SFUs, shared memory) holding Block 1 and Block 2 warps, each warp spanning threads t0 t1 t2 … t31.]

Page 17: GT200 Example: Thread Scheduling (cont.)

The SM implements zero-overhead warp scheduling:
• At any time, only one of the warps is executed by the SM
• Warps whose next instruction has its operands ready for consumption are eligible for execution
• Eligible warps are selected for execution on a prioritized scheduling policy
• All threads in a warp execute the same instruction when selected (SIMD execution)

[Figure: a warp-scheduling timeline (TB = Thread Block, W = Warp) — TB1 W1 executes until it stalls, the scheduler switches to TB2 W1, then to TB3 W1 and TB3 W2 as earlier warps stall, and returns to TB1 W1 once its operands are ready.]

Page 18: GT200 Block Granularity Considerations

For matrix multiplication using multiple blocks, should I use 8×8, 16×16, or 32×32 blocks? (The arithmetic is sketched in code after this list.)
– For 8×8 blocks, we have 64 threads per block. Since each SM can take up to 1024 threads, that would be 16 blocks (1024/64). However, each SM can only take up to 8 blocks, so only 512 (64*8) threads will go into each SM!
• SM execution resources are underutilized; fewer warps to schedule around long-latency operations
– For 16×16 blocks, we have 256 threads per block. Since each SM can take up to 1024 threads, it can take up to 4 blocks (1024/256)
• Full thread capacity in each SM
• Maximal number of warps for scheduling around long-latency operations (1024/32 = 32 warps)
– For 32×32 blocks, we have 1024 threads per block. Not even one block fits into an SM! (512 threads/block limitation)
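A small host-side helper, added here as a sketch using the GT200 limits quoted on this slide (other GPUs differ), computes how many threads actually become resident per SM for a given block size:

#include <stdio.h>

// GT200 limits quoted on this slide
#define MAX_THREADS_PER_SM    1024
#define MAX_BLOCKS_PER_SM     8
#define MAX_THREADS_PER_BLOCK 512

int residentThreadsPerSM(int threadsPerBlock)
{
    if (threadsPerBlock > MAX_THREADS_PER_BLOCK) return 0;       // block does not fit at all
    int blocks = MAX_THREADS_PER_SM / threadsPerBlock;           // thread-count limit
    if (blocks > MAX_BLOCKS_PER_SM) blocks = MAX_BLOCKS_PER_SM;  // block-count limit
    return blocks * threadsPerBlock;
}

int main(void)
{
    printf("8x8:   %d threads/SM\n", residentThreadsPerSM(8 * 8));    // 512
    printf("16x16: %d threads/SM\n", residentThreadsPerSM(16 * 16));  // 1024
    printf("32x32: %d threads/SM\n", residentThreadsPerSM(32 * 32));  // 0
    return 0;
}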

Page 19: Some Additional API Features

Page 20: Application Programming Interface

• The API is an extension to the C programming language
• It consists of:
– Language extensions
• To target portions of the code for execution on the device
– A runtime library split into:
1. A host component to control and access one or more devices from the host
2. A device component providing device-specific functions
3. A common component providing built-in vector types and a subset of the C runtime library in both host and device code

Page 21: Language Extensions: Built-in Variables

• dim3 gridDim;
– Dimensions of the grid in blocks (gridDim.z unused)
• dim3 blockDim;
– Dimensions of the block in threads
• dim3 blockIdx;
– Block index within the grid
• dim3 threadIdx;
– Thread index within the block
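A small kernel sketch (hypothetical name and array) showing all four built-ins together, computing a global 2D coordinate and a linear offset; it assumes the grid exactly covers the array, so no bounds check is shown:

__global__ void FillCoords(int* out)
{
    // Global 2D coordinates of this thread
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    // Total number of threads along x across the whole grid
    int pitch = gridDim.x * blockDim.x;
    out[y * pitch + x] = y * pitch + x;   // linear offset of (x, y)
}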

Page 22: Common Runtime Component: Mathematical Functions

• pow, sqrt, cbrt, hypot
• exp, exp2, expm1
• log, log2, log10, log1p
• sin, cos, tan, asin, acos, atan, atan2
• sinh, cosh, tanh, asinh, acosh, atanh
• ceil, floor, trunc, round
• etc.
– When executed on the host, a given function uses the C runtime implementation if available
– These functions are only supported for scalar types, not vector types

Page 23: Device Runtime Component: Mathematical Functions

• Some mathematical functions (e.g. sinf(x)) have a less accurate but faster device-only version (e.g. __sinf(x))
– __powf
– __logf, __log2f, __log10f
– __expf
– __sinf, __cosf, __tanf
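A brief sketch of the trade-off (the kernel and array names are hypothetical): the same expression computed with the accurate single-precision device functions and with the fast intrinsics.

__global__ void Compare(const float* x, float* accurate, float* fast, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        accurate[i] = sinf(x[i]) * expf(x[i]);      // full-precision device functions
        fast[i]     = __sinf(x[i]) * __expf(x[i]);  // faster, less accurate intrinsics
    }
}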

Page 24: Host Runtime Component

• Provides functions to deal with:

– Device management (including multi-device systems)
– Memory management
– Error handling
• Initializes the first time a runtime function is called
• A host thread can invoke device code on only one device
– Multiple host threads are required to run on multiple devices
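As a sketch of the one-device-per-host-thread rule (the thread-spawning mechanism is omitted; cudaGetDeviceCount and cudaSetDevice are the relevant runtime calls):

// Each host thread selects its own device before any allocation or launch.
void perHostThread(int device)
{
    cudaSetDevice(device);        // bind this host thread to one device
    // ... allocate memory and launch kernels on that device ...
}

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);   // number of CUDA devices in the system
    // Spawn one host thread per device, each calling perHostThread(i)
    // for i = 0 .. count-1 (threading mechanism omitted).
    return 0;
}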

Page 25: Device Runtime Component: Synchronization Function

• void __syncthreads();
• Synchronizes all threads in a block
• Once all threads have reached this point, execution resumes normally
• Used to avoid RAW / WAR / WAW hazards when accessing shared or global memory
• Allowed in conditional constructs only if the conditional is uniform across the entire thread block
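A minimal sketch of the typical use (the kernel is hypothetical: it reverses each BLOCK_SIZE-element segment through shared memory); the barrier separates every thread's write from every thread's read, avoiding the RAW hazard mentioned above:

#define BLOCK_SIZE 256

__global__ void ReverseBlock(float* data)
{
    __shared__ float tile[BLOCK_SIZE];
    int t = threadIdx.x;
    int base = blockIdx.x * BLOCK_SIZE;

    tile[t] = data[base + t];                   // every thread writes one element
    __syncthreads();                            // wait until the whole tile is loaded
    data[base + t] = tile[BLOCK_SIZE - 1 - t];  // now safe to read another thread's element
}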