Code GPU with CUDA - Optimizing Memory and Control Flow
CODE GPU WITH CUDA: OPTIMIZING MEMORY & CONTROL FLOW
Created by Marina Kolpakova (cuda.geek) for Itseez
OUTLINE
Memory types
Memory caching
Types of memory access patterns
Textures
Control flow performance limiters
List of common advice
MEMORY OPTIMIZATION
MEMORY TYPES

Memory    Scope        Location  Cached    Access  Lifetime
Register  Thread       On-chip   N/A       R/W     Thread
Local     Thread       Off-chip  L1/L2     R/W     Thread
Shared    Block        On-chip   N/A       R/W     Block
Global    Grid + Host  Off-chip  L2        R/W     App
Constant  Grid + Host  Off-chip  L1,L2,L3  R       App
Texture   Grid + Host  Off-chip  L1,L2     R       App
GPU CACHES
GPU caches are not intended for the same use as a CPU's.
Not aimed at temporal reuse. Smaller than CPU caches (especially per thread, e.g. Fermi: 48 KB L1, 1536 threads in flight, cache / thread = 1 x 128-byte line).
Aimed at spatial reuse. Intended to smooth some access patterns and to help with spilled registers and the stack.
Do not tile relying on block size: lines are likely to be evicted within the next few accesses.
Use smem for tiling: same latency, fully programmable.
L2 is aimed at speeding up atomics and gmem writes.
GMEM
Learn your access pattern before thinking about latency hiding, and try not to thrash the memory bus.
Four general categories of inefficient memory access patterns:
Misaligned (offset) warp addresses
Strided access between threads within a warp
Thread-affine (each thread in a warp accesses a large contiguous region)
Irregular (scattered) addresses
Always be aware of the bytes you actually need versus the bytes you transfer over the bus.
GMEM: MISALIGNED
Add extra padding for data to force alignment (see the sketch below)
Use read-only texture L1
A combination of the above
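For illustration, a minimal sketch of the padding approach, assuming a row-major image; cudaMallocPitch pads each row so that every row starts on an aligned boundary (the kernel name and sizes are made up for this example):

__global__ void scaleRows(float* img, size_t pitch, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height)
    {
        // pitch is in bytes, so step between rows via a char pointer
        float* row = (float*)((char*)img + y * pitch);
        row[x] *= 2.0f;   // row starts are aligned thanks to the pitched allocation
    }
}

void runScaleRows(int width, int height)
{
    float* d_img = 0;
    size_t pitch = 0;
    // cudaMallocPitch rounds each row up to an alignment-friendly pitch
    cudaMallocPitch((void**)&d_img, &pitch, width * sizeof(float), height);

    dim3 block(32, 8);
    dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
    scaleRows<<<grid, block>>>(d_img, pitch, width, height);

    cudaFree(d_img);
}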
GMEM: STRIDED
If the pattern is regular, try to change the data layout: AoS -> SoA (see the sketch below)
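A minimal sketch of the layout change (struct and kernel names are illustrative): with AoS, consecutive lanes read fields 12 bytes apart; with SoA they read consecutive floats:

struct PointAoS { float x, y, z; };   // array of structures: 12-byte stride per lane

struct PointsSoA                      // structure of arrays: unit stride per lane
{
    float* x;
    float* y;
    float* z;
};

__global__ void scaleAoS(PointAoS* pts, int n, float s)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) pts[i].x *= s;         // strided, poorly coalesced
}

__global__ void scaleSoA(PointsSoA pts, int n, float s)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) pts.x[i] *= s;         // fully coalesced
}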
GMEM: STRIDED
Use smem to correct the access pattern (see the tile sketch below):
1. load gmem -> smem with best coalescing
2. synchronize
3. use
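As one possible instance of this recipe, a tile-transpose sketch (tile size and names are assumptions, not from the slides):

#define TILE 32

__global__ void transposeTile(const float* in, float* out, int width, int height)
{
    __shared__ float tile[TILE][TILE + 1];        // +1 column avoids bank conflicts

    int x = blockIdx.x * TILE + threadIdx.x;
    int y = blockIdx.y * TILE + threadIdx.y;

    // 1. load gmem -> smem with coalesced, row-wise accesses
    if (x < width && y < height)
        tile[threadIdx.y][threadIdx.x] = in[y * width + x];

    // 2. synchronize so the whole tile is visible to the block
    __syncthreads();

    // 3. use: write the transposed tile with coalesced stores as well
    int tx = blockIdx.y * TILE + threadIdx.x;
    int ty = blockIdx.x * TILE + threadIdx.y;
    if (tx < height && ty < width)
        out[ty * height + tx] = tile[threadIdx.x][threadIdx.y];
}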
GMEM: STRIDED
Use warp shuffle to permute elements within a warp (see the sketch below):
1. load the elements needed by the warp with coalesced accesses
2. permute
3. use
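A sketch of the idea with a simple rotate-by-one permutation (Kepler-era __shfl; newer CUDA versions use __shfl_sync instead):

__global__ void rotateWithinWarp(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int lane = threadIdx.x & 31;

    // 1. coalesced load: each lane reads its own element
    float v = (i < n) ? in[i] : 0.0f;

    // 2. permute inside the warp without touching memory
    float rotated = __shfl(v, (lane + 1) & 31);

    // 3. use the permuted value
    if (i < n) out[i] = rotated;
}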
GMEM: STRIDED
Use a proper caching strategy (see the sketch below):
cg – cache global
ldg – cache in texture L1
cs – cache streaming
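For example (a sketch; the kernel is made up): __ldg routes a read-only load through the texture L1 on supporting hardware, while a whole-kernel default can be set at compile time, e.g. nvcc -Xptxas -dlcm=cg:

__global__ void axpy(const float* __restrict__ x, float* y, float a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * __ldg(&x[i]) + y[i];   // read-only path, cached in texture L1
}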
GMEM: THREAD-AFFINE
Each thread accesses a relatively long contiguous memory region
Load big structures using AoS
Each thread loads a contiguous region of data
All threads load the same data
GMEM: THREAD-AFFINE
Work distribution
int tid = blockIdx.x * blockDim.x + threadIdx.x;

// per-thread contiguous chunks: each thread sums its own region
int threadN = N / (blockDim.x * gridDim.x);
for (size_t i = tid * threadN; i < (tid + 1) * threadN; ++i)
{
    sum += in[i];
}

// grid-stride loop: consecutive threads touch consecutive elements
for (size_t i = tid; i < N; i += blockDim.x * gridDim.x)
{
    sum += in[i];
}
UNIFORM LOAD
All threads in a block access the same address, read-only.
Memory operation uses 3-level constant cache
Generated by the compiler
Available as a PTX asm insertion
__device__ __forceinline__ float __ldu(const float* ptr)
{
    float val;
    // uniform load instruction, served by the constant cache hierarchy
    asm("ldu.global.f32 %0, [%1];" : "=f"(val) : "l"(ptr));
    return val;
}
GMEM: IRREGULAR
Random memory access: threads in a warp access many lines, strides are irregular.
Improve data locality
Try 2D-local arrays (Morton-ordered)
Use read-only texture L1
Kernel fission to localize the worst case
TEXTURE
Smaller transactions and different caching (dedicated L1, 48 KB, ~104 clock latency)
Cache is not polluted by other GMEM loads; a separate partition per warp scheduler helps to prevent cache thrashing
Possible hardware interpolation (Note: 9-bit alpha)
Hardware handling of out-of-bound access
Kepler improvements:
sm_30+: bindless textures. No global static variables. Can be used in threaded code (see the sketch below)
sm_32+: GMEM access through the texture cache, bypassing the interpolation units
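A minimal sketch of a bindless texture object (sm_30+) bound to a linear buffer; the names are illustrative and error handling is omitted:

__global__ void readThroughTex(cudaTextureObject_t tex, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = tex1Dfetch<float>(tex, i);       // load through the texture cache
}

void launch(const float* d_in, float* d_out, int n)
{
    cudaResourceDesc res = {};                    // describe the linear buffer
    res.resType = cudaResourceTypeLinear;
    res.res.linear.devPtr = const_cast<float*>(d_in);
    res.res.linear.desc = cudaCreateChannelDesc<float>();
    res.res.linear.sizeInBytes = n * sizeof(float);

    cudaTextureDesc td = {};
    td.readMode = cudaReadModeElementType;

    cudaTextureObject_t tex = 0;                  // no global static texture reference
    cudaCreateTextureObject(&tex, &res, &td, NULL);

    readThroughTex<<<(n + 255) / 256, 256>>>(tex, d_out, n);

    cudaDestroyTextureObject(tex);
}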
SMEM: BANKING
Kepler: 32-bit and 64-bit modes
special case: 2D smem usage (Fermi example)
__shared__ float smem_buffer[32][32 + 1];  // pad each row by one element to avoid bank conflicts
SMEM
The common techniques are:
use smem to improve the memory access pattern
use smem for stencil processing
But the gap between smem and math throughput is increasing
Tesla: 16 (32-bit) banks vs 8 thread processors (2:1)
GF100: 32 (32-bit) banks vs 32 thread processors (1:1)
GF104: 32 (32-bit) banks vs 48 thread processors (2:3)
Kepler: 32 (64-bit) banks vs 192 thread processors (1:3)
Max size is 48 KB (49152 B); assuming maximum occupancy of 64 warps x 32 threads, that is 24 bytes per thread.
More intensive memory usage affects occupancy.
SMEM (CONT.)
smem + L1 share the same 64 KB. Program-configurable split:
Fermi: 48:16, 16:48
Kepler: 48:16, 16:48, 32:32
cudaDeviceSetCacheConfig(), cudaFuncSetCacheConfig()
prefer L1 to improve lmem usage
prefer smem for stencil kernels (a sketch of both calls follows)
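A sketch of both calls (the stencil kernel here is hypothetical):

__global__ void stencilKernel(const float* in, float* out, int n) { /* ... */ }

void configureCaches()
{
    // device-wide default: prefer L1 when local-memory traffic dominates
    cudaDeviceSetCacheConfig(cudaFuncCachePreferL1);

    // per-kernel override: a stencil kernel usually wants the larger smem split
    cudaFuncSetCacheConfig(stencilKernel, cudaFuncCachePreferShared);
}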
smem is often used for:
data sharing across the block
inter-block communication
block-level buffers (for scan or reduction; a reduction sketch follows below)
stencil code
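For the block-level buffer case, a minimal reduction sketch (the block size is assumed to be 256, a power of two):

__global__ void blockSum(const float* in, float* blockSums, int n)
{
    __shared__ float buf[256];                    // block-level buffer

    int tid = threadIdx.x;
    int i = blockIdx.x * blockDim.x + tid;

    buf[tid] = (i < n) ? in[i] : 0.0f;
    __syncthreads();

    // tree reduction inside the shared buffer
    for (int s = blockDim.x / 2; s > 0; s >>= 1)
    {
        if (tid < s) buf[tid] += buf[tid + s];
        __syncthreads();
    }

    if (tid == 0) blockSums[blockIdx.x] = buf[0];
}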
LMEM
Local memory is a stack memory analogue: call stack, register spilling. Note: both local memory reads and writes are cached in L1.
Registers are for automatic variables
The volatile keyword enforces spilling
Registers do not support indexing: local memory is used for local arrays
Register spilling leads to more instructions and memory traffic
int a = 42;            // automatic variable: kept in a register
int b[SIZE] = { 0, };  // indexed local array: placed in local memory
SPILLING CONTROL
1. Use __launch_bounds__ to help the compiler select the maximum number of registers to use.
2. Compile with -maxrregcount to force the compiler to optimize register usage and spill registers if needed.
3. By default you run fewer concurrent warps per SM.
__global__ void __launch_bounds__(maxThreadsPerBlock, minBlocksPerMultiprocessor)
kernel(...)
{
    // ...
}
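A hypothetical instantiation of the pattern above: promise at most 256 threads per block and ask for at least 4 resident blocks per SM, which caps register usage; alternatively compile with nvcc -maxrregcount=32:

__global__ void __launch_bounds__(256, 4)
boundedKernel(float* data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}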
CONTROL FLOW
CONTROL FLOW: PROBLEMS
Warp divergence: branching, early loop exit... Inspect SASS to find divergent pieces of code
Workload is data dependent: the code path depends on the input (as in a classification task)
Too much synchronization logic: intensive use of parallel data structures, lots of atomics, __syncthreads(), etc.
Resident warps: occupy resources but do nothing
Big blocks: tail effect
CONTROL FLOW: SOLUTIONS
Understand your problem. Select the best algorithm keeping the GPU architecture in mind. Maximize independent parallelism
The compiler generates branch predication with -O3 during if/switch optimization, but the number of instructions has to be less than or equal to a given threshold: 7 if there are lots of divergent warps, 4 otherwise (see the sketch after this list)
Adjust the thread block size
Try work queues
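As a toy illustration of the predication point (the kernel and thresholds are made up): keeping conditional bodies short lets the compiler emit predicated/select instructions instead of divergent branches:

__global__ void clampScale(float* data, int n, float lo, float hi, float s)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
    {
        float v = data[i] * s;
        // short conditional bodies compile to predicated instructions,
        // so no warp divergence is introduced here
        v = (v < lo) ? lo : v;
        v = (v > hi) ? hi : v;
        data[i] = v;
    }
}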
KERNEL FUSION AND FISSION
Fusion
Replace a chain of kernel calls with a fused one
Helps to save memory reads/writes; intermediate results can be kept in registers
Enables further ILP optimizations
Kernels should have almost the same access pattern (a toy sketch follows)
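A toy fusion sketch (kernel names are illustrative): instead of launching a scale kernel and then an add kernel, the fused kernel keeps the intermediate value in a register:

__global__ void scaleKernel(float* x, int n, float a)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;                         // intermediate goes out to gmem
}

__global__ void addKernel(float* x, const float* y, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] += y[i];                      // and is read back again
}

__global__ void fusedScaleAdd(float* x, const float* y, int n, float a)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
    {
        float v = x[i] * a;                       // intermediate stays in a register
        x[i] = v + y[i];
    }
}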
Fission
Replace one kernel call with a chain
Helps to localize ineffective memory access patterns
Insert small kernels that repack data (e.g. integral image)
TUNING BLOCK CONFIGURATION
Finding the optimal launch configuration is crucial to achieve the best performance. The launch configuration affects occupancy:
low occupancy prevents full hardware utilization and lowers the possibility of hiding latency
high occupancy for kernels with large memory demands results in over-polluted read or write queues
Experiment to find the configuration (block and grid dimensions, amount of work per thread) that is optimal for your kernel.
FINAL WORDS
Basic CUDA code optimizations
use compiler flags
do not trick the compiler
use structure of arrays
improve memory layout
load by cache line
process by row
cache data in registers
re-compute values instead of re-loading
keep data on the GPU
FINAL WORDS
Conventional parallelization optimizations
use light-weight locking, atomics, and lock-free code
minimize locking, memory fences, and volatile accesses
FINAL WORDS
Conventional architectural optimizations
utilize shared memory, constant memory, streams, thread voting, and rsqrtf
detect compute capability and number of SMs
tune thread count, blocks per SM, launch bounds, and L1 cache/shared memory configuration
THE END
By cuda.geek / 2013–2015