[Harvard CS264] 14 - Dynamic Compilation for Massively Parallel Processors (Gregory Diamos, Georgia...)
DESCRIPTION
http://cs264.org http://j.mp/h2zN72

TRANSCRIPT
Dynamic Compilation for Massively Parallel
Processors
Gregory Diamos
PhD candidate
Georgia Institute of Technology and NVIDIA Research
April 14, 2011
Gregory Diamos CS264 - Dynamic Compilation 1/62
What is an execution model?
Goals of programming languages
Programming languages are designed for productivity.
Efficiency is measured in terms of:
1. cost - hardware investment, power consumption, area requirement
2. complexity - application development effort
3. speed - amount of work performed per unit time
Goals of processor architecture
Hardware is designed for speed and efficiency.
Goals of processor architecture - 2
[1] - M. Koyanagi, T. Fukushima, and T. Tanaka. "High-Density Through Silicon Vias for 3-D LSIs" [2] - Novoselov et al. "Electric Field Effect in Atomically Thin Carbon Films." [3] - Intel Corp. 22nm test chip.
It is constrained by the limitations of physical devices.
Execution models bridge the gap
Goals of execution models
Execution models provide impedance matching between applications and hardware.
Goals:
leverage common optimizations across multiple applications.
limit the impact of hardware changes on software.
ISAs have traditionally been effective execution models.
Programming challenges of heterogeneity
The introduction of heterogeneous and multi-core processors changes the hardware/software interface:
Intel Nehalem IBM PowerEN AMD Fusion NVIDIA Fermi
1. multi-core creates multiple interfaces.
2. heterogeneity creates different interfaces.
3. these increase software complexity.
Program the entire processor, not individual cores. (New execution model abstractions are needed.)
Emerging execution models
Bulk-synchronous parallel (BSP)
[1] - Leslie Valiant. A bridging model for parallel computing.
The Parallel Thread eXecution (PTX) Model
PTX defines a kernel as a 2-level grid of bulk-synchronous tasks.
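The two-level hierarchy can be sketched as a serial emulation in plain Python (purely illustrative; the names launch_kernel and saxpy are ours, not part of PTX): a kernel runs as a grid of independent CTAs, and each CTA runs a set of threads.

```python
def launch_kernel(kernel, grid_dim, cta_dim, *args):
    """Run `kernel` once per thread, CTA by CTA (serial emulation)."""
    results = {}
    for cta_id in range(grid_dim):          # CTAs are independent tasks
        for tid in range(cta_dim):          # threads within one CTA
            results[(cta_id, tid)] = kernel(cta_id, tid, cta_dim, *args)
    return results

def saxpy(cta_id, tid, cta_dim, a, x, y):
    i = cta_id * cta_dim + tid              # global thread index
    return a * x[i] + y[i]

x = list(range(8))
y = [1] * 8
out = launch_kernel(saxpy, 2, 4, 2.0, x, y)
```

Barriers inside a CTA are omitted here; the thread-fusion transformation discussed later shows how a dynamic compiler handles them.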
Dynamically translating PTX
Dynamic compilers can transform this parallelism to fit the hardware.
Beyond PTX - Data distributions
Beyond PTX - Memory hierarchies
[1] - Leslie Valiant. A bridging model for multi-core.
[2] - Fatahalian et al. Sequoia: Programming the memory hierarchy.
Dynamic compilation/binary translation
Binary translation
Binary translators are everywhere
If you are running a browser, you are using dynamic compilation.
x86 binary translation
Low Level Virtual Machines
Compile all programs to a common virtual machine representation (LLVM IR), and keep this representation around.
Perform common optimizations on this IR.
Target various machines by lowering it to an ISA.
Statically or via JIT compilation.
Execution model translation
Execution model translation
Extend binary translation to execution model translation.
Dynamic compilers can map threads/tasks to the HW.
Different core architectures
Can we target these from the same execution model?
What about efficiency?
Ocelot
Enables thread-aware compiler transformations.
Mapping CTAs to cores - thread fusion
[Figure: original PTX code vs. transformed PTX code, with a scheduler block, register spills and restores, and a barrier]
Transform threads into loops over the program.
Distribute loops to handle barriers.
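As a rough sketch of this transformation (our own simplification, not Ocelot's actual output), a CTA with one barrier becomes two loops over the thread index; values that live across the barrier are spilled to a per-thread array and restored in the second loop.

```python
def fused_cta(n_threads, data):
    """Run a CTA's threads as loops; fission the loop at the barrier."""
    shared = [0] * n_threads
    spilled_x = [0] * n_threads          # registers live across the barrier

    # Loop 1: everything before the barrier.
    for tid in range(n_threads):
        x = data[tid] * 2
        shared[tid] = x
        spilled_x[tid] = x               # spill before the barrier

    # -- barrier: every thread has reached this point --

    # Loop 2: everything after the barrier.
    out = [0] * n_threads
    for tid in range(n_threads):
        x = spilled_x[tid]               # restore after the barrier
        left = shared[(tid - 1) % n_threads]
        out[tid] = x + left
    return out
```

The loop fission is what makes the barrier safe on a single core: no thread enters loop 2 until every thread has finished loop 1.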
Mapping CTAs to cores - vectorization
Pack adjacent threads into vector instructions.
Speculate that divergence never occurs, check in case it does.
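A minimal sketch of that speculation (the function warp_abs is hypothetical; lanes of a warp are modeled as a Python list): execute the uniform "vector" path when the branch check passes, and fall back to lane-by-lane scalar execution when divergence is detected.

```python
def warp_abs(values):
    """Absolute value over a warp of lanes, speculating on uniformity."""
    conds = [v < 0 for v in values]
    if all(conds):                     # uniform: every lane takes the branch
        return [-v for v in values]    # "vector" negate
    if not any(conds):                 # uniform: no lane takes the branch
        return list(values)
    # Divergence detected: scalar fallback, one lane at a time.
    return [-v if c else v for v, c in zip(values, conds)]
```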
Mapping CTAs to cores - multiple instruction streams
Instructions from different threads are independent.
Merge instruction streams and statically schedule them on functional units.
PTX analysis
Divergence analysis
Subkernels
Thread frontier analysis
Supporting control flow on SIMD processors requires finding divergent branches and potential re-convergence points.
Example:

if ((cond1() || cond2()) && (cond3() || cond4())) { ... }

Short-circuit evaluation of the compound conditional turns it into a chain of branches (bra cond1(), bra cond3(), bra cond2(), bra cond4()) through basic blocks B1-B5 between entry and exit.

Block Id   Thread Frontiers
B1         {}
B2         {B2 - B3}
B3         {B3 - Exit}
B4         {B4 - Exit}
B5         {B5 - Exit}
[Figure: divergent execution of threads T0-T3 under re-convergence at thread frontiers vs. immediate post-dominator re-convergence, showing the stack pushes and pops (B3 for T0, B5 for T2, Exit) and where T0-T3 re-converge under each scheme]
Compiler analysis can identify immediate post-dominators or thread frontiers as re-convergence points.
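For the post-dominator half of that claim, a small sketch (the standard iterative data-flow formulation, not Ocelot's actual code) computes post-dominator sets on a toy if/else CFG and picks the immediate post-dominator of the divergent branch as its re-convergence point.

```python
# CFG for: if (cond) { A } else { B } -> both paths re-converge at "join".
cfg = {
    "entry": ["A", "B"],
    "A": ["join"],
    "B": ["join"],
    "join": ["exit"],
    "exit": [],
}

def post_dominators(cfg, exit_node="exit"):
    """Iterate pdom[n] = {n} U intersection of pdom over successors."""
    nodes = set(cfg)
    pdom = {n: set(nodes) for n in nodes}
    pdom[exit_node] = {exit_node}
    changed = True
    while changed:
        changed = False
        for n in nodes - {exit_node}:
            new = {n} | set.intersection(*(pdom[s] for s in cfg[n]))
            if new != pdom[n]:
                pdom[n] = new
                changed = True
    return pdom

def ipdom(pdom, n):
    """Nearest strict post-dominator: the one all others post-dominate."""
    strict = pdom[n] - {n}
    for p in strict:
        if all(q in pdom[p] for q in strict):
            return p

pdom = post_dominators(cfg)
```

Here ipdom(pdom, "entry") is "join": the SIMD hardware can safely re-converge all threads there regardless of which side of the branch they took.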
Consequences of architecturedifferences
Degraded performance portability
[Figure: two plots of SGEMM performance, GFLOPS vs. N (0-6000), for Fermi SGEMM and AMD SGEMM; one plot peaks near 600 GFLOPS, the other near 1600 GFLOPS]
Performance of two OpenCL applications, one tuned for AMD, the other for NVIDIA.
Memory traversal patterns
[Figure: memory traversal patterns for Warp(4) vs. Warp(1) over cycles 1 and 2]
Thread loops change row-major memory accesses into column-major accesses.
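The effect on the address stream can be seen directly (the constants W and STEPS are illustrative): the same per-thread index expression yields contiguous addresses under lockstep warp execution, but strided addresses once each thread runs as a serial loop.

```python
# Each of W threads reads element a[i*W + t] at step i.
W, STEPS = 4, 2

# GPU-style lockstep warp: at each step the W threads touch
# W contiguous addresses (coalesced, row-major traversal).
warp_order = [i * W + t for i in range(STEPS) for t in range(W)]

# CPU thread-loop: thread t runs all of its steps before thread t+1,
# so consecutive accesses are strided by W (column-major traversal).
loop_order = [i * W + t for t in range(W) for i in range(STEPS)]
```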
Reduced memory bandwidth on CPUs
Optimized for SIMD (GPU)
Optimized for single-threaded CPU
This reduces memory bandwidth by 10x for a memory microbenchmark running on a 4-core CPU.
The good news
Scaling across three decades of processors
Many existing applications still scale.
[Figure: application scaling across processors, with annotated speedups of 12x and 480x]
A GTX 280 has 40x more peak flops than a Phenom, and 480x more than an Atom.
Questions?
Databases on GPUs
Who cares about databases?
What do applications look like?
Gobs of data
Distributed systems
Lots of parallelism
What do CPU algorithms look like?
Btrees
Sequential algorithms
[Figure: a sequential join walking relation 1 and relation 2 with =, <, > key comparisons to produce the result]
It doesn’t look good
Outlook not so good...
Or does it?
Where is the parallelism?
Flattened trees
Relational algebra
A Case Study: Inner Join
1. Recursive partitioning
2. Block streaming
Blocking into pages, shared memory buffers, and transaction-sized chunks makes memory accesses efficient.
3. Shared memory merging network
A network for join can be constructed, similar to a sorting network.
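The network itself is not reproduced here; a scalar merge-join sketch (our assumption: sorted inputs with unique keys per relation) conveys the comparisons that such a network performs in parallel.

```python
def merge_join(left, right):
    """Inner join of two sorted key lists; scalar version of the idea
    behind the shared-memory merging network."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] < right[j]:
            i += 1                     # advance the smaller side
        elif left[i] > right[j]:
            j += 1
        else:
            out.append(left[i])        # keys match: emit a join result
            i += 1                     # assumes unique keys per relation
            j += 1
    return out
```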
4. Data chunking
Stream compaction packs result data into chunks that can be streamed out of shared memory efficiently.
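A sketch of the idea (a serial prefix-sum compaction; the function compact is ours): each kept element learns its dense output slot from an exclusive prefix sum over the match flags, so results can be written contiguously.

```python
def compact(values, keep):
    """Pack the elements satisfying `keep` into a dense output list."""
    flags = [1 if keep(v) else 0 for v in values]
    # Exclusive prefix sum: output position of each kept element.
    pos, total = [], 0
    for f in flags:
        pos.append(total)
        total += f
    # Scatter: every kept element writes to its own dense slot, so on a
    # GPU these writes can proceed in parallel without conflicts.
    out = [None] * total
    for v, f, p in zip(values, flags, pos):
        if f:
            out[p] = v
    return out
```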
Operator fusion
Will it blend?
Yes it blends.
Operator        NVIDIA C2050      Phenom 9570
inner-join      26.4-32.3 GB/s    0.11-0.63 GB/s
select          104.2 GB/s        2.55 GB/s
set operators   45.8 GB/s         0.72 GB/s
projection      54.3 GB/s         2.34 GB/s
cross product   98.8 GB/s         2.67 GB/s
Questions?
Conclusions
Emerging heterogeneous architectures need matching execution model abstractions.
dynamic compilation can enable portability.
When writing massively parallel codes, consider:
data structures and algorithms.
mapping onto the execution model.
transformations in the compiler/runtime.
processor micro-architecture.
Thoughts on open source software
Questions?
Contact Me:
Contribute to Harmony, Ocelot, and Vanaheimr:
http://code.google.com/p/harmonyruntime/
http://code.google.com/p/gpuocelot/
http://code.google.com/p/vanaheimr/