Guiding Ispike with Instrumentation and Hardware (PMU) Profiles
CGO’04 Tutorial, 3/21/04
C.-K. Luk ([email protected])
Massachusetts Microprocessor Design Center, Intel Corporation
What is Ispike?
• A post-link optimizer for Itanium/Linux
  – No source code required
  – Memory-centric optimizations:
    • Code layout + prefetching, data layout + prefetching
  – Significant speedups over compiler-optimized programs:
    • 10% average speedup over gcc -O3 on SPEC CINT2000
• Profile usages:
  – Understanding program characteristics
  – Driving optimizations automatically
  – Evaluating the effectiveness of optimizations
Profiles used by Ispike

Granularity     Hardware profiles (pfmon)   Instrumentation profiles (Pin)   Usages
Per inst.       PC sample                   ---                              Identifying hot spots
Per inst. line  I-EAR (I-cache)             ---                              Inst. prefetching
                I-EAR (I-TLB)               ---                              ---
Per branch      BTB                         Edge profile                     Code layout, data layout, and other opts
Per load        D-EAR (D-cache)             Load-latency profile             Data prefetching
                D-EAR (D-TLB)               ---                              ---
                D-EAR (stride)              Stride profile                   Data prefetching
Profile Example: D-EAR (cache)
[Table: top 10 loads in the D-EAR profile of the MCF benchmark, with per-load latency buckets and total sampled miss latency]
Profile Analysis Tools
• A set of tools written for visualizing and analyzing profiles, e.g.:
  – Control-flow graph (CFG) viewer
  – Code-layout viewer
  – Load-latency comparator
CFG Viewer
For evaluating the accuracy of profiles
Code-layout Viewer
For evaluating code-layout optimization
Load-latency Comparator
For evaluating data-layout optimization and data prefetching
Deriving New Profiles from PMUs
• New profile types can be derived from PMUs
• Two examples:
  – Consumer stall cycles
  – D-cache miss strides
Consumer Stall Cycles
• Question:
  – How many cycles of stall are experienced by I2? (Note: not necessarily the load latency of I1)
• Method:
  – The PC-sample count at an instruction is proportional to (stall cycles × execution frequency)

  I1: ld8 r2 = [r3];;
      /* other instructions */
  I2: add r2 = r2, 1;;
  I3: st8 [r3] = r2
[Figure: basic block A, executed with some frequency, accumulates PC-sample counts N1, N2, N3 at I1, I2, I3; each count grows in proportion to (freq × stall) at that instruction]
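As a rough sketch of this arithmetic (the function, its parameters, and the sampling-period model are my assumptions, not Ispike's API):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch (names are mine, not Ispike's). If the PC-sample
 * count N at an instruction is proportional to stall_cycles * frequency,
 * and one sample is taken every sampling_period cycles, then
 * N * sampling_period approximates the total cycles stalled at that PC,
 * and dividing by the block's execution frequency gives the average
 * stall per execution. */
double stall_cycles_per_exec(uint64_t samples, uint64_t sampling_period,
                             uint64_t exec_frequency) {
    double total_stall = (double)samples * (double)sampling_period;
    return total_stall / (double)exec_frequency;
}
```

For example, 5 samples at a period of 100,000 cycles on an instruction executed 250,000 times would suggest roughly 2 stall cycles per execution.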
D-cache Miss Strides
• Problem:
  – Detect strides that are statically unknown
• Example: two strided loads in MCF:

  arc* arcin;
  node* tail;
  ...
  while (arcin) {
    tail = arcin->tail;
    ...
    arcin = tail->mark;
  }

[Figure: at run time, arcin advances by a constant -192B between misses and tail by a constant -120B, even though neither stride is statically visible]
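To make the run-time picture concrete, here is a C-level analogy (the struct and all names are my invention; Ispike works on the binary and would emit an Itanium lfetch rather than a source-level builtin): a pointer-chasing loop whose nodes happen to sit at a fixed spacing has a constant miss stride that a profile can reveal and a prefetch can exploit.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical analogy to the MCF loop above (struct and names are mine).
 * The compiler cannot prove a stride for p = p->next, but when the nodes
 * are laid out at a fixed spacing in memory, prefetching the address
 * `distance` iterations ahead hides the miss latency. __builtin_prefetch
 * (GCC/Clang) is a hint and never faults, so a bad guess is harmless. */
typedef struct node {
    struct node *next;
    long value;
} node;

long walk_with_prefetch(node *p, ptrdiff_t stride_bytes, int distance) {
    long sum = 0;
    while (p) {
        /* prefetch the node we expect to visit `distance` iterations ahead */
        __builtin_prefetch((const char *)p + stride_bytes * distance, 0, 1);
        sum += p->value;
        p = p->next;
    }
    return sum;
}
```

Ispike makes the same bet in the binary: once the profile reports a stride, it schedules the prefetch far enough ahead to cover the miss latency.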
D-EAR-based Stride Profiling
• Sample load misses in two alternating phases:
  – Skipping phases: 1 sample per 1000 misses
  – Inspection phases: 1 sample per miss
• Use the GCD of consecutive miss-address differences to infer strides. Example: A1, A2, A3, A4 are four consecutive miss addresses of a load:
  – A2−A1 = 5×48 = 240; A3−A2 = 7×48 = 336; A4−A3 = 3×48 = 144
  – GCD(A2−A1, A3−A2) = GCD(240, 336) = 48
  – GCD(A3−A2, A4−A3) = GCD(336, 144) = 48
  – The load has a stride of 48 bytes
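The GCD inference can be sketched as follows (helper names are my own, not Ispike's; I take the magnitude of each difference so that descending strides, like MCF's, are handled too):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of GCD-based stride inference from sampled miss addresses
 * (helper names are mine, not Ispike's). The stride of a load is
 * estimated as the GCD of the magnitudes of the differences between
 * consecutive sampled miss addresses. */
static uint64_t gcd_u64(uint64_t a, uint64_t b) {
    while (b != 0) {
        uint64_t t = a % b;
        a = b;
        b = t;
    }
    return a;
}

/* addrs: consecutive sampled miss addresses of one load, n >= 2 */
uint64_t infer_stride(const uint64_t *addrs, int n) {
    uint64_t g = 0; /* gcd(0, x) == x, so g starts as the first difference */
    for (int i = 1; i < n; i++) {
        uint64_t d = addrs[i] > addrs[i - 1] ? addrs[i] - addrs[i - 1]
                                             : addrs[i - 1] - addrs[i];
        g = gcd_u64(g, d);
    }
    return g;
}
```

On the slide's example (differences 240, 336, 144) this returns 48.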
Performance Evaluation
• Instrumentation vs. PMU profiles:
  – Profiling overhead
  – Performance impact
• Ispike optimizations:
  – Code layout, instruction prefetching, data layout, data prefetching, inlining, global-data optimization, scalar optimizations
• Baseline compilers:
  – Intel Electron compiler (ecc), version 8.0 Beta, -O3
  – GNU C compiler (gcc), version 3.2, -O3
• Benchmarks:
  – SPEC CINT2000 (profiled with "training" input, measured with "reference" input)
• System:
  – 1GHz Itanium 2: 16KB L1I / 16KB L1D, 256KB L2, 3MB L3, 16GB memory
  – Red Hat Enterprise Linux AS with 2.4.18 kernel
Performance Gains with PMU Profiles
• Up to 40% gain
• Geo. means: 8.5% over ecc and 9.9% over gcc
• Sampling rates: BTB (1 sample/10K branches), D-EAR cache (1 sample/100 load misses), D-EAR stride (1 sample/100 misses in skipping phase, 1 sample/miss in inspection phase)

Normalized exec. time (%), lower is better:

Benchmark   ecc 8.0 -O3 baseline   gcc 3.2 -O3 baseline
bzip2       101.3                  102.2
crafty       90.3                   80.9
eon          95.6                   95.8
gap          85.6                   90.9
gcc          99.0                   93.4
gzip         97.6                   90.4
mcf          59.6                   59.4
parser       95.0                   89.0
perlbmk      94.8                   94.0
twolf        96.4                  104.9
vortex       90.0                   90.9
vpr         102.7                   99.4
Geo. mean    91.5                   90.1
102.
2
80.9 95
.8
90.9
93.4
90.4
59.4
89.0 94
.0 104.
9
90.9 99
.4
90.1
0
20
40
60
80
100
120
Nor
mal
ized
Exe
c. T
ime
(%)
Cycle Breakdown (ecc Baseline)
Helps understand whether individual optimizations are doing a good job.
[Figure: per benchmark, stacked bars of normalized exec. time (%) for baseline vs. optimized, broken into Busy, Front-end, L1D-access, Load-use-stall, Br-mispredict, and Other cycles. Optimized totals (baseline = 100 each): bzip2 101, crafty 90, eon 96, gap 86, gcc 99, gzip 98, mcf 60, parser 95, perlbmk 95, twolf 96, vortex 90, vpr 103]
PMU Profiling Overhead

Normalized profiling time (%):
  Default: BTB=1/10K, D-EAR=1/100, Stride=<1/100, 1/1>    158
  BTB=1/100K, D-EAR=1/100, Stride=<1/100, 1/1>            123
  BTB=1/100K, D-EAR=1/1000, Stride=<1/1000, 1/1>          103

• Overhead is reduced from 58% to 23% when the BTB sampling rate is lowered by 10x.
• Overhead is further reduced to 3% when the D-EAR sampling rate is also lowered by 10x.
Instrumentation Profiling Overhead

Normalized profiling time (%):
  Edge profiling only            1191
  Load-latency profiling only    5761
  Stride profiling only          5494

Why is the overhead so large?
– Training runs are too short to amortize the dynamic-compilation cost
– Techniques like ephemeral instrumentation have yet to be applied
PMU vs. Instrumentation (Perf. Gains)

Profile configurations and their profiling overheads:
  INS   = instrumentation profiles (overhead >60x)
  PMU-1 = BTB=1/10K, D-EAR=1/100, Stride=<1/100, 1/1> (default; overhead 59%)
  PMU-2 = BTB=1/100K, D-EAR=1/100, Stride=<1/100, 1/1> (overhead 24%)
  PMU-3 = BTB=1/100K, D-EAR=1/1000, Stride=<1/1000, 1/1> (overhead 3%)

Normalized exec. time (%), lower is better:

Benchmark   INS     PMU-1   PMU-2   PMU-3
bzip2       102.0   101.3   102.3   102.2
crafty       90.0    90.3    91.0    89.7
eon          95.5    95.6    96.7    96.6
gap          86.3    85.6    85.8    95.8
gcc          98.7    99.0   100.2   100.2
gzip         96.7    97.6    96.7    96.7
mcf          62.0    59.6    59.8    59.2
parser       93.2    95.0    94.6    99.5
perlbmk      96.9    94.8    96.5    96.0
twolf        98.8    96.4    99.3    99.8
vortex       87.8    90.0    90.8    88.5
vpr         102.1   102.7   103.7   104.2
Geo. mean    91.8    91.5    92.3    93.2

• PMU profiles can be as good as instrumentation profiles
  – Could be even better in some cases (e.g., mcf)
• However, performance can drop when samples are too sparse
  – E.g., gap and parser when Stride = <1/1000, 1/1>
Reference
C.-K. Luk et al., "Ispike: A Post-link Optimizer for the Intel Itanium Architecture", in Proceedings of CGO’04.
http://www.cgo.org/papers/01_82_luk_ck.pdf