
Guiding Ispike with Instrumentation and Hardware (PMU) Profiles

CGO’04 Tutorial, 3/21/04

C.-K. Luk (chi-keung.luk@intel.com)

Massachusetts Microprocessor Design Center
Intel Corporation


What is Ispike?

• A post-link optimizer for Itanium/Linux
  – No source code required
  – Memory-centric optimizations: code layout + prefetching, data layout + prefetching
  – Significant speedups over compiler-optimized programs: 10% average speedup over gcc –O3 on SPEC CINT 2000

• Profile usages:
  – Understanding program characteristics
  – Driving optimizations automatically
  – Evaluating the effectiveness of optimizations


Profiles used by Ispike

Granularity     Hardware Profiles (pfmon)   Instrumentation Profiles (pin)   Usages
Per inst.       PC sample                   ---                              Identifying hot spots
Per inst. line  I-EAR (I-Cache)             ---                              Inst. prefetching
                I-EAR (I-TLB)               ---                              ---
Per branch      BTB                         Edge profile                     Code layout, data layout, and other opts
Per load        D-EAR (D-Cache)             Load-latency profile             Data prefetching
                D-EAR (D-TLB)               ---                              ---
                D-EAR (stride)              Stride profile                   Data prefetching


Profile Example: D-EAR (cache)

[Chart: the top 10 loads in the D-EAR profile of the MCF benchmark, showing latency buckets and total sampled miss latency per load.]


Profile Analysis Tools

• A set of tools written for visualizing and analyzing profiles, e.g.:
  – Control flow graph (CFG) viewer
  – Code-layout viewer
  – Load-latency comparator


CFG Viewer

For evaluating the accuracy of profiles


Code-layout Viewer

For evaluating code-layout optimization


Load-latency Comparator

For evaluating data-layout optimization and data prefetching


Deriving New Profiles from PMUs

• New profile types can be derived from PMUs

• Two examples:
  – Consumer stall cycles
  – D-cache miss strides


Consumer Stall Cycles

Question:
– How many cycles of stall does I2 experience? (Note: not necessarily the load latency of I1)

Method:
– The PC-sample count is proportional to (stall cycles × frequency)

I1: ld8 r2 = [r3];;

/* other instructions */

I2: add r2 = r2, 1;;

I3: st8 [r3] = r2

[Figure: PC-sample counts N1, N2, N3 attributed to the instructions of basic block A; each count is proportional to that instruction's stall cycles multiplied by the block's execution frequency (N ∝ stall × freq).]
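As a rough sketch of this method, assuming cycle-based PC sampling with a fixed period (the period, function name, and constant below are hypothetical, not from the tutorial):

```c
/* Hypothetical sketch of the slide's method: if one PC sample is taken
   every SAMPLE_PERIOD cycles, the sample count N attributed to an
   instruction satisfies N * SAMPLE_PERIOD ~= stall_per_visit * freq,
   so dividing out the block's execution frequency recovers the
   per-visit stall. */
#define SAMPLE_PERIOD 100000UL /* cycles between PC samples (assumed) */

double stall_cycles_per_visit(unsigned long samples, unsigned long freq)
{
    if (freq == 0)
        return 0.0;
    return (double)samples * SAMPLE_PERIOD / (double)freq;
}
```

For example, 10 samples on an instruction inside a block executed one million times would suggest about one stall cycle per visit.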


D-cache Miss Strides

Problem:
– Detect strides that are statically unknown

arc* arcin;
node* tail;
while (arcin) {
    tail = arcin->tail;
    arcin = tail->mark;
}

[Figure: two strided loads in MCF — across iterations, the arcin pointer advances by a -192B stride and the tail pointer by a -120B stride.]
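A C-level sketch of what a stride profile enables: once a load is known to advance by a fixed stride, a prefetch can be inserted a few iterations ahead. The struct layout, prefetch distance, and GCC's `__builtin_prefetch` here are illustrative stand-ins; Ispike itself inserts Itanium `lfetch` instructions at the binary level.

```c
#include <stddef.h>

typedef struct node node;
typedef struct arc  arc;
struct arc  { node *tail; };
struct node { arc  *mark; };

/* The slide's pointer chase with a stride prefetch inserted.  The
   -192B stride and 2-iteration prefetch distance are assumptions for
   illustration; __builtin_prefetch is a hint and never faults. */
int chase(arc *arcin)
{
    int visited = 0;
    while (arcin) {
        /* prefetch the arc we expect to touch two iterations from now */
        __builtin_prefetch((const char *)arcin - 2 * 192);
        node *tail = arcin->tail;
        arcin = tail->mark;
        visited++;
    }
    return visited;
}
```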


D-EAR based Stride Profiling

• Sample load misses in 2 alternating phases over time:
  – Skipping phases (1 sample per 1000 misses)
  – Inspection phases (1 sample per miss)

• Use the GCD to figure out strides from the miss addresses. Let A1, A2, A3, A4 be four consecutive miss addresses of a load:

  A2-A1 = 5*48 = 240    A3-A2 = 7*48 = 336    A4-A3 = 3*48 = 144
  GCD(A2-A1, A3-A2) = GCD(240, 336) = 48
  GCD(A3-A2, A4-A3) = GCD(336, 144) = 48

  The load has a stride of 48 bytes.
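The GCD reduction above can be sketched in a few lines of C (the function names are illustrative; the real analysis must also cope with sampling noise and phase boundaries):

```c
/* Greatest common divisor by Euclid's algorithm. */
static unsigned long gcd(unsigned long a, unsigned long b)
{
    while (b) {
        unsigned long t = a % b;
        a = b;
        b = t;
    }
    return a;
}

/* Reduce the differences of consecutive miss addresses to a candidate
   stride, as on the slide.  Assumes a monotonic address sequence. */
unsigned long stride_from_misses(const unsigned long *addr, int n)
{
    unsigned long g = 0; /* gcd(0, x) == x */
    for (int i = 1; i < n; i++)
        g = gcd(g, addr[i] - addr[i - 1]);
    return g;
}
```

With the slide's differences of 240, 336, and 144 bytes, the running GCD settles at 48.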


Performance Evaluation

• Instrumentation vs. PMU profiles:
  – Profiling overhead
  – Performance impact

• Ispike optimizations:
  – Code layout, instruction prefetching, data layout, data prefetching, inlining, global-data optimization, scalar optimizations

• Baseline compilers:
  – Intel Electron compiler (ecc), version 8.0 Beta, -O3
  – GNU C compiler (gcc), version 3.2, -O3

• Benchmarks:
  – SPEC CINT2000 (profiled with “training” inputs, measured with “reference” inputs)

• System:
  – 1GHz Itanium 2, 16KB L1I / 16KB L1D, 256KB L2, 3MB L3, 16GB memory
  – Red Hat Enterprise Linux AS with 2.4.18 kernel


Performance Gains with PMU Profiles

• Up to 40% gain
• Geo. means: 8.5% over Ecc and 9.9% over Gcc

[Chart recovered as a table: normalized execution time (%) after Ispike optimization, relative to each baseline. Sampling rates: BTB (1 sample/10K branches), D-EAR cache (1 sample/100 load misses), D-EAR stride (1 sample/100 misses in skipping, 1 sample/miss in inspection).]

Benchmark   Ecc8.0 –O3 baseline   Gcc3.2 –O3 baseline
bzip2       101.3                 102.2
crafty       90.3                  80.9
eon          95.6                  95.8
gap          85.6                  90.9
gcc          99.0                  93.4
gzip         97.6                  90.4
mcf          59.6                  59.4
parser       95.0                  89.0
perlbmk      94.8                  94.0
twolf        96.4                 104.9
vortex       90.0                  90.9
vpr         102.7                  99.4
Geo. Mean    91.5                  90.1


Cycle Breakdown (Ecc Baseline)

Helps to show whether individual optimizations are doing a good job

[Stacked-bar chart: for each benchmark, baseline vs. optimized execution time is broken down into Busy, Front-end, L1D-access, Load-use-stall, Br-mispredict, and Other cycles. Only the totals are recoverable:]

Benchmark   Baseline (%)   Optimized (%)
bzip2       100            101
crafty      100             90
eon         100             96
gap         100             86
gcc         100             99
gzip        100             98
mcf         100             60
parser      100             95
perlbmk     100             95
twolf       100             96
vortex      100             90
vpr         100            103


PMU Profiling Overhead

[Chart recovered as a table: normalized profiling time (%), where 100% is the unprofiled run.]

Sampling configuration                                  Normalized Profiling Time (%)
Default: BTB=1/10K, D-EAR=1/100, Stride=<1/100, 1/1>    158
BTB=1/100K, D-EAR=1/100, Stride=<1/100, 1/1>            123
BTB=1/100K, D-EAR=1/1000, Stride=<1/1000, 1/1>          103

• Overhead reduced from 58% to 23% when lowering the BTB sampling rate by 10x.

• Overhead reduced to 3% when lowering the D-EAR sampling rate by 10x.


Instrumentation Profiling Overhead

[Chart recovered as a table: normalized profiling time (%), where 100% is the unprofiled run.]

Profile type                  Normalized Profiling Time (%)
Edge profiling only           1191
Load-latency profiling only   5761
Stride profiling only         5494

Why is the overhead so large?
– Training runs are too short to amortize the dynamic compilation cost
– Techniques like ephemeral instrumentation are yet to be applied


PMU vs. Instrumentation (Perf. Gains)

[Chart recovered as a table: normalized execution time (%) per benchmark under four profile sources.]
(a) Optimized with instrumentation profiles
(b) Optimized with PMU profiles (default: BTB=1/10K, D-EAR=1/100, Stride=<1/100, 1/1>)
(c) Optimized with PMU profiles (BTB=1/100K, D-EAR=1/100, Stride=<1/100, 1/1>)
(d) Optimized with PMU profiles (BTB=1/100K, D-EAR=1/1000, Stride=<1/1000, 1/1>)

Benchmark     (a)     (b)     (c)     (d)
bzip2       102.0   101.3   102.3   102.2
crafty       90.0    90.3    91.0    89.7
eon          95.5    95.6    96.7    96.6
gap          86.3    85.6    85.8    95.8
gcc          98.7    99.0   100.2   100.2
gzip         96.7    97.6    96.7    96.7
mcf          62.0    59.6    59.8    59.2
parser       93.2    95.0    94.6    99.5
perlbmk      96.9    94.8    96.5    96.0
twolf        98.8    96.4    99.3    99.8
vortex       87.8    90.0    90.8    88.5
vpr         102.1   102.7   103.7   104.2
Geo. Mean    91.8    91.5    92.3    93.2

• PMU profiles can be as good as instrumentation profiles
  – Could be even better in some cases (e.g., mcf)

• However, performance can drop when samples are too sparse
  – E.g., gap and parser when Stride = <1/1000, 1/1>

Profiling overhead: >60x (instrumentation), 59% (PMU default), 24%, and 3% (the two reduced PMU sampling rates).


Reference

“Ispike: A Post-link Optimizer for the Intel Itanium Architecture”, by C.-K. Luk et al. In Proceedings of CGO’04.

http://www.cgo.org/papers/01_82_luk_ck.pdf
