
HY425 Lecture 13: Improving Cache Performance

Dimitrios S. Nikolopoulos

University of Crete and FORTH-ICS

November 25, 2011

Dimitrios S. Nikolopoulos HY425 Lecture 13: Improving Cache Performance 1 / 40

Outline: Reducing miss penalty, Reducing miss rates in HW, Reducing miss rates in SW, Summary

Multilevel caches

Motivation

- Bigger caches bridge the gap between CPU and DRAM
- Smaller caches keep pace with CPU speed
- Multi-level caches are a compromise between the two

- L2 cache captures misses from the L1 cache
- L2 cache provides additional on-chip caching space

Performance analysis

AMAT = Hit time_L1 + Miss rate_L1 × Miss penalty_L1

Miss penalty_L1 = Hit time_L2 + Miss rate_L2 × Miss penalty_L2

AMAT = Hit time_L1 + Miss rate_L1 × (Hit time_L2 + Miss rate_L2 × Miss penalty_L2)


Multilevel cache miss rates

Local vs. global miss rate

- Local miss rate: number of misses in the cache divided by the number of accesses to this cache
- Global miss rate: number of misses in the cache divided by the number of accesses issued by the CPU

Performance analysis

Average memory stalls per instruction = Misses per instruction_L1 × Hit time_L2 + Misses per instruction_L2 × Miss penalty_L2


AMAT in multilevel caches

Example

- Write-back first-level cache
- 40 misses per 1000 memory references in L1
- 20 misses per 1000 memory references in L2
- L2 cache miss penalty = 100 cycles
- L1 hit time = 1 cycle
- L2 cache hit time = 10 cycles
- 1.5 memory references per instruction

Miss rate_L1 = 40/1000 = 0.04

Miss rate_L2,local = misses in L2 / misses in L1 = 20/40 = 0.50

AMAT = Hit time_L1 + Miss rate_L1 × (Hit time_L2 + Miss rate_L2 × Miss penalty_L2)

AMAT = 1.0 + 0.04 × (10 + 0.50 × 100) = 3.4 cycles


AMAT in multilevel caches

Example (cont.)

- Write-back first-level cache
- 40 misses per 1000 memory references in L1
- 20 misses per 1000 memory references in L2
- L2 cache miss penalty = 100 cycles
- L1 hit time = 1 cycle
- L2 cache hit time = 10 cycles
- 1.5 memory references per instruction

Average memory stalls per instruction = Misses per instruction_L1 × Hit time_L2 + Misses per instruction_L2 × Miss penalty_L2

(40 × 1.5)/1000 × 10 + (20 × 1.5)/1000 × 100 = 0.6 + 3.0 = 3.6 clock cycles


L2 cache performance implications

L2 cache miss rate versus size

- Two-level cache uses 32 KB L1-I and 32 KB L1-D caches
- 64B block size, 2-way associative caches

[Figure: Miss rate versus cache size for multilevel caches. X-axis: cache size, 4 KB to 4096 KB; Y-axis: miss rate. Series: global miss rate, single-cache miss rate, and local L2 miss rate; the local L2 miss rate falls from 99% to 34% as the L2 grows.]


L2 cache performance implications

L2 cache miss rate versus size

- Global miss rate closely follows the single-level cache miss rate
- The local L2 cache miss rate is a poor performance indicator



L2 cache performance implications

Impact of L2 cache size on execution time

[Figure: Relative execution time versus second-level cache size, 256 KB to 8192 KB, for L2 hit times of 8 and 16 clock cycles. Relative execution time falls from about 2.34 (8-cycle hit) and 2.39 (16-cycle hit) at 256 KB to 1.02 and 1.06 at 8192 KB.]


Inclusion property

Are L1 cache contents always in the L2 cache?

- Multi-level inclusion guarantees that L1 data is always present in L2, L2 data in L3, . . .
- Simplifies cache consistency checks in multiprocessors, or between I/O devices and the processor
- Complicates the use of different block sizes for L1 and L2: an L1 refill may require storing more than one block
- Multi-level exclusion guarantees that L1 data is never present in L2, L2 data never present in L3, . . .
- Reasonable choice for systems with a small L2 cache relative to the L1 cache
- Effectively expands total caching space with a slower cache


Critical word first and early restart

Critical word first

- Cache block size tends to increase to exploit spatial locality
- Any given reference needs only one word from a multi-word block
- CWF fetches the requested word first and sends it to the processor
- The processor continues execution while the rest of the block is fetched

Early restart

- Fetches words in the order they are stored in the block
- As soon as the critical word arrives, it is sent to the processor and the processor restarts


Giving priority to read misses over writes

Write-through caches

- Write buffer holds written data to mask memory latency
- Write buffer may hold values needed by a later read miss

SW R3, 512(R0)   ; M[512] = R3  (cache index 0)
LW R1, 1024(R0)  ; R1 = M[1024] (cache index 0)
LW R2, 512(R0)   ; R2 = M[512]  (cache index 0)

- Store to 512(R0) with the block from cache index 0 waits in the write buffer
- Load from 1024(R0) misses and brings a new block into cache index 0
- Second load attempts to bring the block from 512(R0), which is held in the write buffer
- Memory RAW hazard


Giving priority to read misses over writes

Write-through caches

- Check contents of the write buffer on a read miss
- If there is no conflict, let the missing read bypass the pending write

Write-back caches

- Slow path: write the dirty block to memory, then fetch the new block from memory into the cache
- Faster path: write the dirty block to the write buffer, fetch the new block from memory into the cache, then write the dirty block from the buffer back to memory


Merging write buffer

Write buffer organization

- Processor blocks on a write if the write buffer is full
- Processor checks the write address against the addresses in the write buffer
- Processor merges writes to the same address if the address is present in the write buffer
- Assume a write buffer with 4 entries, each holding 4 64-bit words
- Writes to the same cache block in different cycles, no write merging


Merging write buffer

Write buffer organization

- Processor blocks on a write if the write buffer is full
- Processor checks the write address against the addresses in the write buffer
- Processor merges writes to the same address if the address is present in the write buffer
- Assume a write buffer with 4 entries, each holding 4 64-bit words
- Writes to the same cache block in different cycles, write merging


Victim cache

Tiny cache holds evicted cache blocks

- Small (e.g. 4-entry) fully associative buffer for evicted blocks
- Proposed to reduce the impact of conflicts on direct-mapped caches
- Victim cache + direct-mapped cache ≈ associative cache

[Figure: victim cache organization. The CPU address is checked against the cache tags and, in parallel, against the tags of the small fully associative victim cache; data in/out paths, a write buffer, and the next-level memory complete the datapath.]


3 C’s model

Characterization of cache misses

- Compulsory miss: a miss on the first access to a block since the program began execution. Also called a cold-start miss.
- Capacity miss: a miss that happens because a previously fetched block had to be replaced due to limited capacity (all blocks in the cache valid, so the cache had to select a victim). To count as a capacity miss, the block must have been fetched, replaced, and re-fetched.
- Conflict miss: a miss that happens because the block's address maps to the same cache location as other blocks in memory. To count as a conflict miss (as opposed to a compulsory miss on a first-time fetch), the block must have been fetched, replaced, and re-fetched while the cache still had locations that could hold it under a different address-mapping scheme.


Associativity and conflict misses

Miss rate distribution versus cache size

[Figure: miss rate per type (compulsory, capacity, conflict) versus cache size, 1 KB to 128 KB, for 1-way, 2-way, 4-way, and 8-way associativity; y-axis 0 to 0.14. Compulsory misses are vanishingly small.]


Associativity and conflict misses

2:1 cache rule: the miss rate of a 1-way associative cache of size X ≈ the miss rate of a 2-way associative cache of size X/2

[Figure: miss rate per type versus cache size, 1 KB to 128 KB, for 1-way, 2-way, 4-way, and 8-way associativity, with the capacity and conflict components marked.]


Associativity and conflict misses

Miss rate distribution

- Associativity tends to increase in modern caches

[Figure: distribution of miss rate per type versus cache size, 1 KB to 128 KB, as a percentage of all misses (0 to 100%), for 1-way, 2-way, 4-way, and 8-way associativity, split into conflict, capacity, and compulsory components.]


Increasing block size

Spatial locality

- Larger block size usually reduces compulsory misses
- Larger block size increases miss penalty, since the processor needs to fetch more data
- Increasing block size may increase conflict misses if spatial locality is poor (most words in a fetched block are not used)
- Increasing block size may increase capacity misses if spatial locality is poor (most words in a fetched block are not used)

Block size impact

[Figure: miss rate (0 to 25%) versus block size, 16 to 256 bytes, for cache sizes 1K, 4K, 16K, 64K, and 256K.]


Miss rate versus block size

SPEC92 benchmarks

Miss rate by block size (rows) and cache size (columns):

Block size    4K       16K      64K      256K
16            8.57%    3.94%    2.04%    1.09%
32            7.24%    2.87%    1.35%    0.70%
64            7.00%    2.64%    1.06%    0.51%
128           7.78%    2.77%    1.02%    0.49%
256           9.51%    3.29%    1.15%    0.49%


AMAT versus block size

SPEC92 benchmarks

- Example assumes an 80-cycle memory latency and a pipelined memory delivering 16 bytes per 2 cycles

AMAT (clock cycles) by block size and cache size:

Block size    Miss penalty    4K        16K      64K      256K
16            82              8.027     4.231    2.673    1.894
32            84              7.082     3.411    2.134    1.588
64            88              7.160     3.323    1.933    1.449
128           96              8.469     3.659    1.979    1.470
256           112             11.651    4.685    2.288    1.549


Larger caches

Implications of higher cache capacity

I Reduction of capacity missesI Longer hit timeI Increased area, power, and cost

I Very large multi-MB caches often placed off-chip


Increasing associativity

Example

- Higher associativity increases hit time
- Increased hit time for the L1 cache means increased cycle time
- Assume:

Hit time = 1.0 cycle
Miss penalty_direct-mapped = 25 cycles, to a perfect L2 cache
Clock cycle time_2-way = 1.36 × Clock cycle time_1-way
Clock cycle time_4-way = 1.44 × Clock cycle time_1-way
Clock cycle time_8-way = 1.52 × Clock cycle time_1-way

AMAT_8-way < AMAT_4-way?
AMAT_4-way < AMAT_2-way?
AMAT_2-way < AMAT_1-way?


Increasing associativity

Example (cont.)

- Assume:

Hit time = 1.0 cycle
Miss penalty_direct-mapped = 25 cycles, to a perfect L2 cache
Clock cycle time_2-way = 1.36 × Clock cycle time_1-way
Clock cycle time_4-way = 1.44 × Clock cycle time_1-way
Clock cycle time_8-way = 1.52 × Clock cycle time_1-way

AMAT_8-way = 1.52 + Miss rate_8-way × 25.0
AMAT_4-way = 1.44 + Miss rate_4-way × 25.0
AMAT_2-way = 1.36 + Miss rate_2-way × 25.0
AMAT_1-way = 1.00 + Miss rate_1-way × 25.0


AMAT versus associativity

SPEC92 benchmarks

AMAT (clock cycles) by cache size and associativity:

Cache size (KB)    One-way    Two-way    Four-way    Eight-way
4                  3.44       3.25       3.22        3.28
8                  2.69       2.58       2.55        2.62
16                 2.23       2.40       2.46        2.53
32                 2.06       2.30       2.37        2.45
64                 1.92       2.14       2.18        2.25
128                1.32       1.66       1.75        1.82
256                1.20       1.55       1.59        1.66


Way prediction

- Predict the way (cache bank) of the next cache access
  - Block predictor bit per way
  - 85% accuracy on the Alpha 21264
- Multiplexor set early to select the predicted block
  - If the tag match succeeds (hit), the predicted block is returned in one cycle
  - If the tag match fails (miss), the remaining blocks are checked in a second cycle
- Two hit times: fast hit (way predicted), slow hit (way mispredicted)
- Effective technique for power reduction
  - Supply power only to the tag arrays of the selected ways


Pseudoassociativity

- Cache organized as a direct-mapped cache with pseudo-sets
- Pseudo-sets contain blocks in different lines of the tag/data arrays
- A hit is processed as in a direct-mapped cache
- On a miss, check the other block in the pseudo-set
- Two hit times: fast hit (direct-mapped hit), slow hit (pseudo-set hit)
- Increases the hit time of the direct-mapped cache, especially if slow hits are many
- May increase miss penalty, due to the overhead of selecting the pseudo-way


Compiler optimizations

Cache-aware optimizations in software

- Code transformations to improve:
  - Spatial locality, through higher utilization of fetched cache blocks
  - Temporal locality, through reduction of the reuse distance of cache blocks
  - Examples: loop interchange, loop blocking, loop fusion, loop fission
- Data layout and data structure transformations to improve:
  - Spatial locality, through higher utilization of fetched cache blocks
  - Examples: array merging, structure/object class member reordering in memory, blocked array layouts


Array merging

Data structure reorganization for spatial locality

/* Before */
int val[SIZE];
int key[SIZE];

- Assume code accessing val[i] and key[i] for every i
- Accesses to val and key may conflict in direct-mapped caches
- Solution: merge the arrays; accesses to val[i] and key[i] no longer conflict in the cache, and spatial locality is exploited

/* After */
struct merge {
    int val;
    int key;
};
struct merge merged_array[SIZE];


Loop interchange

Code transformation for spatial locality

/* Before */
for (j = 0; j < 100; j++)
    for (i = 0; i < 5000; i++)
        x[i][j] = 2 * x[i][j];

- Arrays in C are stored in row-major order
- The innermost loop walks down a column of x
- Long strides lead to poor spatial locality

/* After */
for (i = 0; i < 5000; i = i+1)
    for (j = 0; j < 100; j = j+1)
        x[i][j] = 2 * x[i][j];


Data blocking

Code transformations for temporal locality

- Reduce the reuse distance for the same data
- Organize code so that data is accessed in blocks
- Best performance if a block is accessed many times, with few accesses to data outside the block

Example: matrix multiplication

/* Before */
for (i = 0; i < N; i = i+1)
    for (j = 0; j < N; j = j+1) {
        r = 0;
        for (k = 0; k < N; k = k + 1)
            r = r + y[i][k]*z[k][j];
        x[i][j] = r;
    }


Data blocking

Array accesses without blocking

- Snapshot with i=1
- Assume a cache line holds one array element
- The two innermost loops access N² elements of z, N elements of y, and N elements of x
- N × (N² + 2N) = N³ + 2N² capacity misses
- Need cache space of at least N² + N to exploit temporal locality


Data blocking

Example: blocked matrix multiplication

/* After */
for (jj = 0; jj < N; jj = jj+B)
    for (kk = 0; kk < N; kk = kk+B)
        for (i = 0; i < N; i = i+1)
            for (j = jj; j < min(jj+B,N); j = j+1) {
                r = 0;
                for (k = kk; k < min(kk+B,N); k = k + 1)
                    r = r + y[i][k]*z[k][j];
                x[i][j] = x[i][j] + r;
            }

- Load a block of z of size B × B
- Compute partial sums for B elements of x
- Load the next block
- (N/B) × (N/B) × (N × 2B + B²) = 2N³/B + N² capacity misses


Blocked matrix multiplication


Cache performance improvement

Reducing cache miss penalty

- Multi-level caches
- Critical-word first and early restart
- Priority to read misses over writes
- Merging write buffer
- Victim cache


Cache performance improvement

Reducing cache miss rate

- Higher associativity
- Larger block size
- Way prediction
- Pseudo-associativity
- Compiler transformations
