Chapter 5 Large and Fast: Exploiting Memory Hierarchy


Page 1: 5 — Memory Hierarchy · 5 The “Memory Wall” Processor vs DRAM speed disparity continues to grow Good memory hierarchy (cache) design is increasingly important to overall performance

Chapter 5 Large and Fast: Exploiting Memory Hierarchy

Page 2:

Memory Technology
Static RAM (SRAM): 0.5ns – 2.5ns, $2000 – $5000 per GB
Dynamic RAM (DRAM): 50ns – 70ns, $20 – $75 per GB
SSD (Flash technology)
Magnetic disk: 5ms – 20ms, $0.20 – $2 per GB
Ideal memory: the access time of SRAM with the capacity and cost/GB of disk

§5.1 Introduction

Rechnerstrukturen 182.092 5 — Memory Hierarchy

Page 3:

SRAM vs DRAM
(Figure: SRAM and DRAM cell schematics, showing WORD lines and BIT lines)

Page 4:

Processor-Memory Performance Gap
(Figure: performance vs. year, 1980–2004, log scale 1–10,000)
“Moore’s Law”: µProc 55%/year (2X/1.5yr); DRAM 7%/year (2X/10yrs)
Processor-Memory Performance Gap (grows 50%/year)

Page 5:

The “Memory Wall”
Processor vs DRAM speed disparity continues to grow.
Good memory hierarchy (cache) design is increasingly important to overall performance.
(Figure: clocks per instruction for the core vs. clocks per DRAM access, log scale 0.01–1000, for VAX/1980, PPro/1996, and 2010+)

Page 6:

A Typical Memory Hierarchy
Take advantage of the principle of locality to present the user with as much memory as is available in the cheapest technology, at the speed offered by the fastest technology.
(Figure: on-chip components (RegFile, Instr Cache, Data Cache, ITLB, DTLB, Control, Datapath) backed by a Second-Level Cache (SRAM), Main Memory (DRAM), and Secondary Memory (Disk))
Speed (cycles): ½’s → 1’s → 10’s → 100’s → 10,000’s
Size (bytes): 100’s → 10K’s → M’s → G’s → T’s
Cost: highest → lowest

Page 7:

»Geographical« Distances
register file    1 m      on your desk
on-chip cache    2 m      in your desk
on-board cache   10 m     in your room
main memory      100 m    in the house
disk             1000 km  abroad

Page 8:

Principle of Locality
Programs access a small proportion of their address space at any time.
Temporal locality: items accessed recently are likely to be accessed again soon (e.g., instructions in a loop, induction variables).
Spatial locality: items near those accessed recently are likely to be accessed soon (e.g., sequential instruction access, array data).

Page 9:

Taking Advantage of Locality
Memory hierarchy: store everything on disk.
Copy recently accessed (and nearby) items from disk to a smaller DRAM memory: the main memory.
Copy more recently accessed (and nearby) items from DRAM to a smaller SRAM memory: the cache memory attached to the CPU.

Page 10:

Memory Hierarchy Levels
Block (aka line): the unit of copying; may be multiple words.
If accessed data is present in the upper level, it is a hit: the access is satisfied by the upper level. Hit ratio: hits/accesses.
If accessed data is absent, it is a miss: the block is copied from the lower level, taking the miss penalty, and the accessed data is then supplied from the upper level. Miss ratio: misses/accesses = 1 – hit ratio.

Page 11:

Cache Memory
The cache is the level of the memory hierarchy closest to the CPU.
Given accesses X1, …, Xn–1, Xn: how do we know if the data is present? Where do we look?

§5.2 The Basics of Caches

Page 12:

Direct Mapped Cache
Location is determined by the address. Direct mapped: only one choice,
(Block address) modulo (#Blocks in cache)
With #Blocks a power of 2, this just uses the low-order address bits.

Page 13:

Tags and Valid Bits
How do we know which particular block is stored in a cache location? Store the block address as well as the data. Actually, only the high-order bits are needed; these are called the tag.
What if there is no data in a location? A valid bit marks each entry: 1 = present, 0 = not present. Initially 0.

Page 14:

Cache Example
8 blocks, 1 word/block, direct mapped. Initial state:

Index  V  Tag  Data
000    N
001    N
010    N
011    N
100    N
101    N
110    N
111    N

Page 15:

Cache Example

Word addr  Binary addr  Hit/miss  Cache block
22         10 110       Miss      110

Index  V  Tag  Data
000    N
001    N
010    N
011    N
100    N
101    N
110    Y  10   Mem[10110]
111    N

Page 16:

Cache Example

Word addr  Binary addr  Hit/miss  Cache block
26         11 010       Miss      010

Index  V  Tag  Data
000    N
001    N
010    Y  11   Mem[11010]
011    N
100    N
101    N
110    Y  10   Mem[10110]
111    N

Page 17:

Cache Example

Word addr  Binary addr  Hit/miss  Cache block
22         10 110       Hit       110
26         11 010       Hit       010

Index  V  Tag  Data
000    N
001    N
010    Y  11   Mem[11010]
011    N
100    N
101    N
110    Y  10   Mem[10110]
111    N

Page 18:

Cache Example

Word addr  Binary addr  Hit/miss  Cache block
16         10 000       Miss      000
3          00 011       Miss      011
16         10 000       Hit       000

Index  V  Tag  Data
000    Y  10   Mem[10000]
001    N
010    Y  11   Mem[11010]
011    Y  00   Mem[00011]
100    N
101    N
110    Y  10   Mem[10110]
111    N

Page 19:

Cache Example

Word addr  Binary addr  Hit/miss  Cache block
18         10 010       Miss      010

Index  V  Tag  Data
000    Y  10   Mem[10000]
001    N
010    Y  10   Mem[10010]
011    Y  00   Mem[00011]
100    N
101    N
110    Y  10   Mem[10110]
111    N
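The hit/miss sequence worked through on pages 14–19 can be reproduced with a short simulator. This is a sketch in Python (not part of the slides); the geometry and addresses come from the example, the function and variable names are mine.

```python
# Direct-mapped cache simulator: 8 blocks, 1 word/block.
# Index = low 3 bits of the word address; tag = remaining high bits.
NUM_BLOCKS = 8

def simulate(addresses):
    cache = [None] * NUM_BLOCKS  # each entry holds a tag, or None if invalid
    results = []
    for addr in addresses:
        index = addr % NUM_BLOCKS
        tag = addr // NUM_BLOCKS
        if cache[index] == tag:
            results.append("hit")
        else:
            results.append("miss")
            cache[index] = tag   # copy the block from the lower level
    return results

# The access sequence used on pages 15-19:
print(simulate([22, 26, 22, 26, 16, 3, 16, 18]))
# → ['miss', 'miss', 'hit', 'hit', 'miss', 'miss', 'hit', 'miss']
```

The final miss (address 18) replaces the block at index 010, exactly as in the page 19 table.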

Page 20:

Address Subdivision
(Figure: a direct-mapped cache with the address split into tag, index, and byte-offset fields)

Page 21:

Example: Larger Block Size
64 blocks, 16 bytes/block. To what block number does address 1200 map?
Block address = 1200 / 16 = 75
Block number = 75 modulo 64 = 11
Address fields: tag = bits 31–10 (22 bits), index = bits 9–4 (6 bits), offset = bits 3–0 (4 bits).
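The decomposition above can be checked directly. A sketch (Python and the names are mine; the bit widths are the slide's):

```python
# 64 blocks x 16 bytes/block: offset = 4 bits, index = 6 bits, tag = the rest.
OFFSET_BITS = 4   # 16-byte blocks
INDEX_BITS = 6    # 64 blocks

def split_address(addr):
    offset = addr & ((1 << OFFSET_BITS) - 1)
    block_address = addr >> OFFSET_BITS              # 1200 // 16 = 75
    index = block_address & ((1 << INDEX_BITS) - 1)  # 75 mod 64 = 11
    tag = block_address >> INDEX_BITS
    return tag, index, offset

print(split_address(1200))  # → (1, 11, 0)
```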

Page 22:

Block Size Considerations
Larger blocks should reduce the miss rate, due to spatial locality.
But in a fixed-sized cache, larger blocks mean fewer of them: more competition, and thus an increased miss rate. With ever larger blocks, the spatial-locality benefit also decreases.
Larger blocks also mean a larger miss penalty, which can override the benefit of a reduced miss rate. Early restart and critical-word-first can help.

Page 23:

Miss Rate vs Block Size vs Cache Size
Miss rate goes up if the block size becomes a significant fraction of the cache size, because the number of blocks that can be held in the same-size cache becomes smaller (increasing capacity misses).
(Figure: miss rate (%) vs. block size (16–256 bytes) for cache sizes of 4 KB, 8 KB, 16 KB, 64 KB, and 256 KB; block size ↑ implies miss penalty ↑)

Page 24:

Cache Misses
On a cache hit, the CPU proceeds normally. On a cache miss: stall the CPU pipeline, and fetch the block from the next level of the hierarchy.
Instruction cache miss: restart the instruction fetch.
Data cache miss: complete the data access.

Page 25:

Write-Through
On a data-write hit, we could just update the block in the cache, but then cache and memory would be inconsistent.
Write-through: also update memory. But this makes writes take longer. E.g., if base CPI = 1, 10% of instructions are stores, and a write to memory takes 100 cycles:
Effective CPI = 1 + 0.1 × 100 = 11
Solution: a write buffer, which holds data waiting to be written to memory. The CPU continues immediately and only stalls on a write if the write buffer is already full.
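The slide's numbers for naive write-through can be checked in a line of Python (a sketch; the variable names are mine):

```python
# Effective CPI with write-through and no write buffer:
# every store stalls for the full memory write.
base_cpi = 1.0
store_fraction = 0.10
write_cycles = 100

effective_cpi = base_cpi + store_fraction * write_cycles
print(effective_cpi)  # → 11.0
```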

Page 26:

Write(Copy)-Back
Alternative: on a data-write hit, just update the block in the cache, and keep track of whether each block is dirty.
When a dirty block is replaced, write it back to memory. A write buffer can be used so that the replacing block can be read first.

Page 27:

Write Allocation
What should happen on a write miss? Alternatives for write-through:
Allocate on miss: fetch the block.
Write around: don’t fetch the block, since programs often write a whole block before reading it (e.g., initialization).
For write-back: usually fetch the block.

Page 28:

Example: Intrinsity FastMATH
Embedded MIPS processor with a 12-stage pipeline; instruction and data access on each cycle.
Split cache: separate I-cache and D-cache, each 16KB: 256 blocks × 16 words/block. The D-cache supports write-through or write-back.
SPEC2000 miss rates: I-cache 0.4%, D-cache 11.4%, weighted average 3.2%.

Page 29:

Example: Intrinsity FastMATH
(Figure: FastMATH cache organization)

Page 30:

Main Memory Supporting Caches
Use DRAMs for main memory: fixed width (e.g., 1 word), connected by a fixed-width clocked bus whose clock is typically slower than the CPU clock.
Example cache block read: 1 bus cycle for the address transfer, 15 bus cycles per DRAM access, 1 bus cycle per data transfer.
For a 4-word block and 1-word-wide DRAM:
Miss penalty = 1 + 4 × 15 + 4 × 1 = 65 bus cycles
Bandwidth = 16 bytes / 65 cycles ≈ 0.25 bytes/cycle

Page 31:

Increasing Memory Bandwidth
4-word-wide memory: miss penalty = 1 + 15 + 1 = 17 bus cycles; bandwidth = 16 bytes / 17 cycles ≈ 0.94 bytes/cycle.
4-bank interleaved memory: miss penalty = 1 + 15 + 4 × 1 = 20 bus cycles; bandwidth = 16 bytes / 20 cycles = 0.8 bytes/cycle.
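The three memory organizations on pages 30–31 can be compared with the same cycle counts (a Python sketch; the constant names are mine, the timings are the slides'):

```python
# Miss penalty (bus cycles) and bandwidth (bytes/cycle) for a 4-word
# (16-byte) cache block under three memory organizations.
ADDR, DRAM, XFER, WORDS, BYTES = 1, 15, 1, 4, 16

one_word_wide = ADDR + WORDS * DRAM + WORDS * XFER   # 65 cycles
four_word_wide = ADDR + DRAM + XFER                  # 17 cycles
four_bank_interleaved = ADDR + DRAM + WORDS * XFER   # 20 cycles

for name, cycles in [("1-word-wide", one_word_wide),
                     ("4-word-wide", four_word_wide),
                     ("4-bank interleaved", four_bank_interleaved)]:
    print(f"{name}: {cycles} cycles, {BYTES / cycles:.2f} B/cycle")
```

Interleaving gets most of the wide-memory bandwidth while keeping a 1-word bus: the four DRAM accesses overlap, and only the four data transfers are serialized.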

Page 32:

Advanced DRAM Organization
Bits in a DRAM are organized as a rectangular array, and a DRAM access reads an entire row. Burst mode supplies successive words from a row with reduced latency.
Double data rate (DDR) DRAM: transfers on both rising and falling clock edges.
Quad data rate (QDR) DRAM: separate DDR inputs and outputs.

Page 33:

DRAM Generations
(Figure: row and column access times Trac and Tcac, in ns, vs. year 1980–2007)

Year  Capacity  $/GB
1980  64Kbit    $1,500,000
1983  256Kbit   $500,000
1985  1Mbit     $200,000
1989  4Mbit     $50,000
1992  16Mbit    $15,000
1996  64Mbit    $10,000
1998  128Mbit   $4,000
2000  256Mbit   $1,000
2004  512Mbit   $250
2007  1Gbit     $50

Page 34:

Measuring Cache Performance
Components of CPU time: program execution cycles (which include the cache hit time) and memory stall cycles (mainly from cache misses).
With simplifying assumptions:

Memory stall cycles = (Memory accesses / Program) × Miss rate × Miss penalty
                    = (Instructions / Program) × (Misses / Instruction) × Miss penalty

§5.3 Measuring and Improving Cache Performance

Page 35:

Cache Performance Example
Given: I-cache miss rate = 2%, D-cache miss rate = 4%, miss penalty = 100 cycles, base CPI (ideal cache) = 2, loads & stores are 36% of instructions.
Miss cycles per instruction: I-cache 0.02 × 100 = 2; D-cache 0.36 × 0.04 × 100 = 1.44.
Actual CPI = 2 + 2 + 1.44 = 5.44, so a CPU with an ideal cache would be 5.44/2 = 2.72 times faster.
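The example above, as a Python sketch (the variable names are mine, the figures are the slide's):

```python
# CPI with separate I-cache and D-cache miss contributions.
base_cpi = 2.0
miss_penalty = 100
icache_miss_rate = 0.02          # one instruction fetch per instruction
dcache_miss_rate = 0.04
mem_ops_per_instruction = 0.36   # loads & stores

icache_stalls = icache_miss_rate * miss_penalty                            # 2.0
dcache_stalls = mem_ops_per_instruction * dcache_miss_rate * miss_penalty  # 1.44
actual_cpi = base_cpi + icache_stalls + dcache_stalls

print(round(actual_cpi, 2), round(actual_cpi / base_cpi, 2))  # → 5.44 2.72
```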

Page 36:

Average Access Time
Hit time is also important for performance. Average memory access time (AMAT):
AMAT = Hit time + Miss rate × Miss penalty
Example: CPU with a 1ns clock, hit time = 1 cycle, miss penalty = 20 cycles, I-cache miss rate = 5%.
AMAT = 1 + 0.05 × 20 = 2ns (2 cycles per instruction)
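The AMAT formula above, evaluated with the slide's numbers (a sketch; names are mine):

```python
# AMAT = hit time + miss rate * miss penalty, in cycles of a 1 ns clock.
hit_time = 1
miss_rate = 0.05
miss_penalty = 20

amat = hit_time + miss_rate * miss_penalty
print(amat)  # → 2.0  (i.e., 2 ns with a 1 ns clock)
```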

Page 37:

Performance Summary
As CPU performance increases, the miss penalty becomes more significant. Decreasing the base CPI means a greater proportion of time is spent on memory stalls; increasing the clock rate means memory stalls account for more CPU cycles.
We can’t neglect cache behavior when evaluating system performance.

Page 38:

Associative Caches
Fully associative: a given block may go in any cache entry. All entries must be searched at once, requiring a comparator per entry (expensive).
n-way set associative: each set contains n entries, and the block number determines the set: (Block number) modulo (#Sets in cache). All entries in the given set are searched at once, requiring n comparators (less expensive).

Page 39:

Associative Cache Example
(Figure: placement of a block in direct-mapped, set-associative, and fully associative caches)

Page 40:

Spectrum of Associativity
For a cache with 8 entries.
(Figure: the same 8 entries organized as 1-way, 2-way, 4-way, and 8-way set associative)

Page 41:

Associativity Example
Compare 4-block caches: direct mapped, 2-way set associative, and fully associative, on the block access sequence 0, 8, 0, 6, 8.

Direct mapped:
Block addr  Cache index  Hit/miss  Cache content after access (indices 0–3)
0           0            miss      Mem[0]
8           0            miss      Mem[8]
0           0            miss      Mem[0]
6           2            miss      Mem[0], Mem[6]
8           0            miss      Mem[8], Mem[6]

Page 42:

Associativity Example

2-way set associative:
Block addr  Cache index  Hit/miss  Set 0
0           0            miss      Mem[0]
8           0            miss      Mem[0], Mem[8]
0           0            hit       Mem[0], Mem[8]
6           0            miss      Mem[0], Mem[6]
8           0            miss      Mem[8], Mem[6]
(All accesses map to set 0; set 1 stays empty.)

Fully associative:
Block addr  Hit/miss  Cache content after access
0           miss      Mem[0]
8           miss      Mem[0], Mem[8]
0           hit       Mem[0], Mem[8]
6           miss      Mem[0], Mem[8], Mem[6]
8           hit       Mem[0], Mem[8], Mem[6]
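The three tables on pages 41–42 can be reproduced with one small LRU simulator. A Python sketch (not from the slides): a fully associative cache is modeled as a single set holding all entries, and the function name is mine.

```python
from collections import OrderedDict

def simulate_assoc(accesses, num_blocks=4, ways=1):
    """LRU cache simulator; ways=num_blocks gives a fully associative cache."""
    num_sets = num_blocks // ways
    sets = [OrderedDict() for _ in range(num_sets)]  # keys in LRU order
    results = []
    for block in accesses:
        s = sets[block % num_sets]
        if block in s:
            results.append("hit")
            s.move_to_end(block)          # mark as most recently used
        else:
            results.append("miss")
            if len(s) == ways:
                s.popitem(last=False)     # evict the least recently used
            s[block] = None
    return results

seq = [0, 8, 0, 6, 8]
print(simulate_assoc(seq, ways=1))  # direct mapped:     all misses
print(simulate_assoc(seq, ways=2))  # 2-way set assoc:   miss miss hit miss miss
print(simulate_assoc(seq, ways=4))  # fully associative: miss miss hit miss hit
```

With this sequence, each step up in associativity removes one miss: 5, then 4, then 3.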

Page 43:

How Much Associativity?
Increased associativity decreases the miss rate, but with diminishing returns. Simulation of a system with a 64KB D-cache, 16-word blocks, SPEC2000:
1-way: 10.3%  2-way: 8.6%  4-way: 8.3%  8-way: 8.1%

Page 44:

Set Associative Cache Organization
(Figure: a set-associative cache with one tag comparator per way and a multiplexor selecting the hit data)

Page 45:

Replacement Policy
Direct mapped: no choice. Set associative: prefer a non-valid entry if there is one; otherwise, choose among the entries in the set.
Least-recently used (LRU): choose the entry unused for the longest time. Simple for 2-way, manageable for 4-way, too hard beyond that.
Random: gives approximately the same performance as LRU for high associativity.

Page 46:

Multilevel Caches
The primary cache is attached to the CPU: small, but fast. The level-2 cache services misses from the primary cache: larger and slower, but still faster than main memory. Main memory services L2 cache misses. Some high-end systems include an L3 cache.

Page 47:

Multilevel Cache Example
Given: CPU base CPI = 1, clock rate = 4GHz, miss rate = 2% per instruction, main memory access time = 100ns.
With just the primary cache: miss penalty = 100ns/0.25ns = 400 cycles, so effective CPI = 1 + 0.02 × 400 = 9.

Page 48:

Example (cont.)
Now add an L2 cache: access time = 5ns, global miss rate to main memory = 0.5%.
Primary miss with L2 hit: penalty = 5ns/0.25ns = 20 cycles.
Primary miss with L2 miss: extra penalty = 400 cycles (the main memory access).
CPI = 1 + 0.02 × 20 + 0.005 × 400 = 3.4
Performance ratio = 9/3.4 = 2.6
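The two-level example on pages 47–48 as a Python sketch (the variable names are mine, the parameters are the slides'):

```python
# Multilevel cache CPI: 4 GHz clock, so 0.25 ns per cycle.
base_cpi = 1.0
cycle_ns = 0.25
l1_miss_rate = 0.02                # L1 misses per instruction
global_miss_rate = 0.005           # accesses that miss in both L1 and L2
l2_hit_penalty = 5 / cycle_ns      # 20 cycles
mem_penalty = 100 / cycle_ns       # 400 cycles

cpi_l1_only = base_cpi + l1_miss_rate * mem_penalty
cpi_with_l2 = (base_cpi
               + l1_miss_rate * l2_hit_penalty
               + global_miss_rate * mem_penalty)

print(round(cpi_l1_only, 1), round(cpi_with_l2, 1),
      round(cpi_l1_only / cpi_with_l2, 1))  # → 9.0 3.4 2.6
```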

Page 49:

Multilevel Cache Considerations
Primary cache: focus on minimal hit time. L2 cache: focus on a low miss rate to avoid main memory accesses; its hit time has less overall impact.
Results: the L1 cache is usually smaller than a single-level cache would be, and the L1 block size is smaller than the L2 block size.

Page 50:

Interactions with Advanced CPUs
Out-of-order CPUs can execute instructions during a cache miss: a pending store stays in the load/store unit, dependent instructions wait in reservation stations, and independent instructions continue.
The effect of a miss depends on the program’s data flow, which is much harder to analyse; use system simulation.

Page 51:

Virtual Memory
Use main memory as a “cache” for secondary (disk) storage, managed jointly by CPU hardware and the operating system (OS).
Programs share main memory: each gets a private virtual address space holding its frequently used code and data, protected from other programs.
The CPU and OS translate virtual addresses to physical addresses. A VM “block” is called a page; a VM translation “miss” is called a page fault.

§5.4 Virtual Memory

Page 52:

Two Programs Sharing Physical Memory
A program’s address space is divided into pages (all one fixed size) or segments (variable sizes). The starting location of each page (either in main memory or in secondary memory) is contained in the program’s page table.
(Figure: Program 1’s and Program 2’s virtual address spaces mapping into main memory)

Page 53:

Address Translation
A virtual address is translated to a physical address by a combination of hardware and software, so each memory request first requires an address translation from the virtual space to the physical space. A virtual memory miss (i.e., when the page is not in physical memory) is called a page fault.

Virtual Address (VA): virtual page number (bits 31–12), page offset (bits 11–0)
Physical Address (PA): physical page number (bits 29–12), page offset (bits 11–0)
Translation maps the virtual page number to a physical page number; the page offset is unchanged.
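The mapping above can be sketched in a few lines of Python. This is an illustration only: the 12-bit offset matches the slide's 4 KB pages, but the dict-based page table, the function name, and the example mapping are hypothetical.

```python
# Virtual -> physical translation with 4 KB pages (12-bit page offset).
# The "page table" here is just a dict {virtual page number: physical
# page number}; a missing entry stands in for a page fault.
PAGE_OFFSET_BITS = 12

def translate(va, page_table):
    vpn = va >> PAGE_OFFSET_BITS
    offset = va & ((1 << PAGE_OFFSET_BITS) - 1)
    if vpn not in page_table:
        raise LookupError(f"page fault on virtual page {vpn:#x}")
    return (page_table[vpn] << PAGE_OFFSET_BITS) | offset

page_table = {0x12345: 0x00042}          # hypothetical mapping
pa = translate(0x12345ABC, page_table)
print(hex(pa))  # → 0x42abc  (page number translated, offset 0xABC kept)
```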

Page 54:

Page Fault Penalty
On a page fault, the page must be fetched from disk. This takes millions of clock cycles and is handled by OS code.
Try to minimize the page fault rate: fully associative placement in memory, and smart replacement algorithms.

Page 55:

Page Tables
The page table stores placement information: an array of page table entries (PTEs), indexed by virtual page number. A page table register in the CPU points to the page table in physical memory.
If the page is present in memory, the PTE stores the physical page number, plus other status bits (referenced, dirty, …).
If the page is not present, the PTE can refer to a location in swap space on disk.

Page 56:

Address Translation Mechanisms
(Figure: the page table register points to the page table in main memory; the virtual page number indexes the table, and a valid PTE supplies the physical page base address, which is combined with the unchanged offset; invalid entries refer to disk storage)

Page 57:

Replacement and Writes
To reduce the page fault rate, prefer least-recently used (LRU) replacement. A reference bit (aka use bit) in the PTE is set to 1 on each access to the page and periodically cleared to 0 by the OS; a page with reference bit = 0 has not been used recently.
Disk writes take millions of cycles and are done a block at a time, not to individual locations, so write-through is impractical. Use write-back: a dirty bit in the PTE is set when the page is written.

Page 58:

Fast Translation Using a TLB
Address translation would appear to require extra memory references: one to access the PTE, then the actual memory access. But access to page tables has good locality, so we use a fast cache of PTEs within the CPU, called a Translation Look-aside Buffer (TLB).
Typical: 16–512 PTEs, 0.5–1 cycle for a hit, 10–100 cycles for a miss, 0.01%–1% miss rate. Misses can be handled by hardware or software.

Page 59:

Fast Translation Using a TLB
(Figure: the TLB caches recently used PTEs; on a TLB hit the physical page number comes directly from the TLB, on a TLB miss the page table is consulted)

Page 60:

TLB Misses
If the page is in memory: load the PTE from memory and retry. This can be handled in hardware or in software.
If the page is not in memory (page fault): the OS handles fetching the page and updating the page table, then the faulting instruction is restarted.

Page 61:

Memory Protection
Different tasks can share parts of their virtual address spaces, but we need to protect against errant access. This requires OS assistance.
Hardware support for OS protection: a privileged supervisor mode (aka kernel mode), privileged instructions, page tables and other state information accessible only in supervisor mode, and a system call exception (e.g., syscall in MIPS).

Page 62:

The Memory Hierarchy: The BIG Picture
Common principles apply at all levels of the memory hierarchy, based on notions of caching. At each level in the hierarchy there are four questions: block placement, finding a block, replacement on a miss, and write policy.

§5.5 A Common Framework for Memory Hierarchies

Page 63: 5 — Memory Hierarchy · 5 The “Memory Wall” Processor vs DRAM speed disparity continues to grow Good memory hierarchy (cache) design is increasingly important to overall performance

63

Block Placement Determined by associativity

Direct mapped (1-way associative) One choice for placement

n-way set associative n choices within a set

Fully associative Any location

Higher associativity reduces miss rate Increases complexity, cost, and access time


Finding a Block

- Hardware caches: reduce the number of tag comparisons to reduce cost.
- Virtual memory: a full lookup table makes full associativity feasible; the benefit is a reduced miss rate.

  Associativity           Location method                      Tag comparisons
  Direct mapped           Index                                1
  n-way set associative   Set index, then search entries       n
                          within the set
  Fully associative       Search all entries                   #entries
                          Full lookup table                    0


Replacement

- Choice of entry to replace on a miss:
  - Least recently used (LRU): complex and costly hardware for high associativity
  - Random: close to LRU in miss rate, easier to implement
- Virtual memory: LRU approximation with hardware support


Write Policy

- Write-through: update both the upper and the lower level. Simplifies replacement, but may require a write buffer.
- Write-back: update the upper level only; update the lower level when the block is replaced. Needs to keep more state.
- Virtual memory: only write-back is feasible, given disk write latency.


Sources of Misses

- Compulsory misses (aka cold-start misses): the first access to a block.
- Capacity misses: due to finite cache size; a replaced block is later accessed again.
- Conflict misses (aka collision misses): in a non-fully-associative cache, due to competition for entries in a set; they would not occur in a fully associative cache of the same total size.


Cache Design Trade-offs

  Design change            Effect on miss rate            Negative performance effect
  Increase cache size      Decreases capacity misses      May increase access time
  Increase associativity   Decreases conflict misses      May increase access time
  Increase block size      Decreases compulsory misses    Increases miss penalty; for very
                                                          large block sizes, may increase
                                                          miss rate due to pollution


Multilevel On-Chip Caches

§5.10 Real Stuff: The AMD Opteron X4 and Intel Nehalem

[Figure: AMD Opteron X4 die — per core: 32KB L1 I-cache, 32KB L1 D-cache, 512KB L2 cache]
[Figure: Intel Nehalem 4-core processor]


2-Level TLB Organization

  Virtual address:   Intel Nehalem 48 bits; AMD Opteron X4 48 bits
  Physical address:  Intel Nehalem 44 bits; AMD Opteron X4 48 bits (256 TB)
  L1 TLB (per core):
    Nehalem:    L1 I-TLB: 128 entries for small pages, 7 per thread (2×) for
                large pages; L1 D-TLB: 64 entries for small pages, 32 for large
                pages; both 4-way, LRU replacement
    Opteron X4: L1 I-TLB: 48 entries; L1 D-TLB: 48 entries; both fully
                associative, LRU replacement
  L2 TLB (per core):
    Nehalem:    single L2 TLB: 512 entries, 4-way, LRU replacement
    Opteron X4: L2 I-TLB: 512 entries; L2 D-TLB: 512 entries; both 4-way,
                round-robin LRU
  TLB misses:  handled in hardware on both


3-Level Cache Organization

  L1 caches (per core):
    Nehalem:    I-cache: 32KB, 64-byte blocks, 4-way, approx LRU replacement,
                hit time n/a; D-cache: 32KB, 64-byte blocks, 8-way, approx LRU
                replacement, write-back/allocate, hit time n/a
    Opteron X4: I-cache: 32KB, 64-byte blocks, 2-way, LRU replacement, hit
                time 3 cycles; D-cache: 32KB, 64-byte blocks, 2-way, LRU
                replacement, write-back/allocate, hit time 9 cycles
  L2 unified cache (per core):
    Nehalem:    256KB, 64-byte blocks, 8-way, approx LRU replacement,
                write-back/allocate, hit time n/a
    Opteron X4: 512KB, 64-byte blocks, 16-way, approx LRU replacement,
                write-back/allocate, hit time n/a
  L3 unified cache (shared):
    Nehalem:    8MB, 64-byte blocks, 16-way, replacement n/a,
                write-back/allocate, hit time n/a
    Opteron X4: 2MB, 64-byte blocks, 32-way, replace the block shared by the
                fewest cores, write-back/allocate, hit time 32 cycles

  n/a: data not available


Pitfalls

- Byte vs. word addressing. Example: in a 32-byte direct-mapped cache with 4-byte blocks, byte address 36 maps to block 1, while word address 36 maps to block 4.

- Ignoring memory system effects when writing or generating code. Example: iterating over the rows vs. the columns of an array; large strides result in poor locality.

§5.11 Fallacies and Pitfalls


Pitfalls

- In a multiprocessor with a shared L2 or L3 cache: if the cache has less associativity than the number of cores sharing it, conflict misses result; more cores ⇒ the associativity needs to increase.
- Using AMAT to evaluate the performance of out-of-order processors: AMAT does not capture the overlap of misses with useful work; evaluate performance by simulation instead.


Concluding Remarks

§5.12 Concluding Remarks

- Fast memories are small; large memories are slow. We really want fast, large memories, and caching gives this illusion.
- Principle of locality: programs use a small part of their memory space frequently.
- Memory hierarchy: L1 cache ↔ L2 cache ↔ … ↔ DRAM memory ↔ disk
- Memory system design is critical for multiprocessors.
