COMPUTER ORGANIZATION AND DESIGN: The Hardware/Software Interface, 5th Edition. Chapter 5: Large and Fast: Exploiting Memory Hierarchy



Chapter 5 — Large and Fast: Exploiting Memory Hierarchy — 2

§5.1 Introduction

Principle of Locality
- Programs access a small proportion of their address space at any time
- Temporal locality
  - Items accessed recently are likely to be accessed again soon
  - e.g., instructions in a loop, induction variables
- Spatial locality
  - Items near those accessed recently are likely to be accessed soon
  - e.g., sequential instruction access, array data


Taking Advantage of Locality
- Memory hierarchy
- Store everything on disk
- Copy recently accessed (and nearby) items from disk to smaller DRAM memory
  - Main memory
- Copy more recently accessed (and nearby) items from DRAM to smaller SRAM memory
  - Cache memory attached to CPU


Memory Hierarchy Levels
- Block (aka line): unit of copying
  - May be multiple words
- If accessed data is present in upper level
  - Hit: access satisfied by upper level
  - Hit ratio: hits/accesses
- If accessed data is absent
  - Miss: block copied from lower level
  - Time taken: miss penalty
  - Miss ratio: misses/accesses = 1 − hit ratio
  - Then accessed data supplied from upper level


§5.2 Memory Technologies

Memory Technology
- Static RAM (SRAM): 0.5ns – 2.5ns, $2000 – $5000 per GB
- Dynamic RAM (DRAM): 50ns – 70ns, $20 – $75 per GB
- Magnetic disk: 5ms – 20ms, $0.20 – $2 per GB
- Ideal memory
  - Access time of SRAM
  - Capacity and cost/GB of disk


SRAM Technology
- A 6-transistor SRAM cell stores one bit of information
- The word line (WL) and bit lines (BL) are used to read from and write to the cell
- To write, the data is imposed on the bit line and the inverse data on the inverse bit line


DRAM Technology
- Data stored as a charge in a capacitor
  - Single transistor used to access the charge
- Must periodically be refreshed
  - Read contents and write back
  - Performed on a DRAM "row"


Advanced DRAM Organization
- Bits in a DRAM are organized as a rectangular array
  - DRAM accesses an entire row
  - Burst mode: supply successive words from a row with reduced latency
- Double data rate (DDR) DRAM
  - Transfer on rising and falling clock edges
- Quad data rate (QDR) DRAM
  - Separate DDR inputs and outputs


DRAM Generations

Year  Capacity  $/GB
1980  64Kbit    $1,500,000
1983  256Kbit   $500,000
1985  1Mbit     $200,000
1989  4Mbit     $50,000
1992  16Mbit    $15,000
1996  64Mbit    $10,000
1998  128Mbit   $4,000
2000  256Mbit   $1,000
2004  512Mbit   $250
2007  1Gbit     $50


DRAM Performance Factors
- Row buffer
  - Allows several words to be read and refreshed in parallel
- Synchronous DRAM
  - Allows consecutive accesses in bursts without needing to send each address
  - Improves bandwidth
- DRAM banking
  - Allows simultaneous access to multiple DRAMs
  - Improves bandwidth


Increasing Memory Bandwidth
- 4-word wide memory
  - Miss penalty = 1 + 15 + 1 = 17 bus cycles
  - Bandwidth = 16 bytes / 17 cycles = 0.94 B/cycle
- 4-bank interleaved memory
  - Miss penalty = 1 + 15 + 4×1 = 20 bus cycles
  - Bandwidth = 16 bytes / 20 cycles = 0.8 B/cycle


Chapter 6 — Storage and Other I/O Topics — 12

§6.4 Flash Storage

Flash Storage
- Nonvolatile semiconductor storage
  - 100× – 1000× faster than disk
  - Smaller, lower power, more robust
  - But more $/GB (between disk and DRAM)


Flash Types
- NOR flash: bit cell like a NOR gate
  - Random read/write access
  - Used for instruction memory in embedded systems
- NAND flash: bit cell like a NAND gate
  - Denser (bits/area), but block-at-a-time access
  - Cheaper per GB
  - Used for USB keys, media storage, …
- Flash bits wear out after 1000's of accesses
  - Not suitable for direct RAM or disk replacement
  - Wear leveling: remap data to less-used blocks


§6.3 Disk Storage

Disk Storage
- Nonvolatile, rotating magnetic storage


Disk Sectors and Access
- Each sector records
  - Sector ID
  - Data (512 bytes; 4096 bytes proposed)
  - Error correcting code (ECC): used to hide defects and recording errors
  - Synchronization fields and gaps
- Access to a sector involves
  - Queuing delay if other accesses are pending
  - Seek: move the heads
  - Rotational latency
  - Data transfer
  - Controller overhead


Disk Access Example
- Given: 512B sector, 15,000rpm, 4ms average seek time, 100MB/s transfer rate, 0.2ms controller overhead, idle disk
- Average read time:
    4ms seek time
  + ½ rotation / (15,000/60 rev/s) = 2ms rotational latency
  + 512B / 100MB/s = 0.005ms transfer time
  + 0.2ms controller delay
  = 6.2ms
- If actual average seek time is 1ms, average read time = 3.2ms


Disk Performance Issues
- Manufacturers quote average seek time
  - Based on all possible seeks
  - Locality and OS scheduling lead to smaller actual average seek times
- Smart disk controllers allocate physical sectors on disk
  - Present a logical sector interface to the host
  - SCSI, ATA, SATA
- Disk drives include caches
  - Prefetch sectors in anticipation of access
  - Avoid seek and rotational delay


§5.3 The Basics of Caches

Cache Memory
- Cache memory
  - The level of the memory hierarchy closest to the CPU
- Given accesses X1, …, Xn–1, Xn
  - How do we know if the data is present?
  - Where do we look?


Direct Mapped Cache
- Location determined by address
- Direct mapped: only one choice
  - (Block address) modulo (#Blocks in cache)
  - #Blocks is a power of 2
  - Use low-order address bits


Tags and Valid Bits
- How do we know which particular block is stored in a cache location?
  - Store block address as well as the data
  - Actually, only need the high-order bits
  - Called the tag
- What if there is no data in a location?
  - Valid bit: 1 = present, 0 = not present
  - Initially 0


Cache Example
- 8 blocks, 1 word/block, direct mapped
- Initial state

Index  V  Tag  Data
000    N
001    N
010    N
011    N
100    N
101    N
110    N
111    N


Cache Example

Word addr  Binary addr  Hit/miss  Cache block
22         10 110       Miss      110

Index  V  Tag  Data
000    N
001    N
010    N
011    N
100    N
101    N
110    Y  10   Mem[10110]
111    N


Cache Example

Word addr  Binary addr  Hit/miss  Cache block
26         11 010       Miss      010

Index  V  Tag  Data
000    N
001    N
010    Y  11   Mem[11010]
011    N
100    N
101    N
110    Y  10   Mem[10110]
111    N


Cache Example

Word addr  Binary addr  Hit/miss  Cache block
22         10 110       Hit       110
26         11 010       Hit       010

Index  V  Tag  Data
000    N
001    N
010    Y  11   Mem[11010]
011    N
100    N
101    N
110    Y  10   Mem[10110]
111    N


Cache Example

Word addr  Binary addr  Hit/miss  Cache block
16         10 000       Miss      000
3          00 011       Miss      011
16         10 000       Hit       000

Index  V  Tag  Data
000    Y  10   Mem[10000]
001    N
010    Y  11   Mem[11010]
011    Y  00   Mem[00011]
100    N
101    N
110    Y  10   Mem[10110]
111    N


Cache Example

Word addr  Binary addr  Hit/miss  Cache block
18         10 010       Miss      010

Index  V  Tag  Data
000    Y  10   Mem[10000]
001    N
010    Y  10   Mem[10010]
011    Y  00   Mem[00011]
100    N
101    N
110    Y  10   Mem[10110]
111    N


Address Subdivision


Example: Larger Block Size
- 64 blocks, 16 bytes/block
- To what block number does address 1200 map?
  - Block address = 1200/16 = 75
  - Block number = 75 modulo 64 = 11
- Address layout (32-bit address):
  Tag: bits 31–10 (22 bits) | Index: bits 9–4 (6 bits) | Offset: bits 3–0 (4 bits)


Block Size Considerations
- Larger blocks should reduce miss rate
  - Due to spatial locality
- But in a fixed-sized cache
  - Larger blocks mean fewer of them
    - More competition, so increased miss rate
  - Larger blocks also mean more pollution
- Larger miss penalty
  - Can override the benefit of reduced miss rate
  - Early restart and critical-word-first can help


Miss Rate versus Block Size
(figure) Note that the miss rate actually goes up if the block size is too large relative to the cache size. Each line represents a cache of a different size.


Cache Misses
- On cache hit, CPU proceeds normally
- On cache miss
  - Stall the CPU pipeline
  - Fetch block from next level of hierarchy
  - Instruction cache miss: restart instruction fetch
  - Data cache miss: complete data access


Write-Through
- On data-write hit, could just update the block in cache
  - But then cache and memory would be inconsistent
- Write through: also update memory
- But makes writes take longer
  - e.g., if base CPI = 1, 10% of instructions are stores, and a write to memory takes 100 cycles
  - Effective CPI = 1 + 0.1×100 = 11
- Solution: write buffer
  - Holds data waiting to be written to memory
  - CPU continues immediately
  - Only stalls on write if write buffer is already full


Write-Back
- Alternative: on data-write hit, just update the block in cache
  - Keep track of whether each block is dirty
- When a dirty block is replaced
  - Write it back to memory
  - Can use a write buffer to allow the replacing block to be read first


Write Allocation
- What should happen on a write miss?
- Alternatives for write-through
  - Allocate on miss: fetch the block
  - Write around: don't fetch the block
    - Since programs often write a whole block before reading it (e.g., initialization)
- For write-back
  - Usually fetch the block


Example: Intrinsity FastMATH
- Embedded MIPS processor
  - 12-stage pipeline
  - Instruction and data access on each cycle
- Split cache: separate I-cache and D-cache
  - Each 16KB: 256 blocks × 16 words/block
  - D-cache: write-through or write-back
- SPEC2000 miss rates
  - I-cache: 0.4%
  - D-cache: 11.4%
  - Weighted average: 3.2%


Example: Intrinsity FastMATH


Main Memory Supporting Caches
- Use DRAMs for main memory
  - Fixed width (e.g., 1 word)
  - Connected by fixed-width clocked bus
    - Bus clock is typically slower than CPU clock
- Example cache block read
  - 1 bus cycle for address transfer
  - 15 bus cycles per DRAM access
  - 1 bus cycle per data transfer
- For 4-word block, 1-word-wide DRAM
  - Miss penalty = 1 + 4×15 + 4×1 = 65 bus cycles
  - Bandwidth = 16 bytes / 65 cycles = 0.25 B/cycle


§5.4 Measuring and Improving Cache Performance

Measuring Cache Performance
- Components of CPU time
  - Program execution cycles: includes cache hit time
  - Memory stall cycles: mainly from cache misses
- With simplifying assumptions:

  Memory stall cycles = (Memory accesses / Program) × Miss rate × Miss penalty
                      = (Instructions / Program) × (Misses / Instruction) × Miss penalty


Cache Performance Example
- Given
  - I-cache miss rate = 2%
  - D-cache miss rate = 4%
  - Miss penalty = 100 cycles
  - Base CPI (ideal cache) = 2
  - Loads & stores are 36% of instructions
- Miss cycles per instruction
  - I-cache: 0.02 × 100 = 2
  - D-cache: 0.36 × 0.04 × 100 = 1.44
- Actual CPI = 2 + 2 + 1.44 = 5.44
  - Ideal CPU is 5.44/2 = 2.72 times faster


Average Access Time
- Hit time is also important for performance
- Average memory access time (AMAT)
  - AMAT = Hit time + Miss rate × Miss penalty
- Example
  - CPU with 1ns clock, hit time = 1 cycle, miss penalty = 20 cycles, I-cache miss rate = 5%
  - AMAT = 1 + 0.05 × 20 = 2ns (2 cycles per instruction)


Performance Summary
- As CPU performance increases
  - Miss penalty becomes more significant
- Decreasing base CPI
  - Greater proportion of time spent on memory stalls
- Increasing clock rate
  - Memory stalls account for more CPU cycles
- Can't neglect cache behavior when evaluating system performance


Associative Caches
- Fully associative
  - Allow a given block to go in any cache entry
  - Requires all entries to be searched at once
  - Comparator per entry (expensive)
- n-way set associative
  - Each set contains n entries
  - Block number determines which set
    - (Block number) modulo (#Sets in cache)
  - Search all entries in a given set at once
  - n comparators (less expensive)


Associative Cache Example


Spectrum of Associativity
- For a cache with 8 entries


Associativity Example
- Compare 4-block caches
  - Direct mapped, 2-way set associative, fully associative
  - Block access sequence: 0, 8, 0, 6, 8
- Direct mapped

Block addr  Index  Hit/miss  Contents after access: [0]     [1]  [2]     [3]
0           0      miss                             Mem[0]
8           0      miss                             Mem[8]
0           0      miss                             Mem[0]
6           2      miss                             Mem[0]       Mem[6]
8           0      miss                             Mem[8]       Mem[6]


Associativity Example
- 2-way set associative

Block addr  Set  Hit/miss  Set 0 contents    Set 1 contents
0           0    miss      Mem[0]
8           0    miss      Mem[0], Mem[8]
0           0    hit       Mem[0], Mem[8]
6           0    miss      Mem[0], Mem[6]
8           0    miss      Mem[8], Mem[6]

- Fully associative

Block addr  Hit/miss  Contents after access
0           miss      Mem[0]
8           miss      Mem[0], Mem[8]
0           hit       Mem[0], Mem[8]
6           miss      Mem[0], Mem[8], Mem[6]
8           hit       Mem[0], Mem[8], Mem[6]


How Much Associativity?
- Increased associativity decreases miss rate
  - But with diminishing returns
- Simulation of a system with 64KB D-cache, 16-word blocks, SPEC2000
  - 1-way: 10.3%
  - 2-way: 8.6%
  - 4-way: 8.3%
  - 8-way: 8.1%


Set Associative Cache Organization


Replacement Policy
- Direct mapped: no choice
- Set associative
  - Prefer a non-valid entry, if there is one
  - Otherwise, choose among entries in the set
- Least-recently used (LRU)
  - Choose the one unused for the longest time
  - Simple for 2-way, manageable for 4-way, too hard beyond that
- Random
  - Gives approximately the same performance as LRU for high associativity


Multilevel Caches
- Primary cache attached to CPU
  - Small, but fast
- Level-2 cache services misses from primary cache
  - Larger, slower, but still faster than main memory
- Main memory services L-2 cache misses
- Some high-end systems include an L-3 cache


Multilevel Cache Example
- Given
  - CPU base CPI = 1, clock rate = 4GHz
  - Miss rate/instruction = 2%
  - Main memory access time = 100ns
- With just primary cache
  - Miss penalty = 100ns / 0.25ns = 400 cycles
  - Effective CPI = 1 + 0.02 × 400 = 9


Example (cont.)
- Now add L-2 cache
  - Access time = 5ns
  - Global miss rate to main memory = 0.5%
- Primary miss with L-2 hit
  - Penalty = 5ns / 0.25ns = 20 cycles
- Primary miss with L-2 miss
  - Extra penalty = 400 cycles
- CPI = 1 + 0.02 × 20 + 0.005 × 400 = 3.4
- Performance ratio = 9/3.4 = 2.6


Multilevel Cache Considerations
- Primary cache
  - Focus on minimal hit time
- L-2 cache
  - Focus on low miss rate to avoid main memory access
  - Hit time has less overall impact
- Results
  - L-1 cache usually smaller than a single-level cache would be
  - L-1 block size smaller than L-2 block size


Interactions with Advanced CPUs
- Out-of-order CPUs can execute instructions during a cache miss
  - Pending store stays in load/store unit
  - Dependent instructions wait in reservation stations
  - Independent instructions continue
- Effect of a miss depends on program data flow
  - Much harder to analyse
  - Use system simulation


Interactions with Software
- Misses depend on memory access patterns
  - Algorithm behavior
  - Compiler optimization for memory access


Software Optimization via Blocking
- Goal: maximize accesses to data before it is replaced
- Consider the inner loops of DGEMM (i is set by an enclosing loop, not shown):

    for (int j = 0; j < n; ++j)
    {
        double cij = C[i+j*n];          /* cij = C[i][j] */
        for (int k = 0; k < n; k++)
            cij += A[i+k*n] * B[k+j*n]; /* cij += A[i][k] * B[k][j] */
        C[i+j*n] = cij;
    }


DGEMM Access Pattern
- C, A, and B arrays
(figure: older accesses vs. new accesses)


Cache-Blocked DGEMM

    #define BLOCKSIZE 32
    void do_block(int n, int si, int sj, int sk,
                  double *A, double *B, double *C)
    {
        for (int i = si; i < si + BLOCKSIZE; ++i)
            for (int j = sj; j < sj + BLOCKSIZE; ++j)
            {
                double cij = C[i+j*n];          /* cij = C[i][j] */
                for (int k = sk; k < sk + BLOCKSIZE; k++)
                    cij += A[i+k*n] * B[k+j*n]; /* cij += A[i][k] * B[k][j] */
                C[i+j*n] = cij;
            }
    }

    void dgemm(int n, double *A, double *B, double *C)
    {
        for (int sj = 0; sj < n; sj += BLOCKSIZE)
            for (int si = 0; si < n; si += BLOCKSIZE)
                for (int sk = 0; sk < n; sk += BLOCKSIZE)
                    do_block(n, si, sj, sk, A, B, C);
    }


Blocked DGEMM Access Pattern
(figure: unoptimized vs. blocked)


§5.7 Virtual Memory

Virtual Memory
- Use main memory as a "cache" for secondary (disk) storage
  - Managed jointly by CPU hardware and the operating system (OS)
- Programs share main memory
  - Each gets a private virtual address space holding its frequently used code and data
  - Protected from other programs
- CPU and OS translate virtual addresses to physical addresses
  - VM "block" is called a page
  - VM translation "miss" is called a page fault


Address Translation
- Fixed-size pages (e.g., 4K)


Page Fault Penalty
- On a page fault, the page must be fetched from disk
  - Takes millions of clock cycles
  - Handled by OS code
- Try to minimize the page fault rate
  - Fully associative placement
  - Smart replacement algorithms


Page Tables
- Store placement information
  - Array of page table entries (PTEs), indexed by virtual page number
  - Page table register in the CPU points to the page table in physical memory
- If a page is present in memory
  - PTE stores the physical page number
  - Plus other status bits (referenced, dirty, …)
- If a page is not present
  - PTE can refer to its location in swap space on disk
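The lookup described above can be sketched in C. The PTE layout, table size, and function names here are illustrative assumptions for a tiny software model, not any real hardware format:

```c
#include <stdbool.h>
#include <stdint.h>

#define PAGE_BITS 12          /* 4 KB pages, as in the slides      */
#define NUM_PAGES 16          /* tiny table, just for illustration */

/* Hypothetical PTE: status bits plus a physical page number. */
typedef struct {
    bool     valid;           /* page resident in main memory?     */
    bool     referenced;      /* set on access, cleared by the OS  */
    bool     dirty;           /* set when the page is written      */
    uint32_t ppn;             /* physical page number, if valid    */
} pte_t;

pte_t page_table[NUM_PAGES];  /* indexed by virtual page number    */

/* Translate a virtual address; -1 signals a page fault to the OS. */
int64_t translate(uint32_t vaddr) {
    uint32_t vpn    = vaddr >> PAGE_BITS;
    uint32_t offset = vaddr & ((1u << PAGE_BITS) - 1);
    if (vpn >= NUM_PAGES || !page_table[vpn].valid)
        return -1;            /* not present: OS must fetch the page */
    page_table[vpn].referenced = true;
    return ((int64_t)page_table[vpn].ppn << PAGE_BITS) | offset;
}
```

The page offset passes through unchanged; only the virtual page number is replaced by the physical page number from the PTE.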


Translation Using a Page Table


Mapping Pages to Storage


Replacement and Writes
- To reduce the page fault rate, prefer least-recently used (LRU) replacement
  - Reference bit (aka use bit) in the PTE set to 1 on access to a page
  - Periodically cleared to 0 by the OS
  - A page with reference bit = 0 has not been used recently
- Disk writes take millions of cycles
  - Written a block at a time, not individual locations
  - Write-through is impractical, so use write-back
  - Dirty bit in the PTE set when the page is written
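The reference-bit scheme above can be sketched as a minimal model. The PTE fields match the slide's names, but the structure, table size, and victim-selection order (clean pages first, to avoid a write-back) are illustrative assumptions:

```c
#include <stdbool.h>

#define NUM_PAGES 4           /* tiny resident set for illustration */

/* Hypothetical PTE carrying the status bits named on the slide. */
typedef struct {
    bool valid;
    bool referenced;          /* use bit: 1 = touched since last sweep     */
    bool dirty;               /* 1 = must be written back before eviction  */
} pte_t;

/* OS timer tick: clear reference bits so recent use stands out. */
void clear_reference_bits(pte_t pt[], int n) {
    for (int i = 0; i < n; i++)
        pt[i].referenced = false;
}

/* Approximate LRU: evict a resident page whose reference bit is 0,
   preferring a clean one so no disk write-back is needed.          */
int choose_victim(const pte_t pt[], int n) {
    for (int i = 0; i < n; i++)           /* clean and unreferenced */
        if (pt[i].valid && !pt[i].referenced && !pt[i].dirty) return i;
    for (int i = 0; i < n; i++)           /* unreferenced but dirty */
        if (pt[i].valid && !pt[i].referenced) return i;
    return 0;                             /* all recently used      */
}
```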


Fast Translation Using a TLB
- Address translation would appear to require extra memory references
  - One to access the PTE
  - Then the actual memory access
- But access to page tables has good locality
  - So use a fast cache of PTEs within the CPU
  - Called a Translation Look-aside Buffer (TLB)
  - Typical: 16–512 PTEs, 0.5–1 cycle for a hit, 10–100 cycles for a miss, 0.01%–1% miss rate
  - Misses can be handled by hardware or software
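A fully associative TLB lookup can be modeled as below. Real hardware compares all entries in parallel; the loop, entry layout, and sizes here are illustrative assumptions:

```c
#include <stdbool.h>
#include <stdint.h>

#define TLB_ENTRIES 4         /* real TLBs hold 16-512 PTEs per the slide */

/* Hypothetical entry: a cached virtual-to-physical page mapping. */
typedef struct {
    bool     valid;
    uint32_t vpn;             /* virtual page number (acts as the tag) */
    uint32_t ppn;             /* cached physical page number           */
} tlb_entry_t;

tlb_entry_t tlb[TLB_ENTRIES];

/* Fully associative lookup: compare the VPN against every entry. */
bool tlb_lookup(uint32_t vpn, uint32_t *ppn_out) {
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            *ppn_out = tlb[i].ppn;
            return true;      /* TLB hit: no page table access needed */
        }
    }
    return false;             /* TLB miss: walk the page table        */
}
```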



TLB Misses
- If the page is in memory
  - Load the PTE from memory and retry
  - Could be handled in hardware
    - Can get complex for more complicated page table structures
  - Or in software
    - Raise a special exception, with an optimized handler
- If the page is not in memory (page fault)
  - OS handles fetching the page and updating the page table
  - Then restart the faulting instruction


TLB Miss Handler
- A TLB miss indicates either
  - Page present, but PTE not in TLB
  - Page not present
- Must recognize the TLB miss before the destination register is overwritten
  - Raise an exception
- Handler copies the PTE from memory to the TLB
  - Then restarts the instruction
  - If the page is not present, a page fault will occur


Page Fault Handler
- Use the faulting virtual address to find the PTE
- Locate the page on disk
- Choose a page to replace
  - If dirty, write it to disk first
- Read the page into memory and update the page table
- Make the process runnable again
  - Restart from the faulting instruction


TLB and Cache Interaction
- If the cache tag uses the physical address
  - Need to translate before cache lookup
- Alternative: use virtual address tags
  - Complications due to aliasing
    - Different virtual addresses for a shared physical address



Memory Protection
- Different tasks can share parts of their virtual address spaces
  - But need to protect against errant access
  - Requires OS assistance
- Hardware support for OS protection
  - Privileged supervisor mode (aka kernel mode)
  - Privileged instructions
  - Page tables and other state information only accessible in supervisor mode
  - System call exception (e.g., syscall in MIPS)


§5.8 A Common Framework for Memory Hierarchies

The Memory Hierarchy: The BIG Picture
- Common principles apply at all levels of the memory hierarchy
  - Based on notions of caching
- At each level in the hierarchy
  - Block placement
  - Finding a block
  - Replacement on a miss
  - Write policy


Block Placement
- Determined by associativity
  - Direct mapped (1-way set associative): one choice for placement
  - n-way set associative: n choices within a set
  - Fully associative: any location
- Higher associativity reduces the miss rate
  - But increases complexity, cost, and access time
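The placement choices above reduce to how many sets an address can map to. A minimal sketch, with illustrative names, treating the cache as `num_blocks` blocks grouped into sets of size `assoc`:

```c
#include <stdint.h>

/* Which set a block maps to, as a function of associativity.
   Direct mapped: one set per block. Fully associative: one set total. */
uint32_t set_index(uint32_t block_addr, uint32_t num_blocks, uint32_t assoc) {
    uint32_t num_sets = num_blocks / assoc;   /* assoc == num_blocks => 1 set */
    return block_addr % num_sets;
}
```

For an 8-block cache, block address 12 maps to exactly one block when direct mapped, one of 2 blocks in set 0 when 2-way, and anywhere when fully associative.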


Finding a Block
- Hardware caches
  - Reduce comparisons to reduce cost
- Virtual memory
  - Full table lookup makes full associativity feasible
  - Benefit in reduced miss rate

| Associativity         | Location method                               | Tag comparisons |
|-----------------------|-----------------------------------------------|-----------------|
| Direct mapped         | Index                                         | 1               |
| n-way set associative | Set index, then search entries within the set | n               |
| Fully associative     | Search all entries                            | #entries        |
| Fully associative     | Full lookup table                             | 0               |


Replacement
- Choice of entry to replace on a miss
  - Least recently used (LRU): complex and costly hardware for high associativity
  - Random: close to LRU, easier to implement
- Virtual memory
  - LRU approximation with hardware support


Write Policy
- Write-through
  - Update both upper and lower levels
  - Simplifies replacement, but may require a write buffer
- Write-back
  - Update upper level only
  - Update lower level when the block is replaced
  - Need to keep more state
- Virtual memory
  - Only write-back is feasible, given disk write latency
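The two policies can be contrasted with a deliberately minimal one-block model. The names and structure here are illustrative, not from the text:

```c
#include <stdbool.h>
#include <stdint.h>

/* One cache block; `lower_level` stands in for the memory below. */
typedef struct {
    uint32_t data;
    bool     dirty;           /* write-back only: block modified?   */
} block_t;

uint32_t lower_level;

/* Write-through: update the block and the lower level together. */
void write_through(block_t *b, uint32_t value) {
    b->data = value;
    lower_level = value;      /* every write goes down immediately  */
}

/* Write-back: update the block only; defer the lower level. */
void write_back(block_t *b, uint32_t value) {
    b->data  = value;
    b->dirty = true;          /* remember to flush on replacement   */
}

/* On replacement, a dirty block must be flushed first. */
void evict(block_t *b) {
    if (b->dirty) {
        lower_level = b->data;
        b->dirty = false;
    }
}
```

With write-back, the lower level is stale until eviction, which is exactly why the dirty bit is needed.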


Sources of Misses
- Compulsory misses (aka cold-start misses)
  - First access to a block
- Capacity misses
  - Due to finite cache size
  - A replaced block is later accessed again
- Conflict misses (aka collision misses)
  - In a non-fully associative cache
  - Due to competition for entries in a set
  - Would not occur in a fully associative cache of the same total size


Cache Design Trade-offs

| Design change          | Effect on miss rate         | Negative performance effect                                                            |
|------------------------|-----------------------------|----------------------------------------------------------------------------------------|
| Increase cache size    | Decreases capacity misses   | May increase access time                                                               |
| Increase associativity | Decreases conflict misses   | May increase access time                                                               |
| Increase block size    | Decreases compulsory misses | Increases miss penalty; for very large block sizes, may increase miss rate due to pollution |


§5.9 Using a Finite State Machine to Control a Simple Cache

Cache Control
- Example cache characteristics
  - Direct-mapped, write-back, write allocate
  - Block size: 4 words (16 bytes)
  - Cache size: 16 KB (1024 blocks)
  - 32-bit byte addresses
  - Valid bit and dirty bit per block
  - Blocking cache: CPU waits until access is complete

Address breakdown (32-bit byte address):

| Field  | Bits  | Width   |
|--------|-------|---------|
| Tag    | 31–14 | 18 bits |
| Index  | 13–4  | 10 bits |
| Offset | 3–0   | 4 bits  |
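The field split for this example cache can be expressed directly as shifts and masks. The function names are illustrative; the widths follow from the slide's parameters (16-byte blocks give 4 offset bits, 1024 blocks give 10 index bits, and the remaining 18 bits are the tag):

```c
#include <stdint.h>

#define OFFSET_BITS 4         /* log2(16-byte block)  */
#define INDEX_BITS  10        /* log2(1024 blocks)    */

/* Byte position within the block. */
uint32_t cache_offset(uint32_t addr) {
    return addr & ((1u << OFFSET_BITS) - 1);
}

/* Which of the 1024 blocks the address maps to. */
uint32_t cache_index(uint32_t addr) {
    return (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
}

/* Remaining upper bits, stored and compared on each access. */
uint32_t cache_tag(uint32_t addr) {
    return addr >> (OFFSET_BITS + INDEX_BITS);
}
```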


Interface Signals
- CPU ↔ Cache: Read/Write, Valid, Address (32 bits), Write Data (32 bits), Read Data (32 bits), Ready
- Cache ↔ Memory: Read/Write, Valid, Address (32 bits), Write Data (128 bits), Read Data (128 bits), Ready
- Multiple cycles per access


§5.16 Concluding Remarks

Concluding Remarks
- Fast memories are small, large memories are slow
  - We really want fast, large memories
  - Caching gives this illusion
- Principle of locality
  - Programs use a small part of their memory space frequently
- Memory hierarchy
  - L1 cache ↔ L2 cache ↔ … ↔ DRAM memory ↔ disk
- Memory system design is critical for multiprocessors