Multilevel Caches & Replacement Policies
Cache memory
Cache memories are small, fast SRAM-based memories managed automatically in hardware.
They hold frequently accessed blocks of main memory.
A cache is a small amount of fast memory that sits between normal main memory and the CPU.
It may be located on the CPU chip or module.
Multiple-Level Caches
More levels in the memory hierarchy: a system can have two (or more) levels of cache.
The Level-1 cache (or L1 cache, or internal cache) is smaller and faster, and lies in the processor next to the CPU.
The Level-2 cache (or L2 cache, or external cache) is larger but slower, and lies outside the processor.
A memory access first goes to the L1 cache. If the L1 access is a miss, go to the L2 cache. If the L2 access is a miss, go to main memory. If main memory misses (a page fault), go to virtual memory on the hard disk.
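A minimal sketch of this lookup order in Python (the Cache class, its fields, and the cycle counts below are illustrative assumptions, not a real simulator API):

```python
# Sketch of the multilevel lookup order described above.
# The Cache class and access-time figures are illustrative assumptions,
# not measurements of any particular machine.

class Cache:
    def __init__(self, name, access_time):
        self.name = name
        self.access_time = access_time      # cycles to probe this level
        self.lines = {}                     # block address -> data

    def contains(self, block):
        return block in self.lines

def access(block, l1, l2, main_memory_time=100):
    """Return total cycles for one access: probe L1, then L2, then main memory."""
    cycles = l1.access_time
    if l1.contains(block):
        return cycles                       # L1 hit
    cycles += l2.access_time
    if l2.contains(block):
        return cycles                       # L1 miss, L2 hit
    return cycles + main_memory_time        # miss in both caches
```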
Multilevel Caches
Small, fast Level 1 (L1) cache: often on-chip for speed and bandwidth.
Larger, slower Level 2 (L2) cache: closely coupled to the CPU; may be on-chip, or nearby on the module.
Typical bus architecture
Multi-Level Caches
Options: separate data and instruction caches, or a unified cache.
Instruction and Data Caches
Can either have a separate Instruction Cache and Data Cache, or one unified cache.
Advantage of separate caches: the Instruction Cache and Data Cache can be accessed simultaneously in the same cycle, as required by a pipelined datapath.
Advantage of a unified cache: more flexible, so it may have a higher hit rate.
Cache Performance Metrics
Miss Rate
Fraction of memory references not found in the cache (misses/references).
Typical numbers: 3-10% for L1; can be quite small (e.g., < 1%) for L2, depending on size, etc.
Hit Time
Time to deliver a line in the cache to the processor (includes the time to determine whether the line is in the cache).
Typical numbers: 1 clock cycle for L1; 3-8 clock cycles for L2.
Miss Penalty
Additional time required because of a miss.
Typically 25-100 cycles for main memory.
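These three metrics combine into the standard average memory access time (AMAT) formula, applied level by level: AMAT = hit time + miss rate × miss penalty. A small worked example, using illustrative values picked from the ranges above (the 10% L2 local miss rate is an assumption, not a figure from the slides):

```python
# AMAT for a two-level cache:
#   AMAT = L1 hit time + L1 miss rate * (L2 hit time + L2 miss rate * memory penalty)
# The numbers below are illustrative values drawn from the typical ranges above.

l1_hit, l1_miss_rate = 1, 0.05        # 1 cycle, 5% miss rate
l2_hit, l2_miss_rate = 5, 0.10        # 5 cycles, 10% local miss rate (assumed)
mem_penalty = 100                     # cycles to main memory

amat = l1_hit + l1_miss_rate * (l2_hit + l2_miss_rate * mem_penalty)
print(f"AMAT = {amat:.2f} cycles")    # 1.75 cycles
```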
Cache Memory Design Parameters
Cache size(in bytes or words).A larger cache can hold more ofthe programs useful data but is more costly and likely to be
slower.
Blockor cache-line size (unit of data transfer between cacheand main). With a larger cache line, more data is brought in
cache with each miss. This can improve the hit rate but also
may bring low-utility data in.
Placement policy. Determining where an incoming cache line
is stored. More flexible policies imply higher hardware cost
and may or may not have performance benefits (due to more
complex data location).15/08/2012 9CA & PP assignment presentation
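As a sketch of how a placement policy constrains where a line may go, the snippet below computes the set index for a set-associative cache. The sizes are illustrative assumptions; a direct-mapped cache is the ways = 1 special case, and a fully associative cache is the num_sets = 1 case.

```python
# In a set-associative cache, the block address picks exactly one set,
# and the block may occupy any of the `ways` lines in that set.
# All sizes below are illustrative.

block_size = 64          # bytes per cache line
num_sets   = 128         # sets in the cache
ways       = 4           # lines per set (4-way set associative)

def set_index(addr):
    block = addr // block_size      # which memory block the address falls in
    return block % num_sets         # which set that block maps to

addr = 0x1234_5678
print(f"address {addr:#x} maps to set {set_index(addr)}")
```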
Cache Memory Design Parameters (cont.)
Replacement policy. Determines which of several existing cache blocks (into which a new cache line can be mapped) should be overwritten. Typical policies: choosing a random block or the least recently used block.
Write policy. Determines whether updates to cached words are immediately forwarded to main memory (write-through) or modified blocks are copied back to main memory if and when they must be replaced (write-back or copy-back).
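A minimal sketch of the two write policies (the class and variable names are hypothetical; a real cache tracks a per-line dirty bit in hardware):

```python
# Sketch of write-through vs. write-back, with a dict standing in for DRAM.
# Names are illustrative, not a real cache-controller interface.

main_memory = {}

class WriteThroughCache:
    def __init__(self):
        self.lines = {}

    def write(self, block, data):
        self.lines[block] = data
        main_memory[block] = data     # every write is forwarded immediately

class WriteBackCache:
    def __init__(self):
        self.lines = {}
        self.dirty = set()            # blocks modified since they were loaded

    def write(self, block, data):
        self.lines[block] = data
        self.dirty.add(block)         # defer the memory update

    def evict(self, block):
        if block in self.dirty:       # copy back only if modified
            main_memory[block] = self.lines[block]
            self.dirty.discard(block)
        del self.lines[block]
```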
Cache Memory: Replacement Policy
When a main-memory (MM) block needs to be brought in while all the candidate cache-memory (CM) blocks are occupied, one of them has to be replaced. The block to be replaced can be selected in one of the following ways:
1) Optimal replacement
2) Random selection
3) FIFO (first-in, first-out)
4) LRU (least recently used)
Optimal Replacement
Replace the block that is no longer needed in the future. If all blocks currently in CM will be used again, replace the one that will not be used for the longest time.
Optimal replacement is obviously the best but is not realistic, simply because when a block will be needed in the future is usually not known ahead of time.
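Although unrealizable in hardware, the optimal (Belady) choice is easy to express offline when the full reference trace is recorded in advance. A small sketch, with illustrative names:

```python
# Belady's optimal policy on a recorded trace: evict the resident block
# whose next use lies farthest in the future (or that is never used again).
# Offline sketch only; real hardware cannot see the future reference stream.

def optimal_victim(resident, trace, now):
    """Pick the victim among `resident` blocks, given the full future `trace`."""
    def next_use(block):
        for t in range(now + 1, len(trace)):
            if trace[t] == block:
                return t
        return float("inf")           # never used again: the ideal victim
    return max(resident, key=next_use)

trace = ["A", "B", "C", "A", "B", "D", "A"]
print(optimal_victim({"A", "B", "C"}, trace, now=2))  # "C": never referenced again
```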
Random selection
Replace a randomly selected block among all blocks currently in CM.
It only requires a random or pseudo-random number generator.
[Figure: random policy – the new block replaces an old block chosen at random.]
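A minimal sketch of random replacement (hypothetical class, using Python's pseudo-random generator):

```python
# Random replacement: the victim is drawn uniformly from the occupied lines.
# No usage bookkeeping is needed, only a (pseudo-)random number generator.
import random

class RandomCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = {}                              # block -> data

    def insert(self, block, data):
        if block not in self.lines and len(self.lines) >= self.capacity:
            victim = random.choice(list(self.lines)) # pick any resident block
            del self.lines[victim]
        self.lines[block] = data
```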
FIFO
Replace the block that has been in CM for the longest time.
The FIFO strategy just requires a queue Q to store references to the blocks in the cache.
[Figure: FIFO policy – blocks tagged with insertion times from 7:10 am to 10:10 am; the new block replaces the block present longest (inserted at 7:10 am).]
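A minimal FIFO sketch using a queue to remember insertion order (names are illustrative):

```python
# FIFO replacement: a queue records insertion order; the victim is always
# the block that entered the cache first, regardless of how recently it was used.
from collections import deque

class FIFOCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = {}
        self.queue = deque()                  # oldest block at the left end

    def insert(self, block, data):
        if block not in self.lines and len(self.lines) >= self.capacity:
            victim = self.queue.popleft()     # present longest -> evicted
            del self.lines[victim]
        if block not in self.lines:
            self.queue.append(block)
        self.lines[block] = data
```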
LRU
Replace the block in CM that has not been used for the longest time, i.e., the least recently used (LRU) block.
Implementing the LRU strategy requires the use of a priority queue Q.
[Figure: LRU policy – blocks tagged with last-use times from 6:50 am to 10:02 am; the new block replaces the least recently used block (last used at 6:50 am).]
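A minimal LRU sketch; here Python's OrderedDict stands in for the priority queue Q mentioned above (names are illustrative):

```python
# LRU replacement: every access moves the block to the most-recently-used
# position; the victim is the block at the least-recently-used end.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()              # LRU block at the front

    def access(self, block, data):
        if block in self.lines:
            self.lines.move_to_end(block)       # mark as most recently used
        elif len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)      # evict the LRU block
        self.lines[block] = data
```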
THANK YOU