TRANSCRIPT

inst.eecs.berkeley.edu/~cs61c UCB CS61C : Machine Structures
Lecture 34 – Virtual Memory II

DATA VIRTUALIZATION
11/19/2014: Primary Data, a new startup, came out of stealth mode. The Los Altos company has raised about $63 million in funding for its plan to decouple data from traditional storage, much as server virtualization decoupled computing from servers in the data center. It does this through an appliance placed in the data center and software deployed in the customer's IT infrastructure.
http://www.zdnet.com/can-wozniak-reshape-the-data-center-7000036018/
A powerful metadata server provides a logical abstraction of physical storage, while the policy engine intelligently automates data movement and placement to enable the vision of the software-defined datacenter: Simple, cost-effective, scalable performance.
CS61C L33 Virtual Memory I (2) Lustig, Fall 2014 © UCB
Another View of the Memory Hierarchy
[Diagram: the memory hierarchy from the upper level (faster, smaller) to the lower level (slower, larger): Regs, L2 Cache, Memory, Disk, Tape. The unit of transfer at each boundary: instructions and operands between registers and cache, cache blocks between cache and memory, pages between memory and disk, files between disk and tape. Caches covered the hierarchy thus far; next: virtual memory.]
Virtual Memory
• A fully associative "cache" for the disk
• Provides a program with the illusion of a very large main memory: the working set of "pages" resides in main memory; the others reside on disk
• Enables the OS to share memory and protect programs from each other
• Today, more important for protection than as just another level of the memory hierarchy
• Each process thinks it has all the memory to itself
Mapping Virtual Memory to Physical Memory
[Diagram: a process's virtual memory (stack, heap, static, code, starting at virtual address 0) is mapped partly onto 64 MB of physical memory, also starting at address 0, and partly onto disk.]
Paging/Virtual Memory: Multiple Processes
[Diagram: User A's and User B's virtual memories (each containing stack, heap, static, and code, and each starting at virtual address 0) are mapped through their own page tables (A's Page Table and B's Page Table) into the same 64 MB of physical memory.]
Review: Paging Terminology
• Programs use virtual addresses (VAs)
  - The space of all virtual addresses is called virtual memory (VM)
  - Divided into pages indexed by virtual page number (VPN)
• Main memory is indexed by physical addresses (PAs)
  - The space of all physical addresses is called physical memory (PM)
  - Divided into pages indexed by physical page number (PPN)
Paging Organization (assume 1 KB pages)
• Page is the unit of mapping (via the address translation map)
• Page is also the unit of transfer from disk to physical memory
[Diagram: a virtual memory of 32 pages of 1 KB each (page 0 at virtual address 0, page 1 at 1024, page 2 at 2048, ..., page 31 at 31744) is translated to a physical memory of 8 pages of 1 KB each (page 0 at physical address 0, page 1 at 1024, ..., page 7 at 7168).]
Virtual Memory Mapping Function
• How large is main memory? Disk? Don't know! They are designed to be interchangeable components
• Need a system that works regardless of their sizes
• Use a lookup table (the page table) to deal with arbitrary mappings
• Index the lookup table by the # of pages in VM (not all entries will be used/valid)
• The size of PM will affect the size of the stored translation
Address Mapping: Page Table
Page Table functionality:
• Incoming request is a Virtual Address (VA); we want a Physical Address (PA)
• Physical offset = virtual offset (page-aligned), so just swap the Virtual Page Number (VPN) for the Physical Page Number (PPN)
Implementation?
• Use the VPN as an index into the Page Table
• Store the PPN and management bits (Valid, Access Rights)
• Does NOT store the actual data (the data sits in PM)
[Diagram: the VA splits into Virtual Page # and page offset; translation swaps the Virtual Page # for the Physical Page #, keeping the page offset.]
Page Table Layout
Each Page Table entry holds: V (Valid) | AR (Access Rights) | PPN
Virtual Address: VPN | offset
1) Index into the PT using the VPN
2) Check the Valid and Access Rights bits
3) Concatenate the PPN and the offset to form the Physical Address
4) Use the PA to access memory
Notes on Page Table
• Solves the fragmentation problem: all chunks are the same size, so all holes can be used
• OS must reserve "swap space" on disk for each process
• To grow a process, ask the Operating System
  - If there are unused pages, the OS uses them first
  - If not, the OS swaps some old pages to disk (using Least Recently Used to pick which pages to swap)
• Each process has its own Page Table
• Will add details, but the Page Table is the essence of Virtual Memory
A program's address space contains 4 regions:
• stack: local variables, grows downward
• heap: space requested via malloc(), resizes dynamically, grows upward
• static data: variables declared outside main, does not grow or shrink
• code: loaded when the program starts, does not change
[Diagram: the address space from ~0hex (code, then static data, then the heap growing upward) to ~FFFF FFFFhex (the stack growing downward). For now, the OS somehow prevents accesses between stack and heap (gray hash lines).]
Why would a process need to "grow"?
Question: How many bits wide are the following fields?
• 16 KiB pages
• 40-bit virtual addresses
• 64 GiB physical memory

   VPN  PPN
A)  26   26
B)  24   20
C)  22   22
D)  26   22
Retrieving Data from Memory
[Diagram: User 1's virtual address VA1 and User 2's virtual address VA2 are translated through their respective page tables (PT User 1, PT User 2) into physical memory.]
1) Access the page table for address translation
2) Access the correct physical address
Requires two accesses of physical memory!
Translation Look-Aside Buffers (TLBs)
• TLBs are usually small, typically 128 - 256 entries
• Like any other cache, the TLB can be direct mapped, set associative, or fully associative
[Diagram: the Processor sends a VA to the TLB Lookup. On a TLB hit, the resulting PA goes to the Cache; on a TLB miss, the Translation is performed (page table walk) and then the cache is accessed. On a cache hit, data is returned; on a cache miss, Main Memory is accessed.]
• Can the cache hold the requested data if the corresponding page is not in physical memory? No!
TLBs vs. Caches
• TLBs are usually small, typically 16 – 512 entries
• TLB access time is comparable to the cache's (« main memory)
• TLBs can have associativity; usually fully/highly associative
[Diagram: a D$/I$ takes a memory address and returns the data at that address; on a miss, it accesses the next cache level or main memory. A TLB takes a VPN and returns a PPN; on a miss, it accesses the Page Table in main memory.]
Address Translation Using TLB
[Diagram: the Virtual Address splits into VPN and Page Offset; the VPN further splits into TLB Tag and TLB Index (used just like in a cache). The TLB maps (TLB Tag, TLB Index) to a PPN, and the Physical Address is the PPN concatenated with the Page Offset. The PA is then split a second time into the Data Cache's Tag | Index | Offset to access the cache's data blocks.]
Note: the PA is split two different ways, and the TIO breakdowns for the VA and the PA are unrelated!
Question: How many bits wide are the following?
• 16 KiB pages
• 40-bit virtual addresses
• 64 GiB physical memory
• 2-way set associative TLB with 512 entries

   TLB Tag  TLB Index  TLB Entry
A)    12       14         38
B)    18        8         45
C)    14       12         40
D)    17        9         43

[Diagram: a TLB entry holds Valid, Dirty, Ref, Access Rights (2 bits), the TLB Tag, and the PPN.]
Fetching Data on a Memory Read
1) Check the TLB (input: VPN, output: PPN)
• TLB Hit: fetch the translation, return the PPN
• TLB Miss: check the page table (in memory)
  - Page Table Hit: load the page table entry into the TLB
  - Page Table Miss (Page Fault): fetch the page from disk to memory, update the corresponding page table entry, then load the entry into the TLB
2) Check the cache (input: PPN, output: data)
• Cache Hit: return the data value to the processor
• Cache Miss: fetch the data value from memory, store it in the cache, return it to the processor
Page Faults
• Load the page off the disk into a free page of memory
  - Switch to some other process while we wait
• An interrupt is raised when the page has been loaded and the process's page table is updated
  - When we switch back to the task, the desired data will be in memory
• If memory is full, replace a page (chosen by LRU), writing it back to disk if necessary, and update both page table entries
• Continuous swapping between disk and memory is called "thrashing"
CS61C L33 Virtual Memory I (23) Garcia, Spring 2014 © UCB
Virtual Memory Summary
User program view:
• Contiguous memory
• Starts from some set VA
• "Infinitely" large
• Is the only running program

Reality:
• Non-contiguous memory
• Starts wherever available memory is
• Finite size
• Many programs running simultaneously

Virtual memory provides:
• Illusion of contiguous memory
• All programs starting at the same set address
• Illusion of ~infinite memory (2^32 or 2^64 bytes)
• Protection, sharing

Implementation:
• Divide memory into chunks (pages)
• OS controls the page table that maps virtual into physical addresses
• Memory acts as a cache for disk
• The TLB is a cache for the page table