
Page 1: Paging

Paging

Andrew Whitaker

CSE451

Page 2: Paging

Review: Process (Virtual) Address Space

Each process has its own address space
The OS and the hardware translate virtual addresses to physical frames

[Figure: a process address space, divided into user space and kernel space]

Page 3: Paging

Multiple Processes

Each process has its own address space
And, its own set of page tables
Kernel mappings are the same for all

[Figure: the address spaces of proc1 and proc2, each with its own user space but the same kernel space]

Page 4: Paging

Linux Physical Memory Layout

Page 5: Paging

Paging Issues

Memory scarcity: Virtual memory, stay tuned…
Making Paging Fast
Reducing the Overhead of Page Tables

Page 6: Paging

Review: Mechanics of address translation

[Figure: address translation. The virtual address is split into a virtual page # and an offset; the page table maps the virtual page # to a page frame #, which is combined with the offset to form the physical address in physical memory (page frames 0 through Y).]

Problem: page tables live in memory
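A minimal sketch of the lookup the figure describes, in C. The 4KB page size, 20-bit page numbers, and field names are illustrative assumptions, not from the slides; the point is that the page table itself is just an array sitting in memory, so a naive design needs an extra memory access per reference.

#include <stdint.h>
#include <stdio.h>

/* Illustrative parameters: 4KB pages, 20-bit virtual page numbers. */
#define PAGE_SHIFT 12
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define NUM_PAGES  (1u << 20)

typedef struct {
    uint32_t frame : 20;  /* page frame number */
    uint32_t valid : 1;   /* is this mapping present? */
} pte_t;

static pte_t page_table[NUM_PAGES];  /* lives in memory -- that is the problem */

/* Translate a virtual address by indexing the page table with the
 * virtual page number and re-attaching the offset. */
int translate(uint32_t vaddr, uint32_t *paddr)
{
    uint32_t vpn    = vaddr >> PAGE_SHIFT;
    uint32_t offset = vaddr & (PAGE_SIZE - 1);
    pte_t pte = page_table[vpn];

    if (!pte.valid)
        return -1;                       /* page fault */
    *paddr = (pte.frame << PAGE_SHIFT) | offset;
    return 0;
}

int main(void)
{
    page_table[5].frame = 42;            /* map virtual page 5 -> frame 42 */
    page_table[5].valid = 1;

    uint32_t paddr;
    if (translate((5u << PAGE_SHIFT) | 0x123, &paddr) == 0)
        printf("physical address = 0x%x\n", paddr);
    return 0;
}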

Page 7: Paging

Making Paging Fast

We must avoid a page table lookup for every memory reference
This would double memory access time

Solution: Translation Lookaside Buffer (TLB)
Fancy name for a cache
The TLB stores a subset of PTEs (page table entries)
TLBs are small and fast (16-48 entries)
Can be accessed “for free”

Page 8: Paging

TLB Details

In practice, most (> 99%) of memory translations are handled by the TLB

Each processor has its own TLB
The TLB is fully associative
Any TLB slot can hold any PTE

Who fills the TLB? Two options:
Hardware (x86) walks the page table on a TLB miss
A software routine (MIPS, Alpha) fills the TLB on a miss

The TLB itself needs a replacement policy
Usually implemented in hardware (LRU)
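A sketch of what a small, fully associative, software-filled TLB might look like. The 16-entry size, the LRU counter, and the helper names are illustrative assumptions; real hardware compares all tags in parallel rather than looping, and x86 fills the TLB in hardware rather than calling a routine like this.

#include <stdint.h>
#include <stdbool.h>

#define TLB_ENTRIES 16   /* illustrative: real TLBs hold roughly 16-48 entries */

typedef struct {
    uint32_t vpn;        /* virtual page number (the tag) */
    uint32_t frame;      /* cached page frame number */
    uint64_t last_used;  /* timestamp for LRU replacement */
    bool     valid;
} tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];
static uint64_t now;     /* logical clock for LRU */

/* Fully associative lookup: any slot may hold any translation,
 * so every slot is checked. */
bool tlb_lookup(uint32_t vpn, uint32_t *frame)
{
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            tlb[i].last_used = ++now;
            *frame = tlb[i].frame;
            return true;                 /* TLB hit */
        }
    }
    return false;                        /* TLB miss */
}

/* On a miss, the fill routine installs the PTE, evicting the least
 * recently used slot (or an invalid one if available). */
void tlb_fill(uint32_t vpn, uint32_t frame)
{
    int victim = 0;
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (!tlb[i].valid) { victim = i; break; }
        if (tlb[i].last_used < tlb[victim].last_used)
            victim = i;
    }
    tlb[victim] = (tlb_entry_t){ .vpn = vpn, .frame = frame,
                                 .last_used = ++now, .valid = true };
}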

Page 9: Paging

What Happens on a Context Switch?

Each process has its own address space
So, each process has its own page table
So, page-table entries are only relevant for a particular process

Thus, the TLB must be flushed on a context switch
This is why context switches are so expensive

Page 10: Paging

Alternative to flushing: Address Space IDs

We can avoid flushing the TLB if entries are associated with an address space
When would this work well? When would this not work well?

[Figure: a TLB entry with fields: page frame number, prot, M, R, V, ASID]
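A sketch of how ASID tagging changes the hit check, with invented structure and field names: on a context switch the kernel only updates the current ASID instead of flushing the whole TLB.

#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint32_t vpn;
    uint32_t frame;
    uint8_t  asid;     /* which address space this entry belongs to */
    bool     valid;
} tagged_tlb_entry_t;

static uint8_t current_asid;   /* updated on context switch -- no flush needed */

/* An entry only hits if both the page number and the ASID match,
 * so stale entries from other processes are simply ignored. */
bool tagged_hit(const tagged_tlb_entry_t *e, uint32_t vpn)
{
    return e->valid && e->asid == current_asid && e->vpn == vpn;
}

Roughly, this pays off when processes are switched often enough that their entries would still be sitting in the TLB when they run again; if each process’s working set evicts the others’ entries anyway, the tags buy little.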

Page 11: Paging

TLBs with Multiprocessors

Each TLB stores a subset of page table state
Must keep state consistent on a multiprocessor

[Figure: a single page table entry (page frame #) cached in both TLB 1 and TLB 2]

Page 12: Paging

Today’s Topics

Page Replacement Strategies
Making Paging Fast
Reducing the Overhead of Page Tables

Page 13: Paging

Page Table Overhead

For a large address space, page table sizes can become enormous

Example: IA64 architecture
64-bit address space, 8KB pages
Num PTEs = 2^64 / 2^13 = 2^51
Assuming 8 bytes per PTE:
Num bytes = 2^54 = 16 petabytes

And, this is per-process!
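The same arithmetic, spelled out as a tiny program; the 8-byte PTE is the slide’s assumption.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const int addr_bits  = 64;   /* 64-bit virtual address space (IA64 example) */
    const int page_shift = 13;   /* 8KB pages = 2^13 bytes */
    const int pte_bytes  = 8;    /* assumed size of one PTE */

    uint64_t num_ptes   = 1ULL << (addr_bits - page_shift);   /* 2^51 entries */
    uint64_t table_size = num_ptes * pte_bytes;               /* 2^54 bytes */

    printf("PTEs per process:     %llu\n", (unsigned long long)num_ptes);
    printf("Page table size (PB): %llu\n",
           (unsigned long long)(table_size >> 50));           /* 16 petabytes */
    return 0;
}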

Page 14: Paging

Optimizing for Sparse Address Spaces

Observation: very little of the address space is in use at a given time
Basic idea: only allocate page tables where we need to
And, fill in new page tables lazily (on demand)

[Figure: a sparsely populated virtual address space]

Page 15: Paging

Implementing Sparse Address Spaces

We need a data structure to keep track of the page tables we have allocated
And, this structure must be small
Otherwise, we’ve defeated our original goal

Solution: multi-level page tables
Page tables of page tables
“Any problem in CS can be solved with a layer of indirection”

Page 16: Paging

Two level page tables

[Figure: two-level translation. The virtual address is split into a master page #, a secondary page #, and an offset. The master page table maps the master page # to a secondary page table (some entries are empty); the chosen secondary page table supplies the page frame number, which is combined with the offset to form the physical address in physical memory.]

Key point: not all secondary page tables must be allocated
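A sketch of the two-level walk from the figure, assuming an illustrative 10/10/12-bit split of a 32-bit virtual address (not the slides’ numbers). The NULL check is where the key point shows up: secondary tables for unused regions are simply never allocated.

#include <stdint.h>
#include <stdlib.h>

#define PAGE_SHIFT     12
#define LEVEL_BITS     10
#define LEVEL_ENTRIES  (1u << LEVEL_BITS)

typedef struct {
    uint32_t frame;
    int      valid;
} pte_t;

/* The master table holds pointers to secondary tables; most slots
 * stay NULL for a sparse address space, which is the whole point. */
static pte_t *master_table[LEVEL_ENTRIES];

int translate2(uint32_t vaddr, uint32_t *paddr)
{
    uint32_t master_idx    = vaddr >> (PAGE_SHIFT + LEVEL_BITS);
    uint32_t secondary_idx = (vaddr >> PAGE_SHIFT) & (LEVEL_ENTRIES - 1);
    uint32_t offset        = vaddr & ((1u << PAGE_SHIFT) - 1);

    pte_t *secondary = master_table[master_idx];
    if (secondary == NULL)               /* no secondary table allocated here */
        return -1;                       /* page fault */

    pte_t pte = secondary[secondary_idx];
    if (!pte.valid)
        return -1;

    *paddr = (pte.frame << PAGE_SHIFT) | offset;
    return 0;
}

/* Secondary tables are created lazily, only when a mapping in their
 * range is first installed (allocation error handling omitted). */
void map_page(uint32_t vaddr, uint32_t frame)
{
    uint32_t master_idx    = vaddr >> (PAGE_SHIFT + LEVEL_BITS);
    uint32_t secondary_idx = (vaddr >> PAGE_SHIFT) & (LEVEL_ENTRIES - 1);

    if (master_table[master_idx] == NULL)
        master_table[master_idx] = calloc(LEVEL_ENTRIES, sizeof(pte_t));

    master_table[master_idx][secondary_idx] = (pte_t){ .frame = frame, .valid = 1 };
}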

Page 17: Paging

Generalizing

Early architectures used 1-level page tables
VAX, x86 used 2-level page tables
SPARC uses 3-level page tables
The 68030 uses 4-level page tables

The key thing is that the outer level must be wired down (pinned in physical memory) in order to break the recursion

Page 18: Paging

Cool Paging Tricks

Basic Idea: exploit the layer of indirection between virtual and physical memory

Page 19: Paging

Trick #1: Shared Libraries

Q: How can we avoid 1000 copies of printf?

A: Shared libraries
Linux: /usr/lib/*.so

[Figure: Firefox and Open Office each map libc into their address space; physical memory holds a single copy of libc]

Page 20: Paging

Shared Memory Segments

[Figure: two virtual address spaces (Virt Address space 1 and 2) mapping a shared segment onto the same region of physical memory]

Page 21: Paging

Trick #2: Copy-on-write

Copy-on-write allows for a fast “copy” by using shared pages
Especially useful for “fork” operations

Implementation: pages are shared “read-only”
The OS intercepts write operations and makes a real copy

[Figure: a page table entry with fields: page frame number, prot, M, R, V]
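A sketch of the copy-on-write idea in simplified, kernel-flavored C. The structures and helper names are invented for illustration and gloss over locking, reference-count races, and real frame management.

#include <string.h>
#include <stdlib.h>

#define PAGE_SIZE 4096

/* Invented, simplified structures for illustration only.
 * refcount is assumed to start at 1 when a frame is first mapped. */
typedef struct {
    char *frame;       /* the physical page this entry points at */
    int   writable;    /* hardware write permission */
    int   cow;         /* software flag: this is a copy-on-write page */
    int  *refcount;    /* how many address spaces share the frame */
} pte_t;

/* "fork": share the frame read-only in the child instead of copying it. */
void cow_share(pte_t *parent, pte_t *child)
{
    *child = *parent;
    parent->writable = 0;     /* writes by either side will now fault */
    child->writable  = 0;
    parent->cow = child->cow = 1;
    ++*parent->refcount;
}

/* Called when a write hits a read-only page: if it is a COW page with
 * other sharers, make the real copy; otherwise just restore write access. */
void cow_write_fault(pte_t *pte)
{
    if (pte->cow && *pte->refcount > 1) {
        char *copy = malloc(PAGE_SIZE);
        memcpy(copy, pte->frame, PAGE_SIZE);   /* the real copy happens here */
        --*pte->refcount;
        pte->frame     = copy;
        pte->refcount  = malloc(sizeof(int));
        *pte->refcount = 1;
    }
    pte->writable = 1;
    pte->cow      = 0;
}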

Page 22: Paging

Trick #3: Memory-mapped Files

Normally, files are accessed with system calls
Open, read, write, close

Memory mapping allows a program to access a file with load/store operations

[Figure: Foo.txt mapped into a region of the virtual address space]
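For comparison, a small user-level example using the POSIX mmap call: after the mapping is set up, the loop reads the file with ordinary loads instead of read() system calls. The file name and the line count are just for illustration.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("Foo.txt", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Map the whole file read-only into our virtual address space. */
    char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    /* Ordinary loads: the first touch of each page triggers a page fault
     * that the OS services by bringing in that part of the file. */
    long newlines = 0;
    for (off_t i = 0; i < st.st_size; i++)
        if (data[i] == '\n')
            newlines++;
    printf("%ld lines\n", newlines);

    munmap(data, st.st_size);
    close(fd);
    return 0;
}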