Virtual Memory OS Notes



    Virtual Memory

    Gordon College

    Stephen Brinton


    Virtual Memory

    Background
    Demand Paging
    Process Creation
    Page Replacement
    Allocation of Frames
    Thrashing
    Demand Segmentation
    Operating System Examples


    Background

    Virtual memory: separation of user logical memory from physical memory.

    Only part of the program needs to be in memory for execution.
    The logical address space can therefore be much larger than the physical address space (easier for the programmer).
    Address spaces can be shared by several processes.
    Allows more efficient process creation.
    Less I/O is needed to load or swap processes.

    Virtual memory can be implemented via:
    Demand paging
    Demand segmentation


    Virtual Memory That Is Larger Than Physical Memory


    Shared Library Using Virtual Memory


    Demand Paging

    Bring a page into memory only when it is needed:
    Less I/O needed
    Less memory needed
    Faster response
    More users (processes) able to execute

    When a page is needed, a reference is made to it:
    invalid reference ⇒ abort
    not in memory ⇒ bring the page into memory


    Valid-Invalid Bit

    With each page-table entry a valid-invalid bit is associated
    (1 ⇒ in memory, 0 ⇒ not in memory).

    Initially the valid-invalid bit is set to 0 on all entries.

    During address translation, if the valid-invalid bit in the page-table entry is 0 ⇒ page-fault trap.

    Example of a page-table snapshot: each entry holds a frame number and a valid-invalid bit; pages currently in memory have the bit set to 1, all others 0.


    Page Table: Some Pages Are Not in Main Memory


    Page-Fault Trap

    1. Reference to a page with the invalid bit set ⇒ trap to the OS (page fault)
    2. The OS must decide:
       invalid reference ⇒ abort
       just not in memory ⇒ continue below
    3. Get an empty frame.
    4. Swap the page into the frame.
    5. Reset the tables, set the validation bit = 1.
    6. Restart the instruction:
       what happens if the fault occurs in the middle of an instruction?


    Steps in Handling a Page Fault


    What happens if there is no free frame?

    Page replacement: find some page in memory that is not really in use and swap it out.

    This needs an algorithm, and performance matters: we want an algorithm that results in the minimum number of page faults.

    The same page may be brought into memory several times.


    Performance of Demand Paging

    Page Fault Rate: 0 ≤ p ≤ 1 (probability of a page fault)
    if p = 0, no page faults
    if p = 1, every reference is a fault

    Effective Access Time (EAT):
    EAT = (1 - p) x memory access time
          + p x (page-fault overhead
                 + [swap page out]
                 + swap page in
                 + restart overhead)


    Demand Paging Example

    Memory access time = 200 nanoseconds

    50% of the time the page being replaced has been modified and therefore needs to be swapped out.

    Page-fault service time ≈ 8 milliseconds.

    EAT = (1 - p) x 200 + p x 8,000,000
        = 200 + 7,999,800 x p

    If we want less than 10% degradation:
    220 > 200 + 7,999,800 x p
    p < 0.0000025
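    A minimal sketch (not from the slides) that plugs these numbers into the EAT formula above; the 200 ns access time and 8 ms fault-service time are the values used in the example.

    #include <stdio.h>

    int main(void) {
        double mem_access_ns = 200.0;        /* memory access time */
        double fault_service_ns = 8000000.0; /* 8 ms page-fault service time */

        /* EAT = (1 - p) * memory access + p * page-fault service time */
        double p = 0.0000025;
        double eat = (1.0 - p) * mem_access_ns + p * fault_service_ns;
        printf("EAT at p = %.7f: %.1f ns\n", p, eat);

        /* largest p that keeps the slowdown under 10%:
           220 > 200 + 7,999,800 * p  =>  p < 20 / 7,999,800 */
        double p_max = (1.10 * mem_access_ns - mem_access_ns)
                       / (fault_service_ns - mem_access_ns);
        printf("p must stay below %.9f\n", p_max);
        return 0;
    }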


    Process Creation

    Virtual memory allows other benefits during process creation:

    - Copy-on-Write

    - Memory-Mapped Files


    Copy-on-Write

    Copy-on-Write (COW) allows both parent and child processes to initially share the same pages in memory.

    If either process modifies a shared page, only then is the page copied.

    COW allows more efficient process creation, as only modified pages are copied.

    Free pages are allocated from a pool of zeroed-out pages.
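    A small sketch (not part of the slides) showing COW behavior from the programmer's point of view on a POSIX system: after fork(), parent and child logically have their own copy of value, but the kernel duplicates the underlying page only when one of them writes to it.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int value = 42;            /* lives on a page shared copy-on-write after fork */

        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            return EXIT_FAILURE;
        }

        if (pid == 0) {
            value = 99;            /* this write triggers the copy; only this page is duplicated */
            printf("child:  value = %d\n", value);
            return EXIT_SUCCESS;
        }

        waitpid(pid, NULL, 0);
        printf("parent: value = %d\n", value);   /* still 42: the parent's page was never modified */
        return EXIT_SUCCESS;
    }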


    Need for Page Replacement

    Prevent over-allocation of memory by modifying the page-fault service routine to include page replacement.

    Use a modify (dirty) bit to reduce the overhead of page transfers: only modified pages are written back to disk.


    Basic Page Replacement


    Page Replacement Algorithms

    GOAL: lowest page-fault rate

    Evaluate an algorithm by running it on a particular string of memory references (a reference string) and computing the number of page faults on that string.

    Example reference string: 1, 4, 1, 6, 1, 6, 1, 6, 1, 6, 1


    First-In-First-Out (FIFO) Algorithm

    Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5

    3 frames (3 pages can be in memory at a time per process): 9 page faults

    4 frames: 10 page faults

    FIFO replacement exhibits Belady's Anomaly:
    more frames can mean more page faults
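    A small FIFO simulation (a sketch, not from the slides) that replays the reference string above and reproduces the 9-fault and 10-fault counts, making Belady's Anomaly concrete.

    #include <stdio.h>
    #include <string.h>

    /* Count page faults for FIFO replacement with a given number of frames. */
    static int fifo_faults(const int *refs, int n, int nframes) {
        int frames[16];
        int next = 0, used = 0, faults = 0;
        memset(frames, -1, sizeof frames);

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int j = 0; j < used; j++)
                if (frames[j] == refs[i]) { hit = 1; break; }
            if (hit) continue;

            faults++;
            if (used < nframes) {
                frames[used++] = refs[i];      /* a free frame is still available */
            } else {
                frames[next] = refs[i];        /* evict the oldest page */
                next = (next + 1) % nframes;
            }
        }
        return faults;
    }

    int main(void) {
        int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
        int n = sizeof refs / sizeof refs[0];
        printf("3 frames: %d faults\n", fifo_faults(refs, n, 3));   /* 9  */
        printf("4 frames: %d faults\n", fifo_faults(refs, n, 4));   /* 10 */
        return 0;
    }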


    FIFO Page Replacement


    FIFO Illustrating Belady's Anomaly


    Optimal Algorithm

    Goal: replace the page that will not be used for the longest period of time.

    4-frames example: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 ⇒ 6 page faults

    How do you know the future reference string? You don't; the optimal algorithm is used for measuring how well your algorithm performs: how close does it come to optimal?
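    A sketch (not from the slides) of the optimal (OPT) policy: on a fault it evicts the resident page whose next use lies farthest in the future. It reproduces the 6 faults quoted above for 4 frames.

    #include <stdio.h>
    #include <string.h>

    /* Optimal replacement: evict the page whose next use is farthest away. */
    static int opt_faults(const int *refs, int n, int nframes) {
        int frames[16];
        int used = 0, faults = 0;
        memset(frames, -1, sizeof frames);

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int j = 0; j < used; j++)
                if (frames[j] == refs[i]) { hit = 1; break; }
            if (hit) continue;

            faults++;
            if (used < nframes) { frames[used++] = refs[i]; continue; }

            /* victim = resident page whose next use is farthest away (or never) */
            int victim = 0, farthest = -1;
            for (int j = 0; j < used; j++) {
                int next_use = n;                 /* assume it is never used again */
                for (int k = i + 1; k < n; k++)
                    if (refs[k] == frames[j]) { next_use = k; break; }
                if (next_use > farthest) { farthest = next_use; victim = j; }
            }
            frames[victim] = refs[i];
        }
        return faults;
    }

    int main(void) {
        int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
        int n = sizeof refs / sizeof refs[0];
        printf("OPT, 4 frames: %d faults\n", opt_faults(refs, n, 4));   /* 6 */
        return 0;
    }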


    Optimal Page Replacement

    Optimal: 9 faults; FIFO: 15 faults (a 67% increase over the optimal)


    Least Recently Used (LRU) Algorithm

    Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5

    Counter implementation:

    Every page-table entry has a counter; every time the page is referenced through this entry, copy the clock into the counter.

    When a page needs to be replaced, look at the counters to determine which page was least recently used.
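    A sketch of the counter implementation described above (not from the slides): a logical clock is copied into a per-page counter on every reference, and the page with the smallest counter is the victim.

    #include <stdio.h>
    #include <string.h>

    /* LRU via per-page "time of last use" counters. */
    static int lru_faults(const int *refs, int n, int nframes) {
        int page[16], last_use[16];
        int used = 0, faults = 0, now = 0;
        memset(page, -1, sizeof page);

        for (int i = 0; i < n; i++) {
            now++;
            int hit = -1;
            for (int j = 0; j < used; j++)
                if (page[j] == refs[i]) { hit = j; break; }
            if (hit >= 0) { last_use[hit] = now; continue; }   /* stamp the counter on reference */

            faults++;
            if (used < nframes) {
                page[used] = refs[i];
                last_use[used++] = now;
            } else {
                int victim = 0;                 /* smallest counter = least recently used */
                for (int j = 1; j < used; j++)
                    if (last_use[j] < last_use[victim]) victim = j;
                page[victim] = refs[i];
                last_use[victim] = now;
            }
        }
        return faults;
    }

    int main(void) {
        int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
        int n = sizeof refs / sizeof refs[0];
        printf("LRU, 3 frames: %d faults\n", lru_faults(refs, n, 3));
        printf("LRU, 4 frames: %d faults\n", lru_faults(refs, n, 4));
        return 0;
    }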


    LRU Page Replacement


    LRU Algorithm (Cont.)

    Stack implementation: keep a stack of page numbers in doubly linked form.

    Page referenced: move it to the top (most recently used).
    Worst case: 6 pointers to be changed.
    No search for a replacement: the least recently used page is always at the tail.

    Example: referencing B in the list Head -> A -> B -> C moves B to the head, giving Head -> B -> A -> C; the updates include A's and C's links, B's previous and next links, and the head pointer.


    Use Of A Stack to Record The Most Recent Page References


    LRU Approximation Algorithms

    Reference bit:
    With each page associate a bit, initially = 0.
    When the page is referenced, the bit is set to 1.
    Replacement: choose a page whose bit is 0 (if one exists). We do not know the order of use, however.

    Second chance (clock replacement):
    Needs the reference bit.
    If the page to be replaced (in clock order) has reference bit = 1:
    set the reference bit to 0,
    leave the page in memory,
    and consider the next page (in clock order), subject to the same rules.
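    A compact sketch of the second-chance (clock) policy described above (not from the slides): pages sit in a circular buffer, and the hand clears reference bits until it finds a page with bit 0 to evict.

    #include <stdio.h>
    #include <string.h>

    #define NFRAMES 3

    /* Second-chance (clock) replacement over a circular buffer of frames. */
    static int clock_faults(const int *refs, int n) {
        int page[NFRAMES], refbit[NFRAMES];
        int hand = 0, used = 0, faults = 0;
        memset(page, -1, sizeof page);
        memset(refbit, 0, sizeof refbit);

        for (int i = 0; i < n; i++) {
            int hit = -1;
            for (int j = 0; j < used; j++)
                if (page[j] == refs[i]) { hit = j; break; }
            if (hit >= 0) { refbit[hit] = 1; continue; }    /* a reference sets the bit */

            faults++;
            if (used < NFRAMES) {
                page[used] = refs[i];
                refbit[used++] = 1;
                continue;
            }
            /* advance the hand: pages with refbit = 1 get a second chance */
            while (refbit[hand] == 1) {
                refbit[hand] = 0;
                hand = (hand + 1) % NFRAMES;
            }
            page[hand] = refs[i];                           /* evict the page with bit 0 */
            refbit[hand] = 1;
            hand = (hand + 1) % NFRAMES;
        }
        return faults;
    }

    int main(void) {
        int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
        printf("second chance, 3 frames: %d faults\n",
               clock_faults(refs, sizeof refs / sizeof refs[0]));
        return 0;
    }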


    Second-Chance (clock) Page-Replacement Algorithm


    Counting Algorithms

    Keep a counter of the number of references that have been made to each page.

    LFU Algorithm: replaces the page with the smallest count; the argument is that an actively used page should have a large reference count.

    MFU Algorithm: based on the argument that the page with the smallest count was probably just brought in and has yet to be used.


    Allocation of Frames

    Each process needs a minimum number of pages; the minimum depends on the computer architecture.

    Example: the IBM 370 needs 6 pages to handle the SS MOVE instruction:
    the instruction itself is 6 bytes and might span 2 pages,
    2 pages to handle the from operand,
    2 pages to handle the to operand.

    Two major allocation schemes:
    fixed allocation
    priority allocation


    Fixed Allocation

    Equal allocation: for example, if there are 100 frames and 5 processes, give each process 20 frames.

    Proportional allocation: allocate according to the size of the process.
    s_i = size of process p_i
    S = Σ s_i
    m = total number of frames
    a_i = allocation for p_i = (s_i / S) x m

    Example: m = 64, s_1 = 10, s_2 = 127
    a_1 = (10 / 137) x 64 ≈ 5 frames
    a_2 = (127 / 137) x 64 ≈ 59 frames
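    A tiny sketch (not from the slides) that applies the proportional-allocation formula a_i = (s_i / S) x m to the two-process example above.

    #include <stdio.h>

    int main(void) {
        int m = 64;                      /* total number of free frames */
        int sizes[] = {10, 127};         /* s_i: size of each process, in pages */
        int nproc = sizeof sizes / sizeof sizes[0];

        int S = 0;                       /* S = sum of all s_i */
        for (int i = 0; i < nproc; i++)
            S += sizes[i];

        for (int i = 0; i < nproc; i++) {
            double a = (double)sizes[i] / S * m;    /* a_i = (s_i / S) x m */
            printf("process %d: s = %3d pages -> %.1f frames (about %d)\n",
                   i + 1, sizes[i], a, (int)(a + 0.5));
        }
        return 0;
    }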


    Global vs. Local Allocation

    Global replacement: a process selects a replacement frame from the set of all frames; one process can take a frame from another.
    Con: a process cannot control its own page-fault rate.
    Pro: makes less-used pages of memory available to processes that need them.

    Local replacement: each process selects only from its own set of allocated frames.


    Thrashing

    If the number of frames allocated to a process drops below the minimum required by the architecture, the process must be suspended:
    swap it out and swap it back in later (a level of intermediate CPU scheduling).

    Thrashing: a process is busy swapping pages in and out.


    Thrashing

    Consider this:
    CPU utilization is low ⇒ the OS increases the degree of multiprogramming (adds processes).
    A process needs more pages ⇒ it takes them from other processes.
    Those other processes must swap pages back in ⇒ they wait.
    The ready queue shrinks ⇒ the system thinks it needs even more processes.


    Demand Paging and Thrashing

    Why does demand paging work?
    Locality model: as a process executes, it moves from locality to locality; localities may overlap.

    Why does thrashing occur?
    The sizes of the localities, added together, exceed the total memory size.


    Locality in a Memory-Reference Pattern


    Working-Set Model

    Δ ≡ working-set window ≡ a fixed number of page references.
    Example: 10,000 instructions.

    WSS_i (working set of process P_i) = total number of distinct pages referenced in the most recent Δ (varies in time).
    If Δ is too small, it will not encompass the entire locality.
    If Δ is too large, it will encompass several localities.
    If Δ = ∞, it will encompass the entire program.

    D = Σ WSS_i ≡ total demand for frames.
    If D > m ⇒ thrashing.
    Policy: if D > m, then suspend one of the processes.
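    A sketch (not from the slides) that computes WSS over a reference string: for each point in time it counts the distinct pages touched in the last Δ references (Δ is kept tiny here just for illustration).

    #include <stdio.h>

    #define DELTA 4   /* working-set window, in page references */

    /* Count the distinct pages referenced in the window refs[lo..hi]. */
    static int wss(const int *refs, int lo, int hi) {
        int pages[64], count = 0;
        for (int i = lo; i <= hi; i++) {
            int seen = 0;
            for (int j = 0; j < count; j++)
                if (pages[j] == refs[i]) { seen = 1; break; }
            if (!seen) pages[count++] = refs[i];
        }
        return count;
    }

    int main(void) {
        int refs[] = {1, 2, 1, 3, 1, 2, 4, 4, 4, 5, 5, 5};
        int n = sizeof refs / sizeof refs[0];

        for (int t = DELTA - 1; t < n; t++)
            printf("t = %2d  WSS = %d\n", t, wss(refs, t - DELTA + 1, t));
        return 0;
    }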


    Working-set model


    Keeping Track of the Working Set

    Approximate the working set with an interval timer plus a reference bit.

    Example: Δ = 10,000 references.
    The timer interrupts after every 5,000 time units.
    Keep 2 history bits in memory for each page.
    Whenever the timer interrupts: copy the reference bits into the history bits and reset all reference bits to 0.
    If one of the bits in memory = 1 ⇒ the page is in the working set.

    Why is this not completely accurate? (We cannot tell where within a 5,000-unit interval a reference occurred.)

    Improvement: 10 history bits and an interrupt every 1,000 time units.


    Page-Fault Frequency Scheme

    Establish an acceptable page-fault rate.
    If the actual rate is too low, the process loses a frame.
    If the actual rate is too high, the process gains a frame.


    Memory-Mapped Files

    Memory-mapped file I/O allows file I/O to be treated as routine memory access by mapping a disk block to a page in memory.

    How?
    A file is initially read using demand paging: a page-sized portion of the file is read from the file system into a physical page.
    Subsequent reads and writes to/from the file are treated as ordinary memory accesses.

    This simplifies file access by handling file I/O through memory rather than read()/write() system calls (less overhead).

    Sharing: it also allows several processes to map the same file, allowing the pages in memory to be shared.
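    The slides go on to list the Win32 steps; as a concrete illustration, here is a minimal POSIX sketch (an assumption, not the slides' example) that maps a file with mmap and reads it through ordinary memory accesses.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char *argv[]) {
        if (argc != 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }

        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        /* Map the whole file; pages are brought in on demand as they are touched. */
        char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (data == MAP_FAILED) { perror("mmap"); return 1; }

        /* Ordinary memory accesses now stand in for read() calls. */
        long newlines = 0;
        for (off_t i = 0; i < st.st_size; i++)
            if (data[i] == '\n') newlines++;
        printf("%ld lines\n", newlines);

        munmap(data, st.st_size);
        close(fd);
        return 0;
    }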


    Memory Mapped Files


    WIN32 API

    Steps:
    1. Create a file mapping for the file.
    2. Establish a view of the mapped file in the process's virtual address space.

    A second process can then open and create a view of the mapped file in its own virtual address space.


    Other Issues -- Prepaging

    Prepaging:
    To reduce the large number of page faults that occur at process startup.
    Prepage all or some of the pages a process will need, before they are referenced.
    But if the prepaged pages are unused, I/O and memory were wasted.

    Assume s pages are prepaged and a fraction α of those pages is actually used.
    Is the cost of the s x α saved page faults greater or less than the cost of prepaging the s x (1 - α) unnecessary pages?
    If α is near zero, prepaging loses.


    Other Issues: Page Size

    Page size selection must take into consideration:
    fragmentation
    page-table size
    I/O overhead
    locality


    Other Issues: Program Structure

    Program structure:
    int data[128][128];
    Each row is stored in one page.

    Program 1:
    for (j = 0; j < 128; j++)
        for (i = 0; i < 128; i++)
            data[i][j] = 0;
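    Because the array is stored row by row, one row per page, Program 1 above touches a different page on almost every assignment (up to 128 x 128 = 16,384 page faults when the process holds fewer than 128 frames). The row-first counterpart below (the standard completion of this example, assumed here) walks each page completely before moving on, so it faults only about once per row.

    /* Program 2: iterate in storage (row-major) order, same i, j, data as above. */
    for (i = 0; i < 128; i++)
        for (j = 0; j < 128; j++)
            data[i][j] = 0;
    /* roughly 128 page faults: each row page is brought in once */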


    Other Issues: I/O Interlock

    I/O interlock: pages must sometimes be locked into memory.

    Consider I/O: pages that are used for copying a file from a device must be locked, so that they cannot be selected for eviction by a page-replacement algorithm.


    Reason Why Frames Used For I/O Must Be In Memory


    Other Issues: TLB Reach

    TLB Reach: the amount of memory accessible from the TLB.

    TLB Reach = (TLB Size) x (Page Size)

    Ideally, the working set of each process is stored in the TLB; otherwise there is a high degree of page faults.

    Increase the page size: this may lead to an increase in fragmentation, as not all applications require a large page size.

    Provide multiple page sizes: this allows applications that require larger page sizes the opportunity to use them without an increase in fragmentation.
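    A small sketch (not from the slides) applying TLB Reach = TLB size x page size; the 64-entry TLB and the two page sizes are illustrative assumptions.

    #include <stdio.h>

    int main(void) {
        int tlb_entries = 64;                         /* assumed TLB size */
        long page_sizes[] = {4096, 2 * 1024 * 1024};  /* assumed 4 KB and 2 MB pages */

        for (int i = 0; i < 2; i++) {
            long reach = (long)tlb_entries * page_sizes[i];   /* reach = size x page size */
            printf("%d entries x %ld-byte pages -> reach = %ld KB\n",
                   tlb_entries, page_sizes[i], reach / 1024);
        }
        return 0;
    }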