Chapter 8: Main Memory


Page 1: Chapter 8:  Main Memory


Chapter 8: Main Memory

Page 2: Chapter 8:  Main Memory


Chapter 8: Main Memory

Background

Swapping

Contiguous Memory Allocation

Paging

Structure of the Page Table

Segmentation

Example: The Intel Pentium

Page 3: Chapter 8:  Main Memory


Objectives

To provide a detailed description of various ways of organizing memory hardware

To discuss various memory-management techniques, including paging and segmentation

To provide a detailed description of the Intel Pentium, which supports both pure segmentation and segmentation with paging

Page 4: Chapter 8:  Main Memory


Background

Program must be brought (from disk) into memory and placed within a process for it to be run

Main memory and registers are the only storage the CPU can access directly

Register access in one CPU clock (or less)

Main memory can take many cycles

Cache sits between main memory and CPU registers

Protection of memory required to ensure correct operation

Page 5: Chapter 8:  Main Memory


Base and Limit Registers

A pair of base and limit registers define the logical address space
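As a rough sketch (not part of the slides) of the check that the base and limit registers imply: any address outside [base, base + limit) traps to the operating system. The register values and the trap routine below are illustrative only.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative values: the base register holds the smallest legal
 * physical address, the limit register holds the size of the range. */
static uint32_t base  = 300040;
static uint32_t limit = 120900;

static void trap_addressing_error(uint32_t addr) {
    fprintf(stderr, "trap: addressing error at %u\n", addr);
    exit(EXIT_FAILURE);
}

/* The hardware compares every user-mode address against base and limit;
 * a legal address passes through unchanged, anything else traps. */
uint32_t check_address(uint32_t addr) {
    if (addr < base || addr >= base + limit)
        trap_addressing_error(addr);
    return addr;
}

int main(void) {
    printf("%u is legal\n", check_address(300040 + 100));
    /* check_address(420940) would trap: it equals base + limit. */
    return 0;
}
```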

Page 6: Chapter 8:  Main Memory


Types of Addresses Used in a Program

Symbolic addresses

The addresses used in source code. Variable names, constants, and instruction labels are the basic elements of the symbolic address space, e.g. an integer variable called count.

Relative addresses

At the time of compilation, a compiler converts symbolic addresses into relative addresses. E.g. 10 bytes from the start of this module.

Physical or absolute addresses

The loader generates these addresses at the time when a program is loaded into main memory.

Page 7: Chapter 8:  Main Memory


Binding of Instructions and Data to Memory

Address binding of instructions and data to memory addresses can happen at three different stages

Compile time: If memory location known a priori, absolute code can be generated; must recompile code if starting location changes

Load time: Must generate relocatable code if memory location is not known at compile time

Execution time: Binding delayed until run time if the process can be moved during its execution from one memory segment to another. Need hardware support for address maps (e.g., base and limit registers)

Page 8: Chapter 8:  Main Memory


Multistep Processing of a User Program

Page 9: Chapter 8:  Main Memory


Logical vs. Physical Address Space

The concept of a logical address space that is bound to a separate physical address space is central to proper memory management

Logical address – generated by the CPU; also referred to as virtual address

Physical address – address seen by the memory unit

Page 10: Chapter 8:  Main Memory


Memory-Management Unit (MMU)

Hardware device (MMU) maps virtual to physical address

In MMU scheme, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory

The user program deals with logical addresses; it never sees the real physical addresses

Page 11: Chapter 8:  Main Memory


Dynamic relocation using a relocation register

Page 12: Chapter 8:  Main Memory


Dynamic Loading

Routine is not loaded until it is called

Better memory-space utilization; unused routine is never loaded

Useful when large amounts of code are needed to handle infrequently occurring cases

No special support from the operating system is required; dynamic loading is implemented through program design
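On POSIX systems, one concrete way a program can arrange its own dynamic loading is the dlopen/dlsym interface. A minimal sketch follows; the library name and symbol are just examples, and on Linux this is typically linked with -ldl.

```c
#include <dlfcn.h>   /* dlopen, dlsym, dlclose, dlerror */
#include <stdio.h>

int main(void) {
    /* The routine is not loaded until we actually ask for it. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Look up the routine by name and call it through a pointer. */
    double (*cosine)(double) = (double (*)(double)) dlsym(handle, "cos");
    if (cosine)
        printf("cos(0.0) = %f\n", cosine(0.0));

    dlclose(handle);  /* the library can be unloaded when no longer needed */
    return 0;
}
```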

Page 13: Chapter 8:  Main Memory


Dynamic Linking

Linking postponed until execution time

Small piece of code, stub, used to locate the appropriate memory-resident library routine

Stub replaces itself with the address of the routine, and executes the routine

Operating-system support is needed to check whether the routine is in the process's memory address space

Dynamic linking is particularly useful for libraries

System also known as shared libraries

Page 14: Chapter 8:  Main Memory


Schematic View of Swapping

Page 15: Chapter 8:  Main Memory


Swapping

Say we have round-robin CPU scheduling, and when the time quantum expires the scheduler calls the dispatcher. The dispatcher checks whether the next process in the ready queue is in memory. If it is not and there is no free memory, the dispatcher swaps out a process currently in memory and swaps in the desired process from the backing store.

Backing store – fast disk large enough to accommodate copies of all memory images for all users; must provide direct access to these memory images

Roll out, roll in – swapping variant used for priority-based scheduling algorithms; lower-priority process is swapped out so higher-priority process can be loaded and executed

Major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped

Modified versions of swapping are found on many systems (i.e., UNIX, Linux, and Windows)

Page 16: Chapter 8:  Main Memory


Drawback of Swapping

Context-switch time is high in swapping. E.g. process size = 10 MB and the backing store is a hard disk with a transfer rate of 40 MB per second.

Transfer time = 10 MB / 40 MB/second = ¼ second = 250 milliseconds

With an average latency of 8 milliseconds, swap time = 250 + 8 = 258 milliseconds.

So total swap time (swap out + swap in) = 258 + 258 = 516 milliseconds

For efficient CPU utilization we need execution time much larger than swap time, i.e. time quantum >> 0.516 second

Swapping is used in few systems. It requires too much swap time and provides too little execution time to be a reasonable memory management solution.

In UNIX, swapping is normally disabled; it is started only if many processes are running and are using a threshold amount of memory.
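The arithmetic above, written out as a tiny C program using the slide's numbers (10 MB process, 40 MB/s transfer rate, 8 ms latency):

```c
#include <stdio.h>

int main(void) {
    double size_mb       = 10.0;   /* process size */
    double rate_mb_per_s = 40.0;   /* backing-store transfer rate */
    double latency_ms    = 8.0;    /* average disk latency */

    double transfer_ms = size_mb / rate_mb_per_s * 1000.0;   /* 250 ms */
    double swap_ms     = transfer_ms + latency_ms;            /* 258 ms */
    double total_ms    = 2.0 * swap_ms;   /* swap out + swap in = 516 ms */

    printf("transfer %.0f ms, one swap %.0f ms, total %.0f ms\n",
           transfer_ms, swap_ms, total_ms);
    return 0;
}
```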

Page 17: Chapter 8:  Main Memory


Contiguous Allocation

Main memory is usually divided into two partitions:

Resident operating system, usually held in low memory with interrupt vector

User processes then held in high memory

Relocation registers used to protect user processes from each other, and from changing operating-system code and data

Base or relocation register contains value of smallest physical address

Limit register contains range of logical addresses – each logical address must be less than the limit register

When the CPU scheduler selects a process for execution, the dispatcher loads these two registers as part of the context switch. Every address generated by the CPU is checked against these registers, thus protecting the OS and other users' programs and data from being modified by this process.

MMU maps logical address dynamically
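A rough C sketch of the relocation-register and limit-register check described above; the register values and the trap behaviour are illustrative, not a real API.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Loaded by the dispatcher at every context switch. */
static uint32_t relocation = 100000;  /* smallest physical address of the process */
static uint32_t limit      = 74600;   /* size of the logical address space */

/* Every logical address from the CPU is checked against the limit
 * register and then relocated; out-of-range addresses trap to the OS. */
uint32_t mmu_translate(uint32_t logical) {
    if (logical >= limit) {
        fprintf(stderr, "trap: addressing error (logical %u)\n", logical);
        exit(EXIT_FAILURE);
    }
    return logical + relocation;   /* physical address sent to memory */
}

int main(void) {
    printf("logical 346 -> physical %u\n", mmu_translate(346));  /* 100346 */
    return 0;
}
```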

Page 18: Chapter 8:  Main Memory


Hardware Support for Relocation and Limit Registers

Page 19: Chapter 8:  Main Memory


Contiguous Allocation (Cont)

Multiple-partition allocation

To allocate available memory to various processes waiting to be brought into memory

Hole – block of available memory; holes of various size are scattered throughout memory

When a process arrives, it is allocated memory from a hole large enough to accommodate it

Operating system maintains information in a table about: a) allocated partitions, b) free partitions (holes)

[Figure: successive memory snapshots — the OS with processes 5, 8, and 2 in memory; process 8 terminates, leaving a hole; process 9 is allocated into it; process 10 is allocated in part of the remaining hole.]

Page 20: Chapter 8:  Main Memory


Multiple-partition allocation Example

Page 21: Chapter 8:  Main Memory


Dynamic Storage-Allocation Problem

How to satisfy a request of size n from a list of free holes:

First-fit: Allocate the first hole that is big enough.

Best-fit: Allocate the smallest hole that is big enough; must search the entire list, unless it is ordered by size. Produces the smallest leftover hole.

Worst-fit: Allocate the largest hole; must also search the entire list. Produces the largest leftover hole.

First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
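A rough C sketch (not from the slides) of how first-fit and best-fit scan a list of free holes; the hole sizes and the 212 KB request below are made-up example values, and a real allocator would also split holes and track their addresses.

```c
#include <stddef.h>
#include <stdio.h>

/* Return the index of the chosen hole, or -1 if no hole is big enough. */
int first_fit(const size_t holes[], int n, size_t request) {
    for (int i = 0; i < n; i++)
        if (holes[i] >= request)
            return i;                 /* first hole that is big enough */
    return -1;
}

int best_fit(const size_t holes[], int n, size_t request) {
    int best = -1;
    for (int i = 0; i < n; i++)       /* must scan the whole list */
        if (holes[i] >= request &&
            (best == -1 || holes[i] < holes[best]))
            best = i;                 /* smallest hole that still fits */
    return best;
}

int main(void) {
    size_t holes[] = {100, 500, 200, 300, 600};   /* sizes in KB */
    printf("first-fit for 212 KB -> hole %d\n", first_fit(holes, 5, 212)); /* 1 */
    printf("best-fit  for 212 KB -> hole %d\n", best_fit(holes, 5, 212));  /* 3 */
    return 0;
}
```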

Page 22: Chapter 8:  Main Memory


Dynamic Storage-Allocation Problem

Page 23: Chapter 8:  Main Memory


Fragmentation

External Fragmentation – total memory space exists to satisfy a request, but it is not contiguous. As processes are loaded and removed from memory, the free memory space is fragmented into a large # of small holes.

E.g. in figure (a) total external fragmentation = 260 K (too small to satisfy P4 and P5).

In figure (c) total external fragmentation = 300 K + 260 K = 560 K. This space is large enough to run P5 (which needs 500 K) but this free memory is not contiguous.

Analysis of first-fit, for example, has shown that about one-third of memory may be unusable due to external fragmentation.

Page 24: Chapter 8:  Main Memory


Fragmentation

Internal Fragmentation – allocated memory may be slightly larger than requested memory; this size difference is memory internal to a partition, but not being used.

E.g. if we have a hole of 18,464 bytes and the next process request is for 18,462 bytes, we are left with a hole of 2 bytes. The overhead of keeping track of this small hole is greater than the hole itself. So the entire hole of 18,464 is allocated to the process. The difference between the allocated memory and requested memory (2 bytes) is internal fragmentation.

Page 25: Chapter 8:  Main Memory


Fragmentation

Reduce external fragmentation by compaction

Shuffle memory contents to place all free memory together in one large block

Compaction is possible only if relocation is dynamic, and is done at execution time

Page 26: Chapter 8:  Main Memory


Compaction Example

In (b) we moved 600K

In (c) we moved 400K by moving job4 before job3

In (d) we moved 200K by moving job3 after job4

Selecting an optimal compaction strategy is quite difficult

Page 27: Chapter 8:  Main Memory


Paging

Logical address space of a process can be noncontiguous thus allowing a process to be allocated physical memory whenever the latter is available (this is another solution to external fragmentation.)

Divide physical memory into fixed-sized blocks called frames (size is power of 2, between 512 bytes and 8,192 bytes)

Divide logical memory into blocks of same size called pages

Keep track of all free frames

To run a program of size n pages, need to find n free frames and load program

Set up a page table to translate logical to physical addresses

Internal fragmentation

Page 28: Chapter 8:  Main Memory


Paging

Address generated by CPU is divided into:

Page number (p) – used as an index into a page table which contains base address of each page in physical memory

Page offset (d) – combined with base address to define the physical memory address that is sent to the memory unit

For a given logical address space of size 2^m and page size 2^n:

page number: p (the high-order m − n bits)
page offset: d (the low-order n bits)

Page 29: Chapter 8:  Main Memory


Paging

Page sizes are always a power of 2. Why?

Selection of a power of 2 as a page size makes the translation of a logical address into a page# and its page offset easy.

E.g. if the logical address size is 2^m and the page size is 2^n, then the high-order m − n bits of a logical address designate the page# and the low-order n bits designate the page offset.

Say the logical address space = 8K = 8192 bytes, i.e. 2^13, and the page size = 1K = 1024 bytes = 2^10.

13 − 10 = 3 bits indicate the page# (2^3 = 8 pages, i.e. pages 0–7), and the low-order 10 bits give the page offset.

page number: p (m − n bits)
page offset: d (n bits)
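Because the page size is a power of two, this split is just a shift and a mask. A minimal C sketch using the slide's numbers (1 KB pages, so a 10-bit offset); the sample address 5000 is an arbitrary example.

```c
#include <stdint.h>
#include <stdio.h>

#define OFFSET_BITS 10                      /* page size = 2^10 = 1 KB */
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

int main(void) {
    uint32_t logical = 5000;                      /* some 13-bit address  */
    uint32_t page    = logical >> OFFSET_BITS;    /* high-order m-n bits  */
    uint32_t offset  = logical & OFFSET_MASK;     /* low-order n bits     */

    printf("logical %u -> page %u, offset %u\n", logical, page, offset);
    /* 5000 = 4 * 1024 + 904, so page 4, offset 904 */
    return 0;
}
```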

Page 30: Chapter 8:  Main Memory


Paging Hardware

Page 31: Chapter 8:  Main Memory


Paging Model of Logical and Physical Memory

Page 32: Chapter 8:  Main Memory


Paging Example

32-byte memory and 4-byte pages
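The example figure itself is not reproduced in this transcript, but the translation it illustrates can be sketched with an assumed page table; the frame numbers below are example values, not necessarily those in the original figure.

```c
#include <stdio.h>

/* 32-byte physical memory, 4-byte pages: 4 logical pages, 8 frames. */
#define PAGE_SIZE 4

/* Assumed page table: logical page i is stored in frame page_table[i]. */
static const int page_table[4] = {5, 6, 1, 2};

int translate(int logical) {
    int page   = logical / PAGE_SIZE;
    int offset = logical % PAGE_SIZE;
    return page_table[page] * PAGE_SIZE + offset;
}

int main(void) {
    /* e.g. logical address 3 lies in page 0 -> frame 5 -> physical 23 */
    for (int addr = 0; addr < 16; addr += 4)
        printf("logical %2d -> physical %2d\n", addr, translate(addr));
    return 0;
}
```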

Page 33: Chapter 8:  Main Memory


Paging

There is no external fragmentation with paging since any free frame can be allocated to a process that needs it.

There can be internal fragmentation: if the memory requirements of a process do not fall on page boundaries, the last frame allocated may not be completely full.

E.g. page size = 2K = 2048 bytes and process size = 72,766 bytes. This needs 35 frames + 1,086 bytes, so it is allocated 36 frames (36 × 2048 = 73,728 bytes). So 73,728 − 72,766 = 962 bytes are wasted to internal fragmentation.

Worst case is a process needing n frames + 1 byte; it is then allocated n + 1 frames, wasting almost an entire frame. On average, half a page (frame) per process is wasted.

To reduce internal fragmentation, page sizes need to be kept small, but this leads to more overhead in the page table. Also, disk I/O is more efficient with larger page sizes. So page sizes have grown; today they range from 4 KB to 16 MB.
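The frame count and wasted space in the example above can be checked with a few lines of C:

```c
#include <stdio.h>

int main(void) {
    long page_size = 2048;     /* 2 KB pages */
    long proc_size = 72766;    /* process size in bytes */

    long frames = (proc_size + page_size - 1) / page_size;  /* round up: 36 */
    long waste  = frames * page_size - proc_size;           /* 962 bytes    */

    printf("%ld frames allocated, %ld bytes of internal fragmentation\n",
           frames, waste);
    return 0;
}
```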

Page 34: Chapter 8:  Main Memory


Free Frames

Before allocation After allocation

Page 35: Chapter 8:  Main Memory


Implementation of Page Table

Page table is kept in main memory

Page-table base register (PTBR) points to the page table

Page-table length register (PTLR) indicates the size of the page table

In this scheme every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction, thus slowing memory access by a factor of 2 (a 100% slowdown).

The two memory access problem can be solved by the use of a special fast-lookup hardware cache called associative memory or translation look-aside buffers (TLBs)

Page 36: Chapter 8:  Main Memory


Paging Hardware With TLB

Page 37: Chapter 8:  Main Memory


TLBs

A TLB is a set of fast registers, each containing a page# and a frame#.

These registers contain only a few of the page-table entries.

If the page# is found in the TLBs then its frame# is available immediately and used to access memory. This takes less than 10% longer than it would if an unmapped (without the use of a page table) memory reference were made.

If the page# is not found in the TLBs, a reference to the page table must be made. The page# and frame# are added to the TLB.

The TLB must be flushed at each context switch.

The percentage of times that a page# is found in the TLB is called the hit ratio.

Page 38: Chapter 8:  Main Memory


Effective Memory Access Time

sa = time for searching TLB

ma = time for memory access

h = hit ratio

Effective Memory Access Time = h(sa + ma) + (1 − h)(sa + 2·ma)

= h·sa + h·ma + sa + 2·ma − h·sa − 2h·ma

= sa + 2·ma − h·ma = sa + (2 − h)·ma

Example: hit ratio = 80% and sa = 20 nsecs and ma= 100 nsecs

Effective Memory Access Time = 20 + (2 – 0.8) 100

= 20 + (1.2) 100 = 140 nsecs. So there is a 40% slowdown (100 nsecs to 140 nsecs).

If hit ratio = 98%,

Effective Memory Access Time = 20 + (2 – 0.98) 100 = 122 nsecs or a 22% slowdown.
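A small C check of the formula, reproducing both numbers from the slide (sa = 20 ns, ma = 100 ns):

```c
#include <stdio.h>

/* Effective memory access time for a single-level page table with a TLB:
 * hit: sa + ma, miss: sa + 2*ma  =>  EMAT = sa + (2 - h) * ma            */
double emat(double h, double sa, double ma) {
    return h * (sa + ma) + (1.0 - h) * (sa + 2.0 * ma);
}

int main(void) {
    printf("hit ratio 0.80: %.0f ns\n", emat(0.80, 20.0, 100.0));  /* 140 */
    printf("hit ratio 0.98: %.0f ns\n", emat(0.98, 20.0, 100.0));  /* 122 */
    return 0;
}
```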

Page 39: Chapter 8:  Main Memory


Memory Protection

Memory protection implemented by associating protection bit with each frame

Valid-invalid bit attached to each entry in the page table:

“valid” indicates that the associated page is in the process’ logical address space, and is thus a legal page

“invalid” indicates that the page is not in the process’ logical address space and is an illegal access and is trapped as such.

Page 40: Chapter 8:  Main Memory


Valid (v) or Invalid (i) Bit In A Page Table

Page 41: Chapter 8:  Main Memory


Shared Pages

Shared code

One copy of read-only (reentrant) code shared among processes (i.e., text editors, compilers, window systems).

Shared code must appear in same location in the logical address space of all processes

Private code and data

Each process keeps a separate copy of the code and data

The pages for the private code and data can appear anywhere in the logical address space

Page 42: Chapter 8:  Main Memory


Shared Pages Example

Page 43: Chapter 8:  Main Memory


Multilevel Paging

Modern computer systems support a very large logical address space (2^32 to 2^64). Then the page table becomes too big to store contiguously.

E.g. consider a system with a 32-bit logical address space. If the page size = 4 KB (2^12), then we have 2^32 / 2^12 = 2^20 ≈ 1 million pages. Say each entry in the page table requires 4 bytes; then 4 MB of physical address space would be needed for the page table alone, for each process. It is not advisable to allocate 4 MB contiguously in memory.

So table is divided into smaller pieces.

Also, by partitioning the page table we allow the OS to leave partitions unused till a process needs them.

Page 44: Chapter 8:  Main Memory


Hierarchical Page Tables

Break up the logical address space into multiple page tables

A simple technique is a two-level page table
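Assuming a 32-bit logical address with 4 KB pages and the remaining 20 page-number bits split evenly into two 10-bit indices (a common layout for two-level tables), the index extraction can be sketched as follows; the sample address is arbitrary.

```c
#include <stdint.h>
#include <stdio.h>

/* 32-bit logical address, 4 KB pages (12-bit offset), and the remaining
 * 20 page-number bits split into two 10-bit indices:  | p1 | p2 | d |   */
int main(void) {
    uint32_t addr = 0x12345678;

    uint32_t d  = addr & 0xFFF;          /* low 12 bits: page offset        */
    uint32_t p2 = (addr >> 12) & 0x3FF;  /* next 10 bits: inner-table index */
    uint32_t p1 = (addr >> 22) & 0x3FF;  /* top 10 bits: outer-table index  */

    printf("p1=%u p2=%u d=%u\n", p1, p2, d);
    /* p1 indexes the outer page table, which points to an inner page
     * table; p2 indexes that table to find the frame; d is the offset. */
    return 0;
}
```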

Page 45: Chapter 8:  Main Memory


Two-Level Page-Table Scheme

Page 46: Chapter 8:  Main Memory


Two level paging on the VAX

Page 47: Chapter 8:  Main Memory


Effective Memory Access Time in Two-Level Paging

Multilevel paging does not affect performance drastically. For a two-level page table, converting a logical address into a physical one may take three memory accesses (one for the outer page table, one for the inner page table containing the page's entry, and one for the actual frame). Caching (TLBs) helps, and performance remains reasonable.

Effective Memory Access Time = h(sa + ma) + (1 − h)(sa + 3·ma)

= h·sa + h·ma + sa + 3·ma − h·sa − 3h·ma

= sa + 3·ma − 2h·ma = sa + (3 − 2h)·ma

If the hit ratio = 98%:

Effective Memory Access Time = 20 + (3 − 2(0.98))·100 = 20 + 1.04·100 = 124 nsecs, or a 24% slowdown.

Page 48: Chapter 8:  Main Memory


Segmentation

Memory-management scheme that supports user view of memory. A logical address space (user program) is a collection of segments.

A segment is a logical unit such as:

main program

procedure

function

method

object

local variables, global variables

common block

stack

symbol table

arrays

Page 49: Chapter 8:  Main Memory


User’s View of a Program

Page 50: Chapter 8:  Main Memory


Logical View of Segmentation

[Figure: four segments (1–4) in user space are mapped to four noncontiguous blocks in physical memory space.]

Page 51: Chapter 8:  Main Memory


Segmentation

Segments are referred to by a segment number.

Segments are of different sizes.

Sequence among the segments is immaterial.

Each segment is allocated to a contiguous block of memory.

Logical address consists of a two tuple:

<segment-number, offset>,

Page 52: Chapter 8:  Main Memory


Segmentation Architecture

Logical address consists of a two tuple:

<segment-number, offset>,

Segment table – maps two-dimensional logical addresses to one-dimensional physical addresses; each table entry has:

base – contains the starting physical address where the segment resides in memory

limit – specifies the length of the segment

Segment-table base register (STBR) points to the segment table’s location in memory

Segment-table length register (STLR) indicates number of segments used by a program;

segment number s is legal if s < STLR
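Putting the STBR/STLR description together, the translation of <segment-number, offset> can be sketched roughly as follows; the segment-table contents and the trap handling are illustrative example values.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    uint32_t base;    /* starting physical address of the segment */
    uint32_t limit;   /* length of the segment */
} segment_entry;

/* A tiny example segment table; STLR = number of entries. */
static segment_entry segment_table[] = {
    {1400, 1000}, {6300, 400}, {4300, 400}, {3200, 1100}, {4700, 1000}
};
static const uint32_t STLR = 5;

uint32_t translate(uint32_t s, uint32_t d) {
    if (s >= STLR) {                        /* segment number must be < STLR */
        fprintf(stderr, "trap: invalid segment %u\n", s);
        exit(EXIT_FAILURE);
    }
    if (d >= segment_table[s].limit) {      /* offset must be within segment */
        fprintf(stderr, "trap: offset %u out of range for segment %u\n", d, s);
        exit(EXIT_FAILURE);
    }
    return segment_table[s].base + d;
}

int main(void) {
    printf("<2, 53>  -> physical %u\n", translate(2, 53));   /* 4353 */
    printf("<3, 852> -> physical %u\n", translate(3, 852));  /* 4052 */
    return 0;
}
```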

Page 53: Chapter 8:  Main Memory


Segmentation Architecture (Cont.)

Protection

With each entry in segment table associate:

validation bit = 0 ⇒ illegal segment

read/write/execute privileges

Protection bits associated with segments; code sharing occurs at segment level

Since segments vary in length, memory allocation is a dynamic storage-allocation problem

A segmentation example is shown in the following diagram

Page 54: Chapter 8:  Main Memory


Segmentation Hardware

Page 55: Chapter 8:  Main Memory


Example of Segmentation

Page 56: Chapter 8:  Main Memory


Example: The Intel Pentium

Supports both segmentation and segmentation with paging

CPU generates logical address

Given to segmentation unit

Which produces linear addresses

Linear address given to paging unit

Which generates physical address in main memory

Paging units form equivalent of MMU

Page 57: Chapter 8:  Main Memory


Logical to Physical Address Translation in Pentium

Page 58: Chapter 8:  Main Memory


Intel Pentium Segmentation

Page 59: Chapter 8:  Main Memory


Pentium Paging Architecture

Page 60: Chapter 8:  Main Memory


Linear Address in Linux

Broken into four parts:

Page 61: Chapter 8:  Main Memory


Three-level Paging in Linux

Page 62: Chapter 8:  Main Memory


End of Chapter 8
