Operating Systems Design and Implementation, Third Edition – Andrew S. Tanenbaum and Albert S. Woodhull


Page 1: OPERATING SYSTEMS DESIGN AND IMPLEMENTATION Third Edition ANDREW S. TANENBAUM ALBERT S. WOODHULL

Annotated by B. Hirsbrunner

OPERATING SYSTEMS DESIGN AND IMPLEMENTATION

Third Edition
ANDREW S. TANENBAUM
ALBERT S. WOODHULL

Lecture 7, 6 November 2012

Chap. 4 Memory Management

4.1 Basic 4.2 Swapping 4.3 Paging

Page 2: OPERATING SYSTEMS DESIGN AND IMPLEMENTATION Third Edition ANDREW S. TANENBAUM ALBERT S. WOODHULL

Annotated by B. Hirsbrunner 2

4. Memory Management

• Ideally programmers want memory that is
– large
– fast
– non-volatile

• Memory hierarchy
– small amount of fast, expensive memory: caches, ~1 MB
– some medium-speed, medium-price memory: main memory, ~1 GB
– gigabytes of slow, cheap disk storage, ~1 TB

• Memory manager handles the memory hierarchy

Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall

Page 3: OPERATING SYSTEMS DESIGN AND IMPLEMENTATION Third Edition ANDREW S. TANENBAUM ALBERT S. WOODHULL

Annotated by B. Hirsbrunner 3

4.1 Basic Memory Management
4.1.1 Monoprogramming without Swapping or Paging

Fig. 4.1 Three simple ways of organizing memory with an operating system and one user process

Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall

Page 4: OPERATING SYSTEMS DESIGN AND IMPLEMENTATION Third Edition ANDREW S. TANENBAUM ALBERT S. WOODHULL

Annotated by B. Hirsbrunner 4

4.1.2 Multiprogramming with Fixed Partitions

Fig. 4.2 Fixed memory partitions
– separate input queues for each partition
– single input queue (strategies: first process that fits, biggest process that fits, don't skip a process more than k times)

Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall

Page 5: OPERATING SYSTEMS DESIGN AND IMPLEMENTATION Third Edition ANDREW S. TANENBAUM ALBERT S. WOODHULL

Annotated by B. Hirsbrunner

Fig. 3.2 Illustration of the relocation problem

4.1.2 Multiprogramming with Fixed Partitions

(a) A 16-KB program, (b) Another 16-KB program, (c) The two programs loaded consecutively into memory

Tanenbaum, Modern Operating Systems, 3rd ed., (c) 2009, Prentice-Hall 5

Two solutions:
1. static relocation
2. dynamic relocation (with a base register)

Page 6: OPERATING SYSTEMS DESIGN AND IMPLEMENTATION Third Edition ANDREW S. TANENBAUM ALBERT S. WOODHULL

Annotated by B. Hirsbrunner 6

4.1.3 Relocation and Protection

• Relocation: the addresses of variables and code routines cannot be absolute; they have to be translated during
– loading (static relocation), or
– execution (dynamic relocation, e.g. with a base register)

• Protection: a process's addresses must not exceed its allocated memory partition
– Idea: use a limit register (see the sketch below)

Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall
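
A minimal C sketch (not from the book or the slides) of what dynamic relocation with base and limit registers amounts to; the structure and function names are invented for illustration.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical per-process relocation state: the values the OS would load
 * into the base and limit registers on every context switch. */
struct mmu_regs {
    uint32_t base;   /* start of the process's partition in physical memory */
    uint32_t limit;  /* size of the partition in bytes */
};

/* Dynamic relocation: every address issued by the process is first checked
 * against the limit register (protection), then the base register is added
 * to it (relocation). */
uint32_t translate(const struct mmu_regs *r, uint32_t vaddr)
{
    if (vaddr >= r->limit) {
        fprintf(stderr, "trap: address %u outside the partition\n", vaddr);
        exit(EXIT_FAILURE);          /* real hardware would raise a fault */
    }
    return r->base + vaddr;
}

int main(void)
{
    struct mmu_regs p = { .base = 300 * 1024, .limit = 16 * 1024 };
    printf("virtual 0x100 -> physical 0x%x\n", translate(&p, 0x100));
    return 0;
}

With static relocation the same addition would instead be done once by the loader, which patches every address in the program at load time.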

Page 7: OPERATING SYSTEMS DESIGN AND IMPLEMENTATION Third Edition ANDREW S. TANENBAUM ALBERT S. WOODHULL

Annotated by B. Hirsbrunner

4.1.3 Relocation and Protection : base and limit registers

7

Fig. 3.3 Base and limit registers can be used to give each process a separate address space.

Tanenbaum, Modern Operating Systems, 3rd ed., (c) 2009, Prentice-Hall

Page 8: OPERATING SYSTEMS DESIGN AND IMPLEMENTATION Third Edition ANDREW S. TANENBAUM ALBERT S. WOODHULL

Annotated by B. Hirsbrunner 8

4.2 Swapping : memory allocation

Fig. 4.3 Memory allocation changes as
– processes come into memory
– processes leave memory

Shaded regions are unused memory

Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall

Page 9: OPERATING SYSTEMS DESIGN AND IMPLEMENTATION Third Edition ANDREW S. TANENBAUM ALBERT S. WOODHULL

Annotated by B. Hirsbrunner 9

4.2 Swapping : memory allocation

Fig. 4.4a Allocating space for a growing data segment
Fig. 4.4b Allocating space for growing data & stack segments (garbage collection!)

Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall

Page 10: OPERATING SYSTEMS DESIGN AND IMPLEMENTATION Third Edition ANDREW S. TANENBAUM ALBERT S. WOODHULL

Annotated by B. Hirsbrunner 10

4.2.1-2 Memory Management: bitmaps and linked lists

Fig. 4.5a Part of memory with 5 processes and 3 holes
– tick marks show the allocation units (1 in the bitmap when allocated)
– shaded regions are free (0 in the bitmap)

Fig. 4.5b The corresponding bitmap
Fig. 4.5c The same information as a list

Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall
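
To make the bitmap scheme concrete, here is a small C sketch (my own illustration, with made-up sizes and one byte per bit for simplicity) of how an allocator finds a run of consecutive free allocation units; the linear scan is exactly why bitmap search is considered slow.

#include <stdint.h>
#include <stdio.h>

#define UNITS 64                      /* number of allocation units managed */
static uint8_t bitmap[UNITS];         /* 1 = allocated, 0 = free, as in Fig. 4.5 */

/* Find 'need' consecutive free units; return the first unit number or -1. */
int bitmap_alloc(int need)
{
    int run = 0;
    for (int i = 0; i < UNITS; i++) {
        run = (bitmap[i] == 0) ? run + 1 : 0;
        if (run == need) {
            int start = i - need + 1;
            for (int j = start; j <= i; j++)
                bitmap[j] = 1;        /* mark the run as allocated */
            return start;
        }
    }
    return -1;                        /* no run of free units is long enough */
}

int main(void)
{
    bitmap[3] = bitmap[4] = 1;        /* pretend two units are already in use */
    printf("allocated 5 units starting at unit %d\n", bitmap_alloc(5));
    return 0;
}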

Page 11: OPERATING SYSTEMS DESIGN AND IMPLEMENTATION Third Edition ANDREW S. TANENBAUM ALBERT S. WOODHULL

Annotated by B. Hirsbrunner 11

4.2.2 Memory Management: linked lists

Fig. 4.6 Four neighbor combinations for the terminating process X

Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall

Page 12: OPERATING SYSTEMS DESIGN AND IMPLEMENTATION Third Edition ANDREW S. TANENBAUM ALBERT S. WOODHULL

Annotated by B. Hirsbrunner 12

4.2.2 Memory Management: linked lists – algorithms

• First fit: take the first hole that is big enough (sketched in C below)
• Next fit: same, but each search starts where the previous one left off
• Best fit: choose the smallest hole that is big enough
• Worst fit: tries to produce usable leftover holes by always taking the biggest available hole, so the hole that remains after allocation is as large as possible

Improvements: keep separate lists for holes and for allocated segments; the hole list can then be kept sorted by size

• Quick fit: maintains separate lists for the hole sizes that are requested most often

Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall

Unfortunately, none of these algorithms is entirely satisfactory: some leave too many tiny holes (e.g. best fit), others too few big holes (worst fit), …
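
As a concrete illustration of the list-based algorithms above, here is a C sketch of first fit over a singly linked hole list; the data layout and names are my own, not MINIX's. Next fit would keep a roving pointer into the list instead of always starting at the head, and best fit would scan the whole list remembering the smallest adequate hole.

#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

/* One free hole in the hole list, kept sorted by address. */
struct hole {
    size_t start, len;
    struct hole *next;
};

/* First fit: take the first hole with sufficient size.  If the hole is larger
 * than the request, the remainder stays in the list as a smaller hole; if it
 * fits exactly, the now-empty hole is unlinked. */
long first_fit(struct hole **list, size_t len)
{
    for (struct hole **pp = list; *pp != NULL; pp = &(*pp)->next) {
        struct hole *h = *pp;
        if (h->len >= len) {
            size_t start = h->start;
            h->start += len;
            h->len   -= len;
            if (h->len == 0) {        /* exact fit: remove the empty hole */
                *pp = h->next;
                free(h);
            }
            return (long)start;
        }
    }
    return -1;                        /* no hole is large enough */
}

int main(void)
{
    struct hole *holes = malloc(sizeof *holes);
    *holes = (struct hole){ .start = 100, .len = 50, .next = NULL };
    printf("%ld\n", first_fit(&holes, 20));   /* 100: carved from the hole  */
    printf("%ld\n", first_fit(&holes, 40));   /* -1: only 30 units are left */
    return 0;
}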

Page 13: OPERATING SYSTEMS DESIGN AND IMPLEMENTATION Third Edition ANDREW S. TANENBAUM ALBERT S. WOODHULL

Annotated by B. Hirsbrunner 13

4.3 Virtual Memory
4.3.1 Paging: MMU

Fig. 4.7 The position and function of the MMU

Idea
• Each program has its own address space, which is broken up into chunks called pages (typically 4 KB)
• An MMU (memory management unit) maps virtual addresses onto physical addresses

Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall

Page 14: OPERATING SYSTEMS DESIGN AND IMPLEMENTATION Third Edition ANDREW S. TANENBAUM ALBERT S. WOODHULL

Annotated by B. Hirsbrunner 14

4.3.1 Paging : virtual/physical addresses

The relation between the virtual addresses and the physical memory addresses is given by the page table.

Fig. 4.8

Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall

Page 15: OPERATING SYSTEMS DESIGN AND IMPLEMENTATION Third Edition ANDREW S. TANENBAUM ALBERT S. WOODHULL

Annotated by B. Hirsbrunner 15

Fig. 4.9 Internal operation of the MMU with 16 4-KB pages

4.3.1 Paging : page table

Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall

Purpose: map virtual pages onto page frames

Major issues
• The page table can be extremely large
• The mapping must be very fast
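
To make the page-number/offset split concrete, here is a C sketch of the translation an MMU like the one in Fig. 4.9 performs, assuming a 16-bit virtual address and 4-KB pages (so 16 pages); the entry format and field widths are only illustrative.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12                      /* 4-KB pages */
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define NPAGES     16                      /* 16-bit virtual addresses: 16 pages */

struct pte {
    unsigned present : 1;                  /* is the page mapped to a frame? */
    unsigned frame   : 4;                  /* page frame number */
};

static struct pte page_table[NPAGES];

/* The MMU splits the virtual address into a page number (high 4 bits) and an
 * offset (low 12 bits), looks the page number up in the page table and
 * concatenates the frame number with the unchanged offset. */
uint32_t mmu_translate(uint16_t vaddr)
{
    unsigned page   = vaddr >> PAGE_SHIFT;
    unsigned offset = vaddr & (PAGE_SIZE - 1);

    if (!page_table[page].present) {
        printf("page fault on page %u\n", page);   /* trap to the OS */
        return 0;
    }
    return ((uint32_t)page_table[page].frame << PAGE_SHIFT) | offset;
}

int main(void)
{
    page_table[2] = (struct pte){ .present = 1, .frame = 6 };
    /* virtual 0x2004 = page 2, offset 4  ->  physical 0x6004 */
    printf("0x2004 -> 0x%x\n", mmu_translate(0x2004));
    return 0;
}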

Page 16: OPERATING SYSTEMS DESIGN AND IMPLEMENTATION Third Edition ANDREW S. TANENBAUM ALBERT S. WOODHULL

Annotated by B. Hirsbrunner 16

Fig. 4.10a A 32-bit address with 2 page table fields

Fig. 4.10b Two-level page tables

4.3.2 Page Tables: multilevel


Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall

Purpose: reduce the page table size
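
A small C sketch of how the two page table fields of Fig. 4.10a could be extracted from a 32-bit virtual address, assuming the usual 10-bit PT1, 10-bit PT2 and 12-bit offset split; the function is illustrative only.

#include <stdint.h>
#include <stdio.h>

/* Split a 32-bit virtual address into its two 10-bit page table fields and a
 * 12-bit offset.  PT1 indexes the top-level table; PT2 indexes the
 * second-level table that the selected top-level entry points to. */
void split_vaddr(uint32_t vaddr, uint32_t *pt1, uint32_t *pt2, uint32_t *offset)
{
    *pt1    = (vaddr >> 22) & 0x3FF;   /* top 10 bits    */
    *pt2    = (vaddr >> 12) & 0x3FF;   /* middle 10 bits */
    *offset =  vaddr        & 0xFFF;   /* low 12 bits    */
}

int main(void)
{
    uint32_t pt1, pt2, off;
    split_vaddr(0x00403004, &pt1, &pt2, &off);            /* 4 MB + 12 KB + 4 */
    printf("PT1=%u PT2=%u offset=%u\n", pt1, pt2, off);   /* 1, 3, 4 */
    return 0;
}

Only the top-level table and the second-level tables actually in use need to be kept in memory, which is where the size reduction comes from.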

Page 17: OPERATING SYSTEMS DESIGN AND IMPLEMENTATION Third Edition ANDREW S. TANENBAUM ALBERT S. WOODHULL

Annotated by B. Hirsbrunner 17

4.3.2 Page Tables: structure of a page table entry

Fig. 4.11 A typical page table entry

Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall
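
Written out as C bit-fields, the fields of Fig. 4.11 could look like the sketch below; real hardware fixes the exact layout and widths, so these are purely illustrative.

#include <stdint.h>

/* A typical page table entry (cf. Fig. 4.11).  Widths are invented. */
struct pte {
    uint32_t frame            : 20;  /* page frame number                           */
    uint32_t present          : 1;   /* 1 = mapped; 0 causes a page fault on access */
    uint32_t protection       : 3;   /* e.g. read / write / execute bits            */
    uint32_t modified         : 1;   /* "dirty" bit, set by the hardware on writes  */
    uint32_t referenced       : 1;   /* set on any access; used by page replacement */
    uint32_t caching_disabled : 1;   /* needed for pages that map device registers  */
};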

Page 18: OPERATING SYSTEMS DESIGN AND IMPLEMENTATION Third Edition ANDREW S. TANENBAUM ALBERT S. WOODHULL

Annotated by B. Hirsbrunner 18

4.3.3 TLBs – Translation Lookaside Buffers (associative memory)

Fig. 4.12 A TLB to speed up paging

A hardware solution to speed up paging: all TLB entries are checked simultaneously!

Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall
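
In hardware all TLB entries are compared in parallel; the C sketch below can only model that associative lookup with a sequential loop, and its entry format is invented for illustration.

#include <stdint.h>
#include <stdio.h>

#define TLB_ENTRIES 8                       /* TLBs are small, typically 8-64 entries */

struct tlb_entry {
    unsigned valid : 1;
    uint32_t page;                          /* virtual page number      */
    uint32_t frame;                         /* corresponding page frame */
};

static struct tlb_entry tlb[TLB_ENTRIES];

/* Hardware checks the page number against every entry at once; this loop
 * models the same lookup one entry at a time. */
int tlb_lookup(uint32_t page, uint32_t *frame)
{
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].page == page) {
            *frame = tlb[i].frame;
            return 1;                       /* TLB hit: no page table access needed   */
        }
    }
    return 0;                               /* TLB miss: walk the page table instead */
}

int main(void)
{
    tlb[0] = (struct tlb_entry){ .valid = 1, .page = 19, .frame = 3 };
    uint32_t frame = 0;
    if (tlb_lookup(19, &frame))
        printf("hit: page 19 -> frame %u\n", frame);
    else
        printf("miss: walk the page table\n");
    return 0;
}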

Page 19: OPERATING SYSTEMS DESIGN AND IMPLEMENTATION Third Edition ANDREW S. TANENBAUM ALBERT S. WOODHULL

Annotated by B. Hirsbrunner

4.3.4 Inverted Page Tables

19

Fig. 4.13

Comparison of a traditional page table with an inverted page table

Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall

2^52 is the number of page table entries for a 2^64-byte address space with 4-KB pages (2^64 / 2^12 = 2^52)

Idea of the inverted page table: there is only one entry per page frame in real memory
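
A C sketch of that idea (hash-based lookup in an inverted page table, one entry per frame); all names, the hash function and the sizes are invented for illustration.

#include <stdint.h>
#include <stdio.h>

#define NFRAMES 16                    /* one entry per physical page frame */

struct ipte {
    int      used;
    uint32_t pid;                     /* which process owns the frame                  */
    uint64_t vpage;                   /* which virtual page is stored there            */
    int      next;                    /* next frame in the same hash chain, -1 ends it */
};

static struct ipte frames[NFRAMES];
static int hash_head[NFRAMES];        /* hash bucket -> first candidate frame */

/* Look (pid, virtual page) up by hashing and following the chain; the frame
 * number is simply the index of the matching entry. */
int ipt_lookup(uint32_t pid, uint64_t vpage)
{
    for (int f = hash_head[(vpage ^ pid) % NFRAMES]; f != -1; f = frames[f].next)
        if (frames[f].used && frames[f].pid == pid && frames[f].vpage == vpage)
            return f;                 /* hit: this frame holds the page */
    return -1;                        /* not in memory: page fault      */
}

int main(void)
{
    for (int i = 0; i < NFRAMES; i++)
        hash_head[i] = -1;
    frames[5] = (struct ipte){ .used = 1, .pid = 7, .vpage = 123456, .next = -1 };
    hash_head[(123456u ^ 7u) % NFRAMES] = 5;
    printf("frame = %d\n", ipt_lookup(7, 123456));    /* 5 */
    return 0;
}

The table size now scales with the amount of physical memory (one entry per frame) rather than with the 2^52 virtual pages, at the cost of a hash lookup whenever the TLB misses.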