CSC204 Practical Approach to Operating System (CS110), Chapter 3.2: OS Performance Issue (Memory Management)


Page 1: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

CSC204 Practical Approach to Operating System (CS110)

Chapter 3.2: OS Performance Issue (Memory Management)

Page 2: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

Contents

Memory Management
Memory Hierarchy
Physical Memory
Virtual Memory
Page Fault
Thrashing
Cache: Principle of Locality

Page 3: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

Memory Management

Memory management is the act of managing computer memory. In its simpler forms, this involves providing ways to allocate portions of memory to programs at their request, and freeing them for reuse when no longer needed.

The management of main memory is critical to the computer system.

Page 4: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

Memory Management

Subdividing memory to accommodate multiple processes.

Memory needs to be allocated to ensure a reasonable supply of ready processes to consume available processor time.

Page 5: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

Memory Hierarchy

Why is a memory hierarchy needed in a computer system?

To provide the best performance at the lowest cost, memory is organized in a hierarchical fashion:
Small-capacity, fast storage elements are kept in the CPU (i.e. on-chip).
Larger-capacity, slower main memory is accessed through the data bus.
Larger, (almost) permanent storage in the form of disk and tape drives is still further from the CPU.

Page 6: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt
Page 7: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt
Page 8: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

Processor <-> Cache <-> Other Memory

Each level of memory keeps a subset of the data contained in the level below it (i.e. in the larger, slower memory).

To access a particular piece of data, the CPU first sends a request to its nearest memory, i.e. the cache. If the data is not in the cache, then main memory is queried. If the data is not in main memory, then the request goes to disk.

Once the data is located at a level, the data and a number of its nearby data elements are fetched into cache memory. E.g. if data from address X is requested, then data from addresses X+1, X+2, etc. is also sent: a block of data (from multiple contiguous locations) is transferred.
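
A minimal sketch of this lookup order, assuming a toy two-level hierarchy (a small cache in front of a plain dictionary standing in for main memory) and an illustrative block size of 4 words; on a miss, the whole block containing the requested address is copied up:

```python
# Toy model of the cache -> main memory lookup described above.
# Block size and memory contents are illustrative assumptions, not real hardware values.
BLOCK_SIZE = 4  # words fetched together on a miss (X, X+1, X+2, ...)

main_memory = {addr: f"data@{addr}" for addr in range(64)}  # pretend backing store
cache = {}  # block number -> list of words in that block

def read(addr):
    block = addr // BLOCK_SIZE
    if block in cache:                      # hit: the nearest, fastest level answers
        print(f"cache hit  for address {addr}")
    else:                                   # miss: query the larger, slower level
        print(f"cache miss for address {addr}; fetching block {block}")
        base = block * BLOCK_SIZE
        cache[block] = [main_memory[a] for a in range(base, base + BLOCK_SIZE)]
    return cache[block][addr % BLOCK_SIZE]

read(10)   # miss: block 2 (addresses 8-11) is transferred
read(11)   # hit: the neighbouring word came along with the block
```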

Page 9: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

Why is a block of data transferred?

Data between levels is transferred over a bus.

The bus itself takes some time to transfer data, so it is more effective to use that opportunity to fetch other data you might need in the future during the same bus transaction.

Page 10: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

Why is a block of data transferred?

So why fetch data that is nearby? Because of program structure:

Temporal locality (locality in time): referenced memory is likely to be referenced again soon (e.g. code within a loop). Keep the most recently accessed data items closer to the processor.

Spatial locality (locality in space): memory close to referenced memory is likely to be referenced soon (e.g. data in a sequentially accessed array). Move blocks consisting of contiguous words to the upper levels.

Sequential locality: instructions tend to be accessed sequentially.

The above three are known as the principles of locality: the phenomenon of the same value, or related storage locations, being frequently accessed.

Page 11: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

Cache

A cache:
is a small, very fast memory (SRAM, expensive)
contains copies of the most recently accessed memory locations (data and instructions): temporal locality
is fully managed by hardware (unlike virtual memory)
has its storage organized in blocks of contiguous memory locations: spatial locality
uses the cache block as the unit of transfer to/from main memory (or L2)

General structure:
n blocks per cache, organized in s sets
b bytes per block
total cache size: n*b bytes
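
As a small illustration of the n-blocks / s-sets / b-bytes structure, the sketch below computes the total cache size and splits a byte address into a set index and block offset; the geometry values are assumptions for the example, not figures from the slides:

```python
# Illustrative cache geometry (assumed values).
n_blocks = 128        # n: blocks in the cache
n_sets   = 128        # s: sets (one block per set here, i.e. direct mapped)
block_sz = 64         # b: bytes per block

total_size = n_blocks * block_sz            # total cache size = n * b bytes
print(f"total cache size: {total_size} bytes ({total_size // 1024} KB)")

def split_address(addr):
    """Decompose a byte address into (set index, byte offset within the block)."""
    offset = addr % block_sz
    set_ix = (addr // block_sz) % n_sets
    return set_ix, offset

print(split_address(0x1A2B4))               # which set and offset this address maps to
```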

Page 12: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

Physical Memory

Also referred to as physical storage or real storage.

This is typically the RAM modules that are installed on the motherboard.

Physical memory is a term used to describe the total amount of memory installed in the computer. For example, if the computer has two 64 MB memory modules installed, it has a total of 128 MB of physical memory.

Page 13: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

Physical Memory

Page 14: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

Physical Memory: Memory Allocation Schemes

Fixed Partition
Dynamic Partition (First Fit, Best Fit, Worst Fit)
Compaction

Page 15: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

Fixed Partition

An early attempt at multiprogramming using fixed partitions: one partition for each job.

The size of each partition is designated by reconfiguring the system; partitions can't be too small or too large.

It is critical to protect each job's memory space.

The entire program is stored contiguously in memory during its entire execution.
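
A minimal sketch of fixed-partition loading under these rules, with made-up partition and job sizes; each job occupies a whole partition, and any unused space inside it is wasted (internal fragmentation):

```python
# Fixed partitions: sizes are set at (re)configuration time and never change.
# All sizes are in KB and purely illustrative.
partitions = [{"size": 100, "job": None},
              {"size": 25,  "job": None},
              {"size": 25,  "job": None},
              {"size": 50,  "job": None}]

def load_job(name, size):
    """Place a job in the first free partition large enough to hold it."""
    for p in partitions:
        if p["job"] is None and p["size"] >= size:
            p["job"] = (name, size)
            waste = p["size"] - size          # internal fragmentation
            print(f"{name} ({size}K) -> {p['size']}K partition, {waste}K wasted")
            return True
    print(f"{name} ({size}K) must wait: no free partition is large enough")
    return False

for name, size in [("J1", 30), ("J2", 50), ("J3", 30), ("J4", 25)]:
    load_job(name, size)
```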

Page 16: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

Fixed Partition

Page 17: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

Fixed Partition

Page 18: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

Dynamic Partition

Available memory is kept in contiguous blocks, and jobs are given only as much memory as they request when loaded.

Improves memory use over fixed partitions.

Performance declines as new jobs enter the system: fragments of free memory are created between blocks of allocated memory (external fragmentation).

External fragmentation: total memory space exists to satisfy a request, but it is not contiguous.

Page 19: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

Dynamic Partitioning of Main Memory & Fragmentation

Page 20: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

Dynamic Partition Allocation Schemes

First-fit: allocate the first partition that is big enough. Keep the free/busy lists organized by memory location (low-order to high-order). Faster in making the allocation.

Best-fit: allocate the smallest partition that is big enough. Keep the free/busy lists ordered by size (smallest to largest). Produces the smallest leftover partition; makes the best use of memory.

Worst-fit: allocate the largest hole; must also search the entire list. Produces the largest leftover hole.
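
A minimal sketch of the three placement policies over a list of free partitions (the free-list sizes and the request are assumptions for the example):

```python
# Each policy returns (index, size) of the chosen free partition, or None if nothing fits.
# `free` is a list of free-partition sizes in KB, in memory-location order.

def first_fit(free, size):
    return next(((i, s) for i, s in enumerate(free) if s >= size), None)

def best_fit(free, size):
    fits = [(s, i) for i, s in enumerate(free) if s >= size]
    if not fits:
        return None
    s, i = min(fits)          # smallest partition that is still big enough
    return i, s

def worst_fit(free, size):
    fits = [(s, i) for i, s in enumerate(free) if s >= size]
    if not fits:
        return None
    s, i = max(fits)          # largest hole
    return i, s

free_list = [30, 15, 50, 20]
request = 18
print("first-fit :", first_fit(free_list, request))   # (0, 30): first one big enough
print("best-fit  :", best_fit(free_list, request))    # (3, 20): smallest leftover
print("worst-fit :", worst_fit(free_list, request))   # (2, 50): largest leftover
```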

Page 21: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

Best-Fit vs. First-Fit

First-Fit:
Keeps the free/busy lists organized by memory location
Faster in making the allocation

Best-Fit:
Increases memory use
Memory allocation takes more time
Reduces internal fragmentation
More complex algorithm
Searches the entire table before allocating memory
Results in a smaller "free" space (sliver)

Page 22: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

First-Fit Allocation Example

Job list: J1 = 10K, J2 = 20K, J3 = 30K*, J4 = 10K
(*J3 must wait: after the other jobs are placed, no free block of at least 30K remains.)

Memory location | Memory block size | Job number | Job size | Status | Internal fragmentation
10240           | 30K               | J1         | 10K      | Busy   | 20K
40960           | 15K               | J4         | 10K      | Busy   | 5K
56320           | 50K               | J2         | 20K      | Busy   | 30K
107520          | 20K               |            |          | Free   |

Total Available: 115K   Total Used: 40K

Internal fragmentation: allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used.
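
The assignment in the table can be reproduced mechanically. The sketch below walks the job list against the memory blocks in location order, applying first-fit and reporting the internal fragmentation of each busy block (the numbers are taken from the example above):

```python
# Memory blocks in memory-location order, sizes in KB (from the example table).
blocks = [{"loc": 10240, "size": 30}, {"loc": 40960, "size": 15},
          {"loc": 56320, "size": 50}, {"loc": 107520, "size": 20}]
jobs = [("J1", 10), ("J2", 20), ("J3", 30), ("J4", 10)]

for job, size in jobs:
    block = next((b for b in blocks if "job" not in b and b["size"] >= size), None)
    if block is None:
        print(f"{job} ({size}K) must wait")            # J3: no 30K block is left
        continue
    block["job"] = job
    print(f"{job} ({size}K) -> block at {block['loc']} ({block['size']}K), "
          f"internal fragmentation {block['size'] - size}K")
```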

Page 23: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

Release of Memory Space: Deallocation

Deallocation for fixed partitions is simple: the Memory Manager resets the status of the memory block to "free".

Deallocation for dynamic partitions tries to combine free areas of memory whenever possible:
Is the block adjacent to another free block?
Is the block between two free blocks?
Is the block isolated from other free blocks?
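
A small sketch of the dynamic-partition case: when a busy block is released, it is merged with any free neighbour on either side. The free-list entries here are assumed (start, size) pairs in KB:

```python
# Free list as (start, size) pairs, kept sorted by start address.
free_list = [(0, 20), (50, 10), (100, 40)]

def deallocate(start, size):
    """Return a block to the free list, coalescing with adjacent free blocks."""
    merged = (start, size)
    keep = []
    for s, sz in free_list:
        if s + sz == merged[0]:              # free block ends where ours begins
            merged = (s, sz + merged[1])
        elif merged[0] + merged[1] == s:     # free block begins where ours ends
            merged = (merged[0], merged[1] + sz)
        else:
            keep.append((s, sz))
    keep.append(merged)
    return sorted(keep)

# Releasing the busy block at 20K-50K bridges the first two free areas into one 0-60K block.
print(deallocate(20, 30))   # [(0, 60), (100, 40)]
```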

Page 24: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

Compaction Steps

Compaction is used to reduce external fragmentation.

Relocate every program in memory so they're contiguous.

Adjust every address, and every reference to an address, within each program to account for the program's new location in memory.

Must leave alone all other values within the program (e.g., data values).
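
A minimal sketch of the relocation step, assuming each job is described only by a start address and size (in KB); every job is slid down so the allocations become contiguous, and the offset by which it moved is what would then be applied to its addresses:

```python
# Jobs currently in memory: (name, start address, size). Values are illustrative.
jobs = [("J1", 0, 30), ("J2", 55, 20), ("J3", 130, 45)]

def compact(jobs):
    """Relocate jobs to be contiguous from address 0 and report each job's offset."""
    next_free = 0
    relocated = []
    for name, start, size in jobs:
        offset = next_free - start          # every address reference shifts by this amount
        relocated.append((name, next_free, size))
        print(f"{name}: moved from {start}K to {next_free}K (offset {offset:+}K)")
        next_free += size
    return relocated

compact(jobs)
# Free memory is now one block starting at 95K instead of three scattered fragments.
```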

Page 25: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

Memory Before & After Compaction

Page 26: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

Virtual Memory

• Virtual memory is a technique that allows the execution of processes that may not be completely in memory.

• One major advantage of this scheme is that programs can be larger than physical memory.

• Virtual memory abstracts main memory into an extremely large, uniform array of storage, separating logical memory as viewed by the user from physical memory.

Page 27: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

Virtual Memory

• This technique frees the programmer from concerns about the memory storage limit.

• VM allows processes to share files and address spaces, and it provides an efficient mechanism for process creation.

• How virtual memory is implemented in an OS:
1. Paging
2. Segmentation

Page 28: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

1. Paging

Main memory is divided into a number of equal-sized, relatively small frames.

Each process is divided into a number of equal-sized pages, each the same length as a frame.

A process is loaded by loading all of its pages into available frames, which need not be contiguous. This is possible through the use of a page table for each process.

Logical address (page number, offset) --> Physical address (frame number, offset).

Pros: no external fragmentation.
Cons: a small amount of internal fragmentation.

Page 29: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

1. Paging: Address Translation Architecture

Page 30: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

1. Paging

An address generated by the CPU is divided into:

◦ Page number (p): used as an index into a page table, which contains the base address of each page in physical memory.

◦ Page offset (d): combined with the base address to define the physical memory address that is sent to the memory unit.

◦ The page table is kept in main memory.
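
A minimal sketch of the translation, assuming a 1 KB page size and a made-up page table: the logical address is split into (p, d), and the frame number looked up for p replaces the page number:

```python
PAGE_SIZE = 1024                 # assumed page/frame size in bytes
page_table = {0: 5, 1: 2, 2: 7}  # page number -> frame number (made-up mapping)

def translate(logical_addr):
    p, d = divmod(logical_addr, PAGE_SIZE)   # page number and page offset
    frame = page_table[p]                    # index the page table with p
    return frame * PAGE_SIZE + d             # physical address sent to the memory unit

addr = 1 * PAGE_SIZE + 100                   # byte 100 of page 1
print(translate(addr))                       # frame 2 -> 2*1024 + 100 = 2148
```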

Page 31: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

1. Paging (Paging Example - 1)

Page 32: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

1. Paging (Paging Example -2)

Page 33: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

2. Segmentation

Based on the common practice by programmers of structuring their programs in modules (logical groupings of code).

A segment is a logical unit such as: main program, subroutine, procedure, function, local variables, global variables, common block, stack, symbol table, or array.

Main memory is not divided into page frames, because the size of each segment is different. Memory is allocated dynamically.

Page 34: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

Segmentation Architecture

A logical address consists of a two-tuple: <segment-number, offset>.

Segment table: maps two-dimensional logical addresses to one-dimensional physical addresses; each table entry has:

◦ base: contains the starting physical address where the segment resides in memory

◦ limit: specifies the length of the segment
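
A small sketch of the lookup, using a made-up segment table: the offset is checked against the segment's limit before being added to its base:

```python
# Segment table: segment number -> (base, limit). Values are illustrative.
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:                      # protection check: outside the segment
        raise MemoryError("segmentation fault: offset beyond segment limit")
    return base + offset                     # physical address

print(translate(2, 53))         # 4300 + 53 = 4353
try:
    translate(1, 500)           # segment 1 is only 400 bytes long
except MemoryError as err:
    print(err)
```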

Page 35: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

2. Segmentation (Address Translation Architecture)

Page 36: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

Virtual Memory

• Virtual memory: separation of user logical memory from physical memory.

• Only part of the program needs to be in memory for execution.

• The logical address space can therefore be much larger than the physical address space.

• Allows address spaces to be shared by several processes.

• Allows for more efficient process creation.

Page 37: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

Advantages of VM

Works well in a multiprogramming environment because most programs spend a lot of time waiting.
A job's size is no longer restricted to the size of main memory (or the free space within main memory).
Memory is used more efficiently.
Allows an unlimited amount of multiprogramming.
Eliminates external fragmentation when used with paging, and eliminates internal fragmentation when used with segmentation.
Allows a program to be loaded multiple times, occupying a different memory location each time.
Allows the sharing of code and data.
Facilitates dynamic linking of program segments.

Page 38: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

Disadvantages of VM

Increased processor hardware costs.
Increased overhead for handling paging interrupts.
Increased software complexity to prevent thrashing.

Page 39: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

Page Fault

Page fault: a failure to find a page in memory.
Thrashing: a process is busy swapping pages in and out.

Procedure to handle a page fault:

1. First, check an internal table for the process to determine whether the reference was a valid or an invalid memory access.
2. If the reference was invalid, terminate the process. If it was valid, but that page has not yet been brought in, page it in.
3. Find a free frame.
4. Schedule a disk operation to read the desired page into the newly allocated frame.
5. When the disk read is complete, modify the internal table kept with the process and the page table to indicate that the page is now in memory.
6. Restart the instruction that was interrupted by the illegal-address trap. The process can now access the page as though it had always been in memory.
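
A sketch of steps 1-6 as code, with the internal table, the page table, the free-frame list and the disk all faked by simple data structures (everything here is a stand-in for illustration, not a real OS interface):

```python
valid_pages   = {0, 1, 2, 3}                  # pages that belong to the process (internal table)
backing_store = {p: f"contents of page {p}" for p in valid_pages}   # the "disk"
page_table    = {0: 9, 2: 4}                  # page -> frame, resident pages only
memory        = {9: backing_store[0], 4: backing_store[2]}          # frame -> contents
free_frames   = [7, 8]

def access(page):
    if page not in valid_pages:               # steps 1-2: invalid reference
        raise RuntimeError("invalid reference: terminate the process")
    if page not in page_table:                # page fault: valid but not in memory
        frame = free_frames.pop()             # step 3: find a free frame
        memory[frame] = backing_store[page]   # step 4: "disk read" into that frame
        page_table[page] = frame              # step 5: update the tables
        print(f"page fault on page {page}: loaded into frame {frame}")
    return memory[page_table[page]]           # step 6: restart the access

print(access(2))   # resident: no fault
print(access(3))   # fault is handled, then the access completes
```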

Page 40: CSC204 - Chapter 3.2 OS Performance Issue (Memory Management)-New.ppt

Thrashing

Thrashing: an excessive amount of page swapping back and forth between main memory and secondary storage. Operation becomes inefficient.

Caused when a page is removed from memory but is called back shortly thereafter.

Can occur across jobs, when a large number of jobs are vying for a relatively small number of free pages.

Can happen within a job (e.g., in loops that cross page boundaries).
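
The effect can be made visible with a tiny simulation. The sketch below uses FIFO page replacement (an assumption; the slides do not name a replacement policy) and counts page faults for a looping reference string as the number of available frames drops below the loop's working set:

```python
from collections import deque

def fifo_faults(refs, n_frames):
    """Count page faults for a reference string under FIFO replacement."""
    frames, faults = deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.popleft()              # evict the oldest resident page
            frames.append(page)
    return faults

refs = [0, 1, 2, 3] * 25                      # a loop repeatedly touching 4 pages
for n in (4, 3, 2):
    print(f"{n} frames -> {fifo_faults(refs, n)} faults out of {len(refs)} references")
# With 4 frames the loop faults only 4 times; with fewer frames than the working
# set, almost every reference faults: the job is thrashing.
```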