
Page 1:

Virtual Memory 2: demand paging
also: anatomy of a process

Guillaume Salagnac

Insa-Lyon – IST Semester

Fall 2019

Page 2:

Reminder: OS duties

[Figure: memory hierarchy, from the CPU and its cache (SRAM) down to main memory (DRAM), fast storage (SSD) and large storage (disk)]

Virtual Memory means:
• hiding the actual location of data during execution
• placing data and moving it around to improve performance
• doing so for several processes running simultaneously

2/29

Page 3:

Virtual memory: principle

[Figure: the CPU emits a virtual address VA = VPN.PO; the MMU/TLB looks the VPN up in the page table (PT) and emits the physical address PA = PPN.PO, mapping pages of the VAS onto frames of the PAS (unmapped entries marked Ø)]

Address space virtualization:
• CPU only works with Virtual Addresses
• MMU/TLB translates every request into a Physical Address
• virtual-to-physical mapping info stored in the Page Table
• one Virtual Address Space (i.e. one PT) for each process

3/29

Page 4:

Outline

1. Virtual Memory for runtime performance: demand paging

2. Virtual Memory for multiprogramming: page sharing

3. Virtual Memory for isolation and protection

4. Anatomy of a Virtual Address Space

4/29

Page 5:

VM for speed: demand paging and swapping

Remember: we use DRAM as a cache for the disk

Demand Paging: working principle
• allocate all virtual pages of all processes on the disk
• only load a page to RAM when it is actually required

Possible states for each virtual page:

Unmapped = page doesn't exist
• no data associated with the page (neither in memory nor on disk)
• AKA unallocated

Present = page exists and is currently copied in memory
• data can be accessed by the CPU
• AKA mapped, loaded, cached, swapped-in

Unloaded = allocated on disk but currently not in memory
• AKA uncached, swapped-out

note: page state is recorded in the PTE

5/29

Page 6:

Swapping data between memory and disk

Question: what happens if the CPU tries to access an unloaded page?

[Figure: CPU, DRAM and disk: a 4-page VAS mapped onto DRAM frames (PAS) and disk sectors 36-39; some virtual pages are in memory, one is only on disk, others are Ø]

VPN  PPN  metadata
0    2    present
1    0    present
2    Ø    unallocated
3    Ø    unloaded, to disk sector no. 38

6/29

Page 7:

Swapping data between memory and disk

From the MMU viewpoint:
• page in memory = access is possible = PTE is valid
• page not in memory = access not possible = PTE is invalid

• unallocated or unloaded

implementation: boolean flag in PTE known as the “valid bit”

When a program tries to access an invalid page:
• MMU raises a software interrupt (trap)
• CPU jumps into the kernel and executes the associated ISR

• if the page is unallocated ⇒ non-recoverable error: the kernel kills the process (segmentation fault)

• if the page is unloaded ⇒ the process is not guilty! the kernel handles the page fault by loading the data into memory

7/29

Page 8:

Page fault handling

1. CPU requests a certain virtual address
2. MMU looks the VA up in the PT but finds a PTE with valid=false
3. MMU sends an interrupt request
   • with offending instruction and address
4. CPU switches to supervisor mode and jumps to the ISR
5. OS reads the page table of the current process
   • checks that the virtual page does exist (i.e. is allocated)
6. OS finds a free physical page (by looking in the frame table)
   • sometimes we must swap out a page to make space
7. OS swaps in the required page from disk
   • disk access = I/O burst ⇒ context switch to keep the CPU busy
8. when the page is loaded: OS updates the PTE to reflect the new mapping
9. OS marks the original process as ready

• still transparent for application programmer
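To make the sequence concrete, here is a minimal user-space sketch of the same logic (not the actual kernel code; the types and names are made up for illustration, and the frame-table lookup and disk read are reduced to stubs):

#include <stdio.h>
#include <stdlib.h>

enum page_state { UNALLOCATED, PRESENT, UNLOADED };

struct pte {                     /* one page-table entry */
    enum page_state state;       /* page state, recorded in the PTE (cf. slide 5) */
    int ppn;                     /* physical frame number, when PRESENT */
    int sector;                  /* disk sector number, when UNLOADED */
};

static struct pte page_table[4] = {        /* same situation as on slide 6 */
    { PRESENT,     2, -1 },
    { PRESENT,     0, -1 },
    { UNALLOCATED, -1, -1 },
    { UNLOADED,    -1, 38 },
};

static int find_free_frame(void) { return 3; }      /* stand-in for the frame-table lookup */
static void swap_in(int sector, int ppn) {           /* stand-in for the disk read (I/O burst) */
    printf("loading sector %d into frame %d\n", sector, ppn);
}

static void handle_fault(int vpn)                    /* steps 5 to 8, vastly simplified */
{
    struct pte *e = &page_table[vpn];
    if (e->state == UNALLOCATED) {                   /* non-recoverable error */
        fprintf(stderr, "segmentation fault on VPN %d\n", vpn);
        exit(EXIT_FAILURE);                          /* the kernel would kill the process */
    }
    int ppn = find_free_frame();                     /* may require swapping another page out */
    swap_in(e->sector, ppn);
    e->ppn = ppn;
    e->state = PRESENT;                              /* the faulting access can now be retried */
}

int main(void)
{
    handle_fault(3);   /* unloaded page: gets swapped in */
    handle_fault(2);   /* unallocated page: process "killed" */
    return 0;
}

The swap-out of step 6 and the context switch of step 7 are left out here; that is where most of the real complexity lives.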

8/29

Page 9:

Demand paging: summary

Idea: use DRAM as a cache for the disk
• complicated interaction between software and hardware
  • MMU detects accesses to unloaded pages: page fault
  • OS deals with loading/unloading to disk: swapping
• invisible from userland
• CPU kept busy via context switching

Average access time to main memory:
• AMAT = page hit time + (page fault rate × page fault penalty)
• page hit time ≈ DRAM latency ≈ 50 ns
• page fault penalty ≈ disk latency ≈ 5 ms
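For instance, taking a page fault rate of 1 per 100 000 accesses (a value chosen here purely for illustration): AMAT = 50 ns + (1/100 000) × 5 ms = 50 ns + 50 ns = 100 ns, so even this modest fault rate already doubles the average access time.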

⇒ system performance is very sensitive to the page fault rate

9/29

Page 10:

Outline

1. Virtual Memory for runtime performance: demand paging

2. Virtual Memory for multiprogramming: page sharing

3. Virtual Memory for isolation and protection

4. Anatomy of a Virtual Address Space

10/29

Page 11:

Reminder: OS duties

[Figure: memory hierarchy, from the CPU and its cache (SRAM) down to main memory (DRAM), fast storage (SSD) and large storage (disk)]

Virtual Memory means:
• hiding the actual location of data during execution
• placing data and moving it around to improve performance
• doing so for several processes running simultaneously

11/29

Page 12:

VM for multiprogramming: process isolation

Idea: each process gets an individual virtual address space
⇒ OS maintains one page table per process

[Figure: two virtual address spaces VAS1 and VAS2, each with its own page table, mapped onto disjoint frames of the same PAS (unmapped entries Ø)]

Remarks:
• RAM allocated to the “most useful” virtual pages
• MMU reconfigured at each context switch
  • flush the TLB i.e. forget all PTEs of the previous process
  • take as new reference the PT of the new process
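In software terms, that reconfiguration boils down to something like this toy model (purely illustrative; real hardware walks the page table itself, and modern TLBs can tag entries with an address-space identifier instead of flushing everything):

#include <stdio.h>

#define TLB_SIZE 8

struct tlb_entry { int vpn, ppn, valid; };

static struct tlb_entry tlb[TLB_SIZE];      /* cache of recent translations (all invalid at start) */
static int *current_page_table;             /* the MMU's pointer to the active page table */

static int pt_process1[4] = { 2, 0, -1, -1 };   /* VPN -> PPN for process 1 (-1 stands for Ø) */
static int pt_process2[4] = { 3, -1, 1, -1 };   /* VPN -> PPN for process 2 */

static void context_switch(int *new_pt)     /* what the kernel asks of the MMU on every switch */
{
    for (int i = 0; i < TLB_SIZE; i++)
        tlb[i].valid = 0;                   /* flush the TLB: forget all PTEs of the previous process */
    current_page_table = new_pt;            /* take the PT of the new process as the new reference */
}

int main(void)
{
    context_switch(pt_process1);
    printf("process 1: VPN 0 -> PPN %d\n", current_page_table[0]);
    context_switch(pt_process2);
    printf("process 2: VPN 0 -> PPN %d\n", current_page_table[0]);
    return 0;
}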

12/29

Page 13:

VM for multiprogramming: shared memory

Idea: enable several processes to communicate
• one physical page can be mapped into several VASes

[Figure: VAS1 and VAS2 both map one of their virtual pages onto the same physical frame of the PAS]

Remarks:
• feature accessible through kernel API: mmap() syscall
• shared page typically placed at the same VA ⇒ why?
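As a rough illustration of the mmap() route, here is a sketch using the POSIX API, with a parent and a forked child standing in for two communicating processes (MAP_ANONYMOUS is a widespread extension; a real setup would more likely map a file or a shm_open() object):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* one physical page, mapped shared: both processes will see the same frame */
    char *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                      MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) { perror("mmap"); return 1; }

    if (fork() == 0) {                    /* child: write into the shared page */
        strcpy(page, "hello from the child");
        return 0;
    }
    wait(NULL);                           /* parent: read what the child wrote */
    printf("parent reads: %s\n", page);
    munmap(page, 4096);
    return 0;
}

Here both processes inherit the mapping at the same virtual address, which is one answer to the “why?” above: pointers stored inside the shared page stay valid in both processes.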

13/29

Page 14:

Outline

1. Virtual Memory for runtime performance: demand paging

2. Virtual Memory for multiprogramming: page sharing

3. Virtual Memory for isolation and protection

4. Anatomy of a Virtual Address Space

14/29

Page 15:

VM for security: kernel protection

Idea: each PTE contains some permission information
• e.g: read-only, non-executable, kernel-only...
• if PTE is valid ⇒ MMU also checks for permissions

[Figure: VAS1 and VAS2, each split into a user-space region and a kernel-space region; the kernel pages map to the same physical frames in both page tables, some entries Ø]

Remarks:
• kernel typically mapped (though protected) in every VAS
• in general, located above userspace in the VAS
  • e.g. 32-bit Linux x86: 0–3GB = user space, 3GB–4GB = kernel
  • not all pages allocated!
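A sketch of how such permission information might be encoded and checked (the bit layout and flag names below are invented for illustration; every architecture defines its own):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* hypothetical PTE flag bits, stored next to the PPN in each entry */
#define PTE_VALID   (1u << 0)
#define PTE_WRITE   (1u << 1)   /* cleared => read-only */
#define PTE_EXEC    (1u << 2)   /* cleared => non-executable */
#define PTE_USER    (1u << 3)   /* cleared => kernel-only */

/* what the MMU conceptually checks on each access */
static bool access_allowed(uint32_t pte, bool write, bool exec, bool user_mode)
{
    if (!(pte & PTE_VALID))             return false;  /* page fault */
    if (user_mode && !(pte & PTE_USER)) return false;  /* kernel-only page */
    if (write && !(pte & PTE_WRITE))    return false;  /* read-only page */
    if (exec  && !(pte & PTE_EXEC))     return false;  /* non-executable page */
    return true;
}

int main(void)
{
    uint32_t kernel_pte = PTE_VALID | PTE_WRITE;            /* kernel-only, writable, no exec */
    uint32_t text_pte   = PTE_VALID | PTE_USER | PTE_EXEC;  /* user .text: read + execute */
    printf("%d\n", access_allowed(kernel_pte, false, false, true)); /* 0: user access denied */
    printf("%d\n", access_allowed(text_pte, true, false, true));    /* 0: write to read-only page */
    printf("%d\n", access_allowed(text_pte, false, true, true));    /* 1: instruction fetch allowed */
    return 0;
}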

15/29

Page 16:

VM for security: MMIO protection

Idea: physical addresses lead to DRAM and peripherals
⇒ Memory-mapped input/output

typically:
• DRAM allocated to userland
• MMIO restricted to kernel

⇒ easy to enforce using paging

Note: the MMU is a peripheral too

16/29

Page 17:

Virtual memory: summary

Under the responsibility of the kernel
• reconfigure the MMU/TLB at every context switch
  • one process = one page table
• handle page faults
  • carry out all swapping operations between RAM and disk
• allocate memory pages for processes
  • e.g. via the mmap() syscall

[Figure: a 4-page VAS before and after mmap(...): one previously unmapped virtual page becomes mapped]

17/29

Page 18:

Outline

1. Virtual Memory for runtime performance: demand paging

2. Virtual Memory for multiprogramming: page sharing

3. Virtual Memory for isolation and protection

4. Anatomy of a Virtual Address Space
   Static allocation: .text and .data sections
   Stack allocation of local variables
   Heap allocation of dynamic data structures

18/29

Page 19:

Virtual memory: principle

[Figure: the CPU emits a virtual address VA = VPN.PO; the MMU/TLB looks the VPN up in the page table (PT) and emits the physical address PA = PPN.PO, mapping pages of the VAS onto frames of the PAS (unmapped entries marked Ø)]

Address space virtualization:
• CPU only works with Virtual Addresses
• MMU/TLB translates every request into a Physical Address
• virtual-to-physical mapping info stored in the Page Table
• one Virtual Address Space (i.e. one PT) for each process

19/29

Page 20:

Anatomy of a process address space

[Figure: a Virtual Address Space, virtual pages numbered VPN 0 to X]

To launch a new program:
1. create a new PCB and a new PT
2. copy the executable in memory
3. PCB.PC := address of main()
4. PCB.state := ready
5. append PCB to the ready queue

one executable = several sections
• .text = program instructions
• .data = global variables
• .heap = dynamic allocation
• .stack = local variables (+ calls)

show contents of an executable file:
• objdump -h ./prog.elf

• objdump -d ./prog.elf
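For instance, compiling a toy program like this one (a hypothetical prog.c) and inspecting the result with the objdump commands above shows the .text and .data sections it produces; the comments indicate where each item lives at run time:

#include <stdlib.h>

int counter = 1;                  /* global variable: static allocation, .data section */

int main(void)                    /* instructions: .text section */
{
    int local = counter;          /* local variable: allocated on the stack */
    int *buf = malloc(100 * sizeof(int));   /* dynamic allocation: lives on the heap */
    buf[0] = local;
    free(buf);
    return 0;
}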

20/29

Page 21:

Static allocation of instructions and global variables

Static = «does not move during execution»
• location decided once, before execution begins
• size decided once, before execution begins

source code:

int i, n, r;

factorial() {
    i = 1;
    while (n > 0) {
        i = i*n;
        n = n-1;
    }
    r = i;
}

machine code:
80483a7: ....
80483aa: c7 05 2c 96 04 08 01
80483b1: 00 00 00
80483b4: eb 20
80483b6: 8b 15 2c 96 04 08
80483bc: a1 30 96 04 08
80483c1: 0f af c2
80483c4: a3 2c 96 04 08
80483c9: a1 30 96 04 08
80483ce: 83 e8 01
80483d1: a3 30 96 04 08
80483d6: a1 30 96 04 08
80483db: 85 c0
80483dd: 7f d7
80483df: a1 2c 96 04 08
80483e4: a3 34 96 04 08
80483e9: ...

21/29

Page 22:

Static allocation of instructions and global variables

Static = «does not move during execution»
• location decided once, before execution begins
• size decided once, before execution begins

source code:

int i, n, r;

factorial() {
    i = 1;
    while (n > 0) {
        i = i*n;
        n = n-1;
    }
    r = i;
}

disassembled machine code:
80483a7: ...
80483aa: movl $0x1,0x804962c
80483b1:
80483b4: jmp 0x80483d6
80483b6: mov 0x804962c,%edx
80483bc: mov 0x8049630,%eax
80483c1: imul %edx,%eax
80483c4: mov %eax,0x804962c
80483c9: mov 0x8049630,%eax
80483ce: sub $0x1,%eax
80483d1: mov %eax,0x8049630
80483d6: mov 0x8049630,%eax
80483db: test %eax,%eax
80483dd: jg 0x80483b6
80483df: mov 0x804962c,%eax
80483e4: mov %eax,0x8049634
80483e9: ...

21/29

Page 23:

“Dynamic” allocation in the .stack section

Problem: what if the size and/or quantity of variables is unknown before execution?

int f(int n) {
    if (n<=1) return 1;
    int a = f(n-1);
    int b = f(n-2);
    return a+b;
}

⇒ Solution: use an unbounded data structure, i.e. a stack

Remarks:
• approach used in 99% of programming languages
• AKA execution stack, program stack, control stack, run-time stack, machine stack, call stack, or just “the stack”
• one function activation = one portion of the stack
  • local variables, function arguments, return address...
⇒ dedicated CPU instructions: PUSH, POP, CALL, RET
• top of stack tracked by Stack Pointer register SP
• memory beyond SP not considered significant

22/29

Page 24:

The execution stack: remarks

[Figure: CPU with its SP register pointing to the top of the stack]

Definition:

• area dedicated to “dynamic” allocation
• allocated and released Last-In First-Out
• contents: local variables, return addresses...
• managed by the compiler
• top of stack pointed by SP register

Advantages
• easy to use for the programmer
  • maps nicely to features of “high-level” languages
• efficient at runtime
  • using register-indirect addressing mode e.g. LOAD [SP]
• automatic stack growth implemented by OS

Limitations
• LIFO: not suitable for certain data structures

23/29

Page 25:

Dynamic allocation at runtime AKA heap allocation

Idea: allow for arbitrary allocations/deallocations during execution

User interface
• malloc(size)
  • search the heap for a large-enough free zone, return its address (or return an error if unable to find one)
• free(address)
  • notify the allocator that a previously allocated block is no longer needed and can be reused for later allocations
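Typical usage of this interface from C (a minimal illustrative example):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int n = 1000;                           /* size only known at run time */
    int *tab = malloc(n * sizeof(int));     /* ask the allocator for a large-enough zone */
    if (tab == NULL) return 1;              /* allocation may fail */
    for (int i = 0; i < n; i++) tab[i] = i;
    printf("%d\n", tab[n-1]);
    free(tab);                              /* give the block back for later reuse */
    return 0;
}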

Advantages
• total flexibility for the programmer
• compatible with all kinds of data structures

Drawbacks
• total flexibility for the programmer
• complex implementation ⇒ allocation can become slow

24/29


Page 26:

Heap allocation vs allocation of new VM pages

Problem: how to implement malloc() and free()?
Bad idea: forward all allocation requests to the kernel
• e.g. via the mmap() and munmap() syscalls

[Figure: a 4-page VAS before and after mmap(...): one previously unmapped virtual page becomes mapped]

Drawbacks
• cannot allocate half a page ⇒ wasted space AKA fragmentation
• frequent system calls ⇒ bad performance

Solution: implement the memory allocator in userspace
• recycle freed blocks within the same process when possible
• only when the heap is full ⇒ request new pages from the kernel
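A minimal sketch of that strategy, assuming a trivial “bump” allocator (illustrative only: my_malloc is a made-up name, blocks are never recycled here, and requests larger than one arena are not handled):

#include <stddef.h>
#include <stdio.h>
#include <sys/mman.h>

#define ARENA_SIZE (16 * 4096)          /* request 16 pages at a time from the kernel */

static char  *arena = NULL;             /* current chunk obtained via mmap() */
static size_t used  = 0;

static void *my_malloc(size_t size)
{
    size = (size + 15) & ~(size_t)15;   /* keep allocations 16-byte aligned */
    if (arena == NULL || used + size > ARENA_SIZE) {
        /* only here do we pay for a system call: ask for fresh anonymous pages */
        arena = mmap(NULL, ARENA_SIZE, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (arena == MAP_FAILED) return NULL;
        used = 0;
    }
    void *block = arena + used;         /* carve the block out of the current arena */
    used += size;
    return block;
}

int main(void)
{
    char *a = my_malloc(100);           /* first call triggers one mmap() */
    char *b = my_malloc(200);           /* second call is served without any syscall */
    printf("%p %p\n", (void *)a, (void *)b);
    return 0;
}

A real allocator also keeps track of freed blocks so they can be recycled, which is exactly what the freelist on the next slide is for.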

25/29

Page 27:

Heap allocation: remarks

Why it is hard:
• cannot split a request: allocated zone must be contiguous
• cannot move a block once allocated: app has pointers to it
• if several blocks are possible: which one to choose?
• if chosen free block is too large: should we split it?
• too many free blocks ⇒ allocation becomes slow
• free blocks too small ⇒ unable to use them for allocation

Data structure: list of free blocks AKA freelist

[Figure: heap layout alternating Occupied and Free blocks of various sizes; the free blocks are chained together into a freelist]
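A minimal first-fit sketch over such a freelist (types and block sizes invented for illustration; splitting of oversized blocks and coalescing of neighbours are left out):

#include <stddef.h>
#include <stdio.h>

struct free_block {                     /* header stored at the start of each free zone */
    size_t size;
    struct free_block *next;
};

/* first fit: walk the freelist and take the first block that is large enough */
static struct free_block *first_fit(struct free_block **list, size_t size)
{
    struct free_block **prev = list;
    for (struct free_block *b = *list; b != NULL; prev = &b->next, b = b->next) {
        if (b->size >= size) {
            *prev = b->next;            /* unlink the chosen block from the freelist */
            return b;
        }
    }
    return NULL;                        /* no free block is big enough */
}

int main(void)
{
    /* three free zones with example sizes 59, 42 and 80 */
    static struct free_block b3 = { 80, NULL };
    static struct free_block b2 = { 42, &b3 };
    static struct free_block b1 = { 59, &b2 };
    struct free_block *freelist = &b1;

    printf("%zu\n", first_fit(&freelist, 10)->size);   /* 59: the first block already fits */
    printf("%zu\n", first_fit(&freelist, 50)->size);   /* 80: only the last one is big enough */
    printf("%p\n", (void *)first_fit(&freelist, 200)); /* NULL: no block is large enough */
    return 0;
}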

Example: where to allocate a block of size 10? 50? 200?

26/29

Page 28:

Anatomy of a process: summary

[Figure: a Virtual Address Space, virtual pages numbered VPN 0 to X]

one VAS = several sections
• .text = program instructions (static)
  • initialized from executable file
  • VM flags: read-only, executable
• .data = global variables (static)
  • initialized from executable file
  • VM flags: read-write, non-executable
• .heap = dynamic allocation
  • userspace malloc()/free() API
  • mmap() to request new pages when full
• .stack = local variables (+ calls)
  • accessed via PUSH/POP instructions
  • Last In First Out allocation

27/29

Page 29:

Outline

1. Virtual Memory for runtime performance: demand paging

2. Virtual Memory for multiprogramming: page sharing

3. Virtual Memory for isolation and protection

4. Anatomy of a Virtual Address Space
   Static allocation: .text and .data sections
   Stack allocation of local variables
   Heap allocation of dynamic data structures

28/29

Page 30:

Summary

Virtual memory via paging
• dissociate logical addresses from physical addresses
• managed in software (kernel) + hardware (MMU/TLB)

Demand paging
• swap memory pages between DRAM and disk
• page faults detected by MMU, handled by OS
• disk is slow ⇒ faults must remain infrequent

Memory allocation
• static (code, globals) vs dynamic allocation
• execution stack for local variables
• heap allocation with malloc()/free()

29/29