TRANSCRIPT
Machine-Independent Virtual Memory Management for Paged Uniprocessor and
Multiprocessor Architectures
R. Rashid, et al.
2nd Symposium on Architectural Support for Programming Languages and Operating Systems
Palo Alto, October 1987
A summary by Nick Rayner
for PSU CS533, Spring 2006
Overview of Contribution
• Portability
  – Fixed, simple hardware interface
• Capability
  – Advanced memory management in software
• Integration
  – Memory management is message passing
• Customization
  – User-defined memory handlers
Mach Review
• Task: resource allocation unit
  – Paged virtual address space
• Thread: CPU utilization unit
  – Independent program counter within a task
• Port: protected queue for messages
• Message: typed collection of data objects
• Memory Object: managed data collection
  – Can be mapped into a task's address space
VM Features
• Address ranges map to memory objects
• Nine standard operations on a task's address space
• Copy-on-write sharing between tasks
• Read/write sharing with child tasks
• “Pager” tasks associated with memory objects handle page faults and page-outs
Data Structures
• Resident page table
  – Indexed by physical page number
• Address map
  – Maps virtual address ranges to object regions
• Memory objects
  – Backing storage, kernel- or user-managed
• Pmap
  – Machine-dependent memory mapping
Memory Paging
• Physical pages ≠ hardware pages
  – Physical page size is a boot-time parameter
  – Must be a power-of-2 multiple of the hardware page size
• Physical pages ≠ memory object pages
  – Object handlers may use their own page sizes and policies
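The boot-time size constraint above can be sketched as a simple check (the function name is illustrative, not from the paper):

```python
def valid_physical_page_size(physical, hardware):
    """True if `physical` is hardware * 2**k for some k >= 0,
    i.e. a power-of-two multiple of the hardware page size."""
    if physical < hardware or physical % hardware != 0:
        return False
    multiple = physical // hardware
    # Power-of-two test: a power of two shares no bits with its predecessor.
    return multiple & (multiple - 1) == 0

# e.g. 8 KB physical pages over 4 KB hardware pages are valid;
# 12 KB pages are a multiple but not a power-of-two multiple.
```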
Resident Page Table
• Modified and referenced bits
• Entries belonging to the same memory object are linked into a per-object list
• Allocation queues for free, reclaimable, and allocated pages
• Hash buckets keyed on (memory object, offset) for fast lookup
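The hashed lookup described above can be sketched as follows; class and method names are illustrative, not the kernel's:

```python
class ResidentPage:
    """One entry of the resident page table (one physical page)."""
    def __init__(self, obj, offset):
        self.obj = obj            # owning memory object
        self.offset = offset      # offset of the page within that object
        self.modified = False     # dirty bit
        self.referenced = False   # reference bit

class VPHashTable:
    """Find the resident page (if any) backing (object, offset)."""
    def __init__(self, nbuckets=64):
        self.buckets = [[] for _ in range(nbuckets)]

    def _bucket(self, obj, offset):
        return self.buckets[hash((id(obj), offset)) % len(self.buckets)]

    def insert(self, page):
        self._bucket(page.obj, page.offset).append(page)

    def lookup(self, obj, offset):
        for page in self._bucket(obj, offset):
            if page.obj is obj and page.offset == offset:
                return page
        return None  # not resident: a fault must consult the pager
```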
Address Maps
• Doubly linked list of entries sorted by virtual address
• Each entry maps a range of task virtual addresses to byte offsets within a memory object
• Entries carry inheritance and protection attributes
• A “hint” pointer to the entry for the last fault may be kept to speed repeated lookups
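A minimal sketch of such a map with a last-fault hint (a linear scan stands in for the kernel's sorted doubly linked list; names are illustrative):

```python
class MapEntry:
    """One address-map entry: a virtual range backed by an object region."""
    def __init__(self, start, end, obj, obj_offset):
        self.start, self.end = start, end      # task virtual address range
        self.obj, self.obj_offset = obj, obj_offset

class AddressMap:
    def __init__(self):
        self.entries = []   # kept sorted by start address
        self.hint = None    # entry that satisfied the last lookup

    def insert(self, entry):
        i = 0
        while i < len(self.entries) and self.entries[i].start < entry.start:
            i += 1
        self.entries.insert(i, entry)

    def lookup(self, addr):
        # Faults cluster, so check the hint before scanning.
        if self.hint is not None and self.hint.start <= addr < self.hint.end:
            return self.hint
        for e in self.entries:
            if e.start <= addr < e.end:
                self.hint = e
                return e
        return None  # unmapped address
```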
Memory Objects
• Reference counting allows garbage collection
  – A cache of unreferenced objects is maintained for rapid reuse
  – Pagers may request that their objects be cached
• Kernel manages an object's pages resident in primary memory
• Pager handles store (page-out) and fetch (page-in)
• Standard ports and a fixed interface govern pager–kernel communication
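The reference-counting and reuse-cache behavior above can be sketched as (class and method names are illustrative, not the kernel's):

```python
class MemoryObject:
    def __init__(self, cache_on_release=False):
        self.refs = 0
        # Set when the pager has asked that the object be kept cached
        # rather than destroyed when its last reference goes away.
        self.cache_on_release = cache_on_release

class ObjectTable:
    def __init__(self):
        self.cached = []   # unreferenced objects held for rapid reuse

    def acquire(self, obj):
        obj.refs += 1
        if obj in self.cached:
            self.cached.remove(obj)   # reuse avoids re-contacting the pager
        return obj

    def release(self, obj):
        obj.refs -= 1
        if obj.refs > 0:
            return "live"
        if obj.cache_on_release:
            self.cached.append(obj)
            return "cached"
        return "freed"   # garbage-collected
```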
Sharing
• Copy-on-write: a “shadow object” is created
  – Until a write occurs, it contains only a link to the source object
  – On write, it holds the updated pages plus the link
  – Chains develop with repeated copies
  – The kernel garbage-collects unneeded intermediate shadows
• Read/write sharing: indirection via a “sharing map”
  – Equivalent in structure to an address map entry
  – Pointed to by the address maps of the sharing tasks
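The shadow-chain behavior of copy-on-write can be sketched as follows; reads fall through the chain to the nearest object holding the page, while writes stay private to the shadow (names are illustrative):

```python
class VMObject:
    def __init__(self, shadow=None):
        self.pages = {}        # offset -> data, only pages written here
        self.shadow = shadow   # object this one shadows (the copy source)

    def read(self, offset):
        # Walk the shadow chain until some object holds the page.
        obj = self
        while obj is not None:
            if offset in obj.pages:
                return obj.pages[offset]
            obj = obj.shadow
        return None

    def write(self, offset, data):
        # The shadow gets a private copy; the source is untouched.
        self.pages[offset] = data

def cow_copy(source):
    """A fresh copy initially contains only a link to its source."""
    return VMObject(shadow=source)
```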
Machine Dependence: pmap.c
• Must implement 16 specific routines (some may be no-ops on a given architecture)
• Must maintain the pmaps (machine-dependent mapping structures)
• Must switch to the appropriate pmap on context switch
• Not expected to keep full accounting of all mappings
• Should retain mappings for frequently referenced task addresses and the current kernel map
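A pmap's role as a discardable cache of mappings can be sketched as follows; the method names echo the paper's pmap_enter/pmap_remove style but the interface here is illustrative, not the actual 16-routine contract:

```python
class Pmap:
    """Machine-dependent mapping module, modeled as a cache.
    The machine-independent layer holds the authoritative state,
    so entries may be dropped and refaulted at any time."""
    def __init__(self):
        self.mappings = {}   # vaddr -> (paddr, protection)

    def enter(self, vaddr, paddr, prot):
        self.mappings[vaddr] = (paddr, prot)

    def remove(self, vaddr):
        self.mappings.pop(vaddr, None)   # removing a missing entry is fine

    def resolve(self, vaddr):
        # None means "fault": ask the machine-independent layer.
        return self.mappings.get(vaddr)

    def shrink(self, keep):
        # No full accounting required: drop everything except the
        # frequently referenced addresses the caller asks to keep.
        self.mappings = {v: m for v, m in self.mappings.items() if v in keep}
```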
Hardware Assessment
• By confining the architecture's role to supporting the Mach-defined interface, the authors claim the system can provide a “relatively unbiased assessment” of hardware design alternatives
• No mention is made of how one could control for the specific pmap.c implementation used in each case
Performance
• Net benefit shown for all machines
• Note that the measurements are unlikely to involve user-space pagers or memory objects
Tables 7-1 and 7-2 of the paper
Weaknesses
• Shadow chains can grow long and redundant; kernel garbage collection may be incomplete and requires “complex” locking on multiprocessors, suggesting overhead and possible contention
• User-level paging services require a context switch away from the kernel fault handler and back again
• TLB consistency issues in SSM
Strengths
• Porting requires altering only a single module, and has been successfully accomplished by a novice C programmer
• Sharing facilities improve the possibilities for parameter passing between tasks
• External memory objects with paging services support distributed systems as well as arbitrary backing storage
Conclusion
• Portability and capability demonstrated (at least with kernel-managed objects)
• No clear sense of how the tradeoffs might play out with significant user-level additions