Unit I, Chapter 3: Advanced OS
TRANSCRIPT
-
8/3/2019 Unit I_chapter 3 Adv OS
1/135
2006 IBM Corporation
Threads and Lightweight Processes
-
BCSDCS 707 / BITDIT 707 Adv OS and OS Industry Trends
Two limitations of the process model
1. Largely independent tasks that run concurrently must share a common address space and other resources, e.g. server-side database managers, transaction-processing monitors, etc. Such applications are parallel in nature and need a programming model that supports parallelism, but UNIX systems force them to serialize these tasks, or to use awkward and inefficient mechanisms to manage multiple operations.
2. Traditional processes cannot take advantage of multiprocessor architectures, because a process can use only one processor at a time. An application may create a number of separate processes and dispatch them to different processors, but it must then find ways of sharing memory and resources and of synchronizing the tasks.
These problems are tackled by UNIX variants through primitives that support concurrent processing. Across the variants these include kernel threads, user threads, kernel-supported user threads, C-threads, pthreads, and lightweight processes.
-
Motivation
Several independent tasks need not be serialized. A database server, for example, listens for and processes numerous client requests; since they need not be serviced in a particular order, they could run in parallel. Such applications perform better if the system provides mechanisms for concurrent execution of the subtasks.
On UNIX systems, such programs use multiple processes. Many server applications have a listener process that waits for client requests; when a request arrives, the listener forks a new process to service it. Since servicing a request often involves I/O operations that may block the process, this approach yields some concurrency benefits even on uniprocessor systems.
On uniprocessor machines, dividing the work among multiple processes helps because when one process must block for I/O or page-fault servicing, another process can progress in the meantime. For instance, UNIX allows users to compile several files in parallel, using a separate process for each.
-
Multiple processes: disadvantages
1. Creating processes adds overhead, because fork is an expensive system call.
2. IPC (message passing or shared memory) is needed, since each process has its own address space.
3. Additional work is required to dispatch processes to different machines or processors, pass information between these processes, wait for their completion, and gather the results.
4. UNIX has no appropriate frameworks for sharing certain resources, e.g., network connections.
The model is justified only when the benefits of concurrency offset the cost of creating and managing multiple processes.
Thread abstraction: an independent computational unit that is part of the total processing work of an application. Each unit has few interactions with the others, and hence low synchronization requirements; an application contains one or more such units. A UNIX process is single-threaded: all computation is serialized within the same unit.
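As a concrete sketch of the thread abstraction, the POSIX pthreads interface (discussed later in this unit) lets an application run several computational units concurrently within one address space. The unit body and the run_units() wrapper below are hypothetical names used only for illustration:

```c
#include <pthread.h>

/* Each thread runs one independent computational unit. */
static void *unit(void *arg) {
    long id = (long)arg;
    /* ...the unit's real work would go here; we return a toy result... */
    return (void *)(id * id);
}

/* Run n units (n <= 16) concurrently and gather their results. */
long run_units(int n) {
    pthread_t tid[16];
    long total = 0;
    for (long i = 0; i < n; i++)
        pthread_create(&tid[i], NULL, unit, (void *)i);
    for (int i = 0; i < n; i++) {
        void *res;
        pthread_join(tid[i], &res);   /* wait for completion, collect result */
        total += (long)res;
    }
    return total;
}
```

Compare this with the fork-based model: no separate address spaces are created, and results come back through shared memory rather than IPC.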
-
Multiple Threads and Processors
Multithreaded systems combined with multiprocessor architectures suit parallelized, compute-bound applications. True parallelism requires running each thread on a different processor; if the number of threads exceeds the number of processors, the threads must be multiplexed on the available processors.
Ideally, an application with n threads running on n processors finishes its work in 1/n-th the time required by a single-threaded version. In practice, the overhead of creating, managing, and synchronizing threads, and that of the multiprocessor operating system, reduces the benefit below this ideal ratio.
-
Single and Multithreaded Processes
-
Traditional UNIX: uniprocessor with single-threaded processes
Single-threaded processes executing on a uniprocessor machine provide an illusion of concurrency: each process executes for a brief period of time (a time slice) before the system switches to the next.
In the figure, the first three processes form the server side of a client-server application. The server program spawns a new process for each active client; the processes have identical address spaces and share information with one another using IPC. The lower two processes run another server application.
-
Multithreaded processes in a uniprocessor system
Here the two servers run in a multithreaded system. Each server runs as a single process, with multiple threads sharing a single address space. Interthread context switching is handled by either the kernel or a user-level threads library, depending on the OS.
This reduces the load on the memory subsystem: it eliminates the multiple, nearly identical address spaces for each application, and the copy-on-write memory sharing needed to manage separate address translation maps for each process. Since all threads of an application share a common address space, they can use efficient, lightweight interthread communication and synchronization mechanisms.
There are disadvantages. A single-threaded process does not have to protect its data from other processes, but multithreaded processes must be concerned with every object in their address space: if more than one thread can access an object, access must be synchronized to avoid data corruption.
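The need for synchronization can be shown with a small pthreads sketch (the names bump() and shared_count() are made up for this example): two threads increment a shared counter, and the mutex around the increment is what keeps the final count correct.

```c
#include <pthread.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *bump(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* serialize access to the shared object */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

long shared_count(void) {
    pthread_t a, b;
    counter = 0;
    pthread_create(&a, NULL, bump, NULL);
    pthread_create(&b, NULL, bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return counter;   /* reliably 200000 only because of the mutex */
}
```

Without the lock, the two read-modify-write sequences could interleave and lose updates.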
-
Multithreaded processes in a multiprocessor system
Here two multithreaded processes run on a multiprocessor. The threads of one process share the same address space, but each runs on a different processor, so all run concurrently. This improves performance but complicates the synchronization problems.
A multiprocessor system is useful even for single-threaded applications, as several processes can run in parallel. Conversely, multithreaded applications show significant benefits even on single-processor systems: when one thread must block for I/O or some other resource, another thread can be scheduled to run, and the application continues to progress. The thread abstraction is thus more suited to representing the intrinsic concurrency of a program than to mapping software designs onto multiprocessor hardware architectures.
-
Concurrency and parallelism
Parallelism is the actual degree of parallel execution achieved; it is limited by the number of physical processors available to the application. Concurrency is the maximum parallelism the application could achieve with an unlimited number of processors. It depends on how the application is written: how many threads of control can execute simultaneously, given the proper resources.
Concurrency may be provided at the system or the application level. The kernel provides system concurrency by recognizing multiple threads of control (hot threads) within a process and scheduling them independently. Applications benefit from system concurrency even on a uniprocessor: if one thread blocks on an event or resource, the kernel can schedule another thread.
-
User-level thread libraries provide user concurrency through user threads, or coroutines (cold threads), which are not recognized by the kernel but are scheduled and managed by the applications themselves. This does not provide true concurrency or parallelism, since such threads cannot actually run in parallel, but it provides a more natural programming model for concurrent applications.
Threads are thus both organizational tools and a way to exploit multiple processors. Kernel threads allow parallel execution on multiprocessors but are not suitable for structuring user applications; a purely user-level facility is useful only for structuring applications and does not permit parallel execution of code.
A dual concurrency model combines system and user concurrency: the kernel recognizes multiple threads in a process, and libraries add user threads that are not seen by the kernel. User threads allow synchronization between concurrent routines in a program without the overhead of making system calls. It is always a good idea to reduce the size and responsibilities of the kernel, and splitting the thread-support functionality between the kernel and the threads library does exactly that.
-
Fundamental Abstractions
A process is a compound entity with two components: a set of threads and a collection of resources.
1. A thread is a dynamic object that represents a control point in the process and executes a sequence of instructions. It has private objects: a program counter, a stack, and a register context.
2. The resources: address space, open files, user credentials, quotas, and so on, are shared by all threads in the process.
A UNIX process has a single thread of control; multithreaded systems allow more than one thread of control in each process.
Centralizing resource ownership has drawbacks. A server application often assumes the identity of the client while servicing a request: it is installed with superuser privileges and calls setuid, setgid, and setgroups to temporarily change its user credentials to match those of the client. Multithreading such a server to increase concurrency creates security problems: since the process has a single set of credentials, it can only pretend to be one client at a time, so the server is forced to serialize (single-thread) all system calls that check for security.
There are several different types of threads, with different properties and uses. Three important types: kernel threads, lightweight processes, and user threads.
-
Kernel Threads
A kernel thread is not associated with a user process. It is created and destroyed as needed internally by the kernel and is responsible for executing a specific function. It shares the kernel text and global data, has its own kernel stack, is independently scheduled, and uses the standard synchronization mechanisms of the kernel (sleep() and wakeup()).
1. Kernel threads are useful for performing asynchronous I/O. Rather than providing special mechanisms to handle it, the kernel can create a new thread to handle each such request; the request is handled synchronously by the thread, but appears asynchronous to the rest of the kernel.
2. They may also be used to handle interrupts.
Advantage: they are inexpensive to create and use. The only resources they need are a kernel stack, an area to save the register context when not running, and a data structure to hold scheduling and synchronization information. Context switching between kernel threads is quick, since the memory mappings do not have to be flushed.
System processes such as the pagedaemon are equivalent to kernel threads. Daemon processes such as nfsd (the Network File System server process) are started at the user level but, once started, execute entirely in the kernel; their user context is not required once they enter kernel mode.
-
Lightweight Processes
A lightweight process (LWP) is a kernel-supported user thread; the system must support kernel threads before it can support LWPs. Every process has one or more LWPs, each supported by a separate kernel thread.
LWPs are independently scheduled, share the address space and other resources of the process, and can make system calls and block for I/O or resources. On a multiprocessor system they give the benefits of true parallelism, since each LWP can be dispatched to run on a different processor; even on a uniprocessor, resource and I/O waits block individual LWPs rather than the entire process.
Besides the kernel stack and register context, an LWP must also maintain some user state, chiefly the user register context, which must be saved when the LWP is preempted. User code is fully preemptible, and all LWPs in a process share a common address space, so if any data can be accessed concurrently by multiple LWPs, such access must be synchronized. The kernel provides facilities to lock shared variables and to block an LWP if it tries to access locked data: mutual exclusion (mutex) locks, semaphores, and condition variables.
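A minimal sketch of the blocking primitives just listed, using the POSIX mutex and condition variable API; producer() and wait_for_value() are hypothetical names for this example. The waiting thread blocks on the condition variable rather than spinning:

```c
#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
static int ready = 0, value = 0;

static void *producer(void *arg) {
    (void)arg;
    pthread_mutex_lock(&m);
    value = 42;
    ready = 1;
    pthread_cond_signal(&cv);        /* wake a waiter blocked on cv */
    pthread_mutex_unlock(&m);
    return NULL;
}

int wait_for_value(void) {
    pthread_t p;
    pthread_create(&p, NULL, producer, NULL);
    pthread_mutex_lock(&m);
    while (!ready)                   /* block, do not busy-wait */
        pthread_cond_wait(&cv, &m);  /* atomically releases m while asleep */
    pthread_mutex_unlock(&m);
    pthread_join(p, NULL);
    return value;
}
```

The while loop re-checks the predicate after each wakeup, the standard discipline for condition variables.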
-
Limitations:
Creation, destruction, and synchronization of LWPs require system calls, which are expensive operations involving two mode switches: one from user to kernel mode on invocation, and another back to user mode on completion. On each mode switch, the LWP crosses a protection boundary: the kernel must copy the system call parameters from user to kernel space and validate them to protect against malicious or buggy processes, and on return from the system call it must copy data back to user space.
When LWPs frequently access shared data, the synchronization overhead can nullify any performance benefits. On multiprocessor systems, locks can be kept at the user level: if a thread wants a resource that is currently unavailable, it can execute a busy-wait without kernel involvement. Busy-waiting is reasonable for resources held only briefly; otherwise the thread should block. Blocking, however, requires kernel involvement and is expensive.
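A user-level busy-wait lock can be sketched with C11 atomics; acquiring and releasing it never enters the kernel. The spinlock_t type and function names are made up for illustration:

```c
#include <stdatomic.h>

/* A minimal user-level spin lock: no system calls on any path. */
typedef struct { atomic_flag held; } spinlock_t;

void spin_init(spinlock_t *l)   { atomic_flag_clear(&l->held); }
void spin_lock(spinlock_t *l)   { while (atomic_flag_test_and_set(&l->held)) ; /* busy-wait */ }
void spin_unlock(spinlock_t *l) { atomic_flag_clear(&l->held); }

/* Demonstrate the semantics: while the lock is held, a second
   test_and_set observes it as set (and so would spin). */
int spin_probe(void) {
    spinlock_t l;
    spin_init(&l);
    spin_lock(&l);
    int busy = atomic_flag_test_and_set(&l.held);   /* 1: lock is held */
    spin_unlock(&l);
    return busy;
}
```

This is exactly the trade-off described above: cheap when the hold time is short, wasteful when the holder is preempted.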
-
Each LWP consumes significant kernel resources, including physical memory for a kernel stack, so the system cannot support a large number of LWPs while remaining general enough to support most reasonable applications. LWPs are therefore not suitable for applications that use a large number of threads, frequently create and destroy them, or frequently transfer control from one thread to another.
LWPs must be scheduled by the kernel, which raises a fairness issue: a user can monopolize the processor by creating a large number of LWPs. The kernel provides mechanisms for creating, synchronizing, and managing LWPs; it is the responsibility of the programmer to use them judiciously.
-
User threads
User threads are implemented entirely at the user level, without the kernel knowing anything about them, through library packages such as Mach's C-threads and POSIX pthreads. The library provides all the functions for creating, synchronizing, scheduling, and managing threads, with no special assistance from the kernel. Since interthread interactions do not involve the kernel, they are extremely fast.
Combining user threads with lightweight processes yields a very powerful programming environment. The kernel recognizes, schedules, and manages LWPs, while the user-level library multiplexes user threads on top of the LWPs and provides facilities for interthread scheduling, context switching, and synchronization without involving the kernel. The library acts as a miniature kernel for the threads it controls.
This is possible because the user-level context of a thread can be saved and restored without kernel intervention. Each user thread has its own user stack, an area to save its user-level register context, and other state information such as signal masks. The library schedules and switches context between user threads by saving the current thread's stack and registers, then loading those of the newly scheduled one.
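Saving and restoring user-level context without kernel scheduling can be sketched with the POSIX ucontext API (obsolescent in modern POSIX, but still widely available on Linux). run_coroutine() is a hypothetical driver that hand-switches between the main context and one "user thread", recording the order of execution:

```c
#include <ucontext.h>

/* Two contexts multiplexed by swapcontext(); the kernel never schedules
   the "thread" -- the library-style code below does. */
static ucontext_t main_ctx, thr_ctx;
static char thr_stack[16384];
static int trace[3], n;

static void thread_fn(void) {
    trace[n++] = 2;                      /* runs after the first switch */
    swapcontext(&thr_ctx, &main_ctx);    /* save our context, yield back */
}

int run_coroutine(void) {
    n = 0;
    trace[n++] = 1;
    getcontext(&thr_ctx);
    thr_ctx.uc_stack.ss_sp = thr_stack;  /* the thread's own user stack */
    thr_ctx.uc_stack.ss_size = sizeof thr_stack;
    thr_ctx.uc_link = &main_ctx;
    makecontext(&thr_ctx, thread_fn, 0);
    swapcontext(&main_ctx, &thr_ctx);    /* save main, run the thread */
    trace[n++] = 3;                      /* resumed after the thread yields */
    return trace[0] * 100 + trace[1] * 10 + trace[2];
}
```

A real threads library does the same save/restore with hand-written assembly for speed, but the principle is identical: no kernel involvement in the switch.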
-
User thread implementations
-
Process switching remains the kernel's responsibility, because it alone has the privilege to modify the memory management registers.
User threads are not truly schedulable entities, and the kernel has no knowledge of them. It schedules the underlying process or LWP, which in turn uses library functions to schedule its threads. When the process or LWP is preempted, so are its threads. If a user thread makes a blocking system call, it blocks the underlying LWP; if the process has only one LWP (or if the user threads are implemented on a single-threaded system), all its threads are blocked.
The library provides synchronization objects to protect shared data structures. Each object comprises some type of lock variable (such as a semaphore) and a queue of threads blocked on it. Threads must acquire the lock before accessing the data structure; if the object is already locked, the library blocks the thread by linking it onto the blocked-threads queue and transferring control to another thread.
Modern UNIX systems offer asynchronous I/O mechanisms, which allow processes to perform I/O without blocking. In SVR4, a process may apply the I_SETSIG ioctl operation to any STREAMS device; a subsequent read or write to the stream simply queues the operation and returns without blocking, and when the I/O completes, the process is informed via a SIGPOLL signal.
-
Asynchronous I/O
Asynchronous I/O is a very useful feature: it allows a process to perform other tasks while waiting for I/O. It leads, however, to a complex programming model. It is preferable to restrict the asynchrony to the operating system level and give applications a synchronous programming environment.
A threads library achieves this by providing a synchronous interface that uses the asynchronous mechanisms internally. Each request is synchronous with respect to the calling thread, which blocks until the I/O completes. The process, however, continues to make progress, since the library invokes the asynchronous operation and schedules another user thread to run in the meantime. When the I/O completes, the library reschedules the blocked thread.
-
Benefits of user threads
User threads offer a more natural way of programming many applications (such as windowing systems), and they preserve the synchronous programming paradigm by hiding the complexities of asynchronous operations in the threads library. This makes them useful even in systems lacking any kernel support for threads, and several threads libraries exist, each optimized for a different class of applications.
Their greatest advantage is performance. User threads are extremely lightweight and consume no kernel resources except when bound to an LWP. The performance gains come from operating at user level without using system calls, which avoids the overhead of trap processing and of moving parameters and data across protection boundaries.
The critical thread size: the amount of work a thread must do to be useful as a separate entity, depends on the overhead of creating and using a thread. For user threads this is a few hundred instructions, and may be reduced to under a hundred with compiler support. User threads thus need much less time for creation, destruction, and synchronization.
Their limitations stem from the total separation of information between the kernel and the threads library. The kernel does not know about user threads, so it cannot use its protection mechanisms on their behalf: a process has its own address space, which the kernel protects from unauthorized access by other processes, but threads within a process enjoy no such protection from one another.
-
Split scheduling model
The problem: the two schedulers do not know what the other is doing. The threads library schedules user threads, while the kernel schedules the underlying processes or LWPs. For example, the kernel may preempt an LWP whose user thread is holding a spin lock; if another user thread on a different LWP tries to acquire this lock, it will busy-wait until the holder of the lock runs again. Likewise, the kernel does not know the relative priorities of user threads, and may preempt an LWP running a high-priority user thread in order to schedule an LWP running a lower-priority one.
User-level synchronization mechanisms may also behave incorrectly. Applications are written on the assumption that all runnable threads are eventually scheduled. This is true when each thread is bound to a separate LWP, but may not be valid when the user threads are multiplexed onto a small number of LWPs: since an LWP may block in the kernel when its user thread makes a blocking system call, a process may run out of LWPs even when there are runnable threads and available processors. The availability of an asynchronous I/O mechanism may help to mitigate this problem.
Finally, without explicit kernel support, user threads may improve concurrency but do not increase parallelism. Even on a multiprocessor, user threads sharing a single LWP cannot execute in parallel.
-
Summary
Kernel threads are primitive objects not visible to applications.
Lightweight processes are user-visible threads that are recognized by the kernel and are based on kernel threads.
User threads are higher-level objects not visible to the kernel. They use LWPs if the system supports them, or they may be implemented in a standard UNIX process without special kernel support.
Both LWPs and user threads have major drawbacks that limit their usefulness.
-
User-Level Threads Libraries
Two important issues arise: what kind of programming interface the package presents to the user, and how it can be implemented using the primitives provided by the operating system.
There are several different threads packages: Chores, Topaz, and Mach's C threads, as well as the IEEE POSIX standard, pthreads. Modern UNIX versions support the pthreads interface.
-
Programming Interface
The interface provides a large set of operations:
creating and terminating threads
suspending and resuming threads
assigning priorities to individual threads
thread scheduling and context switching
synchronizing activities (semaphores and mutual exclusion locks)
sending messages from one thread to another
A threads package minimizes kernel involvement, and hence overhead. The kernel has no explicit knowledge of user threads, though the threads library may use system calls to implement some of its functionality. Thread priority is unrelated to the kernel scheduling priority, which is assigned to the underlying process or LWP; it is a process-relative priority, used by the threads scheduler to select a thread to run within the process.
-
Implementing Threads Libraries
The implementation depends on the facilities for multithreading provided by the kernel. Traditional UNIX kernels have no special support for threads, so the threads library acts as a miniature kernel, maintaining all the state information for each thread and handling all thread operations at the user level. This effectively serializes all processing; concurrency is provided by using asynchronous I/O.
On modern systems, where the kernel supports multithreaded processes through LWPs, the library has three choices:
1. Bind each thread to a different LWP. This is easier to implement, but uses more kernel resources and offers little added value; it requires kernel involvement in all synchronization and thread scheduling operations.
2. Multiplex user threads on a (smaller) set of LWPs. This is more efficient, as it consumes fewer kernel resources. It works well when all threads in a process are roughly equivalent, but provides no easy way of guaranteeing resources to a particular thread.
3. Allow a mixture of bound and unbound threads in the same process. This allows the application to fully exploit the concurrency and parallelism of the system. It also allows preferential handling of a bound thread, by increasing the scheduling priority of its underlying LWP or by giving its LWP exclusive ownership of a processor.
-
The threads library contains a scheduling algorithm that selects which user thread to run. It maintains per-thread state and priority, which have no relation to the state or priority of the underlying LWPs.
In the figure, six user threads (u1-u6) are multiplexed onto two LWPs; the library schedules one thread to run on each LWP.
u5 and u6 are in the running state, even though the underlying LWPs may be blocked in the middle of a system call, or preempted and waiting to be scheduled.
u1 and u2 are in the blocked state: a thread blocks when it tries to acquire a synchronization object locked by another thread. When the object is released, the library unblocks the thread and puts it on the scheduler queue.
u3 and u4 are in the runnable state, waiting to be scheduled. The scheduler selects a thread from this queue based on priority and LWP affiliation.
This closely parallels the kernel's resource-wait and scheduling algorithms: the library acts as a miniature kernel for the threads it manages.
-
User thread states
-
User-Level Threads Libraries
A user-level threads library should provide a set of operations, such as:
creating and terminating threads
suspending and resuming threads
assigning priorities to individual threads
thread scheduling and context switching
synchronizing activities through facilities such as semaphores and
mutual exclusion locks
sending messages from one thread to another
-
Thread priority is unrelated to kernel scheduling priority. The threads library contains a scheduling algorithm that selects which user thread to run: the kernel is responsible for processor allocation, the threads library for scheduling.
-
User threads libraries have a choice of implementations:
Bind each thread to a different LWP. This is easier to implement, but uses more kernel resources and offers little added value. It requires kernel involvement in all synchronization and thread scheduling operations.
Multiplex user threads on a (smaller) set of LWPs. This is more efficient, as it consumes fewer kernel resources. This method works well when all threads in a process are roughly equivalent. It provides no easy way of guaranteeing resources to a particular thread.
Allow a mixture of bound and unbound threads in the same process. This allows the application to fully exploit the concurrency and parallelism of the system. It also allows preferential handling of a bound thread, by increasing the scheduling priority of its underlying LWP.
-
User Thread Implementation
Each user thread must maintain the following state information:
Thread ID
Saved register state
User stack
Signal mask
Priority
Thread-local storage
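Gathered into a C struct, the state listed above might look like the following hypothetical thread control block. The field types are illustrative only; a real library would use a machine-specific register save area rather than jmp_buf:

```c
#include <stdint.h>
#include <setjmp.h>

/* Hypothetical per-thread state block for a user-level threads library. */
typedef struct tcb {
    int      tid;        /* thread ID */
    jmp_buf  regs;       /* saved register state (setjmp-style stand-in) */
    char    *stack;      /* base of the thread's user stack */
    uint32_t sigmask;    /* signal mask */
    int      priority;   /* process-relative priority */
    void    *tls;        /* thread-local storage */
} tcb_t;

/* Build and read back a TCB, to show the fields in use. */
int tcb_demo(void) {
    tcb_t t = { .tid = 7, .priority = 3 };
    return t.tid * 10 + t.priority;
}
```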
-
Mach C-threads library
The pthreads Library
-
PROCESS SCHEDULING
-
Introduction
Clock interrupt handling
Scheduler goals
Traditional UNIX scheduling
Processor affinity on AIX
-
Introduction
Like memory and terminals, the CPU is a shared resource for which processes contend. The scheduler is the component of the OS that determines which process to run at any given time, and for how long.
AIX (UNIX) is essentially a time-sharing system: it allows several processes to run concurrently (on a uniprocessor machine this is an illusion).
Two aspects of the scheduler:
1. Scheduling policy
2. Implementation (data structures and algorithms)
A context switch is expensive; each process's context is held in its Process Control Block (PCB).
-
Clock Interrupt Handling
Every machine has a hardware clock, which interrupts the system at fixed intervals. The interval between successive clock interrupts is called a CPU tick; UNIX typically sets the tick at 10 milliseconds.
The clock interrupt handler runs in response to the hardware clock interrupt. It performs the following tasks:
Rearms the hardware clock if necessary
Updates CPU usage statistics
Performs priority recomputation and time-slice expiration handling
Sends a SIGXCPU signal to the current process if it has exceeded its CPU usage quota
Updates the time-of-day clock and other related clocks
Handles callouts
Wakes up the swapper and pagedaemon when appropriate
Handles alarms
Note: some of these tasks do not need to be performed on every tick.
-
Callouts
A callout records a function that the kernel must invoke at a later time. To register a callout:
int to_ID = timeout (void (*fn)(), caddr_t arg, long delta);
fn() is the kernel function to invoke, arg is an argument to pass to fn(), and delta is the interval in ticks. To cancel a callout:
void untimeout (int to_ID);
On every tick, the clock handler checks if any callouts are due.
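A toy version of this mechanism might look as follows. The names timeout_(), untimeout_(), and clock_tick() mimic the interface above but are made up for this sketch; it uses a fixed-size table with a linear scan, whereas a real kernel keeps callouts ordered by expiry time:

```c
#include <stddef.h>

#define NCALLOUT 8
struct callout { void (*fn)(void *); void *arg; long ticks; int used; };
static struct callout tab[NCALLOUT];

/* Register fn(arg) to run after delta ticks; returns a to_ID, or -1. */
int timeout_(void (*fn)(void *), void *arg, long delta) {
    for (int i = 0; i < NCALLOUT; i++)
        if (!tab[i].used) {
            tab[i] = (struct callout){ fn, arg, delta, 1 };
            return i;
        }
    return -1;
}

void untimeout_(int to_ID) { tab[to_ID].used = 0; }

/* Called by the clock handler on every tick: fire callouts that are due. */
void clock_tick(void) {
    for (int i = 0; i < NCALLOUT; i++)
        if (tab[i].used && --tab[i].ticks == 0) {
            tab[i].used = 0;
            tab[i].fn(tab[i].arg);
        }
}

/* Demo: a callout registered for 3 ticks fires on the third tick only. */
static int fired;
static void fire(void *arg) { fired += *(int *)arg; }

int callout_demo(void) {
    static int amount = 5;
    fired = 0;
    timeout_(fire, &amount, 3);
    clock_tick(); clock_tick();       /* ticks 1 and 2: not yet due */
    int before = fired;
    clock_tick();                     /* tick 3: callout fires */
    return before * 100 + fired;
}
```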
-
Alarms
A process can request the kernel to send it a signal after a specific amount of time, much like an alarm clock. There are three types of alarms:
real-time: relates to actual elapsed time, and notifies the process via SIGALRM
profiling: measures the amount of time the process has been executing, and notifies the process via SIGPROF
virtual-time: monitors only the time spent by the process in user mode, and sends SIGVTALRM
-
Scheduler Goals
The scheduler must judiciously apportion CPU time to all processes in the system, and must ensure that the system delivers acceptable performance to each application.
Applications can be loosely categorized into the following classes, based on their scheduling requirements and performance expectations:
Interactive: applications such as shells, editors, and programs with graphical user interfaces
Batch: activities such as software builds and scientific computations, which do not require user interaction and are often submitted as background jobs
Real-time: a catchall class of applications that are often time-critical
The scheduler must try to balance the needs of each class. It must also ensure that kernel functions such as paging, interrupt handling, and process management can execute promptly when required. In a well-behaved system, all applications must continue to progress: no application should be able to prevent others from progressing, unless the user has explicitly permitted it. The choice of scheduling policy has a profound effect on the system's ability to meet the requirements of different types of applications.
-
Traditional UNIX Scheduling Traditional UNIX scheduling is priority-based. Each process has a scheduling priority that changes with time.
The scheduler always selects the highest-priority runnable process. It uses preemptive time-slicing to scheduleprocesses of equal priority, and dynamically varies process priorities based on their CPU usage patterns.
UNIX kernel is strictly nonpreemptable. (It solves many synchronization problems associated with multiple processesaccessing the same kernel data structures).
Process Priorities
Priority may be any integer value between 0 and 127. Numerically lower values correspond to higher priorities. Prioritiesbetween 0 and 49 are reserved for the kernel, while processes in user mode have priorities between 50 and 127. Theproc structure contains the following fields that contain priority-related information:
p_pri Current scheduling priority.
p_us rp r i User mode priority.
p_cpu Measure of recent CPU usage.
p_ni ce User-controllable nice factor.
When a process completes the system call and is about to return to user mode, its scheduling priority is reset to itscurrent user mode priority.
The user mode priority depends on two factors--the nice value and the recent CPU usage.
nice value is a number between 0 and 39 with a default of 20.Increasing this value decreases the priority.Backgroundprocesses are automatically given higher nice values. Only a superuser can decrease the nice value of a process.
Every second, the kernel invokes a routine called schedcpu() (scheduled by a callout) that reduces the p_cpu value of each process by a decay factor:
decay = (2 * load_average) / (2 * load_average + 1);
The schedcpu() routine also recomputes the user priorities of all processes using the formula
p_usrpri = PUSER + (p_cpu / 4) + (2 * p_nice);
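The per-second decay and recomputation step can be sketched in C. This is an illustration, not kernel source: the value of PUSER, the use of floating point for the decay, and the clamp to 127 are assumptions.

```c
#include <stdio.h>

#define PUSER 50  /* base user-mode priority (assumed value) */

/* One-second priority decay and recomputation, as in the slide's
   formulas.  Field names follow the proc structure; load_average is
   an assumed input. */
int recompute_usrpri(int *p_cpu, int p_nice, double load_average) {
    double decay = (2.0 * load_average) / (2.0 * load_average + 1.0);
    *p_cpu = (int)(decay * *p_cpu);      /* forget some old CPU usage */
    int p_usrpri = PUSER + (*p_cpu / 4) + (2 * p_nice);
    if (p_usrpri > 127) p_usrpri = 127;  /* clamp to the lowest priority */
    return p_usrpri;
}
```

With a load average of 1, decay is 2/3: a process with p_cpu = 80 and the default nice of 20 ends up with p_cpu = 53 and p_usrpri = 50 + 13 + 40 = 103.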
Scheduler Implementation
The scheduler maintains an array called qs of 32 run queues. Each queue corresponds to four adjacent priorities.
A global variable whichqs contains a bitmask with one bit for each queue.
The swtch() routine, which performs the context switch, examines whichqs to find the index of the first set bit; that index identifies the run queue holding the highest-priority runnable process.
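The qs/whichqs scheme can be sketched as follows. This is a minimal user-space illustration of the bitmask trick, not kernel code; only the enqueue bookkeeping and the first-set-bit scan are shown.

```c
#include <strings.h>   /* ffs() */

/* 128 priorities fold into 32 run queues (4 priorities per queue);
   a 32-bit mask records which queues are non-empty. */
static unsigned int whichqs;          /* bit i set => qs[i] non-empty */

void setrunqueue(int priority) {      /* note a process at this priority */
    whichqs |= 1u << (priority >> 2); /* queue index = priority / 4 */
}

/* Return the index of the highest-priority non-empty queue, as swtch()
   does by finding the first set bit; -1 means nothing is runnable. */
int pick_queue(void) {
    return ffs((int)whichqs) - 1;     /* ffs() is 1-based, 0 if no bit set */
}
```

Because lower numbers mean higher priority, the lowest set bit is always the queue to run next, and hardware find-first-set instructions make the scan constant-time.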
Run Queue Manipulation
Every 100 milliseconds, the kernel invokes (through a callout) a routine called
roundrobin() to schedule the next process from the same queue.
The schedcpu () routine recomputes the priority of each process once every
second.
There are three situations where a context switch is indicated:
The current process blocks on a resource or exits. This is a voluntary context switch.
The priority recomputation procedure results in the priority of another process becoming greater
than that of the current one.
The current process, or an interrupt handler, wakes up a higher-priority process.
Traditional scheduling algorithm - Analysis
The traditional scheduling algorithm is simple and effective. It is adequate for a general time-sharing system with a mixture of interactive and batch jobs.
Dynamic computation of the priorities prevents starvation of any process.
The approach favors I/O-bound jobs that require small infrequent bursts of
CPU cycles.
The scheduler has several limitations that make it unsuitable for use in a wide variety of commercial applications:
It does not scale well: if the number of processes is very large, it is inefficient to recompute all priorities every second.
There is no way to guarantee a portion of CPU resources to a specific process or group of
processes.
There are no guarantees of response time to applications with real-time characteristics.
Applications have little control over their priorities. The nice value mechanism is simplistic and
inadequate.
Since the kernel is nonpreemptive, higher-priority processes may have to wait a significant
amount of time even after being made runnable.
Processor affinity on AIX
To bind a process to available CPUs on AIX, use either the bindprocessor command or the bindprocessor API.
The bindprocessor command binds or unbinds the kernel threads of a process to a processor.
The syntax for the bindprocessor command is:
bindprocessor Process [ ProcessorNum ] | -q | -u Process
bindprocessor subroutine
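The bindprocessor subroutine exists only on AIX, so it cannot be demonstrated portably. As a sketch of the same idea, Linux offers an analogous facility, sched_setaffinity(); the function name bind_to_cpu and the choice of CPU 0 below are arbitrary illustrations, not AIX API.

```c
#define _GNU_SOURCE
#include <sched.h>

/* Pin the calling process to a single CPU; returns 0 on success.
   This mirrors in spirit what `bindprocessor <pid> <cpu>` does on
   AIX, using the Linux affinity interface instead. */
int bind_to_cpu(int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    return sched_setaffinity(0, sizeof(set), &set);  /* 0 = this process */
}
```

After the call, the scheduler will only dispatch the process on the named CPU, which is the essence of processor affinity on any of these systems.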
User Threads in AIX
Understanding Threads and Process
Process Properties
A process has traditional process attributes, such as:
Process ID, user ID, group ID
Environment
Working directory
A process also provides a common address space and
common system resources, as follows:
File descriptors
Signal actions
Shared libraries
Inter-process communication tools (such as message queues, pipes, semaphores, or shared memory)
Thread Properties
A thread is the schedulable entity. Its properties include the following:
Stack
Scheduling properties (such as policy or priority)
Set of pending and blocked signals
Some thread-specific data (such as errno)
All threads share the same address space.
When a process is created, one thread is automatically created. This thread, called the initial thread, is not visible to the programmer.
Threads are well-suited entities for modular programming.
User threads are mapped to kernel threads by the threads
library.
There are three different ways to map user threads to kernel threads:
M:1 model
1:1 model
M:N model
Thread-Safe and Threaded Libraries in AIX
The following pairs of objects are manipulated by the
threads library:
Threads and thread-attributes objects
Mutexes and mutex-attributes objects
Condition variables and condition-attributes objects
Read-write locks
The thread safe libraries in AIX are:
libbsd.a, libc.a, libm.a, libsvid.a, libtli.a, libxti.a, libnetsvc.a
Creating Threads
No parent-child relation exists between threads
When creating a thread, an entry-point routine and an
argument must be specified
A thread has attributes, which specify the characteristics of
the thread.
A thread is created by calling the pthread_create
subroutine
The pthread_create subroutine returns the thread ID of the new
thread
The current thread's ID is returned by the pthread_self subroutine
A thread ID is an opaque object; its type is pthread_t (an integer in
AIX)
When calling the pthread_create subroutine, you may specify a
thread attributes object. If you specify a NULL pointer, the created
thread will have the default attributes.
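Thread creation as described above can be sketched as follows. The function names worker and create_demo are illustrative, not part of any API; the NULL attribute pointer requests the default attributes.

```c
#include <pthread.h>

/* Entry-point routine: a thread starts in a function that takes one
   void * argument and returns a void * result. */
static void *worker(void *arg) {
    *(int *)arg += 1;               /* do some work on the argument */
    return arg;
}

/* Create one thread with default attributes (NULL attribute pointer),
   then wait for it so that `value` stays valid while the thread runs. */
int create_demo(void) {
    pthread_t tid;                  /* opaque thread ID (an integer on AIX) */
    int value = 41;
    if (pthread_create(&tid, NULL, worker, &value) != 0)
        return -1;                  /* creation failed */
    pthread_join(tid, NULL);
    return value;                   /* 42 after the thread has run */
}
```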
Terminating Threads
A thread automatically terminates when it returns from its entry-point routine.
A thread can exit at any time by calling the pthread_exit subroutine.
The cancellation of a thread is requested by calling the pthread_cancel subroutine.
Cleanup Handlers
Using Mutexes
A mutex is a mutual exclusion lock; only one thread can hold the lock at a time.
Mutex attributes specify the characteristics of the mutex.
Like threads, mutexes are created with the help of an attributes object, which can be accessed through a variable of type pthread_mutexattr_t.
Creating and destroying mutexes
pthread_mutexattr_init
pthread_mutexattr_destroy
pthread_mutex_init
pthread_mutex_destroy
Types of Mutexes
The type of a mutex determines how the mutex behaves when it is operated on. The types are:
PTHREAD_MUTEX_DEFAULT or PTHREAD_MUTEX_NORMAL
PTHREAD_MUTEX_ERRORCHECK
PTHREAD_MUTEX_RECURSIVE
Joining Threads
Joining a thread means waiting for it to terminate.
Using the pthread_join subroutine allows a thread to wait for another thread to terminate.
A thread cannot join itself, because a deadlock would occur; this case is detected by the library.
The pthread_join subroutine also allows a thread to return
information to another thread.
Any call to the pthread_join subroutine occurring before the target thread's termination blocks the calling thread.
What happens when two threads try to join each other?
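Returning information through pthread_join can be sketched as follows; the integer-in-pointer encoding via intptr_t is just one common illustration, and the function names are made up for the example.

```c
#include <pthread.h>
#include <stdint.h>

/* The value handed to pthread_exit() (or returned from the entry
   point) becomes the status that pthread_join() reports. */
static void *exits_with_seven(void *arg) {
    (void)arg;
    pthread_exit((void *)(intptr_t)7);   /* same effect as `return` here */
}

int join_status_demo(void) {
    pthread_t tid;
    void *status;
    if (pthread_create(&tid, NULL, exits_with_seven, NULL) != 0)
        return -1;
    pthread_join(tid, &status);          /* blocks until the target ends */
    return (int)(intptr_t)status;
}
```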
Scheduling Threads
Threads can be scheduled
The threads library allows the programmer to control the execution scheduling of the threads in the following ways:
By setting scheduling attributes when creating a thread
By dynamically changing the scheduling attributes of a created thread
By defining the effect of a mutex on the thread's scheduling when creating a mutex (known as synchronization scheduling)
By dynamically changing the scheduling of a thread during synchronization operations (also known as synchronization scheduling)
Thread specific data
A thread-specific data key is an opaque object of the pthread_key_t data type.
Thread-specific data are void pointers, which allows referencing any kind of data.
Thread-specific data keys must be created before being used.
Their values can be automatically destroyed when the corresponding threads terminate.
Routines
pthread_key_create
pthread_key_delete
pthread_getspecific
pthread_setspecific
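The four routines above fit together as sketched below: one shared key, but each thread's value under that key is private to it. The names tsd_demo and uses_tsd are illustrative only.

```c
#include <pthread.h>

/* One key shared by all threads; each thread stores its own value
   under it.  A destructor (NULL here) would run at thread exit for
   any non-NULL value. */
static pthread_key_t key;

static void *uses_tsd(void *arg) {
    pthread_setspecific(key, arg);       /* this thread's private slot */
    return pthread_getspecific(key);     /* reads back the same pointer */
}

int tsd_demo(void) {
    int mine = 5, theirs = 6;
    pthread_t tid;
    void *result;
    pthread_key_create(&key, NULL);      /* NULL: no destructor */
    pthread_setspecific(key, &mine);     /* main thread's value */
    pthread_create(&tid, NULL, uses_tsd, &theirs);
    pthread_join(tid, &result);
    int ok = (result == (void *)&theirs) &&       /* other thread saw its own value */
             (pthread_getspecific(key) == (void *)&mine);  /* ours untouched */
    pthread_key_delete(key);
    return ok;
}
```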
Signal management
Signal management in multi-threaded processes is shared
by the process and thread levels, and consists of the
following:
Per-process signal handlers
Per-thread signal masks
Single delivery of each signal
Signal handlers - maintained at process level.
Signal masks - maintained at thread level
Each thread can have its own set of signals that will be
blocked from delivery.
The sigthreadmask subroutine must be used to get and set the calling thread's signal mask.
Signal Generation
The pthread_kill subroutine sends a signal to a thread.
The kill subroutine sends a signal to a process.
The raise subroutine sends a signal to the calling thread.
The alarm subroutine requests that a signal be sent to the process at a later time.
Signal handlers are called within the thread to which the
signal is delivered
No pthread routines can be called from a signal handler.
Calling a pthread routine from a signal handler can lead to
an application deadlock.
A signal is delivered to a thread, unless its action is set to ignore.
Writing Reentrant and Thread-Safe Code
In multi-threaded programs, the same functions and thesame resources may be accessed concurrently by severalflows of control.
To protect resource integrity, code written for multi-threaded programs must be reentrant and thread-safe.
A function can be either reentrant, thread-safe, both, or
neither.
A reentrant function does not hold static data over
successive calls, nor does it return a pointer to static data.
A thread-safe function protects shared resources from
concurrent access by locks.
How to make a function Reentrant?
How to make a function Thread-Safe?
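The two questions above can be answered by example. All names below (upper_first_bad, upper_first_r, next_id) are invented for illustration; the contrast is the point: the reentrant version takes a caller-supplied buffer instead of returning static data, and the thread-safe version guards its shared state with a lock.

```c
#include <pthread.h>
#include <string.h>

/* NOT reentrant: returns a pointer to static data, so successive or
   concurrent calls overwrite each other's result. */
char *upper_first_bad(const char *s) {
    static char buf[64];
    strncpy(buf, s, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';
    if (buf[0] >= 'a' && buf[0] <= 'z') buf[0] -= 'a' - 'A';
    return buf;
}

/* Reentrant: the caller supplies the buffer, so there is no shared
   state at all (the usual `_r` naming convention). */
char *upper_first_r(const char *s, char *buf, size_t len) {
    strncpy(buf, s, len - 1);
    buf[len - 1] = '\0';
    if (buf[0] >= 'a' && buf[0] <= 'z') buf[0] -= 'a' - 'A';
    return buf;
}

/* Thread-safe (but not reentrant): shared state guarded by a mutex. */
static long counter;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;
long next_id(void) {
    pthread_mutex_lock(&counter_lock);
    long id = ++counter;                 /* protected increment */
    pthread_mutex_unlock(&counter_lock);
    return id;
}
```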
Developing Multi-Threaded Programs
All subroutine prototypes, macros, and other definitions for
using the threads library are in the pthread.h header file,
which is located in the /usr/include directory.
The following global symbols are defined in the pthread.h file:
_POSIX_REENTRANT_FUNCTIONS
_POSIX_THREADS
Invoking the Compiler
When compiling a multi-threaded program, invoke the C
compiler using one of the following commands:
xlc_r - invokes the compiler with a default language level of ansi
cc_r - invokes the compiler with a default language level of extended
AIX supports up to 32768 threads in a single process
Debugging a Multi-Threaded Program
Application programmers can use the dbx command to
perform debugging.
dbx subcommands are available for displaying thread-related objects, including attribute, condition, mutex, and thread.
Kernel programmers can use the kernel debug program to perform debugging on kernel extensions and device drivers.
Several subcommands support multiple kernel threads and processors, including:
The cpu subcommand - changes the current processor
The ppd subcommand - displays per-processor data structures
The thread subcommand - displays thread table entries
The uthread subcommand - displays the uthread structure of a thread
Core File Requirements of a Multi-Threaded Program
By default, processes do not generate a full core file. If an
application must debug data in shared memory regions,
particularly thread stacks, it is necessary to generate a full
core dump.
To generate full core file information, run the following
command as root user:
chdev -l sys0 -a fullcore=true
Benefits of Threads
Improved performance can be obtained on multiprocessor systems
using threads.
Inter-thread communication is far more efficient and easier to use than inter-process communication.
Creating threads and controlling their execution requires fewer system resources than managing processes.
On a multiprocessor system, multiple threads can concurrently run
on multiple CPUs.
References:
http://publib.boulder.ibm.com/infocenter/aix/v6r1/index.jsp?topic=/com.ibm.aix.genprogc/doc/genprogc/understanding_threads.htm
8/3/2019 Unit I_chapter 3 Adv OS
83/135
BCSDCS 707 / BITDIT 707 Adv OS and OS Industry Trends
Unit II : IPC, Filesystems & Memory management
IPC
File Systems
Virtual Memory
File Systems in AIX
Inter Process Communication (IPC)
Universal IPC facilities
System V IPC
Messages
Ports
Message Passing
Universal IPC facilities
UNIX provides three facilities for inter process communications
Signals
Pipes
Process tracing
These are the only IPC mechanisms common to all UNIX variants.
Signals
Notify a process of asynchronous events.
Modern UNIX variants recognize 31 or more different signals, each with a predefined meaning. A process may send signals to another process (or processes) using the kill or killpg system calls. The kernel also generates signals in response to various events (for example, SIGINT for the Ctrl+C event). Each signal has a default action, which can be overridden.
Uses:
o Can be used for synchronization.
o Many applications developed resource-sharing and locking protocols based on signals.
Limitations:
Expensive: the sender must make a system call, and the kernel must interrupt the receiver, manipulate its stack, and resume the interrupted code.
Limited bandwidth: only 31 different signals exist, so signals can convey only limited information.
Useful for event notification, not for complicated interactions.
Pipes
A pipe is a unidirectional, FIFO, unstructured data stream of fixed maximum size. Writers add data to the end of the pipe; readers retrieve data from the front of the pipe.
Once read, the data is removed from the pipe. The pipe system call creates a pipe and returns two file descriptors (one for reading and one for writing). Each pipe can have several readers and writers.
A process can be a reader, a writer, or both. I/O on a pipe is much like I/O on a file: reading and writing are achieved through read and write system calls on the pipe's descriptors.
Limitations:
Pipes cannot be used to broadcast data (since reading removes the data from the pipe).
Data in a pipe is a byte stream: if the writer sends several objects of different lengths, the reader cannot determine their boundaries.
If there are multiple readers, a writer cannot direct data to a specific reader, and vice versa.
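The pipe system call and its two descriptors can be sketched as follows. Keeping both ends in one process is only a demonstration (the name pipe_demo is invented); normally the descriptors are shared with a child created by fork().

```c
#include <unistd.h>
#include <string.h>

/* Create a pipe, write at one end, read at the other. */
int pipe_demo(char *out, size_t len) {
    int fd[2];                     /* fd[0] = read end, fd[1] = write end */
    if (pipe(fd) != 0)
        return -1;
    write(fd[1], "hello", 5);      /* bytes queue in the kernel buffer */
    ssize_t n = read(fd[0], out, len);  /* reading consumes the data */
    close(fd[0]);
    close(fd[1]);
    return (int)n;                 /* number of bytes transferred */
}
```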
Process tracing
The ptrace system call is used by debuggers such as sdb and dbx. With it, a process can control the execution of a child process:
ptrace (cmd, pid, addr, data);
The cmd argument allows the following operations:
Read or write a word in the child's address space, u-area, or general-purpose registers.
Intercept specific signals.
Set or delete watchpoints in the child's address space.
Resume the execution of a stopped child.
Single-step the child.
Terminate the child.
The kernel sets the child's traced flag (in its proc structure), which affects how the child responds to signals.
Limitations: only a direct child can be controlled; tracing is inefficient, requiring several context switches; and tracing setuid programs raises security problems.
System V IPC
System V UNIX provides three IPC mechanisms:
- Semaphores
- Message Queues
- Shared Memory
Each instance of an IPC resource has the following attributes:
- Key: identifies the instance of the resource
- Creator: UID and GID
- Owner: UID and GID
- Permissions
A process acquires a resource using shmget, semget, or msgget, and controls the acquired resource using shmctl, semctl, or msgctl.
Semaphores are integer-valued objects that support two atomic operations, P() and V().
- P() decrements the value and blocks if the result is less than zero.
- V() increments the value; if the result is greater than or equal to zero, it wakes up a waiting process or thread.
These operations are atomic. Semaphores can cause deadlocks.
Message Queues
A message queue is a header pointing at a linked list of messages. A message consists of a 32-bit type value plus a data area.
The following functions are used:
msgqid = msgget(key, flag);
msgsnd(msgqid, msgp, count, flag);
count = msgrcv(msgqid, msgp, maxcnt, msgtype, flag);
Message queues are similar to pipes but more versatile, and they address the limitations of pipes: data is transmitted as discrete messages rather than as an unformatted byte stream. They are effective for small amounts of data, but each transfer requires two copy operations (user to kernel, then kernel to user), which results in poor performance for large transfers.
Shared memory
Shared memory is a portion of physical memory shared by multiple processes. Processes may attach this region at any suitable virtual address range in their address space.
Functions:
shmid = shmget(key, size, flag);
addr = shmat(shmid, shmaddr, shmflag);
shmdt(shmaddr);
Most modern UNIX variants also provide the mmap system call, which maps a file (or part of a file) into the address space of the caller.
File System
Filesystems
The User Interface to Files
The User Interface to Files
File Systems
Special Files
File System Framework
The Vnode/VFS Architecture
Implementation Overview
Network File System
The User Interface to Files
Allows users to organise, manipulate, and access different files.
Files and Directories
- links
- file organisation
- dirent structure
File attributes
- type, size, inode number, link count, device ID, user ID, group ID
- sticky flag
File descriptors
- file offset
- open()
File I/O
- read and write calls
File Locking
- advisory and mandatory locks
File Systems
Root file system
Mounting
File hierarchy
Logical Disks
- disk mirroring
- striping
Special Files
- Terminals and printers
Symbolic Links
- soft and hard links
Pipes and FIFOs
- implementation
File System Framework
Growth of FS
Need for sharing
The Vnode/VFS Architecture
Objectives
Objectives
Lessons from device I/O
- cdevsw structure
- Relationship between a base class and its subclass
- vnode abstraction
- vfs abstraction
Implementation Overview
Objectives
Vnode and open files
- file system dependent data structures
- file object fields
Vnode
- vnode structure
- vnode reference count
Vfs object
- vfs structure
- relationship between vnode and vfs
Network File System - Client/server model
- remote procedure calls
- user perspective
NFS export/mount
- design goals and objectives
- mounting NFS
- NFS components
NFS v2 operations and the mount protocol
- statelessness
File Systems in AIX
JFS/JFS2 File System Layout, Mounting, and Managing File Systems
1) JFS File System Layout
A file system is a set of files, directories, and other structures.
A JFS file system contains a boot block, a superblock, bitmaps, and one or more allocation groups.
JFS Boot Block
- The boot block occupies the first 4096 bytes of the file system, starting at byte offset 0 on the disk.
- The boot block is available to start the operating system.
JFS Superblock
- 4096 bytes in size, starting at byte offset 4096 on the disk.
- Maintains the file system size, number of data blocks, state of the file system, and allocation group sizes.
JFS Allocation Bitmaps
- The fragment allocation map records the allocation state of each fragment.
- The disk i-node allocation map records the status of each i-node.
JFS File System Layout (Contd)
JFS Fragments
- Many file systems have disk blocks or data blocks; blocks divide the disk into units of equal size to store the data in a file or directory's logical blocks.
- The disk block may be further divided into fixed-size allocation units called fragments.
- JFS provides a view of the file system as a contiguous series of fragments.
- JFS fragments are the basic allocation unit, and the disk is addressed at the fragment level.
JFS File System Layout (Contd)
JFS Allocation Groups
- The set of fragments making up the file system is divided into one or more fixed-size units of contiguous fragments; each unit is an allocation group.
- The first 4096 bytes of this area hold the boot block, and the second 4096 bytes hold the file system superblock.
- Disk i-nodes are 128 bytes in size and are identified by a unique disk i-node number, or i-number.
- The i-number maps a disk i-node to its location on the disk or to an i-node within its allocation group.
- Allocation groups allow the JFS resource allocation policies to use effective methods for achieving file system I/O performance.
JFS File System Layout (Contd)
JFS Disk i-Nodes
- Each file and directory has an i-node that contains access information such as file type, access permissions, owner's ID, and number of links to that file.
- These i-nodes also contain "addresses" for finding the location on the disk where the data for a logical block is stored.
- Each i-node has an array of numbered sections; each section contains an address for one of the file or directory's logical blocks.
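The i-node attributes listed above are exactly what stat exposes to programs (a hedged sketch; the file name inode_demo is arbitrary and error checking is omitted). The fields on an AIX JFS file system would come from the 128-byte disk i-node described in the text:

```c
#include <stdio.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    /* create a 3-byte file so the i-node has something to describe */
    int fd = open("inode_demo", O_WRONLY | O_CREAT | O_TRUNC, 0640);
    write(fd, "abc", 3);
    close(fd);

    struct stat st;
    stat("inode_demo", &st);       /* reads the i-node's access information */
    printf("inode=%lu size=%lld links=%lu\n",
           (unsigned long)st.st_ino, (long long)st.st_size,
           (unsigned long)st.st_nlink);

    unlink("inode_demo");          /* drop the last link; i-node is freed */
    return 0;
}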
Mounting
- Mounting makes file systems, files, directories, devices, and special files available for use.
- The mount command instructs the operating system to attach a file system at a specified directory.
- Write permission is needed for the mount point.
- A user with root authority can mount a file system arbitrarily by naming both the device and the directory on the command line.
- The /etc/filesystems file is used to define mounts that are to be automatic at system initialization.

Mount points
- A mount point is a directory or file at which a new file system, directory, or file is made accessible.
- To mount a file system or a directory, the mount point must be a directory; to mount a file, the mount point must be a file.

Mounting file systems, directories, and files
- There are two types of mounts: remote and local.
- Remote mounts are done on a remote system, with data transmitted over a telecommunication line (for example, NFS).
- Local mounts are mounts done on your local system.
- Mounts can be set to occur automatically during system initialization.
- Diskless workstations must have the ability to create and access device-special files on remote machines to have their /dev directories mounted from a server.
Managing file system
A file system is a complete directory structure, including a root directory and any subdirectories. File systems are confined to a single logical volume.
Some of the most important system management tasks concern file systems, specifically:
- Allocating space
- Creating file systems
- Monitoring space
- Backing up and creating snapshots
A number of system management commands help manage file systems (backup, chfs, df, fsck, mkfs, mount, restore, snapshot), for example displaying the available space on a file system (the df command) and comparing file systems on different machines.
Virtual Memory
Introduction
Demand Paging
Hardware requirements: IBM RS/6000, Intel 80x86
AIX Program Address Space Overview
Introduction
Introduction
Memory Management Unit (MMU)
Things to achieve:
- Run programs larger than physical memory.
- Run partially loaded programs, thus reducing program startup time.
- Allow more than one program to reside in memory at one time.
- Allow relocatable programs, which may be placed anywhere in memory.
- Write machine-independent code; there should be no a priori correspondence between the program and the physical memory configuration.
- Relieve programmers of the burden of allocating and managing memory resources.
- Allow sharing, for example of shared code.
These goals are realized through the use of virtual memory. The application is given the illusion that it has a large main memory at its disposal, although the computer may have a relatively small memory. The translation tables and other data structures used for memory management reduce the physical memory available to programs, and the usable memory is further reduced by fragmentation.
Demand Paging
Demand Paging
Functional Requirements
Address space management
Address translation
Physical memory management
Memory protection
Memory sharing
Monitoring system load
Other facilities
The Virtual Address Space
The Virtual Address Space
The address space of a process comprises all (virtual) memory locations that the program may reference or access. Demand-paged architectures divide this space into fixed-size pages. The pages of a program may hold several types of information:
- text
- initialized data
- uninitialized data
- modified data
- stack
- heap
- shared memory
- shared libraries
Initial Access to a Page
A process can start running a program with none of its pages in physical memory. As it accesses each nonresident page, it generates a page fault, which the kernel handles by allocating a free page and initializing it with the appropriate data. The method of initialization is different for the first access to a page and for subsequent accesses.
The swap area
The total size of all active programs is often much greater than the physical memory, which consequently holds only some of the pages of each process. If a process needs a page that is not resident, the kernel makes room for it by appropriating another page and discarding its old contents.
Translation Maps
The paging system may use four different types of translation maps to implement
virtual memory:
- Hardware address translations
- Address space map
- Physical memory map
- Backing store map
Hardware Requirements - The IBM RS/6000
The IBM RS/6000 is a reduced instruction set computer (RISC) machine that runs AIX, IBM's System V-based operating system. Its memory architecture has two interesting features: it uses a single, flat, system-wide address space, and it uses an inverted page table for address translation.
AIX Program Address space Overview
Tools are available to assist in allocating memory, mapping memory and files, and profiling application memory usage.

System Memory Architecture Introduction
The system employs a memory management scheme that uses software to extend the capabilities of the physical hardware. Because the address space does not correspond one-to-one with real memory, the address space (and the way the system makes it correspond to real memory) is called virtual memory.
The subsystems of the kernel and the hardware that cooperate to translate virtual addresses to physical addresses make up the memory management subsystem. The actions the kernel takes to ensure that processes share main memory fairly comprise the memory management policy.
The Physical Address Space of 64-bit Systems
The hardware provides a continuous range of virtual memory addresses, from 0x00000000000000000000 to 0xFFFFFFFFFFFFFFFFFFFF. Total addressable space is more than 1 trillion terabytes.
Memory access instructions generate an address of 64 bits: 36 bits to select a segment register and 28 bits to give an offset within the segment.
Segment Register Addressing
The system kernel loads some segment registers in the conventional way for all processes, implicitly providing the memory addressability needed by most processes.
These registers include two kernel segments, a shared-library segment, and an I/O device segment that are shared by all processes.
Some segment registers are shared by all processes, others by a subset of processes, and yet others are accessible to only one process. Sharing is achieved by allowing two or more processes to load the same segment ID.

Paging Space
A page is a unit of virtual memory that holds 4K bytes of data and can be transferred between real and auxiliary storage.
To accommodate the large virtual memory space with a limited real memory space, the system uses real memory as a work space and keeps inactive data and programs on disk. The area of disk that contains this data is called the paging space.
Memory Management Policy
The real-to-virtual address translation and most other virtual memory facilities are provided to the system transparently by the Virtual Memory Manager (VMM).
To accomplish paging, the VMM uses page-stealing algorithms. The VMM uses a technique known as the clock algorithm to select pages to be replaced. This technique takes advantage of a referenced bit for each page as an indication of which pages have been recently used.

Memory Allocation
Version 3 of the operating system uses a delayed paging slot technique for storage allocated to applications. This means that when storage is allocated to an application with a subroutine such as malloc, no paging space is assigned to that storage until the storage is referenced.
Q & A
Virtual and Physical Addresses
Physical addresses are provided directly by the machine
Physical addresses