IT 344: Operating Systems
Winter 2010
Module 5: Processes and Threads
Chia-Chi Teng, [email protected], CTB 265G
fork() "returns twice"

Parent process and child process run identical copies of the same code after fork(); only the return value differs:

int main(int argc, char **argv)
{
    char *name = argv[0];
    int pid = fork();
    if (pid == 0) {
        printf("Child of %s is %d\n", name, pid);
        return 0;
    } else {
        printf("My child is %d\n", pid);
        return 0;
    }
}
testparent output
spinlock% gcc -o testparent testparent.c
spinlock% ./testparent
My child is 486
Child of testparent is 0
spinlock% ./testparent
Child of testparent is 0
My child is 571
Exec vs. fork

• So how do we start a new program, instead of just forking the old program?
  – the exec() system call!
  – int exec(char *prog, char **argv)
• exec()
  – stops the current process
  – loads program 'prog' into the address space
  – initializes the hardware context and args for the new program
  – places the PCB onto the ready queue
  – note: does not create a new process!
exec()

[Diagram: process address space (0x00000000–0xFFFFFFFF) – code (text segment, where foo() calls exec()), static data (data segment), heap (dynamically allocated memory), stack (dynamically allocated memory) – with PC and SP, shown alongside the PCB]

PCB contents:
• Process ID
• Pointer to parent
• List of children
• Process state
• Pointer to address space descriptor
• Program counter, stack pointer
• (all) register values
• uid (user id), gid (group id)
• euid (effective user id)
• Open file list
• Scheduling priority
• Accounting info
• Pointers for state queues
• Exit ("return") code value
UNIX shells

int main(int argc, char **argv)
{
    while (1) {
        char *cmd = get_next_command();
        int pid = fork();
        if (pid == 0) {
            /* manipulate STDIN/STDOUT/STDERR fd's */
            exec(cmd);
            panic("exec failed!");
        } else {
            wait(pid);
        }
    }
}
Input/output redirection

• $ ./myprog < input.txt > output.txt     # UNIX
  – each process has an open file table
  – by (universal) convention:
    • 0: stdin
    • 1: stdout
    • 2: stderr
  – a child process inherits the parent's open file table
• So the shell…
  – copies its current stdin/stdout open file entries
  – opens input.txt as stdin and output.txt as stdout
  – forks…
  – restores its original stdin/stdout
Create New Process

• Unix/Linux
  – fork-exec pair
  – vfork and other variations
  – POSIX extensions
• Windows
  – CreateProcess(commandline, flags, …)
© 2007 Gribble, Lazowska, Levy, Zahorjan
Administrivia

• Keep up with the reading
  – Finish chapter 4
  – I cannot possibly cover everything in the book
• Quiz 1 next Tuesday
  – Processes and threads
• HW
  – Review
  – Solution
  – Better late than never
What's in a process?

• A process consists of (at least):
  – an address space
  – the code for the running program
  – the data for the running program
  – an execution stack and stack pointer (SP)
    • traces the state of procedure calls made
  – the program counter (PC), indicating the next instruction
  – a set of general-purpose processor registers and their values
  – a set of OS resources
    • open files, network connections, sound channels, …
• That's a lot of concepts bundled together!
• Today: decompose…
  – an address space
  – threads of control
  – (other resources…)
Concurrency

• Imagine a web server, which might like to handle multiple requests concurrently
  – While waiting for the credit card server to approve a purchase for one client, it could be retrieving the data requested by another client from disk, and assembling the response for a third client from cached information
• Imagine a web client (browser), which might like to initiate multiple requests concurrently
  – Our college home page has ~50 "src=…" HTML commands, each of which is going to involve a lot of sitting around! Wouldn't it be nice to launch these requests concurrently?
What's needed?

• In each of these examples of concurrency (web server, web client):
  – Everybody wants to run the same code
  – Everybody wants to access the same data
  – Everybody has the same privileges
  – Everybody uses the same resources (open files, network connections, etc.)
• But you'd like to have multiple hardware execution states:
  – an execution stack and stack pointer (SP)
    • traces the state of procedure calls made
  – the program counter (PC), indicating the next instruction
  – a set of general-purpose processor registers and their values
How could we achieve this?

• Given the process abstraction as we know it:
  – fork several processes
  – cause each to map the same physical memory to share data
• This is like making a pig fly – it's really inefficient
  – space: PCB, page tables, etc.
  – time: creating OS structures, forking and copying the address space, etc.
• Some equally bad alternatives for some of the examples:
  – Entirely separate web servers
  – Manually programmed asynchronous programming (non-blocking I/O) in the web client (browser)
Can we do better?

• Key idea:
  – separate the concept of a process (address space, etc.)…
  – …from that of a minimal "thread of control" (execution state: PC, etc.)
• This execution state is usually called a thread, or sometimes, a lightweight process
Threads and processes

• Most modern OS's (Mach, Chorus, NT, modern UNIX) therefore support two entities:
  – the process, which defines the address space and general process attributes (such as open files, etc.)
  – the thread, which defines a sequential execution stream within a process
• A thread is bound to a single process / address space
  – address spaces, however, can have multiple threads executing within them
  – sharing data between threads is cheap: all see the same address space (same virtual -> physical address mapping)
  – creating threads is cheap too!
• Threads become the unit of scheduling
  – processes / address spaces are just containers in which threads execute
The design space

(Key: each box is an address space; each wavy line is a thread)

                         one process      many processes
one thread/process       MS/DOS           older UNIXes
many threads/process     Java             Mach, NT, Chorus, Linux, …
(old) Process address space

[Diagram: address space from 0x00000000 to 0xFFFFFFFF – code (text segment), static data (data segment), heap (dynamically allocated memory), stack (dynamically allocated memory); one PC and one SP]
(new) Process address space with threads

[Diagram: same address space – code (text segment), static data (data segment), heap – but now with three thread stacks; each thread has its own PC (T1, T2, T3) and SP (T1, T2, T3)]

WHY?
Process/thread separation

• Concurrency (multithreading) is useful for:
  – handling concurrent events (e.g., web servers and clients)
  – building parallel programs (e.g., matrix multiply, ray tracing)
  – improving program structure (examples?)
• Multithreading is useful even on a uniprocessor
  – even though only one thread can run at a time
  – HyperThread vs. multi-core (later)
• Supporting multithreading – that is, separating the concept of a process (address space, files, etc.) from that of a minimal thread of control (execution state) – is a big win
  – creating concurrency does not require creating new processes
  – "faster / better / cheaper"
"Where do threads come from?"

• Natural answer: the kernel is responsible for creating/managing threads
  – for example, the kernel call (API) to create a new thread would
    • allocate an execution stack within the process address space
    • create and initialize a Thread Control Block (TCB)
      – stack pointer, program counter, register values
    • stick it on the ready queue
  – we call these kernel threads
"Where do threads come from?" (2)

• Threads can also be managed at the user level (that is, entirely from within the process)
  – a library linked into the program manages the threads
    • because threads share the same address space, the thread manager doesn't need to manipulate address spaces (which only the kernel can do)
    • threads differ (roughly) only in hardware contexts (PC, SP, registers), which can be manipulated by user-level code
    • the thread package multiplexes user-level threads on top of kernel thread(s), which it treats as "virtual processors"
  – we call these user-level threads
• READ THE BOOK!!!
Kernel threads

• The OS now manages threads and processes
  – all thread operations are implemented in the kernel
  – the OS schedules all of the threads in the system
    • if one thread in a process blocks (e.g., on I/O), the OS knows about it, and can run other threads from that process
    • possible to overlap I/O and computation inside a process
• Kernel threads are cheaper than processes
  – less state to allocate and initialize
• But they're still pretty expensive for fine-grained use (e.g., orders of magnitude more expensive than a procedure call)
  – thread operations are all system calls
    • context switch
    • argument checks
  – must maintain kernel state for each thread
User-level threads

• To make threads cheap and fast, they need to be implemented at the user level
  – managed entirely by a user-level library, e.g., libpthreads.a
• User-level threads are small and fast
  – each thread is represented simply by a PC, registers, a stack, and a small thread control block (TCB)
  – creating a thread, switching between threads, and synchronizing threads are done via procedure calls
    • no kernel involvement is necessary!
  – user-level thread operations can be 10-100x faster than kernel threads as a result
Performance example

• On a 700MHz Pentium running Linux 2.2.16:
  – Processes
    • fork/exit: 251 μs
  – Kernel threads
    • pthread_create()/pthread_join(): 94 μs (2.5x faster)
  – User-level threads
    • pthread_create()/pthread_join(): 4.5 μs (another 20x faster)
Performance example (2)

• On a 700MHz Pentium running Linux 2.2.16 / on a DEC SRC Firefly running Ultrix (1989):
  – Processes
    • fork/exit: 251 μs / 11,300 μs
  – Kernel threads
    • pthread_create()/pthread_join(): 94 μs / 948 μs (12x faster)
  – User-level threads
    • pthread_create()/pthread_join(): 4.5 μs / 34 μs (another 28x faster)
The design space (recap)

                         one process      many processes
one thread/process       MS/DOS           older UNIXes
many threads/process     Java             Mach, NT, Chorus, Linux, …
Kernel threads

[Diagram: address spaces (as in Mach, NT, Chorus, Linux, …) each containing threads; the OS kernel implements thread create, destroy, signal, wait, etc., and schedules the threads onto the CPU]
User-level threads, conceptually

[Diagram: a user-level thread library inside each address space implements thread create, destroy, signal, wait, etc.; the OS kernel sees only the process – so how do the threads get onto the CPU?]
User-level threads, really

[Diagram: the user-level thread library (thread create, destroy, signal, wait, etc.) multiplexes its threads on top of kernel threads, which the OS kernel manages (kernel thread create, destroy, signal, wait, etc.) and schedules onto the CPU]
Multiple kernel threads "powering" each address space

[Diagram: several kernel threads per address space; the user-level thread library (thread create, destroy, signal, wait, etc.) maps its threads onto them, while the kernel (kernel thread create, destroy, signal, wait, etc.) schedules the kernel threads onto the CPU]
User-level thread implementation

• The kernel believes the user-level process is just a normal process running code
  – But this code includes the thread support library and its associated thread scheduler
• The thread scheduler determines when a thread runs
  – it uses queues to keep track of what threads are doing: run, ready, wait
    • just like the OS and processes
    • but implemented at user level, as a library
Thread interface

• This is (slightly simplified) from the POSIX pthreads API:
  – t = pthread_create(attributes, start_procedure)
    • creates a new thread of control
    • the new thread begins executing at start_procedure
  – pthread_cond_wait(condition_variable)
    • the calling thread blocks, sometimes called thread_block()
  – pthread_cond_signal(condition_variable)
    • wakes a thread waiting on the condition variable
  – pthread_exit()
    • terminates the calling thread
  – pthread_join(t)
    • waits for the named thread to terminate
How to keep a user-level thread from hogging the CPU?

• Strategy 1: force everyone to cooperate
  – a thread willingly gives up the CPU by calling yield()
  – yield() calls into the scheduler, which context switches to another ready thread
  – what happens if a thread never calls yield()?
• Strategy 2: use preemption
  – the scheduler requests that a timer interrupt be delivered by the OS periodically
    • usually delivered as a UNIX signal (man signal)
    • signals are just like software interrupts, but delivered to user level by the OS instead of delivered to the OS by hardware
  – at each timer interrupt, the scheduler gains control and context switches as appropriate
Thread context switch

• Very simple for user-level threads:
  – save the context of the currently running thread
    • push machine state onto the thread's stack
  – restore the context of the next thread
    • pop machine state from the next thread's stack
  – return as the new thread
    • execution resumes at the PC of the next thread
• This is all done in assembly language
  – it works at the level of the procedure calling convention
    • thus, it cannot be implemented using procedure calls
    • e.g., a thread might be preempted (and then resumed) in the middle of a procedure call
Thread configurations

[Diagram: three configurations above the OS kernel and CPU – pure user-level threads (ULT), a combined scheme of user-level threads over kernel threads, and pure kernel-level threads (KLT)]
What if a thread tries to do I/O?

• The kernel thread "powering" it is lost for the duration of the (synchronous) I/O operation!
• Could have one kernel thread "powering" each user-level thread
  – no real difference from kernel threads – "common case" operations (e.g., synchronization) would be quick
• Could have a limited-size "pool" of kernel threads "powering" all the user-level threads in the address space
  – the kernel will be scheduling these threads, oblivious to what's going on at user level
What if the kernel preempts a thread holding a lock?

• Other threads will be unable to enter the critical section and will block (stall)
  – a tradeoff, as with everything else
• Solving this requires coordination between the kernel and the user-level thread manager
  – "scheduler activations"
    • each process can request one or more kernel threads
      – the process is given responsibility for mapping user-level threads onto kernel threads
      – the kernel promises to notify user level before it suspends or destroys a kernel thread
Summary

• You really want multiple threads per address space
• Kernel threads are much more efficient than processes, but they're still not cheap
  – all operations require a kernel call and parameter verification
• User-level threads are:
  – fast
  – great for common-case operations
    • creation, synchronization, destruction
  – but can suffer in uncommon cases due to kernel obliviousness
    • I/O
    • preemption of a lock holder
• Scheduler activations are the answer
  – pretty subtle, though
Bonus Slides
CGI vs. ISAPI (IIS)

• Advantages & disadvantages
  – Protection
  – Ease of use
  – Performance
  – Scalability
• Java Servlets
HW HyperThread

• HyperThread
  – multiple execution states in the CPU
    • registers, PC, SP, …
  – multiple logical processors/cores
Hyper-Thread vs. Multi-core

                       Hyper-Thread    Multi-core
Execution state        Multiple        Multiple
Thread of execution    Single          Multiple
Cache                  Single          Multiple