
Page 1: CSI 3131 Greatest Hits (2015) Modules 1-4

CSI 3131 Greatest Hits (2015)

Modules 1-4

Content from: Silberschatz

Page 2: CSI 3131 Greatest Hits (2015) Modules 1-4

I/O Structure

• Controller has registers for accepting commands and transferring data (i.e. data-in, data-out, command, status)
• A device driver for each device controller talks to the controller
  – The driver is the component that knows the details of the controller
  – Provides a uniform interface to the kernel
• I/O operation
  – Device driver loads the controller registers appropriately
  – Controller examines the registers, executes the I/O

Page 3: CSI 3131 Greatest Hits (2015) Modules 1-4

I/O Structure

• How does the driver know when the I/O completes?
  – Periodically read the status register
    • Called direct I/O
    • Low overhead if I/O is fast
    • If I/O is slow, lots of busy waiting
• Any idea how to deal with slow I/O?
  – Do something else and let the controller signal the device driver (by raising an interrupt) that the I/O has completed
    • Called interrupt-driven I/O
    • More overhead, but gets rid of busy waiting

Page 4: CSI 3131 Greatest Hits (2015) Modules 1-4

I/O Structure

• Interrupt-driven I/O still has lots of overhead if used for bulk data transfer
  – An interrupt for each byte is too much
• Ideas?
  – Have a smart controller that can talk directly with the memory
  – Tell the controller to transfer a block of data to/from memory
  – The controller raises an interrupt when the transfer has completed
  – Called Direct Memory Access (DMA)

Page 5: CSI 3131 Greatest Hits (2015) Modules 1-4

What Do Operating Systems Do?

– OS is the program most involved with the hardware
  • hardware abstraction
– OS is a resource allocator
  • Manages all resources
  • Decides between conflicting requests for efficient and fair resource use
– OS is a control program
  • Controls execution of programs to prevent errors and improper use of the computer

Page 6: CSI 3131 Greatest Hits (2015) Modules 1-4

Defining Operating Systems

• No universally accepted definition
• “Everything a vendor ships when you order an operating system” is a good approximation
  – But varies wildly (see system programs)
• “The one program running at all times on the computer” is the one generally used in this course
  – This is the kernel
  – Everything else is either a system program (ships with the operating system) or an application program

Page 7: CSI 3131 Greatest Hits (2015) Modules 1-4

Operating System Services

Services provided to user programs:
– I/O operations
  • User programs cannot directly access I/O hardware; the OS does the low-level part for them
– Communications
  • Both inter-process on the same computer, and between computers over a network
  • Via shared memory or through message passing
– Error detection
  • Errors do occur: in the CPU and memory hardware, in I/O devices, in user programs
  • The OS should handle them appropriately to ensure correct and consistent computing
  • Low-level debugging tools really help

Page 8: CSI 3131 Greatest Hits (2015) Modules 1-4

Operating System Services (Cont.)

Services for ensuring efficient operation of the system itself:
– Resource allocation and management
  • Needed when there are multiple users / multiple jobs running concurrently
  • Some resources need specific management code: CPU cycles, main memory, file storage
  • Others can be managed using general code: I/O devices
– Accounting
  • Which users use how much and what kinds of computer resources
– Protection and security
  • Between different processes/users in the computer
  • From outsiders in a networked computer system
  • Protection: ensuring that all access to system resources is controlled
  • Security: user authentication, defending external I/O devices from invalid access attempts

Page 9: CSI 3131 Greatest Hits (2015) Modules 1-4

Operating-System Operations

• OS is interrupt driven
• Interrupts raised by hardware and software
  – Mouse click, division by zero, request for an operating system service
  – Timer interrupt (e.g. a process in an infinite loop), memory access violation (processes trying to modify each other or the operating system)
• Some operations should be performed only by a trusted party
  – Accessing hardware, memory-management registers
  – A rogue user program could damage other programs, steal the system for itself, …
  – Solution: dual-mode operation

Page 10: CSI 3131 Greatest Hits (2015) Modules 1-4

Transition from User to Kernel Mode

• Dual-mode operation allows the OS to protect itself and other system components
  – User mode and kernel mode
  – Mode bit provided by hardware
• Provides the ability to distinguish when the system is running user code or kernel code
• Some instructions are designated as privileged, only executable in kernel mode
• A system call changes the mode to kernel; return from the call resets it to user

Page 11: CSI 3131 Greatest Hits (2015) Modules 1-4

System Calls

• Programming interface to the services provided by the OS
  – Process control
    • e.g. launch a program
  – File management
    • e.g. create/open/read a file, list a directory
  – Device management
    • e.g. request/release a device
  – Information maintenance
    • e.g. get/set time, process attributes
  – Communications
    • e.g. open/close connection, send/receive messages

Page 12: CSI 3131 Greatest Hits (2015) Modules 1-4

Main Operations of an Operating System

• Process Management
  – A program is passive; a process is active, the unit of work within the system
  – OS manages the resources required by processes
    • CPU, memory, I/O, files
    • Initialization data
  – OS manages process activities: e.g. creation/deletion, interaction between processes, etc.
• Memory Management
  – Memory management determines what is in memory, and when, to optimize CPU utilization and computer response to users
• Storage Management
  – OS provides a uniform, logical view of information storage
  – File Systems, Mass Storage Management
• I/O Subsystem
  – One purpose of the OS is to hide the peculiarities of hardware devices from the user

Page 13: CSI 3131 Greatest Hits (2015) Modules 1-4

Operating System Structure

• Monolithic

• Layered

• Microkernel

• Modular

• Hybrid

• Which leads to Virtual Machines

Page 14: CSI 3131 Greatest Hits (2015) Modules 1-4

Process state transitions

New → Ready
  Represents the creation of the process.
  • In the case of batch systems, a list of processes waiting in the new state (i.e. jobs) is common.

Ready → Running
  When the CPU scheduler selects a process for execution.

Running → Ready
  Happens when an interrupt is caused by an event independent of the process.
  • Must deal with the interrupt, which may take the CPU away from the process
  • Important case: the process has used up its time with the CPU.

Page 15: CSI 3131 Greatest Hits (2015) Modules 1-4

Process state transitions

Running → Waiting
  When a process requests a service from the OS and the OS cannot satisfy it immediately (software interrupt due to a system call)
  • An access to a resource not yet available
  • Starts an I/O: must wait for the result
  • Needs a response from another process

Waiting → Ready
  When the expected event has occurred.

Running → Terminated
  The process has reached the end of the program (or an error occurred).

Page 16: CSI 3131 Greatest Hits (2015) Modules 1-4


Process Control Block (PCB)

Page 17: CSI 3131 Greatest Hits (2015) Modules 1-4

Schedulers

• Long-term scheduler (or job scheduler)
  – selects which new processes should be brought into memory: new → ready (and into the ready queue from a job spool queue); used in batch systems
• Short-term scheduler (or CPU scheduler)
  – selects which ready process should be executed next: ready → running
• Which of these schedulers must execute really fast, and which one can be slow? Why?
  – The short-term scheduler is invoked very frequently (milliseconds): must be fast
  – The long-term scheduler is invoked very infrequently (seconds, minutes): may be slow

Page 18: CSI 3131 Greatest Hits (2015) Modules 1-4

Schedulers (Cont.)

• Processes differ in their resource utilization:
  – I/O-bound process: spends more time doing I/O than computations; many short CPU bursts
  – CPU-bound process: spends more time doing computations; few very long CPU bursts
• The long-term scheduler controls the degree of multiprogramming
  – the goal is to use the computer resources efficiently
  – ideally, it chooses a mix of I/O-bound and CPU-bound processes
    • but difficult to know beforehand

Page 19: CSI 3131 Greatest Hits (2015) Modules 1-4

Medium-Term Scheduling

Due to memory shortage, the OS might decide to swap out a process to disk.

Later, it might decide to swap it back into memory when resources become available.

Medium-term scheduler: selects which process should be swapped out/in.

Page 20: CSI 3131 Greatest Hits (2015) Modules 1-4

Process Creation

So, where do all processes come from?
– Parent processes create child processes, which, in turn, create other processes, forming a tree of processes
– Usually, several properties can be specified at child creation time:
  • How do the parent and child share resources?
    – Share all resources
    – Share a subset of the parent’s resources
    – No sharing
  • Does the parent run concurrently with the child?
    – Yes, execute concurrently
    – No, parent waits until the child terminates
  • Address space
    – Child is a duplicate of the parent
    – Child has a program loaded into it

Page 21: CSI 3131 Greatest Hits (2015) Modules 1-4

Process Creation (Cont.)

UNIX example:
– fork() system call creates a new process with a duplicate of the parent’s address space
  • no shared memory, but a copy
  • copy-on-write is used to avoid excessive cost
  • returns the child’s pid to the parent, 0 to the new child process
  • the parent may call wait() to wait until the child terminates
– exec(…) system call is used after a fork() to replace the process’ memory space with a new program

Page 22: CSI 3131 Greatest Hits (2015) Modules 1-4

Fork example

int pid, a = 2, b = 4;

pid = fork();                  /* fork another process */
if (pid < 0)
    exit(-1);                  /* fork failed */
else if (pid == 0) {           /* child process */
    a = 3;
    printf("%d\n", a + b);     /* child sees its own copy: a = 3, b = 4 */
} else {                       /* parent process */
    wait(NULL);                /* wait until the child terminates */
    b = 1;
    printf("%d\n", a + b);     /* parent's copy is unchanged by the child: a = 2, b = 1 */
}

What would be the output printed?

7 (the child: 3 + 4), then 3 (the parent, after wait: 2 + 1)

Page 23: CSI 3131 Greatest Hits (2015) Modules 1-4

Process Termination

How do processes terminate?
• The process executes its last statement and asks the operating system to delete it (by making the exit() system call)
• Abnormal termination
  – Division by zero, memory access violation, …
• Another process asks the OS to terminate it
  – Usually only a parent may terminate its children
    • To prevent users from terminating each other’s processes
  – Windows: TerminateProcess(…)
  – UNIX: kill(processID, signal) (a minimal sketch follows below)
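A minimal sketch of the UNIX call named above; terminate_child() is a hypothetical helper, and SIGTERM is one choice for the signal parameter:

#include <signal.h>
#include <sys/types.h>

/* Hypothetical helper: ask the process child_pid to terminate
   by sending it SIGTERM (the signal argument of kill above). */
void terminate_child(pid_t child_pid)
{
    kill(child_pid, SIGTERM);
}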

Page 24: CSI 3131 Greatest Hits (2015) Modules 1-4

Process Termination (Cont.)

What should the OS do?
• Release the resources held by the process
  – When a process has terminated but not all of its resources have been released, it is in the terminated (zombie) state
• The process’ exit state might be sent to its parent
  – The parent indicates interest by executing the wait() system call (see the sketch below)

What to do when a process that has children is exiting?
• Some OSs (VMS) do not allow the children to continue
  – All children are terminated: cascading termination
• Others find a new parent process for the orphan processes
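A minimal sketch of a parent reaping its child with wait(), as described above; the exit status 42 is an assumed example:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0)
        exit(42);                 /* child terminates; it stays a zombie until reaped */

    int status;
    wait(&status);                /* parent collects the exit state, releasing the PCB */
    if (WIFEXITED(status))
        printf("child exited with %d\n", WEXITSTATUS(status));   /* prints 42 */
    return 0;
}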

Page 25: CSI 3131 Greatest Hits (2015) Modules 1-4

Interprocess Communication (IPC)

• Mechanisms for processes to communicate and to synchronize their actions
  – Fundamental models of IPC
    • Through shared memory (shmget & shmat; see the sketch after this list)
    • Using message passing
  – Examples of IPC mechanisms
    • signals
    • pipes & sockets
    • Semaphores
    • Monitors
    • RPC
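A minimal sketch of the shared-memory model using the shmget and shmat calls named above; the 4 KB segment size and the message are assumptions for illustration:

#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);  /* create a segment */
    char *shm = shmat(shmid, NULL, 0);     /* attach it to our address space */

    if (fork() == 0) {                     /* child writes into the shared segment */
        strcpy(shm, "hello via shared memory");
        return 0;
    }
    wait(NULL);                            /* parent reads after the child is done */
    printf("%s\n", shm);
    shmdt(shm);                            /* detach and remove the segment */
    shmctl(shmid, IPC_RMID, NULL);
    return 0;
}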

Page 26: CSI 3131 Greatest Hits (2015) Modules 1-4

Direct Communication

• Processes must name each other explicitly:
  – send(P, message): send a message to process P
  – receive(Q, message): receive a message from process Q
• Properties of the communication link
  – Links are established automatically; exactly one link for each pair of communicating processes
  – The link may be unidirectional, but is usually bidirectional

Page 27: CSI 3131 Greatest Hits (2015) Modules 1-4

Blocking Message Passing

Also called synchronous message passing
• the sender waits until the receiver receives the message
• the receiver waits until the sender sends the message
• advantages:
  – inherently synchronizes the sender with the receiver
  – a single copy is sufficient (no buffering)
• disadvantages:
  – possible deadlock problem

Page 28: CSI 3131 Greatest Hits (2015) Modules 1-4

Non-Blocking Message Passing

Also called asynchronous message passing
• Non-blocking send: the sender continues before the delivery of the message
• Non-blocking receive: check whether there is a message available, return immediately

Page 29: CSI 3131 Greatest Hits (2015) Modules 1-4

Unix Pipes

int fd[2], pid, ret;

ret = pipe(fd);
if (ret == -1) return PIPE_FAILED;

pid = fork();
if (pid == -1) return FORK_FAILED;

if (pid == 0) {            /* child: reads from the pipe */
    close(fd[1]);          /* close the unused write end */
    while (…) {
        read(fd[0], …);    /* was read(pipes[0], …): the descriptor array is fd */
    }
} else {                   /* parent: writes to the pipe */
    close(fd[0]);          /* close the unused read end */
    while (…) {
        write(fd[1], …);
    }
}
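A complete, runnable version of the skeleton above, with the slide’s ellipses filled in by an assumed one-message exchange (parent writes, child reads):

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    char buf[64];

    if (pipe(fd) == -1) return 1;
    pid_t pid = fork();
    if (pid == -1) return 1;

    if (pid == 0) {                        /* child: reader */
        close(fd[1]);                      /* close the unused write end */
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child read: %s\n", buf);
        }
        close(fd[0]);
    } else {                               /* parent: writer */
        close(fd[0]);                      /* close the unused read end */
        write(fd[1], "hello", 5);
        close(fd[1]);                      /* gives the reader end-of-file */
        wait(NULL);
    }
    return 0;
}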

Page 30: CSI 3131 Greatest Hits (2015) Modules 1-4

After Spawning New Child Process

(Diagram: parent and child processes each hold file descriptors 0-4; both descriptor tables point to the same pipe in the kernel, so each process initially holds both ends.)

Page 31: CSI 3131 Greatest Hits (2015) Modules 1-4

After Closing Unused Ends of Pipe

(Diagram: after the close() calls, the parent keeps only the write end of the pipe and the child keeps only the read end.)

Page 32: CSI 3131 Greatest Hits (2015) Modules 1-4

Process Characteristics

• Two characteristics of a process, execution and resource ownership, are most often treated independently by OSs
• Execution is normally designated as an execution thread
• Resource ownership is normally designated as a process or task

Page 33: CSI 3131 Greatest Hits (2015) Modules 1-4

Threads vs Processes

Process
• A unit/thread of execution, together with the code, data and other resources that support the execution.

Idea
• Make a distinction between the resources and the execution threads
• Could the same resources support several threads of execution?
  – Code, data, …: yes
  – CPU registers, stack: no

Page 34: CSI 3131 Greatest Hits (2015) Modules 1-4

Threads = Lightweight Processes

• A thread is a subdivision of a process
  – A thread of control in the process
• Different threads of a process share the address space and resources of the process
  – When a thread modifies a global (non-local) variable, all other threads see the modification
  – A file opened by one thread is accessible to the other threads (of the same process)

Page 35: CSI 3131 Greatest Hits (2015) Modules 1-4

Motivation for Threads

• Responsiveness
  – One thread handles user interaction
  – Another thread does the background work (e.g. load a web page)
• Utilization of multiprocessor architectures
  – One process/thread can utilize only one CPU
  – Many threads can execute in parallel on multiple CPUs
• Well, but all this applies to one thread per process as well, so why use threads?

Page 36: CSI 3131 Greatest Hits (2015) Modules 1-4

Many-to-One Model

Properties:
– Cheap/fast, but runs as one process to the OS scheduler
– What happens if one thread blocks on an I/O?
  • All other threads block as well
– How to make use of multiple CPUs?
  • Not possible

Examples:
– Solaris Green Threads
– GNU Portable Threads

Page 37: CSI 3131 Greatest Hits (2015) Modules 1-4

One-to-One Model

Properties:
– Usually, a limited number of threads
– Thread management is relatively costly
– But provides better concurrency of threads

Examples:
– Windows NT/XP/2000
– Linux
– Solaris version 9 and later

Page 38: CSI 3131 Greatest Hits (2015) Modules 1-4

Many-to-Many Model

• Allows many user-level threads to be mapped to many kernel threads
• The thread library cooperates with the OS to dynamically map user threads to kernel threads
• Intermediate costs and most of the benefits of multithreading
  – If a user thread blocks, its kernel thread can be associated with another user thread
  – If more than one CPU is available, multiple kernel threads can run concurrently

Examples:
– Solaris prior to version 9
– Windows NT/2000 with the ThreadFiber package

Page 39: CSI 3131 Greatest Hits (2015) Modules 1-4

Threading Issues - Scheduler Activations

• The many-to-many models (including two-level) require communication from the kernel to inform the thread library when a user thread is about to block, and when it again becomes ready for execution
• When such an event occurs, the kernel makes an upcall to the thread library
• The thread library’s upcall handler handles the event (e.g. saves the user thread’s state and marks it as blocked)

Page 40: CSI 3131 Greatest Hits (2015) Modules 1-4

Threading Issues - Creation/Termination

Thread Cancellation
• Terminating a thread before it has finished
• Two general approaches:
  – Asynchronous cancellation terminates the target thread immediately
    • Might leave the shared data in a corrupt state
    • Some resources may not be freed
  – Deferred cancellation (see the sketch after this list)
    • Set a flag which the target thread periodically checks to see if it should be cancelled
    • Allows graceful termination
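A minimal sketch of deferred cancellation via a flag, as the last bullet describes; the names are assumed. (Pthreads also offers pthread_cancel() with a deferred cancellation type, where pthread_testcancel() is the checking point.)

#include <stdatomic.h>
#include <stdbool.h>

atomic_bool cancel_requested = false;   /* the flag the target thread polls */

void *worker(void *arg)
{
    (void) arg;
    while (!atomic_load(&cancel_requested)) {
        /* ... do one unit of work ... */
    }
    /* clean up shared data and release resources, then terminate gracefully */
    return NULL;
}

/* Another thread requests cancellation simply by setting the flag: */
void request_cancel(void)
{
    atomic_store(&cancel_requested, true);
}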

Page 41: CSI 3131 Greatest Hits (2015) Modules 1-4

Threading Issues - Signal Handling

• Signals are used in UNIX systems to notify a process that a particular event has occurred
• Essentially a software interrupt
• A signal handler is used to process signals (a handler-installation sketch follows below)
  1. Signal is generated by a particular event
  2. Signal is delivered to a process
  3. Signal is handled
• Options:
  – Deliver the signal to the thread to which the signal applies
  – Deliver the signal to every thread in the process
  – Deliver the signal to certain threads in the process
  – Assign a specific thread to receive all signals for the process
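A minimal sketch of steps 1-3 for a single-threaded process, installing a SIGINT handler with sigaction(); counting the interrupts is an assumed example:

#include <signal.h>
#include <unistd.h>

volatile sig_atomic_t interrupts = 0;

void handler(int signo)                /* step 3: the signal is handled */
{
    (void) signo;
    interrupts++;                      /* only async-signal-safe work here */
}

int main(void)
{
    struct sigaction sa = {0};
    sa.sa_handler = handler;
    sigaction(SIGINT, &sa, NULL);      /* register the handler for SIGINT */
    for (;;)
        pause();                       /* steps 1 and 2: generation and delivery */
}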

Page 42: CSI 3131 Greatest Hits (2015) Modules 1-4

Linux Threads

• Linux refers to them as tasks rather than threads
• Thread creation is done through the clone() system call (see the sketch below)
• The clone() system call allows one to specify which resources are shared between the child and the parent
  – Full sharing: threads
  – Little sharing: like fork()
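A sketch of thread-style creation with clone() (Linux-specific). The flag set below shares the address space, filesystem info, open files and signal handlers, i.e. the “full sharing” end of the spectrum; the stack size is an assumption:

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

static int task_fn(void *arg)
{
    printf("task sees shared value: %d\n", *(int *) arg);
    return 0;
}

int main(void)
{
    int value = 42;
    size_t stack_size = 64 * 1024;
    char *stack = malloc(stack_size);       /* the new task needs its own stack */
    int flags = CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | SIGCHLD;

    /* the stack grows down on most architectures, so pass its top */
    int pid = clone(task_fn, stack + stack_size, flags, &value);
    waitpid(pid, NULL, 0);
    free(stack);
    return 0;
}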

Page 43: CSI 3131 Greatest Hits (2015) Modules 1-4

Thread Programming Exercise

Goal: write a multithreaded matrix multiplication algorithm, in order to make use of several CPUs.

Single-threaded algorithm for multiplying n x n matrices A and B:

for (i = 0; i < n; i++)
    for (j = 0; j < n; j++) {
        C[i][j] = 0;
        for (k = 0; k < n; k++)
            C[i][j] += A[i][k] * B[k][j];
    }

Just to make our life easier: assume you have 6 CPUs and n is a multiple of 6.

Page 44: CSI 3131 Greatest Hits (2015) Modules 1-4

Multithreaded Matrix Multiplication

Idea:
• create 6 threads
• have each thread compute 1/6 of the matrix C
• wait until everybody has finished
• the matrix can be used now

(Diagram: matrix C split into 6 horizontal bands, one per thread: Thread 0 through Thread 5.)

Page 45: CSI 3131 Greatest Hits (2015) Modules 1-4

PThreads

pthread_t tid[6];
pthread_attr_t attr;
int id[6];
int i;

pthread_attr_init(&attr);     /* was pthread_init_attr: the Pthreads call is pthread_attr_init */

for (i = 0; i < 6; i++) {     /* create the working threads */
    id[i] = i;
    pthread_create(&tid[i], &attr, worker, &id[i]);  /* pass &id[i], not &id: each thread gets its own index */
}

for (i = 0; i < 6; i++)       /* now wait until everybody finishes */
    pthread_join(tid[i], NULL);

/* the matrix C can be used now */

Page 46: CSI 3131 Greatest Hits (2015) Modules 1-4

PThreads

void *worker(void *param)
{
    int i, j, k;
    int id = *((int *) param);    /* take param to be a pointer to integer */
    int low = id * n / 6;
    int high = (id + 1) * n / 6;

    for (i = low; i < high; i++)
        for (j = 0; j < n; j++) {
            C[i][j] = 0;
            for (k = 0; k < n; k++)
                C[i][j] += A[i][k] * B[k][j];   /* was =, which kept only the last term */
        }
    pthread_exit(0);
}

Page 47: CSI 3131 Greatest Hits (2015) Modules 1-4

Synchronization problem

• Concurrent processes (or threads) often need to share data (maintained either in shared memory or files) and resources
• If there is no controlled access to shared data, some processes will obtain an inconsistent view of this data
• The results of actions performed by concurrent processes will then depend on the order in which their execution is interleaved: a race condition
• Let us abstract the danger of concurrent modification of shared variables into the critical-section problem

Page 48: CSI 3131 Greatest Hits (2015) Modules 1-4

Critical-Section Problem

• The piece of code modifying the shared variables, where a thread/process needs exclusive access to guarantee consistency, is called the critical section.
• The general structure of each thread/process can be seen as follows:

while (true)
{
    entry_section
    critical section (CS)
    exit_section
    remainder section (RS)
}

• The critical (CS) and remainder (RS) sections are given.
• We want to design the entry and exit sections so that the following requirements are satisfied:

Page 49: CSI 3131 Greatest Hits (2015) Modules 1-4

Solution Requirements for the CSP

1. Mutual Exclusion - If process Pi is executing in its critical section, then no other processes can be executing in their critical sections
2. Progress - If there exist some processes wishing to enter their CS and no process is in its CS, then one of them will eventually enter its critical section
   • No deadlock
   • Non-interference: if a process terminates in the RS, other processes should still be able to access the CS
   • Assume that a thread/process always exits its CS
3. Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted
   • No starvation

Assumptions:
• Assume that each process executes at a nonzero speed
• No assumption concerning the relative speed of the N processes
• Many CPUs may be present, but the memory hardware prevents simultaneous access to the same memory location
• No assumption about the order of interleaved execution

Page 50: CSI 3131 Greatest Hits (2015) Modules 1-4
Page 51: CSI 3131 Greatest Hits (2015) Modules 1-4

Peterson’s Solution (see book)
• Assume only two processes, 0 and 1
• Use flag[i] to indicate willingness to enter the CS
• But use turn to let the other task enter the CS (a compilable C11 sketch follows below)

Task T0:
flag[0] = true;                          // T0 wants in
turn = 1;                                // T0 gives a chance to T1
while (flag[1] == true && turn == 1) {}  // wait if T1 wants in and it is its turn
Critical Section
flag[0] = false;                         // T0 wants out
Remainder Section

Task T1:
flag[1] = true;                          // T1 wants in
turn = 0;                                // T1 gives a chance to T0
while (flag[0] == true && turn == 0) {}  // wait if T0 wants in and it is its turn
Critical Section
flag[1] = false;                         // T1 wants out
Remainder Section
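The C11 sketch of the same algorithm (function names assumed). The sequentially consistent atomics matter: with plain variables, compiler and CPU reordering can break Peterson’s solution on modern hardware.

#include <stdatomic.h>
#include <stdbool.h>

atomic_bool flag[2];      /* flag[i]: task i wants to enter the CS */
atomic_int  turn;         /* which task is asked to yield */

void enter_cs(int i)      /* i is 0 or 1 */
{
    int other = 1 - i;
    atomic_store(&flag[i], true);      /* I want in */
    atomic_store(&turn, other);        /* but give the other a chance */
    while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
        ;                              /* wait while the other wants in and it is its turn */
}

void exit_cs(int i)
{
    atomic_store(&flag[i], false);     /* I want out */
}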

Page 52: CSI 3131 Greatest Hits (2015) Modules 1-4

Hardware Solution: Disable Interrupts

Simple solution:
– A process would not be preempted in its CS

Process Pi:
while (true)
{
    disable interrupts
    critical section
    enable interrupts
    remainder section
}

Discussion:
• Efficiency deteriorates: when a process is in its CS, it is impossible to interleave the execution of other processes in their RS
• Interrupts can be lost
• On a multiprocessor system, mutual exclusion is not assured
• A solution that is not generally acceptable

(Diagram: overview of solutions to the CSP: Peterson’s Solution (software), hardware solutions, semaphores, monitors.)

Page 53: CSI 3131 Greatest Hits (2015) Modules 1-4

The test-and-set instruction (cont.)

• Mutual exclusion is assured: if Ti enters the CS, the other Tj are busy waiting
  – Problem: still using busy waiting (a spinlock sketch follows below)
• Can easily attain mutual exclusion, but needs more complex algorithms to satisfy the other requirements of the CSP
  – When Ti leaves its CS, the selection of the next Tj is arbitrary: no bounded waiting, so starvation is possible
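A sketch of a spinlock built on test-and-set, using the C11 atomic_flag, whose test-and-set operation corresponds to the instruction discussed above; the function names are assumed. As noted, this gives mutual exclusion but not bounded waiting.

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;

void enter_cs(void)
{
    while (atomic_flag_test_and_set(&lock))
        ;   /* flag was already set: someone is in the CS, busy wait */
}

void exit_cs(void)
{
    atomic_flag_clear(&lock);   /* the next arriving test-and-set succeeds */
}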

Page 54: CSI 3131 Greatest Hits (2015) Modules 1-4

Spinlocks: Busy-Wait Semaphores

• Easiest way to implement semaphores
• Used in situations where waiting is brief, or with multiprocessors
• When S = n (n > 0), up to n processes will not block when calling wait()
• When S becomes 0, processes block in the call wait() until signal() is called
• A call to signal() unblocks a blocked process or increments the semaphore value

wait(S)
{
    while (S <= 0)
        ;        /* no-op: busy wait */
    S--;
}

signal(S)
{
    S++;
}

The sequence S <= 0, S-- must be atomic.
The signal call must be atomic.

Page 55: CSI 3131 Greatest Hits (2015) Modules 1-4

Semaphores - Version 2 - No Busy Wait

• When a process must wait for a semaphore to become greater than 0, place it in a queue of processes waiting on the same semaphore
• The queues can be FIFO, with priorities, etc. The OS controls the order in which processes are selected from the queue
• Thus wait and signal become system calls similar to requests for I/O
• There exists a queue for each semaphore, similar to the queues defined for each I/O device (a sketch follows below)
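A sketch of this version in the style of Silberschatz’s definition; block(), wakeup() and the queue operations are placeholders for the OS services described above.

typedef struct process process;           /* opaque PCB type (placeholder) */

typedef struct {
    int value;             /* when negative, |value| = number of waiters */
    process *queue;        /* processes blocked on this semaphore */
} semaphore;

extern void block(void);                  /* suspend the calling process */
extern void wakeup(process *p);           /* move p back to the ready queue */
extern void enqueue(semaphore *s);        /* add the caller to s->queue */
extern process *dequeue(semaphore *s);    /* remove one process from s->queue */

void sem_wait(semaphore *s)
{
    s->value--;
    if (s->value < 0) {    /* nothing available: the caller must wait */
        enqueue(s);
        block();
    }
}

void sem_signal(semaphore *s)
{
    s->value++;
    if (s->value <= 0)     /* someone was waiting: let one waiter proceed */
        wakeup(dequeue(s));
}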

Page 56: CSI 3131 Greatest Hits (2015) Modules 1-4

Atomic execution of the wait() and signal() calls

• wait() and signal() are in fact critical sections
• Can we use semaphores for these critical sections? (No: that would be circular)
• With single-processor systems:
  – Can disable (scheduler) interrupts during the CS
  – The operations are short (about 10 instructions)
• With SMP systems (inhibit interrupts on each processor?)
  – Spinlocks
  – Other software and hardware CSP solutions
• Result: we have not eliminated busy waiting
  – But the busy waiting has been reduced considerably (to the wait() and signal() calls); moved from the entry section to the CS
  – The semaphores (version 2), without busy wait, are used within applications that can spend long periods in their critical section (or blocked on a semaphore waiting for a signal): many minutes or even hours
• Our synchronization solution is thus efficient

Page 57: CSI 3131 Greatest Hits (2015) Modules 1-4

The Bounded-Buffer Problem

• A producer process produces information that is consumed by a consumer process
  – Ex. 1: a print program produces characters that are consumed by a printer
  – Ex. 2: an assembler produces object modules that are consumed by a loader
• We need a buffer to hold items that are produced and eventually consumed
• A common paradigm for cooperating processes (a semaphore-based sketch follows below)
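A minimal sketch of the bounded buffer with POSIX semaphores and a mutex; the capacity and the names are assumptions, since the slide describes only the problem.

#include <pthread.h>
#include <semaphore.h>

#define N 16                           /* buffer capacity (assumed) */

int buffer[N];
int in = 0, out = 0;

sem_t empty;                           /* free slots;  sem_init(&empty, 0, N) */
sem_t full;                            /* used slots;  sem_init(&full, 0, 0)  */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void produce(int item)
{
    sem_wait(&empty);                  /* wait for a free slot */
    pthread_mutex_lock(&mutex);
    buffer[in] = item;
    in = (in + 1) % N;
    pthread_mutex_unlock(&mutex);
    sem_post(&full);                   /* one more item available */
}

int consume(void)
{
    sem_wait(&full);                   /* wait for an item */
    pthread_mutex_lock(&mutex);
    int item = buffer[out];
    out = (out + 1) % N;
    pthread_mutex_unlock(&mutex);
    sem_post(&empty);                  /* one more free slot */
    return item;
}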

Page 58: CSI 3131 Greatest Hits (2015) Modules 1-4

Readers-Writers Problem

There are two types of processes accessing a shared database:
– readers only read the data, but do not modify them
– writers want to modify the data

In order to maintain consistency, as well as efficiency, the following rules are used:
– Several readers can access the database simultaneously
– A writer needs exclusive access to the database (has a critical section)
  • No readers or other writers are allowed while a writer is writing
– What to do if there are several readers in the system and a writer arrives? Two options:
  • While there is a reader active, do not make new readers wait (first readers-writers problem)
  • No new reader is admitted if there is a writer waiting (second readers-writers problem)
  • Both might lead to starvation

(A sketch of the first readers-writers solution follows below.)
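The sketch, in the classic semaphore form of the first readers-writers solution; rw_mutex and mutex are initialized to 1, and read_count counts the active readers.

#include <semaphore.h>

sem_t rw_mutex;          /* exclusive access to the database; init to 1 */
sem_t mutex;             /* protects read_count;            init to 1 */
int read_count = 0;      /* number of readers currently reading */

void writer(void)
{
    sem_wait(&rw_mutex);             /* a writer needs exclusive access */
    /* ... write to the database ... */
    sem_post(&rw_mutex);
}

void reader(void)
{
    sem_wait(&mutex);
    read_count++;
    if (read_count == 1)             /* the first reader locks out writers */
        sem_wait(&rw_mutex);
    sem_post(&mutex);

    /* ... read the database ... */

    sem_wait(&mutex);
    read_count--;
    if (read_count == 0)             /* the last reader lets writers in */
        sem_post(&rw_mutex);
    sem_post(&mutex);
}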

Page 59: CSI 3131 Greatest Hits (2015) Modules 1-4

The Dining Philosophers Problem

• 5 philosophers think, eat, think, eat, think …
• In the center, a bowl of rice
• Only 5 chopsticks available
• Eating requires 2 chopsticks
• A classical synchronization problem
• Illustrates the difficulty of allocating resources among processes without deadlock and starvation (a naive semaphore sketch follows below)
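A naive sketch with one semaphore per chopstick (each initialized to 1). It shows exactly the danger the slide names: if all five philosophers pick up their left chopstick at once, every sem_wait on the right one blocks forever, a deadlock. A standard fix is to have one philosopher pick up the right chopstick first.

#include <semaphore.h>

#define N 5
sem_t chopstick[N];      /* sem_init(&chopstick[i], 0, 1) for each i */

void philosopher(int i)
{
    for (;;) {
        /* think */
        sem_wait(&chopstick[i]);             /* pick up the left chopstick  */
        sem_wait(&chopstick[(i + 1) % N]);   /* pick up the right chopstick */
        /* eat */
        sem_post(&chopstick[(i + 1) % N]);
        sem_post(&chopstick[i]);
    }
}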

Page 60: CSI 3131 Greatest Hits (2015) Modules 1-4

Advantages of Semaphores (Relative to Other Synchronization Solutions)

• A single variable (data structure) per critical section
• Two operations: wait, signal
• Can be applied to more than 2 processes
• Can have more than a single process enter the CS
• Can be used for general synchronization
• Service offered by the OS, including blocking and queue management

Page 61: CSI 3131 Greatest Hits (2015) Modules 1-4

Problems with Semaphores: Programming Difficulty

• Wait and signal calls are scattered across different programs and running threads/processes
• Not always easy to understand the logic
• Usage must be correct in all threads/processes
• One “bad” thread/process can make a whole collection of threads/processes fail (e.g. by forgetting a signal)

Page 62: CSI 3131 Greatest Hits (2015) Modules 1-4

Monitors

• A monitor is a software module (ADT: abstract data type) containing:
  – one or more procedures
  – an initialization sequence
  – local data variables
• Characteristics:
  – local variables are accessible only by the monitor’s procedures
  – a process enters the monitor by invoking one of its procedures
  – only one process can be executing in the monitor at any one time (but a number of threads/processes can be waiting in the monitor)

Page 63: CSI 3131 Greatest Hits (2015) Modules 1-4

Monitors (Cont.)

• The monitor ensures mutual exclusion: no need to program this constraint explicitly
• Hence, shared data are protected by placing them in the monitor
  – The monitor locks the shared data on process entry
• Process synchronization is done by the programmer by using condition variables that represent conditions a process may need to wait for before executing in the monitor (a pthread-based sketch follows below)
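C has no monitor construct, so here is a sketch of a monitor-style counter built from a pthread mutex (playing the role of the monitor lock) and a condition variable; the names are assumed.

#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;     /* the monitor lock: one thread inside at a time */
    pthread_cond_t  nonzero;  /* condition a thread may need to wait for */
    int count;                /* the monitor's local data */
} counter_monitor;

void cm_init(counter_monitor *m)
{
    pthread_mutex_init(&m->lock, NULL);
    pthread_cond_init(&m->nonzero, NULL);
    m->count = 0;
}

void cm_increment(counter_monitor *m)
{
    pthread_mutex_lock(&m->lock);       /* enter the monitor */
    m->count++;
    pthread_cond_signal(&m->nonzero);   /* wake one waiting thread, if any */
    pthread_mutex_unlock(&m->lock);     /* leave the monitor */
}

void cm_decrement(counter_monitor *m)   /* waits until count > 0 */
{
    pthread_mutex_lock(&m->lock);
    while (m->count == 0)               /* wait on the condition inside the monitor */
        pthread_cond_wait(&m->nonzero, &m->lock);
    m->count--;
    pthread_mutex_unlock(&m->lock);
}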