Module 2.1: Process Synchronization


Page 1: Module 2.1:  Process Synchronization


Module 2.1: Process Synchronization

• Examples of Shared Variable Problem

• Mutual Exclusion

• Solutions to ME

• Semaphores

• Critical Regions

• Monitors

• Process synchronization means coordination among processes.

Page 2: Module 2.1:  Process Synchronization


Concurrent Processes

• Concurrent processes (or threads) often need to share data (maintained either in shared memory or files) and resources

• If there is no controlled access to shared data, some processes will obtain an inconsistent view of this data

• The actions performed by concurrent processes will then depend on the order in which their execution is interleaved

• This order cannot be predicted; it depends on:

– Activities of other processes

– Handling of I/O and interrupts

– Scheduling policies of the OS

Page 3: Module 2.1:  Process Synchronization


Shared Variable Problem: An Example

• Process P1 and P2 are running this same procedure and have access to the same variable “a”

• Processes can be interrupted anywhere

• If P1 is first interrupted just after the user input and P2 then executes entirely,

• then the character echoed by P1 will be the one read by P2!

static char a;

void echo() {
    cin >> a;     // read a character into the shared variable a
    cout << a;    // echo it back
}

Page 4: Module 2.1:  Process Synchronization


A Second Example

• Example (Simple Shared Variable)

– Two processes are each reading characters typed at their respective terminals

– Want to keep a running count of the total number of characters typed on both terminals

– A shared variable V is introduced; each time a character is typed, a process uses the code

      V := V + 1;

  to update the count. During testing it is observed that the count recorded in V is less than the actual number of characters typed. What happened?

The programmer failed to realize that the assignment was not executed as a single indivisible action, but rather as the following sequence of instructions:

    MOVE V, r0    ; load V into register r0
    INCR r0       ; increment the register
    MOVE r0, V    ; store the register back into V
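To make the lost update concrete, here is one possible interleaving, assuming V is 0 at this point and each process handles one keystroke (each process has its own copy of r0, saved and restored on a context switch):

    P1: MOVE V, r0     ; P1 reads V = 0
        --- context switch ---
    P2: MOVE V, r0     ; P2 also reads V = 0
    P2: INCR r0
    P2: MOVE r0, V     ; V = 1
        --- context switch ---
    P1: INCR r0        ; P1 still holds the stale value 0
    P1: MOVE r0, V     ; V = 1 instead of 2 -- one keystroke is lost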

Page 5: Module 2.1:  Process Synchronization


The Producer/Consumer Problem: A Third Example

• From time to time, the producer places an item in the buffer

• The consumer removes an item from the buffer

• Careful synchronization/coordination is required:

– the consumer must wait if the buffer is empty

– the producer must wait if the buffer is full

• A typical solution would involve a shared variable count (recall the previous example)

• Also known as the Bounded Buffer problem

[Figure: producer process P → buffer → consumer process C]

Page 6: Module 2.1:  Process Synchronization


The Mutual Exclusion Problem

• The previous two examples are typical of the kind of “race condition” problem that arises in operating system programming.

• Occurs when more than one process has simultaneous access to shared data, whose values are supposed to obey some integrity constraint.

• Other examples: airline reservation system, bank transaction system

• Problem generally solved by making access to shared variables mutually exclusive: at most one process can access shared variables at a time

• The period of time when one process has exclusive access to the data is called a critical section.

Page 7: Module 2.1:  Process Synchronization


The Critical Section Problem

• Definition. A critical section is a sequence of activities (or statements) in a process during which one or more mutually excluded resources (either hardware or software) must be accessed.

• The critical section problem is to ensure that two concurrent activities do not access shared data at the same time.

• A solution to the mutual exclusion problem must satisfy the following three requirements:

1. Mutual Exclusion
2. Progress
3. Bounded waiting (no starvation)

Page 8: Module 2.1:  Process Synchronization


Requirements for a Solution to the Critical-Section Problem

1. Mutual Exclusion. If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.

2. Progress. If no process is executing in its critical section and there exist some processes that wish to enter their critical section, then the selection of the processes that will enter the critical section next cannot be postponed indefinitely.

3. Bounded Waiting. A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

• Assume that each process executes at a nonzero speed

• No assumption concerning the relative speed of the n processes

Page 9: Module 2.1:  Process Synchronization


Methods for Mutual Exclusion

1. Disable interrupts (hardware solution)

2. Strict alternation and Peterson’s solution (software solution)

3. Switch variables (assume atomic read and write)

4. Locks (hardware solution with TSL or TAS)

5. Semaphores (software solution)

6. Critical Regions and Monitors (HLL solution)

Page 10: Module 2.1:  Process Synchronization


Disable Interrupts

process A                 process B
  ...                       ...
  disable interrupts        disable interrupts
  CS                        CS
  enable interrupts         enable interrupts

– Prevents scheduling during CS, since the timer interrupt is disabled.

– May hinder real-time response and cause delays

– All processes are excluded, even if they do not access the same variables

– This is sometimes necessary (to prevent further interrupts during interrupt handling); it is used by the kernel when updating its variables and lists, e.g. the ready and blocked lists.

Page 11: Module 2.1:  Process Synchronization


Lock Variables

• Not used in any system. It does not work properly.

• The idea is to have a lock variable guarding the CS:

– If the lock is 0, the process sets the lock to 1 and enters the CS

– If the lock is 1, the process waits until the lock becomes 0

• Has the same problem as any shared variable: two processes may both read the lock as 0 before either sets it to 1, and then both enter the CS (see the sketch below).
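A minimal sketch of this broken scheme (the variable name lock and the function names are illustrative, not from the slides); the comment marks the window in which both processes can see the lock as 0:

    int lock = 0;                  /* 0 = free, 1 = taken */

    void enter_cs(void)
    {
        while (lock == 1)          /* busy-wait while another process is inside */
            ;
        /* a context switch right here lets a second process also see lock == 0 */
        lock = 1;                  /* both processes may now set the lock and enter */
    }

    void leave_cs(void)
    {
        lock = 0;                  /* release the lock */
    }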

Page 12: Module 2.1:  Process Synchronization


Strict Alternation

Process A:

while (TRUE) {
    while (turn != A) ;   /* busy-wait until it is A's turn */
    CS;
    turn = B;
    ...
}

Process B:

while (TRUE) {
    while (turn != B) ;   /* busy-wait until it is B's turn */
    CS;
    turn = A;
    ...
}

turn is a shared variable, initially set to A.

Different CS's can be implemented using different switch variables.

Busy waiting is a waste of CPU cycles and causes the priority inversion problem. Priority inversion can occur if there are two processes H and L, with H to be run whenever it is ready: if it is L's turn and H busy-waits, the scheduler keeps running H, so L never gets the CPU to take its turn and hand it over, and H spins forever.

Danger of long blockage, since A and B strictly alternate, i.e., process A or B cannot run twice in a row.

We need a solution that does not require strict alternation.

Page 13: Module 2.1:  Process Synchronization


Peterson’s Solution

#define N     2                      /* 2 processes: 0 and 1 */
#define TRUE  1
#define FALSE 0

int interested[N] = {FALSE, FALSE};
int turn;

void enter_region(int process)
{
    int other;

    other = 1 - process;
    interested[process] = TRUE;  /* resolves the strict alternation */
    turn = process;              /* resolves simultaneous enter_region calls;
                                    only the last write to turn counts,
                                    i.e. turn is used to break ties! */
    while (turn == process && interested[other] == TRUE)
        ;                        /* busy-wait */
}

void leave_region(int process)
{
    interested[process] = FALSE;
}
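A minimal usage sketch (the shared counter and the worker function are illustrative): each of the two processes brackets its critical section with enter_region and leave_region, passing its own number (0 or 1).

    int counter = 0;               /* shared data protected by Peterson's algorithm */

    void worker(int process)       /* process is 0 or 1 */
    {
        enter_region(process);     /* entry protocol: may busy-wait */
        counter = counter + 1;     /* critical section */
        leave_region(process);     /* exit protocol */
    }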

Page 14: Module 2.1:  Process Synchronization


Peterson’s Solution (cont.)

• Properties:

– Complex and unclear

– Busy waiting

– Mutual exclusion is preserved

– Strict alternation is resolved

– Can be extended to n processes

Page 15: Module 2.1:  Process Synchronization


TSL or TAS Instruction

TSL: Test and Set Lock (or TAS: Test-and-Set) is implemented in HW, e.g. the Motorola 68000 microprocessor. The test/read and set/write bus cycles are done atomically (not interrupted).

enter_region:
    tsl r0, flag        ; copy flag to r0 and set flag to 1
    cmp r0, #0          ; was flag zero?
    jnz enter_region    ; if flag was non-zero, the lock was already set, so loop
    ret                 ; flag was zero: lock acquired, enter the critical region

leave_region:
    mov flag, #0        ; store 0 in flag: release the lock
    ret

If not supported by hardware, TAS can be implemented by disabling and enabling interrupts.

TAS can also be implemented using atomic swap(x,y).
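For comparison, a minimal sketch of the same lock written in C11, whose standard atomic_flag type provides an atomic test-and-set; the function names mirror the slide, and the use of C11 atomics is an illustration rather than part of the original:

    #include <stdatomic.h>

    static atomic_flag flag = ATOMIC_FLAG_INIT;   /* clear = lock is free */

    void enter_region(void)
    {
        /* atomic_flag_test_and_set atomically sets the flag and returns its
           previous value -- the TSL/TAS bus cycle expressed in portable C */
        while (atomic_flag_test_and_set(&flag))
            ;                                     /* busy-wait until the lock was free */
    }

    void leave_region(void)
    {
        atomic_flag_clear(&flag);                 /* store 0 in the flag */
    }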

Page 16: Module 2.1:  Process Synchronization


Properties

1. Busy waiting problem. Better to have the process blocked on IPC primitive (semaphore, event counter, message) and then awakened later.

2. Starvation is possible:

– If we have P1, P2, and P3, then with improper scheduling P1 and P2 may always execute and P3 never does; for example, P1 and P2 may have higher priority than P3, so P3 will starve.

– Do the other schemes have this problem?

3. Different locks may be used for different shared resources.

• Examples: (1) VAX 11, (2) B6500

• MIPS -- Load-Linked/Store Conditional (LL/SC)

• Pentium -- Compare and Exchange, Exchange, Fetch and Add

• SPARC -- Load Store Unsigned Byte (LDSTUB) in v9

• PowerPC -- Load Word and Reserve (lwarx)

Page 17: Module 2.1:  Process Synchronization


Semaphores

P, V          (Dijkstra ’65)
wait, signal  (Per Brinch Hansen)

The semaphore has a value that is invisible to the users and a queue of processes waiting to acquire the semaphore. Code for counting semaphores:

type semaphore = record
    value : integer;
    L     : list of process;
end

P(S): [ S.value := S.value - 1;
        if S.value < 0 then
            add this process to S.L;
            block;
        end if ]

V(S): [ S.value := S.value + 1;
        if S.value <= 0 then
            remove a process P from S.L;
            wakeup(P);    // place it on the ready queue
        end if ]

(The brackets [ ] indicate that each operation is executed atomically.)

Page 18: Module 2.1:  Process Synchronization


Properties of semaphore

S.value = 1

parbegin
    P1: ... P(S); CS1; V(S); ...
    P2: ... P(S); CS2; V(S); ...
    ...
    Pn: ... P(S); CSn; V(S); ...
parend

Properties

1. No busy waiting
2. May starve unless FCFS is used (scheduling is left to the implementer of the semaphore)
3. Can handle multiple users by proper initialization. Example: 3 tape drives (see the sketch below)
4. If S is either 1 or 0, it is called a binary semaphore or mutex. How can a counting semaphore be implemented using a mutex?
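A minimal sketch of item 3 using POSIX semaphores; the drive count of 3 comes from the slide's example, while the function names and the choice of the POSIX API are illustrative:

    #include <semaphore.h>

    sem_t tapes;                       /* counting semaphore guarding 3 tape drives */

    void init(void)
    {
        sem_init(&tapes, 0, 3);        /* initial value 3: up to 3 concurrent users */
    }

    void use_tape_drive(void)
    {
        sem_wait(&tapes);              /* P: blocks if all 3 drives are in use */
        /* ... use one of the drives ... */
        sem_post(&tapes);              /* V: release the drive */
    }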

Page 19: Module 2.1:  Process Synchronization


Code for Binary Semaphores

waitB(S):   if (S.value = 1) {
                S.value := 0;
            } else {
                place this process in S.L;
                block;
            }

signalB(S): if (S.L is empty) {
                S.value := 1;
            } else {
                remove a process P from S.L;
                wakeup(P);
            }

Page 20: Module 2.1:  Process Synchronization


How to implement a counting semaphore using mutex?

S:  counting semaphore to be implemented
S1: mutex = 1        /* protects the counter C */
S2: mutex = 0        /* blocked processes wait on S2 */
C:  integer          /* initialized to the counting semaphore's initial value */

P(S): P(S1);
      C := C - 1;
      if (C < 0) {
          V(S1);
          P(S2);
      }
      V(S1);

V(S): P(S1);
      C := C + 1;
      if (C <= 0)
          V(S2);
      else
          V(S1);

Note that when C <= 0, V(S) does not release S1; the awakened waiter, after returning from its P(S2), executes the final V(S1) in P(S). The mutex S1 is handed from the signaler to the awakened process, which keeps the wakeup atomic.

Page 21: Module 2.1:  Process Synchronization


More properties and examples

5. Can implement scheduling of activities using a precedence graph. Here we use semaphores for synchronizing different activities, not for resolving mutual exclusion. An activity is work done by a specific process. Initially the system creates all processes to do these specific activities. For example, the process that performs activity x does not start performing it until it is signaled (or told to) by the process performing activity y.

Example of process synchronization: router fault detection, fault logging, alarm reporting, and fault fixing.

1. Draw the process precedence graph
2. Write pseudocode for process synchronization using semaphores (a sketch follows below)
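A sketch for item 2, assuming one plausible precedence: detection must finish before logging and alarm reporting, and both of those must finish before fixing. The precedence itself, the semaphore names, and the use of POSIX semaphores are assumptions for illustration; the slide only names the four activities.

    #include <semaphore.h>

    /* one semaphore per edge of the assumed precedence graph,
       each initialized to 0 with sem_init(&s, 0, 0)            */
    sem_t det_to_log, det_to_alarm, log_done, alarm_done;

    void detection(void)
    {
        /* detect the fault ... */
        sem_post(&det_to_log);         /* V: logging may start           */
        sem_post(&det_to_alarm);       /* V: alarm reporting may start   */
    }

    void logging(void)
    {
        sem_wait(&det_to_log);         /* P: wait for detection          */
        /* log the fault ... */
        sem_post(&log_done);
    }

    void alarm_reporting(void)
    {
        sem_wait(&det_to_alarm);       /* P: wait for detection          */
        /* report the alarm ... */
        sem_post(&alarm_done);
    }

    void fixing(void)
    {
        sem_wait(&log_done);           /* P: wait for both predecessors  */
        sem_wait(&alarm_done);
        /* fix the fault ... */
    }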

6. Proper use can't be enforced by the compiler, e.g. (the second process has P and V in the wrong order):

    P(S)            V(S)
     .               .
     .               .
    V(S)            P(S)

e.g. with two semaphores S1 and S2:

    P1: P(S1)       P2: P(S2)
        P(S2)           P(S1)
        CS              CS
        V(S2)           V(S1)
        V(S1)           V(S2)

This is a deadlock situation.

Page 22: Module 2.1:  Process Synchronization


Classical problems

• The bounded buffer problem

• The readers and writers problems

• The dining philosophers problem

Page 23: Module 2.1:  Process Synchronization


The Producer-Consumer Problem

• bounded buffer (of size n)

• one set of processes (producers) writes to it

• one set of processes (consumers) reads from it

semaphores: full  = 0    /* counting semaphore: number of filled slots */
            empty = n    /* counting semaphore: number of empty slots  */
            mutex = 1    /* binary semaphore: protects the buffer      */

process Producer:
do forever
    /* produce */
    P(empty)
    P(mutex)
    /* add to buffer */
    V(mutex)
    V(full)
end

process Consumer:
do forever
    P(full)
    P(mutex)
    /* take from buffer */
    V(mutex)
    V(empty)
    /* consume */
end
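A compilable sketch of the same discipline using POSIX semaphores and a circular buffer; the buffer layout, item type, and function names are illustrative, while the P/V pattern is exactly the one above (full = 0, empty = n, mutex = 1):

    #include <semaphore.h>

    #define N 16                       /* buffer size */

    int   buffer[N];
    int   in = 0, out = 0;
    sem_t full, empty, mutex;          /* init: sem_init(&full, 0, 0);      */
                                       /*       sem_init(&empty, 0, N);     */
                                       /*       sem_init(&mutex, 0, 1);     */
    void producer(int item)
    {
        sem_wait(&empty);              /* P(empty): wait for a free slot    */
        sem_wait(&mutex);              /* P(mutex): lock the buffer         */
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&mutex);              /* V(mutex)                          */
        sem_post(&full);               /* V(full): one more filled slot     */
    }

    int consumer(void)
    {
        sem_wait(&full);               /* P(full): wait for a filled slot   */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);              /* V(empty): one more free slot      */
        return item;
    }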

Page 24: Module 2.1:  Process Synchronization


The Readers and Writers Problem

Shared data to be accessed in two modes: reading and writing. Any number of processes permitted to read at one time; writes must exclude all other operations.

Conflict matrix:

            Read   Write
    Read     Y       N
    Write    N       N

Intuitively:

Reader:
    when (#writers == 0) do
        #readers = #readers + 1
    <read>
    #readers = #readers - 1

Writer:
    when (#readers == 0 and #writers == 0) do
        #writers = 1
    <write>
    #writers = 0

Page 25: Module 2.1:  Process Synchronization


Semaphore Solution to Readers and Writers

Semaphores: mutex = 1    /* mutual excl. for updating readcount */
            wrt   = 1    /* mutual excl. writer                 */

int variable: readcount = 0

Reader:
    P(mutex)
    readcount = readcount + 1
    if readcount == 1 then P(wrt)    /* first reader locks out writers */
    V(mutex)
    <read>
    P(mutex)
    readcount = readcount - 1
    if readcount == 0 then V(wrt)    /* last reader lets writers back in */
    V(mutex)

Writer:
    P(wrt)
    <write>
    V(wrt)

Notes: wrt is also used by the first/last reader that enters/exits the critical section. This solution gives priority to readers, in that writers can be starved by a steady stream of readers.
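As a side note, POSIX threads package this pattern directly as a reader-writer lock; a minimal sketch (the lock variable and function names are illustrative):

    #include <pthread.h>

    pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;

    void reader(void)
    {
        pthread_rwlock_rdlock(&rw);   /* many readers may hold the lock at once */
        /* <read> */
        pthread_rwlock_unlock(&rw);
    }

    void writer(void)
    {
        pthread_rwlock_wrlock(&rw);   /* writers get exclusive access */
        /* <write> */
        pthread_rwlock_unlock(&rw);
    }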

Page 26: Module 2.1:  Process Synchronization


The Dining Philosophers Problem

• Five philosophers spend their lives thinking and eating.

• One simple solution is to represent each chopstick by a semaphore.

• P before picking it up & V after using it.

var chopstick: array[0..4] of semaphore = 1

philosopher i:

repeat
    P( chopstick[i] );
    P( chopstick[(i+1) mod 5] );
    ... eat ...
    V( chopstick[i] );
    V( chopstick[(i+1) mod 5] );
    ... think ...
forever

• Is deadlock possible?
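Yes: if all five philosophers pick up their left chopstick at the same time, each then waits forever for the right one. One common fix, sketched below with POSIX semaphores under the slide's numbering 0..4, is to let odd-numbered philosophers pick up their chopsticks in the opposite order, which breaks the circular wait (the use of POSIX semaphores and the function names are illustrative):

    #include <semaphore.h>

    sem_t chopstick[5];                        /* each initialized to 1 with sem_init */

    void philosopher(int i)                    /* i = 0..4 */
    {
        for (;;) {
            if (i % 2 == 0) {                  /* even: left chopstick first          */
                sem_wait(&chopstick[i]);
                sem_wait(&chopstick[(i + 1) % 5]);
            } else {                           /* odd: right chopstick first, which   */
                sem_wait(&chopstick[(i + 1) % 5]);   /* breaks the circular wait      */
                sem_wait(&chopstick[i]);
            }
            /* eat */
            sem_post(&chopstick[i]);
            sem_post(&chopstick[(i + 1) % 5]);
            /* think */
        }
    }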

Page 27: Module 2.1:  Process Synchronization


Concurrent Programming

• An OS consists of a large number of programs that execute asynchronously and cooperate.

• Traditionally, these programs were written in assembly language for the following reasons:

– High-level languages (HLL) did not provide mechanisms for writing machine-dependent code (such as device drivers).

– HLL did not provide the appropriate tools for writing concurrent programs.

– HLL for concurrent programs were not efficient.

• HLL for OS must provide facilities for synchronization and modularization.

• Two ways used by HLL:

– Critical Regions and Conditional Critical Regions

– Monitors

Page 28: Module 2.1:  Process Synchronization


Motivating examples

• P and V operations are better than shared variables but still susceptible to programming errors

•   P(S)                 P(S)
     .                    .
     .        ==>         .
    V(S)                 P(S)

•   P(S1)                P(S1)
     .                    .
    P(S2)                P(S2)
     .        ==>         .
     .                    .
    V(S2)                V(S1)
     .                    .
    V(S1)                V(S2)

Page 29: Module 2.1:  Process Synchronization


Critical Regions

• A higher-level programming language construct proposed in 1972 by Brinch Hansen and Hoare.

– if a variable is to be shared, it must be declared as such

– access to shared variables only in mutual exclusion

var a: shared int
var b: shared int

region a do
    -- access variable a --

Compiler generates equivalent code using P and V:

P(Sa)
    -- access variable a --
V(Sa)

Page 30: Module 2.1:  Process Synchronization


Critical Regions aren't perfect

Process 1:

    region a do
        region b do
            stmt1;

Process 2:

    region b do
        region a do
            stmt2;

If Process 1 enters region a while Process 2 enters region b, each then waits forever for the other's region: a deadlock.

Page 31: Module 2.1:  Process Synchronization


Conditional Critical Regions

• Critical regions are basically a mutex

• They are not easily adapted to general synchronization problems, i.e. those requiring a counting semaphore

• Hoare, again in 1972, proposed conditional critical regions:

    region X when B do S

– X will be accessed in mutual exclusion in S

– the process is delayed until B becomes true

Page 32: Module 2.1:  Process Synchronization


The Producer-consumer problem

var buffer: shared record
    pool: array[0..n-1] of item;
    count, in, out: integer = 0;
end

Producer:

region buffer when count < n do
begin
    pool[in] := item_produced;
    in := (in + 1) mod n;
    count := count + 1;
end

Consumer:

region buffer when count > 0 do
begin
    item_consumed := pool[out];
    out := (out + 1) mod n;
    count := count - 1;
end

Page 33: Module 2.1:  Process Synchronization


Monitors

• A monitor is a shared data object together with a set of operations which manipulate it.

• To enforce mutual exclusion, at most one process may execute operations defined for the data object at any given time.

• All uses of shared variables are governed by monitors.

– Support data abstraction (hide implementation details)

– Only one process may execute a monitor's procedure at a time

– data type “condition” for synchronization (can be waited on or signaled within a monitor procedure)

– Two operations on “condition” variables (see the sketch below):

• wait: forces the caller to be delayed; mutual exclusion is released; there is a hidden queue of waiters.

• signal: one waiting process is resumed, if there are waiters.
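POSIX threads provide the same two ingredients directly: a mutex for the monitor's implicit mutual exclusion and pthread_cond_t for condition variables. A minimal monitor-style sketch of a one-slot buffer (the buffer, its names, and the choice of pthreads are illustrative):

    #include <pthread.h>

    /* Monitor state: the lock plays the role of the monitor's implicit
       mutual exclusion; nonempty/nonfull are condition variables.      */
    static pthread_mutex_t lock     = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t  nonfull  = PTHREAD_COND_INITIALIZER;
    static int slot, has_item = 0;

    void put(int item)                     /* monitor "procedure entry" */
    {
        pthread_mutex_lock(&lock);
        while (has_item)                   /* wait: releases the lock while blocked */
            pthread_cond_wait(&nonfull, &lock);
        slot = item;
        has_item = 1;
        pthread_cond_signal(&nonempty);    /* signal: resume one waiter, if any */
        pthread_mutex_unlock(&lock);
    }

    int get(void)
    {
        pthread_mutex_lock(&lock);
        while (!has_item)
            pthread_cond_wait(&nonempty, &lock);
        has_item = 0;
        pthread_cond_signal(&nonfull);
        pthread_mutex_unlock(&lock);
        return slot;
    }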

Page 34: Module 2.1:  Process Synchronization


Semaphore using monitors

type semaphore = monitor
    var busy: boolean;
        nonbusy: condition;

    procedure entry P;
    begin
        if busy then nonbusy.wait fi;
        busy := true
    end {P}

    procedure entry V;
    begin
        busy := false;
        nonbusy.signal
    end {V}

begin
    busy := false
end {monitor}

What could be other ways to implement semaphores?

Solving the Dining Philosophers problem using monitors is covered in the textbook.