Lecture 5: Concurrency: Mutual Exclusion and Synchronization


Page 1: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

Lecture 5: Concurrency: Mutual Exclusion and Synchronization

Page 2: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

An Example

• Processes P1 and P2 are running this same procedure and have access to the same variable “a”

• Processes can be interrupted anywhere

• Assume P1 is interrupted after user input and P2 executes entirely

• Then the character echoed by P1 will be the one read by P2, not the one P1 read itself!

static char a;

void echo(){ cin >> a; cout << a;}
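• A minimal C++ sketch (not from the slides) that makes this interleaving observable: two threads run the echo logic on the shared static variable a. The name echo_from and the use of string streams instead of keyboard input are illustrative assumptions so the example is self-contained; the data race on a is deliberate (and formally undefined behaviour), purely to show the problem.

    #include <iostream>
    #include <sstream>
    #include <thread>

    static char a;                       // shared by both threads, as on the slide

    // Hypothetical variant of echo() that reads from a given stream.
    void echo_from(std::istream& in) {
        in >> a;                         // "user input": a may be overwritten here
        std::cout << a;                  // may echo the other thread's character
    }

    int main() {
        std::istringstream in1("x"), in2("y");
        std::thread p1(echo_from, std::ref(in1));
        std::thread p2(echo_from, std::ref(in2));
        p1.join();
        p2.join();
        std::cout << '\n';               // possible outputs: "xy", "yx", "xx", "yy"
    }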

Page 3: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

Problems with Concurrent Execution

• Concurrent processes (or threads) often need to share data (maintained either in shared memory or files) and resources

• If there is no controlled access to shared data, some processes may obtain an inconsistent view of data

• Non-determinism: the action performed by concurrent processes will depend on the order in which their execution is interleaved

Page 4: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

Race Conditions

• Situations where processes access the same data concurrently and the outcome of execution depends on the particular order in which the access takes place are called race conditions

• How can processes coordinate (or synchronise) in order to guard against race conditions?
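• The classic way to see a race condition concretely (a sketch, not from the slides): two threads increment a shared counter with no controlled access. Because counter++ is a read-modify-write, increments can be lost and the final value changes from run to run.

    #include <iostream>
    #include <thread>

    int counter = 0;                     // shared data, no controlled access

    void work() {
        for (int i = 0; i < 100000; ++i)
            ++counter;                   // read-modify-write: updates can be lost
    }

    int main() {
        std::thread t1(work), t2(work);
        t1.join();
        t2.join();
        // Expected 200000, but the outcome depends on the interleaving
        // (and the data race is undefined behaviour in strict C++).
        std::cout << counter << '\n';
    }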

Page 5: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

The Critical Section Problem

• When a process executes code that manipulates shared data (or resource), we say that the process is in its critical section (for that shared data)

• The execution of critical sections must be mutually exclusive: at any time, only one process is allowed to execute in its critical section (even with multiple CPUs)

• Each process must request the permission to enter a critical section

Page 6: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

The Critical Section Problem

• The section of code implementing the request to enter a CS is called the entry section

• The critical section might be followed by an exit section

• The remaining code is the remainder section

• The critical section problem: design a protocol that, if executed by concurrent processes, ensures that their actions do not depend on the order in which their execution is interleaved (possibly on many processors)

Page 7: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

Framework for Critical Section Solution Analysis

• Each process executes at nonzero speed, but no assumption on the relative speed of n processes

• General structure of a process:

• Many CPUs may execute concurrently, but memory hardware prevents simultaneous access to the same memory location

• No assumption about the order of interleaved execution

• Basically, a mutual exclusion (ME) solution must specify the entry and exit sections

repeat
  entry section
  critical section
  exit section
  remainder section
forever

Page 8: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

Requirements for a valid solution to the critical section problem

• Mutual Exclusion

– At any time, at most one process can be in its critical section (CS)

• Progress

– Only processes that are not executing in their RS can participate in the decision of which process will enter its CS next.

– This selection cannot be postponed indefinitely

• Hence, we must have no deadlock

Page 9: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

Requirements for a valid solution to the critical section problem (cont.)

• Bounded Waiting

– After a process has made a request to enter its CS, there is a bound on the number of times the other processes are allowed to enter their CSs before that request is granted

• starvation otherwise

Page 10: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

What about process failures?

• If all 3 criteria (ME, progress, bounded waiting) are satisfied, then a valid solution will provide robustness against failure of a process in its remainder section (RS)

– Because failure in RS is just like having an infinitely long RS

• However, no valid solution can provide robustness against a process failing in its critical section (CS)

– A process Pi that fails in its CS does not signal that fact to other processes: for them Pi is still in its CS

Page 11: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

Definitions

• critical section (CS): a section of code that reads/writes shared data

• race condition: potential for interleaved execution of a critical section by multiple threads => results are non-deterministic

• mutual exclusion (ME): synchronization mechanism to avoid race conditions by ensuring exclusive execution of critical sections

• deadlock: permanent blocking of threads

• starvation: execution but no progress

Page 12: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

Solutions for Mutual Exclusion

• software reservation: a thread must register its intent to enter its CS and then wait until no other thread has registered a similar intention before proceeding

• spin-locks using memory-interlocked instructions: require special hardware to ensure that a given location can be read, modified and written without interruption (e.g., TST: a test-and-set instruction)

• OS-based mechanisms for ME: semaphores, monitors, message passing, lock files

Page 13: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

Software Reservation

• works both for uniprocessors and multiprocessors, but has overheads and memory requirements

• multiple algorithms: Dekker and Peterson

• Lamport (common case: 2 loads + 5 stores):

start:
  b[i] = true;
  x = i;
  if (y != 0) {               /* contention */
    b[i] = false;
    await (y == 0);
    goto start;
  }
  y = i;
  if (x != i) {               /* collision */
    b[i] = false;
    for j = 1 to N: await (b[j] == false);
    if (y != i) {
      await (y == 0);
      goto start;
    }
  }
  CRITICAL SECTION
  y = 0;
  b[i] = false;
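• The slide also mentions Dekker and Peterson; as a smaller illustration of software reservation, here is a two-thread sketch of Peterson's algorithm in C++. An assumption of this sketch: the shared variables are std::atomic with the default sequentially consistent ordering, since plain variables would not give the ordering the algorithm relies on, and lock/unlock are illustrative names.

    #include <atomic>

    std::atomic<bool> flag[2] = {{false}, {false}};  // flag[i]: thread i wants in
    std::atomic<int>  turn{0};                       // whose turn it is to yield

    void lock(int i) {            // i is 0 or 1; the other thread is 1 - i
        int other = 1 - i;
        flag[i].store(true);      // register intent to enter the CS
        turn.store(other);        // give priority to the other thread
        while (flag[other].load() && turn.load() == other)
            ;                     // busy-wait until it is safe to proceed
    }

    void unlock(int i) {
        flag[i].store(false);     // drop the reservation on exit
    }

Thread 0 brackets its critical section with lock(0) / unlock(0), thread 1 with lock(1) / unlock(1); like all software reservation, the waiting here is busy waiting.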

Page 14: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

Drawbacks of Software Solutions

• Processes requesting to enter in their critical section are busy waiting (consuming processor time needlessly)

• If critical sections are long, it would be more efficient to block processes that are waiting

Page 15: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

Hardware Solutions: Interrupt Disabling

• On a uniprocessor, mutual exclusion is preserved but efficiency of execution is degraded

– while a process is in its CS, execution cannot be interleaved with other processes in their RS

• On a multiprocessor, mutual exclusion is not preserved

– disabling interrupts only affects the local CPU: the CS is not interrupted there, but processes on other CPUs can still enter it

• Generally not an acceptable solution

Process Pi:
  repeat
    disable interrupts
    critical section
    enable interrupts
    remainder section
  forever

Page 16: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

Hardware Solutions: Special Machine Instructions

• Normally, an access to a memory location excludes other access to that same location

• Extension: designers have proposed machine instructions that perform 2 actions atomically (indivisibly) on the same memory location (e.g., reading and writing)

• The execution of such an instruction is also mutually exclusive (even with multiple CPUs)

• They can be used to provide mutual exclusion but need to be complemented by other mechanisms to satisfy the other 2 requirements of the CS problem (and avoid starvation and deadlock)

Page 17: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

Test-and-Set Instruction

• A C++ description of test-and-set:

• An algorithm that uses test&set for mutual exclusion:

bool testset(int& i) {
  if (i == 0) {
    i = 1;
    return true;
  } else {
    return false;
  }
}

Process Pi:
  repeat
    repeat {} until testset(b);
    CS
    b := 0;
    RS
  forever
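• The testset above is pseudocode for an atomic machine instruction. As a sketch of the same spin-lock in standard C++ (an assumption of this example, not part of the lecture), std::atomic_flag::test_and_set provides the atomic read-and-set; note that it returns the *previous* value, so the loop condition is inverted relative to the slide.

    #include <atomic>

    std::atomic_flag b = ATOMIC_FLAG_INIT;   // plays the role of shared variable b

    void enter_cs() {
        // test_and_set atomically sets the flag and returns its old value;
        // spin (busy-wait) until we are the one that changed it from 0 to 1.
        while (b.test_and_set(std::memory_order_acquire))
            ;
    }

    void exit_cs() {
        b.clear(std::memory_order_release);  // b := 0
    }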

Page 18: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

Test-and-Set Instruction (cont.)

• Shared variable b is initialized to 0

• Only the first Pi that sets b enters CS

• Mutual exclusion is preserved

• If Pi enters its CS, the other Pj are busy waiting

• When Pi exits its CS, the selection of the next Pj that enters the CS is arbitrary

– No bounded waiting

– Starvation is possible

• Some processors (e.g., the Pentium) provide an atomic xchg(a,b) instruction that swaps the contents of a and b.

– xchg(a,b) suffers from the same drawbacks as test-and-set

Page 19: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

Using xchg for Mutual Exclusion

• Shared variable b is initialized to 0

• Each Pi has a local variable k

• The only Pi that can enter CS is the one that finds b=0

• This Pi excludes all the other Pj by setting b to 1

Process Pi:
  repeat
    k := 1;
    repeat xchg(k, b) until k = 0;
    CS
    b := 0;
    RS
  forever
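• The same loop can be sketched with std::atomic<int>::exchange, which behaves like the xchg described above (enter_cs/exit_cs are illustrative names, not from the slides):

    #include <atomic>

    std::atomic<int> b{0};          // shared variable b, initialized to 0

    void enter_cs() {
        int k = 1;                  // local variable k
        do {
            k = b.exchange(k);      // atomically swap k with b
        } while (k != 0);           // until we swapped a 0 out of b
    }

    void exit_cs() {
        b.store(0);                 // b := 0
    }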

Page 20: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

Mutual Exclusion Machine Instructions

• Advantages

– Applicable to any number of processes on either a single processor or multiple processors sharing main memory

– It is simple and easy to verify

– It can be used to support multiple critical sections

Page 21: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

Mutual Exclusion Machine Instructions

• Disadvantages

– Busy-waiting consumes processor time

– Starvation is possible when a process leaves a critical section and more than one process is waiting.

– Deadlock

• If a low-priority process holds the critical region and a higher-priority process needs it, the higher-priority process will obtain the processor only to busy-wait for the critical region, while the low-priority process never runs to release it

Page 22: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

OS Solutions: Semaphores

• Synchronization tool (provided by the OS) that does not require busy waiting

• A semaphore S is an integer variable that, apart from initialization, can only be accessed through 2 atomic and mutually exclusive operations:

– wait(S)

– signal(S)

• Avoids busy waiting

– when a process has to wait, the OS will put it in a blocked queue of processes waiting for that semaphore

Page 23: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

Semaphores

• Internally, a semaphore is a record (structure):

type semaphore = record
       count: integer;
       queue: list of process
     end;
var S: semaphore;

• When a process must wait for a semaphore S, it is blocked and put on the semaphore’s queue

• The signal operation removes one process from the queue (according to a fair policy, such as FIFO) and puts it in the list of ready processes

Page 24: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

Semaphore Operations

wait(S):
  S.count--;
  if (S.count < 0) {
    block this process;
    place this process in S.queue;
  }

signal(S):
  S.count++;
  if (S.count <= 0) {
    remove a process P from S.queue;
    place P on the ready list;
  }

• S.count must be initialized to a nonnegative value (depending on the application)
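• A sketch of these two operations in C++, using a mutex and a condition variable to stand in for the blocking and ready queues the OS would manage (an assumption of this example; a real kernel implementation makes wait/signal atomic by other means). The extra wakeups counter is not on the slide: it pairs each signal with exactly one released waiter and guards against spurious wakeups.

    #include <condition_variable>
    #include <mutex>

    class Semaphore {
        int count;                        // S.count (may go negative, as above)
        int wakeups = 0;                  // pending signals (see lead-in)
        std::mutex m;                     // makes wait()/signal() mutually exclusive
        std::condition_variable queue;    // stands in for S.queue
    public:
        explicit Semaphore(int init) : count(init) {}   // nonnegative initial value

        void wait() {                                   // wait(S)
            std::unique_lock<std::mutex> lk(m);
            --count;
            if (count < 0) {                            // block this process
                queue.wait(lk, [this] { return wakeups > 0; });
                --wakeups;
            }
        }

        void signal() {                                 // signal(S)
            std::lock_guard<std::mutex> lk(m);
            ++count;
            if (count <= 0) {                           // release one waiter
                ++wakeups;
                queue.notify_one();
            }
        }
    };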

Page 25: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

Semaphores: Observations

• S.count >=0

– the number of processes that can execute wait(S) without being blocked is S.count

• S.count<0

– the number of processes waiting on S is |S.count|

• Atomicity and mutual exclusion

– no two processes can be executing wait(S) and signal(S) (on the same S) at the same time (even with multiple CPUs)

– The code defining wait(S) and signal(S) must be executed in critical sections

Page 26: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

Semaphores: Implementation

• Key observation: the critical sections defined by wait(S) and signal(S) are very short (typically 10 instructions)

• Uniprocessor solutions:

– disable interrupts during these operations (i.e., for a very short period)

– does not work on a multiprocessor machine

• Multiprocessor solutions:

– use software or hardware mutual exclusion solutions in the OS

– the amount of busy waiting is small

Page 27: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

Using Semaphores for Solving Critical Section Problems

• For n processes

• Initialize S.count to 1

• Only 1 process is allowed into CS (mutual exclusion)

• To allow k processes into CS, we initialize S.count to k

Process Pi:
  repeat
    wait(S);
    CS
    signal(S);
    RS
  forever
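• With C++20 this pattern can be written directly with std::counting_semaphore (a sketch; process() and the loop bound are placeholders, and the initializer plays the role of S.count):

    #include <semaphore>
    #include <thread>
    #include <vector>

    std::counting_semaphore<4> S(1);   // S.count = 1: at most one process in its CS
                                       // (initialize to k instead to admit k processes)

    void process() {
        for (int round = 0; round < 3; ++round) {
            S.acquire();               // wait(S)   -- entry section
            /* critical section */
            S.release();               // signal(S) -- exit section
            /* remainder section */
        }
    }

    int main() {
        std::vector<std::thread> procs;
        for (int i = 0; i < 4; ++i) procs.emplace_back(process);
        for (auto& t : procs) t.join();
    }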

Page 28: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

Using Semaphores to Synchronize Processes

• We have two processes: P1 and P2

• Problem: Statement S1 in P1 must be performed before statement S2 in P2

• Solution: define a semaphore “synch”

• Initialize synch to 0

• P1 code: S1; signal(synch);

• P2 code: wait(synch); S2;
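• A sketch of this ordering pattern in C++20, with std::binary_semaphore playing the role of synch (the printed strings stand in for statements S1 and S2):

    #include <iostream>
    #include <semaphore>
    #include <thread>

    std::binary_semaphore synch{0};    // initialize synch to 0

    void P1() {
        std::cout << "S1\n";           // statement S1
        synch.release();               // signal(synch)
    }

    void P2() {
        synch.acquire();               // wait(synch): blocks until P1 has done S1
        std::cout << "S2\n";           // statement S2
    }

    int main() {
        std::thread t2(P2), t1(P1);    // start order does not matter
        t1.join(); t2.join();          // S1 is always printed before S2
    }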

Page 29: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

Binary Semaphores

• Similar to general (counting) semaphores except that “count” is Boolean valued

• Counting semaphores can be implemented using binary semaphores

• More difficult to use than counting semaphores (eg: they cannot be initialized to an integer k > 1)

Page 30: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

Binary Semaphore Operations

waitB(S):
  if (S.value = 1) {
    S.value := 0;
  } else {
    block this process;
    place this process in S.queue;
  }

signalB(S):
  if (S.queue is empty) {
    S.value := 1;
  } else {
    remove a process P from S.queue;
    place P on the ready list;
  }

Page 31: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

Problems with Semaphores

• Semaphores are a powerful tool for enforcing mutual exclusion and for coordinating processes

• Problem: wait(S) and signal(S) are scattered among several processes

• It is difficult to understand their effects

• Usage must be correct in all processes

• One bad (or malicious) process can cause the entire collection of processes to fail

Page 32: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

Monitors

• Are high-level language constructs that provide equivalent functionality to semaphores but are easier to control

• Found in many concurrent programming languages

• Concurrent Pascal, Modula-3, uC++, Java...

• Can be implemented using semaphores

Page 33: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

Monitors

• Is a software module containing:

• one or more procedures

• an initialization sequence

• local data variables

• Characteristics:

• local variables accessible only by monitor’s procedures

• a process enters the monitor by invoking one of its procedures

• only one process can be in the monitor at any one time

Page 34: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

Monitors

• The monitor ensures mutual exclusion

• no need to program this constraint explicitly

• Shared data are protected by placing them in the monitor

• The monitor locks the shared data on process entry

• Process synchronization is done using condition variables, which represent conditions a process may need to wait for before executing in the monitor

Page 35: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

Condition Variables

• Local to the monitor (accessible only within the monitor)

• Can be accessed and changed only by two functions:

• cwait(a): blocks execution of the calling process on condition (variable) a

• the process can resume execution only if another process executes csignal(a)

• csignal(a): resumes execution of some process blocked on condition (variable) a

• If several such processes exist: choose any one

• If no such process exists: do nothing

Page 36: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

Monitors

• Waiting processes are either in the entrance queue or in a condition queue

• A process puts itself into condition queue cn by issuing cwait(cn)

• csignal(cn) brings into the monitor one process in condition cn queue

• csignal(cn) blocks the calling process and puts it in the urgent queue (unless csignal is the last operation of the monitor procedure)
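• C++ has no monitor construct, so as a sketch (an assumption of this example) a class can play the role of a monitor: a mutex enforces “one process inside at a time”, std::condition_variable plays the role of the condition variables, and a predicated wait replaces cwait/csignal. Note this gives “signal-and-continue” semantics rather than the urgent-queue, signal-and-wait behaviour described above; the one-slot buffer is only an illustration.

    #include <condition_variable>
    #include <mutex>

    class BufferMonitor {                     // one-slot buffer written as a monitor
        std::mutex entry;                     // only one thread inside at a time
        std::condition_variable not_full;     // condition: the slot is free
        std::condition_variable not_empty;    // condition: the slot holds data
        int slot = 0;
        bool full = false;                    // local data, visible only inside
    public:
        void put(int x) {                     // monitor procedure
            std::unique_lock<std::mutex> lk(entry);        // enter the monitor
            not_full.wait(lk, [this] { return !full; });   // cwait(not_full)
            slot = x;
            full = true;
            not_empty.notify_one();           // csignal(not_empty)
        }                                     // returning releases the monitor

        int get() {                           // monitor procedure
            std::unique_lock<std::mutex> lk(entry);
            not_empty.wait(lk, [this] { return full; });   // cwait(not_empty)
            full = false;
            not_full.notify_one();            // csignal(not_full)
            return slot;
        }
    };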

Page 37: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

The Dining Philosophers Problem

• A classical synchronization problem

• 5 philosophers who only eat and think

• Each needs to use 2 forks for eating

• There are only 5 forks

• Illustrates the difficulty of allocating resources among processes without deadlock and starvation

Page 38: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

The Dining Philosophers Problem

• Each philosopher is a process

• One semaphore per fork:

– Fork: array[0..4] of semaphores

– Initialization: fork[i].count:=1 for i:=0..4

• A first attempt:

• Deadlock if each philosopher starts by picking his left fork!

Process Pi:
  repeat
    think;
    wait(fork[i]);
    wait(fork[(i+1) mod 5]);
    eat;
    signal(fork[(i+1) mod 5]);
    signal(fork[i]);
  forever

Page 39: Lecture 5: Concurrency: Mutual Exclusion and Synchronization

The Dining Philosophers Problem

• Idea: allow at most 4 philosophers at a time to sit at the table and try to eat

• Then at least one philosopher can always eat, even when the other 3 each hold one fork

• Solution: use another semaphore T to limit at 4 the number of philosophers “sitting at the table”

• Initialize: T.count:=4

• Solution with monitors?

Process Pi:
  repeat
    think;
    wait(T);
    wait(fork[i]);
    wait(fork[(i+1) mod 5]);
    eat;
    signal(fork[(i+1) mod 5]);
    signal(fork[i]);
    signal(T);
  forever
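• A sketch of this solution in C++20, with std::binary_semaphore for the forks and a std::counting_semaphore for T (the Fork wrapper, the loop bound, and the think/eat placeholders are assumptions of this example):

    #include <semaphore>
    #include <thread>
    #include <vector>

    // binary_semaphore has no default constructor, so wrap it to build the array.
    struct Fork { std::binary_semaphore s{1}; };   // fork[i].count := 1

    Fork fork_[5];
    std::counting_semaphore<4> T(4);               // at most 4 philosophers at the table

    void philosopher(int i) {
        for (int round = 0; round < 10; ++round) {
            /* think */
            T.acquire();                           // wait(T): sit at the table
            fork_[i].s.acquire();                  // wait(fork[i])
            fork_[(i + 1) % 5].s.acquire();        // wait(fork[(i+1) mod 5])
            /* eat */
            fork_[(i + 1) % 5].s.release();        // signal(fork[(i+1) mod 5])
            fork_[i].s.release();                  // signal(fork[i])
            T.release();                           // signal(T): leave the table
        }
    }

    int main() {
        std::vector<std::thread> ph;
        for (int i = 0; i < 5; ++i) ph.emplace_back(philosopher, i);
        for (auto& t : ph) t.join();
    }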