concurrency notes
7/30/2019 Concurrency Notes
Concurrency and Mutual Exclusion
CS 350 10/12/03
The need for mutual exclusion in concurrent processing.
! Overview and an example:
Concurrent processing implies asynchronism: the outcome of
concurrent processing may be non-deterministic if certain situations
are not prevented:
! Example of a Race Condition
This is an example of a print spooler, taken from
Modern Operating Systems, by Andrew Tanenbaum, pp. 33-34. It
illustrates an apparently mundane situation, and an obvious but naive
approach which will deliver erroneous results.
o A process wanting to print a file enters the file name in a print
queue to await printing.
o A print daemon periodically checks the queue for files to be
printed; if there are any, it chooses one, removes it from the queue, and prints it.
o Assume the queue is an infinitely large array whose slots are
indexed 0, 1, 2, ...
o There are two shared variables:
out - points to the next file in the queue to be printed
in - points to the next free slot in the queue
o Assume slots 0 through 3 are empty (files already printed) and
slots 4 through 6 are full (they have names of files yet to be printed);
the next free slot is 7.
o Assume processes A and B both want to print a file and the
following sequence takes place:
CS350 Fall 2003 Principles of concurrency page 1
- Process A reads in and stores the value 7 in a local
variable next_free_slot.
- A context switch now takes place (time slice) and process B
switches into active execution. Process B also reads in and also
gets a 7, so it stores the name of its file in slot 7 and updates in
to be an 8; it then goes off and does something else.
- Process A now becomes active again and, using
next_free_slot, finds a 7 there and writes its file name in slot 7,
erasing the name that process B just put there. It then puts an 8
in in.
- Process B never gets its file printed.
Mutual Exclusion: Software Approaches
We consider two attempts to solve the mutual exclusion problem:
All such solutions suffer from busy waiting, and will indefinitely block the other process if one process crashes in the critical section.
First attempt:

int turn = 0; // initial value for turn

/* process 0 */               /* process 1 */
while (1) {                   while (1) {
    while (turn != 0) no-op;      while (turn != 1) no-op;
    /* critical section */        /* critical section */
    turn = 1;                     turn = 0;
}                             }

Strict alternation between two processes via use of the shared variable turn.
Mutual exclusion achieved
Two problems:
Performance is determined by the least active (slowest) process.
If one process crashes outside of the critical section, the other will wait
forever for the CS.
Second attempt:

boolean flag[2] = {false, false}; // shared

/* process 0 */               /* process 1 */
while (1) {                   while (1) {
    flag[0] = true;               flag[1] = true;
    while (flag[1]) no-op;        while (flag[0]) no-op;
    /* critical section */        /* critical section */
    flag[0] = false;              flag[1] = false;
}                             }
A crash outside the CS will not indefinitely block the other process.
Mutual exclusion achieved
Problem: Deadlock caused by race to set flag[i]
Correct Software Solutions:
Dekker's Algorithm:
Combines the turn variable and the flag[] variable.
Avoids the mutual-courtesy problem (each process endlessly deferring to the other).
turn guarantees mutual exclusion.
flag[] breaks the strict alternation problem: if your flag is set to 1 and the other
flag is 0, then enter no matter what turn is.
In the event that a race causes flag[0] = flag[1] = 1, then there is no deadlock
to get to the CS because turn will allow one of the processes to enter.
flag[i] is an indication of intent to enter.
See flow chart below for details.
[Flow chart for Dekker's algorithm not reproduced here. Initial values: flag[0] = flag[1] = 0]
Peterson's Algorithm:
Interpret flag[i] as indicating that process i is interested in entering the CS.
turn resolves the simultaneity conflicts, as with Dekker's.
Does the same as Dekker's, but is simpler.
See flow chart below for details:
[Flow chart for Peterson's algorithm not reproduced here. Initial values: flag[0] = flag[1] = 0]
This is as good as it gets with software solutions. The nemesis of all
software solutions is:
Busy waiting
Let's see what hardware and the OS have to offer.
Mutual exclusion: hardware support
Interrupt disabling:
while (1) {
    /* disable interrupts */;
    /* critical section */;
    /* enable interrupts */;
    /* remainder section */;
}
Mutual exclusion guaranteed.
High cost: execution is inefficient, since a uniprocessor cannot interleave
processes while interrupts are disabled.
Will not work on a multiprocessing architecture: with physical parallelism,
synchronizing interrupts across processors is a problem.
test-and-set instruction
See Stallings page 214 and fig. 5.5a
In all cases i == 1 means the CS is locked - the door is locked.
boolean testset(int i) {  // alternate definition; flow chart, next page
    if (i == 0) { i = 1; return false; }
    else return true;
}

Note: i is the shared lock word, and the hardware executes the whole
instruction atomically. If i == 1 (locked), i stays 1 and true (1) is returned;
if i == 0, i is changed to 1, false (0) is returned, and the caller enters the CS.
Another way of coding the same function (1st definition flow chart, next
page):

boolean testset(int i) {
    if (i == 0) { i = 1; return true; }
    else return false;  // the complement of i
}
Flow Chart: for the alternate definition of test&set(i)

if (i == 0) { i = 1; return 0; }
if (i == 1) return 1;

// or more simply:
j = i;
i = 1;
return j;
exchange or swap instruction
void exchange(int register, int memory) {
    int temp;
    temp = memory;
    memory = register;
    register = temp;
}
Flow chart: (not reproduced in these notes)
Semaphores
A synchronization mechanism provided by the operating system based on a
signaling and blocking scheme.
Will enforce mutual exclusion and also regulate the operation of processes
based on the values of certain global numerical variables (counting variables).
Some properties of semaphores: A signal is transmitted via semaphore s by
executing a primitive called signal(s). To receive a signal via semaphore s, a
process executes the primitive wait(s). A wait call could cause a process to
block, depending on the value of the semaphore s. A signal could release a
blocked process, depending on the value of the semaphore upon which it is
waiting.
Definition of a semaphore:
A semaphore, s, is a data structure consisting of an integer and a queue.
The semaphore's integer may be initialized to a nonnegative value.
The wait(s) operation decrements the value of the semaphore integer.
If the semaphore value becomes negative, the process P calling the wait is
blocked and goes on the queue of the semaphore. P is said to be waiting on the
semaphore s.
The signal operation increments the semaphore integer; if the value is not
positive, then a process blocked in the queue of this semaphore is released.
Why it works: the signal(s) and wait(s) operations are atomic. These
operations cannot be interrupted and are considered indivisible.
Below are the specifications (description) of wait and signal using C
pseudocode.
Definition of the general semaphore primitives, from Stallings Figure 5.6.
struct semaphore {
    int count;        // value of semaphore
    queueType queue;  // list of processes waiting on this semaphore
} s;                  // s is the declared semaphore variable

void wait(s) {
    s.count--;              // decrement semaphore value first
    if (s.count < 0) {      // test for negative
        place this process in s.queue;
        block this process;
    }
}

void signal(s) {
    s.count++;              // increment semaphore value first
    if (s.count <= 0) {     // test for non-positive
        remove a process P from s.queue;
        place process P on ready list;
    }
}
How General Semaphores are Used:
There are two common types of general semaphores: mutual exclusion
semaphores, controlling entry to a critical section, and counting semaphores,
controlling the action of processes that access a limited renewable resource,
such as reading and writing a finite memory buffer.
Mutual Exclusion Semaphores
Such semaphores are commonly referred to as mutex semaphores. The
mutex can be a general semaphore initialized to one (1). When the value is a
1, the door is open and any process can enter its critical section. The first
process to enter will do a wait(mutex) and cause the mutex to decrement to a 0
before entering. The next process trying to enter will also do a wait(mutex),
causing the mutex to decrement to a -1, and will block in the queue. Any
other processes trying to enter will also block on wait after further
decrementing the semaphore. All such blocked processes are added to the
mutex queue. When the process in the critical section exits, it will do a
signal(mutex) and this will free up one of the blocked processes in the queue
and also increment the semaphore by one. If the semaphore is negative, its
absolute value is the number of processes waiting on this semaphore.
Counting Semaphores
This case is best described by the bounded buffer example. Assume that the
buffer is a circular array of, say, 10 storage cells. Let a semaphore called
unused be the count of the number of unused slots in the buffer, and another
semaphore called used be the count of the number of used slots in the buffer.
unused is initialized to 10 and used is initialized to 0. Each time an item is
produced in the buffer unused is decremented by 1 and used is incremented
by 1. Each time an item is consumed from the buffer, used is decremented
by one and unused is incremented by 1. Decrementing is done by doing a
wait on the semaphore, and incrementing is done by doing a signal on the
semaphore. If the waits are done before accessing the buffer (critical section),
then the producer (waiting on unused) will block if unused is zero (buffer full),
and the consumer (waiting on used) will block if used is zero (buffer empty).
signals are done after accessing the buffer and when the producer signals on
used, a blocked consumer will be released and the used count will be
incremented. The consumer will signal on unused after accessing the buffer
(consuming) and a blocked producer will be released and the unused count will
be incremented.
Unless the buffer is full or empty, it is possible for both the producer and
consumer to be in the critical section at the same time. Mutual exclusion can
also be achieved by using a mutual exclusion semaphore as above. See the
bounded buffer example below.
Binary Semaphores
Binary Semaphores have a value which can only vary between 0 and 1. If the
value is 1, the wait will not block. If the value is 0, the wait will block and the
blocked process will be queued on the semaphore. The signal operation will
set the value to 1 if the queue is empty; otherwise it will leave it at 0. The
implementation is given below:
struct binary_semaphore {
    enum {zero, one} value;
    queueType queue;
};

void waitB(binary_semaphore s) {
    if (s.value == 1) s.value = 0;  // lock the door & let process enter CS
    else {
        place this process in s.queue;
        block this process;
    }
}   // no indication of size of queue ... compare to general semaphore
    // wait unconditionally leaves b-semaphore value at 0

void signalB(binary_semaphore s) {  // s.value == 0 is necessary but not sufficient
    if (s.queue is empty) s.value = 1;  // condition for empty queue
    else {
        remove a process from s.queue;
        place this process in the ready queue;
    }
}   // if only 1 proc in queue, leave value at 0, since moved proc will go to CS
An example on how mutual exclusion is achieved with a semaphore:
// pseudocode for process i of n processes
semaphore mutex = 1;

void mutual_exclusion(i) {
    while (1) {
        wait(mutex);
        < critical section >;
        signal(mutex);
        < remainder section >;
    }
}
Example of three processes using general or counting semaphores and a
binary semaphore, based on Fig. 5.10 of Stallings (for binary semaphores):
The producer/consumer problem:
// bounded buffer
int SIZE;                 // size of buffer
semaphore mutex = 1;      // this is only pseudocode; initializing
semaphore used = 0;       // a semaphore is actually a system call
semaphore unused = SIZE;
// Note:
//   used   means the number of occupied slots in the buffer
//   unused means the number of empty slots in the buffer

void producer() {
    while (1) {
        produce_item();   // non-critical (remainder) section
        wait(unused);
        wait(mutex);
        append_item();    // critical section
        signal(mutex);
        signal(used);
    }
}

void consumer() {
    while (1) {
        wait(used);
        wait(mutex);
        take();           // critical section
        signal(mutex);
        signal(unused);
        consume();        // non-critical (remainder) section
    }
}
A bounded buffer scenario for a buffer of size 3:
In this example mutual exclusion will be ignored, and only the counting
semaphores will be illustrated (producer starts first).
producer event    unused   used    consumer event    comments
--------------    ------   ----    --------------    --------
initial value        3       0     in ready queue    queue empty
wait(unused)         2       0     in ready queue
signal(used)         2       1     in ready queue
wait(unused)         1       1     in ready queue
signal(used)         1       2     in ready queue
wait(unused)         0       2     in ready queue
signal(used)         0       3     in ready queue    queue full
wait(unused)        -1       3     in ready queue    producer blocks
blocked             -1       2     wait(used)
in ready queue       0       2     signal(unused)    producer released
in ready queue       0       1     wait(used)
in ready queue       1       1     signal(unused)
signal(used)         1       2     in ready queue
wait(unused)         0       2     in ready queue
signal(used)         0       3     in ready queue    queue full
in ready queue       0       2     wait(used)
in ready queue       1       2     signal(unused)
in ready queue       1       1     wait(used)
in ready queue       2       1     signal(unused)
in ready queue       2       0     wait(used)
in ready queue       3       0     signal(unused)    queue empty
in ready queue       3      -1     wait(used)        consumer blocks
wait(unused)         2      -1     blocked           consumer still blocked
signal(used)         2       0     in ready queue    consumer released