
CHAPTER 3: Higher-Level Synchronization and Communication

Contents:
– 3.1 Shared memory methods
– 3.2 Distributed synchronization and communication
– 3.3 Other classic synchronization problems

Shortcomings of semaphores and events:
– They do not support elegant structuring of concurrent programs (they are implemented at a low level)
– Verifying the behavior of programs that use them is difficult

Monitors

The principles of abstract data types:
– A data type
– A set of operations

The monitor construct:
– Access to the resource (the critical section) is possible only via one of the monitor procedures
– The monitor procedures are mutually exclusive
– Condition variables are used to manage communication and synchronization among processes

Condition variables

• c.wait: causes the executing process to be suspended and placed on a queue associated with the condition variable c
• c.signal: wakes up one of the processes waiting on c, placing it on a queue of processes waiting to reenter the monitor

Properties of condition variables

• No value is associated with a condition variable
• A condition variable refers to a specific event, state of a computation, or assertion
• Condition variables are used only inside monitors

Example: monitor solution to the bounded-buffer problem

monitor bounded_buffer {
    char buffer[n];
    int nextin = 0, nextout = 0, full_cnt = 0;
    condition notempty, notfull;

    Deposit(char c) {
        if (full_cnt == n) notfull.wait;
        buffer[nextin] = c;
        nextin = (nextin + 1) % n;
        full_cnt = full_cnt + 1;
        notempty.signal;
    }

    Remove(char c) {
        if (full_cnt == 0) notempty.wait;
        c = buffer[nextout];
        nextout = (nextout + 1) % n;
        full_cnt = full_cnt - 1;
        notfull.signal;
    }
}
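A rough sketch of how this monitor could be expressed with POSIX threads: a mutex stands in for the implicit monitor lock and two pthread condition variables play the roles of notempty and notfull. The buffer size N and the names deposit/remove_item are illustrative, not from the original; and because pthread condition variables have signal-and-continue semantics, the waits use while loops rather than the if tests of the Hoare-style monitor above.

#include <pthread.h>

#define N 16                                   /* illustrative buffer size */

static char buffer[N];
static int nextin = 0, nextout = 0, full_cnt = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;   /* plays the role of the monitor lock */
static pthread_cond_t notempty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t notfull  = PTHREAD_COND_INITIALIZER;

void deposit(char c) {
    pthread_mutex_lock(&lock);                 /* "enter the monitor" */
    while (full_cnt == N)                      /* wait until there is room */
        pthread_cond_wait(&notfull, &lock);
    buffer[nextin] = c;
    nextin = (nextin + 1) % N;
    full_cnt = full_cnt + 1;
    pthread_cond_signal(&notempty);            /* wake one waiting consumer */
    pthread_mutex_unlock(&lock);               /* "leave the monitor" */
}

char remove_item(void) {
    char c;
    pthread_mutex_lock(&lock);
    while (full_cnt == 0)                      /* wait until there is data */
        pthread_cond_wait(&notempty, &lock);
    c = buffer[nextout];
    nextout = (nextout + 1) % N;
    full_cnt = full_cnt - 1;
    pthread_cond_signal(&notfull);             /* wake one waiting producer */
    pthread_mutex_unlock(&lock);
    return c;
}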

Priority waits

c.wait(p)
– c: the condition variable on which the process is to be suspended
– p: an integer expression defining a priority

When the condition c is signaled and there is more than one process waiting, the one that specified the lowest value of p is resumed.

monitor alarm_clock {
    int now = 0;
    condition wakeup;

    Wakeme(int n) {
        int alarmsetting;
        alarmsetting = now + n;
        while (now < alarmsetting)
            wakeup.wait(alarmsetting);
        wakeup.signal;    /* in case more than one process is to wake up at the same time */
    }

    Tick() {
        now = now + 1;
        wakeup.signal;
    }
}
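A pthreads sketch of the same idea, assuming a logical clock driven by calls to tick(); the function names are illustrative. Pthread condition variables cannot express priority waits, so a single broadcast replaces the chain of signals in the monitor version (where each awakened process wakes the next), and every sleeper simply re-checks its own alarm time.

#include <pthread.h>

static int now = 0;                            /* logical clock */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t wakeup = PTHREAD_COND_INITIALIZER;

void wakeme(int n) {
    pthread_mutex_lock(&lock);
    int alarmsetting = now + n;
    while (now < alarmsetting)                 /* sleep until the clock catches up */
        pthread_cond_wait(&wakeup, &lock);
    pthread_mutex_unlock(&lock);
}

void tick(void) {                              /* called once per clock tick */
    pthread_mutex_lock(&lock);
    now = now + 1;
    pthread_cond_broadcast(&wakeup);           /* wake all sleepers; each re-checks its alarm time */
    pthread_mutex_unlock(&lock);
}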

Protected types

• Implicit wait at the beginning of a procedure
• Implicit signal at the end of a procedure
• Mutual exclusion among a protected type's procedures
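Protected types are an Ada construct; as a rough C analogy only, the implicit wait can be pictured as re-evaluating an entry's barrier when a caller enters, and the implicit signal as a broadcast when a procedure leaves the object. The barrier "count > 0" and the names entry_take/proc_put below are invented for illustration.

#include <pthread.h>

static int count = 0;                          /* state of the "protected object" */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t barrier = PTHREAD_COND_INITIALIZER;

void entry_take(void) {                        /* entry with the barrier "count > 0" */
    pthread_mutex_lock(&lock);
    while (count == 0)                         /* implicit wait until the barrier holds */
        pthread_cond_wait(&barrier, &lock);
    count = count - 1;
    pthread_mutex_unlock(&lock);
}

void proc_put(void) {                          /* ordinary protected procedure */
    pthread_mutex_lock(&lock);
    count = count + 1;
    pthread_cond_broadcast(&barrier);          /* implicit signal on exit: waiters re-evaluate the barrier */
    pthread_mutex_unlock(&lock);
}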

Distributed synchronization and communication

• Message-based communication

• Procedure-based communication

• Distributed mutual exclusion

Applicable settings:
• Centralized systems, where process isolation and encapsulation are desirable
• Distributed systems, where processes may reside on different nodes in a network

Message-based communication

Some fundamental questions:
• When a message is emitted, must the sending process wait until the message has been accepted by the receiver, or can it continue processing immediately after emission?

• What should happen when a receive is issued and there is no message waiting?

• Must the sender name exactly one receiver to which it wishes to transmit a message or can messages be simultaneously sent to a group of receiver processes?

• Must the receiver specify exactly one sender from which it wishes to accept a message or can it accept messages arriving from any member of a group of senders?

Named-channel system

Syntax: send(channel, var); receive(channel, var)

The problem with a blocking receive and explicit naming: it does not permit the process executing the receive to wait selectively for the arrival of one of several possible requests.

Nondeterministic selective input: when (C) S
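A minimal sketch of a blocking named channel in C, assuming a one-slot channel; the type and function names (channel, channel_send, channel_receive) are illustrative. Send blocks until the previous message has been consumed, and receive blocks until a message is waiting.

#include <pthread.h>

typedef struct {
    int value;
    int has_value;                             /* 1 while a message is sitting in the channel */
    pthread_mutex_t lock;
    pthread_cond_t sent, received;
} channel;

void channel_init(channel *ch) {
    ch->has_value = 0;
    pthread_mutex_init(&ch->lock, NULL);
    pthread_cond_init(&ch->sent, NULL);
    pthread_cond_init(&ch->received, NULL);
}

void channel_send(channel *ch, int v) {        /* blocking send */
    pthread_mutex_lock(&ch->lock);
    while (ch->has_value)                      /* wait until the previous message is consumed */
        pthread_cond_wait(&ch->received, &ch->lock);
    ch->value = v;
    ch->has_value = 1;
    pthread_cond_signal(&ch->sent);
    pthread_mutex_unlock(&ch->lock);
}

int channel_receive(channel *ch) {             /* blocking receive */
    pthread_mutex_lock(&ch->lock);
    while (!ch->has_value)                     /* wait until a message arrives */
        pthread_cond_wait(&ch->sent, &ch->lock);
    int v = ch->value;
    ch->has_value = 0;
    pthread_cond_signal(&ch->received);
    pthread_mutex_unlock(&ch->lock);
    return v;
}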

Procedure-based communication

• Asymmetric RPC

• Symmetric RPC

client: q.f(params)

server: accept f(params) S

Rendezvous

Case 1: process p issues the call q.f() before q reaches its accept f() statement; the calling process p is delayed (waits) until q accepts the call and executes the body S.

Case 2: process q reaches accept f() before p issues q.f(); the called process q is delayed (waits) until the call arrives, and then executes S.

Ada's select statement

• Mutual exclusion: only one of the embedded accept statements is executed
• Nondeterminism: the choice among eligible accept statements follows a fair internal policy

select {
    [when B1:]
        accept E1(...) S1;
    or
    ...
    [when Bn:]
        accept En(...) Sn;
    [else R]
}

Distributed mutual exclusion

• Centralized controller
  – relies on the correct operation of the controller
  – potential performance bottleneck
• Fully distributed
  – large amount of message passing
  – requires accurate timestamps
  – difficulty of managing node or process failures
• Token ring

Token ring

process controller[i] {
    while (1) {
        accept Token;
        select {
            accept Request_CS() { busy = 1; }
            else null;
        }
        if (busy)
            accept Release_CS() { busy = 0; }
        controller[(i+1) % n].Token;
    }
}

process p[i] {
    while (1) {
        controller[i].Request_CS();
        CSi;
        controller[i].Release_CS();
        programi;
    }
}
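A thread-based sketch of the token-ring idea, simplified so that the token is a shared index rather than a circulating message and waits at each node until that node has finished its critical section; N, request_cs, release_cs, and the fixed number of rounds are illustrative.

#include <pthread.h>
#include <stdio.h>

#define N 5

static int token_holder = 0;                   /* index of the node currently holding the token */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t token_moved = PTHREAD_COND_INITIALIZER;

static void request_cs(int i) {                /* wait until the token reaches node i */
    pthread_mutex_lock(&lock);
    while (token_holder != i)
        pthread_cond_wait(&token_moved, &lock);
    pthread_mutex_unlock(&lock);
}

static void release_cs(int i) {                /* pass the token to the next node on the ring */
    pthread_mutex_lock(&lock);
    token_holder = (i + 1) % N;
    pthread_cond_broadcast(&token_moved);
    pthread_mutex_unlock(&lock);
}

static void *node(void *arg) {
    int i = *(int *)arg;
    for (int round = 0; round < 3; round++) {
        request_cs(i);
        printf("node %d in critical section\n", i);   /* CSi */
        release_cs(i);
        /* programi: non-critical work */
    }
    return NULL;
}

int main(void) {
    pthread_t t[N];
    int ids[N];
    for (int i = 0; i < N; i++) { ids[i] = i; pthread_create(&t[i], NULL, node, &ids[i]); }
    for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
    return 0;
}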

Classic synchronization problems

• The Readers/Writers problem

• The Dining Philosophers Problem

The Readers/Writers problem

• Writers are permitted to modify the state of the resource and must have exclusive access;

• Readers can only interrogate the resource state and, consequently, may share the resource concurrently with an unlimited number of other readers;

• Fairness policies must be included.

Writer – Writer: exclusive
Writer – Reader: exclusive
Reader – Reader: concurrent

cobegin
    Reader: while (1) {
        reading;
    }
    //
    Writer: while (1) {
        writing;
    }
coend

semaphore writelock = 1;
int readcount = 0;
semaphore countlock = 1;

Reader entry:
    P(countlock);
    if (readcount == 0)
        P(writelock);
    readcount++;
    V(countlock);

Reader exit:
    P(countlock);
    readcount--;
    if (readcount == 0)
        V(writelock);
    V(countlock);

Writer entry:
    P(writelock);

Writer exit:
    V(writelock);
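The same entry and exit protocols written against POSIX semaphores, as a sketch: sem_wait and sem_post correspond to P and V, and the wrapper names start_read/end_read/start_write/end_write are illustrative.

#include <semaphore.h>

static sem_t writelock;                        /* held by the writer, or collectively by the readers */
static sem_t countlock;                        /* protects readcount */
static int readcount = 0;

void rw_init(void) {
    sem_init(&writelock, 0, 1);
    sem_init(&countlock, 0, 1);
}

void start_read(void) {
    sem_wait(&countlock);                      /* P(countlock) */
    if (readcount == 0)
        sem_wait(&writelock);                  /* first reader locks out writers */
    readcount++;
    sem_post(&countlock);                      /* V(countlock) */
}

void end_read(void) {
    sem_wait(&countlock);
    readcount--;
    if (readcount == 0)
        sem_post(&writelock);                  /* last reader lets writers in */
    sem_post(&countlock);
}

void start_write(void) { sem_wait(&writelock); }   /* writers need exclusive access */
void end_write(void)   { sem_post(&writelock); }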

Fairness Policies

• A new reader should not be permitted to start during a read sequence if there is a writer waiting

• All readers waiting at the end of a write operation should have priority over the next writer

monitor readers_writers {
    int read_cnt = 0, writing = 0;
    condition OK_to_read, OK_to_write;

    Start_read() {
        if (writing || !empty(OK_to_write))
            OK_to_read.wait;
        read_cnt = read_cnt + 1;
        OK_to_read.signal;
    }

    End_read() {
        read_cnt = read_cnt - 1;
        if (read_cnt == 0)
            OK_to_write.signal;
    }

    Start_write() {
        if ((read_cnt != 0) || writing)
            OK_to_write.wait;
        writing = 1;
    }

    End_write() {
        writing = 0;
        if (!empty(OK_to_read))
            OK_to_read.signal;
        else
            OK_to_write.signal;
    }
}

The Dining Philosophers Problem

[Figure: five philosophers P1–P5 seated around a table with a bowl of spaghetti, one fork between each pair of neighbors]

Concerns about the problem

• Deadlock: A situation must be prevented where each philosopher obtains one of the forks and is blocked forever waiting for the other to be available

• Fairness: It should not be possible for one or more philosophers to conspire in such a way that another philosopher is prevented indefinitely from acquiring its forks

• Concurrency: When one philosopher, e.g., p1, is eating, only its two immediate neighbors (p5 and p2) should be prevented from eating. The others (p3 and p4) should not be blocked; one of these must be able to eat concurrently with p1

semaphore f[5] = {1, 1, 1, 1, 1};

cobegin
    P(i): while (1) {
        think(i);
        P(f[i]); P(f[(i+1)%5]);      // grab_forks(i);
        eat(i);
        V(f[i]); V(f[(i+1)%5]);      // return_forks(i);
    }
    //
    ...
coend


semaphore f[5] = {1, 1, 1, 1, 1};

cobegin
    P(i): while (1) {
        think(i);
        P(f[i]); P(f[(i+1)%5]);      // grab_forks(i);
        eat(i);
        V(f[i]); V(f[(i+1)%5]);      // return_forks(i);
    }
    //
    ...
    P(j): while (1) {
        think(j);
        P(f[(j+1)%5]); P(f[j]);      // grab_forks(j);
        eat(j);
        V(f[(j+1)%5]); V(f[j]);      // return_forks(j);
    }
    //
    ...
coend


semaphore f[5] = {1, 1, 1, 1, 1};

cobegin
    P(i): while (1) {
        think(i);
        if (i % 2 == 1) {
            P(f[i]); P(f[(i+1)%5]);      // grab_forks(i): odd, left fork first
        } else {
            P(f[(i+1)%5]); P(f[i]);      // grab_forks(i): even, right fork first
        }
        eat(i);
        V(f[i]); V(f[(i+1)%5]);          // return_forks(i);
    }
    //
    ...
coend
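A self-contained pthreads sketch of this asymmetric strategy: odd-numbered philosophers take their left fork first, even-numbered ones their right fork first, which breaks the circular wait. The mutex-per-fork representation and the fixed number of rounds are illustrative.

#include <pthread.h>
#include <stdio.h>

#define N 5

static pthread_mutex_t fork_lock[N];           /* one mutex per fork, like semaphore f[5] = {1,...} */

static void *philosopher(void *arg) {
    int i = *(int *)arg;
    int left = i, right = (i + 1) % N;
    for (int round = 0; round < 3; round++) {
        /* think(i) */
        if (i % 2 == 1) {                      /* odd: left fork first */
            pthread_mutex_lock(&fork_lock[left]);
            pthread_mutex_lock(&fork_lock[right]);
        } else {                               /* even: right fork first */
            pthread_mutex_lock(&fork_lock[right]);
            pthread_mutex_lock(&fork_lock[left]);
        }
        printf("philosopher %d eats\n", i);    /* eat(i) */
        pthread_mutex_unlock(&fork_lock[left]);
        pthread_mutex_unlock(&fork_lock[right]);
    }
    return NULL;
}

int main(void) {
    pthread_t t[N];
    int ids[N];
    for (int i = 0; i < N; i++) pthread_mutex_init(&fork_lock[i], NULL);
    for (int i = 0; i < N; i++) { ids[i] = i; pthread_create(&t[i], NULL, philosopher, &ids[i]); }
    for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
    return 0;
}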


FAQs

• Rewrite the program below using cobegin/coend statements. Make sure that it exploits maximum parallelism but produces the same result as the sequential execution. Hint: first draw the process flow graph, where each line of the code corresponds to an edge; start with the last line.

W = X1 * X2;
V = X3 * X4;
Y = V * X5;
Z = V * X6;
Y = W * Y;
Z = W * Z;
A = Y + Z;

[Process flow graph, from S to E: W = X1*X2 runs in parallel with V = X3*X4; after V, Y = V*X5 and Z = V*X6 run in parallel; once W and these are done, Y = W*Y and Z = W*Z run in parallel; finally A = Y + Z.]

cobegin {
    W = X1 * X2;
    //
    V = X3 * X4;
    cobegin {
        Y = V * X5;
        //
        Z = V * X6;
    } coend
} coend
cobegin {
    Y = W * Y;
    //
    Z = W * Z;
} coend
A = Y + Z;
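A sketch of the same structure with pthreads, where pthread_create/pthread_join play the roles of cobegin/coend; the input values and thread function names are invented for illustration.

#include <pthread.h>

static double X1 = 1, X2 = 2, X3 = 3, X4 = 4, X5 = 5, X6 = 6;   /* illustrative inputs */
static double W, V, Y, Z, A;

static void *calc_W(void *p)  { W = X1 * X2; return NULL; }
static void *calc_Y1(void *p) { Y = V * X5;  return NULL; }
static void *calc_Z1(void *p) { Z = V * X6;  return NULL; }
static void *calc_Y2(void *p) { Y = W * Y;   return NULL; }
static void *calc_Z2(void *p) { Z = W * Z;   return NULL; }

int main(void) {
    pthread_t tw, ty, tz;

    /* first cobegin: W in parallel with (V, then Y and Z in parallel) */
    pthread_create(&tw, NULL, calc_W, NULL);
    V = X3 * X4;
    pthread_create(&ty, NULL, calc_Y1, NULL);
    pthread_create(&tz, NULL, calc_Z1, NULL);
    pthread_join(ty, NULL);
    pthread_join(tz, NULL);
    pthread_join(tw, NULL);                    /* coend: all branches above have finished */

    /* second cobegin: Y = W*Y in parallel with Z = W*Z */
    pthread_create(&ty, NULL, calc_Y2, NULL);
    pthread_create(&tz, NULL, calc_Z2, NULL);
    pthread_join(ty, NULL);
    pthread_join(tz, NULL);                    /* coend */

    A = Y + Z;                                 /* final step */
    return 0;
}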

• Generalize the last software solution to the mutual exclusion problem (Section 2.3.1) to work with three processes.

int c1 = 0, c2 = 0, c3 = 0, will_wait;

cobegin
    P1: while (1) {
        c1 = 1;
        will_wait = 1;
        while ((c2 || c3) && (will_wait == 1));
        CS1;
        c1 = 0;
        program1;
    }
    //
    ...
coend

int c1 = 0, c2 = 0, c3 = 0, will_wait;

cobegin
    P1: while (1) {
        c1 = 1;
        will_wait = 1; while (c2 && (will_wait == 1));
        will_wait = 1; while (c3 && (will_wait == 1));
        CS1;
        c1 = 0;
        program1;
    }
    //
    P2: while (1) {
        c2 = 1;
        will_wait = 2; while (c1 && (will_wait == 2));
        will_wait = 2; while (c3 && (will_wait == 2));
        CS2;
        c2 = 0;
        program2;
    }
    //
    P3: while (1) {
        c3 = 1;
        will_wait = 3; while ((c1 || c2) && (will_wait == 3));
        CS3;
        c3 = 0;
        program3;
    }
coend