Introduction to Concurrency

Concurrency: execute two or more pieces of code "at the same time."

Why? No choice:
- Geographically distributed data
- Interoperability of different machines
- A piece of code must "serve" many other client processes
- To achieve reliability

By choice:
- To achieve speedup
- Sometimes makes programming easier (e.g. UNIX pipes)
Possibilities for Concurrency

Architecture -> style of programming:
- Uniprocessor with an I/O channel, I/O processor, or DMA -> multiprogramming (a multiple-process system)
- Network of uniprocessors -> distributed programming
- Multiple CPUs -> parallel programming
Definitions

Concurrent process execution can be:
- interleaved, or
- physically simultaneous

Interleaved: multiprogramming on a uniprocessor

Physically simultaneous: uni- or multiprogramming on a multiprocessor

Process, thread, or task: schedulable unit of computation

Granularity: process "size," or the computation-to-communication ratio
- Too small: excessive overhead
- Too large: less concurrency
Precedence Graph

Consider writing a program as a set of tasks.

Precedence graph: specifies the execution ordering among tasks.

S0: A := X - Y
S1: B := X + Y
S2: C := Z + 1
S3: C := A - B
S4: W := C + 1

[Figure: precedence graph in which S0, S1, and S2 all precede S3, and S3 precedes S4.]

Parallelizing compilers for computers with vector processors build dependency graphs.
Cyclic Precedence Graphs

What does the following graph represent?

[Figure: a cycle S1 -> S2 -> S3 -> S1.]

A cycle in a precedence graph represents repeated execution: S1, S2, and S3 run in order, again and again, i.e. a loop.
Examples of Concurrency in Uniprocessors

Example 1: UNIX pipes
Motivations:
- fast to write code
- fast to execute

Example 2: Buffering
Motivation:
- required when two asynchronous processes must communicate

Example 3: Client/server model
Motivation:
- geographically distributed computing
Operating System Issues

Synchronization: what primitives should the OS provide?

Communication: what primitives should the OS provide to interface with communication protocols?

Hardware support: needed to implement OS primitives.

Remote execution: what primitives should the OS provide?
- Remote procedure call (RPC)
- Remote command shell

Sharing address spaces: makes programming easier.

Lightweight threads: can process creation be as cheap as a procedure call?
Parallel Language Constructs

FORK and JOIN

FORK L: starts parallel execution at the statement labeled L and at the statement following the FORK.

JOIN Count: recombines 'Count' concurrent computations:

    Count := Count - 1;
    if (Count > 0) then quit;

JOIN is an atomic operation.
Definition: Atomic Operation

If I am a process executing on a processor, and I execute an atomic operation, then all other processes executing on this or any other processor:
- can see the state of the system before I execute or after I execute,
- but cannot see any intermediate state while I am executing.

Example: bank teller

/* Joe has $1000, split equally between savings and checking accounts */
1. Subtract $100 from Joe's savings account
2. Add $100 to Joe's checking account

Other processes should never read Joe's balances mid-transfer and find that his accounts total only $900.
Concurrency Conditions

Let Si denote a statement.

Read set of Si: R(Si) = {a1, a2, ..., an}, the set of all variables referenced in Si.

Write set of Si: W(Si) = {b1, b2, ..., bm}, the set of all variables changed by Si.

C := A - B
R(C := A - B) = {A, B}
W(C := A - B) = {C}

scanf("%d", &A)
R(scanf("%d", &A)) = {}
W(scanf("%d", &A)) = {A}
Bernstein's Conditions

The following conditions must hold for two statements S1 and S2 to execute concurrently with valid results:

1) R(S1) ∩ W(S2) = {}
2) W(S1) ∩ R(S2) = {}
3) W(S1) ∩ W(S2) = {}

These are called the Bernstein conditions.
Fork and Join Example #1

S1: A := X + Y
S2: B := Z + 1
S3: C := A - B
S4: W := C + 1

[Figure: precedence graph in which S1 and S2 both precede S3, and S3 precedes S4.]

    Count := 2;
    FORK L1;
    A := X + Y;       -- S1
    Goto L2;
L1: B := Z + 1;       -- S2
L2: JOIN Count;
    C := A - B;       -- S3
    W := C + 1;       -- S4
Structured Parallel Constructs

PARBEGIN / PAREND

PARBEGIN: sequential execution splits off into several concurrent sequences.
PAREND: the parallel computations merge.

PARBEGIN
    Statement 1;
    Statement 2;
    ...
    Statement N;
PAREND;

Example:

PARBEGIN
    Q := C mod 25;
    Begin
        N := N - 1;
        T := N / 5;
    End;
    Proc1(X, Y);
PAREND;
Fork and Join Example #2

[Figure: precedence graph over S1..S7: S1 precedes S2 and S3; S2 precedes S4; S4 precedes S5 and S6; S3, S5, and S6 all precede S7.]

    S1;
    Count := 3;
    FORK L1;
    S2;
    S4;
    FORK L2;
    S5;
    Goto L3;
L2: S6;
    Goto L3;
L1: S3;
L3: JOIN Count;
    S7;

Up to three tasks may execute concurrently.
Parbegin / Parend Examples

Begin
    PARBEGIN
        A := X + Y;
        B := Z + 1;
    PAREND;
    C := A - B;
    W := C + 1;
End

Begin
    S1;
    PARBEGIN
        S3;
        Begin
            S2;
            S4;
            PARBEGIN
                S5;
                S6;
            PAREND;
        End;
    PAREND;
    S7;
End;
Comparison

Unfortunately, the structured concurrent statement is not powerful enough to model all precedence graphs.

[Figure: the precedence graph of Example #2, modified so that S6 depends on both S3 and S4 (as the Fork and Join code that follows realizes). This cross dependency cannot be expressed by any nesting of PARBEGIN/PAREND.]
Comparison (cont'd)

Fork and Join code for the modified precedence graph:

    S1;
    Count1 := 2;
    FORK L1;
    S2;
    S4;
    Count2 := 2;
    FORK L2;
    S5;
    Goto L3;
L1: S3;
L2: JOIN Count1;
    S6;
L3: JOIN Count2;
    S7;
Comparison (cont'd)

There is no corresponding structured-construct code for this graph.

However, other synchronization techniques (such as semaphores) can supplement the structured constructs.

Also, not all precedence graphs arise in real-world problems.
Overview

System calls:
- fork()
- wait()
- pipe()
- write()
- read()

Examples
Process Creation: fork()

NAME
    fork() - create a new process

SYNOPSIS
    #include <sys/types.h>
    #include <unistd.h>

    pid_t fork(void);

RETURN VALUE
    success: the parent receives the child's pid; the child receives 0
    failure: -1
fork() system call - example

#include <sys/types.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    printf("[%ld] parent process id: %ld\n", (long)getpid(), (long)getppid());
    fork();
    printf("\n[%ld] parent process id: %ld\n", (long)getpid(), (long)getppid());
    return 0;
}
fork() system call - sample output

[17619] parent process id: 12729

[17619] parent process id: 12729

[2372] parent process id: 17619

The first line runs once; after fork(), both the parent (17619) and the child (2372, whose parent is 17619) execute the second printf.
fork() - program structure

#include <sys/types.h>
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>

int main(void)
{
    pid_t pid;
    if ((pid = fork()) > 0) {
        /* parent */
    } else if (pid == 0) {
        /* child */
    } else {
        /* cannot fork */
    }
    exit(0);
}
wait() system call

wait() - suspend the calling process until one of its children finishes executing.

SYNOPSIS
    #include <sys/types.h>
    #include <sys/wait.h>

    pid_t wait(int *stat_loc);

    stat_loc points to a location that receives the child's exit status (it may be NULL if the status is not wanted).

RETURN VALUE
    success: the child's pid
    failure: -1, and errno is set
wait() - program structure

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    pid_t childPID;
    if ((childPID = fork()) == 0) {
        /* child */
    } else {
        /* parent */
        wait(0);
    }
    exit(0);
}
pipe() system call

pipe() - create a read-write pipe that may later be used to communicate with a process we'll fork off.

SYNOPSIS
    #include <unistd.h>

    int pipe(int pfd[2]);

PARAMETER
    pfd is an array of 2 integers that will hold the two file descriptors used to access the pipe.

RETURN VALUE
    0 - success
    -1 - error
pipe() - structure

/* first, define an array to store the two file descriptors */
int pipes[2];

/* now, create the pipe */
int rc = pipe(pipes);
if (rc == -1) {
    /* pipe() failed */
    perror("pipe");
    exit(1);
}

If the call to pipe() succeeded, a pipe will be created, pipes[0] will contain its read file descriptor, and pipes[1] will contain its write file descriptor.
write() system call

write() - write data to a file or other object identified by a file descriptor.

SYNOPSIS
    #include <unistd.h>

    ssize_t write(int fildes, const void *buf, size_t nbyte);

PARAMETERS
    fildes is the file descriptor;
    buf is the base address of the memory area the data is copied from;
    nbyte is the amount of data to copy.

RETURN VALUE
    The actual amount of data written; if this differs from nbyte, something has gone wrong.
read() system call

read() - read data from a file or other object identified by a file descriptor.

SYNOPSIS
    #include <unistd.h>

    ssize_t read(int fildes, void *buf, size_t nbyte);

PARAMETERS
    fildes is the file descriptor;
    buf is the base address of the memory area into which the data is read;
    nbyte is the maximum amount of data to read.

RETURN VALUE
    The actual amount of data read from the file. The file offset is advanced by the amount read.