Chapter 5: Synchronization (CMPUT 379, University of Alberta)

Page 1:

Chapter 5: Synchronization

Page 2:

Chapter 5: Synchronization

Start of Lecture: January 29, 2014


Page 3:

Chapter 5: Synchronization

Reminders

• No reminders; hope Assignment 1 is going well

• Any questions or comments?


Page 4:

Chapter 5: Synchronization

Critical section within OS code

• I mentioned that non-preemptive kernels are essentially free of race conditions in kernel mode (e.g. when executing system calls) on a single-processor system

• system calls are for the most part the only way to switch from user mode into kernel mode: a user-mode process causes a trap with a syscall, then the CPU switches to kernel mode and jumps to the system-call code

• Why are non-preemptive kernels race condition free?

• Do you think the Linux distribution running in the labs is preemptive or non-preemptive?

• Non-preemptive kernels cannot be used for real-time scheduling or multiprocessor systems

• Dealing with race conditions and critical sections is similar for kernel code and user code; we will not distinguish between them from now on


Page 5:

Chapter 5: Synchronization

Key to handling interleaved instructions: atomic operations

• The general issue with shared variables is that instructions are interleaved, resulting in inconsistencies

• We could usually solve the critical-section problem by making the critical section an atomic operation, so that it either executes completely as a series of non-interrupted instructions or not at all

• Is this a good approach for critical sections? Why or why not?

• If we do not want to make critical sections atomic, what else can we make atomic to ensure mutual exclusion?


Page 6:

Chapter 5: Synchronization

Locking using atomic operations

• To ensure we can put locks around a critical section, hardware provides atomic operations


5.18 Silberschatz, Galvin and Gagne ©2013 Operating System Concepts – 9th Edition

test_and_set Instruction

Definition:

    boolean test_and_set(boolean *target) {
        boolean rv = *target;
        *target = TRUE;
        return rv;
    }

1. Executed atomically
2. Returns the original value of the passed parameter
3. Sets the new value of the passed parameter to TRUE

5.20 Silberschatz, Galvin and Gagne ©2013 Operating System Concepts – 9th Edition

compare_and_swap Instruction

Definition:

    int compare_and_swap(int *value, int expected, int new_value) {
        int temp = *value;
        if (*value == expected)
            *value = new_value;
        return temp;
    }

1. Executed atomically
2. Returns the original value of the passed parameter value
3. Sets the variable value to the passed parameter new_value, but only if *value == expected; that is, the swap takes place only under this condition
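To make the idea concrete, here is a minimal spinlock sketch in C11, where atomic_flag_test_and_set plays the role of the test_and_set instruction defined above; the spin_lock/spin_unlock names are illustrative, not from the course code.

    /* Minimal spinlock sketch (illustrative, not the course's code).
     * atomic_flag_test_and_set atomically sets the flag and returns its
     * previous value, just like the test_and_set instruction above. */
    #include <stdatomic.h>

    static atomic_flag lock_flag = ATOMIC_FLAG_INIT;

    void spin_lock(void) {
        while (atomic_flag_test_and_set(&lock_flag))
            ; /* busy wait until the previous value was false (lock was free) */
    }

    void spin_unlock(void) {
        atomic_flag_clear(&lock_flag); /* let another test_and_set succeed */
    }

A critical section would then simply be bracketed by spin_lock() and spin_unlock(), which is exactly the acquire()/release() pattern on the next page.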

Page 7:

Chapter 5: Synchronization

Mutex Locks


5.24 Silberschatz, Galvin and Gagne ©2013 Operating System Concepts – 9th Edition

acquire() and release()

    acquire() {
        while (!available)
            ; /* busy wait */
        available = false;
    }

    release() {
        available = true;
    }

    do {
        acquire lock
            critical section
        release lock
            remainder section
    } while (true);

• Variable available is shared across processes

• Process busy-waits until available becomes true

• Once available is true, the acquiring process sets it to false so that no other process can enter its critical section

• Are there any issues with acquire?

Page 8:

Chapter 5: Synchronization

How to implement acquire and release?

• acquire() and release() must be atomic operations (why?)

• Can we use test_and_set() or compare_and_swap() to ensure acquire() and release() are atomic?


Page 9:

Chapter 5: Synchronization

Exercise: make acquire and release atomic


    release() {
        available = true;
    }

    acquire() {
        while (!available)
            ; /* busy wait */
        available = false;
    }

    Is this operation atomic?

    int compare_and_swap(int *value, int expected, int new_value);

    acquire() {
        while (compare_and_swap(&available, 0, 1) == 1)
            ;
    }

    boolean test_and_set(boolean *value);

    acquire() {
        while (test_and_set(&not_available) == true)
            ;
    }

    Is this operation atomic?

Answer revealed during class

Page 10:

Chapter 5: Synchronization

Why would we ever use spinlocks?

• Busy-waiting is generally a bad idea when the process could instead tell the operating system to block it, i.e. context-switch away so it is no longer the active process, and make it active again only once it is unblocked

• But spinlocks can be useful on SMP (symmetric multiprocessing) systems to avoid context switches, when the lock is expected to be held for a short time

• one thread spins on one processor while another thread performs its critical section on another processor

• Do spinlocks make sense on single-processor systems?


Page 11:

Chapter 5: Synchronization

Busy Waiting versus Blocking/Polling

• What happens during a busy wait?

• Instructions for process P1 are to continuously check a variable's value

• OS runs this instruction some number of times before a context switch

• If no other process is running (on another processor), P1 just checks the same variable over and over again, even though it cannot have changed (wasted cycles)

• The OS then context-switches to another process P2, which might finally change the variable, so that when we switch back to P1, it might get past the loop

• What happens during blocking or polling?

• Blocking: process tells OS to block() until some event calls wakeup()

• Polling: the process sleeps (which blocks it for some amount of time) and then checks again whether the variable has changed
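A rough sketch of the polling alternative, assuming a POSIX environment; the data_ready flag and poll_for_data function are hypothetical names used only for this illustration.

    /* Polling sketch: instead of spinning, sleep (block) for a short interval
     * between checks, giving up the CPU to other processes. */
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <unistd.h>

    static atomic_bool data_ready = false;  /* hypothetical flag set by P2 */

    void poll_for_data(void) {
        while (!atomic_load(&data_ready))
            usleep(1000);   /* block for ~1 ms, then check again */
        /* data_ready is now true: continue past the "loop" */
    }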


Page 12:

Chapter 5: Synchronization

Semaphores

• Semaphores in general can be used to synchronize access to N resources (rather than just locking)

• A semaphore S is an integer variable stating the number of available resources, rather than a boolean

• wait(S) — like acquire, but returns when S > 0 (i.e. a resource is available)

• signal(S) — like release, but increments number of available resources

• Notice that binary semaphores are like mutex locks, except usually semaphores use blocking, not busy-waiting

• policy is the same, but mechanism (implementation) is different
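For concreteness, a minimal sketch using POSIX semaphores, whose sem_wait and sem_post calls correspond to wait(S) and signal(S) on this slide; the resource-pool scenario and the names used here are illustrative, not from the slides.

    /* Counting-semaphore sketch: N identical resources; each user calls
     * sem_wait before taking one and sem_post after returning it. */
    #include <semaphore.h>
    #include <stdio.h>

    #define N 3

    static sem_t resources;

    void use_resource(void) {
        sem_wait(&resources);        /* like wait(S): blocks until a resource is free */
        printf("got a resource\n");  /* ... use the resource ... */
        sem_post(&resources);        /* like signal(S): releases the resource */
    }

    int main(void) {
        sem_init(&resources, 0, N);  /* S starts at the number of available resources */
        use_resource();
        sem_destroy(&resources);
        return 0;
    }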


Page 13:

Chapter 5: Synchronization

Semaphore Implementation with Blocking


5.28 Silberschatz, Galvin and Gagne ©2013 Operating System Concepts – 9th Edition

Semaphore Implementation with no Busy Waiting

• With each semaphore there is an associated waiting queue

• Each entry in a waiting queue has two data items:

    • value (of type integer)

    • pointer to next record in the list

• Two operations:

    • block – place the process invoking the operation on the appropriate waiting queue

    • wakeup – remove one of the processes in the waiting queue and place it in the ready queue

    typedef struct {
        int value;
        struct process *list;
    } semaphore;

5.29 Silberschatz, Galvin and Gagne ©2013 Operating System Concepts – 9th Edition

Implementation with no Busy Waiting (Cont.)

    wait(semaphore *S) {
        S->value--;
        if (S->value < 0) {
            add this process to S->list;
            block();
        }
    }

    signal(semaphore *S) {
        S->value++;
        if (S->value <= 0) {
            remove a process P from S->list;
            wakeup(P);
        }
    }


• Each semaphore has an associated waiting queue

• Two operations:

    • block — place the process invoking the operation on the wait queue

    • wakeup — remove one of the processes from the wait queue and place it in the ready queue

Page 14:

Chapter 5: Synchronization

Video Break: brought to you by another super fantastic classmate


Page 15:

Chapter 5: Synchronization

Back to the milk example

• Can we make sure M processes (people) get no more than N cartons of milk?


    /* Start with empty fridge */
    num_milk = 0;
    S->value = 1;

    while (true) {
        wait(S);
        if (num_milk < N) {
            Buy_Milk();
            num_milk++;
        }
        signal(S);
        /*
         * Afterwards maybe we'll drink
         * milk and consume cookies
         */
    }

• What is being synchronized? Is this a binary semaphore (lock) or counting semaphore?

• What happens if there is no milk left? What is the value of S?

• What happens when signal(S) is called?

Page 16:

Chapter 5: Synchronization

Consuming work from a Producer

• Say you have a producer generating work, e.g. a server accepting requests and adding those requests to a queue

• There could be N consumers (e.g. threads) consuming that work, i.e. servicing the requests for web pages

• Is this a binary semaphore (lock) or counting semaphore?


    /* Server */
    S->value = 0;
    while (true) {
        /* Waits if no requests */
        accept_request();
        add_request_to_queue();
        signal(S);
    }

    /*
     * Shared code for N threads
     * Each thread runs this fcn
     */
    while (true) {
        /* Waits if queue empty */
        wait(S);
        service_request();
    }

Page 17:

Chapter 5: Synchronization

Demo: A real example using mutexes

• Run dotprod_mutex.c — it computes the dot product of two large vectors, splitting the work across multiple threads that each handle a subset of the vectors

• The threads share a global data structure, including the global sum

• pthread_mutex_t allows threads to safely modify shared variables, in this case the global sum variable

• the critical section is adding each thread's partial result to the global sum (a minimal sketch follows below)
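dotprod_mutex.c itself is not reproduced in the slides; the following is a minimal sketch of the structure described above, with variable and function names that are my own guesses rather than the ones in the demo file.

    /* Threaded dot product sketch: each worker computes a partial sum locally
     * and locks the mutex only for the single update of the shared global sum. */
    #include <pthread.h>
    #include <stdio.h>

    #define VECLEN   1000
    #define NTHREADS 4

    static double a[NTHREADS * VECLEN], b[NTHREADS * VECLEN];
    static double global_sum = 0.0;
    static pthread_mutex_t sum_mutex = PTHREAD_MUTEX_INITIALIZER;

    static void *dotprod_worker(void *arg) {
        long id = (long)arg;
        double partial = 0.0;
        for (long i = id * VECLEN; i < (id + 1) * VECLEN; i++)
            partial += a[i] * b[i];

        pthread_mutex_lock(&sum_mutex);    /* critical section: update shared sum */
        global_sum += partial;
        pthread_mutex_unlock(&sum_mutex);
        return NULL;
    }

    int main(void) {
        pthread_t tid[NTHREADS];
        for (long i = 0; i < NTHREADS * VECLEN; i++)
            a[i] = b[i] = 1.0;             /* expected result: NTHREADS * VECLEN */
        for (long t = 0; t < NTHREADS; t++)
            pthread_create(&tid[t], NULL, dotprod_worker, (void *)t);
        for (long t = 0; t < NTHREADS; t++)
            pthread_join(tid[t], NULL);
        printf("dot product = %f\n", global_sum);
        return 0;
    }

Keeping the per-element multiplications outside the lock keeps the critical section, a single addition, as short as possible.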


Page 18:

Chapter 5: Synchronization

Deadlocks and Starvation

• Deadlock — two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes

• Example: let S and Q be semaphores initialized to 1

        P0                    P1
        wait(S);              wait(Q);
        wait(Q);              wait(S);
        ...                   ...
        signal(S);            signal(Q);
        signal(Q);            signal(S);

• Starvation — indefinite blocking



5.30 Silberschatz, Galvin and Gagne ©2013 Operating System Concepts – 9th Edition

Deadlock and Starvation

• Deadlock – two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes

• Let S and Q be two semaphores initialized to 1

        P0                    P1
        wait(S);              wait(Q);
        wait(Q);              wait(S);
        ...                   ...
        signal(S);            signal(Q);
        signal(Q);            signal(S);

• Starvation – indefinite blocking

    • A process may never be removed from the semaphore queue in which it is suspended

• Priority Inversion – scheduling problem when a lower-priority process holds a lock needed by a higher-priority process

    • Solved via priority-inheritance protocol
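The same deadlock pattern can be reproduced with two pthread mutexes; this sketch is illustrative (not from the slides), and with unlucky timing the program never terminates, which is exactly the point.

    /* Deadlock sketch: thread p0 locks S then Q, thread p1 locks Q then S.
     * If each grabs its first lock before the other releases, both block forever. */
    #include <pthread.h>

    static pthread_mutex_t S = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t Q = PTHREAD_MUTEX_INITIALIZER;

    static void *p0(void *arg) {
        pthread_mutex_lock(&S);
        pthread_mutex_lock(&Q);   /* may wait forever if p1 already holds Q */
        /* ... critical section ... */
        pthread_mutex_unlock(&Q);
        pthread_mutex_unlock(&S);
        return NULL;
    }

    static void *p1(void *arg) {
        pthread_mutex_lock(&Q);
        pthread_mutex_lock(&S);   /* may wait forever if p0 already holds S */
        /* ... critical section ... */
        pthread_mutex_unlock(&S);
        pthread_mutex_unlock(&Q);
        return NULL;
    }

    int main(void) {
        pthread_t t0, t1;
        pthread_create(&t0, NULL, p0, NULL);
        pthread_create(&t1, NULL, p1, NULL);
        pthread_join(t0, NULL);   /* with unlucky timing this never returns */
        pthread_join(t1, NULL);
        return 0;
    }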

Page 19:

Chapter 5: Synchronization

Priority Inversion

• Scheduling problem when a lower-priority process holds a lock needed by a higher-priority process

• Assume we have three processes — L, M, H — whose priorities follow the order L < M < H. Assume H is waiting on resource R, currently held by L. If M becomes runnable, it could preempt L: a lower-priority task, M, indirectly makes a higher-priority task, H, wait longer

• One solution: priority-inheritance protocol — a low-priority process accessing resources that are needed by a high-priority process temporarily inherits that higher priority
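On POSIX systems that support it, priority inheritance can be requested per mutex with PTHREAD_PRIO_INHERIT; here is a minimal sketch, with illustrative function and lock names.

    /* Initialize a mutex that uses the priority-inheritance protocol: while a
     * low-priority thread holds r_lock, it temporarily inherits the priority
     * of the highest-priority thread blocked on it. */
    #include <pthread.h>

    static pthread_mutex_t r_lock;

    int init_pi_mutex(void) {
        pthread_mutexattr_t attr;
        int rc;

        pthread_mutexattr_init(&attr);
        rc = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
        if (rc == 0)
            rc = pthread_mutex_init(&r_lock, &attr);
        pthread_mutexattr_destroy(&attr);
        return rc;
    }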


Page 20:

Chapter 5: Synchronization

Summary so far of Mutual Exclusion Mechanisms

• Atomic operations provided by hardware: used to implement mutual exclusion mechanisms such as mutexes and semaphores

• Mutex: for locking a critical section (mutual exclusion)

• Semaphore: many uses depending on initialization

• Mutual Exclusion: initialize semaphore to one (binary semaphore == mutex)

• Synchronization of cooperating processes (signalling): initialize semaphore to zero

• Manage multiple instances of a resource: initialize semaphore to # of instances
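A compact illustration of these three initializations using POSIX semaphores; the names and the constant K are illustrative.

    #include <semaphore.h>

    #define K 4   /* illustrative: number of instances of some resource */

    sem_t mutex_sem, signal_sem, pool_sem;

    void init_semaphores(void) {
        sem_init(&mutex_sem, 0, 1);   /* mutual exclusion: binary semaphore (mutex) */
        sem_init(&signal_sem, 0, 0);  /* signalling: waiter blocks until a sem_post */
        sem_init(&pool_sem, 0, K);    /* manage K instances of a resource */
    }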
