
Multitasking and scheduling

Guillaume Salagnac

Insa-Lyon – IST Semester

Fall 2017

Previously on IST-OPS: kernel vs userland

(Figure: Application 1 and Application 2 each run on their own virtual machine, VM1 and VM2, on top of the OS kernel, which runs on the hardware.)

Each program executes on an isolated virtual machine:
• the processor is just for me: “virtual CPU”
• the memory is just for me: “virtual memory”

2/39

Separation of mechanism and policy

Principle: design for orthogonality

Operating system designers try not to confuse
• mechanisms (and their implementation) on one hand, and
• policies (and their specifications) on the other hand

Example: vs

3/39

Outline

1. Introduction: the concept of a process

2. Achieving multitasking through context switching

3. Scheduling: problem statement

4. Scheduling: classical algorithms

5. Evaluating a scheduling policy

4/39

Definitions: Multitasking vs Multiprocessing

Multiprocessing, multi-core computing

simultaneous execution of programs by separate processors

VS

Multiprogramming AKA multitasking

ability to run several programs “at the same time”

⇒ typically: number of CPUs < number of programs

5/39

Pseudo-parallelism through interleaving

(Figure: three timelines, one virtual CPU per application A, B and C, each apparently running all the time, VS a single physical CPU whose time is shared in slices A, B, C, A, B, C, ...)

Policy = 1 VCPU / application

Mechanism = CPU time-sharing

Note: interleaving is fine as long as the user doesn’t notice

6/39

Degree of multiprogramming

Definition: degree of multiprogramming

Number of processes currently loaded in the system

source: Tanenbaum, Modern Operating Systems (4th ed., 2014), page 87

7/39

Why do we want multiprogramming ?

Typically: number of CPUs < number of programs

also: better resource utilization

8/39

Why do we want multiprogramming ?

Empirical observation
When executing, a program alternates between doing some calculations (CPU burst) and waiting for data (I/O burst).

(Figure: with one CPU per program, each CPU sits idle while its program waits for I/O; with a single shared CPU, the CPU bursts of A and B are interleaved so that the CPU stays busy while one of them is waiting.)
9/39

Multiprogramming: remarks

Why do we have to wait?
• because of access latency: memory, disk, network...
• interactive programs have to wait for user input
• programs may also have to synchronize with each other

Bad approach: busy waiting AKA polling
• difficult to program correctly
• precious CPU time is wasted doing nothing

Solution: passive waiting AKA blocking
• easier to use: just call a blocking function
• better CPU utilization
• latency hiding: overlap computations and I/O

⇒ need a mechanism to share the CPU
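For illustration, here is a minimal C sketch (not from the slides) contrasting the two approaches on a POSIX file descriptor: the polling version spins until data arrives, while the blocking version simply sleeps inside read() until the kernel wakes it up.

    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* Busy waiting: retry a non-blocking read in a tight loop.
     * The CPU does no useful work while spinning. */
    ssize_t read_polling(int fd, void *buf, size_t len)
    {
        fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_NONBLOCK);
        for (;;) {
            ssize_t n = read(fd, buf, len);
            if (n >= 0 || errno != EAGAIN)
                return n;              /* data arrived, or a real error */
        }
    }

    /* Passive waiting: a plain blocking read() puts the caller to sleep
     * until data is available, freeing the CPU for other processes. */
    ssize_t read_blocking(int fd, void *buf, size_t len)
    {
        return read(fd, buf, len);
    }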

10/39

Illustration of a context switch between two programs

(Figure: time flows downward across three columns, Program 1, Kernel, Program 2. While P1 is running, an interrupt or syscall traps into the kernel; the kernel copies the CPU registers to TCB1 (P1 becomes dormant), deals with the interrupt or syscall if needed, chooses P2, loads the CPU registers from TCB2, and executes RETI; P2, previously dormant, is now running.)

11/39

Context switch: remarks

• dispatcher = implements the context switch
  • executed very often ⇒ must be quick (dispatch latency)
• scheduler = chooses which program to execute next
  • possible that P2 = P1, e.g. after a write()...
  • possible that P2 ≠ P1, e.g. read() ⇒ blocking call

Associated kernel data structures:
• Process Control Block = PCB
  • represents a running program: process number (PID), executable filename, permissions...
  • contains one TCB
• Thread Control Block = TCB
  • represents a virtual processor AKA execution context
  • contains a copy of the CPU state: registers, PC, SR...
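As a rough C sketch (field names and sizes are illustrative assumptions, not any particular kernel's layout), the two structures might look like this:

    #include <stdint.h>

    /* Thread Control Block: one virtual processor / execution context.
     * Holds the copy of the CPU state saved at the last context switch. */
    struct tcb {
        uint32_t regs[16];      /* general-purpose registers          */
        uint32_t pc;            /* program counter                    */
        uint32_t sr;            /* status register                    */
        uint32_t sp;            /* stack pointer                      */
        int      state;         /* New / Ready / Running / Blocked... */
        struct tcb *next;       /* chaining in a ready/device queue   */
    };

    /* Process Control Block: one running program. */
    struct pcb {
        int        pid;             /* process number               */
        char       exe_name[64];    /* executable filename          */
        uint32_t   permissions;
        struct tcb thread;          /* exactly one TCB in this model */
    };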

12/39

Outline

1. Introduction: the concept of a process

2. Achieving multitasking through context switching

3. Scheduling: problem statement

4. Scheduling: classical algorithms

5. Evaluating a scheduling policy

13/39

What program to execute next ?

(Figure: the CPU bursts and I/O bursts of three processes A, B and C over time; A and B alternate longer CPU bursts with I/O, while C issues many short CPU bursts between I/O requests.)

Question: On a single CPU, how should we execute this workload ?

14/39

Naive scheduling

First idea: always execute A, B, then C, and repeat

(Gantt chart: the CPU runs A, B, then C in strict rotation; C gets the CPU only once per round.)

⇒ quite inefficient, especially for C

Second idea: execute C as often as possible

(Gantt chart: C is dispatched whenever it becomes ready, its short bursts interleaved with A's and B's longer ones.)

⇒ a lot better for C, while almost the same for A and B

15/39

Not all processes want the CPU all the time

(Same Gantt chart as the previous slide, with two instants t1 and t2 marked on the time axis.)

at time t1:
• A has the CPU
• B is ready to execute
• C is waiting for an input/output request to complete

at time t2:
• B has the CPU
• A and C are ready to execute

16/39

Process state diagram (1)

(State diagram with five states: New, Ready, Running, Blocked, Terminated.)

Possible states for a thread:
• New: PCB/TCB currently being created by the kernel
• Running = active: currently executing on the processor
• Ready = activable: waits to be executed
• Blocked = sleeping: waits for some event to complete
• Terminated: PCB/TCB being cleaned up by the kernel

17/39

Process state diagram (2)

(The same state diagram, with its transitions numbered 0 to 5.)

Transitions:
0. PCB/TCB initialization is done
1. the dispatcher loads the thread on the CPU
2. an IRQ or syscall interrupts execution
3. the program makes a blocking syscall
   • e.g. input-output read(), delay sleep(), etc.
4. the awaited event completes
   • e.g. data becomes available, delay expires, etc.
5. execution comes to an end (either voluntarily or abruptly)
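A minimal C rendering of the diagram (state names are illustrative) could be:

    /* The five thread states of the diagram. */
    enum thread_state {
        STATE_NEW,          /* PCB/TCB being created by the kernel    */
        STATE_READY,        /* activable: waiting to be executed      */
        STATE_RUNNING,      /* active: currently on the processor     */
        STATE_BLOCKED,      /* sleeping: waiting for some event       */
        STATE_TERMINATED    /* PCB/TCB being cleaned up by the kernel */
    };

    /* Transitions, numbered as on the diagram:
     *  0: NEW     -> READY       initialization done
     *  1: READY   -> RUNNING     dispatcher loads the thread on the CPU
     *  2: RUNNING -> READY       IRQ or syscall interrupts execution
     *  3: RUNNING -> BLOCKED     blocking syscall (read(), sleep()...)
     *  4: BLOCKED -> READY       the awaited event completes
     *  5: RUNNING -> TERMINATED  execution comes to an end             */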

18/39

Scheduling: problem statement

Purpose of the CPU scheduler
• given K threads which are ready to execute
• and supposing that we know some “features” about them
• given N ≥ 1 available processors
decide which threads to execute on each processor

Remark: when is the scheduler activated?
• upon each transition running → blocked (3), e.g. sleep()
• when a process terminates (5)
• upon each transition blocked → ready (4)
• upon each transition running → ready (2)
  • e.g. upon receiving an IRQ from the system timer

19/39

Two types of scheduling

Cooperative scheduler: activated only upon (3) and (5)

• applications explicitly yield control of the CPU
  • blocking system calls
  • + a dedicated yield() syscall
• efficient, but requires trusting the applications

Preemptive scheduler: activated upon (2), (3), (4) and (5)
• enables the kernel to stay in control of the machine

• system timer sends periodic IRQs to trigger preemption

• less efficient but allows for executing untrusted applications
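To make the cooperative case concrete, here is a small sketch: a compute-bound task that voluntarily gives the CPU back between work units using POSIX sched_yield() (the helper do_one_chunk_of_work() is hypothetical).

    #include <sched.h>

    /* Hypothetical stand-in for some real computation. */
    static void do_one_chunk_of_work(int i) { (void)i; /* ... */ }

    /* Cooperative behaviour: instead of monopolizing the CPU, the task
     * explicitly yields between work units so other ready threads can run. */
    void crunch_cooperatively(int n_chunks)
    {
        for (int i = 0; i < n_chunks; i++) {
            do_one_chunk_of_work(i);
            sched_yield();    /* voluntary transition running -> ready */
        }
    }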

20/39

States implemented as queues

(Figure: threads move between queues. Process creation puts a thread in the Ready queue; dispatch moves it to the CPU; preemption sends it back to the Ready queue. A disk request moves it to the Disk queue, a network request to the Net queue, and sleep() to the Sleeping queue; when the request completes or the delay expires, the thread returns to the Ready queue.)

21/39

Organizing PCBs into queues: remarks

Thread Control Blocks are chained together in queues
• Ready Queue AKA Run Queue

Purpose of the scheduler: choose a TCB in the Ready Queue

Blocked processes ⇒ transferred to another queue
• one Device Queue for each device
• one queue for sleeping processes
• ... one queue for each reason to be Blocked

22/39

Outline

1. Introduction: the concept of a process

2. Achieving multitasking through context switching

3. Scheduling: problem statement

4. Scheduling: classical algorithms

5. Evaluating a scheduling policy

23/39

Scheduling in project management

Off-line scheduling: projects, workshop, factory, etc.
Input: a list of “tasks” with their duration and dependencies
( + a list of available “resources” )
Output: a start date for each task
( + assignment of resources to tasks )

24/39

Off-line scheduling vs long-running programs

Omniscient point of view

(Figure: the complete CPU/I/O burst timeline of P1, P2 and P3, as it would appear with full knowledge of the future.)

VS

Point of view of the scheduler at t=0
The Ready Queue contains P1, P2 and P3. The CPU is idle.
⇒ how can we decide what to do?

25/39

FCFS Scheduling: First Come First Served
also known as FIFO (First In First Out)

FCFS scheduling: principle
run jobs in the same order they arrived in the queue

In our example:

(Gantt chart: A, B and C are served strictly in arrival order; same schedule as the first naive idea above.)

Remarks:
• inspired by real-life situations
• rather fair; no risk of starvation
• non-preemptive scheduling
• short tasks (e.g. C) may be penalized
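A minimal C sketch of the FCFS policy: the ready queue is a plain FIFO of TCBs (only the chaining field matters here) and the scheduler always dispatches its head.

    /* Simplified TCB: only the queue chaining matters for FCFS. */
    struct tcb { struct tcb *next; /* ... saved CPU state ... */ };

    struct fifo_queue { struct tcb *head, *tail; };

    /* A thread that becomes ready is appended at the tail. */
    void ready_enqueue(struct fifo_queue *q, struct tcb *t)
    {
        t->next = 0;
        if (q->tail) q->tail->next = t; else q->head = t;
        q->tail = t;
    }

    /* FCFS decision: always dispatch the oldest entry (the head). */
    struct tcb *fcfs_pick_next(struct fifo_queue *q)
    {
        struct tcb *t = q->head;
        if (t) {
            q->head = t->next;
            if (!q->head) q->tail = 0;
        }
        return t;    /* NULL when the ready queue is empty */
    }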

26/39

Execution is a sequence of bursts

Heuristic
For each process in the ready queue, the kernel will try to guess the duration of the next CPU burst.

Remark: in practice, the scheduler does not think in terms of processes or threads, but in terms of CPU bursts!
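One standard way to make this guess (described in the OS literature, e.g. Silberschatz, but not detailed on this slide) is exponential averaging of past bursts: τ(n+1) = α·t(n) + (1 − α)·τ(n). A one-line C version:

    /* Exponential averaging: blend the last measured CPU burst with the
     * previous prediction.  ALPHA in [0,1] is a tuning weight; 0.5 is a
     * common textbook value (an assumption, not from the slides).       */
    #define ALPHA 0.5

    double predict_next_burst(double measured_last, double predicted_last)
    {
        return ALPHA * measured_last + (1.0 - ALPHA) * predicted_last;
    }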

Point of view of the scheduler at t=0
The Ready Queue contains A, B, and C.
⇒ which process should we execute next?

27/39

Different types of bottlenecks

In our example:
• A and B are “mostly doing calculations” ⇒ bottleneck = CPU
• C is “mostly doing input/output” ⇒ bottleneck = I/O device

Definitions
• A program is said to be «compute-bound» if a faster processor would reduce its execution time
• A program is said to be «I/O-bound» if faster input/output would reduce its execution time
• variants: memory-bound, disk-bound, network-bound...

Empirical observation
In practice, a thread will be either compute-bound or I/O-bound.

28/39

Distribution of CPU burst durations

source: Silberschatz, Operating System Concepts Essentials (2011), p. 177

29/39

SJF Scheduling: Shortest Job First

SJF Scheduling: principle

in the Ready Queue, pick the job with the smallest execution time

In our example:

(Gantt chart: same schedule as the second idea above; C's short bursts are served ahead of A's and B's longer ones.)

Remarks:
• beneficial to I/O-bound processes...
• ...while not harming CPU-bound processes too much
• risk of starvation if many short jobs arrive
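A sketch of the SJF decision in C, assuming each TCB carries a predicted next-burst length (for instance computed with the exponential averaging shown earlier):

    /* Simplified TCB: chaining plus the predicted length of the next burst. */
    struct tcb {
        struct tcb *next;
        double      predicted_burst;
    };

    /* SJF decision: scan the ready list and return the thread whose
     * predicted next CPU burst is the shortest.                       */
    struct tcb *sjf_pick_next(struct tcb *ready_head)
    {
        struct tcb *best = ready_head;
        for (struct tcb *t = ready_head; t; t = t->next)
            if (t->predicted_burst < best->predicted_burst)
                best = t;
        return best;    /* NULL when the ready queue is empty */
    }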

30/39

SRTF Scheduling: Shortest Remaining Time First

SRTF Scheduling: principle
like SJF, but with preemption
⇒ choice re-evaluated on each transition blocked → ready

(Animated Gantt chart, built step by step: the contents of the Ready Queue are shown at t=0, t=1, t=3, t=4 and t=5 as A, B and C block and become ready again; the resulting schedule is the same as with SJF above.)

Remarks:
• similar to SJF (with our example: result identical)
• preemptive: one process can't monopolize the CPU
• but still prone to starvation

31/39

RR Scheduling: Round Robin

Round Robin Scheduling: principle
• processes are given the CPU each in turn
• the kernel uses the system timer tick to measure time
• a burst which exceeds its time quantum gets preempted

In our example, with a quantum duration q = 2 ticks:

(Gantt chart: A, B and C take turns on the CPU, each preempted by the system timer IRQ after at most q = 2 ticks, while their I/O requests overlap with the others' CPU bursts.)

Remarks:
• naturally fair and immune to starvation
• but how do we choose the value of q?
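A sketch of the RR preemption logic as it might be called from the system-timer IRQ handler; current, ready_enqueue and ready_dequeue are assumed kernel globals/helpers, in the spirit of the FIFO sketched for FCFS.

    #define QUANTUM_TICKS 2                    /* q = 2 ticks, as in the example */

    struct tcb;                                /* saved CPU state, as sketched   */
    extern struct tcb *current;                /* thread currently on the CPU    */
    extern void        ready_enqueue(struct tcb *t);   /* append at FIFO tail    */
    extern struct tcb *ready_dequeue(void);            /* take the FIFO head     */

    static int ticks_used;                     /* ticks consumed by 'current'    */

    /* Called on every system timer IRQ. */
    void rr_on_timer_tick(void)
    {
        if (++ticks_used >= QUANTUM_TICKS) {
            ready_enqueue(current);            /* quantum expired: preempt       */
            current = ready_dequeue();         /* dispatch the next ready thread */
            ticks_used = 0;                    /* fresh quantum for the newcomer */
        }
    }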

32/39

In real life: priority scheduling

Priority scheduling: principle
• maintain several ready queues simultaneously
• consider them in decreasing order of priority
• each queue can implement a different policy: RR, SRTF...

Variants:
• fixed priority ⇒ real-time scheduling
• variable priority ⇒ time sharing (AKA best-effort)
  • example: Multi-Level Feedback Queue Scheduling (MLFQ)
  • with criteria to promote/demote processes

MLFQ scheduling: example
• high priority: RR q=5ms → interactive processes
• average priority: RR q=50ms → I/O-bound tasks
• low priority: SRTF → run CPU-bound tasks in the background
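A sketch of the top-level decision for such a priority scheduler: walk the ready queues from highest to lowest priority and take the first thread offered; each level applies its own policy internally (level_pick() is a hypothetical per-level helper).

    #define N_LEVELS 3                        /* high, average, low priority    */

    struct tcb;
    extern struct tcb *level_pick(int level); /* per-level policy: RR q=5ms,    */
                                              /* RR q=50ms, SRTF (hypothetical) */

    /* Priority scheduling: the first non-empty queue, in decreasing
     * order of priority, decides which thread runs next.             */
    struct tcb *priority_pick_next(void)
    {
        for (int level = 0; level < N_LEVELS; level++) {
            struct tcb *t = level_pick(level);
            if (t)
                return t;
        }
        return 0;    /* nothing ready: the CPU will idle */
    }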

33/39

Outline

1. Introduction: the concept of a process

2. Achieving multitasking through context switching

3. Scheduling: problem statement

4. Scheduling: classical algorithms

5. Evaluating a scheduling policy

34/39

Evaluating a scheduling policy

Evaluation methodology
• deterministic simulation: on a given scenario
  • play out the algorithms, on paper or with a computer
• stochastic modeling
  • queueing theory, Markov chains...
• real system instrumentation AKA benchmarking
  • impact on performance, choice of the workload...

Example scenarios:

task  duration
T1    6
T2    8
T3    3

task  arrival  duration
T1    0        8
T2    1        4
T3    2        9
T4    3        5

35/39

Evaluation criteria

• CPU utilization rate: proportion of time when the CPU is active
  • ideally, should be close to 100%
• Throughput: number of jobs finished per unit of time
  • only makes sense if “jobs” can “finish”
• Fairness in general and non-starvation in particular
  • a whole subject by itself
• Turnaround time: time elapsed between arrival and termination
  • only makes sense if “jobs” can “finish”
• Waiting time: duration spent in the ready queue
  • all time spent in the ready queue really is wasted
• Response time: time elapsed before the first “response”
  • depends on the definition of “response”

36/39

Example

Consider this scenario:

task  arrival  duration
T1    0        8
T2    1        4
T3    2        9
T4    3        5

(Gantt charts: the schedules produced by FCFS, SJF, SRTF and RR with q=3 on this workload, drawn on a common time axis.)

For SRTF:

TT = ((17 − 0) + (5 − 1) + (26 − 2) + (10 − 3)) / 4 = 13

WT = ((10 − 1) + 0 + (17 − 2) + (5 − 3)) / 4 = 6.5
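The same numbers can be checked with a small deterministic simulation (in the sense of the evaluation-methodology slide): the sketch below replays SRTF tick by tick on the four-task scenario and recomputes the average turnaround and waiting times.

    #include <stdio.h>

    #define N 4

    int main(void)
    {
        /* arrival time and duration of T1..T4, from the scenario table */
        int arrival[N]  = {0, 1, 2, 3};
        int duration[N] = {8, 4, 9, 5};
        int remaining[N], completion[N];
        int done = 0, t = 0;

        for (int i = 0; i < N; i++) remaining[i] = duration[i];

        while (done < N) {
            /* SRTF decision: among arrived, unfinished tasks, pick the
             * one with the shortest remaining time. */
            int pick = -1;
            for (int i = 0; i < N; i++)
                if (arrival[i] <= t && remaining[i] > 0 &&
                    (pick < 0 || remaining[i] < remaining[pick]))
                    pick = i;
            if (pick < 0) { t++; continue; }   /* CPU idle this tick   */
            remaining[pick]--;                 /* run it for one tick  */
            t++;
            if (remaining[pick] == 0) { completion[pick] = t; done++; }
        }

        double tt = 0, wt = 0;
        for (int i = 0; i < N; i++) {
            tt += completion[i] - arrival[i];                /* turnaround */
            wt += completion[i] - arrival[i] - duration[i];  /* waiting    */
        }
        printf("average turnaround = %.2f\n", tt / N);   /* prints 13.00 */
        printf("average waiting    = %.2f\n", wt / N);   /* prints  6.50 */
        return 0;
    }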

37/39

Outline

1. Introduction: the concept of a process

2. Achieving multitasking through context switching

3. Scheduling: problem statement

4. Scheduling: classical algorithms

5. Evaluating a scheduling policy

38/39

Summary

Policy vs Mechanism
• Multitasking vs Multiprocessing
• VCPU vs context switch + scheduling

Important concepts
• Dispatcher, Scheduler, Process Control Block, Preemption, CPU burst / I/O burst, process states, Ready Queue...

Scheduling policies
• First Come First Served
• Shortest Job First, Shortest Remaining Time First
• Round Robin, with a value for the time quantum
• Priority scheduling, either with fixed or dynamic priorities
  • Multi-Level Feedback Queue

Evaluation: Turnaround Time, Waiting Time...

39/39