
CSCI 3431: OPERATING SYSTEMS

Chapter 5 – CPU Scheduling (Pgs 183 – 218)

CPU Scheduling

Goal: To get as much done as possible

How: By never letting the CPU sit "idle", doing nothing

Idea: When a process is waiting for something to happen, the CPU can execute a different process that isn't waiting for anything

Reality: Many CPUs are often idle because all processes are waiting for something

Bursts

CPU burst – period of time in which CPU is executing instructions and "doing work"

I/O burst – period of time in which the CPU is waiting for I/O to occur and is "idle"

CPU burst lengths follow a pattern: most bursts are short, and only a few are long

[Figure: CPU Burst Duration Histogram]

[Figure 3.2: Process State]

Scheduling

Short-term schedulers select (or order) the ready queue

Scheduling occurs:
1. When a process switches from running to waiting
2. When a process switches from running to ready
3. When a process switches from waiting to ready
4. When a process terminates

If 1 and 4 only, scheduling is cooperative
If 1 to 4 (all), scheduling is preemptive

Preemption

A process switches to ready because its timeslice is over

Permits fair sharing of CPU cycles

Needed for multi-user systems

Cooperation uses "run as long as possible"

A variant of cooperation, "run to completion", never switches a process that is not finished, even if waiting for I/O

Scheduling Criteria

CPU Utilisation: Keep the CPU busy

Throughput: # of processes completed per time unit

Turnaround Time: time from process submission to completion

Waiting Time: time spent in the READY queue

Response Time: time from submission until first output is produced

Criteria Not Mentioned

Overhead! O/S activities take time away from user processes:

Time for performing scheduling

Time to do a context switch

Interference due to interrupt handling

Dispatch Latency: time to stop one process and start another running

First Come – First Served

Simple queue needed to implement

Average waiting times can be long

The order in which processes are started will affect waiting times (see the sketch below)

Non-preemptive, so poor for multi-user systems

But not so bad when used with preemption (Round Robin Scheduling)
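As a minimal C sketch (not from the slides), here is the order effect made concrete, assuming all processes arrive at time 0 with hypothetical known burst lengths of 24, 3, and 3:

```c
/* FCFS average waiting time for a fixed arrival order.
   Assumptions: all processes arrive at time 0, bursts are known. */
#include <stdio.h>

int main(void) {
    int bursts[] = {24, 3, 3};   /* hypothetical CPU burst lengths */
    int n = sizeof bursts / sizeof bursts[0];
    int wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        total_wait += wait;      /* process i waits for all earlier bursts */
        wait += bursts[i];
    }
    printf("average waiting time: %.2f\n", (double)total_wait / n);
    return 0;
}
```

Run in this order, the average wait is (0 + 24 + 27) / 3 = 17 time units; reordering the bursts to {3, 3, 24} drops it to (0 + 3 + 6) / 3 = 3, which previews why Shortest Job First is optimal.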

Shortest Job First

Provably optimal w.r.t. average waiting time

May or may not be preemptive

Problem – how does one know how long a job will take (the length of its next CPU burst)?

1. Start with the system average
2. Modify based on previous bursts (exponential average):

τ(n+1) = α·t(n) + (1 − α)·τ(n)

where t(n) is the measured length of the most recent CPU burst, τ(n) is the previous prediction, and 0 ≤ α ≤ 1 weights recent history against the older past
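The exponential average translates directly into code; a sketch in C, where the function and variable names are illustrative choices:

```c
/* One step of the exponential-average burst predictor:
   tau   - previous prediction
   t     - measured length of the most recent CPU burst
   alpha - weight in [0,1] given to the most recent burst */
double predict_next_burst(double tau, double t, double alpha) {
    return alpha * t + (1.0 - alpha) * tau;
}
```

With α = 0 the history dominates and recent behaviour is ignored; with α = 1 only the last burst counts; α = 0.5 is a common compromise.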

Shortest Remaining Time

Shortest Job First, BUT if a new job will finish sooner than the time remaining for the running job, preempt the running job and run the new one

Priority Scheduling

Select the next process based on priority

SJF = priority scheduling with priority based on inverse CPU burst length

Many ways to assign priority: policy, memory factors, burst times, user history, etc.

Starvation (being too low a priority to ever get CPU time) can be a problem

Aging: older processes increase in priority, which prevents starvation (see the sketch below)
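The sketch referenced above: one possible aging sweep in C. The struct layout, the tick threshold, and the "smaller number = higher priority" convention are all illustrative assumptions, not a real kernel structure:

```c
#include <stdio.h>

/* Hypothetical process record: smaller priority number = higher priority. */
struct proc { int pid; int priority; int wait_ticks; };

/* Each scheduling tick, raise the priority of anything that has been
   waiting too long, so no process starves. */
void age_ready_queue(struct proc *rq, int n, int threshold) {
    for (int i = 0; i < n; i++) {
        if (++rq[i].wait_ticks >= threshold && rq[i].priority > 0) {
            rq[i].priority--;       /* waited long enough: bump it up */
            rq[i].wait_ticks = 0;
        }
    }
}

int main(void) {
    struct proc rq[] = { {1, 5, 0}, {2, 9, 0} };
    for (int tick = 0; tick < 4; tick++)
        age_ready_queue(rq, 2, 2);  /* bump priority every 2 ticks waited */
    printf("P1 priority %d, P2 priority %d\n", rq[0].priority, rq[1].priority);
    return 0;
}
```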

Round Robin

FCFS with preemption

Basic time slices

Length of the time slice is important: 80% +/- of CPU bursts should fit in a time slice

Should not be so short as to consume a large fraction of CPU cycles doing context switches (a sketch follows below)
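A small sketch of round robin over the same hypothetical 24/3/3 bursts, assuming all processes arrive at time 0, a quantum of 4, and zero context-switch cost for simplicity:

```c
#include <stdio.h>

int main(void) {
    int remaining[] = {24, 3, 3};   /* hypothetical burst time left */
    int n = 3, quantum = 4, clock = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] <= 0) continue;    /* already finished */
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            clock += slice;                     /* run for one slice */
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                printf("P%d finishes at t=%d\n", i + 1, clock);
                done++;
            }
        }
    }
    return 0;
}
```

The short jobs finish at t = 7 and t = 10 instead of queueing behind the 24-unit burst as they would under plain FCFS.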

Multi-Level Queueing

Similar to Priority Scheduling, but keep different queues for each priority instead of ordering on one queue

Can use different algorithms (or variants of the same algorithm) on each queue

Various ways to choose which queue to take the next job from (see the sketch below)

Can permit process migration between queues

Queues do not need to have the same length timeslices etc.
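As a sketch of one possible selection policy (strict priority: always serve the highest non-empty queue), with array-based queues and pids that are purely hypothetical:

```c
#include <stdio.h>

#define NLEVELS 3

/* Hypothetical per-priority ready queues; level 0 is highest priority. */
int queues[NLEVELS][8] = { {0}, {41, 42}, {7} };
int counts[NLEVELS]    = { 0, 2, 1 };

int pick_next(void) {
    for (int lvl = 0; lvl < NLEVELS; lvl++)
        if (counts[lvl] > 0)
            return queues[lvl][--counts[lvl]];  /* take a job from this queue */
    return -1;                                  /* nothing is ready */
}

int main(void) {
    printf("next pid: %d\n", pick_next());      /* level 0 empty, so a level-1 pid */
    return 0;
}
```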

Thread Scheduling

Process Contention Scope: Scheduling of threads in "user space"

System Contention Scope: Scheduling of threads in "kernel space"

Pthreads lets user control contention scope!

pthread_attr_setscope()
pthread_attr_getscope()
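A short example using the two attribute calls named above to request and then confirm system contention scope (error handling trimmed; note that Linux's NPTL supports only PTHREAD_SCOPE_SYSTEM, so requesting PTHREAD_SCOPE_PROCESS can fail there). Compile with -pthread:

```c
#include <pthread.h>
#include <stdio.h>

void *worker(void *arg) { return NULL; }

int main(void) {
    pthread_attr_t attr;
    pthread_t tid;
    int scope;

    pthread_attr_init(&attr);
    pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);  /* kernel-level scheduling */
    pthread_attr_getscope(&attr, &scope);
    printf("contention scope: %s\n",
           scope == PTHREAD_SCOPE_SYSTEM ? "system" : "process");

    pthread_create(&tid, &attr, worker, NULL);
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}
```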

Multi-Processor Scheduling

Load Sharing – Now scheduling must also deal with multiple CPUs as well as multiple processes

Quite complex

Can be affected by processor similarity

Symmetric Multiprocessing (SMP) – Each CPU is self-scheduling (most common)

Asymmetric Multiprocessing – One processor is the master scheduler

Processor Affinity

Best to keep a process on the same CPU for its life to maximise cache benefits

Hard Affinity: Process can be set to never migrate between CPUs

Soft Affinity: Migration is possible in some instances

NUMA, CPU speed, and job mix will all affect migration

Sometimes the cost of migration is recovered by moving from an overworked CPU to an idle (or faster) one
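How affinity is set is OS-specific; as one concrete Linux-only illustration (not from the slides), sched_setaffinity() can impose hard affinity by pinning the calling process to a single CPU:

```c
#define _GNU_SOURCE     /* for CPU_* macros and sched_setaffinity() */
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(0, &mask);                        /* allow CPU 0 only */

    if (sched_setaffinity(0, sizeof mask, &mask) != 0) {  /* pid 0 = self */
        perror("sched_setaffinity");
        return 1;
    }
    printf("pinned to CPU 0\n");
    return 0;
}
```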

Load Balancing

It makes no sense to have some CPUs with waiting processes while some CPUs sit idle

Push Migration: A monitor moves processes around and "pushes" them towards less busy CPUs

Pull Migration: Idle CPUs pull in jobs

Often the load balancing is wasted when cache reloads are needed

Threading Granularity

Some CPUs have very low-level instructions to support threads

Can switch threads every few instructions at low cost = fine-grained multithreading

Some CPUs do not provide much support and context switches are expensive = coarse-grained multithreading

Many CPUs provide multiple hardware threads in support of fine-grained multithreading – CPU is specifically designed for this (e.g., two register sets) and has hardware and microcode support

Algorithm Evaluation

1. Deterministic Modeling – use a predetermined workload (e.g., historic data) to evaluate algorithms

2. Queueing Analysis – uses mathematical queueing theory and process characteristics (based on probability) to model the system

3. Simulations – simulate the system and measure performance, generating process characteristics from probability distributions

4. Prototyping – program and test the algorithm in an operating environment

To Do:

Work on Assignment 1

Finish reading Chapter 5 (pgs 183-218; this lecture) if you haven’t already

Read Chapter 6 (pgs 225-267; next lecture)