ITFN 2601 Introduction to Operating Systems Lecture 4/5 Scheduling


Page 1: ITFN 2601 Introduction to Operating Systems

ITFN 2601 Introduction to Operating Systems

Lecture 4/5: Scheduling

Page 2: ITFN 2601 Introduction to Operating Systems

Agenda

Scheduling: Batch, Interactive, Real-Time

Threads

Page 3: ITFN 2601 Introduction to Operating Systems

Processes

A process is an executing program

Multiprogramming

Consists of a program, input, output, and a state

Page 4: ITFN 2601 Introduction to Operating Systems

Process Creation

System Initialization

System call by running process

User request to create new process

Initiation of batch job

Page 5: ITFN 2601 Introduction to Operating Systems

Process Termination

Normal Exit

Error Exit

Fatal Error

Killed by another process

Page 6: ITFN 2601 Introduction to Operating Systems

Process States

Running

Ready

Blocked

Page 7: ITFN 2601 Introduction to Operating Systems

Threads

Lightweight processes

Threads handle all execution activities

A thread is a program counter, a stack, and a set of registers

Thread creation is relatively cheap in terms of CPU costs
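A minimal sketch of creating and joining a thread with the POSIX pthreads API (compile with -pthread); the worker function and its argument are illustrative only:

#include <pthread.h>
#include <stdio.h>

/* Worker: each thread gets its own stack and registers, but shares
   the process's address space with the other threads. */
static void *worker(void *arg) {
    int id = *(int *)arg;
    printf("thread %d running\n", id);
    return NULL;
}

int main(void) {
    pthread_t tid;
    int id = 1;
    /* Creating a thread is much cheaper than fork(): no new address
       space is built, only a stack and a register context. */
    if (pthread_create(&tid, NULL, worker, &id) != 0) {
        perror("pthread_create");
        return 1;
    }
    pthread_join(tid, NULL);   /* wait for the thread to finish */
    return 0;
}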

Page 8: ITFN 2601 Introduction to Operating Systems

Thread Usage

Programming model becomes simpler

Easy to create and destroy

Speeds up applications

Useful on systems with multiple CPUs

Page 9: ITFN 2601 Introduction to Operating Systems

User-level threads

User-level threads get the time of only one process to execute

User-level threads are managed by runtime library routines linked into each application, so thread management operations require no kernel intervention

User-level threads are also flexible; they can be customized to the needs of the language or user without kernel modification

User-level threads execute within the context of traditional processes
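To illustrate how a run-time library can switch threads entirely in user space, here is a minimal sketch using the POSIX ucontext API; the 64 KiB stack size and the function names are arbitrary choices for the example:

#include <stdio.h>
#include <ucontext.h>

/* Two user-level "threads": each is just a saved register set,
   program counter and stack, managed entirely by this program. */
static ucontext_t main_ctx, thread_ctx;
static char thread_stack[64 * 1024];       /* stack for the second context */

static void thread_body(void) {
    printf("user-level thread running\n");
    /* Yield back to main: save our context, restore main's.
       No kernel scheduling decision is involved in the switch. */
    swapcontext(&thread_ctx, &main_ctx);
}

int main(void) {
    getcontext(&thread_ctx);                      /* initialise the context   */
    thread_ctx.uc_stack.ss_sp = thread_stack;     /* give it its own stack    */
    thread_ctx.uc_stack.ss_size = sizeof thread_stack;
    thread_ctx.uc_link = &main_ctx;               /* where to go when it ends */
    makecontext(&thread_ctx, thread_body, 0);

    printf("switching to user-level thread\n");
    swapcontext(&main_ctx, &thread_ctx);          /* the user-level "dispatch" */
    printf("back in main\n");
    return 0;
}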

Page 10: ITFN 2601 Introduction to Operating Systems

The Cost and Benefits of User-Level Threads

Thread operations do not have to cross a protection boundary

Parameters do not need to be copied

The kernel implements a single policy or pays the overhead of managing multiple policies

Applications can link in the correct thread management policy for their needs

Performance is inherently better in user-level threads

User-level threads that issue blocking calls to the kernel will block an entire kernel thread (i.e. virtual processor)

Page 11: ITFN 2601 Introduction to Operating Systems

User-level Thread Problems

Blocking system calls (one thread's blocking call blocks the whole process)

Page faults

A thread must voluntarily give up the CPU

Page 12: ITFN 2601 Introduction to Operating Systems

Threads in the Kernel

Avoids the system-integration problems exhibited by user-level threads, because the kernel directly schedules each application's threads onto physical processors

Performance has typically been an order of magnitude worse than the best-case performance of user-level threads

The trade-off so far: employ user-level threads, which have good performance and correct behavior provided the application is uniprogrammed and does no I/O, or employ kernel threads, which have worse performance but are not as restricted

Page 13: ITFN 2601 Introduction to Operating Systems

The Cost and Benefit of Kernel-Level Threads

Kernel threads are expensive!

The kernel does not understand application behavior

It may deschedule a thread holding a spin lock

Thread priority inversion

The kernel may run out of kernel threads to handle all the user threads

"Correct" kernel-level support

Page 14: ITFN 2601 Introduction to Operating Systems

Scheduler Activations

Threads are needed for parallel applications

User-level and kernel-level threads both have problems

User-level threads offer good performance, but do not handle I/O well

Kernel-level threads are expensive, but correct

Page 15: ITFN 2601 Introduction to Operating Systems

Scheduler Activations

SAs notify user-level schedulers of changes in kernel scheduling decisions.

SAs provide kernel space for threads that block in the kernel.

Create one activation for each virtual processor.

Kernel creates SAs to upcall into applications, notifying them of scheduling events.

Page 16: ITFN 2601 Introduction to Operating Systems

Scheduler Activations (cont)

Key difference between SAs and kernel threads:

When an SA blocks, the application is notified by a different SA

The blocking SA's thread is marked blocked and the old SA is freed

The new SA can now be scheduled

The number of SAs under control of the application never changes (unless requested/told explicitly)

Kernel-level state is passed to the thread system on upcall, so that the registers of the blocking thread are accessible to the user-level scheduler

Page 17: ITFN 2601 Introduction to Operating Systems

Popup Threads

A thread is created spontaneously to handle an incoming request

The incoming message is mapped into the thread's address space

Advantages over a traditional request: no waiting on work (no context needs to be saved); creating a new thread is cheaper than restoring an old thread (no context is saved)

Page 18: ITFN 2601 Introduction to Operating Systems

Definition of Critical Sections

The overlapping portion of each process, where the shared variables are being accessed

Mutual exclusion: if Pi is executing in one of its critical sections, no Pj, i ≠ j, is executing in its critical sections

Page 19: ITFN 2601 Introduction to Operating Systems

Race Conditions

Race conditions generally involve one or more processes accessing a shared resource (such as a file or variable), where this multiple access has not been properly controlled

Race conditions appear in three situations: implicit calls to schedule from within a function; blocking operations; access to data shared by interrupt code and system calls
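A minimal sketch of the classic lost-update race: two threads increment a shared counter with no synchronization, so increments can interleave and be lost (the iteration count is arbitrary):

#include <pthread.h>
#include <stdio.h>

#define ITERS 1000000

static long counter = 0;        /* shared variable: the contested resource */

static void *inc(void *arg) {
    (void)arg;
    for (int i = 0; i < ITERS; i++)
        counter++;              /* read-modify-write: not atomic, so two
                                   threads can interleave and lose updates */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, inc, NULL);
    pthread_create(&b, NULL, inc, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* Expected 2 * ITERS, but the race usually makes it smaller. */
    printf("counter = %ld (expected %d)\n", counter, 2 * ITERS);
    return 0;
}

Protecting the increment with a pthread_mutex_t (lock before, unlock after) turns it into a proper critical section and restores the expected total.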

Page 20: ITFN 2601 Introduction to Operating Systems

Critical Regions

No two processes may be simultaneously inside their critical regions

No assumptions may be made about speeds or the number of CPUs

No process running outside its critical region may block another process

No process should have to wait forever to enter its critical region
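Peterson's algorithm is a classic two-process solution that satisfies all four conditions using only shared variables; this sketch assumes sequentially consistent memory, so on real hardware it would also need memory barriers or C11 atomics:

/* Peterson's solution for two processes, numbered 0 and 1. */
static volatile int turn;              /* whose turn it is to wait */
static volatile int interested[2];     /* who wants to enter       */

void enter_region(int self) {          /* self is 0 or 1 */
    int other = 1 - self;
    interested[self] = 1;              /* announce interest              */
    turn = self;                       /* politely offer to wait         */
    while (turn == self && interested[other])
        ;                              /* busy-wait until safe to enter  */
}

void leave_region(int self) {
    interested[self] = 0;              /* we have left the critical region */
}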

Page 21: ITFN 2601 Introduction to Operating Systems

Mutual Exclusion

Busy waiting or spin locks

Priority inversion

Producer-consumer problem
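A busy-waiting (spin) lock can be built from an atomic test-and-set operation; a minimal sketch using C11 atomics (the function names are illustrative):

#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

void spin_lock(void) {
    /* Atomically set the flag and get its old value; keep spinning
       (busy waiting) while someone else already holds the lock. */
    while (atomic_flag_test_and_set(&lock))
        ;   /* burns CPU: this is exactly the cost of busy waiting */
}

void spin_unlock(void) {
    atomic_flag_clear(&lock);
}

Spinning wastes CPU while waiting, and if a low-priority thread is descheduled while holding the lock, a high-priority thread can spin indefinitely, which is the priority-inversion hazard listed above.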

Page 22: ITFN 2601 Introduction to Operating Systems

Scheduling

Process conditions: processor (CPU) bound or I/O bound

Scheduling how? Preemptive or non-preemptive

Page 23: ITFN 2601 Introduction to Operating Systems

Scheduling: When

A new process is created (parent process, child process)

A process exits

A process blocks

An I/O interrupt occurs

Clock interrupts (non-preemptive vs. preemptive)

Page 24: ITFN 2601 Introduction to Operating Systems

Objectives of a Good Scheduling Policy

Fairness

Efficiency

Low response time (important for interactive jobs)

Low turnaround time (important for batch jobs)

High throughput

Page 25: ITFN 2601 Introduction to Operating Systems

Scheduling

Throughput. The amount of useful work accomplished per unit time. This depends, of course, on what constitutes "useful work." One common measure of throughput is jobs/minute (or second, or hour, depending on the kind of job).

Utilization. For each device, the utilization is the fraction of time the device is busy. A good scheduling algorithm keeps all the devices (CPUs, disk drives, etc.) busy most of the time.

Page 26: ITFN 2601 Introduction to Operating Systems

Scheduling (cont)

Turnaround. The length of time between when the job arrives in the system and when it finally finishes.

Response Time. The length of time between when the job arrives in the system and when it starts to produce output. For interactive jobs, response time might be more important than turnaround.

Waiting Time. The amount of time the job is ready (runnable but not running). This is a better measure of scheduling quality than turnaround, since the scheduler has no control of the amount of time the process spends computing or blocked waiting for I/O.

Page 27: ITFN 2601 Introduction to Operating Systems

Preemption

Needs a clock interrupt (or equivalent)

Needed to guarantee fairness

Found in all modern general-purpose operating systems

Without preemption, the system implements "run to completion (or yield)"

Page 28: ITFN 2601 Introduction to Operating Systems

Semaphores

Semaphores are used to block a process from entering a "critical section" of its machine code, if this critical section accesses a shared resource (e.g. a memory location) which another program is currently accessing

A process cannot atomically test the state of the semaphore and block itself if the semaphore is owned by another process. However, the operating system can do this work, as it can ensure that the running process is not pre-empted while the test, and possible block, are performed. This is why the operations on semaphores are typically implemented as system calls.
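A minimal sketch using the POSIX semaphore API; initialising the count to 1 makes this a binary semaphore guarding a critical section, and sem_wait/sem_post correspond to the classic down/up operations:

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t sem;            /* counting semaphore, used here as a mutex */
static int shared = 0;

static void *worker(void *arg) {
    (void)arg;
    sem_wait(&sem);          /* "down": blocks if the semaphore is 0    */
    shared++;                /* critical section on the shared variable */
    sem_post(&sem);          /* "up": wakes a blocked waiter, if any    */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    sem_init(&sem, 0, 1);    /* not shared between processes, initial value 1 */
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("shared = %d\n", shared);
    sem_destroy(&sem);
    return 0;
}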

Page 29: ITFN 2601 Introduction to Operating Systems

First-Come-First-Served

The simplest possible scheduling discipline is called First-come, first-served (FCFS). The ready list is a simple queue (first-in/first-out). The scheduler simply runs the first job on the queue until it blocks, then it runs the new first job, and so on. When a job becomes ready, it is simply added to the end of the queue
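A minimal sketch of FCFS over a batch of CPU bursts, computing each job's waiting and turnaround time as defined earlier (the burst lengths are made-up and all jobs are assumed to arrive at time 0):

#include <stdio.h>

int main(void) {
    /* CPU burst lengths of jobs in arrival (queue) order. */
    int burst[] = { 24, 3, 3 };
    int n = sizeof burst / sizeof burst[0];
    int time = 0, total_wait = 0, total_turnaround = 0;

    for (int i = 0; i < n; i++) {
        int wait = time;                  /* time spent ready but not running */
        int turnaround = time + burst[i]; /* arrival (0) to completion        */
        printf("job %d: wait=%d turnaround=%d\n", i, wait, turnaround);
        total_wait += wait;
        total_turnaround += turnaround;
        time += burst[i];                 /* run to completion, then next job */
    }
    printf("avg wait=%.2f avg turnaround=%.2f\n",
           (double)total_wait / n, (double)total_turnaround / n);
    return 0;
}

With these numbers the long job runs first, giving average wait 17 and average turnaround 27; running the same jobs shortest-first drops the average wait to 3, which is the bias toward long bursts described on the next slide.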

Page 30: ITFN 2601 Introduction to Operating Systems

FCFS

Main advantage of FCFS is that it is easy to write and understand

No starvation; however, if one process gets into an infinite loop, it will run forever and shut out all the others

FCFS tends to excessively favor long bursts (CPU-bound processes)

Page 31: ITFN 2601 Introduction to Operating Systems

Shortest-job-first (SJF)

Whenever the CPU has to choose a burst to run, it chooses the shortest one

Non-preemptive policy

The preemptive version of SJF is called shortest-remaining-time-first (SRTF)

Starvation is possible
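A minimal sketch of the SJF selection step: scan the ready jobs and pick the one with the shortest (predicted) burst; the array layout is an assumption for the example:

/* Return the index of the ready job with the shortest burst, or -1 if none.
   ready[i] != 0 means job i is ready; burst[i] is its (predicted) length. */
int pick_sjf(const int burst[], const int ready[], int n) {
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (ready[i] && (best == -1 || burst[i] < burst[best]))
            best = i;
    }
    return best;   /* SRTF is the same idea applied to *remaining* time,
                      re-evaluated whenever a new job arrives */
}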

Page 32: ITFN 2601 Introduction to Operating Systems

Three-Level Scheduling

Admission Scheduler – which jobs to admit to the system

Memory Scheduler – Which processes are kept in memory and which on disk

CPU Scheduler – Pick ready process to run

Page 33: ITFN 2601 Introduction to Operating Systems

Round-Robin

Round-robin (RR) keeps all the bursts in a queue and runs the first one, like FCFS. But after a length of time q (called a quantum), if the current burst hasn't completed, it is moved to the tail of the queue and the next burst is started.

Page 34: ITFN 2601 Introduction to Operating Systems

Round Robin

An important preemptive policy

Essentially the preemptive version of FCFS

The key parameter is the quantum size q

When a process is put into the running state, a timer is set to q

If the timer goes off and the process is still running, the OS preempts the process

This process is moved to the ready state (the preempt arc in the state diagram)

The next job in the ready list (normally a queue) is selected to run
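A minimal sketch of the round-robin mechanism with a FIFO ready queue and a fixed quantum (q = 4 and the burst lengths are made-up values; a real scheduler is driven by the clock interrupt rather than a loop like this):

#include <stdio.h>

#define N 3
#define QUANTUM 4

int main(void) {
    int remaining[N] = { 10, 5, 8 };   /* remaining CPU time of each job */
    int queue[64], head = 0, tail = 0;
    int time = 0;

    for (int i = 0; i < N; i++) queue[tail++] = i;   /* initial ready queue */

    while (head != tail) {
        int job = queue[head++];                     /* dequeue first job   */
        int run = remaining[job] < QUANTUM ? remaining[job] : QUANTUM;
        time += run;
        remaining[job] -= run;
        if (remaining[job] > 0)
            queue[tail++] = job;                     /* preempted: requeue  */
        else
            printf("job %d finishes at time %d\n", job, time);
    }
    return 0;
}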

Page 35: ITFN 2601 Introduction to Operating Systems

Round Robin

As q gets large, RR approaches FCFS

As q gets small, RR approaches PS (processor sharing)

What q should we choose? Tradeoff:

A small q makes the system more responsive

A large q makes the system more efficient, since there is less switching

Page 36: ITFN 2601 Introduction to Operating Systems

Priority Scheduling

Always run the highest-priority burst

Preemptive or non-preemptive

Priorities can be assigned externally to processes based on their importance

Priorities can also be assigned (and changed) dynamically
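A minimal sketch of the dispatch step for priority scheduling, using the common convention that a smaller number means a higher priority (an assumption for this example):

/* Return the index of the highest-priority ready job, or -1 if none. */
int pick_priority(const int prio[], const int ready[], int n) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (ready[i] && (best == -1 || prio[i] < prio[best]))
            best = i;
    /* Dynamic schemes periodically boost (age) the priority of jobs
       that have waited a long time, to avoid starvation. */
    return best;
}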

Page 37: ITFN 2601 Introduction to Operating Systems

Other Interactive Scheduling

Multiple queues

Shortest process next

Guaranteed scheduling

Lottery scheduling

Fair-share scheduling

Page 38: ITFN 2601 Introduction to Operating Systems

Scheduler Goals

Generic goals:
Fairness of processor allocation
Enforcement of scheduling policies
Balance of utilization

Batch-based goals:
Maximize throughput of jobs
Minimize turnaround on jobs

Page 39: ITFN 2601 Introduction to Operating Systems

Scheduler Goals II

Interactive system goals:
Minimize response time for user I/O
User expectations should be met

Real-time system goals:
Deadlines must be met for process completion
System performance must be predictable

Page 40: ITFN 2601 Introduction to Operating Systems

Scheduling Algorithms (Batch)

FIFO (First In, First Out) - NON-PREEMPTIVE
Fairest
Low throughput
High turnaround

Shortest First - NON-PREEMPTIVE
High throughput
Low turnaround
Unfair for large jobs

Page 41: ITFN 2601 Introduction to Operating Systems

Scheduling Algorithms (Batch, cont)

Shortest Remaining - PREEMPTIVE
High turnaround on long jobs
Unfair for large jobs

Multi-Scheduling (CPU or memory limited)
High turnaround (disk swaps)
Throughput highly variable, probably low
Fairness highly variable

Page 42: ITFN 2601 Introduction to Operating Systems

Scheduling Algorithms (Interactive)

Round Robin - PREEMPTIVE
Fairest overall
Response time variable but finite

Priority Scheduling - PREEMPTIVE
Fair
“More fair” for users with higher priorities
Response time inverse to priority
Windows/Unix typically implement this

Page 43: ITFN 2601 Introduction to Operating Systems

Round Robin, Example

Page 44: ITFN 2601 Introduction to Operating Systems

Scheduling Algorithms (Real-Time)

Small jobs
High priority
Periodic/aperiodic

Schedulable? Iff the sum of the ratios of CPU time to period time is at most one:
Sum(CPU_i / Period_i) <= 1

Static/dynamic?
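A minimal sketch of the schedulability test above: sum the CPU-time-to-period ratios and check that the total does not exceed 1 (the task-set numbers are made-up):

#include <stdio.h>

/* Utilization test from the slide: the periodic tasks are schedulable
   if the sum of CPU-time / period ratios does not exceed 1. */
int schedulable(const double cpu[], const double period[], int n) {
    double u = 0.0;
    for (int i = 0; i < n; i++)
        u += cpu[i] / period[i];
    return u <= 1.0;
}

int main(void) {
    /* Example task set: e.g. 50 ms of CPU needed every 200 ms. */
    double cpu[]    = { 50, 30, 100 };
    double period[] = { 200, 100, 500 };
    printf("schedulable: %s\n",
           schedulable(cpu, period, 3) ? "yes" : "no");   /* 0.75 <= 1: yes */
    return 0;
}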

Page 45: ITFN 2601 Introduction to Operating Systems

Summary

Scheduler responsible for many goals

Scheduling algorithms complex

Know your math!