
Scheduling: Chapter 3

Process: Entity competing for resources.
Process states: New, ready, running, waiting, terminated, zombie (and perhaps more). See also Fig. 3.2 on page 87.

Partial state diagram
[Diagram: states Hold, Ready, Running, Waiting, Zombie, Terminated, with transitions numbered 1-8.]

States
- Hold: waiting for a time to gain entry
- Ready: can compute if the OS allows
- Running: currently executing
- Waiting: waiting for an event (e.g. an I/O to complete)
- Zombie: process has finished
- Terminated: process finished and its parent has waited on it

State Transitions
(1) gains entry to the system at a specified time
(2) OS gives the process the right to use the CPU
(3) OS removes the process's right to use the CPU (time's up)
(4) process makes a request (e.g. issues an I/O request) or issues a wait or pause, among other things
(5) event has occurred (e.g. I/O completed)
(6) process has finished
(7) parent waited on the process
(8) OS suspends the process (maybe too much activity and it must reduce the load)

High-level scheduling (also long-term): which programs gain entry to the system or exit it. Essentially transitions 1, 6, and 7 above.

Intermediate-level scheduling: which processes can compete for the CPU. Essentially transitions 4, 5, and 8 above.

Low-level scheduling (also short-term): who gets the CPU. Essentially transitions 2 and 3 above.

Goals and thoughts (these may be incompatible)
- fairness
- maximize throughput
- minimize turnaround time
- minimize response time
- consistency
- avoid loads that degrade the system
- keep resources busy (e.g. I/O controllers) to maximize concurrent activity
- high priority to interactive users
- deal with compute-bound vs. I/O-bound processes
- keep the CPU busy

Should all processes have the same priority? Should the OS distinguish between processes that have done a lot so far and those that have done little? Consider limits.

Nonpreemptive scheduling: when a process gets the CPU, it keeps it until done.

Preemptive scheduling: what the OS giveth, the OS can taketh away.

PCB (Process Control Block): every process has one, and it contains:
- state
- program counter
- CPU register values
- accounting information
- in general, the process context

Saving this information into one process's PCB and restoring it from another's is called a context switch; it happens when one process loses CPU control and another gains it.

See also task_struct, the Linux PCB (page 90 of the text), located at /usr/src/kernels/2.6.18-128.4.1.el5-xen-i686/include/linux/sched.h (line 834). Can copy the path to the command line by copying from this document and right-clicking the mouse at the command-line prompt.

Process lists are really PCB lists.

Can skip 3.4 (forks and IPC). Some of this we did (shared memory); some we'll do later (message passing).

Can skip 3.5 (more message passing). Can skip 3.6 (sockets and RPCs). Some of that's done in the networks course.

Chapter 4 deals largely with threads. I will postpone that until a little later, when I introduce Java threads and synchronization.

Chapter 5: CPU Scheduling

Typically programs alternate: CPU burst, I/O burst, CPU burst, I/O burst, etc. See Fig 5.1 on p. 168.

Compute bound: mostly CPU bursts (e.g. simulations, graphics)

I/O bound: Mostly I/O bursts (e.g. interactions, database)

Scheduling algorithms:

First come, first served (FCFS, also FIFO): the process that asks first gets the CPU first and keeps it until it is done or until it requests something it must wait for.
- Show Gantt chart on p. 173; shows avg wait time and turnaround time.
- Avg times vary according to the process at the front of the queue.
- Inappropriate for many environments: many processes could wait a long time behind a compute-bound process (bad if they're interactive or need to issue their own I/O).
- Sometimes used in conjunction with other methods.
- Might be useful in specialized environments where most tasks are compute bound (e.g. primarily simulations).

SJF (Shortest Job First)
- Orders processes by the length of their next CPU burst.
- Preemptive variant: if a new process enters, it may replace a currently running process.
- Can be useful if the OS wants to give high priority to a task likely to have a short CPU burst and, thus, keep the I/O controllers busy.
- May not know the length of the next CPU burst. Can estimate based on time limits in JCL (Job Control Language), or can predict burst length based on previous burst lengths and predictions.

Possible option: use an exponential average defined by

    tau(n+1) = a*t(n) + (1 - a)*tau(n)

where tau(n+1) is the predicted value of the next burst, t(n) is the length of the nth (most recent) burst, tau(n) is the previous prediction, and a is a constant with 0 <= a <= 1.

Expanding the recurrence:

    tau(n+1) = a*t(n) + (1 - a)*a*t(n-1) + ... + (1 - a)^j * a * t(n-j) + ... + (1 - a)^(n+1) * tau(0)

If a = 0, recent history has no effect; if a = 1, only the most recent burst matters. See Figure 5.3 for an example. See Gantt chart on page 176.

Priority scheduling: a priority is associated with each process, and it is scheduled accordingly. See Gantt chart on page 177.

Indefinite postponement, indefinite blocking, starvation: all terms for a process that may wait indefinitely due to low priority.

NOTE: the textbook cites a rumor that when the IBM 7094 at MIT was shut down in 1973, they found a low-priority process that had been there since 1967.

Can deal with this by periodically increasing the priorities of processes that are waiting. This is called aging.

Round Robin: processes just take turns. Gantt chart on page 178. The process at the front of the queue runs until
- it finishes,
- it issues a request for which it must wait (e.g. I/O), or
- its time quantum (maximum length of uninterrupted execution time) expires.

Quantum size is an issue.
- A large quantum looks more like FCFS: a process waits longer for "its turn".
- A small quantum generates frequent context switches (OS intervention). Since the OS uses the CPU a higher percentage of the time, the processes use it less.

[Figure: response time as a function of quantum size.]

Round Robin does not react much to a changing environment – for example more or fewer I/O requests

Treats all processes the same, which may or may not be appropriate.

I/O-bound tasks have the same priority as CPU-bound ones. Does that make sense?

Multilevel Feedback Queue Scheduling
- Multiple queues; the highest-priority queue has the shortest quantum, the lowest-priority queue the longest, with quanta ranging from small to large over all queues.
- Schedule from the highest-priority queue that has a ready process.
- A process runs until it finishes, it issues a request for which it must wait (e.g. I/O), or its time quantum (for that queue) expires.
- If it blocked: when ready again, it enters the next higher-priority queue (if there is one).
- If its quantum expired: it goes to the next lower-priority queue (if there is one).
- Interactive processes keep high priority; compute-bound processes typically end up with low priority.
- With mostly compute-bound processes, acts more like FIFO because of the longer quanta; with mostly I/O-bound processes, acts like Round Robin.
- Can react to a changing environment!

Real-time Scheduling

Hard real-time: MUST complete a task in a specified amount of time. Usually requires special hardware, since virtual memory, paging, and secondary storage can make the timing unpredictable.

Soft real-time: critical processes receive priority over non-critical ones. Can be implemented using multilevel feedback queues where the highest queues are reserved for the real-time processes.

Threads (just a couple of highlights from Chapter 4)

Thread: lightweight process. Threads in the same process share code, data, files, etc., but have their own stacks and registers.

Note the examples on the web site (thread.c and process.c).

Kernel threads: managed by the OS kernel.
User threads: managed by a thread library (no kernel support); less kernel overhead.

User-kernel thread relationships

Many-to-one: many user threads map to one kernel thread.
- If one thread blocks, the entire process blocks.
- Cannot run multiple threads in parallel.
- Examples: Green threads (from Solaris), GNU portable threads.

One-to-one: each user thread maps to its own kernel thread.
- User threads can operate more independently.
- More flexible, but more burden on the kernel.
- Typical of Windows and Linux.

Three main thread libraries:
- POSIX (Portable Operating System Interface) threads: an IEEE interface standard with worldwide acceptance. [http://standards.ieee.org/regauth/posix/index.html] Also [https://computing.llnl.gov/tutorials/pthreads/]
- Win32 threads
- Java threads (cover later)

Multiple-processor scheduling

Asymmetric multiprocessing: all scheduling routines run on the master processor.

Symmetric multiprocessing (SMP): each processor is self-scheduling, with either a common queue for all processors or one queue per processor.

We'll consider SMP. With a common queue for all processors, there are issues of multiple processors accessing and updating a common data structure. There are many issues associated with this type of concurrency, which we cover in the next chapter.

Processor Affinity

May want to keep a process associated with the same processor: if a process moves to another processor, the current processor's cache contents are invalidated and the new processor's cache must be repopulated.
- Soft affinity: try, but no guarantee.
- Hard affinity: guaranteed.

Load balancing

Keep the workload balanced, i.e. avoid idle processors if there are ready processes.
- More difficult if each processor has its own ready queue (typical of most OSs).
- May also run counter to processor affinity.

Push migration: an OS task periodically checks processor queues and may redistribute tasks to balance the load.

Pull migration: an idle processor pulls a task from another processor's queue.

Linux, for example, does push migration several times per second and pull migration if a queue is empty.

Multicore processors (not in text)

One chip, multiple processor cores, each with its own register set. One thread per core seems logical but presents problems:
- Memory stall: the processor waits for data to become available, as may happen on a cache or TLB miss. A waiting processor means no work is being done.
- Multithreaded processor core: two or more hardware threads assigned to a single core. The core can interleave them; when one thread stalls, the processor switches to the other and executes its instruction cycle.
- From the OS point of view, each hardware thread is a separate core capable of running a software thread; i.e. the OS may see 4 logical processors on a dual-core chip.

Windows XP: read through this. Some material on pages 833-834 and 191-192.
- Uses a 32-level priority scheme (the top half is soft real-time). Basically a multilevel feedback queue.
- Each thread has a base priority, and its priority cannot fall below that.
- CTRL-ALT-DEL brings up the task manager; right-click on a task to see its priority and affinity.
- Threads get a priority boost when a wait is over; the amount of the boost depends on what the wait was for. Threads waiting for keyboard (or mouse) I/O get a larger boost than if they were waiting for disk I/O. A boost will NOT put a thread into the real-time range.

Linux: some material on page 796 and pages 193-194.
- Processes have credits (priority); a high number means low priority.
- Enter the Linux top command to see processes and their priorities.
- Processes also have a nice value, which can affect scheduling. See info nice.
- Nice values range from -20 (least nice) to 19 (nicest).
- There's also a nice command, which runs a process with a specific nice value, and a renice command, which can change the nice value of a running process. Its format is renice n pid, where n is the new nice value.
- Usually need to be root to get more favorable treatment.

Example

In the scheduling directory, run the script runall; then enter the top command to see the processes. On another machine, log in as root and enter the command renice -20 pid or renice 19 pid. There may or may not be much difference, since nice values are suggestions to the Linux scheduler (see info nice). May have to do both renice commands.

Note: instead of entering a.out & you can use nice -n value a.out & (to run with a different nice value).

killall a.out will kill the processes