
Copyright © 2000, Daniel W. Lewis. All Rights Reserved.

CHAPTER 7

CONCURRENT SOFTWARE


Program Organization of a Foreground/Background System

[Figure: a foreground/background system. The main program starts, initializes, and then waits for interrupts; each interrupt invokes the ISR for Task #1, #2, or #3, each of which ends with an IRET.]


Foreground/Background System

• Most of the actual work is performed in the "foreground" ISRs, with each ISR processing a particular hardware event.

• Main program performs initialization and then enters a "background" loop that waits for interrupts to occur.

• Allows the system to respond to external events with a predictable amount of latency.
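A minimal sketch of this organization in C, in the Borland-style DOS environment used elsewhere in this chapter. The vector number, port addresses, and the Process_Task1() routine are placeholders, not part of the original example:

#include <dos.h>                      /* Borland C: interrupt keyword, setvect(), inportb(), enable() */

typedef unsigned char BYTE8 ;         /* as used throughout this chapter              */
#define DATA_PORT    0x300            /* placeholder device data port                 */
#define PIC_PORT     0x20             /* master 8259A PIC command port                */
#define EOI_CMD      0x20             /* non-specific end-of-interrupt command        */
#define TASK1_VECTOR 0x0D             /* placeholder interrupt vector for task #1     */

void Process_Task1(BYTE8 data) { /* ... the real work for this event ... */ }

void interrupt Task1_ISR(void)        /* foreground: one ISR per hardware event       */
{
    BYTE8 data = inportb(DATA_PORT) ; /* service the device that interrupted          */
    Process_Task1(data) ;             /* most of the actual work happens here         */
    outportb(PIC_PORT, EOI_CMD) ;     /* send EOI command to the PIC                  */
}

void main(void)
{
    setvect(TASK1_VECTOR, Task1_ISR) ;   /* initialize: install the ISR(s)            */
    enable() ;                           /* then do nothing but...                    */
    for (;;) ;                           /* ...wait for interrupts (background loop)  */
}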


Task State and Serialization

unsigned int byte_counter ;

void Send_Request_For_Data(void)
{
    outportb(CMD_PORT, RQST_DATA_CMD) ;   /* ask the device to send its data       */
    byte_counter = 0 ;                    /* reset the task state                  */
}

void interrupt Process_One_Data_Byte(void)
{
    BYTE8 data = inportb(DATA_PORT) ;     /* read the byte that caused the interrupt */
    switch (++byte_counter)               /* the state selects how this byte is used */
    {
        case 1: Process_Temperature(data) ; break ;
        case 2: Process_Altitude(data) ;    break ;
        case 3: Process_Humidity(data) ;    break ;
        ...
    }
}


ISR with Long Execution Time

[Figure: an ISR that does all the work itself. After STI, it inputs the data, processes it, waits in a loop until the output device is ready, outputs the data, sends the EOI command to the PIC, and returns with IRET.]


Removing the Waiting Loop from the ISR

[Figure: the work is split between the ISR and the background. The input-ready ISR inputs the data, processes it, enqueues it in a FIFO queue, sends the EOI command to the PIC, and returns with IRET (with STI executed early in the ISR). The background, after initializing the FIFO queue, loops forever checking whether data has been enqueued and the output device is ready; if so, it dequeues the data and outputs it.]


Interrupt-Driven Output

[Figure: two ISRs connected by a FIFO queue. The input-ready ISR inputs the data, processes it, enqueues it, sends the EOI command to the PIC, and returns with IRET. The output-ready ISR checks whether data has been enqueued; if so it dequeues one item and outputs it, then sends the EOI command to the PIC and returns with IRET. Both execute STI early to allow other interrupts.]
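A sketch of this structure in C. The FIFO type and its Enqueue/Dequeue/Queue_Empty operations, the Process() routine, and the output port are assumed helpers (reusing BYTE8 and the PIC constants from the earlier sketch); they are not part of the original figure:

typedef struct fifo FIFO ;            /* hypothetical FIFO queue type; the queue       */
extern FIFO  fifo ;                   /*   operations themselves are not shown         */
extern void  Enqueue(FIFO *, BYTE8) ;
extern BYTE8 Dequeue(FIFO *) ;
extern int   Queue_Empty(FIFO *) ;
extern BYTE8 Process(BYTE8) ;         /* whatever processing the input data needs      */
#define OUT_PORT 0x301                /* placeholder output device data port           */

void interrupt InputReady_ISR(void)   /* producer                                      */
{
    BYTE8 data = inportb(DATA_PORT) ; /* input data (removes the interrupt request)    */
    outportb(PIC_PORT, EOI_CMD) ;     /* send EOI command to the PIC                   */
    enable() ;                        /* STI: allow other interrupts during the work   */
    Enqueue(&fifo, Process(data)) ;   /* process data and enqueue the result           */
}

void interrupt OutputReady_ISR(void)  /* consumer                                      */
{
    outportb(PIC_PORT, EOI_CMD) ;
    enable() ;
    if (!Queue_Empty(&fifo))                   /* data enqueued?                       */
        outportb(OUT_PORT, Dequeue(&fifo)) ;   /* dequeue data and output it           */
}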


Kick Starting Output

[Figure: the input-ready ISR inputs and processes the data, enqueues it, and calls the SendData subroutine (the "kick start") before sending the EOI command to the PIC and returning with IRET. The output-ready ISR also calls SendData, then sends the EOI command and returns with IRET. SendData returns immediately if the output device is busy; otherwise, if data is enqueued, it dequeues and outputs one item and sets the busy flag, and if the queue is empty it clears the busy flag.]
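A sketch of the kick-start logic in C, reusing the hypothetical FIFO helpers and port names from the previous sketch. The busy flag and the placement of the busy test inside SendData() are one reasonable reading of the figure, not the author's exact code:

static int output_busy = 0 ;            /* nonzero while the output device is working  */

void SendData(void)                     /* called from both ISRs                       */
{
    if (output_busy) return ;           /* output device busy? then nothing to do yet  */
    if (!Queue_Empty(&fifo))            /* data enqueued?                              */
    {
        outportb(OUT_PORT, Dequeue(&fifo)) ;  /* dequeue data and output it            */
        output_busy = 1 ;               /* set busy flag                               */
    }
    else output_busy = 0 ;              /* clear busy flag                             */
}

void interrupt InputReady_ISR(void)
{
    BYTE8 data = inportb(DATA_PORT) ;
    outportb(PIC_PORT, EOI_CMD) ;
    enable() ;
    Enqueue(&fifo, Process(data)) ;
    SendData() ;                        /* kick start: starts output if the device is idle */
}

void interrupt OutputReady_ISR(void)    /* the device finished the previous item       */
{
    outportb(PIC_PORT, EOI_CMD) ;
    output_busy = 0 ;                   /* device is idle again                        */
    SendData() ;                        /* send the next item, if any                  */
}

Without the kick start, the first item would sit in the queue forever, because an output-ready interrupt only occurs after something has already been sent.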


Preventing Interrupt Overrun

[Figure: an input-ready ISR guarded by an ISR busy flag. If the flag is already set, the ISR ignores this interrupt (interrupts are re-enabled by the IRET). Otherwise it sets the ISR busy flag, inputs the data (which removes the interrupt request that invoked the ISR), sends the EOI command to the PIC (so that when interrupts are re-enabled by the STI below, interrupts from lower priority devices, and from this device too, are allowed), executes STI to allow interrupts from any device, processes the data, writes the result to the output queue and kick starts output, clears the ISR busy flag, and returns with IRET.]
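The same guard in C, as a sketch that continues the names from the earlier sketches; whether the ignored interrupt still receives an EOI is an assumption made here:

static int isr_busy = 0 ;               /* set while a previous activation is still working */

void interrupt InputReady_ISR(void)
{
    BYTE8 data ;

    if (isr_busy)                       /* overrun: a previous activation was preempted     */
    {
        outportb(PIC_PORT, EOI_CMD) ;   /* ignore this interrupt; the IRET re-enables       */
        return ;                        /*   interrupts                                     */
    }
    isr_busy = 1 ;                      /* set ISR busy flag                                */
    data = inportb(DATA_PORT) ;         /* input data: removes the request that invoked us  */
    outportb(PIC_PORT, EOI_CMD) ;       /* EOI: this and lower priority devices may         */
                                        /*   interrupt once interrupts are re-enabled       */
    enable() ;                          /* STI: allow interrupts from any device            */
    Enqueue(&fifo, Process(data)) ;     /* process data, write result to the output queue,  */
    SendData() ;                        /*   and kick start                                 */
    isr_busy = 0 ;                      /* clear ISR busy flag                              */
}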


Preventing Interrupt Overrun

[Figure: an input-ready ISR that prevents overrun by masking its own interrupt. It inputs the data (which removes the interrupt request that invoked the ISR), sets the mask bit for this device in the 8259 PIC (disabling future interrupts from this device), sends the EOI command to the PIC (allowing interrupts from lower priority devices), executes STI (allowing interrupts from higher priority devices), processes the data, writes the result to the output queue and kick starts output, then clears the mask bit for this device in the 8259 PIC (re-enabling future interrupts from this device) and returns with IRET.]
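A sketch of this alternative in C, again continuing the earlier names, and assuming the device's request line is on the master 8259A PIC, whose interrupt mask register is at I/O port 0x21; the IRQ number is a placeholder:

#define PIC_IMR 0x21                    /* master 8259A interrupt mask register              */
#define MY_IRQ  5                       /* placeholder: this device's request line           */

void interrupt InputReady_ISR(void)
{
    BYTE8 data = inportb(DATA_PORT) ;                       /* removes the interrupt request */
    outportb(PIC_IMR, inportb(PIC_IMR) | (1 << MY_IRQ)) ;   /* set mask bit: no further      */
                                                            /*   interrupts from this device */
    outportb(PIC_PORT, EOI_CMD) ;       /* allow interrupts from lower priority devices      */
    enable() ;                          /* STI: allow interrupts from higher priority devices */
    Enqueue(&fifo, Process(data)) ;     /* process data, queue the result...                 */
    SendData() ;                        /*   ...and kick start                               */
    outportb(PIC_IMR, inportb(PIC_IMR) & ~(1 << MY_IRQ)) ;  /* clear mask bit: re-enable     */
}                                                           /*   interrupts from this device */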


Moving Work into Background

• Move non-time-critical work (such as updating a display) into a background task.

• Foreground ISR writes data to queue, then background removes and processes it.

• An alternative to ignoring one or more interrupts as the result of input overrun.


Limitations

• The best possible performance requires moving as much work as possible into the background.

• The background becomes a collection of queues and associated routines that process the data.

• This optimizes the latency of the individual ISRs, but the background then begs for a managed allocation of processor time.


Multi-Threaded Architecture

[Figure: a multi-threaded architecture. Each ISR feeds its own queue; the queues are serviced by multiple background threads, all built on a multi-threaded run-time function library (the real-time kernel).]


Thread Design

• Threads usually perform some initialization and then enter an infinite processing loop.

• At the top of the loop, the thread relinquishes the processor while it waits for data to become available, an external event to occur, or a condition to become true.
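A sketch of that shape, written against the µC/OS-II queue service listed later in this chapter (OSQPend); the queue pointer and the two routines it calls are assumptions:

void Display_Thread(void *data)              /* one background thread                  */
{
    BYTE8 err ;
    void *msg ;

    Init_Display() ;                         /* one-time initialization (assumed)      */
    for (;;)                                 /* infinite processing loop               */
    {
        msg = OSQPend(display_queue, 0, &err) ;  /* relinquish the processor until     */
                                                 /*   data becomes available           */
        Update_Display(msg) ;                /* then process it (assumed routine)      */
    }
}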


Concurrent Execution of Independent Threads

• Each thread runs as if it had its own CPU separate from those of the other threads.

• Each thread is designed, programmed, and behaves as if it were the only thread running.

• Partitioning the background into a set of independent threads simplifies each thread and thus reduces total program complexity.


Each Thread Maintains Its Own Stack and Register Contents

[Figure: the context of each thread (Thread 1 through Thread N) consists of its own stack and its own copy of the CPU registers: CS:EIP, SS:ESP, EAX, EBX, ..., EFlags.]


Concurrency

• Only one thread runs at a time while others are suspended.

• Processor switches from one thread to another so quickly that it appears all threads are running simultaneously. Threads run concurrently.

• The programmer assigns a priority to each thread, and the scheduler uses this to determine which thread to run next.


Real-Time Kernel

• Threads call a library of run-time routines (known as the real-time kernel) that manages resources.

• The kernel provides mechanisms to switch between threads, and for coordination, synchronization, communication, and priority management.


Context Switching

• Each thread has its own stack and a special region of memory referred to as its context.

• A context switch from thread "A" to thread "B" first saves all CPU registers in context A, and then reloads all CPU registers from context B.

• Since the CPU registers include SS:ESP and CS:EIP, reloading context B reactivates thread B's stack and returns to where it left off when it was last suspended.
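As an illustration only, a context for a 32-bit x86 thread might be declared as below; real kernels differ in what they store here versus on the thread's own stack, so the layout and MAX_THREADS are assumptions:

typedef unsigned short WORD16 ;       /* integer types as used in this chapter         */
typedef unsigned long  DWORD32 ;
#define MAX_THREADS 8                 /* placeholder                                   */

typedef struct {
    WORD16  ss ;                      /* SS:ESP locate the thread's private stack      */
    DWORD32 esp ;
    WORD16  cs ;                      /* CS:EIP say where the thread resumes           */
    DWORD32 eip ;
    DWORD32 eax, ebx, ecx, edx ;      /* general registers                             */
    DWORD32 esi, edi, ebp ;
    DWORD32 eflags ;                  /* flags, including the interrupt-enable flag    */
} CONTEXT ;

CONTEXT context[MAX_THREADS] ;        /* one saved context per thread                  */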


Context Switching

[Figure: a timeline for threads A and B. Thread A executes until its context is saved and context B is restored; thread B then executes while A is suspended. Later, context B is saved and context A restored, so thread A resumes and thread B is suspended.]


Non-Preemptive Multi-Tasking

• Threads call a kernel routine to perform the context switch.

• Thread relinquishes control of processor, thus allowing another thread to run.

• The context switch call is often referred to as a yield, and this form of multi-tasking is often referred to as cooperative multi-tasking.


Non-Preemptive Multi-Tasking

• When an external event occurs, the processor may be executing a thread other than the one designed to process the event.

• The first opportunity to execute the needed thread will not occur until current thread reaches next yield.

• When yield does occur, other threads may be scheduled to run first.

• In most cases, this makes it impossible or extremely difficult to predict the maximum response time of non-preemptive multi-tasking systems.


Non-Preemptive Multi-Tasking

• Programmer must call the yield routine frequently, or else system response time may suffer.

• Yields must be inserted in any loop where a thread is waiting for some external condition.

• Yield may also be needed inside other loops that take a long time to complete (such as reading or writing a file), or distributed periodically throughout a lengthy computation.
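For example, a waiting loop in a cooperative system might look like the sketch below, using Multi-C's MtCYield() from the Scheduling Services listed later in this chapter; device_ready() is an assumed status test:

while (!device_ready())          /* waiting on an external condition...                 */
{
    MtCYield() ;                 /* ...so give every other ready thread a chance to run */
}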


Context Switching in a Non-Preemptive System

[Figure: a thread starts, performs its initialization, and enters an endless loop of data processing. Whenever it must wait, it yields to the other threads: the scheduler selects the highest priority thread that is ready to run, and if that is not the current thread, the current thread is suspended and the new thread resumed.]


Preemptive Multi-Tasking

• Hardware interrupts trigger the context switch.

• When an external event occurs, a hardware ISR is invoked.

• The ISR gets the data from the I/O device and makes a kernel call to enqueue it, causing the state of the thread that is pending on the queue to change from pending to ready. The ISR then calls the scheduler to context switch to the highest priority thread that is ready to run (see the sketch below).

• Significantly improves system response time.
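A sketch of such an ISR using the µC/OS-II services listed later in this chapter (OSIntEnter, OSQPost, OSIntExit); the queue pointer, one-byte buffer, and port names are assumptions:

OS_EVENT *rx_queue ;                    /* created elsewhere with OSQCreate             */
static BYTE8 rx_byte ;                  /* assumed one-byte message buffer              */

void interrupt InputReady_ISR(void)
{
    OSIntEnter() ;                      /* tell the kernel an ISR has started           */
    rx_byte = inportb(DATA_PORT) ;      /* get the data from the I/O device             */
    OSQPost(rx_queue, &rx_byte) ;       /* enqueue it: the pending thread becomes ready */
    outportb(PIC_PORT, EOI_CMD) ;       /* send EOI command to the PIC                  */
    OSIntExit() ;                       /* may context switch to the highest priority   */
}                                       /*   thread that is now ready to run            */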


Preemptive Multi-Tasking

• Eliminates the programmer's obligation to include explicit calls to the kernel to perform context switches within the various background threads.

• Programmer no longer needs to worry about how frequently the context switch routine is called; it's called only when needed - i.e., in response to external events.


Preemptive Context Switching

[Figure: thread A is executing when a hardware interrupt invokes an ISR. The ISR processes the interrupt request and then performs a context switch: the scheduler selects the highest priority thread that is ready to run, and if that is not the current thread, the current thread is suspended and the new thread resumed. Here thread A becomes suspended and thread B, previously suspended, is executing after the IRET.]


Critical Sections

• Critical section: A code sequence whose proper execution is based on the assumption that it has exclusive access to the shared resources that it is using during the execution of the sequence.

• Critical sections must be protected against preemption, or else integrity of the computation may be compromised.


Atomic Operations

• Atomic operations are those that execute to completion without preemption.

• Critical sections must be made atomic:
 – Disable interrupts for their duration, or
 – Acquire exclusive access to the shared resource through arbitration before entering the critical section and release it on exit.


Threads, ISRs, and Sharing

1. Between a thread and an ISR: Data corruption may occur if the thread's critical section is interrupted to execute the ISR.

2. Between 2 ISRs: Data corruption may occur if the critical section of one ISR can be interrupted to execute the other ISR.

3. Between 2 threads: Data corruption may occur unless execution of their critical sections is coordinated.


Shared Resources

• A similar situation applies to other kinds of shared resources - not just shared data.

• Consider two or more threads that want to simultaneously send data to the same (shared) disk, printer, network card, or serial port. If access is not arbitrated so that only one thread uses the resource at a time, the data streams might get mixed together, producing nonsense at the destination.


Uncontrolled Access to a Shared Resource (the Printer)

[Figure: Thread A sends "HELLO\n" and Thread B sends "goodbye" to a shared printer at the same time; without arbitration the printer receives the interleaved output "HgoELodLObye".]


Protecting Critical Sections

• Non-preemptive system: Programmer has explicit control over where and when a context switch occurs.
 – Except for ISRs!

• Preemptive system: Programmer has no control over the time and place of a context switch.

• Protection options:
 – Disabling interrupts
 – Spin lock
 – Mutex
 – Semaphore


Disabling Interrupts

• The overhead required to disable (and later re-enable) interrupts is negligible.
 – Good for short critical sections.

• Disabling interrupts during the execution of a long critical section can significantly degrade system response time.
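A sketch of a short critical section protected this way, using the same disable()/enable() wrappers for CLI/STI that appear in the spin-lock code on the next slide; the shared variables are hypothetical:

disable() ;                      /* interrupts off: the next statements execute atomically */
shared_total += new_sample ;     /* critical section: read-modify-write of data that an    */
sample_count++ ;                 /*   ISR also updates                                      */
enable() ;                       /* re-enable interrupts as soon as possible                */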


Spin Locks

[Figure: each thread spins on a "Flag set?" test, sets the flag, executes its critical section, and then clears the flag.]

If the flag is set, another thread is currently using the shared memory and will clear the flag when done.

Spin-lock in C:

do {
    disable() ;            /* test and set the flag atomically            */
    ok = !flag ;           /* ok is TRUE only if the flag was clear       */
    flag = TRUE ;
    enable() ;
} while (!ok) ;            /* keep trying until the flag has been acquired */

/* ... critical section ... */

flag = FALSE ;             /* release the flag                            */

Spin-lock in assembly:

L1:   MOV   AL,1
      XCHG  [_flag],AL      ; atomic test-and-set of the flag
      OR    AL,AL
      JNZ   L1              ; spin while the flag was already set

      ; ... critical section ...

      MOV   BYTE [_flag],0  ; release the flag


Spin Locks vs. Semaphores

• Non-preemptive system requires kernel call inside spin lock loop to let other threads run.

• Context-switching during spin lock can be a significant overhead (saving and restoring threads’ registers and stack).

• Semaphores eliminate the context switches back to the waiting thread until the flag is released.


Semaphores

[Figure: each thread brackets its critical section with a semaphore "pend" before it and a semaphore "post" after it.]

The kernel suspends this thread if another thread has possession of the semaphore; this thread does not get to run again until the other thread releases the semaphore with a "post" operation.
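A sketch of that pattern using the µC/OS-II semaphore services listed later in this chapter; the semaphore pointer and error variable are names assumed for the example:

OS_EVENT *printer_sem ;                 /* created once, e.g. during initialization     */
BYTE8     err ;

printer_sem = OSSemCreate(1) ;          /* binary semaphore, initially available        */

/* ... then, in each thread that uses the shared printer: */
OSSemPend(printer_sem, 0, &err) ;       /* "pend": wait (forever) for possession        */
/* ... critical section: send this thread's output to the printer ... */
OSSemPost(printer_sem) ;                /* "post": release the semaphore                */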


Kernel Services

• Initialization
• Threads
• Scheduling
• Priorities
• Interrupt Routines
• Semaphores
• Mailboxes
• Queues
• Time


Initialization Services

Multi-C: n/a

µC/OS-II:
OSInit() ;
OSStart() ;


Thread Services

Multi-C:
ECODE MtCCoroutine(void (*fn)(…)) ;
ECODE MtCSplit(THREAD **new, MTCBOOL *old) ;
ECODE MtCStop(THREAD *) ;

µC/OS-II:
BYTE8 OSTaskCreate(void (*fn)(void *), void *data, void *stk, BYTE8 prio) ;
BYTE8 OSTaskDel(BYTE8 prio) ;
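A sketch of creating a task with the µC/OS-II call listed above. The stack array, its size, the priority value, and passing the top of the stack (the stack grows downward on the x86) are assumptions for illustration:

#define MY_TASK_PRIO 10                   /* placeholder priority                       */
static WORD16 my_task_stack[256] ;        /* placeholder stack space for the task       */

void MyTask(void *data)
{
    /* ... initialization, then the thread's infinite processing loop ... */
    for (;;) { }
}

/* ... during system initialization: */
OSTaskCreate(MyTask, (void *) 0, &my_task_stack[255], MY_TASK_PRIO) ;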


Scheduling Services

Multi-C:
ECODE MtCYield(void) ;

µC/OS-II:
void OSSchedLock(void) ;
void OSSchedUnlock(void) ;
BYTE8 OSTimeTick(BYTE8 old, BYTE8 new) ;
void OSTimeDly(WORD16) ;


Priority Services

Multi-C:
ECODE MtCGetPri(THREAD *, MTCPRI *) ;
ECODE MtCSetPri(THREAD *, MTCPRI) ;

µC/OS-II:
BYTE8 OSTaskChangePrio(BYTE8 old, BYTE8 new) ;


ISR Services

Multi-C: n/a

µC/OS-II:
OS_ENTER_CRITICAL() ;
OS_EXIT_CRITICAL() ;
void OSIntEnter(void) ;
void OSIntExit(void) ;


Semaphore Services

Multi-C:
ECODE MtCSemaCreate(SEMA_INFO **) ;
ECODE MtCSemaWait(SEMA_INFO *, MTCBOOL *) ;
ECODE MtCSemaReset(SEMA_INFO *) ;
ECODE MtCSemaSet(SEMA_INFO *) ;

µC/OS-II:
OS_EVENT *OSSemCreate(WORD16) ;
void OSSemPend(OS_EVENT *, WORD16, BYTE8 *) ;
BYTE8 OSSemPost(OS_EVENT *) ;


Mailbox Services

Multi-C: n/a

µC/OS-II:
OS_EVENT *OSMboxCreate(void *msg) ;
void *OSMboxPend(OS_EVENT *, WORD16, BYTE8 *) ;
BYTE8 OSMboxPost(OS_EVENT *, void *) ;


Queue Services

Multi-C:
ECODE MtCReceive(void *msgbfr, int *msgsize) ;
ECODE MtCSend(THREAD *, void *msg, int size, int pri) ;
ECODE MtCASend(THREAD *, void *msg, int size, int pri) ;

µC/OS-II:
OS_EVENT *OSQCreate(void **start, BYTE8 size) ;
void *OSQPend(OS_EVENT *, WORD16, BYTE8 *) ;
BYTE8 OSQPost(OS_EVENT *, void *) ;


Time Services

Multi-C: n/a

µC/OS-II:
DWORD32 OSTimeGet(void) ;
void OSTimeSet(DWORD32) ;
