
Page 1: Unit 4 - Embedded System

I/O PROGRAMMING AND SCHEDULE MECHANISM [UNIT-IV] V.V.C.E.T

Department of EEE Page 1

EMBEDDED SYSTEMS

DEPARTMENT OF ELECTRICAL AND ELECTRONICS ENGINEERING

UNIT - IV

I/O PROGRAMMING AND SCHEDULE MECHANISM

• Intel I/O instructions: transfer rate, latency; interrupt-driven I/O; non-maskable interrupts; software interrupts; writing interrupt service routines in C and assembly languages; preventing interrupt overrun; disabling interrupts.
• Multithreaded programming: context switching, preemptive and non-preemptive multitasking, semaphores.
• Scheduling: thread states, pending threads, context switching, round-robin scheduling, priority-based scheduling, assigning priorities, deadlock, watchdog timers.

Prepared by

M.Sujith,

Lecturer,

Department of Electrical and Electronics Engineering,

Vidyaa Vikas College of Engineering and Technology.

HOD/EEE


DEVICE DRIVER AND ISR

Codes for an embedded processor's device handling:
1. Configuring
2. Activating
3. Device driver ISR
4. Resetting

Programmed I/O (busy-and-wait) method for ports and devices, and the need for interrupt-driven I/O

Programmed I/O approach for ports and devices
• The processor is continuously busy executing the program related to input or output from the port and waits for the input to be ready or the output to complete.
• The processor is continuously busy executing the program related to device functions and waits for the device status to be ready or the functions to complete.

Example: a 64-kbps UART input
• When a UART transmits in an 11-bit-per-character format, the network transmits at most 64 kbps ÷ 11 ≈ 5818 characters per second, which means a character is expected every 171.9 µs.
• The receiver port must be checked within each 171.9 µs to find and read the next character, assuming that all the received characters arrive in succession without any time gap in between.

Format of bits in the UART protocol

Ports A and B with no interrupt generation or interrupt service (handling) mechanism
• Let port A be in a PC, and port B be its modem input, which puts the characters on the telephone line.
• Let In_A_Out_B be a routine that receives an input character from port A and retransmits the character to port B output.


In_A_Out_B routine
• Has to cyclically call the following steps a to e, executing the cycle of functions i to v, thus ensuring that the modem at port A never misses reading a character.

Programmed I/O method: network driver program In_A_Out_B without interrupts

In_A_Out_B routine

• Call function i

• Call function ii

• Call function iii

• Call function iv

• Call function v

• Loop back to step 1

Steps a, b, c, d and e


• Step a: Function i: Check for a character at port A; if not available, then wait.
• Step b: Function ii: Read the port A byte (a character of the message) and return to the step a instruction, which will then call function iii.
• Step c: Function iii: Decrypt the message and return to the step a instruction, which will then call function iv.
• Step d: Function iv: Encode the message and return to the step a instruction, which will then call function v.
• Step e: Function v: Transmit the encoded message to port B and return to the last instruction of step a, which will start step a from the beginning.

Step a

• Does polling. Polling a port means finding the status of the port: whether it is ready with a character (byte) at input or not.
• Polling must recur within 171.9 µs, because characters are expected at 64 kbps in the 11-bit format.

Condition in which no character is missed

If the program instructions in the four steps b, c, d and e (functions ii to v) take a total running time of less than 171.9 µs, then the above programmed I/O method works.
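The busy-wait cycle of steps a to e can be sketched in C. The ports and the decrypt/encode functions are simulated here (the names and the XOR placeholder transforms are assumptions for illustration, not the text's actual routines); on real hardware, port_a_ready would poll a device status register, and the loop would run forever.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simulated ports: a sketch only; real code would read device
   registers at fixed addresses. */
static const uint8_t port_a_data[] = "HELLO";   /* characters arriving at port A */
static size_t port_a_pos;
static uint8_t port_b_data[16];                 /* characters sent out of port B */
static size_t port_b_pos;

static int port_a_ready(void)    { return port_a_pos < sizeof port_a_data - 1; }
static uint8_t port_a_read(void) { return port_a_data[port_a_pos++]; }  /* function ii */
static uint8_t decrypt(uint8_t c){ return c ^ 0x20; }                   /* function iii (placeholder) */
static uint8_t encode(uint8_t c) { return c ^ 0x20; }                   /* function iv (placeholder) */
static void port_b_write(uint8_t c) { port_b_data[port_b_pos++] = c; }  /* function v */

/* The programmed-I/O cycle: steps a..e repeated; here the loop ends
   when the simulated input is exhausted instead of running forever. */
void in_a_out_b(void)
{
    while (port_a_ready()) {       /* step a: poll (function i) */
        uint8_t c = port_a_read(); /* step b */
        c = decrypt(c);            /* step c */
        c = encode(c);             /* step d */
        port_b_write(c);           /* step e, then back to step a */
    }
}
```

The processor does nothing useful while the polling loop in step a spins, which is exactly the wasted time the problems below describe.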

Problems with the programmed I/O approach for ports and devices

1. (a) The program must switch to execute the In_A_Out_B cycle of steps a to e within a period of less than 171.9 µs. (b) The programmer must ensure that the steps of In_A_Out_B and any other device program steps never exceed this time.

2. When the characters are not received at port A in regular succession, the waiting period during step a for polling the port can be very significant.

3. Wastage of processor time during the waiting periods is the most significant disadvantage of this approach.

4. When other ports and devices are also present, the programming problem is to poll each port and device, and to ensure that the program switches to execute In_A_Out_B step a and to poll each port or device on time, and then executes each service routine related to the functions of the other ports and devices within specific time intervals, such that each one is polled on time.

5. The program and functions are processor- and device-specific in this busy-wait approach; all system functions must execute in synchronization, and timings are completely dependent on the periods of software execution.

I/O based on an interrupt from port A
• Instead of continuously checking for characters at port A by executing function i, step a can be invoked only when the modem receives an input character, sets a status bit in its status register and interrupts port A. The interrupt should be generated by the port hardware.


• In response to the interrupt, an interrupt service routine ISR_PortA_Character executes: an efficient solution in place of the wait at step a (polling for an input character).

Application of programmed I/O

In the case of a single-purpose processor and dedicated I/O or device functions, with continuous device-status polling.

INTERRUPT AND INTERRUPT SERVICE ROUTINE CONCEPT

• Interrupt means an event which invites the attention of the processor on the occurrence of some action at hardware, or on a software interrupt instruction event.

Action on interrupt
In response to the interrupt, the routine or program which is presently running is interrupted, and an interrupt service routine (ISR) executes.

Interrupt service routine
An ISR is also called a device driver in the case of devices, and an exception, signal or trap handler in the case of software interrupts.

Interrupt approach for port or device functions
• The processor executes the program related to input or output from the port or device, or related to a device function, on an interrupt. This program is called the interrupt service routine, signal handler, trap handler, exception handler or device driver. The processor does not wait and look for the input to be ready, the output to complete, or the device status to be ready or set.

Hardware interrupt examples
• A device or port generates an interrupt when it is ready, when it completes the assigned action, when a timer overflows, when a time at the timer equals a preset time in a compare register, on setting a status flag (for example, on timer overflow, compare or capture), or on a mouse click in a computer.

• A hardware interrupt generates a call to an ISR.

Software interrupt examples

• When a software run-time exception condition occurs (for example, division by 0, overflow, or an illegal opcode is detected), the processor hardware generates an interrupt, called a trap, which calls an ISR.

• When a software run-time exception condition defined in a program occurs, a software interrupt instruction (SWI) is executed: this is called a software interrupt, exception or signal, and it calls an ISR.


Software interrupt

When a device function is to be invoked, for example open (initialize/configure), read, write or close, a software interrupt instruction (SWI) is executed: this software interrupt executes the required device driver function for the open, read, write or close operation.

Interrupt
• Software can execute the software interrupt instruction SWI, or Interrupt n (INT n), to signal execution of an ISR (interrupt service routine). The n is as per the handler address.
• Signal interrupt: a signal differs from a function in the sense that execution of the signal handler (ISR) can be masked, and until the mask is reset the handler will not execute on the interrupt. A function, on the other hand, always executes on a call after a call instruction.

How does a call to an ISR differ from a function (routine) call?

Routine (function) and ISR call features
• On a function call, instructions are executed from a new address, as in a function in C or a method in Java.
• On an ISR call, instructions are also executed from a new address, as in a function in C or a method in Java.
• A function call occurs after executing the present instruction in a program and is a planned (user-programmed) diversion from the present sequence of instructions to another sequence of instructions; that sequence executes until the return from it.
• An ISR call occurs after executing the present instruction in a program and is an interrupt-related diversion from the current sequence of instructions to another sequence of instructions; that sequence executes until the return from it, or until another interrupt of higher priority occurs.

Nesting of function calls and ISRs
• Nesting of function calls: when function 1 calls function 2, and function 2 calls function 3, the return from 3 is to function 2, and the return from 2 is to function 1. The functions are said to be nested.


• ISR calls, in the case of multiple interrupt occurrences, can be nested, or can be handled as per the priority of the interrupts.

Using interrupt(s) and ISR(s) for each device function (for example, read, write)
Use of ISRs (interrupt service routines) invoked by SWIs is the main mechanism used for device accesses and actions.

Interrupt-driven I/O and device access features
1. There is no continuous monitoring of status bits or polling for status by the application.
2. Between two interrupt calls, the program task(s) continue. There are many device functions, each of which executes on a device interrupt.

Example: a 64-kbps UART input
• When a UART transmits in an 11-bit-per-character format, the network transmits at most 64 kbps ÷ 11 ≈ 5818 characters per second, which means a character is expected every 171.9 µs.
• The receiver port is not checked before 171.9 µs. The character is read only on an interrupt from the port.
• There is no in-between time gap spent polling.

Ports A and B with interrupt generation and interrupt service (handling) mechanism

• Let port A be in a PC, and port B be its modem input, which puts the characters on the telephone line.
• Let ISR_PortA_Character be a routine that receives an input character from port A on interrupt, and let the Out_B routine retransmit an output character to port B.

ISR_PortA_Character
• Step f, function vi: Read the port A character. Reset the status bit so that the modem is ready for the next character input (generally, resetting of the status bit is automatic, without the need for a specific instruction). Put the character in a memory buffer: a set of memory addresses where the bytes (characters) are queued for processing later.
• Return from the interrupt service routine.
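A minimal sketch of step f in C, assuming a small ring buffer as the memory buffer. The function name follows the text, but the buffer size and the way the "interrupt" is delivered (a direct call carrying the received byte) are simplifications for illustration.

```c
#include <assert.h>
#include <stdint.h>

/* Memory buffer where received characters are queued for later
   processing: a small ring buffer (the size 64 is an assumption). */
#define BUF_SIZE 64
static volatile uint8_t buf[BUF_SIZE];
static volatile unsigned head, tail;

/* Step f (function vi): read the port A byte, queue it, return.
   On real hardware this function would be installed as the port A
   ISR, and reading the data register would clear the status bit. */
void isr_porta_character(uint8_t port_a_byte)
{
    unsigned next = (head + 1) % BUF_SIZE;
    if (next != tail) {            /* drop the byte if the buffer is full */
        buf[head] = port_a_byte;
        head = next;
    }
}

/* Called later, outside the ISR, to dequeue one character. */
int buffer_get(uint8_t *out)
{
    if (head == tail)
        return 0;                  /* buffer empty */
    *out = buf[tail];
    tail = (tail + 1) % BUF_SIZE;
    return 1;
}
```

The ISR only queues the byte and returns; the decrypt/encode/transmit work is left to the Out_B routine described next, which runs outside interrupt context.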


Out_B routine
• Step g: Call function vii to decrypt the message characters in the memory buffer and return for the next instruction, step h.
• Step h: Call function viii to encode the message character and return for the next instruction, step k.
• Step k: Call function ix to transmit the encoded character to port B.
• Return from the function.

Application of interrupt-based I/O and device functions
In the case of multiple processes and device functions, with no device-status polling or waiting.

An automatic chocolate vending machine (ACVM)
• Without an interrupt mechanism, one way is programmed I/O transfer, in which the device waits for the coin continuously, activates on sensing the coin, and runs the service routine.
• In the event-driven way, the device awakens and activates on each interrupt, after sensing each coin-inserting event.

Interrupt service routine (ISR) or device driver function
The system awakens and activates on an interrupt through a hardware or software signal. On the port interrupt, the system collects the coin by running a service routine. This routine is called the interrupt handler routine for the coin-port read.


Digital camera system
• Has an image input device.
• The system awakens and activates on an interrupt from a switch.
• When the system wakes up and activates, the device should grab an image frame's data.
• The interrupt is through a hardware signal from the device switch. On the interrupt, an ISR for the read (which can also be considered the camera's imaging device driver function) starts execution; it passes a message (signal) to a function, program thread or task.

Image sensing and the device frame buffer

• The thread senses the image, and then the function reads the device frame buffer (called frame grabbing).
• The function then passes a signal to another function, program thread or task to process and compress the image data and save the compressed image frame data file in flash memory.

Interrupt through a hardware signal from a print switch at the device

• The camera system again awakens and activates on the interrupt.
• The system then runs another ISR.
• This ISR is the device driver's write function for output to the printer, through the USB bus connected to the printer.


Mobile phone system
• Has a system reset key which, when pressed, resets the system to an initial state.
• When the reset key is pressed, the system awakens and activates a reset interrupt through a hardware signal from the reset key.
• On the interrupt, an ISR (which can also be considered the reset-key device driver function) suspends all activity of the system, sends a signal to a display function, program thread or task to display the initial reset-state menu and graphics on the LCD screen, and also activates the LCD display-off timer device for a timeout of, for example, 15 s.

After the timeout
• The system again awakens and activates on an interrupt through an internal hardware signal from the timer device, and runs another ISR to send a control bit to the LCD device.
• This ISR is the device driver's LCD-off function for the LCD device. The device switches off on reset of a control bit in it.


Software Interrupts and Interrupt Service Routines

Software interrupt (throwing an exception): concept
• A program needs to detect an error condition or run-time exceptional condition encountered during running.
• Either the hardware detects this condition, or the program detects this condition; an SWI (software interrupt) instruction is then used.

Detection of an exceptional run-time condition
• Called throwing an exception by the program.
• An interrupt service routine (exception handler routine) executes; it is called a catch function, as it executes on catching the thrown exception.

SWI
• Executes on detecting the exceptional run-time condition during computation or communication.
• For example, on detecting that the square root of a negative number is being calculated, detecting an illegal argument in a function, or detecting that a connection to the network is not found.

Example: SWI a1 and SWI a2
The SWI (software interrupt) instructions SWI a1 and SWI a2 can be inserted for trapping (A - B) being a negative number and for trapping y > 100 or y < 0.


Software instruction SWI a1
• Causes a processor interrupt.
• In response, the software ISR function 'catch (Exception_1 a1) { }' executes on the throwing of exception a1 during try-block execution.
• SWI a1 is used for catching exception a1 whenever it is thrown.

Software instruction SWI a2

• Causes a processor interrupt.
• In response, the software ISR function 'catch (Exception_2 a2) { }' executes on the throwing of exception a2 during try-block execution.
• SWI a2 is used for catching exception a2 whenever it is thrown.

SWI a3

• The software ISR function 'finally { }' executes either at the end of the try block or at the end of the catch function codes.
• SWI a3 is used after the try and catch functions finish; the finally function then performs the final task, for example exiting from the program or calling another function.
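C has no try/catch, but the throw-and-catch flow described above can be sketched with setjmp/longjmp standing in for the SWI and its handler; the exception codes a1 and a2 become enum values (names are ours). This is an illustrative analogue under those assumptions, not how a real SWI instruction is issued.

```c
#include <assert.h>
#include <setjmp.h>

/* setjmp marks the start of the "try" block; longjmp plays the role
   of the SWI that throws the exception. */
static jmp_buf handler;

enum { EXC_NEGATIVE = 1, EXC_RANGE = 2 };   /* a1 and a2 analogues */

static double guarded_value(double x)
{
    if (x < 0)
        longjmp(handler, EXC_NEGATIVE);  /* "SWI a1": throw a1 */
    if (x > 100)
        longjmp(handler, EXC_RANGE);     /* "SWI a2": throw a2 */
    return x;                            /* normal computation */
}

/* Returns 0 when no exception is thrown, else the exception code;
   the code after setjmp returns non-zero is the "catch" handler. */
int run(double x)
{
    int exc = setjmp(handler);           /* entry of the try block */
    if (exc == 0) {
        (void)guarded_value(x);
        return 0;                        /* the "finally" work would go here */
    }
    return exc;                          /* catch (Exception_n) executed */
}
```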

Signals from a thread and the signal handler interrupt service routine

• An ISR is also called a signal handler when a routine, program thread or task sends a signal using an SWI.

• Signals are used to notify error conditions, or to notify the end of an action, enabling the signal-handler thread or task to initiate action on that.


Interrupt Service Threads as Second-Level Interrupt Handlers

ISR executed in two parts
• One part is a short-execution-time service routine.
• It runs the critical part of the ISR and passes a signal or message to the OS for running the remaining second part later.

First-level ISR (FLISR)
• The first part does the device-dependent handling only. For example, it does not perform decryption of data received from the network; it simply transfers the data to the memory buffer for the device data.
• The second part waits during execution of interrupts of lesser priority.

Second-level ISR (SLISR), also called the interrupt service thread (IST)
• The second part is the long service routine.
• The OS schedules the IST as per its priority.
• It does the device-independent handling.
• The IST is also a software interrupt when it is triggered by an SWI (software interrupt instruction) or signal in the FLISR.
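A sketch of the two-part split in C: the FLISR only copies the device byte to a raw buffer and raises a pending flag, and the IST, which an OS would schedule by priority (here it is simply called later), does the device-independent work. The XOR "decryption" and all names are placeholders.

```c
#include <assert.h>
#include <stdint.h>

#define N 32
static volatile uint8_t raw[N];       /* memory buffer filled by the FLISR */
static volatile int raw_count;
static volatile int ist_pending;      /* the "signal" passed to the OS */

/* First-level ISR: short, device-dependent part. It only moves the
   byte into the buffer and signals that the IST must run later. */
void flisr(uint8_t device_byte)
{
    if (raw_count < N)
        raw[raw_count++] = device_byte;
    ist_pending = 1;
}

static uint8_t processed[N];
static int processed_count;

/* Second-level ISR / interrupt service thread: the long,
   device-independent part; XOR stands in for real decryption. */
void ist(void)
{
    if (!ist_pending)
        return;
    while (processed_count < raw_count) {
        processed[processed_count] = raw[processed_count] ^ 0x55;
        processed_count++;
    }
    ist_pending = 0;
}
```

Keeping flisr this short is what lets the processor accept the next device interrupt quickly, which is the whole point of the two-level split.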


Device Driver

Device driver definition
• A device driver is a set of routines (functions) used by a high-level-language programmer, which interacts with the device hardware, sends control commands to the device, communicates data to the device, and runs the code for reading device data.

Device driver routines

• Each device in a system needs a device driver routine with a number of device functions.
• An ISR relates to a device driver command (device function). The device driver uses an SWI to call the related ISR (device-function routine).
• The device driver also responds to device hardware interrupts.

Device driver generic commands

• A programmer uses generic commands of the device driver for using a device. The operating system provides these generic commands.
• Each command relates to an ISR. The device driver command uses an SWI to call the related ISR (device-function routine).

Generic functions
• Generic functions used for the commands to the device are: create ( ), open ( ), connect ( ), bind ( ), read ( ), write ( ), ioctl ( ) [for I/O control], delete ( ) and close ( ).

Device driver code

• Differs between operating systems.
• The same device may have different driver code when the system uses a different operating system.

Device driver

• Does the interrupt service for any event related to the device, and uses the system and I/O buses required for the device service.
• A device driver can be considered a software layer between an application program and the device.

Interrupt service routines
• An interrupt service routine (ISR) accesses a device for service (configuring, initializing, activating, opening, attaching, reading, writing, resetting, deactivating or closing).
• Interrupt service routines thus implement the device functions of the device driver.
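The generic-command idea can be sketched as a table of function pointers per device, which is how many OS driver models arrange the dispatch; the UART stub functions here are hypothetical and exist only to illustrate it.

```c
#include <assert.h>
#include <stddef.h>

/* One table of function pointers per device: the OS-facing generic
   commands dispatch through it. Names follow the generic functions
   listed above; the UART bodies are stubs for illustration. */
typedef struct {
    int (*open)(void);
    int (*read)(void *buf, size_t n);
    int (*write)(const void *buf, size_t n);
    int (*close)(void);
} device_driver;

static int uart_open(void)                       { return 0; }
static int uart_read(void *buf, size_t n)        { (void)buf; return (int)n; }
static int uart_write(const void *buf, size_t n) { (void)buf; return (int)n; }
static int uart_close(void)                      { return 0; }

static const device_driver uart_driver = {
    uart_open, uart_read, uart_write, uart_close
};

/* The application calls the generic command; the table dispatches to
   the device-specific routine (which on real hardware would use an
   SWI to reach the related ISR). */
int generic_write(const device_driver *d, const void *buf, size_t n)
{
    return d->write(buf, n);
}
```

Because the application only ever sees the table, the same device can ship different driver code on different operating systems, as noted above.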


Example

• An application program commands writing, on the display screen of a mobile, the contact names from the contact database. It sends an SWI to call the LCD display device driver.

• The driver runs a short code and executes another SWI to call the ISR related to the write function at the LCD.

• The device driver does this without the application programmer knowing how the LCD device interfaces in the system, which addresses it is used by, or what the control (command) registers and status registers are, where they are, and how they are used.

Interrupt Sources

1. Hardware sources of interrupts

Hardware device sources of interrupts

• Hardware sources, which can be internal or external, interrupt the ongoing routine and thereby divert execution to the corresponding ISR.
• The internal sources from devices differ between processors, microcontrollers and devices, and their versions and families.
• External sources and ports also differ between processors and microcontrollers.


Interrupt sources (or groups of interrupt sources)
• Each interrupt source (or group of interrupt sources) demands a temporary transfer of control from the presently executing routine to the ISR corresponding to the source (when the source is not masked).

Internal hardware device sources
1. Parallel port
2. UART serial receiver port [noise, overrun, frame error, IDLE, RDRF in 68HC11]
3. Synchronous receiver byte completion
4. UART serial transmit port: transmission complete [for example, TDRE (transmitter data register empty)]
5. Synchronous transmission of byte completed
6. ADC start of conversion
7. ADC end of conversion
8. Pulse accumulator overflow
9. Real-time clock time-outs
10. Watchdog timer reset
11. Timer overflow on time-out
12. Timer comparison with output-compare registers
13. Timer capture on inputs

External hardware device interrupt where the device also sends the vector address
• INTR in 8086 and 80x86: the device provides the ISR address, vector address or type externally on the data bus after an interrupt at the INTR pin.

External hardware device interrupt with internal generation of the vector address
• Maskable pins (interrupt request pins): INT0 and INT1 in 8051, IRQ in 68HC11.

External hardware-related interrupt at the INTR pin in an 80x86 processor
1. When the INTR pin activates on an interrupt from the external device, the processor issues two cycles of acknowledgement in two clock cycles through the INTA (interrupt acknowledge) pin.
2. During the second acknowledgement cycle, the external device sends the interrupt type information on the data bus.
3. The information is one byte, giving the value of n.
4. The 80x86 then acts as it would on the software instruction INT n.

External hardware device non-maskable interrupts with internal vector address generation
1. Non-maskable pin: NMI in 8086 and 80x86.
2. A pin that can be declared non-maskable within the first few clock cycles but is otherwise maskable: XIRQ in 68HC11.


Sources of interrupts due to processor hardware detecting a software error
• These sources of interrupts relate to the processor detecting a computational error during execution, such as division by 0, an illegal opcode, or overflow (for example, multiplication of two numbers exceeding the limit).

Software-error-related sources (exceptions or SW traps)
1. Division-by-zero detection (trap) by hardware
2. Overflow detection by hardware
3. Underflow detection by hardware
4. Illegal opcode detection by hardware

Examples of software error (exception or trap) related sources
• These interrupt the ongoing program computations in certain processors.
• Division by zero (also known as a type 0 interrupt, as it is also generated by the software interrupt instruction INT 0 in 80x86).
• Overflow (a type 4 interrupt in 80x86, as it is also generated by the INTO instruction).

These two interrupts, types 0 and 4, are generated by the hardware of the ALU part of the processor.

2. SOFTWARE INTERRUPTS

Sources of interrupts due to software code detecting a software error or exceptional condition and executing a software interrupt instruction

• These sources of interrupts relate to software detecting a computational error or exceptional condition during execution, and thereupon executing an SWI (software interrupt) instruction, which causes a processor interrupt of the ongoing routine.

Software interrupt by a software instruction
• A software interrupt is generated, for example, by the software instruction INT n in an 80x86 processor, or SWI m in the ARM7, where n is the interrupt type and m is a 24-bit field related to the ISR address pointer and the ISR input-parameters pointer.

Steps on an interrupt of type n, or on the software instruction INT n, in 80x86
1. INT n means: execute the interrupt of type n.
2. n can be between 0 and 255.
3. INT n causes the processor to vector to address 0x00004 × n to find the IP and CS register values for the diversion to the ISR.


8086 and 80x86 two-byte instruction INT n

• n represents the type and is the second byte. The instruction means 'generate a type n interrupt', and the processor hardware gets the ISR address using the vector address 0x00004 × n.

• When n = 1, it represents the single-step trap in 8086 and 80x86.

Examples of software-instruction-related interrupt sources
• Handling the square root of a negative number throws an exception; that is, it executes an SWI, which is handled by an SWI instruction SWI n (similar, but not analogous, to INT n in 80x86) in the instruction set of a processor.

Examples of software-instruction-related interrupt sources from signals

• Certain software instructions interrupt for diversion to an interrupt service routine, another task or a thread; such a routine is also called a signal handler.

• These are used for signalling (or switching) to another routine from an ongoing routine, task or thread.

Software interrupt instructions
Software instructions are also used for trapping some run-time error conditions (called throwing exceptions) and executing exception handlers on catching the exceptions.


Examples
• The instruction SWI in 68HC11.
• The instruction INT 0 in 80x86 generates a type 0 interrupt. A type 0 interrupt means generation of an interrupt with the corresponding vector address 0x00000.
• Instead of the type 0 interrupt by instruction, 8086 and 80x86 hardware may also generate this interrupt on a division by zero.
• The single-byte 8086 and 80x86 instruction INT 3 (corresponding vector address 0x0000C) generates an interrupt of type 3, called the breakpoint interrupt.
• The breakpoint interrupt instruction is like a PAUSE instruction.
• PAUSE: a temporary stoppage of a running program; it enables the program to do some housekeeping and then return to the instruction after the breakpoint on pressing any key.

INTERRUPT VECTOR MECHANISM

The interrupt vector: an important part of the interrupt service mechanism
• An interrupt vector is an important part of the interrupt service mechanism, by which the processor associates an interrupt source with its ISR.
• The processor first saves the program counter and/or other CPU registers on interrupt, and then loads a vector address into the program counter.
• The vector address provides either the ISR itself or the ISR address to the processor, for the interrupt source, group of sources, or given interrupt type.

Interrupt vector contents
The system software designer puts the bytes at an ISR_VECTADDR address. The bytes are either:

• the ISR short code, or a jump instruction to the ISR, or
• ISR short code with a call to the full code of the ISR at an ISR address, or
• bytes that point to an ISR address.

Interrupt vector
• A memory address to which the processor vectors (transfers into the program counter, or the IP and CS registers in the case of 80x86) a new address on an interrupt, for servicing that interrupt.
• The memory addresses used for vectoring are processor- or microcontroller-specific.
• Vectoring is as per the interrupt-handling mechanism which the processor provides.

Processor vectoring to an ISR_VECTADDR
• On an interrupt, the processor vectors to a new address, ISR_VECTADDR.
• Vectoring means that the program counter (PC), which was going to hold the address of the next instruction of the executing routine, now saves that address on the stack (or in a CPU register called the link register), and the processor loads ISR_VECTADDR into the PC.
• When the PC is saved on the stack, the CPU's stack pointer register provides the address of the memory stack.

Link Register in certain Processors


• A part of the CPU register set.
• The PC is saved in the link register (in place of the stack) before the processor vectors to an address by loading a new value into the PC.

Return from an ISR
Because the PC is saved on the stack or in the link register before vectoring, a return from the ISR is possible later, on an RETI (return from interrupt) instruction.

ISR_VECTADDR-based addressing mechanism

• A system has internal devices such as the on-chip timer and on-chip A/D converter.
• In a given microcontroller, each internal device interrupt source or source group has a separate ISR_VECTADDR address.
• Each external interrupt pin has a separate ISR_VECTADDR, for example in the 8051.

Commonly used method

• The internal device (interrupt source or interrupt source group) in a microcontroller auto-generates the corresponding interrupt vector address, ISR_VECTADDR.
• These vector addresses are specific to a given microcontroller or processor with that internal device.
• An internal hardware signal from the device is sent for the interrupt source in the device's interrupt source group.

Two types of handling mechanisms in processor hardware
1. Some processors use ISR_VECTADDR directly as the ISR address, and the processor fetches the ISR instruction from there; for example, ARM or 8051.
2. Some processors use ISR_VECTADDR indirectly as the ISR address, and the processor fetches the ISR address from the bytes saved at ISR_VECTADDR; for example, 80x86.


80x86 Processor Mechanism
• A software interrupt instruction, for example, INT n, also explicitly defines the type of the interrupt, and the type defines the ISR_VECTADDR.
• The type value multiplied by 4 (0x0004) gives the vectoring address, from where the processor fetches the four bytes used to compute the ISR address for executing the ISR.
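For illustration, the 80x86 real-mode fetch described above can be modeled in C. The memory array and the INT 0x21 vector installed by the test are hypothetical; only the address arithmetic (type × 4, then CS × 16 + IP) follows the mechanism described.

```c
#include <stdint.h>

/* Simulated first 1 KB of real-mode memory holding the vector table. */
static uint8_t mem[1024];

/* Compute the ISR physical address for interrupt type n: the
   processor fetches 4 bytes at n * 4 -- IP (low word) then CS
   (high word) -- and forms CS * 16 + IP. */
uint32_t isr_address(uint8_t n)
{
    uint32_t vect = (uint32_t)n * 4;                      /* ISR_VECTADDR */
    uint16_t ip = (uint16_t)(mem[vect]     | (mem[vect + 1] << 8));
    uint16_t cs = (uint16_t)(mem[vect + 2] | (mem[vect + 3] << 8));
    return (uint32_t)cs * 16 + ip;
}
```

For example, with CS = 0xF000 and IP = 0x1234 stored at the vector for type 0x21, the computed ISR address is 0xF1234.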


ARM Processor Mechanism
1. In certain processor architectures, for example, ARM, the software interrupt instruction SWI does not explicitly define the type of interrupt for generating different vector addresses; instead, there is a common ISR_VECTADDR for every exception, signal or trap generated using the SWI instruction.
2. The ISR that executes after vectoring has to find out which exception caused the processor interrupt and program diversion. Such a mechanism in the processor architecture provides for an unlimited number of exception-handling routines in the system with a common interrupt vector address. The ARM processor provisions for such a mechanism.


Interrupt Vector Table

• Facilitates servicing of the multiple interrupting sources for each internal device.
• Each row of the table has an ISR_VECTADDR and the bytes to be saved at that ISR_VECTADDR.
• The vector table's location in memory depends on the processor.
• The system software designer must provide for putting the bytes at each ISR_VECTADDR.

The bytes are either
• the ISR short code, or a jump instruction to the ISR instructions, or
• the ISR short code with a call to the full code of the ISR at an ISR address, or
• bytes that point to an ISR address.


• At higher memory addresses, 0xFFC0 to 0xFFFB, in 68HC11.
• At the lowest memory addresses, 0x0000 to 0x03FF, in 80x86 processors.
• Starts from the lowest memory address 0x00000000 in ARM7.
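Handling mechanism 1 above (ISR_VECTADDR used directly as the ISR address) can be sketched as a table of function pointers. The source numbering and the timer ISR below are illustrative assumptions, not any particular controller's layout.

```c
/* A vector table modeled as an array of ISR function pointers; the
   index plays the role of the interrupt source number, the entry
   the role of the bytes stored at ISR_VECTADDR. */
typedef void (*isr_t)(void);

static int timer_ticks = 0;

static void timer_isr(void)   { timer_ticks++; }
static void default_isr(void) { /* unhandled source: do nothing */ }

#define NUM_VECTORS 8
static isr_t vector_table[NUM_VECTORS] = {
    default_isr, timer_isr, default_isr, default_isr,
    default_isr, default_isr, default_isr, default_isr
};

/* What the hardware does conceptually on an interrupt of source n:
   load the table entry into the PC, i.e. call through it. */
void vector_to(int n) { vector_table[n](); }
```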


Masking of Interrupt Sources and Interrupt Status or Pending Mechanism

1. Masking of Interrupt Sources
• Maskable sources of interrupt provide for masking and unmasking the interrupt service (diversion to the ISR).
• Execution of a device interrupt source or source group service can be masked.
• An external interrupt request on a pin can be masked.
• Execution of a software interrupt (trap, exception or signal) can be masked.
• Most interrupt sources are maskable.

Non-maskable Interrupt Sources (NMIs)
• A few specific interrupts cannot be masked.
• A few specific interrupts can be declared non-maskable within a few clock cycles of processor reset; otherwise they remain maskable.


Classification of all interrupts as Non-Maskable and Maskable Interrupts
• Non-maskable: Examples are a RAM parity error in a PC and error interrupts such as division by zero. These must be serviced.
• Maskable: Maskable interrupts are those whose service may be temporarily disabled to let higher-priority ISRs execute uninterruptedly.
• Non-maskable only when defined so within a few clock cycles after reset: Certain processors, such as the 68HC11, have this provision.

Enabling (Unmasking) and Disabling (Masking) of Maskable Interrupt Sources
• Interrupt control bits in devices.
• One bit, EA (enable all), also called the primary-level bit.
• EA may be for enabling or disabling the complete interrupt system, except for NMIs.

Use of DI and EI for a critical section of code
• The instruction DI (disable interrupts) is executed at the beginning of the critical section, when a routine or ISR is executing code in a critical section that must complete; the instruction EI (enable interrupts) is executed at the end of the critical section.
• The DI instruction resets the EA bit and the EI instruction sets the EA bit.

Examples of critical section code
• Synchronous communication of bits from a port.
• Writing some byte(s), for example, the time, which are shared with other routines. If the bytes for the time are not written at that instance, the system timing reference will be incorrect.
• Assume that an ISR is transferring data to the printer buffer, which is common to multiple ISRs and functions. No other ISR or function should transfer data to the print buffer at that instant, else the bytes at the buffer would come from multiple sources.


• Data shared by several ISRs and routines needs to be generated or used while protecting it from modification by another ISR or routine.
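A minimal sketch of such a critical section, with DI/EI modeled as macros on a simulated EA flag (on real hardware these would be processor instructions, and the shared time would be updated by a timer ISR):

```c
#include <stdint.h>

/* Model of the primary enable bit EA: DI resets it, EI sets it. */
static int EA = 1;
#define DI() (EA = 0)
#define EI() (EA = 1)

/* Multi-byte system time shared between an ISR and this routine. */
static volatile uint8_t time_hi, time_lo;

/* Writing both bytes is a critical section: if a timer ISR could run
   between the two stores, readers might see a half-updated time. */
void set_time(uint16_t t)
{
    DI();                         /* enter critical section */
    time_hi = (uint8_t)(t >> 8);
    time_lo = (uint8_t)(t & 0xFF);
    EI();                         /* leave critical section */
}
```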

Multiple bits E0, ..., En
• One for each of the n source groups of interrupts, for the multiple devices present in the system and externally connected to it.
• Called mask bits, and also called the secondary-level bits.
• For enabling or disabling the specific sources or source groups in the interrupting system.
• By the appropriate instructions in the user software, writes to the primary enable bit and the secondary-level enable bits (or their opposites, the mask bits) disable either all or a part of the total maskable interrupt sources, respectively.

Example
• Two timers, each with an interrupt control bit.
• Timer interrupt control bits ET0 and ET1.
• SI device: interrupt control bit ES, common to serial transmission and serial reception.
• EA bit for disabling all interrupts.
• When EA = 0, no interrupt is recognized; the timer as well as SI interrupt services are disabled.
• When EA = 1, ET0 = 0, ET1 = 1 and ES = 1, the interrupts from timer 1 and SI are enabled and the timer 0 interrupt is disabled.
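The example above matches the standard 8051 IE register layout (EA at bit 7, ES at bit 4, ET1 at bit 3, ET0 at bit 1). A sketch composing that register value in C:

```c
#include <stdint.h>

/* 8051 IE register bit positions (standard across the 8051 family). */
#define BIT_ET0 1   /* timer 0 interrupt enable  */
#define BIT_ET1 3   /* timer 1 interrupt enable  */
#define BIT_ES  4   /* serial interface enable   */
#define BIT_EA  7   /* enable-all (primary bit)  */

/* Compose the IE value for: EA = 1, ET0 = 0, ET1 = 1, ES = 1,
   i.e. timer 1 and SI enabled, timer 0 disabled. */
uint8_t ie_value(void)
{
    return (uint8_t)((1u << BIT_EA) | (1u << BIT_ES) | (1u << BIT_ET1));
}
```

The resulting value, 0x98, would be written to the IE special function register on a real 8051.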


2. Interrupt Status or Pending Register

Multiple interrupt sources
With multiple interrupt sources, an occurrence of each interrupt source (or source group) is identifiable from a bit or bits in the status register and/or in the IPR.

Properties of the interrupt identification flags
• A separate flag for the identification of an occurrence from each of the interrupt sources.
• The flag sets on the occurrence of an interrupt. The flag is present either
• in the internal hardware circuit of the processor, or
• in the IPR, or
• in the status register.

Identification of a previously occurred interrupt from a source
• A local-level flag (bit) in a status register, which can hold one or more status flags for one or several of the interrupt sources or groups of sources.
• A processor pending flag (Boolean variable) in an interrupt-pending register (IPR), which is set by the source (set by hardware) and auto-resets immediately, by the internal hardware, as soon as the corresponding source's service starts at a later instant on diversion to the corresponding ISR.

Example
• Two timers, each with a status bit, TF0 and TF1.
• The SI device has two status bits, TxEMPTY and RxReady, for serial transmission completed and receiver data ready.
• ISR_T1, corresponding to the timer 1 device, reads the status bit TF1 = 1 in the status register to find that timer 1 has overflowed; as soon as the bit is read, TF1 resets to 0.
• ISR_T0, corresponding to the timer 0 device, reads the status bit TF0 = 1 in the status register to find that timer 0 has overflowed; as soon as the bit is read, TF0 resets to 0.
• The ISR corresponding to the SI device is common to transmitter and receiver.
• That ISR reads the status bits TxEMPTY and RxReady in the status register to find whether a new byte is to be sent to the transmit buffer or a byte is to be read from the receiver buffer.
• As soon as a byte is read, RxReady resets; as soon as a byte is written into the SI for transmission, TxEMPTY resets.

Flag set on occurrence of an interrupt
• Used for a read by an instruction, and for a write by the interrupting source hardware only.
• The flag resets (becomes inactive) as soon as it is read.
• An auto-reset characteristic is provided in certain hardware designs, to let the flag indicate the next occurrence from the same interrupt source.

Interrupt service on flag setting
• If a flag is set, it does not necessarily mean that it will be recognized or serviced later.
• Whenever a mask bit corresponding to the flag's source exists, then even if the flag sets, the processor may ignore it unless the mask (or enable) bit is modified later.
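The SI example can be sketched in C with the status register and its auto-resetting flags simulated in software (a real device clears these flags in hardware when its data register is accessed):

```c
#include <stdint.h>

/* Simulated SI status register bits. */
#define RX_READY  0x01
#define TX_EMPTY  0x02

static uint8_t si_status = 0;
static uint8_t rx_buffer = 0;   /* device receive buffer      */
static uint8_t rx_byte   = 0;   /* byte fetched by the ISR    */

/* Common ISR for the SI source group: read the status flags to
   identify which event occurred, then service it. Reading the
   receive buffer clears RxReady (modeled auto-reset). */
void si_isr(void)
{
    if (si_status & RX_READY) {
        rx_byte    = rx_buffer;              /* fetch received byte */
        si_status &= (uint8_t)~RX_READY;     /* flag resets on read */
    }
    if (si_status & TX_EMPTY) {
        /* a new byte could be written here for transmission */
        si_status &= (uint8_t)~TX_EMPTY;     /* flag resets         */
    }
}
```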


• Masking prevents an unwanted interrupt from being serviced by a diversion to its ISR.

Example of a touch screen
• The touch screen device processor generates an interrupt when a screen position is touched.
• A status bit is also set.
• It activates an external pin interrupt request, IRQ.
• From the status bit, which is set, the interrupting source is recognized among the source group (multiple sources of interrupt from the same device or devices).
• The ISR_VECTORIRQ and ISRIRQ are common for all the interrupts at the IRQ pin.

IRQ
• IRQ results in the processor (controller processing element) vectoring to ISR_VECTORIRQ.
• Using ISR_VECTORIRQ, when the ISRIRQ starts, its instructions read the status register and discover the bit as set.
• It calls a service function (get_touch_position), which reads the register Rpos for the touched screen position information.
• This action also resets the status bit, since the touch screen processor provides for auto-resetting of the bit as soon as the Rpos byte for the touched position is read.

Multiple Interrupt Sources and Priority Mechanism

Interrupt-service calls
• There can be nested interrupt-service calls in case a number of higher-priority interrupt sources activate in succession.
• A return from any of the ISRs is to the lower-priority pending ISR.

Processor interrupt service mechanisms
• Certain processors permit in-between routine diversion to higher-priority interrupts, unless all interrupts, or interrupts of priority greater than that of the presently running routine, are masked or the ISR executed a DI instruction.
• These processors provide for preventing diversion in between the running ISR completely, by provisioning for masking all interrupts with the primary-level bit and/or a DI instruction.
• These processors also provide for preventing diversion in between the running ISR selectively, by provisioning for masking the interrupt service selectively with the secondary-level bits for the ISR interrupt source groups.

Context saving
These processors may also provide for auto-saving the CPU registers (context) when ISR execution starts, and auto-restoring the saved values into the CPU registers on return from the ISR. This helps in fast transfer to a higher-priority interrupt.


Processor interrupt service mechanisms
• Certain processors do not permit in-between routine diversion to higher-priority interrupts.
• These processors provide auto-disabling of all maskable interrupts when ISR execution starts, and auto re-enabling of all maskable interrupts on return from the ISR.
• These processors may also provide for auto-saving the CPU registers (context) when ISR execution starts, and auto-restoring the saved values into the CPU registers on return from the ISR. This helps in fast transfer to a pending higher-priority interrupt.

1. Hardware Assignment of Priorities

Need for assigning a priority order by hardware
• Certain interrupts need fast attention: for example, a clock interrupt on a system timer overflow, detection of an illegal opcode by the processor, or division by 0.
• When there are multiple device drivers, traps, exceptions and signals due to hardware and software interrupts, the assignment of priorities to each source or source group is required, so that the ISRs with shorter deadlines execute earlier by being assigned higher priorities.

Why does the hardware assign the presumed priority?
• Several interrupts may occur at the same time during the execution of a set of instructions, and either all or a few are enabled for service.
• The service using the sources' corresponding ISRs can only be done in a certain order of priority.
• Hardware-defined priorities can be used as such.

Hardware assignment of priorities
• ARM7 provides two types of interrupt sources (requests): IRQs (interrupt requests) and FIQs (fast interrupt requests).
• Interrupts in 80x86 are assigned interrupt types; an interrupt of type 0 has the highest priority and type 255 the lowest.

Multiple sources of interrupts
• Multiple devices.
• The processor hardware assigns to each source (including traps and exceptions) or source group a pre-assumed priority (or level or type), phw.
• phw represents the hardware's presumed priority for the source (or group).
• Assume the number is among 0, 1, 2, ..., k, ..., m-1.
• Let phw = 0 mean the highest priority; phw = 1 the next highest; ...; phw = m-1 the lowest.

Example of seven devices or source groups
• The processor's hardware assigns phw = 0, 1, 2, ..., 6.
• The hardware service priorities will be in the order phw = 0, 1, 2, ..., 6.
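The selection among pending, enabled sources by presumed priority can be sketched as a scan from phw = 0 upward; the bitmask representation of the pending and enable flags is an illustrative assumption:

```c
#include <stdint.h>

/* Pick the source to service: among sources that are both pending
   and enabled, the smallest phw (phw = 0 is the highest priority)
   wins. Returns -1 when nothing is serviceable. */
int next_source(uint8_t pending, uint8_t enabled, int m)
{
    for (int phw = 0; phw < m; phw++)
        if ((pending & enabled) & (1u << phw))
            return phw;
    return -1;
}
```

With sources 1, 4 and 6 pending and all seven enabled, source 1 is serviced first; masking source 1 lets source 4 through instead.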


Example of the 80x86 family processor six interrupt sources

• division by zero, • single step, • NMI (non maskable interrupt from RAM parity error, etc.), • break point, • overflow and • print screen. • These interrupts can be assumed to be of phw = 0, 1, 2, 3, 4 and 5, respectively.

Example of the 80x86 family processor

� Assigns highest priority for a division by zero. This is so because it is an exceptional condition found in user software itself.

� Next priority single stepping as the as the user enables this source of interrupt because of the need to have break point at the end of each instruction whenever a debugging of the software is to be done.

� Next priority NMI─ because external memory read error needs urgent attention. � Print screen has lowest priority

Vectored priority polling method
• A processor interrupt mechanism may internally provide for a number of vectors, ISR_VECTADDRs.
• It assigns the ISR_VECTADDR as well as phw.
• There is a poll at the end of each instruction cycle (or at the return from an ISR) for the highest-priority source among those enabled and pending.
• Vectored priorities in 80x86 are as per the type number, ntype.
• ntype = 0 is the highest priority and ntype = 0xFF (= 255) the lowest.


2. Software-defined priorities

Software-defined priority setting
• In certain processors, software can re-define the priorities.
• Software-defined priorities override the hardware ones.
• 8051 has a priority register to define an interrupt priority as high (= 1) or low (= 0).

CONTEXT, CONTEXT SWITCHING AND INTERRUPT LATENCY

1. Context
An embedded system executes:
• multiple tasks (processes); an operating system facilitates this;
• multiple actions or functions due to multiple sources of interrupts; the interrupt service mechanism in the system facilitates this.

The multiple tasks and multiple ISRs execute, even though there is only one processor, by first saving one program's context and then retrieving another program's context.

Current program's program counter, status word, registers and other program context
• Before executing the new instructions of the new function, the current program's program counter is saved.
• The status word, registers and other program context are also saved, if not done automatically by the processor.
• This is because the program counter, status word register and other registers are needed by the newly called function.

Program counter: a part of the context of the presently running program
• Getting an address (pointer) from where the new function begins, loading that address into the program counter, and then executing the called function's instructions changes the running program at the CPU to a new program.

Context
• A context of a program must include the program counter as well as the program status word and stack pointer, and may include the processor registers. The context may be in a register set or in a separate memory block for the stack.
• A register set or memory block can hold the context information:
• the present CPU state, meaning the registers, and
• may include the ID of the interrupted process, and
• the function's local variables.


What should constitute the context? It depends on the processor of the system, or on the operating system supervising the program.

An example
• In an 8051 program, the program counter alone constitutes the context for the processor's interrupt service and multiple function call mechanism.
• In a 68HC11 program, the program counter and the CPU registers constitute the context for the processor's interrupt service and multiple function call mechanism.

2. CONTEXT SWITCHING

Context switching on interrupts
• Context switching means saving the context of the interrupted routine (or function or task), and retrieving or loading the new context of the routine or task to be executed next.
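Context switching can be sketched as copying a register-set structure to and from memory. The exact fields saved are processor-specific, so the struct below is only a model:

```c
#include <stdint.h>

/* A context modeled as the register set the processor must preserve
   across a switch; real contents depend on the processor. */
typedef struct {
    uint32_t pc;        /* program counter           */
    uint32_t sp;        /* stack pointer             */
    uint32_t psw;       /* program status word       */
    uint32_t regs[4];   /* general-purpose registers */
} context_t;

/* On an interrupt: save the running routine's context, then load
   the new routine's context; on return, restore the saved one. */
void context_switch(context_t *save_to, const context_t *load_from,
                    context_t *cpu)
{
    *save_to = *cpu;        /* push old context to memory/stack */
    *cpu     = *load_from;  /* retrieve the new context         */
}
```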


Context Switching in ARM7 on Interrupt
• The interrupt mask (disable) flags are set. [Disable low-priority interrupts.]
• The Program Counter (PC) is saved.
• The Current Program Status Register (CPSR) copies into the saved program status register (SPSR), and the CPSR stores the new status (interrupt source data or information).
• The PC gets the new value, as per the interrupt source, from the vector table.

On return from the ISR, the context switches back to the previous context:
(i) The Program Counter (PC) is retrieved.
(ii) The corresponding SPSR copies back into the CPSR.
(iii) The interrupt mask (disable) flags are reset. [Enable again the earlier disabled low-priority interrupts.]

Operating system
The OS provides memory blocks to be used as stack frames if internal stack frames are not available at the processor used in the system.

3. Classification of Processor Interrupt Service Mechanisms from the Context-Saving Angle

8051

• The 8051 interrupt-service mechanism is such that, on occurrence of an interrupt service, the processor pushes the registers PCH (program counter higher byte) and PCL (program counter lower byte) onto the memory stack.
• The 8051 family processors do not save the context of the program (other than the absolutely essential program counter); a context can be saved only by using a specific set of instructions for that purpose, for example, PUSH instructions at the ISR.

Advantage of saving the PC only in 8051
• It speeds up the start of the ISR and the return from the ISR, but at a cost.
• The onus of context saving is on the programmer, in case the context (SP and CPU registers other than PCL and PCH) is to be modified during the service or during function calls while executing the remaining ISR instructions.

68HC11 interrupt mechanism
• The processor registers are saved onto the stack whenever an interrupt service occurs, in the order PCL, PCH, IYL, IYH, IXL, IXH, ACCA, ACCB and CCR.
• The 68HC11 thus automatically saves the processor context of the program, without being instructed to do so in the user program.
• As context saving takes processor time, the start of the ISR and the return from the ISR slow down a little, but with the great advantage that the onus of context saving is not on the programmer, and there is no risk in case the context is modified during service or function calls.

ARM7 interrupt mechanism
• ARM7 provides a mechanism for fast context switching between two tasks, one current and one on the stack.


INTERRUPT LATENCY AND SERVICE DEADLINE

Interrupt Latency
• The period between the occurrence of an interrupt and the start of execution of the ISR.
• The time taken in context switching is also included in this period, called the interrupt latency period, Tlat.

Minimum interrupt-latency period
• Tlat is the sum of the following periods.
• The time taken for the response and for initiating the ISR instructions. This includes the time to save or switch the context (including the program counter and registers), plus the time to restore the context. For example, in the ARM7 processor this period equals two clock cycles, plus zero to twenty clock cycles for finishing an ongoing instruction, plus zero to three cycles for aborting the data.

Minimum latency = context switching period
• When the interrupt service starts immediately on context switching, the interrupt latency = Tswitch = context switching period.
• When the instructions in a processor take variable numbers of clock cycles, the maximum clock cycles for an instruction are taken into account when calculating latency.

Latency on context switch to a higher-priority interrupt
• When the interrupt service does not start immediately on context switching, but the context switching starts only after all the ISRs corresponding to the higher-priority interrupts complete execution: if the sum of the time intervals for completing the higher-priority ISRs is ΣTexec, then interrupt latency = Tswitch + ΣTexec.


Latency due to execution of a disable-interrupt instruction in a critical section
• Tdisable is the period for which interrupts stay disabled in a routine's critical section. The interrupt service latency for an interrupt source, in the presence of a routine with a critical section that disables interrupts, will be Tswitch + ΣTexec + Tdisable.


Worst-case latency
The sum of the periods Tswitch, ΣTexec and Tdisable, where the sum ΣTexec is over the interrupts of higher priorities only.

Minimum latency
The sum of the periods Tswitch and Tdisable when the interrupt is of the highest priority.
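The latency expressions above can be combined into a small calculator; the cycle counts passed in would come from the processor's data sheet, so any concrete numbers are illustrative only:

```c
/* Worst-case interrupt latency for a source: context-switch time,
   plus the execution times of all higher-priority ISRs, plus the
   longest interrupt-disabled (critical section) period. All times
   are in the same unit, e.g. clock cycles. */
long worst_case_latency(long t_switch, const long *t_exec_higher,
                        int n_higher, long t_disable)
{
    long sum = 0;                       /* sigma T_exec */
    for (int i = 0; i < n_higher; i++)
        sum += t_exec_higher[i];
    return t_switch + sum + t_disable;
}
```

With no higher-priority ISRs (n_higher = 0) the result reduces to the minimum latency, Tswitch + Tdisable.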

ISR or Task Deadline

For every source there may be a maximum period up to which the service of all its ISR instructions can be kept pending. This period defines the deadline period, Td, within which the execution must be completed.


Example
• Video frames in video conferencing arrive every 1/15 s. The device, on getting a frame, interrupts the system, and the interrupt service deadline is 1/15 s, else the next frame will be missed.
• A 16-bit timer device raises the TF interrupt on overflow, on the transition of counts from 0xFFFF to 0x0000.
• It must be responded to by executing an ISR for TF before the next overflow of the timer occurs, else the counting period between the 0x0000 after one overflow and the 0x0000 after the next overflow will not be accounted for.
• If the timer counts increment every 1 µs, the interrupt service deadline is 2^16 µs = 65536 µs.

To keep the ISR as short as possible
• In the case of multiple interrupt sources:
• service the in-between pending interrupts, and leave the functions that can be executed afterwards for a later time;
• use interrupt service threads, which are second-level interrupt handlers.
• When this principle is not adhered to, a specific interrupting source may not be serviced within the deadline (maximum permissible pending time) for that source.

Assignment of priorities to meet service deadlines
• By following an EDF (Earliest Deadline First) strategy for assigning the priorities to the ISRs and tasks, the service deadlines are met.

Software overriding of hardware priorities to meet service deadlines
The service order is first decided among the ISRs that have been assigned higher priority in the user software. If the user-assigned priorities are equal, then the one with the highest priority pre-assigned by the processor's internal hardware is serviced first.

DIRECT MEMORY ACCESS

Multi-byte data set or burst of data or block of data
• A DMA is required when a multi-byte data set, a burst of data, or a block of data is to be transferred between an external device and the system, or between two systems.
• A device with a processing element (single-purpose processor) facilitates the DMA transfer; that device is called a DMAC (DMA controller).

Using a DMA controller
• The DMA-based method is useful when a block of bytes is transferred, for example, from disk to RAM or from RAM to disk.
• Repeatedly interrupting the processor for the transfer of every byte during a bulk transfer of data would waste too much processor time in context switching.


DMAC
• System performance improves by separate processing of the transfers from and to the peripherals (for example, between a camera memory and a USB port).

DMAC hold request
• After an ISR initiates and programs the DMAC, the DMAC sends a hold request to the CPU.
• The CPU acknowledges it when the system memory buses are free to use.

Three modes
• Single transfer at a time, and then release of the hold on the system bus.
• Burst transfer at a time, and then release of the hold on the system bus. A burst may be of a few kB.
• Bulk transfer, and then release of the hold on the system bus after the transfer is completed.

DMA proceeds without the CPU intervening
• Except (i) at the start, for DMAC programming and initializing, and (ii) at the end.
• Whenever a DMA request by an external device is made to the DMAC, the CPU is requested (using an interrupt signal) at the start to initiate the DMA, and notified (using an interrupt signal) by the DMAC at the end of the DMA.


Using a DMA controller
When a DMA controller is used to transfer a block of bytes:
• ISRs are not called during the transfer of the bytes;
• an ISR is called only at the beginning of the transfer, to program the controller (DMAC);
• another ISR is called only at the end of the transfer.

Programming the DMAC registers
The ISR that initiates the DMA (direct memory access) for the interrupting source simply programs the DMAC registers with:
• the command (for the mode of transfer: bulk, burst or bytes),
• the data count (number of bytes to be transferred),
• the memory block address where access to the data is made, and
• the I/O bus start address of the external device.
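Programming the four DMAC registers listed above can be sketched as follows. The register layout is hypothetical; a real DMAC's register map comes from its data sheet.

```c
#include <stdint.h>

/* Hypothetical register layout of one DMAC channel. */
typedef struct {
    uint8_t  command;     /* transfer mode: single/burst/bulk */
    uint16_t data_count;  /* number of bytes to transfer      */
    uint32_t mem_addr;    /* memory block start address       */
    uint32_t io_addr;     /* external device start address    */
} dmac_channel_t;

enum { DMA_SINGLE = 0, DMA_BURST = 1, DMA_BULK = 2 };

/* What the initiating ISR does: just program the four registers;
   the byte transfers then proceed without the CPU. */
void dmac_program(dmac_channel_t *ch, uint8_t mode, uint16_t count,
                  uint32_t mem, uint32_t io)
{
    ch->command    = mode;
    ch->data_count = count;
    ch->mem_addr   = mem;
    ch->io_addr    = io;
}
```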

Use of a DMA channel for facilitating small interrupt-latency periods
• Small latency periods can be achieved using a DMA channel when multiple interrupts from I/O sources exist.
• The ISR run period from start to end can now be very small [only a short code for programming the DMAC, and a short code at the end of the DMA transfer for initiating a new data transfer or a new task].

Multiple-channel DMAC
• Provides DMA action between the system memories and two (or more) I/O devices.
• A separate set of registers for programming each channel.
• Separate interrupt signals in the case of a multi-channel DMAC.

On-chip DMAC in microcontrollers
• The 8051 family member 83C152JA (and its sister JB, JC and JD microcontrollers) has two DMA channels on-chip.
• The 80196KC has a PTS (Peripheral Transaction Server) that supports DMA functions. [Only single and bulk transfer modes are supported, not the burst transfer mode.]
• The MC68340 microcontroller has two DMA channels on-chip.

Device Types, Physical and Virtual Device Functions

Device types
For each type of device, there is a set of generic commands; for example, one set of commands for a char device, and another set for a block device.


Types of physical and virtual devices in a system may be as follows: char, block, loop-back device, file, pipe, socket, RAM disk, sound, video and media, mouse, keyboard, keypad, timer.

Virtual device driver (definition)
• A virtual-device driver is the component of a device driver that communicates directly between an application and memory or a physical device.
• A virtual device driver controls the flow of data.
• It allows more than one application to access the same memory or physical device without conflict.

Char device
• A device to which one character is sent at a time, or from which one character is read at a time. For example, mouse, keyboard, keypad, timer.

Block device
• A device to which one block of characters is sent at a time, or from which one block is read at a time. For example, printer, disk.

Block device configuration as char device
• Block as well as char device: a device to which either one block of characters or a single character is sent at a time, or read from it at a time. For example, an LCD display unit. A device can be configured as char or block, as per the need, by a generic command.

Configuration as loop-back device
• Loop-back device: a device to which one character or a set of characters is sent, and which echoes those back to the sender.

Configuration as copy device
• Copy device: a device using which a set of characters is sent, and those are returned to another device. For example, a disk-copy device when characters are copied from one disk to another, or a keyboard-cum-display device: keyboard input is sent to a buffer, and the display unit uses that buffer for display.

Virtual devices
• Besides the physical devices of a system, drivers are also used for virtual devices.
• Physical device drivers and virtual device drivers have analogies.
• Like a physical device driver, a virtual device driver may also have functions for device connect or open, read, write and close.

Driver
A memory block can have data buffers for input and output, in analogy with the buffers at an I/O device, and can be accessed from a char driver, or a block, pipe or socket driver.
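The common open/read/write/close interface shared by physical and virtual device drivers can be sketched as a function table; the RAM-buffer "device" below is an illustrative virtual char device, not any real driver API:

```c
#include <stddef.h>
#include <string.h>

/* Driver interface: a table of open/read/write/close functions,
   the same for physical and virtual devices. */
typedef struct {
    int (*open)(void);
    int (*read)(char *dst, size_t n);
    int (*write)(const char *src, size_t n);
    int (*close)(void);
} driver_t;

/* A memory block standing in for the device's data buffer. */
static char   vbuf[64];
static size_t vlen = 0;

static int vopen(void)  { vlen = 0; return 0; }
static int vclose(void) { return 0; }

static int vwrite(const char *src, size_t n)
{
    if (n > sizeof vbuf - vlen) n = sizeof vbuf - vlen;
    memcpy(vbuf + vlen, src, n);
    vlen += n;
    return (int)n;
}

static int vread(char *dst, size_t n)
{
    if (n > vlen) n = vlen;
    memcpy(dst, vbuf, n);
    return (int)n;
}

static const driver_t virtual_dev = { vopen, vread, vwrite, vclose };
```

An application would call `virtual_dev.open()`, then `write`/`read`, then `close`, exactly as it would for a physical device's driver.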


Virtual Device Examples

Pipe device: a device to which blocks of characters are sent from one end and accessed from the other end in FIFO (first-in first-out) mode, after a connect function is executed to connect the two ends.

Socket device: a device to which (a) blocks of characters are sent from one end with a set of port (application) and sender addresses, (b) which is accessed from the other end with port (application) and receiver addresses, and (c) where access is in FIFO mode only after a connect function is executed to connect the two sockets.

File device: a device from which blocks of characters are accessed, similar to a disk, in a tree-like format (folder, subfolder, ...). For example, a named file on a memory stick.

RAM disk device: a set of RAM memory blocks used like a disk, which is accessed by defining the addresses of a directory, subdirectory, second-level subdirectory, folder and subfolder.

Differences between the various types of virtual devices
• A pipe needs one address at an end.
• A socket needs one address and one port number at an end.
• A file and a disk can have multiple addresses. Reading and writing into a file is from or to the current cursor address in the currently open folder.
• Just as a file is sent a read call, a device must be sent a driver command when its input buffer(s) is to be read.
• Just as a file is sent a write call, a device must be sent a driver command when its output buffer is to be written.

Virtual device example for remote system access
A virtual device example is a device description that is used to form a connection between a user and a physical system networked or connected to a remote system.

Virtual device driver file name (VxD)
The driver filename in the Windows OS, where V stands for virtual and D for device. The "x" can be replaced with other characters; for example, VdD means a display driver.

Linux internals, device drivers and Linux network functions
• Linux has internal functions called internals. Internals exist for the device drivers and the network-management functions.
• Linux provides useful drivers for embedded systems, each with its own uses.


Linux drivers

• Char (for driving a character device)
• Block (for driving a block of characters)
• Input (for standard I/O devices)
• Media (for standard media device functions)
• Video (for standard video device functions)
• Sound (for standard audio device functions)

Linux drivers in the net directory

The Linux internal functions exist for:
• sockets,
• handling of socket buffers,
• firewalls,
• network protocols (for example, NFS, IP, IPv6 and Ethernet) and
• bridges.
These work separately as drivers and also form part of the network-management functions of the operating system.

PROGRAMMING ELEMENTS AND PROGRAMMING IN C

Programming
• An essential part of any embedded system design.
Programming in Assembly or HLL
• Processor- and memory-sensitive instructions: program codes may be written in assembly.
• Most of the code is written in a high-level language (HLL): C, C++ or Java.
Assembly Language Programming
1. Advantages of Assembly Language Programming
• Assembly codes are sensitive to the processor, memory, ports and device hardware.
• Gives precise control of the processor's internal devices.
• Enables full use of processor-specific features in its instruction set and its addressing modes.
• Machine codes are compact, processor- and memory-sensitive.
• The system needs a smaller memory.
• The memory needed does not depend on the programmer's data-type selection and rule declarations.
• Not compiler-specific and not library-function-specific.
• Device driver codes may need only a few assembly instructions.
• Bottom-up design approach.


2. Advantages of using a high-level language (HLL) for programming
Short development cycle
• Code reusability: a function or routine can be repeatedly used in a program.
• Standard library functions: for example, the mathematical functions and the delay ( ), wait ( ) and sleep ( ) functions.
• Use of modular building blocks.

Short development cycle: bottom-up design
• Sub-modules are designed first for specific and distinct sets of actions, then the modules, and finally they are integrated into the complete design.
• First code the basic functional modules, then build a bigger module, and then integrate into the final system.

Short development cycle: top-down design
• First design the main program (blueprint), then its modules, and finally the sub-modules for specific and distinct sets of actions.
• Top-down design is the most favoured program design approach.

Use of data types and declarations
• Examples: char, int, unsigned short, long, float, double, boolean.
• Each data type provides an abstraction of (i) the methods to use, manipulate and represent it, and (ii) the set of permissible operations.

Use of type checking
• Type checking during compilation makes the program less prone to errors.
• Example: type checking on a char data-type variable (a character) does not permit subtraction, multiplication or division.

Use of control structures, loops and conditions
• Control structures and loops; examples: while, do-while, break and for.
• Conditional statements; examples: if, if-else, else-if and switch-case.
• Makes tasks simple for the program flow design.

Use of Data Structures
• Data structure: a way of organizing large amounts of data; a collection of data elements.
• A data element in a structure is identified and accessed with the help of a few pointers and/or indices and/or functions.


Standard Data structures

• Queue
• Stack
• Array (one-dimensional, as a vector)
• Multidimensional array
• List
• Tree

Use of Objects
• Objects bind the data fields and the methods that manipulate those fields.
• Object reusability.
• Provide inheritance, method overloading, overriding and interfacing.
• Many other features for ease of programming.

Advantage of using C for Programming

C
• A procedure-oriented language (no objects).
• Provides for inserting assembly language codes in between (called in-line assembly) to obtain direct hardware control.

Procedure-oriented language
• A large program in C splits into declarations for variables, functions and data structures, simpler functional blocks and statements.

In-line assembly codes in C functions
• The processor- and memory-sensitive part of the program goes in the in-line assembly, and the complex part in the HLL codes.
• Example: the function outportb (q, p) compiles to─ mov al, p; out q, al

‘C’ PROGRAM ELEMENTS

Preprocessor include Directive
Header, configuration and other available source files are made part of an embedded system program source file by this directive.
Examples of preprocessor include directives:
# include "VxWorks.h" /* Include VxWorks functions */
# include "semLib.h" /* Include semaphore functions library */


# include "taskLib.h" /* Include multitasking functions library */
# include "sysLib.c" /* Include system library for system functions */
# include "netDrvConfig.txt" /* Include a text file that provides the network driver configuration */
# include "prctlHandlers.c" /* Include file for the codes for handling and actions as per the protocols used for driving streams to the network */

Preprocessor directives for definitions
• Global variables ─ # define volatile boolean IntrEnable
• Constants ─ # define false 0
• Strings ─ # define welcomemsg "Welcome To ABC Telecom"

Preprocessor Macros

• Macro: a named collection of codes that is defined in a program as a preprocessor directive.
• It differs from a function in that, once a macro is defined by a name, the compiler puts the corresponding codes in place of the macro at every place where that macro name appears.

Difference between macro and function
• The codes for a function are compiled once only.
• On calling a function, the processor has to save the context, and on return restore the context.
• Macros are used for short codes only.
• When a function call is used instead of a macro, the overheads (context saving and return) take a time, Toverheads, that is of the same order of magnitude as the time, Texec, for executing the short codes within the function.
• Use a function when Toverheads << Texec, and a macro when Toverheads is comparable to or greater than Texec.

Use of Modifiers
• auto
• unsigned
• static
• const
• register
• interrupt
• extern
• volatile
• volatile static
Use of infinite loops

• Infinite loops are never desired in usual programming. Why? The program would never end and never exit or proceed further to the codes after the loop.
• An infinite loop is a feature in embedded system programming!

Example: a telephone is never switched off. The system software in the telephone has to run an always-waiting loop that detects a ring on the line. An exit from the loop would make the system hardware redundant.


# define false 0
# define true 1

void main (void) {
    /* Call RTOS run here */
    rtos.run ( );
    /* Infinite while loops follow in each task, so there is never a return from the RTOS. */
}

void task1 (....) {
    /* Declarations */
    while (true) {
        /* Run codes that repeatedly execute */
        /* Run codes that execute on an event */
        if (flag1) {....;}; flag1 = 0;
        /* Codes that execute for a message to the kernel */
        message1 ( );
    }
}

Use of typedef
Example: a compiler version may not process a declaration such as an unsigned byte.
• An 'unsigned character' (one byte) can then be defined as a data type and used.
• Declared as follows: typedef unsigned char portAdata;
• Used as follows: portAdata Pbyte = 0xF1;
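A compilable sketch of the typedef idea above, with the usage written as a declaration with an initializer (the original usage line appears garbled, so this form is an assumption about the intent):

```c
#include <assert.h>

/* Define a one-byte type once, then declare port-data variables with it. */
typedef unsigned char portAdata;

static portAdata Pbyte = 0xF1;   /* a byte for port A, as in the text */
```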

Use of Pointers
Pointers are powerful tools when used correctly and according to certain basic principles.
Example:
# define COM ((struct sio near*) 0x2F8)
This statement, with a single master stroke, assigns the addresses to all 8 variables of the structure.
Example: free the memory spaces allotted to a data structure.
# define NULL (void*) 0x0000
Now a statement assigning COM to NULL, COM = (struct sio near*) NULL; makes free the memory between 0x2F8 and 0x2FF for other uses.
Data structure
• Example ─ structure sio
• Eight characters ─ seven for the bytes in BR/THR/DLATCH LByte, IER, IIR, LCR, MCR, LSR, MSR


registers of the serial line device, and one dummy variable.
Example of data structure declaration
• Assume a structured variable COM at the addresses beginning 0x2F8:
# define COM ((struct sio near*) 0x2F8)
• COM is at the 8 addresses 0x2F8-0x2FF and is a structure consisting of 8 character variables: the structure for the COM2 port in the UART serial line device on an IBM PC.
# define COM1 ((struct sio near*) 0x3F8)
This gives another structured variable, COM1, at addresses beginning 0x3F8, using the data structure declared earlier as sio.
Use of functions
(i) Passing the values (elements): the values are copied into the arguments of the function. When the function is executed in this way, it does not change a variable's value in the function that calls the new function.
(ii) Passing the references: when an argument value passes to a function through a pointer, the called function can change this value. On returning from this function, the new value is available in the calling program or in another function called by this function.
Use of reentrant functions
• Reentrant function: a function usable by several tasks and routines synchronously (at the same time), because all the values of its arguments are retrievable from the stack.
Three conditions for a function to be called reentrant

1. All the arguments pass values, and none of the arguments is a pointer (address), whenever a calling function calls that function.
2. When an operation is not atomic, the function should not operate on any variable that is declared outside the function, that an interrupt service routine uses, or that is a global variable passed by reference rather than by value as an argument. [The values of such non-local variables are not saved on the stack when there is a call to another program.]
3. The function does not call any other function that is not itself reentrant.
DATA STRUCTURES: ARRAYS

Array: a structure with a series of data items sequentially placed in memory.
(i) Each element is accessible by an identifier name (which points to the array) and an index, i (which defines the offset from the first element).
(ii) i starts from 0 and is a non-negative integer.


Example 1: one-dimensional array (vector)
unsigned int salary [12];
salary [0] – 1st month's salary; salary [11] – 12th month's salary.
Each integer is of 32 bits (4 bytes); salary is assigned a 48-byte address space.
Example 2:
sio COM [2];
COM [0] – COM1 port data record with structure equivalent to sio.
COM [1] – COM2 port data record with structure equivalent to sio.
COM is assigned 2 × 8 characters = 16 bytes of address space.
Two-dimensional array
unsigned int salary [12][10];
salary [3][5] – 4th month, 6th year salary.
salary [11][4] – 12th month, 5th year salary.
salary is assigned 12 × 10 × 4 = 480 bytes of address space.

Multi-dimensional array
char pixel [144][176][24];
pixel [0][2][5] – 1st horizontal line index x, 3rd vertical line index y, 6th colour c.
pixel is assigned 144 × 176 × 24 = 608256 bytes of address space in a coloured picture of resolution 144 × 176 with 24 colours.
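The storage figures above can be checked with sizeof, assuming a 4-byte unsigned int (array names follow the text; salary2 is an illustrative name for the two-dimensional table):

```c
#include <assert.h>

static unsigned int salary[12];        /* salary[0] .. salary[11]: 12 months  */
static unsigned int salary2[12][10];   /* 12 months x 10 years                */
static char pixel[144][176][24];       /* 144 x 176 picture, 24 colour planes */
```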


DATA STRUCTURES: QUEUES, PIPES AND SOCKETS

QUEUE

• A structure with a series of data elements, with the first element waiting for an operation.
• Used when an element is not to be accessed directly by index with a pointer, as in an array, but only in FIFO (first-in first-out) mode through a queue-head pointer.
• An element can be inserted only at the end (also called tail or back) of the series of elements waiting for an operation, and deleted only from the front (queue head).
• There are two pointers: one for deleting after the read operation from the head, and the other for inserting at the tail. Both increment after an operation.


Circular Queue at a memory block


Queue with a header for its length at a memory block

Memory block for Queue with a header for its length


Queue with a header for its length at a memory block

Queue with a header for length, source address and destination addresses at a memory block
• When a byte stream is sent on a network, the bytes for the header (for the length of the stream and for the source and destination addresses of the stream) are a must. [Note: there may be other header bits, for example in the IP protocol; there may be trailing bytes, for example in the HDLC and Ethernet protocols.]
Standard functions used in a queue
1. QELInsert – insert an element into the queue as pointed by *qtail and increment the qtail pointer address.
2. QELReturn – return an element from the queue as pointed by *qhead; the element is deleted from the queue on incrementing the qhead pointer address (return also means delete).
3. isQNotEmpty – return true or false after checking whether the queue is not empty.
Priority queue
When there is an urgent message to be placed in a queue, we can program it such that a priority data element inserts at the head instead of at the tail. That element is retrieved as if last-in first-out.
Application example of a queue
• Networking applications need specialized formations of a queue. On a network, the bits are transmitted in a sequence and retrieved at the other end in a sequence. To separate the bits of the different blocks, frames or packets, there are header bytes.
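A minimal sketch of the three queue functions named above, on a fixed-size circular memory block (the explicit length counter and the 0/1 error returns are assumptions made to keep the example runnable):

```c
#include <assert.h>

#define QSIZE 8

static unsigned char qbuf[QSIZE];   /* memory block for the queue          */
static int qhead, qtail, qlen;      /* delete index, insert index, count   */

static int isQNotEmpty(void) { return qlen > 0; }

/* Insert at the tail and increment the tail pointer (with wrap-around). */
static int QELInsert(unsigned char c) {
    if (qlen == QSIZE) return 0;    /* queue full */
    qbuf[qtail] = c;
    qtail = (qtail + 1) % QSIZE;
    qlen++;
    return 1;
}

/* Read from the head and delete by incrementing the head pointer. */
static int QELReturn(unsigned char *c) {
    if (!isQNotEmpty()) return 0;   /* queue empty */
    *c = qbuf[qhead];
    qhead = (qhead + 1) % QSIZE;
    qlen--;
    return 1;
}
```

Elements come back in the order they were inserted, which is the FIFO behaviour the text describes.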


Queue with header bytes

• Queues of bytes in a stream play a vital role in network communication.
• Network queues have headers for length, source and destination addresses, and other bytes as per the protocol used.

Queue with header bytes
• The header with the queue data elements (forming a byte stream) follows a protocol. A protocol may also provide for appending bytes at the queue tail; these may be the CRC (cyclic redundancy check) bytes at the tail.
Data-stream flow control
• Uses a special construct, FIPO (first-in provisionally out).
• FIPO is a special queue construct in which deletion is provisional: the head pointer moves backward as per the last acknowledged (successfully read) byte at the destination in the network.
Pipe
A pipe is a virtual device for sending byte or data-element streams in at the tail and retrieving them from the head, but using the pipe-device driver functions [create ( ), open ( ), connect ( ), read ( ), write ( ), close ( )] and a device descriptor, fd.

• Elements are deleted from the head of a pipe at the destination and inserted at the tail at the source.
• Source and destination may be two physical devices on two systems, or on the same system.

Pipe device driver functions
• create ( )
• connect ( ) – for connecting the pipe between two addresses for read and write, respectively
• open ( )
• read ( )
• write ( )
• close ( )
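The FIFO behaviour described above can be sketched with the POSIX pipe ( ) call (an assumption: a POSIX host stands in for an RTOS pipe driver, which exposes an analogous create/open/read/write interface):

```c
#include <assert.h>
#include <string.h>
#include <unistd.h>

/* Write a byte stream in at the tail end and read it back, in order,
   from the head end of a POSIX pipe. Returns 0 when the bytes round-trip. */
static int demo_pipe(void) {
    int fd[2];
    char out[6] = "hello", in[6] = {0};
    if (pipe(fd) != 0) return -1;
    if (write(fd[1], out, 5) != 5) return -1;   /* insert at the tail   */
    if (read(fd[0], in, 5) != 5) return -1;     /* delete from the head */
    close(fd[0]);
    close(fd[1]);
    return strcmp(in, out);
}
```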


Socket
A socket is a virtual device used to send packets from an addressed port at the source to another addressed port at the destination. Packets are blocks of byte streams, each byte stream having header and/or trailing bytes.
Socket device driver functions
• create ( )
• connect ( ) – for two ports at two addresses for read and write, respectively
• listen ( )
• accept ( )
• open ( )
• read ( )
• write ( )
• close ( )
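A sketch of socket-style exchange using the POSIX socketpair ( ) call (an assumption standing in for the driver's create/connect steps): a block of bytes written at one connected endpoint is read, in order, at the other.

```c
#include <assert.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Two connected endpoints exchange a packet with write() and read().
   Returns 0 when the received block matches the sent block. */
static int demo_socket(void) {
    int sv[2];
    char msg[7] = "packet", in[7] = {0};
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0) return -1;
    if (write(sv[0], msg, 6) != 6) return -1;
    if (read(sv[1], in, 6) != 6) return -1;
    close(sv[0]);
    close(sv[1]);
    return strcmp(in, msg);
}
```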


Network socket: message bytes with a header as per the protocol; data is sent and received using the device driver functions write ( ) and read ( )

DATA STRUCTURES: STACKS

• A structure with a series of data elements, with the last-sent element waiting for a delete operation.
• Used when an element is not to be accessed by index with a pointer directly, as in an array, but only in LIFO (last-in first-out) mode through a stack-top pointer.

Push and Pop onto a STACK

- A data element can be pushed (inserted) only at the front (stack top) of the series of elements waiting for an operation, and popped (deleted) also from the front (stack top).
- There is only one pointer, the stack-top pointer, used both for deleting after the read operation and for inserting. If the pointer increments after a push operation, then it decrements after a pop operation.
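A minimal push/pop sketch with a single stack-top pointer, matching the description above (the names follow the SELInsert/SELReturn convention used later in the notes; the 0/1 error returns are assumptions):

```c
#include <assert.h>

#define SSIZE 8

static unsigned char sbuf[SSIZE];
static int stop_idx;                      /* stack-top: next free slot */

static int isSNotEmpty(void) { return stop_idx > 0; }

/* Push: store at the top and increment the stack-top pointer. */
static int SELInsert(unsigned char c) {
    if (stop_idx == SSIZE) return 0;      /* stack full */
    sbuf[stop_idx++] = c;
    return 1;
}

/* Pop: decrement the stack-top pointer and read back — LIFO order. */
static int SELReturn(unsigned char *c) {
    if (!isSNotEmpty()) return 0;         /* stack empty */
    *c = sbuf[--stop_idx];
    return 1;
}
```

The last element pushed is the first popped, which is the LIFO behaviour the text describes.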


Standard functions used in a stack
1. SELInsert – push a data element onto the stack as pointed by *item and increment the item pointer address.
2. SELReturn – pop an element from the stack as pointed by *item; the element is deleted from the stack on decrementing the item pointer address.
3. isSNotEmpty – return true or false after checking whether the stack is not empty.
Stack Pointer

• SP (stack pointer): a pointer to a memory block dedicated to saving the context on a context switch to another ISR or routine.

• Each processor has at least one stack pointer, so that the instruction stack can be pointed to and the calling of routines can be facilitated.

• In some processors, RIP (return instruction pointer) is a register for saving the return address of the program counter when a routine calls another routine or an ISR.

• RIP is also called the link register (LR) in the ARM processor.


FP (data frame pointer)
• A pointer to a memory block dedicated to saving the data and local variable values; called FIP in certain processors.
PFP (previous program frame pointer)
• A pointer to a memory block dedicated to saving the previous program data frame, in certain processors.
Multiple stack frames for multiple threads
• An OS defines processes or threads such that each process or thread is allocated one task stack pointer or thread stack pointer.


Motorola MC68010 processor
• USP (user stack pointer) and SSP (supervisory stack pointer).
• A program runs in two modes: user mode and supervisory mode. In supervisory mode, the operating system functions execute.
• The processor switches from user mode to supervisory mode after every tick of the system clock.

MC68040

• USP (user stack pointer), SSP (supervisory stack pointer), MSP (memory stack frame pointer) and ISP (instruction stack pointer).
Application examples
1. Ease in saving the data elements in case of interrupts or function calls.
2. Nested sets of operations, like function calls.
3. A program thread that blocks pushes onto the stack, and the last pushed thread pops first.
Tables

• A table is a two-dimensional array (matrix), an important data set that is allocated a memory block.
• There is always a base pointer for a table.
• The base pointer points to its first element, at the first column of the first row.
• There are two indices, one for a column and the other for a row.

Three pointers in a table
Three pointers – the table base, column index and row index pointers – can retrieve any element of the table.


Lookup table
• An important data set.
• A lookup table can be regarded as a two-dimensional array (matrix) whose first column holds pointers, one pointer in each row, and whose second column holds the values pointed to by the first column in each row.
• The first and second columns are at different, non-adjacent addresses.
• Each row has a pointer in the first column, and from the pointed memory block the addressed data is traced.
Column of the pointers in a lookup table
Column index pointers can retrieve any row element in the table.
Hash table
• A data set that is a collection of pairs of a key and a corresponding value.
• A hash table has a key or name in one column; the corresponding value or object is in the second column.
• The keys may be at non-consecutive memory addresses.
• Lookup tables store values like a hash: if the first column of a table is used as a key (a pointer to the value) and the second column as a value, we call that table a lookup table.
• A hash table is a two-dimensional array (matrix) whose first column can be said to hold the keys and whose second column the values.
• An important data set.
• Each row has a key, and by matching the key the addressed data in the second column is traced.
• Just as an index identifies an array element, a hash key identifies a hash element.
Column of keys in a hash table
By matching a key in the column of keys, the values are retrieved from the second column of the table.


Lists

1) A list differs from an array: array memory allocation is as per the index assigned to an element.
2) Each array element is at consecutive memory addresses starting from the 0th element's address.
3) Each element value (or object) in an array is read, replaced or written by addressing with only two values: the 0th element's address pointer and the index (or indices).
4) Each list element must include, along with an item, a pointer, LIST_NEXT. Each element is at the memory address to which the predecessor list element points.
5) LIST_NEXT points to the next element in the list; LIST_NEXT points to NULL in the element at the end of the list.
6) The memory size of an element's item can also vary. The address of the list element at the top is a pointer, LIST_TOP. Only by using LIST_TOP and traversing through the LIST_NEXT values of all the preceding elements can an element be deleted, replaced, or inserted between two elements.
7) A list differs from a queue as follows: a queue is accessible and readable as FIFO only.
8) An insertion of an element in a list can be done anywhere within it, but only by traversing through the LIST_NEXT pointers of all the preceding elements.
9) An insertion is always at the tail in a queue. Also, an element can be read and deleted from anywhere in a list, but only by traversing through the list; it is always from the head in a queue.


10) Each element of an ordered list is placed as per the order of the priority assigned to its items. The priority can be the order of task execution, alphabetical order, the timeout period left (in the case of a list of active timers), or first-time entry then the next, in a chosen criterion for ordering the list.
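The element layout in points 4-6 can be sketched as a C structure; traversal must follow LIST_NEXT from LIST_TOP, and the length function below is an illustrative use of that traversal:

```c
#include <assert.h>
#include <stddef.h>

/* Each element carries an item and a LIST_NEXT pointer;
   the last element's LIST_NEXT is NULL. */
struct list_el {
    int item;
    struct list_el *LIST_NEXT;
};

/* Only LIST_TOP is known; every other element is reached by traversal. */
static int list_length(const struct list_el *LIST_TOP) {
    int n = 0;
    while (LIST_TOP != NULL) { n++; LIST_TOP = LIST_TOP->LIST_NEXT; }
    return n;
}
```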


Application examples of a list
1. A list of tasks that are active (not blocked and not finished).
2. A list of software timers that have not yet timed out and to which clock inputs are to be periodically given by the real-time clock.


Tree
• When a list becomes long, traversing through it for insertion, deletion and search of an in-between element becomes a lengthy process.
• Suppose a list element, instead of just pointing to the next element through LIST_NEXT, points to two elements using LIST_NEXT_LEFT and LIST_NEXT_RIGHT, or to more than two elements by LIST_NEXT1, LIST_NEXT2, .... Then, instead of a list, we form a tree.
1) There is a root element.
2) The root has two or more branches, each having a daughter element.
3) Each daughter element has two or more daughter elements.
4) The last one (a leaf) does not have any daughter element and points to NULL.
5) Only the root element is identifiable, and that is done by the tree-top pointer (header). Each element points to TNodeNextLeft and TNodeNextRight in a binary tree, or to more than two elements by TNodeNext1, TNodeNext2, ..., TNodeNextN in a tree with (a maximum of) N branches at a node.
6) Since no other element is identifiable directly, only by traversing from the root element, then proceeding continuously through all the succeeding daughters, can a tree element be read, read and deleted, added to another daughter, or replaced by another element.


7) The last element in a branch points to NULL, as in a list.
8) A tree has data elements arranged as branches. The last daughter, with no further daughters, is called a leaf node. A binary tree is a tree with a maximum of two daughters (branches) in each element.
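A binary-tree sketch of points 1-8, using the TNodeNextLeft/TNodeNextRight names from the text (the node counter is an illustrative traversal from the root through every daughter):

```c
#include <assert.h>
#include <stddef.h>

/* A node of a binary tree: two daughters, NULL at a leaf. */
struct tnode {
    int item;
    struct tnode *TNodeNextLeft;
    struct tnode *TNodeNextRight;
};

/* Only the root is identifiable; reach every other element by recursion
   through the succeeding daughters. */
static int count_nodes(const struct tnode *root) {
    if (root == NULL) return 0;
    return 1 + count_nodes(root->TNodeNextLeft)
             + count_nodes(root->TNodeNextRight);
}
```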

Application examples of a tree
1. A directory: a number of file folders, each file folder having a number of other file folders, and so on; a file is at the last node (leaf).
2. USB devices: nodes connected to hubs and nodes, and finally to a host controller at the root.
3. Files in a sub-directory, each sub-directory linked to a parent directory, and finally to a root directory.
4. A root has a number of file folders; each file folder has a number of other file folders, and so on; at the end there is a file each.
5. A network architecture in which a central server connects to multiple servers and clients.
Programming using functions and function queues
• Use of multiple function calls in the main ( )
• Use of multiple function calls in cyclic order
• Use of a pointer to a function
• Use of function queues
• Use of queues of function pointers built by the ISRs. This significantly reduces the ISR latency periods; each device ISR is therefore able to execute within its stipulated deadline.
Multiple function calls


Multiple function calls in cyclic order
Use of multiple function calls in cyclic order
• One of the most common methods is the use of multiple function calls in a cyclic order in an infinite loop of the main ( ).
Use of function pointers
• The * sign, when placed before a function name, refers to the compiled form of all the statements in memory that are specified within the curly braces when declaring the function.
• A returning data type specification (for example, void) followed by '(*functionName) (functionArguments)' calls the statements of functionName using functionArguments and, on return, returns the specified data object. We can thus use a function pointer to invoke a call to the function.
Queue of function pointers
Application of a queue of function pointers inserted by ISRs: makes possible the design of ISRs with short codes, by running the functions of the ISRs at later stages.
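The '(*functionName) (functionArguments)' syntax above, in a minimal compilable form (the function twice is illustrative):

```c
#include <assert.h>

static int twice(int x) { return 2 * x; }

/* Declare a pointer to a function taking int and returning int,
   assign it a function's address, and invoke through the pointer. */
static int call_through_pointer(int v) {
    int (*fp)(int) = twice;   /* the function name yields its address */
    return fp(v);             /* call via the pointer */
}
```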

Multiple ISRs inserting function pointers into a queue
• The ISRs insert the function pointers.
• The pointed functions in the queue execute at later stages by deletion from the queue.
• These queued functions execute after the service to all pending ISRs finishes.
Example: interrupt service routine ISR_PortAInputI, with declarations for the functions

void interrupt ISR_PortAInputI (QueueElArray In_A_Out_B) {
    disable_PortA_Intr ( ); /* Disable another interrupt from port A */
    void inPortA (unsigned char *portAdata); /* Function for input from port A */
    void decipherPortAData (unsigned char *portAdata); /* Function for deciphering */
    void encryptPortAData (unsigned char *portAdata); /* Function for encrypting */
    void outPortB (unsigned char *portAdata); /* Function for sending output to port B */

    /* Insert the function pointers into the queue */
    In_A_Out_B.QELInsert (inPortA, portAdata);
    In_A_Out_B.QELInsert (decipherPortAData, portAdata);
    In_A_Out_B.QELInsert (encryptPortAData, portAdata);
    In_A_Out_B.QELInsert (outPortB, portAdata);

    /* Enable interrupt before return from the ISR */
    enable_PortA_Intr ( ); /* Enable another interrupt from port A */
} /*********************/
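The pattern in the example above can be sketched in portable C: a (simulated) ISR only enqueues function pointers, and the main loop runs the pointed functions later. All names, the XOR "decipher" step and the trace variable are illustrative assumptions.

```c
#include <assert.h>

typedef void (*task_fn)(unsigned char *data);

static task_fn fq[8];                    /* queue of function pointers */
static int fq_head, fq_tail;

static unsigned char portAdata;          /* shared byte from "port A"  */
static int trace;                        /* records execution order    */

static void decipher(unsigned char *d)   { *d ^= 0x55; trace = trace * 10 + 1; }
static void retransmit(unsigned char *d) { (void)d;    trace = trace * 10 + 2; }

/* Short "ISR": only inserts the function pointers and returns at once. */
static void isr_portA(void) {
    fq[fq_tail++] = decipher;
    fq[fq_tail++] = retransmit;
}

/* Later, outside the ISR: delete from the queue and execute in order. */
static void run_queued(void) {
    while (fq_head < fq_tail) fq[fq_head++](&portAdata);
}
```

Keeping the ISR this short is what lets every pending device interrupt be serviced within its deadline.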


Priority function queue of multiple ISRs
• When there are multiple ISRs, the highest-priority interrupt service routine is executed first and the lowest-priority one last.


• The ISRs insert the function pointers into a priority queue of function pointers. [Each ISR can now be designed short enough that no other source misses a deadline for service.]
Programming using events or messages; polling; multitasking
Function main with a waiting loop

• main ( ) passes the control to an RTOS.
• Each task is controlled by the RTOS.
• Each task will also have its codes in an infinite loop.
• A waiting task is passed a signal by the RTOS to start.

main ( ) calling the RTOS
# define false 0
# define true 1
/*********************************************************************/
void main (void) {
    /* Call RTOS run here: infinite loop in main ( ) */
    while (1) { rtos.run ( ); }
    /* Infinite while loops follow in each task, so there is never a return from the RTOS. */
}
/*********************************************************************/

Task 1
void task1 (....) {
    /* Declarations */
    while (true) {
        /* Codes that repeatedly execute */
        /* Codes that execute on an event */
        if (flag1) {....;}; flag1 = 0;
        /* Codes that execute for a message to the kernel */
        message1 ( );
    }
}
/*********************************************/

Task 2
void task2 (....) {
    /* Declarations */
    while (true) {
        /* Codes that repeatedly execute */


_ /* Codes that execute on an event */ _ if (flag2) {....;}; flag2 =0; _ /* Codes that execute for message to the kernel */ _ message2 ( ); _ } } _ /*********************************************/

TaskN_1:
void taskN_1 (....) {
/* Declarations */
while (true) {
/* Codes that repeatedly execute */
/* Codes that execute on an event */
if (flagN_1) {....;}; flagN_1 = 0;
/* Codes that execute for message to the kernel */
messageN_1 ( );
}
}
/*********************************************/

TaskN:
void taskN (....) {
/* Declarations */
while (true) {
/* Codes that repeatedly execute */
/* Codes that execute on an event */
if (flagN) {....;}; flagN = 0;
/* Codes that execute for message to the kernel */
messageN ( );
}
}
/*********************************************/

Polling for events and messages

• A programming method that facilitates execution of one of multiple possible function calls; the selected function executes after polling.

• A polling example is polling for a screen state (or window menu) j and for a message m from an ISR, as per the user choice.

Mobile phone

• Assume that screen state j is one of K possible states 0, 1, 2, …, K – 1 (the set of menus).
• An interrupt is triggered from a touch-screen GUI and an ISR posts an event-message m = 0, 1, 2, …,


or N – 1, as per the selected menu choice 0, 1, 2, …, N – 1, when there are N menu choices for a mobile phone user to select from a screen in state j.

Polling for a menu selection from screen state K:
void poll_menuK ( ) {
/* Code for polling for choice from menu m for screen state K */
}

/*********************************/


Process
• A process consists of an executable program (codes), the state of which is controlled by the OS.
• The state during running of a process is represented by the process status (running, blocked, or finished), the process structure — its data, objects and resources — and the process control block (PCB).
• Runs when it is scheduled to run by the OS (kernel).
• The OS gives the control of the CPU on a process's request (system call).


• Runs by executing the instructions; continuous changes of its state take place as the program counter (PC) changes.
• A process is that executing unit of computation which is controlled by some process of the OS for a scheduling mechanism that lets it execute on the CPU, and by some process at the OS for a resource-management mechanism that lets it use the system memory and other system resources such as network, file, display or printer.

Example ─ Mobile Phone Device embedded software

� Software is highly complex.
� A number of functions, ISRs, processes, threads, multiple physical and virtual device drivers, and several program objects must be concurrently processed on a single processor.

Exemplary processes at the phone device

Voice encoding and convoluting process ─ the device captures the spoken words through a microphone and generates the digital signals after analog-to-digital conversion; the digits are encoded and convoluted using a CODEC,

• Modulating process,
• Display process,
• GUIs (graphic user interfaces), and
• Key input process ─ for provisioning of the user interrupts.

Process Control Block

• A data structure having the information using which the OS controls the process state.


• Stored in a protected memory area of the kernel.
• Consists of the information about the process state.

Information about the process state at Process Control Block

� Process ID,
� process priority,
� parent process (if any),
� child process (if any),
� address to the next process PCB which will run,
� allocated program memory address blocks in physical memory and in secondary (virtual) memory for the process codes,
� allocated process-specific data address blocks,
� allocated process-heap (data generated during the program run) addresses,
� allocated process-stack addresses for the functions called during running of the process,
� allocated addresses of the CPU register-save area, as a process context is represented by the CPU registers, which include the program counter and stack pointer [register contents (which define the process context) include the program counter and stack pointer contents],
� process-state signal mask [when the mask is set to 0 (active) the process is inhibited from running and when reset to 1 the process is allowed to run],
� signals (messages) dispatch table [process IPC functions],
� OS-allocated resources' descriptors (for example, file descriptors for open files, device descriptors for open (accessible) devices, device-buffer addresses and status, socket descriptor for an open socket), and
� security restrictions and permissions.

CONTEXT

The context loads into the CPU registers from memory when a process starts running, and the registers save at the addresses of the register-save area on the context switch to another process.

� The present CPU registers, which include the program counter and stack pointer, are called the context.
� When the context saves on the PCB-pointed process-stack and register-save area addresses, the running process stops.
� The other process's context now loads and that process runs ─ this means that the context has switched.

Thread Concepts


A thread consists of an executable program (codes), the state of which is controlled by the OS.

The state information ─ thread status (running, blocked, or finished), thread structure — its data, objects and a subset of the process resources — and thread stack.

Thread… lightweight
• A thread is considered a lightweight process and a process-level controlled entity. [Lightweight means its running does not depend on system resources.]

Process… heavyweight
• A process is considered a heavyweight process and a kernel-level controlled entity.
• A process thus can have codes in secondary memory from which the pages can be swapped into the physical primary memory during running of the process. [Heavyweight means its running may depend on system resources.]
• May have a process structure with the virtual memory map, file descriptors, user ID, etc.
• Can have multiple threads, which share the process structure.

Thread
A thread is a process or sub-process within a process that has its own program counter, its own stack pointer and stack, and its own priority parameter for its scheduling by a thread scheduler.

� Its variables load into the processor registers on context switching.
� Has its own signal mask at the kernel.

Thread's signal mask
� When unmasked, lets the thread activate and run.
� When masked, the thread is put into a queue of pending threads.

Thread's Stack
A thread stack is at a memory address block allocated by the OS.


An application program can be said to consist of a number of threads or processes.

Multiprocessing OS
� A multiprocessing OS runs more than one process.
� When a process consists of multiple threads, it is called a multithreaded process.
� A thread can be considered a daughter process.
� A thread defines a minimum unit of a multithreaded process that an OS schedules onto the CPU and allocates other system resources to.

Example ─ Multiple threads of the Display process in a mobile phone device
� Display_Time_Date thread ─ for displaying clock time and date.
� Display_Battery thread ─ for displaying battery power.
� Display_Signal thread ─ for displaying signal power for communication with the mobile service provider.

Exemplary threads of display_process at the phone device
� Display_Profile thread ─ for displaying silent or sound-active mode.
� Display_Message thread ─ for displaying unread messages in the inbox.
� Display_Call Status thread ─ for displaying call status: whether dialing or call waiting.
� Display_Menu thread ─ for displaying the menu.
� Display threads can share the common memory blocks and resources allocated to the Display_Process.


Minimum computational unit
A display thread is now the minimum computational unit controlled by the OS.

Thread Parameters and Stack

Thread parameters
� Each thread has independent parameters: ID, priority, program counter, stack pointer, CPU registers and its present status.
• Thread states ─ starting, running, blocked (sleep) and finished.
� When a function in a thread in the OS is called, the calling function's state is placed on the stack top.
� When there is a return, the calling function takes the state information from the stack top.

Thread Stack
� A data structure having the information using which the OS controls the thread state.
� Stored in a protected memory area of the kernel.
� Consists of the information about the thread state.

Thread and Task

� Thread is a concept used in Java or Unix.
� A thread can either be a sub-process within a process or a process within an application program.
� To schedule the multiple processes, there is the concept of forming thread groups and thread libraries.
� A task is a process and the OS does the multitasking.
� Task is a kernel-controlled entity while thread is a process-controlled entity.

Thread and Task analogy
� A thread does not call another thread to run. A task also does not directly call another task to run.
� Multithreading needs a thread scheduler. Multitasking also needs a task scheduler.
� There may or may not be task groups and task libraries in a given OS.

Task and Task States

Task Concepts
An application program can also be said to be a program consisting of tasks and task behaviours in various states that are controlled by the OS. A task is like a process or thread in an OS. Task ─ the term used for a process in the RTOSes for embedded systems. For example, VxWorks and µC/OS-II are RTOSes which use the term task. A task consists of an executable program (codes), the state of which is controlled by the OS.


� The state during running of a task ─ represented by information of the task status (running, blocked, or finished), the task structure — its data, objects and resources — and the task control block (TCB).
� Runs when it is scheduled to run by the OS (kernel), which gives the control of the CPU on a task request (system call) or a message.
� Runs by executing the instructions; continuous changes of its state take place as the program counter (PC) changes.
� Task is that executing unit of computation which is controlled by some process at the OS scheduling mechanism, which lets it execute on the CPU, and by some process at the OS for a resource-management mechanism that lets it use the system memory and other system resources such as network, file, display or printer.
• A task ─ an independent process.
• No task can call another task. [It is unlike a C (or C++) function, which can call another function.]
• A task can send signal(s) or message(s) that can let another task run.
• The OS can only block a running task and let another task gain access of the CPU to run the servicing codes.

An application program can be said to consist of a number of tasks.

Example ─ Automatic Chocolate Vending Machine (ACVM)
� Software is highly complex.
� The RTOS schedules to run the application embedded software as consisting of a number of tasks.
� A number of functions, ISRs, interrupt service threads, tasks, multiple physical and virtual device drivers, and several program objects must be concurrently processed on a single processor.

� Task User Keypad Input ─ keypad task to get the user input


� Task Read-Amount ─ for reading the inserted coins amount,
� Chocolate delivery task ─ delivers the chocolate and signals the machine to ready for the next input of the coins,
� Display Task,
� GUI_Task ─ for graphic user interfaces,
� Communication task ─ for provisioning the ACVM owner access to the machine information.

Task States

States of a Task in a system:
(i) Idle state [not attached or not registered]
(ii) Ready state [attached or registered]
(iii) Running state
(iv) Blocked (waiting) state
(v) Delayed for a preset period
The number of possible states depends on the RTOS.

Idle (created) state
• The task has been created and memory allotted to its structure.
• However, it is not ready and is not schedulable by the kernel.

Ready (Active) State
• The created task is ready and is schedulable by the kernel, but not running at present, as another higher priority task is scheduled to run and gets the system resources at this instance.


Running state
Executing the codes and getting the system resources at this instance. It will run till it needs some IPC (input), or waits for an event, or till it gets preempted by another higher priority task.

Blocked (waiting) state
� Execution of the task codes suspends after saving the needed parameters into its context.
� It needs some IPC (input), or it needs to wait for an event, or waits for a higher priority task to block in order to enable running after blocking.

Blocked (waiting) state example
A task is pending while it waits for an input from the keyboard or a file. The scheduler then puts it in the blocked state.

Deleted (finished) state
• The created task has memory deallotted from its structure.
• It frees the memory.
• The task has to be re-created.

A created and activated task is, during processing, in one of three states ─ ready, running and blocked.

OS Functions for the tasks and task states at a Smart Card

Exemplary Steps

� Let the main program first run an RTOS function OS_initiate ( ).
� This enables use of the RTOS functions.
� The main program runs an RTOS function OS_Task_Create ( ) to create a task, Task_Send_Card_Info.

OS Functions
� OS_Task_Create ( ) runs twice to create two other tasks, Task_Send_Port_Output and Task_Read_Port_Input, and both of them are also in the idle state. Let these tasks be of middle and low priorities, respectively.
� OS_Start ( ) ─ for starting, and
� OS_Ticks_Per_Sec ( ) ─ for initiating n system clock interrupts per second.
� After initiation, the system switches from user mode to supervisory mode every 1/60 s if n = 60. All three tasks will be made ready by an OS function.


Task_Send_Card_Info

� The task is for sending card information to the host.
� Has an allocated memory for the stack.
� Has a TCB using which the OS controls the task.
� The task state is the idle state at the beginning.
� Let Task_Send_Card_Info be of high priority.
� The OS runs a function which makes the Task_Send_Card_Info state running.
� Task_Send_Card_Info runs an OS function mailbox_post (authentication_request), which sends the server identification request through the IO port to the host using the task

Task_Send_Port_Output

� Task_Send_Card_Info runs a function mailbox_wait ( ), which makes the task state blocked, and the OS switches context to another task, Task_Send_Port_Output, and then to Task_Read_Port_Input for reading the IO port input data.
� When the mailbox gets the authentication message from the server, the OS switches context to Task_Send_Card_Info and the task becomes running again.

Task Data, TCB and Characteristics

Task and its data
� Includes the task context and TCB.
� TCB ─ a data structure having the information using which the OS controls the task state.
� Stored in a protected memory area of the kernel.
� Consists of the information about the task state.


Task Information at the TCB
� Task ID, for example, a number between 0 and 255,
� task priority, if between 0 and 255, represented by a byte,
� parent task (if any),
� child task (if any),
� address to the TCB of the task that will run next,
� allocated program memory address blocks in physical memory and in secondary (virtual) memory for the task codes,
� allocated task-specific data address blocks,
� allocated task-heap (data generated during the program run) addresses,
� allocated task-stack addresses for the functions called during running of the task,
� allocated addresses of the CPU register-save area, as a task context is represented by the CPU registers, which include the program counter and stack pointer.


Task Information at the Task Control Block
� allocated addresses of the CPU register-save area as a task context [register contents (which define the task context) include the program counter and stack pointer contents],
� task-state signal mask [when the mask is set to 0 (active) the task is inhibited from running and when reset to 1 the task is allowed to run],
� task signals (messages) dispatch table [task IPC functions],
� OS-allocated resources' descriptors (for example, file descriptors for open files, device descriptors for open (accessible) devices, device-buffer addresses and status, socket descriptor for an open socket), and
� security restrictions and permissions.

Task's Context and Context Switching

Context
� Each task has a context.
� The context is a record that reflects the CPU state just before the OS blocks one task and initiates another task into the running state.
� Continuously updates during the running of a task.
� Saved before switching occurs to another task.
� The present CPU registers, which include the program counter and stack pointer, are part of the context.
� When the context saves on the TCB-pointed task-stack and register-save area addresses, the running task stops.
� The other task's context now loads and that task runs ─ which means that the context has switched.

Task Coding in an Endless Event-Waiting Loop
� Each task may be coded such that it is in an endless event-waiting loop to start with.
� An event loop is one that keeps on waiting for an event to occur. On the start event, the loop starts executing instructions from the next instruction of the waiting function in the loop.
� Execution of the service codes (or setting a token that is an event for another task) then occurs.
� At the end, the task returns to the event waiting in the loop.

ACVM Chocolate delivery task:
static void Task_Deliver (void *taskPointer) {
/* The initial assignments of the variables and pre-infinite-loop statements that execute once only */
while (1) { /* Start an infinite while-loop. */
/* Wait for an event indicated by an IPC from Task Read-Amount */
/* Codes for delivering a chocolate into a bowl. */
/* Send message through an IPC for displaying "Collect the nice chocolate. Thank you, visit again" to the Display Task */


/* Resume delayed Task Read-Amount */
}; /* End of while loop */
} /* End of the Task_Deliver function */

Task Characteristics

� Each task is independent and takes control of the CPU when scheduled by a scheduler at an OS. The scheduler controls and runs the tasks.
� No task can call another task. [It is unlike a C (or C++) function, which can call another function.]
� Each task is recognised by a TCB.
� Each task has an ID just as each function has a name. The task ID is a byte if it is between 0 and 255. The ID is also an index of the task.
� Each task may have a priority parameter. The priority, if between 0 and 255, is represented by a byte.
� Each task has a signal mask.
� A task is an independent process. The OS will only block a running task and let another task gain access of the CPU to run the servicing codes.
� Each task has its independent (distinct from other tasks) values of the following at an instant: (i) program counter and (ii) task stack pointer (memory address from where it gets the saved parameters after the scheduler grants access of the CPU). These two values are part of the context of a task.
� A task runs by context switching to that task by the OS scheduler.
� Multitasking operations are by context switching between the various tasks.
� Each task must either be a reentrant routine or must have a way to solve the shared-data problem.
� The task returns to either the idle state (on deregistering or detaching) or the ready state after finishing (completion of the running state), that is, when all the servicing codes have been executed.
� Each task may be coded such that it is in an endless loop waiting for an event to start running of the codes. The event can be a message in a queue or in a mailbox, a token or signal, or a delay period getting over.


Function, Task and ISR

Function
A function is an entity used in any program, function, task or thread for performing a specific set of actions when called; on finishing the action, the control returns to the calling entity (a calling function, task, process or thread).

� Each function has an ID (name),
� has a program counter, and
� has its stack, which saves when it calls another function, and the stack restores on return to the caller.
� Functions can be nested: one function calls another, that can call another, and so on, and later the return is in reverse order.

Interrupt Service Routine
� An ISR is a function called on an interrupt from an interrupting source.
� Further, unlike a function, the ISR can have hardware- and software-assigned priorities.
� Further, unlike a function, the ISR can have a mask, which inhibits execution on the event when the mask is set and enables execution when the mask is reset.

Task
A task is defined as an executing computational unit that processes on a CPU and the state of which is under the control of the kernel of an operating system.

Distinction Between Function, ISR and Task

Uses
• Function ─ for running a specific set of codes for performing a specific set of actions as per the arguments passed to it.
• ISR ─ for running, on an event, a specific set of codes for performing a specific set of actions for servicing the interrupt call.
• Task ─ for running codes on context switching to it by the OS; the codes can be in an endless loop for the event(s).

Calling Source
• Function ─ a call from another function or process or thread or task.
• ISR ─ an interrupt call for running an ISR can be from hardware or software at any instance.
• Task ─ a call to run the task is from the system (RTOS). The RTOS can let another higher priority task execute after blocking the present one. It is the RTOS (kernel) only that controls the task scheduling.


Context Saving
• Function ─ runs by change in the program counter's instantaneous value. There is a stack, on the top of which the program counter value (for the code left without running) and other values (the function's context) save.
• All functions have a common stack in order to support the nesting.


Context Saving

• ISR ─ Each ISR is an event-driven function code. The code runs by change in the program counter's instantaneous value. An ISR has a stack for the program counter's instantaneous value and other values that must save.
• All ISRs can have a common stack in case the OS supports nesting.
• Task ─ Each task has a distinct task stack at a distinct memory block for the context (program counter's instantaneous value and other CPU register values in the task control block) that must save.
• Each task has a distinct process structure (TCB) for it at a distinct memory block.

Tasks and their separate contexts

Response and Synchronization
� Function ─ nesting of one another; a hardware mechanism for sequential nested-mode synchronization between the functions directly, without control of the scheduler or OS.
� ISR ─ a hardware mechanism for responding to an interrupt for the interrupt source calls; according to the given OS kernel feature, a synchronizing mechanism for the ISRs, and there can be nesting support by the OS.
� Task ─ According to the given OS kernel feature, there is a task responding and synchronizing mechanism. The kernel functions are used for task synchronization because only the OS kernel calls a task to run at a time. When a task runs and when it blocks is fully under the control of the OS.

Structure
� Function ─ can be a subunit of a process or thread or task or ISR, or a subunit of another function.
� ISR ─ can be considered a function which runs on an event at the interrupting source. A pending interrupt is scheduled to run using an interrupt-handling mechanism in the OS; the mechanism can be priority-based scheduling. The system, during running of an ISR, can let another higher priority ISR run.
� Task ─ is independent and can be considered a function which is called to run by the OS scheduler using a context-switching and task-scheduling mechanism of the OS.


� The system, during running of a task, can let another higher priority task run. The kernel manages the task scheduling.

Global Variables Use
� Function ─ can change the global variables. The interrupts must be disabled, and after finishing the use of the global variable, the interrupts are enabled.
� ISR ─ when using a global variable in it, the interrupts must be disabled, and after finishing the use of the global variable, the interrupts are enabled (analogous to the case of a function).
� Task ─ when using a global variable, either the interrupts are disabled and, after finishing the use of the global variable, the interrupts are enabled; or the semaphores or lock functions are used in the critical sections, which can use global variables and memory buffers.

Posting and Sending Parameters
� Function ─ can get the parameters and messages through the arguments passed to it, or through global variables to which references are made by it. A function returns the results of the operations.
� ISR ─ using IPC functions, can send (post) the signals, tokens or messages. An ISR can't use the mutex protection of the critical sections by waiting for the signals, tokens or messages.
� Task ─ can send (post) the signals and messages; can wait for the signals and messages using the IPC functions; can use the mutex or lock protection of a code section by waiting for the token or lock at the section beginning, and post the token or unlock at the section end.

Concept of Semaphore as an event-signaling or notifying variable

Semaphore as an event-signaling or notifying variable
• Suppose that there are two trains.
• Assume that they use an identical track.
• When the first train A is to start on the track, a signal or token for A is set (true, taken) and the same signal or token for the other train, B, is reset (false, not released).

OS Functions for a semaphore as an event-signaling or notifying variable
• OS functions provide for the use of a semaphore for signaling or notifying a certain action, or notifying the acceptance of the notice or signal.
• Let a binary Boolean variable, s, represent the semaphore.
• The take and post operations on s ─ (i) signal or notify operations for communicating the occurrence of an event and (ii) for communicating taking note of the event.
• The notifying variable s is like a token ─


(i) acceptance of the token is taking note of that event, (ii) release of a token is the occurrence of an event.

BINARY SEMAPHORE

� Let the token (flag for event occurrence) s have initial value = 0.
� Assume that s increments from 0 to 1 for signaling or notifying the occurrence of an event from a section of codes in a task or thread.
� When the event is taken note of by a section in another task waiting for that event, s decrements from 1 to 0 and the waiting task codes start another action.
� When s = 1 ─ it is assumed that it has been released (or sent or posted) and no task code section has taken it yet.
� When s = 0 ─ it is assumed that it has been taken (or accepted) and another task code section has not released it again yet.

Binary Semaphore use in ISR and Task

• An ISR can release a token.
• A task can release the token as well as accept the token or wait for taking the token.

Uses in ACVM

• Chocolate delivery task ─ after the task delivers the chocolate, it has to notify the display task to run a waiting section of the code to display "Collect the nice chocolate. Thank you, visit again".
• The waiting section for displaying the thank-you message takes this notice and then starts the display of the thank-you message.
• Assume OSSemPost ( ) ─ an OS IPC function for posting a semaphore.
• OSSemPend ( ) ─ another OS IPC function for waiting for the semaphore.
• Let sdispT be the binary semaphore posted from the chocolate delivery task and taken by a Display task section for displaying the thank-you message.
• Let sdispT initial value = 0.

static void Task_Deliver (void *taskPointer) {
while (1) {
/* Codes for delivering a chocolate into a bowl. */
OSSemPost (sdispT); /* Post the semaphore sdispT. This means that the OS function increments sdispT in the corresponding event control block. sdispT becomes 1 now. */
};
}

static void Task_Display (void *taskPointer) {


while (1) {
OSSemPend (sdispT); /* Wait for sdispT: wait till sdispT is posted and becomes 1. The OS function then decrements sdispT in the corresponding event control block; sdispT is 0 now. The task then runs the following code. */
/* Code for displaying "Collect the nice chocolate. Thank you, visit again" */

};
}

Semaphore as a resource key and for critical sections having shared resource(s)

Shared Resource(s)

• A shared memory buffer is to be used only by one task (process or thread) at an instance.
• A print buffer, global variable(s), file, network, LCD display line or segment are also used only by one task (process or thread) at an instance.

OS Functions for Semaphore as a resource key
� OS functions provide for the use of a semaphore resource key for running of the codes in a critical section.
� Let a binary Boolean variable, sm, represent the semaphore.
� The resource key can be for a shared memory buffer, print buffer, global variable(s), file, network, LCD display line or segment, … which is to be used only by one task (process or thread) at an instance.
� The take and post operations on sm ─ (i) signal or notify operations for starting the task section using a shared resource, (ii) signal or notify operations for leaving the task section after using a shared resource.
� The semaphore function's variable sm is like a key for the resource ─ (i) the beginning of using the shared resource is by taking the key, (ii) the release of the key is the end of using the shared resource.


Critical Section: a section of code that uses a shared memory buffer, print buffer, global variable(s), file, network, or LCD display line or segment, which is to be used by only one task (process or thread) at an instance; the same resource can be used by another section at another instance.

Mutex Semaphore for use as resource key

Mutex

• Mutex means mutually exclusive key.

• A mutex is a binary semaphore usable for protecting a resource from use by another task section at an instance.

• Let the key sm initial value = 1.

• When the key is taken by a section, the key sm decrements from 1 to 0 and the waiting task codes start.

• sm increments from 0 to 1 for signaling or notifying the end of use of the key by that section of codes in the task or thread.


Mutex Semaphore

• When sm = 0, it is assumed that the key has been taken (accepted); another task code section cannot take it yet, as the resource is being used.

• When sm = 1, it is assumed that the key has been released (sent or posted); another task code section can now take the key and use the resource.
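The two rules above can be sketched as plain code. This is a hypothetical illustration of only the counting rule for the key sm (1 = released, 0 = taken); a real RTOS mutex additionally provides atomicity and a waiting list, which this sketch omits.

```c
/* Illustrative model of the mutex key sm: 1 = key free, 0 = key taken.
   Names (mutex_key, key_take, key_release) are invented for this sketch. */
typedef struct { int sm; } mutex_key;

/* Try to take the key: succeeds (returns 1) only when sm == 1. */
static int key_take(mutex_key *k)
{
    if (k->sm == 1) {   /* key is released, resource free */
        k->sm = 0;      /* take it: resource now in use */
        return 1;
    }
    return 0;           /* key already taken: the caller must wait */
}

/* Release the key: sm goes back to 1; another section may now take it. */
static void key_release(mutex_key *k)
{
    k->sm = 1;
}
```

With sm initialized to 1, the first section to call key_take succeeds; a second attempt fails until key_release runs.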

Mutex Semaphore use in ISR and Task

• An ISR does not take the key.
• A task can wait for taking the key and can release the key.

Uses in Update_Time and Read_Time Tasks in ACVM

• An interrupt service routine runs when the timer times out and posts the key (value = 1) at time T1.

• Update_Time task: when the task updates the time information t at the time device, it has to notify the Read_Time task not to run the waiting section of code that reads t from the time device, as t is being updated.

• Read_Time task: runs the waiting section of code that reads t from the time device after the updating of the time and date.

Uses of the key (mutex)

• Assume OSSemPend ( ) is an OS IPC function for waiting for the semaphore, and OSSemPost ( ) is an OS IPC function for posting a semaphore.

• Let supdateT be the binary semaphore posted by the ISR, and pended and posted by the Update_Time and Read_Time task sections.

• Let the supdateT initial value = 0.

Wait for the update key after the ISR posts time:

static void Task_Update_Time (void *taskPointer)
{
    while (1) {
        OSSemPend (supdateT); /* Wait for the semaphore supdateT = 1 posted
                                 by the ISR. The OS function decrements
                                 supdateT in the corresponding event control
                                 block; supdateT becomes 0 at T2. */
        /* Codes for writing into the time device. */
        OSSemPost (supdateT); /* Post of the update key after updating time
                                 and date: the OS function increments
                                 supdateT in the corresponding event control
                                 block; the supdateT key becomes 1 at
                                 instance T3. */
    }
}


Key taking after the Update task posts the key:

static void Task_Read_Time (void *taskPointer)
{
    while (1) {
        OSSemPend (supdateT); /* Wait for supdateT, i.e., till supdateT is
                                 posted by the update task and becomes 1.
                                 The OS function then decrements supdateT,
                                 which becomes 0 at instance T4. The task
                                 then runs the following code. */
        /* Code for reading the time device. */
        OSSemPost (supdateT); /* Key release after reading time and date:
                                 the OS function increments supdateT in the
                                 corresponding event control block; supdateT
                                 becomes 1 at instance T5. */
    }
}

• Suppose that there are two trains.
• Assume that they use an identical track.
• When the first train A is to start on the track, a signal or token for A is set (true, taken) and the same signal or token for the other train, B, is reset (false, not released).

Use of Multiple Semaphores and Counting Semaphore for Synchronizing the Tasks

Use of Multiple Semaphores

Use of multiple semaphores for synchronizing the tasks: an example of the use of two semaphores for synchronizing the tasks I, J and M and the tasks J and L, respectively.


OS Functions for Semaphore

• OSSemPost ( ) is an OS IPC function for posting a semaphore; assume OSSemPend ( ) is another OS IPC function for waiting for the semaphore.

• Let sTask be the mutex semaphore pended and posted at each task to let another run.

• Let sTask1 initially be 1, and sTask2, sTask3 and sTask4 initially be 0s.

Codes

• Consider codes such that first task I will run, then J, then K, then L, then I again, when at an initial instance sTask1 = 1 and sTask2 = sTask3 = sTask4 = 0.


Running of Tasks A, B, C, and D Synchronized through IPCs s0, s1, s2 and s3

• Task A sends an IPC s1; B is waiting for s1. When s1 releases, B takes s1 and runs. Similarly, C runs on taking s2, D runs on taking s3, and A runs again on taking s0. The running of the codes of tasks A to D is synchronized using the IPCs.

Codes for task I wait for running, then run and release the semaphore for J:

static void Task_I (void *taskPointer)
{
    while (1) {
        OSSemPend (sTask1); /* Wait for the semaphore sTask1: the OS
                               function decrements sTask1 in the
                               corresponding event control block. sTask1
                               becomes 0 and the following codes run. */
        /* Codes for Task_I. */
        OSSemPost (sTask2); /* Post the semaphore sTask2: the OS function
                               increments sTask2 in the corresponding event
                               control block. sTask2 becomes 1. */
    }
}


Codes for task J wait for the semaphore from I, then run and release the semaphore for K:

static void Task_J (void *taskPointer)
{
    while (1) {
        OSSemPend (sTask2); /* Wait for sTask2, i.e., till sTask2 is posted
                               and becomes 1. The OS function then
                               decrements sTask2 in the corresponding event
                               control block; sTask2 becomes 0. The task
                               then runs the following code. */
        /* Code for Task J. */
        OSSemPost (sTask3); /* Post the semaphore sTask3: the OS function
                               increments sTask3 in the corresponding event
                               control block. sTask3 becomes 1. */
    }
}

Codes for task K wait for the semaphore from J, then run and release the semaphore for L:

static void Task_K (void *taskPointer)
{
    while (1) {
        OSSemPend (sTask3); /* Wait for the semaphore sTask3, i.e., till
                               sTask3 is posted and becomes 1. OSSemPend
                               then decrements sTask3 in the corresponding
                               event control block; sTask3 becomes 0. The
                               task then runs the following code. */
        /* Code for Task K. */
        OSSemPost (sTask4); /* Post the semaphore sTask4: the OS function
                               increments sTask4 in the corresponding event
                               control block. sTask4 becomes 1. */
    }
}

Codes for task L wait for the semaphore from K:

static void Task_L (void *taskPointer)
{
    while (1) {
        OSSemPend (sTask4); /* Wait for the semaphore sTask4: the task
                               waits till sTask4 is posted and becomes 1.
                               The OS function then decrements sTask4 in
                               the corresponding event control block;
                               sTask4 becomes 0. The task then runs the
                               following code. */


Codes for task L run and release the semaphore for I:

        /* Code for Task L. */
        OSSemPost (sTask1); /* Post the semaphore sTask1: the OS function
                               increments sTask1 in the corresponding event
                               control block. sTask1 becomes 1. */
    }
}

Number of tasks waiting for the same semaphore

• The OS provides the answer.
• In certain OSes, the semaphore is given to the task of highest priority among the waiting tasks.
• In certain OSes, the semaphore is given to the longest-waiting task (FIFO mode).
• In certain OSes, the semaphore is given as per a selected option, and the option is provided to choose between priority and FIFO.
• The higher-priority task takes the semaphore first in case the priority option is selected; the task pending for the longer period takes the semaphore first in case the FIFO option is selected.

Counting Semaphore

OS counting semaphore functions

• The counting semaphore scnt is an unsigned 8-, 16- or 32-bit integer.

• A value of scnt controls the blocking or running of the codes of a task.

• scnt decrements each time it is taken.

• scnt increments when released by a task.

• scnt at an instance reflects the initialized value minus the number of times it is taken plus the number of times released.

• scnt can be considered as the number of tokens present and the waiting task will do the action if at least one token is present.

• The use of scnt is such that one of the tasks thus waits to execute the codes, or waits for a resource, till at least one token is found.
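The counting rules above can be sketched as plain code. This is a hypothetical, non-blocking model of just the token arithmetic (names csem_take/csem_release are invented); a real OS counting semaphore would block the task instead of returning a failure code.

```c
/* Illustrative model of the counting semaphore scnt: the number of
   tokens present. A take succeeds only when at least one token exists. */
typedef struct { unsigned scnt; } csem;

static int csem_take(csem *s)      /* OSSemPend-like step, no blocking */
{
    if (s->scnt == 0)
        return 0;                  /* no token: the task would block here */
    s->scnt--;                     /* consume one token */
    return 1;
}

static void csem_release(csem *s)  /* OSSemPost-like step */
{
    s->scnt++;                     /* return one token */
}
```

At any instance, scnt equals the initialized value minus the successful takes plus the releases, exactly as the bullet list states.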

Counting Semaphore application Example

• Assume that a task can send the stacks on a network into 8 buffers.

• Each time the task runs, it takes the semaphore and sends the stack into one of the buffers, the one next to the earlier one.

• Assume that a counting semaphore scnt is initialized = 8. Each time the task sends the data into a stack, it takes the scnt and scnt decrements. When the task tries to take the scnt when it is 0, the task blocks and cannot send into a buffer.
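The 8-buffer example can be traced numerically. This is a single-threaded simulation under the assumption that each send takes one token; the function name try_send is invented for the sketch, and a failed take stands in for the task blocking.

```c
/* Simulation of the 8-buffer network example: scnt starts at 8 (the
   number of empty buffers); each successful send consumes one token. */
static unsigned scnt = 8;

static int try_send(void)
{
    if (scnt == 0)
        return 0;        /* scnt is 0: the task blocks, cannot send */
    scnt--;              /* one more buffer now holds outgoing data */
    return 1;
}
```

Running try_send nine times succeeds exactly eight times; the ninth attempt finds scnt == 0 and the task would block until a buffer is freed (a release elsewhere increments scnt).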


ACVM Example

Consider Chocolate delivery task.

• It cannot deliver more chocolates than the total loaded into the machine.
• Assume that semCnt is initialized equal to the total.
• Each time new chocolates are loaded into the machine, semCnt increments by the number of new chocolates.

Chocolate delivery task code

static void Task_Deliver (void *taskPointer)
{
    while (1) { /* Start an infinite while-loop. */
        /* Wait for an event indicated by an IPC from Task Read-Amount. */
        if (Chocolate_delivered)
            OSSemPend (semCnt); /* If chocolate delivered is true, wait
                                   when semCnt is 0; else decrement semCnt
                                   and continue the remaining operations. */
    }
}

P and V semaphores

• An efficient synchronisation mechanism.
• POSIX 1003.1.b, an IEEE standard.
• POSIX ─ for portable OS interfaces in Unix.
• P and V semaphores are represented by integers in place of binary or unsigned integers.

P and V semaphore variables

The semaphore, apart from initialization, is accessed only through two standard atomic operations ─ P and V.

• P (for wait operation)─ derived from a Dutch word ‘Proberen’, which means 'to test'.

• V (for signal passing operation)─ derived from the word 'Verhogen' which means 'to increment'.

• The P semaphore function signals that the task requires a resource and, if it is not available, waits for it.

• The V semaphore function is a signal which the task passes to the OS that the resource is now free for the other users.


P Function ─ P (&sem_1)

/* 1. Decrease the semaphore variable. */
sem_1 = sem_1 - 1;
/* 2. If sem_1 is less than 0, send a message to the OS by calling the
   function waitCallToOS. Control of the process transfers to the OS,
   because less than 0 means that some other process has already executed
   the P function on sem_1. Whenever there is a return from the OS, it
   will be to step 1. */
if (sem_1 < 0) { waitCallToOS (sem_1); }

V Function ─ V (&sem_2)

/* 3. Increase the semaphore variable. */
sem_2 = sem_2 + 1;
/* 4. If sem_2 is less than or equal to 0, send a message to the OS by
   calling the function signalCallToOS. Control of the process transfers
   to the OS, because <= 0 means that some other process has already
   executed the P function on sem_2 and is waiting. Whenever there is a
   return from the OS, it will be to step 3. */
if (sem_2 <= 0) { signalCallToOS (sem_2); }
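The two functions above can be made runnable for tracing. In this sketch (an assumption for illustration, not a real OS interface) the waitCallToOS and signalCallToOS transfers are replaced by simple counters, so the negative-value bookkeeping can be checked: a semaphore below 0 means that many processes are waiting.

```c
/* Counters standing in for the OS calls, so the P/V logic is traceable. */
static int waits = 0;    /* number of waitCallToOS transfers */
static int signals = 0;  /* number of signalCallToOS transfers */

static void P(int *sem)
{
    *sem = *sem - 1;     /* step 1: decrease the semaphore variable */
    if (*sem < 0)        /* step 2: another process holds it: wait */
        waits++;         /* stands in for waitCallToOS(sem) */
}

static void V(int *sem)
{
    *sem = *sem + 1;     /* step 3: increase the semaphore variable */
    if (*sem <= 0)       /* step 4: a process is waiting: signal it */
        signals++;       /* stands in for signalCallToOS(sem) */
}
```

Starting from 1, the first P succeeds without waiting, a second P drives the value to −1 and records a wait, and the matching V records the wake-up signal.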

P AND V SEMAPHORE FUNCTIONS WITH SIGNALING OR NOTIFICATION PROPERTY

Process 1 (Task 1):

while (true) {
    /* Codes. */
    V (&sem_s); /* Continue Process 1; posting sem_s notifies the waiting
                   process that it may now run. */
};

Process 2 (Task 2):

while (true) {
    /* Codes. */
    P (&sem_s); /* The following codes will execute only when sem_s is not
                   less than 0. */
};


P and V SEMAPHORE FUNCTIONS WITH MUTEX PROPERTY


P and V semaphore functions with mutex property ─ wait for starting the critical section, run it, and exit it:

Process 1 (Task 1):

while (true) {
    /* Codes before a critical region. */
    P (&sem_m); /* Enter the Process 1 critical region codes. The following
                   codes will execute only when sem_m is not less than 0. */
    /* Process 1 critical region codes. */
    V (&sem_m); /* Exit the Process 1 critical region codes and continue
                   Process 1. */
};

Process 2 (Task 2):

while (true) {
    /* Codes before a critical region. */
    P (&sem_m); /* Enter the Process 2 critical region codes. The following
                   codes will execute only when sem_m is not less than 0. */
    /* Process 2 critical region codes. */
    V (&sem_m); /* Exit the Process 2 critical region codes and continue
                   Process 2. */
};

P AND V SEMAPHORE FUNCTIONS WITH COUNTING SEMAPHORE PROPERTY

P and V semaphore functions with count property ─ the producer waits for an empty place if the count = 0.


Process c (Task c):

while (true) {
    /* sem_c1 represents the number of empty places. */
    /* Codes before a producer region. */
    P (&sem_c1); /* Continue Process c only when sem_c1 is not less
                    than 0. */

P AND V SEMAPHORE FUNCTIONS FOR THE PRODUCER-CONSUMER PROBLEM (BOUNDED BUFFER PROBLEM)

• Example (i): another task reads the I/O stream bytes from the filled places and creates empty places.
• Example (ii): from the print buffer, an I/O stream prints after a buffer-read, and after printing, more empty places are created.
• Example (iii): a consumer is consuming the chocolates produced, and more empty places (to stock the produced chocolates) are created.

Bounded Buffer Problem

A task blockage operational problem, commonly called the bounded buffer problem:
Example (i) ─ A task cannot transmit to the I/O stream if there are no empty places at the stream.
Example (ii) ─ The task cannot write from the memory to the print buffer if there are no empty places at the print buffer.

Producer-Consumer Problem

Example (iii) ─ The producer cannot produce chocolates if there are no empty places at the consumer end.

P and V semaphore functions with the producer-consumer problem solution:

Process 3 (Task 3):

while (true) {
    /* Codes before a producer region. sem_c2 is the number of empty
       places created by Process 4. */
    P (&sem_c2); /* Enter the Process 3 producing region codes. The
                    following codes will execute only when sem_c2 is not
                    less than 0. */
    /* Process 3 producing region codes. */
    V (&sem_c1); /* Exit the Process 3 region codes. */


    /* Continue Process 3. */
};

Process 4 (Task 4):

while (true) {
    /* Codes before a consumer region. sem_c1 is the number of filled
       places created by Process 3. */
    P (&sem_c1); /* Enter the Process 4 consuming region codes. The
                    following codes will execute only when sem_c1 is not
                    less than 0. */
    /* Process 4 consuming region codes. */
    V (&sem_c2); /* Exit the Process 4 region codes and continue
                    Process 4. */
};
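The producer-consumer counters above can be traced single-threadedly. This sketch (names and the interleaving are assumptions for illustration) keeps sem_c2 as the count of empty places and sem_c1 as the count of filled places; a failed take stands in for the process blocking.

```c
/* Bounded buffer of PLACES slots: sem_c2 = empty places, sem_c1 = filled. */
enum { PLACES = 3 };
static int sem_c2 = PLACES;  /* producer's tokens: empty places */
static int sem_c1 = 0;       /* consumer's tokens: filled places */

static int produce(void)     /* Process 3: P(&sem_c2) ... V(&sem_c1) */
{
    if (sem_c2 == 0) return 0;  /* no empty place: producer blocks */
    sem_c2--;                   /* take an empty place */
    /* ... producing region: write one item into the buffer ... */
    sem_c1++;                   /* signal one more filled place */
    return 1;
}

static int consume(void)     /* Process 4: P(&sem_c1) ... V(&sem_c2) */
{
    if (sem_c1 == 0) return 0;  /* nothing filled: consumer blocks */
    sem_c1--;                   /* take a filled place */
    /* ... consuming region: read one item from the buffer ... */
    sem_c2++;                   /* signal one more empty place */
    return 1;
}
```

The invariant sem_c1 + sem_c2 == PLACES holds throughout: the producer stalls when the buffer is full, the consumer when it is empty.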

Sharing Data between the Processes ─ some data is common to different processes or tasks. Examples are as follows:

• Time, which is updated continuously by one process, is also used by a display process in the system.
• Port input data, which is received by one process, is further processed and analysed by another process.
• Memory buffer data, which is inserted by one process, is further read (deleted), processed and analysed by another process.

Shared Data Problem

• Assume that at an instant a variable is being operated upon and, during the operations on it, only a part of the operation is completed while another part remains incomplete.
• At that moment, assume that there is an interrupt.

Shared Data Problem Arising on Interrupt

• Assume that there is another function that also shares the same variable. The value of the variable may differ from the one expected if the earlier operation had been completed.


Whenever another process shares the same partly operated data, a shared data problem arises.

Steps for the Elimination of Shared Data Problem

• Use a reentrant function with atomic instructions in that section of a function that needs its complete execution before it can be interrupted. This section is called the critical section.

• Put the shared variable in a circular queue. A function that requires the value of this variable always deletes (takes) it from the queue front, and another function, which inserts (writes) the value of this variable, always does so at the queue back.

• Disable the interrupts (DI) before a critical section starts executing and enable the interrupts (EI) on its completion. DI is a powerful but drastic option: an interrupt, even one of higher priority than the critical function, gets disabled. A software designer does not usually use this drastic option in all the critical sections, except in automobile-like system software.

• Use lock ( ) when a critical section starts executing and use unlock ( ) on its completion.

• Use an IPC (inter-process communication) mechanism, for example a semaphore used as a mutex, for the shared data problem.
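The circular-queue step above can be sketched as a small ring buffer: the writer always inserts at the queue back, the reader always deletes from the front, so neither side ever reads a half-updated value. Sizes and names here are illustrative, not from the source.

```c
/* Single-writer, single-reader circular queue for a shared variable. */
enum { QSIZE = 8 };
static int q[QSIZE];
static unsigned head = 0, tail = 0;  /* front (reader) and back (writer) */

static int q_insert(int value)       /* writer task / ISR side */
{
    if ((tail + 1) % QSIZE == head)
        return 0;                    /* queue full: value dropped or retried */
    q[tail] = value;                 /* complete the write first ... */
    tail = (tail + 1) % QSIZE;       /* ... then publish the new back index */
    return 1;
}

static int q_delete(int *value)      /* reader task side */
{
    if (head == tail)
        return 0;                    /* queue empty: nothing to read */
    *value = q[head];
    head = (head + 1) % QSIZE;
    return 1;
}
```

Because the writer touches only tail and the reader only head, each sees either the old or the fully written value, never a partly operated one.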

Use of Mutex for the Elimination of Shared Data Problem

Use of a mutex semaphore:

• Facilitates mutually exclusive access by two or more processes to the resource (CPU).

• The same variable, sem_m, is shared between the various processes.

• Let process 1 and process 2 share sem_m and its initial value is set = 1

• Process 1 proceeds after sem_m decreases and equals 0 and gets the exclusive access to the CPU.

• Process 1 ends after sem_m increases and equals 1; process 2 now gets exclusive access to the CPU.

• Process 2 proceeds after sem_m decreases and equals 0 and gets exclusive access to CPU.

• Process 2 ends after sem_m increases and equals 1; process 1 now gets the exclusive access to the CPU

• sem_m is like a resource-key and shared data within the processes 1 and 2 is the resource.

• Whichever process first decreases sem_m to 0 at the start gets the access, and prevents the other processes with whom it shares the key from running.


Difficulties in Elimination of Shared Data Problem Using Mutex

• Use of semaphores does not eliminate the shared data problem completely.
• Solutions for deadlock and priority inversion must also be looked into when using semaphores.

Priority Inversion Problem and Deadlock Situations

Priority Inversion ─ Assume:

• Priorities of tasks are in an order such that task I has the highest priority, task J a lower one, and task K the lowest priority.
• Only tasks I and K share the data; J does not share data with K.
• Also let tasks I and K alone share a semaphore sik, and not J.

Few tasks share a semaphore

• Why do only a few tasks share a semaphore? Can't all share a semaphore?
• Answer ─ the worst-case latency becomes too high, and may exceed the deadline, if all tasks are blocked whenever one task takes a semaphore.
• The worst-case latency remains small only when just the tasks that actually share the resource are blocked.

Priority Inversion Situation

• At an instant t0, suppose task K takes sik; it does not block task J and blocks only task I.

• This happens because only tasks I and K share the data, J does not, and I is blocked at instance t0 waiting for some message and for sik.

• Consider the problem that now arises on this selective sharing between K and I. At the next instant t1, let task K become ready first on an interrupt.

• Now assume that at the next instant t2, task I becomes ready on an interrupt or on getting the waiting message.

• At this instant, K is in the critical section.

• Therefore, task I cannot start at this instant, as K is in the critical region.

• Now, at the next instant t3, let some action (event) cause the unblocked task J, of priority higher than K, to run.

• After instant t3, the running task J does not allow the highest-priority task I to run, because K is not running, and therefore K can't release sik, which it shares with I.

• Further, the design of task J may be such that, even when sik is released by task K, it may not let I run. [J runs its codes as if it were in a critical section all the time after executing DI.] The action of J is now as if J had a higher priority than I. This is because K, on entering the critical section and taking the semaphore before the OS let J run, did not share the priority information about I ─ that task I is of higher priority than J.


• The priority information of the higher-priority task I should also have been inherited by K temporarily, when K makes I wait but J does not, and J runs while K has still not finished the critical section codes.

• This did not happen because the given OS did not provide for temporary priority inheritance in such situations.

• The above situation is called a priority inversion problem.

OS provision for temporary priority inheritance in such situations

• Some OSes provide for priority inheritance in these situations, and thus the priority inversion problem does not occur when using them.

• A mutex should be a mutually exclusive Boolean function by which the critical section is protected from interruption in such a way that the problem of priority inversion does not arise.

• Such a mutex is automatically provided in certain RTOSes, so that the priority inversion problem does not arise.

• In certain OSes, a mutex is automatically provided with priority inheritance by the task on taking it, so that the priority inversion problem does not arise; certain OSes provide for selecting priority inheritance as well as priority ceiling options.

DEADLOCK SITUATION

Assume

• Let the priorities of tasks be such that task H is of the highest priority, task I a lower priority, and task J the lowest.

• Two semaphores, SemTok1 and SemTok2.

• Tasks I and H have a shared resource through SemTok1 only.

• Tasks I and J have two shared resources through two semaphores, SemTok1 and SemTok2.

Deadlock Situation

• At the next instant t1, being now of a higher priority, task H interrupts tasks I and J after it takes the semaphore SemTok1, and thus blocks both I and J.

• In between the time interval t0 and t1, SemTok1 was released but SemTok2 was not released during the run of task J. The latter did not matter, as task H does not share SemTok2.

• At an instant t2, H now releases SemTok1 and lets task I take it.

• Even then, task I cannot run, because it is also waiting for task J to release SemTok2.

• Task J is waiting, at the next instant t3, for either H or I to release SemTok1, because it needs it to enter a critical section in it again.

• After instant t3, neither task I nor task J can run.


Deadlock Situation Solution

• A circular dependency is established between I and J.

• On the interrupt by H, if task J, before exiting the running state, had been put at the queue front, so that it would later take SemTok1 first, with task I queued next for the same token, then the deadlock would not have occurred.
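The circular wait above forms only because a task can hold one semaphore while waiting for the other. A common general prevention rule (a standard technique, complementing the queue-ordering fix described in the text) is to take the two keys in one fixed global order, SemTok1 always before SemTok2, and to back off completely when the second key is unavailable. The sketch below uses the simplified flag model from earlier; names follow the text, the helper functions are invented.

```c
/* Simplified flags: 1 = semaphore free, 0 = taken. */
static int SemTok1 = 1, SemTok2 = 1;

static int take(int *s)  { if (*s == 0) return 0; *s = 0; return 1; }
static void give(int *s) { *s = 1; }

/* Every task needing both resources acquires them in the same order. */
static int take_both(void)
{
    if (!take(&SemTok1))
        return 0;            /* wait here, holding nothing */
    if (!take(&SemTok2)) {   /* second key unavailable ... */
        give(&SemTok1);      /* ... so release the first as well */
        return 0;            /* no task ever holds one key and waits */
    }
    return 1;                /* both keys held: enter critical sections */
}
```

Since no task can hold SemTok1 while waiting indefinitely for SemTok2 (or vice versa), the circular dependency between I and J cannot arise.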

SEMAPHORE

• The OS provides the semaphore IPC functions for creating, releasing and taking the semaphore ─ as an event flag, as a mutex (a resource key for locking and unlocking a resource for other processes), and as a counting semaphore.

• The OS provides a function (OSSemPost) for the semaphore as a notice of an event occurrence.

• OSSemPost facilitates inter-task communication for notifying (through a scheduler event control block) a waiting task section [using OSSemPend] into the running state, upon an event at the running task section at an ISR or task.

• A semaphore can be used as a mutex (mutually exclusive key) to permit access to a set of codes (in a thread or process). A process using the mutex locks onto a critical section in a task.

• A semaphore can be used as a counting semaphore, which facilitates multiple inter-task communications.

• A semaphore can be used in producer-consumer type problems (for example, with a bounded buffer, to which more bytes cannot be sent than the buffer capacity).

� Semaphores can be a P and V semaphore-pair in the POSIX standard semaphore IPC.

Semaphore Functions

1. OSSemCreate ─ to create a semaphore and to initialize it.

2. OSSemPost ─ to send the semaphore to an event control block; its value increments on event occurrence. (Used in ISRs as well as in tasks.)

3. OSSemPend ─ to wait for the semaphore from an event; its value decrements on taking note of that event occurrence. (Used in tasks.) Arguments are the semaphore variable name, a timeout period, and an error handler.

4. OSSemAccept ─ to read and return the present semaphore value; if it shows the occurrence of an event (by a non-zero value), it takes note of that and decrements the value. [No wait. Used in ISRs and tasks.]

5. OSSemQuery ─ to query the semaphore for an event occurrence or non-occurrence by reading its value; it returns the present semaphore value and a pointer to the data structure OSSemData. The semaphore value does not decrease. (Used in tasks.)

6. OSSemData ─ the data structure pointing to the present value and a table of the tasks waiting for the semaphore. (Used in tasks.)

Mutex, Lock and Spin Lock functions

Mutex Semaphore

• A process using a mutex blocks on a critical section in a task for taking the mutex and unblocks on releasing the mutex.

• The mutex wait for the lock can be given a specified timeout.


Lock function

• A process using lock ( ), on entering a critical section, locks the resources to that section till unlock ( ) is used at the end of the section.

• A wait loop is created on executing lock ( ), and the wait is over when the other critical section executes unlock ( ).

• lock ( ) and unlock ( ) involve less overhead (a smaller number of operations) than OSSemPend ( ) and OSSemPost ( ).

Lock function disadvantage

• A high-priority process requesting a resource should not lock out the other processes by blocking an already running task in the following situation: suppose a task is running and only a little time is left for its completion.

Spin Lock function

Spinlock ( )

• Suppose a task is running and a little time is left for its completion.


• The running time left for it is less than the time that would be taken in blocking it and context switching.

• There is an innovative concept of spin locking in certain schedulers; a spin lock is a powerful tool in the situation described above.

• The scheduler, locking the processor for a task, waits before causing the blocking of the running task: first for a time interval t, then for (t − δt), then for (t − 2δt), and so on.

• When this time interval spins down to 0, the task that requested the lock of the processor unlocks the running task's hold and blocks it from further running. The request is now granted.

• A spin lock thus does not let a running task be blocked instantly; it first successively tries decreasing trial periods before finally blocking the task.
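The spin-down rule above can be traced numerically. This is an illustrative sketch (the function name and return value are invented): it counts how many shrinking trial waits of t, t − δt, t − 2δt, ... are made before the interval reaches 0 and the lock is finally granted.

```c
/* Count the trial waits a spin-lock requester makes before the grant.
   t is the first trial interval, dt the amount each trial shrinks by. */
static int spin_lock_trials(int t, int dt)
{
    int trials = 0;
    while (t > 0) {
        /* ... the running task keeps running for t more time units ... */
        trials++;      /* one trial period elapsed without blocking */
        t -= dt;       /* the next trial period is shorter */
    }
    /* interval has spun down to 0: block the running task, grant the lock */
    return trials;
}
```

For example, with t = 10 and δt = 2 the requester waits through intervals 10, 8, 6, 4 and 2 before blocking the running task.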

TASK SCHEDULING COOPERATIVE MODELS

Common scheduling models

• Cooperative Scheduling of ready tasks in a circular queue. It closely relates to function queue scheduling.

• Cooperative Scheduling with Precedence Constraints

• Cyclic Scheduling of periodic tasks and Round Robin Time Slicing Scheduling of equal priority tasks

• Preemptive Scheduling

• Scheduling using 'Earliest Deadline First' (EDF) precedence.

Rate Monotonic Scheduling using ‘higher rate of events occurrence First’ precedence Fixed Times Scheduling Scheduling of Periodic, sporadic and aperiodic Tasks Advanced scheduling algorithms using the probabilistic Timed Petri nets (Stochastic) or Multi Thread Graph for the multiprocessors and complex distributed systems.

Cooperative Scheduling in the Cyclic Order

• Cooperative means that each task cooperates to let a running task finish.
• None of the tasks blocks in between anywhere during the ready-to-finish states.
• The service is in the cyclic order.


Worst-case latency

• Same for every task: Tworst = {(sti + eti)1 + (sti + eti)2 + ... + (sti + eti)N−1 + (sti + eti)N} + tISR.

• tISR is the sum of all execution times for the ISRs.

• For the i-th task, the switching time from one task to another is sti and the task execution time is eti; i = 1, 2, …, N−1, N, where the number of tasks = N.
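The Tworst formula above is just the sum of every task's switching and execution time plus the ISR total, which can be checked numerically (the function name and the sample values below are illustrative):

```c
/* Tworst for cyclic cooperative scheduling:
   sum over all n tasks of (sti + eti), plus tISR. */
static double tworst_cyclic(const double st[], const double et[],
                            int n, double t_isr)
{
    double sum = t_isr;           /* start with the ISR execution total */
    for (int i = 0; i < n; i++)
        sum += st[i] + et[i];     /* add each task's switch + run time */
    return sum;
}
```

For instance, three tasks with switching times 1, 1, 2 and execution times 5, 3, 4 plus tISR = 2 give a worst-case latency of 18 time units, the same for every task since each may have to wait for one full cycle.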


Cooperative Scheduling in the order in which a task is initiated on interrupt

• None of the tasks blocks in between anywhere during the ready-to-finish states.
• The service is in the order in which a task is initiated on interrupt.


Worst-case latency

• Same for every task in the ready list: Tworst = {(dti + sti + eti)1 + (dti + sti + eti)2 + ... + (dti + sti + eti)n−1 + (dti + sti + eti)n} + tISR.

• tISR is the sum of all execution times for the ISRs.

• For the i-th task, let the event detection time (when an event is brought into the list) be dti, the switching time from one task to another be sti, and the task execution time be eti; i = 1, 2, …, n−1, n.

Cooperative Scheduling of Ready Tasks Using an Ordered List as per Precedence Constraints
• The scheduler, using a priority parameter taskPriority, orders the list of tasks according to the precedence of the interrupt sources and tasks.
• The scheduler executes only the first task in the ordered list, and the latency equals the period taken by that first task. After the first task executes, it is deleted from the list and the next task becomes the first.
• The insertions and deletions for forming the ordered list are made only at the beginning of the cycle for each list.
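A sketch of such an ordered ready list (the Task struct and the convention that a lower taskPriority value means higher precedence are illustrative assumptions):

```c
#include <string.h>

#define MAX_TASKS 8

/* Illustrative ready-list entry: a task id plus its taskPriority parameter. */
typedef struct { int id; int priority; } Task;
typedef struct { Task t[MAX_TASKS]; int n; } ReadyList;

/* Insertion keeps the list ordered by precedence; in the scheme described
 * above it is done only at the beginning of a cycle. */
static void insert_ordered(ReadyList *rl, Task task) {
    int i = rl->n++;
    while (i > 0 && rl->t[i - 1].priority > task.priority) {
        rl->t[i] = rl->t[i - 1];    /* shift lower-precedence entries down */
        i--;
    }
    rl->t[i] = task;
}

/* The scheduler always runs the first task in the ordered list and then
 * deletes it, so the next task becomes the first. Returns the id run. */
static int run_and_delete_first(ReadyList *rl) {
    int id = rl->t[0].id;
    rl->n--;
    memmove(&rl->t[0], &rl->t[1], (size_t)rl->n * sizeof(Task));
    return id;
}
```

Inserting tasks with priorities 3, 1 and 2 and repeatedly running the first entry services them in precedence order 1, 2, 3.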


Worst-case latency
• Not the same for every task. It varies from
{(dt + st + et)p(m)} + tISR
to
{(dt + st + et)p1 + (dt + st + et)p2 + ... + (dt + st + et)p(m-1) + (dt + st + et)p(m)} + tISR
• tISR is the sum of all execution times for the ISRs.
• For the i-th task, let the event detection time (when an event is brought into the list) be dti, the switching time from one task to another be sti and the task execution time be eti; i = 1, 2, ..., m-1, m, where m is the number of ISRs and tasks in the list.


Example: ACVM (Automatic Chocolate Vending Machine). First the coins inserted by the user are read, then the chocolate is delivered, and then the display task shows the 'thank you, visit again' message. Each task cooperates with the others to finish. The precedence of tasks in the ready list: reading coins is highest, then chocolate delivery, then display.

ROUND ROBIN TIME SLICING OF TASKS OF EQUAL PRIORITIES

Common scheduling models
• Cooperative scheduling of ready tasks in a circular queue; it closely relates to function queue scheduling.
• Cooperative scheduling with precedence constraints.
• Cyclic scheduling of periodic tasks and round robin time slicing scheduling of equal-priority tasks.
• Preemptive scheduling.
• Scheduling using 'Earliest Deadline First' (EDF) precedence.
• Rate monotonic scheduling using 'higher rate of events occurrence first' precedence.
• Fixed times scheduling.
• Scheduling of periodic, sporadic and aperiodic tasks.
• Advanced scheduling algorithms using probabilistic (stochastic) timed Petri nets or multi-thread graphs for multiprocessors and complex distributed systems.

Round Robin Time Slice Scheduling of Equal Priority Tasks
• Round robin means that each ready task runs in turn, in a cyclic queue, for a limited time slice.


• Round robin is a widely used model in traditional OSes.
• Round robin is a hybrid of the clock-driven model (for example, the cyclic model) and the event-driven model (for example, preemptive scheduling).
• A real-time system responds to the event within a bounded and explicit time limit.


Case: Tcycle = N × Tslice
• Same for every task: Tcycle = {N (Tslice)} + tISR
• tISR is the sum of all execution times for the ISRs.
• For the i-th task, let the switching time from one task to another be sti and the task execution time be eti; the number of tasks = N.

Worst-case latency
• Same for every task in the ready list: Tworst = {N (Tslice)} + tISR
• tISR is the sum of all execution times for the ISRs; i = 1, 2, ..., N-1, N.

VoIP Tasks Example
• Assume a VoIP (Voice over IP) router.
• It routes packets to N destinations from N sources; it has N calls to route.
• Each of the N tasks is allotted a time slice and is cyclically executed to route a packet from a source to its destination.

Case 1: Each task is executed once and finishes in one cycle. When a task finishes before the maximum time it can take, there is a waiting period between two cycles. The worst-case latency for any task is then N × tslice. A task may need periodic execution; the period required for repeat execution of a task is an integral multiple of tslice.

Case 2: Certain tasks are executed more than once and do not finish in one cycle. Two alternative strategies:
• Decomposition of a task that takes an abnormally long time into two, four or more tasks. Then one set of tasks (say, the odd-numbered ones) runs in one time slice, t'slice, and the other set (the even-numbered ones) in another time slice, t''slice.
• Decomposition of the long task into a number of sequential states, or a number of node-places and transitions as in a finite state machine (FSM). Then one of its states or transitions runs in the first cycle, the next state in the second cycle, and so on. This reduces the response times of the remaining tasks, which are executed after a state change.
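The FSM decomposition in Case 2 can be sketched as follows (the state names are illustrative, not from the notes):

```c
/* Case 2 sketch: a long task decomposed into sequential FSM states so that
 * only one state runs per round-robin cycle. */
enum long_task_state { ST_READ, ST_PROCESS, ST_OUTPUT, ST_DONE };

static enum long_task_state lt_state = ST_READ;

/* Called once per cycle by the scheduler; runs a single state and advances,
 * so the other tasks' response times stay short. Returns 1 when the whole
 * decomposed task has finished. */
static int long_task_step(void) {
    switch (lt_state) {
    case ST_READ:    /* short: read the input */      lt_state = ST_PROCESS; break;
    case ST_PROCESS: /* the long computation piece */ lt_state = ST_OUTPUT;  break;
    case ST_OUTPUT:  /* short: write the result */    lt_state = ST_DONE;    break;
    case ST_DONE:    break;
    }
    return lt_state == ST_DONE;
}
```

Each call consumes only one time slice; the task completes after three cycles instead of monopolizing one long slice.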


Preemptive Scheduling Model


Difficulties in cooperative and cyclic scheduling of tasks
• Cooperative schedulers schedule such that each ready task cooperates to let the running one finish. However, a difficulty with cooperative scheduling is that the long execution time of a low-priority task makes a high-priority task wait at least until that task finishes.
• A difficulty when the cooperative scheduler is cyclic but without a predefined tslice: assume that an interrupt for service from the first task occurs just at the beginning of the second task. The first task's service then waits till all other listed or queued tasks finish. The worst-case latency equals the sum of the execution times of all tasks.

Preemptive scheduling of tasks
• The OS schedules such that a higher-priority task, when ready, preempts a lower-priority one by blocking it.
• This solves the problem of large worst-case latency for high-priority tasks.


RTOS Preemptive Scheduling
• Processes execute such that the scheduler provides for preemption of a lower-priority process by a higher-priority process.
• Assume priority of task_1 > task_2 > task_3 > task_4 ... > task_N.


• Each task has an infinite loop from start (idle state) up to finish.
• Task 1's last instruction points to the next pointed address, *next. In the case of the infinite loop, *next points back to the same task 1 start.

Worst-case latency
• Not the same for every task: the highest-priority task's latency is smallest; the lowest-priority task's latency is highest.
• Different for different tasks in the ready list:
Tworst = {(dt + st + et)1 + (dt + st + et)2 + ... + (dt + st + et)p-1 + (dt + st + et)p} + tISR
• tISR is the sum of all execution times for the ISRs.
• For the i-th task, let the event detection time (when an event is brought into the list) be dti, the switching time from one task to another be sti and the task execution time be eti,


i = 1, 2, ..., p-1, p, where the number of higher-priority tasks is p-1 for the p-th task.
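This p-term sum can be computed directly; a helper sketch (the sample times in the usage below are hypothetical):

```c
/* Worst-case latency of the p-th priority task under preemptive scheduling:
 * the p-1 higher-priority tasks and the task itself each contribute
 * (dt_i + st_i + et_i), and the ISR execution times add tISR. */
static double tworst_preemptive(const double dt[], const double st[],
                                const double et[], int p, double t_isr) {
    double t = t_isr;
    for (int i = 0; i < p; i++)
        t += dt[i] + st[i] + et[i];
    return t;
}
```

With p = 2, dti = 1, sti = 2, eti = 7 for both tasks and tISR = 5, the second task's worst-case latency is 5 + 10 + 10 = 25 time units, while the highest-priority task waits only 5 + 10 = 15.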

ISRs Handling
• At the end of each instruction during execution of an ISR, hardware polls to determine whether an ISR of higher priority than the present one needs service; if yes, the higher-priority ISR is executed.

RTOS method for preemptive scheduling of tasks

An infinite loop in each task
• Each task is designed like an independent program, in an infinite loop between the task's ready place and its finish place.
• The task does not return to the scheduler as a function does.
• Within the loop, the actions and transitions are according to the events, flags or tokens.

When priority of task_1 > task_2 > task_3:
(1) At RTOS start, the scheduler sends a message (Task_Switch_Flag) to task 1 to go to the unblocked state and run; thus the highest-priority task 1 runs at start.
(2) When task 1 blocks, due to the need of some input, a wait for an IPC, or a delay for a certain period, a message (Task_Switch_Flag) is sent to the RTOS, the task 1 context is saved, and the RTOS now sends a message (Task_Switch_Flag) to task 2 to go to the unblocked state and run.


(3) Task 2 now runs after its context is retrieved. When it blocks, due to the need of some input, a wait for an IPC, or a delay for a certain period, a message (Task_Switch_Flag) is sent to the RTOS, the task 2 context is saved, and an RTOS message (Task_Switch_Flag) puts task 3 in the unblocked state. Task 3 now runs after its context is retrieved.
(4) If, during the running of task 3, either task 2 or task 1 becomes ready with the required input or IPC, or its delay period is over, task 3 is preempted: a message (Task_Switch_Flag) is sent to the RTOS, the task 3 context is saved, and task 1 runs, or, if task 1 is not ready, task 2 runs after its context is retrieved.
(5) A message (Task_Switch_Flag) is sent to the RTOS after task 2 blocks due to a wait for an IPC or the need of some input; the task 2 context is saved, and if task 1 is ready, task 1 runs after its context is retrieved.
(6) If task 1 is not ready, task 3 runs after its context is retrieved.
(7) Task 1, when ready to run, preempts tasks 2 and 3; task 2, when ready to run, preempts task 3.
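The infinite-loop task structure used in the steps above can be simulated in plain C; here the block-on-event is modelled by a bounded event counter so the sketch can terminate, and all names are illustrative:

```c
/* A real RTOS task never returns; it loops between blocking on an event
 * (input, IPC or delay) and acting on it. wait_for_event() stands in for
 * the RTOS pend call, with a bounded event supply so the simulation stops
 * where a real task would block. */
static int events_pending = 3;
static int events_handled = 0;

static int wait_for_event(void) {
    if (events_pending == 0)
        return 0;          /* a real task would block here; context is saved */
    events_pending--;
    return 1;
}

static void task_body(void) {
    for (;;) {             /* infinite loop from ready place to finish place */
        if (!wait_for_event())
            break;         /* models the blocked state, not a function return */
        events_handled++;  /* actions according to the event/flag/token */
    }
}
```

Each pass of the loop handles one event; when no event is pending, the task would block and the scheduler would switch to the next ready task.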

Specifying a timeout for waiting for the token or event
• Specify a timeout while waiting for a token or event.
• An advantage of using timeout intervals while designing task codes is that worst-case latency estimation becomes possible; the latency of each task is deterministic.
• Another advantage of timeouts is error reporting and handling.
• Timeouts also provide a way to let the RTOS run even the preempted lowest-priority task in needed instances and necessary cases.

Model for Preemptive Scheduling

A Petri net concept-based model helps in designing the codes for a task. The model shows places as circles and transitions as rectangles.


Petri net model in the figure
(i) Each task is in the idle state (at idleTaskPlace) to start with, and the token to the RTOS is taskSwitchFlag = reset.
(ii) Consider the task_J_Idle place for task_J, which currently has the highest priority among the ready tasks. When the RTOS creates task_J, the place task_J_Idle undergoes a transition to the ready state (to readyTaskPlace), the task_J_Ready place. The RTOS initiates the idle-to-ready transition by executing a function, task_create( ); for the present case, task_J_create( ). The transition from the idle state of the task fires as follows:
i. The RTOS sends two tokens, the RTOS_CREATE event and taskJSwitchFlag.


ii. The output token from the transition is taskSwitchFlag = true.
(iii) After task J finishes, the RTOS sends an RTOS_DELETE event (a token) to the task; it returns to the task_J_Idle place and its corresponding taskJSwitchFlag resets.
(iv) At the task_J_Ready place, the scheduler takes the priority parameter into account. If the current task happens to be of the highest priority, the scheduler sets two tokens, taskJSwitchFlag = true and highestPriorityEvent = true, for the transition to the running place, task_J_Running. The scheduler also resets and sends the task switch flag tokens for all other tasks of lesser priority, because the system has only one CPU to process at an instant.
(v) From the task_J_Running place, the transition to the task_J_Ready place fires when the task finish flag sets.
(vi) At the task_J_Running place, the codes of the switched task J execute. [Refer to the top-right-most transition in the figure.]
(vii) At the runningTaskPlace, the transition for preempting fires when the RTOS sends a token, suspendEvent. Another enabling token, time_out_event, if present, will also fire the transition. An enabling token for both situations is the semaphore release flag, which must be set; the semaphore release flag sets on finishing the codes of task J's critical sections. On firing, the next place is task_J_Blocked. Blocking arises in two situations: one is preemption, which happens when the suspendEvent occurs on a call at the runningTaskPlace asking the RTOS to suspend the running; the other is a timeout of an SWT (software timer) that associates with the running task place.
(viii) On a resumeEvent (a token from the RTOS), the transition to the task_J_Running place occurs.
(ix) At the task_J_Running place, there is another transition that fires so that task J is back at the task_J_Running place when the RTOS sends a token, take_Semaphore_Event, asking task J to take the semaphore.
(x) There can be none, one or several sections taking and releasing a semaphore or message. During the execution of such a section, the RTOS resets the semaphore release flag and sets the take-semaphore event token.


1. Disabling and enabling interrupts

Critical Section Service by disabling and enabling interrupts
• A critical section is a section in a system call (OS function) where there must be no preemption by tasks.
• A disable-interrupts function can be used at the beginning of the critical section and an enable-interrupts function executed at exit from the critical section, to prevent preemption by tasks as well as by ISRs.

Preemption points in µC/OS-II
• The RTOS µC/OS-II provides a function OS_ENTER_CRITICAL( ) to stop preemption by any task or ISR, as it disables interrupts. The RTOS provides a function OS_EXIT_CRITICAL( ) to again permit preemption by a high-priority task or ISR, as it enables interrupts.

Process critical section

static void process_p (void *taskPointer) {
    ...
    while (1) {
        ...; ...; ...;
        OS_ENTER_CRITICAL( ); /* stops preemption or interrupts by any task or ISR; the critical section starts after this executes */
        ...; ...; ...;
        OS_EXIT_CRITICAL( );  /* re-enables preemption or interrupts by any task or ISR; the critical section ends */
        ...; ...; ...;
    }
}


2. Disabling and enabling preemption by other processes using lock( ) and unlock( )

Critical Section Service by lock( ) and unlock( ) in a preemptive scheduler
• A lock function can be used at the beginning of the critical section and an unlock function executed at exit from the critical section.
• The RTOS µC/OS-II provides a function OSSchedLock( ) to lock scheduling, and hence lock preemption by any other task waiting to proceed further.
• OSSchedUnlock( ) unlocks scheduling, and hence unlocks preemption by other tasks that were waiting to proceed.

Process p critical section

static void process_p (void *taskPointer) {
    ...
    while (1) {
        ...; ...; ...;
        OSSchedLock( );   /* after this instruction, preemption is disabled and the critical section starts */
        ...; ...; ...;
        OSSchedUnlock( ); /* after this instruction, preemption is enabled and the critical section ends */
        ...; ...; ...;
    }
}


Process q critical section

static void process_q (void *taskPointer) {
    ...
    while (1) {
        ...; ...; ...;
        OSSchedLock( );   /* after this instruction, preemption is disabled and the critical section starts */
        ...; ...; ...;
        OSSchedUnlock( ); /* after this instruction, preemption is enabled and the critical section ends */
        ...; ...; ...;
    }
}

Preemption points in Windows CE
• RTOS kernels, for example Windows CE, provide for preemption points. These are OS function codes in between the critical sections.

3. Taking and releasing a semaphore (mutex) by processes using semTake( ) and semGive( )

Critical Section Service by semTake( ) and semGive( ) in a preemptive scheduler
• A mutex semaphore can be used: take the mutex at the start of critical section 1 and release the mutex at the critical section's end.
• The same mutex is then taken and released by another critical section 2 when access to that section is permitted after taking the semaphore at that instance.


Process 1 critical section

static void process_1 (void *taskPointer) {
    ...
    while (1) {
        ...; ...; ...;
        OSSemTake (Sem_m); /* after this instruction executes, the critical section of process 1 starts and the shared variable can be operated on */
        ...; ...; ...;
        OSSemPost (Sem_m); /* after this instruction executes, the next process's critical section can operate on the shared variable */
        ...; ...; ...;
    }
}

Process 2 critical section

static void process_2 (void *taskPointer) {
    ...
    while (1) {
        ...; ...; ...;
        OSSemTake (Sem_m); /* after this instruction executes, Sem_m = 0 (taken state) */
        ...; ...; ...;
        OSSemPost (Sem_m); /* release the mutex to let the process 1 section proceed further if it is of higher priority */
        ...; ...; ...;
    }
}


Earliest Deadline First (EDF) Precedence

• When a task becomes ready, it is considered at a scheduling point. The scheduler does not assign any priority; it computes the deadline left at each scheduling point.
• A scheduling point is an instance at which the scheduler blocks the running task, re-computes the deadlines, runs the EDF algorithm and finds the task that is to be run next.
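The EDF decision at a scheduling point can be sketched as a scan over the ready tasks (the array layout is an illustrative assumption):

```c
/* EDF sketch: at a scheduling point, re-compute the time left to each ready
 * task's deadline and pick the task whose deadline is earliest. No fixed
 * priorities are assigned. Returns the index of the task to run next. */
static int edf_pick(const double deadline[], int n, double now) {
    int best = 0;
    for (int i = 1; i < n; i++)
        if (deadline[i] - now < deadline[best] - now)  /* less time left */
            best = i;
    return best;
}
```

With deadlines {50, 20, 35} and now = 10, task 1 (only 10 units left) is chosen to run first.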


• An EDF algorithm can also maintain a priority queue, based on the computation done when a new task is inserted.

Rate Monotonic Scheduler
• A rate monotonic scheduler computes the priorities, p, from the rates of occurrence of the tasks.
• The i-th task priority pi is proportional to (1/ti), where ti is the period of occurrence of the task event.
• RMA gives an advantage over EDF because most RTOSes have provisions for priority assignment.
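The rate-monotonic rule pi ∝ 1/ti can be sketched as a ranking by period (the convention that priority 0 is highest is an assumption for the example, not from the notes):

```c
/* Rate-monotonic sketch: the task with the shortest period (highest rate of
 * occurrence) gets the highest priority. Priority 0 is taken as highest. */
static void rma_assign(const double period[], int prio[], int n) {
    for (int i = 0; i < n; i++) {
        int rank = 0;
        for (int j = 0; j < n; j++)
            if (period[j] < period[i] ||
                (period[j] == period[i] && j < i))
                rank++;        /* tasks with shorter periods outrank task i */
        prio[i] = rank;
    }
}
```

For periods {100, 10, 50}, the assigned priorities are {2, 0, 1}: the 10-unit-period task ranks highest.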

Higher-priority tasks always get executed.

Precedence Assignment in the Scheduling Algorithms
• The best strategy is one based on EDF (Earliest Deadline First) precedence.
• Precedence is made the highest for the task that corresponds to an interrupt source occurring at the earliest succeeding time and whose deadline will finish earliest.
• We assign precedence by an appropriate strategy in the case of variable CPU loads for the different tasks and variable EDFs.

Dynamic Precedence Assignment
• Firstly, there is a deterministic or static assignment of precedence; that is, rate monotonic scheduling (RMS) first.
• Later on, the scheduler dynamically assigns and fixes the timeout delays afresh, and assigns precedence as per EDF.
• The need for dynamic assignment arises due to sporadic tasks and the distributed or multiprocessor indeterminate environment.

Fixed (Static) Real-Time Scheduling of the Tasks
A scheduler is said to use a fixed-time scheduling method when the schedule is static and deterministic.

Methods for fixed (static) real-time scheduling
(i) Simulated annealing method: different schedules are fixed and the performance is simulated. The schedules for the tasks are then gradually adjusted by changing the interrupt timer settings (using a corresponding OS function) until the simulation shows that no task misses its deadline.
(ii) Heuristic method: here, reasoning or past experience helps define and fix the schedules.
(iii) Dynamic programming model: a specific running program first determines the schedules for each task, and then the timer interrupts load the timer settings from the outputs of that program.


Three types of tasks for finding performance
The scheduler must take aperiodic, periodic and sporadic tasks into account separately.
(i) An aperiodic task needs to run only once.
(ii) A periodic task needs to run after fixed periods, and it must execute before its next preemption is needed.
(iii) A sporadic task needs to be checked for running after a minimum period between its occurrences.

Predictable response to events and minimum interrupt latency as performance measures
• An RTOS should quickly and predictably respond to an event.
• It should have minimum interrupt latency and fast context-switching latency.

Three models for performance measures
(i) Ratio of the sum of the interrupt and task latencies to the sum of the execution times.
(ii) CPU load: for how much time the CPU is not idle.
(iii) Worst-case execution time with respect to mean execution time.

Interrupt latencies as performance metric
• Interrupt and task execution latencies with respect to the sum of the execution times must be very small.
• There must be fast context switching.

CPU load as performance metric
• Each task gives the CPU a load that equals the task execution time divided by the task period.
• CPU load (system load) estimation in the case of multitasking: suppose there are m tasks. For multiple tasks, the sum of the CPU loads of all the tasks and ISRs should be less than 1.

• A CPU load equal to 0.1 (10%) means the CPU is under-utilized and spends 90% of its time in a waiting mode.
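The load sum can be sketched directly from the definition above (the sample times in the usage note are hypothetical):

```c
/* CPU load sketch: each of the m tasks contributes execution time / period;
 * the sum over all tasks (and ISRs) should stay below 1 for feasibility. */
static double cpu_load(const double et[], const double period[], int m) {
    double load = 0.0;
    for (int i = 0; i < m; i++)
        load += et[i] / period[i];
    return load;
}
```

Two tasks with execution times 1 and 2 time units and periods 10 and 20 give a load of 0.1 + 0.1 = 0.2, i.e. a 20% loaded (under-utilized) CPU.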

• Since the execution times and the task periods can vary, the CPU loads can also vary.

Sporadic Task Model Performance Metric

• Ttotal = total length of the periods over which sporadic tasks occur
• e = total task execution time
• Tav = mean period between the sporadic occurrences
• Tmin = minimum period between the sporadic occurrences

• The worst-case execution-time performance metric p is calculated as follows for the worst case of a task in this model:
p = pworst = (e × Ttotal / Tav) / (e × Ttotal / Tmin)
• This is because the average rate of occurrence of the sporadic task = Ttotal / Tav and the maximum rate of a sporadic task burst = Ttotal / Tmin.
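The metric above can be evaluated term by term; note that e and Ttotal cancel, so pworst reduces to Tmin / Tav (the numeric values in the usage note are hypothetical):

```c
/* Worst-case execution-time metric for the sporadic task model:
 * pworst = (e * Ttotal / Tav) / (e * Ttotal / Tmin),
 * which algebraically reduces to Tmin / Tav. */
static double p_worst(double e, double t_total, double t_av, double t_min) {
    double avg_demand   = e * t_total / t_av;   /* average-rate execution demand */
    double burst_demand = e * t_total / t_min;  /* maximum-burst execution demand */
    return avg_demand / burst_demand;
}
```

With Tav = 100 and Tmin = 20 (for any e and Ttotal), pworst = 20/100 = 0.2.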