RTOS
Advantage/Disadvantage of Using RTOS

Advantage

– Splits the application into multiple tasks, which simplifies the design and makes it easier to understand, extend, and maintain

– Timing latency is guaranteed (bounded)

– Higher system reliability

Disadvantage

– More RAM/ROM usage

– 2~5% CPU overhead

– Additional cost if a commercial RTOS is used

REAL-TIME KERNEL

Real-time operating systems must provide three specific functions with respect to

tasks: scheduling, dispatching, and intercommunication and synchronization. The kernel

of the operating system is the smallest portion that provides for these functions. A

scheduler determines which task will run next in a multitasking system, while a

dispatcher performs the necessary bookkeeping to start that task.

Intertask communication and synchronization assures that the tasks cooperate.

Various layers of operating system functionality and an associated taxonomy are given in

Figure.

A nanokernel provides simple thread (lightweight process) management. It

essentially provides only one of the three services provided by a kernel, whereas a

microkernel in addition provides for task scheduling. A kernel also provides for intertask

synchronization and communication via semaphores, mailboxes, and other methods. A

real-time executive is a kernel that includes privatized memory blocks, I/O services, and

other complex features. Most commercial real-time kernels are executives. Finally, an

operating system is an executive that provides for a generalized user interface, security,

and a file-management system.


Regardless of the operating system architecture used, the objective is to satisfy

real-time behavioral requirements and provide a seamless multitasking environment that

is flexible and robust.

Real-time multitasking can be achieved without interrupts and even without an

operating system per se. When feasible, these approaches are preferred because resultant

systems are easier to analyze.

---------------------------------------------------------------------------------------------------------

Polled Loop systems

Polled loops are used for fast response to single devices. In a polled-loop system,

a single, repetitive instruction is used to test a flag that indicates whether or not

some event has occurred. If the event has not occurred, then the polling continues.

For example, suppose a software system is needed to handle packets of data that

arrive at a rate of no more than 1 per second. A flag named packet_here is set by the

network, which writes the data into the CPU's memory via direct memory access (DMA).

The data are available when packet_here = 1. A C code fragment for such a polled loop is:
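(A minimal sketch; process_data() is a hypothetical routine that consumes the packet.)

for (;;) {                      /* poll forever                             */
    if (packet_here) {          /* has the DMA controller set the flag?     */
        process_data();         /* consume the packet (hypothetical helper) */
        packet_here = 0;        /* reset the flag and resume polling        */
    }
}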

Polled-loop schemes work well when a single processor is dedicated to handling

the I/O for some fast device and when overlapping of events is not allowed or minimized.

Polled loops are ordinarily implemented as a background task in an interrupt-driven

system, or as a task in a cyclic executive. In the latter case, the polled loop polls each

cycle for a finite number of times to allow other tasks to run. Other tasks handle the

nonevent-driven processing.

Synchronized Polled Loop

A variation on the polled loop uses a fixed clock interrupt to pause between the

time when the signaling event is triggered and the time when it is reset. Such a system is used to treat

events that exhibit switch bounce. Switch bounce is a phenomenon that occurs because it

is impossible to build a switch, whether mechanical or electrical, that can change state

instantaneously.

A typical response for such a switch is given in Figure 3.2. Events triggered by

switches, levers, and buttons all exhibit this phenomenon. If, however, a sufficient delay

occurs between the initial triggering of the event and the reset, the system will avoid

interpreting the settling oscillations as events.

These are, of course, spurious events that would surely overwhelm any polled

loop service. For instance, suppose a polled-loop system is used to handle an event that

occurs randomly, but no more than once per second. The event is known to exhibit a

switch-bounce effect that disappears after 20 milliseconds. A 10-millisecond fixed-rate

interrupt is available for synchronization. The event is signaled by an external device that

sets a memory location via DMA.


The C code looks like the following:
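(A minimal sketch consistent with the description, reusing the packet_here flag from the earlier example; process_data() is again a hypothetical handler.)

for (;;) {                      /* poll forever                              */
    if (packet_here) {          /* event signaled?                           */
        pause(20);              /* wait out the 20-millisecond switch bounce */
        process_data();         /* handle the event (hypothetical helper)    */
        packet_here = 0;        /* reset the flag                            */
    }
}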

where pause(20) is a system timer call that provides a delay in increments of 1

millisecond. Since there is overhead in the system call and return, the wait time will

always be greater than the needed 20 milliseconds, which avoids the interpretation of

spurious events. Assuming the pause system call is available, polled-loop systems are

simple to write and debug, and the response time is easy to determine.

Polled loops are excellent for handling high-speed data channels, especially when

the events occur at widely dispersed intervals and the processor is dedicated to handling

the data channel. Polled-loop systems most often fail, however, because bursts are not

taken into account. Furthermore, polled loops by themselves are generally not sufficient

to handle complex systems. Finally, polled loops inherently waste CPU time, especially if

the event being polled occurs infrequently.

RTOS porting to Target

Porting of VxWorks onto a target system

To create the VxWorks image, we take a simple client/server application that sends and receives data between the host and the target. This project illustrates how operations such as intercommunication between processes and systems take place in a remote terminal unit (RTU).

The root task waits for user input. If the input is valid, the selected task is performed; if not, an acknowledgment is sent back to the user. After the simulation, we develop the VxWorks-specific application on the host system and then port it to the target system using the boot disk that was created.

Fig. shows the overall data-flow diagram from the host PC to the target device.


Host DFD Module:

Initially, the Process module acquires data from the User module and processes it. If there is any error in the data, the Process module sends an acknowledgment signal to the User module requesting new data. If the data is processed successfully, it is compiled and built to make the application machine independent, and the resulting image is passed to the Target module.

Fig. First Level DFD of Host Development

Target DFD Module:

In the Target module, the host PC passes the VxWorks image of the application to the Process module and later to the Booting module for boot-disk creation. The Debugger module acquires error data from the Process module and returns error-free data for processing.

Fig. First Level DFD of Target PC(VxWorks)

The OSEK/VDX Standard: Operating System and Communication

The OSEK/VDX standard actually comprises three substandards: an operating system standard (OS), a communication standard (COM), and a network management

standard (NM). In addition, an OSEK/VDX implementation language (OIL) has been

defined.


Background

The OSEK/VDX standard is a combination of standards that were originally

developed by two separate consortia and later merged. OSEK, which draws its name

from a German acronym that translates approximately to "Open systems and the

Corresponding interfaces for Automotive electronics," was founded in 1993 as a joint

development effort of the German companies BMW, Bosch, Daimler-Benz (now DaimlerChrysler), Opel, Siemens, and Volkswagen, together with the University of Karlsruhe, Germany.

VDX, which is an acronym for Vehicle Distributed eXecutive, was originally

defined as part of a joint effort by the French companies PSA and Renault. The VDX

group merged with the OSEK group in 1994. Today,many other companies from

different sectors of embedded systems development have joined the OSEK/VDX effort.

The list of member companies includes such key players as Hewlett-Packard, Motorola,

NEC, and Texas Instruments. For simplicity, I'll refer to the joint standard as OSEK for

the rest of this article.

The increasing cost of software development motivated the creation of the

standard. As the number of microcontrollers in automobiles and other complex systems

increases rapidly, the need for software developers is increasing faster than colleges can

turn out qualified graduates. The original members of the committee recognized that

there were high recurring costs attributed to non-value added software, including the

operating system/kernel, the network management, and the I/O processing. The goal was

to define a standard architecture, and a standard API, which could be used by any

automotive OEM or supplier.

With a standard architecture, colleges and universities can train engineers and

reduce the cost and risks to companies in the industry. The barriers to changing from one

microcontroller to another were lowered by allowing highly portable software to be

written to the OSEK API, and not to a unique OS. (This includes traditional commercial

RTOSs that may not be available for the new microcontroller.)

Originally, OSEK was targeted as a standard open architecture for automotive

Electronic Control Units(ECUs) distributed throughout the vehicle. However, the

resulting standard is generic and does not limit usage to an automotive environment.

Consequently, this standard can be used in many stand-alone and networked devices,

such as in a manufacturing environment, household appliances, intelligent transportation

system devices, and so forth.

OSEK Architecture

An application that uses the OSEK architecture can take on a few different forms.

The two basic forms utilizing all components of the standard can be seen in Figures 1 and

2. I describe each component in detail later in the article. The difference between the two

forms is how the application handles the interface to the hardware.

In the first form, the application addresses the I/O layer directly through an I/O

API. This API isn't defined in the OSEK standard due to the varying requirements of

different applications. This form has the advantage of rapid response to a request for I/O

information from the application task. The drawback is that portability of application

tasks may be limited.

As an example, consider a device that requires input of vehicle speed. In some

applications, this input may be in the form of a pulse stream into the hardware. In this


version, the information is obtained via a call to the I/O layer. In another version, the

vehicle speed is obtained from another microcontroller over a network such as CAN or

J1850. In this version, the information is obtained via a call to the OSEK COM module.

Due to this small difference, the application is not 100% reusable. If vehicle speed

is used in many application tasks, the effort to port the software from one version to the

next may be daunting (not to mention the possibility of errors).

The second form treats the I/O layer as an OSEK task. In this form,every

application task requests information from and sends information to the OSEK COM

module. Consequently, changes to the source or destination of the information require only

one change to the COM message, which is automatically cascaded to every application

task. The drawback is that the processing of the message by COM may take longer than

processing the information directly from the I/O layer.

Other forms can be derived that use only some of the components of the OSEK

standard. Each component can be designed to operate independently of the other

components. In particular, the COM component does not assume that it is operating in an

OSEK OS environment.


Operating System

The first component of the OSEK standard is the operating system. Many

engineers have a common misperception that OSEK is an RTOS. Although the OS is a

large portion of the standard,the power of OSEK comes from the integration of each

component and the development of a standard architecture. The operating system is

composed of a number of objects as shown in Figure 3.

The OS also provides error handling (used primarily during development) and

hooks for user-defined functions to track changes in system state.

OSEK Tasks

Each OSEK task must be in one of only four states: suspended, ready, running, or

waiting. As I mentioned earlier, only extended tasks can enter the waiting state. The four

task states are defined as follows:

Suspended: The task is not in the ready queue and is therefore ineligible to run

Ready: The task is ready to run and the scheduler may choose to run it(based on

its priority and that of other ready tasks, as well as the preemption rules)

Running: The task is currently running. Only one task will be running at any given

instant

Waiting: The task is waiting for an event to occur

Each task also has a priority, with higher numbers indicating higher priority. The

OSEK standard does not define a maximum priority. Each implementation is free to

define its own. Tasks can be moved into the ready state when one of the following

events occurs:

The task is commanded into the ready state by an explicit task activation command

(ActivateTask() or ChainTask() system service)

An alarm expires that activates the task

A message is received that activates the task

An event upon which the task is waiting occurs

Tasks in the ready state reside in the ready queue based on priority and are executed

on a first-in, first-out basis. Tasks move to the suspended state upon termination


(ChainTask() or TerminateTask() system service), and are moved to the waiting state

when a needed event is not available (WaitEvent() system service). The task is

moved from ready to running by the scheduler. The function of the scheduler varies

based on whether the running task can be preempted. For non-preemptive tasks, the

scheduler runs when one of the following occurs:

A task is terminated (ChainTask() or TerminateTask() system service)

The scheduler is called explicitly (Schedule() system service)

An extended task transitions into the waiting state (WaitEvent() system service)

For preemptive tasks, the scheduler runs when one of the following occurs:

A task is terminated (ChainTask() or TerminateTask() system service)

An extended task transitions into the waiting state (WaitEvent() system service)

A task is moved from suspended to ready (ActivateTask() system service)

An event is set (SetEvent() system service)

A message arrives that activates a task or sets an event

An alarm expires that activates a task or sets an event

The scheduler is also considered a resource that can be locked, thereby inhibiting

rescheduling during a critical section of the code.
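As a hedged illustration of these rules, a basic task in OSEK style might look like the sketch below. The task and helper names are hypothetical, the tasks themselves would be declared in the OIL configuration, and RES_SCHEDULER is the standard resource that locks the scheduler:

#include "os.h"                       /* conventional OSEK OS header                   */

TASK(ControlTask)
{
    GetResource(RES_SCHEDULER);       /* inhibit rescheduling in the critical section  */
    update_shared_state();            /* hypothetical helper touching shared data      */
    ReleaseResource(RES_SCHEDULER);   /* rescheduling may occur again                  */

    ActivateTask(LoggerTask);         /* move LoggerTask from suspended to ready       */
    TerminateTask();                  /* return this task to the suspended state       */
}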

Interrupts

OSEK defines three levels of interrupt service routines (ISRs). The difference

between each level is whether OS system services are called. Level 1 ISRs run

independently of the OS and execute the fastest. An example is transmitting a stream of

serial data previously buffered, or driving a PWM output signal.

A level 2 ISR provides a frame in which an application function that contains an

OS call is executed. An example of this level is the receipt of a pulse that must be

processed immediately.

A level 3 ISR is a hybrid in which code that doesn't call an OS service coexists

with code that calls a service. In this case, code that makes OS service calls is enclosed in

two calls: EnterISR() and LeaveISR(). An example of this level of ISR is the receipt of a serial stream of data. The ISR knows how long the stream is, and buffers the stream until the end, at which time an OS service is called to send the message to an application. The only time that EnterISR() and LeaveISR() are called is after the last character of the stream is received.

After EnterISR() is called, the level 3 ISR can activate tasks, enable and disable interrupts, set events as having occurred, and start, reset, and stop alarms. However, rescheduling does not occur until LeaveISR() is called. The last statement in the ISR must always be LeaveISR().


Interrupts may be checked, disabled, and enabled. Unlike a generic enable and

disable interrupt routine provided by a compiler, this interface allows different interrupts

to be disabled or enabled based on a mask that is sent to the routine. The interrupt

descriptor is specific to the implementation of OSEK (due to differences in

microcontrollers). However, a mask can be created by the application that can be

configured for each OSEK implementation. For example, a mask called

TIMER_INTERRUPTS might be defined to inhibit interruption of the task by the timer

module. In implementations where there are no timers, this would be defined as zero; in

other implementations, in which there are only timer interrupts, it may be the global

interrupt enable mask; and in still others, it may be a combination of specific interrupts.

Events

Events are used to synchronize different tasks. Each event is "owned" by an

extended task. Any task, including basic tasks, can set an event. Only the owner task can

clear the event or wait for the event.
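A hedged sketch of this ownership rule (the task and event names are illustrative and would be declared in the OIL configuration):

#include "os.h"                       /* conventional OSEK OS header                        */

TASK(SensorTask)                      /* basic task: may set the event, but not wait on it  */
{
    SetEvent(ControlTask, EvDataReady);
    TerminateTask();
}

TASK(ControlTask)                     /* extended task: owns the event                      */
{
    for (;;) {
        WaitEvent(EvDataReady);       /* enter the waiting state until the event is set     */
        ClearEvent(EvDataReady);      /* only the owner task may clear it                   */
        /* ... consume the data ... */
    }
}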

Resource management

Resource management controls access to shared resources such as memory,

hardware, and the like. The scheduler is a special resource that can also be locked by

tasks. To eliminate priority inversion and deadlock, OSEK employs a priority ceiling

protocol. This protocol temporarily increases the priority of the task that has locked the

resource so that no other tasks that access the resource can be running while the resource

is locked. However, all tasks with a priority higher than the highest-priority task with

access to the resource can still run.

Alarms and counters

Alarms and counters are tools used to synchronize task activation with recurring

events. An alarm is statically assigned to one counter, one task, and one action. The

action could be either to activate a task or set an event.

Counters are measured in ticks and can represent time, number of pulses received,

and so on. One counter, the timer counter, is provided by each implementation. This

counter can be used to schedule periodic events. Other counters are manipulated through

an API that is specific to each implementation of the OSEK OS. Consequently, counter

control code written for one microcontroller and one vendor's OSEK OS would have to

be rewritten if the software is ported to a different vendor's OSEK OS, but to the same

microcontroller.

Two types of alarms are available: cyclic and single. Cyclic alarms can be used to

schedule a task that must occur periodically. When an alarm is set, it can be set to a

relative or absolute value of the counter. The value of the counter and the cycle can be

dynamically allocated when the alarm is set. Consequently, a single alarm can be single,


cyclic, set relative to the counter, and set absolute to the counter at different locations in

the application.

An example of using alarms is in scheduling periodic tasks to activate. If there are

four tasks-A, B, C, and D, all of the same priority-and each task needs to be executed

every 40ms, four alarms could be set up and started during hook routine StartupHook().

Task A would be set to execute at a relative time of 0ms, Task B at a relative time of

10ms, Task C at a relative time of 20ms, and Task D at a relative time of 30ms. All tasks

would cycle at 40ms. The effect of this is to limit the latency for each task from the time

that it is activated until it runs. None of the tasks will have to wait on any other task,

unless the task takes more than 10ms to complete.
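A hedged sketch of that setup, assuming one cyclic alarm per task is declared in the OIL configuration, the system counter ticks once per millisecond, and SetRelAlarm(alarm, offset, cycle) is used to start each alarm:

#include "os.h"                       /* conventional OSEK OS header */

void StartupHook(void)
{
    /* offsets of 0, 10, 20 and 30 ms; each alarm repeats every 40 ms and activates
       its task (some implementations require an offset of at least 1 tick)          */
    SetRelAlarm(AlarmTaskA,  0, 40);
    SetRelAlarm(AlarmTaskB, 10, 40);
    SetRelAlarm(AlarmTaskC, 20, 40);
    SetRelAlarm(AlarmTaskD, 30, 40);
}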

Communication

The communication specification (COM) provides an interface for multiple

application modules to communicate via messages. In addition to providing for

interprocess communication, COM also provides for communication between

microcontrollers in a multiprocessor module as well as between controllers over a

network. If implemented, the network may be one of CAN or J1850. (Theoretically,the

network can be of any type, including Ethernet.) The application modules do not have

knowledge of the physical location of the sender or recipient.

The COM model consists of a number of layers that correspond roughly to the

ISO/OSI seven-layer model. The layers, as shown in Figure 5, are: application,

interaction, network, datalink, and physical. The interaction layer corresponds roughly to

the presentation layer of the ISO/OSI model. The session and transport layers of the

ISO/OSI model do not exist in the COM specification.

The interaction layer provides the application programming interface for COM. It

consists of a small number of interfaces that are used to send and receive messages, check

status, and lock and release the message resource. This simple interface encapsulates a

powerful system that greatly increases the portability of application modules. Messages


that are intended for local processes are handled totally by the interaction layer. Messages

intended for transmission over a network or to another microcontroller in the same

module are passed to the network layer.

The network layer provides services to the interaction layer to transfer messages

over a network. It will segment messages into frames if they are supported by the chosen

COM conformance class. If a message is unsegmented, it is passed directly to the data

link layer. The data link layer handles the protocol of the message over the chosen

network. Multiple data link layers may exist in a given application.

The second step is to define the usage for each task. The message may be sent or

received, with or without copy, and activate a task, set an event, or start or clear an alarm

for each task using the message. As an example look at the system diagram in Figure 7.

One message exists, A, that is sent by Task A.1 and is received by tasks A.2, B.1, and

C.1. Only one task may be defined as sending a message. For Task A.1, the message is

sent with copy and starts an alarm. Task A.2 is activated when the message is sent and

receives the message with copy. Task B.1 receives the message from the network with

copy and sets an event. Finally, Task C.1 receives the message from the network without

copy and activates a task. Figure 8 shows the flow of information whenever a message is

sent.


POSIX Standard

• POSIX: Portable OS Interface

– Set of IEEE standards

– Mandatory + Optional parts

• Objective: Source code portability of applications across multiple OS

– Standard way for applications to interface to OS

– Mostly but not exclusively Unix type OS

– Total portability is not achievable

• POSIX.1

– Set of standard OS system calls

• File operations, process management, signals, and devices

• Mostly mandatory

– Widely supported

• Unix, Linux, VxWorks, QNX, Solaris, OSX, LynxOS

POSIX.4 version

• POSIX.4 = POSIX 1003.1b

– Added Realtime functionality

– Built on top of POSIX.1

• POSIX.4a = POSIX 1003.1c

– Threads extensions

• POSIX.4b = POSIX 1003.1d

– More realtime extensions

POSIX.4 = POSIX 1003.1b

– Range of RT Features

• Shared Memory / Memory Locking / Priority

Scheduling / Signals / Semaphores / Clocks &

Timers

• Will examine many of these issues

– Adopted by many RTOS

• QNX, LynxOS, VxWorks, RT Linux, Integrity

– “POSIX Compliant” OS

• Supports mandatory features

• Specifies which optional features it supports

• POSIX.4 features are mostly optional so beware…


POSIX.4 RT Scheduling

• Defines two main scheduling policies

– SCHED_FIFO and SCHED_RR

• Each has attributes

– Also have SCHED_OTHER

– Currently a single attribute = priority

struct sched_param {
    int sched_priority;
};

– E.g., EDF could be implemented by extending the structure to include

• struct timespec sched_deadline;

• struct timespec sched_timerequired;

POSIX.4 RT Scheduling

• SCHED_FIFO

– Simple priority based preemptive scheduler

– Most common in RTS

– FIFO used to schedule processes within each priority level

– If no other process exists at higher priority, process runs until complete

• Next process at that priority (if present) then allocated CPU

• Highest priority process guaranteed processor time

• SCHED_RR

– Round robin used to time slice among processes at same priority level

– System provided timeslice

– Use for lower priority tasks
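A minimal sketch of requesting SCHED_FIFO for the calling process; the priority value 50 is an arbitrary assumption, and the valid range can be queried with sched_get_priority_min()/sched_get_priority_max():

#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param sp;

    sp.sched_priority = 50;                               /* assumed value within the valid range */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {   /* 0 = the calling process              */
        perror("sched_setscheduler");                     /* usually requires elevated privileges */
        return 1;
    }
    /* ... real-time work now runs under fixed-priority FIFO scheduling ... */
    return 0;
}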

POSIX.4 Clocks & Timers

• Range of features

– timespec, itimerspec

• Time structures


– clock_getres()

• Determine Clock Resolution

– clock_gettime(),clock_settime()

• Get and Set System Clock Time

– nanosleep( )

– timer_create( ),timer_delete( )
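A minimal sketch using some of these calls to query the clock resolution, read the current time, and sleep for 1.5 ms (error handling omitted for brevity):

#include <time.h>
#include <stdio.h>

int main(void)
{
    struct timespec res, now, delay;

    clock_getres(CLOCK_REALTIME, &res);     /* resolution of the realtime clock  */
    clock_gettime(CLOCK_REALTIME, &now);    /* current time                      */
    printf("resolution: %ld ns, now: %ld s\n", res.tv_nsec, (long)now.tv_sec);

    delay.tv_sec  = 0;                      /* sleep for 1.5 milliseconds        */
    delay.tv_nsec = 1500000;
    nanosleep(&delay, NULL);
    return 0;
}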

POSIX.4 Memory Locking

#include <sys/mman.h>

/* Main routine */
int main(void)
{
    /* Lock all of the process's pages in memory */
    mlockall(MCL_CURRENT | MCL_FUTURE);

    /* ... process code ... */

    munlockall();
    return 0;
}

• Locks currently mapped and future mapped pages belonging to the process in memory

– Locked Memory will vary as process runs

• Can also lock critical sections of memory or functions within a process

– More complex


What is uCOS II?

Micro-Controller Operating Systems, Version 2

A very small real-time kernel.

Memory footprint is about 20KB for a fully functional kernel.

Source code is about 5,500 lines, mostly in ANSI C.

Its source is open but not free for commercial use.

Important Features of uC/OS-II

Preemptive, priority-driven real-time scheduling.

64 priority levels (max 64 tasks), 8 reserved for uC/OS-II

Each task is an infinite loop.

Deterministic execution times for most uC/OS-II functions and services.

Nested interrupts can go up to 255 levels.

Features Not Supported by uC/OS-II

• Does not support priority inheritance.

• With uC/OS-II, all tasks must have a unique priority (cannot change while running)

μC/OS-II (pronounced "Micro C O S 2") stands for Micro-Controller

Operating System Version 2. μC/OS-II is based on μC/OS, The Real-Time Kernel which

was first published in 1992. Thousands of people around the world are using μC/OS in all

kinds of applications such as cameras, medical instruments, musical instruments, engine

controls, network adapters, highway telephone call boxes, ATM machines, industrial

robots, and many more. Numerous colleges and universities have also used μC/OS to

teach students about real-time systems.

μC/OS-II is upward compatible with μC/OS (V1.11) but provides many

improvements over μC/OS such as the addition of a fixed-sized memory manager, user

definable callouts on task creation, task deletion, task switch, and system tick, support for TCB extensions, stack checking, and much more. I also added comments to just about

every function and I made μC/OS-II much easier to port to different processors. The

source code in μC/OS was found in two source files. Because μC/OS-II contains many

new features and functions, I decided to split μC/OS-II in a few source files to make the

code easier to maintain. If you currently have an application (i.e. product) that runs with

μC/OS, your application should be able to run, virtually unchanged, with μC/OS-II. All

of the services (i.e. function calls) provided by μC/OS have been preserved.

μC/OS-II features in detail :

Source Code:

The organization of a real-time kernel is not always apparent when staring at many

source files and thousands of lines of code.

Portable:

Most of μC/OS-II is written in highly portable ANSI C, with target

microprocessor specific code written in assembly language. Assembly language is kept to

a minimum to make μC/OS-II easy to port to other processors. Like μC/OS, μC/OS-II can

be ported to a large number of microprocessors as long as the microprocessor provides a

stack pointer and the CPU registers can be pushed onto and popped from the stack. Also,


the C compiler should either provide in-line assembly or language extensions that allow

you to enable and disable interrupts from C.

μC/OS-II can run on most 8-bit, 16-bit, 32-bit or even 64-bit microprocessors or

micro-controllers and DSPs. Since μC/OS-II is upward compatible with μC/OS, your μC/OS

applications should run on μC/OS-II with few or no changes. Check for the availability of

ports on the μC/OS-II Web site at www.uCOS-II.com.

ROMable:

μC/OS-II was designed for embedded applications. This means that if you have

the proper tool chain (i.e. C compiler, assembler and linker/locator), you can embed

μC/OS-II as part of a product.

Scalable:

I designed μC/OS-II so that you can use only the services that you need in your

application. This means that a product can have just a few of μC/OS-II‟s services while

another product can have the full set of features. This allows you to reduce the amount of

memory (both RAM and ROM) needed by μC/OS-II on a product per product basis.

Scalability is accomplished with the use of conditional compilation. You simply specify

(through #define constants) which features you need for your application/product

Multi-tasking:

μC/OS-II is a fully-preemptive real-time kernel. This means that μC/OS-II always

runs the highest priority task that is ready. Most commercial kernels are preemptive and

μC/OS-II is comparable in performance with many of them. μC/OS-II can manage up to

64 tasks, however, the current version of the software reserves eight (8) of these tasks for

system use. This leaves your application with up to 56 tasks. Each task has a unique

priority assigned to it which means that μC/OS-II cannot do round robin scheduling.

There are thus 64 priority levels.

Deterministic:

Execution times of all μC/OS-II functions and services are deterministic. This

means that you can always know how much time μC/OS-II will take to execute a function

or a service. Furthermore, except for one service, the execution times of μC/OS-II services

do not depend on the number of tasks running in your application.

Task stacks:

Each task requires its own stack, however, μC/OS-II allows each task to have a

different stack size. This allows you to reduce the amount of RAM needed in your

application. With μC/OS-II‟s stack checking feature, you can determine exactly how

much stack space each task actually requires.

Services:

μC/OS-II provides a number of system services such as mailboxes, queues,

semaphores, fixed-sized memory partitions, time related functions, etc.

Interrupt Management:

Interrupts can suspend the execution of a task and, if a higher priority task is

awakened as a result of the interrupt, the highest priority task will run as soon as all

nested interrupts complete. Interrupts can be nested up to 255 levels deep.

Robust and reliable:


μC/OS-II programming

Example program using μC/OS-II for the above tasks:

#include "includes.h"

// Registers and data definition for our MX1 CPU

#include "type.h"

#include "mx1.h"

// Basic IO operation function

extern void EUARTinit(void);

extern U8 EUARTgetData(void);

extern void EUARTputString(U8 *line);

// Define stacks for each task (32-bit)

#define TASK_STK_SIZE 512

OS_STK TaskStk[2][TASK_STK_SIZE];

// Function prototypes of tasks

void Task0(void *data);

void Task1(void *data);

// Mailboxes (just two of them)

OS_EVENT* MBox0;

OS_EVENT* MBox1;

// main function //


int main (void)

{

P_U32 temp;

// Init UART controller

EUARTinit(); EUARTputString("\nEntering main() now!\n");

// Init uCOS kernel

OSInit();

// Create Mailbox

MBox0 = OSMboxCreate((void*)0);

MBox1 = OSMboxCreate((void*)0);

// Create two tasks

OSTaskCreate(Task0, // Task function

(void *)0, // point to data struct for this task

&(TaskStk[0][TASK_STK_SIZE - 1]), // stacks for the task

1); // priority

OSTaskCreate(Task1, // Task function

(void *)0, // no data transferred

&(TaskStk[1][TASK_STK_SIZE - 1]), // stack for the task

2); // priority

OSStart(); // Start multi-tasking now

return 0; // Never comes here

}

// program for task0 and task1

void Task0 (void *data)

{

int n = 0;

int i;

void* msg;

INT8U err;

while (1)

{

// Do something

for (i = 0; i < 10000; i++) n++;

// Tell people task0 is running

EUARTputString("Enter task0\n");

// Send mail to mailbox 1 (to task 1)

err = OSMboxPost(MBox1, (void*)&n);


// Waiting for mail from mailbox 0 (reply from task1)

msg = OSMboxPend(MBox0, 100, &err);

// Get data attached in the mail (only if the pend did not time out)
if (msg != (void *)0) n = *((int*)msg);

}

}

void Task1 (void *data)

{

int m = 0;

int i;

void* msg;

INT8U err;

while (1)

{

// Do something

for (i = 0; i < 1234; i++) m++;

// Tell people task1 is running

EUARTputString("Enter task1\n");

// Waiting for mail from mailbox 1 (mail from task 0)

msg = OSMboxPend(MBox1, 100, &err);

// Get data attached in the mail (only if the pend did not time out)
if (msg != (void *)0) m = *((int*)msg);

// Reply the mail to mailbox 0 (to task 0)

err = OSMboxPost(MBox0, (void*)&m);

}

}

Memory Management Functions

OSMemCreate (void *addr, INT32U nblks, INT32U blksize, INT8U *err)

– Format a memory partition

OSMemGet (OS_MEM *pmem, INT8U *err)

– Get a memory block from one of the created memory partitions

OSMemPut (OS_MEM *pmem, void *pblk)

– Return a memory block to the appropriate partition

OSMemQuery()

– Obtain the status of a memory partition
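A minimal sketch of these calls, assuming a partition of 32 blocks of 64 bytes each (the names are illustrative):

#include "includes.h"                    /* uC/OS-II master include file            */

static INT8U  CommBuf[32][64];           /* storage for 32 blocks of 64 bytes       */
static OS_MEM *CommMem;                  /* partition control block                 */

void comm_mem_demo(void)
{
    INT8U err;
    void  *pblk;

    CommMem = OSMemCreate(&CommBuf[0][0], 32, 64, &err);  /* format the partition       */
    pblk    = OSMemGet(CommMem, &err);                    /* get one 64-byte block      */
    if (pblk != (void *)0) {
        /* ... use the block ... */
        OSMemPut(CommMem, pblk);                          /* return it to the partition */
    }
}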


Time Management Function

OSTimeDly()

– Delay for a user-specified number of clock ticks

OSTimeDlyHMSM()

– Delay specified in hours (H), minutes (M), seconds (S), and milliseconds (m); maximum delay is 256 hours (about 11 days)

– Example: OSTimeDlyHMSM(0, 0, 1, 500); /* delay for 1.5 seconds */

OSTimeDlyResume()

– Resuming a Delayed Task

OSTimeGet() & OSTimeSet()

– 32-bit counter
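A minimal sketch of a periodic task built on these calls; OS_TICKS_PER_SEC comes from the uC/OS-II configuration, and the work done each cycle is hypothetical:

void PeriodicTask(void *pdata)
{
    pdata = pdata;                              /* unused parameter                   */
    for (;;) {
        /* ... one cycle of application work ... */
        OSTimeDly(OS_TICKS_PER_SEC / 10);       /* sleep for roughly 100 ms of ticks  */
        /* equivalently: OSTimeDlyHMSM(0, 0, 0, 100); */
    }
}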


RTOS --- VxWorks

VxWorks is a high-performance real-time operating system and one of the most widely used RTOSs. It is a scalable operating system in which the kernel, the I/O system, the network layer, and other facilities are separate components that are added to the kernel only when required. This modularity is one of the main advantages in the development of real-time applications. Unlike general-purpose operating systems such as Windows or Linux, VxWorks is a real-time operating system (RTOS); it is a multitasking, single-user operating system. The main advantages of an RTOS are low context-switching time, accuracy, and predictable response. The development environment for VxWorks is Tornado.

A VxWorks application can run standalone--either in ROM or disk-based--with no further need for the network or the host system. However, the host machine and VxWorks can also work together in a hybrid application, with the host machine using VxWorks systems as real-time "servers" in a networked environment. For instance, a VxWorks system controlling a robot might itself be controlled by a host machine that runs an expert system, or several VxWorks systems running factory equipment might be connected to host machines that track inventory or generate reports.

Features of VxWorks

High-Performance Real-time Kernel Facilities

The VxWorks kernel, wind, includes multitasking with preemptive priority

scheduling, intertask synchronization and communications facilities, interrupt handling

support, watchdog timers, and memory management.

a. Multitasking and Intertask Communications

The VxWorks multitasking kernel, wind, uses interrupt-driven, priority-based task

scheduling. It features fast context switch times and low interrupt latency. Under

VxWorks, any subroutine can be spawned as a separate task, with its own context and


stack. Other basic task control facilities allow tasks to be suspended, resumed, deleted,

delayed, and moved in priority.

The wind kernel supplies semaphores as the basic task synchronization and

mutual exclusion mechanism. There are several kinds of semaphores in wind, specialized

for different application needs: binary semaphores, counting semaphores, mutual

exclusion semaphores, and POSIX semaphores. All of these semaphore types are fast and

efficient. In addition to being available to application developers, they have also been

used extensively in building higher-level facilities in VxWorks.

For intertask communications, the wind kernel also supplies message queues,

pipes, sockets, and signals. The optional component VxMP provides shared-memory

objects as a communication mechanism for tasks executing on different CPUs. In

addition, semaphores are described in the semLib and semPxLib reference entries;

message queues are described in the msgQLib and mqPxLib reference entries.

POSIX Compatibility

VxWorks provides most interfaces specified by the 1003.1b standard (formerly

the 1003.4 standard), simplifying your ports from other conforming systems.

I/O System

The VxWorks I/O system provides uniform device-independent access to many

kinds of devices. You can call seven basic I/O routines: creat ( ), remove ( ), open ( ),

close( ), read ( ), write ( ), and ioctl ( ). Higher-level I/O routines (such as ANSI

compatible printf ( ) and scanf ( ) routines) are also provided.

VxWorks also provides a standard buffered I/O package (stdio) that includes

ANSI C-compatible routines such as fopen ( ), fclose ( ), fread ( ), fwrite ( ), getc ( ), and

putc ( ). These routines increase I/O performance in many cases. The VxWorks I/O

system also includes POSIX-compliant asynchronous I/O: a library of routines that

perform input and output operations concurrently with a task's other activities. VxWorks

includes device drivers for serial communication, disks, RAM disks, SCSI tape devices,

intertask communication devices (called pipes), and devices on a network. Application

developers can easily write additional drivers, if needed. VxWorks allows dynamic

installation and removal of drivers without rebooting the system.
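A minimal sketch of the basic I/O routines applied to a serial device; the device name "/tyCo/1", the baud rate, and the use of the FIOBAUDRATE ioctl request are assumptions for illustration:

#include "vxWorks.h"
#include "ioLib.h"
#include "fcntl.h"

void serialDemo(void)
{
    int  fd;
    char buf[32];

    fd = open("/tyCo/1", O_RDWR, 0);          /* open the (assumed) serial device       */
    if (fd == ERROR)
        return;

    ioctl(fd, FIOBAUDRATE, 9600);             /* device-specific control: set baud rate */
    write(fd, "ready\n", 6);                  /* write to the device                    */
    read(fd, buf, sizeof(buf));               /* read up to 32 bytes from the device    */
    close(fd);
}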

Local File Systems

VxWorks provides fast file systems tailored to real-time applications. One file

system is compatible with the MS-DOS® file system, another with the RT-11 file system,

a third is a "raw disk" file system, a fourth supports SCSI tape devices, and a fifth

supports CD-ROM devices.

MS-DOS Compatible File System: dosFs

VxWorks provides the dosFs file system, which is compatible with the MS-DOS

file system (for MS-DOS versions up to and including 6.2). The capabilities of dosFs

offer considerable flexibility appropriate to the varying demands of real-time applications

Raw Disk File System: rawFs

VxWorks provides rawFs, a simple "raw disk file system" for use with disk

devices. rawFs treats the entire disk much like a single large file. The rawFs file system

permits reading and writing portions of the disk, specified by byte offset, and it performs

simple buffering. When only simple, low-level disk I/O is required, rawFs has the

advantages of size and speed.


cdRomFs

VxWorks provides the cdromFs file system which lets applications read any

CDROM that is formatted in accordance with ISO 9660 file system standards. After

initializing cdRomFs and mounting it on a CD-ROM block device, you can access data

on that device using the standard POSIX I/O calls.

C++ Development Support

In addition to general C++ support including the iostream library and the standard

template library, the optional component Wind Foundation Classes adds the following

C++ object libraries:

Shared-Memory Objects (VxMP Option)

The VxMP option provides facilities for sharing semaphores, message queues,

and memory regions between tasks on different processors.

Virtual Memory (Including VxVMI Option)

VxWorks provides both bundled and unbundled (VxVMI) virtual memory

support for boards with an MMU, including the ability to make portions of memory

noncacheable or read-only, as well as a set of routines for virtual-memory management.

Target-resident Tools

In the Tornado development system, the development tools reside on the host

system. However, a target-resident shell, module loader and unloader, and symbol table

can be configured into the VxWorks system if necessary.

Utility Libraries

VxWorks provides an extensive set of utility routines, including interrupt

handling, watchdog timers, message logging, memory allocation, string formatting and

scanning, linear and ring buffer manipulations, linked-list manipulations, and ANSI C

libraries.

Performance Evaluation Tools

VxWorks performance evaluation tools include an execution timer for timing a

routine or group of routines, and utilities to show CPU utilization percentage by task.

Target Agent

The target agent allows a VxWorks application to be remotely debugged using the

Tornado development tools.

Board Support Packages

Board Support Packages (BSPs) are available for a variety of boards and provide

routines for hardware initialization, interrupt setup, timers, memory mapping, and so on.

Network Facilities

VxWorks provides "transparent" access to other VxWorks and TCP/IP-networked

systems. All VxWorks network facilities comply with standard Internet protocols, both

loosely coupled over serial lines or standard Ethernet connections and tightly coupled

over a backplane bus using shared memory.

Memory Allocation

VxWorks supplies a memory management facility useful for dynamically

allocating, freeing, and reallocating blocks of memory from a memory pool. Blocks of

arbitrary size can be allocated, and you can specify the size of the memory pool. This

memory scheme is built on a much more general mechanism that allows VxWorks to

manage several separate memory pools.
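A minimal sketch of creating and using a separate memory pool with the memPartLib facility (the pool and block sizes are arbitrary assumptions):

#include "vxWorks.h"
#include "memPartLib.h"

static char pool[4096];                           /* backing storage for a private pool */

void memPoolDemo(void)
{
    PART_ID partId;
    char   *pBlock;

    partId = memPartCreate(pool, sizeof(pool));   /* create a separate memory partition */
    pBlock = memPartAlloc(partId, 128);           /* allocate a 128-byte block from it  */
    if (pBlock != NULL)
        memPartFree(partId, pBlock);              /* return the block to the pool       */
}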


Watchdog Timers

A watchdog facility allows callers to schedule execution of their own routines

after specified time delays. As soon as the specified number of ticks have elapsed, the

specified "timeout" routine is called at the interrupt level of the system clock, unless the

watchdog is canceled first. This mechanism is entirely different from the kernel's task

delay facility.
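A minimal sketch of this facility, assuming a one-second timeout routine that simply logs a message (logMsg() is safe to call at interrupt level):

#include "vxWorks.h"
#include "wdLib.h"
#include "logLib.h"
#include "sysLib.h"

void timeoutRoutine(int param)                    /* runs at the system clock interrupt level */
{
    logMsg("watchdog expired, param = %d\n", param, 0, 0, 0, 0, 0);
}

void watchdogDemo(void)
{
    WDOG_ID wdId = wdCreate();                    /* create the watchdog                      */

    /* call timeoutRoutine() after one second unless wdCancel() is called first */
    wdStart(wdId, sysClkRateGet(), (FUNCPTR)timeoutRoutine, 42);
}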

Message Logging

A simple message logging facility allows applications to send error or status

messages to a logging task, which then formats and outputs the messages to a system

wide logging device (such as the system console, disk, or accessible memory). The

message logging facility can be used from either interrupt level or task level.

VxWorks 5.3 (Tornado) Programming

1. Timing

VxWorks provides a number of timing facilities to help with this task. The VxWorks

execution timer can time any subroutine or group of subroutines. To time very fast

subroutines, the timer can also repeatedly execute a group of functions until the time of a

single iteration is known with reasonable certainty.

Objectives

The following are the primary objectives of this experiment:

To demonstrate how to time a single subroutine using the VxWorks timex() routine

Description

The timex() routine times a single execution of a specified function with up to eight

integer arguments to be passed to the function. When execution is complete, the timex()

routine displays the execution time and a margin of error in milliseconds. If the execution

was so fast relative to the clock rate that the time is meaningless (error > 50%), a warning

message will appear. In such cases, use timexN() which will repeatedly execute the

function until the time of a single iteration is known with reasonable certainty.

1. Syntax

void timex(FUNCPTR function_name, int arg1, .., int arg8)

Note: the first argument in timex() routine is a pointer to the function to be timed.

2. Example

This small example has two subroutines. The first subroutine "timing" makes

a call to timex() with the function name "printit" which is the subroutine to be timed. The

arguments are all NULL, so no parameters are being passed to "printit". The second

subroutine, "printit", which is being timed iterates 200 times while printing its task

id(using taskIdSelf()) and the increment variable "i".

-------------------------------------------------------------------------------------

#include "vxWorks.h" /* Always include this as the first thing in every program */

#include "timexLib.h"

#include "stdio.h"

#define ITERATIONS 200


int printit(void);

void timing() /* Function to perform the timing */

{

FUNCPTR function_ptr = (FUNCPTR)printit; /* a pointer to the function "printit" */

timex(function_ptr,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL); /* Timing the "printit" function */

}

int printit(void) /* Function being timed */

{

int i;

for(i=0; i < ITERATIONS; i++) /* Printing the task id number and the increment

variable "i" */

printf("Hello, I am task %d and is i = %d\n",taskIdSelf(),i);

return 0;

}

-------------------------------------------------------------------------------------

2. Multi-Tasking

Introduction

Modern real-time systems are based on the complementary concepts of

multitasking and intertask communications. A multitasking environment allows real-time

applications to be constructed as a set of independent tasks, each with a separate thread of

execution and its own set of system resources. The intertask communication facilities

allow these tasks to synchronize and coordinate their activities.

The VxWorks multitasking kernel, wind, uses interrupt-driven, priority-based

task scheduling. It features fast context switch time and low interrupt latency.

Objectives

The following are the primary objectives of this experiment:

To teach the student how to initiate multiple processes using Vxworks tasking

routines.

Description

Multitasking creates the appearance of many threads of execution running concurrently

when, in fact, the kernel interleaves their execution on the basis of a scheduling algorithm.

Each apparently independent program is called a task. Each task has its own context,

which is the CPU environment and system resources that the task sees each time it is

scheduled to run by the kernel.

On a context switch, a task's context is saved in the Task Control Block(TCB). A task's

context includes:

a thread of execution, that is, the task's program counter

the CPU registers and floating-point registers if necessary

a stack of dynamic variables and return addresses of function calls


I/O assignments for standard input, output, error

a delay timer

a timeslice timer

kernel control structures

signal handlers

debugging and performance monitoring values

1. Task Creation and Activation

The routine taskSpawn creates the new task context, which includes allocating

and setting up the task environment to call the main routine (an ordinary subroutine) with

the specified arguments. The new task begins at the entry to the specified routine.

The arguments to taskSpawn() are the new task's name (an ASCII string), its priority, an "options" word (a hex value), the stack size (an int), the main routine address (or the main routine

name), and 10 integer arguments to be passed to the main routine as startup parameters.

2. Syntax

id = taskSpawn(name,priority,options,stacksize,function, arg1,..,arg10);

3. Example

This example creates ten tasks which all print their task Id once:

---------------------------------------------------------------------------

#include "vxWorks.h" /* Always include this as the first thing in every program */
#include "taskLib.h"
#include "stdio.h"

#define ITERATIONS 10

void print(void);

void spawn_ten() /* Subroutine to perform the spawning */

{

int i, taskId;

for(i=0; i < ITERATIONS; i++) /* Creates ten tasks */

taskId = taskSpawn("tprint",90,0x100,2000,(FUNCPTR)print,0,0,0,0,0,0,0,0,0,0);

}

void print(void) /* Subroutine to be spawned */

{

printf("Hello, I am task %d\n",taskIdSelf()); /* Print task Id */

}

---------------------------------------------------------------------------

Semaphores

Introduction

Semaphores permit multitasking applications to coordinate their activities. The

most obvious way for tasks to communicate is via various shared data structures. Because

all tasks in VxWorks exist in a single linear address space, sharing data structures between tasks is trivial. Global variables, linear buffers, ring buffers, linked lists, and pointers can be referenced directly by code running in different contexts.

However, while shared address space simplifies the exchange of data, interlocking

access to memory is crucial to avoid contention. Many methods exist for obtaining

exclusive access to resources, and one of them is semaphores.


Objectives

The following are the primary objectives of this experiment:

To demonstrate the use of VxWorks semaphores.

Description

VxWorks semaphores are highly optimized and provide the fastest intertask

communication mechanisms in VxWorks. Semaphores are the primary means for

addressing the requirements of both mutual exclusion and task synchronization. There are

three types of Wind semaphores, optimized to address different classes of problems:

binary

The fastest, most general purpose semaphore. Optimized for synchronization and

can also be used for mutual exclusion.

mutual exclusion

A special binary semaphore optimized for problems inherent in mutual exclusion:

priority inheritance, deletion safety and recursion.

counting

Like the binary semaphore, but keeps track of the number of times the semaphore

is given. Optimized for guarding multiple instances of a resource.

1. Semaphore Control

Wind semaphores provide a single uniform interface for semaphore control. Only the

creation routines are specific to the semaphore type:

semBCreate(int options, SEM_B_STATE initialState): Allocate and initialize a binary semaphore.

semMCreate(int options): Allocate and initialize a mutual exclusion semaphore.

semCCreate(int options, int initialCount): Allocate and initialize a counting

semaphore.

semDelete(SEM_ID semId): Terminate and free a semaphore.

semTake(SEM_ID semId, int timeout): Take a semaphore.

semGive(SEM_ID semId): Give a semaphore.

semFlush(SEM_ID semId): Unblock all tasks waiting for a semaphore.

Please refer to the VxWorks Reference Manual for valid arguments in the above routines.


2. Example: Binary Semaphore

A binary semaphore can be viewed as a flag that is available or unavailable.

When a task takes a binary semaphore, using semTake(), the outcome depends on

whether the semaphore is available or unavailable at the time of the call. If the semaphore

is available, then the semaphore becomes unavailable and then the task continues

executing immediately. If the semaphore is unavailable, the task is put on a queue of

blocked tasks and enters a state of pending on the availability of the semaphore.

When a task gives a binary semaphore, using semGive(), the outcome depends on

whether the semaphore is available or unavailable at the time of the call. If the semaphore

is already available, giving the semaphore has no effect at all. If the semaphore is

unavailable and no task is waiting to take it, then the semaphore becomes available. If the

semaphore is unavailable and one or more tasks are pending on its availability, then the

first task in the queue of pending tasks is unblocked, and the semaphore is left

unavailable.

In the example below, two tasks (taskOne and taskTwo) are competing to update the value of a global variable called "global". The objective of the program is to toggle the value of the global variable (1s and 0s): taskOne changes the value of "global" to 1 and taskTwo changes it back to 0. Without the semaphore, the updates would interleave unpredictably and the value of "global" would be corrupted.

-------------------------------------------------------------------------------------

/* includes */

#include "vxWorks.h"

#include "taskLib.h"

#include "semLib.h"

#include "stdio.h"

/* function prototypes */

void taskOne(void);

void taskTwo(void);

/* globals */

#define ITER 10

SEM_ID semBinary;

int global = 0;

void binary(void)

{

int taskIdOne, taskIdTwo;

/* create semaphore with semaphore available and queue tasks on FIFO basis */

semBinary = semBCreate(SEM_Q_FIFO, SEM_FULL);

/* Note 1: lock the semaphore for scheduling purposes */

semTake(semBinary,WAIT_FOREVER);

/* spawn the two tasks */


taskIdOne = taskSpawn("t1",90,0x100,2000,(FUNCPTR)taskOne,0,0,0,0,0,0,0,0,0,0);

taskIdTwo = taskSpawn("t2",90,0x100,2000,(FUNCPTR)taskTwo,0,0,0,0,0,0,0,0,0,0);

}

void taskOne(void)

{

int i;

for (i=0; i < ITER; i++)

{

semTake(semBinary,WAIT_FOREVER); /* wait indefinitely for semaphore */

printf("I am taskOne and global = %d......................\n", ++global);

semGive(semBinary); /* give up semaphore */

}

}

void taskTwo(void)

{

int i;

semGive(semBinary); /* Note 2: give up semaphore(a scheduling fix) */

for (i=0; i < ITER; i++)

{

semTake(semBinary,WAIT_FOREVER); /* wait indefinitely for semaphore */

printf("I am taskTwo and global = %d----------------------\n", --global);

semGive(semBinary); /* give up semaphore */

}

}

-------------------------------------------------------------------------------------

Message Queues

Introduction

In VxWorks, the primary intertask communication mechanism within a single

CPU is message queues. Message queues allow a variable number of messages, each of

variable length, to be queued (FIFO or priority based). Any task can send a message to a

message queue and any task can receive a message from a message queue. Multiple tasks

can send to and receive from the same message queue. Two way communication between

two tasks generally requires two message queues, one for each direction.

Objectives

The following are the primary objectives of this experiment:

To demonstrate the use of VxWorks message queues.

Description

Wind message queues are created and deleted with the following routines:


msgQCreate(int maxMsgs, int maxMsgLength, int options): Allocate and initialize

a message queue.

msgQDelete(MSG_Q_ID msgQId): Terminate and free a message queue.

msgQSend(MSG_Q_ID msgQId, char *Buffer, UINT nBytes, int timeout, int

priority): Send a message to a message queue.

msgQReceive(MSG_Q_ID msgQId, char *Buffer, UINT nBytes, int timeout): Receive a message from a message queue.

This library provides messages that are queued in FIFO order, with a single

exception: there are two priority levels, and messages marked as high priority are

attached to the head of the queue. A message queue is created with msgQCreate(). Its

parameters specify the maximum number of messages that can be queued in the message

queue and the maximum length in bytes of each message. A task sends a message to a

message queue with msgQSend(). If no tasks are waiting for messages on that queue, the

message is added to the queue's buffer of messages. If any tasks are waiting for a

message from that message queue, the message is immediately delivered to the first

waiting task.

A task receives a message from a message queue with msgQReceive(). If messages

are already available in the queue's buffer, the first message is immediately dequeued and

returned to the caller. If no messages are available, then the calling task blocks and is

added to a queue of tasks waiting for messages. The queue of waiting tasks can be

ordered either by task priority or FIFO, as specified when the queue is created.

Timeouts: Both msgQSend() and msgQReceive() take timeout parameters. The timeout parameter specifies how many system clock ticks to wait for space to become available when sending a message, or for a message to become available when receiving one.

Urgent Messages: The msgQSend() function can mark a message either as normal (MSG_PRI_NORMAL) or urgent (MSG_PRI_URGENT). Normal-priority messages are added to the tail of the message queue, while urgent-priority messages are added to the head of the message queue (a short sketch follows).
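A brief sketch of a bounded wait and an urgent send (this reuses the mesgQueueId and MAX_MESSAGE_LENGTH names from the example below; the one-second timeout is arbitrary, and sysClkRateGet() and printf() assume sysLib.h and stdio.h):

char buf[MAX_MESSAGE_LENGTH];

/* wait at most one second's worth of ticks for a message */
if (msgQReceive(mesgQueueId, buf, MAX_MESSAGE_LENGTH, sysClkRateGet()) == ERROR)
    printf("no message arrived within one second\n");

/* urgent messages jump to the head of the queue; NO_WAIT returns immediately if the queue is full */
if (msgQSend(mesgQueueId, "stop", 5, NO_WAIT, MSG_PRI_URGENT) == ERROR)
    printf("urgent send failed\n");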

1. Example

-------------------------------------------------------------------------------------

/* includes */

#include "vxWorks.h"

#include "msgQLib.h"

/* function prototypes */

void taskOne(void);

void taskTwo(void);


/* defines */

#define MAX_MESSAGES 100

#define MAX_MESSAGE_LENGTH 50

/* globals */

MSG_Q_ID mesgQueueId;

void message(void) /* function to create the message queue and two tasks */

{

int taskIdOne, taskIdTwo;

/* create message queue */

mesgQueueId=

msgQCreate(MAX_MESSAGES,MAX_MESSAGE_LENGTH,MSG_Q_FIFO);

/* spawn the two tasks that will use the message queue */

if((taskIdOne = taskSpawn("t1",90,0x100,2000,(FUNCPTR)taskOne,0,0,0,0,0,0,0,

0,0,0)) == ERROR)

printf("taskSpawn taskOne failed\n");

if((taskIdTwo = taskSpawn("t2",90,0x100,2000,(FUNCPTR)taskTwo,0,0,0,0,0,0,0,

0,0,0)) == ERROR)

printf("taskSpawn taskTwo failed\n");

}

void taskOne(void) /* task that writes to the message queue */

{

char message[] = "Received message from taskOne";

/* send message */

if((msgQSend(mesgQueueId,message,sizeof(message), WAIT_FOREVER,

MSG_PRI_NORMAL)) == ERROR)

printf("msgQSend in taskOne failed\n");

}

void taskTwo(void) /* task that reads from the message queue */

{

char msgBuf[MAX_MESSAGE_LENGTH];

/* receive message */

if(msgQReceive(mesgQueueId,msgBuf,MAX_MESSAGE_LENGTH,

WAIT_FOREVER)

== ERROR)

printf("msgQReceive in taskTwo failed\n");

else

printf("%s\n",msgBuf);

msgQDelete(mesgQueueId); /* delete message queue */

}

-------------------------------------------------------------------------------------


Round-Robin Task Scheduling

Introduction

Task scheduling is the assignment of starting and ending times to a set of tasks,

subject to certain constraints. Constraints are typically either time constraints or resource

constraints. A time-sharing operating system runs each active process in turn for its share of time (its "timeslice"), thus creating the illusion that multiple processes are running simultaneously on a single processor.

Wind task scheduling uses a priority based preemptive scheduling algorithm as

default, but it can also accommodate round-robin scheduling.

Objectives

The following are the primary objectives of this experiment:

To demonstrate the use of VxWorks round-robin task scheduling facilities.

Description

Round-Robin Scheduling

A round-robin scheduling algorithm attempts to share the CPU fairly

among all ready tasks of the same priority. Without round-robin scheduling, when

multiple tasks of equal priority must share the processor, a single task can usurp

the processor by never blocking, thus never giving other equal priority tasks a

chance to run.

Round-robin scheduling achieves fair allocation of the CPU to tasks of the

same priority by an approach known as time slicing. Each task executes for a

defined interval or time slice; then another task executes for an equal interval, in

rotation. The allocation is fair in that no task of a priority group gets a second

slice of time before the other tasks of a group are given a slice.

Round-robin scheduling can be enabled with routine kernelTimeSlice(),

which takes a parameter for a time slice, or interval. The interval is the amount of

time each task is allowed to run before relinquishing the processor to another

equal priority task.

The following routine controls round-robin task scheduling:

kernelTimeSlice(int ticks): Control round-robin scheduling. The number of ticks (on this target, 60 ticks equal one second) determines the duration of the time slice; a sketch of enabling and disabling it follows.
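For instance (a sketch only; the half-second slice is an arbitrary choice, and sysClkRateGet() requires sysLib.h):

kernelTimeSlice(sysClkRateGet() / 2);   /* enable round-robin with a half-second slice */
/* ... equal-priority tasks now share the CPU ... */
kernelTimeSlice(0);                     /* a slice of 0 ticks disables round-robin again */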

1. Example: Round-Robin Based Scheduling

In the example below, three tasks with the same priority print their task ids and

task names on the console. Without round-robin scheduling, "taskOne" would usurp the

processor until it was finished, and then "taskTwo" and "taskThree" would do likewise.


In the event that "taskOne" was looping indefinitely, the other tasks would never get a

chance to run. To ensure that the tasks get an equal share of the CPU time, a call is made

to kernelTimeSlice(). This sets the time slice interval value to TIMESLICE. The

TIMESLICE value is the time slice interval in clock ticks (which on the M68040 target used in this example is 60 ticks, equivalent to one second). The routine

sysClkRateGet() can be used to determine the number of clock ticks per second. Having

set up the time slice in this manner, the three tasks are spawned. However, here are a few implementation details that should be noted:

Make sure that sched has a higher priority than the tasks it is spawning! Unless

otherwise specified, tasks have a default priority of 100. Notice that taskOne,

taskTwo, and taskThree all have priorities of 101, which makes them lower in

priority than sched.

You must allow enough time for the context switches to occur; this is the reason for the busy-wait loop for (j=0; j < LONG_TIME; j++); in each task.

Using printf() is not ideal in the example, because it can block. This will of course cause a task transition which will upset the nice round-robin picture. Instead, use logMsg() (see the VxWorks reference manual for details). The latter won't block

unless the log message queue is full.

------------------------------------------------------------------------------------

/* includes */

#include "vxWorks.h"

#include "taskLib.h"

#include "kernelLib.h"

#include "sysLib.h"

#include "logLib.h"

/* function prototypes */

void taskOne(void);

void taskTwo(void);

void taskThree(void);

/* globals */

#define ITER1 100

#define ITER2 10

#define PRIORITY 101

#define TIMESLICE sysClkRateGet()

#define LONG_TIME 1000000

void sched(void) /* function to create the three tasks */

{

int taskIdOne, taskIdTwo, taskIdThree;


if (kernelTimeSlice(TIMESLICE) == OK) /* turn round-robin on */

printf("\n\t\tTIMESLICE = %d seconds\n\n\n", TIMESLICE/60);

/* spawn the three tasks */

if((taskIdOne =

taskSpawn("task1",PRIORITY,0x100,20000,(FUNCPTR)taskOne,0,0,0,0,0,0,0,

0,0,0)) == ERROR)

printf("taskSpawn taskOne failed\n");

if((taskIdTwo =

taskSpawn("task2",PRIORITY,0x100,20000,(FUNCPTR)taskTwo,0,0,0,0,0,0,0,

0,0,0)) == ERROR)

printf("taskSpawn taskTwo failed\n");

if((taskIdThree =

taskSpawn("task3",PRIORITY,0x100,20000,(FUNCPTR)taskThree,0,0,0,0,0,0,0,

0,0,0)) == ERROR)

printf("taskSpawn taskThree failed\n");

}

void taskOne(void)

{

int i,j;

for (i=0; i < ITER1; i++)

{

for (j=0; j < ITER2; j++)

logMsg("\n",0,0,0,0,0,0); /* log messages */

for (j=0; j < LONG_TIME; j++); /* allow time for context switch */

}

}

void taskTwo(void)

{

int i,j;

for (i=0; i < ITER1; i++)

{

for (j=0; j < ITER2; j++)

logMsg("\n",0,0,0,0,0,0); /* log messages */

for (j=0; j < LONG_TIME; j++); /* allow time for context switch */

}

}

void taskThree(void)

{

int i,j;

for (i=0; i < ITER1; i++)

{

for (j=0; j < ITER2; j++)


logMsg("\n",0,0,0,0,0,0); /* log messages */

for (j=0; j < LONG_TIME; j++); /* allow time for context switch */

}

}

Preemptive Priority Based Task Scheduling

Introduction

Task scheduling is the assignment of starting and ending times to a set of tasks, subject to

certain constraints. Constraints are typically either time constraints or resource

constraints. A time-sharing operating system runs each active process in turn for its share of time (its "timeslice"), thus creating the illusion that multiple processes are running simultaneously on a single processor.

Wind task scheduling uses a priority based preemptive scheduling algorithm as default,

but it can also accommodate round-robin scheduling.

Objectives

The following are the primary objectives of this experiment:

To demonstrate the use of VxWorks preemptive priority based task scheduling

facilities.

Description

Preemptive Priority Based Scheduling

With a preemptive priority based scheduler, each task has a priority and

the kernel ensures that the CPU is allocated to the highest priority task that is

ready to run. This scheduling method is preemptive in that if a task that has a

higher priority than the current task becomes ready to run, the kernel immediately

saves the current task's context and switches to the context of the higher priority

task.

The Wind kernel has 256 priority levels (0-255). Priority 0 is the highest

and priority 255 is the lowest. Tasks are assigned a priority when created;

however, while executing, a task can change its priority using taskPrioritySet().

1. Example: Preemptive Priority Based Scheduling

One of the arguments to taskSpawn() is the priority at which the task is to execute:

id = taskSpawn(name, priority, options, stacksize, function, arg1, ..., arg10);

By varying the priority (0-255) of the task spawned, you can affect the priority of the task. Priority 0 is the highest and priority 255 is the lowest. Note that the priority of a task is relative to the priorities of other tasks; the task priority number has no significance by itself. In addition, a task's priority can be changed after it is spawned using the following routine:


taskPrioritySet(int tid, int newPriority): Change the priority of a task.
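As a small sketch (the priority values 60 and 200 are arbitrary, and the calls assume taskLib.h), a task can change its own priority at run time using its own ID from taskIdSelf():

int myTid = taskIdSelf();       /* ID of the calling task */
taskPrioritySet(myTid, 60);     /* raise this task's priority */
/* ... time-critical section ... */
taskPrioritySet(myTid, 200);    /* drop back to a low priority afterwards */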

In the example below, there are three tasks with different priorities (HIGH, MID, LOW). The result of running the program is that the task with the highest priority, "taskThree", will run to completion first, followed by the next highest priority task, "taskTwo", and finally the task with the lowest priority, which is "taskOne".

------------------------------------------------------------------------------------

/* includes */

#include "vxWorks.h"

#include "taskLib.h"

#include "logLib.h"

/* function prototypes */

void taskOne(void);

void taskTwo(void);

void taskThree(void);

/* globals */

#define ITER1 100

#define ITER2 1

#define LONG_TIME 1000000

#define HIGH 100 /* high priority */

#define MID 101 /* medium priority */

#define LOW 102 /* low priority */

void sched(void) /* function to create the three tasks */

{

int taskIdOne, taskIdTwo, taskIdThree;

printf("\n\n\n\n\n");

/* spawn the three tasks */

if((taskIdOne =

taskSpawn("task1",LOW,0x100,20000,(FUNCPTR)taskOne,0,0,0,0,0,0,0,

0,0,0)) == ERROR)

printf("taskSpawn taskOne failed\n");

if((taskIdTwo =

taskSpawn("task2",MID,0x100,20000,(FUNCPTR)taskTwo,0,0,0,0,0,0,0,

0,0,0)) == ERROR)

printf("taskSpawn taskTwo failed\n");

if((taskIdThree =

taskSpawn("task3",HIGH,0x100,20000,(FUNCPTR)taskThree,0,0,0,0,0,0,0,

0,0,0)) == ERROR)

printf("taskSpawn taskThree failed\n");

}


void taskOne(void)

{

int i,j;

for (i=0; i < ITER1; i++)

{

for (j=0; j < ITER2; j++)

logMsg("\n",0,0,0,0,0,0);

for (j=0; j < LONG_TIME; j++);

}

}

void taskTwo(void)

{

int i,j;

for (i=0; i < ITER1; i++)

{

for (j=0; j < ITER2; j++)

logMsg("\n",0,0,0,0,0,0);

for (j=0; j < LONG_TIME; j++);

}

}

void taskThree(void)

{

int i,j;

for (i=0; i < ITER1; i++)

{

for (j=0; j < ITER2; j++)

logMsg("\n",0,0,0,0,0,0);

for (j=0; j < LONG_TIME; j++);

}

}

------------------------------------------------------------------------------------

Priority Inversion

Introduction

Priority inversion occurs when a higher-priority task is forced to wait an indefinite

period for the completion of a lower priority task. For example, prioHigh,

prioMedium, and prioLow are tasks of high, medium, and low priority, respectively.

prioLow has acquired a resource by taking its associated binary semaphore. When

prioHigh preempts prioLow and contends for the resource by taking the same

semaphore, it becomes blocked. If prioHigh were blocked no longer than the time it

normally takes prioLow to finish with the resource, there would be no problem, because

the resource can't be preempted. However, the low priority task is vulnerable to

preemption by the medium priority task, prioMedium, which could prevent prioLow


from relinquishing the resource. This condition could persist, blocking prioHigh for an

extensive period of time.

Objectives

The following are the primary objectives of this experiment:

To demonstrate VxWorks' priority inversion avoidance mechanisms.

Description

To address the problem of priority inversion, VxWorks provides an additional

option when using mutual exclusion semaphores. This option is

SEM_INVERSION_SAFE which enables a priority inheritance algorithm. This

algorithm ensures that the task that owns a resource executes at the priority of the highest

priority task blocked on that resource. When execution is complete, the task relinquishes

the resource and returns to its normal priority. Therefore, the inheriting task is protected

from preemption by an intermediate priority task. This option must be used in

conjunction with SEM_Q_PRIORITY:

semId = semMCreate(SEM_Q_PRIORITY | SEM_INVERSION_SAFE);

1. Example:

The example below illustrates a typical situation in which priority inversion takes place. Here is what happens:

1. The prioLow task locks the semaphore.

2. The prioLow task is preempted by the prioMedium task, which runs for a long time and keeps prioLow from releasing the semaphore.

3. The prioHigh task preempts prioMedium and tries to lock the semaphore, which is currently held by prioLow.

The situation is shown in the printout from running the program:

------------------------------------------------------------------------------------

Low priority task locks semaphore

Medium task running

High priority task tries to lock semaphore

Medium task running

Medium task running

Medium task running

Medium task running

Medium task running

Medium task running

Medium task running

Medium task running

Medium task running

------------------------------------------Medium priority task exited

Low priority task unlocks semaphore

High priority task locks semaphore

High priority task unlocks semaphore

High priority task tries to lock semaphore

High priority task locks semaphore


High priority task unlocks semaphore

High priority task tries to lock semaphore

High priority task locks semaphore

High priority task unlocks semaphore

..........................................High priority task exited

Low priority task locks semaphore

Low priority task unlocks semaphore

Low priority task locks semaphore

Low priority task unlocks semaphore

..........................................Low priority task exited

------------------------------------------------------------------------------------

Since prioHigh is blocked and prioLow is preempted, prioMedium runs to completion (a very long time). By the time prioHigh runs, it is likely to have missed its timing requirements.

Here is what the code looks like:

------------------------------------------------------------------------------------

/* includes */

#include "vxWorks.h"

#include "taskLib.h"

#include "semLib.h"

/* function prototypes */

void prioHigh(void);

void prioMedium(void);

void prioLow(void);

/* globals */

#define ITER 3

#define HIGH 102 /* high priority */

#define MEDIUM 103 /* medium priority */

#define LOW 104 /* low priority */

#define LONG_TIME 3000000

SEM_ID semMutex;

void inversion(void) /* function to create the three tasks */

{

int i, low, medium, high;

printf("\n\n....................##RUNNING##.........................\n\n\n");

/* create semaphore */

semMutex = semMCreate(SEM_Q_PRIORITY); /* priority based semaphore */

/* spawn the three tasks */

if((low = taskSpawn("task1",LOW,0x100,20000,(FUNCPTR)prioLow,0,0,0,0,0,0,0,

0,0,0)) == ERROR)


printf("taskSpawn prioHigh failed\n");

if((medium =

taskSpawn("task2",MEDIUM,0x100,20000,(FUNCPTR)prioMedium,0,0,0,0,0,0,0,

0,0,0)) == ERROR)

printf("taskSpawn prioMedium failed\n");

if((high = taskSpawn("task3",HIGH,0x100,20000,(FUNCPTR)prioHigh,0,0,0,0,0,0,0,

0,0,0)) == ERROR)

printf("taskSpawn prioLow failed\n");

}

void prioLow(void)

{

int i,j;

for (i=0; i < ITER; i++)

{

semTake(semMutex,WAIT_FOREVER); /* wait indefinitely for semaphore */

printf("Low priority task locks semaphore\n");

for (j=0; j < LONG_TIME; j++);

printf("Low priority task unlocks semaphore\n");

semGive(semMutex); /* give up semaphore */

}

printf("..........................................Low priority task exited\n");

}

void prioMedium(void)

{

int i;

taskDelay(20);/* allow time for task with the lowest priority to seize semaphore */

for (i=0; i < LONG_TIME*10; i++)

{

if ((i % LONG_TIME) == 0)

printf("Medium task running\n");

}

printf("------------------------------------------Medium priority task exited\n");

}

void prioHigh(void)

{

int i,j;

taskDelay(30);/* allow time for task with the lowest priority to seize semaphore */

for (i=0; i < ITER; i++)

{

printf("High priority task trys to lock semaphore\n");

semTake(semMutex,WAIT_FOREVER); /* wait indefinitely for semaphore */

printf("High priority task locks semaphore\n");

for (j=0; j < LONG_TIME; j++);


printf("High priority task unlocks semaphore\n");

semGive(semMutex); /* give up semaphore */

}

printf("..........................................High priority task exited\n");

}

------------------------------------------------------------------------------------
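To see the priority-inheritance behavior described in the Description above, only one line of the listing would need to change (a sketch, not part of the original program): the semMCreate() call in inversion() would become

semMutex = semMCreate(SEM_Q_PRIORITY | SEM_INVERSION_SAFE); /* mutex with priority inheritance */

With this option, prioLow inherits prioHigh's priority for as long as it holds the semaphore, so prioMedium can no longer preempt it and keep it from releasing the resource.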

Signals

Introduction

A signal is a software notification to a task or a process of an event. A signal is

generated when the event that causes the signal occurs. A signal is delivered when a

task or a process takes action based on that signal. The lifetime of a signal is the interval

between its generation and its delivery. A signal that has been generated but not yet

delivered is pending. There may be considerable time between signal generation and

signal delivery.

VxWorks supports a software signal facility. Signals asynchronously alter the control flow of a task. Any task can raise a signal for a particular task. The task being signaled immediately suspends its current thread of execution, and the task's specified signal handler routine is executed the next time the task is scheduled to run. The signal handler gets invoked even if the task is blocked on some action or event. The signal handler is a user-supplied routine that is bound to a specific signal and performs whatever actions are necessary whenever the signal is received. Signals are most appropriate for error and exception handling, rather than for general intertask communication.

The Wind kernel supports both the BSD 4.3 and POSIX signal interfaces. The POSIX interface is standardized and more functional than the BSD 4.3 interface. Your application should use only one interface and not mix the two.

Objectives

The following are the primary objectives of this experiment:

To demonstrate VxWorks' implementation of POSIX signal routines.

Description

The signal facility provides a set of 31 distinct signals (see the VxWorks reference manual). A signal can be raised by calling kill() and is analogous to an interrupt or hardware exception. A signal handler is bound to a particular signal with sigaction(). While the signal handler is running, other signals are blocked from delivery. Tasks can block the occurrence of certain signals with sigprocmask(); if a signal is blocked when it is raised, its handler routine will be called when the signal becomes unblocked. Signal handlers are typically defined as:

void sigHandlerFunction(int signalNumber)

{

.............. /* signal handler code */

..............

..............


}

where signalNumber is the signal number for which sigHandlerFunction is to be invoked.

The sigaction function installs signal handlers for a task:

int sigaction(int signo, const struct sigaction *pAct, struct sigaction *pOact)

A data structure of type struct sigaction holds the handler information. The sigaction() call has three parameters: the signal number to be caught, a pointer to the new handler structure (of type struct sigaction), and a pointer to the old structure (also of type struct sigaction). If the program does not need the value of the old handler (pOact), pass a NULL pointer for pOact. To direct a specific signal to a specific task, the kill(int, int) call is made, where the first argument is the task ID to send the signal to, and the second argument is the signal to send to that task.
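In outline (a sketch that condenses what the example below does; targetTaskId stands for whatever task ID kill() is aimed at, and catchSIGINT is the handler from the example):

struct sigaction act;

act.sa_handler = catchSIGINT;     /* handler routine to invoke */
sigemptyset(&act.sa_mask);        /* block no additional signals in the handler */
act.sa_flags = 0;                 /* no special options */

if (sigaction(SIGINT, &act, NULL) == -1)  /* NULL: old action is not needed */
    printf("could not install handler\n");

kill(targetTaskId, SIGINT);       /* raise SIGINT for the task targetTaskId */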

1. Example:

In the example below, the "sigGenerator" function generates the SIGINT (Ctrl-C) signal and directs it to the "sigCatcher" task. When "sigCatcher" receives the signal, it suspends its normal execution and branches to a signal handler that it has installed (the catchSIGINT function).

------------------------------------------------------------------------------------

/* includes */

#include "vxWorks.h"

#include "sigLib.h"

#include "taskLib.h"

#include "stdio.h"

/* function prototypes */

void catchSIGINT(int);

void sigCatcher(void);

/* globals */

#define NO_OPTIONS 0

#define ITER1 100

#define LONG_TIME 1000000

#define HIGHPRIORITY 101

#define MIDPRIORITY 102

#define LOWPRIORITY 103

int ownId;

void sigGenerator(void) /* task to generate the SIGINT signal */

{

int i, j, taskId;

STATUS taskAlive;

if((taskId =

taskSpawn("signal",MIDPRIORITY,0x100,20000,(FUNCPTR)sigCatcher,0,0,0,0,0,0,0,

0,0,0)) == ERROR)


printf("taskSpawn sigCatcher failed\n");

ownId = taskIdSelf(); /* get sigGenerator's task id */

taskDelay(30); /* allow time to get sigCatcher to run */

for (i=0; i < ITER1; i++)

{

if ((taskAlive = taskIdVerify(taskId)) == OK)

{

printf("+++++++++++++++++++++++++++++++SIGINT signal

generated\n");

kill(taskId, SIGINT); /* generate signal */

/* lower sigGenerator priority to allow sigCatcher to run */

taskPrioritySet(ownId,LOWPRIORITY);

}

else

/* sigCatcher is dead */

break;

}

printf("\n***************sigGenerator Exited***************\n");

}

void sigCatcher(void) /* task to handle the SIGINT signal */

{

struct sigaction newAction;

int i, j;

newAction.sa_handler = catchSIGINT; /* set the new handler */

sigemptyset(&newAction.sa_mask); /* no other signals blocked */

newAction.sa_flags = NO_OPTIONS; /* no special options */

if(sigaction(SIGINT, &newAction, NULL) == -1)

printf("Could not install signal handler\n");

for (i=0; i < ITER1; i++)

{

for (j=0; j < LONG_TIME; j++);

printf("Normal processing in sigCatcher\n");

}

printf("\n+++++++++++++++sigCatcher Exited+++++++++++++++\n");

}

void catchSIGINT(int signal) /* signal handler code */

{

printf("-------------------------------SIGINT signal caught\n");

/* increase sigGenerator priority to allow sigGenerator to run */

taskPrioritySet(ownId,HIGHPRIORITY);

}


Interrupt Service Routines

Introduction

Interrupt handling is important in real-time operating systems. The system becomes aware of external events via the interrupt mechanism, and the responsiveness of a real-time system depends on the speed of the system's response to interrupts and the speed of processing in the interrupt handlers. To achieve the best response possible, the application writer must be aware of how to take advantage of the utilities provided by VxWorks.

Objectives

The following are the primary objectives of this experiment:

To demonstrate VxWorks' implementation of interrupt service routines.

Description

The user may write an interrupt service routine (ISR) and attach it to a particular interrupt using the intConnect() routine provided by VxWorks. When an interrupt occurs, the ISR is executed at the first non-critical section of code after the interrupt, which Wind River guarantees to happen within microseconds. This time span is generally known as interrupt latency. Because many interrupts may occur within a short time of each other, and a higher-priority interrupt will block lower-priority interrupts, it is necessary to keep ISR processing to a minimum. This is the responsibility of the application writer.

The header files that relate to VxWorks interrupt management are intLib.h, the interrupt library header file, and arch/mc68k/ivMc68k.h. ISR code does not run in the normal task context: it has no task control block, and all ISRs share a single stack. Because of these differences there are restrictions on the types of routines that can be used in an ISR.

ISRs should not invoke functions that may block the caller, for example semTake(). malloc() and free() cannot be used because they call functions that may block, and thus all creation and deletion routines are forbidden since they use malloc() and free(). An ISR must not perform I/O through the VxWorks I/O system; a call to a device driver may block the system if the caller needs to wait for the device. However, the VxWorks pipe driver has been designed to permit writes by interrupt service code.

The best way to print messages from an ISR is to use logMsg() or other functions provided by the logLib library. ISRs should not use floating-point instructions, since the floating-point registers are not saved on entry to the ISR. If floating-point instructions must be used, the registers have to be saved and restored using the functions in fppALib. In any case, floating-point operations are time intensive and should be avoided in ISRs.

Ideally, an ISR contains little more than a semGive() call; that is, the function of an ISR is to trigger a task that performs whatever processing is necessary. The best mechanism for this cooperation between VxWorks ISRs and tasks is semaphores, as the sketch below illustrates.
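A minimal sketch of this pattern (the names isrSyncSem, deviceIsr, and deviceWork are illustrative and not part of the example that follows; the ISR would be attached with intConnect() and the task spawned with taskSpawn()):

/* illustrative sketch: defer interrupt work to a task through a binary semaphore */

#include "vxWorks.h"
#include "semLib.h"
#include "logLib.h"

SEM_ID isrSyncSem; /* created elsewhere: isrSyncSem = semBCreate(SEM_Q_FIFO, SEM_EMPTY); */

void deviceIsr(int arg)               /* attached to the interrupt with intConnect() */
{
    semGive(isrSyncSem);              /* only signal the worker task; no blocking calls in an ISR */
}

void deviceWork(void)                 /* ordinary task spawned with taskSpawn() */
{
    for (;;)
    {
        semTake(isrSyncSem, WAIT_FOREVER);   /* sleep until the ISR signals */
        logMsg("deferred interrupt processing\n",0,0,0,0,0,0);
        /* perform the real device handling here, at task level */
    }
}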


1. Example:

In the example below, the interruptGenerator task generates a hardware bus interrupt with sysBusIntGen(INTERRUPT_NUM, INTERRUPT_LEVEL), which is caught by interruptCatcher.

The syntax for sysBusIntGen() is:

SYNOPSIS

STATUS sysBusIntGen

(

int intLevel, /* bus interrupt level to generate */

int vector /* interrupt vector to generate (0-255) */

)

RETURNS OK, or ERROR if intLevel is out of range or the board cannot generate a bus

interrupt.

interruptCatcher is able to handle this hardware interrupt by installing an interrupt handler, interruptHandler. interruptCatcher "attaches" to the hardware interrupt using intConnect(INUM_TO_IVEC(INTERRUPT_LEVEL), (VOIDFUNCPTR)interruptHandler, i). INUM_TO_IVEC(INTERRUPT_LEVEL) is a macro that converts a hardware interrupt number to an interrupt vector.

The syntax for intConnect() is:

SYNOPSIS

STATUS intConnect

(

VOIDFUNCPTR * vector, /* interrupt vector to attach to */

VOIDFUNCPTR routine, /* routine to be called */

int parameter /* parameter to be passed to routine */

)

DESCRIPTION

This routine connects a specified C routine to a specified interrupt vector. The

address of routine is stored at vector so that routine is called with parameter when the

interrupt occurs. The routine is invoked in supervisor mode at interrupt level. A proper

C environment is established, the necessary registers saved, and the stack set up.

The routine can be any normal C code, except that it must not invoke certain

operating system functions that may block or perform I/O operations. This routine simply

calls intHandlerCreate( ) and intVecSet( ). The address of the handler returned by

intHandlerCreate( ) is what actually goes in the interrupt vector.

RETURNS

OK, or ERROR if the interrupt handler cannot be built.

The run time scenario consists of interruptCatcher running and simulating

normal processing until interruptGenerator generates a hardware interrupt. Upon the

generation of the interrupt, interruptCatcher suspends its normal processing and


branches to interruptHandler. Once the interrupt handling code has been executed,

control is passed back to interruptCatcher. This activity is repeated multiple times.

------------------------------------------------------------------------------------

/* includes */

#include "vxWorks.h"

#include "intLib.h"

#include "taskLib.h"

#include "arch/mc68k/ivMc68k.h"

#include "logLib.h"

/* function prototypes */

void interruptHandler(int);

void interruptCatcher(void);

/* globals */

#define INTERRUPT_NUM 2 /* bus interrupt level passed to sysBusIntGen() */

#define INTERRUPT_LEVEL 65 /* interrupt number/vector used by sysBusIntGen() and INUM_TO_IVEC() */

#define ITER1 40

#define LONG_TIME 1000000

#define PRIORITY 100

#define ONE_SECOND 100

void interruptGenerator(void) /* task to generate the hardware interrupt */

{

int i, j, taskId, priority;

STATUS taskAlive;

if((taskId =

taskSpawn("interruptCatcher",PRIORITY,0x100,20000,(FUNCPTR)interruptCatcher,0,0

,0,0,0,0,0,

0,0,0)) == ERROR)

logMsg("taskSpawn interruptCatcher failed\n",0,0,0,0,0,0);

for (i=0; i < ITER1; i++)

{

taskDelay(ONE_SECOND);/* suspend interruptGenerator for one second */

/* check to see if interruptCatcher task is alive! */

if ((taskAlive = taskIdVerify(taskId)) == OK)

{

logMsg("++++++++++++++++++++++++++Interrupt

generated\n",0,0,0,0,0,0);

/* generate hardware interrupt 2 */

if((sysBusIntGen(INTERRUPT_NUM,INTERRUPT_LEVEL)) ==

ERROR)

logMsg("Interrupt not generated\n",0,0,0,0,0,0);

}


else /* interruptCatcher is dead */

break;

}

logMsg("\n***************interruptGenerator

Exited***************\n\n\n\n",0,0,0,0,0,0);

}

void interruptCatcher(void) /* task to handle the interrupt */

{

int i, j;

STATUS connected;

/* connect the interrupt vector, INTERRUPT_LEVEL, to a specific interrupt

handler routine ,interruptHandler, and pass an argument, i */

if((connected =

intConnect(INUM_TO_IVEC(INTERRUPT_LEVEL),(VOIDFUNCPTR)interruptHandler,i)) == ERROR)

logMsg("intConnect failed\n",0,0,0,0,0,0);

for (i=0; i < ITER1; i++)

{

for (j=0; j < LONG_TIME; j++);

logMsg("Normal processing in interruptCatcher\n",0,0,0,0,0,0);

}

logMsg("\n+++++++++++++++interruptCatcher

Exited+++++++++++++++\n",0,0,0,0,0,0);

}

void interruptHandler(int arg) /* interrupt handler code */

{

int i;

logMsg("-------------------------------interrupt caught\n",0,0,0,0,0,0);

for (i=0; i < 5; i++)

logMsg("interrupt processing\n",0,0,0,0,0,0);

}

-------------------------------------------------------------------------------------