
Computer Standards & Interfaces 22 (2000) 337–352
www.elsevier.com/locate/csi

Testable embedded system firmware development: the out–in methodology

Narayanan Subramanian a,*, Lawrence Chung b

a Anritsu Company, 1155 E. Collins Blvd., Suite 100, Richardson, TX 75081, USA
b Department of Computer Science, University of Texas at Dallas, Richardson, TX 75081, USA

Received 16 May 2000; received in revised form 16 June 2000; accepted 22 July 2000

Abstract

Reliability is of paramount importance to just about any embedded system firmware. This paper presents the out–in methodology (OIM), a new reliability-driven approach to developing such a system, which is intended to detect static and, more importantly, dynamic errors much faster than the usual firmware development methods. In this approach, the core functionality is developed together with interface software that is used specifically for testing the core functionality. This paper describes the use of this approach in a real life situation and discusses the benefits, potential pitfalls, and other possible application areas. © 2000 Elsevier Science B.V. All rights reserved.

Keywords: Embedded system; Testing; Interface; Development methodology; GPIB

1. Introduction

Embedded systems are widely prevalent in the modern world. They include household items such as microwaves, dishwashers and telephones; portable systems like cellphones, handheld personal computers [1] and robots; and, covering the spectrum at the higher end of complexity, spaceships and space shuttles. All of these systems have dedicated hardware that includes at least one CPU running dedicated software. This paper is concerned with the development of error-free software for such embedded systems (in this paper, "embedded system" and "system" are used synonymously).

* Corresponding author. Tel.: +1-972-644-1777; fax: +1-972-644-3416. E-mail address: [email protected] (N. Subramanian).

Software for an embedded system has two basic components: control and I/O [2]. The control component is the heart of the system and does the processing and other tasks that distinguish the particular system in which it resides from others. The I/O component handles the processing associated with inputs and outputs — it includes inputs from the user, the environment and any other source of relevance to the system, and outputs that the system sends to associated peripherals. In a microwave oven, for example, the keypad is the input source, the LCD display is the output, and the control reacts to the inputs, displays output, turns the oven off and on, and performs the functions of the microwave oven.

An important characteristic required of such embedded systems is reliability. When the user gives a command to the embedded system to do some activity, the system should either do what it is told to do or else display an error message in case the command was wrong in the current context. However, if the software has not been designed with care, there could be some sequence of user–system interactions that leads to a system crash. This usually manifests itself by making the system completely unresponsive to any further user interaction. The only resort for the user is then to reboot the system. Such an occurrence, as can be expected, results in poor user confidence in the system [3].

Such user-interaction problems can be overcome by careful scenario analysis upfront, followed by testing after the implementation of the software. While for simple embedded systems this approach may be sufficient to produce an error-free working system, for complex systems that include multiple tasks and multiple processors it is far from sufficient to ensure error-free software. This is because such systems have complicated hardware–software interactions that are difficult to visualize before the entire system (hardware and software) is built. Hence, it is difficult for the development teams of such systems to plan for such problems. Detection of such problems requires non-invasive testing of the system while the system is running. Automatic testing (or machine-based testing) is required, since tests may have to be repeated many times (thousands to hundreds of thousands of times) to wring out some of these hardware–software interaction problems.

Testing techniques currently in use include in-circuit emulators, debuggers and the serial port. Unfortunately, all of these have their drawbacks. In-circuit emulators do not use the actual hardware of the final system — in fact, in one way, emulation is a totally invasive method of testing. Furthermore, the outputs of in-circuit emulators have to be manually analyzed. Debuggers use special serial ports on embedded processors to give a picture of the current state of affairs in a running embedded system; however, they inevitably interrupt the running system to obtain the information. Also, their information has to be viewed and analyzed by a human operator. The serial port is frequently used to get data out of a running embedded system. However, these outputs can only be analyzed by a human operator. Also, an output is received only if there is a print statement at suitable points in the software. It would be convenient if the debug outputs from the embedded system could be analyzed by a computer, and if only the debug information related to the relevant variables in the software were received. Both of these are easily realized in firmware developed using the out–in methodology (OIM) that is the subject of this paper.

OIM [12] aims to ensure the following:

1. that the software is developed in a way that makes non-invasive testing at run-time feasible;
2. that there is an automatic method to test the software at run-time.

The subsequent sections discuss this new methodology in detail and clarify the above points with numerous examples.

The popular IEEE488 bus (also known as the general purpose interface bus (GPIB)) was used for implementing OIM in this paper. This is a parallel bus and has a dedicated service request line that can be used to indicate completion of tasks. The reader is not required to know the details of this bus; any data specific to this bus is explained in the paper wherever necessary. Also, since the implementation was done for a test and measuring instrument, which is itself an embedded system, the word "instrument" is used synonymously with "embedded system". One more point: in the discussion that follows, there will be statements like "upon completion of measurement . . .". This means that the instrument completed the measurement it was asked to do and raised the service request line of the IEEE488 bus. This lets the PC know when the measurement has completed so that it can take further action.

Section 2 discusses the current firmware development process and points out its drawbacks. Section 3 discusses the OIM in detail. Section 4 discusses an implementation, while Section 5 discusses the use of the implementation of this new methodology. Section 6 draws conclusions from this work.

2. The current firmware development process

Firmware is usually developed using either the classical waterfall or the incremental model of development (for a discussion of current firmware development methodology, please see Refs. [4,5]). Once the firmware has been developed, testing is done during the verification stage of the development cycle. Usually, black-box testing [6] is done at this stage. Regression tests are also performed. These tests may be manual or automatic. Since automatic testing is faster, many of these tests are automated. However, most of the tests cannot test the firmware in situ, i.e. as the firmware is running (see Ref. [7] for a description of automated testing). For example, if boundary value testing is to be done, an automated test will test the software for the extreme values, but it does not test how the software will behave if these boundary values are given while the system is running. Of course, the system can be manually tested by actually entering the boundary values (for a system that lets users set values of data) and checking to see if the system behaves as expected. However, this is very time consuming. Another alternative is to have the system test itself upon start-up or upon pressing a special key, but this will require a pre-defined sequence of tests and will be extremely inflexible. If a test fails, there is no easy way of identifying why it failed.

In fact, the following drawbacks in the current firmware development methodology can be observed.

(1) There is no single "window" that can access all parts of the system.

(2) There is no facility for obtaining data from the firmware at run-time — there is no way to confirm correctness at run-time automatically ("printf"s inserted in the code do not allow for automated interpretation).

(3) Tests are automated but cannot test the firmware while it is running (automated testing is done on passive code, not on the run-time code; for an embedded system, its run-time behavior is what is observed by the customer, and hence there is an urgent need for automatic run-time tests).

The OIM presented in Section 3 aims to overcome these problems.

3. The OIM

The first requirement for testing a system in situ is to have some means for extracting data out of the running system. One way of doing this for a system with a display is to have print statements inserted at strategic locations in the code that send the data to the display. The disadvantage of this method is that the observation of the outputs can be done only manually. Another way is to have some means of reading out the data, such as through a network interface. This is the technique that the OIM uses. The OIM approaches the problem by intentionally developing the network interface to the firmware, even though the system may not need such an interface. In fact, the first step in the firmware development process using OIM is to develop a computer interface. The only reason that this interface is developed is for testing; all sorts of tests can be performed with such an interface, as will be explained later. The OIM system configuration is given in Fig. 1.

Fig. 1. OIM system configuration.

As can be seen from Fig. 1, the embedded system uses an external PC for testing. The PC is connected to the OIM system through a hardware interface. The test engineer runs the tests from the PC. The test suite is called the monitor. The embedded system receives the commands (called the interface language commands) from the PC over the PC–embedded system cable and executes them. Any responses that the OIM system has to send to the PC (because of the commands received) are also sent over the PC–embedded system cable. The OIM requirements add the external PC interface and the monitor on the PC. The difference from automated tests is that the PC now takes data out of the running system (of the executing code), while the normal usage of automated testing refers to automatically testing the passive, static code.

The difference between the traditional firmware development process and the OIM is depicted in Fig. 2. As can be seen in the figure, the computer interface (shown as computer I/O) is an optional firmware item for the traditional process; if the requirements call for such an interface it is included, else it is excluded. In OIM, irrespective of the requirements, a computer I/O is the first firmware item that is developed. All other software is developed later. What is the advantage of this approach? The computer interface is the window to all other parts of the software. In fact, by proper design (and this is not difficult), it can be ensured that no part of the software is inaccessible from the interface. How is this done? This is accomplished by having an interface language command for each item of the software that has to be accessed. Please note that this command set is developed only for testing purposes. It is quite possible that the computer interface was a legal requirement and the instrument was required to support a standard set of commands. However, the commands that the OIM requires the instrument to support are in addition to the standard ones. These additional commands are used only to exercise the system. The OIM development process is described in detail in Section 3.1.

Testing methods similar to the one described in this paper have been described elsewhere [10,11]. However, these methods do not affect the firmware development process of the embedded system they are connected to in any way. In the authors' view, this is a huge loss of opportunity to test the firmware as well. The OIM methodology incorporates the feasibility of external PC testing in the firmware development process and thus exploits the advantages that PC-based testing offers.

Fig. 2. Out–in vs. traditional methodology comparison.

3.1. OIM firmware development process

The firmware development process for OIM is shown in Fig. 3. In order to explain this process, the following example, which is a part of the actual requirements, is used:

The instrument shall perform test A, in which it shall measure three parameters V1, V2, V3 (which are physical parameters) and compute the value of parameter V4 by the formula F:

V4 = F(V1, V2, V3).

Fig. 3. OIM firmware development process.

As can be seen from Fig. 3, the OIM firmware development process differs from the usual software development process after the requirements stage. After the initial requirements stage, the subsequent phases of development are the OIM requirements phase, the OIM design phase, the OIM implementation phase and, finally, the OIM testing phase. The OIM firmware development process is explained using the example in Fig. 4. As can be seen from this figure, the first step in the OIM methodology is the requirements step. Once the requirements are collected, the next step is to enhance them by adding the OIM-specific requirements. There are three parts in an OIM system — the computer interface, the instrument and the monitor. The requirements stage chooses the computer interface to be used for OIM and defines the requirements for the instrument and the monitor. The OIM design phase designs the interface language commands that will be used over the computer interface, the instrument firmware and the monitor tests. The OIM implementation phase implements the three parts. The OIM testing phase tests the monitor first using standard PC-based tests, while the instrument firmware is tested by the monitor using the interface language commands; this also tests the interface language commands themselves. These phases are explained in detail later. The state transition diagrams for the instrument and the PC are given in Fig. 4a, and the sequence diagram is given in Fig. 4b.

3.1.1. State transition diagrams
As shown in Fig. 4a, the instrument is initially in the Idle state. The moment it receives the START A command from the monitor, the instrument goes to the Perform Test A state. Once the instrument completes the test, it indicates completion of the test (in GPIB, it does so by raising the service request line of the IEEE488 interface) and returns to the Idle state. Whenever the instrument receives any one of GET? V1, GET? V2, GET? V3 or GET? V4 from the monitor, the instrument goes to the Read Results state and sends the value of V1, V2, V3 or V4, respectively, to the monitor. After sending the response to the monitor, the instrument returns to the Idle state and awaits further interaction from the monitor.

The monitor is initially in the Idle state. As soon as the test engineer starts the test for Test A, the monitor sends the command START A to the instrument and waits for the instrument to complete Test A. The monitor then sends out the commands GET? V1, GET? V2 and GET? V3 to the instrument and reads the values of V1, V2 and V3. The monitor then computes its own value of V4, say V4', using the formula F. The monitor then reads the V4 calculated by the instrument by sending the command GET? V4 to the instrument and compares V4 with V4'. If they are different, the monitor raises an alarm and informs the test engineer about the error. Else, the monitor ends the test, possibly informing the test engineer of the successful completion of the test.

3.1.2. Sequence diagram
The sequence diagram is shown in Fig. 4b. The test engineer first starts Test A. This causes the monitor in the PC to send the command START A to the instrument. Upon receiving this command, the instrument performs Test A, the completion of which is then indicated to the monitor. The monitor then sends the commands GET? V1, GET? V2 and GET? V3 one after the other and for each command receives the values of V1, V2 and V3, respectively, from the instrument. The monitor then computes its own value of V4, say V4', using formula F. The monitor then sends the command GET? V4 to the instrument to get the instrument's value of V4. The monitor then compares V4 with V4'. If they are different, an error alarm is communicated to the test engineer; else, the test ends.
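To make the monitor's side of this interaction concrete, here is a minimal C sketch of the Test A module. The helper functions gpib_send(), gpib_wait_srq() and gpib_query_double() stand in for whatever GPIB driver API is actually used, and the formula F and the tolerance EPS are placeholders; none of these names appear in the paper.

#include <math.h>
#include <stdio.h>

/* Hypothetical stand-ins for a real GPIB driver API. */
extern void   gpib_send(const char *cmd);          /* send command, no reply    */
extern void   gpib_wait_srq(void);                 /* block until SRQ is raised */
extern double gpib_query_double(const char *cmd);  /* send query, parse reply   */

#define EPS 1e-6   /* assumed tolerance for comparing V4 and V4' */

/* Placeholder for the formula F of the requirements (its real form
   is not given in the paper). */
static double F(double v1, double v2, double v3)
{
    return v1 + v2 + v3;
}

/* Monitor-side test module for Test A (cf. Fig. 4b). */
int test_a(void)
{
    gpib_send("START A");                       /* instrument enters Perform Test A */
    gpib_wait_srq();                            /* SRQ signals completion of Test A */

    double v1  = gpib_query_double("GET? V1");
    double v2  = gpib_query_double("GET? V2");
    double v3  = gpib_query_double("GET? V3");
    double v4  = gpib_query_double("GET? V4");  /* instrument's result      */
    double v4p = F(v1, v2, v3);                 /* monitor's own value, V4' */

    if (fabs(v4 - v4p) > EPS) {
        fprintf(stderr, "Test A failed: V4=%g, V4'=%g\n", v4, v4p);
        return 1;                               /* raise the alarm */
    }
    return 0;
}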

3.1.3. OIM requirements
The requirements for OIM are the following:

1. A single "window" into all parts of the firmware is required.
2. This "see-all port" should be accessible by an external PC connected to the OIM system by a standard interface (USB, FireWire, IEEE488, Ethernet) — see Fig. 1.
3. The design is oriented toward machine–machine testing.

In order that the above goals are accomplished, the following must be added to the requirements:

1. An interface driver.
2. The interface language commands.
3. A monitor application on a PC.

3.1.3.1. Interface driver. The interface is a hardware port to the system being developed; thus, this interface requirement should be part of the system requirements as well. This interface will be the "see-all port" mentioned above. However, while the interface is a hardware component, its driver is a software component. The driver lets the external world talk to the application running on the system and also lets the application on the system talk to the outside world. This driver should be part of the software requirements. It is quite possible that an interface was already a part of the system requirements; in that case, the OIM requirement for the interface driver is not considered.

3.1.3.2. Interface language commands. The language is the tool with which the external PC communicates with the system. Since every parameter in the system should be visible to the outside world, there should be a command that the external PC can send to the system to set or get each parameter. Thus, this language requirement should be part of the requirements. Along with the language comes the requirement for its parser, and this parser should also be part of the requirements.

Fig. 4. OIM firmware development process example. (a) STDs for the PC and the instrument. (b) Sequence diagram for Test A.


3.1.3.3. Monitor. The monitor is the application on a PC that will be used for testing the embedded system. The monitor will send commands to the instrument, read data back from the instrument, and interpret the data received.

3.1.4. OIM design
The design phase is the same as that of the usual software development process, with the exception that the requirements have been changed for OIM. Thus, the design phase should ensure that the "window" into all parts of the system is created, should ensure that there is a command in the interface language to get or set each parameter in the system, and should decide on the parser algorithm for the interface language commands.

3.1.4.1. Design interface language commands. Different categories of interface language commands will be required; they are listed below.

1. Commands to set the context of the instrument. Examples are:
   GET READY FOR POWER MEASUREMENT
   SET UP A CALL WITH PHONE
   CHANGE TO DIFFERENT PHONE SYSTEM
2. Commands to set values of parameters. Examples are:
   SET OUTPUT LEVEL
   SET FREQUENCY
   CHANGE DELAY TIME
3. Commands to change the state of parameters. Examples are:
   OUTPUT POWER ON
   SPECIAL MODE OFF
4. Commands to start tests on the instrument. Examples are:
   START POWER MEASUREMENT
   EXECUTE FUNCTION F
   START ANALYSIS MEASUREMENT
5. Commands to retrieve values/states of parameters and results. Examples are:
   GET OUTPUT LEVEL
   GET OUTPUT POWER STATE
   GET MEASURED POWER VALUE
6. Commands to set and get miscellaneous parameters. Examples are:
   SET TIME
   SET DATE
   GET TIME
   GET DATE

The interface language commands for any embedded system can be divided into two classes — application independent and application dependent. Application independent commands are commands applicable to almost any embedded system, while application dependent commands are commands specific to a particular embedded system. The application independent commands can be designed from the state diagram of the generic embedded system given in Fig. 5.

As Fig. 5 shows, the embedded system is initially in the Off state. Upon pressing the power-on switch, the event Power On is generated, which causes the transition of the system to the On or Idle state. Upon receiving the event Do F (which may be due to a key press or due to a command received over an interface port), the system goes from the Idle state to the Execute Function F state. In the Execute Function F state the system executes the function F and upon completion returns to the Idle state. When the system receives the Set event (this event will have at least two parameters — the parameter to set and the new value of the parameter), the system goes to the Set Hardware state, where the parameter is set to its new value. After setting, the system returns to the Idle state. Upon receiving the Get event (this will have at least one parameter), the system goes to the Read Values state, wherein it reads the value of the parameter and returns to the Idle state.

Fig. 5. State transition diagram for a generic embedded system.

Fig. 6 shows the table of events in the generic embedded system and how commands are derived from the table.

Fig. 6. Event table used to generate interface language commands.
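Since the event table of Fig. 6 is not reproduced here, the following C sketch illustrates one plausible way the three generic events could map onto application-independent commands and their handlers. The command syntax, the handler names and the dispatch logic are all assumptions for illustration, not the paper's actual design.

#include <stdio.h>
#include <string.h>

/* One handler per generic event of Fig. 5. */
static void execute_function_f(void)              { /* Execute Function F state */ }
static void set_hardware(const char *p, double v) { (void)p; (void)v; /* Set Hardware state */ }
static double read_value(const char *p)           { (void)p; return 0.0; /* Read Values state */ }

/* Minimal dispatcher: each application-independent command corresponds
   to one row of the event table. */
void dispatch(const char *cmd)
{
    char   name[32];
    double value;

    if (strcmp(cmd, "DO F") == 0) {
        execute_function_f();                       /* Do F event */
    } else if (sscanf(cmd, "SET %31[^,],%lf", name, &value) == 2) {
        set_hardware(name, value);                  /* Set event  */
    } else if (sscanf(cmd, "GET? %31s", name) == 1) {
        printf("%g\n", read_value(name));           /* Get event  */
    } else {
        fprintf(stderr, "unknown command: %s\n", cmd);
    }
}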

3.1.4.2. Design instrument. In designing the instrument of the OIM system, the advantage of reuse can be obtained in many cases. This is because the original system already has some means of doing the activity — for example, to start a test, there would have been a key to press (this is the primary input). Then, the code that the key press executes can simply be reused by the interface language command that starts the test. This is the reason for the small overhead of an OIM implementation (see Section 3.4). For reading values out of the OIM system, there has to be a memory of some sort to store the values to be read; this is because some values may be transient — produced and consumed within a very short interval of time and overwritten by the subsequent value. All these transient values will have to be stored in the memory so that, after the test that caused these transients has completed, the PC can get the values out of the system for analysis. The parser for interpreting the interface language commands should be designed, and the driver to handle the inputs from and outputs to the interface should also be designed. Fig. 7 shows the design of the instrument for the OIM system.

Fig. 7. Design of the instrument for the OIM system.
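The transient-value store mentioned above is not elaborated in the paper; the following C sketch shows one plausible shape for it. The buffer capacity and all function names are assumptions.

#define MAX_TRANSIENTS 1024   /* assumed capacity */

static double transient_log[MAX_TRANSIENTS];
static int    transient_count;

/* Called from the measurement code each time a transient value is
   produced, so the value survives being overwritten. */
void log_transient(double v)
{
    if (transient_count < MAX_TRANSIENTS)
        transient_log[transient_count++] = v;
}

/* Called by an interface language command (e.g. "GET? TRANS n")
   after the test completes, to return the n-th recorded value. */
double get_transient(int n)
{
    return (n >= 0 && n < transient_count) ? transient_log[n] : 0.0;
}

/* Called when a new test starts. */
void reset_transients(void)
{
    transient_count = 0;
}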

3.1.4.3. Design monitor. For the PC, the design phase involves the monitor design — the monitor should be designed for testing each and every function in the instrument. The generic pseudo-code for developing a test module in the monitor is given in Fig. 8. All lines in Fig. 8 marked "if necessary" are optional and are not required for all tests.

Fig. 8. Generic pseudo-code for a test module in the monitor.

At step 8 in Fig. 8, an error is raised to inform the test engineer that the test did not complete successfully. If, for testing function f, steps 1–8 are done in a loop many times, then upon error the test will break out of the loop and will not continue.
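Fig. 8 itself is not reproduced in this transcript, but its generic shape, as described above, can be sketched in C as follows. Every identifier is illustrative, and the equality comparison would in practice use whatever tolerance the test calls for.

extern void   gpib_send(const char *cmd);
extern void   gpib_wait_srq(void);
extern double gpib_query_double(const char *cmd);
extern void   setup_context_if_necessary(void);   /* optional steps of Fig. 8  */
extern double expected_result(void);              /* monitor's own computation */
extern void   report_error(int iteration, double got, double expected);

/* Generic monitor test module for some instrument function f. */
int test_function_f(int iterations)
{
    for (int i = 0; i < iterations; i++) {
        setup_context_if_necessary();
        gpib_send("START F");        /* start function f on the instrument */
        gpib_wait_srq();             /* wait for it to complete            */

        double got      = gpib_query_double("GET? RESULT");
        double expected = expected_result();

        if (got != expected) {
            report_error(i, got, expected);   /* step 8: inform the engineer */
            return 1;                         /* break out of the loop       */
        }
    }
    return 0;
}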

3.1.5. OIM implementation
The implementation phase implements the design for both the embedded system and the monitor.


3.1.6. OIM testing
The testing phase of the OIM process is totally different from the standard process. In OIM, the monitor tests the instrument. The monitor sends the test commands to the instrument and reads responses from the instrument (responses are sent by the instrument for those test commands that require responses to be sent). It is very easy for the monitor to check whether the responses are as expected. Moreover, the monitor can log the results of all the tests for human observation and regression testing.

3.2. OIM rationale

Why does OIM require computer interface development to be the first step of the firmware development process? This is because the design should consider the computer interface at every step. Since almost all internal functions and variables should be accessible from the interface, there must be some mechanism for the interface to see all the data — either global variables or message passing mechanisms could be used. This can be done effectively only when the design starts with the computer interface. If the computer interface part of the software is added as an afterthought, then the insertion of "access points" to the different parts of the software becomes a challenging activity and, if not correctly implemented, could reduce the effectiveness of the interface in testing. The OIM obviates this risk. One of the popular software development approaches is the incremental development model. If OIM is used, every increment of the software will be tested through the interface. The very first increment may have only the boot-up software besides the interface software. The first increment will have to be exhaustively tested to make sure that the interface software works correctly (with proper stubs). Subsequently, as more features are added to the software, more test commands will also be added, and the newly added features can then be tested thoroughly. As can be seen, the OIM methodology complements the traditional software development process — in the latter, the computer interface (even if required) is rarely a part of the very first software version. The riskiest or most critical part of the software is developed first [8]. However, there is no way to exhaustively test the software so developed, whereas OIM is oriented toward automated testing of software right from the start.

3.3. Interface types

So far, the paper has referred to the computer interface in a generic manner. There are numerous physical interfaces available, like RS232C, IEEE488, USB, FireWire, Ethernet and the like. The interface could be serial or parallel, wired or wireless. Whatever interface the development group is familiar with can be used.

However, different physical interfaces have different data rates. Faster interfaces transfer messages faster; however, the tests that the embedded system performs as a result of the commands received from the monitor will still take the same time, no matter at what speed the command was received over the interface. However, if the interface is such that it can interrupt the application when the application on the embedded system is performing a test of interest, then the test time will be extended. Hence, the monitor tests should be so designed that once the monitor has started a test on the embedded system, it waits for the test to complete before proceeding to send further commands to the embedded system (unless the monitor times out waiting for the test to complete — in which case there could be a possible bug in the implementation of the test in the embedded system, and the monitor sends a command to stop the test or to reset the embedded system).

3.4. OIM overhead

Since OIM insists on a computer interface, one may be tempted to conclude that a high overhead penalty for the embedded system will be incurred. If the interface was not a requirement for the instrument, then additional software is required for the interface driver, command parser and command execution. In the authors' experience, if software reuse is followed, the overhead does not exceed a few tens of kilobytes (in the project that one of the authors was involved in, the interface software was less than 27 kilobytes). Since modern embedded systems often have a 32-bit CPU and megabytes of memory [9], the interface overhead should not be a constraint. If, on the other hand, the interface was part of the system requirements, then the additional overhead will be very low indeed.

Also, as can be observed, the algorithms that the instrument uses to compute values will be duplicated in the monitor as well. The implementations need not be identical in both — only the algorithms must be the same. Thus, while the instrument may have the algorithm implemented in a high level language or even assembly, the monitor may have the same algorithm implemented in the same or some other high level language. However, the duplication of the algorithm for the monitor does not affect the implementation of the algorithm for the embedded system at all.

3.5. OIM: relationship to known principles

The principle of getting data out of an embedded system is not new — as mentioned in Section 1, the serial port is used for getting debug information out of a running system and has also been used for regression testing of software [10], GPIB has been used for controlling an embedded system, and many other interfaces are used for sending data to and receiving data from an embedded system. The uniqueness of OIM lies in using the interface technology for testing the embedded system itself and in orienting the entire firmware development process to ensure that such testing results in maximum benefits for the organization. OIM permits the interface to be used in the normal way by the customer, while it permits the interface to be used both in the normal way and for embedded system testing by the organization developing the system.

4. Implementation

One of the authors was involved in a project that used OIM for the most part. The embedded system developed was a high-end telecom test and measuring instrument. The computer interface used was IEEE488 (also known as GPIB), which is a parallel interface. One of the advantages of GPIB is that it has a dedicated service request line, and this was used extensively for testing purposes. A PC with a GPIB port was connected to the instrument using a GPIB cable. The monitor was developed and run on the PC.

4.1. Implementation of the software

The software that was implemented included the complete stack on the instrument side and the application layer on the PC side, as shown in Fig. 9.

On the PC side, the driver and the IEEE488 bus are third-party software and hardware, respectively. The application layer on the PC is the monitor, which runs different tests on the instrument. The test code on the PC sends out a series of commands to the instrument, and for some of these commands it waits for a response from the instrument. Commands that wait for instrument responses are differentiated from those that do not by a terminating "?": if a command terminates with "?", the monitor expects a response from the instrument; otherwise it does not. The PC is connected to the instrument by an IEEE488 cable.

Fig. 9. Software architecture used for implementing OIM.

On the instrument side, the IEEE488 bus layer is the hardware layer and is handled by an ASIC. The hardware–software interface is handled by the driver layer. This layer sends outputs to the hardware layer and receives inputs (in the form of interrupts) from the hardware layer. The driver layer then sends the commands from the PC (i.e. the inputs to the instrument received from the hardware layer) to the parser layer. The parser layer decodes the commands and sends the decoded message to the application layer. The application layer consists of the IEEE488 control layer and the measurement layer. The IEEE488 control layer takes the correct action based on the commands. If the command requires only an action to be taken by the instrument, like changing a parameter value or doing a measurement, the IEEE488 control layer sends the appropriate messages to the measurement layer to take these actions but does not send an acknowledgment to the PC; but if the command needs a response to be sent back to the PC, the IEEE488 control layer generates the response by reading the necessary values from the measurement layer and sends the response back to the driver directly (the parser layer is not involved in the return path), which then sends the response on to the PC via the hardware layer.

For example, suppose the command "TAKEMEAS" requires the instrument to take a power measurement. When the PC sends this command (this command may be part of a "Test Feature i" module of the monitor), the hardware layer of the instrument receives the command "TAKEMEAS" byte by byte (IEEE488 is a parallel bus) and sends the full command to the driver layer. The driver layer resets the hardware layer for receiving/sending subsequent commands from/to the PC and sends the received command to the parser layer for processing. The parser layer decodes this string to mean do_power_measurement and calls this function in the IEEE488 control layer, which then sends the message to the measurement layer to do the measurement. If the PC wants to read the result of this measurement, the PC then sends another command ("RESULT?", for example), and the parser layer, this time, may decode this string to mean return_measured_value and calls this function in the IEEE488 control layer. This function in the IEEE488 control layer will read the result from the result area of the measurement layer and send the response (for example, 5 DBM) directly to the driver layer. The driver layer sends this response to the hardware layer. In IEEE488, the PC takes an explicit action to read the response, and as soon as the instrument's hardware layer knows that the PC wants the response, it sends the data in the output buffer to the PC. The PC can now store such results in a big array to plot them or do any other manipulation with the data. The point is that data from a running system (the instrument) has been received in the PC, where further (and perhaps faster) processing can be done.
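A rough C sketch of the instrument-side flow just described follows; apart from the two command strings, every function name here is an assumption.

#include <stdio.h>
#include <string.h>

/* Measurement layer (assumed names). */
extern void   do_power_measurement(void);     /* called for "TAKEMEAS" */
extern double read_result_area(void);         /* result of measurement */

/* Driver layer: queues a response for the PC to read (assumed name). */
extern void driver_send_response(const char *text);

/* Parser layer: decode a complete command string from the driver layer
   and call the corresponding IEEE488 control layer function. */
void parse_command(const char *cmd)
{
    if (strcmp(cmd, "TAKEMEAS") == 0) {
        /* Action-only command: no response is sent back to the PC. */
        do_power_measurement();
    } else if (strcmp(cmd, "RESULT?") == 0) {
        /* Query command ('?'): generate a response for the PC. */
        char buf[32];
        snprintf(buf, sizeof buf, "%g DBM", read_result_area());
        driver_send_response(buf);   /* bypasses the parser on the return path */
    }
}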

4.2. Pseudo-codes for the monitor

Many examples have been presented earlier of how testing is done in an OIM-based system. This section presents further examples, with pseudo-code, to clarify this point. The monitor is the application that is resident on the PC and tests the code running inside the embedded system (i.e. the instrument). The monitor is written in the C language (it can be in any language) and has modules to test various parts of the system.

An example is given in pseudo-code below that tests the following scenario.

Upon setting the output level of the instrument, say to 10 dBm (= 10 mW), the instrument sets the registers of its internal hardware to produce this level. Three hardware registers have to be set for any level change. The registers are given integer values that are calculated from the set level by somewhat complicated formulas. Let these formulas be F1 for setting the first register, F2 for setting the second register and F3 for setting the third register; let the three register values be R1, R2 and R3; and let the set output level be L. Then,

R1 = F1(L),    (1)

R2 = F2(L),    (2)

and

R3 = F3(L).    (3)

Since the R1, R2 and R3 values are internal, they are not accessible to the user of the instrument. However, in the OIM method, these values are accessible through the test port. Let GET? R1, GET? R2 and GET? R3 be the commands for accessing these register values, and let "SET L,f" be the command to set the value of L to f. (Since the test port is used for testing, and this testing is only performed by the company that manufactures the instrument, there is still no need for the customer to be aware of the commands to retrieve the R1, R2 and R3 values — he may need the SET L,f command, though, in case he chooses to use this port for setting the value; see footnote 1.)

Then, the pseudo-code for the monitor will be as given in Fig. 10. This pseudo-code is not very different from the generic code given in Fig. 8; here, the separate tests for R1, R2 and R3 have been collapsed into one single test.

Fig. 10. Pseudo-code for the monitor (first example).

Footnote 1. This is because the requirement for an external PC interface port may have been part of the original requirements. In that case, it may be simpler to choose the same interface to be the OIM test port as well. However, in this case, while SET L,f may be the command that the user is informed about (so that he may set the output level remotely), he will not be told about the GET? R1, GET? R2 and GET? R3 commands. These latter commands were developed and implemented only as part of OIM. Likewise, the TIMER? command of the second example may be an exclusive OIM command.
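Fig. 10 is not reproduced in this transcript; a C sketch matching its description might read as follows. The helper names and the register formulas F1, F2 and F3 are placeholders; only the command strings come from the text above.

#include <stdio.h>

extern void   gpib_send(const char *cmd);
extern double gpib_query_double(const char *cmd);
extern void   report_level_error(double level, long r1, long r2, long r3);

/* Placeholders for the register formulas F1, F2 and F3 (not given in the paper). */
extern long F1(double level);
extern long F2(double level);
extern long F3(double level);

/* Monitor test: set the output level, then verify all three registers
   (the separate R1, R2, R3 tests collapsed into one, as in Fig. 10). */
int test_output_level(double level)
{
    char cmd[32];
    snprintf(cmd, sizeof cmd, "SET L,%g", level);   /* e.g. "SET L,10" for 10 dBm */
    gpib_send(cmd);

    long r1 = (long)gpib_query_double("GET? R1");
    long r2 = (long)gpib_query_double("GET? R2");
    long r3 = (long)gpib_query_double("GET? R3");

    /* Compare the instrument's register values with the monitor's own. */
    if (r1 != F1(level) || r2 != F2(level) || r3 != F3(level)) {
        report_level_error(level, r1, r2, r3);
        return 1;   /* raise an alarm for the test engineer */
    }
    return 0;
}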

The above is a simple example. However, consider the case in which the hardware registers had to be set within 100 ms of receiving the command to set the output level, and in which there was a timer that kept track of how fast the output levels were set after receiving the command to do so (this command may have been received from the remote PC or may have been issued by the user from the instrument's front panel). In the OIM methodology, let TIMER? be the command to retrieve the value of the timer after each setting (it is assumed that the timer is reset every time the output level is changed). Then, the pseudo-code in this case will be as given in Fig. 11.

Fig. 11. Pseudo-code for the monitor (second example).

The typical time for sending the command TIMER? and reading its response is about 10 ms (using IEEE488). Hence, the PC can learn the timer values much faster than if, say, the timer values were printed on the instrument's screen (if that were at all possible) and were being manually interpreted. Also, in the second case above, there is no way to test the code passively and know whether the output level will always be set within 100 ms. This is because there could be unexpected hardware–software interactions in the running system that are completely ignored in the passive method of testing. This is where the power of OIM lies.
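A matching sketch for the second example (Fig. 11), again with assumed helper names: the monitor repeatedly changes the level and checks the instrument's TIMER? reading against the 100 ms limit.

#include <stdio.h>

extern void   gpib_send(const char *cmd);
extern double gpib_query_double(const char *cmd);

/* Verify that every level change is applied within 100 ms. The timer is
   assumed to be reset by the instrument on each level change and to be
   read back in milliseconds via the OIM-only TIMER? command. */
int test_level_timing(int iterations)
{
    for (int i = 0; i < iterations; i++) {
        char cmd[32];
        snprintf(cmd, sizeof cmd, "SET L,%d", i % 20);  /* vary the level */
        gpib_send(cmd);

        double elapsed_ms = gpib_query_double("TIMER?");
        if (elapsed_ms > 100.0) {
            fprintf(stderr, "iteration %d: level set in %.1f ms (> 100 ms)\n",
                    i, elapsed_ms);
            return 1;
        }
    }
    return 0;
}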

Thus, as can be seen, the interaction between the monitor application and the application implementing the original requirements is through the commands sent between the PC and the instrument. There is no other way for these two independent applications to interact: all the interaction must be through the chosen test interface.

5. Results of using OIM

The embedded system that was developed was a test and measuring instrument that tested cellphones before their release to the market. The OIM was used for most of the development of the firmware for this embedded instrument. The embedded system took about 10 engineers more than a year to develop. Such test and measuring instruments are used by cell phone manufacturers and service providers. Before OIM was actively used, there had been many recurring complaints about the stability and performance of the system. Stability related to the robustness of the system — the system should not crash in the presence of reasonable user interactions; performance related to the working of the instrument over time — previously, the system used to crash after working some x hours, but such transient bugs were ironed out using OIM. Since OIM has been used, however, the customers have reported that the subsequent versions of the firmware have been better in both stability and performance. This has led the company to feel more confident that the firmware releases are error-free.

Currently, this project is in its final phase. Customers have been satisfied with the features provided so far and with the performance of the instrument. The IEEE488 interface was part of the system requirements; for OIM, the same interface was used. In all, about 1000 interface language commands were developed, out of which about 20% were for OIM purposes and the remainder were for meeting customer requirements.

The monitor tested all aspects of the firmware. This resulted in reduced testing and error detection times. Early detection of an error meant that the development group could fix the bug before the release reached the customer. OIM helped detect numerous bugs that could not be detected in any other way (some tests were run 200,000 times to detect transient bugs — one test run lasted 1 s and the tests were run for 3–4 days). One example of OIM's advantage was clearly evident in a test (which shall be referred to as Test A — this is the same example that was used earlier) that spewed out four values, say V1, V2, V3 and V4, where V4 depended on V1, V2 and V3 by a formula F, i.e.

V4 = F(V1, V2, V3).

As soon as Test A completed, the instrument raised the service request line, whereupon the PC read the values of V1, V2, V3 and V4. The PC used the formula F and computed V4' (the PC equivalent of V4). When Test A was run thousands of times, it was found that there was a significant difference between V4' and V4 in some cases. To further analyze the problem, the intermediate values that the instrument used for formula F (there were three of them, say I1, I2 and I3) were also retrieved by the PC, using special test commands developed only for this purpose. When I1, I2 and I3 were used, the values of V4' and V4 agreed exactly. The problem was then found to be due to truncation of the floating point values of V1 and V2 when read by the PC (the instrument used more precise values). The PC ran test scripts written in C to do the tests and gave the results of the tests in a spreadsheet format. This enabled easy analysis of the results, including chart creation.
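To see how such a discrepancy can arise, the toy C program below (with a made-up formula F, not the instrument's) shows that truncating double-precision inputs to single precision before applying a cancellation-prone formula shifts the result noticeably.

#include <stdio.h>

/* A made-up formula F that, like many real ones, subtracts nearly
   equal quantities and so amplifies any loss of precision. */
static double F(double v1, double v2, double v3)
{
    return (v1 - v2) * 1.0e6 + v3;
}

int main(void)
{
    double v1 = 1.000000123, v2 = 1.0, v3 = 0.05;       /* precise values   */
    float  f1 = (float)v1, f2 = (float)v2, f3 = (float)v3;  /* truncated    */

    printf("V4  (double inputs) = %.6f\n", F(v1, v2, v3));   /* 0.173000 */
    printf("V4' (float inputs)  = %.6f\n", F(f1, f2, f3));   /* ~0.169209 */
    /* The two values differ in the third decimal place: the truncation of
       V1 and V2, not the formula, is responsible for the discrepancy. */
    return 0;
}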

6. Conclusion

Through numerous examples, this paper has presented the OIM. This paper has also presented a real application of the methodology, which detected numerous functional and performance-related errors at run-time, hence enhancing the stability and performance of the application.

It is the contention of the authors, drawn from this application, that very little or no organizational change is required to implement OIM. In fact, once followed, the popularity of the OIM methodology will most likely increase in any organization, although further studies would be needed to confirm this generalization.

OIM has its drawbacks too. Since an external PC is being used to send commands to and receive responses from the embedded system, the embedded system will have to service the interrupts from the external PC. This takes up processor time in the embedded system, which could affect the time taken to complete the other processes running in the system. However, in practice, this is not much of a constraint, since once a measurement is started, the PC waits for the measurement to complete before reading the results. Thus, while the embedded system is doing the measurement, it is not disturbed. However, this means that when selecting the processor for the embedded system, the time spent in processing the PC interrupts should also be considered. Another drawback is that since the OIM methodology requires testing of the system's functions, the test code on the PC may duplicate much of the code that the embedded system uses. This may be a constraint in some cases. Yet another drawback is the possible occurrence of a race condition — a test may have been started by the PC, but due to some firmware error in the embedded system, the test may never finish. The PC may then time out waiting for the test to complete and may start reading the results of the (yet to be completed) test. Manual intervention may be necessary to stop the test and restart it after fixing the firmware errors. Finally, the OIM requires additional memory for storing intermediate values, as explained in Section 3. However, in our experience the additional memory requirement was not a constraint.

It is the contention of the authors that the future of the OIM methodology is promising. In this era of the internet and anytime, anywhere access to the web, if an embedded system is equipped with web server capability, it can be tested from practically anywhere in the world if OIM is used. Thus, customer service would acquire a special meaning — service by remote. The entire diagnosis can be done remotely, and only if it is a hardware problem does a service engineer need to visit the customer; if it is a software problem, even a new firmware version with the necessary fixes may be downloaded into the system remotely. However, this should not divert attention from systematic software development: if the scenario analysis is done upfront thoroughly, then many errors in logic can be prevented. The OIM methodology can then be used to test the run-time firmware behavior most of the time and to test less of the mundane logic and coding errors.

References

[1] P. Lettieri, M.B. Srivastava, Advances in wireless terminals, IEEE Pers. Commun. (February 1999) 6–19.
[2] P.A. Laplante, Real-Time Systems Design and Analysis: An Engineer's Handbook, IEEE Computer Society Press, New York, 1993.
[3] R.E. Eberts, User Interface Design, Prentice-Hall, Englewood Cliffs, New Jersey, 1994.
[4] P.J. Byrne, Reducing time to insight in digital system integration, Hewlett-Packard J. 47 (3) (June 1996), Article 1.
[5] E. Kilk, PPA printer firmware design, Hewlett-Packard J. 48 (3) (June 1997), Article 3.
[6] R.S. Pressman, Software Engineering, McGraw-Hill, New York, 1997.
[7] M. Fewster, D. Graham, Software Test Automation: Effective Use of Test Execution Tools, Addison Wesley, Harlow, England, 1999.
[8] B.P. Douglass, Doing Hard Time, Addison Wesley, Reading, Massachusetts, 1999.
[9] P. Varhol, Internet appliances are the future of small real-time systems, Electron. Des. 47 (19) (September 20, 1999) 59–68.
[10] R. Lewis, D.W. Beck, J. Hartmann, Assay — a tool to support regression testing, Proceedings of the 2nd European Software Engineering Conference, Springer, Berlin, 1989, pp. 487–496.
[11] N. Subramanian, Instrument firmware testing using LabWindows/CVI and LabVIEW — a comparative study, Proceedings of NIWeek, August 1999.
[12] N. Subramanian, A novel approach to system design: out–in methodology, Wireless Symposium/Portable by Design Conference, February 2000.

Narayanan Subramanian is employed as a Firmware Engineer at Anritsu Company's Applied Technology Division in Richardson, TX. He has been working on implementing OO GPIB for the past 2 years in the test and measuring instruments manufactured by Anritsu. Anritsu manufactures test and measuring instruments used in the telecom industry. He has an MSEE from Louisiana State University, Baton Rouge, LA.

Dr. Lawrence Chung is currently an Associate Professor in the Department of Computer Science at the University of Texas at Dallas. His research interests include Requirements Engineering and Software Architecture, as well as Electronic Commerce/Business Architecting. He has recently coauthored a book, "Non-Functional Requirements in Software Engineering", which is being adopted in extending object-oriented analysis to goal-oriented analysis.