


610 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS—II: ANALOG AND DIGITAL SIGNAL PROCESSING, VOL. 44, NO. 8, AUGUST 1997

Integrated Circuit Testing for Quality Assurance in Manufacturing:

History, Current Status, and Future Trends

Andrew Grochowski, Member, IEEE, Debashis Bhattacharya, Senior Member, IEEE,

T. R. Viswanathan, Fellow, IEEE, and Ken Laker, Fellow, IEEE

(Invited Paper)

Abstract—Integrated circuit (IC) testing for quality assurance is approaching 50% of the manufacturing costs for some complex mixed-signal IC’s. For many years the market growth and technology advancements in digital IC’s were driving the developments in testing. The increasing trend to integrate information acquisition and digital processing on the same chip has spawned increasing attention to the test needs of mixed-signal IC’s. The recent advances in wireless communications indicate a trend toward the integration of the RF and baseband mixed signal technologies. In this paper we examine the developments in IC testing from the historical, current-status, and future viewpoints. In separate sections we address the testing developments for digital, mixed signal, and RF IC’s. With these reviews as context, we relate new test paradigms that have the potential to fundamentally alter the methods used to test mixed-signal and RF parts.

I. INTRODUCTION

OVER THE YEARS of its development, integrated circuit technology has brought great progress to the design of high performance systems. Many challenges in the manufacturing process had to be solved to achieve this. One of the steps in this process, namely testing, is posing the most significant challenge to contemporary and future integrated circuit (IC) manufacturing. This is a continuing trend because, due to decreasing silicon cost and increasing complexity of integrated circuits, testing constitutes a very sizable portion of the IC manufacturing cost. This trend is further accentuated by the emergence of mixed signal, including radio frequency (RF), circuits, coupled with the competitive price pressures of the high volume consumer market. Frequently, the cost of testing a chip with a CODEC,1 an integrated digital signal processor (DSP), and other base-band circuitry reaches 30 to 50% of the total cost.

Manuscript received August 8, 1996; revised January 31, 1997. This paper was recommended by Editor J. Choma, Jr.

A. Grochowski, D. Bhattacharya, and T. R. Viswanathan are with Texas Instruments Inc., Dallas, TX 75243 USA.

K. Laker is with the Electrical Engineering Department, University of Pennsylvania, Philadelphia, PA 19104 USA.

Publisher Item Identifier S 1057-7130(97)03645-8.

1 CODEC—a related pair of coding (analog-to-digital) and decoding (digital-to-analog) blocks in a pulse code modulated telephone channel.

Due to unavoidable statistical flaws in the materials and masks used to fabricate IC’s, it is impossible to realize 100% yield on any particular IC, where yield refers to the ratio of good IC’s to the total number of IC’s fabricated. A good IC is one that satisfies all of its performance specifications under all specified conditions. The probability of a bad IC increases in proportion to its size and complexity. It is also increased by process sensitivities that occur in digital and analog IC’s that rely on the control and/or matching of IC components or parameters to achieve their specified functionality. The details that govern IC yield and the associated economics that drive IC manufacturing are very interesting, but beyond the scope of this paper. The interested reader is referred to the literature, e.g., [98], [99]. In any event, testing is an indispensable part of the IC manufacturing process.
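The yield definition above is easy to make concrete. The sketch below computes yield exactly as defined (good IC’s over total fabricated) and, purely to illustrate why larger dies are more likely to be bad, adds a first-order Poisson defect model; the defect density figure is hypothetical, and the model is one of the simplest treated in the yield literature cited, not the authors’ own analysis.

```python
import math

def yield_fraction(good: int, total: int) -> float:
    """Yield as defined in the text: good ICs / total ICs fabricated."""
    return good / total

def poisson_yield(defect_density: float, die_area: float) -> float:
    """First-order Poisson yield model, Y = exp(-D0 * A): the chance
    of a defect-free die falls exponentially with die area."""
    return math.exp(-defect_density * die_area)

# A lot of 1000 dies with 870 passing all tests:
print(yield_fraction(870, 1000))          # 0.87

# Hypothetical defect density of 0.5 defects/cm^2:
small = poisson_yield(0.5, 0.5)           # 0.5 cm^2 die
large = poisson_yield(0.5, 2.0)           # 2.0 cm^2 die
print(small > large)                      # larger die -> lower yield
```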

Since testing is repeatedly performed at several manufacturing stages, such as after wafer fabrication and packaging, it is very important to understand its inefficiencies. Reducing or eliminating these inefficiencies enables a chip manufacturer to drive down the cost of the final product [1], [2]. It is also important to understand the reasons for and the costs associated with testing. It is in the chip manufacturer’s best interest to minimize the number of bad devices shipped to the customer. A bad device is an IC that fails to meet one or more specifications at any point in the manufacturing process. Poorly designed tests, or parts not designed for testability, can result in bad devices appearing as good parts or good devices failing tests and appearing as bad. The shipment of bad devices leads to incurred replacement cost, loss of reputation, and possible loss of market share. The other side of this problem is not much better. When good parts are represented as bad, it reduces the chip yield and, correspondingly, it reduces the earnings of the chip manufacturer. At this point we should introduce the reader to the term DUT, or device under test. DUT is a generic term used in the testing literature to refer to the component, the bare IC die on a wafer, the packaged IC, the circuit board, etc., that is being tested. We will use this term frequently in this paper, mostly to refer to bare and packaged IC’s.

The major steps of a typical IC manufacturing process, with clear indication of the points at which device testing is performed, are shown in the flow diagram of Fig. 1.

Testing, at different steps in the product development process, addresses different issues and presents different challenges. The initial testing is performed within the computer-aided design (CAD) environment. At this stage the designer is verifying the functionality and the performance of the intended product. Many of the production tests, indicated in Fig. 1, are based on this activity.

1057–7130/97$10.00 © 1997 IEEE

Authorized licensed use limited to: PORTLAND STATE UNIVERSITY. Downloaded on December 4, 2009 at 19:34 from IEEE Xplore. Restrictions apply.

Fig. 1. Major steps in IC manufacturing flow.

After the wafer processing, the integrity of the wafer processing is evaluated by probing sites on the wafer that contain standardized device structures that are designed to evaluate process control. These test sites are stand-alone IC’s that are placed on the wafer either in place of a few primary2 IC sites scattered over the area of the wafer or in the scribe areas in between the primary IC sites. This test is intended to quickly identify any common catastrophic processing defects and to ensure accurate process control. If the wafer processing is found to be out of the manufacturer’s specification window, such wafers or the lots that include them may be scrapped prior to any functional testing. We should point out that the same device structures and/or additional ones on these test sites may be used to acquire the data for the SPICE models used in the simulation of the IC’s.

At the end of the wafer processing, every device (primary IC) on the wafer undergoes a set of performance or function related tests (commonly referred to as the test suite). The goal of this step is to eliminate DUT’s that fail to satisfy one or more of the expected performance specifications. Depending on the test philosophy of the IC manufacturer, the test suite used for wafer probing will either include a complete set of tests intended for the packaged devices or a subset. At this time, the bad devices are “electronically marked,” and this information is used in the next manufacturing step. The next step is to scribe the wafer and select the good dies for packaging. After the good IC dies are packaged, testing is again performed to ensure proper handling, wire bonding, and packaging. The final test checks performance of the device over the specified temperature range.

A sample set of the qualified packaged IC’s is subjected to stress testing that is referred to as burn-in (i.e., operating the IC’s in an oven at elevated temperature for an extended period of time). After removing the devices from the oven, they are retested using the full test suite. The objective of this step is to eliminate the “weak” devices which are likely to fail during their early period of operation. This failure mode is referred to as “infant mortality” and it represents the initial portion of the aging or so-called “bath-tub curve” [6].

2 Every wafer contains the sites with the designs of the intended final product, called “primary” sites. Some of these primary sites are sacrificed to facilitate the manufacturing process control (indirect measurements of levels of doping, layer thicknesses, etc.). The same sites are used for placement of the features used in the alignment of different mask levels.

Finally, a smaller sample is retested, prior to delivery, for quality assurance (QA) purposes. Different manufacturers implement their own independent QA programs. However, the globalization of the microelectronics industry drives the establishment of QA standards such as the popular ISO 9000 standard.

The repetition of similar tests performed on every IC makes the process appear inefficient. The number of repetitions is necessary to weed out bad IC’s as early in the manufacturing process as possible and to ultimately ensure that no bad IC’s are shipped to customers. The cost associated with producing a bad device is magnified by a factor of ten as it propagates undetected through the chip and system manufacturing processes and ultimately to the field. Consequently, the number of tests and the total test time need to be balanced to achieve the requisite process quality assurance while keeping the manufacturing cost as low as possible. The feedback of failure mode information to previous steps in the manufacturing process enables a closed loop control that is vital to continuously improving the efficiency and quality of the process. For this reason, simply limiting the number of tests performed at each step is usually an unacceptable cost-reducing measure.
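The factor-of-ten cost escalation described above (often called the “rule of ten”) can be sketched in a few lines. The dollar figures below are hypothetical, chosen only to show how quickly the cost of an escaped defect grows with each stage it passes through undetected:

```python
def escaped_defect_cost(base_cost: float, stages_escaped: int,
                        factor: float = 10.0) -> float:
    """Cost of a bad device that escapes detection: each manufacturing
    stage it passes through undetected multiplies the eventual cost by
    roughly the given factor (the text's factor of ten)."""
    return base_cost * factor ** stages_escaped

# Hypothetical dollar figures, for illustration only:
print(escaped_defect_cost(1.0, 0))   # caught at wafer probe
print(escaped_defect_cost(1.0, 1))   # caught at board test
print(escaped_defect_cost(1.0, 2))   # caught in the field
```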

II. AUTOMATED TESTING

Historically, IC characterization and testing developed with the use of bench-top equipment. There are still parts, e.g., in the microwave and millimeter wave area, that can only be tested this way. These instruments have provided and continue to provide high performance and flexibility of use; both characteristics are very important during the characterization and debugging phases of IC development. As IC DUT’s became smaller and more complex, they also became more difficult to fully test on the bench. It also became important to have several synchronized instruments operating on the DUT simultaneously. This use of synchronized instruments became a vital approach for both the characterizing of IC’s in the R&D laboratory and the conduct of production tests in the manufacturing environment. It was then seen that a streamlined interconnection of such instruments could simplify the entire testing procedure. At the simplest level, one instrument may trigger or control the instrument supplying the stimuli to the DUT, or it may control the intervention of other measuring instruments.

The integration of a collection of interconnected and synchronized bench-top instruments into a test station was marked by two significant developments. The first event was the introduction of standards for instrument interconnect and intercommunications; the second was the availability of the microprocessor.

The IEEE 488 standard evolved in response to a growing demand for computer driven testing solutions. It evolved in the mid-1970’s out of the proposal for the byte serial protocol from the Hewlett-Packard Company. At times it has been called the “ASCII bus” or the “General Purpose Interface Bus (GPIB)”. The standard [7] was designed to allow a user to easily interconnect GPIB configured instruments to each other and to a control computer (i.e., usually a PC with a GPIB plug-in card). It covers the communications protocols (for transmitting/receiving data and controlling communications on the bus), the multiconductor interconnection cables, and the plug-in fixtures.

Fig. 2. VXIbus electrical architecture (the power distribution bus is not shown) [92].

The second breakthrough in the automation of bench-top equipment was facilitated by the emergence of relatively inexpensive microprocessors, which enabled the development of more versatile and more reliable instrumentation. The fast development of digital VLSI rendered practical the incorporation of digitizing and data processing within the instrument. The inclusion of microprocessors enabled instruments to be developed that were easier to operate. The use of microprocessors also reduced operator error by eliminating most front panel controls and making the remaining controls more powerful.

These two developments led to the evolution of new and more efficient protocols that advanced the integration and synchronization of instruments and computers.

The early 1980’s saw the emergence of VME (Versa Module Europa) computer bus technology. It began with Motorola’s VERSAbus. After adopting Eurocard module dimensions, VERSAbus became VMEbus (IEEE 1014 standard, 1987). This new standard provided specifications for an open architecture computer bus with wide bandwidth. The protocol for VMEbus proved to be well suited for a wide variety of computer plug-in cards, but not for data acquisition boards. Consequently, VMEbus was never well accepted by instrument manufacturers.

In the late 1980’s, a number of technical representatives from the instrumentation manufacturers formed a committee to formulate the additional standards necessary for an open architecture instrumentation bus based on the VMEbus and IEEE 488 standards. This gave birth to the VXIbus, an acronym that stands for VMEbus Extension for Instrumentation. The VXIbus can be logically grouped into eight buses, as shown in Fig. 2. Global buses are accessible and shared by all VXI modules. Unique buses are routed from the slot 0 module (the main control unit) to other modules on a one-to-one basis. Private buses are local buses between adjacent modules [92].

VXIbus enables the control of signal characteristics and propagation delay via a well defined backplane environment [8]. With tighter electrical characteristics comes greater flexibility and higher bandwidth communications between instruments. VXI provides modularity and ease of use, while maintaining compatibility with GPIB. The growth in high volume discrete electronic component manufacturing (resistors, capacitors, diodes, etc.), and the need for repeatable measurements, further stimulated the automation of existing test methods.

At the time of the development of the GPIB standard, semiconductor and discrete component manufacturers like Fairchild, RCA, Western Electric, TI, etc., began building their own automated testers. These developments were driven by the pressure of increasing demand for higher IC production and the associated need to perform large volume batch testing. The fact is these “homegrown” systems were very difficult to build reliably, and they conformed to few if any standards.

Fig. 3. The ATE evolution.3

High demand in the consumer electronics market, coupled with the fact that it was difficult and expensive to build the test equipment in-house, created the right environment for the emergence of third party manufacturers of automated test systems. ATE to test digital IC’s appeared first, and ATE for digital and analog IC’s were separate systems. The analog and mixed-signal ATE industry was started when Teradyne, Inc. and the LTX Corporation were founded in the late 1960’s and the early 1970’s, respectively. These companies began by developing automated test equipment (ATE) to test linear analog IC’s, where the void was greatest. Over the next several years the capabilities of their ATE products evolved to serve the testing needs of mixed-signal (digital and analog) IC manufacturers. In Fig. 3 we graphically show the time-line for the evolution of ATE technologies. It can be debated whether the development of “analog only” ATE has disappeared or not. The fact is that today’s mixed signal ATE’s, even if equipped with “analog only” instruments, rely totally on digital signal processing (DSP) based measurement methods (i.e., the measured analog signals are digitized and DSP algorithms are used to extract the desired performance parameters). For this reason Fig. 3 shows the development of “pure analog” or “analog only” ATE to have ended with the emergence of mixed-signal ATE. Moreover, the trend among digital ATE vendors is to offer ATE’s with some mixed signal capabilities. Thus, the contemporary ATE industry offers products with varying ratios of analog and digital functionality (i.e., “big D little A” or “big A little D” configurations and variations in between).
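The DSP-based measurement style mentioned above can be sketched in a few lines. The fragment below is a generic illustration (NumPy, with a made-up sample rate and tone choice), not any vendor’s implementation: the digitized DUT output is captured coherently (an integer number of tone cycles per record, cycle count and record length coprime), so an FFT yields leakage-free bins from which parameters such as the tone’s frequency bin and amplitude are read off directly.

```python
import numpy as np

# Coherent sampling: capture exactly M cycles of the test tone in N
# points, with M and N coprime, so every FFT bin holds exactly one
# spectral component and no window is needed.
N, M = 1024, 37                      # record length, cycles captured
fs = 48_000.0                        # hypothetical sample rate (Hz)
f_tone = M * fs / N                  # tone lands exactly on bin M
t = np.arange(N) / fs
captured = 0.5 * np.sin(2 * np.pi * f_tone * t)   # digitized DUT output

spectrum = np.fft.rfft(captured) / (N / 2)        # scale bins to amplitude
bin_amps = np.abs(spectrum)
peak_bin = int(np.argmax(bin_amps[1:])) + 1       # skip the DC bin

print(peak_bin)                      # 37 -> recovered frequency = 37*fs/N
print(round(float(bin_amps[peak_bin]), 3))        # 0.5 -> tone amplitude
```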

Mixed-signal ATE systems are sophisticated computer controlled systems which perform automatic testing of DUT’s that include single components, packaged and bare IC’s, circuit boards, and complete systems. The allocation of the ATE’s functionality and the control of its operation are achieved with software. The ATE generates and applies the stimuli to the IC, it senses and digitizes the IC’s responses, and it analyzes these responses. Consequently, contemporary ATE’s are comprised of the following five functional modules: STIMULUS, MEASUREMENT, TESTER CONTROL, PROCESSING, and SIGNAL HANDLING, as shown in Fig. 4.

The TESTER CONTROL, i.e., the top block in Fig. 4, controls the entire ATE under the guidance of a test plan. It establishes the controls for the PROCESSING, MEASUREMENT, and STIMULUS blocks needed to implement the test plan. Test plans are written in a high level programming language. Due to its efficiency, modularity, and versatility, the C language has become the most popular language for this task. It is also in this portion of the ATE where the interface (menus and icons) between the test developer (and ATE operator) and the ATE is determined. The PROCESSING block provides the computing power necessary for preprocessing the data that is to be sent to the STIMULUS block. The STIMULUS block, in turn, is responsible for the final shaping of the signals that are to be applied to the DUT. The PROCESSING block also does the postprocessing of the digitized signals provided by the MEASUREMENT block. The SIGNAL HANDLING block embodies all the hardware used for routing the signals to and from the DUT.

3 This graph does not assume any particular date for the onset of the bench-top equipment development, nor does it assume the disappearance of all test equipment in the near future. Furthermore, the dates for the appearance and disappearance of a particular ATE type are approximate.

Fig. 4. Functional modules of an ATE.

In addition to the functional partitioning shown in Fig. 4, every ATE is physically comprised of a test head connected to a main cabinet containing the processor and the instruments. The physical appearance of a typical ATE is shown in Figs. 5 and 6. The main cabinet houses the STIMULUS, MEASUREMENT, PROCESSING, and TESTER CONTROL blocks. SIGNAL HANDLING is primarily the function of the ATE’s test head. It also includes signal routing through adapters, fixtures, and cables between modules in the ATE and the DUT. Thus, signal handling is an area that requires special consideration. The amount of degradation experienced by signals passing through switch contacts and cables, especially analog signals at high frequencies, should not be underestimated [9].

Because the signal handling requirement is unique to an ATE, the development of this hardware is very closely tied to the growth of the ATE industry. The instrumentation found in the main cabinet, on the other hand, evolved separately, with little influence from the ATE industry except to make the instruments programmable and compact. Most of these instruments were developed as bench-top (stand alone) units as early as the 1950’s. Simple hardware controllers were used in the 1950’s, but true automation occurred only subsequent to the availability of low cost, powerful microprocessors.

Fig. 5. Teradyne A580 automated tester.

Fig. 6. LTX Synchro automated tester.

Originally, ATE vendors used in their equipment a mix of proprietary instrument interfaces with GPIB controllers. Subsequent ATE models incorporated prevailing instrument and computer bus standards. The emerging bus standards have been incorporated within later ATE generations to varying extents. Today, this trend continues, with the majority of the ATE industry converging on VXIbus based instruments and several vendors introducing optical fiber as a means of communicating between internal subsystems.

In addition, the performance of the instruments available within an ATE framework has undergone a tremendous improvement. Today’s ATE provides a measurement environment with a noise floor between −120 and −140 dB. This type of environment is necessary in the testing of high resolution analog-to-digital and digital-to-analog converters. Similarly, RF testing also requires a very low noise environment for low signal power measurements. Since today’s ATE was developed using IC’s developed in yesterday’s technologies, even the most advanced ATE is challenged to test IC’s that represent the latest state-of-the-art. For example, the increasing performance of IC’s for storage devices is resulting in increasingly more stringent demands on the timing parameters of today’s ATE’s. A clock jitter (phase noise) of no more than 10 ps rms is required to properly test the read channel IC’s that are realizing more than 100 Mb/s data rates. Some of the mixed-signal ATE vendors provide special modes which increase the digital capture rates up to 400 MHz. Table I gives a general list of the instruments available in a truly mixed-signal ATE. The parameters listed in this table represent typical values, and they are often the differentiating factors that drive the IC manufacturer’s choice between the ATE’s offered by different vendors.
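The connection between tester noise floor and converter resolution follows from the standard ideal quantization-limited SNR of an N-bit converter, SNR ≈ 6.02N + 1.76 dB: the ATE’s own noise must sit comfortably below the quantization noise of the device being measured. A small sketch (the formula is standard; the bit widths are chosen only for illustration):

```python
def ideal_adc_snr_db(bits: int) -> float:
    """Ideal SNR of an N-bit converter for a full-scale sine input,
    quantization noise only: SNR = 6.02*N + 1.76 dB."""
    return 6.02 * bits + 1.76

# The tester noise floor must sit well below the device's own
# quantization noise for the measurement to be meaningful:
for bits in (16, 20):
    print(bits, round(ideal_adc_snr_db(bits), 1))
# A 16-bit part needs ~98 dB of dynamic range; a 20-bit part needs
# ~122 dB, already pressing against a -120 to -140 dB noise floor.
```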

One of the first obstacles the ATE industry had to overcome was signal handling and conditioning. As stated in the introduction, there are two embodiments of an IC DUT that the ATE has to interface with, i.e., die on a wafer and package. In both cases the signal path from an ATE to the DUT is routed through a multilayer device interface board (DIB), such as that shown in Fig. 7. Device interface boards can be either square or round. The round DIB’s have diameters between 12 in and 18 in, depending on the tester type. Typically the DIB’s are constructed with 2–4 separate, uncommitted planes (layers) which can be used for separate ground planes and shielding. In addition the DIB’s have planes (layers) dedicated to set power supplies (±5 V, ±15 V, etc.). Access to ATE resources is gained via contact pads radially located on the ring around the perimeter of the board. These pads make contact through spring loaded coaxial pins to the resources located in the test head.

Fig. 7. Teradyne A580 device interface board (13 inches in diameter).

During the wafer test, the signals are delivered to and from the IC via a probe card [3]. To accommodate such an interface, the device interface board has an opening in the center (usually 3 to 5 inches in diameter). Small pads are located on the perimeter of that opening. The interconnection between the probe card and the device interface board is achieved through spring loaded shielded pins. To maintain the signal integrity of the connection between the probe card and the IC, this interface has to be very reliable and resilient. Probe card resiliency is specified in hundreds of thousands of touchdowns (typically 100 000 to 300 000). The three most commonly used probe card types are the ceramic blade, metal blade, and epoxy ring. The choice of a probe card style depends on factors such as maintenance and signal aberrations. Fig. 8 shows an example of a probe card used for testing low pin count devices.

The diameter of a probe card ranges anywhere from 4 to 8 in, depending on the test system. The diameter of the probe tip is usually between 5 and 15 mils (where 1 mil = 0.001 in). The most commonly used materials for standard probes are tungsten, rhenium tungsten, beryllium copper, and palladium. The planarity of the probes depends on the chip area and typically ranges from less than 0.5 mil for a small die to below 1 mil for a large die. The probes are laid out radially, with ground layers placed in a way that allows placement of bypass capacitors as close to the probes as possible.

A considerable amount of effort is currently being devoted to the development of high speed and RF probing. The recent utilization of the same technologies that are used in IC manufacturing has led to the introduction of higher density and higher performance probe cards [4], [5], [89].

Packaged IC's are tested by placing a socket adapter on the DIB. The physical dimensions of the packages are very

Authorized licensed use limited to: PORTLAND STATE UNIVERSITY. Downloaded on December 4, 2009 at 19:34 from IEEE Xplore. Restrictions apply.




Fig. 8. Example of low probe count probe card (4 inches in diameter).

different for different package types. They also exhibit very different electrical characteristics. The important issues in selecting the socket type are the lead inductance of high speed (>10 MHz) paths and the series resistance of the leads. These characteristics affect the fidelity of the signals brought in and out of the DUT. The most prominent vendor names in the socket adapter industry are Yamaichi and Johnstech. As an example, Fig. 9 shows a socket for a 28-pin PLCC package mounted on a DIB. Shown is a close-up of the center of the DIB with a mounted socket adapter.

The type of socket adapter shown in Fig. 9 is used during "hand testing," that is, when the devices are inserted into a test fixture manually. For large volume automated testing this socket is removed and the DIB is directly connected to the automated handler. Automated handlers are the instruments which bring DUT's into contact with the DIB. After the test is


GROCHOWSKI et al.: IC TESTING FOR QUALITY ASSURANCE 617

Fig. 9. Socket adapter mounted on the load board.

TABLE I. TYPICAL ATE INSTRUMENTATION PERFORMANCE

completed, the automated handler places the DUT in the appropriate bin, e.g., separate bins for good and bad devices (the details of the handler operation are determined by commands in the test plan). Some products, e.g., microprocessors, are tested to more than one performance criterion; thus, additional bins are used to separate parts that satisfy the different criteria.

The second significant achievement in ATE development was the simplification of the interface between the ATE and the test developer/operator. It was largely the computational

Fig. 10. Test program development environment. (Courtesy of HP, ATE Division.)

power and programming flexibility of the DSP, coupled with the emergence of high performance A/D and D/A converters, that enabled the development of user friendly and flexible virtual instruments.4 In today's ATE, test instruments for specific applications (e.g., video, telecommunications, hard disk drives, etc.) are created by configuring the ATE's resources using software.

4 Using an analog channel/digitizer/DSP and flexible software, it is possible to generate various instruments based on the same principles of digitizing events and digital signal processing.


Fig. 11. Instrument status window. (Courtesy of HP, ATE Division.)

For the initial "home grown" ATE systems, the human-machine interface was the low level machine language code prepared by the test engineer. The design and implementation of tests in these early systems were encumbered by an interface that had its own steep learning curve. As time progressed, the programming evolved to higher level languages, e.g., FORTRAN, Pascal, and C, and to a window based graphical user interface (GUI). With a GUI the test development environment evolved into one that is intuitive and easy to navigate. Moreover, it freed the test engineers to focus their efforts on the important testing issues rather than writing and debugging code. Fig. 10 shows an example of the program development window for a typical ATE. We see buttons or icons for file management, program compilation and launching, program and test debugging, and many other utilities designed to make test development easier. Here the user can select a set of functions to be included in the test plan, start the execution of the program, stop the execution at any arbitrary point, and direct the flow of the output data.

There are many utilities which allow the user to interrogate the current states of the ATE and to change them, if desired. All the information describing the state of each instrument is available in the form of windows. The user can verify and/or change the settings of voltage/current levels, select among available filters, select the mode of operation, timing sets, etc. All these capabilities are available through an easy to follow point-and-click environment. An example of such a utility is shown in Fig. 11. The panel on the left of this figure shows a graphical representation of the arbitrary waveform generator (AWG). It also shows the status of the connections and the waveform parameter settings. The user is permitted to change some of the waveform parameters when the program is stopped at a break point.

When digital inputs are needed, digital pattern editing resources are available at the click of a button. Fig. 12 shows the digital pattern editor window. The window is divided into three areas; namely, a spreadsheet style pattern display, a graphical pattern display, and a text timing and

Fig. 12. Digital signal display. (Courtesy of HP, ATE Division.)

vector display. Through this window, the user can change and monitor the timing relations between various digital signals. The spreadsheet style display section allows the user to input instrument specific instructions. This allows triggering of the signal generators and digitizers. These instruments are built with ADC's and DAC's; therefore, the time when the samples are sent out of the instrument can be precisely controlled.

The graphical waveform analyzer is shown in Fig. 13. Data stored in any numerical array can be displayed and edited in this window. Data contained in a previously stored file can be read and displayed using this tool as well. The generation of data segments for arbitrary waveform generators (AWG's) can also be accomplished with this tool. The data obtained by the digitizer or the digital capture instruments can be displayed here too. The ATE will update these displays whenever the program is restarted or suspended at a break point.

The important functions of data logging and the binning of results are governed by the test plan. Fig. 14 shows a typical data log or results window. The data output can be saved in various file formats and later used for further analysis. This window shows an easy to read way of presenting numerical values of data measured and/or calculated during tests, as defined in the test plan.

Test program development is completely under the test engineer's control. Any valid application within the software system can be opened during development or debugging sessions. All mixed-signal ATE vendors provide test development environment emulation software that can be used on any workstation or PC connected to the network that includes the ATE. This arrangement allows for easy transfer of data and test programs between the ATE and the engineer's desktop computer. As a result, most of the test development can be performed prior to the physical availability of the DUT.

The easy to use ATE environment described in this paper is in effect a direct port of advances that have made workstations and PC's more intuitive and friendly to use.

Fig. 13. Waveform display. (Courtesy of LTX Corporation.)

The drive to simplify ATE programming will continue until the state is reached where the software driving the system is transparent, and the test engineer can concentrate solely on generating the test plan and analyzing the results. Many ATE vendors are seriously pursuing the integration of the IC design and test activities into a single "user friendly" environment. Computer models that emulate the ATE are being developed to be incorporated into the same CAD tools used during the design phase. This will allow the test engineer to exercise the model of the DUT (i.e., SPICE or HDL) within an emulated test environment. It is anticipated that this combined environment will yield a better understanding of the DUT's functionality and testability, and a more efficient and robust overall test strategy.

III. DIGITAL TESTING

A. Methods

In the 1970's and 1980's, there was a major divergence between the growth paths for analog and mixed-signal circuits on one hand, and digital circuits on the other. Digital circuits experienced exponential growth in their level of integration and performance.

This is illustrated in Fig. 15 for the case of microprocessors, which are perhaps the best symbols of the rapid increase in complexity and performance of digital circuits. A major consequence of such growth in complexity was that conventional testing techniques, dubbed functional techniques, were no longer sufficient for digital circuits. Instead, a host of new techniques was developed to meet the challenge of testing complex digital circuits. At the same time, the evolution of test equipment, including ATE, was also driven by the need to test such complex yet high-performance digital circuits. The variety of test techniques for digital circuits available today can be broadly classified into the following categories:

Fig. 14. Results window. (Courtesy of HP, ATE Division.)

1) functional testing, which relies on exercising the DUT in its normal operational mode and, consequently, at its rated operating speed; 2) testing for permanent structural faults (like stuck-at faults, stuck-open faults, bridging faults, etc.) that do not require the circuit to operate at rated speed during test; 3) testing based on inductive fault analysis, in which faults are derived from a simulation of the defect generation mechanisms in an IC (such faults tend to be permanent, and do not require the circuit to be tested at rated speed); 4) testing for delay faults that require the circuit to operate at rated speed during test; and 5) current measurement based testing techniques, which typically detect faulty circuits by measuring the current drawn by the circuit under different input conditions while the circuit is in the quiescent state.

The functional approach to testing, which relies on exercising the DUT in its normal operational mode, is the oldest method of testing and verifying the operation of digital circuits. In this approach, the designer and/or test engineer generates sequences of input patterns, dubbed test vector sequences, given the design details at different levels of abstraction. The popularity of this approach stems from the following two reasons: 1) such tests verify that the circuit operates according to specifications, at least under some input conditions, and 2) the test vectors are easily derived by simply formatting the test cases used by the designer during creation of the design, thereby saving test generation time. Moreover, functional tests are well known to detect a reasonably large percentage of the faults in an IC.

However, the functional testing method has some major shortcomings as well [56].5 Foremost among the deficiencies

5 The literature on most of the topics covered in this section is vast. Consequently, only a few representative works are discussed and/or cited in the discussion that follows.



Fig. 15. Graphs illustrating growth in CPU and memory size (number of transistors).

is the fact that test sequences generated using the functional approach tend to be unacceptably large for all but trivial digital circuits. For example, a thorough functional test set for a 32-bit adder with carry input to the least significant bit position requires the application of about 2^65 input patterns. This is an impossible proposition for even the fastest ATE available. Moreover, a 32-bit adder represents only a small portion of complex digital circuits like microprocessors. Testing circuits like microprocessors using functional tests alone is not even

Fig. 16. Single stuck-at fault model illustrated.

remotely feasible. Fault coverage of functional test vectors has also come under increasing scrutiny [32], [79] as the stringency of test specifications has grown over the years. While functional test vectors tend to detect a large percentage of faults in an IC, they also tend to miss a nontrivial percentage of faults. At the present time, functional test vectors are typically fault simulated6 to estimate their fault coverage, which can be a computational resource intensive process that offsets the advantage of easy test vector generation. Furthermore, the correlation between fault coverage estimates obtained via fault simulation and the ability of functional test vectors to identify failing IC's can be poor. Thus, fault coverage estimation for functional test vectors remains an open issue, thereby limiting the usefulness of functional vectors in today's environment of tight quality control.
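The impracticality is easy to quantify. A back-of-the-envelope sketch in Python, assuming a hypothetical ATE that applies 10^9 vectors per second (a rate assumed here for illustration, not one quoted in the paper):

```python
# Exhaustive functional test of a 32-bit adder with carry-in:
# every combination of two 32-bit operands plus the carry bit.
patterns = 2 ** (32 + 32 + 1)        # 2^65 input patterns
rate = 10 ** 9                       # vectors per second (assumed ATE rate)
seconds = patterns / rate
years = seconds / (365 * 24 * 3600)
print(f"{patterns} patterns, about {years:.0f} years of tester time")
```

Even at a billion vectors per second, the exhaustive test would run for over a millennium, which is why it is dismissed out of hand.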

Structural fault model based test techniques represent the most thoroughly investigated alternative to the functional testing approach since the 1950's. A fault model is usually an artificial model of failures in digital circuits, such that tests for such faults in an IC expose actual failures in the IC. The earliest, and the most widely used, structural fault model is the single stuck-at7 fault model, in which one interconnection line between low-level design units (like gates) is assumed to be permanently stuck at a logic value 0 or 1, one at a time. This is illustrated in Fig. 16 for a simple digital circuit. While this fault model may appear to be a highly artificial one, experience has shown that it can be very useful in identifying faulty IC's. Moreover, experience and theoretical investigations suggest that test vectors generated using the single stuck-at fault model are usually effective in identifying faulty chips in the presence of multiple faults, which rarely mask each other's impact on circuit behavior completely.

Furthermore, the behavior of a digital circuit in the presence of a stuck-at fault can be mathematically analyzed using Boolean algebra. Numerous techniques to identify test vectors that expose specific stuck-at faults in a given design8 (purely Boolean-algebraic [33], [19], search-algorithm based [23], [44], [47], [68], [70], and combinations of the two [46], [75]) have been developed over the years using this fault model. Many of these techniques have been implemented in commercial CAD tools and are used throughout the semiconductor industry today. The major problems with the stuck-at

6 Fault simulation efficiently simulates and compares the behavior of a given circuit in the fault-free condition to its behavior in the presence of one or more well-defined faults (in accordance with a selected fault model).

7 Another common name for this fault model is the single stuck-line or SSL fault model.

8 This process is commonly referred to as test pattern generation (TPG) and, with the proliferation of computer implementations of various methods for TPG, as automatic TPG or ATPG.


fault model are twofold: 1) the computation process to identify tests (TPG; see footnote 8) can be extremely resource and time intensive and 2) the stuck-at fault model is not good at modeling certain failure modes of CMOS, the dominant IC manufacturing technology at the present time.

Consequently, alternative fault models like the transistor stuck-OPEN and stuck-ON fault models [24], [39], [51], [78] and bridging fault models [27], [52], [58] have gained popularity in recent times. Efforts have also been made to develop effective TPG tools that use very low-level circuit models called switch-level models [21], [48]: switch-level circuit models use primitives that fall in between transistor-level models, like those used for SPICE simulation, and gate-level circuit models, which are commonly used with the stuck-at fault model. At the present time, the search for alternatives to the stuck-at fault model that are better suited for CMOS circuits, and the search for computer implementation techniques for TPG that require less time and computing resources, continue to be active areas of research. Simultaneously, the stuck-at fault model continues to be widely used for generating test vectors which are likely to expose failing IC's that are not identified as faulty by other testing techniques.

Recognition of the limitations of the permanent fault models for digital circuits, especially in CMOS technology, has led to the development of an interesting approach to fault modeling, designated inductive fault analysis (IFA) and, more recently, inductive contamination analysis (ICA) [37], [40], [53], [59], [69]. In these approaches, candidate faults in a given circuit are first identified through simulation of the recognized modes of failure (like particle contamination) in a specific IC manufacturing process, applied specifically to a given layout of a given circuit: an overall view of IFA is shown in Fig. 17, based on the framework presented by Ferguson and Shen [40]. The resultant faults consist of deviations of the various layers of the IC from the designed layers, like extra blobs of metal causing shorts between two wires, breaches in the gate oxide, etc. The impacts of these faults on the behavior of the given circuit are then extracted via simulation, and are usually modeled as permanent faults. Tests for these permanent faults are generated in a final step, typically by means of some computerized search procedure. The main advantage of IFA is that it uses detailed process information to decide on a list of target faults, and hence can claim the resultant list of faults to be "realistic" as opposed to "artificial," as is the case with the stuck-at fault model. A major problem of IFA, however, also stems from this reliance on detailed process simulation: the procedure is resource and time intensive, leading to high cost.

Delay faults represent a different face of fault model based testing, in which faults are assumed to affect the performance of the circuit without affecting its functionality. In other words, a circuit with a delay fault is presumed to operate correctly at sufficiently low speed, but does not operate correctly at the rated speed. At the present time, delay faults are broadly classified into two categories: 1) gate delay faults, where the delay fault is assumed to be lumped at some gate output, and 2) path delay faults, where the delay fault is the result of an accumulation of small delays as a signal propagates along one or more paths in a circuit. Although delay faults were

Fig. 17. Framework of fault extraction in CMOS circuits using IFA, as presented by Ferguson and Shen [40].

studied in some depth as early as 1985 [55], interest in delay faults has picked up only since the late 1980's. A slew of automatic TPG or ATPG techniques for delay faults, under various assumptions about the observability and controllability of the internal flip-flops in a digital design, have been developed [17], [28], [30], [34], [43], [50], [60], [67]. As with stuck-at faults, these techniques use computerized search algorithms, or some combination of Boolean algebraic processing and search, to identify the test vectors for delay faults. Unlike tests for stuck-at faults, where the impact of a fault is usually not completely masked by the presence of other faults in the DUT, tests for delay faults can suffer from a problem of robustness, i.e., a test for one delay fault may be invalidated by excess delay in parts of the circuit that are not the target of the test being applied. For an excellent discussion of the robustness of delay fault tests and the relationship between robust tests for gate and path delay faults, the reader is referred to [67].

For common digital circuits of today, e.g., those manufactured in CMOS, all of the previous testing techniques rely on voltage measurements during testing to determine whether the DUT operates according to expectation or not. Techniques that are based on the measurement of current represent an orthogonal approach. In such methods the current consumed by the DUT is measured, under different input conditions, to decide whether the DUT is good or bad. Since high-precision current measurement is usually a slow process, taking several milliseconds, almost all of the current measurement based testing techniques rely on the measurement of the quiescent current of the DUT for different input conditions [62], [73]. Such techniques are commonly referred to as IDDQ test techniques. Some studies have also been made of the possibility of using current measurements while the circuit is not in quiescent mode [57], [77], but this approach remains a research topic at this time.

IDDQ test techniques have been most successfully applied to detecting faulty static CMOS parts in which the quiescent current is expected to be very low: on the order of a few hundred


Fig. 18. Ad hoc control and observation point insertions to ease testability.

nanoamperes at most, for state-of-the-art processes, assuming supply voltages in the range of 2.0–4.0 V. Hence, if the DUT consumes significantly more current in the quiescent state, e.g., tens to hundreds of microamperes, the DUT is judged to be faulty. Such increased levels of IDDQ can be caused by a variety of factors like metal-to-metal shorts, defective transistors, etc. However, depending on the actual failure mechanism, some of these faults may not be caught by any other test method except a nearly exhaustive functional test set, which, as discussed earlier, is impractical. Recent studies [56] have confirmed the routine occurrence of such faults in CMOS circuits, thereby boosting the case for IDDQ testing significantly. Related studies have also shown that there is a good correlation between failure in IDDQ tests and early life failure of IC's, i.e., IDDQ tests can also act as good indicators of potential reliability problems for IC's. The major disadvantage of IDDQ testing is the fact that high-resolution current measurement is a very slow process, even with state-of-the-art test equipment. As a result, IDDQ tests are usually very costly in tester time in a production environment. Moreover, classification of chips that fail IDDQ tests only, but pass all other functional tests and structural fault model based tests, is usually a hard problem on account of the cost of rejecting "good" DUT's.
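The pass/fail decision itself is simple thresholding, as the following sketch shows; the 10 uA limit and the current readings are invented numbers consistent with the ranges quoted above, not values from the paper:

```python
IDDQ_LIMIT_A = 10e-6      # assumed pass/fail threshold (10 uA)

def iddq_screen(readings_amps):
    """Pass only if the quiescent current stays below the limit
    for every applied input state."""
    return all(current < IDDQ_LIMIT_A for current in readings_amps)

print(iddq_screen([120e-9, 95e-9, 210e-9]))   # hundreds of nA -> True
print(iddq_screen([130e-9, 85e-6, 150e-9]))   # one 85 uA state -> False
```

The hard part in practice is not this comparison but the slow, high-resolution current measurement that produces each reading.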

Testing of digital circuits usually goes hand-in-hand with design-for-testability (DFT), which represents design principles and methodologies aimed at enhancing the controllability and observability of internal nodes in a circuit with minimal area and performance overhead. DFT techniques can be classified into two broad categories: 1) ad hoc techniques and 2) systematic techniques. Ad hoc DFT techniques are usually applicable to specific designs only, and cannot be generalized to cover many different types of designs. They include such measures as control and observation point insertion, as illustrated in Fig. 18, and the partitioning of long counters and shift registers. Systematic DFT is essentially another name for structured, modularized digital design methods which are reusable, and hence can be used with a variety of designs, have minimal dependence on chip parameters, and are usually highly automated.

Among the variety of systematic DFT techniques that have been investigated by researchers, scan design methods are possibly the most well-known and widely used. The foundation of scan design lies in the Huffman model of sequential circuits shown in Fig. 19, whereby any arbitrary sequential circuit can be viewed as a composite of one combinational logic block and one or more memory elements,

Fig. 19. Basis for the scan design methodology: the Huffman model of sequential circuits.

Fig. 20. A typical IEEE 1149.1-compliant board organization.

with every feedback loop from an output of the combinational block to its input(s) passing via a memory element; the memory elements are clocked latches or flip-flops in the case of synchronous designs. In the scan design methodology, specially designed memory elements are used so that all, or some subset, of these memory elements can be fully controlled and observed via some special access mechanism in the test mode: such an access mechanism usually consists of one or more shift registers, designated scan chains,9 formed out of the memory elements themselves, or of a 2-D grid-like access mechanism. This basic idea has been implemented in numerous variations by many CAD tool vendors and digital IC vendors, some of the notable ones being: 1) multiplexer-scan, commonly designated MUX-scan (virtually all DFT tool vendors support this approach, which can be attributed originally to Williams and Angell [81]); 2) level sensitive scan design, commonly designated LSSD (IBM) [38]; 3) scan path (NEC) [45]; 4) modular port scan design or MPSD (TI) [65]; 5) scan/set logic (Sperry-Univac) [76]; and 6) random access scan (Fujitsu) [18], among others.

When all memory elements in the DUT are replaced with scan elements, the design is designated a full scan design. The main advantage of full scan design is that it converts a sequential circuit (which has feedback loops and memory) into

9Of course, one needs to verify that the scan chain itself is fault-free, whichis not a hard problem if some constraints are imposed on the types of memoryelements that can be used in the circuit.


Fig. 21. Typical organization of test logic in IEEE 1149.1-compliant part.

a combinational one (with no feedback loops or memory) for testing purposes, which in turn simplifies test generation for the DUT considerably. If some, but not all, of the memory elements are replaced with scan elements, then the design is designated a partial scan design [54]. Experiments show that with careful selection of the scan elements, partial scan designs can often have most of the benefits of full scan design. This can occur even though the resultant partial scan design still has memory elements in it in the test mode. Of course, depending on the number of flip-flops scanned, the area overhead of partial scan can be significantly less than the area overhead of full scan.
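The test procedure enabled by full scan reduces to three steps per vector: shift a chosen state into the chain, apply one functional clock to capture the combinational response, then shift the result out for comparison. A sketch with an invented 3-bit next-state function:

```python
def next_state(state, pi):
    """Combinational block of a toy sequential circuit (illustrative)."""
    a, b, c = state
    return [b ^ pi, a & c, a | b]

def scan_test(shift_in_bits, pi):
    chain = list(shift_in_bits)        # 1) shift the desired state in
    captured = next_state(chain, pi)   # 2) one system clock: capture
    return captured                    # 3) shift out and compare

print(scan_test([1, 0, 1], pi=1))      # -> [1, 1, 1]
```

Because both the inputs and the outputs of the combinational block are now directly controllable and observable through the chain, ordinary combinational ATPG suffices.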

In 1990, the IEEE approved Standard 1149.1, the test access port and boundary scan architecture [49] for digital circuits, which recognizes scan as the key component in easing testing problems for large digital circuits and systems. This standard is commonly referred to as the Joint Test Action Group (JTAG) standard, since it was developed by the JTAG, consisting of members from Europe and North America. It is also referred to as boundary scan, since a major part of the standard is based on the use of special scan cells at the boundary of IC's. This standard establishes a systematic design procedure for chips and boards, with guaranteed access to chips fully assembled on a board. A schematic view of the DFT structures in a board employing boundary scan for all its chips is given in Fig. 20.

Any design compatible with IEEE Standard 1149.1 must have a few key components: 1) a test access port (TAP); 2) a TAP controller; 3) an instruction register (IR); and 4) some mandatory test data registers (TDR's). The TAP controller, IR, and TDR's constitute the test logic indicated in Fig. 20. The standard specifies the behavior of the TAP controller and some mandatory instructions that must be supported by the test logic. In addition, the standard allows manufacturers of IC's to include additional TDR's and to support additional instructions, which may be revealed to the buyer (public instructions) or may not be revealed to the buyer (private instructions). A typical organization of the test logic in a JTAG-compliant IC is shown schematically in Fig. 21. For an excellent discussion of the JTAG standard, implementations of the various parts of the test logic, and useful extensions to the

Fig. 22. Illustration of C-testability of a ripple-carry adder.

specified standard, e.g., simultaneous access to multiple scan chains under the control of private instructions, the reader is referred to [63].
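The heart of the standard is the TAP controller, a 16-state machine stepped by the TMS pin on each TCK edge. A Python sketch of its transition table, following the state diagram published in the standard; for example, the TMS sequence 0, 1, 0, 0 moves the controller from reset into Shift-DR:

```python
# (tms=0 target, tms=1 target) per state, per the IEEE 1149.1 state diagram.
TAP = {
    "Test-Logic-Reset": ("Run-Test/Idle", "Test-Logic-Reset"),
    "Run-Test/Idle":    ("Run-Test/Idle", "Select-DR-Scan"),
    "Select-DR-Scan":   ("Capture-DR",    "Select-IR-Scan"),
    "Capture-DR":       ("Shift-DR",      "Exit1-DR"),
    "Shift-DR":         ("Shift-DR",      "Exit1-DR"),
    "Exit1-DR":         ("Pause-DR",      "Update-DR"),
    "Pause-DR":         ("Pause-DR",      "Exit2-DR"),
    "Exit2-DR":         ("Shift-DR",      "Update-DR"),
    "Update-DR":        ("Run-Test/Idle", "Select-DR-Scan"),
    "Select-IR-Scan":   ("Capture-IR",    "Test-Logic-Reset"),
    "Capture-IR":       ("Shift-IR",      "Exit1-IR"),
    "Shift-IR":         ("Shift-IR",      "Exit1-IR"),
    "Exit1-IR":         ("Pause-IR",      "Update-IR"),
    "Pause-IR":         ("Pause-IR",      "Exit2-IR"),
    "Exit2-IR":         ("Shift-IR",      "Update-IR"),
    "Update-IR":        ("Run-Test/Idle", "Select-DR-Scan"),
}

def step(state, tms_bits):
    for tms in tms_bits:
        state = TAP[state][tms]    # index 0 = TMS low, 1 = TMS high
    return state

print(step("Test-Logic-Reset", [0, 1, 0, 0]))   # -> Shift-DR
```

Note the safety property built into the diagram: holding TMS high for five clocks returns the controller to Test-Logic-Reset from any state.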

DFT techniques for circuits possessing regular and repeated structures have also received a tremendous amount of attention over the years. It was shown as early as 1973 [42] that ripple-carry adders (recall the difficulty of functionally testing a 32-bit adder) can be tested very thoroughly, assuming no more than one adder stage is faulty at a time, using only eight tests, independent of the size of the adder; this is illustrated in Fig. 22. Whenever a circuit has this property, it is designated C-testable [42]. It has been shown that with the use of appropriate DFT techniques, test set sizes for many useful arithmetic circuits like ALU's, multipliers, etc. can either be made constant (i.e., the circuit can be made C-testable) or made to grow very slowly with circuit size [26], [29], [35], [71], [74]. At the same time, it has been shown that microprocessor-like circuits (designated instruction-set processors) which have a well-defined register-transfer model can usually be tested thoroughly using register-transfer operations to exercise different parts of the DUT [20], [31].
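C-testability of the ripple-carry adder can be checked mechanically. In the sketch below (one conventional construction, not necessarily the exact vectors of [42]), six vectors repeat the same (a, b) pair at every bit position, chosen so the carry ripples through unchanged, and two alternating vectors supply the remaining carry-flipping combinations (1, 1, 0) and (0, 0, 1); the check confirms that every stage sees all eight of its local input combinations regardless of adder width:

```python
def stage_inputs_seen(n):
    """Local (a_i, b_i, c_i) combinations seen by each of n adder stages
    under an eight-vector C-test set."""
    seen = [set() for _ in range(n)]
    # Six vectors with identical (a, b) at every stage; for these
    # combinations the carry out equals the carry in, so it is constant.
    vectors = [([a] * n, [b] * n, cin)
               for a, b, cin in [(0, 0, 0), (1, 1, 1), (0, 1, 0),
                                 (0, 1, 1), (1, 0, 0), (1, 0, 1)]]
    # Two alternating vectors cover (1, 1, cin=0) and (0, 0, cin=1).
    vectors.append(([i % 2 == 0 for i in range(n)],) * 2 + (0,))
    vectors.append(([i % 2 == 1 for i in range(n)],) * 2 + (1,))
    for a, b, cin in vectors:
        c = cin
        for i in range(n):
            ai, bi = int(a[i]), int(b[i])
            seen[i].add((ai, bi, c))
            c = (ai + bi + c) >= 2        # ripple the carry to stage i+1
    return seen

print(all(len(s) == 8 for s in stage_inputs_seen(32)))   # -> True
```

Since eight vectors exhaustively exercise every stage, any single faulty stage is exposed, and the count stays eight whether the adder has 4 bits or 64.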

As digital circuits have grown large, and as the availability of transistors/gates on an IC has steadily increased with improved fabrication technologies, various techniques have been developed to include some testing functionality in the logic itself: this is commonly referred to as built-in self test or BIST. While the idea of a digital circuit testing itself is many decades old (for example, the earliest switches from AT&T had some self-testing capabilities), the use of BIST in commercial designs has become popular only recently. The general structure of a digital circuit using BIST is shown in Fig. 23. Response verification in BIST is usually done by comparing the response of one part of the circuit with the response from another part (in the case of regular circuits), or by using feedback shift registers acting as signature analyzers [13], or some combination of the two.
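A minimal sketch of the signature-analysis idea mentioned above: response bits are shifted through a linear feedback shift register, compressing an arbitrarily long response into a short signature that is compared against a precomputed fault-free value. The 16-bit feedback polynomial below (the CRC-CCITT polynomial) is an illustrative choice, not one prescribed by the cited work; by linearity, any single-bit error in the response is guaranteed to change the signature, while the aliasing probability for random multi-bit errors is about 2^-16.

```python
def sisr_signature(bits, poly=0x1021, width=16):
    """Serial-input signature register (LFSR compactor): shift a response
    bit-stream of any length into a 16-bit signature."""
    sig, mask = 0, (1 << width) - 1
    for b in bits:
        feedback = ((sig >> (width - 1)) & 1) ^ b   # MSB xor incoming bit
        sig = (sig << 1) & mask
        if feedback:
            sig ^= poly                              # tap positions from poly
    return sig

# Fault-free response vs. the same response with one flipped bit:
good = [1, 0, 1, 1, 0, 0, 1, 0] * 100               # 800 response bits
bad = list(good)
bad[137] ^= 1                                        # single-bit error
```

In a real BIST controller the comparison against the golden signature is the only response data that ever leaves the chip.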

There is, however, wide variation in the circuitry used togenerate patterns, when using BIST, depending on the typeof patterns that are generated. The types of patterns used inBIST range from deterministic [15], [41], [64], [80], [82],which is typically used with highly structured circuits like


Fig. 23. Typical top-level structure of circuit employing BIST.

memories and FIFO’s, to scan-based [22], to circular [66], to pseudoexhaustive [61]; the latter categories of BIST are used primarily in logic circuits. A recent study [36] has shown that many leading manufacturers and users of semiconductor IC’s have adopted the use of BIST, almost uniformly for highly structured circuits like memories and FIFO’s, and to varying degrees for other types of logic circuits. The popularity of BIST in circuits like memories and FIFO’s is easily explained by the fact that these circuits are notoriously difficult to test using external test resources like ATE, whereas they are relatively easy to test using BIST. For example, thorough tests for pattern-sensitive faults in a 64 Mbyte memory, using external test resources, would require over a trillion (and possibly a quadrillion!) cycles of operation of the memory. In contrast, tests with the same effectiveness can be carried out by BIST structures which make full use of access to various internal nets of the memory, thereby testing different parts of the memory in parallel. Such BIST tests can be accomplished in a few tens of millions of cycles.
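The parallelism argument aside, even the serial algorithms used inside memory BIST engines are linear in the memory size. A sketch of one such algorithm, the well-known March C- test, is shown here against a simple behavioral memory model with an injectable stuck-at fault; the fault model and cell count are illustrative only:

```python
def march_c_minus(mem):
    """March C- : 10n read/write operations; detects stuck-at, transition,
    and many coupling faults. Returns False as soon as a read mismatches."""
    n = len(mem.cells)
    def elem(addrs, ops):
        for a in addrs:
            for kind, val in ops:
                if kind == 'w':
                    mem.write(a, val)
                elif mem.read(a) != val:
                    return False                    # fault detected
        return True
    up, down = range(n), range(n - 1, -1, -1)
    steps = [(up,   [('w', 0)]),
             (up,   [('r', 0), ('w', 1)]),
             (up,   [('r', 1), ('w', 0)]),
             (down, [('r', 0), ('w', 1)]),
             (down, [('r', 1), ('w', 0)]),
             (down, [('r', 0)])]
    return all(elem(addrs, ops) for addrs, ops in steps)

class Memory:
    """Behavioral memory with an optional (address, value) stuck-at fault."""
    def __init__(self, n, stuck_at=None):
        self.cells = [0] * n
        self.stuck = stuck_at
    def write(self, a, v):
        self.cells[a] = v
        if self.stuck and a == self.stuck[0]:
            self.cells[a] = self.stuck[1]           # cell ignores the write
    def read(self, a):
        return self.cells[a]
```

A fault-free memory passes; a memory with any single stuck-at cell fails on one of the read elements.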

B. Digital ATE

Over the last couple of decades, the evolution of ATE aimed at complex digital circuits has mirrored the tremendous growth in the variety of testing methods for digital circuits. The features that have emerged as the key to effective testing of very high-speed complex digital parts are: 1) independent control and observation capability for each pin of the DUT; 2) real-time timing control, allowing the timing of a test pattern at any pin to be changed in any cycle; 3) built-in calibration subsystems; 4) presence of a high-performance host processor to manage the operations of the various ATE subsystems, and to provide a workstation-like user interface; 5) a rich software environment that integrates CAD tools and database management systems with the test operating system to provide a seamless path from pattern files generated by designers and/or ATPG tools, to actual waveforms being applied to, or observed at, the pins of the DUT; and 6) development of sophisticated mechanical structures that can act as an effective interface between the advanced electronics of the ATE and that of the DUT. As mentioned in Section II, the instrumentation present in most state-of-the-art ATE is more or less similar nowadays. This is true, individually, within the class of ATE aimed at digital circuits and the class of ATE aimed at mixed-signal circuits.

However, there is still considerable difference in the focus of instrumentation between these two classes of ATE. The key issues of instrumentation and software environment of ATE have already been discussed, in significant detail, in Section II, and hence are not repeated here.

The primary problem faced by ATE aimed at purely digital circuits is the rapidly increasing performance of the digital circuits that constitute the DUT’s. Today, large 200 MHz digital circuits (like CPU’s) are becoming commonplace, 500 MHz digital circuits are expected to become commonplace in the next few years, and large 1 GHz digital circuits are being contemplated. The problems faced in testing these kinds of ultra high-speed digital circuits are twofold: 1) the available ATE often does not have the instrumentation necessary to test the DUT at the rated speed; 2) analog circuit problems are creeping back into the business of testing digital circuits. Recently, considerable effort has been focused on the former problem, with the primary goal of formulating techniques to test ultra high-speed digital circuits using lower speed ATE [14], [16], [23]. The latter problem remains essentially an open issue.

IV. MIXED-SIGNAL TESTING METHODS

During the period of explosive growth in the size and complexity of digital IC’s, the growth in size and complexity of mixed analog-digital IC’s (including operational amplifiers, voltage regulators, ADC’s, and DAC’s) was modest in comparison. The most dramatic advancements in mixed-signal circuits during the past two decades have been in performance, e.g., resolution, signal-to-noise ratio, and signal bandwidth, as opposed to size or complexity. These developments lead to more stringent requirements and challenges for the analog instrumentation. They have also enabled significant advancement in the data acquisition capabilities of ATE. Consequently, the vendors of analog ATE concentrated their efforts on developing new circuit design techniques to improve their products’ performance in the areas of nonlinear distortion, resolution, bandwidth, low noise, etc., needed to support emerging analog and mixed-signal IC’s. Nonetheless, perhaps the three most significant advances in the development of mixed-signal ATE were auto-calibration, DSP-based measurements, and the improvement of the test plan development and debugging environment.

The accuracy of analog instruments is limited by component variations over temperature changes, random differences between replaced components, and component drift over extended periods of time. Given a specific set of components, system errors due to temperature variation can be minimized by holding the instrument at a constant temperature. An elevated temperature is selected because it is easier to control the heating of the instrument than its cooling. This method has been in use for many years in the bench-top instrumentation environment. To eliminate the remaining errors within the measurement instrumentation, calibration of the instrument is required immediately before or after each measurement. Variations of this method are used throughout the ATE industry in the form of auto-calibration [10], [11]. The test system monitors changes in the environment and automatically adjusts some of its instrumentation parameters (delays, offsets, etc.). Some of the problems associated with calibration are eliminated due to better temperature and relative humidity control in today’s facilities, and tighter tolerances on the components used within instruments.
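The essence of auto-calibration can be sketched as a two-point correction: the system measures two known reference levels, derives a gain and offset correction, and applies it to subsequent readings. The instrument error model below (2% gain error, 5 mV offset) is hypothetical:

```python
def two_point_calibration(read_raw, ref_lo, ref_hi):
    """Measure two known reference levels through the instrument, derive a
    gain/offset correction, and return a function that corrects raw readings."""
    r_lo, r_hi = read_raw(ref_lo), read_raw(ref_hi)
    gain = (ref_hi - ref_lo) / (r_hi - r_lo)
    offset = ref_lo - gain * r_lo
    return lambda raw: gain * raw + offset

# Hypothetical instrument with a 2% gain error and a 5 mV offset:
read_raw = lambda v: 1.02 * v + 0.005
correct = two_point_calibration(read_raw, 0.0, 1.0)   # calibrate on 0 V and 1 V
```

A production auto-calibration subsystem repeats this against traceable on-board references whenever the monitored environment drifts, but the arithmetic is the same.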

Arguably the most important advancement in the evolution of mixed-signal ATE was the introduction of DSP techniques into the world of measurement and instrumentation. It should be pointed out that DSP’s would have found limited use in analog ATE if it were not for concomitant advances in analog-to-digital converter (ADC) and digital-to-analog converter (DAC) devices. The important feature of sampled systems and digital signal processing is the fact that once the signal is captured/digitized, with the microprocessing power available, engineers have control of virtually any set of variables they choose [12]. The introduction of sampled systems, digitizers, and arbitrary waveform generators made it possible to perform synchronization between events on a sample (vector) by sample basis [83], [84]. Because of this, DSP-based testing offers benefits of accuracy, flexibility, throughput, and repeatability which are not as easily realizable with continuous-time instrumentation. In addition, conventional analog instruments are much less accurate in measuring dynamic parameters than they are in measuring constant events. This weakness is eliminated by application of DSP-based instrumentation [83].

DSP techniques combined with the advances in IC technology led to many new and imaginative ATE solutions. The full synchronization of analog and digital subsystems within today’s ATE’s simplifies dynamic testing by application of the coherence principle [83]. Coherent testing is a systematic method for testing unknown or arbitrary wave shapes. When a sinusoidal waveform with frequency F_t is sampled at the rate F_s, one can define a unit time interval comprised of M signal cycles and N sampling intervals. If M is a prime number and the relation in (1) holds, then the N samples within this unit time interval are unique. In other words

F_t / F_s = M / N    (1)

This coherence ensures that the location of the N samples is controlled and repeatable. A longer waveform can then be constructed by graphically superimposing several unit time intervals. If the waveform is P unit time intervals long and (1) is satisfied, then the resulting pattern will be comprised of P repetitions of the same N samples. N is typically chosen to be a power of two to accommodate the fast Fourier transform (FFT) algorithm. The application of the coherency principle makes testing of data converters more efficient and reliable by simplifying the analysis of the spectra resulting from the FFT and chirp-z transforms of the time domain signals. In this way the dynamic performance, e.g., signal-to-noise ratio (SNR) and total harmonic distortion (THD), can be reliably tested [85]. In addition, typical data converter parameters such as linearity, quantization error, missing codes, and settling time can be tested with more flexibility.
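The effect of the coherence condition can be demonstrated numerically. The sketch below assumes the standard coherent-sampling relation F_t/F_s = M/N with M and N mutually prime; with M = 5 cycles captured in N = 64 samples, the sample phases are all distinct and the entire signal energy falls into exactly two DFT bins, with no spectral leakage (and hence no need for a window function):

```python
import math, cmath

def coherent_samples(M, N, fs=1.0):
    """Capture exactly M whole cycles of a sine in N samples (Ft/Fs = M/N)."""
    assert math.gcd(M, N) == 1, "M and N must be mutually prime"
    ft = fs * M / N
    return [math.sin(2 * math.pi * ft * k / fs) for k in range(N)]

def dft_mag(x):
    """Naive DFT magnitude (an FFT would be used in practice)."""
    N = len(x)
    return [abs(sum(x[k] * cmath.exp(-2j * math.pi * n * k / N)
                    for k in range(N))) for n in range(N)]

# With M = 5, N = 64: the N sample phases (k*M mod N) are all distinct,
# and the signal energy lands entirely in bins 5 and N-5.
x = coherent_samples(5, 64)
spectrum = dft_mag(x)
```

If M and N shared a factor, the sample set would repeat before covering N distinct phases; if (1) were violated, the tone would fall between bins and smear across the spectrum.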

The decreasing cost of DSP’s has enabled them to be allocated on a per-instrument basis. This recent advancement allows some of the signal analysis to be performed locally, at the instrument capturing the data [86]. This approach can perform “real time” signal analysis using the techniques developed in adaptive signal filtering theory [87]. The allocation of more “computational power” to the individual instruments makes the implementation of “virtual instruments” even more flexible and time efficient.

The third major advancement in mixed-signal ATE was the introduction and continuous improvement of test plan debugging methods [91], [93], [94]. Prior to the introduction of the debugging tools, the task of test plan development was a very tedious and painful process. With today’s graphical user interface and menu-driven debugging tools, it is difficult to imagine how the work of a test engineer could be accomplished prior to the emergence of these tools.

As implied earlier, advancements in the performance of integrated circuit ADC’s and DAC’s directly enabled the spread of DSP’s into today’s ATE’s. Of particular note, the application of sigma-delta modulation [95] to waveform generators and digitizers made testing of high resolution (18-bit) audio devices possible. Also, advances in direct digital synthesis and the performance of phase-locked loop circuits brought the ATE higher precision clocking schemes and time measurement instruments. The computing power of the ATE can be used to emulate traditional control loops. Here, the array processor operates at a much higher frequency than the signal frequency within the emulated loop. The DSP engines, coupled with programmable digital channels, provide the wide variety of digital waveforms needed for the testing of complex communication devices. The testing of read channel devices for hard disk drives is made possible by the application of programmable arbitrary waveform generators (see the specifications shown in Table I).

Demands on the analog testing capabilities of mixed-signal ATE are continuously growing. The trend among ATE vendors has been to incorporate additional functionality into their newer ATE models to meet new testing demands. Recent developments in personal wireless communications have created an abrupt new demand for RF (3 GHz for now) testing capabilities, which are being added to the new generation of mixed-signal ATE. Heretofore, the RF ATE market has been a niche market dominated by companies like Hewlett-Packard and Roos Instruments. However, the trend of integrating more and more functions into ATE will likely continue for the foreseeable future, and the coexistence of digital, baseband analog, and RF instruments on a single ATE platform is in the works. LTX, Teradyne, and Hewlett-Packard are working on such systems.

If one contrasts this discussion of the development of mixed-signal IC testing methods with that of the previous section on digital IC testing methods, the following important observations can be made. First, the major advances in digital IC testing have been in the development of fault-based methods of testing, while the significant advances in mixed-signal testing have been in the capabilities of the ATE. The test methods used for mixed-signal IC’s are essentially the same


Fig. 24. Simplified schematic of IC with P1149.4 functionality [102].

functional tests performed in the era of bench-top testing, except for adaptations needed to translate these measurements to a DSP environment. A by-product of this translation is that the measurements are more robust and reliable, but much the same functional parameters are still being measured. Research and development in nonfunctional analog testing methods, such as fault-based analog testing, has been subcritical and not very productive. We suspect that this is in part due to the fact that analog IC complexity has not yet rendered functional testing impractical, and that the problem of analog fault-based testing is seen to be less tractable than it was for the digital case. A sign of change in this scene has been the recent application of BIST methods to test an ADC on a mixed-signal IC [100]. The on-chip tests described in that paper are frequency response (FR), SNR, gain tracking (GT), intermodulation distortion (IMD), and harmonic distortion (HD); i.e., classical ADC functional quantities.

Over the last three to four years, the work to standardize mixed-signal testing has gained enough followers that an extension to the 1149.1 standard is currently under consideration by an IEEE committee. The preliminary P1149.4 mixed-signal boundary scan standard extends the IEEE 1149.1 digital boundary scan standard (described in Section III) by the addition of analog and parametric measurement capabilities. P1149.4 is intended to aid in the automation of mixed-signal board testing, and to reduce test interconnect. This is an attempt to apply many of the benefits delivered by 1149.1 in digital testing directly to mixed-signal testing. P1149.4 may prove useful for simple shorts-and-opens testing (similar to the 1149.1 shorts-and-opens test) and for two-probe parametric measurement, and it is being considered for many more kinds of measurements [101], [102].

The interconnections for particular measurements can be achieved by connecting any point in the mixed-signal circuit to one or both of the on-chip global buses AB1 and AB2, which can be in turn connected to the external test pins AT1 and AT2. The resources to perform the tests are provided by analog boundary modules. The proposed functionality of such a module is shown in Fig. 24. Switch 1 disconnects the circuit core, so the test circuitry can control the pin. Switches 2 and 3 pull the node high or low. Switches 4 and 5 connect the pin to the AB1 and AB2 buses; switch 4 can drive the pin with a current or a voltage from AB1, and switch 5 drives voltages from the pin to the test system for measurement. A digitizing receiver can be a comparator with a reference supplied via AT1 or AT2. The digitizing receiver may be a more elaborate design. The

Fig. 25. Block diagram of the DSP-based spectrum analyzer concept.

switches shown in this figure represent the functionality and not the physical implementation. The implementation of connecting and disconnecting subcircuits to and from the test buses and pads is left to individual designers.
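The switch behavior described above can be captured in a small behavioral model. This is an illustration of the description in the text only, not an implementation of the (then-draft) P1149.4 standard; the switch numbering follows Fig. 24 as described here, and the mode names are invented:

```python
from enum import Enum

class Mode(Enum):
    CORE = "mission mode: core connected to the pin"
    PULL_HIGH = "switch 2 closed: node pulled high"
    PULL_LOW = "switch 3 closed: node pulled low"
    DRIVE_AB1 = "switch 4 closed: stimulus from AB1 drives the pin"
    SENSE_AB2 = "switch 5 closed: pin voltage routed to AB2 for measurement"

class AnalogBoundaryModule:
    """Behavioral model of the five-switch analog boundary module.
    switches[n] is True when switch n is closed."""
    def __init__(self):
        self.switches = {n: False for n in (1, 2, 3, 4, 5)}
        self.switches[1] = True        # power-up in mission mode
    def set_mode(self, mode):
        # Test modes open switch 1 (disconnect the core) so the test
        # circuitry controls the pin; SENSE_AB2 keeps the core connected
        # so the pin can be observed while the core drives it.
        closed = {Mode.CORE: {1},
                  Mode.PULL_HIGH: {2},
                  Mode.PULL_LOW: {3},
                  Mode.DRIVE_AB1: {4},
                  Mode.SENSE_AB2: {1, 5}}[mode]
        self.switches = {n: (n in closed) for n in (1, 2, 3, 4, 5)}
        return self.switches
```

Such a model is useful mainly for reasoning about which bus/pin connections a given board-level test requires before any silicon exists.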

Mixed-signal systems are much harder to characterize and measure than purely digital systems. Systems with extremely high frequencies, small amplitudes, or high precision components may prove difficult to test with P1149.4. Since P1149.4 performs static tests much like 1149.1, some nonstatic or feedback-dependent circuits may also be hard to test [102].

There are still many open issues with this proposal. It is believed that implementation of the global bus will be difficult, if not impossible, for high frequency circuitry. The effects of the presence of these buses on the overall performance of the IC’s are also of concern.

V. RF TESTING METHODS

Radio frequency (RF) circuits have been around for many years, but not until the recent explosion in the personal wireless communications industry was there a need for investment in high volume RF testing capabilities. When RF IC’s were exclusively for modest volume military, government, and niche commercial applications, there was little demand for high volume RF ATE capabilities. The demand for large volumes and low prices of RF integrated circuits (RFIC’s) for consumer communications products has stimulated the ATE industry to respond with the introduction of RF measurement upgrades to already existing lines of mixed-signal ATE.

Over a decade of experience in analog baseband testing is now being extended to RF measurements. The DSP-based FFT spectrum analyzer is the work-horse of today’s ATE’s [96]. Let us review the block diagram of the typical DSP-based frequency analyzer shown in Fig. 25.

The input signal is passed through a (variable) attenuator and an anti-aliasing (low-pass) filter to respectively provide measurement range and to remove the unwanted high frequency components that are outside the frequency range of the instrument. Next, the signal is sampled and converted to digital format by an ADC. The DSP (often called an array processor in the ATE literature) calculates the spectrum of the waveform using the FFT and passes the results out to the display. In an ATE environment, the array storing the spectrum components is normally further processed to obtain the SNR, THD, and/or RF measurements.
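That post-processing step can be sketched as follows: given a coherently captured record, the spectral power is split into fundamental, harmonic, and noise contributions, from which SNR and THD follow directly. The test waveform below, a coherent tone with a 1% second harmonic standing in for DUT distortion, is illustrative:

```python
import math, cmath

def dft_power(x):
    """One-sided power spectrum via a naive DFT (an FFT in practice)."""
    N = len(x)
    return [abs(sum(x[k] * cmath.exp(-2j * math.pi * n * k / N)
                    for k in range(N))) ** 2 for n in range(N // 2 + 1)]

def snr_thd_db(x, fund_bin, n_harmonics=5):
    """Split spectral power into fundamental, harmonics, and noise."""
    p = dft_power(x)
    p[0] = 0.0                                      # ignore the DC bin
    harm_bins = {h * fund_bin for h in range(2, n_harmonics + 2)
                 if h * fund_bin < len(p)}
    fund = p[fund_bin]
    harm = sum(p[b] for b in harm_bins)
    noise = sum(v for i, v in enumerate(p)
                if i != fund_bin and i not in harm_bins)
    snr = 10 * math.log10(fund / noise) if noise > 0 else float('inf')
    thd = 10 * math.log10(harm / fund) if harm > 0 else -float('inf')
    return snr, thd

# Coherent tone (5 cycles in 64 samples) plus a 1% second harmonic:
x = [math.sin(2 * math.pi * 5 * k / 64) + 0.01 * math.sin(2 * math.pi * 10 * k / 64)
     for k in range(64)]
snr, thd = snr_thd_db(x, fund_bin=5)
```

A 1% (amplitude) harmonic corresponds to a power ratio of 10^-4, i.e., a THD of -40 dB, which the computation recovers exactly because the capture is coherent.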

To apply this approach to the RF signals, the signals to beanalyzed must be converted to frequencies within the range


Fig. 26. Block diagram of the DSP-based RF spectrum analyzer concept.

Fig. 27. S-parameter definition.

of operation of the ATE. This is achieved by the application of mixing techniques. The block diagram shown in Fig. 25 is modified in Fig. 26 to include the mixer; Fig. 26 thus shows the block diagram of a DSP-based RF spectrum analyzer. This is the basic idea behind the implementation of modern RF instrumentation. Typical RF measurements like scattering parameters, noise figure, and intermodulation are built upon this basic model. The local oscillator (LO) signal is selected to bring the signal down to the frequency range of the ATE digitizer.

The fundamental mathematical network description for the terminal or port behavior of an RF network is the set of scattering or S-parameters. Accordingly, the characterization and testing of RF circuits has historically been based on the measurement of S-parameters. S-parameters relate to the incident and reflected power flow that occurs in a network, as observed at its ports [96]. Unlike the impedance, admittance, and hybrid parameters that equivalently relate the voltages and currents existing at a network’s ports, the corresponding S-parameters relate the incident and reflected waves observed at the same ports. They describe the bidirectional wave propagation in the network. The definition of S-parameter terms and the equations relating all the parameters are given in Fig. 27.

To extend the instrument shown in Fig. 26 to the measurement of S-parameters, it is necessary to introduce a bidirectional coupler for measurements of the incident and reflected waveforms in the path between the signal source in the ATE and the DUT. This approach puts the radio and microwave interface right next to the DUT and mixes the signals down to the frequency band of operation of the digitizers. Fig. 28 shows a typical arrangement for the S-parameter measurement [90]. The RF signal is guided via a controlled impedance environment to the input of the DUT. The power level is adjusted to meet the specifications of the DUT. The incident and reflected waveforms are digitized

Fig. 28. Block diagram of the RF test configuration.

with a high speed or high resolution digitizer, depending on the bandwidth requirement of the particular measurement. The particular arrangement in Fig. 28 shows a single-digitizer architecture which allows measurement of the incident and reflected waveforms through the same signal path. The digitizer is controlled by the digital vector patterns, which allows for fully coherent data capture. The digitized representation of the waveforms is sent to the DSP to be processed. From here on, a DSP that is either central or local to the digitizer is used to transform this data from the time domain to the frequency domain. The same DSP is used for analysis of the resulting spectrum and calculation of the reflected and incident signal power levels. These values are used to calculate the S-parameters for the DUT.
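The final computation can be sketched as follows: the DSP extracts the complex amplitude of the test tone from the digitized incident and reflected records and forms their ratio, which for a one-port measurement is S11. The reflection coefficient used below is a hypothetical value, included only to check the arithmetic:

```python
import math, cmath

def tone_phasor(x, m):
    """Complex amplitude of the tone in DFT bin m of a coherent record."""
    N = len(x)
    return sum(x[k] * cmath.exp(-2j * math.pi * m * k / N)
               for k in range(N)) * 2 / N

def s11(incident, reflected, m):
    """S11 = b1/a1: ratio of reflected to incident wave at the test tone."""
    a1 = tone_phasor(incident, m)
    b1 = tone_phasor(reflected, m)
    return b1 / a1

# Hypothetical one-port with reflection coefficient 0.2 at 45 degrees,
# observed through a coherent capture of 5 cycles in 64 samples:
gamma = 0.2 * cmath.exp(1j * math.pi / 4)
N, m = 64, 5
incident = [math.cos(2 * math.pi * m * k / N) for k in range(N)]
reflected = [(gamma * cmath.exp(2j * math.pi * m * k / N)).real
             for k in range(N)]
```

Because both records are captured coherently through the same signal path, fixture gain and delay cancel in the ratio; a full measurement additionally de-embeds the coupler with calibration standards.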

VI. NEW PARADIGM FOR MIXED-SIGNAL AND RF TESTING

If one contrasts the discussion in the previous two sections regarding the development of mixed-signal and RF IC testing methods with that of Section III on digital IC testing, the following important observations can be made. First, the major advances in digital IC testing have been in the development of fault-based methods of testing, while the significant advances in mixed-signal and RF testing have been in the capabilities of the ATE. Digital ATE capabilities have advanced too, but their development has been driven by the extensive R&D in nonfunctional methods of testing. The test methods used for mixed-signal and RF IC’s are essentially the same functional tests performed in the era of bench-top testing, except for adaptations needed to translate these measurements to a DSP-based measurement environment. A by-product of this translation is that the measurements are more robust and reliable, but much the same functional parameters are still being measured. Research and development in nonfunctional analog (baseband and RF) testing methods, such as fault-based analog testing, has thus far been subcritical and not very productive. We suspect that this is due to the fact that analog IC complexity has not yet rendered functional testing impractical, and that analog fault-based testing is seen to be less tractable than it is for the digital case. The advances in signal handling between the ATE and DUT have also largely occurred in the ATE. Nonetheless, the advancing complexity and performance of mixed-signal and RF IC’s are stressing functional test methods and the ATE to their limits.


Given the context provided by the previous sections, we are in a position to consider some potential future directions for mixed-signal and RF IC testing. We believe the time is ripe for the sort of fundamental changes that will open new R&D opportunities for the industrial and academic circuits and systems community. Unless a fundamental departure from the current testing paradigm occurs, the future will likely be dominated by packing increasingly more resources into the ATE. We submit that the limit of this approach, which has served the semiconductor industry well since the mid-1970’s, is near at hand. This limit is not one of technology, but of economics. Simply put, the ATE are becoming too costly, and they are unavoidably one generation behind the advancing mixed-signal IC performance curve. Furthermore, functional tests are becoming increasingly more difficult and time consuming to perform. Thus, simply maintaining the status quo in mixed-signal and RF testing will become less and less acceptable. It is time to consider new testing paradigms. We believe that the most fertile paradigms will come from efforts where academe and the industrial IC test community work in partnership. It was this view that brought the authors of this paper together. Our purpose in this section is to stir the pot by identifying the critical limits of the current mixed-signal IC testing paradigm and to relate some potential alternative approaches for the reader’s consideration.

The BIST work of Roberts et al. [100] represents the kind of fresh approach that is needed. “Analog fault modeling” and other unconventional testing methods need to be investigated in a pragmatic way as well. There are some reports on the application of inductive fault analysis (IFA) to the modeling of faults in analog IC’s. Proponents of this technique argue that IFA-based testing combines the circuit topology and process defect data to arrive at realistic fault models that are specific to the circuit. This information can be exploited by test engineers to generate effective and economic tests [69]. Some other work has concentrated on adapting digital circuit test techniques to the testing of analog IC’s. These attempts have led to methods for analog test generation and fault simulation that are mapped into the digital domain. The analog circuit is mapped to a digital representation using a standard continuous-to-discrete transformation. This technique is proposed as a more cost effective simulation-based means of analog test generation [97]. This type of R&D activity has to grow to critical mass if the escalating costs of mixed-signal and RF IC testing are to be countered. The insidious types of faults that are particularly problematic in analog IC’s are the performance failures caused by one or more random mismatches in ratioed components (e.g., capacitors, transistor gm’s, etc.), even when all the process parameters are within acceptable range. Such a fault occurs when a random physical mismatch combines with a sufficient degree of circuit sensitivity to produce a performance failure. The circuit sensitivity that relates the failed performance parameter to the mismatch is inherent to the circuit schematic and/or the physical layout. Only when a circuit is sufficiently insensitive to a particular mismatch, such that the mismatch can produce a performance failure only when the process parameters are out of acceptable range, does the mismatch not represent a potential

Fig. 29. Loaded DIB (photo).

fault. As in digital circuits, fault detection in analog circuits depends on the observability and controllability of the DUT. BIST approaches can enhance both of these qualities.
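The interplay between mismatch and specification described above can be illustrated with a small Monte Carlo sketch. The numbers are hypothetical (0.02% capacitor matching, a 500 ppm gain-error specification), chosen only to show how a modest mismatch sigma can translate into a substantial rate of performance failures even when every process parameter is nominal:

```python
import random

def mismatch_failure_rate(sigma=0.0002, spec_ppm=500, trials=100000, seed=1):
    """Monte Carlo estimate of yield loss from random capacitor-ratio
    mismatch in a nominally unity-gain ratioed stage.
    sigma: relative 1-sigma mismatch of each capacitor (hypothetical).
    spec_ppm: allowed gain error in parts per million (hypothetical)."""
    random.seed(seed)
    failures = 0
    for _ in range(trials):
        c1 = 1.0 + random.gauss(0, sigma)        # two nominally equal
        c2 = 1.0 + random.gauss(0, sigma)        # ratioed capacitors
        gain_error_ppm = abs(c1 / c2 - 1.0) * 1e6
        if gain_error_ppm > spec_ppm:
            failures += 1
    return failures / trials
```

With independent 0.02% mismatch on each capacitor, the ratio error has a standard deviation of roughly 283 ppm, so a 500 ppm specification sits well inside two sigma and several percent of otherwise good dice fail; relaxing the specification (lower circuit sensitivity) drives the failure rate toward zero.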

When trying to improve a process, it is often helpful to identify the weakest parts or links in the process. It is often in addressing the weakest link that the most impact can be realized for a given amount of effort. We believe the weakest links in the current ATE-based test process for mixed-signal and RF IC’s to be the SIGNAL HANDLING function of the ATE and the interface between the DUT and ATE. For analog baseband and RF tests, the signal handling and ATE/DUT interface must be robust, well modeled, and well controlled. As described in Section II, the separation between the ATE and DUT is quite large in both physical and electrical dimensions. Also, the device interface board or DIB (with either a probe card or socket adapter) serves as the primary interface between the ATE and DUT. All the electronics used to facilitate this interface are mounted in a limited area (see Fig. 7) on the DIB. These electronics are usually a collage of small scale integrated IC’s and discrete components. The wiring may involve custom interconnect fabricated on one or more of the un-dedicated DIB planes, or it may involve (all or in part) discrete wiring. In comparison to the DUT and ATE, this critical interface is surprisingly low-tech and ad hoc. An example of a loaded DIB is shown in Fig. 29. The problems with this setup require little explanation: the design and implementation of DIB electronics are ad hoc, the resulting circuitry is not well posed for accurate simulation, and the functionality is limited by the constrained space on the DIB and a discrete circuit implementation.

Testing is part of the overhead associated with IC manufacturing, and, as such, it is ultimately an economically driven process. The objective is to achieve the requisite quality control derived from testing at the lowest cost. With testing approaching, for some mixed-signal IC’s, up to 50% of the manufacturing cost, reducing test costs is of keen interest


to IC manufacturers. However, reductions in test costs that lessen quality control will have a detrimental impact that more than offsets the savings. The cost of testing can be broken down into three components: namely, the cost of test development, the cost of using the ATE on a per-part basis, and the cost of design for testability. The cost of test development covers the cost of designing the tests to be conducted, writing software to program the ATE, and the design and construction of the DIB interface electronics. The cost of ATE usage is largely the allocation of the cost of owning, operating, and maintaining the ATE equipment (in dollars per unit time) to each part tested or to each part delivered to customers. The cost of design for testability comprises the real estate used on the chip for either testability features or BIST, and the cost of developing the associated on-chip electronics and integrating them into the DUT. Achieving a systemic reduction in the cost of testing necessitates a coordinated effort to address these cost components in a systematic way.

To understand the cost issues, let us briefly examine what has driven these cost components in recent years. To match the increasing complexity of mixed signal and RF parts, and the demand for more sophisticated tests, the complexity and cost of ATE have been escalating since the 1980's. That is, as the performance, frequency, and features of mixed-signal IC's have advanced, the ATE industry has responded with new models and plug-in features. Furthermore, as we alluded to earlier, the ATE are perpetually obsolete in that they must follow, rather than lead, the IC performance advancement curve. It is difficult to compare current ATE with systems from the early 1980's, due to the improved performance of instrumentation and the increased complexity of the latest models. A comparison of a configuration from ten years ago to an identical model sold today indicates that there has been about a 50% decrease in the price per ATE function. Most of this decrease is due to increases in the levels of integration and decreases in the cost of the subassemblies. On the other hand, the new ATE models are packed with increasingly more capabilities that are needed to test IC's of escalating complexity and performance. Thus the price of ATE systems has been continuously escalating; and, due to the inherent obsolescence, there appears to be no end in sight. Any new testing methodologies associated with fault based testing, or any other form of nonfunctional testing, will likely necessitate corresponding changes in the ATE under the current IC testing paradigm, further increasing ATE costs.

Another consequence of the increased complexity of ATE systems is that the qualifications and salaries of test engineers have escalated to rival those of circuit design engineers. Their work involves the design of the tests (usually in collaboration with the circuit designers), development of ATE test programs, and design and construction of DIB electronics to interface the DUT to the ATE. We submit that the current IC testing paradigm does not take full advantage of the talents of contemporary test engineers. The compensation and utilization of test engineers is an important component in the cost and quality of IC testing.

Design for testability has become an increasingly important part of the design of all integrated circuits. With designers including testability issues as requirements on par with functional requirements, parts move much more efficiently from design into manufacture. Design for testability may include one or any combination of the following on-chip features: I/O buffers to suitably interface DUT pins to the ATE, multiplexing schemes to reconfigure the DUT's pins, circuit functions to facilitate testing, and built-in self test or BIST. These items increase development cost and increase the die size of the DUT. Thus, there are limits to the functionality that can be put on every DUT to facilitate testing. Since there is no systematic economic model to measure the relative cost-benefits of testing tradeoffs, a rule of thumb used for this limit is about 10% of the active die area. Given the importance of economics in evaluating testing options and tradeoffs, there is a need for interdisciplinary research to develop meaningful economic models for IC manufacturing and, in particular, testing.
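One way to see why on-chip test circuitry is capped at roughly 10% of the die is to work through the yield economics. The sketch below uses a simple Poisson yield model, Y = exp(-A*D), which is a standard textbook approximation rather than anything derived in this paper; all the numbers are assumed for illustration:

```python
import math

# Hypothetical illustration of the die-area cost of DFT/BIST: extra area
# lowers yield (Poisson model Y = exp(-A*D)) and reduces dies per wafer,
# so cost per good die rises faster than area alone. All numbers assumed.

def cost_per_good_die(area_cm2, defect_density, wafer_cost, usable_wafer_area_cm2):
    """Cost per good die under a simple Poisson yield model."""
    die_yield = math.exp(-area_cm2 * defect_density)
    dies_per_wafer = usable_wafer_area_cm2 / area_cm2
    return wafer_cost / (dies_per_wafer * die_yield)

base = cost_per_good_die(0.50, 0.8, 2000.0, 300.0)       # die without test circuitry
with_bist = cost_per_good_die(0.55, 0.8, 2000.0, 300.0)  # +10% area for DFT/BIST
print(f"cost per good die rises {100 * (with_bist / base - 1):.1f}%")
```

Under these assumed numbers a 10% area increase raises the cost per good die by about 14.5%, since both the die count per wafer and the yield move against the larger die; this compounding is what motivates a hard budget on testability silicon.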

The current IC testing paradigm vests the intelligence and processing power for testing largely in the ATE. This, once a strength and a necessity, has become an Achilles heel. This arrangement is unnecessarily inflexible and is putting disproportionate pressure on the ATE to track all the mixed-signal IC advances, including the addition of RF functions. The addition of testability and BIST features on the DUT loosens this pressure a bit; however, this on-chip functionality is limited by the economics that constrain any increase in the DUT's die size. At the same time, we have shown the DIB electronics, which provide the critical interface between the ATE and DUT, to be the weakest link in the test process. Thus, many of the same forces that stimulated the founding of the ATE vendor companies in the early 1970's have reached levels that are ripe for a significant paradigm shift in both test development and the ATE architecture. With a more synergistic approach that more equitably balances the allocation of the test intelligence and processing resources among the ATE, DIB, and DUT, there is an opportunity to greatly enhance both the quality and the economy of IC testing. If this resource allocation were more flexible, it would permit the test engineer to develop test strategies that are optimized to specific applications. Moreover, it would enable ATE vendors to reconsider the ATE architecture, and it would relieve the inherent ATE obsolescence that is unavoidable in the current paradigm. The impact of the test engineer on the quality and economy of testing has been limited by a test paradigm that has seen little change outside the ATE since the 1970's. Perhaps unleashing the talents of the test engineer is one of the keys to implementing a fundamental change in the test paradigm.

With the DIB being at the critical interface between the ATE and DUT, its design and construction are worthy targets for attention. Over the past several years, DIB interface electronics have escalated in both complexity and importance in order to relieve the pressure on the ATE, particularly when the DUT has features that are not well matched to the ATE's capabilities. However, due to their piece-part construction, the design of these interface electronics has not taken advantage of contemporary circuit CAD tools and does not usually use any advanced functionality, e.g., a


Fig. 30. ATE-TID-DUT interface.

DSP. If one thinks about it, the DIB is a significantly underutilized resource, in addition to being a weak link in the test process. Moreover, since the DIB is common to all like DUT's tested, the DIB electronics are not driven by the same economic constraints as electronics on board the DUT. Thus, a natural evolution to consider is to introduce a more VLSI approach to the design and implementation of DIB interface electronics. We expect that this would enable the complexity and robustness of the DIB interface electronics to increase by integrating them onto a single chip or a set of chips, with the functionality appropriately distributed among them. This would also allow the DIB interface electronics to be modeled and simulated using the same CAD tools as the DUT, both with and without the DUT. Let us call such a chip or chip set the integrated Test Interface Device (TID).

All communications and connections between the DUT and ATE could be through the TID, which would be in very close proximity to the DUT. This is illustrated in Fig. 30. Besides its impact on the quality of the DIB interface electronics, the TID could relieve the DUT designer of the need to incorporate into the DUT features that make it work within the ATE environment but are not needed in the application environment, e.g., buffers to drive the ATE loads. The implementation of the DIB interface electronics as an integrated TID could open up a tremendous number of possibilities that were not possible with a piece-part design. Perhaps most important, it could greatly reduce the design cycle and the risk that was tolerated with the piece-part design, even if the TID were to have comparatively greater functionality and complexity. A VLSI approach to the ATE-DUT interface, such as the TID, could allow this interface to realize the same benefits from the rich VLSI CAD environments available today as the DUT does.

VII. CONCLUSION

In this paper we have discussed IC testing for quality assurance in manufacturing from several viewpoints: historic, current status, and future. Our objective has been to relate to the circuits and systems community the growing importance of this area, which is so vital to the semiconductor manufacturing industry. Although our prime focus has been on the testing of mixed-signal and RF IC's, we felt it important to also review digital IC testing. First, mixed-signal IC's include substantial digital circuitry that is often made independently testable (i.e., a design for testability strategy). Second, the R&D efforts for digital and analog IC (baseband and RF) testing have followed different paths that were driven by both economic considerations and the different ways digital and analog IC's advanced over the past 25 years.

We showed in Section II that the emergence of dedicated automatic test equipment (ATE) in the 1970's forever changed the testing of digital and analog IC's. ATE systems were shown to have advanced remarkably in both testing functionality and ease of use. However, due to the fast advancing complexity of digital IC's, testing R&D and the development of ATE for digital IC's responded to the realization that functional testing was quickly becoming impractical. Thus, as we showed in Section III, a rich body of theoretical and experimental literature, circuit architectures with testability features, and CAD tools emerged to support a wide range of nonfunctional testing strategies. In contrast, we showed in Section IV that the major advancements in analog and mixed-signal IC's were in performance, e.g., resolution and dynamic range. Consequently, the testing methods remained functionally based, and the development of ATE for analog and mixed signal IC's responded to the challenges of testing to increasingly higher standards of performance. RF IC's, which were largely developed for military and government systems, were shown in Section V to be the last frontier for ATE developments. We also recounted that RF IC testing has been, and continues to be, based on functional tests.

In response to the need to test IC's with increasingly higher levels of mixed-signal integration, and the emergence of high volume commercial applications of RF IC's, we showed in Sections II, IV, and V that the current trend for ATE is to merge the functions needed to test digital, mixed-signal, and RF IC's into a single ATE platform. In order to keep pace with the need for new test features, the ATE are fast escalating in both complexity and cost. At the same time, the complexity and frequency range of mixed signal IC's (including RF) are causing functional testing and testing at full speed to reach practical limits. In Sections II and VI we also showed the signal handling interface between the ATE and DUT, via the DIB, to be the weakest link in the current testing paradigm. Consequently, the priorities associated with design for testability and the role of the test engineer in IC manufacturing have advanced significantly. Commensurate with their escalating responsibilities, the test engineer's qualifications, capabilities, and salaries have all advanced. Based on these developments and looming deficiencies, we made the case in Section VI that the time was ripe for the development of new test paradigms. In Section VI we introduced the concept of the Test Interface Device or TID. The TID is a low volume, application specific IC that can be designed to house needed testing resources not currently included in the ATE and to facilitate the signal handling between the DUT and ATE.

REFERENCES

[1] W. R. Mann, “Microelectronic device test strategies; A manufacturer's approach,” IEEE Test Conf., 1980.

[2] G. A. Perone, R. S. Malfatt, J. P. Kain, and W. I. Goodheim, “A closer look at testing cost,” IEEE Test Conf., 1982.


[3] J. J. Durgavich and S. M. Schlosser, “Considerations for, and impact of, the ATE/UUT interface,” Autotestcon, 1977.

[4] B. Leslie and F. Matta, “Membrane probe card technology (the future for high performance wafer test),” IEEE Int. Test Conf., 1988.

[5] C. Barsotti, S. Tremaine, and M. Bonham, “Very high density testing,” IEEE Int. Test Conf., 1988.

[6] S. R. Craig, “Alternative strategies for incoming inspection,” IEEE Test Conf., 1979.

[7] R. G. Fulks, “Status report on the proposed IEEE/IEC digital interface for programmable instrumentation,” ASCC, 1974.

[8] Augat Inc., Interconnection Products Division, “VME/VXI engineering design guide,” Revision 6.0, 1991.

[9] W. J. Horth, F. G. Hall, and R. G. Hillman, “Microelectronic device electrical test implementation problems on automated test equipment,” IEEE Test Conf., 1982.

[10] K. Skala, “Continual autocalibration for high timing accuracy,” IEEE Test Conf., 1980.

[11] E. Rosenfeld, “DSP calibration for accurate time waveform reconstruction,” IEEE Int. Test Conf., 1991.

[12] T. Higgins, “Digital signal processing for production testing of analog LSI devices,” IEEE Test Conf., 1982.

[13] M. Abramovici, M. A. Breuer, and A. D. Friedman, Digital System Testing and Testable Design. New York: Computer Science Press, 1990.

[14] L. Ackner and M. R. Barber, “Frequency enhancement of digital VLSI test system,” in Proc. IEEE Int. Test Conf., Sept. 1990, pp. 444–451.

[15] V. D. Agrawal, C. J. Lin, P. Rutkowski, S. Wu, and Y. Zorian, “BIST for digital IC's,” AT&T Tech. J., Mar./Apr. 1994, pp. 30–39.

[16] V. D. Agrawal and T. J. Chakraborty, “High-performance circuit testing with slow-speed testers,” in Proc. Int. Test Conf., Oct. 1994, pp. 302–310.

[17] P. Agrawal, V. D. Agrawal, and S. C. Seth, “A new method for generating tests for delay faults in nonscan circuits,” in Proc. 5th Int. Conf. VLSI Design, Jan. 1992, pp. 4–11.

[18] H. Ando, “Testing VLSI with random access scan,” in Proc. COMPCON, 1980, pp. 50–52.

[19] D. B. Armstrong, “On finding a nearly minimal set of fault detection tests for combinational logic nets,” IEEE Trans. Comput., vol. EC-15, pp. 66–73, Feb. 1966.

[20] J. A. Abraham and S. M. Tatte, “Fault coverage of test programs for microprocessor,” IEEE Test Conf., 1979.

[21] P. Banerjee and J. A. Abraham, “Characterization and testing of physical failures in MOS logic circuits,” IEEE Design Test Comput., vol. 1, pp. 76–86, Aug. 1984.

[22] P. Bardell, W. McAnney, and J. Savir, Built-In Test for VLSI: Pseudorandom Techniques. New York: Wiley, 1987.

[23] S. Barton, “Characterization of high-speed (above 500 MHz) devices using advanced ATE—techniques, results and device problems,” in Proc. Int. Test Conf., Sept. 1989, pp. 860–868.

[24] Y. K. Malaiya and Y. H. Su, “A new fault model and testing technique for CMOS devices,” IEEE Test Conf., 1982.

[25] D. Bhattacharya and J. P. Hayes, “A hierarchical test generation methodology for digital circuits,” J. Electron. Testing: Theory and Applicat., vol. 1, pp. 103–123, 1990.

[26] ——, “Designing for high-level test generation,” IEEE Trans. Computer-Aided Design, vol. 9, pp. 752–766, July 1990.

[27] M. J. Chalkley, “On test generation for IDDq testing of bridging faults in CMOS circuits,” IEEE Int. Test Conf., 1991.

[28] D. Bhattacharya, P. Agrawal, and V. D. Agrawal, “Test generation for path delay faults using binary decision diagrams,” IEEE Trans. Comput., vol. 44, pp. 434–447, Mar. 1995.

[29] R. D. Blanton and J. P. Hayes, “Design of a fast, easily testable ALU,” in Proc. VLSI Test Symp., May 1995, pp. 9–16.

[30] S. Bose, P. Agrawal, and V. D. Agrawal, “Logic systems for path delay test generation,” in Proc. Euro. Design Automation Conf., Sept. 1993, pp. 200–205.

[31] D. Brahme and J. A. Abraham, “Functional testing of microprocessors,” IEEE Trans. Comput., vol. C-33, pp. 475–485, June 1984.

[32] K. M. Butler and M. R. Mercer, Assessing Fault Model and Test Quality. Boston, MA: Kluwer Academic, 1992.

[33] M. A. Breuer and A. D. Friedman, Diagnosis and Reliable Design of Digital Systems. New York: Computer Science Press, 1976.

[34] K. T. Cheng, S. Devadas, and K. Keutzer, “Delay fault test generation and synthesis for testability under a standard scan design methodology,” IEEE Trans. Computer-Aided Design, vol. 12, pp. 1217–1231, Aug. 1993.

[35] W.-T. Cheng and J. H. Patel, “Testing in two-dimensional iterative logic arrays,” in Proc. Int. Symp. Fault-Tolerant Comput., July 1986, pp. 76–81.

[36] C. A. Dean and Y. Zorian, “Do you practice safe test? What we found out about your habits,” in Proc. Int. Test Conf., Oct. 1994, pp. 887–892.

[37] R. Dekker, L. Beenker, and L. Thijssen, “A realistic fault model and test algorithm for SRAM's,” IEEE Trans. Computer-Aided Design, vol. 9, pp. 567–572, June 1990.

[38] E. B. Eichelberger and T. W. Williams, “A logic design structure for LSI testability,” J. Design Automation and Fault-Tolerant Computing, vol. 2, pp. 165–178, 1978.

[39] Y. M. El-Ziq and S. Y. H. Su, “Fault diagnosis of MOS combinational networks,” IEEE Trans. Comput., vol. C-31, pp. 129–139, Feb. 1982.

[40] F. J. Ferguson and J. P. Shen, “Extraction and simulation of realistic CMOS faults using inductive fault analysis,” in Proc. Int. Test Conf., Sept. 1988, pp. 475–484.

[41] M. Franklin, K. K. Saluja, and K. Kinoshita, “Row/column pattern sensitive fault detection in RAM's via built-in self-test,” in Proc. 9th Int. Symp. Fault-Tolerant Comput., June 1989, pp. 36–43.

[42] A. D. Friedman, “Easily testable iterative systems,” IEEE Trans. Comput., vol. C-22, pp. 1061–1064, Dec. 1973.

[43] K. Fuchs, F. Fink, and M. H. Schulz, “DYNAMITE: An automatic test pattern generation system for path delay faults,” IEEE Trans. Computer-Aided Design, vol. 10, pp. 1323–1335, Oct. 1991.

[44] H. Fujiwara and T. Shimono, “On the acceleration of test generation algorithms,” IEEE Trans. Comput., vol. C-32, pp. 1137–1144, Dec. 1983.

[45] S. Funatsu, N. Wakatsuki, and T. Arima, “Test generation systems in Japan,” in Proc. 12th Design Automation Symp., pp. 114–122.

[46] A. R. Ghosh, S. Devadas, and A. R. Newton, “Test generation for highly sequential circuits,” in Proc. Int. Conf. Computer-Aided Design, Nov. 1991, pp. 214–218.

[47] P. Goel, “An implicit enumeration algorithm to generate tests for combinational logic circuits,” IEEE Trans. Comput., vol. C-30, pp. 215–222, Mar. 1981.

[48] J. P. Hayes, “Pseudo-Boolean logic circuits,” IEEE Trans. Comput., vol. C-35, pp. 602–612, July 1986.

[49] IEEE Standard Test Access Port and Boundary-Scan Architecture, IEEE Computer Society, 1990 (additions 1993).

[50] V. S. Iyengar, B. K. Rosen, and I. Spillinger, “Delay test generation 1 & 2,” in Proc. Int. Test Conf., Sept. 1988, pp. 857–876.

[51] N. K. Jha, “Multiple stuck-open fault detection in CMOS logic circuits,” IEEE Trans. Comput., vol. 37, pp. 426–432, Apr. 1988.

[52] N. K. Jha and Q. Tong, “Detection of multiple input bridging and stuck-on faults in CMOS logic circuits using current monitoring,” in Proc. Euro. Design Automation Conf., 1990, pp. 350–354.

[53] J. Khare and W. Maly, “Inductive contamination analysis (ICA) with SRAM application,” in Proc. Int. Test Conf., Oct. 1995, pp. 552–560.

[54] Special Issue on Partial Scan Methods, J. Electron. Testing: Theory and Applicat., vol. 7, Aug./Oct. 1995.

[55] J. P. Lesser and J. J. Shedletsky, “An experimental delay test generator for LSI logic,” IEEE Trans. Comput., vol. C-29, pp. 235–248, Mar. 1980.

[56] S. C. Ma, P. Franco, and E. J. McCluskey, “An experimental chip to evaluate test techniques—experiment results,” in Proc. Int. Test Conf., Oct. 1995, pp. 663–672.

[57] R. Z. Makki, S.-T. Su, and T. Nagle, “Transient power supply current testing of digital CMOS circuits,” in Proc. Int. Test Conf., Oct. 1995, pp. 892–901.

[58] Y. K. Malaiya, A. P. Jayasumana, and R. Rajsuman, “A detailed examination of bridging faults,” in Proc. Int. Conf. Comput. Design, 1986, pp. 78–81.

[59] W. Maly, F. J. Ferguson, and J. P. Shen, “Systematic characterization of physical defects for fault analysis of MOS IC cells,” IEEE Design Test Comput., vol. 2, pp. 13–26, Dec. 1985.

[60] P. C. McGeer, A. Saldanha, P. R. Stephan, R. K. Brayton, and A. L. Sangiovanni-Vincentelli, “Timing analysis and delay-fault test generation using path-recursive functions,” in Proc. Int. Conf. Computer-Aided Design, Nov. 1991, pp. 180–183.

[61] E. J. McCluskey, “Verification testing—A pseudo-exhaustive testing technique,” IEEE Trans. Comput., vol. C-33, pp. 541–546, June 1984.

[62] P. C. Maxwell and R. C. Aitken, “IDDQ testing as a component of a test suite: The need for several fault coverage metrics,” J. Electron. Testing: Theory and Applicat., vol. 3, pp. 305–316, Dec. 1992.

[63] C. M. Maunder and R. E. Tulloss, The Test Access Port and Boundary-Scan Architecture. New York: IEEE Computer Society Press, 1990.

[64] P. Mazumdar, J. H. Patel, and W. K. Fuchs, “Design and algorithms for parallel testing of random-access and content-addressable memories,” in Proc. Design Automat. Conf., July 1987, pp. 688–694.


[65] T. J. Powell, F. Hwang, and B. Johnson, “Test development tools for modular designs,” Texas Instrum. Tech. J., vol. 4, no. 3, pp. 108–114, May–June 1987.

[66] M. M. Pradhan, E. O'Brien, S. L. Lam, and J. Beausang, “Circular BIST with partial scan,” in Proc. Int. Test Conf., Sept. 1988, pp. 719–729.

[67] A. K. Pramanick and S. M. Reddy, “On unified delay fault testing,” in Proc. 6th Int. Conf. VLSI Design, Jan. 1993, pp. 265–268.

[68] J. P. Roth, “Diagnosis of automata failures: A calculus and a method,” IBM J. Res. Develop., pp. 278–281, Oct. 1966.

[69] M. Sachdev and B. Atzema, “Industrial relevance of analog IFA: A fact or a fiction,” in Proc. Int. Test Conf., Oct. 1995, pp. 61–70.

[70] M. Schulz, M. E. Trischler, and T. M. Sarfert, “SOCRATES: A highly efficient automatic test pattern generation system,” IEEE Trans. Computer-Aided Design, vol. 7, pp. 126–137, Jan. 1988.

[71] J. P. Shen and F. J. Ferguson, “The design of easily testable VLSI array multipliers,” IEEE Trans. Comput., vol. C-33, pp. 554–560, June 1984.

[72] G. L. Smith, “Model for delay faults based upon paths,” in Proc. Int. Test Conf., Nov. 1985, pp. 342–349.

[73] J. M. Soden and C. F. Hawkins, “IDDQ testing and defect classes—A tutorial,” Proc. Custom Integrated Circuits Conf., May 1995.

[74] T. Sridhar and J. P. Hayes, “A functional approach to testing bit-sliced microprocessors,” IEEE Trans. Comput., vol. C-30, pp. 563–571, Aug. 1981.

[75] T. Stanion, D. Bhattacharya, and C. Sechen, “An efficient method for generating exhaustive test sets,” IEEE Trans. Computer-Aided Design, vol. 14, pp. 1516–1525, Dec. 1995.

[76] J. H. Stewart, “Future testing of large LSI circuit cards,” Digest of Papers, 1977 Semiconductor Test Symposium, pp. 6–15.

[77] B. Vinnakota, “Monitoring power dissipation for fault detection,” in Proc. VLSI Test Symp., May 1996, pp. 483–488.

[78] R. L. Wadsack, “Fault modeling and logic simulation of CMOS and MOS integrated circuits,” Bell Syst. Tech. J., vol. 57, pp. 1449–1474, May–June 1978.

[79] D. L. Wheater, P. Nigh, J. T. Mechler, and L. Lacroix, “ASIC test cost/strategy trade-offs,” in Proc. Int. Test Conf., Oct. 1994, pp. 93–102.

[80] Y. You and J. P. Hayes, “A self-testing dynamic RAM chip,” IEEE J. Solid-State Circuits, vol. SC-20, pp. 428–435, Feb. 1985.

[81] M. J. Y. Williams and J. B. Angell, “Enhancing testability of large scale integrated circuits via test points and additional logic,” IEEE Trans. Comput., vol. C-22, pp. 46–60, Jan. 1973.

[82] Y. Zorian, A. J. Van de Goor, and I. Schanstra, “An effective BIST scheme for ring-address type FIFO's,” in Proc. Int. Test Conf., Oct. 1994, pp. 378–387.

[83] M. Mahoney, DSP-Based Testing of Analog and Mixed-Signal Circuits. New York: IEEE Computer Society Press, 1987.

[84] F. Brglez, “Digital signal processing considerations in filter-CODEC testing,” in Proc. Int. Test Conf., 1981.

[85] W. J. Bowhers, “Filtering methods for fast ultra-low distortion measurements,” in Proc. Int. Test Conf., 1982.

[86] K. Karube, Y. Bessho, T. Takakura, and K. Gunji, “Advanced mixed signal testing by DSP localized tester,” in Proc. Int. Test Conf., 1991.

[87] A. Grochowski and L. Hsieh, “THD and SNR tests using the simplified Volterra series with adaptive algorithms,” in Proc. Int. Test Conf., 1995.

[88] X. Haurie and G. Roberts, “Arbitrary-precision signal generation for band-limited mixed-signal testing,” in Proc. Int. Test Conf., 1995.

[89] K. F. Zimmermann, “SiPROBE—A new technology for wafer probing,” Proc. Int. Test Conf., 1995.

[90] LTX Corp., “PCS family board data sheet.”

[91] LTX Corp., “Syncretic test system; Product description.”

[92] D. Haworth, Tektronix, “Designing and evaluating VXI systems.”

[93] Teradyne, “A585/A575/A565 series of advanced mixed signal test systems; System description.”

[94] Hewlett Packard, “HP9490 series mixed signal LSI test systems; System specifications.”

[95] R. M. Gray, “Oversampled sigma-delta modulation,” IEEE Trans. Commun., May 1987.

[96] R. A. Witte, Spectrum and Network Measurements. Englewood Cliffs, NJ: Prentice-Hall, 1991.

[97] H. H. Zheng and A. Balivada, “A novel test generation approach for parametric faults in linear analog circuits,” Proc. Int. Test Conf., 1995.

[98] N. H. E. Weste and K. Eshraghian, Principles of CMOS VLSI Design: A Systems Perspective, 2nd ed. Reading, MA: Addison-Wesley, 1992.

[99] K. R. Laker and W. M. C. Sansen, Design of Analog Integrated Circuits and Systems. New York: McGraw-Hill, 1994.

[100] B. R. Veillette and G. A. Roberts, “A built-in self test strategy for wireless communications systems,” in Proc. Int. Test Conf., Oct. 1995, pp. 930–939.

[101] S. Sunter, “The P1149.4 mixed signal test bus: Costs and benefits,” in Proc. Int. Test Conf., 1995, pp. 440–450.

[102] K. Lofstrom, “A demonstration IC for the P1149.4 mixed signal test standard,” Proc. Int. Test Conf., 1996.

Andrew Grochowski (M'89) received the B.Sc. and M.Sc. degrees in electrical engineering from Lehigh University, Bethlehem, PA, in 1988 and 1992, respectively.

He joined AT&T Microelectronics in 1988, where he participated in various stages of mixed-signal product development. In 1993 he joined AT&T Bell Laboratories as a Member of Technical Staff to work on design and test development of DSP based mixed-signal IC's. He is currently a member of the Mixed Signal Laboratory, a branch of the DSP Research and Development Center at Texas Instruments, Inc. His interests are in the areas of automated mixed-signal integrated circuit testing, design for testability, and design/test development integration.

Debashis Bhattacharya (S'84–M'88–SM'97) received the B.Tech. (Honors) degree in 1983 from the Indian Institute of Technology, Kharagpur, India. He received the M.S. and Ph.D. degrees from the Computer, Information and Control Engineering (CICE) program of the University of Michigan, Ann Arbor, in 1985 and 1988, respectively.

From 1983 to 1988, he worked on high-level and hierarchical test generation at the University of Michigan. Between 1988 and 1995, he was with the Electrical Engineering Department, Yale University, New Haven, CT, first as an Assistant Professor (1988–1992), and later as an Associate Professor (1992–1995). At Yale, he conducted extensive research in digital testing and simulation, parallel algorithms, ASIC architectures, and multiple-valued logic. His achievements include the development of efficient binary decision diagram based techniques for digital circuit testing under both stuck-at and path delay fault models, time- and space-efficient parallel algorithms for testing and simulation of digital circuits, and a theoretical framework for estimating the performance of parallel algorithms commonly used in CAD. Since 1995, he has been with the DSP R&D Center at Texas Instruments, Inc. His current research interests include digital and mixed-signal circuit testing, and low-power design of digital signal processing circuits. He has held consulting positions at AT&T Bell Laboratories, Murray Hill, NJ, and at the David Sarnoff Research Center, Princeton, NJ. He has authored more than 25 papers and a book titled Hierarchical Modeling for VLSI Circuit Testing (Boston, MA: Kluwer Academic, 1990).

Dr. Bhattacharya is a member of the IEEE Computer Society, the IEEE Circuits and Systems Society, and Tau Beta Pi.

T. R. Viswanathan (S'62–M'64–SM'76–F'97) received the B.Sc. degree in physics in 1956 from the University of Madras, India, the D.I.I.Sc. degree in electrical communications engineering in 1959 from the Indian Institute of Science, Bangalore, India, the M.Sc. degree in electrical engineering in 1960, and the Ph.D. degree in electrical engineering in 1964 from the University of Saskatchewan, Saskatoon, Canada.

He was Professor of electrical engineering at the Indian Institute of Technology, Kanpur, India, the University of Michigan, Carnegie Mellon University, and the University of Waterloo, Ontario, Canada. From 1985 to 1995, he was Supervisor of the Analog Integrated Circuit Design Group at AT&T Bell Laboratories. He is presently the Director of the Circuit Technology Laboratory, Texas Instruments Inc., Dallas. His research interests are in mixed-signal integrated systems design and testing.

Dr. Viswanathan was an Associate Editor of the IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS.


Ken Laker (S'70–M'73–SM'78–F'83) received the B.E. degree in electrical engineering from Manhattan College, and the M.S. and Ph.D. degrees from New York University.

From 1973 to 1977, he served as a USAF Officer at the Air Force Cambridge Research Labs. In 1977 he joined AT&T Bell Laboratories, where he conducted and managed R&D. He joined Bellcore as District Manager in 1983 and was appointed to the University of Pennsylvania faculty in 1984 as Professor and Chair of the Electrical Engineering Department. He served as Department Chair until July 1992. In 1990 he was appointed the Alfred Fitler Moore Professor of Electrical Engineering. He was also founder and Director of the Center for Communications and Information Science and Policy from 1985 to 1992. His work in microelectronic filters has contributed to four textbooks, more than 80 scientific articles, and six patents.

Dr. Laker is a member of Eta Kappa Nu and Sigma Xi. He is listed in Who's Who in Science and Engineering, Who's Who in America, and Who's Who in the World. In 1984 he received an IEEE Centennial Medal, and in 1994 he received the AT&T Clinton Davisson Trophy for his patent in switched capacitor circuits. He was President of the IEEE Circuits and Systems Society in 1984. In 1991 he was elected to serve on the IEEE Board of Directors, representing the five IEEE Technical Societies in the areas of circuits and devices, for the 1992–1994 term. In 1994 he served as Chair of the IEEE Philadelphia Section. In 1993 he was elected to the IEEE office of Vice President of Educational Activities and Chair of the Educational Activities Board. As IEEE Vice President of Educational Activities, he was responsible for all educational programs and services provided to over 300 000 IEEE members worldwide, and for oversight of the accreditation of electrical, computer, and related undergraduate engineering programs in the United States. His professional activities also include past tenures as Associate Editor of the Journal of the Franklin Institute and member of the Educational Advisory Board for the International Engineering Consortium (IEC). He was also Director of the New Technologies Institutes for the 1991 and 1992 IEC Eastern Communications Forums (ECF). He also was chair and speaker at University-Industry Colloquia for ECF 90, NCF 95, and NCF 96.
