

Quest 123: a benchmarking system for technology assessment at NASA

Madjid Tavana
La Salle University, Philadelphia, Pennsylvania, USA

Keywords: Benchmarking, Decision making, Analytical hierarchy process, Project management

Abstract: Technology assessment is a difficult task at the Mission Control Center (MCC). The difficulty is inherent in the unavailability of structured information and exacerbated by the lack of a systematic assessment process. New technology deployment to the MCC requires testing and certification in three labs: Quest 1, 2, and 3. The Mission Control Center Systems (MCCS) architecture team, a multidisciplinary group of MCC experts and scientists, is chartered to redefine the next generation of MCCS by developing a systematic process to assess and certify new technologies. Quest 123 is a benchmarking tool that was successfully implemented at the Johnson Space Center to assess and certify new technology initiatives for each lab before final deployment to the MCC. Quest 123 integrates the analytic hierarchy process with an additive multi-criteria decision-making model into a dynamic benchmarking framework.

1. Introduction
The concept of benchmarking has become synonymous with achieving successful performance. The original meaning of the word benchmark refers to a metric unit on a scale of measurement. A benchmark is the standard of excellence against which one measures and compares actual performance (Jarrar and Zairi, 2001). While a number of definitions for benchmarking can be found in the literature, they all essentially share the same theme. Benchmarking is the process of continuously improving performance by adapting outstanding practices and processes found inside or outside the organization (Cox et al., 1997; Francis and Holloway, 2002; Jarrar and Zairi, 2001; Spendolini, 1992). Some benefits from benchmarking include establishing goals, meeting requirements, measuring performance, and becoming competitive (Camp, 1989; Drew, 1997). There are many typologies that have been developed to describe or classify the characteristics of benchmarking. The most commonly cited is Camp's (1995) distinction between internal, competitive, functional, and strategic benchmarking. Internal benchmarking studies the best practices in an organization, competitive benchmarking deals with the best competitors in an industry, functional benchmarking investigates companies with similar processes in the same function outside one's industry, and strategic benchmarking studies the best global business practices. Czuchry et al. (1995), Dorsch and Yasin (1998) and Jackson et al. (1994) provide a good survey of benchmarking literature.


This research was supported by NASA grant number NAG 9-1526. The author is grateful to Jack Knight, Steven Gonzales, Anthony Bruins, the Mission Control Center Systems architecture team, and Quest Labs, all from Johnson Space Center, and his colleagues Dennis Kennedy and Anne Marie Smith for their assistance with this research project.


Benchmarking: An International Journal, Vol. 11 No. 4, 2004, pp. 370-384. © Emerald Group Publishing Limited, 1463-5771. DOI 10.1108/14635770410546773

The fundamental problem in any benchmarking is finding meaningful data to use as a benchmark (Landeghem and Persoons, 2001). Most benchmarking models focus on the "hard" rather than "soft" data (Cassell et al., 2001). Sometimes softer measures are ignored because of the difficulty in measuring them. Reliance on hard measures alone often undermines the strategies the company must pursue to survive in the long term. Today's smart organizations are searching for ways to integrate soft measures into their regular performance evaluation (Geanuracos and Meiklejohn, 1993). Most current tools, such as "spider" or "radar" diagrams and the "Z" chart for gap analysis, focus on presenting the hard data in some graphical form (Camp, 1995). Statistical methods utilized in benchmarking, such as regression and various descriptive statistics, are also concerned with hard data (Blumberg, 1994; Schefcyzk, 1993). One approach which is used mostly for softer data is the analytic hierarchy process (AHP). AHP has been a very popular technique for determining weights in multi-criteria decision-making (MCDM) problems (Shim, 1989; Zahedi, 1986). The advantage of AHP is its capability to elicit judgments and scale them uniquely using a procedure that measures the consistency of these scale values (Saaty, 1989). AHP utilizes the perceptual and subjective values of decision-makers in a weighted scoring technique to analyze various benchmarks (Sarkis, 2001). AHP has been successfully utilized for benchmarking in process performance (Frei and Harker, 1999), logistics performance (Korpela and Tuominen, 1996), strategic performance (Lu et al., 1994; Partovi, 2001), quality performance (Min et al., 2002), and project management performance (Prasanta, 2002).

Although benchmarking is widely used in the public and private sectors, public organizations and government agencies typically face unique operational issues and strategic environments that differ from private organizations because they are not required to maximize and increase profitability (Dorsch and Yasin, 1998). Nevertheless, they are broadly similar in organizational goals and objectives, and performance assessment is considered to be of equal importance in both sectors. Internal benchmarking is largely used at NASA to accelerate organizational transformation and institutionalize continual improvement in mission operations, aviation, information technology, and quality assurance. This paper presents Quest 123, an internal benchmarking tool that integrates AHP with an additive MCDM model into a dynamic benchmarking framework for technology assessment at NASA.

2. Benchmarking at NASA
Throughout the history of human spaceflight, the Mission Control Center Systems (MCCS) at the Johnson Space Center (JSC) has been a model for mission planning, communications, command, and control architectures. The mission operations directorate (MOD), overseeing the Mission Control Center (MCC) and MCCS, must ensure that the overall system performance meets current and planned needs while looking for innovative ways to curtail operational costs and continually address evolving operational scenarios. The directorate must also enforce the highest return on investment on the funding it receives annually. This vision provides a basis for the long-term as well as day-to-day decision-making that ultimately impacts requirements, design change, and budget plans. The MCCS architecture team, a multidisciplinary group of MOD experts and scientists, is chartered to redefine the next generation of MCCS by developing integrated systems design architecture.


The original MCCS was designed as nonintegrated pieces of a whole, independently supporting the larger goal of human spaceflight operations. While more advanced computing capabilities have allowed the MCCS to morph from completely independent functioning systems into a distributed design architecture, the technological advances of the last several years have allowed for the potential implementation of a true integrated systems design architecture. The MCCS, which has always served as the nerve center of US human spaceflight operations, has evolved from a centralized mainframe computer architecture, in which all aspects of mission planning, communications, command, and control were performed predominantly from multiple buildings located at JSC, to a distributed architecture with multiple remote facilities located around the world.

The current MCCS is a functionally robust set of distributed systems primarily supporting the space shuttle program (SSP) and the International Space Station (ISS) mission operations. The MCCS also performs the following functions: real-time data (telemetry and trajectory) monitoring and analysis; real-time command; near real-time data storage, retrieval, and analysis; space-to-ground and ground-to-ground voice, video, data, and mail distribution; as well as real-time and near real-time planning and simulations.

Forged around the uniquely complex and demanding requirements of human spaceflight, the MCCS has developed and evolved within the limits of technological capabilities of the time. The dynamic and continually evolving environment in which the MCCS functions has demanded that the over-arching structure of the systems – the systems architecture – continues to evolve as well.

As a result of dwindling funding for the foreseeable future, the MCC must focus on ways to reduce costs while still maintaining and even expanding its capability to support the SSP and an increasingly larger and more complex ISS. As part of an earlier MCC mission computing strategy study, the MCC adopted the following set of goals that the MCCS architecture team uses as major evaluation factors in its early proof-of-concept work toward the vision:

Design for change. Commercial standards compliant, standard interfaces, and simplified configuration management processes for migration.

Design for flexibility. Accommodate future manned spaceflight program requirements and operations concepts without significant impacts to existing architecture design.

Design for connectivity. Information transfer between international partners, MCC users (flight control), the payload community, and joint space operations with government, industry, and academia collaborations, using the Internet where practical.

Design for access. Security commonality/simplification – robust but simple security for user concurrent access to multiple security levels that does not constrain system functionality.

Design for cost reduction. Commodification (a move toward an infrastructure that can be supplied by many different vendors), consolidation, operational concept modifications, and re-engineering.

Design for ease of use and effectiveness. Intuitive graphical user interface layouts that support data visualization and intelligent systems, minimal steps for task completion, effective decision-making, and job productivity.


Design from a systems perspective. Develop and test new systems and process improvements that address all issues and concerns relative to the overall systems as a whole entity.

Deployment of new technology initiatives to the MCC is a long and difficult task requiring a four-phase certification process in three separate labs: Quest 1, 2, and 3. Approximately 5 percent of all technology initiatives reviewed are certified in all labs and deployed to the MCC. A technology initiative is initially considered for possible adoption and testing during phase I of the testing and certification process. Once adopted, the initiative is put through stand-alone testing in Quest 1 during phase II of the testing process. Upon certification in Quest 1, the initiative is put through additional systems integration testing in Quest 2 during phase III of the process. Once certified in Quest 2, the initiative is ready for human and systems integration testing in Quest 3 during phase IV of the process. Finally, initiatives certified in Quest 3 are deployed to the MCC.

Prior to the implementation of Quest 123, the MCCS architecture team used a simple and unstructured naive approach to assess and certify new technology initiatives in each lab. Using this naive approach, MCCS architecture team members assigned a subjective numerical score between 0 and 100 to each technology initiative. The average score of all team members was used to certify or deny certification to a technology initiative. No formal assessment guidelines were provided to the MCCS architecture team. Once the mean score was calculated, the team used this score to decide whether the technology had adequate merit to be certified. MCC management had expressed concerns about the lack of a systematic framework and the potential for inconsistency in technology assessment. Quest 123 was created in response to this concern.

3. The model
Quest 123 uses a filtering system to certify technology initiatives in each lab. In order to develop this system, we analyzed 112 initiatives reviewed by the MCCS architecture team over the last two years. The median score for all 112 initiatives reviewed during the initial screening in phase I was 52, while the mean score was 68 (Table I). Of the 48 initiatives approved for follow-up testing in Quest 1 in phase II, 45 had an overall score higher than the median, while only 32 scored higher than the mean. Since the means were larger than the medians, the distribution was highly skewed. As a result, the medians provided a better measure of central tendency and approximation for the number of initiatives approved in all assessment phases.

Table I. Historical cases and their characteristics

Assessment phase   Initiatives reviewed   Median   Mean   Initiatives approved   Overall score > median   Overall score > mean
Phase I            112                    52       68     48                     45                       32
Phase II           48                     55       71     25                     23                       14
Phase III          25                     58       74     9                      8                        6
Phase IV           9                      61       77     5                      5                        3


Quest 123 is an MCDM model that utilizes AHP to develop an overall score for each technology initiative under consideration. The project evaluation and selection methods proposed in the literature can be classified into two categories: compensatory and non-compensatory models (Oral et al., 2001). Compensatory models reduce a multidimensional evaluation to a single-dimensional one through an aggregating "value" function, thereby establishing trade-offs between criteria. Multi-objective mathematical programming, multi-attribute utility theory, and AHP can be cited as examples for this category. In non-compensatory models, such as Ordinal Ranking and ELECTRE, trade-offs between criteria are restricted. The proposed method in this study belongs to the first group, and it is similar to the multi-criteria analysis methods of Oral et al. (2001) and Tavana (2003). To formulate the model algebraically, let us assume $w_i$ is the importance weight of the $i$th criterion $(i = 1, 2, \ldots, m)$; $z_i^{jkf}$ is the preference rating of the $k$th technology initiative for the $i$th criterion in the $f$th phase of the evaluation process by the $j$th decision-maker $(i = 1, 2, \ldots, m;\ j = 1, 2, \ldots, n;\ k = 1, 2, \ldots, p;\ f = 1, 2, \ldots, q)$; $S^{kf}$ is the overall score of the $k$th technology initiative in the $f$th phase of the evaluation process $(k = 1, 2, \ldots, p;\ f = 1, 2, \ldots, q)$; and $\tilde{S}^f$ is the median overall score for the $f$th phase of the evaluation process $(f = 1, 2, \ldots, q)$, where $0 \le w_i \le 1$, $0 \le z_i^{jkf} \le 10$, and $0 \le S^{kf} \le 10$.

Assume that $a^{jkf}$ is the alternative representing the $j$th decision-maker and the $k$th technology initiative in the $f$th phase of the evaluation process, and $Z = [z_i^{jkf}]$ is the matrix of preference ratings of the $k$th technology initiative for the $i$th criterion in the $f$th phase of the evaluation process by the $j$th decision-maker:

$$
\begin{array}{c|cccc}
w_i & a^{111} & a^{211} & \cdots & a^{npq} \\
\hline
w_1 & z_1^{111} & z_1^{211} & \cdots & z_1^{npq} \\
w_2 & z_2^{111} & z_2^{211} & \cdots & z_2^{npq} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
w_m & z_m^{111} & z_m^{211} & \cdots & z_m^{npq}
\end{array}
$$

$$
S^{kf} = \frac{\displaystyle\sum_{i=1}^{m} \sum_{j=1}^{n} w_i z_i^{jkf}}{n} \qquad (i = 1, 2, \ldots, m \text{ and } j = 1, 2, \ldots, n)
$$

$$
S^{kf} \ge \tilde{S}^f \quad \text{(Grant certification)}
$$

$$
S^{kf} < \tilde{S}^f \quad \text{(Deny certification)}
$$
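To make the additive rule concrete, the following is a minimal Python sketch, not the authors' implementation; the function names and the example weights and ratings are hypothetical. It computes the overall score from the criterion weights and the decision-makers' 0-10 ratings, then applies the median cutoff.

```python
def overall_score(weights, ratings):
    """S_kf = (1/n) * sum over judges j and criteria i of w_i * z_i^{jkf}."""
    n = len(ratings)  # number of decision-makers (one row of ratings per judge)
    return sum(w * z for row in ratings for w, z in zip(weights, row)) / n

def certify(score, phase_median):
    """Grant certification when the overall score meets the phase median."""
    return score >= phase_median

# Hypothetical example: three decision-makers rating one initiative on three criteria.
weights = [0.5, 0.3, 0.2]                    # AHP weights, summing to 1.0
ratings = [[8, 7, 6], [9, 6, 7], [7, 8, 8]]  # z_i^{jkf}: one row per decision-maker
s = overall_score(weights, ratings)
print(round(s, 2), certify(s, phase_median=5.64))  # -> 7.5 True
```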

The MCCS architecture team identified the $i$ criteria to be used as major evaluation criteria when early proof-of-concept work was under consideration. The importance weight of each criterion ($w_i$) was captured and measured with AHP using the questionnaire presented in Appendix 1. The MCCS architecture team was asked to provide their subjective assessment of each pairwise comparison. Assume that, in an MCCS architecture team member's mind, $c_1, c_2, \ldots, c_i$ are the criteria that contribute to a technology initiative's success. The team member's goal is to assess the relative importance of these factors.


Saaty's AHP (Forman and Gass, 2001; Saaty and Vargas, 1998) is a method of deriving a set of weights to be associated with each of the $i$ criteria. Initially, the team member is asked to compare each possible pair $c_j$, $c_k$ of criteria and provide quantified judgments on which one of the criteria is more important and by how much. These judgments are represented by an $i \times i$ matrix:

$$A = (a_{jk}) \qquad (j, k = 1, 2, \ldots, i)$$

If $c_j$ is judged to be of equal importance as $c_k$, then $a_{jk} = 1$; if $c_j$ is judged to be more important than $c_k$, then $a_{jk} > 1$; and if $c_j$ is judged to be less important than $c_k$, then $a_{jk} < 1$.

$$a_{jk} = 1/a_{kj}, \quad a_{jk} \ne 0$$

Thus, the matrix A is a reciprocal matrix; that is, the entry $a_{jk}$ is the inverse of the entry $a_{kj}$. $a_{jk}$ reflects the relative importance of $c_j$ compared with criterion $c_k$. For example, $a_{12} = 1.25$ indicates that $c_1$ is 1.25 times as important as $c_2$.

Then the vector $w$, representing the relative weights of each of the $i$ criteria, can be obtained by computing the normalized eigenvector corresponding to the maximum eigenvalue of matrix A. An eigenvalue of A is defined as a $\lambda$ that satisfies the following matrix equation:

$$Aw = \lambda w$$

where $\lambda$ is a constant, called the eigenvalue, associated with the given eigenvector $w$. Saaty (1977, 1980, 1983, 1990, 1994) has shown that the best estimate of $w$ is the one associated with the maximum eigenvalue ($\lambda_{\max}$) of the matrix A. Since the sum of the weights should be equal to 1.00, the normalized eigenvector is used. Saaty's algorithm for obtaining this $w$ is incorporated in the software Expert Choice (2000).

One of the advantages of AHP is that it encourages team members to be consistent in their pairwise comparisons. Saaty suggests a measure of consistency for the pairwise comparisons. When the judgments are perfectly consistent, the maximum eigenvalue $\lambda_{\max}$ should equal $i$, the number of criteria that are compared. In general, the responses are not perfectly consistent, and $\lambda_{\max}$ is greater than $i$. The larger the $\lambda_{\max}$, the greater is the degree of inconsistency. Saaty defines the consistency index (CI) as $(\lambda_{\max} - i)/(i - 1)$ and provides the following random index (RI) table for matrices of order 3-10:

n    3     4     5     6     7     8     9     10
RI   0.58  0.90  1.12  1.32  1.41  1.45  1.49  1.51

This RI is based on a simulation of a large number of randomly generated weights. Saaty recommends the calculation of a consistency ratio (CR), which is the ratio of CI to the RI for the same order matrix. A CR of 0.10 or less is considered acceptable. When the CR is unacceptable, the team member is made aware that his or her pairwise comparisons are logically inconsistent, and he or she is encouraged to revise them.
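A compact sketch of these AHP calculations is shown below. It assumes numpy is available, the example matrix is hypothetical, and it is not the Expert Choice software; it derives the weights from the principal eigenvector of a reciprocal matrix A and reports CR = CI/RI.

```python
import numpy as np

# Saaty's random index values for matrices of order 3-10 (from the table above).
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.32, 7: 1.41, 8: 1.45, 9: 1.49, 10: 1.51}

def ahp_weights(A):
    """Return (normalized weights, consistency ratio) for a reciprocal matrix A."""
    A = np.asarray(A, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)               # position of lambda_max
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                              # weights normalized to sum to 1.00
    i = A.shape[0]
    ci = (eigvals[k].real - i) / (i - 1)      # consistency index
    return w, ci / RI[i]                      # consistency ratio

# Hypothetical 3x3 matrix: c1 twice as important as c2, four times as important as c3.
A = [[1.0, 2.0, 4.0],
     [0.5, 1.0, 2.0],
     [0.25, 0.5, 1.0]]
w, cr = ahp_weights(A)
print(np.round(w, 3), round(cr, 3))           # CR <= 0.10 is acceptable
```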

The responses were processed with Expert Choice (2000), and those with inconsistency ratios greater than 0.10 were asked to revisit their judgments, as suggested by Saaty. The mean importance weights were calculated for the MCCS architecture team after necessary adjustments were made to inconsistent responses.


Each MCCS architecture team member was presented with his or her individual score along with the group mean weights. Team members were given the opportunity to revisit their judgments and make revisions to their pairwise comparison scores based on this feedback. Some MCCS architecture team members took advantage of this opportunity and revised their judgments in the second round. The mean importance weights for the first and second rounds are presented in Table II. As shown, the second-round results were slightly different from the first-round results.

There has been some criticism of AHP in the operations research literature. Harker and Vargas (1987) show that AHP does have an axiomatic foundation, the cardinal measurement of preferences is fully represented by the eigenvector method, and the principles of hierarchical composition and rank reversal are valid. On the other hand, Dyer (1990a) has questioned the theoretical basis underlying AHP and argues that it can lead to preference reversals based on the alternative set being analyzed. In response, Saaty (1990) explains how rank reversal is a positive feature when new reference points are introduced. The geometric aggregation rule is used to avoid the controversies associated with rank reversal (Dyer, 1990a, b; Harker and Vargas, 1990; Saaty, 1990; Tavana, 2002).

4. A technology certification decision scenario at NASA
Quest 123 is a four-phase filtering process utilized at NASA to certify technology initiatives in three test labs before final deployment to the MCC. Figure 1 shows a graphical presentation of Quest 123. The process begins with an initial screening in phase I $(f = 1)$. All $j$ MCCS architecture team members individually assess the $k$th technology initiative under consideration by providing a performance rating $z_i^{jk1}$ between 0 and 10 for each of the $i$ evaluation criteria adopted by the MCC. All individual performance ratings in phase I are combined with the previously established performance weights of the criteria ($w_i$), using the model presented earlier, to calculate an overall score ($S^{k1}$) for the $k$th initiative in phase I. This score is compared with the median score ($\tilde{S}^1$) of the first phase retrieved from the Quest database. If $S^{k1} \ge \tilde{S}^1$, the initiative is selected for phase II testing in the Quest 1 lab. If $S^{k1} < \tilde{S}^1$, the initiative is rejected for further testing. The Quest database is updated with all technology initiative scores, whether the initiative is accepted or rejected. The inclusion of new entries into the Quest database results in recalculation of the median score.
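A minimal sketch of this filter-and-update step follows (the class and method names are hypothetical; the actual Quest database is not described beyond what the text states). The new score is compared against the current phase median, then stored, and the median is recomputed.

```python
from statistics import median

class QuestPhase:
    """One phase of the Quest database: past overall scores and the median cutoff."""

    def __init__(self, historical_scores):
        self.scores = list(historical_scores)

    @property
    def cutoff(self):
        return median(self.scores)       # phase median, recomputed from all stored scores

    def assess(self, score):
        """Compare against the current median, then record the score either way."""
        passed = score >= self.cutoff
        self.scores.append(score)        # database is updated whether accepted or rejected
        return passed

# Hypothetical phase I history; a new initiative scoring 6.76 passes the filter.
phase1 = QuestPhase([5.2, 5.9, 6.4, 4.8, 7.1])
print(phase1.assess(6.76), round(phase1.cutoff, 2))  # True, plus the recalculated median
```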

A Quest database was initially created with 112 records for phase I ($\tilde{S}^1 = 5.64$), 48 records for phase II ($\tilde{S}^2 = 6.48$), 25 records for phase III ($\tilde{S}^3 = 7.36$), and nine records for phase IV ($\tilde{S}^4 = 8.14$). The first technology initiative considered by Quest 123 was the 113th initiative.

Table II. First and second round importance weights

Evaluation factor              Round 1   Round 2
Ease of use and effectiveness  0.296     0.293
Systems perspective            0.184     0.191
Flexibility                    0.140     0.132
Access                         0.086     0.083
Connectivity                   0.091     0.091
Change                         0.088     0.084
Cost reduction                 0.116     0.126
Inconsistency ratio            0.081     0.068


The MCCS architecture team members individually evaluated this technology initiative and assigned a numerical preference rating between 0 and 10 for each of the seven evaluation criteria adopted by the MCC. The combination of individual performance ratings with the overall performance weights of the criteria, using the model presented earlier, resulted in an overall score for each team member.

Figure 1. New technology assessment process


The median score of the individual team member scores was used as the overall score for the 113th initiative in phase I ($S^{113,1} = 6.76$). Since $S^{113,1} \ge \tilde{S}^1$, the 113th technology initiative was approved for adoption and stand-alone testing in Quest 1. As a result, the number of cases in phase I was increased by one to 113, and a new median was recalculated for this phase ($\tilde{S}^1 = 5.67$).

The MCCS architecture team members individually performed the stand-alone testing on this initiative during phase II in Quest 1. Again, a numerical preference rating between 0 and 10 for each of the seven evaluation criteria adopted by the MCC was given to this initiative for stand-alone testing. Aggregation of the individual scores produced a median score of 6.94 for this initiative in phase II ($S^{113,2} = 6.94$). Since $S^{113,2} \ge \tilde{S}^2$, the 113th technology initiative was approved for adoption and systems integration testing in Quest 2. As a result, the number of cases in phase II was increased by one to 49, and a new median was recalculated for this phase ($\tilde{S}^2 = 6.54$).

The MCCS architecture team members individually performed systems integration testing on this initiative during phase III in Quest 2. Again, a numerical preference rating between 0 and 10 for each of the seven evaluation criteria adopted by the MCC was given to this initiative for systems integration testing. Aggregation of the individual scores produced a median score of 6.86 for this initiative in phase III ($S^{113,3} = 6.86$). Since $S^{113,3} < \tilde{S}^3$, the 113th technology initiative was denied certification and removed from the process and further testing. It should be noted that deletion of this initiative does not change the number of cases and the median of phase III. All technology initiatives are similarly assessed by Quest 123.
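The same walk-through can be replayed from the pre-update medians quoted above ($\tilde{S}^1 = 5.64$, $\tilde{S}^2 = 6.48$, $\tilde{S}^3 = 7.36$); the short sketch below is purely illustrative.

```python
# Pre-update phase medians and the 113th initiative's overall scores, as quoted in the text.
phase_medians = {"I": 5.64, "II": 6.48, "III": 7.36}
scores_113 = {"I": 6.76, "II": 6.94, "III": 6.86}

for phase, cutoff in phase_medians.items():
    s = scores_113[phase]
    verdict = "certified" if s >= cutoff else "denied"
    print(f"Phase {phase}: S = {s} vs median {cutoff} -> {verdict}")
    if s < cutoff:
        break  # denied initiatives are removed from further testing
```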

The process described here is an iterative one whereby the median is used as a filter to separate the higher-ranked initiatives from the lower-ranked initiatives that were evaluated in the Quest labs. Initially, a certain number of initiatives qualify based on the overall median and are sent to Quest 1 in phase I. The scores are re-evaluated in Quest 1 during phase II, and those ranked above the median are allowed to proceed to Quest 2 for phase III testing. A similar process is repeated for all four phases in all three Quest labs until the technology initiative is certified to be deployed to the MCC.

5. Evaluation of Quest 123
Prior to the implementation of Quest 123, the MCCS architecture team used the simple scoring method described earlier to evaluate new technology initiatives at the MCC. Due to the lack of a systematic process and thorough analysis, MCC management and the team were concerned about the potential for inconsistency and poor decisions. The team decided to replace the current simple scoring method with a more complete, structured, and analytical process.

Quest 123 was evaluated by 13 MCCS architecture team members. The questionnaire presented in Appendix 2 was used to compare the post-quest 123 system with the system currently used by the MCCS architecture team. Team members were asked to compare the two systems using the following seven evaluation factors adopted by the team:

(1) Flexibility. The number of criteria, technology initiatives, decision-makers, or phases of the process at the MCC is not predetermined. The new system should be flexible enough to be able to handle these changes.

(2) Simplicity. The new system needs to be theoretically sound and at the same time easy to understand by users and management.


(3) Analytical. The new system has to be analytical and capable of evaluating multi-criteria problems.

(4) Completeness. The new system should have facilities to collect and integrate a wide range of information concerning change, flexibility, connectivity, access, cost reduction, ease of use and effectiveness, and systems perspective.

(5) Structured. The new system should have manageable steps with a facility to integrate the results from each step.

(6) Decision process quality. The new system should improve decision process quality.

(7) Decision quality. The new system should improve the decision quality.

The results shown in Figure 2 indicate that post-quest 123 scores are higher on all of the assessment dimensions. Decision quality, with a median score of seven, followed by decision process quality and structured, each with a median score of six, were strongly favored in the post-quest 123 over the pre-quest 123 system. Completeness, with a score of five, and analytical, with a score of three, for post-quest 123 were perceived as moderately better than the pre-quest approach. Finally, simplicity and flexibility, with median scores of two and one for post-quest 123, were perceived as slightly better than the pre-quest approach by the MCCS architecture team members.

Next, the Wilcoxon signed ranks test of medians was performed. All the medians were statistically different from zero at α = 0.05. A Kruskal-Wallis test also shows that all the medians were significantly different at α = 0.05 (Table III). These results indicate that the post-quest system was preferred to the pre-quest system on all the criteria.
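The paper does not publish the raw questionnaire responses, so the sketch below uses made-up scores only to show how these two nonparametric tests might be run (it assumes scipy is available).

```python
from scipy.stats import wilcoxon, kruskal

# Hypothetical post-minus-pre preference scores from 13 team members for two factors.
decision_quality = [7, 6, 8, 7, 5, 7, 6, 8, 7, 6, 7, 9, 7]
flexibility = [1, 2, 1, 1, 1, 2, 1, 1, 1, 2, 1, 1, 1]

# Wilcoxon signed-rank test: is the median difference zero?
stat, p = wilcoxon(decision_quality)
print("Wilcoxon p =", round(p, 4))

# Kruskal-Wallis test: do the factors differ significantly from one another?
stat, p = kruskal(decision_quality, flexibility)
print("Kruskal-Wallis p =", round(p, 4))
```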

Figure 2. MCCS architecture team median scores

Table III. Post-quest 123 test of medians

Evaluation factor         Median   Significant (α = 0.05)
Decision quality          7        Yes
Decision process quality  6        Yes
Structured                6        Yes
Completeness              5        Yes
Analytical                3        Yes
Simplicity                2        Yes
Flexibility               1        Yes


6. Conclusion
Advances in computer technology and data availability have made decision making more complex than ever. Quest 123 uses AHP and an additive MCDM model to reduce these complexities by systematically structuring the technology assessment problem into manageable and transparent steps at the MCC. The earlier assessment model used by the MCC was highly intuitive and unstructured. While intuition is still favored by practicing managers, it may be dangerously unreliable when used to solve repetitive problems because of the potential for inconsistency. Quest 123 has several attractive features that address some of the limitations inherent in the earlier naive approach used by the MCC.

It should be noted that using a structured framework like Quest 123 does not imply a deterministic approach to technology assessment. While Quest 123 enables MCCS architecture team members to crystallize their thoughts and organize their judgments, it should be used very carefully. Team member judgment is an integral component of Quest 123; therefore, the effectiveness of the model relies heavily on their cognitive abilities. Quest 123 utilizes intuitive methods such as AHP and MCDM for subjective estimation of weights and scores because they cannot be obtained by empirical analysis. While these judgments often mirror a team member's beliefs, they should be used with caution. As with any decision calculus model, practicing managers must be aware of the limitations of subjective estimates. When empirical analysis is feasible and makes economic sense, it should be utilized to improve these estimates (Lodish, 1982).

On the other hand, Quest 123 only addresses some of the problems inherent in technology certification decisions. Quantification of all evaluation criteria is a difficult task. Storing information generated during a session for use in future sessions, along with information on the actual performance of a technology initiative, can facilitate the process of learning from past mistakes. This may be done by interfacing Quest 123 with knowledge-based systems for fine tuning weights and subjective scores and for inferring in a non-algorithmic way, for example by inductive reasoning. Neural networks also offer a potential for developing knowledge-based models in this area. Quest 123 presents a step toward developing a more comprehensive and structured framework in benchmarking.

References

Blumberg, D.F. (1994), "Strategic benchmarking of service and logistic support operations", Journal of Business Logistics, Vol. 15 No. 2, pp. 89-119.

Camp, R.C. (1989), Benchmarking: The Search for Industry Best Practices that Lead to Superior Performance, ASQC Quality Press, Milwaukee, WI.

Camp, R.C. (1995), Business Process Benchmarking: Finding and Implementing Best Practices, ASQC Quality Press, Milwaukee, WI.

Cassell, C., Nadin, S. and Gray, M.O. (2001), "The use and effectiveness of benchmarking in SMEs", Benchmarking: An International Journal, Vol. 8 No. 3, pp. 212-22.

Cox, J.R.W., Mann, L. and Samson, D. (1997), "Benchmarking as a mixed metaphor: disentangling assumptions of competition and collaboration", Journal of Management Studies, Vol. 34 No. 2, pp. 285-314.


Czuchry, A.J., Yasin, M.M. and Dorsch, J.J. (1995), "A review of benchmarking literature – a proposed model for implementation", International Journal of Materials and Product Technology, Vol. 10 No. 1-2, pp. 27-45.

Dorsch, J.J. and Yasin, M.M. (1998), "A framework for benchmarking in the public sector: literature review and directions for future research", International Journal of Public Sector Management, Vol. 11 No. 2-3, pp. 91-115.

Drew, J.E. (1997), "From knowledge to action: the impact of benchmarking on organizational performance", Long Range Planning, Vol. 30 No. 3, pp. 427-41.

Dyer, J.S. (1990a), "Remarks on the analytic hierarchy process", Management Science, Vol. 36 No. 3, pp. 249-58.

Dyer, J.S. (1990b), "A clarification of 'Remarks on the analytic hierarchy process'", Management Science, Vol. 36 No. 3, pp. 274-5.

Expert Choice (Computer Software) (2000), Decision Support Software, Inc., McLean, VA.

Forman, E.H. and Gass, S.I. (2001), "The analytic hierarchy process – an exposition", Operations Research, Vol. 49 No. 4, pp. 469-86.

Francis, G. and Holloway, J. (2002), "Beyond comparisons – the role for the operational researcher in benchmarking", Journal of the Operational Research Society, Vol. 53, pp. 283-91.

Frei, F.X. and Harker, P.T. (1999), "Measuring aggregate process performance using AHP", European Journal of Operational Research, Vol. 116 No. 2, pp. 436-42.

Geanuracos, J. and Meiklejohn, I. (1993), Performance Measurement: The New Agenda, Business Intelligence Ltd, London.

Harker, P.T. and Vargas, L.G. (1987), "The theory of ratio scale estimation: Saaty's analytic hierarchy process", Management Science, Vol. 33, pp. 1383-403.

Harker, P.T. and Vargas, L.G. (1990), "Reply to 'Remarks on the analytic hierarchy process' by J.S. Dyer", Management Science, Vol. 36 No. 3, pp. 269-73.

Jackson, A.E., Safford, R.R. and Swart, W.W. (1994), "Roadmap to current benchmarking literature", Journal of Management in Engineering, Vol. 10 No. 6, pp. 60-7.

Jarrar, Y.F. and Zairi, M. (2001), "Future trends in benchmarking for competitive advantage: a global survey", Total Quality Management, Vol. 12 No. 7-8, pp. 906-12.

Korpela, J. and Tuominen, M. (1996), "Benchmarking logistics performance with an application of the analytic hierarchy process", IEEE Transactions on Engineering Management, Vol. 43 No. 3, pp. 323-32.

Landeghem, R.V. and Persoons, K. (2001), "Benchmarking of logistical operations based on a causal model", International Journal of Operations & Production Management, Vol. 21 No. 1-2, pp. 254-67.

Lodish, L.M. (1982), "Experience with decision calculus models and decision support systems", in Schulz, R. and Zoltners, A. (Eds), Marketing Decision Models, North-Holland, New York, NY.

Lu, M.H., Madu, C.N., Kuei, C.H. and Winokur, D. (1994), "Integrating QFD, AHP and benchmarking in strategic marketing", The Journal of Business and Industrial Marketing, Vol. 9 No. 1, pp. 41-50.

Min, H., Min, H. and Chung, K. (2002), "Dynamic benchmarking of hotel service quality", Journal of Services Marketing, Vol. 16 No. 4, pp. 302-19.

Oral, M., Kettani, O. and Cinar, U. (2001), "Project evaluation and selection in a network of collaboration: a consensual disaggregation multi-criteria approach", European Journal of Operational Research, Vol. 130, pp. 332-46.


Partovi, F.Y. (2001), "An analytic model to quantify strategic service vision", International Journal of Service Industry Management, Vol. 12 No. 5, pp. 476-99.

Prasanta, K.D. (2002), "Benchmarking project management practices of Caribbean organizations using analytic hierarchy process", Benchmarking: An International Journal, Vol. 9 No. 4, pp. 326-56.

Saaty, T.L. (1977), "A scaling method for priorities in hierarchical structures", Journal of Mathematical Psychology, Vol. 15, pp. 234-81.

Saaty, T.L. (1980), The Analytic Hierarchy Process, McGraw-Hill, New York, NY.

Saaty, T.L. (1983), "Axiomatic foundations of the Analytic Hierarchy Process", Management Science, Vol. 32 No. 2, pp. 841-55.

Saaty, T.L. (1989), "Decision making, scaling and number crunching", Decision Sciences, Vol. 20, pp. 404-9.

Saaty, T.L. (1990), "An exposition of the AHP in reply to the paper 'Remarks on the analytic hierarchy process'", Management Science, Vol. 36 No. 3, pp. 259-68.

Saaty, T.L. (1994), "How to make a decision: the analytic hierarchy process", Interfaces, Vol. 24 No. 6, pp. 19-43.

Saaty, T.L. and Vargas, L.G. (1998), "Diagnosis with dependent symptoms: Bayes theorem and the analytic hierarchy process", Operations Research, Vol. 46 No. 4, pp. 491-502.

Sarkis, J. (2001), "Benchmarking for agility", Benchmarking: An International Journal, Vol. 8 No. 2, pp. 88-107.

Schefcyzk, M. (1993), "Industrial benchmarking: a case study of performance analysis techniques", International Journal of Production Economics, Vol. 32 No. 1, pp. 1-11.

Shim, J.P. (1989), "Bibliographical research on the analytic hierarchy process (AHP)", Socio-Economic Planning Sciences, Vol. 23, pp. 161-7.

Spendolini, M.J. (1992), The Benchmarking Book, AMACOM, New York, NY.

Tavana, M. (2002), "Euclid: strategic alternative assessment matrix", Journal of Multi-Criteria Decision Analysis, Vol. 11 No. 2, pp. 75-96.

Tavana, M. (2003), "A multi-criteria group-decision-making model for evaluating and prioritizing advanced-technology projects at NASA", Interfaces, Vol. 33 No. 3, pp. 40-56.

Zahedi, F. (1986), "The analytic hierarchy process – a survey of the method and its applications", Interfaces, Vol. 16, pp. 96-108.

Further reading

Eyrich, H.G. (1991), "Benchmarking to become the best of breed", Manufacturing Systems, Vol. 9 No. 4, pp. 40-7.


Appendix 1
Figure A1

Appendix 2
Figure A2

Page 2: Quest 123: a benchmarking systemfortechnologyassessment at

The fundamental problem in any benchmarking is finding meaningful data to useas a benchmark (Landeghem and Persoons 2001) Most benchmarking models focus onthe ldquohardrdquo rather than ldquosoftrdquo data (Cassell et al 2001) Sometimes softer measures areignored because of the difficulty in measuring them Reliance on hard measures aloneoften undermines the strategies the company must pursue to survive in the long-termTodayrsquos smart organizations are searching for ways to integrate soft measures intotheir regular performance evaluation (Geanuracos and Meiklejohn 1993) Most currenttools such as ldquospiderrdquo or ldquoradarrdquo diagrams and the ldquoZrdquo chart for gap analysis focus onpresenting the hard data in some graphical form (Camp 1995) Statistical methodsutilized in benchmarking such as regression and various descriptive statistics are alsoconcerned with hard data (Blumberg 1994 Schefcyzk 1993) One approach which isused mostly for softer data is the analytic hierarchy process (AHP) AHP has been avery popular technique for determining weights in multi-criteria decision-making(MCDM) problems (Shim 1989 Zahedi 1986) The advantage of AHP is its capabilityto elicit judgments and scale them uniquely using a procedure that measures theconsistency of these scale values (Saaty 1989) AHP utilizes the perceptual andsubjective values of decision-makers in a weighted scoring technique to analyzevarious benchmarks (Sarkis 2001) AHP has been successfully utilized forbenchmarking in process performance (Frei and Harker 1999) logistics performance(Korpela and Tuominen 1996) strategic performance (Lu et al 1994 Partovi 2001)quality performance (Min et al 2002) and project management performance (Prasanta2002)

Although benchmarking is widely used in the public and private sectors publicorganizations and government agencies typically face unique operational issues andstrategic environments that differ from private organizations because they are notrequired to maximize and increase profitability ( Dorsch and Yasin 1998)Nevertheless they are broadly similar in organizational goals and objectives andperformance assessment is considered to be of equal importance in both sectorsInternal benchmarking is largely used at NASA to accelerate organizationaltransformation and institutionalize continual improvement in mission operationsaviation information technology and quality assurance This paper presents Quest123 an internal benchmarking tool that integrates AHP with an additive MCDMmodelinto a dynamic benchmarking framework for technology assessment at NASA

2 Benchmarking at NASAThroughout the history of human spaceflight the Mission Control Center Systems(MCCS) at the Johnson Space Center ( JSC) has been a model for mission planningcommunications command and control architectures The mission operationsdirectorate (MOD) overseeing the Mission Control Center (MCC) and MCCS mustensure that the overall system performance meets current and planned needs whilelooking for innovative ways to curtail operational costs and continually addressevolving operational scenarios The directorate must also enforce the highest return oninvestment on the funding it receives annually This vision provides a basis for thelong-term as well as day-to-day decision-making that ultimately impactsrequirements design change and budget plans The MCCS architecture teama multidisciplinary group of MOD experts and scientists is chartered to redefine thenext generation of MCCS by developing integrated systems design architecture

Quest 123

371

The original MCCS was designed as nonintegrated pieces of a whole independentlysupporting the larger goal of human spaceflight operations While more advancedcomputing capabilities have allowed the MCCS to morph from completely independentfunctioning systems into a distributed design architecture the technological advancesof the last several years have allowed for the potential implementation of a trueintegrated systems design architecture The MCCS which has always served as thenerve center of US human spaceflight operations has evolved from a centralizedmainframe computer architecture in which all aspects of mission planningcommunications command and control were performed predominantly frommultiple buildings located at JSC to a distributed architecture with multiple remotefacilities located around the world

The current MCCS is a functionally robust set of distributed systems primarilysupporting the space shuttle program (SSP) and the International Space Station (ISS)mission operations The MCCS also performs the following functions real-time data(telemetry and trajectory) monitoring and analysis real-time command near real-timedata storage retrieval and analysis space-to-ground and ground-to-ground voicevideo data and mail distribution as well as real-time and near real-time planning andsimulations

Forged around the uniquely complex and demanding requirements of humanspaceflight the MCCS has developed and evolved within the limits of technologicalcapabilities of the time The dynamic and continually evolving environment in whichMCCS functions has demanded that the over-arching structure of the systems ndash thesystems architecture ndash continues to evolve as well

As a result of dwindling funding for the foreseeable future the MCC must focus onways to reduce costs while still maintaining and even expanding its capability tosupport the SSP and an increasingly larger and more complex ISS As part of an earlierMCC mission computing strategy study the MCC adopted the following set of goalsthat the MCCS architecture team uses as major evaluation factors in its earlyproof-of-concept work toward the vision

Design for change Commercial standards compliant standard interfaces andsimplified configuration management processes for migration

Design for flexibility Acommodate future manned spaceflight programrequirements and operations concepts without significant impacts to existingarchitecture design

Design for connectivity Information transfer between international partnersMCC users (flight control) payload community joint space operations withgovernment industry and academia collaborations using the Internet wherepractical

Design for access Security commonalitysimplification ndash robust but simplesecurity for user concurrent access to multiple security levels that does notconstrain system functionality

Design for cost reduction Commodification (a move toward an infrastructure thatcan be supplied by many different vendors) consolidation operational conceptmodifications and re-engineer

Design for ease of use and effectiveness Intuitive graphical user interface layoutsthat supports data visualization and intelligent systems minimal steps for taskcompletion effective decision-making and job productivity

BIJ114

372

Design from a systems perspective Develop and test new systems and processimprovements that address all issues and concerns relative to the overallsystems as a whole entity

Deployment of new technology initiatives to the MCC is a long and difficult taskrequiring a four-phase certification process in three separate labs Quest 1 2 and 3Approximately about 5 percent of all technology initiatives reviewed are certified in alllabs and deployed to the MCC A technology initiative is initially considered forpossible adoption and testing during phase I of the testing and certification processOnce adopted the initiative is put through stand alone testing in Quest 1 during phaseII of the testing process Upon certification in Quest 1 the initiative is put throughadditional systems integration testing in Quest 2 during phase III of the process Oncecertified in Quest 2 the initiative is ready for human and systems integration testing inQuest 3 during phase IV of the process Finally initiatives certified in Quest 3 aredeployed to the MCC

Prior to implementation of Quest 123 the MCCS architecture team used a simple andunstructured naive approach to assess and certify new technology initiatives in eachlab Using this naive approach MCCS architecture teammembers assigned a subjectivenumerical score between 0 and 100 to each technology initiative The average score ofall team members was used to certify or deny certification to a technology initiativeNo formal assessment guidelines were provided to the MCCS architecture team Oncethe mean score was calculated the team used this score to decide if the technology hasadequate merits to be certified The MCC management had expressed their concernsabout the lack of a systematic framework and the potential for inconsistency intechnology assessment Quest 123 was created in response to this concern

3 The modelQuest 123 uses a filtering system to certify technology initiatives in each lab In orderto develop this system we analyzed 112 initiatives reviewed by the MCCS architectureteam over the last 2 years The median score for all 112 initiatives reviewed during theinitial screening in phase I was 52 while the mean score was 68 (Table I) Of the 48initiatives approved for follow-up testing in Quest 1 in phase II 45 had an overall scorehigher than the median while only 32 scored higher than the mean Since the meanswere larger than the medians the distribution was highly skewed As a resultthe medians provided a better measure of central tendency and approximation for thenumber of initiatives approved in all assessment phases

Quest 123 is a MCDM that utilizes AHP to develop an overall score for eachtechnology initiative under consideration The project evaluation and selection

Assessmentphases

Number ofinitiativesreviewed Median Mean

Number ofinitiativesapproved

Number ofinitiatives with anoverall score greater

than median

Number ofinitiatives with anoverall score greater

than mean

Phase I 112 52 68 48 45 32Phase II 48 55 71 25 23 14Phase III 25 58 74 9 8 6Phase IV 9 61 77 5 5 3

Table IHistorical cases and their

characteristics

Quest 123

373

methods proposed in the literature can be classified into two categories compensatoryand non-compensatory models (Oral et al 2001) Compensatory models reduce amultidimensional evaluation to a single-dimensional one through an aggregatingldquovaluerdquo function thereby establishing trade-offs between criteria Multi-objectivemathematical programming multi-attribute utility theory and AHP can be cited asexamples for this category Non-compensatory models such as Ordinal Ranking andELECTRE trade-offs between criteria are restricted The proposed method in thisstudy belongs to the first group and it is similar to the multi-criteria analysis methodsof Oral et al (2001) and Tavana (2003) To formulate the model algebraically let usassume wi as the importance weight of the ith criterion ethi frac14 1 2 mTHORN z jkfi thepreference rating of the kth technology initiative for the ith criterion in the fth phase ofthe evaluation process by the jth decision-maker ethi frac14 1 2 m j frac14 1 2 n k frac141 2 p and f frac14 1 2 qTHORN S kf the overall score of the kth technology initiative inthe fth phase of the evaluation process ethk frac14 1 2 p and f frac14 1 2 qTHORN and ~Sf themedian overall score for the fth phase of the evaluation process eth f frac14 1 2 qTHORN where0 wi 1 0 z jkfi 10 and 0 S kj 10

Assuming that a jkf is the alternative representing the jth decision-maker and the kthtechnology initiative in the fth phase of the evaluation process and Z frac14 z jkji represents the preference rating of the kth technology initiative for the ith criterion inthe fth phase of the evaluation process by the jth decision-maker

wi a 111 a 211 anpq

w1 z1111 z2111 znpq1

w2 z1112 z2111 znpq1

wm z111m z211m znpqm

S kf frac14

Xm

ifrac141

Xn

jfrac141

wizjkfi

n ethi frac14 1 2 m and j frac14 1 2 nTHORN

S kf $ ~Sf ethGrant certificationTHORN

S kf a ~Sf ethDeny certificationTHORN

The MCCS architecture team identified i criteria to be used as major evaluation criteriawhen early proof-of-concept work was under consideration The importance weight ofeach criterion (wi) was captured and measured with AHP using the questionnairepresented in Appendix 1 The MCCS architecture team was asked to provide theirsubjective assessment of each pairwise comparison Assuming that in an MCCSarchitecture team memberrsquos mind c1 c2 ci are the ith criteria that contribute to atechnology initiativersquos success The team memberrsquos goal is to assess the relativeimportance of these factors

BIJ114

374

Saatyrsquos AHP (Forman and Gass 2001 Saaty and Vargas 1998) is a method ofderiving a set of weights to be associated with each of the ith criteria Initially the teammember is asked to compare each possible pair cj ck of criteria and provide quantifiedjudgments on which one of the criteria is more important and by how muchThese judgments are represented by an i pound i matrix

A frac14 ethajkTHORN eth j k frac14 1 2 iTHORN

If cj is judged to be of equal importance as ck then ajk frac14 1 if cj is judged to be moreimportant than ck then ajk 1 and if cj is judged to be less important than ck thenajk 1

ajk frac14 1=akj ajk ndash 0

Thus the matrix A is a reciprocal matrix that is the entry ajk is the inverse of the entryakj ajk reflects the relative importance of cj compared with criterion ck For examplea12 frac14 125 indicates that c1 is 125 times as important as c2

Then the vector w representing the relative weights of each of the ith criteria can beobtained by computing the normalized eigenvector corresponding to the maximumeigenvalue of matrix A An eigenvalue of A is defined as l which satisfies thefollowing matrix equation

Aw frac14 lw

where l is a constant called the eigenvalue associated with the given eigenvector wSaaty (1977 1980 1983 1990 1994) has shown that the best estimate of w is the oneassociated with the maximum eigenvalue (lmax) of the matrix A Since the sum of theweights should be equal to 100 the normalized eigenvector is used Saatyrsquos algorithmfor obtaining this w is incorporated in the software Expert Choice (2000)

One of the advantages of AHP is that it encourages teammembers to be consistent intheir pairwise comparisons Saaty suggests a measure of consistency for the pairwisecomparisons When the judgments are perfectly consistent the maximum eigenvaluelmax should equal i the number of criteria that are compared In general the responsesare not perfectly consistent and lmax is greater than n The larger the lmax the greater isthe degree of inconsistency Saaty defines the consistency index (CI) as ethlmax 2 iTHORN=ethi2 1THORN and provides the following random index (RI) table for matrices of order 3-10

n 3 4 5 6 7 8 9 10

RI 058 090 112 132 141 145 149 151

This RI is based on a simulation of a large number of randomly generatedweights Saatyrecommends the calculation of a consistency ratio (CR) which is the ratio of CI to the RIfor the same order matrix A CR of 010 or less is considered acceptable When the CR isunacceptable the team member is made aware that his or her pairwise comparisons arelogically inconsistent and he or she is encouraged to revise them

The responses were processed with Expert Choice (2000) and those withinconsistency ratios greater than 010 were asked to revisit their judgments as it issuggested by Satty The mean importance weights were calculated for the MCCSarchitecture team after necessary adjustments were made to inconsistent responsesEach MCCS architecture team member was presented with hisher individual score

Quest 123

375

along with the group mean weights IT members were given the opportunity to revisittheir judgments and make revisions to their pairwise comparison scores based on thisfeedback Some MCCS architecture team members took advantage of this opportunityand revised their judgments in the second round The mean importance weights for thefirst and second round are presented in Table II As it is shown the second roundresults were slightly different from the first round results

There has been some criticism of AHP in the operations research literature Harkerand Vargas (1987) show that AHP does have an axiomatic foundation the cardinalmeasurement of preferences is fully represented by the eigenvector method and theprinciples of hierarchical composition and rank reversal are valid On the other handDyer (1990a) has questioned the theoretical basis underlying AHP and argues that itcan lead to preference reversals based on the alternative set being analyzedIn response Saaty (1990) explains how rank reversal is a positive feature when newreference points are introduced The geometric aggregation rule is used to avoid thecontroversies associated with rank reversal (Dyer 1990a b Harker and Vargas 1990Saaty 1990 Tavana 2002)

4. A technology certification decision scenario at NASA

Quest 123 is a four-phase filtering process utilized at NASA to certify technology initiatives in three test labs before final deployment to the MCC. Figure 1 shows a graphical presentation of Quest 123. The process begins with an initial screening in phase I (f = 1). All j MCCS architecture team members individually assess the kth technology initiative under consideration by providing a performance rating z_i^jk1 between 0 and 10 for each of the i evaluation criteria adopted by MCC. All individual performance ratings in phase I are combined with the previously established performance weights of the criteria (w_i), using the model presented earlier, to calculate an overall score (S_1^k) for the kth initiative in phase I. This score is compared with the median score (S̃_1) of the first phase retrieved from the Quest database. If S_1^k ≥ S̃_1, the initiative is selected for phase II testing in the Quest 1 lab. If S_1^k < S̃_1, the initiative is rejected for further testing. The Quest database is updated with all technology initiative scores, whether the initiative is accepted or rejected. The inclusion of new entries into the Quest database results in recalculation of the median score.
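The phase I screening just described can be sketched in a few lines of Python. The function and variable names below are illustrative only; the additive aggregation follows the model formula S_f^k = (Σ_i Σ_j w_i z_i^jkf)/n, although in the worked example reported below the team took the median of the individual member scores, so that step could be swapped accordingly:

```python
import statistics

# Illustrative sketch of the phase I filter: w_i criterion weights, one rating
# vector per team member, and the running list of phase I scores in the Quest
# database. All names and numbers here are hypothetical.
def phase_score(weights, ratings_by_member):
    # S_1^k = sum_i sum_j (w_i * z_i^jk1) / n, averaged over the n team members.
    n = len(ratings_by_member)
    return sum(w * z for member in ratings_by_member
                     for w, z in zip(weights, member)) / n

def screen_initiative(weights, ratings_by_member, phase1_scores_db):
    s_k1 = phase_score(weights, ratings_by_member)
    accepted = s_k1 >= statistics.median(phase1_scores_db)  # S_1^k >= S~_1
    phase1_scores_db.append(s_k1)    # database updated whether accepted or rejected
    return accepted, s_k1, statistics.median(phase1_scores_db)
```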

A Quest database was initially created with 112 records for phase I (S̃_1 = 5.64), 48 records for phase II (S̃_2 = 6.48), 25 records for phase III (S̃_3 = 7.36), and nine records for phase IV (S̃_4 = 8.14). The first technology initiative considered by Quest 123 was the 113th initiative. The MCCS architecture team members individually evaluated this technology initiative and assigned a numerical preference rating between 0 and 10 for each of the seven evaluation criteria adopted by MCC. The combination of individual performance ratings with the overall performance weights of the criteria, using the model presented earlier, resulted in an overall score for each team member. The median of the individual team member scores was used as the overall score for the 113th initiative in phase I (S_1^113 = 6.76). Since S_1^113 ≥ S̃_1, the 113th technology initiative was approved for adoption and stand-alone testing in Quest 1. As a result, the number of cases in phase I was increased by 1 to 113 and a new median was recalculated for this phase (S̃_1 = 5.67).

Table II. First and second round importance weights

Evaluation factor               Round 1   Round 2
Ease of use and effectiveness   0.296     0.293
Systems perspective             0.184     0.191
Flexibility                     0.140     0.132
Access                          0.086     0.083
Connectivity                    0.091     0.091
Change                          0.088     0.084
Cost reduction                  0.116     0.126
Inconsistency ratio             0.081     0.068

Figure 1. New technology assessment process

The MCCS architecture team members individually performed the stand-alone testing on this initiative during phase II in Quest 1. Again, a numerical preference rating between 0 and 10 for each of the seven evaluation criteria adopted by MCC was given to this initiative for stand-alone testing. Aggregation of the individual scores produced a median score of 6.94 for this initiative in phase II (S_2^113 = 6.94). Since S_2^113 ≥ S̃_2, the 113th technology initiative was approved for adoption and systems integration testing in Quest 2. As a result, the number of cases in phase II was increased by 1 to 49 and a new median was recalculated for this phase (S̃_2 = 6.54).

The MCCS architecture team members individually performed systems integration testing on this initiative during phase III in Quest 2. Again, a numerical preference rating between 0 and 10 for each of the seven evaluation criteria adopted by MCC was given to this initiative for systems integration testing. Aggregation of the individual scores produced a median score of 6.86 for this initiative in phase III (S_3^113 = 6.86). Since S_3^113 < S̃_3, the 113th technology initiative was denied certification and removed from the process and further testing. It should be noted that deletion of this initiative does not change the number of cases or the median of phase III. All technology initiatives are similarly assessed by Quest 123.

The process described here is an iterative one whereby the median is used as a filter to separate the higher ranked initiatives from the lower ranked initiatives evaluated in the Quest labs. Initially, a certain number of initiatives qualify in phase I based on the overall median and are sent to Quest 1. The scores are re-evaluated in Quest 1 during phase II, and those ranked above the median are allowed to proceed to Quest 2 for phase III testing. A similar process is repeated for all four phases in all three Quest labs until the technology initiative is certified to be deployed to the MCC.
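A compact, purely illustrative sketch of this iterative filter (hypothetical names; the handling of rejected scores is simplified relative to the text) might look as follows:

```python
import statistics

# Sketch of the four-phase filter described above: an initiative advances to the
# next Quest lab only while its phase score stays at or above that phase's
# running median. quest_db maps phase -> list of historical overall scores;
# score_in_phase is any callable producing S_f^k for the current phase.
def run_quest_123(initiative, quest_db, score_in_phase):
    for phase in (1, 2, 3, 4):
        s_kf = score_in_phase(initiative, phase)
        if s_kf < statistics.median(quest_db[phase]):
            return phase, False       # denied certification at this phase
        quest_db[phase].append(s_kf)  # accepted: record score, median shifts
    return 4, True                    # certified in all labs, deploy to MCC
```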

5. Evaluation of Quest 123

Prior to the implementation of Quest 123, the MCCS architecture team used the simple scoring method described earlier to evaluate new technology initiatives at MCC. Due to the lack of a systematic process and thorough analysis, MCC management and the team were concerned about the potential for inconsistency and poor decisions. The team decided to replace the simple scoring method with a more complete, structured, and analytical process.

Quest 123 was evaluated by 13 MCCS architecture team members. The questionnaire presented in Appendix 2 was used to compare the post-Quest 123 system with the pre-Quest 123 system used by the MCCS architecture team. Team members were asked to compare the two systems using the following seven evaluation factors adopted by the team:

(1) Flexibility. The number of criteria, technology initiatives, decision-makers, or phases of the process at MCC is not predetermined. The new system should be flexible enough to handle these changes.

(2) Simplicity. The new system needs to be theoretically sound and, at the same time, easy for users and management to understand.

(3) Analytical. The new system has to be analytical and capable of evaluating multi-criteria problems.

(4) Completeness. The new system should have facilities to collect and integrate a wide range of information concerning change, flexibility, connectivity, access, cost reduction, ease of use and effectiveness, and systems perspective.

(5) Structured. The new system should have manageable steps with a facility to integrate the results from each step.

(6) Decision process quality. The new system should improve decision process quality.

(7) Decision quality. The new system should improve decision quality.

The results shown in Figure 2 indicate that post-Quest 123 scores are higher on all of the assessment dimensions. Decision quality, with a median score of seven, followed by decision process quality and structured, each with a median score of six, were strongly favored in the post-Quest 123 system over the pre-Quest 123 system. Completeness, with a score of five, and analytical, with a score of three, for post-Quest 123 were perceived as moderately better than the pre-Quest approach. Finally, simplicity and flexibility, with median scores of two and one for post-Quest 123, were perceived as slightly better than the pre-Quest approach by the MCCS architecture team members.

Next, the Wilcoxon signed ranks test of medians was performed. All the medians were statistically different from zero at α = 0.05. A Kruskal-Wallis test also shows that all the medians were significantly different at α = 0.05 (Table III). These results indicate that the post-Quest system was preferred to the pre-Quest system on all the criteria.
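For readers who wish to reproduce this kind of analysis, both tests are available in scipy.stats; the response vectors below are invented placeholders, since the individual questionnaire responses are not published:

```python
from scipy.stats import wilcoxon, kruskal

# Hypothetical difference scores (post-Quest 123 minus pre-Quest 123) from the
# 13 evaluators for two of the factors; purely illustrative numbers.
decision_quality_diff = [7, 6, 8, 7, 5, 7, 6, 8, 7, 7, 6, 9, 7]
simplicity_diff       = [2, 1, 3, 2, 2, 1, 2, 3, 2, 1, 2, 2, 3]

# Wilcoxon signed ranks test: is the median difference different from zero?
stat, p_value = wilcoxon(decision_quality_diff)

# Kruskal-Wallis test: do the factor medians differ from one another?
h_stat, p_kw = kruskal(decision_quality_diff, simplicity_diff)
print(p_value, p_kw)
```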

Figure 2. MCCS architecture team median scores

Table III. Post-Quest 123 test of medians

Evaluation factor          Median   Significant (α = 0.05)
Decision quality           7        Yes
Decision process quality   6        Yes
Structured                 6        Yes
Completeness               5        Yes
Analytical                 3        Yes
Simplicity                 2        Yes
Flexibility                1        Yes

6. Conclusion

Advances in computer technology and data availability have made decision making more complex than ever. Quest 123 uses AHP and an additive MCDM model to reduce these complexities by systematically structuring the technology assessment problem at MCC into manageable and transparent steps. The earlier assessment model used by MCC was highly intuitive and unstructured. While intuition is still favored by practicing managers, it may be dangerously unreliable when used to solve repetitive problems because of the potential for inconsistency. Quest 123 has several attractive features that address some of the limitations inherent in the earlier naive approach used by MCC.

It should be noted that using a structured framework like Quest 123 does not imply a deterministic approach to technology assessment. While Quest 123 enables MCCS architecture team members to crystallize their thoughts and organize their judgments, it should be used very carefully. Team member judgment is an integral component of Quest 123; therefore, the effectiveness of the model relies heavily on their cognitive abilities. Quest 123 utilizes intuitive methods such as AHP and MCDM for subjective estimation of weights and scores because these cannot be obtained by empirical analysis. While these judgments often mirror a team member's beliefs, they should be used with caution. As with any decision calculus model, practicing managers must be aware of the limitations of subjective estimates. When empirical analysis is feasible and makes economic sense, it should be utilized to improve these estimates (Lodish, 1982).

On the other hand, Quest 123 addresses only some of the problems inherent in technology certification decisions. Quantification of all evaluation criteria is a difficult task. Storing information generated during a session for use in future sessions, along with information on the actual performance of a technology initiative, can facilitate the process of learning from past mistakes. This may be done by interfacing Quest 123 with knowledge-based systems for fine-tuning weights and subjective scores and for inferring in a non-algorithmic way, for example by inductive reasoning. Neural networks also offer potential for developing knowledge-based models in this area. Quest 123 presents a step toward developing a more comprehensive and structured benchmarking framework.

References

Blumberg, D.F. (1994), "Strategic benchmarking of service and logistic support operations", Journal of Business Logistics, Vol. 15 No. 2, pp. 89-119.

Camp, R.C. (1989), Benchmarking: The Search for Industry Best Practices that Lead to Superior Performance, ASQC Quality Press, Milwaukee, WI.

Camp, R.C. (1995), Business Process Benchmarking: Finding and Implementing Best Practices, ASQC Quality Press, Milwaukee, WI.

Cassell, C., Nadin, S. and Gray, M.O. (2001), "The use and effectiveness of benchmarking in SMEs", Benchmarking: An International Journal, Vol. 8 No. 3, pp. 212-22.

Cox, J.R.W., Mann, L. and Samson, D. (1997), "Benchmarking as a mixed metaphor: disentangling assumptions of competition and collaboration", Journal of Management Studies, Vol. 34 No. 2, pp. 285-314.

Czuchry, A.J., Yasin, M.M. and Dorsch, J.J. (1995), "A review of benchmarking literature - a proposed model for implementation", International Journal of Materials and Product Technology, Vol. 10 No. 1-2, pp. 27-45.

Dorsch, J.J. and Yasin, M.M. (1998), "A framework for benchmarking in the public sector: literature review and directions for future research", International Journal of Public Sector Management, Vol. 11 No. 2-3, pp. 91-115.

Drew, J.E. (1997), "From knowledge to action: the impact of benchmarking on organizational performance", Long Range Planning, Vol. 30 No. 3, pp. 427-41.

Dyer, J.S. (1990a), "Remarks on the analytic hierarchy process", Management Science, Vol. 36 No. 3, pp. 249-58.

Dyer, J.S. (1990b), "A clarification of 'Remarks on the analytic hierarchy process'", Management Science, Vol. 36 No. 3, pp. 274-5.

Expert Choice (Computer Software) (2000), Decision Support Software, Inc., McLean, VA.

Forman, E.H. and Gass, S.I. (2001), "The analytic hierarchy process - an exposition", Operations Research, Vol. 49 No. 4, pp. 469-86.

Francis, G. and Holloway, J. (2002), "Beyond comparisons - the role for the operational researcher in benchmarking", Journal of the Operational Research Society, Vol. 53, pp. 283-91.

Frei, F.X. and Harker, P.T. (1999), "Measuring aggregate process performance using AHP", European Journal of Operational Research, Vol. 116 No. 2, pp. 436-42.

Geanuracos, J. and Meiklejohn, I. (1993), Performance Measurement: The New Agenda, Business Intelligence Ltd, London.

Harker, P.T. and Vargas, L.G. (1987), "The theory of ratio scale estimation: Saaty's analytic hierarchy process", Management Science, Vol. 33, pp. 1383-403.

Harker, P.T. and Vargas, L.G. (1990), "Reply to 'Remarks on the analytic hierarchy process' by J.S. Dyer", Management Science, Vol. 36 No. 3, pp. 269-73.

Jackson, A.E., Safford, R.R. and Swart, W.W. (1994), "Roadmap to current benchmarking literature", Journal of Management in Engineering, Vol. 10 No. 6, pp. 60-7.

Jarrar, Y.F. and Zairi, M. (2001), "Future trends in benchmarking for competitive advantage: a global survey", Total Quality Management, Vol. 12 No. 7-8, pp. 906-12.

Korpela, J. and Tuominen, M. (1996), "Benchmarking logistics performance with an application of the analytic hierarchy process", IEEE Transactions on Engineering Management, Vol. 43 No. 3, pp. 323-32.

Landeghem, R.V. and Persoons, K. (2001), "Benchmarking of logistical operations based on a causal model", International Journal of Operations & Production Management, Vol. 21 No. 1-2, pp. 254-67.

Lodish, L.M. (1982), "Experience with decision calculus models and decision support systems", in Schulz, R. and Zoltners, A. (Eds), Marketing Decision Models, North-Holland, New York, NY.

Lu, M.H., Madu, C.N., Kuei, C.H. and Winokur, D. (1994), "Integrating QFD, AHP and benchmarking in strategic marketing", The Journal of Business and Industrial Marketing, Vol. 9 No. 1, pp. 41-50.

Min, H., Min, H. and Chung, K. (2002), "Dynamic benchmarking of hotel service quality", Journal of Services Marketing, Vol. 16 No. 4, pp. 302-19.

Oral, M., Kettani, O. and Cinar, U. (2001), "Project evaluation and selection in a network of collaboration: a consensual disaggregation multi-criteria approach", European Journal of Operational Research, Vol. 130, pp. 332-46.

Partovi, F.Y. (2001), "An analytic model to quantify strategic service vision", International Journal of Service Industry Management, Vol. 12 No. 5, pp. 476-99.

Prasanta, K.D. (2002), "Benchmarking project management practices of Caribbean organizations using analytic hierarchy process", Benchmarking: An International Journal, Vol. 9 No. 4, pp. 326-56.

Saaty, T.L. (1977), "A scaling method for priorities in hierarchical structures", Journal of Mathematical Psychology, Vol. 15, pp. 234-81.

Saaty, T.L. (1980), The Analytic Hierarchy Process, McGraw-Hill, New York, NY.

Saaty, T.L. (1983), "Axiomatic foundations of the Analytic Hierarchy Process", Management Science, Vol. 32 No. 2, pp. 841-55.

Saaty, T.L. (1989), "Decision making, scaling, and number crunching", Decision Sciences, Vol. 20, pp. 404-9.

Saaty, T.L. (1990), "An exposition of the AHP in reply to the paper 'Remarks on the analytic hierarchy process'", Management Science, Vol. 36 No. 3, pp. 259-68.

Saaty, T.L. (1994), "How to make a decision: the analytic hierarchy process", Interfaces, Vol. 24 No. 6, pp. 19-43.

Saaty, T.L. and Vargas, L.G. (1998), "Diagnosis with dependent symptoms: Bayes theorem and the analytic hierarchy process", Operations Research, Vol. 46 No. 4, pp. 491-502.

Sarkis, J. (2001), "Benchmarking for agility", Benchmarking: An International Journal, Vol. 8 No. 2, pp. 88-107.

Schefcyzk, M. (1993), "Industrial benchmarking: a case study of performance analysis techniques", International Journal of Production Economics, Vol. 32 No. 1, pp. 1-11.

Shim, J.P. (1989), "Bibliographical research on the analytic hierarchy process (AHP)", Socio-Economic Planning Sciences, Vol. 23, pp. 161-7.

Spendolini, M.J. (1992), The Benchmarking Book, AMACOM, New York, NY.

Tavana, M. (2002), "Euclid: strategic alternative assessment matrix", Journal of Multi-Criteria Decision Analysis, Vol. 11 No. 2, pp. 75-96.

Tavana, M. (2003), "A multi-criteria group-decision-making model for evaluating and prioritizing advanced-technology projects at NASA", Interfaces, Vol. 33 No. 3, pp. 40-56.

Zahedi, F. (1986), "The analytic hierarchy process - a survey of the method and its applications", Interfaces, Vol. 16, pp. 96-108.

Further reading

Eyrich, H.G. (1991), "Benchmarking to become the best of breed", Manufacturing Systems, Vol. 9 No. 4, pp. 40-7.

Appendix 1

Figure A1

Appendix 2

Figure A2



Page 4: Quest 123: a benchmarking systemfortechnologyassessment at

Design from a systems perspective Develop and test new systems and processimprovements that address all issues and concerns relative to the overallsystems as a whole entity

Deployment of new technology initiatives to the MCC is a long and difficult taskrequiring a four-phase certification process in three separate labs Quest 1 2 and 3Approximately about 5 percent of all technology initiatives reviewed are certified in alllabs and deployed to the MCC A technology initiative is initially considered forpossible adoption and testing during phase I of the testing and certification processOnce adopted the initiative is put through stand alone testing in Quest 1 during phaseII of the testing process Upon certification in Quest 1 the initiative is put throughadditional systems integration testing in Quest 2 during phase III of the process Oncecertified in Quest 2 the initiative is ready for human and systems integration testing inQuest 3 during phase IV of the process Finally initiatives certified in Quest 3 aredeployed to the MCC

Prior to implementation of Quest 123 the MCCS architecture team used a simple andunstructured naive approach to assess and certify new technology initiatives in eachlab Using this naive approach MCCS architecture teammembers assigned a subjectivenumerical score between 0 and 100 to each technology initiative The average score ofall team members was used to certify or deny certification to a technology initiativeNo formal assessment guidelines were provided to the MCCS architecture team Oncethe mean score was calculated the team used this score to decide if the technology hasadequate merits to be certified The MCC management had expressed their concernsabout the lack of a systematic framework and the potential for inconsistency intechnology assessment Quest 123 was created in response to this concern

3 The modelQuest 123 uses a filtering system to certify technology initiatives in each lab In orderto develop this system we analyzed 112 initiatives reviewed by the MCCS architectureteam over the last 2 years The median score for all 112 initiatives reviewed during theinitial screening in phase I was 52 while the mean score was 68 (Table I) Of the 48initiatives approved for follow-up testing in Quest 1 in phase II 45 had an overall scorehigher than the median while only 32 scored higher than the mean Since the meanswere larger than the medians the distribution was highly skewed As a resultthe medians provided a better measure of central tendency and approximation for thenumber of initiatives approved in all assessment phases

Quest 123 is a MCDM that utilizes AHP to develop an overall score for eachtechnology initiative under consideration The project evaluation and selection

Assessmentphases

Number ofinitiativesreviewed Median Mean

Number ofinitiativesapproved

Number ofinitiatives with anoverall score greater

than median

Number ofinitiatives with anoverall score greater

than mean

Phase I 112 52 68 48 45 32Phase II 48 55 71 25 23 14Phase III 25 58 74 9 8 6Phase IV 9 61 77 5 5 3

Table IHistorical cases and their

characteristics

Quest 123

373

methods proposed in the literature can be classified into two categories compensatoryand non-compensatory models (Oral et al 2001) Compensatory models reduce amultidimensional evaluation to a single-dimensional one through an aggregatingldquovaluerdquo function thereby establishing trade-offs between criteria Multi-objectivemathematical programming multi-attribute utility theory and AHP can be cited asexamples for this category Non-compensatory models such as Ordinal Ranking andELECTRE trade-offs between criteria are restricted The proposed method in thisstudy belongs to the first group and it is similar to the multi-criteria analysis methodsof Oral et al (2001) and Tavana (2003) To formulate the model algebraically let usassume wi as the importance weight of the ith criterion ethi frac14 1 2 mTHORN z jkfi thepreference rating of the kth technology initiative for the ith criterion in the fth phase ofthe evaluation process by the jth decision-maker ethi frac14 1 2 m j frac14 1 2 n k frac141 2 p and f frac14 1 2 qTHORN S kf the overall score of the kth technology initiative inthe fth phase of the evaluation process ethk frac14 1 2 p and f frac14 1 2 qTHORN and ~Sf themedian overall score for the fth phase of the evaluation process eth f frac14 1 2 qTHORN where0 wi 1 0 z jkfi 10 and 0 S kj 10

Assuming that a jkf is the alternative representing the jth decision-maker and the kthtechnology initiative in the fth phase of the evaluation process and Z frac14 z jkji represents the preference rating of the kth technology initiative for the ith criterion inthe fth phase of the evaluation process by the jth decision-maker

wi a 111 a 211 anpq

w1 z1111 z2111 znpq1

w2 z1112 z2111 znpq1

wm z111m z211m znpqm

S kf frac14

Xm

ifrac141

Xn

jfrac141

wizjkfi

n ethi frac14 1 2 m and j frac14 1 2 nTHORN

S kf $ ~Sf ethGrant certificationTHORN

S kf a ~Sf ethDeny certificationTHORN

The MCCS architecture team identified i criteria to be used as major evaluation criteriawhen early proof-of-concept work was under consideration The importance weight ofeach criterion (wi) was captured and measured with AHP using the questionnairepresented in Appendix 1 The MCCS architecture team was asked to provide theirsubjective assessment of each pairwise comparison Assuming that in an MCCSarchitecture team memberrsquos mind c1 c2 ci are the ith criteria that contribute to atechnology initiativersquos success The team memberrsquos goal is to assess the relativeimportance of these factors

BIJ114

374

Saatyrsquos AHP (Forman and Gass 2001 Saaty and Vargas 1998) is a method ofderiving a set of weights to be associated with each of the ith criteria Initially the teammember is asked to compare each possible pair cj ck of criteria and provide quantifiedjudgments on which one of the criteria is more important and by how muchThese judgments are represented by an i pound i matrix

A frac14 ethajkTHORN eth j k frac14 1 2 iTHORN

If cj is judged to be of equal importance as ck then ajk frac14 1 if cj is judged to be moreimportant than ck then ajk 1 and if cj is judged to be less important than ck thenajk 1

ajk frac14 1=akj ajk ndash 0

Thus the matrix A is a reciprocal matrix that is the entry ajk is the inverse of the entryakj ajk reflects the relative importance of cj compared with criterion ck For examplea12 frac14 125 indicates that c1 is 125 times as important as c2

Then the vector w representing the relative weights of each of the ith criteria can beobtained by computing the normalized eigenvector corresponding to the maximumeigenvalue of matrix A An eigenvalue of A is defined as l which satisfies thefollowing matrix equation

Aw frac14 lw

where l is a constant called the eigenvalue associated with the given eigenvector wSaaty (1977 1980 1983 1990 1994) has shown that the best estimate of w is the oneassociated with the maximum eigenvalue (lmax) of the matrix A Since the sum of theweights should be equal to 100 the normalized eigenvector is used Saatyrsquos algorithmfor obtaining this w is incorporated in the software Expert Choice (2000)

One of the advantages of AHP is that it encourages teammembers to be consistent intheir pairwise comparisons Saaty suggests a measure of consistency for the pairwisecomparisons When the judgments are perfectly consistent the maximum eigenvaluelmax should equal i the number of criteria that are compared In general the responsesare not perfectly consistent and lmax is greater than n The larger the lmax the greater isthe degree of inconsistency Saaty defines the consistency index (CI) as ethlmax 2 iTHORN=ethi2 1THORN and provides the following random index (RI) table for matrices of order 3-10

n 3 4 5 6 7 8 9 10

RI 058 090 112 132 141 145 149 151

This RI is based on a simulation of a large number of randomly generatedweights Saatyrecommends the calculation of a consistency ratio (CR) which is the ratio of CI to the RIfor the same order matrix A CR of 010 or less is considered acceptable When the CR isunacceptable the team member is made aware that his or her pairwise comparisons arelogically inconsistent and he or she is encouraged to revise them

The responses were processed with Expert Choice (2000) and those withinconsistency ratios greater than 010 were asked to revisit their judgments as it issuggested by Satty The mean importance weights were calculated for the MCCSarchitecture team after necessary adjustments were made to inconsistent responsesEach MCCS architecture team member was presented with hisher individual score

Quest 123

375

along with the group mean weights IT members were given the opportunity to revisittheir judgments and make revisions to their pairwise comparison scores based on thisfeedback Some MCCS architecture team members took advantage of this opportunityand revised their judgments in the second round The mean importance weights for thefirst and second round are presented in Table II As it is shown the second roundresults were slightly different from the first round results

There has been some criticism of AHP in the operations research literature Harkerand Vargas (1987) show that AHP does have an axiomatic foundation the cardinalmeasurement of preferences is fully represented by the eigenvector method and theprinciples of hierarchical composition and rank reversal are valid On the other handDyer (1990a) has questioned the theoretical basis underlying AHP and argues that itcan lead to preference reversals based on the alternative set being analyzedIn response Saaty (1990) explains how rank reversal is a positive feature when newreference points are introduced The geometric aggregation rule is used to avoid thecontroversies associated with rank reversal (Dyer 1990a b Harker and Vargas 1990Saaty 1990 Tavana 2002)

4 A technology certification decision scenario at NASAQuest 123 is a four-phase filtering process utilized at NASA to certify technologyinitiatives in three test labs before final deployment to the MCC Figure 1 shows agraphical presentation of Quest 123 The process begins with an initial screening inphase I eth f frac14 1THORN All j MCCS architecture team members individually assess the kth

technology initiative under consideration by providing a performance rating z jk1i

between 0 and 10 for each of the ith evaluation criteria adopted by MCC All individualperformance ratings in phase I are combined with the previously establishedperformance weights of criteria (wi) using the model presented earlier to calculate anoverall score (S k1) for the kth initiative in phase I This score is compared with themedian score eth ~S 1THORN of the first phase retrieved from Quest database If S k1 $ ~S 1 theinitiative is selected for phase II testing in Quest 1 lab If S k1 a ~S 1 the initiative isrejected for further testing The Quest database is updated with all technologyinitiative scores whether the initiative is accepted or rejected The inclusion of newentries into the Quest database results in recalculation of the median score

A Quest database was initially created with 112 records for phase I (S̃_1 = 5.64), 48 records for phase II (S̃_2 = 6.48), 25 records for phase III (S̃_3 = 7.36), and nine records for phase IV (S̃_4 = 8.14). The first technology initiative considered by Quest 123 was the 113th initiative. The MCCS architecture team members individually evaluated this technology initiative and assigned a numerical preference rating between 0 and 10 for each of the seven evaluation criteria adopted by MCC. The combination of individual performance ratings with the overall performance weights of the criteria, using the model presented earlier, resulted in an overall score for each team member. The median of the individual team member scores was used as the overall score for the 113th initiative in phase I (S_113,1 = 6.76). Since S_113,1 ≥ S̃_1, the 113th technology initiative was approved for adoption and stand-alone testing in Quest 1. As a result, the number of cases in phase I was increased by 1 to 113, and a new median was recalculated for this phase (S̃_1 = 5.67).

Table II. First and second round importance weights

Evaluation factor               Round 1   Round 2
Ease of use and effectiveness    0.296     0.293
Systems perspective              0.184     0.191
Flexibility                      0.140     0.132
Access                           0.086     0.083
Connectivity                     0.091     0.091
Change                           0.088     0.084
Cost reduction                   0.116     0.126
Inconsistency ratio              0.081     0.068

Figure 1. New technology assessment process

The MCCS architecture team members individually performed the stand-alone testing on this initiative during phase II in Quest 1. Again, a numerical preference rating between 0 and 10 for each of the seven evaluation criteria adopted by MCC was given to this initiative for stand-alone testing. Aggregation of the individual scores produced a median score of 6.94 for this initiative in phase II (S_113,2 = 6.94). Since S_113,2 ≥ S̃_2, the 113th technology initiative was approved for adoption and systems integration testing in Quest 2. As a result, the number of cases in phase II was increased by 1 to 49, and a new median was recalculated for this phase (S̃_2 = 6.54).

The MCCS architecture team members individually performed systems integration testing on this initiative during phase III in Quest 2. Again, a numerical preference rating between 0 and 10 for each of the seven evaluation criteria adopted by MCC was given to this initiative for systems integration testing. Aggregation of the individual scores produced a median score of 6.86 for this initiative in phase III (S_113,3 = 6.86). Since S_113,3 < S̃_3, the 113th technology initiative was denied certification and removed from the process and further testing. It should be noted that deletion of this initiative does not change the number of cases or the median of phase III. All technology initiatives are similarly assessed by Quest 123.

The process described here is an iterative one, whereby the median is used as a filter to separate the higher-ranked initiatives from the lower-ranked initiatives evaluated in the Quest labs. Initially, a certain number of initiatives qualify based on the overall median and are sent to Quest 1 in phase I. The scores are re-evaluated in Quest 1 during phase II, and those ranked above the median are allowed to proceed to Quest 2 for phase III testing. A similar process is repeated for all four phases in all three Quest labs until the technology initiative is certified to be deployed to the MCC.
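The iterative filter described above can be summarized in a short sketch: each phase keeps its own score history, an initiative advances only while its phase score is at or above that phase's running median, and, following the worked example, the screening phase records the score whether or not the initiative advances while a score that fails a later phase leaves that phase's records unchanged. The phase histories and the initiative's scores below are illustrative, not NASA data.

```python
import statistics

# Hypothetical score histories for the four phases (the real Quest database
# started with 112, 48, 25, and 9 records, respectively).
database = {
    1: [5.2, 5.6, 6.1, 6.4],
    2: [6.2, 6.5, 6.9],
    3: [7.1, 7.4, 7.9],
    4: [8.0, 8.3],
}

def run_initiative(phase_scores):
    """Push one initiative through the four-phase median filter.

    phase_scores maps phase number -> the initiative's overall score in that phase
    (computed from the weighted team ratings as in the model above).
    """
    for phase in (1, 2, 3, 4):
        score = phase_scores[phase]
        median = statistics.median(database[phase])
        if score >= median:
            database[phase].append(score)   # accepted: record the score and advance
        elif phase == 1:
            database[phase].append(score)   # rejected at screening: still recorded
            return f"rejected at initial screening (score {score} < median {median:.2f})"
        else:
            return f"denied certification in phase {phase} (score {score} < median {median:.2f})"
    return "certified for deployment to the MCC"

# Example: an initiative that clears phases I and II but fails phase III.
print(run_initiative({1: 6.8, 2: 6.9, 3: 6.9, 4: 8.1}))
```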

5. Evaluation of Quest 123
Prior to the implementation of Quest 123, the MCCS architecture team used a simple scoring method, described earlier, to evaluate new technology initiatives at MCC. Due to the lack of a systematic process and thorough analysis, MCC management and the team were concerned about the potential for inconsistency and poor decisions. The team decided to replace the current simple scoring method with a more complete, structured, and analytical process.

Quest 123 was evaluated by 13 MCCS architecture team members. The questionnaire presented in Appendix 2 was used to compare the post-Quest 123 system with the system currently used by the MCCS architecture team. Team members were asked to compare the two systems using the following seven evaluation factors adopted by the team:

(1) Flexibility. The number of criteria, technology initiatives, decision-makers, or phases of the process at MCC is not predetermined. The new system should be flexible enough to handle these changes.

(2) Simplicity. The new system needs to be theoretically sound and, at the same time, easy for users and management to understand.

(3) Analytical. The new system has to be analytical and capable of evaluating multi-criteria problems.

(4) Completeness. The new system should have facilities to collect and integrate a wide range of information concerning change, flexibility, connectivity, access, cost reduction, ease of use, effectiveness, and systems perspective.

(5) Structured. The new system should have manageable steps, with a facility to integrate the results from each step.

(6) Decision process quality. The new system should improve decision process quality.

(7) Decision quality. The new system should improve decision quality.

The results shown in Figure 2 indicate that post-Quest 123 scores are higher on all of the assessment dimensions. Decision quality, with a median score of seven, followed by decision process quality and structured, each with a median score of six, were strongly favored in the post-Quest 123 system over the pre-Quest 123 system. Completeness, with a score of five, and analytical, with a score of three, for post-Quest 123 were perceived as moderately better than the pre-Quest approach. Finally, simplicity and flexibility, with median scores of two and one for post-Quest 123, were perceived as slightly better than the pre-Quest approach by the MCCS architecture team members.

Next, the Wilcoxon signed ranks test of medians was performed. All the medians were statistically different from zero at α = 0.05. A Kruskal-Wallis test also shows that the medians were significantly different at α = 0.05 (Table III). These results indicate that the post-Quest system was preferred to the pre-Quest system on all the criteria.
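A sketch of the two non-parametric tests mentioned above using SciPy; the 13 per-member ratings below are fabricated for illustration and are not the team's actual questionnaire responses.

```python
from scipy import stats

# Fabricated post-minus-pre preference ratings from 13 team members for three of
# the seven evaluation factors (positive values favour the post-Quest 123 system).
ratings = {
    "Decision quality": [7, 7, 6, 7, 8, 7, 5, 7, 7, 6, 7, 8, 7],
    "Completeness":     [5, 4, 5, 6, 5, 5, 4, 5, 6, 5, 5, 4, 5],
    "Flexibility":      [1, 2, 1, 0, 1, 2, 1, 1, 0, 1, 2, 1, 1],
}

# Wilcoxon signed ranks test: is each factor's median different from zero?
for factor, x in ratings.items():
    res = stats.wilcoxon(x)
    print(f"{factor}: W = {res.statistic:.1f}, p = {res.pvalue:.4f}")

# Kruskal-Wallis test: do the factors have significantly different medians?
h, p = stats.kruskal(*ratings.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")
```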

Figure 2. MCCS architecture team median scores

Table III. Post-Quest 123 test of medians

Evaluation factor           Median   Significant (α = 0.05)
Decision quality               7     Yes
Decision process quality       6     Yes
Structured                     6     Yes
Completeness                   5     Yes
Analytical                     3     Yes
Simplicity                     2     Yes
Flexibility                    1     Yes

6. Conclusion
Advances in computer technology and data availability have made decision making more complex than ever. Quest 123 uses AHP and an additive MCDM model to reduce these complexities by systematically structuring the technology assessment problem at MCC into manageable and transparent steps. The earlier assessment model used by MCC was highly intuitive and unstructured. While intuition is still favored by practicing managers, it may be dangerously unreliable when used to solve repetitive problems because of the potential for inconsistency. Quest 123 has several attractive features that address some of the limitations inherent in the earlier naive approach used by MCC.

It should be noted that using a structured framework like Quest 123 does not imply a deterministic approach to technology assessment. While Quest 123 enables MCCS architecture team members to crystallize their thoughts and organize their judgments, it should be used very carefully. Team member judgment is an integral component of Quest 123; therefore, the effectiveness of the model relies heavily on their cognitive abilities. Quest 123 utilizes intuitive methods such as AHP and MCDM for subjective estimation of weights and scores because they cannot be obtained by empirical analysis. While these judgments often mirror a team member's beliefs, they should be used with caution. As with any decision calculus model, practicing managers must be aware of the limitations of subjective estimates. When empirical analysis is feasible and makes economic sense, it should be utilized to improve these estimates (Lodish, 1982).

On the other hand, Quest 123 addresses only some of the problems inherent in technology certification decisions. Quantification of all evaluation criteria is a difficult task. Storing information generated during a session for use in future sessions, along with information on the actual performance of a technology initiative, can facilitate the process of learning from past mistakes. This may be done by interfacing Quest 123 with knowledge-based systems for fine-tuning weights and subjective scores, and for inferring in a non-algorithmic way, for example by inductive reasoning. Neural networks also offer potential for developing knowledge-based models in this area. Quest 123 represents a step toward developing a more comprehensive and structured benchmarking framework.

References

Blumberg, D.F. (1994), "Strategic benchmarking of service and logistic support operations", Journal of Business Logistics, Vol. 15 No. 2, pp. 89-119.

Camp, R.C. (1989), Benchmarking: The Search for Industry Best Practices that Lead to Superior Performance, ASQC Quality Press, Milwaukee, WI.

Camp, R.C. (1995), Business Process Benchmarking: Finding and Implementing Best Practices, ASQC Quality Press, Milwaukee, WI.

Cassell, C., Nadin, S. and Gray, M.O. (2001), "The use and effectiveness of benchmarking in SMEs", Benchmarking: An International Journal, Vol. 8 No. 3, pp. 212-22.

Cox, J.R.W., Mann, L. and Samson, D. (1997), "Benchmarking as a mixed metaphor: disentangling assumptions of competition and collaboration", Journal of Management Studies, Vol. 34 No. 2, pp. 285-314.

Czuchry, A.J., Yasin, M.M. and Dorsch, J.J. (1995), "A review of benchmarking literature – a proposed model for implementation", International Journal of Materials and Product Technology, Vol. 10 No. 1-2, pp. 27-45.

Dorsch, J.J. and Yasin, M.M. (1998), "A framework for benchmarking in the public sector: literature review and directions for future research", International Journal of Public Sector Management, Vol. 11 No. 2-3, pp. 91-115.

Drew, J.E. (1997), "From knowledge to action: the impact of benchmarking on organizational performance", Long Range Planning, Vol. 30 No. 3, pp. 427-41.

Dyer, J.S. (1990a), "Remarks on the analytic hierarchy process", Management Science, Vol. 36 No. 3, pp. 249-58.

Dyer, J.S. (1990b), "A clarification of 'Remarks on the analytic hierarchy process'", Management Science, Vol. 36 No. 3, pp. 274-5.

Expert Choice (Computer Software) (2000), Decision Support Software Inc., McLean, VA.

Forman, E.H. and Gass, S.I. (2001), "The analytic hierarchy process – an exposition", Operations Research, Vol. 49 No. 4, pp. 469-86.

Francis, G. and Holloway, J. (2002), "Beyond comparisons – the role for the operational researcher in benchmarking", Journal of the Operational Research Society, Vol. 53, pp. 283-91.

Frei, F.X. and Harker, P.T. (1999), "Measuring aggregate process performance using AHP", European Journal of Operational Research, Vol. 116 No. 2, pp. 436-42.

Geanuracos, J. and Meiklejohn, I. (1993), Performance Measurement: The New Agenda, Business Intelligence Ltd, London.

Harker, P.T. and Vargas, L.G. (1987), "The theory of ratio scale estimation: Saaty's analytic hierarchy process", Management Science, Vol. 33, pp. 1383-403.

Harker, P.T. and Vargas, L.G. (1990), "Reply to 'Remarks on the analytic hierarchy process' by J.S. Dyer", Management Science, Vol. 36 No. 3, pp. 269-73.

Jackson, A.E., Safford, R.R. and Swart, W.W. (1994), "Roadmap to current benchmarking literature", Journal of Management in Engineering, Vol. 10 No. 6, pp. 60-7.

Jarrar, Y.F. and Zairi, M. (2001), "Future trends in benchmarking for competitive advantage: a global survey", Total Quality Management, Vol. 12 No. 7-8, pp. 906-12.

Korpela, J. and Tuominen, M. (1996), "Benchmarking logistics performance with an application of the analytic hierarchy process", IEEE Transactions on Engineering Management, Vol. 43 No. 3, pp. 323-32.

Landeghem, R.V. and Persoons, K. (2001), "Benchmarking of logistical operations based on a causal model", International Journal of Operations & Production Management, Vol. 21 No. 1-2, pp. 254-67.

Lodish, L.M. (1982), "Experience with decision calculus models and decision support systems", in Schulz, R. and Zoltners, A. (Eds), Marketing Decision Models, North-Holland, New York, NY.

Lu, M.H., Madu, C.N., Kuei, C.H. and Winokur, D. (1994), "Integrating QFD, AHP and benchmarking in strategic marketing", The Journal of Business and Industrial Marketing, Vol. 9 No. 1, pp. 41-50.

Min, H., Min, H. and Chung, K. (2002), "Dynamic benchmarking of hotel service quality", Journal of Services Marketing, Vol. 16 No. 4, pp. 302-19.

Oral, M., Kettani, O. and Cinar, U. (2001), "Project evaluation and selection in a network of collaboration: a consensual disaggregation multi-criteria approach", European Journal of Operational Research, Vol. 130, pp. 332-46.

Partovi, F.Y. (2001), "An analytic model to quantify strategic service vision", International Journal of Service Industry Management, Vol. 12 No. 5, pp. 476-99.

Prasanta, K.D. (2002), "Benchmarking project management practices of Caribbean organizations using analytic hierarchy process", Benchmarking: An International Journal, Vol. 9 No. 4, pp. 326-56.

Saaty, T.L. (1977), "A scaling method for priorities in hierarchical structures", Journal of Mathematical Psychology, Vol. 15, pp. 234-81.

Saaty, T.L. (1980), The Analytic Hierarchy Process, McGraw-Hill, New York, NY.

Saaty, T.L. (1983), "Axiomatic foundations of the Analytic Hierarchy Process", Management Science, Vol. 32 No. 2, pp. 841-55.

Saaty, T.L. (1989), "Decision making, scaling, and number crunching", Decision Sciences, Vol. 20, pp. 404-9.

Saaty, T.L. (1990), "An exposition of the AHP in reply to the paper 'Remarks on the analytic hierarchy process'", Management Science, Vol. 36 No. 3, pp. 259-68.

Saaty, T.L. (1994), "How to make a decision: the analytic hierarchy process", Interfaces, Vol. 24 No. 6, pp. 19-43.

Saaty, T.L. and Vargas, L.G. (1998), "Diagnosis with dependent symptoms: Bayes theorem and the analytic hierarchy process", Operations Research, Vol. 46 No. 4, pp. 491-502.

Sarkis, J. (2001), "Benchmarking for agility", Benchmarking: An International Journal, Vol. 8 No. 2, pp. 88-107.

Schefcyzk, M. (1993), "Industrial benchmarking: a case study of performance analysis techniques", International Journal of Production Economics, Vol. 32 No. 1, pp. 1-11.

Shim, J.P. (1989), "Bibliographical research on the analytic hierarchy process (AHP)", Socio-Economic Planning Sciences, Vol. 23, pp. 161-7.

Spendolini, M.J. (1992), The Benchmarking Book, AMACOM, New York, NY.

Tavana, M. (2002), "Euclid: strategic alternative assessment matrix", Journal of Multi-Criteria Decision Analysis, Vol. 11 No. 2, pp. 75-96.

Tavana, M. (2003), "A multi-criteria group-decision-making model for evaluating and prioritizing advanced-technology projects at NASA", Interfaces, Vol. 33 No. 3, pp. 40-56.

Zahedi, F. (1986), "The analytic hierarchy process – a survey of the method and its applications", Interfaces, Vol. 16, pp. 96-108.

Further reading

Eyrich, H.G. (1991), "Benchmarking to become the best of breed", Manufacturing Systems, Vol. 9 No. 4, pp. 40-7.

Appendix 1

Figure A1. AHP pairwise comparison questionnaire used to capture the importance weights of the evaluation criteria

Appendix 2

Figure A2. Questionnaire used to compare the pre-Quest 123 and post-Quest 123 systems

This RI is based on a simulation of a large number of randomly generatedweights Saatyrecommends the calculation of a consistency ratio (CR) which is the ratio of CI to the RIfor the same order matrix A CR of 010 or less is considered acceptable When the CR isunacceptable the team member is made aware that his or her pairwise comparisons arelogically inconsistent and he or she is encouraged to revise them

The responses were processed with Expert Choice (2000) and those withinconsistency ratios greater than 010 were asked to revisit their judgments as it issuggested by Satty The mean importance weights were calculated for the MCCSarchitecture team after necessary adjustments were made to inconsistent responsesEach MCCS architecture team member was presented with hisher individual score

Quest 123

375

along with the group mean weights IT members were given the opportunity to revisittheir judgments and make revisions to their pairwise comparison scores based on thisfeedback Some MCCS architecture team members took advantage of this opportunityand revised their judgments in the second round The mean importance weights for thefirst and second round are presented in Table II As it is shown the second roundresults were slightly different from the first round results

There has been some criticism of AHP in the operations research literature Harkerand Vargas (1987) show that AHP does have an axiomatic foundation the cardinalmeasurement of preferences is fully represented by the eigenvector method and theprinciples of hierarchical composition and rank reversal are valid On the other handDyer (1990a) has questioned the theoretical basis underlying AHP and argues that itcan lead to preference reversals based on the alternative set being analyzedIn response Saaty (1990) explains how rank reversal is a positive feature when newreference points are introduced The geometric aggregation rule is used to avoid thecontroversies associated with rank reversal (Dyer 1990a b Harker and Vargas 1990Saaty 1990 Tavana 2002)

4 A technology certification decision scenario at NASAQuest 123 is a four-phase filtering process utilized at NASA to certify technologyinitiatives in three test labs before final deployment to the MCC Figure 1 shows agraphical presentation of Quest 123 The process begins with an initial screening inphase I eth f frac14 1THORN All j MCCS architecture team members individually assess the kth

technology initiative under consideration by providing a performance rating z jk1i

between 0 and 10 for each of the ith evaluation criteria adopted by MCC All individualperformance ratings in phase I are combined with the previously establishedperformance weights of criteria (wi) using the model presented earlier to calculate anoverall score (S k1) for the kth initiative in phase I This score is compared with themedian score eth ~S 1THORN of the first phase retrieved from Quest database If S k1 $ ~S 1 theinitiative is selected for phase II testing in Quest 1 lab If S k1 a ~S 1 the initiative isrejected for further testing The Quest database is updated with all technologyinitiative scores whether the initiative is accepted or rejected The inclusion of newentries into the Quest database results in recalculation of the median score

A Quest database was initially created with 112 records for phase I eth ~S 1 frac14 564THORN 48records for phase II eth ~S 2 frac14 648THORN 25 records for phase III eth ~S 3 frac14 736THORN and nine records forphase IV eth ~S 4 frac14 814THORN The first technology initiative considered by Quest 123 was the113th initiative The MCCS architecture team members individually evaluated this

Evaluation factor Round 1 Round 2

Ease of use and effectiveness 0296 0293Systems perspective 0184 0191Flexibility 0140 0132Access 0086 0083Connectivity 0091 0091Change 0088 0084Cost reduction 0116 0126Inconsistency ratio 0081 0068

Table IIFirst and second roundimportance weights

BIJ114

376

technology initiative and assigned anumerical preference rating between 0 and10 for eachof the seven evaluation criteria adopted by MCC The combination of individualperformance ratings with the overall performance weights of criteria using the modelpresented earlier resulted in an overall score for each team member The median score of

Figure 1New technology

assessment process

Quest 123

377

the individual teammember scoreswere used as the overall score for the 113th initiative inphase I ethS 1131 frac14 676THORN Since S 1131 $ ~S 1 the 113th technology initiative was approvedfor adoption and stand alone testing in Quest 1 As a result the number of cases in phase Iwas increased by 1-113 and a new median was recalculated for this phase eth ~S1 frac14 567THORN

The MCCS architecture team members individually performed the stand alonetesting on this initiative during phase II in Quest 1 Again a numerical preferencerating between 0 and 10 for each of the seven evaluation criteria adopted by MCC wasgiven to this initiative for stand alone testing Aggregation of the individual scoresproduced a median score of 694 for this initiative in phase II ethS 1132 frac14 694THORN SinceS 1132 $ ~S 2 the 113th technology initiative was approved for adoption and systemsintegration testing in Quest 2 As a result the number of cases in phase II wasincreased by 1 to 49 and a new median was recalculated for this phase eth ~S 2 frac14 654THORN

The MCCS architecture team members individually performed systems integrationtesting on this initiative during phase III in Quest 2 Again a numerical preferencerating between 0 and 10 for each of the seven evaluation criteria adopted by MCC wasgiven to this initiative for systems integration testing Aggregation of the individualscores produced a median score of 686 for this initiative in phase III ethS 1133 frac14 686THORNSince S 1133 a ~S 3 the 113th technology initiative was denied certification and removedfrom the process and further testing It should be noted that deletion of this initiativedoes not change the number of cases and the median of phase III All technologyinitiatives are similarly assessed by Quest 123

The process describes here is an iterative one whereby the median is used as afilter to separate the higher ranked initiatives from the lower ranked initiatives thatwere evaluated in the Quest labs Initially a certain number of initiatives qualifybased on the overall median and are sent to Quest 1 in phase I The scores arere-evaluated in Quest 1 during phase II and those ranked above the median areallowed to proceed to Quest 2 for phase III testing Similar process is repeated forall four phases in all three Quest labs until the technology initiative is certified to bedeployed to the MCC

5 Evaluation of Quest 123Prior to implementation of Quest 123 the MCCS architecture team used a simplescoring method described earlier to evaluate new technology initiatives at MCC One tothe lack of a systematic process and inadequate thorough analysis MCC managementand the team were concerned about the potential for inconsistency and poor decisionsThe team decided to replace the current simple scoring method with a more completestructured and analytical process

Quest 123was evaluated by 13MCCS architecture teammembers The questionnairepresented in Appendix 2 was used to compare the post-quest 123 with the systemcurrently used by the MCCS architecture team Team members were asked to comparethe two systems using the following seven evaluation factors adopted by the team

(1) Flexibility The number of criteria technology initiatives decision-makers orphases of the process at MCC is not predetermined The new system should beflexible enough to be able to handle these changes

(2) Simplicity The new system needs to be theoretically sound and at the same timeeasy to understand by users and management

BIJ114

378

(3) Analytical The new system has to be analytical and capable of evaluatingmulti-criteria problems

(4) Completeness The new system should have facilities to collect and integrate awide range of information concerning change flexibility connectivity accesscost reduction ease of use effectiveness and systems perspective

(5) Structured The new system should have manageable steps with a facility tointegrate the results from each step

(6) Decision process quality The new system should improve decision processquality

(7) Decision quality The new system should improve the decision quality

The results shown in Figure 2 show that post-quest 123 scores are higher on all of theassessment dimensions Decision quality with a median score of seven followed bydecision process quality and structured with a median score of six were stronglyfavored in the post-quest 123 over the pre-quest 123 system Completeness with a scoreof five and analytical with a score of three for post-quest 123 were perceivedmoderately better than the pre-quest approach Finally simplicity and flexibility withmedian scores of two and one for post-quest 123 were perceived slightly better thanpre-quest approach by the MCCS architecture team members

Next the Wilcoxon signed ranks test of medians was performed All the medianswere statistically different from zero at a frac14 005 A Kruskal-Wallis test also showsthat all the medians were significantly different at a frac14 005 (Table III) These resultsindicate that the post-quest system was preferred to the pre-quest system on allthe criteria

Figure 2MCCS architecture team

median scores

Evaluation factor Median Significant (a frac14 005)

Decision quality 7 YesDecision process quality 6 YesStructured 6 YesCompleteness 5 YesAnalytical 3 YesSimplicity 2 YesFlexibility 1 Yes

Table IIIPost-quest 123 test of

medians

Quest 123

379

6 ConclusionAdvances in computer technology and data availability have made decision makingmore complex than ever Quest 123 uses AHP and an additive MCDM model to reducethese complexities by systematically structuring the technology assessment probleminto manageable and transparent steps at MCC The earlier assessment model usedby MCC was highly intuitive and unstructured While intuition is still favored bypracticing managers it may be dangerously unreliable when used to solve repetitiveproblems because of the potential for inconsistency Quest 123 has several attractivefeatures that address some of the limitations inherent in the earlier naive approachused by MCC

It should be noted that using a structured framework like Quest 123 does not implya deterministic approach to technology assessment While Quest 123 enables MCCSarchitecture team members to crystallize their thoughts and organize their judgmentsit should be used very carefully Team member judgment is an integral component ofQuest 123 therefore the effectiveness of the model relies heavily on their cognitiveabilities Quest 123 utilizes intuitive methods such as AHP and MCDM for subjectiveestimation of weights and scores because they cannot be obtained by empiricalanalysis While these judgments often mirror a team memberrsquos beliefs they should beused with caution As with any decision calculus model the practicing managers mustbe aware of the limitations of subjective estimates When empirical analysis is feasibleand makes economic sense it should be utilized to improve these estimates (Lodish1982)

On the other hand Quest 123 only addresses some of the problems inherent intechnology certification decisions Quantification of all evaluation criteria is a difficulttask Storing information generated during a session for use in future sessions alongwith information on actual performance of a technology initiative can facilitate theprocess of learning from past mistakes This may be done by interfacing Quest 123with knowledge-based systems for fine tuning weights and subjective scores and forinferring in a non-algorithmic way for example by inductive reasoning Neuralnetworks also offer a potential for developing knowledge-based models in this areaQuest 123 presents a step toward developing a more comprehensive and structuredframework in benchmarking

References

Blumberg DF (1994) ldquoStrategic benchmarking of service and logistic support operationsrdquoJournal of Business Logistics Vol 15 No 2 pp 89-119

Camp RC (1989) Benchmarking The Search for Industry Best Practices that Lead to SuperiorPerformance ASQC Quality Press Milwaukee WI

Camp RC (1995) Business Process Benchmarking Finding and Implementing Best PracticesASQC Quality Press Milwaukee WI

Cassell C Nadin S and Gray MO (2001) ldquoThe use and effectiveness of benchmarking inSMEsrdquo Benchmarking An International Journal Vol 8 No 3 pp 212-22

Cox JRW Mann L and Samson D (1997) ldquoBenchmarking as a mixed metaphor disentanglingassumptions of competition and collaborationrdquo Journal of Management Studies Vol 34No 2 pp 285-314

BIJ114

380

Czuchry AJ Yasin MM and Dorsch JJ (1995) ldquoA review of benchmarking literature ndash aproposed model for implementationrdquo International Journal of Materials and ProductTechnology Vol 10 No 1-2 pp 27-45

Dorsch JJ and Yasin MM (1998) ldquoA framework for benchmarking in the public sectorliterature review and directions for future researchrdquo International Journal of Public SectorManagement Vol 11 No 2-3 pp 91-115

Drew JE (1997) ldquoFrom knowledge to action the impact of benchmarking on organizationalperformancerdquo Long Range Planning Vol 30 No 3 pp 427-41

Dyer JS (1990a) ldquoRemarks on the analytic hierarchy processrdquo Management Science Vol 36No 3 pp 249-58

Dyer JS (1990b) ldquoA clarification of lsquoRemarks on the analytic hierarchy processrsquordquo ManagementScience Vol 36 No 3 pp 274-5

Expert Choice (Computer Software) (2000) Decision Support Software Inc McLean VA

Forman EH and Gass SI (2001) ldquoThe analytic hierarchy process ndash an expositionrdquo OperationsResearch Vol 49 No 4 pp 469-86

Francis G and Holloway J (2002) ldquoBeyond comparisons ndash the role for the operational researcherin benchmarkingrdquo Journal of the Operational Research Society Vol 53 pp 283-91

Frei FX and Harker PT (1999) ldquoMeasuring aggregate process performance using AHP 1rdquoEuropean Journal of Operational Research Vol 116 No 2 pp 436-42

Geanuracos J and Meiklejohn I (1993) Performance Measurement The New Agenda BusinessIntelligence Ltd London

Harker PT and Vargas LG (1987) ldquoThe theory of ratio scale estimation Saatyrsquos analytichierarchy processrdquo Management Science Vol 33 pp 1383-403

Harker PT and Vargas LG (1990) ldquoReply to lsquoRemarks on the analytic hierarchy processrsquordquoManagement Science Vol 36 No 3 pp 269-73 by JS Dyer

Jackson AE Safford RR and Swart WW (1994) ldquoRoadmap to current benchmarkingliteraturerdquo Journal of Management in Engineering Vol 10 No 6 pp 60-7

Jarrar YF and Zairi M (2001) ldquoFuture trends in benchmarking for competitive advantagea global surveyrdquo Total Quality Management Vol 12 No 7-8 pp 906-12

Korpela J and Tuominen M (1996) ldquoBenchmarking logistics performance with an applicationof the analytic hierarchy processrdquo IEEE Transactions on Engineering ManagementVol 43 No 3 pp 323-32

Landeghem RV and Persoons K (2001) ldquoBenchmarking of logistical operations based on acausal modelrdquo International Journal of Operations amp Production Management Vol 21No 1-2 pp 254-67

Lodish LM (1982) ldquoExperience with decision calculus models and decision support systemsrdquoin Schulz R and Zoltners A (Eds) Marketing Decision Models North-HollandNew York NY

Lu MH Madu CN Kuei CH and Winokur D (1994) ldquoIntegrating QFD AHP andbenchmarking in strategic marketingrdquo The Journal of Business and Industrial MarketingVol 9 No 1 pp 41-50

Min H Min H and Chung K (2002) ldquoDynamic benchmarking of hotel service qualityrdquo Journalof Services Marketing Vol 16 No 4 pp 302-19

Oral M Kettani O and Cinar U (2001) ldquoProject evaluation and selection in a network ofcollaboration a consensual disaggregation multi-criteria approachrdquo European Journal ofOperational Research Vol 130 pp 332-46

Quest 123

381

Partovi FY (2001) ldquoAn analytic model to quantify strategic service visionrdquo InternationalJournal of Service Industry Management Vol 12 No 5 pp 476-99

Prasanta KD (2002) ldquoBenchmarking project management practices of Caribbean organizationsusing analytic hierarchy processrdquo Benchmarking An International Journal Vol 9 No 4pp 326-56

Saaty TL (1977) ldquoA scaling method for priorities in hierarchical structuresrdquo Journal ofMathematical Psychology Vol 15 pp 234-81

Saaty TL (1980) The Analytic Hierarchy Process McGraw-Hill New York NY

Saaty TL (1983) ldquoAxiomatic foundations of the Analytic Hierarchy Processrdquo ManagementScience Vol 32 No 2 pp 841-55

Saaty TL (1989) ldquoDecision making scaling and number crunchingrdquo Decision Sciences Vol 20pp 404-9

Saaty TL (1990) ldquoAn exposition of the AHP in reply to the paper lsquoRemarks on the analytichierarchy processrsquordquo Management Science Vol 36 No 3 pp 259-68

Saaty TL (1994) ldquoHow to make a decision the analytic hierarchy processrdquo Interfaces Vol 24No 6 pp 19-43

Saaty TL and Vargas LG (1998) ldquoDiagnosis with dependent symptoms Bayes theorem andthe analytic hierarchy processrdquo Operations Research Vol 46 No 4 pp 491-502

Sarkis J (2001) ldquoBenchmarking for agilityrdquo Benchmarking An International Journal Vol 8No 2 pp 88-107

Schefcyzk M (1993) ldquoIndustrial benchmarking a case study of performance analysistechniquesrdquo International Journal of Production Economics Vol 32 No 1 pp 1-11

Shim JP (1989) ldquoBibliographical research on the analytic hierarchy process (AHP)rdquoSocio-Economic Planning Sciences Vol 23 pp 161-7

Spendolini MJ (1992) The Benchmarking Book AMACOM New York NY

Tavana M (2002) ldquoEuclid strategic alternative assessment matrixrdquo Journal of Multi-CriteriaDecision Analysis Vol 11 No 2 pp 75-96

Tavana M (2003) ldquoA multi-criteria group-decision-making model for evaluating and prioritizingadvanced-technology projects at NASArdquo Interfaces Vol 33 No 3 pp 40-56

Zahedi F (1986) ldquoThe analytic hierarchy process ndash a survey of the method and its applicationsrdquoInterfaces Vol 16 pp 96-108

Further reading

Eyrich HG (1991) ldquoBenchmarking to become the best of breedrdquoManufacturing Systems Vol 9No 4 pp 40-7

BIJ114

382

Figure A1

Appendix 1 Quest 123

383

Figure A2

Appendix 2BIJ114

384

Page 6: Quest 123: a benchmarking systemfortechnologyassessment at

Saatyrsquos AHP (Forman and Gass 2001 Saaty and Vargas 1998) is a method ofderiving a set of weights to be associated with each of the ith criteria Initially the teammember is asked to compare each possible pair cj ck of criteria and provide quantifiedjudgments on which one of the criteria is more important and by how muchThese judgments are represented by an i pound i matrix

A frac14 ethajkTHORN eth j k frac14 1 2 iTHORN

If cj is judged to be of equal importance as ck then ajk frac14 1 if cj is judged to be moreimportant than ck then ajk 1 and if cj is judged to be less important than ck thenajk 1

ajk frac14 1=akj ajk ndash 0

Thus the matrix A is a reciprocal matrix that is the entry ajk is the inverse of the entryakj ajk reflects the relative importance of cj compared with criterion ck For examplea12 frac14 125 indicates that c1 is 125 times as important as c2

Then the vector w representing the relative weights of each of the ith criteria can beobtained by computing the normalized eigenvector corresponding to the maximumeigenvalue of matrix A An eigenvalue of A is defined as l which satisfies thefollowing matrix equation

Aw frac14 lw

where l is a constant called the eigenvalue associated with the given eigenvector wSaaty (1977 1980 1983 1990 1994) has shown that the best estimate of w is the oneassociated with the maximum eigenvalue (lmax) of the matrix A Since the sum of theweights should be equal to 100 the normalized eigenvector is used Saatyrsquos algorithmfor obtaining this w is incorporated in the software Expert Choice (2000)

One of the advantages of AHP is that it encourages teammembers to be consistent intheir pairwise comparisons Saaty suggests a measure of consistency for the pairwisecomparisons When the judgments are perfectly consistent the maximum eigenvaluelmax should equal i the number of criteria that are compared In general the responsesare not perfectly consistent and lmax is greater than n The larger the lmax the greater isthe degree of inconsistency Saaty defines the consistency index (CI) as ethlmax 2 iTHORN=ethi2 1THORN and provides the following random index (RI) table for matrices of order 3-10

n 3 4 5 6 7 8 9 10

RI 058 090 112 132 141 145 149 151

This RI is based on a simulation of a large number of randomly generatedweights Saatyrecommends the calculation of a consistency ratio (CR) which is the ratio of CI to the RIfor the same order matrix A CR of 010 or less is considered acceptable When the CR isunacceptable the team member is made aware that his or her pairwise comparisons arelogically inconsistent and he or she is encouraged to revise them

The responses were processed with Expert Choice (2000) and those withinconsistency ratios greater than 010 were asked to revisit their judgments as it issuggested by Satty The mean importance weights were calculated for the MCCSarchitecture team after necessary adjustments were made to inconsistent responsesEach MCCS architecture team member was presented with hisher individual score

Quest 123

375

along with the group mean weights IT members were given the opportunity to revisittheir judgments and make revisions to their pairwise comparison scores based on thisfeedback Some MCCS architecture team members took advantage of this opportunityand revised their judgments in the second round The mean importance weights for thefirst and second round are presented in Table II As it is shown the second roundresults were slightly different from the first round results

There has been some criticism of AHP in the operations research literature Harkerand Vargas (1987) show that AHP does have an axiomatic foundation the cardinalmeasurement of preferences is fully represented by the eigenvector method and theprinciples of hierarchical composition and rank reversal are valid On the other handDyer (1990a) has questioned the theoretical basis underlying AHP and argues that itcan lead to preference reversals based on the alternative set being analyzedIn response Saaty (1990) explains how rank reversal is a positive feature when newreference points are introduced The geometric aggregation rule is used to avoid thecontroversies associated with rank reversal (Dyer 1990a b Harker and Vargas 1990Saaty 1990 Tavana 2002)

4 A technology certification decision scenario at NASAQuest 123 is a four-phase filtering process utilized at NASA to certify technologyinitiatives in three test labs before final deployment to the MCC Figure 1 shows agraphical presentation of Quest 123 The process begins with an initial screening inphase I eth f frac14 1THORN All j MCCS architecture team members individually assess the kth

technology initiative under consideration by providing a performance rating z jk1i

between 0 and 10 for each of the ith evaluation criteria adopted by MCC All individualperformance ratings in phase I are combined with the previously establishedperformance weights of criteria (wi) using the model presented earlier to calculate anoverall score (S k1) for the kth initiative in phase I This score is compared with themedian score eth ~S 1THORN of the first phase retrieved from Quest database If S k1 $ ~S 1 theinitiative is selected for phase II testing in Quest 1 lab If S k1 a ~S 1 the initiative isrejected for further testing The Quest database is updated with all technologyinitiative scores whether the initiative is accepted or rejected The inclusion of newentries into the Quest database results in recalculation of the median score

A Quest database was initially created with 112 records for phase I eth ~S 1 frac14 564THORN 48records for phase II eth ~S 2 frac14 648THORN 25 records for phase III eth ~S 3 frac14 736THORN and nine records forphase IV eth ~S 4 frac14 814THORN The first technology initiative considered by Quest 123 was the113th initiative The MCCS architecture team members individually evaluated this

Evaluation factor Round 1 Round 2

Ease of use and effectiveness 0296 0293Systems perspective 0184 0191Flexibility 0140 0132Access 0086 0083Connectivity 0091 0091Change 0088 0084Cost reduction 0116 0126Inconsistency ratio 0081 0068

Table IIFirst and second roundimportance weights

BIJ114

376

technology initiative and assigned anumerical preference rating between 0 and10 for eachof the seven evaluation criteria adopted by MCC The combination of individualperformance ratings with the overall performance weights of criteria using the modelpresented earlier resulted in an overall score for each team member The median score of

Figure 1New technology

assessment process

Quest 123

377

the individual teammember scoreswere used as the overall score for the 113th initiative inphase I ethS 1131 frac14 676THORN Since S 1131 $ ~S 1 the 113th technology initiative was approvedfor adoption and stand alone testing in Quest 1 As a result the number of cases in phase Iwas increased by 1-113 and a new median was recalculated for this phase eth ~S1 frac14 567THORN

The MCCS architecture team members individually performed the stand alonetesting on this initiative during phase II in Quest 1 Again a numerical preferencerating between 0 and 10 for each of the seven evaluation criteria adopted by MCC wasgiven to this initiative for stand alone testing Aggregation of the individual scoresproduced a median score of 694 for this initiative in phase II ethS 1132 frac14 694THORN SinceS 1132 $ ~S 2 the 113th technology initiative was approved for adoption and systemsintegration testing in Quest 2 As a result the number of cases in phase II wasincreased by 1 to 49 and a new median was recalculated for this phase eth ~S 2 frac14 654THORN

The MCCS architecture team members individually performed systems integrationtesting on this initiative during phase III in Quest 2 Again a numerical preferencerating between 0 and 10 for each of the seven evaluation criteria adopted by MCC wasgiven to this initiative for systems integration testing Aggregation of the individualscores produced a median score of 686 for this initiative in phase III ethS 1133 frac14 686THORNSince S 1133 a ~S 3 the 113th technology initiative was denied certification and removedfrom the process and further testing It should be noted that deletion of this initiativedoes not change the number of cases and the median of phase III All technologyinitiatives are similarly assessed by Quest 123

The process describes here is an iterative one whereby the median is used as afilter to separate the higher ranked initiatives from the lower ranked initiatives thatwere evaluated in the Quest labs Initially a certain number of initiatives qualifybased on the overall median and are sent to Quest 1 in phase I The scores arere-evaluated in Quest 1 during phase II and those ranked above the median areallowed to proceed to Quest 2 for phase III testing Similar process is repeated forall four phases in all three Quest labs until the technology initiative is certified to bedeployed to the MCC

5 Evaluation of Quest 123Prior to implementation of Quest 123 the MCCS architecture team used a simplescoring method described earlier to evaluate new technology initiatives at MCC One tothe lack of a systematic process and inadequate thorough analysis MCC managementand the team were concerned about the potential for inconsistency and poor decisionsThe team decided to replace the current simple scoring method with a more completestructured and analytical process

Quest 123was evaluated by 13MCCS architecture teammembers The questionnairepresented in Appendix 2 was used to compare the post-quest 123 with the systemcurrently used by the MCCS architecture team Team members were asked to comparethe two systems using the following seven evaluation factors adopted by the team

(1) Flexibility The number of criteria technology initiatives decision-makers orphases of the process at MCC is not predetermined The new system should beflexible enough to be able to handle these changes

(2) Simplicity The new system needs to be theoretically sound and at the same timeeasy to understand by users and management

BIJ114

378

(3) Analytical The new system has to be analytical and capable of evaluatingmulti-criteria problems

(4) Completeness The new system should have facilities to collect and integrate awide range of information concerning change flexibility connectivity accesscost reduction ease of use effectiveness and systems perspective

(5) Structured The new system should have manageable steps with a facility tointegrate the results from each step

(6) Decision process quality The new system should improve decision processquality

(7) Decision quality The new system should improve the decision quality

The results shown in Figure 2 show that post-quest 123 scores are higher on all of theassessment dimensions Decision quality with a median score of seven followed bydecision process quality and structured with a median score of six were stronglyfavored in the post-quest 123 over the pre-quest 123 system Completeness with a scoreof five and analytical with a score of three for post-quest 123 were perceivedmoderately better than the pre-quest approach Finally simplicity and flexibility withmedian scores of two and one for post-quest 123 were perceived slightly better thanpre-quest approach by the MCCS architecture team members

Next the Wilcoxon signed ranks test of medians was performed All the medianswere statistically different from zero at a frac14 005 A Kruskal-Wallis test also showsthat all the medians were significantly different at a frac14 005 (Table III) These resultsindicate that the post-quest system was preferred to the pre-quest system on allthe criteria

Figure 2MCCS architecture team

median scores

Evaluation factor Median Significant (a frac14 005)

Decision quality 7 YesDecision process quality 6 YesStructured 6 YesCompleteness 5 YesAnalytical 3 YesSimplicity 2 YesFlexibility 1 Yes

Table IIIPost-quest 123 test of

medians

Quest 123

379

6 ConclusionAdvances in computer technology and data availability have made decision makingmore complex than ever Quest 123 uses AHP and an additive MCDM model to reducethese complexities by systematically structuring the technology assessment probleminto manageable and transparent steps at MCC The earlier assessment model usedby MCC was highly intuitive and unstructured While intuition is still favored bypracticing managers it may be dangerously unreliable when used to solve repetitiveproblems because of the potential for inconsistency Quest 123 has several attractivefeatures that address some of the limitations inherent in the earlier naive approachused by MCC

It should be noted that using a structured framework like Quest 123 does not implya deterministic approach to technology assessment While Quest 123 enables MCCSarchitecture team members to crystallize their thoughts and organize their judgmentsit should be used very carefully Team member judgment is an integral component ofQuest 123 therefore the effectiveness of the model relies heavily on their cognitiveabilities Quest 123 utilizes intuitive methods such as AHP and MCDM for subjectiveestimation of weights and scores because they cannot be obtained by empiricalanalysis While these judgments often mirror a team memberrsquos beliefs they should beused with caution As with any decision calculus model the practicing managers mustbe aware of the limitations of subjective estimates When empirical analysis is feasibleand makes economic sense it should be utilized to improve these estimates (Lodish1982)

On the other hand Quest 123 only addresses some of the problems inherent intechnology certification decisions Quantification of all evaluation criteria is a difficulttask Storing information generated during a session for use in future sessions alongwith information on actual performance of a technology initiative can facilitate theprocess of learning from past mistakes This may be done by interfacing Quest 123with knowledge-based systems for fine tuning weights and subjective scores and forinferring in a non-algorithmic way for example by inductive reasoning Neuralnetworks also offer a potential for developing knowledge-based models in this areaQuest 123 presents a step toward developing a more comprehensive and structuredframework in benchmarking

References

Blumberg DF (1994) ldquoStrategic benchmarking of service and logistic support operationsrdquoJournal of Business Logistics Vol 15 No 2 pp 89-119

Camp RC (1989) Benchmarking The Search for Industry Best Practices that Lead to SuperiorPerformance ASQC Quality Press Milwaukee WI

Camp RC (1995) Business Process Benchmarking Finding and Implementing Best PracticesASQC Quality Press Milwaukee WI

Cassell C Nadin S and Gray MO (2001) ldquoThe use and effectiveness of benchmarking inSMEsrdquo Benchmarking An International Journal Vol 8 No 3 pp 212-22

Cox JRW Mann L and Samson D (1997) ldquoBenchmarking as a mixed metaphor disentanglingassumptions of competition and collaborationrdquo Journal of Management Studies Vol 34No 2 pp 285-314

BIJ114

380

Czuchry AJ Yasin MM and Dorsch JJ (1995) ldquoA review of benchmarking literature ndash aproposed model for implementationrdquo International Journal of Materials and ProductTechnology Vol 10 No 1-2 pp 27-45


Along with the group mean weights, team members were given the opportunity to revisit their judgments and revise their pairwise comparison scores based on this feedback. Some MCCS architecture team members took advantage of this opportunity and revised their judgments in the second round. The mean importance weights for the first and second rounds are presented in Table II; as shown, the second-round results differed only slightly from the first-round results.

There has been some criticism of AHP in the operations research literature. Harker and Vargas (1987) show that AHP does have an axiomatic foundation, that the cardinal measurement of preferences is fully represented by the eigenvector method, and that the principles of hierarchical composition and rank reversal are valid. On the other hand, Dyer (1990a) has questioned the theoretical basis underlying AHP and argues that it can lead to preference reversals depending on the alternative set being analyzed. In response, Saaty (1990) explains how rank reversal is a positive feature when new reference points are introduced. The geometric aggregation rule is used to avoid the controversies associated with rank reversal (Dyer, 1990a, b; Harker and Vargas, 1990; Saaty, 1990; Tavana, 2002).
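The paper reports only the resulting weights and inconsistency ratios, so the following is a minimal sketch, assuming Saaty's standard principal-eigenvector prioritization and consistency ratio, of how a pairwise comparison matrix such as those completed by the MCCS architecture team could be reduced to importance weights. The matrix values, function name, and three-criterion size are illustrative only (the MCC hierarchy uses seven criteria).

```python
import numpy as np

# Saaty's random-index values for matrices of size n = 1..10.
RANDOM_INDEX = [0.0, 0.0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45, 1.49]

def ahp_weights(pairwise: np.ndarray):
    """Principal-eigenvector priority weights and consistency (inconsistency)
    ratio for a reciprocal pairwise comparison matrix."""
    n = pairwise.shape[0]
    eigenvalues, eigenvectors = np.linalg.eig(pairwise)
    k = int(np.argmax(eigenvalues.real))          # index of the principal eigenvalue
    weights = np.abs(eigenvectors[:, k].real)
    weights /= weights.sum()                      # normalize so the weights sum to 1
    lambda_max = eigenvalues[k].real
    consistency_index = (lambda_max - n) / (n - 1)
    consistency_ratio = consistency_index / RANDOM_INDEX[n - 1] if n > 2 else 0.0
    return weights, consistency_ratio

# Hypothetical 3 x 3 comparison matrix on Saaty's 1-9 scale.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, cr = ahp_weights(A)
print(w.round(3), round(cr, 3))
```

With seven criteria, a consistency ratio below 0.10, such as the 0.081 and 0.068 reported in Table II, is conventionally read as acceptably consistent.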

4. A technology certification decision scenario at NASA

Quest 123 is a four-phase filtering process utilized at NASA to certify technology initiatives in three test labs before final deployment to the MCC. Figure 1 shows a graphical presentation of Quest 123. The process begins with an initial screening in phase I ($f = 1$). All $j$ MCCS architecture team members individually assess the $k$th technology initiative under consideration by providing a performance rating $z_i^{jk1}$ between 0 and 10 for each of the $i$th evaluation criteria adopted by MCC. All individual performance ratings in phase I are combined with the previously established performance weights of the criteria ($w_i$), using the model presented earlier, to calculate an overall score ($S^{k1}$) for the $k$th initiative in phase I. This score is compared with the median score ($\tilde{S}_1$) of the first phase retrieved from the Quest database. If $S^{k1} \geq \tilde{S}_1$, the initiative is selected for phase II testing in the Quest 1 lab; if $S^{k1} < \tilde{S}_1$, the initiative is rejected from further testing. The Quest database is updated with all technology initiative scores, whether the initiative is accepted or rejected, and the inclusion of new entries into the Quest database results in recalculation of the median score.
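The additive scoring and median filtering just described can be made concrete with a short sketch. The function and variable names below are assumptions, and the choice to store rejected scores in every phase simply follows the phase I description; this is an illustration of the mechanics, not the Quest 123 implementation.

```python
from statistics import median

def phase_score(ratings_by_member, weights):
    """Additive MCDM score of one initiative in one phase.

    ratings_by_member: one 0-10 rating vector per team member (one rating
    per criterion); weights: criteria weights summing to 1 (for example the
    AHP weights in Table II). Each member's ratings are combined into a
    weighted score, and the team median becomes the initiative's phase score."""
    member_scores = [sum(w * r for w, r in zip(weights, ratings))
                     for ratings in ratings_by_member]
    return median(member_scores)

def quest_filter(score, phase_history):
    """Compare a phase score with the running phase median from the Quest
    database (here just a list of past scores), then store the new score so
    the median shifts for the next initiative."""
    passed = score >= median(phase_history)
    phase_history.append(score)   # stored whether accepted or rejected
    return passed
```

Because the ratings lie between 0 and 10 and the weights sum to one, every phase score also lies between 0 and 10, which is consistent with the database medians quoted in the next paragraph.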

A Quest database was initially created with 112 records for phase I ($\tilde{S}_1 = 5.64$), 48 records for phase II ($\tilde{S}_2 = 6.48$), 25 records for phase III ($\tilde{S}_3 = 7.36$), and nine records for phase IV ($\tilde{S}_4 = 8.14$).

Table II. First and second round importance weights

Evaluation factor              Round 1   Round 2
Ease of use and effectiveness  0.296     0.293
Systems perspective            0.184     0.191
Flexibility                    0.140     0.132
Access                         0.086     0.083
Connectivity                   0.091     0.091
Change                         0.088     0.084
Cost reduction                 0.116     0.126
Inconsistency ratio            0.081     0.068


The first technology initiative considered by Quest 123 was the 113th initiative. The MCCS architecture team members individually evaluated this technology initiative and assigned a numerical preference rating between 0 and 10 for each of the seven evaluation criteria adopted by MCC. The combination of the individual performance ratings with the overall performance weights of the criteria, using the model presented earlier, resulted in an overall score for each team member. The median of the individual team member scores was used as the overall score for the 113th initiative in phase I ($S^{113,1} = 6.76$). Since $S^{113,1} \geq \tilde{S}_1$, the 113th technology initiative was approved for adoption and stand-alone testing in Quest 1. As a result, the number of cases in phase I increased by one to 113 and a new median was recalculated for this phase ($\tilde{S}_1 = 5.67$).

Figure 1. New technology assessment process

The MCCS architecture team members individually performed the stand-alone testing on this initiative during phase II in Quest 1. Again, a numerical preference rating between 0 and 10 for each of the seven evaluation criteria adopted by MCC was given to this initiative for stand-alone testing. Aggregation of the individual scores produced a median score of 6.94 for this initiative in phase II ($S^{113,2} = 6.94$). Since $S^{113,2} \geq \tilde{S}_2$, the 113th technology initiative was approved for adoption and systems integration testing in Quest 2. As a result, the number of cases in phase II increased by one to 49 and a new median was recalculated for this phase ($\tilde{S}_2 = 6.54$).

The MCCS architecture team members individually performed systems integration testing on this initiative during phase III in Quest 2. Again, a numerical preference rating between 0 and 10 for each of the seven evaluation criteria adopted by MCC was given to this initiative for systems integration testing. Aggregation of the individual scores produced a median score of 6.86 for this initiative in phase III ($S^{113,3} = 6.86$). Since $S^{113,3} < \tilde{S}_3$, the 113th technology initiative was denied certification and removed from the process and further testing. It should be noted that deletion of this initiative does not change the number of cases or the median of phase III. All technology initiatives are similarly assessed by Quest 123.

The process described here is an iterative one, whereby the median is used as a filter to separate the higher-ranked initiatives from the lower-ranked initiatives evaluated in the Quest labs. Initially, a certain number of initiatives qualify based on the overall median in phase I and are sent to Quest 1. The scores are re-evaluated in Quest 1 during phase II, and those ranked above the median are allowed to proceed to Quest 2 for phase III testing. A similar process is repeated across all four phases and all three Quest labs until a technology initiative is certified for deployment to the MCC.
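To trace the filtering logic end to end, the fragment below replays the 113th initiative using only the phase scores and pre-update medians quoted above; the underlying 112, 48, 25, and 9 database records are not reproduced here, so this is a hypothetical walk-through rather than a re-computation.

```python
# Pre-update phase medians and the 113th initiative's phase scores, as quoted above.
phase_medians = {1: 5.64, 2: 6.48, 3: 7.36, 4: 8.14}
initiative_113 = {1: 6.76, 2: 6.94, 3: 6.86}

for phase, score in initiative_113.items():
    decision = "advance" if score >= phase_medians[phase] else "reject"
    print(f"Phase {phase}: S = {score:.2f} vs median {phase_medians[phase]:.2f} -> {decision}")
# Phases I and II advance; phase III rejects (6.86 < 7.36), matching the narrative above.
```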

5. Evaluation of Quest 123

Prior to the implementation of Quest 123, the MCCS architecture team used the simple scoring method described earlier to evaluate new technology initiatives at MCC. Due to the lack of a systematic process and thorough analysis, MCC management and the team were concerned about the potential for inconsistency and poor decisions. The team decided to replace the current simple scoring method with a more complete, structured, and analytical process.

Quest 123 was evaluated by 13 MCCS architecture team members. The questionnaire presented in Appendix 2 was used to compare post-Quest 123 with the system the MCCS architecture team had been using. Team members were asked to compare the two systems using the following seven evaluation factors adopted by the team:

(1) Flexibility. The number of criteria, technology initiatives, decision-makers, or phases of the process at MCC is not predetermined; the new system should be flexible enough to handle these changes.

(2) Simplicity. The new system needs to be theoretically sound and, at the same time, easy for users and management to understand.


(3) Analytical. The new system has to be analytical and capable of evaluating multi-criteria problems.

(4) Completeness. The new system should have facilities to collect and integrate a wide range of information concerning change, flexibility, connectivity, access, cost reduction, ease of use, effectiveness, and systems perspective.

(5) Structured. The new system should have manageable steps with a facility to integrate the results from each step.

(6) Decision process quality. The new system should improve decision process quality.

(7) Decision quality. The new system should improve decision quality.

The results shown in Figure 2 indicate that post-Quest 123 scores are higher on all of the assessment dimensions. Decision quality, with a median score of seven, followed by decision process quality and structured, each with a median score of six, were strongly favored in post-Quest 123 over the pre-Quest 123 system. Completeness, with a score of five, and analytical, with a score of three, for post-Quest 123 were perceived as moderately better than the pre-Quest approach. Finally, simplicity and flexibility, with median scores of two and one for post-Quest 123, were perceived as slightly better than the pre-Quest approach by the MCCS architecture team members.

Next, the Wilcoxon signed-rank test of medians was performed; all of the medians were statistically different from zero at α = 0.05. A Kruskal-Wallis test also showed that the medians differed significantly from one another at α = 0.05 (Table III). These results indicate that the post-Quest system was preferred to the pre-Quest system on all the criteria.
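The raw questionnaire responses are not reported, so the snippet below only sketches how the two nonparametric tests mentioned above could be run with SciPy; the thirteen comparison scores per factor are placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon, kruskal

# Placeholder comparison scores (post-Quest 123 relative to pre-Quest 123)
# from 13 team members on two of the seven factors.
decision_quality = np.array([7, 6, 8, 7, 7, 5, 7, 8, 6, 7, 7, 6, 7])
flexibility = np.array([1, 0, 2, 1, 1, 2, 0, 1, 1, 2, 1, 0, 1])

# Wilcoxon signed-rank test: is each factor's median different from zero?
_, p_dq = wilcoxon(decision_quality)
_, p_fl = wilcoxon(flexibility)

# Kruskal-Wallis test: do the factors differ from one another?
_, p_kw = kruskal(decision_quality, flexibility)

print(p_dq < 0.05, p_fl < 0.05, p_kw < 0.05)   # significance at alpha = 0.05
```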

Figure 2. MCCS architecture team median scores

Table III. Post-Quest 123 test of medians

Evaluation factor          Median   Significant (α = 0.05)
Decision quality           7        Yes
Decision process quality   6        Yes
Structured                 6        Yes
Completeness               5        Yes
Analytical                 3        Yes
Simplicity                 2        Yes
Flexibility                1        Yes


6. Conclusion

Advances in computer technology and data availability have made decision making more complex than ever. Quest 123 uses AHP and an additive MCDM model to reduce these complexities by systematically structuring the technology assessment problem into manageable and transparent steps at MCC. The earlier assessment model used by MCC was highly intuitive and unstructured. While intuition is still favored by practicing managers, it may be dangerously unreliable when used to solve repetitive problems because of the potential for inconsistency. Quest 123 has several attractive features that address some of the limitations inherent in the earlier naive approach used by MCC.

It should be noted that using a structured framework like Quest 123 does not imply a deterministic approach to technology assessment. While Quest 123 enables MCCS architecture team members to crystallize their thoughts and organize their judgments, it should be used very carefully. Team member judgment is an integral component of Quest 123; therefore, the effectiveness of the model relies heavily on their cognitive abilities. Quest 123 utilizes intuitive methods such as AHP and MCDM for subjective estimation of weights and scores because they cannot be obtained by empirical analysis. While these judgments often mirror a team member's beliefs, they should be used with caution. As with any decision calculus model, practicing managers must be aware of the limitations of subjective estimates. When empirical analysis is feasible and makes economic sense, it should be utilized to improve these estimates (Lodish, 1982).

On the other hand, Quest 123 addresses only some of the problems inherent in technology certification decisions. Quantification of all evaluation criteria is a difficult task. Storing information generated during a session for use in future sessions, along with information on the actual performance of a technology initiative, can facilitate the process of learning from past mistakes. This may be done by interfacing Quest 123 with knowledge-based systems for fine-tuning weights and subjective scores and for inferring in a non-algorithmic way, for example by inductive reasoning. Neural networks also offer potential for developing knowledge-based models in this area. Quest 123 represents a step toward developing a more comprehensive and structured benchmarking framework.

References

Blumberg, D.F. (1994), "Strategic benchmarking of service and logistic support operations", Journal of Business Logistics, Vol. 15 No. 2, pp. 89-119.

Camp, R.C. (1989), Benchmarking: The Search for Industry Best Practices that Lead to Superior Performance, ASQC Quality Press, Milwaukee, WI.

Camp, R.C. (1995), Business Process Benchmarking: Finding and Implementing Best Practices, ASQC Quality Press, Milwaukee, WI.

Cassell, C., Nadin, S. and Gray, M.O. (2001), "The use and effectiveness of benchmarking in SMEs", Benchmarking: An International Journal, Vol. 8 No. 3, pp. 212-22.

Cox, J.R.W., Mann, L. and Samson, D. (1997), "Benchmarking as a mixed metaphor: disentangling assumptions of competition and collaboration", Journal of Management Studies, Vol. 34 No. 2, pp. 285-314.

Czuchry, A.J., Yasin, M.M. and Dorsch, J.J. (1995), "A review of benchmarking literature – a proposed model for implementation", International Journal of Materials and Product Technology, Vol. 10 No. 1-2, pp. 27-45.

Dorsch, J.J. and Yasin, M.M. (1998), "A framework for benchmarking in the public sector: literature review and directions for future research", International Journal of Public Sector Management, Vol. 11 No. 2-3, pp. 91-115.

Drew, J.E. (1997), "From knowledge to action: the impact of benchmarking on organizational performance", Long Range Planning, Vol. 30 No. 3, pp. 427-41.

Dyer, J.S. (1990a), "Remarks on the analytic hierarchy process", Management Science, Vol. 36 No. 3, pp. 249-58.

Dyer, J.S. (1990b), "A clarification of 'Remarks on the analytic hierarchy process'", Management Science, Vol. 36 No. 3, pp. 274-5.

Expert Choice (Computer Software) (2000), Decision Support Software Inc., McLean, VA.

Forman, E.H. and Gass, S.I. (2001), "The analytic hierarchy process – an exposition", Operations Research, Vol. 49 No. 4, pp. 469-86.

Francis, G. and Holloway, J. (2002), "Beyond comparisons – the role for the operational researcher in benchmarking", Journal of the Operational Research Society, Vol. 53, pp. 283-91.

Frei, F.X. and Harker, P.T. (1999), "Measuring aggregate process performance using AHP", European Journal of Operational Research, Vol. 116 No. 2, pp. 436-42.

Geanuracos, J. and Meiklejohn, I. (1993), Performance Measurement: The New Agenda, Business Intelligence Ltd, London.

Harker, P.T. and Vargas, L.G. (1987), "The theory of ratio scale estimation: Saaty's analytic hierarchy process", Management Science, Vol. 33, pp. 1383-403.

Harker, P.T. and Vargas, L.G. (1990), "Reply to 'Remarks on the analytic hierarchy process' by J.S. Dyer", Management Science, Vol. 36 No. 3, pp. 269-73.

Jackson, A.E., Safford, R.R. and Swart, W.W. (1994), "Roadmap to current benchmarking literature", Journal of Management in Engineering, Vol. 10 No. 6, pp. 60-7.

Jarrar, Y.F. and Zairi, M. (2001), "Future trends in benchmarking for competitive advantage: a global survey", Total Quality Management, Vol. 12 No. 7-8, pp. 906-12.

Korpela, J. and Tuominen, M. (1996), "Benchmarking logistics performance with an application of the analytic hierarchy process", IEEE Transactions on Engineering Management, Vol. 43 No. 3, pp. 323-32.

Landeghem, R.V. and Persoons, K. (2001), "Benchmarking of logistical operations based on a causal model", International Journal of Operations & Production Management, Vol. 21 No. 1-2, pp. 254-67.

Lodish, L.M. (1982), "Experience with decision calculus models and decision support systems", in Schulz, R. and Zoltners, A. (Eds), Marketing Decision Models, North-Holland, New York, NY.

Lu, M.H., Madu, C.N., Kuei, C.H. and Winokur, D. (1994), "Integrating QFD, AHP and benchmarking in strategic marketing", The Journal of Business and Industrial Marketing, Vol. 9 No. 1, pp. 41-50.

Min, H., Min, H. and Chung, K. (2002), "Dynamic benchmarking of hotel service quality", Journal of Services Marketing, Vol. 16 No. 4, pp. 302-19.

Oral, M., Kettani, O. and Cinar, U. (2001), "Project evaluation and selection in a network of collaboration: a consensual disaggregation multi-criteria approach", European Journal of Operational Research, Vol. 130, pp. 332-46.

Partovi, F.Y. (2001), "An analytic model to quantify strategic service vision", International Journal of Service Industry Management, Vol. 12 No. 5, pp. 476-99.

Prasanta, K.D. (2002), "Benchmarking project management practices of Caribbean organizations using analytic hierarchy process", Benchmarking: An International Journal, Vol. 9 No. 4, pp. 326-56.

Saaty, T.L. (1977), "A scaling method for priorities in hierarchical structures", Journal of Mathematical Psychology, Vol. 15, pp. 234-81.

Saaty, T.L. (1980), The Analytic Hierarchy Process, McGraw-Hill, New York, NY.

Saaty, T.L. (1983), "Axiomatic foundations of the Analytic Hierarchy Process", Management Science, Vol. 32 No. 2, pp. 841-55.

Saaty, T.L. (1989), "Decision making, scaling, and number crunching", Decision Sciences, Vol. 20, pp. 404-9.

Saaty, T.L. (1990), "An exposition of the AHP in reply to the paper 'Remarks on the analytic hierarchy process'", Management Science, Vol. 36 No. 3, pp. 259-68.

Saaty, T.L. (1994), "How to make a decision: the analytic hierarchy process", Interfaces, Vol. 24 No. 6, pp. 19-43.

Saaty, T.L. and Vargas, L.G. (1998), "Diagnosis with dependent symptoms: Bayes theorem and the analytic hierarchy process", Operations Research, Vol. 46 No. 4, pp. 491-502.

Sarkis, J. (2001), "Benchmarking for agility", Benchmarking: An International Journal, Vol. 8 No. 2, pp. 88-107.

Schefcyzk, M. (1993), "Industrial benchmarking: a case study of performance analysis techniques", International Journal of Production Economics, Vol. 32 No. 1, pp. 1-11.

Shim, J.P. (1989), "Bibliographical research on the analytic hierarchy process (AHP)", Socio-Economic Planning Sciences, Vol. 23, pp. 161-7.

Spendolini, M.J. (1992), The Benchmarking Book, AMACOM, New York, NY.

Tavana, M. (2002), "Euclid: strategic alternative assessment matrix", Journal of Multi-Criteria Decision Analysis, Vol. 11 No. 2, pp. 75-96.

Tavana, M. (2003), "A multi-criteria group-decision-making model for evaluating and prioritizing advanced-technology projects at NASA", Interfaces, Vol. 33 No. 3, pp. 40-56.

Zahedi, F. (1986), "The analytic hierarchy process – a survey of the method and its applications", Interfaces, Vol. 16, pp. 96-108.

Further reading

Eyrich, H.G. (1991), "Benchmarking to become the best of breed", Manufacturing Systems, Vol. 9 No. 4, pp. 40-7.


Appendix 1

Figure A1

Appendix 2

Figure A2

Page 8: Quest 123: a benchmarking systemfortechnologyassessment at

technology initiative and assigned anumerical preference rating between 0 and10 for eachof the seven evaluation criteria adopted by MCC The combination of individualperformance ratings with the overall performance weights of criteria using the modelpresented earlier resulted in an overall score for each team member The median score of

Figure 1New technology

assessment process

Quest 123

377

the individual teammember scoreswere used as the overall score for the 113th initiative inphase I ethS 1131 frac14 676THORN Since S 1131 $ ~S 1 the 113th technology initiative was approvedfor adoption and stand alone testing in Quest 1 As a result the number of cases in phase Iwas increased by 1-113 and a new median was recalculated for this phase eth ~S1 frac14 567THORN

The MCCS architecture team members individually performed the stand alonetesting on this initiative during phase II in Quest 1 Again a numerical preferencerating between 0 and 10 for each of the seven evaluation criteria adopted by MCC wasgiven to this initiative for stand alone testing Aggregation of the individual scoresproduced a median score of 694 for this initiative in phase II ethS 1132 frac14 694THORN SinceS 1132 $ ~S 2 the 113th technology initiative was approved for adoption and systemsintegration testing in Quest 2 As a result the number of cases in phase II wasincreased by 1 to 49 and a new median was recalculated for this phase eth ~S 2 frac14 654THORN

The MCCS architecture team members individually performed systems integrationtesting on this initiative during phase III in Quest 2 Again a numerical preferencerating between 0 and 10 for each of the seven evaluation criteria adopted by MCC wasgiven to this initiative for systems integration testing Aggregation of the individualscores produced a median score of 686 for this initiative in phase III ethS 1133 frac14 686THORNSince S 1133 a ~S 3 the 113th technology initiative was denied certification and removedfrom the process and further testing It should be noted that deletion of this initiativedoes not change the number of cases and the median of phase III All technologyinitiatives are similarly assessed by Quest 123

The process describes here is an iterative one whereby the median is used as afilter to separate the higher ranked initiatives from the lower ranked initiatives thatwere evaluated in the Quest labs Initially a certain number of initiatives qualifybased on the overall median and are sent to Quest 1 in phase I The scores arere-evaluated in Quest 1 during phase II and those ranked above the median areallowed to proceed to Quest 2 for phase III testing Similar process is repeated forall four phases in all three Quest labs until the technology initiative is certified to bedeployed to the MCC

5 Evaluation of Quest 123Prior to implementation of Quest 123 the MCCS architecture team used a simplescoring method described earlier to evaluate new technology initiatives at MCC One tothe lack of a systematic process and inadequate thorough analysis MCC managementand the team were concerned about the potential for inconsistency and poor decisionsThe team decided to replace the current simple scoring method with a more completestructured and analytical process

Quest 123was evaluated by 13MCCS architecture teammembers The questionnairepresented in Appendix 2 was used to compare the post-quest 123 with the systemcurrently used by the MCCS architecture team Team members were asked to comparethe two systems using the following seven evaluation factors adopted by the team

(1) Flexibility The number of criteria technology initiatives decision-makers orphases of the process at MCC is not predetermined The new system should beflexible enough to be able to handle these changes

(2) Simplicity The new system needs to be theoretically sound and at the same timeeasy to understand by users and management

BIJ114

378

(3) Analytical The new system has to be analytical and capable of evaluatingmulti-criteria problems

(4) Completeness The new system should have facilities to collect and integrate awide range of information concerning change flexibility connectivity accesscost reduction ease of use effectiveness and systems perspective

(5) Structured The new system should have manageable steps with a facility tointegrate the results from each step

(6) Decision process quality The new system should improve decision processquality

(7) Decision quality The new system should improve the decision quality

The results shown in Figure 2 show that post-quest 123 scores are higher on all of theassessment dimensions Decision quality with a median score of seven followed bydecision process quality and structured with a median score of six were stronglyfavored in the post-quest 123 over the pre-quest 123 system Completeness with a scoreof five and analytical with a score of three for post-quest 123 were perceivedmoderately better than the pre-quest approach Finally simplicity and flexibility withmedian scores of two and one for post-quest 123 were perceived slightly better thanpre-quest approach by the MCCS architecture team members

Next the Wilcoxon signed ranks test of medians was performed All the medianswere statistically different from zero at a frac14 005 A Kruskal-Wallis test also showsthat all the medians were significantly different at a frac14 005 (Table III) These resultsindicate that the post-quest system was preferred to the pre-quest system on allthe criteria

Figure 2MCCS architecture team

median scores

Evaluation factor Median Significant (a frac14 005)

Decision quality 7 YesDecision process quality 6 YesStructured 6 YesCompleteness 5 YesAnalytical 3 YesSimplicity 2 YesFlexibility 1 Yes

Table IIIPost-quest 123 test of

medians

Quest 123

379

6 ConclusionAdvances in computer technology and data availability have made decision makingmore complex than ever Quest 123 uses AHP and an additive MCDM model to reducethese complexities by systematically structuring the technology assessment probleminto manageable and transparent steps at MCC The earlier assessment model usedby MCC was highly intuitive and unstructured While intuition is still favored bypracticing managers it may be dangerously unreliable when used to solve repetitiveproblems because of the potential for inconsistency Quest 123 has several attractivefeatures that address some of the limitations inherent in the earlier naive approachused by MCC

It should be noted that using a structured framework like Quest 123 does not implya deterministic approach to technology assessment While Quest 123 enables MCCSarchitecture team members to crystallize their thoughts and organize their judgmentsit should be used very carefully Team member judgment is an integral component ofQuest 123 therefore the effectiveness of the model relies heavily on their cognitiveabilities Quest 123 utilizes intuitive methods such as AHP and MCDM for subjectiveestimation of weights and scores because they cannot be obtained by empiricalanalysis While these judgments often mirror a team memberrsquos beliefs they should beused with caution As with any decision calculus model the practicing managers mustbe aware of the limitations of subjective estimates When empirical analysis is feasibleand makes economic sense it should be utilized to improve these estimates (Lodish1982)

On the other hand Quest 123 only addresses some of the problems inherent intechnology certification decisions Quantification of all evaluation criteria is a difficulttask Storing information generated during a session for use in future sessions alongwith information on actual performance of a technology initiative can facilitate theprocess of learning from past mistakes This may be done by interfacing Quest 123with knowledge-based systems for fine tuning weights and subjective scores and forinferring in a non-algorithmic way for example by inductive reasoning Neuralnetworks also offer a potential for developing knowledge-based models in this areaQuest 123 presents a step toward developing a more comprehensive and structuredframework in benchmarking

References

Blumberg DF (1994) ldquoStrategic benchmarking of service and logistic support operationsrdquoJournal of Business Logistics Vol 15 No 2 pp 89-119

Camp RC (1989) Benchmarking The Search for Industry Best Practices that Lead to SuperiorPerformance ASQC Quality Press Milwaukee WI

Camp RC (1995) Business Process Benchmarking Finding and Implementing Best PracticesASQC Quality Press Milwaukee WI

Cassell C Nadin S and Gray MO (2001) ldquoThe use and effectiveness of benchmarking inSMEsrdquo Benchmarking An International Journal Vol 8 No 3 pp 212-22

Cox JRW Mann L and Samson D (1997) ldquoBenchmarking as a mixed metaphor disentanglingassumptions of competition and collaborationrdquo Journal of Management Studies Vol 34No 2 pp 285-314

BIJ114

380

Czuchry AJ Yasin MM and Dorsch JJ (1995) ldquoA review of benchmarking literature ndash aproposed model for implementationrdquo International Journal of Materials and ProductTechnology Vol 10 No 1-2 pp 27-45

Dorsch JJ and Yasin MM (1998) ldquoA framework for benchmarking in the public sectorliterature review and directions for future researchrdquo International Journal of Public SectorManagement Vol 11 No 2-3 pp 91-115

Drew JE (1997) ldquoFrom knowledge to action the impact of benchmarking on organizationalperformancerdquo Long Range Planning Vol 30 No 3 pp 427-41

Dyer JS (1990a) ldquoRemarks on the analytic hierarchy processrdquo Management Science Vol 36No 3 pp 249-58

Dyer JS (1990b) ldquoA clarification of lsquoRemarks on the analytic hierarchy processrsquordquo ManagementScience Vol 36 No 3 pp 274-5

Expert Choice (Computer Software) (2000) Decision Support Software Inc McLean VA

Forman EH and Gass SI (2001) ldquoThe analytic hierarchy process ndash an expositionrdquo OperationsResearch Vol 49 No 4 pp 469-86

Francis G and Holloway J (2002) ldquoBeyond comparisons ndash the role for the operational researcherin benchmarkingrdquo Journal of the Operational Research Society Vol 53 pp 283-91

Frei FX and Harker PT (1999) ldquoMeasuring aggregate process performance using AHP 1rdquoEuropean Journal of Operational Research Vol 116 No 2 pp 436-42

Geanuracos J and Meiklejohn I (1993) Performance Measurement The New Agenda BusinessIntelligence Ltd London

Harker PT and Vargas LG (1987) ldquoThe theory of ratio scale estimation Saatyrsquos analytichierarchy processrdquo Management Science Vol 33 pp 1383-403

Harker PT and Vargas LG (1990) ldquoReply to lsquoRemarks on the analytic hierarchy processrsquordquoManagement Science Vol 36 No 3 pp 269-73 by JS Dyer

Jackson AE Safford RR and Swart WW (1994) ldquoRoadmap to current benchmarkingliteraturerdquo Journal of Management in Engineering Vol 10 No 6 pp 60-7

Jarrar YF and Zairi M (2001) ldquoFuture trends in benchmarking for competitive advantagea global surveyrdquo Total Quality Management Vol 12 No 7-8 pp 906-12

Korpela J and Tuominen M (1996) ldquoBenchmarking logistics performance with an applicationof the analytic hierarchy processrdquo IEEE Transactions on Engineering ManagementVol 43 No 3 pp 323-32

Landeghem RV and Persoons K (2001) ldquoBenchmarking of logistical operations based on acausal modelrdquo International Journal of Operations amp Production Management Vol 21No 1-2 pp 254-67

Lodish LM (1982) ldquoExperience with decision calculus models and decision support systemsrdquoin Schulz R and Zoltners A (Eds) Marketing Decision Models North-HollandNew York NY

Lu MH Madu CN Kuei CH and Winokur D (1994) ldquoIntegrating QFD AHP andbenchmarking in strategic marketingrdquo The Journal of Business and Industrial MarketingVol 9 No 1 pp 41-50

Min H Min H and Chung K (2002) ldquoDynamic benchmarking of hotel service qualityrdquo Journalof Services Marketing Vol 16 No 4 pp 302-19

Oral M Kettani O and Cinar U (2001) ldquoProject evaluation and selection in a network ofcollaboration a consensual disaggregation multi-criteria approachrdquo European Journal ofOperational Research Vol 130 pp 332-46

Quest 123

381

Partovi FY (2001) ldquoAn analytic model to quantify strategic service visionrdquo InternationalJournal of Service Industry Management Vol 12 No 5 pp 476-99

Prasanta KD (2002) ldquoBenchmarking project management practices of Caribbean organizationsusing analytic hierarchy processrdquo Benchmarking An International Journal Vol 9 No 4pp 326-56

Saaty TL (1977) ldquoA scaling method for priorities in hierarchical structuresrdquo Journal ofMathematical Psychology Vol 15 pp 234-81

Saaty TL (1980) The Analytic Hierarchy Process McGraw-Hill New York NY

Saaty TL (1983) ldquoAxiomatic foundations of the Analytic Hierarchy Processrdquo ManagementScience Vol 32 No 2 pp 841-55

Saaty TL (1989) ldquoDecision making scaling and number crunchingrdquo Decision Sciences Vol 20pp 404-9

Saaty TL (1990) ldquoAn exposition of the AHP in reply to the paper lsquoRemarks on the analytichierarchy processrsquordquo Management Science Vol 36 No 3 pp 259-68

Saaty TL (1994) ldquoHow to make a decision the analytic hierarchy processrdquo Interfaces Vol 24No 6 pp 19-43

Saaty TL and Vargas LG (1998) ldquoDiagnosis with dependent symptoms Bayes theorem andthe analytic hierarchy processrdquo Operations Research Vol 46 No 4 pp 491-502

Sarkis J (2001) ldquoBenchmarking for agilityrdquo Benchmarking An International Journal Vol 8No 2 pp 88-107

Schefcyzk M (1993) ldquoIndustrial benchmarking a case study of performance analysistechniquesrdquo International Journal of Production Economics Vol 32 No 1 pp 1-11

Shim JP (1989) ldquoBibliographical research on the analytic hierarchy process (AHP)rdquoSocio-Economic Planning Sciences Vol 23 pp 161-7

Spendolini MJ (1992) The Benchmarking Book AMACOM New York NY

Tavana M (2002) ldquoEuclid strategic alternative assessment matrixrdquo Journal of Multi-CriteriaDecision Analysis Vol 11 No 2 pp 75-96

Tavana M (2003) ldquoA multi-criteria group-decision-making model for evaluating and prioritizingadvanced-technology projects at NASArdquo Interfaces Vol 33 No 3 pp 40-56

Zahedi F (1986) ldquoThe analytic hierarchy process ndash a survey of the method and its applicationsrdquoInterfaces Vol 16 pp 96-108

Further reading

Eyrich HG (1991) ldquoBenchmarking to become the best of breedrdquoManufacturing Systems Vol 9No 4 pp 40-7

BIJ114

382

Figure A1

Appendix 1 Quest 123

383

Figure A2

Appendix 2BIJ114

384

Page 9: Quest 123: a benchmarking systemfortechnologyassessment at

the individual teammember scoreswere used as the overall score for the 113th initiative inphase I ethS 1131 frac14 676THORN Since S 1131 $ ~S 1 the 113th technology initiative was approvedfor adoption and stand alone testing in Quest 1 As a result the number of cases in phase Iwas increased by 1-113 and a new median was recalculated for this phase eth ~S1 frac14 567THORN

The MCCS architecture team members individually performed the stand alonetesting on this initiative during phase II in Quest 1 Again a numerical preferencerating between 0 and 10 for each of the seven evaluation criteria adopted by MCC wasgiven to this initiative for stand alone testing Aggregation of the individual scoresproduced a median score of 694 for this initiative in phase II ethS 1132 frac14 694THORN SinceS 1132 $ ~S 2 the 113th technology initiative was approved for adoption and systemsintegration testing in Quest 2 As a result the number of cases in phase II wasincreased by 1 to 49 and a new median was recalculated for this phase eth ~S 2 frac14 654THORN

The MCCS architecture team members individually performed systems integrationtesting on this initiative during phase III in Quest 2 Again a numerical preferencerating between 0 and 10 for each of the seven evaluation criteria adopted by MCC wasgiven to this initiative for systems integration testing Aggregation of the individualscores produced a median score of 686 for this initiative in phase III ethS 1133 frac14 686THORNSince S 1133 a ~S 3 the 113th technology initiative was denied certification and removedfrom the process and further testing It should be noted that deletion of this initiativedoes not change the number of cases and the median of phase III All technologyinitiatives are similarly assessed by Quest 123

The process describes here is an iterative one whereby the median is used as afilter to separate the higher ranked initiatives from the lower ranked initiatives thatwere evaluated in the Quest labs Initially a certain number of initiatives qualifybased on the overall median and are sent to Quest 1 in phase I The scores arere-evaluated in Quest 1 during phase II and those ranked above the median areallowed to proceed to Quest 2 for phase III testing Similar process is repeated forall four phases in all three Quest labs until the technology initiative is certified to bedeployed to the MCC

5 Evaluation of Quest 123Prior to implementation of Quest 123 the MCCS architecture team used a simplescoring method described earlier to evaluate new technology initiatives at MCC One tothe lack of a systematic process and inadequate thorough analysis MCC managementand the team were concerned about the potential for inconsistency and poor decisionsThe team decided to replace the current simple scoring method with a more completestructured and analytical process

Quest 123was evaluated by 13MCCS architecture teammembers The questionnairepresented in Appendix 2 was used to compare the post-quest 123 with the systemcurrently used by the MCCS architecture team Team members were asked to comparethe two systems using the following seven evaluation factors adopted by the team

(1) Flexibility The number of criteria technology initiatives decision-makers orphases of the process at MCC is not predetermined The new system should beflexible enough to be able to handle these changes

(2) Simplicity The new system needs to be theoretically sound and at the same timeeasy to understand by users and management

BIJ114

378

(3) Analytical The new system has to be analytical and capable of evaluatingmulti-criteria problems

(4) Completeness The new system should have facilities to collect and integrate awide range of information concerning change flexibility connectivity accesscost reduction ease of use effectiveness and systems perspective

(5) Structured The new system should have manageable steps with a facility tointegrate the results from each step

(6) Decision process quality The new system should improve decision processquality

(7) Decision quality The new system should improve the decision quality

The results shown in Figure 2 show that post-quest 123 scores are higher on all of theassessment dimensions Decision quality with a median score of seven followed bydecision process quality and structured with a median score of six were stronglyfavored in the post-quest 123 over the pre-quest 123 system Completeness with a scoreof five and analytical with a score of three for post-quest 123 were perceivedmoderately better than the pre-quest approach Finally simplicity and flexibility withmedian scores of two and one for post-quest 123 were perceived slightly better thanpre-quest approach by the MCCS architecture team members

Next the Wilcoxon signed ranks test of medians was performed All the medianswere statistically different from zero at a frac14 005 A Kruskal-Wallis test also showsthat all the medians were significantly different at a frac14 005 (Table III) These resultsindicate that the post-quest system was preferred to the pre-quest system on allthe criteria

Figure 2MCCS architecture team

median scores

Evaluation factor Median Significant (a frac14 005)

Decision quality 7 YesDecision process quality 6 YesStructured 6 YesCompleteness 5 YesAnalytical 3 YesSimplicity 2 YesFlexibility 1 Yes

Table IIIPost-quest 123 test of

medians

Quest 123

379

6 ConclusionAdvances in computer technology and data availability have made decision makingmore complex than ever Quest 123 uses AHP and an additive MCDM model to reducethese complexities by systematically structuring the technology assessment probleminto manageable and transparent steps at MCC The earlier assessment model usedby MCC was highly intuitive and unstructured While intuition is still favored bypracticing managers it may be dangerously unreliable when used to solve repetitiveproblems because of the potential for inconsistency Quest 123 has several attractivefeatures that address some of the limitations inherent in the earlier naive approachused by MCC

It should be noted that using a structured framework like Quest 123 does not implya deterministic approach to technology assessment While Quest 123 enables MCCSarchitecture team members to crystallize their thoughts and organize their judgmentsit should be used very carefully Team member judgment is an integral component ofQuest 123 therefore the effectiveness of the model relies heavily on their cognitiveabilities Quest 123 utilizes intuitive methods such as AHP and MCDM for subjectiveestimation of weights and scores because they cannot be obtained by empiricalanalysis While these judgments often mirror a team memberrsquos beliefs they should beused with caution As with any decision calculus model the practicing managers mustbe aware of the limitations of subjective estimates When empirical analysis is feasibleand makes economic sense it should be utilized to improve these estimates (Lodish1982)

On the other hand Quest 123 only addresses some of the problems inherent intechnology certification decisions Quantification of all evaluation criteria is a difficulttask Storing information generated during a session for use in future sessions alongwith information on actual performance of a technology initiative can facilitate theprocess of learning from past mistakes This may be done by interfacing Quest 123with knowledge-based systems for fine tuning weights and subjective scores and forinferring in a non-algorithmic way for example by inductive reasoning Neuralnetworks also offer a potential for developing knowledge-based models in this areaQuest 123 presents a step toward developing a more comprehensive and structuredframework in benchmarking

References

Blumberg DF (1994) ldquoStrategic benchmarking of service and logistic support operationsrdquoJournal of Business Logistics Vol 15 No 2 pp 89-119

Camp RC (1989) Benchmarking The Search for Industry Best Practices that Lead to SuperiorPerformance ASQC Quality Press Milwaukee WI

Camp RC (1995) Business Process Benchmarking Finding and Implementing Best PracticesASQC Quality Press Milwaukee WI

Cassell C Nadin S and Gray MO (2001) ldquoThe use and effectiveness of benchmarking inSMEsrdquo Benchmarking An International Journal Vol 8 No 3 pp 212-22

Cox JRW Mann L and Samson D (1997) ldquoBenchmarking as a mixed metaphor disentanglingassumptions of competition and collaborationrdquo Journal of Management Studies Vol 34No 2 pp 285-314

BIJ114

380

Czuchry AJ Yasin MM and Dorsch JJ (1995) ldquoA review of benchmarking literature ndash aproposed model for implementationrdquo International Journal of Materials and ProductTechnology Vol 10 No 1-2 pp 27-45

Dorsch JJ and Yasin MM (1998) ldquoA framework for benchmarking in the public sectorliterature review and directions for future researchrdquo International Journal of Public SectorManagement Vol 11 No 2-3 pp 91-115

Drew JE (1997) ldquoFrom knowledge to action the impact of benchmarking on organizationalperformancerdquo Long Range Planning Vol 30 No 3 pp 427-41

Dyer JS (1990a) ldquoRemarks on the analytic hierarchy processrdquo Management Science Vol 36No 3 pp 249-58

Dyer JS (1990b) ldquoA clarification of lsquoRemarks on the analytic hierarchy processrsquordquo ManagementScience Vol 36 No 3 pp 274-5

Expert Choice (Computer Software) (2000) Decision Support Software Inc McLean VA

Forman EH and Gass SI (2001) ldquoThe analytic hierarchy process ndash an expositionrdquo OperationsResearch Vol 49 No 4 pp 469-86

Francis G and Holloway J (2002) ldquoBeyond comparisons ndash the role for the operational researcherin benchmarkingrdquo Journal of the Operational Research Society Vol 53 pp 283-91

Frei FX and Harker PT (1999) ldquoMeasuring aggregate process performance using AHP 1rdquoEuropean Journal of Operational Research Vol 116 No 2 pp 436-42

Geanuracos J and Meiklejohn I (1993) Performance Measurement The New Agenda BusinessIntelligence Ltd London

Harker PT and Vargas LG (1987) ldquoThe theory of ratio scale estimation Saatyrsquos analytichierarchy processrdquo Management Science Vol 33 pp 1383-403

Harker PT and Vargas LG (1990) ldquoReply to lsquoRemarks on the analytic hierarchy processrsquordquoManagement Science Vol 36 No 3 pp 269-73 by JS Dyer

Jackson AE Safford RR and Swart WW (1994) ldquoRoadmap to current benchmarkingliteraturerdquo Journal of Management in Engineering Vol 10 No 6 pp 60-7

Jarrar YF and Zairi M (2001) ldquoFuture trends in benchmarking for competitive advantagea global surveyrdquo Total Quality Management Vol 12 No 7-8 pp 906-12

Korpela J and Tuominen M (1996) ldquoBenchmarking logistics performance with an applicationof the analytic hierarchy processrdquo IEEE Transactions on Engineering ManagementVol 43 No 3 pp 323-32

Landeghem RV and Persoons K (2001) ldquoBenchmarking of logistical operations based on acausal modelrdquo International Journal of Operations amp Production Management Vol 21No 1-2 pp 254-67

Lodish LM (1982) ldquoExperience with decision calculus models and decision support systemsrdquoin Schulz R and Zoltners A (Eds) Marketing Decision Models North-HollandNew York NY

Lu MH Madu CN Kuei CH and Winokur D (1994) ldquoIntegrating QFD AHP andbenchmarking in strategic marketingrdquo The Journal of Business and Industrial MarketingVol 9 No 1 pp 41-50

Min H Min H and Chung K (2002) ldquoDynamic benchmarking of hotel service qualityrdquo Journalof Services Marketing Vol 16 No 4 pp 302-19

Oral M Kettani O and Cinar U (2001) ldquoProject evaluation and selection in a network ofcollaboration a consensual disaggregation multi-criteria approachrdquo European Journal ofOperational Research Vol 130 pp 332-46

Quest 123

381

Partovi FY (2001) ldquoAn analytic model to quantify strategic service visionrdquo InternationalJournal of Service Industry Management Vol 12 No 5 pp 476-99

Prasanta KD (2002) ldquoBenchmarking project management practices of Caribbean organizationsusing analytic hierarchy processrdquo Benchmarking An International Journal Vol 9 No 4pp 326-56

Saaty TL (1977) ldquoA scaling method for priorities in hierarchical structuresrdquo Journal ofMathematical Psychology Vol 15 pp 234-81

Saaty TL (1980) The Analytic Hierarchy Process McGraw-Hill New York NY

Saaty TL (1983) ldquoAxiomatic foundations of the Analytic Hierarchy Processrdquo ManagementScience Vol 32 No 2 pp 841-55

Saaty TL (1989) ldquoDecision making scaling and number crunchingrdquo Decision Sciences Vol 20pp 404-9

Saaty TL (1990) ldquoAn exposition of the AHP in reply to the paper lsquoRemarks on the analytichierarchy processrsquordquo Management Science Vol 36 No 3 pp 259-68

Saaty TL (1994) ldquoHow to make a decision the analytic hierarchy processrdquo Interfaces Vol 24No 6 pp 19-43

Saaty TL and Vargas LG (1998) ldquoDiagnosis with dependent symptoms Bayes theorem andthe analytic hierarchy processrdquo Operations Research Vol 46 No 4 pp 491-502

Sarkis J (2001) ldquoBenchmarking for agilityrdquo Benchmarking An International Journal Vol 8No 2 pp 88-107

Schefcyzk M (1993) ldquoIndustrial benchmarking a case study of performance analysistechniquesrdquo International Journal of Production Economics Vol 32 No 1 pp 1-11

Shim JP (1989) ldquoBibliographical research on the analytic hierarchy process (AHP)rdquoSocio-Economic Planning Sciences Vol 23 pp 161-7

Spendolini MJ (1992) The Benchmarking Book AMACOM New York NY

Tavana M (2002) ldquoEuclid strategic alternative assessment matrixrdquo Journal of Multi-CriteriaDecision Analysis Vol 11 No 2 pp 75-96

Tavana M (2003) ldquoA multi-criteria group-decision-making model for evaluating and prioritizingadvanced-technology projects at NASArdquo Interfaces Vol 33 No 3 pp 40-56

Zahedi F (1986) ldquoThe analytic hierarchy process ndash a survey of the method and its applicationsrdquoInterfaces Vol 16 pp 96-108

Further reading

Eyrich HG (1991) ldquoBenchmarking to become the best of breedrdquoManufacturing Systems Vol 9No 4 pp 40-7

BIJ114

382

Figure A1

Appendix 1 Quest 123

383

Figure A2

Appendix 2BIJ114

384

Page 10: Quest 123: a benchmarking systemfortechnologyassessment at

(3) Analytical The new system has to be analytical and capable of evaluatingmulti-criteria problems

(4) Completeness The new system should have facilities to collect and integrate awide range of information concerning change flexibility connectivity accesscost reduction ease of use effectiveness and systems perspective

(5) Structured The new system should have manageable steps with a facility tointegrate the results from each step

(6) Decision process quality The new system should improve decision processquality

(7) Decision quality The new system should improve the decision quality

The results shown in Figure 2 show that post-quest 123 scores are higher on all of theassessment dimensions Decision quality with a median score of seven followed bydecision process quality and structured with a median score of six were stronglyfavored in the post-quest 123 over the pre-quest 123 system Completeness with a scoreof five and analytical with a score of three for post-quest 123 were perceivedmoderately better than the pre-quest approach Finally simplicity and flexibility withmedian scores of two and one for post-quest 123 were perceived slightly better thanpre-quest approach by the MCCS architecture team members

Next the Wilcoxon signed ranks test of medians was performed All the medianswere statistically different from zero at a frac14 005 A Kruskal-Wallis test also showsthat all the medians were significantly different at a frac14 005 (Table III) These resultsindicate that the post-quest system was preferred to the pre-quest system on allthe criteria

Figure 2MCCS architecture team

median scores

Evaluation factor Median Significant (a frac14 005)

Decision quality 7 YesDecision process quality 6 YesStructured 6 YesCompleteness 5 YesAnalytical 3 YesSimplicity 2 YesFlexibility 1 Yes

Table IIIPost-quest 123 test of

medians

Quest 123

379

6 ConclusionAdvances in computer technology and data availability have made decision makingmore complex than ever Quest 123 uses AHP and an additive MCDM model to reducethese complexities by systematically structuring the technology assessment probleminto manageable and transparent steps at MCC The earlier assessment model usedby MCC was highly intuitive and unstructured While intuition is still favored bypracticing managers it may be dangerously unreliable when used to solve repetitiveproblems because of the potential for inconsistency Quest 123 has several attractivefeatures that address some of the limitations inherent in the earlier naive approachused by MCC

It should be noted that using a structured framework like Quest 123 does not implya deterministic approach to technology assessment While Quest 123 enables MCCSarchitecture team members to crystallize their thoughts and organize their judgmentsit should be used very carefully Team member judgment is an integral component ofQuest 123 therefore the effectiveness of the model relies heavily on their cognitiveabilities Quest 123 utilizes intuitive methods such as AHP and MCDM for subjectiveestimation of weights and scores because they cannot be obtained by empiricalanalysis While these judgments often mirror a team memberrsquos beliefs they should beused with caution As with any decision calculus model the practicing managers mustbe aware of the limitations of subjective estimates When empirical analysis is feasibleand makes economic sense it should be utilized to improve these estimates (Lodish1982)

On the other hand Quest 123 only addresses some of the problems inherent intechnology certification decisions Quantification of all evaluation criteria is a difficulttask Storing information generated during a session for use in future sessions alongwith information on actual performance of a technology initiative can facilitate theprocess of learning from past mistakes This may be done by interfacing Quest 123with knowledge-based systems for fine tuning weights and subjective scores and forinferring in a non-algorithmic way for example by inductive reasoning Neuralnetworks also offer a potential for developing knowledge-based models in this areaQuest 123 presents a step toward developing a more comprehensive and structuredframework in benchmarking

References

Blumberg DF (1994) ldquoStrategic benchmarking of service and logistic support operationsrdquoJournal of Business Logistics Vol 15 No 2 pp 89-119

Camp RC (1989) Benchmarking The Search for Industry Best Practices that Lead to SuperiorPerformance ASQC Quality Press Milwaukee WI

Camp RC (1995) Business Process Benchmarking Finding and Implementing Best PracticesASQC Quality Press Milwaukee WI

Cassell C Nadin S and Gray MO (2001) ldquoThe use and effectiveness of benchmarking inSMEsrdquo Benchmarking An International Journal Vol 8 No 3 pp 212-22

Cox JRW Mann L and Samson D (1997) ldquoBenchmarking as a mixed metaphor disentanglingassumptions of competition and collaborationrdquo Journal of Management Studies Vol 34No 2 pp 285-314

BIJ114

380


6. Conclusion
Advances in computer technology and data availability have made decision making more complex than ever. Quest 123 uses AHP and an additive MCDM model to reduce these complexities by systematically structuring the technology assessment problem at the MCC into manageable and transparent steps. The earlier assessment model used by the MCC was highly intuitive and unstructured. While intuition is still favored by practicing managers, it can be dangerously unreliable when applied to repetitive problems because of the potential for inconsistency. Quest 123 has several attractive features that address some of the limitations inherent in the earlier naive approach used by the MCC.
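As an illustration only (not the authors' implementation), the sketch below shows how AHP-style criterion weights could be derived from a pairwise-comparison matrix and then combined with an additive score; the comparison judgments, criterion names, and technology scores are hypothetical.

```python
# Minimal sketch of AHP weighting plus additive MCDM aggregation (illustrative only).
import numpy as np

def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
    """Normalized principal eigenvector of a reciprocal pairwise-comparison matrix."""
    eigvals, eigvecs = np.linalg.eig(pairwise)
    principal = np.abs(eigvecs[:, np.argmax(eigvals.real)])
    return principal / principal.sum()

def consistency_ratio(pairwise: np.ndarray) -> float:
    """Saaty's consistency ratio; values above roughly 0.10 suggest revising the judgments."""
    n = pairwise.shape[0]
    lambda_max = np.max(np.linalg.eigvals(pairwise).real)
    random_index = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}  # Saaty's random indices
    return ((lambda_max - n) / (n - 1)) / random_index[n]

# Hypothetical pairwise judgments for three criteria (e.g. reliability, cost, maintainability).
A = np.array([[1.0, 3.0, 5.0],
              [1 / 3, 1.0, 2.0],
              [1 / 5, 1 / 2, 1.0]])

weights = ahp_weights(A)            # criterion weights from AHP
scores = np.array([0.8, 0.6, 0.9])  # hypothetical 0-1 scores for one technology initiative
overall = float(weights @ scores)   # additive MCDM aggregation

print(f"weights={weights.round(3)}, CR={consistency_ratio(A):.3f}, overall={overall:.3f}")
```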

It should be noted that using a structured framework like Quest 123 does not imply a deterministic approach to technology assessment. While Quest 123 enables MCCS architecture team members to crystallize their thoughts and organize their judgments, it should be used carefully. Team member judgment is an integral component of Quest 123; the effectiveness of the model therefore relies heavily on members' cognitive abilities. Quest 123 utilizes intuitive methods such as AHP and MCDM for the subjective estimation of weights and scores because these values cannot be obtained by empirical analysis. While such judgments often mirror a team member's beliefs, they should be used with caution. As with any decision calculus model, practicing managers must be aware of the limitations of subjective estimates. When empirical analysis is feasible and makes economic sense, it should be used to improve these estimates (Lodish, 1982).

On the other hand, Quest 123 addresses only some of the problems inherent in technology certification decisions. Quantifying all evaluation criteria is a difficult task. Storing the information generated during a session for use in future sessions, along with information on the actual performance of a technology initiative, can facilitate learning from past mistakes. This may be done by interfacing Quest 123 with knowledge-based systems for fine-tuning weights and subjective scores and for inferring in a non-algorithmic way, for example by inductive reasoning. Neural networks also offer potential for developing knowledge-based models in this area. Quest 123 represents a step toward a more comprehensive and structured benchmarking framework.
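A minimal sketch of the session-storage idea described above, under stated assumptions: the file name, field names, and the simple score-adjustment rule are hypothetical stand-ins for the knowledge-based fine-tuning the text envisions, not part of Quest 123 itself.

```python
# Hypothetical sketch: persist session judgments so weights and scores can be revisited
# once actual performance data become available (illustrative only).
import json
from pathlib import Path
from typing import Optional

SESSION_LOG = Path("quest123_sessions.json")  # assumed storage location

def save_session(session_id: str, weights: dict, scores: dict,
                 actual: Optional[dict] = None) -> None:
    """Append one assessment session (and, later, actual performance) to a JSON log."""
    log = json.loads(SESSION_LOG.read_text()) if SESSION_LOG.exists() else {}
    log[session_id] = {"weights": weights, "scores": scores, "actual": actual}
    SESSION_LOG.write_text(json.dumps(log, indent=2))

def adjust_scores(scores: dict, actual: dict, learning_rate: float = 0.2) -> dict:
    """Nudge subjective scores toward observed performance; a crude stand-in for the
    knowledge-based fine-tuning discussed in the text."""
    return {k: s + learning_rate * (actual.get(k, s) - s) for k, s in scores.items()}

# Example: record a session, then refine a score after deployment data arrive.
save_session("lab-quest2-initiative-A",
             {"reliability": 0.5, "cost": 0.3, "maintainability": 0.2},
             {"reliability": 0.8, "cost": 0.6, "maintainability": 0.9})
print(adjust_scores({"reliability": 0.8}, {"reliability": 0.65}))
```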

References

Blumberg, D.F. (1994), "Strategic benchmarking of service and logistic support operations", Journal of Business Logistics, Vol. 15 No. 2, pp. 89-119.

Camp, R.C. (1989), Benchmarking: The Search for Industry Best Practices that Lead to Superior Performance, ASQC Quality Press, Milwaukee, WI.

Camp, R.C. (1995), Business Process Benchmarking: Finding and Implementing Best Practices, ASQC Quality Press, Milwaukee, WI.

Cassell, C., Nadin, S. and Gray, M.O. (2001), "The use and effectiveness of benchmarking in SMEs", Benchmarking: An International Journal, Vol. 8 No. 3, pp. 212-22.

Cox, J.R.W., Mann, L. and Samson, D. (1997), "Benchmarking as a mixed metaphor: disentangling assumptions of competition and collaboration", Journal of Management Studies, Vol. 34 No. 2, pp. 285-314.


Czuchry, A.J., Yasin, M.M. and Dorsch, J.J. (1995), "A review of benchmarking literature – a proposed model for implementation", International Journal of Materials and Product Technology, Vol. 10 No. 1-2, pp. 27-45.

Dorsch, J.J. and Yasin, M.M. (1998), "A framework for benchmarking in the public sector: literature review and directions for future research", International Journal of Public Sector Management, Vol. 11 No. 2-3, pp. 91-115.

Drew, J.E. (1997), "From knowledge to action: the impact of benchmarking on organizational performance", Long Range Planning, Vol. 30 No. 3, pp. 427-41.

Dyer, J.S. (1990a), "Remarks on the analytic hierarchy process", Management Science, Vol. 36 No. 3, pp. 249-58.

Dyer, J.S. (1990b), "A clarification of 'Remarks on the analytic hierarchy process'", Management Science, Vol. 36 No. 3, pp. 274-5.

Expert Choice (Computer Software) (2000), Decision Support Software Inc., McLean, VA.

Forman, E.H. and Gass, S.I. (2001), "The analytic hierarchy process – an exposition", Operations Research, Vol. 49 No. 4, pp. 469-86.

Francis, G. and Holloway, J. (2002), "Beyond comparisons – the role for the operational researcher in benchmarking", Journal of the Operational Research Society, Vol. 53, pp. 283-91.

Frei, F.X. and Harker, P.T. (1999), "Measuring aggregate process performance using AHP 1", European Journal of Operational Research, Vol. 116 No. 2, pp. 436-42.

Geanuracos, J. and Meiklejohn, I. (1993), Performance Measurement: The New Agenda, Business Intelligence Ltd, London.

Harker, P.T. and Vargas, L.G. (1987), "The theory of ratio scale estimation: Saaty's analytic hierarchy process", Management Science, Vol. 33, pp. 1383-403.

Harker, P.T. and Vargas, L.G. (1990), "Reply to 'Remarks on the analytic hierarchy process' by J.S. Dyer", Management Science, Vol. 36 No. 3, pp. 269-73.

Jackson, A.E., Safford, R.R. and Swart, W.W. (1994), "Roadmap to current benchmarking literature", Journal of Management in Engineering, Vol. 10 No. 6, pp. 60-7.

Jarrar, Y.F. and Zairi, M. (2001), "Future trends in benchmarking for competitive advantage: a global survey", Total Quality Management, Vol. 12 No. 7-8, pp. 906-12.

Korpela, J. and Tuominen, M. (1996), "Benchmarking logistics performance with an application of the analytic hierarchy process", IEEE Transactions on Engineering Management, Vol. 43 No. 3, pp. 323-32.

Landeghem, R.V. and Persoons, K. (2001), "Benchmarking of logistical operations based on a causal model", International Journal of Operations & Production Management, Vol. 21 No. 1-2, pp. 254-67.

Lodish, L.M. (1982), "Experience with decision calculus models and decision support systems", in Schulz, R. and Zoltners, A. (Eds), Marketing Decision Models, North-Holland, New York, NY.

Lu, M.H., Madu, C.N., Kuei, C.H. and Winokur, D. (1994), "Integrating QFD, AHP and benchmarking in strategic marketing", The Journal of Business and Industrial Marketing, Vol. 9 No. 1, pp. 41-50.

Min, H., Min, H. and Chung, K. (2002), "Dynamic benchmarking of hotel service quality", Journal of Services Marketing, Vol. 16 No. 4, pp. 302-19.

Oral, M., Kettani, O. and Cinar, U. (2001), "Project evaluation and selection in a network of collaboration: a consensual disaggregation multi-criteria approach", European Journal of Operational Research, Vol. 130, pp. 332-46.


Partovi, F.Y. (2001), "An analytic model to quantify strategic service vision", International Journal of Service Industry Management, Vol. 12 No. 5, pp. 476-99.

Prasanta, K.D. (2002), "Benchmarking project management practices of Caribbean organizations using analytic hierarchy process", Benchmarking: An International Journal, Vol. 9 No. 4, pp. 326-56.

Saaty, T.L. (1977), "A scaling method for priorities in hierarchical structures", Journal of Mathematical Psychology, Vol. 15, pp. 234-81.

Saaty, T.L. (1980), The Analytic Hierarchy Process, McGraw-Hill, New York, NY.

Saaty, T.L. (1983), "Axiomatic foundations of the Analytic Hierarchy Process", Management Science, Vol. 32 No. 2, pp. 841-55.

Saaty, T.L. (1989), "Decision making, scaling, and number crunching", Decision Sciences, Vol. 20, pp. 404-9.

Saaty, T.L. (1990), "An exposition of the AHP in reply to the paper 'Remarks on the analytic hierarchy process'", Management Science, Vol. 36 No. 3, pp. 259-68.

Saaty, T.L. (1994), "How to make a decision: the analytic hierarchy process", Interfaces, Vol. 24 No. 6, pp. 19-43.

Saaty, T.L. and Vargas, L.G. (1998), "Diagnosis with dependent symptoms: Bayes theorem and the analytic hierarchy process", Operations Research, Vol. 46 No. 4, pp. 491-502.

Sarkis, J. (2001), "Benchmarking for agility", Benchmarking: An International Journal, Vol. 8 No. 2, pp. 88-107.

Schefcyzk, M. (1993), "Industrial benchmarking: a case study of performance analysis techniques", International Journal of Production Economics, Vol. 32 No. 1, pp. 1-11.

Shim, J.P. (1989), "Bibliographical research on the analytic hierarchy process (AHP)", Socio-Economic Planning Sciences, Vol. 23, pp. 161-7.

Spendolini, M.J. (1992), The Benchmarking Book, AMACOM, New York, NY.

Tavana, M. (2002), "Euclid: strategic alternative assessment matrix", Journal of Multi-Criteria Decision Analysis, Vol. 11 No. 2, pp. 75-96.

Tavana, M. (2003), "A multi-criteria group-decision-making model for evaluating and prioritizing advanced-technology projects at NASA", Interfaces, Vol. 33 No. 3, pp. 40-56.

Zahedi, F. (1986), "The analytic hierarchy process – a survey of the method and its applications", Interfaces, Vol. 16, pp. 96-108.

Further reading

Eyrich, H.G. (1991), "Benchmarking to become the best of breed", Manufacturing Systems, Vol. 9 No. 4, pp. 40-7.

Appendix 1
Figure A1

Appendix 2
Figure A2
