
International Journal of E-Business Research, 2(3), 28-45, July-September 2006

Copyright © 2006, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited.

Performance Evaluation of Consumer Decision Support Systems

Jiyong Zhang, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland

Pearl Pu, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland

ABSTRACT

Consumer decision support systems (CDSSs) help online users make purchasing decisions in e-commerce Web sites. To more effectively compare the usefulness of the various functionalities and interface features of such systems, we have developed a simulation environment for decision tasks of any scale and structure. Furthermore, we have identified three criteria in an evaluation framework for assessing the quality of such CDSSs: users' cognitive effort, preference expression effort, and decision accuracy. A set of experiments carried out in such simulation environments showed that most CDSSs employed in current e-commerce Web sites are suboptimal. On the other hand, a hybrid decision strategy based on four existing ones was found to be more effective. The interface improvements based on the new strategy correspond to some of the advanced tools already developed in the research field. This result is therefore consistent with our earlier work on evaluating CDSSs with real users. That is, some advanced tools do produce more accurate decisions while requiring a comparable amount of user effort. However, the simulation environment will enable us to efficiently compare more advanced tools among themselves, and indicate further opportunities for functionality and interface improvements.

Keywords: consumer decision support systems; decision accuracy; electronic catalogs; elicitation effort; extended effort-accuracy framework; multi-attribute decision problem; performance-based search; performance evaluation; product recommender systems.

INTRODUCTION

With the rising prosperity of the World Wide Web (WWW), consumers are dealing with an increasingly large amount of product and service information that is far beyond any individual's cognitive effort to process. In early e-commerce practice, online intermediaries were created. With the help of these virtual storefronts, users were able to find product information on a single Web site that gathers product information from thousands of merchants and service suppliers. Examples include shopping.yahoo.com, froogle.com, shopping.com, cars.com, pricegrabber.com, and so forth.


However, due to the increasing popularity of electronic commerce, the number of online retailers has proliferated. As a result, there are now easily millions (or 16-20 categories) of brand-name products available on a single online intermediary Web site. Finding something is once again difficult, even with the help of various commercially available search tools. Recently, much attention in e-commerce research has focused on designing and developing more advanced search and product recommender tools (Burke, Hammond, & Young, 1997; Pu & Faltings, 2000; Reilly, McCarthy, McGinty, & Smyth, 2004; Shearin & Lieberman, 2001; Shimazu, 2001; Stolze, 1999). However, they have not been employed at large scale in practicing e-commerce Web sites. Pu and Kumar (2004) gave some reasons as to why this is the case and when such advanced tools can be expected to be adopted. This work was based on empirical studies of how users interact with product search tools, providing a good direction as to how to establish the true benefits of these advanced tools. However, insights gained from this work are limited. This is mainly due to the lack of a large number of real users for the needed user studies and the high cost of user studies, even when real users are found. Each of the experiments reported in Pu and Kumar (2004) and Pu and Chen (2005) took more than 3 months of work, including the design and preparation of the study, the pilot study, and the empirical study itself. After the work was finished, it remained unclear whether a small number of users recruited in an academic institution can forecast the behavior of the actual user population, which is highly diverse and complex.

Our main objective in this research is to use a simulation environment to evaluate various search tools in terms of interaction behaviors: what effort users would expend to use these tools and what kind of benefits they are likely to receive from them. We base our work on earlier work (Payne, Bettman, & Johnson, 1993) in terms of the design of the simulation environment. However, we have added important elements to adapt such environments to online e-commerce and consumer decision support scenarios. With this simulation environment, we hope to more accurately forecast the acceptance of research tools in the real world, and curtail the evaluation of each tool's performance from months of user study to hours of simulation plus a week of fine-tuning the simulation results against a small but diverse set of real users. This should allow us to evaluate more tools and, more importantly, discover design opportunities for new tools.

Our initial work on measuring the performance of various decision support strategies in e-commerce environments was reported in a conference paper (Zhang & Pu, 2005). The current article is an extended version of that paper. Besides adding significantly more details on the work already reported, there are a number of important new contributions:

• In the conference paper, we only reported the performance evaluation results of various decision strategies such as the lexicographic (LEX) strategy, the elimination-by-aspects (EBA) strategy, and so forth; in this paper, we consider the evaluation of a consumer decision support system as an integral unit comprising decision strategies, user interfaces, and the underlying product catalog;

• In the extended effort-accuracy framework described in the conference paper, we only used a classical definition of decision accuracy; here we propose two new definitions of decision accuracy that correspond more precisely to a user's choice behavior in e-commerce situations rather than the classical choice problem in the decision literature;

• Based on the new definitions, we were able to draw more conclusions from the simulation results: not only can we establish that hybrid decision approaches can reduce users' effort while achieving a high level of decision accuracy, but we can also see some opportunities for improving consumer decision support systems by designing better interfaces and decision approaches, courtesy of the simulation environment.


This paper is organized as follows: the second section reviews related research work; the third section defines the consumer decision support system (CDSS) and clarifies its relationship with our earlier published concepts on the multi-attribute decision problem (MADP) and various decision strategies; the fourth section describes in detail the simulation environment for the performance evaluation of CDSSs; the fifth section describes the extended effort-accuracy framework consisting of three performance criteria: cognitive effort, elicitation effort, and decision accuracy; the sixth section reports the performance evaluation of various CDSSs with respect to a set of simulated MADPs and user preferences; the seventh section discusses the main research results obtained, followed by the conclusion section.

RELATED WORK

In traditional environments where no computer aid is involved, behavioral decision theory provides adequate knowledge describing people's choice behavior, and presents typical approaches to solving decision problems. For example, Payne et al. (1993) established a well-known effort-accuracy framework that described how people adapted their decision strategies by trading off accuracy and cognitive effort to the demands of the tasks they faced. The simulation experiments carried out in that work were able to give a good analysis of the various decision strategies that people employ, and the decision accuracy they would expect to get in return.

In the online electronic environment where the support of computer systems is pervasive, we are interested in analyzing users' choice behaviors when tools are integrated into their information processing environments. That is, we are interested in analyzing, given a computer tool with its system logic, how much effort a user has to expend and how much decision accuracy he or she is to obtain from that tool. On one hand, though the decision maker's cognitive effort is still required, it can be significantly decreased by having computer programs carry out most of the calculation work automatically; on the other hand, the decision makers must expend some effort to explicitly state their preferences to the computer according to the requirements of the underlying decision support approach employed in that system. We call this extra user effort (in addition to the cognitive effort) preference elicitation effort. We believe that elicitation effort plays an important role in the new effort-accuracy model of users' behavior in online choice environments.

Many other researchers have carried out simulation experiments to evaluate the performance of their systems or approaches. Payne et al. (1993) introduced a simulation experiment to measure the performance of various decision strategies in off-line situations. More recently, Boutilier, Patrascu, Poupart, and Schuurmans (2005) carried out experiments by simulating a number of randomly generated synthetic problems, as well as user responses, to evaluate the performance of various query strategies for eliciting bounds on the parameters of utility functions. In Reilly et al. (2005), various user queries were generated artificially from a set of off-line data to analyze the recommendation performance of the incremental critiquing approach. These related works generally suggest that simulating the interaction between users and the system is a promising methodology for performance evaluation. In our work, we go further in this direction and propose a general simulation environment that can be adopted to evaluate the performance of various CDSSs systematically within the extended effort-accuracy framework. To the best of our knowledge, our simulation work is the first attempt at systematically evaluating the performance of various CDSSs with simulation methodology.

CONSUMER DECISION SUPPORT SYSTEM

In a scenario of buying a product (such as a digital camera), the objective of consumers is to choose the product that most closely satisfies their needs and preferences (the decision result) and, furthermore, to not regret the product they have bought (how accurate their decision is). They usually face a large number of product alternatives (or options) and need a decision support system in order to process the entire product catalog without having to examine all items exhaustively. Therefore, a consumer decision support system consists of three components: (1) the product catalog that is accessible to consumers via an interface; (2) the underlying decision support approach that helps a consumer choose and determine the product most satisfying his or her preferences; and (3) the user interface with which a consumer interacts in order to state his or her preferences. We will now introduce these three components respectively.

The product catalog, or more precisely an electronic product catalog (EPC) (Palmer, 1997; Torrens, 2002), provides a list of products, each one represented by a number of attributes. The process of determining the most preferred product from the EPC can be formally described as solving a multi-attribute decision problem Ψ = ⟨X, D, O, P⟩, where X = {X_1, ..., X_n} is a finite set of attributes the product catalog has; D = D_1 × ... × D_n indicates the space of all possible products in the catalog (each D_i (1 ≤ i ≤ n) is a set of possible domain values for attribute X_i); O = {O_1, ..., O_m} is a finite set of available products (also called alternatives or outcomes) that the EPC offers; and P = {P_1, ..., P_t} denotes a set of preferences that the decision maker may have. Each preference P_i may be identified in any form as required by the decision methods. The solution of a MADP is an alternative O most satisfying the decision maker's preferences.
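To make the MADP tuple concrete, the following is a minimal sketch of one possible data layout in Python. The field names and the toy catalog are illustrative assumptions of ours; the paper does not prescribe any particular representation.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MADP:
    """One multi-attribute decision problem <X, D, O, P>."""
    attributes: List[str]               # X = {X_1, ..., X_n}
    domains: List[Tuple[float, float]]  # each D_i as (lower, upper) bounds
    alternatives: List[List[float]]     # O = {O_1, ..., O_m}, one value per attribute
    preferences: dict                   # P, in whatever form the strategy requires

# A toy digital-camera catalog with n = 3 attributes and m = 3 alternatives.
camera_madp = MADP(
    attributes=["price", "optical_zoom", "resolution"],
    domains=[(0, 100), (0, 12), (0, 20)],
    alternatives=[[55, 3, 8], [70, 10, 6], [40, 4, 5]],
    preferences={"weights": [0.5, 0.2, 0.3]},
)
```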

In traditional decision making environments, consumers usually adopt various decision strategies such as EBA or LEX to obtain decision results (Payne et al., 1993). In a computer-assisted scenario, the distribution of work is quite different. It is the CDSS that performs these decision strategies to help the consumer make decisions. The consumer is only required to input his or her preferences as required by the specific decision strategy; the solution can then be chosen for the consumer automatically. When a decision strategy is adopted in the consumer decision support system, we also say it is a decision support approach for that system. In this paper, we focus on the following decision strategies and study the performance of CDSSs based on them.

1. The weighted additive (WADD) strategy. It is a normative approach based on multi-attribute utility theory (MAUT) (Keeney & Raiffa, 1993). In our simulation experiment, we use it as the baseline strategy (a minimal sketch of its computation appears after this list).

2. Some basic heuristic strategies. These are the equal weight (EQW) strategy, the elimination-by-aspects strategy, the majority-of-confirming-dimensions (MCD) strategy, the satisficing (SAT) strategy, the lexicographic (LEX) strategy, and the frequency of good and bad features (FRQ) strategy. Their detailed definitions can be found in Payne et al. (1993) and Zhang and Pu (2005).

3. Hybrid decision strategies. Besides the basic heuristic strategies, people may also use a combination of several of them to make a decision, in an attempt to get a more precise decision result. These kinds of strategies are called hybrid decision strategies. As a concrete example, the C4 strategy (Zhang & Pu, 2005), which is a combination of four basic heuristic strategies (EBA, MCD, LEX, and FRQ), is also studied in this paper.
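To ground the baseline, here is a minimal sketch of the WADD computation under common MAUT simplifications: linear component value functions normalized over each attribute's domain, and higher-is-better attributes. Both simplifications are assumptions of this sketch (the paper approximates component value functions with midvalue points instead).

```python
from typing import List, Tuple

def wadd_choice(alternatives: List[List[float]],
                weights: List[float],
                domains: List[Tuple[float, float]]) -> List[float]:
    """Weighted additive (WADD) strategy: return the alternative whose
    weighted sum of normalized attribute values is maximal."""
    def utility(alt: List[float]) -> float:
        # Normalize each attribute value into [0, 1] over its domain,
        # weight it, and sum (assumes upper > lower for every domain).
        return sum(w * (x - lo) / (hi - lo)
                   for w, x, (lo, hi) in zip(weights, alt, domains))
    return max(alternatives, key=utility)
```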

In a CDSS, the user-interface component is used to obtain the consumer's preferences. However, such preferences are largely determined by the underlying decision support approach that has been adopted in the system. For example, the popular ranked list interface is in fact an interface implementing the lexicographic strategy. Similarly, if we adopt the weighted additive strategy in a consumer decision support system, the user interface will be designed to ask the user to input the corresponding weight and middle values for each attribute. In our current work, we assume the existence of a very simple user interface. Thus, we regard the underlying decision support approach as the main factor of the consumer decision support system.

SIMULATION ENVIRONMENT FOR PERFORMANCE EVALUATION

Our simulation environment is concerned with evaluating how users interact with consumer decision support systems (CDSSs), how decision results are produced, and the quality of these decision results.

The consumer first interacts with the system by inputting his or her preferences through the user interface. With the help of decision support, the system generates a set of recommendations for the consumer. This interactive process can be executed multiple times until the consumer is satisfied with the recommended results (i.e., finds a product to purchase) or gives up due to losing patience.

As shown in Figure 1, for a given CDSS, we evaluate its performance in a simulated environment by the following procedure: (1) we generate a set of MADPs using the Monte Carlo method to simulate the presence of an electronic catalog of any scale and structure characteristics; (2) we generate a set of consumer preferences, also with the Monte Carlo method, taking into account user diversity and scale; (3) we carry out the simulation of the underlying decision approach of the CDSS to solve these MADPs; (4) we obtain the associated decision results for the given CDSS (which product has been chosen given the consumer's preferences); and finally, (5) we evaluate the performance of these decision results in terms of cognitive effort, preference elicitation effort, and decision accuracy under the extended effort-accuracy framework (this framework is discussed in detail in the next section).
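A compact sketch of this five-step loop is shown below, with the problem generator, the strategy, and the performance measures passed in as callables. All names are ours; the concrete components are described in the following sections.

```python
def evaluate_cdss(generate_madp, run_strategy, measures, n_problems=500):
    """Steps 1-5: generate simulated MADPs and preferences, let the
    CDSS's decision approach solve them, and average each performance
    measure (e.g., cognitive effort, elicitation effort, decision
    accuracy) over all generated problems."""
    totals = {name: 0.0 for name in measures}
    for _ in range(n_problems):
        madp = generate_madp()                  # steps (1) and (2)
        solution = run_strategy(madp)           # steps (3) and (4)
        for name, measure in measures.items():  # step (5)
            totals[name] += measure(solution, madp)
    return {name: total / n_problems for name, total in totals.items()}
```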

The simulation environment can be used in many ways to provide different performance measures of a given CDSS. For instance, if both the detailed product information of the CDSS and the consumer's preferences are unknown, we can simulate both the alternatives and the consumer's preferences, and the simulation results give the performance of the CDSS independently of the users and the set of alternatives. If the detailed product information of the CDSS is provided, we only need to simulate the consumer's preferences, and the alternatives of the MADPs can be copied from the CDSS instead of being randomly generated. The simulation results then give the performance of the CDSS on the specified product set.

As a concrete example to demonstrate the usage of such a simulation environment, we will show a procedure for evaluating the performance of various CDSSs in terms of the scale of the MADPs, which is determined by two factors: the number of attributes n and the number of alternatives m. Since we are trying to study the performance of different CDSSs (currently built on heuristic decision strategies) on different scales of MADPs, we assume that users and alternatives are both unknown, and both are simulated to give results independent of the user and the system. More specifically, we classify the decision problems into 20 categories according to the scales of n (the number of attributes) and m (the number of alternatives): n takes five values (5, 10, 15, 20, and 25), and m takes four (10, 100, 1,000, and 10,000). To make the performance evaluation result more accurate, for each scale we randomly generate 500 different MADPs and use their average performance as the final result. The detailed simulation results are reported in the experimental results section.

Figure 1. An architecture of the simulation environment for performance evaluation of a given consumer decision support system

THE EXTENDED EFFORT-ACCURACY FRAMEWORK

The performance of the system can be evaluated by various criteria, such as the degree of a user's satisfaction with the recommended item, the amount of time a user spends to reach a decision, and the decision errors that the consumer may have committed. Without real users' participation, the satisfaction of a consumer with a CDSS is hard to measure. However, the other two criteria can be measured.

The amount of time a user spends to reach a decision is equivalent to the amount of time he or she uses to express preferences and process the list of recommended items in order to reach a decision. The classical effort-accuracy framework mainly investigated the relationship between decision accuracy and the cognitive effort of processing information with different decision strategies in the off-line situation. In the online decision support situation, however, the effort of eliciting preferences must be considered as well.

Furthermore, most products carry a fair amount of financial and emotional risk. Thus, the accuracy of users' choices is extremely important. That is, there is a posterior process where users evaluate the search tools in terms of whether the products they have found via the search tool are really what they want and whether they had enough decision support. This is what we mean by decision accuracy.

We therefore propose an extended effort-accuracy framework that explicitly measures three factors of a given consumer decision support system: cognitive effort, elicitation effort, and decision accuracy. In the remainder of this section, we first recall the measurement of cognitive effort in the classical framework, then give various definitions of accuracy, and then detail the method of measuring elicitation effort. Finally, the cognitive and elicitation effort of these decision strategies are analyzed for the online situation in the last subsection.

Measuring Cognitive Effort

Based upon the work of Newell and Simon (1972), a decision approach can be seen as a sequence of elementary information processes (EIPs), such as reading the values of two alternatives on an attribute, comparing them, and so forth. Assuming that each EIP takes equal cognitive effort, the decision maker's cognitive effort is then measured in terms of the total number of EIPs. In conformance with the classical framework, the set of EIPs for the decision strategies is defined as: (1) READ: read an alternative's value on an attribute into short-term memory (STM); (2) COMPARE: compare two alternatives on an attribute; (3) ADD: add the values of two attributes in STM; (4) DIFFERENCE: calculate the size of the difference between two alternatives on an attribute; (5) PRODUCT: weight one value by another; (6) ELIMINATE: eliminate an alternative from consideration; (7) MOVE: move to the next element of the external environment; and (8) CHOOSE: choose the preferred alternative and end the process.


Measuring Decision Accuracy

Accuracy and effort together form an important performance measure for the evaluation of consumer decision support systems. On one hand, consumers expect to get highly accurate decisions. On the other hand, they may not be inclined (or able) to expend a high level of cognitive and elicitation effort to reach a decision. Three important factors influence the decision accuracy of a consumer decision support system: the underlying decision approach used; the number of interactions required from the end user (if a longer interaction is required, a user may give up before he or she finds the best option); and the number of options displayed to the end user in each interaction cycle (a single item is more likely to miss the target choice than a list of items; however, a longer list of items requires more cognitive effort to process). In our current framework, we investigate the combined result of these three factors (i.e., the decision approach as well as the interface design components) of a given consumer decision support system.

In the following sections, we start with classical definitions of decision accuracy, analyze their features and describe their weaknesses for online environments, and then propose two definitions, which we have developed, that are likely to be more adequate for measuring decision accuracy in e-commerce environments. To eliminate the effect of a specific set of alternatives on the decision accuracy results, in the following definitions we assume that there is a set of N different MADPs to be solved by a given consumer decision support system that implements a particular decision strategy S. The accuracy is measured as an average over all those MADPs.

Accuracy Measured by Selection of Nondominated Alternatives

This definition comes from Grether and Wilde (1983). After adapting it to decision making with the help of a computer system, this definition says that a solution given by a CDSS is correct if and only if it is not dominated by any other alternative. The decision accuracy can therefore be measured by the number of solutions that are Pareto optimal (i.e., not dominated by other alternatives; see also Viappiani, Faltings, Schickel-Zuber, & Pu, 2005). We use O_S^i to represent the optimal solution given by the CDSS with strategy S when solving MADP_i (1 ≤ i ≤ N). The accuracy of selection of nondominated alternatives, Acc_NDA(S), is defined as follows:

$$ Acc_{NDA}(S) = \frac{N - \sum_{i=1}^{N} \mathrm{Dominated}(O_S^i)}{N}, \quad (1) $$

where

$$ \mathrm{Dominated}(O_S^i) = \begin{cases} 1 & \text{if } O_S^i \text{ is dominated in } MADP_i \ (1 \le i \le N) \\ 0 & \text{otherwise.} \end{cases} $$

According to this definition, it is easy to see that a system employing the WADD strategy has 100% accuracy, because all the solutions given by WADD are Pareto optimal. Also, this definition of accuracy measurement is effective only when the system contains some dominated alternatives; otherwise the accuracy of the system is always 100%.

This definition of accuracy can distinguish the errors caused by choosing dominated alternatives of the decision problems. However, measuring decision accuracy using this method is limited in e-commerce environments. In an efficient market, it is unlikely that consumer products or business deals are dominated or dominating. That is, it is unlikely that an apartment would be both spacious and less expensive compared to other available ones. We believe that although this definition is useful, it is not helpful for distinguishing various CDSSs in terms of how good they are at supporting users to select the best choice (not just a nondominated one).
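As an illustration, a direct implementation of equation (1) only needs a pairwise dominance test. The sketch below assumes that higher attribute values are better, which is an assumption of ours, not of the paper.

```python
from typing import List

def dominates(a: List[float], b: List[float]) -> bool:
    """True if a Pareto-dominates b: at least as good on every
    attribute and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def acc_nda(solutions: List[List[float]],
            problems: List[List[List[float]]]) -> float:
    """Equation (1): the fraction of the N solutions that are not
    dominated by any other alternative of their own MADP."""
    n_dominated = sum(
        any(dominates(other, sol) for other in alternatives if other is not sol)
        for sol, alternatives in zip(solutions, problems)
    )
    return (len(solutions) - n_dominated) / len(solutions)
```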


Accuracy Measured by Utility Values

This definition of measuring accuracy was used in the classical effort-accuracy framework (Payne et al., 1993). Since no risk or uncertainty is involved in the MADPs, the expected value of an alternative is equivalent to its utility value. The utility value of each alternative is assumed to be in the weighted additive form. Formally, this accuracy definition can be represented as:

$$ Acc_{UV}(S) = \frac{1}{N} \sum_{i=1}^{N} \frac{V(O_S^i)}{V(O_{WADD}^i)}, \quad (2) $$

where V(·) is the value function given by the WADD strategy in MADP_i. In this definition, a system employing the WADD strategy is also 100% accurate because it always gives out the solution with the maximal utility value.

One advantage of this measure of accuracy is that it can indicate not only that an error has occurred, but also the severity of the decision-making error. For instance, a system achieving 90% accuracy indicates that an average consumer is expected to choose an item that is 10% less valuable than the best possible option. While this definition is useful for, say, choosing a set of courses to take for achieving a particular career objective, it is not the most suitable in e-commerce environments. Imagine that someone has chosen and purchased a digital camera. Two months later, she discovers that the camera that her colleague bought was really the one she wanted. She did not see the desired camera, not because the online store did not have it, but because it was difficult to find and compare items on that particular e-commerce Web site. Even though the camera that she bought satisfied some of her needs, she is still likely to feel a great sense of regret, if not outright disappointment. Her likelihood of returning to that Web site is in question. Given that bad choices can cause great emotional burdens (Luce, Payne, & Bettman, 1999), we have developed the following definition of decision accuracy.

Accuracy Measured by Selection of Target Choice

In our earlier work (Pu & Chen, 2005), we defined decision accuracy as the percentage of users who have chosen the right option using a particular decision support system. We call that option the target choice. In empirical studies with real users, we first asked users to choose a product with the consumer decision support system, and then we revealed the entire database to them in order to determine the target choice. If the product recommended by the consumer decision support system was consistent with the target choice, we said that the user had made an accurate decision.

In the simulation environment, we take the WADD strategy as the baseline. That is, we assume the solution given by WADD is the user's final most-preferred choice. For another given strategy S, if the solution O_S^i is the same as the one determined by WADD, then we count it as one hit (this definition is called the hit ratio). The accuracy is measured statistically by the ratio of the number of hits to the total number of decision problems:

$$ Acc_{HR}(S) = \frac{\sum_{i=1}^{N} \mathrm{Hit}(O_S^i)}{N}, \quad (3) $$

where

$$ \mathrm{Hit}(O_S^i) = \begin{cases} 1 & \text{if } O_S^i = O_{WADD}^i \text{ in } MADP_i \\ 0 & \text{otherwise.} \end{cases} $$

This measure of decision accuracy is ideally consistent with consumers' attitudes towards the decision results. However, this definition assumes that the consumer decision support system recommends only one product to the consumer each time. In reality, the system may show a list of possible products to the consumer, and the order of the product list also matters to the consumer: the products displayed at the top of the list are more likely to be selected. Therefore, we have developed the following definition to take into account that a list of products is displayed, rather than a single product.

Accuracy Measured by Selection of Target Choice Among K-best Items

Here we propose measuring the accuracy of the system according to the ranking order of the K-best products it displays. This is an extension of the previous definition of accuracy. For a given MADP_i, instead of using strategy S to choose a single optimal solution, we can use it to generate a list of solutions with ranking order L_S^i = {O_{S,1}^i, O_{S,2}^i, ..., O_{S,K}^i}, where O_{S,1}^i is the most-preferred solution according to strategy S, O_{S,2}^i is the second-preferred solution, and so on. The first K-best solutions constitute the solution list. If the user's final choice (which is assumed to be given by the WADD strategy, O_{WADD}^i) is in the list, we assign a rank value to the list according to the position of O_{WADD}^i in the list. Formally, we define this accuracy of choosing K-best items as:

$$ Acc_{HR\_in\_Kbest}(S) = \frac{\sum_{i=1}^{N} \mathrm{RankValue}(L_S^i)}{N}, \quad (4) $$

where

$$ \mathrm{RankValue}(L_S^i) = \begin{cases} 1 - \frac{k-1}{K} & \text{if } O_{S,k}^i = O_{WADD}^i \text{ in } MADP_i \ (1 \le k \le K,\ 1 \le i \le N) \\ 0 & \text{otherwise.} \end{cases} $$

According to this definition, the WADD strategy still achieves 100% accuracy and is used as the baseline. A special case of this accuracy definition is that when K=1, it degenerates to the previous definition of hit ratio. In the simulation experimental results that we will show shortly, we have set K to 5.
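The rank-value computation of equation (4) is small enough to show directly. This sketch (the names are ours) also covers the hit ratio of equation (3), which is the K=1 case:

```python
from typing import Any, List

def rank_value(ranked_list: List[Any], target: Any, K: int) -> float:
    """Equation (4): score 1 - (k-1)/K when the target choice (the
    WADD solution) appears at position k of the K-best list, else 0.
    With K=1 this reduces to the Hit() term of equation (3)."""
    for k, item in enumerate(ranked_list[:K], start=1):
        if item == target:
            return 1.0 - (k - 1) / K
    return 0.0

# The target in third position of a 5-best list scores 1 - 2/5 = 0.6.
assert rank_value(["o7", "o2", "o9", "o1", "o4"], "o9", K=5) == 0.6
```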

In practice, it is necessary to eliminate the effect of random decisions, and we expect the strategy of random choice (selecting an alternative randomly from the alternative set, denoted the RAND strategy) to produce zero accuracy. To this end, we define the relative accuracy of the consumer decision support system with strategy S, under each of the definitions, as:

$$ RA_Z(S) = \frac{Acc_Z(S) - Acc_Z(RAND)}{1 - Acc_Z(RAND)}, \quad (5) $$

where Z = NDA, UV, HR, or HR_in_Kbest. For example, RA_HR(LEX) denotes the relative accuracy of the LEX strategy under the hit-ratio definition of accuracy.
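The normalization of equation (5) is a one-liner; the example numbers below are invented for illustration:

```python
def relative_accuracy(acc_strategy: float, acc_rand: float) -> float:
    """Equation (5): rescale accuracy so the RAND strategy scores 0
    while the WADD baseline (accuracy 1.0) keeps scoring 1."""
    return (acc_strategy - acc_rand) / (1.0 - acc_rand)

# E.g., a raw hit ratio of 0.46 against a RAND hit ratio of 0.10:
print(relative_accuracy(0.46, 0.10))  # ~0.4
```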

From the previous definitions, we can see that each definition represents one aspect of the accuracy of the decision strategies. We think that the definitions of hit ratio and K-best items are more suitable for measuring the accuracy of various consumer decision support systems, particularly in e-commerce environments. In the sixth section, we will study the performance of various decision strategies with these accuracy measurement definitions.

Measuring Elicitation Effort

In computer-aided decision environments, a considerable amount of decision effort goes into preference elicitation, since people need to "tell" their preferences explicitly to the computer system. So far, no formal method has been given to measure preference elicitation effort. An elicitation process can be decomposed into a series of basic interactions between the user and the computer, such as selecting an option from a list, filling in a blank, answering a question, and so forth. We call these basic interaction actions elementary elicitation processes (EEPs). In our analysis, we define the set of EEPs as follows: (1) SELECT: select an item from a menu or a dropdown list; (2) FILLIN: fill in a value in an edit box; (3) ANSWER: answer a basic question; (4) CLICK: click a button to execute an action.

It is obvious that different EEPs require different elicitation effort (for instance, the EEP of one CLICK would be much easier than an EEP of FILLIN for a weight value of a given attribute). For the sake of simplification, we currently assume that each EEP requires an equal amount of effort from the user. Therefore, given a specific decision approach, elicitation effort is measured by the total number of EEPs it may require.

This elicitation effort is a new factor in the online environment. The main difference between cognitive effort and elicitation effort lies in the fact that cognitive effort describes the mental activities of processing information, while elicitation effort concerns the interaction effort between the decision maker and the computer system through predesigned user interfaces. Even when decision makers already have clear preferences in mind, they must still state their preferences in a way that the computer can "understand." With the help of computer systems, the decision maker is able to reduce cognitive effort by compensating with a certain degree of elicitation effort.

Let us consider a simple decision problem with three attributes and four alternatives. When computer support is not provided, the cognitive effort of solving this problem by the WADD strategy will be 24 READs, 8 ADDs, 12 PRODUCTs, 3 COMPAREs, and 1 CHOOSE. The total number of EIPs is therefore 48.

However, with the aid of a computer system, the decision maker could get the same result by spending two units of elicitation effort (FILLIN the weight values of the first two attributes) and one unit of cognitive effort (CHOOSE the final result).
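The 48-EIP figure can be reproduced mechanically. The decomposition below is our reconstruction of the standard WADD tally (read each attribute value and its weight, weight it, sum per alternative, compare the totals, choose); it matches the counts in the example above.

```python
def wadd_eip_count(n: int, m: int) -> int:
    """EIPs for solving an n-attribute, m-alternative MADP by the
    WADD strategy without computer support."""
    reads = 2 * n * m   # each attribute value and its weight
    products = n * m    # weight every value
    adds = (n - 1) * m  # sum the weighted values per alternative
    compares = m - 1    # keep the best of the m totals
    choose = 1          # choose the preferred alternative
    return reads + products + adds + compares + choose

# 3 attributes, 4 alternatives: 24 READs, 12 PRODUCTs, 8 ADDs,
# 3 COMPAREs, 1 CHOOSE = 48 EIPs, as in the example above.
print(wadd_eip_count(3, 4))  # 48
```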

Analysis of Cognitive and Elicitation Effort

With the support of computer systems, the cognitive effort for WADD, as well as for the basic heuristic strategies, is quite low. The decision maker inputs his or her preferences, and the decision support system executes the strategy and shows the proposed product. Then the decision maker chooses this product and the decision process ends. Thus, the cognitive effort is equal to one EIP: CHOOSE the final alternative and exit the process. For the C4 strategy, the cognitive effort of solving an MADP with n attributes and m alternatives is equal to that of solving a problem with n attributes and 4 alternatives in the traditional situation, the cognitive effort of which has been studied in Payne et al. (1993).

According to their definitions, the various decision strategies require that preferences with different parameters be elicited. For example, for the WADD strategy, the component value function and the weight of each attribute must be obtained, while for the EBA strategy, the importance order and a cutoff value for each attribute are required. The required parameters for each strategy are shown in Table 1.

For each parameter in the aforementioned strategies, a certain amount of elicitation effort is required. This elicitation effort may vary with different implementations of the user interface. For example, to elicit the weight value of an attribute, the user can just FILLIN the value in an edit box, or the user can ANSWER several questions to approximate the weight value. In our analysis and the following simulation experiments, we follow the at-least rule: the elicitation effort is determined by the least number of EEP(s). In the above example, the elicitation effort for obtaining a weight value is measured as 1 EEP.

Table 1. Elicitation effort analysis of decision strategies

Strategy   Parameters required to be elicited
WADD       Weights, component value functions
EQW        Component value functions
EBA        Importance order, cutoff values
MCD        None
SAT        Cutoff values
LEX        Importance order
FRQ        Cutoff values for good and bad features
C4         Cutoff values, importance order

SIMULATION RESULTS

In this section, we report the experimental results on the performance of various consumer decision support systems under the simulation environment introduced earlier. To simplify the experiments, we only evaluate those CDSSs built on the decision strategies listed in Table 1. Without loss of generality, we will also use the term decision strategy to represent the CDSS built on that decision strategy.

For each CDSS, we first simulate a large variety of MADPs, and then run the corresponding decision strategy on the computer to generate the decision results. The elicitation effort and decision accuracy are then calculated according to the extended effort-accuracy framework. For each MADP, the domain values for a given attribute are determined randomly: the lower bound of each attribute is set to 0, and the upper bound is determined randomly from the range of 2 to 100. Formally speaking, for each attribute X_i, we define D_i = [0, z_i] where z_i ∈ [2, 100].

As shown in Table 1, each decision strategy (except MCD) requires the elicitation of some specific parameters, such as attribute weights or cutoff values, to represent the user's preferences. To simulate the component value functions required by the WADD strategy, we assume that the component value function for each attribute is approximated by three midvalue points, which are randomly generated. Thus, each component value function requires three units of EEPs. Other required parameters, such as the weight and cutoff value (each requiring one unit of EEP) for each attribute, are also simulated by the random generation process. The order of importance is determined by the weight order of the attributes for consistency.
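A minimal sketch of this Monte Carlo generation step follows. The uniform sampling of attribute values and preference parameters is our assumption; the paper only fixes the domain bounds.

```python
import random

def generate_random_madp(n: int, m: int) -> dict:
    """Generate one MADP: each attribute domain is [0, z_i] with z_i
    drawn from [2, 100], the m alternatives take random values in
    those domains, and the simulated preferences include a weight and
    a cutoff value per attribute (one EEP each under Table 1)."""
    uppers = [random.uniform(2, 100) for _ in range(n)]
    return {
        "domains": [(0.0, z) for z in uppers],
        "alternatives": [[random.uniform(0, z) for z in uppers]
                         for _ in range(m)],
        "weights": [random.random() for _ in range(n)],
        "cutoffs": [random.uniform(0, z) for z in uppers],
    }
```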

In our simulation experiments, the WADD strategy is appointed as the baseline strategy, and the relative accuracy of a strategy is calculated according to equation (5). The elicitation effort is measured in terms of the total number of EEPs required by the specific strategy, and the cognitive effort is measured by the required units of EIPs. Since the relationship between accuracy and cognitive effort has already been studied and analyzed by Payne et al. (1993), in this section we only focus on the performance of each strategy in terms of decision accuracy and elicitation effort.

Figure 2 shows the changes in relative accuracy under the four different accuracy measure definitions for the listed decision strategies as the number of attributes increases, in the case that each MADP has 1,000 alternatives. In all cases, WADD is the baseline strategy; thus it achieves 100% accuracy. When measured by the selection of nondominated alternatives (RA_NDA), the relative accuracy of each heuristic strategy increases as the number of attributes increases. This is mainly because the alternatives are more likely to be Pareto optimal when more attributes are involved. Furthermore, the RA_NDA of all strategies reaches 100% when the number of attributes is 20 or 25. This shows that RA_NDA is not able to distinguish the decision errors that occur with the heuristic strategies in the simulated environment. When the accuracy is measured under the definitions of RA_UV, RA_HR, and RA_HR_in_Kbest, the EQW strategy achieves the highest accuracy besides the baseline WADD strategy, and the SAT strategy has the lowest relative accuracy. The four basic heuristic strategies EBA, MCD, LEX, and FRQ are in the middle-level range. The LEX strategy, which has been widely adopted in many consumer decision support systems, is the least accurate among the EBA, FRQ, and MCD strategies when there are over 10 attributes. When the accuracy is measured by RA_UV, the EQW strategy gains over 90% relative accuracy, yet it achieves less than 50% relative accuracy when measured by RA_HR. This comparison generally suggests that most of the decision results given by the EQW strategy may be very close to a user's target choice (which is determined by the WADD strategy), but are not identical. Also, in all cases, the accuracy measured by RA_HR_in_Kbest (where K=5 in the experiment) is always higher than that measured by RA_HR (which is the special case of RA_HR_in_Kbest with K=1). This shows that under this definition, the chance of the final target choice appearing in a K-item list is higher when K is larger. Of particular interest is that the proposed C4 strategy, which is a combination of the four basic strategies, achieves a much higher accuracy than any of them alone. For instance, when there are 10 attributes and 1,000 alternatives in the MADPs, the relative accuracy of the C4 strategy exceeds the average accuracy of the four basic strategies by over 27% when the definition of RA_HR_in_Kbest is adopted.

Figure 2. The relative accuracy of various decision strategies when solving MADPs with different numbers of attributes, where m (the number of alternatives) = 1,000

Figure 3 shows the relationship between relative accuracy and the number of alternatives (i.e., the number of available products in a catalog) for the listed decision strategies. When the accuracy is measured by the selection of nondominated alternatives (RA_NDA), all strategies except SAT gain nearly 100% relative accuracy without significant differences. This generally shows that RA_NDA is not a good definition of accuracy measurement in the simulated environment. When the accuracy is measured by utility values (RA_UV), the accuracy of the heuristic strategies remains stable as the number of alternatives increases. Under the definitions of hit ratio (RA_HR) and hit ratio in K-best items (RA_HR_in_Kbest), however, the heuristic strategies descend sharply into a lower range of accuracies as the size of the catalog increases. This corresponds to the fact that consumers have increasing difficulty finding the best product as the number of alternatives in the catalog increases. The C4 strategy, though its accuracy decreases as the number of alternatives increases, still maintains a considerably higher relative accuracy than the EBA, MCD, LEX, and FRQ strategies under the accuracy definitions of RA_HR and RA_HR_in_Kbest.

The effect of the number of attributes on elicitation effort for each strategy is shown in Figure 4. As we can see, the elicitation effort of the heuristic strategies increases much more slowly than that of the WADD strategy as the number of attributes increases. For instance, when the number of attributes is 20, the elicitation effort of the FRQ strategy is only about 25% of that of the WADD strategy. The FRQ and SAT strategies require the same level of elicitation effort, since both require the decision maker to input a cutoff value for each attribute. Except for the MCD strategy, which requires no elicitation effort in the simulation environment, the LEX strategy requires the least elicitation effort in all cases among the listed strategies. The combined C4 strategy, which shares preferences among its four underlying basic strategies, requires only slightly higher elicitation effort than the EBA strategy.

Figure 5 shows the relationship between elicitation effort and the number of alternatives for each strategy. As the number of alternatives increases exponentially, the level of elicitation effort for the WADD, EQW, MCD, SAT, and FRQ strategies remains unchanged. This shows that the elicitation effort of these strategies is unrelated to the number of alternatives that a decision problem may have. For the LEX, EBA, and C4 strategies, the elicitation effort increases slowly as the number of alternatives increases. As a whole, Figure 5 shows that the elicitation effort of the studied decision strategies is quite robust to the number of alternatives that a decision problem has.

Figure 3. The relative accuracy of various decision strategies when solving MADPs with different numbers of alternatives, where n (the number of attributes) = 10

A combined study of Figures 2 through 5 leads to some interesting conclusions. For each category of MADPs, some decision strategies, such as WADD and EQW, gain relatively high decision accuracy at the cost of proportionally high elicitation effort. Other decision strategies, especially C4, MCD, EBA, FRQ, and LEX, achieve a reasonable level of accuracy with much lower elicitation effort compared to the baseline WADD strategy. Figure 6 illustrates the relationship between elicitation effort and RA_HR_in_Kbest for the various strategies when solving different scales of decision problems. For the MADPs with 5 attributes and 100 alternatives, the MCD strategy achieves around 35% relative accuracy without any elicitation effort. The C4 strategy, in particular, achieves over 70% relative accuracy while requiring only about 45% of the elicitation effort of the WADD strategy.

Figure 4. The elicitation effort of various decision strategies when solving MADPs with different numbers of attributes, where m (the number of alternatives) = 1,000

Figure 5. The elicitation effort of various decision strategies when solving MADPs with different numbers of alternatives, where n (the number of attributes) = 10


For all the decision strategies we have studied here, we say that a decision strategy S is dominated if and only if there is another strategy S' that has higher relative accuracy and lower cognitive and elicitation effort than S for the decision problem. Figure 6 shows that when the MADPs have 10 attributes and 1,000 alternatives, WADD, EQW, C4, and MCD are nondominated approaches. However, for a smaller scale of MADPs (5 attributes and 100 alternatives), only the WADD, C4, and MCD strategies have the possibility of being the optimal strategy. This figure also shows that if the user's goal is to make decisions as accurately as possible, WADD is the best strategy among those listed; if the decision maker's goal is instead to reach reasonable accuracy with limited elicitation effort, then the C4 strategy may be the best option.

DISCUSSION

The simulation results suggest that the tradeoff between decision accuracy and elicitation effort is the most important design consideration for inventing high-performance CDSSs. That is, while advanced tools are desirable, we must not ignore the effort that users are required to make when stating their preferences.

To show how this framework can provide insights for improving the user interfaces of existing CDSSs, we have demonstrated the evaluation of the simplest decision strategies: WADD, EQW, LEX, EBA, FRQ, MCD, and SAT (Payne et al., 1993). The performance of these strategies was measured quantitatively in the proposed simulation environment within the extended effort-accuracy framework. Since the underlying decision strategy determines how a user interacts with a CDSS (preference elicitation and result processing), the performance data allowed us to discover better decision strategies and eliminate suboptimal ones. In this sense, our work provides a new design method for developing user interfaces for consumer decision support systems.

Figure 6. Elicitation effort/relative accuracy trade-offs of various decision strategies

For example, LEX is the underlying decision strategy used in the ranked list interface that many e-commerce Web sites employ (Pu & Kumar, 2004). However, our simulation results show that LEX produces relatively low decision accuracy, especially as products become more complex. On the other hand, a hybrid decision strategy, C4, based on combinations of LEX, EBA, MCD, and FRQ, was found to be more effective. Combining LEX and EBA, for example, we can derive an interface that looks like SmartClient. EBA (elimination by aspects) corresponds to eliciting constraints from users, and this feature was implemented as a constraint problem-solving engine in SmartClient (Torrens, Faltings, & Pu, 2002). After users have narrowed the product space with preference constraints, they can use the LEX strategy (ranked list) to further examine the remaining items. Even though this hybrid strategy does not include any interface features for trade-off navigation, the simulation results are still consistent with our earlier empirical work on evaluating CDSSs with real users (Pu & Chen, 2005; Pu & Kumar, 2004). That is, advanced tools such as SmartClient can achieve higher accuracy while requiring users to expend only slightly more cognitive and elicitation effort than the basic strategies they contain.

The strongest implication of the simulation results is that we will be able to efficiently evaluate more advanced tools and compare them in terms of effort and accuracy. Our plan for the future therefore includes evaluating SmartClient (Pu & Faltings, 2000) and its trade-off feature, FindMe (Burke et al., 1997), Dynamic Critiquing (Reilly et al., 2004), and Scoring Trees (Stolze, 2000), and comparing their strengths and weaknesses. We will also perform more simulations using different values of K with the accuracy definition RA_HR_in_Kbest. Because K is the size of the result set displayed in each user interaction cycle, we hope to gain more understanding of the display strategy used for CDSSs.

Finally, we emphasize that the simulation results need to be interpreted with some caution. First of all, the elicitation effort is measured by approximation: as mentioned earlier, we assumed that each EEP requires an equal amount of effort from the users, and it is currently unknown how strongly this approximation affects the simulation results. In addition, when measuring decision accuracy, the WADD strategy is chosen as the baseline under the assumption that it commits no error, which is not the case in reality. Moreover, as the MADPs in the simulation experiments are generated randomly, there is a potential gap between the simulated MADPs and the product catalogs of real applications. We are currently addressing these limitations and fine-tuning some of the assumptions against real user behaviors.

CONCLUSION

The acceptance of an e-commerce site by consumers strongly depends on the quality of the tools it provides to help consumers reach a decision that makes them confident enough to purchase. Evaluating these consumer decision support tools on real users has made it difficult to compare their characteristics in a controlled environment, thus slowing down the optimization of the interface design of such tools. In this paper, we described a simulation environment for evaluating the performance of CDSSs more efficiently. In this environment, we can simulate the underlying decision support approach of the system based on the consumers' preferences and the product catalog information that the system may have. The decision results can then be evaluated quantitatively in terms of decision accuracy, elicitation effort, and cognitive effort, as described by the extended effort-accuracy framework. More importantly, we were able to discover new decision strategies that led to interface improvements for existing CDSSs. Even though this is only a first step, we hope to be able to evaluate and design new user interfaces for high-performance CDSSs, and to forecast users' acceptance of a new interface based on benefits such as effort and accuracy.

ACKNOWLEDGMENTS

Funding for this research was provided by the Swiss National Science Foundation under grant 200020-103490.

REFERENCES

Boutilier, C., Patrascu, R., Poupart, P., & Schuurmans, D. (2005). Regret-based utility elicitation in constraint-based decision problems. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI'05), Edinburgh, Scotland, UK (pp. 929–934).


Burke, R. D., Hammond, K. J., & Young, B. C. (1997). The FindMe approach to assisted browsing. IEEE Expert, 12(4), 32–40.

Grether, D. M., & Wilde, L. L. (1983). Consumer choice and information: New experimental evidence. Information Economics and Policy, 1, 115–144.

Keeney, R. L., & Raiffa, H. (1993). Decisions with multiple objectives: Preferences and value tradeoffs. Cambridge, UK: Cambridge University Press.

Luce, M. F., Payne, J. W., & Bettman, J. R. (1999). Emotional trade-off difficulty and choice. Journal of Marketing Research, 36(2), 143–159.

Newell, A., & Simon, H. A. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall.

Palmer, J. W. (1997). Retailing on the WWW: The use of electronic product catalogs. International Journal of Electronic Markets, 7(3), 6–9.

Payne, J. W., Bettman, J. R., & Johnson, E. J. (1993). The adaptive decision maker. Cambridge, UK: Cambridge University Press.

Pu, P., & Chen, L. (2005). Integrating tradeoff support in product search tools for e-commerce sites. In Proceedings of the ACM Conference on Electronic Commerce (EC'05), Vancouver, Canada (pp. 269–278).

Pu, P., & Faltings, B. (2000). Enriching buyers' experiences: The SmartClient approach. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, The Hague, The Netherlands (pp. 289–296).

Pu, P., & Kumar, P. (2004). Evaluating example-based search tools. In Proceedings of the ACM Conference on Electronic Commerce (EC'04), New York (pp. 208–217).

Reilly, J., McCarthy, K., McGinty, L., & Smyth, B. (2004). Dynamic critiquing. In Proceedings of the 7th European Conference on Case-Based Reasoning (ECCBR 2004), Madrid, Spain (pp. 763–777).

Reilly, J., McCarthy, K., McGinty, L., & Smyth, B. (2005). Incremental critiquing. Knowledge-Based Systems, 18(2-3), 143–151.

Shearin, S., & Lieberman, H. (2001). Intelligent profiling by example. In Proceedings of the International Conference on Intelligent User Interfaces (IUI'01), Santa Fe, New Mexico (pp. 145–151).

Shimazu, H. (2001). ExpertClerk: Navigating shoppers' buying process with the combination of asking and proposing. In Proceedings of the 17th International Joint Conference on Artificial Intelligence (IJCAI'01), Seattle, Washington (pp. 1443–1448).

Stolze, M. (1999). Comparative study of analytical product selection support mechanisms. In Proceedings of INTERACT 99, Edinburgh, UK (pp. 45–53).

Stolze, M. (2000). Soft navigation in electronic product catalogs. International Journal on Digital Libraries, 3(1), 60–66.

Torrens, M. (2002). Scalable intelligent electronic catalogs. Doctoral dissertation no. 2690, Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland.

Torrens, M., Faltings, B., & Pu, P. (2002). Smart clients: Constraint satisfaction as a paradigm for scaleable intelligent information systems. CONSTRAINTS: An International Journal, 7(1), 49–69.

Viappiani, P., Faltings, B., Schickel-Zuber, V., & Pu, P. (2005). Stimulating preference expression using suggestions. In Workshop Notes of the IJCAI-05 Multidisciplinary Workshop on Advances in Preference Handling, Edinburgh, UK (pp. 186–191).

Zhang, J., & Pu, P. (2005). Effort and accuracy analysis of choice strategies for electronic product catalogs. In Proceedings of the 20th ACM Symposium on Applied Computing (SAC-2005), Santa Fe, New Mexico (pp. 808–814).

ENDNOTES

1 Search tools and consumer decision support systems are two terms used interchangeably throughout this article, although the latter is considered to be more advanced in terms of its implementation and interface features.


2 It is also known as the multicriteria decision making (MCDM) problem (Keeney & Raiffa, 1993). Our definition emphasizes the term attribute, which is an objective aspect of products, not related to the decision maker's preferences.

3 Though this assumption is obviously imprecise, further studies that assigned different effort weightings to the various EIPs showed that the key relationships between the decision strategies and the decision environments were largely unchanged. See page 137 of Payne et al. (1993).

4 We assume that the actions are in their basic forms only. For example, the FILLIN operation is not allowed to elicit more than one value, or even an expression; otherwise, a usability issue would arise.

5 The detailed analysis is given on pages 80–81 of Payne et al. (1993). This example assumes that the values of all attributes are numeric and consistent with the decision maker's preferences.

6 The procedure for assessing component value functions with midvalue points is introduced on page 120 of Keeney and Raiffa (1993).

Mr. Jiyong Zhang ([email protected]) obtained both his BS (1999) and MS (2001) in computer science from Tsinghua University, Beijing, China. Currently, he is a PhD student and research assistant in the Human Computer Interaction Group, School of Computer and Communication Sciences, Swiss Federal Institute of Technology at Lausanne (EPFL), Switzerland. His research interests include intelligent user interfaces, automatic decision making, constraint-based problem solving, preference modeling and handling, and e-commerce technologies.

Dr. Pearl Pu ([email protected]) is currently a research scientist and director of the HCI Group in the School of Computer and Communication Sciences at the Swiss Federal Institute of Technology in Lausanne (EPFL). She obtained her master's and PhD degrees from the University of Pennsylvania in artificial intelligence and computer graphics. She was a visiting scholar at Stanford University in 2001, in both the database and HCI groups. She was also a co-founder of Iconomic Systems (1997-2001) and invented the any-criteria search method for finding configurable and multi-attribute products in heterogeneous electronic catalogs. Her recent research activities are in the areas of decision support systems, information visualization, query optimization for digital libraries, scalable user experience, social navigation, and advanced display techniques.