
V. "Seenu" Srinivasan Stanford University

I am delighted to be the editor for this volume, Conjoint Analysis: The Pioneering Years, honoring Paul Green's pioneering contributions to conjoint analysis.

I first met Paul in 1970 when I was a doctoral student at Carnegie Mellon University's Graduate School of Industrial Administration (now the Tepper School of Business). Paul Green and Vithala Rao were presenting a paper showing that the seemingly incorrect treatment of rank order data as if they were interval-scaled surprisingly produced excellent results in multidimensional scaling (MDS). At around the same time, Allan Shocker, a fellow doctoral student of mine, and I were working on LINMAP, a linear programming method for estimating the parameters of a multiattribute preference model from paired comparison data.

When I took my first appointment as an assistant professor at the University of Rochester and was teaching a marketing research course, I invited Paul to give a talk to my class, an invitation he graciously accepted. We interacted during that visit as well as during a doctoral consortium at Wharton where both of us presented our research. That is when Paul asked me whether I would be interested in working with him on a review paper on conjoint analysis. I very much enjoyed my interaction with Paul. There was only one problem. When I sent a revised draft to Paul, he would revise it and send it back to me within three days; when it was my turn, it would take me more than a month. I wish I had learned from Paul the secret to his efficiency and productivity.

The review paper on conjoint analysis was published in the Journal of Consumer Research in 1978. We introduced the term conjoint analysis for the first time in that paper and distinguished it from conjoint measurement, in which mathematical psychologists were primarily concerned with the conditions under which there exist measurement scales for both the independent and dependent variables given the order of the joint effects. The aim of conjoint analysis, however, was more practical: finding specific numerical values for part-worth utilities, "assuming" that a particular composition rule (often the additive model) applies, possibly with some error. The latter was much more useful in marketing for product and pricing issues. We also brought together some disparate research streams in that paper: Paul's own earlier research on conjoint measurement, my work on LINMAP, and Richard Johnson's work on trade-off analysis. We also provided a list of the steps involved in conducting a conjoint analysis study and showed how one can mix and match different approaches to the individual steps. Little did we know at that time that this paper would turn out to be the most cited paper by either of us (3200 citations at the beginning of 2016, as per Google Scholar).

Our friendship grew during 1979 when I was visiting the European Institute for Advanced Studies in Management in Brussels, Belgium. We jointly taught a course in Brussels for European academics on conjoint analysis and MDS. We went on to write a second review paper on conjoint analysis, published in the Journal of Marketing in 1990.

Paul Green was a giant in the field of marketing research, and there was no close second. My first introduction to marketing research was by reading his book, Research for Marketing Decisions, coauthored with Donald Tull. Paul's contributions to the field of marketing research are immense, ranging from Bayesian analysis of marketing research decisions to MDS and, most importantly, conjoint analysis. As of 2015, there are more than 18,000 commercial applications per year of conjoint analysis all over the world.

I am grateful to have had Paul Green as my coauthor and friend.

Terminology

In reading my introduction to this volume, it would be useful to be familiar with the terminology used. Conjoint analysis models consumers' (or respondents') overall preferences for products described in terms of multiple "attributes" (also referred to as "factors" or "features"). The word product is to be broadly interpreted to mean both products and services. Each attribute has "levels" (e.g., $100, $200, and $300 are levels for the price attribute; different brand names are levels for the brand attribute). The objective is to estimate "utility values" (also referred to as "part-worths") for each level of each attribute. The overall preference (or utility) for a product is the sum of the utilities for that product's levels on each of the attributes. Given a set of product options, the consumer is predicted to choose the product with the highest overall preference.
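To make the additive rule concrete, here is a minimal Python sketch; the attributes, levels, and part-worth numbers are hypothetical illustrations, not values from any study in this volume.

```python
# Minimal sketch of the additive part-worth model described above.
# All attribute names and part-worth values are invented for illustration.
part_worths = {
    "brand": {"Alpha": 0.8, "Beta": 0.3, "Gamma": 0.0},
    "price": {"$100": 1.0, "$200": 0.4, "$300": 0.0},
}

def overall_utility(product):
    """Overall preference = sum of the part-worths of the product's levels."""
    return sum(part_worths[attr][level] for attr, level in product.items())

# Given a set of options, the consumer is predicted to pick the highest-utility one.
options = [
    {"brand": "Alpha", "price": "$300"},
    {"brand": "Beta", "price": "$100"},
]
predicted_choice = max(options, key=overall_utility)
print(predicted_choice)  # {'brand': 'Beta', 'price': '$100'} (0.3 + 1.0 = 1.3)
```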

The input data for conjoint analysis are consumers' overall preferences (ratings, rankings, or choices) for a set of hypothetical multiattribute options. In the "full-profile" method, each hypothetical option is profiled on all the attributes. In the "trade-off" method of data collection, the respondent ranks all possible combinations of levels of two attributes at a time and then goes on to other pairs of attributes. In choice-based conjoint (CBC), the respondent repeatedly chooses a product from a set of (say) three products, and this task is repeated for multiple sets. Data are analyzed by different statistical methods, for example, dummy variable multiple regression when overall preference ratings are used, linear programming when overall preference rankings are used, and the multinomial logit model estimated by the maximum likelihood method when choice-based data are used.

In providing a brief summary of each of the chapters in this volume, I have sometimes added editorial comments from the current (2016) perspective.

Conjoint Analysis Basics

Conjoint Measurement for Quantifying Judgmental Data

This chapter by Paul Green and Vithala R. Rao (1971) introduced "conjoint measurement" to market researchers in 1971. (The current name "conjoint analysis" was coined by Paul Green and V. "Seenu" Srinivasan seven years later.) A canonical example of conjoint analysis as practiced in the commercial world appears as Problem 2 in the chapter. The true part-worths for the levels of each of the attributes were assumed, and the paper shows that the derived values obtained using Kruskal's (1965) MONANOVA algorithm reproduce the originally assumed values very closely. This example is a proof of the concept that even though we know only the rank order of overall preferences, we are able to obtain the underlying part-worth utilities measured on an interval scale.

Indeed, the fascination in the early 1970s was the ability to obtain interval-scaled values of the part-worth utilities of the attributes even though only the rank order of the overall preferences was known. A similar fascination at that time was with nonmetric MDS (i.e., the ability to locate stimuli on a map based merely on the rank order of distances between stimuli). It is Kruskal's MONANOVA algorithm that caught Paul Green's attention; he could see the enormous potential the method had in the practical world of marketing (see the comments by Vithala Rao in this volume in this connection). The motivation was not the foundational paper by Luce and Tukey on conjoint measurement, which was more about the conditions under which the decomposition can be accomplished. Paul Green was more interested in applied marketing than in theoretical foundations.
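The proof of concept is easy to replicate in miniature. The sketch below assumes "true" part-worths, keeps only the rank order of the resulting overall preferences, and recovers interval-scaled estimates. Plain least squares on the ranks stands in for MONANOVA's monotone-regression step, so this is an illustrative shortcut, not the chapter's algorithm.

```python
import numpy as np
from itertools import product as cartesian

# Assume "true" part-worths, keep only the RANK ORDER of overall
# preferences, and see how well interval-scaled part-worths are recovered.
true_pw = [np.array([0.0, 0.5, 1.2]),   # attribute 1 part-worths (assumed)
           np.array([0.0, 0.9, 1.1]),   # attribute 2
           np.array([0.0, 0.3, 0.7])]   # attribute 3
profiles = np.array(list(cartesian(range(3), repeat=3)))  # 27 full profiles
utility = np.array([sum(pw[l] for pw, l in zip(true_pw, p)) for p in profiles])
ranks = utility.argsort().argsort().astype(float)         # rank order only

# Dummy-code the profiles (first level of each attribute is the baseline).
X = np.ones((len(profiles), 1 + 3 * 2))
for i, p in enumerate(profiles):
    for a in range(3):
        for l in (1, 2):
            X[i, 1 + a * 2 + (l - 1)] = 1.0 if p[a] == l else 0.0

# OLS on the ranks (a stand-in for the monotone-regression step).
beta, *_ = np.linalg.lstsq(X, ranks, rcond=None)
est = beta[1:].reshape(3, 2)
true = np.array([pw[1:] for pw in true_pw])
corr = np.corrcoef(est.ravel(), true.ravel())[0, 1]
print(f"correlation between true and recovered part-worths: {corr:.3f}")
```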

There are several firsts in this chapter relating to conjoint analysis:

1. Emphasis on the additive model: " ... the additive case seems to have proven quite useful in a variety of applications .... " (p. 356)

2. The recognition that the additive model may not be sufficient: " ... bona fide interactions may be present." (p. 359)

3. Mention of fractional factorial designs (p. 360) as a way of getting around the problem of many attributes with many levels, in which case ranking all possible combinations would be formidable.

4. Recognition that additive conjoint analysis is a "monotone analog of main-effects analysis of variance." (p. 357)


Subjective Evaluation Models and Conjoint Measurement

In this chapter, an empirical research article, Green, Carmone, and Wind (1972) collected data from consumers on their preferences for discount cards. The authors compared the following procedures: (a) Kruskal's MONANOVA algorithm, which fits a monotone transformation of the overall rank order preference to part-worth (nonlinear) functions of each of the three attributes; (b) Carroll and Chang's PREFMAP monotone multiple regression, where the best fitting monotone transformation of rank order overall preferences is related "linearly" to the three independent variables; and (c) a self-explicated procedure, where the respondent specified the importance of each of the three attributes. The weighted sum of the three attributes (a linear model) was correlated (Spearman rank order correlation) with the rank order overall preference.

The more general MONANOVA algorithm did not fit the data well for 21 of the 43 respondents (a stress measure greater than 0.1). The PREFMAP algorithm, on the other hand, had an average multiple correlation (between the monotonically transformed overall preference and the overall preference predicted by the model) of 0.976. The relative performance of the two methods is surprising because the MONANOVA algorithm is more general than the PREFMAP model; that is, MONANOVA allows the part-worth function of each attribute to be nonlinear. The self-explicated procedure produced a moderately high average Spearman rank order correlation of 0.849.

New Way to Measure Consumers' Judgments

In this managerially oriented chapter, Paul Green and Yoram (Jerry) Wind (1975) illustrate conjoint measurement with a "carpet spot remover" example, with utility functions estimated by MONANOVA. The authors provide vignettes of several practical applications, for example, air carriers, replacement tires, and bar soaps. They also show that conjoint data could be collected two attributes at a time, the trade-off table approach suggested by Richard Johnson. The chapter also introduces the idea of computer simulations as a way of answering important managerial questions regarding product and pricing. This chapter popularized conjoint measurement to the broader business world and was frequently used in marketing courses in MBA programs.

Factorial Designs for Conjoint Analysis

On the Design of Choice Experiments Involving Multifactor Alternatives

The main idea of conjoint measurement is its ability to recover utility (part-worth) functions for each attribute from the overall preference judgments of a respondent on multiattributed alternatives (with each attribute having multiple levels). This poses a practical problem for the market researcher. Consider, for example, a product category with six attributes, with each attribute having three levels. There are 3⁶ = 729 possible product combinations. No respondent would be patient enough to evaluate that many combinations. The chapter introduced the notion of an orthogonal array from the experimental design (statistics) literature. For the example problem above, it is possible to construct an 18-stimuli configuration and estimate the "main effect" part-worth functions under the assumption that interaction effects are minimal.
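A quick parameter count shows why 18 carefully chosen profiles can be enough; the arithmetic below is a sketch of the logic, not a construction of the design itself.

```python
# Back-of-the-envelope sketch of why 18 profiles can suffice here.
n_attributes, n_levels = 6, 3

full_factorial = n_levels ** n_attributes          # 3**6 = 729 profiles
# A main-effects-only model needs one intercept plus (levels - 1)
# free part-worths per attribute:
n_parameters = 1 + n_attributes * (n_levels - 1)   # 13

print(full_factorial, n_parameters)  # 729 13
# An 18-run orthogonal array (e.g., the standard L18) has 18 >= 13 rows
# with mutually orthogonal attribute columns, so all main effects are
# estimable without showing respondents anything close to 729 profiles.
```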

The chapter also discusses other designs such as incomplete block designs, where each respondent can do a card sort by reducing the number of attributes in each block but needs to evaluate multiple blocks containing different subsets of attributes. The use of multiple blocks increases the overall workload on the respondent. In some studies where respondents can be grouped into a segment, different respondents can evaluate different blocks so that a common utility function is estimated at the group level.

The main contribution of the chapter is to offer the orthogonal array idea to make conjoint measurement more applicable to real-world studies. Orthogonal arrays and other fractional factorial designs are routinely used in conjoint studies.

Superordinate Factorial Designs in the Analysis of Consumer Judgments

Green, Carroll, and Carmone (1976) propose a new type of preference model in which consumers' overall preference judgment for products can be thought of as resulting from both the underlying attributes of the product (as in conjoint measurement) and the underlying attributes of the consumer (e.g., gender, amount of use of the product category). They use a new statistical technique called CANDELINC (Canonical Decomposition under Linear Constraints). They illustrate the use of the technique in the context of preferences for car rental services.

This chapter also set the stage for componential segmentation by Green and DeSarbo (a later chapter in this volume), in which across-respondent differences get captured by respondent variables (e.g., demographic) via interaction effects between product attributes and respondent variables. The basic idea in this chapter gets utilized in current approaches to capturing customer heterogeneity in conjoint analysis through hierarchical Bayes models. For instance, the average part-worth utility for a feature may be different for men versus women, heavy users versus light users of the product category, and so on.

Some New Types of Fractional Factorial Designs for Marketing Experiments

The normal assumption made in conjoint analysis is that there are no interaction effects. In other words, the effect of one attribute on overall preference does not depend on the values of the other attributes. This is often an uncomfortable assumption because it is likely that there are some interaction effects. What is worse, we do not have any means for testing the validity of the assumption, because often the design (i.e., the particular product profiles shown to the respondent) is such that interaction effects are aliased with main effects. Stated differently, what is measured as one attribute's main effect is not merely that; it includes some interaction effects as well.

The best way out of this dilemma is to use a full factorial design, that is, let the respondent evaluate all possible product profiles. This is often impractical; for instance, even with six two-level factors the respondent needs to evaluate 2⁶ = 64 product profiles. This highly technical chapter suggests some alternatives to tackle the problem.

For instance, Resolution V designs can estimate all main effects and all two-way interaction effects without their being aliased with any other main effects or two-way interaction effects. In other words, we need to make only the weaker assumption that all three-way and higher interaction effects are negligible. It turns out that Resolution V designs with only 16 profiles are available for five two-level factors. For six two-level factors, we need 32 profiles, which is often impractical for respondents. The number of profiles needed would go up if factors have more than two levels.

A more practical alternative is Resolution IV designs, in which no main effect is aliased with any other main effect or two-factor interactions, but the two-factor interactions may be aliased among themselves. The chapter describes how to go about constructing such designs when all attributes are two-level factors. The chapter also suggests compromise designs that permit the estimation of main effects for all factors and all two-factor interaction effects among a subset of factors, assuming that the two-factor interactions among the remaining attributes are negligible. In order to use a compromise design, we need to have prior information about which subset of attributes contributes only weakly to overall preference. The chapter also describes some other designs and proposes a stage-wise approach to interaction estimation.
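To see what aliasing means concretely, the sketch below builds the classic 2^(3-1) half fraction with defining relation I = ABC and verifies that the main-effect column for A coincides with the BC interaction column. This is my own illustration of the general problem, not an example taken from the chapter.

```python
import numpy as np
from itertools import product as cartesian

# In a 2^(3-1) half fraction with defining relation I = ABC, the column
# for main effect A is numerically identical to the column for the BC
# interaction, so the two are inseparable (Resolution III). The
# Resolution IV and V designs discussed above relax exactly this problem.
full = np.array(list(cartesian([-1, 1], repeat=3)))     # 2^3 = 8 runs
half = full[full[:, 0] * full[:, 1] * full[:, 2] == 1]  # keep runs with ABC = +1

A, B, C = half[:, 0], half[:, 1], half[:, 2]
print(np.array_equal(A, B * C))  # True: A is aliased with BC in this fraction
```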

Conjoint Analysis as an Applied Marketing Tool

Conjoint Analysis in Consumer Research: Issues and Outlook

This chapter first coins the term "conjoint analysis" as any decompositional method that estimates the structure of a consumer's preferences (e.g., part-worths, importances, and ideal points) given his/her overall evaluations of a set of alternatives that are prespecified in terms of levels of different attributes. It distinguishes conjoint analysis from the earlier conjoint measurement, as practiced by mathematical psychologists, regarding the conditions under which there exist measurement scales for both the dependent variable (e.g., overall preference) and independent variables (e.g., levels of different attributes) given the order of the joint effects of the independent variables (e.g., rank order of overall preference of product profiles) and a composition rule (e.g., additive model). Note that conjoint analysis, as applied currently, assumes that the additive model is a good approximation (i.e., there is likely to be some error) rather than testing that assumption, and is primarily concerned with estimating the preference parameters (e.g., part-worth functions). Once the part-worth functions are estimated, there are numerous marketing applications, for example, demographic segmentation, benefit segmentation, and computer simulation of alternative product and pricing possibilities to answer managerially relevant product and pricing issues.

A second contribution of the chapter was to bring together at least three disparate streams of research that preceded it: (a) conjoint measurement as introduced by Green and Rao using the MONANOVA method (in this volume), (b) the LINMAP estimation method of Srinivasan and Shocker, and (c) Johnson's trade-off tables. The chapter brought them together by recognizing that there are different steps in conducting conjoint analysis (e.g., choice of a preference model, measurement scale for the dependent variable, and estimation method) that the three streams of research employed. One can therefore mix and match the different methods in any conjoint study (e.g., data could be collected as a rank order of product profiles as suggested by Green and Rao, but estimated by LINMAP).

The chapter also emphasizes testing the reliability and validity of conjoint analysis studies and discusses applications of conjoint analysis in private and public sectors.

Estimating Choice Probabilities in Multiattribute Decision Making

The authors first introduce the Bradley-Terry-Luce (BTL) model, in which the choice probability for an alternative is its utility (non-negative) divided by the sum of the utilities of all the alternatives. One of the alternatives, in general, would be the status quo alternative. For instance, in considering new smartphone alternatives available in the market, a consumer may choose to stay with his/her current smartphone. As more and more new options are considered, the BTL model would imply that the probability of the status quo alternative would decrease. The authors consider the possibility that the status quo alternative's probability may be understated by the BTL model. They propose a two-stage model. The first stage considers the probability with which one of the new options would be chosen given that the respondent does not choose the status quo alternative. The second stage quantifies the probability of choosing the status quo alternative. (Note that the BTL model collapses the two stages, i.e., the status quo alternative is treated just like any other alternative.) For the second stage of the proposed model, one has to specify how the other alternatives are viewed, "as a whole," relative to the status quo option. The authors consider three alternatives for quantifying the value of the whole: (a) the average value of all the options, (b) the best of all values for the options, and (c) the worst of all values for the options. The two-stage model in this chapter resembles the nested logit model.
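A compact sketch of the two competing probability rules, with invented utilities; the "best value" summary used below is just one of the three options the authors consider.

```python
import numpy as np

# Sketch of the BTL model and the chapter's two-stage variant, with
# made-up utilities. u[0] is the status quo option.
u = np.array([2.0, 1.5, 1.0, 0.5])  # non-negative utilities (assumed)

# BTL: choice probability is utility over the sum of all utilities.
p_btl = u / u.sum()

# Two-stage variant: first quantify P(status quo) by pitting the status
# quo against a single "value of the whole" for the new options (here,
# their best value); then split the remaining probability BTL-style.
whole = u[1:].max()
p_status_quo = u[0] / (u[0] + whole)
p_new = (1 - p_status_quo) * u[1:] / u[1:].sum()

print(np.round(p_btl, 3))                   # [0.4 0.3 0.2 0.1]
print(round(p_status_quo, 3), np.round(p_new, 3))
```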

The authors proceed to test the BTL model and the three two-stage alternatives in the context of MBA students' likelihoods of seeking an interview with potential employers, the status quo option being not to seek any interview. The empirical results indicate that the original BTL model performs the best. (Quantifying the value of the whole set of options by the best of all the options performs second best.)

A General Approach to Product Design Optimization via Conjoint Analysis

One of the values of conjoint analysis is that it can predict the likely market reaction to a new product introduction. This is often done using a simulator. What if we want to find the best new product position described in terms of the attribute values of the new product? The authors propose a methodology called Product Optimization and Selected Segment Evaluation (POSSE). The key idea is to model the output of a simulator, such as profit (based on predicted unit sales, price, and unit cost), as a response function of the new product's multiple attribute values. The "response surface" is a simple function (often a low-order polynomial, such as a quadratic with interaction terms) fitted to the output of the simulator. Once the response function is estimated, the firm's objective can be optimized by numerical methods.
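The following sketch mimics that response-surface idea with a toy two-attribute "simulator": fit a quadratic to sampled simulator runs, then optimize the fitted surface numerically. The function `simulate_profit` and all numbers are stand-ins of my own, not the POSSE system.

```python
import numpy as np
from scipy.optimize import minimize

# Treat the simulator as a black box, fit a quadratic in the new
# product's attribute values to its output, then optimize the fit.
rng = np.random.default_rng(2)

def simulate_profit(x):  # hypothetical stand-in for a conjoint simulator
    return -(x[0] - 0.6) ** 2 - 2 * (x[1] - 0.3) ** 2 + 0.5 * x[0] * x[1]

def quad_features(x):
    x1, x2 = x
    return np.array([1, x1, x2, x1 * x1, x2 * x2, x1 * x2])

X = rng.uniform(0, 1, size=(30, 2))  # simulator runs at sampled designs
y = np.array([simulate_profit(x) for x in X])
beta, *_ = np.linalg.lstsq(np.array([quad_features(x) for x in X]), y, rcond=None)

# Numerically optimize the fitted surface over the feasible attribute box.
res = minimize(lambda x: -beta @ quad_features(x), x0=[0.5, 0.5],
               bounds=[(0, 1), (0, 1)])
print(np.round(res.x, 2))  # near the simulator's true optimum (~[0.70, 0.39])
```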

The POSSE system was designed as a complete system, one that included computer programs for stimulus design, utility estimation, choice simulation, objective function formulation, optimization, sensitivity analysis, and time path forecasting. Furthermore, once the buyers of the optimized new product are profiled in terms of demographic and other product category usage variables, the target segment for marketing purposes is identified. The authors state that some 15 applications of the POSSE methodology had been made by 1981. The authors illustrate the use of POSSE in the context of designing a pickup truck for US consumers for personal (as opposed to commercial) use.

Conjoint Analysis of Item Collections

Benefit Bundle Analysis

How could one use conjoint analysis for advertising purposes? The number of attributes is often too large to be included in advertising messages. (The dictum in advertising is "less is more.") The authors propose starting with the individual-level importances for the attributes and factor analyzing the results. In the illustrative (disguised) application presented in the chapter, a prominent manufacturer of aerosol hard-surface cleaner was interested in identifying the market's benefit segments. On the basis of the factor analysis, the authors identified two benefit segments, one that emphasizes "cleans away deep-down, ground-in soil" and another that emphasizes "protects the floor covering against resoiling."

Preference Measurement of Item Collections

The main point made in this paper is that there are situations where the attributes themselves are items and the alternative "products" consist of various collections of items, that is, commodity bundles. In the illustrative application, the authors consider various menu combinations consisting of an entree and a dessert (two attributes in this case). From the consumers' overall evaluations of the menus, the authors determine the relative values attached to the different entrees and the different desserts. The authors indicate that more elaborate models involving interaction terms might be useful to improve the fits for some of the respondents. (This is the subject of the next paper in the volume.)

A Complementary Model of Consumer Utility for Item Collections

In the entree-dessert combination example, it is likely that some respondents' relative preferences for desserts may depend on what the entree is. In other words, there may be interaction effects. The authors posit the interaction effect to be of a complementary nature, that is, whether the dessert has something in common with the entree. They propose that complementarity can be modeled with a "vector" model where both entrees and desserts are shown as vectors in a smaller dimensional space (e.g., two-dimensional space). Overall preference for an entree-dessert combination is modeled as the vector product (defined as the length of the entree vector times the length of the dessert vector times the cosine of the angle between the two vectors). (Note that the larger the cosine, the closer the two vectors are.) The authors propose the following pragmatic procedure: (a) fit an additive model where the total utility is the utility of the entree plus the utility of the dessert, (b) identify respondents for whom model (a) does not fit well, and (c) for the respondents identified in (b), fit the vector model to the residuals of model (a). In the illustrative example, the additive model did not fit well for 12 of the 27 respondents. For these 12 respondents, the residuals are averaged for each entree-dessert combination and the results are analyzed by the MDPREF computer program. This results in a two-dimensional space in which the entrees and desserts are positioned as vectors. The authors conclude that the vector model appears to do a good job of capturing complementarities.
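Since the vector product |e||d|cos θ is just the dot product of the two vectors, the model is easy to illustrate; the coordinates below are invented, not taken from the chapter.

```python
import numpy as np

# Sketch of the vector model for complementarity: entrees and desserts
# are vectors in a low-dimensional space, and the interaction utility of
# a pair is |e| * |d| * cos(angle), i.e., simply their dot product.
entrees = {"steak": np.array([1.2, 0.2]), "fish": np.array([0.3, 1.0])}
desserts = {"cake": np.array([0.9, 0.1]), "sorbet": np.array([0.1, 0.8])}

for e_name, e in entrees.items():
    for d_name, d in desserts.items():
        # |e||d|cos(theta) is exactly the dot product e . d
        print(f"{e_name} + {d_name}: {e @ d:.2f}")
# steak pairs best with cake, fish with sorbet: aligned vectors
# (large cosine) mean complementary items.
```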

Extensions of Conjoint Analysis

An Interaction Model of Consumer Utility

This chapter can be thought of as a generalization of the previous chapters in this volume. Staying with the menu example, the authors consider the overall preference for a menu of r items, such as entrees, potatoes, vegetables, and salads, as consisting of main effects plus two-way interaction effects. The authors propose a two-step data collection: (a) the respondent is shown all possible two-component menus representing all distinct combinations of the r factors taken two at a time and is asked to evaluate each two-component menu; for example, the options for entrees and potatoes are shown as the rows and columns of a table, and the respondent is asked to rank all the cells in the table; (b) the respondent is shown an orthogonal array (a fractional factorial) in which each alternative is a complete menu consisting of options for all r items. The data in (a) are analyzed by MONANOVA, followed by a computation of residuals from the best fitting monotone transformation. The data in (b) are also analyzed by MONANOVA to provide r utility scales in a "common unit." The common unit is then used to adjust the residuals of each of the two-factor tables so that all possible two-way interactions are also in the same common unit. Each set of interaction tables, at the group level only, is then decomposed by MDPREF to provide graphical assistance in interpreting the interactions.

Incorporating Group-Level Similarity Judgments in Conjoint Analysis

The authors make the reasonable assumption that perceptions are more homogeneous than preferences. Based on the above assumption, they construct a common perceptual space for all respondents, but allow the preferences of respondents to be heterogeneous. They illustrate their approach in the context of consumers' perceptions of and preferences for vacation sites. In phase 1 of the data collection, respondents rank seven vacation sites in order of overall preference. In phase 2, each respondent rates each site on each of the six attributes. The ratings were ordered categories, for example, superb, good, or fair. They utilize the canonical correlation technique to come up with a joint space (in this example, a two-dimensional space) of the vacation sites (points in the space) and the attributes (vectors in the space).

In phase 3 of the data collection, the authors conduct a conjoint analysis of the respondents' overall preference ratings of 18 hypothetical vacation sites. The profiles were constructed using the same six attributes (at three levels each; see phase 2) by an orthogonal main-effects plan. The overall preference ratings were analyzed by the dummy variable regression technique to estimate the part-worth functions for each of the attributes for each of the respondents (preference heterogeneity).

In order to evaluate how much any particular respondent would prefer a vacation site, the authors use the utility function estimated by conjoint analysis together with the perception data from phase 2, for example, what percentage of respondents rated the site as superb, good, or fair on each of the attributes. The respondent is assumed to use these percentages as probabilities of what the attribute level is for the vacation site.
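In other words, the predicted utility of a real site is an expectation of the conjoint part-worths under the perception percentages. A minimal sketch, with invented attributes and numbers:

```python
# The respondent's conjoint part-worths are combined with group-level
# perception percentages, treated as probabilities of the site's level
# on each attribute. All names and numbers below are invented.
part_worths = {
    "scenery": {"superb": 1.0, "good": 0.5, "fair": 0.0},
    "cost":    {"superb": 0.8, "good": 0.4, "fair": 0.0},
}
# Share of respondents rating this site superb/good/fair on each attribute.
perceptions = {
    "scenery": {"superb": 0.6, "good": 0.3, "fair": 0.1},
    "cost":    {"superb": 0.2, "good": 0.5, "fair": 0.3},
}

expected_utility = sum(
    sum(prob * part_worths[attr][lvl] for lvl, prob in dist.items())
    for attr, dist in perceptions.items()
)
print(round(expected_utility, 3))  # 0.75 + 0.36 = 1.11
```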

Componential Segmentation in the Analysis of Consumer Trade-Offs

The authors point out that the average utility functions (part-worths) estimated from a conjoint analysis are likely to differ across respondents described by different variables (e.g., demographic, psychographic, situational, and product usage variables). The componential segmentation model explicitly recognizes and estimates those differences.

The model is estimated by first estimating an average utility function across all respondents. The next set of variables to be tested in the hierarchy is the interactions between product attributes and respondent variables. This is done by a sequential procedure. Each respondent variable's interactions are tested separately by means of a models comparison test. The respondent variable whose interactions with the full set of product attributes yield the highest incremental R² is included in the model. The remaining respondent variables are then crossed with the product attribute set, and the respondent variable with the highest incremental R² is chosen. The procedure continues sequentially until further testing leads to no significant interactions. The main effects of the respondent variables are then included in the model. Having settled on the set of predictor variables that comprise the model, the last step in the approach is to estimate their regression coefficients simultaneously in a single, overall regression.

The authors recognize the limitations of their step-wise regression-like procedure and recommend cross-validation on a hold-out sample. They illustrate the procedure in the context of the preferences of respondents, characterized by several demographic variables, for various kinds of vacation spots.
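A compact sketch of that sequential selection loop, run on simulated data; the 0.01 incremental-R² cutoff stands in for the chapter's significance test, and everything here is invented for illustration.

```python
import numpy as np

# At each step, add the respondent variable whose interactions with the
# product attributes give the largest incremental R^2 (simulated data).
rng = np.random.default_rng(3)
n = 400
attrs = rng.normal(size=(n, 2))   # product-attribute codes
resp = rng.normal(size=(n, 3))    # respondent variables (e.g., age, usage)
y = attrs @ [1.0, 0.5] + 0.8 * attrs[:, 0] * resp[:, 1] + rng.normal(size=n)

def r2(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

X = np.column_stack([np.ones(n), attrs])  # start: attribute main effects
remaining, chosen = list(range(3)), []
while remaining:
    gains = {j: r2(np.column_stack([X, attrs * resp[:, [j]]]), y) - r2(X, y)
             for j in remaining}
    best = max(gains, key=gains.get)
    if gains[best] < 0.01:                # stand-in for a significance test
        break
    X = np.column_stack([X, attrs * resp[:, [best]]])
    chosen.append(best)
    remaining.remove(best)
print(chosen)  # picks respondent variable 1 first, then stops
```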

A Conjoint Model for Measuring Self- and Cross-Price/Demand Relationships

Conjoint choice simulators can derive self- and cross-price demand relationships starting from utility functions based on two attributes: brand name and price. The authors propose a more realistic data collection format in which a price is affixed to each specific brand and the respondent sees all the brands at their specific prices "simultaneously." For instance, if there are four brands, each brand could be at its normal price, or at -10%, -20%, or +10%. (Note that the specific price corresponding to -10%, for example, will be different for different brands because their normal prices are different.) Since each brand can be at any of its own four prices, there are 4⁴ = 256 possible combinations. One could construct a fractional factorial of 16 combinations. In the illustrative consumer nondurable product application in the chapter, the respondents were classified into four groups based on their most recently purchased brand. They were shown the current set of prices and asked to distribute 100 points to reflect the subjective likelihood of selecting each brand on the next purchase. Next, the respondent was shown the same four brands but at a new set of prices drawn from the master design of 16 combinations and asked to repeat the point assignment task. Each respondent received four of the 16 combinations. The data were averaged over respondents for each price condition, for each last-brand-purchased segment. A set of 16 brand-switching matrices, each conditional on a specific set of prices, was thus obtained.

A first-order Markovian model is postulated to represent price-induced brand switching. The price-demand model is then fitted by generalized least squares to the share of choices. The resulting model is then used to determine own- and cross-price demand relationships.

Hybrid Conjoint Analysis: The Early Years

A Hybrid Utility Estimation Model for Conjoint Analysis

This chapter proposes a hybrid conjoint model. It combines two very different approaches to multiattribute utility measurement. One is the self-explicated approach, which asks the respondent to directly rate the desirabilities of the different levels of each attribute on a 1-10 scale and to rate the importance of each of the attributes; the importance of the attribute times the desirability gives the part-worth utility. This approach is easy to use when the number of attributes is large but has lower validity. The conjoint analysis approach is likely to be more valid. On the other hand, the full-profile method of conjoint analysis breaks down when the number of attributes is more than about six, because respondents are unable to pay attention to that many attributes of the correspondingly large number of product profiles required to estimate the utility functions. Respondents tend to resort to simplifying tactics during the data collection task that may not correspond to what they would do in a real buying situation.

The hybrid method starts with the self-explicated approach for each respondent. Instead of giving all the profiles in the design to each respondent, only a small subset of profiles is given to each respondent. This reduces the fatigue of the respondent and makes it more likely to yield valid responses. The self-explicated data are clustered into a few segments. The combined model is an additive combination of individual-level self-explicated part-worths plus segment-level main effects and selected two-way interaction effects, estimated so as to fit the overall profile evaluations of the respondents in that segment. The authors illustrate the approach in the context of a pharmaceutical drug category.
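The sketch below captures the two-layer structure in miniature: individual-level self-explicated utilities plus a pooled segment-level correction estimated from the few profiles each respondent rates. Interaction terms are omitted and all data are simulated, so this is an illustration of the idea rather than the authors' estimation procedure.

```python
import numpy as np

# Hybrid idea in miniature: individual-level self-explicated part-worths
# plus a segment-level correction fitted to the few full profiles each
# respondent actually rated. All quantities are invented.
rng = np.random.default_rng(4)
n_resp, n_profiles_each, n_params = 50, 8, 6

X = rng.integers(0, 2, size=(n_resp, n_profiles_each, n_params)).astype(float)
self_expl = rng.uniform(0, 1, size=(n_resp, n_params))    # importance x desirability
true_adjust = np.array([0.5, -0.3, 0.2, 0.0, 0.4, -0.1])  # segment-level truth

# Each respondent rates only 8 profiles; ratings reflect self-explicated
# utilities plus a common segment-level component.
ratings = np.einsum("rpk,rk->rp", X, self_expl + true_adjust) \
          + rng.normal(0, 0.2, size=(n_resp, n_profiles_each))

# Segment-level step: regress the residual (rating minus self-explicated
# prediction) on the profiles, pooling across respondents in the segment.
resid = (ratings - np.einsum("rpk,rk->rp", X, self_expl)).ravel()
X_pool = X.reshape(-1, n_params)
adjust_hat, *_ = np.linalg.lstsq(X_pool, resid, rcond=None)
print(np.round(adjust_hat, 2))  # close to true_adjust
```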

A Cross-Validation Test of Hybrid Conjoint Models

This chapter provides a cross-validation test of the hybrid conjoint model described in the earlier chapter in this volume. The context was a study of a new household appliance described by seven attributes. Each of the 476 respondents provided self-explicated data and overall evaluations of 32 product profiles. This allowed the calibration of four models: (a) a self-explicated model, (b) a conjoint model based on all the 32 product profiles, (c) a hybrid conjoint model at the total sample level in which only 8 of the 32 profiles rated by each respondent were used, and (d) the same as (c) except that respondents were clustered into three clusters (segments) based on their self-explicated utilities. Respondents' preference rankings of four hold-out profiles collected at the very end of the data collection were used for cross-validation. The validation measure was the average (Kendall's) Tau correlation.

The results showed that the self-explicated model performed the worst (approximately 50% of the Tau correlation of the other models), and the hybrid models were the best, yielding about 10 percent greater Tau correlation compared to the conjoint models.


In all cases, the addition of interaction terms in the hybrid model "decreased" the cross-validity, probably due to overfitting. The segmented hybrid model performed only marginally better than the whole-sample-based hybrid model. An important additional finding is that if we take the self-explicated utilities but simply adjust the average importance of the attributes (across the sample) up or down based on a multiple regression on the profile ratings, the resulting self-explicated utilities yield nearly the same high validity as the full-blown hybrid model. This suggests that the main problem with the self-explicated approach is its measurement of attribute importance.

Hybrid Models for Conjoint Analysis: An Expository Review

In this chapter, a review of hybrid conjoint models, Paul Green describes various approaches to hybrid conjoint modeling by summarizing their principal characteristics:

1. scaling type (ranking, rating, etc.),
2. level of response aggregation (total sample vs. segments; a single weight for self-explicated data vs. different weights for different attributes of self-explicated data),
3. scaling of the utility functions (metric, semi-metric, etc.),
4. simultaneous versus stage-wise parameter estimation,
5. treatment of interactions (cross-products or Tukey's procedure),
6. fitting procedure (OLS, LINMAP, etc.), and
7. profile variation (full profile, partial profile).

Paul Green goes on to describe in detail three cross-validation studies of hybrid conjoint. The first is the one described in the previous chapter (Chapter 19) in this volume. The second is a study by Akaah and Korgaonkar (1983) in the Journal of Marketing Research that found that traditional conjoint predicted better than hybrid conjoint, which, in turn, predicted better than self-explicated. The third cross-validation study, by Cattin, Hermet, and Pioche (1982) in the conference proceedings on "Analytical Approaches to Product and Marketing Planning," found that traditional conjoint predicted better than self-explicated, which, in turn, predicted better than hybrid conjoint. Paul Green concludes this chapter with a call for additional research on hybrid conjoint analysis.

References

Akaah, I. P., & Korgaonkar, P. K. 1983. "An empirical comparison of the predictive validity of self-explicated, Huber-hybrid, traditional conjoint, and hybrid conjoint models." Journal of Marketing Research, 20, 187-197.

Cattin, P., Hermet, G., & Pioche, A. 1982. "Alternative hybrid models for conjoint analysis: Some empirical results." In Analytical Approaches to Product and Marketing Planning: The Second Conference. Cambridge, MA: Marketing Science Institute.

Kruskal, J. B. 1965. "Analysis of factorial experiments by estimating monotone transformations of the data." Journal of the Royal Statistical Society, Series B, 27(2), 251-263.