

There are numerous industries in which experts offer opinions about the quality of products and brands. For example, movie critics make suggestions about a soon-to-be-released movie's artistic and entertainment value, BusinessWeek hosts Robert Parker's column recommending wines, Consumer Reports has long compared brands across numerous product categories, and so forth. In addition, consumers are increasingly posting online evaluations of products and brands; for example, they review books on Amazon.com, movies on Netflix.com, video games on Gamespot.com, and restaurants on Citysearch.com.

Consumers find judgments from both professional critics and amateur communities to be helpful, in part because the sheer number of new products and the frequency of their launches (e.g., weekly releases for movies) can be overwhelming for consumers in the choice process. In addition, many such products appear wholly unique, so a comparison of the movies Terminator Salvation and X-Men Origins: Wolverine or a comparison of the wines Argentinian Malbec and Italian Prosecco is difficult; thus, both critics' and other ordinary consumers' evaluations assist in decision making.

108 / Journal of Marketing, Vol. 74 (January 2010), 108-121

© 2010, American Marketing Association. ISSN: 0022-2429 (print), 1547-7185 (electronic)

    Sangkil Moon, Paul K. Bergey, & Dawn Iacobucci

Dynamic Effects Among Movie Ratings, Movie Revenues, and Viewer Satisfaction

This research investigates how movie ratings from professional critics, amateur communities, and viewers themselves influence key movie performance measures (i.e., movie revenues and new movie ratings). Using movie-level data, the authors find that high early movie revenues enhance subsequent movie ratings. They also find that high advertising spending on movies supported by high ratings maximizes the movies' revenues. Furthermore, they empirically show that sequel movies tend to reap more revenues but receive lower ratings than originals. Using individual viewer-level data, this research highlights how viewers' own viewing and rating histories and movie communities' collective opinions explain viewer satisfaction. The authors find that various aspects of these ratings explain viewers' new movie ratings as a measure of viewer satisfaction, after controlling for movie characteristics. Furthermore, they find that viewers' movie experiences can cause them to become more critical in ratings over time. Finally, they find a U-shaped relationship between viewers' genre preferences and genre-specific movie ratings for heavy viewers.

Keywords: movie ratings, professional critics, amateur communities, movie revenues, consumer satisfaction

Sangkil Moon is Associate Professor of Marketing (e-mail: [email protected]), and Paul K. Bergey is Associate Professor of Information Systems (e-mail: [email protected]), Department of Business Management, North Carolina State University. Dawn Iacobucci is E. Bronson Ingram Professor of Marketing, Owen Graduate School of Management, Vanderbilt University (e-mail: [email protected]). The authors appreciate assistance from Natasha Zhang Foutz, Deepak Sirdeshmukh, and Glenn Voss. They also thank the participants in the marketing seminar at North Carolina State University for their comments on a previous version of this article. Finally, they acknowledge their indebtedness to the three anonymous JM reviewers.

Professional critics commonly provide reviews and ratings; this information signals unobservable product quality and helps consumers make good choices (Boulding and Kirmani 1993; Kirmani and Rao 2000). Although amateur consumers can obtain useful information from critics, they are sometimes at odds with critics because of some fundamental differences between the two groups in terms of experiences and preferences (Chakravarty, Liu, and Mazumdar 2008; Holbrook 1999; Wanderer 1970). Therefore, consumers often seek like-minded amateurs' opinions in various ways.

The recent development and proliferation of online consumer review forums, in which consumers share opinions on products, has had an enormous impact on the dynamics of word of mouth (WOM) by effectively connecting consumers (Chen and Xie 2008; Eliashberg, Elberse, and Leenders 2006; Godes and Mayzlin 2004; Godes et al. 2005; Mayzlin 2006; Trusov, Bucklin, and Pauwels 2009). These online forums lower product information search costs, which motivates consumers to seek such review information (Stigler 1961). After all, consumer communities' collective opinions can have as much influence on other consumers' choices as professional critics' opinions. In addition to these two influence groups, consumers make choices in accordance with their own judgments based on past experiences in the given product category, which can be contrary to opinions from either professional critics or amateur communities. In this sense, consumers are active information processors rather than passive information receivers (Bettman 1970).

To provide a comprehensive evaluation of how product reviews and ratings influence consumers' choices and satisfaction arising from their experiential consumption, we con-


1. Our empirical analysis confirms this strategy as common practice by movie marketers. The correlation between weekly theater revenues (during the previous week) and weekly ad spending (during the current week) is positive and significant and increases over subsequent weeks. This evidence indicates that movie marketers allocate their advertising money according to movies' commercial successes as they adjust distribution intensity (i.e., the number of screens) in response to the weekly theater revenues (Krider et al. 2005).
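The lagged revenue-advertising correlation described in this footnote can be sketched as follows; the weekly series below are hypothetical stand-ins for one movie's panel data, not the authors' dataset.

```python
import numpy as np

# Hypothetical weekly series for one movie: theater revenues ($M) and ad spending ($M)
revenues = np.array([30.0, 18.0, 11.0, 7.0, 4.5, 3.0, 2.0, 1.4])
ad_spend = np.array([9.0, 6.5, 4.0, 2.6, 1.7, 1.1, 0.8, 0.5])

# Correlate the previous week's revenues with the current week's ad spending,
# i.e., shift the advertising series forward by one week before correlating
r = np.corrcoef(revenues[:-1], ad_spend[1:])[0, 1]
print(round(r, 3))
```

A strongly positive lagged correlation, as here, is consistent with marketers chasing the prior week's box office when setting the current week's ad budget.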

2. In general, high-cost movies generate high revenues and profits (though not always), and ratings, or consumers' acceptance of a product, matter. Consider the movie Hotel Rwanda. Its costs and revenues were low ($31 million and $32 million, respectively), but its average ratings were high (9.5/10). In contrast, consider Charlie's Angels: Full Throttle. Its costs ($147 million) were more than its revenues ($102 million), and it received poor ratings (5.7/10).

Note that both factors must be in play; neither is sufficient on its own.2 That is, positive ratings alone cannot effectively increase revenues, because not enough potential viewers know about the movie. In addition, highly advertised movies cannot generate enough revenue without favorable ratings from ordinary moviegoers, because negative WOM spreads more quickly for these types of movies than for others. Yet we anticipate that neither piece of information is sufficient, because the effect is interactive and synergistic. Our theorizing may be consistent with signaling theory, if both sets of signals are calibrated to be equally effective. Realistically, we acknowledge that one set of signals may seem to be more diagnostic as a cue than another set (Lynch 2006; Milgrom and Roberts 1986). Thus, we predict an interactive effect, but without specifying that one contributing signal attenuates another:

H2: Positive ratings enhance the effectiveness of advertising spending to raise movie revenues.

Movie sequels build on the original movie's commercial success (Basuroy and Chatterjee 2008). That is, moviegoers tend to view the high quality of the original movie as a signal of the quality of a sequel because they tend to associate various products of the same brand with product quality (Erdem 1998). With generous production budgets and heavy advertising based on the original movie's brand power, a sequel usually achieves box office success, even if it does not meet the box office levels attained by the parent movie (Basuroy and Chatterjee 2008; Ravid 1999).

Although sequels can make money, they are often rated less favorably than original movies. That is, the original movie's success leads to high expectations for the sequel, which are often difficult to meet, thus leading to less satisfaction (Anderson 1973; Oliver 2009). Viewers may be less satisfied and less impressed as a result of satiation on experiential attributes arising from a sequel's lack of novelty and surprise, which results in lower ratings by moviegoers. Figuratively speaking, when the punch line is known, the humor is less effective. Distancing a movie somewhat from the expectations of the original pays off; Sood and Drèze (2006) find that dissimilar sequels were rated higher than similar ones and that sequels with descriptive titles (e.g., Pirates of the Caribbean: Dead Man's Chest) were rated higher than those with numbered titles (e.g., Spider-Man 2). Low ratings on a sequel tend to spread in the movie population, thus limiting its revenues in following weeks, which does not bode well for a sequel in the long run. Such low ratings of sequels may partially explain why subsequent sequels are rarely made.

Therefore, we hypothesize that the effects of both higher revenues and lower ratings of sequels are likely realized predominately in the early weeks after release because sequels tend to stimulate their loyal consumer base quickly.


positive experiences with the movie. This relationship is strengthened by the mutual confirmation of the online community environment composed of ordinary viewers.

Movie marketers are known to enhance advertising spending for movies that were commercially successful in preceding weeks, which in turn draws more positive reviews from movie viewers.1 In other words, advertising can also play a role in confirming viewer satisfaction, which translates into higher movie ratings in subsequent weeks (Spreng, MacKenzie, and Olshavsky 1996). Thus, we test the following hypothesis:

    H1: High movie revenues enhance subsequent movie ratings.

This hypothesis implies that early successful revenues from early adopters of new products serve as an information cue for late adopters' purchases and satisfaction. Word of mouth is known to be a powerful and effective force, assisting the diffusion of consumer packaged goods, durables, and services in the market. Early commercial success is also a proxy of assurance of high quality from early adopters, a segment often acknowledged as experts in the relevant product category; as such, early sales figures lend credibility to the product launch, which in turn enhances the product's success.

The movie literature takes somewhat of a signaling theory perspective, in that a consumer who witnesses the early commercial success of a movie can infer that the movie has qualities that make it popular and might also infer that the movie has artistic or creative merit. The literature maintains that the same quality signal from two sources, one from marketers (advertising) and the other from consumer communities (ratings), can effectively influence consumers' choices in the marketers' favor by greatly reducing uncertainties about new products' unobservable quality (Kirmani 1997; Nelson 1974).

Some recent research suggests that in the movie industry, certain signals may become less useful in the presence of others; for example, Basuroy, Desai, and Talukdar (2006) find the attenuating role of third-party information sources (e.g., critics' review consensus and cumulative WOM) on the strength of the advertising signal. Other research argues that advertising and ratings indeed function synergistically, enhancing revenues when well-known movies (those with large budgets and heavy advertising) receive positive WOM (high movie ratings) (Basuroy, Chatterjee, and Ravid 2003; Liu 2006). To contribute to this line of inquiry, we also test an interactive hypothesis from analogous sources. We predict an interactive effect that advertising spending upgrades revenues further when ratings are more positive. Our theorizing suggests that if movie ratings are positive, potential consumers are more likely to respond to the advertising, thus enhancing the effect of advertising on revenues.
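The predicted interaction can be sketched as a simple regression in which weekly revenue depends on ratings, ad spending, and their product; the variable names and simulated data below are hypothetical illustrations, not the authors' specification or dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical weekly observations: accumulated rating (1-10) and ad spending ($M)
rating = rng.uniform(1, 10, n)
ad_spend = rng.uniform(0, 5, n)

# Simulate revenues ($M) in which mainly the interaction matters,
# i.e., advertising pays off when ratings are favorable
revenue = 2.0 + 0.9 * rating * ad_spend + rng.normal(0, 1, n)

# Fit revenue ~ rating + ad_spend + rating*ad_spend by ordinary least squares
X = np.column_stack([np.ones(n), rating, ad_spend, rating * ad_spend])
beta, *_ = np.linalg.lstsq(X, revenue, rcond=None)
print(beta)  # [intercept, rating, ad_spend, interaction]
```

In such a specification, a significant positive coefficient on the product term, with weak main effects, is the pattern the interactive hypothesis anticipates.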


4. This argument is empirically confirmed by the data used in this research.

riences become stabilized over time. Specifically, on the one hand, it becomes more difficult to satisfy them, and thus they give high ratings less often. On the other hand, their improved expertise and accumulated experience enable them to avoid movies that are unsuitable to their tastes, and thus they give low ratings less often. Therefore, amateurs' movie ratings become stabilized in the form of less variability with consumption experiences.

H4: Amateur viewers' movie-viewing experiences generate less favorable ratings with less variability.

In the long run, amateur viewers' ratings should stabilize at a certain level because there will not be any more substantial learning experience in critiquing movies. Therefore, this hypothesis is primarily focused on amateur viewers who are acquiring relatively new movie consumption experiences as opposed to seasoned and experienced amateur viewers.
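One simple way to probe the pattern H4 predicts in rating-history data is to compare each member's early and late rating windows; the chronological histories below are small hypothetical examples, not Netflix records.

```python
import statistics

# Hypothetical chronological rating histories (five-point scale) for three members
histories = {
    "member_a": [5, 1, 5, 2, 4, 3, 3, 4, 3, 3],
    "member_b": [4, 5, 1, 5, 3, 4, 3, 3, 4, 3],
    "member_c": [2, 5, 5, 1, 4, 3, 4, 3, 3, 3],
}

for member, ratings in histories.items():
    early, late = ratings[:5], ratings[5:]
    # H4 predicts a lower mean and lower variability in the later window
    print(member,
          round(statistics.mean(early), 2), round(statistics.stdev(early), 2),
          round(statistics.mean(late), 2), round(statistics.stdev(late), 2))
```

In each toy history, the late window shows both a slightly lower mean and a markedly smaller standard deviation, the stabilization pattern described above.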

Next, given the association between movie preferences and ratings with genre (Liu 2006), such as children being fans of animation movies, we expect that viewers give their favorite genres (more precisely, members' frequently viewed genres) high ratings because they are internally predisposed to like that category of movies (upward preferred genre effect). In contrast, as we predict in H4, as viewers choose more movies beyond their best choices in their nonfavorite genres, they may rate those movies lower without having the preferred genre effect as in their favorite genres (downward viewed set effect). That is, as viewers choose more movies, they settle for less attractive movies because they have exhausted their top choices in certain genres. Thus, the relationship between genre proportion (i.e., the percentage of movies seen in the genre compared with all movies seen for that individual viewer) and average genre rating may be nonmonotonic because of these two conflicting effects and warrants further investigation.

Specifically, we expect genre loyalists with a high range of genre proportions to generate high ratings approximately proportional to their genre proportion because of their strong internal inclination toward their favorite genres. In most cases, their strong inclination toward their frequently viewed genres prevents them from choosing from other, less favored genres. That is, genre loyalists are strongly predisposed to specific aspects of their favorite genres. For example, thriller movie loyalists enjoy how the story unfolds and entertains their own anticipated scenarios. Thus, such strong preferences for their favorite genres lead the loyalists to rate most movies in their preferred genres favorably (upward effect by preferred genres). In contrast, this effect should be weak or nonexistent for viewers who balance multiple genres, and accordingly, they should exhibit a downward effect by viewed set. Finally, we expect a low range of genre proportions to result in a medium range of ratings due to viewers choosing only the most recognizable movies in a genre that is only a little known to them (e.g., through intensive advertising exposure or friends' strong recommendations). However, their lack of strong inclination toward a particular genre leads them to have only a moderate level of satisfaction, despite the movies' strengths.
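A nonmonotonic pattern of this kind can be checked by regressing average genre rating on genre proportion and its square; a positive quadratic coefficient indicates a U-shape. The sketch below uses simulated data built to exhibit a U-shape and is illustrative only; the authors' actual specification may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical (genre proportion, average genre rating) pairs for heavy viewers:
# medium ratings at low proportions, a dip for balanced viewers, and high
# ratings for genre loyalists, i.e., a U-shape
prop = rng.uniform(0.0, 1.0, 300)
rating = 3.4 - 2.0 * prop + 2.5 * prop**2 + rng.normal(0, 0.2, 300)

# Fit rating ~ prop + prop^2; coefficients are ordered [quadratic, linear, intercept]
coeffs = np.polyfit(prop, rating, deg=2)
print(coeffs)
```

A significantly positive leading coefficient is the empirical signature of the U-shaped relationship hypothesized for heavy viewers.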


ing standard deviation and a new movie rating. Regarding the fourth and fifth points, the community's percentage of the highest and lowest ratings of the movie of interest indicates two opposite extreme ratings (e.g., 5 and 1 on a five-point scale, respectively) and is a strong indicator of new ratings beyond the community's simple average rating. Thus, the highest rating is positively correlated with a new movie rating, and the lowest rating is negatively correlated with a new movie rating.

These two data perspectives, along with movie characteristics, converge to lend a better understanding of how both consumers' own consumption experiences and community opinions influence consumers' postconsumption evaluations in the movie category. Recently, marketing scholars have emphasized connected consumers in the Internet era, in which people can easily exchange consumption information (Chen and Xie 2008; Chevalier and Mayzlin 2006). However, consumers still value and use their own experiences (i.e., internal information) in decision making, in addition to community opinions (i.e., external information). In general, consumers are likely to seek various types of information sources to reduce uncertainty as the perceived risk associated with a purchase increases (Murray 1991; West and Broniarczyk 1998). In the movie industry, the uniqueness of each movie makes movie choice challenging, along with the financial and transaction costs (e.g., ticket price, travel to the theater).

In-Depth Viewer-Level Effects: Rating Pattern Developments with Experiences (H4-H5)

In the previous subsection, we focused on illuminating two groups of factors that influence amateur viewers' new ratings: viewers' own past ratings and movie communities' opinions. Here, we also consider how amateur viewers' ratings develop as these viewers acquire more movie consumption experiences (Alba and Hutchinson 1987). Therefore, in this subsection, we present two in-depth hypotheses on individual viewers' rating pattern developments: one hypothesis on viewers' rating changes over time (H4) and one hypothesis on how viewers' movie consumption experiences are associated with their genre preferences (H5).

We examine how amateur viewers' ratings can change over time. We test and verify that members with more rating experiences rate movies lower, similar to critics' ratings, which are generally lower than amateurs' ratings because of the critical nature of their reviews.4 By watching more movies, members develop a reliable, large reference base and, accordingly, should be able to analyze movies similarly to professional critics. Alba and Hutchinson (1987) indicate that increased familiarity (i.e., the number of product-related experiences) results in increased consumer expertise (i.e., the ability to perform product-related tasks successfully). Furthermore, we expect members to choose preferred movies first and then a set of movies that do not include their best choices.

At the same time, members' ratings become less variable with experiences because their consumption expe-


ratings in both online and offline domains. Thus, in general, late viewers make better-informed choices, which is empirically supported by more positive ratings in the following weeks. In the process, commercially successful movies can generate more satisfied viewers, who then rate the movies higher. That is, these viewers choose the movies because of previous ratings and reviews.

We tested H2 regarding the interaction effects of movie ratings (from either critics or amateurs) with ad spending on box office revenues using weekly data. Table 3 provides the significant variables at the .05 level (for the initial independent variables used in each regression, see the Appendix). In the weekly analysis, we used the previous week's ad spending measures (i.e., weekly ad spending, weekly theater ad proportion, and weekly movie ad proportion) and the accumulated movie ratings (i.e., critics' and amateurs' ratings) up to the previous week to measure their effects on the following week's theater revenues. We used the accumulated ratings because moviegoers can review all the past ratings to determine which movie to see.
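The accumulated-ratings predictor described above can be illustrated with a small data-preparation sketch; the weekly numbers are hypothetical, and for simplicity the accumulation here averages weekly averages rather than all individual past ratings.

```python
# Hypothetical weekly average amateur ratings for one movie (Week 0 onward)
weekly_ratings = [7.8, 7.4, 7.1, 6.9, 7.0]

# Accumulated average rating up to the PREVIOUS week, used to predict the
# current week's theater revenues (undefined for the opening week, Week 0)
accumulated = []
for week in range(1, len(weekly_ratings)):
    past = weekly_ratings[:week]
    accumulated.append(sum(past) / len(past))

print(accumulated)
```

Each element then enters the revenue regression for its week, alongside the previous week's ad spending measures and their interaction.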

In this weekly analysis, we confirmed that the interaction effects of ratings and spending (critics' ratings x ad spending and amateurs' ratings x ad spending) were significant in Week 2-Week 7, whereas the main effects of ratings were not. This implies that movie revenues cannot be maximized without the combination of ratings and ad spending in these weeks; thus, H2 is empirically supported for the later weeks. The nonsignificant main effects of both ratings (critics' and amateurs' ratings) indicate that ratings alone cannot increase movie revenues without enhanced buzz created through advertising spending. In contrast, we observe a mixture of positive and negative main effects of the movie cost variables (i.e., weekly ad spending, weekly theater ad proportion, and weekly movie ad proportion). Although we expect that movie costs are positively correlated with movie revenues, significantly negative movie cost effects imply that some advertising money was excessively wasted beyond its proper use, based on its effective combination with favorable ratings (measured by both interaction terms). In other words, negative movie cost effects occurred after the regression accounted for both significantly positive interaction term effects. Notably, our empirical analysis shows that movie marketers tend to allocate more advertising dollars to movies that collect high revenues in preceding weeks. It demonstrates that in many cases, marketers used advertising money inefficiently because they did not consider both revenues and ratings when allocating their advertising resources.

In contrast, in the opening week (Week 0), we found that only one main effect (critics' ratings) and one interaction effect (amateurs' ratings x ad spending) were significant. The result of Week 0 implies that critics' ratings have a significant main effect on theater revenues because critics' reviews and ratings are intensively published in various outlets shortly before opening week (Week 0). In the same week, amateurs' ratings showed no significant main effect, probably because there are only a limited number of amateur reviews before a movie's release. Accordingly, high amateur ratings can only enhance revenues with the help of substantial ad spending in the week. In the following week


Importantly, we hypothesize that this U-shaped relationship is pronounced only for heavy viewers, who gain enough consumption experiences through enhanced analysis and elaboration abilities to process product information (Alba and Hutchinson 1987). We do not expect the relationship to be strong for light viewers (i.e., novices), because their experiences do not allow them to fully develop either the upward effect (preferred genre effect) or the downward effect (viewed set effect).

H5: There is a U-shaped relationship between experienced viewers' genre proportion (i.e., genre preference) and their genre rating.

Empirical Analysis

We divide our empirical analyses into two parts according to the different nature of the available data. First, we provide the empirical results for H1-H3, using movie-level data from various sources, including Rotten Tomatoes for professional critics' ratings and Yahoo Movies for amateurs' ratings. These data do not include information on individual critics or individual amateur viewers. Second, with individual members' data mainly from Netflix, we run a regression analyzing individual viewers' movie ratings to test H4 and H5.

    Movie-Level Data

The data contain specific information regarding 246 movies that cover six major genre categories: thriller, romance, action, drama, comedy, and animation. The movies were released during the May 2003-October 2005 period in theaters and on video. Table 1 provides a summary of these data, such as movie characteristics, costs, and revenues.

We gathered the ratings information on the 246 movies from two popular movie Web sites: Rotten Tomatoes and Yahoo Movies. Both sites allow members to post movie ratings but differ in terms of membership conditions. The Rotten Tomatoes site comprises accredited movie critics exclusively. Accordingly, these members are active in either select movie critic societies/associations or print publications. They are regarded as professional movie critics. In contrast, the Yahoo Movies site is open to the public and, for the most part, comprises amateur movie lovers (see Table 1).

Movie-Level Analysis: Movie Ratings and Revenues (H1-H3)

In the weekly regression summarized in Table 2, high weekly theater revenues induced more favorable ratings from amateurs in the following week in six of the seven weeks tested, in support of H1. Week 1 was the only exception (with theater revenues in Week 0 [opening week] and movie ratings in Week 1), which suggests that viewers in the opening week (Week 0) tended to have mixed evaluations about the movie, perhaps because they saw the movie for different reasons. For example, heavy advertising from movie marketers or insufficient and inconsistent information from like-minded moviegoers can make satisfactory choices difficult. During the opening week, however, enough people view the movie and spread their reviews and


TABLE 1
Summary of the 246-Movie Data Sample

Movie characteristics:
  Six genres: thriller (35, 14%), romance (25, 10%), action (51, 21%), drama (50, 20%), comedy (74, 30%), animation (11, 4%)
  Sequel: 36 movies (15%)
  Eight major studio distribution: theater distribution (181, 74%), video distribution (186, 76%)
  MPAA rating: R (78, 32%), non-R (PG-13, PG, and G) (168, 68%)
  Seven major holiday release: theater release (34, 14%), video release (36, 15%)

Variables (M, SD, Minimum, Maximum):
  Running time (minutes): 108, 19, 68 (Pooh's Heffalump Movie), 201 (Lord of the Rings: The Return of the King)
  Video release lagging days: 143, 39, 67 (From Justin to Kelly), 431 (Luther)

Movie costs:
  Production budget (thousands of dollars): 47,798, 39,098, 150 (Pieces of April), 210,000 (Spider-Man 2)
  Ad costs (thousands of dollars): 14,132, 8,441, 271 (Eulogy), 45,981 (Shrek 2)
  Total costs (thousands of dollars): 61,930, 44,622, 2,193 (Pieces of April), 242,077 (Spider-Man 2)
  Theater screens: 11,325, 6,403, 44 (Eulogy), 27,354 (Shrek 2)

Movie revenues:
  Box office revenues (thousands of dollars): 42,418, 44,897, 54 (Eulogy), 301,861 (Shrek 2)
  Video rental revenues (thousands of dollars): 24,381, 13,121, 803 (She Hate Me), 62,068 (The Day After Tomorrow)
  Video sales revenues (thousands of dollars): 15,359, 24,088, 82 (Northfork), 233,090 (Finding Nemo)
  Total revenues (thousands of dollars): 82,158, 74,164, 1,246 (She Hate Me), 473,118 (Finding Nemo)

Rotten Tomatoes movie ratings (critics' ratings):
  Number of ratings: 150, 43, 33 (I Am David), 257 (The Passion of the Christ)
  Average rating, ten-point scale: 5.50, 1.46, 1.80 (Alone in the Dark), 8.69 (Lord of the Rings: The Return of the King)

Yahoo Movies ratings (amateurs' ratings):
  Number of ratings: 1,509, 2,585, 29 (Eulogy), 34,672 (The Passion of the Christ)
  Average rating, ten-point scale: 6.89, 1.40, 2.16 (House of the Dead), 9.50 (Hotel Rwanda)

Notes: MPAA = Motion Picture Association of America.

impact of sequels on theater revenues occurred only in the first two weeks after the movie's release and that the impact was much stronger in the opening week (Week 0) than in the following week (Week 1). Afterward, the impact weakened, probably because the buzz and attention for the sequel dissipated quickly. Our weekly analysis in Table 2 shows a negative impact of a sequel on movie ratings in Weeks 1 and 2 as well. In brief, the empirical results show that sequels can have a positive impact on theater revenues based on the original's success, but they leave viewers unimpressed and unsatisfied relative to the original movies. Yet these sequel effects are pronounced only in the early weeks and become subsequently neutralized because the


(Week 1), we observe one significant main effect (amateurs' ratings) and one significant interaction effect (critics' ratings x ad spending). In this particular week, amateurs' ratings create enough information and buzz from early moviegoers, and thus the still-new movies do not need the support of heavy advertising to enhance revenues. In contrast, the combination of critics' ratings and ad spending enhances movie revenues effectively beginning this week. After the first two weeks, because of reduced voluntary attention and buzz among ordinary viewers, only a combination of good ratings and heavy ad spending had a substantial influence on theater revenues.

Next, we tested H3, which compares sequels and their contemporaneous originals. Table 3 shows that the positive


TABLE 2
Determinants of Amateurs' Movie Ratings: Stepwise Linear Regression

Week                                              1        2        3        4        5        6        7
Intercept                                      1.66354  1.02730  1.30307   .69374  1.75857  2.19604  1.69142
Genre (thriller)                                .85705    n.s.     n.s.     n.s.     n.s.     n.s.     n.s.
Sequel                                         -.28891  -.32098    n.s.     n.s.     n.s.     n.s.     n.s.
Running time                                     n.s.     n.s.     n.s.    .00766    n.s.     n.s.    .01127
MPAA rating                                      n.s.     n.s.     n.s.     n.s.     n.s.     n.s.     n.s.
Theater revenues in millions
  of dollars (previous week)                     n.s.     .007     .018     .017     .098     .090     .047
Amateurs' ratings (previous week)               .81645   .84198   .80520   .77912   .70521   .65247   .58119
R2                                              .7609    .8017    .6627    .6496    .5620    .3894    .3785
N                                                246      243      239      236      229      223      210

Notes: Dependent variable = weekly amateurs' movie ratings (current week). Each regression included only significant independent variables at the .05 level. Each movie showed its first eight weeks (Week 0 [opening week]-Week 7) after its release in the data used. n.s. = not significant. Signs on the Sequel coefficients follow the text's report of a negative sequel effect on ratings in Weeks 1 and 2.

TABLE 3
Determinants of Box Office Revenues: Stepwise Linear Regression

Week                                 0           1         2         3          4          5          6          7
Intercept                       32,138,681   4,037,404  440,143   321,435    451,267    295,158    334,726    207,801
Sequel                           9,398,002   1,450,018    n.s.      n.s.       n.s.       n.s.       n.s.       n.s.
Holiday week                     5,445,809   4,383,865  976,660  1,919,559   670,210    463,581     n.s.      406,835
Major studio release                n.s.     1,725,768    n.s.      n.s.       n.s.       n.s.       n.s.       n.s.
Weekly number of screens           12,205       n.s.      615       n.s.       943        540       1617       1206
Running time                       91,297       n.s.      n.s.      n.s.       n.s.       n.s.       n.s.       n.s.
Weekly ad spending                  2,592       n.s.      827       914       2,402      2,715      3,351      5,210
Weekly theater ad proportion        n.s.        n.s.      n.s.      n.s.    1,885,493   941,568   2,207,783    n.s.
Weekly movie ad proportion          n.s.        n.s.      n.s.      n.s.    2,576,562  2,196,000  2,420,714  1,329,766
Critics' ratings                 4,117,721      n.s.      n.s.      n.s.       n.s.       n.s.       n.s.       n.s.
Amateurs' ratings                   n.s.      578,971     n.s.      n.s.       n.s.       n.s.       n.s.       n.s.
Critics' ratings x ad spending      n.s.        65        117       40         204        40         66         359
Amateurs' ratings x ad spending     217         n.s.      67        93         161        176        488        143
Box office revenues
  (previous week)                   N.A.       .49371    .44448    .65020     .28627     .48853     .11823     .26666
R2                                 .6182       .8480     .8972     .9132      .8969      .9292      .8540      .8827
Adjusted R2                        .6086       .8442     .8942     .9113      .8932      .9266      .8492      .8787
N                                   246         246       243       239        236        229        223        210

Notes: Dependent variable = weekly theater revenues (current week). Each regression included only significant independent variables at the .05 level. Each movie showed its first eight weeks (Week 0-Week 7) after its release in the data used. N.A. = not applicable, and n.s. = not significant.

lected the data between October 1998 and December 2005, and they reflect the distribution of all ratings received during the period. The title and release year of each movie are also provided.

From the Netflix Prize public data, we selected 13,734,151 ratings of the 246 movies used for our previous movie-level analysis and matched this viewer-level data with the movie-level data. The data included 456,476 Netflix members. The ratings selected cover the June


    fan base stemming from the original tends to view thesequel early.

    Viewer-Level Data

    We used individual members movie rental and rating histo-ries data from the Netflix Prize site for the individualviewerlevel analysis. The public data contain more than100 million ratings of 17,770 movie titles from 480,000randomly chosen, anonymous Netflix members. We col-

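The movie-level/viewer-level matching described above can be sketched as a table join. The miniature data frames below are hypothetical stand-ins; because the Netflix Prize files identify movies only by title and release year, those two fields serve as the join key:

```python
import pandas as pd

# Hypothetical miniature stand-ins for the two data sets: the movie-level
# table (one row per analyzed movie) and the viewer-level ratings.
movies = pd.DataFrame({
    "title": ["Movie A", "Movie B"],
    "release_year": [2003, 2004],
    "critics_rating": [7.1, 5.4],
})
ratings = pd.DataFrame({
    "member_id": [1, 1, 2],
    "title": ["Movie A", "Movie B", "Movie A"],
    "release_year": [2003, 2004, 2003],
    "rating": [4, 3, 5],
})

# An inner join keeps only ratings of movies that appear in the movie-level
# table, mirroring the selection of ratings for the analyzed movies.
merged = ratings.merge(movies, on=["title", "release_year"], how="inner")
print(len(merged), merged["member_id"].nunique())
```

In the actual data, this kind of selection retains the 13,734,151 ratings from 456,476 members for the 246 analyzed movies.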

[Table 4 is truncated in the source; only its column headers survive: Variable Group, Variable, Estimate, SE, p-Value, with the Intercept row reading 1.01621 (SE = .09838).]

[Table 5 is truncated in the source; only its column headers survive: Genre, Intercept, Member Rating Order, R2, N, with an overall intercept of 3.67597.]

[A further table comparing Overall, Heavy Viewers, and Light Viewers is truncated in the source; only the opening of its Intercept row (3.67588) survives.]

are likely to leave viewers less satisfied (Anderson 1973). Therefore, a front-loaded advertising strategy should be considered when promoting sequels.

Finally, a diminishing fan base may explain why subsequent sequels can be a risky investment and suggests that studios should be cautious about extending sequel movies into a long series (Basuroy and Chatterjee 2008).5 Thus, sequels can be a relatively safe investment based on the original movie's commercial success, but satisfying the fan base is a critical factor in turning sequels into successful long-term series, such as the James Bond movies. These results on movie sequels should inform brand extensions and brand alliances (Rao, Qu, and Ruekert 1999) of other experiential hedonic goods because brand extensions are common and often effective in such product categories (e.g., the Harry Potter book series).

5 For example, three "Number 3" blockbuster movies in 2007 fell short of expectations at the box office compared with their first sequels. Specifically, Spider-Man 3 (2007) reaped $337 million compared with $374 million by Spider-Man 2 (2004), Shrek the Third (2007) made $323 million compared with $441 million by Shrek 2 (2004), and Pirates of the Caribbean: At World's End (2007) reached only $309 million compared with $423 million by Pirates of the Caribbean: Dead Man's Chest (2006).

Viewer-level effects. First, for movie rental companies (e.g., Netflix, Blockbuster), we may assume that movie ratings are an effective measure of a member's satisfaction and that satisfied members stay with the company longer. This study highlights members' satisfaction mechanism on the basis of their viewing and rating histories and movie communities' opinions as internal and external information sources, respectively (Murray 1991). Indeed, Netflix's emphasis on the importance of providing better recommendations to members is clearly indicated by the Netflix Prize, a contest in which the firm will give a potential $1 million prize to the team with the best ideas on how to improve the company's movie recommendation system.

Second, insights from our research findings regarding the roles of members' own experiences and community opinions suggest that these consumer voices can strengthen companies' market orientation (Jaworski and Kohli 1993; Kohli and Jaworski 1990). Notably, marketers are especially interested in how to use online consumer forums in their favor (Dellarocas 2006; Mayzlin 2006). Godes and colleagues (2005) indicate that at least some of the social interaction effects in such forums are partially within the firm's control. For example, our movie data show that the number of accumulated ratings on the focal movie by the member community has a negative impact on the new rating (Y1 in Table 4) because of the correlation between the member's interest level and his or her viewing timing. That is, the most interested viewers see the new movie first, and the less interested viewers see it later. Cinematch, Netflix's movie recommendation system, does not fully consider such factors indicated in Table 4, which could potentially improve the system's accuracy.

In addition, when community opinions have a significant influence on members' choices, marketers can emphasize community opinions to persuade members. By emphasizing the positive aspects of less viewed DVDs, marketers can increase the uses of less frequently viewed stocks of DVDs more efficiently. We point out that the intention of this particular research is to provide insight into a viewer rating mechanism rather than to develop a full-fledged rating forecasting model.

Finally, such recommendation systems are essential to the survival and success of new and existing products as an effective customer relationship management tool (Ansari, Essegaier, and Kohli 2000; Bodapati 2008; Iacobucci, Arabie, and Bodapati 2000; Krasnikov, Jayachandran, and Kumar 2009; Moon and Russell 2008). Ubiquitous online community forums can be a significant information source to improve the performance of such recommendation systems for firms that develop new products (e.g., books, music, video games).

Limitations and Further Research

We indicate some limitations of this research and shed light on potential avenues for further research. First, this research was based primarily on our analysis of the demand side of the movie industry and was limited in integrating the supply side (e.g., production and distribution processes and costs) with the same level of intensity and focus. Because supply-side factors influence movie profits as much as demand-side factors, further research could take a more balanced and integrated approach toward the demand and supply factors of the industry. Second, we used summary ratings, but we did not analyze the content of the textual reviews (Chevalier and Mayzlin 2006; Wyatt and Badger 1990). Third, we tapped into individual viewers' postconsumption evaluations but not their choices per se. We examined the choice issue only at the aggregate movie level. Research on individual choice in movies could be conducted with proper data acquisition. Note that such research would require a sophisticated choice model development because the set of available movies is large and changes across both viewers and time.

From a substantive perspective, research on sequels' impact on movie performances, both in the theater and on video, in association with viewers' ratings would provide useful information to movie production studios. For example, star power is well known in the movie industry (Ravid 1999) and has proved important in the success of sequels. As we noted previously, some sequels do not use the same stars as the originals but still can be successful. It would be worthwhile to investigate the magnitude of star power in sequels by comparing sequels with and without the same stars as the originals.

This research could be applied to similar hedonic consumption situations in which consumers continually face new products and thus need to determine the value of the new products according to their own experiences and the community's general opinions (e.g., entertainment goods, such as books and music). In these categories, the Internet enables ordinary consumers to share their ratings and reviews based on their own experiences with other like-minded consumers.

Other product categories are not immune to the impact of expert and consumer ratings. Automobiles are rated on

REFERENCES

Ainslie, Andrew, Xavier Drèze, and Fred Zufryden (2005), "Modeling Movie Life Cycles and Market Share," Marketing Science, 24 (3), 508–517.

Alba, Joseph W. and J. Wesley Hutchinson (1987), "Dimensions of Consumer Expertise," Journal of Consumer Research, 13 (March), 411–54.

Anderson, Rolph E. (1973), "Consumer Dissatisfaction: The Effect of Disconfirmed Expectancy on Perceived Product Performance," Journal of Marketing Research, 10 (February), 38–44.

Ansari, Asim, Skander Essegaier, and Rajeev Kohli (2000), "Internet Recommendation Systems," Journal of Marketing Research, 37 (August), 363–75.

Basuroy, Suman and Subimal Chatterjee (2008), "Fast and Frequent: Investigating Box Office Revenues of Motion Picture Sequels," Journal of Business Research, 61 (July), 798–803.

———, ———, and S. Abraham Ravid (2003), "How Critical Are Critical Reviews? The Box Office Effects of Film Critics, Star Power, and Budgets," Journal of Marketing, 67 (October), 103–117.

———, Kalpesh Kaushik Desai, and Debabrata Talukdar (2006), "An Empirical Investigation of Signaling in the Motion Picture Industry," Journal of Marketing Research, 43 (May), 287–95.

Bettman, James R. (1970), "Information Processing Models of Consumer Behavior," Journal of Marketing Research, 7 (August), 370–76.

Boatwright, Peter, Suman Basuroy, and Wagner Kamakura (2007), "Reviewing the Reviewers: The Impact of Individual Film Critics on Box Office Performance," Quantitative Marketing and Economics, 5 (4), 401–425.

Bodapati, Anand (2008), "Recommendation Systems with Purchase Data," Journal of Marketing Research, 45 (February), 77–93.

Boulding, William and Amna Kirmani (1993), "A Consumer-Side Experimental Examination of Signaling Theory: Do Consumers Perceive Warranties as Signals of Quality?" Journal of Consumer Research, 20 (June), 111–23.

Chakravarty, Anindita, Yong Liu, and Tridib Mazumdar (2008), "Online User Comments Versus Professional Reviews: Differential Influences on Pre-Release Movie Evaluation," MSI Reports, working paper series, Report No. 08-105, 115–37.

design and functionality by experts and on consumption experience attributes by ordinary drivers. Services, from restaurants to dry cleaners, are endorsed by experts and by consumers who have tried the services and want to have a voice in encouraging or warning others. Thus, expert ratings have long existed in many product categories. With the ease of online posting, consumer ratings are exploding in popularity, indeed motivating experts to provide ratings of products previously unrated (e.g., dietary content in fast-food meals). Thus, although this research focused on movies, its applicability should be much broader. In many purchase categories, it is important to begin to tease apart the individual and synergistic effects of WOM induced by ratings and advertising dollars, the dynamic effects of subsequent product launches, and the moderating effects of customers' relative experiences.

120 / Journal of Marketing, January 2010

Appendix: Variable Descriptions

Details on Table 1

Eight major studios: Buena Vista, Fox, MGM/UA, Miramax, Paramount, Sony, Universal, Warner Bros.

Seven major holidays: New Year's Day, Presidents' Day, Memorial Day, Independence Day, Labor Day, Thanksgiving, Christmas.

Video release lagging days: the number of days between theater release and video release dates.

Total costs = production budget + ad costs. All movie costs and revenues are based on the first eight weeks after either theater release or video release, except for production budget.

The number of theater screens is the accumulated number of weekly screens.

Total revenues = box office revenues + video rental revenues + video sales revenues.

Details on Table 2

The initial set of independent variables is made up of the following for both regressions: genre (six categories; reference category = animation), sequel, running time, MPAA rating (R versus non-R ratings; reference category = non-R ratings), theater distribution by the eight major studios, and theater revenues (previous week).

Week indicates the week of amateurs' movie ratings (dependent variable).

Details on Table 3

The initial set of independent variables is genre (six categories; reference category = animation), sequel, holiday week, number of theater screens, running time, MPAA rating (R versus non-R; reference category = non-R), theater distribution by the eight major studios, theater ad proportion (theater ad spending for the focal movie/ad spending on all theater movies in the corresponding week [%] [previous week]), movie ad proportion (theater ad spending for the focal movie/[ad spending on all theater movies in the corresponding week + ad spending on all video movies in the corresponding week] [%] [previous week]), ad spending (previous week), critics' ratings, amateurs' ratings, critics' ratings × ad spending (previous week) (interaction term), and amateurs' ratings × ad spending (previous week) (interaction term).

Details on Table 5

Member rating order indicates the rating's serial number in order for each member.

Rating mean indicates the average of all the ratings that belong to each rating order across all the members.

Rating standard deviation indicates the standard deviation of all the ratings that belong to each rating order across all the members.

The overall case is based on all the rating cases across all six genres included.

The overall case covers up to #129 in member rating order, to select only cases with at least 21 members for each rating order. Rating mean becomes unstable after #129.

In each of the six genre cases, we included only observations with more than five members for each genre rating order.
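The two ad-proportion variables described in the Table 3 details reduce to simple weekly shares. A direct computation looks like this, with hypothetical weekly dollar figures:

```python
def theater_ad_proportion(focal_theater_ad, all_theater_ad):
    """Focal movie's share of that week's total theater advertising, in %."""
    return 100.0 * focal_theater_ad / all_theater_ad

def movie_ad_proportion(focal_theater_ad, all_theater_ad, all_video_ad):
    """Focal movie's theater ad spending as a share of that week's combined
    theater and video advertising, in %."""
    return 100.0 * focal_theater_ad / (all_theater_ad + all_video_ad)

# Hypothetical weekly ad spending figures (in dollars):
print(theater_ad_proportion(2_000_000, 40_000_000))             # 5.0
print(movie_ad_proportion(2_000_000, 40_000_000, 10_000_000))   # 4.0
```

As in the regressions, both shares would be computed from the previous week's spending when explaining the current week's revenues.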


Chen, Yubo and Jinhong Xie (2008), "Online Consumer Review: Word-of-Mouth as a New Element of Marketing Communication Mix," Management Science, 54 (3), 477–91.

Chevalier, Judith A. and Dina Mayzlin (2006), "The Effect of Word of Mouth on Sales: Online Book Reviews," Journal of Marketing Research, 43 (August), 345–54.

Dellarocas, Chrysanthos (2006), "Strategic Manipulation of Internet Opinion Forums: Implications for Consumers and Firms," Management Science, 52 (10), 1577–93.

Duan, Wenjing, Bin Gu, and Andrew B. Whinston (2008), "The Dynamics of Online Word-of-Mouth and Product Sales: An Empirical Investigation of the Movie Industry," Journal of Retailing, 84 (2), 233–42.

Eliashberg, Jehoshua, Anita Elberse, and Mark A.A.M. Leenders (2006), "The Motion Picture Industry: Critical Issues in Practice, Current Research, and New Research Directions," Marketing Science, 25 (6), 638–61.

———, Jedid-Jah Jonker, Mohanbir S. Sawhney, and Berend Wierenga (2000), "MOVIEMOD: An Implementable Decision-Support System for Prerelease Market Evaluation of Motion Pictures," Marketing Science, 19 (3), 226–43.

——— and Steven M. Shugan (1997), "Film Critics: Influencers or Predictors?" Journal of Marketing, 61 (April), 68–78.

Erdem, Tulin (1998), "An Empirical Analysis of Umbrella Branding," Journal of Marketing Research, 35 (August), 339–51.

Godes, David and Dina Mayzlin (2004), "Using Online Conversations to Study Word-of-Mouth Communication," Marketing Science, 23 (4), 545–60.

———, ———, Yubo Chen, Sanjiv Das, Chrysanthos Dellarocas, Bruce Pfeiffer, et al. (2005), "The Firm's Management of Social Interactions," Marketing Letters, 16 (3–4), 415–28.

Goldberger, Arthur S. (1991), A Course in Econometrics. Cambridge, MA: Harvard University Press.

Griffiths, William E., R. Carter Hill, and George G. Judge (1993), Learning and Practicing Econometrics. New York: John Wiley & Sons.

Hirschman, Elizabeth C. and Morris B. Holbrook (1982), "Hedonic Consumption: Emerging Concepts, Methods, and Propositions," Journal of Marketing, 46 (July), 92–101.

Holbrook, Morris B. (1999), "Popular Appeal Versus Expert Judgments of Motion Pictures," Journal of Consumer Research, 26 (September), 144–55.

Iacobucci, Dawn, Phipps Arabie, and Anand Bodapati (2000), "Recommendation Agents on the Internet," Journal of Interactive Marketing, 14 (3), 2–11.

Jaworski, Bernard J. and Ajay K. Kohli (1993), "Market Orientation: Antecedents and Consequences," Journal of Marketing, 57 (July), 53–70.

Jedidi, Kamel, Robert E. Krider, and Charles B. Weinberg (1998), "Clustering at the Movies," Marketing Letters, 9 (4), 393–405.

Kahneman, Daniel and Amos Tversky (1979), "Prospect Theory: An Analysis of Decision Under Risk," Econometrica, 47 (March), 263–91.

Kirmani, Amna (1997), "Advertising Repetition as a Signal of Quality: If It's Advertised So Much, Something Must Be Wrong," Journal of Advertising, 26 (3), 77–86.

——— and Akshay R. Rao (2000), "No Pain, No Gain: A Critical Review of the Literature on Signaling Unobservable Product Quality," Journal of Marketing, 64 (April), 66–79.

Kohli, Ajay K. and Bernard J. Jaworski (1990), "Market Orientation: The Construct, Research Propositions, and Managerial Implications," Journal of Marketing, 54 (April), 1–18.

Krasnikov, Alexander, Satish Jayachandran, and V. Kumar (2009), "The Impact of Customer Relationship Management Implementation on Cost and Profit Efficiencies: Evidence from the U.S. Commercial Banking Industry," Journal of Marketing, 73 (November), 61–76.

Krider, Robert E., Tieshan Li, Yong Liu, and Charles B. Weinberg (2005), "The Lead-Lag Puzzle of Demand and Distribution: A Graphical Method Applied to Movies," Marketing Science, 24 (4), 635–45.

Liu, Yong (2006), "Word of Mouth for Movies: Its Dynamics and Impact on Box Office Revenue," Journal of Marketing, 70 (July), 74–89.

Lynch, John G., Jr. (2006), "Accessibility-Diagnosticity and the Multiple Pathway Anchoring and Adjustment Model," Journal of Consumer Research, 33 (June), 25–27.

Mayzlin, Dina (2006), "Promotional Chat on the Internet," Marketing Science, 25 (2), 155–63.

Milgrom, Paul and John Roberts (1986), "Price and Advertising Signals of Product Quality," Journal of Political Economy, 94 (4), 796–821.

Moon, Sangkil and Gary J. Russell (2008), "Predicting Product Purchase from Inferred Customer Similarity: An Autologistic Model Approach," Management Science, 54 (1), 71–82.

Murray, Keith B. (1991), "A Test of Services Marketing Theory: Consumer Information Acquisition Activities," Journal of Marketing, 55 (January), 10–25.

Nelson, Phillip (1970), "Information and Consumer Behavior," Journal of Political Economy, 78 (2), 311–29.

——— (1974), "Advertising as Information," Journal of Political Economy, 82 (4), 729–54.

Nord, Walter R. and J. Paul Peter (1980), "A Behavior Modification Perspective on Marketing," Journal of Marketing, 44 (Spring), 36–47.

Oliver, Richard L. (2009), Satisfaction: A Behavioral Perspective on the Consumer. New York: McGraw-Hill.

Prosser, Elise K. (2002), "How Early Can Video Revenue Be Accurately Predicted," Journal of Advertising Research, 42 (March–April), 47–55.

Rao, Akshay R., Lu Qu, and Robert W. Ruekert (1999), "Signaling Unobservable Product Quality Through a Brand Ally," Journal of Marketing Research, 36 (May), 258–68.

Ravid, S. Abraham (1999), "Information, Blockbusters, and Stars: A Study of the Film Industry," Journal of Business, 72 (4), 463–92.

Reinstein, David A. and Christopher M. Snyder (2005), "The Influence of Expert Reviews on Consumer Demand for Experience Goods: A Case Study of Movie Critics," Journal of Industrial Economics, 53 (1), 27–51.

Rothschild, Michael L. and William C. Gaidis (1981), "Behavioral Learning Theory: Its Relevance to Marketing and Promotions," Journal of Marketing, 45 (Spring), 70–78.

Sood, Sanjay and Xavier Drèze (2006), "Brand Extensions of Experiential Goods: Movie Sequel Evaluations," Journal of Consumer Research, 33 (December), 352–60.

Spreng, Richard A., Scott B. MacKenzie, and Richard W. Olshavsky (1996), "A Reexamination of the Determinants of Consumer Satisfaction," Journal of Marketing, 60 (July), 15–32.

Stigler, George J. (1961), "The Economics of Information," Journal of Political Economy, 69 (3), 213–25.

Trusov, Michael, Randolph E. Bucklin, and Koen Pauwels (2009), "Effects of Word-of-Mouth Versus Traditional Marketing: Findings from an Internet Social Networking Site," Journal of Marketing, 73 (September), 90–101.

Wanderer, Jules J. (1970), "In Defense of Popular Taste: Film Ratings Among Professionals and Lay Audiences," American Journal of Sociology, 76 (2), 262–72.

West, Patricia M. and Susan M. Broniarczyk (1998), "Integrating Multiple Opinions: The Role of Aspiration Level on Consumer Response to Critic Consensus," Journal of Consumer Research, 25 (June), 38–51.

Wyatt, Robert O. and David P. Badger (1990), "Effects of Information and Evaluation in Film Criticism," Journalism Quarterly, 67 (2), 359–68.


Copyright of Journal of Marketing is the property of the American Marketing Association, and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use.