
Decision

Information Gaps: A Theory of Preferences Regarding the Presence and Absence of Information

Russell Golman and George Loewenstein

Online First Publication, October 10, 2016. http://dx.doi.org/10.1037/dec0000068

CITATION

Golman, R., & Loewenstein, G. (2016, October 10). Information Gaps: A Theory of Preferences Regarding the Presence and Absence of Information. Decision. Advance online publication. http://dx.doi.org/10.1037/dec0000068


Information Gaps: A Theory of Preferences Regarding the Presence and Absence of Information

Russell Golman and George Loewenstein
Carnegie Mellon University

We propose a theory of preferences for acquiring or avoiding information and for exposure to uncertainty (i.e., risk or ambiguity) which is based on thoughts and feelings about information as well as information gaps, that is, specific questions that the decision maker recognizes and is aware of. In our theoretical framework utility depends not just on material payoffs but also on beliefs and the attention devoted to them. We specify assumptions regarding the determinants of attention to information gaps, characterize a specific utility function that describes feelings about information gaps, and show with examples that our theory can make sense both of the acquisition of noninstrumental information and of the avoidance of possibly useful information, as well as source-specific risk and ambiguity aversion and seeking.

Keywords: belief-based utility, information gap, uncertainty

Thomas Schelling's characterization of The Mind as a Consuming Organ (Schelling, 1987) highlights the fact that most if not all consumption is "in the mind." Schelling's observation presents a challenge for the revealed preference philosophy so prevalent in economics. We cannot directly observe the objects of preference consumed in the mind.1 What is the mind consuming (or preferring not to consume) when we observe people succumbing to clickbait on the Internet, or skipping a visit to the doctor despite unrelieved symptoms of illness, or gambling on their favorite sports teams after purchasing a low-deductible insurance policy? Distinct behavioral economic models, based on psychology rather than revealed preference, can account for some of these patterns of behavior, but none provides an integrated explanation of the diversity of information-related behaviors.

Here, we develop a theory that generates predictions about when people will obtain or avoid information and when they will exhibit risk and ambiguity seeking or aversion, based on psychologically grounded assumptions about how people think and feel about the presence and absence of information.

At the core of our theory is the concept of an information gap (Loewenstein, 1994). There are many things that one does not know and does not think about, but when a person is aware of a specific unknown, it often attracts attention and evokes emotions. We introduce a reduced form question-answer framework for representing knowledge and awareness. In this framework, an information gap opens when a person becomes aware of a question and is uncertain about the correct answer. People form beliefs about information gaps, making judgments about the chance that each answer is correct. We make a series of assumptions describing how much attention each of these beliefs attracts, and we propose a specific utility function defined over these beliefs and the attention paid to them.

1 Just as psychology has moved beyond behaviorism and embraced cognition, economics also has much to gain by acknowledging the inner workings of the mind (Bernheim & Rangel, 2009; Chater, 2015; Kimball, 2015). Caplin and Leahy (2004), for example, offer a clear demonstration that the revealed preference framework is insufficient to analyze benevolent information disclosure policies.

Russell Golman and George Loewenstein, Department of Social and Decision Sciences, Carnegie Mellon University.

We thank Botond Köszegi, David Laibson, Erik Eyster, Juan Sebastian Lleras, Alex Imas, Leeann Fu, and Donald Hantula for helpful comments on an earlier draft entitled Curiosity, Information Gaps, and the Utility of Knowledge.

Correspondence concerning this article should be addressed to Russell Golman, Department of Social and Decision Sciences, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213. E-mail: [email protected]


Page 3: Information Gaps: Golman and Loewenstein - Homepage - CMU

From these primitives, we derive predictions about how people will respond to information gaps, in terms of whether they will seek or avoid information, risk, and ambiguity.

Relation to Existing Literature

Background

The study of decision making under uncertainty grew out of expected utility theory (Anscombe & Aumann, 1963; Savage, 1954; von Neumann & Morgenstern, 1944). Traditionally, choices were thought of in consequentialist terms, with outcomes corresponding to objective reality (Hammond, 1988). According to expected utility theory, risk preferences could be described by utility function curvature, and people would not be sensitive to ambiguity.

The first economic analysis of information preference was pioneered by Stigler (1961). Drawing attention to a phenomenon that had hitherto not been addressed by economists, Stigler assumed that information is a means to an end; it is valued because, and only to the extent that, it enables people to make better decisions. Preferences about information could then be derived from expected utility theory, assuming Bayesian updating of beliefs, with the implication that information will be sought to the degree that it raises expected utility (cf. Hirshleifer & Riley, 1979). According to Stigler's theory, information could never have negative value; people would never deliberately avoid receiving information (in private).

Of course, people do not generally conform to expected utility theory or, in part for that reason, value information in accordance with Stigler's theory. Drawing on a set of psychologically grounded assumptions, Kahneman and Tversky (1979; Tversky & Kahneman, 1992) developed prospect theory to account for violations of expected utility, such as the simultaneous purchase of insurance and lottery tickets, and economists developed more general theories of decision making under uncertainty to represent a range of possible preferences about risk and ambiguity, based (unlike prospect theory) on a revealed preference approach (e.g., Abdellaoui et al., 2011; Gul, 1991; Klibanoff et al., 2005). Nonexpected-utility theories led economists to revisit the value of information.

Given exposure to some uncertainty, acquiring information about that uncertainty can be seen as resolving a lottery over the remaining uncertainty. The value of this information would be the difference between the utility of the compound lottery and the utility of the original prospect, and for nonexpected-utility theories, this value could be negative (Andries & Haddad, 2015; Dillenberger, 2010; Grant et al., 1998; Kreps & Porteus, 1978; Wakker, 1988). Such theories can thus describe certain instances of information avoidance.

Theories of belief-based utility represent another step in the progression of economists' analysis of information preferences that began with Stigler. These theories recognize that people derive utility not (only) from objective reality but from their beliefs about that reality, for example, their anticipatory feelings (Loewenstein, 1987).2 From this perspective, acquiring information can be seen as resolving a lottery about what the person may come to believe. Risk aversion (lovingness) over beliefs implies that people will want to avoid (obtain) information (Caplin & Leahy, 2001; Köszegi, 2003; Schweizer & Szech, 2014). Risk aversion over beliefs (and hence information avoidance) could develop when people hold favorable beliefs and do not want to lose them (e.g., Benabou & Tirole, 2002; Köszegi, 2006) or when people are generally loss averse (e.g., Köszegi, 2010).

Existing belief-based utility theories clearly do make predictions about when people will obtain information and when they will avoid it, and prospect theory clearly does make predictions about when people will seek or steer clear of risk and uncertainty, but some stylized facts in both domains still call out for explanation.

First, in contrast to the predictions of Benabou and Tirole (2002) and Köszegi (2006), information avoidance is more common while holding unfavorable beliefs than while holding favorable beliefs (Dwyer et al., 2015; Eil & Rao, 2011; Fantino & Silberberg, 2010; Ganguly & Tasoff, 2016; Lieberman et al., 1997; Karlsson et al., 2009). Existing belief-based utility models, which assume that risk preferences over beliefs are independent of those beliefs, cannot explain such a pattern because, with Bayesian updating, ex post beliefs cannot be expected to be better or worse than ex ante beliefs (Eliaz & Spiegler, 2006).

2 See also Abelson, 1986; Geanakoplos et al., 1989; Asch et al., 1990; Yariv, 2001.


Second, information acquisition often occurs in situations in which a person does not care what he finds out, for example, answers to trivia questions (Berlyne, 1954, 1960; Kang et al., 2009; Marvin & Shohamy, 2016). Existing models of belief-based utility cannot account for such pure curiosity because when the possible outcomes all have the same utility, the sure-thing principle implies that a mixture between these outcomes also has the same utility. In fact, people sometimes seek out information which they know will make them miserable (Hsee & Ruan, 2016; Kruger & Evans, 2009), which poses an even more fundamental challenge to existing belief-based utility models.

Third, information acquisition or avoidance is highly dependent on situational determinants, such as awareness of related uncertainties, the presence of clues about the information content, or opportunities for distraction (Falk & Zimmermann, 2016; Litman et al., 2005; Loewenstein, 1994; Menon & Soman, 2002; van Dijk & Zeelenberg, 2007). Few, if any, of these patterns are predicted by existing models of information preference.

Stylized facts in the domain of preference under risk and ambiguity also call out for explanation. While people often exhibit risk and ambiguity aversion, they also tend to gamble on uncertainties that they feel they have expertise about (Heath & Tversky, 1991). Additionally, the degree of risk or ambiguity aversion (or seeking) people exhibit depends on contextual factors, such as the presence of other risky or ambiguous options for comparison or for mental accounting (Fox & Tversky, 1995; Gneezy & Potters, 1997). Although existing models (e.g., Abdellaoui et al., 2011) can accommodate these phenomena, they do not specifically predict them.

Our Approach

Here we propose a unified theory that can account for these stylized facts. We follow Caplin and Leahy (2001) and Köszegi (2010) in applying expected utility theory to psychological states rather than to physical prizes, but we expand the domain of psychological states that people can have feelings about. We propose specific assumptions about attention and a specific utility function that takes as inputs beliefs and the attention devoted to them (as well as material payoffs). We incorporate Loewenstein's (1994) insight that information gaps stimulate curiosity, as well as Tasoff and Madarász's (2009) insight that obtaining information stimulates attention and thus complements anticipatory feelings. To the extent that a person is thinking about (i.e., attending to) an information gap, feelings about this information gap enter into utility.3 We show that our model allows for acquisition of noninstrumental information as well as avoidance of possibly useful information, and that it provides a novel account of ambiguity aversion in the famous Ellsberg paradox along with ambiguity seeking in gambling for pleasure. A companion article (Golman & Loewenstein, 2016) explores the full set of implications of our proposed utility model for information acquisition or avoidance. Another companion article (Golman, Loewenstein, & Gurney, 2016) uses the model developed here to derive and test predictions about risk and ambiguity aversion and seeking. We summarize these analyses in the present article to give the reader a sense of the wide range of stylized facts that our model can reconcile.4

In the next section we introduce a framework for representing questions and answers, beliefs about them, and the attention to them. We then provide psychological motivation for a specific utility function that incorporates beliefs and attention, and formally characterize this utility function with seven properties. We proceed to describe how decision making operates, and then present our assumptions about the determinants of attention. In the section on Information Acquisition and Avoidance, we show how our theory can be applied to make sense of information acquisition due to curiosity as well as information avoidance due to anxiety.

3 Curiosity correlates with brain activity in regions thought to relate to anticipated reward (Kang et al., 2009), suggesting that information is a reward in and of itself. Similarly, making decisions while aware of missing relevant information correlates with brain activity in regions thought to relate to fear (Hsu et al., 2005).

4 We do not claim to reconcile all behavioral patterns in the domains of information preference and risk and ambiguity preference. No theory is perfect.


In the section on Risk and Ambiguity Preference, we apply our theory to preferences about uncertain gambles, showing that it can account for both the Ellsberg paradox and gambling for pleasure. The examples in these two sections demonstrate the usefulness of our theory. We conclude with a brief discussion of how our assumptions about how people think and feel about information gaps allow us to integrate our accounts of informational preferences and of risk and ambiguity preferences.

Theoretical Framework

Traditional economic theory assumes that utility is a function of consumption bundles or material outcomes, or (perhaps subjective) distributions thereof. Our basic premise is that utility depends not only on such material outcomes but also on beliefs and the attention paid to them. Figure 1 depicts this perspective and illustrates how we construct our framework. At its core are questions and answers, a structure we introduce to represent the information a person has and the information that he is aware he is missing. Questions delineate the issues that a person is aware of. Each has one or more possible answers. The person may or may not know which answer is actually correct. Moving out from the core in this diagram, a person forms beliefs about the answers to the questions he is aware of. He also pays some degree of attention to each of these beliefs/questions. His cognitive state, which we shall define as these beliefs and the attention paid to them, is then the object of his preferences.

Cognitive States

While there surely is an infinite set of possible states of the world, we assume that a person can only conceive of a finite number of questions at any one time. We represent awareness with a set of "activated" questions $Q = \{Q_1, \ldots, Q_m\}$ and a remaining set of "latent" questions. Activated questions are those that the individual is aware of (i.e., pays at least some attention to). Latent questions are those that the individual could become, but is not currently, aware of. A vector of attention weights $w = (w_1, \ldots, w_m) \in \mathbb{R}_+^m$ indicates how much attention each activated question gets.5

A question $Q_i$ has a countable set of possible (mutually exclusive) answers $\mathcal{A}_i = \{A_i^1, A_i^2, \ldots\}$.6 A person has a subjective belief about the probability that each answer is correct. (The subjective probabilities across different questions may well be mutually dependent.) This framework allows us to capture information gaps, which are represented as activated questions lacking known correct answers, as depicted in Table 1.

Anticipated material outcomes, or prizes, can also be incorporated into this framework. We let $X$ denote a countable set of prizes, that is, material outcomes. The subjective probability over these prizes is in general mutually dependent with the subjective probability over answers to activated questions; that is, the receipt of new information often leads to revised beliefs about the likelihood of answers to many different questions as well as about the likelihood of different material outcomes. Denote the space of answer sets together with prizes as $\Omega = \mathcal{A}_1 \times \mathcal{A}_2 \times \cdots \times \mathcal{A}_m \times X$. Then, given a state of awareness defined by the set of activated questions $Q$,7 we represent a person's cognitive state $C$ with a probability measure $\pi$ defined over $\Omega$ (i.e., over possible answers to activated questions as well as eventual prizes) and a vector of attention weights $w$.

5 We can think of the (presumably infinite) set of latent questions as having attention weights of zero.

6 We use the term countable here to mean at most countable. The restriction of a countable set of answers to a countable set of possible questions does still allow an uncountable set of possible states of the world, but as awareness is finite, the precise state of the world would be unknowable.

7 In most cases, we will assume that activation of questions is determined exogenously, that is, by the environment. We do not model growing awareness (see Karni & Viero, 2013).

Figure 1. Schematic illustration of the development of our theory.


We denote the set of all possible cognitive states as $\mathcal{C} = \Delta(\Omega) \times \mathbb{R}_+^m$ (with the notation $\Delta(\Omega)$ referring to the space of probability distributions over $\Omega$ with finite entropy).8 Each marginal distribution $\pi_i$ specifies the (subjective) probability of possible answers to question $Q_i$, and similarly $\pi_X$ specifies the (subjective) probability over prizes.9

The formal representation of a cognitive state is depicted in Table 2. Consider, for example, a college professor deciding whether or not to look at her teaching ratings. The set of activated questions (and possible answers) might include: "How many of my students liked my teaching?" (0, 1, 2, . . .); "Did they applaud on the last day of class?" (yes/no); "How good a teacher am I?" (great, good, so-so, bad, awful); "Will I get tenure?" (yes/no). Prior belief about the first question might be quite uncertain. The answer to the second question, on the other hand, might already be known with certainty. There may or may not be much uncertainty about the third and fourth questions. All of these beliefs (to the extent they are uncertain) are jointly dependent. Each of these beliefs also attracts some degree of attention. The material outcome might be next year's salary, which would also be highly, but not perfectly, correlated with whether or not she gets tenure.
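To make the representation concrete, the sketch below encodes a cognitive state along the lines of Table 2 as plain Python data: a belief (a probability distribution over answers) and an attention weight for each activated question, plus a distribution over prizes. The questions are the professor example from the text, but all probabilities and attention weights are made-up illustrative numbers, not values from the article.

```python
# Illustrative sketch (not from the article): a cognitive state as beliefs over
# activated questions plus attention weights, following Table 2.
from dataclasses import dataclass

@dataclass
class CognitiveState:
    beliefs: dict        # question -> {answer: subjective probability pi_i(A)}
    attention: dict      # question -> attention weight w_i
    prize_beliefs: dict  # prize -> subjective probability pi_X(x)

# The professor example, with hypothetical numbers.
professor = CognitiveState(
    beliefs={
        "How good a teacher am I?": {"great": 0.2, "good": 0.4, "so-so": 0.3,
                                     "bad": 0.08, "awful": 0.02},
        "Did they applaud on the last day?": {"yes": 1.0, "no": 0.0},  # already known
        "Will I get tenure?": {"yes": 0.6, "no": 0.4},
    },
    attention={
        "How good a teacher am I?": 2.0,
        "Did they applaud on the last day?": 0.3,
        "Will I get tenure?": 3.0,
    },
    prize_beliefs={"salary raise": 0.55, "no raise": 0.45},
)
```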

Preferences Over (Distributions of) Cognitive States

The conventional theory of choice under risk assumes that a lottery over outcomes is evaluated according to its expected utility. We make the analogous assumptions leading to an expected utility representation for lotteries over cognitive states.

We assume that there is a complete and transitive preference relation $\succeq$ on $\Delta(\mathcal{C})$ that is continuous10 and that satisfies independence, so there exists a continuous expected utility representation $u$ of $\succeq$ (von Neumann & Morgenstern, 1944).

The assumption of independence across cognitive states here means that when information could put a person into one of many possible cognitive states, preference is consistent with valuing each possible cognitive state independently of any other cognitive states the person might have found herself in.11

The Utility Function

To generate testable predictions, we need to propose a utility function. In this section we specify a utility function that describes feelings about information gaps.

Psychological Insights

We first introduce a few psychological insights about the effects of attention and feelings about uncertainty to motivate our specific utility function.12 Neuroeconomic research indicates that attention shapes preference and valuation (cf. Fehr & Rangel, 2011). Attention weights in our model specify how much a person is thinking about particular beliefs and, in turn, how much those beliefs directly impact utility.

8 The restriction to distributions with finite entropy serves a technical purpose, but it should not trouble us; intuitively, it means that a person cannot be aware of an infinite amount of information, which is also the basis for our assumption that the set of activated questions is finite.

9 For any $\tilde{A} \subseteq \mathcal{A}_i$, we have $\pi_i(\tilde{A}) = \pi(\mathcal{A}_1 \times \cdots \times \mathcal{A}_{i-1} \times \tilde{A} \times \mathcal{A}_{i+1} \times \cdots \times \mathcal{A}_m \times X)$.

10 The induced topology on $\mathcal{C}$ (derived from the order topology on $\Delta(\mathcal{C})$) should be a refinement of the order topology on $\mathcal{C}$ (see Nielsen, 1984).

11 This might seem to imply that the utility of a state of uncertain knowledge is equal to the expected utility of each of the possible beliefs, for example, that being uncertain of whether the object of my desire reciprocates my affections provides the same utility as the sum of probabilities times the utilities associated with the possible outcome belief states. It need not, because (as we discuss in detail below) obtaining the information, and indeed the specific information one obtains, is likely to affect one's attention weights. Such a change in attention can encourage or discourage a decision maker from resolving uncertainty, depending on whether the news that will be revealed is expected to be good or bad.

12 In a sense we are trying to use a notion of hedonic utility to serve as a decision utility. We do not propose that people actually compute utilities and optimize as part of the decision process, but only that they use some heuristic process that is well adapted to often maximize hedonic utility.

Table 1
The Question–Answer Knowledge Structure

Question     Answer      Belief
Latent       —           Unawareness
Activated    Unknown     Uncertainty (information gap)
Activated    Known       Certainty


We may think of beliefs as having intrinsic value, which is then amplified by these attention weights. Beliefs have positive (or negative) intrinsic value when a person likes (or dislikes) thinking about them, that is, when more attention enhances (or depresses) utility.

It is useful to distinguish two sources of a belief's intrinsic value: valence and clarity. Valence refers to the value of definitive answers to questions (see Brendl & Higgins, 1996). To illustrate the concept of valence, we return to the example of a professor's belief that she is a good (or bad) teacher as one with intrinsically positive (or, respectively, negative) valence. Smith, Bernheim, Camerer, and Rangel (2014) show that valences (of beliefs about consumption) can be inferred from neural activity.

Clarity refers to preferences between degrees of certainty, independent of the answers one is certain of (see Kaplan, 1991). We assert that, ceteris paribus, people prefer to have greater clarity (i.e., less uncertainty or more definitive beliefs). The aversion that people feel toward uncertainty is reflected in neural responses in the anterior cingulate cortex, the insula and the amygdala (Hirsh & Inzlicht, 2008; Sarinopoulos et al., 2010).13 It manifests in physiological responses as well. Subjects who know to expect an electric shock, but who are uncertain whether it will be mild or intense, show more fear (they sweat more profusely, and their hearts beat faster) than subjects who know for sure that an intense shock awaits (Arntz et al., 1992). The desire for clarity is consistent with an underlying drive for simplicity and sense-making (Chater & Loewenstein, 2015). When valence and clarity pull in opposite directions, it may be the case that people prefer a certain answer to a belief that dominates it on valence or that people prefer uncertainty when it leaves space for better answers.

A useful measure of the uncertainty about a particular question is the entropy of the probability distribution over answers (Shannon, 1948; see also Cabrales et al., 2013). The entropy of a (marginal) probability $\pi_i$ is $H(\pi_i) = -\sum_{A_i \in \mathcal{A}_i} \pi_i(A_i) \log \pi_i(A_i)$ (with the convention that $0 \log 0 = 0$).14 At one extreme with minimal entropy 0, there is a single answer known for sure; at the other extreme, for a question with finitely many answers, a uniform distribution maximizes entropy. Entropy, weighted by attention, satisfies Berlyne's (1957) criteria for a measure of the internal conflict or dissonance in one's cognitive state. We associate a psychological cost for beliefs with higher entropy as an instantiation of the desire for clarity.
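As a quick numerical illustration of the entropy measure, here is a minimal sketch. Natural logarithms are used; per footnote 14 the base is only a normalization, and the example beliefs are arbitrary.

```python
# Minimal sketch: entropy H(pi_i) of a belief over a question's answers.
import math

def entropy(belief):
    """belief: dict mapping answers to probabilities (summing to 1)."""
    return -sum(p * math.log(p) for p in belief.values() if p > 0)  # 0 log 0 = 0

print(entropy({"yes": 1.0, "no": 0.0}))                       # 0.0: answer known for sure
print(entropy({"A": 0.25, "B": 0.25, "C": 0.25, "D": 0.25}))  # log 4 ~ 1.386: uniform belief
```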

A Specific Utility Function

To make precise predictions we consider a specific utility function incorporating a concern about material outcomes as well as a preference for beliefs characterized by high valence and clarity, taking the strength of this preference to be proportional to attention weights:

$$u(\pi, w) = \sum_{x \in X} \pi_X(x)\, v_X(x) + \sum_{i=1}^{m} w_i \left[ \sum_{A_i \in \mathcal{A}_i} \pi_i(A_i)\, v_i(A_i) - H(\pi_i) \right]. \quad (1)$$
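The sketch below evaluates Equation (1) for a toy cognitive state. The valences, probabilities, and attention weights are hypothetical, chosen only to show how the material component and the attention-weighted belief component combine.

```python
# Sketch of Equation (1):
#   u(pi, w) = sum_x pi_X(x) v_X(x)
#            + sum_i w_i [ sum_A pi_i(A) v_i(A) - H(pi_i) ].
# All numbers below are hypothetical.
import math

def entropy(belief):
    return -sum(p * math.log(p) for p in belief.values() if p > 0)

def utility(prize_beliefs, prize_values, beliefs, valences, attention):
    material = sum(prize_beliefs[x] * prize_values[x] for x in prize_beliefs)
    belief_based = sum(
        attention[q] * (sum(beliefs[q][a] * valences[q][a] for a in beliefs[q])
                        - entropy(beliefs[q]))
        for q in beliefs
    )
    return material + belief_based

u = utility(
    prize_beliefs={"raise": 0.5, "no raise": 0.5},
    prize_values={"raise": 10.0, "no raise": 0.0},
    beliefs={"Am I a good teacher?": {"good": 0.7, "bad": 0.3}},
    valences={"Am I a good teacher?": {"good": 2.0, "bad": -3.0}},
    attention={"Am I a good teacher?": 1.5},
)
print(u)  # 5.0 + 1.5 * (0.5 - 0.611) ~ 4.83
```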

13 Also, monkeys' dopamine neurons fire, indicating an intrinsic reward, when they have the opportunity to reduce uncertainty about the amount of water they will be given (Bromberg-Martin & Hikosaka, 2009).

14 The base of the logarithm in the entropy formula is arbitrary and amounts to a normalization parameter.

Table 2
Representation of a Cognitive State

Activated questions    Possible answers                               Subjective probabilities (a)               Attention weights
$Q_1$                  $\mathcal{A}_1 = \{A_1^1, A_1^2, \ldots\}$     $[\pi_1(A_1^1), \pi_1(A_1^2), \ldots]$     $w_1$
...                    ...                                            ...                                        ...
$Q_m$                  $\mathcal{A}_m = \{A_m^1, A_m^2, \ldots\}$     $[\pi_m(A_m^1), \pi_m(A_m^2), \ldots]$     $w_m$

Possible prizes
N/A                    $X = \{x, x', x'', \ldots\}$                   $[\pi_X(x), \pi_X(x'), \ldots]$            N/A

(a) Answers to different questions are not generally independent. Typically, the joint probability measure $\pi \neq \pi_1 \cdots \pi_m \cdot \pi_X$.


The first term, the material component of the utility function, is the expected valuation of prizes. The second term, the belief-based component of the utility function, adds the value of beliefs weighted by attention, where the value of a belief consists of the expected valence of the answers minus its entropy.

We now describe properties (some quite strong and almost certainly not always satisfied) that characterize (and necessarily imply) this utility function (see Theorem 1 below).

Properties

The utility function in Equation (1) satisfies the following seven properties.15

Separability between questions. Additive separability of utility between questions means that a person can place a value on a belief about a given question without needing to consider beliefs about other questions.

P1. A utility function satisfies additive separability between questions if $u(\pi, w) = u_X(\pi_X) + \sum_{i=1}^{m} u_i(\pi_i, w_i)$.16

Property (P1) may seem quite strong because we can imagine representations of sensible preferences that are not additively separable. For example, the value of a belief about whether a car on sale has a warranty intuitively could depend on the cost of the car in the first place (not to mention one's desire for a new car, one's estimation of the costs of car repairs, etc.). However, we may be able to represent these preferences as separable after all. We might suppose that these beliefs do have separable values but that they correlate with some other highly valued belief, perhaps about how good a deal one can get on the car. That is, while intuition tells us that the value of beliefs about different questions (e.g., "Does she like me?" and "Does she have a boyfriend?") is often interdependent, this dependence may be mediated by the existence of additional questions (e.g., "Will she go out with me?"), beliefs about which may be mutually dependent, but independently valued.

Expected valuation of prizes. Our theoretical framework assumes independence across cognitive states, giving us expected utility over cognitive states. Apart from the utility derived from beliefs, expected utility might extend to prizes as well.

P2. When the material component of the utility function is separable, it satisfies expected valuation of prizes if $u_X(\pi_X) = \sum_{x \in X} \pi_X(x)\, v_X(x)$.

Property (P2) implies that when a lottery is independent of beliefs about the world, its utility is simply the expected value of the prizes. However, to the extent that a gamble depends in part on some beliefs, the utility derived from these beliefs will also be relevant.

Monotonicity with respect to attention weights. Preferences satisfy the property of monotonicity with respect to attention weights if whenever increasing attention on a given belief enhances (or diminishes) utility, it will do so regardless of the absolute level of attention weight. At a psychological level, the interpretation of this monotonicity property is that when a belief is positive, more attention to it is always better, and when a belief is negative, more attention is always worse. In fact, the property provides a natural definition of whether a belief is positive or negative.

P3. Preferences satisfy monotonicity with respect to attention weights if for any $w$, $w'$, and $w'' \in \mathbb{R}_+^m$ such that $w_i = w_i' = w_i''$ for all $i \neq j$ and $w_j < w_j' < w_j''$, we have $u(\pi, w') \geq u(\pi, w)$ if and only if $u(\pi, w'') \geq u(\pi, w')$, with equality on one side implying equality on the other, for all $\pi \in \Delta(\Omega)$.

In the case that these inequalities hold strictly, we say that $\pi_j$, the belief about question $Q_j$, is a positive belief. If they hold as equalities, we say that $\pi_j$ is a neutral belief. And, in the case that the inequalities hold in the reverse direction, then $\pi_j$ is a negative belief.

Linearity with respect to attention weights. The next property describes how changing the attention on a belief impacts utility. For any given attention weight, the marginal utility of a change in belief depends on what those beliefs are and how much the individual values them.

15 While our utility function violates Savage's (1954) sure-thing principle, it does satisfy a weaker "one-sided" version of it presented in the Appendix.

16 A subset of questions $\tilde{Q} \subset Q$ can also be separable, in which case $u(\pi, w) = \sum_{i: Q_i \in \tilde{Q}} u_i(\pi_i, w_i) + u_{-\tilde{Q}}(\pi_{-\tilde{Q}}, w_{-\tilde{Q}})$, where $\pi_{-\tilde{Q}}$ is the marginal distribution over answers to the remaining questions and prizes and the vector $w_{-\tilde{Q}}$ contains the remaining components of $w$.


The property of linearity with respect to attention weights means that, in general, the marginal utility associated with such a change in belief (assuming the utility of this belief is separable) is proportional to the attention on that belief.

P4. When the utility of question $Q_i$ is separable, linearity with respect to attention weights is satisfied if for any $w_i$ and $w_i' \in \mathbb{R}_+$ and $\pi_i'$ and $\pi_i'' \in \Delta(\mathcal{A}_i)$, we have

$$u_i(\pi_i', w_i) - u_i(\pi_i'', w_i) = \frac{w_i}{w_i'} \left[ u_i(\pi_i', w_i') - u_i(\pi_i'', w_i') \right].$$

Property (P4) allows us, in the case of separable utility, to assign an intrinsic value $v$ to beliefs such that $u_i(\pi_i', w_i) - u_i(\pi_i'', w_i) = w_i \left[ v_i(\pi_i') - v_i(\pi_i'') \right]$. We abuse notation by referring to the valence of answer $A_i$ as $v_i(A_i)$, with it being defined here as the intrinsic value $v_i$ of belief with certainty in $A_i$. We have taken the liberty of specifying a precise relationship between attention weights and utility as a convenient simplification; it should be noncontroversial because we do not claim to have a particular cardinal measure of attention weight.

Label independence. Intuitively, the value of a belief should depend on how an individual values the possible answers and on how probable each of these answers is, and these factors (controlling for attention weight of course) should be sufficient to determine the utility of any (uncertain) belief. In particular, the value of a belief should not depend on how the question or the answers are labeled.

P5. Label independence is satisfied if, when the utility of questions $Q_i$ and $Q_j$ are separable, a bijection $\sigma : \mathcal{A}_i \to \mathcal{A}_j$, such that $v_i(A_i) = v_j(\sigma(A_i))$ and $\pi_i(A_i) = \pi_j(\sigma(A_i))$, implies that $v_i(\pi_i) = v_j(\pi_j)$.

Reduction of compound questions. The intuition behind the assumption of label independence also seems to suggest that the utility of a belief perhaps should not depend on the way the question giving rise to the belief is asked, that is, on whether a complicated question is broken up into pieces. We should recall, however, that the activation of a particular question directs attention to the belief about this question. Thus, in general, the utility of a belief will not be invariant to the question being asked. Still, it may be the case that utility remains invariant when a compound question is broken into parts as long as the attention on each part is weighted properly. If utility remains invariant upon setting attention weights on conditional questions to be proportional to the probabilities of the hypothetical conditions, then we say that the utility function satisfies the reduction of compound questions property. Figure 2 demonstrates the reduction of a compound question with appropriate attention weights on each subquestion.

P6. A separable utility function satisfies the reduction of compound questions property if whenever there is a partition $\mathcal{P}$ of the answers $\mathcal{A}_i$ (to question $Q_i$) into $\mathcal{P} = \{\mathcal{A}_{i_1}, \ldots, \mathcal{A}_{i_n}\}$ and a bijection $\sigma : \mathcal{P} \to \mathcal{A}_j$ into the answers to some question $Q_j$ such that for any $h \in [1, n]$ and any $A_i \in \mathcal{A}_{i_h}$,

$$v_i(A_i) = v_j(\sigma(\mathcal{A}_{i_h})) + v_{i_h}(A_i)$$

and

$$\pi_i(A_i) = \pi_j(\sigma(\mathcal{A}_{i_h})) \cdot \pi_{i_h}(A_i),$$

it follows that

$$u_i(\pi_i, w) = u_j(\pi_j, w) + \sum_{h=1}^{n} u_{i_h}\!\left(\pi_{i_h},\; \pi_j(\sigma(\mathcal{A}_{i_h})) \cdot w\right).$$

Figure 2. Decomposition of a compound question. (The figure shows a question $Q_i$ with attention weight $w$ and answers A1–A4 with probabilities 40%, 10%, 20%, and 30%, decomposed into a coarser question $Q_j$ with attention weight $w$ and answer probabilities 60% and 40%, together with conditional subquestions $Q_{i_1}$ and $Q_{i_2}$ with attention weights $.6w$ and $.4w$ and conditional answer probabilities 67%/33% and 75%/25%.)
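A small numerical check may help here. With all valences set to zero, Equation (1) values a belief at minus its entropy, and property P6 then amounts to the grouping property of entropy, with each subquestion's attention weight scaled by the probability of its branch. The probabilities below match those listed for Figure 2, but the particular partition of the answers is my own illustrative choice and need not match the figure's.

```python
# Illustrative check of P6 with zero valences: the attention-weighted entropy
# cost of the compound question equals the cost of the coarse question plus
# the costs of the conditional subquestions weighted by pi_j(branch) * w.
import math

def entropy(ps):
    return -sum(p * math.log(p) for p in ps if p > 0)

w = 1.0
full = [0.4, 0.1, 0.2, 0.3]                              # question Q_i: answers A1..A4
branches = {"first": [0.4, 0.2], "second": [0.1, 0.3]}   # an illustrative partition

coarse = [sum(ps) for ps in branches.values()]                    # Q_j: [0.6, 0.4]
conditionals = [[p / sum(ps) for p in ps] for ps in branches.values()]

lhs = -w * entropy(full)
rhs = -w * entropy(coarse) + sum(-(pj * w) * entropy(cond)
                                 for pj, cond in zip(coarse, conditionals))
print(round(lhs, 10) == round(rhs, 10))  # True: the decomposition matches
```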



Ruling out unlikely answers increases clarity. A final property operationalizes the preference for clarity. Controlling for the valence of one's beliefs, by considering situations in which one is indifferent between different possible answers to a question, there should be a universal aversion to being uncertain about the answer to an activated question. As a building block toward quantifying the uncertainty in a belief, we assert here that when an unlikely (and equally attractive) answer is ruled out, uncertainty decreases (and thus the utility of that uncertain belief increases).

P7. Ruling out unlikely answers increases clarity if, when the utility of question $Q_i$ is separable and all answers to this question have the same valence, that is, $v_i(A_i) = v_i(A_i')$ for all $A_i$ and $A_i' \in \mathcal{A}_i$, then for any $\pi$ where without loss of generality $\pi_i(A_i^h)$ is weakly decreasing in $h$ and for any $\pi'$ such that $\pi_i'(A_i^h) \geq \pi_i(A_i^h)$ for all $h \in [1, \bar{h}]$ (with at least one inequality strict) and $\pi_i'(A_i^h) = 0$ for all $h > \bar{h}$, for some $\bar{h}$, we consequently have $v_i(\pi_i') > v_i(\pi_i)$.

Characterization of Our Utility Function

Theorem 1. If the properties P1–P7 are satisfied, then the utility function takes the form of Equation (1).

Proof. Linearity with respect to attention weights allows us to pull an attention weight on question $Q_i$ outside of the utility: $u_i(\pi_i, w_i) = w_i v_i(\pi_i)$ (using a neutral belief to calibrate $v_i$). A partition of $\mathcal{A}_i$ into singletons $\mathcal{A}_{i_h}$ such that $v_i(A_i) = v_{i_h}(A_i)$ allows us, by reduction of the compound question, to determine that the function $F(\pi_i) = v_i(\pi_i) - \sum_{A_i \in \mathcal{A}_i} \pi_i(A_i) v_i(A_i)$ does not depend on $v_i(A_i)$ for any $A_i \in \mathcal{A}_i$. Moreover, $-F(\cdot)$ satisfies Shannon's (1948) axioms (continuity, increasing in the number of equiprobable answers, and reduction of compound questions) characterizing the entropy function $H(\pi_i) = -\sum_{A_i \in \mathcal{A}_i} \pi_i(A_i) \log \pi_i(A_i)$.

Choices of Actions

People cannot just choose to put themselves in their most preferred cognitive state in $\mathcal{C}$. The decision variables in our model are actions, such as whether or not to acquire information or make a wager, which will influence beliefs and/or attention. Actions, in general, are operators on cognitive states that map to new cognitive states or to distributions over cognitive states.

Choosing Between Sequences of Actions

An initial choice is often made in the context of subsequent actions that will, or may, become available, as well as subsequent events beyond one's control that may also operate on one's cognitive state. For example, a college professor deciding whether or not to review her teaching ratings may subsequently have the option to enroll in a teacher improvement class. Similarly, a gambler deciding whether to wager on a football game may subsequently have the option to watch the game or just check the final score.17

A sequence of actions (and events) can be analyzed with the convention that an operator passes through a distribution over cognitive states.18 Thus, we represent a sequence $s$ of actions and events acting on a cognitive state $(\pi, w)$ as $s \cdot (\pi, w) \in \Delta(\mathcal{C})$. Choice from a strategy set $S$, that is, from a set of sequences of (possibly state-contingent) actions where early actions may reveal information that will inform later actions, is represented as utility maximization.

17 A person may be naive, that is, may not recognize that some subsequent action or event may take place, in which case the choice should be modeled without it.

18 Analogous to the standard assumption in decision under risk, the model assumes reduction of compound distributions over cognitive states. This does not imply the traditional reduction of compound lotteries.


A sequence $s^* \in S$ may be chosen by a decision maker in the cognitive state $(\pi, w)$ if $s^* \in \arg\max_{s \in S} u(s \cdot (\pi, w))$.19

Expected utility over cognitive states implies dynamic consistency for sequences of choices. For example, if the college professor intends to review her teaching ratings and enroll in the teacher improvement class if and only if less than half of the class liked her teaching, then after discovering, say, that everybody liked her teaching, she would not want to enroll in the class (or, conversely, if in this scenario she would want to enroll in the class, then she would have intended to do so as part of her initial plan).20

We can make use of dynamic consistency to break a complicated dynamic decision problem into pieces that are easier to analyze. We may begin by considering an initial choice, counting on subsequent choices to maximize utility in each contingency. We find it useful to define a utility function over cognitive states, contingent on the set of strategies that may subsequently be chosen:

$$U(\pi, w \mid S) = \max_{s \in S} u(s \cdot (\pi, w)). \quad (2)$$

In the example of the professor's teaching ratings, the initial choice is whether to review the teaching ratings, and the set of available subsequent actions is to enroll in the teacher improvement class or not to enroll in the class. Looking at the ratings resolves a lottery over cognitive states, each of which confers utility that is conditional on making the optimal choice of one of these subsequent actions. This dynamic decision problem is harder to grasp as one large decision. We have to define contingent plans of action: Enroll in the teacher improvement class if and only if the number of students who liked the teacher is in some range. The full strategy set then includes all such contingent plans of action (along with actually reviewing the teaching ratings) as well as not reviewing the teaching ratings and independently enrolling or not enrolling.21
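A schematic sketch of how Equation (2) operates in this example: each strategy induces a lottery over cognitive states, the best follow-up action is chosen within each contingency, and the strategy with the highest expected utility is selected. The probabilities and the terminal utilities below are hypothetical placeholders, not outputs of the model.

```python
# Schematic sketch of Equation (2): U(pi, w | S) is the maximum, over strategies,
# of the expected utility of the cognitive states the strategy leads to.
# Terminal utilities and probabilities are hypothetical placeholders.

# If the professor looks at her ratings, she lands in one of two cognitive states;
# in each, she then takes the better of "enroll" / "don't enroll".
outcomes_if_look = {
    "most students liked it": (0.7, {"enroll": 4.0, "dont_enroll": 6.0}),
    "few students liked it":  (0.3, {"enroll": 2.0, "dont_enroll": -1.0}),
}
value_look = sum(p * max(actions.values()) for p, actions in outcomes_if_look.values())

# If she does not look, the uncertainty (and its entropy cost) persists.
value_dont_look = {"enroll": 1.0, "dont_enroll": 3.0}

strategies = {"look, then decide": value_look,
              "don't look": max(value_dont_look.values())}
best = max(strategies, key=strategies.get)
print(best, strategies[best])  # "look, then decide" 4.8  (= 0.7*6 + 0.3*2)
```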

Examples of Actions

We distinguish two kinds of actions: informational actions answer a question; instrumental actions affect the chances of receiving various prizes (outcomes).22 For example, wagering on the color of a ball drawn from an urn is an instrumental action. Examining the contents of the urn is an informational action. Note that some actions will have both instrumental and informational effects. Examples include paying a fee for a property value appraisal or hiring a private eye.

A purely instrumental action acting on the prior cognitive state determines a particular new cognitive state. Typically, it preserves the prior judgment about the probability of each answer set and then specifies a new distribution over prizes conditional on each possible answer set. An instrumental action may also affect the importance of various questions (as formalized in the next section) and thereby influence the attention weights. For example, the decision to participate in a karaoke session will likely raise the attention weight on the question "Am I a good singer?"

Acquiring information also changes one's cognitive state. Ex ante, as one does not know which answer will be discovered, the prospect of acquiring information offers the decision maker a lottery over cognitive states. Upon learning answer $A_i$ to question $Q_i$, one's probability measure in $\Delta(\Omega)$ changes from $\pi^0$ to $\pi^{A_i} = \pi^0(\cdot \mid A_i)$. We assume Bayesian updating here, which means that, ex ante, before one knows what one will discover, an informational action determines a distribution over subjective judgments such that the expectation of this distribution equals the prior judgment.

19 If a sequence of actions produces a temporal profile of cognitive states, we can of course discount the utility of cognitive states reached in the future relative to those occurring in the present, but for simplicity we present our formal model with no time delay for subsequent actions.

20 We assume expected utility over cognitive states, not over material outcomes. Thus, a person may, for example, plan to accept a compound lottery and then observe the final outcome, whereas if there were an opportunity to observe the outcome of the first stage and then reconsider the compound lottery, he may have rejected it (perhaps regardless of the outcome of the first stage). Observing the first stage is a consequential action in our model, and sequences of actions are absolutely not assumed to be commutative. Recognizing cognitive states as our objects of preference, accepting the compound lottery is not equivalent to accepting and observing the successive stages, so we have no violation of dynamic consistency here. See Machina (1989) for an illuminating discussion of consequentialism and dynamic consistency.

21 If there are $n$ students who have filled out teaching ratings, and thus $n + 1$ possible answers to how many students like the teacher, then the full strategy set includes $2^{n+1} + 2$ possible sequences of actions!

22 A third kind of action, which we might call meditative, only focuses or relaxes attention. For example, dwelling on the contents of an urn (or removing it from sight) would be a meditative action.


That is, by the law of total probability, $\sum_{A_i \in \mathcal{A}_i} \pi_i^0(A_i)\, \pi^{A_i} = \pi^0$. An informational action would decrease expected entropy because conditioning reduces entropy (see, e.g., Cover & Thomas, 1991, p. 27). New information generates surprise (as formalized in the next section), which changes the attention weights too. Given the prior attention weight vector $w^0$, we let $w^{A_i}$ denote the new attention weight vector immediately after learning $A_i$, resulting from surprise at this discovery. We make special note of a particular event that may follow after an informational action: adapting, that is, relaxing attention after getting used to new beliefs. We discuss how adaptation leads to decreased attention in the next section.
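The sketch below illustrates these two facts numerically: conditioning a joint belief on the answer to one question yields posteriors whose probability-weighted average recovers the prior (law of total probability), and the expected posterior entropy of a related question is no larger than its prior entropy. The joint distribution is a made-up example, not data from the article.

```python
# Sketch: Bayesian updating after learning the answer to one question.
# The joint belief over (Q1, Q2) below is a made-up example.
import math

def entropy(ps):
    return -sum(p * math.log(p) for p in ps if p > 0)

# Prior pi0 over answers to Q1 ("did they applaud?") and Q2 ("am I a good teacher?")
joint = {("yes", "good"): 0.45, ("yes", "bad"): 0.15,
         ("no",  "good"): 0.10, ("no",  "bad"): 0.30}

p_q1 = {a1: sum(p for (b1, _), p in joint.items() if b1 == a1) for a1 in ("yes", "no")}
prior_q2 = {a2: sum(p for (_, b2), p in joint.items() if b2 == a2) for a2 in ("good", "bad")}

# Posterior belief about Q2 after learning each possible answer to Q1.
post_q2 = {a1: {a2: joint[(a1, a2)] / p_q1[a1] for a2 in ("good", "bad")}
           for a1 in ("yes", "no")}

# Law of total probability: the probability-weighted posteriors recover the prior.
mixture = {a2: sum(p_q1[a1] * post_q2[a1][a2] for a1 in ("yes", "no"))
           for a2 in ("good", "bad")}
print(mixture, prior_q2)  # both {'good': 0.55, 'bad': 0.45}

# Conditioning reduces entropy in expectation.
expected_posterior_H = sum(p_q1[a1] * entropy(post_q2[a1].values()) for a1 in ("yes", "no"))
print(expected_posterior_H <= entropy(prior_q2.values()))  # True
```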

Determinants of Attention

To apply our framework in settings in which attention cannot be directly observed, we need to spell out how an action influences the attention paid to each activated question.23 In this section we propose assumptions about the determinants of attention that describe how much people will think about various information gaps before and after taking some action.

Beginning with William James (1890), psychologists have drawn a distinction between automatic and voluntary control of attention. In our framework, in some situations, a person can choose to focus attention on (or ignore) a question, as when a reader of a mystery novel pauses to ponder who did it. In many familiar situations, however, attention is focused involuntarily, as when an anxious assistant professor wonders whether he will get tenure.

We formalize the concepts of importance, salience, and surprise, all of which, we assume, contribute to attention weight when attention is automatic. The importance $\gamma_i$ of a question $Q_i$ reflects the degree to which one's utility depends on the answer. Salience, distinctly, reflects the degree to which a particular context highlights the question. We denote the salience of question $Q_i$ as $\sigma_i \in \mathbb{R}_+$. Finally, surprise is a factor that reflects the dependence of attention on the dynamics of information revelation, and specifically on the degree to which receiving new information changes one's beliefs. We denote the surprise associated with a revised belief about question $Q_i$ as $\delta_i$.

Assumption A1. We assume that the attention $w_i$ on an activated question $Q_i$ is strictly increasing in this question's importance $\gamma_i$, its salience $\sigma_i$, and the surprise $\delta_i$ associated with it.

Importance

The importance of a question reflects how much the answer matters to the person. That is, a question is important to the extent that one's utility depends on the answer. To be precise, we refer to the spread of the utilities associated with the different answers to a question.

Given a particular prior probability measure $\pi^0$ and a set $S$ of sequences of actions available to the decision maker, the importance $\gamma_i$ of question $Q_i$ is a function (only) of the likelihood of possible answers and the utilities associated with these answers, captured as

$$\gamma_i = \Gamma\!\left( \left\{ \left( \pi_i^0(A_i),\; U(\pi^{A_i}, w^{A_i} \mid S) \right) \right\}_{A_i \in \mathrm{supp}(\pi_i^0)} \right)$$

where $U$ is the utility function defined in Equation (2). We do not specify the precise form of this function $\Gamma$, but instead impose the following conditions: We require that $\Gamma$ (i.e., importance) increases with mean-preserving spreads of the (subjective) distribution of utilities that would result from different answers to the question, and that it is invariant with respect to constant shifts of utility.

Thus, raising the stakes increases importance, as does increasing the range of potential outcomes (see Goldstein & Beattie, 1991). Consider, for example, the question "How much does (s)he like me?" (referring to a potential romantic partner). The importance of this question would increase either if the subject cared more about the partner's feelings, for example, if the subject desired a relationship, or if the range of possible answers to the question expanded, for example, if signals of strong interest alternated with signals raising doubt (see Givens, 1978, p. 349).

23 We think of attention as observable (perhaps imperfectly) through some combination of eye tracking, brain scans, and self-reports, but choice experiments do not typically include such observations, which would be especially difficult to collect in the field.


On the other hand, if an answer is known with certainty, then by our definition nothing is at stake, so the underlying question is no longer important.
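The article deliberately leaves the form of $\Gamma$ open. As one hypothetical instantiation that satisfies both stated conditions (it rises with mean-preserving spreads and ignores constant shifts of utility), the sketch below uses the probability-weighted variance of the answer-contingent utilities; the probabilities and utilities are made-up numbers.

```python
# Hypothetical instantiation of importance (the article does not fix the form):
# the probability-weighted variance of the utilities attached to possible answers.
# Variance rises with mean-preserving spreads and ignores constant utility shifts.

def importance(answer_probs, answer_utilities):
    """answer_probs / answer_utilities: dicts keyed by the possible answers."""
    mean = sum(answer_probs[a] * answer_utilities[a] for a in answer_probs)
    return sum(answer_probs[a] * (answer_utilities[a] - mean) ** 2 for a in answer_probs)

# "How much does (s)he like me?" with made-up utilities for each answer.
low_stakes  = importance({"a lot": 0.5, "not much": 0.5}, {"a lot": 1.0, "not much": -1.0})
high_stakes = importance({"a lot": 0.5, "not much": 0.5}, {"a lot": 5.0, "not much": -5.0})
resolved    = importance({"a lot": 1.0, "not much": 0.0}, {"a lot": 5.0, "not much": -5.0})
print(low_stakes, high_stakes, resolved)  # 1.0 25.0 0.0: higher stakes raise importance;
                                          # a known answer leaves nothing at stake
```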

Admittedly, acquiring information should affect the importance of the questions being addressed, but this reconsideration of importance is not immediate. We model reconsideration of importance as part of adaptation to new belief rather than as an anticipated consequence of information acquisition.

Assumption A2. We assume that the importance of a question is updated if and when the decision maker adapts to new beliefs (rather than when the decision maker first updates these beliefs).

Our definition of importance is, by design, circular. Importance depends on utility, which in turn depends on the attention weight, but importance also contributes to attention weight. There is psychological realism to this circularity, which captures the dynamic processes giving rise to obsession: attention to a question raises its importance, and the elevated importance gives rise to intensified attention. If we assume that these processes unfold instantaneously, then importance (and, in turn, attention weight and utility) will be a fixed point of this composition of functions. In practice, we can make simple comparisons of importance without going to the trouble of specifying precise values.

Salience

The salience of a question depends on a variety of exogenous contextual factors. For example, a question could be salient if it has recently come up in conversation (i.e., it has been primed) or if other aspects of the environment remind an individual about it. Alternatively, a question could be more salient to an individual if the answer is, in principle, knowable, and even more so if other people around her know the answer but she does not. Comparison and contrast generally increase a question's salience (Duncan & Humphreys, 1989; Itti & Koch, 2001). Distractions, on the other hand, can decrease a question's salience (see Lavie, 2005).

Often a question may be salient despite being unimportant. For example, if someone asked an innocuous question such as "Is that plant a fern?" one might become curious about the answer despite not caring about whether it is "yes" or "no." It seems natural to think that some degree of salience is a necessary, and sufficient, condition for attention, while some degree of importance is not. One can, for example, often be curious about the answer to a question without preferring any particular answer over any other.

Assumption A3. We assume that a question $Q_i$ is activated if and only if it has positive salience $\sigma_i > 0$.

Further, it seems natural to assume that importance and salience are positive complements in producing attention; an increase in importance should produce a greater increase in attention for a more salient question, and vice versa.

Assumption A4. We assume that attention weight $w_i$ has strictly increasing differences (i.e., a positive cross-partial derivative, if we assume differentiability) in $(\gamma_i, \sigma_i)$.
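The article does not commit to a functional form for attention. Purely as an illustration of what A1, A3, and A4 jointly require, the sketch below uses a simple hypothetical specification: zero attention for latent questions, and otherwise attention increasing in importance, salience, and surprise, with importance and salience entering multiplicatively so that they have increasing differences.

```python
# Hypothetical attention function, not from the article: consistent with A1
# (increasing in importance, salience, surprise for activated questions),
# A3 (activated iff salience > 0), and A4 (importance and salience are
# complements via the multiplicative term).

def attention_weight(importance, salience, surprise):
    if salience <= 0:          # latent question: no attention (A3)
        return 0.0
    return importance * salience + salience + surprise

print(attention_weight(importance=2.0, salience=0.5, surprise=0.0))  # 1.5
print(attention_weight(importance=2.0, salience=0.5, surprise=1.2))  # 2.7
print(attention_weight(importance=2.0, salience=0.0, surprise=1.2))  # 0.0: latent
```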

Surprise

The third factor that we posit influences attention is the surprise one experiences upon acquiring new information. Surprise reflects the degree to which new information changes existing beliefs. We adopt Itti and Baldi's (2009) specification of surprise: When the answer to a particular question $Q_j$ is learned, thereby contributing information about the answers to associated questions and causing their probabilities to be updated, the degree of surprise associated with a new belief about question $Q_i$ can be defined as the Kullback-Leibler divergence of $\pi_i^{A_j}$ against the prior $\pi_i^0$,

$$\delta_i\!\left(\pi_i^{A_j} \,\middle\|\, \pi_i^0\right) = \sum_{A_i \in \mathcal{A}_i} \pi_i^{A_j}(A_i) \log \frac{\pi_i^{A_j}(A_i)}{\pi_i^0(A_i)}.$$

Surprise is positive with any new information, and is greatest when one learns the most unexpected answer with certainty. Itti and Baldi (2009; Baldi & Itti, 2010) show that surprise, specified this way, predicts the level of attention paid to information (see also Meyer et al., 1991). Mellers, Schwartz, Ho, and Ritov (1997) find that more surprising good (or bad) news is more elating (or disappointing) than less surprising news.

We return again to the example question "Do other people like me?" to illustrate surprise.


If, having believed that she was generally well-liked, an individual were to discover that the comments about her were actually unfavorable, the discovery, necessitating a radical change in her belief, would be quite surprising (and would increase her attention to the question).
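A minimal sketch of the surprise measure, computing the Kullback-Leibler divergence of a revised belief from the prior; the beliefs are made-up numbers for the "Do other people like me?" example.

```python
# Sketch: surprise as the Kullback-Leibler divergence of the revised belief
# about a question from the prior belief. Numbers are illustrative only.
import math

def surprise(posterior, prior):
    """posterior, prior: dicts mapping answers to probabilities."""
    return sum(posterior[a] * math.log(posterior[a] / prior[a])
               for a in posterior if posterior[a] > 0)

prior = {"they like me": 0.8, "they don't": 0.2}
mild_news = {"they like me": 0.7, "they don't": 0.3}
radical_news = {"they like me": 0.05, "they don't": 0.95}

print(surprise(mild_news, prior))     # ~0.03: small belief revision, little surprise
print(surprise(radical_news, prior))  # ~1.34: unfavorable discovery, large surprise
```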

The feeling of surprise is not permanent.

Assumption A5. We assume that if and when the decision maker adapts to new beliefs, they are no longer surprising.

The Belief Resolution Effect

The impact of new information on attention isgreatest when uncertainty about a question isresolved completely. Surprise generates an im-mediate spike in attention. However, with ad-aptation, surprise fades and the underlyingquestion becomes unimportant because, withthe answer known, there is no longer a range ofpossible answers. Taken together, these factorscreate a pattern of change in attention weightfollowing the discovery of a definitive answer,what we call the belief resolution effect—whenan answer is learned with certainty, there is animmediate boost in attention weight on it, butwhen the decision maker adapts, the questionthen receives less attention. It is as if the brainrecognizes that because a question has beenanswered, it can move on to other questions thathave yet to be addressed. Janis (1958) recog-nized the belief resolution effect when he ob-served that surgical patients getting informationabout their upcoming procedures initially worrymore about the surgery but subsequently expe-rience less anxiety. The belief resolution effectallows for hedonic adaptation to good or badnews (Smith et al., 2009; Wilson et al., 2005;Wilson & Gilbert, 2008). Kahneman and Thaler(2006, p. 230) explain:

Withdrawal of attention is the main mechanism of adaptation to life changes such as becoming a paraplegic, becoming suddenly wealthy or getting married. Attention is normally associated with novelty. Thus, the newly paraplegic, lottery winner or newlywed is almost continuously aware of that state. But as the new state loses its novelty it ceases to be the exclusive focus of attention.

Often, however, people do not anticipate the belief resolution effect because they do not recognize that they will adapt (Wilson & Gilbert, 2005).

Information Acquisition and Avoidance

We can apply our utility function along with our assumptions about attention to decisions about information acquisition or avoidance. We develop our analysis in a companion article (Golman & Loewenstein, 2016) that uses our model to lay out three distinct motives for information acquisition or avoidance and derives the behavioral implications that result from the interplay of these motives. Here we summarize these implications and show with examples how our model allows for acquisition of noninstrumental information or avoidance of possibly useful information.

The desire for information, in our model, can be decomposed into three distinct motives: recognition of the instrumental value of the information, curiosity to fill the information gap(s), and motivated attention to think more or less about what could be discovered. The instrumental value of information arises from its impact on subsequent actions. As in the standard account of informational preferences, it is defined as the difference between the expected utility of subsequent actions conditional on having the information and the utility expected in the absence of the information. Curiosity arises from the expected reduction in uncertainty upon acquiring information. It is defined as the expected utility of revised beliefs, given prior levels of attention. The magnitude of curiosity depends on the attention devoted to each information gap that stands to be addressed. Motivated attention arises from the anticipated surprise (i.e., increase in attention to a question) upon acquiring information. It is defined as the expected utility from increased attention on whatever happens to be discovered, conditioning on all possible outcomes. Motivated attention is a motive to acquire information that is expected to be good and to avoid information that is expected to be bad.

Putting the three motives together, our model makes many predictions about when, and the degree to which, information will be sought or avoided. When anticipated answers are neutral or even potentially positive, information should be sought. The strength of the desire for this information should increase with the number of information gaps that can be addressed, the attention paid to them, and the valence of the possible outcomes. However, when anticipated outcomes are sufficiently negative, information would be avoided.24


This "ostrich effect" when anticipating bad outcomes is consistent with a growing body of empirical evidence (see, e.g., Eil & Rao, 2011; Falk & Zimmermann, 2016; Ganguly & Tasoff, 2016; Karlsson et al., 2009). For example, Ganguly and Tasoff (2016) find that willingness to pay to avoid testing for a herpes infection is greater for the more dreaded Type II infection than for Type I. According to our theory, information avoidance allows a person to escape from thinking about such negative beliefs. Falk and Zimmermann (2016) confirm this hypothesis, finding that information avoidance about impending electric shocks (clearly bad outcomes) is more prevalent when subjects can distract themselves by playing a game.

Information Acquisition: Curiosity

Judging by the demand for celebrity gossip blogs, many people are curious about the private lives of celebrities, be it whether a particular actress is pregnant again or whether a musician has relapsed into drug addiction or simply what clothing the celebrity was recently wearing in public. In many (yet not necessarily all) of these cases, an individual is curious despite having no use for the information and even not caring what the answer turns out to be. For example, a provocative headline may tempt a person to view a photo of Caitlyn Jenner's outfit to see whether she is wearing tight-fitting or loose-fitting jeans even if the person never intends to discuss the topic (or imitate the style) and is indifferent between these two possibilities. Our model is able to make sense of information acquisition due to curiosity because we explicitly model thoughts and feelings about the information gap.

Suppose “What is Caitlyn Jenner wearing?”is an activated (salient) question with possibleanswers tight jeans or loose jeans. The actualanswer should not affect other beliefs or mate-rial outcomes at all. Consider the choicewhether or not to find out the actual answer,with no other actions or choices subsequentlyavailable. We assume that knowing what she iswearing is a neutral belief, but the desire forclarity makes not knowing a negative belief.Learning the answer to the question reduces theentropy of this belief and thus raises utility.(Additional attention to her outfit, due to some

surprise upon finding out, does not affect utilityfrom neutral beliefs, and there is no instrumen-tal value in the absence of subsequent actions.)Thus, the model yields a strict preference forfinding out.
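The value of closing this gap can be illustrated with a minimal sketch that uses the entropy-based uncertainty penalty appearing in the HIV example below; the attention weight w0 and the 50/50 belief are hypothetical.

import math

def entropy(belief):
    # Shannon entropy (natural log) of a discrete belief over answers.
    return -sum(p * math.log(p) for p in belief.values() if p > 0)

w0 = 0.3                                   # hypothetical attention weight on the open question
belief = {"tight": 0.5, "loose": 0.5}      # neutral-valence answers, equally likely

utility_not_knowing = -w0 * entropy(belief)   # uncertainty penalty while the gap stays open
utility_knowing = 0.0                         # gap closed; either neutral answer carries no penalty
print(utility_knowing - utility_not_knowing)  # about 0.21: strict preference for finding out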

The concept of awareness embedded in the question–answer framework is crucial for the treatment of curiosity. It allows us to accommodate curiosity after reading the provocative headline along with a lack of curiosity before reading the headline. In both cases the person is uncertain about what Caitlyn Jenner is wearing, but only after reading the headline does the person become aware of this uncertainty (and hence become curious to resolve it).

Information Avoidance: The Ostrich Effect

Despite its instrumental value as well as the motive of curiosity, people sometimes avoid potentially useful information (see Golman, Hagmann, & Loewenstein, 2016, for a comprehensive review). For example, many at-risk individuals avoid getting tested for HIV despite the availability of life-saving treatment options (Thornton, 2008). Our model allows for information avoidance in this scenario (with bad to neutral outcomes anticipated) because getting a positive test result would lead to, and focus attention on, the negative belief that the individual has HIV.

Suppose “Do I have HIV?” is an activatedquestion Q with possible answers yes or no. Wemight describe the relevant material outcomesin terms of quality-adjusted life years. The in-dividual has prior belief � about the probabilityof having HIV and, conditional on this diagno-sis, of the quality and length of life that can beexpected. We denote the probability of havingHIV as p � �Q(yes). For simplicity of presen-tation, we consider the choice of whether or notto get tested as equivalent to a choice of whetheror not to find out for sure if you have HIV. Inreality, of course, diagnostic tests are not per-fectly accurate, and a more careful analysiswould consider the test results to be a distinctactivated question that correlates strongly with

24 The belief-resolution effect in our model also leads to a novel prediction: individuals with more foresight (and who discount the future less) should be less likely to exhibit the ostrich effect and more likely to acquire information despite anticipated bad news.


Also for simplicity, we ignore other questions people might realistically have, such as how having HIV would affect their sex lives. The set of actions subsequently available includes an instrumental action s_M (taking medicine) or doing nothing. People do not typically take medicine to manage a potential HIV infection unless they get tested and actually have HIV, so we assume that u(s_M · (π, w)) ≤ u(π, w) and u(s_M · (π^no, w^no)) ≤ u(π^no, w^no), but u(s_M · (π^yes, w^yes)) > u(π^yes, w^yes).

We use the expected utility representation over cognitive states to express the expected utility after getting tested as p · u(s_M · (π^yes, w^yes)) + (1 − p) · u(π^no, w^no). Let us now adopt our proposed utility function specification from Equation (1). Let us set the material value of not having HIV (i.e., no reduction in the expected quality or length of one's life) to be 0 and denote the material value of the expected quality-adjusted life years with and without medicine as u_X^M and u_X^H, respectively, where u_X^H < u_X^M < 0. Let us also assume that having HIV has highly negative belief valence v_H < 0 and that not having HIV has neutral belief valence 0. We can now write the expected utility after getting tested as p(u_X^M + w^yes v_H). In comparison, the utility without getting tested can be written as:

p \, u_X^H + w^0 \left( p \, v_H + p \log p + (1 - p) \log(1 - p) \right).

The individual would get tested if and only if

p \left( u_X^M - u_X^H \right) + p \left( w^{yes} - w^0 \right) v_H - w^0 \left( p \log p + (1 - p) \log(1 - p) \right) > 0.

The first and last terms, corresponding to instrumental value and curiosity respectively, contribute positively. However, the middle term is negative because w^yes > w^0 due to surprise (and because v_H < 0). If thinking that you have HIV is sufficiently scary or unpleasant (v_H ≪ 0), then this middle term dominates and causes the individual to avoid getting tested.
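A minimal numerical sketch of this condition, with purely hypothetical parameter values, shows how the three motives (instrumental value, motivated attention, and curiosity) trade off and how a sufficiently negative v_H produces the ostrich effect.

import math

def test_minus_avoid(p, u_xm, u_xh, v_h, w_yes, w0):
    # Expected-utility difference between getting tested and not getting tested:
    # p*(u_xm - u_xh) + p*(w_yes - w0)*v_h - w0*(p*log p + (1-p)*log(1-p)).
    instrumental = p * (u_xm - u_xh)                       # medicine helps only if infected
    motivated_attention = p * (w_yes - w0) * v_h           # surprise focuses attention on bad news
    curiosity = -w0 * (p * math.log(p) + (1 - p) * math.log(1 - p))  # relief from uncertainty
    return instrumental + motivated_attention + curiosity

# Hypothetical values: prior probability of infection p, material values in
# quality-adjusted life years with u_xh < u_xm < 0, and attention weights w_yes > w0.
p, u_xm, u_xh, w_yes, w0 = 0.1, -2.0, -5.0, 2.0, 0.5
print(test_minus_avoid(p, u_xm, u_xh, v_h=-1.0, w_yes=w_yes, w0=w0))   # positive: gets tested
print(test_minus_avoid(p, u_xm, u_xh, v_h=-10.0, w_yes=w_yes, w0=w0))  # negative: avoids the test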

Risk and Ambiguity Preference

The previous section discusses how the model we have developed allows us to describe a desire to acquire or to avoid information that encompasses motives (namely, curiosity and motivated attention) that have been largely disregarded in the economics literature. We can apply this same model to an entirely new domain: preferences about wagers that depend on missing information. Risk and ambiguity preference are complex topics, and we develop these applications in depth in a companion article (Golman, Loewenstein, & Gurney, 2016). That article derives behavioral implications of the model in the domain of risk and ambiguity and reports an experimental test confirming our main new prediction. Here, we summarize those results and then apply the model to the Ellsberg paradox and to gambling for pleasure to show how the model proves useful in application.

Decision making under risk and under ambiguity both expose decision makers to information gaps. Imagine a choice between a gamble and a sure thing. Deciding to play the gamble naturally focuses attention on the question: what will be the outcome of the gamble? Of course, deciding to not play the gamble does not stop an individual from paying some attention to the same question (or, if not choosing the gamble means it will not be played out, the related question: what would have been the outcome of the gamble?), but playing the gamble makes the question more important, and that brings about an increase in the attention weight on the question. If the individual is aware of this effect, which it seems natural to assume, then whether it encourages risk taking or risk aversion will depend on whether thinking about the information gap is pleasurable or aversive. When thinking about the missing information is pleasurable, then the individual will be motivated to increase attention on the question, which entails betting on it. Conversely, when thinking about the missing information is aversive, the individual will prefer to not bet on it. This may help to explain why, for example, people generally prefer to bet on their home teams rather than on other teams, and especially when the other team is playing against their home team (Babad & Katz, 1991). A preference for betting on uncertainties that one likes thinking about shares much overlap with, but is distinguishable from, a preference for betting on uncertainties that one has expertise about (Heath & Tversky, 1991).


Decision making involving uncertainties that are ambiguous is similar to the case with known risks, but with an additional wrinkle: With ambiguity, there is an additional information gap. In a choice between a sure thing and an ambiguous gamble, for example, a second relevant question (in addition to the one above about the outcome of the gamble) is: What is the probability of winning with the ambiguous gamble? (And there may be additional relevant questions that could inform someone about this probability, so even a Bayesian capable of making subjective probability judgments would be exposed to these information gaps.) Again, betting on the ambiguous gamble makes these questions more important and thus will increase the attention weight on them. So, desire to play the gamble will be increasing with the degree to which thinking about the gamble is pleasurable. To the extent that abstract uncertainties are not pleasurable to think about, this model provides a novel account of standard demonstrations of ambiguity aversion, including those first generated by Ellsberg (1961) in his seminal article on the topic.

Ellsberg Two-Urn Paradox

In the Ellsberg two-urn paradox, subjects are presented with two urns. Urn 1 contains 100 red and black balls, but in an unknown ratio. Urn 2 has exactly 50 red and 50 black balls. Subjects must choose an urn to draw from, and bet on the color that will be drawn: they will receive a $100 payoff if that color is drawn, and $0 if the other color is drawn. Subjects must decide which they would rather bet on: (a) a red draw from Urn 1, or a black draw from Urn 1; (b) a red draw from Urn 2, or a black draw from Urn 2; (c) a red draw from Urn 1, or a red draw from Urn 2; and (d) a black draw from Urn 1, or a black draw from Urn 2. Intuition suggests that people will be indifferent between red and black in choices a and b, by the principle of insufficient reason, but will prefer Urn 2 to Urn 1 in choices c and d because this urn is less ambiguous. The axioms of subjective expected utility theory (Anscombe & Aumann, 1963; Savage, 1954), however, imply that a preference for an urn in choice c should imply a preference for the other urn in choice d, because the change in colors simply reverses winning and losing. Indeed, experimental evidence confirms the suspected violation of subjective expected utility theory (Becker & Brownson, 1964; MacCrimmon & Larsson, 1979).

According to our model, the desire for clarity, along with the desire to pay less attention to negative beliefs, would cause an individual to bet on the known urn rather than the ambiguous urn in the Ellsberg paradox. When a decision maker is presented with Ellsberg's choices, the following questions, among others, are activated:

• What is the composition of red and black balls in Urn 1?

• What is the composition of red and black balls in Urn 2?

Only the answer to the second question is known with certainty. Despite having no information from which to form an objective probability over answers to the first question, we assume the decision maker can form a subjective probability, and specifically that the decision maker is likely to assume a uniform distribution over possible compositions of Urn 1. Moreover, savvy decision makers will recognize that payoffs result from a compound lottery with Stage 1 determining the composition of the urn and Stage 2 determining the ball drawn from an urn with that composition, and they will reduce the compound lottery to form a belief that the prize will be won with probability .5. Nevertheless, a bet on a draw from Urn 1 makes the anticipated payoff contingent on the uncertain answer to the first question, whereas a bet on a draw from Urn 2 makes the anticipated payoff contingent on the certain answer to the second question.

We rely on three assumptions we have made: (a) attention weight on a question increases with the importance of that question; (b) increasing attention weight on a question with an unfavorable belief decreases utility; and (c) uncertain beliefs over answers to which one is indifferent are less favorable than certainty about one such answer. In this case, we assume no preference about the composition of an urn, independent of the eventual payoff, but there is of course the aforementioned preference for certainty. We assume that knowing the composition (of Urn 2), whatever it may be, is a neutral belief. The belief that Urn 1 has a uniform distribution over possible compositions, because of this uncertainty, is a negative belief. Thus, the decision maker prefers not to increase the attention weight on (the composition of) Urn 1 and can avoid doing so by choosing to bet on a draw from Urn 2 rather than a draw from Urn 1.



Recognizing feelings about information gaps allows us to explain the preference for betting on the known urn rather than on the unknown urn, even when the subjective probability judgment about the odds of winning a prize is the same for both urns. Our account relies on aversion to missing information rather than a distinction between objective and subjective probabilities. Also, the prediction of ambiguity aversion depends crucially on the maintained assumption that beliefs about the composition of an urn do not have positive valence. If the source of the ambiguity were an uncertainty that people enjoyed thinking about, the model would predict ambiguity-seeking behavior rather than ambiguity aversion.
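The following minimal sketch puts hypothetical numbers on this comparison: both bets win with subjective probability .5, the known composition of Urn 2 is a neutral belief, the uniform belief about Urn 1's composition is a mildly negative belief, and betting on an urn raises the attention weight on that urn's composition question. None of these parameter values comes from the article.

# Hypothetical parameters for the two-urn choice.
prize, p_win = 100.0, 0.5          # $100 prize; reduced compound lottery wins with probability .5
v_urn1, v_urn2 = -0.2, 0.0         # uncertain composition: negative belief; known composition: neutral
w_base, w_bet = 1.0, 2.0           # betting on an urn makes its composition question more important

# The material term is identical for the two bets; only attention to the
# composition questions differs.
u_bet_urn1 = p_win * prize + w_bet * v_urn1 + w_base * v_urn2
u_bet_urn2 = p_win * prize + w_base * v_urn1 + w_bet * v_urn2
print(u_bet_urn2 > u_bet_urn1)     # True: ambiguity aversion from attention to a negative belief

Flipping the sign of v_urn1, so that the ambiguity is one the decision maker enjoys thinking about, reverses the comparison, which corresponds to the ambiguity-seeking case noted above.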

Gambling for Pleasure

People sometimes voluntarily expose themselves to risk and ambiguity. For example, some football fans like to wager on the outcome of a football game. Our model accommodates gambling for pleasure in this scenario because placing the wager makes the outcome of the football game more important, and if a person enjoys thinking about football, this will increase his utility.

When a football fan has an opportunity to wager on a game, the question of who will win the game is activated.25 Suppose, for simplicity, he believes each team has a 50% chance to win the game and is offered an even-money (1:1 odds) wager on the team of his choice. As a football fan, he enjoys thinking about who will win the game, so his (50/50) belief is a positive belief. Suppose the wager is modest enough that his valuation for the monetary outcomes is effectively linear. Then the wager would not affect the material component of his utility function because the expected monetary value of the wager is 0. Still, the wager can affect the belief-based component of his utility. If he has a rooting interest in the game, say if the Steelers winning has higher valence than the Browns winning, then the game is already somewhat important to the fan, but betting on the Steelers makes the outcome even more important (because the fan would gain even more if the Steelers win and lose even more if they lose), so he would prefer to bet on the Steelers. Additionally, if he has no rooting interest in the game, that is, equal valence for beliefs that either team will win, then the outcome is not important if he does not bet, but becomes important if he does. In this case, he would strictly prefer betting on either team to not betting at all.
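A minimal sketch of the no-rooting-interest case, using the same belief-based utility term as the sketches above: the fan's 50/50 belief carries a valence large enough to outweigh the uncertainty penalty, and the wager raises the attention weight on the outcome question. The numbers are hypothetical.

import math

v_outcome = 1.0                 # valence of thinking about who will win (exceeds log 2, so the belief is positive)
w_no_bet, w_bet = 0.4, 1.2      # the wager makes the outcome question more important

def belief_utility(w):
    # Belief-based term: attention weight times (expected valence minus the
    # entropy of the 50/50 belief). The even-money wager leaves the material
    # term unchanged because its expected monetary value is zero.
    return w * (v_outcome - math.log(2))

print(belief_utility(w_bet) > belief_utility(w_no_bet))  # True: betting for pleasure raises utility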

Conclusion

In much of the economics literature, preferences about information have been viewed as derivative of risk preferences. We take a complementary perspective, considering thoughts and feelings about information gaps as primitive and viewing preferences about risk and ambiguity along with preferences about information as derivative of these thoughts and feelings.

Thoughts and feelings about information gaps underlie the acquisition of noninstrumental information as well as the avoidance of potentially useful information. People may obtain noninstrumental information purely to satisfy curiosity. Loewenstein (1994) proposed an information-gap account of curiosity, which provides insight about its situational determinants. There are many things that people do not know and that do not bother them, but awareness of specific pieces of missing information can prompt an unreasonably strong desire to fill these gaps. Our theory embraces the information gap concept and provides a new formal definition of an information gap. Our utility function assumes that people want to fill information gaps ceteris paribus (i.e., they desire clarity or dislike uncertainty), and this is a universal motive for information acquisition rather than avoidance. We identify this motive as curiosity. We hypothesize that information avoidance derives from a second motive, a desire to avoid increasing attention on a negative anticipated outcome. More generally, we suggest that individuals have an inclination to acquire (or avoid) information whenever they anticipate that what they discover will be pleasurable (or painful). Our fundamental assumption is that obtaining information tends to increase attention to it (as in Gabaix et al., 2006; Tasoff & Madarász, 2009) to the extent that it is surprising. This leads to the implication that people will seek information about questions they like thinking about and will avoid information about questions they do not like thinking about.

25 Many other related questions would realistically be activated as well, but we ignore them for simplicity of presentation. A more careful analysis would proceed similarly.



Research has shown that missing information has a profound impact on decision making under risk and ambiguity. For example, Ritov and Baron (1990) studied hypothetical decisions concerning whether to vaccinate a child, when the vaccine reduces the risk of the child dying from a disease but might itself be harmful. When the uncertainty was caused by salient missing information about the risks from vaccination (a child had a high risk of being harmed by the vaccine or no risk at all, but it was impossible to find out which), subjects were more reluctant to vaccinate than in a situation in which all children faced a similar risk and there was no salient missing information. We suggest that ambiguity aversion in this scenario stems from the unpleasantness of thinking about this missing information (see also Frisch & Baron, 1988). This account of risk and ambiguity preference is conceptually different from, and has different testable implications from, the usual account of risk aversion involving loss aversion and the usual account of ambiguity aversion involving vague probabilities.26 Our fundamental assumption relating risk and ambiguity preference to feelings about information gaps is that exposure to risk or ambiguity attracts attention to the underlying information gaps because it makes them more important. This leads to the implication that people will be averse to risk and ambiguity when they do not like thinking about the uncertainty and will seek risk and ambiguity when they like thinking about the uncertainty.

Preferences about information acquisition and about exposure to risk or ambiguity are often inextricably linked, and our theory makes predictions about their relationship. We can identify questions for which desire for information to answer the question should be positively or negatively correlated with desire to bet on what the answer will be.27 Figure 3 displays our theoretical predictions as the valence of the answers to a question changes, in the special case that all answers are considered to have equal valence and equal probability and this belief is independent of other beliefs. When all answers have sufficiently negative valence, we would predict information avoidance and risk or ambiguity aversion; when all answers have sufficiently positive valence, we would predict information acquisition and risk or ambiguity seeking. When all answers have neutral valence, however, we would predict a negative correlation, that is, information acquisition and risk or ambiguity aversion. Our theory makes the strong prediction that people will not exhibit information avoidance and risk or ambiguity seeking for the same question.

Additionally, relating informational preferences and risk and ambiguity preferences to thoughts and feelings about specific uncertainties (i.e., information gaps) provides coherence to the mixed findings about preferences about the timing of resolution of uncertainty (for gambles that subjects are already exposed to). Different studies have found varying idiosyncratic choices between early and late resolution as well as between gradual or one-shot resolution (Ahlbrecht & Weber, 1997; Falk & Zimmermann, 2016; Kocher, Krawczyk, & van Winden, 2014; Zimmermann, 2014). Our perspective suggests that people would like gradual resolution for uncertainties they enjoy thinking about (to savor them) and one-shot resolution for unpleasant uncertainties (to get them over with). Indeed, Kocher, Krawczyk, and van Winden (2014), using lottery jackpot drawings, find fairly frequent choices for gradual resolution, whereas Falk and Zimmermann (2016), using electric shocks as outcomes, find that a majority of subjects opted for one-shot resolution. Also consistent with our basic premise, anticipated feelings about waiting for a gamble to be resolved have been found to correlate with willingness to take the gamble in the first place (Loewenstein et al., 2001; Lovallo & Kahneman, 2000; van Winden et al., 2011).28

26 For example, low-stakes risk aversion (Rabin, 2000) could be attributed to the discomfort of thinking about uncertainties.

27 We make no predictions about correlations between desire for information to answer one question and desire to bet on the answer to another, unrelated question.

Figure 3. Theoretical predictions of informational preference and risk and ambiguity preference arising from thoughts and feelings about an information gap for which all answers have equal valence and equal probability, independent of other beliefs. Negative valence: information avoidance with risk and ambiguity aversion; neutral valence: information acquisition with risk and ambiguity aversion; positive valence: information acquisition with risk and ambiguity seeking.



In this article we formally define information gaps and describe people's thoughts and feelings about them. We outline our theory's application to preferences for information acquisition or avoidance and preferences about risk and ambiguity, and we show with examples the usefulness of our theory for reconciling behavioral anomalies. We refer the reader to companion articles (Golman & Loewenstein, 2016; Golman, Loewenstein, & Gurney, 2016) that derive more general results and give each of these applications the attention they deserve.

28 According to our model, risk and ambiguity preference can be affected by plans to observe the resolution of uncertainty. If beliefs about good or bad outcomes of a lottery (or gamble) not only correlate with the material component of utility but also have valence, then surprise will accentuate the hedonic impact of observing the outcome. Because surprise is larger for unexpected outcomes, unlikely events will have disproportionate impacts on expected utility. Much like overweighting of small probabilities in the original formulation of prospect theory (Kahneman & Tversky, 1979), this would lead to risk (and ambiguity) seeking for positively skewed lotteries (and gambles) and event-splitting violations of dominance (see Birnbaum, 2004).

References

Abdellaoui, M., Baillon, A., Placido, L., & Wakker,P. (2011). The rich domain of uncertainty: Sourcefunctions and their experimental implementation.American Economic Review, 101, 695–723.

Abelson, R. (1986). Beliefs are like possessions.Journal for the Theory of Social Behavior, 16,223–250.

Ahlbrecht, M., & Weber, M. (1997). Preference forgradual resolution of uncertainty. Theory and De-cision, 43, 167–185.

Andries, M., & Haddad, V. (2015). Information aver-sion. Paper presented at the Bounded Rationalityin Choice Conference, New York, NY.

Anscombe, F., & Aumann, R. (1963). A definition ofsubjective probability. Annals of MathematicalStatistics, 34, 199–205.

Arntz, A., van Eck, M., & de Jong, P. (1992). Un-predictable sudden increases in intensity of painand acquired fear. Journal of Psychophysiology, 6,54–64.

Asch, D., Patton, J., & Hershey, J. (1990). Knowingfor the sake of knowing: The value of prognosticinformation. Medical Decision Making, 10, 47–57.

Babad, E., & Katz, Y. (1991). Wishful thinking:Against all odds. Journal of Applied Social Psy-chology, 21, 1921–1938.

Baldi, P., & Itti, L. (2010). Of bits and wows: ABayesian theory of surprise with applications toattention. Neural Networks, 23, 649–666.

Becker, S., & Brownson, F. (1964). What price am-biguity? Or the role of ambiguity in decision-making. Journal of Political Economy, 72, 62–73.

Benabou, R., & Tirole, J. (2002). Self-confidence andpersonal motivation. Quarterly Journal of Eco-nomics, 117, 871–915.

Berlyne, D. (1954). An experimental study of humancuriosity. British Journal of Psychology, 45, 256–265.

Berlyne, D. E. (1957). Uncertainty and conflict: Apoint of contact between information-theory andbehavior-theory concepts. The Psychological Re-view, 64, 329–339.

Berlyne, D. (1960). Conflict, arousal, and curiosity.New York, NY: McGraw-Hill.

Bernheim, B. D., & Rangel, A. (2009). Beyond re-vealed preference: Choice-theoretic foundationsfor behavioral welfare economics. The QuarterlyJournal of Economics, 124, 51–104.

Birnbaum, M. (2004). Tests of rank-dependent utilityand cumulative prospect theory in gambles repre-sented by natural frequencies: Effects of format,event framing, and branch splitting. Organiza-tional Behavior and Human Decision Processes,95, 40–65.

Brendl, C., & Higgins, E. (1996). Principles of judg-ing valence: What makes events positive or nega-tive? In M. P. Zanna (Ed.), Advances in experi-mental social psychology (pp. 95–160). New York,NY: Academic Press.

Bromberg-Martin, E., & Hikosaka, O. (2009). Mid-brain dopamine neurons signal preference for ad-vance information about upcoming rewards. Neu-ron, 63, 119–126.

Cabrales, A., Gossner, O., & Serrano, R. (2013).Entropy and the Value of Information for Inves-tors. American Economic Review, 103, 360–377.

Caplin, A., & Leahy, J. (2001). Psychological ex-pected utility theory and anticipatory feelings.Quarterly Journal of Economics, 116, 55–79.

Caplin, A., & Leahy, J. (2004). The supply of infor-mation by a concerned expert. Economic Journal,114, 487–505.

Chater, N. (2015). Can cognitive science create acognitive economics? Cognition, 135, 52–55.

Chater, N., & Loewenstein, G. (2015). The under-appreciated drive for sense-making. Journal ofEconomic Behavior and Organization, 126, 137–154.


Cover, T., & Thomas, J. (1991). Elements of infor-mation theory. New York, NY: Wiley.

Dillenberger, D. (2010). Preferences for one-shot res-olution of uncertainty and allais-type behavior.Econometrica, 78, 1973–2004.

Duncan, J., & Humphreys, G. W. (1989). Visualsearch and stimulus similarity. Psychological Re-view, 96, 433–458.

Dwyer, L., Shepperd, J., & Stock, M. (2015). Pre-dicting avoidance of skin damage feedback amongcollege students. Annals of Behavioral Medicine,49, 685–695.

Eil, D., & Rao, J. (2011). The good news-bad newseffect: Asymmetric processing of objective infor-mation about yourself. American Economic Jour-nal: Microeconomics, 3, 114–138.

Eliaz, K., & Spiegler, R. (2006). Can anticipatoryfeelings explain anomalous choices of informationsources? Games and Economic Behavior, 56, 87–104.

Ellsberg, D. (1961). Risk, ambiguity, and the savageaxioms. Quarterly Journal of Economics, 75, 643–699.

Falk, A., & Zimmermann, F. (2016). Beliefs andutility: Experimental evidence on preferences forinformation. (IZA Discussion Paper No. 10172).

Fantino, E., & Silberberg, A. (2010). Revisiting therole of bad news in maintaining human observingbehavior. Journal of the Experimental Analysis ofBehavior, 93, 157–170.

Fehr, E., & Rangel, A. (2011). Neuroeconomic foun-dations of economic choice—Recent advances.Journal of Economic Perspectives, 25, 3–30.

Fox, C., & Tversky, A. (1995). Ambiguity aversionand comparative ignorance. Quarterly Journal ofEconomics, 110, 585–603.

Frisch, D., & Baron, J. (1988). Ambiguity and ratio-nality. Journal of Behavioral Decision Making, 1,149–157.

Gabaix, X., Laibson, D., Moloche, G., & Weinberg,S. (2006). Costly information acquisition: Experi-mental analysis of a boundedly rational model.American Economic Review, 96, 1043–1068.

Ganguly, A., & Tasoff, J. (2016). Fantasy and dread:The demand for information and the consumptionutility of the future. Management Science.

Geanakoplos, J., Pearce, D., & Stacchetti, E. (1989).Psychological games and sequential rationality.Games and Economic Behavior, 1, 60–79.

Givens, D. (1978). The nonverbal basis of attraction:Flirtation, courtship, and seduction. Psychiatry:Interpersonal and Biological Processes, 41, 346–359.

Gneezy, U., & Potters, J. (1997). An experiment onrisk taking and evaluation periods. Quarterly Jour-nal of Economics, 112, 631–645.

Goldstein, W., & Beattie, J. (1991). Judgments ofrelative importance in decision making: The im-portance of interpretation and the interpretation ofimportance. In D. Brown & J. E. K. Smith (Eds.),Frontiers of mathematical psychology (pp. 110–137). New York, NY: Springer.

Golman, R., Hagmann, D., & Loewenstein, G.(2016). Information avoidance. Journal of Eco-nomic Literature. Manuscript in preparation.

Golman, R., & Loewenstein, G. (2016). The demandfor, and avoidance of, information. [Working Pa-per].

Golman, R., Loewenstein, G., & Gurney, N. (2016).Information gaps for risk and ambiguity. [WorkingPaper].

Grant, S., Kajii, A., & Polak, B. (1998). Intrinsicpreference for information. Journal of EconomicTheory, 83, 233–259.

Gul, F. (1991). A theory of disappointment aversion.Econometrica, 59, 667–686.

Hammond, P. (1988). Consequentialist foundationsfor expected utility. Theory and Decision, 25, 25–78.

Heath, C., & Tversky, A. (1991). Preference andbelief: Ambiguity and competence in choice underuncertainty. Journal of Risk and Uncertainty, 4,5–28.

Hirsh, J., & Inzlicht, M. (2008). The devil you know:Neuroticism predicts neural response to uncer-tainty. Psychological Science, 19, 962–967.

Hirshleifer, J., & Riley, J. (1979). The analytics ofuncertainty and information—An expository sur-vey. Journal of Economic Literature, 17, 1375–1421.

Hsee, C., & Ruan, B. (2016). The Pandora effect: Thepower and peril of curiosity. Psychological Sci-ence, 27, 659–666.

Hsu, M., Bhatt, M., Adolphs, R., Tranel, D., & Cam-erer, C. (2005). Neural systems responding to de-grees of uncertainty in human decision-making.Science, 310, 1680–1683.

Itti, L., & Baldi, P. (2009). Bayesian surprise attractshuman attention. Vision Research, 49, 1295–1306.

Itti, L., & Koch, C. (2001). Computational modellingof visual attention. Nature Reviews Neuroscience,2, 194–203.

James, W. (1890). Principles of psychology. NewYork, NY: Holt.

Janis, I. (1958). Psychological stress: Psychoanalyticand behavioral studies of surgical patients. NewYork, NY: Wiley.

Kahneman, D., & Thaler, R. (2006). Anomalies: Util-ity maximization and experienced utility. Journalof Economic Perspectives, 20, 221–234.

Kahneman, D., & Tversky, A. (1979). Prospect the-ory: An analysis of decision under risk. Economet-rica, 47, 263–291.


Kang, M. J., Hsu, M., Krajbich, I., Loewenstein, G.,McClure, S., Wang, J., & Camerer, C. (2009). Thewick in the candle of learning: Epistemic curiosityactivates reward circuitry and enhances memory.Psychological Science, 20, 963–973.

Kaplan, S. (1991). Beyond rationality: Clarity-baseddecision making. In T. Gärling & G. W. Evans(Eds.), Environment, cognition and action: An in-tegrative multidisciplinary approach (pp. 171–187). New York, NY: Oxford University Press.

Karlsson, N., Loewenstein, G., & Seppi, D. (2009).The ostrich effect: Selective attention to informa-tion. Journal of Risk and Uncertainty, 38, 95–115.

Karni, E., & Viero, M.-L. (2013). “Reverse Bayes-ianism:” A choice-based theory of growing aware-ness. American Economic Review, 103, 2790–2810.

Kimball, M. (2015). Cognitive economics. NBERWorking Papers 20834.

Klibanoff, P., Marinacci, M., & Mukerji, S. (2005). Asmooth model of decision making under ambigu-ity. Econometrica, 73, 1849–1892.

Kocher, M., Krawczyk, M., & van Winden, F.(2014). Let me dream on! Anticipatory emotionsand preference for timing in lotteries. Journal ofEconomic Behavior & Organization, 98, 29–40.

Köszegi, B. (2003). Health anxiety and patient be-havior. Journal of Health Economics, 22, 1073–1084.

Köszegi, B. (2006). Ego utility, overconfidence, andtask choice. Journal of the European EconomicAssociation, 4, 673–707.

Köszegi, B. (2010). Utility from anticipation andpersonal equilibrium. Economic Theory, 44, 415–444.

Kreps, D., & Porteus, E. (1978). Temporal resolutionof uncertainty and dynamic choice theory. Econo-metrica, 46, 185–200.

Kruger, J., & Evans, M. (2009). The paradox ofalypius and the pursuit of unwanted information.Journal of Experimental Social Psychology, 45,1173–1179.

Lavie, N. (2005). Distracted and confused?: Selectiveattention under load. Trends in Cognitive Sciences,9, 75–82.

Lieberman, D., Cathro, J., Nichol, K., & Watson, E.(1997). The role of s- in human observing behav-ior: Bad news is sometimes better than no news.Learning and Motivation, 28, 20–42.

Litman, J., Hutchins, T., & Russon, R. (2005). Epis-temic curiosity, feeling-of-knowing, and explor-atory behaviour. Cognition and Emotion, 19, 559–582.

Loewenstein, G. (1987). Anticipation and the valua-tion of delayed consumption. The Economic Jour-nal, 97, 666–684.

Loewenstein, G. (1994). The psychology of curios-ity: A review and reinterpretation. PsychologicalBulletin, 116, 75–98.

Loewenstein, G., Weber, E., Hsee, C., & Welch, N.(2001). Risk as feelings. Psychological Bulletin,127, 267–286.

Lovallo, D., & Kahneman, D. (2000). Living withuncertainty: Attractiveness and resolution timing.Journal of Behavioral Decision Making, 13, 179–190.

MacCrimmon, K., & Larsson, S. (1979). Utility the-ory: Axioms versus “paradoxes”. In M. Allais &O. Hagen (Eds.), Expected utility hypotheses andthe allais paradox (pp. 333–409). Dordrecht, Hol-land: D. Reidel Publishing Company.

Machina, M. (1989). Dynamic consistency and non-expected utility models of choice under uncer-tainty. Journal of Economic Literature, 27, 1622–1668.

Marvin, C., & Shohamy, D. (2016). Curiosity andreward: Valence predicts choice and informationprediction errors enhance learning. Journal of Ex-perimental Psychology: General, 145, 266–272.

Mellers, B. A., Schwartz, A., Ho, K., & Ritov, I.(1997). Decision affect theory: Emotional reac-tions to the outcomes of risky options. Psycholog-ical Science, 8, 423–429.

Menon, S., & Soman, D. (2002). Managing thepower of curiosity for effective web advertisingstrategies. Journal of Advertising, 31(3), 1–14.

Meyer, W., Niepel, M., Rudolph, U., & Schutzwohl,A. (1991). An experimental analysis of surprise.Cognition & Emotion, 5, 295–311.

Nielsen, L. (1984). Unbounded expected utility andcontinuity. Mathematical Social Sciences, 8, 201–216.

Rabin, M. (2000). Risk aversion and expected-utilitytheory: A calibration theorem. Econometrica, 68,1281–1292.

Ritov, I., & Baron, J. (1990). Reluctance to vacci-nate: Omission bias and ambiguity. Journal ofBehavioral Decision Making, 3, 263–277.

Sarinopoulos, I., Grupe, D., Mackiewicz, K., Her-rington, J., Lor, M., Steege, E., & Nitschke, J.(2010). Uncertainty during anticipation modulatesneural responses to aversion in human insula andamygdala. Cerebral Cortex, 20, 929–940.

Savage, L. (1954). The Foundations of Statistics.New York, NY: Wiley.

Schelling, T. (1987). The mind as a consuming organ.In J. Elster (Ed.), The multiple self (pp. 177–196).Cambridge, UK: Cambridge University Press.

Schweizer, N., & Szech, N. (2014). Optimal revela-tion of life-changing information. [Working Pa-per].

Shannon, C. (1948). A mathematical theory of com-munication. Bell System Technical Journal, 27,623–656.


Smith, A., Bernheim, B. D., Camerer, C., & Rangel,A. (2014). Neural activity reveals preferenceswithout choices. American Economic Journal Mi-croeconomics, 6(2), 1–36.

Smith, D. M., Loewenstein, G., Jankovich, A., &Ubel, P. A. (2009). Happily hopeless: Adaptationto a permanent, but not to a temporary, disability.Health Psychology, 28, 787–791.

Stigler, G. (1961). The economics of information.Journal of Political Economy, 69, 213–225.

Tasoff, J., & Madarász, K. (2009). A model of atten-tion and anticipation. [Working Paper].

Thornton, R. (2008). The demand for, and impact of,learning HIV status. American Economic Review,98, 1829–1863.

Tversky, A., & Kahneman, D. (1992). Advances inprospect theory: Cumulative representation of uncer-tainty. Journal of Risk and Uncertainty, 5, 297–323.

van Dijk, E., & Zeelenberg, M. (2007). When curiositykilled regret: Avoiding or seeking the unknown indecision-making under uncertainty. Journal of Ex-perimental Social Psychology, 43, 656–662.

van Winden, F., Krawczyk, M., & Hopfensitz, A.(2011). Investment, resolution of risk, and the role

of affect. Journal of Economic Psychology, 32,918–939.

von Neumann, J., & Morgenstern, O. (1944). Theoryof games and economic behavior. Princeton, NJ:Princeton University Press.

Wakker, P. (1988). Nonexpected utility as aversionof information. Journal of Behavioral DecisionMaking, 1, 169–175.

Wilson, T., Centerbar, D., Kermer, D., & Gilbert, D.(2005). The pleasures of uncertainty: Prolonging pos-itive moods in ways people do not anticipate. Journalof Personality and Social Psychology, 88, 5–21.

Wilson, T., & Gilbert, D. (2005). Affective forecast-ing: Knowing what to want. Current Directions inPsychological Science, 14, 131–134.

Wilson, T., & Gilbert, D. (2008). Explaining away: Amodel of affective adaptation. Perspectives onPsychological Science, 3, 370–386.

Yariv, L. (2001). Believe and let believe: Axiomaticfoundations for belief dependent utility function-als. Cowles Foundation Discussion Paper No.1344.

Zimmermann, F. (2014). Clumped or piecewise? Ev-idence on preferences for information. Manage-ment Science, 61, 740–753.

Appendix

The One-Sided Sure-Thing Principle

The one-sided sure-thing principle asserts that people always prefer a certain answer to uncertainty among answers that all have valences no better than the certain answer (holding attention weight constant):

For any π ∈ Δ(𝒜), let supp(π) ⊆ 𝒜 denote the support of π. If for all A × x ∈ supp(π) we have u(π′, w) ≥ u(π^{A×x}, w), then u(π′, w) ≥ u(π, w), with the latter inequality strict whenever there exist A′ × x′ and A″ × x″ in supp(π) such that A′ ≠ A″.
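As a quick sanity check, the sketch below evaluates the belief-based utility term used in the examples in the main text (attention weight times expected valence plus the sum of π log π, holding the weight fixed) for a certain answer and for an uncertain belief whose supported answers have valences no better than the certain one; the answers, valences, and probabilities are hypothetical.

import math

def belief_term(belief, valence, w):
    # Belief-based utility under the specification used in the examples:
    # attention weight times (expected valence plus sum of p * log p).
    expected_valence = sum(p * valence[a] for a, p in belief.items())
    log_term = sum(p * math.log(p) for p in belief.values() if p > 0)
    return w * (expected_valence + log_term)

valence = {"ok": 0.0, "bad": -1.0, "worse": -2.0}   # the certain answer "ok" is weakly best
certain = {"ok": 1.0}
uncertain = {"ok": 0.5, "bad": 0.3, "worse": 0.2}
print(belief_term(certain, valence, w=1.0) > belief_term(uncertain, valence, w=1.0))  # True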

Received January 25, 2016
Revision received June 21, 2016
Accepted August 12, 2016
