
Available online at www.sciencedirect.com

www.elsevier.com/locate/cogsys

Cognitive Systems Research 9 (2008) 252–273

Neural affective decision theory: Choices, brains, and emotions

Action editor: Ron Sun

Abninder Litt a,*, Chris Eliasmith b,c, Paul Thagard b,d,e

a Graduate School of Business, Stanford University, Stanford, CA 94305-5015, United States
b Department of Philosophy, University of Waterloo, Ontario, Canada N2L 3G1

c Department of Systems Design Engineering, University of Waterloo, Ontario, Canada N2L 3G1
d Department of Psychology, University of Waterloo, Ontario, Canada N2L 3G1

e Cheriton School of Computer Science, University of Waterloo, Ontario, Canada N2L 3G1

Received 25 June 2007; accepted 29 November 2007; available online 14 April 2008

Abstract

We present a theory and neurocomputational model of how specific brain operations produce complex decision and preference phenomena, including those explored in prospect theory and decision affect theory. We propose that valuation and decision making are emotional processes, involving interacting brain areas that include two expectation-discrepancy subsystems: a dopamine-encoded system for positive events and a serotonin-encoded system for negative ones. The model provides a rigorous account of loss aversion and the shape of the value function from prospect theory. It also suggests multiple distinct neurological mechanisms by which information framing may affect choices, including ones involving anticipated pleasure. It further offers a neural basis for the interactions among affect, prior expectations and counterfactual comparisons explored in decision affect theory. Along with predicting the effects of particular brain disturbances and damage, the model suggests specific neurological explanations for individual differences observed in choice and valuation behaviors.
© 2008 Elsevier B.V. All rights reserved.

Keywords: Computational neuroscience; Decision making; Emotion; Framing; Prospect theory

1. Introduction

How do people decide what clothes to wear, what to eat for dinner, what car to buy, or what kind of career to pursue? In traditional economics, the standard answer is that people decide by maximizing expected utility, but psychologists have found many problems with this kind of decision theory as a description of human behavior (e.g., Camerer, 2000; Kahneman & Tversky, 2000; Koehler, Brenner, & Tversky, 1997; Rottenstreich & Hsee, 2001; Tversky & Kahneman, 1991). Economists commonly take preferences as given, but from a psychological point of view it should be possible to explain how preferences arise from cognitive and affective processes. Work in this spirit has made tremendous progress in revealing key features and dynamics missed by theories disconnected from the study of cognitive, emotional and socially motivated phenomena, such as a common hypersensitivity to losses over equivalent gains (Kahneman & Tversky, 1979) and the affective influence of prior expectations and counterfactual comparisons on preference judgments (Mellers, 2000). Moreover, with the rise of cognitive and affective neuroscience, it should be possible to identify precise neural mechanisms underlying these behavioral-level explanations of why people make the choices that they do.

1389-0417/$ - see front matter © 2008 Elsevier B.V. All rights reserved. doi:10.1016/j.cogsys.2007.11.001

* Corresponding author. E-mail address: [email protected] (A. Litt).

We propose neural affective decision theory as a psychologically and neurologically realistic account of specific brain mechanisms underlying human preference and decision. The theory consists of four principles, which we shall list here and describe in detail later:

1. Affect. Decision making is a cognitive–affective process, crucially dependent on emotional evaluation of potential actions.

2. Brain. Decision making is a neural process driven by coordinated dynamic interactions among multiple brain areas, including parts of prefrontal cortex as well as major subcortical systems.

3. Valuation. The brain forms preferences via interacting but distinct mechanisms for positive and negative outcomes, encoded primarily by dopamine and serotonin, respectively.

4. Framing. Judgments and decisions vary depending on how the context and manner of the presentation of information initiate different neural activation patterns.

There is substantial empirical evidence for each of these principles, and when integrated in the precise manner we outline they can explain a wide range of psychological and neurological phenomena.

In order to connect these principles with experimental results in a mathematically and neurologically rigorous fashion, we have developed a neurocomputational model called ANDREA (affective neuroscience of decision through reward-based evaluation of alternatives). It operates within the neural engineering framework (NEF) developed by Eliasmith and Anderson (2003), using biologically realistic populations of neurons to encode and transform complex representations of relevant information. ANDREA simulates computations among several thousand neurons to model coordinated activities in seven major brain areas that contribute to valuation and decision making: the amygdala, orbitofrontal cortex, anterior cingulate cortex, dorsolateral prefrontal cortex, the ventral striatum, midbrain dopaminergic neurons, and serotonergic neurons centered in the dorsal raphe nucleus of the brainstem.

ANDREA successfully produces detailed neural-level simulations of behavioral findings explored in prospect theory (Kahneman & Tversky, 1979) and the decision affect theory of Mellers and colleagues (1997). It shows how specific neural processes can produce behaviors observed in both psychological experiments and real-world scenarios that have provided compelling evidence for these preference and choice theories. In particular, ANDREA provides neurological explanations for the major hypothesis of prospect theory that losses have greater psychological force than gains, as well as for the fundamental claim of decision affect theory that the evaluation (and subsequent potential choice) of an option is strongly influenced by its perceived relative pleasure, an emotional determinant that is dependent on expectations and counterfactual comparisons. In our concluding discussion, we compare ANDREA to other models in decision neuroscience, describe promising avenues of expansion for ANDREA and neural affective decision theory, and suggest additional psychological phenomena that are likely to fall within the scope of our theory.

2. Neural affective decision theory

We now examine in detail the four guiding principles of neural affective decision theory, including connections to and supporting evidence provided by a diverse array of research in both psychology and neuroscience. The ANDREA implementation of the theory we describe later provides the formal integration of these ideas necessary for our detailed simulation experiments.

2.1. Principle 1. Affect

According to our first principle, decision making is a cognitive–affective process, crucially dependent on emotional evaluation of potential actions. This claim rejects the assumption of traditional mathematical decision theory that choice is a 'cold' process involving the calculation of expected values and utilities (Kreps, 1990; Von Neumann & Morgenstern, 1947). The original 19th-century concept of utility was a psychologically rich, affective one based on pleasure and pain (Kahneman, Wakker, & Sarin, 1997). In contrast, 20th-century economics adopted the behaviorist view that utilities are mathematical constructions based on preferences revealed purely by behavior. There is no room in this view for findings observed in both psychological experiments and everyday life that people's decisions are often highly emotional, with preferences arising from having positive feelings for some options and negative ones for others. While psychology has introduced a more complex characterization of the cognitive processes underlying decision making, the specific influence of affect on behavior has frequently been ignored. Rottenstreich and Shu (2004) argue that this neglect of affect may stem from an original desire of psychological decision researchers to minimize differences with the terminology and general themes of classical normative decision theories.

But there is increasing appreciation in cognitive science that emotions are an integral part of decision making (e.g., Bechara, Damasio, & Damasio, 2000, 2003; Churchland, 1996; Lerner & Keltner, 2000; Loewenstein, Weber, Hsee, & Welch, 2001; Sanfey, Loewenstein, McClure, & Cohen, 2006; Slovic, Finucane, Peters, & MacGregor, 2002; Wagar & Thagard, 2004). Kahneman (2003, p. 710) argues that "there is compelling evidence for the proposition that every stimulus evokes an affective evaluation." Common experience suggests that emotions are both inputs and outputs of decision making. Preference for one option over another depends strongly on their relative emotional interpretations, and the process of decision making can itself generate emotions such as anxiety or relief. The relevance of emotion to decision making is consistent with physiological theories that regard emotions as reactions to somatic changes (Damasio, 1994; James, 1884). It also fits with some cognitive theories of emotions, which regard them as judgments about the extent to which one's goals are being satisfied (Oatley, 1992). From a neurological perspective, it is easy to see how emotions can be both cognitive and physiological, as there are numerous interconnections among the relevant brain areas.

2.2. Principle 2. Brains

According to our second principle, decision making is a neural process driven by coordinated dynamic interactions among multiple brain areas, including parts of prefrontal cortex as well as major subcortical systems. In particular, activity in brain regions involved in assessing and acting upon the appetitive or aversive nature of stimuli (commonly conceptualized as part of the brain's reward system) seems most crucial to understanding judgment and choice behavior (for a review, see Sanfey et al., 2006). Empirical neuroscientific investigation of the nature of preference and decision has been developing rapidly, and much work today is identifying specific brain areas involved in producing decision-related behaviors (e.g., Bayer & Glimcher, 2005; Breiter, Aharon, Kahneman, Dale, & Shizgal, 2001; Knutson, Taylor, Kaufman, Peterson, & Glover, 2005; McClure, York, & Montague, 2004; Montague & Berns, 2002). This nascent field of decision neuroscience represents an exciting frontier of deep exploration into how and why people act, think and feel as they do in choice and judgment scenarios (Shiv et al., 2005).

But taking a neural approach to decision making allows for much more than simply identifying brain areas activated in the subjects of behavioral studies. The development of biologically plausible theories of how brain areas interact to produce preferences and choices can provide more refined mechanistic explanations of decision behaviors. Moreover, investigation at the neural level can suggest novel experiments, for example the recent discovery that an odorless nasal spray preparation of the neuropeptide oxytocin increases trust in risky choice scenarios, including those involving monetary transactions (Kosfeld, Heinrichs, Zak, Fischbacher, & Fehr, 2005). Such a finding is one that decision neuroscience can reveal, but that would be missed by higher-level psychological study alone. Neuroscience can thus inform the development of more detailed predictions and richer understandings of behavioral-level observations.

2.3. Principle 3. Valuation

Our third principle states that the brain forms preferences via interacting but distinct mechanisms for positive and negative outcomes, encoded primarily by dopamine and serotonin, respectively. There is extensive evidence that midbrain dopamine neurons, such as those in the ventral tegmental area and nucleus accumbens, are involved in the computation of a discrepancy between the expected and actual rewarding nature of an outcome (e.g., Knutson et al., 2005; Schultz, 2000; Suri, 2002), although recent evidence suggests that this activity is only involved in the encoding of positive deviations from expectations, that is, getting more than one expected (Bayer & Glimcher, 2005). Daw, Kakade, and Dayan (2002) describe a plausible alternative brain mechanism for situations in which one receives less than expected, arguing that serotonin innervation from the dorsal raphe nucleus of the brainstem is crucial for producing characteristic reactions to negatively valued stimuli and matters being considered (e.g., options in a choice scenario). There are thus neurobiological reasons for viewing gains and losses as being encoded and subsequently assessed in a fundamentally different manner by the brain, involving distinct neural circuits and activation patterns. This provides the basis for our explanation of the central finding of prospect theory that losses loom larger than gains. Our neurocomputational model ANDREA simulates how interactions of the dopamine and serotonin systems with the amygdala and other brain areas may enable this asymmetric assessment of positive and negative outcomes.

2.4. Principle 4. Framing

The last principle states that judgments and decisions vary depending on how the context and manner of the presentation of information initiate different neural activation patterns. The importance of framing is evident from the long history of influential work by Tversky and Kahneman (1981, 1986; see also Kahneman & Tversky, 2000). They demonstrated that framing a decision in terms of either losses or gains can substantially affect the choices that people make, and related phenomena have been observed in many real-life arenas such as the stock market and consumer choice (Camerer, 2000). We contend that framing can be understood even more deeply from the neural affective perspective we propose in our first two principles. The simulation results we describe later show how this enriched conception of framing allows for the integration of diverse lines of behavioral decision research, as well as the postulation of important new predictions and hypotheses.

We take the concept of framing to encompass any potential effects of the manner or context of presentation on decisions and judgments. Following this characterization, such findings as preference reversals when outcomes are evaluated jointly versus separately (e.g., Hsee, Loewenstein, Blount, & Bazerman, 1999; Hsee, Rottenstreich, & Xiao, 2005) might also be considered to be framing results. Another such framing effect is illustrated by the 'trolley-footbridge' dilemma (e.g., Greene, Sommerville, Nystrom, Darley, & Cohen, 2001): most people consider flipping a switch to kill one person instead of five morally justified, but consider it immoral to personally push a person into the path of an oncoming trolley, killing that person but preventing the trolley from killing five others. We will also explain some of the findings of the decision affect theory of Mellers and colleagues (1997) as framing effects that differentially activate specific neural systems.

Fig. 1. Basic connectivity framework. Dotted arrows represent external inputs to the model. Abbreviations: 5-HT, dorsal raphe serotonergic neurons; ACC, anterior cingulate cortex; AMYG, amygdala; DA, midbrain dopaminergic neurons; DLPFC, dorsolateral prefrontal cortex; OFC, orbitofrontal cortex; VS, ventral striatum.

These four principles make strong claims about the processes that constitute human decision making, but alone they are not sufficiently precise to explain particular experimental results. We now describe a rigorously defined, biologically realistic neurocomputational model that specifies how different brain areas might interact in a manner consistent with neural affective decision theory to produce observed behavioral phenomena.

3. The ANDREA model

A neuropsychological theory consists of a set of hypotheses about how specific brain operations produce observed behaviors. Because of the complexity of the brain, computational models are indispensable for theoretical neuroscience, both for precisely specifying relevant neural structures and activities and for examining their implications through appropriate simulation experiments. Litt, Eliasmith, and Thagard (2006) proposed a biologically detailed neural model of reward system substrates of valuation and decision. We describe here the primary functional components of this model, and introduce several explanatorily valuable additions regarding neural response characteristics and the complexity of emotional arousal encoding. This version of the model we call ANDREA, for affective neuroscience of decision through reward-based evaluation of alternatives.

Our model applies the neural engineering framework (NEF) developed by Eliasmith and Anderson (2003), and has been implemented in MATLAB using the NEF simulation software NESim (see Appendix A). Neural populations ('ensembles') and their firing activities are described in the NEF in terms of mathematically precise representations and transformations, with the dynamic characteristics of neural computations characterized using the tools of modern control theory. Appendix B outlines the exact mathematical nature of representation, transformation and dynamics as defined by the NEF. This rigorous, generalized mapping of high-level mathematical entities and transformations onto biophysical phenomena such as spike patterns and currents allows for biologically constrained computations and dynamics to be implemented in physiologically realistic neural populations, and has proven successful in modeling phenomena ranging from the swimming of lamprey fish (Eliasmith & Anderson, 2000) to the Wason card task from cognitive psychology (Eliasmith, 2005b).
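The NEF's representation and transformation principles can be illustrated with a toy rate-based sketch. This is not the NESim implementation: the neuron count, encoders, gains, and biases below are invented, and a rectified-linear rate function stands in for the spiking LIF neurons used in the actual model. The idea shown is the core one: an ensemble encodes a scalar through heterogeneous tuning curves, and linear decoders recovered by least squares read out either the represented value or a function of it.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100  # neurons in the ensemble (illustrative size)

# Hypothetical neuron parameters: a random preferred direction (encoder),
# gain, and bias per neuron.
encoders = rng.choice([-1.0, 1.0], size=N)
gains = rng.uniform(0.5, 2.0, size=N)
biases = rng.uniform(-1.0, 1.0, size=N)

def rates(x):
    """Ensemble firing rates for scalar stimulus value(s) x."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    J = gains * (x[:, None] * encoders) + biases  # input currents
    return np.maximum(J, 0.0)                     # rectified-linear response

# Representation: solve for linear decoders d by least squares so that
# rates(x) @ d approximates x over the represented range.
xs = np.linspace(-1, 1, 201)
A = rates(xs)
d = np.linalg.lstsq(A, xs, rcond=None)[0]

# Transformation: decoders for a function of x (here x**2) come from
# fitting the desired output instead of x itself.
d_sq = np.linalg.lstsq(A, xs**2, rcond=None)[0]

x_hat = (rates(0.5) @ d)[0]
x_sq_hat = (rates(0.5) @ d_sq)[0]
print(x_hat, x_sq_hat)  # approximately 0.5 and 0.25
```

In the full NEF this same least-squares decoding underlies the connection weights between ensembles, which is what lets each arrow in Fig. 1 implement a specific mathematical transformation.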

Fig. 1 shows the connectivity structure between the different brain regions we have modeled. A comprehensive examination of afferent and efferent transmission among these regions would feature many more connections than we have included. The interactions shown represent particular paths of coordinated activity that contribute to observed behaviors, rather than a full characterization of all relevant neural activity. Appendix A provides details regarding the specific numbers of neurons used to model each of these brain areas, as well as the physiological parameters used to model individual neurons in each of these populations. Each input–output relation symbolized by a connection line in Fig. 1 maps onto one or more specific mathematical transformations, as summarized in Appendix C. We now describe these coordinated neural computations as they are relevant to explaining decisions and valuations.

3.1. Subjective valuation by emotional modulation

Valuation of alternatives and other information is an essential part of decision making. Central to the performance of this task by ANDREA is an interaction between the amygdala and orbitofrontal cortex (Fig. 1). Much research has implicated orbitofrontal cortex in the valuation of stimuli (e.g., Rolls, 2000; Thorpe, Rolls, & Maddison, 1983), particularly in light of its extensive connections with sensory processing areas of the brain. Several recent studies have indicated an important role for orbitofrontal neurons in providing a sort of "common neural currency" (Montague & Berns, 2002) which allows for the evaluation and comparison of figurative (or even literal) apples and oranges (Padoa-Schioppa & Assad, 2006). Recent studies of the amygdala have challenged its traditional association with mainly aversive stimulus processing, showing instead activation based on the degree to which stimuli are salient or arousing, rather than a specific valence type (for a review, see McClure et al., 2004). This has inspired a reinterpretation of classic results as indicating that negatively appraised events may be in general more emotionally arousing than positive outcomes, perhaps because of a need to alter current behavior in response to aversive feedback. In accord with research on the role of the amygdala in emotional attention (Adolphs et al., 2005) and multiplicative scaling observations for visual attention (e.g., Treue, 2001), ANDREA performs a multiplicative modulation by amygdala-encoded emotional arousal of the valuation computation performed in orbitofrontal cortex (Fig. 2). That is, orbitofrontal valuations are modeled as being multiplicatively dampened or intensified, depending on whether the individual is in a lowered or heightened state of affective arousal, respectively.

Let V represent baseline orbitofrontal stimulus valuation, based on initial sensory and cognitive processing and provided as an input in our model. Taking A to represent amygdala-encoded emotional arousal, we characterize the output subjective valuation S at time t as

S(t) = A(t) · V(t).   (1)

Thus, increased levels of emotional arousal will amplify the subjective valuation of stimuli by orbitofrontal cortex, while lower arousal levels lead to valuation attenuation.

Fig. 2. Arousal modulation of valuation. (a) The "emotionless" input signal to orbitofrontal cortex consists here of positive and negative valuation changes of varying magnitude. The vertical axis can be interpreted as a sort of neural currency scale: upward steps in the graph thus represent gains, while stepping down indicates a loss. Positive/negative sign corresponds to appetitive/aversive valence. (b) Emotional arousal reflected in amygdala activity. Decoded output from spiking neuron populations (see Appendix B). Upsurges correspond to arousal increases coinciding with changes in the externally provided stimulus value signal in (a), demonstrating the role such changes play in influencing emotional engagement. (c) Multiplicative modulation of the activity presented in (a) by that shown in (b). Emotional arousal can induce significant changes in valuation from the baseline input signal.

As we discuss in our later account of prospect theory, ANDREA also introduces realistic biological constraints imposed by neural firing saturation that help to explain valuation behaviors observed in humans, an advancement over our earlier reward system model (Litt et al., 2006).

3.2. Surprise as deviation from expectations

Fig. 2 shows that our model generates amygdala activity upsurges accompanying changes in the valuation input to orbitofrontal cortex, and that these upsurges are valence-asymmetric: negative changes in valuation (losses) produce greater affective arousal increases than equivalent positive changes (gains). This neurological behavior is produced mainly through a modulation of amygdala-encoded emotional arousal by a reward prediction error signal. This is simply the discrepancy between expected and actual stimulus valuation, and as such represents the effect of the surprising nature of a stimulus on how emotionally arousing it is.

For this computation we employ the temporal difference (TD) model (Sutton & Barto, 1998), due to its simple mathematical structure and robust correspondence with experimental neural activity observations (e.g., Schultz, 2000). TD computes reward prediction error (E) based on the difference between the latest reward valuation and a weighted sum of all previous rewards (P). Using our arousal-modulated signal S as the input regarding current stimulus valuation, this leads to the modeled recurrent equations

E(t) = S(t) − P(t − 1),   (2)
P(t) = P(t − 1) + α · E(t),   (3)

where α is a learning rate constant between 0 and 1. This activity has typically been modeled by increased midbrain dopamine firing with positive prediction errors (that is, getting more than expected) and firing rate depression for negative errors (getting less than expected) (Schultz, 1998, 2000; Suri, 2002). While this approach seems valid for particular ranges, recent work has called into question the feasibility of midbrain dopamine acting alone to perform this computation (Daw et al., 2002; Dayan & Balleine, 2002). In particular, such physiological constraints as low baseline firing rates make it difficult to envision how activity depression could adequately encode highly negative prediction errors, a concern supported by recent experimental findings (Bayer & Glimcher, 2005).

Accordingly, we adopt an interacting opponent encoding of positive prediction errors by midbrain dopamine and negative errors by serotonergic neurons in the dorsal raphe nucleus of the brainstem. This is supported by a variety of experimental studies in humans and other animals (Deakin, 1983; Evenden & Ryan, 1996; Mobini, Chiang, Ho, Bradshaw, & Szabadi, 2000; Soubrie, 1986; Vertes, 1991; for a review, see Daw et al., 2002). By separating the encodings of losses and gains, we are able to distinctly calibrate the modulatory effects of positive and negative valuation changes to provide a plausible neural mechanism for loss aversion (Appendix C). That is, asymmetries in loss–gain valuation can be modeled via differing amygdala sensitivity to inputs from dorsal raphe or midbrain areas, perhaps realized in actual brains through differences in specific neurotransmitter receptor concentrations in the amygdala, or similar mechanisms of connectivity strength variation (e.g., receptor sensitivity differences).
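The TD update of Eqs. (2)–(3) and the opponent split can be sketched in a few lines. This is a discrete-time toy, not the spiking implementation: the learning rate value is illustrative, and the rectification into separate DA and 5-HT channels is a simplification of the interacting opponent dynamics described above.

```python
def td_opponent_step(S, P_prev, alpha=0.3):
    """One temporal-difference update (Eqs. 2-3), with the signed
    prediction error split into opponent channels: DA carries positive
    errors (better than expected), 5-HT carries negative ones.
    alpha is an illustrative learning rate, not a fitted value."""
    E = S - P_prev             # Eq. (2): reward prediction error
    P = P_prev + alpha * E     # Eq. (3): updated expectation
    DA = max(E, 0.0)           # dopamine channel: positive errors only
    HT5 = max(-E, 0.0)         # serotonin channel: negative errors only
    return P, DA, HT5

# An unexpected gain followed by an unexpected loss of equal size:
P = 0.0
P, DA, HT5 = td_opponent_step(1.0, P)   # better than expected -> DA channel
print(DA, HT5)
P, DA, HT5 = td_opponent_step(-1.0, P)  # worse than expected -> 5-HT channel
print(DA, HT5)
```

Note that the second error has magnitude 1.3 rather than 1.0: the expectation P was raised by the first outcome, so the subsequent loss is more surprising than the gain was.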

3.3. Increased behavioral saliency of negative outcomes

Valence-asymmetry in emotional arousal is further strengthened in our model through the activities we have assigned to the anterior cingulate and dorsolateral prefrontal cortices, specifically via a proposed dissimilarity in the influences of losses and gains on required behavioral planning. Much evidence supports the importance of dorsolateral prefrontal cortex in the planning, representation and selection of goal-directed behaviors (e.g., Owen, 1997), and the anterior cingulate cortex in the detection of conflicts between current behavior and desired or expected results, interfacing appropriately with dorsolateral prefrontal cortex in the process (e.g., Bush, Luu, & Posner, 2000). We hypothesize an increased behavioral saliency of negative reward prediction errors, as such results may indicate that current behavior needs to be modified, rather than simply maintained or strengthened as a positive error would indicate. Such a situation would introduce the attendant cognitive resource requirements of new action plan formation and execution in response to the displeasing outcome, as well as potential environmental risks stemming from altering current behavior. We model this increased behavioral saliency through a corresponding increase in emotional arousal (Appendix C). Thus, feedback from dorsolateral prefrontal cortex and the anterior cingulate to the amygdala in our model further increases the affective influence of losses over similarly sized gains.

In combination with the previously discussed roles of dopamine and serotonin in influencing the amygdala, we arrive at our final characterization of how emotional arousal A is influenced by the saliency of unexpected gains and losses, as well as potential behavioral modification costs associated with the latter:

A(t) = A1(t) + b · DA(t) + c · 5-HT(t) + C(t).    (4)

A1 represents a degree of emotional arousal determined by external factors unrelated to reward prediction error or the described dorsolateral–cingulate contribution. In previous work we have provided this as a straightforward input signal to the model (Litt et al., 2006). We shall describe later how ANDREA expands A1 arousal by incorporating prior expectations regarding valuation targets, which allows for a neurobiological explanation of decision affect theory (Mellers, Schwartz, Ho, & Ritov, 1997). DA and 5-HT are the opponent encodings of positive and negative reward prediction error, respectively, with c chosen to be a connection-strength constant greater than b to simulate an increased influence of serotonin-encoded losses over dopamine-encoded gains on emotional state. Finally, C represents the additional costs associated with losses that increase their behavioral saliency, and hence emotional import, as determined by activity in the anterior cingulate and dorsolateral prefrontal cortices.
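The arousal update of Eq. (4) can be sketched as a simple function. The numeric constants and inputs below are illustrative assumptions only (the theory specifies just that c > b), not parameters from the ANDREA implementation:

```python
def arousal(A1, DA, serotonin, C, b=1.0, c=2.0):
    """Sketch of Eq. (4): A(t) = A1(t) + b*DA(t) + c*5-HT(t) + C(t).

    DA and serotonin are the opponent encodings of positive and negative
    reward prediction error; choosing c > b makes serotonin-encoded losses
    weigh more heavily on arousal than dopamine-encoded gains.
    """
    return A1 + b * DA + c * serotonin + C

# An unexpected loss (serotonin signal, plus behavioral-modification cost C)
# raises arousal more than an equally sized unexpected gain (dopamine signal).
gain_arousal = arousal(A1=0.5, DA=1.0, serotonin=0.0, C=0.0)
loss_arousal = arousal(A1=0.5, DA=0.0, serotonin=1.0, C=0.3)
```

With these assumed constants the loss drives arousal higher than the equivalent gain, which is the asymmetry the model needs.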

4. A neural account of prospect theory

Prospect theory, a theoretical framework for understanding risky choice developed by Kahneman and Tversky (1979, 1982, 2000), has been applied to many preference and choice behaviors commonly exhibited by people. The most famous such phenomenon is loss aversion, whereby people behave asymmetrically in their personal valuations of objectively equivalent losses and gains. Central to prospect theory's resolution of this and


other inconsistencies between classical decision research and actual behavior is a redefined characterization of the nature of subjective evaluations of decision outcomes. The resulting value function proposed by the theory has the following essential characteristics (Kahneman & Tversky, 1979; Tversky & Kahneman, 1984): (i) subjective value is defined on gains and losses – that is, deviations from a neutral reference point – rather than on total wealth, as is typical of expected utility theory; (ii) the value function is concave for gains and convex for losses; and (iii) the curve is steeper for losses than gains. Taken together, an asymmetric sigmoid value function is the well-known result (Fig. 3).

4.1. Loss aversion

Neural affective decision theory, via the ANDREA model, provides a compelling explanation of loss aversion. The combination of arousal modulation of subjective valuation and the increased affective import of losses produces emotionally influenced orbitofrontal valuations that overweight losses. This can be seen in Fig. 2, and even more vividly in the simplified simulation of Fig. 4. The resulting effects on thinking and behavior would produce the asymmetries in people's evaluations of and responses to gains and losses that have been documented and explained by prospect theory.

We turn now to extending this neural account of loss aversion into a detailed biological explanation for the specific shape of prospect theory's sigmoid value function.

4.2. The value function of prospect theory

In developing a neural theory of preference that meshes with the behaviorally inspired prospect theory value function, the first step is to identify brain regions that should be expected to produce responses corresponding to the

Fig. 3. A hypothetical prospect theory value function, illustrating subjective valuation commonalities observed in tests of numerous subjects.

sorts of behaviors monitored in psychological studies of preference and valuation. As discussed earlier, it seems natural to look to orbitofrontal cortex as the site of activity mapping directly onto people's subjective valuations of gains and losses. It has been implicated strongly in tasks related to valuation and comparison of outcomes, events and perceived stimuli in general, and we have described a fundamental affective modulation of this encoding that may form the basis of the subjective nature of ultimate outcome valuation.

The next step is to identify features of the ANDREA model that might explain the specific nature of the sigmoid value function, as described previously in terms of three primary characteristics (Fig. 3). Feature (i) of the curve, valuation in terms of reference point deviation, identifies the sort of input signal to be modulated in orbitofrontal cortex. Since the degree to which a stimulus is considered a loss or a gain is a representation of its divergence from a neutral reference, evaluating such changes in value calls for a step-style input, similar to those shown in Figs. 2 and 4, where the subjective valuation of a deviation of size X will be determined by the emotional modulation produced by an input valuation step from 0 to X. For example, to produce orbitofrontal activity corresponding to the subjective valuation of a loss of $200, we measure the emotionally modulated output from orbitofrontal cortex to a step input that moves from 0 (the reference point) to our target value (−200). Leaving aside feature (ii) for a moment, the third aspect of the prospect theory value curve, a steeper slope for losses than gains, is simply loss aversion (Kahneman & Tversky, 1979). The biological account of loss aversion we previously described will thus serve as a critical component of our neural explanation of the S-curve.

The second feature of the sigmoid value function, the leveling-off of loss and gain valuations at the extremes, requires appeal to additional neurological mechanisms. In particular, we introduce the notion of neural saturation. Any type of neuron has hard biological constraints on how fast it can fire. Each action potential is followed by a refractory period of repolarization during which the neuron cannot fire, and issues such as cellular respiration requirements and local neurotransmitter depletion introduce unavoidable limitations on spike rates. In the context of neurocomputation in the NEF (and hence ANDREA), this means that the range of values that can be encoded by any neural population is limited. This inherent restriction can actually serve an explanatory purpose in the case of the prospect theory value function. To explain how ANDREA could produce leveling-off valuations at the extremes, note that we encode values through increasing neural spike rates in the encoding population as these values become larger (in either the positive or negative direction). Thus, in attempting to provide subjective valuations for very large gains or losses, neurons in orbitofrontal cortex will begin to saturate, as they simply cannot fire fast enough to produce linearly distinctive affective responses to increasingly large value deviations.
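As a toy illustration of how rate saturation plus loss amplification can jointly generate the value-function shape, consider the following sketch, in which subjective value is read off a saturating firing-rate response with an arousal-driven gain on losses. The tanh response curve and all constants here are our own simplifying assumptions, not the spiking NEF neuron model actually used in ANDREA:

```python
import numpy as np

def subjective_value(x, r_max=1.0, k=0.005, loss_gain=2.0):
    """Toy value function: saturating rate response + amplified losses.

    r_max     -- firing-rate ceiling (neurons cannot fire arbitrarily fast)
    k         -- how quickly the encoding population saturates
    loss_gain -- arousal-driven amplification of losses (c > b in Eq. (4))
    """
    rate = r_max * np.tanh(k * abs(x))   # leveling-off at the extremes
    gain = loss_gain if x < 0 else 1.0   # steeper slope for losses
    return float(np.sign(x) * gain * rate)
```

Under these assumptions the curve is roughly twice as steep for moderate losses as for moderate gains, and equal increments in |x| produce shrinking increments in value as saturation sets in, qualitatively matching features (ii) and (iii).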


Fig. 4. Unbalanced evaluation of gains and losses. (a) The input signal to orbitofrontal cortex consists of positive and negative changes in value of equal magnitude. (b) Arousal level modulated by prediction error and likely behavioral saliency of stimuli. The loss induces a much greater arousal increase than the equal gain. (c) The outcome of the unevenness displayed in (b). Reductions in stimulus valuation (losses) are disproportionately amplified compared to gains.


An alternative encoding of value magnitude through activated population size, rather than the firing levels of a fixed population, would seem to deny this saturation-based account of diminishing marginal sensitivity. Such a scheme might be akin to accumulator models that have been applied to numerosity encoding in intraparietal regions (Roitman, Brannon, & Platt, 2007). However, applying this approach to valuation encoding seems less plausible from a neurophysiological resources perspective; while the salient brain areas do have millions of neurons, all of these would require direct connectivity to areas interpreting magnitude information in order to impart the same information as naturally rate-tuned transmitter release by a population which encodes magnitude via firing rates. Indeed, accumulator models generally allow for cardinal value encodings on restricted integer scales such as 2-to-32, rather than the potentially arbitrary and quasi-analog scale on which valuations often lie.

Fig. 5 illustrates the results of running simulations based on the preceding description. Each data point at X along the horizontal axes of Fig. 5b and c is the result of measuring a modulated orbitofrontal valuation output in a simulation providing orbitofrontal cortex a step input from 0 to X, in accord with the reference-deviation characterization of value in prospect theory. As expected, the effects of loss aversion are clear, with the slope differential indicating a greater affective impact of losses over equivalent gains. The 2:1 slope ratio observed here for moderate losses and gains mirrors behavioral evidence in the decision literature (e.g., Kahneman, Knetsch, & Thaler, 1991). Finally, the concavity features of the value function have been successfully replicated in these simulations. Fig. 6 shows the specific role of neural saturation in this regard. Each row in these spike rasters represents an individual orbitofrontal cortex neuron, and each point represents a single action potential at a specific point in time produced by the neuron in question. Clearly, equal changes in the size of a loss or gain do not necessarily produce similar changes in neural spiking, particularly as neurons begin to saturate. This causes a decreasing distinctness in firing response at the extremes, which we propose as the neurological basis for the leveling-off in loss and gain valuation observed in behavioral studies. Overall, the mechanisms we have outlined combine to produce a detailed, biologically plausible neural explanation of the nature of the value function described by prospect theory.

4.3. Framing through reference-value manipulation

Understanding how individuals respond differently depending on the manner in which information regarding a situation is presented is considered to be one of the primary

Fig. 5. Value function simulation results. (a) A typical prospect theory value function. (b) Subjective valuation outputs from orbitofrontal cortex. Each data point represents the emotionally modulated valuation of the loss or gain value chosen along the horizontal axis. (c) Close-up of the central portion of (b), showing the differing slopes for loss and gain valuations.

[Fig. 6 panel events and post-event spike counts: (a) $50 loss, n = 226; (b) $100 loss, n = 344 (+52%); (c) $150 loss, n = 403 (+17%).]

Fig. 6. Orbitofrontal cortex spike rasters. Note the clear difference in activity between (a) and (b) (a 52% increase in immediate post-event spiking). This is in contrast to the relative similarity in spiking in (b) and (c) forced by neural saturation, as the $50 change moves farther away from the reference-value of 0. This response characteristic forms the basis of our neural explanation for the concavity features of the prospect theory value function.


explanatory successes of prospect theory. A famous illustration of the power of framing is the presentation of two different choice-sets to subjects regarding the consequences of different plans to handle the outbreak of a disease expected to kill 600 people (Tversky & Kahneman, 1981, 1986):

Problem 1: Program A – 200 people will be saved.

Program B – 1/3 probability 600 are saved, 2/3 probability nobody is saved.

Problem 2: Program C – 400 people will die.

Program D – 1/3 probability nobody will die, 2/3 probability 600 will die.

Faced with the choice in Problem 1, 72% of subjects chose Program A over B, whereas only 22% of subjects chose Program C over D when faced with Problem 2. Clearly, though, Programs A and C are objectively equivalent, as are their respective alternatives. The framing of situations in terms of losses or gains may thus cause dramatic reversals of preference in decision scenarios.
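The objective equivalence of the four programs is easy to verify by computing the expected number of survivors (out of 600) under each description; this small check is ours, not part of the original study:

```python
# Expected number of survivors (out of 600) under each program description
ev_A = 200                        # "200 people will be saved"
ev_B = 600 / 3                    # 1/3 probability all 600 saved, else 0
ev_C = 600 - 400                  # "400 people will die"
ev_D = 600 / 3                    # 1/3 probability nobody dies (600 alive), else 0

assert ev_A == ev_B == ev_C == ev_D == 200
```

All four descriptions leave an expected 200 of 600 people alive; only the gain/loss framing differs.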


Fig. 7. Simulation results for framing in terms of gains and losses. (a) Objectively equivalent outcomes (ending up with 200 people alive) evaluated as deviations from different reference points. The thin-lined step input represents Problem 1/Program A, and the heavy line Problem 2/Program C (Tversky & Kahneman, 1981). (b) Opposite directions of deviation produce opposite directions of emotional amplification in subjective valuation, leading to a more positive outlook towards Program A than Program C.


The mechanisms implemented in the ANDREA model provide a realistic neural basis for such framing effects. We describe means by which objectively equivalent outcomes can produce markedly different subjective valuations in orbitofrontal cortex, depending on the manner in which each is framed. In particular, note that feature (i) of the prospect theory value function, the definition of subjective valuation on deviations from some neutral reference-value, points towards an obvious mechanism for the framing of decisions: variation of the reference value itself. In the disease example, Problem 1 is framed in terms of lives saved rather than lives lost, while the reverse is true for Problem 2. Thus, choosing "zero" reference points from which subjective valuations shall deviate in each case should lead to different values for each problem. For Problem 1, "zero lives saved" would indeed correspond to 0 on a scale measuring the total number of people of our original 600 who are expected to be left alive after the choice of a given program for combating the disease. Crucially, however, the "zero lives lost" reference point for Problem 2 would correspond to the value 600 when measured on this same scale, since 600 out of 600 people alive indicates that no lives have been lost, as required.

Thus, in the case of the Program A option in Problem 1, a subjective valuation deviation described as "200 people out of 600 will be saved" represents a positive-direction deviation from 0 lives saved to 200 lives saved. In contrast, the objectively equivalent Program C option in Problem 2, described as "400 people out of 600 will die", represents a negative-direction deviation from 0 lives lost to 400 lives lost, that is, from 600 left alive down to 200 left alive. Note that both deviation construals end at the value 200 on the scale of people still alive, since they are objectively equivalent outcomes. Nevertheless, because of our multiplicative modulation of valuation deviations by emotional arousal, opposite directions of deviation will produce subjective valuations that are emotionally amplified in opposite directions. Fig. 7 illustrates the results of characterizing this type of framing as a manipulation of the deviation reference point. We obtain a subjective valuation of orbitofrontal step input corresponding to Program A that is much more positive than that of a step input corresponding to Program C, simply because of opposite directions of emotional modulation. This would explain the preference reversal that occurs upon switching decision frames, as what was seen as a gain in comparison to one reference-value is suddenly evaluated as a loss in comparison to a different referent. Later we will discuss framing effects that operate in ways other than varying reference-values, and how these different sorts of framing can in combination explain the observed interactions between affect, prior expectations and counterfactual comparisons explored in decision affect theory.
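The reference-point manipulation can be sketched as follows. The 2x amplification of losses below stands in for the multiplicative emotional modulation in ANDREA and is an illustrative constant only:

```python
def framed_valuation(outcome_alive, reference_alive, loss_gain=2.0):
    """Toy subjective valuation of an outcome relative to a movable
    reference point; negative deviations (losses) are emotionally amplified."""
    deviation = outcome_alive - reference_alive
    return deviation if deviation >= 0 else loss_gain * deviation

# Same objective outcome (200 of 600 left alive), different frames:
program_A = framed_valuation(200, reference_alive=0)    # "200 saved": +200 gain
program_C = framed_valuation(200, reference_alive=600)  # "400 die": -400 loss, amplified
```

Identical objective outcomes yield a positive valuation under the gain frame and a strongly negative one under the loss frame, which is the preference-reversal pattern described above.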

4.4. Predictions

Our neurological explanation of prospect theory suggests a range of testable neural-level predictions and hypotheses. Litt et al. (2006) outline several such predictions in relation to loss aversion and the behavioral influence of serotonin. For example, the extent of a particular individual's hypersensitivity to losses is hypothesized to be correlated with the concentration in the amygdala of a specific serotonin receptor subtype, which would influence the degree to which negative reward prediction errors affect amygdala activity. As well, degraded connectivity between midbrain dopamine neurons and the raphe serotonin system is predicted to increase emotionally influenced overvaluation of both gains and losses, due to mutual attenuation effects that we have modeled between these systems. Such correlation between loss and gain sensitivity has indeed been shown in recent work by Tom and colleagues (2007). The particular neural activity they describe suggests the mechanism underlying this relationship may involve important additions to the computations captured in the ANDREA model, such as the effects of noradrenergic circuits.

Further empirical investigation of the neural correlates of loss aversion can provide more such tests and potential falsifications of the relationships proposed in ANDREA, although practical barriers to imaging brain stem serotonergic activity limit the capacity of the approach of the Tom et al. study in this respect. Additionally, this work and other explorations of "prospect theory on the brain" by this team did not show any significant amygdala activity relevant to loss aversion (Tom, Fox, Trepel, & Poldrack, 2007; Trepel, Fox, & Poldrack, 2005). Besides the studies we have previously cited that indicate an important role for the amygdala in emotional valuation, significant amygdala activity


directly corresponding to loss aversion has contrastingly been directly observed in other fMRI experiments (e.g., Weber et al., 2007). Such discrepancies and limitations might be resolved by employing alternative experimental techniques; for instance, it is possible to temporarily diminish cerebral serotonin levels by ingestion of a tryptophan-depleting drink (Cools et al., 2005), which our model suggests would specifically diminish loss aversion. Depletion studies of decision-related phenomena hold much promise for testing the validity of neurocomputational models like ANDREA, as well as advancing our understanding of the biological bases of complex behaviors and cognitions. Various other anomalous choice-related behaviors arising from specific disconnections and damage patterns are also described by Litt et al. (2006). The enriched conceptions of decision phenomena provided by neural-level exploration allow for postulation of potential influences on behavior that would be missed by studies restricted to examining higher-level psychological processing.

A straightforward set of additional predictions deriving from our ANDREA simulations would be expectations of activation of the brain regions described in our model (Fig. 1) to be observed in imaging experiments involving people performing preference and choice tasks. But a more interesting explanatory and predictive utility of the model involves the study of individual differences in value functions. While the general features of the curve have been reproduced by many different experiments and in large numbers of people, the specific functions of different individuals are known to vary widely (e.g., Kahneman & Tversky, 1982). Because our simulations essentially represent the value function produced by a single brain (to which we have complete access, as its designers and builders), the model offers biological reasons for why and how individual value functions vary, while preserving some common general features. As discussed previously, connectivity strength between dorsal raphe serotonin and the amygdala can influence loss aversion, and thus the nature of the subjective valuations performed in orbitofrontal cortex. In persons with heightened serotonin sensitivity in the amygdala, we would expect to see a value function with an even steeper slope for losses than for gains. The opposite also holds, in terms of lessened serotonergic influence or stronger midbrain dopaminergic connectivity, with either of these cases predictive of less discrepancy in slope between losses and gains (i.e., reduced loss aversion). The damaged innervation between the raphe and midbrain mentioned earlier would have the effect of steepening both the loss and gain portions of the curve, although the slope ratio might be unaffected. These connectivity manipulations can be understood psychologically as altering the extent to which one is emotionally aroused by losses and gains, with this affective influence central to producing subjective valuations of these deviations from a reference point.

We also predict that specific neural saturation range differences between brains may underlie individual differences in value function shape, with more readily saturating orbitofrontal populations prone to producing curves that level off more quickly and markedly. In the NEF, this saturation can itself be modeled asymmetrically around 0, so we can readily create a neural system of specific architecture and connectivity that yields significant leveling-off for losses but much less so for gains, or vice versa. In sum, the ANDREA model allows for numerous experimentally testable neural-level predictions regarding prospect theoretic behavior.

5. A neural account of decision affect theory

The hedonic influence of prior expectations and counterfactual comparisons on the subjective valuation of outcomes is characterized by the work of Mellers and colleagues on what they call decision affect theory (Mellers & McGraw, 2001; Mellers et al., 1997; Mellers, Schwartz, & Ritov, 1999). Its fundamental claim is that evaluation by an individual of an outcome, event or decision option is strongly influenced by the "relative pleasure" it is considered to provide (Mellers, 2000). This relativity derives in part from the effects of counterfactual comparisons, as illustrated by the finding that Olympic silver medalists are more likely to feel disappointed than bronze medalists because of generally higher personal expectations (McGraw, Mellers, & Tetlock, 2005). Another factor is the degree to which an obtained outcome is considered surprising, with greater emotional impact for unexpected results (either good or bad) than for expected outcomes. The mathematical expression of decision affect theory is

R_O = J[u_O + d(u_O − u_E) · (1 − s_O)]    (5)

(cf. Eq. (1) in McGraw et al., 2005). R_O is the emotional feeling associated with the obtained outcome and J is a linear function relating the felt pleasure to a specific numerical response. u_O and u_E are the respective utilities of the obtained and expected outcomes, and d(u_O − u_E) is a disappointment function that models how the obtained outcome is compared to the alternative expected outcome. s_O is the subjectively judged probability of the obtained outcome actually occurring, so the weighting by the complementary (1 − s_O) term models the degree to which the obtained outcome was not expected (i.e., the subjective probability that something else would occur). The importance of emotional influence becomes clear with the finding that people will choose what feels best – that is, make decisions in such a manner as to maximize average positive emotional experience (Mellers et al., 1997).
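Eq. (5) can be sketched with a linear J and a linear disappointment function d. The weight of 1.2 on d below is an illustrative assumption chosen so the surprise term can dominate, not a value fitted by Mellers and colleagues:

```python
def decision_affect(u_obtained, u_expected, s_obtained,
                    J=lambda r: r, d=lambda diff: 1.2 * diff):
    """Sketch of Eq. (5): R_O = J[u_O + d(u_O - u_E) * (1 - s_O)].

    The disappointment/elation term d(u_O - u_E) is weighted by how
    unexpected the obtained outcome was, 1 - s_O.
    """
    return J(u_obtained + d(u_obtained - u_expected) * (1.0 - s_obtained))

# A surprising $17.50 win (P = 0.09) can feel better than an
# unsurprising $31.50 win (P = 0.94), both compared against $0.
surprising_small = decision_affect(17.5, 0.0, 0.09)
unsurprising_large = decision_affect(31.5, 0.0, 0.94)
```

With these assumed functions the surprising smaller gain yields the higher affective rating, the "less feels like more" pattern discussed below; a fully expected outcome (s_O = 1) reduces to R_O = u_O.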

Thus feelings about outcomes and choices, and hence the decisions people may be expected to make, are greatly influenced by the size and valence of the discrepancy between anticipated and actual results. The expected emotional reaction to gaining $20 will be vastly different if the prior expectation is gaining $100, versus the case where the expected yield is only $1. Indeed, the degree of influence of


Fig. 8. Simulation results for framing that changes emotional-context. (a) A step input to orbitofrontal cortex indicating a negative change in value, such as the consideration of a situation in which one's actions cause the death of another person. (b) Two different base arousal inputs to the amygdala, corresponding to differing levels of context-produced emotional engagement. (c) Because of differing base arousal levels, arousal upsurges corresponding to the negative valuation deviation differ as well. (d) Higher context-motivated base arousal leads to greater amplification of the subjective valuation change, and thus a belief that the more arousing scenario is actually worse than the objectively equivalent but less arousing scenario.


this discrepancy from anticipated results is such that an objectively worse outcome can sometimes be more pleasurable than one which is better. Consider the easily imaginable experience of feeling happier in stumbling across a 20-dollar bill while walking home from work than from receiving an underwhelming 1% raise the same day, despite the monetary value of the raise being significantly higher than that of the found note. Decision affect theory thus describes in a revealing and systematic fashion the nature of certain situations in which less can actually feel like more. In describing a neural basis for the theory suggested by ANDREA, we shall draw upon an integration of the sort of framing discussed in our examination of prospect theory with the effects of the emotional-context of information presentation on valuation and choice, which we now explore in depth.

5.1. Framing through direct emotional arousal influence

Our earlier discussion of framing effects focused on the type most discussed in prospect theory, concerning alternative descriptions of losses and gains. However, framing can affect decisions in other ways, as in the trolley-footbridge experiments of Greene and colleagues (2001). These experiments are not reference-value manipulations, and are therefore not explainable by the neural mechanisms that we described for prospect theory. The reversal in morality judgments occurs between two outcomes that are both described as killing one person to save five others. The manipulation involved here is not one of reference point, but rather the personal or impersonal nature of the specific action performed that leads to the described outcome. It is thus differing contexts of choice presentation (i.e., the nature of the situation story) that produce the change in situational evaluation, rather than any suggestive presentation of choices in terms of either losses or gains. We will call this emotional-context framing, in contrast to the reference-value framing we discussed in relation to prospect theory.

We propose that emotional-context framing in the trolley-footbridge dilemma occurs through increased arousal associated with the direct, personalized action of pushing a person to their death, compared to the more detached and impersonal act of flipping a switch that will cause a trolley to divert from hitting five people towards hitting a single person. Such an increase in emotional engagement induces greater amplification of the subjective evaluation of causing a death in the personal case, which would provide a neurological basis for the reversal in typical judgments of the morality of the actions in question. fMRI experiments by Greene and colleagues seem to support this neural account of the trolley-footbridge dilemma, showing increased amygdala and orbitofrontal activity in cases of highly personal characterizations of morally debatable actions (Greene & Haidt, 2002; Greene, Nystrom, Engell, Darley, & Cohen, 2004).

Fig. 8 illustrates the neural explanation we have described for the type of framing produced by changing the emotional-context of the presented information. Just as for reference-value framing, we get different subjective valuations for scenarios that have identical objective values, which would allow for preference reversals to occur when the decision frame is altered. The primary difference is that this mechanism employs a direct manipulation of emotional arousal state, whereas our neural basis for the reference-value framing caused emotional modulation changes indirectly through manipulation of valuation deviation reference points.
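The emotional-context mechanism can be sketched as a direct shift of base arousal feeding a multiplicative modulation of valuation (cf. Eq. (1)); the specific arousal numbers and the (1 + arousal) form below are simplifying assumptions of ours:

```python
def modulated_valuation(objective_value, base_arousal):
    """Multiplicative arousal modulation of an orbitofrontal valuation:
    higher base arousal amplifies the same objective deviation."""
    return objective_value * (1.0 + base_arousal)

# Same objective outcome (one death to save five), different contexts:
impersonal = modulated_valuation(-1.0, base_arousal=0.5)  # flipping a switch
personal = modulated_valuation(-1.0, base_arousal=2.0)    # pushing a person
```

The personal, high-arousal framing of the identical objective outcome comes out subjectively worse, matching the reversal in moral judgment described above.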

5.2. Decision affect theory as an integrated framing phenomenon

Explanation of the experimental results examined in decision affect theory requires both reference-value framing and emotional-context framing. The hedonic impact of counterfactual comparisons can be produced by setting the reference point of the valuation deviation for an obtained outcome to the value of the unobtained counterfactual result, which is exactly reference-value framing.


Then winning $20 when the counterfactual comparison is a $100 win would be construed as a loss, whereas a counterfactual comparison to winning only $1 would reframe the $20 win as a gain. In addition, the degree to which the obtained outcome was unexpected can have the effect of emotional-context framing. We formally model this in ANDREA by having emotional-context influence neural behavior through the arousal input A1 (see Eq. (4)) in a manner that takes into account the perceived probability of the obtained outcome X actually occurring:

A1(t) = A0(t) + k · (1 − P[outcome X]).    (6)

In this enriched conception of emotional arousal, A0 fills the previous role of A1 as a base arousal level determined by external factors and provided as an input to the model. A1 emotional arousal is now explicitly increased in inverse proportion to the expected probability of obtaining the outcome under consideration. Surprising, low-probability outcomes produce higher arousal than unsurprising outcomes, where 1 − P[X] is closer to 0. The constant multiplier k may be related to the relative affect-richness of the outcome in question, as this variable seems strongly related to the degree to which uncertainty affects valuations (Rottenstreich & Hsee, 2001).
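Eq. (6) can be sketched directly; k = 1 below is an arbitrary illustrative choice (the theory ties k to the affect-richness of the outcome):

```python
def base_arousal(A0, p_outcome, k=1.0):
    """Sketch of Eq. (6): A1(t) = A0(t) + k * (1 - P[outcome X]).

    Surprising (low-probability) outcomes raise arousal more than
    expected (high-probability) ones; a certain outcome adds nothing.
    """
    return A0 + k * (1.0 - p_outcome)

surprising = base_arousal(A0=0.5, p_outcome=0.04)    # unlikely outcome obtained
unsurprising = base_arousal(A0=0.5, p_outcome=0.96)  # likely outcome obtained
```

The surprising outcome produces the higher A1 input, which then feeds the multiplicative modulation of valuation.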

This direct manipulation of base emotional arousal is a formalization of emotional-context framing, in this case a context related to outcome uncertainty. It and reference-value framing, implemented in ANDREA by the mechanisms described earlier, together produce the canonical finding of decision affect theory that objectively worse outcomes can sometimes feel better than more advantageous alternatives (Fig. 9). The mechanisms described above and illustrated in Fig. 9 allow us to map the standard mathematical model of decision affect theory (Eq. (5)) onto specific neural structures and computations. The calculation of subjective utilities and their subsequent comparison, as embodied in the d(u_O − u_E) term in Eq. (5), are performed neurologically through step-functional deviation in orbitofrontal cortex exactly as we modeled for prospect theoretic subjective valuation. The 'disappointment' effects of comparing obtained and expected outcome valuations result from prediction error computations by dopamine and serotonin networks feeding back to influence emotional arousal encoded in the amygdala, which in turn modulates orbitofrontal valuations. The subjective probability augmentation to our arousal representation (Eq. (6)) is identical to the (1 − s_O) surprise term in the decision affect theory model of Mellers and colleagues. In combination with our multiplicative modulation of valuation by affective arousal (Eq. (1)), it is clear that this enhancement also has similar mathematical effects to those of the surprise term in Eq. (5).

Figs. 10 and 11 show the results of more comprehensive simulations of the behavioral findings of Mellers and colleagues (Mellers et al., 1997, 1999). In Fig. 10, the value of the unobtained counterfactual comparison outcome is held constant at $0 (i.e., neither losing nor gaining money) while the obtained outcome value and the expected probability of obtaining that outcome are varied. Fig. 11 describes an opposing experiment where $0 is the unvarying obtained outcome – that is, subjects neither lose nor gain any money. What is instead varied is the expected probability of obtaining this $0 outcome and the value of the unobtained outcome used as a counterfactual comparison. In both figures and in both the behavioral and ANDREA simulation results, lower-probability curves (corresponding to surprising obtained outcomes) produce more intense affective experiences, as reflected by more extreme emotional response ratings. As well, there are cases in both the behavioral findings and our simulation data where an objectively worse outcome produces a more positive emotional response than one which is objectively greater. For instance, both Fig. 10a and b show more elation from winning $17.50 instead of $0 with an expected probability of such a win of only 0.09 than for winning $31.50 instead of $0 when the anticipated probability of this outcome is 0.94. The surprising smaller gain feels better than the unsurprising larger gain. Our proposed neural basis for decision affect theory thus provides a plausible and thorough biological characterization of the phenomenon.

We have been able to explain the central findings of decision affect theory using the same mechanisms that we applied to the phenomena explained by prospect theory, with the addition of emotional-context framing. A major motivating factor for the exploration of any subject at more basic levels of explanation is the desire to unite findings that are disconnected at higher levels of study through a set of shared lower-level mechanisms. In this vein, an important undertaking in the neuroscientific exploration of the psychology of preference and choice is to uncover shared underpinnings for phenomena that have yet to be rigorously tied together at the behavioral level. ANDREA demonstrates one such means of connecting decision affect theory and prospect theory, via the two neural mechanisms for framing we have outlined.

5.3. Predictions

There are undoubtedly other kinds of framing besides the reference-value and emotional-context types that we have discussed. Additional brain mechanisms may be required to explain other cases in which differing modes of identical information presentation produce divergent results, such as the case of reversals in preference when options are considered jointly versus separately (e.g., Hsee et al., 1999, 2005). Emotional-context framing could also be relevant to explaining a prominent result in constructive memory research. In the car accident study by Loftus and Palmer (1974), they describe significant effects on speed-of-impact memory judgments by subjects based simply on the emotiveness of the action verb used to describe the collision between two cars (''contacted each other'' producing lower remembered speed judgments than ''smashed into each

[Fig. 9 panels: time courses of OFC and AMYG input and output activity for two lotteries, obtained 4.5 vs. unobtained 0 with P(4.5) = 0.96 (unsurprising), and obtained 3.5 vs. unobtained 0 with P(3.5) = 0.04 (surprising); the surprising smaller gain yields the larger responses (7.582 vs. 3.829; 11.94 vs. 10.5).]
Fig. 9. Decision affect theory as an integration of framing ideas. (a) Counterfactual comparisons between obtained and unobtained outcomes are encoded through appropriate setting of valuation deviation reference points, as in reference-value framing. (b) The surprising natures of obtained outcomes are encoded through direct manipulation of base emotional arousal input, as in emotional-context framing (Eq. (5)). (c) The effect of surprise can outweigh an objectively larger valuation deviation, producing greater hedonic intensity for a surprising smaller gain than an unsurprising larger one in this case. (d) The subjective valuation of an objectively worse outcome is greater than that of the better outcome, because of the added pleasure of being surprised by the smaller gain here.

A. Litt et al. / Cognitive Systems Research 9 (2008) 252–273 265

other''). This seems analogous to the baseline emotional arousal manipulation inherent in our second neural framing mechanism, which in turn causes increasingly amplified subjective valuations that would correspond to inflated speed judgments by subjects when asked more emotively framed questions by Loftus and Palmer.

Further predictions arising from the framing mechanisms we have described relate to how the neural activity making up these mechanisms, and hence the effects of framing, could be induced without performing explicit decision framing. For instance, it might be possible to manipulate default subjective valuation reference points either upwards or downwards via positive or negative priming, or perhaps even through direct neuropharmacological intervention to influence orbitofrontal activity. This could potentially produce effects similar to reference-value framing in a less conspicuous manner. Similarly for emotional-context framing, manipulation of affective arousal in a manner wholly unrelated to situation context could cause ''bleed-over'' effects identical to those produced by situation-related arousal modulation. Methods of manipulation include prior exposure to violent or sexual imagery, relaxing or stressful preceding tasks, and direct pharmacological modulation of amygdala activity. There is a wide variety of means by which brain activity similar or identical to our mechanisms of framing can be induced either behaviorally or neurochemically, and we predict that such alternative routes should produce behaviors in people similar to explicit framing effects, regardless of whether or not they are aware of how they are being influenced.

6. General discussion

We have shown how neural affective decision theory, as stated in our four principles and as implemented in the ANDREA model, can account for the central phenomena described by prospect theory and decision affect theory. Our view of the general process of decision making is summarized in Fig. 12. People are presented with a decision problem by verbal or perceptually experienced descriptions which they must interpret based on the context in which the decision is being made, resulting in an overall representation of the problem that is both cognitive and emotional. Options, outcomes, and goals can be encoded by verbal and other cognitive representations, but with an ineliminable emotional content; in particular, goals are emotionally tagged. The translation of the presentation of a problem and its context into an internal cognitive–emotional

Fig. 10. Comparing behavioral and simulation results in decision affect theory. (a) Behavioral findings of Mellers et al. (1997), for lotteries with a constant $0 unobtained counterfactual comparison and varying obtained outcomes and expected obtained outcome probability. Emotional response was reported by subjects by ratings of feelings on a scale of 50 (extreme elation) to -50 (extreme disappointment). (b) Results of model simulations of the Mellers et al. experiment in (a), with data points determined through simulations in line with our proposed neural basis for decision affect theory (Fig. 9).

Fig. 11. Comparing behavioral and simulation results in decision affect theory, opposite experiment to that described in Fig. 10. (a) Behavioral findings of Mellers et al. (1997), for lotteries with constant $0 obtained outcome and varying unobtained counterfactual comparison outcome value and expected obtained outcome probability. (b) Model simulation of the experiment in (a), with data points determined through simulations in line with our proposed neural basis for decision affect theory (Fig. 9).


representation produces framing effects, because different representations will invoke different neural affective evaluations. ANDREA shows how these evaluations can be computed by coordinated activity among multiple brain areas, especially the orbitofrontal cortex, the amygdala, and dopamine and serotonin systems involved in encoding positive and negative changes in valuation. The result is decisions that select options inducing the highest emotional subjective valuations.

ANDREA has greater explanatory scope than other neurocomputational models of decision and reward that have focused on in-depth modeling of more restricted subsystems of the brain, and accordingly limited ranges of behavioral phenomena. Two such examples are the model of reward association reversal in orbitofrontal cortex by Deco and Rolls (2005) and the GAGE model of cognitive–affective integration in the Iowa gambling task and self-evaluations of physiologically ambiguous emotional states (Wagar & Thagard, 2004). Additionally, a straightforward diffusion decision process implemented in superior colliculus cells seems able to accurately characterize accuracy and reaction time in simple two-choice decision tasks (Ratcliff, Cherian, & Segraves, 2003). Task modeling of this sort is important for exploring basic details of neural mechanisms for specific phenomena, but examining brain processes on a larger scale is required for explaining more complex and wide-ranging psychological findings, such as prospect theory and decision affect theory. Busemeyer and Johnson (2004) describe a connectionist model that they apply to a range of behaviors as diverse as those explored by ANDREA, including preference reversal effects and loss aversion. The network model called affective balance theory (Grossberg & Gutowski, 1987) also explores a wide range of risky decision phenomena in a mathematically sophisticated fashion, and proposes effects of emotional context on cognitive processing that are largely consistent with those implemented in ANDREA.

Fig. 12. Overview of neural affective decision theory.


The main improvement that our approach offers over these two models is in neurological realism, as reflected by modeled characteristics of individual processing units ('neurons') and the mapping of proposed computations onto specific brain regions and interactions supported by empirical findings. The models of Grossberg and Gutowski (1987) and Busemeyer and Johnson (2004) are not comparable in this respect to either ANDREA or the previously mentioned works of Deco and Rolls (2005) and Wagar and Thagard (2004). This of course is not a criticism of such methods of modeling. Rather, it is an indication of the different levels at which theoreticians can formulate explanations of behaviorally studied psychological phenomena. While the proposed mechanisms are interesting from the larger perspective of computational models of decision making, these artificial neural networks are of a fundamentally different nature than ANDREA and similar models in computational cognitive neuroscience.

A recent model of decision-related interactions between basal ganglia and orbitofrontal cortex by Frank and Claus (2006) recognizes the utility of taking the sort of broad-scale approach we employ in our design of ANDREA, and maintains a similar level of neurological realism and detail. Despite certain similarities regarding modeled brain regions and proposed computations, this model diverges from ANDREA in several fundamental ways, leading to both different mechanisms and different explanatory targets for each of our models. After briefly describing the structure of the Frank and Claus (2006) model, we compare it in detail with our own model of valuation and choice phenomena.

The main focus of the Frank and Claus model is the means by which basal ganglia dopaminergic activity and orbitofrontal computations enable adaptive decision making responsive to contextual information. Computation and representation of expected decision value information is accomplished through a division of labor between subcortical dopamine and prefrontal networks. A basal ganglia dopaminergic network learns to make decisions based on the relative probability of such decisions leading to positive outcomes. This process is augmented by orbitofrontal circuits that provide a working memory representation of associated reinforcer value magnitudes, which exercises top-down control on the basal ganglia activity and allows more flexible response to rapidly changing inputs. The proposed computations are detailed, elegant and well-supported by empirical data, and the model is effective in explaining decision-related behaviors as diverse as risk aversion/seeking, reversal learning, and people's performance in a variant of the Iowa gambling task in both normal and brain-damaged scenarios.

While ANDREA does not implement the specific computations proposed by Frank and Claus (2006), this is due more to differing targets of explanation than to any major inconsistencies in our respective conceptions of the roles of various brain regions in decision making. Both models describe important functional differences between orbitofrontal–amygdala networks and dopaminergic activity, in line with empirical findings demonstrating the involvement of orbitofrontal cortex in valuation and dopaminergic encoding of reward prediction error. How these subsystems might interact is a question that both ANDREA and Frank and Claus (2006) address, and one that has been neglected in previous theoretical modeling of the neurobiology of reward. Nevertheless, whereas Frank and Claus develop a comprehensive characterization of how orbitofrontal cortex learns and represents reinforcer value, our goal is to describe how specific external influences differentially alter the magnitudes of these orbitofrontal-encoded values, such as via the asymmetric emotional modulation of valuations by losses and gains that we describe as the basis of loss aversion in prospect theory. As a result, the models are best suited to providing neural explanations of different psychological phenomena, and where they address similar phenomena they do so with contrasting emphases on specific relevant brain mechanisms.

The most prominent difference is that of representational complexity of amygdala activity. In both models, the amygdala encodes the magnitude of losses and gains in proportion to overall activity level, which then influences orbitofrontal representations of reward values. Frank and Claus (2006) do not explore how the amygdala forms such representations of reinforcer magnitude, providing them instead as direct model inputs. In contrast, ANDREA models multifaceted means by which emotional arousal related to outcome magnitude is encoded by the amygdala (Eqs. (4) and (5)). This allows for the postulation of neural explanations for phenomena not addressed by Frank and Claus (2006), such as multiple mechanisms for framing and the observations of decision affect theory. In addition, while both models describe loss aversion as resulting from greater amygdala activation by losses than equivalent gains, ANDREA offers specific neurological explanations of


how this might occur, through differential calibration of distinct loss and gain reward prediction error networks, as well as feedback to the amygdala from dorsolateral and cingulate processing of behavioral saliency. Our detailed characterization of how the amygdala comes to represent magnitude information thus allows us both to explain additional phenomena and to provide more complete biological mechanisms for valuation and decision behaviors simulated by both models.

Important structural differences between the models result from their differing conceptions of amygdala activity. The Frank and Claus (2006) model focuses primarily on the top-down biasing effect of orbitofrontal activity on gradual, multi-trial learning in the basal ganglia dopamine network, with much less emphasis on the reciprocal influence of relative reinforcement probability computations on orbitofrontal activation. We are able to explore such effects in depth in ANDREA through feedback of reward prediction error information to the amygdala that influences its encoding of emotional arousal, which in turn modulates orbitofrontal valuations. These valuations have top-down biasing effects on reinforcement learning and action selection similar to those modeled by Frank and Claus (2006); for instance, differing orbitofrontal representations of identical reward values caused by framing effects produce different activity in the dopamine and serotonin prediction error subsystems.
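The loop just described, in which prediction error feeds back to amygdala arousal, arousal modulates orbitofrontal valuation, and valuation drives new prediction errors, can be sketched at the level of the represented variables rather than spiking populations. All gains below are illustrative assumptions, not fitted model parameters:

```python
def andrea_loop(V, steps=50, A0=1.0, b=0.2, c=-0.3, r=0.6, a=0.3):
    """Illustrative discrete-time sketch of the arousal-valuation loop:
    amygdala arousal A modulates orbitofrontal valuation S = A * V;
    dopamine and serotonin encode rectified, mutually opposed prediction
    errors on S, which feed back to both the running prediction P and
    arousal. Gains A0, b, c, r, a are assumptions for illustration."""
    P = 0.0            # running valuation prediction
    DA = HT = 0.0      # dopamine / serotonin activity
    trace = []
    for _ in range(steps):
        A = A0 + b * DA + c * HT           # arousal with DA/5-HT feedback
        S = A * V                          # arousal-modulated valuation
        e_pos = max(0.0, S - P)            # rectified positive error
        e_neg = max(0.0, P - S)            # rectified negative error
        DA = r * e_pos - (1 - r) * e_neg   # biased opponency
        HT = r * e_neg - (1 - r) * e_pos
        P += a * (e_pos - e_neg)           # prediction update on net error
        trace.append(S)
    return trace

trace = andrea_loop(V=2.0)
```

In this sketch, early valuations overshoot while prediction errors are large and arousal is perturbed, then settle as the prediction catches up and both error signals decay toward zero.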

There are several other issues on which the models differ in important ways. Perhaps the most obvious is the role of serotonin acting in opponency with dopamine in reward prediction error. Frank and Claus (2006) argue that the low baseline firing rates of dopaminergic cells need not mean that firing rate depressions are less capable of encoding highly negative outcomes, as there may exist countervailing sensitivity differences in dopamine receptors to firing bursts versus dips. While they remain open to a role for serotonin in negative reinforcement learning and aversive stimulus processing mediated within OFC, they argue for the centrality of dopamine firing dips to negative valuation processing mediated by the basal ganglia, even in light of evidence showing these dips are physiologically limited in scope (e.g., Bayer & Glimcher, 2005). Our model can be considered structurally consistent with this line of reasoning, since the mutual biasing effects we have implemented between dopamine and serotonin produce the same dopaminergic firing depressions utilized computationally by Frank and Claus in cases where we describe encoding via serotonergic activity. Clearly, though, the models differ markedly in the degree to which they assign explanatory import to dopamine firing dips versus concomitant serotonergic firing increases. Further differences are evident in the respective extents of brain region modeling. Frank and Claus employ more complex conceptions of orbitofrontal cortex and midbrain dopamine areas than are implemented by ANDREA, differentially utilizing specific subpopulations of these broadly defined brain areas. In contrast, ANDREA includes limited but important contributions from anterior cingulate and dorsolateral prefrontal cortices. These include involvement in encoding the behavioral relevance of outcomes, how this encoding may differ for positive and negative outcomes, and the subsequent effects of behavioral saliency on emotional arousal. These are brain regions omitted in the model of Frank and Claus (2006) that they acknowledge may be crucial to understanding decision phenomena. Finally, ANDREA is the first model to explore a possible role for neural saturation in explaining the nature of subjective valuation. This effect is not examined in the Frank and Claus work, which does not address the leveling-off of the prospect theory value function for increasingly extreme losses and gains. Thus, while these two models are fairly consistent with one another and share similarities in their large-scale approaches to modeling the neural foundations of decision making, they both make unique contributions to explaining different aspects of relevant behaviors and psychological processes.

One of the most fertile areas for future applications of neural affective decision theory and the ANDREA model is the burgeoning field of neuroeconomics, which operates at the intersection of economics, psychology, and neuroscience (Camerer, Loewenstein, & Prelec, 2005; Glimcher & Rustichini, 2004; Sanfey et al., 2006). Examples of such applications include the previously mentioned findings regarding preference reversal in joint versus separate option evaluation (Hsee et al., 1999, 2005) and observed interactions between risk, uncertainty and emotion (Rottenstreich & Hsee, 2001), both of which seem explainable via the neurological mechanisms we have modeled. Unlike traditional economic theory, we do not take preferences as given, but rather explain them as the result of specific neural operations. A person's preference for A over B is the result of a neural affective evaluation in which the representation of A produces a more positive anticipated reward value (or at least a less negative value) than the representation of B. As depicted in Fig. 12, the neural affective evaluation of options depends on their cognitive–emotional representation, which can vary depending on the presentation and context of information. This dependence explains why actual human preferences often do not obey the axioms of traditional microeconomic theory. In addition to neuroeconomics, we are exploring the relevance of our theory and model to understanding ethical judgments, the neural bases of which are under increasing investigation (Casebeer & Churchland, 2003; Greene & Haidt, 2002; Moll, Zahn, de Oliveira-Souza, Krueger, & Grafman, 2005). Finally, while neural affective decision theory is primarily intended as a descriptive account of how people actually do make decisions, it can provide a starting point for developing a prescriptive theory of how they ought to make better decisions (Thagard, 2006).

Like all models, ANDREA provides a drastically simplified picture of the phenomena it simulates, and there are many possible areas for improvement and extension. These include increasing the complexity of individual populations, adding more brain areas, modeling more relationships between brain areas, and exploring the effects of neuronal firing saturation beyond simply orbitofrontal


cortex. Nevertheless, we have described a variety of neurobiologically realistic mechanisms for fundamental decision processes, and shown their applicability to explaining several major experimental findings in behavioral decision research. Besides implementing original mechanistic ideas, such as a role for saturation in explaining diminishing marginal sensitivity in prospect theory, ANDREA contributes to two important classes of explanation in decision neuroscience: (1) generalization and novel syntheses of hitherto unrelated mechanisms of neural processing (e.g., multiplicative models of attention and reward reinforcement learning); and (2) specific and detailed grounding of behaviorally explored psychological phenomena in such plausible and realistic neurocomputational mechanisms. The result, we hope, is a deeper understanding of how and why people make the choices that they do.

Acknowledgments

We thank Daniela O'Neill, Christopher Parisien, Bryan Tripp, and three anonymous reviewers for comments on earlier versions. This research was supported by the Natural Sciences and Engineering Research Council of Canada, and the William E. Taylor Fellowship of the Social Sciences and Humanities Research Council of Canada.

Appendix A. Neurocomputational details

Our reward model was implemented in MATLAB 7.0.1 on a PC with an Intel Pentium 4 processor running at 2.53 GHz, with 1.00 GB of RAM available. For simulations of the extent that we have conducted and described here, these specifications represent close to the minimum required, based on the memory and other resource requirements of the most recent version of the NESim NEF simulation software running within MATLAB. NESim documentation and software download links are available online at http://compneuro.uwaterloo.ca.

We modeled spiking activity for a total of 7600 neurons spread over seven specific populations (Fig. 1), using the common and physiologically realistic leaky integrate-and-fire (LIF) model for each of our modeled neurons (see Appendix B). In particular, we use 800–1200 neurons for simulating each of the amygdala, orbitofrontal cortex, ventral striatum, anterior cingulate cortex, and dorsolateral prefrontal cortex, representing one- to three-dimensional vectors as needed in the neural engineering framework (Appendix B). The areas representing midbrain dopaminergic neurons and the dorsal raphe nucleus of the brainstem are each modeled with 1200-neuron ensembles, each with several discrete subpopulations, in order to capture the additional complexities involved in the encodings and transformations we assign to these areas in our model (recurrent, rectified, biased-opponent calculation of reward prediction error; see Appendix C).

Each individual neuron is based on a reduced-complexity biophysical model that includes features fundamental to most neurons, including conventional action potentials (spikes), spike train variability, absolute refractoriness, background firing, and membrane leak currents. The membrane time constant for our LIF neurons is set to 10 ms, with a refractory period (a post-spike delay during which the neuron may not fire) of 1 ms. We introduce at simulation runtime 10% Gaussian, independent zero-mean noise, relative to normalized maximum firing rates, to simulate the noisy environment in which our neurons operate. These choices are based on plausible biological assumptions (see Eliasmith & Anderson, 2003), and we have made reasonable efforts to select neurobiologically realistic values for other network cell parameters as well. For example, our 5 ms synaptic time constant for dorsal raphe serotonergic neurons is consistent with the 3 ms decay constants observed in vivo (Li & Bayliss, 1998). Neurons in the cortical areas we have modeled have been shown to have maximum firing rates ranging from 20–40 Hz in dorsolateral prefrontal and orbitofrontal areas (Wallis & Miller, 2003) to at least 50 Hz in anterior cingulate cortex (Davis, Hutchison, Lozano, Tasker, & Dostrovsky, 2000). Thus, our selection of a 10–80 Hz saturation range for neurons in these model ensembles is a physiologically reasonable compromise to maintain population sizes that are manageable for simulation purposes. Ensembles must grow increasingly large to allow for meaningful representation when slower-firing, smaller saturation-range neurons make up the ensembles.

Practical considerations nonetheless made necessary some limitations on biological realism. Principally, the saturation ranges we selected for our modeled subcortical regions are appreciably higher (by roughly a factor of 10) than those observed empirically for typical neurons in these areas. The much larger neural ensemble sizes that would be required for clean representation and transformation using the extremely low experimentally observed firing rates would have made our large-scale simulations computationally impracticable given available resources. We additionally support this compromise of biological realism by noting that significantly higher firing peaks (greater than 100 Hz) are observed in the bursting behavior of both certain raphe serotonergic neurons (e.g., Gartside et al., 2000; Hajos, Gartside, Villa, & Sharp, 1995) and subpopulations of the amygdala (Driesang & Pape, 2000; Pare & Gaudreau, 1996), and to a lesser degree in midbrain and striatum dopaminergic neurons as well (Hyland, Reynolds, Hay, Perk, & Miller, 2002). The specific activity our model produces in these subcortical areas seems well suited to coding via bursting neurons (large but transient firing upsurges that interpose lengthier periods of near-zero activity). While we do not define here an explicit alternative neuron model to LIF, Eliasmith (2005a) describes how bursting could be incorporated into the NEF, and thus into NESim simulations.

For the most part, we have chosen neuron firing thresholds (that is, the respective input levels above which individual neurons begin to respond) from an even distribution over a range represented symmetrically around zero, with neuron preferred directions in the space of


representation also chosen from an even distribution (i.e., equal numbers of 'on' and 'off' neurons; see Appendix B). The exceptions to this rule are single subpopulations of both the dorsal raphe and midbrain dopaminergic regions. In these cases, our establishment of minimum firing thresholds of zero combines with an exclusive use of positively sloped 'on' neurons in these subpopulations to produce insensitivity to negative values (i.e., rectified reward prediction error encoding). This corresponds well with experimentally observed physiological limitations in the computation of reward prediction error in these brain regions (see Bayer & Glimcher (2005), as well as the discussion of dopamine and serotonin within the main text description of the ANDREA model).
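The rectified encoding described here can be caricatured in rate mode: an 'on' neuron with its threshold at zero simply never responds to negative inputs. The gain value below is an illustrative assumption:

```python
def rectified_rate(x, gain=50.0, threshold=0.0):
    """Rate-mode caricature of a positively sloped 'on' neuron with its
    firing threshold at zero: purely rectified encoding, insensitive to
    negative values. The gain is illustrative, not a model parameter."""
    drive = x - threshold
    return gain * drive if drive > 0 else 0.0

# Negative prediction errors produce no response in the 'on' population:
rates = [rectified_rate(x) for x in (-0.5, -0.1, 0.0, 0.1, 0.5)]
```

This is the sense in which a zero-threshold, all-'on' subpopulation implements rectified reward prediction error encoding: the negative half of the axis is invisible to it.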

Appendix B. NEF representation, transformation and dynamics

The NEF consists primarily of three fundamental principles regarding the representation of information in neural populations, the means by which these representations are transformed through interactions between populations, and the control-theoretic characterization of neural dynamics. Eliasmith and Anderson (2003) present a rigorous explication and analysis of the NEF, but the mathematical details we outline here should be sufficient for understanding the fundamental nature and operation of our neural model of affective choice and valuation.

Consider a neural ensemble whose activities a_i(x) encode some vector quantity x(t) mapping onto a real-world quantity (eye position, emotional arousal, etc.). Note that this quantity need not be a vector; scalars, functions, and function spaces can be represented and manipulated in the NEF in a near-identical fashion. The encoding of x involves a conversion of x(t) into a neural spike train:

a_i(x) = \sum_n \delta(t - t_{in}) = G_i[J_i(x(t))],   (B1)

where G_i[\cdot] is the nonlinear function describing the specific nature of the spiking response, J_i is the current in the cell body (soma) of a particular neuron, and i and n are relevant indices (i indexing specific neurons, n indexing the individual spikes produced by a given neuron). The nonlinearity G we employ is the common leaky integrate-and-fire (LIF) model:

dV_i/dt = -(V_i - J_i(x)R)/\tau_i^{RC},   (B2)

where V_i represents somatic voltage, R the leak resistance, and \tau_i^{RC} the RC (membrane) time constant. The system is integrated until the membrane potential V_i crosses the threshold V_{th}, at which point a spike \delta(t - t_{in}) is generated and V_i is reset to zero for the duration of the refractory period \tau_i^{ref} (Eliasmith & Anderson, 2003). A basic description of the soma current is

J_i(x) = \alpha_i \langle \tilde{\phi}_i \cdot x \rangle + J_i^{bias} + \eta_i,   (B3)

where J_i(x) is the current input to neuron i, x is (in this case) the vector variable of the stimulus space encoded by the neuron, \alpha_i is a gain factor, \tilde{\phi}_i is the preferred direction vector of the neuron in the stimulus space, J_i^{bias} is a bias current (accounting for any background activity), and \eta_i models any noise to which the system is subject. Note in particular that the dot product \langle \tilde{\phi}_i \cdot x \rangle describes how a potentially complex (i.e., high-dimensional) physical quantity, such as an encoded stimulus, is related to a scalar signal describing the input current. For scalars, the encoding vector is either +1 (an 'on' neuron) or -1 (an 'off' neuron). (B1) thus captures the nonlinear encoding process from a high-dimensional variable, x, to a one-dimensional soma current, J_i, to a train of neural spikes, \delta(t - t_{in}).

Under this encoding paradigm, the original stimulus vector representation can be estimated by decoding those activities; that is, converting neural spike trains back into quantities relevant for explanations of neural computation at the level of our chosen representations. A plausible means of characterizing this decoding is as a specific linear transformation of the spike train. In the NEF, the original stimulus vector x(t) is decoded by computing an estimate \hat{x}(t) using a linear combination of filters h_i(t) that are weighted by certain decoding weights \phi_i:

\hat{x}(t) = \sum_{i,n} \delta(t - t_{in}) * h_i(t)\,\phi_i = \sum_{i,n} h_i(t - t_{in})\,\phi_i,   (B4)

where the decoding weights are calculated by a mean-squared error minimization (Eliasmith & Anderson, 2003) and the operation '*' indicates convolution. The h_i(t) filters are linear temporal decoders, which are taken to be the postsynaptic currents (PSCs) in the associated neuron i for reasons of biological plausibility. Together, the nonlinear encoding in (B1) and the linear decoding in (B4) define an ensemble 'code' for the neural representation of x.
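A rate-mode sketch of this encode/decode cycle, with rectified-linear tuning curves standing in for LIF responses and decoders obtained by the least-squares minimization described above, might look as follows (the population size, gains and biases are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Rate-mode sketch of NEF scalar representation (cf. Eqs. (B1) and (B4)):
# rectified affine tuning curves stand in for LIF responses, and the
# decoders phi come from a mean-squared-error minimization.
n, xs = 60, np.linspace(-1.0, 1.0, 201)
encoders = rng.choice([-1.0, 1.0], size=n)      # 'on' / 'off' neurons
gains = rng.uniform(20.0, 80.0, size=n)         # alpha_i (illustrative)
biases = rng.uniform(-40.0, 40.0, size=n)       # J_i^bias (illustrative)

def rates(x):
    """Population response a_i(x): rectified affine drive per neuron."""
    return np.maximum(0.0, gains * (encoders * x) + biases)

A = np.array([rates(x) for x in xs])            # (stimuli, neurons)
# Least-squares decoders: minimize || A @ phi - xs ||^2.
phi = np.linalg.lstsq(A, xs, rcond=None)[0]
x_hat = A @ phi                                 # decoded estimate of x
err = np.max(np.abs(x_hat - xs))                # worst-case decode error
```

With a few dozen randomly tuned neurons, the linear decode recovers the represented scalar to within a few percent across the range, which is the sense in which the ensemble 'code' carries x.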

The next aspect of the NEF to examine is the means by which computations are performed in order to transform the representations present in a given model. The main task to be performed is the calculation of connection weights between the different populations involved in a transformation. As an example, let us consider the transformation z = x \cdot y. The process of connection weight calculation can be characterized as substituting into our encoding equation (B1) the decodings of x and y (as per (B4)) in order to find the encoding of z, which represents our transformation of interest:

c_k(z) = c_k(x \cdot y) = G_k\left[\alpha_k \tilde{\phi}_k (x \cdot y) + J_k^{bias}\right]
       = G_k\left[\alpha_k \tilde{\phi}_k \left(\sum_i a_i(x)\,\phi_i^x \cdot \sum_j b_j(y)\,\phi_j^y\right) + J_k^{bias}\right]
       = G_k\left[\sum_{i,j} \omega_{kij}\, a_i(x)\, b_j(y) + J_k^{bias}\right],

where \omega_{kij} = \alpha_k \tilde{\phi}_k \phi_i^x \phi_j^y represents the connection weights between neurons i, j and k in the x, y, and z populations, respectively. It should be noted that the nonlinear neural activity interaction suggested in this example is avoided

Table C1
Transformation summary

Brain area | Inputs                             | Outputs
AMYG       | A0(t) (ext.), DA(t), 5-HT(t), C(t) | A(t) = A1(t) + b·DA(t) + c·5-HT(t) + C(t), where A1(t) = A0(t) + k·(1 − P[outcome X])
OFC        | V(t) (ext.), A(t)                  | S(t) = A(t)·V(t)
5-HT       | S(t), E+(t), P(t − 1)              | E−(t) = P(t − 1) − S(t); 5-HT(t) = r·E−(t) − (1 − r)·E+(t); P(t) = P(t − 1) + a·E(t)
DA         | S(t), E−(t), P(t − 1)              | E+(t) = S(t) − P(t − 1); DA(t) = r·E+(t) − (1 − r)·E−(t); P(t) = P(t − 1) + a·E(t)
VS         | DA(t), 5-HT(t)                     | E(t) = DA(t) − 5-HT(t)
ACC        | S(t), E(t), C(t)                   | B(t) = 2·(S(t) ≥ 0) − 1; R(t) = B(t)/[g + E(t)]; C(t)
DLPFC      | R(t), 5-HT(t)                      | C(t) = l·5-HT(t)

A0 and V are provided as external inputs to the model. Note the recurrent connectivity and opponent interaction between 5-HT and DA. Abbreviations: AMYG, amygdala; OFC, orbitofrontal cortex; 5-HT, raphe dorsalis serotonergic neurons; DA, midbrain dopaminergic neurons; VS, ventral striatum; ACC, anterior cingulate cortex; DLPFC, dorsolateral prefrontal cortex.

A. Litt et al. / Cognitive Systems Research 9 (2008) 252–273 271

in our actual model – all interactions are implemented in a purely linear fashion, as is typically taken to be the case in real neural systems (see Eliasmith & Anderson (2003) for complete implementation details).
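The weight calculation in the z = x · y example can be checked numerically. In the following sketch the population sizes, gains, encoders and decoder values are arbitrary assumptions; it builds the weight tensor \omega_{kij} = \alpha_k \tilde{\phi}_k \phi_i^x \phi_j^y and confirms that the weighted-sum part of the soma current to neuron k equals \alpha_k \tilde{\phi}_k \hat{x} \hat{y}, as the derivation requires:

```python
import numpy as np

# Sketch of the connection-weight calculation omega_kij = alpha_k phi~_k phi^x_i phi^y_j
# for the z = x * y example; population sizes and decoder values are illustrative.
rng = np.random.default_rng(1)
n_x, n_y, n_z = 20, 30, 25

phi_x = rng.normal(size=n_x)          # decoding weights of the x population
phi_y = rng.normal(size=n_y)          # decoding weights of the y population
alpha = rng.uniform(0.5, 2.0, n_z)    # gains of the z population
enc_z = rng.choice([-1.0, 1.0], n_z)  # encoders phi~_k of the z population

# omega[k, i, j] = alpha_k * phi~_k * phi^x_i * phi^y_j
omega = (alpha * enc_z)[:, None, None] * phi_x[None, :, None] * phi_y[None, None, :]

# Given activities a_i(x) and b_j(y), the weighted input to neuron k is
# sum_ij omega_kij a_i b_j, which should equal alpha_k phi~_k x^ y^.
a, b = rng.random(n_x), rng.random(n_y)
current = np.einsum('kij,i,j->k', omega, a, b)
x_hat, y_hat = phi_x @ a, phi_y @ b
print(np.allclose(current, alpha * enc_z * x_hat * y_hat))  # True
```

As the text notes, this fully connected multiplicative scheme is avoided in the actual model, where all interactions are linear.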

Finally, dynamics play a fundamental role in the overall operation of our model, such as in our recurrent reward prediction error computation. We can describe the dynamics of a neural population in control-theoretic form via the state equation that is at the basis of modern control theory:

\[
\dot{x}(t) = A x(t) + B u(t), \qquad (B5)
\]

where A is the dynamics matrix, B is the input matrix, u(t) is the input or control vector, and x(t) is the state vector. At this high level of characterization we are detached from any neural-level implementation details. It is possible, however, to introduce simple modifications that render the system neurally plausible. The first step in converting this characterization is to account for intrinsic neural dynamics. To do so, we assume a standard PSC model given by h(t) = \tau^{-1} e^{-t/\tau}, and then employ the following derived relation (Eliasmith & Anderson, 2003):

\[
A' = \tau A + I, \qquad B' = \tau B, \qquad (B6)
\]

so that our neurally plausible high-level dynamics characterization becomes

\[
x(t) = h(t) * \big[A' x(t) + B' u(t)\big]. \qquad (B7)
\]

To integrate this dynamics description with the neural

representation code we described previously, we combine the dynamics of (B7), the encoding of (B1), and the population decoding of x and u as per (B4). That is, we take the decodings \hat{x} = \sum_{jn} h_j(t - t_{jn})\,\phi_j^x and \hat{u} = \sum_{kn} h_k(t - t_{kn})\,\phi_k^u and introduce neural dynamics into the encoding operation as follows:

\begin{align*}
\sum_n \delta(t - t_{in}) &= G_i\Big[\alpha_i \big\langle \tilde{\phi}_i x(t) \big\rangle + J_i^{bias}\Big] \\
&= G_i\Big[\alpha_i \big\langle \tilde{\phi}_i \big[A' x(t) + B' u(t)\big] \big\rangle + J_i^{bias}\Big] \\
&= G_i\Bigg[\alpha_i \Big\langle \tilde{\phi}_i \Big[A' \sum_{jn} h_j(t - t_{jn})\,\phi_j^x + B' \sum_{kn} h_k(t - t_{kn})\,\phi_k^u\Big] \Big\rangle + J_i^{bias}\Bigg] \\
&= G_i\Bigg[\sum_{jn} \omega_{ij}\, h_j(t - t_{jn}) + \sum_{kn} \omega_{ik}\, h_k(t - t_{kn}) + J_i^{bias}\Bigg]. \qquad (B8)
\end{align*}

It is interesting to note that h(t) in the above characterization defines both the neural dynamics and the decoding of the relevant representations. Here \omega_{ij} = \alpha_i \big\langle \tilde{\phi}_i A' \phi_j^x \big\rangle and \omega_{ik} = \alpha_i \big\langle \tilde{\phi}_i B' \phi_k^u \big\rangle describe the recurrent and input connection weights, respectively, which implement the dynamics defined by the control-theoretic structure from (B7) in a neurally plausible network.
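As a concrete illustration of (B5) through (B7), consider a one-dimensional neural integrator, for which the desired dynamics are \dot{x} = u (A = 0, B = 1). The sketch below (time constant, step size and input are illustrative assumptions) applies the conversion A' = \tau A + I, B' = \tau B and simulates the PSC-filtered dynamics with forward Euler, recovering the intended integration:

```python
import numpy as np

# Sketch of the neural-dynamics conversion in (B6)-(B7) for a 1-D integrator:
# desired dynamics x' = u (A = 0, B = 1); tau and the step size are assumptions.
tau, dt, T = 0.1, 1e-3, 1.0
A, B = 0.0, 1.0
A_p = tau * A + 1.0        # A' = tau*A + I
B_p = tau * B              # B' = tau*B

# Filtering s(t) = A'x(t) + B'u(t) through the PSC h(t) = tau^-1 exp(-t/tau)
# is equivalent to tau * x' = s - x; simulate with forward Euler.
u = np.ones(int(T / dt))   # constant input u(t) = 1
x = 0.0
for u_t in u:
    s = A_p * x + B_p * u_t
    x += dt * (s - x) / tau
print(x)                   # ~ 1.0: the recurrent system integrates its input
```

Substituting s − x = (\tau A + I)x + \tau B u − x = \tau(Ax + Bu) shows why the filtered recurrence reproduces the original state equation; in the full model the same A' and B' appear inside the connection weights of (B8).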

Appendix C. Representation/transformation summary

Table C1 encapsulates the complete set of inputs, outputs and transformations we use to model specific interactions between the brain regions included in our model; variable names are as in the text of the Methods section. These equations describe explicitly the connectivity relationships and signal transformation processes outlined in Fig. 1 and discussed in the main body description of the ANDREA model, which offers more high-level, conceptual characterizations of connectivity and signal transformation.
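The transformations in Table C1 can be sketched as a single discrete-time update. The parameter values, initial conditions and scalar signals below are illustrative assumptions only; the actual ANDREA model implements these transformations in populations of spiking neurons via the NEF machinery of Appendix B.

```python
# Discrete-time sketch of the Table C1 transformations; parameter values
# (b, c, k, r, a, g, l) and inputs are illustrative assumptions, and the
# spiking-network implementation is abstracted into scalar updates.
def andrea_step(A0, V, p_outcome, state, b=0.1, c=-0.1, k=0.5,
                r=0.9, a=0.1, g=1.0, l=0.1):
    P, DA, HT, C = state                      # previous-step signals
    A1 = A0 + k * (1.0 - p_outcome)           # AMYG: expectation-adjusted input
    A = A1 + b * DA + c * HT + C              # AMYG: modulated arousal signal
    S = A * V                                  # OFC: affectively weighted valuation
    E_plus = S - P                             # DA: positive prediction error
    E_minus = P - S                            # 5-HT: negative prediction error
    DA = r * E_plus - (1.0 - r) * E_minus      # opponent dopamine signal
    HT = r * E_minus - (1.0 - r) * E_plus      # opponent serotonin signal
    E = DA - HT                                # VS: combined error signal
    P = P + a * E                              # updated reward prediction
    B_sign = 2.0 * (S >= 0.0) - 1.0            # ACC: sign of the valuation
    R = B_sign / (g + E)                       # ACC: error-discounted rating
    C = l * HT                                 # DLPFC: contextual control signal
    return R, (P, DA, HT, C)

state = (0.0, 0.0, 0.0, 0.0)
for _ in range(20):                            # repeated exposure to one outcome
    R, state = andrea_step(A0=0.5, V=1.0, p_outcome=0.5, state=state)
print(R, state[0])                             # prediction P approaches the valuation S
```

Running the loop shows the recurrent prediction-error computation at work: as P converges on S, both opponent error signals decay toward zero.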

References

Adolphs, R., Gosselin, F., Buchanan, T. W., Tranel, D., Schyns, P., & Damasio, A. R. (2005). A mechanism for impaired fear recognition after amygdala damage. Nature, 433, 68–72.

Bayer, H. M., & Glimcher, P. W. (2005). Midbrain dopamine neurons encode a quantitative reward prediction error signal. Neuron, 47, 129–141.

Bechara, A., Damasio, H., & Damasio, A. R. (2000). Emotion, decision making and the orbitofrontal cortex. Cerebral Cortex, 10(3), 295–307.


Bechara, A., Damasio, H., & Damasio, A. R. (2003). Role of the amygdala in decision-making. Annals of the New York Academy of Sciences, 985, 356–369.

Breiter, H. C., Aharon, I., Kahneman, D., Dale, A., & Shizgal, P. (2001). Functional imaging of neural responses to expectancy and experience of monetary gains and losses. Neuron, 30, 619–639.

Busemeyer, J. R., & Johnson, J. G. (2004). Computational models of decision making. In D. Koehler & N. Harvey (Eds.), Blackwell handbook of judgment and decision making (pp. 133–154). Oxford: Blackwell.

Bush, G., Luu, P., & Posner, M. I. (2000). Cognitive and emotional influences in anterior cingulate cortex. Trends in Cognitive Sciences, 4(6), 215–222.

Camerer, C. F. (2000). Prospect theory in the wild: Evidence from the field. In D. Kahneman & A. Tversky (Eds.), Choices, values, and frames. New York: Cambridge University Press.

Camerer, C., Loewenstein, G. F., & Prelec, D. (2005). Neuroeconomics: How neuroscience can inform economics. Journal of Economic Literature, 34, 9–64.

Casebeer, W. D., & Churchland, P. S. (2003). The neural mechanisms of moral cognition: A multiple-aspect approach. Biology and Philosophy, 18, 169–194.

Churchland, P. S. (1996). Feeling reasons. In A. R. Damasio, H. Damasio, & Y. Christen (Eds.), Neurobiology of decision-making (pp. 181–199). Berlin: Springer.

Cools, R., Calder, A. J., Lawrence, A. D., Clark, L., Bullmore, E., & Robbins, T. W. (2005). Individual differences in threat sensitivity predict serotonergic modulation of amygdala response to fearful faces. Psychopharmacology, 180(4), 670–679.

Damasio, A. R. (1994). Descartes' error. New York: G.P. Putnam's Sons.

Davis, K. D., Hutchison, W. D., Lozano, A. M., Tasker, R. R., & Dostrovsky, J. O. (2000). Human anterior cingulate cortex neurons modulated by attention-demanding tasks. Journal of Neurophysiology, 83(6), 3575–3577.

Daw, N. D., Kakade, S., & Dayan, P. (2002). Opponent interactions between serotonin and dopamine. Neural Networks, 15, 603–616.

Dayan, P., & Balleine, B. W. (2002). Reward, motivation, and reinforcement learning. Neuron, 36, 285–298.

Deakin, J. F. W. (1983). Roles of brain serotonergic neurons in escape, avoidance and other behaviors. Journal of Psychopharmacology, 43, 563–577.

Deco, G., & Rolls, E. T. (2005). Synaptic and spiking dynamics underlying reward reversal in the orbitofrontal cortex. Cerebral Cortex, 15, 15–30.

Driesang, R. B., & Pape, H. (2000). Spike doublets in neurons of the lateral amygdala: Mechanisms and contribution to rhythmic activity. NeuroReport, 11(8), 1703–1708.

Eliasmith, C. (2005a). A unified approach to building and controlling spiking attractor networks. Neural Computation, 17(6), 1276–1314.

Eliasmith, C. (2005b). Cognition with neurons: A large-scale, biologically realistic model of the Wason task. In B. Bara, L. Barsalou, & M. Bucciarelli (Eds.), Proceedings of the XXVII annual conference of the cognitive science society. Mahwah, NJ: Lawrence Erlbaum Associates.

Eliasmith, C., & Anderson, C. H. (2000). Rethinking central pattern generators: A general framework. Neurocomputing, 32–33(1–4), 735–740.

Eliasmith, C., & Anderson, C. H. (2003). Neural engineering: Computation, representation and dynamics in neurobiological systems. Cambridge, MA: MIT Press.

Evenden, J. L., & Ryan, C. N. (1996). The pharmacology of impulsive behaviour in rats: The effects of drugs on response choice with varying delays of reinforcement. Psychopharmacology (Berl.), 128, 161–170.

Frank, M. J., & Claus, E. D. (2006). Anatomy of a decision: Striato-orbitofrontal interactions in reinforcement learning, decision making, and reversal. Psychological Review, 113(2), 300–326.

Gartside, S. E., Hajos-Korcsok, E., Bagdy, E., Harsing, L. G., Sharp, T., & Hajos, M. (2000). Neurochemical and electrophysiological studies on the functional significance of burst firing in serotonergic neurons. Neuroscience, 98(2), 295–300.

Glimcher, P. W., & Rustichini, A. (2004). Neuroeconomics: The consilience of brain and decision. Science, 306(5695), 447–452.

Greene, J., & Haidt, J. (2002). How (and where) does moral judgment work? Trends in Cognitive Sciences, 6(12), 517–523.

Greene, J. D., Nystrom, L. E., Engell, A. D., Darley, J. M., & Cohen, J. D. (2004). The neural bases of cognitive conflict and control in moral judgment. Neuron, 44, 389–400.

Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293, 2105–2108.

Grossberg, S., & Gutowski, W. E. (1987). Neural dynamics of decision making under risk: Affective balance and cognitive–emotional interactions. Psychological Review, 94(3), 300–318.

Hajos, M., Gartside, S. E., Villa, A. E., & Sharp, T. (1995). Evidence for a repetitive (burst) firing pattern in a sub-population of 5-hydroxytryptamine neurons in the dorsal and median raphe nuclei of the rat. Neuroscience, 69(1), 189–197.

Hsee, C. K., Loewenstein, G. F., Blount, S., & Bazerman, M. H. (1999). Preference reversals between joint and separate evaluations of options: A review and theoretical analysis. Psychological Bulletin, 125(5), 576–590.

Hsee, C. K., Rottenstreich, Y., & Xiao, Z. (2005). When is more better? On the relationship between magnitude and subjective value. Current Directions in Psychological Science, 14, 234–237.

Hyland, B. I., Reynolds, J. N., Hay, J., Perk, C. G., & Miller, R. (2002). Firing modes of midbrain dopamine cells in the freely moving rat. Neuroscience, 114(2), 475–492.

James, W. (1884). What is an emotion? Mind, 9, 188–205.

Kahneman, D. (2003). A perspective on judgment and choice: Mapping bounded rationality. American Psychologist, 58(9), 697–720.

Kahneman, D., Knetsch, J. L., & Thaler, R. H. (1991). Anomalies: The endowment effect, loss aversion, and status quo bias. Journal of Economic Perspectives, 5, 193–206.

Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47, 263–291.

Kahneman, D., & Tversky, A. (1982). The psychology of preferences. Scientific American, 246(1), 160–173.

Kahneman, D., & Tversky, A. (2000). Choices, values, and frames. New York: Cambridge University Press.

Kahneman, D., Wakker, P. P., & Sarin, R. (1997). Back to Bentham? Explorations of experienced utility. Quarterly Journal of Economics, 112, 375–405.

Knutson, B., Taylor, J., Kaufman, M., Peterson, R., & Glover, G. (2005). Distributed neural representation of expected value. Journal of Neuroscience, 25(19), 4806–4812.

Koehler, D. J., Brenner, L. A., & Tversky, A. (1997). The enhancement effect in probability judgment. Journal of Behavioral Decision Making, 10, 293–313.

Kosfeld, M., Heinrichs, M., Zak, P. J., Fischbacher, U., & Fehr, E. (2005). Oxytocin increases trust in humans. Nature, 435, 673–676.

Kreps, D. M. (1990). A course in microeconomic theory. Princeton: Princeton University Press.

Lerner, J. S., & Keltner, D. (2000). Beyond valence: Toward a model of emotion-specific influences on judgement and choice. Cognition and Emotion, 14, 473–493.

Li, Y., & Bayliss, D. A. (1998). Presynaptic inhibition by 5-HT1B receptors of glutamatergic synaptic inputs onto serotonergic caudal raphe neurones in rat. Journal of Physiology, 510(1), 121–134.

Litt, A., Eliasmith, C., & Thagard, P. (2006). Why losses loom larger than gains: Modeling neural mechanisms of cognitive–affective interaction. In R. Sun & N. Miyake (Eds.), Proceedings of the 28th annual conference of the cognitive science society (pp. 495–500). Mahwah, NJ: Lawrence Erlbaum.

Loewenstein, G. F., Weber, E. U., Hsee, C. K., & Welch, N. (2001). Risk as feelings. Psychological Bulletin, 127, 267–286.

Loftus, E. F., & Palmer, J. C. (1974). Reconstruction of automobile destruction: An example of the interaction between language and memory. Journal of Verbal Learning and Verbal Behavior, 13, 585–589.


McClure, S. M., York, M. K., & Montague, P. R. (2004). The neural substrates of reward processing in humans: The modern role of fMRI. The Neuroscientist, 10(3), 260–268.

McGraw, A. P., Mellers, B. A., & Tetlock, P. E. (2005). Expectations and emotions of Olympic athletes. Journal of Experimental Social Psychology, 41, 438–446.

Mellers, B. A. (2000). Choice and the relative pleasure of consequences. Psychological Bulletin, 126, 910–924.

Mellers, B. A., & McGraw, A. P. (2001). Anticipated emotions as guides to choice. Current Directions in Psychological Science, 10, 210–214.

Mellers, B. A., Schwartz, A., Ho, K., & Ritov, I. (1997). Decision affect theory: Emotional reactions to the outcomes of risky options. Psychological Science, 8, 423–429.

Mellers, B. A., Schwartz, A., & Ritov, I. (1999). Emotion-based choice. Journal of Experimental Psychology: General, 128, 332–345.

Mobini, S., Chiang, T. J., Ho, M. Y., Bradshaw, C. M., & Szabadi, E. (2000). Effects of central 5-hydroxytryptamine depletion on sensitivity to delayed and probabilistic reinforcement. Psychopharmacology (Berl.), 152, 390–397.

Moll, J., Zahn, R., de Oliveira-Souza, R., Krueger, F., & Grafman, J. (2005). The neural basis of human moral cognition. Nature Reviews Neuroscience, 6(10), 799–809.

Montague, P. R., & Berns, G. S. (2002). Neural economics and the biological substrates of valuation. Neuron, 36, 265–284.

Oatley, K. (1992). Best laid schemes: The psychology of emotions. Cambridge: Cambridge University Press.

Owen, A. M. (1997). Cognitive planning in humans: Neuropsychological, neuroanatomical and neuropharmacological perspectives. Progress in Neurobiology, 53, 431–450.

Padoa-Schioppa, C., & Assad, J. A. (2006). Neurons in the orbitofrontal cortex encode economic value. Nature, 441, 223–226.

Pare, D., & Gaudreau, H. (1996). Projection cells and interneurons of the lateral and basolateral amygdala: Distinct firing patterns and differential relation to theta and delta rhythms in conscious cats. Journal of Neuroscience, 16(10), 3334–3350.

Ratcliff, R., Cherian, A., & Segraves, M. (2003). A comparison of macaque behavior and superior colliculus neuronal activity to predictions from models of two-choice decisions. Journal of Neurophysiology, 90, 1392–1407.

Roitman, J. D., Brannon, E. M., & Platt, M. L. (2007). Monotonic coding of numerosity in macaque lateral intraparietal area. PLoS Biology, 5(8), e208. doi:10.1371/journal.pbio.0050208.

Rolls, E. T. (2000). The orbitofrontal cortex and reward. Cerebral Cortex, 10, 284–294.

Rottenstreich, Y., & Hsee, C. K. (2001). Money, kisses and electric shocks: On the affective psychology of risk. Psychological Science, 12, 185–190.

Rottenstreich, Y., & Shu, S. (2004). The connections between affect and decision making: Nine resulting phenomena. In D. J. Koehler & N. Harvey (Eds.), Blackwell handbook of judgment and decision making (pp. 444–463). Oxford: Blackwell.

Sanfey, A. G., Loewenstein, G., McClure, S. M., & Cohen, J. D. (2006). Neuroeconomics: Cross-currents in research on decision-making. Trends in Cognitive Sciences, 10(3), 108–116.

Schultz, W. (1998). Predictive reward signal of dopamine neurons. Journal of Neurophysiology, 80, 1–27.

Schultz, W. (2000). Multiple reward signals in the brain. Nature Reviews Neuroscience, 1, 199–207.

Shiv, B., Bechara, A., Levin, I., Alba, J. W., Bettman, J. R., Dube, L., et al. (2005). Decision neuroscience. Marketing Letters, 16, 375–386.

Slovic, P., Finucane, M. L., Peters, E., & MacGregor, D. G. (2002). The affect heuristic. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgement (pp. 397–420). Cambridge: Cambridge University Press.

Soubrie, P. (1986). Reconciling the role of central serotonin neurons in human and animal behavior. Behavioral and Brain Sciences, 9, 319–335.

Suri, R. E. (2002). TD models of reward predictive responses in dopamine neurons. Neural Networks, 15, 523–533.

Sutton, R. S., & Barto, A. G. (1998). Reinforcement learning. Cambridge, MA: MIT Press.

Thagard, P. (2006). Hot thought: Mechanisms and applications of emotional cognition. Cambridge, MA: MIT Press.

Thorpe, S. J., Rolls, E. T., & Maddison, S. (1983). The orbitofrontal cortex: Neuronal activity in the behaving monkey. Experimental Brain Research, 49, 93–115.

Tom, S. M., Fox, C. R., Trepel, C., & Poldrack, R. A. (2007). The neural basis of loss aversion in decision-making under risk. Science, 315(5811), 515–518.

Trepel, C., Fox, C. R., & Poldrack, R. A. (2005). Prospect theory on the brain? Toward a cognitive neuroscience of decision under risk. Cognitive Brain Research, 23(1), 34–50.

Treue, S. (2001). Neural correlates of attention in primate visual cortex. Trends in Neurosciences, 24(5), 295–300.

Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211, 453–458.

Tversky, A., & Kahneman, D. (1984). Choices, values, and frames. American Psychologist, 39, 341–350.

Tversky, A., & Kahneman, D. (1986). Rational choice and the framing of decisions. Journal of Business, 59, 251–278.

Tversky, A., & Kahneman, D. (1991). Loss aversion in riskless choice: A reference dependent model. Quarterly Journal of Economics, 106, 1039–1061.

Vertes, R. P. (1991). A PHA-L analysis of ascending projections of the dorsal raphe nucleus in the rat. Journal of Comparative Neurology, 313(4), 643–668.

Von Neumann, J., & Morgenstern, O. (1947). Theory of games and economic behavior. Princeton: Princeton University Press.

Wagar, B. M., & Thagard, P. (2004). Spiking Phineas Gage: A neurocomputational theory of cognitive–affective integration in decision making. Psychological Review, 111, 67–79.

Wallis, J. D., & Miller, E. K. (2003). Neuronal activity in primate dorsolateral and orbital prefrontal cortex during performance of a reward preference task. European Journal of Neuroscience, 18(7), 2069–2081.

Weber, B., Aholt, A., Neuhaus, C., Trautner, P., Elger, C. E., & Teichert, T. (2007). Neural evidence for reference-dependence in real-market-transactions. NeuroImage, 35, 441–447.