
MOTIVATING AGENTS TO ACQUIRE INFORMATION∗

Charles Angelucci†

First Version: January 2014

This Version: October 2016

Abstract

It is difficult to motivate advisors to acquire information through standard performance contracts when outcomes are uncertain and/or only observed in the distant future. Decision-makers may, however, exploit advisors' desire to influence the outcome. We build a model in which a decision-maker and two advisors care about the decision's consequences, which depend on an unknown parameter. To reduce uncertainty, the advisors can produce information of low or high accuracy; this information becomes public, but the decision-maker cannot assess its accuracy with certainty, and monetary transfers are unavailable. Moreover, the information the two advisors produce may be conflicting. We show the optimal decision rule involves making the final decision exceedingly sensitive to the jointly provided information.

Keywords: Information acquisition, experts, communication, advice.

JEL Classification Numbers: D00, D20, D80, D73, P16.

∗I am deeply indebted to Philippe Aghion, Emmanuel Farhi, Oliver Hart, Patrick Rey, and Jean Tirole for their invaluable guidance and support. This paper has also benefited from discussions with Nava Ashraf, Julia Cagé, Wouter Dessein, Alfredo Di Tillio, Matthew Gentzkow, Renato Gomes, Ben Hebert, Christian Hellwig, Daisuke Hirata, Emir Kamenica, Nenad Kos, Thomas Le Barbanchon, Yanxi Li, Lucas Maestri, Eric Mengus, Simone Meraglia, Marco Ottaviani, Nicola Persico, Alexandra Roulet, Andrei Shleifer, Alp Simsek, Kathryn Spier, David Sraer, Eric Van den Steen, and audiences at Harvard, HBS, Yale SOM, Chicago Booth, Columbia, Kellogg, WUSTL, Toulouse, Queen's, Bocconi, CUNEF, HEC, INSEAD, Rochester, Polytechnique, Oxford, Université Paris Dauphine, and the Petralia Workshop. Opinions and errors are mine.
†Columbia Business School. Email: [email protected]


1 Introduction

Information acquisition is central to decisions made under uncertainty, yet it often relies on intermediaries. For instance, regulatory agencies benefit from internal staff expertise; judges rely on prosecutors to collect evidence; and managers ask subordinates to estimate projects' returns. However, delegating information acquisition responsibilities triggers a natural concern about the quality of the evidence upon which the decision will be made.

In many environments, performance-based contracts offer a powerful means of ensuring the provision of accurate information from experts (e.g., portfolio managers). In other environments, performance is difficult to measure and so alternative levers must be found. This is particularly true when the appropriateness of decisions becomes apparent only in the distant future (e.g., regulatory actions), if ever (e.g., judicial sanctions).1 In addition, specifying the desired quality of advice in a contract—to reward the information itself rather than the outcomes—is often prohibitively costly or unenforceable in a court of law (e.g., because too subjective), so that not only is performance difficult to assess, but transfers are also a rather coarse instrument.2 However, in many cases, a decision-maker may be capable of motivating information acquisition by exploiting her advisors' desire to influence the outcome.3 This alternative channel of influence is the object of this paper.

1The appropriateness of a past decision may also be hard to measure in the absence of clear counterfactuals (e.g., pollution standards and merger decisions).

2In many environments, monetary transfers contingent on performance are banned altogether (e.g., regulatory hearings, expert witnesses in courts of law, etc.). See Szalay [2005] and Alonso and Matouschek [2008] for discussions of this matter.

3An advisor's desire to influence the outcome may be due to his reputational concerns, or simply because the decision affects him directly. See, for instance, Prendergast [2007] on the intrinsic motivation of bureaucrats.


In particular, we investigate the issue of how to optimally motivate multiple advisors. Decision-makers often rely on several advisors, and doing so presents advantages and drawbacks whose consequences for the decision-making process we analyze.

A decision-maker interacts with one or two agents before taking an action. All parties care about the decision's consequences, which depend on some unknown state of nature. For most of the analysis, players agree on which decision is optimal when they have access to the same information. Agents privately acquire one of two informationally ranked signals, with the more accurate signal also being the costlier one. Agents have access to identical signals, and signal realizations are conditionally independent. The information they produce becomes public, but its accuracy remains privately known. The decision-maker wishes to motivate agents to acquire the more accurate signal, but is unable to infer with certainty which signal was acquired by looking at its realization. Moreover, agents have insufficient incentives to acquire the more informative signal when the ex post optimal action is implemented. Monetary transfers are ruled out, and thus the choice of the final action is the only means by which the decision-maker can motivate her agents to each acquire the more informative signal. The decision-maker is capable of committing to a decision rule, which specifies the action to implement as a function of the information produced by the agents. This provides a good approximation of settings in which decision-makers can restrain their behavior through pre-specified decision-making procedures (e.g., regulatory procedures, evidentiary rules in the judicial system, etc.). Finally, the informational environment we consider is such that higher realizations of either signal lead to correspondingly higher desired actions.


We first look at the version of the model with a single agent. In this environment we show that, to induce the acquisition of the more accurate signal, it is optimal to implement a higher (resp. lower) action than the ex post optimal action when observing a realization such that the agent would desire a lower (resp. higher) action under the less accurate signal. Departing from the ex post optimal action in this manner hurts the agent under the more accurate signal, but relatively more so under the less accurate signal. This asymmetric effect occurs because players suffer increasingly more as the implemented action becomes more distant from their desired action.

We then restrict attention to environments in which signals induce schedules of desired actions that are rotations of one another around the status-quo (i.e., the action desired under the prior). In these environments, it is optimal to implement a higher action than desired when desiring an action higher than the status-quo, and a lower action than desired when desiring an action lower than the status-quo. Distorting the final decision away from the status-quo motivates the acquisition of the more informative signal because it punishes the agent whose posteriors are induced by the less informative signal.

We extend the baseline model by incorporating a second agent. Relying on a second agent is potentially valuable because of the additional information he provides, but also because it may allow the decision-maker to better control her agents by comparing their information. However, introducing a second agent also creates difficulties: the information the agents provide may be conflicting. Specifically, the two signal realizations may be such that one agent desires a lower action than the ex post optimal action when deviating and acquiring the less informative signal, whereas the other agent desires a higher action than the ex post optimal action when deviating and acquiring the less informative signal. As a consequence, departing from the ex post optimal action necessarily dampens one agent's incentive to acquire the more informative signal. This phenomenon occurs because each agent, upon unilaterally deviating and acquiring the less informative signal, puts more weight on the other agent's signal realization than on his own. We show it is optimal for the decision-maker to introduce a distortion that targets the agent whose signal realization is the most extreme. This, in turn, implies it is again optimal to implement higher actions than desired when desiring actions higher than the status-quo, and lower actions than desired when desiring actions lower than the status-quo.

The main insight of our model is thus that the decision-maker should make her final decision exceedingly sensitive to the provided information. This, we show, is the optimal way of structuring the final decision so as to make it privately interesting for her agents to acquire costly but accurate information.

It is common for organizations to commit to ex post inefficient rules to motivate information acquisition by their members (for instance, through the delegation of decision-rights, the choice of default actions or standards of proof, and the non-admissibility of certain decisions or relevant information). In the course of the analysis, we discuss mechanisms through which organizations can implement (or approximate) the decision rule characterized in this paper. All involve making the status-quo (and actions near it) relatively hard to implement and the final decision exceedingly sensitive to the provided information.

The rest of the paper is structured as follows. A review of the literature concludes the introduction. Section 2 presents the setup of the model. Section 3 characterizes the optimal decision rule for the baseline model with one agent (Section 3.1); it then turns to the analysis with two agents (Section 3.2). Section 4 provides several extensions and discussions, and Section 5 concludes.

Related Literature

This paper contributes to a literature that incorporates information acquisition in principal-agent models without monetary transfers or, more generally, with limited or costly transfers (see, e.g., Prendergast [1993], Aghion and Tirole [1997], Dewatripont and Tirole [1999], Li [2001], Szalay [2005], Gerardi and Yariv [2008b], and Krishna and Morgan [2008]).4 To the best of our knowledge, our paper is the first to investigate the issue of designing rules that motivate information acquisition by multiple agents in environments of non-transferable utility and principal commitment.5 Our paper is closely related to Li [2001] and Szalay [2005], who consider single-agent settings. In both papers, the second-best decision rule is such that the status-quo (and actions near it) are implemented less often than is ex post optimal. In Szalay [2005], the decision-maker implements a higher action than recommended when the recommended action is above the status-quo, and vice versa. This "exaggeration" property manifests itself in various settings, including the present paper. It also arises, for instance, in procurement models with information acquisition (see, e.g., Cremer et al. [1998], and Szalay [2009]). Our analysis suggests it also holds in settings with one advisee and multiple advisors.

4See also Austen-Smith [1994], Che and Kartik [2009], and Argenziano et al. [2016] for models with information acquisition but lack of commitment. See Demski and Sappington [1987] on performance contracts.

5Multiple experts have been considered in other environments (often without costly information acquisition). See, e.g., Krishna and Morgan [2001], Battaglini [2004], Gentzkow and Kamenica [2016], Gul and Pesendorfer [2012], and Bhattacharya and Mukherjee [2013].

This paper also relates to the literature on experts (see Scharfstein and Stein [1990], Dewatripont et al. [1999], and Holmstrom [1999]), and that on committees (see Gilligan and Krehbiel [1987], Persico [2004], Gerardi and Yariv [2008a], Cai [2009], and Gershkov and Szentes [2009]). Although delegation is not optimal in our setting, our paper is related to the literature on optimal delegation (see, e.g., Melumad and Shibano [1991] and Alonso and Matouschek [2008]). Aspects of our analysis are reminiscent of the early analyses of moral hazard (see Holmstrom [1979], Holmstrom [1982], and Mirrlees [1999]). Finally, our way of modeling information acquisition is related to work by Persico [2000], Athey and Levin [2001], Johnson and Myatt [2006], and Shi [2012].

2 The Setting

A decision-maker, DM, relies on two agents, A1 and A2. DM has payoff u(a, ω) and Ai has payoff u(a, ω) − C(ei), for i = 1, 2. Payoffs depend on DM's action a ∈ A ⊂ R (where A is compact) and the unobservable state of the world ω ∈ Ω ⊆ R, where ω is the realization of a random variable W with c.d.f. FW(ω) (the common prior) on full support Ω. The function C(ei) represents Ai's cost of exerting effort ei. The decision-maker and the agents have identical gross preferences, but the decision-maker does not internalize the agents' cost C(·)

the consequences of DM ’s failure to internalize the agents’ costs.6 In Section

6We could have assumed DM ’s payoff is the sum of the agents’ payoffs, i.e., 2u (a, ω)−2∑i=1

C (ei). This model could capture, for instance, a committee in which the solution to

7

Page 8: MOTIVATING AGENTS TO ACQUIRE INFORMATION€¦ · 3 characterizes the optimal decision rule for the baseline model with one agent (Section 3.1); it then turns to the analysis with

4, we provide an extension with heterogeneous gross preferences.

We suppose DM is able to commit to a decision rule but unable to make

monetary transfers to her agents. For simplicity, we also ignore participation

constraints at the agent level. We comment on both assumptions in Section 4.

2.1 Payoff Functions

We restrict attention to payoff functions u (a, ω) of the form

u (a, ω) = l (ω) + a · (ηω + κ)− ψa2, (1)

where l (ω) : Ω→ R, κ ∈ R, and η, ψ > 0. For instance, u (a, ω) = − (ηω + κ− a)2

or u (a, ω) = ωa − 12a

2. Let a(ω) ≡ argmaxa∈R

u (a, ω) = ηω+κ2ψ denote the players’

desired action when W = ω. Because η, ψ > 0, a(ω) is strictly increasing in ω.

We set η = ψ = 1 and κ = 0 because they play no role in the analysis.

First, u (a, ω) is strictly concave in a and symmetric around a (ω), that is,

u (a(ω) + ε, ω) = u (a(ω)− ε, ω), ∀ε. Second, the players’ desired action when

there exists uncertainty about W depends on the expected value of W only.

Finally, a player’s expected payoff under a given belief about W is a translation

of this player’s expected payoff under any other belief about W .

2.2 Effort and Information Acquisition

Ai unobservably chooses effort ei ∈ E ⊂ R, which determines the acquisi-

tion of signal Xei from a set of signals Xee∈E identical for both agents. We

DM ’s problem is the optimal aggregation rule. This approach is qualitatively identical toours because it shares the essential element whereby each agent’s net benefit from acquiringinformation is lower than DM ’s, and the key insights our model generates therefore apply.


denote by G_{Xe|W}(x | ω) signal Xe's conditional c.d.f., where x denotes a realization. All signals have full, connected, and common support X ⊆ R. Agents' signal realizations are conditionally independent, and xi denotes Ai's signal realization. Also, G_{Xe}(x) = E_W[G_{Xe|W}(x | ω)] = ∫_Ω G_{Xe|W}(x | ω) dF_W(ω) represents the marginal distribution of Xe, with associated density function g_{Xe}(x). Let x ≡ [x1, x2] and G_{Xe1,Xe2}(x) ≡ G_{Xe1}(x1) G_{Xe2}(x2).

We assume effort levels to be unobservable, but signal realizations x1 and x2 to be publicly observable. Each agent produces public information of privately known and unverifiable accuracy.7 This setting captures, for instance, a situation in which both DM and the agents are experts, but DM is unable or unwilling to verify every aspect of the agents' information gathering activities.8 If effort levels/signals were observable, the players' desired action when observing Xe1 = x1 and Xe2 = x2 would be:

a_{Xe1,Xe2}(x) ≡ argmax_{a∈R} ∫_Ω u(a, ω) dF_{W|Xe1,Xe2}(ω | x) = E_{W|Xe1,Xe2}[ω | x], (2)

where Bayes' rule applies. We assume signals are such that a_{Xe1,Xe2}(x1, x2) is nondecreasing in each signal realization, ∀Xe1, Xe2.9 For any signals, players wish to match higher signal realizations with higher actions. We refer to a_{Xe1,Xe2}(x) as the players' schedule of desired actions under signals Xe1 and Xe2.

7DM cannot infer which signal was acquired from observing its realization due to the common support assumption. Suppose an agent is in charge of data collection and analysis. Although DM can replicate the analysis if given the data, she may lack the resources to verify the representativeness of the sample, and cannot infer it from observing the output.

8Assuming observable realizations but unobservable signals allows us to model partially verifiable information, which provides a reasonable approximation of settings in which agents cannot fully manipulate information because, for instance, they are subject to procedures (or because the decision-maker is also an expert).

9To ensure this property, it is enough, for instance, to restrict attention to signals that induce conditional densities for W that satisfy the MLRP.


For most of the analysis, we set E = {e̲, ē} and let X̲ ≡ X_e̲ and X̄ ≡ X_ē. Agents choose one signal from these two available signals. In Section 4, we provide an extension with continuous effort. We use the subscript Xi to represent a given signal acquired by Ai, and the subscript X̄i (resp. X̲i) to mean that Ai acquired signal X̄ (resp. X̲). For instance, a_{X̄i,X̲j}(x) denotes the players' schedule of desired actions when Ai acquires signal X̄ and Aj acquires signal X̲. When indexes are omitted, the pair X, X′ always denotes X1, X′2.

We assume signal X̄ is more informative than signal X̲, in the sense that, for any Xj, DM's highest possible expected payoff when Ai acquires X̄,

∫_{X×X} ∫_Ω u(a_{X̄i,Xj}(x), ω) dF_{W|X̄i,Xj}(ω | x) dG_{X̄i,Xj}(x), (3)

is higher than DM's highest possible expected payoff when Ai acquires X̲,

∫_{X×X} ∫_Ω u(a_{X̲i,Xj}(x), ω) dF_{W|X̲i,Xj}(ω | x) dG_{X̲i,Xj}(x), (4)

where i, j = 1, 2 and i ≠ j. We set C(e̲) = 0 < C(ē) = c. Acquiring the more informative signal is more costly to the agents.

2.2.1 Rotations

We sometimes focus on environments that generate schedules of desired actions that are rotations of one another, where the schedule induced by a more informative pair of signals is a steeper rotation of the schedule induced by a less informative pair of signals around some "rotation-point." To fix ideas, suppose there exists one agent only, and let a_X(x) denote the players' desired action under signal X: the schedule of desired actions a_X̄(x) is a steeper rotation of the schedule of desired actions a_X̲(x) around some rotation-point x̂ if

(a_X̄(x) − a_X̲(x))(x − x̂) > 0, ∀x ≠ x̂. (5)

Figure 1 provides an illustrative example (where x̂ = 0). As discussed below, many commonly assumed informational environments generate rotations.

We refer to realizations higher (resp. lower) than the rotation-point x̂ as high (resp. low) realizations.10 The desired action when observing a high (resp. low) realization is higher (resp. lower) under signal X̄ than under signal X̲.

Insert Figure 1 about here.

In what follows, we sometimes focus on the more stringent case in which the conditional expectation of W is a convex combination of the prior and the signal realization x, where the weights are independent of x, that is, a_X̄(x) = E_{W|X̄}[ω | x] = β̄x + (1 − β̄)ω0, and a_X̲(x) = E_{W|X̲}[ω | x] = β̲x + (1 − β̲)ω0, with 0 ≤ β̲ < β̄ ≤ 1. One verifies a_X̄(x) is a steeper rotation of a_X̲(x) around x̂ ≡ ω0, where ω0 ≡ E_W[ω] = ∫_Ω ω dF_W(ω). Signal realizations higher (resp. lower) than ω0 are high (resp. low). Moreover, under either signal, the desired action when observing x > ω0 (resp. x < ω0) is higher (resp. lower) than the action desired under the prior: the status-quo (denoted a0, where a0 = ω0).
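The single-agent rotation structure above can be illustrated numerically. The following sketch, which is not from the paper, assumes a Normal prior with precision h and Normal signal noise with precision h_sig (an environment the paper introduces explicitly later for the two-agent case); the posterior mean is then the precision-weighted average βx + (1 − β)ω0 with β = h_sig/(h + h_sig), and the rotation condition (5) holds around ω0. The particular parameter values are illustrative assumptions.

```python
# Illustrative check (not from the paper): with a Normal prior
# W ~ N(omega0, 1/h) and Normal signal noise of precision h_sig,
# the posterior mean after observing x is the precision-weighted
# average beta*x + (1 - beta)*omega0, beta = h_sig / (h + h_sig).
# We verify the rotation property (5) on a grid: the schedule of
# the more precise signal is a steeper rotation of the other
# around the status-quo omega0.

omega0 = 2.0                 # prior mean (the status-quo action a0)
h = 1.0                      # prior precision (assumed value)
h_low, h_high = 2.0, 8.0     # noise precisions of the two signals

def schedule(x, h_sig):
    beta = h_sig / (h + h_sig)        # weight on the realization
    return beta * x + (1 - beta) * omega0

for x in [omega0 + d for d in (-3, -1, -0.5, 0.5, 1, 3)]:
    gap = schedule(x, h_high) - schedule(x, h_low)
    # Rotation condition (5): the gap has the sign of (x - omega0)
    assert gap * (x - omega0) > 0

# Both schedules coincide at the rotation-point x = omega0
assert abs(schedule(omega0, h_high) - schedule(omega0, h_low)) < 1e-12
```

The check makes the "steeper rotation" language concrete: the accurate signal's schedule reacts more strongly to the realization, and the two schedules cross exactly at the status-quo.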

Two agents. For tractability, in the two-agent case, we focus exclusively on environments that generate schedules that are rotations of one another. Consider a given signal Xj and realization xj. Schedule a_{X̄i,Xj}(xi, xj) is a steeper rotation of schedule a_{X̲i,Xj}(xi, xj) around rotation-point x̂i(xj, ω0) if

(a_{X̄i,Xj}(xi, xj) − a_{X̲i,Xj}(xi, xj))(xi − x̂i(xj, ω0)) > 0, ∀xi ≠ x̂i(xj, ω0). (6)

10This terminology is borrowed from Athey and Levin [2001], who offer a notion of informativeness that builds upon the distinction between low and high realizations.

Specifically, with two agents, we always require the prior belief and the signals to generate expressions for the conditional expectation of W that are a convex combination of the prior and the signal realizations, where the weights are independent of the signal realizations, that is,

E_{W|X̄i,X̄j}[ω | xi, xj] = α(xi + xj) + (1 − 2α)ω0, (7)

E_{W|X̲i,X̄j}[ω | xi, xj] = α̲xi + ᾱxj + (1 − ᾱ − α̲)ω0, (8)

where 0 ≤ α̲ < α < ᾱ ≤ 1. Equal weight α is put on signals of equal informativeness, more weight is put on X̄ than on X̲ when both signals are acquired (i.e., ᾱ > α̲), more weight is put on signal X̄ when the other acquired signal is X̲ rather than X̄ (i.e., ᾱ > α), and the weight put on signal X̲ is lower than the weight put on signal X̄ when the other acquired signal is X̄ (i.e., α̲ < α).

There exist many informational environments that generate conditional means that can be expressed as (7)-(8). Suppose, for example, that W ∼ N(0, 1/h), X̄ = W + ε̄, and X̲ = W + ε̲, where ε̄ and ε̲ are independently distributed and such that ε̄ ∼ N(0, 1/h̄) and ε̲ ∼ N(0, 1/h̲). Suppose also h̲ < h̄. Then, E_{W|X̄i,X̄j}[ω | xi, xj] = h̄(xi + xj)/(h + 2h̄) and E_{W|X̲i,X̄j}[ω | xi, xj] = (h̲xi + h̄xj)/(h + h̄ + h̲).
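The precision-weighted expressions in the Normal example can be cross-checked against the general linear-projection formula for jointly Gaussian variables, E[W | x] = Cov(W, X) Var(X)⁻¹ x. The sketch below is illustrative and not from the paper; the precision values are assumed for the example.

```python
# Illustrative check (not from the paper): recover the weights of
# E[W | x_i, x_j] in the Normal example by linear projection and
# compare them with the precision-weighted formulas behind (7)-(8).
# Prior W ~ N(0, 1/h); noise precisions hb (accurate) > hl (noisy).

h, hb, hl = 1.0, 4.0, 2.0   # assumed prior and noise precisions

def projection_weights(v1, v2):
    """Weights (w1, w2) of E[W | X1, X2] when Xi = W + noise_i with
    noise variances v1, v2; prior variance 1/h, prior mean 0."""
    s = 1.0 / h                        # Var(W) = Cov(W, Xi) = Cov(X1, X2)
    a11, a12, a22 = s + v1, s, s + v2  # Var(X1), Cov(X1, X2), Var(X2)
    det = a11 * a22 - a12 * a12
    # Solve [[a11, a12], [a12, a22]] w = (s, s) by Cramer's rule
    w1 = (s * a22 - s * a12) / det
    w2 = (s * a11 - s * a12) / det
    return w1, w2

# Both agents acquire the accurate signal: equal weights hb/(h + 2hb), as in (7)
w1, w2 = projection_weights(1.0 / hb, 1.0 / hb)
assert abs(w1 - hb / (h + 2 * hb)) < 1e-12 and abs(w2 - w1) < 1e-12

# Agent i deviates to the noisy signal: weights (hl, hb)/(h + hb + hl), as in (8)
w1, w2 = projection_weights(1.0 / hl, 1.0 / hb)
assert abs(w1 - hl / (h + hb + hl)) < 1e-12
assert abs(w2 - hb / (h + hb + hl)) < 1e-12
```

The second case makes visible the force driving the two-agent analysis: a deviating agent's own realization receives the smaller weight hl, while the other agent's accurate realization receives the larger weight hb.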

Signals whose associated expressions for the conditional expectation of W are given by (7)-(8) generate schedules of desired actions that are rotations of one another. In particular, if Aj acquires signal X̄, then a_{X̄i,X̄j}(xi, xj) is a steeper rotation of a_{X̲i,X̄j}(xi, xj) around rotation-point

x̂i(xj, ω0) = ((ᾱ − α)/(α − α̲)) xj + ((2α − ᾱ − α̲)/(α − α̲)) ω0. (9)

Building on the Normal distribution example, a_{X̄i,X̄j}(xi, xj) = h̄(xi + xj)/(h + 2h̄) is a steeper rotation of a_{X̲i,X̄j}(xi, xj) = (h̲xi + h̄xj)/(h + h̄ + h̲) around x̂i(xj, ω0) = (h̄/(h + h̄)) xj.
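The rotation-point in the Normal example can be verified directly: the two schedules should coincide exactly at x̂i = h̄xj/(h + h̄) and the sign condition (6) should hold on either side of it. The sketch below is illustrative, not from the paper, and the numerical values are assumptions.

```python
# Illustrative check (not from the paper): in the Normal example the
# schedule when both agents acquire the accurate signal crosses the
# schedule when agent i deviates to the noisy signal exactly at
# x_hat = hb * x_j / (h + hb) (here with prior mean omega0 = 0),
# and the sign condition (6) holds away from the crossing.

h, hb, hl = 1.0, 4.0, 2.0    # assumed prior precision and noise precisions

def a_both(xi, xj):          # both agents acquire the accurate signal
    return hb * (xi + xj) / (h + 2 * hb)

def a_dev(xi, xj):           # agent i deviates to the noisy signal
    return (hl * xi + hb * xj) / (h + hb + hl)

xj = 1.5
x_hat = hb * xj / (h + hb)   # rotation-point for this x_j

# Schedules coincide at the rotation-point...
assert abs(a_both(x_hat, xj) - a_dev(x_hat, xj)) < 1e-12

# ...and condition (6) holds on both sides of it
for dx in (-2.0, -0.5, 0.5, 2.0):
    xi = x_hat + dx
    assert (a_both(xi, xj) - a_dev(xi, xj)) * dx > 0
```

Note that x̂i moves with xj, which is exactly why, with two agents, whether a given realization xi counts as "high" or "low" depends on what the other agent reports.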

Exactly as with a single agent, we refer to realizations xi higher (resp. lower) than x̂i(xj, ω0) as high (resp. low) realizations, where again the desired action when observing a high (resp. low) realization is higher (resp. lower) under signal X̄ than under signal X̲. However, observe that x̂i(xj, ω0) is a function of xj. With two agents, whether xi is high/low depends on the value xj takes. In particular, xi is high if and only if xj is sufficiently low. Figure 2 depicts A2's schedules when x2 is high (b) and low (c), assuming X1 = X̄.

Insert Figure 2 about here.

LR Condition. In the two-agent case, in which conditional means are a convex combination of the produced information and the prior, we suppose the likelihood ratio LR(x) ≡ g_X̲(x)/g_X̄(x) is weakly increasing in |x − ω0| and symmetric (i.e., LR(ω0 + ε) = LR(ω0 − ε), ∀ε). Notice the action Ai desires when observing Xi = ω0 coincides, ∀Xi, with the action he desires under the prior, that is, E_{W|X̄}[ω | ω0] = E_{W|X̲}[ω | ω0] = ω0. Assuming LR(x) is increasing in |x − ω0| amounts to assuming that realizations that are increasingly distant from ω0—that is, realizations that are increasingly "extreme"—become increasingly more likely under X̲ than under X̄.11 In the example above, lim_{x→∞} LR(x) = ∞.
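The LR condition can be checked in the Normal example, where the marginals of the two signals are mean-ω0 Normals whose variances differ through the noise precisions. The sketch below is illustrative and not from the paper; the precision values are assumptions.

```python
# Illustrative check (not from the paper): in the Normal example the
# marginals are X_acc ~ N(0, 1/h + 1/hb) and X_noisy ~ N(0, 1/h + 1/hl)
# with hl < hb, so the noisy signal has the fatter-tailed marginal.
# We verify that LR(x) = g_noisy(x) / g_acc(x) is symmetric around
# the prior mean and increasing in |x|.
import math

h, hb, hl = 1.0, 4.0, 2.0           # assumed prior and noise precisions
var_acc = 1.0 / h + 1.0 / hb        # marginal variance, accurate signal
var_noisy = 1.0 / h + 1.0 / hl      # marginal variance, noisy signal

def npdf(x, var):
    return math.exp(-x * x / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def LR(x):                          # likelihood ratio (prior mean = 0)
    return npdf(x, var_noisy) / npdf(x, var_acc)

# Symmetry: LR(omega0 + eps) = LR(omega0 - eps)
for eps in (0.3, 1.0, 2.5):
    assert abs(LR(eps) - LR(-eps)) < 1e-12

# Monotonicity in |x|: extreme realizations point toward the noisy signal
vals = [LR(x) for x in (0.0, 0.5, 1.0, 2.0, 4.0, 8.0)]
assert all(a < b for a, b in zip(vals, vals[1:]))
```

This is the sense in which extreme realizations are "evidence of shirking" here: they are relatively more likely when the agent acquired the noisier signal.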

11This assumption imposes enough "monotonicity" for us to characterize a simple decision rule in the two-agent case. In some sense, it is the analogue of the MLRP assumption on the conditional density of output in standard moral hazard settings. We discuss the consequences of removing this assumption when commenting on Proposition 4.

Heuristically, more noisy information is produced when the agents shirk.12

12In our model, agents behave "opportunistically" by shirking, which, for most of the analysis, means providing less accurate information. However, one could imagine other forms of opportunism. For instance, agents may be tempted to choose signals that fool DM into not revising its prior (see Prendergast [1993] on a related issue). Our approach would naturally extend to these alternative scenarios but, for reasons of space, we leave these considerations for future research.

Many environments (i) satisfy the LR condition and (ii) generate conditional means that are a convex combination of the prior and the realizations. For instance, these conditions are generally satisfied if the joint distribution of W, X̄, and X̲ belongs to the class of Elliptical distributions, which includes the Normal distribution.13 In the course of the analysis, we sometimes provide explicit solutions assuming that W ∼ N(0, 1), X̄ = ρ̄W + √(1 − ρ̄²) ε̄, and X̲ = ρ̲W + √(1 − ρ̲²) ε̲, with 0 ≤ ρ̲ < ρ̄ ≤ 1 and where ε̄ and ε̲ are i.i.d. and such that ε̄, ε̲ ∼ N(0, 1).14

13See Deimen and Szalay [2016] and references therein.

14Joint distributions that do not belong to the Elliptical class can also satisfy our conditions. This is the case, for instance, if W ∼ U[0, 1], X̄ = W + ε̄, and X̲ = W + ε̲, with ε̄ and ε̲ independently distributed and such that ε̄ ∼ N(0, 1/h̄) and ε̲ ∼ N(0, 1/h̲), with h̄ > h̲ and X(ω) = [ω − ν, ω + ν]. As an additional example, consider also W ∼ N(1/2, ·) on Ω = [0, 1], with X̄ | W = W and X̲ | W ∼ U[0, 1].

2.3 Decision Rules and Timing

We characterize the optimal deterministic decision rule a∗(x1, x2) : X × X → R that ensures the existence of an equilibrium in which A1 and A2 acquire signal X̄. The timing is as follows. First, DM announces and commits to a(x1, x2). Second, each agent Ai privately chooses his signal Xi ∈ {X̲, X̄}. Third, the realizations (x1, x2) are publicly observed and, fourth, DM implements a(x1, x2).15

15Analyzing alternative timings (in which, for instance, the agents gather their signals sequentially) is beyond the scope of this paper. On this issue, see Gerardi and Yariv [2008b].


3 The Model

3.1 The Single-Agent Case

3.1.1 The General Case

Suppose DM relies on a single agent A, and let ā(x) ≡ a_X̄(x) and a̲(x) ≡ a_X̲(x). To rule out trivial cases, we assume c is sufficiently large that A has insufficient incentives to acquire signal X̄ if DM implements the first-best decision rule ā(x). Formally, we assume that, under the decision rule ā(x), A's expected payoff when gathering signal X̄,

Π̄(a) − c ≡ ∫_X ∫_Ω u(a(x), ω) dF_{W|X̄}(ω | x) dG_X̄(x) − c, (10)

is lower than his expected payoff when gathering signal X̲,

Π̲(a) ≡ ∫_X ∫_Ω u(a(x), ω) dF_{W|X̲}(ω | x) dG_X̲(x). (11)

When c > Π̄(ā) − Π̲(ā), DM must depart from the first-best to motivate A to acquire X̄. By contrast, inducing the acquisition of X̲ can straightforwardly be achieved by implementing a̲(x). As a result, inducing the acquisition of X̄ is desirable only if the informational gain outweighs the payoff losses due to the inefficiencies necessary to motivate A. We now characterize the nature of these inefficiencies, that is, we characterize the "second-best" decision rule.

Suppose DM wishes A to acquire signal X. She then solves

maxa(x)∈Q(x)

∫X

∫Ωu (a (x) , ω) dFW |X (ω | x) dGX (x) , (12)


subject to inducing A to acquire signal X̄ rather than signal X:

∫X ∫Ω u(a(x), ω) dFW|X̄(ω | x) dGX̄(x) − c ≥ ∫X ∫Ω u(a(x), ω) dFW|X(ω | x) dGX(x).    (13)

To ensure the existence of a solution, we restrict a(x) to belong to Q(x) = [al(x), ah(x)], ∀x.16 We also assume (i) al(x) = ā(x) − q(x) and ah(x) = ā(x) + q(x), with q(x) > 0, ∀x, and (ii) a(x) ∈ Q(x), ∀x.17 The optimal decision rules under X̄ and X are thus feasible. Because c > Π̄(ā) − Π(ā), we anticipate (13) binds at the optimum and set up a Lagrangian, where λ is the multiplier associated with (13). Also, we assign the multipliers γl(x) and γh(x) to the constraints a(x) ≥ al(x) and a(x) ≤ ah(x), respectively. DM chooses a(x) to maximize pointwise:

a(x) ≤ ah(x), respectively. DM chooses a (x) to maximize pointwise:

∫XL (x, a) dx =

∫X

(∫Ωu (a (x) , ω) dFW |X (ω | x) gX (x) (14)

16Let TV b′

b (a) be the total variation of function a in [b, b′]. We assume that for somesufficiently large interval

[θ, θ]⊆ X , there exists at least one function a0(x) such that

a0(x) ∈ [al(x), ah(x)], ∀x ∈ X , with TV θθ (a0) <∞, and such that∫[θ,θ]

∫Ω

u (a0(x), ω) dFW |X (ω | x) dGX(x)−c−∫

[θ,θ]

∫Ω

u (a0(x), ω) dFW |X (ω | x) dGX(x) >

∫\[θ,θ]

∫Ω

u (a(x), ω) dFW |X (ω | x) dGX(x)−∫\[θ,θ]

∫Ω

u (a(x), ω) dFW |X (ω | x) dGX(x),∀a(x).

One can prove the existence of a solution by restricting attention to functions a (i) such

that a ∈ L1 (X ) and (ii) such that, for any [b, b′] ⊆ X , and some parameter H >TV θθ (a0)

(θ−θ),

TV b′

b (a) ≤ H(θ′ − θ

)(i.e., we restrict attention to functions with a uniformly bounded

total variation). We assume H to be large enough for our class of functions to include allfunctions of economic interest that satisfy (i) and (ii).

17We assume the bounds al(x) and ah(x) depend on x to avoid corner solutions in someexamples provided below. The assumption is inessential for our main results, and may bereplaced with a(x) ∈ [al, ah]. Also, we suppose the bounds are equidistant from a(x) toshorten some proofs. However, this assumption can be relaxed without affecting our results.

16

Page 17: MOTIVATING AGENTS TO ACQUIRE INFORMATION€¦ · 3 characterizes the optimal decision rule for the baseline model with one agent (Section 3.1); it then turns to the analysis with

(gX (x)

∫Ωu (a (x) , ω) dFW |X (ω | x)− c

−gX (x)

∫Ωu (a (x) , ω) dFW |X (ω | x)

)

+gX(x)

(γl (x) (a (x)− al (x)) + γh (x) (ah (x)− a (x))

))dx.

Let K ≡ {x ∈ X : ā(x) = a(x)} be the set of all realizations such that the desired actions under signals X̄ and X coincide. Similarly, let K+ ≡ {x ∈ X : ā(x) > a(x)} be the set of high realizations and K− ≡ {x ∈ X : ā(x) < a(x)} be the set of low realizations. Also, let λ̂ ≡ (1 + λ)/λ. We now state the relationship between the first- and second-best decision rules. The proof of Proposition 1, like most other proofs, is in the Appendix.

Proposition 1 Suppose DM wishes A to acquire signal X̄. Then, it is optimal to implement, for almost every x, a decision rule a*(x) such that:

1. a*(x) ∈ (ā(x), ah(x)] if x ∈ K+,

2. a*(x) ∈ [al(x), ā(x)) if x ∈ K−,

3. a*(x) = ā(x) if x ∈ K and λ̂ ≥ LR(x), and either a*(x) = al(x) or a*(x) = ah(x) if x ∈ K and λ̂ < LR(x).

Although DM does not observe which signal A acquires, she knows, for any realization x, which action A desires both in case he has acquired signal X̄ and in case he has acquired signal X. As a result, for all realizations such that the desired action differs depending on which signal was acquired (i.e., ∀x ∉ K), DM implements an action that departs from the first-best ā(x) in the direction opposite to the action a(x) desired under signal X. DM thus implements


higher actions than desired when observing high realizations, and lower actions than desired when observing low realizations. These distortions induce the acquisition of X̄ because, even though they hurt both players in equilibrium, they would hurt A relatively more if he were to deviate and acquire X. This asymmetric effect occurs because players suffer increasingly more as the implemented action becomes more distant from their desired action.18

When the realization x is such that A desires the same action regardless of the acquired signal's accuracy (i.e., when x ∈ K), the logic highlighted above breaks down: distortions are ex post equally damaging to A under both signals. However, DM chooses an action other than the first-best ā(x) in case gX(x) is sufficiently larger than gX̄(x), that is, in case the likelihood ratio LR(x) is sufficiently large. Heuristically, in these instances, departing from the first-best hurts A—in expectation—much more severely in case he deviates and acquires signal X than it hurts DM and A under signal X̄, and DM thus introduces as large a distortion as possible (by implementing either al(x) or ah(x)).

Corollary 1 When x ∉ K, all else equal, the size of the distortion |a*(x) − ā(x)| is weakly increasing in both |ā(x) − a(x)| and LR(x).

Because players suffer increasingly more as the chosen action becomes more distant from their desired action, the effectiveness of a departure from the first-best in punishing A in case he acquires X increases with the distance between ā(x) and a(x). Further, the likelihood ratio LR(x) = gX(x)/gX̄(x) determines how effective distortions are in expected terms, so that DM introduces larger distortions for the realizations with a higher likelihood ratio.

18 Specifically, departing from ā(x) in the direction opposite to a(x) involves a second-order loss to DM and A in equilibrium, but a first-order loss to A in case he deviates.
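To see where this first-order/second-order logic comes from, the pointwise maximization of the Lagrangian (14) can be sketched explicitly under quadratic loss. The display below is our own illustration, not taken from the paper; it ignores the bound constraints (γl = γh = 0), and ā(x) and a(x) denote the posterior means of W under X̄ and X.

```latex
% Illustrative sketch (ours): pointwise FOC of (14) under u(a,\omega) = -(a-\omega)^2,
% with the bound constraints slack.
0 \;=\; (1+\lambda)\, g_{\bar{X}}(x)\,\bigl(\bar{a}(x) - a\bigr)
        \;+\; \lambda\, g_{X}(x)\,\bigl(a - a(x)\bigr)
\quad\Longrightarrow\quad
a^{*}(x) \;=\; \frac{(1+\lambda)\, g_{\bar{X}}(x)\,\bar{a}(x) \;-\; \lambda\, g_{X}(x)\, a(x)}
                    {(1+\lambda)\, g_{\bar{X}}(x) \;-\; \lambda\, g_{X}(x)}\,.
```

The denominator is positive—and the pointwise objective concave in a—exactly when λ̂ = (1 + λ)/λ exceeds LR(x) = gX(x)/gX̄(x); when instead λ̂ < LR(x), the pointwise objective is convex in a and a corner al(x) or ah(x) is optimal, which is where case 3 of Proposition 1 comes from.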


3.1.2 Rotations

We suppose that, under either signal, the conditional expectation of W can be expressed as a convex combination of the prior and the realization x. Recall that ā(x) is then a steeper rotation of a(x) around x = ω0. Recall also that a0 denotes the action desired under the prior, that is, a0 denotes the status quo. In this environment, ā(ω0) = a(ω0) = a0, and ā(x), a(x) > a0 if x > ω0 (and conversely).

Corollary 2 Suppose DM wishes A to acquire signal X̄. Then, it is optimal to implement, for almost every x, a decision rule a*(x) which is a rotation of the schedule ā(x) around ω0, that is,

(a*(x) − ā(x))(x − ω0) > 0, ∀x ≠ ω0.    (15)

The final decision is made exceedingly sensitive to the provided information, so that implementing an action close to a0 is less likely than under the first-best.

Proof. By assumption, (ā(x) − a(x))(x − ω0) = (β̄ − β)(x − ω0)² > 0, ∀x ≠ ω0. It follows from Proposition 1 that (a*(x) − ā(x))(x − ω0) > 0, ∀x ≠ ω0.

The decision rule is such that, compared to the first-best, it is less likely that an action close to a0 is implemented, which hurts A in case he acquires signal X, because the actions he then desires are closer to the status quo (i.e., β̄ > β).19 Notice DM does not make it less likely to implement an action close to the status quo by introducing random noise in the final decision. Instead, she makes her final decision exceedingly sensitive to the provided information: DM implements higher actions than desired when the first-best action is above

19 Formally, Pr[|a*(x) − a0| > ε] > Pr[|ā(x) − a0| > ε], ∀ε > 0.


a0, and conversely when the first-best action is below a0. In other words, DM

overreacts to the provided information in order to motivate her agent.

3.1.3 An Explicit Solution

Suppose u(a, ω) = −(ω − a)², W ∼ N(0, 1), X = ρW + √(1 − ρ²) ε, and X̄ = ρ̄W + √(1 − ρ̄²) ε, where (i) 0 ≤ ρ < ρ̄ ≤ 1 and (ii) ε ∼ N(0, 1). Here, W | X = x ∼ N(ρx, 1 − ρ²), so that a(x) = ρx and ā(x) = ρ̄x. To limit the number of cases, suppose ρ̄ > √ρ. Also, ∀x, q(x) is sufficiently large that a*(x) ∈ int Q(x).

Proposition 2 Suppose DM wishes A to acquire signal X̄. If c ≤ 2ρ̄(ρ̄ − ρ), DM implements, for almost every x, the first-best decision rule a*(x) = ρ̄x. Otherwise, for almost every x, DM implements:

a*(x) = ( c / (2(ρ̄ − ρ)) ) x,  where c / (2(ρ̄ − ρ)) > ρ̄.

When the cost to A of acquiring signal X̄ is large, DM must depart from the first-best decision rule and adopt a decision rule which is a steeper rotation of the first-best rule ā(x) around 0. Specifically, DM puts excessive weight on the provided information (compared to the first-best), thereby making it less likely that an action close to the status quo is implemented. Also, notice the weight put on A's information is increasing in the cost c and decreasing in the informational superiority of signal X̄, as measured by ρ̄ − ρ.20

20 Recall that—when c > Π̄(ā) − Π(ā)—inducing the acquisition of X̄ may not be optimal because of the distortions. In this example, one computes that it is optimal to induce the acquisition of X̄ when c ≤ 2(ρ̄ − ρ)(ρ̄ + √(ρ̄² − ρ²)).
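The closed form in Proposition 2 is easy to verify numerically. The sketch below is our own check, not part of the paper: under quadratic loss with W ∼ N(0, 1), a linear rule a(x) = βx gives the agent an expected gross payoff of −(1 − 2βr + β²) from a signal of accuracy r, so the binding effort constraint pins down β = c/(2(ρ̄ − ρ)). The parameter values are illustrative, not the paper's.

```python
# Numerical check (ours) of Proposition 2, assuming quadratic loss,
# W ~ N(0,1), and a linear decision rule a(x) = beta * x. For a signal
# X_r = r*W + sqrt(1 - r^2)*eps, the agent's expected gross payoff under
# the rule beta*x is -E[(W - beta*X_r)^2] = -(1 - 2*beta*r + beta^2).

def payoff(beta, r):
    """Agent's expected gross payoff under rule a(x) = beta*x, accuracy r."""
    return -(1.0 - 2.0 * beta * r + beta**2)

rho_bar, rho, c = 0.9, 0.5, 1.0  # illustrative values with rho_bar > sqrt(rho)

# The first-best rule rho_bar*x motivates the agent only if c <= 2*rho_bar*(rho_bar - rho).
assert c > 2.0 * rho_bar * (rho_bar - rho)  # so the second-best is distorted

beta_star = c / (2.0 * (rho_bar - rho))  # second-best slope from Proposition 2

# The effort constraint binds exactly under beta_star...
assert abs((payoff(beta_star, rho_bar) - payoff(beta_star, rho)) - c) < 1e-9
# ...and the rule is a steeper rotation of the first-best: beta_star > rho_bar.
assert beta_star > rho_bar
```

With these values, β* = 1.25 > ρ̄ = 0.9, so DM overweights the report exactly as the proposition states.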


Corollary 3 The variance VX̄[a*(x)] of the optimal decision rule a*(x) is larger than the variance VX̄[ā(x)] of the first-best decision rule ā(x) = ρ̄x.

Proof. VX̄[a*(x)] = ∫X (a*(x) − EX̄[a*(x)])² dGX̄(x) > VX̄[ā(x)] = ∫X (ā(x) − EX̄[ā(x)])² dGX̄(x) because (i) EX̄[a*(x)] = EX̄[ā(x)] = 0 and (ii) (a*(x) − ā(x))(x − ω0) > 0, ∀x ≠ ω0.

In our setting, signal X is more informative than signal X′ if and only if VX[aX(x)] > VX′[aX′(x)], that is, if and only if the spread of the desired actions under signal X is larger than that under signal X′. Because the second-best decision rule a*(x) is a rotation of the first-best decision rule ā(x), and because the expected value of both decision rules is the same, VX̄[a*(x)] > VX̄[ā(x)]. In order to motivate her agent, DM behaves as if the provided information's underlying accuracy were higher than it actually is. In other words, DM behaves as if overconfident about the information she is provided with.
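Corollary 3 can also be checked by simulation. This is our own sketch, with the same illustrative (non-paper) parameter values as above: we sample realizations of X̄ and compare the sample variances of the two decision rules.

```python
# Monte Carlo check (ours) of Corollary 3: the second-best rule
# a*(x) = beta_star*x has a larger variance than the first-best rule
# rho_bar*x, since both have mean zero and beta_star > rho_bar.
import random

random.seed(0)
rho_bar, rho, c = 0.9, 0.5, 1.0          # illustrative values, as above
beta_star = c / (2.0 * (rho_bar - rho))  # second-best slope (Proposition 2)

xs = [random.gauss(0.0, 1.0) for _ in range(200_000)]  # draws of X-bar ~ N(0,1)

def variance(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

v_second_best = variance([beta_star * x for x in xs])
v_first_best = variance([rho_bar * x for x in xs])

assert v_second_best > v_first_best  # DM acts as if the signal were more accurate
```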

3.2 The Two-Agent Model

Suppose DM now relies on two agents, A1 and A2, each of whom may acquire either signal X, at zero cost, or signal X̄, at cost c. Recall our focus on environments in which conditional expectations of W are a convex combination of the signal realizations (x1, x2) and the prior ω0. Relying on a second agent is potentially valuable because of the additional information, but also because it may allow DM to better control the agents by comparing their information.

Our objective is to characterize the optimal decision rule that ensures the existence of an equilibrium in which both agents acquire signal X̄.21 In particular, we wish to investigate whether and how the main intuition of the single-agent case—i.e., that DM makes her final decision excessively sensitive to the provided information so as to make it harder to implement actions close to a0—survives with two agents. We do not seek conditions under which motivating both agents to acquire signal X̄ is optimal (as opposed to, for instance, relying on one agent, or relying on two agents but requesting them to acquire different signals). Computing closed-form conditions would require first specifying many of our primitives.22 Intuitively, requesting both agents to acquire X̄ will be optimal whenever the informational gain outweighs the necessary distortions. We provide conditions that ensure motivating both agents to gather X̄ is optimal when solving for specific examples.

Suppose DM wishes both agents to acquire signal X̄. She then solves

max_{a(x1,x2)∈Q(x1,x2)} ∫X×X ∫Ω u(a(x), ω) dFW|X̄,X̄(ω | x) dGX̄,X̄(x),    (16)

subject to inducing Ai to acquire signal X̄ rather than signal X:

ΠX̄i,X̄j(a) − c := ∫X×X ∫Ω u(a(x), ω) dFW|X̄i,X̄j(ω | x) dGX̄i,X̄j(x) − c ≥ ΠXi,X̄j(a) := ∫X×X ∫Ω u(a(x), ω) dFW|Xi,X̄j(ω | x) dGXi,X̄j(x),    (17)

for i = 1, 2 and j ≠ i. To ensure the existence of a solution, we restrict

21 As in any multiple-agent setting, a concern exists about equilibrium multiplicity. However, in our setting with preferences given by (1), one shows that, at the optimum, Ai generally has stronger incentives to gather X̄ when believing Aj is acquiring X instead of X̄ (because of a “free-riding on a public good” argument), so that no such multiplicity exists.

22 Note that our analysis is a necessary step in DM's larger optimization problem. To analyze whether, for instance, relying on two agents dominates relying on a single agent, one must necessarily compute the two associated optimal decision rules first.


a(x) to belong to the interval Q(x) = [al(x), ah(x)], ∀x.23 We assume (i) al(x) = aX̄,X̄(x) − q(x) and ah(x) = aX̄,X̄(x) + q(x), with q(x) > 0, ∀x, and (ii) aX1,X̄2(x), aX̄1,X2(x) ∈ Q(x), ∀x.24 The optimal decision rules under all relevant combinations of the signals are feasible decision rules.

Notice that although the left-hand side of the “effort constraint” (17) is identical ∀i, the right-hand side will generally differ: if Ai unilaterally deviates and acquires signal X, he believes Aj acquires signal X̄. The fact that agents hold different beliefs (and thus desire different actions) out of equilibrium constitutes the main difficulty of the two-agent case.

Distinguishing between high and low realizations is again useful. For all joint realizations (x1, x2), DM determines for each agent whether he desires an action higher or lower than the first-best in case he has unilaterally deviated and acquired X. Whether Ai's signal realization xi is high/low depends on its comparison with both ω0 and xj (see Section 2). Although it is possible that both x1 and x2 are high (resp. low)—i.e., both agents desire a lower (resp. higher) action than the first-best when unilaterally deviating—it is also possible that one agent's signal realization is high whereas the other agent's realization is low—i.e., one agent prefers a lower action than the first-best in case of a

23 Let TV(a, Ib′b) be the total variation of the function a on the rectangle Ib′b = [b1, b1′] × [b2, b2′]. We assume that, for some sufficiently large rectangle Iθ̄θ = [θ1, θ̄1] × [θ2, θ̄2] ⊂ X², there exists at least one function a0(x) such that a0(x) ∈ [al(x), ah(x)], ∀(x1, x2) ∈ X × X, with TV(a0, Iθ̄θ) < ∞, and such that ΠX̄i,X̄j(a0) − ΠXi,X̄j(a0) ≥ c for i = 1, 2. One can prove the existence of a solution by restricting attention to functions a (i) such that a ∈ L¹(X²) and (ii) such that, for any Ib′b ⊂ X², TV(a, Ib′b) ≤ H, for some parameter H ≥ TV(a0, Iθ̄θ).

24 Again, we assume the bounds of Q(x) depend on x to avoid corner solutions in some of the examples provided below. One may assume a ∈ Q without qualitatively affecting our main results. Similarly, we assume al(x) and ah(x) are equidistant from aX̄,X̄(x) to shorten some of the proofs. The assumption can be relaxed without affecting our main results.


unilateral deviation, and conversely for the other agent.

The existence of such polarized out-of-equilibrium desired actions makes it unclear whether and how the intuition of the single-agent case—i.e., that DM departs from the first-best in the direction opposite to what the (single) agent desires under signal X—survives with two agents. Figure 2 depicts both agents' schedules of desired actions under signals X̄ and X, holding fixed the other agent's signal realization (and assuming he acquired X̄). Figure 2 ((a)-(b)) considers a situation with congruent information in which x1 and x2 are both high. Figure 2 ((c)-(d)) considers a situation with conflicting information in which x1 is high whereas x2 is low.

3.2.1 Costless Public Signal

To distinguish the modifications to the second-best decision rule (compared to the single-agent case) that occur because of the presence of a second signal from those that stem from having a second agent to motivate, take it as given, for the moment, that Aj acquires signal X̄. Equivalently, suppose there exists a costless public signal X̄j.25 We maintain all other assumptions. To rule out trivial cases, we suppose DM wishes Ai (i ≠ j) to acquire signal X̄ and assume c is large enough that Ai has insufficient private incentives to gather X̄ in case the first-best aX̄i,X̄j(xi, xj) is implemented.26

We denote by a*_{i,X̄j}(x) the solution to DM's problem when Ai's effort constraint binds and signal X̄j is observed. Let K(xj) ≡ {xi ∈ X : aX̄i,X̄j(xi, xj) = aXi,X̄j(xi, xj)}, that is, let K(xj) be the set of realizations xi such that Ai desires the same action under X̄ and X, for a given X̄j = xj. Also, K+(xj) ≡ {xi ∈ X : aX̄i,X̄j(xi, xj) > aXi,X̄j(xi, xj)}, K−(xj) ≡ {xi ∈ X : aX̄i,X̄j(xi, xj) < aXi,X̄j(xi, xj)}, and λ̂i ≡ (1 + λi)/λi. K+(xj) (resp. K−(xj)) is the set of high (resp. low) realizations, for a given X̄j = xj. From (9), we know xi ∈ K+(xj) if xi > x̂i(xj, ω0) and xi ∈ K−(xj) if xi < x̂i(xj, ω0).

25 We set Xj = X̄ for notational convenience only. Our results are identical if Xj = X.

26 Formally, we suppose c > ΠX̄i,X̄j(aX̄i,X̄j) − ΠXi,X̄j(aX̄i,X̄j).

Proposition 3 Suppose DM wishes Ai to acquire signal X̄. Then, it is optimal to implement, for almost every (xi, xj), a decision rule that specifies an action a*_{i,X̄j}(x) such that:

1. a*_{i,X̄j}(x) ∈ (aX̄i,X̄j(x), ah(x)] if xi ∈ K+(xj),

2. a*_{i,X̄j}(x) ∈ [al(x), aX̄i,X̄j(x)) if xi ∈ K−(xj),

3. a*_{i,X̄j}(x) = aX̄i,X̄j(x) if xi ∈ K(xj) and λ̂i ≥ LR(xi), and either a*_{i,X̄j}(x) = al(x) or a*_{i,X̄j}(x) = ah(x) if xi ∈ K(xj) and λ̂i < LR(xi).

For all joint realizations (xi, xj), DM chooses an action that departs from the first-best (i.e., aX̄i,X̄j(x)) in the direction opposite to the action Ai desires when deviating and acquiring signal X (i.e., aXi,X̄j(x)). As in the single-agent case, this policy motivates information acquisition because it ensures the decision rule is a sufficiently better match with the posterior beliefs jointly induced by (X̄i, X̄j) than with those jointly induced by (Xi, X̄j).

In the single-agent case without an additional signal, DM compared x to the prior ω0 to determine the sign of the distortion. For instance, DM would introduce an upward distortion when x > ω0. Here, instead, DM compares xi to the rotation point x̂i(xj, ω0) induced by X̄j. Thus, a*_{i,X̄j}(x) > aX̄i,X̄j(x) if


xi > x̂i(xj, ω0) and a*_{i,X̄j}(x) < aX̄i,X̄j(x) if xi < x̂i(xj, ω0).27,28 Therefore, DM exploits signal X̄j not only because of its informational content (notice a*_{i,X̄j}(x) is a function of aX̄,X̄(x)), but also to better control Ai.

Further, unlike the single-agent case without an additional signal, DM does not necessarily introduce an upward distortion when the first-best is above the status quo (i.e., when aX̄i,X̄j(x) > a0), or a downward distortion when the first-best is below the status quo (i.e., when aX̄i,X̄j(x) < a0). For some realizations, DM may instead implement an action that lies closer to a0 than the first-best. To see this, suppose, for instance, that (1/2)(xi + xj) > ω0, which in our setting implies aX̄i,X̄j(x) > a0 (see (7)-(8)). In words, suppose (xi, xj) is such that, in equilibrium, the desired action is above the status quo. If the realizations are also such that xi < x̂i(xj, ω0), that is, if xi is low given xj, the final decision a*_{i,X̄j}(x) is lower than aX̄i,X̄j(x) and thus possibly closer to a0. More generally, joint realizations (x1, x2) such that either “(1/2)(xi + xj) > ω0 and xi < x̂i(xj, ω0)” or “(1/2)(xi + xj) < ω0 and xi > x̂i(xj, ω0)” occur with positive probability and may generate a decision rule that does not make it less likely to implement actions close to the status quo compared to the first-best.

Explicit Solution. Suppose u(a, ω) = −(ω − a)², W ∼ N(0, 1), and let X = ε and X̄ = ρW + √(1 − ρ²) ε, where 0 < ρ < 1 and ε ∼ N(0, 1). To avoid corner solutions, we assume that, ∀(xi, xj), q(x) is large enough that a*(x) ∈ int Q(x). Ai's schedule of desired actions under (X̄i, X̄j), aX̄i,X̄j(x) = (ρ/(1 + ρ²))(xi + xj), is a

27 Formally, ∀xj, a*_{i,X̄j}(xi, xj) is a steeper rotation of aX̄i,X̄j(xi, xj) around the rotation point x̂i(xj, ω0). Notice x̂i(xj, ω0) is random from an ex ante perspective.

28 Similarly to Corollary 1, one can also show that, ∀xj, if xi ∉ K(xj), the size of the distortion |a*_{i,X̄j}(x) − aX̄i,X̄j(x)| is weakly increasing in |xi − x̂i(xj, ω0)| and LR(xi).


steeper rotation of aXi,X̄j(x) = ρxj around the rotation point x̂i(xj, ω0) = ρ²xj. Notice the first-best decision rule can be expressed as a function of the sum of the signal realizations, that is, (xi + xj) is a sufficient statistic for (xi, xj).
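As a quick numerical illustration of the rotation point—our own check, with arbitrary parameter values—the two schedules indeed cross at xi = ρ²xj:

```python
# Check (ours) that the schedules in Section 3.2.1 cross at x_i = rho^2 * x_j:
# rho*(x_i + x_j)/(1 + rho^2) equals rho*x_j exactly at that point, and lies
# above it for higher realizations of x_i.
rho, x_j = 0.6, 1.7  # arbitrary illustrative values

def a_both_accurate(x_i):
    """A_i's desired action when both agents hold the accurate signal."""
    return rho * (x_i + x_j) / (1.0 + rho**2)

a_deviation = rho * x_j  # A_i's desired action after deviating to the pure-noise X_i

x_rot = rho**2 * x_j  # the rotation point
assert abs(a_both_accurate(x_rot) - a_deviation) < 1e-9
assert a_both_accurate(x_rot + 1.0) > a_deviation  # steeper above the crossing
```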

Suppose DM wishes Ai to acquire signal X̄ and suppose also c > ΠX̄i,X̄j(aX̄i,X̄j) − ΠXi,X̄j(aX̄i,X̄j) = 2ρ²/(1 + ρ²)². Solving DM's problem yields

a*_{i,X̄j}(x) = aX̄i,X̄j(x) + λ*(c) (ρ/(1 + ρ²)) (xi − ρ²xj),    (18)

where λ*(c) = (1/(5ρ⁴ − 3)) ( (1/ρ) √( c(1 + ρ²)²(5ρ⁴ − 3) + ρ²(7 − 8ρ⁴ + ρ⁸) ) − (1 + ρ⁴) ).

The second-best decision rule a*_{i,X̄j}(x) is a steeper rotation of aX̄i,X̄j(x) around x̂i(xj, ω0) = ρ²xj, ∀xj. Also, (xi − ρ²xj) determines both the sign and the size of the distortion. Finally, notice the optimal decision rule (18) cannot generally be rewritten as a function of (x1 + x2). This occurs because DM uses X̄j not only for its informational content, but also to better motivate Ai.
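A consistency check on (18)—again our own, not the paper's: the multiplier λ*(c) should equal zero exactly at the threshold c = 2ρ²/(1 + ρ²)² at which the effort constraint just starts to bind, and be strictly positive above it.

```python
# Consistency check (ours) of the multiplier printed below equation (18).
from math import sqrt

def lam_star(c, rho):
    """lambda*(c) from the explicit solution (18)."""
    inner = c * (1 + rho**2) ** 2 * (5 * rho**4 - 3) + rho**2 * (7 - 8 * rho**4 + rho**8)
    return (sqrt(inner) / rho - (1 + rho**4)) / (5 * rho**4 - 3)

rho = 0.8                              # illustrative accuracy
c0 = 2 * rho**2 / (1 + rho**2) ** 2    # threshold at which the constraint binds

assert abs(lam_star(c0, rho)) < 1e-9   # no distortion exactly at the threshold
assert lam_star(c0 + 0.01, rho) > 0.0  # strictly positive distortion above it
```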

3.2.2 Two Agents to Motivate

Suppose now DM must motivate both agents to acquire signal X̄. Let λi denote the multiplier associated with (17), for i = 1, 2. To focus on the interesting case, assume c is sufficiently large that—when aX̄,X̄(x1, x2) is implemented—no agent has private incentives to acquire signal X̄ (when anticipating the other agent is acquiring X̄). Similarly, we also assume c is sufficiently large that Aj (j = 1, 2) acquires signal X if a*_{j,X̄i}(x) is implemented.29 Finally, we assign the multipliers γl(x) and γh(x) to the constraints a(x) ≥ al(x) and a(x) ≤ ah(x), respectively.

29 Formally, we set c > max[ ΠX̄i,X̄j(aX̄i,X̄j) − ΠXi,X̄j(aX̄i,X̄j), ΠX̄i,X̄j(a*_{j,X̄i}) − ΠXi,X̄j(a*_{j,X̄i}) ]. When this inequality holds, (17) binds (i.e., λi > 0) for i = 1, 2.


DM chooses a(x) ∈ Q(x) to maximize ∫X×X L(x, a) dx pointwise, where

L(x, a) = gX̄,X̄(x) ∫Ω u(a(x), ω) dFW|X̄,X̄(ω | x)    (19)

+ Σ_{i=1}^{2} λi ( gX̄(x1) gX̄(x2) ∫Ω u(a(x), ω) dFW|X̄,X̄(ω | x) − c − gXi(xi) gX̄j(xj) ∫Ω u(a(x), ω) dFW|Xi,X̄j(ω | x) )

+ gX̄,X̄(x) ( γl(x)(a(x) − al(x)) + γh(x)(ah(x) − a(x)) ).
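Analogously to the single-agent case, the pointwise first-order condition of (19) can be sketched under quadratic loss. The display below is our own illustrative derivation, not the paper's, with the bound constraints slack and λ1 = λ2 = λ; here ā(x) denotes aX̄,X̄(x) and ai(x) = aXi,X̄j(x) is Ai's desired action after a unilateral deviation.

```latex
% Illustrative sketch (ours): pointwise FOC of (19) under u(a,\omega) = -(a-\omega)^2,
% with \gamma_l = \gamma_h = 0 and \lambda_1 = \lambda_2 = \lambda.
a^{*}(x) \;=\;
\frac{\bigl[g_{\bar{X},\bar{X}}(x) + 2\lambda\, g_{\bar{X}}(x_1)\, g_{\bar{X}}(x_2)\bigr]\,\bar{a}(x)
      \;-\; \lambda \sum_{i=1}^{2} g_{X_i}(x_i)\, g_{\bar{X}_j}(x_j)\, a_i(x)}
     {g_{\bar{X},\bar{X}}(x) + 2\lambda\, g_{\bar{X}}(x_1)\, g_{\bar{X}}(x_2)
      \;-\; \lambda \sum_{i=1}^{2} g_{X_i}(x_i)\, g_{\bar{X}_j}(x_j)}\,.
```

The sign of the denominator turns on the comparison between λ̃ = (1 + 2λ)/λ and the sum of likelihood ratios Σi LR(xi); when the latter is larger, the pointwise objective becomes convex in a and a corner al(x) or ah(x) is optimal, which is the threshold appearing in case 3 of Proposition 4 below.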

The next proposition compares the first- and second-best decision rules. In what follows, let λ̃ ≡ (1 + 2λ)/λ.

Proposition 4 Suppose DM wishes both agents to acquire signal X̄. Then, λ*_i = λ*, for i = 1, 2, and it is optimal to implement, for almost every (x1, x2), a decision rule a*(x) such that:

1. a*(x) ∈ (aX̄,X̄(x), ah(x)] if aX̄,X̄(x) > a0,

2. a*(x) ∈ [al(x), aX̄,X̄(x)) if aX̄,X̄(x) < a0, and

3. a*(x) = aX̄,X̄(x) if aX̄,X̄(x) = a0 and λ̃ ≥ Σ_{i=1}^{2} LR(xi), and either a*(x) = al(x) or a*(x) = ah(x) if aX̄,X̄(x) = a0 and λ̃ < Σ_{i=1}^{2} LR(xi).

For all joint realizations (x1, x2), DM implements an action that departs from the first-best aX̄,X̄(x1, x2) in the direction opposite to the status quo a0. Therefore, DM implements higher actions than desired when the first-best action is above a0, and lower actions than desired when the first-best action is below a0. Exactly as in the single-agent case, DM makes it less likely to implement actions close to a0. This feature arises despite the occurrence of polarized out-of-equilibrium desired actions, and despite DM's desire to compare the provided information so as to better control her agents.

When information is congruent—for instance, when both realizations are high—the analysis is identical to the single-agent case. Both agents would desire an action lower than the first-best upon unilaterally deviating, so that DM introduces an upward distortion. The distortion hurts all players in equilibrium, but it would hurt the agents relatively more if they were to deviate.

When information is conflicting, by contrast, introducing, say, an upward distortion will be useful in terms of incentive provision for the agent whose realization is high, since his desired action when deviating is lower than the first-best. For the other agent, however, whose realization is low, an upward distortion will dampen incentives, because it means implementing an action closer to his desired action when deviating. When faced with this trade-off, DM introduces a distortion that targets the agent whose signal realization is the most “extreme” (i.e., the farthest away from ω0). The reason for this is twofold. First, given the focus on an environment in which schedules of desired actions are linear in the produced information and rotations of one another, the agent whose signal realization is the most extreme is the agent whose desired action is the most sensitive to the accuracy of his own signal. Specifically, if |xi − ω0| > |xj − ω0|, then |aX̄i,X̄j(x) − aXi,X̄j(x)| > |aX̄i,X̄j(x) − aX̄i,Xj(x)|.30 Because agents suffer increasingly more as the implemented action is farther from their ideal, and ignoring likelihood ratios for now, it pays the most to

30 This inequality is derived using (7)-(8).


introduce a distortion targeting the agent whose signal realization is the most extreme. In addition, because more extreme realizations become increasingly more likely under signal X than under signal X̄—that is, because |xi − ω0| > |xj − ω0| implies LR(xi) ≥ LR(xj)—distortions become increasingly effective at providing incentives (in “expected terms”), which reinforces the first effect.31

Unlike the single-agent case with an additional signal, here DM always implements an action that lies farther away from the status quo than the first-best. As a result, actions close to a0 are unambiguously less likely under the second-best decision rule. In our setting, when realizations are congruent and, say, high, it is necessarily the case that aX̄,X̄(x) > a0, so that DM straightforwardly implements a*(x) > aX̄,X̄(x).32 When information is conflicting, say x1 is high (i.e., x1 ≥ x̂1(x2)) and x2 is low (i.e., x2 ≤ x̂2(x1)), the optimal decision rule targets the most extreme realization. Thus, in our example, if |x1 − ω0| > |x2 − ω0|, DM implements an upward distortion. However, the most extreme realization determines whether (1/2)(x1 + x2) ≷ ω0, that is, the most extreme realization determines whether aX̄,X̄(x) ≷ a0.33 Thus, again in our example, |x1 − ω0| > |x2 − ω0| implies that (1/2)(x1 + x2) > ω0, and the sign of (a*(x) − aX̄,X̄(x)) is necessarily the same as the sign of (aX̄,X̄(x) − a0).

DM is limited in her ability to control her agents by comparing their information because of the occurrence of conflicting information. When information

31 Removing the assumption whereby (i) LR(x) is increasing in |x − ω0| and (ii) LR(ω0 + ε) = LR(ω0 − ε), ∀ε, would mean that, in case of conflicting information, DM targets the most extreme realization if and only if its associated likelihood ratio is sufficiently high. She otherwise targets the less extreme realization. As a result, whether a* > aX̄,X̄ (resp. a* < aX̄,X̄) when aX̄,X̄ > a0 (resp. aX̄,X̄ < a0) would be ambiguous.

32 Formally, using (7)-(8), one shows x1 ≥ x̂1(x2) and x2 ≥ x̂2(x1) together imply (1/2)(x1 + x2) > ω0, and thus aX̄,X̄(x) ≥ a0.

33 Because (i) |x1 − ω0| > |x2 − ω0| and (ii) x1 is high whereas x2 is low, it must be the case that either x1 > x2 > ω0 or x1 > ω0 > x2.


is conflicting, the agents' out-of-equilibrium desired actions are polarized, and DM's distortion can target at most one agent.

Explicit Solution. Suppose u(a, ω) = −(a − ω)², W ∼ N(0, 1), X̄ = ρW + √(1 − ρ²) ε, and X = ε, where ε ∼ N(0, 1). To shorten expressions, suppose also ρ = 2/3 and c ≤ 1/2.34 One shows that implementing the first-best aX̄,X̄(x1, x2) = (6/13)(x1 + x2) fails to motivate the agents to gather signal X̄ if c > 72/169.35 When c ∈ [72/169, 1/2], the second-best decision rule is given by

a*(x) = ( 6/13 + λ*(c) · 56/117 )(x1 + x2),    (20)

where λ*(c) = (27/224)(5 − 13√(1 − 2c)). DM's associated expected payoff is (1/4)(−9 + 5√(1 − 2c) + 13c). Notice the decision rule (20) is a function of the sufficient statistic (x1 + x2), unlike expression (18). This occurs because DM is unable to control her agents by comparing the information they provide. Notice also that V[a*(x)] > V[ā(x)] again holds: DM behaves as if the jointly provided information were more accurate than it actually is.
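The closed forms in this example can be cross-checked numerically; the sketch below is our own verification, using the fact (our own computation, not the paper's) that under a linear rule β(x1 + x2) each agent's incentive gain from gathering X̄ is 2βρ − 2β²ρ².

```python
# Cross-checks (ours) of the two-agent explicit solution with rho = 2/3:
# lambda*(c) = (27/224)*(5 - 13*sqrt(1-2c)), second-best slope
# beta(c) = 6/13 + lambda*(c)*56/117, and DM's expected payoff
# (1/4)*(-9 + 5*sqrt(1-2c) + 13c).
from math import sqrt

def lam_star(c):
    return (27.0 / 224.0) * (5.0 - 13.0 * sqrt(1.0 - 2.0 * c))

def beta(c):
    return 6.0 / 13.0 + lam_star(c) * 56.0 / 117.0

def dm_payoff(c):
    return 0.25 * (-9.0 + 5.0 * sqrt(1.0 - 2.0 * c) + 13.0 * c)

rho = 2.0 / 3.0

# At c = 72/169 the effort constraint just binds: no distortion, and DM
# earns the first-best payoff -5/13.
c_bind = 72.0 / 169.0
assert abs(lam_star(c_bind)) < 1e-9
assert abs(dm_payoff(c_bind) - (-5.0 / 13.0)) < 1e-9

# Each agent's incentive gain from X-bar under the rule beta*(x1+x2) is
# 2*beta*rho - 2*(beta*rho)^2 (our computation); at the second-best slope
# it equals c exactly, i.e., constraint (17) binds.
for c in (0.45, 0.5):
    b = beta(c)
    assert abs(2.0 * b * rho - 2.0 * (b * rho) ** 2 - c) < 1e-9

# At the switching cost 4*(142 + 15*sqrt(10))/1521, the two-agent payoff
# equals the single-agent payoff -5/9 quoted in the text.
c_switch = 4.0 * (142.0 + 15.0 * sqrt(10.0)) / 1521.0
assert abs(dm_payoff(c_switch) - (-5.0 / 9.0)) < 1e-9
```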

Recall that it may not always be optimal to induce both agents to gather signal X̄. We now provide conditions specific to this example which ensure it is. When c ≤ 72/169, the first-best is implementable, so that relying on two agents is clearly optimal (DM's expected payoff is then −5/13). When c > 72/169, relying on two agents guarantees a higher payoff than relying on a single agent if and only if c ≤ (4/1521)(142 + 15√10).36 When c ∈ [(4/1521)(142 + 15√10), 1/2], DM

34 We also assume that, ∀(xi, xj), q(x) is large enough that a*(x) ∈ int Q(x).

35 Implementing a*_{i,X̄j}(x1, x2) also fails to motivate Aj to gather signal X̄ when c > 72/169. One computes a*_{i,X̄j}(x1, x2) = (6/13)(xi + xj) + (81/163)( 97/81 − (3/2)√( 143260/59049 − (27547/6561)c ) )(6/13)(xi − (4/9)xj).

36 Because in this example ρ = 0, the single-agent case coincides with the two-agent case


optimally relies on a single agent and her expected payoff is −5/9.³⁷
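These closed-form expressions can be cross-checked numerically. The following sketch (ours, not part of the paper's formal analysis) verifies that λ*(c) vanishes at the threshold c = 72/169, that rule (20) then collapses to the first-best and delivers the payoff −5/13, that the rule is strictly steeper above the threshold, and that the two-agent payoff falls to the single-agent level −5/9 at c = (4/1521)(142 + 15√10):

```python
from math import sqrt, isclose

def lam(c):
    # Multiplier in the second-best rule: lambda*(c) = (27/224)(5 - 13*sqrt(1 - 2c))
    return (27 / 224) * (5 - 13 * sqrt(1 - 2 * c))

def slope(c):
    # Coefficient on (x1 + x2) in rule (20)
    return 6 / 13 + lam(c) * (56 / 117)

def dm_payoff(c):
    # DM's expected payoff: (1/4)(-9 + 5*sqrt(1 - 2c) + 13c)
    return 0.25 * (-9 + 5 * sqrt(1 - 2 * c) + 13 * c)

c0 = 72 / 169                        # below this threshold the first-best is implementable
assert isclose(lam(c0), 0.0, abs_tol=1e-12)
assert isclose(slope(c0), 6 / 13)    # rule (20) collapses to the first-best slope
assert isclose(dm_payoff(c0), -5 / 13)

# Above the threshold the rule is strictly steeper than the first-best:
assert slope(0.49) > 6 / 13

# Two-agent payoff meets the single-agent payoff -(1 - rho_bar**2) = -5/9 at c1:
c1 = 4 * (142 + 15 * sqrt(10)) / 1521
assert isclose(dm_payoff(c1), -5 / 9)
```

The assertions encode exactly the consistency checks described in the text.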

4 Extensions

We extend the single-agent case to allow for both continuous effort and heterogeneous preferences. We also comment on two of our modeling assumptions, and discuss ways for organizations to implement our decision-rule.

4.1 The Continuous Effort Case

Let u(a, ω) = −(a − ω)². Suppose A exerts effort e ∈ E ≡ [0, ē] and incurs a cost C(e), with C_e(e) > 0 and C_ee(e) > 0. To avoid corner solutions, let C_e(0) = 0 and C_e(ē) = +∞. Also, suppose A's effort does not affect the marginal distribution G(x) but only the posterior beliefs F_W(ω | x, e), where F_W(ω | x, e) is twice differentiable with respect to e. Moreover, for any e′ > e, we assume:

(a(x, e′) − a(x, e))(x − x̂) > 0,  ∀x ≠ x̂.    (21)

Finally, for simplicity, suppose (i) ω₀ = 0 and (ii) a(x̂, e) = 0, ∀e. DM chooses a(x) and e ∈ [0, ē] to maximize her expected payoff:

−∫_X ∫_Ω (ω − a(x))² f(ω | x, e) dω dG(x),    (22)

subject to:

−∫_X ∫_Ω (ω − a(x))² f(ω | x, e) dω dG(x) − C(e) ≥    (23)

³⁷In the single-agent case, the effort constraint binds if and only if c > 8/9.


−∫_X ∫_Ω (ω − a(x))² f(ω | x, e′) dω dG(x) − C(e′),

for all e′ ∈ [0, ē]. We replace the global constraint (23) by the local constraint:³⁸

−∫_X ∫_Ω (ω − a(x))² f_e(ω | x, e) dω dG(x) = C_e(e),    (24)

which is a necessary condition for an interior effort e to constitute an optimum.

We first characterize the solution to the relaxed problem, and then derive sufficient conditions for this solution to also satisfy (23). Let the multiplier of (24) be λ. Pointwise maximization with respect to a(x) yields:

a(x) = ∫_Ω ω f(ω | x, e) dω + λ ∫_Ω ω f_e(ω | x, e) dω,  ∀x,    (25)

and the first-order condition associated with e is:

−∫_X ∫_Ω (ω − a(x))² f_e(ω | x, e) dω dG(x)    (26)
  + λ (−∫_X ∫_Ω (ω − a(x))² f_ee(ω | x, e) dω dG(x) − C_ee(e)) = 0.

From (25), because λ ≥ 0, a(x) is positive if x > x̂, equal to zero if x = x̂, and negative if x < x̂. Provided λ > 0, a(x) is a steeper rotation of a(x, e) around x̂. We now show λ > 0. A's expected payoff under a(x) and effort e is:

−Var_W[ω] − ∫_X a(x)² dG(x) + 2 ∫_X a(x) ∫_Ω ω f_W(ω | x, e) dω dG(x) − C(e).    (27)

Differentiating (27) twice with respect to e yields:

2 ∫_X a(x) E_ee[ω | x, e] dG(x) − C_ee(e).    (28)

³⁸See, for instance, Jewitt [1988] and Szalay [2009] on the First-Order Approach.


It is sufficient that (i) E_ee[ω | x, e] < 0 if x > x̂ and (ii) E_ee[ω | x, e] > 0 if x < x̂ for (28) to be negative; that is, for A's expected payoff to be concave. Suppose this condition holds. Finally, note that the first term in equation (26) is positive, by virtue of condition (24) and the fact that C_e(e) > 0. Because the expression within brackets in equation (26) is negative, λ > 0 necessarily.

Because A's expected payoff is concave for the solution of the relaxed problem, (24) is sufficient to characterize the incentive constraints. The first-order conditions of the original problem are the same as those of the relaxed problem, and a*(x) and e* are characterized by equations (24), (25), and (26).

Proposition 5 The solution to the relaxed problem is the solution to the general problem if ∫_Ω ω f_ee(ω | x, e) dω < 0 (> 0) ∀x > x̂ (∀x < x̂), in which case the optimal decision rule, for a given e*, is such that:

1. a*(x) < a(x, e*) if a(x, e*) < a₀,

2. a*(x) = a(x, e*) if a(x, e*) = a₀, and

3. a*(x) > a(x, e*) if a(x, e*) > a₀.

When effort is continuous, the decision-maker finds it optimal to make the

final decision excessively sensitive to the information provided by the agent.
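To illustrate the rotation result, suppose (as an illustrative assumption, not part of the model) that the posterior mean is linear in the signal, E[ω | x, e] = ρ(e)x with ρ increasing in effort. Equation (25) then reads a(x) = (ρ(e) + λρ_e(e))x, a steeper rotation of a(x, e) = ρ(e)x around x̂ = 0 whenever λ > 0:

```python
import numpy as np

# Illustrative (assumed) posterior-mean weight and its derivative in effort:
rho   = lambda e: 0.9 * (1.0 - np.exp(-e))   # increasing and concave in e
rho_e = lambda e: 0.9 * np.exp(-e)           # rho'(e) > 0

e_star, lam = 1.0, 0.4                       # an interior effort and a positive multiplier
x = np.linspace(-2.0, 2.0, 9)

a_expost  = rho(e_star) * x                            # ex post optimal rule a(x, e*)
a_distort = (rho(e_star) + lam * rho_e(e_star)) * x    # rule (25)

# a_distort is a steeper rotation of a_expost around x-hat = 0:
assert np.all(np.sign(a_distort) == np.sign(a_expost))
assert np.all(np.abs(a_distort) >= np.abs(a_expost))
assert (rho(e_star) + lam * rho_e(e_star)) > rho(e_star)
```

The distorted rule agrees with the ex post rule in sign everywhere and is uniformly further from zero, which is exactly the "steeper rotation around x̂" described above.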

4.2 Heterogeneous Preferences

Take the framework of Section 3.1.3 and let DM's payoff be u(a, ω, η, κ) = −(ηω + κ − a)², with κ > 0. Not only does DM have a different sensitivity to the underlying state of nature relative to A, she is also upward biased.


Proposition 6 Suppose c ≤ 2ηρ̄(ρ̄ − ρ̲). Implementing a*(x) = ηρ̄x + κ is optimal. Suppose instead c > 2ηρ̄(ρ̄ − ρ̲). The optimal decision rule is then:

a*(x) = (c/(2(ρ̄ − ρ̲)))x + κ,

where c/(2(ρ̄ − ρ̲)) > ηρ̄.

Compared to the solution in Section 3.1.3, the second-best decision rule when c is sufficiently high is shifted upwards by an amount equal to κ.
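A quick numerical check of the algebra behind Proposition 6 (the parameter values are arbitrary illustrations; any η, κ > 0, ρ̄ > ρ̲, and c above the threshold behave the same way):

```python
from math import isclose

eta, kappa = 1.5, 0.3
rho_bar, rho_low = 0.8, 0.2    # stand in for the accurate/inaccurate correlations
c = 2.0                        # exceeds the threshold 2*eta*rho_bar*(rho_bar - rho_low) = 1.44

# Multiplier from A's binding effort constraint:
lam = (c - 2 * rho_bar * eta * (rho_bar - rho_low)) / (2 * (rho_bar - rho_low) ** 2)
assert lam > 0

# First-order condition a(x) = eta*rho_bar*x + kappa + lam*(rho_bar - rho_low)*x
# collapses to the slope stated in Proposition 6:
slope = eta * rho_bar + lam * (rho_bar - rho_low)
assert isclose(slope, c / (2 * (rho_bar - rho_low)))
assert slope > eta * rho_bar   # steeper than the unconstrained rule
```

The intercept κ is unaffected by the constraint, which is why the second-best rule is simply shifted upwards by κ relative to the unbiased case.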

4.3 Monetary Transfers

Our environment was one of non-transferable utility. This begs the ques-

tion of whether the main insights are preserved if transfers contingent on the

provided information are feasible. Observe that if the agents’ payoffs do not

depend on the action a, and if transfers contingent on x are possible, our anal-

ysis becomes a standard moral hazard problem. Unlike the one-dimensional

decision a in our setting, in a such a hypothetical environment, DM could

offer individual schedules of transfers. DM would tend to make high pay-

ments when observing realizations with a low likelihood ratio LR(x), and low

payments when observing realizations with a high likelihood ratio. Also, DM

would make each agent’s transfers contingent on both agents’ signal realiza-

tions to exploit the underlying common uncertainty (see Holmstrom [1982]).

However, if monetary transfers are possible and the agents are also affected by a (and protected by limited liability), DM should use both channels (money and distortions to the decision-rule) to induce the acquisition of accurate information. Making her final decision exceedingly sensitive to the provided


information would allow DM to reduce expected payments.39

4.4 Participation Constraints

Our analysis assumed the agents’ participation was guaranteed. This is a

natural assumption when agents cannot escape the consequences of the deci-

sion. However, extending the model to allow DM to make flat payments would,

on the one hand, not contradict the view that it is difficult in practice to design

a contract specifying transfers as a function of the provided information and,

on the other, allow the former to ensure the agents’ participation. Because

a flat payment does not impact the agents’ incentives to acquire information,

the main features of our decision-rules would remain unchanged.40

4.5 Organizational Decision-Making

It is common for organizations to commit to ex post inefficient rules to foster information acquisition by their members (e.g., through the delegation of decision rights or the choice of admissible actions, default-actions, and standards of proof).⁴¹ Organizations can implement (or approximate) the decision-rule characterized in this paper in several ways. One approach is for organizations to put excessive weight on the information produced by their own staff to force the latter to internalize the importance of producing accurate information. Choosing a weight to put on certain types of information is routinely

³⁹See Aghion and Tirole [1997] on related matters.

⁴⁰Our results are qualitatively unaffected when payments are infeasible and the decision-rule must also guarantee agents' participation (as long as a solution continues to exist). A proof of this robustness check for the single-agent case is available upon request.

⁴¹See for instance Aghion and Tirole [1997] on delegation, Szalay [2005] on the choice of admissible actions, and Li [2001] on the choice of standards of proof.


done by administrations which must commit to transparent and predictable decision-making procedures (e.g., US Sentencing Guidelines).⁴²,⁴³ Organizations can also affect the pros and cons of the possible decisions they face by manipulating enactment costs. In our setting, it is optimal to raise the enactment costs associated with the status-quo, and to lower those associated with actions different from the status-quo.⁴⁴ An alternative approach consists of dismissing or garbling already available information (e.g., reputation, past history, etc.). By restricting the role played by the prior, the newly produced information becomes more pivotal. For instance, one rationale often put forth to justify the non-admissibility of certain relevant pieces of evidence in the inquisitorial system of law (e.g., criminal records) is that it gives incentives to the prosecutor to find better information.⁴⁵,⁴⁶ More generally, organizational sociologists have documented that decision-makers within firms ignore relevant information or have poor "organizational memory" (see, for instance, Feldman and March [1981] and Walsh and Ungson [1991]). Our paper provides one possible efficiency rationale: by disregarding available information, firms

⁴²The Sentencing Guidelines are rules that establish a uniform sentencing policy for defendants (individuals or organizations) convicted in the federal court system. The guidelines operate as a formula that maps facts (nature of conduct, harm, etc.) into a fine or jail sentence. For antitrust violations, for instance, the harm to consumers (often estimated by the staff of the Department of Justice) is taken into account. Conceivably, the weight put on such estimates of harm affects the motivation of the staff in charge of its production.

⁴³This logic can rationalize the often-criticized tendency by administrations (or, more generally, by decision-makers) to over-rely on (or be over-confident about) their expertise.

⁴⁴See Stephenson [2011] on enactment costs.

⁴⁵See Lester et al. [2012] on a related matter.

⁴⁶Similarly, an administration such as the US Food and Drug Administration can commit to a policy of ignoring other jurisdictions' publicly observable administrative decisions (say, from Europe or Canada) when reaching their own decisions. Among many other consequences, such a policy mechanically makes the information provided by the FDA's staff more pivotal. A similar reasoning holds for the FDA's policy of not taking into account pharmaceutical companies' track records when determining whether to market a new drug.


motivate employees by making their (better) information pivotal. Finally, organizations can delegate decision-rights to less experienced managers (with a more diffuse prior) who over-react to the information provided by their more experienced subordinates.⁴⁷

5 Concluding Remarks

This paper has investigated a situation in which a decision-maker exploits her agents' desire to influence the outcome to motivate information acquisition. The main insight that emerges from the single-agent analysis is that the decision-maker should make her final decision excessively sensitive to the provided information. This policy motivates information acquisition because the implemented action is then a very poor match with what the agent desires when he produces low-quality information. A direct consequence of this strategy is that actions near the status-quo become less likely to be implemented. Whether this insight generalizes to more complex organizations that comprise two agents was unclear. With multiple agents, what each agent desires when unilaterally producing low-quality information may be drastically different, so that distorting the final decision away from the status-quo may actually reward one agent for shirking. Despite this difficulty, we showed it is again optimal to make the final decision exceedingly sensitive to the jointly provided information, and to make it hard to implement actions near the status-quo.

⁴⁷Detailed computations regarding these statements are available upon request.


References

Philippe Aghion and Jean Tirole. Formal and Real Authority in Organizations. Journal of Political Economy, 105(1):1–29, 1997.

Ricardo Alonso and Niko Matouschek. Optimal Delegation. The Review of Economic Studies, 75(1):259–293, 2008.

Rossella Argenziano, Sergei Severinov, and Francesco Squintani. Strategic Information Acquisition and Transmission. American Economic Journal: Microeconomics, 8(3):119–155, 2016.

Susan Athey and Jonathan Levin. The Value of Information in Monotone Decision Problems. Mimeo, Stanford, 2001.

David Austen-Smith. Strategic Transmission of Costly Information. Econometrica, 62(4):955–963, 1994.

Marco Battaglini. Policy Advice with Imperfectly Informed Experts. The B.E. Journal of Theoretical Economics, 4(1):1–34, 2004.

Sourav Bhattacharya and Arijit Mukherjee. Strategic Information Revelation when Experts Compete to Influence. Rand Journal of Economics, 2013.

Hongbin Cai. Costly Participation and Heterogeneous Preferences in Informational Committees. Rand Journal of Economics, 40(1):173–189, 2009.

Yeon-Koo Che and Navin Kartik. Opinions as Incentives. Journal of Political Economy, 117(5):815–860, 2009.

Jacques Cremer, Fahad Khalil, and Jean-Charles Rochet. Contracts and Productive Information Gathering. Games and Economic Behavior, 25(2):174–193, 1998.

Inga Deimen and Dezso Szalay. Information, Authority, and Smooth Communication in Organizations. Working Paper, University of Bonn, 2016.

Joel S. Demski and David E.M. Sappington. Delegated Expertise. Journal of Accounting Research, 25(1):68–89, 1987.

Mathias Dewatripont and Jean Tirole. Advocates. Journal of Political Economy, 107(1):1–39, 1999.

Mathias Dewatripont, Ian Jewitt, and Jean Tirole. The Economics of Career Concerns, Part I: Comparing Information Structures. The Review of Economic Studies, 66(1):183–198, 1999.

Martha S. Feldman and James G. March. Information in Organizations as Signal and Symbol. Administrative Science Quarterly, 26(2):171–186, 1981.

Matthew Gentzkow and Emir Kamenica. Competition in Persuasion. Review of Economic Studies, 2016.

Dino Gerardi and Leeat Yariv. Information Acquisition in Committees. Games and Economic Behavior, 62(2):436–459, 2008a.

Dino Gerardi and Leeat Yariv. Costly Expertise. American Economic Review: Papers and Proceedings, 98(2):187–193, 2008b.

Alex Gershkov and Balazs Szentes. Optimal Voting Schemes with Costly Information Acquisition. Journal of Economic Theory, 144(1):36–68, 2009.

Thomas Gilligan and Keith Krehbiel. Collective Decision-Making and Standing Committees: An Informational Rationale for Restrictive Amendment Procedures. Journal of Law, Economics and Organization, 3(2):345–350, 1987.

Faruk Gul and Wolfgang Pesendorfer. The War of Information. The Review of Economic Studies, 79(2):707–734, 2012.

Bengt Holmstrom. Moral Hazard and Observability. The Bell Journal of Economics, 10(1):74–91, 1979.

Bengt Holmstrom. Moral Hazard in Teams. The Bell Journal of Economics, 13(2):324–340, 1982.

Bengt Holmstrom. Managerial Incentive Problems: A Dynamic Perspective. The Review of Economic Studies, 66(1):169–182, 1999.

Ian Jewitt. Justifying the First-Order Approach to Principal-Agent Problems. Econometrica, 56(5):1177–1190, 1988.

Justin P. Johnson and David P. Myatt. On the Simple Economics of Advertising, Marketing, and Product Design. American Economic Review, 96(3):756–784, 2006.

Vijay Krishna and John Morgan. A Model of Expertise. The Quarterly Journal of Economics, 116(2):747–775, 2001.

Vijay Krishna and John Morgan. Contracting for Information under Imperfect Commitment. The Rand Journal of Economics, 39(4):905–925, 2008.

Benjamin Lester, Nicola Persico, and Ludo Visschers. Information Acquisition and the Exclusion of Evidence in Trials. Journal of Law, Economics, and Organization, 28(1):163–182, 2012.

Hao Li. A Theory of Conservatism. Journal of Political Economy, 109(3):617–636, 2001.

Nahum D. Melumad and Toshiyuki Shibano. Communication in Settings with no Transfers. The RAND Journal of Economics, 22(2):173–198, 1991.

James A. Mirrlees. The Theory of Moral Hazard and Unobservable Behaviour: Part I. The Review of Economic Studies, 66(1):3–21, 1999.

Nicola Persico. Information Acquisition in Auctions. Econometrica, 68(1):135–148, 2000.

Nicola Persico. Committee Design with Endogenous Information. The Review of Economic Studies, 71(1):165–191, 2004.

Canice Prendergast. A Theory of "Yes Men". The American Economic Review, 83(4):757–770, 1993.

Canice Prendergast. The Motivation and Bias of Bureaucrats. The American Economic Review, 97(1):180–196, 2007.

David S. Scharfstein and Jeremy C. Stein. Herd Behavior and Investment. The American Economic Review, 80(3):465–479, 1990.

Xianwen Shi. Optimal Auctions with Information Acquisition. Games and Economic Behavior, 74(2):666–686, 2012.

Matthew Stephenson. Information Acquisition and Institutional Design. Harvard Law Review, 124:1422–1483, 2011.

Dezso Szalay. The Economics of Clear Advice and Extreme Options. The Review of Economic Studies, 72(4):1173–1198, 2005.

Dezso Szalay. Contracts with Endogenous Information. Games and Economic Behavior, 65(2):586–625, 2009.

James P. Walsh and Gerardo Rivera Ungson. Organizational Memory. The Academy of Management Review, 16(1):51–91, 1991.


Appendix

Proof of Proposition 1 and Corollary 1

Let Ū(a, x) ≡ ∫_Ω u(a, ω) dF_{W|X̄}(ω | x) and U̲(a, x) ≡ ∫_Ω u(a, ω) dF_{W|X̲}(ω | x). We ignore the constraints a(x) ≥ a_l(x) and a(x) ≤ a_h(x), and identify ex post when they hold. From (14), it follows

∂L(x, a)/∂a = (1 + λ) g_X̄(x) ∂Ū(a, x)/∂a − λ g_X̲(x) ∂U̲(a, x)/∂a ≡ I(a) − II(a).

Because ∂²u(a, ω)/∂a² = −2, Sign[∂²L(x, a)/∂a²] = Sign[λ g_X̲(x) − (1 + λ) g_X̄(x)]. Therefore, L(x, a) is concave if λ̂ ≡ (1 + λ)/λ > LR(x) ≡ g_X̲(x)/g_X̄(x), and otherwise convex.

Case 1: x ∈ K⁺. We first show a* ≥ a̲. Suppose a_l ≤ a* < a̲. This cannot be optimal because, for instance, L(x, 2a̲ − a*) > L(x, a*), due to (i) Ū(2a̲ − a*, x) > Ū(a*, x) and (ii) U̲(2a̲ − a*, x) = U̲(a*, x). Suppose now a* ∈ [a̲, ā). The terms I and −II are strictly positive for all a ∈ [a̲, ā), so that a* ≥ ā. To see that a* > ā, note that, on the one hand, I(ā) = 0, while, on the other, −II(ā) > 0, implying ∂L(x, a)/∂a > 0 when a = ā. Moreover, if λ̂ > LR(x), L(x, a) is strictly concave, so that a* is equal to the minimum between L(x, a)'s global maximizer on ℝ and a_h. If λ̂ ≤ LR(x), L(x, a) is convex and a* = a_h.

Case 2: x ∈ K⁻. The proof proceeds through identical steps and is left out.

Case 3: x ∈ K. Because ā = a̲, I(ā) = II(ā) = 0. If λ̂ > LR(x), L(x, a) is strictly concave and thus a* = ā = a̲. If λ̂ < LR(x), L(x, a) is strictly convex, a = ā is L(x, a)'s global minimizer, and a* is equal to either a_l or a_h. Finally, if λ̂ = LR(x), L(x, a) is flat and a* ∈ [a_l, a_h].
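As a sanity check on this concavity criterion, one can compare the sign of the quadratic coefficient of L(x, a) with the likelihood-ratio condition directly (the density values below are made up for illustration):

```python
def quad_coef(g_bar, g_low, lam):
    # Coefficient of a**2 in L(x, a) when u = -(a - w)**2:
    # -((1 + lam) * g_bar - lam * g_low)
    return -((1 + lam) * g_bar - lam * g_low)

def concave_by_LR(g_bar, g_low, lam):
    # The text's criterion: concave iff (1 + lam)/lam > LR(x) = g_low/g_bar
    return (1 + lam) / lam > g_low / g_bar

# Two illustrative parameter sets, one on each side of the criterion:
for g_bar, g_low, lam in [(0.30, 0.40, 2.0), (0.30, 0.50, 2.0)]:
    assert (quad_coef(g_bar, g_low, lam) < 0) == concave_by_LR(g_bar, g_low, lam)
```

The first parameter set gives a concave L (λ̂ = 1.5 exceeds LR = 4/3), the second a convex one (LR = 5/3), matching the sign of the quadratic coefficient in both cases.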

Corollary 1. Suppose x ∈ K⁺. If λ̂ > LR(x), L(x, a) is concave and a* is equal to the minimum between the value of a that solves I − II = 0 and a_h. Suppose a* is given by −I(a*) = −II(a*). Divide both sides by g_X̄(x) and, with a slight abuse of notation, keep the labels I(a) and II(a) for the normalized terms. Interpret −I(a) as the marginal cost of increasing a starting at a = ā, and −II(a) as the marginal benefit. −I(a) and −II(a) are increasing in a for a ∈ [ā, a_h]. Moreover, (i) I(a) is independent of LR(x) and |ā − a̲| on [ā, a_h], whereas (ii) −II(a) is higher on [ā, a_h] the higher LR(x) and |ā − a̲| are. The greater LR(x) is, the higher the function −II is on [ā, a_h], and thus the weakly higher a* is. Similarly, the greater |ā − a̲| is, the higher the function −II is on [ā, a_h], so that the weakly higher a* is. Note that if a* = a_h, then, all else equal, increases in LR(x) and |ā − a̲| do not affect a*. Finally, if λ̂ ≤ LR(x), L(x, a) is (weakly) convex and thus a* = a_h independently of LR(x) and |ā − a̲|. The proof for x ∈ K⁻ is symmetric and thus left out.

Proofs of Propositions 2 and 6

Suppose DM has payoff function −(ηω + κ − a)² and A has gross payoff function −(ω − a)². DM chooses a(x) to maximize pointwise

∫_X L(x, a) dx = ∫_{−∞}^{∞} (−∫_{−∞}^{∞} (ηω + κ − a(x))² dF_{W|X̄}(ω | x) g_X̄(x)    (29)
  + λ (−∫_{−∞}^{∞} (ω − a(x))² dF_{W|X̄}(ω | x) g_X̄(x) − c + ∫_{−∞}^{∞} (ω − a(x))² dF_{W|X̲}(ω | x) g_X̲(x))) dx,

which yields, ∀x, the first-order condition a(x) = ηρ̄x + κ + λ(ρ̄ − ρ̲)x. Substituting a(x) into A's effort constraint and rearranging yields λ = (c − 2ρ̄η(ρ̄ − ρ̲))/(2(ρ̄ − ρ̲)²). A's effort constraint binds if and only if c > 2ρ̄η(ρ̄ − ρ̲). Substituting λ into a(x) and rearranging concludes (with η = 1 and κ = 0 for Proposition 2).

Proof of Proposition 3

Let Ū(a, x) ≡ ∫_Ω u(a, ω) dF_{X̄_i,X̄_j}(ω | x) and U̲(a, x) ≡ ∫_Ω u(a, ω) dF_{X̲_i,X̄_j}(ω | x). DM chooses a(x) to maximize pointwise

∫_{X²} L(x, a) dx = ∫_{X²} (g_{X̄_i}(x_i) g_{X̄_j}(x_j) Ū(a, x)    (30)
  + λ_i (g_{X̄_i}(x_i) g_{X̄_j}(x_j) Ū(a, x) − c − g_{X̲_i}(x_i) g_{X̄_j}(x_j) U̲(a, x))
  + g_{X̄_i}(x_i) g_{X̄_j}(x_j) (γ_l(x)(a(x) − a_l(x)) + γ_h(x)(a_h(x) − a(x)))) dx.

For any given x_j, the proof is identical to the proof of Proposition 1, replacing K with K(x_j), K⁺ with K⁺(x_j), and K⁻ with K⁻(x_j).

Proof of Proposition 4

DM chooses a(x) ∈ [a_l(x), a_h(x)] to maximize L(x, a), ∀x.⁴⁸

∂L(x, a)/∂a = g_X̄(x₁) g_X̄(x₂) (1 + λ₁ + λ₂) ∫_Ω ∂u(a, ω)/∂a dF_{W|X̄,X̄}(ω | x)
  − g_X̲(x₁) g_X̄(x₂) λ₁ ∫_Ω ∂u(a, ω)/∂a dF_{W|X̲,X̄}(ω | x)
  − g_X̄(x₁) g_X̲(x₂) λ₂ ∫_Ω ∂u(a, ω)/∂a dF_{W|X̄,X̲}(ω | x).

Denote the first term I(a) and the second and third terms (without their minus signs) II(a) and III(a), so that ∂L(x, a)/∂a = I(a) − II(a) − III(a).

⁴⁸For the sake of brevity, we ignore a_l(x) ≤ a(x) and a(x) ≤ a_h(x) and identify ex post when these constraints hold.

Since ∂²u(a, ω)/∂a² = −2,

Sign[∂²L(x, a)/∂a²] = Sign[λ₁LR(x₁) + λ₂LR(x₂) − (1 + λ₁ + λ₂)].

L(·) is concave if λ₁LR(x₁) + λ₂LR(x₂) < 1 + λ₁ + λ₂, and otherwise convex. Therefore, the value of a that solves ∂L(x, a)/∂a = 0 is L(·)'s maximizer if (i) it belongs to [a_l(x), a_h(x)] and (ii) L(·) is concave. Otherwise, L(·)'s maximizer is equal to either a_l(x) or a_h(x). In what follows, we exploit (i) whether L(·) is concave or convex, (ii) Sign[I(a_{X̄,X̄}(x)) − II(a_{X̄,X̄}(x)) − III(a_{X̄,X̄}(x))], and (iii) the properties of g_X̄(x₁) g_X̄(x₂) (1 + λ₁ + λ₂) ∫_Ω u(a, ω) dF_{W|X̄,X̄}(ω | x) and

Û(a, x) := g_X̲(x₁) g_X̄(x₂) λ₁ ∫_Ω u(a, ω) dF_{W|X̲,X̄}(ω | x) + g_X̄(x₁) g_X̲(x₂) λ₂ ∫_Ω u(a, ω) dF_{W|X̄,X̲}(ω | x),

where ∂Û(·)/∂a = II(a) + III(a). Let â(x) := argmax_{a∈ℝ} Û(a, x).

Lemma A.1

1. The function Û(a, x) is strictly concave, satisfies Û(â + ε, x) = Û(â − ε, x), ∀ε, and is such that (i) â ∈ [min[½(a_{X̲_i,X̄_j} + a_{X̄_i,X̲_j}), a_{X̲_i,X̄_j}], max[½(a_{X̲_i,X̄_j} + a_{X̄_i,X̲_j}), a_{X̲_i,X̄_j}]] if LR(x_i)λ_i > LR(x_j)λ_j, and (ii) â = ½(a_{X̲_i,X̄_j} + a_{X̄_i,X̲_j}) if LR(x_i)λ_i = LR(x_j)λ_j.

2. Moreover, ∂|â − a_{X̲_i,X̄_j}|/∂λ_i < 0 and ∂|â − a_{X̲_i,X̄_j}|/∂LR(x_i) < 0.


Proof. The concavity and symmetry of Û(a, x) follow from (1). Also, given that ∫_Ω u(a, ω) dF_{W|X̲,X̄}(ω | x) is a translation of ∫_Ω u(a, ω) dF_{W|X̄,X̲}(ω | x), â = ½(a_{X̲_i,X̄_j} + a_{X̄_i,X̲_j}) if LR(x_i)λ_i = LR(x_j)λ_j. The rest of Part (i) and Part (ii) follow from the observation that λ_i and LR(x_i) are the weights put on ∫_Ω u(a, ω) dF_{W|X̲_i,X̄_j}(ω | x_i, x_j) in Û(a, x), for i = 1, 2.
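Because u is quadratic, Û(a, x) is a weighted sum of two concave parabolas centered at the two deviation actions, and its maximizer is their weighted average. The sketch below illustrates both parts of the lemma, with made-up centers m1, m2 and weights w1, w2 standing in for λ_iLR(x_i) and λ_jLR(x_j):

```python
def a_hat(m1, m2, w1, w2):
    # argmax over a of -w1*(a - m1)**2 - w2*(a - m2)**2
    return (w1 * m1 + w2 * m2) / (w1 + w2)

m1, m2 = 1.0, 3.0          # the two deviation actions (illustrative values)
mid = 0.5 * (m1 + m2)

# Part (i): with the heavier weight on m1, a_hat lies between the midpoint and m1
a1 = a_hat(m1, m2, w1=2.0, w2=1.0)
assert min(m1, mid) <= a1 <= max(m1, mid)

# Part (ii): raising the weight on m1 moves a_hat closer to m1
a2 = a_hat(m1, m2, w1=4.0, w2=1.0)
assert abs(a2 - m1) < abs(a1 - m1)

# Equal weights put a_hat exactly at the midpoint
assert a_hat(m1, m2, 1.0, 1.0) == mid
```

This is the geometry the case analysis below exploits: â is anchored between the midpoint of the two deviation actions and whichever of them carries the larger weight.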

Let i, j = 1, 2 with i ≠ j. We consider all four possible rankings of a_{X̄_i,X̄_j}, a_{X̲_i,X̄_j}, and a_{X̄_i,X̲_j} in turn.

Case 1: x ∈ K⁺⁺ = {x ∈ X × X | a_{X̄_i,X̄_j} ≥ a_{X̲_i,X̄_j} & a_{X̄_i,X̄_j} > a_{X̄_i,X̲_j}}

Here, â ≤ a_{X̄_i,X̄_j}. Setting a_l ≤ a < â is not optimal because L(x, 2â − a) > L(x, a).⁴⁹ Similarly, setting a ∈ [â, a_{X̄_i,X̄_j}] is not optimal because ∂L(x, a)/∂a > 0 throughout the interval. Finally, a* > a_{X̄_i,X̄_j} is strictly optimal because I(a_{X̄_i,X̄_j}) − II(a_{X̄_i,X̄_j}) − III(a_{X̄_i,X̄_j}) > 0. Also, x ∈ K⁺⁺ ⇒ a_{X̄,X̄} > a₀ because x_i ≥ x̂_i(x_j) and x_j > x̂_j(x_i) imply x_i + x_j > 2ω₀, which implies a_{X̄_i,X̄_j} > a₀.

Case 2: x ∈ K⁻⁻ = {x ∈ X × X | a_{X̄_i,X̄_j} ≤ a_{X̲_i,X̄_j} & a_{X̄_i,X̄_j} < a_{X̄_i,X̲_j}}

The proof of Case 2 is symmetric to the proof of Case 1, and thus omitted. When x ∈ K⁻⁻, a* < a_{X̄_i,X̄_j}. Also, x ∈ K⁻⁻ ⇒ a_{X̄_i,X̄_j} < a₀.

Case 3: x ∈ K = {x ∈ X × X | a_{X̄_i,X̄_j} = a_{X̲_i,X̄_j} & a_{X̄_i,X̄_j} = a_{X̄_i,X̲_j}}

Here, â = a_{X̄_i,X̄_j}, and a_{X̄_i,X̄_j} is L(·)'s global maximizer if λ_iLR(x_i) + λ_jLR(x_j) < 1 + λ_i + λ_j. Otherwise, L(·) is convex and a* is equal to either a_l or a_h. Moreover, a_{X̄_i,X̄_j} = a_{X̲_i,X̄_j} = a_{X̄_i,X̲_j} if and only if x_i = x_j = ω₀, implying x ∈ K ⇒ a_{X̄_i,X̄_j} = a₀ (see (7)-(8)).

⁴⁹Moreover, 2â − a < a_h because a_l = a_{X̄_i,X̄_j} − q ≤ a < â ≤ a_{X̄_i,X̄_j} < a_h = a_{X̄_i,X̄_j} + q.


Case 4: x ∈ K⁺⁻ = {x ∈ X × X | a_{X̄_i,X̲_j} > a_{X̄_i,X̄_j} > a_{X̲_i,X̄_j}}

a) |x_j − ω₀| < |x_i − ω₀|

We show a₀ < a_{X̄_i,X̄_j} < a*. First, we establish that |x_i − ω₀| > |x_j − ω₀| implies a_{X̄_i,X̄_j} − a_{X̲_i,X̄_j} > a_{X̄_i,X̲_j} − a_{X̄_i,X̄_j}. Using (7)-(8), one computes that the necessary and sufficient condition for inequality a_{X̄_i,X̄_j} − a_{X̲_i,X̄_j} > a_{X̄_i,X̲_j} − a_{X̄_i,X̄_j} to hold is x_i + x_j > 2ω₀. This strict inequality always holds because either (i) x_i > ω₀ > x_j (with |x_i − ω₀| = x_i − ω₀ > |x_j − ω₀| = ω₀ − x_j) or (ii) x_i > x_j > ω₀. Indeed, any other ranking of (x_i, x_j, ω₀) violates either x ∈ K⁺⁻ and/or |x_i − ω₀| > |x_j − ω₀|.

To continue, suppose λ_i ≥ λ_j. Also, recall that |x_i − ω₀| > |x_j − ω₀| implies LR(x_i) ≥ LR(x_j). Then, from Lemma A.1, we know â ∈ [a_{X̲_i,X̄_j}, ½(a_{X̲_i,X̄_j} + a_{X̄_i,X̲_j})]. Further, a_{X̄_i,X̄_j} − a_{X̲_i,X̄_j} > a_{X̄_i,X̲_j} − a_{X̄_i,X̄_j} implies ½(a_{X̲_i,X̄_j} + a_{X̄_i,X̲_j}) < a_{X̄_i,X̄_j}. It follows â < a_{X̄_i,X̄_j}. Because (i) Û(a, x) is symmetric and (ii) â < a_{X̄_i,X̄_j}, a_l ≤ a < â cannot be optimal. Indeed, L(x, a) < L(x, 2â − a).⁵⁰ Moreover, a ∈ [â, a_{X̄_i,X̄_j}) cannot be optimal either, because ∂L(x, a)/∂a > 0 throughout the interval. Finally, I(a_{X̄_i,X̄_j}) = 0 and II(a_{X̄_i,X̄_j}) + III(a_{X̄_i,X̄_j}) < 0, so that ∂L(x, a_{X̄_i,X̄_j})/∂a > 0. Therefore, when λ_i ≥ λ_j, a* > a_{X̄_i,X̄_j}. Finally, note that a_{X̄_i,X̄_j} > a₀ if and only if x_i + x_j > 2ω₀, which we have shown to hold. Thus, when λ_i ≥ λ_j, a* > a_{X̄_i,X̄_j} > a₀.

Suppose now λ_i < λ_j, and recall ∂|â − a_{X̄_i,X̲_j}|/∂λ_j < 0. Because a_{X̄_i,X̄_j} < a_{X̄_i,X̲_j}, there exists a threshold, denoted λ̂_j(x_i, x_j), such that â > a_{X̄_i,X̄_j} if and only if λ_j > λ̂_j(x_i, x_j).⁵¹ If λ_i < λ_j < λ̂_j(x_i, x_j), â < a_{X̄_i,X̄_j} and the proof is identical to that when λ_i ≥ λ_j. If λ_j = λ̂_j(x_i, x_j), then â = a_{X̄_i,X̄_j} and a* is

⁵⁰Again, 2â − a < a_h because a_l = a_{X̄_i,X̄_j} − q ≤ a < â ≤ a_{X̄_i,X̄_j} < a_h = a_{X̄_i,X̄_j} + q.

⁵¹Notice λ̂_j(x_i, x_j) > λ_i because |x_i − ω₀| > |x_j − ω₀|.


equal to either a_{X̄_i,X̄_j}, if λ_iLR(x_i) + λ_jLR(x_j) < 1 + λ_i + λ_j, or otherwise either a_l or a_h. Finally, if λ_j > λ̂_j(x_i, x_j), then â > a_{X̄_i,X̄_j} and the proof follows the same steps as those in the previous paragraph, with however a* < a_{X̄_i,X̄_j}.

b) |x_i − ω₀| = |x_j − ω₀|

Here, x_i > ω₀ > x_j. Any other ranking violates x ∈ K⁺⁻ or |x_i − ω₀| = |x_j − ω₀|. Thus, x_i + x_j = 2ω₀ and a_{X̄_i,X̄_j} − a_{X̲_i,X̄_j} = a_{X̄_i,X̲_j} − a_{X̄_i,X̄_j}, thereby implying a_{X̄_i,X̄_j} = ½(a_{X̲_i,X̄_j} + a_{X̄_i,X̲_j}). Also, |x_i − ω₀| = |x_j − ω₀| implies LR(x_i) = LR(x_j). Therefore, â ∈ [min[a_{X̲_i,X̄_j}, a_{X̄_i,X̄_j}], max[a_{X̲_i,X̄_j}, a_{X̄_i,X̄_j}]] if λ_i > λ_j, and â = a_{X̄_i,X̄_j} if λ_i = λ_j. Here, a_{X̄_i,X̄_j} = a₀ because x_i + x_j = 2ω₀. Thus, a* > a_{X̄_i,X̄_j} = a₀ if λ_i > λ_j and a* < a_{X̄_i,X̄_j} = a₀ if λ_j > λ_i. If λ_i = λ_j, then a* = a₀ if λ_iLR(x_i) + λ_jLR(x_j) < 1 + λ_i + λ_j, and otherwise a* is either a_l or a_h.

c) |x_i − ω₀| < |x_j − ω₀|

If λ_j ≥ λ_i, then a* < a_{X̄_i,X̄_j} < a₀. Also, there exists a threshold λ̂_i(x) > λ_j such that (i) a* ≤ a_{X̄_i,X̄_j} if λ_j < λ_i ≤ λ̂_i(x) and (ii) a* > a_{X̄_i,X̄_j} if λ_i > λ̂_i(x).

Proof that λ₁ = λ₂. Because the agents have identical payoffs and available signals, and both effort constraints are binding, a*(x) must be such that λ₁* = λ₂* > 0. To see this, suppose, for instance, that λ₁ > λ₂ ≥ 0. Then, the LHS of both effort constraints is identical. However, the RHSs will not generally be equal, which contradicts the fact that both constraints bind at the optimum.

Because λ₁* = λ₂* > 0, it follows from Cases 1-4 that a* > a_{X̄,X̄} if a_{X̄,X̄} > a₀, a* < a_{X̄,X̄} if a_{X̄,X̄} < a₀, and a* ∈ {a_l, a_{X̄,X̄}, a_h} if a_{X̄,X̄} = a₀.


[Figure 1: The Single-Agent Case. The plot shows the decision rules a_X̄(x) and a_X̲(x) against x ∈ ℝ, with the point x = 0 marked.]

[Figure 2: Congruent and Conflicting Information. Four panels, (a) and (c) for Agent 1 and (b) and (d) for Agent 2, plot a_{X̄,X̄} and the deviation decision rules against x₁ and x₂, with the thresholds x̂₁(x₂′) and x̂₂(x₁′) and the action a_{X̄,X̄}(x₁′, x₂′) marked.]