
RESEARCH: COMPETITION AND ALTRUISM*

Ramon Xifré Oliva†

May 2002, Work in progress
Comments welcome, Quotations discouraged

Abstract. Assume that research basically consists of acquiring information and transmitting it. This paper aims at understanding the impact of researchers' motivations on both activities. We consider two researchers concerned, on one hand, with discovering the value of an unknown variable. On the other hand, each researcher can be of one of two possible types: altruistic (if he prefers the other to discover it as well) or competitive (the contrary). Each researcher, who knows his own type and ignores the rival's, makes a costly effort to acquire information, exchanges it freely with the other and finally proposes an approximate value for the variable. 'Competition' provokes a deadweight loss in the transmission phase, but it may be beneficial for acquiring information, provided the cost is sufficiently low. Confronted with a presumed competitive rival, the other researcher may exert a higher effort whose value, due to communication, is appropriated by the former, who ends up in the best position to estimate the truth. This effect helps establish a relationship between the type of a researcher and his preference for public or private information setups. The competitive type is unambiguously in favor of private information, while the altruistic type is likely to prefer public information environments, with certain exceptions.

Keywords: cheap-talk, conflicting motives, communication, information acquisition, incomplete information.

JEL classification numbers: C72, D82, Z13.

1 introduction

Researchers are a good example of agents that perform two activities: first they make a costly investment to acquire new information, and then they transmit it freely to other researchers. How can this be rationally explained? Under the assumption that each researcher works independently, the most common view is that honest communication of results helps both parties by refining their respective private evidence. Basically, if one researcher can trust the reports of another, he has free access to the other's original results; it is as if he had conducted two experiments instead of only his own. But what are the implications for individual information acquisition and transmission decisions when one is not sure he can rely on others? The aim of this paper is to address this issue.

* Special thanks go to Marco Celentani, my thesis advisor, for his valuable suggestions, insights and encouragement throughout this project. I also wish to thank Antoni Calvó-Armengol, Jordi Massó, Carmine Ornaghi, Luis Úbeda and the audience of XIV IMGTA (Ischia) for helpful comments. I thank Santiago Forte for computational advice. The usual disclaimer applies. Financial support of MCYT (Spain) under project PB98-00024 is gratefully acknowledged.

† Department of Economics, Universidad Carlos III de Madrid, Getafe (Madrid) 28903, Spain; [email protected].


To do so, it is necessary to assume that a researcher cares not only about what he does, but also about what his peers do. To keep things simple, consider that researchers can be of only two polar types: competitive and altruistic. Both types of researcher would like their own performance to be good; they differ in how each regards the others' performance. An altruistic researcher prefers his peers to do well (basically he regards others' actions as if they were his own), while a competitive researcher has exactly the opposite attitude and prefers his peers to perform poorly (due to, e.g., spitefulness, status-seeking, envy, ...).¹

For instance, consider two researchers engaged in discovering the treatment for a given disease. Preparing the necessary experiment is costly, but writing a brief summary of the results is cheap. In an 'ideal world', where both researchers were altruistic (and both knew that), each would generously try to make both discoveries equally successful. This would entail helping each other by reporting exactly what each observes in his laboratory. Then each would fully rely on the colleague's reports as if they had come from his own laboratory. But this mutual trust in the communication stage, which seems to enhance efficiency on the surface, is problematic because it induces free-rider behaviour in the information acquisition phase. Since each knows that he can rely on the colleague's reports just as if they were his own results, both may decide to design a cheaper (and less informative) experiment with the ultimate purpose of free-riding on the other's. In the end, some socially valuable research effort may not be exerted. Our concern is to what extent, and how, this problem depends on the prior beliefs each researcher has about the other.

What if researcher 2 doubts whether researcher 1 is altruistic or competitive? Assume, further, that researcher 2 is altruistic and that researcher 1 knows so; for the moment, forget about the actual type of researcher 1. If researcher 2 believes that there is a positive probability that researcher 1 might be competitive then, independently of 1's actual type, 2 will make limited use of 1's reports because he knows that they come with added noise. That is, to the extent that he believes that 1 is competitive, he believes that 1 does not say what he sees but something else, tailored precisely to make him go the wrong way. Now, recall that each researcher acquires information for two motives: first for 'private use', to perform well himself; and second for 'external use', to help (if altruistic) or to obstruct (if competitive) the other in performing well. We could say that what each researcher decides is the objective value of his experiment: basically, how informative it is. But when a researcher receives another researcher's report, he somehow adjusts its objective value before using it, and works with what we could call the discounted value of the other's experiment. Both values coincide only when the receiver is completely sure that the sender is altruistic; with this exception, the discounted value will always be smaller than the objective one. In a sense, the adjusted value represents the optimal compromise between being careful enough not to credit a competitive peer and being smart enough not to let valuable, altruistic information be lost.

In our example, researcher 2 discounts researcher 1's experiment value while 1 does not discount 2's experiment (because 1 has no doubts about 2's intentions). Let us compare both researchers' behaviour in this case with their behaviour in the 'ideal world' described above, where both were altruistic and both knew so (and, therefore, neither discounted). With respect to that situation, 1 now acquires less information, regardless of his actual type: his private incentives remain unchanged, but 2's discounting of the value of his experiment reduces his external marginal gains from acquiring information. The case of researcher 2 is the opposite, and he acquires more information. His external incentives for doing so remain as before, but his private marginal gains increase: the information lost in the discounting he applies to 1's experiment can only be recovered by designing a better experiment in his own laboratory. Notice that this holds (for different reasons) for both possible actual types of researcher 1: altruistic or competitive.

¹ Our scepticism about researchers' motives is in line with the recent and growing concerns of the so-called New Economics of Science. Its purpose is to apply the tools of economic analysis to the functioning of research processes and groups, and to acknowledge a certain strategic burden in scientists' behaviour. See the reviews and contributions by Dasgupta and David (1994), Stephan (1996), David (1998) and Sent (1999).

Recall that the quality of the treatment each researcher finally proposes depends on (the objective value of) his own experiment and (the discounted value of) the other's experiment. In our example, researcher 2's experiment has a larger objective value than 1's, i.e. 2's experiment is more informative. The curious thing to notice is that the researcher with the best treatment available for the disease will be researcher 1, the one suspected of being competitive and who performed the worse experiment. In preparing his treatment, researcher 2 can use the objective value of his own experiment and the discounted value of 1's experiment. Researcher 1, on the other hand, is able to use the objective value of his own experiment and also the full, objective value of 2's experiment.

How should we assess equilibrium play? In this paper we present two different analyses. First, we are concerned with the prior probability distributions over types that induce maximum information extraction through the interaction between both researchers. In a comparative-statics manner, we look for the optimal uncertainty that induces the highest information acquisition decisions, depending on the exogenous information acquisition cost. We assume the existence of a somewhat artificial third party that receives the advice from both researchers and is only interested in obtaining the best available treatment. Basically, the recipient has to decide whether to pay attention to one researcher or the other. In this framework, and from the standpoint of the third party, one can look for the optimal degree of reluctance between both researchers that leads them to play in such a way as to produce the best estimates of the truth.

Second, we study the preference of each type, competitive and altruistic, for a public or private information environment. Basically, we are interested in knowing whether each type prefers to play the game with a rival that knows him, i.e. knows his actual type (public information setup), or with one that has only incomplete information about it (private information setup).

Formally, the analysis of this paper concerns a two-player game of incomplete and imperfect information. An unobservable state of the world, 0 or 1, is realized and both players aim at choosing an action, any number between 0 and 1, as close as possible to it.² A player's utility is decreasing in the distance between his action and the true value of the state of the world, but it also depends on how well his opponent does in the same dimension. Each of the two researchers can be of one of two possible types: altruistic, in which case his utility is decreasing in the distance between the opponent's action and the true value; or competitive, in which case his utility is increasing in this measure. Each player knows his own type and ignores the other's throughout the game.

Once each player learns his type, the first thing both do is choose, simultaneously, the quality of their private information: each player chooses the accuracy of the signal he will receive from nature later on. The information gathering technology is simple, depending on just one parameter associated with the cost intensity (equal for both types). After accuracies are chosen, each player receives the signal from nature. Nature sends only two possible signals, 0 or 1, and the accuracy is the probability with which the signal coincides with the true state. Each player's accuracy choice and signal are private information and remain hidden from the other player during the whole game. Thus, this is a case of secret information acquisition. The next step is the communication of signals. Each player sends to the other a message that, again, can be either 0 or 1. Communication here is simultaneous and modelled as 'cheap talk', i.e. messages are payoff-irrelevant and non-verifiable. Finally each player, who has two pieces of information available (his own signal and the other's message) together with a prior belief about the type of his opponent, decides an action level. His action, the other's action and the true state, which are all known at the end of the game, determine his total payoff.

² A binary outcome space and a continuous action space is a common assumption in the advice literature under imperfect information. For instance, if one is not sure whether killing a certain type of cells is useful or useless to fight a given disease, one may well decide to kill only part of the cells. Or, if a social scientist is not sure whether affirmative action (e.g. allocating quotas to the disfavoured group) is socially beneficial or harmful, he may advise allocating just a fraction of the quotas.

The game can be solved backwards. We use perfect Bayesian equilibrium as our equilibrium concept. At the last period, each player chooses the action level that corresponds to his best estimate of the expected value of the parameter, using Bayes' rule. This behaviour allows us to characterize the communication strategies naturally. In studying those, we concentrate on non-'babbling' equilibria³ and we find that both types, due to the opposing nature of the external component of their utility functions, behave in exactly opposite ways. The altruistic type reports exactly the signal he observes from nature without introducing any further noise, and the competitive type places as much noise as possible between his signal and his message (which is not equivalent to always lying but rather entails 'probabilistic lying'). The competitive researcher's strategy of probabilistic lying aims at making the message completely unreliable and thus rendering communication useless.

In the first stage of the game, we find that both types of a given player choose exactly the same accuracy level (although, admittedly, for opposite reasons). We further find that the marginal value of acquiring information decreases with one's own likelihood of being competitive and increases with the other's likelihood of being so. This second relationship is a consequence of the fact that accuracies are strategic substitutes in our model. Based on this result, and imposing the conditions that yield existence and uniqueness of equilibrium, we establish the general rule that the player with the lower probability of being competitive chooses the higher accuracy in equilibrium. Paradoxically, the player with the best estimate of the true state is the other one, the one with the higher probability of being competitive, because, due to communication, he is able to gain access in better relative terms to the information of the former.

There are a number of research strands to which this paper is related. First, the vast literature dealing with the two main activities researchers perform in our model: acquiring information and transmitting it. There are many papers that study the issue of acquiring information in different strategic environments⁴ and, on the other hand, strategic communication of information between sender and receiver has been extensively studied since the seminal contributions by Green and Stokey (1980) and Crawford and Sobel (1982), who first used cheap talk for this purpose; Okuno-Fujiwara et al. (1990) were also pathbreaking outside the cheap-talk, sender-receiver setup, studying how the possibility of communication may eliminate asymmetries of information. But not many works study the behaviour of agents that are able to perform both activities.

One of those is the paper by Lewis and Sappington (1997), who study how a principal should design appropriate incentive contracts to induce the agent to acquire valuable information before contracting. In their setup, there is only one informed party (the agent) and the goal of the uninformed party (the principal) is to induce the highest possible investigation. Therefore, they model communication unidirectionally, and it is fostered by means of a contract. In our setup, there is no such hierarchic relationship, but rather the interplay of 'equals' in a reduced-form framework without contracts. Since there is communication between agents, our reason for one player to induce the other to acquire information is different from theirs: the public-good nature of information. On the other hand, our model is not able to address the adverse selection issues that they solve by means of the appropriate incentive scheme designed by the principal to induce the agent to acquire information efficiently.

³ In a 'babbling equilibrium', which is a common feature of cheap-talk games, no message carries information and nobody pays attention to messages. This type of pooling strategy coupled with no updating of beliefs constitutes a perfect Bayesian equilibrium.

⁴ E.g. in oligopolistic competition setups: Vives (1984), Hwang (1993), Hauk and Hurkens (2001) and Hurkens and Vulkan (2001); in auctions: Matthews (1984) and Moressi (2000); in agency relationships: Mezzetti and Tsoulouhas (2000); and in more general settings: Hurkens and Vulkan (1999) and Perea and Swinkels (1999).

With respect to Crawford and Sobel (1982), apart from the fact that in our setup both players are senders and receivers at the same time, the other important difference is that they hold information quality fixed while we let players choose it. Their fundamental result, extended by Spector (2000), is that equilibrium signalling is more informative when agents' preferences are more similar. In our case, the crucial metric is not the similarity of preferences but rather how pervasive altruism is. With this qualification, there is a certain analogy between their results and ours in the communication phase: as altruism becomes more pervasive, the information transmitted in equilibrium increases; on the other hand, the comparison is not possible at the information acquisition stage because this is not studied in their paper. The paper by Okuno-Fujiwara et al. (1990) is also closely related to ours for at least two reasons: i) they find that communication prior to play may reduce the asymmetries of information among N players, while in this paper, by construction, the possibility of communication between 2 players is solely responsible for the asymmetries of information that subsist in equilibrium; ii) the conditions they supply for information transmission can be naturally related to the ones we obtain for higher information acquisition (see the end of section 3 for details).

Another interpretation of our work is as a cross-advising model in which each researcher refines the other's evidence ('advises' him) and, at the same time, both researchers (or 'experts', as sometimes called in this literature) become advisors of a third party which is only interested in the best of both estimates of the parameter and does not suffer the cost of investigation. To the extent that this interpretation is valid, the present work is related to Morris' model (2001), which acknowledges a conflict of motives between advisor and decision maker as well as the decision maker's inability to distinguish the advisor's type. In his dynamic model, where a binary parameter also has to be discovered, there is a unique advisor concerned, apart from advising well, with maintaining his reputation. Our static model (unable to deal with reputation issues) is mainly concerned with the transmission of the advising bias to the information acquisition decisions, which is not always negative. The work of Krishna and Morgan (2001) is also closely related to ours, since they explicitly model two biased experts advising a decision maker. However, a radical difference is that their experts are perfectly informed while ours choose how informed to be. Another important divergence between the two setups is how bias is modelled: their experts can be biased in opposite or in the same direction, but they do not care per se about what the other does. Their approach is to find the optimal 'talking regime' (both talking at the same time, or one after the other, or with rebuttal...). Our framework is not rich enough to capture these design specifications; instead, the first stage of our game, in which accuracies are chosen, determines endogenously the subsequent optimal talking behaviour.

The paper is organized as follows. In section 2 we deal with the last part of the game and analyze the talking and listening strategies, taking the signals' accuracies as given. In section 3 we go one step backwards and study the endogenous choice of accuracies that takes place at the beginning of the game. Sections 2 and 3 characterize the equilibrium of the full game depending on the initial distributions over types for both players. Section 4 establishes the efficiency properties of the equilibria induced by those initial distributions. Section 5 concludes. All proofs appear in Appendix A, except for those concerning a generalization of the talking strategy, which appear in Appendix B. The informal treatment of the multiplicity of equilibria appears in Appendix C.

2 the model with exogenous accuracies

2.1 Main elements

There are two players, $i = 1, 2$. Both are uncertain about a state of the world $\omega$ which determines the optimal action to take. There are two possible states of the world, $\omega \in \{0, 1\}$. We introduce the notation $I \equiv \{0, 1\}$ for the space of events. For simplicity, assume that both states occur with the same probability, i.e. $\Pr(\omega = 1) = \Pr(\omega = 0) = 1/2$. Each player can be of one of two possible types, $t_i \in \{c, a\}$. Player $i$ is 'competitive' ($t_i = c$) with probability $\lambda_i$ and 'altruistic' ($t_i = a$) with the complementary probability, $1 - \lambda_i$. We may refer to the space of types as $T = \{c, a\}$. After learning his type, each player $i$ receives a signal $s_i \in I$ that with probability $\mu_i$ equals the actual state of nature. Both $\mu_1$ and $\mu_2$ are common knowledge and allowed to differ, $\mu_1 \neq \mu_2$. Signals are informative but not perfectly so; more specifically, we assume that each $\mu_i \in (1/2, 1)$; we could assume they are strictly lower than $1/2$ and it would be just a matter of relabelling. Both players are engaged in a process of communication of their own information (signals). This communication consists in sending a (cross-)message to the other researcher. In particular, player $i$ of type $t$, or simply player $i_t$, sends the message $m_i^t \in I$ to researcher $j$; both players send their messages simultaneously. After this, each player ends up with two pieces of information: the signal he received from nature and the message he received from his fellow expert (which was sent strategically). With these two, each player decides an action level $a_i \in A \equiv [0, 1]$. Notice that we do not restrict $A$ to the elements of $I = \{0, 1\}$ but rather allow for a richer action space, the real interval between 0 and 1. The reason is that the action $a_i$ will capture all the inferences made by player $i$, and imposing $A = I$ would introduce technical drawbacks like discontinuities.⁵

In brief, the time structure is as follows:

t = 1 Nature chooses $\omega \in I$. The game $G(\mu, \lambda)$ needs as parameters $\mu \equiv (\mu_1, \mu_2) \in (1/2, 1)^2$, which are common knowledge. Each player learns privately his type, $c$ or $a$.

t = 2 Nature chooses $(s_1, s_2) \in I^2$ and each player $i$ receives privately his signal $s_i$.

t = 3 Each player $i$ sends a message $m_i \in I$ to the other player. Both messages are sent simultaneously.

t = 4 Player $i$ of type $t$ decides the action level $a_i^t$.

t = 5 The state of nature $\omega$ is observed and each player receives his payoffs.
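To make the order of moves concrete, here is a minimal simulation sketch in Python. The truth-telling messages and the averaging action rule are placeholders chosen purely to illustrate the information flow (the equilibrium talking and action strategies are derived below), and all parameter values are arbitrary.

```python
import random

def play_once(mu1, mu2, lam1, lam2):
    # t = 1: nature draws the state; each player privately learns his type
    omega = random.randint(0, 1)
    t1 = 'c' if random.random() < lam1 else 'a'
    t2 = 'c' if random.random() < lam2 else 'a'
    # t = 2: each player i observes a signal equal to omega w.p. mu_i
    s1 = omega if random.random() < mu1 else 1 - omega
    s2 = omega if random.random() < mu2 else 1 - omega
    # t = 3: simultaneous cheap-talk messages (placeholder: both types
    # truth-tell; equilibrium talking strategies are derived in the text)
    m1, m2 = s1, s2
    # t = 4: actions in [0, 1] (placeholder: naive average of the two
    # pieces of information, not the Bayesian action derived later)
    a1, a2 = (s1 + m2) / 2, (s2 + m1) / 2
    # t = 5: the state is revealed; quadratic private losses are realized
    return omega, (t1, t2), (a1 - omega) ** 2, (a2 - omega) ** 2

print(play_once(mu1=0.8, mu2=0.7, lam1=0.3, lam2=0.6))
```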

The communication stage is cheap talk because the exchanged messages are payoff-irrelevant, as well as non-verifiable and non-binding. Notice that the strategy for sending a message cannot depend on the message received (since both are sent at the same time). Finally, recall that each player ignores the type of his rival (each player $i$ knows only the probability distribution $\lambda_j$ over the possible types of player $j$).

⁵ See the footnote in the introduction for justification.


2.2 Payoffs

Both types are defined by (and differ only in) their utility functions and, more specifically, in the external component of them. For simplicity we assume first that utility functions are additively separable into private and external utility. Both types prefer their action to be as close as possible to the true state, and this defines identical private utilities for both types. We assume that these preferences can be represented by a quadratic loss function. The external utility of player $i$ is defined as a function of the private utility of player $j$ and the type of player $i$: the external utility of the competitive type is increasing in the loss of the rival player, and the external utility of the altruistic type is decreasing in it. That is, private losses of the competitive type can be compensated by the rival's losses; instead, the altruist suffers both from private and from external losses.

Let $u_i^t(a_i^t, a_j \mid \omega)$ be the Bernoulli utility function of player $i$ once he has learned his type $t$, contingent on the true state of the world being $\omega$. We denote by $\bar{u}_i^t(a_i^t \mid \omega)$ the private utility, which depends only on his own action $a_i^t$, and by $\tilde{u}_i^t(a_j \mid \omega)$ the external utility, which depends only on the other's action $a_j$. So we have that $u_i^t(a_i^t, a_j \mid \omega) = \bar{u}_i^t(a_i^t \mid \omega) + \tilde{u}_i^t(a_j \mid \omega)$. With the quadratic loss, as noted above, we have that $\bar{u}_i^c(a \mid \omega) = \bar{u}_i^a(a \mid \omega) = -(a - \omega)^2$ and $\tilde{u}_i^c(a_j \mid \omega) = (a_j - \omega)^2 = -\tilde{u}_i^a(a_j \mid \omega)$, and therefore finally

$$u_i^c(a_i^c, a_j \mid \omega) = -(a_i^c - \omega)^2 + (a_j - \omega)^2$$
$$u_i^a(a_i^a, a_j \mid \omega) = -(a_i^a - \omega)^2 - (a_j - \omega)^2 \quad (1)$$
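A direct transcription of (1) into code, with nothing added beyond the formulas themselves, may help fix ideas:

```python
def u(t, a_own, a_other, omega):
    """Bernoulli utility in (1): common private quadratic loss plus an
    external term whose sign depends on the type t in {'c', 'a'}."""
    private = -(a_own - omega) ** 2
    external = (a_other - omega) ** 2
    return private + external if t == 'c' else private - external

# the competitive type gains when the rival is far from the truth
assert u('c', 0.9, 0.2, 1) > u('a', 0.9, 0.2, 1)
```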

2.3 Strategies, beliefs and expected utility

2.3.1 Strategies. Each player $i_t$ is called to play at two different information sets. At the last stage of the game, at $t = 4$, he has to decide an action level $a_i^t \in A$ having two pieces of information: the signal $s_i$ he received from nature and the message $m_j$ sent by the other player. Therefore player $i_t$'s strategy is a function $a_i^t : I^2 \to A$, where $a_i^t(s_i, m_j)$ is the action he chooses in case of having received $s_i$ and $m_j$. The stage before, at $t = 3$, he has to send the message $m_i$ to the other player, and for that he has available only the signal he has received, $s_i$. Therefore his strategy is a function $\sigma_i^t : I \to [0, 1]$, where $\sigma_i^t(s_i)$ is the probability of announcing the message $m_i = 1$ in case of having received $s_i$. We denote both types of player $i$'s strategy by $\sigma_i = (\sigma_i^c, \sigma_i^a)$.

Figures 1 and 2 summarize the notation. Figure 1 shows how the signal from nature is received and how the message to the other researcher is transmitted. Figure 2 represents that this happens for both researchers; there we can see how the action is determined by the signal and the message. In both pictures, the filled black circles represent the uncertainty about the true state of the world, $\omega \in \{0, 1\}$. The pictures are limited and do not show how different types of players can behave differently; notice only that the strategies $\sigma$ carry the player's type as a superscript.

Introducing a bit more notation, $\theta_i^t(m_i \mid \omega)$ captures the probability that researcher $i_t$ sends the cross-message $m_i$ to researcher $j$ conditional on the actual state being $\omega$, i.e.

$$\theta_i^{t_i}(m_i \mid \omega) = \Pr(m_i \mid \omega, t_i). \quad (2)$$

Notice that $\theta_i^t(m_i \mid \omega)$ is useful as it captures the inference made by player $j$. Further, when player $j$, upon observing a certain message $m_i \in I$, makes an inference about the signal $i_t$ received (which is what really matters), he needs to take into account the accuracy of player $i$. In other words, $\Pr(m_i \mid \omega) = \sum_{s_i \in I} \Pr(m_i \mid s_i) \Pr(s_i \mid \omega)$. Finally, it is worth noticing that it may well be the case that $\theta_i^{t_i}$ differs across the two types of player $i$. For the sake of brevity in notation we shall write

$$\theta_i(m_i \mid \omega) = \Pr(m_i \mid \omega) = \sum_{t_i \in T} \theta_i^{t_i}(m_i \mid \omega) \Pr(t_i). \quad (3)$$

2.3.2 Beliefs. Beliefs at each stage are updated according to Bayes' theorem, whenever possible. Notice that before receiving any signal both players hold identical, uninformative beliefs $\Pr(\omega = 0) = \Pr(\omega = 1) = 1/2$. The sequence for updating beliefs is as follows.

At $t = 2$, both types of player $i$ receive signal $s_i$ and therefore believe that

$$b_i(s_i) = \Pr(\omega = 1 \mid s_i) = \begin{cases} \mu_i & \text{if } s_i = 1 \\ 1 - \mu_i & \text{if } s_i = 0 \end{cases} \quad (4)$$

At $t = 3$, player $i$, in addition to the signal $s_i$, receives a new piece of information, $m_j$. He will take this information into account, with its accuracy and strategic burden, to update his beliefs. Then, using Bayes' theorem, one finds the probability with which player $i$ believes that the actual state is 1, given he has received $s_i$ and $m_j$:

$$\beta_i(s_i, m_j) = \Pr(\omega = 1 \mid s_i, m_j) = \frac{\Pr(m_j, s_i \mid \omega = 1)}{\Pr(m_j, s_i \mid \omega = 1) + \Pr(m_j, s_i \mid \omega = 0)}, \quad \forall (s_i, m_j) \in I^2 \quad (5)$$

or, equivalently, using the definition of $\theta_j(m_j \mid \omega)$ in (3),

$$\beta_i(s_i, m_j) = \frac{\Pr(s_i \mid \omega = 1)\,\theta_j(m_j \mid \omega = 1)}{\Pr(s_i \mid \omega = 1)\,\theta_j(m_j \mid \omega = 1) + \Pr(s_i \mid \omega = 0)\,\theta_j(m_j \mid \omega = 0)}$$
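As an illustration of this updating rule, the following sketch computes $\theta_j$ via (3) and then $\beta_i$ via (5); the talking strategies and parameter values fed in are arbitrary illustrative inputs, not equilibrium objects:

```python
def theta(m, omega, mu_j, lam_j, sigma_c, sigma_a):
    """Pr(m_j = m | omega) as in (3): average over types t_j of the
    chance that j's signal plus his talking strategy produce message m.
    sigma_* = (prob of sending 1 given signal 0, ... given signal 1)."""
    total = 0.0
    for prob_t, sigma in ((lam_j, sigma_c), (1 - lam_j, sigma_a)):
        for s_j in (0, 1):
            p_sig = mu_j if s_j == omega else 1 - mu_j
            p_msg = sigma[s_j] if m == 1 else 1 - sigma[s_j]
            total += prob_t * p_sig * p_msg
    return total

def beta(s_i, m_j, mu_i, mu_j, lam_j, sigma_c, sigma_a):
    """Posterior Pr(omega = 1 | s_i, m_j) as in (5); the equal priors
    on the two states cancel out."""
    like = lambda w: (mu_i if s_i == w else 1 - mu_i) * \
                     theta(m_j, w, mu_j, lam_j, sigma_c, sigma_a)
    return like(1) / (like(1) + like(0))

# truthful altruist, always-lying competitive, lambda_j = 0.3
print(beta(1, 1, 0.8, 0.7, 0.3, sigma_c=(1, 0), sigma_a=(0, 1)))
```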

Notice that the beliefs of player $i$ refer to the probability distribution over $\omega$, because that is the payoff-relevant object here, rather than to the type of the rival. It may deserve some further explanation why we concentrate here on updated beliefs about the state of nature alone, and not about the type of the rival, as in most sender-receiver games. In the classical sender-receiver setup, the payoff of both players is determined by the actions of the receiver and the types of both, without any further source of uncertainty. In our setup we have an extra source of uncertainty: the unknown state of the world $\omega \in \{0, 1\}$. We could 'decompose' the updated beliefs about $\omega$ as the combination of the updated beliefs about the type of the rival together with a conjecture about the strategy of each of the two types.

2.3.3 Expected utility. The expression for the expected utility changes depending on the information available. We state here the expected utility at the two nodes where player $i_t$ has a decision to take.

At $t = 2$, player $i_t$ has only received the signal $s_i$ from nature and has to decide which message to send. He ignores the true state, $\omega$, the type of his rival, $t_j$, the signal his rival has received, $s_j$, and the message that the rival will send (which depends on the two previous unknowns). Let $m = (m_1, m_2)$ and $s = (s_1, s_2)$. The expected utility thus takes the form

$$eu_i^t(\sigma_i^t \mid s_i, \lambda_j) = \sum_{\omega \in I} \sum_{t_j \in T} \sum_{s_j \in I} \sum_{m_j \in I} u_i^t\big(a_i^t, a_j^{t_j} \mid \omega\big) \Pr(m_j \mid s_j, t_j) \Pr(s_j \mid \omega) \Pr(t_j) \Pr(\omega) \quad (6)$$

where $a_i^t = a_i^t(s_i, m_j)$, $a_j^{t_j} = a_j^{t_j}(s_j, m_i)$ and $m_i = m_i(s_i, t_i)$. It may be worth noting the interdependence of these sources of uncertainty. Both the state of the world, $\omega$, and the type of the rival, $t_j$, are 'priors' of the model and no other information can be used to refine their initial distributions. For this reason, we cannot condition their realization on any other variable. The signal the rival receives, $s_j$, does not depend on his type $t_j$, but it does depend on the true state $\omega$ through the signal transmission mechanism, which is common knowledge and thus can be incorporated into player $i$'s calculations. Finally, using the inference about the signal the rival has received, player $i$ can make the final relevant inference about the action player $j$ will take, which also depends on the type of the rival.

At $t = 3$, player $i_t$ has received the message $m_j$ from the other player. As noted above, he could make inferences about the type of the rival, but since the rival's type per se is payoff-irrelevant, there is no point in doing so. The expected utility is

$$eu_i^t(a_i^t \mid s_i, m_j, m_i, \lambda_j) = \sum_{\omega \in I} \sum_{t_j \in T} \sum_{s_j \in I} u_i^t\big(a_i^t, a_j^{t_j} \mid \omega\big) \Pr(s_j \mid \omega) \Pr(t_j) \Pr(\omega) \quad (7)$$

where $a_i^t = a_i^t(s_i, m_j)$ and $a_j^{t_j} = a_j^{t_j}(s_j, m_i)$. At this stage player $i$ has received the message from player $j$ and this unknown vanishes. Again, both the true state and the type of the rival follow the prior distribution and cannot be conditioned on anything. Player $i$ still needs to make an inference about the signal his rival received from nature, $s_j$, because this will determine his rival's action. For this reason player $i$ needs to maintain the inference about $s_j$ in the computation of the expected utility at this stage.

2.4 Equilibrium

All the preceding defines a game of incomplete and imperfect information with parameters $\lambda = (\lambda_1, \lambda_2)$ and $\mu = (\mu_1, \mu_2)$, $G(\lambda, \mu)$. Each player knows his own type and ignores the rival's. We turn to the study of the subgame that starts when accuracies are already chosen, fixed and common knowledge. In the next section we will study the first move of the game, when accuracies are chosen, and this will complete the analysis of the whole game. Therefore, at this stage, an equilibrium must specify, for each player, a talking strategy (how to send messages), a listening strategy (how to make inferences on the basis of messages and signals received) and an action strategy. Basically, at this stage of the game, with accuracies already chosen, we could say that the game is 'between types' rather than 'between players', as will be the case in the next section. The equilibrium concept behind the following definition is perfect Bayesian equilibrium.

Definition 1. $(\sigma, \beta, a)$ is an equilibrium of the game $G(\lambda, \mu)$ if:

1. each researcher's message maximizes his expected utility, given both beliefs $\beta_1$ and $\beta_2$, for each possible signal he may receive (see equation (6));

2. each researcher's action maximizes his expected utility, given both beliefs, for any possible signal and message he may receive (see equation (7));

3. both beliefs are derived from the other researcher's strategy according to the updating rule in (5).

The internal consistency of this equilibrium notion applied to our situation is that, independently of the realization of those variables outside a player's control, the strategies this player chooses maximize his expected utility, so that he has nothing to regret under any contingency. Further, as a common requirement of perfect Bayesian equilibrium, the beliefs on which players rely to decide their actions have to be consistent in the standard sense, i.e. derived from the other's strategies using Bayes' rule. We turn now to the analysis of the action strategy.


Lemma 1 (Optimality of the expected value). In any equilibrium,

$$a_i^t = \beta_i^t(s_i, m_j).$$

Our first result is that the action each player $i_t$ takes at the end of the game coincides with his belief about the true state being $\omega = 1$, which is precisely how we defined $\beta_i$ in (5). First, notice that once a player is at the end of the game and has to decide his action $a_i^t$ on the basis of the two pieces of information available to him, $s_i$ and $m_j$, he cares only about his private utility, since he is aware he cannot influence the other's action. Given this, the unique optimal action is the expected value of the variable, which, in our simple setup, coincides with the probability that $\omega = 1$. The proof in the appendix allows for more general preferences than those considered in the text and the result is not qualitatively different: the action is a strictly increasing function of $\beta_i^t$, depending on the function that transforms the original utility function.

This finding simplifies the solution of the game considerably. Basically, we build all that follows upon this basis: the information available to a player determines how fine his beliefs are, and this determines how large his private payoffs are. The external payoffs are determined precisely by how precise the other player's beliefs are.
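A quick numerical check of this logic, under the quadratic loss of (1): the expected private payoff of action $a$ when $\Pr(\omega = 1) = b$ is $-(b(a-1)^2 + (1-b)a^2)$, and a grid search confirms it is maximized exactly at $a = b$.

```python
# Expected private payoff of action a when Pr(omega = 1) = b:
# E[-(a - omega)^2] = -(b * (a - 1)**2 + (1 - b) * a**2),
# which is maximized exactly at a = b.
b = 0.73
best = max((a / 1000 for a in range(1001)),
           key=lambda a: -(b * (a - 1) ** 2 + (1 - b) * a ** 2))
print(best)  # 0.73
```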

Before proceeding we shall pay attention to a common feature of most cheap-talk games: the existence of 'babbling' equilibria. These are equilibria in which nobody sends informative messages and nobody pays attention to the messages he receives. Given this, the original behaviour of sending non-informative messages (or anything else) is part of an equilibrium as we defined it above. First we state formally what a babbling strategy looks like, and then establish that any such profile constitutes an equilibrium.

Definition 2. $(\sigma, \beta, a)$ is a babbling strategy profile if for some $(c_1^c, c_1^a, c_2^c, c_2^a) \in [0, 1]^4$, and for $i = 1, 2$, $j \neq i$, $t \in \{c, a\}$:

1. $\sigma_i^t(0) = \sigma_i^t(1) = c_i^t$;

2. $\beta_i(s_i, m_j) = \begin{cases} \mu_i & \text{if } s_i = 1 \\ 1 - \mu_i & \text{if } s_i = 0 \end{cases}$ for any $m_j \in \{0, 1\}$;

3. $a_i^c = a_i^a = b_i(s_i)$.

Lemma 2. Every babbling strategy profile is an equilibrium of the game $G(\lambda, \mu)$.

In a babbling equilibrium, both types of each player send message 1 with a probability that is independent of the signal they observe. This makes it impossible for the listening player to draw any informative inference. Basically, babbling means talking in such a way as to decorrelate signals and messages completely. Then, applying the updating rule in (5), we find that the updated beliefs are identical to the prior beliefs; that is, the listening player learns nothing by listening. Therefore it is optimal for the listening player not to update beliefs and to retain the prior ones. Finally, these prior beliefs determine the optimal action to take, as we established in Lemma 1.
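A two-line check of this mechanism (the babbling constant and the accuracy are arbitrary):

```python
# Under a babbling strategy sigma(0) = sigma(1) = c, the message carries
# no information about the state: Pr(m_j = 1 | omega) = c for both omega,
# so the posterior in (5) collapses to the prior b_i(s_i).
mu_j, c = 0.7, 0.4   # arbitrary accuracy and babbling constant
for omega in (0, 1):
    p_m1 = sum((mu_j if s == omega else 1 - mu_j) * c for s in (0, 1))
    print(omega, p_m1)   # prints 0.4 for both states
```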

Thus the interesting issue is the existence and properties of informative (non-babbling) equilibria. The general point, and the key intuition for the following results, is simple. The utility functions of the two types determine opposite attitudes towards the rival: the altruistic type prefers to help his rival and therefore favours sending the most accurate information possible. In contrast, the competitive type prefers to attack his rival, and the way to do that is to make the information as useless as possible, completely unreliable if possible. This is the main idea behind the two following propositions. The distinction depends on the value of $\lambda_i$, which can be interpreted as a measure of the influence of the competitive type. We shall try to convey the basic intuitions starting with an explanation of Figures 2 and 3.

Proposition 1. Suppose that $\lambda_i > 1/2$. Then the following talking strategies for each type of player $i$, and listening strategy for player $j$, are part of an equilibrium of the game $G(\lambda, \mu)$. See Figure 2.

1. $\sigma_i^a(0) = 0$, $\sigma_i^a(1) = 1$ (truth telling);

2. $\left\{(\sigma_i^c(0), \sigma_i^c(1)) \in [0, 1]^2 \;\middle|\; \sigma_i^c(0) = \sigma_i^c(1) + \frac{1 - \lambda_i}{\lambda_i}\right\}$ (probabilistic lying);

3. $\beta_j(s_j, m_i) = b_j(s_j)$ (no belief updating);

4. $a_j(s_j, m_i) = \beta_j(s_j, m_i)$.

Consider first Figure 3, which corresponds to a situation in which the initial distribution over the possible types of player 2 gives a relatively large weight to the competitive type. As we said above, at this stage, with accuracies fixed and exogenous, the game is basically 'between types'. Figure 3 tries to capture the two radically opposed attitudes of the two types of player $i$. Consider the altruistic type. The graph aims at showing that the altruist's goal is to send as message exactly the signal he has received, i.e. to reduce the noise of his message, which is equivalent to correlating perfectly (and positively, though this is not crucial) the signal he receives from nature and the message he sends to the other player.

The figure has to specify what each type sends in each of the two possible contingencies, $s_i = 0$ or $s_i = 1$; there is a different rectangle for each signal. The behaviour of the altruistic type is what appears below the $1 - \lambda_i$ region. His noise-reducing behaviour is illustrated by showing that when he receives signal 1 (by convention depicted as a black square) he sends message 1 (black rectangle); when he receives signal 0 (white square) he sends message 0 (white rectangle). He simply sends what he receives. The behaviour of the competitive type is somewhat more complex, and it is depicted below the $\lambda_i$ region (which, since $\lambda_i > 1/2$, is larger than the $1 - \lambda_i$ region). The goal of the competitive type is to render message $m_i$ completely uninformative, if possible. In this case, with $\lambda_i > 1/2$, this is possible, but not by simply contradicting what the altruist has done, i.e. sending message 1 in case of observing signal 0 and message 0 in case of signal 1; if he were to do that, the message that player $j$ receives would carry some information, negatively correlated with the signal, but correlated all the same. The strategy the competitive type follows, which requires $\lambda_i > 1/2$ as a necessary condition, consists of two parts. The first thing he has to do is to 'cancel' the information sent by the altruist. This is done by sending the opposite messages to those sent by the altruist, but with a given probability: exactly the probability with which the altruistic type occurs, $1 - \lambda_i$. In the picture, this is represented by sending white when he observes black and vice versa, in the same 'proportion' as the altruist did. This alone provokes complete confusion in the listening player, because once he hears a message, he knows that it is equally likely that the message coincides with the signal (if he faces an altruist) or that the message is the contrary of the signal (if he faces a competitive). Finally, with the remaining probability $\lambda_i - (1 - \lambda_i) = 2\lambda_i - 1 > 0$ (for $\lambda_i > 1/2$), the competitive type plays a kind of babbling strategy, sending message 0 with probability one half and message 1 with the other half; this is represented by the grey area (exactly 1/2 black and 1/2 white), for both possible signals he may receive. The final combination of the strategies of these two types appears in the last rectangle of Figure 3. Notice that the strategy designed by the competitive type achieves his goal: making the message $m_i$ completely uninformative, represented by a 1/2 : 1/2 grey colour which stands for non-informativeness. The formal equivalent of this 1/2 : 1/2 grey is that the inference made by player $j$ is limited to $\Pr(s_i = 0 \mid m_i) = \Pr(s_i = 1 \mid m_i) = 1/2$, for both $m_i \in \{0, 1\}$. Finally, given that messages convey no information, the listening player gains nothing by listening; therefore beliefs are not updated and remain the prior ones, which depend only on the signal and disregard the message. Proposition 1 formalizes this intuition. Notice that the strategy for the competitive type is defined only in differences between probabilities: any pair of probabilities $\sigma_i^c(0)$ and $\sigma_i^c(1)$ fulfilling the above condition yields the same final outcome and renders message $m_i$ completely uninformative and useless for player $j$.
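The uninformativeness claim can be verified numerically. In the sketch below, $\lambda_i = 0.6$ and the particular pair $\sigma_i^c$ is an arbitrary choice satisfying the difference condition of Proposition 1:

```python
# lambda_i > 1/2: the competitive type mixes so that the pooled message
# distribution is state-independent, i.e. theta(m | 1) = theta(m | 0).
mu_i, lam = 0.8, 0.6
sig_a = (0.0, 1.0)                      # truth telling
sig_c = (0.9, 0.9 - (1 - lam) / lam)    # sigma_c(0) = sigma_c(1) + (1-lam)/lam

def theta_m1(omega):
    """Pooled Pr(m_i = 1 | omega) across the two types of player i."""
    out = 0.0
    for p_t, sig in ((1 - lam, sig_a), (lam, sig_c)):
        for s in (0, 1):
            out += p_t * (mu_i if s == omega else 1 - mu_i) * sig[s]
    return out

print(theta_m1(0), theta_m1(1))  # both 0.54: the message is uninformative
```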

Proposition 2. Suppose that $\lambda_i < 1/2$. Then the following talking strategies for each type of player $i$, and the listening strategy for player $j$, are part of an equilibrium of the game $G(\lambda, \mu)$. See Figure 3.

1. $\sigma_i^a(0) = 0$, $\sigma_i^a(1) = 1$ (truth telling);

2. $\sigma_i^c(0) = 1$, $\sigma_i^c(1) = 0$ (lying);

3. $\beta_j(s_j, m_i) \neq b_j(s_j)$ (belief updating);

4. $a_j(s_j, m_i) = \beta_j(s_j, m_i)$.

Consider now Figure 4, which corresponds to the case $\lambda_i < 1/2$. The motivations of both types remain as before. The only twist here is that the competitive type is 'not likely enough' to be able to play the complex behaviour described above. First, we have the altruist performing as before: saying black when he observes black and white when he observes white. The main point is that now his 'frank' and transparent behaviour dominates anything the competitive type may design to try to cancel the information. The most destructive behaviour the competitive type can adopt in this case is to contradict what the altruist has said: saying black when in fact the signal was white and vice versa. Even so, player $j$ is able to make informative inferences about the signal $s_i$ on the basis of the information conveyed by the message $m_i$, as the two rectangles in Figure 4 show; notice that neither of these two is grey in exactly the 1/2 : 1/2 proportion above. Admittedly, neither of the two rectangles at the bottom of Figure 4 is white or black (which would mean they are perfectly informative), but the first one, which corresponds to the inference $j$ makes in case of receiving message 1, is darker than the 1/2 : 1/2 grey (closer to black, which represents 1), and the second one, which corresponds to the inference $j$ makes in case of receiving message 0, is lighter than the 1/2 : 1/2 grey (closer to white, which represents 0). Any other strategy of the competitive type would make messages even more informative (the top bar darker and the bottom one lighter), and this goes against the competitive type's external utility. Again, Proposition 2 makes this point formally.

To conclude this section, it remains to show that these strategies are the only ones that fulfill the equilibrium requirements. For this task it will be helpful to introduce the simple notion of reliability, which can be computed easily from our notation.

Definition 3 (Reliability). We define the reliability of message $m_i$ as $R(m_i) = \Pr(\omega = m_i \mid m_i)$:

$$R(m_i) = \frac{\Pr(m_i \mid \omega = m_i) \Pr(\omega = m_i)}{\Pr(m_i \mid \omega = m_i) \Pr(\omega = m_i) + \Pr(m_i \mid \omega \neq m_i) \Pr(\omega \neq m_i)}$$

which, using the definitions of $\theta_i$ in (2) and (3), can be rewritten as

$$R(m_i) = \frac{\theta_i(m_i \mid \omega = m_i)}{\theta_i(m_i \mid \omega = m_i) + \theta_i(m_i \mid \omega \neq m_i)}. \quad (8)$$
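As an illustration, the following sketch evaluates (8) under the Proposition 2 configuration (truthful altruist, always-lying competitive, $\lambda_i < 1/2$); the parameter values are arbitrary. The resulting reliability stays strictly above 1/2 for both messages, i.e. messages remain informative:

```python
# Reliability R(m) per (8), with theta_i pooled over player i's types.
mu_i, lam = 0.8, 0.3
sig = {'a': (0.0, 1.0), 'c': (1.0, 0.0)}   # truth telling vs. lying
prob = {'a': 1 - lam, 'c': lam}

def theta(m, omega):
    return sum(prob[t] *
               sum((mu_i if s == omega else 1 - mu_i) *
                   (sig[t][s] if m == 1 else 1 - sig[t][s])
                   for s in (0, 1))
               for t in ('a', 'c'))

for m in (0, 1):
    R = theta(m, m) / (theta(m, m) + theta(m, 1 - m))
    print(m, R)  # R(0) = R(1) = 0.62 > 1/2: messages convey information
```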


Basically, the notion of reliability is the formal equivalent of the story about intensities of grey. The reliability of a message $m_i$ is the inference that the rival player $j$ can make about the probability with which the true state $\omega$ coincides with the message he has received. In a sense, it captures how informative a message is. But notice that valuable information need not come in the form of a message positively correlated with the true state (in that case, we would have $R(m_i) > 1/2$); valuable information can also be conveyed by a message negatively correlated with the true state (in that case, $R(m_i) < 1/2$). This is so because the listening player understands the game and knows how to reverse the meaning of the message and interpret it properly. Thus, the truly harmful situation for the listener is to infer that the reliability of a message is exactly 1/2, because in this case the message adds no information to his prior and is, therefore, useless.

The idea of reliability helps in understanding the following lemma, which establishes that Propositions 1 and 2 capture the unique equilibrium behaviour. There exist other equilibria, following the same ideas presented above, in which messages are negatively correlated with signals.

Lemma 3. The above are the unique equilibria of the game, up to a labelling convention.

First, we deal with the labelling convention. Propositions 1 and 2 could be relabelled in the following way and still constitute an equilibrium, with the same underlying logic. For Proposition 1, the altruistic type always lies, $\sigma_i^a(0) = 1$ and $\sigma_i^a(1) = 0$, and the competitive type plays 'probabilistic truth-telling', $\sigma_i^c(1) = \sigma_i^c(0) + \frac{1 - \lambda_i}{\lambda_i}$. Now the listening behaviour of player $j$ should be adjusted accordingly, to take into account that when he observes $m_i = 0$ he must in fact consider that he is observing message $m_i = 1$, and vice versa. That is, formally, $j$'s actions are found as $a_j(s_j, m_i) = \beta_j(s_j, 1 - m_i)$, i.e. he simply inverts the messages. For Proposition 2 the relabelling amounts to $\sigma_i^a(0) = 1$ and $\sigma_i^a(1) = 0$ for the altruistic type, $\sigma_i^c(0) = 0$ and $\sigma_i^c(1) = 1$ for the competitive type and, again, $a_j(s_j, m_i) = \beta_j(s_j, 1 - m_i)$ for the listening player $j$.

Now we turn back to the original labelling and try to show why the above propositions capture the equilibrium behaviour. Take first the case of $\lambda_i > 1/2$, that is, Proposition 1. It can be shown that, other things being equal, it is in the interest of the altruistic type to drive the reliability of the message away from 1/2 (above it), while the competitive type prefers the reliability to be as close as possible to 1/2. The reason for this complete opposition in preferences over reliability is to be found in the divergence of the external expected utilities. First we concentrate on the altruistic type to show that he can do nothing better. It is clear that, taking the strategy of the competitive type as given, the altruist maximizes reliability by sticking to the strategy proposed in Proposition 1. But, further, why could nothing else be an equilibrium? Suppose the altruistic type places some distortion in his messages; we know the competitive type would respond with a different probabilistic-lying strategy, and for any conceivable probabilistic-lying strategy the initial, distortionary strategy of the altruist would not be optimal, because one without distortions would beat it. Given that the altruist has this kind of 'dominant strategy' consisting in telling the truth, the unique best reply of the competitive type is to make messages uninformative, as the strategy proposed in Proposition 1 does. Notice that in this case of $\lambda_i > 1/2$, the competitive type is able to make $R(m_i = 0) = R(m_i = 1) = 1/2$. In fact, one derives the proposed equilibrium strategy just by imposing the preceding equalities, which is equivalent to imposing

$$\sigma_i^c(0) - \sigma_i^c(1) = \frac{1 - \lambda_i}{\lambda_i}\,\big(\sigma_i^a(1) - \sigma_i^a(0)\big). \quad (9)$$

Consider now the case of Proposition 2, that is, $\lambda_i < 1/2$. In this case it is easier to show that the opposing attitudes of the two types lead them necessarily to choose the strategies proposed in the proposition. Basically, the altruistic type continues to have that kind of dominant strategy, and now the competitive type cannot even try to play the 'two-part' strategy because the likelihood of his presence is not high enough; in the light of this, the best thing he can do is to cancel as much of the messages' information as possible. But it is worth noting that in this case messages will still convey information; this can be expressed by means of the reliability of the messages, $R(m_i = 0) = R(m_i = 1) = 1 - \lambda_i > 1/2$, for $\lambda_i < 1/2$.

The main conclusion of this part is that the presence of competitive players never helps in the communication phase. We have seen clearly the destructive, noise-creating role they play in transmitting information, as opposed to the transparent, noise-minimizing strategy of the altruistic type. In other words, as altruism becomes more pervasive, communication is (weakly) more informative.

3 the case with endogenous accuracies

3.1 Extended framework

The main elements of the game remain as before, but there is a new stage. Now each player, after learning his type $t_i \in \{c, a\}$, decides his accuracy $\mu_i^t$; so we allow different types to have different accuracies. As a consequence, each player $i_t$ will receive a different signal $s_i^t$ that is associated to the true state under the same law as before, i.e.

$$\Pr(s_i^t = \omega) = \mu_i^t.$$

The time structure is now as follows:

t = 1 Nature chooses $\omega \in I$. Each player learns his type, $c$ or $a$.

t = 2 Each player $i_t$ chooses an accuracy level, $\mu_i^t \in (1/2, 1)$. The cost of acquiring information is equal for both, scaled by $x$, which is a parameter of the game: common knowledge and exogenous.

t = 3 As before, nature chooses $(s_1, s_2) \in I^2$ and each player $i$ receives privately his signal $s_i$.

t = 4 As before, each player $i_t$ sends a message $m_i^t \in I$ to the other player. Both messages are sent simultaneously.

t = 5 As before, player $i$ of type $t$ decides the action level $a_i^t$.

t = 6 As before, the state of nature $\omega$ is observed and each player receives his payoffs.

Payoff functions are as before. Each player $i_t$ is called to play at three different information sets. The strategies at the two last information sets are as in the preceding case with exogenous accuracies. We keep the definition of $\theta_i^t(m_i \mid \omega)$ in (2) as the probability that researcher $i_t$ sends the cross-message $m_i^t$ to advisor $j$ given state $\omega$, but now it needs to be generalized as

$$\theta_i^t(m_i \mid \omega) = \mu_i^t\,\sigma_i^t(\omega) + (1 - \mu_i^t)\,\sigma_i^t(1 - \omega).$$

We also maintain (and adapt) the definition of $\theta_i(m_i \mid \omega)$ in (3).

The rest of the notation is relabelled accordingly. Concerning the beliefs at $t = 3$, now $b_i(s_i)$ equals $\mu_i^t$ or $1 - \mu_i^t$ according to the same rule as in (4). By the same token, the definition of beliefs at $t = 4$ is adapted from (5) by replacing $\beta_j$ by $\beta_j^t$ and $\mu_j$ by $\mu_j^t$.

In this extended set-up we must specify a cost of acquiring information. For the moment we just define it in general terms as depending on a single cost-intensity parameter $x$,


$$c(\mu; x) \qquad (10)$$

being strictly increasing in both arguments and convex in $\mu$. This is to be subtracted from the total expected utility. We assume that both types of player have the same cost function.

In this modified setup each player chooses the accuracy of his signal before receiving it, so we have to deal with his expected utility at the very beginning of the game. Notice that player $i_t$ at t = 2 ignores even the signal he himself will receive, and also his rival's. For convenience, let $s = (s_1, s_2)$ and $m = (m_1, m_2)$. Then player $i_t$'s expected utility at t = 2 is

$$eu_i^t(\mu_i^t \mid \lambda_j) = \sum_{\omega\in I}\sum_{t_j\in T}\sum_{s\in I^2}\sum_{m\in I^2} u_i^t\big(a_i^t, a_j^{t_j} \mid \omega\big)\Pr(m \mid s, t_j)\Pr(s \mid \omega)\Pr(t_j)\Pr(\omega) \;-\; c(\mu_i^t; x) \qquad (11)$$

with $a_i^t = a_i^t(s_i, m_j)$ and $a_j^{t_j} = a_j^{t_j}(s_j, m_i)$, where $m_j = m_j(s_j, t_j)$. Notice the difference between this expression and the expression for the expected utility in (6); now player $i$ does not know the signal he will receive from nature.

In order to solve the problem of information acquisition it will be useful to decompose the expected utility into private and external expected utility. Thus the private expected utility, gross of the cost, is

$$eu_i^t(\mu_i^t \mid \lambda_j) = \sum_{\omega\in I}\sum_{t_j\in T}\sum_{s\in I^2}\sum_{m_j\in I} \bar u_i^t\big(a_i^t \mid \omega\big)\Pr(m_j \mid s_j, t_j)\Pr(s \mid \omega)\Pr(t_j)\Pr(\omega)$$
$$= \sum_{\omega\in I}\Big(\lambda_j\sum_{s\in I^2}\bar u_i^t\big(a_i^t \mid \omega\big)\Pr(s \mid \omega) + (1 - \lambda_j)\sum_{s\in I^2}\bar u_i^t\big(\dot a_i^t \mid \omega\big)\Pr(s \mid \omega)\Big)\Pr(\omega)$$

with the following notation used for brevity: $a_i^t = a_i^t(s_i, m_j \mid t_j = c)$ and $\dot a_i^t = a_i^t(s_i, m_j \mid t_j = a)$. When we focus on the private expected utility, it is useful to separate the impact of the actions of a competitive and of an altruist rival: typically, the altruist's action aims at helping and the competitive's at harming.

And the external expected utility, gross of the cost, is

$$\widetilde{eu}_i^t(\mu_i^t \mid \lambda_j) = \sum_{\omega\in I}\sum_{t_j\in T}\sum_{s\in I^2}\sum_{m_i\in I} \tilde u_i^t\big(a_j^{t_j} \mid \omega\big)\Pr(m_i \mid s_i)\Pr(s \mid \omega)\Pr(t_j)\Pr(\omega)$$
$$= \sum_{\omega\in I}\sum_{t_j\in T}\sum_{s\in I^2}\Big(\sigma_i^t(s_i)\,\tilde u_i^t\big(a_j^{t_j} \mid \omega\big) + \big(1 - \sigma_i^t(s_i)\big)\,\tilde u_i^t\big(\dot a_j^{t_j} \mid \omega\big)\Big)\Pr(s \mid \omega)\Pr(t_j)\Pr(\omega)$$

with the following notation used for brevity: $a_j^{t_j} = a_j^{t_j}(s_j, 1)$ and $\dot a_j^{t_j} = a_j^{t_j}(s_j, 0)$. When we deal with the external expected utility, the useful separation is motivated by player $i$'s own strategies.
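For concreteness, the decomposition can be evaluated by brute-force enumeration. The sketch below assumes the quadratic-loss specification, actions equal to posterior beliefs (Lemma 1), and the informative communication strategies of section 2; the function names and parameter values are mine, chosen for illustration, and this is not the paper's own computational procedure.

```python
from itertools import product

def post(s, m, mu_own, mu_msg):
    """Posterior Pr(omega=1 | signal s, message m), uniform prior."""
    l1 = (mu_own if s else 1 - mu_own) * (mu_msg if m else 1 - mu_msg)
    l0 = (1 - mu_own if s else mu_own) * (1 - mu_msg if m else mu_msg)
    return l1 / (l1 + l0)

def expected_utilities(t_i, mu_i, mu_j, lam_i, lam_j):
    """Gross private and external expected utility of player i of type t_i,
    enumerating states, rival types and both signals (messages are pure:
    altruists truthful, competitives inverting)."""
    mubar_i = lam_i * (1 - mu_i) + (1 - lam_i) * mu_i   # effective msg accuracy
    mubar_j = lam_j * (1 - mu_j) + (1 - lam_j) * mu_j
    eu_priv = eu_ext = 0.0
    for w, t_j, s_i, s_j in product((0, 1), ("c", "a"), (0, 1), (0, 1)):
        p = 0.5 * (lam_j if t_j == "c" else 1 - lam_j) \
                * (mu_i if s_i == w else 1 - mu_i) \
                * (mu_j if s_j == w else 1 - mu_j)
        m_i = s_i if t_i == "a" else 1 - s_i
        m_j = s_j if t_j == "a" else 1 - s_j
        a_i = post(s_i, m_j, mu_i, mubar_j)             # Lemma 1: action = belief
        a_j = post(s_j, m_i, mu_j, mubar_i)
        eu_priv += p * -((a_i - w) ** 2)                # private quadratic loss
        sign = 1.0 if t_i == "c" else -1.0              # competitive enjoys j's loss
        eu_ext += p * sign * ((a_j - w) ** 2)
    return eu_priv, eu_ext

print(expected_utilities("a", 0.8, 0.8, 0.2, 0.2))
print(expected_utilities("c", 0.8, 0.8, 0.2, 0.2))
```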

3.2 Equilibrium

We turn now to the definition of equilibrium for the whole game. The game takes as parameters the distribution over types for both players, $\lambda = (\lambda_1, \lambda_2)$, and the cost-intensity parameter $x$. Each player knows his type and ignores the other's. Again, the underlying equilibrium concept is Perfect Bayesian Equilibrium, and the logic behind this concrete formulation is analogous to the previous one: independently of the events that are outside the player's control, each part of a player's strategy has to maximize his expected utility. Any equilibrium must now specify, apart from the talking, listening and action strategies of the previous part, an information acquisition strategy. We will see this in more detail later on, but note that the game is now basically 'between players' 1 and 2, rather than 'between types' as was the case in the previous section.


Definition 4 $(\mu, \sigma, a, \beta)$ is an equilibrium of the game $G(\lambda; x)$ if:

1. each player's choice of $\mu$ maximizes his expected utility, given the beliefs $\beta$ and given the other player's choice of $\mu$, see (11),

2. each researcher's message $m$ maximizes his expected utility, given the beliefs, for each possible signal he may receive, see (6),

3. each researcher's action $a$ maximizes his expected utility, given the beliefs, for any possible signal and message he may receive, see (7),

4. all beliefs are derived according to the Bayesian updating rules, see (5).

Before proceeding with the analysis of the information acquisition decisions made at the beginning of the game, we shall check that the results already obtained still hold when the accuracies of the signals can be chosen. First, Lemma 1 is still valid in this enlarged context. That is, in any equilibrium we continue to have that

$$a_i^t = \beta_i^t(s_i, m_j).$$

Notice that its proof does not depend on whether accuracies are the same. The only qualification is that, while in the previous part both types of a player had the same beliefs (since the accuracy of their respective signals was the same), now this is not the case, and therefore both types would take, in principle, different actions; i.e. since $\mu_i^c \neq \mu_i^a$, then $\beta_i^c(s_i, m_j) \neq \beta_i^a(s_i, m_j)$. Lemma 2 is also valid. As before, we are interested in studying the consequences of non-babbling behaviour; in fact, for the analysis carried out in this section to be meaningful it is crucial to focus only on non-babbling equilibria.

Restricting attention to non-babbling equilibria, then, propositions 1 and 2 remain valid. We simply have to take into account that the condition for non-informative messages in (9) is replaced by the somewhat more general version

$$\sigma_i^c(0) - \sigma_i^c(1) = \left(\frac{1 - \lambda_i}{\lambda_i}\right)\left(\frac{2\mu_i^a - 1}{2\mu_i^c - 1}\right)\big(\sigma_i^a(1) - \sigma_i^a(0)\big), \qquad (12)$$

which, as before, comes from imposing $R(m_i = 0) = R(m_i = 1) = 1/2$. What needs to be done to ensure that both propositions hold is to show that this condition is equivalent, in equilibrium, to the original condition (9). For that purpose, and in order to solve the problem of information acquisition, we now turn to characterizing the choices of $\mu_i^a$ and $\mu_i^c$.
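Anticipating the first part of Lemma 4, it is worth recording the simple observation that makes the two conditions coincide; this is just the substitution spelled out, not an additional result:

```latex
% If both types choose the same accuracy, mu_i^c = mu_i^a = mu_i > 1/2,
% the accuracy ratio in (12) collapses to one and (12) reduces to (9):
\mu_i^c = \mu_i^a = \mu_i
\;\Longrightarrow\;
\frac{2\mu_i^a - 1}{2\mu_i^c - 1} = 1
\;\Longrightarrow\;
\sigma_i^c(0) - \sigma_i^c(1)
   = \frac{1-\lambda_i}{\lambda_i}\,\bigl(\sigma_i^a(1) - \sigma_i^a(0)\bigr).
```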

Notice that $\mu_i^t$ has two types of effects on the expected utility of player $i_t$. First, it modifies the beliefs of both players, himself and the rival, since $\beta_i^t = \beta_i^t(\mu_i^t, \mu_j)$ for $i = 1, 2$, $j \neq i$. This has consequences for his private expected utility through $\beta_i^t$ and for his external expected utility through $\beta_j$. Second, a change in $\mu_i^t$ changes the probability distribution over the signals that player $i_t$ receives (a higher $\mu_i^t$ makes the signal $s_i$ more likely to be correct). An increase in this probability has an impact on his private utility (positive) and on his external utility (positive if he is altruistic, negative if he is competitive). In other words, the two main effects of increasing the accuracy are that (a) beliefs (and thus actions) are more accurate, and (b) the 'right' signal appears more often, i.e. the signal's distribution is 'better'. For brevity, let $\mu_i = (\mu_i^c, \mu_i^a)$.

The conjectures $(\hat\mu_1^a, \hat\mu_1^c, \hat\mu_2^a, \hat\mu_2^c)$ will form part of an equilibrium if, for given parameters $(\lambda_1, \lambda_2)$, they satisfy the following:

$$\hat\mu_i^t \in \arg\max_{\mu_i^t}\; eu_i^t\big(\beta_i^t(\mu_i^t, \hat\mu_j), \rho_i^t(\mu_i^t, \hat\mu_j)\big) + \widetilde{eu}_i^t\big(\beta_j((\hat\mu_i^t, \hat\mu_i^k), \hat\mu_j), \rho_j(\mu_i^t, \mu_j), \hat\mu_j\big) - c(\mu_i^t; x),$$
$$i, j \in \{1, 2\},\; i \neq j, \quad\text{and}\quad t, k \in T,\; t \neq k, \qquad (13)$$


where, with loose notation, $\beta$ represents the beliefs and $\rho$ the signal's distribution.

The conditions that determine whether a particular system of conjectures $(\hat\mu_1^a, \hat\mu_1^c, \hat\mu_2^a, \hat\mu_2^c)$ is an equilibrium deserve some explanation. Basically, $\hat\mu_i^t$, the conjecture about player $i_t$, will be part of an equilibrium if, holding fixed the belief player $j$ has about him, player $i_t$ indeed finds it optimal to choose $\mu_i^t = \hat\mu_i^t$. But notice that this requires a different treatment in the private and in the external utility. In the private utility, when player $i_t$ decides his investment in accuracy, he needs to take into account that this decision modifies both his own beliefs $\beta_i^t$ and the distribution over the signals he will receive, $\rho_i^t$. Instead, the impact of $\mu_i^t$ on the external utility modifies only the distribution with which player $j$ receives the messages, $\rho_j$, but not the beliefs $\beta_j$ themselves, which depend on $j$'s conjecture $\hat\mu_i$, and not on $\mu_i$ directly.

We now establish two properties of the problem that allow a considerable simplification of its solution.

Lemma 4 In any game $G(\lambda; x)$, we have that

1. Both types of each player choose the same accuracy in equilibrium. That is,

$$\mu_1^c = \mu_1^a = \mu_1 \quad\text{and}\quad \mu_2^c = \mu_2^a = \mu_2,$$

for any cost function $c(\mu; x)$ equal for both types.

2. The gross (without counting the cost) marginal expected utilities are

$$\partial eu_i / \partial \mu_i = \bar\mu_j\big(\beta_i(1,0)^2 - \beta_i(0,0)^2\big) + (1 - \bar\mu_j)\big(\beta_i(1,1)^2 - \beta_i(0,1)^2\big) \qquad (14)$$

$$\partial \widetilde{eu}_i / \partial \mu_i = \bar\mu_j\big(\beta_j(0,1)^2 - \beta_j(0,0)^2\big) + (1 - \bar\mu_j)\big(\beta_j(1,1)^2 - \beta_j(1,0)^2\big) \qquad (15)$$

with $\bar\mu_j = \lambda_j(1 - \mu_j) + (1 - \lambda_j)\mu_j$. We denote $meu_i \equiv \partial eu_i / \partial \mu_i$ and $\widetilde{meu}_i \equiv \partial \widetilde{eu}_i / \partial \mu_i$.

The first part of Lemma 4 is due to the fact that both types obtain the same marginal revenue from $\mu$, as the second part also makes clear. This, together with the assumption of identical marginal costs for both types, delivers the result directly. The reason why both types obtain the same marginal expected utility from investing in $\mu$ is simple. For a given prior belief about player $i$'s type, a larger $\mu_i$ makes messages $m_i$ more reliable, and therefore player $j$'s action responds more strongly to different messages $m_i$. This has exactly the same value for both types, though for opposing reasons: the altruist type values his information being taken into account because it allows him to help $j$, and the competitive values his information being taken into account because in this way the rival $j$ chooses an action which is further from the truth. Using the first part of the lemma, the general formulation of (13) simplifies to

$$\hat\mu_1 \in \arg\max_{\mu_1}\big(eu_1(\mu_1, \hat\mu_2) + \widetilde{eu}_1(\mu_1, \hat\mu_1, \hat\mu_2) - c(\mu_1; x)\big)$$
$$\hat\mu_2 \in \arg\max_{\mu_2}\big(eu_2(\hat\mu_1, \mu_2) + \widetilde{eu}_2(\hat\mu_1, \mu_2, \hat\mu_2) - c(\mu_2; x)\big) \qquad (16)$$

Another direct consequence of the fact that both types choose the same accuracy is that the condition in (12) simplifies to the original version in (9). Then we can apply the logic of propositions 1 and 2, and their results hold. If we apply those results in the present setup, in which the parameters of the game are only the distribution over types, we find that games can be divided into three groups depending on $\lambda_1$ and $\lambda_2$. The reason for introducing this distinction is that the behaviour of both types differs notably across groups.


Definition 5 The game $G(\lambda; x)$ is of

1. Type I, if both messages are informative, i.e. if $\lambda_1, \lambda_2 < 1/2$;

2. Type II.i, if message $i \in \{1, 2\}$ is informative and message $j \neq i$ is not, i.e. $\lambda_i < 1/2$, $\lambda_j > 1/2$;

3. Type III, if neither message is informative, i.e. $\lambda_1, \lambda_2 > 1/2$.

The second part of Lemma 4 clarifies the first part somewhat and can be interpreted easily. We can understand the marginal expected utility as a measure of the value of information, i.e. the return to investing in acquiring information. Recall that in our setup total expected utility can be decomposed into private and external. For the private utility, the value of information is the value that comes from receiving signal 0 or 1, that is, how much the player learns from signals alone: keeping the message fixed, how different the beliefs, and actions, are if he receives signal 0 or signal 1 from nature. Instead, for the external utility, the value of information is the value that comes from receiving message 0 or 1: holding the signal fixed, how the beliefs change depending on the message heard from the rival.

Finally, it is worth noting that the expressions (14) and (15) simplify in games of type II.i, II.j and III. Notice that in type III games, when neither message is informative, we have $\beta_i(0,0) = \beta_i(0,1) = 1 - \mu_i$ and $\beta_i(1,1) = \beta_i(1,0) = \mu_i$ for $i = 1, 2$. This yields $meu_i = 2\mu_i - 1$ and $\widetilde{meu}_i = 0$. Instead, in a type II.i game (where message $m_i$ is informative and $m_j$ is not) we have only that $meu_i = 2\mu_i - 1$, while the expression for $\widetilde{meu}_i$ cannot be simplified further and remains as in (15). On the other hand, in type II.j games it is $meu_i$ that remains as in (14), while $\widetilde{meu}_i = 0$. One can check easily that, summing the private and external parts, all this amounts to

$$meu_i^{II.i} + \widetilde{meu}_i^{II.i} \;>\; meu_i^{I} + \widetilde{meu}_i^{I} \;>\; meu_i^{III} + \widetilde{meu}_i^{III}, \qquad \forall (\mu_i, \mu_j) \in (1/2, 1)^2. \qquad (17)$$

The conclusion we can draw from this is important: it gives us a measure of the value of information in each of the three cases, crucial for the analysis of the next section. We see that, for both types of player $i$, the return to investing in $\mu_i$ is highest in games of type II.i, i.e. when his is the unique informative message; the second most valuable investment is the one made in games of type I, when both messages are informative; and finally the least valuable investment, in relative terms, is the one corresponding to type III games, i.e. games in which no message is informative.
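A small numerical sketch can make the ranking in (17) concrete. It implements the closed forms (14) and (15) under the equilibrium belief systems of each game type (uninformative messages collapse the corresponding beliefs to the prior); the parameter values and function names are illustrative assumptions.

```python
def post(s, m, mu_own, mu_msg):
    """Posterior Pr(omega=1 | signal s, message m), uniform prior over states."""
    l1 = (mu_own if s else 1 - mu_own) * (mu_msg if m else 1 - mu_msg)
    l0 = (1 - mu_own if s else mu_own) * (1 - mu_msg if m else mu_msg)
    return l1 / (l1 + l0)

def marginal_utilities(mu_i, mu_j, lam_i, lam_j):
    """Gross marginal expected utilities (14)-(15) for player i.
    An uninformative message enters the posterior with accuracy 1/2."""
    acc_i = lam_i * (1 - mu_i) + (1 - lam_i) * mu_i if lam_i < 0.5 else 0.5
    acc_j = lam_j * (1 - mu_j) + (1 - lam_j) * mu_j if lam_j < 0.5 else 0.5
    b_i = {(s, m): post(s, m, mu_i, acc_j) for s in (0, 1) for m in (0, 1)}
    b_j = {(s, m): post(s, m, mu_j, acc_i) for s in (0, 1) for m in (0, 1)}
    w = lam_j * (1 - mu_j) + (1 - lam_j) * mu_j      # weight mu_bar_j in (14)-(15)
    meu = w * (b_i[1, 0] ** 2 - b_i[0, 0] ** 2) \
        + (1 - w) * (b_i[1, 1] ** 2 - b_i[0, 1] ** 2)
    gmeu = w * (b_j[0, 1] ** 2 - b_j[0, 0] ** 2) \
         + (1 - w) * (b_j[1, 1] ** 2 - b_j[1, 0] ** 2)
    return meu, gmeu

mu_i = mu_j = 0.8
for name, (li, lj) in {"II.i": (0.25, 0.75), "I": (0.25, 0.25),
                       "III": (0.75, 0.75)}.items():
    meu, gmeu = marginal_utilities(mu_i, mu_j, li, lj)
    print(f"type {name:4s}: meu={meu:.3f}  gmeu={gmeu:.3f}  total={meu + gmeu:.3f}")
```

With these values the totals come out ranked exactly as in (17): type II.i above type I above type III.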

The rest of this section is devoted to studying the properties of this problem in an attempt to characterize its solution. We will establish the following results about the gross marginal utilities (without counting the cost) and interpret them; with this done, we can characterize the solution to the system (16) above depending on the initial likelihoods of each type, $\lambda_1$ and $\lambda_2$.

Lemma 5 The gross marginal expected utility $meu_i + \widetilde{meu}_i$ has the following properties (with strict sign in the case of games of type I):

1. $\partial(meu_i + \widetilde{meu}_i)/\partial\lambda_i \le 0$;

2. $\partial(meu_i + \widetilde{meu}_i)/\partial\mu_j \le 0$;

3. $\partial(meu_j + \widetilde{meu}_j)/\partial\lambda_i \ge 0$.


The first part of Lemma 5 establishes that the marginal utility of player 1 is decreasing in player 1's likelihood of being competitive. The reason for this is to be found in the listening behaviour of his rival, player 2. From the analysis in the previous section, player 2 knows that a higher likelihood of player 1 being competitive means that he can expect less information to be transmitted in the communication phase or, more specifically, that the information transmitted is less reliable. In the limit, we already know that if $\lambda_1 > 1/2$, player 2 can expect to learn nothing from player 1's messages. This listening behaviour is taken into account by player 1. Since player 2 pays less and less attention to a rival who is perceived to be competitive with high probability, player 1's possibilities of influencing player 2's beliefs become scarce and therefore, for both types of player 1, the value of exerting effort diminishes as well.

The second part of Lemma 5 also plays a central role in our results. Basically, it means that the two accuracies are strategic substitutes. Payoffs are determined by actions, which coincide with beliefs. The beliefs of player 1 depend on his accuracy $\mu_1$, on the accuracy of the rival $\mu_2$, and on the parameter $\lambda_2$ that determines the distribution over types of player 2 and allows player 1 to make inferences. These beliefs are finer (closer to the truth) the larger $\mu_1$ and $\mu_2$ are and the smaller $\lambda_2$ is. This would be formalized by saying that $\partial eu_i/\partial\mu_i > 0$, $\partial eu_i/\partial\mu_j > 0$ and $\partial eu_i/\partial\lambda_j < 0$. But the claim says something more: the marginal contribution of $\mu_1$ to these beliefs is smaller the larger $\mu_2$ is. That is, if $\mu_2$ is low, the marginal value for player 1 of making an effort and increasing $\mu_1$ is very large. This property stems from the fact that we use Bayes' rule to update beliefs, and it is therefore quite general and robust.

The third part is closely related to the fact that the two accuracies are strategic substitutes. The simple way to understand it is that $\mu_2$ and $\lambda_2$ play opposite roles in player 1's marginal utility. An increase in $\mu_2$ increases the effective accuracy that player 1 can count on, $\bar\mu_2 = \mu_2(1 - \lambda_2) + (1 - \mu_2)\lambda_2$, while an increase in $\lambda_2$ produces the opposite effect: it decreases the effective accuracy.

The first and third parts are represented in figure 5, which plots the best-reply functions of both players under two assumptions about $\lambda$. The bold lines, $BR_1$ and $BR_2$, represent the first situation, in which $\lambda_1 = \lambda_2 = 1/10$; it is clearly symmetric. The lighter lines, $BR_1^*$ and $BR_2^*$, correspond to the case in which $\lambda_1 > \lambda_2$. The first part of the lemma can be seen in the change from $BR_1$ to $BR_1^*$: for any $\mu_2$, we have $BR_1 > BR_1^*$. The third part of the lemma is seen in the change from $BR_2$ to $BR_2^*$: for any $\mu_1$, we have $BR_2^* > BR_2$.

Before characterizing the solution, we should mention a particular property of the problem: the existence of multiple equilibria. The issue is treated in detail in Appendix C; here we only present the main points and intuitions. The existence of multiple equilibria can be bounded in terms of the two parameters of the game: multiple equilibria are more likely to exist if the cost of acquiring information is relatively low and if $\lambda_1$ and $\lambda_2$ are relatively low and similar to each other. The explanation why these conditions favour multiplicity is simple. If both $\lambda_1$ and $\lambda_2$ are similar and small, it is likely that my rival is an altruist, so that I can rely on his messages as if they were my own, in the same way that he can rely on mine; this is favoured when the cost of acquiring information is not too high. A bit more formally, we define the set $U(x)$ as the set of $\{\lambda_1, \lambda_2\}$ for which the equilibrium $(\mu_1, \mu_2)$ of the game $G(\lambda; x)$ is unique. Based on that definition, one finds the cost level $\bar x$, characterized by $U(\bar x)$ with $\lambda_1 = \lambda_2 = 0$, to be the maximum cost level that produces multiplicity, above which there is uniqueness for any other $(\lambda_1, \lambda_2) \in (0, 1)^2$.

As we said above, we restrict our analysis to non-trivial problems, that is to say, to relatively high cost intensities. In particular, as stated when we introduced the cost function, we assume that $x$ is large enough to guarantee pure-strategy, interior solutions for any type of game, i.e. $x > \underline{x}$. Then we can establish the equilibrium behaviour in the information-gathering phase.

Proposition 3 Depending on the type of the game $G(\lambda; x)$, the equilibrium is characterized in the following way.

1. In games of type I, if $\{\lambda_1, \lambda_2\} \in U(x)$, the equilibrium $(\mu_1, \mu_2)$ is unique and satisfies

$$\lambda_i > \lambda_j \;\Leftrightarrow\; \mu_j^I > \mu_i^I,$$

while if $\{\lambda_1, \lambda_2\} \notin U(x)$ there are 3 equilibria, $\{(\mu_{1.n}, \mu_{2.n})\}_{n=1,2,3}$.

2. In games of type II.i, the equilibrium $(\mu_i, \mu_j)$ is unique and satisfies

$$\mu_i^{II.i} > \mu_j^{II.i}.$$

3. In games of type III, the equilibrium $(\mu_1, \mu_2)$ is unique and satisfies

$$\mu_1^{III} = \mu_2^{III}.$$

4. Further, we have that

$$\mu_i^{II.i} > \mu_i^{I} > \mu_i^{III}.$$

The general rule behind proposition 3 is that altruism and effort go together: the player with the highest a priori probability of being altruistic is the one who makes the larger effort to acquire information. The basic justification is that the messages sent by a player who is believed to be altruistic with high probability are highly respected and used. This, in turn, and indeed independently of the actual type of such a player, increases his relative incentives to acquire information because of the efficiency of his 'external use' of information. Finally, part 4 of the proposition just reflects the comparison of the marginal revenue from investing across the different types of game.
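To make the fixed-point problem (16) concrete, here is a minimal best-reply iteration sketch. The function names (`total_marginal`, `best_reply`), the bisection tolerances and all parameter values are mine; it assumes the closed forms (14)-(15), the cubic cost $c(\mu; x) = x(\mu - 1/2)^3$ that the paper adopts in section 4.1, and that $x$ is large enough for an interior solution.

```python
def post(s, m, mu_own, mu_msg):
    """Posterior Pr(omega=1 | signal s, message m), uniform prior."""
    l1 = (mu_own if s else 1 - mu_own) * (mu_msg if m else 1 - mu_msg)
    l0 = (1 - mu_own if s else mu_own) * (1 - mu_msg if m else mu_msg)
    return l1 / (l1 + l0)

def total_marginal(mu_i, mu_j, lam_i, lam_j):
    """meu + gmeu per (14)-(15); uninformative messages enter with accuracy 1/2."""
    acc_i = lam_i * (1 - mu_i) + (1 - lam_i) * mu_i if lam_i < 0.5 else 0.5
    acc_j = lam_j * (1 - mu_j) + (1 - lam_j) * mu_j if lam_j < 0.5 else 0.5
    b_i = {(s, m): post(s, m, mu_i, acc_j) for s in (0, 1) for m in (0, 1)}
    b_j = {(s, m): post(s, m, mu_j, acc_i) for s in (0, 1) for m in (0, 1)}
    w = lam_j * (1 - mu_j) + (1 - lam_j) * mu_j
    return (w * (b_i[1, 0]**2 - b_i[0, 0]**2) + (1 - w) * (b_i[1, 1]**2 - b_i[0, 1]**2)
          + w * (b_j[0, 1]**2 - b_j[0, 0]**2) + (1 - w) * (b_j[1, 1]**2 - b_j[1, 0]**2))

def best_reply(mu_j, lam_i, lam_j, x):
    """Bisect the first-order condition: marginal benefit = 3x(mu-1/2)^2."""
    lo, hi = 0.5 + 1e-9, 1.0 - 1e-9
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        foc = total_marginal(mid, mu_j, lam_i, lam_j) - 3 * x * (mid - 0.5) ** 2
        lo, hi = (mid, hi) if foc > 0 else (lo, mid)
    return lo

lam, x = (0.1, 0.3), 3.0      # a type I game with lam_1 < lam_2 (assumed values)
mu = [0.75, 0.75]
for _ in range(100):          # best-reply iteration from a symmetric start
    mu = [best_reply(mu[1], lam[0], lam[1], x),
          best_reply(mu[0], lam[1], lam[0], x)]
print([round(m, 4) for m in mu])  # lam_1 < lam_2 should yield mu_1 > mu_2
```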

4 assessment of equilibrium outcomes

The preceding sections have investigated the interactions between the two players depending on the initial distribution over types. Basically, we have been able to find a relationship between individuals' prior beliefs about each other's preferences and their equilibrium communication and information acquisition choices. The assessment of the impact of different preferences on the information communicated in equilibrium has already been done at the end of section 2, with a clear conclusion: the more pervasive altruism is, the more informative communication is. In other words, increasing the a priori likelihood that one of the two players is competitive never increases the information transmitted in equilibrium. That analysis was done taking the accuracies as given.

Can we establish the analogous result for the information acquisition phase, namely that as altruism becomes more pervasive, information acquisition efforts are higher? The analysis of the impact of the distributions over types on the information acquisition decisions is more subtle. Basically, the presumed presence of a competitive rival may lead the other researcher (regardless of his actual type) to make a larger effort than if he were sure his fellow was an altruist.

In this section we assess the properties of the equilibrium outcomes. First, focusing on the information acquisition decisions, we look for the environment that leads to maximum information acquisition; the rationale is that there is an objective function to be maximized by choosing the initial distributions over types. Second, we try to relate each possible type of player, competitive and altruistic, to his preference for public or private information, i.e. whether each type would prefer to be in a situation where types are known or, instead, in the situation we have been studying throughout the paper, with uncertainty about types.

4.1 Optimal Uncertainty

Suppose that both researchers offer their advice to a third party, concerned only with obtaining the best possible prediction of the true state, regardless of which particular researcher is responsible for the prediction. The third party (TP) observes both actions (or estimates), $a_1$ and $a_2$, and knows both distributions over types, $\lambda_1$ and $\lambda_2$. The Bernoulli utility function of the TP can be modelled naturally as

$$v(a \mid \omega) = -(a - \omega)^2,$$

which coincides with the private utility of each of the two possible types. This, in turn, allows an alternative interpretation of the TP as a fictitious new type of researcher who has no regard for the other's success, i.e. who does not internalize the consequences of the other's action and cares only about his own performance.

The TP faces the problem of choosing an action, $a_1$ or $a_2$, knowing the distributions $\lambda_1$ and $\lambda_2$ but ignoring each researcher's accuracy, $\mu_1$ and $\mu_2$. He therefore compares the expected utility from choosing $a_1$, $ev(a_1)$, with the expected utility from choosing $a_2$, $ev(a_2)$, and chooses accordingly, i.e.

$$a_i \succsim a_j \;\Leftrightarrow\; ev(a_i) \ge ev(a_j).$$

Our approach here is to identify the initial distributions $(\lambda_1, \lambda_2)$ that maximize the TP's expected utility when the cost of acquiring information is known to be $x$. Since the TP can predict the information acquisition choices of both agents as functions of $(\lambda_1, \lambda_2)$ and $x$, the problem is well defined even though he ignores the true $\mu_1$ and $\mu_2$. Notice that since the TP is not paying the information acquisition costs, we do not incorporate them into the objective function to be maximized. Therefore we define the social welfare function, dependent on the a priori distribution, as

$$W(\lambda_1, \lambda_2) = \max_{i \in \{1,2\}}\{E(v(a_i \mid \omega))\} = \max_{i \in \{1,2\}}\{E(\bar u_i(a_i \mid \omega))\} = \max_{i \in \{1,2\}}\Big\{\sum_{\omega\in I}\sum_{(s_i, m_j)\in I^2} \bar u_i\big(a_i(s_i, m_j) \mid \omega\big)\Pr(s_i, m_j \mid \omega)\Pr(\omega)\Big\},$$

i.e. it consists in minimizing the expected loss from adopting recommendation $a_1$ or $a_2$. It can easily be shown that one can rewrite the above expression as

$$W(\lambda_1, \lambda_2) = \max\{U_1, U_2\} \qquad (18)$$

with

$$U_i = -\begin{pmatrix} \mu_i\bar\mu_j \\ \mu_i(1 - \bar\mu_j) \\ (1 - \mu_i)\bar\mu_j \\ (1 - \mu_i)(1 - \bar\mu_j) \end{pmatrix}' \begin{pmatrix} \beta_i(0,0)^2 \\ \beta_i(0,1)^2 \\ \beta_i(1,0)^2 \\ \beta_i(1,1)^2 \end{pmatrix}$$

where $\bar\mu_i = \lambda_i(1 - \mu_i) + (1 - \lambda_i)\mu_i$. That is, the TP chooses the action that produces the higher expected utility.
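The welfare comparison is mechanical once the beliefs are in place. The following sketch evaluates the (negative expected squared error) expression for $U_i$ above at given accuracies; treating those accuracies as the equilibrium values for each $(\lambda_1, \lambda_2)$, as the paper does, would simply mean composing this with the best-reply computation sketched earlier. The function name `U` and all parameter values are illustrative assumptions.

```python
def post(s, m, mu_own, mu_msg):
    """Posterior Pr(omega=1 | signal s, message m), uniform prior."""
    l1 = (mu_own if s else 1 - mu_own) * (mu_msg if m else 1 - mu_msg)
    l0 = (1 - mu_own if s else mu_own) * (1 - mu_msg if m else mu_msg)
    return l1 / (l1 + l0)

def U(mu_i, mu_j, lam_j):
    """Negative expected squared error of researcher i's estimate, per (18)."""
    mubar_j = lam_j * (1 - mu_j) + (1 - lam_j) * mu_j
    b = {(s, m): post(s, m, mu_i, mubar_j) for s in (0, 1) for m in (0, 1)}
    w = (mu_i * mubar_j, mu_i * (1 - mubar_j),
         (1 - mu_i) * mubar_j, (1 - mu_i) * (1 - mubar_j))
    e = (b[0, 0] ** 2, b[0, 1] ** 2, b[1, 0] ** 2, b[1, 1] ** 2)
    return -sum(wk * ek for wk, ek in zip(w, e))

mu = (0.8, 0.75)     # accuracies, here taken as given (assumed values)
lam = (0.0, 0.3)     # prior probabilities of being competitive (assumed)
W = max(U(mu[0], mu[1], lam[1]), U(mu[1], mu[0], lam[0]))
print(round(W, 4))   # TP's welfare: the better of the two estimates
```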

Finally, to solve numerically for the equilibrium in order to obtain welfare properties, we need to assume a specific cost function. The main concern guiding our choice is to ensure that we face a meaningful and tractable problem; by that we mean that we prefer to restrict ourselves to interior, pure-strategy solutions of the information acquisition problem. We aim at finding a cost specification that produces a unique intersection for any pair of a priori distributions $(\lambda_1, \lambda_2) \in [0, 1]^2$. Further, we would like the cost function to be relatively simple, more specifically parametrized by a unique scaling factor with a multiplicative effect, i.e.

$$c(\mu; x) = x\,c(\mu).$$

Concerning the cost function itself, $c(\mu)$, it has to be convex enough to ensure that the marginal cost crosses the marginal revenue only once, for any game $G(\lambda; x)$, inside the region $\mu \in (1/2, 1)$. In particular, consider type III games, in which $\lambda_1, \lambda_2 > 1/2$. We discussed in the previous section how in this case $meu = 2\mu - 1$, which is linear in $\mu$. If we were to adopt a quadratic cost function, we would end up with a linear marginal cost and therefore, at least for this type of game, our problem would have solutions only at the boundaries. A natural step forward is to consider cubic total costs, which yield quadratic marginal costs. It turns out that this ensures interior solutions for all games, of type I, II or III, provided that the scaling parameter is above a critical value $\underline{x}$. Therefore we choose as cost function

$$c(\mu; x) = x(\mu - 1/2)^3, \qquad \forall\, x > \underline{x} = 4/3. \qquad (19)$$
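As a worked check of where the threshold $\underline{x} = 4/3$ comes from, consider a type III game, where the gross marginal benefit is $meu = 2\mu - 1$ and external incentives vanish; this is just the first-order condition spelled out:

```latex
% Type III first-order condition: marginal benefit = marginal cost.
2\mu - 1 \;=\; \frac{\partial}{\partial\mu}\, x\left(\mu - \tfrac12\right)^{3}
        \;=\; 3x\left(\mu - \tfrac12\right)^{2}
\;\Longrightarrow\;
2\left(\mu - \tfrac12\right) = 3x\left(\mu - \tfrac12\right)^{2}
\;\Longrightarrow\;
\mu^{III} = \tfrac12 + \frac{2}{3x}.
% Interiority (mu < 1) requires 2/(3x) < 1/2, i.e. x > 4/3, matching (19).
```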

The results appear in the following proposition.

Proposition 4 There exist cost intensities $x_a$ and $x_b$, with

$$\bar x < x_a < x_b,$$

such that:

1. for $x \in (\bar x, x_a]$, if $\lambda_i^* = 0$ and $\lambda_j^* > 1/2$, we have that

$$W(\lambda_i^*, \lambda_j^*) > W(\lambda_i, \lambda_j)$$

for any other $\lambda_i \neq 0$ or any $\lambda_j < 1/2$;

2. for $x \in (x_a, x_b]$, if $\lambda_i^* = 0$ and $\lambda_j^* = \hat\lambda(x) \in (0, 1/2)$, where $\hat\lambda'(x) < 0$, we have that

$$W(\lambda_i^*, \lambda_j^*) > W(\lambda_i, \lambda_j)$$

for any other $\lambda_i \neq 0$ or $\lambda_j \neq \lambda_j^*$;

3. for $x > x_b$,

$$W(0, 0) > W(\lambda_i, \lambda_j)$$

for any other $\lambda_i \neq 0$ or $\lambda_j \neq 0$.


The interpretation of our final proposition is simple: along the cost-intensity dimension there are three regions, each with different implications for the optimal a priori distributions over types. When research is very cheap, the optimal configuration of preferences entails that one researcher is completely altruistic and the other is sufficiently competitive as to make the former believe his messages are completely uninformative. Then one player's messages are fully reliable and the other's are not at all; the compensation effect exerted by the researcher who sends informative messages is large enough, and this enhances efficiency. In the next region, with moderate research costs, the same logic applies but to a limited extent: it is still socially optimal that one player is altruistic for sure, but now, due to the increased investigation costs, it is better that the other player is not so competitive, so that his messages are partially informative, although not too much. Finally, in the case of expensive research activities, efficiency requires avoiding entirely the distortions that competitive researchers create; society benefits more from two purely altruistic researchers.

The results of the above proposition appear in figures 6, 7 and 8. Figures 6 and 7 (the latter an enlargement of the former) represent the utility of the third party as a function of the two probability distributions $\lambda_1$ and $\lambda_2$. It can be shown that maximum information extraction occurs when one of the two $\lambda$'s is 0. Figure 8 depicts the optimal function $\hat\lambda(x)$ that appears in part 2 of the proposition.

4.2 Types and preference for public information

In the previous analysis we performed a somewhat arbitrary exercise by choosing the beliefs that induce maximum information extraction. In a sense there is a drawback: beliefs cannot be manipulated so easily. Our concern now is different: is it possible to establish a relationship between the type of a researcher (competitive or altruistic) and his preference for public or private information about types? For better exposition we distinguish two instances: the communication phase, and the whole game, which also includes the information acquisition phase.

4.2.1 Communication phase We are interested in the preference for public or private information that both types exhibit once the information acquisition decisions are already made, i.e. we take $\mu_1$ and $\mu_2$ as given. The competitive type knows that if his type is publicly known, the other player will pay no attention to him: his adjusted accuracy will be reduced to the minimum, $\bar\mu = 1/2$, and he cannot influence the other's action. Instead, if information is private, so that there is uncertainty about his actual type, the listening player will pay attention to him and the adjusted accuracy will be $\bar\mu > 1/2$. Recall that the best response of the listener, if he believes the speaker is competitive with probability $\lambda$, is to hold beliefs $\beta(0, 0) < \beta(0, 1)$ and $\beta(1, 1) > \beta(1, 0)$. But then, if the rival researcher pays attention to the messages, the competitive type has the chance to harm him; recall that those beliefs were derived by the rival under the assumption of imperfect information. Thus we conclude that the competitive type prefers information about his type to be private; in other words, he prefers to hide his type from the other.

The reasoning of the altruistic type is the opposite. If his type were public knowledge, the other player would pay full attention to him and would not discount any of the information; this would allow him to convey all his information, which is beneficial for him. If information were private, he could not convince the listener of his good intentions; as a consequence, the listener would discount useful information, which is bad for the altruist. Therefore the altruistic type is in favour of public information about his type.

Proposition 5 Assume accuracies $\mu_1$ and $\mu_2$ are already chosen. In the communication stage, with exogenous accuracies, we have that:

1. A competitive researcher prefers to hide his type, i.e. that his type remain private information;

2. An altruistic researcher prefers to announce his type, i.e. that his type become public information.

4.2.2 The whole game The argument above showing that the competitive type is in favour of hiding his type remains valid in the whole game, when we also consider information acquisition decisions. If his type were known, the competitive knows that the other player would pay no attention to him in the communication phase, and he could not harm him. Instead, if his type is not known, his information is taken into account, and this allows him to mislead the other.

The analysis of the informational regime preferred by the altruist is more subtle. On one hand, if his type is known, then he knows the other player will rely on him, and this will be useful to both. On the other hand, when we consider the information acquisition problem, a force in the opposite direction emerges: if there is uncertainty about his type, this may lead the other researcher to exert a higher effort to compensate for having a poor fellow. Whether the optimal setup is one with perfect (and truthful) information or one with imperfect information depends, of course, on the cost intensity $x$ and on the belief the altruist holds about the degree of altruism of his fellow.

Proposition 6 Considering the whole game, including the choice of accuracies, we have that:

1. A competitive player prefers to hide his type, i.e. that his type remain private information;

2. An altruistic player,

(a) if he believes that the other player is altruistic with a sufficiently high probability and the cost of acquiring information is low enough, prefers to hide his type, i.e. that his type remain private information;

(b) otherwise, prefers to announce his type, i.e. that his type become public information.

We summarize the results in the following table.

Preference for the nature of information

                                        Competitive                     Altruistic
  Communication                         Private                         Public
  The Whole Game                        Private, for all                Private, if $\lambda_j < \lambda_j^*$ and $x < x^*$;
  (Communication and Research)          $\lambda_j$ and all $x$         Public, otherwise


5 conclusion

This paper aims at understanding the impact of two researchers' mutual mistrust. Both researchers perform two operations in order to discover the value of an unknown parameter: first they make a costly investment to acquire information, and then they transmit it for free to the other. Finally, each chooses his best estimate of the parameter on the basis of the two pieces of information he has received. We allow only two polar cases of researcher types, friend and enemy, and we assume that each researcher ignores the type of his opponent during the whole game.

Holding information quality fixed, the presumed presence of a competitive type in the transmission phase is never good, because it makes communication less informative and thus inefficient. But the perversity of 'competition' may vanish by the time information acquisition decisions are made. The reason is, paradoxically, related to the inefficiency just mentioned: (i) a researcher discounts the information that comes from an opponent who is suspected of being, even slightly, competitive (section 2); (ii) on that basis, and given that research efforts are strategic substitutes, if one researcher believes there is a positive probability that his rival is competitive, he has larger marginal incentives to acquire information, in order to compensate for the information lost in the discounting he himself applies to the rival's information (section 3). Finally, depending on the cost intensity associated with a given research activity, social welfare (measured by the expected utility derived from the more accurate of the two predictions) may be maximized when one researcher is moderately suspected of being competitive (section 4).

Thus, the paper’s results suggest that, depending on how costly or ‘hard’ a given researchproject is, social preference over the private prior beliefs changes. In relatively cheap projects, orelementary research …elds, maximum con‡ict should be fostered: one researcher being completelyaltruist and the other being competitive enough so as to turn his messages empty of any infor-mation. In contrast, when dealing with a costly, delicate research the social priority is to havetwo completely altruistic researchers. And for cases with a moderate research cost, the optimalcon…guration is to have a completely altruistic researcher and the other with a strictly positiveprobability (depending on the cost) of being competitive.

This effect drives the other main conclusion of the paper: the relationship between types and the preference for public or private information. If a player is actually competitive, he prefers his competitive type to be private information, i.e. hidden from his fellow, whether we consider only the communication stage or the whole game. Instead, the preference of the altruistic type is different and more subtle. If we restrict attention to the communication stage, the altruist is always in favour of making his type public. However, when we consider the whole game, a player who is indeed altruistic but believes that the other player is altruistic with sufficiently large probability (and for whom the cost of acquiring information is sufficiently low) prefers to play in an incomplete-information environment, aiming at leading the other player to exert superior effort.

A promising extension would be to study a similar problem in an agency setup, explicitly modelling how a principal would design incentive contracts to foster efficient information acquisition. It would also be interesting to approach the conflict of goals differently, by considering that a 'good' scientist does not care so much about what he does, or what the other does, but rather about society's final decision. This would entail that the action a researcher takes is not necessarily his estimate of the expected value of the parameter, for he may choose it strategically, and not 'frankly' as in the present work.


Appendix A: Proofs.

Lemma 1. Suppose that player $i_t$ believes, given the information available to him, denoted $e_i^t = (s_i, m_j)$, that $\Pr(\omega = 1 \mid e_i^t) = p$ and therefore $\Pr(\omega = 0 \mid e_i^t) = 1 - p$. Notice that $p$ also coincides with the expected value of $\omega$ given $e_i^t$, i.e. $E(\omega \mid e_i^t) = p$. We use $p$ for brevity of notation, but $p$ stands for the belief $\beta_i^t(e_i^t)$. The expected utility of player $i_t$ follows the expression in (7), which can be written as

$$eu_i^t(a_i^t, a_j) = \lambda_j \begin{pmatrix} p\pi \\ p(1-\pi) \\ (1-p)\pi \\ (1-p)(1-\pi) \end{pmatrix}' \begin{pmatrix} u_i^t(a_i^t, a_j^c \mid 1) \\ u_i^t(a_i^t, b_j^c \mid 1) \\ u_i^t(a_i^t, a_j^c \mid 0) \\ u_i^t(a_i^t, b_j^c \mid 0) \end{pmatrix} + (1 - \lambda_j) \begin{pmatrix} p\pi \\ p(1-\pi) \\ (1-p)\pi \\ (1-p)(1-\pi) \end{pmatrix}' \begin{pmatrix} u_i^t(a_i^t, a_j^a \mid 1) \\ u_i^t(a_i^t, b_j^a \mid 1) \\ u_i^t(a_i^t, a_j^a \mid 0) \\ u_i^t(a_i^t, b_j^a \mid 0) \end{pmatrix} \qquad (20)$$

where $\pi = \mu_i^t\mu_j + (1 - \mu_i^t)(1 - \mu_j)$ is the probability that both players have received the same signal. Concerning the (possible) actions of the rival, $a_j^c = (a_j^c, b_j^c)$ and $a_j^a = (a_j^a, b_j^a)$, we denote by $a_j$ the action in case player $j$ received the same signal that $i$ received, and by $b_j$ the action in case he received the other signal, i.e. $a_j^t = a_j^t(s_i, m_i)$ and $b_j^t = a_j^t(1 - s_i, m_i)$. Notice that at this stage of the game (the last one) messages have already been sent, and therefore player $i_t$ knows the message that player $j$ has received: the only piece of evidence which is private information of player $j$ is precisely the signal he has received secretly from nature. Given that the utility function is additively separable in both actions, the solution to the maximization of the above (total) expected utility coincides with the solution to the simpler problem of (private) expected utility maximization, i.e.

$$a_i^t \in \arg\max_{a_i^t} eu_i^t(a_i^t, a_j) \;\Leftrightarrow\; a_i^t \in \arg\max_{a_i^t} eu_i^t(a_i^t).$$

Now, it follows from (20) that

$$eu_i^t(a_i^t) = p\,u_i^t(a_i^t \mid 1) + (1 - p)\,u_i^t(a_i^t \mid 0). \qquad (21)$$

We will allow somewhat more general preferences than those considered in the text: a general transformation of the quadratic functions, $\bar u_i^t(a_i^t \mid \omega) = -f((a_i^t - \omega)^2)$ and $\tilde u_i^c(a_j \mid \omega) = g((a_j - \omega)^2) = -\tilde u_i^a(a_j \mid \omega)$, so that

$$u_i^c(a_i^c, a_j \mid \omega) = -f\big((a_i^c - \omega)^2\big) + g\big((a_j - \omega)^2\big)$$
$$u_i^a(a_i^a, a_j \mid \omega) = -f\big((a_i^a - \omega)^2\big) - g\big((a_j - \omega)^2\big)$$

with the properties that $f(\cdot), g(\cdot) \ge 0$, both differentiable in $a_i^t$, strictly increasing in $(a - \omega)^2$ and strictly convex. Now, given those assumptions, expression (21) is differentiable in $a_i^t$ and strictly concave. Thus the solution to the maximization problem is found where the first-order condition holds:

$$p\,\frac{\partial \bar u(a_i^t \mid 1)}{\partial a_i^t} + (1 - p)\,\frac{\partial \bar u(a_i^t \mid 0)}{\partial a_i^t} = 0 \;\Leftrightarrow\; p\,\frac{\partial\big({-f((a_i^t - 1)^2)}\big)}{\partial a_i^t} + (1 - p)\,\frac{\partial\big({-f((a_i^t)^2)}\big)}{\partial a_i^t} = 0 \;\Leftrightarrow\; p\,f'(a_i^t - 1) + (1 - p)\,f'(a_i^t) = 0 \qquad (22)$$

where, with slight abuse of notation, $f'(a_i^t - \omega)$ denotes the derivative of $f((a_i^t - \omega)^2)$ with respect to $a_i^t$.

The claim to prove is that there exists a continuous, strictly increasing function $\alpha_f(p)$ solving equation (22). Continuity is guaranteed by the assumptions. To see that the function is increasing, consider a pair $(a^*, p^*)$ satisfying equation (22), where $a = \alpha_f(p)$. We know that $a \in [0, 1]$ and therefore $a > a - 1$; due to the convexity of $f$, this implies $f'(a) > f'(a - 1)$, and we conclude that $f'(a) > 0$ and $f'(a - 1) < 0$. Now consider a $p' > p$, and notice that the pair $(a, p')$ no longer fulfills (22), for that expression would now be strictly negative. The equality is restored with an $a' > a$, and recall that $a' = \alpha_f(p')$. Therefore we have $\alpha_f(p') > \alpha_f(p)$ iff $p' > p$. It is now straightforward to come back to the particular case of simple quadratic loss (in which $f$ and $g$ are the identity functions), where expression (22) becomes

$$p(a_i^t - 1) + (1 - p)a_i^t = 0 \;\Longleftrightarrow\; a_i^t = p.$$
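As a quick mechanical check of the quadratic-loss case, one can verify symbolically that the first-order condition indeed yields $a = p$; this is a throwaway verification aid, not part of the proof:

```python
import sympy as sp

a, p = sp.symbols("a p")
# Expected private utility under quadratic loss: p*(-(a-1)^2) + (1-p)*(-a^2)
eu = p * -(a - 1) ** 2 + (1 - p) * -(a ** 2)
foc = sp.diff(eu, a)                   # first-order condition
print(sp.solve(sp.Eq(foc, 0), a))      # -> [p]
print(sp.simplify(sp.diff(eu, a, 2)))  # -> -2 (strictly concave)
```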

Lemma 2. This is an immediate consequence of the definition of a babbling strategy, and a common feature of cheap-talk games. The message that each player sends influences neither the other player's beliefs nor his actions. Thus the posterior beliefs $\beta_i(s_i, m_j)$ depend only on the signal received and are identical to the prior beliefs $b(s_i)$. Therefore each player is indifferent among all messaging strategies, including the uninformative one he uses in equilibrium. No player sends informative messages; given this, no player pays attention to the other's messages; and given this, any messaging strategy would be optimal.

Proposition 1. The proof has to show the mutual consistency of the above strategies; for that purpose we divide it into several steps. In the first step we show that the communication strategy of player $i$ produces non-informative messages (which is indeed the key to the whole argument). Given this, we show in the next step that player $j$ is playing a best response by not updating his prior beliefs. Finally, we come back to the communication strategy of player $i$ to show that the communication strategies of both types are optimal given the behaviour of the rival $j$.

1. The message $m_i \in I = \{0, 1\}$ is non-informative iff the only inference player $j$ can draw is

$$\Pr(\omega = 1 \mid m_i) = \Pr(\omega = 0 \mid m_i) = 1/2.$$

Notice that, since $\Pr(\omega = 1) = \Pr(\omega = 0) = 1/2$, we have

$$\Pr(\omega \mid m_i) = \frac{\Pr(m_i \mid \omega)}{\Pr(m_i \mid \omega) + \Pr(m_i \mid 1 - \omega)}, \quad \text{for all } \omega, m_i \in I,$$

and therefore the condition for non-informativeness of message $m_i \in I$ is

$$\Pr(m_i \mid \omega = 0) = \Pr(m_i \mid \omega = 1). \qquad (23)$$

Notice that condition (23) can be rewritten, using (2) and (3), as

$$\theta_i(m_i \mid 1) = \theta_i(m_i \mid 0), \quad \forall m_i \in I. \qquad (24)$$

We want both messages $m_i = 0, 1$ to be non-informative. We start with message 0. Recalling the definition of $\theta_i(m_i \mid \omega)$ in (2) and (3), condition (24) for the case $m_i = 0$, i.e. $\theta_i(m_i = 0 \mid \omega = 1) = \theta_i(m_i = 0 \mid \omega = 0)$, amounts to imposing

$$\big(\lambda_i(1 - \sigma_i^c(0)) + (1 - \lambda_i)(1 - \sigma_i^a(0))\big)\mu_i + \big(\lambda_i(1 - \sigma_i^c(1)) + (1 - \lambda_i)(1 - \sigma_i^a(1))\big)(1 - \mu_i)$$
$$= \big(\lambda_i(1 - \sigma_i^c(0)) + (1 - \lambda_i)(1 - \sigma_i^a(0))\big)(1 - \mu_i) + \big(\lambda_i(1 - \sigma_i^c(1)) + (1 - \lambda_i)(1 - \sigma_i^a(1))\big)\mu_i,$$

which is equivalent to imposing

$$\sigma_i^c(0) - \sigma_i^c(1) = \frac{1 - \lambda_i}{\lambda_i}\,\big(\sigma_i^a(1) - \sigma_i^a(0)\big). \qquad (25)$$

It remains only to check that the proposed equilibrium strategies fulfill this condition. The equilibrium strategy of player $i$'s type c is defined only in differences, once we set the strategy of his counterpart type a. It is worth noticing that type c's strategy is only feasible if $(1 - \lambda_i)/\lambda_i \le 1 \Leftrightarrow \lambda_i \ge 1/2$, because otherwise either $\sigma_i^c(0) > 1$ or $\sigma_i^c(1) < 0$, and both cases are disallowed, these being probabilities. To sum up, we have shown, in the case $\lambda_i > 1/2$, that the proposed equilibrium strategies lead to $\Pr(\omega = 1 \mid m_i = 0) = \Pr(\omega = 0 \mid m_i = 0)$, i.e. they make message $m_i = 0$ non-informative. This step is completed by noticing that the proposed equilibrium strategies also fulfill the condition for making message $m_i = 1$ non-informative, because the corresponding condition is identical to the one above. To see this, notice that imposing $\theta_i(m_i = 1 \mid \omega = 1) = \theta_i(m_i = 1 \mid \omega = 0)$ yields the condition

$$\big(\lambda_i\sigma_i^c(0) + (1 - \lambda_i)\sigma_i^a(0)\big)\mu_i + \big(\lambda_i\sigma_i^c(1) + (1 - \lambda_i)\sigma_i^a(1)\big)(1 - \mu_i)$$
$$= \big(\lambda_i\sigma_i^c(0) + (1 - \lambda_i)\sigma_i^a(0)\big)(1 - \mu_i) + \big(\lambda_i\sigma_i^c(1) + (1 - \lambda_i)\sigma_i^a(1)\big)\mu_i,$$

which is equivalent to condition (25).

2. If the inference available to player $j$ about the true state $\omega$ after receiving message $m_i$ is that $\Pr(\omega = 1 \mid m_i) = \Pr(\omega = 0 \mid m_i)$ for both $m_i$, then applying the beliefs as defined by (5) we find that

$$\beta_j(1, 1) = \beta_j(1, 0) = b_j(1), \quad\text{and}$$
$$\beta_j(0, 1) = \beta_j(0, 0) = b_j(0);$$

that is, once player $j$ updates his beliefs, the information supplied by player $i$ being so noisy as to be useless, he knows nothing new.

3. Player $i$ anticipates the previous step and is therefore aware that player $j$ will not update his beliefs irrespective of the message $m_i$ that he observes (and therefore also independently of any distribution over messages). Recall that the only reason player $i$ cares about player $j$'s beliefs is that they determine (indeed, coincide with) his action $a_j$, which determines the expected external utility for player $i$, $\widetilde{eu}_i(a_j)$. Given that this action $a_j$ will come from the a priori beliefs, without any connection to either of player $i$'s strategies $\sigma_i^c \in [0, 1]^2$ and $\sigma_i^a \in [0, 1]^2$, any possible strategy for both types is (vacuously) a best reply to player $j$'s non-updating behaviour.

Proposition 2. Again the proof needs to demonstrate how the different parts of the proposed strategies are consistent with each other, and this is also done in several steps. First, recalling part of the previous proof, we establish that if $\lambda_i < 1/2$ then there is no way in which any message $m_i \in \{0, 1\}$ can be made non-informative, i.e. both messages will be informative. Next, given the informative content of both messages, the optimal reception strategy for player $j$ is to pay attention to what player $i$ says and, accordingly, to update his beliefs. The last thing to do, and the crux of the matter, is to show that once both types of player $i$ are aware that they can influence player $j$'s beliefs (and actions), their proposed communication strategies are optimal.

1. There are different ways to see that if $\lambda_i < 1/2$ then both messages of player $i$ will be informative. Perhaps the simplest is to realize that the communication strategy of player $i$ studied in the previous lemma, which, as we showed, is the unique one that guarantees non-informativeness, is no longer feasible: with $\lambda_i < 1/2$, following that strategy would imply a mixed strategy with some probabilities lying outside the interval $[0, 1]$. A consequence of this impossibility is that now $\Pr(\omega = 1 \mid m_i)$ and $\Pr(\omega = 0 \mid m_i)$ must differ from 1/2 (i.e. be informative) for both messages $m_i = 0, 1$. Now it is time to make use of the reliability of the messages defined in (8). One checks that $\theta_i(m_i = 1 \mid \omega = 0) = \theta_i(m_i = 0 \mid \omega = 1) = (1 - \mu_i)(1 - \lambda_i) + \mu_i\lambda_i$ and that $\theta_i(m_i = 0 \mid \omega = 0) = \theta_i(m_i = 1 \mid \omega = 1) = (1 - \mu_i)\lambda_i + \mu_i(1 - \lambda_i)$. Now recall our definition of $\bar\mu_i = \lambda_i(1 - \mu_i) + (1 - \lambda_i)\mu_i$; it is easy to check that $\bar\mu_i \in (1/2, 1)$ provided that $\lambda_i \in (0, 1/2)$ and $\mu_i \in (1/2, 1)$. Then one finds that the reliability of both messages $m_i$ is the same and equals

$$R(m_i = 0) = R(m_i = 1) = \bar\mu_i > 1/2.$$

2. Given that player i’s messages are informative, Bayes’ rule updates player j’s beliefs as shown in (5).Now the relevant pieces of evidence for player j are composed of his own signal sj and the message

28

Page 29:  · RESEARCH: COMPETITION AND ALTRUISM¤ Ramon Xifré Olivay May 2002, Work in progress Comments welcome, Quotations discouraged-Abs trac . A su me th arcb ily o nfq gd it. This paper

he receives mi; for brevity of notation we write ej = (sj ;mi). Then we have the following system ofbeliefs,

$$\beta_j(e_j) = \begin{cases}
\dfrac{\mu_j\bar\mu_i}{\mu_j\bar\mu_i + (1 - \mu_j)(1 - \bar\mu_i)} & \text{if } e_j = (1, 1), \\[2ex]
\dfrac{\mu_j(1 - \bar\mu_i)}{\mu_j(1 - \bar\mu_i) + (1 - \mu_j)\bar\mu_i} & \text{if } e_j = (1, 0), \\[2ex]
\dfrac{(1 - \mu_j)\bar\mu_i}{(1 - \mu_j)\bar\mu_i + \mu_j(1 - \bar\mu_i)} & \text{if } e_j = (0, 1), \\[2ex]
\dfrac{(1 - \mu_j)(1 - \bar\mu_i)}{(1 - \mu_j)(1 - \bar\mu_i) + \mu_j\bar\mu_i} & \text{if } e_j = (0, 0).
\end{cases}$$

These belief functions have some useful properties that hold for any $\{\mu_j, \bar\mu_i\} \in (1/2, 1)^2$:

P1. $\beta_j(0, 0) + \beta_j(1, 1) = \beta_j(0, 1) + \beta_j(1, 0) = 1$ (obvious);

P2. $\beta_j(1, 1) > \beta_j(1, 0)$ and $\beta_j(0, 1) > \beta_j(0, 0)$ (obvious);

P3. $\beta_j(1, 1) + \beta_j(1, 0) \ge 1$, with equality iff $\mu_j = 1/2$ and $\bar\mu_i = 1$. To see why, we use the partial derivatives of the belief functions (and the fact that they are all monotonic) to identify the most adverse case for P3; we then establish that P3 holds in this worst case, and therefore in all others. Notice that $\partial\beta_j(1,1)/\partial\mu_j > 0$ and $\partial\beta_j(1,0)/\partial\mu_j > 0$; from this we conclude that the sum is also increasing, i.e. $\partial(\beta_j(1,1) + \beta_j(1,0))/\partial\mu_j > 0$. The response of $\beta_j(1,1) + \beta_j(1,0)$ to $\bar\mu_i$ is apparently less clear, since $\partial\beta_j(1,1)/\partial\bar\mu_i > 0$ but $\partial\beta_j(1,0)/\partial\bar\mu_i < 0$. But if one actually sums both expressions and differentiates with respect to $\bar\mu_i$, one finds that

$$\frac{\partial\big(\beta_j(1,1) + \beta_j(1,0)\big)}{\partial\bar\mu_i} = \frac{\mu_j(1 - \mu_j)}{\big(\mu_j\bar\mu_i + (1 - \mu_j)(1 - \bar\mu_i)\big)^2} - \frac{\mu_j(1 - \mu_j)}{\big(\mu_j(1 - \bar\mu_i) + (1 - \mu_j)\bar\mu_i\big)^2},$$

which is strictly negative for any $\{\mu_j, \bar\mu_i\} \in (1/2, 1)^2$ and equal to zero for $\bar\mu_i = 1/2$. This is so because $\mu_j\bar\mu_i + (1 - \mu_j)(1 - \bar\mu_i) > \mu_j(1 - \bar\mu_i) + (1 - \mu_j)\bar\mu_i$ for any $\{\mu_j, \bar\mu_i\} \in [1/2, 1]^2$. Therefore the minimum value of $\beta_j(1,1) + \beta_j(1,0)$ is attained at $\mu_j = 1/2$ and $\bar\mu_i = 1$; for other values of either parameter the sum is strictly larger. Finally, one checks that in this most adverse case $\beta_j(1,1) = 1$ and $\beta_j(1,0) = 0$, which satisfies P3.
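These properties are easy to spot-check numerically; the sketch below sweeps a grid of $(\mu_j, \bar\mu_i)$ values and asserts P1-P3 (a verification aid, not part of the proof):

```python
def beliefs(mu_j, mubar_i):
    """System of posteriors beta_j(s_j, m_i) above, uniform prior."""
    def b(s, m):
        num = (mu_j if s else 1 - mu_j) * (mubar_i if m else 1 - mubar_i)
        den = num + ((1 - mu_j) if s else mu_j) * ((1 - mubar_i) if m else mubar_i)
        return num / den
    return {(s, m): b(s, m) for s in (0, 1) for m in (0, 1)}

for k in range(1, 50):
    for l in range(1, 50):
        mu_j, mubar_i = 0.5 + 0.01 * k, 0.5 + 0.01 * l
        b = beliefs(mu_j, mubar_i)
        assert abs(b[0, 0] + b[1, 1] - 1) < 1e-12          # P1
        assert abs(b[0, 1] + b[1, 0] - 1) < 1e-12          # P1
        assert b[1, 1] > b[1, 0] and b[0, 1] > b[0, 0]     # P2
        assert b[1, 1] + b[1, 0] > 1                       # P3 (strict, interior)
print("P1-P3 hold on the grid")
```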

3. Now that player $j$ responds to player $i$'s messages, we should check that both types of player $i$ are choosing their strategies optimally. We examine each type in turn.

t = c: Player $i$'s type-c strategy consists of two mixing probabilities, $\sigma_i^c(0)$ and $\sigma_i^c(1)$, which specify the probability of sending message $m_i = 1$ in case of receiving signal $s_i = 0$ or $s_i = 1$ from nature. Under the equilibrium requirement, he has to choose those probabilities so as to maximize his expected utility, taking into account the strategy of his counterpart type a and the reaction, studied above, of player $j$. He would prefer player $j$ to end up as far as possible from the true $\omega$, although he ignores it; his only information comes from his private signal $s_i$, based on which he can derive a probability distribution over the possible $\omega$. This distribution will play a key role in choosing the optimal strategies.

s = 0: Consider first the case in which he receives $s_i = 0$. In what follows, for brevity, $\sigma^c$ stands for $\sigma_i^c(0)$. In this case one finds that

$$\widetilde{eu}_i^c(\sigma^c) = \begin{pmatrix} \mu_i\mu_j \\ \mu_i(1 - \mu_j) \\ (1 - \mu_i)(1 - \mu_j) \\ (1 - \mu_i)\mu_j \end{pmatrix}' \begin{pmatrix} \sigma^c\beta_j(0,1)^2 + (1 - \sigma^c)\beta_j(0,0)^2 \\ \sigma^c\beta_j(1,1)^2 + (1 - \sigma^c)\beta_j(1,0)^2 \\ \sigma^c(\beta_j(0,1) - 1)^2 + (1 - \sigma^c)(\beta_j(0,0) - 1)^2 \\ \sigma^c(\beta_j(1,1) - 1)^2 + (1 - \sigma^c)(\beta_j(1,0) - 1)^2 \end{pmatrix};$$

using property P1 above, we find that it simplifies to

$$\widetilde{eu}_i^c(\sigma^c) = \begin{pmatrix} \mu_i\mu_j(1 - \sigma^c) + (1 - \mu_i)\mu_j\sigma^c \\ \mu_i\mu_j\sigma^c + (1 - \mu_i)\mu_j(1 - \sigma^c) \\ \mu_i(1 - \mu_j)(1 - \sigma^c) + (1 - \mu_i)(1 - \mu_j)\sigma^c \\ \mu_i(1 - \mu_j)\sigma^c + (1 - \mu_i)(1 - \mu_j)(1 - \sigma^c) \end{pmatrix}' \begin{pmatrix} \beta_j(0,0)^2 \\ \beta_j(0,1)^2 \\ \beta_j(1,0)^2 \\ \beta_j(1,1)^2 \end{pmatrix}.$$

Now we can see how $\widetilde{eu}_i^c(\sigma^c)$ changes with $\sigma^c$. Carrying out the operations, we see that

$$\frac{\partial\widetilde{eu}_i^c(\sigma^c)}{\partial\sigma^c} = (2\mu_i - 1)\Big[\mu_j\big(\beta_j(0,1)^2 - \beta_j(0,0)^2\big) + (1 - \mu_j)\big(\beta_j(1,1)^2 - \beta_j(1,0)^2\big)\Big].$$

Recalling property P2 of the belief functions, we see that $\partial\widetilde{eu}_i^c(\sigma^c(0))/\partial\sigma^c(0)$ is strictly positive and therefore, given the beliefs of player $j$, player $i$'s type-c optimal strategy is to set $\sigma_i^c(0) = 1$.

s = 1: In this case, where now $\sigma^c$ stands for $\sigma_i^c(1)$, one finds that

$$\widetilde{eu}_i^c(\sigma^c) = \begin{pmatrix} (1 - \mu_i)\mu_j \\ (1 - \mu_i)(1 - \mu_j) \\ \mu_i(1 - \mu_j) \\ \mu_i\mu_j \end{pmatrix}' \begin{pmatrix} \sigma^c\beta_j(0,1)^2 + (1 - \sigma^c)\beta_j(0,0)^2 \\ \sigma^c\beta_j(1,1)^2 + (1 - \sigma^c)\beta_j(1,0)^2 \\ \sigma^c(\beta_j(0,1) - 1)^2 + (1 - \sigma^c)(\beta_j(0,0) - 1)^2 \\ \sigma^c(\beta_j(1,1) - 1)^2 + (1 - \sigma^c)(\beta_j(1,0) - 1)^2 \end{pmatrix};$$

again, by means of property P1, we simplify the above expression into

$$\widetilde{eu}_i^c(\sigma^c) = \begin{pmatrix} (1 - \mu_i)\mu_j(1 - \sigma^c) + \mu_i\mu_j\sigma^c \\ (1 - \mu_i)\mu_j\sigma^c + \mu_i\mu_j(1 - \sigma^c) \\ (1 - \mu_i)(1 - \mu_j)(1 - \sigma^c) + \mu_i(1 - \mu_j)\sigma^c \\ (1 - \mu_i)(1 - \mu_j)\sigma^c + \mu_i(1 - \mu_j)(1 - \sigma^c) \end{pmatrix}' \begin{pmatrix} \beta_j(0,0)^2 \\ \beta_j(0,1)^2 \\ \beta_j(1,0)^2 \\ \beta_j(1,1)^2 \end{pmatrix},$$

which leads to

$$\frac{\partial\widetilde{eu}_i^c(\sigma^c(1))}{\partial\sigma^c(1)} = (2\mu_i - 1)\Big[\mu_j\big(\beta_j(0,0)^2 - \beta_j(0,1)^2\big) + (1 - \mu_j)\big(\beta_j(1,0)^2 - \beta_j(1,1)^2\big)\Big],$$

which, again by property P2, we find to be strictly negative; therefore, given the beliefs of player $j$, player $i$'s type-c optimal strategy is to set $\sigma_i^c(1) = 0$.

t = a: The proof for type a can be built directly from the one for his counterpart type c by noticing that $\widetilde{eu}_i^c = -\widetilde{eu}_i^a$, and that therefore $\partial\widetilde{eu}_i^a(\sigma_i^a(0))/\partial\sigma_i^a(0) < 0$, leading to the optimal decision $\sigma_i^a(0) = 0$; analogously, $\partial\widetilde{eu}_i^a(\sigma_i^a(1))/\partial\sigma_i^a(1) > 0$, which leaves as optimal strategy $\sigma_i^a(1) = 1$.

Lemma 3. We arrange the proof by distinguishing the two cases on $\lambda$ that defined propositions 1 and 2. We show that no deviation from the prescribed equilibrium strategies is stable.

1. Case $\lambda_i > 1/2$:

t = c: Suppose player $i_c$ does not choose $\sigma_i^c(0)$ and $\sigma_i^c(1)$ such that $\sigma_i^c(0) = \sigma_i^c(1) + (1 - \lambda_i)/\lambda_i$. Then both messages $m_i \in \{0, 1\}$ are informative. Given this, player $j$ should pay attention to message $m_i$ and update his beliefs from $b_j$ to $\beta_j$. But given this updating behaviour, the unique best response of player $i_c$ is to play the equilibrium strategy, making $\beta_j = b_j$ and removing player $j$'s opportunities. This shows that no such deviation from the equilibrium is sustainable.

t = a: Suppose player $i_a$ does not play $\sigma_i^a(0) = 0$ and $\sigma_i^a(1) = 1$. Then, holding fixed the strategy of player $i_c$, messages are informative and player $j$ therefore pays attention to them. But then the best response of player $i_a$ is to make the messages as reliable as possible, and this is achieved, holding fixed the strategy of player $i_c$, only by setting precisely $\sigma_i^a(0) = 0$ and $\sigma_i^a(1) = 1$.

2. Case $\lambda_i < 1/2$:

t = c: Suppose player $i_c$ plays something different from $\sigma_i^c(0) = 1$, $\sigma_i^c(1) = 0$. Then, given $\lambda_i < 1/2$, and for any strategy of $i_a$ (including the one prescribed by the equilibrium), this alternative strategy increases the reliability of the messages; and this is never optimal for the competitive type.

t = a: For the analogous reason, any strategy of type $i_a$ different from $\sigma_i^a(0) = 0$, $\sigma_i^a(1) = 1$, for any strategy of $i_c$, decreases the reliability of the messages; and this is never optimal for the altruist.

Lemma 4. To show that both types choose the same accuracy we rely on the fact that both obtain the same marginal utility from it; this is all we need, because we have assumed the marginal cost is equal for both. It will be useful to decompose total expected utility into private and external. We concentrate first on the private expected utility. We have that

$$eu_i^t(\mu_i^t; \cdot) = -\frac{\lambda_j}{2}\left\{\begin{pmatrix} \mu_i^t\mu_j^c \\ \mu_i^t(1 - \mu_j^c) \\ (1 - \mu_i^t)\mu_j^c \\ (1 - \mu_i^t)(1 - \mu_j^c) \end{pmatrix}' \begin{pmatrix} \beta_i^t(0,1)^2 \\ \beta_i^t(0,0)^2 \\ \beta_i^t(1,1)^2 \\ \beta_i^t(1,0)^2 \end{pmatrix} + \begin{pmatrix} \mu_i^t\mu_j^c \\ \mu_i^t(1 - \mu_j^c) \\ (1 - \mu_i^t)\mu_j^c \\ (1 - \mu_i^t)(1 - \mu_j^c) \end{pmatrix}' \begin{pmatrix} (\beta_i^t(0,1) - 1)^2 \\ (\beta_i^t(0,0) - 1)^2 \\ (\beta_i^t(1,1) - 1)^2 \\ (\beta_i^t(1,0) - 1)^2 \end{pmatrix}\right\}$$
$$\quad - \frac{1 - \lambda_j}{2}\left\{\begin{pmatrix} \mu_i^t\mu_j^a \\ \mu_i^t(1 - \mu_j^a) \\ (1 - \mu_i^t)\mu_j^a \\ (1 - \mu_i^t)(1 - \mu_j^a) \end{pmatrix}' \begin{pmatrix} \beta_i^t(0,0)^2 \\ \beta_i^t(0,1)^2 \\ \beta_i^t(1,0)^2 \\ \beta_i^t(1,1)^2 \end{pmatrix} + \begin{pmatrix} \mu_i^t\mu_j^a \\ \mu_i^t(1 - \mu_j^a) \\ (1 - \mu_i^t)\mu_j^a \\ (1 - \mu_i^t)(1 - \mu_j^a) \end{pmatrix}' \begin{pmatrix} (\beta_i^t(0,0) - 1)^2 \\ (\beta_i^t(0,1) - 1)^2 \\ (\beta_i^t(1,0) - 1)^2 \\ (\beta_i^t(1,1) - 1)^2 \end{pmatrix}\right\};$$

by using property P1, which appears in the proof of Proposition 2, we have that

$$eu_i^t(\mu_i^t; \cdot) = -\lambda_j \begin{pmatrix} \mu_i^t\mu_j^c \\ \mu_i^t(1 - \mu_j^c) \\ (1 - \mu_i^t)\mu_j^c \\ (1 - \mu_i^t)(1 - \mu_j^c) \end{pmatrix}' \begin{pmatrix} \beta_i^t(0,1)^2 \\ \beta_i^t(0,0)^2 \\ \beta_i^t(1,1)^2 \\ \beta_i^t(1,0)^2 \end{pmatrix} - (1 - \lambda_j) \begin{pmatrix} \mu_i^t\mu_j^a \\ \mu_i^t(1 - \mu_j^a) \\ (1 - \mu_i^t)\mu_j^a \\ (1 - \mu_i^t)(1 - \mu_j^a) \end{pmatrix}' \begin{pmatrix} \beta_i^t(0,0)^2 \\ \beta_i^t(0,1)^2 \\ \beta_i^t(1,0)^2 \\ \beta_i^t(1,1)^2 \end{pmatrix}$$

$$eu_i^t(\mu_i^t; \cdot) = -\Big\{\bar\mu_j\big(\mu_i\beta_i^t(0,0)^2 + (1 - \mu_i)\beta_i^t(1,0)^2\big) + (1 - \bar\mu_j)\big(\mu_i\beta_i^t(0,1)^2 + (1 - \mu_i)\beta_i^t(1,1)^2\big)\Big\}$$

with $\bar\mu_j = \lambda_j(1 - \mu_j^c) + (1 - \lambda_j)\mu_j^a$. The first thing to notice is that this expression is the same for both types of player $i$. To get the private marginal utility we differentiate with respect to $\mu_i^t$. As noted above, changes in $\mu_i^t$ lead to changes in the distribution and to changes in the beliefs. We have

i. As notedabove, changes in ¹t

i lead to changes in the distribution and to changes in the beliefs. We have

\begin{align*}
\frac{\partial eu_i^t(\mu_i^t,\cdot)}{\partial \mu_i^t}
&= \hat\mu_j\left(\mu_i^t\left(\frac{\partial \beta_i^t(1,0)^2}{\partial \mu_i^t} - \frac{\partial \beta_i^t(0,0)^2}{\partial \mu_i^t}\right) + \beta_i^t(1,0)^2 - \beta_i^t(0,0)^2\right) \\
&\quad + (1-\hat\mu_j)\left(\mu_i^t\left(\frac{\partial \beta_i^t(1,1)^2}{\partial \mu_i^t} - \frac{\partial \beta_i^t(0,1)^2}{\partial \mu_i^t}\right) + \beta_i^t(1,1)^2 - \beta_i^t(0,1)^2\right) \\
&\quad - \left(\hat\mu_j\,\frac{\partial \beta_i^t(1,0)^2}{\partial \mu_i^t} + (1-\hat\mu_j)\,\frac{\partial \beta_i^t(1,1)^2}{\partial \mu_i^t}\right) \tag{26}
\end{align*}


Now we use the Envelope Theorem to simplify this expression. To this end, define the value function $V_i^t(\mu_i^t)$ as the value attained by the total expected utility $eu_i^t(\mu_i^t, \beta_i^t(\mu_i^t))$ at a solution to the maximization problem. The derivative of $V_i^t(\mu_i^t)$,
\[
\frac{\partial V_i^t(\mu_i^t)}{\partial \mu_i^t} = \frac{\partial eu_i^t(\mu_i^t, \beta_i^t(\mu_i^t))}{\partial \mu_i^t} + \frac{\partial eu_i^t(\mu_i^t, \beta_i^t(\mu_i^t))}{\partial \beta_i^t}\,\frac{d\beta_i^t(\mu_i^t)}{d\mu_i^t},
\]
can be simplified by noting that the first order condition for maximization gives $\partial eu_i^t(\mu_i^t, \beta_i^t(\mu_i^t))/\partial \beta_i^t = 0$. This leaves the expression above as
\[
\frac{\partial V_i^t(\mu_i^t)}{\partial \mu_i^t} = \frac{\partial eu_i^t(\mu_i^t, \beta_i^t(\mu_i^t))}{\partial \mu_i^t}.
\]

Therefore we can dispose of the 'indirect' effect of $\mu_i^t$ on $eu_i^t$ through $\beta_i^t$ and look only at the direct effect. This simplification turns (26) into
\[
\frac{\partial eu_i^t(\mu_i^t,\cdot)}{\partial \mu_i^t} = \hat\mu_j\left(\beta_i^t(1,0)^2 - \beta_i^t(0,0)^2\right) + (1-\hat\mu_j)\left(\beta_i^t(1,1)^2 - \beta_i^t(0,1)^2\right), \quad \forall t \in T.
\]

We proceed now with the external expected utility. In this case, the expression is different for each type of agent. The external expected utility of type $c$ is, using again property P1 of the beliefs,

\[
\widetilde{eu}_i^c(\mu_i^c,\cdot) = \sum_{t_j\in\{c,a\}}
\begin{pmatrix} \mu_i^c \mu_j^{t_j} \\ \mu_i^c(1-\mu_j^{t_j}) \\ (1-\mu_i^c)\mu_j^{t_j} \\ (1-\mu_i^c)(1-\mu_j^{t_j}) \end{pmatrix}'
\begin{pmatrix} \beta_j^{t_j}(0,1)^2 \\ \beta_j^{t_j}(1,1)^2 \\ \beta_j^{t_j}(0,0)^2 \\ \beta_j^{t_j}(1,0)^2 \end{pmatrix}
\Pr(t_j)
\]

and the external expected utility of type $a$ is

\[
\widetilde{eu}_i^a(\mu_i^a,\cdot) = -\sum_{t_j\in\{c,a\}}
\begin{pmatrix} \mu_i^a \mu_j^{t_j} \\ \mu_i^a(1-\mu_j^{t_j}) \\ (1-\mu_i^a)\mu_j^{t_j} \\ (1-\mu_i^a)(1-\mu_j^{t_j}) \end{pmatrix}'
\begin{pmatrix} \beta_j^{t_j}(0,0)^2 \\ \beta_j^{t_j}(1,0)^2 \\ \beta_j^{t_j}(0,1)^2 \\ \beta_j^{t_j}(1,1)^2 \end{pmatrix}
\Pr(t_j).
\]

Notice now that the $\beta_j^{t_j}$ do not depend on the accuracy $\mu_i^t$ actually chosen but on the conjecture player $j$ holds about the choice of player $i$; i.e., they depend on $\bar\mu_i^t$ (in our notation). Then we find that

\[
\frac{\partial \widetilde{eu}_i^c(\mu_i^c,\cdot)}{\partial \mu_i^c} = \frac{\partial \widetilde{eu}_i^a(\mu_i^a,\cdot)}{\partial \mu_i^a}
= \sum_{t_j\in\{c,a\}} \left( \mu_j^{t_j}\left(\beta_j^{t_j}(0,1)^2 - \beta_j^{t_j}(0,0)^2\right) + (1-\mu_j^{t_j})\left(\beta_j^{t_j}(1,1)^2 - \beta_j^{t_j}(1,0)^2\right) \right) \Pr(t_j).
\]
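Since this equality of external marginal utilities is what drives the lemma, it is worth noting that it is an identity in the beliefs: it holds whatever the $\beta_j^{t_j}$ are, because they depend only on conjectures. A small sympy sketch (an editorial check; the beliefs are left as free symbols) verifies it directly from the two vector expressions above:

import sympy as sp

lam_j = sp.Symbol('lambda_j')
mu_c, mu_a = sp.symbols('mu_i_c mu_i_a')
mu_j = {'c': sp.Symbol('mu_j_c'), 'a': sp.Symbol('mu_j_a')}
Pr_t = {'c': lam_j, 'a': 1 - lam_j}
# the rival's beliefs beta_j^t(s, m), kept as free symbols
B = {(t, s, m): sp.Symbol('beta_%s_%d%d' % (t, s, m))
     for t in 'ca' for s in (0, 1) for m in (0, 1)}

def probs(mi, mj):
    return [mi * mj, mi * (1 - mj), (1 - mi) * mj, (1 - mi) * (1 - mj)]

order_c = [(0, 1), (1, 1), (0, 0), (1, 0)]   # belief ordering, type-c expression
order_a = [(0, 0), (1, 0), (0, 1), (1, 1)]   # belief ordering, type-a expression

eu_c = sum(Pr_t[t] * sum(p * B[t, s, m] ** 2
           for p, (s, m) in zip(probs(mu_c, mu_j[t]), order_c)) for t in 'ca')
eu_a = -sum(Pr_t[t] * sum(p * B[t, s, m] ** 2
            for p, (s, m) in zip(probs(mu_a, mu_j[t]), order_a)) for t in 'ca')

print(sp.simplify(sp.diff(eu_c, mu_c) - sp.diff(eu_a, mu_a)))   # -> 0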

Lemma 5.

1. First, notice that the effect of $\lambda_i$ on the total marginal expected utility reduces to its effect on $\widetilde{meu}_i$, because $\partial meu_i/\partial \lambda_i = 0$; this is so because $\partial \beta_i(s_i, m_j)/\partial \lambda_i = 0$. Now,
\[
\frac{\partial}{\partial \lambda_i}\widetilde{meu}_i = \frac{\partial}{\partial \lambda_i}\left[\mu_j\left(\beta_j(0,1)^2 - \beta_j(0,0)^2\right) + (1-\mu_j)\left(\beta_j(1,1)^2 - \beta_j(1,0)^2\right)\right]. \tag{27}
\]
Notice that in general $\partial(\beta_j(s_j,m_i))^2/\partial \lambda_i = 2\beta_j(s_j,m_i)\,\partial\beta_j(s_j,m_i)/\partial \lambda_i$ and
\[
\frac{\partial \beta_j(s_j,m_i)}{\partial \lambda_i} = \frac{\partial \beta_j(s_j,m_i)}{\partial \hat\mu_i}\,\frac{\partial \hat\mu_i}{\partial \lambda_i},
\]
with $\partial\hat\mu_i/\partial\lambda_i = 1 - 2\mu_i < 0$ for $\mu_i > 1/2$, as we have assumed throughout the paper. Therefore expression (27) becomes
\begin{align*}
\frac{\partial}{\partial \lambda_i}\widetilde{meu}_i
&= 2\mu_j\,\frac{\partial \hat\mu_i}{\partial \lambda_i}\left(\beta_j(0,1)\frac{\partial \beta_j(0,1)}{\partial \hat\mu_i} - \beta_j(0,0)\frac{\partial \beta_j(0,0)}{\partial \hat\mu_i}\right) \\
&\quad + 2(1-\mu_j)\,\frac{\partial \hat\mu_i}{\partial \lambda_i}\left(\beta_j(1,1)\frac{\partial \beta_j(1,1)}{\partial \hat\mu_i} - \beta_j(1,0)\frac{\partial \beta_j(1,0)}{\partial \hat\mu_i}\right). \tag{28}
\end{align*}


Then one computes that
\[
\frac{\partial \beta_j(0,1)}{\partial \hat\mu_i} = \frac{(1-\mu_j)\mu_j}{\left((1-\mu_j)\hat\mu_i + \mu_j(1-\hat\mu_i)\right)^2} = -\frac{\partial \beta_j(1,0)}{\partial \hat\mu_i}, \qquad
\frac{\partial \beta_j(1,1)}{\partial \hat\mu_i} = \frac{(1-\mu_j)\mu_j}{\left(\mu_j\hat\mu_i + (1-\mu_j)(1-\hat\mu_i)\right)^2} = -\frac{\partial \beta_j(0,0)}{\partial \hat\mu_i}.
\]
Therefore both terms inside the brackets in (28) are positive, which together with the fact that $\partial\hat\mu_i/\partial\lambda_i < 0$ makes the whole expression negative.
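These derivative formulas are easy to verify symbolically. A short sketch (an editorial check; it assumes $\beta_j(s_j, m_i)$ is the Bayes posterior under a uniform prior with own precision $\mu_j$ and message precision $\hat\mu_i$):

import sympy as sp

hmu_i, mu_j = sp.symbols('hmu_i mu_j', positive=True)
d1 = (1 - mu_j) * hmu_i + mu_j * (1 - hmu_i)
d2 = mu_j * hmu_i + (1 - mu_j) * (1 - hmu_i)
# posteriors beta_j(s_j, m_i) under the assumed parameterisation
beta = {(0, 1): (1 - mu_j) * hmu_i / d1, (1, 0): mu_j * (1 - hmu_i) / d1,
        (1, 1): mu_j * hmu_i / d2, (0, 0): (1 - mu_j) * (1 - hmu_i) / d2}

checks = [sp.diff(beta[0, 1], hmu_i) - (1 - mu_j) * mu_j / d1**2,
          sp.diff(beta[1, 0], hmu_i) + (1 - mu_j) * mu_j / d1**2,
          sp.diff(beta[1, 1], hmu_i) - (1 - mu_j) * mu_j / d2**2,
          sp.diff(beta[0, 0], hmu_i) + (1 - mu_j) * mu_j / d2**2]
print([sp.simplify(c) for c in checks])   # -> [0, 0, 0, 0]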

2. To show that the total marginal expected utility is decreasing in $\mu_j$, we proceed by showing that both $\partial meu_i/\partial \mu_j < 0$ and $\partial \widetilde{meu}_i/\partial \mu_j < 0$.

$meu_i$: Notice that
\begin{align*}
\frac{\partial}{\partial \mu_j} meu_i &= \frac{\partial}{\partial \mu_j}\left(\hat\mu_j\left(\beta_i(1,0)^2 - \beta_i(0,0)^2\right) + (1-\hat\mu_j)\left(\beta_i(1,1)^2 - \beta_i(0,1)^2\right)\right) \\
&= \frac{\partial \hat\mu_j}{\partial \mu_j}\left(\beta_i(1,0)^2 - \beta_i(0,0)^2\right) + \hat\mu_j\,\frac{\partial}{\partial \mu_j}\left(\beta_i(1,0)^2 - \beta_i(0,0)^2\right) \\
&\quad + \frac{\partial(1-\hat\mu_j)}{\partial \mu_j}\left(\beta_i(1,1)^2 - \beta_i(0,1)^2\right) + (1-\hat\mu_j)\,\frac{\partial}{\partial \mu_j}\left(\beta_i(1,1)^2 - \beta_i(0,1)^2\right),
\end{align*}
which can be rewritten as
\begin{align*}
&= \frac{\partial \hat\mu_j}{\partial \mu_j}\left\{\left(\beta_i(1,0)^2 - \beta_i(0,0)^2\right) - \left(\beta_i(1,1)^2 - \beta_i(0,1)^2\right)\right\} \tag{29a} \\
&\quad + \left\{\hat\mu_j\,\frac{\partial}{\partial \mu_j}\left(\beta_i(1,0)^2 - \beta_i(0,0)^2\right) + (1-\hat\mu_j)\,\frac{\partial}{\partial \mu_j}\left(\beta_i(1,1)^2 - \beta_i(0,1)^2\right)\right\}. \tag{29b}
\end{align*}

We concentrate on (29a). Notice that $\partial\hat\mu_j/\partial\mu_j = 1 - 2\lambda_j$, which is positive in all games of type I, i.e. for both $\lambda_1$ and $\lambda_2$ smaller than $1/2$. We use property P1 of the beliefs to manipulate the braced term and obtain
\[
\left(\beta_i(1,0)^2 - \beta_i(0,0)^2\right) - \left(\beta_i(1,1)^2 - \beta_i(0,1)^2\right) = 2\left(\beta_i(1,1) - \beta_i(1,0)\right)\left[1 - \left(\beta_i(1,1) + \beta_i(1,0)\right)\right],
\]
and now, using property P3 of the beliefs, we know that $\beta_i(1,1) + \beta_i(1,0) > 1$, which makes the preceding expression negative and thus makes the whole of (29a) negative. Concerning (29b), here we need to deal with the derivatives of the belief functions. By means of the chain rule, (29b) can be rewritten as the sum
\[
2\hat\mu_j\,\frac{\partial \hat\mu_j}{\partial \mu_j}\left(\beta_i(1,0)\frac{\partial \beta_i(1,0)}{\partial \hat\mu_j} - \beta_i(0,0)\frac{\partial \beta_i(0,0)}{\partial \hat\mu_j}\right) + 2(1-\hat\mu_j)\,\frac{\partial \hat\mu_j}{\partial \mu_j}\left(\beta_i(1,1)\frac{\partial \beta_i(1,1)}{\partial \hat\mu_j} - \beta_i(0,1)\frac{\partial \beta_i(0,1)}{\partial \hat\mu_j}\right).
\]
The important thing is that the sign of this sum equals the sign of the sum of
\begin{align*}
&\beta_i(1,0)\frac{\partial \beta_i(1,0)}{\partial \hat\mu_j} - \beta_i(0,0)\frac{\partial \beta_i(0,0)}{\partial \hat\mu_j} \tag{30a} \\
+\;&\beta_i(1,1)\frac{\partial \beta_i(1,1)}{\partial \hat\mu_j} - \beta_i(0,1)\frac{\partial \beta_i(0,1)}{\partial \hat\mu_j}. \tag{30b}
\end{align*}
We have already found the derivatives $\partial\beta_i/\partial\hat\mu_j$. To show that (29b) is negative it is thus enough to show that the sum of (30a) and (30b) is negative. We start by dealing with (30a), specifically with $\beta_i(1,0)\,\partial\beta_i(1,0)/\partial\hat\mu_j - \beta_i(0,0)\,\partial\beta_i(0,0)/\partial\hat\mu_j$. Notice that both $\partial\beta_i(1,0)/\partial\hat\mu_j < 0$ and $\partial\beta_i(0,0)/\partial\hat\mu_j < 0$. But, further, we find that $\left|\partial\beta_i(1,0)/\partial\hat\mu_j\right| > \left|\partial\beta_i(0,0)/\partial\hat\mu_j\right|$ (they have the same numerator, but the denominator of $\partial\beta_i(0,0)/\partial\hat\mu_j$ is larger). Therefore we conclude that (30a) is always negative.

Consider the second part: we find that $\partial\beta_i(1,1)/\partial\hat\mu_j > 0$ and $\partial\beta_i(0,1)/\partial\hat\mu_j > 0$, but notice that now we cannot resolve the indeterminacy in the same way, because $\left|\partial\beta_i(1,1)/\partial\hat\mu_j\right| < \left|\partial\beta_i(0,1)/\partial\hat\mu_j\right|$. This leaves the sign of (30b) undetermined, and with it the overall sign of (29b). To complete the proof we distinguish two cases depending on the sign of (30b): (i) if it is negative, we are done, because both (30a) and (30b) are then negative and so is (29b); (ii) if (30b) is positive, then we must show that, despite being positive, it is smaller in absolute value than (30a). Assume then that (30b) is positive; its absolute value is then
\[
|(30b)| = (30b) = \frac{(1-\mu_i)\mu_i^2\,\hat\mu_j}{\left[\mu_i\hat\mu_j + (1-\mu_i)(1-\hat\mu_j)\right]^3} - \frac{(1-\mu_i)^2\mu_i\,\hat\mu_j}{\left[(1-\mu_i)\hat\mu_j + \mu_i(1-\hat\mu_j)\right]^3}.
\]
Instead, since (30a) is negative, its absolute value is obtained by switching the sign, i.e.
\[
|(30a)| = -(30a) = \frac{(1-\mu_i)\mu_i^2(1-\hat\mu_j)}{\left[(1-\mu_i)\hat\mu_j + \mu_i(1-\hat\mu_j)\right]^3} - \frac{(1-\mu_i)^2\mu_i(1-\hat\mu_j)}{\left[\mu_i\hat\mu_j + (1-\mu_i)(1-\hat\mu_j)\right]^3}.
\]
Now, one checks that the condition $|(30a)| > |(30b)|$ is equivalent to
\[
\frac{1}{\left(\mu_i(1-\hat\mu_j) + (1-\mu_i)\hat\mu_j\right)^2} > \frac{1}{\left(\mu_i\hat\mu_j + (1-\mu_i)(1-\hat\mu_j)\right)^2},
\]
which holds because both $\mu_i, \hat\mu_j > 1/2$.
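Since the closed forms above are easy to mistype, a quick numerical sketch (an editorial check, on a grid with $\mu_i, \hat\mu_j \in (1/2, 1)$) confirms both that (30a) is always negative and that $|(30a)| > |(30b)|$ whenever (30b) is positive:

import numpy as np

mi, nu = np.meshgrid(np.linspace(0.51, 0.99, 49), np.linspace(0.51, 0.99, 49))
d1 = (1 - mi) * nu + mi * (1 - nu)       # nu stands for hat{mu}_j
d2 = mi * nu + (1 - mi) * (1 - nu)
abs30a = (1 - mi) * mi**2 * (1 - nu) / d1**3 - (1 - mi)**2 * mi * (1 - nu) / d2**3
v30b = (1 - mi) * mi**2 * nu / d2**3 - (1 - mi)**2 * mi * nu / d1**3

print(bool(np.all(abs30a > 0)))                       # (30a) < 0 everywhere
case_ii = v30b > 0                                    # case (ii): (30b) positive
print(bool(np.all(abs30a[case_ii] > v30b[case_ii])))  # |(30a)| > |(30b)| there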

$\widetilde{meu}_i$: Notice that
\begin{align*}
\frac{\partial}{\partial \mu_j}\widetilde{meu}_i &= \frac{\partial}{\partial \mu_j}\left(\mu_j\left(\beta_j(0,1)^2 - \beta_j(0,0)^2\right) + (1-\mu_j)\left(\beta_j(1,1)^2 - \beta_j(1,0)^2\right)\right) \\
&= \left(\beta_j(0,1)^2 - \beta_j(0,0)^2\right) - \left(\beta_j(1,1)^2 - \beta_j(1,0)^2\right) \\
&\quad + \mu_j\,\frac{\partial}{\partial \mu_j}\left(\beta_j(0,1)^2 - \beta_j(0,0)^2\right) + (1-\mu_j)\,\frac{\partial}{\partial \mu_j}\left(\beta_j(1,1)^2 - \beta_j(1,0)^2\right).
\end{align*}
To prove that this is negative we follow a procedure similar to the one above, using the facts we have already established.
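Because this last step is left to the reader, a numerical spot-check may be useful. The sketch below (an editorial illustration; it again assumes the beliefs $\beta_j(s_j, m_i)$ are Bayes posteriors with own precision $\mu_j$ and a fixed message precision $\hat\mu_i$) differentiates $\widetilde{meu}_i$ numerically on a grid:

import numpy as np

def gmeu(p, q):
    # p = mu_j (own precision), q = hat{mu}_i (message precision), both > 1/2
    b01 = (1 - p) * q / ((1 - p) * q + p * (1 - q))
    b00 = (1 - p) * (1 - q) / ((1 - p) * (1 - q) + p * q)
    b11 = p * q / (p * q + (1 - p) * (1 - q))
    b10 = p * (1 - q) / (p * (1 - q) + (1 - p) * q)
    return p * (b01**2 - b00**2) + (1 - p) * (b11**2 - b10**2)

eps, ok = 1e-6, True
for p in np.linspace(0.55, 0.95, 9):
    for q in np.linspace(0.55, 0.95, 9):
        ok &= (gmeu(p + eps, q) - gmeu(p - eps, q)) / (2 * eps) < 0
print(bool(ok))   # True: the derivative is negative on this grid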

3. First, notice that, in contrast with part 1 of this proof, now the effect of $\lambda_j$ on the total marginal expected utility reduces to its effect on $meu_i$, because $\partial \widetilde{meu}_i/\partial \lambda_j = 0$; this is so because $\partial\beta_j(s_j,m_i)/\partial\lambda_j = 0$. Now,
\begin{align*}
\frac{\partial}{\partial \lambda_j} meu_i &= \frac{\partial}{\partial \lambda_j}\left(\hat\mu_j\left(\beta_i(1,0)^2 - \beta_i(0,0)^2\right) + (1-\hat\mu_j)\left(\beta_i(1,1)^2 - \beta_i(0,1)^2\right)\right) \\
&= \frac{\partial \hat\mu_j}{\partial \lambda_j}\left[\left(\beta_i(1,0)^2 - \beta_i(0,0)^2\right) - \left(\beta_i(1,1)^2 - \beta_i(0,1)^2\right)\right] \\
&\quad + 2\hat\mu_j\,\frac{\partial \hat\mu_j}{\partial \lambda_j}\left(\beta_i(1,0)\frac{\partial \beta_i(1,0)}{\partial \hat\mu_j} - \beta_i(0,0)\frac{\partial \beta_i(0,0)}{\partial \hat\mu_j}\right) \\
&\quad + 2(1-\hat\mu_j)\,\frac{\partial \hat\mu_j}{\partial \lambda_j}\left(\beta_i(1,1)\frac{\partial \beta_i(1,1)}{\partial \hat\mu_j} - \beta_i(0,1)\frac{\partial \beta_i(0,1)}{\partial \hat\mu_j}\right). \tag{31}
\end{align*}
The first term is positive: $\partial\hat\mu_j/\partial\lambda_j = 1 - 2\mu_j < 0$ and the bracketed difference is negative, as shown in the analysis of (29a). For the remaining terms, we established in the analysis of (29b) that the sum of the two parenthesised expressions, weighted by $\hat\mu_j$ and $(1-\hat\mu_j)$, is negative; together with $\partial\hat\mu_j/\partial\lambda_j < 0$, this makes those terms positive as well. Hence the whole expression (31) is positive.

Proposition 3. Basically, we rely on Lemma 5 to establish the results. The bottom line of the lemma is that in any game $G(\lambda, x)$ both types of each player choose the same accuracy, and therefore differences arise only between the choices of player 1 and player 2, $\mu_1$ and $\mu_2$. Start with games in which $\lambda_1 = \lambda_2$ (some cases in type I games and all games of type III). Then it is clear that $\mu_1 = \mu_2$. Using the results of the lemma on the impact of changes in $\lambda_i$ on the marginal expected utility, we find the main point: the player with the lower $\lambda$ chooses a higher $\mu$, and this covers games of type II.i and the rest of type I.

The proofs of Propositions 4, 5 and 6 are available from the author upon request.


Appendix B

Here we show that the results of Propositions 1 and 2 extend to a case with a generic number $N$ of outcomes (and of signals and messages). The outcome space is now $I = \{1,\dots,N\}$ rather than $\{0,1\}$. We assume that all these sets have the same structure, $I = S_i = M_i = \{1,\dots,N\}$, and that each state $\omega \in I$ is, a priori, equally likely, i.e. $\Pr(\omega) = 1/N$ for all $\omega \in I$. We also assume the natural generalization of the signals' precision,
\[
\Pr(s\,|\,\omega) = \begin{cases} \mu & \text{if } s = \omega, \\[4pt] \dfrac{1-\mu}{N-1} & \text{if } s \neq \omega. \end{cases}
\]

To make a given message $m \in I$ uninformative, we need that
\[
\Pr(\omega\,|\,m) = \Pr(\omega'\,|\,m), \quad \forall\, \omega, \omega' \in I. \tag{32}
\]
Now, since $\Pr(\omega) = \Pr(\omega') = 1/N$ for all $\omega, \omega' \in I$, we have that for any given $\omega^*$ and $m^*$,
\[
\Pr(\omega^*\,|\,m^*) = \frac{\Pr(m^*\,|\,\omega^*)\Pr(\omega^*)}{\sum_{\omega\in I}\Pr(m^*\,|\,\omega)\Pr(\omega)} = \frac{\Pr(m^*\,|\,\omega^*)}{\sum_{\omega\in I}\Pr(m^*\,|\,\omega)}.
\]
Notice that the denominator is identical for any $\omega^* \in I$. This implies that condition (32) becomes
\[
\Pr(m\,|\,\omega') = \Pr(m\,|\,\omega), \quad \forall\, \omega, \omega' \in I. \tag{33}
\]

The above condition states that message $m$ is uninformative if it is sent with the same probability by the rival player in all possible states. Consider, in particular, the case of making message $m^*$ uninformative. The non-informativeness condition (33) can be rewritten, fixing $\omega^* = m^*$, as
\[
\Pr(m^*\,|\,\omega^*) = \Pr(m^*\,|\,\bar\omega), \quad \forall\, \bar\omega \in I,\ \bar\omega \neq \omega^*. \tag{34}
\]
Now, for any arbitrary $\omega$, and defining $s = \omega$, one has
\[
\Pr(m^*\,|\,\omega) = \sum_{s\in I}\sigma(m^*\,|\,s)\Pr(s\,|\,\omega) = \sigma(m^*\,|\,s)\Pr(s\,|\,\omega) + \sum_{s'\neq s}\sigma(m^*\,|\,s')\Pr(s'\,|\,\omega),
\]
with $\sigma(m\,|\,s) = \sigma_c(m\,|\,s)\Pr(c) + \sigma_a(m\,|\,s)\Pr(a)$. Therefore, to make a given message $m^*$ uninformative, with $\omega^* = m^*$, $s^* = \omega^*$ and $\bar s = \bar\omega$, condition (34) is transformed into
\begin{align*}
\sigma(m^*|s^*)\Pr(s^*|\omega^*) + \sum_{s'\neq s^*}\sigma(m^*|s')\Pr(s'|\omega^*) &= \sigma(m^*|\bar s)\Pr(\bar s|\bar\omega) + \sum_{s'\neq \bar s}\sigma(m^*|s')\Pr(s'|\bar\omega), \quad \forall\,\bar\omega\in I,\ \bar\omega\neq\omega^* \\
\sigma(m^*|s^*)\Pr(s^*|\omega^*) + \sigma(m^*|\bar s)\Pr(\bar s|\omega^*) + \sum_{s'\notin\{s^*,\bar s\}}\sigma(m^*|s')\Pr(s'|\omega^*) &= \sigma(m^*|\bar s)\Pr(\bar s|\bar\omega) + \sigma(m^*|s^*)\Pr(s^*|\bar\omega) + \sum_{s'\notin\{\bar s,s^*\}}\sigma(m^*|s')\Pr(s'|\bar\omega) \\
\sigma(m^*|s^*)\Pr(s^*|\omega^*) + \sigma(m^*|\bar s)\Pr(\bar s|\omega^*) &= \sigma(m^*|\bar s)\Pr(\bar s|\bar\omega) + \sigma(m^*|s^*)\Pr(s^*|\bar\omega)
\end{align*}


\[
\mu\,\sigma(m^*|s^*) + \frac{1-\mu}{N-1}\,\sigma(m^*|\bar s) = \mu\,\sigma(m^*|\bar s) + \frac{1-\mu}{N-1}\,\sigma(m^*|s^*), \tag{35}
\]
with $\sigma(m\,|\,s) = \sigma_c(m\,|\,s)\Pr(c) + \sigma_a(m\,|\,s)\Pr(a)$. Therefore, (35) becomes
\begin{align*}
\lambda\left(\sigma_c(m^*|s^*) - \sigma_c(m^*|\bar s)\right) &= (1-\lambda)\left(\sigma_a(m^*|\bar s) - \sigma_a(m^*|s^*)\right) \\
\sigma_c(m^*|s^*) - \sigma_c(m^*|\bar s) &= \frac{1-\lambda}{\lambda}\left(\sigma_a(m^*|\bar s) - \sigma_a(m^*|s^*)\right),
\end{align*}
which is the generalization of condition (9).
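As a quick sanity check of this condition, the sketch below (an editorial illustration with assumed values $N = 4$, $\Pr(c) = \lambda$, and $\sigma_t(m^*|s)$ read as the probability that type $t$ sends $m^*$ after signal $s$) builds a pair of strategies satisfying it and verifies that the posterior over $\omega$ after observing $m^*$ stays uniform:

import numpy as np

rng = np.random.default_rng(0)
N, lam, mu = 4, 0.6, 0.75

# Pr(s | omega): mu on the diagonal, (1 - mu)/(N - 1) off the diagonal
P = np.full((N, N), (1 - mu) / (N - 1))
np.fill_diagonal(P, mu)

sig_a = rng.uniform(0.2, 0.4, N)          # arbitrary sigma_a(m* | s)
sig_c = 0.5 - (1 - lam) / lam * sig_a     # one solution of the condition above
sig = lam * sig_c + (1 - lam) * sig_a     # type-averaged Pr(m* | s), constant in s

pm = P @ sig                              # Pr(m* | omega), one entry per omega
print(pm / pm.sum())                      # -> all entries 1/N: m* is uninformative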

Appendix C is available from the author upon request.


figures

[Figure 1: player $i$'s information structure: the state $\omega$ generates the signal $s_i$ with precision $\mu_i$, and the signal generates the message $m_i$ according to the strategy $\sigma_i^t$.]

[Figure 2: the complete structure of the game for players 1 and 2: the state $\omega$, the signals $s_1, s_2$ (precisions $\mu_1, \mu_2$), the exchanged messages $m_1, m_2$ (strategies $\sigma_1^t, \sigma_2^t$) and the final estimates $a_1^t(s_1, m_2)$ and $a_2^t(s_2, m_1)$.]


[Figure 3: the uninformative reporting configuration: both signals $s_i = 0, 1$ lead to messages $m_i = 0, 1$ (type weights $\lambda_i$, $1-\lambda_i$).]

[Figure 4: the informative reporting configuration: $s_i = 1$ leads to $m_i = 1$ and $s_i = 0$ to $m_i = 0$ (type weights $\lambda_i$, $1-\lambda_i$).]


[Figure 5: best-response curves BR1, BR1*, BR2 and BR2* in the $(\mu_2, \mu_1)$ plane, with both axes running from 0.5 to 1.]


[Figure M.1: best responses in the $(\mu_2, \mu_1)$ plane as $\lambda$ varies.]

[Figure M.2: best responses in the $(\mu_2, \mu_1)$ plane as $x$ varies.]

[Figure M.3: the $(\lambda_1, \lambda_2)$ plane, with the regions U and M and the points $\{\lambda_M(x), \lambda_M(x)\}$ and $\{0, \lambda_m(x)\}$.]

[Figure M.4: the thresholds $\lambda_M(x)$ and $\lambda_m(x)$ plotted against the cost $x$.]


[Figure 6: welfare $W(0,0)$, $W(0,.1)$ and $W(0,.5)$ plotted against the cost $x$, for $x$ between 4 and 6.]

[Figure 7: a close-up of Figure 6 for $x$ between 5.3 and 5.44.]


[Figure 8: the threshold $\lambda^*(x)$ plotted against the cost $x$, for $x$ between 4 and 6 and $\lambda$ between 0 and 0.5.]
