

Game Theory with Costly Computation: Formulation and Application to Protocol Security

Joseph Y. Halpern∗ Rafael Pass†

Cornell University
[email protected] [email protected]

Abstract: We develop a general game-theoretic framework for reasoning about strategic agents performing possibly costly computation. In this framework, many traditional game-theoretic results (such as the existence of a Nash equilibrium) no longer hold. Nevertheless, we can use the framework to provide psychologically appealing explanations to observed behavior in well-studied games (such as finitely repeated prisoner's dilemma and rock-paper-scissors). Furthermore, we provide natural conditions on games sufficient to guarantee that equilibria exist. As an application of this framework, we develop a definition of protocol security relying on game-theoretic notions of implementation. We show that a natural special case of this definition is equivalent to a variant of the traditional cryptographic definition of protocol security; this result shows that, when taking computation into account, the two approaches used for dealing with "deviating" players in two different communities—Nash equilibrium in game theory and zero-knowledge "simulation" in cryptography—are intimately related.

Keywords: game theory, costly computation, cryptography

1 Introduction

Consider the following game. You are given a random odd n-bit number x and you are supposed to decide whether x is prime or composite. If you guess correctly you receive $2; if you guess incorrectly you instead have to pay a penalty of $1000. Additionally, you have the choice of "playing safe" by giving up, in which case you receive $1. In traditional game theory, computation is considered "costless"; in other words, players are allowed to perform an unbounded amount of computation without it affecting their utility. Traditional game theory suggests that you should compute whether x is prime or composite and output the correct answer; this is the only Nash equilibrium of the one-person game, no matter what n (the size of the number) is. Although for small n this seems reasonable, when n grows larger most people would probably decide to "play safe"—as eventually the cost of computing the answer (e.g., by buying powerful enough computers) outweighs the possible gain of $1.
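To see the trade-off concretely, the following Python sketch compares the two options. The cost function compute_cost is a hypothetical stand-in for the (dollar) cost of deciding primality of an n-bit number; nothing about its form is taken from the paper.

```python
# A minimal sketch of the primality game above. The cost model is an
# illustrative assumption: deciding primality of an n-bit number is taken
# to cost 0.001 * n^2 dollars.

def compute_cost(n):
    return 0.001 * n ** 2

def utility_of_computing(n):
    # Computing always yields the correct answer, so the payoff is $2
    # minus the (assumed) cost of the computation itself.
    return 2.0 - compute_cost(n)

UTILITY_OF_PLAYING_SAFE = 1.0

for n in (16, 32, 64):
    u = utility_of_computing(n)
    best = "compute" if u > UTILITY_OF_PLAYING_SAFE else "play safe"
    print(f"n = {n:2d}: compute -> {u:6.2f}, safe -> 1.00, best: {best}")
```

Under this assumed cost curve, the crossover happens between n = 16 and n = 32: exactly the qualitative behavior described above.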

The importance of considering such computational issues in game theory has been recognized since at least the work of Simon [1]. There have been a number of attempts to capture various aspects of computation. Two major lines of research can be identified. The first line, initiated by Neyman [2], tries to model the fact that players can do only bounded computation, typically by modeling players as finite automata. Neyman focused on finitely repeated prisoner's dilemma, a topic which has continued to attract attention. (See [3] and the references therein for more recent work on prisoner's dilemma played by finite automata; Megiddo and Wigderson [4] considered prisoner's dilemma played by Turing machines.) In another instance of this line of research, Dodis, Halevi, and Rabin [5] and Urbano and Vila [6] consider the problem of implementing mediators when players are polynomial-time Turing machines. The second line, initiated by Rubinstein [7], tries to capture the fact that doing costly computation affects an agent's utility. Rubinstein assumed that players choose a finite automaton to play the game rather than choosing a strategy directly; a player's utility depends both on the move made by the automaton and the complexity of the automaton (identified with the number of states of the automaton). Intuitively, automata that use more states are seen as representing more complicated procedures. (See [8] for an overview of the work in this area in the 1980s, and [9] for more recent work.)

∗ Supported in part by NSF grants ITR-0325453, IIS-0534064, IIS-0812045, and IIS-0911036, and by AFOSR grants FA9550-08-1-0438 and FA9550-09-1-0266, and ARO grant W911NF-09-1-0281.
† Supported in part by a Microsoft New Faculty Fellowship, NSF CAREER Award CCF-0746990, AFOSR Award FA9550-08-1-0197, and BSF Grant 2006317.

Our focus is on providing a general game-theoretic framework for reasoning about agents performing costly computation. As in Rubinstein's work, we view players as choosing a machine, but for us the machine is a Turing machine, rather than a finite automaton. We associate a complexity, not just with a machine, but with the machine and its input. The complexity could represent the running time of or space used by the machine on that input. The complexity can also be used to capture the complexity of the machine itself (e.g., the number of states, as in Rubinstein's case) or to model the cost of searching for a new strategy to replace one that the player already has. For example, if a mechanism designer recommends that player i use a particular strategy (machine) M, then there is a cost for searching for a better strategy; switching to another strategy may also entail a psychological cost. By allowing the complexity to depend on the machine and the input, we can deal with the fact that machines run much longer on some inputs than on others. A player's utility depends both on the actions chosen by all the players' machines and the complexity of these machines.

In this setting, we can define Nash equilibrium in the obvious way. However, as we show by a simple example (a rock-paper-scissors game where randomization is costly), a Nash equilibrium may not always exist. Other standard results in game theory, such as the revelation principle (which, roughly speaking, says that there is always an equilibrium where players truthfully report their types, i.e., their private information [10, 11]), also do not hold. (See the full paper.) We view this as a feature. We believe that taking computation into account should force us to rethink a number of basic notions.

First, we show that the non-existence of Nash equilibrium is not such a significant problem. We identify natural conditions (such as the assumption that randomizing is free) that guarantee the existence of Nash equilibrium in computational games. Moreover, we demonstrate that our computational framework can explain experimentally observed phenomena, such as cooperation in the finitely repeated prisoner's dilemma, that are inconsistent with standard Nash equilibrium, in a psychologically appealing way.

We also show that our framework is normative in that it can be used to tell a mechanism designer how to design a mechanism that takes into account agents' computational limitations. We illustrate this point in the context of cryptographic protocols. We use our framework to provide a game-theoretic definition of what it means for a communication protocol to securely compute [12] a function of players' inputs. Rather than stipulating that "malicious" players cannot harm the "honest" ones (as traditionally done in cryptographic definitions [12, 13]), we say that a protocol Π is a secure implementation of function f if, given any computational game G for which it is an equilibrium for the players to provide their inputs to a mediator computing f and output what the mediator recommends, running Π on the same inputs is also an equilibrium. In other words, whenever a set of parties want to compute f on their inputs, they also want to honestly run the protocol Π using the same inputs. Perhaps surprisingly, we show that, under weak restrictions on the utility functions of the players (essentially, that players never prefer to compute more), our notion of implementation is equivalent to a variant of the cryptographic notion of secure computation. This result shows that the two approaches used for dealing with "deviating" players in two different communities—Nash equilibrium in game theory, and zero-knowledge "simulation" [13] in cryptography—are intimately connected once we take the cost of computation into account.

Finally, we believe this work leaves open a huge number of exciting research questions. We outline several directions in Section 5. To give just one example, under natural restrictions on the utility function of players, we can circumvent classical impossibility results with respect to secure computation (e.g., [14]), suggesting an exploration of more game-theoretic (im)possibility results.

2 A Computational Framework

In a Bayesian game, it is implicitly assumed that computing a strategy—that is, computing what move to make given a type—is free. We want to take the cost of computation into account here. To this end, we consider what we call Bayesian machine games, where we replace strategies by machines. For definiteness, we take the machines to be Turing machines, although the exact choice of computing formalism is not significant for our purposes. Given a type, a strategy in a Bayesian game returns a distribution over actions. Similarly, given as input a type, the machine returns a distribution over actions. Just as we talk about the expected utility of a strategy profile in a Bayesian game, we can talk about the expected utility of a machine profile in a Bayesian machine game. However, we can no longer compute the expected utility by just taking the expectation over the action profiles that result from playing the game. A player's utility depends not only on the type profile and action profile played by the machine, but also on the "complexity" of the machine given an input. The complexity of a machine can represent, for example, the running time or space usage of the machine on that input, the size of the program description, or some combination of these factors. For simplicity, we describe the complexity by a single number, although, since a number of factors may be relevant, it may be more appropriate to represent it by a tuple of numbers in some cases. (We can, of course, always encode the tuple as a single number, but in that case, "higher" complexity is not necessarily worse.) Note that when determining player i's utility, we consider the complexity of all machines in the profile, not just that of i's machine. For example, i might be happy as long as his machine takes fewer steps than j's.

We assume that nature has a type in {0, 1}∗. While there is no need to include a type for nature in standard Bayesian games (we can effectively incorporate nature's type into the type of the players), once we take computation into account, we obtain a more expressive class of games by allowing nature to have a type (since the complexity of computing the utility may depend on nature's type). We assume that machines take as input strings of 0s and 1s and output strings of 0s and 1s. Thus, we assume that both types and actions can be represented as elements of {0, 1}∗. We allow machines to randomize, so given a type as input, we actually get a distribution over strings. To capture this, we assume that the input to a machine is not only a type, but also a string chosen with uniform probability from {0, 1}∞ (which we can view as the outcome of an infinite sequence of coin tosses). The machine's output is then a deterministic function of its type and the infinite random string. We use the convention that the output of a machine that does not terminate is a fixed special symbol ω. We define a view to be a pair (t, r) of two bitstrings; we think of t as that part of the type that is read, and r as the string of random bits used. (Our definition is slightly different from the traditional way of defining a view, in that we include only the parts of the type and the random sequence actually read. If computation is not taken into account, there is no loss in generality in including the full type and the full random sequence, and this is what has traditionally been done in the literature. However, when computation is costly, this might no longer be the case.) We denote by t; r a string in {0, 1}∗; {0, 1}∗ representing the view. (Note that here and elsewhere, we use ";" as a special symbol that acts as a separator between strings in {0, 1}∗.) If v = (t; r) and r is a finite string, we take M(v) to be the output of M given input type t and random string r · 0∞.

We use a complexity function C : M × {0, 1}∗; ({0, 1}∗ ∪ {0, 1}∞) → IN, where M denotes the set of Turing machines, to describe the complexity of a machine given a view. If t ∈ {0, 1}∗ and r ∈ {0, 1}∞, we identify C(M, t; r) with C(M, t; r′), where r′ is the finite prefix of r actually used by M when running on input t with random string r.

For now, we assume that machines run in isolation, so the output and complexity of a machine do not depend on the machine profile. We remove this restriction in the next section, where we allow machines to communicate with mediators (and thus, implicitly, with each other via the mediator).

Definition 2.1 (Bayesian machine game) A Bayesian machine game G is described by a tuple ([m], M, T, Pr, C1, . . . , Cm, u1, . . . , um), where
• [m] = {1, . . . , m} is the set of players;
• M is the set of possible machines;
• T ⊆ ({0, 1}∗)^{m+1} is the set of type profiles, where the (m + 1)st element in the profile corresponds to nature's type;
• Pr is a distribution on T;
• Ci is a complexity function;
• ui : T × ({0, 1}∗)^m × IN^m → IR is player i's utility function. Intuitively, ui(~t, ~a, ~c) is the utility of player i if ~t is the type profile, ~a is the action profile (where we identify i's action with Mi's output), and ~c is the profile of machine complexities.

We can now define the expected utility of a machine profile. Given a Bayesian machine game G = ([m], M, Pr, T, ~C, ~u) and ~M ∈ M^m, define the random variable u^{G,~M}_i on T × ({0, 1}∞)^m (i.e., the space of type profiles and sequences of random strings) by taking

u^{G,~M}_i(~t, ~r) = ui(~t, M1(t1; r1), . . . , Mm(tm; rm), C1(M1, t1; r1), . . . , Cm(Mm, tm; rm)).

Note that there are two sources of uncertainty in computing the expected utility: the type profile ~t and the realization of the random coin tosses of the players, which is an element of ({0, 1}∞)^m. Let Pr^k_U denote the uniform distribution on ({0, 1}∞)^k. Given an arbitrary distribution Pr_X on a space X, we write Pr^{+k}_X to denote the distribution Pr_X × Pr^k_U on X × ({0, 1}∞)^k. If k is clear from context or not relevant, we often omit it, writing Pr_U and Pr^+_X. Thus, given the probability Pr on T, the expected utility of player i in game G if ~M is used is the expectation of the random variable u^{G,~M}_i with respect to the distribution Pr^+ (technically, Pr^{+m}): U^G_i(~M) = E_{Pr^+}[u^{G,~M}_i]. Note that this notion of utility allows an agent to prefer a machine that runs faster to one that runs slower, even if they give the same output, or to prefer a machine that has faster running time to one that gives a better output. Because we allow the utility to depend on the whole profile of complexities, we can capture a situation where i can be "happy" as long as his machine runs faster than j's machine. Of course, an important special case is where i's utility depends only on his own complexity. All of our results continue to hold if we make this restriction.
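To make the definition concrete, the following sketch estimates U^G_i(~M) by Monte Carlo sampling. The machines, complexity charges, utility function, and type distribution are all toy stand-ins invented for illustration; only the overall shape (an expectation over types and coin tosses of a utility that sees both actions and complexities) follows the definition above.

```python
import random

# Machines are modeled as functions from (type, coin source) to
# (action, complexity). All concrete choices below are toy stand-ins.

def machine_parity(t, coins):
    # Deterministic machine: outputs the parity of its type; uses no coins.
    return str(sum(int(b) for b in t) % 2), len(t)   # complexity = bits read

def machine_guess(t, coins):
    # Randomized machine: ignores its type and outputs one random coin.
    return str(next(coins)), 1                        # complexity = one toss

def utility(i, types, actions, complexities):
    # Toy utility: 1 for matching the parity of one's own type,
    # minus a small charge per unit of complexity.
    correct = int(actions[i]) == sum(int(b) for b in types[i]) % 2
    return (1.0 if correct else 0.0) - 0.01 * complexities[i]

def coin_source():
    while True:
        yield random.randint(0, 1)

def expected_utility(profile, type_dist, i, samples=20000):
    total = 0.0
    for _ in range(samples):
        types = type_dist()
        results = [M(t, coin_source()) for M, t in zip(profile, types)]
        actions = [a for a, _ in results]
        complexities = [c for _, c in results]
        total += utility(i, types, actions, complexities)
    return total / samples

type_dist = lambda: [format(random.getrandbits(4), "04b") for _ in range(2)]
profile = [machine_parity, machine_guess]
print("player 1 ~", expected_utility(profile, type_dist, 0))  # about 1 - 0.04
print("player 2 ~", expected_utility(profile, type_dist, 1))  # about 0.5 - 0.01
```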

2.1 Nash Equilibrium in Machine Games

Given the definition of utility above, we can now define (ε-) Nash equilibrium in the standard way.

Definition 2.2 (Nash equilibrium in machine games) Given a Bayesian machine game G = ([m], M, T, Pr, ~C, ~u), a machine profile ~M ∈ M^m, and ε ≥ 0, Mi is an ε-best response to ~M−i if, for every M′i ∈ M,

U^G_i[(Mi, ~M−i)] ≥ U^G_i[(M′i, ~M−i)] − ε.

(As usual, ~M−i denotes the tuple consisting of all machines in ~M other than Mi.) ~M is an ε-Nash equilibrium of G if, for all players i, Mi is an ε-best response to ~M−i. A Nash equilibrium is a 0-Nash equilibrium.

There is an important conceptual point that must be stressed with regard to this definition. Because we are implicitly assuming, as is standard in the game theory literature, that the game is common knowledge, we assume that the agents understand the costs associated with each Turing machine. That is, they do not have to do any "exploration" to compute the costs. In addition, we do not charge the players for computing which machine is the best one to use in a given setting; we assume that this too is known. This model is appropriate in settings where the players have enough experience to understand the behavior of all the machines on all relevant inputs, either through experimentation or theoretical analysis. We can easily extend the model to incorporate uncertainty, by allowing the complexity function to depend on the state (type) of nature as well as the machine and the input.

One immediate advantage of taking computation into account is that we can formalize the intuition that ε-Nash equilibria are reasonable, because players will not bother changing strategies for a gain of ε. Intuitively, the complexity function can "charge" ε for switching strategies. Specifically, an ε-Nash equilibrium ~M can be converted to a Nash equilibrium by modifying player i's complexity function to incorporate the overhead of switching from Mi to some other strategy, and having player i's utility function decrease by ε′ > ε if the switching cost is nonzero; we omit the formal details here. Thus, the framework lets us incorporate explicitly the reasons that players might be willing to play an ε-Nash equilibrium. This justification of ε-Nash equilibrium seems particularly appealing when designing mechanisms (e.g., cryptographic protocols) where the equilibrium strategy is made "freely" available to the players (e.g., it is accessible on a webpage), but any other strategy requires some implementation cost.
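A toy calculation makes the conversion vivid; all numbers below are invented for illustration. Once every switch away from the recommended machine is charged ε′ > ε, no deviation can be profitable, since a deviation gains at most ε to begin with.

```python
# Sketch: converting an eps-Nash equilibrium into a Nash equilibrium by
# charging eps' > eps for switching machines. All numbers are illustrative.

EPS = 0.1          # the profile is an EPS-Nash equilibrium
EPS_PRIME = 0.15   # assumed penalty for using any non-recommended machine

def adjusted_utility(base_utility, switched):
    return base_utility - (EPS_PRIME if switched else 0.0)

u_recommended = 1.0
u_best_deviation = u_recommended + EPS  # a deviation gains at most EPS
print(adjusted_utility(u_recommended, False))    # 1.00: stay put
print(adjusted_utility(u_best_deviation, True))  # 0.95 < 1.00: don't switch
```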

Although the notion of Nash equilibrium in Bayesian machine games is defined in the same way as Nash equilibrium in standard Bayesian games, the introduction of complexity leads to some significant differences in their properties. We highlight a few of them here. First, note that our definition of a Nash equilibrium considers behavioral strategies, which in particular might be randomized. It is somewhat more common in the game-theory literature to consider mixed strategies, which are probability distributions over deterministic strategies. As long as agents have perfect recall, mixed strategies and behavioral strategies are essentially equivalent [15]. However, in our setting, since we might want to charge for the randomness used in a computation, such an equivalence does not necessarily hold.

Mixed strategies are needed to show that Nash equilibrium always exists in standard Bayesian games. As the following example shows, since we can charge for randomization, a Nash equilibrium may not exist in a Bayesian machine game, even in games where the type space and the output space are finite.

Example 2.3 (Rock-paper-scissors) Consider the 2-player Bayesian game of roshambo (rock-paper-scissors). Here the type space has size 1 (the players have no private information). We model playing rock, paper, and scissors as playing 0, 1, and 2, respectively. The payoff to player 1 of the outcome (i, j) is 1 if i = j ⊕ 1 (where ⊕ denotes addition mod 3), −1 if j = i ⊕ 1, and 0 if i = j. Player 2's payoffs are the negative of those of player 1; the game is a zero-sum game. As is well known, the unique Nash equilibrium of this game has the players randomizing uniformly between 0, 1, and 2.

Now consider a machine game version of roshambo. Suppose that we take the complexity of a deterministic strategy to be 1 and the complexity of a strategy that uses randomization to be 2, and take player i's utility to be his payoff in the underlying Bayesian game minus the complexity of his strategy. Intuitively, programs involving randomization are more complicated than those that do not randomize. With this utility function, it is easy to see that there is no Nash equilibrium. For suppose that (M1, M2) is an equilibrium. If M1 uses randomization, then player 1 can do better by playing the deterministic strategy j ⊕ 1, where j is the action that gets the highest probability according to M2 (or is the deterministic choice of player 2 if M2 does not use randomization). Similarly, M2 cannot use randomization. But it is well known (and easy to check) that there is no equilibrium for roshambo with deterministic strategies. (Of course, there is nothing special about the costs of 1 and 2 for deterministic vs. randomized strategies. This argument works as long as all three deterministic strategies have the same cost, and it is less than that of a randomized strategy.) It is well known that people have difficulty simulating randomization; we can think of the cost for randomizing as capturing this difficulty. Interestingly, there are roshambo tournaments (indeed, even a Rock Paper Scissors World Championship), and books written on roshambo strategies [16]. Championship players are clearly not randomizing uniformly (they could not hope to get a higher payoff than an opponent by randomizing). Our framework provides a psychologically plausible account of this lack of randomization.
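The argument is easy to check mechanically. The sketch below transcribes the example's payoffs and complexity charges directly: against the uniform strategy, randomizing yields utility −2 while every deterministic reply yields −1, so randomizing is never a best response.

```python
# Rock-paper-scissors with a charge for randomization, as in Example 2.3:
# deterministic strategies cost 1, randomized strategies cost 2.

def payoff(i, j):
    # Payoff to player 1 for outcome (i, j); addition is mod 3.
    if i == (j + 1) % 3:
        return 1
    if j == (i + 1) % 3:
        return -1
    return 0

def utility_1(strategy_1, strategy_2):
    # A strategy is a dict mapping actions {0, 1, 2} to probabilities.
    expected = sum(p1 * p2 * payoff(a1, a2)
                   for a1, p1 in strategy_1.items()
                   for a2, p2 in strategy_2.items())
    cost = 1 if max(strategy_1.values()) == 1.0 else 2
    return expected - cost

uniform = {0: 1/3, 1: 1/3, 2: 1/3}
print("randomize vs uniform:", utility_1(uniform, uniform))      # 0 - 2 = -2.0
for a in range(3):
    print(f"play {a} vs uniform:", utility_1({a: 1.0}, uniform)) # 0 - 1 = -1.0
```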

The non-existence of Nash equilibrium does not depend on charging directly for randomization; it suffices that it be difficult to compute the best randomized response. Consider a variant of roshambo where we do not charge for the first, say, 10,000 steps of computation, and after that there is a positive cost of computation. It is not hard to show that, in finite time, using coin tossing with equal likelihood of heads and tails, we cannot exactly compute a uniform distribution over the three choices, although we can approximate it closely.1 Since there is always a deterministic best reply that can be computed quickly ("play rock", "play scissors", or "play paper"), it easily follows from this observation, as above, that there is no Nash equilibrium in this game either.

1 Consider a probabilistic Turing machine M with running time bounded by T that outputs either 0, 1, or 2. Since M's running time is bounded by T, M can use at most T of its random bits. If M outputs 0 for m of the 2^T strings of length T, then its probability of outputting 0 is m/2^T, which cannot be 1/3.
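The dyadic-probability observation in footnote 1 can be verified directly by enumeration. In the sketch below, the machine and the bound T = 4 are illustrative choices; any machine that uses at most T coins outputs each action with probability of the form m/2^T, which is never exactly 1/3.

```python
# Enumerate all coin sequences of length T for a toy machine that maps
# them to an action in {0, 1, 2}; the resulting probabilities are dyadic.
from itertools import product

def machine(coins):
    # Toy rule: read the T coins as a binary number and reduce mod 3.
    return int("".join(map(str, coins)), 2) % 3

T = 4
counts = [0, 0, 0]
for coins in product([0, 1], repeat=T):
    counts[machine(coins)] += 1

for action, m in enumerate(counts):
    print(f"Pr[action {action}] = {m}/{2**T} = {m / 2**T:.4f}   (1/3 = 0.3333...)")
```

Here the output probabilities are 6/16, 5/16, and 5/16: close to uniform, but, as the footnote argues, never exactly 1/3 for any finite T.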

In these examples, computing an appropriate randomized response was expensive, either because we charged directly for randomizing, or because getting the "right" response required a lot of computation. It turns out that the cost of randomization is the essential roadblock to getting a Nash equilibrium in Bayesian machine games. We make this precise in the full version of the paper; here we provide only an informal statement of the result.

Theorem 2.4 (Informally stated) Every finite Bayesian machine game, where (1) the utility functions and the type distribution are computable, and (2) "randomization is free", has a Nash equilibrium.

As an intermediate result, we first establish the following "computational" analog of Nash's Theorem, which is of interest in its own right.

Theorem 2.5 (Informally stated) Every finite Bayesian game where the utility functions and the type distribution are computable has a Nash equilibrium that can be implemented by a probabilistic Turing machine.

As noted above, no Turing machine with bounded running time can implement the Nash equilibrium strategy even for roshambo; the Turing machine in Theorem 2.5 has, in general, unbounded running time. We remark that the question of whether Nash equilibrium strategies can be "implemented" was considered already in Nash's original paper (where he showed the existence of games where the Nash equilibrium strategies require players to sample actions with irrational probabilities). Binmore [17] considered the question of whether Nash equilibrium strategies can be implemented by Turing machines, but mostly presented impossibility results.

Our proof relies on results from the first-order theory of reals guaranteeing the existence of solutions to certain equations in particular models of the theory [18]; similar techniques were earlier used by Lipton and Markakis [19] and Papadimitriou and Roughgarden [20] in a related context.

In the full version, we present several other differences between Nash equilibrium in traditional games and in Bayesian machine games. In particular, we show several examples where our framework can be used to give simple (and, arguably, natural) explanations of experimentally observed behavior in classical games where traditional game-theoretic solution concepts fail. For instance, we show that in a variant of the finitely repeated prisoner's dilemma (FRPD) where players are charged for the use of memory, the strategy tit-for-tat—which does exceedingly well in experiments [21]—is also a Nash equilibrium, whereas it is dominated by defecting in the classical sense. (This follows from the facts that (1) any profitable deviation from tit-for-tat requires the use of memory, and (2) if future utility is discounted, then for sufficiently long games, the potential gain of deviating becomes smaller than the cost of memory.)
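A minimal simulation of this argument follows. The stage payoffs are the usual prisoner's dilemma values; the memory charge, the discount factor, and the particular deviation (defecting in the last round, which requires counting rounds, i.e., memory) are illustrative assumptions.

```python
# Tit-for-tat vs. a last-round deviation in finitely repeated prisoner's
# dilemma, with discounting and an assumed one-time charge for memory.

STAGE = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def discounted_total(mine, theirs, delta):
    return sum(STAGE[(m, t)] * delta ** k
               for k, (m, t) in enumerate(zip(mine, theirs)))

def play(rounds, my_strategy):
    # Opponent plays tit-for-tat: cooperate first, then copy my last move.
    mine, theirs = [], []
    for k in range(rounds):
        theirs.append("C" if k == 0 else mine[-1])
        mine.append(my_strategy(k, rounds))
    return mine, theirs

always_cooperate = lambda k, n: "C"                    # what TFT does vs. TFT
defect_last = lambda k, n: "D" if k == n - 1 else "C"  # needs a round counter

MEMORY_COST, DELTA = 1.0, 0.9   # assumed memory charge and discount factor
for rounds in (5, 20, 50):
    u_tft = discounted_total(*play(rounds, always_cooperate), DELTA)
    u_dev = discounted_total(*play(rounds, defect_last), DELTA) - MEMORY_COST
    print(f"{rounds:2d} rounds: tit-for-tat {u_tft:6.2f}, deviate {u_dev:6.2f}")
```

With these assumed numbers the deviation pays at 5 rounds but not at 20 or 50: the discounted gain of 2·δ^{n−1} eventually drops below the memory charge.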

3 A Game-Theoretic Definition of Security

The traditional cryptographic definition of secure computation considers two types of players: honest and malicious. Roughly speaking, a protocol is secure if (1) the malicious players cannot influence the output of the computation any more than they could have by selecting a different input; this is called correctness; and (2) the malicious players cannot "learn" more than what can be efficiently computed (i.e., in polynomial time) from only the output of the function; this is called privacy and is formalized through the zero-knowledge simulation paradigm [13]. Note that this definition does not guarantee that honest players actually want to execute the protocol with their intended inputs.

By way of contrast, this property is considered in the game-theoretic notion of implementation (see [11, 22]), while privacy and correctness are not required. Roughly speaking, the game-theoretic notion of implementation says that a strategy profile ~σ implements a mediator if, as long as it is a Nash equilibrium for the players to tell the mediator their type and output what the mediator recommends, then ~σ is a Nash equilibrium in the "cheap talk" game (where the players just talk to each other, rather than talking to a mediator) that has the same distribution over outputs as when the players talk to the mediator. In other words, whenever a set of parties have incentives to tell the mediator their inputs, they also have incentives to honestly use ~σ using the same inputs, and get the same distribution over outputs in both cases.

There have been several recent works that attempt to bridge these two notions (e.g., [5, 23–28]). Most notably, Izmalkov, Lepinski, and Micali [26] (see also [27]) present a "hybrid" definition, where correctness is defined through the game-theoretic notion,2 and privacy through the zero-knowledge paradigm. We suggest a different definition, based solely on the game-theoretic approach.

Roughly speaking, we say that Π implements a mediator F if for all games G that use F for which (the utilities in G are such that) it is an equilibrium for the players to truthfully tell F their inputs, running Π on the same set of inputs (and with the same utility functions) is also an equilibrium and produces the same distribution over outputs as F.3 Note that by requiring the implementation to work for all games, not only do we ensure that players have proper incentives to execute protocols with their intended input, even if they consider computation a costly resource, but we get the privacy and correctness requirements "for free". For suppose that, when using Π, some information about i's input is revealed to j. We consider a game G where a player j gains some significant utility by having this information. In this game, i will not want to use Π. However, our notion of implementation requires that, even with the utilities in G, i should want to use Π if i is willing to use the mediator F. (This argument depends on the fact that we consider games where computation is costly; the fact that j gains information about i's input may mean that j can do some computation faster with this information than without it.) As a consequence, our definition gives a relatively simple (and strong) way of formalizing the security of protocols, relying only on basic notions from game theory.

2 In fact, Izmalkov, Lepinski, and Micali [26] consider an even stronger notion of implementation, which they call perfect implementation. See Section 3.1 for more details.

3 While the definitions of implementation in the game-theory literature (e.g., [11, 22]) do not stress the uniformity of the implementation—that is, the fact that it works for all games—the implementations provided are in fact uniform in this sense.

To make these notions precise, we need to introduce some extensions to our game-theoretic framework. Specifically, we will be interested only in equilibria that are robust in a certain sense, and we want equilibria that deal with deviations by coalitions, since the security literature allows malicious players that deviate in a coordinated way.

Computationally Robust Nash Equilibrium
Computers get faster, cheaper, and more powerful every year. Since utility in a Bayesian machine game takes computational complexity into account, this suggests that an agent's utility function will change when he replaces one computer by a newer computer. We are thus interested in robust equilibria: intuitively, ones that continue to be equilibria (or, more precisely, ε-equilibria for some appropriate ε) even if agents' utilities change as a result of upgrading computers.

Definition 3.1 (Computationally robust NE) Let p : IN → IN. The complexity function C′ is at most a p-speedup of the complexity function C if, for all machines M and views v,

C′(M, v) ≤ C(M, v) ≤ p(C′(M, v)).

Game G′ = ([m′], M′, Pr′, ~C′, ~u′) is at most a p-speedup of game G = ([m], M, Pr, ~C, ~u) if m′ = m, Pr = Pr′, and ~u = ~u′ (i.e., G and G′ differ only in their complexity and machine profiles), and C′i is at most a p-speedup of Ci, for i = 1, . . . , m. Mi is a p-robust ε-best response to ~M−i in G if, for every game G′ that is at most a p-speedup of G, Mi is an ε-best response to ~M−i in G′. ~M is a p-robust ε-equilibrium for G if, for all i, Mi is a p-robust ε-best response to ~M−i.

Intuitively, if we think of complexity as denoting running time and C describes the running time of machines (i.e., programs) on an older computer, then C′ describes the running time of machines on an upgraded computer. For instance, if the upgraded computer runs at most twice as fast as the older one (but never slower), then C′ is at most a 2-speedup of C (where we identify the constant k with the constant function with value k). Clearly, if ~M is a Nash equilibrium of G, then it is a 1-robust equilibrium. We can think of p-robust equilibrium as a refinement of Nash equilibrium for machine games, just like sequential equilibrium [29] or perfect equilibrium [30]; it provides a principled way of ignoring "bad" Nash equilibria. Note that in games where computation is free, every Nash equilibrium is also computationally robust.
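Definition 3.1 can be checked directly on finite data. In the sketch below, the two complexity functions and the set of sample (machine, view) pairs are toy stand-ins: old plays the role of C, new (roughly half the steps, rounded up) the role of C′, and p doubles its argument.

```python
# Check whether C2 is at most a p-speedup of C1 on a finite sample:
# C2(M, v) <= C1(M, v) <= p(C2(M, v)) must hold for every pair.

def is_p_speedup(C1, C2, p, points):
    return all(C2(M, v) <= C1(M, v) <= p(C2(M, v)) for M, v in points)

old = lambda M, v: M * len(v)             # toy running time, old computer
new = lambda M, v: (M * len(v) + 1) // 2  # at most twice as fast, never slower
double = lambda c: 2 * c

points = [(M, "x" * k) for M in (1, 2, 3) for k in (1, 4, 16)]
print(is_p_speedup(old, new, double, points))  # True: new is a 2-speedup of old
```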

Coalition Machine Games
We strengthen the notion of Nash equilibrium to allow for deviating coalitions. Towards this goal, we consider a generalization of Bayesian machine games called coalition machine games, where, in the spirit of coalitional games [31], each subset of players has a complexity function and utility function associated with it. In analogy with the traditional notion of Nash equilibrium, which considers only "single-player" deviations, we consider only "single-coalition" deviations. Roughly speaking, given a set Z of subsets of [m], ~M is a Z-safe Nash equilibrium for G if, for all Z ∈ Z, a coalition controlling the players in Z does not want to deviate. See Appendix A for the formal definitions.

Machine Games with Mediators
We extend Bayesian machine games to allow for communication with a trusted mediator. A Bayesian machine game with a mediator is a pair (G, F), where G is a Bayesian machine game (but where the machines involved now are interactive TMs) and F is a mediator. A particular mediator of interest is the communication mediator, denoted comm, which corresponds to what cryptographers call authenticated channels and economists call cheap talk. We defer the details to the full version.

3.1 A Computational Notion of Game-Theoretic Implementation

In this section we extend the traditional notion of game-theoretic implementation of mediators to consider computational games. Our aim is to obtain a notion of implementation that can be used to capture the cryptographic notion of secure computation. For simplicity, we focus on implementations of mediators that receive a single message from each player and return a single message to each player (i.e., the mediated games consist only of a single stage).


We provide a definition that captures the intuition that the machine profile ~M implements a mediator F if, whenever a set of players want to truthfully provide their "input" to the mediator F, they also want to run ~M using the same inputs. To formalize "whenever", we consider what we call canonical coalition games, where each player i has a type ti of the form xi; zi, where xi is player i's intended "input" and zi consists of some additional information that player i has about the state of the world. We assume that the input xi has some fixed length n. Such games are called canonical games of input length n.4

Let ΛF denote the machine that, given type t = x; z, sends x to the mediator F, outputs as its action whatever string it receives back from F, and then halts. (Technically, we model the fact that ΛF is expecting to communicate with F by assuming that the mediator F appends a signature to its messages, and any messages not signed by F are ignored by ΛF.) Thus, the machine ΛF ignores the extra information z. Let ~ΛF denote the machine profile where each player uses the machine ΛF. Roughly speaking, to capture the fact that whenever the players want to compute F, they also want to run ~M, we require that if ~ΛF is an equilibrium in the game (G, F) (i.e., if it is an equilibrium to simply provide the intended input to F and finally output whatever F replies), running ~M using the intended input is an equilibrium as well.

We actually consider a more general notion of implementation: we are interested in understanding how well equilibrium in a set of games with mediator F can be implemented using a machine profile ~M and a possibly different mediator F′. Roughly speaking, we want that, for every game G in some set G of games, if ~ΛF is an equilibrium in (G, F), then ~M is an equilibrium in (G, F′). In particular, we want to understand what degree of robustness p in the game (G, F) is required to achieve an ε-equilibrium in the game (G, F′). We also require that the equilibrium with mediator F′ be as "coalition-safe" as the equilibrium with mediator F.
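A minimal sketch of the machine ΛF may help fix intuitions. Here the mediator is modeled as a plain function, the type encoding x;z follows the convention above, and the signature mechanism is omitted; the parity functionality is an illustrative choice.

```python
# Lambda_F: send the input part x of the type t = x;z to the mediator,
# output the mediator's reply, and ignore the extra information z.

def make_lambda_F(mediator):
    def Lambda_F(t):
        x, _, z = t.partition(";")   # split type into input x and extra info z
        return mediator(x)           # the reply is output as the action
    return Lambda_F

# Toy single-player mediator computing the parity of x.
mediator = lambda x: str(x.count("1") % 2)
machine = make_lambda_F(mediator)
print(machine("1011;extra-info-about-the-world"))   # -> "1"; z was ignored
```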

Definition 3.2 (Universal implementation) Suppose that G is a set of m-player canonical games, Z is a set of subsets of [m], F and F′ are mediators, M1, . . . , Mm are interactive machines, p : IN × IN → IN, and ε : IN → IR. (~M, F′) is a (G, Z, p)-universal implementation of F with error ε if, for all n ∈ IN, all games G ∈ G with input length n, and all Z′ ⊆ Z, if ~ΛF is a p(n, ·)-robust Z′-safe Nash equilibrium in the mediated machine game (G, F), then

1. (Preserving Equilibrium) ~M is a Z′-safe ε(n)-Nash equilibrium in the mediated machine game (G, F′).

2. (Preserving Action Distributions) For each type profile ~t, the action profile induced by ~ΛF in (G, F) is identically distributed to the action profile induced by ~M in (G, F′).

4 Note that by simple padding, canonical games represent a setting where all parties' input lengths are upper-bounded by some value n that is common knowledge. Thus, we can represent any game where there are only finitely many possible types as a canonical game for some input length n.

As we have observed, although our notion of universal implementation does not explicitly consider the privacy of players' inputs, it can nevertheless capture privacy requirements.

Note that, depending on the class G, our notion of universal implementation imposes severe restrictions on the complexity of the machine profile ~M. For instance, if G consists of all games, it requires that the complexity of ~M be the same as the complexity of ~ΛF. (If the complexity of ~M is higher than that of ~ΛF, then we can easily construct a game G by choosing the utilities appropriately such that it is an equilibrium to run ~ΛF in (G, F), but running ~M is too costly.) Also note that if G consists of games where players strictly prefer smaller complexity, then universal implementation requires that ~M be the optimal algorithm (i.e., the algorithm with the lowest complexity) that implements the functionality of ~M, since otherwise a player would prefer to switch to the optimal implementation. Since few algorithms have been shown to be provably optimal with respect to, for example, the number of computational steps of a Turing machine, this, at first sight, seems to severely limit the use of our definition. However, if we consider games with "coarse" complexity functions where, say, the first T steps are "free" (e.g., machines that execute fewer than T steps are assigned complexity 1), or n^2 computational steps count as one unit of complexity, the restrictions above are not so severe. Indeed, it seems quite natural to assume that a player is indifferent between "small" differences in computation in such a sense.
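For concreteness, one way such a coarse complexity function might look (the thresholds below are illustrative assumptions, not taken from the paper): the first T steps are free, and each further block of n^2 steps adds one unit.

```python
# A toy "coarse" complexity function: machines that run for at most T steps
# get complexity 1; beyond that, each block of n^2 steps adds one unit.

def coarse_complexity(steps, n, T=10000):
    if steps <= T:
        return 1
    return 1 + (steps - T + n**2 - 1) // (n**2)   # ceiling division

for steps in (500, 10000, 10001, 30000):
    print(steps, "->", coarse_complexity(steps, n=100))   # 1, 1, 2, 3
```

Under such a measure, any two implementations whose running times differ by less than a block are literally indistinguishable to the utility function, which is what makes the optimality requirement above achievable.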

Our notion of universal implementation is related to a number of earlier notions of implementation. We now provide a brief comparison to the most relevant ones.

• Our definition of universal implementation captures intuitions similar in spirit to Forges' [22] notion of a universal mechanism. It differs in one obvious way: our definition considers computational games, where the utility functions depend on complexity considerations. Dodis, Halevi, and Rabin [5] (and more recent work, such as [23–25, 27, 32]) consider notions of implementation where the players are modeled as polynomially-bounded Turing machines, but do not consider computational games. As such, the notions considered in these works do not provide any a priori guarantees about the incentives of players with regard to computation.

• Our definition is more general than earlier notions of implementation in that we consider universality with respect to (sub-)classes G of games, and allow deviations by coalitions.


• Our notion of coalition-safety also differs somewhat from earlier related notions. Note that if Z contains all subsets of players with k or fewer players, then universal implementation implies that all k-resilient Nash equilibria and all strong k-resilient Nash equilibria are preserved. However, unlike the notion of k-resilience considered by Abraham et al. [23, 33], our notion provides a "best-possible" guarantee for games that do not have a k-resilient Nash equilibrium. We guarantee that if a certain subset Z of players have no incentive to deviate in the mediated game, then that subset will not have incentive to deviate in the cheap-talk game; this is similar in spirit to the definitions of [26, 32]. Note that, in contrast to [26, 34], rather than just allowing colluding players to communicate only through their moves in the game, we allow coalitions of players that are controlled by a single entity; this is equivalent to considering collusions where the colluding players are allowed to freely communicate with each other. In other words, whereas the definitions of [26, 34] require protocols to be "signalling-free", our definition does not impose such restrictions. We believe that this model is better suited to capturing the security of cryptographic protocols in most traditional settings (where signalling is not an issue).

• We require only that a Nash equilibrium is preserved when moving from the game with mediator F to the communication game. Stronger notions of implementation require that the equilibrium in the communication game be a sequential equilibrium [29]; see, for example, [35, 36]. Since every Nash equilibrium in the game with the mediator F is also a sequential equilibrium, these stronger notions of implementation actually show that sequential equilibrium is preserved when passing from the game with the mediator to the communication game.

While these notions of implementation guarantee that an equilibrium with the mediator is preserved in the communication game, they do not guarantee that new equilibria are not introduced in the latter. An even stronger guarantee is provided by Izmalkov, Lepinski, and Micali's [26] notion of perfect implementation; this notion requires a one-to-one correspondence f between strategies in the corresponding games such that each player's utility with strategy profile ~σ in the game with the mediator is the same as his utility with strategy profile (f(σ1), . . . , f(σn)) in the communication game without the mediator. Such a correspondence, called strategic equivalence by Izmalkov, Lepinski, and Micali [26], guarantees (among other things) that all types of equilibria are preserved when passing from one game to the other, and that no new equilibria are introduced in the communication game. However, strategic equivalence can be achieved only with the use of strong primitives, which cannot be implemented under standard computational and systems assumptions [34]. We focus on the simpler notion of implementation, which requires only that Nash equilibria are preserved, and leave open an exploration of more refined notions.

Strong Universal Implementation
Intuitively, (~M, F′) universally implements F if, whenever a set of parties want to use F (i.e., it is an equilibrium to use ~ΛF when playing with F), then the parties also want to run ~M (using F′) (i.e., using ~M with F′ is also an equilibrium). We now strengthen this notion to also require that whenever a subset of the players do not want to use F (specifically, if they prefer to do "nothing"), then they also do not want to run ~M, even if all other players do so. We use ⊥ to denote the special machine that sends no messages and writes nothing on the output tape.

Definition 3.3 (Strong Universal Implementation) Let (~M, F′) be a (G, Z, p)-universal implementation of F with error ε. (~M, F′) is a strong (G, Z, p)-implementation of F if, for all n ∈ IN, all games G ∈ G with input length n, and all Z ∈ Z, if ~⊥Z is a p(n, ·)-robust best response to ~ΛF−Z in (G, F), then ~⊥Z is an ε-best response to ~M−Z in (G, F′).

4 Relating Cryptographic and Game-Theoretic Implementation

We briefly recall the notion of precise secure computation [37, 38], which is a strengthening of the traditional notion of secure computation [12]; more details are given in Appendix B. An m-ary functionality f is specified by a random process that maps vectors of inputs to vectors of outputs (one input and one output for each player). That is, f : (({0, 1}∗)^m × {0, 1}∞) → ({0, 1}∗)^m, where we view fi as the ith component of the output vector; that is, f = (f1, . . . , fm). We often abuse notation and suppress the random bitstring r, writing f(~x) or fi(~x). (We can think of f(~x) and fi(~x) as random variables.) A mediator F (resp., a machine profile ~M) computes f if, for all n ∈ N and all inputs ~x ∈ ({0, 1}^n)^m, the output vector received by the players when they tell the mediator F their inputs and output what F tells them (resp., the output vector of the players after an execution of ~M where Mi gets input xi) is identically distributed to f(~x). Roughly speaking, a protocol ~M for computing a function f is secure if, for every adversary A participating in the real execution of ~M, there exists a "simulator" Ã participating in an ideal execution where all players directly talk to a trusted third party (i.e., a mediator) computing f; the job of Ã is to provide appropriate inputs to the trusted party, and to reconstruct the view of A in the real execution such that no distinguisher D can distinguish the outputs of the parties and the view of the adversary A in the real and the ideal execution. (Note that in the real execution the view of the adversary is simply the actual view of A in the execution, whereas in the ideal execution it is the view output by the simulator Ã.) The traditional notion of secure computation [12] requires only that the worst-case complexity (size and running time) of Ã is polynomially related to that of A. Precise secure computation [37, 38] additionally requires that the running time of the simulator Ã "respects" the running time of the adversary A in an "execution-by-execution" fashion: a secure computation is said to have precision p(n, t) if the running time of the simulator Ã (on input security parameter n) is bounded by p(n, t) whenever Ã outputs a view in which the running time of A is t.

For our results, we slightly weaken the notion of precise secure computation (roughly speaking, by changing the order of the quantifiers, in analogy with the work of Dwork, Naor, Reingold, and Stockmeyer [39]). We also generalize the notion by allowing arbitrary complexity measures ~C (instead of just running time) and general adversary structures [40] (where the specification of a secure computation includes a set Z of subsets of players such that the adversary is allowed to corrupt only the players in one of the subsets in Z; in contrast, in [12, 37] only threshold adversaries are considered, where Z consists of all subsets up to a pre-specified size k). The formal definition of weak ~C-precise secure computation is given in Appendix B.1. Note that we can always regain the "non-precise" notion of secure computation by instantiating CZ(M, v) with the sum of the worst-case running time of M (on inputs of the same length as the input length in v) and the size of M. Thus, by the results of [12, 13, 41], it follows that there exist weak ~C-precise secure computation protocols with precision p(n, t) = poly(n, t) when CZ(M, v) is the sum of the worst-case running time of M and the size of M. The results of [37, 38] extend to show the existence of weak ~C-precise secure computation protocols with precision p(n, t) = O(t) when CZ(M, v) is the sum of the running time (as opposed to just the worst-case running time) of M(v) and the size of M. The results above continue to hold if we consider "coarse" measures of running time and size; for instance, if, say, n^2 computational steps correspond to one unit of complexity (in canonical machine games with input length n). See Section 4.2 for more details.

4.1 Equivalences

As a warm-up, we show that "error-free" secure computation, also known as perfectly-secure computation [41], already implies the traditional game-theoretic notion of implementation [22] (which does not consider computation). To do this, we first formalize the traditional game-theoretic notion using our notation: Let ~M be an m-player profile of machines. We say that (~M, F′) is a traditional game-theoretic implementation of F if (~M, F′) is a (G_nocomp, {{1}, . . . , {m}}, 0)-universal implementation of F with 0-error, where G_nocomp denotes the class of all m-player canonical machine games where the utility functions do not depend on the complexity profile. (Recall that the traditional notion does not consider computational games or coalition games.)

Proposition 4.1 If f is an m-ary functionality, F is a mediator that computes f, and ~M is a perfectly-secure computation of f, then (~M, comm) is a game-theoretic implementation of F.

Proof: We start by showing that running ~M is a Nash equilibrium if running ~ΛF with mediator F is one. Recall that the cryptographic notion of error-free secure computation requires that for every player i and every "adversarial" machine M′i controlling player i, there exists a "simulator" machine M̃i such that the outputs of all players in the execution of (M′i, ~M−i) are identically distributed to the outputs of the players in the execution of (M̃i, ~ΛF−i) with mediator F.5 In game-theoretic terms, this means that every "deviating" strategy M′i in the communication game can be mapped into a deviating strategy M̃i in the mediated game with the same output distribution for each type, and, hence, the same utility, since the utility depends only on the type and the output distribution; this follows since we require universality only with respect to games in G_nocomp. Since no deviations in the mediated game can give higher utility than the Nash equilibrium strategy of using ΛFi, running ~M must also be a Nash equilibrium.

It only remains to show that ~M and ~ΛF induce the same action distribution; this follows directly from the definition of secure computation by considering an adversary that does not corrupt any parties.

We note that the converse implication does not hold. Since the traditional game-theoretic notion of implementation does not consider computational cost, it does not take into account computational advantages possibly gained by using ~M, issues that are critical in the cryptographic notion of zero-knowledge simulation. We now show that weak precise secure computation is equivalent to strong G-universal implementation for certain natural classes G of games. For this result, we assume that the only machines that can have a complexity of 0 are those that "do nothing": we require that, for all complexity functions C, C(M, v) = 0 for some view v iff M = ⊥ iff C(M, v) = 0 for all views v. (Recall that ⊥ is a canonical representation of the TM that does nothing: it does not read its input, has no state changes, and writes nothing.) If G = ([m], M, Pr, ~C, ~u) is a canonical game with input length n, then

1. G is machine universal if the machine set M is the set of terminating Turing machines;

5 This follows from the fact that perfectly-secure computation is error-free.

2. G is normalized if the range of uZ is [0, 1] for all subsets Z of [m];

3. G is monotone (i.e., "players never prefer to compute more") if, for all subsets Z of [m], all type profiles ~t, all action profiles ~a, and all complexity profiles (cZ, ~c−Z), (c′Z, ~c−Z), if c′Z > cZ, then uZ(~t, ~a, (c′Z, ~c−Z)) ≤ uZ(~t, ~a, (cZ, ~c−Z));

4. G is a ~C′-game if C_Z = C′_Z for all subsets Z of [m].

Let G_{~C} denote the class of machine-universal, normalized, monotone, canonical ~C-games. For our theorem we need some minimal constraints on the complexity function. For the forward direction of our equivalence results (showing that precise secure computation implies universal implementation), we require that honestly running the protocol should have constant complexity, and that it be the same with and without a mediator. More precisely, we assume that the complexity profile ~C is ~M-acceptable, that is, for every subset Z, the machines (Λ^F)^b_Z and M^b_Z have the same complexity c_0 for all inputs; that is, C_Z((Λ^F)^b_Z, ·) = c_0 and C_Z(M^b_Z, ·) = c_0.⁶ As mentioned in Section 3, an assumption of this nature is necessary to satisfy universality with respect to general classes of games; it is easily satisfied if we consider "coarse" complexity functions. For the backward direction of our equivalence (showing that universal implementation implies precise secure computation), we require that certain operations, like moving output from one tape to another, do not incur any additional complexity. Such complexity functions are called output-invariant; we provide a formal definition in Appendix C.
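To make conditions 2 and 3 above concrete, the following minimal sketch (ours, not the paper's; all names illustrative) spot-checks them on sampled profiles:

    # Minimal sketch (ours): spot-checking the normalization and monotonicity
    # conditions of a coalition utility function u_Z on sampled profiles.
    # u_Z(t, a, c) takes a type profile, an action profile, and a complexity
    # profile (c_Z, c_minus_Z), and returns a real number.
    def is_normalized_on(u_Z, samples):
        # Condition 2: the range of u_Z lies in [0, 1].
        return all(0.0 <= u_Z(t, a, c) <= 1.0 for (t, a, c) in samples)

    def is_monotone_at(u_Z, t, a, c_minus_Z, cZ_low, cZ_high):
        # Condition 3: raising the coalition's complexity never raises its
        # utility ("players never prefer to compute more").
        assert cZ_high > cZ_low
        return u_Z(t, a, (cZ_high, c_minus_Z)) <= u_Z(t, a, (cZ_low, c_minus_Z))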

We can now state the connection between secure computation and game-theoretic implementation. In the forward direction, we restrict attention to protocols ~M computing some m-ary functionality f that satisfy the following natural property: if a subset of the players "aborts" (not sending any messages, and outputting nothing), their input is interpreted as λ.⁷ More precisely, ~M is an abort-preserving computation of f if for all n ∈ N, every subset Z of [m], and all inputs ~x ∈ ({0, 1}^n)^m, the output vector of the players after an execution of (~⊥_Z, ~M_{−Z}) on input ~x is identically distributed to f(λ_Z, ~x_{−Z}).
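The abort-preserving property can be illustrated by a simple Monte Carlo check; the following sketch assumes hypothetical interfaces run_protocol and f, with the string "BOT" standing in for the do-nothing machine ⊥:

    # Minimal sketch (hypothetical interfaces): a heuristic Monte Carlo test
    # of the abort-preserving property; identical empirical distributions are
    # of course only evidence, up to sampling error.
    from collections import Counter

    LAMBDA = ""

    def empirical_dist(sample_fn, trials=10000):
        return Counter(tuple(sample_fn()) for _ in range(trials))

    def looks_abort_preserving(run_protocol, f, M, Z, x, trials=10000):
        # Replace the machines of coalition Z by ⊥ ...
        aborted = [("BOT" if i in Z else M[i]) for i in range(len(M))]
        lhs = empirical_dist(lambda: run_protocol(aborted, x), trials)
        # ... and compare with f applied to x with Z's inputs set to λ.
        x_abort = [LAMBDA if i in Z else x[i] for i in range(len(x))]
        rhs = empirical_dist(lambda: f(x_abort), trials)
        return lhs == rhs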

Theorem 4.2 (Equivalence: Information-theoretic case) Suppose that f is an m-ary functionality, F is a mediator that computes f, ~M is a machine profile that computes f, Z is a set of subsets of [m], ~C is a complexity function, and p a precision function.

• If ~C is ~M-acceptable and ~M is an abort-preserving weak Z-secure computation of f with ~C-precision p and ε-statistical error, then (~M, comm) is a strong (G_{~C}, Z, p)-universal implementation of F with error ε.

• If ~C is ~M-acceptable and output-invariant, and (~M, comm) is a strong (G_{~C}, Z, p)-universal implementation of F with error ε′, then for every ε < ε′, ~M is a weak Z-secure computation of f with ~C-precision p and ε-statistical error.

⁶ Our results continue to hold if c_0 is a function of the input length n, but otherwise does not depend on the view.

⁷ All natural secure computation protocols that we are aware of (e.g., [12, 41]) satisfy this property.

As a corollary of Theorem 4.2, we get that known (precise) secure computation protocols directly yield appropriate universal implementations, provided that we consider complexity functions that are ~M-acceptable. For instance, by the results of [38, 41], every efficient m-ary functionality f has a weak Z-secure computation protocol ~M with ~C-precision p(n, t) = t if C_Z(M, v) is the sum of the running time of M(v) and the size of M, and Z consists of all subsets of [m] of size smaller than m/3. This result still holds if we consider "coarse" measures of running time and size where, say, O(n^c) computational steps (and size) correspond to one unit of complexity (in canonical machine games with input length n). Furthermore, protocol ~M is abort-preserving, has a constant-size description, and has running time smaller than some fixed polynomial O(n^c) (on inputs of length n). So, if we consider an appropriately coarse notion of running time and description size, C_Z is ~M-acceptable. By Theorem 4.2, it then immediately follows that every efficient m-ary functionality f has a strong (G_{~C}, Z, O(1))-universal implementation with error 0.
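For concreteness, here is a minimal sketch (ours, not the paper's) of such a coarse complexity function:

    # Minimal sketch (ours): a "coarse" complexity function in which O(n^c)
    # computational steps plus description size correspond to one unit of
    # complexity. The arguments are plain numbers.
    import math

    def coarse_complexity(num_steps, description_size, n, c=2):
        # A protocol whose honest execution always runs in at most n**c steps
        # with an O(1) description therefore has the same constant complexity
        # on every input, with or without a mediator, which is exactly what
        # ~M-acceptability requires.
        return math.ceil((num_steps + description_size) / n**c)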

Theorem 4.2 also shows that a universal implementation of a mediator F computing a function f with respect to general classes of games is "essentially" as hard to achieve as a secure computation of f. In particular, as long as the complexity function is output-invariant, such a universal implementation is a weak precise secure computation. Although the output-invariance condition might seem somewhat artificial, Theorem 4.2 illustrates that overcoming the "secure-computation barrier" with respect to general classes of games requires making strong (and arguably unnatural⁸) assumptions about the complexity function. We have not pursued this path. In Section 5, we instead consider universal implementation with respect to restricted classes of games. As we shall see, this provides an avenue for circumventing traditional impossibility results with respect to secure computation.

In Section 4.2 we also provide a "computational" analogue of Theorem 4.2, as well as a characterization of the "standard" (i.e., "non-precise") notion of secure computation. We provide a proof overview of Theorem 4.2 here; we defer the complete proof to the full version.

⁸ With a coarse complexity measure, it seems natural to assume that moving content from one output tape to another incurs no change in complexity.



Proof overview: Needless to say, the following oversimplified sketch leaves out many crucial details that complicate the proof.

Weak precise secure computation implies strong universal implementation. At first glance, it might seem that the traditional notion of secure computation of [12] easily implies the notion of universal implementation: if there exists some (deviating) strategy A in the communication game implementing mediator F that results in a different distribution over actions than in equilibrium, then the simulator à for A could be used to obtain the same distribution; moreover, the running time of the simulator is within a polynomial of that of A. Thus, it would seem that secure computation implies that any "poly"-robust equilibrium can be implemented. However, the utility function in the game considers the complexity of each execution of the computation. So, even if the worst-case running time of à is polynomially related to that of A, the utility of corresponding executions might be quite different. This difference may have a significant effect on the equilibrium. To make the argument go through we need a simulation that preserves complexity in an execution-by-execution manner. This is exactly what precise zero knowledge [37] does. Thus, intuitively, the degradation in computational robustness by a universal implementation corresponds to the precision of a secure computation.

More precisely, to show that a machine profile ~M is a universal implementation, we need to show that whenever ~Λ is a p-robust equilibrium in a game G with mediator F, then ~M is an ε-equilibrium (with the communication mediator comm). Our proof proceeds by contradiction: we show that a deviating strategy M′_Z (for a coalition Z) for the ε-equilibrium ~M can be turned into a deviating strategy M̃_Z for the p-robust equilibrium ~Λ. We here use the fact that ~M is a weak precise secure computation to find the machine M̃_Z; intuitively, M̃_Z will be the simulator for M′_Z. The key step in the proof is a method for embedding any coalition machine game G into a distinguisher D that "emulates" the role of the utility function in G. If done appropriately, this ensures that the utility of the (simulator) strategy M̃_Z is close to the utility of the strategy M′_Z, which contradicts the assumption that ~Λ is an ε-Nash equilibrium.

The main obstacle in embedding the utility function of G into a distinguisher D is that the utility of a machine M̃_Z in G depends not only on the types and actions of the players, but also on the complexity of running M̃_Z. In contrast, the distinguisher D does not get the complexity of M̃_Z as input (although it gets its output v). On a high level (and oversimplifying), to get around this problem, we let D compute the utility assuming (incorrectly) that M̃_Z has complexity c = C(M′_Z, v) (i.e., the complexity of M′_Z in the view v output by M̃_Z). Suppose, for simplicity, that M̃_Z is always "precise" (i.e., it always respects the complexity bounds).⁹ Then it follows that (since the complexity c is always close to the actual complexity of M̃_Z in every execution) the utility computed by D corresponds to the utility of some game G̃ that is at most a p-speedup of G. (To ensure that G̃ is indeed a speedup and not a "slow-down", we need to take special care with simulators that potentially run faster than the adversary they are simulating. The monotonicity of G helps us to circumvent this problem.) Thus, although we are not able to embed G into the distinguisher D, we can embed a related game G̃ into D. This suffices to show that ~Λ is not a Nash equilibrium in G̃, contradicting the assumption that ~Λ is a p-robust Nash equilibrium. A similar argument can be used to show that ⊥ is also an ε-best response to ~M_{−Z} if ⊥ is a p-robust best response to ~Λ_{−Z}, demonstrating that ~M in fact is a strong universal implementation. We here rely on the fact that ~M is abort-preserving to ensure that aborting in (G, F) has the same effect as in (G, comm).
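To illustrate the embedding, here is a schematic sketch (ours, with hypothetical interfaces) of a distinguisher that "emulates" the utility function, charging whatever machine produced the view the complexity of the real adversary M′_Z in that view:

    # Schematic sketch (ours): embedding a utility function into a
    # distinguisher. u_Z is normalized, so utilities lie in [0, 1].
    import random

    def make_distinguisher(u_Z, C, M_prime_Z):
        def D(z, triple, precise_flag):
            x, y, v = triple             # inputs, outputs, and the output view
            if not precise_flag:         # precision violated: distinguish at once
                return 1
            c = C(M_prime_Z, v)          # the assumed complexity, as in the text
            utility = u_Z(x, y, c)
            return 1 if random.random() < utility else 0
        return D

By construction, Pr[D = 1] equals the expected utility computed with the assumed complexity, so a utility gap translates into a distinguishing gap.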

Strong universal implementation implies weak precise secure computation. To show that strong universal implementation implies weak precise secure computation, we again proceed by contradiction. We show how the existence of a distinguisher D and an adversary M′_Z that cannot be simulated by any machine M̃_Z can be used to construct a game G for which ~M is not a strong implementation. The idea is to have a utility function that assigns high utility to some "simple" strategy M*_Z. In the mediated game with F, no strategy can get better utility than M*_Z. On the other hand, in the cheap-talk game, the strategy M′_Z does get higher utility than M*_Z. As D indeed is a function that "distinguishes" a mediated execution from a cheap-talk game, our approach will be to try to embed the distinguisher D into the game G. The choice of G depends on whether M′_Z = ⊥. We now briefly describe these games.

If M′_Z = ⊥, then there is no simulator for the machine ⊥ that simply halts. In this case, we construct a game G where using ⊥ results in a utility that is determined by running the distinguisher. (Note that ⊥ can be easily identified, since it is the only strategy that has complexity 0.) All other strategies instead get some canonical utility d, which is higher than the utility of ⊥ in the mediated game. However, since ⊥ cannot be "simulated", playing ⊥ in the cheap-talk game leads to an even higher utility, contradicting the assumption that ~M is a universal implementation.

If M′_Z ≠ ⊥, we construct a game G′ in which each strategy other than ⊥ gets a utility that is determined by running the distinguisher. Intuitively, efficient strategies (i.e., strategies that have relatively low complexity compared to M′_Z) that output views on which the distinguisher outputs 1 with high probability get high utility. On the

⁹ This is an unjustified assumption; in the actual proof we need to consider a more complicated construction.



other hand, ⊥ gets a utility d that is at least as good as what the other strategies can get in the mediated game with F. This makes ⊥ a best response in the mediated game; in fact, we can define the game G′ so that it is actually a p-robust best response. However, it is not even an ε-best response in the cheap-talk game: M′_Z gets higher utility, as it receives a view that cannot be simulated. (The output-invariance condition on the complexity function C is used to argue that M′_Z can output its view at no cost.)
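Schematically, the utility function of G′ can be pictured as follows (a sketch under our own simplifications, not the actual construction):

    # Schematic sketch (ours) of G' for the case M'_Z != ⊥; all names are
    # hypothetical. A strategy is represented here only by its complexity
    # and by the distinguisher's verdict on the view it outputs.
    def make_utility_G_prime(d, c_bound):
        # d is at least as good as anything achievable in the mediated game
        # (making ⊥ a best response there), yet below the payoff that an
        # unsimulatable M'_Z obtains in the cheap-talk game.
        def u_Z(complexity, distinguisher_accepts):
            if complexity == 0:          # only ⊥ has complexity 0
                return d
            if complexity <= c_bound:    # "efficient" relative to M'_Z
                return 1.0 if distinguisher_accepts else 0.0
            return 0.0
        return u_Z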

4.2 A Computational Equivalence Theorem

To prove a "computational" analogue of our equivalence theorem (relating computational precise secure computation and universal implementation), we need to introduce some further restrictions on the complexity functions and the classes of games considered.

• A (vector of) complexity functions ~C is efficient if each function is computable by a (randomized) polynomial-sized circuit.

• A secure computation game G = ([m], M_{T(n)}, Pr, ~C, ~u) with input length n is said to be T(·)-machine universal if
  – the machine set M_{T(n)} is the set of Turing machines implementable by T(n)-sized randomized circuits, and
  – ~u is computable by a T(n)-sized circuit.

Let G_{~C,T} denote the class of T(·)-machine-universal, normalized, monotone, canonical ~C-games. Let G_{~C,poly} denote the union of G_{~C,T} for all polynomial functions T.

Theorem 4.3 (Equivalence: Computational Case) Suppose that f is an m-ary functionality, F is a mediator that computes f, ~M is a machine profile that computes f, Z is a set of subsets of [m], ~C is an efficient complexity function, and p a precision function.

• If ~C is ~M-acceptable and ~M is an abort-preserving weak Z-secure computation of f with computational ~C-precision p, then for every polynomial T, there exists some negligible function ε such that (~M, comm) is a strong (G_{~C,T}, Z, p)-universal implementation of F with error ε.

• If ~C is ~M-acceptable and output-invariant, and for every polynomial T, there exists some negligible function ε such that (~M, comm) is a strong (G_{~C,T}, Z, p)-universal implementation of F with error ε, then ~M is a weak Z-secure computation of f with computational ~C-precision p.

A proof of Theorem 4.3 appears in the full version. Note that Theorem 4.3 also provides a game-theoretic characterization of the "standard" (i.e., "non-precise") notion of secure computation. We simply consider a "coarse" version of the complexity function wc(v) that is the sum of the size of M and the worst-case running time of M on inputs of the same length as in the view v. (We need a coarse complexity function to ensure that ~C is ~M-acceptable and output-invariant.) With this complexity function, the definition of weak precise secure computation reduces to the traditional notion of weak secure computation without precision (or, more precisely, with "worst-case" precision, just as in the traditional definition). Given this complexity function, the precision of a secure computation protocol becomes the traditional "overhead" of the simulator (this is also called knowledge tightness [42]). Roughly speaking, "weak secure computation" with overhead p is thus equivalent to strong (G_{~wc,poly}, p)-universal implementation with negligible error.
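A minimal sketch (ours; `size` and `worst_case_steps` are assumed accessors) of this coarse worst-case complexity function:

    # Minimal sketch (ours): the coarse worst-case complexity function wc.
    # It depends only on M's description size and its worst-case running time
    # on inputs of the length occurring in the view, not on the view itself.
    def wc(M, input_length, size, worst_case_steps):
        # Because wc ignores the particular view, moving output between tapes
        # cannot change it (so it is output-invariant), and it is constant on
        # honest runs (so it is ~M-acceptable).
        return size(M) + worst_case_steps(M, input_length)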

5 Directions for Future Research

We have defined a general approach to taking computation into account in game theory that subsumes previous approaches, and shown a close connection between computationally robust Nash equilibria and precise secure computation. This opens the door to a number of exciting research directions in both secure computation and game theory. We describe a few here:

• Our equivalence result for secure computation might seem like a negative result. It demonstrates that considering only rational players (as opposed to adversarial players) does not facilitate protocol design. Note, however, that for the equivalence to hold, we must consider implementations universal with respect to essentially all games. In many settings, it might be reasonable to consider implementations universal with respect to only certain subclasses of games; in such scenarios, universal implementations may be significantly simpler or more efficient, and may also circumvent traditional lower bounds. We list some natural restrictions on classes of games below, and discuss how such restrictions can be leveraged in protocol design. These examples illustrate some of the benefits of a fully game-theoretic notion of security that does not rely on the standard cryptographic simulation paradigm, and show how our framework can capture in a natural way a number of natural notions of security.

To relate our notions to the standard definition of secure computation, we here focus on classes of games G that are subsets of G_{~wc,poly} (as defined in Section 4.2). Furthermore, we consider only 2-player games and restrict attention to games G where the utility function is separable in the following sense: there is a standard game G′ (where computational costs are not taken into account) and a function u^C_i on complexity profiles for each player i, such that, for each player i, u_i(~t, ~a, ~c) = u^{G′}_i(~t, ~a) + u^C_i(~c). We refer to G′ as the standard game embedded in G. Intuitively,



this says that the utilities in G are just the sum of the utility in a game G′ where computation is not taken into account and a term that depends only on computation costs.
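For concreteness, a separable utility can be sketched as follows (all names ours):

    # Minimal sketch (ours): a separable utility, the sum of an embedded
    # standard game's utility and a term depending only on complexities.
    def make_separable_utility(u_standard, u_complexity):
        def u(i, types, actions, complexities):
            return u_standard(i, types, actions) + u_complexity(i, complexities)
        return u

    # Example: guessing your own type pays 1; each complexity unit costs 0.01.
    u = make_separable_utility(
        lambda i, t, a: 1.0 if a[i] == t[i] else 0.0,  # embedded standard game G'
        lambda i, c: -0.01 * c[i],                     # computation-cost term
    )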

Games with punishment: Many natural situations can be described as games where players can choose actions that "punish" an individual player i. For instance, this punishment can represent the cost of being excluded from future interactions. Intuitively, games with punishment model situations where players do not want to be caught cheating. Punishment strategies (such as the grim-trigger strategy in repeated prisoner's dilemma, where a player defects forever once his opponent defects once [21]) are extensively used in the game-theory literature. We give two examples where cryptographic protocol design is facilitated when requiring only implementations that are universal with respect to games with punishment.

Covert adversaries: As observed by Malkhi et al. [43], and more recently formalized by Aumann and Lindell [44], in situations where players do not want to be caught cheating, it is easier to construct efficient protocols. Using our framework, we can formalize this intuition in a straightforward way. To explain the intuitions, we consider a particularly simple setting. Let G_punish consist of normalized 2-player games G (with a standard game G′ embedded in G), where (1) honestly reporting your input and outputting whatever the mediator replies (i.e., playing the strategy implemented by ~Λ) is a Nash equilibrium in (G′, F) where both players are guaranteed to get utility 1/2 (not just in expectation, but even in the worst case), and (2) there is a special string punish such that player 1 − i receives payoff 0 in G if player i outputs the string punish. In this setting, we claim that any secure computation of a function f with respect to covert adversaries with deterrent 1/2 in the sense of [44, Definition 3.3] is a (G_punish, poly, Z)-universal implementation of F with negligible error (where Z = {{1}, {2}}, and F is a mediator computing f). Intuitively, this follows since a "cheating" adversary gets caught with probability 1/2 and thus punished; the expected utility of cheating is thus at most 1/2 × 1 + 1/2 × 0 = 1/2, which is no greater than the expected utility of playing honestly.

Fairness: It is well known that, for many functions, secure 2-player computation where both players receive output is impossible if we require fairness (i.e., that either both or neither of the players receives an output) [45]. Such impossibility results can be easily circumvented by considering universal implementation with respect to games with punishment. This follows from the fact that although it is impossible to get secure computation with fairness, the weaker notion of secure computation with abort [46] is achievable. Intuitively, this notion guarantees that the only attack possible is one where one of the players prevents the other player from getting its output; this is called an abort. This is formalized by adapting the trusted party in the ideal model to allow the adversary to send a special abort message to the trusted party after seeing its own output, which blocks it from delivering an output to the honest party. To get a universal implementation with respect to games with punishment, it is sufficient to use any secure computation protocol with abort (see [38, 46]) modified so that players output punish if the other player aborts, as in the sketch below. It immediately follows that a player can never get a higher utility by aborting (as this will be detected by the other player, and consequently the aborting player will be punished). This result can be viewed as a generalization of the approach of [5].¹⁰
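A minimal sketch of this wrapper, assuming a hypothetical secure_with_abort primitive that reports whether the other player aborted:

    # Minimal sketch (hypothetical API): wrap a secure-computation-with-abort
    # protocol so that an abort by the other player triggers the punish output.
    PUNISH = "punish"

    def run_with_punishment(secure_with_abort, my_input):
        output, other_aborted = secure_with_abort(my_input)
        if other_aborted:
            return PUNISH   # in G_punish, the aborting player then gets payoff 0
        return output

Since an abort is always detected and punished, aborting can never improve a player's utility.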

Strictly monotone games: In our equivalence results we considered monotone games, where players never prefer to compute more. It is sometimes reasonable to assume that players strictly prefer to compute less. We outline a few possible advantages of considering universal implementations with respect to strictly monotone games.

Gradual-release protocols: One vein of research on secure computation considers protocols for achieving fair exchanges using gradual-release protocols (see, e.g., [47]). In a gradual-release protocol, the players are guaranteed that if at any point one player aborts, then the other player(s) can compute the output within a comparable amount of time (e.g., we can require that if a player aborts and can compute the answer in t time units, then all the other players should be able to compute it within 2t time units). We believe that by making appropriate assumptions about the utility of computation, we can ensure that players never have incentives to deviate. Consider, for instance, a two-player computation of a function f where in the last phase the players invoke a gradual exchange protocol such that if any player aborts during the gradual exchange protocol, the other player attempts to recover the secret using a brute-force search. Intuitively, if for each player the cost of computing t extra steps is positive, even if the other player computes, say, 2t extra steps, it will never

¹⁰ For this application, it is not necessary to use our game-theoretic definition of security. An alternative way to capture fairness in this setting would be to require security with respect to the standard (simulation-based) definition with abort, and additionally fairness (but not security) with respect to rational agents, according to the definition of [5, 25]; this approach is similar to the one used by Kol and Naor [27]. Our formalization is arguably more natural, and also considers rational agents that "care" about computation.



be worth it for a player to abort: by the security of the gradual-release protocol, an aborting player can only get its output twice as fast as the other player. We note that merely making assumptions about the cost of computing will not be sufficient to make this approach work; we also need to ensure that players prefer to get the output than not to get it, even if they can trick other players into computing for a long time. Otherwise, a player might prefer to abort and not compute anything, while the other player attempts to compute the output. We leave a full exploration of this approach for future work.
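To make the cost-benefit reasoning explicit, here is one possible formalization (ours, not the paper's). Suppose that computing s extra steps costs player i at least δ · s for some δ > 0, that an aborting player must itself run the t-step brute-force search to obtain the output, and that (as required above) a player's utility does not increase merely because the other player is delayed. Then, ignoring the comparatively small cost of the remaining protocol messages,

    u_i(abort) ≤ u_i(finish honestly) − δ · t < u_i(finish honestly),

so aborting during the gradual exchange is never a profitable deviation.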

Error-free implementations: Unlike perfectly-secure protocols, computationally-secure protocols inherently have a nonzero error probability. For instance, secure 2-player computation can be achieved only with computational security (with nonzero error probability). By our equivalence result, it follows that strong universal implementations with respect to the most general classes of 2-player games also require nonzero error probability. Considering universality with respect to only strictly monotone games gives an approach for achieving error-free implementations. This seems particularly promising if we consider an idealized model where cryptographic functionalities (such as one-way functions) are modeled as black boxes (see, e.g., the random oracle model of Bellare and Rogaway [48]), and the complexity function considers the number of calls to the cryptographic function. Intuitively, if the computational cost of trying to break the cryptographic function is higher than the expected gain, it is not worth deviating from the protocol. We leave open an exploration of this topic. (A recent paper by Micali and Shelat [49] relies on this idea, in combination with physical devices, to achieve error-free implementations in the context of secret sharing.)

Using computation as payment. Shoham and Tennenholtz [28] have investigated what functions f of two players' inputs x_1, x_2 can be computed by the players if they have access to a trusted party. The players are assumed to want to get the output y = f(x_1, x_2), but each player i does not want to reveal more about his input x_i than what can be deduced from y. Furthermore, each player i prefers that other players do not get the output (although this is not as important as i getting the output and not revealing its input x_i). Interestingly, as Shoham and Tennenholtz point out, the simple binary function AND cannot be truthfully computed by two players, even if they have access to a trusted party. A player that has input 0 always knows the output y and thus does not gain anything from providing its true input to the trusted party: in fact, it always prefers to provide the input 1 in order to trick the other player.

We believe that for strictly monotone games this problem can be overcome by the use of cryptographic protocols. The idea is to construct a cryptographic protocol for computing AND where the players are required to solve a computational puzzle if they want to use 1 as input; if they use input 0 they are not required to solve the puzzle. The puzzle should have the property that it requires a reasonable amount of computational effort to solve. If this computational effort is more costly than the potential gain of tricking the other player to get the wrong output, then it is not worth it for a player to provide input 1 unless its input actually is 1. To make this work, we need to make sure the puzzle is "easy" enough to solve, so that a player with input 1 will actually want to solve the puzzle in order to get the correct output. We leave a full exploration of this idea for future work.
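As an illustration, puzzle-gated input submission might look as follows (a toy sketch; the hash-based puzzle and all parameters are our own illustrative choices, not a construction from the paper):

    # Toy sketch (ours): a player may submit input 1 to the AND computation
    # only together with the solution to a computational puzzle, whose cost
    # is set above the gain from tricking the other player but below the
    # value of learning the true output.
    import hashlib, itertools

    def solve_puzzle(challenge, difficulty=20):
        # Toy proof-of-work: find s such that H(challenge|s) starts with
        # `difficulty` zero bits; expected cost grows as 2**difficulty.
        for s in itertools.count():
            h = hashlib.sha256(f"{challenge}|{s}".encode()).digest()
            if int.from_bytes(h, "big") >> (256 - difficulty) == 0:
                return s

    def submit_input(bit, challenge):
        if bit == 1:
            return (1, solve_puzzle(challenge))  # costly, so only worth it if true
        return (0, None)                         # input 0 requires no work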

More generally, we believe that combining computational assumptions with assumptions about utility will be a fruitful line of research for secure computation. For instance, it is conceivable that difficulties associated with concurrent executability of protocols could be alleviated by making assumptions regarding the cost of message scheduling; the direction of Cohen, Kilian, and Petrank [50] (where players who delay messages are themselves punished with delays) seems relevant in this regard.

• As we have seen, universal implementation is equivalent to a variant of precise secure computation with the order of quantification reversed. It would be interesting to find a notion of implementation that corresponds more closely to the standard definition, without a change in the order of quantifiers; in particular, whereas the traditional definition of zero knowledge guarantees deniability (i.e., the property that the interaction does not leave any "trace"), the new one does not. Finding a game-theoretic definition that also captures deniability seems like an interesting question.

• We showed that Nash equilibria do not always exist in games with computation. This leaves open the question of what the appropriate solution concept is.

• Our notion of universal implementation uses Nash equilibrium as the solution concept. It is well known that in (traditional) extensive-form games (i.e., games defined by a game tree), a Nash equilibrium might prescribe non-optimal moves at game histories that do not occur on the equilibrium path. This can lead to "empty threats": "punishment" strategies that are non-optimal and thus not credible. Many recent works on implementation (see, e.g., [26, 35]) therefore focus on stronger solution concepts such as



sequential equilibrium [29]. We note that when taking computation into account, the distinction between credible and non-credible threats becomes more subtle: the threat of using a non-optimal strategy in a given history might be credible if, for instance, the overall complexity of the strategy is smaller than any strategy that is optimal at every history. Thus, a simple strategy that is non-optimal off the equilibrium path might be preferred to a more complicated (and thus more costly) strategy that performs better off the equilibrium path (indeed, people often use non-optimal but simple "rules of thumb" when making decisions). Finding a good definition of empty threats in games with computation costs seems challenging.

One approach to dealing with empty threats is to consider the notion of sequential equilibrium in machine games. Roughly speaking, in a sequential equilibrium, every player must make a best response at every information set, where an information set is a set of nodes in the game tree that the agent cannot distinguish. The standard assumption in game theory is that the information set is given exogenously, as part of the description of the game tree. As Halpern [51] has argued, an exogenously given information set does not always represent the information that an agent actually has. The issue becomes even more significant in our framework. While an agent may have information that allows him to distinguish two nodes, the computation required to realize that they are different may be prohibitive, and (with computational costs) an agent can rationally decide not to do the computation. This suggests that the information sets should be determined by the machine. More generally, in defining solution concepts, it has proved necessary to reason about players' beliefs about other players' beliefs; now these beliefs will have to include beliefs regarding computation. Finding a good model of such beliefs seems challenging. See the full paper for further discussion of sequential equilibrium.

• A natural next step would be to introduce notions

of computation in the epistemic logic. There has already been some work in this direction (see, for example, [52–54]). We believe that combining the ideas of this paper with those of the earlier papers will allow us to get, for example, a cleaner knowledge-theoretic account of zero knowledge than that given by Halpern, Moses, and Tuttle [52]. A first step in this direction is taken in [55].

• Finally, it would be interesting to use behavioral

experiments to, for example, determine the "cost of computation" in various games (such as the finitely repeated prisoner's dilemma).

6 Acknowledgments

The second author wishes to thank Silvio Micali and abhi shelat for many exciting discussions about cryptography and game theory, and Ehud and Adam Kalai for enlightening discussions.

Appendix

A Coalition Machine Games

We strengthen the notion of Nash equilibrium to allow for deviating coalitions. Towards this goal, we consider a generalization of Bayesian machine games called coalition machine games, where, in the spirit of coalitional games [31], each subset of players has a complexity function and utility function associated with it. In analogy with the traditional notion of Nash equilibrium, which considers only "single-player" deviations, we consider only "single-coalition" deviations.

More precisely, given a subset Z of [m], we let −Z denote the set [m] \ Z. We say that a machine M′_Z controls the players in Z if M′_Z controls the input and output tapes of the players in set Z (and thus can coordinate their outputs). In addition, the adversary that controls Z has its own input and output tape. A coalition machine game G is described by a tuple ([m], M, Pr, ~C, ~u), where ~C and ~u are sequences of complexity functions C_Z and utility functions u_Z, respectively, one for each subset Z of [m]; m, M, and Pr are defined as in Definition 2.1. In contrast, the utility function u_Z for the set Z is a function T × ({0, 1}^*)^m × (IN × IN^{m−|Z|}) → IR, where u_Z(~t, ~a, (c_Z, ~c_{−Z})) is the utility of the coalition Z if ~t is the (length m + 1) type profile, ~a is the (length m) action profile (where we identify i's action with player i's output), c_Z is the complexity of the coalition Z, and ~c_{−Z} is the (length m − |Z|) profile of machine complexities for the players in −Z. The complexity c_Z is a measure of the complexity, according to whoever controls coalition Z, of running the coalition. Note that even if the coalition is controlled by a machine M′_Z that lets each of the players in Z perform independent computations, the complexity of M′_Z is not necessarily some function of the complexities c_i of the players i ∈ Z (such as the sum or the max). Moreover, while cooperative game theory tends to focus on superadditive utility functions, where the utility of a coalition is at least the sum of the utilities of any partition of the coalition into sub-coalitions or individual players, we make no such restrictions; indeed, when taking complexity into account, it might very well be the case that larger coalitions are more expensive than smaller ones. Also note that, in our calculations, we assume that, other than the coalition Z, all the other players play individually (so that we use c_i for i ∉ Z); there is at most one coalition in the picture. Having defined u_Z, we can define the expected utility of the group Z in the obvious way.



The benign machine for coalition Z, denoted M^b_Z, is the one that gives each player i ∈ Z its true input and has each player i ∈ Z output the output of M_i; M^b_Z writes nothing on its own output tape. Essentially, the benign machine does exactly what all the players in the coalition would have done anyway. We now extend the notion of Nash equilibrium to deal with coalitions; it requires that in an equilibrium ~M, no coalition does (much) better than it would using the benign machine, according to the utility function for that coalition.

Definition A.1 (NE in coalition machine games) Given an m-player coalition machine game G, a machine profile ~M, a subset Z of [m], and ε ≥ 0, M^b_Z is an ε-best response to ~M_{−Z} if, for every coalition machine M′_Z ∈ M,

U^G_Z[(M^b_Z, ~M_{−Z})] ≥ U^G_Z[(M′_Z, ~M_{−Z})] − ε.

Given a set Z of subsets of [m], ~M is a Z-safe ε-Nash equilibrium for G if, for all Z ∈ Z, M^b_Z is an ε-best response to ~M_{−Z}.

Our notion of coalition games is quite general. In particular, if we disregard the costs of computation, it allows us to capture some standard notions of coalition resistance in the literature, by choosing u_Z appropriately. For example, Aumann's [56] notion of strong equilibrium requires that, for all coalitions, it is not the case that there is a deviation that makes everyone in the coalition strictly better off. To capture this, fix a profile ~M, and define u^{~M}_Z(M′_Z, ~M′_{−Z}) = min_{i∈Z} u_i(M′_Z, ~M′_{−Z}) − u_i(~M).¹¹ We can capture the notion of k-resilient equilibrium [23, 33], where the only deviations allowed are by coalitions of size at most k, by restricting Z to consist of sets of cardinality at most k (so a 1-resilient equilibrium is just a Nash equilibrium). Abraham et al. [23, 33] also consider a notion of strong k-resilient equilibrium, where there is no deviation by the coalition that makes even one coalition member strictly better off. We can capture this by replacing the min in the definition of u^{~M}_Z by max.
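In code, this choice of u_Z can be sketched as follows (names ours; u(i, profile) gives player i's utility at a machine profile):

    # Minimal sketch (ours): capturing strong equilibrium and strong
    # k-resilience by the choice of the coalition utility u_Z above.
    def coalition_utility(u, M_fixed, Z, deviation_profile, aggregate=min):
        # u_Z^{~M}(deviation) = aggregate over i in Z of u_i(deviation) - u_i(~M)
        return aggregate(u(i, deviation_profile) - u(i, M_fixed) for i in Z)

    # aggregate=min captures Aumann's strong equilibrium (no deviation makes
    # everyone in the coalition better off); aggregate=max captures strong
    # k-resilience (no deviation makes even one member better off).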

B Precise Secure Computation

In this section, we review the notion of precise secure computation [37, 38], which is a strengthening of the traditional notion of secure computation [12]. We consider a system where players are connected through secure (i.e., authenticated and private) point-to-point channels. We consider a malicious adversary that is allowed to corrupt a subset of the m players before the interaction begins; these players may then deviate arbitrarily from the protocol. Thus, the adversary is static; it cannot corrupt players based on history.

¹¹ Note that if we do not disregard the cost of computation, it is not clear how to define the individual complexity of a player that is controlled by M′_Z.

As usual, the security of protocol ~M for computing a function f is defined by comparing the real execution of ~M with an ideal execution where all players directly talk to a trusted third party (i.e., a mediator) computing f. In particular, we require that the outputs of the players in both of these executions cannot be distinguished, and additionally that the view of the adversary in the real execution can be reconstructed by the ideal-execution adversary (called the simulator). Additionally, precision requires that the running time of the simulator in each run of the ideal execution is closely related to the running time of the real-execution adversary in the (real-execution) view output by the simulator.

The ideal execution Let f be an m-ary functionality. Let à be a probabilistic polynomial-time machine (representing the ideal-model adversary) and suppose that à controls the players in Z ⊆ [m]. We characterize the ideal execution of f given adversary à using a function denoted IDEAL_{f,Ã} that maps an input vector ~x, an auxiliary input z, and a tuple (r_Ã, r_f) ∈ ({0, 1}^∞)^2 (a random string for the adversary à and a random string for the trusted third party) to a triple (~x, ~y, v), where ~y is the output vector of the players 1, . . . , m, and v is the output of the adversary à on its tape given input (z, ~x, r_Ã), computed according to the following three-stage process.

In the first stage, each player i receives its input x_i. Each player i ∉ Z next sends x_i to the trusted party. (Recall that in the ideal execution, there is a trusted third party.) The adversary à determines the value x′_i ∈ {0, 1}^* a player i ∈ Z sends to the trusted party. We assume that the system is synchronous, so the trusted party can tell if some player does not send a message; if player i does not send a message, i is taken to have sent λ. Let ~x′ be the vector of values received by the trusted party. In the second stage, the trusted party computes y_i = f_i(~x′, r_f) and sends y_i to P_i for every i ∈ [m]. Finally, in the third stage, each player i ∉ Z outputs the value y_i received from the trusted party. The adversary à determines the output of the players i ∈ Z. à finally also outputs an arbitrary value v (which is supposed to be the "reconstructed" view of the real-execution adversary A). Let view_{f,Ã}(~x, z, ~r) denote the view of à in this execution. We occasionally abuse notation and suppress the random strings, writing IDEAL_{f,Ã}(~x, z) and view_{f,Ã}(~x, z); we can think of IDEAL_{f,Ã}(~x, z) and view_{f,Ã}(~x, z) as random variables.
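The three-stage process can be summarized in the following sketch (hypothetical interfaces; `adv` plays the role of Ã, and f(i, ...) plays the role of f_i):

    # Minimal sketch (hypothetical interfaces) of the three-stage ideal
    # execution defining IDEAL_{f,Ã}.
    def ideal_execution(f, adv, Z, x, z, r_adv, r_f):
        m = len(x)
        LAMBDA = ""
        # Stage 1: honest players forward their inputs; the adversary picks
        # the inputs of corrupted players (a missing message counts as λ).
        x_sent = [adv.choose_input(i, z, x, r_adv) if i in Z else x[i]
                  for i in range(m)]
        x_sent = [xi if xi is not None else LAMBDA for xi in x_sent]
        # Stage 2: the trusted party computes and returns each player's output.
        y = [f(i, x_sent, r_f) for i in range(m)]
        # Stage 3: honest players output what they received; the adversary
        # chooses corrupted players' outputs and its own output v (the
        # "reconstructed" view of the real-execution adversary).
        outputs = [adv.choose_output(i, y, r_adv) if i in Z else y[i]
                   for i in range(m)]
        v = adv.final_output(z, x, r_adv)
        return (x, outputs, v)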

The real execution Let f be an m-ary functionality, let Π be a protocol for computing f, and let A be a machine that controls the same set Z of players as Ã. We characterize the real execution of Π given adversary A using a function denoted REAL_{Π,A} that maps an input vector ~x, an auxiliary input z, and a tuple ~r ∈ ({0, 1}^∞)^{m+1−|Z|} (m − |Z| random strings for the players not in Z and a random string for the adversary A), to a triple (~x, ~y, v), where ~y is the output of players 1, . . . , m, and v is the view of A that results from executing protocol Π on inputs ~x, when players i ∈ Z are controlled by the adversary A, who is given auxiliary input z. As before, we often suppress the vector of random bitstrings ~r and write REAL_{Π,A}(~x, z).

We now formalize the notion of precise secure computation. For convenience, we slightly generalize the definition of [37] to consider general adversary structures [40]. More precisely, we assume that the specification of a secure computation protocol includes a set Z of subsets of players, where the adversary is allowed to corrupt only the players in one of the subsets in Z; the definitions of [12, 37] consider only threshold adversaries where Z consists of all subsets up to a pre-specified size k. We first provide a definition of precise computation in terms of running time, as in [37], although other complexity functions could be used; we later consider general complexity functions.

Let STEPS be the complexity function that, on input a machine M and a view v, roughly speaking, gives the number of "computational steps" taken by M in the view v. In counting computational steps, we assume a representation of machines such that a machine M, given as input an encoding of another machine A and an input x, can emulate the computation of A on input x with only linear overhead. (Note that this is clearly the case for "natural" memory-based models of computation. An equivalent representation is a universal Turing machine that receives the code it is supposed to run on one input tape.)

In the following definition, we say that a function is negligible if it is asymptotically smaller than the inverse of any fixed polynomial. More precisely, a function ν : IN → IR is negligible if, for all c > 0, there exists some n_c such that ν(n) < n^{−c} for all n > n_c.

Roughly speaking, a computation is secure if the ideal execution cannot be distinguished from the real execution. To make this precise, a distinguisher is used. Formally, a distinguisher gets as input a bitstring z, a triple (~x, ~y, v) (intuitively, the output of either IDEAL_{f,Ã} or REAL_{Π,A} on (~x, z) and some appropriate-length tuple of random strings), and a random string r, and outputs either 0 or 1. As usual, we typically suppress the random bitstring and write, for example, D(z, IDEAL_{f,Ã}(~x, z)) or D(z, REAL_{Π,A}(~x, z)).

Definition B.1 (Precise Secure Computation) Let f be an m-ary function, Π a protocol computing f, Z a set of subsets of [m], p : IN × IN → IN, and ε : IN → IR. Protocol Π is a Z-secure computation of f with precision p and ε-statistical error if, for all Z ∈ Z and every real-model adversary A that controls the players in Z, there exists an ideal-model adversary Ã, called the simulator, that controls the players in Z such that, for all n ∈ N, all ~x = (x_1, . . . , x_m) ∈ ({0, 1}^n)^m, and all z ∈ {0, 1}^*, the following conditions hold:

1. For every distinguisher D,

|Pr_U[D(z, REAL_{Π,A}(~x, z)) = 1] − Pr_U[D(z, IDEAL_{f,Ã}(~x, z)) = 1]| ≤ ε(n).

2. Pr_U[STEPS(Ã, v) ≤ p(n, STEPS(A, Ã(v)))] = 1, where v = view_{f,Ã}(~x, z).¹²

Π is a Z-secure computation of f with precision p and (T, ε)-computational error if it satisfies the two conditions above with the adversary A and the distinguisher D restricted to being computable by a TM with running time bounded by T(·).

Protocol Π is a Z-secure computation of f with statistical precision p if there exists some negligible function ε such that Π is a Z-secure computation of f with precision p and ε-statistical error. Finally, protocol Π is a Z-secure computation of f with computational precision p if for every polynomial T, there exists some negligible function ε such that Π is a Z-secure computation of f with precision p and (T, ε)-computational error.

The traditional notion of secure computation is obtained by replacing condition 2 with the requirement that the worst-case running time of à is polynomially related to the worst-case running time of A.

The following theorems were provided by Micali and Pass [37, 38], using the results of Ben-Or, Goldwasser and Wigderson [41] and Goldreich, Micali and Wigderson [12]. Let Z^m_t denote all the subsets of [m] containing t or less elements. An m-ary functionality f is said to be well-formed if it essentially ignores arguments that are not in ({0, 1}^n)^m for some n. More precisely, if there exist j, j′ such that |x_j| ≠ |x_{j′}|, then f_i(~x) = λ for all i ∈ [m]. (See [45, p. 617] for motivation and more details.)

Theorem B.2 For every well-formed m-ary functionality f, there exists a precision function p such that p(n, t) = O(t) and a protocol Π that Z^m_{⌈m/3⌉−1}-securely computes f with precision p and 0-statistical error.

This result can also be extended to more general adversary structures by relying on the results of [40]. We can also consider secure computation of specific 2-party functionalities.

¹² Note that the three occurrences of Pr_U in the first two clauses represent slightly different probability measures, although this is hidden by the fact that we have omitted the superscripts. The first occurrence of Pr_U should be Pr^{m−|Z|+3}_U, since we are taking the probability over the m + 2 − |Z| random inputs to REAL_{Π,A} and the additional random input to D; similarly, the second and third occurrences of Pr_U should be Pr^3_U.



Theorem B.3 Suppose that there exists an enhanced trapdoor permutation.¹³ For every well-formed 2-ary functionality f where only one party gets an output (i.e., f_1(·) = λ), there exists a precision function p such that p(n, t) = O(t) and a protocol Π that Z^2_1-securely computes f with computational precision p.

Micali and Pass [37] also obtain unconditional results (using statistical security) for the special case of zero-knowledge proofs. We refer the reader to [37, 57] for more details.

B.1 Weak Precise Secure Computation

Universal implementation is not equivalent to precise secure computation, but to a (quite natural) weakening of it. Weak precise secure computation, which we are about to define, differs from precise secure computation in the following respects:

• Just as in the traditional definition of zero knowledge [13], precise zero knowledge requires that for every adversary, there exists a simulator that, on all inputs, produces an interaction that no distinguisher can distinguish from the real interaction. This simulator must work for all inputs and all distinguishers. In analogy with the notion of "weak zero knowledge" [39], we here switch the order of the quantifiers and require instead that for every input distribution Pr over ~x ∈ ({0, 1}^n)^m and z ∈ {0, 1}^*, and every distinguisher D, there exists a (precise) simulator that "tricks" D; in essence, we allow there to be a different simulator for each distinguisher. As argued by Dwork et al. [39], this order of quantification is arguably reasonable when dealing with concrete security. To show that a computation is secure in every concrete setting, it suffices to show that, in every concrete setting (where a "concrete setting" is characterized by an input distribution and the distinguisher used by the adversary), there is a simulator.

• We further weaken this condition by requiring only

that the probability of the distinguisher outputting 1 on a real view be (essentially) no higher than the probability of outputting 1 on a simulated view. In contrast, the traditional definition requires these probabilities to be (essentially) equal. If we think of the distinguisher outputting 1 as meaning that the adversary has learned some important feature, then we are saying that the likelihood of the adversary learning an important feature in the real execution is essentially no higher than that of the adversary learning an important feature in the "ideal" computation. This condition on the distinguisher is in keeping with the standard intuition of the role of the distinguisher.

¹³ See [45] for a definition of enhanced trapdoor permutations; the existence of such permutations is implied by the "standard" hardness-of-factoring assumptions.

• We allow the adversary and the simulator to depend not only on the probability distribution, but also on the particular security parameter n (in contrast, the definition of [39] is uniform). That is why, when considering weak precise secure computation with (T, ε)-computational error, we require that the adversary A and the distinguisher D be computable by circuits of size at most T(n) (with a possibly different circuit for each n), rather than by a Turing machine with running time T(n). Again, this is arguably reasonable in a concrete setting, where the security parameter is known.

• We also allow the computation not to meet the precision bounds with a small probability. The obvious way to do this is to change the requirement in the definition of precise secure computation by replacing 1 by 1 − ε, to get

Pr_U[STEPS(Ã, v) ≤ p(n, STEPS(A, Ã(v)))] ≥ 1 − ε(n),

where n is the input length and v = view_{f,Ã}(~x, z). We change this requirement in two ways. First, rather than just requiring that this precision inequality hold for all ~x and z, we require that the probability of the inequality holding be at least 1 − ε for all distributions Pr over ~x ∈ ({0, 1}^n)^m and z ∈ {0, 1}^*. The second difference is to add an extra argument to the distinguisher, which tells the distinguisher whether the precision requirement is met. In the real computation, we assume that the precision requirement is always met; thus, whenever it is not met, the distinguisher can distinguish the real and ideal computations. We still want the probability that the distinguisher can distinguish the real and ideal computations to be at most ε(n). For example, our definition disallows a scenario where the complexity bound is not met with probability ε(n)/2 and the distinguisher can distinguish the computations (without taking the complexity bound into account) with probability ε(n)/2.

• In keeping with the more abstract approach used in the definition of robust implementation, the definition of weak precise secure computation is parametrized by the abstract complexity measure C, rather than using STEPS. This just gives us a more general definition; we can always instantiate C to measure running time.

Definition B.4 (Weak Precise Secure Computation) Let f, Π, Z, p, and ε be as in the definition of precise secure computation, and let ~C be a complexity function. Protocol Π is a weak Z-secure computation of f with ~C-precision p and ε-statistical error if, for all n ∈ N, all Z ∈ Z, all real-execution adversaries A that control the players in Z, all distinguishers D, and all probability distributions Pr over ({0, 1}^n)^m × {0, 1}^*, there exists an ideal-execution adversary à that controls the players in Z such that

Pr^+({(~x, z) : D(z, REAL_{Π,A}(~x, z), 1) = 1}) − Pr^+({(~x, z) : D(z, IDEAL_{f,Ã}(~x, z), p) = 1}) ≤ ε(n),

where p = precise_{Z,A,Ã}(n, view_{f,Ã}(~x, z)), and precise_{Z,A,Ã}(n, v) = 1 if and only if C_Z(Ã, v) ≤ p(n, C_Z(A, Ã(v))).¹⁴ Π is a weak Z-secure computation of f with ~C-precision p and (T, ε)-computational error if it satisfies the condition above with the adversary A and the distinguisher D restricted to being computable by a randomized circuit of size T(n). Protocol Π is a weak Z-secure computation of f with statistical ~C-precision p if there exists some negligible function ε such that Π is a weak Z-secure computation of f with ~C-precision p and ε-statistical error. Finally, protocol Π is a weak Z-secure computation of f with computational ~C-precision p if for every polynomial T(·), there exists some negligible function ε such that Π is a weak Z-secure computation of f with ~C-precision p and (T, ε)-computational error.

Our terminology suggests that weak precise secure computation is weaker than precise secure computation. This is almost immediate from the definitions if C_Z(M, v) = STEPS(M, v) for all Z ∈ Z. A more interesting setting considers a complexity measure that can depend on STEPS(M, v) and the size of the description of M. It directly follows by inspection that Theorems B.2 and B.3 also hold if, for example, C_Z(M, v) = STEPS(M, v) + O(|M|) for all Z ∈ Z, since the simulators in those results incur only a constant additive overhead in size. (This is not a coincidence. As argued in [37, 57], the definition of precise simulation guarantees the existence of a "universal" simulator S, with "essentially" the same precision, that works for every adversary A, provided that S also gets the code of A; namely, given a real-execution adversary A, the ideal-execution adversary is à = S(A).¹⁵ Since |S| = O(1), it follows that |Ã| = |S| + |A| = O(|A|).) That is, we have the following variants of Theorems B.2 and B.3:

Theorem B.5 For every well-formed m-ary functionality f, if C_Z(M, v) = STEPS(M, v) + O(|M|) for all sets Z, then there exists a precision function p such that p(n, t) = O(t) and a protocol Π that weak Z^m_{⌈m/3⌉−1}-securely computes f with ~C-precision p and 0-statistical error.

Theorem B.6 Suppose that there exists an enhanced trapdoor permutation, and C_Z(M, v) = STEPS(M, v) + O(|M|) for all sets Z. For every well-formed 2-ary functionality f where only one party gets an output (i.e., f_1(·) = λ), there exists a precision function p such that p(n, t) = O(t) and a protocol Π that weak Z^2_1-securely computes f with computational ~C-precision p.

¹⁴ Recall that Pr^+ denotes the product of Pr and Pr_U (here, the first Pr^+ is actually Pr^{+(m+3−|Z|)}, while the second is Pr^{+3}).

¹⁵ This follows by considering the simulator S for the universal TM (which receives the code to be executed as auxiliary input).

It is easy to see that the theorems above continue to hold when considering "coarse" versions of the above complexity functions, where, say, n² computational steps (or size) correspond to one unit of complexity (in canonical machine games with input length n).

C Output-invariant complexity functions

Recall that for one direction of our main theorem we require that certain operations, like moving output from one tape to another, do not incur any additional complexity. We now make this precise.

Recall that in the definition of a secure computation, the ideal-execution adversary, M_Z, is an algorithm that controls the players in Z and finally provides an output for each of the players it controls and additionally produces an output of its own (which is supposed to be the reconstructed view of the real-execution adversary). Roughly speaking, a complexity function is output-invariant if M_Z can "shuffle" content between its tapes at no cost.

Definition C.1 A complexity function C is output-invariant if, for every set Z of players, there exists some canonical player i_Z ∈ Z such that the following three conditions hold:

1. (Outputting view) For every machine M_Z controlling the players in Z, there exists some machine M′_Z with the same complexity as M_Z such that the output of M′_Z(v) is identical to M_Z(v) except that player i_Z outputs y; v, where y is the output of i_Z in the execution of M_Z(v) (i.e., M′_Z is identical to M_Z with the only exception being that player i_Z also outputs the view of M′_Z).

2. (Moving content to a different tape) For every machine M′_Z controlling the players in Z, there exists some machine M_Z with the same complexity as M′_Z such that the output of M_Z(v) is identical to M′_Z(v) except if player i_Z outputs y; v′ for some v′ ∈ {0, 1}^* in the execution of M′_Z(v). In that case, the only difference is that player i_Z outputs only y and M_Z(v) outputs v′.

3. (Duplicating content to another output tape) For every machine M′_Z controlling the players in Z, there exists some machine M_Z with the same complexity as M′_Z such that the output of M_Z(v) is identical to M′_Z(v) except if player i_Z outputs y; v′ for some v′ ∈ {0, 1}^* in the execution of M′_Z(v). In that case, the only difference is that M_Z(v) outputs v′.

Note that the only difference between conditions 2 and 3 is that in condition 2, player i_Z outputs only y, whereas in condition 3 it still outputs its original output y; v′.

We stress that we need to consider output-invariant complexity functions only to show that universal implementation implies precise secure computation.

References

[1] H. A. Simon. A behavioral model of rational choice. Quarterly Journal of Economics, 49:99–118, 1955.

[2] A. Neyman. Bounded complexity justifies cooperation in finitely repeated prisoner's dilemma. Economics Letters, 19:227–229, 1985.

[3] C. H. Papadimitriou and M. Yannakakis. On complexity as bounded rationality. In Proc. 26th ACM Symposium on Theory of Computing, pages 726–733, 1994.

[4] N. Megiddo and A. Wigderson. On play by means of computing machines. In Theoretical Aspects of Reasoning about Knowledge: Proc. 1986 Conference, pages 259–274, 1986.

[5] Y. Dodis, S. Halevi, and T. Rabin. A cryptographic solution to a game theoretic problem. In CRYPTO 2000: 20th International Cryptology Conference, pages 112–130. Springer-Verlag, 2000.

[6] A. Urbano and J. E. Vila. Computationally restricted unmediated talk under incomplete information. Economic Theory, 23(2):283–320, 2004.

[7] A. Rubinstein. Finite automata play the repeated prisoner's dilemma. Journal of Economic Theory, 39:83–96, 1986.

[8] E. Kalai. Bounded rationality and strategic complexity in repeated games. In Game Theory and Applications, pages 131–157. Academic Press, San Diego, 1990.

[9] E. Ben-Sasson, A. Kalai, and E. Kalai. An approach to bounded rationality. In Advances in Neural Information Processing Systems 19 (Proc. of NIPS 2006), pages 145–152, 2007.

[10] R. Myerson. Incentive-compatibility and the bargaining problem. Econometrica, 47:61–73, 1979.

[11] F. Forges. An approach to communication equilibria. Econometrica, 54(6):1375–1385, 1986.

[12] O. Goldreich, S. Micali, and A. Wigderson. How to play any mental game. In Proc. 19th ACM Symposium on Theory of Computing, pages 218–229, 1987.

[13] S. Goldwasser, S. Micali, and C. Rackoff. The knowledge complexity of interactive proof systems. SIAM Journal on Computing, 18(1):186–208, 1989.

[14] R. Cleve. Limits on the security of coin flips when half the processors are faulty. In Proc. 18th ACM Symposium on Theory of Computing, pages 364–369, 1986.

[15] H. W. Kuhn. Extensive games and the problem of information. In H. W. Kuhn and A. W. Tucker, editors, Contributions to the Theory of Games II, pages 193–216. Princeton University Press, Princeton, N.J., 1953.

[16] G. Walker and D. Walker. The Official Rock Paper Scissors Strategy Guide. Simon & Schuster, Inc., New York, 2004.

[17] K. Binmore. Modeling rational players I. Economics and Philosophy, 3:179–214, 1987.

[18] A. Tarski. A Decision Method for Elementary Algebra and Geometry. Univ. of California Press, 2nd edition, 1951.

[19] R. J. Lipton and E. Markakis. Nash equilibria via polynomial equations. In Proc. LATIN 2004: Theoretical Informatics, pages 413–422, 2004.

[20] C. H. Papadimitriou and T. Roughgarden. Computing equilibria in multi-player games. In Proc. 16th ACM-SIAM Symposium on Discrete Algorithms, pages 82–91, 2005.

[21] R. Axelrod. The Evolution of Cooperation. Basic Books, New York, 1984.

[22] F. Forges. Universal mechanisms. Econometrica, 58(6):1341–1364, 1990.

[23] I. Abraham, D. Dolev, R. Gonen, and J. Y. Halpern. Distributed computing meets game theory: Robust mechanisms for rational secret sharing and multiparty computation. In Proc. 25th ACM Symposium on Principles of Distributed Computing, pages 53–62, 2006.

[24] D. Gordon and J. Katz. Rational secret sharing, revisited. In SCN (Security in Communication Networks) 2006, pages 229–241, 2006.

[25] J. Y. Halpern and V. Teague. Rational secret sharing and multiparty computation: extended abstract. In Proc. 36th ACM Symposium on Theory of Computing, pages 623–632, 2004.

[26] S. Izmalkov, M. Lepinski, and S. Micali. Perfect implementation of normal-form mechanisms. Available at http://people.csail.mit.edu/silvio/. This is a revised version of "Rational secure computation and ideal mechanism design", Proc. 46th IEEE Symposium on Foundations of Computer Science, 2005, pp. 585–595. 2008.

[27] G. Kol and M. Naor. Cryptography and game theory: Designing protocols for exchanging information. In Theory of Cryptography Conference, pages 320–339, 2008.

[28] Y. Shoham and M. Tennenholtz. Non-cooperative comput-ing: Boolean functions with correctness and exclusivity.Theoretical Computer Science, 343(1–2):97–113, 2005.

[29] D. M. Kreps and R. B. Wilson. Sequential equilibria.Econometrica, 50:863–894, 1982.

[30] R. Selten. Reexamination of the perfectness concept forequilibrium points in extensive games. International Jour-nal of Game Theory, 4:25–55, 1975.

[31] J. von Neumann and O. Morgenstern. Theory of Gamesand Economic Behavior. Princeton University Press,Princeton, N.J., 2nd edition, 1947.

[32] M. Lepinski, S. Micali, C. Peikert, and A. Shelat.Completely fair SFE and coalition-safe cheap talk. InProc. 23rd ACM Symposium on Principles of DistributedComputing, pages 1–10, 2004.

[33] I. Abraham, D. Dolev, and J. Y. Halpern. Lower bounds onimplementing robust and resilient mediators. In Fifth The-ory of Cryptography Conference, pages 302–319, 2008.

[34] M. Lepinski, S. Micali, and A. Shelat. Collusion-free pro-tocols. In Proc. 37th ACM Symposium on Theory of Com-puting, pages 543–552, 2005.

[35] D. Gerardi. Unmediated communication in games withcomplete and incomplete information. Journal of Eco-nomic Theory, 114:104–131, 2004.

[36] E. Ben-Porath. Cheap talk in games with incomplete infor-mation. Journal of Economic Theory, 108(1):45–71, 2003.

[37] S. Micali and R. Pass. Local zero knowledge. In Proc. 38thACM Symposium on Theory of Computing, pages 306–315,2006.

[38] S. Micali and R. Pass. Precise cryptography. Available at

20

Page 21: Game Theory with Costly Computation: Formulation and ...rafael/papers/gtcost-ics.pdfGame Theory with Costly Computation: Formulation and Application to Protocol Security Joseph Y

http://www.cs.cornell.edu/˜rafael, 2007.[39] C. Dwork, M. Naor, O. Reingold, and L. J. Stockmeyer.

Magic functions. Journal of the ACM, 50(6):852–921,2003.

[40] M. Hirt and U. Maurer. Player simulation and general ad-versary structures in perfect multiparty computation. Jour-nal of Cryptology, 13(1):31–60, 2000.

[41] M. Ben-Or, S. Goldwasser, and A. Wigderson. Complete-ness theorems for non-cryptographic fault-tolerant dis-tributed computation. In Proc. 20th ACM Symposium onTheory of Computing, pages 1–10, 1988.

[42] O. Goldreich, S. Micali, and A. Wigderson. Proofs thatyield nothing but their validity, or all languages in NP havezero-knowledge proof systems. Journal of the ACM, 38(1):691–729, 1991.

[43] D. Malkhi, N. Nisan, B. Pinkas, and Y. Sella. Fairplay—secure two-party computation system. In Proc. 13thUSENIX Security Symposium, pages 287–302, 2004.

[44] Y. Aumann and Y. Lindell. Security against covert adver-saries: Efficient protocols for realistic adversaries. In Proc.of Theory of Cryptography Conference, 2006.

[45] O. Goldreich. Foundations of Cryptography, Vol. 2. Cam-bridge University Press, 2004.

[46] S. Goldwasser and Y. Lindell. Secure computation withoutagreement. In DISC ’02: Proc. 16th International Con-ference on Distributed Computing, pages 17–32. Springer-Verlag, 2002.

[47] D. Boneh and M. Naor. Timed commitments. InProc. CRYPTO 2000, Lecture Notes in Computer Science,Volume 1880, pages 236–254. Springer-Verlag, 2000.

[48] M. Bellare and P. Rogaway. Random oracles are practical:a paradigm for designing efficient protocols. In ACM Con-ference on Computer and Communications Security, pages62–73, 1993.

[49] S. Micali and A. Shelat. Purely rational secret sharing. InProc. TCC 2009, 2009.

[50] T. Cohen, J. Kilian, and E. Petrank. Responsive roundcomplexity and concurrent zero-knowledge. In Advancesin Cryptology: ASIACRYPT 2001, pages 422–441, 2001.

[51] J. Y. Halpern. On ambiguities in the interpretation of gametrees. Games and Economic Behavior, 20:66–96, 1997.

[52] J. Y. Halpern, Y. Moses, and M. R. Tuttle. A knowledge-based analysis of zero knowledge. In Proc. 20th ACM Sym-posium on Theory of Computing, pages 132–147, 1988.

[53] J. Y. Halpern, Y. Moses, and M. Y. Vardi. Algorithmicknowledge. In Theoretical Aspects of Reasoning aboutKnowledge: Proc. Fifth Conference, pages 255–266. 1994.

[54] Y. Moses. Resource-bounded knowledge. In Proc. Sec-ond Conference on Theoretical Aspects of Reasoning aboutKnowledge, pages 261–276. 1988.

[55] J. Y. Halpern, R. Pass, and V. Raman. An epistemic charac-terization of zero knowledge. In Theoretical Aspects of Ra-tionality and Knowledge: Proc. Twelfth Conference (TARK2009), pages 156–165, 2009.

[56] R. J. Aumann. Acceptable points in general cooperativen-person games. In A. W. Tucker and R. D. Luce, editors,Contributions to the Theory of Games IV, Annals of Math-ematical Studies 40, pages 287–324. Princeton UniversityPress, Princeton, N. J., 1959.

[57] R. Pass. A Precise Computational Approach to

Knowledge. PhD thesis, MIT, 2006. Available athttp://www.cs.cornell.edu/˜rafael.

21