# Introduction to Game Theory: Notes


• Nikolaos Lionis

University of Athens

(Revised: October 2014)

Introduction to Game Theory

• What is Game Theory?

- Decision Theory: a unique decision maker

- Game Theory (interactive decision theory): many decision makers

- Strategic interdependence

Note: For the interaction to become a strategic game, however, we need something more, namely the participants’ mutual awareness of the cross-effects of their actions.

• Methodology of Game Theory

Strategic Problem → Game Model → Outcomes → Equilibrium Concepts → Equilibrium Outcomes → Predictions

• FROM A STRATEGIC PROBLEM TO A GAME MODEL

Constructing a Game

• Pricing Problem

- Two firms are about to price their products. Each has to choose between a high and a low price. The firm with the lowest price serves the whole market.

- Their profits are:

  - If both set a high price, each earns a profit of 2.

  - If both set a low price, each earns a profit of 1.

  - If they differentiate their prices, i.e. one firm chooses a high price and the other a low price, then the firm with the low price captures the whole market demand and earns a profit of 3. The other firm earns 0.

- What should the firms do?
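The profit structure above can be encoded directly; a minimal Python sketch (the name `payoffs` and the H/L labels are illustrative, not from the notes):

```python
# Payoffs of the pricing problem: keys are (Firm 1's price, Firm 2's price),
# values are (Firm 1's profit, Firm 2's profit). H = high price, L = low price.
payoffs = {
    ("H", "H"): (2, 2),  # both set a high price: each earns 2
    ("L", "L"): (1, 1),  # both set a low price: each earns 1
    ("L", "H"): (3, 0),  # Firm 1 undercuts and takes the whole market
    ("H", "L"): (0, 3),  # Firm 2 undercuts and takes the whole market
}

for (a1, a2), (p1, p2) in sorted(payoffs.items()):
    print(f"Firm 1 plays {a1}, Firm 2 plays {a2} -> profits ({p1}, {p2})")
```

Writing the game down this way makes the later steps (enumerating outcomes, checking deviations) mechanical table lookups.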

• Basic Elements of Games

1. The players

2. The actions

3. The order of play

4. The information

5. The outcomes

6. The payoffs

• Who is involved?

The players

- A finite set I = {1, 2, …, N}

- … it can also be an infinite set

- Pricing example: I = {Firm 1, Firm 2}

• What can the players do?

Actions

- For each i ∈ I, a nonempty set A_i = {a_i^1, a_i^2, …, a_i^{k_i}}

Strategies

- A strategy for player i is a complete plan of actions, i.e. a complete plan of how a player will play the game.

- Importantly, a strategy determines the action a player will take in any situation he could face during the game.

- For each i ∈ I, a nonempty set S_i = {s_i^1, s_i^2, …, s_i^{m_i}}

- Pricing example

  - A_Firm 1 = {H, L} and A_Firm 2 = {H, L}
  - S_Firm 1 = {H, L} and S_Firm 2 = {H, L}

• Who moves when?

Order of Play

- Simultaneous moves (static games): all players move at the same time …

  - … or without knowing their opponents’ moves

- Sequential moves (dynamic games): one player moves first

  - The second player observes and then moves

• What do the players know when they move?

Information

- Complete information: all players’ payoff functions are common knowledge

- Incomplete information: at least one player is uncertain about another player’s payoff function

• What are the outcomes of the game?

Outcomes

- For each possible combination of strategies chosen by the players, what is the outcome of the game?

- An outcome s is a list (vector) of strategies, one chosen by each player i, i.e. s = (s_1, s_2, …, s_N)

- Pricing example: the possible outcomes are (H, H), (H, L), (L, H), (L, L)
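Since an outcome is one strategy per player, the outcome space is the Cartesian product of the strategy sets. A short Python sketch for the pricing example (variable names are my own):

```python
from itertools import product

# Strategy sets for the two firms (H = high price, L = low price)
S = {"Firm 1": ["H", "L"], "Firm 2": ["H", "L"]}

# The outcome space is the Cartesian product of the strategy sets
outcomes = list(product(S["Firm 1"], S["Firm 2"]))
print(outcomes)  # [('H', 'H'), ('H', 'L'), ('L', 'H'), ('L', 'L')]
```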

• What do the players get?

Payoffs

- The payoffs capture everything in the outcomes of the game that the players care about

- What are the players’ preferences over the possible outcomes?

- A payoff function π_i for player i assigns a real number π_i(s) to every outcome of the game. Formally, π_i: S → ℝ

• What do the players get?

Pricing example

- Preferences over the possible outcomes

  - Firm 1: (L, H) ≻ (H, H) ≻ (L, L) ≻ (H, L)
  - Firm 2: (H, L) ≻ (H, H) ≻ (L, L) ≻ (L, H)

- Payoff functions

  - Firm 1: π_1(s) such that
    π_1((L, H)) = 3 > π_1((H, H)) = 2 > π_1((L, L)) = 1 > π_1((H, L)) = 0
  - Firm 2: π_2(s) such that
    π_2((H, L)) = 3 > π_2((H, H)) = 2 > π_2((L, L)) = 1 > π_2((L, H)) = 0

• Game Models

- Order of play: static or dynamic

- Players’ information: complete or incomplete

• Game Models

1. Static games with complete information
2. Dynamic games with complete information
3. Static games with incomplete information
4. Dynamic games with incomplete information

• Axiomatic Assumptions

- Rationality: in any given situation a decision-maker always chooses the action which is best according to his preferences, i.e. players play rationally

- Common knowledge of rationality: rational play is common knowledge among all players in the game, i.e. each player knows that the other players play rationally, knows that the other players know he is rational, etc. (ad infinitum)

• Rules of the Game

Basic elements
- The players
- The actions
- The order of play
- The information
- The outcomes
- The payoffs

Assumptions
- Rational play
- Common knowledge

• NORMAL FORM AND EXTENSIVE FORM

Representations of Games


• Pricing Problem: Normal Form Representation

The normal form collects players, actions/strategies, outcomes and payoffs in a single matrix (Firm 1’s payoff listed first in each cell):

|                        | Firm 2: High Price | Firm 2: Low Price |
|------------------------|--------------------|-------------------|
| **Firm 1: High Price** | 2 , 2              | 0 , 3             |
| **Firm 1: Low Price**  | 3 , 0              | 1 , 1             |
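Reading the matrix row by row and column by column gives each firm’s best response to the rival’s price. A small Python sketch (function names are illustrative):

```python
# Payoff bimatrix of the pricing game: (Firm 1's action, Firm 2's action)
payoffs = {("H", "H"): (2, 2), ("H", "L"): (0, 3),
           ("L", "H"): (3, 0), ("L", "L"): (1, 1)}
actions = ["H", "L"]

def best_response_1(a2):
    """Firm 1's profit-maximizing action given Firm 2's action a2."""
    return max(actions, key=lambda a1: payoffs[(a1, a2)][0])

def best_response_2(a1):
    """Firm 2's profit-maximizing action given Firm 1's action a1."""
    return max(actions, key=lambda a2: payoffs[(a1, a2)][1])

for a in actions:
    print(f"BR_1({a}) = {best_response_1(a)}, BR_2({a}) = {best_response_2(a)}")
# Whatever the rival does, the best response is L: the low price is dominant.
```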


• Pricing Problem: Extensive Form

The extensive form is a game tree recording players, actions/strategies, order of play, information, outcomes and payoffs (Firm 1’s payoff listed first at each terminal node):

                 Firm 1
                /      \
               H        L
            Firm 2    Firm 2
            /    \    /    \
           H      L  H      L
        (2,2) (0,3) (3,0) (1,1)

• Pricing Problem as a Static Game: Extensive Form

The same tree represents the static game when Firm 2’s two decision nodes are joined into a single information set (drawn as a dashed line between them): Firm 2 must choose without knowing which node it is at, i.e. without observing Firm 1’s move.

• Pricing Problem as a Dynamic Game

In the dynamic version each of Firm 2’s decision nodes is its own information set: Firm 2 observes Firm 1’s price and then chooses its own.

• Actions and Strategies in Dynamic Games

Actions

- For Firm 1: A_1 = {H, L}
- For Firm 2: A_2 = {H, L}

Strategies

- A strategy for each firm is a complete plan of actions
- For Firm 1: S_1 = {H, L}
- For Firm 2: S_2 = {(H, H), (H, L), (L, H), (L, L)}, where a pair (x, y) means “play x if Firm 1 chose H, play y if Firm 1 chose L”
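Since Firm 2 moves second and observes Firm 1, each of its strategies is a rule mapping Firm 1’s action to a response, which is why |S_2| = |A_2|^|A_1| = 4. A Python sketch enumerating them (variable names are my own):

```python
from itertools import product

# Action sets: Firm 1 moves first, Firm 2 observes and then moves.
A1, A2 = ["H", "L"], ["H", "L"]

# A Firm 2 strategy assigns one response to every Firm 1 action,
# so there are |A2| ** |A1| = 4 complete plans.
strategies_2 = [dict(zip(A1, plan)) for plan in product(A2, repeat=len(A1))]
for s in strategies_2:
    print(s)  # e.g. {'H': 'H', 'L': 'L'} means "match Firm 1's price"
```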

• FROM OUTCOMES TO EQUILIBRIUM OUTCOMES

Solving a Game

• Methodology of Game Theory

Strategic Problem → Game Model → Outcomes → Equilibrium Concepts → Equilibrium Outcomes → Predictions

• Equilibrium Concepts

An equilibrium is a strategy combination consisting of each player’s best strategy.

- Equilibrium in Dominant Strategies
- Nash Equilibrium
- Subgame Perfect Nash Equilibrium
- Bayesian Nash Equilibrium
- Perfect Bayesian Nash Equilibrium

Note: The main concept is Nash Equilibrium. All the rest are just equilibrium refinements of Nash Equilibrium.

• Models and Equilibrium

| Model                                        | Equilibrium concept               |
|----------------------------------------------|-----------------------------------|
| 1. Static games with complete information    | Nash Equilibrium                  |
| 2. Dynamic games with complete information   | Subgame Perfect Nash Equilibrium  |
| 3. Static games with incomplete information  | Bayesian Nash Equilibrium         |
| 4. Dynamic games with incomplete information | Perfect Bayesian Nash Equilibrium |

• STATIC GAMES WITH COMPLETE INFORMATION AND NASH EQUILIBRIUM

Solving a Static Game

• Nash Equilibrium

- An outcome s = (s_1, s_2, …, s_N) is said to be a Nash Equilibrium if no player would find it beneficial to deviate, provided that all other players are playing their equilibrium strategies.
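This definition can be checked mechanically by testing every unilateral deviation in the pricing game; a minimal Python sketch (function and variable names are my own):

```python
from itertools import product

# Payoff bimatrix of the pricing game: (Firm 1's action, Firm 2's action)
payoffs = {("H", "H"): (2, 2), ("H", "L"): (0, 3),
           ("L", "H"): (3, 0), ("L", "L"): (1, 1)}
actions = ["H", "L"]

def is_nash(s):
    """True if no firm gains by deviating unilaterally from profile s."""
    a1, a2 = s
    no_dev_1 = all(payoffs[(d, a2)][0] <= payoffs[s][0] for d in actions)
    no_dev_2 = all(payoffs[(a1, d)][1] <= payoffs[s][1] for d in actions)
    return no_dev_1 and no_dev_2

nash = [s for s in product(actions, repeat=2) if is_nash(s)]
print(nash)  # [('L', 'L')] is the unique Nash equilibrium
```

Both firms pricing low survives every deviation test, even though both pricing high would make each firm better off.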